From owner-xfs@oss.sgi.com Sat Mar 1 03:20:56 2008
From: Thomas Müller <thomas@mathtm.de>
Date: Sat, 01 Mar 2008 12:21:22 +0100
To: xfs@oss.sgi.com
Cc: linux-kernel@vger.kernel.org
Subject: Kernel oops / XFS filesystem corruption
Message-ID: <47C93C32.40006@mathtm.de>

Hello :)

My system just crashed because of a power fluctuation and the root
filesystem was damaged. The system booted up just fine, but when samba
tried to start up the kernel oops'd.

xfs_repair was apparently able to repair the damage, though I seem to
have lost some files.

I do realize that a lot of awful things can happen if you just cut the
power, but the kernel shouldn't oops on a mounted file system, right?

Please CC me, as I'm not subscribed to the lists.
Regards
Thomas

$ rpm -q xfsprogs
xfsprogs-2.9.4-4.fc8
$ uname -a
Linux linux.local.loc 2.6.23.15-137.fc8 #1 SMP Sun Feb 10 17:48:34 EST 2008 i686 i686 i386 GNU/Linux

[attachment: xfs_check]
block 0/19018 expected type unknown got free2
agi unlinked bucket 6 is 103430 in ag 3 (inode=12686342)
agi unlinked bucket 14 is 91278 in ag 3 (inode=12674190)
agi unlinked bucket 23 is 106135 in ag 3 (inode=12689047)
agi unlinked bucket 31 is 53279 in ag 3 (inode=12636191)
agi unlinked bucket 35 is 106147 in ag 3 (inode=12689059)
agi unlinked bucket 36 is 60836 in ag 3 (inode=12643748)
agi unlinked bucket 39 is 60839 in ag 3 (inode=12643751)
agi unlinked bucket 41 is 378537 in ag 3 (inode=12961449)
agi unlinked bucket 50 is 91250 in ag 3 (inode=12674162)
agi unlinked bucket 20 is 38996 in ag 4 (inode=16816212)
agi unlinked bucket 57 is 95353 in ag 4 (inode=16872569)
agi unlinked bucket 4 is 199940 in ag 8 (inode=33754372)
agi unlinked bucket 8 is 56392 in ag 8 (inode=33610824)
agi unlinked bucket 21 is 177621 in ag 8 (inode=33732053)
agi unlinked bucket 22 is 56406 in ag 8 (inode=33610838)
agi unlinked bucket 23 is 56407 in ag 8 (inode=33610839)
agi unlinked bucket 27 is 54747 in ag 8 (inode=33609179)
agi unlinked bucket 32 is 67232 in ag 8 (inode=33621664)
agi unlinked bucket 37 is 54757 in ag 8 (inode=33609189)
agi unlinked bucket 39 is 67239 in ag 8 (inode=33621671)
agi unlinked bucket 40 is 67240 in ag 8 (inode=33621672)
agi unlinked bucket 47 is 56367 in ag 8 (inode=33610799)
agi unlinked bucket 0 is 34944 in ag 10 (inode=41977984)
agi unlinked bucket 20 is 42516 in ag 11 (inode=46179860)
agi unlinked bucket 15 is 463 in ag 13 (inode=54526415)
agi unlinked bucket 62 is 154430 in ag 13 (inode=54680382)
block 0/21136 type unknown not expected
allocated inode 12689047 has 0 link count
allocated inode 12689059 has 0 link count
allocated inode 12674162 has 0 link count
allocated inode 12674190 has 0 link count
allocated inode 12636191 has 0 link count
allocated inode 12961449 has 0 link count
allocated inode 12643748 has 0 link count
allocated inode 12643751 has 0 link count
allocated inode 12686342 has 0 link count
allocated inode 16816212 has 0 link count
allocated inode 16872569 has 0 link count
allocated inode 33754372 has 0 link count
allocated inode 33732053 has 0 link count
allocated inode 33621664 has 0 link count
allocated inode 33621671 has 0 link count
allocated inode 33621672 has 0 link count
allocated inode 33609179 has 0 link count
allocated inode 33609189 has 0 link count
allocated inode 33610799 has 0 link count
allocated inode 33610824 has 0 link count
allocated inode 33610838 has 0 link count
allocated inode 33610839 has 0 link count
allocated inode 41977984 has 0 link count
allocated inode 46179860 has 0 link count
allocated inode 54680382 has 0 link count
allocated inode 54526415 has 0 link count
sb_ifree 3257, counted 3259
sb_fdblocks 7248513, counted 7248904

[attachment: xfs_oops]
Mar 1 10:32:03 linux kernel: BUG: unable to handle kernel NULL pointer dereference at virtual address 00000002
Mar 1 10:32:03 linux kernel: printing eip: f8a96141 *pde = 38ccb067
Mar 1 10:32:03 linux kernel: Oops: 0000 [#1] SMP
Mar 1 10:32:03 linux kernel: Modules linked in: asb100 hwmon_vid hwmon tun sch_sfq sch_htb pppoe pppox ppp_synctty ppp_async crc_ccitt ppp_generic slhc bridge xt_NOTRACK iptable_raw ipt_MASQUERADE iptable_nat nf_nat ipt_REJECT xt_mac ipt_LOG nf_conntrack_ipv4 xt_state nf_conntrack nfnetlink iptable_filter xt_CLASSIFY xt_length ipt_owner xt_TCPMSS xt_comment xt_tcpudp iptable_mangle ip_tables x_tables ext2 mbcache dm_mirror dm_mod 8139too r8169 mii i2c_i801 iTCO_wdt iTCO_vendor_support i2c_core sg sr_mod cdrom ata_generic ata_piix libata
sd_mod scsi_mod xfs ehci_hcd
Mar 1 10:32:03 linux kernel: CPU: 0
Mar 1 10:32:03 linux kernel: EIP: 0060:[] Not tainted VLI
Mar 1 10:32:03 linux kernel: EFLAGS: 00010292 (2.6.23.15-137.fc8 #1)
Mar 1 10:32:03 linux kernel: EIP is at xfs_attr_shortform_getvalue+0x15/0xdb [xfs]
Mar 1 10:32:03 linux kernel: eax: 00000000 ebx: f268cddc ecx: f8ae4d9d edx: 08d26645
Mar 1 10:32:03 linux kernel: esi: f04d1600 edi: 00000004 ebp: f8ae4d91 esp: f268cdbc
Mar 1 10:32:03 linux kernel: ds: 007b es: 007b fs: 00d8 gs: 0033 ss: 0068
Mar 1 10:32:03 linux kernel: Process smbd (pid: 2036, ti=f268c000 task=f7207840 task.ti=f268c000)
Mar 1 10:32:03 linux kernel: Stack: 00000003 f37888d4 00000003 f04d1600 f04d1600 f268ce38 f8ae4d91 f8a93a97
Mar 1 10:32:03 linux kernel:        f8ae4d91 0000000c c1ba6000 00000130 00000402 275b19c4 00000000 00000000
Mar 1 10:32:03 linux kernel:        f04d1600 00000000 00000000 00000000 00000000 00000001 00000000 00000000
Mar 1 10:32:03 linux kernel: Call Trace:
Mar 1 10:32:03 linux kernel: [] xfs_attr_fetch+0x9e/0xee [xfs]
Mar 1 10:32:03 linux kernel: [] xfs_acl_iaccess+0x59/0xc2 [xfs]
Mar 1 10:32:03 linux kernel: [] xfs_iaccess+0x87/0x15c [xfs]
Mar 1 10:32:03 linux kernel: [] xfs_access+0x26/0x3a [xfs]
Mar 1 10:32:03 linux kernel: [] xfs_vn_permission+0x0/0x13 [xfs]
Mar 1 10:32:03 linux kernel: [] xfs_vn_permission+0xf/0x13 [xfs]
Mar 1 10:32:03 linux kernel: [] permission+0x9e/0xdb
Mar 1 10:32:03 linux kernel: [] may_open+0x5c/0x205
Mar 1 10:32:03 linux kernel: [] open_namei+0x27d/0x576
Mar 1 10:32:03 linux kernel: [] do_filp_open+0x2a/0x3e
Mar 1 10:32:03 linux kernel: [] get_unused_fd_flags+0x52/0xc5
Mar 1 10:32:03 linux kernel: [] do_sys_open+0x48/0xca
Mar 1 10:32:03 linux kernel: [] sys_open+0x1c/0x1e
Mar 1 10:32:03 linux kernel: [] syscall_call+0x7/0xb
Mar 1 10:32:03 linux kernel: =======================
Mar 1 10:32:03 linux kernel: Code: 00 00 c6 40 02 00 66 c7 00 00 04 8b 47 2c 5b 5e 5f e9 08 bc 03 00 55 57 56 53 89 c3 83 ec 0c 8b 40 20 8b 40 4c 8b 40 14 8d 78 04 <0f> b6 40 02 c7 44 24 08 00 00 00 00 89 44 24 04 e9 96 00 00 00
Mar 1 10:32:03 linux kernel: EIP: [] xfs_attr_shortform_getvalue+0x15/0xdb [xfs] SS:ESP 0068:f268cdbc

From owner-xfs@oss.sgi.com Sat Mar 1 13:01:42 2008
From: Eric Sandeen <sandeen@sandeen.net>
Date: Sat, 01 Mar 2008 15:02:05 -0600
To: Thomas Müller <thomas@mathtm.de>
Cc: xfs@oss.sgi.com, linux-kernel@vger.kernel.org
Subject: Re: Kernel oops / XFS filesystem corruption
Message-ID: <47C9C44D.8080400@sandeen.net>
In-Reply-To: <47C93C32.40006@mathtm.de>
Thomas Müller wrote:
> Hello :)
>
> My system just crashed because of a power fluctuation and the root
> filesystem was damaged.
> The system booted up just fine, but when samba tried to start up
> the kernel oops'd.
>
> xfs_repair was apparently able to repair the damage, though I seem
> to have lost some files.
>
> I do realize that a lot of awful things can happen if you just cut
> the power, but the kernel shouldn't oops on a mounted file
> system, right?

right. here's the disassembly of that function in your kernel FWIW:

0001012c <xfs_attr_shortform_getvalue>:
   1012c: 55                      push   %ebp
   1012d: 57                      push   %edi
   1012e: 56                      push   %esi
   1012f: 53                      push   %ebx
   10130: 89 c3                   mov    %eax,%ebx
   10132: 83 ec 0c                sub    $0xc,%esp
   10135: 8b 40 20                mov    0x20(%eax),%eax
   10138: 8b 40 4c                mov    0x4c(%eax),%eax
   1013b: 8b 40 14                mov    0x14(%eax),%eax
   1013e: 8d 78 04                lea    0x4(%eax),%edi
   10141: 0f b6 40 02             movzbl 0x2(%eax),%eax   <--- boom.
   10145: c7 44 24 08 00 00 00    movl   $0x0,0x8(%esp)
   1014c: 00
   1014d: 89 44 24 04             mov    %eax,0x4(%esp)
   10151: e9 96 00 00 00          jmp    101ec
   ...

at this point eax is "sf" (0x0) and edi is "sfe" (0x04):

Mar 1 10:32:03 linux kernel: eax: 00000000 ebx: f268cddc ecx: f8ae4d9d edx: 08d26645
Mar 1 10:32:03 linux kernel: esi: f04d1600 edi: 00000004 ebp: f8ae4d91 esp: f268cdbc

first part of the function:

int
xfs_attr_shortform_getvalue(xfs_da_args_t *args)
{
	xfs_attr_shortform_t *sf;
	xfs_attr_sf_entry_t *sfe;
	int i;

	ASSERT(args->dp->i_d.di_aformat == XFS_IFINLINE);
	sf = (xfs_attr_shortform_t *)args->dp->i_afp->if_u1.if_data;
	sfe = &sf->list[0];
	for (i = 0; i < sf->hdr.count;          <--- died here, sf is 0
	     sfe = XFS_ATTR_SF_NEXTENTRY(sfe), i++) {

we blew up on sf->hdr.count because sf is NULL (hdr.count is 0x2 into sf)

maybe the sgi guys can take it from there ;)

Did you also happen to save the xfs_repair output?

-Eric
From owner-xfs@oss.sgi.com Sat Mar 1 16:33:24 2008
From: Thomas Müller <thomas@mathtm.de>
Date: Sun, 02 Mar 2008 01:33:45 +0100
To: Eric Sandeen <sandeen@sandeen.net>
Cc: xfs@oss.sgi.com, linux-kernel@vger.kernel.org
Subject: Re: Kernel oops / XFS filesystem corruption
Message-ID: <47C9F5E9.70703@mathtm.de>
In-Reply-To: <47C9C44D.8080400@sandeen.net>

Eric Sandeen wrote:
> Did you also happen to save the xfs_repair output?

No, but I made a complete copy of the file system before repairing it,
so I can easily recreate it...
:)

Thomas

[attachment: xfs_repair]
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
data fork in ino 128638 claims free block 19018
        - agno = 1
        - agno = 2
b5ac7b90: Badness in key lookup (length) bp=(bno 11701280, len 32768 bytes) key=(bno 11701280, len 8192 bytes)
b5ac7b90: Badness in key lookup (length) bp=(bno 11708896, len 32768 bytes) key=(bno 11708896, len 8192 bytes)
b5ac7b90: Badness in key lookup (length) bp=(bno 11739296, len 32768 bytes) key=(bno 11739296, len 8192 bytes)
b5ac7b90: Badness in key lookup (length) bp=(bno 11751440, len 32768 bytes) key=(bno 11751440, len 8192 bytes)
b5ac7b90: Badness in key lookup (length) bp=(bno 11754176, len 32768 bytes) key=(bno 11754176, len 8192 bytes)
b5ac7b90: Badness in key lookup (length) bp=(bno 12026592, len 32768 bytes) key=(bno 12026592, len 8192 bytes)
        - agno = 3
b50c6b90: Badness in key lookup (length) bp=(bno 15569728, len 32768 bytes) key=(bno 15569728, len 8192 bytes)
b50c6b90: Badness in key lookup (length) bp=(bno 15626080, len 32768 bytes) key=(bno 15626080, len 8192 bytes)
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
b41ffb90: Badness in key lookup (length) bp=(bno 31116224, len 32768 bytes) key=(bno 31116224, len 8192 bytes)
b41ffb90: Badness in key lookup (length) bp=(bno 31117856, len 32768 bytes) key=(bno 31117856, len 8192 bytes)
b41ffb90: Badness in key lookup (length) bp=(bno 31128704, len 32768 bytes) key=(bno 31128704, len 8192 bytes)
b41ffb90: Badness in key lookup (length) bp=(bno 31239104, len 32768 bytes) key=(bno 31239104, len 8192 bytes)
b41ffb90: Badness in key lookup (length) bp=(bno 31261408, len 32768 bytes) key=(bno 31261408, len 8192 bytes)
        - agno = 8
local inode 33609156 attr too small (size = 0, min size = 4)
bad attribute fork in inode 33609156, clearing attr fork
clearing inode 33609156 attributes
cleared inode 33609156
        - agno = 9
b50c6b90: Badness in key lookup (length) bp=(bno 38861808, len 32768 bytes) key=(bno 38861808, len 8192 bytes)
        - agno = 10
b41ffb90: Badness in key lookup (length) bp=(bno 42752032, len 32768 bytes) key=(bno 42752032, len 8192 bytes)
        - agno = 11
        - agno = 12
b50c6b90: Badness in key lookup (length) bp=(bno 50475360, len 32768 bytes) key=(bno 50475360, len 8192 bytes)
b50c6b90: Badness in key lookup (length) bp=(bno 50629312, len 32768 bytes) key=(bno 50629312, len 8192 bytes)
        - agno = 13
        - agno = 14
        - agno = 15
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
bad bmap btree ptr 0xc3a0000100000000 in ino 33609156
bad data fork in inode 33609156
cleared inode 33609156
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
entry "locking.tdb" in directory inode 33585205 points to free inode 33609156
bad hash table for directory inode 33585205 (no data entry): rebuilding
rebuilding directory inode 33585205
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 12636191, moving to lost+found
disconnected inode 12643748, moving to lost+found
disconnected inode 12643751, moving to lost+found
disconnected inode 12674162, moving to lost+found
disconnected inode 12674190, moving to lost+found
disconnected inode 12686342, moving to lost+found
disconnected inode 12689047, moving to lost+found
disconnected inode 12689059, moving to lost+found
disconnected inode 12961449, moving to lost+found
disconnected inode 16816212, moving to lost+found
disconnected inode 16872569, moving to lost+found
disconnected inode 33609179, moving to lost+found
disconnected inode 33609189, moving to lost+found
disconnected inode 33610799, moving to lost+found
disconnected inode 33610824, moving to lost+found
disconnected inode 33610838, moving to lost+found
disconnected inode 33610839, moving to lost+found
disconnected inode 33621664, moving to lost+found
disconnected inode 33621671, moving to lost+found
disconnected inode 33621672, moving to lost+found
disconnected inode 33732053, moving to lost+found
disconnected inode 33754372, moving to lost+found
disconnected inode 41977984, moving to lost+found
disconnected inode 46179860, moving to lost+found
disconnected inode 54526415, moving to lost+found
disconnected inode 54680382, moving to lost+found
Phase 7 - verify and correct link counts...
done

From owner-xfs@oss.sgi.com Sat Mar 1 17:34:29 2008
From: Eric Sandeen <sandeen@sandeen.net>
Date: Sat, 01 Mar 2008 19:34:55 -0600
To: Thomas Müller <thomas@mathtm.de>
Cc: xfs@oss.sgi.com, linux-kernel@vger.kernel.org
Subject: Re: Kernel oops / XFS filesystem corruption
Message-ID: <47CA043F.5070202@sandeen.net>
In-Reply-To: <47C9F5E9.70703@mathtm.de>

Thomas Müller wrote:
> Eric Sandeen wrote:
>> Did you also happen to save the xfs_repair output?
> No, but I made a complete copy of the file system before
> repairing it, so I can easily recreate it... :)

oh, like a dd image? great. You can use xfs_metadump to make a more
transportable image... xfs folks might even be able to use that to
recreate the oops.
-Eric

From owner-xfs@oss.sgi.com Sun Mar 2 02:26:13 2008
From: Andi Kleen <andi@firstfloor.org>
Date: 02 Mar 2008 11:26:07 +0100
To: Barry Naujok
Cc: xfs@oss.sgi.com
Subject: Re: [REVIEW] Don't make lazy counters default for mkfs
Message-ID: <87ir05h16o.fsf@basil.nowhere.org>

"Barry Naujok" writes:
>
> Lazy counters will default to on again in xfsprogs 2.10.0 (when CI
> support is released).

CI is what?

-Andi
From owner-xfs@oss.sgi.com Sun Mar 2 02:41:32 2008
From: Josef 'Jeff' Sipek <jeffpc@josefsipek.net>
Date: Sun, 2 Mar 2008 05:41:53 -0500
To: Andi Kleen
Cc: Barry Naujok, xfs@oss.sgi.com
Subject: Re: [REVIEW] Don't make lazy counters default for mkfs
Message-ID: <20080302104153.GD2483@josefsipek.net>
In-Reply-To: <87ir05h16o.fsf@basil.nowhere.org>

On Sun, Mar 02, 2008 at 11:26:07AM +0100, Andi Kleen wrote:
> "Barry Naujok" writes:
> >
> > Lazy counters will default to on again in xfsprogs 2.10.0 (when CI
> > support is released).
>
> CI is what?

Case-insensitivity, aka. case-folding.

Josef 'Jeff' Sipek.

--
Bad pun of the week: The formula 1 control computer suffered from a
race condition

From owner-xfs@oss.sgi.com Sun Mar 2 08:14:46 2008
From: Iustin Pop <iusty@k1024.org>
Date: Sun, 2 Mar 2008 17:15:07 +0100
To: xfs@oss.sgi.com
Subject: XFS_WANT_CORRUPTED_GOTO report
Message-ID: <20080302161507.GC12740@teal.hq.k1024.org>
Mutt/1.5.17+20080114 (2008-01-14) X-Barracuda-Connect: ug-out-1314.google.com[66.249.92.170] X-Barracuda-Start-Time: 1204474513 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43678 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14732 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: iusty@k1024.org Precedence: bulk X-list: xfs Hi, I searched the list but didn't find any reports of XFS_WANT_CORRUPTED_GOTO in xfs_bmap_add_extent_unwritten_real, so here it goes. My kernel is tainted as I use nvidia's binary driver, so if I'm told to go away I understand :) Otherwise it's a self compiled amd64 kernel on debian unstable. The filesystem in question was recently grown, and I did on a file: xfs_io disk0.img resvp 0 2G truncate 8G (not with G but with the actual numbers). Then I proceeded to write into this file (it was used as a qemu disk image) and at some point: XFS internal error XFS_WANT_CORRUPTED_GOTO at line 2058 of file fs/xfs/xfs_bmap_btree.c. 
Caller 0xffffffff80318a80 Pid: 281, comm: xfsdatad/1 Tainted: P 2.6.24.3-teal #1 Call Trace: [] xfs_bmap_add_extent_unwritten_real+0x710/0xce0 [] xfs_bmbt_insert+0x14d/0x150 [] xfs_bmap_add_extent_unwritten_real+0x710/0xce0 [] xfs_bmap_add_extent+0x147/0x440 [] xfs_iext_get_ext+0x49/0x80 [] xfs_btree_init_cursor+0x45/0x220 [] xfs_bmapi+0xc31/0x1360 [] xlog_grant_log_space+0x298/0x2e0 [] xfs_trans_reserve+0xa8/0x210 [] xfs_iomap_write_unwritten+0x14b/0x220 [] xfs_iomap+0x25a/0x390 [] thread_return+0x3a/0x56c [] xfs_end_bio_unwritten+0x0/0x40 [] xfs_end_bio_unwritten+0x2f/0x40 [] run_workqueue+0xcc/0x170 [] worker_thread+0x0/0x110 [] worker_thread+0x0/0x110 [] worker_thread+0xa3/0x110 [] autoremove_wake_function+0x0/0x30 [] worker_thread+0x0/0x110 [] worker_thread+0x0/0x110 [] kthread+0x4b/0x80 [] child_rip+0xa/0x12 [] kthread+0x0/0x80 [] child_rip+0x0/0x12 Filesystem "dm-4": XFS internal error xfs_trans_cancel at line 1163 of file fs/xfs/xfs_trans.c. Caller 0xffffffff80340a9b Pid: 281, comm: xfsdatad/1 Tainted: P 2.6.24.3-teal #1 Call Trace: [] xfs_iomap_write_unwritten+0x1fb/0x220 [] xfs_trans_cancel+0x104/0x130 [] xfs_iomap_write_unwritten+0x1fb/0x220 [] xfs_iomap+0x25a/0x390 [] thread_return+0x3a/0x56c [] xfs_end_bio_unwritten+0x0/0x40 [] xfs_end_bio_unwritten+0x2f/0x40 [] run_workqueue+0xcc/0x170 [] worker_thread+0x0/0x110 [] worker_thread+0x0/0x110 [] worker_thread+0xa3/0x110 [] autoremove_wake_function+0x0/0x30 [] worker_thread+0x0/0x110 [] worker_thread+0x0/0x110 [] kthread+0x4b/0x80 [] child_rip+0xa/0x12 [] kthread+0x0/0x80 [] child_rip+0x0/0x12 xfs_force_shutdown(dm-4,0x8) called from line 1164 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff803515ed Filesystem "dm-4": Corruption of in-memory data detected. Shutting down filesystem: dm-4 Please umount the filesystem, and rectify the problem(s) xfs_repair didn't say anything related to corruption, mounting it just said starting recovery... ending recovery. 
After mount, the file in question is heavily fragmented (around 1600 segments). I'm not sure if this file caused the corruption, but I'm almost certain, as no other traffic should have been at that time. I also have a metadump (run before recovery) and a full copy of the filesystem if it's useful. thanks, iustin From owner-xfs@oss.sgi.com Sun Mar 2 11:02:10 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 11:02:28 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.2 required=5.0 tests=AWL,BAYES_00,MIME_8BIT_HEADER autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m22J27qB028442 for ; Sun, 2 Mar 2008 11:02:10 -0800 X-ASG-Debug-ID: 1204484555-073f00030000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from moutng.kundenserver.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 7DCCF124A410 for ; Sun, 2 Mar 2008 11:02:35 -0800 (PST) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.186]) by cuda.sgi.com with ESMTP id kGFd2TwyRN80xFhk for ; Sun, 02 Mar 2008 11:02:35 -0800 (PST) Received: from [10.17.19.2] (dslb-084-056-059-096.pools.arcor-ip.net [84.56.59.96]) by mrelayeu.kundenserver.de (node=mrelayeu2) with ESMTP (Nemesis) id 0MKwtQ-1JVtSB1xqq-0005PD; Sun, 02 Mar 2008 20:02:34 +0100 Message-ID: <47CAF9C4.6090000@mathtm.de> Date: Sun, 02 Mar 2008 20:02:28 +0100 From: =?ISO-8859-15?Q?Thomas_M=FCller?= User-Agent: Thunderbird 2.0.0.12 (Windows/20080213) MIME-Version: 1.0 To: Eric Sandeen CC: xfs@oss.sgi.com, linux-kernel@vger.kernel.org X-ASG-Orig-Subj: Re: Kernel oops / XFS filesystem corruption Subject: Re: Kernel oops / XFS filesystem corruption References: <47C93C32.40006@mathtm.de> <47C9C44D.8080400@sandeen.net> <47C9F5E9.70703@mathtm.de> <47CA043F.5070202@sandeen.net> In-Reply-To: 
<47CA043F.5070202@sandeen.net> X-Enigmail-Version: 0.95.6 Content-Type: text/plain; charset=ISO-8859-15; format=flowed Content-Transfer-Encoding: 7bit X-Provags-ID: V01U2FsdGVkX1/mazNlOs0mqYxoAI30gKVkCV9ng20b6cbzldd k+QE9VuQPNF/mira2XdGh4kxW+YK5zLIdLuXUkU4utPq0EnNCf McSYwpt0aYazNXr1Jzahe6d/bJIAFB1 X-Barracuda-Connect: moutng.kundenserver.de[212.227.126.186] X-Barracuda-Start-Time: 1204484556 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43688 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14733 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: thomas@mathtm.de Precedence: bulk X-list: xfs Eric Sandeen wrote: > oh, like a dd image? great. Yup :) > You can use xfs_metadump to make a more transportable image... I will, if someone needs it. As said, I have a complete file system image, so if anyone needs more information/data, just tell me. 
Thomas From owner-xfs@oss.sgi.com Sun Mar 2 15:34:32 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 15:34:53 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.6 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m22NYT6j020544 for ; Sun, 2 Mar 2008 15:34:32 -0800 X-ASG-Debug-ID: 1204500897-0882032c0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from postoffice.aconex.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 2E63F63E3AA for ; Sun, 2 Mar 2008 15:34:57 -0800 (PST) Received: from postoffice.aconex.com (prod.aconex.com [203.89.192.138]) by cuda.sgi.com with ESMTP id ZYz48CG7zGnCKmqG for ; Sun, 02 Mar 2008 15:34:57 -0800 (PST) Received: from edge.scott.net.au (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id BFA3492D21B; Mon, 3 Mar 2008 10:34:55 +1100 (EST) X-ASG-Orig-Subj: Re: [REVIEW] Don't make lazy counters default for mkfs Subject: Re: [REVIEW] Don't make lazy counters default for mkfs From: Nathan Scott Reply-To: nscott@aconex.com To: Russell Cattelan Cc: Eric Sandeen , Barry Naujok , "xfs@oss.sgi.com" In-Reply-To: <47C89303.7070902@thebarn.com> References: <1204166101.13569.102.camel@edge.scott.net.au> <47C87775.2010007@thebarn.com> <47C89137.3070805@sandeen.net> <47C89303.7070902@thebarn.com> Content-Type: text/plain Organization: Aconex Date: Mon, 03 Mar 2008 10:34:55 +1100 Message-Id: <1204500895.10190.3.camel@edge.scott.net.au> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: prod.aconex.com[203.89.192.138] X-Barracuda-Start-Time: 1204500898 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 
X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43707 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14734 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Fri, 2008-02-29 at 17:19 -0600, Russell Cattelan wrote: > > > I thought about that; xfs *could* stick someting in /proc/fs/xfs > with > > supported features or somesuch. > > > > But, the kernel you mkfs under isn't necessarily the one you're > going to > > need to fall back to tomorrow, though... > > > > > True but at least it could make a bit of a intelligent decision. > and maybe a warning for a while about potentially incompatible flags. Might also be a good idea to require -f to force a mkfs of a filesystem which the kernel doesn't support. Would be good to get blocksize > pagesize into this scheme too btw, and unfortunately that one isn't a superblock flag) - so this scheme might need to go beyond those flags, if anyone decides to implement it. cheers. 
-- Nathan From owner-xfs@oss.sgi.com Sun Mar 2 15:58:46 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 15:59:05 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m22NweNf022066 for ; Sun, 2 Mar 2008 15:58:44 -0800 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA21371; Mon, 3 Mar 2008 10:58:56 +1100 Date: Mon, 03 Mar 2008 10:59:43 +1100 To: "Eric Sandeen" , markgw@sgi.com Subject: Re: [REVIEW] Don't make lazy counters default for mkfs From: "Barry Naujok" Organization: SGI Cc: "Russell Cattelan" , nscott@aconex.com, "xfs@oss.sgi.com" Content-Type: text/plain; format=flowed; delsp=yes; charset=utf-8 MIME-Version: 1.0 References: <1204166101.13569.102.camel@edge.scott.net.au> <47C87775.2010007@thebarn.com> <47C89137.3070805@sandeen.net> <47C89303.7070902@thebarn.com> <47C8997A.9030804@sgi.com> <47C89B94.4060002@sandeen.net> <47C89F17.7040307@sandeen.net> Content-Transfer-Encoding: 7bit Message-ID: In-Reply-To: <47C89F17.7040307@sandeen.net> User-Agent: Opera Mail/9.24 (Win32) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14735 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs On Sat, 01 Mar 2008 11:11:03 +1100, Eric Sandeen wrote: > Eric Sandeen wrote: > >> maybe just 2 values, with the actual supported features (& features2) >> values anded together? Easy enough to parse in the code. >> >> i.e. 
>> >> # cat /proc/fs/xfs/features >> 0xffffffff >> 0x00000001 > > hm, but of course mkfs should never be checking for anything in the > first features slot; yesterday's kernels support all those flags but > don't export anything. Bzzzzt! IRIX/ASCII case-insensitive mode (or OLDCI or "V1 CI" as I'm calling it :) ) is in the first features slot - 0x4000 > wow, this is starting to feel as complex as ext4's flags ;) > > -Eric > > From owner-xfs@oss.sgi.com Sun Mar 2 16:16:00 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 16:16:19 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.8 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m230Ftk2023782 for ; Sun, 2 Mar 2008 16:15:59 -0800 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA21931; Mon, 3 Mar 2008 11:16:11 +1100 Message-ID: <47CB434B.4040005@sgi.com> Date: Mon, 03 Mar 2008 11:16:11 +1100 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.9 (Macintosh/20071031) MIME-Version: 1.0 To: Mark Goodwin CC: nscott@aconex.com, Russell Cattelan , Eric Sandeen , Barry Naujok , "xfs@oss.sgi.com" Subject: Re: [REVIEW] Don't make lazy counters default for mkfs References: <1204166101.13569.102.camel@edge.scott.net.au> <47C87775.2010007@thebarn.com> <47C89137.3070805@sandeen.net> <47C89303.7070902@thebarn.com> <1204500895.10190.3.camel@edge.scott.net.au> In-Reply-To: <1204500895.10190.3.camel@edge.scott.net.au> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14736 
X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Nathan Scott wrote: > On Fri, 2008-02-29 at 17:19 -0600, Russell Cattelan wrote: >>> I thought about that; xfs *could* stick someting in /proc/fs/xfs >> with >>> supported features or somesuch. >>> >>> But, the kernel you mkfs under isn't necessarily the one you're >> going to >>> need to fall back to tomorrow, though... >>> >>> >> True but at least it could make a bit of a intelligent decision. >> and maybe a warning for a while about potentially incompatible flags. > > Might also be a good idea to require -f to force a mkfs of a filesystem > which the kernel doesn't support. > 974981: mkfs.xfs should warn if it is about to create a fs that cannot be mounted Ivan was wanting this in December last year. Remember, Mark? He wanted to know what XFS features the running kernel supported? I think Dave (dgc) and others were not so keen on it, IIRC. 
(Seems fine to me:) --Tim From owner-xfs@oss.sgi.com Sun Mar 2 16:18:43 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 16:18:51 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from relay.sgi.com (netops-testserver-3.corp.sgi.com [192.26.57.72]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m230Iglo024341 for ; Sun, 2 Mar 2008 16:18:43 -0800 Received: from outhouse.melbourne.sgi.com (outhouse.melbourne.sgi.com [134.14.52.145]) by netops-testserver-3.corp.sgi.com (Postfix) with ESMTP id 233F990890; Sun, 2 Mar 2008 16:19:04 -0800 (PST) Received: from [134.15.251.5] (melb-sw-corp-251-5.corp.sgi.com [134.15.251.5]) by outhouse.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m230ItTG2445893; Mon, 3 Mar 2008 11:18:58 +1100 (AEDT) Message-ID: <47CB43EE.3060405@sgi.com> Date: Mon, 03 Mar 2008 11:18:54 +1100 From: Donald Douwsma User-Agent: Thunderbird 2.0.0.6 (X11/20071022) MIME-Version: 1.0 To: nscott@aconex.com CC: Russell Cattelan , Eric Sandeen , Barry Naujok , "xfs@oss.sgi.com" Subject: Re: [REVIEW] Don't make lazy counters default for mkfs References: <1204166101.13569.102.camel@edge.scott.net.au> <47C87775.2010007@thebarn.com> <47C89137.3070805@sandeen.net> <47C89303.7070902@thebarn.com> <1204500895.10190.3.camel@edge.scott.net.au> In-Reply-To: <1204500895.10190.3.camel@edge.scott.net.au> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14737 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: donaldd@sgi.com Precedence: bulk X-list: xfs Nathan Scott wrote: > On Fri, 2008-02-29 at 17:19 -0600, Russell Cattelan wrote: >>> I thought about that; xfs *could* 
stick someting in /proc/fs/xfs >> with >>> supported features or somesuch. >>> >>> But, the kernel you mkfs under isn't necessarily the one you're >> going to >>> need to fall back to tomorrow, though... >>> >>> >> True but at least it could make a bit of a intelligent decision. >> and maybe a warning for a while about potentially incompatible flags. > > Might also be a good idea to require -f to force a mkfs of a filesystem > which the kernel doesn't support. Could work but I dont like the idea of using -f for anything but mkfsing an existing filesystem. If that becomes habit for people it could lead to disasters. Don From owner-xfs@oss.sgi.com Sun Mar 2 16:24:14 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 16:24:21 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.6 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m230ODQ0029452 for ; Sun, 2 Mar 2008 16:24:14 -0800 X-ASG-Debug-ID: 1204503882-243600160000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from postoffice.aconex.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id BC8CC124BCC3 for ; Sun, 2 Mar 2008 16:24:42 -0800 (PST) Received: from postoffice.aconex.com (prod.aconex.com [203.89.192.138]) by cuda.sgi.com with ESMTP id 7whER3PxB6Foh4ed for ; Sun, 02 Mar 2008 16:24:42 -0800 (PST) Received: from edge.scott.net.au (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 440F592DEF0; Mon, 3 Mar 2008 11:24:10 +1100 (EST) X-ASG-Orig-Subj: Re: [REVIEW] Don't make lazy counters default for mkfs Subject: Re: [REVIEW] Don't make lazy counters default for mkfs From: Nathan Scott Reply-To: nscott@aconex.com To: Donald Douwsma Cc: Russell Cattelan , Eric Sandeen , Barry Naujok , "xfs@oss.sgi.com" 
In-Reply-To: <47CB43EE.3060405@sgi.com> References: <1204166101.13569.102.camel@edge.scott.net.au> <47C87775.2010007@thebarn.com> <47C89137.3070805@sandeen.net> <47C89303.7070902@thebarn.com> <1204500895.10190.3.camel@edge.scott.net.au> <47CB43EE.3060405@sgi.com> Content-Type: text/plain Organization: Aconex Date: Mon, 03 Mar 2008 11:24:09 +1100 Message-Id: <1204503849.10190.26.camel@edge.scott.net.au> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: prod.aconex.com[203.89.192.138] X-Barracuda-Start-Time: 1204503882 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43710 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14738 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Mon, 2008-03-03 at 11:18 +1100, Donald Douwsma wrote: > > > Could work but I dont like the idea of using -f for anything but > mkfsing an > existing filesystem. If that becomes habit for people it could lead to > disasters. Its already used for more than just overwriting existing filesystems. :) And it is already habit for some people. Which leads to disasters, yes. Its not a perfect system, but there's only so much you can do for people before they shoot themselves in the foot. 
-- Nathan From owner-xfs@oss.sgi.com Sun Mar 2 16:31:22 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 16:31:41 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.6 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m230VIjx030322 for ; Sun, 2 Mar 2008 16:31:20 -0800 Received: from [134.14.55.21] (dhcp21.melbourne.sgi.com [134.14.55.21]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA22785; Mon, 3 Mar 2008 11:31:32 +1100 Message-ID: <47CB4696.1030304@sgi.com> Date: Mon, 03 Mar 2008 11:30:14 +1100 From: Mark Goodwin Reply-To: markgw@sgi.com Organization: SGI Engineering User-Agent: Thunderbird 1.5.0.14 (Windows/20071210) MIME-Version: 1.0 To: Timothy Shimmin CC: nscott@aconex.com, Russell Cattelan , Eric Sandeen , Barry Naujok , "xfs@oss.sgi.com" Subject: Re: [REVIEW] Don't make lazy counters default for mkfs References: <1204166101.13569.102.camel@edge.scott.net.au> <47C87775.2010007@thebarn.com> <47C89137.3070805@sandeen.net> <47C89303.7070902@thebarn.com> <1204500895.10190.3.camel@edge.scott.net.au> <47CB434B.4040005@sgi.com> In-Reply-To: <47CB434B.4040005@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14739 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: markgw@sgi.com Precedence: bulk X-list: xfs Timothy Shimmin wrote: > Nathan Scott wrote: >> On Fri, 2008-02-29 at 17:19 -0600, Russell Cattelan wrote: >>>> I thought about that; xfs *could* stick someting in /proc/fs/xfs >>> with >>>> supported features or 
somesuch. >>>> >>>> But, the kernel you mkfs under isn't necessarily the one you're >>> going to >>>> need to fall back to tomorrow, though... >>>> >>>> >>> True but at least it could make a bit of a intelligent decision. >>> and maybe a warning for a while about potentially incompatible flags. >> >> Might also be a good idea to require -f to force a mkfs of a filesystem >> which the kernel doesn't support. >> > > 974981: mkfs.xfs should warn if it is about to create a fs that cannot > be mounted > > Ivan was wanting this in December last year. Remember, Mark? > He wanted to know what XFS features the running kernel supported? It was worse than that - IIRC, he wanted to know what features are supported by the XFS kernel module he just installed (this was part of an Appman upgrade scenario). I thought we rejected that bug? > > I don't think Dave (dgc) and others were not so keen on it IIRC. Anyone recall the reasons? Maybe I'm missing something, but if we export all the feature bits, both new and old, then (a) an old mkfs will continue to ignore them, and (b) future versions of mkfs will have all the information needed, but will need to be smart about how that information is used. 
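[Editorial sketch: the export-all-the-bits scheme discussed here could be consumed by a userspace tool roughly as follows. This is a sketch of a *proposed* interface only: the /proc/fs/xfs/features path and two-line hex-mask format come from Eric's suggestion earlier in the thread, the 0x4000 OLDCI bit is the one Barry places in the first slot, and a sample file stands in for the real thing.]

```shell
# Parse the proposed two-line /proc/fs/xfs/features format (hypothetical
# interface from this thread; a sample file stands in for the real one).
sample=/tmp/xfs_features_sample
printf '0xffffffff\n0x00000001\n' > "$sample"   # features, features2 masks

{ read -r features; read -r features2; } < "$sample"  # features2: second slot

OLDCI=0x4000        # "V1 CI" case-insensitive bit, first features slot
if [ $(( features & OLDCI )) -ne 0 ]; then
    ci_supported=yes
else
    ci_supported=no
fi
echo "case-insensitive mode supported: $ci_supported"
```

A future mkfs could run such a check per feature flag and refuse (or warn) before writing a superblock the running kernel cannot mount.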
Cheers -- Mark Goodwin markgw@sgi.com Engineering Manager for XFS and PCP Phone: +61-3-99631937 SGI Australian Software Group Cell: +61-4-18969583 ------------------------------------------------------------- From owner-xfs@oss.sgi.com Sun Mar 2 16:31:46 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 16:31:54 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from relay.sgi.com (relay1.corp.sgi.com [192.26.58.214]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m230VkSj030411 for ; Sun, 2 Mar 2008 16:31:46 -0800 Received: from outhouse.melbourne.sgi.com (outhouse.melbourne.sgi.com [134.14.52.145]) by relay1.corp.sgi.com (Postfix) with ESMTP id 82BB98F809C; Sun, 2 Mar 2008 16:32:09 -0800 (PST) Received: from [134.15.251.5] (melb-sw-corp-251-5.corp.sgi.com [134.15.251.5]) by outhouse.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m230W1TG2332433; Mon, 3 Mar 2008 11:32:05 +1100 (AEDT) Message-ID: <47CB4700.9090808@sgi.com> Date: Mon, 03 Mar 2008 11:32:00 +1100 From: Donald Douwsma User-Agent: Thunderbird 2.0.0.6 (X11/20071022) MIME-Version: 1.0 To: Christoph Hellwig CC: xfs@oss.sgi.com Subject: Re: [PATCH] remove superflous xfs_readsb call in xfs_mountfs References: <20071218174829.GA3195@lst.de> <20080222034845.GA5354@lst.de> In-Reply-To: <20080222034845.GA5354@lst.de> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14740 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: donaldd@sgi.com Precedence: bulk X-list: xfs Christoph Hellwig wrote: > On Tue, Dec 18, 2007 at 06:48:29PM +0100, Christoph Hellwig wrote: >> When xfs_mountfs is called by xfs_mount 
xfs_readsb was called 35 lines >> above unconditionally, so there is no need to try to read the superblock >> if it's not present. If any other port doesn't have the superblock >> read at this point it should just call it directly from it's xfs_mount >> equivalent. > > Ping? Looks good, will be in shortly. Don > >> >> Signed-off-by: Christoph Hellwig >> >> Index: linux-2.6-xfs/fs/xfs/xfs_mount.c >> =================================================================== >> --- linux-2.6-xfs.orig/fs/xfs/xfs_mount.c 2007-12-17 14:34:57.000000000 +0100 >> +++ linux-2.6-xfs/fs/xfs/xfs_mount.c 2007-12-17 14:35:17.000000000 +0100 >> @@ -968,11 +968,6 @@ xfs_mountfs( >> int uuid_mounted = 0; >> int error = 0; >> >> - if (mp->m_sb_bp == NULL) { >> - error = xfs_readsb(mp, mfsi_flags); >> - if (error) >> - return error; >> - } >> xfs_mount_common(mp, sbp); >> >> /* > ---end quoted text--- > From owner-xfs@oss.sgi.com Sun Mar 2 16:42:03 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 16:42:23 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m230fwB1031841 for ; Sun, 2 Mar 2008 16:42:01 -0800 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA23034; Mon, 3 Mar 2008 11:42:19 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 1161) id AA12358C4C0F; Mon, 3 Mar 2008 11:42:19 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 907752 - Version bump and debian updates Message-Id: <20080303004219.AA12358C4C0F@chook.melbourne.sgi.com> Date: Mon, 3 Mar 2008 11:42:19 +1100 (EST) From: bnaujok@sgi.com (Barry Naujok) 
X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14741 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs Debian and version updates Date: Mon Mar 3 11:41:55 AEDT 2008 Workarea: chook.melbourne.sgi.com:/home/bnaujok/isms/xcmds-clean Inspected by: nscott@aconex.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:30604a acl/debian/control - 1.24 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/acl/debian/control.diff?r1=text&tr1=1.24&r2=text&tr2=1.23&f=h attr/debian/control - 1.17 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/attr/debian/control.diff?r1=text&tr1=1.17&r2=text&tr2=1.16&f=h - Debian update for uploaders xfsprogs/VERSION - 1.179 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/VERSION.diff?r1=text&tr1=1.179&r2=text&tr2=1.178&f=h xfsprogs/doc/CHANGES - 1.251 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/doc/CHANGES.diff?r1=text&tr1=1.251&r2=text&tr2=1.250&f=h - Bump to 2.9.7 xfsprogs/debian/control - 1.23 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/debian/control.diff?r1=text&tr1=1.23&r2=text&tr2=1.22&f=h - Debian update for uploaders xfsprogs/debian/changelog - 1.153 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/debian/changelog.diff?r1=text&tr1=1.153&r2=text&tr2=1.152&f=h - Debian update xfsdump/debian/control - 1.24 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsdump/debian/control.diff?r1=text&tr1=1.24&r2=text&tr2=1.23&f=h dmapi/debian/control - 1.14 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/dmapi/debian/control.diff?r1=text&tr1=1.14&r2=text&tr2=1.13&f=h - Debian update for uploaders xfsprogs/fsck/xfs_fsck.sh - 1.3 - changed 
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/fsck/xfs_fsck.sh.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h - Execute bits changed from xxx to --- Ignore another option From owner-xfs@oss.sgi.com Sun Mar 2 16:41:42 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 16:57:44 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m230fbhJ031823 for ; Sun, 2 Mar 2008 16:41:41 -0800 Received: from linuxbuild.melbourne.sgi.com (linuxbuild.melbourne.sgi.com [134.14.54.115]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA23025; Mon, 3 Mar 2008 11:42:02 +1100 From: donaldd@sgi.com Received: by linuxbuild.melbourne.sgi.com (Postfix, from userid 16365) id C3753323900C; Mon, 3 Mar 2008 11:42:01 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com, xfs@oss.sgi.com Subject: TAKE 976035 - Remove superflous xfs_readsb call in xfs_mountfs. Message-Id: <20080303004201.C3753323900C@linuxbuild.melbourne.sgi.com> Date: Mon, 3 Mar 2008 11:42:01 +1100 (EST) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14742 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: donaldd@sgi.com Precedence: bulk X-list: xfs Remove superflous xfs_readsb call in xfs_mountfs. When xfs_mountfs is called by xfs_mount xfs_readsb was called 35 lines above unconditionally, so there is no need to try to read the superblock if it's not present. If any other port doesn't have the superblock read at this point it should just call it directly from it's xfs_mount equivalent. 
Signed-off-by: Christoph Hellwig Date: Mon Mar 3 11:41:13 AEDT 2008 Workarea: linuxbuild.melbourne.sgi.com:/home/donaldd/isms/2.6.x-xfs Inspected by: donaldd The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:30603a fs/xfs/xfs_mount.c - 1.420 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.c.diff?r1=text&tr1=1.420&r2=text&tr2=1.419&f=h - Remove superflous xfs_readsb call in xfs_mountfs. From owner-xfs@oss.sgi.com Sun Mar 2 17:01:30 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 17:01:50 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.7 required=5.0 tests=AWL,BAYES_00,MIME_8BIT_HEADER autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2311QtF001645 for ; Sun, 2 Mar 2008 17:01:28 -0800 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA23630; Mon, 3 Mar 2008 12:01:41 +1100 To: =?utf-8?Q?Thomas_M=C3=BCller?= , "Eric Sandeen" Subject: Re: Kernel oops / XFS filesystem corruption From: "Barry Naujok" Organization: SGI Cc: xfs@oss.sgi.com, linux-kernel@vger.kernel.org Content-Type: text/plain; format=flowed; delsp=yes; charset=utf-8 MIME-Version: 1.0 References: <47C93C32.40006@mathtm.de> <47C9C44D.8080400@sandeen.net> <47C9F5E9.70703@mathtm.de> <47CA043F.5070202@sandeen.net> <47CAF9C4.6090000@mathtm.de> Content-Transfer-Encoding: 8bit Date: Mon, 03 Mar 2008 12:02:43 +1100 Message-ID: In-Reply-To: <47CAF9C4.6090000@mathtm.de> User-Agent: Opera Mail/9.24 (Win32) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14743 X-ecartis-version: Ecartis v1.0.0 Sender: 
xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs
On Mon, 03 Mar 2008 06:02:28 +1100, Thomas Müller wrote:
> Eric Sandeen wrote:
>> oh, like a dd image? great.
> Yup :)
>
> > You can use xfs_metadump to make a more transportable image...
> I will, if someone needs it.
>
> As said, I have a complete file system image, so if anyone needs
> more information/data, just tell me.

I could use the metadump image for the badness in key lookups that
xfs_repair was reporting.

Thanks, Barry.
From owner-xfs@oss.sgi.com Sun Mar 2 17:03:28 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 17:03:35 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.7 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2313M5U002093 for ; Sun, 2 Mar 2008 17:03:27 -0800 Received: from [134.14.55.21] (dhcp21.melbourne.sgi.com [134.14.55.21]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA23702; Mon, 3 Mar 2008 12:03:44 +1100 Message-ID: <47CB4E20.5070808@sgi.com> Date: Mon, 03 Mar 2008 12:02:24 +1100 From: Mark Goodwin Reply-To: markgw@sgi.com Organization: SGI Engineering User-Agent: Thunderbird 1.5.0.14 (Windows/20071210) MIME-Version: 1.0 To: Eric Sandeen CC: =?ISO-8859-15?Q?Thomas_M=FCller?= , xfs@oss.sgi.com, linux-kernel@vger.kernel.org Subject: Re: Kernel oops / XFS filesystem corruption References: <47C93C32.40006@mathtm.de> <47C9C44D.8080400@sandeen.net> <47C9F5E9.70703@mathtm.de> <47CA043F.5070202@sandeen.net> In-Reply-To: <47CA043F.5070202@sandeen.net> Content-Type: text/plain; charset=ISO-8859-15; format=flowed Content-Transfer-Encoding: 8bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com
X-Virus-Status: Clean X-archive-position: 14744 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: markgw@sgi.com Precedence: bulk X-list: xfs Eric Sandeen wrote: > Thomas Müller wrote: >> Eric Sandeen wrote: >>> Did you also happen to save the xfs_repair output? >> No, but I made a complete copy of the file system before >> repairing it, so I can easily recreate it... :) > > oh, like a dd image? great. You can use xfs_metadump to make a more > transportable image... xfs folks might even be able to use that to > recreate the oops. YES PLEASE. See the xfs_metadump man page for instructions. It will obfuscate filenames by default (but please only do so if you need to). Please make it available for Barry, thanks. Cheers -- Mark From owner-xfs@oss.sgi.com Sun Mar 2 17:34:09 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 17:34:26 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.2 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m231Y68A004401 for ; Sun, 2 Mar 2008 17:34:09 -0800 X-ASG-Debug-ID: 1204508074-0c5102e40000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 46EF9F02C70 for ; Sun, 2 Mar 2008 17:34:34 -0800 (PST) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id r1IJ0a6tCkeMoR5a for ; Sun, 02 Mar 2008 17:34:34 -0800 (PST) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m231FwSA003215; Sun, 2 Mar 2008 20:16:01 -0500 Received: by josefsipek.net (Postfix, from userid 1000) id 
CDCF91C21D25; Sun, 2 Mar 2008 20:15:59 -0500 (EST) Date: Sun, 2 Mar 2008 20:15:59 -0500 From: "Josef 'Jeff' Sipek" To: Mark Goodwin Cc: Timothy Shimmin , nscott@aconex.com, Russell Cattelan , Eric Sandeen , Barry Naujok , "xfs@oss.sgi.com" X-ASG-Orig-Subj: Re: [REVIEW] Don't make lazy counters default for mkfs Subject: Re: [REVIEW] Don't make lazy counters default for mkfs Message-ID: <20080303011559.GB13879@josefsipek.net> References: <1204166101.13569.102.camel@edge.scott.net.au> <47C87775.2010007@thebarn.com> <47C89137.3070805@sandeen.net> <47C89303.7070902@thebarn.com> <1204500895.10190.3.camel@edge.scott.net.au> <47CB434B.4040005@sgi.com> <47CB4696.1030304@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47CB4696.1030304@sgi.com> User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1204508075 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43714 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14745 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs On Mon, Mar 03, 2008 at 11:30:14AM +1100, Mark Goodwin wrote: ... 
> Maybe I'm missing something, but if we export all the feature bits,
> both new and old, then (a) an old mkfs will continue to ignore them,
> and (b) future versions of mkfs will have all the information needed,
> but will need to be smart about how that information is used.

IMHO:

1) mkfs should make a filesystem, the defaults should be conservative (say
   using features that have been around >1 year)

2) xfs should export supported features to userspace

3) if you want to make sure that the fs you create will be mountable with
   your current kernel, write a small shell script or something along those
   lines that reads the features from some kernel interface, and based on
   those passes the right options to mkfs

4) if you just use mkfs and it creates a fs that's incompatible with your
   current kernel, the mount will fail - as it does today, but perhaps a
   less cryptic error message would be in order

Since installers are just gigantic wrappers around basic commands like mkfs,
#3 gets nicely covered.

Josef 'Jeff' Sipek.

--
The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all progress
depends on the unreasonable man.
- George Bernard Shaw From owner-xfs@oss.sgi.com Sun Mar 2 17:40:41 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 17:40:48 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m231eZMe005029 for ; Sun, 2 Mar 2008 17:40:39 -0800 Received: from [134.14.55.78] (redback.melbourne.sgi.com [134.14.55.78]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA24849; Mon, 3 Mar 2008 12:40:56 +1100 Message-ID: <47CB587E.8020602@sgi.com> Date: Mon, 03 Mar 2008 12:46:38 +1100 From: Lachlan McIlroy Reply-To: lachlan@sgi.com User-Agent: Thunderbird 2.0.0.12 (X11/20080213) MIME-Version: 1.0 To: iusty@k1024.org CC: xfs-oss Subject: Re: XFS_WANT_CORRUPTED_GOTO report References: <20080302161507.GC12740@teal.hq.k1024.org> In-Reply-To: <20080302161507.GC12740@teal.hq.k1024.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14746 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs Iustin Pop wrote: > Hi, > > I searched the list but didn't find any reports of > XFS_WANT_CORRUPTED_GOTO in xfs_bmap_add_extent_unwritten_real, so here > it goes. My kernel is tainted as I use nvidia's binary driver, so if I'm > told to go away I understand :) Otherwise it's a self compiled amd64 > kernel on debian unstable. > > The filesystem in question was recently grown, and I did on a file: > xfs_io disk0.img > resvp 0 2G > truncate 8G > > (not with G but with the actual numbers). 
> Then I proceeded to write into this file (it was used as a qemu disk
> image) and at some point:
>
> XFS internal error XFS_WANT_CORRUPTED_GOTO at line 2058 of file fs/xfs/xfs_bmap_btree.c. Caller 0xffffffff80318a80
> Pid: 281, comm: xfsdatad/1 Tainted: P 2.6.24.3-teal #1
>
> Call Trace:
> [] xfs_bmap_add_extent_unwritten_real+0x710/0xce0
> [] xfs_bmbt_insert+0x14d/0x150
> [] xfs_bmap_add_extent_unwritten_real+0x710/0xce0
> [] xfs_bmap_add_extent+0x147/0x440
> [] xfs_iext_get_ext+0x49/0x80
> [] xfs_btree_init_cursor+0x45/0x220
> [] xfs_bmapi+0xc31/0x1360
> [] xlog_grant_log_space+0x298/0x2e0
> [] xfs_trans_reserve+0xa8/0x210
> [] xfs_iomap_write_unwritten+0x14b/0x220
> [] xfs_iomap+0x25a/0x390
> [] thread_return+0x3a/0x56c
> [] xfs_end_bio_unwritten+0x0/0x40
> [] xfs_end_bio_unwritten+0x2f/0x40
> [] run_workqueue+0xcc/0x170
> [] worker_thread+0x0/0x110
> [] worker_thread+0x0/0x110
> [] worker_thread+0xa3/0x110
> [] autoremove_wake_function+0x0/0x30
> [] worker_thread+0x0/0x110
> [] worker_thread+0x0/0x110
> [] kthread+0x4b/0x80
> [] child_rip+0xa/0x12
> [] kthread+0x0/0x80
> [] child_rip+0x0/0x12
>
> Filesystem "dm-4": XFS internal error xfs_trans_cancel at line 1163 of file fs/xfs/xfs_trans.c. Caller 0xffffffff80340a9b
> Pid: 281, comm: xfsdatad/1 Tainted: P 2.6.24.3-teal #1
>
> Call Trace:
> [] xfs_iomap_write_unwritten+0x1fb/0x220
> [] xfs_trans_cancel+0x104/0x130
> [] xfs_iomap_write_unwritten+0x1fb/0x220
> [] xfs_iomap+0x25a/0x390
> [] thread_return+0x3a/0x56c
> [] xfs_end_bio_unwritten+0x0/0x40
> [] xfs_end_bio_unwritten+0x2f/0x40
> [] run_workqueue+0xcc/0x170
> [] worker_thread+0x0/0x110
> [] worker_thread+0x0/0x110
> [] worker_thread+0xa3/0x110
> [] autoremove_wake_function+0x0/0x30
> [] worker_thread+0x0/0x110
> [] worker_thread+0x0/0x110
> [] kthread+0x4b/0x80
> [] child_rip+0xa/0x12
> [] kthread+0x0/0x80
> [] child_rip+0x0/0x12
>
> xfs_force_shutdown(dm-4,0x8) called from line 1164 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff803515ed
> Filesystem "dm-4": Corruption of in-memory data detected. Shutting down filesystem: dm-4
> Please umount the filesystem, and rectify the problem(s)
>
> xfs_repair didn't say anything related to corruption, mounting it just
> said starting recovery... ending recovery.

That reinforces the message above that the corruption was in-memory and that
the on-disk version is good.

> After mount, the file in question is heavily fragmented (around 1600
> segments). I'm not sure if this file caused the corruption, but I'm
> almost certain, as no other traffic should have been at that time.

The file being written to (that caused the panic) has unwritten extents and
we were trying to convert the extents from unwritten to real after writing
to them. These XFS_WANT_CORRUPTED_GOTO bugs often occur with extent tree
corruption so this is not surprising. Could we get output from xfs_bmap -v
on this file?

> I also have a metadump (run before recovery) and a full copy of the
> filesystem if it's useful.

Can we get a copy of that metadump? I don't hold high hopes for it though -
the filesystem can be inconsistent until the log is replayed but after the
log was replayed the problem was gone. I don't suppose you have a copy of
the log?
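[For readers following along, the commands discussed in this thread can be sketched as a small script. Only `disk0.img` comes from the mail; the sizes, the device path and the metadump file name are illustrative. Note that xfs_io spells the reservation command `resvsp`, so the mail's `resvp` looks like a transcription slip. The script only prints the commands rather than running them, since they need a real XFS filesystem:]

```shell
#!/bin/sh
# Sketch of the reproduction and triage steps from this thread.
# Only prints the commands; running them needs a real XFS filesystem.

IMG=disk0.img            # file name from the mail
DUMP=dm-4.metadump       # illustrative metadump name

# Reserve 2G of space and extend the file to 8G, as described
# (xfs_io's command is "resvsp"; "resvp" in the mail looks like a typo).
echo "xfs_io -f -c 'resvsp 0 2g' -c 'truncate 8g' $IMG"

# Lachlan's request: list the file's extents to gauge fragmentation.
echo "xfs_bmap -v $IMG"

# Capture the (unmounted) filesystem's metadata for the developers;
# /dev/dm-4 stands in for the actual device-mapper node.
echo "xfs_metadump /dev/dm-4 $DUMP"
```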
From owner-xfs@oss.sgi.com Sun Mar 2 19:56:27 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 19:56:51 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m233uP5T001060 for ; Sun, 2 Mar 2008 19:56:27 -0800 X-ASG-Debug-ID: 1204516613-54a500160000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 7E61CF031B3 for ; Sun, 2 Mar 2008 19:56:53 -0800 (PST) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id bI40NENoumDNtDkB for ; Sun, 02 Mar 2008 19:56:53 -0800 (PST) Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id C5E3F18004EC4; Sun, 2 Mar 2008 21:56:50 -0600 (CST) Message-ID: <47CB7702.5080905@sandeen.net> Date: Sun, 02 Mar 2008 21:56:50 -0600 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: "Josef 'Jeff' Sipek" CC: Mark Goodwin , Timothy Shimmin , nscott@aconex.com, Russell Cattelan , Barry Naujok , "xfs@oss.sgi.com" X-ASG-Orig-Subj: Re: [REVIEW] Don't make lazy counters default for mkfs Subject: Re: [REVIEW] Don't make lazy counters default for mkfs References: <1204166101.13569.102.camel@edge.scott.net.au> <47C87775.2010007@thebarn.com> <47C89137.3070805@sandeen.net> <47C89303.7070902@thebarn.com> <1204500895.10190.3.camel@edge.scott.net.au> <47CB434B.4040005@sgi.com> <47CB4696.1030304@sgi.com> <20080303011559.GB13879@josefsipek.net> In-Reply-To: <20080303011559.GB13879@josefsipek.net> Content-Type: text/plain; 
charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1204516614 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43724 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14747 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs
Josef 'Jeff' Sipek wrote:
> On Mon, Mar 03, 2008 at 11:30:14AM +1100, Mark Goodwin wrote:
> ...
>> Maybe I'm missing something, but if we export all the feature bits,
>> both new and old, then (a) an old mkfs will continue to ignore them,
>> and (b) future versions of mkfs will have all the information needed,
>> but will need to be smart about how that information is used.
>
> IMHO:
>
> 1) mkfs should make a filesystem, the defaults should be conservative (say
>    using features that have been around >1 year)

I suppose I have to agree, unfortunately that means most competitive
benchmarks will be using sub-optimal mkfs's, but...
> 2) xfs should export supported features to userspace > > 3) if you want to make sure that the fs you create will be mountable with > your current kernel, write a small shell script or something along those > lines that reads the features from some kernel interface, and based on > those passes the right options to mkfs > 4) if you just use mkfs and it creates a fs that's incompatible with your > current kernel, the mount will fail - as it does today, but perhaps a > less cryptic error message would be in order Ya know, good point. We already have "running kernel compatibility checks" built in; it's called "see what happens when you mount it" It's not like we're running mkfs.ext3 here... ;) mkfs; mount will tell you quickly if there's a problem, won't it. Adding complexity to mkfs might not make a lot of sense. And I still am not a huge fan of checking the currently-running kernel; that's just a point in time, and not necessarily what you're gonna mount it with. (heck maybe you're mkfs'ing a san filesystem?) it's unix, after all. hand out the hangin' rope... just make the kernel explain exactly how & why you've just hung yourself at mount time, in that case.... 
-Eric From owner-xfs@oss.sgi.com Sun Mar 2 20:04:38 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 20:04:45 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2344ZAC001986 for ; Sun, 2 Mar 2008 20:04:38 -0800 X-ASG-Debug-ID: 1204517098-41b000210000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from bombadil.infradead.org (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id B5E7663F390; Sun, 2 Mar 2008 20:04:59 -0800 (PST) Received: from bombadil.infradead.org (bombadil.infradead.org [18.85.46.34]) by cuda.sgi.com with ESMTP id zTcq5f7jxxdlu8VD; Sun, 02 Mar 2008 20:04:59 -0800 (PST) Received: from hch by bombadil.infradead.org with local (Exim 4.68 #1 (Red Hat Linux)) id 1JW1v8-0002hy-Bd; Mon, 03 Mar 2008 04:04:58 +0000 Date: Sun, 2 Mar 2008 23:04:58 -0500 From: Christoph Hellwig To: David Chinner Cc: xfs-dev , xfs-oss X-ASG-Orig-Subj: Re: [patch] fix inode leak in xfs_iget_core() Subject: Re: [patch] fix inode leak in xfs_iget_core() Message-ID: <20080303040458.GA3177@infradead.org> References: <20080223061924.GI155259@sgi.com> <20080223092255.GA21453@infradead.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080223092255.GA21453@infradead.org> User-Agent: Mutt/1.5.17 (2007-11-01) X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org See http://www.infradead.org/rpr.html X-Barracuda-Connect: bombadil.infradead.org[18.85.46.34] X-Barracuda-Start-Time: 1204517103 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 
using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43725 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14748 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Sat, Feb 23, 2008 at 04:22:55AM -0500, Christoph Hellwig wrote: > On Sat, Feb 23, 2008 at 05:19:24PM +1100, David Chinner wrote: > > If the radix_tree_preload() fails, we need to destroy the > > inode we just read in before trying again. This could leak > > xfs_vnode structures when there is memory pressure. Noticed > > by Christoph Hellwig. > > What we're leaking would be the xfs_inode. But this is exactly > the patch I had so OK from me :) Now that you're hopefully safe home can you commit it and push it for .25? 
From owner-xfs@oss.sgi.com Sun Mar 2 20:15:51 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 20:16:12 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m234FjJP003479 for ; Sun, 2 Mar 2008 20:15:50 -0800 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA28768; Mon, 3 Mar 2008 15:16:10 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 44625) id 80A3D58C4C0F; Mon, 3 Mar 2008 15:16:10 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com, xfs@oss.sgi.com Subject: PARTIAL TAKE 977823 - fix inode leak in xfs_iget_core() Message-Id: <20080303041610.80A3D58C4C0F@chook.melbourne.sgi.com> Date: Mon, 3 Mar 2008 15:16:10 +1100 (EST) From: lachlan@sgi.com (Lachlan McIlroy) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14749 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs fix inode leak in xfs_iget_core() If the radix_tree_preload() fails, we need to destroy the inode we just read in before trying again. This could leak xfs_vnode structures when there is memory pressure. Noticed by Christoph Hellwig. 
Signed-off-by: Dave Chinner Date: Mon Mar 3 15:15:23 AEDT 2008 Workarea: redback.melbourne.sgi.com:/home/lachlan/isms/2.6.x-xfs Inspected by: hch Author: lachlan The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:30606a fs/xfs/xfs_iget.c - 1.240 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_iget.c.diff?r1=text&tr1=1.240&r2=text&tr2=1.239&f=h - fix inode leak in xfs_iget_core() From owner-xfs@oss.sgi.com Sun Mar 2 20:17:51 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 20:18:00 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m234Hmux003913 for ; Sun, 2 Mar 2008 20:17:50 -0800 Received: from [134.14.55.78] (redback.melbourne.sgi.com [134.14.55.78]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA28958; Mon, 3 Mar 2008 15:18:03 +1100 Message-ID: <47CB7D52.2030704@sgi.com> Date: Mon, 03 Mar 2008 15:23:46 +1100 From: Lachlan McIlroy Reply-To: lachlan@sgi.com User-Agent: Thunderbird 2.0.0.12 (X11/20080213) MIME-Version: 1.0 To: Marc Dietrich CC: Barry Naujok , xfs@oss.sgi.com Subject: Re: filesystem corruption in linus tree References: <03F8FD43-322F-41E3-A7A0-CD4E9AD8B4DE@ap.physik.uni-giessen.de> <200802252347.50576.marc.dietrich@ap.physik.uni-giessen.de> <47C3C20A.90603@sgi.com> <200802262100.18631.marc.dietrich@ap.physik.uni-giessen.de> In-Reply-To: <200802262100.18631.marc.dietrich@ap.physik.uni-giessen.de> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14750 
X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs
Marc Dietrich wrote:
> Hi again,
>
> On Tuesday 26 February 2008 08:38:50 Lachlan McIlroy wrote:
>> Marc Dietrich wrote:
>>> Hi,
>>>
>>> On Monday 25 February 2008 01:36:28 Barry Naujok wrote:
>>>> On Sun, 24 Feb 2008 20:58:26 +1100, Marc Dietrich
>>>> wrote:
>>>>> Hi,
>>>>>
>>>>> somewhere after the release of 2.6.24 my xfs filesystem got corrupted.
>>>>> Initially I thought it was only related to the readdir bug.
>>>>> (http://oss.sgi.com/archives/xfs/2008-02/msg00027.html) So I waited for
>>>>> the fix to go into mainline. Yesterday I tried again, but got this
>>>>> error during boot:
>
>> We've had a few problems reported with XFS on 32-bit powermacs and the
>> culprit appears to be some changes to bit manipulation routines. Could you
>> please try reverse applying the attached patches and see if the problem is
>> resolved?
>
> I saw that you already pushed it into mainline - for a good reason ;-) Works
> as expected.
>
> Please also don't forget 2.6.24-stable !

The changes that caused this regression went into 2.6.25-rc1 so no need for
a 2.6.24 stable fix.
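[For anyone unfamiliar with the phrase, "reverse applying" a patch, as Lachlan suggested above, is done with patch(1)'s -R flag. The patch file name below is a placeholder, not one from the thread, and the script only prints the command:]

```shell
#!/bin/sh
# "Reverse apply" (back out) a suspect change from a kernel tree.
# The patch file name is a placeholder, not one from the thread.
PATCH=xfs-bitops-changes.patch

# patch -R applies the diff in reverse, undoing the change;
# --dry-run first verifies it would apply (or rather, revert) cleanly.
echo "cd linux-2.6 && patch -R -p1 --dry-run < $PATCH && patch -R -p1 < $PATCH"
```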
From owner-xfs@oss.sgi.com Sun Mar 2 20:19:21 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 20:19:29 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_52 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m234JJPc008659 for ; Sun, 2 Mar 2008 20:19:21 -0800 X-ASG-Debug-ID: 1204517987-5499004f0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 4DDE1F03727 for ; Sun, 2 Mar 2008 20:19:48 -0800 (PST) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id f2F3OcpDO8FwuXXn for ; Sun, 02 Mar 2008 20:19:48 -0800 (PST) Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id D2EEA18004EDA; Sun, 2 Mar 2008 22:19:16 -0600 (CST) Message-ID: <47CB7C44.2030508@sandeen.net> Date: Sun, 02 Mar 2008 22:19:16 -0600 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: "Josef 'Jeff' Sipek" CC: Mark Goodwin , Timothy Shimmin , nscott@aconex.com, Russell Cattelan , Barry Naujok , "xfs@oss.sgi.com" X-ASG-Orig-Subj: Re: [REVIEW] Don't make lazy counters default for mkfs Subject: Re: [REVIEW] Don't make lazy counters default for mkfs References: <1204166101.13569.102.camel@edge.scott.net.au> <47C87775.2010007@thebarn.com> <47C89137.3070805@sandeen.net> <47C89303.7070902@thebarn.com> <1204500895.10190.3.camel@edge.scott.net.au> <47CB434B.4040005@sgi.com> <47CB4696.1030304@sgi.com> <20080303011559.GB13879@josefsipek.net> <47CB7702.5080905@sandeen.net> <20080303041409.GC13879@josefsipek.net> In-Reply-To: 
<20080303041409.GC13879@josefsipek.net> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1204517988 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43726 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14751 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Josef 'Jeff' Sipek wrote: > On Sun, Mar 02, 2008 at 09:56:50PM -0600, Eric Sandeen wrote: >> Josef 'Jeff' Sipek wrote: >>> On Mon, Mar 03, 2008 at 11:30:14AM +1100, Mark Goodwin wrote: >>> ... >>>> Maybe I'm missing something, but if we export all the feature bits, >>>> both new and old, then (a) an old mkfs will continue to ignore them, >>>> and (b) future versions of mkfs will have all the information needed, >>>> but will need t be smart about how that information is used. >>> IMHO: >>> >>> 1) mkfs should make a filesystem, the defaults should be conservative (say >>> using features that have been around >1 year) >> I suppose I have to agree, unfortunately that means most competetive >> benchmarks will be using sub-optimal mkfs's, but... > > Benchmarks that use default mkfs options on xfs, but non-default on other > fs? 
most benchmarks I see tune the heck out of "the home team" and leave the rest ;) > If you want, have a simple printf in mkfs that tells the user that he's not > using the latest and greatest features (e.g., lazy-count); that should be > enough to make it obvious that there're better options than the default. eh, nobody reads that stuff :) -Eric From owner-xfs@oss.sgi.com Sun Mar 2 20:34:04 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 20:34:23 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.2 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_52 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m234Y2f3009897 for ; Sun, 2 Mar 2008 20:34:04 -0800 X-ASG-Debug-ID: 1204518871-544f00890000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id C5551F0381C for ; Sun, 2 Mar 2008 20:34:31 -0800 (PST) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id TRh6VWXj2n2TZkkv for ; Sun, 02 Mar 2008 20:34:31 -0800 (PST) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m234E7Am018300; Sun, 2 Mar 2008 23:14:07 -0500 Received: by josefsipek.net (Postfix, from userid 1000) id 9E2D21C00124; Sun, 2 Mar 2008 23:14:09 -0500 (EST) Date: Sun, 2 Mar 2008 23:14:09 -0500 From: "Josef 'Jeff' Sipek" To: Eric Sandeen Cc: Mark Goodwin , Timothy Shimmin , nscott@aconex.com, Russell Cattelan , Barry Naujok , "xfs@oss.sgi.com" X-ASG-Orig-Subj: Re: [REVIEW] Don't make lazy counters default for mkfs Subject: Re: [REVIEW] Don't make lazy counters default for mkfs Message-ID: <20080303041409.GC13879@josefsipek.net> References: 
<1204166101.13569.102.camel@edge.scott.net.au> <47C87775.2010007@thebarn.com> <47C89137.3070805@sandeen.net> <47C89303.7070902@thebarn.com> <1204500895.10190.3.camel@edge.scott.net.au> <47CB434B.4040005@sgi.com> <47CB4696.1030304@sgi.com> <20080303011559.GB13879@josefsipek.net> <47CB7702.5080905@sandeen.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47CB7702.5080905@sandeen.net> User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1204518871 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43726 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14752 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs On Sun, Mar 02, 2008 at 09:56:50PM -0600, Eric Sandeen wrote: > Josef 'Jeff' Sipek wrote: > > On Mon, Mar 03, 2008 at 11:30:14AM +1100, Mark Goodwin wrote: > > ... > >> Maybe I'm missing something, but if we export all the feature bits, > >> both new and old, then (a) an old mkfs will continue to ignore them, > >> and (b) future versions of mkfs will have all the information needed, > >> but will need t be smart about how that information is used. 
> > > > IMHO: > > > > 1) mkfs should make a filesystem, the defaults should be conservative (say > > using features that have been around >1 year) > > I suppose I have to agree, unfortunately that means most competetive > benchmarks will be using sub-optimal mkfs's, but... Benchmarks that use default mkfs options on xfs, but non-default on other fs? If you want, have a simple printf in mkfs that tells the user that he's not using the latest and greatest features (e.g., lazy-count); that should be enough to make it obvious that there're better options than the default. > It's not like we're running mkfs.ext3 here... ;) mkfs; mount will tell > you quickly if there's a problem, won't it. Adding complexity to mkfs > might not make a lot of sense. Exactly :) Josef 'Jeff' Sipek. -- I already backed up the [server] once, I can do it again. - a sysadmin threatening to do more frequent backups From owner-xfs@oss.sgi.com Sun Mar 2 20:48:09 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 20:48:28 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.5 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m234m7m8011242 for ; Sun, 2 Mar 2008 20:48:09 -0800 X-ASG-Debug-ID: 1204519716-549400b00000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.sceen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 18E1CF035CF for ; Sun, 2 Mar 2008 20:48:36 -0800 (PST) Received: from mail.sceen.net (sceen.net [213.41.243.68]) by cuda.sgi.com with ESMTP id WdYDvQQgXa8Or2ms for ; Sun, 02 Mar 2008 20:48:36 -0800 (PST) Received: from localhost (localhost.localdomain [127.0.0.1]) by mail.sceen.net (Postfix) with ESMTP id C61E71E25B; Mon, 3 Mar 2008 05:48:03 +0100 (CET) Received: from 
mail.sceen.net ([127.0.0.1]) by localhost (mail.sceen.net [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 05834-09; Mon, 3 Mar 2008 05:47:56 +0100 (CET) Received: from itchy (cesrt42.asia.info.net [61.14.27.42]) by mail.sceen.net (Postfix) with ESMTP id E2FD01C5DB; Mon, 3 Mar 2008 05:47:47 +0100 (CET) From: Niv Sardi To: markgw@sgi.com Cc: Timothy Shimmin , nscott@aconex.com, Russell Cattelan , Eric Sandeen , Barry Naujok , "xfs@oss.sgi.com" X-ASG-Orig-Subj: Re: [REVIEW] Don't make lazy counters default for mkfs Subject: Re: [REVIEW] Don't make lazy counters default for mkfs References: <1204166101.13569.102.camel@edge.scott.net.au> <47C87775.2010007@thebarn.com> <47C89137.3070805@sandeen.net> <47C89303.7070902@thebarn.com> <1204500895.10190.3.camel@edge.scott.net.au> <47CB434B.4040005@sgi.com> <47CB4696.1030304@sgi.com> Date: Mon, 03 Mar 2008 15:47:39 +1100 In-Reply-To: <47CB4696.1030304@sgi.com> (Mark Goodwin's message of "Mon, 03 Mar 2008 11:30:14 +1100") Message-ID: User-Agent: Gnus/5.110007 (No Gnus v0.7) Emacs/23.0.60 (i486-pc-linux-gnu) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Scanned: by amavisd-new-20030616-p10 (Debian) at sceen.net X-Barracuda-Connect: sceen.net[213.41.243.68] X-Barracuda-Start-Time: 1204519717 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43728 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Status: Clean X-archive-position: 14753 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: xaiki@cxhome.ath.cx 
Precedence: bulk X-list: xfs Mark Goodwin writes: > Timothy Shimmin wrote: >> Nathan Scott wrote: >>> On Fri, 2008-02-29 at 17:19 -0600, Russell Cattelan wrote: >>>>> I thought about that; xfs *could* stick someting in /proc/fs/xfs >>>> with >>>>> supported features or somesuch. >>>>> >>>>> But, the kernel you mkfs under isn't necessarily the one you're >>>> going to >>>>> need to fall back to tomorrow, though... >>>>> >>>>> >>>> True but at least it could make a bit of a intelligent decision. >>>> and maybe a warning for a while about potentially incompatible >>>> flags. >>> >>> Might also be a good idea to require -f to force a mkfs of a filesystem >>> which the kernel doesn't support. >>> >> 974981: mkfs.xfs should warn if it is about to create a fs that >> cannot be mounted >> >> Ivan was wanting this in December last year. Remember, Mark? >> He wanted to know what XFS features the running kernel supported? > > It was worse than that - IIRC, he wanted to know what features are > supported by the XFS kernel module he just installed (this was part > of an Appman upgrade scenario). I thought we rejected that bug ? > >> >> I don't think Dave (dgc) and others were not so keen on it IIRC. > > anyone recall the reasons? Yes, we got to the consensus that having mkfs check for kernel stuff is plain wrong, and there are a load of reasons to that, the most convincing is that you can have no XFS support in the kernel at mkfs time (i.e. module, that'll be loaded only on mount). Others reasons go along the line of: * You could be mkfsing for another box/kernel. 
* We want people to run latest kernels if they run latest xfsprogs =) Cheers, -- Niv Sardi From owner-xfs@oss.sgi.com Sun Mar 2 22:32:18 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 02 Mar 2008 22:32:40 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.2 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m236WG38019256 for ; Sun, 2 Mar 2008 22:32:18 -0800 X-ASG-Debug-ID: 1204525963-218000bf0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from nf-out-0910.google.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 867AEDAEFF3 for ; Sun, 2 Mar 2008 22:32:44 -0800 (PST) Received: from nf-out-0910.google.com (nf-out-0910.google.com [64.233.182.184]) by cuda.sgi.com with ESMTP id JEeYzmeeCHWsRvfn for ; Sun, 02 Mar 2008 22:32:44 -0800 (PST) Received: by nf-out-0910.google.com with SMTP id e27so3486885nfd.42 for ; Sun, 02 Mar 2008 22:32:43 -0800 (PST) Received: by 10.82.112.3 with SMTP id k3mr2019385buc.33.1204525961751; Sun, 02 Mar 2008 22:32:41 -0800 (PST) Received: from teal.hq.k1024.org ( [84.75.117.152]) by mx.google.com with ESMTPS id t10sm9190270muh.13.2008.03.02.22.32.39 (version=TLSv1/SSLv3 cipher=OTHER); Sun, 02 Mar 2008 22:32:40 -0800 (PST) Received: by teal.hq.k1024.org (Postfix, from userid 4004) id E74DC40A06D; Mon, 3 Mar 2008 07:32:37 +0100 (CET) Date: Mon, 3 Mar 2008 07:32:37 +0100 From: Iustin Pop To: Lachlan McIlroy Cc: xfs-oss X-ASG-Orig-Subj: Re: XFS_WANT_CORRUPTED_GOTO report Subject: Re: XFS_WANT_CORRUPTED_GOTO report Message-ID: <20080303063237.GB21775@teal.hq.k1024.org> Mail-Followup-To: Lachlan McIlroy , xfs-oss References: <20080302161507.GC12740@teal.hq.k1024.org> <47CB587E.8020602@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii 
Content-Disposition: inline In-Reply-To: <47CB587E.8020602@sgi.com> X-Linux: This message was written on Linux X-Header: /usr/include gives great headers User-Agent: Mutt/1.5.17+20080114 (2008-01-14) X-Barracuda-Connect: nf-out-0910.google.com[64.233.182.184] X-Barracuda-Start-Time: 1204525965 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43734 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14754 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: iusty@k1024.org Precedence: bulk X-list: xfs On Mon, Mar 03, 2008 at 12:46:38PM +1100, Lachlan McIlroy wrote: >> After mount, the file in question is heavily fragmented (around 1600 >> segments). I'm not sure if this file caused the corruption, but I'm >> almost certain, as no other traffic should have been at that time. > The file being written to (that caused the panic) has unwritten extents > and we were trying to convert the extents from unwritten to real after > writing to them. These XFS_WANT_CORRUPTED_GOTO bugs often occur with > extent tree corruption so this is not surprising. Could we get output > from xfs_bmap -v on this file? http://www.k1024.org/iusty/plains.bmap >> I also have a metadump (run before recovery) and a full copy of the >> filesystem if it's useful. > Can we get a copy of that metadump? I don't hold high hopes for it > though - the filesystem can be inconsistent until the log is replayed > but after the log was replayed the problem was gone. 
I don't suppose > you have a copy of the log? The metadump is here http://www.k1024.org/iusty/plains.metadump.bz2 (~80MB), done with default options. Warning, the server is somewhat slow :) The copy of the log, done with xfs_logprint -t, is here http://www.k1024.org/iusty/plains.logprint The inode of the file in question is 96424401. Let me know if I can give more information. thanks, iustin From owner-xfs@oss.sgi.com Mon Mar 3 05:15:40 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 03 Mar 2008 05:15:47 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.2 required=5.0 tests=AWL,BAYES_50,HTML_MESSAGE autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m23DFd4D026989 for ; Mon, 3 Mar 2008 05:15:40 -0800 X-ASG-Debug-ID: 1204550167-3b7b01dc0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from wf-out-1314.google.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 5DB06639D82 for ; Mon, 3 Mar 2008 05:16:07 -0800 (PST) Received: from wf-out-1314.google.com (wf-out-1314.google.com [209.85.200.169]) by cuda.sgi.com with ESMTP id pihDvcLYTeQOkc1M for ; Mon, 03 Mar 2008 05:16:07 -0800 (PST) Received: by wf-out-1314.google.com with SMTP id 29so62wff.32 for ; Mon, 03 Mar 2008 05:16:07 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to:subject:mime-version:content-type; bh=55aeZUuSrIo1HDzWTg4sTnjYsfaHlXyUmvtYPxzQmRQ=; b=JWyS25N5m/ehrmhMrFCCXBtoq0JQ9Siq0HOGP16sDkUC+3G++tyo3gaY7L0nKAN6yN9JZxH7c++mlnP5WVzHWTGbf3kYqo/SMwRNeEx5P91sRE2YHg/4elotajwpb0VI1nxPx0qc+KXU2PHM1/UiKlSF1iP0aRR+hw5QKXyYsOA= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:mime-version:content-type; 
b=OIXSRFLQE/ulDppbxYofxBNtGWFeDrCs/IHqGFK9N/eLkOXKQKmom74gDKp/Y7EFDYIFClEJ6H9z48puU77VFnNx52K+lTBd4/46HTSFjhTanu5BTOt0z5Ff61eJIeEa97X3ebmKY39pZn2gQMIB6CsSX2aT6SHPlFcruPcVGWc= Received: by 10.142.128.6 with SMTP id a6mr9027579wfd.135.1204550167673; Mon, 03 Mar 2008 05:16:07 -0800 (PST) Received: by 10.142.180.7 with HTTP; Mon, 3 Mar 2008 05:16:07 -0800 (PST) Message-ID: Date: Mon, 3 Mar 2008 21:16:07 +0800 From: cgxu To: xfs@oss.sgi.com, xfscn@googlegroups.com X-ASG-Orig-Subj: Will this functon be used? Subject: Will this functon be used? MIME-Version: 1.0 X-Barracuda-Connect: wf-out-1314.google.com[209.85.200.169] X-Barracuda-Start-Time: 1204550168 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0027 1.0000 -2.0031 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -1.06 X-Barracuda-Spam-Status: No, SCORE=-1.06 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests=HTML_10_20, HTML_MESSAGE X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43758 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.00 HTML_MESSAGE BODY: HTML included in message 0.94 HTML_10_20 BODY: Message is 10% to 20% HTML X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Type: text/plain Content-Disposition: inline Content-Transfer-Encoding: 7bit Content-length: 408 X-archive-position: 14756 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cgxu.gg@gmail.com Precedence: bulk X-list: xfs Hi, I found a function that has never been used in the shortform directory format. It is named xfs_dir2_sf_addname_hard(). The purpose of this function is to insert a new entry into a hole between existing entries. When you remove an entry in shortform format, the residual entries move forward, so a hole will never be created among the shortform entries. 
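[Editorial aside: the compaction behaviour described above can be modelled in a few lines. This is illustrative Python only, not the actual XFS shortform code; it just shows why removal that shifts the remaining entries forward means the "insert into an existing hole" case never arises.]

```python
# Toy model of a packed ("shortform") entry table. Removal compacts the
# list, so no hole is ever left behind; adding an entry therefore only
# ever needs the simple append path, never a fill-a-hole path.

def sf_remove(entries, name):
    """Remove an entry; residual entries move forward (compaction)."""
    return [e for e in entries if e != name]

def sf_add(entries, name):
    """Add a new entry; a packed list never offers a hole to reuse."""
    return entries + [name]

dirents = ["a", "b", "c"]
dirents = sf_remove(dirents, "b")
assert dirents == ["a", "c"]          # no gap where "b" used to be
dirents = sf_add(dirents, "d")
assert dirents == ["a", "c", "d"]     # append suffices; holes cannot exist
```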
Best Regards Kevin Xu [[HTML alternate version deleted]] From owner-xfs@oss.sgi.com Mon Mar 3 05:14:13 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 03 Mar 2008 05:14:32 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=BAYES_00,SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m23DEC1Q026610 for ; Mon, 3 Mar 2008 05:14:13 -0800 X-ASG-Debug-ID: 1204550079-303103e30000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from nf-out-0910.google.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 43A14124C97A for ; Mon, 3 Mar 2008 05:14:40 -0800 (PST) Received: from nf-out-0910.google.com (nf-out-0910.google.com [64.233.182.184]) by cuda.sgi.com with ESMTP id JLqMhCeqSVxEm8QX for ; Mon, 03 Mar 2008 05:14:40 -0800 (PST) Received: by nf-out-0910.google.com with SMTP id e27so3601236nfd.42 for ; Mon, 03 Mar 2008 05:14:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:sender:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition:x-google-sender-auth; bh=TLMCxsWBImz6zXuYu7yQHgWjepUZhw7n720+TX49dUg=; b=a8eL30nViynHmvxGF9m3XM2+IKsfsrQoLWTEqMSuhmvjQF8K5egRuMmdTZIv57dcC24p90eHC11p8I/r9CyHAIbZ/2fkjQ+OGcp6TYh9Mf520DWYtCHBRwG43pZLIPnJvKPWhnX7qMxEyV1DxbXFnSQTd196Ue8aPFRocB8XkPc= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:sender:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition:x-google-sender-auth; b=Vv9HRqWSR74AXxHP6YjFesfc8s7SmG65KL3E8wbUmBneH2F+ekTRzSwT/gasQMnCHbTsUPx+mp3sU2Cx8pISSMbI8S66fXtWHuBojh9JjJyfs2lv/AGzR4kDWTP14U2/UypYjgUgtd59JsqnfnTpXKbPJ5gkHQvv4oriqjlzvts= Received: by 10.78.81.20 
with SMTP id e20mr18747462hub.19.1204550078638; Mon, 03 Mar 2008 05:14:38 -0800 (PST) Received: by 10.78.184.20 with HTTP; Mon, 3 Mar 2008 05:14:38 -0800 (PST) Message-ID: Date: Mon, 3 Mar 2008 05:14:38 -0800 From: "Jeff Breidenbach" To: xfs@oss.sgi.com X-ASG-Orig-Subj: disappearing xfs partition Subject: disappearing xfs partition MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Google-Sender-Auth: 573d54696549fa11 X-Barracuda-Connect: nf-out-0910.google.com[64.233.182.184] X-Barracuda-Start-Time: 1204550081 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43762 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14755 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeff@jab.org Precedence: bulk X-list: xfs I have no special reason to believe this is xfs specific, but is it common for partitions to vanish from the face of the earth, at least as far as mount is concerned? I usually mount by UUID, but here's a sightly interesting transcript by drive letters. If this is not xfs relevant, any suggestions where to start digging? This is a standard format XFS partition, created with an xfsprogs that I compiled a couple weeks ago from a source code download. Note that other xfs partitions are mounting fine on the same machine. 
-- after a fresh reboot --- # cfdisk /dev/sde Name Flags Part Type FS Type [Label] Size (MB) ------------------------------------------------------------------------------ sde1 Primary Linux XFS 1000202.28 # mount /dev/sde1 /mnt mount: special device /dev/sde1 does not exist # xfs_check /dev/sde1 /dev/sde1: No such file or directory fatal error -- couldn't initialize XFS library # fdisk /dev/sde The number of cylinders for this disk is set to 121601. There is nothing wrong with that, but this is larger than 1024, and could in certain setups cause problems with: 1) software that runs at boot time (e.g., old versions of LILO) 2) booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK) Command (m for help): p Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sde1 1 121601 976760001 83 Linux # cat /proc/version Linux version 2.6.24-8-server (buildd@yellow) (gcc version 4.2.3 (Ubuntu 4.2.3-1ubuntu2)) #1 SMP Thu Feb 14 20:42:20 UTC 2008 # dpkg -l xfsprogs Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Installed/Config-f/Unpacked/Failed-cfg/Half-inst/t-aWait/T-pend |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad) ||/ Name Version Description +++-==============-==============-============================================ ii xfsprogs 2.9.5-1 Utilities for managing the XFS filesystem From owner-xfs@oss.sgi.com Mon Mar 3 05:57:53 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 03 Mar 2008 05:58:12 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.1 required=5.0 tests=AWL,BAYES_00, SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id 
m23Dvq5k030763 for ; Mon, 3 Mar 2008 05:57:52 -0800 X-ASG-Debug-ID: 1204552696-0bce004e0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from lucidpixels.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 09B21642436 for ; Mon, 3 Mar 2008 05:58:16 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by cuda.sgi.com with ESMTP id LMZlQHMGhmfUtkWA for ; Mon, 03 Mar 2008 05:58:16 -0800 (PST) Received: by lucidpixels.com (Postfix, from userid 1001) id 5D00D1C00026B; Mon, 3 Mar 2008 08:58:16 -0500 (EST) Date: Mon, 3 Mar 2008 08:58:16 -0500 (EST) From: Justin Piszcz To: Jeff Breidenbach cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: disappearing xfs partition Subject: Re: disappearing xfs partition In-Reply-To: Message-ID: References: User-Agent: Alpine 1.00 (DEB 882 2007-12-20) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Barracuda-Connect: lucidpixels.com[75.144.35.66] X-Barracuda-Start-Time: 1204552701 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43765 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14757 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Looks like a udev problem or something is not making /dev/sde1? Have you tried a manual mknod of the device? 
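[Editorial aside: the device numbers behind the mknod suggestion can be computed rather than guessed. This Python sketch assumes the conventional Linux layout for sd* disks (block major 8, 16 minors per disk), which matches the "8 64 ... sde" /proc/partitions line and the "mknod /dev/sde1 b 8 65" attempt that appear later in this thread; always verify against /proc/partitions before creating a node by hand.]

```python
# sd disks share block major 8 and get 16 minors each: sda is (8, 0),
# sdb is (8, 16), ..., sde is (8, 64). Partition N of a disk is the
# disk's minor plus N, so sde1 is (8, 65).

SD_MAJOR = 8
MINORS_PER_DISK = 16

def sd_device_numbers(disk_letter, partition=0):
    """Return (major, minor) for sdX (partition=0) or sdXN."""
    index = ord(disk_letter) - ord("a")          # sda=0, ..., sde=4
    return SD_MAJOR, index * MINORS_PER_DISK + partition

major, minor = sd_device_numbers("e", 1)
print(f"mknod /dev/sde1 b {major} {minor}")
```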
On Mon, 3 Mar 2008, Jeff Breidenbach wrote: > I have no special reason to believe this is xfs specific, but > is it common for partitions to vanish from the face of the > earth, at least as far as mount is concerned? I usually mount > by UUID, but here's a sightly interesting transcript by drive > letters. If this is not xfs relevant, any suggestions where > to start digging? > > This is a standard format XFS partition, created with an > xfsprogs that I compiled a couple weeks ago from a source > code download. Note that other xfs partitions are mounting > fine on the same machine. > > > -- after a fresh reboot --- > > # cfdisk /dev/sde > > Name Flags Part Type FS Type [Label] Size (MB) > ------------------------------------------------------------------------------ > sde1 Primary Linux XFS 1000202.28 > > # mount /dev/sde1 /mnt > mount: special device /dev/sde1 does not exist > > # xfs_check /dev/sde1 > /dev/sde1: No such file or directory > > fatal error -- couldn't initialize XFS library > > # fdisk /dev/sde > > The number of cylinders for this disk is set to 121601. 
> There is nothing wrong with that, but this is larger than 1024, > and could in certain setups cause problems with: > 1) software that runs at boot time (e.g., old versions of LILO) > 2) booting and partitioning software from other OSs > (e.g., DOS FDISK, OS/2 FDISK) > > Command (m for help): p > > Disk /dev/sde: 1000.2 GB, 1000204886016 bytes > 255 heads, 63 sectors/track, 121601 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes > Disk identifier: 0x00000000 > > Device Boot Start End Blocks Id System > /dev/sde1 1 121601 976760001 83 Linux > > # cat /proc/version > Linux version 2.6.24-8-server (buildd@yellow) (gcc version 4.2.3 > (Ubuntu 4.2.3-1ubuntu2)) #1 SMP Thu Feb 14 20:42:20 UTC 2008 > > # dpkg -l xfsprogs > Desired=Unknown/Install/Remove/Purge/Hold > | Status=Not/Installed/Config-f/Unpacked/Failed-cfg/Half-inst/t-aWait/T-pend > |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad) > ||/ Name Version Description > +++-==============-==============-============================================ > ii xfsprogs 2.9.5-1 Utilities for managing the XFS filesystem > > From owner-xfs@oss.sgi.com Mon Mar 3 08:19:43 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 03 Mar 2008 08:20:04 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00, SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m23GJgDL013174 for ; Mon, 3 Mar 2008 08:19:43 -0800 X-ASG-Debug-ID: 1204561207-38fb00410000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 5E6DBF084D5 for ; Mon, 3 Mar 2008 08:20:08 -0800 (PST) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id 
GucNplOQSdaDSxCU for ; Mon, 03 Mar 2008 08:20:08 -0800 (PST) Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id EA35418004EDB; Mon, 3 Mar 2008 10:20:06 -0600 (CST) Message-ID: <47CC2536.7080205@sandeen.net> Date: Mon, 03 Mar 2008 10:20:06 -0600 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: Justin Piszcz CC: Jeff Breidenbach , xfs@oss.sgi.com X-ASG-Orig-Subj: Re: disappearing xfs partition Subject: Re: disappearing xfs partition References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1204561211 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43774 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14758 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Justin Piszcz wrote: > Looks like a udev problem or something is not making /dev/sde1? Have you > tried a manual mknod of the device? but... it is not an xfs problem. :) (also check /proc/partitions for sde1...) 
-Eric From owner-xfs@oss.sgi.com Mon Mar 3 09:25:17 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 03 Mar 2008 09:25:36 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=BAYES_00,SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m23HPG3U017651 for ; Mon, 3 Mar 2008 09:25:17 -0800 X-ASG-Debug-ID: 1204565144-2adc00150000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from nf-out-0910.google.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id CDE2F644095 for ; Mon, 3 Mar 2008 09:25:44 -0800 (PST) Received: from nf-out-0910.google.com (nf-out-0910.google.com [64.233.182.188]) by cuda.sgi.com with ESMTP id 8BF3lonYvNhMti05 for ; Mon, 03 Mar 2008 09:25:44 -0800 (PST) Received: by nf-out-0910.google.com with SMTP id e27so95784nfd.42 for ; Mon, 03 Mar 2008 09:25:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:sender:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references:x-google-sender-auth; bh=DIMBF5R6+iTL3ZZDtwdp45IjxgPfLj+6jBmTkLSqfps=; b=c9fbdsZSO5cWUle+xrUX6fkx+lCbkF7+Ey75SySVlJ0LoFyL7bpZR6f4LcqwpvNXV14ywCs1nsAQ15QvW3KGyPiTnsZ16kUEH6VxoOl5Gy6GSYqPT5b/x6G88bUJdWd7VUCPxILKY6EY54Zeyf7RfVSdox0FDZ55mSj+8ZUFkuo= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:sender:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references:x-google-sender-auth; b=nD7BD8jLtva7d0u2qAvLgb+NMJu70VYYqUXDnmS3O4i/eh+oz/dAfDoVmbGVk/SR+eyRWOh21WoxjMu8RnFssmezcR1bhsJmuuq3B/QneWkClDcmac0N8WqcgGYfDda5p3AGhrrtzw+Qhx6/TMFnAqtLgmS+SwVugR2foxJidUA= Received: by 10.78.77.9 
with SMTP id z9mr298907hua.45.1204563668653; Mon, 03 Mar 2008 09:01:08 -0800 (PST) Received: by 10.78.184.20 with HTTP; Mon, 3 Mar 2008 09:01:08 -0800 (PST) Message-ID: Date: Mon, 3 Mar 2008 09:01:08 -0800 From: "Jeff Breidenbach" To: "Eric Sandeen" X-ASG-Orig-Subj: Re: disappearing xfs partition Subject: Re: disappearing xfs partition Cc: "Justin Piszcz" , xfs@oss.sgi.com In-Reply-To: <47CC2536.7080205@sandeen.net> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline References: <47CC2536.7080205@sandeen.net> X-Google-Sender-Auth: 632bb0dff25a8e86 X-Barracuda-Connect: nf-out-0910.google.com[64.233.182.188] X-Barracuda-Start-Time: 1204565145 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43779 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14759 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeff@jab.org Precedence: bulk X-list: xfs On Mon, Mar 3, 2008 at 8:20 AM, Eric Sandeen wrote: > Justin Piszcz wrote: > > Looks like a udev problem or something is not making /dev/sde1? Have you > > tried a manual mknod of the device? # mknod /dev/sde1 b 8 65 # mount /dev/sde1 /data2 mount: /dev/sde1 is not a valid block device > but... it is not an xfs problem. :) Sorry for posting. > (also check /proc/partitions for sde1...) Not present. 
# grep sde /proc/partitions 8 64 976762584 sde From owner-xfs@oss.sgi.com Mon Mar 3 17:29:30 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 03 Mar 2008 17:30:11 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.1 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m241TLRF028264 for ; Mon, 3 Mar 2008 17:29:30 -0800 X-ASG-Debug-ID: 1204594168-069400050000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from toro.qb3.berkeley.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 19969647FC4 for ; Mon, 3 Mar 2008 17:29:28 -0800 (PST) Received: from toro.qb3.berkeley.edu (toro.QB3.Berkeley.EDU [169.229.244.93]) by cuda.sgi.com with ESMTP id 1OcC32XEhA5hUHL3 for ; Mon, 03 Mar 2008 17:29:28 -0800 (PST) Received: by toro.qb3.berkeley.edu (Postfix, from userid 14019) id 760B12393B1; Mon, 3 Mar 2008 17:29:27 -0800 (PST) Received: from localhost (localhost [127.0.0.1]) by toro.qb3.berkeley.edu (Postfix) with ESMTP id 6F8232393AE; Mon, 3 Mar 2008 17:29:27 -0800 (PST) Date: Mon, 3 Mar 2008 17:29:27 -0800 (PST) From: slaton X-X-Sender: slaton@toro.qb3.berkeley.edu To: Barry Naujok cc: xfs-oss X-ASG-Orig-Subj: Re: Linux XFS filesystem corruption (XFS_WANT_CORRUPTED_GOTO) Subject: Re: Linux XFS filesystem corruption (XFS_WANT_CORRUPTED_GOTO) In-Reply-To: Message-ID: References: <47C343D1.30304@sandeen.net> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Barracuda-Connect: toro.QB3.Berkeley.EDU[169.229.244.93] X-Barracuda-Start-Time: 1204594171 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 
KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43811 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14760 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: slaton@berkeley.edu Precedence: bulk X-list: xfs Barry, I ran xfs_metadump (with -g -o -w options) on the partition and in addition to the file output this was written to stderr: xfs_metadump: suspicious count 22 in bmap extent 9 in dir2 ino 940064492 xfs_metadump: suspicious count 21 in bmap extent 8 in dir2 ino 1348807890 xfs_metadump: suspicious count 29 in bmap extent 9 in dir2 ino 2826081099 xfs_metadump: suspicious count 23 in bmap extent 54 in dir2 ino 3093231364 xfs_metadump: suspicious count 106 in bmap extent 4 in dir2 ino 3505884782 Should I go ahead and do a mount/umount (to replay log) and then xfs_repair, or would another course of action be recommended, given these potential problem inodes? thanks slaton Slaton Lipscomb Nogales Lab, Howard Hughes Medical Institute http://cryoem.berkeley.edu On Thu, 28 Feb 2008, Barry Naujok wrote: > On Thu, 28 Feb 2008 09:44:04 +1100, slaton wrote: > > > Hi, > > > > I'm still hoping for some help with this. Is any more information needed > > in addition to the ksymoops output previously posted? > > > > In particular i'd like to know if just remounting the filesystem (to > > replay the journal), then unmounting and running xfs_repair is the best > > course of action. In addition, i'd like to know what recommended > > kernel/xfsprogs versions to use for best results. 
> I would get xfsprogs 2.9.4 (2.9.6 is not a good version with your kernel):
> ftp://oss.sgi.com/projects/xfs/previous/cmd_tars/xfsprogs_2.9.4-1.tar.gz
>
> To be on the safe side, either make an entire copy of your drive to
> another device, or run "xfs_metadump -o /dev/sda1" to capture a metadata
> image (no file data) of your filesystem.
>
> Then run xfs_repair (a mount/unmount may be required if the log is dirty).
>
> If the filesystem is in a bad state after the repair (e.g. everything in
> lost+found), email the xfs_repair log and request further advice.
>
> Regards,
> Barry.
>
> > thanks
> > slaton
> >
> > Slaton Lipscomb
> > Nogales Lab, Howard Hughes Medical Institute
> > http://cryoem.berkeley.edu
> >
> > On Mon, 25 Feb 2008, slaton wrote:
> >
> > > Thanks for the reply.
> > >
> > > > Are you hitting http://oss.sgi.com/projects/xfs/faq.html#dir2 ?
> > >
> > > Presumably not - I'm using 2.6.17.11, and that information indicates
> > > the bug was fixed in 2.6.17.7.
> > >
> > > I've attached the output from running ksymoops on messages.1. The
> > > first crash/trace (Feb 21 19:xx) corresponds to the original XFS
> > > event; the second (Feb 22 15:xx) is the system going down when I
> > > tried to unmount the volume.
> > >
> > > Here are the additional syslog messages corresponding to the Feb 22
> > > 15:xx crash.
> > >
> > > Feb 22 15:47:13 qln01 kernel: grsec: From 10.0.2.93: unmount of
> > > /dev/sda1 by /bin/umount[umount:18604] uid/euid:0/0 gid/egid:0/0,
> > > parent /bin/bash[bash:31972] uid/euid:0/0 gid/egid:0/0
> > > Feb 22 15:47:14 qln01 kernel: xfs_force_shutdown(sda1,0x1) called from
> > > line 338 of file fs/xfs/xfs_rw.c. Return address = 0xffffffff88173ce4
> > > Feb 22 15:47:14 qln01 kernel: xfs_force_shutdown(sda1,0x1) called from
> > > line 338 of file fs/xfs/xfs_rw.c. Return address = 0xffffffff88173ce4
> > > Feb 22 15:47:28 qln01 kernel: BUG: soft lockup detected on CPU#0!
> > > thanks
> > > slaton

From owner-xfs@oss.sgi.com Mon Mar 3 17:42:37 2008
Date: Mon, 3 Mar 2008 17:43:03 -0800 (PST)
From: slaton <slaton@berkeley.edu>
To: Barry Naujok
Cc: xfs-oss
Subject: Re: Linux XFS filesystem corruption (XFS_WANT_CORRUPTED_GOTO)
Unfortunately, mounting triggered another XFS_WANT_CORRUPTED_GOTO error:

XFS mounting filesystem sda1
Starting XFS recovery on filesystem: sda1 (logdev: internal)
XFS internal error XFS_WANT_CORRUPTED_GOTO at line 1546 of file
fs/xfs/xfs_alloc.c.  Caller 0xffffffff882c3be6

Call Trace:
 [] :xfs:xfs_free_ag_extent+0x18a/0x690
 [] :xfs:xfs_free_extent+0xa9/0xc9
 [] :xfs:xlog_recover_process_efi+0x117/0x149
 [] :xfs:xlog_recover_process_efis+0x46/0x6f
 [] :xfs:xlog_recover_finish+0x16/0x98
 [] :xfs:xfs_log_mount_finish+0x19/0x1c
 [] :xfs:xfs_mountfs+0x892/0x99a
 [] :xfs:kmem_alloc+0x67/0xcd
 [] :xfs:kmem_zalloc+0x9/0x21
 [] :xfs:xfs_mru_cache_create+0x127/0x188
 [] :xfs:xfs_mount+0x333/0x3b4
 [] :xfs:xfs_fs_fill_super+0x0/0x1ab
 [] :xfs:xfs_fs_fill_super+0x7e/0x1ab
 [] __down_write_nested+0x12/0x9a
 [] get_filesystem+0x12/0x35
 [] sget+0x379/0x38e
 [] set_bdev_super+0x0/0xf
 [] get_sb_bdev+0x11d/0x168
 [] vfs_kern_mount+0x94/0x124
 [] do_kern_mount+0x3d/0xee
 [] do_mount+0x6e5/0x738
 [] handle_mm_fault+0x385/0x789
 [] __up_read+0x10/0x8a
 [] do_page_fault+0x453/0x7a3
 [] handle_mm_fault+0x3ff/0x789
 [] zone_statistics+0x41/0x63
 [] __alloc_pages+0x6a/0x2d4
 [] sys_mount+0x8b/0xce
 [] system_call+0x7e/0x83
Ending XFS recovery on filesystem: sda1 (logdev: internal)

I haven't tried to unmount or do anything else yet. How should I proceed?
Just to reiterate, I'm currently using kernel 2.6.23.16 and xfsprogs 2.9.4-1.
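[The capture-then-repair sequence Barry recommends earlier in the thread
(metadump before log replay, mount/unmount, metadump again, then repair)
can be sketched as a dry-run script. The device path, dump directory, and
mount point are placeholders, not values from the thread; the `run`
wrapper only prints each step.]

```shell
#!/bin/sh
# Dry-run sketch of the capture-then-repair sequence from this thread.
# DEV and DUMPDIR are hypothetical placeholders -- substitute your own.
DEV=/dev/sda1
DUMPDIR=/root/xfs-dumps

# Print each step instead of executing it; drop the 'run' prefix to do
# this for real, on a system where a mistake is acceptable.
run() { echo "+ $*"; }

run mkdir -p "$DUMPDIR"
run xfs_metadump -g -o -w "$DEV" "$DUMPDIR/pre-replay.metadump"   # capture before log replay
run mount "$DEV" /mnt                                             # mounting replays the dirty log
run umount /mnt
run xfs_metadump -g -o -w "$DEV" "$DUMPDIR/post-replay.metadump"  # capture again after replay
run xfs_repair "$DEV"                                             # repair only after both captures
```

A captured metadump can later be expanded back into a filesystem image
with xfs_mdrestore (shipped alongside xfs_metadump in the same xfsprogs
releases), which makes it possible to rehearse a repair offline.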
thanks
slaton

Slaton Lipscomb
Nogales Lab, Howard Hughes Medical Institute
http://cryoem.berkeley.edu

On Tue, 4 Mar 2008, Barry Naujok wrote:

> On Tue, 04 Mar 2008 12:29:27 +1100, slaton wrote:
>
> > Barry,
> >
> > I ran xfs_metadump (with the -g -o -w options) on the partition, and
> > in addition to the file output this was written to stderr:
> >
> > xfs_metadump: suspicious count 22 in bmap extent 9 in dir2 ino 940064492
> > xfs_metadump: suspicious count 21 in bmap extent 8 in dir2 ino 1348807890
> > xfs_metadump: suspicious count 29 in bmap extent 9 in dir2 ino 2826081099
> > xfs_metadump: suspicious count 23 in bmap extent 54 in dir2 ino 3093231364
> > xfs_metadump: suspicious count 106 in bmap extent 4 in dir2 ino 3505884782
> >
> > Should I go ahead and do a mount/umount (to replay the log) and then
> > xfs_repair, or would another course of action be recommended, given
> > these potential problem inodes?
>
> Depending on the size of the directories, these numbers are probably fine.
> I believe a mount/unmount/repair is the best course of action from here.
>
> To be extra safe, run another metadump after the mount/unmount, before
> running repair.
>
> Barry.
>
> > thanks
> > slaton
> >
> > Slaton Lipscomb
> > Nogales Lab, Howard Hughes Medical Institute
> > http://cryoem.berkeley.edu
> >
> > On Thu, 28 Feb 2008, Barry Naujok wrote:
> >
> > > On Thu, 28 Feb 2008 09:44:04 +1100, slaton wrote:
> > >
> > > > Hi,
> > > >
> > > > I'm still hoping for some help with this. Is any more information
> > > > needed in addition to the ksymoops output previously posted?
> > > >
> > > > In particular I'd like to know if just remounting the filesystem
> > > > (to replay the journal), then unmounting and running xfs_repair is
> > > > the best course of action. I'd also like to know which
> > > > kernel/xfsprogs versions are recommended for best results.
> > > [...]

From owner-xfs@oss.sgi.com Mon Mar 3 17:35:25 2008
Date: Tue, 04 Mar 2008 12:36:57 +1100
From: "Barry Naujok" <bnaujok@sgi.com>
Organization: SGI
To: slaton
Cc: xfs-oss
Subject: Re: Linux XFS filesystem corruption (XFS_WANT_CORRUPTED_GOTO)

On Tue, 04 Mar 2008 12:29:27 +1100, slaton wrote:

> Barry,
>
> I ran xfs_metadump (with the -g -o -w options) on the partition, and in
> addition to the file output this was written to stderr:
>
> xfs_metadump: suspicious count 22 in bmap extent 9 in dir2 ino 940064492
> xfs_metadump: suspicious count 21 in bmap extent 8 in dir2 ino 1348807890
> xfs_metadump: suspicious count 29 in bmap extent 9 in dir2 ino 2826081099
> xfs_metadump: suspicious count 23 in bmap extent 54 in dir2 ino 3093231364
> xfs_metadump: suspicious count 106 in bmap extent 4 in dir2 ino 3505884782
>
> Should I go ahead and do a mount/umount (to replay the log) and then
> xfs_repair, or would another course of action be recommended, given
> these potential problem inodes?

Depending on the size of the directories, these numbers are probably fine.
I believe a mount/unmount/repair is the best course of action from here.

To be extra safe, run another metadump after the mount/unmount, before
running repair.

Barry.

> thanks
> slaton
>
> Slaton Lipscomb
> Nogales Lab, Howard Hughes Medical Institute
> http://cryoem.berkeley.edu
>
> On Thu, 28 Feb 2008, Barry Naujok wrote:
>
>> [...]
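[Barry's suggestion to capture metadumps before repairing also allows the
repair to be rehearsed offline first. This dry-run sketch assumes the
xfsprogs in use ships xfs_mdrestore alongside xfs_metadump; the filenames
are hypothetical, and the `run` wrapper only prints each step.]

```shell
#!/bin/sh
# Dry-run sketch: rehearse xfs_repair against a restored metadump image
# before touching the real device. Filenames are hypothetical.
run() { echo "+ $*"; }   # print steps instead of executing them

run xfs_mdrestore pre-replay.metadump rehearsal.img   # expand the metadump into a fs image
run xfs_repair -n -f rehearsal.img                    # -n: no-modify, report-only pass
run xfs_repair -f rehearsal.img                       # -f: operate on a regular file, not a device
```

If the rehearsal leaves the image in good shape (nothing unexpected in
lost+found), repairing the real device is a much less nervous step.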
From owner-xfs@oss.sgi.com Mon Mar 3 19:02:31 2008
Date: Mon, 3 Mar 2008 18:56:46 -0800
From: "Jeff Breidenbach" <jeff@jab.org>
To: "Eric Sandeen", "Jeff Marshall", xfs@oss.sgi.com
Subject: Re: disappearing xfs partition

Following up to close out the topic: I got this comment from Eric.

> So parted has this bad habit of making partition tables that cannot
> actually be read from the disk, and poking the supposed values directly
> into the kernel. Then things work fine until reboot, at which time the
> partition table cannot be properly read. Usually this turns into a
> truncated size due to an overflow....

I'd been using cfdisk and not parted, but that's apparently what happened.
Rewriting the partition table with cfdisk fixed everything and allowed the
partition to mount. At least for this boot.

Thanks all.

From owner-xfs@oss.sgi.com Mon Mar 3 21:37:36 2008
Date: Tue, 04 Mar 2008 16:36:59 +1100
From: Niv Sardi
To: xfs@oss.sgi.com
Cc: xfs-dev@sgi.com
Subject: [REVIEW] mkfs.xfs man page needs the default settings updated.
Manpages update for the new defaults, please review, I believe I got'em all.

Content-Disposition: inline; filename=0001-Update-mkfs-manpage-for-new-defaults.patch

From 71011d480d52aaefe99ef252dfff513bf77f209e Mon Sep 17 00:00:00 2001
From: Niv Sardi
Date: Fri, 22 Feb 2008 16:48:32 +1100
Subject: [PATCH] Update mkfs manpage for new defaults:

log, attr and inodes v2,
Drop the ability to turn unwritten extents off completly,
reduce imaxpct for big filesystems, less AGs for single disks configs.
---
 xfsprogs/man/man8/mkfs.xfs.8 |   44 +++++------------------------------------
 1 files changed, 6 insertions(+), 38 deletions(-)

diff --git a/xfsprogs/man/man8/mkfs.xfs.8 b/xfsprogs/man/man8/mkfs.xfs.8
index b6024c3..f9a89af 100644
--- a/xfsprogs/man/man8/mkfs.xfs.8
+++ b/xfsprogs/man/man8/mkfs.xfs.8
@@ -304,7 +304,8 @@ bits.
 This specifies the maximum percentage of space in the filesystem that
 can be allocated to inodes. The default
 .I value
-is 25%. Setting the
+is 25% for filesystems under 1TB, 5% for filesystems under 50TB and 1%
+for filesystems over 50TB. Setting the
 .I value
 to 0 means that essentially all of the filesystem can
 become inode blocks.
@@ -327,16 +328,10 @@ that does not have the inode alignment feature
 .BI attr[= value ]
 This is used to specify the version of extended attribute inline allocation
 policy to be used.
-By default, this is zero. Once extended attributes are used for the
+By default, this is 2. Once extended attributes are used for the
 first time, the version will be set to either one or two.
 The current version (two) uses a more efficient algorithm for managing
-the available inline inode space than version one does, however, for
-backward compatibility reasons (and in the absence of the
-.B attr=2
-mkfs option, or the
-.B attr2
-mount option), version one will be selected
-by default when attributes are first used on a filesystem.
+the available inline inode space than version one does.
 .RE
 .TP
 .BI \-l " log_section_options"
@@ -389,15 +384,9 @@ and directory block size, the minimum log size is larger than 512 blocks.
 .BI version= value
 This specifies the version of the log. The
 .I value
-is either 1 or 2. Specifying
+is either 1 or 2 (the default is 2).
 .B version=2
-enables the
-.B sunit
-suboption, and allows the logbsize to be increased beyond 32K.
-Version 2 logs are automatically selected if a log stripe unit
-is specified. See
-.BR sunit " and " su
-suboptions, below.
+allows the logbsize to be increased beyond 32K.
 .TP
 .BI sunit= value
 This specifies the alignment to be used for log writes. The
@@ -430,27 +419,6 @@ suffixes). This value must be a multiple of the filesystem block size.
 Version 2 logs are automatically selected if the log
 .B su
 suboption is specified.
-.TP
-.BI lazy-count= value
-This changes the method of logging various persistent counters
-in the superblock. Under metadata intensive workloads, these
-counters are updated and logged frequently enough that the superblock
-updates become a serialisation point in the filesystem. The
-.I value
-can be either 0 or 1.
-.IP
-With
-.BR lazy-count=1 ,
-the superblock is not modified or logged on every change of the
-persistent counters. Instead, enough information is kept in
-other parts of the filesystem to be able to maintain the persistent
-counter values without needed to keep them in the superblock.
-This gives significant improvements in performance on some configurations.
-The default
-.I value
-is 0 (off) so you must specify
-.B lazy-count=1
-if you want to make use of this feature.
 .RE
 .TP
 .BI \-n " naming_options"
--
1.5.4.1

--
Niv Sardi

From owner-xfs@oss.sgi.com Tue Mar 4 07:38:31 2008
Message-ID: <47CD6D0E.3090301@sandeen.net>
Date: Tue, 04 Mar 2008 09:38:54 -0600
From: Eric Sandeen <sandeen@sandeen.net>
To: Niv Sardi
Cc: xfs@oss.sgi.com, xfs-dev@sgi.com
Subject: Re: [REVIEW] mkfs.xfs man page needs the default settings updated.

Niv Sardi wrote:
> Manpages update for the new defaults, please review, I believe I got'em all.

(hmm, attachments make it slightly trickier to reply inline...)

> From 71011d480d52aaefe99ef252dfff513bf77f209e Mon Sep 17 00:00:00 2001
> From: Niv Sardi
> Date: Fri, 22 Feb 2008 16:48:32 +1100
> Subject: [PATCH] Update mkfs manpage for new defaults:
>
> log, attr and inodes v2,
> Drop the ability to turn unwritten extents off completly,
> reduce imaxpct for big filesystems, less AGs for single disks configs.
> ---
>  xfsprogs/man/man8/mkfs.xfs.8 |   44 +++++------------------------------------
>  1 files changed, 6 insertions(+), 38 deletions(-)
>
> diff --git a/xfsprogs/man/man8/mkfs.xfs.8 b/xfsprogs/man/man8/mkfs.xfs.8
> index b6024c3..f9a89af 100644
> --- a/xfsprogs/man/man8/mkfs.xfs.8
> +++ b/xfsprogs/man/man8/mkfs.xfs.8
> @@ -304,7 +304,8 @@ bits.
>  This specifies the maximum percentage of space in the filesystem that
>  can be allocated to inodes. The default
>  .I value
> -is 25%. Setting the
> +is 25% for filesystems under 1TB, 5% for filesystems under 50TB and 1%
> +for filesystems over 50TB. Setting the
>  .I value
>  to 0 means that essentially all of the filesystem can
>  become inode blocks.

Is it worth saying why you might want to override this? (i.e. why was it
reduced for large filesystems - what was detrimental about having 25% at
50T?)

> @@ -327,16 +328,10 @@ that does not have the inode alignment feature
>  .BI attr[= value ]
>  This is used to specify the version of extended attribute inline allocation
>  policy to be used.
> -By default, this is zero. Once extended attributes are used for the
> +By default, this is 2. Once extended attributes are used for the
>  first time, the version will be set to either one or two.

Well, it will be set to what is specified, or the default, right? Again,
why would I choose one over the other?

>  The current version (two) uses a more efficient algorithm for managing
> -the available inline inode space than version one does, however, for
> -backward compatibility reasons (and in the absence of the
> -.B attr=2
> -mkfs option, or the
> -.B attr2
> -mount option), version one will be selected
> -by default when attributes are first used on a filesystem.
> +the available inline inode space than version one does.

Ah, so I would never want to use 1? Or might I want to use it for
backwards compatibility? Or?
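[The tiered imaxpct default the patch describes can be mirrored by a small
helper. The thresholds below are taken from the man-page wording only, not
from the mkfs.xfs source, so the exact boundary handling is an assumption
for illustration.]

```shell
# Hypothetical helper mirroring the imaxpct defaults from the patch text:
# 25% under 1TB, 5% from 1TB up to 50TB, 1% at 50TB and over.
imaxpct_default() {
    tb=$1   # filesystem size in whole terabytes
    if [ "$tb" -lt 1 ]; then
        echo 25
    elif [ "$tb" -lt 50 ]; then
        echo 5
    else
        echo 1
    fi
}

imaxpct_default 0     # prints 25 (under 1TB)
imaxpct_default 10    # prints 5  (under 50TB)
imaxpct_default 100   # prints 1  (over 50TB)
```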
> .RE > .TP > .BI \-l " log_section_options" > @@ -389,15 +384,9 @@ and directory block size, the minimum log size is larger than 512 blocks. > .BI version= value > This specifies the version of the log. The > .I value > -is either 1 or 2. Specifying > +is either 1 or 2 (the default is 2). > .B version=2 > -enables the > -.B sunit > -suboption, and allows the logbsize to be increased beyond 32K. > -Version 2 logs are automatically selected if a log stripe unit > -is specified. See > -.BR sunit " and " su > -suboptions, below. > +allows the logbsize to be increased beyond 32K. and it allows the sunit/su suboptions? And what's this 32K thing, and what's logbsize? The first-time reader may wonder what's special about 32K. Why would one want to use logv1 at this point, any reason? Perhaps it would be better to document limitations of v1 rather than the non-limitations of v2? Or just drop v1 altogether? > .TP > .BI sunit= value > This specifies the alignment to be used for log writes. The > @@ -430,27 +419,6 @@ suffixes). This value must be a multiple of the filesystem block size. > Version 2 logs are automatically selected if the log > .B su > suboption is specified. > -.TP > -.BI lazy-count= value > -This changes the method of logging various persistent counters > -in the superblock. Under metadata intensive workloads, these > -counters are updated and logged frequently enough that the superblock > -updates become a serialisation point in the filesystem. The > -.I value > -can be either 0 or 1. > -.IP > -With > -.BR lazy-count=1 , > -the superblock is not modified or logged on every change of the > -persistent counters. Instead, enough information is kept in > -other parts of the filesystem to be able to maintain the persistent > -counter values without needed to keep them in the superblock. > -This gives significant improvements in performance on some configurations. 
> -The default
> -.I value
> -is 0 (off) so you must specify
> -.B lazy-count=1
> -if you want to make use of this feature.

lazy-count is no longer a configurable option?

-Eric

> .RE
> .TP
> .BI \-n " naming_options"
> --
> 1.5.4.1
>
> --
> Niv Sardi

From owner-xfs@oss.sgi.com Tue Mar 4 07:46:05 2008
Message-ID: <47CD6ED7.5050505@sandeen.net>
Date: Tue, 04 Mar 2008 09:46:31 -0600
From: Eric Sandeen
To: Niv Sardi
CC: xfs@oss.sgi.com, xfs-dev@sgi.com
Subject: Re: [REVIEW] mkfs.xfs man page needs the default settings updated.
References: <47CD6D0E.3090301@sandeen.net>
In-Reply-To: <47CD6D0E.3090301@sandeen.net>

Eric Sandeen wrote:
> Niv Sardi wrote:
>> Manpages update for the new defaults, please review, I believe I got'em all.
>

(btw review was a bit pedantic because there are always a million
questions about tuning & options.  Let's try to be as clear as we can
in the manpage at least...)
-Eric

From owner-xfs@oss.sgi.com Tue Mar 4 08:30:55 2008
Message-ID: <47CD7950.9040001@gmail.com>
Date: Tue, 04 Mar 2008 17:31:12 +0100
From: Artur Makówka <artur.makowka@gmail.com>
To: xfs@oss.sgi.com
Subject: creating new array

I want to create a new array, because my last one (RAID 0) is gone...
Now it is going to be RAID 5, on an XFS filesystem of course.  The
problem is that I don't really know what chunk size I should use to
get the best performance, but I can still mount the old array
read-only and check what the average file size there is - I just don't
know how.
(Doing "ls -lR *", adding up the sizes and dividing by the count is
probably going to take days to complete.)

There was some xfs command that showed it, but I really have no idea
what it was; xfs_info doesn't give me much information.  The current
situation is like this:

meta-data=/dev/md0               isize=256    agcount=32, agsize=5723342 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=183146912, imaxpct=25
         =                       sunit=2      swidth=6 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=24576  blocks=0, rtextents=0

Some of the files are not on it anymore; as I said, it is a damaged
RAID 0.

The system is going to have millions of small files (up to 3MB), or
even hundreds of millions (very popular free hosting).  Could you
advise me on any xfs and mdadm options?  I plan to use the lazyblocks
option with the latest 2.6.24 kernel.  Any other ideas?

This is going to be RAID 5; stability is NOT very important, as I will
have full backups - the most important thing is performance.  I can
get hundreds of hits per second at peak times.  I was planning to
build it from 5 * 500GB disks, and it will be just a partition for
user files; the system is on another disk (probably on LVM to give me
the option to resize it in future).

Filesystem performance is critical, as I have many thousands of
accounts, mostly with just small websites, so it's impossible to put
it all in RAM.  If you have any xfs/mdadm creation advice (or mounting
options), please share it with me.
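[For the average-file-size question above, a rough sketch that does the
"ls -lR" arithmetic automatically; "/mnt/old-array" is a placeholder
for the read-only mount point of the old array.]

```python
# Walk the old array and compute the mean file size, instead of
# hand-summing "ls -lR" output.
import os

def average_file_size(root):
    total = count = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
                count += 1
            except OSError:
                pass  # file vanished or unreadable (damaged array); skip it
    return total / count if count else 0.0

if __name__ == "__main__":
    print(average_file_size("/mnt/old-array"))
```

On hundreds of millions of files this walk is still slow, but it is a
single pass and far cheaper than parsing a recursive ls listing.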
I installed xfsprogs 2.9.7, and I will compile a 2.6.24 kernel for the
creation (I can even risk using 2.6.25-rcX if there is some important
change that I need during array/filesystem creation - just not -mm, as
the XFS version from -mm once corrupted some of my users' files).

From owner-xfs@oss.sgi.com Tue Mar 4 14:05:32 2008
Date: Tue, 4 Mar 2008 14:05:59 -0800 (PST)
From: Ravi Wijayaratne
Subject: corruption in xfs_end_bio_unwritten
To: xfs@oss.sgi.com
Message-ID: <629727.55106.qm@web32504.mail.mud.yahoo.com>

Hi all,

I am seeing data corruption in xfs_end_bio_unwritten.  Possibly the
corruption is happening earlier.  Here is what I see.

The ioend->io_offset and ioend->io_size are completely beyond the
range of the size of the file, or of the device altogether.  The
problem occurs under heavy I/O stress on four 20GB files that were
created using the XFS_IOC_RESVSP64 ioctl.  For a sparse file of the
same size the problem does not occur.  Also, the problem is not seen
under moderate to low system I/O loads (created by Iometer).

It trips on the VOP_BMAP(..) call that eventually calls
xfs_btree_check_lblock.  I am aware that this function has changed in
the tip to call xfs_iomap_write_unwritten directly instead of calling
xfs_iomap via VOP_BMAP.  I believe that even if I change the code to
what is in the tip I would still stumble somewhere on the fact that a
write to an undefined range was completed.
The call stack that was dumped by XFS_ERROR_REPORT was as follows.
Any thoughts on how I could fix this?

Thanks in advance,
Ravi

<1> Oct 22 21:12:57 Foo kernel: Filesystem "dm-0": XFS internal error xfs_btree_check_lblock at line 215 of file fs/xfs/xfs_btree.c.  Caller 0x781f907a
<4> Oct 22 21:12:57 Foo kernel: [<781fc212>] xfs_btree_check_lblock+0x52/0x1c0
<4> Oct 22 21:12:57 Foo kernel: [<781f907a>] xfs_bmbt_lookup+0x1fa/0x5f0
<4> Oct 22 21:12:57 Foo kernel: [<781f907a>] xfs_bmbt_lookup+0x1fa/0x5f0
<4> Oct 22 21:12:57 Foo kernel: [<781ed172>] xfs_bmap_add_extent_unwritten_real+0xd62/0xfd0
<4> Oct 22 21:12:57 Foo kernel: [<781ee030>] xfs_bmap_add_extent+0x6f0/0x1f10
<4> Oct 22 21:12:57 Foo kernel: [<78324250>] dm_request+0xf0/0x13c
<4> Oct 22 21:12:57 Foo kernel: [<78324160>] dm_request+0x0/0x13c
<4> Oct 22 21:12:57 Foo kernel: [<78263561>] generic_make_request+0x161/0x210
<4> Oct 22 21:12:57 Foo kernel: [<782c97e5>] scsi_delete_timer+0x15/0x60
<4> Oct 22 21:12:57 Foo kernel: [<781150b6>] find_busiest_group+0x256/0x310
<4> Oct 22 21:12:57 Foo kernel: [<782653f5>] submit_bio+0x55/0x100
<4> Oct 22 21:12:57 Foo kernel: [<781678a7>] bio_add_page+0x37/0x50
<4> Oct 22 21:12:57 Foo kernel: [<781f6a54>] xfs_bmbt_get_state+0x14/0x30
<4> Oct 22 21:12:57 Foo kernel: [<781f02de>] xfs_bmap_do_search_extents+0x2fe/0x480
<4> Oct 22 21:12:57 Foo kernel: [<782462b7>] xfs_buf_iorequest+0x347/0x440
<4> Oct 22 21:12:57 Foo kernel: [<78247538>] kmem_zone_alloc+0x58/0xd0
<4> Oct 22 21:12:57 Foo kernel: [<781f1f73>] xfs_bmapi+0x19b3/0x2e20
<4> Oct 22 21:12:57 Foo kernel: [<78220466>] xlog_write+0x6e6/0x800
<4> Oct 22 21:12:57 Foo kernel: [<78228158>] xfs_icsb_modify_counters_locked+0x18/0x20
<4> Oct 22 21:12:57 Foo kernel: [<7822db93>] xfs_trans_tail_ail+0x13/0x30
<4> Oct 22 21:12:58 Foo kernel: [<7821f2d8>] xlog_assign_tail_lsn+0x28/0x60
<4> Oct 22 21:12:58 Foo kernel: [<7821f337>] xlog_state_release_iclog+0x27/0x530
<4> Oct 22 21:12:58 Foo kernel: [<7822f069>] xfs_trans_unlock_items+0xa9/0xb0
<4> Oct 22 21:12:58 Foo kernel: [<78221861>] xfs_log_release_iclog+0x11/0x40
<4> Oct 22 21:12:58 Foo kernel: [<7822d8b9>] _xfs_trans_commit+0x8e9/0xa60
<4> Oct 22 21:12:58 Foo kernel: [<782207bc>] xlog_grant_push_ail+0x3c/0x150
<4> Oct 22 21:12:58 Foo kernel: [<78220ece>] xfs_log_reserve+0x5fe/0x780
<4> Oct 22 21:12:58 Foo kernel: [<7822eb41>] xfs_trans_ijoin+0x31/0x70
<4> Oct 22 21:12:58 Foo kernel: [<7823ad6d>] xfs_iomap_write_unwritten+0x1bd/0x300
<4> Oct 22 21:12:58 Foo kernel: [<7823a633>] xfs_iomap+0x513/0x850
<4> Oct 22 21:12:58 Foo kernel: [<78149631>] test_clear_page_writeback+0x51/0xc0
<4> Oct 22 21:12:58 Foo kernel: [<78166059>] end_buffer_async_write+0xa9/0x140
<4> Oct 22 21:12:58 Foo kernel: [<7823ca58>] xfs_end_bio_unwritten+0x48/0x60
<4> Oct 22 21:12:58 Foo kernel: [<7812c712>] run_workqueue+0x72/0xf0
<4> Oct 22 21:12:58 Foo kernel: [<7823ca10>] xfs_end_bio_unwritten+0x0/0x60
<4> Oct 22 21:12:58 Foo kernel: [<7812cf5b>] worker_thread+0x13b/0x160
<4> Oct 22 21:12:58 Foo kernel: [<78115b40>] default_wake_function+0x0/0x10
<4> Oct 22 21:12:58 Foo kernel: [<7812ce20>] worker_thread+0x0/0x160
<4> Oct 22 21:12:58 Foo kernel: [<7812fd7b>] kthread+0xab/0xe0
<4> Oct 22 21:12:58 Foo kernel: [<7812fcd0>] kthread+0x0/0xe0
<4> Oct 22 21:12:58 Foo kernel: [<78100df5>] kernel_thread_helper+0x5/0x10

------------------------------
Ravi Wijayaratne
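[The out-of-range condition the report describes can be modelled as a
simple range check.  An illustrative sketch only, not XFS code; all
names here are made up.]

```python
# Model of the invariant the report says is violated: an
# unwritten-extent conversion for [io_offset, io_offset + io_size)
# should fall inside both the file's reserved range and the device.
def ioend_in_range(io_offset, io_size, file_size, device_size):
    end = io_offset + io_size
    return io_offset >= 0 and end <= file_size and end <= device_size

GB = 1 << 30
# In-range I/O on a 20GB preallocated file on a 500GB device passes;
# an ioend landing past the device, as in the report, fails the check.
print(ioend_in_range(1 * GB, 4096, 20 * GB, 500 * GB))    # True
print(ioend_in_range(600 * GB, 4096, 20 * GB, 500 * GB))  # False
```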
From owner-xfs@oss.sgi.com Tue Mar 4 14:15:51 2008
Message-ID: <47CDCA12.5060107@sandeen.net>
Date: Tue, 04 Mar 2008 16:15:46 -0600
From: Eric Sandeen
To: Ravi Wijayaratne
CC: xfs@oss.sgi.com
Subject: Re: corruption in xfs_end_bio_unwritten
References: <629727.55106.qm@web32504.mail.mud.yahoo.com>
In-Reply-To: <629727.55106.qm@web32504.mail.mud.yahoo.com>
Ravi Wijayaratne wrote:
> Hi all,
>
> I am seeing data corruption in xfs_end_bio_unwritten.  Possibly the
> corruption is happening earlier.  Here is what I see.

what kernel, for starters?

> The ioend->io_offset and ioend->io_size are completely beyond the
> range of the size of the file, or of the device altogether.  The
> problem occurs under heavy I/O stress on four 20GB files that were
> created using the XFS_IOC_RESVSP64 ioctl.  For a sparse file of the
> same size the problem does not occur.  Also, the problem is not seen
> under moderate to low system I/O loads (created by Iometer).
>
> It trips on the VOP_BMAP(..) call that eventually calls
> xfs_btree_check_lblock.  I am aware that this function has changed in
> the tip to call xfs_iomap_write_unwritten directly instead of calling
> xfs_iomap via VOP_BMAP.  I believe that even if I change the code to
> what is in the tip I would still stumble somewhere on the fact that a
> write to an undefined range was completed.  The call stack that was
> dumped by XFS_ERROR_REPORT was as follows.
>
> Any thoughts on how I could fix this?
>
> Thanks in advance
>

From owner-xfs@oss.sgi.com Tue Mar 4 19:52:40 2008
Message-ID: <47CE1921.9000708@sandeen.net>
Date: Tue, 04 Mar 2008 21:53:05 -0600
From: Eric Sandeen
To: Ravi Wijayaratne
CC: xfs@oss.sgi.com
Subject: Re: corruption in xfs_end_bio_unwritten
References: <629727.55106.qm@web32504.mail.mud.yahoo.com> <47CDCA12.5060107@sandeen.net>
In-Reply-To: <47CDCA12.5060107@sandeen.net>

Eric Sandeen wrote:
> Ravi Wijayaratne wrote:
>> Hi all,
>>
>> I am seeing data corruption in xfs_end_bio_unwritten.  Possibly the
>> corruption is happening earlier.  Here is what I see.
>
> what kernel, for starters?

2.6.16 + XFS from SLES10, I hear... :)

So for starters, I'd bug SuSE.... otherwise I'd see if it persists
upstream.

Is AIO+DIO in the mix?  Perhaps it is related to
https://bugzilla.redhat.com/show_bug.cgi?id=217098

-Eric

From owner-xfs@oss.sgi.com Tue Mar 4 20:34:50 2008
Date: Wed, 05 Mar 2008 15:37:07 +1100
To: "xfs@oss.sgi.com"
Subject: [REVIEW #4] bad_features2 support in userspace
From: "Barry Naujok" <bnaujok@sgi.com>
Organization: SGI
Due to the issue of mounting filesystem with older kernels and
potentially reading sb_features2 from the wrong location. It
seems the best course of action is to always make sb_features2
and sb_bad_features2 the same. This is pretty important as
new bits in this are supposed to stop older kernels from
mounting filesystems with unsupported features.

If sb_bad_features2 is zero, and the old kernel tries to read
sb_features2 from this location during mount, it will succeed
as it will read zero.

So, this patch changes mkfs.xfs to set sb_bad_features2 to
the same as sb_features2, xfs_check and xfs_repair now also
makes sure they are the same.

Barry.

--

========================================================================
xfsprogs/db/check.c
========================================================================

--- a/xfsprogs/db/check.c	2008-03-05 15:30:54.000000000 +1100
+++ b/xfsprogs/db/check.c	2008-03-05 15:28:58.638097511 +1100
@@ -869,6 +869,14 @@ blockget_f(
 				mp->m_sb.sb_frextents, frextents);
 		error++;
 	}
+	if (mp->m_sb.sb_bad_features2 != mp->m_sb.sb_features2) {
+		if (!sflag)
+			dbprintf("sb_features2 (0x%x) not same as "
+				"sb_bad_features2 (0x%x)\n",
+				mp->m_sb.sb_features2,
+				mp->m_sb.sb_bad_features2);
+		error++;
+	}
 	if ((sbversion & XFS_SB_VERSION_ATTRBIT) &&
 	    !XFS_SB_VERSION_HASATTR(&mp->m_sb)) {
 		if (!sflag)

========================================================================
xfsprogs/db/sb.c
========================================================================

--- a/xfsprogs/db/sb.c	2008-03-05 15:30:54.000000000 +1100
+++ b/xfsprogs/db/sb.c	2008-02-29 17:16:33.770423296 +1100
@@ -108,6 +108,7 @@ const field_t	sb_flds[] = {
 	{ "logsectsize", FLDT_UINT16D, OI(OFF(logsectsize)), C1, 0, TYP_NONE },
 	{ "logsunit", FLDT_UINT32D, OI(OFF(logsunit)), C1, 0, TYP_NONE },
 	{ "features2", FLDT_UINT32X, OI(OFF(features2)), C1, 0, TYP_NONE },
+	{ "bad_features2", FLDT_UINT32X, OI(OFF(bad_features2)), C1, 0, TYP_NONE },
 	{ NULL }
 };

========================================================================
xfsprogs/include/xfs_sb.h
========================================================================

--- a/xfsprogs/include/xfs_sb.h	2008-03-05 15:30:54.000000000 +1100
+++ b/xfsprogs/include/xfs_sb.h	2008-02-29 17:16:33.814417687 +1100
@@ -151,6 +151,7 @@ typedef struct xfs_sb
 	__uint16_t	sb_logsectsize;	/* sector size for the log, bytes */
 	__uint32_t	sb_logsunit;	/* stripe unit size for the log */
 	__uint32_t	sb_features2;	/* additional feature bits */
+	__uint32_t	sb_bad_features2; /* unusable space */
 } xfs_sb_t;

 /*
@@ -169,7 +170,7 @@ typedef enum {
 	XFS_SBS_GQUOTINO, XFS_SBS_QFLAGS, XFS_SBS_FLAGS, XFS_SBS_SHARED_VN,
 	XFS_SBS_INOALIGNMT, XFS_SBS_UNIT, XFS_SBS_WIDTH, XFS_SBS_DIRBLKLOG,
 	XFS_SBS_LOGSECTLOG, XFS_SBS_LOGSECTSIZE, XFS_SBS_LOGSUNIT,
-	XFS_SBS_FEATURES2,
+	XFS_SBS_FEATURES2, XFS_SBS_BAD_FEATURES2,
 	XFS_SBS_FIELDCOUNT
 } xfs_sb_field_t;

========================================================================
xfsprogs/libxfs/xfs_mount.c
========================================================================

--- a/xfsprogs/libxfs/xfs_mount.c	2008-03-05 15:30:54.000000000 +1100
+++ b/xfsprogs/libxfs/xfs_mount.c	2008-02-29 17:16:33.834415138 +1100
@@ -140,6 +140,7 @@ static struct {
     { offsetof(xfs_sb_t, sb_logsectsize),0 },
     { offsetof(xfs_sb_t, sb_logsunit),	 0 },
     { offsetof(xfs_sb_t, sb_features2),	 0 },
+    { offsetof(xfs_sb_t, sb_bad_features2), 0 },
     { sizeof(xfs_sb_t),			 0 }
 };

========================================================================
xfsprogs/mkfs/xfs_mkfs.c
========================================================================

--- a/xfsprogs/mkfs/xfs_mkfs.c	2008-03-05 15:30:54.000000000 +1100
+++ b/xfsprogs/mkfs/xfs_mkfs.c	2008-03-05 15:27:37.568461787 +1100
@@ -2103,6 +2103,13 @@ an AG size that is one stripe unit small
 			dirversion == 2, logversion == 2, attrversion == 1,
 			(sectorsize != BBSIZE || lsectorsize != BBSIZE),
 			sbp->sb_features2 != 0);
+	/*
+	 * Due to a structure alignment issue, sb_features2 ended up in one
+	 * of two locations, the second "incorrect" location represented by
+	 * the sb_bad_features2 field. To avoid older kernels mounting
+	 * filesystems they shouldn't, set both field to the same value.
+	 */
+	sbp->sb_bad_features2 = sbp->sb_features2;

 	if (force_overwrite)
 		zero_old_xfs_structures(&xi, sbp);

========================================================================
xfsprogs/repair/phase1.c
========================================================================

--- a/xfsprogs/repair/phase1.c	2008-03-05 15:30:54.000000000 +1100
+++ b/xfsprogs/repair/phase1.c	2008-03-05 15:19:09.513415413 +1100
@@ -91,6 +91,19 @@ phase1(xfs_mount_t *mp)
 		primary_sb_modified = 1;
 	}

+	/*
+	 * Check bad_features2 and make sure features2 the same as
+	 * bad_features (ORing the two together). Leave bad_features2
+	 * set so older kernels can still use it and not mount unsupported
+	 * filesystems when it reads bad_features2.
+	 */
+	if (sb->sb_bad_features2 != sb->sb_features2) {
+		sb->sb_features2 |= sb->sb_bad_features2;
+		sb->sb_bad_features2 = sb->sb_features2;
+		primary_sb_modified = 1;
+		do_warn(_("superblock has a features2 mismatch, correcting\n"));
+	}
+
 	if (primary_sb_modified)  {
 		if (!no_modify)  {
 			do_warn(_("writing modified primary superblock\n"));
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KCi0tLSBhL3hm c3Byb2dzL3JlcGFpci9waGFzZTEuYwkyMDA4LTAzLTA1IDE1OjMwOjU0LjAw MDAwMDAwMCArMTEwMAorKysgYi94ZnNwcm9ncy9yZXBhaXIvcGhhc2UxLmMJ MjAwOC0wMy0wNSAxNToxOTowOS41MTM0MTU0MTMgKzExMDAKQEAgLTkxLDYg KzkxLDE5IEBAIHBoYXNlMSh4ZnNfbW91bnRfdCAqbXApCiAJCXByaW1hcnlf c2JfbW9kaWZpZWQgPSAxOwogCX0KIAorCS8qCisJICogQ2hlY2sgYmFkX2Zl YXR1cmVzMiBhbmQgbWFrZSBzdXJlIGZlYXR1cmVzMiB0aGUgc2FtZSBhcwor CSAqIGJhZF9mZWF0dXJlcyAoT1JpbmcgdGhlIHR3byB0b2dldGhlcikuIExl YXZlIGJhZF9mZWF0dXJlczIKKwkgKiBzZXQgc28gb2xkZXIga2VybmVscyBj YW4gc3RpbGwgdXNlIGl0IGFuZCBub3QgbW91bnQgdW5zdXBwb3J0ZWQKKwkg KiBmaWxlc3lzdGVtcyB3aGVuIGl0IHJlYWRzIGJhZF9mZWF0dXJlczIuCisJ ICovCisJaWYgKHNiLT5zYl9iYWRfZmVhdHVyZXMyICE9IHNiLT5zYl9mZWF0 dXJlczIpIHsKKwkJc2ItPnNiX2ZlYXR1cmVzMiB8PSBzYi0+c2JfYmFkX2Zl YXR1cmVzMjsKKwkJc2ItPnNiX2JhZF9mZWF0dXJlczIgPSBzYi0+c2JfZmVh dHVyZXMyOworCQlwcmltYXJ5X3NiX21vZGlmaWVkID0gMTsKKwkJZG9fd2Fy bihfKCJzdXBlcmJsb2NrIGhhcyBhIGZlYXR1cmVzMiBtaXNtYXRjaCwgY29y cmVjdGluZ1xuIikpOworCX0KKwogCWlmIChwcmltYXJ5X3NiX21vZGlmaWVk KSAgewogCQlpZiAoIW5vX21vZGlmeSkgIHsKIAkJCWRvX3dhcm4oXygid3Jp dGluZyBtb2RpZmllZCBwcmltYXJ5IHN1cGVyYmxvY2tcbiIpKTsK ------------ypN4flJIlmhdmVY5eNxenW-- From owner-xfs@oss.sgi.com Tue Mar 4 20:45:13 2008 Received: with ECARTIS (v1.0.0; list xfs); Tue, 04 Mar 2008 20:45:29 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.2 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m254jBCp020692 for ; Tue, 4 Mar 2008 20:45:13 -0800 X-ASG-Debug-ID: 1204692340-1387004f0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id ED0E565342B; Tue, 4 Mar 2008 20:45:40 -0800 (PST) Received: from 
filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id UNxvDMfQkaSjbMdY; Tue, 04 Mar 2008 20:45:40 -0800 (PST) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m254jb3H013530; Tue, 4 Mar 2008 23:45:37 -0500 Received: by josefsipek.net (Postfix, from userid 1000) id 6694B1C00124; Tue, 4 Mar 2008 23:45:39 -0500 (EST) Date: Tue, 4 Mar 2008 23:45:39 -0500 From: "Josef 'Jeff' Sipek" To: Barry Naujok Cc: "xfs@oss.sgi.com" X-ASG-Orig-Subj: Re: [REVIEW #4] bad_features2 support in userspace Subject: Re: [REVIEW #4] bad_features2 support in userspace Message-ID: <20080305044539.GC19104@josefsipek.net> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1204692340 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43918 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14772 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs On Wed, Mar 05, 2008 at 03:37:07PM +1100, Barry Naujok wrote: > Due to the issue of mounting filesystem with older kernels and > potentially reading sb_features2 from the wrong location. 
It > seems the best course of action is to always make sb_features2 > and sb_bad_features2 the same. This is pretty important as > new bits in this are supposed to stop older kernels from > mounting filesystems with unsupported features. > > If sb_bad_features2 is zero, and the old kernel tries to read > sb_features2 from this location during mount, it will succeed > as it will read zero. > > So, this patch changes mkfs.xfs to set sb_bad_features2 to > the same as sb_features2, xfs_check and xfs_repair now also > makes sure they are the same. Idea: good Implementation: I didn't see anything wrong. Josef 'Jeff' Sipek. P.S. Any reason why you inline the patch _and_ attach? -- I think there is a world market for maybe five computers. - Thomas Watson, chairman of IBM, 1943. From owner-xfs@oss.sgi.com Tue Mar 4 20:49:57 2008 Received: with ECARTIS (v1.0.0; list xfs); Tue, 04 Mar 2008 20:50:04 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m254nnwj021005 for ; Tue, 4 Mar 2008 20:49:56 -0800 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA27068; Wed, 5 Mar 2008 15:50:12 +1100 Date: Wed, 05 Mar 2008 15:52:13 +1100 To: "Josef 'Jeff' Sipek" Subject: Re: [REVIEW #4] bad_features2 support in userspace From: "Barry Naujok" Organization: SGI Cc: "xfs@oss.sgi.com" Content-Type: text/plain; format=flowed; delsp=yes; charset=utf-8 MIME-Version: 1.0 References: <20080305044539.GC19104@josefsipek.net> Content-Transfer-Encoding: 7bit Message-ID: In-Reply-To: <20080305044539.GC19104@josefsipek.net> User-Agent: Opera Mail/9.24 (Win32) X-Virus-Scanned: ClamAV 
0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14773 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs On Wed, 05 Mar 2008 15:45:39 +1100, Josef 'Jeff' Sipek wrote: > P.S. Any reason why you inline the patch _and_ attach? Inline for review, attach for actually applying the patch as my mailer mangles leading spaces for the inline patch :( Barry. From owner-xfs@oss.sgi.com Tue Mar 4 22:27:23 2008 Received: with ECARTIS (v1.0.0; list xfs); Tue, 04 Mar 2008 22:27:45 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m256RGDi023443 for ; Tue, 4 Mar 2008 22:27:21 -0800 Received: from [134.14.55.78] (redback.melbourne.sgi.com [134.14.55.78]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA29875; Wed, 5 Mar 2008 17:27:40 +1100 Message-ID: <47CE3EBA.2040900@sgi.com> Date: Wed, 05 Mar 2008 17:33:30 +1100 From: Lachlan McIlroy Reply-To: lachlan@sgi.com User-Agent: Thunderbird 2.0.0.12 (X11/20080213) MIME-Version: 1.0 To: Barry Naujok CC: "xfs@oss.sgi.com" Subject: Re: [REVIEW #4] bad_features2 support in userspace References: In-Reply-To: Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14774 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs Looks good to me. 
Barry Naujok wrote: > Due to the issue of mounting filesystem with older kernels and > potentially reading sb_features2 from the wrong location. It > seems the best course of action is to always make sb_features2 > and sb_bad_features2 the same. This is pretty important as > new bits in this are supposed to stop older kernels from > mounting filesystems with unsupported features. > > If sb_bad_features2 is zero, and the old kernel tries to read > sb_features2 from this location during mount, it will succeed > as it will read zero. > > So, this patch changes mkfs.xfs to set sb_bad_features2 to > the same as sb_features2, xfs_check and xfs_repair now also > makes sure they are the same. > > Barry. > > -- > > > =========================================================================== > xfsprogs/db/check.c > =========================================================================== > > --- a/xfsprogs/db/check.c 2008-03-05 15:30:54.000000000 +1100 > +++ b/xfsprogs/db/check.c 2008-03-05 15:28:58.638097511 +1100 > @@ -869,6 +869,14 @@ blockget_f( > mp->m_sb.sb_frextents, frextents); > error++; > } > + if (mp->m_sb.sb_bad_features2 != mp->m_sb.sb_features2) { > + if (!sflag) > + dbprintf("sb_features2 (0x%x) not same as " > + "sb_bad_features2 (0x%x)\n", > + mp->m_sb.sb_features2, > + mp->m_sb.sb_bad_features2); > + error++; > + } > if ((sbversion & XFS_SB_VERSION_ATTRBIT) && > !XFS_SB_VERSION_HASATTR(&mp->m_sb)) { > if (!sflag) > > =========================================================================== > xfsprogs/db/sb.c > =========================================================================== > > --- a/xfsprogs/db/sb.c 2008-03-05 15:30:54.000000000 +1100 > +++ b/xfsprogs/db/sb.c 2008-02-29 17:16:33.770423296 +1100 > @@ -108,6 +108,7 @@ const field_t sb_flds[] = { > { "logsectsize", FLDT_UINT16D, OI(OFF(logsectsize)), C1, 0, > TYP_NONE }, > { "logsunit", FLDT_UINT32D, OI(OFF(logsunit)), C1, 0, TYP_NONE }, > { "features2", FLDT_UINT32X, OI(OFF(features2)), C1, 
0, TYP_NONE }, > + { "bad_features2", FLDT_UINT32X, OI(OFF(bad_features2)), C1, 0, > TYP_NONE }, > { NULL } > }; > > > =========================================================================== > xfsprogs/include/xfs_sb.h > =========================================================================== > > --- a/xfsprogs/include/xfs_sb.h 2008-03-05 15:30:54.000000000 +1100 > +++ b/xfsprogs/include/xfs_sb.h 2008-02-29 17:16:33.814417687 +1100 > @@ -151,6 +151,7 @@ typedef struct xfs_sb > __uint16_t sb_logsectsize; /* sector size for the log, bytes */ > __uint32_t sb_logsunit; /* stripe unit size for the log */ > __uint32_t sb_features2; /* additional feature bits */ > + __uint32_t sb_bad_features2; /* unusable space */ > } xfs_sb_t; > > /* > @@ -169,7 +170,7 @@ typedef enum { > XFS_SBS_GQUOTINO, XFS_SBS_QFLAGS, XFS_SBS_FLAGS, XFS_SBS_SHARED_VN, > XFS_SBS_INOALIGNMT, XFS_SBS_UNIT, XFS_SBS_WIDTH, XFS_SBS_DIRBLKLOG, > XFS_SBS_LOGSECTLOG, XFS_SBS_LOGSECTSIZE, XFS_SBS_LOGSUNIT, > - XFS_SBS_FEATURES2, > + XFS_SBS_FEATURES2, XFS_SBS_BAD_FEATURES2, > XFS_SBS_FIELDCOUNT > } xfs_sb_field_t; > > > =========================================================================== > xfsprogs/libxfs/xfs_mount.c > =========================================================================== > > --- a/xfsprogs/libxfs/xfs_mount.c 2008-03-05 15:30:54.000000000 +1100 > +++ b/xfsprogs/libxfs/xfs_mount.c 2008-02-29 17:16:33.834415138 +1100 > @@ -140,6 +140,7 @@ static struct { > { offsetof(xfs_sb_t, sb_logsectsize),0 }, > { offsetof(xfs_sb_t, sb_logsunit), 0 }, > { offsetof(xfs_sb_t, sb_features2), 0 }, > + { offsetof(xfs_sb_t, sb_bad_features2), 0 }, > { sizeof(xfs_sb_t), 0 } > }; > > > =========================================================================== > xfsprogs/mkfs/xfs_mkfs.c > =========================================================================== > > --- a/xfsprogs/mkfs/xfs_mkfs.c 2008-03-05 15:30:54.000000000 +1100 > +++ b/xfsprogs/mkfs/xfs_mkfs.c 2008-03-05 15:27:37.568461787 
+1100 > @@ -2103,6 +2103,13 @@ an AG size that is one stripe unit small > dirversion == 2, logversion == 2, attrversion == 1, > (sectorsize != BBSIZE || lsectorsize != BBSIZE), > sbp->sb_features2 != 0); > + /* > + * Due to a structure alignment issue, sb_features2 ended up in one > + * of two locations, the second "incorrect" location represented by > + * the sb_bad_features2 field. To avoid older kernels mounting > + * filesystems they shouldn't, set both field to the same value. > + */ > + sbp->sb_bad_features2 = sbp->sb_features2; > > if (force_overwrite) > zero_old_xfs_structures(&xi, sbp); > > =========================================================================== > xfsprogs/repair/phase1.c > =========================================================================== > > --- a/xfsprogs/repair/phase1.c 2008-03-05 15:30:54.000000000 +1100 > +++ b/xfsprogs/repair/phase1.c 2008-03-05 15:19:09.513415413 +1100 > @@ -91,6 +91,19 @@ phase1(xfs_mount_t *mp) > primary_sb_modified = 1; > } > > + /* > + * Check bad_features2 and make sure features2 the same as > + * bad_features (ORing the two together). Leave bad_features2 > + * set so older kernels can still use it and not mount unsupported > + * filesystems when it reads bad_features2. 
> + */ > + if (sb->sb_bad_features2 != sb->sb_features2) { > + sb->sb_features2 |= sb->sb_bad_features2; > + sb->sb_bad_features2 = sb->sb_features2; > + primary_sb_modified = 1; > + do_warn(_("superblock has a features2 mismatch, correcting\n")); > + } > + > if (primary_sb_modified) { > if (!no_modify) { > do_warn(_("writing modified primary superblock\n")); > From owner-xfs@oss.sgi.com Tue Mar 4 23:12:23 2008 Received: with ECARTIS (v1.0.0; list xfs); Tue, 04 Mar 2008 23:12:41 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m257CK5D024448 for ; Tue, 4 Mar 2008 23:12:23 -0800 X-ASG-Debug-ID: 1204701166-1387015e0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from bombadil.infradead.org (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 60318653F98 for ; Tue, 4 Mar 2008 23:12:46 -0800 (PST) Received: from bombadil.infradead.org (bombadil.infradead.org [18.85.46.34]) by cuda.sgi.com with ESMTP id aFVShyKH1HKUAi2h for ; Tue, 04 Mar 2008 23:12:46 -0800 (PST) Received: from hch by bombadil.infradead.org with local (Exim 4.68 #1 (Red Hat Linux)) id 1JWnnu-00025D-3t; Wed, 05 Mar 2008 07:12:42 +0000 Date: Wed, 5 Mar 2008 02:12:42 -0500 From: Christoph Hellwig To: Eric Sandeen Cc: Ravi Wijayaratne , xfs@oss.sgi.com X-ASG-Orig-Subj: Re: corruption in xfs_end_bio_unwritten Subject: Re: corruption in xfs_end_bio_unwritten Message-ID: <20080305071242.GA30439@infradead.org> References: <629727.55106.qm@web32504.mail.mud.yahoo.com> <47CDCA12.5060107@sandeen.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47CDCA12.5060107@sandeen.net> User-Agent: Mutt/1.5.17 (2007-11-01) X-SRS-Rewrite: SMTP 
reverse-path rewritten from by bombadil.infradead.org See http://www.infradead.org/rpr.html X-Barracuda-Connect: bombadil.infradead.org[18.85.46.34] X-Barracuda-Start-Time: 1204701170 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43928 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14775 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Tue, Mar 04, 2008 at 04:15:46PM -0600, Eric Sandeen wrote: > Ravi Wijayaratne wrote: > > Hi all, > > > > I am seeing data corruption in xfs_end_bio_unwritten. Possibly the corruption is happening before. > > Here is what I see. > > what kernel, for starters? 
Yeah, VOP_BMAP doesn't sound like anything recent ;-) From owner-xfs@oss.sgi.com Wed Mar 5 02:38:26 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 05 Mar 2008 02:38:47 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m25AcNAl002746 for ; Wed, 5 Mar 2008 02:38:26 -0800 X-ASG-Debug-ID: 1204713530-794e03220000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from merkurneu.hrz.uni-giessen.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 5C1E211F8F00 for ; Wed, 5 Mar 2008 02:38:50 -0800 (PST) Received: from merkurneu.hrz.uni-giessen.de (merkurneu.hrz.uni-giessen.de [134.176.2.3]) by cuda.sgi.com with ESMTP id jwMQE8SBmsegnMtL for ; Wed, 05 Mar 2008 02:38:50 -0800 (PST) Received: from [134.176.2.169] by merkurneu.hrz.uni-giessen.de with ESMTP; Wed, 5 Mar 2008 09:34:32 +0100 Received: from hermes.hrz.uni-giessen.de (hermes.hrz.uni-giessen.de [134.176.2.15]) by mailgw32.hrz.uni-giessen.de (Postfix) with ESMTP id E513696D988; Wed, 5 Mar 2008 09:34:04 +0100 (CET) Received: from fb07-iapwap2.physik.uni-giessen.de by hermes.hrz.uni-giessen.de with ESMTP; Wed, 5 Mar 2008 09:34:05 +0100 From: Marc To: lachlan@sgi.com X-ASG-Orig-Subj: Re: filesystem corruption in linus tree Subject: Re: filesystem corruption in linus tree Date: Wed, 5 Mar 2008 09:33:57 +0100 User-Agent: KMail/1.9.6 (enterprise 20070904.708012) Cc: Barry Naujok , xfs@oss.sgi.com References: <03F8FD43-322F-41E3-A7A0-CD4E9AD8B4DE@ap.physik.uni-giessen.de> <200802262100.18631.marc.dietrich@ap.physik.uni-giessen.de> <47CB7D52.2030704@sgi.com> In-Reply-To: <47CB7D52.2030704@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Disposition: inline Message-Id: 
<200803050934.00814.marc.dietrich@ap.physik.uni-giessen.de> X-HRZ-JLUG-MailScanner-Information: Passed JLUG virus check X-HRZ-JLUG-MailScanner: No virus found X-MailScanner-From: marc.dietrich@ap.physik.uni-giessen.de X-Barracuda-Connect: merkurneu.hrz.uni-giessen.de[134.176.2.3] X-Barracuda-Start-Time: 1204713533 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests=UNPARSEABLE_RELAY X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43942 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.00 UNPARSEABLE_RELAY Informational: message has unparseable relay lines X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id m25AcQAl002748 X-archive-position: 14776 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: Marc.Dietrich@ap.physik.uni-giessen.de Precedence: bulk X-list: xfs Hi, Am Montag 03 März 2008 05:23:46 schrieb Lachlan McIlroy: > Marc Dietrich wrote: > > Hi again, > > > > On Tuesday 26 February 2008 08:38:50 Lachlan McIlroy wrote: > >> Marc Dietrich wrote: > >>> Hi, > >>> > >>> On Monday 25 February 2008 01:36:28 Barry Naujok wrote: > >>>> On Sun, 24 Feb 2008 20:58:26 +1100, Marc Dietrich > >>>> > >>>> wrote: > >>>>> Hi, > >>>>> > >>>>> somewhere after the release of 2.6.24 my xfs filesystem got > >>>>> corrupted. Initialy I thought it was only related to the readdir bug. > >>>>> (http://oss.sgi.com/archives/xfs/2008-02/msg00027.html) So I waited > >>>>> for the fix to go into mainline. 
Yesterday I tried again, but got > >>>>> this error during boot: > > > > > > > >> We've had a few problems reported with XFS on 32-bit powermacs and the > >> culprit appears to be some changes to bit manipulation routines. Could > >> you please try reverse applying the attached patches and see if the > >> problem is resolved? > > > > I saw, that you already pushed it into mainline - for a good reason ;-) > > Works as expeted. > > > > Please also don't forget 2.6.24-stable ! > > The changes that caused this regression went into 2.6.25-rc1 so no need for > a 2.6.24 stable fix. yes - and I could have sworn, that 2.6.24 is also affected. My fault. Thanks Marc -- "Those who question our statements are traitors", Lord Arthur Ponsonby, "Falsehood in Wartime: Propaganda Lies of the First World War", 1928 From owner-xfs@oss.sgi.com Wed Mar 5 03:23:13 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 05 Mar 2008 03:23:32 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.2 required=5.0 tests=AWL,BAYES_00, SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m25BNC0D004294 for ; Wed, 5 Mar 2008 03:23:13 -0800 X-ASG-Debug-ID: 1204716218-424501ed0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 195C7655182 for ; Wed, 5 Mar 2008 03:23:38 -0800 (PST) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id lhBubU0wHu92GdqJ for ; Wed, 05 Mar 2008 03:23:38 -0800 (PST) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m23KVgrC019646; Mon, 3 Mar 2008 15:31:43 -0500 Received: by josefsipek.net (Postfix, 
from userid 1000) id B5AE71C00124; Mon, 3 Mar 2008 15:31:43 -0500 (EST) Date: Mon, 3 Mar 2008 15:31:43 -0500 From: "Josef 'Jeff' Sipek" To: Jeff Breidenbach Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: disappearing xfs partition Subject: Re: disappearing xfs partition Message-ID: <20080303203143.GA21086@josefsipek.net> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1204716222 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43944 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14777 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs On Mon, Mar 03, 2008 at 05:14:38AM -0800, Jeff Breidenbach wrote: ... > # dpkg -l xfsprogs > Desired=Unknown/Install/Remove/Purge/Hold > | Status=Not/Installed/Config-f/Unpacked/Failed-cfg/Half-inst/t-aWait/T-pend > |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad) > ||/ Name Version Description > +++-==============-==============-============================================ > ii xfsprogs 2.9.5-1 Utilities for managing the XFS filesystem As others have already pointed out, the problem you have is not with XFS, but probably with udev. You may want to upgrade your xfsprogs. 
Version 2.9.5 has a buggy mkfs (the fs it makes uses only about 3/4 of the partition). Josef 'Jeff' Sipek. -- Computer Science is no more about computers than astronomy is about telescopes. - Edsger Dijkstra From owner-xfs@oss.sgi.com Wed Mar 5 05:52:59 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 05 Mar 2008 05:53:17 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.2 required=5.0 tests=AWL,BAYES_00,MIME_8BIT_HEADER autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m25DqurT011261 for ; Wed, 5 Mar 2008 05:52:59 -0800 X-ASG-Debug-ID: 1204725203-3131003c0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ti-out-0910.google.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 036DB655FD2 for ; Wed, 5 Mar 2008 05:53:23 -0800 (PST) Received: from ti-out-0910.google.com (ti-out-0910.google.com [209.85.142.185]) by cuda.sgi.com with ESMTP id 9HNvOPA3rZDYOSsw for ; Wed, 05 Mar 2008 05:53:23 -0800 (PST) Received: by ti-out-0910.google.com with SMTP id d10so1998157tib.18 for ; Wed, 05 Mar 2008 05:53:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; bh=gPA1zcrjYOYq58dB15MUQ8RlSnfGJBFsVvErME4Bk7s=; b=vaCMkG3beJt9YGU1TmEjIlNchpnRibPde/Q0VpB347ef4cJCnxc/JQ+vztos9yNsw7kbedopvYMJ47ZcCV9qWcRvBRebfFhBc3gSTtoxjaqBGDHoJeQiCDQvVhtKR/i0LyVzdeQ18KYMKMkOAFNEusiY4pbUPylkyXWARNywF28= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; 
b=rHuIf5k0b9zjJ3w6yfvxZpjAUr2E3HHV2DnaIAGcX/rgSBTdfFk/I+ONZnpOwrVpOIEZUK4pPC7MhmN9p0s+0OAVRTNUzS1D6VNxMcRSHiD4nAsqRcLOa2Bafn8k2sk6YoN9AA9o7bJGhnodCwaq17oWSXT7nkZCb04oeu9M8Sc= Received: by 10.150.155.1 with SMTP id c1mr1190380ybe.85.1204725198680; Wed, 05 Mar 2008 05:53:18 -0800 (PST) Received: by 10.150.96.5 with HTTP; Wed, 5 Mar 2008 05:53:18 -0800 (PST) Message-ID: <1a4a774c0803050553h7f6294cfq41c38f34ea92ceae@mail.gmail.com> Date: Wed, 5 Mar 2008 14:53:18 +0100 From: "=?ISO-8859-1?Q?Christian_R=F8snes?=" To: "David Chinner" X-ASG-Orig-Subj: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c Cc: xfs@oss.sgi.com In-Reply-To: <20080213214551.GR155407@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline References: <1a4a774c0802130251h657a52f7lb97942e7afdf6e3f@mail.gmail.com> <20080213214551.GR155407@sgi.com> X-Barracuda-Connect: ti-out-0910.google.com[209.85.142.185] X-Barracuda-Start-Time: 1204725206 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.43954 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id m25DqxrT011263 X-archive-position: 14778 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: christian.rosnes@gmail.com Precedence: bulk X-list: xfs On Wed, Feb 13, 2008 at 10:45 PM, David 
Chinner wrote:
> On Wed, Feb 13, 2008 at 11:51:51AM +0100, Christian Røsnes wrote:
> > Over the past month I've been hit with two cases of "xfs_trans_cancel
> > at line 1150"
> > The two errors occurred on different raid sets. In both cases the
> > error happened during
> > rsync from a remote server to this server, and the local partition
> > which reported
> > the error was 99% full (as reported by df -k, see below for details).
> >
> > System: Dell 2850
> > Mem: 4GB RAM
> > OS: Debian 3 (32-bit)
> > Kernel: 2.6.17.7 (custom compiled)
> >
> > I've been running this kernel since Aug 2006 without any of these
> > problems, until a month ago.
> >
> > I've not used any of the previous kernel in the 2.6.17 series.
> >
> > /usr/src/linux-2.6.17.7# grep 4K .config
> > # CONFIG_4KSTACKS is not set
> >
> >
> > Are there any known XFS problems with this kernel version and nearly
> > full partitions ?
>
> Yes. Deadlocks that weren't properly fixed until 2.6.18 (partially
> fixed in 2.6.17) and an accounting problem in the transaction code
> that leads to the shutdown you are seeing. The accounting problem is
> fixed by this commit:
>
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=45c34141126a89da07197d5b89c04c6847f1171a
>
> which I think went into 2.6.22.
>
> Luckily, neither of these problems result in corruption.
>
> > I'm thinking about upgrading the kernel to a newer version, to see if
> > it fixes this problem.
> > Are there any known XFS problems with version 2.6.24.2 ?
>
> Yes - a problem with readdir. The fix is currently in the stable
> queue (i.e for 2.6.24.3):
>
> http://git.kernel.org/?p=linux/kernel/git/stable/stable-queue.git;a=commit;h=ee864b866419890b019352412c7bc9634d96f61b
>
> So we are just waiting for Greg to release 2.6.24.3 now.
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> Principal Engineer
> SGI Australian Software Group

After being hit several times by the problem mentioned above (running
kernel 2.6.17.7), I upgraded the kernel to version 2.6.24.3. I then ran
an rsync test to a 99% full partition:

df -k:
/dev/sdb1  286380096  282994528  3385568  99%  /data

I expected the rsync run to fail simply by running out of space, but
instead I got another xfs_trans_cancel kernel message:

Filesystem "sdb1": XFS internal error xfs_trans_cancel at line 1163 of
file fs/xfs/xfs_trans.c.  Caller 0xc021a010
Pid: 11642, comm: rsync Not tainted 2.6.24.3FC #1
 [] xfs_trans_cancel+0x5d/0xe6
 [] xfs_mkdir+0x45a/0x493
 [] xfs_mkdir+0x45a/0x493
 [] xfs_acl_vhasacl_default+0x33/0x44
 [] xfs_vn_mknod+0x165/0x243
 [] xfs_access+0x2f/0x35
 [] xfs_vn_mkdir+0x12/0x14
 [] vfs_mkdir+0xa3/0xe2
 [] sys_mkdirat+0x8a/0xc3
 [] sys_mkdir+0x1f/0x23
 [] syscall_call+0x7/0xb
 =======================
xfs_force_shutdown(sdb1,0x8) called from line 1164 of file
fs/xfs/xfs_trans.c.  Return address = 0xc0212690
Filesystem "sdb1": Corruption of in-memory data detected.
Shutting down filesystem: sdb1
Please umount the filesystem, and rectify the problem(s)

Trying to umount /dev/sdb1 fails (umount just hangs). Rebooting the
system seems to hang as well - and I believe the kernel outputs this
message when trying to umount /dev/sdb1:

xfs_force_shutdown(sdb1,0x1) called from line 420 of file
fs/xfs/xfs_rw.c.  Return address = 0xc021cb21

After waiting 5 minutes I power-cycled the system to bring it back up.
After the restart I ran xfs_check /dev/sdb1 (there was no output from
xfs_check).

Could this be the same problem I experienced with 2.6.17.7 ?

Thanks

Christian

btw - I've previously run memtest overnight and not found any memory
problems.
From owner-xfs@oss.sgi.com Wed Mar 5 12:42:54 2008
From: Eric Sandeen <sandeen@sandeen.net>
To: David Chinner
Cc: sgi.bugs.xfs@engr.sgi.com, xfs@oss.sgi.com
Date: Wed, 05 Mar 2008 14:43:17 -0600
Subject: Re: TAKE 977545 - xfsaild causing too many wakeups
Message-ID: <47CF05E5.90409@sandeen.net>

David Chinner wrote:
> xfsaild causing too many wakeups
>
> Idle state is not being detected properly by the xfsaild push code.
> The current idle state is detected by an empty list, which may never
> happen with a mostly idle filesystem or one using lazy superblock
> counters. A single dirty item in the list that exists beyond the
> push target can result in repeated looping attempting to push
> up to the target, because it fails to check if the push target
> has been achieved or not.
>
> Fix by considering a dirty list with everything past the target
> as an idle state and set the timeout appropriately.

Will this go to 2.6.25?
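The idle decision described in this changelog can be sketched as a small model (a hypothetical simplification, not the kernel code; the function and parameter names are illustrative):

```python
def aild_next_timeout(dirty_lsns, push_target,
                      idle_timeout_ms=1000, busy_timeout_ms=10):
    """Model of the xfsaild idle check described above (a sketch only).

    Old behaviour: only an empty AIL counted as idle, so a single dirty
    item sitting past the push target kept the short busy timeout and
    the thread kept waking up with nothing left to push.
    Fixed behaviour: if every remaining item lies beyond the push
    target, the target has effectively been achieved - treat the list
    as idle and sleep for the long timeout."""
    if not dirty_lsns or min(dirty_lsns) >= push_target:
        return idle_timeout_ms  # nothing at or below the target: idle
    return busy_timeout_ms      # real work remains: push again soon
```

With one dirty item at LSN 150 and a push target of 100, the old empty-list test would keep the thread busy forever; the fixed check reports idle.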
-Eric

From owner-xfs@oss.sgi.com Wed Mar 5 13:00:23 2008
From: David Chinner <dgc@sgi.com>
To: Eric Sandeen
Cc: xfs-dev, xfs@oss.sgi.com
Date: Thu, 6 Mar 2008 08:00:35 +1100
Subject: Re: TAKE 977545 - xfsaild causing too many wakeups
Message-ID: <20080305210035.GA155407@sgi.com>

On Wed, Mar 05, 2008 at 02:43:17PM -0600, Eric Sandeen wrote:
> David Chinner wrote:
> > xfsaild causing too
many wakeups
> >
> > Idle state is not being detected properly by the xfsaild push code.
> > The current idle state is detected by an empty list, which may never
> > happen with a mostly idle filesystem or one using lazy superblock
> > counters. A single dirty item in the list that exists beyond the
> > push target can result in repeated looping attempting to push
> > up to the target, because it fails to check if the push target
> > has been achieved or not.
> >
> > Fix by considering a dirty list with everything past the target
> > as an idle state and set the timeout appropriately.
>
> Will this go to 2.6.25?

Yes, it certainly should.

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Wed Mar 5 20:41:43 2008
From: Niv Sardi <xaiki@sgi.com>
To: Eric Sandeen
Cc: xfs@oss.sgi.com, xfs-dev@sgi.com
Date: Thu, 06 Mar 2008 15:41:29 +1100
Subject: Re: [REVIEW] mkfs.xfs man page needs the default settings updated, TAKE 2.

Thanks to Eric for the comments, is this better ?
Cheers,
--
Niv Sardi

From 7e0e328663858ecf13f35678f1a6d349c3d4dd5a Mon Sep 17 00:00:00 2001
From: Niv Sardi
Date: Fri, 22 Feb 2008 16:48:32 +1100
Subject: [PATCH] Update mkfs manpage for new defaults:

log, attr and inodes v2,
Drop the ability to turn unwritten extents off completly,
reduce imaxpct for big filesystems, less AGs for single disks configs.
---
 xfsprogs/man/man8/mkfs.xfs.8 | 41 ++++++++++++++++++-----------------------
 1 files changed, 18 insertions(+), 23 deletions(-)

diff --git a/xfsprogs/man/man8/mkfs.xfs.8 b/xfsprogs/man/man8/mkfs.xfs.8
index b6024c3..afc284c 100644
--- a/xfsprogs/man/man8/mkfs.xfs.8
+++ b/xfsprogs/man/man8/mkfs.xfs.8
@@ -304,10 +304,16 @@ bits.
 This specifies the maximum percentage of space in the filesystem that
 can be allocated to inodes. The default
 .I value
-is 25%. Setting the
+is 25% for filesystems under 1TB, 5% for filesystems under 50TB and 1%
+for filesystems over 50TB. Setting the
 .I value
-to 0 means that essentially all of the filesystem can
-become inode blocks.
+to 0 means that essentially all of the filesystem can become inode
+blocks. Note that this is only used by inode32 (on 32bits platforms),
+and is ignored on 64bits platforms. On 32 bits platforms, we can only
+use the first TB of disk space for inodes, so the allocator will try
+to avoid this region, hence miss-using the first AG if this is set to
+high (the worst case is a 4TB filesystem where a full AG will be
+untouched by anything but inodes with a 25% maxpct).
 .TP
 .BI align[= value ]
 This is used to specify that inode allocation is or is not aligned. The
@@ -325,18 +331,11 @@ that does not have the inode alignment feature
 (any release of IRIX before 6.2, and IRIX 6.2 without XFS patches).
 .TP
 .BI attr[= value ]
-This is used to specify the version of extended attribute inline allocation
-policy to be used.
-By default, this is zero. Once extended attributes are used for the
-first time, the version will be set to either one or two.
-The current version (two) uses a more efficient algorithm for managing
-the available inline inode space than version one does, however, for
-backward compatibility reasons (and in the absence of the
-.B attr=2
-mkfs option, or the
-.B attr2
-mount option), version one will be selected
-by default when attributes are first used on a filesystem.
+This is used to specify the version of extended attribute inline
+allocation policy to be used. By default, this is 2. The current
+version (two) uses a more efficient algorithm for managing the
+available inline inode space than version one does. This option is
+kept for backward compatibility, attr2 was added in kernel 2.6.16.
 .RE
 .TP
 .BI \-l " log_section_options"
@@ -389,15 +388,11 @@ and directory block size, the minimum log size is larger than 512 blocks.
 .BI version= value
 This specifies the version of the log. The
 .I value
-is either 1 or 2. Specifying
+is either 1 or 2 (the default is 2).
 .B version=2
-enables the
-.B sunit
-suboption, and allows the logbsize to be increased beyond 32K.
-Version 2 logs are automatically selected if a log stripe unit
-is specified. See
-.BR sunit " and " su
-suboptions, below.
+allows bigger log buffer size (version 1 had a limit at 32K), and the
+use of the sunit and su options. Possibility to use version=1 is left
+for backward compatibility only.
 .TP
 .BI sunit= value
 This specifies the alignment to be used for log writes.
The
--
1.5.4.3

From owner-xfs@oss.sgi.com Wed Mar 5 21:29:18 2008
From: Eric Sandeen <sandeen@sandeen.net>
To: Niv Sardi
Cc: xfs@oss.sgi.com, xfs-dev@sgi.com
Date: Wed, 05 Mar 2008 23:29:11 -0600
Subject: Re: [REVIEW] mkfs.xfs man page needs the default settings updated, TAKE 2.
Message-ID: <47CF8127.40900@sandeen.net>
Niv Sardi wrote:
> Thanks to Eric for the comments, is this better ?
>
> Cheers,
>
> -- Niv Sardi
>
> From 7e0e328663858ecf13f35678f1a6d349c3d4dd5a Mon Sep 17 00:00:00 2001
> From: Niv Sardi
> Date: Fri, 22 Feb 2008 16:48:32 +1100
> Subject: [PATCH] Update mkfs manpage for new defaults:
>
> log, attr and inodes v2,
> Drop the ability to turn unwritten extents off completly,
> reduce imaxpct for big filesystems, less AGs for single disks configs.
> ---
>  xfsprogs/man/man8/mkfs.xfs.8 | 41 ++++++++++++++++++-----------------------
>  1 files changed, 18 insertions(+), 23 deletions(-)
>
> diff --git a/xfsprogs/man/man8/mkfs.xfs.8 b/xfsprogs/man/man8/mkfs.xfs.8
> index b6024c3..afc284c 100644
> --- a/xfsprogs/man/man8/mkfs.xfs.8
> +++ b/xfsprogs/man/man8/mkfs.xfs.8
> @@ -304,10 +304,16 @@ bits.
>  This specifies the maximum percentage of space in the filesystem that
>  can be allocated to inodes. The default
>  .I value
> -is 25%. Setting the
> +is 25% for filesystems under 1TB, 5% for filesystems under 50TB and 1%
> +for filesystems over 50TB. Setting the
>  .I value
> -to 0 means that essentially all of the filesystem can
> -become inode blocks.
> +to 0 means that essentially all of the filesystem can become inode
> +blocks. Note that this is only used by inode32 (on 32bits platforms),
> +and is ignored on 64bits platforms.

Really?  The m_maxicount tests in xfs_ialloc_ag_alloc and xfs_dialloc
don't seem to care about inode32 or not, unless I'm missing something.

> +On 32 bits platforms, we can only
> +use the first TB of disk space for inodes,

well, that depends on the inode size...

> +so the allocator will try

the data allocator...

> +to avoid this region, hence miss-using the first AG if this is set to
> +high (the worst case is a 4TB filesystem where a full AG will be
> +untouched by anything but inodes with a 25% maxpct).

ah, ok.  It becomes slightly clearer.  :)

How about...

maxpct=value
    This specifies the maximum percentage of space in the filesystem
    that can be allocated to inodes. The default value is 25% for
    filesystems under 1TB, 5% for filesystems under 50TB and 1% for
    filesystems over 50TB.

    In the default inode allocation mode, inode blocks are chosen such
    that inode numbers will not exceed 32 bits, which restricts the
    inode blocks to the lower portion of the filesystem. The data
    block allocator will avoid these low blocks to accommodate the
    specified maxpct, so a high value may result in a filesystem with
    nothing but inodes in a significant portion of the lower blocks of
    the filesystem. (This restriction is not present when the
    filesystem is mounted with the "inode64" option on 64-bit
    platforms).

    Setting the value to 0 means that essentially all of the
    filesystem can become inode blocks, subject to inode32
    restrictions.

    This value can be modified with xfs_growfs(8).

eh... could be better... but how's it sound?
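The "inode numbers will not exceed 32 bits" restriction discussed above can be checked with a back-of-the-envelope calculation (a sketch under simplifying assumptions: an inode number is treated as a filesystem block number plus offset-within-block bits, ignoring AG packing; the function name is illustrative):

```python
import math

def inode32_limit_bytes(block_size=4096, inode_size=256):
    # Inodes-per-block fixes how many low bits of the inode number
    # address an inode within its block; the remaining bits of a
    # 32-bit inode number address the filesystem block itself.
    offset_bits = int(math.log2(block_size // inode_size))
    max_blocks = 2 ** (32 - offset_bits)
    return max_blocks * block_size

inode32_limit_bytes()                 # 256-byte inodes -> 1 TiB
inode32_limit_bytes(inode_size=2048)  # 2KiB inodes -> 8 TiB
```

This reproduces both figures from the thread: the "first TB" for the default 256-byte inodes, and the 8TB that larger 2k inodes stretch the limit to.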
> .TP
> .BI align[= value ]
> This is used to specify that inode allocation is or is not aligned. The
> @@ -325,18 +331,11 @@ that does not have the inode alignment feature
> (any release of IRIX before 6.2, and IRIX 6.2 without XFS patches).
> .TP
> .BI attr[= value ]
> -This is used to specify the version of extended attribute inline allocation
> -policy to be used.
> -By default, this is zero. Once extended attributes are used for the
> -first time, the version will be set to either one or two.
> -The current version (two) uses a more efficient algorithm for managing
> -the available inline inode space than version one does, however, for
> -backward compatibility reasons (and in the absence of the
> -.B attr=2
> -mkfs option, or the
> -.B attr2
> -mount option), version one will be selected
> -by default when attributes are first used on a filesystem.
> +This is used to specify the version of extended attribute inline
> +allocation policy to be used. By default, this is 2. The current
> +version (two) uses a more efficient algorithm for managing the
> +available inline inode space than version one does. This option is
> +kept for backward compatibility, attr2 was added in kernel 2.6.16.

attr[=value]   (hmm, why the brackets; is value really optional?)
    This is used to specify the version of extended attribute inline
    allocation policy to be used. By default, this is 2, which uses an
    efficient algorithm for managing the available inline inode space
    between attribute and extent data. The previous version 1, which
    has fixed regions for attribute and extent data, is kept for
    backwards compatibility with kernels older than version 2.6.16.

(aside: will older kernels refuse to mount attr2 filesystems?  I
suppose they will but I'm not sure they need to?)

-Eric

> .RE
> .TP
> .BI \-l " log_section_options"
> @@ -389,15 +388,11 @@ and directory block size, the minimum log size is larger than 512 blocks.
> .BI version= value
> This specifies the version of the log. The
> .I value
> -is either 1 or 2. Specifying
> +is either 1 or 2 (the default is 2).
> .B version=2
> -enables the
> -.B sunit
> -suboption, and allows the logbsize to be increased beyond 32K.
> -Version 2 logs are automatically selected if a log stripe unit
> -is specified. See
> -.BR sunit " and " su
> -suboptions, below.
> +allows bigger log buffer size (version 1 had a limit at 32K), and the
> +use of the sunit and su options. Possibility to use version=1 is left
> +for backward compatibility only.

version=value
    This specifies the version of the log. The current default is 2,
    which allows for larger log buffer sizes, as well as supporting
    stripe-aligned log writes (see the sunit and su options, below).
    The previous version 1, which is limited to 32k log buffers and
    does not support stripe-aligned writes, is kept for backwards
    compatibility with kernels older than version 2.XX.XX

> .TP
> .BI sunit= value
> This specifies the alignment to be used for log writes. The
> -- 1.5.4.3

From owner-xfs@oss.sgi.com Wed Mar 5 21:59:29 2008
From: Barry Naujok <bnaujok@sgi.com>
To: "xfs@oss.sgi.com", xfs-dev
Date: Thu, 06 Mar 2008 17:02:07 +1100
Subject: Final call for review of sb_bad_features2 in userspace
Organization: SGI
I think the attached patch may be the least offensive for past
kernels, and XFSQA!

xfs_check and xfs_repair will ignore sb_bad_features2 if it is zero,
and if not, make sure it's the same as sb_features2. mkfs.xfs will set
sb_bad_features2 to be the same.

Maybe if we change the behaviour of the kernel mount code with respect
to sb_bad_features2, this can be revisited.

(An intermediate solution I had was: if "xfs_repair -n" is run AND
sb_bad_features2 is zero, then ignore it to let xfs_repair continue,
otherwise duplicate it - but doing that requires a golden output
change to QA 030 and 033 unless the kernel mount code is changed...
ARGH!)
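The reconciliation rule described above (ignore a zero sb_bad_features2; on a mismatch, OR the bits together and keep both fields identical) can be modelled as follows (a sketch; the function name is illustrative, the real logic lives in xfs_repair's phase 1):

```python
def reconcile_features2(features2, bad_features2):
    """Model of the sb_features2 / sb_bad_features2 rule described
    above: a zero bad_features2 is ignored; otherwise the two fields
    are OR-ed together and written back identical, so older kernels
    that read the mislocated field see the same feature bits.
    Returns (features2, bad_features2, modified)."""
    modified = False
    if bad_features2 != 0 and bad_features2 != features2:
        features2 |= bad_features2
        bad_features2 = features2
        modified = True
    return features2, bad_features2, modified
```

For example, a superblock with features2=0x8 and bad_features2=0x10 would be rewritten with both fields set to 0x18 and flagged as modified.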
--
===========================================================================
xfsprogs/db/check.c
===========================================================================
--- a/xfsprogs/db/check.c	2008-03-06 16:59:31.000000000 +1100
+++ b/xfsprogs/db/check.c	2008-03-06 12:32:54.664882390 +1100
@@ -869,6 +869,15 @@ blockget_f(
 			mp->m_sb.sb_frextents, frextents);
 		error++;
 	}
+	if (mp->m_sb.sb_bad_features2 != 0 &&
+	    mp->m_sb.sb_bad_features2 != mp->m_sb.sb_features2) {
+		if (!sflag)
+			dbprintf("sb_features2 (0x%x) not same as "
+				"sb_bad_features2 (0x%x)\n",
+				mp->m_sb.sb_features2,
+				mp->m_sb.sb_bad_features2);
+		error++;
+	}
 	if ((sbversion & XFS_SB_VERSION_ATTRBIT) &&
 	    !XFS_SB_VERSION_HASATTR(&mp->m_sb)) {
 		if (!sflag)
===========================================================================
xfsprogs/db/sb.c
===========================================================================
--- a/xfsprogs/db/sb.c	2008-03-06 16:59:31.000000000 +1100
+++ b/xfsprogs/db/sb.c	2008-02-29 17:16:33.770423296 +1100
@@ -108,6 +108,7 @@ const field_t sb_flds[] = {
 	{ "logsectsize", FLDT_UINT16D, OI(OFF(logsectsize)), C1, 0, TYP_NONE },
 	{ "logsunit", FLDT_UINT32D, OI(OFF(logsunit)), C1, 0, TYP_NONE },
 	{ "features2", FLDT_UINT32X, OI(OFF(features2)), C1, 0, TYP_NONE },
+	{ "bad_features2", FLDT_UINT32X, OI(OFF(bad_features2)), C1, 0, TYP_NONE },
 	{ NULL }
 };
===========================================================================
xfsprogs/include/xfs_sb.h
===========================================================================
--- a/xfsprogs/include/xfs_sb.h	2008-03-06 16:59:31.000000000 +1100
+++ b/xfsprogs/include/xfs_sb.h	2008-02-29 17:16:33.814417687 +1100
@@ -151,6 +151,7 @@ typedef struct xfs_sb
 	__uint16_t	sb_logsectsize;	/* sector size for the log, bytes */
 	__uint32_t	sb_logsunit;	/* stripe unit size for the log */
 	__uint32_t	sb_features2;	/* additional feature bits */
+	__uint32_t	sb_bad_features2; /* unusable space */
 } xfs_sb_t;

 /*
@@ -169,7 +170,7 @@ typedef enum {
 	XFS_SBS_GQUOTINO, XFS_SBS_QFLAGS, XFS_SBS_FLAGS, XFS_SBS_SHARED_VN,
 	XFS_SBS_INOALIGNMT, XFS_SBS_UNIT, XFS_SBS_WIDTH, XFS_SBS_DIRBLKLOG,
 	XFS_SBS_LOGSECTLOG, XFS_SBS_LOGSECTSIZE, XFS_SBS_LOGSUNIT,
-	XFS_SBS_FEATURES2,
+	XFS_SBS_FEATURES2, XFS_SBS_BAD_FEATURES2,
 	XFS_SBS_FIELDCOUNT
 } xfs_sb_field_t;
===========================================================================
xfsprogs/libxfs/xfs_mount.c
===========================================================================
--- a/xfsprogs/libxfs/xfs_mount.c	2008-03-06 16:59:31.000000000 +1100
+++ b/xfsprogs/libxfs/xfs_mount.c	2008-02-29 17:16:33.834415138 +1100
@@ -140,6 +140,7 @@ static struct {
 	{ offsetof(xfs_sb_t, sb_logsectsize),0 },
 	{ offsetof(xfs_sb_t, sb_logsunit), 0 },
 	{ offsetof(xfs_sb_t, sb_features2), 0 },
+	{ offsetof(xfs_sb_t, sb_bad_features2), 0 },
 	{ sizeof(xfs_sb_t), 0 }
 };
===========================================================================
xfsprogs/mkfs/xfs_mkfs.c
===========================================================================
--- a/xfsprogs/mkfs/xfs_mkfs.c	2008-03-06 16:59:31.000000000 +1100
+++ b/xfsprogs/mkfs/xfs_mkfs.c	2008-03-05 15:27:37.568461787 +1100
@@ -2103,6 +2103,13 @@ an AG size that is one stripe unit small
 		dirversion == 2, logversion == 2, attrversion == 1,
 		(sectorsize != BBSIZE || lsectorsize != BBSIZE),
 		sbp->sb_features2 != 0);
+	/*
+	 * Due to a structure alignment issue, sb_features2 ended up in one
+	 * of two locations, the second "incorrect" location represented by
+	 * the sb_bad_features2 field. To avoid older kernels mounting
+	 * filesystems they shouldn't, set both fields to the same value.
+	 */
+	sbp->sb_bad_features2 = sbp->sb_features2;

 	if (force_overwrite)
 		zero_old_xfs_structures(&xi, sbp);
===========================================================================
xfsprogs/repair/phase1.c
===========================================================================
--- a/xfsprogs/repair/phase1.c	2008-03-06 16:59:31.000000000 +1100
+++ b/xfsprogs/repair/phase1.c	2008-03-06 16:57:40.021125442 +1100
@@ -91,6 +91,20 @@ phase1(xfs_mount_t *mp)
 		primary_sb_modified = 1;
 	}

+	/*
+	 * Check bad_features2 and make sure features2 is the same as
+	 * bad_features2 (ORing the two together). Leave bad_features2
+	 * set so older kernels can still use it and not mount unsupported
+	 * filesystems when they read bad_features2.
+	 */
+	if (sb->sb_bad_features2 != 0 &&
+	    sb->sb_bad_features2 != sb->sb_features2) {
+		sb->sb_features2 |= sb->sb_bad_features2;
+		sb->sb_bad_features2 = sb->sb_features2;
+		primary_sb_modified = 1;
+		do_warn(_("superblock has a features2 mismatch, correcting\n"));
+	}
+
 	if (primary_sb_modified)  {
 		if (!no_modify) {
 			do_warn(_("writing modified primary superblock\n"));

From owner-xfs@oss.sgi.com Wed Mar 5 22:12:48 2008
From: Lachlan McIlroy <lachlan@sgi.com>
Date: Thu, 06 Mar 2008 17:13:07 +1100
Message-ID: <20080306061308.0808658C4C0F@chook.melbourne.sgi.com>
To:
torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@linux-foundation.org
Subject: [GIT PULL] XFS update for 2.6.25-rc5

Please pull from the for-linus branch:

	git pull git://oss.sgi.com:8090/xfs/xfs-2.6.git for-linus

This will update the following files:

 fs/xfs/xfs_iget.c      |  1 +
 fs/xfs/xfs_trans_ail.c | 17 ++++++++++-------
 2 files changed, 11 insertions(+), 7 deletions(-)

through these commits:

commit 72772a3b5b158cddcfbbff3ef13b26b03a905158
Author: David Chinner
Date:   Thu Mar 6 13:49:43 2008 +1100

    [XFS] fix inode leak in xfs_iget_core()

    If the radix_tree_preload() fails, we need to destroy the inode we
    just read in before trying again. This could leak xfs_vnode
    structures when there is memory pressure. Noticed by Christoph
    Hellwig.

    SGI-PV: 977823
    SGI-Modid: xfs-linux-melb:xfs-kern:30606a

    Signed-off-by: David Chinner
    Signed-off-by: Lachlan McIlroy
    Signed-off-by: Christoph Hellwig

commit 92d9cd1059f80b9c89dee191ffb88b0872e6a7ae
Author: David Chinner
Date:   Thu Mar 6 13:45:10 2008 +1100

    [XFS] 977545 xfsaild causing too many wakeups

    Idle state is not being detected properly by the xfsaild push
    code. The current idle state is detected by an empty list, which
    may never happen with a mostly idle filesystem or one using lazy
    superblock counters.
A single dirty item in the list that exists beyond the push target can result in repeated looping attempting to push up to the target because it fails to check if the push target has been achieved or not. Fix by considering a dirty list with everything past the target as an idle state and set the timeout appropriately. SGI-PV: 977545 SGI-Modid: xfs-linux-melb:xfs-kern:30532a Signed-off-by: David Chinner Signed-off-by: Christoph Hellwig Signed-off-by: Lachlan McIlroy From owner-xfs@oss.sgi.com Wed Mar 5 22:12:35 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 05 Mar 2008 22:13:14 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m266CSXW027587 for ; Wed, 5 Mar 2008 22:12:33 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA16886; Thu, 6 Mar 2008 17:12:47 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m266CkLF84978843; Thu, 6 Mar 2008 17:12:46 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m266Cg7o83606445; Thu, 6 Mar 2008 17:12:42 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 6 Mar 2008 17:12:42 +1100 From: David Chinner To: Niv Sardi Cc: Eric Sandeen , xfs@oss.sgi.com, xfs-dev@sgi.com Subject: Re: [REVIEW] mkfs.xfs man page needs the default settings updated, TAKE 2.
Message-ID: <20080306061242.GG155407@sgi.com> References: <47CD6D0E.3090301@sandeen.net> <47CD6ED7.5050505@sandeen.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14784 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Mar 06, 2008 at 03:41:29PM +1100, Niv Sardi wrote: > Thanks to Eric for the comments, is this better ? Not much of a changelog.... > diff --git a/xfsprogs/man/man8/mkfs.xfs.8 b/xfsprogs/man/man8/mkfs.xfs.8 > index b6024c3..afc284c 100644 > --- a/xfsprogs/man/man8/mkfs.xfs.8 > +++ b/xfsprogs/man/man8/mkfs.xfs.8 > @@ -304,10 +304,16 @@ bits. > This specifies the maximum percentage of space in the filesystem that > can be allocated to inodes. The default > .I value > -is 25%. Setting the > +is 25% for filesystems under 1TB, 5% for filesystems under 50TB and 1% > +for filesystems over 50TB. Setting the > .I value > -to 0 means that essentially all of the filesystem can > -become inode blocks. > +to 0 means that essentially all of the filesystem can become inode > +blocks. Note that this is only used by inode32 (on 32bits platforms), > +and is ignored on 64bits platforms. On 32 bits platforms, we can only This is wrong. inode32 is the default on 64 bit platforms as well, and it matters then as well. > +use the first TB of disk space for inodes, so the allocator will try That's not strictly true, either - it depends on inode size; 2k inodes stretch this to 8TB. > +to avoid this region, hence miss-using the first AG if this is set to > +high (the worst case is a 4TB filesystem where a full AG will be > +untouched by anything but inodes with a 25% maxpct).
No, it doesn't "miss-use" this space - it reserves it for inodes and metadata and prevents data allocation in those AGs until all other space is consumed. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Mar 5 22:19:30 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 05 Mar 2008 22:19:36 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.8 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43, J_CHICKENPOX_65 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m266JNsU028769 for ; Wed, 5 Mar 2008 22:19:27 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA17212; Thu, 6 Mar 2008 17:19:46 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m266JjLF87309214; Thu, 6 Mar 2008 17:19:46 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m266Jfek87293730; Thu, 6 Mar 2008 17:19:41 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 6 Mar 2008 17:19:41 +1100 From: David Chinner To: Eric Sandeen Cc: Niv Sardi , xfs@oss.sgi.com, xfs-dev@sgi.com Subject: Re: [REVIEW] mkfs.xfs man page needs the default settings updated, TAKE 2. 
Message-ID: <20080306061941.GH155407@sgi.com> References: <47CD6D0E.3090301@sandeen.net> <47CD6ED7.5050505@sandeen.net> <47CF8127.40900@sandeen.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47CF8127.40900@sandeen.net> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14786 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Wed, Mar 05, 2008 at 11:29:11PM -0600, Eric Sandeen wrote: > Niv Sardi wrote: > > Thanks to Eric for the comments, is this better ? > > > > Cheers, > > > > -- Niv Sardi > > > > > > > > From 7e0e328663858ecf13f35678f1a6d349c3d4dd5a Mon Sep 17 00:00:00 2001 > > From: Niv Sardi > > Date: Fri, 22 Feb 2008 16:48:32 +1100 > > Subject: [PATCH] Update mkfs manpage for new defaults: > > > > log, attr and inodes v2, > > Drop the ability to turn unwritten extents off completly, > > reduce imaxpct for big filesystems, less AGs for single disks configs. > > --- > > xfsprogs/man/man8/mkfs.xfs.8 | 41 ++++++++++++++++++----------------------- > > 1 files changed, 18 insertions(+), 23 deletions(-) > > > > diff --git a/xfsprogs/man/man8/mkfs.xfs.8 b/xfsprogs/man/man8/mkfs.xfs.8 > > index b6024c3..afc284c 100644 > > --- a/xfsprogs/man/man8/mkfs.xfs.8 > > +++ b/xfsprogs/man/man8/mkfs.xfs.8 > > @@ -304,10 +304,16 @@ bits. > > This specifies the maximum percentage of space in the filesystem that > > can be allocated to inodes. The default > > .I value > > -is 25%. Setting the > > +is 25% for filesystems under 1TB, 5% for filesystems under 50TB and 1% > > +for filesystems over 50TB. Setting the > > .I value > > -to 0 means that essentially all of the filesystem can > > -become inode blocks. > > +to 0 means that essentially all of the filesystem can become inode > > +blocks. 
Note that this is only used by inode32 (on 32bits platforms), > > +and is ignored on 64bits platforms. > > Really? The m_maxicount tests in xfs_ialloc_ag_alloc and xfs_dialloc > don't seem to care about inode32 or not, unless I'm missing something. See xfs_set_maxicount() and then how it is used in xfs_initialize_perag() to set up pag->pagi_inodeok, pag->pagf_metadata and mp->m_maxagi which are used by the allocator.... > How about... > > maxpct=value > This specifies the maximum percentage of space in the > filesystem that can be allocated to inodes. The > default value is 25% for filesystems under 1TB, 5% for > filesystems under 50TB and 1% for filesystems over 50TB. > > In the default inode allocation mode, inode blocks are > chosen such that inode numbers will not exceed 32 bits, > which restricts the inode blocks to the lower portion of > the filesystem. The data block allocator will avoid these > low blocks to accommodate the specified maxpct, so a high > value may result in a filesystem with nothing but inodes > in a significant portion of the lower blocks of the > filesystem. (This restriction is not present when > the filesystem is mounted with the "inode64" option on > 64-bit platforms). > > Setting the value to 0 means that essentially all of the > filesystem can become inode blocks, subject to inode32 > restrictions. > > This value can be modified with xfs_growfs(8). > > eh... could be better... but how's it sound? Much better. Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Mar 6 03:10:08 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 06 Mar 2008 03:10:24 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00,MIME_8BIT_HEADER autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m26BA7cq017274 for ; Thu, 6 Mar 2008 03:10:08 -0800 X-ASG-Debug-ID: 1204801835-584f01e30000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ti-out-0910.google.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 42D9065F041 for ; Thu, 6 Mar 2008 03:10:35 -0800 (PST) Received: from ti-out-0910.google.com (ti-out-0910.google.com [209.85.142.184]) by cuda.sgi.com with ESMTP id dEl8L5FemDiuCkVF for ; Thu, 06 Mar 2008 03:10:35 -0800 (PST) Received: by ti-out-0910.google.com with SMTP id d10so2606410tib.18 for ; Thu, 06 Mar 2008 03:10:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; bh=n+fhSAm+DLNnIZ4bydPpj+yzn+KtZQq/g3HlgFCpIFQ=; b=rFebC9NOA5nt94u+KycpIjg7JnJB+BIcR1ZXcB+r1Ctrp6l5wIii3yZgJ0RdAZeXCs7lK1/9MiBxTbhp1e5FVqDPlu2hcRTukbchlWYY/hJTRI78gRxSk/46GNgLFI47myThxCugFJz8T3VtA1zj81iNgws5shswetwoBpuO5Fo= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=ahPk6NoF68Znqz8Ne12QZaLmRwIO1z0vfu0y3anucTIrs6K5W+rLP30PmaitK63WyvEDyvV7s7x7ruyaPbg94GxMs4hR7WSQfMCLB7R2/WlnzPlbEC+5Y3iKAXddzSHsi3EtRtZNo6QnIQ9if/ukekI4xpMvTsR5//aiGzO9Ux4= Received: by 
10.150.195.21 with SMTP id s21mr1871319ybf.87.1204801832206; Thu, 06 Mar 2008 03:10:32 -0800 (PST) Received: by 10.150.96.5 with HTTP; Thu, 6 Mar 2008 03:10:32 -0800 (PST) Message-ID: <1a4a774c0803060310w2642224w690ac8fa13f96ec@mail.gmail.com> Date: Thu, 6 Mar 2008 12:10:32 +0100 From: "=?ISO-8859-1?Q?Christian_R=F8snes?=" To: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c In-Reply-To: <1a4a774c0803050553h7f6294cfq41c38f34ea92ceae@mail.gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline References: <1a4a774c0802130251h657a52f7lb97942e7afdf6e3f@mail.gmail.com> <20080213214551.GR155407@sgi.com> <1a4a774c0803050553h7f6294cfq41c38f34ea92ceae@mail.gmail.com> X-Barracuda-Connect: ti-out-0910.google.com[209.85.142.184] X-Barracuda-Start-Time: 1204801837 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44034 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id m26BA8cq017278 X-archive-position: 14787 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: christian.rosnes@gmail.com Precedence: bulk X-list: xfs On Wed, Mar 5, 2008 at 2:53 PM, Christian Rřsnes wrote: > > On Wed, Feb 13, 2008 at 11:51:51AM +0100, Christian Rřsnes wrote: > > > Over the past month I've been 
hit with two cases of "xfs_trans_cancel > > > at line 1150" > > > The two errors occurred on different raid sets. In both cases the > > > error happened during > > > rsync from a remote server to this server, and the local partition > > > which reported > > > the error was 99% full (as reported by df -k, see below for details). > > > > > > System: Dell 2850 > > > Mem: 4GB RAM > > > OS: Debian 3 (32-bit) > > > Kernel: 2.6.17.7 (custom compiled) > > > > > After being hit several times by the problem mentioned above (running > kernel 2.6.17.7), > I upgraded the kernel to version 2.6.24.3. I then ran a rsync test to > a 99% full partition: > > df -k: > /dev/sdb1 286380096 282994528 3385568 99% /data > > The rsync application will probably fail because it will most likely > run out of space, > but I got another xfs_trans_cancel kernel message: > > Filesystem "sdb1": XFS internal error xfs_trans_cancel at line 1163 of > file fs/xfs/xfs_trans.c. Caller 0xc021a010 > Pid: 11642, comm: rsync Not tainted 2.6.24.3FC #1 > [] xfs_trans_cancel+0x5d/0xe6 > [] xfs_mkdir+0x45a/0x493 > [] xfs_mkdir+0x45a/0x493 > [] xfs_acl_vhasacl_default+0x33/0x44 > [] xfs_vn_mknod+0x165/0x243 > [] xfs_access+0x2f/0x35 > [] xfs_vn_mkdir+0x12/0x14 > [] vfs_mkdir+0xa3/0xe2 > [] sys_mkdirat+0x8a/0xc3 > [] sys_mkdir+0x1f/0x23 > [] syscall_call+0x7/0xb > ======================= > xfs_force_shutdown(sdb1,0x8) called from line 1164 of file > fs/xfs/xfs_trans.c. Return address = 0xc0212690 > > Filesystem "sdb1": Corruption of in-memory data detected. Shutting > down filesystem: sdb1 > Please umount the filesystem, and rectify the problem(s) > Actually, a single mkdir command is enough to trigger the filesystem shutdown when its 99% full (according to df -k): /data# mkdir test mkdir: cannot create directory `test': No space left on device Filesystem "sdb1": XFS internal error xfs_trans_cancel at line 1163 of file fs/xfs/xfs_trans.c. 
Caller 0xc021a010 Pid: 23380, comm: mkdir Not tainted 2.6.24.3FC #1 [] xfs_trans_cancel+0x5d/0xe6 [] xfs_mkdir+0x45a/0x493 [] xfs_mkdir+0x45a/0x493 [] xfs_acl_vhasacl_default+0x33/0x44 [] xfs_vn_mknod+0x165/0x243 [] xfs_access+0x2f/0x35 [] xfs_vn_mkdir+0x12/0x14 [] vfs_mkdir+0xa3/0xe2 [] sys_mkdirat+0x8a/0xc3 [] sys_mkdir+0x1f/0x23 [] syscall_call+0x7/0xb [] atm_reset_addr+0xd/0x83 ======================= xfs_force_shutdown(sdb1,0x8) called from line 1164 of file fs/xfs/xfs_trans.c. Return address = 0xc0212690 Filesystem "sdb1": Corruption of in-memory data detected. Shutting down filesystem: sdb1 Please umount the filesystem, and rectify the problem(s) df -k ----- /dev/sdb1 286380096 282994528 3385568 99% /data df -i ----- /dev/sdb1 10341248 3570112 6771136 35% /data xfs_info -------- meta-data=/dev/sdb1 isize=512 agcount=16, agsize=4476752 blks = sectsz=512 attr=0 data = bsize=4096 blocks=71627792, imaxpct=25 = sunit=16 swidth=32 blks, unwritten=1 naming =version 2 bsize=4096 log =internal bsize=4096 blocks=32768, version=2 = sectsz=512 sunit=16 blks, lazy-count=0 realtime =none extsz=65536 blocks=0, rtextents=0 xfs_db -r -c 'sb 0' -c p /dev/sdb1 ---------------------------------- magicnum = 0x58465342 blocksize = 4096 dblocks = 71627792 rblocks = 0 rextents = 0 uuid = d16489ab-4898-48c2-8345-6334af943b2d logstart = 67108880 rootino = 128 rbmino = 129 rsumino = 130 rextsize = 16 agblocks = 4476752 agcount = 16 rbmblocks = 0 logblocks = 32768 versionnum = 0x3584 sectsize = 512 inodesize = 512 inopblock = 8 fname = "\000\000\000\000\000\000\000\000\000\000\000\000" blocklog = 12 sectlog = 9 inodelog = 9 inopblog = 3 agblklog = 23 rextslog = 0 inprogress = 0 imax_pct = 25 icount = 3570112 ifree = 0 fdblocks = 847484 frextents = 0 uquotino = 0 gquotino = 0 qflags = 0 flags = 0 shared_vn = 0 inoalignmt = 2 unit = 16 width = 32 dirblklog = 0 logsectlog = 0 logsectsize = 0 logsunit = 65536 features2 = 0 Christian From owner-xfs@oss.sgi.com Thu Mar 6 08:09:37 2008 
Received: with ECARTIS (v1.0.0; list xfs); Thu, 06 Mar 2008 08:09:54 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m26G9ZAB025720 for ; Thu, 6 Mar 2008 08:09:36 -0800 X-ASG-Debug-ID: 1204819803-09e6018a0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.pawisda.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 91918F2A75F for ; Thu, 6 Mar 2008 08:10:04 -0800 (PST) Received: from mail.pawisda.de (mail.pawisda.de [213.157.4.156]) by cuda.sgi.com with ESMTP id nRVxdO3g7rsyeaRU for ; Thu, 06 Mar 2008 08:10:04 -0800 (PST) Received: from localhost (localhost.intra.frontsite.de [127.0.0.1]) by mail.pawisda.de (Postfix) with ESMTP id AB41C11132; Thu, 6 Mar 2008 17:10:03 +0100 (CET) Received: from mail.pawisda.de ([127.0.0.1]) by localhost (ndb [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 30446-04; Thu, 6 Mar 2008 17:09:52 +0100 (CET) Received: from [192.168.51.99] (lw-lap099.intra.frontsite.de [192.168.51.99]) by mail.pawisda.de (Postfix) with ESMTP id 53FB51114A; Thu, 6 Mar 2008 17:09:52 +0100 (CET) X-ASG-Orig-Subj: Re: REVIEW: xfs_reno #2 Subject: Re: REVIEW: xfs_reno #2 From: Ruben Porras To: Barry Naujok Cc: "xfs@oss.sgi.com" In-Reply-To: References: Content-Type: text/plain Date: Thu, 06 Mar 2008 17:10:35 +0100 Message-Id: <1204819835.4002.36.camel@tecra.thekeening.homeunix.org> Mime-Version: 1.0 X-Mailer: Evolution 2.12.3 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Scanned: by amavisd-new at pawisda.de X-Barracuda-Connect: mail.pawisda.de[213.157.4.156] X-Barracuda-Start-Time: 1204819804 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 
X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44055 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Status: Clean X-archive-position: 14788 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: ruben.porras@linworks.de Precedence: bulk X-list: xfs On Thursday, 04.10.2007, at 14:25 +1000, Barry Naujok wrote: > A couple changes from the first xfs_reno: > > - Major one is that symlinks are now supported, but only > owner, group and extended attributes are copied for them > (not times or inode attributes). > > - Man page! > > > To make this better, ideally we need some form of > "swap inodes" function in the kernel, where the entire > contents of the inode themselves are swapped. This form > can handle any inode and without any of the dir/file/attr/etc > copy/swap mechanisms we have in xfs_reno. > > Barry. +static int +process_slink( + bignode_t *node) +{ + int i = 0; + int rval = 0; + struct stat64 st; + char *srcname = NULL; + char *pname = NULL; + char target[PATH_MAX] = ""; + char linkbuf[PATH_MAX]; + + SET_PHASE(SLINK_PHASE); + + dump_node("symlink", node); + + cur_node = node; + srcname = node->paths[0]; + + if (lstat64(srcname, &st) < 0) { + if (errno != ENOENT) { + err_stat(srcname); + global_rval |= 2; + } + goto quit; + } + if (st.st_ino <= XFS_MAXINUMBER_32 && !force_all) + /* this file has changed, and no longer needs processing */ + goto quit; This check needs to be removed; the same applies in the process_dir and process_file functions.
+ rval = 1; + + i = readlink(srcname, linkbuf, sizeof(linkbuf) - 1); + if (i < 0) { + err_message(_("unable to read symlink: %s"), srcname); + goto quit; + } + linkbuf[i] = '\0'; + + if (realuid != 0 && realuid != st.st_uid) { + errno = EACCES; + err_open(srcname); + goto quit; + } + + /* create target */ + pname = strdup(srcname); + if (pname == NULL) { + err_nomem(); + goto quit; + } + dirname(pname); + + sprintf(target, "%s/%sXXXXXX", pname, cmd_prefix); + if (mktemp(target) == NULL) { + err_message(_("unable to create temp symlink name")); + goto quit; + } Do not create the file here; it is created later with symlink(2), and if the file already exists, symlink will fail. + cur_target = strdup(target); + if (cur_target == NULL) { + err_nomem(); + goto quit; + } cur_target is not needed. + + if (symlink(linkbuf, target) != 0) { + err_message(_("unable to create symlink: %s"), target); + goto quit; + } [...] + free(cur_target); + + cur_target = NULL; again, both are unnecessary. + numslinksdone++; + return rval; +} From owner-xfs@oss.sgi.com Thu Mar 6 08:36:48 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 06 Mar 2008 08:37:08 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-0.3 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_31, J_CHICKENPOX_42,J_CHICKENPOX_44,J_CHICKENPOX_45,J_CHICKENPOX_46, J_CHICKENPOX_47,J_CHICKENPOX_48,J_CHICKENPOX_52,J_CHICKENPOX_57, J_CHICKENPOX_62,J_CHICKENPOX_63,J_CHICKENPOX_64,J_CHICKENPOX_66, J_CHICKENPOX_73,J_CHICKENPOX_74,J_CHICKENPOX_83,J_CHICKENPOX_93 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m26GajIM032508 for ; Thu, 6 Mar 2008 08:36:48 -0800 X-ASG-Debug-ID: 1204821430-0a2701d30000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.pawisda.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP
id B83C5F2AC2F for ; Thu, 6 Mar 2008 08:37:10 -0800 (PST) Received: from mail.pawisda.de (mail.pawisda.de [213.157.4.156]) by cuda.sgi.com with ESMTP id 9yr4RmE2NqfN4kkY for ; Thu, 06 Mar 2008 08:37:10 -0800 (PST) Received: from localhost (localhost.intra.frontsite.de [127.0.0.1]) by mail.pawisda.de (Postfix) with ESMTP id 1277411154; Thu, 6 Mar 2008 17:11:12 +0100 (CET) Received: from mail.pawisda.de ([127.0.0.1]) by localhost (ndb [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 30257-09; Thu, 6 Mar 2008 17:11:01 +0100 (CET) Received: from [192.168.51.99] (lw-lap099.intra.frontsite.de [192.168.51.99]) by mail.pawisda.de (Postfix) with ESMTP id AF48611113; Thu, 6 Mar 2008 17:11:01 +0100 (CET) X-ASG-Orig-Subj: Re: REVIEW: xfs_reno #2 Subject: Re: REVIEW: xfs_reno #2 From: Ruben Porras To: David Chinner Cc: Barry Naujok , "xfs@oss.sgi.com" In-Reply-To: <20071120013651.GR995458@sgi.com> References: <20071120013651.GR995458@sgi.com> Content-Type: multipart/mixed; boundary="=-4Da9GvHld0Grtgo95r+H" Date: Thu, 06 Mar 2008 17:11:46 +0100 Message-Id: <1204819906.4002.40.camel@tecra.thekeening.homeunix.org> Mime-Version: 1.0 X-Mailer: Evolution 2.12.3 X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Scanned: by amavisd-new at pawisda.de X-Barracuda-Connect: mail.pawisda.de[213.157.4.156] X-Barracuda-Start-Time: 1204821432 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44057 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Status: Clean X-archive-position: 14789 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: 
ruben.porras@linworks.de Precedence: bulk X-list: xfs --=-4Da9GvHld0Grtgo95r+H Content-Type: text/plain Content-Transfer-Encoding: 7bit On Tuesday, 20.11.2007, at 12:36 +1100, David Chinner wrote: > On Thu, Oct 04, 2007 at 02:25:16PM +1000, Barry Naujok wrote: > > To make this better, ideally we need some form of > > "swap inodes" function in the kernel, where the entire > > contents of the inode themselves are swapped. This form > > can handle any inode and without any of the dir/file/attr/etc > > copy/swap mechanisms we have in xfs_reno. > > Something like the attached patch? > > This is proof-of-concept. I've compiled it but I haven't tested > it. Your mission, Barry, should you choose to accept it, it to Hello again, This week I again have time to look at xfs_reno and the xfs_swapino and xfs_swap_extents functions. I adapted xfs_reno to use these ioctls instead of the user-space dir/file/attr/... code, and I can successfully move files and directories (see the problem description later). Then I ran into two problems, one processing directories and one processing symlinks, where I do not know how to proceed, and I would like some advice. First, directories: At this moment it is not possible to use xfs_swap_extents for directories: (extract from xfs_dfrag.c) if (VN_CACHED(tvp) != 0) { xfs_inval_cached_trace(tip, 0, -1, 0, -1); error = xfs_flushinval_pages(tip, 0, -1, FI_REMAPF_LOCKED); if (error) goto error0; } /* Verify O_DIRECT for ftmp */ if (VN_CACHED(tvp) != 0) { error = XFS_ERROR(EINVAL); goto error0; } But it is not possible to do an open(2) on a directory with O_DIRECT. I was unable to find out whether this restriction comes from the kernel or from glibc, nor why open on directories with O_DIRECT needs to be forbidden (hints would be appreciated ;), but changing this snippet to /* There is no O_DIRECT for directories */ if (VN_CACHED(tvp) != 0 && VN_ISDIR(tvp) == 0) { error = XFS_ERROR(EINVAL); goto error0; } does the trick. Can we do that?
Second, symlinks: xfs_swapino and xfs_swap_extents require the file descriptors of the related files. However, it is not possible to get the fd of a symlink from user space, because open(2) always follows symlinks. I would change these functions to accept xfs_inode parameters instead of file descriptors, and get them in xfs_reno with stat and lstat, but I would like to get your opinion before changing the ioctls. Attached is the modified xfs_reno.c (process_slink not working). Regards. --=-4Da9GvHld0Grtgo95r+H Content-Disposition: attachment; filename=xfs_reno.c Content-Type: text/x-csrc; name=xfs_reno.c; charset=UTF-8 Content-Transfer-Encoding: 7bit /* * Copyright (c) 2007 Silicon Graphics, Inc. * All Rights Reserved. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation. * * This program is distributed in the hope that it would be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write the Free Software Foundation, * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA */ /* * xfs_reno - renumber 64-bit inodes * * xfs_reno [-f] [-n] [-p] [-q] [-v] [-P seconds] path ... * xfs_reno [-r] path ... * * Renumbers all inodes > 32 bits into 32 bit space. Requires the filesystem * to be mounted with inode32. * * -f force conversion on all inodes rather than just * those with a 64bit inode number. * -n nothing, do not renumber inodes * -p show progress status. * -q quiet, do not report progress, only errors. * -v verbose, more -v's more verbose. * -P seconds set the interval for the progress status in seconds. * -r recover from an interrupted run.
*/ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #define SCAN_PHASE 0x00 #define DIR_PHASE 0x10 /* nothing done or all done */ #define DIR_PHASE_1 0x11 /* temp dir created */ #define DIR_PHASE_2 0x12 /* swapped extents and inodes */ #define DIR_PHASE_3 0x13 /* src dir removed */ #define DIR_PHASE_MAX 0x13 /* renamed temp to source name */ #define FILE_PHASE 0x20 /* nothing done or all done */ #define FILE_PHASE_1 0x21 /* temp file created */ #define FILE_PHASE_2 0x22 /* swapped extents and inodes */ #define FILE_PHASE_3 0x23 /* unlinked source */ #define FILE_PHASE_4 0x24 /* hard links copied */ #define FILE_PHASE_MAX 0x24 /* renamed temp to source name */ #define SLINK_PHASE 0x30 /* nothing done or all done */ #define SLINK_PHASE_1 0x31 /* temp symlink created */ #define SLINK_PHASE_2 0x32 /* symlink attrs copied */ #define SLINK_PHASE_3 0x33 /* unlinked source */ #define SLINK_PHASE_4 0x34 /* hard links copied */ #define SLINK_PHASE_MAX 0x34 /* renamed temp to source name */ static void update_recoverfile(void); #define SET_PHASE(x) (cur_phase = x, update_recoverfile()) #define LOG_ERR 0 #define LOG_NORMAL 1 #define LOG_INFO 2 #define LOG_DEBUG 3 #define LOG_NITTY 4 #define NH_BUCKETS 65536 #define NH_HASH(ino) (nodehash + ((ino) % NH_BUCKETS)) typedef struct { xfs_ino_t ino; int ftw_flags; nlink_t numpaths; char **paths; } bignode_t; typedef struct { bignode_t *nodes; uint64_t listlen; uint64_t lastnode; } nodelist_t; static const char *cmd_prefix = "xfs_reno_"; static char *progname; static int log_level = LOG_NORMAL; static int force_all; static nodelist_t *nodehash; static int realuid; static uint64_t numdirnodes; static uint64_t numfilenodes; static uint64_t numslinknodes; static uint64_t numdirsdone; static uint64_t numfilesdone; static uint64_t numslinksdone; static int poll_interval; static time_t starttime; static bignode_t *cur_node; static char *cur_target; static 
int cur_phase;
static int highest_numpaths;
static char *recover_file;
static int recover_fd;
static volatile int poll_output;
static int global_rval;
static int *agmask;

/*
 * message handling
 */
static void
log_message(
	int	level,
	char	*fmt,
	...)
{
	char	buf[1024];
	va_list	ap;

	if (log_level < level)
		return;
	va_start(ap, fmt);
	vsnprintf(buf, 1024, fmt, ap);
	va_end(ap);
	printf("%c%s: %s\n", poll_output ? '\n' : '\r', progname, buf);
	poll_output = 0;
}

static void
err_message(
	char	*fmt,
	...)
{
	char	buf[1024];
	va_list	ap;

	va_start(ap, fmt);
	vsnprintf(buf, 1024, fmt, ap);
	va_end(ap);
	fprintf(stderr, "%c%s: %s\n", poll_output ? '\n' : '\r',
		progname, buf);
	poll_output = 0;
}

static void
err_nomem(void)
{
	err_message(_("Out of memory"));
}

static void
err_open(
	const char	*s)
{
	err_message(_("Cannot open %s: %s"), s, strerror(errno));
}

static void
err_not_xfs(
	const char	*s)
{
	err_message(_("%s is not on an XFS filesystem"), s);
}

static void
err_stat(
	const char	*s)
{
	err_message(_("Cannot stat %s: %s"), s, strerror(errno));
}

static void
err_swapino(
	int		err,
	const char	*srcname)
{
	if (log_level >= LOG_DEBUG) {
		switch (err) {
		case EIO:
			err_message(_("Filesystem is going down: %s: %s"),
				srcname, strerror(err));
			break;
		default:
			err_message(_("Swap inode failed: %s: %s"),
				srcname, strerror(err));
			break;
		}
	} else
		err_message(_("Swap inode failed: %s: %s"),
			srcname, strerror(err));
}

static void
err_swapext(
	int		err,
	const char	*srcname,
	xfs_off_t	bs_size)
{
	if (log_level >= LOG_DEBUG) {
		switch (err) {
		case ENOTSUP:
			err_message("%s: file type not supported", srcname);
			break;
		case EFAULT:
			/* The file has changed since we started the copy */
			err_message("%s: file modified, "
				"inode renumber aborted: %lld",
				srcname, (long long)bs_size);
			break;
		case EBUSY:
			/* Timestamp has changed or mmap'ed file */
			err_message("%s: file busy", srcname);
			break;
		default:
			err_message(_("Swap extents failed: %s: %s"),
				srcname, strerror(errno));
			break;
		}
	} else
		err_message(_("Swap extents failed: %s: %s"),
			srcname, strerror(errno));
}

/*
 * usage message
 */
static void
usage(void)
{
	fprintf(stderr,
		_("%s [-fnpqv] [-P <interval>] [-r <recoverfile>] <pathname>\n"),
		progname);
	exit(1);
}

/*
 * XFS interface functions
 */
static int
xfs_bulkstat_single(int fd, xfs_ino_t *lastip, xfs_bstat_t *ubuffer)
{
	xfs_fsop_bulkreq_t	bulkreq;

	bulkreq.lastip = (__u64 *)lastip;
	bulkreq.icount = 1;
	bulkreq.ubuffer = ubuffer;
	bulkreq.ocount = NULL;
	return ioctl(fd, XFS_IOC_FSBULKSTAT_SINGLE, &bulkreq);
}

static int
xfs_swapext(int fd, xfs_swapext_t *sx)
{
	return ioctl(fd, XFS_IOC_SWAPEXT, sx);
}

static int
xfs_get_agflags(const char *filepath)
{
	xfs_fsop_geom_t		fsgeo;
	xfs_ioc_agflags_t	ioc_flags;
	int			error = 0;
	xfs_agnumber_t		agno;
	int			fd;

	if ((fd = open(filepath, O_RDONLY /* | O_DIRECT | O_NOATIME */)) == -1) {
		err_open(filepath);
		return -1;
	}
	if ((error = xfsctl(filepath, fd, XFS_IOC_FSGEOMETRY, &fsgeo)) < 0) {
		fprintf(stderr, _("cannot get geometry of fs: %s\n"),
			strerror(errno));
		goto error0;
	}
	agmask = (int *)calloc(fsgeo.agcount, sizeof(int));
	if (agmask == NULL) {
		err_nomem();
		error = -1;
		goto error0;
	}
	for (agno = 0; agno < fsgeo.agcount; agno++) {
		ioc_flags.ag = agno;
		if ((error = xfsctl(filepath, fd, XFS_IOC_GET_AGF_FLAGS,
				&ioc_flags)) < 0) {
			fprintf(stderr,
				_("cannot get flags %d on ag %d at %s: %s\n"),
				ioc_flags.flags, ioc_flags.ag, filepath,
				strerror(errno));
			goto error0;
		}
		agmask[agno] = ioc_flags.flags & XFS_AGF_FLAGS_ALLOC_DENY;
	}
error0:
	close(fd);
	return error;
}

static int
xfs_swapino(int fd, xfs_swapino_t *iu)
{
	return ioctl(fd, XFS_IOC_SWAPINO, iu);
}

static int
xfs_getxattr(int fd, struct fsxattr *attr)
{
	return ioctl(fd, XFS_IOC_FSGETXATTR, attr);
}

/*
 * A hash table of inode numbers and associated paths.
*/ static nodelist_t * init_nodehash(void) { nodehash = calloc(NH_BUCKETS, sizeof(nodelist_t)); if (nodehash == NULL) { err_nomem(); return NULL; } return nodehash; } static int in_ag_to_free(const char *filepath) { xfs_ioc_fileag_t fileag; int error; if ((fileag.fd = open(filepath, O_RDONLY /* | O_DIRECT | O_NOATIME */)) == -1) { err_open(filepath); return -1; } if ((error = xfsctl(filepath, fileag.fd, XFS_IOC_GETFILEAG, &fileag)) < 0) { fprintf(stderr, _("%s: cannot get the AG of the file: %s\n"), filepath, strerror(errno)); close(fileag.fd); return error; } close(fileag.fd); if (agmask[fileag.ag] == 1) printf("AG: %d, MASK: %d\t", fileag.ag, agmask[fileag.ag]); return agmask[fileag.ag]; } static void free_nodehash(void) { int i, j, k; for (i = 0; i < NH_BUCKETS; i++) { bignode_t *nodes = nodehash[i].nodes; for (j = 0; j < nodehash[i].lastnode; j++) { for (k = 0; k < nodes[j].numpaths; k++) { free(nodes[j].paths[k]); } free(nodes[j].paths); } free(nodes); } free(nodehash); } static nlink_t add_path( bignode_t *node, const char *path) { node->paths = realloc(node->paths, sizeof(char *) * (node->numpaths + 1)); if (node->paths == NULL) { err_nomem(); exit(1); } node->paths[node->numpaths] = strdup(path); if (node->paths[node->numpaths] == NULL) { err_nomem(); exit(1); } node->numpaths++; if (node->numpaths > highest_numpaths) highest_numpaths = node->numpaths; return node->numpaths; } static bignode_t * add_node( nodelist_t *list, xfs_ino_t ino, int ftw_flags, const char *path) { bignode_t *node; if (list->lastnode >= list->listlen) { list->listlen += 500; list->nodes = realloc(list->nodes, sizeof(bignode_t) * list->listlen); if (list->nodes == NULL) { err_nomem(); return NULL; } } node = list->nodes + list->lastnode; node->ino = ino; node->ftw_flags = ftw_flags; node->paths = NULL; node->numpaths = 0; add_path(node, path); list->lastnode++; return node; } static bignode_t * find_node( xfs_ino_t ino) { int i; nodelist_t *nodelist; bignode_t *nodes; nodelist = 
NH_HASH(ino); nodes = nodelist->nodes; for(i = 0; i < nodelist->lastnode; i++) { if (nodes[i].ino == ino) { return &nodes[i]; } } return NULL; } static bignode_t * add_node_path( xfs_ino_t ino, int ftw_flags, const char *path) { nodelist_t *nodelist; bignode_t *node; log_message(LOG_NITTY, "add_node_path: ino %llu, path %s", ino, path); node = find_node(ino); if (node == NULL) { nodelist = NH_HASH(ino); return add_node(nodelist, ino, ftw_flags, path); } add_path(node, path); return node; } static void dump_node( char *msg, bignode_t *node) { int k; if (log_level < LOG_DEBUG) return; log_message(LOG_DEBUG, "%s: %llu %llu %s", msg, node->ino, node->numpaths, node->paths[0]); for (k = 1; k < node->numpaths; k++) log_message(LOG_DEBUG, "\t%s", node->paths[k]); } static void dump_nodehash(void) { int i, j; if (log_level < LOG_NITTY) return; for (i = 0; i < NH_BUCKETS; i++) { bignode_t *nodes = nodehash[i].nodes; for (j = 0; j < nodehash[i].lastnode; j++, nodes++) dump_node("nodehash", nodes); } } static int for_all_nodes( int (*fn)(bignode_t *node), int ftw_flags, int quit_on_error) { int i; int j; int rval = 0; for (i = 0; i < NH_BUCKETS; i++) { bignode_t *nodes = nodehash[i].nodes; for (j = 0; j < nodehash[i].lastnode; j++, nodes++) { if (nodes->ftw_flags == ftw_flags) { rval = fn(nodes); if (rval && quit_on_error) goto quit; } } } quit: return rval; } /* * Adds appropriate files to the inode hash table */ static int nftw_addnodes( const char *path, const struct stat64 *st, int flags, struct FTW *sntfw) { if (flags == FTW_F || flags == FTW_D) if (!in_ag_to_free(path) && !force_all) return 0; printf("%s\n", path); if (flags == FTW_F) numfilenodes++; else if (flags == FTW_D) numdirnodes++; else if (flags == FTW_SL) numslinknodes++; else return 0; add_node_path(st->st_ino, flags, path); return 0; } static int process_dir( bignode_t *node) { int sfd = -1; int tfd = -1; int rval = 0; struct stat64 st; char *srcname = NULL; char *pname = NULL; xfs_swapino_t si; 
xfs_swapext_t sx; xfs_bstat_t bstatbuf; struct fsxattr fsx; char target[PATH_MAX] = ""; SET_PHASE(DIR_PHASE); dump_node("directory", node); cur_node = node; srcname = node->paths[0]; bzero(&st, sizeof(st)); bzero(&bstatbuf, sizeof(bstatbuf)); bzero(&si, sizeof(si)); bzero(&sx, sizeof(sx)); if (stat64(srcname, &st) < 0) { if (errno != ENOENT) { err_stat(srcname); global_rval |= 2; } goto quit; } if (!in_ag_to_free(srcname) && !force_all) { /* * This directory has already changed ino's, probably due * to being moved during processing of a parent directory. */ log_message(LOG_DEBUG, "process_dir: skipping %s", srcname); goto quit; } rval = 1; sfd = open(srcname, O_RDONLY); if (sfd == -1) { err_open(srcname); goto quit; } if (!platform_test_xfs_fd(sfd)) { err_not_xfs(srcname); goto quit; } if (xfs_getxattr(sfd, &fsx) < 0) { err_message(_("failed to get inode attrs: %s"), srcname); goto quit; } if (fsx.fsx_xflags & (XFS_XFLAG_IMMUTABLE | XFS_XFLAG_APPEND)) { err_message(_("%s: immutable/append, ignoring"), srcname); global_rval |= 2; goto quit; } if (realuid != 0 && realuid != st.st_uid) { errno = EACCES; err_open(srcname); goto quit; } /* mkdir parent/target */ pname = strdup(srcname); if (pname == NULL) { err_nomem(); goto quit; } dirname(pname); sprintf(target, "%s/%sXXXXXX", pname, cmd_prefix); if (mkdtemp(target) == NULL) { err_message(_("Unable to create directory copy: %s, %s"), srcname, strerror(errno)); goto quit; } tfd = open(target, O_RDONLY); if (tfd == -1) { err_open(target); goto quit; } cur_target = strdup(target); if (!cur_target) { err_nomem(); goto quit; } SET_PHASE(DIR_PHASE_1); /* swapino src target */ si.si_version = XFS_SI_VERSION; si.si_fdtarget = tfd; si.si_fdtmp = sfd; /* swap the inodes */ rval = xfs_swapino(tfd, &si); if (rval < 0) { err_swapino(rval, srcname); goto quit_unlink; } if (xfs_bulkstat_single(sfd, &st.st_ino, &bstatbuf) < 0) { err_message(_("unable to bulkstat source file: %s"), srcname); unlink(target); goto quit; } if 
(bstatbuf.bs_ino != st.st_ino) { err_message(_("bulkstat of source file returned wrong inode: %s"), srcname); unlink(target); goto quit; } ftruncate64(tfd, bstatbuf.bs_size); /* swapextents src target */ sx.sx_stat = bstatbuf; /* struct copy */ sx.sx_version = XFS_SX_VERSION; sx.sx_fdtarget = sfd; sx.sx_fdtmp = tfd; sx.sx_offset = 0; sx.sx_length = bstatbuf.bs_size; /* Swap the extents */ rval = xfs_swapext(sfd, &sx); if (rval < 0) { err_swapext(rval, srcname, bstatbuf.bs_size); goto quit_unlink; } SET_PHASE(DIR_PHASE_2); /* rmdir src */ rval = rmdir(srcname); if (rval != 0) { err_message(_("unable to remove directory: %s, %s"), srcname, strerror(errno)); goto quit; } SET_PHASE(DIR_PHASE_3); /* rename cur_target src */ rval = rename(target, srcname); if (rval != 0) { /* * we can't abort since the src dir is now gone. * let the admin clean this one up */ err_message(_("unable to rename directory: %s to %s, %s"), cur_target, srcname, strerror(errno)); } goto quit; quit_unlink: rval = rmdir(target); if (rval != 0) err_message(_("unable to remove directory: %s, %s"), target, strerror(errno)); quit: SET_PHASE(DIR_PHASE); if (sfd >= 0) close(sfd); if (tfd >= 0) close(tfd); free(pname); free(cur_target); cur_target = NULL; cur_node = NULL; numdirsdone++; return rval; } static int process_file( bignode_t *node) { int sfd = -1; int tfd = -1; int i = 0; int rval = 0; struct stat64 st; char *srcname = NULL; char *pname = NULL; xfs_swapino_t si; xfs_swapext_t sx; xfs_bstat_t bstatbuf; struct fsxattr fsx; char target[PATH_MAX] = ""; SET_PHASE(FILE_PHASE); dump_node("file", node); cur_node = node; srcname = node->paths[0]; bzero(&st, sizeof(st)); bzero(&bstatbuf, sizeof(bstatbuf)); bzero(&si, sizeof(si)); bzero(&sx, sizeof(sx)); if (stat64(srcname, &st) < 0) { if (errno != ENOENT) { err_stat(srcname); global_rval |= 2; } goto quit; } if (!in_ag_to_free(srcname) && !force_all) /* this file has changed, and no longer needs processing */ goto quit; rval = 1; /* open and sync source 
*/ sfd = open(srcname, O_RDWR | O_DIRECT); if (sfd < 0) { err_open(srcname); goto quit; } if (!platform_test_xfs_fd(sfd)) { err_not_xfs(srcname); goto quit; } if (fsync(sfd) < 0) { err_message(_("sync failed: %s: %s"), srcname, strerror(errno)); goto quit; } /* * Check if a mandatory lock is set on the file to try and * avoid blocking indefinitely on the reads later. Note that * someone could still set a mandatory lock after this check * but before all reads have completed to block xfs_reno reads. * This change just closes the window a bit. */ if ((st.st_mode & S_ISGID) && !(st.st_mode & S_IXGRP)) { struct flock fl; fl.l_type = F_RDLCK; fl.l_whence = SEEK_SET; fl.l_start = (off_t)0; fl.l_len = 0; if (fcntl(sfd, F_GETLK, &fl) < 0 ) { if (log_level >= LOG_DEBUG) err_message("locking check failed: %s", srcname); global_rval |= 2; goto quit; } if (fl.l_type != F_UNLCK) { if (log_level >= LOG_DEBUG) err_message("mandatory lock: %s: ignoring", srcname); global_rval |= 2; goto quit; } } if (xfs_getxattr(sfd, &fsx) < 0) { err_message(_("failed to get inode attrs: %s"), srcname); goto quit; } if (fsx.fsx_xflags & (XFS_XFLAG_IMMUTABLE | XFS_XFLAG_APPEND)) { err_message(_("%s: immutable/append, ignoring"), srcname); global_rval |= 2; goto quit; } if (realuid != 0 && realuid != st.st_uid) { errno = EACCES; err_open(srcname); goto quit; } /* creat target */ pname = strdup(srcname); if (pname == NULL) { err_nomem(); goto quit; } dirname(pname); sprintf(target, "%s/%sXXXXXX", pname, cmd_prefix); tfd = mkstemp(target); if (tfd == -1) { err_message("unable to create file copy: %s", strerror(errno)); goto quit; } cur_target = strdup(target); if (cur_target == NULL) { err_nomem(); goto quit; } SET_PHASE(FILE_PHASE_1); /* swapino src target */ si.si_version = XFS_SI_VERSION; si.si_fdtarget = sfd; si.si_fdtmp = tfd; /* swap the inodes */ rval = xfs_swapino(sfd, &si); if (rval < 0) { err_swapino(rval, srcname); goto quit_unlink; } if (xfs_bulkstat_single(sfd, &st.st_ino, &bstatbuf) < 0) 
{ err_message(_("unable to bulkstat source file: %s"), srcname); unlink(target); goto quit; } if (bstatbuf.bs_ino != st.st_ino) { err_message(_("bulkstat of source file returned wrong inode: %s"), srcname); unlink(target); goto quit; } ftruncate64(tfd, bstatbuf.bs_size); /* swapextents src target */ sx.sx_stat = bstatbuf; /* struct copy */ sx.sx_version = XFS_SX_VERSION; sx.sx_fdtarget = sfd; sx.sx_fdtmp = tfd; sx.sx_offset = 0; sx.sx_length = bstatbuf.bs_size; /* Swap the extents */ rval = xfs_swapext(sfd, &sx); if (rval < 0) { err_swapext(rval, srcname, bstatbuf.bs_size); goto quit_unlink; } SET_PHASE(FILE_PHASE_2); /* unlink src */ rval = unlink(srcname); if (rval != 0) { err_message(_("unable to remove file: %s, %s"), srcname, strerror(errno)); goto quit; } SET_PHASE(FILE_PHASE_3); /* rename target src */ rval = rename(target, srcname); if (rval != 0) { /* * we can't abort since the src file is now gone. * let the admin clean this one up */ err_message(_("unable to rename file: %s to %s, %s"), target, srcname, strerror(errno)); goto quit; } SET_PHASE(FILE_PHASE_4); /* for each hardlink, unlink and creat pointing to target */ for (i = 1; i < node->numpaths; i++) { /* unlink src */ rval = unlink(node->paths[i]); if (rval != 0) { err_message(_("unable to remove file: %s, %s"), node->paths[i], strerror(errno)); goto quit; } rval = link(srcname, node->paths[i]); if (rval != 0) { err_message("unable to link to file: %s, %s", srcname, strerror(errno)); goto quit; } numfilesdone++; } quit_unlink: rval = unlink(target); if (rval != 0) err_message(_("unable to remove file: %s, %s"), target, strerror(errno)); quit: SET_PHASE(FILE_PHASE); if (sfd >= 0) close(sfd); if (tfd >= 0) close(tfd); free(pname); free(cur_target); cur_target = NULL; cur_node = NULL; numfilesdone++; return rval; } static int process_slink( bignode_t *node) { int i = 0; int sfd = -1; int tfd = -1; int rval = 0; struct stat64 st; char *srcname = NULL; char *pname = NULL; char target[PATH_MAX] = ""; char 
linkbuf[PATH_MAX]; xfs_swapino_t si; SET_PHASE(SLINK_PHASE); dump_node("symlink", node); cur_node = node; srcname = node->paths[0]; bzero(&st, sizeof(st)); bzero(&si, sizeof(si)); if (lstat64(srcname, &st) < 0) { if (errno != ENOENT) { err_stat(srcname); global_rval |= 2; } goto quit; } rval = 1; /* open source */ sfd = open(srcname, O_RDWR | O_DIRECT); if (sfd < 0) { err_open(srcname); goto quit; } i = readlink(srcname, linkbuf, sizeof(linkbuf) - 1); if (i < 0) { err_message(_("unable to read symlink: %s, %s"), srcname, strerror(errno)); goto quit; } linkbuf[i] = '\0'; if (realuid != 0 && realuid != st.st_uid) { errno = EACCES; err_open(srcname); goto quit; } /* create target */ pname = strdup(srcname); if (pname == NULL) { err_nomem(); goto quit; } dirname(pname); sprintf(target, "%s/%sXXXXXX", pname, cmd_prefix); tfd = mkstemp(target); if (tfd == -1) { err_message(_("unable to create temp symlink name: %s"), strerror(errno)); goto quit; } cur_target = strdup(target); if (cur_target == NULL) { err_nomem(); goto quit; } if (symlink(linkbuf, target) != 0) { err_message(_("unable to create symlink: %s, %s"), target, strerror(errno)); goto quit; } SET_PHASE(SLINK_PHASE_1); /* swapino src target */ si.si_version = XFS_SI_VERSION; si.si_fdtarget = sfd; si.si_fdtmp = tfd; /* swap the inodes */ rval = xfs_swapino(sfd, &si); if (rval < 0) { err_swapino(rval, srcname); goto quit; } SET_PHASE(SLINK_PHASE_2); /* unlink src */ rval = unlink(srcname); if (rval != 0) { err_message(_("unable to remove symlink: %s, %s"), srcname, strerror(errno)); goto quit; } SET_PHASE(SLINK_PHASE_3); /* rename target src */ rval = rename(target, srcname); if (rval != 0) { /* * we can't abort since the src file is now gone. 
* let the admin clean this one up */ err_message(_("unable to rename symlink: %s to %s, %s"), target, srcname, strerror(errno)); goto quit; } SET_PHASE(SLINK_PHASE_4); /* for each hardlink, unlink and creat pointing to target */ for (i = 1; i < node->numpaths; i++) { /* unlink src */ rval = unlink(node->paths[i]); if (rval != 0) { err_message(_("unable to remove symlink: %s, %s"), node->paths[i], strerror(errno)); goto quit; } rval = link(srcname, node->paths[i]); if (rval != 0) { err_message("unable to link to symlink: %s, %s", srcname, strerror(errno)); goto quit; } numslinksdone++; } quit: cur_node = NULL; SET_PHASE(SLINK_PHASE); free(pname); free(cur_target); cur_target = NULL; numslinksdone++; return rval; } static int open_recoverfile(void) { recover_fd = open(recover_file, O_RDWR | O_SYNC | O_CREAT | O_EXCL, 0600); if (recover_fd < 0) { if (errno == EEXIST) err_message(_("Recovery file already exists, either " "run '%s -r %s' or remove the file."), progname, recover_file); else err_open(recover_file); return 1; } if (!platform_test_xfs_fd(recover_fd)) { err_not_xfs(recover_file); close(recover_fd); return 1; } return 0; } static void update_recoverfile(void) { static const char null_file[] = "0\n0\n0\n\ntarget: \ntemp: \nend\n"; static size_t buf_size = 0; static char *buf = NULL; int i, len; if (recover_fd <= 0) return; if (cur_node == NULL || cur_phase == 0) { /* inbetween processing or still scanning */ lseek(recover_fd, 0, SEEK_SET); write(recover_fd, null_file, sizeof(null_file)); return; } ASSERT(highest_numpaths > 0); if (buf == NULL) { buf_size = (highest_numpaths + 3) * PATH_MAX; buf = malloc(buf_size); if (buf == NULL) { err_nomem(); exit(1); } } len = sprintf(buf, "%d\n%llu\n%d\n", cur_phase, (long long)cur_node->ino, cur_node->ftw_flags); for (i = 0; i < cur_node->numpaths; i++) len += sprintf(buf + len, "%s\n", cur_node->paths[i]); /* len += sprintf(buf + len, "target: %s\ntemp: %s\nend\n", */ /* cur_target, cur_temp); */ ASSERT(len < buf_size); 
lseek(recover_fd, 0, SEEK_SET); ftruncate(recover_fd, 0); write(recover_fd, buf, len); } static void cleanup(void) { log_message(LOG_NORMAL, _("Interrupted -- cleaning up...")); free_nodehash(); log_message(LOG_NORMAL, _("Done.")); } static void sighandler(int sig) { static char cycle[4] = "-\\|/"; static uint64_t cur_cycle = 0; double percent; char *typename; uint64_t nodes, done; alarm(0); if (sig != SIGALRM) { cleanup(); exit(1); } if (cur_phase == SCAN_PHASE) { if (log_level >= LOG_INFO) fprintf(stderr, _("\r%llu files, %llu dirs and %llu " "symlinks to renumber found... %c"), (long long)numfilenodes, (long long)numdirnodes, (long long)numslinknodes, cycle[cur_cycle % 4]); else fprintf(stderr, "\r%c", cycle[cur_cycle % 4]); cur_cycle++; } else { if (cur_phase >= DIR_PHASE && cur_phase <= DIR_PHASE_MAX) { nodes = numdirnodes; done = numdirsdone; typename = _("dirs"); } else if (cur_phase >= FILE_PHASE && cur_phase <= FILE_PHASE_MAX) { nodes = numfilenodes; done = numfilesdone; typename = _("files"); } else { nodes = numslinknodes; done = numslinksdone; typename = _("symlinks"); } percent = 100.0 * (double)done / (double)nodes; if (percent > 100.0) percent = 100.0; if (log_level >= LOG_INFO) fprintf(stderr, _("\r%.1f%%, %llu of %llu %s, " "%u seconds elapsed"), percent, (long long)done, (long long)nodes, typename, (int)(time(0) - starttime)); else fprintf(stderr, "\r%.1f%%", percent); } poll_output = 1; signal(SIGALRM, sighandler); if (poll_interval) alarm(poll_interval); } static int read_recover_file( char *recover_file, bignode_t **node, char **target, char **temp, int *phase) { FILE *file; int rval = 1; ino_t ino; int ftw_flags; char buf[PATH_MAX + 10]; /* path + "target: " */ struct stat64 st; int first_path; /* A recovery file should look like: target: temp: end */ file = fopen(recover_file, "r"); if (file == NULL) { err_open(recover_file); return 1; } /* read phase */ *phase = 0; if (fgets(buf, PATH_MAX + 10, file) == NULL) { err_message("Recovery failed: 
unable to read phase"); goto quit; } buf[strlen(buf) - 1] = '\0'; *phase = atoi(buf); if (*phase == SCAN_PHASE) { fclose(file); return 0; } if ((*phase < DIR_PHASE || *phase > DIR_PHASE_MAX) && (*phase < FILE_PHASE || *phase > FILE_PHASE_MAX)) { err_message("Recovery failed: failed to read valid recovery phase"); goto quit; } /* read inode number */ if (fgets(buf, PATH_MAX + 10, file) == NULL) { err_message("Recovery failed: unable to read inode number"); goto quit; } buf[strlen(buf) - 1] = '\0'; ino = strtoull(buf, NULL, 10); if (ino == 0) { err_message("Recovery failed: unable to read inode number"); goto quit; } /* read ftw_flags */ if (fgets(buf, PATH_MAX + 10, file) == NULL) { err_message("Recovery failed: unable to read flags"); goto quit; } buf[strlen(buf) - 1] = '\0'; if (buf[1] != '\0' || (buf[0] != '0' && buf[0] != '1')) { err_message("Recovery failed: unable to read flags: '%s'", buf); goto quit; } ftw_flags = atoi(buf); /* read paths and target path */ *node = NULL; *target = NULL; first_path = 1; while (fgets(buf, PATH_MAX + 10, file) != NULL) { buf[strlen(buf) - 1] = '\0'; log_message(LOG_DEBUG, "path: '%s'", buf); if (buf[0] == '/') { if (stat64(buf, &st) < 0) { err_message(_("Recovery failed: cannot " "stat '%s'"), buf); goto quit; } if (st.st_ino != ino) { err_message(_("Recovery failed: inode " "number for '%s' does not " "match recorded number"), buf); goto quit; } if (first_path) { first_path = 0; *node = add_node_path(ino, ftw_flags, buf); } else { add_path(*node, buf); } } else if (strncmp(buf, "target: ", 8) == 0) { *target = strdup(buf + 8); if (*target == NULL) { err_nomem(); goto quit; } if (stat64(*target, &st) < 0) { err_message(_("Recovery failed: cannot " "stat '%s'"), *target); goto quit; } } else if (strncmp(buf, "temp: ", 6) == 0) { *temp = strdup(buf + 6); if (*temp == NULL) { err_nomem(); goto quit; } } else if (strcmp(buf, "end") == 0) { rval = 0; goto quit; } else { err_message(_("Recovery failed: unrecognised " "string: '%s'"), 
				buf);
			goto quit;
		}
	}
	err_message(_("Recovery failed: end of recovery file not found"));
quit:
	if (*node == NULL) {
		err_message(_("Recovery failed: no valid inode or paths "
				"specified"));
		rval = 1;
	}
	if (*target == NULL) {
		err_message(_("Recovery failed: no inode target specified"));
		rval = 1;
	}
	fclose(file);
	return rval;
}

int
recover(
	bignode_t	*node,
	char		*target,
	char		*tname,
	int		phase)
{
	char		*srcname = NULL;
	int		rval = 0;
	int		i;
	int		dir;

	dump_node("recover", node);
	log_message(LOG_DEBUG, "target: %s, phase: %x", target, phase);
	if (node)
		srcname = node->paths[0];
	dir = (phase >= DIR_PHASE && phase <= DIR_PHASE_MAX);
	switch (phase) {
	case DIR_PHASE_1:
	case FILE_PHASE_1:
	case SLINK_PHASE_1:
		log_message(LOG_NORMAL, _("Unlinking temporary %s: \'%s\'"),
			dir ? "directory" : "file", target);
		rval = dir ? rmdir(target) : unlink(target);
		if (rval < 0 && errno != ENOENT)
			err_message(_("unable to remove %s: %s, %s"),
				dir ? "directory" : "file", target,
				strerror(errno));
		break;
	case DIR_PHASE_2:
	case FILE_PHASE_2:
	case SLINK_PHASE_2:
		log_message(LOG_NORMAL, _("Unlinking old %s: \'%s\'"),
			dir ? "directory" : "file", srcname);
		rval = dir ? rmdir(srcname) : unlink(srcname);
		if (rval < 0 && errno != ENOENT) {
			err_message(_("unable to remove %s: %s, %s"),
				dir ? "directory" : "file", srcname,
				strerror(errno));
			break;
		}
		/* FALL THRU */
	case DIR_PHASE_3:
	case FILE_PHASE_3:
	case SLINK_PHASE_3:
		log_message(LOG_NORMAL, _("Renaming: \'%s\' -> \'%s\'"),
			target, srcname);
		rval = rename(target, srcname);
		if (rval != 0) {
			/*
			 * we can't abort since the src file is now gone.
			 * let the admin clean this one up
			 */
			err_message(_("unable to rename: %s to %s, %s"),
				target, srcname, strerror(errno));
			break;
		}
		if (dir)
			break;
		/* FALL THRU */
	case FILE_PHASE_4:
	case SLINK_PHASE_4:
		/* for each hardlink, unlink and creat pointing to target */
		for (i = 1; i < node->numpaths; i++) {
			if (i == 1)
				log_message(LOG_NORMAL,
					_("Resetting hardlinks to new file"));
			rval = unlink(node->paths[i]);
			if (rval != 0) {
				err_message(_("unable to remove file: %s, %s"),
					node->paths[i], strerror(errno));
				break;
			}
			rval = link(srcname, node->paths[i]);
			if (rval != 0) {
				err_message(_("unable to link to file: %s, %s"),
					srcname, strerror(errno));
				break;
			}
		}
		break;
	}
	if (rval == 0) {
		log_message(LOG_NORMAL, _("Removing recover file: \'%s\'"),
			recover_file);
		unlink(recover_file);
		log_message(LOG_NORMAL, _("Recovery done."));
	} else {
		log_message(LOG_NORMAL, _("Leaving recover file: \'%s\'"),
			recover_file);
		log_message(LOG_NORMAL, _("Recovery failed."));
	}
	return rval;
}

int
main(
	int	argc,
	char	*argv[])
{
	int		c = 0;
	int		rval = 0;
	int		q_opt = 0;
	int		v_opt = 0;
	int		p_opt = 0;
	int		n_opt = 0;
	char		pathname[PATH_MAX];
	struct stat64	st;

	progname = basename(argv[0]);
	setlocale(LC_ALL, "");
	bindtextdomain(PACKAGE, LOCALEDIR);
	textdomain(PACKAGE);
	while ((c = getopt(argc, argv, "fnpqvP:r:")) != -1) {
		switch (c) {
		case 'f':
			force_all = 1;
			break;
		case 'n':
			n_opt++;
			break;
		case 'p':
			p_opt++;
			break;
		case 'q':
			if (v_opt)
				err_message(_("'q' option incompatible "
						"with 'v' option"));
			q_opt++;
			log_level = 0;
			break;
		case 'v':
			if (q_opt)
				err_message(_("'v' option incompatible "
						"with 'q' option"));
			v_opt++;
			log_level++;
			break;
		case 'P':
			poll_interval = atoi(optarg);
			break;
		case 'r':
			recover_file = optarg;
			break;
		default:
			err_message(_("%s: illegal option -- %c"),
				progname, c);
			usage();
			/* NOTREACHED */
			break;
		}
	}
	if (optind != argc - 1 && recover_file == NULL) {
		usage();
		exit(1);
	}
	realuid = getuid();
	starttime = time(0);
	init_nodehash();
	signal(SIGALRM, sighandler);
	signal(SIGABRT, sighandler);
	signal(SIGHUP, sighandler);
	signal(SIGINT, sighandler);
	signal(SIGQUIT, sighandler);
	signal(SIGTERM, sighandler);
	if (p_opt && poll_interval == 0) {
		poll_interval = 1;
	}
	if (poll_interval)
		alarm(poll_interval);
	if (recover_file) {
		bignode_t	*node = NULL;
		char		*target = NULL;
		char		*tname = NULL;
		int		phase = 0;

		if (n_opt)
			goto quit;
		/* read node info from recovery file */
		if (read_recover_file(recover_file, &node, &target,
				&tname, &phase) != 0)
			exit(1);
		rval = recover(node, target, tname, phase);
		free(target);
		free(tname);
		return rval;
	}
	recover_file = malloc(PATH_MAX);
	if (recover_file == NULL) {
		err_nomem();
		exit(1);
	}
	recover_file[0] = '\0';
	strcpy(pathname, argv[optind]);
	if (pathname[0] != '/') {
		err_message(_("pathname must begin with a slash ('/')"));
		exit(1);
	}
	if (stat64(pathname, &st) < 0) {
		err_stat(pathname);
		exit(1);
	}
	if (S_ISREG(st.st_mode)) {
		/* single file specified */
		if (st.st_nlink > 1) {
			err_message(_("cannot process single file with a "
					"link count greater than 1"));
			exit(1);
		}
		strcpy(recover_file, pathname);
		dirname(recover_file);
		strcpy(recover_file + strlen(recover_file),
			"/xfs_reno.recover");
		if (!n_opt) {
			if (open_recoverfile() != 0)
				exit(1);
		}
		add_node_path(st.st_ino, FTW_F, pathname);
	} else if (S_ISDIR(st.st_mode)) {
		/* directory tree specified */
		strcpy(recover_file, pathname);
		strcpy(recover_file + strlen(recover_file),
			"/xfs_reno.recover");
		if (!n_opt) {
			if (open_recoverfile() != 0)
				exit(1);
		}
		/* directory scan */
		log_message(LOG_INFO, _("\rScanning directory tree..."));
		SET_PHASE(SCAN_PHASE);
		if (xfs_get_agflags(pathname) != 0) {
			err_message(_("Could not get non-allocatable AGs info"));
			exit(1);
		}
		nftw64(pathname, nftw_addnodes, 100, FTW_PHYS | FTW_MOUNT);
	} else {
		err_message(_("pathname must be either a regular file "
				"or directory"));
		exit(1);
	}
	dump_nodehash();
	if (n_opt) {
		/* n flag set, don't do anything */
		if (numdirnodes)
			log_message(LOG_NORMAL, "\rWould process %d %s",
				numdirnodes, numdirnodes == 1 ?
				"directory" : "directories");
		else
			log_message(LOG_NORMAL, "\rNo directories to process");
		if (numfilenodes)
			log_message(LOG_NORMAL, "\rWould process %d %s",
				numfilenodes, numfilenodes == 1 ?
				"file" : "files");
		else
			log_message(LOG_NORMAL, "\rNo files to process");
		if (numslinknodes)
			log_message(LOG_NORMAL, "\rWould process %d %s",
				numslinknodes, numslinknodes == 1 ?
				"symlink" : "symlinks");
		else
			log_message(LOG_NORMAL, "\rNo symlinks to process");
	} else {
		/* process directories */
		if (numdirnodes) {
			log_message(LOG_INFO, _("\rProcessing %d %s..."),
				numdirnodes, numdirnodes == 1 ?
				_("directory") : _("directories"));
			cur_phase = DIR_PHASE;
			rval = for_all_nodes(process_dir, FTW_D, 1);
			if (rval != 0)
				goto quit;
		} else
			log_message(LOG_INFO,
				_("\rNo directories to process..."));
		if (numfilenodes) {
			/* process files */
			log_message(LOG_INFO, _("\rProcessing %d %s..."),
				numfilenodes, numfilenodes == 1 ?
				_("file") : _("files"));
			cur_phase = FILE_PHASE;
			for_all_nodes(process_file, FTW_F, 0);
		} else
			log_message(LOG_INFO, _("\rNo files to process..."));
		if (numslinknodes) {
			/* process symlinks */
			log_message(LOG_INFO, _("\rProcessing %d %s..."),
				numslinknodes, numslinknodes == 1 ?
				_("symlink") : _("symlinks"));
			cur_phase = SLINK_PHASE;
			for_all_nodes(process_slink, FTW_SL, 0);
		} else
			log_message(LOG_INFO, _("\rNo symlinks to process..."));
	}
quit:
	free_nodehash();
	close(recover_fd);
	if (rval == 0)
		unlink(recover_file);
	log_message(LOG_DEBUG, "\r%u seconds elapsed",
		(unsigned)(time(0) - starttime));
	log_message(LOG_INFO, _("\rDone."));
	return rval | global_rval;
}
--=-4Da9GvHld0Grtgo95r+H--

From owner-xfs@oss.sgi.com Thu Mar 6 13:42:32 2008
Message-ID: <47D062AF.80501@steelbox.com>
Date: Thu, 06 Mar 2008 16:31:27 -0500
From: Kris Kersey
To: xfs@oss.sgi.com
CC: Bill Vaughan
Subject: pdflush hang on xlog_grant_log_space()
Content-Type: multipart/mixed; boundary="------------090008060702010503060703"

This is a multi-part message in MIME format.
--------------090008060702010503060703
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

Hello,

I'm working on a NAS product and we're currently having lock-ups that
seem to hang in XFS code.  We're running a NAS that has 1024 NFSD
threads accessing three RAID mounts.  All three mounts are running XFS
file systems.  Lately we've had random lockups on these boxes and I am
now running a kernel with KDB built in.

The lock-up takes the form of all NFSD threads in D state, with one of
the three pdflush threads also in D state.  The assumption can be made
that all the NFSD threads are waiting on that one pdflush thread to
complete.  Twice now, when a NAS has gotten into this state, I have
accessed KDB and run a stack trace on the pdflush thread.  Both times
the thread was stuck on xlog_grant_log_space+0xdb.

So now I'm turning to you to help me figure out why XFS is locking up.
The box has been left in this state, so I can run any KDB commands you
wish; if you have any questions about the setup, let me know.  The
system is running a mostly stock 2.6.23.12 kernel.  My config file as
well as photos taken of the stack dump are attached.
Thanks,
Kris

--------------090008060702010503060703
Content-Type: image/jpeg; name="stacktrace-01.jpg"
Content-Transfer-Encoding: base64
Content-Disposition: inline; filename="stacktrace-01.jpg"

[base64-encoded JPEG attachment (photo of the KDB stack trace) elided]
nbcPlCjJzzx0qrHf38VyZYrmdJ3G0sjEMR+FSfb9UlQx/abl15yu5iORg8fQ4/GkBCunXUun TXyJm3hYK7Z6EkDp+IqqvSp7ia7VHjlklCytvdWJAY+pHeoVHFJgb6siWtsrKSREOnuSf60V SvpZkkhROiwRjj/dFFYcl9SiiPkmuE7EH+dQr1qSUrLdEhuCM598VGvWt0SdhockV5YDy/DS Xr2i7XdJdrNuzztxyfz6VuTWc8kYkXwbCpUYws67ip68Bffr1GDXPaCumpZwtLc6nBNJLtcW qttlH93jvj05rYlm00W6P9s187Rkl2kCuMDOT2A68UwH39tdX+nmztfC1tbmRgVuUcHaFK55 2jg5xnv70aXqIv57o2XhK0kljUeYvmDJxn7oK+x6VQnurE2ErWdzrMjFQlvJufa0mF4POOxw BU9jJ4c8xnA1ppVgAnSJmwu3qSQc47c8CncDQW1v2tnvV8O2ZgZ42JMqfKGMe0cJ/nJ9K5fU vENteXFl5miWyC0JEiK+BNwB82AMcjtWxeTaNFIs0y6xFbcNEjGQCUHbgEseBgN064FZeqye EZEsV02G/iIkAu3bksncrkkA+1FwKsmt2E11b3EmjQZgIPl5+SUBVUK2McfKTx61NJr+lm8t bmDQYLYwTeaY43JWTAUBCT/DwxPX71Mlj8Mz3VsIZby2gEqLO0nzExYG5hgH587uOnSppofB /wDZ8zQXGq/ayMQrKECg4HLYB+UHPTn2oENg8T2sLRP/AGDYtLG8bBznogUbfodvPfnr1ytp 4qgsriZ4tB0/y5YTC0bKcEEAH+XT3pIH8KRxxCePUJJEAaQpjbI3y5HJBC8P2z8w9KgC+Gvt Yctfm18pAUG3zDJkbiOwXGQBnOcU7AR32vC7Niy6fbRtaPuBxkuAchW/2R0A9K1U8dSosONK sg8dv9m8wAh2TGMFuvvx3qAP4LB5g1o/KP44+ufp6U3T73wslwxvtMvHja3CYilHyycZYZ69 +c9+negC0fH96biOUWVrhVCshBxJg5G7155qlr3iuXxD5Zu7C0V4wQrxqVbGDgE+gJz/ADzU 11d+D5dMMVvp+oxXSK4jkLqQzc7S/PT7pwBxyOetJp1/4Wi0oRX+l3Ut4GDeZGwCnHY89Py6 +2aG2BTsvEQs7KK2bSdNuPLAAknh3NwzN1z6v+OB71MviyVJ5JU02wUMGCoIyFjDGU/KM8Y8 5sfRa0J9Z8I+W32fQ5DJvZlLjgAhsAgPzjI9OlJf6v4Rn0uSC10SeK5wSk5I+9g9QG6Z5wPX uABSAwLDVjp9w8kdlbSKxB8udC6jAI6E++fqBVx/E0rNGyafYRFDk+VCV3fMrDPPPKD9aNLv NAtre7h1DT57oyMPJkVgrIB68/mPetC61nwtNDcJHojo0mTERgeWecdG5HT9eelAzIfXpm1i LU1trdJ49uFVSEwqKgGM8ABf1rRi8b6jDdSXEdtZK7s7ECIgZYKDxn/ZH61PFr/hgBPP8Pb3 UMCysBu+b5cr0zgDPvmsrTb7RrZ7sX2nvcxyhRGAwUx8HOD25Ix9KQEGua9da9cLPdxwiQAK DGpXAGeMZ9/5Vmr0rU1m+0m6ggj03T2tmU/OzNuLDaoH6hj+NZi9RSYGpcxR+d95jgAZA9qK uwC2nj3srMwJB5orPmsM5yIYlizjBpMYY+1NAYsKeOtaCOv8O3F0NFeKHXLKzHm48qfarYOM kHrWxcF2tXhfxXpzxvlZE2pynTAweuAOOB71i+GIrifSb1ItJtL5VdS3mvtf8Pbj1rpJbXUj btjw9pYwxbaZBkHJ+bkY4z39OnSmBlx79L0+aC38TacIHV5TBGFcs+CQo44BwBnjr9MyaNFB bTRahF4psrOaeA+cBGhZWJ3EFTx1/HirElrfTQXFodG0yLzlLecXyW6neODzx3A68D0m0GLU 
5NN02WDTNIkXy2VJ5m+ZR6MPf2z9RQBS1i3h1i4WK68WWssSFCXfywMAP02nkjOOf71YWpaD p1oqGDWra5Mk6oCjAhEJblhnORgHjjnrXf2setpqj+XY6C0o2HLscHl8Y4Hqc9uRXOeJINTl 0kNPo+nW6TTqWlgb595aQdSep5z2wF6U+gGBdaDYR+V9m1mCf51SUgfdBzlx/sgAe/NTJ4Y0 9ryOL/hJbAQGHe9wUfaj5/1eMZJxznpWvFot3p1veaUdK08XUi7vMupFmlTKOwVGUbRnyiOe 5HIzVq3fVLq5vIH0DRriWKSWNjJjAOHYgMTjA3HHIycc9aEBzlpoOk3cag67DbyntKnB5IA4 PB6HrgVPN4W0mKynuB4qsXMedsaxNuc4Jxjr2H510OqnUbWHTruDRNBhJnXBgjUu0hL8FTyA PpxgHPo/WNK8R6rav5unaHbrDMJ3lhlUEsu9eTuOc7efXA9KdkIwR4f8LLHBK3iJ33xAyRpF tZX6nqMY4Ix1yRzjNU9K0fQbm7vIr/WHtoo2cQSqhYPg/KSMcg8nt0966W3h1H+zLqd7bw7M 1tO+9JQJJDtdiec56ngdSMfjU1CW80+0us/2Ownc2ZSNSGhEhmO5Rn5QASM+4HY5dkBXj8Pe DzGVfxHIH8riXyzt8znjbtzjoevqKhstF8IzWyG61u6gmEhV8R7lK7WIYYXjkDue/wBa29N8 PXel5khuNElhuYzMqTL5hRsHCBSeuO578fWW8tJ7WcOtxohKQLIsMNswWTaGGGG7g4c5z16d eCrIDkrPSdCl8QXNrc6nLFpyo5hnVCzZDYG4AemT2/CtKLw54TFtul8Rys/l53JbOAD8/OCp 44TjPZvw0JUv/D0lx4jt7nSbm5uzLFNbKC4IZmd2ILcY24x+HPU68Nzrs+n20kF7oKx3Njyh YqYwwJCbd2P4sZxx9BS0A5CTRfC8fmq2rXeRHlG8kjcw354x0O1SBn159MPW7WwtL5o9Omlm hBcBpFwcCRgvGB1UKfqTXosy6tYXRuU1TRpm8ojeVLM7ASDaQW4+Vz83HZRnoeY8V6fcXAXU JNSsrkgKqxwfKwVvNkJ25PAwRz647ct2GccRSV1lp4L+1W1tOdZsIkmiEhDv8yEgEAjPvye2 DUsPgeOa58r+3bBFyoLMcdS46Z6DZz/vVIHGN1FSL1FaWu6K2i3iRG6t7lJFLI8Lg8BivI7H jpWdGBvXPTPNJgX4ba5ZNyMignoWwaKr+dBzlHP40VnqMb5eMERk47VBtIOCMGtPB7CqU6kT HNWmI1dFgjuLK+Q2dzPKFUrJDz5YzySMjNblhYxJYRNd6FqM00Zy7KTtcHBHGck4YdKx/DU0 cVxciW/urQGIkPbqW5B7gdutdJFd2jW5Y+IdWbCA70ifC8DnGOnB79qoCJrezeVYIvDd55p+ UsQcf+hEZ5HU/wCFPt7K0uLBfI8MXUru5RZfMGCVIyOvy9GFEN7p8bKkuv6sisg2QqjjjjH1 zyen59zSp7JLcK+pa3CPtjqscIfDjJwOP4u579aAMIeEPEMkxhGnSs4GSNy8dO+feoLzwzrd hZy3d3YywwxEB2cgEE47Zz1IH1z6V2cktml5FsufEHKnOTKHPzJ/s+mPxx7Vmao5utAuXNzr cxTad0xkaLdlQVbIwMHcfqBTsBzv/CM69/0CL7pn/UN09elQXGmanBEpuLeZUU7AGB+U7iNv scqePaul1LUg0RWLWtXW5ZFw0zMiupAOCB0AU5GOuacE0t3t5Bqutmz3u12ZYA53nJQoOQTk sSWx6ihAYK+HddglZTZXMEqoWKuCj7drt0PPSNvyp58LeIyNn9k35Bbp5THLfN+vyt+VbS3m nTTTPcahrvmgnbPvLOyjzgNxPTIZPplh3Na8cmnGK6uIde8UyLDmNSseMlmkAzx3yOTg5Y+v 
BYDj4PCGv3Fw8CaZOsiLuIcbeMkd/dT+VQXXh7U7OFZJ4QuXWMpvG5WbeACM5BzGw59K68DT oJ5YLp/Eq6mSxtnYne0YZjjAIO0jr3zkiq9xYSXmubWtdeutKjfDxSb2ZHwSAOeMAnHJPPvi i3YDHPgbxAqo7WWI3iMyyGRdpXGeueuOcUlx4K1q2z5sMI/d+YuJlO5fUYPPqfQc+tdiz6Xb SEtb+JhH5zBBLLICqqCGRcMOhwcnPoeorOvZrBNQjtpLfXYZpZP3KPPJ5ssbHAUZbg7hgcHO OenLsBzd14O1yz0z+0J7UJbnOGLjJwpfp/ugmufya77UbWG81i0t7C01s6ckqR3VrJvZnfcf MI+Y4OCBj1J/GePRPCrXLFNK16RWnISJkHADKCvBzgbsZ65K0rAec5NGTXbXFt4SsmR5dM1U QtIGBmO0lSUJXg9Au7Hc55Nc/q76K6qNKhuE6ZMx56c9/Xp6e9DAyMmjJoxSUgGt978KfGrO yqgJYngAUxvvfhWnoUe/WLdTyN2f0qZOwED2lxkAWsnAwTsPNFdnf6rBYXAhdCzbQeKKw9rL sXymL9kb0FZeoQmK4we4zXVtCS3SsLXI9txGfVauL1IDwxO9vqjFL6Gz3RspkmUMp745rrYL 6dYfn8S6ajbeAsSkZ+bj6dO3fpXF6LcPa6pHLGkTsFbiUZB+U111pcXnkt9mstGAUYIY7OMZ 4BPP3v0rVAKb2XBz4oskVctuEYLlhzkcZxkDHOcfgKbpWoNAl2kfiaC2X7c7fNCp8xc/6wem fyqpd6ve2CCYw6aoDcRwn03rjAPQZPH41m2/iy5t2uG+yWcnnT+efMjztb/Z54pgdZLqjCYv F4viLiMgM0KgE7l4PHtn6jqKpa3rdzeaBPHJ4oF758YaS3WBUywKcHjPd+n933rJk8c30xYy Wdi4ZdpVoiRjOemfp+VVj4uuxZTWkVrZwxTRNE4jixkMFBPXrhRz9aLgaun6xqd3Ml7qOuWt nLZoWtXkjV2yVwQqrxnAA+fj8at6fqAeU3L+LhbSSZeQtbhj5jpyRxwASR+YFYMXi2SG2SJd L0wsqhDI1vkkADHfg8Zz71UTxA0Wlyaatnb/AGeX75I/eH5g33vqPToSKaYHZXXiLU47S5W1 8VQXN5KUCpHAqAqSxYBmHGMg89uBUVtqVzLLexReNFBkViwa2ys3LkgA8KMYOO+49a5628a3 trHGqWOmsYxhXe2BbpjrnNKnjjUYr+K8htdPikizgJbKAc5/xPSmmBo3lzPc6vb30evGdIoZ gs6wKphz5hVdoA3bsckDjd2xWvdaq/2Sa6PjkXEykGP/AIl/BwsmAMjgknA6Yy3WuXPjfUvs c1osVokMysrKkZAwS5wADgDLnjGOBUdj4w1HT9K/s6OO1e3xj97CGI5J4Prk9fYUrgdXLPZe WLg+K7x5XjMsi/ZNjCbDc7tvyjlh3IznNUYrqHU9XuLrVNeuoWtJHNldG2BICNmJiAvT5mJH HT8stPHurKsSmO1cRMzLvjLZLAhs5PIOTx0qW5+I+t3dhPZSraGCZDG4EXO0jB5z6E/nRcDq Y9ctmiV5PGV8SxUM4t0IL5Tdx5fTAyOT0x9YBqNld2Ymm8WX8d7J5e1XtlcBgEOGwnzYIHQ/ 1zyGleMdQ0iz+y28Fm0IuPtIEsO/D8dOeOlWX+IGrtEsZhstqbQuIeRjaODnjhR09TT5hBB5 euXE6+IdcmjVYg0E5jL7nwAA2Bk4GOPepBoHhPyiD4o/eEABhbttBwmSRjJGS469hVM+NtVM Cw7bYKoRRiEZAQoygHsMxrwP8Mc0WNK4zqo9H8LOwEmuzRjeoz5ecL8mTgDry/HsK5u9jgin C20pkj2KSxH8RUFh+ByKgzSZpANP3q2fDQzrER7KrMfYYNY38Rq3Z3LWruy/xoUP0PWokrqw 
He2OmrfwNeT25czuXXK9F6D9BRWfH4/u4o1jjs7cIoAUfNwPzorl5KnQrmPRv+FdMeU1BD9Y iP61wXxD8PvoEtkskySmVXIKgjGMf419AzRCC4xj5G6V5T8Z7MtHpVwBlQZEJ/I/0rzMJiqr rqE3odFSEOTmieMtzTcn1NWWi7YphiPpXvXOUgJPvSVMY+KbsFAEVFSFabtp3AZkf5NKMf5N LtpCKLgGR6frSjBOAOaTFS2zrFcxyMMqrAkUNgaMeg3cts06LGSBny9/zH6Csw7QeV5rtILz TliM0l0AuM4BGfyrjrhlluZZFGFZywHoCaypzcm7jaGAp6frS7k/u03bS7K1EO3J/d/WjdH/ AHf1puylEdIB2Y/7n60Dy/7h/OrVjcLaCcNbxTebGUHmKDtz3Geh9xRp9xHZ3DSSW0c4KFQs igjkdeaVwKuI88KfzoxH/dP51ZsZltLxZ3gSZVz8jqGB+oPBqBsNMWCBVLZ2joB6UXAjKp24 /Gtvw5YC+nuIlgjmkEJKCToDkcj3pmr6t/aVvbQlBtgUKhKgFRjpx2rovhvZmbULuUg7UjC5 9yf/AK1ZVKnLTcmVGLbsjLXQ9Y1As0VsXSFjCMYXbjtj8aK9esLBNPt2iiJIZ2kJbrljk0Vw f2hHsbeymeh34Bt2JHI6VlXem2Wr6esGoW0dxEedsgzz60UV49RtTTRpD4TgPGHhXQ9PgtTa abDEZJ1RyM8gn68Vbbwd4eRBjS4eAOuT/Wiiu72kvZxdyEldnNeIdA0m0td0FhChLAZC9s0x NB0oRKfsEGcd1zRRW6nLlWorIQ6JpeR/oFv/AN8CopNH00YxY2//AHwKKKlTl3BobJpOnKpI sbccf88xXDa+qJNhI40AB+4gH8qKK6sNJuWpnLYwcn1oBPrRRXpMzNxLpwigJBjA6wIf6Vhu zbzz3oorKG42IGb1NdV4a2SIEkihdS38cSsfzIzRRU1/gHHc7ldI00qubC25H/PJf8Kf/Y2m Y/5B9t/36FFFeVzy7m9hraNpgP8Ax4W3/fsUsei6YXwbC3P/AGzFFFJzlbcLGjF4f0dkGdNt T/2yFS/8I5ovH/Ertf8Av2KKKwdSfdlWQDw7owP/ACC7T/v0K0rG0trNCltBHChOdsahR+lF FZSnJqzZpFI0ABgcUUUVkWf/2f/bAEMABQMEBAQDBQQEBAUFBQYHDAgHBwcHDwsLCQwRDxIS EQ8RERMWHBcTFBoVEREYIRgaHR0fHx8TFyIkIh4kHB4fHv/bAEMBBQUFBwYHDggIDh4UERQe Hh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHv/AABEI AeACgAMBIgACEQEDEQH/xAAdAAABBQEBAQEAAAAAAAAAAAAAAgMFBgcECAEJ/8QAVhAAAAQD BQQFCAYEDAUEAQUBAgMEBQABBgcREhMhFCIxQRUjMlFhFhckM0JScYEINENicpElU6GxNURU Y3OCkqLB0eHwJjZVk/EYJzdFsmSDo8LSRv/EABsBAQEBAQEBAQEAAAAAAAAAAAACAwQBBQYH /8QAJxEBAAEDAwUAAgIDAAAAAAAAAAMCEhMBFSIEERQyQgUGITNSYoL/2gAMAwEAAhEDEQA/ APK8EPuG4ojmgFQQmCAVBBCYAhUJhUAQQQQBBBBAELhEEAQuEQQC4IRC4AhUJggFQQmCAVC4 aggHYVDEKgHYIaggOiCGoIgPQ5HLCoDoj7jjmggOyFY448cGOA7scGOOPHC8cB3Y4XnRHY4M cBI50GdEdnQY4CRzoVnRGY4VnQEttMG0xE50GdATG0wvaYhM6DOgJvbIVnRC50G0xAmtph3b PCIDOgzoCw7ZBtkV7OgzoLT+2eEI2mIPOgzoITW0wbTELnQjaYLTe2QbTEJnQZ0BN7ZBtkQm 
dCc6LE1tnhCdsiHzoM6CErtnhCdpiJzoMcBLZ0GdETnQZ0BKbTBtMROdBnQEnnQnaYjscJxw EnnQjOiPzoM6AkM6G86OPHCMcB1Z0IxwzBAPR0J/WRxx1o/WBgOZ4+sRwxK1AAAFgsERUAQQ QqLBBghcTRbCvOR5wAeziw84CCwQrBC8cdyhGcS1lOX8XM3cz4wEdggwRO9Ar8wjqdxQHGEW LSON4RqWpRkqQdrs88UBHYIMETDW1KV6M1SDKyi90QjDLojDNwwYPdgGsEGCJRG1L1jeJYmJ xFB+9D6hhdSW/b8kGDtCwmahl8ICEwQYI6G8kaxQEkntiiQeGRe2p847KygiwCMLFfhH3TgI fBBgiRZ21S64tmwbvaELhHYXTbrmG48orD7Qhaa8ICCwR9iw+Sr31uMCcOHs4jO18ISXTDwM vH1QRi7JIhb4roCAgie8lXjZ/wCL5oQ4hE4tQy/0htRTy8ksoeNKIAhSAIQTPVinynAQsIix eSrxtBQAbKaUYGYs4szc04xHPjapalBQDsHWF4yxF6yFKAjoXCI7m9ApXp1xxODAhLxnQHDC 4a+zx+xCoBcIhMK34AhcI34N8fYgFwiJNOzqTqTPqEH1clTJOLvvnz+ERUA7BHc4NQ0bG2Oo x7jliywh47vfHBkndjJNxi7PVzgPsEIwHA7YBh/EG6FFplJ3qSTTfwhvgPsEKLTKR9hMo3f5 ufKEZJ2x7ZgHs4ftMOkB9ghWxrAJ9p2ZRs4uyZl6awKEaxGWE5SmNIAL2jA3QCYIlabYRuTo JGpGNGAtEJaIWHXLD7sud8OvjDsbWhdUG1KkqouY+sDqX+UBCwRP0fTCl1qQLapJVEFfaGBL 7Ol8paweTYzqP6VR5p6jpExOIr7kucBAQQ1Fh8nvQ6dxnDCofD+rw9gsm+7+1AQsJi7ulGE7 Gu6H201QjOkVhOF6zjfylylfFTRsjqpyMlAP0jEIP4A8RfDx4QHJBFrpelRrFDsjciRlKCUG 0JMJksAr++fdEOnpt4UmGg2bDkmSKEIQrpYhcJSgIyFRZqXpgZyN2UuqYZQCSDtm3rpiMBx0 7oddGRA202mHsC1YoOQBUCUhF2TBcsP/AJgKlConfJJyAYRt+UUVmFBPyxXmEyHdy79YtZlJ MhyxGDJGR6WIGHMn1hYQ8/8AcoDNYRFyqxqRpqbKXgQbGaJXMjL+5FPwQH2CLfSbOSdTe37A UsVCWyJ67sBDznBUlHj6c9AygpRdr+Zulrf3SgKbC4nPJJf0XtOcDHkzOCHXsRJ1Ywg6DTL0 eAjZ0QBnBCHtYucBUIItFSJkCZnps5MgB1hcxiD+s/FE6jQJlhaFScjS7RljH1ehfh+UBnUE aA8MO3ltQMAAmmCFmGEh0u8eUcbWSSzlubVnJOkhHBCWYYG+ApcEaUzto0zeHO2cCoxTPOMF 3cpS/wAoiS0xI3iqUwyQYCScfzgKRBBBAEJhUEAQQQQCYIVDUA+XD0Mlw9EBUdzX9cK/FHDH c1/XCvxSgJGvAemRVIuVeA9QP+blFNjOMEKLhMEbh+NPa8kbOzqcGbh3cwItAxl0dzWB7yxd G7bg9rIgtoLg1I8w07YCs3tCD73jD7ogJ6DTE5OFPnAF3SjMNpWe2pUYw7u8Kd8P43jo8XpK 3YuyLenlwQ1ZwJyVCPcyAF3gD8JwnYEakwOcT6sI8JPHhzujLf02NGVjG4GpxXZe9Ocr/CBw 6YJMKGv20I/sRGX/ALJwWuaj6wJGjZ8SVR9bJw3a85+GkSbHSTaSYuADAqSi3csV08u+UZ4j 6bXqM5GNUeoD7RYtYNmfgZ52BaH9eL/OCFoRj2azt9TbN9VW7vfEwYDOp8O/uCbJ+kf/ANYz xOS8KU5o021GlfaZfD5wwn6SGWaSm2jKL9YWG+78oB+ix5NQNwx7vWBjUjCQZbiTsYBYlO8E 
z2pfOMdiTUJn4lOE5SBwCnD2RC4QGgI0ZKZRgR+q2Qf/AO5dy8Y6P4mbuZo8ssez8w3f5Rlq fbDjAgTZoh+zh/wjq2N+6QwZK3a8P9f84DQHBGcNw6SADNAEiQSyQivmWZdHQ14/0UNSTmqC 7wHmcMud984zvYKkzBAAS5ZvaFvQlGgfjs3ZiVu6LCZhFdrAaooyRuB49wWYkF1+LTWU/lFI MQLPIM1GcDIHtJWXvS9Iv04xBFtVQjTiyUa3Z/aL5flDShte9nKOOTKtn0y8XDWAuvp6BnbP QwBW4hECRYrpGS74ibTPVsG5lDCmEERIuJevCcQaxqfgZA1KZb1m6SYIX+7o5XRA5IDA9JEm hNMDu53tRYs1n+xjb1XSWVsoTy8Jgu2E3lL4Th1nAPpSuUCnKKNMRG4Ql8PlFLjpa0C9fiAg AaaMO8LD/jEDQ2clMOm27cT9H9Di2nh6+XC/xgcNjWUe9DTZWUFEVk7upd3GMy/mR4/vF8r/ AIRLGVC6jb9gzgbPpiwh1FIPCU590WE0WcmBUiMazeK9qNKTkknKKdGPKNAIKrOF73u/7/bG fLHJ7AjSqTiUpCcwXUGFkylqHunHC6Oq91ytsO3Cb8ssvcAG/jwiBcLO/T6kWLHI4oK1LcUW HhIwN+v5RJsewJnSogY05RXTBQid6V2EXG6MrhcBfvrNN1cyI1gCP0pnlhxXSypazjPv99mD ciWLfnUlHsZKkAU+HDhy5cPjAWZvUoPJ+hRqRlYEq0zPLF7Mpz5yixKFiZMoI28ZSpUYrUjJ w3GzLKFLdn4Yf9zjIYfRnDRqNpRjylHsmB4xYuFpn8F0p9v6EPew9q4Xa74YoMYwFrE2MCXE YEWIR2SZ8pz5d8QvTzxtm3jch7R2AmCu3Zd0u6OVwWKXUzOXnbUP2cV0BptLqUCOm3tyOWfo 9K8esw35l4OEru+ccaNSw+btcSSpShNXEmCyTRdfnTnfoHkG74xVmNqfnKn/AENYUQ0CPyss wzAWcby+MQrgj2N0PQKQACqTiwGB+Eai6VQsAcsQOpLwV0bhTAEkCK8YsF1+53QWkOpKks3J GiNTqFOcWWE7Mn85eyHwijfaB98wUgh+9fEw+U2pZPrJyLNxYDiSxdYXP70oyFmR1gS5Vht6 kkpH+i9iLxaSxcr+6USyipGolGQBS5J1g05IsSQu6ZaiYhbkp3btwZd0ZWWDOMCSAGIZgpAC H3pzidLpVZ0g5o8acrosmQ1ZnIuc/s/EXhAXdnfmQDgqxvYPr+15xmkrsF2EMcdJvDI1M/pL qnxkuJ6rLD2xSEGcg3fGKRUjUNnUEEqco3OJCaQYX2BAnwuiOwfc7P3Y9vCO3/WFMX5xbOm0 ez0oMfrWc7CYWH2g334oq32ed7HvQGdSnzhgNyvew6RmNNMqdn9KxveLalOcHLv6sIfZ4c+E cqOp2fpDbDh4RqEBqcwsRfVF3z0lP4xVnxhOai24Azk4lq7LwkB9mQuzEmookZNSLGfpUr9H pJqF4glzvLw8pS5z7osSLfULV0o49JOXVCSBSEmFp5ykLW+eEPd8Y6jKzbVO1ddseFXnECEn x+zduy5TiJ8hh9H9Kjcv0eIuQgiyesvvu7P+sLWUHsGUcvdcKdQYApNkl4xixa3zlyldAdyO sGfyfNTLBqAm5JwMOXfMWZz7o5fK1q6P/jAlWwBRZGHTTnOfD98VSoEA2p4VNo94aczDijh7 eEHtiFIP5wF8dKzTHKCFKY5V64oZhGXdK4PeLiKc4c8s20lYRkgVHgzxKDBCLuw4pXXSjmcK JAmLcSUyxQqWt4esDl6CnppLnzio7MdujUgNSlZ2QIwwPZH3RAm6ke0a9vRs6bNEUFTnHHHa RGVANAN0F0UD0XQIfvXc47qoYeiqsFTyPGsGEsIu7jK+cP0/TwFKNzWOoFoQN4glZBAetEIU 
AU+/I0bH0UsAowZ0zupDfi8Jx3qKzJOxfo0eA6/PDi9Z7vylHKjpglfT7svbdoNVJVMiiyTN NJx1M9JJjmco5ZtG0GFjGYIN2BPg5TgOBwqclYzhR4FRQwlZGEIuru8e+FvFVJljP0amQGlZ hYSTDDBewHu8Yq0djGj6SfEaDHh2gzDigJN0eG1eW2E9GjykIcO8LtR1F1UAnKTJm3CiLCIO WIzf3vGJNwo8HR6obajNKNJOwBzBeslrrrFdTtoEeUc/AGEBwfRiA8Tp33cuUBMF1zk9SBt9 FD2SxGa3xGdPE9KHr1LaA00wUhh3rsucoTWiBG2ugUyP9TIRgeOXOfKIKAsxdYKf4ymAePMx h3rpRGGPCnLWdjNXCxKTPe8PhFupumEZzO0qRoM8awXXGe7L84gnSlV5KhcPqtlJDMeZi5cp fGArMEWFPTCnZ25epHhRKDJZgvCLJUjU2trX0l0ODAFXlEF/rJcr4DOoIvLw1AclCVqRo06F wLLmoXiCHQsHd8YTSbCg2hcPOTrtnJllmfZhvgKRBGmLEaAnagIyUQXIWWEsJ3td90o4dmwN exkgb+mBHGZ+Zxw90pQFBhqNbY2EH6MJyU/XXiUiFdeKfhGWrPrB/wDSi/fANFw9DJcPRAdi Qp/+FEv9JKI+JWl/4cS/0kByvDwNenCAfs7sQsKhMAQQQRYfjTaHyR0WaM440jLUy64vjGYR 3M725M+LYFOEBnaLMDjB+UFtIfKYal6zbMBuPL3vvS74Ujakaam1SZMA3AqDMXjFDMq1+O7a /wC9uhuh0urahAWaDbMRRnaxF38eN3dFoX4tAS20mlTJgbmIo+Ox4bUzwWajHvAzwiFi/wAI zPytfuj9g2zEn+8HX84QoqeoRllEjWDDhFLCIsOAYpy4axAu6gDPR6gRyMkfXBEQeWSKY8P3 v9I6KTbdgdBdcauRODdMWE7jd3TigKKqfjlBRxynrS+z1d35wvyzqTpDbwLAZuHB6uWC6fHS LFtToFLI6GpkYxlNRYdqw/D2b+cTFn40C9reBoDiilq4JojyDNDAyujPE9W1OAwQwLMWLeF1 d8vlKONPULqmeDXglThWnbphmHv8ICOR9SsIB7p0v3xt6glAcscQDzRZybrCeV3hGHGY94eD tbwt2JjyqftnCTt4927e9u6XLFEC8s9PIGd0SjQEjxmYt4Xshu7PxibLAAZf2uVsRof5zxjJ unnvf9PUYxGZu778OmVJUO2FKRrDSjS/V9Xgl+UWNFWbYcnZ9jONC35OM4XPcnw+cLTqSV7P 0kcBQRhdN0Jf5SnOU/8ACM+Lq2p8zGBf/Vy5XflCfKepNoPO6SNxi9ZiDp+XKA2R0AmBUjcM eMI9cvL7Hdr+cQ9aEjy3oH8YEixBF7FwZ8JSjLi6qqEnF+lTes3t7XWfd3QwZUL2Nv2AbkaJ P7WLiL4zgLg37eOzcWzAVBcC1pYTM/jfPnK/ldETaIcds7cz75oEYcZyvDoI4XKU+cQ5lSPw 8gA3U3Anuy/lw/KGnR+eHUsIHJeM8AeyXpdf3wExZ+1I3hQqRrCdzLxBP/V3Rc6bTI21wrJG SjNKwt0xFi13pS/z46RlCdSpJLNAScMoB26dh+0l3TjqLfngBgjgOSgJpheUIXPL934RAtdF tSB1pcOBAAS0WLEJTfvT5YRy/bfFF977osIo7Ebw6oEYkaBealKF7JcdCde2gR5I2Eo83D64 R05awFr2YlfR9EAU/V+k8oz5xNeTzCNRnL0CdLkuJ6UsPYkYAMty+/v/ANzjLtvWbPse0m7I EWIJPIM++UdnTzkNYQpWKRuOz+rJP1LixP2mNRLajp04CApGoVJhbSEv9ZKcNUGSmGx1EsOQ FLBt5MjycwN/xlHC4VPt5iXbGdKanRlzAQmxTu3p3znOfG+ONQ8Hb3RoOhyjA4TyUwp3Hfig 
L9Q6CnllFlOpyArEoUm5oAk5kw6aS8JeP7YYRtSNTSYiUDaURhJEPEeT2rvvxnydevRliJQL 1CUoW6IJIrpCg29fsewbeqClF2iMzc/KA02tG1hR0/gTI8XVpMgQSdAzndfiM5in3fshipDm dqtUIZ1jOiKai8IsWT2sQef9b4Rm23rxpwphr1Qk5e8WTmbgZ+EobUHHKTMaxSaqN/WHCxzg NipunmdqLEyPaZP0vlqF5Ig3GTy/Zul70u66IAw6mEFqDYsdUAykQUnXhMLu3+/BKendxlrG eZynaNp2xRtH67M3/wA4DBjOMxnDGeaLtGGCvmKA1mjxozqP9ATDNK8oDRlkiFO8svtBnO6X h8PGJWqEaNS4LFmzN/SRgVPRncYXdqaLxlPhO6MTLUqSfqyxQRi7WSZdihP9cf8AagNbWMKN BZm3DBgPNCcjOLU4pX347zcN3KXfOcctrCZANG8Oo0acpUJzLCgODxUEzlrPx1+HwjK/7eDs 4cWn5QvfH2xjFh7OIV+GAkaTWAQVY0rzvVJ1ZYzPhKNAZ8AKorwkZJSzbsJxBAhXyOlixcp8 ucZZBEDdXAbb6CN12LAWWmK2YIpYy1XGQRa7oA896OBrGBNT/YTrFpalSNaEkwJafevuxeHO V0YtCt2LyDV9jZybGzUxKlKaNUSE8OI68zOmLfwh92Uuet8Fqi9H5Pn9GgAanUCTBIEYokOW 7dfllcbuHH70ZNue5Cv6kBpdojkcdahTpwBohZZBAMQbsBc/bv7roky3VN52KrGM5EaBUkyE 29LLM4T1nwjIo+bnYwR7kG6GPCMdPhagL2rpUJIcwnEHKLxC/s3hly1uhTo9tq/KTI3JtNNR mBCeYcIMiw6XTmDFOMHwA90MGAHuhi7xdXQltqGqKte9vwt6feTi5HD4BlK/v14RUUY/SEox +yYEQvzhuCMBtLhU6DaFQ+nkpCddhASIgUpjJ1xTFPu096KfXDkgqHokCBYUlKJVzxEcABlf 66fiKKHue5C4DVOlaY85ji8KXJKIBiQIE5/bkTOWk/n3QoypGQ50eMDwAgCpSSoLO1lK4EuE vjGUwReQawz1PTYDFy8a8CPE47UWTh1EAMrtJS7/AJRwl1Uz9HhU52HCWaHYsOopjv48ozWC GQSqMltBSapSpH+kBHSAnJxcuc7oTS6wDbUCFed2CTMURkEQNP8AK1kBizlihZ12f1d8paS0 Dr/hpEd5QsKyoCntSNQFUEjCHq75Fj+EUGCAn/0IdVgTs401t7Z5h/Ewf+URLhkjWG5PqsXV /COaCAvbXVrama24B21Zrf2Qlh0F84S4Vg1KUZqDYFGy4cQcXHMvvnf4RRoIXi81JWaB1byk 2zKg4TAjF8uUo43CpEA0aFtRgVbOnU7QIR4tReEVKCAtaepyfKRzcjkw8pcXlCLD2whhxrqR qQbUSS2qAojrt3F1m74xUIIC1+U6A54NdVjUPNxS2fLF2cPC+EI6kTJlAl423NcBGTGEzFpr FXhUBci65yS8ewYlpZeEszFuBikY8cLhqAcLh6GS4eiAqJ+j/wDmBH/SRARY7P8A/mRH/SRA qEEEEbBELhELgHYah2PiceSoCP3RYoD7Gp0GSAdNlDziiB58gizPtPCM/qR46bUFKcnKyy8E dLHU6lqT7Nkp1SfFiwneyKAur5QaAbwa5EqRlJcWIwsPs/8AmJZ8Qfo9HsZxQkpZMjTkhhf2 cuIpfLlFHMtCezvYSh+770u6cJ8vHXY8nJRY8OAs7mEE+V3CLFurglAvodxWJhpzwJxFiIw8 SfCMm390eAQcUT6yqlJzGNq2NElAd68wkN2Z8uEcbo97ezo23BhAl9r3ogWmysGNO5j3CjSy +rNF7MWGpKMQP2wqQKfTcjrMvSR0r9RRmdPvaxkMNGmwCAYHCYWZwFKJoy0J7y8knZUpQbsv 
DxD/AOYCfptYBHt6PcEyocYTjhBumdpGa+uzTkwB5WLEH7sotvl4v63Oamo8BgsWEQZ3RDt9 Q7AW5gySsblfi5SDf3RYKPwDqRCA4GMGdLEGNcdGRA6t7sjXrMXpfViDxTd10YcjGNMYUcSP CMm4RYvGUWnzhPG8MAG8JpgsRwv1k/GUQLJTdMNrJXjKmUqTRGmBmLCLUH3dYh06nHbQaDGA 8oR80pnV6YL4ifLZ12xKsGNKaalFM0sQvH/fxjh6eOA+GvfVbUYKYxd184sNVQmAjqh2Rk+q JViCX8IjoUYdtJgjhnZozBYhC96cJiAQQmCAIVBBAEEJhUAQQQQBBBBALhELhEAQQQuARC4I IAggggCCH9gXjLzgIFQiv1gSZzB+cMQBBHX0U6gUEJhtS0Jqj1Ycme9HH/8AmHdEGA+wQ4nJ GcYEkkAxDF7IY7Gtke3LENtZ1qoBZkwCEWXoEcuX+MBHwQrB7Hth7QYPdB7Yt0IfenAJgiaM pKqgKCE3k84Zqi/LDh13eN/ddzvhh8YXhhyum0BqHOvETme1dx/bARkESDGyOr3m9FIxqsm7 MFpKQb+Gs470dH1OpLNGBqygFimEWccEvs8e1AQEETSOlahWNY3Ult9FCEQsWKV4gB4zkHjO UcCdtWHU+e/ATD6NJMkUYfyxC4B+MBwwQuO5jZ1724bA2k5qjDMfdIIA8RCnylKA4YIk3xkX smxjX5WUsDjIOJMxgMl8YRTbOvqF4C1M5O1KBBmL8Mpc5z5QEfBB2MQPdFMP5QQBBFrcKDe2 1GqOUjS40peM9MEV5hfxisIwbYoKJTbwzhYQ8pfnANwRK1YyKaefBMizAJUWWEYsvWW9D9N0 2c9p1izbE6FKhFIJxynSV4uEoCDgix+SSzyfcXsCkpUUjOkVhJvnMV8djPQa9yZynLbE5Q1B cxkkC43S75wFQggjrZ0Bzq6JW0n1qgWEOKA4YXFveKMGmZxr0akazJMyjA5N2vhEfS9NnPGe cdjRok5OaI7DfMUvuy5wEBBE/VjD0JsYwHZoFReMOLjEFAJgi7MdEkrEaE5SsNzXC8RZZN1w ZS75ziuLGdyTLFSbYzep+72pd8BEw1F3fKMGz0GQ/LzvTVBn1T9WGfC+fOKXAOFw9DUOxAVF rs3/AOaEsVSLlZX/AM0B/DGcgocJggjcIhcIggOiLRZugRr6sITKQYgC97hfFUiRY3hSyOhC 9NgEMkXZF7UQtq1YUGSvRm7MS3ty0szqzCOBgO66U4gKXoZSmfCNvUpTQYcRhIi+7jKcBlpa YebksIw5wsYsR1+/3w0ntIHtGcczgN/WdZdFoTTHR6PyseFIwJxNuLCEgQb+PdEi108yDaxE 9GpMGIwIsQes8MM4rZdpYAOB53QOJKdd1eZrfLnfDCe0jBn/AKE3MyZpPWdm/wB6LC09nSkn 0/bwYA9bkGF/ZS8YmHxMA5wSrGpAyrGUJgMzIDLGHvxS+EV9RaENY17Msbet16ws64GvfKOZ HWaZA17GgYQFGmYc8WdoZhiBaXyzfOrQKlAMro04yWJMHTL74nDKbZE1UejM6cr0Lq84vcvl PjOM/qy0I54RhJRoxt2EwI8zO1vDHYZaisGYjGNqALZ02ynYjPXSnzixCWoICUFWYEZICgGJ 5Giy+xi8IvVBtSM6k247Zm8WYZhPzy5TmL4XxmlWP3TzgUp2bZSk5OUSXx0iWputhtTP0acg 2ooJmMsQTMEw/HviBOvFnWS+HqSV5QWoJkxYf1d32UXBQwsilOQDYEuAOVl4S7jA/jihqLS1 gzBfopPlGBnnF4u0OfD8oR5y1+x4CW1OFXhCER2Kd10vCLF8fEbagb1yxSyN6zYz5BTBJuv1 4Yrobb0bUsWY9gRFO/R0xnBy5ZRc5/svlFM85YwZ4+gU+NUZIZ+Iy+Qpy7pcoY84WB0EvTU8 
lKGcGYVPXTnM7FGnAaQop5qXtYiTkyUoaxJiEeQXLdHf2gXRnLXZi5Kelkw1mUoQ35IsO4ou lfx5Xx0I7UVKbCADCn2csnKLLEdO/vvnOEI7V3snav0aiN2gWIIRX9TpdDgJgz9PUu0pqe6K 2hOk9PTGE9aKcuM77v8Ae7HfXiBB5FrslGn9HTkjILCXcannzmKcUBnrPocs3Y2FEFaYEQNr 4TCEX74ddK/dV7GJtGjThNOLyjlIb96XwiOAgqXbSXh4IQKVmy7R2TuO/wApfOL9R9GJm20R iRuqwBozgiGYSIOgZy7Ovf4ftjPKfcuh3AheSSUfk9kszhFiLr916YbHg5GkPVNohCLMFf1m LlP4fnGYlvJ5HU9QVCpUrMjYVMyQktxd8zJe9IE/ziiuibY3RUj7WSZgxCDdEs11UNA+LHXo pKaNYZnZYr5ZYu8M5awGKWd+cFTxUjwoQqlAsWWmT5kosT7gmZ/MGQv63aguWHFzxd34bucc tWUe2s6dnAS6qj3V0JKPJIy9BSFyvlEOnqQ5AxrqbTASrmpQZjLEpDqG/wBqQeF/xhb5Vq91 UMqkZJSU1lLCUmMJ+7wvv/wgLW+WYktrxTabbzcp46gwIsONObdD5lmLUmdCMb8o2I4gwWHd kYIwM7rpSiAdLS6hdVjcpUpm0JredtCfLDP1nfP4+EMKK/eFihKcvQNSrZczCWIvc39RRpwE TVDaBqfDUBO1ZQd4O0huM+E7onWOm2Edm6qrXhetIyVeyZZAZT35yvDP4RHKHVBUjoJyq1Sr TdXgTlt5eKQZd29CVlQ4KXVUe1daxnKZKMxSXcoEZL9kei4ebRqBRaVyUvCgpQqbttLMEIMi +6QcPavF4XwhZZ0z5bcSgWOBozFJBBy0NwyN+6+6XGXhfFS8sHvoMtkGS3nlElbOSeaXeeWV 3S9n9kP+XlQ9HhRptlbsIixiOSF4DTMGocU4yF3cLMaeBUjS1EuSoOc5zRHFiOCMwWm6ZLul 333RAeStNr1FTJmpY5bU1oDcKcWszDwzu3e/5RE+X9Q9OJXsBLUUtSmTNDlk3YjZ/aC53xDp 3tyR1J5QpjspfnTOxexjFx07o04DVGeyth8l0LwsXqBDEQn2skxQEnZzTvEV0t384qjXR7Cv 8vM5+CEpnJEJv3vrHd8ZfCIcutqk2hcpOUlLBuBmaftJeMGPlMMuV0QCwY1ig1Sp3hnCxmcs U/hKI4D0hTYAbPSJO3jCV5MbyIsvqTLwCnMU+Up6c9YpSeiWdnUNL825pu1L0mwEnC9XIU9Z j8b+EUNHW1To0ZSMlYV1JeUScImQzQl+7fOEmVnVQzDx9Nm41RhRpwg3S1L7Epe6GXdKLvQ2 7H/xRTqlMM0QC6pNINEZ2wikDXB3B/KMrtgZ21tdELk2kjIA8Z6gRIvZwjw3/OIoyuatG4I1 /SuE1GIQ02EkMgFmD7RmHhMfjEW+PDq/LNseFm1G4cId26QZd0pSheLhYfg6UqZT/GyWA8SI XMJveD70XCy/Yx2f0HtKxUUMLqcIstPd1hgjZBvHvS/zjFm9YsbVgViBSMhQHsmB8Yk2uqqk ak+zNrwIgrFj7IZ4Rz4zDfLd+UZrajWlGMmW/wBVdGqNqFtAU6DXMUG5uHaJB5Eh+GscznSr dRlLsNSbH+lWp2TBOERfPOnMN8Zh5Q1CPfG9rRDETs4hCFfMRXHLn3yhRlT1OpTlJlL8tNTk ikIsvF2Zh4RpwQ1Ss1O30O2DbVbkhG7VZPMzA5RpebLeld3XwlwRo1leVM1OSbpFFRLYIprR Gfxgy+U9/wBoQhTjK3CoX5yMIOXvCo8SUWIjllj96Upc/GGk7w6pnATkmdVRS0z1h4Rb4vjC 9bWbOx/od6U9AjbCjF6T0AvTWcrr9eUPuGCoasNoN+alR7aS6HG9JlinIBY8OumkpSlddrOX 
jGQlvz8SoNUgfnAKg71xmZqLu/KGk7w8EpzUZLwtCnO9cXmaGX8b/jC9C+t/lCgo9YvxrVih UESBAH2EyMPaNFd4cJzix1gsYV/0dz/JtSPopCrTEEk7PgwmXXjmOfMQp3zv/ZGSdPP2x7B0 85bLhwZGduYfdu7o49pWbHsG2KNixYtmxdVf34YcFupQ1LCafRvZ26nWGTATva6eEXCw/J6U qTO3gGMpoMvmZfyl3/tilKF6xSnSozjsSdGHCQTyLv4/OGCzjiTMaY40g39YWK6cYD0I4NTO ppduQOSMCVoJTdYYYK+ZM824AQzFdOYx/CEWdr2pHaA7MOx9GOojxmmZAZZezgK3QiFyu4zl KUtYwAxYsHvnLFR4/wCcMnOE5xwOwpNCMXaEEyd4vnFoSrWzqXhY+5OAIW/NOMEZ3X/viPa/ 4QQj9jPK/wDylCC1iklOeSSpNKKUevCH7b8XfDEQt6aUHfpB/wBjTJQ7VlhIOPMuLUcO/lKU u6M3tETI1Pk6jpgCcSU5xGIwQdwYjJT3p3ciuN0Zfvj7ZxovxGTj7Fjd1iBMpt4c16k4HUpi tkLzJAzNJX3z5SDCS9j2yqUaPYso5zJFli1LypB3p/7u+EYT/XH/AGo+wG/0eNqTI3g5MpTl NQnTFhELQwoIbvDS/wDDHKndWoFP522J+j9iPzBYt/MFfhLlGFQQEqjasdHqn447DhMwEl+9 OfKOyztSSjrhsWKR4SizJ7wvhFdxj7GPc93lH2IG9dNtpLfgUuTeR18zzCy7t4v4S53xwFvz aNQsyViUIFCIISCcXZ1v3uU5/nGIwuA0WsNjqesGdABYDKTkT2k4u7AGKCswbYeBMPEUEUwl i96UMQQGv0/UjamZ2z0xOUBOX1gRdu/uujjfKqauizyUyzNN3R5nNTdO/L/DKMsggNHryqkb 3ZujTdV0qJTjOIL+zByjM4VDUB0Q7DUOxAVF5sj/AOYP6sUaNAsf/hg/7pcYT+gzOCCCOoIg hcEAqEwqCAXFos7akbw8bMsJzQZeLL966OYvoHyTNAMH6VxdWLn/AJQ1R7x0C8BWDAM0HZEE PHXugLrVFnuNvCsYUGwm4uuTCMnw96V8R1PsjOmUdG1C1DVLzLsOEW4EE+d8uESvnIZ0xeAl M5H70xhEdphnHMnramOkDV6xqcilRxeERhN09fCAjmum2dY+P7IT/F8Qkx+Ls3S4TihxcGd+ ZGR4WLG1MtwGFzATncdfeiqff97egGoIIIAggggCCCFwCIIXCIAggggCCCCAIIIIAggggCCC CAIIIIAggggCCFwQCIIIIAgghcAiCFwQBCIXBAIghcEAQiCFwCIIIXAIghWCDBjgEwQosAxl 4wEmiB7wS53R9gEQQr+/90MOqCTkxgSVKZQQaZ2SzC5ymL4QDEEdhaBeNYajJQLTVBfrCwkz vD8YdTs7wpxDTM7kaAsWEWFPPdnAR0EdmwL+jxL+jVuyh+0yZ3QoxtciW8pec2rQpTrsszJn cLFwu+MBwwRJuDI9oC85ezrUZXvHF3SiPgEQR3NbU5OuLo1Aaqy/WZfsw24I1KBRsy8nIUB+ zFAcsLiaR0lUixv6STM5pqUQZiCZpwlxn3w0ZTdQgZ+leijdl7WL53QEPC4nFlJVImyMbOb6 QKQAh46z7+6HTKMqQDgQg6KGI07F7XZw9q+fK6ArcLix+RNSfyDEDDjzsyWXd8YPImpN70AA Si7uuMMuBvcJSnAVyCOt0QKWpYJGsBlKC+0GOaATBFmMol7TJzTlICg5ZeMwnMvMDKIdnbVL w6FNraDPUHCwhD/rARhkEWSoKVXs7WFeccnNKEZlYife7orcA7DsNQ7EBUaHY/8AWFw/5uUZ 5GkWV/we5ne6GOfq/RbNMAIaMhJfrN+JZ8QJicI0w9wXzjoQhIXCIXFgh8sGOGIfgNZT2YtR 
zeh/hIShUTjzgi6sufjFHUUZU5Kw0no3Fli7WLl3xoNN2lsPk+jTLF7ghNSl4BEZc5yM/KGl lpFPKS8n9IZXbxZevwgICoKAOTI2caMk0SpUZIJxeZfh/LhE35t2fpggkY1ogbNM0ReZrjl4 x2OFoVMD2VSSNaI0JkhmF5PZhCy0KlekCDiRrRdWIIhZPZxRYqj5Z65AeNmYQYisnHhUmSlM v7t8dnkSjJpfbDmdascAiwnElHer8fl8IshdpdPAxI/SsrL+s7PIevwiCR1VT3lIa8KV7vm4 u1lywGS7rpcIDhcKGJHSaF+ZEygf8rIMO1DLwhRlN0qvpfb2pG5I1QhBAXnGX5k5+EWJPaRT AGc8GBaFQYIYyy8nTe8YpjxVQDqbZEyDNSrW07NxYdPD5xA7LTKMR0wxtKkk401Uo3VIRcAz 8IYs3ptBUO3bYmNVbKTMQSSxXYp/KO+uK8BVVDo0CzN6XCfI84WTKQPzl/lEXZvUiam3BVtm 0ZSovBmEdsvxuixKVhZuvTFo1jCjHs6gvrCDjN9OLunziaY7NGo5r/SSY0S3FhOMLM+r6Xx0 +dRnTFlbGByWGkl5WYYG7Mv79eXfHMntFpsGLcdSgFnZpfV48yfdOPcY6vNcybOUm2NQbiIm PbQmT3Zyv0u4RUagpJtTU3Tq9q2gRq5XJOdmCi1p7VGfqlJxLklNJCIOQWHGWZfwvnyitvFT 08sodGzg6SC4J1O0erlg15X/AOkeCdUUHTaOvGmnliNR6UmEIRmZz7+PfDTHZoBM+OaN7TbY AN40ASztDi79N7w5xwrK5ZDrRGepAActnQppEHBEHf093XWJpwtXYVKzZgNrl0aIsQBHYet3 u4N90AkyzFnUp39A2gyHJPgEkzjNN6XZnP8AxiHpNkpg5rcU1Q02MpQ1liCtX7RPdNlLSUpc L5xNNdqNMErFxyxA64DCwAIEEuU57vvS5RUnSuUyxjqlBsAw9PK5KC/5u73o1FMTg2lQEkG7 nGYS8zx4XxbvN1UKZQHbNlCDayk4uu96fGUU9PgzA494GmIPhGi1JaQje28pB0aqKAlMKEk3 uyWHjIXjHOOivLPf+PCqbptGBLlpsZh5xl8jLud3GUUOoGcbI4bGNSlVbuIJyYV4BSi2vFct q+1DywJTORCcSbIEEIsBunOXH8o5qoWI68qTbEGyspSdPgEY4iyxqNb79NIscdBsPTad9Hkg NGhRCUYRCwcJX3w+12bvy+n0rkSpb81YkmrISCvzDCg9oV3hDtJvaag3B4JU4HjbkAkuYgMx gDi8YlaftLRtTGhTDZFBrk3ohIEwgmdVlC44vGICVliFeJqb6eH0aJPkBPyyzrz7hfc74jHC zR7RuDEmGclwPV+QZrLCOUr5ynzi0I7cnJM4Z3Q+ai2SRAUghfaS+0nOOjzqNVT1hSw1jaNn Sta3ajlJx1/4v9IsZzXFJL6SMR7YcUeBYGYiTCb/AGdJw7T9Evb9TZrwzgz8laFKIkN+PXjP 4SidtIfgV5XBpIF7W0sTbmhbTBXyLMDOd+Z34hR3Wf2ijsoLWNraS1VTthkj84JgpAJ5Tl8b oDjeLJV7OjPUuVSNRAE5mQLuzcOLD84hZM//ALHn1J6PuuwU5hn2ob/8NP2xNWiWo+WdNr2o 6nthGscQrczOxyLwhw4ZSiul1OAmy9ZQfRuLbF4Vu15nZw8sMBdbSKGQDWWctVMDK2p6bg5h mHcOnOfrpz8YrNolBqaMLSqRr9sTqDhJ8WTMqeIM5yndKeswzu0nHT5zlIE9Lf8ADyQThToQ gIWiM+zD7MpS4X98MVhVSavHRKBT+gE5YhmmK1aoSmd8+UpezLulKXGA4bO6SOrB86NJGMrd xCMCXfh+PCLNZfTZJNaV0yPaZOsNZ2VZl4tZBNB9pLxjhpOsCbOlB/QilPVJS7CM/dEnkEYJ 
7mvGffOUcbXaEsQVo/1V0OnNNfExqc9Nmbgcy6+d8Aqg0aM6wevHhSmCavSqEYU54uJeIV07 omFFkSkDfgTPwDXcklKaeQIMpFF5+ges5z/bFPp+pzmei32lQIyj074IoRpghXZOXrK7viyK LWns5vyQNSIhaYEgClbmTvOAT2A4eUoDmqCiUCOqCqVan5Q4u4V+yLSdluw3dswPeGXjdFrp ugG2m7d6ZanhZtyJcm2ogIgyuFPXQd2kuE533xVjLS1PlAGoUFNtTc5Z0z1J+ozFApywzli9 gN3IPOEvlpz25VYy1P0aiRqmUvKTBJEK7D3T585xpwEtSdm5NWvFTLEyxUBtQukkpRZJYc0Q xi1ndO6UgB1nxhlvsrUrHSskfTaUPk+WZl70rzhBlfr7oe+ccie05yJNc/8Ah5n6PdBAGegD iLLxB1vFMO8LFPta3TinrHVepWOazaRI+lBT2stN1ZYpT1y7vc8I9Ho+h2FtAz0amGNtCnUM ExnJA3ZisyYRiGb2dfxYoySu7N0dK0WfU/Tw1SJUIkLOHDqrxajMnL2Spcp36xzNdpz22t6M kltaxKkKTYky8y+ZxZXDh2b5Svu0jjqCv6hqFOuRuuzmp1hhIhF8i5FdkJfuhjzghZrWF/Qi izx1QJm9CaZThZuIKeVxYhTneZh5zlyvi4KKYR1C4U3ULqsxJ21qCq2lMXcY4mTOkGU7p+xG WulorqvWMqk5kYv0OTs6YOWKchF8LhX/AOF0dnnXqoDgBSBM1BThI2ctuy57Pl4sffinvd84 vgtzW4f/ADJVPs+ky3Qhu9mUdVHpiCbE6+qEH8JFnpERQvdKH2/zirVQ/L6kqBU/OWzhVKvW ZAbgQhre3JqRuKNGMGzuBeA8swN8vjLxjD7HpNjakwPJYG2ZTf5KfwYWXeWcYIExCNFyvndz 1jO/NEyehjJcloU70rTBQcJzJTi7Zk9efLWKajtIqpGiKRk7AKZJEkoVZhOM/Jl9nfPS6EmW kVmcoPOG69aoUgVCEEN2EYOxIHuhl3BujUSeSgQWgAZ6DalXTRak1CExWZMQAjlO7NkLS66W saHQ69ne7dKZQbSJcOn248ktxy9F6oAd82+euGUuzOet8oynzhVP04U9g6PIVFhEEIS09xe/ 2hTlzFPvhXnIq3ygIqEClKQ4JyBJSMlPKQCyxdq4Pf4wGkUGMDrRdSL+klqVU5VaQlJWkXSP 10BK/S4OvD/8oj2gT9SpVcvxzwaqT0wfNvTJgboDFBumaKV3s/nGc0/WdSU9n9DrAFAUHTPE ERch4TJ6Yg38JxwdNuvRaxq2weyuB+0KyxfbGS5znxjy8bxR7k2v1Lm4MYTUdNTJMQGEzyxG TFK86V98ha/inE/udMLiftU7i3AUiM9V2ZTuK13fGV8vhHnfyzqcbH0D0qILfhkDCWGUhiCH gEQuM5eEKWVnVS8tKSpeDcCUzNJywyBve9O7jOPRrNrGx+a+rhkjNNGF4IB1neLe0/y0+EYm 8NS9qMIAvJyDTiZHBD9ycdNQVO91DlAeF+eUTvBLCGRYL++d0cz48LHtw29fvG4cO7pIMvCU ZEbSLJwJvNW/7Tm+kOhJXU9vh4+MaDVlH0291Jt7kgxGowy3cWq8YSvVd+nhHntjqF7ZMXRS 81LmbwsPf3wrykfhmFHdMLcZOIRIsXq5i7U5eM++AuresXslk7nVWcb0g6LeiiyTBaJCud0v e7u6JGmwLybJ6n2klaU5ejhzjxX529uyB8Iy3b1g0YUY1JokpZmMJOLcCOfEV3fHUoqF7Ulk EqXhaICf1IczswGxU2pUslFs53Rq3aguIswgWphk5h7Ur+EvyizJ9gBT6EHpBQDESs8RB3r9 4Wukv8px55UPz2pUFHKXhaaMn1PWer+EJ6beOkNv6VW7X2c7M1ugPRrhsZNmezbMMIBF4gpg 
[base64-encoded MIME attachment omitted]
MyfWTF2bvjygI6CJR0YXtndEra8M61CtVCwkEHEzxmcv36RdLP7KF9SM6x1cl5rEBK9BY8nZ cw4SqcpCFilphkGU4vGM3gi3VZZ7VTDXCqkuh1S5QWI3JOCXPAoKL7R3DQMucWuoLFgIKfNe E1eN6wCNanSuIsIZFFiMlfPAK+eKctNOOsQMmgi+WqWek0Y3028IH7pptqAJokxxiWacfV3S vwz1unffK+I+zejzqwdHMG07C2s7cc6OavLzMkkuV+6DTEKfdfAVOHI1+zuwdTU9L0yvU1Ia lX1ISoPSJiUeIAQFfrB8sUUim6Dq17qgNPAZFBA9rKTnqTPVJ8c9Jznw4a3QFXgjZKksKAyb CvOrbIYsSoCtSelkWZeQLB1Rc94zFPh4axMF/RsU9KLMdZj6PLCjyMtHKagRiid0gilwDh48 flF4xgcEagjscUdHWhuTk/bK30irGhLMAlzNrPDd+Xs9/bhq1yy5HZ1T6MZ1Quq51UJE52Ho /AkDMzjLN1lpdOIGYQRpFaWVurVU9H0ywndPuNTNgFoMvcL3uf4ZS3r58os6iwKSOo6oRrKs Okjp1IgGYcmRXjOOU8JSlfwlAYhHyNfcLEDk1eVXR5NT5q1nZZOqQJye6agN14gzu7Pdf96M fLHjLCP3oD7BBBAEIgggCCCCAIIIIAhMEKgEwQQQBBBBAEEEEARzmesjojn+0gFwqEwqAXBB CogKL3zIkXwfZB7oZQwnBk9cP+rHKZvwEpBDsEQPhfuQkwnq4XDJkBEQQoyExsCHC4bhUB6B sTGjOoc9GcjGuN2mQgpgmZctOc4sVQMlKqVm0qUDea4C3Sy8zs6azFGBUXSr9UmfsCnY0qcP XHmHTAD4acb4YqRhWMKgJKwefmBxFnBxXChGPQzo1M51DtyM5Ml6ILMKHk4rgHCCL9t8TD5g 6UQ4xlBBliCSSWKUtJ8NJco82t9KvC+m3GocZoUrf+sEK++6+6USLXZ1VTkztzqSMAekhBw5 xwpYQT5zF3QG8bNTZzh6eBEsdQk4DAl4Z7PLwnPdBd4xXy9mqfaiXVA0JabSmbuYovmHTTXh fGO1ZRjlTZhBOMa7asWHICLew8fG6OpvoBYpptTUL25AZUSXdyzgiEMU/wAP7IsbI+IKJ8l0 aNGmRFNog4MzdBLjxkGe8YMXwuibcPJJqcG4DaNEHqcOSEy8y6fIV3+keXE7UsXmCGgRqlRW Zgz8M7v9Itayy5+JfG5tTYDwLiZG7bqAsv5xAs30gFiMbOzoAbIUtLOmIxMmMzMuXj4xerG3 VB5DtxOBvEanMkMwxadIBaeXhf7UY1WFADphn6VAv2wrPyDOry97w74douzc6p2c17UuWwog nSTkhw5hhxk/u8pQrG8OjrRgFio5G5NAXAwWIS0N3Ugv9UHvGL4RK+UNNpsIFjkl9I+qIiTp Zl05aiH3f1o8wvFDPaCqPJtM2mrDcWEkQQ3SF3znPldzi9GWCOuWHY3tOaq6vaRYcBBeLlIX Gc4aDTOkkY6kYEe2NSYaMwahTkqpZBMp6S3r7sc+ctPjHNZXWFMNrpVdMDGlISmHnLTHHOkA swQuEr/anP4xnnmNTdHnuQ6t2NlSn5Rx56XBM4zuKB7WKeko4y7GVI6oPZxvBRSdOimqLzA3 m6Sv7PD9sdA3Oj6to9GnRnL1LQU0J0k055hgrz1HulgD2phn+2Hakq2kjnymV6l4alQy1Oam TYpT2aUw4ZTwy0Bh+V3fGBU5Ysc5UmjeFbxhdFiY1WSkAG+RYAe8Px+EVt4s9UttD0y8EqSl y2oFckuQWHcDi4b2s5/lHOPTjhU9PeWgwE1U2nqujsrPzpTLJHIV+op6cP8AxHmi3R1an61B 2cmc7PSiCWDP/XDDK6c5eEXRRYOjTWiJaGHUgxLeixK1OWluBmcpB/1jJ3hqcmRZsDwj2NUH 
7Pjp8YsetrM1lPHWXkDUkgIZUrLMB5ZhcsoU7+ffMU5cLtYjq4rykljwwDJWNQShLU6ozexn BLBK7h2SQhDdpzjzeW1Vs5NaMklM6qm04s1QnJ1yssvtGYe6XfOJ2qLMR03ZfTdQqRmmvVSC ls7YUTuBALhKYuN92sQPSRdpFnRz4ecmcm89aYkOBmC6sG8O8MhGaXSu+MUGqLRWsFCV4pYX 5vSuixeSFt2YPrAlhlIcwSnrdPlOf5RgtUUq903lAfm3Y87eLDikPh8I4y2dyOY1T8mbRmok p5SUw4P6wydwQ/H4RY1Gi3hhB9HhYzvy9OBaqqFOr2QWszCwCCId8vGOz6RDl5Tumcz1UB4T uRxI0FPkB+r4Q3TmKfLhfFI81Fpe8PyJcOrvxC03bpXzv7rordNpnJY+JUbCA3pJQZgJydwd /PXldAbh9HQDXQZ9SAtZRgaW14TAICFZrnXT1LnIN87pz5Rd7W7WqTOs+qEFDvxSNbhTkNeW G4YpFzvFgDxkG7dlf+UefkdB2nVPnndDuC4CNSJLmKTvaDxuv8Ocd9D2aL19aO1PVCmGjNbW s1X6MKQ8QpS0li4Xd8aDsZ3tB/6XKkp5Y6gC+OD0UqTJvbFhmCcxfsjTS6/oVTU9l1WutQ5q pgbtiUIBl35Zky8Ij5znwnfdGPp7PTv/AE6CtTGcDa+kwklpxaS2fs398xTHwlLlHOssrtCR t6NedT24uMJKCHOljDM3sY/d+cZj05SlVMNR2z4Gd1SjbWunlgjzUl+AjGMF2+PtDwyxRw2f 2x2WdKVXUi87olUuNCSAJ4ZzMUJiypAB+3WPOKyze0VnqRppg5q2Va8YskslVoLBdikKctOf OBnsxr94eHZqR08AahpUlpVOcokXLNH2QymLjOcB6as0cmFBYuA5f1TKhp5SixbuzmYh+5O+ eYZ3XC4RD1va7Rix0ZF7I8Iis5eQoPDkzmMkoJWGeK/dld7OG6MHMsrtU6HWLDmE0KVGYf1A lGpmRoYIBXGcpfCIyqLPazpWmyn5+ak6VOdlBywqJDNDMzsymGXONENbtNtLpVutTo+pqYOD UQ2ck+andnl9Z7uP7ScolrL7XWE5nehvbqlYF6yoxu+EwvHLKFKUsMp3drlfFBs4sXcl9ZrG itUC9vAnp4TyBImn1qiekgl4tZS14y4w9apYU9sihiOpJnVHpXAhPnlKzN9MpNndhxcNOcZj TawtgoB7YH1lA/qyjXpKfmLglzvTYpyyk4dOcu1pOKDaB5sVlB0tSrPaEApE1mADsiZDOecO frlZuK6WLujgY7MaPHTb6S6r3AVSMqBQqcTE/wBUTHAnhLIlP7QQp8pRS6psxrakqcLe6nQI kZIsoIigqJCPLmPUMhAlwgtbfpGVVTz8x0QwslQ+UyhhLP2pzycvFId2AN10uEpRD2F1OyU2 oqtBUJw0rfUTKa2iPLDimTilOWku/WKzZ3TY6wtAYqSJOyOkj8Jh36ssO8Ofxu4Rqlm9kVK1 OorJy2Z4WJUNR9Dt6Yk7XCDtGjnIPdr3RA0Gxi2OjKZs3plqX1IBMJlSmkKEwk4pmKJznuYc N90RdWWwUe8U+U1NtTuSBahXohluxafrXHBfmGClyAG/dxCv8Izq0Sw2sGe0BYz08gz2jrjk Ss87UJJcgzFjw3zldfdKc5axcqrsIoZnpNcsWu7u39CjQ7a8esLVyNw5mUX85Blzv5RaE1UF c2U1DUFGkv1W9MJ6d2paadscyk6pSbOWUEd0vjOeEM/GFUfabQbU61IestFOWKHBakWnvAm8 wsKnJnfkllhneEF24HX44oiG+w6zp+UMDq1KXcLasa1zh0YYZcqUllCCEkWuspCv5B+cNI7A mUdsa5qNk5FU2108W5qyZGSmMlUO+5PMznpKYr5e7Gi0DUlorQ5WO2hNqZ3MTONTVIJyQN+X 
fPZxGFzninddK/DfOF1BWzD5hHimPL9wql9fApQFolJOjdgFKY7hXaBu/wAISnsupjzMIa2Q Uq+v61dth/167Yiy5ikHFKWorrsWkvYFFSUUGgHYfRtVU8cqXP8AUD5JsEWZoDGLFuyl+IMt YC81faWw09VNDP1BuSWpFrPTxbArCeSIsrAG68y+d075yxd8aD576AHXFVqW2pykYHQhCUnc djHd1GLFfK6Qp3yHdLu7oqx/0cmFtqii2dzcV67Ma3FW8bOLDtBpGTcEqV192I27hwDGf/SI s9Z7PTKU6KJUI+mkx4lCQ8zMyzC5y1lOcr9ZCjO8WiobWqV86doFfoDDVg3BlkyMqbLmHacQ Q5h85+wEMw89Zx57LJyU4Qe6GHVA8lOIfux6ca/o5MKln2bJcD3XycKculhHYSNrHiuKkC66 7Tx0iB5egjQS7HLShqErb0On21U4ibyeuuzMsOMw3WXYlLnP5RozrYQQbb21UtT146bTtqRW 8KxHab45h3Bd4sIbroWDzxHyPTFB2BUq9uFRbT0kqCXVy1nThCdgkkTEynPMn7wuHdGXN9jl cOTWF4RJk/RqgQjSOsuVCSBHgEdli7uMXYM5j5G6252Po2F8R09Q9HuoNqcEyIp8PXZhQhjl Kc75cu7eu7AonlFhdEztbd6YB0wNPTtO7eeXnXbecLs3C9mXf4wHmyCN5s4sQJq+1tzQbE4M tJtIC84tSLrRHjBikXKfPXx4S+9FGoCjGqpKWtHO2lQQ4U2WYtTfq8kv2Z85zFddEWDPIIO2 WEfvBxQQCYIIIAggggCCCCAI5/tI6I5/tIBcKhMKiA7ErTZIDnQrH2Iios1n4P04VHmovXQ6 A77EEK6Bbf5MCJgsEOmerj4+sjoYxC4cwQrBH1XO5oI6cELwQWrqj1kJjpWesjmjRAhcEEBr Vg782tpbiSs2fGdulhPFLL/rXxoLhWdH5YdpcmpUqLDgLCSHqi5+9GD0HR46teNj2kCNOWXM ZhmHHp3SlE1VlnqlqRkKWclUuTmdrETdP9kXYNZ8p6SHSZqM57RZQcQcOG7aL9exzv8AhDrp WdMHI27G9pRDxFejB9mUvhpKMmpOzdY5Fual1ANCBCTP2dRCuvu1h2m7MVjlQ/Taw7o41QcH LCIv7OfPvgNbLr+j0zh/CqdYqwz6zFuEy4SLxz4fKIAus21ZUh41NTtqNiJ+xLL0Ow91+9O+ KjWFlY21OlBT21Oyg46QAl4ZSzPeMu5Bl3zgpuysY29YvqraCNn9WmSXTGK7xgNGWWi0f5P5 LbsSXrBBILMD2cXaFIuWl/jOOrzl0Sga24npgC5VkhAIssOgfjPSUowsujH5yLVLG1hUEN5I sGJXpPwl4z+EJ8hqtA3hWDZ8jOFhCWYZKRgv6vGAvVslYM7xSfQ6NeUuVGKZH+jhuLTglynP nOO6xeuWqnqTNbVLknbDRHyNzzC8eGUvdl3xVrULOvIOl2c45ZnuSozCrCENwCZ90JsroNHV qdxXuQ1BoEu6SkTds4Xff3ShWNbdLXaGOzwAcjfSvXqcnfCGUtCwfGDztUYBOl9Mzx7oSUgQ 7ieUuYp98ZrWlj9SI6kIQU825pSouQg5xnq/enPuCHnOcWtPYhTx1Ppf0qqzTCwiOcfYEO/W RYO7uvgJhRaFQC/CS8P22DJP2ogvDORBYpdkI+/xuisUvVtKoLTHap3KrTVxRyIRIjxJcvOM FyLBLWRQeAb5w/VFjlNpm804npBsSp1pKfPP6w1RKc96cg+PKOFPY+B4tsdqeRjNS0w0hAae eLfM3tZFy5TFOLHfR9pbJTFNriRvA1wzCxEJkQSdd4N0p4uUpX66xBeUlPE2d0eyI34fSrSv LWiEEu+RN09d6enzi00/YmyDWKs4laszHQwgO9dJMSHhfwvF+yK2ZYJVQKkyVOBG1CV9rieF 
Lf2ru+7vgLCXadSp30gDa8UvCgLV0XsQRCJ1FP7ofd/KM3twqpqrOuAuTIA3o9KikkJMUBuG ddrfdylG4o/o/UYvUOPoDgjRNp+FMXnYz1t3vX8JfCMYt4pJto+oGlG2gygLEE1BxeZjkWPF ddi5+MBa6btOp5tsvDR6k5UaoUJDClp4Q8xdgkP3AwxVFpDUsT2apkbqqNNpcrAtOEXuaylL d75ylpKGrE7LgVDRdRVm/A9CJSH9Fl4rhqDQBvmKQe6/SKyssoq1tovykcjkW6iCtOTBF1hZ QuF/K/wiBYbVHhqtOdGltowHWpcZqhWr6kFwvZ1jqoOp0Fm6cVK1sSndm/Ok4FkoDMz0kPYC Kcrru+cZzQdKuVbPHRTV2wphKjMV/ZD8OcXBosErw5nIXrtgasRE1BhagW8WCXfFjWW/6VZI OjgL0Az+rMm4iLD9oLsFlyl3f7nGV2fs51B1ojtFeF7UFvTqBKhJi1EhqLh+zhl7XhEZS9Gb TQdoy87KXK2MuWzHlmdWGXMyXffyiVquzJGnsvssWU2cFTUVTnZZgf10x3fIIQcJ6c4DYHD6 S1nqnKyW1yTJwhNHh2ftGClcH563zjD7D6zQUfWj0/VCctVFKm45IQH1hgsfC+fhKJpw+j3W yOoGxn29AaNcYcDMwzBK8uV8+Ot3jddHK32D1gsrTybApTi/Rm3lqwh6syU5zlLw1nLvgGKH rOkkFi6qg6kQOCzLdukkBZAu0ZwDjndoGXGcabUFt7VWzpSzUjGoRjMfEJ63EGRSdOSUK8V5 nEX7A6RmlH2IVa/N+2DWIm7rFOSSdfjOCR2xS+ekPK/o/V8TS6NfNSg21Rs+YkGKReTI4WGV 5k92+XGcpXxA1ipLe6PR2yYzmvaGdnKUFJHFN1oxHHzCIw0AeHK6Oqzu1GmKkqd9eFJydrbR OxDgHpE7LHeWVIM5y+GGXznGGVZZE/MNUU9TaBYndlT0eJOXhLmVIvD2p6+zLjf3Rotn9iyZ N5WgqFqQP7q1uydCmCoOuThJFhmMzjLjKfPXwjTGhJ1Z9IphWMbs2tXSARiEsAmySw3KJHCn MJghTluylffpKcUi1C1phqSg0bCgJVHupZyc3aTw3bNMn2gz4iEKffExan9H9WjrR6WMLkkQ 0mlLGpU8PQJAKCPBLjMQha3aRIsdGWS1VTdN4KYGygdHJKnbDjPrbmAN01BopX3yB7OLv5R4 tAUnb86+WC54qrGEChnm1kiQB1TXzvzbp8Rznva6Rbf/AFLMKNYjG2tTkfkkkIjFKnjswNRi CCU9TRz01nyjP/pMUqjph0QjbaVb2JvErVJSTkh2PaMsV0scvZndEJYPTza/PFTL3hNtiWm6 fUOYUWKYJKjQy3QiFLWQfhAWlHaFZW20Osp5A1PhQVR5ysxELCOSo0U55MjTBSnPAXfylxlE BbRW1JVysNqFGgcgVIuyQqc4XoyYBcrurlzEK6XhGs2B2RUe62eUQrX02gcOm0ixQ5q1AuuD O+4uRcr+XhLxiit9hQ2d8QuVVPyQdJidEqckzQG3yM9m+/QOl198aIZlZ3UnkfXjTVRIAGjb zJiw+ApYZ/snGn0Pa7TDIXUiM4l1Ronh+6aLEmuGYG+68vlppF1tIoazRhdKUWAoZOpcHYS5 OgayPVKDAiDIrNFylKU+MpRMU/ZXZisrCpBpqebRCSr0LednXySBmL1uR3iFPduv0nBan1J9 IlqeEDw2+Taspvdk520hMUYzBHDkEJevAIABlqGUrpxGeeWjPI9LSalteHZtUKUxzoSpM6sI CJhFcXK++eIQQXyvlLSFeb6j2GzC0yoFLalUmk1KoaUBZx100RQRBCCUtJ3may5T4Q39ISla YR0Oe60Az0wOmG0SIClekOxrwzFOWk+Mu7jP2uERYhIGW00T5YOL2man83pZAegXqT7swsod 
2AJQL8OEEpaSu96Ik22tknUlTPA21eUnXUzKnGsjFiGEvXrThd+sS1UWTMj9bVS9P0um6Dpz yTIenAIbzTMvNFKcpcZiHOWAMWsyyOkk1cV8pR08zjyXJrToiHGfUEljIKEbxFxEKc/h3RVH +C2QWR1/StAU3tJLO5LKs2Y5OWZnTklFmSulfrdLDK7gG+ETr9narHKZo+mALyqiYV/SQV5p cpFZ08V93wxafhiJt4ZEFPW4VXTzImyESMRIyySw6FyGQAYuHK+cRllbOmqS1SlmFfvIly8I Tw+8GWsw/OIGyl/SQTHGU6NeS4LFaFuUJF60IZAmIw/LvMACWm7g0viqVRaFR9W2gUWN7QLQ 0nS5JoRCUanqxC3uwD70gSuv1jVLTLE2R4pNx8m0rExvCd8MJSbMHtFSBukC49YKd3dxirWQ WDyBaR/xg6oFKRnJTdJN3/6w8F4Sb5bumk9J8wwsQ89vmB1cHNSSTsadYpGaUR+pLEK8IflK PQyf6RTBsYMba6jUGNZbYaSHDkhJl2xB1vmIUtI46EsmbfIi0qsnoCU0KpueegkIpakgImIG bKX3ZyBLhFsqyx+mx2LlNtK081BqLYmgCZTiwHbQeaAIzDBT4XynPTX4QWrRn0jECx4aXVyp hQV0eYoAVlGeoSHBkDK++LSXdHQ1/SKo9NXDi8Apt16LVJESfDilM30WYhB5/e5infDVN2Ms lE1gjWV29pHNsyVJOUrS5JOeHdCLWessXDSCkLNSHq3cIKtaqbStqenulUiBtFcmVl33ANnP uv3pxoI5nt6YW3bFnQK3aAvyx7QEYgXCOPkINxguV2PiGV8NVhb309ZuOm9gcCFpyQtIcIm6 RWEM75ixTvFOc+F18T1ndIUy8WYBqNloNgc3KoXJfkplRkw9UXIWGRW793uD2ogXSxomkvoq v9QvBAFlUnFplYRB1kgKEcEOEIpTwinhx4uPGMxBulpFJAptlpWm2p6RtRL4Q7uZh5mZMWXx LLlOftfKUdJdsbOdaRXVSL2pwC31ciLRCLDhMOTlgBIHZnu8uekQFojIzk2KWb1k2o9hUOme jPLD9plinLMn43yjNoDfmP6SDkmtQVVCsTKiqYycBDOThMMFcHAGYhz18f2RmTPW3QlL1qjQ Jh9JVgYICkwz1aZJOc53S7xzxTl4RSoIgJ7H9WEwuCARBC4IBqFQQQCYIVBAEcsdUcsAuHYR C4gKi22bg/Tn/wC3FVi6WXg/SB/9HGcg0ouIKuF+wMZ/vmbgYsEZ3aQdtLwQ2g/rR8mjnW6E VggjpyYMEfSZOaEx04IMEBW3D1kcsSjwDrIi46EPkEfY+QF2snqpHTFQbYs3QCLwYvd/LWNP MtgpgHqRuC5QX2ThF3Av96Uoxug2EmoakSoFJwyihdrL7YvCUaZVllwOixDYUAEJpZ+HeOx5 gOXPjOLHY32o0qBGeSdtvWCx4cu+YhT4wvzqUr0GUmGBaaoD6svL3Aylwvn/AKRBUPZcd0wb 5T5WzpS+uLLM1CZPlP8A8x1UvZimGzvCx7xhzDBhRBxdkMv9+MBNeeOm0yjGSSqNGdcI84wN wC7uQZS1n+yIV0tFp51qApS5L33o9OHdLLDdmTvv0BLly1nyiTqiy5q8myCabbcLkLKCTmHb +vaEPlIMc7HZcjp4w9ZWA07jxCmJELKLEL9k/nAd/nvahpz/AENWR+pLJDjM4XSxTnuh+UVm pLSG1eWQsAStNek4cJIeBBd3MQuM5390ILsuqF7WKlgCW9nRB+rFk7+d4/6xNs9lGwUm9HPY Cj3UwswKDCK+Rc5cJzu8fjAV20S0vywodpYVOae4JTJGnq8OAAp88MNWX1ygpJGsTL9qylXa MI9Z8osVoFmjbR9h6FeTmqXoRhIla0XDe4lhl3SiEsHphqqF4XDckYHHZSJiIIOFgKxcpinA 
WxZbq2nJxJiWFVlCDIBxgjOsMLl9nf3d8I8+TamLK2ZnN23DIkOIXVJyZf8A5Th+tLHOkljT 5NnN6UZybEfl6F6dozwDKUTDHZvRnkulRgTbVnBHiVmB9IWjl7Qb9AADy01ixV3y2Nkck+x7 GtFtCkpQtPM+5O/CWGOfz05NpHlOgJVIW/TaSOJimcuHwldFzqigKVJp9cm6ESkJUZAREZf1 gV3MU9Z6z+MJZ6Go/wA4CpT0IlEBPTUjSCBeqCZ70++fz+UQIRPbwg2wTkpYVpvpe1kJiTrp Y/Zxi7offPpFKXXCMbUbtWIPZFgTlglO+ektRzv74gKgsxO/SNVVCsKZW/dCkISF45nX87uU Wal7H0fmPeHIAAO1ROhPoxwhaIi77g/1py1npO6LDRf0jSU22bMyKM1yFiXqzDN/uwly5XX6 RllqlZ+XLwjUkoOjkTem2VITmZg7p6zmMXOc4021mz2nqJsspRBkAcFRLwnAvP7O04pay/DF K+kg2oGe1DY2pMBKlE2EDCWXwvnxnATVn9tJNK0H5NnMI3EYSzCiBZlwA4tJ3wmoLYxvFFm0 kjZAJVDgQBAevMM6skiXHDL/ADjH46WtNt7ojQA/jR4CP7U7ogalQb3Tdj7x5TslQ+VLhs+z hRZOUDXnMXdFhqz6SCmpGtzTLKVAEa5BNB6zqwyH2hd8dX/pjdSXBzGseNlb88ohuFhljOvu vnPulf3/AJRVqLpKmyfpCNNKo3I1zASI0CvPLukE4Pw/z+cWKlZ/WYKSoOqaYA27YOoCQpxH iFdIkuUu7nOd8WFjta6KpOmWTyeTrFFNnSGgWnC9TKU754Q+8Lviao+m6f8ANJbOpUgEqd0r hMorLL3g9Z1eH4i4y10h3zGks9mbZVrq8em7InXiTi9WLFOXU3drXhpKPbBMMFura62qMtQ1 GjE0trWWoNOMzJnGnnmAwyCEPAIfhHGx/STfmqpH11Gzp3EDgZLZg9jJKB6uX7Z6RZ7dLEPK evELxTGMjpBxTpFKYsMsssvIzBGS924Mud0Vh4sNZ2GoKpUvFQq/J1nRJ1BeQXeYdM+c5S5X Tu8NfCLsQmKAtxZGSmz1jrm7biUiIbCybwYzvv8ALWd4ucVKqLdV781tyNSwgxpTihmGBUTu FIE77pB/Zx4Ra6Ls9p6p/o/uqMCk5K3tNRmKjFpie9WYQWRIWWGV0sQpzFwjzyxpumHhuQY9 l25SEnEL7OQp84zW0i0i2N1q1QxHIG0DF0KdNUSIJmMwR0+d/KUvCCi7XXJkTvCZ+QDqbpZa FarzjsvEaG67s+zpwiTtMsrZKMqSitjGvWIlz4QkU54pTl2paTu97u+7E4Oy5nrG3u1FO4r1 SYlpNTmlhT3faSlu90ro1QjHT6Q9QupZ6ZYwpBIlhZwVZGZ9YmbLDryuCHSWl8cyi2wGzsuw UA2pVTHklIlYjJjmnKK4BD3Xxa2j6ODPJ0Egcn5yPGcrW5Iibrk5RMtMzTWYp/CUU2m6YJqT 6N9AoAdQqdq7klPV8RhCLHK/5aRktBWyWnLLRejCTmcpsSt4jDQllmY5nGj7QhTiEs3rBfRj wsWIExSwC5IJIpTGdg4oXKfhF3+kPZWw2cJms9ncjj9qWnJjCTDMye57U58p+F0cf0V21G62 yBRqf+krJkiw35Y8F2LXTSU4Dvoe29ypWm2pnTU8nPGzlnFNx+0TBkSN7W7z8L9YaqC297qF H0a6sjea3hPTmpk2KdyeRPKU+M5i9rW7wjqsTsWaqzsjIqQ5yWlOSgxUQWEPYCMsIhyFfdPu iyf+nZhUoOiiX1yAvRnNpS9Xu4DNow48BfhK67WAgi7eFIHxpcgUM2p07SmPKbSAnTls8z59 acGffO6Gmu3VS2mG7NQzRsucUqSIs4WBOcX2DMXaFPnONPtDslpupKwommCZqEjQz08vN3RY 
DD8kwosN85+M75zuiho7E6YOtsVUr5VZTQWxdJdqV+IUvfnpMIe1Od8EM6eLRVjrQbxR6xqK PG7Pgn1avDi3VI5ynPCGW6GV0dlQWkdJUP5HpqYRNLeqMIG5mEC31+TdMMp/OV8elvo+MLKT ZvZ+g29OICgtwEYAskNznvDliFfLFdKU79Pu6xm/mXouiZtNfOqxe5sxzkiEiS5PZkKc5izA 63y03d7WC2bVxbAvqSrGJ+Az9AKGsoBIS0gh3qSACkIJU53S3f8A/UWEy35yXujwscqPb1RT ocQaYiEoHKRYyQhCHx4S15TjRbY3UCCdn9QM+A9WsPdMLkW35pqdLmSvmWVhl2QTuDiD84y7 6XACQW0FDJAEIFTGmO3ftL79Z+MaCC85y/Mr54ckYFL7WCaScxR9mkTSDhGG7npgu7sEVKj3 s6mKkaX5tBiUNpkhkZhc8AtLuMX6nyQIPoqVW/JiZbe5VCQ2qzsN8y0gcM7u+Upij0y/0/TC 2VYUesXYGVO2pkpbckQ3zSSuDdhnK8QzJzFwu92Mxgaz6RtT7OaBtptqbBnHDUZ4bzjwqTJY BHAvvlKcgXhlpzgY/pM1I2uipT5Nt6w0RJQBFix33lSnKRo53YpinfO+Lcx2cUxY/V+ByWK1 j0uZ1+wHiJzS0W+EJQ5Tu1HIPa3Z3RdnRaZT9pyhUQlby2hK0t6uoahEnvmolrIBJYZSvmI2 d0pSlPnGiGHtf0hHsmj3GmPJ5vVJV20FCM1lImSicxGhwylh1mKfGAz6Q9Q9HpdgbW9KoCYm 2leXfjV5OpRc/CN1YwNCmzcKklkA09NJn5aJrMLDgMFmCw5opBlhwy7pfOM5McKZo2zCw1NO Wc2qFAnNYEKO8xQIFwhcJXzuEZdLvlhgtRKotpOqF8QL3uiW00pPnCEkPxmbSI2UpCnPFK6X ylzhrz5OQ60XVIOm0QlBzH0EkQBzMtIhvvnLheK+fOcb0xtvlDWYLRTngaptManU5jTLUISZ tgpDDLFh4zmGWl8wz0izlrKPTdAE7Sk6aeiU4C3bZQ5igvPukG+72uzK+CHleh7XXij6PQsL azojzW3O6LcVHrEUz78wQZcL96cRXnIfvNO7WenYlxTwfI9SvUqBDNLwzDPCGXC7djTrW7NK MYXN1fn7pXaqgdnETYU2E9WkkQK7eDdrfPXhKUYGnRrzqfPe9mH0enOCnPPw7gTBdmXzjNaY rCrVj8jZ0AySkLQwp9nbkBOoC7+0bPnMYp8Yrka19E9AjX2yY1hJRuwtKlQTnBvAEy67F8o2 e0SyhntIY6WAme8DqSmONCvyZA2gGdK/Hw0DwDO6cQPHuCEGbhYh+wGPWVmlmNPUkjrR+xnu KhQkcEreScG+RJZAPWT53inzwyujgaLMWWmPo6Go8EnFY+GM6hUI4F3rVAOqDO+XLx/tRdg8 uduCPa1qFANtpZaOm0A06MomqREnqS0sg7MWSnEKZIJS79ON0ooLWyUNZQ8OZO2KukHZAECQ wQZKtmFMWk93T9+sMY8zw2WMA+xHpum2Emm3S1uoanXp3Z/Y0BIilIktxZeYG8OEEtL56Sl3 RIqKbG5WLsrJTHRSBasp4bmtCYjl1khi1Fi175+9dDGPK0EeirdKJpuifoxtSantlccLoTnu el6qcw3zu4zkHw8Io30hECZMXRC/JKIcHJnkarw/s0/8xAyuCCCAIYh+OcuAXDsNQuIDsX6y v1iz8MooMaJZX6tV8o5+o/rF6MHkpxHD9mM8osnpurDV/sB3v8on7RF+zMYiQds7cjqs3QbM x53tnb3yjhj9L3QgsEGCO4yOeOtzueFYIdwQ7ggKjVMQMWirARWI3DUEOw1GgmKXe+gXgheA GLLjRvPB2ujWHfFvmHHnY9+fMPdGb02jTLHxCSpBiKEeHEH3vCN9qSj2dyTrkfopGHCEslMT 
qT3S8ZzgKU12u7Hix09m5gezndqfeKcKLtgOJRnpugShDOv+03C5c/GJSk7MW1BUhXSTltmz k4xEhDfv8pd2kuMTlL0Y2gqCpH5TgN2g8QCCMP1cAgcfjFill2zLwJygI2ooI9M5SIV8xeAZ cJRwPFpfSTwFycmEBoA/ZiUTHMzwnyu8JSjTy6VbVNHlMiNAnINMQTCWEIdA/wA6IXG+IxHQ dJWbuBDksU9IqAl4CwndZiHOW8LBLs3d8BUjLZnX7ZkK2fQJCYsyZYNOEpz7U5d904aLtje+ h1TaBqTjNMvDn63E38cIZf4xeaTp5H5xCKhqFZtwFCI81ESYnuyZ9+CXC6XCIOz9y2/ppSNA UU0EnzCcZ2TVakWgCvhrwlAUNRXjqss/8khgGelzs0xWdfOd/KUuUpRy0XVp1KqDTgJtqAoD gOL4Zku6NsqRGj8256YDaUjN6LEPZiw3Fpvxz5mzjNfo5kplNoBQzgFCGWiMESI7gWO7tfGA 7FlsdTjL2PodKhS6dRrKZgA8C5i44e+PhdsdVbObszUnEq9pTlznkynxkAMtJfONGqSz1krB rbP0kbuhMGJxw4erlPWd37tIlqXZGRAxhZ0CDKS55mST9oo09aYL3YDGllqL8vb99k9HMw7W p3sxWEPAuYuGGH1Fsb8N4IX9AousIkUWSHF1xcuV/GYfCUbG4I2cCNUTgAbs7KaItEWXLZQ3 X8e8Qp/+I4GdMSdWlHvClMVjDSRuEWXoSbP3Zd90Bkyy1eqvSgPDUlPNO9WQpLnIpP8AhB4c ojKbtIqdhRuaZMMBo3YzEpOO1090IeF0aW+UA1ATn1bU+1OxuQHIQcw4p/d7U446fsfbVlnd TPzqSahUYThpCMW+nkGWmLugKfVlqlSVOxtyZ1aijWptVlD3Qi9IOl2ZCn/lEJaZUNSVU6EO VQtWwjLLyiernLTuvnG3tfRSag7FWpGT6Kcdmn4g4ACH7Qh6a73PT4xy/SUA2jsvXKUwzTR+ UYcRx2mZPX1cvc7roIYdTfkSBOaOqgOB6jF1IU2kgh/xnEwjX2YoMKxMzvB7gSZmpgiM0COX Zv8AhFisHo+m6kb3ZY9ptuWliwEk65ZMu/TtCnFwR2P0xujyR/o9AuEvDmX+k8Sg/wBWUu75 wWqJn0h7TjsjOWIhFEmZoQ5ftS7N8+N0oozPWDw1VwbXJOzmvphgjcwQdzMFxnG9F2J0Gvpd iO2YKPMLRnKzszXe1FLWektO+MatkRtqB4K6NpjoBKEw0gv/APVyDO7Munrd4wDFN2kVVTyh 7WNSkopU9CENWZh9oXGYZfuiTdLSK5OpNuQOX8EHYcs44M71ICp6F4vcvidsDo+lakpdzWOq ADi4Z+AITL8tOXdfK7vmLn3RpHkwzvH0Z0oHtMDAxpHA9Idi0JGCc5F4e++7u5cYsZM6W8Wk OTojclK9P6D6okIbixTn70pQpwtjtFJqA9Y8DSCWqCQhOIOJlIGDlfKMpb1IBqG7HvYlBOIP 9aUeq3yiabfvpQVcS/NQFxRNMFL0xZgp4MwEgh5eP/iIGNN9sFfoEZqZA6pys48as4WSHeOH pi4aXS0ldCW9TRhNLmvaOjFqpwb8Oa6iOnlBUi4Tny/8RuvmZoNA8LCQNSJdtjplHZnBATs2 Oc5XT03vewyh1ws9phB9HNdSVPD9Ccj249WvCKUxnTGqBi+7ugD3iix54qC0uqn5waV7wpTm 9DnSUJCQhwF50p35k5d98LLtOrOT45vpbkDb3YRc14sO6Zg7MtOXhG/1hZdZKjqSnUHk3ubf gLCnvkAwsJU9DRa4t7enOUw90U99omz5nt/YyamLSN7CYziViKK6sBp0hTlv63gDp3xqjIas ztarPybdlKOm3Cono448XSIRZZQTTJe7Ld3Za3S7oyJrrOpGen0LCjX5CVpW7eWHDvhVS9q+ 
PS9h4Ke8i6N6K201vJqFaaRk6YZTnO4w3uCGUp693OK3b/QFGI6Equs2ttRdOKiyxBJCo3CS hKPrH3zRd184LZq4P1VVhVlOprQaecHpOYmMUNzWSn2fOkPtH6Sv4yvmL7l0dKDyqZa0Ur7L 7PVVPKkqLKUiM63dM47wt3hp/VjbEQFfnXoZ7ciZ/wDxvlG8h5/OQfvcYkakdThvhdEpqVVr Eq5la+kTClkizWzCMXVjF2jBB7U97hGQ8vNdc1tSqcTCgcujtlMOxE5cp5Zo/W+GLlfEwotR tUWU+hwOSsaBK4kJyTSU++Yp4lF5nGc/C+KpaQ2pmS0SpmRAcM1KhcTSCBGCvHdLvnznKNpp eRPkv9Ho5MMGxNzwcF0Di3S1JgpCBMcu/QesBWawtFtvbXBM91VmoMu9ITmJwyL1uxBEGWl8 5y5xQ1CmqrRawNOHtT0+rg4jMgP2QJdwdAgDL5R7FeKepV+pt6YXVGPZ11Qr1RxiviXIIB41 Rcph7IeAf3xRaIbWKibRK58m2SWzoaECe3rQHaKCwhvnMXPEMUu/TBBDIqPq22lnos0mm0a0 pobc0OZschzTXesw4vC/hHK4VVbMBrDUK/pXYnw9MPPEX9YHL6vKXOUuN0g3SnGh2MSrGVjp 9QKRnqtqSHpWVFd1JZIt5WrM01HeEQZTHfFut/QHL2dxfqWOVI3BOc1ga1AVkpFKxD3Z4ZT3 QYJSv+IeEe8BjVWV5bKz1Q1LKtzUbgIsRLeQajDIu4U+s3JaXzFpO+K68MlpdW1o+jcmR1c3 pGIsC8WXdk3hvLL7g6cJR6gQUw3PCOgB1G2yA5MLW6KiGdeozDDFQTipSmMeohayxXyCKJxv BgtMrUexmqtoqdqF1J2CYSwJSutFPF2AzDO+X3eEZreQabR2keS7sSyM7r0QqxAXl5frMvWe k+N118WnzhW6pqXSvGNUhbQklem7OHGIrWReMXanK++PUFJ9FHPlMrADmsyVTv6SRupw4zDB TEIMpSlvS4Tn+cU23M1tIYrVnGQ9xcypwlqBDxFKJzlIIAlz4cZT0DzjT/QoebW+06uUZiMY HvN2POyAnFyHLrp4jZz96Yp85xZKHtFtvUrHZZSozXNQcYWJefsoRyLndcUHe3Q3S4BDGfVB TDlTbXTpzllfphukrJCWK+YZcN6N6+imBH5pHYBxJpojKnLxFk6YpSBLDj0ncGXHlwjNDK3C 0u0tM3udHr3gaPMMNA4kCTyAbeL1ofC/nHGsqetjqbYH4479B02okkZzgk9WmMulcG+Ws53S 5+7HpK1eg6Pcl7/Uh7IUOpRJ12yJhHSzFoA4QzUXa3BK/DfHVaBQFJAoSiaMTIyCqTDVyQW6 Z68rZTJimMV8+2ZfKc54YtbB1FeW3r6kZTvTT1CpAM5uTEpZZBiac+sMwS0nfd2hRzKLVLV6 eqRZti8Dc8CLwGFmIwSyQ/dlyj0o8dTXAfQNgCXQC5OUgLM10PBgLDdfO+4PIIo4XChqJfq4 enWoW1OuNTp25AFId12wFSKxGF7or5i+8LhAedmeobYx0Gb0Vtp9OqM0QloiQz43zNmEQvnf OUVnpKp/M+jYUbOoDSpjpIe0llznNSpFoEu/hy5cZxuBaVtrCgFVMjph6Z6Wp9O6dGuIXDQU pDmK4YOfZ5ynwFEi3OSNGk+jw1IU40KIyYlQ+u3NJSneL8Qt6NBgBY6zsxeCl40CplWqA4A5 5Prpcw3fGLE6WhWwL3g9hUqVAVWWFQciIS5c8srelKeGWLDKes5RstSI6MQV5TdW1y5KgGnL 1+xNh+I0vac7CWZdoKUuHs3d04jqgAcD6fmPacCXYCzV4uAAlBTa4p92KUozGPp7bLSyW9cm JqErKdLxKRCSh9rjhlwD8oaMtgtFHT5TD096AThEHq+s3ezv8dIpDocSc+OylN9XOXnjIw8M 
uYtIZiBfVFsFoqlYhUjewB6Pv2YksnAC8XamK7jOcJT2u1yS+Gv22N4lpxeV9Vll4ZfdigR8 gLf5xawGjf0w3Ik3yiu6WMMJlebdoEMuQQh5SlDplp1ZjpMVK9JAC1GFyKOwl9aYWHgHF3RS 4ICwqatezqPQ0Yfs/QDednlJsP2l856i48ZxwVI9utSOgnV7U7UtMuDiw4JXS4SkGWkojYIB EEEEAmGS4ehkuAXC4IIgOxqtkaBSczqjiScXWRlUajSbr0DZuep9sRgsv4xhP6CLqDaahrgp qJJxZO5h8eca4jp5eBOEAEBu6GKPYm1dYe8Kd4YuzGwF44+ZPw4LYzHPHdghvBHWhywuH4Is VurAejxTIv8AVAPQ4ocdEYRHyPsfI0DidSNGYE4nth3ovPl5WZLWBSAkKNOLsqcnUzxxTihx o1ohykbXTKnfyk5PZ9i/lp/pAcqOtqzTKEuAGaMwMxEBET2u8XjfD5loteNShUA4ZRC0z1mY TqG/ulEtaANf0pSKwnHjEXIJmX8eEXXyVoxyqBc8VhvDLIKKCHMukHw8Z/7uixk3nFq3Y9jA vAUVixCEWHrBXcsXd8IaUV5UKnIzlKf0X1Icv9s+c5xsrpZ7QyMzJbWoAgHEzOPOEZfl6aSl yldFPeKepVN5LM/Q5WBwDIZ5+ZrdKfOcBSvLyrTngS/b8S0wuZQRZcurBPjhDCWuvKnZ0+xt qwogGKYvUynPGLiLX2vGNDrhqbW2qGdNSSAoCpQdPs6AyJcZ3z5d4tIzC0AbUOrFnQn1f7QX IRvtTl4QCnCs6nWM42c5yHsRnrgh4nfiFxuiMZ3VY1KNpQbpsR0EQLW4V/WC/wCsvZouHVhD cDThK7u8IPLyrcs8AHtQHaN04Xtil3X8peEVSCLFkMrOpDmvoobkPYvaJ/WeAp8Zy8IfMryr TlBRw3gfU+pDhlIAflFUggLP5eVgBYav6eUbad9uK6cw/h93wujnLrCpwIxIAParZzvXBxes /FPnEBHyAmVFTvanZc5yNEBH9UL9gn4SgqCpH5+yAPDqoWFJ/UEi9WX4yD3+MQ0EBMs9SPzI WLoR1NbsztCJ46xYmO1GoWGlzWFqAUHOLEAxWZvmb/bnL7wu+cUSCAmfKeod39NqhYeziF3c I6k9WuXTHTb3+n1QS8onbdQFh8JRXIIC0OFc1CcYLYDuhSjA4BEINyV3O+OBRU9QqWvodS9r RNWnomZuClLlPviGhuAtzpWylY1lICWdtQ5eEQTiS9/diHMfnsag1Yc8KhKDgyAcdmaiBLgH 8PhETBATflJUIE56YD2tCUq3Tw5nrJd04PKSoejwtXTa3o8veCmCZuXxCQQE6ZU9QnGEHHPa 0Q0v1brPU/hjhdF6x1UCWOqxQ4qDO0ceLHPThL4RwQQEsjfnhAnEjQOqpKnM9YWUZdIUJMdX I7FnL1RuZdmYjL8Ug9mXwlylEfBATJlT1IcYQpOfnARqf1AhHer+EJLqSoSVB6wl+cgqlHrj 9onjF84iIID7D5axSAsJIFJoQBFjCHFoEff8Y54ICUWPz2vLwL3twVAw4MsxRO7B3XQnph16 39KreuDIB3XT6wsPAuf3Zd0RsEBKdPPYG/o0D24BbxbokwTpyLFLunKEmPDqdsuc6uBoEe8m CI6dxM+8MRsEBKdPPfSHSXTbh0gG4IVO0TzLpcJXwdPPwM0YH5yCNQHCeZtE7zJeM4i4ICQ6 VdQNZrUB1cAt53rEwTp5ZnxlA4OrkvRlI17ktWJSfVkHHTmAPyiPggO10cl7qoCpclI1RoS5 FF4vZLlwDLwhbe8PDV/BTqtbsXa2YzBiiPggOvbFm96eq6wOAzrp7we6fhD5j29nZQDntyNA T6kIlE7i/wAMRsEBL+UlQ7YFf5Quu1B9WoEqnjD8Jw0neHhNm7M9uRA1HrxFqJymZ8YjYIDt 
6Vdei+iulXDo8XaSZ08oXPWUBjq5HGFDUuq00af1GI71PLc7ojoICRUOrqpdAuqx1WqnAu7L UnGXmF4eGGfK6GlDk5HZ4znJaaNUHApEI6d5wZeyKfOXhHHBAEfIIIAhEEEAQQQQBBBBAEEE EAgyGi4dMhMAuFQmFRAXEvtIxt4SR+qDvBDERDmcOAvlP2ivbUnCmJTJRFF9ndh3zl1UB0Cv OHiK/UB7EUDOHC84Y458Y12E4In+gXX/AKUq/wC3HKYgWA7aNR/25xwZdGnbVGQuO7ZvfAP+ zBgjS7RHbVW6kJ/R4ozcyNZfAfos38MZWZHXFr3eGIRC4RHQGjI7OmF/VAGpNEAvsh5B+Ucc fICS6ecswJw16gRofVixer7roa6VUj3xrFAva3hc++OKCA7emFmXk7eqyvaDmaC+PfCTHVSM zOGpNEP2RCF2fhHJBAdJjkpHiGNSoEIQcIhCMnfd3fCOfOhuCLCs6DOhMEQDOgzoIIAzoM6E wqLBnQjOhcEAjOhedBBAIzoM6FwQCM6DOhcEAjHBjhcLgGccGOHoIBnHBjh6CAZxjgxw/BAM YxwuHNyF7sAxBD+AEKwAgGYRvw/uwrcgObfg346dyDACAYxjgxjh/ACF4AQHNHzejqwAhWAE QOPeg3o6sAIN2A5d6DejswAgwA9+A496Dejs3IVgBAcO9Ct+OzACPuAHvwHBvQb0SOAHvwnA D34Dg3oN6O7AT78LwE/rICM34N+JPJJ/XQjJB78BHwb8SeSD34MkHvwETvQuJDZge/Bkg9+A jYIkMkHvwrZge/ARsESWzA9+DJB78BFQRJ5IPfhJiYEBwwqFYIVggCJQunn47fJZ1Qge9hjj LjbjKnRk0+UjAp9IyZYfyjnkkxjK2+lXIawIF6YaMr2hGRNVwz0wgayAMh2eoMFvCzMfjEU4 OSwBnp+blYt0yI7HnOAMmDofoUXH3AD3Af2Ya3yTMEdEfy7vq+//AAb2ZMPtpiv+3KGzGpq/ kCX/ALco7IMEZ5dUdtGd2uNrUjoNxOJQJwjy572GUeI/Yj3Rbh/8duf9HHhv2I/c/r2veB8b q/dzw3DsfI/SOQ3HyHYbgPkEfY+QCIIXBAIghcEAiCCCATCoIIBMEKhMAQQQRYRC4IIAhEEE AQQQQC4IRBALhEEEAuCEQQC4VDUEA7Em1tpy8zc/tco428naVASfejQ0abZiwgBGEkgSz0G1 KcO3vYyPvFhjsqSyjZi86nn4p29rLFuThwscSidSMEcOSsY0oJGmUCJOBhGHdEEUMRodpCPb E/SWDrS/WfelGeR3RgghMEbhULhqCAcxwY4bggHMcfYaggHMcfYaggHYIRBEBWODHDcEWHsc ENwRAcggLxj7AMQ/dDHQ4IF6AzA5IFSMfs55eCA54IbhcB9xw5H0sk4ZYxgTGm4e0IJc5w1A EEfI+wDkEfFADicOcmNKxdnMLnKG4D7DkNR9gHII06h7Datq1nE6plje3FfZ5/Eyf+EU2tKP fqPUZLwSDCLsnFi3IzyUCDghqPsaDoLjdS6bQPDGjzgZRuTLrg8eEYUXHpelyf0Oj/oQ/ujj 6v0GWvjU5U9uLE21IhfaYdwUVum6bWVI4CA1ZRQ9cIRRuNoB2zUms/o59qIn6PbIT0WavGAH awhjljn4XrerXhN7YI4Sx44sygGMuKs4E7Moxx+FkjfTgkdhYI+x8Tjxw9HzXeplrBOdQbiD +bjwpgj9BawTbTTa4n+ZnHgp0JyVion3TRBj9n+uycHyerREIjqhqP1bgMQiOiCPRzwQ7BFh mCHIIBuCHIIDngh2CAaj5D0fIBqCHYbgG4IXBAIghcIgEwQqCATBCoIBMIh2EwCIIXBAIhcK ggEwQqCAmqP/AIYK/DONB7cZa3nbMoCcD2Y01rUkqU4RgjgnWDEx3biRRtrkNPjyR73Zh0uJ 
otYpy8GPcjhyNVUqxGNNTazaf5POMkjRbUH4A0/Q5I8Qxev+7LujPY+l0jI1BDsEdaDUEOwQ CIIXH2AbhMPQQDcEOQQDcEOQQDcEdEEA1CI6IIC0WbnDQOnSRODaE4sQY0O1ysx1bT+S5Jk4 csOIOEMpYZ/KMYTjOTGYyd2H1i9Ys9cdue7HJXQI+PpfrAQ5BHUN5szqpTTDGe2pkBRpSjtC EG+MktEyfKQ04kGVnbxgfGOVO9ryU+SA6I4wYzjMY+3HLj53hiLPZ/uOgVmDNGT2QxXIeRqT kZmMkeGNxtlpFZnVhT5qZ7bUocPqTCi7phjCYlHB1WL+2OI+MIKLA1ErTZOc4f3oj4eRnbMo zgezG42dve3JMzhBtgwgM7WGICrHs54Z1RK/ewh3YrflJ1e//ZiHdHIazc9iPnUQfytEw5BH 2PqIPpwdZg96PVbOjH0ej/oS/wB0eY6fJ2l4Rk+8dKPXKcnAWEHuhlHwvy3UY3X08amWiMLq 8U+JAgwYxRN2bs42GnwoFPrcUT+CH4/P+fXZY6vHbbEQ6ExMQ2ZGEjlU8sezKMHsRLRzuibt Rxt6n7EcfNkfSjSKgGcnNB7wY8N2mMhzVWC4kYNwRkxBj3WXFPrSz1hqovA5A3/1kdf4nr/E c/UR5HhbBDGCPVBn0cqe9h1VBjjM+je1ew9qo/X6fmulcOCt5hwQYI3CrLMaGpvFtlVGiUfq A8Yq9P2dOtTmfoFAbsX8pO0B+fOOry40Y2b4IMEehk9gjaAsO2PZppv83pKDzDsmX/CSr+1G G7QrwVvO2CE4I9F+ZBhB/HFH9qOUyxangfxlUL+tDdoTBW8+QiN4UWRU8D7ZQL+tFWqSlaVQ bhOaeo93Mjo8+gxsuhEaez2egWb6zGUD3YsHm0Yf53+1Dz6DAxCPkbl5t6e9w3+1B5vab/Um /wBqPNzoMDDYI3Lze03+pH/ahHkBT38mH/aiNzoMDDIMEbt5AU9/Jv70HkHT38g/vQ3OgwMJ gjcfIangfxP+9DXkYw/yONNzoMbEoI3fyJp7+QQnyMp7+QQ3OgxsKgjcvI+nsv6gCOFYz0eT 64CUP9aHn0GNjEEaU6eQYE5uTlZuGeXh74orfsxKwW0gxAyd34x1RyM0fCY54THWh1QqOOCA 6oVHHCoDqjuRrzkfqRxDxcKLX08mTm9MINqHi3fhHPIH09YHA7YAChpwrB1OLwEjAR+GJvpu if8AohsHTdDf9EHHF/w6FAj5E9Wi9hUlldCI8jjnYoqmOO6Nzu2CODHBjjQd8EcGOFY4Dugj hxwuA74I0xvXWaZYSRkldmWIQr+MTSfzbqexsW9HDr1fZpjYzH2N7Lpij1PqUaU38Iof8jKb 7Y2oEc+5ULwPPsEegvIylf8Ao4IT5E0x/wBKKhudBgef4XG/+Q1K736KBHzyGpX/AKUD+1Dc 6DAwGCN88hqVH/8AW/3oT5B0r/03+9Dc6DAweCN28hqY/wCm/wB6DyGpj/pv96G50GBhOCE4 I3XyDpv+Qf8A8kHkNTH8g/vQ3OgwMMgwRvfkBSv8g/vR0+bqkv8Apv8A/JGe7UGB57wQYI9H F2aUl/03/wDkhXmuoz+QD/7kN2gX49bzbBgj0l5q6P8A5AP/ALkK81FH/wAgF/3Ib1AjA814 II9Jeauj/wCQD/7kHmoo/L+oD/7kN6gMFbzbBHpLzUUT/wBNN/7kK81dH/8ATR/9yI3qBpgr eaIXHpkuzGj/APpX96Owug6VJ/8Apyob1CeJWx2xem1LlVCVfk+ioxYxC/wj0jHIjRkpvqxI CgfdiRj8/wBf1md3QR4y4VBDuCPmN3//2Q== --------------090008060702010503060703 Content-Type: image/jpeg; name="stacktrace-02.jpg" Content-Transfer-Encoding: base64 
Content-Disposition: inline; filename="stacktrace-02.jpg" /9j/4AAQSkZJRgABAQEASABIAAD/4SJXRXhpZgAATU0AKgAAAAgABQEaAAUAAAABAAAASgEb AAUAAAABAAAAUgEoAAMAAAABAAIAAAITAAMAAAABAAEAAIdpAAQAAAABAAAAWgAAAKgAAABI AAAAAQAAAEgAAAABAAaQAAAHAAAABDAyMTCRAQAHAAAABAECAwCgAAAHAAAABAAAAACgAQAD AAAAAQABAACgAgAEAAAAAQAAAoCgAwAEAAAAAQAAAeAAAAAAAAYBAwADAAAAAQAGAAABGgAF AAAAAQAAAPYBGwAFAAAAAQAAAP4BKAADAAAAAQACAAACAQAEAAAAAQAAAQYCAgAEAAAAAQAA IUkAAAAAAAAASAAAAAEAAABIAAAAAf/Y/+AAEEpGSUYAAQEAAAEAAQAA/9sAQwAIBgYHBgUI BwcHCQkICgwUDQwLCwwZEhMPFB0aHx4dGhwcICQuJyAiLCMcHCg3KSwwMTQ0NB8nOT04Mjwu MzQy/9sAQwEJCQkMCwwYDQ0YMiEcITIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIy MjIyMjIyMjIyMjIyMjIyMjIy/8AAEQgAkwDEAwEiAAIRAQMRAf/EAB8AAAEFAQEBAQEBAAAA AAAAAAABAgMEBQYHCAkKC//EALUQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQci cRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldY WVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrC w8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5+v/EAB8BAAMBAQEBAQEBAQEA AAAAAAABAgMEBQYHCAkKC//EALURAAIBAgQEAwQHBQQEAAECdwABAgMRBAUhMQYSQVEHYXET IjKBCBRCkaGxwQkjM1LwFWJy0QoWJDThJfEXGBkaJicoKSo1Njc4OTpDREVGR0hJSlNUVVZX WFlaY2RlZmdoaWpzdHV2d3h5eoKDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5 usLDxMXGx8jJytLT1NXW19jZ2uLj5OXm5+jp6vLz9PX29/j5+v/aAAwDAQACEQMRAD8A8amn M77yAPpTBk1bv7aO38oxZ2uueTVaKNpnSNASzMAAO+aFqAmD6Uu0+lakFtItvJA1g7urkq54 xxn8ehq1I+yJozpJByVzxn/PB596tQAwcGlGa04LV4WNtJYmRpiTHlgDx1/lVnyWeEt/ZWFY 4+Z8EHPHv3oUGBh5NG41rLA8xcDTixYAk7uM4znp/tDj6VFb28treSWr2PmuwQFGPQnHf6mh wYGeHpRKRWktwCwiXTlJKg7d33h8oHb2/WpNtzCRK+lsCFzlh2/LsBRyAZX2jFOF2RVy40y7 nnjJgWPzM7ee5JbB98fyqmbCYXPkFQJMFsH0AJ/kKTg0A77caUagw9afLpNzDIFkVVBGQ+eD xmnPo9yuAAjMSQApzkgkEenYmjkl2AZ/aT+9KNUkHQmkvNLnsVVpdpBz93nGDjP61JJo88Lg Oy4YZBHOeQP6ijkkFxh1SU+tN/tKX1NW4dBlmXIniXjo3H+f/wBVQXWmNa3cdu0iNv2jeOgy SP6UckkBEdSlPc0n2+X1NT3GlPbxM5mjYg8Kp5I/z/Kmrpwa3jmFxH84JK55FHKwIPtsh9aQ 3kh9av3GkwwwM6XsUjjGEBGTk/WoILGKW3jla6jRmfDISMgZxn+tHKwKpuXPWnoxJyTUl5aQ 
[base64-encoded binary attachment (JPEG image) omitted]
rqYtVieJ8/xrU5Du2u68B7P/AIVj7pbveeK+Zw3qkqfpU6pKJi1pErgyP9gmKgh50zMQvnVp 4dmOD5wBNwFLvB5hmvoyklWkya09dJ1PXAxIxkFeT3U1cm0+tQNUBSlJ0agXLXoThj2P9U7c /aDMze0EgMZMmoVNh2H8Oteeqsdpz14RzdZpb/aFkDG1qJlb8woCHj02GtLPr1PbRrd0cL7d dzEpM6HgZUUKsgmIB3zFAdRfSnFwWrFjw/hoEqS4Iov2SgqeJsgID4fz44rIlr2vMzoTuZt4 DgmopgOGBKI9QxS61332pboIrSTs0WgcuFOVsBg+HvfdXspOOVf6pby+Yt0Lmh0EETJItGjt oU2O/guMDULLWXbc+4QmJYnNInF8tPWcSCPfHc2Oo/xrJXl2cR33YVFnUmsYg6WZioZER/D8 6RfXPfkVMc5+9kI96KfL0Kpafd+QAIdM/hW2dv7/AKJxaT2SNkEfoMyXmmTFu51NnzbocdJd jfu7+O4461JuLQjovhO7t5Jur/SdKrlxn4zFVAOnyAPCsoh5LiQs3du4U8oqgqOXKqZQ0Zx1 AfDYB/Co4t2Xa2YqwXt16DcTaVGpjAO/iA+Y1O7B1q1yWTw6hHTFVZUrIyDhvlQgGPzMjvqA cgGf8qiv0pCEPPWu/RVQUQXjlCpCiAgXQU/d+W3hVGmvp45gUhmkpRWJIUBIdUAEADwz+H5V CyEvJSTdqg9fqukWhNDYhxAQTL5B5VjKsRsHBvso8OX3aTOi6JJuOWoBrzkcZ9Ks10WTbcjc j2fWjwPIEDUJM7LG5YYHrtjHSsLtEl4GUV+inbs4yqDc4FDHrmiS0ldLOSFGXkpNCQSMIiRR UdRTD1EPX5VVqdKQpq78no1no9ltSrFMCpmDcxznH3Y4U+APDO3T8qNJIvztbgb252tJ8aVT MoJ8FAQMI6hKOOmM+fSsCUjeIy1tpqGSmFofUBiBzQEM5646+dOZBrxTYs2r9yaaBEgl5JiO gEyYm2DYNyj1rbdjKiJUaRHuJZLjtb7Ird8mgxZqM011CCUXKRTFEynXpqAd8h8OaNwz4dRk ua9VJ1kiv/WjhJPUXCiHd1Affw29QHzrJHi9+NrmatXryZCc5QC2yqIqFIYfAfDI1IM7b4lu 5KWRTCUK6bqlSfiq9BMTHEMgBhHqPpU59fv9jFs8Cz5vDeJYLIlKupbCzQ7zI6SBuGjT08Nv L7I1CX1w8bxFl2o5tVIrefB03ZHeoaCc1I5MnOPh1xvgMYGsw+hnEoId8B2cgmyROYztsLvG rSGTG5f1vu61W4t5KyDpmxZyzwyqhipNSC4EAARHYA8qz1xgYr5x8SkGnsGKWbOPZsYQ6Sb1 zgDvFzDk5gDrp67+IgNPOBsLb0xb8oa6WzQGjeQb8l4fTr5o/CjgcZKYfH51nl3EuBjcDuEu R48WkI8/LUTcripyxx4fMBConmqFLpBVQpeolA2AEfP51lOXctu8moRlwI4mMjQDZmsldKRT JIHxyxMAZ046gAAIeOd80lfkVGL8DUn7KKYxwNmLUxdaRQUUMJviIqHxGHvAJfTptWIdoc8v ldqXFM25yaxwoPmbzGudsVJyUzu1uUmYFCICYRIA5DcArSN7rVGC1cF5CBbX8gW4CJqsXjZV oHOSExCKqBggj5Dv1ztW62XH8NLIgfZl3soGVk7fYD7ROBAVA3NWDSABkAOcCfVDJg8QrztN XfJ3C3LGv+xco6pRKCDYEx1Z26eG/T5U+jbGnn98fQxqimvJmQFcSgtkpgAue8bp08+lTGKk 3+ks2gEOIvabUQbIxkjFouipMyhyyqGDcAABHSP7vh5VtVm29YTzh/b14GeQpJhtah2/Y1Fi AdQ2g+o4pCXJlNYlwPrXmy27vmoJus3h3IJJnOJldbfWOr7+nWoZwsR84VfuPeLLHFU5+WIB 
kfLyD0q9aa6p09mp/pEyiqTiGgGrWPI3Uj2ciB0EgA5HIlwfpnqbOQ9OgYq6PJZjbH6VFoSY OUG7RxENk3ahThp1CngQEc7DrAPEuB8q876ykKmPvfebkExTd/Pl50NtXJ5awnPnu8o4iP5U 3TF6T4UdkhzXk2lXLNW5huJF2sdqqkAqtTgIiUTG2KQom1GKA5L91LW2/gWXBy4LgfqMG8Yr OS8amjy9faUlksolLgO8AHN8Qem+1eaCogdbk8o3OABExBKIH9ch1rSbTsu57nseAa/SVowh ZV+qELGOc4XWTzrUD7Pe7mocBkQDNdhX+5i0ec+iTTg5YTftMfIpxs3FPdOsqjlVDGlwUEw3 ENQgGNhH7quT+WiCyUc/m3zRJEtyzCTMXCYIigKiH9H6l7ogOnSYQDGA721eRDM1Wcx2dw1U Zu0HBSnPyx90bWAas+IAPiFW3jRG3bCXw6tW8ZZ3MKRuAbqq5FE5TlKbUT8QDz2q6SjCXdR1 oE1fkLbfHa2bkkotKQJGQRGrkUFiulklcn0qag7plwKIbZ8etSHCK6IRa1YoOTDtlGF1KyTh B6sCfZEDiQQMkImDJgAg/wCdYjZdvPLkuRnARCQc9zqMPkQhQyc4+gFAacxNsozMhcibdygu 3gmizkroC5I5KmOA0Z+14fwrtm5Ss+io9rbr6u22pWJnyNrgi/pA6jVlmkqUC6ex9r1gwPt3 1RDA9DCH3VGT1xvPozFLP76gFrxPPIrMZpkCYnSamTEDmNgAAveEPi0+AeFYZ2B4EanJHhnp WZtJyr9lNy8m2AQNjA/6UUzBZFR2Q0asU7bQVwXs4gJBP8ICHgI1Fy9mjBonHB+yetLORM+i 5K5GrN0E4+jwDlKiZYRRDUUAKJgJ4Y2zTHgvNxsPJXTHSrkjRG4bdcxhHJ/gTVNgS6vIBxjI 1W4u3nxrwiLVds1Yt1JPkWoFXSEgkFUwFAwh5b1MN7KOtxOuCyRW/wBy9u1rFIJwP2Uph2Dw zp/zrH5La9wl4iwUVw9sNF1Ksmv0bM69qoqCAODd9Q6fKD4jZ1B0Eu+NxpGT4iWWtbLeMVmF 3Thg9h3Ui+OkB1JxNJXUdA2B/ZogIYDUJhEtZPw3sVxelt3RPnIsybwUUd8mqZDKbg5MDygM OMDsPTPyqnp6RTTV0AGS5+WQprpDROL0PfN02Tc9yQ8XcVxtZOPJcb6YKsTPJRZmAOQ2OOA3 MJdwwGAHAjSDW8YhnOXYT6fNPalwNGaraVbJGTasjIL6zNQ8g0FAC4yHnWAIolFRJAiJR5yp EwLgNxEwBVsuSxJZpfk/asJGKyIQzgqSh9k/ixgRA2MVvHidPDmDULrva0rpjbjbR91Htn/t S4mEjFRUAXiR25SdzHUxzFHOQD4gzWYqTsb+o9vaiJVAlfpKaRdF8DocsClyPiOQHwqHcWzc KBikcwLtJT2oERoOTAldf8P0Gl31pXSwh3kw/g3TViydCzdKn/sVs40nDqXfIVGdcdFtl4hc Y43lu5i2pQZORezbSWj2DhMwJxoIpcsxR6CBjZ+qIU9bcYbQLckFKAZFioeyXEJInbt1NDN0 ooVQMdREusDfjWOXhbScHwpsW8U1uYtcwveakYuAR5CoJhgfHVmn81w+lf1jR1nQzB28XWjm z1zkpQFMpylFU2wiGkuR39PupXvTglrnuC27qvayGL9wDuHgWRkH702UyOh1mVMIbgJQHVjr nrWYSBCKqOyt/wBgZQ/I19eXnu5q3cbLWZWXxWn7RYHUXZR50eSdbAmEDpEUwPhtqqnLH0Jm PjOAzWVy5mqPSmj0Ix4xWx9HY4jx9JZJbQQa8akkbuHFMiQrAJsk2KUR+ERHIYpQnHS2/pJb cqrHyQFteQXRjyIHMbmx6zYEucqJt1F9W4/DttTGJ4Iw8jEqINiywzP0USmiPzm/o53Bia+Q 
BQ9BDffpviqlwv4TT1w3syh7larRcd7SWjnxw+MqySHOEoBtjIB8XTfbNemVZeKssO5bJzij ZTy5moEWk1YU1oI266cg1HmAdJYFNQENnID+9mlEeLtqlvC4JL+tm7R5c7W4mKqTfKiopJ6T N1A8CiI7CA1Wp6z7HhVrallmE1yZyKMs3hOaPOO45wplADgGcbFHGN/Cr1G8D7P+mFwsE0X8 w3bXM0iEidpAvY0VG4KHUMID3jEMIFxkPhHbNZ1x6UUhP11Rri0RYdkfMZFH2iKSSKBBTOZ2 Yxijrx7sCa9wKXIiHWqL9JreYcK5i14RKSF/cCbMr8HWeSiKCgKCZPrnI+fnkKmy8PIeP4SX pdsqm9flazhoqJUTW5eEgMIAvjHeybSGBDoA0ONFkxVtW2otA2rzGRG7MT3CR4KheYcoCcpi ZNjIgJc+g1prXSskxjQs54px6fGO3b6Ytn6jOLhixiqaoe+/YnSEwZHf485zv6UykOJx2txW 4tbSr5nDxDYGrnWAcx0Tm8w2C9ADfuhnwzSl5cPWzu6LBhrM5yKdwW6lKLqORAeWAmOJzj8i E6VcJDhZaSN4XgeMixlo+Kgo9xHtjLjhYy+PeashsAZHG/3VNMpzrJrjkoRuI6CPGu4OKLFo 77Y9KqaKIfAC3UOTRlQfIpchtWaFJoL+Y/OtzU4b2qtx1+hyLMWrWYtntzbQp/sa/JFXbcQE O4O2R6+FYS3OZVuUR6+P415rvb0UNXK7QrzDlCu1ygFChXaDlCu0KDlJG+KlaR+tQGS+Kl6R T+KlqA9O4/vPEA81C/xppT+F/wB6NP8A1i/xrlfA265NRLRcY68kC1kbpHrWwXR3LdVD90AJ nzx0Gswcf7KJzdc4NtXy+Fl1XKPaoRqsNhr9nmil/wCKAkqvH+Iadw/dfJH8SmyFfTkiK+Xg z7TG80PiSHUFZ4atGmHgHj9KXfUVTz6FrPFC4MIeVRY8KkTo1FoVqkuWtd4Mnb/R2SIdwq19 8n30Q7/UPUNqyGpe13NzIvBC3O2CuYvfKgTVkvrWlv1DeLqsG2Z2STmXTdcFOzp6hzjn7eXm OK79F41DhC6hmjFdMr1PnuDZERAxVfh/APKsVkJ+9G7rkyspJt3ACBwIpsIY6Y+VOGc9fjmJ dps5KUXjgAVHIEwJADxEf58K92cMkt/dR7SLj4FiwbaWjJ+30HHIiYDE+IB3x8wx8wprOWrA XM8ZHmWxnSbEjrSmvnUoOw5yG+Ax1/jWFGlr/eW+3W7ZNuYxNVMqBgHIAYfgxRrgdcQYp83e XA6mmLrGEVVVMD8vnV70cfv9xqRUodoxdWlHMJNKLfGSUOu2MchkjgI7ht8Owbm69MU8T4b2 dDIm7Yh2xVN0YnaxATrFEC5Dpt59ayGDWv8Al3C76CcybpTGldYrgChjyER/hUctK3FFKO41 aXet1DGHtKPPyAm8c+Y1lnHyPS8SxiX75s1XK8UB1CCmOo2UTFwO3iADt0z57BWFcWopnGSD lmwtAsM2ZuQQTfEEcOS48c+I4HcPIag9d5/RsFiOJn2QChEijzB5eTDsAfhSt0RV8NItqvcq Uj2HugiLhfmAUR6bfVHar3s4aJxXzgH2Za27kbuQWUASZ0IGwfYQH7+nTfPlWjXNaFszijGR ko0vNQbI8s7ge+YMmwkb12+6vOdoxtzyUhy7WB12sCiJxRW5eCh1yNcuRG54SQKhcbl8g6V9 +TW5E+vfGoBDxrnD3cYKq9NRpOVEs0ztSoKlYLpmPkcIhqH3eOn4/gNML2OiMPcajdJZBRVo goVfX3FDd3SBfMe6O3exjpWFR9q3/KW2o/ZIyDiLUyYxe1/tcBkR0dTdPvo61mcRht1F+do7 VY6AVI3B9qMUNv7PqHyrTPVli12YfxEJxstyXugjsBVt9NBNYCczluNQgGfIQ33yGPKrXNNW 
qNwXek4b+2DKP2jrR0ERBMAEds5xj7vSvMtxRF1MHkS1mySBnL9MFmSaioqGxnG32Rz4VMJ8 P+I/t5dkJFWb9BEqq6jmQ5eCiOxRN0z+7+IVlWu6YvUN2KpO5ZIdaKwA1cInfEUATR+UQ/e6 jjoPlWX3JYsDG8E7bdxyTN3KNXTBVN4XQBzlMr3w2648/KsIuKOlralF4WYMu1cCAGWIRYTk VAfHIbGzTuPtyae2bIXaz95DxhypuB54hoE2OhfHqXp51yde5WLY/wBLpjG9jSlTt2icyaSS SIoQQ5yjbljq1bjkAN4j06V55U2/HGwCNKGVFbQodZVbu9wVDCI4+/5VKW7ccpbiirmKWRSO cglPzEgUDT99YTrlJcUEZQvLNo5gj6EN6f517AtO2uHTqz4C8+1QRZVO2TI9kUVIAnNpMJjC ljIn1CGPn0rzm34o3kkoqsjJIgK/eOJWBfy22DvVDLRbmUt2YvZU6KpUHZSONQ6DnUOP1S/Z zitIxpHqmTRP0iZIyNyREA0YxpW/KaPiKt0ygYiuAA4ZLnxEc/wDFXv21GWx+lvDyAOWbZg9 iEk3ChDBo1mT+1nbvgHiX7qwm6LYkrYi7ffyJPcz7UXbMQ1COnOMGDHXcPxqBxoyTQoXO4gc ogI/jXZTcxek+FZG0ND3a0kHkavcprjK4cmbrJACrY6e5RE3dKQBNqEoDkPOnMK8tZlwGkZe S7E3j3jmUjkE+SJzLczPJ0iAbgUw9cD8xxXmQqHM7hGyqvXOhMxxx64qSWmpdzbiUSo7eHgW CuSNBSHkoq/a6fFvVRnQxejroJZbPhvw7RaO2Mv7Jn4tcNByqOFGenCvuwHfvY22EceFWuRf wqDmFfzCzdFZRecSZrqockU1VADkmKBgDSONinENumoa8gRbFc8hHLN2rlr2l0kRu+FAxSJn MYNJwNjzxVn4xM7vYcQHVr3hIyU88jT8lqodMxynKIFNqTDHqFdhKPuL7JX3AW9xzZ3DJxKM ikhAdiX5ShXCiS4gOk+oO6ZQAEAH5j0qycIrogPoDZXN9jND29KuXLlF0qAHaEMrzPc5MURH Tnfxz1rzgogo09ydoq1MH1FExIP4D/GlEY12876MU6dgACInTbGUAADrnAeFRn3aKegeJ1xW 7OWPJMG05FrT7lm2WO+5hMrIdq1A0zj4il0CJcmHbcKV/S5uq2puBFnGzTSXeOLhSfNxQOCn KbFakTMGQMOgNYdMBnrXnfsivskst7NX9nnUwVx2YeWJx26469fwpZSHlGZSqDb8iiVRYjcp uyGKBlDblLnzHy61V+5u9Ra+BtyMLR4pxs3KFHsfJXaKCX6nOIJAMPoGcjUrwKXi7XluJMc/ m2RWjm1n8YyXMbQR4sbAJiTzzgfxqjmt64/axYcbble3HSFUrcWxtYkAdzY8vWm5Yl+d86jP ZLsXsekZdw25AioiUgZMIh4aQ8fCsqU0dbnw8v8AtVlwZjoKVkidsTtWYaKtVC5Azk6pDNSi GcCbBR05IOM9aVPxGsBT+vpA5nzy+F44LlYGTEqcYi005AMaTG1CQBxkAEKp9n8HC3Cxt86s 8q3lrhjXEkwbEaa0QKlnAKK/VE2kfQNgrOHEXMN2r9y6hnTcGKySD06hB0pHU+ADD5jvgPHF d8UQ2HjRd9tyvGbhxcsZMtpNvCi0Tk3LdEyZQFJzrEwFHfTpHbfwpha9zQLP9Jy8rnPMIkg5 EJQUHQ5Ai3PKPLL57mEPw3xUXOcJV2XFO1bCipZOTC4GDZ525FDBSJqAOTAHiAAURAR8w6Ut bHCUt1cY3tlwkoc8FHLAR1KOW5U+Sb/hgURADm1bFAByIAI0lDWq1v4Z8QLSacBfoq/nRjpF rb8vGGaHIfS6WcqAokcukdI4ANOTF2zsIBWNxrCIDhW7kXbnE+WTQbtG2oN2+geYYS+Wcd7+ 
NRcgh2aUes/i7I5UQ1iHXSbrVtb2JzrLtGbVd8l9d04DCLJtp7OU4JqKGHw94IBj+NPTLEU5 H/am/f5QA4TMZT7AAcO991bbJcRrWDjdxIuRF8s4irlhRZMFCJDgy2EQDUAgOA92bcS+PSmt 38FCM4eccWee4JV7CSnYXCDpEpQVIBRE6pMB8IaDfhmq/wAOuHPtaEfXJdpJSPimrRm5aotS hzngOV+SQxBNtoKbOfPwCqhalGaM6VaBJcX7FkrynnUi2kV49vNo3LbSqImICz1JuUgJrAO5 CCcoDkuPHNJXLxetK4+Bkzbbnntbnnm5FHqLdgUjYHhXArnUE3URUz+9jAUza8HLFacRJiyZ a65X2kjIGSaERFP3LbkFVBZUeniAb6OnrUcz4Pwv6pfb6j+VGX+jak4DspinZCJVRIVuHiJz YDzx412Maa00cU29rhi5zgrw9tFpzvattme9tKcmCYXW1lEo+OwVfi8TLPJxKVlTO3q8LIWS W23Djs48xNQNACbQYB1B3PHOc701ZcI4EDcMo2bfyjWTughXb4w4KkmkICOgBxsfBQ2HzqDk rLgCcV7Wtj2XcdtR0lIA0WO/wc64iroKKfhj6oj4DvWsP015ZdCt9XJaF6fpIuLxcc5S1Hhk VHJXJBIcxUkCE04AQHcSdNQZ9KzKSIRZR2VAulE5z8kDY2LnugP5VsVhcIUrl45XDbijkzW0 oGWWYqOljhrObUYqCQDtqOYxc7eADT/hjwgtyWtmy3swxlnql0PXqJ10FRISOIkcxCAYMd4T HAB307AIdcVhjSXh2MTuN41W37BTRkTzwmUtosCvHoo5STHSQhli6h0iOgg74HOaibs4twd0 Gghftp6P9gSanYF49UQWIwMmUpRMJh94uBw3Hu7beNY2sQyLp221auzODo6vPSYQzVy4I2uw vbilF23K8zsKqThdcpBwJwSSMcC58MiFXzFaIwaAjxbtIbmjZB99IXB4aAUi4yVUQKd0Vc58 i40jsQSlNpKI6umcUwtPiZZluqO2bOOuD2YhcaNwxRj4OuoqmjyzEXHyMbfUA775ocSuEDhC yYa5LPghZLq9pLIxR3JlVO4qUhBTA2FDjgwagAu3yp5wh4JyTziU+ZXg2ZqQscoswc8pyIj2 rs3OACiXxKBgyAiGNwrmdPIpU1e7aT4UTVpGQcFkpS6T3Bq04bpFMXAp/wARxiiurqthpwyn rQtmKkWytxCwF+dyf3SPZjavd9RMJhHx+6pCzeH/ADuAV23hcJUTu1WKLmCAqg80EwccpY4k D6ucBnfxq6cQOE0C1tZy0tqKTRmSP4dhGuwciYXarsgCpzgHIJl32Nneu6dmS1Dvq/0H7i1H 1me0od3AQ5YYVHAFHWiUwiGOudjCFWG4uMkVPy0+dSFfoRk7EtWCwJiXnJmbmyUweGOnxZz6 Ue1eEpYG72/05Wi5CIcsZArUoqikkZ4hgpSn1YMACYdh8aJZfDr2/wAZyRU3azKCYowCkqmx Ree6cABfdnA/kJ9OrywNXHWKIzQTrimp9O393sY7kPvZJYmKE5hMLVIE+WJzCPU4lHr4ZGsz KnykwSJ0AK3Lh7Z9tyvD5S5WFntZNzLzbtu2bLriB2yBEgMUCY+M33B6UhD8LWcb+jNc15yp F3c+oyavmPdNy2aB19GQN8KhzAB8hvpAC9BqeIhXXua/JidD50b4igf0CtG/R5txhcl8SaUg zSeljoJ0+RbL7JKrFAAKB/3d68UYjN+6Pw7/ACoG0l+I1bpenCta44u2nVrJRrSaXjCrOo5t ggOA54pmWLnYClAMm3Ed6muFdjWoSHYtHiMbKKvZSRarOjIic7siCPdMjnoXr4FzXqpwldao pPKmtHm7WQfrhR8dK9OQdjW6lB25CezI1Ro/tZeQWOcgC5MryznAxRDV3QwHibqO9UCVtJjO 
8PODDKJat4t7O9uI8XAfjBJUAAw59Mj8xrLY+LsZZV0ZCYwE+M2KLrT+2FeppCxLaZX3BpMI Ns3asrPcuiEXwYDrkU0cw476g8c7/wDLTpS27eTtlncZI6N7a5SRKL/lAZrgVhDAF2yYSl2H O29acqnd7sXlCkPrVtb7hcM9cl6PGxvZrVkuuZkntpOCQAJgAMbdB8PurFC9/vV5rlqUWmRR P4qWpJOlazBqlLbLmcYh/wD2C1G1LWj3rmjg/wDOCuV8DaLqJm2VldW6WDZrJ3Tn9ql9qtO4 mH5NkvNJsZwH51hplj/aGvn8HHJrIzcfthpSNIY7wpA+IRwFEeftxpaHHEkgfyOFfRZLbCj7 lyzU/bkznPXHlVRdbLG+dXS4mQmN21rssT4g8wqlODZUEazthKjUWjVoFC1p/A06QSEgC3N/ 2ZT9l8fw+H4f6VmBafQPtr2on7EM6B70J2c2DY/yrS3LGVKj0PMWfb1yx8WtJNlP6O2EEl1T d/AG+E3TOfOpG1bXjY23bgbsIYzftguUF++IikTlbANefZ5a82RkyTzyTRFXcgHXyBgD5U3b 3JciJViI3DJpAv8AtAIuPf8An517N2OetU4vQzeBTieEsczjGYoonKzXUN11Kgpvn1wH/Spq 5oCMuF0o0mEAM1CUItpWERBQ2kenkHyH7689N4riW5t/WkeXGIEBUKmLoAAQDqIE642H51Ct X9wzLxq0byko9dEEAbk54iJB9Ps/OuRnpSlfv76uNxfNmkE+UgbfttdRvIoHSekSyjjBwyYA 8gwPzAKcNeHlkxUfJAow9pGBVLUJyc1VEDFyJevp4j86yc1rcUCzyZDlk+3HAS9o7eHQOoCf OA+VEb2hxH7Y85ZHyKmnUuoL3QCoD6/WrXXq6u1usXn6t7kj26S/ZUZdIzMhxyIJEP09cAHX ep3jcXncNZc/7PPY1Odrz2kdtsb4/wBKyCJt6/HiMo0ZpyCaDBQSPEzOtBNePIfiEd/nXbgt C8IqLZnmzp8lYhVWzY78DnwIhgQJ4VNK9UyW79HNRH25KNli8znMVgBLONeS9P5/CtSuCBti SiYck9FtGyTZtpJrEBUTNzA0lwOM5+X3V5ylrbuGBeRbd2zXbvJEgqNCpKZPgBx9XpvUwtw7 v08wZi/R5LgqBXChnrvAAXOAAR88+FYW/wBlPQ0GmVuVMDERQ7M5cF5qZwArEophsG+PCm7x ZqeHF0RIBKpbqqZH/NDAm0iAJh56h8PT4a86LWVdrdw8ZiTJkmwuFSpOhMCiYdR/e+VVjmHM iBOarydh0ax0/hXslexqjF6ClF2tsPOE8/c7JV00bw4tjikIHOmsJw0jp28c+XQav9wDHOLk dhpLIjIxCC4oqLFAVNKmcDgeoeWdsfDXj0yqh/jWVUDwA5xEA+Vc1G1Z1qZ89Q5rywu4O4vX XFAiT+DbntRtbzuZSIilJJOFk/co6DZTMoPQC5D+NRzxpEOf0dVYyKXiUWTlgl2rsyhcncAs AiOPrd3xwOcdK8pYDfr3vi3Hf5+dDBPs/d4Ve/HKlXMHoL9JaEttnwxtxaEWjnC7VwkkVw3V KKqjcS9TFAdu96VAfovtrVkp6fhrrdMmzd3HmBNR0YpQDpnAjtnrt+VY3oAptQF9PurptJ9j BkPWsblzuot684jObUgbDnD2Q/txdSIjGybIUzpnEfeDkM+JuWP3eQVilombSX6NvEUjxRgk +XkmrpqkYSkOIgYBU0BsIhjw3/Ksr5aO3ugDA5++jG0mMBjFyJenpWkrsZaIjHR6LdT0PP2z wdm5meYrtoJVJvKsVRAD6zHDvaPIpShvsG4b1CfpTPEJiUiG0Ui1kHBV3ChDMFAcKiiJ8hkC B3QwIbDnp1rD9tWvAZ86dQr99CPAew7lSPdgGkq6GxwD+QqITjF3FrfBtR2w4d3RDtH6Fr3c 
6kWrlk4k/wCjCLYg9/Bzh8Pp+VXawXljM+Et7MntzwzuUlk5EjnX3QcPDF9ydINsE2+z1EBz XnGclZWecFcz0ivKLJl0kO5HUJQ8g9NqY6Etvdl22DYK7nGnShi3XiBI+0rLtM8ZeMQnCJMI tqtC6i80ztM4cwwhjuJlxnV0HbuVq0pxGsBxfQybyejnaiVwrgzUUEB5aJ2JUyGKO+gvND4u 7868bdzVr0F1D1HG9DST/hJ9NPQOnlU5mLVeNhXN1XtGIW63LOqx0OVu4NFnM6AmkxsAZT6x tIAIj61oP6P8kTh9wZuSVvBF7GBH3EQ5W/Iyqcx0NPLEOoAYfH0rzrFyUhFGMeHfuowygaTn aKcsTh5DjqFKvJmbepmRfzck8ROIGOm4cGOQ4+Zg8R2pnlXV3Hpo3Zaf4bl/RhC24yXZlk1U mSwsXICCoO01wUX232MUMB8Wc1p9yXXE261Z3NcQPEGqt7IPARflIJ0iizFLJAyYBKQwdS6M Y8K8WdzVnQXPyClnDx475ZHz928TRDSkRwsKhUw/dz0qs44pwehVL6t5K+kG30ghhalg3DcX IHWOiYxlSiVIyw94BMUoiPXcd6p48S2NqcVb2lLeQGcipmPNHJquBwbPKKQDgIhkUwHO31gA OlZF3dONBceWNqFaXeL3Daek+FPFK2Ia07RK+m27T2HFuGb5iqgIrLKm16NAgURx3w6CHrVX 4q35bF0cLXdqsJVc8i1XjwI+OmIGmSkKIHFTOcESyGnI5HbasT2+LSGa7WG500Nvu1b9KX5a DHjRw1uOMuvnxkHCIxsq5TbnAMIAO2nqOsf8KheGHFRK1+K8q5klm7i2pCcNIOHiiRjnAE9X KMmUN8jnAB4ZrG6Fab9GiVup22fXhPSDMQM0eSa66BihgDEMbICAeHyq4R97Rf6vbCi3hVQl LJn+1N0yEESuWp1SqqDnpqAxQDFZ1QrHc7snHplTjLYvtZ86GYlHyb6XCXSKkkdHs3LKbltz 7ZMBjnwPQBLnNVWS4oRFwXk3uJzck1bDh5DtkpPsLfnIg5SPuiRM+cJacCXcQA29Ylqoaq9F OKqy2G7POI3Dyenb4m5WRl4Ve5nJUjdkZCoodgREhCJgYfhE4l74CHlgaaR/FK2G3DP2T/Wv tUtpLWySOKiHZRMdXUDsVPAQDqGB9KxTVXc1G9T9mrRryvC354vDZIzmWUb25Epx8oUpBAQE ohk6W/e6+mcBUrJ39aKtwcMG7dxOLwVkuFHR3bhPLpycyxFsafmQA67hWR0XNRufROLbbV4x srY43y1yx7iSUsqWlFZV00UaJi5OscpgDSH1dJjj0NuAb1JW7xugI+Oh01G8yk4gJB67Zot0 S8h6C6hlCgoIY0YE3QofPNYBmhmuwvUh4VosSb+J+gcyisij7fkpgrpMwE7ySO4mwbwDIjt4 59Kd8I7vTsTiNH3Qu1UdIIJLIKkT+PSqQSZL5iGelVKhUSnkpuxuOrCNiWja2Id57Tik3BYq UfgGpIzlUDOBMmX3YDoDSU2B8c9Rp1avHq2YaUmF/og+Rau5paZZFQOBjkVWbchUFNX/AL8l xnevP2qhqqs/ozxawx4mW+ThGrZLyAfKyPsoIpN0VUvJBMq4rlMbbrrEPPGNutSMpxzIqsSW jbfWTmHEjGv5MyywggoLIukpUw+LBtvHwrFtVczXeYrjopqt+cS7du6ajFZK2JBzEtReKLIL u9SgquRKICmGNJCk0F2xvvmjl4tM/p8NyGt8/YEbbNbrFlzcnKhoEhTKGH4h3/w2rJ81yu1v 6pxabww4qfQmAjo5aB9pOYhy4cRqoODEIBlyaDgoHiGwDtjxqPhuJEu04Z3RZMgdxJFnuQBF 1Fhw0KkbVpKH7w426YDpVDrmTVHMTaJ91LsP1cxFsNmxgdtnh3btwIbCI7ABPEdsZ8NqdcNb 
vVsm4HUoRmD1F4wUYuERNpyQ+OnrtVWoVzdllkS7mrfrolWjcqEPGpsk0AQSbH5o8wqCR+YK Zjh3jCcxh31eghXI/jI/Y8wzaCaJrJOnTqNUIbQDQzgPeZKHdH8KyqhWvOT11TttLNxdmDwr VuaPRGTbxoxZJAVBN7kwYEdI9TYEcGHIhkahbovY81Z9vW0hGpRiVulMDNwgceZ3xycRHzEd 8+o1TtVCo35GLSzcYJw7qNWPHNzFZxIxJyHUMbnojj4h8R8c/hR/1xzvJTZjERpo1AqZWzQc +7FPIgYTfEYRERzkd/Ksvo1OambbRWPGK7Gke9aaGKwPDrGOYSYxzv2gAHQOu3lWZ/3aVpD6 1ZTuyn5UWTpakU6WrMCp6xy5uyNL/wCb/gNQdWThyXVeUd/eEfyqZeBpHFzUFmq+pwCsPNqr YeMT5UGLNiTGk5hMfPjtWXav3C14+C9K5ox9+2GuMf8Aak/7wUrKFw4puj+0CvchpSnfR72+ S1nzxPlrCSr+mbLcv9wP4VSJ3/eClYWFVR9ChRq3SULV34OrIpXklzNwMQQ0gOBH5f8AWqOW nDFu8cvkW7AqhnBzYT0Dgc+fp86UGu8XmHbbZhmbBsq3X9paSIuB72BAcj44DbrWXzEatDSy rBcUxVREBExDaiiHmA1L3FD3jApllZ1ZYQRUBIq3aecKangHp86jYtrK3VPclsbtki575jHM AZ9RHw6hXsu/sPTNgnaLwdsuyMOeYjcyZnIG7qIZzjrttnyqNmoiHt9MZi24KONKHOmdcpVQ DQXWO5B8RHb5ZEawuDty7ZF5KREadwUY8yhXKJHIlJqIG4etLLWJeba207hck0NTt+0CQHeV QSDxEPTA7Vpa7emiW73RHPJVnFpNCrxbtV8Zy6amcAc+gQyfGOvTHrS7H2qdSQbvo1PBSJg0 ZqKFzy0zBubfqHX/AArzjacZOXbMIs4d0sZ0JckVcLmIUpf7w0lcTB1CSQIOJsr1xjdRq5Mp p6d0RrfmdY+lxvM48KWz+IfZmzdZT2imoRTmbqbF29dOBx18agJBmrcvDv2zdkVFtHLWM5UY +bre8Ux8JRDOwiIjtgeo7ViBTm3IQy4/aIGrcfUPEausfwxuV9FtHRHTNMHSIuGzM646zlL1 28Kil3r6Rpko5bQJuGNx3CyM8KzZimpoOBjpnMYNI423/DoIVf5ZeKc3Uk4IcsgMhDAIpLrF ARwpnA77CHlnavKFvxMjPSyUQw3dKn06FjiBSbh18qmbysCXtdiaScum8i1I47MdVqJh5amO ny261nZ/4DanklGNOMSE2u5aItGNsqouSkUAeWffCQY6juGwD91eZS/DqLsBjCYA8gEautm8 O5a6odaXbuGrJmQ6ZdbnVqVMfGjT5gPnUwnwZuQDAi/k41o8UWUSbtsiJlBJ1H90OgZHzpc/ O8OR7WZUKt1x2DcNu2qxnpLsaoPXgMyIoH1HKcc9fwGrB+pO7SSEE0fOWTH2owWfGE2RFEie NhD7WB6V5pW5R9SsmY0K0n9UM4N1W/DtX5XaE2ioqm6BsYvLAghnUX6vzNjrTW0+Fs1cMxdk em5Il9G+0Ao4FEeWqZIB2A3TI4zjrgQ2rmJkoFCtKkOD8ux4flucZhFZYsYSTWacnSBETfv+ J/3eu/Ss6Zt3DxQiTVBZdVT4E0yCY4/dXNuvkIUKmU7VupSQTjyWvMdqOmZUE+zGAdBR7xvk HnXGdsXO67V2e25RTshxI50tze7EAyOfLaq25GSHoVLfR65Bt0lwhb0j7LUxocil3TZHw8/9 KNLWzckN2Q8xb75ik5XKiQ6xMAIm/kfwrmMhD0KuV0WDMM+IjyzYJm6l3LVIqpxKny9sBkd+ gBnqP4VWJJg8i5BxHybYzR41NocIn6pm8hpthtQqbb2nc7t1GNmcG8XVlGZ3rMCE/aIl6qB+ 
7sO/pRris+77eiUpWYgF2bRVcqBFFDBjWboA46bAP4VzCogqFa5B8IvafHOM4f8Ab1UG68WV 84cHKXJcp6h0gA94NWC+AjuI4qnwvD29JySmGMVBgsMQ6Fq6OZwQhCq5HuAYdhNsOweVa7NR U6FWprw7vtzb7ibJbavY2qaqqxhVKAlKkIgoIF6jp0j08qQWsm7kYe35gYQTs7gWSbsFE1im 1qqfAUfsiPr5DUbchXKFXhbhPxFRlmUarbYJqvu08kTOiaMN8c4xh+qUomxqNgM0GfCi/wB3 MPY0sSzTFmVIx3Cr4hEDc39npUHumE3lVbExR6FT7e0LocxVyyDWIOdK2jgnJGzsmbVpx+8O c7B4BmtmR4ExTxu5jY9tI+1ErbbyiUsqrpbLOVMe60/VAdRfHIYHIdK7Th61U89YoVfoexTQ j7tfExouwiyLuWZG6WrnPHSRMlTKAAJ9JjCHexgemQq3yXD2w4rioxto8RLyKz+LbOjxhXQA aOVPkVirDn4ky97GRzTl6s8mJUKsXEiEa23xGuS3WAnMzjZFRs3E/wAWgvQB8/n41NNbOjj8 L7TmXixheXZcIMUdPRm3TOUph/eMYR8dsVEYZKUXTQ016Uu/gGxWhbiTti3/AGJIx8mVCLXc ujiV8gGoDa9YYAR0ZAQql2FwtUbMn0nd0OEu+CGSko630znIooBl+UY58BnuaTGwXVtV7PXR EZsgxQxXo5HhpwtYcabh4fniZGUVSX551VR5aEWy7GVXXrAe+IHMUvewABimcLwns9bgp7cJ EEOc1oO5dOTOuYHnbElMFKCIbAkAY72BzTbo7k8+aaFb+14b2DCl4QsLmj0De30QfycipI4I YOXq5Jih0L3kgA4GDoaoWVs+KZca7BYXBY0bH29NSHIQJFvRVK8KdQqZTGP4aTCXUADnAiGa uvD1pDV3JjWmi16K4QcIYe4uOd2+1W5jWnCT7mMbx5AMIqn7+gpjFwJCEIXVqzuOkKl+BXCK 2JGyrEcyloM3Zpf2mEyu6N71M5FRTRKQBEOgAPQPCsoQjUyeXqLWgjb0Mb9Hhe40WXZ5aEug Yoy4GEe1JHLq74dAx6B4Vn5qiSxaFChUAUWjUWgFChQoBQoUKAUKFCgFChQoBQoVypHaFCuU BaT+tSlJUCxaXpAtK0BqtnC//vkz67ah/KqnVy4SlzeCXomI1MvSLLxecFI8aJac9ys5WOXV 0q78XN55MvkkFURT4grz8H+muZCeLpcVHF+Kpq5i6TF+VQteqPpQvrNx/U5VvImPvqnvv2w9 7I+NTGvlxaJQNsfeoV58VZwDejUWjVoDp1Y7BXSbXUyVVHCesNXh41XE6WboLO3CbduQTKKG ApC+tKKeqk0olwzkkXkdHotFn5TB2hYBBYu+T+mw0e02dvNZrXBow6QgyMVkoXSU4ABwyIj4 eNYDNcO7kiYdSTeO2roG5SiqgkuY50wH+fCq5CsFpiUQYMcguuIFII6gDqHXyr6U7tfNWcXq q1yRbZ1dHYzxiLhzJKKLrayZFMyfXV8/WmMW9gvoayRkXceVkSNO3XUE4cwxtWxMeIDnpjfN YM14ezLm/HtnpKodsZbqH1m0GDGrr08S/jT9vwmuQ8SZ+o6j0ze8EjbWInUBMO8PpVxl3+HG x3hOto8rI1sM4sq4qJA2UUVIUpSY8ALuXr1HpvRoMlnx0wmu5bwLW43LVwZz2QyejUbxyI7d furzhbMC8uF83Ys0DI9oENKzhMwJhkeuae3ha6VtzBoUJRKYfJn5aqbZA2SCH8fD03rPKkuu juL0JaKtkDKTTt2vFoqnIUqyCjkpi40jq6Y1G6bZosHOQjWPi9DuI7LHlWIquZcAUTLnJeWG Q/ketYRbtgSk9FyjxuUElYxEyotjpG5qmkMiAB/PWpG3eFj+TiWck9kmkao7Ny0GooiY+rAD 
3vs+WfWtbc/+LhpwxlGUbxMTknBwTaC7MYhzbBpE/jXot5dlrAV0jKSVvpondiqCaBin5ieB yJgD6w5D8cV5aThnytzK277sHibgW58/BkBwI/KrheHCR5bkS+ftpEsuMesVBymk0Eu4+JPt B8qzsXMJycm2ONu22S9o9mz0SiCyaKjZI6hQKgQojku/1vHx6+FOvp5ay7jtEdNQ4tWj5Qx1 HRtKmkcd5POBMI4H57b1g1g8Nnl1N1n7xb2MzSJzCmUbCZRUc4wUviGdqsf6jHSBlyy1wJom B4m0aFQac4TCYAEBPj4A9egZqoy1cxTNwTsIlwtaOWz6LerpXR7WTYisXXy99hL1Aem+nO/x Vf33FCzJh5a1yKuWLKRVbOiuWx1xPyOaAABTeQbB5VkUxwV9nMZPRd0e7l4xJRRykkiHKTAv gY/XV0HGNs00R4ROVn1nNgnmZk7haqullyJZBICY+H7Qj66am751qN/ieJ1mJxrKEWuuOZyR CalF0TCDdMoKB3CnDAZEC9ACoe2+INhto+6mv0oZtEV379bSAY7YVZPBRD7Q5x138xGsgU4O N/pdbUU2uYPZk8RVRFw4agCvcMAY0f5iHWmti8Kfb0lefb5UWUbaplkzuCNwOZUxBwAeXgI7 VnjiYr/PcQbS/Uj7CLMpLB9HU40kWmQdfagHZQemChgPT92sy/R1kjQnE6Mc9m7Qr2Y6OwgA lExcagH0pSe4YPWfDGAveGMvLtXxDi9NygLyAL0MH1hAQz4eHWqVDtH0nIItIlNVd2rumVIc G6dfT51EvRR2L19c16W1avsaHnH4IvVIAyf9LERENK2QTVEneDoHoON6ptm8YLSj5G51pKYH D24GzpHs7c4FVRTQKmcSh4BkPEQzt0rIuHvDWfu2/IeDmE3cYlK80/bnAaz8tL4hDPXyAR2E aQW4fzUnc1wM7Mj3EjFxDgW5nLsSoCBg6lHVjfYengGa1+ODmLYLs4wWo64biwhnjNJ4aONG mQUaHMqcDK58e4BNIjnOvceg1m/6Qd4t7qvdl7AnlXsEkg11F3KmRdPBRNgfHAf6VmyhRTcG RVLhYqnKMTxA+caasyfDu9jz0hAtreVVfR5UjOUiqFyTmhkgfMc9OvpTerhhQw66tcuDiTZL /iVdrlGbXSjrgt5GNCRKgYOUongBDHxbgGfLzCsoNaMxd8tLHtNc0/HoCVD2k/WK3UXHQG+D 7464+VNrwsK87Oaou7rt5aKQcrCgic5gHWcMbbfx8fCmLO3pl/bas+yZqrMUH6UecxTCHvlO hQDx8K7WdfRUpF6MtHjTYdscOY60ptk5+lcPFHjDLJtinTIcCiUCc0O9gdX1fKsd483azvOQ hUYCSXWYNYRqyckUAxCdoS2EwAPX50i+4Q8Qo9i8duodmiiwIc7jL0mooEDJu71EQ8qbvrPE 3DSyZ1D3by4ZQzQ5zqFBEO8UC/3evXG2B3GmFfhQXmR4owCPH+0r7jTunkXDRaTB0PKEp8gm ZMwlKPUO9nbGakOHvEyxbUa3BCN3Eh7PcToS7F45Zc5TVoAqhQLnYeuk4j6jVa4k8JJJhxvd 2DaHKeEBgV8QyyunlJgUNYqD4eP4hUS04Q3+6mHEYRrF6kSoHKoZ6AJKlWH3ZiH6DkdseA+F dpSUq9KDS0uIcfD/AKPziVMj2uYnJeSTRbdpAp26LvIGUOQAH7h0j865cnGO0Jq27IgIFo4Z u4adjZICOUdLdIyAaBS1eBd86gxisja8Nrzcw91zSUa1EbXcKtpJPne8Lyg94IfaAv8AIVIm 4RX/APR9GZIzjjJKpsTi37T75PthykQKcBAO8ImDuhuGQyFTOmo3O7eJ9t2aETHHUQdg6TlU 3RWC3aeQk6OUxVBxsc2oOhg3AR3qkQ/GW3Urwk3soeZWiVk2pEv6CmPNBIB1gKO2gB1d0QHb 
HSqW+4JXvGTTOM/qce0nVSUcEWHloHSANQH2yHoPQaYMeGlyveJCNjMjx714s0M8I4br5RMk BdQDnwz6+IhXollSmWiYu/rGnGbG9YeA5bOFup6ZwZutkx2xRN4D9oxcFNnYfKtKT4528MPy XLacOq5hkYdyyIkAt00ygUDqEyPxCUoenpRuEnCODkLHth/clsdukZiSctn6qzkURYFTOKZQ KXIahyAGx6DvVFnOEM/bbh5LT7tNK0mR0VlHzcQUMdBRbQBCdCnVx9UBHGQzWXdi0WWb4yW3 c9zQU7cUC9FaBeODsNBNYEbHAvIKbcNRyGLnoHTqNVx1dHDVbiEpcq0ddC2rluQdGMHPF4U2 oxxKAgGkdg6+HSrHcnDi37nh7X+hEMjbUpMrulmwLKHEF4tEue0qAbIkEfABxnfAjWT3hA/R yYTYEmo+aSVQBdN4wNqSMUR6fOlyfxSUv6e+ld9zd0dm7N7VeGcFQzkUy4DBRHxHapJjepiW DB2uozSVVt+YLJxa+oQ21ajpn+YhsNPOHsDGOuGvEm8ZNEXRrfZN27Nv0LzXRhICo+pNhx41 scDwgtU9mwSX0ej/AOn2L7UNLnUNzyPzJgcBwH1C+hR9azsUpX/t2XRB/r5t5KQeSBLVlHKk hJJSiqazkEipLJCJkgASBqMUpzAONs6cD1qqynFhhcV0Mp+5Yt924kaVq5cMHIpnUcFU1FWK HQC4EC6N/HzqWsLg0SKvmDHiVJRZrfdyCrZBE6wk9pkKgJirEEOiXM0B3hLnUFT0lZ9uxNy2 omtw2j3Vzz8IAEiU8CwTWBfCqxzZHl4T3z3yhgQDzq6UpDrV5+nshluNtryri6l7jstwqNyv 03DorV4JBWbJolTRbKnDBhKUQA46NOelQxeLrdCxRhELfUSl/o6e2u2mXEUQYmOI/D9vcAyO cb+dbBYHDvhPNzNwPomLiXsWtdvs5EzxLCRUitCiYjfA75UzpHbbAhWT/RC24b9GWbuJzDsn sq+unsRRWVEHEemU+ATDrk+wiJfED58KmNKaat1Ru68oe5foQieFBRG14ska4TVUz2xMmMDt 8PUfvGptxxJhvblhLM7SWQgLIUVXYsReZVVVUUBXUJ8dCnKT54HNWP8ASKiob2EvOWRFWYtZ LOQaplXitnmoSAIpKjvgB3Hz3LkNqfSHC+EuH9IKNhGSPs63Wdstpp83QKJxVKAFExALuJjH EShgN8ZxVbdKRXHuoqUHxneW1xgkr7jGy6MXJOVnbmC7XpKosoTGow43EDDq9cVI2jxzNb7C GS+jPbFoRR4pEq9sMUodqETG5oY74gJvT860yWsi3Im5eIkqzhrbSOhcjBqkSQSAUEkezJKn KmGBwc5jdPUMVg3HGJaxPG+7oSHaclm1clOkikTupFMmQw48gyNRSVceiPdFOLnW/Vy1shu2 K3ZFfGkZBXOTPXPQpv3ClLtpDqIZGqwU5FfgOA1Z+GMe1meKVpQ8imC7J7NNknCQ9FExUDJR 9Br0rxM4aw1yW1KMSqwEXJhdZ20WsxbAngBIYUmh9IYAxhwGR23DrWWmdVVli8i6aLivRfCb gqlG8TkFZycTVXtoIv2vGGbFVDtb0ccjVkCiUoCA6gE3oFPLOsCHZvuKF0PhTEz36RNIiN5A GTTIh/a/uiGdIBp22xU4OvMpVEzY0n69K6buF1G6BXqRzbsSPBBnFNGcYCy9gR75NmVACuBc rL4M752NwDX3i7j0+Go6P4Z2fwbvWCuy6J9V9GRskpHvEXTUDc4x2hhIsmQmodIKbd7Hw5qt nKmrmrzVzUzf6gIUUyyQGEgm3DYdhr0zaNsGuL9IHh27uuVjLhhpVk8fRnZWPZ01BS1mwYgA GMCAGyPXABTzhiujKwlzXKwXgmstM34nEt3BY/LdymCPdTKURymU+RHqG+KnCK3l7um7vj/0 
/wAwrnkP2gyHyr05aHDZpCcFOIss/STdXPIMXx3IAUCpsOSqYopgA75NufOMbBWXFRbv/wBE RtKuWyIPIq7Aj2axS4MKR0uYYB+0OTdfIADwpWzp5GaUWjGotYgUKFCpAoVyu0AoVyhQCkqP SRaBYtK0kWlaA1Xrg2X/ALVD/wD44/xqi1oHBcP+0i//APj4/Os7npF1vq0Jdy3VuY8Q6GJT ACnchgShisucJsPq/a/KtO4kcRrtNDr2em/SQi8gJ9CfvlA+yJvs7dKyRwXTjVWHD+hvKIl0 B7tLzDaq9VhntR2aannVdr0x9LA97YqZMCeAdKIoOv4qImQ2nNKuG50cai4yGaoI0ak6UoDl qUgVyN5hksc2AIuUwj99RJaVKkdUwEJ1MIFD50Hq5O4YEE3Tg0hBtWrpAA1isAqKdNhDUPr4 U3i5yzQuRuLKXhW6JSqpk3KHfMX9p6BnG+1Y7+qSQIzTU9ptVnpmIvgapoiPcDO2rz26etUy Phn75wVv7KcJ6lSpn1tx7ph8Br6FZ1wPk9Qw9wW42uaf03JEJunPJOZyC5QA2gAAwas758qZ RN02q2Zrcy4IzsaLpxziCcBOoQwDjR4iHy/OsWdcLpJG/I+0iGIsLhoDoVyNsgmXOB2+4evn U2z4LOjrSZV5lAgtnnZW5E2+vX3c5N00h6/nWkZS1Qv11XpC/RFkS35OISNyEiInOplREdXg ToHQO8Pn0p1C3TZ8U8ZOZieh5CfMcRF83wAAAl+sPhvvWBsbOnXL7k+yFkkecKXaxRHQOBxq DzD8qlr+sZpbUp7GYO301JiJSiQjHSTfHQQzmohlT+jraoe87VLdDlw4nodoVNgq3IdNTOoR HbcdzD/n0o/6xLTW5vYZ6NboovgUPz08GULoADCT1HG3TrnNedvolcntD2b9GXnaMCOjlbF0 9cj0DFWnhzwqm7slHaL8gQ6DIo8xVUoGHVjugXw/Ou7ssjREmmGRuLT64iZBktKC4IJ+ujUG 416CdcUrP/piq10tRScrpqppNkhE5AzkdXTy8RHrXl5u0O4lPZxOvaBQATdOuM1q0twTK2i3 3seRfSEgxQKqcDNylRUEQARAo/yO1ZRjlOpJemPE+zDyXa0Z1KPFRookQvIHQiAmAcB6mpcv FSyV+1Joz6bMpHyavNWRNlYATADGLsPUfv8AIQrJOG/C5/cbw/t8jqHaJkUECkTA6yhyAA4K UfD18atJuB8e2Ku5cysk5T1okTaopF5qfM+3t9XA1vHOqOh3f07w+lbbmiQl79jVlnBnbxMj Q/NdG+qkIiGAT8R2HIjTKSuq2CWrYTRheHIkYREUllEWxh5esS5ENsbY6Y3zURc3ChjCW3es gjNrO3kE4RIg3BAuDFU04yPUR73QAHpT1Hgcf6D2/LSkyZvIzEi1TVbJp5Fs3VHx/f26bAGQ yNZSu5CduLiHakrxGsN8e5BWaW4CyrmQ7IJAUUMJBAClxtnSPTGM1HQ/EK3Ir9aCSM+5M3uM 660cQiJg1Kq/W9PLfO3hXb24FOUYtVzZzKTVXSkeych6YoAolpzztXQChp602s/gyKlvvpO6 270XaDlFJs1YKbKEU/tNWN8YHp5V30dRMWjxTs23uBrSAWevHM6jGOWhWZETcvWsTTuboAB5 1jfC980t+8IWUlHLpu0YqlUWM2DKg4xsHn0/1rfZL9HG1o1F4HtSWcGBBZz2nJQTa6EwMRP9 8xv8K81QrZ/LrMmLBHmvXhypJE8zCPj5B6+FZXXYvQqfGm1FuLFu3oc8mybMUHaDlqRLJdBx yTHmYfEOgYDvVWeFfEW2INxcf0keyKrJ/LmkG6ZWvNFQPXcBAwhtnPgFd4gcFpSOkLMgLdQ1 TUhGOXEsVwuAJpmQMGsQP0AA1flWa3dbkrasigwmCogdy2B23OkfWRREehgH7q13OnhGiY+h 
kxcKz26YUYaPjV3p3TVF5IlIsmXVqKAgO+fXxGtTg+PdvsnDGcCFep3K7WQC4nqfwLoo9ATD qJjbd3YAxsNZRE8Mbtm2sA/jYpNwlOnOVocTeBOpj/YL+8bAetK3twzuy0rZ9uyq0SRI5SmR TTcgZRQpjadQB4hmku5TSOIvE5bjXahLO5LWKkUJr2i3cPV+W3BoUogUpzmz70RP06YDrSvB 6/4nglFyNtXWyb3KMi5LIIHilSLppjpAhgMI7AON/wAqhP1YxTjirw0ttzobxs3DovnQqqiP PMYR6Y6CbBQANvHpUMnwimZe/r2jbcOzQiIGXFpz1R2LqPoSTDHU3QPTxqdLcv5qSnGzijCX 7aYx8azkGT48+eUwp+zBISaADP2sY2xt5jVXnbninnAu27EbIOvasXJqPllTlAG5imzkAHxH cPDwGpOB4LXrMN5RZsvGm9nv1mAl1m94qkQFBABx00j8XT1qNLwru4/Dy3LxJ2fs88+QYt0z 5IIGVNpII5+qIh18hqtUr+bjTbn60Ppunbz9NzIwZ4qZUAcn1CUhQOkUREogXll2HrU3YnFK 3TTF+Xe/N3SxbZCIjpBbSs9UQyoUO70DWBe6HXPSqN+oO8TyANW8rGqgD9dksqBT4IZAupQQ DTkQ8h2ztRS8B7qNLLEcyjNKNQYovAfcgxtXNMIEAC+ew7eHjirjSUU9Fiiv0gG/6uJqEmLZ VGWmk35VVmZgSRDtmROcxMYOYoiG5gEcBjVvVmR4s29bvCpBwjKMJOfM1hSJs23MIoZZocom BQA2IUAIHQcd7GKxeJ4Z3RMN73WjTorI2kB+cuTcFhKAjkoeWkpjfhW6xXBy0ZCz2SfsCMbq L2MWSLJHOPaAfmKU3MHx0hq37tKaU6VPSod08b2Etd0TMEiJdJm1WduFyGdACgHcBty8BpAC YDwybfI1Vrm4sSa3ElC9rU/qZw1jgjUeYBTHUS3yKnmYc9fDAeVOzcOVOHrxlJ30glNkcv1W kVFJgI+0wAgaF9u9ytYh5COQxWpocO7AKrccijbdvBcEPFRKD1g6XN7ORkHB/fEyHQR7pQHU OBHpWkqypDGqejMbD4ttoG2YOPlYR1KPLferP41wRzoKc6oiOFfPviA6g3xkPGnFx8ZW8/aw WvJWizPCkFBVJr0Km4IqKiqwfW1K6zBuYeo7b1n3EKNPD8QrgijsEo0Wz4xexJG1EQDGQKUd shvVujbbiv1P2K9cIAo7uy7CoOV9PeSbpqATll/j/GsdyUujf4p+U43wLm+Gt1xVmKpSJSHR dGVfauY2MnywQIGNJClKOwFAN8j41lt5yMI/lE1oCBJb8WkiCKTMFBOIj11nOO5jDnrXpe7+ FNtz0Tc8S2JbFvOm06i3inLNPKhM90jdTp31Bx95wqpcOeF6FrPJWOuQ0HIXp2Fgo2YvlCig 0MqthdMR2TMpysCGBHrkAGtK016PPGVIsism9ht2DueB7MhIRdzMyt3aJzYFNQv7FYo+ZRN8 PjV5jeOx29sw8V7BIq+jYVOEK57VlFRqBSgIiljGoSbZ3xWj+zLDgOKd8RLyw4z6MxDgXszI rpB7sqrUBRbIB9oxzbFLpHxpq+tC1f1BAk2i4o6v0CLLEb9m0PDLCfIuuZv3C53Lk2QxWNrS laNGa3ZxpWuXkJXDbsc8aN5XtzRBVQfct+WCfYgxjBMYHUXTS7HjOxSnG75GzEAYs4kYZgyI 9PlFAxgMcdYhkTGzuPlV/kW/DWx7u4MspVrBqQRLdO9fP+xiftJ1Eu4qYNIiIGUwJc7hnfYK giRn/wDP3DpzPBAXBC3AQwMFGLQG6IlFQe/y8fFnz8BAPCtIXOjsYoq0+Og2yEilD2FEIRzq QLJsWabk5E2iwJFT3+0HdA2AxVBeX523hh9ATs2+DzXttWRAw61XJg0j3cYDuiAVuf6NHDaF 
--------------090008060702010503060703
Content-Type: text/plain; name="kernel-config"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline; filename="kernel-config"

#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.23.12
# Mon Mar 3 10:25:53 2008
#
CONFIG_X86_32=y
CONFIG_GENERIC_TIME=y
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_SEMAPHORE_SLEEPERS=y
CONFIG_X86=y
CONFIG_MMU=y
CONFIG_ZONE_DMA=y
CONFIG_QUICKLIST=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_DMI=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"

#
# General setup
#
CONFIG_EXPERIMENTAL=y
CONFIG_LOCK_KERNEL=y
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
# CONFIG_BSD_PROCESS_ACCT is not set
# CONFIG_TASKSTATS is not set
# CONFIG_USER_NS is not set
# CONFIG_AUDIT is not set
# CONFIG_IKCONFIG is not set
CONFIG_LOG_BUF_SHIFT=15
# CONFIG_CPUSETS is not set
CONFIG_SYSFS_DEPRECATED=y
# CONFIG_RELAY is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_EMBEDDED=y
# CONFIG_UID16 is not set
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_KALLSYMS_EXTRA_PASS is not set
CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
# CONFIG_ELF_CORE is not set
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_ANON_INODES=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_EVENTFD=y
# CONFIG_SHMEM is not set
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_SLAB=y
# CONFIG_SLUB is not set
# CONFIG_SLOB is not set
CONFIG_RT_MUTEXES=y
CONFIG_TINY_SHMEM=y
CONFIG_BASE_SMALL=0
# CONFIG_MODULES is not set
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_LBD=y
# CONFIG_BLK_DEV_IO_TRACE is not set
# CONFIG_LSF is not set
# CONFIG_BLK_DEV_BSG is not set

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
# CONFIG_IOSCHED_AS is not set
CONFIG_IOSCHED_DEADLINE=y
# CONFIG_IOSCHED_CFQ is not set
# CONFIG_DEFAULT_AS is not set
CONFIG_DEFAULT_DEADLINE=y
# CONFIG_DEFAULT_CFQ is not set
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="deadline"

#
# Processor type and features
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SMP=y
CONFIG_X86_PC=y
# CONFIG_X86_ELAN is not set
# CONFIG_X86_VOYAGER is not set
# CONFIG_X86_NUMAQ is not set
# CONFIG_X86_SUMMIT is not set
# CONFIG_X86_BIGSMP is not set
# CONFIG_X86_VISWS is not set
# CONFIG_X86_GENERICARCH is not set
# CONFIG_X86_ES7000 is not set
# CONFIG_PARAVIRT is not set
# CONFIG_M386 is not set
# CONFIG_M486 is not set
# CONFIG_M586 is not set
# CONFIG_M586TSC is not set
# CONFIG_M586MMX is not set
# CONFIG_M686 is not set
# CONFIG_MPENTIUMII is not set
# CONFIG_MPENTIUMIII is not set
# CONFIG_MPENTIUMM is not set
# CONFIG_MCORE2 is not set
CONFIG_MPENTIUM4=y
# CONFIG_MK6 is not set
# CONFIG_MK7 is not set
# CONFIG_MK8 is not set
# CONFIG_MCRUSOE is not set
# CONFIG_MEFFICEON is not set
# CONFIG_MWINCHIPC6 is not set
# CONFIG_MWINCHIP2 is not set
# CONFIG_MWINCHIP3D is not set
# CONFIG_MGEODEGX1 is not set
# CONFIG_MGEODE_LX is not set
# CONFIG_MCYRIXIII is not set
# CONFIG_MVIAC3_2 is not set
# CONFIG_MVIAC7 is not set
# CONFIG_X86_GENERIC is not set
CONFIG_X86_CMPXCHG=y
CONFIG_X86_L1_CACHE_SHIFT=7
CONFIG_X86_XADD=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
# CONFIG_ARCH_HAS_ILOG2_U32 is not set
# CONFIG_ARCH_HAS_ILOG2_U64 is not set
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_X86_WP_WORKS_OK=y
CONFIG_X86_INVLPG=y
CONFIG_X86_BSWAP=y
CONFIG_X86_POPAD_OK=y
CONFIG_X86_GOOD_APIC=y
CONFIG_X86_INTEL_USERCOPY=y
CONFIG_X86_USE_PPRO_CHECKSUM=y
CONFIG_X86_TSC=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=4
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_NR_CPUS=8
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
# CONFIG_PREEMPT_BKL is not set
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
# CONFIG_X86_MCE is not set
# CONFIG_VM86 is not set
# CONFIG_TOSHIBA is not set
# CONFIG_I8K is not set
# CONFIG_X86_REBOOTFIXUPS is not set
# CONFIG_MICROCODE is not set
# CONFIG_X86_MSR is not set
# CONFIG_X86_CPUID is not set

#
# Firmware Drivers
#
# CONFIG_EDD is not set
# CONFIG_DELL_RBU is not set
# CONFIG_DCDBAS is not set
CONFIG_DMIID=y
# CONFIG_NOHIGHMEM is not set
# CONFIG_HIGHMEM4G is not set
CONFIG_HIGHMEM64G=y
# CONFIG_VMSPLIT_3G is not set
# CONFIG_VMSPLIT_3G_OPT is not set
CONFIG_VMSPLIT_2G=y
# CONFIG_VMSPLIT_2G_OPT is not set
# CONFIG_VMSPLIT_1G is not set
CONFIG_PAGE_OFFSET=0x80000000
CONFIG_HIGHMEM=y
CONFIG_X86_PAE=y
CONFIG_ARCH_FLATMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_POPULATES_NODE_MAP=y
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_FLATMEM_MANUAL=y
# CONFIG_DISCONTIGMEM_MANUAL is not set
# CONFIG_SPARSEMEM_MANUAL
is not set CONFIG_FLATMEM=y CONFIG_FLAT_NODE_MEM_MAP=y CONFIG_SPARSEMEM_STATIC=y CONFIG_SPLIT_PTLOCK_CPUS=4 CONFIG_RESOURCES_64BIT=y CONFIG_ZONE_DMA_FLAG=1 CONFIG_BOUNCE=y CONFIG_NR_QUICK=1 CONFIG_VIRT_TO_BUS=y CONFIG_HIGHPTE=y # CONFIG_MATH_EMULATION is not set # CONFIG_MTRR is not set # CONFIG_EFI is not set # CONFIG_IRQBALANCE is not set # CONFIG_SECCOMP is not set # CONFIG_HZ_100 is not set # CONFIG_HZ_250 is not set CONFIG_HZ_300=y # CONFIG_HZ_1000 is not set CONFIG_HZ=300 # CONFIG_KEXEC is not set # CONFIG_CRASH_DUMP is not set CONFIG_PHYSICAL_START=0x100000 # CONFIG_RELOCATABLE is not set CONFIG_PHYSICAL_ALIGN=0x100000 CONFIG_HOTPLUG_CPU=y # CONFIG_COMPAT_VDSO is not set CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y # # Power management options (ACPI, APM) # CONFIG_PM=y # CONFIG_PM_LEGACY is not set # CONFIG_PM_DEBUG is not set CONFIG_SUSPEND_SMP_POSSIBLE=y # CONFIG_SUSPEND is not set CONFIG_HIBERNATION_SMP_POSSIBLE=y CONFIG_ACPI=y # CONFIG_ACPI_PROCFS is not set CONFIG_ACPI_PROC_EVENT=y # CONFIG_ACPI_AC is not set # CONFIG_ACPI_BATTERY is not set CONFIG_ACPI_BUTTON=y CONFIG_ACPI_FAN=y # CONFIG_ACPI_DOCK is not set CONFIG_ACPI_PROCESSOR=y CONFIG_ACPI_HOTPLUG_CPU=y CONFIG_ACPI_THERMAL=y # CONFIG_ACPI_ASUS is not set # CONFIG_ACPI_TOSHIBA is not set # CONFIG_ACPI_CUSTOM_DSDT is not set CONFIG_ACPI_BLACKLIST_YEAR=0 # CONFIG_ACPI_DEBUG is not set CONFIG_ACPI_EC=y CONFIG_ACPI_POWER=y CONFIG_ACPI_SYSTEM=y CONFIG_X86_PM_TIMER=y CONFIG_ACPI_CONTAINER=y # CONFIG_ACPI_SBS is not set # # CPU Frequency scaling # # CONFIG_CPU_FREQ is not set # # Bus options (PCI, PCMCIA, EISA, MCA, ISA) # CONFIG_PCI=y # CONFIG_PCI_GOBIOS is not set # CONFIG_PCI_GOMMCONFIG is not set # CONFIG_PCI_GODIRECT is not set CONFIG_PCI_GOANY=y CONFIG_PCI_BIOS=y CONFIG_PCI_DIRECT=y CONFIG_PCI_MMCONFIG=y CONFIG_PCIEPORTBUS=y CONFIG_PCIEAER=y CONFIG_ARCH_SUPPORTS_MSI=y CONFIG_PCI_MSI=y # CONFIG_PCI_DEBUG is not set CONFIG_HT_IRQ=y CONFIG_ISA_DMA_API=y # CONFIG_ISA is not set # CONFIG_MCA is not set # 
CONFIG_SCx200 is not set # # PCCARD (PCMCIA/CardBus) support # # CONFIG_PCCARD is not set # CONFIG_HOTPLUG_PCI is not set # # Executable file formats # CONFIG_BINFMT_ELF=y # CONFIG_BINFMT_AOUT is not set # CONFIG_BINFMT_MISC is not set # # Networking # CONFIG_NET=y # # Networking options # CONFIG_PACKET=y CONFIG_PACKET_MMAP=y CONFIG_UNIX=y # CONFIG_NET_KEY is not set CONFIG_INET=y # CONFIG_IP_MULTICAST is not set # CONFIG_IP_ADVANCED_ROUTER is not set CONFIG_IP_FIB_HASH=y # CONFIG_IP_PNP is not set # CONFIG_NET_IPIP is not set # CONFIG_NET_IPGRE is not set # CONFIG_ARPD is not set # CONFIG_SYN_COOKIES is not set # CONFIG_INET_AH is not set # CONFIG_INET_ESP is not set # CONFIG_INET_IPCOMP is not set # CONFIG_INET_XFRM_TUNNEL is not set # CONFIG_INET_TUNNEL is not set # CONFIG_INET_XFRM_MODE_TRANSPORT is not set # CONFIG_INET_XFRM_MODE_TUNNEL is not set # CONFIG_INET_XFRM_MODE_BEET is not set CONFIG_INET_DIAG=y CONFIG_INET_TCP_DIAG=y # CONFIG_TCP_CONG_ADVANCED is not set CONFIG_TCP_CONG_CUBIC=y CONFIG_DEFAULT_TCP_CONG="cubic" # CONFIG_TCP_MD5SIG is not set # CONFIG_IPV6 is not set # CONFIG_INET6_XFRM_TUNNEL is not set # CONFIG_INET6_TUNNEL is not set # CONFIG_NETWORK_SECMARK is not set # CONFIG_NETFILTER is not set # CONFIG_IP_DCCP is not set # CONFIG_IP_SCTP is not set # CONFIG_TIPC is not set # CONFIG_ATM is not set # CONFIG_BRIDGE is not set # CONFIG_VLAN_8021Q is not set # CONFIG_DECNET is not set # CONFIG_LLC2 is not set # CONFIG_IPX is not set # CONFIG_ATALK is not set # CONFIG_X25 is not set # CONFIG_LAPB is not set # CONFIG_ECONET is not set # CONFIG_WAN_ROUTER is not set # # QoS and/or fair queueing # # CONFIG_NET_SCHED is not set # # Network testing # # CONFIG_NET_PKTGEN is not set # CONFIG_HAMRADIO is not set # CONFIG_IRDA is not set # CONFIG_BT is not set # CONFIG_AF_RXRPC is not set # # Wireless # # CONFIG_CFG80211 is not set # CONFIG_WIRELESS_EXT is not set # CONFIG_MAC80211 is not set # CONFIG_IEEE80211 is not set # CONFIG_RFKILL is not set # 
CONFIG_NET_9P is not set # # Device Drivers # # # Generic Driver Options # # CONFIG_STANDALONE is not set # CONFIG_PREVENT_FIRMWARE_BUILD is not set # CONFIG_FW_LOADER is not set # CONFIG_DEBUG_DRIVER is not set # CONFIG_DEBUG_DEVRES is not set # CONFIG_SYS_HYPERVISOR is not set # CONFIG_CONNECTOR is not set # CONFIG_MTD is not set # CONFIG_PARPORT is not set CONFIG_PNP=y # CONFIG_PNP_DEBUG is not set # # Protocols # CONFIG_PNPACPI=y CONFIG_BLK_DEV=y # CONFIG_BLK_DEV_FD is not set # CONFIG_BLK_CPQ_DA is not set # CONFIG_BLK_CPQ_CISS_DA is not set # CONFIG_BLK_DEV_DAC960 is not set # CONFIG_BLK_DEV_UMEM is not set # CONFIG_BLK_DEV_COW_COMMON is not set CONFIG_BLK_DEV_LOOP=y # CONFIG_BLK_DEV_CRYPTOLOOP is not set # CONFIG_BLK_DEV_NBD is not set # CONFIG_BLK_DEV_SX8 is not set # CONFIG_BLK_DEV_UB is not set CONFIG_BLK_DEV_RAM=y CONFIG_BLK_DEV_RAM_COUNT=16 CONFIG_BLK_DEV_RAM_SIZE=4096 CONFIG_BLK_DEV_RAM_BLOCKSIZE=1024 # CONFIG_CDROM_PKTCDVD is not set # CONFIG_ATA_OVER_ETH is not set # CONFIG_MISC_DEVICES is not set CONFIG_IDE=y CONFIG_IDE_MAX_HWIFS=4 CONFIG_BLK_DEV_IDE=y # # Please see Documentation/ide.txt for help/info on IDE drives # # CONFIG_BLK_DEV_IDE_SATA is not set # CONFIG_BLK_DEV_HD_IDE is not set CONFIG_BLK_DEV_IDEDISK=y CONFIG_IDEDISK_MULTI_MODE=y # CONFIG_BLK_DEV_IDECD is not set # CONFIG_BLK_DEV_IDETAPE is not set # CONFIG_BLK_DEV_IDEFLOPPY is not set # CONFIG_BLK_DEV_IDESCSI is not set # CONFIG_BLK_DEV_IDEACPI is not set # CONFIG_IDE_TASK_IOCTL is not set CONFIG_IDE_PROC_FS=y # # IDE chipset support/bugfixes # CONFIG_IDE_GENERIC=y # CONFIG_BLK_DEV_CMD640 is not set # CONFIG_BLK_DEV_IDEPNP is not set CONFIG_BLK_DEV_IDEPCI=y CONFIG_IDEPCI_SHARE_IRQ=y CONFIG_IDEPCI_PCIBUS_ORDER=y # CONFIG_BLK_DEV_OFFBOARD is not set CONFIG_BLK_DEV_GENERIC=y # CONFIG_BLK_DEV_OPTI621 is not set # CONFIG_BLK_DEV_RZ1000 is not set CONFIG_BLK_DEV_IDEDMA_PCI=y # CONFIG_BLK_DEV_IDEDMA_FORCED is not set # CONFIG_IDEDMA_ONLYDISK is not set # CONFIG_BLK_DEV_AEC62XX is not set # 
CONFIG_BLK_DEV_ALI15X3 is not set # CONFIG_BLK_DEV_AMD74XX is not set # CONFIG_BLK_DEV_ATIIXP is not set # CONFIG_BLK_DEV_CMD64X is not set # CONFIG_BLK_DEV_TRIFLEX is not set # CONFIG_BLK_DEV_CY82C693 is not set # CONFIG_BLK_DEV_CS5520 is not set # CONFIG_BLK_DEV_CS5530 is not set # CONFIG_BLK_DEV_CS5535 is not set # CONFIG_BLK_DEV_HPT34X is not set # CONFIG_BLK_DEV_HPT366 is not set # CONFIG_BLK_DEV_JMICRON is not set # CONFIG_BLK_DEV_SC1200 is not set CONFIG_BLK_DEV_PIIX=y # CONFIG_BLK_DEV_IT8213 is not set # CONFIG_BLK_DEV_IT821X is not set # CONFIG_BLK_DEV_NS87415 is not set # CONFIG_BLK_DEV_PDC202XX_OLD is not set # CONFIG_BLK_DEV_PDC202XX_NEW is not set # CONFIG_BLK_DEV_SVWKS is not set # CONFIG_BLK_DEV_SIIMAGE is not set # CONFIG_BLK_DEV_SIS5513 is not set # CONFIG_BLK_DEV_SLC90E66 is not set # CONFIG_BLK_DEV_TRM290 is not set # CONFIG_BLK_DEV_VIA82CXXX is not set # CONFIG_BLK_DEV_TC86C001 is not set # CONFIG_IDE_ARM is not set CONFIG_BLK_DEV_IDEDMA=y # CONFIG_IDEDMA_IVB is not set # CONFIG_BLK_DEV_HD is not set # # SCSI device support # # CONFIG_RAID_ATTRS is not set CONFIG_SCSI=y CONFIG_SCSI_DMA=y # CONFIG_SCSI_TGT is not set # CONFIG_SCSI_NETLINK is not set CONFIG_SCSI_PROC_FS=y # # SCSI support type (disk, tape, CD-ROM) # CONFIG_BLK_DEV_SD=y # CONFIG_CHR_DEV_ST is not set # CONFIG_CHR_DEV_OSST is not set # CONFIG_BLK_DEV_SR is not set CONFIG_CHR_DEV_SG=y # CONFIG_CHR_DEV_SCH is not set # # Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs # # CONFIG_SCSI_MULTI_LUN is not set # CONFIG_SCSI_CONSTANTS is not set # CONFIG_SCSI_LOGGING is not set # CONFIG_SCSI_SCAN_ASYNC is not set # # SCSI Transports # # CONFIG_SCSI_SPI_ATTRS is not set # CONFIG_SCSI_FC_ATTRS is not set # CONFIG_SCSI_ISCSI_ATTRS is not set # CONFIG_SCSI_SAS_LIBSAS is not set CONFIG_SCSI_LOWLEVEL=y # CONFIG_ISCSI_TCP is not set # CONFIG_BLK_DEV_3W_XXXX_RAID is not set CONFIG_SCSI_3W_9XXX=y # CONFIG_SCSI_ACARD is not set # CONFIG_SCSI_AACRAID is not set # CONFIG_SCSI_AIC7XXX is not set # CONFIG_SCSI_AIC7XXX_OLD is not set # CONFIG_SCSI_AIC79XX is not set # CONFIG_SCSI_AIC94XX is not set # CONFIG_SCSI_DPT_I2O is not set # CONFIG_SCSI_ADVANSYS is not set # CONFIG_SCSI_ARCMSR is not set # CONFIG_MEGARAID_NEWGEN is not set # CONFIG_MEGARAID_LEGACY is not set # CONFIG_MEGARAID_SAS is not set # CONFIG_SCSI_HPTIOP is not set # CONFIG_SCSI_BUSLOGIC is not set # CONFIG_SCSI_DMX3191D is not set # CONFIG_SCSI_EATA is not set # CONFIG_SCSI_FUTURE_DOMAIN is not set # CONFIG_SCSI_GDTH is not set # CONFIG_SCSI_IPS is not set # CONFIG_SCSI_INITIO is not set # CONFIG_SCSI_INIA100 is not set # CONFIG_SCSI_STEX is not set # CONFIG_SCSI_SYM53C8XX_2 is not set # CONFIG_SCSI_IPR is not set # CONFIG_SCSI_QLOGIC_1280 is not set # CONFIG_SCSI_QLA_FC is not set # CONFIG_SCSI_QLA_ISCSI is not set # CONFIG_SCSI_LPFC is not set # CONFIG_SCSI_DC395x is not set # CONFIG_SCSI_DC390T is not set # CONFIG_SCSI_NSP32 is not set # CONFIG_SCSI_DEBUG is not set # CONFIG_SCSI_SRP is not set CONFIG_ATA=y # CONFIG_ATA_NONSTANDARD is not set CONFIG_ATA_ACPI=y CONFIG_SATA_AHCI=y # CONFIG_SATA_SVW is not set # CONFIG_ATA_PIIX is not set # CONFIG_SATA_MV is not set # CONFIG_SATA_NV is not set # CONFIG_PDC_ADMA is not set # CONFIG_SATA_QSTOR is not set # CONFIG_SATA_PROMISE is not set # CONFIG_SATA_SX4 is not set # CONFIG_SATA_SIL is not set # CONFIG_SATA_SIL24 is not set # CONFIG_SATA_SIS is not set # CONFIG_SATA_ULI is not set # CONFIG_SATA_VIA is 
not set # CONFIG_SATA_VITESSE is not set # CONFIG_SATA_INIC162X is not set # CONFIG_PATA_ALI is not set # CONFIG_PATA_AMD is not set # CONFIG_PATA_ARTOP is not set # CONFIG_PATA_ATIIXP is not set # CONFIG_PATA_CMD640_PCI is not set # CONFIG_PATA_CMD64X is not set # CONFIG_PATA_CS5520 is not set # CONFIG_PATA_CS5530 is not set # CONFIG_PATA_CS5535 is not set # CONFIG_PATA_CYPRESS is not set # CONFIG_PATA_EFAR is not set # CONFIG_ATA_GENERIC is not set # CONFIG_PATA_HPT366 is not set # CONFIG_PATA_HPT37X is not set # CONFIG_PATA_HPT3X2N is not set # CONFIG_PATA_HPT3X3 is not set # CONFIG_PATA_IT821X is not set # CONFIG_PATA_IT8213 is not set # CONFIG_PATA_JMICRON is not set # CONFIG_PATA_TRIFLEX is not set # CONFIG_PATA_MARVELL is not set # CONFIG_PATA_MPIIX is not set # CONFIG_PATA_OLDPIIX is not set # CONFIG_PATA_NETCELL is not set # CONFIG_PATA_NS87410 is not set # CONFIG_PATA_OPTI is not set # CONFIG_PATA_OPTIDMA is not set # CONFIG_PATA_PDC_OLD is not set # CONFIG_PATA_RADISYS is not set # CONFIG_PATA_RZ1000 is not set # CONFIG_PATA_SC1200 is not set # CONFIG_PATA_SERVERWORKS is not set # CONFIG_PATA_PDC2027X is not set # CONFIG_PATA_SIL680 is not set # CONFIG_PATA_SIS is not set # CONFIG_PATA_VIA is not set # CONFIG_PATA_WINBOND is not set # CONFIG_PATA_PLATFORM is not set CONFIG_MD=y CONFIG_BLK_DEV_MD=y CONFIG_MD_LINEAR=y CONFIG_MD_RAID0=y CONFIG_MD_RAID1=y CONFIG_MD_RAID10=y CONFIG_MD_RAID456=y CONFIG_MD_RAID5_RESHAPE=y # CONFIG_MD_MULTIPATH is not set # CONFIG_MD_FAULTY is not set # CONFIG_BLK_DEV_DM is not set # # Fusion MPT device support # # CONFIG_FUSION is not set # CONFIG_FUSION_SPI is not set # CONFIG_FUSION_FC is not set # CONFIG_FUSION_SAS is not set # # IEEE 1394 (FireWire) support # # CONFIG_FIREWIRE is not set # CONFIG_IEEE1394 is not set # CONFIG_I2O is not set # CONFIG_MACINTOSH_DRIVERS is not set CONFIG_NETDEVICES=y # CONFIG_NETDEVICES_MULTIQUEUE is not set # CONFIG_DUMMY is not set CONFIG_BONDING=y # CONFIG_MACVLAN is not set # 
CONFIG_EQUALIZER is not set # CONFIG_TUN is not set # CONFIG_NET_SB1000 is not set # CONFIG_ARCNET is not set # CONFIG_NET_ETHERNET is not set CONFIG_NETDEV_1000=y # CONFIG_ACENIC is not set # CONFIG_DL2K is not set CONFIG_E1000=y CONFIG_E1000_NAPI=y # CONFIG_E1000_DISABLE_PACKET_SPLIT is not set # CONFIG_NS83820 is not set # CONFIG_HAMACHI is not set # CONFIG_YELLOWFIN is not set # CONFIG_R8169 is not set # CONFIG_SIS190 is not set # CONFIG_SKGE is not set # CONFIG_SKY2 is not set # CONFIG_SK98LIN is not set # CONFIG_VIA_VELOCITY is not set # CONFIG_TIGON3 is not set # CONFIG_BNX2 is not set # CONFIG_QLA3XXX is not set # CONFIG_ATL1 is not set # CONFIG_NETDEV_10000 is not set # CONFIG_TR is not set # # Wireless LAN # # CONFIG_WLAN_PRE80211 is not set # CONFIG_WLAN_80211 is not set # # USB Network Adapters # # CONFIG_USB_CATC is not set # CONFIG_USB_KAWETH is not set # CONFIG_USB_PEGASUS is not set # CONFIG_USB_RTL8150 is not set # CONFIG_USB_USBNET_MII is not set # CONFIG_USB_USBNET is not set # CONFIG_WAN is not set # CONFIG_FDDI is not set # CONFIG_HIPPI is not set # CONFIG_PPP is not set # CONFIG_SLIP is not set # CONFIG_NET_FC is not set # CONFIG_SHAPER is not set # CONFIG_NETCONSOLE is not set # CONFIG_NETPOLL is not set # CONFIG_NET_POLL_CONTROLLER is not set # CONFIG_ISDN is not set # CONFIG_PHONE is not set # # Input device support # CONFIG_INPUT=y # CONFIG_INPUT_FF_MEMLESS is not set # CONFIG_INPUT_POLLDEV is not set # # Userland interfaces # CONFIG_INPUT_MOUSEDEV=y # CONFIG_INPUT_MOUSEDEV_PSAUX is not set CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024 CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768 # CONFIG_INPUT_JOYDEV is not set # CONFIG_INPUT_TSDEV is not set # CONFIG_INPUT_EVDEV is not set # CONFIG_INPUT_EVBUG is not set # # Input Device Drivers # CONFIG_INPUT_KEYBOARD=y CONFIG_KEYBOARD_ATKBD=y # CONFIG_KEYBOARD_SUNKBD is not set # CONFIG_KEYBOARD_LKKBD is not set # CONFIG_KEYBOARD_XTKBD is not set # CONFIG_KEYBOARD_NEWTON is not set # CONFIG_KEYBOARD_STOWAWAY is not set # 
CONFIG_INPUT_MOUSE is not set # CONFIG_INPUT_JOYSTICK is not set # CONFIG_INPUT_TABLET is not set # CONFIG_INPUT_TOUCHSCREEN is not set # CONFIG_INPUT_MISC is not set # # Hardware I/O ports # CONFIG_SERIO=y CONFIG_SERIO_I8042=y # CONFIG_SERIO_SERPORT is not set # CONFIG_SERIO_CT82C710 is not set # CONFIG_SERIO_PCIPS2 is not set CONFIG_SERIO_LIBPS2=y # CONFIG_SERIO_RAW is not set # CONFIG_GAMEPORT is not set # # Character devices # CONFIG_VT=y CONFIG_VT_CONSOLE=y CONFIG_HW_CONSOLE=y # CONFIG_VT_HW_CONSOLE_BINDING is not set # CONFIG_SERIAL_NONSTANDARD is not set # # Serial drivers # CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_FIX_EARLYCON_MEM=y CONFIG_SERIAL_8250_PCI=y CONFIG_SERIAL_8250_PNP=y CONFIG_SERIAL_8250_NR_UARTS=4 CONFIG_SERIAL_8250_RUNTIME_UARTS=4 CONFIG_SERIAL_8250_EXTENDED=y # CONFIG_SERIAL_8250_MANY_PORTS is not set CONFIG_SERIAL_8250_SHARE_IRQ=y # CONFIG_SERIAL_8250_DETECT_IRQ is not set # CONFIG_SERIAL_8250_RSA is not set # # Non-8250 serial port support # CONFIG_SERIAL_CORE=y CONFIG_SERIAL_CORE_CONSOLE=y # CONFIG_SERIAL_JSM is not set CONFIG_UNIX98_PTYS=y CONFIG_LEGACY_PTYS=y CONFIG_LEGACY_PTY_COUNT=256 # CONFIG_IPMI_HANDLER is not set CONFIG_WATCHDOG=y # CONFIG_WATCHDOG_NOWAYOUT is not set # # Watchdog Device Drivers # CONFIG_SOFT_WATCHDOG=y # CONFIG_ACQUIRE_WDT is not set # CONFIG_ADVANTECH_WDT is not set # CONFIG_ALIM1535_WDT is not set # CONFIG_ALIM7101_WDT is not set # CONFIG_SC520_WDT is not set # CONFIG_EUROTECH_WDT is not set # CONFIG_IB700_WDT is not set # CONFIG_IBMASR is not set # CONFIG_WAFER_WDT is not set # CONFIG_I6300ESB_WDT is not set # CONFIG_ITCO_WDT is not set # CONFIG_SC1200_WDT is not set # CONFIG_PC87413_WDT is not set # CONFIG_60XX_WDT is not set # CONFIG_SBC8360_WDT is not set # CONFIG_CPU5_WDT is not set # CONFIG_SMSC37B787_WDT is not set # CONFIG_W83627HF_WDT is not set # CONFIG_W83697HF_WDT is not set # CONFIG_W83877F_WDT is not set # CONFIG_W83977F_WDT is not set # CONFIG_MACHZ_WDT is not set # 
CONFIG_SBC_EPX_C3_WATCHDOG is not set # # PCI-based Watchdog Cards # # CONFIG_PCIPCWATCHDOG is not set # CONFIG_WDTPCI is not set # # USB-based Watchdog Cards # # CONFIG_USBPCWATCHDOG is not set CONFIG_HW_RANDOM=y CONFIG_HW_RANDOM_INTEL=y # CONFIG_HW_RANDOM_AMD is not set # CONFIG_HW_RANDOM_GEODE is not set # CONFIG_HW_RANDOM_VIA is not set # CONFIG_NVRAM is not set CONFIG_RTC=y # CONFIG_R3964 is not set # CONFIG_APPLICOM is not set # CONFIG_SONYPI is not set # CONFIG_AGP is not set # CONFIG_DRM is not set # CONFIG_MWAVE is not set # CONFIG_PC8736x_GPIO is not set # CONFIG_NSC_GPIO is not set # CONFIG_CS5535_GPIO is not set # CONFIG_RAW_DRIVER is not set # CONFIG_HPET is not set CONFIG_HANGCHECK_TIMER=y # CONFIG_TCG_TPM is not set # CONFIG_TELCLOCK is not set CONFIG_DEVPORT=y CONFIG_I2C=y CONFIG_I2C_BOARDINFO=y CONFIG_I2C_CHARDEV=y # # I2C Algorithms # # CONFIG_I2C_ALGOBIT is not set # CONFIG_I2C_ALGOPCF is not set # CONFIG_I2C_ALGOPCA is not set # # I2C Hardware Bus support # # CONFIG_I2C_ALI1535 is not set # CONFIG_I2C_ALI1563 is not set # CONFIG_I2C_ALI15X3 is not set # CONFIG_I2C_AMD756 is not set # CONFIG_I2C_AMD8111 is not set CONFIG_I2C_I801=y # CONFIG_I2C_I810 is not set # CONFIG_I2C_PIIX4 is not set # CONFIG_I2C_NFORCE2 is not set # CONFIG_I2C_OCORES is not set # CONFIG_I2C_PARPORT_LIGHT is not set # CONFIG_I2C_PROSAVAGE is not set # CONFIG_I2C_SAVAGE4 is not set # CONFIG_I2C_SIMTEC is not set # CONFIG_SCx200_ACB is not set # CONFIG_I2C_SIS5595 is not set # CONFIG_I2C_SIS630 is not set # CONFIG_I2C_SIS96X is not set # CONFIG_I2C_TAOS_EVM is not set # CONFIG_I2C_TINY_USB is not set # CONFIG_I2C_VIA is not set # CONFIG_I2C_VIAPRO is not set # CONFIG_I2C_VOODOO3 is not set # # Miscellaneous I2C Chip support # # CONFIG_SENSORS_DS1337 is not set # CONFIG_SENSORS_DS1374 is not set # CONFIG_DS1682 is not set CONFIG_SENSORS_EEPROM=y # CONFIG_SENSORS_PCF8574 is not set # CONFIG_SENSORS_PCA9539 is not set # CONFIG_SENSORS_PCF8591 is not set # CONFIG_SENSORS_MAX6875 
is not set # CONFIG_SENSORS_TSL2550 is not set # CONFIG_I2C_DEBUG_CORE is not set # CONFIG_I2C_DEBUG_ALGO is not set # CONFIG_I2C_DEBUG_BUS is not set # CONFIG_I2C_DEBUG_CHIP is not set # # SPI support # # CONFIG_SPI is not set # CONFIG_SPI_MASTER is not set # CONFIG_W1 is not set # CONFIG_POWER_SUPPLY is not set CONFIG_HWMON=y CONFIG_HWMON_VID=y # CONFIG_SENSORS_ABITUGURU is not set # CONFIG_SENSORS_ABITUGURU3 is not set # CONFIG_SENSORS_AD7418 is not set # CONFIG_SENSORS_ADM1021 is not set # CONFIG_SENSORS_ADM1025 is not set # CONFIG_SENSORS_ADM1026 is not set # CONFIG_SENSORS_ADM1029 is not set # CONFIG_SENSORS_ADM1031 is not set # CONFIG_SENSORS_ADM9240 is not set # CONFIG_SENSORS_K8TEMP is not set # CONFIG_SENSORS_ASB100 is not set # CONFIG_SENSORS_ATXP1 is not set # CONFIG_SENSORS_DS1621 is not set # CONFIG_SENSORS_F71805F is not set # CONFIG_SENSORS_FSCHER is not set # CONFIG_SENSORS_FSCPOS is not set # CONFIG_SENSORS_GL518SM is not set # CONFIG_SENSORS_GL520SM is not set # CONFIG_SENSORS_CORETEMP is not set # CONFIG_SENSORS_IT87 is not set # CONFIG_SENSORS_LM63 is not set # CONFIG_SENSORS_LM75 is not set # CONFIG_SENSORS_LM77 is not set # CONFIG_SENSORS_LM78 is not set # CONFIG_SENSORS_LM80 is not set # CONFIG_SENSORS_LM83 is not set # CONFIG_SENSORS_LM85 is not set # CONFIG_SENSORS_LM87 is not set # CONFIG_SENSORS_LM90 is not set # CONFIG_SENSORS_LM92 is not set CONFIG_SENSORS_LM93=y # CONFIG_SENSORS_MAX1619 is not set # CONFIG_SENSORS_MAX6650 is not set # CONFIG_SENSORS_PC87360 is not set CONFIG_SENSORS_PC87427=y # CONFIG_SENSORS_SIS5595 is not set # CONFIG_SENSORS_DME1737 is not set # CONFIG_SENSORS_SMSC47M1 is not set # CONFIG_SENSORS_SMSC47M192 is not set # CONFIG_SENSORS_SMSC47B397 is not set # CONFIG_SENSORS_THMC50 is not set # CONFIG_SENSORS_VIA686A is not set # CONFIG_SENSORS_VT1211 is not set # CONFIG_SENSORS_VT8231 is not set # CONFIG_SENSORS_W83781D is not set # CONFIG_SENSORS_W83791D is not set # CONFIG_SENSORS_W83792D is not set 
CONFIG_SENSORS_W83793=y # CONFIG_SENSORS_W83L785TS is not set CONFIG_SENSORS_W83627HF=y CONFIG_SENSORS_W83627EHF=y # CONFIG_SENSORS_HDAPS is not set # CONFIG_SENSORS_APPLESMC is not set # CONFIG_HWMON_DEBUG_CHIP is not set # # Multifunction device drivers # # CONFIG_MFD_SM501 is not set # # Multimedia devices # # CONFIG_VIDEO_DEV is not set # CONFIG_DVB_CORE is not set # CONFIG_DAB is not set # # Graphics support # # CONFIG_BACKLIGHT_LCD_SUPPORT is not set # # Display device support # # CONFIG_DISPLAY_SUPPORT is not set # CONFIG_VGASTATE is not set # CONFIG_VIDEO_OUTPUT_CONTROL is not set # CONFIG_FB is not set # # Console display driver support # CONFIG_VGA_CONSOLE=y CONFIG_VGACON_SOFT_SCROLLBACK=y CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=128 # CONFIG_VIDEO_SELECT is not set CONFIG_DUMMY_CONSOLE=y # # Sound # # CONFIG_SOUND is not set CONFIG_HID_SUPPORT=y CONFIG_HID=y # CONFIG_HID_DEBUG is not set # # USB Input Devices # CONFIG_USB_HID=y # CONFIG_USB_HIDINPUT_POWERBOOK is not set # CONFIG_HID_FF is not set # CONFIG_USB_HIDDEV is not set CONFIG_USB_SUPPORT=y CONFIG_USB_ARCH_HAS_HCD=y CONFIG_USB_ARCH_HAS_OHCI=y CONFIG_USB_ARCH_HAS_EHCI=y CONFIG_USB=y # CONFIG_USB_DEBUG is not set # # Miscellaneous USB options # CONFIG_USB_DEVICEFS=y # CONFIG_USB_DEVICE_CLASS is not set # CONFIG_USB_DYNAMIC_MINORS is not set # CONFIG_USB_SUSPEND is not set # CONFIG_USB_PERSIST is not set # CONFIG_USB_OTG is not set # # USB Host Controller Drivers # CONFIG_USB_EHCI_HCD=y CONFIG_USB_EHCI_SPLIT_ISO=y CONFIG_USB_EHCI_ROOT_HUB_TT=y CONFIG_USB_EHCI_TT_NEWSCHED=y # CONFIG_USB_ISP116X_HCD is not set # CONFIG_USB_OHCI_HCD is not set CONFIG_USB_UHCI_HCD=y # CONFIG_USB_SL811_HCD is not set # CONFIG_USB_R8A66597_HCD is not set # # USB Device Class drivers # # CONFIG_USB_ACM is not set # CONFIG_USB_PRINTER is not set # # NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' # # # may also be needed; see USB_STORAGE Help for more information # CONFIG_USB_STORAGE=y # CONFIG_USB_STORAGE_DEBUG is not set 
CONFIG_USB_STORAGE_DATAFAB=y CONFIG_USB_STORAGE_FREECOM=y CONFIG_USB_STORAGE_ISD200=y CONFIG_USB_STORAGE_DPCM=y CONFIG_USB_STORAGE_USBAT=y CONFIG_USB_STORAGE_SDDR09=y CONFIG_USB_STORAGE_SDDR55=y CONFIG_USB_STORAGE_JUMPSHOT=y CONFIG_USB_STORAGE_ALAUDA=y # CONFIG_USB_STORAGE_KARMA is not set # CONFIG_USB_LIBUSUAL is not set # # USB Imaging devices # # CONFIG_USB_MDC800 is not set # CONFIG_USB_MICROTEK is not set # CONFIG_USB_MON is not set # # USB port drivers # # # USB Serial Converter support # # CONFIG_USB_SERIAL is not set # # USB Miscellaneous drivers # # CONFIG_USB_EMI62 is not set # CONFIG_USB_EMI26 is not set # CONFIG_USB_ADUTUX is not set # CONFIG_USB_AUERSWALD is not set # CONFIG_USB_RIO500 is not set # CONFIG_USB_LEGOTOWER is not set # CONFIG_USB_LCD is not set # CONFIG_USB_BERRY_CHARGE is not set # CONFIG_USB_LED is not set # CONFIG_USB_CYPRESS_CY7C63 is not set # CONFIG_USB_CYTHERM is not set # CONFIG_USB_PHIDGET is not set # CONFIG_USB_IDMOUSE is not set # CONFIG_USB_FTDI_ELAN is not set # CONFIG_USB_APPLEDISPLAY is not set # CONFIG_USB_SISUSBVGA is not set # CONFIG_USB_LD is not set # CONFIG_USB_TRANCEVIBRATOR is not set # CONFIG_USB_IOWARRIOR is not set # CONFIG_USB_TEST is not set # # USB DSL modem support # # # USB Gadget Support # # CONFIG_USB_GADGET is not set # CONFIG_MMC is not set # CONFIG_NEW_LEDS is not set # CONFIG_INFINIBAND is not set # CONFIG_EDAC is not set # CONFIG_RTC_CLASS is not set # # DMA Engine support # # CONFIG_DMA_ENGINE is not set # # DMA Clients # # # DMA Devices # # CONFIG_VIRTUALIZATION is not set # # Userspace I/O # # CONFIG_UIO is not set # # File systems # CONFIG_EXT2_FS=y # CONFIG_EXT2_FS_XATTR is not set # CONFIG_EXT2_FS_XIP is not set # CONFIG_EXT3_FS is not set # CONFIG_EXT4DEV_FS is not set # CONFIG_REISERFS_FS is not set # CONFIG_JFS_FS is not set # CONFIG_FS_POSIX_ACL is not set CONFIG_XFS_FS=y # CONFIG_XFS_QUOTA is not set # CONFIG_XFS_SECURITY is not set # CONFIG_XFS_POSIX_ACL is not set CONFIG_XFS_RT=y # 
CONFIG_GFS2_FS is not set # CONFIG_OCFS2_FS is not set # CONFIG_MINIX_FS is not set # CONFIG_ROMFS_FS is not set CONFIG_INOTIFY=y CONFIG_INOTIFY_USER=y # CONFIG_QUOTA is not set # CONFIG_DNOTIFY is not set # CONFIG_AUTOFS_FS is not set # CONFIG_AUTOFS4_FS is not set # CONFIG_FUSE_FS is not set # # CD-ROM/DVD Filesystems # # CONFIG_ISO9660_FS is not set # CONFIG_UDF_FS is not set # # DOS/FAT/NT Filesystems # CONFIG_FAT_FS=y CONFIG_MSDOS_FS=y CONFIG_VFAT_FS=y CONFIG_FAT_DEFAULT_CODEPAGE=437 CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1" # CONFIG_NTFS_FS is not set # # Pseudo filesystems # CONFIG_PROC_FS=y # CONFIG_PROC_KCORE is not set CONFIG_PROC_SYSCTL=y CONFIG_SYSFS=y CONFIG_TMPFS=y # CONFIG_TMPFS_POSIX_ACL is not set # CONFIG_HUGETLBFS is not set # CONFIG_HUGETLB_PAGE is not set CONFIG_RAMFS=y # CONFIG_CONFIGFS_FS is not set # # Miscellaneous filesystems # # CONFIG_ADFS_FS is not set # CONFIG_AFFS_FS is not set # CONFIG_HFS_FS is not set # CONFIG_HFSPLUS_FS is not set # CONFIG_BEFS_FS is not set # CONFIG_BFS_FS is not set # CONFIG_EFS_FS is not set # CONFIG_CRAMFS is not set # CONFIG_VXFS_FS is not set # CONFIG_HPFS_FS is not set # CONFIG_QNX4FS_FS is not set # CONFIG_SYSV_FS is not set # CONFIG_UFS_FS is not set # # Network File Systems # CONFIG_NFS_FS=y CONFIG_NFS_V3=y # CONFIG_NFS_V3_ACL is not set # CONFIG_NFS_V4 is not set # CONFIG_NFS_DIRECTIO is not set CONFIG_NFSD=y CONFIG_NFSD_V3=y # CONFIG_NFSD_V3_ACL is not set # CONFIG_NFSD_V4 is not set # CONFIG_NFSD_TCP is not set CONFIG_LOCKD=y CONFIG_LOCKD_V4=y CONFIG_EXPORTFS=y CONFIG_NFS_COMMON=y CONFIG_SUNRPC=y # CONFIG_SUNRPC_BIND34 is not set # CONFIG_RPCSEC_GSS_KRB5 is not set # CONFIG_RPCSEC_GSS_SPKM3 is not set # CONFIG_SMB_FS is not set # CONFIG_CIFS is not set # CONFIG_NCP_FS is not set # CONFIG_CODA_FS is not set # CONFIG_AFS_FS is not set # # Partition Types # # CONFIG_PARTITION_ADVANCED is not set CONFIG_MSDOS_PARTITION=y # # Native Language Support # CONFIG_NLS=y CONFIG_NLS_DEFAULT="iso8859-1" 
CONFIG_NLS_CODEPAGE_437=y # CONFIG_NLS_CODEPAGE_737 is not set # CONFIG_NLS_CODEPAGE_775 is not set CONFIG_NLS_CODEPAGE_850=y CONFIG_NLS_CODEPAGE_852=y # CONFIG_NLS_CODEPAGE_855 is not set # CONFIG_NLS_CODEPAGE_857 is not set # CONFIG_NLS_CODEPAGE_860 is not set # CONFIG_NLS_CODEPAGE_861 is not set # CONFIG_NLS_CODEPAGE_862 is not set # CONFIG_NLS_CODEPAGE_863 is not set # CONFIG_NLS_CODEPAGE_864 is not set # CONFIG_NLS_CODEPAGE_865 is not set # CONFIG_NLS_CODEPAGE_866 is not set # CONFIG_NLS_CODEPAGE_869 is not set # CONFIG_NLS_CODEPAGE_936 is not set # CONFIG_NLS_CODEPAGE_950 is not set # CONFIG_NLS_CODEPAGE_932 is not set # CONFIG_NLS_CODEPAGE_949 is not set # CONFIG_NLS_CODEPAGE_874 is not set # CONFIG_NLS_ISO8859_8 is not set # CONFIG_NLS_CODEPAGE_1250 is not set # CONFIG_NLS_CODEPAGE_1251 is not set CONFIG_NLS_ASCII=y CONFIG_NLS_ISO8859_1=y CONFIG_NLS_ISO8859_2=y # CONFIG_NLS_ISO8859_3 is not set # CONFIG_NLS_ISO8859_4 is not set # CONFIG_NLS_ISO8859_5 is not set # CONFIG_NLS_ISO8859_6 is not set # CONFIG_NLS_ISO8859_7 is not set # CONFIG_NLS_ISO8859_9 is not set # CONFIG_NLS_ISO8859_13 is not set # CONFIG_NLS_ISO8859_14 is not set CONFIG_NLS_ISO8859_15=y # CONFIG_NLS_KOI8_R is not set # CONFIG_NLS_KOI8_U is not set CONFIG_NLS_UTF8=y # # Distributed Lock Manager # # CONFIG_DLM is not set # CONFIG_INSTRUMENTATION is not set # # Kernel hacking # CONFIG_TRACE_IRQFLAGS_SUPPORT=y # CONFIG_PRINTK_TIME is not set # CONFIG_ENABLE_MUST_CHECK is not set CONFIG_MAGIC_SYSRQ=y # CONFIG_UNUSED_SYMBOLS is not set # CONFIG_DEBUG_FS is not set # CONFIG_HEADERS_CHECK is not set CONFIG_DEBUG_KERNEL=y # CONFIG_DEBUG_SHIRQ is not set CONFIG_DETECT_SOFTLOCKUP=y # CONFIG_SCHED_DEBUG is not set # CONFIG_SCHEDSTATS is not set # CONFIG_TIMER_STATS is not set # CONFIG_DEBUG_SLAB is not set CONFIG_DEBUG_RT_MUTEXES=y CONFIG_DEBUG_PI_LIST=y # CONFIG_RT_MUTEX_TESTER is not set # CONFIG_DEBUG_SPINLOCK is not set # CONFIG_DEBUG_MUTEXES is not set # CONFIG_DEBUG_LOCK_ALLOC is not set # 
CONFIG_PROVE_LOCKING is not set # CONFIG_LOCK_STAT is not set # CONFIG_DEBUG_SPINLOCK_SLEEP is not set # CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set # CONFIG_DEBUG_KOBJECT is not set # CONFIG_DEBUG_HIGHMEM is not set # CONFIG_DEBUG_BUGVERBOSE is not set CONFIG_DEBUG_INFO=y # CONFIG_DEBUG_VM is not set # CONFIG_DEBUG_LIST is not set CONFIG_FRAME_POINTER=y CONFIG_FORCED_INLINING=y # CONFIG_FAULT_INJECTION is not set CONFIG_EARLY_PRINTK=y CONFIG_DEBUG_STACKOVERFLOW=y # CONFIG_DEBUG_STACK_USAGE is not set # CONFIG_DEBUG_PAGEALLOC is not set # CONFIG_DEBUG_RODATA is not set # CONFIG_4KSTACKS is not set CONFIG_X86_FIND_SMP_CONFIG=y CONFIG_X86_MPPARSE=y CONFIG_DOUBLEFAULT=y CONFIG_KDB=y # CONFIG_KDB_MODULES is not set # CONFIG_KDB_OFF is not set CONFIG_KDB_CONTINUE_CATASTROPHIC=0 # # Security options # # CONFIG_KEYS is not set # CONFIG_SECURITY is not set CONFIG_XOR_BLOCKS=y CONFIG_ASYNC_CORE=y CONFIG_ASYNC_MEMCPY=y CONFIG_ASYNC_XOR=y # CONFIG_CRYPTO is not set # # Library routines # # CONFIG_CRC_CCITT is not set # CONFIG_CRC16 is not set # CONFIG_CRC_ITU_T is not set # CONFIG_CRC32 is not set # CONFIG_CRC7 is not set # CONFIG_LIBCRC32C is not set CONFIG_PLIST=y CONFIG_HAS_IOMEM=y CONFIG_HAS_IOPORT=y CONFIG_HAS_DMA=y CONFIG_GENERIC_HARDIRQS=y CONFIG_GENERIC_IRQ_PROBE=y CONFIG_GENERIC_PENDING_IRQ=y CONFIG_X86_SMP=y CONFIG_X86_HT=y CONFIG_X86_BIOS_REBOOT=y CONFIG_X86_TRAMPOLINE=y CONFIG_KTIME_SCALAR=y --------------090008060702010503060703-- From owner-xfs@oss.sgi.com Thu Mar 6 17:21:13 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 06 Mar 2008 17:21:34 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.0 required=5.0 tests=AWL,BAYES_50,UPPERCASE_50_75 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m271L7qM032067 for ; Thu, 6 Mar 2008 17:21:09 -0800 
Received: from [134.15.251.7] (melb-sw-corp-251-7.corp.sgi.com [134.15.251.7]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA25686; Fri, 7 Mar 2008 12:21:27 +1100 Message-ID: <47D09840.3030203@sgi.com> Date: Fri, 07 Mar 2008 12:20:00 +1100 From: Mark Goodwin Reply-To: markgw@sgi.com Organization: SGI Engineering User-Agent: Thunderbird 1.5.0.14 (Windows/20071210) MIME-Version: 1.0 To: Kris Kersey CC: xfs@oss.sgi.com, Bill Vaughan Subject: Re: pdflush hang on xlog_grant_log_space() References: <47D062AF.80501@steelbox.com> In-Reply-To: <47D062AF.80501@steelbox.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14791 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: markgw@sgi.com Precedence: bulk X-list: xfs Hi Kris, thanks for the report. We're digesting this and will get back to you. Cheers -- Mark Kris Kersey wrote: > Hello, > > I'm working on a NAS product and we're currently having lock-ups that > seem to be hanging in XFS code. We're running a NAS that has 1024 NFSD > threads accessing three RAID mounts. All three mounts are running XFS > file systems. Lately we've had random lockups on these boxes and I am > now running a kernel with KDB built-in. > > The lock-up takes the form of all NFSD threads in D state with one out > of three pdflush threads in D state. The assumption can be made that > all NFSD threads are waiting on the one pdflush thread to complete. So > two times now when an NAS has gotten in this state I have accessed KDB > and ran a stack trace on the pdflush thread. Both times the thread was > stuck on xlog_grant_log_space+0xdb. So now I'm turning to you to help > me figure out why XFS is locking up. 
The box has been left in this > state so I can run any KDB commands you wish; if you have any > questions about the setup, let me know. The system is running a mostly > stock 2.6.23.12 kernel. My config file as well as photos taken of the > stack dump are attached. > > Thanks, > Kris > > > ------------------------------------------------------------------------ > > > ------------------------------------------------------------------------ > > > ------------------------------------------------------------------------ > > # > # Automatically generated make config: don't edit > # Linux kernel version: 2.6.23.12 > # Mon Mar 3 10:25:53 2008 > # > CONFIG_X86_32=y > CONFIG_GENERIC_TIME=y > CONFIG_GENERIC_CMOS_UPDATE=y > CONFIG_CLOCKSOURCE_WATCHDOG=y > CONFIG_GENERIC_CLOCKEVENTS=y > CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y > CONFIG_LOCKDEP_SUPPORT=y > CONFIG_STACKTRACE_SUPPORT=y > CONFIG_SEMAPHORE_SLEEPERS=y > CONFIG_X86=y > CONFIG_MMU=y > CONFIG_ZONE_DMA=y > CONFIG_QUICKLIST=y > CONFIG_GENERIC_ISA_DMA=y > CONFIG_GENERIC_IOMAP=y > CONFIG_GENERIC_BUG=y > CONFIG_GENERIC_HWEIGHT=y > CONFIG_ARCH_MAY_HAVE_PC_FDC=y > CONFIG_DMI=y > CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" > > # > # General setup > # > CONFIG_EXPERIMENTAL=y > CONFIG_LOCK_KERNEL=y > CONFIG_INIT_ENV_ARG_LIMIT=32 > CONFIG_LOCALVERSION="" > CONFIG_LOCALVERSION_AUTO=y > # CONFIG_SWAP is not set > CONFIG_SYSVIPC=y > CONFIG_SYSVIPC_SYSCTL=y > CONFIG_POSIX_MQUEUE=y > # CONFIG_BSD_PROCESS_ACCT is not set > # CONFIG_TASKSTATS is not set > # CONFIG_USER_NS is not set > # CONFIG_AUDIT is not set > # CONFIG_IKCONFIG is not set > CONFIG_LOG_BUF_SHIFT=15 > # CONFIG_CPUSETS is not set > CONFIG_SYSFS_DEPRECATED=y > # CONFIG_RELAY is not set > CONFIG_BLK_DEV_INITRD=y > CONFIG_INITRAMFS_SOURCE="" > # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set > CONFIG_SYSCTL=y > CONFIG_EMBEDDED=y > # CONFIG_UID16 is not set > # CONFIG_SYSCTL_SYSCALL is not set > CONFIG_KALLSYMS=y > CONFIG_KALLSYMS_ALL=y > # CONFIG_KALLSYMS_EXTRA_PASS
is not set > CONFIG_HOTPLUG=y > CONFIG_PRINTK=y > CONFIG_BUG=y > # CONFIG_ELF_CORE is not set > CONFIG_BASE_FULL=y > CONFIG_FUTEX=y > CONFIG_ANON_INODES=y > CONFIG_EPOLL=y > CONFIG_SIGNALFD=y > CONFIG_EVENTFD=y > # CONFIG_SHMEM is not set > CONFIG_VM_EVENT_COUNTERS=y > CONFIG_SLAB=y > # CONFIG_SLUB is not set > # CONFIG_SLOB is not set > CONFIG_RT_MUTEXES=y > CONFIG_TINY_SHMEM=y > CONFIG_BASE_SMALL=0 > # CONFIG_MODULES is not set > CONFIG_STOP_MACHINE=y > CONFIG_BLOCK=y > CONFIG_LBD=y > # CONFIG_BLK_DEV_IO_TRACE is not set > # CONFIG_LSF is not set > # CONFIG_BLK_DEV_BSG is not set > > # > # IO Schedulers > # > CONFIG_IOSCHED_NOOP=y > # CONFIG_IOSCHED_AS is not set > CONFIG_IOSCHED_DEADLINE=y > # CONFIG_IOSCHED_CFQ is not set > # CONFIG_DEFAULT_AS is not set > CONFIG_DEFAULT_DEADLINE=y > # CONFIG_DEFAULT_CFQ is not set > # CONFIG_DEFAULT_NOOP is not set > CONFIG_DEFAULT_IOSCHED="deadline" > > # > # Processor type and features > # > CONFIG_TICK_ONESHOT=y > CONFIG_NO_HZ=y > CONFIG_HIGH_RES_TIMERS=y > CONFIG_SMP=y > CONFIG_X86_PC=y > # CONFIG_X86_ELAN is not set > # CONFIG_X86_VOYAGER is not set > # CONFIG_X86_NUMAQ is not set > # CONFIG_X86_SUMMIT is not set > # CONFIG_X86_BIGSMP is not set > # CONFIG_X86_VISWS is not set > # CONFIG_X86_GENERICARCH is not set > # CONFIG_X86_ES7000 is not set > # CONFIG_PARAVIRT is not set > # CONFIG_M386 is not set > # CONFIG_M486 is not set > # CONFIG_M586 is not set > # CONFIG_M586TSC is not set > # CONFIG_M586MMX is not set > # CONFIG_M686 is not set > # CONFIG_MPENTIUMII is not set > # CONFIG_MPENTIUMIII is not set > # CONFIG_MPENTIUMM is not set > # CONFIG_MCORE2 is not set > CONFIG_MPENTIUM4=y > # CONFIG_MK6 is not set > # CONFIG_MK7 is not set > # CONFIG_MK8 is not set > # CONFIG_MCRUSOE is not set > # CONFIG_MEFFICEON is not set > # CONFIG_MWINCHIPC6 is not set > # CONFIG_MWINCHIP2 is not set > # CONFIG_MWINCHIP3D is not set > # CONFIG_MGEODEGX1 is not set > # CONFIG_MGEODE_LX is not set > # CONFIG_MCYRIXIII is not set > # 
CONFIG_MVIAC3_2 is not set > # CONFIG_MVIAC7 is not set > # CONFIG_X86_GENERIC is not set > CONFIG_X86_CMPXCHG=y > CONFIG_X86_L1_CACHE_SHIFT=7 > CONFIG_X86_XADD=y > CONFIG_RWSEM_XCHGADD_ALGORITHM=y > # CONFIG_ARCH_HAS_ILOG2_U32 is not set > # CONFIG_ARCH_HAS_ILOG2_U64 is not set > CONFIG_GENERIC_CALIBRATE_DELAY=y > CONFIG_X86_WP_WORKS_OK=y > CONFIG_X86_INVLPG=y > CONFIG_X86_BSWAP=y > CONFIG_X86_POPAD_OK=y > CONFIG_X86_GOOD_APIC=y > CONFIG_X86_INTEL_USERCOPY=y > CONFIG_X86_USE_PPRO_CHECKSUM=y > CONFIG_X86_TSC=y > CONFIG_X86_CMOV=y > CONFIG_X86_MINIMUM_CPU_FAMILY=4 > CONFIG_HPET_TIMER=y > CONFIG_HPET_EMULATE_RTC=y > CONFIG_NR_CPUS=8 > CONFIG_SCHED_SMT=y > CONFIG_SCHED_MC=y > CONFIG_PREEMPT_NONE=y > # CONFIG_PREEMPT_VOLUNTARY is not set > # CONFIG_PREEMPT is not set > # CONFIG_PREEMPT_BKL is not set > CONFIG_X86_LOCAL_APIC=y > CONFIG_X86_IO_APIC=y > # CONFIG_X86_MCE is not set > # CONFIG_VM86 is not set > # CONFIG_TOSHIBA is not set > # CONFIG_I8K is not set > # CONFIG_X86_REBOOTFIXUPS is not set > # CONFIG_MICROCODE is not set > # CONFIG_X86_MSR is not set > # CONFIG_X86_CPUID is not set > > # > # Firmware Drivers > # > # CONFIG_EDD is not set > # CONFIG_DELL_RBU is not set > # CONFIG_DCDBAS is not set > CONFIG_DMIID=y > # CONFIG_NOHIGHMEM is not set > # CONFIG_HIGHMEM4G is not set > CONFIG_HIGHMEM64G=y > # CONFIG_VMSPLIT_3G is not set > # CONFIG_VMSPLIT_3G_OPT is not set > CONFIG_VMSPLIT_2G=y > # CONFIG_VMSPLIT_2G_OPT is not set > # CONFIG_VMSPLIT_1G is not set > CONFIG_PAGE_OFFSET=0x80000000 > CONFIG_HIGHMEM=y > CONFIG_X86_PAE=y > CONFIG_ARCH_FLATMEM_ENABLE=y > CONFIG_ARCH_SPARSEMEM_ENABLE=y > CONFIG_ARCH_SELECT_MEMORY_MODEL=y > CONFIG_ARCH_POPULATES_NODE_MAP=y > CONFIG_SELECT_MEMORY_MODEL=y > CONFIG_FLATMEM_MANUAL=y > # CONFIG_DISCONTIGMEM_MANUAL is not set > # CONFIG_SPARSEMEM_MANUAL is not set > CONFIG_FLATMEM=y > CONFIG_FLAT_NODE_MEM_MAP=y > CONFIG_SPARSEMEM_STATIC=y > CONFIG_SPLIT_PTLOCK_CPUS=4 > CONFIG_RESOURCES_64BIT=y > CONFIG_ZONE_DMA_FLAG=1 > 
CONFIG_BOUNCE=y > CONFIG_NR_QUICK=1 > CONFIG_VIRT_TO_BUS=y > CONFIG_HIGHPTE=y > # CONFIG_MATH_EMULATION is not set > # CONFIG_MTRR is not set > # CONFIG_EFI is not set > # CONFIG_IRQBALANCE is not set > # CONFIG_SECCOMP is not set > # CONFIG_HZ_100 is not set > # CONFIG_HZ_250 is not set > CONFIG_HZ_300=y > # CONFIG_HZ_1000 is not set > CONFIG_HZ=300 > # CONFIG_KEXEC is not set > # CONFIG_CRASH_DUMP is not set > CONFIG_PHYSICAL_START=0x100000 > # CONFIG_RELOCATABLE is not set > CONFIG_PHYSICAL_ALIGN=0x100000 > CONFIG_HOTPLUG_CPU=y > # CONFIG_COMPAT_VDSO is not set > CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y > > # > # Power management options (ACPI, APM) > # > CONFIG_PM=y > # CONFIG_PM_LEGACY is not set > # CONFIG_PM_DEBUG is not set > CONFIG_SUSPEND_SMP_POSSIBLE=y > # CONFIG_SUSPEND is not set > CONFIG_HIBERNATION_SMP_POSSIBLE=y > CONFIG_ACPI=y > # CONFIG_ACPI_PROCFS is not set > CONFIG_ACPI_PROC_EVENT=y > # CONFIG_ACPI_AC is not set > # CONFIG_ACPI_BATTERY is not set > CONFIG_ACPI_BUTTON=y > CONFIG_ACPI_FAN=y > # CONFIG_ACPI_DOCK is not set > CONFIG_ACPI_PROCESSOR=y > CONFIG_ACPI_HOTPLUG_CPU=y > CONFIG_ACPI_THERMAL=y > # CONFIG_ACPI_ASUS is not set > # CONFIG_ACPI_TOSHIBA is not set > # CONFIG_ACPI_CUSTOM_DSDT is not set > CONFIG_ACPI_BLACKLIST_YEAR=0 > # CONFIG_ACPI_DEBUG is not set > CONFIG_ACPI_EC=y > CONFIG_ACPI_POWER=y > CONFIG_ACPI_SYSTEM=y > CONFIG_X86_PM_TIMER=y > CONFIG_ACPI_CONTAINER=y > # CONFIG_ACPI_SBS is not set > > # > # CPU Frequency scaling > # > # CONFIG_CPU_FREQ is not set > > # > # Bus options (PCI, PCMCIA, EISA, MCA, ISA) > # > CONFIG_PCI=y > # CONFIG_PCI_GOBIOS is not set > # CONFIG_PCI_GOMMCONFIG is not set > # CONFIG_PCI_GODIRECT is not set > CONFIG_PCI_GOANY=y > CONFIG_PCI_BIOS=y > CONFIG_PCI_DIRECT=y > CONFIG_PCI_MMCONFIG=y > CONFIG_PCIEPORTBUS=y > CONFIG_PCIEAER=y > CONFIG_ARCH_SUPPORTS_MSI=y > CONFIG_PCI_MSI=y > # CONFIG_PCI_DEBUG is not set > CONFIG_HT_IRQ=y > CONFIG_ISA_DMA_API=y > # CONFIG_ISA is not set > # CONFIG_MCA is not set > # 
CONFIG_SCx200 is not set > > # > # PCCARD (PCMCIA/CardBus) support > # > # CONFIG_PCCARD is not set > # CONFIG_HOTPLUG_PCI is not set > > # > # Executable file formats > # > CONFIG_BINFMT_ELF=y > # CONFIG_BINFMT_AOUT is not set > # CONFIG_BINFMT_MISC is not set > > # > # Networking > # > CONFIG_NET=y > > # > # Networking options > # > CONFIG_PACKET=y > CONFIG_PACKET_MMAP=y > CONFIG_UNIX=y > # CONFIG_NET_KEY is not set > CONFIG_INET=y > # CONFIG_IP_MULTICAST is not set > # CONFIG_IP_ADVANCED_ROUTER is not set > CONFIG_IP_FIB_HASH=y > # CONFIG_IP_PNP is not set > # CONFIG_NET_IPIP is not set > # CONFIG_NET_IPGRE is not set > # CONFIG_ARPD is not set > # CONFIG_SYN_COOKIES is not set > # CONFIG_INET_AH is not set > # CONFIG_INET_ESP is not set > # CONFIG_INET_IPCOMP is not set > # CONFIG_INET_XFRM_TUNNEL is not set > # CONFIG_INET_TUNNEL is not set > # CONFIG_INET_XFRM_MODE_TRANSPORT is not set > # CONFIG_INET_XFRM_MODE_TUNNEL is not set > # CONFIG_INET_XFRM_MODE_BEET is not set > CONFIG_INET_DIAG=y > CONFIG_INET_TCP_DIAG=y > # CONFIG_TCP_CONG_ADVANCED is not set > CONFIG_TCP_CONG_CUBIC=y > CONFIG_DEFAULT_TCP_CONG="cubic" > # CONFIG_TCP_MD5SIG is not set > # CONFIG_IPV6 is not set > # CONFIG_INET6_XFRM_TUNNEL is not set > # CONFIG_INET6_TUNNEL is not set > # CONFIG_NETWORK_SECMARK is not set > # CONFIG_NETFILTER is not set > # CONFIG_IP_DCCP is not set > # CONFIG_IP_SCTP is not set > # CONFIG_TIPC is not set > # CONFIG_ATM is not set > # CONFIG_BRIDGE is not set > # CONFIG_VLAN_8021Q is not set > # CONFIG_DECNET is not set > # CONFIG_LLC2 is not set > # CONFIG_IPX is not set > # CONFIG_ATALK is not set > # CONFIG_X25 is not set > # CONFIG_LAPB is not set > # CONFIG_ECONET is not set > # CONFIG_WAN_ROUTER is not set > > # > # QoS and/or fair queueing > # > # CONFIG_NET_SCHED is not set > > # > # Network testing > # > # CONFIG_NET_PKTGEN is not set > # CONFIG_HAMRADIO is not set > # CONFIG_IRDA is not set > # CONFIG_BT is not set > # CONFIG_AF_RXRPC is not set > > # > # 
Wireless > # > # CONFIG_CFG80211 is not set > # CONFIG_WIRELESS_EXT is not set > # CONFIG_MAC80211 is not set > # CONFIG_IEEE80211 is not set > # CONFIG_RFKILL is not set > # CONFIG_NET_9P is not set > > # > # Device Drivers > # > > # > # Generic Driver Options > # > # CONFIG_STANDALONE is not set > # CONFIG_PREVENT_FIRMWARE_BUILD is not set > # CONFIG_FW_LOADER is not set > # CONFIG_DEBUG_DRIVER is not set > # CONFIG_DEBUG_DEVRES is not set > # CONFIG_SYS_HYPERVISOR is not set > # CONFIG_CONNECTOR is not set > # CONFIG_MTD is not set > # CONFIG_PARPORT is not set > CONFIG_PNP=y > # CONFIG_PNP_DEBUG is not set > > # > # Protocols > # > CONFIG_PNPACPI=y > CONFIG_BLK_DEV=y > # CONFIG_BLK_DEV_FD is not set > # CONFIG_BLK_CPQ_DA is not set > # CONFIG_BLK_CPQ_CISS_DA is not set > # CONFIG_BLK_DEV_DAC960 is not set > # CONFIG_BLK_DEV_UMEM is not set > # CONFIG_BLK_DEV_COW_COMMON is not set > CONFIG_BLK_DEV_LOOP=y > # CONFIG_BLK_DEV_CRYPTOLOOP is not set > # CONFIG_BLK_DEV_NBD is not set > # CONFIG_BLK_DEV_SX8 is not set > # CONFIG_BLK_DEV_UB is not set > CONFIG_BLK_DEV_RAM=y > CONFIG_BLK_DEV_RAM_COUNT=16 > CONFIG_BLK_DEV_RAM_SIZE=4096 > CONFIG_BLK_DEV_RAM_BLOCKSIZE=1024 > # CONFIG_CDROM_PKTCDVD is not set > # CONFIG_ATA_OVER_ETH is not set > # CONFIG_MISC_DEVICES is not set > CONFIG_IDE=y > CONFIG_IDE_MAX_HWIFS=4 > CONFIG_BLK_DEV_IDE=y > > # > # Please see Documentation/ide.txt for help/info on IDE drives > # > # CONFIG_BLK_DEV_IDE_SATA is not set > # CONFIG_BLK_DEV_HD_IDE is not set > CONFIG_BLK_DEV_IDEDISK=y > CONFIG_IDEDISK_MULTI_MODE=y > # CONFIG_BLK_DEV_IDECD is not set > # CONFIG_BLK_DEV_IDETAPE is not set > # CONFIG_BLK_DEV_IDEFLOPPY is not set > # CONFIG_BLK_DEV_IDESCSI is not set > # CONFIG_BLK_DEV_IDEACPI is not set > # CONFIG_IDE_TASK_IOCTL is not set > CONFIG_IDE_PROC_FS=y > > # > # IDE chipset support/bugfixes > # > CONFIG_IDE_GENERIC=y > # CONFIG_BLK_DEV_CMD640 is not set > # CONFIG_BLK_DEV_IDEPNP is not set > CONFIG_BLK_DEV_IDEPCI=y > 
CONFIG_IDEPCI_SHARE_IRQ=y > CONFIG_IDEPCI_PCIBUS_ORDER=y > # CONFIG_BLK_DEV_OFFBOARD is not set > CONFIG_BLK_DEV_GENERIC=y > # CONFIG_BLK_DEV_OPTI621 is not set > # CONFIG_BLK_DEV_RZ1000 is not set > CONFIG_BLK_DEV_IDEDMA_PCI=y > # CONFIG_BLK_DEV_IDEDMA_FORCED is not set > # CONFIG_IDEDMA_ONLYDISK is not set > # CONFIG_BLK_DEV_AEC62XX is not set > # CONFIG_BLK_DEV_ALI15X3 is not set > # CONFIG_BLK_DEV_AMD74XX is not set > # CONFIG_BLK_DEV_ATIIXP is not set > # CONFIG_BLK_DEV_CMD64X is not set > # CONFIG_BLK_DEV_TRIFLEX is not set > # CONFIG_BLK_DEV_CY82C693 is not set > # CONFIG_BLK_DEV_CS5520 is not set > # CONFIG_BLK_DEV_CS5530 is not set > # CONFIG_BLK_DEV_CS5535 is not set > # CONFIG_BLK_DEV_HPT34X is not set > # CONFIG_BLK_DEV_HPT366 is not set > # CONFIG_BLK_DEV_JMICRON is not set > # CONFIG_BLK_DEV_SC1200 is not set > CONFIG_BLK_DEV_PIIX=y > # CONFIG_BLK_DEV_IT8213 is not set > # CONFIG_BLK_DEV_IT821X is not set > # CONFIG_BLK_DEV_NS87415 is not set > # CONFIG_BLK_DEV_PDC202XX_OLD is not set > # CONFIG_BLK_DEV_PDC202XX_NEW is not set > # CONFIG_BLK_DEV_SVWKS is not set > # CONFIG_BLK_DEV_SIIMAGE is not set > # CONFIG_BLK_DEV_SIS5513 is not set > # CONFIG_BLK_DEV_SLC90E66 is not set > # CONFIG_BLK_DEV_TRM290 is not set > # CONFIG_BLK_DEV_VIA82CXXX is not set > # CONFIG_BLK_DEV_TC86C001 is not set > # CONFIG_IDE_ARM is not set > CONFIG_BLK_DEV_IDEDMA=y > # CONFIG_IDEDMA_IVB is not set > # CONFIG_BLK_DEV_HD is not set > > # > # SCSI device support > # > # CONFIG_RAID_ATTRS is not set > CONFIG_SCSI=y > CONFIG_SCSI_DMA=y > # CONFIG_SCSI_TGT is not set > # CONFIG_SCSI_NETLINK is not set > CONFIG_SCSI_PROC_FS=y > > # > # SCSI support type (disk, tape, CD-ROM) > # > CONFIG_BLK_DEV_SD=y > # CONFIG_CHR_DEV_ST is not set > # CONFIG_CHR_DEV_OSST is not set > # CONFIG_BLK_DEV_SR is not set > CONFIG_CHR_DEV_SG=y > # CONFIG_CHR_DEV_SCH is not set > > # > # Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs > # > # CONFIG_SCSI_MULTI_LUN is not set > # CONFIG_SCSI_CONSTANTS is not set > # CONFIG_SCSI_LOGGING is not set > # CONFIG_SCSI_SCAN_ASYNC is not set > > # > # SCSI Transports > # > # CONFIG_SCSI_SPI_ATTRS is not set > # CONFIG_SCSI_FC_ATTRS is not set > # CONFIG_SCSI_ISCSI_ATTRS is not set > # CONFIG_SCSI_SAS_LIBSAS is not set > CONFIG_SCSI_LOWLEVEL=y > # CONFIG_ISCSI_TCP is not set > # CONFIG_BLK_DEV_3W_XXXX_RAID is not set > CONFIG_SCSI_3W_9XXX=y > # CONFIG_SCSI_ACARD is not set > # CONFIG_SCSI_AACRAID is not set > # CONFIG_SCSI_AIC7XXX is not set > # CONFIG_SCSI_AIC7XXX_OLD is not set > # CONFIG_SCSI_AIC79XX is not set > # CONFIG_SCSI_AIC94XX is not set > # CONFIG_SCSI_DPT_I2O is not set > # CONFIG_SCSI_ADVANSYS is not set > # CONFIG_SCSI_ARCMSR is not set > # CONFIG_MEGARAID_NEWGEN is not set > # CONFIG_MEGARAID_LEGACY is not set > # CONFIG_MEGARAID_SAS is not set > # CONFIG_SCSI_HPTIOP is not set > # CONFIG_SCSI_BUSLOGIC is not set > # CONFIG_SCSI_DMX3191D is not set > # CONFIG_SCSI_EATA is not set > # CONFIG_SCSI_FUTURE_DOMAIN is not set > # CONFIG_SCSI_GDTH is not set > # CONFIG_SCSI_IPS is not set > # CONFIG_SCSI_INITIO is not set > # CONFIG_SCSI_INIA100 is not set > # CONFIG_SCSI_STEX is not set > # CONFIG_SCSI_SYM53C8XX_2 is not set > # CONFIG_SCSI_IPR is not set > # CONFIG_SCSI_QLOGIC_1280 is not set > # CONFIG_SCSI_QLA_FC is not set > # CONFIG_SCSI_QLA_ISCSI is not set > # CONFIG_SCSI_LPFC is not set > # CONFIG_SCSI_DC395x is not set > # CONFIG_SCSI_DC390T is not set > # CONFIG_SCSI_NSP32 is not set > # CONFIG_SCSI_DEBUG is not set > # CONFIG_SCSI_SRP is not set > CONFIG_ATA=y > # CONFIG_ATA_NONSTANDARD is not set > CONFIG_ATA_ACPI=y > CONFIG_SATA_AHCI=y > # CONFIG_SATA_SVW is not set > # CONFIG_ATA_PIIX is not set > # CONFIG_SATA_MV is not set > # CONFIG_SATA_NV is not set > # CONFIG_PDC_ADMA is not set > # CONFIG_SATA_QSTOR is not set > # CONFIG_SATA_PROMISE is not set > # CONFIG_SATA_SX4 is not set > # 
CONFIG_SATA_SIL is not set > # CONFIG_SATA_SIL24 is not set > # CONFIG_SATA_SIS is not set > # CONFIG_SATA_ULI is not set > # CONFIG_SATA_VIA is not set > # CONFIG_SATA_VITESSE is not set > # CONFIG_SATA_INIC162X is not set > # CONFIG_PATA_ALI is not set > # CONFIG_PATA_AMD is not set > # CONFIG_PATA_ARTOP is not set > # CONFIG_PATA_ATIIXP is not set > # CONFIG_PATA_CMD640_PCI is not set > # CONFIG_PATA_CMD64X is not set > # CONFIG_PATA_CS5520 is not set > # CONFIG_PATA_CS5530 is not set > # CONFIG_PATA_CS5535 is not set > # CONFIG_PATA_CYPRESS is not set > # CONFIG_PATA_EFAR is not set > # CONFIG_ATA_GENERIC is not set > # CONFIG_PATA_HPT366 is not set > # CONFIG_PATA_HPT37X is not set > # CONFIG_PATA_HPT3X2N is not set > # CONFIG_PATA_HPT3X3 is not set > # CONFIG_PATA_IT821X is not set > # CONFIG_PATA_IT8213 is not set > # CONFIG_PATA_JMICRON is not set > # CONFIG_PATA_TRIFLEX is not set > # CONFIG_PATA_MARVELL is not set > # CONFIG_PATA_MPIIX is not set > # CONFIG_PATA_OLDPIIX is not set > # CONFIG_PATA_NETCELL is not set > # CONFIG_PATA_NS87410 is not set > # CONFIG_PATA_OPTI is not set > # CONFIG_PATA_OPTIDMA is not set > # CONFIG_PATA_PDC_OLD is not set > # CONFIG_PATA_RADISYS is not set > # CONFIG_PATA_RZ1000 is not set > # CONFIG_PATA_SC1200 is not set > # CONFIG_PATA_SERVERWORKS is not set > # CONFIG_PATA_PDC2027X is not set > # CONFIG_PATA_SIL680 is not set > # CONFIG_PATA_SIS is not set > # CONFIG_PATA_VIA is not set > # CONFIG_PATA_WINBOND is not set > # CONFIG_PATA_PLATFORM is not set > CONFIG_MD=y > CONFIG_BLK_DEV_MD=y > CONFIG_MD_LINEAR=y > CONFIG_MD_RAID0=y > CONFIG_MD_RAID1=y > CONFIG_MD_RAID10=y > CONFIG_MD_RAID456=y > CONFIG_MD_RAID5_RESHAPE=y > # CONFIG_MD_MULTIPATH is not set > # CONFIG_MD_FAULTY is not set > # CONFIG_BLK_DEV_DM is not set > > # > # Fusion MPT device support > # > # CONFIG_FUSION is not set > # CONFIG_FUSION_SPI is not set > # CONFIG_FUSION_FC is not set > # CONFIG_FUSION_SAS is not set > > # > # IEEE 1394 (FireWire) support > 
# > # CONFIG_FIREWIRE is not set > # CONFIG_IEEE1394 is not set > # CONFIG_I2O is not set > # CONFIG_MACINTOSH_DRIVERS is not set > CONFIG_NETDEVICES=y > # CONFIG_NETDEVICES_MULTIQUEUE is not set > # CONFIG_DUMMY is not set > CONFIG_BONDING=y > # CONFIG_MACVLAN is not set > # CONFIG_EQUALIZER is not set > # CONFIG_TUN is not set > # CONFIG_NET_SB1000 is not set > # CONFIG_ARCNET is not set > # CONFIG_NET_ETHERNET is not set > CONFIG_NETDEV_1000=y > # CONFIG_ACENIC is not set > # CONFIG_DL2K is not set > CONFIG_E1000=y > CONFIG_E1000_NAPI=y > # CONFIG_E1000_DISABLE_PACKET_SPLIT is not set > # CONFIG_NS83820 is not set > # CONFIG_HAMACHI is not set > # CONFIG_YELLOWFIN is not set > # CONFIG_R8169 is not set > # CONFIG_SIS190 is not set > # CONFIG_SKGE is not set > # CONFIG_SKY2 is not set > # CONFIG_SK98LIN is not set > # CONFIG_VIA_VELOCITY is not set > # CONFIG_TIGON3 is not set > # CONFIG_BNX2 is not set > # CONFIG_QLA3XXX is not set > # CONFIG_ATL1 is not set > # CONFIG_NETDEV_10000 is not set > # CONFIG_TR is not set > > # > # Wireless LAN > # > # CONFIG_WLAN_PRE80211 is not set > # CONFIG_WLAN_80211 is not set > > # > # USB Network Adapters > # > # CONFIG_USB_CATC is not set > # CONFIG_USB_KAWETH is not set > # CONFIG_USB_PEGASUS is not set > # CONFIG_USB_RTL8150 is not set > # CONFIG_USB_USBNET_MII is not set > # CONFIG_USB_USBNET is not set > # CONFIG_WAN is not set > # CONFIG_FDDI is not set > # CONFIG_HIPPI is not set > # CONFIG_PPP is not set > # CONFIG_SLIP is not set > # CONFIG_NET_FC is not set > # CONFIG_SHAPER is not set > # CONFIG_NETCONSOLE is not set > # CONFIG_NETPOLL is not set > # CONFIG_NET_POLL_CONTROLLER is not set > # CONFIG_ISDN is not set > # CONFIG_PHONE is not set > > # > # Input device support > # > CONFIG_INPUT=y > # CONFIG_INPUT_FF_MEMLESS is not set > # CONFIG_INPUT_POLLDEV is not set > > # > # Userland interfaces > # > CONFIG_INPUT_MOUSEDEV=y > # CONFIG_INPUT_MOUSEDEV_PSAUX is not set > CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024 > 
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768 > # CONFIG_INPUT_JOYDEV is not set > # CONFIG_INPUT_TSDEV is not set > # CONFIG_INPUT_EVDEV is not set > # CONFIG_INPUT_EVBUG is not set > > # > # Input Device Drivers > # > CONFIG_INPUT_KEYBOARD=y > CONFIG_KEYBOARD_ATKBD=y > # CONFIG_KEYBOARD_SUNKBD is not set > # CONFIG_KEYBOARD_LKKBD is not set > # CONFIG_KEYBOARD_XTKBD is not set > # CONFIG_KEYBOARD_NEWTON is not set > # CONFIG_KEYBOARD_STOWAWAY is not set > # CONFIG_INPUT_MOUSE is not set > # CONFIG_INPUT_JOYSTICK is not set > # CONFIG_INPUT_TABLET is not set > # CONFIG_INPUT_TOUCHSCREEN is not set > # CONFIG_INPUT_MISC is not set > > # > # Hardware I/O ports > # > CONFIG_SERIO=y > CONFIG_SERIO_I8042=y > # CONFIG_SERIO_SERPORT is not set > # CONFIG_SERIO_CT82C710 is not set > # CONFIG_SERIO_PCIPS2 is not set > CONFIG_SERIO_LIBPS2=y > # CONFIG_SERIO_RAW is not set > # CONFIG_GAMEPORT is not set > > # > # Character devices > # > CONFIG_VT=y > CONFIG_VT_CONSOLE=y > CONFIG_HW_CONSOLE=y > # CONFIG_VT_HW_CONSOLE_BINDING is not set > # CONFIG_SERIAL_NONSTANDARD is not set > > # > # Serial drivers > # > CONFIG_SERIAL_8250=y > CONFIG_SERIAL_8250_CONSOLE=y > CONFIG_FIX_EARLYCON_MEM=y > CONFIG_SERIAL_8250_PCI=y > CONFIG_SERIAL_8250_PNP=y > CONFIG_SERIAL_8250_NR_UARTS=4 > CONFIG_SERIAL_8250_RUNTIME_UARTS=4 > CONFIG_SERIAL_8250_EXTENDED=y > # CONFIG_SERIAL_8250_MANY_PORTS is not set > CONFIG_SERIAL_8250_SHARE_IRQ=y > # CONFIG_SERIAL_8250_DETECT_IRQ is not set > # CONFIG_SERIAL_8250_RSA is not set > > # > # Non-8250 serial port support > # > CONFIG_SERIAL_CORE=y > CONFIG_SERIAL_CORE_CONSOLE=y > # CONFIG_SERIAL_JSM is not set > CONFIG_UNIX98_PTYS=y > CONFIG_LEGACY_PTYS=y > CONFIG_LEGACY_PTY_COUNT=256 > # CONFIG_IPMI_HANDLER is not set > CONFIG_WATCHDOG=y > # CONFIG_WATCHDOG_NOWAYOUT is not set > > # > # Watchdog Device Drivers > # > CONFIG_SOFT_WATCHDOG=y > # CONFIG_ACQUIRE_WDT is not set > # CONFIG_ADVANTECH_WDT is not set > # CONFIG_ALIM1535_WDT is not set > # CONFIG_ALIM7101_WDT is not 
set > # CONFIG_SC520_WDT is not set > # CONFIG_EUROTECH_WDT is not set > # CONFIG_IB700_WDT is not set > # CONFIG_IBMASR is not set > # CONFIG_WAFER_WDT is not set > # CONFIG_I6300ESB_WDT is not set > # CONFIG_ITCO_WDT is not set > # CONFIG_SC1200_WDT is not set > # CONFIG_PC87413_WDT is not set > # CONFIG_60XX_WDT is not set > # CONFIG_SBC8360_WDT is not set > # CONFIG_CPU5_WDT is not set > # CONFIG_SMSC37B787_WDT is not set > # CONFIG_W83627HF_WDT is not set > # CONFIG_W83697HF_WDT is not set > # CONFIG_W83877F_WDT is not set > # CONFIG_W83977F_WDT is not set > # CONFIG_MACHZ_WDT is not set > # CONFIG_SBC_EPX_C3_WATCHDOG is not set > > # > # PCI-based Watchdog Cards > # > # CONFIG_PCIPCWATCHDOG is not set > # CONFIG_WDTPCI is not set > > # > # USB-based Watchdog Cards > # > # CONFIG_USBPCWATCHDOG is not set > CONFIG_HW_RANDOM=y > CONFIG_HW_RANDOM_INTEL=y > # CONFIG_HW_RANDOM_AMD is not set > # CONFIG_HW_RANDOM_GEODE is not set > # CONFIG_HW_RANDOM_VIA is not set > # CONFIG_NVRAM is not set > CONFIG_RTC=y > # CONFIG_R3964 is not set > # CONFIG_APPLICOM is not set > # CONFIG_SONYPI is not set > # CONFIG_AGP is not set > # CONFIG_DRM is not set > # CONFIG_MWAVE is not set > # CONFIG_PC8736x_GPIO is not set > # CONFIG_NSC_GPIO is not set > # CONFIG_CS5535_GPIO is not set > # CONFIG_RAW_DRIVER is not set > # CONFIG_HPET is not set > CONFIG_HANGCHECK_TIMER=y > # CONFIG_TCG_TPM is not set > # CONFIG_TELCLOCK is not set > CONFIG_DEVPORT=y > CONFIG_I2C=y > CONFIG_I2C_BOARDINFO=y > CONFIG_I2C_CHARDEV=y > > # > # I2C Algorithms > # > # CONFIG_I2C_ALGOBIT is not set > # CONFIG_I2C_ALGOPCF is not set > # CONFIG_I2C_ALGOPCA is not set > > # > # I2C Hardware Bus support > # > # CONFIG_I2C_ALI1535 is not set > # CONFIG_I2C_ALI1563 is not set > # CONFIG_I2C_ALI15X3 is not set > # CONFIG_I2C_AMD756 is not set > # CONFIG_I2C_AMD8111 is not set > CONFIG_I2C_I801=y > # CONFIG_I2C_I810 is not set > # CONFIG_I2C_PIIX4 is not set > # CONFIG_I2C_NFORCE2 is not set > # CONFIG_I2C_OCORES 
is not set > # CONFIG_I2C_PARPORT_LIGHT is not set > # CONFIG_I2C_PROSAVAGE is not set > # CONFIG_I2C_SAVAGE4 is not set > # CONFIG_I2C_SIMTEC is not set > # CONFIG_SCx200_ACB is not set > # CONFIG_I2C_SIS5595 is not set > # CONFIG_I2C_SIS630 is not set > # CONFIG_I2C_SIS96X is not set > # CONFIG_I2C_TAOS_EVM is not set > # CONFIG_I2C_TINY_USB is not set > # CONFIG_I2C_VIA is not set > # CONFIG_I2C_VIAPRO is not set > # CONFIG_I2C_VOODOO3 is not set > > # > # Miscellaneous I2C Chip support > # > # CONFIG_SENSORS_DS1337 is not set > # CONFIG_SENSORS_DS1374 is not set > # CONFIG_DS1682 is not set > CONFIG_SENSORS_EEPROM=y > # CONFIG_SENSORS_PCF8574 is not set > # CONFIG_SENSORS_PCA9539 is not set > # CONFIG_SENSORS_PCF8591 is not set > # CONFIG_SENSORS_MAX6875 is not set > # CONFIG_SENSORS_TSL2550 is not set > # CONFIG_I2C_DEBUG_CORE is not set > # CONFIG_I2C_DEBUG_ALGO is not set > # CONFIG_I2C_DEBUG_BUS is not set > # CONFIG_I2C_DEBUG_CHIP is not set > > # > # SPI support > # > # CONFIG_SPI is not set > # CONFIG_SPI_MASTER is not set > # CONFIG_W1 is not set > # CONFIG_POWER_SUPPLY is not set > CONFIG_HWMON=y > CONFIG_HWMON_VID=y > # CONFIG_SENSORS_ABITUGURU is not set > # CONFIG_SENSORS_ABITUGURU3 is not set > # CONFIG_SENSORS_AD7418 is not set > # CONFIG_SENSORS_ADM1021 is not set > # CONFIG_SENSORS_ADM1025 is not set > # CONFIG_SENSORS_ADM1026 is not set > # CONFIG_SENSORS_ADM1029 is not set > # CONFIG_SENSORS_ADM1031 is not set > # CONFIG_SENSORS_ADM9240 is not set > # CONFIG_SENSORS_K8TEMP is not set > # CONFIG_SENSORS_ASB100 is not set > # CONFIG_SENSORS_ATXP1 is not set > # CONFIG_SENSORS_DS1621 is not set > # CONFIG_SENSORS_F71805F is not set > # CONFIG_SENSORS_FSCHER is not set > # CONFIG_SENSORS_FSCPOS is not set > # CONFIG_SENSORS_GL518SM is not set > # CONFIG_SENSORS_GL520SM is not set > # CONFIG_SENSORS_CORETEMP is not set > # CONFIG_SENSORS_IT87 is not set > # CONFIG_SENSORS_LM63 is not set > # CONFIG_SENSORS_LM75 is not set > # CONFIG_SENSORS_LM77 is 
not set > # CONFIG_SENSORS_LM78 is not set > # CONFIG_SENSORS_LM80 is not set > # CONFIG_SENSORS_LM83 is not set > # CONFIG_SENSORS_LM85 is not set > # CONFIG_SENSORS_LM87 is not set > # CONFIG_SENSORS_LM90 is not set > # CONFIG_SENSORS_LM92 is not set > CONFIG_SENSORS_LM93=y > # CONFIG_SENSORS_MAX1619 is not set > # CONFIG_SENSORS_MAX6650 is not set > # CONFIG_SENSORS_PC87360 is not set > CONFIG_SENSORS_PC87427=y > # CONFIG_SENSORS_SIS5595 is not set > # CONFIG_SENSORS_DME1737 is not set > # CONFIG_SENSORS_SMSC47M1 is not set > # CONFIG_SENSORS_SMSC47M192 is not set > # CONFIG_SENSORS_SMSC47B397 is not set > # CONFIG_SENSORS_THMC50 is not set > # CONFIG_SENSORS_VIA686A is not set > # CONFIG_SENSORS_VT1211 is not set > # CONFIG_SENSORS_VT8231 is not set > # CONFIG_SENSORS_W83781D is not set > # CONFIG_SENSORS_W83791D is not set > # CONFIG_SENSORS_W83792D is not set > CONFIG_SENSORS_W83793=y > # CONFIG_SENSORS_W83L785TS is not set > CONFIG_SENSORS_W83627HF=y > CONFIG_SENSORS_W83627EHF=y > # CONFIG_SENSORS_HDAPS is not set > # CONFIG_SENSORS_APPLESMC is not set > # CONFIG_HWMON_DEBUG_CHIP is not set > > # > # Multifunction device drivers > # > # CONFIG_MFD_SM501 is not set > > # > # Multimedia devices > # > # CONFIG_VIDEO_DEV is not set > # CONFIG_DVB_CORE is not set > # CONFIG_DAB is not set > > # > # Graphics support > # > # CONFIG_BACKLIGHT_LCD_SUPPORT is not set > > # > # Display device support > # > # CONFIG_DISPLAY_SUPPORT is not set > # CONFIG_VGASTATE is not set > # CONFIG_VIDEO_OUTPUT_CONTROL is not set > # CONFIG_FB is not set > > # > # Console display driver support > # > CONFIG_VGA_CONSOLE=y > CONFIG_VGACON_SOFT_SCROLLBACK=y > CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=128 > # CONFIG_VIDEO_SELECT is not set > CONFIG_DUMMY_CONSOLE=y > > # > # Sound > # > # CONFIG_SOUND is not set > CONFIG_HID_SUPPORT=y > CONFIG_HID=y > # CONFIG_HID_DEBUG is not set > > # > # USB Input Devices > # > CONFIG_USB_HID=y > # CONFIG_USB_HIDINPUT_POWERBOOK is not set > # CONFIG_HID_FF is 
not set > # CONFIG_USB_HIDDEV is not set > CONFIG_USB_SUPPORT=y > CONFIG_USB_ARCH_HAS_HCD=y > CONFIG_USB_ARCH_HAS_OHCI=y > CONFIG_USB_ARCH_HAS_EHCI=y > CONFIG_USB=y > # CONFIG_USB_DEBUG is not set > > # > # Miscellaneous USB options > # > CONFIG_USB_DEVICEFS=y > # CONFIG_USB_DEVICE_CLASS is not set > # CONFIG_USB_DYNAMIC_MINORS is not set > # CONFIG_USB_SUSPEND is not set > # CONFIG_USB_PERSIST is not set > # CONFIG_USB_OTG is not set > > # > # USB Host Controller Drivers > # > CONFIG_USB_EHCI_HCD=y > CONFIG_USB_EHCI_SPLIT_ISO=y > CONFIG_USB_EHCI_ROOT_HUB_TT=y > CONFIG_USB_EHCI_TT_NEWSCHED=y > # CONFIG_USB_ISP116X_HCD is not set > # CONFIG_USB_OHCI_HCD is not set > CONFIG_USB_UHCI_HCD=y > # CONFIG_USB_SL811_HCD is not set > # CONFIG_USB_R8A66597_HCD is not set > > # > # USB Device Class drivers > # > # CONFIG_USB_ACM is not set > # CONFIG_USB_PRINTER is not set > > # > # NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' > # > > # > # may also be needed; see USB_STORAGE Help for more information > # > CONFIG_USB_STORAGE=y > # CONFIG_USB_STORAGE_DEBUG is not set > CONFIG_USB_STORAGE_DATAFAB=y > CONFIG_USB_STORAGE_FREECOM=y > CONFIG_USB_STORAGE_ISD200=y > CONFIG_USB_STORAGE_DPCM=y > CONFIG_USB_STORAGE_USBAT=y > CONFIG_USB_STORAGE_SDDR09=y > CONFIG_USB_STORAGE_SDDR55=y > CONFIG_USB_STORAGE_JUMPSHOT=y > CONFIG_USB_STORAGE_ALAUDA=y > # CONFIG_USB_STORAGE_KARMA is not set > # CONFIG_USB_LIBUSUAL is not set > > # > # USB Imaging devices > # > # CONFIG_USB_MDC800 is not set > # CONFIG_USB_MICROTEK is not set > # CONFIG_USB_MON is not set > > # > # USB port drivers > # > > # > # USB Serial Converter support > # > # CONFIG_USB_SERIAL is not set > > # > # USB Miscellaneous drivers > # > # CONFIG_USB_EMI62 is not set > # CONFIG_USB_EMI26 is not set > # CONFIG_USB_ADUTUX is not set > # CONFIG_USB_AUERSWALD is not set > # CONFIG_USB_RIO500 is not set > # CONFIG_USB_LEGOTOWER is not set > # CONFIG_USB_LCD is not set > # CONFIG_USB_BERRY_CHARGE is not set > # CONFIG_USB_LED 
is not set
> # CONFIG_USB_CYPRESS_CY7C63 is not set
> # CONFIG_USB_CYTHERM is not set
> # CONFIG_USB_PHIDGET is not set
> # CONFIG_USB_IDMOUSE is not set
> # CONFIG_USB_FTDI_ELAN is not set
> # CONFIG_USB_APPLEDISPLAY is not set
> # CONFIG_USB_SISUSBVGA is not set
> # CONFIG_USB_LD is not set
> # CONFIG_USB_TRANCEVIBRATOR is not set
> # CONFIG_USB_IOWARRIOR is not set
> # CONFIG_USB_TEST is not set
>
> #
> # USB DSL modem support
> #
>
> #
> # USB Gadget Support
> #
> # CONFIG_USB_GADGET is not set
> # CONFIG_MMC is not set
> # CONFIG_NEW_LEDS is not set
> # CONFIG_INFINIBAND is not set
> # CONFIG_EDAC is not set
> # CONFIG_RTC_CLASS is not set
>
> #
> # DMA Engine support
> #
> # CONFIG_DMA_ENGINE is not set
>
> #
> # DMA Clients
> #
>
> #
> # DMA Devices
> #
> # CONFIG_VIRTUALIZATION is not set
>
> #
> # Userspace I/O
> #
> # CONFIG_UIO is not set
>
> #
> # File systems
> #
> CONFIG_EXT2_FS=y
> # CONFIG_EXT2_FS_XATTR is not set
> # CONFIG_EXT2_FS_XIP is not set
> # CONFIG_EXT3_FS is not set
> # CONFIG_EXT4DEV_FS is not set
> # CONFIG_REISERFS_FS is not set
> # CONFIG_JFS_FS is not set
> # CONFIG_FS_POSIX_ACL is not set
> CONFIG_XFS_FS=y
> # CONFIG_XFS_QUOTA is not set
> # CONFIG_XFS_SECURITY is not set
> # CONFIG_XFS_POSIX_ACL is not set
> CONFIG_XFS_RT=y
> # CONFIG_GFS2_FS is not set
> # CONFIG_OCFS2_FS is not set
> # CONFIG_MINIX_FS is not set
> # CONFIG_ROMFS_FS is not set
> CONFIG_INOTIFY=y
> CONFIG_INOTIFY_USER=y
> # CONFIG_QUOTA is not set
> # CONFIG_DNOTIFY is not set
> # CONFIG_AUTOFS_FS is not set
> # CONFIG_AUTOFS4_FS is not set
> # CONFIG_FUSE_FS is not set
>
> #
> # CD-ROM/DVD Filesystems
> #
> # CONFIG_ISO9660_FS is not set
> # CONFIG_UDF_FS is not set
>
> #
> # DOS/FAT/NT Filesystems
> #
> CONFIG_FAT_FS=y
> CONFIG_MSDOS_FS=y
> CONFIG_VFAT_FS=y
> CONFIG_FAT_DEFAULT_CODEPAGE=437
> CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
> # CONFIG_NTFS_FS is not set
>
> #
> # Pseudo filesystems
> #
> CONFIG_PROC_FS=y
> # CONFIG_PROC_KCORE is not set
> CONFIG_PROC_SYSCTL=y
> CONFIG_SYSFS=y
> CONFIG_TMPFS=y
> # CONFIG_TMPFS_POSIX_ACL is not set
> # CONFIG_HUGETLBFS is not set
> # CONFIG_HUGETLB_PAGE is not set
> CONFIG_RAMFS=y
> # CONFIG_CONFIGFS_FS is not set
>
> #
> # Miscellaneous filesystems
> #
> # CONFIG_ADFS_FS is not set
> # CONFIG_AFFS_FS is not set
> # CONFIG_HFS_FS is not set
> # CONFIG_HFSPLUS_FS is not set
> # CONFIG_BEFS_FS is not set
> # CONFIG_BFS_FS is not set
> # CONFIG_EFS_FS is not set
> # CONFIG_CRAMFS is not set
> # CONFIG_VXFS_FS is not set
> # CONFIG_HPFS_FS is not set
> # CONFIG_QNX4FS_FS is not set
> # CONFIG_SYSV_FS is not set
> # CONFIG_UFS_FS is not set
>
> #
> # Network File Systems
> #
> CONFIG_NFS_FS=y
> CONFIG_NFS_V3=y
> # CONFIG_NFS_V3_ACL is not set
> # CONFIG_NFS_V4 is not set
> # CONFIG_NFS_DIRECTIO is not set
> CONFIG_NFSD=y
> CONFIG_NFSD_V3=y
> # CONFIG_NFSD_V3_ACL is not set
> # CONFIG_NFSD_V4 is not set
> # CONFIG_NFSD_TCP is not set
> CONFIG_LOCKD=y
> CONFIG_LOCKD_V4=y
> CONFIG_EXPORTFS=y
> CONFIG_NFS_COMMON=y
> CONFIG_SUNRPC=y
> # CONFIG_SUNRPC_BIND34 is not set
> # CONFIG_RPCSEC_GSS_KRB5 is not set
> # CONFIG_RPCSEC_GSS_SPKM3 is not set
> # CONFIG_SMB_FS is not set
> # CONFIG_CIFS is not set
> # CONFIG_NCP_FS is not set
> # CONFIG_CODA_FS is not set
> # CONFIG_AFS_FS is not set
>
> #
> # Partition Types
> #
> # CONFIG_PARTITION_ADVANCED is not set
> CONFIG_MSDOS_PARTITION=y
>
> #
> # Native Language Support
> #
> CONFIG_NLS=y
> CONFIG_NLS_DEFAULT="iso8859-1"
> CONFIG_NLS_CODEPAGE_437=y
> # CONFIG_NLS_CODEPAGE_737 is not set
> # CONFIG_NLS_CODEPAGE_775 is not set
> CONFIG_NLS_CODEPAGE_850=y
> CONFIG_NLS_CODEPAGE_852=y
> # CONFIG_NLS_CODEPAGE_855 is not set
> # CONFIG_NLS_CODEPAGE_857 is not set
> # CONFIG_NLS_CODEPAGE_860 is not set
> # CONFIG_NLS_CODEPAGE_861 is not set
> # CONFIG_NLS_CODEPAGE_862 is not set
> # CONFIG_NLS_CODEPAGE_863 is not set
> # CONFIG_NLS_CODEPAGE_864 is not set
> # CONFIG_NLS_CODEPAGE_865 is not set
> # CONFIG_NLS_CODEPAGE_866 is not set
> # CONFIG_NLS_CODEPAGE_869 is not set
> # CONFIG_NLS_CODEPAGE_936 is not set
> # CONFIG_NLS_CODEPAGE_950 is not set
> # CONFIG_NLS_CODEPAGE_932 is not set
> # CONFIG_NLS_CODEPAGE_949 is not set
> # CONFIG_NLS_CODEPAGE_874 is not set
> # CONFIG_NLS_ISO8859_8 is not set
> # CONFIG_NLS_CODEPAGE_1250 is not set
> # CONFIG_NLS_CODEPAGE_1251 is not set
> CONFIG_NLS_ASCII=y
> CONFIG_NLS_ISO8859_1=y
> CONFIG_NLS_ISO8859_2=y
> # CONFIG_NLS_ISO8859_3 is not set
> # CONFIG_NLS_ISO8859_4 is not set
> # CONFIG_NLS_ISO8859_5 is not set
> # CONFIG_NLS_ISO8859_6 is not set
> # CONFIG_NLS_ISO8859_7 is not set
> # CONFIG_NLS_ISO8859_9 is not set
> # CONFIG_NLS_ISO8859_13 is not set
> # CONFIG_NLS_ISO8859_14 is not set
> CONFIG_NLS_ISO8859_15=y
> # CONFIG_NLS_KOI8_R is not set
> # CONFIG_NLS_KOI8_U is not set
> CONFIG_NLS_UTF8=y
>
> #
> # Distributed Lock Manager
> #
> # CONFIG_DLM is not set
> # CONFIG_INSTRUMENTATION is not set
>
> #
> # Kernel hacking
> #
> CONFIG_TRACE_IRQFLAGS_SUPPORT=y
> # CONFIG_PRINTK_TIME is not set
> # CONFIG_ENABLE_MUST_CHECK is not set
> CONFIG_MAGIC_SYSRQ=y
> # CONFIG_UNUSED_SYMBOLS is not set
> # CONFIG_DEBUG_FS is not set
> # CONFIG_HEADERS_CHECK is not set
> CONFIG_DEBUG_KERNEL=y
> # CONFIG_DEBUG_SHIRQ is not set
> CONFIG_DETECT_SOFTLOCKUP=y
> # CONFIG_SCHED_DEBUG is not set
> # CONFIG_SCHEDSTATS is not set
> # CONFIG_TIMER_STATS is not set
> # CONFIG_DEBUG_SLAB is not set
> CONFIG_DEBUG_RT_MUTEXES=y
> CONFIG_DEBUG_PI_LIST=y
> # CONFIG_RT_MUTEX_TESTER is not set
> # CONFIG_DEBUG_SPINLOCK is not set
> # CONFIG_DEBUG_MUTEXES is not set
> # CONFIG_DEBUG_LOCK_ALLOC is not set
> # CONFIG_PROVE_LOCKING is not set
> # CONFIG_LOCK_STAT is not set
> # CONFIG_DEBUG_SPINLOCK_SLEEP is not set
> # CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
> # CONFIG_DEBUG_KOBJECT is not set
> # CONFIG_DEBUG_HIGHMEM is not set
> # CONFIG_DEBUG_BUGVERBOSE is not set
> CONFIG_DEBUG_INFO=y
> # CONFIG_DEBUG_VM is not set
> # CONFIG_DEBUG_LIST is not set
> CONFIG_FRAME_POINTER=y
> CONFIG_FORCED_INLINING=y
> # CONFIG_FAULT_INJECTION is not set
> CONFIG_EARLY_PRINTK=y
> CONFIG_DEBUG_STACKOVERFLOW=y
> # CONFIG_DEBUG_STACK_USAGE is not set
> # CONFIG_DEBUG_PAGEALLOC is not set
> # CONFIG_DEBUG_RODATA is not set
> # CONFIG_4KSTACKS is not set
> CONFIG_X86_FIND_SMP_CONFIG=y
> CONFIG_X86_MPPARSE=y
> CONFIG_DOUBLEFAULT=y
> CONFIG_KDB=y
> # CONFIG_KDB_MODULES is not set
> # CONFIG_KDB_OFF is not set
> CONFIG_KDB_CONTINUE_CATASTROPHIC=0
>
> #
> # Security options
> #
> # CONFIG_KEYS is not set
> # CONFIG_SECURITY is not set
> CONFIG_XOR_BLOCKS=y
> CONFIG_ASYNC_CORE=y
> CONFIG_ASYNC_MEMCPY=y
> CONFIG_ASYNC_XOR=y
> # CONFIG_CRYPTO is not set
>
> #
> # Library routines
> #
> # CONFIG_CRC_CCITT is not set
> # CONFIG_CRC16 is not set
> # CONFIG_CRC_ITU_T is not set
> # CONFIG_CRC32 is not set
> # CONFIG_CRC7 is not set
> # CONFIG_LIBCRC32C is not set
> CONFIG_PLIST=y
> CONFIG_HAS_IOMEM=y
> CONFIG_HAS_IOPORT=y
> CONFIG_HAS_DMA=y
> CONFIG_GENERIC_HARDIRQS=y
> CONFIG_GENERIC_IRQ_PROBE=y
> CONFIG_GENERIC_PENDING_IRQ=y
> CONFIG_X86_SMP=y
> CONFIG_X86_HT=y
> CONFIG_X86_BIOS_REBOOT=y
> CONFIG_X86_TRAMPOLINE=y
> CONFIG_KTIME_SCALAR=y

--
Mark Goodwin markgw@sgi.com
Engineering Manager for XFS and PCP Phone: +61-3-99631937
SGI Australian Software Group Cell: +61-4-18969583
-------------------------------------------------------------

From owner-xfs@oss.sgi.com Fri Mar 7 01:13:09 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 07 Mar 2008 01:13:29 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=BAYES_00,J_CHICKENPOX_24, J_CHICKENPOX_53 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m279D6P7025897 for ; Fri, 7 Mar 2008 01:13:09 -0800 X-ASG-Debug-ID: 1204881214-690f005d0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from tyo201.gate.nec.co.jp (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id EB48E124F6BE for ; Fri, 7 Mar 2008 01:13:35 -0800 (PST) Received: from tyo201.gate.nec.co.jp (TYO201.gate.nec.co.jp [202.32.8.193]) by cuda.sgi.com with ESMTP id hVgmQuiuU6KOKQ4N for ; Fri, 07 Mar 2008 01:13:35 -0800 (PST) Received: from mailgate3.nec.co.jp (mailgate54B.nec.co.jp [10.7.69.195]) by tyo201.gate.nec.co.jp (8.13.8/8.13.4) with ESMTP id m279DWHf006590; Fri, 7 Mar 2008 18:13:32 +0900 (JST) Received: (from root@localhost) by mailgate3.nec.co.jp (8.11.7/3.7W-MAILGATE-NEC) id m279DW309098; Fri, 7 Mar
2008 18:13:32 +0900 (JST) Received: from shoin.jp.nec.com (shoin.jp.nec.com [10.26.220.3]) by mailsv.nec.co.jp (8.13.8/8.13.4) with ESMTP id m279DWUE011339; Fri, 7 Mar 2008 18:13:32 +0900 (JST) Received: from TNESB07336 ([10.64.168.65] [10.64.168.65]) by mail.jp.nec.com with ESMTP; Fri, 7 Mar 2008 18:13:32 +0900 To: linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org, xfs@oss.sgi.com, dm-devel@redhat.com Cc: "linux-kernel@vger.kernel.org" X-ASG-Orig-Subj: [RFC] freeze feature ver 1.0 Subject: [RFC] freeze feature ver 1.0 In-reply-to: <20080219202706t-sato@mail.jp.nec.com> References: <20080219202706t-sato@mail.jp.nec.com> Message-Id: <20080307181331t-sato@mail.jp.nec.com> Mime-Version: 1.0 X-Mailer: WeMail32[2.51] ID:1K0086 From: Takashi Sato Date: Fri, 7 Mar 2008 18:13:31 +0900 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Barracuda-Connect: TYO201.gate.nec.co.jp[202.32.8.193] X-Barracuda-Start-Time: 1204881215 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44123 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14793 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: t-sato@yk.jp.nec.com Precedence: bulk X-list: xfs Hi, I have re-based my freeze patch from linux-2.6.25-rc3 to linux-2.6.25-rc4. There is no functional change from the previous version. All of comments from ML have already been reflected in this patch. The ioctls for the freeze feature are below. 
o Freeze the filesystem
  int ioctl(int fd, int FIFREEZE, long *timeval)
  fd: the file descriptor of the mountpoint
  FIFREEZE: request code for the freeze
  timeval: the timeout period in seconds. If it is 0 or 1, no timeout
  is set. The special case of "1" is implemented to keep compatibility
  with existing XFS applications.
  Return value: 0 if the operation succeeds; otherwise -1

o Reset the timeout period
  This is useful for an application that wants to control the timeout
  more accurately. For example, the freezer resets the timeout to
  10 seconds every 5 seconds. With this approach, even if the freezer
  deadlocks by accessing the frozen filesystem, the deadlock is
  resolved by the 10-second timeout, and the freezer can recognize
  that at its next attempt to reset the timeout.
  int ioctl(int fd, int FIFREEZE_RESET_TIMEOUT, long *timeval)
  fd: the file descriptor of the mountpoint
  FIFREEZE_RESET_TIMEOUT: request code for resetting the timeout period
  timeval: the new timeout period in seconds
  Return value: 0 if the operation succeeds; otherwise -1
  Error number: if the filesystem has already been unfrozen, errno is
  set to EINVAL.

o Unfreeze the filesystem
  int ioctl(int fd, int FITHAW, long *timeval)
  fd: the file descriptor of the mountpoint
  FITHAW: request code for unfreeze
  timeval: ignored
  Return value: 0 if the operation succeeds; otherwise -1

Any comments are very welcome.
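As a concrete illustration of the calling sequence described above (freeze with a timeout, periodically re-arm it, then thaw), here is a minimal userspace sketch. It is not part of the patch: the request codes below are stand-in values (the patch defines FIFREEZE as _IOWR('X', 119, int), FITHAW as _IOWR('X', 120, int), and FIFREEZE_RESET_TIMEOUT as _IO(0x00, 3)), and the real ioctl(2) call is hidden behind a function pointer so the keepalive logic can be shown, and exercised, without a patched kernel or CAP_SYS_ADMIN.

```c
#include <assert.h>

/* An ioctl-shaped callback so the control flow can be exercised
 * without a patched kernel; a real freezer would wrap ioctl(2). */
typedef int (*freeze_ioctl_fn)(int fd, unsigned long req, long *arg);

/* Stand-in request codes; see the real definitions in the patch's
 * include/linux/fs.h hunk. */
enum { REQ_FIFREEZE = 1, REQ_FITHAW = 2, REQ_FIFREEZE_RESET = 3 };

/*
 * Freeze the filesystem behind `fd` with a watchdog timeout, perform
 * `rounds` units of snapshot work, re-arming the timeout before each
 * unit, then thaw. A healthy freezer keeps the fs frozen; a hung one
 * is thawed by the kernel once `timeout_sec` elapses.
 */
static int freeze_and_work(int fd, freeze_ioctl_fn ioc,
                           int rounds, long timeout_sec)
{
    long t = timeout_sec;

    if (ioc(fd, REQ_FIFREEZE, &t) != 0)
        return -1;                      /* could not freeze */

    for (int i = 0; i < rounds; i++) {
        /* ... one unit of snapshot/backup work would go here ... */
        t = timeout_sec;
        if (ioc(fd, REQ_FIFREEZE_RESET, &t) != 0)
            return -1;                  /* EINVAL: already thawed */
    }

    t = 0;                              /* timeval is ignored by FITHAW */
    return ioc(fd, REQ_FITHAW, &t);
}
```

With a patched kernel, `ioc` would simply forward to ioctl() on a descriptor for the mountpoint, and the reset calls would be issued from a timer that fires more often than the timeout period, as in the 10-seconds-every-5-seconds example above.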
Cheers, Takashi Signed-off-by: Takashi Sato --- diff -uprN -X linux-2.6.25-rc4-freeze/Documentation/dontdiff linux-2.6.25-rc4/drivers/md/dm.c linux-2.6.25-rc4-freeze/dr ivers/md/dm.c --- linux-2.6.25-rc4/drivers/md/dm.c 2008-03-05 13:33:54.000000000 +0900 +++ linux-2.6.25-rc4-freeze/drivers/md/dm.c 2008-03-07 20:34:43.000000000 +0900 @@ -1407,7 +1407,7 @@ static int lock_fs(struct mapped_device WARN_ON(md->frozen_sb); - md->frozen_sb = freeze_bdev(md->suspended_bdev); + md->frozen_sb = freeze_bdev(md->suspended_bdev, 0); if (IS_ERR(md->frozen_sb)) { r = PTR_ERR(md->frozen_sb); md->frozen_sb = NULL; diff -uprN -X linux-2.6.25-rc4-freeze/Documentation/dontdiff linux-2.6.25-rc4/fs/block_dev.c linux-2.6.25-rc4-freeze/fs/ block_dev.c --- linux-2.6.25-rc4/fs/block_dev.c 2008-03-05 13:33:54.000000000 +0900 +++ linux-2.6.25-rc4-freeze/fs/block_dev.c 2008-03-07 20:34:43.000000000 +0900 @@ -284,6 +284,11 @@ static void init_once(struct kmem_cache INIT_LIST_HEAD(&bdev->bd_holder_list); #endif inode_init_once(&ei->vfs_inode); + + /* Initialize semaphore for freeze. */ + sema_init(&bdev->bd_freeze_sem, 1); + /* Setup freeze timeout function. */ + INIT_DELAYED_WORK(&bdev->bd_freeze_timeout, freeze_timeout); } static inline void __bd_forget(struct inode *inode) diff -uprN -X linux-2.6.25-rc4-freeze/Documentation/dontdiff linux-2.6.25-rc4/fs/buffer.c linux-2.6.25-rc4-freeze/fs/buf fer.c --- linux-2.6.25-rc4/fs/buffer.c 2008-03-05 13:33:54.000000000 +0900 +++ linux-2.6.25-rc4-freeze/fs/buffer.c 2008-03-07 20:34:43.000000000 +0900 @@ -190,17 +190,33 @@ int fsync_bdev(struct block_device *bdev /** * freeze_bdev -- lock a filesystem and force it into a consistent state - * @bdev: blockdevice to lock + * @bdev: blockdevice to lock + * @timeout_msec: timeout period * * This takes the block device bd_mount_sem to make sure no new mounts * happen on bdev until thaw_bdev() is called. 
* If a superblock is found on this device, we take the s_umount semaphore * on it to make sure nobody unmounts until the snapshot creation is done. + * If timeout_msec is bigger than 0, this registers the delayed work for + * timeout of the freeze feature. */ -struct super_block *freeze_bdev(struct block_device *bdev) +struct super_block *freeze_bdev(struct block_device *bdev, long timeout_msec) { struct super_block *sb; + down(&bdev->bd_freeze_sem); + sb = get_super_without_lock(bdev); + + /* If super_block has been already frozen, return. */ + if (sb && sb->s_frozen != SB_UNFROZEN) { + put_super(sb); + up(&bdev->bd_freeze_sem); + return sb; + } + + if (sb) + put_super(sb); + down(&bdev->bd_mount_sem); sb = get_super(bdev); if (sb && !(sb->s_flags & MS_RDONLY)) { @@ -219,6 +235,13 @@ struct super_block *freeze_bdev(struct b } sync_blockdev(bdev); + + /* Setup unfreeze timer. */ + if (timeout_msec > 0) + add_freeze_timeout(bdev, timeout_msec); + + up(&bdev->bd_freeze_sem); + return sb; /* thaw_bdev releases s->s_umount and bd_mount_sem */ } EXPORT_SYMBOL(freeze_bdev); @@ -232,6 +255,16 @@ EXPORT_SYMBOL(freeze_bdev); */ void thaw_bdev(struct block_device *bdev, struct super_block *sb) { + down(&bdev->bd_freeze_sem); + + if (sb && sb->s_frozen == SB_UNFROZEN) { + up(&bdev->bd_freeze_sem); + return; + } + + /* Delete unfreeze timer. 
*/ + del_freeze_timeout(bdev); + if (sb) { BUG_ON(sb->s_bdev != bdev); @@ -244,6 +277,8 @@ void thaw_bdev(struct block_device *bdev } up(&bdev->bd_mount_sem); + + up(&bdev->bd_freeze_sem); } EXPORT_SYMBOL(thaw_bdev); diff -uprN -X linux-2.6.25-rc4-freeze/Documentation/dontdiff linux-2.6.25-rc4/fs/ioctl.c linux-2.6.25-rc4-freeze/fs/ioct l.c --- linux-2.6.25-rc4/fs/ioctl.c 2008-03-05 13:33:54.000000000 +0900 +++ linux-2.6.25-rc4-freeze/fs/ioctl.c 2008-03-07 20:40:03.000000000 +0900 @@ -13,6 +13,7 @@ #include #include #include +#include #include @@ -181,6 +182,102 @@ int do_vfs_ioctl(struct file *filp, unsi } else error = -ENOTTY; break; + + case FIFREEZE: { + long timeout_sec; + long timeout_msec; + struct super_block *sb = filp->f_path.dentry->d_inode->i_sb; + + if (!capable(CAP_SYS_ADMIN)) { + error = -EPERM; + break; + } + + /* If filesystem doesn't support freeze feature, return. */ + if (sb->s_op->write_super_lockfs == NULL) { + error = -EINVAL; + break; + } + + /* arg(sec) to tick value. */ + error = get_user(timeout_sec, (long __user *) arg); + if (error != 0) + break; + /* + * If 1 is specified as the timeout period, + * it will be changed into 0 to keep the compatibility + * of XFS application(xfs_freeze). + */ + if (timeout_sec < 0) { + error = -EINVAL; + break; + } else if (timeout_sec < 2) { + timeout_sec = 0; + } + + timeout_msec = timeout_sec * 1000; + /* overflow case */ + if (timeout_msec < 0) { + error = -EINVAL; + break; + } + + /* Freeze. */ + freeze_bdev(sb->s_bdev, timeout_msec); + + break; + } + + case FITHAW: { + struct super_block *sb = filp->f_path.dentry->d_inode->i_sb; + + if (!capable(CAP_SYS_ADMIN)) { + error = -EPERM; + break; + } + + /* Thaw. 
*/ + thaw_bdev(sb->s_bdev, sb); + break; + } + + case FIFREEZE_RESET_TIMEOUT: { + long timeout_sec; + long timeout_msec; + struct super_block *sb + = filp->f_path.dentry->d_inode->i_sb; + + if (!capable(CAP_SYS_ADMIN)) { + error = -EPERM; + break; + } + + /* arg(sec) to tick value */ + error = get_user(timeout_sec, (long __user *) arg); + if (error) + break; + timeout_msec = timeout_sec * 1000; + if (timeout_msec < 0) { + error = -EINVAL; + break; + } + + if (sb) { + down(&sb->s_bdev->bd_freeze_sem); + if (sb->s_frozen == SB_UNFROZEN) { + up(&sb->s_bdev->bd_freeze_sem); + error = -EINVAL; + break; + } + /* setup unfreeze timer */ + if (timeout_msec > 0) + add_freeze_timeout(sb->s_bdev, + timeout_msec); + up(&sb->s_bdev->bd_freeze_sem); + } + break; + } + default: if (S_ISREG(filp->f_path.dentry->d_inode->i_mode)) error = file_ioctl(filp, cmd, arg); diff -uprN -X linux-2.6.25-rc4-freeze/Documentation/dontdiff linux-2.6.25-rc4/fs/super.c linux-2.6.25-rc4-freeze/fs/supe r.c --- linux-2.6.25-rc4/fs/super.c 2008-03-05 13:33:54.000000000 +0900 +++ linux-2.6.25-rc4-freeze/fs/super.c 2008-03-07 20:36:25.000000000 +0900 @@ -154,7 +154,7 @@ int __put_super_and_need_restart(struct * Drops a temporary reference, frees superblock if there's no * references left. */ -static void put_super(struct super_block *sb) +void put_super(struct super_block *sb) { spin_lock(&sb_lock); __put_super(sb); @@ -507,6 +507,36 @@ rescan: EXPORT_SYMBOL(get_super); +/* + * get_super_without_lock - Get super_block from block_device without lock. + * @bdev: block device struct + * + * Scan the superblock list and finds the superblock of the file system + * mounted on the block device given. This doesn't lock anyone. + * %NULL is returned if no match is found. 
+ */ +struct super_block *get_super_without_lock(struct block_device *bdev) +{ + struct super_block *sb; + + if (!bdev) + return NULL; + + spin_lock(&sb_lock); + list_for_each_entry(sb, &super_blocks, s_list) { + if (sb->s_bdev == bdev) { + if (sb->s_root) { + sb->s_count++; + spin_unlock(&sb_lock); + return sb; + } + } + } + spin_unlock(&sb_lock); + return NULL; +} +EXPORT_SYMBOL(get_super_without_lock); + struct super_block * user_get_super(dev_t dev) { struct super_block *sb; @@ -952,3 +982,55 @@ struct vfsmount *kern_mount_data(struct } EXPORT_SYMBOL_GPL(kern_mount_data); + +/* + * freeze_timeout - Thaw the filesystem. + * + * @work: work queue (delayed_work.work) + * + * Called by the delayed work when elapsing the timeout period. + * Thaw the filesystem. + */ +void freeze_timeout(struct work_struct *work) +{ + struct block_device *bd = container_of(work, + struct block_device, bd_freeze_timeout.work); + + struct super_block *sb = get_super_without_lock(bd); + + thaw_bdev(bd, sb); + + if (sb) + put_super(sb); +} +EXPORT_SYMBOL_GPL(freeze_timeout); + +/* + * add_freeze_timeout - Add timeout for freeze. + * + * @bdev: block device struct + * @timeout_msec: timeout period + * + * Add the delayed work for freeze timeout to the delayed work queue. + */ +void add_freeze_timeout(struct block_device *bdev, long timeout_msec) +{ + s64 timeout_jiffies = msecs_to_jiffies(timeout_msec); + + /* Set delayed work queue */ + cancel_delayed_work(&bdev->bd_freeze_timeout); + schedule_delayed_work(&bdev->bd_freeze_timeout, timeout_jiffies); +} + +/* + * del_freeze_timeout - Delete timeout for freeze. + * + * @bdev: block device struct + * + * Delete the delayed work for freeze timeout from the delayed work queue. 
+ */ +void del_freeze_timeout(struct block_device *bdev) +{ + if (delayed_work_pending(&bdev->bd_freeze_timeout)) + cancel_delayed_work(&bdev->bd_freeze_timeout); +} diff -uprN -X linux-2.6.25-rc4-freeze/Documentation/dontdiff linux-2.6.25-rc4/fs/xfs/linux-2.6/xfs_ioctl.c linux-2.6.25- rc4-freeze/fs/xfs/linux-2.6/xfs_ioctl.c --- linux-2.6.25-rc4/fs/xfs/linux-2.6/xfs_ioctl.c 2008-03-05 13:33:54.000000000 +0900 +++ linux-2.6.25-rc4-freeze/fs/xfs/linux-2.6/xfs_ioctl.c 2008-03-07 20:34:43.000000000 +0900 @@ -911,7 +911,7 @@ xfs_ioctl( return -EPERM; if (inode->i_sb->s_frozen == SB_UNFROZEN) - freeze_bdev(inode->i_sb->s_bdev); + freeze_bdev(inode->i_sb->s_bdev, 0); return 0; case XFS_IOC_THAW: diff -uprN -X linux-2.6.25-rc4-freeze/Documentation/dontdiff linux-2.6.25-rc4/fs/xfs/xfs_fsops.c linux-2.6.25-rc4-freeze /fs/xfs/xfs_fsops.c --- linux-2.6.25-rc4/fs/xfs/xfs_fsops.c 2008-03-05 13:33:54.000000000 +0900 +++ linux-2.6.25-rc4-freeze/fs/xfs/xfs_fsops.c 2008-03-07 20:34:43.000000000 +0900 @@ -623,7 +623,7 @@ xfs_fs_goingdown( { switch (inflags) { case XFS_FSOP_GOING_FLAGS_DEFAULT: { - struct super_block *sb = freeze_bdev(mp->m_super->s_bdev); + struct super_block *sb = freeze_bdev(mp->m_super->s_bdev, 0); if (sb && !IS_ERR(sb)) { xfs_force_shutdown(mp, SHUTDOWN_FORCE_UMOUNT); diff -uprN -X linux-2.6.25-rc4-freeze/Documentation/dontdiff linux-2.6.25-rc4/include/linux/buffer_head.h linux-2.6.25-r c4-freeze/include/linux/buffer_head.h --- linux-2.6.25-rc4/include/linux/buffer_head.h 2008-03-05 13:33:54.000000000 +0900 +++ linux-2.6.25-rc4-freeze/include/linux/buffer_head.h 2008-03-07 20:34:43.000000000 +0900 @@ -170,7 +170,7 @@ int sync_blockdev(struct block_device *b void __wait_on_buffer(struct buffer_head *); wait_queue_head_t *bh_waitq_head(struct buffer_head *bh); int fsync_bdev(struct block_device *); -struct super_block *freeze_bdev(struct block_device *); +struct super_block *freeze_bdev(struct block_device *, long timeout_msec); void thaw_bdev(struct block_device 
*, struct super_block *); int fsync_super(struct super_block *); int fsync_no_super(struct block_device *); diff -uprN -X linux-2.6.25-rc4-freeze/Documentation/dontdiff linux-2.6.25-rc4/include/linux/fs.h linux-2.6.25-rc4-freeze /include/linux/fs.h --- linux-2.6.25-rc4/include/linux/fs.h 2008-03-05 13:33:54.000000000 +0900 +++ linux-2.6.25-rc4-freeze/include/linux/fs.h 2008-03-07 20:34:43.000000000 +0900 @@ -8,6 +8,7 @@ #include #include +#include /* * It's silly to have NR_OPEN bigger than NR_FILE, but you can change @@ -223,6 +224,9 @@ extern int dir_notify_enable; #define BMAP_IOCTL 1 /* obsolete - kept for compatibility */ #define FIBMAP _IO(0x00,1) /* bmap access */ #define FIGETBSZ _IO(0x00,2) /* get the block size used for bmap */ +#define FIFREEZE _IOWR('X', 119, int) /* Freeze */ +#define FITHAW _IOWR('X', 120, int) /* Thaw */ +#define FIFREEZE_RESET_TIMEOUT _IO(0x00, 3) /* Reset freeze timeout */ #define FS_IOC_GETFLAGS _IOR('f', 1, long) #define FS_IOC_SETFLAGS _IOW('f', 2, long) @@ -548,6 +552,11 @@ struct block_device { * care to not mess up bd_private for that case. 
*/ unsigned long bd_private; + + /* Delayed work for freeze */ + struct delayed_work bd_freeze_timeout; + /* Semaphore for freeze */ + struct semaphore bd_freeze_sem; }; /* @@ -1926,7 +1935,9 @@ extern int do_vfs_ioctl(struct file *fil extern void get_filesystem(struct file_system_type *fs); extern void put_filesystem(struct file_system_type *fs); extern struct file_system_type *get_fs_type(const char *name); +extern void put_super(struct super_block *sb); extern struct super_block *get_super(struct block_device *); +extern struct super_block *get_super_without_lock(struct block_device *); extern struct super_block *user_get_super(dev_t); extern void drop_super(struct super_block *sb); @@ -2097,5 +2108,9 @@ int proc_nr_files(struct ctl_table *tabl int get_filesystem_list(char * buf); +extern void add_freeze_timeout(struct block_device *bdev, long timeout_msec); +extern void del_freeze_timeout(struct block_device *bdev); +extern void freeze_timeout(struct work_struct *work); + #endif /* __KERNEL__ */ #endif /* _LINUX_FS_H */ From owner-xfs@oss.sgi.com Fri Mar 7 01:33:46 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 07 Mar 2008 01:34:06 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_31, J_CHICKENPOX_45,J_CHICKENPOX_54,J_CHICKENPOX_61,J_CHICKENPOX_62, J_CHICKENPOX_63,J_CHICKENPOX_75 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m279XiHM026747 for ; Fri, 7 Mar 2008 01:33:46 -0800 X-ASG-Debug-ID: 1204882452-698200bc0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.valinux.co.jp (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 148D7F38257 for ; Fri, 7 Mar 2008 01:34:13 -0800 (PST) Received: from mail.valinux.co.jp (fms-01.valinux.co.jp [210.128.90.1]) by 
cuda.sgi.com with ESMTP id vie2A3CIYrVPbm4a for ; Fri, 07 Mar 2008 01:34:13 -0800 (PST) Received: from dhcp032.local.valinux.co.jp (vagw.valinux.co.jp [210.128.90.14]) by mail.valinux.co.jp (Postfix) with ESMTP id 4B1912DC9B2 for ; Fri, 7 Mar 2008 18:34:11 +0900 (JST) Date: Fri, 07 Mar 2008 18:34:09 +0900 From: IWAMOTO Toshihiro To: xfs@oss.sgi.com X-ASG-Orig-Subj: [PATCH] prototype file data inode inlining Subject: [PATCH] prototype file data inode inlining User-Agent: Wanderlust/2.15.5 (Almost Unreal) SEMI/1.14.6 (Maruoka) FLIM/1.14.9 (=?ISO-8859-4?Q?Goj=F2?=) APEL/10.7 Emacs/22.1 (x86_64-pc-linux-gnu) MULE/5.0 (SAKAKI) MIME-Version: 1.0 (generated by SEMI 1.14.6 - "Maruoka") Content-Type: text/plain; charset=US-ASCII Message-Id: <20080307093411.4B1912DC9B2@mail.valinux.co.jp> X-Barracuda-Connect: fms-01.valinux.co.jp[210.128.90.1] X-Barracuda-Start-Time: 1204882454 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44123 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14794 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: iwamoto@valinux.co.jp Precedence: bulk X-list: xfs Hi, I've done a prototype implementation of file data inlining in inodes a while ago. It was originally meant to solve a performance problem with a large number of small files at some customer site. Although I measured some performance gains, a different workaround has been adopted due to the patch quality problem. 
As I'm not asking for inclusion, the patch hasn't been ported to the current kernel version. This patch might be useful if someone has a similar performance problem and would like to see if file inlining helps or not. Some random notes and the patch itself follows. Inlined file data are written from xfs_page_state_convert(). The xfs_trans related operations in that function is to get inode written on disk and isn't for crash consistency. Small files are made inlined when created. Non inlined files don't get inlined when they are truncated. xfs_bmap_local_to_extents() has been modified to work with file data, but logging isn't implemented. A machine crash can cause data corruption. O_SYNC may behave incorrectly. Use of attribute forks isn't considered and likely has issues. diff -urp linux-2.6.12.5.orig/fs/xfs/linux-2.6/xfs_aops.c linux-2.6.12.5/fs/xfs/linux-2.6/xfs_aops.c --- linux-2.6.12.5.orig/fs/xfs/linux-2.6/xfs_aops.c 2005-08-15 09:20:18.000000000 +0900 +++ linux-2.6.12.5/fs/xfs/linux-2.6/xfs_aops.c 2008-03-05 18:05:32.383592506 +0900 @@ -49,6 +49,7 @@ #include "xfs_dir2_sf.h" #include "xfs_dinode.h" #include "xfs_inode.h" +#include "xfs_inode_item.h" #include "xfs_error.h" #include "xfs_rw.h" #include "xfs_iomap.h" @@ -567,6 +568,7 @@ xfs_submit_page( if (bh_count) { for (i = 0; i < bh_count; i++) { bh = bh_arr[i]; + BUG_ON(bh->b_bdev == NULL); mark_buffer_async_write(bh); if (buffer_unwritten(bh)) set_buffer_unwritten_io(bh); @@ -725,6 +727,10 @@ xfs_page_state_convert( { struct buffer_head *bh_arr[MAX_BUF_PER_PAGE], *bh, *head; xfs_iomap_t *iomp, iomap; + vnode_t *vp = LINVFS_GET_VP(inode); + xfs_inode_t *ip = XFS_BHVTOI(vp->v_fbhv); + xfs_trans_t *tp; + xfs_mount_t *mp = ip->i_mount; loff_t offset; unsigned long p_offset = 0; __uint64_t end_offset; @@ -740,16 +746,18 @@ xfs_page_state_convert( offset = i_size_read(inode); end_index = offset >> PAGE_CACHE_SHIFT; last_index = (offset - 1) >> PAGE_CACHE_SHIFT; + end_offset = min_t(unsigned long long, + 
(loff_t)(page->index + 1) << PAGE_CACHE_SHIFT, offset); if (page->index >= end_index) { if ((page->index >= end_index + 1) || !(i_size_read(inode) & (PAGE_CACHE_SIZE - 1))) { + if (printk_ratelimit()) + printk("xfs_psc: i_size %d\n", i_size_read(inode)); err = -EIO; goto error; } } - end_offset = min_t(unsigned long long, - (loff_t)(page->index + 1) << PAGE_CACHE_SHIFT, offset); offset = (loff_t)page->index << PAGE_CACHE_SHIFT; /* @@ -765,6 +773,58 @@ xfs_page_state_convert( p_offset = 0; bh = head = page_buffers(page); + if (ip->i_d.di_format == XFS_DINODE_FMT_LOCAL && + end_offset <= XFS_IFORK_DSIZE(ip)) { + char *v; + + if (printk_ratelimit()) + printk("xfs_psc: %llu %d\n", end_offset, XFS_IFORK_DSIZE(ip)); + if (end_offset > ip->i_df.if_bytes) + xfs_idata_realloc(ip, end_offset - ip->i_df.if_bytes, + XFS_DATA_FORK); + if ((!PageDirty(page)) && printk_ratelimit()) + printk("xfs_page_state_convert: is clean\n"); + clear_page_dirty(page); + clear_buffer_dirty(bh); /* XXX */ + v = kmap(page); + memcpy(ip->i_df.if_u1.if_data, v, end_offset); + kunmap(page); + set_buffer_uptodate(bh); + SetPageUptodate(page); + tp = xfs_trans_alloc(mp, XFS_TRANS_WRITE_SYNC); + if ((err = xfs_trans_reserve(tp, 0, + XFS_SWRITE_LOG_RES(mp), + 0, 0, 0))) { + /* Transaction reserve failed */ + xfs_trans_cancel(tp, 0); + } else { + /* Transaction reserve successful */ + xfs_ilock(ip, XFS_ILOCK_EXCL); + xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL); + xfs_trans_ihold(tp, ip); + xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE | XFS_ILOG_DDATA); + /* XXX O_SYNC handled by xfs_write?? */ + /* xfs_trans_set_sync(tp); */ + err = xfs_trans_commit(tp, 0, NULL); + xfs_iunlock(ip, XFS_ILOCK_EXCL); + } + ASSERT(err == 0); + unlock_page(page); /* XXX */ + return 0; + } + + if (ip->i_d.di_format == XFS_DINODE_FMT_LOCAL) { + /* + * Data no longer fits in an inode. + * Clear the mapped bit so that xfs_bmap_local_to_extents() + * gets called from xfs_bmapi(). 
+		 */
+		clear_buffer_mapped(bh);
+		unmapped = 1;
+		if (printk_ratelimit())
+			printk("xfs_psc: clearing LOCAL ino %llu %llu %x %x\n",
+				ip->i_ino, end_offset, bh->b_state, page->flags);
+	}
 	do {
 		if (offset >= end_offset)
 			break;
@@ -897,9 +957,15 @@ xfs_page_state_convert(
 					startio, unmapped, tlast);
 	}
+	BUG_ON(ip->i_d.di_format == XFS_DINODE_FMT_LOCAL &&
+	       end_offset > XFS_IFORK_DSIZE(ip));
 	return page_dirty;
 error:
+	if (printk_ratelimit())
+		printk("xfs_psc: error\n");
+	BUG_ON(ip->i_d.di_format == XFS_DINODE_FMT_LOCAL &&
+	       end_offset > XFS_IFORK_DSIZE(ip));
 	for (i = 0; i < cnt; i++) {
 		unlock_buffer(bh_arr[i]);
 	}
@@ -929,6 +995,7 @@ __linvfs_get_block(
 	bmapi_flags_t	flags)
 {
 	vnode_t		*vp = LINVFS_GET_VP(inode);
+	xfs_inode_t	*ip = XFS_BHVTOI(vp->v_fbhv);
 	xfs_iomap_t	iomap;
 	int		retpbbm = 1;
 	int		error;
@@ -940,6 +1007,29 @@ __linvfs_get_block(
 	else
 		size = 1 << inode->i_blkbits;
+	if (ip->i_d.di_format == XFS_DINODE_FMT_LOCAL) {
+		char *v;
+		v = kmap(bh_result->b_page);
+		if (printk_ratelimit())
+			printk("__linvfs_get_block: memcpy ino %llu, %d bytes\n",
+				ip->i_ino, (int)ip->i_d.di_size);
+		if (ip->i_df.if_u1.if_data == NULL ||
+		    ip->i_d.di_size > ip->i_df.if_bytes)
+			/* seek happened beyond EOF */
+			xfs_idata_realloc(ip,
+				ip->i_d.di_size - ip->i_df.if_bytes, XFS_DATA_FORK);
+		memcpy(v, ip->i_df.if_u1.if_data, (int)ip->i_d.di_size);
+		memset(v + (int)ip->i_d.di_size, 0,
+			PAGE_SIZE - ip->i_d.di_size); /* XXX */
+		kunmap(bh_result->b_page);
+		set_buffer_uptodate(bh_result);
+		/* XXX do_mpage_readpage apparently needs this to be mapped */
+		set_buffer_mapped(bh_result);
+		SetPageUptodate(bh_result->b_page); /* XXX */
+		if (PageDirty(bh_result->b_page) && printk_ratelimit()) /* XXX */
+			printk("__linvfs_get_block: is dirty\n");
+		return 0;
+	}
 	VOP_BMAP(vp, offset, size, create ? flags : BMAPI_READ, &iomap, &retpbbm, error);
 	if (error)
@@ -1143,6 +1233,7 @@ linvfs_writepage(
 	int			need_trans;
 	int			delalloc, unmapped, unwritten;
 	struct inode		*inode = page->mapping->host;
+	xfs_inode_t		*ip;
 	xfs_page_trace(XFS_WRITEPAGE_ENTER, inode, page, 0);
@@ -1164,14 +1255,28 @@ linvfs_writepage(
 		need_trans = delalloc + unmapped + unwritten;
 	}
+	ip = XFS_BHVTOI(LINVFS_GET_VP(inode)->v_fbhv);
+	/* see xfs_page_state_convert */
+	/* XXX dup code */
+	if (ip->i_d.di_format == XFS_DINODE_FMT_LOCAL &&
+	    i_size_read(inode) > XFS_IFORK_DSIZE(ip)) {
+		unmapped = 1;
+		need_trans = 1;
+	}
+
 	/*
 	 * If we need a transaction and the process flags say
 	 * we are already in a transaction, or no IO is allowed
 	 * then mark the page dirty again and leave the page
 	 * as is.
 	 */
-	if (PFLAGS_TEST_FSTRANS() && need_trans)
+	if (PFLAGS_TEST_FSTRANS() && need_trans) {
+		if (ip->i_d.di_format == XFS_DINODE_FMT_LOCAL &&
+		    printk_ratelimit())
+			printk("linvfs_writepage: out_fail ino %llu\n",
+				ip->i_ino);
 		goto out_fail;
+	}
 	/*
 	 * Delay hooking up buffer heads until we have
diff -urp linux-2.6.12.5.orig/fs/xfs/linux-2.6/xfs_lrw.c linux-2.6.12.5/fs/xfs/linux-2.6/xfs_lrw.c
--- linux-2.6.12.5.orig/fs/xfs/linux-2.6/xfs_lrw.c	2005-08-15 09:20:18.000000000 +0900
+++ linux-2.6.12.5/fs/xfs/linux-2.6/xfs_lrw.c	2008-02-29 17:28:36.170355201 +0900
@@ -411,6 +411,8 @@ xfs_zero_last_block(
 {
 	xfs_fileoff_t	last_fsb;
 	xfs_mount_t	*mp;
+	vnode_t		*vp = LINVFS_GET_VP(ip);
+	xfs_inode_t	*xip = XFS_BHVTOI(vp->v_fbhv);
 	int		nimaps;
 	int		zero_offset;
 	int		zero_len;
@@ -488,6 +490,7 @@ xfs_zero_eof(
 	xfs_fsize_t	end_size)	/* terminal inode size */
 {
 	struct inode	*ip = LINVFS_GET_IP(vp);
+	xfs_inode_t	*xip = XFS_BHVTOI(vp->v_fbhv);
 	xfs_fileoff_t	start_zero_fsb;
 	xfs_fileoff_t	end_zero_fsb;
 	xfs_fileoff_t	prev_zero_fsb;
@@ -507,6 +510,13 @@ xfs_zero_eof(
 	mp = io->io_mount;
+	if (xip->i_d.di_format == XFS_DINODE_FMT_LOCAL) {
+		if (offset < xip->i_df.if_bytes)
+			memset(xip->i_df.if_u1.if_data + offset, 0,
+			       xip->i_df.if_bytes - offset);
+		return 0;
+	}
+
 	/*
 	 * First handle zeroing the block on which isize resides.
 	 * We only zero a part of that block so it is handled specially.
@@ -664,6 +674,9 @@ xfs_write(
 		break;
 	}
+	if (xip->i_d.di_format == XFS_DINODE_FMT_LOCAL &&
+	    printk_ratelimit()) /* XXX */
+		printk("xfs_write: ino %llu, %lu bytes\n", xip->i_ino, ocount);
 	count = ocount;
 	pos = *offset;
@@ -769,6 +782,21 @@ start:
 		inode_update_time(inode, 1);
 	}
+	if ((!(mp->m_flags & XFS_MOUNT_NOIFILE)) &&
+	    new_size > isize && /* XXX unneeded ? */
+	    new_size <= XFS_IFORK_DSIZE(xip)) {
+		xfs_fileoff_t	last_block;
+
+		/* XXX lock */
+		error = xfs_bmap_last_offset(NULL, xip, &last_block,
+			XFS_DATA_FORK);
+		if (!error && !last_block) {
+			xip->i_d.di_format = XFS_DINODE_FMT_LOCAL;
+			xip->i_df.if_flags &= ~(XFS_IFEXTENTS | XFS_IFBROOT);
+			xip->i_df.if_flags |= XFS_IFINLINE;
+		}
+	}
+
 	/*
 	 * If the offset is beyond the size of the file, we have a couple
 	 * of things to do. First, if there is already space allocated
@@ -855,6 +883,9 @@ retry:
 			*offset, ioflags);
 		ret = generic_file_buffered_write(iocb, iovp, segs,
 			pos, offset, count, ret);
+		if (xip->i_d.di_format == XFS_DINODE_FMT_LOCAL &&
+		    printk_ratelimit()) /* XXX */
+			printk("xfs_write: generic_file_buffered_write ino %llu, ret %d\n", xip->i_ino, ret);
 	}
 	current->backing_dev_info = NULL;
diff -urp linux-2.6.12.5.orig/fs/xfs/xfs_bmap.c linux-2.6.12.5/fs/xfs/xfs_bmap.c
--- linux-2.6.12.5.orig/fs/xfs/xfs_bmap.c	2005-08-15 09:20:18.000000000 +0900
+++ linux-2.6.12.5/fs/xfs/xfs_bmap.c	2008-02-29 17:28:36.207308945 +0900
@@ -3344,13 +3344,23 @@ xfs_bmap_local_to_extents(
 	static char	fname[] = "xfs_bmap_local_to_extents";
 #endif
 	xfs_ifork_t	*ifp;		/* inode fork pointer */
+	int		ifile = 0;
+#if 0
 	/*
 	 * We don't want to deal with the case of keeping inode data inline yet.
 	 * So sending the data fork of a regular inode is invalid.
 	 */
 	ASSERT(!((ip->i_d.di_mode & S_IFMT) == S_IFREG &&
 		 whichfork == XFS_DATA_FORK));
+#else
+	if ((ip->i_d.di_mode & S_IFMT) == S_IFREG &&
+	    whichfork == XFS_DATA_FORK) {
+		ifile = 1;
+		if (printk_ratelimit())
+			printk("xfs_bmap_local_to_extents: ino %d\n", ip->i_ino);
+	}
+#endif
 	ifp = XFS_IFORK_PTR(ip, whichfork);
 	ASSERT(XFS_IFORK_FORMAT(ip, whichfork) == XFS_DINODE_FMT_LOCAL);
 	flags = 0;
@@ -3386,10 +3396,12 @@ xfs_bmap_local_to_extents(
 		ASSERT(args.fsbno != NULLFSBLOCK);
 		ASSERT(args.len == 1);
 		*firstblock = args.fsbno;
+		if (!ifile) { /* XXX */
 		bp = xfs_btree_get_bufl(args.mp, tp, args.fsbno, 0);
 		memcpy((char *)XFS_BUF_PTR(bp), ifp->if_u1.if_data, ifp->if_bytes);
 		xfs_trans_log_buf(tp, bp, 0, ifp->if_bytes - 1);
+		}
 		xfs_idata_realloc(ip, -ifp->if_bytes, whichfork);
 		xfs_iext_realloc(ip, 1, whichfork);
 		ep = ifp->if_u1.if_extents;
@@ -4628,6 +4640,10 @@ xfs_bmapi(
 	nallocs = 0;
 	cur = NULL;
 	if (XFS_IFORK_FORMAT(ip, whichfork) == XFS_DINODE_FMT_LOCAL) {
+		if (!wr) { /* XXX */
+			*nmap = 0;
+			return 0;
+		}
 		ASSERT(wr && tp);
 		if ((error = xfs_bmap_local_to_extents(tp, ip, firstblock, total,
				&logflags, whichfork)))
diff -urp linux-2.6.12.5.orig/fs/xfs/xfs_clnt.h linux-2.6.12.5/fs/xfs/xfs_clnt.h
--- linux-2.6.12.5.orig/fs/xfs/xfs_clnt.h	2005-08-15 09:20:18.000000000 +0900
+++ linux-2.6.12.5/fs/xfs/xfs_clnt.h	2008-02-29 17:36:25.283560299 +0900
@@ -106,5 +106,6 @@ struct xfs_mount_args {
 #define XFSMNT_IHASHSIZE	0x20000000	/* inode hash table size */
 #define XFSMNT_DIRSYNC		0x40000000	/* sync creat,link,unlink,rename
						 * symlink,mkdir,rmdir,mknod */
+#define XFSMNT_NOIFILE		0x80000000	/* do not create inlined file */
 #endif	/* __XFS_CLNT_H__ */
diff -urp linux-2.6.12.5.orig/fs/xfs/xfs_inode.c linux-2.6.12.5/fs/xfs/xfs_inode.c
--- linux-2.6.12.5.orig/fs/xfs/xfs_inode.c	2005-08-15 09:20:18.000000000 +0900
+++ linux-2.6.12.5/fs/xfs/xfs_inode.c	2008-02-29 17:28:36.260126921 +0900
@@ -283,6 +283,20 @@ xfs_inotobp(
 	return 0;
 }
+void
+xfs_inode_buf_dump(xfs_buf_t *bp, u_short len)
+{
+	int pgs = len >> PAGE_SHIFT;
+	char *p;
+	int i, j;
+
+	for(i = 0; i < pgs; i++) {
+		p = xfs_buf_offset(bp, i << PAGE_SHIFT);
+		for(j = PAGE_SIZE; j; j--)
+			printk(" %02x", *p++);
+		printk("\n");
+	}
+}
 /*
  * This routine is called to map an inode to the buffer containing
@@ -413,6 +427,7 @@ xfs_itobp(
			mp->m_ddev_targp,
			(unsigned long long)imap.im_blkno, i,
			INT_GET(dip->di_core.di_magic, ARCH_CONVERT));
+		xfs_inode_buf_dump(bp, BBTOB(imap.im_len));
 #endif
		XFS_CORRUPTION_ERROR("xfs_itobp", XFS_ERRLEVEL_HIGH,
				     mp, dip);
@@ -506,6 +521,7 @@ xfs_iformat(
 	case S_IFDIR:
		switch (INT_GET(dip->di_core.di_format, ARCH_CONVERT)) {
		case XFS_DINODE_FMT_LOCAL:
+#if 0
			/*
			 * no local regular files yet
			 */
@@ -518,7 +534,7 @@ xfs_iformat(
				ip->i_mount, dip);
			return XFS_ERROR(EFSCORRUPTED);
		}
-
+#endif
		di_size = INT_GET(dip->di_core.di_size, ARCH_CONVERT);
		if (unlikely(di_size > XFS_DFORK_DSIZE(dip, ip->i_mount))) {
			xfs_fs_cmn_err(CE_WARN, ip->i_mount,
@@ -634,6 +650,9 @@ xfs_iformat_local(
 	memcpy(ifp->if_u1.if_data, XFS_DFORK_PTR(dip, whichfork), size);
 	ifp->if_flags &= ~XFS_IFEXTENTS;
 	ifp->if_flags |= XFS_IFINLINE;
+	if ((ip->i_d.di_mode & S_IFMT) == S_IFREG && printk_ratelimit())
+		printk("xfs_iformat_local: ino %llu %p, size %d\n",
+			ip->i_ino, ip, size);
 	return 0;
 }
@@ -1684,6 +1703,18 @@ xfs_itruncate_finish(
		unmap_len = last_block - first_unmap_block + 1;
	}
	while (!done) {
+		if (fork == XFS_DATA_FORK &&
+		    ip->i_d.di_format == XFS_DINODE_FMT_LOCAL) {
+			/* XXX realloc */
+			/* XXX real_size */
+			xfs_trans_log_inode(ntp, ip, XFS_ILOG_CORE);
+#if 0
+			ntp = xfs_trans_dup(*tp);
+			error = xfs_trans_commit(*tp, 0, NULL);
+#endif
+			committed = 0;
+			done = 1;
+		} else {
		/*
		 * Free up up to XFS_ITRUNC_MAX_EXTENTS.  xfs_bunmapi()
		 * will tell us whether it freed the entire range or
@@ -1754,6 +1785,7 @@ xfs_itruncate_finish(
			}
			return error;
		}
+		}
 		if (committed) {
			/*
@@ -3026,6 +3058,8 @@ xfs_iflush_fork(
		ASSERT(ifp->if_u1.if_data != NULL);
		ASSERT(ifp->if_bytes <= XFS_IFORK_SIZE(ip, whichfork));
		memcpy(cp, ifp->if_u1.if_data, ifp->if_bytes);
+		if (printk_ratelimit())
+			printk("xfs_iflush_fork: copying %d\n", ifp->if_bytes);
 	}
 	if (whichfork == XFS_DATA_FORK) {
		if (unlikely(XFS_DIR_SHORTFORM_VALIDATE_ONDISK(mp, dip))) {
@@ -3414,6 +3448,7 @@ xfs_iflush_int(
		xfs_cmn_err(XFS_PTAG_IFLUSH, CE_ALERT, mp,
			"xfs_iflush: Bad inode %Lu magic number 0x%x, ptr 0x%p",
			ip->i_ino, (int) INT_GET(dip->di_core.di_magic, ARCH_CONVERT), dip);
+		xfs_inode_buf_dump(bp, BBTOB(ip->i_len));
		goto corrupt_out;
	}
	if (XFS_TEST_ERROR(ip->i_d.di_magic != XFS_DINODE_MAGIC,
@@ -3421,12 +3456,14 @@ xfs_iflush_int(
		xfs_cmn_err(XFS_PTAG_IFLUSH, CE_ALERT, mp,
			"xfs_iflush: Bad inode %Lu, ptr 0x%p, magic number 0x%x",
			ip->i_ino, ip, ip->i_d.di_magic);
+		xfs_inode_buf_dump(bp, BBTOB(ip->i_len));
		goto corrupt_out;
	}
	if ((ip->i_d.di_mode & S_IFMT) == S_IFREG) {
		if (XFS_TEST_ERROR(
		    (ip->i_d.di_format != XFS_DINODE_FMT_EXTENTS) &&
-		    (ip->i_d.di_format != XFS_DINODE_FMT_BTREE),
+		    (ip->i_d.di_format != XFS_DINODE_FMT_BTREE) &&
+		    (ip->i_d.di_format != XFS_DINODE_FMT_LOCAL),
		    mp, XFS_ERRTAG_IFLUSH_3, XFS_RANDOM_IFLUSH_3)) {
			xfs_cmn_err(XFS_PTAG_IFLUSH, CE_ALERT, mp,
				"xfs_iflush: Bad regular inode %Lu, ptr 0x%p",
diff -urp linux-2.6.12.5.orig/fs/xfs/xfs_mount.h linux-2.6.12.5/fs/xfs/xfs_mount.h
--- linux-2.6.12.5.orig/fs/xfs/xfs_mount.h	2005-08-15 09:20:18.000000000 +0900
+++ linux-2.6.12.5/fs/xfs/xfs_mount.h	2008-02-29 17:28:36.280412258 +0900
@@ -421,6 +421,7 @@ typedef struct xfs_mount {
						 * allocation */
 #define XFS_MOUNT_IHASHSIZE	0x00100000	/* inode hash table size */
 #define XFS_MOUNT_DIRSYNC	0x00200000	/* synchronous directory ops */
+#define XFS_MOUNT_NOIFILE	0x00400000
 /*
  * Default minimum read and write sizes.
diff -urp linux-2.6.12.5.orig/fs/xfs/xfs_vfsops.c linux-2.6.12.5/fs/xfs/xfs_vfsops.c
--- linux-2.6.12.5.orig/fs/xfs/xfs_vfsops.c	2005-08-15 09:20:18.000000000 +0900
+++ linux-2.6.12.5/fs/xfs/xfs_vfsops.c	2008-02-29 17:28:36.303523635 +0900
@@ -307,6 +307,9 @@ xfs_start_flags(
 	if (ap->flags & XFSMNT_DIRSYNC)
 		mp->m_flags |= XFS_MOUNT_DIRSYNC;
+	if (ap->flags & XFSMNT_NOIFILE)
+		mp->m_flags |= XFS_MOUNT_NOIFILE;
+
 	/*
 	 * no recovery flag requires a read-only mount
 	 */
@@ -1657,6 +1660,7 @@ xfs_vget(
 #define MNTOPT_64BITINODE	"inode64"	/* inodes can be allocated anywhere */
 #define MNTOPT_IKEEP		"ikeep"		/* do not free empty inode clusters */
 #define MNTOPT_NOIKEEP		"noikeep"	/* free empty inode clusters */
+#define MNTOPT_NOIFILE		"noifile"	/* do not create inlined file */
 STATIC unsigned long
 suffix_strtoul(const char *cp, char **endp, unsigned int base)
@@ -1815,6 +1819,8 @@ xfs_parseargs(
 			args->flags &= ~XFSMNT_IDELETE;
 		} else if (!strcmp(this_char, MNTOPT_NOIKEEP)) {
 			args->flags |= XFSMNT_IDELETE;
+		} else if (!strcmp(this_char, MNTOPT_NOIFILE)) {
+			args->flags |= XFSMNT_NOIFILE;
 		} else if (!strcmp(this_char, "osyncisdsync")) {
 			/* no-op, this is now the default */
 			printk("XFS: osyncisdsync is now the default, option is deprecated.\n");
@@ -1886,6 +1892,7 @@ xfs_showargs(
 		{ XFS_MOUNT_OSYNCISOSYNC,	"," MNTOPT_OSYNCISOSYNC },
 		{ XFS_MOUNT_NOLOGFLUSH,		"," MNTOPT_NOLOGFLUSH },
 		{ XFS_MOUNT_IDELETE,		"," MNTOPT_NOIKEEP },
+		{ XFS_MOUNT_NOIFILE,		"," MNTOPT_NOIFILE },
 		{ 0, NULL }
 	};
 	struct proc_xfs_info	*xfs_infop;

From owner-xfs@oss.sgi.com Fri Mar 7 02:48:54 2008
Received: with ECARTIS (v1.0.0; list xfs); Fri, 07 Mar 2008 02:49:13 -0800 (PST)
X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com
X-Spam-Level: 
X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_23, J_CHICKENPOX_72 autolearn=no version=3.3.0-r574664
Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7)
with ESMTP id m27AmnC5028460 for ; Fri, 7 Mar 2008 02:48:54 -0800 X-ASG-Debug-ID: 1204886958-4d7c01e10000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from bombadil.infradead.org (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id B60AFF38F86 for ; Fri, 7 Mar 2008 02:49:19 -0800 (PST) Received: from bombadil.infradead.org (bombadil.infradead.org [18.85.46.34]) by cuda.sgi.com with ESMTP id xNjmMU7V9WY7VOey for ; Fri, 07 Mar 2008 02:49:19 -0800 (PST) Received: from hch by bombadil.infradead.org with local (Exim 4.68 #1 (Red Hat Linux)) id 1JXa8c-0005D3-Mp for xfs@oss.sgi.com; Fri, 07 Mar 2008 10:49:18 +0000 Date: Fri, 7 Mar 2008 05:49:18 -0500 From: Christoph Hellwig To: xfs@oss.sgi.com X-ASG-Orig-Subj: [Chris.Knadle@coredump.us: assfail during unmount xfs 2.6.24.3 [from 2.6.24.y git]] Subject: [Chris.Knadle@coredump.us: assfail during unmount xfs 2.6.24.3 [from 2.6.24.y git]] Message-ID: <20080307104918.GA20000@infradead.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.17 (2007-11-01) X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org See http://www.infradead.org/rpr.html X-Barracuda-Connect: bombadil.infradead.org[18.85.46.34] X-Barracuda-Start-Time: 1204886959 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -1.82 X-Barracuda-Spam-Status: No, SCORE=-1.82 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=PR0N_SUBJECT X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44129 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.20 PR0N_SUBJECT Subject has letters around special characters (pr0n) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14795 
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: hch@infradead.org
Precedence: bulk
X-list: xfs

----- Forwarded message from Chris Knadle -----

Date: Thu, 6 Mar 2008 23:29:07 -0500
From: Chris Knadle
Subject: assfail during unmount xfs 2.6.24.3 [from 2.6.24.y git]
To: linux-kernel@vger.kernel.org

During the final unmount before reboot there was an assertion failure from XFS
leading to the debug output below; I've included what was available on the
screendump, which I handwrote + transcribed. [I also have screenshots from a
2.1 MP camera -- they're slightly ugly but usable.] CONFIG_KEXEC was not
compiled in, so I was unable to get a crashdump via SysRq.

I'm sending this along in case someone working on the XFS fs can find + fix a
bug. I'm not sending the kernel config in this first post just as not to spam
everybody with it if it's not needed -- if desired just ask and I'll send it.

Source used: 2.6.24.y from git -- 2.6.24.3. This is on my Desktop box, x86
system -- single P4 CPU @ 2.6 GHz, IDE disk attached to an onboard HighPoint
HPT370 controller, and running Debian Sid.

When replying please CC me, as I am not currently subscribed to the list.

Cheers.
   -- Chris

-- 
Chris Knadle
Chris.Knadle@coredump.us

-------------------
Cleaning up ifdown....
Deactivating swap...done.
Unmounting local filesystems...done.
Assertion failed: atomic_read(&mp->m_active_trans) == 0, file: fs/xfs/xfs_vfsops.c, line: 708
------------[ cut here ]------------
kernel BUG at fs/xfs/support/debug.c:82!
invalid opcode: 0000 [#1]
Modules linked in: nvidia(P) xt_multiport iptable_filter ip_tables x_tables ppdev lp ac battery ipv6 ext2 mbcache joydev sidewinder kqemu loop snd_emu10k1_synth snd_emux_synth snd_seq_virmidi snd_seq_midi_emul snd_emu10k1 firmware_class snd_ac97_codec ac97_bus snd_util_mem snd_hwdep parport_pc parport snd_pcm_oss snd_pcm snd_page_alloc snd_mixer_oss psmouse rtc snd_seq_dummy pcspkr serio_raw evdev snd_seq_oss snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq snd_timer snd_seq_device emu10k1_gp gameport snd soundcore button i2c_i801 i2c_core intel_agp iTCO_wdt agpgart shpchp pci_hotplug usbhid hid xfs ide_cd cdrom ata_piix ata_generic libata scsi_mod ide_disk floppy uhci_hcd usbcore hpt366 e1000 piix generic ide_core thermal processor fan
Pid: 25313, comm: mount Tainted: P (2.6.24.3-686-initrd-crk1 #1)
EIP: 0060:[<f8a67a3f>] EFLAGS: 00010292 CPU: 0
EIP is at assfail+0x1b/0x1f [xfs]
EAX: 00000061 EBX: f7faa400 ECX: ffffffff EDX: c034bda0
ESI: dfc97000 EDI: f7faa400 EBP: db175e14 ESP: db175dcc
DS: 007b ES: 007b FS: 0000 GS: 033 SS: 0068
Process mount (pid: 25313, ti=db174000 task=f7c32560 task.ti=db174000)
Stack: f8a6cc60 f8a6c6b0 f8a6a135 000002c4 f8a57e38 f7faa400 f8a57ee7 00000000
       dfc97000 f7faa400 db175e14 f8a66ebc 00000001 f8a7fd20 f7fbd400 00000000
       00000001 c0162bd2 00000001 f7fbd43c ffffffff f7fbd400 c0174c1b 00000000
Call Trace:
 [] xfs_attr_quiesce+0x46/0x61 [xfs]
 [] xfs_mntupdate+0x84/0xbb [xfs]
 [] xfs_fs_remount+0x3d/0x57 [xfs]
 [] do_remount_sb+0xb5/0x101 [xfs]
 [] do_mount+0x5a9/0x65b
 [] __alloc_pages+0x5f/0x360
 [] find_lock_page+0x15/0x67
 [] handle_mm_fault+0x25a/0x52d
 [] __alloc_pages+0x5f/0x360
 [] copy_mount_options+0x28/0x113
 [] getname+0x87/0xaf
 [] sys_mount+0x72/0xa4
 [] sysenter_past_esp+0x5f/0x85
 =======================
Code: d7 6b c7 83 c3 08 81 fb c8 02 a8 f8 75 ee 5b c3 83 ec 10 89 4c 24 0c 89 54 24 08 89 44 24 04 c7 04 24 60 cc a6 f8 e8 0d ed 6a c7 <0f> 0b eb fe 56 53 83 ec 0c 89 c6 83 e6 07 9c 5b fa 89 0c 24 89
EIP: [] assfail+0x1b/0x1f [xfs] SS:ESP 0068:db175dcc
---[ end trace 65bd78ca1bf60304 ]---
WARNING: at kernel/exit.c:917 do_exit()
Pid: 25313, comm: mount Tainted: P D 2.6.24.3-686-initrd-crk1 #1
 [] do_exit+0x669/0x7a4
 [] printk+0x1b/0x1f
 [] do_trap+0x0/0xbd
 [] do_invalid_op+0x0/0x8a
 [] assfail+0x1b/0x1f [xfs]
 [] xfs_attr_quiesce+0x46/0x61 [xfs]
 [] xfs_mntupdate+0x84/0xbb [xfs]
 [] xfs_fs_remount+0x3d/0x57 [xfs]
 [] do_remount_sb+0xb5/0x101
 [] do_mount+0x5a9/0x65b
 [] __alloc_pages+0x5f/0x360
 [] find_lock_page+0x15/0x67
 [] handle_mm_fault+0x25a/0x52d
 [] __alloc_pages+0x5f/0x360
 [] copy_mount_options+0x28/0x113
 [] getname+0x87/0xaf
 [] sys_mount+0x72/0xa4
 [] sysenter_past_esp+0x5f/0x85
 ========================
/etc/rc6.d/S60umountroot: line 17: 25313 Segmentation fault
mount $MOUNT_FORCE_OPT -n -o remount,ro -t dummytype dummydev / 2>/dev/null

----- End forwarded message -----

From owner-xfs@oss.sgi.com Fri Mar 7 03:19:02 2008
Received: with ECARTIS (v1.0.0; list xfs); Fri, 07 Mar 2008 03:19:20 -0800 (PST)
X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com
X-Spam-Level: 
X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00,MIME_8BIT_HEADER autolearn=no version=3.3.0-r574664
Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m27BJ10O029379 for ; Fri, 7 Mar 2008 03:19:02 -0800
X-ASG-Debug-ID: 1204888769-4e5a02480000-NocioJ
X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi
Received: from rn-out-0910.google.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 6FC45F394A8 for ; Fri, 7 Mar 2008 03:19:30 -0800 (PST)
Received: from rn-out-0910.google.com (rn-out-0910.google.com [64.233.170.186]) by cuda.sgi.com with ESMTP id tVSjiDERLZDqFvJo for ; Fri, 07 Mar 2008 03:19:30 -0800 (PST)
Received: by rn-out-0910.google.com with SMTP id a43so695715rne.10 for ; Fri, 07 Mar 2008 03:19:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; bh=+1+jB1icLTcLTGjk4/HZkCRBc1a2zkaxhzDHnf/jzgg=; b=ZEeOnNEgqMvPlbEAxNLDyxQnHnMasQ9VClPjZEqt6800dbvnP1OCCUjIdGArXoyudj0GJQwPkKeL2fxLD8R9IANETZNJntIwbM/syV+oJzijD4xEJvEYJyilgECbVZ5MlgIsbAYCPRpYFRf/oGjaeW6JoTjWVyrk+enpgRdwCeo= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=RmxjW5BhgWKk2oRdZhlvjTrx4mTQXabnVsE0bJvSiBp/XMmopUKUwokAgyl8Nem7gNqc7WALzMI9vARR4kD/uyOeccc+dq9+fZqDP+/BC/Q2yiBvlvT6CT6LW2DEfWrmX+HuwrlEUmXlfp2KCVQ9VgRY4KRIFoEiTpoBxUb/ZHo= Received: by 10.150.178.6 with SMTP id a6mr431236ybf.22.1204888768434; Fri, 07 Mar 2008 03:19:28 -0800 (PST) Received: by 10.150.96.5 with HTTP; Fri, 7 Mar 2008 03:19:28 -0800 (PST) Message-ID: <1a4a774c0803070319j1eb8790ek3daae4a16b3e6256@mail.gmail.com> Date: Fri, 7 Mar 2008 12:19:28 +0100 From: "=?ISO-8859-1?Q?Christian_R=F8snes?=" To: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c In-Reply-To: <1a4a774c0803060310w2642224w690ac8fa13f96ec@mail.gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline References: <1a4a774c0802130251h657a52f7lb97942e7afdf6e3f@mail.gmail.com> <20080213214551.GR155407@sgi.com> <1a4a774c0803050553h7f6294cfq41c38f34ea92ceae@mail.gmail.com> <1a4a774c0803060310w2642224w690ac8fa13f96ec@mail.gmail.com> X-Barracuda-Connect: rn-out-0910.google.com[64.233.170.186] X-Barracuda-Start-Time: 1204888771 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 
using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=
X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44131 Rule breakdown below pts rule name description ---- ---------------------- --------------------------------------------------
X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com
X-Virus-Status: Clean
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id m27BJ20O029381
X-archive-position: 14796
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: christian.rosnes@gmail.com
Precedence: bulk
X-list: xfs

On Thu, Mar 6, 2008 at 12:10 PM, Christian Røsnes wrote:
> On Wed, Mar 5, 2008 at 2:53 PM, Christian Røsnes wrote:
> > > On Wed, Feb 13, 2008 at 11:51:51AM +0100, Christian Røsnes wrote:
> > > > Over the past month I've been hit with two cases of "xfs_trans_cancel
> > > > at line 1150"
> > > > The two errors occurred on different raid sets. In both cases the
> > > > error happened during rsync from a remote server to this server,
> > > > and the local partition which reported the error was 99% full
> > > > (as reported by df -k, see below for details).
> > > >
> > > > System: Dell 2850
> > > > Mem: 4GB RAM
> > > > OS: Debian 3 (32-bit)
> > > > Kernel: 2.6.17.7 (custom compiled)
> >
> > After being hit several times by the problem mentioned above (running
> > kernel 2.6.17.7), I upgraded the kernel to version 2.6.24.3. I then ran
> > a rsync test to a 99% full partition:
> >
> > df -k:
> > /dev/sdb1   286380096 282994528   3385568  99% /data
> >
> > The rsync application will probably fail because it will most likely
> > run out of space, but I got another xfs_trans_cancel kernel message:
> >
> > Filesystem "sdb1": XFS internal error xfs_trans_cancel at line 1163 of
> > file fs/xfs/xfs_trans.c. Caller 0xc021a010
> > Pid: 11642, comm: rsync Not tainted 2.6.24.3FC #1
> > [] xfs_trans_cancel+0x5d/0xe6
> > [] xfs_mkdir+0x45a/0x493
> > [] xfs_mkdir+0x45a/0x493
> > [] xfs_acl_vhasacl_default+0x33/0x44
> > [] xfs_vn_mknod+0x165/0x243
> > [] xfs_access+0x2f/0x35
> > [] xfs_vn_mkdir+0x12/0x14
> > [] vfs_mkdir+0xa3/0xe2
> > [] sys_mkdirat+0x8a/0xc3
> > [] sys_mkdir+0x1f/0x23
> > [] syscall_call+0x7/0xb
> > =======================
> > xfs_force_shutdown(sdb1,0x8) called from line 1164 of file
> > fs/xfs/xfs_trans.c. Return address = 0xc0212690
> >
> > Filesystem "sdb1": Corruption of in-memory data detected. Shutting
> > down filesystem: sdb1
> > Please umount the filesystem, and rectify the problem(s)
>
> Actually, a single mkdir command is enough to trigger the filesystem
> shutdown when it's 99% full (according to df -k):
>
> /data# mkdir test
> mkdir: cannot create directory `test': No space left on device
>
> Filesystem "sdb1": XFS internal error xfs_trans_cancel at line 1163 of
> file fs/xfs/xfs_trans.c. Caller 0xc021a010
> Pid: 23380, comm: mkdir Not tainted 2.6.24.3FC #1
>
> [] xfs_trans_cancel+0x5d/0xe6
> [] xfs_mkdir+0x45a/0x493
> [] xfs_mkdir+0x45a/0x493
> [] xfs_acl_vhasacl_default+0x33/0x44
> [] xfs_vn_mknod+0x165/0x243
> [] xfs_access+0x2f/0x35
> [] xfs_vn_mkdir+0x12/0x14
> [] vfs_mkdir+0xa3/0xe2
> [] sys_mkdirat+0x8a/0xc3
> [] sys_mkdir+0x1f/0x23
> [] syscall_call+0x7/0xb
> [] atm_reset_addr+0xd/0x83
>
> =======================
> xfs_force_shutdown(sdb1,0x8) called from line 1164 of file
> fs/xfs/xfs_trans.c. Return address = 0xc0212690
> Filesystem "sdb1": Corruption of in-memory data detected. Shutting
> down filesystem: sdb1
> Please umount the filesystem, and rectify the problem(s)
>
> df -k
> -----
> /dev/sdb1   286380096 282994528   3385568  99% /data
>
> df -i
> -----
> /dev/sdb1   10341248 3570112 6771136  35% /data
>
> xfs_info
> --------
> meta-data=/dev/sdb1        isize=512    agcount=16, agsize=4476752 blks
>          =                 sectsz=512   attr=0
> data     =                 bsize=4096   blocks=71627792, imaxpct=25
>          =                 sunit=16     swidth=32 blks, unwritten=1
> naming   =version 2        bsize=4096
> log      =internal         bsize=4096   blocks=32768, version=2
>          =                 sectsz=512   sunit=16 blks, lazy-count=0
> realtime =none             extsz=65536  blocks=0, rtextents=0
>
> xfs_db -r -c 'sb 0' -c p /dev/sdb1
> ----------------------------------
> magicnum = 0x58465342
> blocksize = 4096
> dblocks = 71627792
> rblocks = 0
> rextents = 0
> uuid = d16489ab-4898-48c2-8345-6334af943b2d
> logstart = 67108880
> rootino = 128
> rbmino = 129
> rsumino = 130
> rextsize = 16
> agblocks = 4476752
> agcount = 16
> rbmblocks = 0
> logblocks = 32768
> versionnum = 0x3584
> sectsize = 512
> inodesize = 512
> inopblock = 8
> fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
> blocklog = 12
> sectlog = 9
> inodelog = 9
> inopblog = 3
> agblklog = 23
> rextslog = 0
> inprogress = 0
> imax_pct = 25
> icount = 3570112
> ifree = 0
> fdblocks = 847484
> frextents = 0
> uquotino = 0
> gquotino = 0
> qflags = 0
> flags = 0
> shared_vn = 0
> inoalignmt = 2
> unit = 16
> width = 32
> dirblklog = 0
> logsectlog = 0
> logsectsize = 0
> logsunit = 65536
> features2 = 0

Instrumenting the code, I found that this occurs on my system when I do a
'mkdir /data/test' on the partition in question:

in xfs_mkdir (xfs_vnodeops.c):

        error = xfs_dir_ialloc(&tp, dp, mode, 2, 0, credp, prid, resblks > 0,
                &cdp, NULL);
        if (error) {
                if (error == ENOSPC)
                        goto error_return;   <=== this is hit and then execution jumps to error_return
                goto abort_return;
        }

Is this the correct behavior for this type of situation: mkdir command fails
due to no available space on filesystem, and
xfs_mkdir goes to label error_return ? (And after this the filesystem is shutdown)

Christian

From owner-xfs@oss.sgi.com Fri Mar 7 12:33:11 2008
Received: with ECARTIS (v1.0.0; list xfs); Fri, 07 Mar 2008 12:33:54 -0800 (PST)
X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com
X-Spam-Level: 
X-Spam-Status: No, score=-1.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664
Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com
(8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m27KX8mr032191 for ; Fri, 7 Mar 2008 12:33:11 -0800 X-ASG-Debug-ID: 1204922012-712802f30000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.g-house.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id AA958F3DFAB for ; Fri, 7 Mar 2008 12:33:33 -0800 (PST) Received: from mail.g-house.de (ns2.g-housing.de [81.169.133.75]) by cuda.sgi.com with ESMTP id qDw9Fz3psY1TlmH0 for ; Fri, 07 Mar 2008 12:33:33 -0800 (PST) Received: from [89.54.188.127] (helo=[192.168.178.25]) by mail.g-house.de with esmtpa (Exim 4.63) (envelope-from ) id 1JXjFT-0000VD-UC; Fri, 07 Mar 2008 21:33:00 +0100 Date: Fri, 7 Mar 2008 21:32:57 +0100 (CET) From: Christian Kujau X-X-Sender: evil@sheep.housecafe.de To: LKML cc: xfs@oss.sgi.com X-ASG-Orig-Subj: INFO: task mount:11202 blocked for more than 120 seconds Subject: INFO: task mount:11202 blocked for more than 120 seconds Message-ID: User-Agent: Alpine 1.00 (DEB 882 2007-12-20) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII X-Barracuda-Connect: ns2.g-housing.de[81.169.133.75] X-Barracuda-Start-Time: 1204922016 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44166 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14798 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lists@nerdbynature.de Precedence: bulk X-list: xfs Hi, after upgrading from 2.6.24.1 to 2.6.25-rc3, I came 
across[0]. This warning seems to be gone now. With 2.6.25-rc4 (and the fix from [1]) the box was running fine for 20 hours or so (doing its usual jobs plus a "make randconfig && make" loop). After this, I noticed that /bin/sync would not exit anymore and remains stuck in D state. Looking around I noticed that the rsync backup jobs (rsync'ing to an xfs partition) from earlier this morning did not exit either and hung in D state. With sync hung, the following messages started to appear: [75377.756985] INFO: task sync:2697 blocked for more than 120 seconds. [75377.757579] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [75377.758211] sync D c013835c 0 2697 16457 [75377.758216] f59506c0 00000082 f4c34000 c013835c fffeffff f6c1bcb0 f5dd0000 f4c34000 [75377.758223] c04405d7 f53f7e98 f6c1bcb4 f6c1bcd0 00000000 f6c1bcb0 00000000 f7ca1090 [75377.758230] f4c34000 c044070a f6c1bcd0 f6c1bcd0 f5dd0000 00000001 f6c1bcb0 c044074b [75377.758237] Call Trace: [75377.758253] [] trace_hardirqs_on+0x9c/0x110 [75377.758269] [] rwsem_down_failed_common+0x67/0x150 [75377.758279] [] rwsem_down_read_failed+0x1a/0x24 [75377.758286] [] call_rwsem_down_read_failed+0x7/0xc [75377.758291] [] down_read_nested+0x4c/0x60 [75377.758295] [] xfs_ilock+0x5b/0xb0 [75377.758301] [] xfs_ilock+0x5b/0xb0 [75377.758306] [] xfs_sync_inodes+0x3dd/0x6b0 [75377.758314] [] _spin_unlock+0x14/0x20 [75377.758325] [] xfs_syncsub+0x18b/0x300 [75377.758330] [] _spin_unlock+0x14/0x20 [75377.758335] [] xfs_fs_sync_super+0x2b/0xd0 [75377.758342] [] sync_filesystems+0xa4/0x100 [75377.758351] [] down_read+0x38/0x50 [75377.758356] [] sync_filesystems+0xbf/0x100 [75377.758361] [] do_sync+0x33/0x70 [75377.758366] [] restore_nocheck+0x12/0x15 [75377.758371] [] sys_sync+0xa/0x10 [75377.758375] [] sysenter_past_esp+0x5f/0xa5 [75377.758402] ======================= [75377.758405] 3 locks held by sync/2697: [75377.758407] #0: (mutex){--..}, at: [] sync_filesystems+0x11/0x100 [75377.758414] #1: 
(&type->s_umount_key#22){----}, at: [] sync_filesystems+0xa4/0x100 [75377.758422] #2: (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0x5b/0xb0 The box is still up & running, although the load is increasing slightly. I've gathered some details here: http://nerdbynature.de/bits/2.6.25-rc4/ I've searched the archives for this error, but the only thing was * http://lkml.org/lkml/2008/2/12/44 [BUG] 2.6.25-rc1-git1 softlockup while bootup on powerpc ...however, I don't get "CPU stuck" messages * http://lkml.org/lkml/2008/1/29/370 Re: system hang on latest git ...but the call trace looks a lot different. Since both mailings are not so current, I'd like to go back to -rc3 and try to reproduce this one. Do you have any idea what's going on here? Thanks, Christian. [0] http://lkml.org/lkml/2008/3/2/171 [1] http://lkml.org/lkml/2008/3/4/634 -- BOFH excuse #158: Defunct processes From owner-xfs@oss.sgi.com Fri Mar 7 14:34:53 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 07 Mar 2008 14:35:15 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m27MYnut007756 for ; Fri, 7 Mar 2008 14:34:52 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA00157; Sat, 8 Mar 2008 09:35:14 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m27MZCLF89462287; Sat, 8 Mar 2008 09:35:14 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m27MZAcA89340093; Sat, 8 Mar 2008 09:35:10 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender
to dgc@sgi.com using -f Date: Sat, 8 Mar 2008 09:35:10 +1100 From: David Chinner To: Kris Kersey Cc: xfs@oss.sgi.com, Bill Vaughan Subject: Re: pdflush hang on xlog_grant_log_space() Message-ID: <20080307223510.GM155407@sgi.com> References: <47D062AF.80501@steelbox.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47D062AF.80501@steelbox.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14799 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Mar 06, 2008 at 04:31:27PM -0500, Kris Kersey wrote: > Hello, > > I'm working on a NAS product and we're currently having lock-ups that > seem to be hanging in XFS code. We're running a NAS that has 1024 NFSD > threads accessing three RAID mounts. All three mounts are running XFS > file systems. Lately we've had random lockups on these boxes and I am > now running a kernel with KDB built-in. > > The lock-up takes the form of all NFSD threads in D state with one out > of three pdflush threads in D state. The assumption can be made that > all NFSD threads are waiting on the one pdflush thread to complete. So > two times now when an NAS has gotten in this state I have accessed KDB > and ran a stack trace on the pdflush thread. Both times the thread was > stuck on xlog_grant_log_space+0xdb. Try bumping XFS_TRANS_PUSH_AIL_RESTARTS to a much larger number and seeing if the problem goes away.... Alternatively, that restart hack is backed by a "watchdog" timeout in 2.6.25-rc1, so if that is the cause of the problem perhaps the latest -rcX kernel will prevent the hang? BTW, you can get all the traces of D state threads through the sysrq interface, so you don't need to drop into kdb to get this..... Cheers, Dave. 
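[The sysrq interface Dave mentions can also be driven from a shell rather than the console keyboard; a minimal sketch, assuming root and a kernel built with CONFIG_MAGIC_SYSRQ=y (key assignments as in 2.6.25-era kernels):]

```shell
# 'w' dumps stack traces of all tasks in uninterruptible (D state) sleep;
# 'd' lists currently held locks (needs lock debugging enabled).
echo w > /proc/sysrq-trigger
echo d > /proc/sysrq-trigger
# The traces land in the kernel ring buffer / syslog:
dmesg | tail -n 100
```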
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Fri Mar 7 14:40:26 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 07 Mar 2008 14:40:41 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m27MeKnB008259 for ; Fri, 7 Mar 2008 14:40:24 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA00285; Sat, 8 Mar 2008 09:40:44 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m27MehLF88354118; Sat, 8 Mar 2008 09:40:43 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m27MeewP89332309; Sat, 8 Mar 2008 09:40:40 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Sat, 8 Mar 2008 09:40:40 +1100 From: David Chinner To: Christian Kujau Cc: LKML , xfs@oss.sgi.com Subject: Re: INFO: task mount:11202 blocked for more than 120 seconds Message-ID: <20080307224040.GV155259@sgi.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14800 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Mar 07, 2008 at 09:32:57PM +0100, Christian Kujau wrote: > Hi, > > after upgrading from 2.6.24.1 to 2.6.25-rc3, I came across[0]. 
This > warning seems to be gone now. With 2.6.25-rc4 (and the fix from [1]) > the box was running fine for 20 hours or so (doing its usual jobs plus > a "make randconfig && make" loop). > > After this, I noticed that /bin/sync would not exit anymore and > remains stuck in D state. Looking around I noticed that the rsync > backup jobs (rsync'ing to an xfs partition) from earlier this > morning did not exit either and hung in D state. With sync hung, the > following messages started to appear: > > [75377.756985] INFO: task sync:2697 blocked for more than 120 seconds. > [75377.757579] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables > this message. > [75377.758211] sync D c013835c 0 2697 16457 > [75377.758216] f59506c0 00000082 f4c34000 c013835c fffeffff f6c1bcb0 > f5dd0000 f4c34000 [75377.758223] c04405d7 f53f7e98 f6c1bcb4 f6c1bcd0 > 00000000 f6c1bcb0 00000000 f7ca1090 [75377.758230] f4c34000 c044070a > f6c1bcd0 f6c1bcd0 f5dd0000 00000001 f6c1bcb0 c044074b [75377.758237] Call > Trace: > [75377.758253] [] trace_hardirqs_on+0x9c/0x110 > [75377.758269] [] rwsem_down_failed_common+0x67/0x150 > [75377.758279] [] rwsem_down_read_failed+0x1a/0x24 > [75377.758286] [] call_rwsem_down_read_failed+0x7/0xc > [75377.758291] [] down_read_nested+0x4c/0x60 > [75377.758295] [] xfs_ilock+0x5b/0xb0 > [75377.758301] [] xfs_ilock+0x5b/0xb0 > [75377.758306] [] xfs_sync_inodes+0x3dd/0x6b0 > [75377.758314] [] _spin_unlock+0x14/0x20 > [75377.758325] [] xfs_syncsub+0x18b/0x300 > [75377.758330] [] _spin_unlock+0x14/0x20 > [75377.758335] [] xfs_fs_sync_super+0x2b/0xd0 > [75377.758342] [] sync_filesystems+0xa4/0x100 > [75377.758351] [] down_read+0x38/0x50 > [75377.758356] [] sync_filesystems+0xbf/0x100 > [75377.758361] [] do_sync+0x33/0x70 > [75377.758366] [] restore_nocheck+0x12/0x15 > [75377.758371] [] sys_sync+0xa/0x10 > [75377.758375] [] sysenter_past_esp+0x5f/0xa5 > [75377.758402] ======================= > [75377.758405] 3 locks held by sync/2697: > [75377.758407] #0: 
(mutex){--..}, at: [] > sync_filesystems+0x11/0x100 > [75377.758414] #1: (&type->s_umount_key#22){----}, at: [] > sync_filesystems+0xa4/0x100 > [75377.758422] #2: (&(&ip->i_iolock)->mr_lock){----}, at: [] > xfs_ilock+0x5b/0xb0 Well, if that is hung there, something else must be holding on to the iolock it's waiting on. What are the other D state processes in the machine? Also, the iolock can be held across I/O so it's possible you've lost an I/O. Any I/O errors in the syslog? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Fri Mar 7 14:46:29 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 07 Mar 2008 14:46:50 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m27MkQ4D008836 for ; Fri, 7 Mar 2008 14:46:28 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA00515; Sat, 8 Mar 2008 09:46:50 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m27MkmLF89127194; Sat, 8 Mar 2008 09:46:49 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m27Mkj3o89196247; Sat, 8 Mar 2008 09:46:45 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Sat, 8 Mar 2008 09:46:45 +1100 From: David Chinner To: Chris Knadle Cc: linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: assfail during unmount xfs 2.6.24.3 [from 2.6.24.y git] Message-ID: <20080307224645.GW155259@sgi.com> References: <200803062329.10486.Chris.Knadle@coredump.us> 
Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200803062329.10486.Chris.Knadle@coredump.us> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14801 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Mar 06, 2008 at 11:29:07PM -0500, Chris Knadle wrote: > During the final unmount before reboot there was an assertion failure from XFS > leading to the debug output below; I've included what was available on the > screendump, which I handwrote + transcribed. [I also have screenshots from a > 2.1 MP camera -- they're slightly ugly but usable.] CONFIG_KEXEC was not > compiled in, so I was unable to get a crashdump via SysRq. I'm sending this > along in case someone working on the XFS fs can find + fix a bug. I'm not > sending the kernel config in this first post just as not to spam everybody > with it if it's not needed -- if desired just ask and I'll send it. > Source used: 2.6.24.y from git -- 2.6.24.3. > > This is on my Desktop box, x86 system -- single P4 CPU @ 2.6 GHz, IDE disk > attached to an onboard HighPoint HPT370 controller, and running Debian Sid. > > When replying please CC me, as I am not currently subscribed to the list. ...... > Assertion failed: atomic_read(&mp->m_active_trans) == 0, file: > fs/xfs/xfs_vfsops.c, line: 708 Known problem. Race in the VFS w.r.t. read-only remounts: http://marc.info/?l=linux-kernel&m=120106649923499&w=2 The fix for the problem lies outside XFS: http://marc.info/?l=linux-kernel&m=120109304227035&w=2 Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Fri Mar 7 15:46:40 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 07 Mar 2008 15:47:00 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.5 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_72 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m27NkdQC011216 for ; Fri, 7 Mar 2008 15:46:40 -0800 X-ASG-Debug-ID: 1204933603-6bc1001c0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.g-house.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 3CE6266E095; Fri, 7 Mar 2008 15:46:44 -0800 (PST) Received: from mail.g-house.de (ns2.g-housing.de [81.169.133.75]) by cuda.sgi.com with ESMTP id v1WjRs74E44zVkcn; Fri, 07 Mar 2008 15:46:44 -0800 (PST) Received: from [89.54.188.127] (helo=[192.168.178.25]) by mail.g-house.de with esmtpa (Exim 4.63) (envelope-from ) id 1JXmGv-00073J-AV; Sat, 08 Mar 2008 00:46:41 +0100 Date: Sat, 8 Mar 2008 00:46:40 +0100 (CET) From: Christian Kujau X-X-Sender: evil@sheep.housecafe.de To: David Chinner cc: LKML , xfs@oss.sgi.com X-ASG-Orig-Subj: Re: INFO: task mount:11202 blocked for more than 120 seconds Subject: Re: INFO: task mount:11202 blocked for more than 120 seconds In-Reply-To: <20080307224040.GV155259@sgi.com> Message-ID: References: <20080307224040.GV155259@sgi.com> User-Agent: Alpine 1.00 (DEB 882 2007-12-20) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Barracuda-Connect: ns2.g-housing.de[81.169.133.75] X-Barracuda-Start-Time: 1204933608 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 
QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44181 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14802 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lists@nerdbynature.de Precedence: bulk X-list: xfs On Sat, 8 Mar 2008, David Chinner wrote: > Well, if that is hung there, something else must be holding on to > the iolock it's waiting on. What are the other D state processes in the > machine? I have 7 processes in D state so far: $ ps auxww [....] root 9844 0.0 0.0 0 0 ? D Mar06 0:22 [pdflush] root 2697 0.0 0.0 4712 460 ? D Mar07 0:00 sync root 8342 0.0 0.0 1780 440 ? D Mar07 0:01 /bin/rm -rf /data/md1/stuff root 12494 0.0 0.0 11124 1228 ? D Mar07 0:14 /usr/bin/rsync root 15008 0.0 0.0 4712 460 ? D Mar07 0:00 sync root 11202 0.0 0.0 5012 764 ? D Mar07 0:00 mount -o remount,ro /data/md1 root 15936 0.0 0.0 4712 460 ? D Mar07 0:00 sync At one point I did a sysrq-D and put the results in: http://nerdbynature.de/bits/2.6.25-rc4/hung_task/kern.log.gz (grep for "SysRq : Show Locks Held" and "SysRq : Show Blocked State") > Also, the iolock can be held across I/O so it's possible you've lost an I/O. > Any I/O errors in the syslog? No, no I/O errors at all. See the kern.log above, I could even do dd(1) from the md1 (dm-crypt on raid1), no errors either. thanks, Christian. -- BOFH excuse #233: TCP/IP UDP alarm threshold is set too low. 
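[The D-state survey in the message above can be narrowed without scanning the full ps listing; a small sketch, assuming a procps-style ps(1) whose STAT column starts with 'D' for uninterruptible sleep:]

```shell
# Print the header plus every process in uninterruptible (D state) sleep,
# along with the kernel function it is currently waiting in (WCHAN).
ps -eo pid,stat,wchan:30,args | awk 'NR==1 || $2 ~ /^D/'
```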
From owner-xfs@oss.sgi.com Fri Mar 7 16:21:12 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 07 Mar 2008 16:21:31 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_62 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m280L8DQ016909 for ; Fri, 7 Mar 2008 16:21:10 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA02613; Sat, 8 Mar 2008 11:21:31 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m280LRLF89349587; Sat, 8 Mar 2008 11:21:28 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m280LOOp89466845; Sat, 8 Mar 2008 11:21:24 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Sat, 8 Mar 2008 11:21:24 +1100 From: David Chinner To: IWAMOTO Toshihiro Cc: xfs@oss.sgi.com Subject: Re: [PATCH] prototype file data inode inlining Message-ID: <20080308002124.GN155407@sgi.com> References: <20080307093411.4B1912DC9B2@mail.valinux.co.jp> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080307093411.4B1912DC9B2@mail.valinux.co.jp> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14803 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Mar 07, 2008 at 06:34:09PM +0900, IWAMOTO Toshihiro wrote: > Hi, > > I've done a prototype implementation of file 
data inlining in inodes a > while ago. It was originally meant to solve a performance problem > with a large number of small files at some customer site. > Although I measured some performance gains, a different workaround has > been adopted due to the patch quality problem. > > As I'm not asking for inclusion, the patch hasn't been ported to the > current kernel version. This patch might be useful if someone has a > similar performance problem and would like to see if file inlining > helps or not. Interesting. I'm not going to comment on the code, just the overall design and implementation. Problems: - data loss on crash is unacceptable - this is an on-disk format change - it needs to be implemented either as a mkfs option with a specific superblock feature bit, or as a mount option with a version 3 inode and a superblock feature bit to indicate inodes with data in them have been created. - local -> extent conversion occurs at copy-in time, not writeback time, so using the normal read/write paths through ->get_blocks() will fail here in xfs_bmapi(): 4793 if (XFS_IFORK_FORMAT(ip, whichfork) == XFS_DINODE_FMT_LOCAL) { 4794 >>>>>>>> ASSERT(wr && tp); 4795 if ((error = xfs_bmap_local_to_extents(tp, ip, 4796 firstblock, total, &logflags, whichfork))) 4797 goto error0; 4798 } because on a normal read or write (delayed allocation) we are not doing allocation and hence do not have an open transaction the first time we come through here. Just avoiding this conversion and returning zero maps if we are not writing will not help in the delayed allocation case. I note that you hacked around this by special casing the inline format ->get_blocks() callout to copy the data into the page and marking it mapped and uptodate. I think this is the wrong approach and is not the right layer at which to make this distinction - the special casing needs to be done at higher layers, not in the block mapping function. 
I think for inline data, we'd do best to special case this as high up the read/write paths as possible. e.g. for read() type operations intercept in xfs_read() and just do what we need to do there for populating the single page cache page. For write, we should let it go through the normal delayed allocation mechanisms, only converting to local format during ->writepage() if there's a single block extent and it fits in the data fork. This also handles the truncate case nicely. For mmap operations, we need to handle the inline case separately to the normal ->readpage case, similar to the xfs_read() case. ->readpages should never occur on an inline data inode. > Some random notes and the patch itself follows. > > Inlined file data are written from xfs_page_state_convert(). > The xfs_trans related operations in that function is to get inode > written on disk and isn't for crash consistency. Which is the exact opposite of what they are supposed to be used for. Given that the next thing that happens after data write in the writeback path is ->write_inode(), forcing the inode into the log for pure data changes is unnecessary. We just need to format the data into the inode during data writeback. > Small files are made inlined when created. Non inlined files don't > get inlined when they are truncated. As I inferred above, I think this is the wrong approach. Start the inodes in extent format just like they currently are, and only convert in writeback. This means no changes to the write path or delayed allocation handling. That is, only the disk format should care if the data is inline or not; everything in memory still treats data as block based extents. i.e. the only time we do anything w.r.t local data format is reading the inode off disk and writing it back to disk. 
The only issue here is that extent->local conversion requires a free transaction, not an allocation transaction, but that should not be difficult to handle as we can log the inode complete with inline data in the free transaction to make that conversion atomic. > xfs_bmap_local_to_extents() has been modified to work with file data, > but logging isn't implemented. A machine crash can cause data > corruption. There are two ways to do inline->extent safely from a crash recovery perspective. Method 1: Use an Intent/Done transaction pair The way this needs to be done is via a pair of transactions. The first allocation transaction remains the same, but needs a different type - an "allocation intent" rather than an "allocation" transaction. On data I/O completion, we then need an " allocation complete" transaction that signals that the data is on disk and the allocation intent is now permanent. That means we can change state in memory and log it to disk before the data write is done, but it won't get replayed on crash unless the allocation completion transaction is also in the log after the data is safely on disk. Hence we don't overwrite data in the inode during recovery if there is no copy of it elsewhere. This needs modifications to the recovery code to understand the new transaction types correctly. Method 2: Log the data We can log any object that is held in a xfs_buf_t. During conversion, we could simply build an xfs_buf_t that points to the page that holds the data and log that. The complexity here is that the buffer needs to point to the inodes address space, not the address space of the buftarg where all the metadata resides. The xfs_buf_t in this case should only exist for the life of the data I/O; once the data I/O is complete we can tear it down and go back to treating the page normally. > O_SYNC may behave incorrectly. ->write_inode(SYNC) should handle it just fine. > Use of attribute forks isn't considered and likely has issues. 
If you don't change the way the attribute fork handling works, it should be just fine. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Fri Mar 7 17:54:10 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 07 Mar 2008 17:54:30 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.7 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m281s7E5019951 for ; Fri, 7 Mar 2008 17:54:09 -0800 X-ASG-Debug-ID: 1204941273-19bf00fe0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.g-house.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 0A74E66EAB5; Fri, 7 Mar 2008 17:54:33 -0800 (PST) Received: from mail.g-house.de (ns2.g-housing.de [81.169.133.75]) by cuda.sgi.com with ESMTP id X5qr7JufVhXfPbla; Fri, 07 Mar 2008 17:54:33 -0800 (PST) Received: from [89.54.188.127] (helo=[192.168.178.25]) by mail.g-house.de with esmtpa (Exim 4.63) (envelope-from ) id 1JXoGd-0002ZQ-4x; Sat, 08 Mar 2008 02:54:31 +0100 Date: Sat, 8 Mar 2008 02:54:26 +0100 (CET) From: Christian Kujau X-X-Sender: evil@sheep.housecafe.de To: David Chinner cc: LKML , xfs@oss.sgi.com X-ASG-Orig-Subj: Re: INFO: task mount:11202 blocked for more than 120 seconds Subject: Re: INFO: task mount:11202 blocked for more than 120 seconds In-Reply-To: Message-ID: References: <20080307224040.GV155259@sgi.com> User-Agent: Alpine 1.00 (DEB 882 2007-12-20) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Barracuda-Connect: ns2.g-housing.de[81.169.133.75] X-Barracuda-Start-Time: 1204941278 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, 
SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44189 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14804 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lists@nerdbynature.de Precedence: bulk X-list: xfs On Sat, 8 Mar 2008, Christian Kujau wrote: > I have 7 processes in D state so far: FWIW, it's 100% reproducible with 2.6.25-rc3 too...sigh :-\ So, the last working kernel for me is 2.6.24.1 - that's a lot of bisecting and I fear that compile errors will invalidate the bisecting results again or make it impossible at all....I'll try anyway....tomorrow... C. -- BOFH excuse #285: Telecommunications is upgrading. From owner-xfs@oss.sgi.com Fri Mar 7 20:01:04 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 07 Mar 2008 20:01:24 -0800 (PST) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m28412YY025086 for ; Fri, 7 Mar 2008 20:01:04 -0800 X-ASG-Debug-ID: 1204948890-3aa3004d0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 9909CF42B83 for ; Fri, 7 Mar 2008 20:01:30 -0800 (PST) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id YFIOtX0ZT3alSfD6 for ; Fri, 07 Mar 2008 20:01:30 -0800 (PST) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with 
cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id ABA0D18004487 for ; Fri, 7 Mar 2008 22:00:56 -0600 (CST) Message-ID: <47D20F78.7000103@sandeen.net> Date: Fri, 07 Mar 2008 22:00:56 -0600 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: xfs-oss X-ASG-Orig-Subj: [PATCH, RFC] - remove mountpoint UUID code Subject: [PATCH, RFC] - remove mountpoint UUID code Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1204948891 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44195 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14805 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs It looks like all of the below is unused... and according to Nathan, "dont think it even got used/implemented anywhere, but i think it was meant to be an auto-mount kinda thing... such that when you look up at that point, it knows to mount the device with that uuid there, if its not already it was never really written anywhere ... just an idea in doug doucettes brain i think." Think it'll ever go anywhere, or should it get pruned? The below builds; not at all tested, until I get an idea if it's worth doing.
Need to double check that some structures might not need padding
out to keep things compatible/consistent...

 dmapi/xfs_dm.c    |    2 --
 xfs_attr_leaf.c   |    6 +-----
 xfs_bmap.c        |    4 ----
 xfs_dinode.h      |    2 --
 xfs_inode.c       |    8 --------
 xfs_inode.h       |    1 -
 xfs_inode_item.c  |   53 +++++++++++------------------------------------------
 xfs_inode_item.h  |   26 ++++++++------------------
 xfs_itable.c      |    2 --
 xfs_log_recover.c |   10 ++--------
 xfsidbg.c         |    2 +-
 11 files changed, 23 insertions(+), 93 deletions(-)

Index: linux-2.6-xfs/fs/xfs/dmapi/xfs_dm.c
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/dmapi/xfs_dm.c
+++ linux-2.6-xfs/fs/xfs/dmapi/xfs_dm.c
@@ -363,7 +363,6 @@ xfs_dip_to_stat(
 		buf->dt_blocks = 0;
 		break;
 	case XFS_DINODE_FMT_LOCAL:
-	case XFS_DINODE_FMT_UUID:
 		buf->dt_rdev = 0;
 		buf->dt_blksize = mp->m_sb.sb_blocksize;
 		buf->dt_blocks = 0;
@@ -431,7 +430,6 @@ xfs_ip_to_stat(
 		buf->dt_blocks = 0;
 		break;
 	case XFS_DINODE_FMT_LOCAL:
-	case XFS_DINODE_FMT_UUID:
 		buf->dt_rdev = 0;
 		buf->dt_blksize = mp->m_sb.sb_blocksize;
 		buf->dt_blocks = 0;
Index: linux-2.6-xfs/fs/xfs/xfs_attr_leaf.c
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_attr_leaf.c
+++ linux-2.6-xfs/fs/xfs/xfs_attr_leaf.c
@@ -155,13 +155,9 @@ xfs_attr_shortform_bytesfit(xfs_inode_t
 
 	offset = (XFS_LITINO(mp) - bytes) >> 3; /* rounded down */
 
-	switch (dp->i_d.di_format) {
-	case XFS_DINODE_FMT_DEV:
+	if (dp->i_d.di_format == XFS_DINODE_FMT_DEV) {
 		minforkoff = roundup(sizeof(xfs_dev_t), 8) >> 3;
 		return (offset >= minforkoff) ? minforkoff : 0;
-	case XFS_DINODE_FMT_UUID:
-		minforkoff = roundup(sizeof(uuid_t), 8) >> 3;
-		return (offset >= minforkoff) ? minforkoff : 0;
 	}
 
 	if (!(mp->m_flags & XFS_MOUNT_ATTR2)) {
Index: linux-2.6-xfs/fs/xfs/xfs_bmap.c
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_bmap.c
+++ linux-2.6-xfs/fs/xfs/xfs_bmap.c
@@ -3532,7 +3532,6 @@ xfs_bmap_forkoff_reset(
 {
 	if (whichfork == XFS_ATTR_FORK &&
 	    (ip->i_d.di_format != XFS_DINODE_FMT_DEV) &&
-	    (ip->i_d.di_format != XFS_DINODE_FMT_UUID) &&
 	    (ip->i_d.di_format != XFS_DINODE_FMT_BTREE) &&
 	    ((mp->m_attroffset >> 3) > ip->i_d.di_forkoff)) {
 		ip->i_d.di_forkoff = mp->m_attroffset >> 3;
@@ -4000,9 +3999,6 @@ xfs_bmap_add_attrfork(
 	case XFS_DINODE_FMT_DEV:
 		ip->i_d.di_forkoff = roundup(sizeof(xfs_dev_t), 8) >> 3;
 		break;
-	case XFS_DINODE_FMT_UUID:
-		ip->i_d.di_forkoff = roundup(sizeof(uuid_t), 8) >> 3;
-		break;
 	case XFS_DINODE_FMT_LOCAL:
 	case XFS_DINODE_FMT_EXTENTS:
 	case XFS_DINODE_FMT_BTREE:
Index: linux-2.6-xfs/fs/xfs/xfs_dinode.h
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_dinode.h
+++ linux-2.6-xfs/fs/xfs/xfs_dinode.h
@@ -88,7 +88,6 @@ typedef struct xfs_dinode
 		xfs_dir2_sf_t	di_dir2sf;	/* shortform directory v2 */
 		char		di_c[1];	/* local contents */
 		__be32		di_dev;		/* device for S_IFCHR/S_IFBLK */
-		uuid_t		di_muuid;	/* mount point value */
 		char		di_symlink[1];	/* local symbolic link */
 	} di_u;
 	union {
@@ -150,7 +149,6 @@ typedef enum xfs_dinode_fmt
 					/* LNK: di_symlink */
 	XFS_DINODE_FMT_EXTENTS,		/* DIR, REG, LNK: di_bmx */
 	XFS_DINODE_FMT_BTREE,		/* DIR, REG, LNK: di_bmbt */
-	XFS_DINODE_FMT_UUID		/* MNT: di_uuid */
 } xfs_dinode_fmt_t;
 
 /*
Index: linux-2.6-xfs/fs/xfs/xfs_inode.c
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_inode.c
+++ linux-2.6-xfs/fs/xfs/xfs_inode.c
@@ -2970,14 +2970,6 @@ xfs_iflush_fork(
 		}
 		break;
 
-	case XFS_DINODE_FMT_UUID:
-		if (iip->ili_format.ilf_fields & XFS_ILOG_UUID) {
-			ASSERT(whichfork == XFS_DATA_FORK);
-			memcpy(&dip->di_u.di_muuid, &ip->i_df.if_u2.if_uuid,
-				sizeof(uuid_t));
-		}
-		break;
-
 	default:
 		ASSERT(0);
 		break;
Index: linux-2.6-xfs/fs/xfs/xfs_inode_item.c
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_inode_item.c
+++ linux-2.6-xfs/fs/xfs/xfs_inode_item.c
@@ -70,8 +70,7 @@ xfs_inode_item_size(
 	switch (ip->i_d.di_format) {
 	case XFS_DINODE_FMT_EXTENTS:
 		iip->ili_format.ilf_fields &=
-			~(XFS_ILOG_DDATA | XFS_ILOG_DBROOT |
-			  XFS_ILOG_DEV | XFS_ILOG_UUID);
+			~(XFS_ILOG_DDATA | XFS_ILOG_DBROOT | XFS_ILOG_DEV);
 		if ((iip->ili_format.ilf_fields & XFS_ILOG_DEXT) &&
 		    (ip->i_d.di_nextents > 0) &&
 		    (ip->i_df.if_bytes > 0)) {
@@ -86,8 +85,7 @@ xfs_inode_item_size(
 		ASSERT(ip->i_df.if_ext_max ==
 		       XFS_IFORK_DSIZE(ip) / (uint)sizeof(xfs_bmbt_rec_t));
 		iip->ili_format.ilf_fields &=
-			~(XFS_ILOG_DDATA | XFS_ILOG_DEXT |
-			  XFS_ILOG_DEV | XFS_ILOG_UUID);
+			~(XFS_ILOG_DDATA | XFS_ILOG_DEXT | XFS_ILOG_DEV);
 		if ((iip->ili_format.ilf_fields & XFS_ILOG_DBROOT) &&
 		    (ip->i_df.if_broot_bytes > 0)) {
 			ASSERT(ip->i_df.if_broot != NULL);
@@ -112,8 +110,7 @@ xfs_inode_item_size(
 
 	case XFS_DINODE_FMT_LOCAL:
 		iip->ili_format.ilf_fields &=
-			~(XFS_ILOG_DEXT | XFS_ILOG_DBROOT |
-			  XFS_ILOG_DEV | XFS_ILOG_UUID);
+			~(XFS_ILOG_DEXT | XFS_ILOG_DBROOT | XFS_ILOG_DEV);
 		if ((iip->ili_format.ilf_fields & XFS_ILOG_DDATA) &&
 		    (ip->i_df.if_bytes > 0)) {
 			ASSERT(ip->i_df.if_u1.if_data != NULL);
@@ -126,14 +123,7 @@ xfs_inode_item_size(
 
 	case XFS_DINODE_FMT_DEV:
 		iip->ili_format.ilf_fields &=
-			~(XFS_ILOG_DDATA | XFS_ILOG_DBROOT |
-			  XFS_ILOG_DEXT | XFS_ILOG_UUID);
-		break;
-
-	case XFS_DINODE_FMT_UUID:
-		iip->ili_format.ilf_fields &=
-			~(XFS_ILOG_DDATA | XFS_ILOG_DBROOT |
-			  XFS_ILOG_DEXT | XFS_ILOG_DEV);
+			~(XFS_ILOG_DDATA | XFS_ILOG_DBROOT | XFS_ILOG_DEXT);
 		break;
 
 	default:
@@ -319,8 +309,7 @@ xfs_inode_item_format(
 	switch (ip->i_d.di_format) {
 	case XFS_DINODE_FMT_EXTENTS:
 		ASSERT(!(iip->ili_format.ilf_fields &
-			 (XFS_ILOG_DDATA | XFS_ILOG_DBROOT |
-			  XFS_ILOG_DEV | XFS_ILOG_UUID)));
+			 (XFS_ILOG_DDATA | XFS_ILOG_DBROOT | XFS_ILOG_DEV)));
 		if (iip->ili_format.ilf_fields & XFS_ILOG_DEXT) {
 			ASSERT(ip->i_df.if_bytes > 0);
 			ASSERT(ip->i_df.if_u1.if_extents != NULL);
@@ -369,8 +358,7 @@ xfs_inode_item_format(
 
 	case XFS_DINODE_FMT_BTREE:
 		ASSERT(!(iip->ili_format.ilf_fields &
-			 (XFS_ILOG_DDATA | XFS_ILOG_DEXT |
-			  XFS_ILOG_DEV | XFS_ILOG_UUID)));
+			 (XFS_ILOG_DDATA | XFS_ILOG_DEXT | XFS_ILOG_DEV)));
 		if (iip->ili_format.ilf_fields & XFS_ILOG_DBROOT) {
 			ASSERT(ip->i_df.if_broot_bytes > 0);
 			ASSERT(ip->i_df.if_broot != NULL);
@@ -385,8 +373,7 @@ xfs_inode_item_format(
 
 	case XFS_DINODE_FMT_LOCAL:
 		ASSERT(!(iip->ili_format.ilf_fields &
-			 (XFS_ILOG_DBROOT | XFS_ILOG_DEXT |
-			  XFS_ILOG_DEV | XFS_ILOG_UUID)));
+			 (XFS_ILOG_DBROOT | XFS_ILOG_DEXT | XFS_ILOG_DEV)));
 		if (iip->ili_format.ilf_fields & XFS_ILOG_DDATA) {
 			ASSERT(ip->i_df.if_bytes > 0);
 			ASSERT(ip->i_df.if_u1.if_data != NULL);
@@ -411,21 +398,9 @@ xfs_inode_item_format(
 
 	case XFS_DINODE_FMT_DEV:
 		ASSERT(!(iip->ili_format.ilf_fields &
-			 (XFS_ILOG_DBROOT | XFS_ILOG_DEXT |
-			  XFS_ILOG_DDATA | XFS_ILOG_UUID)));
+			 (XFS_ILOG_DBROOT | XFS_ILOG_DEXT | XFS_ILOG_DDATA)));
 		if (iip->ili_format.ilf_fields & XFS_ILOG_DEV) {
-			iip->ili_format.ilf_u.ilfu_rdev =
-				ip->i_df.if_u2.if_rdev;
-		}
-		break;
-
-	case XFS_DINODE_FMT_UUID:
-		ASSERT(!(iip->ili_format.ilf_fields &
-			 (XFS_ILOG_DBROOT | XFS_ILOG_DEXT |
-			  XFS_ILOG_DDATA | XFS_ILOG_DEV)));
-		if (iip->ili_format.ilf_fields & XFS_ILOG_UUID) {
-			iip->ili_format.ilf_u.ilfu_uuid =
-				ip->i_df.if_u2.if_uuid;
+			iip->ili_format.ilf_rdev = ip->i_df.if_u2.if_rdev;
 		}
 		break;
 
@@ -1088,10 +1063,7 @@ xfs_inode_item_format_convert(
 		in_f->ilf_asize = in_f32->ilf_asize;
 		in_f->ilf_dsize = in_f32->ilf_dsize;
 		in_f->ilf_ino = in_f32->ilf_ino;
-		/* copy biggest field of ilf_u */
-		memcpy(in_f->ilf_u.ilfu_uuid.__u_bits,
-		       in_f32->ilf_u.ilfu_uuid.__u_bits,
-		       sizeof(uuid_t));
+		in_f->ilf_rdev = in_f32->ilf_rdev;
 		in_f->ilf_blkno = in_f32->ilf_blkno;
 		in_f->ilf_len = in_f32->ilf_len;
 		in_f->ilf_boffset = in_f32->ilf_boffset;
@@ -1106,10 +1078,7 @@ xfs_inode_item_format_convert(
 		in_f->ilf_asize = in_f64->ilf_asize;
 		in_f->ilf_dsize = in_f64->ilf_dsize;
 		in_f->ilf_ino = in_f64->ilf_ino;
-		/* copy biggest field of ilf_u */
-		memcpy(in_f->ilf_u.ilfu_uuid.__u_bits,
-		       in_f64->ilf_u.ilfu_uuid.__u_bits,
-		       sizeof(uuid_t));
+		in_f->ilf_rdev = in_f64->ilf_rdev;
 		in_f->ilf_blkno = in_f64->ilf_blkno;
 		in_f->ilf_len = in_f64->ilf_len;
 		in_f->ilf_boffset = in_f64->ilf_boffset;
Index: linux-2.6-xfs/fs/xfs/xfs_itable.c
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_itable.c
+++ linux-2.6-xfs/fs/xfs/xfs_itable.c
@@ -111,7 +111,6 @@ xfs_bulkstat_one_iget(
 		buf->bs_blocks = 0;
 		break;
 	case XFS_DINODE_FMT_LOCAL:
-	case XFS_DINODE_FMT_UUID:
 		buf->bs_rdev = 0;
 		buf->bs_blksize = mp->m_sb.sb_blocksize;
 		buf->bs_blocks = 0;
@@ -186,7 +185,6 @@ xfs_bulkstat_one_dinode(
 		buf->bs_blocks = 0;
 		break;
 	case XFS_DINODE_FMT_LOCAL:
-	case XFS_DINODE_FMT_UUID:
 		buf->bs_rdev = 0;
 		buf->bs_blksize = mp->m_sb.sb_blocksize;
 		buf->bs_blocks = 0;
Index: linux-2.6-xfs/fs/xfs/xfs_inode_item.h
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_inode_item.h
+++ linux-2.6-xfs/fs/xfs/xfs_inode_item.h
@@ -31,10 +31,7 @@ typedef struct xfs_inode_log_format {
 	__uint16_t		ilf_asize;	/* size of attr d/ext/root */
 	__uint16_t		ilf_dsize;	/* size of data/ext/root */
 	__uint64_t		ilf_ino;	/* inode number */
-	union {
-		__uint32_t	ilfu_rdev;	/* rdev value for dev inode*/
-		uuid_t		ilfu_uuid;	/* mount point value */
-	} ilf_u;
+	__uint32_t		ilf_rdev;	/* rdev value for dev inode*/
 	__int64_t		ilf_blkno;	/* blkno of inode buffer */
 	__int32_t		ilf_len;	/* len of inode buffer */
 	__int32_t		ilf_boffset;	/* off of inode in buffer */
@@ -48,10 +45,7 @@ typedef struct xfs_inode_log_format_32 {
 	__uint16_t		ilf_asize;	/* size of attr d/ext/root */
 	__uint16_t		ilf_dsize;	/* size of data/ext/root */
 	__uint64_t		ilf_ino;	/* inode number */
-	union {
-		__uint32_t	ilfu_rdev;	/* rdev value for dev inode*/
-		uuid_t		ilfu_uuid;	/* mount point value */
-	} ilf_u;
+	__uint32_t		ilf_rdev;	/* rdev value for dev inode*/
 	__int64_t		ilf_blkno;	/* blkno of inode buffer */
 	__int32_t		ilf_len;	/* len of inode buffer */
 	__int32_t		ilf_boffset;	/* off of inode in buffer */
@@ -66,10 +60,7 @@ typedef struct xfs_inode_log_format_64 {
 	__uint16_t		ilf_dsize;	/* size of data/ext/root */
 	__uint32_t		ilf_pad;	/* pad for 64 bit boundary */
 	__uint64_t		ilf_ino;	/* inode number */
-	union {
-		__uint32_t	ilfu_rdev;	/* rdev value for dev inode*/
-		uuid_t		ilfu_uuid;	/* mount point value */
-	} ilf_u;
+	__uint32_t		ilf_rdev;	/* rdev value for dev inode*/
 	__int64_t		ilf_blkno;	/* blkno of inode buffer */
 	__int32_t		ilf_len;	/* len of inode buffer */
 	__int32_t		ilf_boffset;	/* off of inode in buffer */
@@ -83,15 +74,15 @@ typedef struct xfs_inode_log_format_64 {
 #define	XFS_ILOG_DEXT	0x004	/* log i_df.if_extents */
 #define	XFS_ILOG_DBROOT	0x008	/* log i_df.i_broot */
 #define	XFS_ILOG_DEV	0x010	/* log the dev field */
-#define	XFS_ILOG_UUID	0x020	/* log the uuid field */
+/*			0x020*/	/* unused */
 #define	XFS_ILOG_ADATA	0x040	/* log i_af.if_data */
 #define	XFS_ILOG_AEXT	0x080	/* log i_af.if_extents */
 #define	XFS_ILOG_ABROOT	0x100	/* log i_af.i_broot */
 
 #define	XFS_ILOG_NONCORE	(XFS_ILOG_DDATA | XFS_ILOG_DEXT | \
 				 XFS_ILOG_DBROOT | XFS_ILOG_DEV | \
-				 XFS_ILOG_UUID | XFS_ILOG_ADATA | \
-				 XFS_ILOG_AEXT | XFS_ILOG_ABROOT)
+				 XFS_ILOG_ADATA | XFS_ILOG_AEXT | \
+				 XFS_ILOG_ABROOT)
 
 #define	XFS_ILOG_DFORK		(XFS_ILOG_DDATA | XFS_ILOG_DEXT | \
 				 XFS_ILOG_DBROOT)
@@ -101,9 +92,8 @@ typedef struct xfs_inode_log_format_64 {
 
 #define	XFS_ILOG_ALL		(XFS_ILOG_CORE | XFS_ILOG_DDATA | \
 				 XFS_ILOG_DEXT | XFS_ILOG_DBROOT | \
-				 XFS_ILOG_DEV | XFS_ILOG_UUID | \
-				 XFS_ILOG_ADATA | XFS_ILOG_AEXT | \
-				 XFS_ILOG_ABROOT)
+				 XFS_ILOG_DEV | XFS_ILOG_ADATA | \
+				 XFS_ILOG_AEXT | XFS_ILOG_ABROOT)
 
 #define	XFS_ILI_HOLD		0x1
 #define	XFS_ILI_IOLOCKED_EXCL	0x2
Index: linux-2.6-xfs/fs/xfs/xfs_log_recover.c
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_log_recover.c
+++ linux-2.6-xfs/fs/xfs/xfs_log_recover.c
@@ -2420,14 +2420,8 @@ xlog_recover_do_inode_trans(
 	}
 
 	fields = in_f->ilf_fields;
-	switch (fields & (XFS_ILOG_DEV | XFS_ILOG_UUID)) {
-	case XFS_ILOG_DEV:
-		dip->di_u.di_dev = cpu_to_be32(in_f->ilf_u.ilfu_rdev);
-		break;
-	case XFS_ILOG_UUID:
-		dip->di_u.di_muuid = in_f->ilf_u.ilfu_uuid;
-		break;
-	}
+	if (fields & XFS_ILOG_DEV)
+		dip->di_u.di_dev = cpu_to_be32(in_f->ilf_rdev);
 
 	if (in_f->ilf_size == 2)
 		goto write_inode_buffer;
Index: linux-2.6-xfs/fs/xfs/xfsidbg.c
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfsidbg.c
+++ linux-2.6-xfs/fs/xfs/xfsidbg.c
@@ -3424,7 +3424,7 @@ xfs_inode_item_print(xfs_inode_log_item_
 	kdb_printf("dsize %d, asize %d, rdev 0x%x\n",
 		ilip->ili_format.ilf_dsize,
 		ilip->ili_format.ilf_asize,
-		ilip->ili_format.ilf_u.ilfu_rdev);
+		ilip->ili_format.ilf_rdev);
 	kdb_printf("blkno 0x%Lx len 0x%x boffset 0x%x\n",
 		ilip->ili_format.ilf_blkno,
 		ilip->ili_format.ilf_len,
Index: linux-2.6-xfs/fs/xfs/xfs_inode.h
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_inode.h
+++ linux-2.6-xfs/fs/xfs/xfs_inode.h
@@ -79,7 +79,6 @@ typedef struct xfs_ifork {
 		char		if_inline_data[XFS_INLINE_DATA];
 						/* very small file data */
 		xfs_dev_t	if_rdev;	/* dev number if special */
-		uuid_t		if_uuid;	/* mount point value */
 	} if_u2;
 } xfs_ifork_t;

From owner-xfs@oss.sgi.com Fri Mar 7 20:31:20 2008
Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com
 [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7)
 with SMTP id
m284VFtO030723 for ; Fri, 7 Mar 2008 20:31:18 -0800
Message-ID: <47D216A3.9040406@sgi.com>
Date: Sat, 08 Mar 2008 15:31:31 +1100
From: Ian Costello
Reply-To: ianc@melbourne.sgi.com
To: Eric Sandeen
CC: xfs-oss
Subject: Re: [PATCH, RFC] - remove mounpoint UUID code
References: <47D20F78.7000103@sandeen.net>
In-Reply-To: <47D20F78.7000103@sandeen.net>
X-original-sender: ianc@sgi.com
X-list: xfs

CXFS does use this to mount filesystems, i.e. on a client read the sb
get the uuid and use the uuid to lookup the MDS to request the mount...

Having said that if it is removed then it will force us to look at
another method to mount a cxfs filesystem, and also remove the necessity
for cxfs clients to not read the superblock (which is the only metadata
cxfs clients read off disk at this point)...

Regards,
-- 
Ian Costello		Phone: +61 3 9963 1952
R&D Engineer		Mobile: +61 417 508 522
CXFS MultiOS

Eric Sandeen wrote:
> It looks like all of the below is unused... and according
> to Nathan,
>
> "dont think it even got used/implemented anywhere, but i think it
> was meant to be an auto-mount kinda thing... such that when you look
> up at that point, it knows to mount the device with that uuid there,
> if its not already it was never really written anywhere ... just an
> idea in doug doucettes brain i think."
>
> Think it'll ever go anywhere, or should it get pruned?
>
> The below builds; not at all tested, until I get an idea if it's worth
> doing.  Need to double check that some structures might not need padding
> out to keep things compatible/consistent...
>
> [full patch quoted above; snipped]

From owner-xfs@oss.sgi.com Fri Mar 7 20:33:07 2008
Message-ID: <47D2171D.7070202@sandeen.net>
Date: Fri, 07 Mar 2008 22:33:33 -0600
From: Eric Sandeen
To: ianc@melbourne.sgi.com
CC: xfs-oss
Subject: Re: [PATCH, RFC] - remove mounpoint UUID code
References: <47D20F78.7000103@sandeen.net> <47D216A3.9040406@sgi.com>
In-Reply-To: <47D216A3.9040406@sgi.com>
X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -1.42 X-Barracuda-Spam-Status: No, SCORE=-1.42 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=COMMA_SUBJECT X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44197 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.60 COMMA_SUBJECT Subject is like 'Re: FDSDS, this is a subject' X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14807 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Ian Costello wrote: > CXFS does use this to mount filesystems, i.e. on a client read the sb > get the uuid and use the uuid to lookup the MDS to request the mount... Well, this isn't removing the sb uuid... it's this special file type XFS_DINODE_FMT_UUID... which is cxfs using? -Eric > Having said that if it is removed then it will force us to look at > another method to mount a cxfs filesystem, and also remove the necessity > for cxfs clients to not read the superblock (which is the only metadata > cxfs clients read off disk at this point)... 
> > Regards,

From owner-xfs@oss.sgi.com Fri Mar 7 20:40:34 2008
Message-ID: <47D218CF.8090909@sgi.com>
Date: Sat, 08 Mar 2008 15:40:47 +1100
From: Ian Costello
To: Eric Sandeen
CC: xfs-oss
Subject: Re: [PATCH, RFC] - remove mounpoint UUID code
In-Reply-To: <47D2171D.7070202@sandeen.net>

Eric,

My bad, cxfs uses the sb_uuid, should have looked a little closer at the
patch. I assumed the worst from the heading (must have filtered the
"point" out of the heading :)

Eric Sandeen wrote:
> Ian Costello wrote:
>> CXFS does use this to mount filesystems, i.e. on a client read the sb
>> get the uuid and use the uuid to lookup the MDS to request the mount...
>
> Well, this isn't removing the sb uuid...
> it's this special file type
> XFS_DINODE_FMT_UUID... which is cxfs using?
>
> -Eric
>
>> Having said that if it is removed then it will force us to look at
>> another method to mount a cxfs filesystem, and also remove the necessity
>> for cxfs clients to not read the superblock (which is the only metadata
>> cxfs clients read off disk at this point)...
>>
>> Regards,

Thanks,

--
Ian Costello                Phone: +61 3 9963 1952
R&D Engineer                Mobile: +61 417 508 522
CXFS MultiOS

From owner-xfs@oss.sgi.com Fri Mar 7 20:59:24 2008
Message-ID: <47D21D2A.90902@sandeen.net>
Date: Fri, 07 Mar 2008 22:59:22 -0600
From: Eric Sandeen
To: ianc@melbourne.sgi.com
CC: xfs-oss
Subject: Re: [PATCH, RFC] - remove mounpoint UUID code
References:
<47D20F78.7000103@sandeen.net> <47D216A3.9040406@sgi.com> <47D2171D.7070202@sandeen.net> <47D218CF.8090909@sgi.com>
In-Reply-To: <47D218CF.8090909@sgi.com>

Ian Costello wrote:
> Eric,
>
> My bad, cxfs uses the sb_uuid, should have looked a little closer at the
> patch. I assumed the worst from the heading (must have filtered the
> "point" out of the heading :)

Nah, I wouldn't do that to you ;) (and you wouldn't have to accept it if
I did try!)
:)

-Eric
From owner-xfs@oss.sgi.com Sat Mar 8 03:54:01 2008
Message-ID: <20080308115427.GB27606@infradead.org>
Date: Sat, 8 Mar 2008 06:54:27 -0500
From: Christoph Hellwig
To: Eric Sandeen
Cc: xfs-oss
Subject: Re: [PATCH, RFC] - remove mounpoint UUID code
In-Reply-To: <47D20F78.7000103@sandeen.net>
On Fri, Mar 07, 2008 at 10:00:56PM -0600, Eric Sandeen wrote:
> It looks like all of the below is unused... and according
> to Nathan,
>
> "dont think it even got used/implemented anywhere, but i think it
> was meant to be an auto-mount kinda thing... such that when you look
> up at that point, it knows to mount the device with that uuid there,
> if its not already  it was never really written anywhere ... just an
> idea in doug doucettes brain i think."
>
> Think it'll ever go anywhere, or should it get pruned?
>
> The below builds; not at all tested, until I get an idea if it's worth
> doing. Need to double check that some structures might not need padding
> out to keep things compatible/consistent...

This looks fine to me. But please keep the XFS_DINODE_FMT_UUID enum
value and add a big comment close to it documenting what it was, and
maybe an approximate removal date, so people who care about it can look
it up in the SCM history easily.
From owner-xfs@oss.sgi.com Sat Mar 8 22:14:53 2008
Date: Sun, 9 Mar 2008 07:15:09 +0100 (CET)
From: Christian Kujau
To: David Chinner
cc: LKML, xfs@oss.sgi.com
Subject: 2.6.25-rc hangs (was: INFO: task mount:11202 blocked for more than 120 seconds)
References: <20080307224040.GV155259@sgi.com>
On Sat, 8 Mar 2008, Christian Kujau wrote:
> FWIW, it's 100% reproducible with 2.6.25-rc3 too...sigh :-\
> So, the last working kernel for me is 2.6.24.1 - that's a lot of bisecting
> and I fear that compile errors will invalidate the bisecting results again or
> make it impossible at all....I'll try anyway....tomorrow...

Bisecting failed as expected :-(

I tried to follow the git-bisect manpage (and have successfully used
bisect in the past a few times), but I think ~5700 revisions between
2.6.24 and 2.6.25 are just too much fuzz. The bisect logs so far, with
my comments in between:

git-bisect start
# 2.6.25-rc4, known to be bad:
# bad: [84c6f6046c5a2189160a8f0dca8b90427bf690ea] x86_64: make ptrace always sign-extend orig_ax to 64 bits
git-bisect bad 84c6f6046c5a2189160a8f0dca8b90427bf690ea
# 2.6.24, good:
# good: [49914084e797530d9baaf51df9eda77babc98fa8] Linux 2.6.24
git-bisect good 49914084e797530d9baaf51df9eda77babc98fa8
# 2.6.24, hard lockup during boot, bad:
# bad: [bd45ac0c5daae35e7c71138172e63df5cf644cf6] Merge branch 'linux-2.6'
git-bisect bad bd45ac0c5daae35e7c71138172e63df5cf644cf6

I marked the last one bad, because I could not boot any more. As it's a
headless box, I could not get more details. It did not even respond to
sysrq-b. After marking this bad, I compiled and booted again - same
result, hard lockup.
So I tried again:

git bisect reset
git-bisect start
# 2.6.25-rc4, known to be bad:
# bad: [84c6f6046c5a2189160a8f0dca8b90427bf690ea] x86_64: make ptrace always sign-extend orig_ax to 64 bits
git-bisect bad 84c6f6046c5a2189160a8f0dca8b90427bf690ea
# 2.6.24, good:
# good: [49914084e797530d9baaf51df9eda77babc98fa8] Linux 2.6.24
git-bisect good 49914084e797530d9baaf51df9eda77babc98fa8
# 2.6.24, hard lockup during boot, marking good anyway:
# good: [bd45ac0c5daae35e7c71138172e63df5cf644cf6] Merge branch 'linux-2.6'
git-bisect good bd45ac0c5daae35e7c71138172e63df5cf644cf6
# lockup
# bad: [f0f1b3364ae7f48084bdf2837fb979ff59622523] Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6
git-bisect bad f0f1b3364ae7f48084bdf2837fb979ff59622523

Although I could not boot with bd45ac0c5daae35e7c71138172e63df5cf644cf6,
I marked it "good", as the lockup is totally unrelated to my problem.
However, the box locks up as soon as I'm using the device-mapper. This
time it does respond to sysrq-b.

But still: I'm unable to diagnose the system hang [0] and I fear that
2.6.25 is released and for the first time since ages I'd have to skip a
release... Help!

Christian.

[0] http://lkml.org/lkml/2008/3/7/308
--
BOFH excuse #288:
Hard drive sleeping. Let it wake up on its own...
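[Editorial aside: one git feature the log above does not use is `git bisect skip`, which lets a revision that cannot be tested (e.g. a kernel that hard-locks during boot) drop out of the search instead of being mislabelled good or bad. Whether the git on that box was new enough to have it is an assumption on my part. A minimal sketch on a throwaway repository:]

```shell
# Sketch: `git bisect skip` for untestable commits, demonstrated on a
# tiny throwaway repo. On a real kernel tree you would substitute the
# actual SHAs for the bad/good endpoints.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo && cd demo
git config user.email bisect@example.invalid
git config user.name  bisect-demo

# Five commits; pretend one in the middle cannot even boot.
for i in 1 2 3 4 5; do
    echo "rev $i" > state
    git add state
    git commit -qm "commit $i"
done

git bisect start HEAD HEAD~4   # HEAD is bad, the oldest commit is good
git bisect skip                # current candidate "doesn't boot": skip it
git bisect log                 # the skip is recorded, the search continues
```

[On the kernel tree the same idea would apply to the unbootable merge commit: skipping it keeps it out of the good/bad bookkeeping entirely, rather than forcing an arbitrary "good" that can send the bisection down the wrong half.]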
From owner-xfs@oss.sgi.com Sun Mar 9 09:44:27 2008
Message-ID: <47D413FA.50602@sandeen.net>
Date: Sun, 09 Mar 2008 11:44:42 -0500
From: Eric Sandeen
To: Christian Kujau
CC: David Chinner, LKML, xfs@oss.sgi.com
Subject: Re: 2.6.25-rc hangs
References: <20080307224040.GV155259@sgi.com>
Christian Kujau wrote:
> On Sat, 8 Mar 2008, Christian Kujau wrote:
>> FWIW, it's 100% reproducible with 2.6.25-rc3 too...sigh :-\
>> So, the last working kernel for me is 2.6.24.1 - that's a lot of bisecting
>> and I fear that compile errors will invalidate the bisecting results again or
>> make it impossible at all....I'll try anyway....tomorrow...
>
> Bisecting failed as expected :-(
> I tried to follow the git-bisect manpage (and have successfully used
> bisect in the past a few times), but I think ~5700 revisions between
> 2.6.24 and 2.6.25 are just too much fuzz. The bisect logs so far, with
> my comments in between:

Christian, what is the test you are using for the bisect?
Thanks,
-Eric

From owner-xfs@oss.sgi.com Sun Mar 9 11:05:34 2008
Date: Sun, 9 Mar 2008 19:05:53 +0100 (CET)
From: Christian Kujau
To: Eric Sandeen
cc: David Chinner, LKML, xfs@oss.sgi.com
Subject: Re: 2.6.25-rc hangs
In-Reply-To: <47D413FA.50602@sandeen.net>
References: <20080307224040.GV155259@sgi.com> <47D413FA.50602@sandeen.net>
On Sun, 9 Mar 2008, Eric Sandeen wrote:
>> I tried to follow the git-bisect manpage (and have successfully used
>> bisect in the past a few times), but I think ~5700 revisions between
>> 2.6.24 and 2.6.25 are just too much fuzz. The bisect logs so far, with
>> my comments in between:
>
> Christian, what is the test you are using for the bisect?

Sorry, I don't understand: which "test" do you mean? I did the bisect as
per the bisect log and then just rebooted. Which gave me no usable
results yet. I'm trying to boot 2.6.25-rc1 in a few moments and see if
the hang is there as well. If it is, I'll start bisecting again and hope
that "halfway between -rc1 and .24" will be a bootable kernel this
time...

The "error" I'm trying to chase is a system "hang", but no instant
lockup. I can reproduce this by increasing disk I/O. I did this
primarily with rsync from different filesystems to my backup XFS
partition. After a few minutes, the INFO: messages [0] appeared.

Thanks,
Christian.
[0] http://lkml.org/lkml/2008/3/7/308
--
BOFH excuse #2: solar flares

From owner-xfs@oss.sgi.com Sun Mar 9 14:34:31 2008
Message-ID: <20080309213441.GQ155407@sgi.com>
Date: Mon, 10 Mar 2008 08:34:41 +1100
From: David Chinner
To: Christian Kujau
Cc: David Chinner, LKML, xfs@oss.sgi.com, dm-devel@redhat.com
Subject: Re: INFO: task mount:11202 blocked for more than 120 seconds
References: <20080307224040.GV155259@sgi.com>

[adding dm-devel to cc list]

On Sat, Mar 08, 2008 at
12:46:40AM +0100, Christian Kujau wrote:
> On Sat, 8 Mar 2008, David Chinner wrote:
> > Well, if that is hung there, something else must be holding on to
> > the iolock it's waiting on. What are the other D state processes in the
> > machine?
>
> I have 7 processes in D state so far:
>
> $ ps auxww
> [....]
> root  9844  0.0  0.0      0     0 ?  D  Mar06  0:22  [pdflush]
> root  2697  0.0  0.0   4712   460 ?  D  Mar07  0:00  sync
> root  8342  0.0  0.0   1780   440 ?  D  Mar07  0:01  /bin/rm -rf /data/md1/stuff
> root 12494  0.0  0.0  11124  1228 ?  D  Mar07  0:14  /usr/bin/rsync
> root 15008  0.0  0.0   4712   460 ?  D  Mar07  0:00  sync
> root 11202  0.0  0.0   5012   764 ?  D  Mar07  0:00  mount -o remount,ro /data/md1
> root 15936  0.0  0.0   4712   460 ?  D  Mar07  0:00  sync
>
> At one point I did a sysrq-D and put the results in:
> http://nerdbynature.de/bits/2.6.25-rc4/hung_task/kern.log.gz
> (grep for "SysRq : Show Locks Held" and "SysRq : Show Blocked State")

Ok, this looks like a block layer issue. Everything is waiting in
io_schedule() and so in places we are blocked holding locked inodes.
Hence sync, pdflush, etc are hung waiting for the inode locks to be
released. e.g.:

INFO: task rm:8342 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
rm            D ee08de8c     0  8342   8199
       f5eebd80 00000086 c0380a16 ee08de8c ee08de8c 00000000 ee08de94 c200ebd0
       c043ef2b c0146237 c043f1d2 c0146210 ee08de8c 00000000 00000000 db122110
       c01464ca 00000002 c1865b00 0000000c 00000000 ee3acd60 c012c700 c200ebec
Call Trace:
 [] dm_table_unplug_all+0x26/0x40
 [] io_schedule+0xb/0x20
 [] sync_page+0x27/0x40
 [] __wait_on_bit+0x42/0x70
 [] sync_page+0x0/0x40
 [] wait_on_page_bit+0x5a/0x60
 [] wake_bit_function+0x0/0x60
 [] truncate_inode_pages_range+0x223/0x360
 [] truncate_inode_pages+0x17/0x20
 [] generic_delete_inode+0xef/0x100
 [] iput+0x5c/0x70
 [] do_unlinkat+0xf7/0x160
 [] sysenter_past_esp+0x9a/0xa5
 [] trace_hardirqs_on+0x9c/0x110
 [] sysenter_past_esp+0x5f/0xa5
=======================
no locks held by rm/8342.
And rsync is stuck in congestion_wait():

SysRq : Show Blocked State
  task                PC stack   pid father
rsync         D 00000292     0 12494      1
       f5dc7b40 00000086 00000000 00000292 f697bcdc 00735f5c 00000000 00000600
       c043efd9 c013820d f532ed60 c05c0f04 f5cc1b58 00735f5c c0122900 f532ed60
       c05c0c00 c053eb04 0000000a c043ef0b c01515f8 00000000 f532ed60 c012c6c0
Call Trace:
 [] schedule_timeout+0x49/0xc0
 [] mark_held_locks+0x3d/0x70
 [] process_timeout+0x0/0x10
 [] io_schedule_timeout+0xb/0x20
 [] congestion_wait+0x58/0x80
 [] autoremove_wake_function+0x0/0x40
 [] balance_dirty_pages_ratelimited_nr+0xb2/0x240
 [] generic_file_buffered_write+0x1a8/0x650
 [] xfs_log_move_tail+0x3e/0x180
 [] _spin_lock+0x29/0x40
 [] xfs_write+0x7ac/0x8a0
 [] core_sys_select+0x21/0x350
 [] xfs_file_aio_write+0x5c/0x70
 [] do_sync_write+0xd5/0x120
 [] restore_nocheck+0x12/0x15
 [] autoremove_wake_function+0x0/0x40
 [] dnotify_parent+0x35/0x90
 [] do_sync_write+0x0/0x120
 [] vfs_write+0x9f/0x140
 [] sys_write+0x41/0x70
 [] sysenter_past_esp+0x5f/0xa5

And this rsync process will definitely be holding the iolock on an XFS
inode here (which is why other processes are hanging on an inode iolock).

> > Also, the iolock can be held across I/O so it's possible you've lost an
> > I/O. Any I/O errors in the syslog?
>
> No, no I/O errors at all. See the kern.log above, I could even do dd(1)
> from the md1 (dm-crypt on raid1), no errors either.

Oh, dm-crypt. Well, I'd definitely start looking there. XFS has a
history of exposing dm-crypt bugs, and these hangs appear to be I/O
congestion/scheduling related and not XFS. Also, we haven't changed
anything related to plug/unplug of block devices in XFS recently, so
that also points to some other change as well...

Cheers,
Dave.
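[Editorial aside: the evidence being read above - which tasks are in uninterruptible (D) sleep and their kernel stacks - can be collected on a live box with standard tools. A minimal sketch; it assumes a Linux system with sysrq enabled, and the sysrq part needs root, so it is left as a comment:]

```shell
# List tasks stuck in uninterruptible (D) sleep - the ones that show up
# in "Show Blocked State" dumps like the traces above.
ps -eo state,pid,comm | awk '$1 == "D" { print }'

# Ask the kernel to log the stacks of all blocked tasks (root only),
# then read them back out of the ring buffer:
#   echo w > /proc/sysrq-trigger
#   dmesg | tail -n 100
```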
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Sun Mar 9 14:39:36 2008
Message-ID: <47D45926.5070806@gmail.com>
Date: Sun, 09 Mar 2008 22:39:50 +0100
From: Rekrutacja
To: xfs@oss.sgi.com
Subject: xfs for free hosting on linux ? performance questions
Date: Sun, 09 Mar 2008 22:39:50 +0100
From: Rekrutacja
To: xfs@oss.sgi.com
Subject: xfs for free hosting on linux ? performance questions

hello,

i'm going to create xfs on a RAID 5 array consisting of 5 drives, each
500 GB; it will be software RAID made with mdadm on linux. i run free
hosting, and the array is only for users' files (the system and such
are on ext3). what options do you recommend while creating and while
mounting? i had performance problems while testing this setup with
bonnie++ and postmark (postmark especially).
data will be in many directories; users' files are in
/var/www/users/username and i have more than 100 000 accounts. there is
also a policy of max 3 MB files, so most of my files are around 50 KB i
think. so knowing all this, what would you advise me, if i may ask? my
current idea was like this:

mkfs.xfs -l size=128m,lazy-count=1
mount -o nobarrier,noatime,logbufs=8,logbsize=256k

this will have full backup, so i'm more after performance, not
stability, so if you can give me some performance tips...

(postmark run on an array made from 4 disks, not 5, during the test,
with these parameters - number of files 10k, transactions 10k, subdirs
20k - was giving very very slow reads, around 200 KB/s, while
reiser4/btrfs were like 2 MB/s reads, but even ext3 was faster - any
idea why? i tested on 2.6.24.3 and 2.6.25-rc4)

i keep seeing tests like this: http://www.t2-project.org/zine/1/

also, i have a real life example: a 3-disk RAID-0 array with tweaked
xfs is as fast as just one disk with reiser4 (my free hosting with
millions of files). i know reiser4 is not stable (i also experienced
this), so i'm asking if you know any additional tips to tweak xfs, or
maybe you know why it has performance problems in some situations.
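[Editorial aside, not part of the thread: one tuning knob not mentioned in the question above is telling mkfs.xfs the md stripe geometry so allocations align with the array. A minimal sketch of the arithmetic, assuming a hypothetical 64 KiB md chunk size (mdadm's default at the time; the thread never states the actual chunk size):]

```python
# Illustrative stripe-geometry arithmetic for mkfs.xfs on md RAID5.
# ASSUMPTION: a 64 KiB md chunk size; adjust chunk_kib to the real array.
chunk_kib = 64                     # assumed md chunk size
ndisks = 5
data_disks = ndisks - 1            # RAID5 spends one disk's worth on parity

su = f"{chunk_kib}k"               # XFS stripe unit = md chunk size
sw = data_disks                    # XFS stripe width = number of data disks

print(f"mkfs.xfs -d su={su},sw={sw} -l size=128m,lazy-count=1 /dev/md0")
# -> mkfs.xfs -d su=64k,sw=4 -l size=128m,lazy-count=1 /dev/md0
```

The `-d su=,sw=` options are real mkfs.xfs parameters; only the chunk-size value here is an assumption.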
thanks in advance

From owner-xfs@oss.sgi.com Sun Mar 9 15:18:48 2008
Date: Mon, 10 Mar 2008 09:18:58 +1100
From: David Chinner
To: Chris Knadle
Cc: David Chinner, linux-kernel@vger.kernel.org, xfs@oss.sgi.com, hch@infradead.org
Subject: Re: assfail during unmount xfs 2.6.24.3 [from 2.6.24.y git]
Message-ID: <20080309221858.GS155407@sgi.com>
On Sat, Mar 08, 2008 at 03:55:08PM -0500, Chris Knadle wrote:
> On Friday 07 March 2008, David Chinner wrote:
> > On Thu, Mar 06, 2008 at 11:29:07PM -0500, Chris Knadle wrote:
> > > During the final unmount before reboot there was an assertion failure
> > > from XFS
> ...
> > > When replying please CC me, as I am not currently subscribed to the list.
> ...
> > > Assertion failed: atomic_read(&mp->m_active_trans) == 0, file:
> > > fs/xfs/xfs_vfsops.c, line: 708
>
> Thank you for taking the time to look and reply to this.
>
> > Known problem. Race in the VFS w.r.t. read-only remounts:
> >
> > http://marc.info/?l=linux-kernel&m=120106649923499&w=2
>
> At the bottom of the above message there's a patch which is not currently
> merged in either the linux-2.6.24.y.git or linux-2.6.git trees. That's
> perhaps as it should be, but I would like to know if it's worth trying to
> merge the patch locally as an attempted workaround until such time as
> 2.6.25 is released and stable.

Sure, the patch is harmless - I've had it running in my local tree for
quite some time now.

> > The fix for the problem lies outside XFS:
> >
> > http://marc.info/?l=linux-kernel&m=120109304227035&w=2
>
> If I understand the above correctly, it sounds like the per-vfsmount
> writer count is going to be a new feature committed to 2.6.25, but
> probably not backported to prior kernel trees.

Right.

> If this feature was added at 2.6.23.9 then if I understand correctly
> it's only an issue between that version and 2.6.25; is that correct?

Only if the per-vfsmount writer counts got merged in 2.6.25-rcX. I'm
not sure that they did. Christoph?

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Sun Mar 9 15:59:15 2008
Date: Mon, 10 Mar 2008 09:59:25 +1100
From: David Chinner
To: Christian Røsnes
Cc: David Chinner, xfs@oss.sgi.com
Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c
Message-ID: <20080309225925.GT155407@sgi.com>
On Wed, Mar 05, 2008 at 02:53:18PM +0100, Christian Røsnes wrote:
> On Wed, Feb 13, 2008 at 10:45 PM, David Chinner wrote:
>
> After being hit several times by the problem mentioned above (running
> kernel 2.6.17.7), I upgraded the kernel to version 2.6.24.3. I then
> ran an rsync test to a 99% full partition:
>
> df -k:
> /dev/sdb1  286380096  282994528  3385568  99%  /data
>
> The rsync application will probably fail because it will most likely
> run out of space, but I got another xfs_trans_cancel kernel message:
>
> Filesystem "sdb1": XFS internal error xfs_trans_cancel at line 1163 of
> file fs/xfs/xfs_trans.c. Caller 0xc021a010
> Pid: 11642, comm: rsync Not tainted 2.6.24.3FC #1
>  [] xfs_trans_cancel+0x5d/0xe6
>  [] xfs_mkdir+0x45a/0x493
>  [] xfs_mkdir+0x45a/0x493
>  [] xfs_acl_vhasacl_default+0x33/0x44
>  [] xfs_vn_mknod+0x165/0x243
>  [] xfs_access+0x2f/0x35
>  [] xfs_vn_mkdir+0x12/0x14
>  [] vfs_mkdir+0xa3/0xe2
>  [] sys_mkdirat+0x8a/0xc3
>  [] sys_mkdir+0x1f/0x23
>  [] syscall_call+0x7/0xb
> =======================
> xfs_force_shutdown(sdb1,0x8) called from line 1164 of file
> fs/xfs/xfs_trans.c. Return address = 0xc0212690
> Filesystem "sdb1": Corruption of in-memory data detected. Shutting
> down filesystem: sdb1
> Please umount the filesystem, and rectify the problem(s)

Ok, so the problem still exists.

> Trying to umount /dev/sdb1 fails (umount just hangs).

That shouldn't happen. Any output in the log when it hung? What were
the blocked process stack traces (/proc/sysrq-trigger is your friend)?

> Rebooting the system seems to hang also - and I believe the kernel
> outputs this message when trying to umount /dev/sdb1:
>
> xfs_force_shutdown(sdb1,0x1) called from line 420 of file fs/xfs/xfs_rw.c.
> Return address = 0xc021cb21

It's already been shut down, right?
An unmount should not trigger more of these warnings...

> > After waiting 5 minutes I power-cycle the system to bring it back up.
> >
> > After the restart, I ran:
> >
> > xfs_check /dev/sdb1
> >
> > (there was no output from xfs_check).
> >
> > Could this be the same problem I experienced with 2.6.17.7 ?

Yes, it likely is. Can you apply the patch below and reproduce the
problem? I can't reproduce it locally, so I'll need you to apply test
patches to isolate the error.

I suspect an xfs_dir_canenter()/xfs_dir_createname() with resblks == 0
issue, and the patch below will tell us if this is the case. It
annotates the error paths for both create and mkdir (the two places
I've seen this error occur), and what I am expecting to see is
something like:

xfs_create: dir_enter w/ 0 resblks ok.
xfs_create: dir_createname error 28

Cheers,

Dave.

---
 fs/xfs/xfs_vnodeops.c |   23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

Index: 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_vnodeops.c	2008-02-22 17:40:04.000000000 +1100
+++ 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c	2008-03-10 09:53:43.658179381 +1100
@@ -1886,12 +1886,17 @@ xfs_create(
 	if (error)
 		goto error_return;

-	if (resblks == 0 && (error = xfs_dir_canenter(tp, dp, name, namelen)))
-		goto error_return;
+	if (!resblks) {
+		error = xfs_dir_canenter(tp, dp, name, namelen);
+		if (error)
+			goto error_return;
+		printk(KERN_WARNING "xfs_create: dir_enter w/ 0 resblks ok.\n");
+	}

 	error = xfs_dir_ialloc(&tp, dp, mode, 1, rdev, credp,
 			prid, resblks > 0, &ip, &committed);
 	if (error) {
+		printk(KERN_WARNING "xfs_create: dir_ialloc error %d\n", error);
 		if (error == ENOSPC)
 			goto error_return;
 		goto abort_return;
@@ -1921,6 +1926,7 @@ xfs_create(
 					resblks - XFS_IALLOC_SPACE_RES(mp) : 0);
 	if (error) {
 		ASSERT(error != ENOSPC);
+		printk(KERN_WARNING "xfs_create: dir_createname error %d\n", error);
 		goto abort_return;
 	}
 	xfs_ichgtime(dp, XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG);
@@ -1955,6 +1961,7 @@ xfs_create(
 	error = xfs_bmap_finish(&tp, &free_list, &committed);
 	if (error) {
 		xfs_bmap_cancel(&free_list);
+		printk(KERN_WARNING "xfs_create: xfs_bmap_finish error %d\n", error);
 		goto abort_rele;
 	}
@@ -2727,9 +2734,12 @@ xfs_mkdir(
 	if (error)
 		goto error_return;

-	if (resblks == 0 &&
-	    (error = xfs_dir_canenter(tp, dp, dir_name, dir_namelen)))
-		goto error_return;
+	if (!resblks) {
+		error = xfs_dir_canenter(tp, dp, dir_name, dir_namelen);
+		if (error)
+			goto error_return;
+		printk(KERN_WARNING "xfs_mkdir: dir_enter w/ 0 resblks ok.\n");
+	}
 	/*
 	 * create the directory inode.
 	 */
@@ -2737,6 +2747,7 @@ xfs_mkdir(
 			0, credp, prid, resblks > 0, &cdp, NULL);
 	if (error) {
+		printk(KERN_WARNING "xfs_mkdir: dir_ialloc error %d\n", error);
 		if (error == ENOSPC)
 			goto error_return;
 		goto abort_return;
@@ -2761,6 +2772,7 @@ xfs_mkdir(
 					&first_block, &free_list, resblks ?
 					resblks - XFS_IALLOC_SPACE_RES(mp) : 0);
 	if (error) {
+		printk(KERN_WARNING "xfs_mkdir: dir_createname error %d\n", error);
 		ASSERT(error != ENOSPC);
 		goto error1;
 	}
@@ -2805,6 +2817,7 @@ xfs_mkdir(
 	error = xfs_bmap_finish(&tp, &free_list, &committed);
 	if (error) {
+		printk(KERN_WARNING "xfs_mkdir: bmap_finish error %d\n", error);
 		IRELE(cdp);
 		goto error2;
 	}

From owner-xfs@oss.sgi.com Sun Mar 9 17:02:40 2008
Date: Mon, 10 Mar 2008 00:02:15 +0000
From: pg_lxra@lxra.for.sabi.co.UK (Peter Grandi)
Subject: Re: xfs for free hosting on linux ? performance questions
Message-ID: <18388.31367.314074.624135@tree.ty.sabi.co.uk>
[ ... ]

> data will be in many directories, users files are in
> /var/www/users/username and i have more than 100 000
> accounts. also a policy to make max 3 MB files, so most of my
> files are around 50 KB i think.

This is a much better situation than the frequent inane one where
someone asks about using a filesystem as a database for small records.
But a ''source files'' filesystem is still a bit of a challenge.

> [ ... ] mount -o nobarrier,noatime,logbufs=8,logbsize=256k [ ... ]

Even with backups, 'nobarrier' means begging for trouble.

> [ ... ] number of files 10k, transactions 10k, subdirs 20k,
> was giving very very slow reads, around 200KB/s, while
> reiser4/btrfs were like 2MB/s reads, but even ext3 was faster
> - any idea why? i tested on 2.6.24.3 and 2.6.25-rc4 ) [ ... ]
> that 3 disks RAID-0 array with tweaked xfs is as fast as just
> one disk with reiser4 (my free hosting with millions of files)
> [ ... ]

Just use the by now very well tested ReiserFS version 3 for that, just
as you would use XFS for bulk streaming of large files.

Recently, on the same 4x(1+1) RAID10 f2, I did some (single threaded,
single file reading and writing) tests involving various file systems
with both large files (2-12 2GB ones) and a directory tree containing
lots of small Java source files (6.5GB in 50k directories containing
150k files). While most file systems could reach something like
250MB/s writing and 430MB/s reading with the small set of large files
(with XFS a few dozen MB/s ahead of most of the others, and it would
have been more so multithreaded), on the directory tree ReiserFS did
around 57MB/s, JFS around 36MB/s, and XFS around 20-25MB/s.

There are several reasons why ReiserFS is better suited to large
directory trees with lots of small files; one of them is that it was
designed for that :-).
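[Editorial aside, not part of the thread: the directory-tree tests described above can be sketched as a minimal timing harness. This is a hypothetical illustration, not the tool the poster used; note that the read phase here mostly hits the page cache, whereas a real benchmark would unmount or drop caches between phases.]

```python
import os
import tempfile
import time

# Minimal small-file benchmark sketch: write and re-read many ~50 KiB
# files spread over subdirectories, reporting MB/s for each phase.
def small_file_bench(root, ndirs=20, files_per_dir=50, size=50 * 1024):
    payload = os.urandom(size)

    t0 = time.time()
    for d in range(ndirs):
        subdir = os.path.join(root, f"d{d}")
        os.makedirs(subdir, exist_ok=True)
        for f in range(files_per_dir):
            with open(os.path.join(subdir, f"f{f}"), "wb") as fh:
                fh.write(payload)
    write_secs = time.time() - t0

    t0 = time.time()
    total = 0
    for d in range(ndirs):
        for f in range(files_per_dir):
            with open(os.path.join(root, f"d{d}", f"f{f}"), "rb") as fh:
                total += len(fh.read())
    read_secs = time.time() - t0

    mb = total / 1e6
    return mb / max(write_secs, 1e-9), mb / max(read_secs, 1e-9)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as root:
        w, r = small_file_bench(root)
        print(f"write {w:.1f} MB/s, read {r:.1f} MB/s")
```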
From owner-xfs@oss.sgi.com Sun Mar 9 17:07:53 2008
Date: Mon, 10 Mar 2008 11:08:09 +1100
From: David Chinner
To: Christian Røsnes
Cc: xfs@oss.sgi.com
Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c
Message-ID: <20080310000809.GU155407@sgi.com>
On Fri, Mar 07, 2008 at 12:19:28PM +0100, Christian Røsnes wrote:
> > Actually, a single mkdir command is enough to trigger the filesystem
> > shutdown when it's 99% full (according to df -k):
> >
> > /data# mkdir test
> > mkdir: cannot create directory `test': No space left on device

Ok, that's helpful ;) So, can you dump the directory inode with
xfs_db? i.e.

# ls -ia /data

The directory inode is the inode at ".", and if this is the root of
the filesystem it will probably be 128. Then run:

# xfs_db -r -c 'inode 128' -c p /dev/sdb1

> > --------
> > meta-data=/dev/sdb1    isize=512    agcount=16, agsize=4476752 blks
> >          =             sectsz=512   attr=0
> > data     =             bsize=4096   blocks=71627792, imaxpct=25
> >          =             sunit=16     swidth=32 blks, unwritten=1
> > naming   =version 2    bsize=4096
> > log      =internal     bsize=4096   blocks=32768, version=2
> >          =             sectsz=512   sunit=16 blks, lazy-count=0
> > realtime =none         extsz=65536  blocks=0, rtextents=0
> >
> > xfs_db -r -c 'sb 0' -c p /dev/sdb1
> > ----------------------------------
.....
> > fdblocks = 847484

Apparently there are still lots of free blocks. I wonder if you are
out of space in the metadata AGs. Can you do this for me:

-------
#!/bin/bash

for i in `seq 0 1 15`; do
	echo freespace histogram for AG $i
	xfs_db -r -c "freesp -bs -a $i" /dev/sdb1
done
------

> Instrumenting the code, I found that this occurs on my system when I
> do a 'mkdir /data/test' on the partition in question:
>
> in xfs_mkdir (xfs_vnodeops.c):
>
>	error = xfs_dir_ialloc(&tp, dp, mode, 2,
>			0, credp, prid, resblks > 0,
>			&cdp, NULL);
>
>	if (error) {
>		if (error == ENOSPC)
>			goto error_return;   <=== this is hit and then execution jumps to error_return
>		goto abort_return;
>	}

Ah - you can ignore my last email, then.
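[Editorial aside, not part of the thread: the suspicion here — ENOSPC from inode allocation despite lots of free blocks — comes down to free-space fragmentation. XFS allocates inodes in chunks of 64, and with isize=512 and bsize=4096 from the mkfs output above, one chunk needs 64 * 512 / 4096 = 8 contiguous filesystem blocks. A toy sketch (not XFS code) of why scattered free space can still fail that allocation:]

```python
# Toy model, illustrative only: XFS allocates inodes 64 at a time as one
# contiguous extent.  isize=512, bsize=4096 => 8 contiguous blocks/chunk.
INODE_CHUNK_BLOCKS = (64 * 512) // 4096   # == 8

def can_alloc_inode_chunk(free_extent_sizes):
    """True if any single free extent can hold a whole inode chunk."""
    return any(ext >= INODE_CHUNK_BLOCKS for ext in free_extent_sizes)

# ~847k free blocks, but scattered in extents of at most 4 blocks:
fragmented = [1, 2, 4] * 121069
print(sum(fragmented))                    # 847483 free blocks in total...
print(can_alloc_inode_chunk(fragmented))  # ...yet the allocation fails: False
print(can_alloc_inode_chunk([8]))         # one 8-block extent suffices: True
```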
You're already one step ahead of me ;)

This does not appear to be the case I was expecting, though I can see
how we can get an ENOSPC here with plenty of blocks free - none are
large enough to allocate an inode chunk. What would be worth knowing
is the value of resblks when this error is reported. This tends to
imply we are returning an ENOSPC with a dirty transaction. Right now I
can't see how that would occur, though the fragmented free space is
something I can try to reproduce with.

> Is this the correct behavior for this type of situation: the mkdir
> command fails due to no available space on the filesystem, and
> xfs_mkdir goes to label error_return? (And after this the filesystem
> is shut down)

The filesystem should not be shut down here. We need to trace further
to the source of the error....

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Sun Mar 9 17:44:44 2008
Date: Mon, 10 Mar 2008 01:38:50 +0100
From: Rekrutacja119
To: xfs@oss.sgi.com
Subject: Re: xfs for free hosting on linux ? performance questions

> There are several reasons why ReiserFS is better suited to large
> directory trees with lots of small files; one of them is that it was
> designed for that :-).

but reiserfs is slow for me too, maybe i have some other problems?
i made the array like this:

mdadm --create /dev/md0 --verbose --level=5 --raid-devices=5 /dev/sd{b,c,d,e,f}1 --assume-clean

then just mkfs.reiserfs /dev/md0, mounted /dev/md0 on /array, and ran
postmark with these:

set numbers 20000
set transactions 10000
set subdirectories 20000
set location /array/

results i got:

Time:
	287 seconds total
	118 seconds of transactions (84 per second)

Files:
	25014 created (87 per second)
		Creation alone: 20000 files (168 per second)
		Mixed with transactions: 5014 files (42 per second)
	5026 read (42 per second)
	4971 appended (42 per second)
	25014 deleted (87 per second)
		Deletion alone: 20028 files (400 per second)
		Mixed with transactions: 4986 files (42 per second)

Data:
	26.48 megabytes read (94.47 kilobytes per second)
	136.18 megabytes written (485.89 kilobytes per second)

is there something wrong with my system?? LA is 0.50 right now (i'm
testing it at night), and hdparm shows every HD in the array doing at
least 100MB/s, so why these numbers?

[[HTML alternate version deleted]]

From owner-xfs@oss.sgi.com Sun Mar 9 18:46:50 2008
Date: Mon, 10 Mar 2008 02:46:38 +0100 (CET)
From: Christian Kujau
To: David Chinner
Cc: LKML, xfs@oss.sgi.com, dm-devel@redhat.com
Subject: Re: INFO: task mount:11202 blocked for more than 120 seconds

On Mon, 10 Mar 2008, David Chinner wrote:
> Oh, dm-crypt. Well, I'd definitely start looking there. XFS has a
> history of exposing dm-crypt bugs, and these hangs appear to be I/O
> congestion/scheduling related and not XFS.

Yeah, I noticed that too, thanks for verifying this: during the 2nd
bisect run, the box locked up hard when I accessed the device-mapper.
I'm using a wrapper script to set up my luks/dm-crypt devices and
still have to find out which command exactly triggers the lockup. So,
maybe the hard lockup and the hangs are not so unrelated after
all... oh well.

> Also, we haven't changed anything related to plug/unplug of block
> devices in XFS recently, so that also points to some other change as
> well...

Thanks for your assistance, David, I really appreciate it. I'll try to
find out more about this dm-crypt thingy....

Christian.
--
BOFH excuse #396: Mail server hit by UniSpammer.

From owner-xfs@oss.sgi.com Sun Mar 9 23:10:18 2008
Date: Mon, 10 Mar 2008 02:10:13 -0400
From: Christoph Hellwig
To: David Chinner
Cc: Chris Knadle, linux-kernel@vger.kernel.org, xfs@oss.sgi.com, hch@infradead.org
Subject: Re: assfail during unmount xfs 2.6.24.3 [from 2.6.24.y git]
Message-ID: <20080310061013.GA3496@infradead.org>
On Mon, Mar 10, 2008 at 09:18:58AM +1100, David Chinner wrote:
> Only if the per-vfsmount writer counts got merged in 2.6.25-rcX. I'm
> not sure that they did. Christoph?

No, it didn't go in. The new target is 2.6.26.
From owner-xfs@oss.sgi.com Sun Mar 9 23:21:00 2008
Date: Mon, 10 Mar 2008 17:15:04 +1100
From: Niv Sardi <xaiki@cxhome.ath.cx>
To: David Chinner
Cc: Niv Sardi, sgi.bugs.xfs@fido.engr.sgi.com, xfs-dev@sgi.com, xfs@oss.sgi.com
Subject: Re: ADD 977766 - mkfs.xfs man page needs the default settings updated. [REVIEW TAKE 3]

On Mon, Mar 10, 2008 at 5:07 PM, David Chinner wrote:
> On Fri, Mar 07, 2008 at 03:39:05PM +1100, Niv Sardi wrote:
> > Incorporated Eric's changes; last call: does it look good to everyone?
> >
> > @@ -387,17 +399,13 @@ With some combinations of filesystem block size, inode size,
> > and directory block size, the minimum log size is larger than 512 blocks.
> > .TP
> > .BI version= value
> > -This specifies the version of the log. The
> > -.I value
> > -is either 1 or 2. Specifying
> > -.B version=2
> > -enables the
> > -.B sunit
> > -suboption, and allows the logbsize to be increased beyond 32K.
> > -Version 2 logs are automatically selected if a log stripe unit
> > -is specified. See
> > -.BR sunit " and " su
> > -suboptions, below.
> > +This specifies the version of the log. The current default is 2,
> > +which allows for larger log buffer sizes, as well as supporting
> > +stripe-aligned log writes (see the sunit and su options, below).
> > +.IP
> > +The previous version 1, which is limited to 32k log buffers and does
> > +not support stripe-aligned writes, is kept for backwards compatibility
> > +with very old 2.4 kernels.
>
> I don't like this change. You're removing specific references to the
> commands needed to set sunit, or how to set the version number, and
> what the default behaviour on stripe-aligned filesystems is.

Right, that should be added back.

> Secondly, version 1 logs are not being kept around for backwards
> compatibility reasons. It's a valid, supported configuration, and in
> some cases performs better than version 2 logs...

Can you be more specific? The man page should document when version 1
performs better, and I believe you're the one with the best knowledge
about that.

> Realistically, I see no need for changing this text except to add that
> the default is version 2.

The change was motivated by Eric's comments on OSS that it is not clear
why one should pick log v1 or v2, and I believe he is right.
Cheers,
--
Niv Sardi

From owner-xfs@oss.sgi.com Mon Mar 10 02:00:25 2008
Date: Mon, 10 Mar 2008 09:34:14 +0100
From: Christian Røsnes <christian.rosnes@gmail.com>
To: David Chinner
Cc: xfs@oss.sgi.com
Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c
Thanks for the help, David. Answers below:

On Mon, Mar 10, 2008 at 1:08 AM, David Chinner wrote:
> On Fri, Mar 07, 2008 at 12:19:28PM +0100, Christian Røsnes wrote:
> > > Actually, a single mkdir command is enough to trigger the filesystem
> > > shutdown when it's 99% full (according to df -k):
> > >
> > > /data# mkdir test
> > > mkdir: cannot create directory `test': No space left on device
>
> Ok, that's helpful ;)
>
> So, can you dump the directory inode with xfs_db? i.e.
>
> # ls -ia /data

# ls -ia /data
128 .   128 ..   131 content   149256847 rsync

> The directory inode is the inode at ".", and if this is the root of
> the filesystem it will probably be 128.
>
> Then run:
>
> # xfs_db -r -c 'inode 128' -c p /dev/sdb1

# xfs_db -r -c 'inode 128' -c p /dev/sdb1
core.magic = 0x494e
core.mode = 040755
core.version = 1
core.format = 1 (local)
core.nlinkv1 = 4
core.uid = 0
core.gid = 0
core.flushiter = 47007
core.atime.sec = Wed Oct 19 12:14:10 2005
core.atime.nsec = 640092000
core.mtime.sec = Fri Dec 15 10:27:21 2006
core.mtime.nsec = 624437500
core.ctime.sec = Fri Dec 15 10:27:21 2006
core.ctime.nsec = 624437500
core.size = 32
core.nblocks = 0
core.extsize = 0
core.nextents = 0
core.naextents = 0
core.forkoff = 0
core.aformat = 2 (extents)
core.dmevmask = 0
core.dmstate = 0
core.newrtbm = 0
core.prealloc = 0
core.realtime = 0
core.immutable = 0
core.append = 0
core.sync = 0
core.noatime = 0
core.nodump = 0
core.rtinherit = 0
core.projinherit = 0
core.nosymlinks = 0
core.extsz = 0
core.extszinherit = 0
core.nodefrag = 0
core.filestream = 0
core.gen = 0
next_unlinked = null
u.sfdir2.hdr.count = 2
u.sfdir2.hdr.i8count = 0
u.sfdir2.hdr.parent.i4 = 128
u.sfdir2.list[0].namelen = 7
u.sfdir2.list[0].offset = 0x30
u.sfdir2.list[0].name = "content"
u.sfdir2.list[0].inumber.i4 = 131
u.sfdir2.list[1].namelen = 5
u.sfdir2.list[1].offset = 0x48
u.sfdir2.list[1].name = "rsync"
u.sfdir2.list[1].inumber.i4 = 149256847

> > > --------
> > > meta-data=/dev/sdb1    isize=512    agcount=16, agsize=4476752 blks
> > >          =             sectsz=512   attr=0
> > > data     =             bsize=4096   blocks=71627792, imaxpct=25
> > >          =             sunit=16     swidth=32 blks, unwritten=1
> > > naming   =version 2    bsize=4096
> > > log      =internal     bsize=4096   blocks=32768, version=2
> > >          =             sectsz=512   sunit=16 blks, lazy-count=0
> > > realtime =none         extsz=65536  blocks=0, rtextents=0
> > >
> > > xfs_db -r -c 'sb 0' -c p /dev/sdb1
> > > ----------------------------------
> .....
> > > fdblocks = 847484
>
> Apparently there are still lots of free blocks. I wonder if you are
> out of space in the metadata AGs.
>
> Can you do this for me:
>
> -------
> #!/bin/bash
>
> for i in `seq 0 1 15`; do
>         echo freespace histogram for AG $i
>         xfs_db -r -c "freesp -bs -a $i" /dev/sdb1
> done
> ------

# for i in `seq 0 1 15`; do
> echo freespace histogram for AG $i
> xfs_db -r -c "freesp -bs -a $i" /dev/sdb1
> done
freespace histogram for AG 0
   from      to extents  blocks    pct
      1       1    2098    2098   3.77
      2       3    8032   16979  30.54
      4       7    6158   33609  60.46
      8      15     363    2904   5.22
total free extents 16651
total free blocks 55590
average free extent size 3.33854
freespace histogram for AG 1
   from      to extents  blocks    pct
      1       1    2343    2343   3.90
      2       3    9868   21070  35.10
      4       7    6000   34535  57.52
      8      15     261    2088   3.48
total free extents 18472
total free blocks 60036
average free extent size 3.25011
freespace histogram for AG 2
   from      to extents  blocks    pct
      1       1    1206    1206  10.55
      2       3    3919    8012  70.10
      4       7     394    2211  19.35
total free extents 5519
total free blocks 11429
average free extent size 2.07085
freespace histogram for AG 3
   from      to extents  blocks    pct
      1       1    3179    3179   8.48
      2       3   14689   29736  79.35
      4       7     820    4560  12.17
total free extents 18688
total free blocks 37475
average free extent size 2.0053
freespace histogram for AG 4
   from      to extents  blocks    pct
      1       1    4113    4113   9.62
      2       3   10685   22421  52.45
      4       7    2951   16212  37.93
total free extents 17749
total free blocks 42746
average free extent size 2.40836
freespace histogram for AG 5
   from      to extents  blocks    pct
      1       1    2909    2909   4.23
      2       3   20370   41842  60.81
      4       7    3973   23861  34.68
      8      15      24     192   0.28
total free extents 27276
total free blocks 68804
average free extent size 2.52251
freespace histogram for AG 6
   from      to extents  blocks    pct
      1       1    3577    3577   4.86
      2       3   18592   38577  52.43
      4       7    4427   25764  35.02
      8      15     707    5656   7.69
total free extents 27303
total free blocks 73574
average free extent size 2.69472
freespace histogram for AG 7
   from      to extents  blocks    pct
      1       1    2634    2634   9.14
      2       3   11928   24349  84.48
      4       7     366    1840   6.38
total free extents 14928
total free blocks 28823
average free extent size 1.9308
freespace histogram for AG 8
   from      to extents  blocks    pct
      1       1    6473    6473   6.39
      2       3   22020   46190  45.61
      4       7    7343   40137  39.64
      8      15    1058    8464   8.36
total free extents 36894
total free blocks 101264
average free extent size 2.74473
freespace histogram for AG 9
   from      to extents  blocks    pct
      1       1    2165    2165   2.22
      2       3   15746   33317  34.20
      4       7    9402   55502  56.97
      8      15     805    6440   6.61
total free extents 28118
total free blocks 97424
average free extent size 3.46483
freespace histogram for AG 10
   from      to extents  blocks    pct
      1       1    5886    5886   9.46
      2       3   13682   29881  48.01
      4       7    4561   23919  38.43
      8      15     319    2552   4.10
total free extents 24448
total free blocks 62238
average free extent size 2.54573
freespace histogram for AG 11
   from      to extents  blocks    pct
      1       1    4197    4197   7.47
      2       3    8421   18061  32.14
      4       7    4336   24145  42.97
      8      15    1224    9792  17.43
total free extents 18178
total free blocks 56195
average free extent size 3.09137
freespace histogram for AG 12
   from      to extents  blocks    pct
      1       1     310     310  90.64
      2       3      16      32   9.36
total free extents 326
total free blocks 342
average free extent size 1.04908
freespace histogram for AG 13
   from      to extents  blocks    pct
      1       1    4845    4845  22.31
      2       3    7533   16873  77.69
total free extents 12378
total free blocks 21718
average free extent size 1.75456
freespace histogram for AG 14
   from      to extents  blocks    pct
      1       1    3572    3572   6.50
      2       3   17437   36656  66.72
      4       7    2702   14711  26.78
total free extents 23711
total free blocks 54939
average free extent size 2.31703
freespace histogram for AG 15
   from      to extents  blocks    pct
      1       1    4568    4568   6.24
      2       3   13400   28983  39.62
      4       7    6992   39606  54.14
total free extents 24960
total free blocks 73157
average free extent size 2.93097

> > Instrumenting the code, I found that this occurs on my system when I
> > do a 'mkdir /data/test' on the partition in question:
> >
> > in xfs_mkdir (xfs_vnodeops.c):
> >
> >         error = xfs_dir_ialloc(&tp, dp, mode, 2,
> >                         0, credp, prid, resblks > 0,
> >                         &cdp, NULL);
> >
> >         if (error) {
> >                 if (error == ENOSPC)
> >                         goto error_return;  <=== this is hit and then
> >                                                  execution jumps to error_return
> >                 goto abort_return;
> >         }
>
> Ah - you can ignore my last email, then. You're already one step ahead
> of me ;)
>
> This does not appear to be the case I was expecting, though I can
> see how we can get an ENOSPC here with plenty of blocks free - none
> are large enough to allocate an inode chunk. What would be worth
> knowing is the value of resblks when this error is reported.

Ok. I'll see if I can print it out.

> This tends to imply we are returning an ENOSPC with a dirty
> transaction. Right now I can't see how that would occur, though
> the fragmented free space is something I can try to reproduce with.

Ok.

> > Is this the correct behavior for this type of situation: the mkdir
> > command fails due to no available space on the filesystem, and
> > xfs_mkdir goes to label error_return? (And after this the filesystem
> > is shut down.)
>
> The filesystem should not be shut down here. We need to trace through
> further to the source of the error...

Ok.

Btw - to debug this on a test system, can I do a dd if=/dev/sdb1 or
dd if=/dev/sdb and output it to an image which is then loopback-mounted
on the test system? I.e. is there some sort of "best practice" for how
to copy this partition to a test system for further testing?
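As an aside, David's diagnosis ("plenty of blocks free - none are large enough to allocate an inode chunk") can be sanity-checked against the histograms above with a little arithmetic: given isize=512 and bsize=4096 from the xfs_info output, a 64-inode chunk needs 64 * 512 / 4096 = 8 contiguous filesystem blocks. A rough sketch of that check follows; the helper names are mine, and it deliberately ignores inode-chunk alignment and AG selection, both of which the real allocator also enforces:

```python
# Toy check: can an allocation group's free-space histogram, as printed
# by `xfs_db -c "freesp -bs -a N"`, still hold an inode chunk?
# Geometry from the thread: isize=512, bsize=4096, and XFS allocates
# inodes 64 at a time, so one chunk needs 64 * 512 / 4096 = 8
# contiguous blocks.
INODE_SIZE = 512           # bytes, from xfs_info: isize=512
BLOCK_SIZE = 4096          # bytes, from xfs_info: bsize=4096
INODES_PER_CHUNK = 64      # XFS allocates inodes in chunks of 64

def chunk_blocks():
    """Contiguous filesystem blocks needed for one inode chunk."""
    return INODES_PER_CHUNK * INODE_SIZE // BLOCK_SIZE

def can_hold_inode_chunk(histogram):
    """histogram: list of (min_len, max_len, extent_count) buckets.

    Conservative: only trusts buckets whose minimum extent length
    already fits a whole chunk (the buckets are power-of-two sized,
    so the 8-15 bucket is the first one that qualifies here).
    """
    need = chunk_blocks()
    return any(lo >= need and count > 0 for lo, hi, count in histogram)

# AG 2 from the thread tops out at 4-7 block extents: no room for a chunk.
ag2 = [(1, 1, 1206), (2, 3, 3919), (4, 7, 394)]
# AG 0 still has 363 extents of 8-15 blocks, so by this naive test a
# chunk could fit there; the constraints ignored here must also matter.
ag0 = [(1, 1, 2098), (2, 3, 8032), (4, 7, 6158), (8, 15, 363)]

print(chunk_blocks())             # -> 8
print(can_hold_inode_chunk(ag2))  # -> False
print(can_hold_inode_chunk(ag0))  # -> True
```

By this measure AG 12 (326 free extents, none longer than 3 blocks) is the worst of the lot, but since several AGs do still pass, free-extent length alone cannot be the whole story; that is exactly why the value of resblks, and the alignment constraints this sketch ignores, remain the open questions in the thread.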
Thanks
Christian

From owner-xfs@oss.sgi.com Mon Mar 10 03:02:16 2008
Date: Mon, 10 Mar 2008 11:02:28 +0100
From: Christian Røsnes <christian.rosnes@gmail.com>
To: David Chinner
Cc: xfs@oss.sgi.com
Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c
On Mon, Mar 10, 2008 at 9:34 AM, Christian Røsnes wrote:
> On Mon, Mar 10, 2008 at 1:08 AM, David Chinner wrote:
> > On Fri, Mar 07, 2008 at 12:19:28PM +0100, Christian Røsnes wrote:
> > >
> > > Instrumenting the code, I found that this occurs on my system when I
> > > do a 'mkdir /data/test' on the partition in question:
> > >
> > > in xfs_mkdir (xfs_vnodeops.c):
> > >
> > >         error = xfs_dir_ialloc(&tp, dp, mode, 2,
> > >                         0, credp, prid, resblks > 0,
> > >                         &cdp, NULL);
> > >
> > >         if (error) {
> > >                 if (error == ENOSPC)
> > >                         goto error_return;  <=== this is hit and then
> > >                                                  execution jumps to error_return
> > >                 goto abort_return;
> > >         }
> >
> > Ah - you can ignore my last email, then. You're already one step ahead
> > of me ;)
> >
> > This does not appear to be the case I was expecting, though I can
> > see how we can get an ENOSPC here with plenty of blocks free - none
> > are large enough to allocate an inode chunk. What would be worth
> > knowing is the value of resblks when this error is reported.
>
> Ok. I'll see if I can print it out.

Ok. I added printk statements to xfs_mkdir in xfs_vnodeops.c:

'resblks=45' is the value returned by:

        resblks = XFS_MKDIR_SPACE_RES(mp, dir_namelen);

and this is the value when the error_return label is reached.

And inside xfs_dir_ialloc (file: xfs_utils.c) this is where it returns:

        code = xfs_ialloc(tp, dp, mode, nlink, rdev, credp, prid,
                        okalloc, &ialloc_context, &call_again, &ip);

        /*
         * Return an error if we were unable to allocate a new inode.
         * This should only happen if we run out of space on disk or
         * encounter a disk error.
         */
        if (code) {
                *ipp = NULL;
                return code;
        }
        if (!call_again && (ip == NULL)) {
                *ipp = NULL;
                return XFS_ERROR(ENOSPC);  <============== returns here
        }

Christian

From owner-xfs@oss.sgi.com Mon Mar 10 04:44:56 2008
Date: Mon, 10 Mar 2008 07:48:00 -0400
From: Kris Kersey <kkersey@steelbox.com>
To: David Chinner
Cc: xfs@oss.sgi.com, Bill Vaughan
Subject: Re: pdflush hang on xlog_grant_log_space()
Thank you for your help. Two questions:

1) Can you define "much larger number"? I know you recently increased
   this number from 10 to 1000, so should I increase it to 10,000?
   100,000?

2) Is this a fix or a work-around? If this is a work-around, is there a
   fix in the works? Can you explain the issue a bit, or if it's been
   covered, can you point me to the explanation? I'd just like to
   understand what's going on.

Thanks,
Kris Kersey

David Chinner wrote:
> On Thu, Mar 06, 2008 at 04:31:27PM -0500, Kris Kersey wrote:
>> Hello,
>>
>> I'm working on a NAS product and we're currently having lock-ups that
>> seem to hang in XFS code. We're running a NAS that has 1024 NFSD
>> threads accessing three RAID mounts. All three mounts are running XFS
>> file systems. Lately we've had random lockups on these boxes and I am
>> now running a kernel with KDB built in.
>>
>> The lock-up takes the form of all NFSD threads in D state with one out
>> of three pdflush threads in D state. The assumption can be made that
>> all NFSD threads are waiting on the one pdflush thread to complete. So
>> twice now, when a NAS has gotten into this state, I have accessed KDB
>> and run a stack trace on the pdflush thread. Both times the thread was
>> stuck on xlog_grant_log_space+0xdb.
>
> Try bumping XFS_TRANS_PUSH_AIL_RESTARTS to a much larger number and
> seeing if the problem goes away...
>
> Alternatively, that restart hack is backed by a "watchdog" timeout
> in 2.6.25-rc1, so if that is the cause of the problem perhaps the
> latest -rcX kernel will prevent the hang?
>
> BTW, you can get all the traces of D state threads through the sysrq
> interface, so you don't need to drop into kdb to get this...
>
> Cheers,
>
> Dave.

From owner-xfs@oss.sgi.com Mon Mar 10 05:21:52 2008
Date: Mon, 10 Mar 2008 13:22:16 +0100
From: Andreas Kotes <count-linux@flatline.de>
To: David Chinner
Cc: linux-kernel@vger.kernel.org, xfs@oss.sgi.com
Subject: Re: XFS internal error
Hello,

* David Chinner [20080310 13:18]:
> Yes, but those previous corruptions get left on disk as a landmine
> for you to trip over some time later, even on a kernel that has the
> bug fixed.
>
> I suggest that you run xfs_check on the filesystem and, if that
> shows up errors, run xfs_repair on the filesystem to correct them.

I seem to be having similar problems, and xfs_repair is not helping :(
I always run into:

[  137.099267] Filesystem "sda2": XFS internal error xfs_trans_cancel at line 1132 of file fs/xfs/xfs_trans.c.
Caller 0xffffffff80372156
[  137.106267]
[  137.106268] Call Trace:
[  137.113129]  [] xfs_trans_cancel+0x100/0x130
[  137.116524]  [] xfs_create+0x256/0x6e0
[  137.119904]  [] xfs_dir2_isleaf+0x19/0x50
[  137.123269]  [] xfs_vn_mknod+0x195/0x250
[  137.126607]  [] vfs_create+0xac/0xf0
[  137.129920]  [] open_namei+0x5dc/0x700
[  137.133227]  [] __wake_up+0x43/0x70
[  137.136477]  [] do_filp_open+0x1c/0x50
[  137.139693]  [] do_sys_open+0x5a/0x100
[  137.142838]  [] sysenter_do_call+0x1b/0x67
[  137.145964]
[  137.149014] xfs_force_shutdown(sda2,0x8) called from line 1133 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff8036930e
[  137.163485] Filesystem "sda2": Corruption of in-memory data detected. Shutting down filesystem: sda2

directly after booting. I'm using kernel 2.6.22.16 and xfs_repair version 2.9.7.

How can I help find the problem? I'd like xfs_repair to be able to fix this.

Br,

   Andreas

--
flatline IT services - Andreas Kotes - Tailored solutions for your IT needs

From owner-xfs@oss.sgi.com Mon Mar 10 05:39:08 2008
Date: Mon, 10 Mar 2008 14:39:01 +0200
From: Erkki Lintunen
Reply-To: erkki.lintunen@iki.fi
To: xfs@oss.sgi.com
Subject: an occational trouble with xfs file system which xfs_repair 2.7.14 has been able to fix
Message-ID: <47D52BE5.6010706@iki.fi>

Hi,

can you help me a bit with my troublesome ~700GB xfs filesystem?

The file system has had several dir trees since it was created somewhere
in 2004-2005. It has been written to daily since it was created, and it
has been expanded a few times with xfs_growfs. It has experienced the
same symptom 2-4 times already. The symptom is that one of the dir trees
gets locked up, about once a year. It is always the same tree.
I can't remember when the symptom was first experienced, or what
happened at the time. I guess the system ran a 2.6.17.x kernel at some
point in its lifetime, but xfs_repair ought to fix the dir lock problem
- at least the latest version should, shouldn't it?

The filesystem is used for backups, with rsync, cp -al and rm -fr
commands in a script. When the trouble begins, the cp -al command starts
to take several hours and hundreds of megs of memory. rm -fr of a
subtree also takes considerably longer than rm of a subtree in another,
bigger tree in the same filesystem, but the rm commands have always
finished, which the cp -al commands haven't. Most of the time the cp -al
process is in D state.

I have managed to repair the file system with xfs_repair 2.7.14, but not
with 2.6.20, which comes with Debian Sarge. Now I tried the latest
xfs_repair and it didn't fix the problem - at least on the first run,
without any options. For example, the latest backup had to be
interrupted, and the time command showed the following:

real    1342m7.316s
user    1m4.152s
sys     14m5.109s

I have an xfs_metadump of the filesystem taken right after the
interrupt. Its size is 3.9G uncompressed and 1.6G compressed with
bzip2 -9. Now I have run xfs_repair 2.7.14 on the file system and will
wait a day to see whether it was able to fix the problem this time as
well.

What other information could I provide in addition to that requested in
the FAQ?

plastic:~# grep backup-volA /etc/fstab
/dev/vg00/backup-volA /site/backup-volA xfs defaults 0 1
plastic:~# df -ml /backup/volA/.
Filesystem           1M-blocks      Used Available Use% Mounted on
/site/backup-volA       692688    647328     45361  94% /backup/volA
plastic:~# ./xfs_repair -V
xfs_repair version 2.9.7
plastic:~# /usr/local/sbin/xfs_repair -V
xfs_repair version 2.7.14
plastic:~# /sbin/xfs_repair -V
xfs_repair version 2.6.20
plastic:~# dmesg |tail -n 3
Filesystem "dm-0": Disabling barriers, not supported by the underlying device
XFS mounting filesystem dm-0
Ending clean XFS mount for filesystem: dm-0
plastic:~# uname -a
Linux plastic 2.6.24.2-i686-net #1 SMP Tue Feb 12 17:42:16 EET 2008 i686 GNU/Linux
plastic:~# xfs_info /site/backup-volA
meta-data=/site/backup-volA      isize=256    agcount=39, agsize=4559936 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=177360896, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

# diff between output of xfs_repair 2.9.7 (screenlog.0) and
# xfs_repair 2.7.14 (screenlog.1)
--- screenlog.0 2008-03-10 10:32:13.000000000 +0200
+++ screenlog.1 2008-03-10 14:04:00.000000000 +0200
@@ -1,3 +1,9 @@
+        - scan filesystem freespace and inode maps...
+        - found root inode chunk
+Phase 3 - for each AG...
+        - scan and clear agi unlinked lists...
+        - process known inodes and perform inode discovery...
+        - agno = 0
         - agno = 1
         - agno = 2
         - agno = 3
@@ -39,6 +45,9 @@
         - process newly discovered inodes...
 Phase 4 - check for duplicate blocks...
         - setting up duplicate extent list...
+        - clear lost+found (if it exists) ...
+        - clearing existing "lost+found" inode
+        - marking entry "lost+found" to be deleted
         - check for inodes claiming duplicate blocks...
         - agno = 0
         - agno = 1
@@ -83,103 +92,13 @@
         - reset superblock...
 Phase 6 - check inode connectivity...
         - resetting contents of realtime bitmap and summary inodes
-        - traversing filesystem ...
-        - traversal finished ...
-        - moving disconnected inodes to lost+found ...
+        - ensuring existence of lost+found directory
+        - traversing filesystem starting at / ...
+rebuilding directory inode 128
+        - traversal finished ...
+        - traversing all unattached subtrees ...
+        - traversals finished ...
+        - moving disconnected inodes to lost+found ...
 Phase 7 - verify and correct link counts...

Best regards,
Erkki

From owner-xfs@oss.sgi.com Mon Mar 10 06:31:17 2008
Date: Mon, 10 Mar 2008 08:31:42 -0500
From: Eric Sandeen
To: erkki.lintunen@iki.fi
Cc: xfs@oss.sgi.com
Subject: Re: an occational trouble with xfs file system which xfs_repair 2.7.14 has been able to fix
Message-ID: <47D5383E.50201@sandeen.net>
In-Reply-To:
<47D52BE5.6010706@iki.fi>

Erkki Lintunen wrote:
...
> commands in a script. When the trouble begins cp -al command starts to
> take several hours and hundreds of megs memory. rm -fr of a subtree also
> takes considerably longer than rm a subtree in another bigger tree in
> the same filesystem, but the rm commands have always finished, which
> the cp -al commands haven't. Most of the time the cp -al process has D
> status.
...
> What other information could I provide in addition to that requested in
> the FAQ?

When you get a process in the D state, do

echo t > /proc/sysrq-trigger

to get backtraces of all processes; or echo w to get all blocked
processes.
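The procedure above can be wrapped in a small helper; a minimal sketch,
assuming root and a kernel with the magic sysrq key enabled (the
function name and output path are illustrative, not from the thread):

```shell
#!/bin/sh
# Dump backtraces of blocked (D-state) tasks via the sysrq interface
# and save them from the kernel ring buffer.
# 'w' reports only blocked tasks; use 't' to dump every task.
dump_blocked_tasks() {
    echo w > /proc/sysrq-trigger
    dmesg > "${1:-/tmp/blocked-tasks.txt}"
}

# usage (as root):
#   dump_blocked_tasks /tmp/blocked-tasks.txt
```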
-Eric

From owner-xfs@oss.sgi.com Mon Mar 10 13:37:29 2008
Date: Mon, 10 Mar 2008 20:37:48 +0000
From: pg_xfs2@xfs2.for.to.sabi.co.UK (Peter Grandi)
Subject: Re: xfs for free hosting on linux ? performance questions
Message-ID: <18389.39964.468947.464155@tree.ty.sabi.co.uk>
In-Reply-To: <2db2c6b80803091738o20dc80b2p82c5bf7246a8c560@mail.gmail.com>

> but reiserfs is slow for me too, maybe i have some other
> problems?
> i made array like this
>
>   mdadm --create /dev/md0 --verbose --level=5 --raid-devices=5 \
>       /dev/sd{b,c,d,e,f}1 --assume-clean

Uhhhhh, '--assume-clean' on a RAID5 is not necessarily a good idea.
Just don't do it, and wait a few hours for the initial sync to happen.

> [ ... ]
> 26.48 megabytes read (94.47 kilobytes per second)
> 136.18 megabytes written (485.89 kilobytes per second)
> is there something wrong with my system??

The speed above is indeed terrible. Perhaps the array is rebuilding
frantically as it finds all stripes have the wrong parity because of
'--assume-clean'.

> [ ... ] LA is 0.50 right now (i'm testing it at night),

Why is LA relevant? What is running on that system now? Are those
physical disks being used by some applications?

> hdparm is showing that every HD from the array is doing
> 100MB/s at least, so why these numbers?

Perhaps after rebuilding the array without '--assume-clean' you might
try such simple tests on the '/dev/md0' device itself, just to be sure
that it performs more or less adequately. After several reports that it
helps, try this before the test:

  blockdev --setra 1024 /dev/md0

but even without it you should be getting at least 40-50MB/s.

Always check the array status with 'mdadm --detail' before testing, to
see whether it is resyncing. Using 'watch iostat -k sd{b,c,d,e,f} 1 2'
(and checking the _second_ set of figures) is also useful for seeing the
actual transfer rate of each drive in the array and verifying that, if
you are reading, there is no writing going on, etc.
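The checks suggested above can be bundled into a couple of helpers; a
rough sketch, assuming the poster's device names and an array at
/dev/md0 (function names are illustrative):

```shell
#!/bin/sh
# Helpers for the pre-benchmark checks described above.

# Succeeds when /proc/mdstat shows no resync/recovery in progress.
array_is_idle() {
    ! grep -Eq 'resync|recovery' /proc/mdstat
}

# Refuse to benchmark a syncing array; bump readahead (reported to
# help), then show steady-state per-disk rates - the *second* set of
# iostat figures is the one to read.
prepare_md_for_test() {
    dev="${1:-/dev/md0}"
    array_is_idle || { echo "$dev is still syncing, wait" >&2; return 1; }
    blockdev --setra 1024 "$dev"
    iostat -k sdb sdc sdd sde sdf 1 2
}
```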
From owner-xfs@oss.sgi.com Mon Mar 10 15:21:25 2008
Date: Tue, 11 Mar 2008 09:21:35 +1100
From: David Chinner
To: Christian Røsnes
Cc: David Chinner, xfs@oss.sgi.com
Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c
Message-ID: <20080310222135.GZ155407@sgi.com>

On Mon, Mar 10, 2008 at 09:34:14AM +0100, Christian Røsnes wrote:
> On Mon, Mar 10, 2008 at 1:08 AM, David Chinner wrote:
> > On Fri, Mar 07, 2008 at 12:19:28PM +0100, Christian Røsnes wrote:
> > > > Actually, a single mkdir command is enough to trigger the filesystem
> > > > shutdown when it's 99% full (according to df -k):
> > > >
> > > > /data# mkdir test
> > > > mkdir: cannot create directory `test': No space left on device
>
> > Ok, that's helpful ;)
> > So, can you dump the directory inode with xfs_db? i.e.
> > # ls -ia /data
>
> # ls -ia /data
> 128 .  128 ..  131 content  149256847 rsync
>
> > The directory inode is the inode at ".", and if this is the root of
> > the filesystem it will probably be 128. Then run:
> > # xfs_db -r -c 'inode 128' -c p /dev/sdb1
>
> # xfs_db -r -c 'inode 128' -c p /dev/sdb1
> core.magic = 0x494e
> core.mode = 040755
> core.version = 1
> core.format = 1 (local)
.....
> core.size = 32
....
> u.sfdir2.hdr.count = 2
> u.sfdir2.hdr.i8count = 0
> u.sfdir2.hdr.parent.i4 = 128
> u.sfdir2.list[0].namelen = 7
> u.sfdir2.list[0].offset = 0x30
> u.sfdir2.list[0].name = "content"
> u.sfdir2.list[0].inumber.i4 = 131
> u.sfdir2.list[1].namelen = 5
> u.sfdir2.list[1].offset = 0x48
> u.sfdir2.list[1].name = "rsync"
> u.sfdir2.list[1].inumber.i4 = 149256847

Ok, so a shortform directory still with heaps of space in it.
So it's definitely not a directory namespace creation issue.

> > > > xfs_db -r -c 'sb 0' -c p /dev/sdb1
> > > > ----------------------------------
> > .....
> > > > fdblocks = 847484
> >
> > Apparently there are still lots of free blocks. I wonder if you are
> > out of space in the metadata AGs.
> >
> > Can you do this for me:
> >
> > -------
> > #!/bin/bash
> >
> > for i in `seq 0 1 15`; do
> >         echo freespace histogram for AG $i
> >         xfs_db -r -c "freesp -bs -a $i" /dev/sdb1
> > done
> > ------

> freespace histogram for AG 0
>    from      to extents  blocks    pct
>       1       1    2098    2098    3.77
>       2       3    8032   16979   30.54
>       4       7    6158   33609   60.46
>       8      15     363    2904    5.22

So with 256 byte inodes, we need a 16k allocation, or a 4 block extent.
There are plenty of extents large enough for that, so it's not an inode
chunk allocation error.

> Btw - to debug this on a test-system, can I do a dd if=/dev/sdb1 or
> dd if=/dev/sdb, and output it to an image which is then loopback
> mounted on the test-system?

That would work. Use /dev/sdb1 as the source so all you copy are
filesystem blocks.

> Ie. is there some sort of "best practice" on how to copy this
> partition to a test-system for further testing?

Do what fits your needs - for debugging, identical images are generally
best. For debugging metadata or repair problems, xfs_metadump works very
well (it replaces data with zeros, though), and for imaging purposes
xfs_copy is very efficient.

On Mon, Mar 10, 2008 at 11:02:28AM +0100, Christian Røsnes wrote:
> On Mon, Mar 10, 2008 at 9:34 AM, Christian Røsnes wrote:
> > On Mon, Mar 10, 2008 at 1:08 AM, David Chinner wrote:
> > > This does not appear to be the case I was expecting, though I can
> > > see how we can get an ENOSPC here with plenty of blocks free - none
> > > are large enough to allocate an inode chunk. What would be worth
> > > knowing is the value of resblks when this error is reported.
> >
> > Ok. I'll see if I can print it out.
>
> Ok.
> I added printk statements to xfs_mkdir in xfs_vnodeops.c:
>
> 'resblks=45' is the value returned by:
>
> resblks = XFS_MKDIR_SPACE_RES(mp, dir_namelen);
>
> and this is the value when the error_return label is reached.

That confirms we're not out of directory space or filesystem space.

> --
>
> and inside xfs_dir_ialloc (file: xfs_utils.c) this is where it returns
>
> ...
>
> code = xfs_ialloc(tp, dp, mode, nlink, rdev, credp, prid, okalloc,
>                   &ialloc_context, &call_again, &ip);
>
> /*
>  * Return an error if we were unable to allocate a new inode.
>  * This should only happen if we run out of space on disk or
>  * encounter a disk error.
>  */
> if (code) {
>         *ipp = NULL;
>         return code;
> }
> if (!call_again && (ip == NULL)) {
>         *ipp = NULL;
>         return XFS_ERROR(ENOSPC);   <============== returns here
> }

Interesting. That implies that xfs_ialloc() failed here:

1053         /*
1054          * Call the space management code to pick
1055          * the on-disk inode to be allocated.
1056          */
1057         error = xfs_dialloc(tp, pip ? pip->i_ino : 0, mode, okalloc,
1058                             ialloc_context, call_again, &ino);
1059         if (error != 0) {
1060                 return error;
1061         }
1062         if (*call_again || ino == NULLFSINO) {    <<<<<<<<<<<<<<<<
1063                 *ipp = NULL;
1064                 return 0;
1065         }

Which means that xfs_dialloc() failed without returning an error or
setting *call_again, but set ino == NULLFSINO. That leaves these
possible failure places:

 544         agbp = xfs_ialloc_ag_select(tp, parent, mode, okalloc);
 545         /*
 546          * Couldn't find an allocation group satisfying the
 547          * criteria, give up.
 548          */
 549         if (!agbp) {
 550                 *inop = NULLFSINO;
 551  >>>>>>>>>>     return 0;
 552         }
........
 572         /*
 573          * If we have already hit the ceiling of inode blocks then clear
 574          * okalloc so we scan all available agi structures for a free
 575          * inode.
 576          */
 577
 578         if (mp->m_maxicount &&
 579             mp->m_sb.sb_icount + XFS_IALLOC_INODES(mp) > mp->m_maxicount) {
 580                 noroom = 1;
 581                 okalloc = 0;
 582         }
........
 600         if ((error = xfs_ialloc_ag_alloc(tp, agbp, &ialloced))) {
 601                 xfs_trans_brelse(tp, agbp);
 602                 if (error == ENOSPC) {
 603                         *inop = NULLFSINO;
 604  >>>>>>>>>>             return 0;
 605                 } else
 606                         return error;
........
 629 nextag:
 630         if (++tagno == agcount)
 631                 tagno = 0;
 632         if (tagno == agno) {
 633                 *inop = NULLFSINO;
 634  >>>>>>>>>>     return noroom ? ENOSPC : 0;
 635         }

Note that for the last case, we don't know what the value of "noroom"
is. noroom gets set to 1 if we've reached the maximum number of inodes
in the filesystem. From the earlier superblock dump you did:

> dblocks = 71627792
.....
> inopblog = 3
.....
> imax_pct = 25
> icount = 3570112
> ifree = 0

and the code that calculates this is:

        icount = sbp->sb_dblocks * sbp->sb_imax_pct;
        do_div(icount, 100);
        do_div(icount, mp->m_ialloc_blks);
        mp->m_maxicount = (icount * mp->m_ialloc_blks) << sbp->sb_inopblog;

therefore:

        m_maxicount = (((((71627792 * 25) / 100) / 4) * 4) << 3)
                    = 143,255,584

which is way larger than the 3,570,112 inodes you have already
allocated. Hence I think that noroom == 0, and the last chunk of code
above is a possibility.

Further - we need to allocate new inodes, as there are none free. That
implies we are calling xfs_ialloc_ag_alloc(). Taking a stab in the dark,
I suspect that we are not getting an error from xfs_ialloc_ag_alloc(),
but we are also not allocating inode chunks. Why? Back to the
superblock:

> unit = 16
> width = 32

You've got a filesystem with stripe alignment set. In
xfs_ialloc_ag_alloc() we attempt inode allocation by the following
rules:

1. a) If we haven't previously allocated inodes, fall through to 2.
   b) If we have previously allocated inodes, attempt to allocate next
      to the last inode chunk.
2. If we do not have an extent now:
   a) if we have stripe alignment, try with alignment
   b) if we don't have stripe alignment, try cluster alignment
3. If we do not have an extent now:
   a) if we have stripe alignment, try with cluster alignment
   b) no stripe alignment, turn off alignment.
4.
If we do not have an extent now: FAIL.

Note the case missing from the stripe alignment fallback path - it does
not try without alignment at all. That means that if all those extents
large enough that we found above are not correctly aligned, then we will
still fail to allocate an inode chunk. If all the AGs are like this,
then we'll fail to allocate at all and fall out of xfs_dialloc() through
the last fragment I quoted above.

As to the shutdown that this triggers - the attempt to allocate dirties
the AGFL and the AGF by moving free blocks into the free list for btree
splits, and cancelling a dirty transaction results in a shutdown.

Now, to test this theory. ;) Luckily, it's easy to test: mount the
filesystem with the mount option "noalign" and rerun the mkdir test. If
it is an alignment problem, then setting noalign will prevent this
ENOSPC and shutdown, as the filesystem will be able to allocate more
inodes.

Can you test this for me, Christian?

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Mon Mar 10 15:29:56 2008
Date: Tue, 11 Mar 2008 09:30:18 +1100
From: David Chinner
To: Andreas Kotes
Cc: linux-kernel@vger.kernel.org, xfs@oss.sgi.com
Subject: Re: XFS internal error
Message-ID: <20080310223018.GA155407@sgi.com>
In-Reply-To: <20080310122216.GG14256@slop.flatline.de>

On Mon, Mar 10, 2008 at 01:22:16PM +0100, Andreas Kotes wrote:
> Hello,
>
> * David Chinner [20080310 13:18]:
> > Yes, but those previous corruptions get left on disk as a landmine
> > for you to trip over some time later, even on a kernel that has the
> > bug fixed.
> >
> > I suggest that you run xfs_check on the filesystem and if that
> > shows up errors, run xfs_repair on the filesystem to correct them.
>
> I seem to be having similar problems, and xfs_repair is not helping :(

xfs_repair ensures that the problem is not being caused by on-disk
corruption. In this case, the problem does not appear to be caused by
on-disk corruption, so xfs_repair won't help.

> I always run into:
>
> [  137.099267] Filesystem "sda2": XFS internal error xfs_trans_cancel at line 1132 of file fs/xfs/xfs_trans.c.
Caller 0xffffffff80372156
> [  137.106267]
> [  137.106268] Call Trace:
> [  137.113129]  [] xfs_trans_cancel+0x100/0x130
> [  137.116524]  [] xfs_create+0x256/0x6e0
> [  137.119904]  [] xfs_dir2_isleaf+0x19/0x50
> [  137.123269]  [] xfs_vn_mknod+0x195/0x250
> [  137.126607]  [] vfs_create+0xac/0xf0
> [  137.129920]  [] open_namei+0x5dc/0x700
> [  137.133227]  [] __wake_up+0x43/0x70
> [  137.136477]  [] do_filp_open+0x1c/0x50
> [  137.139693]  [] do_sys_open+0x5a/0x100
> [  137.142838]  [] sysenter_do_call+0x1b/0x67
> [  137.145964]
> [  137.149014] xfs_force_shutdown(sda2,0x8) called from line 1133 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff8036930e
> [  137.163485] Filesystem "sda2": Corruption of in-memory data detected. Shutting down filesystem: sda2
>
> directly after booting.

Interesting. I think I just found a cause of this shutdown under certain
circumstances:

http://marc.info/?l=linux-xfs&m=120518791828200&w=2

To confirm it might be the same issue, can you dump the superblock of
this filesystem for me? i.e.:

# xfs_db -r -c 'sb 0' -c p /dev/sda2

Also, what mount options are you using?

Cheers,

Dave.
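Once such a superblock dump is in hand, the m_maxicount calculation
quoted in the parallel xfs_trans_cancel thread can be reproduced with
plain shell arithmetic; a sketch (the function name is illustrative,
and the sample values are the ones from that thread's dump):

```shell
#!/bin/sh
# Reproduce the kernel's m_maxicount calculation from superblock
# fields:  icount      = dblocks * imax_pct / 100 / ialloc_blks
#          m_maxicount = (icount * ialloc_blks) << inopblog
max_inode_count() {
    dblocks=$1 imax_pct=$2 ialloc_blks=$3 inopblog=$4
    icount=$(( dblocks * imax_pct / 100 / ialloc_blks ))
    echo $(( (icount * ialloc_blks) << inopblog ))
}

# values from the superblock dump in the xfs_trans_cancel thread
max_inode_count 71627792 25 4 3    # prints 143255584
```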
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Mar 10 15:59:05 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 10 Mar 2008 15:59:14 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2AMx2rS008835 for ; Mon, 10 Mar 2008 15:59:04 -0700 X-ASG-Debug-ID: 1205189972-6c3401920000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.flatline.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 4D9BC1250F92; Mon, 10 Mar 2008 15:59:32 -0700 (PDT) Received: from mail.flatline.de (flatline.de [80.190.243.144]) by cuda.sgi.com with ESMTP id 7cwm3ETcNwwg07sv; Mon, 10 Mar 2008 15:59:32 -0700 (PDT) Received: from shell.priv.flatline.de ([172.16.123.7] helo=slop.flatline.de ident=count) by mail.flatline.de with smtp (Exim 4.69) (envelope-from ) id 1JYqxr-00006A-JA; Mon, 10 Mar 2008 23:59:28 +0100 Received: by slop.flatline.de (sSMTP sendmail emulation); Mon, 10 Mar 2008 23:59:27 +0100 Date: Mon, 10 Mar 2008 23:59:27 +0100 From: Andreas Kotes To: David Chinner Cc: linux-kernel@vger.kernel.org, xfs@oss.sgi.com X-ASG-Orig-Subj: Re: XFS internal error Subject: Re: XFS internal error Message-ID: <20080310225927.GP14256@slop.flatline.de> References: <470831E6.4030704@fastmail.co.uk> <20071008001452.GX995458@sgi.com> <20080310122216.GG14256@slop.flatline.de> <20080310223018.GA155407@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080310223018.GA155407@sgi.com> User-Agent: Mutt/1.5.13 (2006-08-11) X-Barracuda-Connect: flatline.de[80.190.243.144] X-Barracuda-Start-Time: 1205189973 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com 
X-ASG-Whitelist: BODY (http://marc.info/\?)
X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com
X-Virus-Status: Clean
X-archive-position: 14835
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: count@flatline.de
Precedence: bulk
X-list: xfs

Hello Dave,

* David Chinner [20080310 23:30]:
> On Mon, Mar 10, 2008 at 01:22:16PM +0100, Andreas Kotes wrote:
> > * David Chinner [20080310 13:18]:
> > > Yes, but those previous corruptions get left on disk as a landmine
> > > for you to trip over some time later, even on a kernel that has the
> > > bug fixed.
> > >
> > > I suggest that you run xfs_check on the filesystem and if that
> > > shows up errors, run xfs_repair on the filesystem to correct them.
> >
> > I seem to be having similar problems, and xfs_repair is not helping :(
>
> xfs_repair is ensuring that the problem is not being caused by on-disk
> corruption. In this case, it does not appear to be caused by on-disk
> corruption, so xfs_repair won't help.

ok, too bad - btw, is it a problem that I'm doing the xfs_repair on a
mounted filesystem with xfs_repair -f -L after a remount rw?

> > I always run into:
> >
> > [ 137.099267] Filesystem "sda2": XFS internal error xfs_trans_cancel at line 1132 of file fs/xfs/xfs_trans.c. Caller 0xffffffff80372156
> > [ 137.106267]
> > [ 137.106268] Call Trace:
> > [ 137.113129] [] xfs_trans_cancel+0x100/0x130
> > [ 137.116524] [] xfs_create+0x256/0x6e0
> > [ 137.119904] [] xfs_dir2_isleaf+0x19/0x50
> > [ 137.123269] [] xfs_vn_mknod+0x195/0x250
> > [ 137.126607] [] vfs_create+0xac/0xf0
> > [ 137.129920] [] open_namei+0x5dc/0x700
> > [ 137.133227] [] __wake_up+0x43/0x70
> > [ 137.136477] [] do_filp_open+0x1c/0x50
> > [ 137.139693] [] do_sys_open+0x5a/0x100
> > [ 137.142838] [] sysenter_do_call+0x1b/0x67
> > [ 137.145964]
> > [ 137.149014] xfs_force_shutdown(sda2,0x8) called from line 1133 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff8036930e
> > [ 137.163485] Filesystem "sda2": Corruption of in-memory data detected. Shutting down filesystem: sda2
> >
> > directly after booting.
>
> Interesting. I think I just found a cause of this shutdown under
> certain circumstances:
>
> http://marc.info/?l=linux-xfs&m=120518791828200&w=2
>
> To confirm it might be the same issue, can you dump the superblock of
> this filesystem for me? i.e.:
>
> # xfs_db -r -c 'sb 0' -c p /dev/sda2

certainly:

magicnum = 0x58465342
blocksize = 4096
dblocks = 35613152
rblocks = 0
rextents = 0
uuid = 62dae5fa-4085-4edc-ad76-5652d9fb00ae
logstart = 33554436
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 1
agblocks = 2225822
agcount = 16
rbmblocks = 0
logblocks = 17389
versionnum = 0x3084
sectsize = 512
inodesize = 256
inopblock = 16
fname = "s2g-serv\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 22
rextslog = 0
inprogress = 0
imax_pct = 25
icount = 15232
ifree = 2379
fdblocks = 5942436
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 0
features2 = 0

> Also, what mount options are you using?

rw,noatime ...
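As a quick cross-check of the dump above, the size-related fields can be combined with plain shell arithmetic. The numbers below are copied from this superblock dump; the MiB conversion is generic arithmetic, not an xfs_db feature:

```shell
# Values copied from the xfs_db superblock dump above.
blocksize=4096        # bytes per filesystem block
dblocks=35613152      # total data blocks
fdblocks=5942436      # free data blocks
icount=15232          # allocated inodes
ifree=2379            # free inodes

bpm=$(( 1024 * 1024 / blocksize ))   # blocks per MiB (256 for 4 KiB blocks)

echo "total: $(( dblocks / bpm )) MiB"
echo "free:  $(( fdblocks / bpm )) MiB"
echo "free inode slots: $(( ifree * 100 / icount ))%"   # ~15% of inodes free
```

With these values the filesystem is about 136 GiB with roughly 23 GiB free, and about 15% of the allocated inodes are unused.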
if you want more info, just let me know :) Kind regards from Berlin, Andreas -- flatline IT services - Andreas Kotes - Tailored solutions for your IT needs From owner-xfs@oss.sgi.com Mon Mar 10 16:45:16 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 10 Mar 2008 16:45:36 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2ANjCUP018533 for ; Mon, 10 Mar 2008 16:45:15 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA15360; Tue, 11 Mar 2008 10:45:43 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2ANjfLF92781281; Tue, 11 Mar 2008 10:45:42 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2ANjeQC92784398; Tue, 11 Mar 2008 10:45:40 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 11 Mar 2008 10:45:40 +1100 From: David Chinner To: Andreas Kotes Cc: David Chinner , linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: XFS internal error Message-ID: <20080310234539.GC155407@sgi.com> References: <470831E6.4030704@fastmail.co.uk> <20071008001452.GX995458@sgi.com> <20080310122216.GG14256@slop.flatline.de> <20080310223018.GA155407@sgi.com> <20080310225927.GP14256@slop.flatline.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080310225927.GP14256@slop.flatline.de> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean 
X-archive-position: 14836
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: dgc@sgi.com
Precedence: bulk
X-list: xfs

On Mon, Mar 10, 2008 at 11:59:27PM +0100, Andreas Kotes wrote:
> * David Chinner [20080310 23:30]:
> > On Mon, Mar 10, 2008 at 01:22:16PM +0100, Andreas Kotes wrote:
> > > * David Chinner [20080310 13:18]:
> > > > Yes, but those previous corruptions get left on disk as a landmine
> > > > for you to trip over some time later, even on a kernel that has the
> > > > bug fixed.
> > > >
> > > > I suggest that you run xfs_check on the filesystem and if that
> > > > shows up errors, run xfs_repair on the filesystem to correct them.
> > >
> > > I seem to be having similar problems, and xfs_repair is not helping :(
> >
> > xfs_repair is ensuring that the problem is not being caused by on-disk
> > corruption. In this case, it does not appear to be caused by on-disk
> > corruption, so xfs_repair won't help.
>
> ok, too bad - btw, is it a problem that I'm doing the xfs_repair on a
> mounted filesystem with xfs_repair -f -L after a remount rw?

If it was read-only, and you rebooted immediately afterwards, you'd
probably be ok. Doing this to a mounted, rw filesystem is asking for
trouble. If the shutdown is occurring after you've run xfs_repair, then
it is almost certainly the cause....

I'd suggest getting a Knoppix (or similar) rescue disk and repairing
from that, rebooting and seeing if the problem persists. If it does,
then we'll have to look further into it.

FWIW, you've got plenty of free inodes so this does not look to be the
same problem I've just found.

Cheers,

Dave.
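The sequence suggested above (repair from rescue media, never against a mounted read-write filesystem) can be sketched as a short script. The device path /dev/sda2 comes from this thread, and DRY_RUN=1 (the default) only echoes the commands, since running xfs_repair against the wrong target is destructive:

```shell
#!/bin/sh
# Sketch of the repair sequence suggested above. DRY_RUN=1 (the default)
# only prints the commands; run them for real only from rescue media,
# with the filesystem unmounted. /dev/sda2 is the device from this
# thread; substitute your own.
DEV=${DEV:-/dev/sda2}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run umount "$DEV"       # never repair a mounted read-write filesystem
run xfs_check "$DEV"    # read-only consistency check first
run xfs_repair "$DEV"   # plain repair; -L (zero the log) is a last
                        # resort, since it discards committed transactions
```

The -L caveat matters here: xfs_repair -L throws away whatever was in the log, which is itself a source of the "lost some files" symptom reported earlier in this archive.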
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Mar 10 17:44:15 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 10 Mar 2008 17:44:34 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.6 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2B0iExN028589 for ; Mon, 10 Mar 2008 17:44:15 -0700 X-ASG-Debug-ID: 1205196284-68d000350000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from postoffice.aconex.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 6D63B125114A for ; Mon, 10 Mar 2008 17:44:45 -0700 (PDT) Received: from postoffice.aconex.com (prod.aconex.com [203.89.192.138]) by cuda.sgi.com with ESMTP id G6Aqgywa8bFvjw4u for ; Mon, 10 Mar 2008 17:44:45 -0700 (PDT) Received: from [192.168.5.76] (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 7FF1392C49A; Tue, 11 Mar 2008 11:44:12 +1100 (EST) X-ASG-Orig-Subj: Re: [PATCH, RFC] - remove mounpoint UUID code Subject: Re: [PATCH, RFC] - remove mounpoint UUID code From: Nathan Scott Reply-To: nscott@aconex.com To: Eric Sandeen Cc: xfs-oss In-Reply-To: <47D20F78.7000103@sandeen.net> References: <47D20F78.7000103@sandeen.net> Content-Type: text/plain Organization: Aconex Date: Tue, 11 Mar 2008 11:44:12 +1100 Message-Id: <1205196252.15982.69.camel@edge.scott.net.au> Mime-Version: 1.0 X-Mailer: Evolution 2.12.3 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: prod.aconex.com[203.89.192.138] X-Barracuda-Start-Time: 1205196285 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -1.42 X-Barracuda-Spam-Status: No, SCORE=-1.42 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 
KILL_LEVEL=2.1 tests=COMMA_SUBJECT X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44467 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.60 COMMA_SUBJECT Subject is like 'Re: FDSDS, this is a subject' X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14837 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Fri, 2008-03-07 at 22:00 -0600, Eric Sandeen wrote: > It looks like all of the below is unused... and according > to Nathan, > > "dont think it even got used/implemented anywhere, but i think it > was meant to be an auto-mount kinda thing... such that when you look > up at that point, it knows to mount the device with that uuid there, > if its not already it was never really written anywhere ... just an > idea in doug doucettes brain i think." > > Think it'll ever go anywhere, or should it get pruned? > > The below builds; not at all tested, until I get an idea if it's worth > doing. Need to double check that some structures might not need padding > out to keep things compatible/consistent... Since effectively all versions of XFS support this feature ondisk, including complete support in recovery, it would be better IMO to leave it in for someone to implement/experiment with the syscall and auto-mounting userspace support. That would then require no new feature bits, mkfs/repair changes, etc. There is effectively zero cost to leaving it there - and non-zero cost in removing it, if our seriously bad regression-via-cleanup history is anything to go by ... :| It would be really unfortunate to remove this, and then find that it was useful to someone (who didn't know about it at this time). 
OTOH, if there is definitely never ever any chance this can ever be
useful, then it should indeed be removed. :)

cheers.

--
Nathan

From owner-xfs@oss.sgi.com Mon Mar 10 18:04:06 2008
Received: with ECARTIS (v1.0.0; list xfs); Mon, 10 Mar 2008 18:04:25 -0700 (PDT)
X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664
Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2B13xV2031888 for ; Mon, 10 Mar 2008 18:04:03 -0700
Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA17583; Tue, 11 Mar 2008 12:04:23 +1100
Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2B14MLF91634908; Tue, 11 Mar 2008 12:04:22 +1100 (AEDT)
Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2B14Lgb92800887; Tue, 11 Mar 2008 12:04:21 +1100 (AEDT)
X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f
Date: Tue, 11 Mar 2008 12:04:21 +1100
From: David Chinner
To: xfs-dev
Cc: xfs@oss.sgi.com, haryadi@cs.wisc.edu, remzi@cs.wisc.edu
Subject: [Review] Improve XFS error checking and propagation
Message-ID: <20080311010420.GD155407@sgi.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.4.2.1i
X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com
X-Virus-Status: Clean
X-archive-position: 14838
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: dgc@sgi.com
Precedence: bulk
X-list: xfs

A recent paper at the FAST08 conference highlighted a large number of unchecked error
paths in Linux filesystems and I/O layers. As a subsystem, XFS had the
highest aggregate number of bad error propagation sites.

A tarball which contains a quilt patch series of 32 patches aimed at
improving this situation can be found here:

http://oss.sgi.com/~dgc/xfs/error-check/xfs-error-checking.tar.gz

The paper "EIO: Error Handling is Occasionally Correct" can be found here:

http://www.cs.wisc.edu/adsl/Publications/eio-fast08.html

And the in-depth results here:

http://www.cs.wisc.edu/adsl/Publications/eio-fast08/readme.html
http://www.cs.wisc.edu/adsl/Publications/eio-fast08/

The XFS results I've been working from are here:

http://www.cs.wisc.edu/adsl/Publications/eio-fast08/fullfs-xfs-without-false-positives.txt

and included below is an annotated version of this file as I've worked
through it. The graph of the XFS error paths is a good visual
representation of how the bad error paths tend to cluster together:

http://www.cs.wisc.edu/adsl/Publications/eio-fast08/singlefs-xfs.pdf

(you'll need at least 800% zoom to be able to read it at all)

The paper analysed a 2.6.15 kernel, but I've been working against an
xfs-dev tree (~2.6.24). Of the 101 reported problems for the 2.6.15
kernel that was analysed:

- 7 did not exist anymore (bhv layer, dirv1, write path changes)
- 11 were false positives that were not modified
- 24 were false positives that have been patched to remove (e.g. int xfs_foo() to void xfs_foo())
- 37 real problems where an error needed to be returned and are fixed in the patch series.
- 3 where there is no error path to return an error and no point in even warning about it (ENOSPC flushing)
- 10 where there is no error path to return an error, but patched to warn to the syslog about potential data loss or metadata I/O errors
- 4 were already fixed in the xfs-dev tree
- 2 where the error is ignored because we must continue anyway (patched to warn to syslog)
- 4 that I haven't yet fixed (xfs_buf_iostrategy and xfs_buf_iostart) because I need to think about them more.

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

----------------------------------------
fs/xfs/
----
d  1 xfs_write -> _xfs_log_force fs/xfs/linux-2.6/xfs_lrw.c 881
d  2 xfs_write -> _xfs_log_force fs/xfs/linux-2.6/xfs_lrw.c 884
F  3 xfs_flush_device -> _xfs_log_force fs/xfs/linux-2.6/xfs_super.c 547
F  4 xfs_qm_dqflush -> _xfs_log_force fs/xfs/quota/xfs_dquot.c 1294
F  5 xfs_qm_dqflock_pushbuf_wait -> _xfs_log_force fs/xfs/quota/xfs_dquot.c 1591
F  6 xfs_qm_dqunpin_wait -> _xfs_log_force fs/xfs/quota/xfs_dquot_item.c 204
F  7 xfs_qm_dquot_logitem_pushbuf -> _xfs_log_force fs/xfs/quota/xfs_dquot_item.c 267
F  8 xfs_alloc_search_busy -> _xfs_log_force fs/xfs/xfs_alloc.c 2593
F  9 xfs_iunpin_wait -> _xfs_log_force fs/xfs/xfs_inode.c 2847
F 10 xfs_iflush -> _xfs_log_force fs/xfs/xfs_inode.c 3243
F 11 xfs_inode_item_pushbuf -> _xfs_log_force fs/xfs/xfs_inode_item.c 819
P 12 xfs_log_unmount_write -> _xfs_log_force fs/xfs/xfs_log.c 529
F 13 xlog_recover_finish -> _xfs_log_force fs/xfs/xfs_log_recover.c 3961
F 14 xfs_unmountfs -> _xfs_log_force fs/xfs/xfs_mount.c 1088
F 15 xfs_trans_push_ail -> _xfs_log_force fs/xfs/xfs_trans_ail.c 198
F 16 xfs_syncsub -> _xfs_log_force fs/xfs/xfs_vfsops.c 1440
F 17 xfs_syncsub -> _xfs_log_force fs/xfs/xfs_vfsops.c 1455
F 18 xfs_syncsub -> _xfs_log_force fs/xfs/xfs_vfsops.c 1491
F 19 xfs_syncsub -> _xfs_log_force fs/xfs/xfs_vfsops.c 1543
P 20 xfs_fsync -> _xfs_log_force fs/xfs/xfs_vnodeops.c 1129
P 21 xfs_qm_write_sb_changes -> _xfs_trans_commit fs/xfs/quota/xfs_qm.c 2414
P 22 xfs_qm_scall_setqlim -> _xfs_trans_commit fs/xfs/quota/xfs_qm_syscalls.c 739
P 23 xfs_itruncate_finish -> _xfs_trans_commit fs/xfs/xfs_inode.c 1718
P 24 xlog_recover_process_efi -> _xfs_trans_commit fs/xfs/xfs_log_recover.c 3047
P 25 xlog_recover_clear_agi_bucket -> _xfs_trans_commit fs/xfs/xfs_log_recover.c 3174
PB 26 xfs_mount_log_sbunit -> _xfs_trans_commit fs/xfs/xfs_mount.c 1579
P 27 xfs_growfs_rt_alloc -> _xfs_trans_commit fs/xfs/xfs_rtalloc.c 154
P 28 xfs_growfs_rt_alloc -> _xfs_trans_commit fs/xfs/xfs_rtalloc.c 191
P 29 xfs_growfs_rt -> _xfs_trans_commit fs/xfs/xfs_rtalloc.c 2103
P 30 xfs_inactive_attrs -> _xfs_trans_commit fs/xfs/xfs_vnodeops.c 1505
C 31 xfs_inactive -> _xfs_trans_commit fs/xfs/xfs_vnodeops.c 1790
d 32 xfs_initialize_vnode -> bhv_insert fs/xfs/linux-2.6/xfs_super.c 220
d 33 vfs_insertops -> bhv_insert fs/xfs/linux-2.6/xfs_vfs.c 259
N 34 linvfs_truncate -> block_truncate_page fs/xfs/linux-2.6/xfs_iops.c 651
G 35 fs_flushinval_pages -> filemap_fdatawait fs/xfs/linux-2.6/xfs_fs_subr.c 83
G 36 fs_flush_pages -> filemap_fdatawait fs/xfs/linux-2.6/xfs_fs_subr.c 108
G 37 fs_flushinval_pages -> filemap_fdatawrite fs/xfs/linux-2.6/xfs_fs_subr.c 82
G 38 fs_flush_pages -> filemap_fdatawrite fs/xfs/linux-2.6/xfs_fs_subr.c 105
n 39 xfs_flush_inode_work -> filemap_flush fs/xfs/linux-2.6/xfs_super.c 508
PM 40 xlog_sync -> pagebuf_associate_memory fs/xfs/xfs_log.c 1358
PM 41 xlog_sync -> pagebuf_associate_memory fs/xfs/xfs_log.c 1395
PM 42 xlog_write_log_records -> pagebuf_associate_memory fs/xfs/xfs_log_recover.c 1156
PM 43 xlog_write_log_records -> pagebuf_associate_memory fs/xfs/xfs_log_recover.c 1159
PM 44 xlog_do_recovery_pass -> pagebuf_associate_memory fs/xfs/xfs_log_recover.c 3646
PM 45 xlog_do_recovery_pass -> pagebuf_associate_memory fs/xfs/xfs_log_recover.c 3653
PM 46 xlog_do_recovery_pass -> pagebuf_associate_memory fs/xfs/xfs_log_recover.c 3705
PM 47 xlog_do_recovery_pass -> pagebuf_associate_memory fs/xfs/xfs_log_recover.c 3711
M 48 xfs_buf_read_flags -> pagebuf_iostart fs/xfs/linux-2.6/xfs_buf.c 636
M 49 xfsbufd -> pagebuf_iostrategy fs/xfs/linux-2.6/xfs_buf.c 1755
M 50 xfs_flush_buftarg -> pagebuf_iostrategy fs/xfs/linux-2.6/xfs_buf.c 1816
M 51 XFS_bwrite -> pagebuf_iostrategy fs/xfs/linux-2.6/xfs_buf.h 503
n 52 xfs_flush_device_work -> sync_blockdev fs/xfs/linux-2.6/xfs_super.c 533
f 53 exit_xfs_fs -> unregister_filesystem fs/xfs/linux-2.6/xfs_super.c 999
PM 54 xfs_acl_vset -> xfs_acl_vremove fs/xfs/xfs_acl.c 326
f 55 xfs_ialloc_ag_select -> xfs_alloc_pagf_init fs/xfs/xfs_ialloc.c 411
P 56 xfs_qm_dqflush -> xfs_bawrite fs/xfs/quota/xfs_dquot.c 1300
N 57 xfs_qm_dqflock_pushbuf_wait -> xfs_bawrite fs/xfs/quota/xfs_dquot.c 1595
N 58 xfs_qm_dquot_logitem_pushbuf -> xfs_bawrite fs/xfs/quota/xfs_dquot_item.c 275
N 59 xfs_buf_item_push -> xfs_bawrite fs/xfs/xfs_buf_item.c 669
P 60 xfs_iflush -> xfs_bawrite fs/xfs/xfs_inode.c 3249
N 61 xfs_inode_item_pushbuf -> xfs_bawrite fs/xfs/xfs_inode_item.c 823
F 62 xfs_qm_dqflush -> xfs_bdwrite fs/xfs/quota/xfs_dquot.c 1298
F 63 xfs_qm_dqiter_bufs -> xfs_bdwrite fs/xfs/quota/xfs_qm.c 1551
F 64 xfs_iflush -> xfs_bdwrite fs/xfs/xfs_inode.c 3247
F 65 xlog_recover_do_buffer_trans -> xfs_bdwrite fs/xfs/xfs_log_recover.c 2271
F 66 xlog_recover_do_inode_trans -> xfs_bdwrite fs/xfs/xfs_log_recover.c 2535
F 67 xlog_recover_do_dquot_trans -> xfs_bdwrite fs/xfs/xfs_log_recover.c 2664
C 68 xfs_inactive -> xfs_bmap_finish fs/xfs/xfs_vnodeops.c 1788
P 69 xfs_iomap_write_allocate -> xfs_bmap_last_offset fs/xfs/xfs_iomap.c 787
d 70 xfs_dir_leaf_rebalance -> xfs_dir_leaf_compact fs/xfs/xfs_dir_leaf.c 1146
d 71 xfs_dir_leaf_rebalance -> xfs_dir_leaf_compact fs/xfs/xfs_dir_leaf.c 1176
d 72 xfs_dir_leaf_to_shortform -> xfs_dir_shortform_addname fs/xfs/xfs_dir_leaf.c 693
P 73 xlog_recover_process_efi -> xfs_free_extent fs/xfs/xfs_log_recover.c 3041
n 74 xfs_inode_item_push -> xfs_iflush fs/xfs/xfs_inode_item.c 879
P 75 xlog_recover_do_inode_trans -> xfs_imap fs/xfs/xfs_log_recover.c 2320
f 76 xfs_bmap_add_extent -> xfs_mod_incore_sb fs/xfs/xfs_bmap.c 689
f 77 xfs_bmap_add_extent_hole_delay -> xfs_mod_incore_sb fs/xfs/xfs_bmap.c 1918
f 78 xfs_bmap_del_extent -> xfs_mod_incore_sb fs/xfs/xfs_bmap.c 3117
f 79 xfs_bmapi -> xfs_mod_incore_sb fs/xfs/xfs_bmap.c 4801
f 80 xfs_bmapi -> xfs_mod_incore_sb fs/xfs/xfs_bmap.c 4805
f 81 xfs_bunmapi -> xfs_mod_incore_sb fs/xfs/xfs_bmap.c 5452
f 82 xfs_bunmapi -> xfs_mod_incore_sb fs/xfs/xfs_bmap.c 5458
f 83 xfs_trans_reserve -> xfs_mod_incore_sb fs/xfs/xfs_trans.c 305
P 84 xfs_qm_quotacheck -> xfs_mount_reset_sbqflags fs/xfs/quota/xfs_qm.c 1962
N 85 xfs_qm_dqpurge -> xfs_qm_dqflush fs/xfs/quota/xfs_dquot.c 1505
NB 86 xfs_qm_dquot_logitem_push -> xfs_qm_dqflush fs/xfs/quota/xfs_dquot_item.c 168
N 87 xfs_qm_shake_freelist -> xfs_qm_dqflush fs/xfs/quota/xfs_qm.c 2134
N 88 xfs_qm_dqreclaim_one -> xfs_qm_dqflush fs/xfs/quota/xfs_qm.c 2306
PM 89 xfs_qm_quotacheck -> xfs_qm_dqflush_all fs/xfs/quota/xfs_qm.c 1930
P 90 xfs_qm_scall_quotaoff -> xfs_qm_log_quotaoff fs/xfs/quota/xfs_qm_syscalls.c 291
P 91 xfs_qm_scall_quotaoff -> xfs_qm_log_quotaoff_end fs/xfs/quota/xfs_qm_syscalls.c 347
F 92 xfs_qm_newmount -> xfs_qm_mount_quotas fs/xfs/quota/xfs_qm_bhv.c 273
F 93 xfs_qm_endmount -> xfs_qm_mount_quotas fs/xfs/quota/xfs_qm_bhv.c 301
PM 94 xfs_quiesce_fs -> xfs_syncsub fs/xfs/xfs_vfsops.c 632
P 95 xlog_recover_process_efi -> xfs_trans_reserve fs/xfs/xfs_log_recover.c 3036
P 96 xlog_recover_clear_agi_bucket -> xfs_trans_reserve fs/xfs/xfs_log_recover.c 3152
P 97 xfs_qm_scall_trunc_qfiles -> xfs_truncate_file fs/xfs/quota/xfs_qm_syscalls.c 395
P 98 xfs_qm_scall_trunc_qfiles -> xfs_truncate_file fs/xfs/quota/xfs_qm_syscalls.c 404
P 99 xfs_log_unmount_write -> xlog_state_release_iclog fs/xfs/xfs_log.c 570
P 100 xfs_log_unmount_write -> xlog_state_release_iclog fs/xfs/xfs_log.c 606
f 101 xfs_log_force_umount -> xlog_state_sync_all fs/xfs/xfs_log.c 3586

f = false positive
F = false positive + patch to remove condition
G = patch in mainline git tree already
M = __must_check annotations found this as well
P = real, patch to fix
n = no error path to return error
N = no error path to return error, patch to warn about error added
d = does not exist anymore
B = some other bug found and fixed at same time
C = error ignored, must continue anyway. If silent, made noisy

Notes:

- all the xfs_mod_incore_sb() calls are false positives because they are
  freeing blocks or extents, which means there can never be an error
  returned. The only error that can be returned is ENOSPC when trying to
  allocate blocks....
- none of the callers of xfs_mount_log_sb() check the return value.
- the new function xfs_log_sbcount failed to check the return of
  xfs_trans_commit, and its callers are failing to check its return
  value.
- most of the callers of xfs_log_force() are not interested in errors -
  they'll get them through other means (i.e. a log error implies
  filesystem shutdown). Only a handful of callers really should return
  errors, such as fsync(), sync writes or synchronous transaction
  commits.
From owner-xfs@oss.sgi.com Mon Mar 10 18:14:58 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 10 Mar 2008 18:15:19 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2B1Es9F001444 for ; Mon, 10 Mar 2008 18:14:56 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA17834; Tue, 11 Mar 2008 12:15:21 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2B1FJLF92840015; Tue, 11 Mar 2008 12:15:19 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2B1FE8W92795931; Tue, 11 Mar 2008 12:15:14 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 11 Mar 2008 12:15:14 +1100 From: David Chinner To: Niv Sardi Cc: David Chinner , Niv Sardi , sgi.bugs.xfs@fido.engr.sgi.com, xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: ADD 977766 - mkfs.xfs man page needs the default settings updated. 
[REVIEW TAKE 3]
Message-ID: <20080311011514.GE155407@sgi.com>
References: <20080222003514.8D88E2C3@toolshop.engr.sgi.com> <20080310060751.GY155407@sgi.com> <416c461f0803092315m7ae6f55ek9b64058c3793aaa7@mail.gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <416c461f0803092315m7ae6f55ek9b64058c3793aaa7@mail.gmail.com>
User-Agent: Mutt/1.4.2.1i
X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com
X-Virus-Status: Clean
X-archive-position: 14839
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: dgc@sgi.com
Precedence: bulk
X-list: xfs

On Mon, Mar 10, 2008 at 05:15:04PM +1100, Niv Sardi wrote:
> On Mon, Mar 10, 2008 at 5:07 PM, David Chinner wrote:
> > Secondly, version one logs are not being kept around for backwards
> > compatibility reasons. It's a valid, supported configuration, and in
> > some cases performs better than version 2 logs....
>
> Can you be more specific?

More specific about which comment?

Re: performance - specSFS. IIRC, anything that is effectively a
synchronous transaction workload tends to perform slightly better with
v1 logs than v2 logs. It's in the order of a few percent, but some
people kill for that ;)

> the man page should document when this is better supported, and I
> believe you're the one that has the best knowledge about that.
>
> > Realistically, I see no need for changing this text except to add that
> > the default is version 2.
>
> The change was motivated by Eric's comments on OSS that it is not
> clear why one should pick log v1 or v2, and I believe he is right.

If you don't understand - use the default. In most cases v2 logs are the
right thing to use and no amount of text in the man page is going to be
able to explain the corner cases where you'd want to use v1 logs....

Cheers,

Dave.
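The knob under discussion is the version= suboption of mkfs.xfs's -l option. A hedged illustration follows; the commands are only echoed, never executed, because mkfs.xfs destroys existing data, and /dev/sdX is a placeholder device:

```shell
# Illustration only: the mkfs.xfs command lines are echoed, not run.
# /dev/sdX is a placeholder. Per the discussion above, v2 is the default
# and also allows a log stripe unit (su=); v1 remains a valid, supported
# format that can be a few percent faster on synchronous transaction
# workloads.
v2_cmd="mkfs.xfs -l version=2,su=32k /dev/sdX"
v1_cmd="mkfs.xfs -l version=1 /dev/sdX"
echo "$v2_cmd"
echo "$v1_cmd"
```

Both forms are documented in mkfs.xfs(8); as Dave says above, if you have no specific reason to choose, take the default.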
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Mar 10 18:19:23 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 10 Mar 2008 18:19:31 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2B1JI3a002192 for ; Mon, 10 Mar 2008 18:19:23 -0700 X-ASG-Debug-ID: 1205198388-0ac800360000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 89283F5C71C for ; Mon, 10 Mar 2008 18:19:48 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id mHZLCqrOgX4zbbgq for ; Mon, 10 Mar 2008 18:19:48 -0700 (PDT) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 6AC6518004EE3; Mon, 10 Mar 2008 20:19:16 -0500 (CDT) Message-ID: <47D5DE13.8030902@sandeen.net> Date: Mon, 10 Mar 2008 20:19:15 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: nscott@aconex.com CC: xfs-oss X-ASG-Orig-Subj: Re: [PATCH, RFC] - remove mounpoint UUID code Subject: Re: [PATCH, RFC] - remove mounpoint UUID code References: <47D20F78.7000103@sandeen.net> <1205196252.15982.69.camel@edge.scott.net.au> In-Reply-To: <1205196252.15982.69.camel@edge.scott.net.au> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1205198389 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com 
X-Barracuda-Spam-Score: -1.42
X-Barracuda-Spam-Status: No, SCORE=-1.42 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=COMMA_SUBJECT
X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44469 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.60 COMMA_SUBJECT Subject is like 'Re: FDSDS, this is a subject'
X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com
X-Virus-Status: Clean
X-archive-position: 14840
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: sandeen@sandeen.net
Precedence: bulk
X-list: xfs

Nathan Scott wrote:

(hope you didn't mind too much my quoting you in this thread) ;)

> Since effectively all versions of XFS support this feature ondisk,
> including complete support in recovery, it would be better IMO to
> leave it in for someone to implement/experiment with the syscall
> and auto-mounting userspace support. That would then require no
> new feature bits, mkfs/repair changes, etc. There is effectively
> zero cost to leaving it there - and non-zero cost in removing it,
> if our seriously bad regression-via-cleanup history is anything
> to go by ... :|

the only cost to leaving it is having another instance of "ok now what
the heck is THIS?!" ... death by a thousand cuts of XFS complexity. But
yeah, removing it has some risk too.

> It would be really unfortunate to remove this, and then find that
> it was useful to someone (who didn't know about it at this time).
> OTOH, if there is definitely never ever any chance this can ever
> be useful, then it should indeed be removed. :)

Well, I'm not hung up about it. If anyone thinks it'll be useful, I'm
not bothered by leaving it as is.

So, Nathan, what are your plans for this code?
*grin* -Eric From owner-xfs@oss.sgi.com Mon Mar 10 18:45:32 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 10 Mar 2008 18:45:52 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2B1jSeI005781 for ; Mon, 10 Mar 2008 18:45:32 -0700 X-ASG-Debug-ID: 1205199959-68b001220000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id B8B3DF5C581 for ; Mon, 10 Mar 2008 18:45:59 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id ZXkv4SBGRrgRFxx3 for ; Mon, 10 Mar 2008 18:45:59 -0700 (PDT) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id A3C7818004EE3; Mon, 10 Mar 2008 20:45:58 -0500 (CDT) Message-ID: <47D5E455.9090309@sandeen.net> Date: Mon, 10 Mar 2008 20:45:57 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: David Chinner CC: Niv Sardi , Niv Sardi , sgi.bugs.xfs@fido.engr.sgi.com, xfs-dev@sgi.com, xfs@oss.sgi.com X-ASG-Orig-Subj: Re: ADD 977766 - mkfs.xfs man page needs the default settings updated. [REVIEW TAKE 3] Subject: Re: ADD 977766 - mkfs.xfs man page needs the default settings updated. 
[REVIEW TAKE 3] References: <20080222003514.8D88E2C3@toolshop.engr.sgi.com> <20080310060751.GY155407@sgi.com> <416c461f0803092315m7ae6f55ek9b64058c3793aaa7@mail.gmail.com> <20080311011514.GE155407@sgi.com> In-Reply-To: <20080311011514.GE155407@sgi.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1205199959 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44471 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14841 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs David Chinner wrote: >> The change was motivated by Eric's comments on OSS that it is not >> clear why one should pick log v1 or v2, and I believe he is right. > > If you don't understand - use the default. In most cases v2 logs are the > right thing to use and no amount of text in the man page is going to be > able to explain the corner cases where you'd want to use v1 logs.... I think the only problem, Dave, is that there are maybe 2 people on the face of this earth who DO understand ;) (and I don't count myself among them). Just saying that v1 logs are still there for corner cases & specialized workloads which may perform better is probably fine, don't you think? That way those people who kill for such things can test both flavors. Without that, people won't know if v1 is broken, deprecated, dangerous, or what. 
If you say nothing at all about the differences, then don't even bother to document the log version option at all, IMHO. -Eric From owner-xfs@oss.sgi.com Mon Mar 10 18:55:21 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 10 Mar 2008 18:55:28 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.2 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2B1tJCd007297 for ; Mon, 10 Mar 2008 18:55:21 -0700 X-ASG-Debug-ID: 1205200549-0ae700d20000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 6BC16F5CA2E; Mon, 10 Mar 2008 18:55:49 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id Bs3iFoLYA9xGiHJ8; Mon, 10 Mar 2008 18:55:49 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2B1tlP7001558; Mon, 10 Mar 2008 21:55:47 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id 4827D1C07AC0; Mon, 10 Mar 2008 21:55:49 -0400 (EDT) Date: Mon, 10 Mar 2008 21:55:49 -0400 From: "Josef 'Jeff' Sipek" To: Barry Naujok Cc: "xfs@oss.sgi.com" , xfs-dev X-ASG-Orig-Subj: Re: Final call for review of sb_bad_features2 in userspace Subject: Re: Final call for review of sb_bad_features2 in userspace Message-ID: <20080311015549.GB8870@josefsipek.net> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1205200550 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 
X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44471 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14842 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs On Thu, Mar 06, 2008 at 05:02:07PM +1100, Barry Naujok wrote: > I think the attached patch may be the least offensive for past > kernels, and XFSQA! > > xfs_check and xfs_repair will ignore sb_bad_features2 if it is > zero, and if not, makes sure it's the same as sb_features2. > > mkfs.xfs will set sb_bad_features2 to be the same. Maybe if > we change the behaviour of the kernel mount code with > respect to sb_bad_features2, this can be revisited. > > (An intermediate solution I had was if "xfs_repair -n" is > run AND sb_bad_features2 is zero, then ignore it to let > xfs_repair continue, otherwise duplicate it, but doing > that requires a golden output change to QA 030 and 033 > unless the kernel mount code is changed... ARGH!) Tiny comments below, but either way, looks good. Josef 'Jeff' Sipek. 
> =========================================================================== > xfsprogs/db/check.c > =========================================================================== > > --- a/xfsprogs/db/check.c 2008-03-06 16:59:31.000000000 +1100 > +++ b/xfsprogs/db/check.c 2008-03-06 12:32:54.664882390 +1100 > @@ -869,6 +869,15 @@ blockget_f( > mp->m_sb.sb_frextents, frextents); > error++; > } > + if (mp->m_sb.sb_bad_features2 != 0 && > + mp->m_sb.sb_bad_features2 != mp->m_sb.sb_features2) { cosmetic: I'd align the second line to have the sb-> start in the same column :) It looks kinda odd to have the second line as indented as the dbprintf few lines later. > + if (!sflag) > + dbprintf("sb_features2 (0x%x) not same as " > + "sb_bad_features2 (0x%x)\n", > + mp->m_sb.sb_features2, > + mp->m_sb.sb_bad_features2); > + error++; > + } > if ((sbversion & XFS_SB_VERSION_ATTRBIT) && > !XFS_SB_VERSION_HASATTR(&mp->m_sb)) { > if (!sflag) > ... > =========================================================================== > xfsprogs/include/xfs_sb.h > =========================================================================== > > --- a/xfsprogs/include/xfs_sb.h 2008-03-06 16:59:31.000000000 +1100 > +++ b/xfsprogs/include/xfs_sb.h 2008-02-29 17:16:33.814417687 +1100 > @@ -151,6 +151,7 @@ typedef struct xfs_sb > __uint16_t sb_logsectsize; /* sector size for the log, bytes */ > __uint32_t sb_logsunit; /* stripe unit size for the log */ > __uint32_t sb_features2; /* additional feature bits */ > + __uint32_t sb_bad_features2; /* unusable space */ > } xfs_sb_t; __attribute__((packed)) ? ... 
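Jeff's `__attribute__((packed))` question is about implicit padding: a 16-bit field followed by a 32-bit field lets the compiler insert two bytes of padding unless the struct is packed, shifting every later field off its on-disk offset. A toy fragment (a made-up subset of fields, not the full xfs_sb_t) illustrates what packing changes:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Made-up subset of superblock fields -- NOT the real xfs_sb_t, just
 * enough to show what __attribute__((packed)) does to the layout.
 */
struct sb_tail_unpacked {
    uint16_t sb_logsectsize;    /* 2 bytes; alignment may pad after it */
    uint32_t sb_logsunit;
    uint32_t sb_features2;
    uint32_t sb_bad_features2;  /* new field, appended at the end */
};

struct sb_tail_packed {
    uint16_t sb_logsectsize;
    uint32_t sb_logsunit;       /* packed: starts right at offset 2 */
    uint32_t sb_features2;
    uint32_t sb_bad_features2;
} __attribute__((packed));
```

In either layout the appended sb_bad_features2 lands immediately after sb_features2; packing only matters where an odd-sized field would otherwise introduce padding, which is why it's worth checking the struct against the on-disk format.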
> =========================================================================== > xfsprogs/repair/phase1.c > =========================================================================== > > --- a/xfsprogs/repair/phase1.c 2008-03-06 16:59:31.000000000 +1100 > +++ b/xfsprogs/repair/phase1.c 2008-03-06 16:57:40.021125442 +1100 > @@ -91,6 +91,20 @@ phase1(xfs_mount_t *mp) > primary_sb_modified = 1; > } > > + /* > + * Check bad_features2 and make sure features2 the same as > + * bad_features (ORing the two together). Leave bad_features2 > + * set so older kernels can still use it and not mount unsupported > + * filesystems when it reads bad_features2. > + */ > + if (sb->sb_bad_features2 != 0 && > + sb->sb_bad_features2 != sb->sb_features2) { Same as the check.c comment above. -- You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists. - Abbie Hoffman From owner-xfs@oss.sgi.com Tue Mar 11 01:08:09 2008 Received: with ECARTIS (v1.0.0; list xfs); Tue, 11 Mar 2008 01:08:28 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=BAYES_00,MIME_8BIT_HEADER autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2B885HK002377 for ; Tue, 11 Mar 2008 01:08:09 -0700 X-ASG-Debug-ID: 1205222914-7e6602dd0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from wx-out-0506.google.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 76506682E6F for ; Tue, 11 Mar 2008 01:08:35 -0700 (PDT) Received: from wx-out-0506.google.com (wx-out-0506.google.com [66.249.82.236]) by cuda.sgi.com with ESMTP id lpJKIxI3l7zoErKD for ; Tue, 11 Mar 2008 01:08:35 -0700 (PDT) Received: by wx-out-0506.google.com with SMTP id s9so2307964wxc.32 for ; Tue, 11 Mar 2008 01:08:34 -0700 (PDT) DKIM-Signature: 
v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; bh=p/GJsOP9/MwKhHlqJsy4MOflozfd3v73UmbG6v3+rto=; b=pHW+wwDQvUhnNIqyIbVDZqukX29GXkrQt0BvYarqSfU0CHn9jt+wSAn49ejTadlKccpdMjXJkl2WdiK2ZG4CiRka7Rra9TK+eBNbfRt0qn2kYb4QcEstvsXPoEosHUa1fbaP0JbsE5jdjNQ+coTbctuaUS4iQDmUyBCAGZxjUW8= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=fFgF0tu1sM6ZeBBC+zOVIyUsl2pEptiUOZ3T1zH6yMX53kKRHFTgW0oe2WkaAH7eZn2ei3roSiU95GNzWsEd62pQZKMB03D1SPcuiIRDrCAAum2vpW0H3q5D2qpBLP+foUsDSv4iRqQWRGuWMFxe0ixaMRvYgEF+6e9ZRtvSCMw= Received: by 10.150.157.11 with SMTP id f11mr3452704ybe.108.1205222914353; Tue, 11 Mar 2008 01:08:34 -0700 (PDT) Received: by 10.150.96.5 with HTTP; Tue, 11 Mar 2008 01:08:31 -0700 (PDT) Message-ID: <1a4a774c0803110108u3f01813fs7f9540f886be055@mail.gmail.com> Date: Tue, 11 Mar 2008 09:08:31 +0100 From: "=?ISO-8859-1?Q?Christian_R=F8snes?=" To: "David Chinner" X-ASG-Orig-Subj: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c Cc: xfs@oss.sgi.com In-Reply-To: <20080310222135.GZ155407@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline References: <1a4a774c0802130251h657a52f7lb97942e7afdf6e3f@mail.gmail.com> <20080213214551.GR155407@sgi.com> <1a4a774c0803050553h7f6294cfq41c38f34ea92ceae@mail.gmail.com> <1a4a774c0803060310w2642224w690ac8fa13f96ec@mail.gmail.com> <1a4a774c0803070319j1eb8790ek3daae4a16b3e6256@mail.gmail.com> <20080310000809.GU155407@sgi.com> <1a4a774c0803100302y17530814wee7522aa0dfd7668@mail.gmail.com> 
<1a4a774c0803100134k258e1bcfma95e7969bc44b2af@mail.gmail.com> <20080310222135.GZ155407@sgi.com> X-Barracuda-Connect: wx-out-0506.google.com[66.249.82.236] X-Barracuda-Start-Time: 1205222916 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44496 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14843 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: christian.rosnes@gmail.com Precedence: bulk X-list: xfs On Mon, Mar 10, 2008 at 11:21 PM, David Chinner wrote: > You've got a filesystem with stripe alignment set. In xfs_ialloc_ag_alloc() > we attempt inode allocation by the following rules: > > 1. a) If we haven't previously allocated inodes, fall through to 2. > b) If we have previously allocated inode, attempt to allocate next > to the last inode chunk. > > 2. If we do not have an extent now: > a) if we have stripe alignment, try with alignment > b) if we don't have stripe alignment try cluster alignment > > 3. If we do not have an extent now: > a) if we have stripe alignment, try with cluster alignment > b) no stripe alignment, turn off alignment. > > 4. If we do not have an extent now: FAIL. > > Note the case missing from the stripe alignment fallback path - it does not > try without alignment at all. That means if all those extents large enough > that we found above are not correctly aligned, then we will still fail > to allocate an inode chunk. 
if all the AGs are like this, then we'll > fail to allocate at all and fall out of xfs_dialloc() through the last > fragment I quoted above. > > As to the shutdown that this triggers - the attempt to allocate dirties > the AGFL and the AGF by moving free blocks into the free list for btree > splits and cancelling a dirty transaction results in a shutdown. > > Now, to test this theory. ;) Luckily, it's easy to test. mount the > filesystem with the mount option "noalign" and rerun the mkdir test. > If it is an alignment problem, then setting noalign will prevent > this ENOSPC and shutdown as the filesystem will be able to allocate > more inodes. > > Can you test this for me, Christian? Thanks. Unfortunately noalign didn't solve my problem: # mount | grep /data /dev/sdb1 on /data type xfs (rw,noatime,noalign,logbufs=8,nobarrier) # mkdir /data/test results in: Filesystem "sdb1": XFS internal error xfs_trans_cancel at line 1163 of file fs/xfs/xfs_trans.c. Caller 0xc021a010 Pid: 17889, comm: mkdir Not tainted 2.6.24.3FC #7 [] xfs_trans_cancel+0x5d/0xe6 [] xfs_mkdir+0x45a/0x493 [] xfs_mkdir+0x45a/0x493 [] xfs_acl_vhasacl_default+0x33/0x44 [] xfs_vn_mknod+0x165/0x243 [] xfs_access+0x2f/0x35 [] xfs_vn_mkdir+0x12/0x14 [] vfs_mkdir+0xa3/0xe2 [] sys_mkdirat+0x8a/0xc3 [] sys_mkdir+0x1f/0x23 [] syscall_call+0x7/0xb [] atm_reset_addr+0xd/0x83 ======================= xfs_force_shutdown(sdb1,0x8) called from line 1164 of file fs/xfs/xfs_trans.c. Return address = 0xc0212690 Filesystem "sdb1": Corruption of in-memory data detected. Shutting down filesystem: sdb1 Please umount the filesystem, and rectify the problem(s) I'll try to add some printk statements to the codepaths you mentioned, and see where it leads. 
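For what it's worth, Dave's four-step fallback above can be modelled in a few lines of C. This is a deliberately simplified sketch (the enum, function name, and have_* flags are all invented; none of this is the real xfs_ialloc_ag_alloc() code) showing why a stripe-aligned filesystem never reaches an unaligned attempt:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the allocation fallback order described above.  Each
 * have_* flag stands in for "an extent satisfying this alignment
 * exists in the AG".
 */
enum alloc_result { STRIPE_ALIGNED, CLUSTER_ALIGNED, UNALIGNED, ALLOC_FAIL };

static enum alloc_result try_inode_chunk(bool stripe_align,
                                         bool have_stripe_aligned,
                                         bool have_cluster_aligned,
                                         bool have_unaligned)
{
    if (stripe_align) {
        if (have_stripe_aligned)        /* step 2a */
            return STRIPE_ALIGNED;
        if (have_cluster_aligned)       /* step 3a */
            return CLUSTER_ALIGNED;
        return ALLOC_FAIL;              /* step 4: no unaligned fallback */
    }
    if (have_cluster_aligned)           /* step 2b */
        return CLUSTER_ALIGNED;
    if (have_unaligned)                 /* step 3b: alignment turned off */
        return UNALIGNED;
    return ALLOC_FAIL;                  /* step 4 */
}
```

With only unaligned free extents available, the stripe-aligned path fails (hence ENOSPC) while the non-stripe path succeeds, matching the missing-case analysis above.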
Christian From owner-xfs@oss.sgi.com Tue Mar 11 01:30:14 2008 Received: with ECARTIS (v1.0.0; list xfs); Tue, 11 Mar 2008 01:30:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-0.9 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_55, J_CHICKENPOX_62 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2B8UAiU005114 for ; Tue, 11 Mar 2008 01:30:14 -0700 X-ASG-Debug-ID: 1205224240-0c5000320000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.valinux.co.jp (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id A4AFFF5F01E for ; Tue, 11 Mar 2008 01:30:40 -0700 (PDT) Received: from mail.valinux.co.jp (fms-01.valinux.co.jp [210.128.90.1]) by cuda.sgi.com with ESMTP id RyMHVzjbj2qpDATR for ; Tue, 11 Mar 2008 01:30:40 -0700 (PDT) Received: from dhcp032.local.valinux.co.jp (vagw.valinux.co.jp [210.128.90.14]) by mail.valinux.co.jp (Postfix) with ESMTP id 6D2372DC8B4; Tue, 11 Mar 2008 17:30:38 +0900 (JST) Date: Tue, 11 Mar 2008 17:30:38 +0900 From: IWAMOTO Toshihiro To: David Chinner Cc: IWAMOTO Toshihiro , xfs@oss.sgi.com X-ASG-Orig-Subj: Re: [PATCH] prototype file data inode inlining Subject: Re: [PATCH] prototype file data inode inlining In-Reply-To: <20080308002124.GN155407@sgi.com> References: <20080307093411.4B1912DC9B2@mail.valinux.co.jp> <20080308002124.GN155407@sgi.com> User-Agent: Wanderlust/2.15.5 (Almost Unreal) SEMI/1.14.6 (Maruoka) FLIM/1.14.9 (=?ISO-8859-4?Q?Goj=F2?=) APEL/10.7 Emacs/22.1 (x86_64-pc-linux-gnu) MULE/5.0 (SAKAKI) MIME-Version: 1.0 (generated by SEMI 1.14.6 - "Maruoka") Content-Type: text/plain; charset=US-ASCII Message-Id: <20080311083038.6D2372DC8B4@mail.valinux.co.jp> X-Barracuda-Connect: fms-01.valinux.co.jp[210.128.90.1] X-Barracuda-Start-Time: 1205224241 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 
1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44497 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14844 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: iwamoto@valinux.co.jp Precedence: bulk X-list: xfs At Sat, 8 Mar 2008 11:21:24 +1100, David Chinner wrote: > Interesting. I'm not going to comment on the code, just the overall > design and implementation. Thanks for comments. > Problems: > - local -> extent conversion occurs at copy-in time, not writeback > time, so using the normal read/write paths through ->get_blocks() > will fail here in xfs_bmapi(): > > 4793 if (XFS_IFORK_FORMAT(ip, whichfork) == XFS_DINODE_FMT_LOCAL) { > 4794 >>>>>>>> ASSERT(wr && tp); > 4795 if ((error = xfs_bmap_local_to_extents(tp, ip, > 4796 firstblock, total, &logflags, whichfork))) > 4797 goto error0; > 4798 } > > because on a normal read or write (delayed allocation) we > are not doing allocation and hence do not have an open > transaction the first time we come through here. Just > avoiding this conversion and returning zero maps if we are > not writing will not help in the delayed allocation case. > > > I note that you hacked around this by special casing the inline > format ->get_blocks() callout to copy the data into the page and > marking it mapped and uptodate. I think this is the wrong approach > and is not the right layer at which to make this distinction - the special > casing needs to be done at higher layers, not in the block mapping > function. 
> > I think for inline data, we'd do best to special case this as high > up the read/write paths as possible. e.g. for read() type operations > intercept in xfs_read() and just do what we need to do there for > populating the single page cache page. For write, we should let it > go through the normal delayed allocation mechanisms, only converting > to local format during ->writepage() if there's a single block > extent and it fits in the data fork. This also handles the truncate > case nicely. I was vaguely aware of the layering violation, but took my quick-and-dirty approach because the main concern at that time was whether file inlining gives enough of a performance gain. With your suggestion I would be able to implement it better; unfortunately, I lack the time to do so. > > Some random notes and the patch itself follows. > > > > Inlined file data are written from xfs_page_state_convert(). > > The xfs_trans related operations in that function is to get inode > > written on disk and isn't for crash consistency. > > Which is the exact opposite of what they are supposed to be used for. > Given that the next thing that happens after data write in the writeback path > is ->write_inode(), forcing the inode into the log for pure data changes > is unnecessary. We just need to format the data into the inode during > data writeback. It seemed that setting the XFS_ILOG_DDATA bit in ip->i_itemp->ili_format.ilf_fields was necessary for xfs_iflush_fork, and I wasn't aware of other solutions. > > xfs_bmap_local_to_extents() has been modified to work with file data, > but logging isn't implemented. A machine crash can cause data > corruption. > > There are two ways to do inline->extent safely from a crash recovery > perspective. > > Method 1: Use an Intent/Done transaction pair > Method 2: Log the data Thanks for the explanation. It doesn't sound as complicated as I'd imagined. 
-- IWAMOTO Toshihiro From owner-xfs@oss.sgi.com Tue Mar 11 02:33:50 2008 Received: with ECARTIS (v1.0.0; list xfs); Tue, 11 Mar 2008 02:34:07 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.7 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43, MIME_8BIT_HEADER autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2B9Xiaw015696 for ; Tue, 11 Mar 2008 02:33:48 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id UAA00786; Tue, 11 Mar 2008 20:34:09 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2B9Y8LF92990430; Tue, 11 Mar 2008 20:34:09 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2B9Y6Fr93003109; Tue, 11 Mar 2008 20:34:06 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 11 Mar 2008 20:34:06 +1100 From: David Chinner To: Christian =?iso-8859-1?Q?R=F8snes?= Cc: xfs@oss.sgi.com Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c Message-ID: <20080311093406.GN155407@sgi.com> References: <1a4a774c0802130251h657a52f7lb97942e7afdf6e3f@mail.gmail.com> <20080213214551.GR155407@sgi.com> <1a4a774c0803050553h7f6294cfq41c38f34ea92ceae@mail.gmail.com> <1a4a774c0803060310w2642224w690ac8fa13f96ec@mail.gmail.com> <1a4a774c0803070319j1eb8790ek3daae4a16b3e6256@mail.gmail.com> <20080310000809.GU155407@sgi.com> <1a4a774c0803100302y17530814wee7522aa0dfd7668@mail.gmail.com> <1a4a774c0803100134k258e1bcfma95e7969bc44b2af@mail.gmail.com> <20080310222135.GZ155407@sgi.com> <1a4a774c0803110108u3f01813fs7f9540f886be055@mail.gmail.com> 
Mime-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <1a4a774c0803110108u3f01813fs7f9540f886be055@mail.gmail.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14845 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Mar 11, 2008 at 09:08:31AM +0100, Christian Røsnes wrote: > On Mon, Mar 10, 2008 at 11:21 PM, David Chinner wrote: > > You've got a filesystem with stripe alignment set. In xfs_ialloc_ag_alloc() > > we attempt inode allocation by the following rules: > > > > 1. a) If we haven't previously allocated inodes, fall through to 2. > > b) If we have previously allocated inode, attempt to allocate next > > to the last inode chunk. > > > > 2. If we do not have an extent now: > > a) if we have stripe alignment, try with alignment > > b) if we don't have stripe alignment try cluster alignment > > > > 3. If we do not have an extent now: > > a) if we have stripe alignment, try with cluster alignment > > b) no stripe alignment, turn off alignment. > > > > 4. If we do not have an extent now: FAIL. > > > > Note the case missing from the stripe alignment fallback path - it does not > > try without alignment at all. That means if all those extents large enough > > that we found above are not correctly aligned, then we will still fail > > to allocate an inode chunk. if all the AGs are like this, then we'll > > fail to allocate at all and fall out of xfs_dialloc() through the last > > fragment I quoted above. > > > > As to the shutdown that this triggers - the attempt to allocate dirties > > the AGFL and the AGF by moving free blocks into the free list for btree > > splits and cancelling a dirty transaction results in a shutdown. > > > > Now, to test this theory. 
;) Luckily, it's easy to test. mount the > > filesystem with the mount option "noalign" and rerun the mkdir test. > > If it is an alignment problem, then setting noalign will prevent > > this ENOSPC and shutdown as the filesystem will be able to allocate > > more inodes. > > > > Can you test this for me, Christian? > > Thanks. Unfortunately noalign didn't solve my problem: Ok, reading the code a bit further, I've mixed up m_sinoalign, m_sinoalignmt and the noalign mount option. The noalign mount option turns off m_sinoalign, but it does not turn off inode cluster alignment, hence we can't fall back to an unaligned allocation. So the above theory still holds, just the test case was broken. Unfortunately, further investigation indicates that inodes are always allocated aligned; I expect that I could count the number of linux XFS filesystems not using inode allocation alignment because mkfs.xfs has set this as the default since it was added in mid-1996. The problem with unaligned inode allocation is the lookup case (xfs_dilocate()) in that it requires btree lookups to convert the inode number to a block number as you don't know where in the chunk the inode exists just by looking at the inode number. With aligned allocations, the block number can be derived directly from the inode number because we know how the inode chunks are aligned. IOWs, if we allow an unaligned inode chunk allocation to occur, we have to strip the "aligned inode allocation" feature bit from the filesystem and the related state and use the slow, btree based lookup path forever more. That involves I/O instead of a simple mask operation.... Hence I'm inclined to leave the allocation alignment as it stands and work out how to prevent the shutdown (a difficult issue in itself). > I'll try to add some printk statements to the codepaths you mentioned, > and see where it leads. Definitely worth confirming this is where the error is coming from. Cheers, Dave. 
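The lookup shortcut Dave describes can be illustrated with toy arithmetic: when chunks are allocated aligned to their own size, the first block of the chunk holding an inode falls out of a shift and a mask on the inode number. The geometry constants and function below are invented for illustration; this is not the real xfs_dilocate() code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustration only: made-up geometry.  4 inodes per block and
 * 64-inode chunks => each chunk spans 16 blocks.
 */
#define INOPBLOG        2u      /* log2(inodes per block) */
#define BLKS_PER_CHUNK  16u     /* power of two, so masking works */

/*
 * AG-relative inode numbers encode (block << INOPBLOG) | offset.
 * When every chunk starts on a BLKS_PER_CHUNK boundary, masking the
 * low bits of the block number yields the chunk's first block.
 * Without that alignment guarantee this mask is wrong, and the inode
 * btree must be searched instead -- I/O instead of arithmetic.
 */
static uint32_t chunk_start_agbno(uint32_t agino)
{
    uint32_t agbno = agino >> INOPBLOG;
    return agbno & ~(BLKS_PER_CHUNK - 1);
}
```

One unaligned chunk breaks this invariant for the whole filesystem, which is why falling back to unaligned allocation would mean stripping the feature bit and paying the btree-lookup cost forever after.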
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Mar 11 02:59:25 2008 Received: with ECARTIS (v1.0.0; list xfs); Tue, 11 Mar 2008 02:59:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-0.6 required=5.0 tests=AWL,BAYES_00, SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2B9xNPl018547 for ; Tue, 11 Mar 2008 02:59:25 -0700 X-ASG-Debug-ID: 1205229593-40bd00960000-w1Z2WR X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.lichtvoll.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 34DB4F60284 for ; Tue, 11 Mar 2008 02:59:53 -0700 (PDT) Received: from mail.lichtvoll.de (mondschein.lichtvoll.de [194.150.191.11]) by cuda.sgi.com with ESMTP id 036j3u51AyA9OyHO for ; Tue, 11 Mar 2008 02:59:53 -0700 (PDT) Received: from nb27steigerwald.qs.de (unknown [212.204.70.254]) by mail.lichtvoll.de (Postfix) with ESMTP id E0FDC5ADD6 for ; Tue, 11 Mar 2008 10:59:50 +0100 (CET) From: Martin Steigerwald To: linux-xfs@oss.sgi.com X-ASG-Orig-Subj: Re: disappearing xfs partition Subject: Re: disappearing xfs partition Date: Tue, 11 Mar 2008 10:59:50 +0100 User-Agent: KMail/1.9.9 References: (sfid-20080304_090400_023531_C318561D) In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Disposition: inline Message-Id: <200803111059.50596.Martin@lichtvoll.de> X-Barracuda-Connect: mondschein.lichtvoll.de[194.150.191.11] X-Barracuda-Start-Time: 1205229594 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 
3.1, rules version 3.1.44503 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id m2B9xPPl018552 X-archive-position: 14846 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: Martin@lichtvoll.de Precedence: bulk X-list: xfs On Tuesday, 4 March 2008, Jeff Breidenbach wrote: > Following up to close out the topic, I got this comment from Eric. > > >So parted has this bad habit of making partition tables that cannot > >actually be read from the disk, and poking the supposed values > >directly into the kernel. Then things work fine until reboot, at > > which time the partition table cannot be properly read. Usually > > this turns into a truncated size due to an overflow.... > > I'd been using cfdisk and not parted, but that's apparently what > happened. Rewriting the partition table with cfdisk fixed everything > and allowed the partition to mount. At least for this boot. Hi Jan, It's always a good idea to check whether the kernel re-read the partition table after partitioning: cat /proc/partitions If it hasn't, you can tell the kernel to do it manually: blockdev --rereadpt /dev/sde If that tells you something like "device is busy", you'd need to unmount the partitions on it or reboot. 
Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7

From owner-xfs@oss.sgi.com Tue Mar 11 04:26:05 2008
Message-ID: <1a4a774c0803110419n645da456leaedd98593300726@mail.gmail.com>
Date: Tue, 11 Mar 2008 12:19:29 +0100
From: Christian Røsnes
To: David Chinner
Cc: xfs@oss.sgi.com
In-Reply-To: <20080311093406.GN155407@sgi.com>
Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c
On Tue, Mar 11, 2008 at 10:34 AM, David Chinner wrote:
> On Tue, Mar 11, 2008 at 09:08:31AM +0100, Christian Røsnes wrote:
> > I'll try to add some printk statements to the codepaths you mentioned,
> > and see where it leads.
>
> Definitely worth confirming this is where the error is coming from.

	if (tagno == agno) {
		printk("XFS: xfs_dialloc:0021\n");
		*inop = NULLFSINO;
		return noroom ? ENOSPC : 0;
	}

seems to be what triggers this inside xfs_dialloc.

Here is a trace which gives some indication of the codepath taken
inside xfs_dialloc (xfs_ialloc.c):

mount: /dev/sdb1 on /data type xfs (rw,noatime,logbufs=8,nobarrier)

# mkdir /data/test
mkdir: cannot create directory `/data/test': No space left on device

Mar 11 11:47:00 linux kernel: XFS: xfs_dialloc:0001
Mar 11 11:47:00 linux kernel: XFS: xfs_dialloc:0002
Mar 11 11:47:00 linux kernel: XFS: xfs_dialloc:0003
Mar 11 11:47:00 linux kernel: XFS: xfs_dialloc:0005
Mar 11 11:47:00 linux kernel: XFS: xfs_dialloc:0006
Mar 11 11:47:00 linux kernel: XFS: xfs_dialloc:0010
Mar 11 11:47:00 linux kernel: XFS: xfs_dialloc:0012
Mar 11 11:47:00 linux kernel: XFS: xfs_dialloc:0013
Mar 11 11:47:00 linux kernel: XFS: xfs_dialloc:0014
Mar 11 11:47:00 linux kernel: XFS: xfs_dialloc:0018
Mar 11 11:47:00 linux kernel: XFS: xfs_dialloc:0020
[the 0013/0014/0018/0020 sequence repeats for each further allocation group]
Mar 11 11:47:00 linux kernel: XFS: xfs_dialloc:0021
Mar 11 11:47:00 linux kernel: Filesystem "sdb1": XFS internal error xfs_trans_cancel at line 1163 of file fs/xfs/xfs_trans.c.  Caller 0xc021a390
Mar 11 11:47:00 linux kernel: Pid: 5598, comm: mkdir Not tainted 2.6.24.3FC #9
Mar 11 11:47:00 linux kernel:  [] xfs_trans_cancel+0x5d/0xe6
Mar 11 11:47:00 linux kernel:  [] xfs_mkdir+0x45a/0x493
Mar 11 11:47:00 linux kernel:  [] xfs_mkdir+0x45a/0x493
Mar 11 11:47:00 linux kernel:  [] xfs_acl_vhasacl_default+0x33/0x44
Mar 11 11:47:00 linux kernel:  [] xfs_vn_mknod+0x165/0x243
Mar 11 11:47:00 linux kernel:  [] xfs_access+0x2f/0x35
Mar 11 11:47:00 linux kernel:  [] xfs_vn_mkdir+0x12/0x14
Mar 11 11:47:00 linux kernel:  [] vfs_mkdir+0xa3/0xe2
Mar 11 11:47:00 linux kernel:  [] sys_mkdirat+0x8a/0xc3
Mar 11 11:47:00 linux kernel:  [] sys_mkdir+0x1f/0x23
Mar 11 11:47:00 linux kernel:  [] syscall_call+0x7/0xb
Mar 11 11:47:00 linux kernel:  [] svc_proc_register+0x3c/0x4b
Mar 11 11:47:00 linux kernel: =======================
Mar 11 11:47:00 linux kernel: xfs_force_shutdown(sdb1,0x8) called from line 1164 of file fs/xfs/xfs_trans.c.
Return address = 0xc0212a10
Mar 11 11:47:00 linux kernel: Filesystem "sdb1": Corruption of in-memory data detected.  Shutting down filesystem: sdb1
Mar 11 11:47:00 linux kernel: Please umount the filesystem, and rectify the problem(s)

Instrumented code:

int
xfs_dialloc(
	xfs_trans_t	*tp,		/* transaction pointer */
	xfs_ino_t	parent,		/* parent inode (directory) */
	mode_t		mode,		/* mode bits for new inode */
	int		okalloc,	/* ok to allocate more space */
	xfs_buf_t	**IO_agbp,	/* in/out ag header's buffer */
	boolean_t	*alloc_done,	/* true if we needed to replenish
					   inode freelist */
	xfs_ino_t	*inop)		/* inode number allocated */
{
	xfs_agnumber_t	agcount;	/* number of allocation groups */
	xfs_buf_t	*agbp;		/* allocation group header's buffer */
	xfs_agnumber_t	agno;		/* allocation group number */
	xfs_agi_t	*agi;		/* allocation group header structure */
	xfs_btree_cur_t	*cur;		/* inode allocation btree cursor */
	int		error;		/* error return value */
	int		i;		/* result code */
	int		ialloced;	/* inode allocation status */
	int		noroom = 0;	/* no space for inode blk allocation */
	xfs_ino_t	ino;		/* fs-relative inode to be returned */
	/* REFERENCED */
	int		j;		/* result code */
	xfs_mount_t	*mp;		/* file system mount structure */
	int		offset;		/* index of inode in chunk */
	xfs_agino_t	pagino;		/* parent's a.g. relative inode # */
	xfs_agnumber_t	pagno;		/* parent's allocation group number */
	xfs_inobt_rec_incore_t rec;	/* inode allocation record */
	xfs_agnumber_t	tagno;		/* testing allocation group number */
	xfs_btree_cur_t	*tcur;		/* temp cursor */
	xfs_inobt_rec_incore_t trec;	/* temp inode allocation record */

	printk("XFS: xfs_dialloc:0001\n");
	if (*IO_agbp == NULL) {
		printk("XFS: xfs_dialloc:0002\n");
		/*
		 * We do not have an agbp, so select an initial allocation
		 * group for inode allocation.
		 */
		agbp = xfs_ialloc_ag_select(tp, parent, mode, okalloc);
		printk("XFS: xfs_dialloc:0003\n");
		/*
		 * Couldn't find an allocation group satisfying the
		 * criteria, give up.
		 */
		if (!agbp) {
			printk("XFS: xfs_dialloc:0004\n");
			*inop = NULLFSINO;
			return 0;
		}
		printk("XFS: xfs_dialloc:0005\n");
		agi = XFS_BUF_TO_AGI(agbp);
		ASSERT(be32_to_cpu(agi->agi_magicnum) == XFS_AGI_MAGIC);
		printk("XFS: xfs_dialloc:0006\n");
	} else {
		printk("XFS: xfs_dialloc:0007\n");
		/*
		 * Continue where we left off before.  In this case, we
		 * know that the allocation group has free inodes.
		 */
		agbp = *IO_agbp;
		agi = XFS_BUF_TO_AGI(agbp);
		ASSERT(be32_to_cpu(agi->agi_magicnum) == XFS_AGI_MAGIC);
		printk("XFS: xfs_dialloc:0008\n");
		ASSERT(be32_to_cpu(agi->agi_freecount) > 0);
		printk("XFS: xfs_dialloc:0009\n");
	}
	printk("XFS: xfs_dialloc:0010\n");
	mp = tp->t_mountp;
	agcount = mp->m_sb.sb_agcount;
	agno = be32_to_cpu(agi->agi_seqno);
	tagno = agno;
	pagno = XFS_INO_TO_AGNO(mp, parent);
	pagino = XFS_INO_TO_AGINO(mp, parent);

	/*
	 * If we have already hit the ceiling of inode blocks then clear
	 * okalloc so we scan all available agi structures for a free
	 * inode.
	 */
	if (mp->m_maxicount &&
	    mp->m_sb.sb_icount + XFS_IALLOC_INODES(mp) > mp->m_maxicount) {
		printk("XFS: xfs_dialloc:0011\n");
		noroom = 1;
		okalloc = 0;
	}

	/*
	 * Loop until we find an allocation group that either has free inodes
	 * or in which we can allocate some inodes.  Iterate through the
	 * allocation groups upward, wrapping at the end.
	 */
	printk("XFS: xfs_dialloc:0012\n");
	*alloc_done = B_FALSE;
	while (!agi->agi_freecount) {
		printk("XFS: xfs_dialloc:0013\n");
		/*
		 * Don't do anything if we're not supposed to allocate
		 * any blocks, just go on to the next ag.
		 */
		if (okalloc) {
			printk("XFS: xfs_dialloc:0014\n");
			/*
			 * Try to allocate some new inodes in the allocation
			 * group.
			 */
			if ((error = xfs_ialloc_ag_alloc(tp, agbp, &ialloced))) {
				printk("XFS: xfs_dialloc:0015\n");
				xfs_trans_brelse(tp, agbp);
				if (error == ENOSPC) {
					printk("XFS: xfs_dialloc:0016\n");
					*inop = NULLFSINO;
					return 0;
				} else {
					printk("XFS: xfs_dialloc:0017\n");
					return error;
				}
			}
			printk("XFS: xfs_dialloc:0018\n");
			if (ialloced) {
				/*
				 * We successfully allocated some inodes, return
				 * the current context to the caller so that it
				 * can commit the current transaction and call
				 * us again where we left off.
				 */
				printk("XFS: xfs_dialloc:0019\n");
				ASSERT(be32_to_cpu(agi->agi_freecount) > 0);
				*alloc_done = B_TRUE;
				*IO_agbp = agbp;
				*inop = NULLFSINO;
				return 0;
			}
		}
		printk("XFS: xfs_dialloc:0020\n");
		/*
		 * If it failed, give up on this ag.
		 */
		xfs_trans_brelse(tp, agbp);
		/*
		 * Go on to the next ag: get its ag header.
		 */
nextag:
		if (++tagno == agcount)
			tagno = 0;
		if (tagno == agno) {
			printk("XFS: xfs_dialloc:0021\n");
			*inop = NULLFSINO;
			return noroom ? ENOSPC : 0;
		}
		down_read(&mp->m_peraglock);
		if (mp->m_perag[tagno].pagi_inodeok == 0) {
			up_read(&mp->m_peraglock);
			printk("XFS: xfs_dialloc:0022\n");
			goto nextag;
		}
		error = xfs_ialloc_read_agi(mp, tp, tagno, &agbp);
		up_read(&mp->m_peraglock);
		if (error) {
			printk("XFS: xfs_dialloc:0023\n");
			goto nextag;
		}
		agi = XFS_BUF_TO_AGI(agbp);
		ASSERT(be32_to_cpu(agi->agi_magicnum) == XFS_AGI_MAGIC);
	}
	/*
	 * Here with an allocation group that has a free inode.
	 * Reset agno since we may have chosen a new ag in the
	 * loop above.
	 */
	printk("XFS: xfs_dialloc:0024\n");
	agno = tagno;
	*IO_agbp = NULL;
	cur = xfs_btree_init_cursor(mp, tp, agbp, be32_to_cpu(agi->agi_seqno),
				    XFS_BTNUM_INO, (xfs_inode_t *)0, 0);
	...
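Stripped of the XFS specifics, the give-up path in the listing above is a wrap-around search: start at the parent's allocation group, walk upward wrapping at the last AG, and return ENOSPC once the search arrives back where it started. A minimal standalone sketch of that pattern (`pick_ag` and its arguments are illustrative, not XFS code, and it always reports ENOSPC on a full lap rather than distinguishing the `noroom` case):

```c
#include <assert.h>
#include <errno.h>

/* Walk the allocation groups upward from start_ag, wrapping at
 * agcount, and stop at the first AG that has free inodes.  A full
 * lap back to start_ag means every AG refused us: give up with
 * ENOSPC -- the condition that fired in the trace above. */
static int pick_ag(int start_ag, int agcount,
                   const int *ag_has_free, int *out_ag)
{
    int tagno = start_ag;
    do {
        if (ag_has_free[tagno]) {
            *out_ag = tagno;
            return 0;
        }
        if (++tagno == agcount)
            tagno = 0;           /* wrap at the last AG */
    } while (tagno != start_ag); /* back at the start: nothing found */
    *out_ag = -1;
    return ENOSPC;
}
```

The reported bug is the degenerate case of this loop: every AG claims to have no free inodes (and allocation of new inode chunks fails too), so `mkdir` gets ENOSPC on a filesystem that is not actually full.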
Christian

From owner-xfs@oss.sgi.com Tue Mar 11 05:20:47 2008
Message-ID: <20080311122103.GP155407@sgi.com>
Date: Tue, 11 Mar 2008 23:21:03 +1100
From: David Chinner <dgc@sgi.com>
To: Christian Røsnes
Cc: xfs@oss.sgi.com
Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c
On Tue, Mar 11, 2008 at 12:19:29PM +0100, Christian Røsnes wrote:
> On Tue, Mar 11, 2008 at 10:34 AM, David Chinner wrote:
> > On Tue, Mar 11, 2008 at 09:08:31AM +0100, Christian Røsnes wrote:
> > > I'll try to add some printk statements to the codepaths you mentioned,
> > > and see where it leads.
> >
> > Definitely worth confirming this is where the error is coming from.
>
> 	if (tagno == agno) {
> 		printk("XFS: xfs_dialloc:0021\n");
> 		*inop = NULLFSINO;
> 		return noroom ? ENOSPC : 0;
> 	}
>
> seems to be what triggers this inside xfs_dialloc.
>
> Here is a trace which gives some indication of the codepath taken
> inside xfs_dialloc (xfs_ialloc.c):

Yup, that's trying to allocate in each AG and failing. Almost certainly
the problem is the described alignment issue.

FYI, I'm travelling tomorrow, so I won't really get a chance to look
at this more until Thursday....

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Tue Mar 11 05:29:09 2008
Message-Id: <20080311122929.2390C58C4C0F@chook.melbourne.sgi.com>
Date: Tue, 11 Mar 2008 23:29:29 +1100 (EST)
From: dgc@sgi.com (David Chinner)
To: sgi.bugs.xfs@engr.sgi.com
Cc: xfs@oss.sgi.com
Subject: PARTIAL TAKE 978682 - Replace custom AIL linked-list code with struct list_head

Replace custom AIL linked-list code with struct list_head.

Replace the xfs_ail_entry_t with a struct list_head and clean up the
surrounding code. Also fixes a livelock in xfs_trans_first_push_ail()
by terminating the loop at the head of the list correctly.
Signed-off-by: Josef 'Jeff' Sipek

Date:  Tue Mar 11 23:29:03 AEDT 2008
Workarea:  chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs
Inspected by:  dgc@sgi.com

The following file(s) were checked into:
  longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb

Modid:  xfs-linux-melb:xfs-kern:30636a

fs/xfs/xfs_trans_ail.c - 1.86 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_trans_ail.c.diff?r1=text&tr1=1.86&r2=text&tr2=1.85&f=h
	- Replace custom AIL linked-list code with struct list_head.
	  Fix livelock in xfs_trans_first_push_ail() by terminating
	  search loop correctly.

fs/xfs/xfs_mount.h - 1.260 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.h.diff?r1=text&tr1=1.260&r2=text&tr2=1.259&f=h
	- Replace custom AIL linked-list code with struct list_head.

fs/xfs/xfs_trans.h - 1.149 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_trans.h.diff?r1=text&tr1=1.149&r2=text&tr2=1.148&f=h
	- Replace custom AIL linked-list code with struct list_head.

From owner-xfs@oss.sgi.com Tue Mar 11 05:33:15 2008
Message-Id: <20080311123337.CBA3158C4C0F@chook.melbourne.sgi.com>
Date: Tue, 11 Mar 2008 23:33:37 +1100 (EST)
From: dgc@sgi.com (David Chinner)
To: sgi.bugs.xfs@engr.sgi.com
Cc: xfs@oss.sgi.com
Subject: TAKE 978682 - Update xfsidbg code after AIL listhead conversion

Update xfsidbg code after AIL listhead conversion.

Date:  Tue Mar 11 23:33:13 AEDT 2008
Workarea:  chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs
Inspected by:  jeffpc@josefsipek.net

The following file(s) were checked into:
  longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb

Modid:  xfs-linux-melb:xfs-kern:30641a

fs/xfs/xfsidbg.c - 1.346 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfsidbg.c.diff?r1=text&tr1=1.346&r2=text&tr2=1.345&f=h
	- Convert AIL debug code to struct list_head.

From owner-xfs@oss.sgi.com Tue Mar 11 05:39:22 2008
Message-ID: <1a4a774c0803110539s129fd2am86e933a03cdd1b18@mail.gmail.com>
Date: Tue, 11 Mar 2008 13:39:48 +0100
From: Christian Røsnes
To: David Chinner
Cc: xfs@oss.sgi.com
In-Reply-To: <20080311122103.GP155407@sgi.com>
Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c

On Tue, Mar 11, 2008 at 1:21 PM, David Chinner wrote:
> On Tue, Mar 11, 2008 at 12:19:29PM +0100, Christian Røsnes wrote:
> > On Tue, Mar 11, 2008 at 10:34 AM, David Chinner wrote:
> > > On Tue, Mar 11, 2008 at 09:08:31AM +0100, Christian Røsnes wrote:
> > > > I'll try to add some printk statements to the codepaths you mentioned,
> > > > and see where it leads.
> > >
> > > Definitely worth confirming this is where the error is coming from.
> >
> > 	if (tagno == agno) {
> > 		printk("XFS: xfs_dialloc:0021\n");
> > 		*inop = NULLFSINO;
> > 		return noroom ? ENOSPC : 0;
> > 	}
> >
> > seems to be what triggers this inside xfs_dialloc.
> >
> > Here is a trace which gives some indication of the codepath taken
> > inside xfs_dialloc (xfs_ialloc.c):
>
> Yup, that's trying to allocate in each AG and failing.
> Almost certainly the problem is the described alignment issue.
>
> FYI, I'm travelling tomorrow, so I won't really get a chance to look
> at this more until Thursday....

Ok. Thanks again for all your help so far in tracking this down.

Here's the codepath taken within xfs_ialloc_ag_alloc (xfs_ialloc.c):

mount: /dev/sdb1 on /data type xfs (rw,noatime,logbufs=8,nobarrier)

# mkdir /data/test
mkdir: cannot create directory `/data/test': No space left on device

Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0001
Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0003
Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0004
Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0007
Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0008
Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0011
Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0012
Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0014
Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0015
Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0016
Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0017
Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0020
Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0021
[this 0001-0021 sequence repeats identically for each allocation group tried]
xfs_ialloc_ag_alloc:0003 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0004 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0007 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0008 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0011 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0012 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0014 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0015 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0016 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0017 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0020 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0021 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0001 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0003 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0004 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0007 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0008 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0011 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0012 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0014 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0015 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0016 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0017 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0020 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0021 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0001 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0003 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0004 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0007 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0008 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0011 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0012 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0014 Mar 11 13:27:44 linux kernel: XFS: 
xfs_ialloc_ag_alloc:0015 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0016 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0017 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0020 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0021 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0001 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0003 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0004 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0007 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0008 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0011 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0012 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0014 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0015 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0016 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0017 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0020 Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0021 Mar 11 13:27:44 linux kernel: Filesystem "sdb1": XFS internal error xfs_trans_cancel at line 1163 of file fs/xfs/xfs_trans.c. 
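A marker flood like the above is easier to read once the hits are tallied per marker; a quick sketch with a standard pipeline (the inline sample stands in for the real syslog file, whose location varies):

```shell
# Count how often each "xfs_ialloc_ag_alloc:NNNN" debug marker fired,
# most frequent first. Pipe the real syslog text in instead of the
# inline sample below.
printf '%s\n' \
  'Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0001' \
  'Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0003' \
  'Mar 11 13:27:44 linux kernel: XFS: xfs_ialloc_ag_alloc:0001' |
  grep -o 'xfs_ialloc_ag_alloc:[0-9]*' | sort | uniq -c | sort -rn
```

Markers that never appear in the tally (0002, 0005, 0006, ...) correspond to branches that never executed.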
Caller 0xc021a1f8
Mar 11 13:27:44 linux kernel: Pid: 5593, comm: mkdir Not tainted 2.6.24.3FC #10
Mar 11 13:27:44 linux kernel: [] xfs_trans_cancel+0x5d/0xe6
Mar 11 13:27:44 linux kernel: [] xfs_mkdir+0x45a/0x493
Mar 11 13:27:44 linux kernel: [] xfs_mkdir+0x45a/0x493
Mar 11 13:27:44 linux kernel: [] xfs_acl_vhasacl_default+0x33/0x44
Mar 11 13:27:44 linux kernel: [] xfs_vn_mknod+0x165/0x243
Mar 11 13:27:44 linux kernel: [] xfs_access+0x2f/0x35
Mar 11 13:27:44 linux kernel: [] xfs_vn_mkdir+0x12/0x14
Mar 11 13:27:44 linux kernel: [] vfs_mkdir+0xa3/0xe2
Mar 11 13:27:44 linux kernel: [] sys_mkdirat+0x8a/0xc3
Mar 11 13:27:44 linux kernel: [] sys_mkdir+0x1f/0x23
Mar 11 13:27:44 linux kernel: [] syscall_call+0x7/0xb
Mar 11 13:27:44 linux kernel: [] proc_dodebug+0xc6/0x1e2
Mar 11 13:27:44 linux kernel: =======================
Mar 11 13:27:44 linux kernel: xfs_force_shutdown(sdb1,0x8) called from line 1164 of file fs/xfs/xfs_trans.c.  Return address = 0xc0212878
Mar 11 13:27:44 linux kernel: Filesystem "sdb1": Corruption of in-memory data detected.  Shutting down filesystem: sdb1
Mar 11 13:27:44 linux kernel: Please umount the filesystem, and rectify the problem(s)

/*
 * Allocate new inodes in the allocation group specified by agbp.
 * Return 0 for success, else error code.
 */
STATIC int				/* error code or 0 */
xfs_ialloc_ag_alloc(
	xfs_trans_t	*tp,		/* transaction pointer */
	xfs_buf_t	*agbp,		/* alloc group buffer */
	int		*alloc)
{
	xfs_agi_t	*agi;		/* allocation group header */
	xfs_alloc_arg_t	args;		/* allocation argument structure */
	int		blks_per_cluster;  /* fs blocks per inode cluster */
	xfs_btree_cur_t	*cur;		/* inode btree cursor */
	xfs_daddr_t	d;		/* disk addr of buffer */
	xfs_agnumber_t	agno;
	int		error;
	xfs_buf_t	*fbuf;		/* new free inodes' buffer */
	xfs_dinode_t	*free;		/* new free inode structure */
	int		i;		/* inode counter */
	int		j;		/* block counter */
	int		nbufs;		/* num bufs of new inodes */
	xfs_agino_t	newino;		/* new first inode's number */
	xfs_agino_t	newlen;		/* new number of inodes */
	int		ninodes;	/* num inodes per buf */
	xfs_agino_t	thisino;	/* current inode number, for loop */
	int		version;	/* inode version number to use */
	int		isaligned = 0;	/* inode allocation at stripe unit */
					/* boundary */

	args.tp = tp;
	args.mp = tp->t_mountp;
	printk("XFS: xfs_ialloc_ag_alloc:0001\n");

	/*
	 * Locking will ensure that we don't have two callers in here
	 * at one time.
	 */
	newlen = XFS_IALLOC_INODES(args.mp);
	if (args.mp->m_maxicount &&
	    args.mp->m_sb.sb_icount + newlen > args.mp->m_maxicount) {
		printk("XFS: xfs_ialloc_ag_alloc:0002\n");
		return XFS_ERROR(ENOSPC);
	}
	printk("XFS: xfs_ialloc_ag_alloc:0003\n");
	args.minlen = args.maxlen = XFS_IALLOC_BLOCKS(args.mp);
	/*
	 * First try to allocate inodes contiguous with the last-allocated
	 * chunk of inodes.  If the filesystem is striped, this will fill
	 * an entire stripe unit with inodes.
	 */
	agi = XFS_BUF_TO_AGI(agbp);
	newino = be32_to_cpu(agi->agi_newino);
	args.agbno = XFS_AGINO_TO_AGBNO(args.mp, newino) +
			XFS_IALLOC_BLOCKS(args.mp);
	if (likely(newino != NULLAGINO &&
		  (args.agbno < be32_to_cpu(agi->agi_length)))) {
		printk("XFS: xfs_ialloc_ag_alloc:0004\n");
		args.fsbno = XFS_AGB_TO_FSB(args.mp,
				be32_to_cpu(agi->agi_seqno), args.agbno);
		args.type = XFS_ALLOCTYPE_THIS_BNO;
		args.mod = args.total = args.wasdel = args.isfl =
			args.userdata = args.minalignslop = 0;
		args.prod = 1;
		args.alignment = 1;
		/*
		 * Allow space for the inode btree to split.
		 */
		args.minleft = XFS_IN_MAXLEVELS(args.mp) - 1;
		if ((error = xfs_alloc_vextent(&args))) {
			printk("XFS: xfs_ialloc_ag_alloc:0005\n");
			return error;
		}
	} else {
		printk("XFS: xfs_ialloc_ag_alloc:0006\n");
		args.fsbno = NULLFSBLOCK;
	}

	if (unlikely(args.fsbno == NULLFSBLOCK)) {
		printk("XFS: xfs_ialloc_ag_alloc:0007\n");
		/*
		 * Set the alignment for the allocation.
		 * If stripe alignment is turned on then align at stripe unit
		 * boundary.
		 * If the cluster size is smaller than a filesystem block
		 * then we're doing I/O for inodes in filesystem block size
		 * pieces, so don't need alignment anyway.
		 */
		isaligned = 0;
		if (args.mp->m_sinoalign) {
			printk("XFS: xfs_ialloc_ag_alloc:0008\n");
			ASSERT(!(args.mp->m_flags & XFS_MOUNT_NOALIGN));
			args.alignment = args.mp->m_dalign;
			isaligned = 1;
		} else if (XFS_SB_VERSION_HASALIGN(&args.mp->m_sb) &&
			   args.mp->m_sb.sb_inoalignmt >=
			   XFS_B_TO_FSBT(args.mp,
					XFS_INODE_CLUSTER_SIZE(args.mp))) {
			printk("XFS: xfs_ialloc_ag_alloc:0009\n");
			args.alignment = args.mp->m_sb.sb_inoalignmt;
		} else {
			printk("XFS: xfs_ialloc_ag_alloc:0010\n");
			args.alignment = 1;
		}
		/*
		 * Need to figure out where to allocate the inode blocks.
		 * Ideally they should be spaced out through the a.g.
		 * For now, just allocate blocks up front.
		 */
		printk("XFS: xfs_ialloc_ag_alloc:0011\n");
		args.agbno = be32_to_cpu(agi->agi_root);
		args.fsbno = XFS_AGB_TO_FSB(args.mp,
				be32_to_cpu(agi->agi_seqno), args.agbno);
		/*
		 * Allocate a fixed-size extent of inodes.
		 */
		args.type = XFS_ALLOCTYPE_NEAR_BNO;
		args.mod = args.total = args.wasdel = args.isfl =
			args.userdata = args.minalignslop = 0;
		args.prod = 1;
		/*
		 * Allow space for the inode btree to split.
		 */
		args.minleft = XFS_IN_MAXLEVELS(args.mp) - 1;
		printk("XFS: xfs_ialloc_ag_alloc:0012\n");
		if ((error = xfs_alloc_vextent(&args))) {
			printk("XFS: xfs_ialloc_ag_alloc:0013\n");
			return error;
		}
		printk("XFS: xfs_ialloc_ag_alloc:0014\n");
	}
	printk("XFS: xfs_ialloc_ag_alloc:0015\n");

	/*
	 * If stripe alignment is turned on, then try again with cluster
	 * alignment.
	 */
	if (isaligned && args.fsbno == NULLFSBLOCK) {
		printk("XFS: xfs_ialloc_ag_alloc:0016\n");
		args.type = XFS_ALLOCTYPE_NEAR_BNO;
		args.agbno = be32_to_cpu(agi->agi_root);
		args.fsbno = XFS_AGB_TO_FSB(args.mp,
				be32_to_cpu(agi->agi_seqno), args.agbno);
		if (XFS_SB_VERSION_HASALIGN(&args.mp->m_sb) &&
			args.mp->m_sb.sb_inoalignmt >=
			XFS_B_TO_FSBT(args.mp,
				XFS_INODE_CLUSTER_SIZE(args.mp))) {
			printk("XFS: xfs_ialloc_ag_alloc:0017\n");
			args.alignment = args.mp->m_sb.sb_inoalignmt;
		} else {
			printk("XFS: xfs_ialloc_ag_alloc:0018\n");
			args.alignment = 1;
		}
		if ((error = xfs_alloc_vextent(&args))) {
			printk("XFS: xfs_ialloc_ag_alloc:0019\n");
			return error;
		}
	}
	printk("XFS: xfs_ialloc_ag_alloc:0020\n");

	if (args.fsbno == NULLFSBLOCK) {
		printk("XFS: xfs_ialloc_ag_alloc:0021\n");
		*alloc = 0;
		return 0;
	}
	printk("XFS: xfs_ialloc_ag_alloc:0022\n");
	ASSERT(args.len == args.minlen);
	/*
	 * Convert the results.
	 */
	newino = XFS_OFFBNO_TO_AGINO(args.mp, args.agbno, 0);
	/*
	 * Loop over the new block(s), filling in the inodes.
	 * For small block sizes, manipulate the inodes in buffers
	 * which are multiples of the blocks size.
	 */
	if (args.mp->m_sb.sb_blocksize >= XFS_INODE_CLUSTER_SIZE(args.mp)) {
		printk("XFS: xfs_ialloc_ag_alloc:0023\n");
		blks_per_cluster = 1;
		nbufs = (int)args.len;
		ninodes = args.mp->m_sb.sb_inopblock;
	} else {
		printk("XFS: xfs_ialloc_ag_alloc:0024\n");
		blks_per_cluster = XFS_INODE_CLUSTER_SIZE(args.mp) /
				   args.mp->m_sb.sb_blocksize;
		nbufs = (int)args.len / blks_per_cluster;
		ninodes = blks_per_cluster * args.mp->m_sb.sb_inopblock;
	}
	printk("XFS: xfs_ialloc_ag_alloc:0025\n");
	/*
	 * Figure out what version number to use in the inodes we create.
	 * If the superblock version has caught up to the one that supports
	 * the new inode format, then use the new inode version.  Otherwise
	 * use the old version so that old kernels will continue to be
	 * able to use the file system.
	 */
	if (XFS_SB_VERSION_HASNLINK(&args.mp->m_sb)) {
		printk("XFS: xfs_ialloc_ag_alloc:0026\n");
		version = XFS_DINODE_VERSION_2;
	} else {
		printk("XFS: xfs_ialloc_ag_alloc:0027\n");
		version = XFS_DINODE_VERSION_1;
	}

	for (j = 0; j < nbufs; j++) {
		printk("XFS: xfs_ialloc_ag_alloc:0028\n");
		/*
		 * Get the block.
		 */
		d = XFS_AGB_TO_DADDR(args.mp, be32_to_cpu(agi->agi_seqno),
				     args.agbno + (j * blks_per_cluster));
		fbuf = xfs_trans_get_buf(tp, args.mp->m_ddev_targp, d,
					 args.mp->m_bsize * blks_per_cluster,
					 XFS_BUF_LOCK);
		ASSERT(fbuf);
		ASSERT(!XFS_BUF_GETERROR(fbuf));
		/*
		 * Set initial values for the inodes in this buffer.
		 */
		xfs_biozero(fbuf, 0, ninodes << args.mp->m_sb.sb_inodelog);
		for (i = 0; i < ninodes; i++) {
			printk("XFS: xfs_ialloc_ag_alloc:0029\n");
			free = XFS_MAKE_IPTR(args.mp, fbuf, i);
			free->di_core.di_magic = cpu_to_be16(XFS_DINODE_MAGIC);
			free->di_core.di_version = version;
			free->di_next_unlinked = cpu_to_be32(NULLAGINO);
			xfs_ialloc_log_di(tp, fbuf, i,
				XFS_DI_CORE_BITS | XFS_DI_NEXT_UNLINKED);
		}
		xfs_trans_inode_alloc_buf(tp, fbuf);
		printk("XFS: xfs_ialloc_ag_alloc:0030\n");
	}
	printk("XFS: xfs_ialloc_ag_alloc:0031\n");
	be32_add(&agi->agi_count, newlen);
	be32_add(&agi->agi_freecount, newlen);
	agno = be32_to_cpu(agi->agi_seqno);
	down_read(&args.mp->m_peraglock);
	args.mp->m_perag[agno].pagi_freecount += newlen;
	up_read(&args.mp->m_peraglock);
	agi->agi_newino = cpu_to_be32(newino);
	/*
	 * Insert records describing the new inode chunk into the btree.
	 */
	cur = xfs_btree_init_cursor(args.mp, tp, agbp, agno,
			XFS_BTNUM_INO, (xfs_inode_t *)0, 0);
	for (thisino = newino;
	     thisino < newino + newlen;
	     thisino += XFS_INODES_PER_CHUNK) {
		printk("XFS: xfs_ialloc_ag_alloc:0032\n");
		if ((error = xfs_inobt_lookup_eq(cur, thisino,
				XFS_INODES_PER_CHUNK, XFS_INOBT_ALL_FREE,
				&i))) {
			printk("XFS: xfs_ialloc_ag_alloc:0033\n");
			xfs_btree_del_cursor(cur, XFS_BTREE_ERROR);
			return error;
		}
		printk("XFS: xfs_ialloc_ag_alloc:0034\n");
		ASSERT(i == 0);
		if ((error = xfs_inobt_insert(cur, &i))) {
			printk("XFS: xfs_ialloc_ag_alloc:0035\n");
			xfs_btree_del_cursor(cur, XFS_BTREE_ERROR);
			return error;
		}
		ASSERT(i == 1);
		printk("XFS: xfs_ialloc_ag_alloc:0036\n");
	}
	printk("XFS: xfs_ialloc_ag_alloc:0037\n");
	xfs_btree_del_cursor(cur, XFS_BTREE_NOERROR);
	/*
	 * Log allocation group header fields
	 */
	xfs_ialloc_log_agi(tp, agbp,
		XFS_AGI_COUNT | XFS_AGI_FREECOUNT | XFS_AGI_NEWINO);
	/*
	 * Modify/log superblock values for inode count and inode free count.
	 */
	xfs_trans_mod_sb(tp, XFS_TRANS_SB_ICOUNT, (long)newlen);
	xfs_trans_mod_sb(tp, XFS_TRANS_SB_IFREE, (long)newlen);
	*alloc = 1;
	printk("XFS: xfs_ialloc_ag_alloc:0038\n");
	return 0;
}

Christian

From owner-xfs@oss.sgi.com Tue Mar 11 06:47:55 2008
Date: Tue, 11 Mar 2008 14:47:46 +0100
From: Andreas Kotes <count-linux@flatline.de>
To: David Chinner
Cc: linux-kernel@vger.kernel.org, xfs@oss.sgi.com
Subject: Re: XFS internal error
Message-ID: <20080311134746.GQ14256@slop.flatline.de>
In-Reply-To:
 <20080310234539.GC155407@sgi.com>

Hello,

* David Chinner [20080311 00:45]:
> On Mon, Mar 10, 2008 at 11:59:27PM +0100, Andreas Kotes wrote:
> > * David Chinner [20080310 23:30]:
> > > On Mon, Mar 10, 2008 at 01:22:16PM +0100, Andreas Kotes wrote:
> > > > * David Chinner [20080310 13:18]:
> > > > > Yes, but those previous corruptions get left on disk as a landmine
> > > > > for you to trip over some time later, even on a kernel that has the
> > > > > bug fixed.
> > > > >
> > > > > I suggest that you run xfs_check on the filesystem and if that
> > > > > shows up errors, run xfs_repair on the filesystem to correct them.
> > > >
> > > > I seem to be having similar problems, and xfs_repair is not helping :(
> > >
> > > xfs_repair is ensuring that the problem is not being caused by on-disk
> > > corruption. In this case, it does not appear to be caused by on-disk
> > > corruption, so xfs_repair won't help.
> >
> > ok, too bad - btw, is it a problem that I'm doing the xfs_repair on a
> > mounted filesystem with xfs_repair -f -L after a remount rw?
>
> If it was read only, and you rebooted immediately afterwards, you'd
> probably be ok. Doing this to a mounted, rw filesystem is asking
> for trouble. If the shutdown is occurring after you've run xfs_repair,
> then it is almost certainly the cause....

Whoops, that should have read 'remount ro' .. xfs_repair on a live and
writable filesystem is of course inviting disaster. I was trying read
only - btw, the system as such is booted via PXE and runs completely out
of an initrd, using the HDD just for local data storage - not much
happening on shutdown/reboot either way.

> I'd suggest getting a knoppix (or similar) rescue disk and repairing
> from that, rebooting and seeing if the problem persists. If it
> does, then we'll have to look further into it.

I basically built a PXE image which does an xfs_repair -L /dev/sda2 from
initrd - and the problem persists. Sigh. Exactly no change.

> FWIW, you've got plenty of free inodes so this does not look
> to be the same problem I've just found.

Okay ... it happens on several of the dozens of machines I'm running
this way, but not on others - I have yet to find the difference. What
can I do to help find the problem?
Andreas

--
flatline IT services - Andreas Kotes - Tailored solutions for your IT needs

From owner-xfs@oss.sgi.com Tue Mar 11 07:35:48 2008
Date: Tue, 11 Mar 2008 15:33:17 +0100
From: Jürgen Fischer <jfi.pcs@t-online.de>
To: xfs@oss.sgi.com
Subject: Problem with xfs devices
Message-ID: <47D6982D.7030305@t-online.de>
Hi,

I have some problems with XFS devices which I can't mount and hope you
can help me.

I have the devices md1, md5, md6, md7 and md8 on a root server hosted by
a German provider, running SuSE Linux 10.2. Yesterday evening I ran an
online update which updated the kernel and asked for a reboot. Since
then no reboot is possible, and no help is available from the provider.
The devices md1, md5 and md8 I can mount without problems. But if I try
to mount md6 or md7 I get the following messages on the screen:

rescue:~# mount /dev/md6 /mnt
Killed
rescue:~#
Message from syslogd@rescue at Tue Mar 11 00:48:37 2008 ...
rescue kernel: Oops: 0000 [2] SMP

Message from syslogd@rescue at Tue Mar 11 00:48:37 2008 ...
rescue kernel: CR2: 0000000000000008

No xfs_check or xfs_repair is possible. I get these messages:

rescue:~# xfs_check -v /dev/md6
ERROR: The filesystem has valuable metadata changes in a log which
needs to be replayed.  Mount the filesystem to replay the log, and
unmount it before re-running xfs_check.  If you are unable to mount
the filesystem, then use the xfs_repair -L option to destroy the log
and attempt a repair.  Note that destroying the log may cause
corruption -- please attempt a mount of the filesystem before doing
this.
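The order of operations that xfs_check error message prescribes can be written down as a small guard function; this is only a sketch (device and mount point are placeholders, not taken from the thread), and xfs_repair -L is destructive:

```shell
# Mount first so the kernel replays the log; only if the mount itself
# fails, fall back to the log-zeroing repair of last resort.
# $1 (device) and $2 (mount point) are placeholders.
replay_or_repair() {
    dev="$1" mnt="$2"
    if mount "$dev" "$mnt"; then
        umount "$mnt"         # log has been replayed; safe to check now
        xfs_check "$dev"
    else
        xfs_repair -L "$dev"  # last resort: zeroes (destroys) the log
    fi
}
```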
rescue:~#

I can start the xfs_repair -L command after that, but nothing happens. I
waited 30 minutes or so and tried to kill the process - impossible. I
had to restart my rescue system.

It would be great if you can help me.

Regards
Juergen

From owner-xfs@oss.sgi.com Tue Mar 11 09:23:42 2008
Date: Tue, 11 Mar 2008 11:24:08 -0500
From: Eric Sandeen <sandeen@sandeen.net>
To: Jürgen Fischer <jfi.pcs@t-online.de>
CC: xfs@oss.sgi.com
Subject: Re: Problem with xfs devices
In-Reply-To: <47D6982D.7030305@t-online.de>

Jürgen Fischer wrote:
> Hi,
>
> I have some problems with XFS devices which I can't mount and hope you
> can help me.
>
> I have the devices md1, md5, md6, md7 and md8 on a root server hosted
> by a German provider, running SuSE Linux 10.2. Yesterday evening I ran
> an online update which updated the kernel and asked for a reboot.
> Since then no reboot is possible, and no help is available from the
> provider. The devices md1, md5 and md8 I can mount without problems.
> But if I try to mount md6 or md7 I get the following messages on the
> screen:
>
> rescue:~# mount /dev/md6 /mnt
> Killed
> rescue:~#
> Message from syslogd@rescue at Tue Mar 11 00:48:37 2008 ...
> rescue kernel: Oops: 0000 [2] SMP
>
> Message from syslogd@rescue at Tue Mar 11 00:48:37 2008 ...
> rescue kernel: CR2: 0000000000000008

Type "dmesg" to get the whole oops.
-Eric

From owner-xfs@oss.sgi.com Tue Mar 11 10:30:17 2008
Date: Tue, 11 Mar 2008 18:29:58 +0100
From: Louis-David Mitterrand <vindex+lists-xfs@apartia.org>
To: xfs@oss.sgi.com
Subject: can I shrink an xfs?
Message-ID: <20080311172958.GA31833@apartia.fr>

Hi,

Is it possible to shrink an xfs? I tried using xfs_growfs to no avail.

zenon:~# xfs_growfs -D 650000000 /backup
meta-data=/dev/md2               isize=256    agcount=32, agsize=20539552 blks
         =                       sectsz=4096  attr=1
data     =                       bsize=4096   blocks=657265248, imaxpct=25
         =                       sunit=16     swidth=48 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=0
realtime =none                   extsz=196608 blocks=0, rtextents=0
data size 650000000 too small, old size is 657265248

... even though the fs is only 80% full.
Thanks,

From owner-xfs@oss.sgi.com Tue Mar 11 11:01:47 2008
Date: Tue, 11 Mar 2008 14:02:10 -0400
From: "Josef 'Jeff' Sipek"
To: xfs@oss.sgi.com
Subject: Re: can I shrink an xfs?
Message-ID: <20080311180210.GB5767@josefsipek.net>
In-Reply-To: <20080311172958.GA31833@apartia.fr>

On Tue, Mar 11, 2008 at 06:29:58PM +0100, Louis-David Mitterrand wrote:
> Hi,
>
> Is it possible to shrink an xfs? I tried using xfs_growfs to no avail.

There's no official/supported way to do it. There might be some tool that
can, but I'm not aware of any.

Dave, et al.: why not support shrinking? I know, inode numbers and whatnot,
but those are already being messed with (at least to a degree) by the
noikeep mount option. I think that shrinking would be useful to have - even
if you have to pass it some --i-know-what-i-am-doing kind of option. Maybe
an offline shrink via repair would be more viable?

Josef 'Jeff' Sipek.
--
Hegh QaQ law' quvHa'ghach QaQ puS

From owner-xfs@oss.sgi.com Tue Mar 11 11:13:41 2008
Date: Tue, 11 Mar 2008 20:14:01 +0200
From: Erkki Lintunen
To: xfs@oss.sgi.com
Subject: Re: an occational trouble with xfs file system which xfs_repair 2.7.14 has been able to fix
Message-ID: <47D6CBE9.8090905@iki.fi>
In-Reply-To: <47D5383E.50201@sandeen.net>

Hi,

on 10.3.2008 15:31 Eric Sandeen wrote:
> Erkki Lintunen wrote:
>> the cp -al commands haven't. Most of the time the cp -al process has D
>> status.
>
>> What other information could I provide, in addition to what is requested
>> in the FAQ?
>
> When you get a process in the D state, do echo t > /proc/sysrq-trigger
> to get backtraces of all processes; or echo w to get all blocked processes.

Thanks for the tip. Unfortunately I couldn't get my hands on the system
before the message below appeared on the console and SysRq was used to
reboot the system today. From the log, the script had stalled on cp -al
again, in the same tree. My wild guess is that the script should have had
no reason to talk to the network at the time of the kernel soft lockup,
and no other service was seeing network traffic either. I upgraded the
kernel to 2.6.24.3, ran xfs_repair 2.9.7 on the xfs file system, and
rested the case until the next run.

Best regards,
Erkki

BUG: soft lockup - CPU#0 stuck for 11s!
[bond0:1207]
Pid: 1207, comm: bond0 Not tainted (2.6.24.2-i686-net #1)
EIP: 0060:[] EFLAGS: 00000286 CPU: 0
EIP is at _spin_lock+0x5/0x10
EAX: cf925134 EBX: 00000002 ECX: 00000001 EDX: cf92505c
ESI: cc023d40 EDI: cf9f1c80 EBP: cee70000 ESP: cf655d8c
DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
CR0: 8005003b CR2: b4d2cffc CR3: 0f78b000 CR4: 000006d0
DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
DR6: ffff0ff0 DR7: 00000400
 [] ad_rx_machine+0x1c/0x3c0 [bonding]
 [] elv_queue_empty+0x24/0x30
 [] ide_do_request+0x65/0x360 [ide_core]
 [] bond_3ad_lacpdu_recv+0x9f/0xb0 [bonding]
 [] netif_receive_skb+0x2cb/0x3c0
 [] e100_rx_indicate+0x100/0x180 [e100]
 [] irq_exit+0x52/0x80
 [] do_IRQ+0x3e/0x80
 [] as_put_io_context+0x48/0x70
 [] e100_rx_clean+0x105/0x140 [e100]
 [] e100_poll+0x22/0x80 [e100]
 [] net_rx_action+0x18d/0x1d0
 [] e100_disable_irq+0x3d/0x60 [e100]
 [] e100_intr+0x8e/0xc0 [e100]
 [] __do_softirq+0xd4/0xf0
 [] do_softirq+0x38/0x40
 [] irq_exit+0x75/0x80
 [] do_IRQ+0x3e/0x80
 [] common_interrupt+0x23/0x28
 [] ad_rx_machine+0xd6/0x3c0 [bonding]
 [] lock_timer_base+0x27/0x60
 [] __mod_timer+0x7e/0xa0
 [] bond_3ad_state_machine_handler+0xc4/0x180 [bonding]
 [] bond_mii_monitor+0x0/0xc0 [bonding]
 [] bond_3ad_state_machine_handler+0x0/0x180 [bonding]
 [] run_workqueue+0x5b/0x110
 [] worker_thread+0xcd/0x100
 [] autoremove_wake_function+0x0/0x50
 [] finish_task_switch+0x2f/0x80
 [] autoremove_wake_function+0x0/0x50
 [] worker_thread+0x0/0x100
 [] kthread+0x6b/0x70
 [] kthread+0x0/0x70
 [] kernel_thread_helper+0x7/0x10
=======================

From owner-xfs@oss.sgi.com Tue Mar 11 11:38:58 2008
Date: Tue, 11 Mar 2008 12:47:39 -0500
From: Eric Sandeen
To: xfs@oss.sgi.com
Subject: Re: can I shrink an xfs?
Message-ID: <47D6C5BB.4040106@sandeen.net>
In-Reply-To: <20080311172958.GA31833@apartia.fr>

Louis-David Mitterrand wrote:
> Hi,
>
> Is it possible to shrink an xfs? I tried using xfs_growfs to no avail.

Nope.

http://oss.sgi.com/projects/xfs/faq.html#resize

-Eric

> zenon:~# xfs_growfs -D 650000000 /backup
> meta-data=/dev/md2       isize=256    agcount=32, agsize=20539552 blks
>          =               sectsz=4096  attr=1
> data     =               bsize=4096   blocks=657265248, imaxpct=25
>          =               sunit=16     swidth=48 blks
> naming   =version 2      bsize=4096
> log      =internal       bsize=4096   blocks=32768, version=2
>          =               sectsz=4096  sunit=1 blks, lazy-count=0
> realtime =none           extsz=196608 blocks=0, rtextents=0
> data size 650000000 too small, old size is 657265248
>
> ... even though the fs is 80% full only.
> > Thanks,

From owner-xfs@oss.sgi.com Tue Mar 11 13:09:30 2008
Date: Wed, 12 Mar 2008 07:09:56 +1100 (EST)
From: nscott@aconex.com
To: "Eric Sandeen"
Cc: "xfs-oss"
Subject: Re: [PATCH, RFC] - remove mounpoint UUID code
Message-ID: <34665.192.168.3.1.1205266196.squirrel@mail.aconex.com>
In-Reply-To: <47D5DE13.8030902@sandeen.net>

> Nathan Scott wrote:
> > > (hope you didn't too much mind my quoting you in this thread) ;)

(hope you didn't mind too much my dissing your cleanup) :)

>> Since effectively all versions of XFS support this feature ondisk,
>> including complete support in recovery, it would be better IMO to
>> leave it in for someone to implement/experiment with the syscall
>> and auto-mounting userspace support. That would then require no
>> new feature bits, mkfs/repair changes, etc. There is effectively
>> zero cost to leaving it there - and non-zero cost in removing it,
>> if our seriously bad regression-via-cleanup history is anything
>> to go by ... :|
>
> the only cost to leaving it is having another instance of "ok now what
> the heck is THIS?!" ... death by a thousand cuts of xfs complexity. But
> yeah, removing it has some risk too. So, document it and move on.

It would be a fun little project to go and experiment with this code a
bit. It amounts to a trivial amount of code at the end of the day, and
there's certainly nothing "complex" about it.
>> It would be really unfortunate to remove this, and then find that
>> it was useful to someone (who didn't know about it at this time).
>> OTOH, if there is definitely never ever any chance this can ever
>> be useful, then it should indeed be removed. :)
>
> Well I'm not hung up about it. If anyone thinks it'll be useful, I'm
> not bothered by leaving it as is. So, Nathan, what are your plans for
> this code? *grin*

I don't have any immediate plans. I can imagine it could be used to
stitch parts of the namespace together in a filesystem that supports
multiple devices (in a chunkfs kinda way) ... or maybe more simply just
an in-filesystem auto-mounter. *shrug*. But it's there, the tools
support it (once again, I didn't see a userspace patch - hohum), so I
would vote for leaving it in its current form so some enterprising,
constructive young coder can try to make something useful from it at
some point. :)

cheers.

From owner-xfs@oss.sgi.com Wed Mar 12 00:22:32 2008
Date: Wed, 12 Mar 2008 03:22:59 -0400
From: Christoph Hellwig
To: nscott@aconex.com
Cc: Eric Sandeen, xfs-oss
Subject: Re: [PATCH, RFC] - remove mounpoint UUID code
Message-ID: <20080312072259.GA26148@infradead.org>
In-Reply-To: <34665.192.168.3.1.1205266196.squirrel@mail.aconex.com>

On Wed, Mar 12, 2008 at 07:09:56AM +1100, nscott@aconex.com wrote:
> I don't have any immediate plans. I can imagine it could be used to
> stitch parts of the namespace together in a filesystem that supports
> multiple devices (in a chunkfs kinda way) ...
> or maybe more simply just an in-filesystem auto-mounter. *shrug*. But
> it's there, the tools support it (once again, I didn't see a userspace
> patch - hohum), so I would vote for leaving it in its current form so
> some enterprising, constructive young coder can try to make something
> useful from it at some point. :)

That kind of automounter really doesn't belong in the low-level
filesystem. If we really wanted it, it would go into the VFS, storing
the uuid or other identifier for the mountpoint in an xattr. This is
really just dead junk that should go away.

From owner-xfs@oss.sgi.com Wed Mar 12 02:17:43 2008
Date: Wed, 12 Mar 2008 20:17:55 +1100
From: David Chinner
To: "Josef 'Jeff' Sipek"
Cc: xfs@oss.sgi.com, ruben.porras@linworks.de
Subject: Re: can I shrink an xfs?
Message-ID: <20080312091755.GQ155407@sgi.com>
In-Reply-To: <20080311180210.GB5767@josefsipek.net>

On Tue, Mar 11, 2008 at 02:02:10PM -0400, Josef 'Jeff' Sipek wrote:
> On Tue, Mar 11, 2008 at 06:29:58PM +0100, Louis-David Mitterrand wrote:
> > Hi,
> >
> > Is it possible to shrink an xfs? I tried using xfs_growfs to no avail.
>
> There's no official/supported way to do it. There might be some tool to do
> it, but I'm not aware of any.
>
> Dave, et al.: why not support shrinking? I know, inode numbers and
> whatnot, but that's already being messed up (at least to a degree) with
> the noikeep mount option. I think that shrinking would be useful to have -
> even if you have to pass it --i-know-what-i-am-doing kind of option. Maybe
> an offline shrink via repair would be more viable?

It's being worked on - have a look in the archives at what Ruben Porras
has been doing (and the patches he recently sent). Shrinking involves a
lot of bits coming together successfully, and we're slowly getting them
together. If you want to get it done faster - help Ruben ;)

Cheers,

Dave.
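Until in-place shrink support lands, the usual route is an offline dump/mkfs/restore cycle. A rough sketch only, assuming xfsdump/xfsrestore are installed; the scratch path and target size below are hypothetical examples, not tested values:

```shell
# DESTRUCTIVE sketch -- verify the dump is readable before re-creating the fs.
xfsdump -l 0 -f /scratch/backup.xfsdump /backup   # level-0 dump to scratch space
umount /backup
mkfs.xfs -f -d size=650000000b /dev/md2           # re-create smaller ("b" = fs blocks)
mount /dev/md2 /backup
xfsrestore -f /scratch/backup.xfsdump /backup     # restore into the smaller fs
```

Note this changes all inode numbers and requires enough scratch space to hold the dump, which is exactly the cost an in-place shrink would avoid.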
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Wed Mar 12 10:50:27 2008
Date: Wed, 12 Mar 2008 18:50:50 +0100
From: Andreas Kotes
To: David Chinner
Cc: xfs@oss.sgi.com
Subject: Re: XFS internal error
Message-ID: <20080312175050.GD14256@slop.flatline.de>
In-Reply-To: <20080311134746.GQ14256@slop.flatline.de>

Hello,

* Andreas Kotes [20080311 14:47]:
> I basically build a PXE image which does an xfs_repair -L /dev/sda2 from
> initrd - and the problem persists. Sigh. Exactly no change.

ok, I cannot find the differences, but I have five machines showing
similar symptoms:

= a6b =
[1924291.170115] Filesystem "sda2": XFS internal error xfs_trans_cancel at line 1132 of file fs/xfs/xfs_trans.c. Caller 0xffffffff80372895
[1924291.176058]
[1924291.176059] Call Trace:
[1924291.182571] [] xfs_trans_cancel+0x100/0x130
[1924291.186252] [] xfs_link+0x2b5/0x480
[1924291.190045] [] __link_path_walk+0x74a/0xe30
[1924291.194073] [] xfs_vn_link+0x5d/0xd0
[1924291.197940] [] xfs_trans_unlocked_item+0x3b/0x60
[1924291.201739] [] xfs_lookup+0xa5/0xb0
[1924291.205675] [] __mutex_lock_slowpath+0x121/0x260
[1924291.209838] [] xfs_vn_lookup+0x6a/0x80
[1924291.214218] [] vfs_link+0xf2/0x140
[1924291.218643] [] sys_linkat+0x151/0x180
[1924291.223179] [] vfs_lstat_fd+0x4d/0x70
[1924291.227864] [] sysenter_do_call+0x1b/0x67
[1924291.232745] [] xfs_vn_getattr+0x0/0x110
[1924291.237489]
[1924291.241936] xfs_force_shutdown(sda2,0x8) called from line 1133 of file fs/xfs/xfs_trans.c.
Return address = 0xffffffff8036930e

= a7b =
[  137.099267] Filesystem "sda2": XFS internal error xfs_trans_cancel at line 1132 of file fs/xfs/xfs_trans.c. Caller 0xffffffff80372156
[  137.106267]
[  137.106268] Call Trace:
[  137.113129] [] xfs_trans_cancel+0x100/0x130
[  137.116524] [] xfs_create+0x256/0x6e0
[  137.119904] [] xfs_dir2_isleaf+0x19/0x50
[  137.123269] [] xfs_vn_mknod+0x195/0x250
[  137.126607] [] vfs_create+0xac/0xf0
[  137.129920] [] open_namei+0x5dc/0x700
[  137.133227] [] __wake_up+0x43/0x70
[  137.136477] [] do_filp_open+0x1c/0x50
[  137.139693] [] do_sys_open+0x5a/0x100
[  137.142838] [] sysenter_do_call+0x1b/0x67
[  137.145964]
[  137.149014] xfs_force_shutdown(sda2,0x8) called from line 1133 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff8036930e

= a7i =
[35595.024715] Filesystem "sda2": XFS internal error xfs_trans_cancel at line 1132 of file fs/xfs/xfs_trans.c. Caller 0xffffffff80372895
[35595.030585]
[35595.030586] Call Trace:
[35595.036749] [] xfs_trans_cancel+0x100/0x130
[35595.039804] [] xfs_link+0x2b5/0x480
[35595.042774] [] __link_path_walk+0x74a/0xe30
[35595.045834] [] xfs_vn_link+0x5d/0xd0
[35595.049002] [] xfs_trans_unlocked_item+0x3b/0x60
[35595.052255] [] xfs_lookup+0xa5/0xb0
[35595.055554] [] __mutex_lock_slowpath+0x121/0x260
[35595.059007] [] xfs_vn_lookup+0x6a/0x80
[35595.062576] [] vfs_link+0xf2/0x140
[35595.066248] [] sys_linkat+0x151/0x180
[35595.070159] [] vfs_lstat_fd+0x4d/0x70
[35595.073942] [] thread_return+0x0/0x741
[35595.077652] [] sysenter_do_call+0x1b/0x67
[35595.081297] [] xfs_vn_getattr+0x0/0x110
[35595.085022]
[35595.088792] xfs_force_shutdown(sda2,0x8) called from line 1133 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff8036930e

= a9i =
[  150.210765] Filesystem "sda2": XFS internal error xfs_trans_cancel at line 1132 of file fs/xfs/xfs_trans.c. Caller 0xffffffff80372156
[  150.217907]
[  150.217908] Call Trace:
[  150.225265] [] xfs_trans_cancel+0x100/0x130
[  150.229091] [] xfs_create+0x256/0x6e0
[  150.233118] [] xfs_dir2_isleaf+0x19/0x50
[  150.237054] [] xfs_vn_mknod+0x195/0x250
[  150.241055] [] vfs_create+0xac/0xf0
[  150.245126] [] open_namei+0x5dc/0x700
[  150.249259] [] __wake_up+0x43/0x70
[  150.253435] [] do_filp_open+0x1c/0x50
[  150.257694] [] do_sys_open+0x5a/0x100
[  150.261965] [] sysenter_do_call+0x1b/0x67
[  150.266326]
[  150.270729] xfs_force_shutdown(sda2,0x8) called from line 1133 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff8036930e

= a2i =
[128951.705981] Filesystem "sda2": XFS internal error xfs_trans_cancel at line 1132 of file fs/xfs/xfs_trans.c. Caller 0xffffffff80372895
[128951.709417]
[128951.709418] Call Trace:
[128951.712992] [] xfs_trans_cancel+0x100/0x130
[128951.714928] [] xfs_link+0x2b5/0x480
[128951.716939] [] __link_path_walk+0x74a/0xe30
[128951.719015] [] xfs_vn_link+0x5d/0xd0
[128951.721188] [] xfs_trans_unlocked_item+0x3b/0x60
[128951.723493] [] xfs_lookup+0xa5/0xb0
[128951.725833] [] __mutex_lock_slowpath+0x121/0x260
[128951.728113] [] xfs_vn_lookup+0x6a/0x80
[128951.730352] [] vfs_link+0xf2/0x140
[128951.732637] [] sys_linkat+0x151/0x180
[128951.734982] [] vfs_lstat_fd+0x4d/0x70
[128951.737414] [] sysenter_do_call+0x1b/0x67
[128951.739932] [] xfs_vn_getattr+0x0/0x110
[128951.742416]
[128951.744790] xfs_force_shutdown(sda2,0x8) called from line 1133 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff8036930e

... yes, not all of them were freshly booted, and not all of them had a
fresh xfs_repair, but I think this might help narrow the problem down a
bit ... all of them are running 2.6.22.16 right now.

any suggestions?

Br,

   Andreas

P.S: I've taken this off LKML, and will leave the systems largely
untouched until I hear from you ...
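If one of these machines wedges again before it is rebooted, the SysRq trick suggested earlier in this digest is probably the cheapest next diagnostic step. A sketch, assuming root access and a kernel built with CONFIG_MAGIC_SYSRQ=y (the output file path is an arbitrary example):

```shell
# Ask the kernel to log backtraces of blocked (D-state) tasks,
# then pull them out of the ring buffer for posting to the list.
echo w > /proc/sysrq-trigger
dmesg | tail -n 200 > /tmp/blocked-tasks.txt
```

`echo t` instead of `echo w` dumps every task, which is noisier but sometimes needed when the stuck task is not in D state.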
--
flatline IT services - Andreas Kotes - Tailored solutions for your IT needs

From owner-xfs@oss.sgi.com Wed Mar 12 11:11:52 2008
Date: Wed, 12 Mar 2008 19:03:51 +0100
From: Samuel Tardieu
Resent-From: sam@rfc1149.net
To: device-mapper development
Cc: Christian Kujau, David Chinner, LKML, xfs@oss.sgi.com
Subject: Re: INFO: task mount:11202 blocked for more than 120 seconds
In-Reply-To: <20080309213441.GQ155407@sgi.com>
Message-Id:
<2008-03-12-19-12-02+trackit+sam@rfc1149.net> X-Barracuda-Connect: zaphod.rfc1149.net[88.191.14.223] X-Barracuda-Start-Time: 1205345539 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44634 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Status: Clean X-archive-position: 14863 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sam@rfc1149.net Precedence: bulk X-list: xfs >>>>> "David" == David Chinner writes: >> No, no I/O errors at all. See the kern.log above, I could even do >> dd(1) from the md1 (dm-crypt on raid1), no errors either. David> Oh, dm-crypt. Well, I'd definitely start looking there. XFS has David> a history of exposing dm-crypt bugs, and these hangs appear to David> be I/O congestion/scheduling related and not XFS. Also, we David> haven't changed anything related to plug/unplug of block David> devices in XFS recently, so that also points to some other David> change as well... For what it is worth, I notice the same error last week (with a kernel of the day) on my laptop but was too busy at work to investigate. And I use ext3... on dm-crypt. 
Sam -- Samuel Tardieu -- sam@rfc1149.net -- http://www.rfc1149.net/ From owner-xfs@oss.sgi.com Wed Mar 12 17:01:08 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 12 Mar 2008 17:01:15 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2D012pr019437 for ; Wed, 12 Mar 2008 17:01:07 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA18965; Thu, 13 Mar 2008 11:01:25 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2D01NLF92923835; Thu, 13 Mar 2008 11:01:24 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2D01LDk95034362; Thu, 13 Mar 2008 11:01:21 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 13 Mar 2008 11:01:21 +1100 From: David Chinner To: Andreas Kotes Cc: David Chinner , xfs@oss.sgi.com Subject: Re: XFS internal error Message-ID: <20080313000121.GU155407@sgi.com> References: <470831E6.4030704@fastmail.co.uk> <20071008001452.GX995458@sgi.com> <20080310122216.GG14256@slop.flatline.de> <20080310223018.GA155407@sgi.com> <20080310225927.GP14256@slop.flatline.de> <20080310234539.GC155407@sgi.com> <20080311134746.GQ14256@slop.flatline.de> <20080312175050.GD14256@slop.flatline.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080312175050.GD14256@slop.flatline.de> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean 
X-archive-position: 14864 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Wed, Mar 12, 2008 at 06:50:50PM +0100, Andreas Kotes wrote: > Hello, > > * Andreas Kotes [20080311 14:47]: > > I basically build a PXE image which does an xfs_repair -L /dev/sda2 from > > initrd - and the problem persists. Sigh. Exactly no change. Do you do this on every boot? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Mar 12 20:22:34 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 12 Mar 2008 20:22:42 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2D3MTmv009069 for ; Wed, 12 Mar 2008 20:22:31 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA25235; Thu, 13 Mar 2008 14:22:56 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 16403) id 4501558C4C0F; Thu, 13 Mar 2008 14:22:56 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 977766 - Update mkfs manpage for new defaults: Message-Id: <20080313032256.4501558C4C0F@chook.melbourne.sgi.com> Date: Thu, 13 Mar 2008 14:22:56 +1100 (EST) From: xaiki@sgi.com (Niv Sardi-Altivanik) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14865 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: xaiki@sgi.com Precedence: bulk X-list: xfs Update mkfs manpage for new defaults: log, attr and 
inodes v2, Drop the ability to turn unwritten extents off completly, reduce imaxpct for big filesystems, less AGs for single disks configs. Date: Thu Mar 13 14:22:34 AEDT 2008 Workarea: chook.melbourne.sgi.com:/build/xaiki/isms/xfs-cmds Inspected by: esandeen The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:30662a xfsprogs/man/man8/mkfs.xfs.8 - 1.30 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/man/man8/mkfs.xfs.8.diff?r1=text&tr1=1.30&r2=text&tr2=1.29&f=h - Update mkfs manpage for new defaults: From owner-xfs@oss.sgi.com Thu Mar 13 00:17:31 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 13 Mar 2008 00:17:38 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2D7HSc9000988 for ; Thu, 13 Mar 2008 00:17:30 -0700 X-ASG-Debug-ID: 1205392679-181303620000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.flatline.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id F1592F800FD; Thu, 13 Mar 2008 00:17:59 -0700 (PDT) Received: from mail.flatline.de (flatline.de [80.190.243.144]) by cuda.sgi.com with ESMTP id GeLMAMHXrVNNBJ65; Thu, 13 Mar 2008 00:17:59 -0700 (PDT) Received: from shell.priv.flatline.de ([172.16.123.7] helo=slop.flatline.de ident=count) by mail.flatline.de with smtp (Exim 4.69) (envelope-from ) id 1JZhhA-0007fs-AR; Thu, 13 Mar 2008 08:17:45 +0100 Received: by slop.flatline.de (sSMTP sendmail emulation); Thu, 13 Mar 2008 08:17:44 +0100 Date: Thu, 13 Mar 2008 08:17:44 +0100 From: Andreas Kotes To: David Chinner Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: XFS internal error Subject: Re: XFS internal error Message-ID: 
<20080313071744.GB30874@slop.flatline.de> References: <470831E6.4030704@fastmail.co.uk> <20071008001452.GX995458@sgi.com> <20080310122216.GG14256@slop.flatline.de> <20080310223018.GA155407@sgi.com> <20080310225927.GP14256@slop.flatline.de> <20080310234539.GC155407@sgi.com> <20080311134746.GQ14256@slop.flatline.de> <20080312175050.GD14256@slop.flatline.de> <20080313000121.GU155407@sgi.com> <20080313071445.GA30874@slop.flatline.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080313071445.GA30874@slop.flatline.de> User-Agent: Mutt/1.5.13 (2006-08-11) X-Barracuda-Connect: flatline.de[80.190.243.144] X-Barracuda-Start-Time: 1205392679 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44686 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14867 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: count@flatline.de Precedence: bulk X-list: xfs Hello, * Andreas Kotes [20080313 08:14]: > * David Chinner [20080313 01:01]: > > On Wed, Mar 12, 2008 at 06:50:50PM +0100, Andreas Kotes wrote: > > > * Andreas Kotes [20080311 14:47]: > > > > I basically build a PXE image which does an xfs_repair -L /dev/sda2 from > > > > initrd - and the problem persists. Sigh. Exactly no change. > > > > Do you do this on every boot? > > no, I did this on a6b and a7b so far, where the problems I mentioned > occur, and only after I saw these in-memory problems. in general, XFS > proves to be realiable for us. 
> > would you recommend running an xfs_check before running an xfs_repair in > case of problems? oh, btw - running xfs_check doesn't work most of the time, as the log usually contains entries, and isn't replayed before shutdown .. I figure running this on every boot would leave me killing my log all of the time, if the shutdown didn't leave time to write the changes to disk? ;) Andreas -- flatline IT services - Andreas Kotes - Tailored solutions for your IT needs From owner-xfs@oss.sgi.com Thu Mar 13 00:15:12 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 13 Mar 2008 00:15:53 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2D7F9GT000566 for ; Thu, 13 Mar 2008 00:15:12 -0700 X-ASG-Debug-ID: 1205392540-4155026b0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.flatline.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 8DD7CF7FEC1 for ; Thu, 13 Mar 2008 00:15:41 -0700 (PDT) Received: from mail.flatline.de (flatline.de [80.190.243.144]) by cuda.sgi.com with ESMTP id OX1HAB6mnb1cdJr9 for ; Thu, 13 Mar 2008 00:15:41 -0700 (PDT) Received: from shell.priv.flatline.de ([172.16.123.7] helo=slop.flatline.de ident=count) by mail.flatline.de with smtp (Exim 4.69) (envelope-from ) id 1JZheI-0007dq-1z; Thu, 13 Mar 2008 08:14:47 +0100 Received: by slop.flatline.de (sSMTP sendmail emulation); Thu, 13 Mar 2008 08:14:45 +0100 Date: Thu, 13 Mar 2008 08:14:45 +0100 From: Andreas Kotes To: David Chinner Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: XFS internal error Subject: Re: XFS internal error Message-ID: <20080313071445.GA30874@slop.flatline.de> References: <470831E6.4030704@fastmail.co.uk> <20071008001452.GX995458@sgi.com> 
<20080310122216.GG14256@slop.flatline.de> <20080310223018.GA155407@sgi.com> <20080310225927.GP14256@slop.flatline.de> <20080310234539.GC155407@sgi.com> <20080311134746.GQ14256@slop.flatline.de> <20080312175050.GD14256@slop.flatline.de> <20080313000121.GU155407@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080313000121.GU155407@sgi.com> User-Agent: Mutt/1.5.13 (2006-08-11) X-Barracuda-Connect: flatline.de[80.190.243.144] X-Barracuda-Start-Time: 1205392541 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44686 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14866 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: count@flatline.de Precedence: bulk X-list: xfs Hallo Dave, * David Chinner [20080313 01:01]: > On Wed, Mar 12, 2008 at 06:50:50PM +0100, Andreas Kotes wrote: > > * Andreas Kotes [20080311 14:47]: > > > I basically build a PXE image which does an xfs_repair -L /dev/sda2 from > > > initrd - and the problem persists. Sigh. Exactly no change. > > Do you do this on every boot? no, I did this on a6b and a7b so far, where the problems I mentioned occur, and only after I saw these in-memory problems. in general, XFS proves to be realiable for us. would you recommend running an xfs_check before running an xfs_repair in case of problems? 
Br, Andreas -- flatline IT services - Andreas Kotes - Tailored solutions for your IT needs From owner-xfs@oss.sgi.com Thu Mar 13 02:12:43 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 13 Mar 2008 02:12:51 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2D9Cf65016574 for ; Thu, 13 Mar 2008 02:12:43 -0700 X-ASG-Debug-ID: 1205399592-02de01380000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from gw03.mail.saunalahti.fi (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 65FE9F80CD1 for ; Thu, 13 Mar 2008 02:13:12 -0700 (PDT) Received: from gw03.mail.saunalahti.fi (gw03.mail.saunalahti.fi [195.197.172.111]) by cuda.sgi.com with ESMTP id iVkcoaJ7KVuAlfuD for ; Thu, 13 Mar 2008 02:13:12 -0700 (PDT) Received: from uunet198.aac.fi (uunet198.aac.fi [193.64.61.198]) by gw03.mail.saunalahti.fi (Postfix) with ESMTP id C349D216D0D for ; Thu, 13 Mar 2008 11:12:39 +0200 (EET) Message-ID: <47D8F001.3030506@iki.fi> Date: Thu, 13 Mar 2008 11:12:33 +0200 From: Erkki Lintunen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: an occational trouble with xfs file system which xfs_repair 2.7.14 has been able to fix Subject: Re: an occational trouble with xfs file system which xfs_repair 2.7.14 has been able to fix References: <47D52BE5.6010706@iki.fi> <47D5383E.50201@sandeen.net> <47D6CBE9.8090905@iki.fi> In-Reply-To: <47D6CBE9.8090905@iki.fi> X-Enigmail-Version: 0.95.6 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Connect: gw03.mail.saunalahti.fi[195.197.172.111] X-Barracuda-Start-Time: 1205399593 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 
1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44694 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14868 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: ebirdie@iki.fi Precedence: bulk X-list: xfs on 11.3.2008 20:14 Erkki Lintunen wrote: > I upgraded kernel to 2.6.24.3, ran xfs_repair 2.9.7 on the xfs file > system and rest the case for next run. FWIW This time either kernel upgrade or xfs_repair 2.9.7 did provide fix for the occasional hiccup in the xfs fs. Erkki From owner-xfs@oss.sgi.com Thu Mar 13 07:53:32 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 13 Mar 2008 07:53:39 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.5 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43, J_CHICKENPOX_44,J_CHICKENPOX_45,J_CHICKENPOX_46,J_CHICKENPOX_47, J_CHICKENPOX_48,MIME_8BIT_HEADER autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2DErQ0G030326 for ; Thu, 13 Mar 2008 07:53:30 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id BAA18019; Fri, 14 Mar 2008 01:53:52 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2DEroLF96260260; Fri, 14 Mar 2008 01:53:51 +1100 (AEDT) 
Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2DErnuS96205088; Fri, 14 Mar 2008 01:53:49 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 14 Mar 2008 01:53:49 +1100 From: David Chinner To: Christian =?iso-8859-1?Q?R=F8snes?= Cc: xfs@oss.sgi.com Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c Message-ID: <20080313145349.GJ95344431@sgi.com> References: <20080311122103.GP155407@sgi.com> <1a4a774c0803110539s129fd2am86e933a03cdd1b18@mail.gmail.com> <20080312232425.GR155407@sgi.com> <1a4a774c0803130114l3927051byd54cd96cdb0efbe7@mail.gmail.com> <20080313090830.GD95344431@sgi.com> <1a4a774c0803130214x406a4eb9wfb8738d1f503663f@mail.gmail.com> <20080313092139.GF95344431@sgi.com> <1a4a774c0803130227l2fdf4861v21183b9bd3e7ce8d@mail.gmail.com> <20080313113634.GH95344431@sgi.com> <1a4a774c0803130446x609b9cb2mf3da323183c35606@mail.gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1a4a774c0803130446x609b9cb2mf3da323183c35606@mail.gmail.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14869 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs ok..... loads the metadump... Looking at the AGF status before the mkdir: dgc@budgie:/mnt/test$ for i in `seq 0 1 15`; do echo AG $i ; sudo xfs_db -r -c "agf $i" -c 'p flcount longest' -f /mnt/scratch/shutdown; done AG 0 flcount = 6 longest = 8 AG 1 flcount = 6 longest = 8 AG 2 flcount = 6 longest = 7 AG 3 flcount = 6 longest = 7 AG 4 flcount = 6 longest = 7 AG 5 flcount = 7 longest = 8 .... 
AG 5 immediately caught my eye:

seqno = 5
length = 4476752
bnoroot = 7
cntroot = 46124
bnolevel = 2
cntlevel = 2
flfirst = 56
fllast = 62
flcount = 7
freeblks = 68797
longest = 8
btreeblks = 0
magicnum = 0x58414746
versionnum = 1

Mainly because at level 2 btrees this:

	blocks = XFS_MIN_FREELIST_PAG(pag,mp);

gives blocks = 6, and the freelist count says 7 blocks. Hence if the alignment check fails in some way, it will try to reduce the free list down to 6 blocks.

Unsurprisingly, then, this breakpoint (what function does every "log object" operation call?) eventually tripped:

Stack traceback for pid 2936
0xe000003817440000     2936     2902  1    1   R  0xe0000038174403a0 *mkdir
0xa0000001003c3cc0 xfs_trans_find_item
0xa0000001003c0d10 xfs_trans_log_buf+0x2f0
0xa0000001002f81e0 xfs_alloc_log_agf+0x80
0xa0000001002fa3d0 xfs_alloc_get_freelist+0x3d0
0xa0000001002ffe90 xfs_alloc_fix_freelist+0x770
0xa000000100300a00 xfs_alloc_vextent+0x440
0xa000000100374d70 xfs_ialloc_ag_alloc+0x2d0
0xa000000100375dd0 xfs_dialloc+0x2d0
.......

Which is the first place we dirty a log item. It's the AGF of AG 5:

[1]kdb> xbuf 0xe0000038143e2e00
buf 0xe0000038143e2e00 agf 0xe000003817550200
magicnum 0x58414746 versionnum 0x1 seqno 0x5 length 0x444f50
roots b 0x7 c 0xb42c levels b 2 c 2
flfirst 57 fllast 62 flcount 6 freeblks 68797 longest 8

And you'll note that flcount = 6 and flfirst = 57 now. In memory:

[1]kdb> xperag 0xe000003802979510
.....
ag 5 f_init 1 i_init 1 f_levels[b,c] 2,2 f_flcount 6
f_freeblks 68797 f_longest 8 f__metadata 0
i_freecount 0 i_inodeok 1
.....

f_flcount = 6 as well. So we've really modified the AGF here. Now to find out why the alignment checks failed.
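The freelist sizing that makes blocks = 6 can be sketched as a toy model. This is only an illustration of the idea behind XFS_MIN_FREELIST_PAG, not the kernel's exact macro; the helper name and the level cap are assumptions:

```python
# Toy model: the AGF free list must hold enough blocks to refill a
# worst-case split of both free-space btrees (by-block and by-count),
# roughly (levels + 1) blocks per tree, capped at a maximum AG btree
# height (the cap value here is illustrative).
def min_freelist(bno_level, cnt_level, ag_maxlevels=5):
    return (min(bno_level + 1, ag_maxlevels)
            + min(cnt_level + 1, ag_maxlevels))

# AG 5 above has both btrees at level 2, so the minimum is 6 blocks,
# while its AGF reports flcount = 7 -- one block more than needed,
# which is why a failed allocation attempt tries to shrink the list.
print(min_freelist(2, 2))  # -> 6
```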
[1]kdb> xalloc 0xe00000381744fc48
tp 0xe000003817450000 mp 0xe000003802979510 agbp 0x0000000000020024 pag 0xe000003802972378
fsbno 42563856[5:97910] agno 0x5 agbno 0xffffffff minlen 0x8 maxlen 0x8
mod 0x0 prod 0x1 minleft 0x0 total 0x0 alignment 0x1 minalignslop 0x0 len 0x0
type this_bno otype this_bno wasdel 0 wasfromfl 0 isfl 0 userdata 0

Oh - alignment = 1. How did that happen? And why did it fail?

I note: "this_bno" means it wants an exact allocation (fsbno 42563856[5:97910]). Ah, that means we are in the first attempt to allocate a block in an AG, i.e. here:

153         /*
154          * First try to allocate inodes contiguous with the last-allocated
155          * chunk of inodes.  If the filesystem is striped, this will fill
156          * an entire stripe unit with inodes.
157          */
158         agi = XFS_BUF_TO_AGI(agbp);
159         newino = be32_to_cpu(agi->agi_newino);
160         args.agbno = XFS_AGINO_TO_AGBNO(args.mp, newino) +
161                         XFS_IALLOC_BLOCKS(args.mp);
162         if (likely(newino != NULLAGINO &&
163                   (args.agbno < be32_to_cpu(agi->agi_length)))) {
164                 args.fsbno = XFS_AGB_TO_FSB(args.mp,
165                                 be32_to_cpu(agi->agi_seqno), args.agbno);
166                 args.type = XFS_ALLOCTYPE_THIS_BNO;
167                 args.mod = args.total = args.wasdel = args.isfl =
168                         args.userdata = args.minalignslop = 0;
169 >>>>>>>>        args.prod = 1;
170 >>>>>>>>        args.alignment = 1;
171                 /*
172                  * Allow space for the inode btree to split.
173                  */
174                 args.minleft = XFS_IN_MAXLEVELS(args.mp) - 1;
175 >>>>>>>>        if ((error = xfs_alloc_vextent(&args)))
176                         return error;

This now makes sense - at first we attempt an unaligned, exact block allocation. This gets us to modifying the free list because we have a free 8 block extent as required. However, the exact extent being asked for is not free, so the btree lookup fails and we abort the allocation attempt.

We then fall back to method 2 - try stripe alignment - which now fails the longest free block checks because alignment is accounted for and we need ~24 blocks to make sure of this.
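The arithmetic behind those aligned-allocation failures can be sketched with a toy check. This is an illustration, not the kernel's allocator; the key point is the worst-case slop of (alignment - 1) blocks needed to reach an aligned start inside a free extent, with the alignment values chosen here as assumptions matching the discussion:

```python
def fits_with_alignment(longest, length, alignment):
    # Worst case, the free extent must cover the requested length plus
    # up to (alignment - 1) blocks of slop before an aligned start.
    return longest >= length + alignment - 1

# Every AG on this filesystem has longest = 8, and an inode chunk
# allocation wants 8 blocks:
print(fits_with_alignment(8, 8, 1))   # unaligned / exact: the length fits
print(fits_with_alignment(8, 8, 2))   # cluster-aligned: needs 9, fails
print(fits_with_alignment(8, 8, 16))  # stripe-aligned: needs 23+, fails
```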
We fall back to method 3 - cluster alignment - which also fails because we need an extent of 9 blocks, but we only have extents of 8 blocks available. We never try again without alignment....

Now we fail allocation in that AG having dirtied the AGF, the AGFL, and a block out of both the by-size and by-count free space btrees. Hence when we fail to allocate in all other AGs, we return ENOSPC and the transaction gets cancelled. Because it has dirty items in it, we get shut down.

But no wonder it was so hard to reproduce....

The patch below fixes the shutdown for me. Can you give it a go?

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

---
 fs/xfs/xfs_ialloc.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

Index: 2.6.x-xfs-new/fs/xfs/xfs_ialloc.c
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_ialloc.c	2008-03-13 13:07:24.000000000 +1100
+++ 2.6.x-xfs-new/fs/xfs/xfs_ialloc.c	2008-03-14 01:40:21.926153338 +1100
@@ -167,7 +167,12 @@ xfs_ialloc_ag_alloc(
 		args.mod = args.total = args.wasdel = args.isfl =
 			args.userdata = args.minalignslop = 0;
 		args.prod = 1;
-		args.alignment = 1;
+		if (xfs_sb_version_hasalign(&args.mp->m_sb) &&
+		    args.mp->m_sb.sb_inoalignmt >=
+		    XFS_B_TO_FSBT(args.mp, XFS_INODE_CLUSTER_SIZE(args.mp)))
+			args.alignment = args.mp->m_sb.sb_inoalignmt;
+		else
+			args.alignment = 1;
 		/*
 		 * Allow space for the inode btree to split.
*/
From owner-xfs@oss.sgi.com Fri Mar 14 02:02:18 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 14 Mar 2008 02:02:25 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43, J_CHICKENPOX_44,J_CHICKENPOX_45,J_CHICKENPOX_46,J_CHICKENPOX_48, MIME_8BIT_HEADER autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2E92GAb011701 for ; Fri, 14 Mar 2008 02:02:18 -0700 X-ASG-Debug-ID: 1205485365-4d3e01310000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from wx-out-0506.google.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 5FA55F9C3AD for ; Fri, 14 Mar 2008 02:02:45 -0700 (PDT) Received: from wx-out-0506.google.com (wx-out-0506.google.com [66.249.82.230]) by cuda.sgi.com with ESMTP id tyjzxecJBNkfAsBV for ; Fri, 14 Mar 2008 02:02:45 -0700 (PDT) Received: by wx-out-0506.google.com with SMTP id s9so3919374wxc.32 for
; Fri, 14 Mar 2008 02:02:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; bh=bidT734VPu6k+N4l6+0akP6S3S5ryE0ZCV7KwejydK8=; b=T+sY6wD3r7EV++Oc5kUISR0twfT6TIcrR06lmjkF4v/D+9Eho9fuqEoa7s7iOicVDXodB+yoLpA+Y2jtPgxHAsH7PQjElcc0La/5uUYvXBIKvdkZ3CC7/XALYlG2YVcJAXY0SIbcDPAf+kQ97lEY5UAq/OUX6qgjpl3h+nLcQw0= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=qCLGuIYi7Fq3dVKCFWbbAGfs8itHF461w4Tqz81o/DWaJBXE+qwPNXnpEaRzib/UwgZ+wf5K7DmsDDBWdb/L/bDvfP+TFFjAiFCce3bKKOlo7ZPzmLt+LwZSLOKPWcccWkqjFIxu4HWw5KghXFotRc+TcfQeeH4Nv/JyJuxhS3M= Received: by 10.150.200.8 with SMTP id x8mr6143078ybf.149.1205485364594; Fri, 14 Mar 2008 02:02:44 -0700 (PDT) Received: by 10.150.96.5 with HTTP; Fri, 14 Mar 2008 02:02:44 -0700 (PDT) Message-ID: <1a4a774c0803140202r1ce10755hf1b679a02f30e7de@mail.gmail.com> Date: Fri, 14 Mar 2008 10:02:44 +0100 From: "=?ISO-8859-1?Q?Christian_R=F8snes?=" To: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c Subject: Re: XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c In-Reply-To: <20080313145349.GJ95344431@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline References: <20080311122103.GP155407@sgi.com> <20080312232425.GR155407@sgi.com> <1a4a774c0803130114l3927051byd54cd96cdb0efbe7@mail.gmail.com> <20080313090830.GD95344431@sgi.com> <1a4a774c0803130214x406a4eb9wfb8738d1f503663f@mail.gmail.com> <20080313092139.GF95344431@sgi.com> <1a4a774c0803130227l2fdf4861v21183b9bd3e7ce8d@mail.gmail.com> <20080313113634.GH95344431@sgi.com> 
<1a4a774c0803130446x609b9cb2mf3da323183c35606@mail.gmail.com> <20080313145349.GJ95344431@sgi.com> X-Barracuda-Connect: wx-out-0506.google.com[66.249.82.230] X-Barracuda-Start-Time: 1205485368 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44786 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14871 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: christian.rosnes@gmail.com Precedence: bulk X-list: xfs

On Thu, Mar 13, 2008 at 3:53 PM, David Chinner wrote:
> The patch below fixes the shutdown for me. Can you give it a go?
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> Principal Engineer
> SGI Australian Software Group
>
> ---
>  fs/xfs/xfs_ialloc.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> Index: 2.6.x-xfs-new/fs/xfs/xfs_ialloc.c
> ===================================================================
> --- 2.6.x-xfs-new.orig/fs/xfs/xfs_ialloc.c	2008-03-13 13:07:24.000000000 +1100
> +++ 2.6.x-xfs-new/fs/xfs/xfs_ialloc.c	2008-03-14 01:40:21.926153338 +1100
> @@ -167,7 +167,12 @@ xfs_ialloc_ag_alloc(
>
>  	args.mod = args.total = args.wasdel = args.isfl =
>  		args.userdata = args.minalignslop = 0;
>  	args.prod = 1;
> -	args.alignment = 1;
> +	if (xfs_sb_version_hasalign(&args.mp->m_sb) &&
> +	    args.mp->m_sb.sb_inoalignmt >=
> +	    XFS_B_TO_FSBT(args.mp, XFS_INODE_CLUSTER_SIZE(args.mp)))
> +		args.alignment = args.mp->m_sb.sb_inoalignmt;
> +	else
> +		args.alignment = 1;
>
>  	/*
>  	 * Allow space for the inode btree to split.
> */ > Yes. This patch fixes the shutdown problem for me too. Once again, thanks. Christian From owner-xfs@oss.sgi.com Fri Mar 14 02:56:04 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 14 Mar 2008 02:56:36 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2E9u2O9019283 for ; Fri, 14 Mar 2008 02:56:04 -0700 X-ASG-Debug-ID: 1205488593-4d3702370000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.redhat.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 39887F9C7C4; Fri, 14 Mar 2008 02:56:33 -0700 (PDT) Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by cuda.sgi.com with ESMTP id dBWNz0LmC41RGQM6; Fri, 14 Mar 2008 02:56:33 -0700 (PDT) Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.13.8/8.13.8) with ESMTP id m2E9RKSL009965; Fri, 14 Mar 2008 05:27:25 -0400 Received: from pobox.stuttgart.redhat.com (pobox.stuttgart.redhat.com [172.16.2.10]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id m2E9RJrd005207; Fri, 14 Mar 2008 05:27:20 -0400 Received: from [10.32.4.105] (vpn-4-105.str.redhat.com [10.32.4.105]) by pobox.stuttgart.redhat.com (8.13.1/8.13.1) with ESMTP id m2E9RI3w011351; Fri, 14 Mar 2008 05:27:18 -0400 Message-ID: <47DA44EB.8000307@redhat.com> Date: Fri, 14 Mar 2008 10:27:07 +0100 From: Milan Broz User-Agent: Thunderbird 2.0.0.12 (X11/20080226) MIME-Version: 1.0 To: David Chinner CC: Christian Kujau , LKML , xfs@oss.sgi.com, dm-devel@redhat.com X-ASG-Orig-Subj: Re: INFO: task mount:11202 blocked for more than 120 seconds Subject: Re: INFO: task mount:11202 blocked for more than 120 seconds References: <20080307224040.GV155259@sgi.com> 
<20080309213441.GQ155407@sgi.com> In-Reply-To: <20080309213441.GQ155407@sgi.com> X-Enigmail-Version: 0.95.6 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Scanned-By: MIMEDefang 2.58 on 172.16.52.254 X-Barracuda-Connect: mx1.redhat.com[66.187.233.31] X-Barracuda-Start-Time: 1205488594 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44790 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14872 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mbroz@redhat.com Precedence: bulk X-list: xfs David Chinner wrote: >>> Also, the iolock can be held across I/O so it's possible you've lost an >>> I/O. Any I/O errors in the syslog? >> No, no I/O errors at all. See the kern.log above, I could even do dd(1) >> from the md1 (dm-crypt on raid1), no errors either. > > Oh, dm-crypt. Well, I'd definitely start looking there. XFS has a > history of exposing dm-crypt bugs, and these hangs appear to be I/O > congestion/scheduling related and not XFS. Also, we haven't changed > anything related to plug/unplug of block devices in XFS recently, so > that also points to some other change as well... Yes, there is bug in dm-crypt... 
Please try if the patch here helps: http://lkml.org/lkml/2008/3/14/71 Thanks, Milan -- mbroz@redhat.com From owner-xfs@oss.sgi.com Fri Mar 14 05:13:18 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 14 Mar 2008 05:13:25 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2ECDGf9012144 for ; Fri, 14 Mar 2008 05:13:18 -0700 X-ASG-Debug-ID: 1205496827-602c00480000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from node2.t-mail.cz (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 981DAF9D7B9 for ; Fri, 14 Mar 2008 05:13:48 -0700 (PDT) Received: from node2.t-mail.cz (node2.t-mail.cz [62.141.0.167]) by cuda.sgi.com with ESMTP id A4vZ3xXzSI6iD1RS for ; Fri, 14 Mar 2008 05:13:48 -0700 (PDT) Received: from localhost (localhost [127.0.0.1]) by bl311.tmo.cz (Postfix) with ESMTP id 312A2442 for ; Fri, 14 Mar 2008 13:13:15 +0100 (CET) Received: from node2.t-mail.cz ([127.0.0.1]) by localhost (bl311.tmo.cz [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 4d2-wFgol-b2 for ; Fri, 14 Mar 2008 13:13:14 +0100 (CET) Received: from yalta.homelinux.doma (89-24-53-65.i4g.tmcz.cz [89.24.53.65]) by bl311.tmo.cz (Postfix) with ESMTP id E5FA2AE for ; Fri, 14 Mar 2008 13:13:13 +0100 (CET) Received: from AXALTO-9A9B4636.homelinux.doma (AXALTO-9A9B4636.homelinux.doma [192.168.0.95]) by yalta.homelinux.doma (Postfix) with ESMTP id 8B33A1813D90 for ; Fri, 14 Mar 2008 13:13:13 +0100 (CET) Received: from [127.0.0.1] (localhost [127.0.0.1]) by AXALTO-9A9B4636.homelinux.doma (Postfix) with ESMTP id BDF11349745D for ; Fri, 14 Mar 2008 13:13:15 +0100 (CET) X-ASG-Orig-Subj: xfs_repair on root filesystem Subject: xfs_repair on root filesystem From: Massimiliano Adamo To: 
xfs@oss.sgi.com Content-Type: text/plain Date: Fri, 14 Mar 2008 13:13:15 +0100 Message-Id: <1205496795.16829.6.camel@AXALTO-9A9B4636.homelinux.doma> Mime-Version: 1.0 X-Mailer: Evolution 2.12.1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: node2.t-mail.cz[62.141.0.167] X-Barracuda-Start-Time: 1205496828 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44800 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14873 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: maxadamo@gmail.com Precedence: bulk X-list: xfs Hi all, I have seen that it's possible to run xfs_check on filesystems mounted read-only, but in order to use xfs_repair the filesystem must be unmounted. I can't even imagine how to run xfs_repair on root filesystem, without booting from a live cd. Do you think it would be possible to implement xfs_repair on 'ro' filesystem, or on filesytems frozen with xfs_freeze? The other question would be about the possibility to shrink a XFS filesystem... but as I can see this is a work already in progress. 
cheers Massimiliano Adamo From owner-xfs@oss.sgi.com Fri Mar 14 07:24:08 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 14 Mar 2008 07:24:16 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.7 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2EEO4e4001632 for ; Fri, 14 Mar 2008 07:24:08 -0700 X-ASG-Debug-ID: 1205504675-602c031a0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.redhat.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 72B3D1156ED0 for ; Fri, 14 Mar 2008 07:24:35 -0700 (PDT) Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by cuda.sgi.com with ESMTP id G7AuI9guKHRB0oZP for ; Fri, 14 Mar 2008 07:24:35 -0700 (PDT) Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.13.8/8.13.8) with ESMTP id m2EEOXcq005703; Fri, 14 Mar 2008 10:24:34 -0400 Received: from lacrosse.corp.redhat.com (lacrosse.corp.redhat.com [172.16.52.154]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id m2EEOXxs017620; Fri, 14 Mar 2008 10:24:33 -0400 Received: from neon.msp.redhat.com (neon.msp.redhat.com [10.15.80.10]) by lacrosse.corp.redhat.com (8.12.11.20060308/8.11.6) with ESMTP id m2EEOWKd020438; Fri, 14 Mar 2008 10:24:33 -0400 Message-ID: <47DA8AA0.3020902@sandeen.net> Date: Fri, 14 Mar 2008 09:24:32 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (X11/20080226) MIME-Version: 1.0 To: Massimiliano Adamo CC: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: xfs_repair on root filesystem Subject: Re: xfs_repair on root filesystem References: <1205496795.16829.6.camel@AXALTO-9A9B4636.homelinux.doma> In-Reply-To: <1205496795.16829.6.camel@AXALTO-9A9B4636.homelinux.doma> Content-Type: text/plain; charset=ISO-8859-1 
Content-Transfer-Encoding: 7bit X-Scanned-By: MIMEDefang 2.58 on 172.16.52.254 X-Barracuda-Connect: mx1.redhat.com[66.187.233.31] X-Barracuda-Start-Time: 1205504676 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44808 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14874 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Massimiliano Adamo wrote: > Hi all, > > I have seen that it's possible to run xfs_check on filesystems mounted > read-only, but in order to use xfs_repair the filesystem must be > unmounted. > I can't even imagine how to run xfs_repair on root filesystem, without > booting from a live cd. > > Do you think it would be possible to implement xfs_repair on 'ro' > filesystem, or on filesytems frozen with xfs_freeze? Like this, from the xfs_repair manpage? :) -d Repair dangerously. Allow xfs_repair to repair an XFS filesystem mounted read only. This is typically done on a root fileystem from single user mode, immediately followed by a reboot. For an added bonus, you could create a "rescue initrd" which contains repair. I always thought this would be a neat trick. > The other question would be about the possibility to shrink a XFS > filesystem... but as I can see this is a work already in progress. 
It is (slowly) in progress I think, and something of a FAQ by now :) -Eric > cheers > Massimiliano Adamo > > From owner-xfs@oss.sgi.com Fri Mar 14 16:57:41 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 14 Mar 2008 16:57:49 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2ENvdnh015591 for ; Fri, 14 Mar 2008 16:57:41 -0700 X-ASG-Debug-ID: 1205539088-5c6802aa0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.g-house.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 8096C6A2B1C; Fri, 14 Mar 2008 16:58:08 -0700 (PDT) Received: from mail.g-house.de (ns2.g-housing.de [81.169.133.75]) by cuda.sgi.com with ESMTP id fFHsSgt1cmZIlQEt; Fri, 14 Mar 2008 16:58:08 -0700 (PDT) Received: from [89.49.144.197] (helo=[192.168.178.25]) by mail.g-house.de with esmtpa (Exim 4.63) (envelope-from ) id 1JaJmn-0005Ig-Fv; Sat, 15 Mar 2008 00:58:05 +0100 Date: Sat, 15 Mar 2008 00:58:02 +0100 (CET) From: Christian Kujau X-X-Sender: evil@sheep.housecafe.de To: Milan Broz cc: David Chinner , LKML , xfs@oss.sgi.com, dm-devel@redhat.com X-ASG-Orig-Subj: Re: INFO: task mount:11202 blocked for more than 120 seconds Subject: Re: INFO: task mount:11202 blocked for more than 120 seconds In-Reply-To: <47DA44EB.8000307@redhat.com> Message-ID: References: <20080307224040.GV155259@sgi.com> <20080309213441.GQ155407@sgi.com> <47DA44EB.8000307@redhat.com> User-Agent: Alpine 1.00 (DEB 882 2007-12-20) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Barracuda-Connect: ns2.g-housing.de[81.169.133.75] X-Barracuda-Start-Time: 1205539089 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at 
sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44847 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14875 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lists@nerdbynature.de Precedence: bulk X-list: xfs

On Fri, 14 Mar 2008, Milan Broz wrote:
> Yes, there is bug in dm-crypt...
> Please try if the patch here helps: http://lkml.org/lkml/2008/3/14/71

Hm, it seems to help the hangs, yes. Applied to today's -git a few hours ago, the hangs are gone. However, when doing lots of disk I/O, the machine locks up after a few (10-20) minutes. Sadly, netconsole got nothing :(

After the first lockup I tried again and shortly after bootup I got:

[  866.681441] [ INFO: possible circular locking dependency detected ]
[  866.681876] 2.6.25-rc5 #1
[  866.682203] -------------------------------------------------------
[  866.682637] kswapd0/132 is trying to acquire lock:
[  866.683028]  (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0x96/0xb0
[  866.683916]
[  866.683917] but task is already holding lock:
[  866.684582]  (iprune_mutex){--..}, at: [] shrink_icache_memory+0x72/0x220
[  866.685461]
[  866.685462] which lock already depends on the new lock.
[  866.685463]
[  866.686440]
[  866.686441] the existing dependency chain (in reverse order) is:
[  866.687151]
[  866.687152] -> #1 (iprune_mutex){--..}:
[  866.687339]        [] add_lock_to_list+0x44/0xc0
[  866.687339]        [] __lock_acquire+0xc26/0x10b0
[  866.687339]        [] shrink_icache_memory+0x72/0x220
[  866.687339]        [] __lock_acquire+0x18f/0x10b0
[  866.687339]        [] lock_acquire+0x5e/0x80
[  866.687339]        [] shrink_icache_memory+0x72/0x220
[  866.687339]        [] mutex_lock_nested+0x89/0x240
[  866.687339]        [] shrink_icache_memory+0x72/0x220
[  866.687339]        [] shrink_icache_memory+0x72/0x220
[  866.687339]        [] shrink_icache_memory+0x72/0x220
[  866.687339]        [] shrink_slab+0x21/0x160
[  866.687340]        [] shrink_slab+0x101/0x160
[  866.687340]        [] try_to_free_pages+0x152/0x230
[  866.687340]        [] isolate_pages_global+0x0/0x60
[  866.687340]        [] __alloc_pages+0x14b/0x370
[  866.687340]        [] _read_unlock_irq+0x20/0x30
[  866.687340]        [] __grab_cache_page+0x81/0xc0
[  866.687340]        [] block_write_begin+0x76/0xe0
[  866.687340]        [] xfs_vm_write_begin+0x46/0x50
[  866.687340]        [] xfs_get_blocks+0x0/0x30
[  866.687340]        [] generic_file_buffered_write+0x117/0x650
[  866.687340]        [] xfs_ilock+0x6d/0xb0
[  866.687340]        [] xfs_write+0x7ac/0x8a0
[  866.687340]        [] core_sys_select+0x21/0x350
[  866.687340]        [] xfs_file_aio_write+0x5c/0x70
[  866.687340]        [] do_sync_write+0xd5/0x120
[  866.687340]        [] autoremove_wake_function+0x0/0x40
[  866.687340]        [] dnotify_parent+0x35/0x90
[  866.687340]        [] do_sync_write+0x0/0x120
[  866.687340]        [] vfs_write+0x9f/0x140
[  866.687340]        [] sys_write+0x41/0x70
[  866.687340]        [] sysenter_past_esp+0x5f/0xa5
[  866.687340]        [] 0xffffffff
[  866.687340]
[  866.687340] -> #0 (&(&ip->i_iolock)->mr_lock){----}:
[  866.687340]        [] print_circular_bug_entry+0x40/0x50

The box was running fine then for ~20 minutes, then it locked up again.

Full dmesg and .config: http://nerdbynature.de/bits/2.6.25-rc5/

Right now I'm back to 2.6.24.3...

Thanks, Christian.
-- BOFH excuse #350: paradigm shift...without a clutch From owner-xfs@oss.sgi.com Fri Mar 14 20:24:55 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 14 Mar 2008 20:25:08 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2F3OsVJ007035 for ; Fri, 14 Mar 2008 20:24:55 -0700 X-ASG-Debug-ID: 1205551520-7a8402d30000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id DDAF3FA57CE for ; Fri, 14 Mar 2008 20:25:20 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id xLd4PIdbTbfG0PWb for ; Fri, 14 Mar 2008 20:25:20 -0700 (PDT) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 672ED18828840; Fri, 14 Mar 2008 22:24:47 -0500 (CDT) Message-ID: <47DB4181.7040603@sandeen.net> Date: Fri, 14 Mar 2008 22:24:49 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: xfs-oss CC: patches@arm.linux.org.uk X-ASG-Orig-Subj: [PATCH] fix dir2 shortform structures on ARM old ABI Subject: [PATCH] fix dir2 shortform structures on ARM old ABI Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1205551522 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 
tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44860 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14876 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs

This should fix the longstanding issues with xfs and old ABI arm boxes, which lead to various asserts and xfs shutdowns, and for which an (incorrect) patch has been floating around for years. (Said patch made ARM internally consistent, but altered the normal xfs on-disk format such that it looked corrupted on other architectures):

http://lists.arm.linux.org.uk/lurker/message/20040311.002034.5ecf21a2.html

Old ABI ARM has interesting packing & padding; for example on ARM old abi:

struct xfs_dir2_sf_entry {
	__uint8_t          namelen;      /*  0  1 */
	/* XXX 3 bytes hole, try to pack */
	xfs_dir2_sf_off_t  offset;       /*  4  4 */
	__uint8_t          name[1];      /*  8  1 */
	/* XXX 3 bytes hole, try to pack */
	xfs_dir2_inou_t    inumber;      /* 12  8 */

	/* size: 20, cachelines: 1 */
	/* sum members: 14, holes: 2, sum holes: 6 */
	/* last cacheline: 20 bytes */
};

but on x86:

struct xfs_dir2_sf_entry {
	__uint8_t          namelen;      /*  0  1 */
	xfs_dir2_sf_off_t  offset;       /*  1  2 */
	__uint8_t          name[1];      /*  3  1 */
	xfs_dir2_inou_t    inumber;      /*  4  8 */

	/* size: 12, cachelines: 1 */
	/* last cacheline: 12 bytes */
};

... this sort of discrepancy leads to problems.

I've verified this patch by comparing the on-disk structure layouts using pahole from the dwarves package, as well as running through a bit of xfsqa under qemu-arm, modified so that the check/repair phase after each test actually executes check/repair from the x86 host, on the filesystem populated by the arm emulator. Thus far it all looks good.
There are 2 other structures with extra padding at the end, but they don't seem to cause trouble. I suppose they could be packed as well: xfs_dir2_data_unused_t and xfs_dir2_sf_t.

Note that userspace needs a similar treatment, and any filesystems which were running with the previous rogue "fix" will now see corruption (either in the kernel, or during xfs_repair) with this fix properly in place; it may be worth teaching xfs_repair to identify and fix that specific issue.

Signed-off-by: Eric Sandeen

---

Index: linux-2.6.24/fs/xfs/linux-2.6/xfs_linux.h
===================================================================
--- linux-2.6.24.orig/fs/xfs/linux-2.6/xfs_linux.h
+++ linux-2.6.24/fs/xfs/linux-2.6/xfs_linux.h
@@ -300,4 +300,11 @@ static inline __uint64_t howmany_64(__ui
 	return x;
 }

+/* ARM old ABI has some weird alignment/padding */
+#if defined(__arm__) && !defined(__ARM_EABI__)
+#define __arch_pack __attribute__((packed))
+#else
+#define __arch_pack
+#endif
+
 #endif /* __XFS_LINUX__ */
Index: linux-2.6.24/fs/xfs/xfs_dir2_sf.h
===================================================================
--- linux-2.6.24.orig/fs/xfs/xfs_dir2_sf.h
+++ linux-2.6.24/fs/xfs/xfs_dir2_sf.h
@@ -62,7 +62,7 @@ typedef union {
  * Normalized offset (in a data block) of the entry, really xfs_dir2_data_off_t.
  * Only need 16 bits, this is the byte offset into the single block form.
  */
-typedef struct { __uint8_t i[2]; } xfs_dir2_sf_off_t;
+typedef struct { __uint8_t i[2]; } __arch_pack xfs_dir2_sf_off_t;

 /*
  * The parent directory has a dedicated field, and the self-pointer must
@@ -76,14 +76,14 @@ typedef struct xfs_dir2_sf_hdr {
 	__uint8_t		count;		/* count of entries */
 	__uint8_t		i8count;	/* count of 8-byte inode #s */
 	xfs_dir2_inou_t		parent;		/* parent dir inode number */
-} xfs_dir2_sf_hdr_t;
+} __arch_pack xfs_dir2_sf_hdr_t;

 typedef struct xfs_dir2_sf_entry {
 	__uint8_t		namelen;	/* actual name length */
 	xfs_dir2_sf_off_t	offset;		/* saved offset */
 	__uint8_t		name[1];	/* name, variable size */
 	xfs_dir2_inou_t		inumber;	/* inode number, var. offset */
-} xfs_dir2_sf_entry_t;
+} __arch_pack xfs_dir2_sf_entry_t;

 typedef struct xfs_dir2_sf {
 	xfs_dir2_sf_hdr_t	hdr;		/* shortform header */

From owner-xfs@oss.sgi.com Fri Mar 14 21:17:13 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 14 Mar 2008 21:17:29 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2F4H7Rx019187 for ; Fri, 14 Mar 2008 21:17:12 -0700 X-ASG-Debug-ID: 1205554658-7e5d00eb0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 80126FA593A for ; Fri, 14 Mar 2008 21:17:38 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id C8Txwlb5I8sYpUAw for ; Fri, 14 Mar 2008 21:17:38 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2F4HKCq000502; Sat, 15 Mar 2008 00:17:20 -0400 Received: by josefsipek.net (Postfix, from userid
1000) id AE88B1C07AC0; Sat, 15 Mar 2008 00:17:22 -0400 (EDT) Date: Sat, 15 Mar 2008 00:17:22 -0400 From: "Josef 'Jeff' Sipek" To: Eric Sandeen Cc: xfs-oss , patches@arm.linux.org.uk X-ASG-Orig-Subj: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Message-ID: <20080315041722.GA25621@josefsipek.net> References: <47DB4181.7040603@sandeen.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47DB4181.7040603@sandeen.net> User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1205554659 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.44864 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14877 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs On Fri, Mar 14, 2008 at 10:24:49PM -0500, Eric Sandeen wrote: > This should fix the longstanding issues with xfs and old ABI > arm boxes, which lead to various asserts and xfs shutdowns, > and for which an (incorrect) patch has been floating around > for years. (Said patch made ARM internally consistent, but > altered the normal xfs on-disk format such that it looked > corrupted on other architectures): > http://lists.arm.linux.org.uk/lurker/message/20040311.002034.5ecf21a2.html ... 
> Signed-off-by: Eric Sandeen
>
> ---
>
> Index: linux-2.6.24/fs/xfs/linux-2.6/xfs_linux.h
> ===================================================================
> --- linux-2.6.24.orig/fs/xfs/linux-2.6/xfs_linux.h
> +++ linux-2.6.24/fs/xfs/linux-2.6/xfs_linux.h
> @@ -300,4 +300,11 @@ static inline __uint64_t howmany_64(__ui
>  	return x;
>  }
>
> +/* ARM old ABI has some weird alignment/padding */
> +#if defined(__arm__) && !defined(__ARM_EABI__)
> +#define __arch_pack __attribute__((packed))
> +#else
> +#define __arch_pack
> +#endif

Shouldn't this be unconditional? Just because it ends up being ok on x86 doesn't mean that it won't break some time later on...(do we want another bad_features2 incident?)

> +
> #endif /* __XFS_LINUX__ */
> Index: linux-2.6.24/fs/xfs/xfs_dir2_sf.h
> ===================================================================
> --- linux-2.6.24.orig/fs/xfs/xfs_dir2_sf.h
> +++ linux-2.6.24/fs/xfs/xfs_dir2_sf.h
> @@ -62,7 +62,7 @@ typedef union {
>  * Normalized offset (in a data block) of the entry, really xfs_dir2_data_off_t.
>  * Only need 16 bits, this is the byte offset into the single block form.
>  */
> -typedef struct { __uint8_t i[2]; } xfs_dir2_sf_off_t;
> +typedef struct { __uint8_t i[2]; } __arch_pack xfs_dir2_sf_off_t;
>
> /*
>  * The parent directory has a dedicated field, and the self-pointer must
> @@ -76,14 +76,14 @@ typedef struct xfs_dir2_sf_hdr {
>  	__uint8_t	count;		/* count of entries */
>  	__uint8_t	i8count;	/* count of 8-byte inode #s */
>  	xfs_dir2_inou_t	parent;		/* parent dir inode number */
> -} xfs_dir2_sf_hdr_t;
> +} __arch_pack xfs_dir2_sf_hdr_t;
>
> typedef struct xfs_dir2_sf_entry {
>  	__uint8_t	namelen;	/* actual name length */
>  	xfs_dir2_sf_off_t	offset;		/* saved offset */
>  	__uint8_t	name[1];	/* name, variable size */
>  	xfs_dir2_inou_t	inumber;	/* inode number, var. offset */
> -} xfs_dir2_sf_entry_t;
> +} __arch_pack xfs_dir2_sf_entry_t;
>
> typedef struct xfs_dir2_sf {
>  	xfs_dir2_sf_hdr_t	hdr;	/* shortform header */

A very simple patch! I like it (minus the condition vs. unconditional packing - see above).

Josef 'Jeff' Sipek.

--
Penguin : Linux version 2.6.23.1 on an i386 machine (6135.23 BogoMips).

From owner-xfs@oss.sgi.com Fri Mar 14 21:23:41 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 14 Mar 2008 21:23:49 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2F4Nepb021006 for ; Fri, 14 Mar 2008 21:23:41 -0700 X-ASG-Debug-ID: 1205555052-034900520000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id BF9316A39E3 for ; Fri, 14 Mar 2008 21:24:12 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id BqzSLg4XzK0Tnbrc for ; Fri, 14 Mar 2008 21:24:12 -0700 (PDT) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 32AFA18828840; Fri, 14 Mar 2008 23:23:41 -0500 (CDT) Message-ID: <47DB4F4F.8030407@sandeen.net> Date: Fri, 14 Mar 2008 23:23:43 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: "Josef 'Jeff' Sipek" CC: xfs-oss X-ASG-Orig-Subj: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI References: <47DB4181.7040603@sandeen.net> <20080315041722.GA25621@josefsipek.net> In-Reply-To: <20080315041722.GA25621@josefsipek.net>
Josef 'Jeff' Sipek wrote:
> On Fri, Mar 14, 2008 at 10:24:49PM -0500, Eric Sandeen wrote:
>> This should fix the longstanding issues with xfs and old ABI
>> arm boxes, which lead to various asserts and xfs shutdowns,
>> and for which an (incorrect) patch has been floating around
>> for years. (Said patch made ARM internally consistent, but
>> altered the normal xfs on-disk format such that it looked
>> corrupted on other architectures):
>> http://lists.arm.linux.org.uk/lurker/message/20040311.002034.5ecf21a2.html
> ...
>> Signed-off-by: Eric Sandeen
>>
>> ---
>>
>> Index: linux-2.6.24/fs/xfs/linux-2.6/xfs_linux.h
>> ===================================================================
>> --- linux-2.6.24.orig/fs/xfs/linux-2.6/xfs_linux.h
>> +++ linux-2.6.24/fs/xfs/linux-2.6/xfs_linux.h
>> @@ -300,4 +300,11 @@ static inline __uint64_t howmany_64(__ui
>> 	return x;
>> }
>>
>> +/* ARM old ABI has some weird alignment/padding */
>> +#if defined(__arm__) && !defined(__ARM_EABI__)
>> +#define __arch_pack __attribute__((packed))
>> +#else
>> +#define __arch_pack
>> +#endif
>
> Shouldn't this be unconditional? Just because it ends up being ok on x86
> doesn't mean that it won't break some time later on... (do we want another
> bad_features2 incident?)

I think that packing structures when they don't need to be can actually
be harmful, efficiency-wise. I read a nice explanation of this....
which I can't find now.

-Eric

From owner-xfs@oss.sgi.com Fri Mar 14 21:33:11 2008
Date: Fri, 14 Mar 2008 23:33:39 -0500
From: Eric Sandeen
To: "Josef 'Jeff' Sipek"
CC: xfs-oss
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
Message-ID: <47DB51A3.70200@sandeen.net>
In-Reply-To: <20080315042703.GA28242@josefsipek.net>

Josef 'Jeff' Sipek wrote:
> On Fri, Mar 14, 2008 at 11:23:43PM -0500, Eric Sandeen wrote:
>> Josef 'Jeff' Sipek wrote:
>>> Shouldn't this be unconditional? Just because it ends up being ok on x86
>>> doesn't mean that it won't break some time later on... (do we want another
>>> bad_features2 incident?)
>> I think that packing structures when they don't need to be can actually
>> be harmful, efficiency-wise. I read a nice explanation of this....
>> which I can't find now.
>
> Agreed. For in-memory only structures it makes sense to let the compiler do
> whatever is best, but for structures that are on-disk, you really have
> no choice; you have to have the same layout in memory - which frequently
> means packed. Unless I missed it, the structs you modified are on-disk, and
> therefore _must_ be the way the docs say - which happens to be packed.

Well, the docs probably don't actually say "packed" :) ... they just
assume a standard/predictable layout of the structures.

So on almost all architectures they _don't_ need to be explicitly
packed, and it's my understanding that doing so when it's not necessary
is harmful. But these 3 cases, on the odd arm abi, do need it.

A QA test to run pahole on all disk structures to look for proper sizes /
no holes might be good... though it would require a debug xfs (well, xfs
built with -g).

-Eric

From owner-xfs@oss.sgi.com Fri Mar 14 21:45:04 2008
Date: Fri, 14 Mar 2008 23:45:36 -0500
From: Eric Sandeen
To: Chris Wedgwood
CC: "Josef 'Jeff' Sipek", xfs-oss
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
Message-ID: <47DB5470.5020000@sandeen.net>
In-Reply-To: <20080315043622.GA11547@puku.stupidest.org>

Chris Wedgwood wrote:
> On Fri, Mar 14, 2008 at 11:23:43PM -0500, Eric Sandeen wrote:
>
>> I think that packing structures when they don't need to be can
>> actually be harmful, efficiency-wise. I read a nice explanation of
>> this.... which I can't find now.
>
> objdump -d would show if that was the case...

It would show if that was the case for the particular compiler you
test, I guess.
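[A compile-time variant of the layout-QA idea discussed in this thread: asserting expected sizes and offsets directly makes the build fail if the compiler ever introduces padding, with no need for pahole or a debug build. A minimal sketch - the struct below is illustrative only, not an actual XFS on-disk structure:]

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative "on-disk" record: a 2-byte count followed by a 4-byte
 * block number.  With natural alignment, typical ABIs insert a 2-byte
 * hole before 'blockno'; packing removes it. */
struct demo_rec_natural {
	uint16_t count;
	uint32_t blockno;
};

struct demo_rec_packed {
	uint16_t count;
	uint32_t blockno;
} __attribute__((packed));

/* Compile-time checks (C11): the build breaks if the layout drifts. */
_Static_assert(sizeof(struct demo_rec_packed) == 6,
	       "on-disk record must be exactly 6 bytes");
_Static_assert(offsetof(struct demo_rec_packed, blockno) == 2,
	       "blockno must immediately follow count");

/* Runtime mirror of the same checks, callable from a QA harness. */
static int demo_layout_ok(void)
{
	return sizeof(struct demo_rec_packed) == 6 &&
	       offsetof(struct demo_rec_packed, blockno) == 2 &&
	       sizeof(struct demo_rec_natural) >= sizeof(struct demo_rec_packed);
}
```

[Unlike pahole, this catches a drifted layout at build time on every architecture the code is compiled for, not just the one being inspected.]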
-Eric

From owner-xfs@oss.sgi.com Fri Mar 14 21:51:18 2008
Date: Sat, 15 Mar 2008 00:51:47 -0400
From: "Josef 'Jeff' Sipek"
To: Eric Sandeen
Cc: xfs-oss
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
Message-ID: <20080315045147.GB28242@josefsipek.net>
In-Reply-To: <47DB51A3.70200@sandeen.net>
On Fri, Mar 14, 2008 at 11:33:39PM -0500, Eric Sandeen wrote:
> Josef 'Jeff' Sipek wrote:
> > On Fri, Mar 14, 2008 at 11:23:43PM -0500, Eric Sandeen wrote:
> >> Josef 'Jeff' Sipek wrote:
> >>> Shouldn't this be unconditional? Just because it ends up being ok on x86
> >>> doesn't mean that it won't break some time later on... (do we want another
> >>> bad_features2 incident?)
> >> I think that packing structures when they don't need to be can actually
> >> be harmful, efficiency-wise. I read a nice explanation of this....
> >> which I can't find now.
> >
> > Agreed. For in-memory only structures it makes sense to let the compiler do
> > whatever is best, but for structures that are on-disk, you really have
> > no choice; you have to have the same layout in memory - which frequently
> > means packed. Unless I missed it, the structs you modified are on-disk, and
> > therefore _must_ be the way the docs say - which happens to be packed.
>
> Well, the docs probably don't actually say "packed" :) ... they just
> assume a standard/predictable layout of the structures.

Ok, nitpicking, eh?
Well, you started ;)

Yes, it is true that they don't say packed, but at the same time they
don't define any holes, and the best way to force the compiler not to
make holes is to pack the structure.

> So on almost all architectures they _don't_ need to be explicitly
> packed, and it's my understanding that doing so when it's not necessary
> is harmful. But these 3 cases, on the odd arm abi, do need it.

My understanding of the "harmful" case is that of unaligned word access,
but the compiler takes care of that. All in all, all the ABIs that get
the right alignment without packing will not suffer performance-wise,
and the old arm ABI needs to be packed anyway.

Now, next time Linux gets ported to an architecture - if that arch has
"bad" alignment ABI rules, XFS will break, and someone (I nominate you
:-P ) will have to augment the #if... or just use packed and forget
about the whole issue. :)

Josef 'Jeff' Sipek, wondering exactly how passionate one can get about
structure member alignment :)

-- 
Research, n.:
  Consider Columbus:
    He didn't know where he was going.
    When he got there he didn't know where he was.
    When he got back he didn't know where he had been.
    And he did it all on someone else's money.
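[On the unaligned-access point above: once a packed member can sit at any byte offset, dereferencing an ordinary pointer to it is undefined behaviour on strict-alignment targets; the compiler emits safe byte-wise access for packed struct members, and for raw buffers the portable idiom is memcpy, which compilers lower to a single load where the ISA allows it. A small illustrative sketch - the helper names are made up, not kernel APIs:]

```c
#include <stdint.h>
#include <string.h>

/* Read a 32-bit value from a possibly unaligned buffer position.
 * Casting 'p' to uint32_t * and dereferencing would be undefined
 * behaviour on strict-alignment architectures; memcpy is always safe
 * and typically compiles to one load where unaligned access is legal. */
static uint32_t read_u32_unaligned(const void *p)
{
	uint32_t v;

	memcpy(&v, p, sizeof(v));
	return v;
}

/* Round-trip a value through an odd (misaligned) offset in a buffer. */
static uint32_t demo_roundtrip(uint32_t x)
{
	unsigned char buf[8] = { 0 };

	memcpy(buf + 1, &x, sizeof(x));     /* store at offset 1 */
	return read_u32_unaligned(buf + 1); /* fetch it back safely */
}
```

[On x86 this costs nothing extra; on strict-alignment ABIs it is what prevents the alignment trap that packing otherwise exposes.]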
From owner-xfs@oss.sgi.com Fri Mar 14 22:07:58 2008
Date: Sat, 15 Mar 2008 00:27:03 -0400
From: "Josef 'Jeff' Sipek"
To: Eric Sandeen
Cc: xfs-oss
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
Message-ID: <20080315042703.GA28242@josefsipek.net>
In-Reply-To: <47DB4F4F.8030407@sandeen.net>
On Fri, Mar 14, 2008 at 11:23:43PM -0500, Eric Sandeen wrote:
> Josef 'Jeff' Sipek wrote:
> > On Fri, Mar 14, 2008 at 10:24:49PM -0500, Eric Sandeen wrote:
> >> This should fix the longstanding issues with xfs and old ABI
> >> arm boxes, which lead to various asserts and xfs shutdowns,
> >> and for which an (incorrect) patch has been floating around
> >> for years. (Said patch made ARM internally consistent, but
> >> altered the normal xfs on-disk format such that it looked
> >> corrupted on other architectures):
> >> http://lists.arm.linux.org.uk/lurker/message/20040311.002034.5ecf21a2.html
> > ...
> >> Signed-off-by: Eric Sandeen
> >>
> >> ---
> >>
> >> Index: linux-2.6.24/fs/xfs/linux-2.6/xfs_linux.h
> >> ===================================================================
> >> --- linux-2.6.24.orig/fs/xfs/linux-2.6/xfs_linux.h
> >> +++ linux-2.6.24/fs/xfs/linux-2.6/xfs_linux.h
> >> @@ -300,4 +300,11 @@ static inline __uint64_t howmany_64(__ui
> >> 	return x;
> >> }
> >>
> >> +/* ARM old ABI has some weird alignment/padding */
> >> +#if defined(__arm__) && !defined(__ARM_EABI__)
> >> +#define __arch_pack __attribute__((packed))
> >> +#else
> >> +#define __arch_pack
> >> +#endif
> >
> > Shouldn't this be unconditional?
> > Just because it ends up being ok on x86
> > doesn't mean that it won't break some time later on... (do we want another
> > bad_features2 incident?)
>
> I think that packing structures when they don't need to be can actually
> be harmful, efficiency-wise. I read a nice explanation of this....
> which I can't find now.

Agreed. For in-memory only structures it makes sense to let the compiler do
whatever is best, but for structures that are on-disk, you really have
no choice; you have to have the same layout in memory - which frequently
means packed. Unless I missed it, the structs you modified are on-disk, and
therefore _must_ be the way the docs say - which happens to be packed.

Josef 'Jeff' Sipek.

-- 
In personal conversations with technical people, I call myself a hacker.
But when I'm talking to journalists I just say "programmer" or something
like that. - Linus Torvalds

From owner-xfs@oss.sgi.com Sun Mar 16 06:09:02 2008
Date: Sun, 16 Mar
2008 14:08:41 +0100 (CET)
From: Christian Kujau
To: Chr
cc: Milan Broz, David Chinner, LKML, xfs@oss.sgi.com, dm-devel@redhat.com,
    Alasdair G Kergon, dm-crypt@saout.de
Subject: Re: [dm-crypt] INFO: task mount:11202 blocked for more than 120 seconds
In-Reply-To: <200803152234.53199.chunkeey@web.de>

On Sat, 15 Mar 2008, Chr wrote:
> On Saturday 15 March 2008 14:32:10 Chr wrote:
>> reverted:
>>
>> commit 3a7f6c990ad04e6f576a159876c602d14d6f7fef
>> dm crypt: use async crypto
>>
>> dm-crypt: Use crypto ablkcipher interface
>> Move encrypt/decrypt core to async crypto call.
>
> well.... it's much better now, without the async interface.
> Christian Kujau, can you confirm it too?
I reverted the commit above from today's -git and booted... I did not
notice any more hangs. But when I tried to reproduce the hangs by
generating disk I/O (mostly reads), the box panicked ~10 minutes later,
and still no netconsole messages :-(

thanks,
C.
-- 
BOFH excuse #233: TCP/IP UDP alarm threshold is set too low.

From owner-xfs@oss.sgi.com Sun Mar 16 13:33:35 2008
Date: Mon, 17 Mar 2008 07:33:47 +1100
From: David Chinner
To: Christian Kujau
Cc: Milan Broz, LKML, xfs@oss.sgi.com, dm-devel@redhat.com
Subject: Re: INFO: task mount:11202 blocked for more than 120 seconds
Message-ID: <20080316203347.GX95344431@sgi.com>
On Sat, Mar 15, 2008 at 12:58:02AM +0100, Christian Kujau wrote:
> On Fri, 14 Mar 2008, Milan Broz wrote:
> > Yes, there is a bug in dm-crypt...
> > Please try if the patch here helps: http://lkml.org/lkml/2008/3/14/71
>
> Hm, it seems to help the hangs, yes. Applied to today's -git a few hours
> ago, the hangs are gone. However, when doing lots of disk I/O, the machine
> locks up after a few (10-20) minutes. Sadly, netconsole got nothing :(
>
> After the first lockup I tried again and shortly after bootup I got:

False positive. Memory reclaim can invert the order of iprune_mutex and
the normal inode locking order. i.e. both of these are possible:

	do_something()
		enter memory reclaim
			iprune_mutex
			inode lock

or

	do_something()
		inode lock
		do_something_else()
			enter memory reclaim
				iprune_mutex
				inode lock on different inode

So depending on which is seen first (in this case the first), the second
will trip lockdep. Neither is a deadlock, because the inode lock held
before memory reclaim is on a referenced inode, which will *never* be on
the free list for memory reclaim to deadlock on....

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Sun Mar 16 17:47:07 2008
Date: Sun, 16 Mar 2008 23:44:30 +0100 (CET)
From: Jan Derfinak
To: David Chinner
cc: xfs@oss.sgi.com
Subject: Re: Differences in mkfs.xfs and xfs_info output.
In-Reply-To: <20080220054216.GN155407@sgi.com>

On Wed, 20 Feb 2008, David Chinner wrote:
> On Tue, Feb 19, 2008 at 03:05:32PM +0100, Jan Derfinak wrote:
> > On Tue, 19 Feb 2008, David Chinner wrote:
> > > I did not use a patched mkfs - just my patch that does the correction.
> >
> > I tried with only your patch. The result is slightly different, but
> > not correct.
>
> Ok, still 1024 blocks out. Still need to reproduce it locally.
>
> FYI - this is not a corruption bug - just an accounting problem.
> IOWs, all it will cause is slightly premature detection of ENOSPC....

Just for your information: I tested kernel 2.6.25-rc5 compiled from
sources from ftp.kernel.org.
The bug with wrong sb_fdblocks is there too:

# xfs_check /dev/system/mnt
# mount /mnt/mnt
# umount /mnt/mnt
# xfs_check /dev/system/mnt
sb_fdblocks 214009, counted 215033
# xfs_info /mnt/mnt
meta-data=/dev/mapper/system-mnt isize=256    agcount=4, agsize=554240 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=2216960, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# uname -a
Linux host 2.6.25-rc5 #1 Sun Mar 16 21:17:39 CET 2008 x86_64 x86_64 x86_64 GNU/Linux
# xfs_repair -V
xfs_repair version 2.9.6

jan

From owner-xfs@oss.sgi.com Mon Mar 17 06:48:20 2008
Date: Mon, 17 Mar 2008 14:48:47 +0100
From: Marco Gaiarin
To: xfs@oss.sgi.com
Subject: XFS check script on boot?
Message-ID: <20080317134847.GJ14510@sv.lnf.it>

I've recently suffered an XFS corruption on a remote server (intel,
debian etch, custom 2.6.X kernel): at some point a process hit an XFS
inconsistency on /var, so /var disappeared and suddenly the machine
refused to work.
Being a remote machine with no full-knowledge people there, i've rebooted it, entered in ssh and stopped all services and tasks, arriving at the point where i can remount /var readonly. So i was able to xfs_check the partition (that confirmed me the corruption), but i was not able to unmount /var, so i was forced to use '-d' options of xfs_repair. That indeed worked. ;) This was the first tile i hit a xfs filesystem corruption, so i'm asking why seems there's no /etc/init.d/checkfs.sh-like script that check and repair XFS filesystem at boot. Probably doing fully automatically it is a bit too dangerous, but an approach like 'normal' fsck, eg if filesystem are too corrupt (it need '-f' option) ask admin password and force to do it by hand, seems to me simple and effective. Someone can explain me? Thanks. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.sv.lnf.it/ Polo FVG - Via della Bontŕ, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)sv.lnf.it tel +39-0434-842711 fax +39-0434-842797 From owner-xfs@oss.sgi.com Mon Mar 17 10:35:56 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 10:36:13 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2HHZrGj004817 for ; Mon, 17 Mar 2008 10:35:56 -0700 X-ASG-Debug-ID: 1205775384-5f77012b0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.redhat.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 42034FB7F8B for ; Mon, 17 Mar 2008 10:36:24 -0700 (PDT) Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by cuda.sgi.com with ESMTP id cJ5WnuwU0RQ4DyKj for ; Mon, 17 Mar 2008 10:36:24 -0700 (PDT) Received: from 
int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.13.8/8.13.8) with ESMTP id m2HHaHkX016644; Mon, 17 Mar 2008 13:36:17 -0400 Received: from pobox.fab.redhat.com (pobox.fab.redhat.com [10.33.63.12]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id m2HHaA1s005170; Mon, 17 Mar 2008 13:36:11 -0400 Received: from agk.fab.redhat.com (agk.fab.redhat.com [10.33.0.19]) by pobox.fab.redhat.com (8.13.1/8.13.1) with ESMTP id m2HHaABk025245; Mon, 17 Mar 2008 13:36:10 -0400 Received: from agk by agk.fab.redhat.com with local (Exim 4.34) id 1JbJFp-00019F-VR; Mon, 17 Mar 2008 17:36:09 +0000 Date: Mon, 17 Mar 2008 17:36:09 +0000 From: Alasdair G Kergon To: Christian Kujau , Chr , Milan Broz , David Chinner , LKML , xfs@oss.sgi.com, dm-devel@redhat.com, dm-crypt@saout.de, Herbert Xu , Ritesh Raj Sarraf X-ASG-Orig-Subj: Re: [dm-crypt] INFO: task mount:11202 blocked for more than 120 seconds Subject: Re: [dm-crypt] INFO: task mount:11202 blocked for more than 120 seconds Message-ID: <20080317173609.GD29322@agk.fab.redhat.com> Mail-Followup-To: Christian Kujau , Chr , Milan Broz , David Chinner , LKML , xfs@oss.sgi.com, dm-devel@redhat.com, dm-crypt@saout.de, Herbert Xu , Ritesh Raj Sarraf References: <200803150108.04008.chunkeey@web.de> <200803151432.11125.chunkeey@web.de> <200803152234.53199.chunkeey@web.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.1i Organization: Red Hat UK Ltd. Registered in England and Wales, number 03798903. Registered Office: Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE. 
X-Scanned-By: MIMEDefang 2.58 on 172.16.52.254 X-Barracuda-Connect: mx1.redhat.com[66.187.233.31] X-Barracuda-Start-Time: 1205775385 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45110 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14887 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: agk@redhat.com Precedence: bulk X-list: xfs

Latest version for everyone to try:

From: Milan Broz

Fix regression in dm-crypt introduced in commit 3a7f6c990ad04e6f576a159876c602d14d6f7fef (dm crypt: use async crypto).

If write requests need to be split into pieces, the code must not process them in parallel because the crypto context cannot be shared. So there can be parallel crypto operations on one part of the write, but only one write bio can be processed at a time.

This is not optimal, and the workqueue code needs to be optimized for parallel processing, but for now it solves the problem without affecting the performance of synchronous crypto operations (most current dm-crypt users).
Signed-off-by: Milan Broz
Signed-off-by: Alasdair G Kergon
---
 drivers/md/dm-crypt.c |   58 +++++++++++++++++++++++++-------------------------
 1 files changed, 30 insertions(+), 28 deletions(-)

Index: linux-2.6.25-rc4/drivers/md/dm-crypt.c
===================================================================
--- linux-2.6.25-rc4.orig/drivers/md/dm-crypt.c	2008-03-17 11:42:16.000000000 +0000
+++ linux-2.6.25-rc4/drivers/md/dm-crypt.c	2008-03-17 11:42:28.000000000 +0000
@@ -1,7 +1,7 @@
 /*
  * Copyright (C) 2003 Christophe Saout
  * Copyright (C) 2004 Clemens Fruhwirth
- * Copyright (C) 2006-2007 Red Hat, Inc. All rights reserved.
+ * Copyright (C) 2006-2008 Red Hat, Inc. All rights reserved.
  *
  * This file is released under the GPL.
  */
@@ -93,6 +93,8 @@ struct crypt_config {
 	struct workqueue_struct *io_queue;
 	struct workqueue_struct *crypt_queue;
 
+	wait_queue_head_t writeq;
+
 	/*
 	 * crypto related data
 	 */
@@ -331,14 +333,7 @@ static void crypt_convert_init(struct cr
 	ctx->idx_out = bio_out ? bio_out->bi_idx : 0;
 	ctx->sector = sector + cc->iv_offset;
 	init_completion(&ctx->restart);
-	/*
-	 * Crypto operation can be asynchronous,
-	 * ctx->pending is increased after request submission.
-	 * We need to ensure that we don't call the crypt finish
-	 * operation before pending got incremented
-	 * (dependent on crypt submission return code).
-	 */
-	atomic_set(&ctx->pending, 2);
+	atomic_set(&ctx->pending, 1);
 }
 
 static int crypt_convert_block(struct crypt_config *cc,
@@ -411,43 +406,42 @@ static void crypt_alloc_req(struct crypt
 static int crypt_convert(struct crypt_config *cc,
 			 struct convert_context *ctx)
 {
-	int r = 0;
+	int r;
 
 	while(ctx->idx_in < ctx->bio_in->bi_vcnt &&
 	      ctx->idx_out < ctx->bio_out->bi_vcnt) {
 
 		crypt_alloc_req(cc, ctx);
 
+		atomic_inc(&ctx->pending);
+
 		r = crypt_convert_block(cc, ctx, cc->req);
 
 		switch (r) {
+		/* async */
 		case -EBUSY:
 			wait_for_completion(&ctx->restart);
 			INIT_COMPLETION(ctx->restart);
 			/* fall through*/
 		case -EINPROGRESS:
-			atomic_inc(&ctx->pending);
 			cc->req = NULL;
-			r = 0;
-			/* fall through*/
+			ctx->sector++;
+			continue;
+
+		/* sync */
 		case 0:
+			atomic_dec(&ctx->pending);
 			ctx->sector++;
 			continue;
-		}
 
-		break;
+		/* error */
+		default:
+			atomic_dec(&ctx->pending);
+			return r;
+		}
 	}
 
-	/*
-	 * If there are pending crypto operation run async
-	 * code. Otherwise process return code synchronously.
-	 * The step of 2 ensures that async finish doesn't
-	 * call crypto finish too early.
-	 */
-	if (atomic_sub_return(2, &ctx->pending))
-		return -EINPROGRESS;
-
-	return r;
+	return 0;
 }
 
 static void dm_crypt_bio_destructor(struct bio *bio)
@@ -624,8 +618,10 @@ static void kcryptd_io_read(struct dm_cr
 static void kcryptd_io_write(struct dm_crypt_io *io)
 {
 	struct bio *clone = io->ctx.bio_out;
+	struct crypt_config *cc = io->target->private;
 
 	generic_make_request(clone);
+	wake_up(&cc->writeq);
 }
 
 static void kcryptd_io(struct work_struct *work)
@@ -698,7 +694,8 @@ static void kcryptd_crypt_write_convert_
 
 	r = crypt_convert(cc, &io->ctx);
 
-	if (r != -EINPROGRESS) {
+	if (atomic_dec_and_test(&io->ctx.pending)) {
+		/* processed, no running async crypto */
 		kcryptd_crypt_write_io_submit(io, r, 0);
 		if (unlikely(r < 0))
 			return;
@@ -706,8 +703,12 @@ static void kcryptd_crypt_write_convert_
 		atomic_inc(&io->pending);
 
 		/* out of memory -> run queues */
-		if (unlikely(remaining))
+		if (unlikely(remaining)) {
+			/* wait for async crypto and reinitialize pending counter */
+			wait_event(cc->writeq, !atomic_read(&io->ctx.pending));
+			atomic_set(&io->ctx.pending, 1);
 			congestion_wait(WRITE, HZ/100);
+		}
 	}
 }
 
@@ -746,7 +747,7 @@ static void kcryptd_crypt_read_convert(s
 
 	r = crypt_convert(cc, &io->ctx);
 
-	if (r != -EINPROGRESS)
+	if (atomic_dec_and_test(&io->ctx.pending))
 		kcryptd_crypt_read_done(io, r);
 
 	crypt_dec_pending(io);
@@ -1047,6 +1048,7 @@ static int crypt_ctr(struct dm_target *t
 		goto bad_crypt_queue;
 	}
 
+	init_waitqueue_head(&cc->writeq);
 	ti->private = cc;
 	return 0;

From owner-xfs@oss.sgi.com Mon Mar 17 11:31:56 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 11:32:03 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_53, J_CHICKENPOX_63 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2HIVpdH024313 for ; Mon, 17
Mar 2008 11:31:56 -0700 X-ASG-Debug-ID: 1205778740-603302420000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 843EEFBB478 for ; Mon, 17 Mar 2008 11:32:20 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id teM1tUFjrcV7ANQD for ; Mon, 17 Mar 2008 11:32:20 -0700 (PDT) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 27E7118052C86; Mon, 17 Mar 2008 13:32:17 -0500 (CDT) Message-ID: <47DEB930.7020108@sandeen.net> Date: Mon, 17 Mar 2008 13:32:16 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: "Josef 'Jeff' Sipek" CC: xfs-oss X-ASG-Orig-Subj: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI References: <47DB4181.7040603@sandeen.net> <20080315041722.GA25621@josefsipek.net> <47DB4F4F.8030407@sandeen.net> <20080315042703.GA28242@josefsipek.net> <47DB51A3.70200@sandeen.net> <20080315045147.GB28242@josefsipek.net> In-Reply-To: <20080315045147.GB28242@josefsipek.net> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1205778741 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45112 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on 
oss.sgi.com X-Virus-Status: Clean X-archive-position: 14888 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs

Josef 'Jeff' Sipek wrote:
> Josef 'Jeff' Sipek, wondering exactly how passionate one can get about
> structure member alignment :)

Very. ;)

Tossing packed at all the ondisk structures bloats things badly on ia64.

cvs/linux-2.6-xfs> wc -l before.dis
166688 before.dis
cvs/linux-2.6-xfs> wc -l after.dis
182294 after.dis

That's +15606 lines.

http://digitalvampire.org/blog/index.php/2006/07/31/why-you-shouldnt-use-__attribute__packed/

Please, don't do this. _Annotating_ ondisk structures sounds good to me, assuming something can be done with it (i.e., testing for holes - I'd thought of this a while ago too, but never came up with anything to make use of it), but don't pack stuff just for fun.

-Eric

From owner-xfs@oss.sgi.com Mon Mar 17 12:52:46 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 12:52:54 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.2 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_53, J_CHICKENPOX_63 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2HJqgX9020794 for ; Mon, 17 Mar 2008 12:52:46 -0700 X-ASG-Debug-ID: 1205783594-070e03290000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 4B4BEFBC149 for ; Mon, 17 Mar 2008 12:53:14 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id U8pmh1Q4BCSv8MG9 for ; Mon, 17 Mar 2008 12:53:14 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by
filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2HJrBRG028316; Mon, 17 Mar 2008 15:53:11 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id 130FE1C008A2; Mon, 17 Mar 2008 15:53:13 -0400 (EDT) Date: Mon, 17 Mar 2008 15:53:13 -0400 From: "Josef 'Jeff' Sipek" To: Eric Sandeen Cc: xfs-oss X-ASG-Orig-Subj: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Message-ID: <20080317195313.GB16500@josefsipek.net> References: <47DB4181.7040603@sandeen.net> <20080315041722.GA25621@josefsipek.net> <47DB4F4F.8030407@sandeen.net> <20080315042703.GA28242@josefsipek.net> <47DB51A3.70200@sandeen.net> <20080315045147.GB28242@josefsipek.net> <47DEB930.7020108@sandeen.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47DEB930.7020108@sandeen.net> User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1205783595 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45118 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14889 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs On Mon, Mar 17, 2008 at 01:32:16PM -0500, Eric Sandeen wrote: > Josef 'Jeff' Sipek wrote: > > > Josef 'Jeff' Sipek, wondering exactly how passionate one can get about > > structure member alignment :) > > Very. 
;) > > Tossing packed at all the ondisk stuctures bloats things badly on ia64. > > cvs/linux-2.6-xfs> wc -l before.dis > 166688 before.dis > cvs/linux-2.6-xfs> wc -l after.dis > 182294 after.dis > > That's +15606 lines. I'm not done yet! :-P First of all, the patch I showed you actually breaks a few things that I still need to fix. Second, I need to find out whether all the affected structures are always aligned on some boundary (probably 4 or 8 byte). If there indeed is some alignment, there might be a way to reduce those 15k extra lines to something a whole lot less - I hope. Josef 'Jeff' Sipek. -- A CRAY is the only computer that runs an endless loop in just 4 hours... From owner-xfs@oss.sgi.com Mon Mar 17 13:03:43 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 13:03:50 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_53, J_CHICKENPOX_63 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2HK3gdU024022 for ; Mon, 17 Mar 2008 13:03:43 -0700 X-ASG-Debug-ID: 1205784253-7e3c02bf0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 8BD7C6B13EB for ; Mon, 17 Mar 2008 13:04:14 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id Ow7EGSYk5mD6D6hB for ; Mon, 17 Mar 2008 13:04:14 -0700 (PDT) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 81AB418C732BF; Mon, 17 Mar 2008 15:04:13 -0500 (CDT) Message-ID: <47DECEBD.10604@sandeen.net> Date: Mon, 17 Mar 2008 15:04:13 -0500 From: Eric Sandeen User-Agent: 
Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: "Josef 'Jeff' Sipek" CC: xfs-oss X-ASG-Orig-Subj: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI References: <47DB4181.7040603@sandeen.net> <20080315041722.GA25621@josefsipek.net> <47DB4F4F.8030407@sandeen.net> <20080315042703.GA28242@josefsipek.net> <47DB51A3.70200@sandeen.net> <20080315045147.GB28242@josefsipek.net> <47DEB930.7020108@sandeen.net> <20080317195313.GB16500@josefsipek.net> In-Reply-To: <20080317195313.GB16500@josefsipek.net> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1205784254 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45119 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14890 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Josef 'Jeff' Sipek wrote: > On Mon, Mar 17, 2008 at 01:32:16PM -0500, Eric Sandeen wrote: >> Josef 'Jeff' Sipek wrote: >> >>> Josef 'Jeff' Sipek, wondering exactly how passionate one can get about >>> structure member alignment :) >> Very. ;) >> >> Tossing packed at all the ondisk stuctures bloats things badly on ia64. >> >> cvs/linux-2.6-xfs> wc -l before.dis >> 166688 before.dis >> cvs/linux-2.6-xfs> wc -l after.dis >> 182294 after.dis >> >> That's +15606 lines. > > I'm not done yet! 
:-P > > First of all, the patch I showed you actually breaks a few things that I > still need to fix. Oh, I wasn't trying to blame you or our patch specifically, just wanted to highlight what I consider to be the bad idea of giving gcc a bunch of directives that IMHO we don't need. > Second, I need to find out whether all the affected structures are always > aligned on some boundary (probably 4 or 8 byte). If there indeed is some > alignment, there might be a way to reduce those 15k extra lines to something > a whole lot less - I hope. To what end? What are you trying to fix? If it's not reduced to 0 then your change is introducing regressions, IMHO. Respectfully, ;) -Eric From owner-xfs@oss.sgi.com Mon Mar 17 16:27:05 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 16:27:39 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2HNR3FU018931 for ; Mon, 17 Mar 2008 16:27:05 -0700 X-ASG-Debug-ID: 1205796451-072f02950000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from knox.decisionsoft.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id E4DA9FBDC69 for ; Mon, 17 Mar 2008 16:27:32 -0700 (PDT) Received: from knox.decisionsoft.com (knox-be.decisionsoft.com [87.194.172.100]) by cuda.sgi.com with ESMTP id 0pAG2h5kKM1IpH3t for ; Mon, 17 Mar 2008 16:27:32 -0700 (PDT) Received: from kennet.dsl.local ([10.0.0.11]) by knox.decisionsoft.com with esmtp (Exim 4.63) (envelope-from ) id 1JbOjn-0003yq-NW for xfs@oss.sgi.com; Mon, 17 Mar 2008 23:27:27 +0000 Message-ID: <47DEFE5E.4030703@decisionsoft.co.uk> Date: Mon, 17 Mar 2008 23:27:26 +0000 From: Stuart Rowan Reply-To: strr-debian@decisionsoft.co.uk User-Agent: Thunderbird 2.0.0.12 
(X11/20080226) MIME-Version: 1.0 To: xfs@oss.sgi.com X-ASG-Orig-Subj: 2.6.24.3 nfs server on xfs keeps producing nfsd: non-standard errno: -117 Subject: 2.6.24.3 nfs server on xfs keeps producing nfsd: non-standard errno: -117 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-SA-Exim-Connect-IP: 10.0.0.11 X-SA-Exim-Mail-From: strr-debian@decisionsoft.co.uk X-SA-Exim-Scanned: No (on knox.decisionsoft.com); SAEximRunCond expanded to false X-SystemFilter-new-T: not expanding X-SystemFilter-new-S: not expanding X-SystemFilter-new-F: not expanding X-Barracuda-Connect: knox-be.decisionsoft.com[87.194.172.100] X-Barracuda-Start-Time: 1205796455 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45132 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14891 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: strr-debian@decisionsoft.co.uk Precedence: bulk X-list: xfs

Hi,

Firstly, thanks for the great filesystem, and apologies if this ends up being NFS rather than XFS being weird! I'm not subscribed, so please do keep me CC'd.

I have *millions* of lines (>200k per minute according to syslog) of:

nfsd: non-standard errno: -117

being sent out of dmesg. Now, errno 117 is

#define EUCLEAN 117 /* Structure needs cleaning */

which, from a quick grep, seems to be used only by XFS, JFFS and smbfs.
My NFS server exports two locations:

/home
/home/archive

Both of these are XFS partitions, hence my suspicion that the -117 is coming from XFS. xfs_repair -n says the filesystems are clean; xfs_repair has been run multiple times to completion on the filesystems, and all is fine.

The XFS partitions are LVM volumes as follows:

data/home 900G
data/archive 400G

The volume group, data, is sda3; sda3 is a 6-drive 3ware 9550SXU-8LP RAID10 array. The NFS server is currently in use (indeed the message only starts once clients connect) and works absolutely fine.

How do I find out what (if anything) is wrong with my filesystem / appropriately silence this message?

Many thanks,
Stu.

From owner-xfs@oss.sgi.com Mon Mar 17 16:35:11 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 16:35:18 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.7 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_53, J_CHICKENPOX_63 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2HNZ6Xc021656 for ; Mon, 17 Mar 2008 16:35:10 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA00222; Tue, 18 Mar 2008 10:35:32 +1100 Message-ID: <47DF0044.6080704@sgi.com> Date: Tue, 18 Mar 2008 10:35:32 +1100 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.9 (Macintosh/20071031) MIME-Version: 1.0 To: Eric Sandeen CC: "Josef 'Jeff' Sipek" , xfs-oss Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI References: <47DB4181.7040603@sandeen.net> <20080315041722.GA25621@josefsipek.net> <47DB4F4F.8030407@sandeen.net> <20080315042703.GA28242@josefsipek.net> <47DB51A3.70200@sandeen.net> <20080315045147.GB28242@josefsipek.net> <47DEB930.7020108@sandeen.net> In-Reply-To:
<47DEB930.7020108@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14892 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Eric Sandeen wrote: > Josef 'Jeff' Sipek wrote: > >> Josef 'Jeff' Sipek, wondering exactly how passionate one can get about >> structure member alignment :) > > Very. ;) > > Tossing packed at all the ondisk stuctures bloats things badly on ia64. > > cvs/linux-2.6-xfs> wc -l before.dis > 166688 before.dis > cvs/linux-2.6-xfs> wc -l after.dis > 182294 after.dis > > That's +15606 lines. > > http://digitalvampire.org/blog/index.php/2006/07/31/why-you-shouldnt-use-__attribute__packed/ > Interesting. So the problem there is that gcc is doing the wrong thing on some arches (the example being ia64, sparc64). 
--Tim From owner-xfs@oss.sgi.com Mon Mar 17 16:43:38 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 16:43:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2HNhaCH024412 for ; Mon, 17 Mar 2008 16:43:38 -0700 X-ASG-Debug-ID: 1205797448-073c03680000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id B06DDFBDE83 for ; Mon, 17 Mar 2008 16:44:08 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id muB82EkvK299QQeW for ; Mon, 17 Mar 2008 16:44:08 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2HKSqGt002137; Mon, 17 Mar 2008 16:28:52 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id 026371C008A2; Mon, 17 Mar 2008 16:28:54 -0400 (EDT) Date: Mon, 17 Mar 2008 16:28:53 -0400 From: "Josef 'Jeff' Sipek" To: Eric Sandeen Cc: xfs-oss X-ASG-Orig-Subj: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Message-ID: <20080317202853.GC16500@josefsipek.net> References: <47DB4181.7040603@sandeen.net> <20080315041722.GA25621@josefsipek.net> <47DB4F4F.8030407@sandeen.net> <20080315042703.GA28242@josefsipek.net> <47DB51A3.70200@sandeen.net> <20080315045147.GB28242@josefsipek.net> <47DEB930.7020108@sandeen.net> <20080317195313.GB16500@josefsipek.net> <47DECEBD.10604@sandeen.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47DECEBD.10604@sandeen.net> 
User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1205797448 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45134 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14893 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs

On Mon, Mar 17, 2008 at 03:04:13PM -0500, Eric Sandeen wrote:
> Josef 'Jeff' Sipek wrote:
...
> Oh, I wasn't trying to blame you or our patch specifically, just wanted
> to highlight what I consider to be the bad idea of giving gcc a bunch of
> directives that IMHO we don't need.

Right. And yes, just plopping __attribute__((packed)) onto the end of each on-disk structure is a really bad idea - it actually makes xfsqa fail :) But living on the edge, without telling gcc exactly what we want from it, is an even worse idea! Take sb_bad_features2 - that fiasco, which is going to stay with the XFS on-disk format for a long time to come, would never have happened if the structures had been properly packed/padded to begin with.

> > Second, I need to find out whether all the affected structures are always
> > aligned on some boundary (probably 4 or 8 byte). If there indeed is some
> > alignment, there might be a way to reduce those 15k extra lines to something
> > a whole lot less - I hope.
>
> To what end? What are you trying to fix? If it's not reduced to 0 then
> your change is introducing regressions, IMHO.

Not packing the structures is all fine if you have one compiler, one OS, and one architecture to care about. That worked fine when XFS ran on IRIX on MIPS, but Linux runs on so many different ABIs that you are asking for trouble by not packing. I can't imagine the number of supported ABIs not growing.

Packing on an as-needed basis (which you suggested with your patch) is rather messy:

1) new ABIs have to be checked
2) you'll end up with a rat's nest of #ifdefs to make __arch_pack do the right thing

Really, we need a way to tell gcc: "hey, gcc, we know what we're doing - just trust us; don't pad, and don't worry about the alignment". packed gets you the no-padding, and as I mentioned in my previous reply, align(X) will hopefully take care of the alignment. Then gcc should generate nice code to access members that are on proper boundaries (AFAIK virtually all, if not all, of the struct members in XFS fall into this category), and slightly worse code for the few places that might exist.

The problem you saw on ia64 is because gcc generated the "worst" case code for all the struct accesses. I _think_ that can be fixed to near-0, if not 0, on all but a few ABIs (e.g., old ARM).

Josef 'Jeff' Sipek.

--
The box said "Windows XP or better required". So I installed Linux.
From owner-xfs@oss.sgi.com Mon Mar 17 16:43:40 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 16:43:49 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_53, J_CHICKENPOX_63 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2HNhc3u024422 for ; Mon, 17 Mar 2008 16:43:40 -0700 X-ASG-Debug-ID: 1205797450-073003690000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 36067FBDE83 for ; Mon, 17 Mar 2008 16:44:10 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id zVeUAeq5zakjjx95 for ; Mon, 17 Mar 2008 16:44:10 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2HNgIUN000910; Mon, 17 Mar 2008 19:42:18 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id C30D71C008A2; Mon, 17 Mar 2008 19:42:19 -0400 (EDT) Date: Mon, 17 Mar 2008 19:42:19 -0400 From: "Josef 'Jeff' Sipek" To: Timothy Shimmin Cc: Eric Sandeen , xfs-oss X-ASG-Orig-Subj: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Message-ID: <20080317234219.GD16500@josefsipek.net> References: <47DB4181.7040603@sandeen.net> <20080315041722.GA25621@josefsipek.net> <47DB4F4F.8030407@sandeen.net> <20080315042703.GA28242@josefsipek.net> <47DB51A3.70200@sandeen.net> <20080315045147.GB28242@josefsipek.net> <47DEB930.7020108@sandeen.net> <47DF0044.6080704@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47DF0044.6080704@sgi.com> 
On Tue, Mar 18, 2008 at 10:35:32AM +1100, Timothy Shimmin wrote:
> Eric Sandeen wrote:
>> Josef 'Jeff' Sipek wrote:
>>> Josef 'Jeff' Sipek, wondering exactly how passionate one can get about
>>> structure member alignment :)
>>
>> Very. ;)
>>
>> Tossing packed at all the ondisk structures bloats things badly on ia64.
>>
>> cvs/linux-2.6-xfs> wc -l before.dis
>> 166688 before.dis
>> cvs/linux-2.6-xfs> wc -l after.dis
>> 182294 after.dis
>>
>> That's +15606 lines.
>>
>> http://digitalvampire.org/blog/index.php/2006/07/31/why-you-shouldnt-use-__attribute__packed/
>
> Interesting.
> So the problem there is that gcc is doing the wrong thing
> on some arches (the example being ia64, sparc64).

Actually, it's not doing the wrong thing... __attribute__((packed)) means:

1) condense the members of the struct, leaving NO padding bytes
2) do NOT assume the entire structure is aligned on any boundary

This means that even if you have a member that'd be nicely aligned
without the packed attribute (see below), the compiler will generate
worst-case alignment code.
struct foo {
	u64 a;
} __attribute__((packed));

You can put struct foo anywhere in memory, and the code accessing ->a
will _always_ work.

Using __attribute__((packed, aligned(4))) tells it that the structure as
a whole will be aligned on a 4-byte boundary, but that there should be no
padding bytes inserted.

Josef 'Jeff' Sipek.

--
Penguin : Linux version 2.6.23.1 on an i386 machine (6135.23 BogoMips).

From owner-xfs@oss.sgi.com Mon Mar 17 17:27:52 2008
Date: Tue, 18 Mar 2008 11:28:13 +1100
From: Timothy Shimmin
To: strr-debian@decisionsoft.co.uk
Cc: xfs@oss.sgi.com
Subject: Re: 2.6.24.3 nfs server on xfs keeps producing nfsd: non-standard errno: -117
Message-ID: <47DF0C9D.1010602@sgi.com>
In-Reply-To: <47DEFE5E.4030703@decisionsoft.co.uk>

Hi Stuart,

Stuart Rowan wrote:
> Hi,
>
> Firstly thanks for the great filesystem, and apologies if this ends up
> being NFS rather than XFS being weird! I'm not subscribed, so please do
> keep me CC'd.
>
> I have *millions* of lines of (>200k per minute according to syslog):
>   nfsd: non-standard errno: -117
> being sent out of dmesg.
>
> Now errno 117 is
>   #define EUCLEAN 117 /* Structure needs cleaning */
> which, from a quick grep, seems to be used only by XFS, JFFS and smbfs.

In XFS we mapped EFSCORRUPTED to EUCLEAN, as EFSCORRUPTED didn't exist
on Linux. However, normally if this error is encountered in XFS then we
output an appropriate message to the syslog. Our default error level is
3 and most reports are rated at 1, so I would have thought it should
show up.

--Tim

> My NFS server exports two locations:
>   /home
>   /home/archive
> Both of these are XFS partitions, hence my suspicion that the -117 is
> coming from XFS.
>
> xfs_repair -n says the filesystems are clean; xfs_repair has been run
> multiple times to completion on the filesystems, all is fine.
>
> The XFS partitions are LVM volumes as follows:
>   data/home    900G
>   data/archive 400G
> The volume group, data, is sda3; sda3 is a 6-drive 3ware 9550SXU-8LP
> RAID10 array.
>
> The NFS server is currently in use (indeed the message only starts once
> clients connect) and works absolutely fine.
>
> How do I find out what (if anything) is wrong with my filesystem, and
> how do I appropriately silence this message?
>
> Many thanks,
> Stu.
From owner-xfs@oss.sgi.com Mon Mar 17 17:39:01 2008
Date: Mon, 17 Mar 2008 20:39:05 -0400
From: "Josef 'Jeff' Sipek"
To: sandeen@sandeen.net, xfs@oss.sgi.com, tes@sgi.com, dgc@sgi.com
Subject: [RFC][PATCH 1/1] XFS: annotate all on-disk structures with __ondisk
Message-ID: <1205800745-9217-1-git-send-email-jeffpc@josefsipek.net>
In-Reply-To: <20080317202853.GC16500@josefsipek.net>
Currently, the annotation just forces the structures to be packed and
4-byte aligned.

Signed-off-by: Josef 'Jeff' Sipek

---
This is just an RFC, and the alignment needs to be verified against the
offsets within the pages read from disk, and more xfsqa runs on various
architectures are needed. (I don't want to be responsible for something
like the bitops regression on ppc!)

The .text segment shrinks on x86 and s390x, but grows on ia64 (3776
bytes == 0.3%).

    text    data   bss      dec     hex  filename
  542054    3171  3084   548309   85dd5  xfs-x86-original.ko
  542026    3171  3084   548281   85db9  xfs-x86-packed-aligned4.ko
 1244057   70858  2480  1317395  141a13  xfs-ia64-original.ko
 1247833   70858  2480  1321171  1428d3  xfs-ia64-packed-aligned4.ko
  679901   19374  3112   702387   ab7b3  xfs-s390x-original.ko
  679781   19374  3112   702267   ab73b  xfs-s390x-packed-aligned4.ko

The approximate number of instructions effectively stays the same on x86
(goes up by 2), s390x gets smaller (by 12 instructions), but ia64 bloats
by 708 instructions (0.34%).
$ for x in *.ko; do objdump -d $x > `basename $x .ko`.dis ; done
$ wc -l *.dis
  141494 xfs-x86-original.dis
  141496 xfs-x86-packed-aligned4.dis
  208514 xfs-ia64-original.dis
  209222 xfs-ia64-packed-aligned4.dis
  121544 xfs-s390x-original.dis
  121532 xfs-s390x-packed-aligned4.dis

I could try to compile things on a sparc64, mips, and x86_64, but that's
for another day - and depending on where this thread will lead.

The patch keeps xfsqa happy on x86 - well, it dies in the 100-range, but
I haven't had the time to check if that happens without this patch.
Someone (not it!) should nurse xfsqa back to health :)

Jeff.

---
 fs/xfs/linux-2.6/xfs_linux.h |    1 +
 fs/xfs/xfs_ag.h              |    6 +++---
 fs/xfs/xfs_alloc_btree.h     |    2 +-
 fs/xfs/xfs_attr_leaf.h       |   14 +++++++-------
 fs/xfs/xfs_attr_sf.h         |    9 +++++----
 fs/xfs/xfs_bmap_btree.h      |    8 ++++----
 fs/xfs/xfs_btree.h           |   12 ++++++------
 fs/xfs/xfs_da_btree.h        |    8 ++++----
 fs/xfs/xfs_dinode.h          |    6 +++---
 fs/xfs/xfs_dir2_block.h      |    4 ++--
 fs/xfs/xfs_dir2_data.h       |   10 +++++-----
 fs/xfs/xfs_dir2_leaf.h       |    9 +++++----
 fs/xfs/xfs_dir2_node.h       |    5 +++--
 fs/xfs/xfs_dir2_sf.h         |   14 +++++++-------
 fs/xfs/xfs_ialloc_btree.h    |    4 ++--
 fs/xfs/xfs_log_priv.h        |    6 +++---
 fs/xfs/xfs_quota.h           |    4 ++--
 fs/xfs/xfs_sb.h              |    2 +-
 fs/xfs/xfs_trans.h           |    2 +-
 19 files changed, 65 insertions(+), 61 deletions(-)

diff --git a/fs/xfs/linux-2.6/xfs_linux.h b/fs/xfs/linux-2.6/xfs_linux.h
index 284460f..f06199c 100644
--- a/fs/xfs/linux-2.6/xfs_linux.h
+++ b/fs/xfs/linux-2.6/xfs_linux.h
@@ -186,6 +186,7 @@
 #define xfs_itruncate_data(ip, off)	\
 	(-vmtruncate(vn_to_inode(XFS_ITOV(ip)), (off)))
 
+#define __ondisk	__attribute__((packed,aligned(4)))
 
 /* Move the kernel do_div definition off to one side */
 
diff --git a/fs/xfs/xfs_ag.h b/fs/xfs/xfs_ag.h
index 61b292a..20f3291 100644
--- a/fs/xfs/xfs_ag.h
+++ b/fs/xfs/xfs_ag.h
@@ -69,7 +69,7 @@ typedef struct xfs_agf {
 	__be32	agf_freeblks;	/* total free blocks */
 	__be32	agf_longest;	/* longest free space */
 	__be32	agf_btreeblks;	/* # of blocks held in AGF
btrees */ -} xfs_agf_t; +} __ondisk xfs_agf_t; #define XFS_AGF_MAGICNUM 0x00000001 #define XFS_AGF_VERSIONNUM 0x00000002 @@ -121,7 +121,7 @@ typedef struct xfs_agi { * still being referenced. */ __be32 agi_unlinked[XFS_AGI_UNLINKED_BUCKETS]; -} xfs_agi_t; +} __ondisk xfs_agi_t; #define XFS_AGI_MAGICNUM 0x00000001 #define XFS_AGI_VERSIONNUM 0x00000002 @@ -153,7 +153,7 @@ typedef struct xfs_agi { typedef struct xfs_agfl { __be32 agfl_bno[1]; /* actually XFS_AGFL_SIZE(mp) */ -} xfs_agfl_t; +} __ondisk xfs_agfl_t; /* * Busy block/extent entry. Used in perag to mark blocks that have been freed diff --git a/fs/xfs/xfs_alloc_btree.h b/fs/xfs/xfs_alloc_btree.h index 5bd1a2c..f7c5bba 100644 --- a/fs/xfs/xfs_alloc_btree.h +++ b/fs/xfs/xfs_alloc_btree.h @@ -41,7 +41,7 @@ struct xfs_mount; typedef struct xfs_alloc_rec { __be32 ar_startblock; /* starting block number */ __be32 ar_blockcount; /* count of free blocks */ -} xfs_alloc_rec_t, xfs_alloc_key_t; +} __ondisk xfs_alloc_rec_t, xfs_alloc_key_t; typedef struct xfs_alloc_rec_incore { xfs_agblock_t ar_startblock; /* starting block number */ diff --git a/fs/xfs/xfs_attr_leaf.h b/fs/xfs/xfs_attr_leaf.h index 040f732..792d2a9 100644 --- a/fs/xfs/xfs_attr_leaf.h +++ b/fs/xfs/xfs_attr_leaf.h @@ -75,7 +75,7 @@ struct xfs_trans; typedef struct xfs_attr_leaf_map { /* RLE map of free bytes */ __be16 base; /* base of free region */ __be16 size; /* length of free region */ -} xfs_attr_leaf_map_t; +} __ondisk xfs_attr_leaf_map_t; typedef struct xfs_attr_leaf_hdr { /* constant-structure header block */ xfs_da_blkinfo_t info; /* block type, links, etc. 
*/ @@ -86,34 +86,34 @@ typedef struct xfs_attr_leaf_hdr { /* constant-structure header block */ __u8 pad1; xfs_attr_leaf_map_t freemap[XFS_ATTR_LEAF_MAPSIZE]; /* N largest free regions */ -} xfs_attr_leaf_hdr_t; +} __ondisk xfs_attr_leaf_hdr_t; typedef struct xfs_attr_leaf_entry { /* sorted on key, not name */ __be32 hashval; /* hash value of name */ __be16 nameidx; /* index into buffer of name/value */ __u8 flags; /* LOCAL/ROOT/SECURE/INCOMPLETE flag */ __u8 pad2; /* unused pad byte */ -} xfs_attr_leaf_entry_t; +} __ondisk xfs_attr_leaf_entry_t; typedef struct xfs_attr_leaf_name_local { __be16 valuelen; /* number of bytes in value */ __u8 namelen; /* length of name bytes */ __u8 nameval[1]; /* name/value bytes */ -} xfs_attr_leaf_name_local_t; +} __ondisk xfs_attr_leaf_name_local_t; typedef struct xfs_attr_leaf_name_remote { __be32 valueblk; /* block number of value bytes */ __be32 valuelen; /* number of bytes in value */ __u8 namelen; /* length of name bytes */ - __u8 name[1]; /* name bytes */ -} xfs_attr_leaf_name_remote_t; + __u8 name[3]; /* name bytes */ +} __ondisk xfs_attr_leaf_name_remote_t; typedef struct xfs_attr_leafblock { xfs_attr_leaf_hdr_t hdr; /* constant-structure header block */ xfs_attr_leaf_entry_t entries[1]; /* sorted on key, not name */ xfs_attr_leaf_name_local_t namelist; /* grows from bottom of buf */ xfs_attr_leaf_name_remote_t valuelist; /* grows from bottom of buf */ -} xfs_attr_leafblock_t; +} __ondisk xfs_attr_leafblock_t; /* * Flags used in the leaf_entry[i].flags field. 
diff --git a/fs/xfs/xfs_attr_sf.h b/fs/xfs/xfs_attr_sf.h index f67f917..a13afb7 100644 --- a/fs/xfs/xfs_attr_sf.h +++ b/fs/xfs/xfs_attr_sf.h @@ -33,15 +33,16 @@ struct xfs_inode; typedef struct xfs_attr_shortform { struct xfs_attr_sf_hdr { /* constant-structure header block */ __be16 totsize; /* total bytes in shortform list */ - __u8 count; /* count of active entries */ - } hdr; + __u8 count; /* count of active entries */ + __u8 pad; + } __ondisk hdr; struct xfs_attr_sf_entry { __uint8_t namelen; /* actual length of name (no NULL) */ __uint8_t valuelen; /* actual length of value (no NULL) */ __uint8_t flags; /* flags bits (see xfs_attr_leaf.h) */ __uint8_t nameval[1]; /* name & value bytes concatenated */ - } list[1]; /* variable sized array */ -} xfs_attr_shortform_t; + } __ondisk list[1]; /* variable sized array */ +} __ondisk xfs_attr_shortform_t; typedef struct xfs_attr_sf_hdr xfs_attr_sf_hdr_t; typedef struct xfs_attr_sf_entry xfs_attr_sf_entry_t; diff --git a/fs/xfs/xfs_bmap_btree.h b/fs/xfs/xfs_bmap_btree.h index cd0d4b4..3c749c8 100644 --- a/fs/xfs/xfs_bmap_btree.h +++ b/fs/xfs/xfs_bmap_btree.h @@ -31,7 +31,7 @@ struct xfs_inode; typedef struct xfs_bmdr_block { __be16 bb_level; /* 0 is a leaf */ __be16 bb_numrecs; /* current # of data records */ -} xfs_bmdr_block_t; +} __ondisk xfs_bmdr_block_t; /* * Bmap btree record and extent descriptor. 
@@ -51,11 +51,11 @@ typedef struct xfs_bmdr_block { typedef struct xfs_bmbt_rec_32 { __uint32_t l0, l1, l2, l3; -} xfs_bmbt_rec_32_t; +} __ondisk xfs_bmbt_rec_32_t; typedef struct xfs_bmbt_rec_64 { __be64 l0, l1; -} xfs_bmbt_rec_64_t; +} __ondisk xfs_bmbt_rec_64_t; typedef __uint64_t xfs_bmbt_rec_base_t; /* use this for casts */ typedef xfs_bmbt_rec_64_t xfs_bmbt_rec_t, xfs_bmdr_rec_t; @@ -140,7 +140,7 @@ typedef struct xfs_bmbt_irec */ typedef struct xfs_bmbt_key { __be64 br_startoff; /* starting file offset */ -} xfs_bmbt_key_t, xfs_bmdr_key_t; +} __ondisk xfs_bmbt_key_t, xfs_bmdr_key_t; /* btree pointer type */ typedef __be64 xfs_bmbt_ptr_t, xfs_bmdr_ptr_t; diff --git a/fs/xfs/xfs_btree.h b/fs/xfs/xfs_btree.h index 7440b78..40ac5b8 100644 --- a/fs/xfs/xfs_btree.h +++ b/fs/xfs/xfs_btree.h @@ -47,7 +47,7 @@ typedef struct xfs_btree_sblock { __be16 bb_numrecs; /* current # of data records */ __be32 bb_leftsib; /* left sibling block or NULLAGBLOCK */ __be32 bb_rightsib; /* right sibling block or NULLAGBLOCK */ -} xfs_btree_sblock_t; +} __ondisk xfs_btree_sblock_t; /* * Long form header: bmap btrees. @@ -58,7 +58,7 @@ typedef struct xfs_btree_lblock { __be16 bb_numrecs; /* current # of data records */ __be64 bb_leftsib; /* left sibling block or NULLDFSBNO */ __be64 bb_rightsib; /* right sibling block or NULLDFSBNO */ -} xfs_btree_lblock_t; +} __ondisk xfs_btree_lblock_t; /* * Combined header and structure, used by common code. 
@@ -68,7 +68,7 @@ typedef struct xfs_btree_hdr __be32 bb_magic; /* magic number for block type */ __be16 bb_level; /* 0 is a leaf */ __be16 bb_numrecs; /* current # of data records */ -} xfs_btree_hdr_t; +} __ondisk xfs_btree_hdr_t; typedef struct xfs_btree_block { xfs_btree_hdr_t bb_h; /* header */ @@ -76,13 +76,13 @@ typedef struct xfs_btree_block { struct { __be32 bb_leftsib; __be32 bb_rightsib; - } s; /* short form pointers */ + } __ondisk s; /* short form pointers */ struct { __be64 bb_leftsib; __be64 bb_rightsib; - } l; /* long form pointers */ + } __ondisk l; /* long form pointers */ } bb_u; /* rest */ -} xfs_btree_block_t; +} __ondisk xfs_btree_block_t; /* * For logging record fields. diff --git a/fs/xfs/xfs_da_btree.h b/fs/xfs/xfs_da_btree.h index 7facf86..36901d7 100644 --- a/fs/xfs/xfs_da_btree.h +++ b/fs/xfs/xfs_da_btree.h @@ -45,7 +45,7 @@ typedef struct xfs_da_blkinfo { __be32 back; /* following block in list */ __be16 magic; /* validity check on block */ __be16 pad; /* unused */ -} xfs_da_blkinfo_t; +} __ondisk xfs_da_blkinfo_t; /* * This is the structure of the root and intermediate nodes in the Btree. @@ -63,12 +63,12 @@ typedef struct xfs_da_intnode { xfs_da_blkinfo_t info; /* block type, links, etc. 
*/ __be16 count; /* count of active entries */ __be16 level; /* level above leaves (leaf == 0) */ - } hdr; + } __ondisk hdr; struct xfs_da_node_entry { __be32 hashval; /* hash value for this descendant */ __be32 before; /* Btree block before this key */ - } btree[1]; /* variable sized array of keys */ -} xfs_da_intnode_t; + } __ondisk btree[1]; /* variable sized array of keys */ +} __ondisk xfs_da_intnode_t; typedef struct xfs_da_node_hdr xfs_da_node_hdr_t; typedef struct xfs_da_node_entry xfs_da_node_entry_t; diff --git a/fs/xfs/xfs_dinode.h b/fs/xfs/xfs_dinode.h index c9065ea..9a24755 100644 --- a/fs/xfs/xfs_dinode.h +++ b/fs/xfs/xfs_dinode.h @@ -36,7 +36,7 @@ struct xfs_mount; typedef struct xfs_timestamp { __be32 t_sec; /* timestamp seconds */ __be32 t_nsec; /* timestamp nanoseconds */ -} xfs_timestamp_t; +} __ondisk xfs_timestamp_t; /* * Note: Coordinate changes to this structure with the XFS_DI_* #defines @@ -69,7 +69,7 @@ typedef struct xfs_dinode_core { __be16 di_dmstate; /* DMIG state info */ __be16 di_flags; /* random flags, XFS_DIFLAG_... */ __be32 di_gen; /* generation number */ -} xfs_dinode_core_t; +} __ondisk xfs_dinode_core_t; #define DI_MAX_FLUSH 0xffff @@ -96,7 +96,7 @@ typedef struct xfs_dinode xfs_bmbt_rec_32_t di_abmx[1]; /* extent list */ xfs_attr_shortform_t di_attrsf; /* shortform attribute list */ } di_a; -} xfs_dinode_t; +} __ondisk xfs_dinode_t; /* * The 32 bit link count in the inode theoretically maxes out at UINT_MAX. diff --git a/fs/xfs/xfs_dir2_block.h b/fs/xfs/xfs_dir2_block.h index 10e6896..a85f98d 100644 --- a/fs/xfs/xfs_dir2_block.h +++ b/fs/xfs/xfs_dir2_block.h @@ -45,7 +45,7 @@ struct xfs_trans; typedef struct xfs_dir2_block_tail { __be32 count; /* count of leaf entries */ __be32 stale; /* count of stale lf entries */ -} xfs_dir2_block_tail_t; +} __ondisk xfs_dir2_block_tail_t; /* * Generic single-block structure, for xfs_db. 
@@ -55,7 +55,7 @@ typedef struct xfs_dir2_block { xfs_dir2_data_union_t u[1]; xfs_dir2_leaf_entry_t leaf[1]; xfs_dir2_block_tail_t tail; -} xfs_dir2_block_t; +} __ondisk xfs_dir2_block_t; /* * Pointer to the leaf header embedded in a data block (1-block format) diff --git a/fs/xfs/xfs_dir2_data.h b/fs/xfs/xfs_dir2_data.h index b816e02..e7ae1db 100644 --- a/fs/xfs/xfs_dir2_data.h +++ b/fs/xfs/xfs_dir2_data.h @@ -67,7 +67,7 @@ struct xfs_trans; typedef struct xfs_dir2_data_free { __be16 offset; /* start of freespace */ __be16 length; /* length of freespace */ -} xfs_dir2_data_free_t; +} __ondisk xfs_dir2_data_free_t; /* * Header for the data blocks. @@ -78,7 +78,7 @@ typedef struct xfs_dir2_data_hdr { __be32 magic; /* XFS_DIR2_DATA_MAGIC */ /* or XFS_DIR2_BLOCK_MAGIC */ xfs_dir2_data_free_t bestfree[XFS_DIR2_DATA_FD_COUNT]; -} xfs_dir2_data_hdr_t; +} __ondisk xfs_dir2_data_hdr_t; /* * Active entry in a data block. Aligned to 8 bytes. @@ -90,7 +90,7 @@ typedef struct xfs_dir2_data_entry { __u8 name[1]; /* name bytes, no null */ /* variable offset */ __be16 tag; /* starting offset of us */ -} xfs_dir2_data_entry_t; +} __ondisk xfs_dir2_data_entry_t; /* * Unused entry in a data block. Aligned to 8 bytes. @@ -101,7 +101,7 @@ typedef struct xfs_dir2_data_unused { __be16 length; /* total free length */ /* variable offset */ __be16 tag; /* starting offset of us */ -} xfs_dir2_data_unused_t; +} __ondisk xfs_dir2_data_unused_t; typedef union { xfs_dir2_data_entry_t entry; @@ -114,7 +114,7 @@ typedef union { typedef struct xfs_dir2_data { xfs_dir2_data_hdr_t hdr; /* magic XFS_DIR2_DATA_MAGIC */ xfs_dir2_data_union_t u[1]; -} xfs_dir2_data_t; +} __ondisk xfs_dir2_data_t; /* * Macros. 
diff --git a/fs/xfs/xfs_dir2_leaf.h b/fs/xfs/xfs_dir2_leaf.h index 6c9539f..01a6091 100644 --- a/fs/xfs/xfs_dir2_leaf.h +++ b/fs/xfs/xfs_dir2_leaf.h @@ -48,7 +48,7 @@ typedef struct xfs_dir2_leaf_hdr { xfs_da_blkinfo_t info; /* header for da routines */ __be16 count; /* count of entries */ __be16 stale; /* count of stale entries */ -} xfs_dir2_leaf_hdr_t; +} __ondisk xfs_dir2_leaf_hdr_t; /* * Leaf block entry. @@ -56,14 +56,14 @@ typedef struct xfs_dir2_leaf_hdr { typedef struct xfs_dir2_leaf_entry { __be32 hashval; /* hash value of name */ __be32 address; /* address of data entry */ -} xfs_dir2_leaf_entry_t; +} __ondisk xfs_dir2_leaf_entry_t; /* * Leaf block tail. */ typedef struct xfs_dir2_leaf_tail { __be32 bestcount; -} xfs_dir2_leaf_tail_t; +} __ondisk xfs_dir2_leaf_tail_t; /* * Leaf block. @@ -75,8 +75,9 @@ typedef struct xfs_dir2_leaf { xfs_dir2_leaf_entry_t ents[1]; /* entries */ /* ... */ xfs_dir2_data_off_t bests[1]; /* best free counts */ + __u8 pad[2]; xfs_dir2_leaf_tail_t tail; /* leaf tail */ -} xfs_dir2_leaf_t; +} __ondisk xfs_dir2_leaf_t; /* * DB blocks here are logical directory block numbers, not filesystem blocks. 
diff --git a/fs/xfs/xfs_dir2_node.h b/fs/xfs/xfs_dir2_node.h index dde72db..78ab236 100644 --- a/fs/xfs/xfs_dir2_node.h +++ b/fs/xfs/xfs_dir2_node.h @@ -45,13 +45,14 @@ typedef struct xfs_dir2_free_hdr { __be32 firstdb; /* db of first entry */ __be32 nvalid; /* count of valid entries */ __be32 nused; /* count of used entries */ -} xfs_dir2_free_hdr_t; +} __ondisk xfs_dir2_free_hdr_t; typedef struct xfs_dir2_free { xfs_dir2_free_hdr_t hdr; /* block header */ __be16 bests[1]; /* best free counts */ /* unused entries are -1 */ -} xfs_dir2_free_t; + __u8 pad[2]; +} __ondisk xfs_dir2_free_t; #define XFS_DIR2_MAX_FREE_BESTS(mp) \ (((mp)->m_dirblksize - (uint)sizeof(xfs_dir2_free_hdr_t)) / \ diff --git a/fs/xfs/xfs_dir2_sf.h b/fs/xfs/xfs_dir2_sf.h index 005629d..5229bf2 100644 --- a/fs/xfs/xfs_dir2_sf.h +++ b/fs/xfs/xfs_dir2_sf.h @@ -43,26 +43,26 @@ struct xfs_trans; /* * Inode number stored as 8 8-bit values. */ -typedef struct { __uint8_t i[8]; } xfs_dir2_ino8_t; +typedef struct { __uint8_t i[8]; } __ondisk xfs_dir2_ino8_t; /* * Inode number stored as 4 8-bit values. * Works a lot of the time, when all the inode numbers in a directory * fit in 32 bits. */ -typedef struct { __uint8_t i[4]; } xfs_dir2_ino4_t; +typedef struct { __uint8_t i[4]; } __ondisk xfs_dir2_ino4_t; typedef union { xfs_dir2_ino8_t i8; xfs_dir2_ino4_t i4; -} xfs_dir2_inou_t; +} __ondisk xfs_dir2_inou_t; #define XFS_DIR2_MAX_SHORT_INUM ((xfs_ino_t)0xffffffffULL) /* * Normalized offset (in a data block) of the entry, really xfs_dir2_data_off_t. * Only need 16 bits, this is the byte offset into the single block form. 
*/ -typedef struct { __uint8_t i[2]; } xfs_dir2_sf_off_t; +typedef struct { __uint8_t i[2]; } __ondisk xfs_dir2_sf_off_t; /* * The parent directory has a dedicated field, and the self-pointer must @@ -76,19 +76,19 @@ typedef struct xfs_dir2_sf_hdr { __uint8_t count; /* count of entries */ __uint8_t i8count; /* count of 8-byte inode #s */ xfs_dir2_inou_t parent; /* parent dir inode number */ -} xfs_dir2_sf_hdr_t; +} __ondisk xfs_dir2_sf_hdr_t; typedef struct xfs_dir2_sf_entry { __uint8_t namelen; /* actual name length */ xfs_dir2_sf_off_t offset; /* saved offset */ __uint8_t name[1]; /* name, variable size */ xfs_dir2_inou_t inumber; /* inode number, var. offset */ -} xfs_dir2_sf_entry_t; +} __ondisk xfs_dir2_sf_entry_t; typedef struct xfs_dir2_sf { xfs_dir2_sf_hdr_t hdr; /* shortform header */ xfs_dir2_sf_entry_t list[1]; /* shortform entries */ -} xfs_dir2_sf_t; +} __ondisk xfs_dir2_sf_t; static inline int xfs_dir2_sf_hdr_size(int i8count) { diff --git a/fs/xfs/xfs_ialloc_btree.h b/fs/xfs/xfs_ialloc_btree.h index 8efc4a5..036cdd0 100644 --- a/fs/xfs/xfs_ialloc_btree.h +++ b/fs/xfs/xfs_ialloc_btree.h @@ -51,7 +51,7 @@ typedef struct xfs_inobt_rec { __be32 ir_startino; /* starting inode number */ __be32 ir_freecount; /* count of free inodes (set bits) */ __be64 ir_free; /* free inode mask */ -} xfs_inobt_rec_t; +} __ondisk xfs_inobt_rec_t; typedef struct xfs_inobt_rec_incore { xfs_agino_t ir_startino; /* starting inode number */ @@ -65,7 +65,7 @@ typedef struct xfs_inobt_rec_incore { */ typedef struct xfs_inobt_key { __be32 ir_startino; /* starting inode number */ -} xfs_inobt_key_t; +} __ondisk xfs_inobt_key_t; /* btree pointer type */ typedef __be32 xfs_inobt_ptr_t; diff --git a/fs/xfs/xfs_log_priv.h b/fs/xfs/xfs_log_priv.h index 01c63db..81c5a2d 100644 --- a/fs/xfs/xfs_log_priv.h +++ b/fs/xfs/xfs_log_priv.h @@ -270,7 +270,7 @@ typedef struct xlog_op_header { __u8 oh_clientid; /* who sent me this : 1 b */ __u8 oh_flags; /* : 1 b */ __u16 oh_res2; /* 32 bit align : 
2 b */ -} xlog_op_header_t; +} __ondisk xlog_op_header_t; /* valid values for h_fmt */ @@ -301,12 +301,12 @@ typedef struct xlog_rec_header { __be32 h_fmt; /* format of log record : 4 */ uuid_t h_fs_uuid; /* uuid of FS : 16 */ __be32 h_size; /* iclog size : 4 */ -} xlog_rec_header_t; +} __ondisk xlog_rec_header_t; typedef struct xlog_rec_ext_header { __be32 xh_cycle; /* write cycle of log : 4 */ __be32 xh_cycle_data[XLOG_HEADER_CYCLE_SIZE / BBSIZE]; /* : 256 */ -} xlog_rec_ext_header_t; +} __ondisk xlog_rec_ext_header_t; #ifdef __KERNEL__ /* diff --git a/fs/xfs/xfs_quota.h b/fs/xfs/xfs_quota.h index 12c4ec7..f5b9c30 100644 --- a/fs/xfs/xfs_quota.h +++ b/fs/xfs/xfs_quota.h @@ -67,7 +67,7 @@ typedef struct xfs_disk_dquot { __be32 d_rtbtimer; /* similar to above; for RT disk blocks */ __be16 d_rtbwarns; /* warnings issued wrt RT disk blocks */ __be16 d_pad; -} xfs_disk_dquot_t; +} __ondisk xfs_disk_dquot_t; /* * This is what goes on disk. This is separated from the xfs_disk_dquot because @@ -76,7 +76,7 @@ typedef struct xfs_disk_dquot { typedef struct xfs_dqblk { xfs_disk_dquot_t dd_diskdq; /* portion that lives incore as well */ char dd_fill[32]; /* filling for posterity */ -} xfs_dqblk_t; +} __ondisk xfs_dqblk_t; /* * flags for q_flags field in the dquot. diff --git a/fs/xfs/xfs_sb.h b/fs/xfs/xfs_sb.h index b1a83f8..beee35e 100644 --- a/fs/xfs/xfs_sb.h +++ b/fs/xfs/xfs_sb.h @@ -226,7 +226,7 @@ typedef struct xfs_dsb { __be32 sb_bad_features2; /* must be padded to 64 bit alignment */ -} xfs_dsb_t; +} __ondisk xfs_dsb_t; /* * Sequence number values for the fields. 
diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h
index 0804207..2fbe465 100644
--- a/fs/xfs/xfs_trans.h
+++ b/fs/xfs/xfs_trans.h
@@ -32,7 +32,7 @@ typedef struct xfs_trans_header {
 	uint		th_type;	/* transaction type */
 	__int32_t	th_tid;		/* transaction id (unused) */
 	uint		th_num_items;	/* num items logged by trans */
-} xfs_trans_header_t;
+} __ondisk xfs_trans_header_t;
 
 #define XFS_TRANS_HEADER_MAGIC	0x5452414e	/* TRAN */
--
1.5.4.rc2.85.g9de45-dirty

From owner-xfs@oss.sgi.com Mon Mar 17 17:56:37 2008
Date: Tue, 18 Mar 2008 08:56:53 +0800
From: Herbert Xu
To: Christian Kujau, Chr, Milan Broz, David Chinner, LKML,
 xfs@oss.sgi.com, dm-devel@redhat.com, dm-crypt@saout.de,
 Ritesh Raj Sarraf
Subject: Re: [dm-crypt] INFO: task mount:11202 blocked for more than 120 seconds
Message-ID: <20080318005653.GA14575@gondor.apana.org.au>
In-Reply-To: <20080317173609.GD29322@agk.fab.redhat.com>

On Mon, Mar 17, 2008 at 05:36:09PM +0000, Alasdair G Kergon wrote:
> Latest version for everyone to try:
>
> From: Milan Broz
>
> Fix regression in dm-crypt introduced in commit
> 3a7f6c990ad04e6f576a159876c602d14d6f7fef (dm crypt: use async crypto).
>
> If write requests need to be split into pieces, the code must not
> process them in parallel because the crypto context cannot be shared.
> So there can be parallel crypto operations on one part of the write,
> but only one write bio can be processed at a time.

Could you explain this part please? Crypto tfm objects are meant to be
reentrant, synchronous or not.
Cheers, -- Visit Openswan at http://www.openswan.org/ Email: Herbert Xu ~{PmV>HI~} Home Page: http://gondor.apana.org.au/~herbert/ PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt From owner-xfs@oss.sgi.com Mon Mar 17 18:07:30 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 18:07:38 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2I17TVB014800 for ; Mon, 17 Mar 2008 18:07:30 -0700 X-ASG-Debug-ID: 1205802476-113e00400000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from knox.decisionsoft.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 5E0CC6B2FCA for ; Mon, 17 Mar 2008 18:07:56 -0700 (PDT) Received: from knox.decisionsoft.com (knox-be.decisionsoft.com [87.194.172.100]) by cuda.sgi.com with ESMTP id 2XS06KAh5nG4A4FB for ; Mon, 17 Mar 2008 18:07:56 -0700 (PDT) Received: from [82.152.70.89] (helo=[192.168.1.67]) by knox.decisionsoft.com with esmtpsa (TLS-1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.63) (envelope-from ) id 1JbQIO-00068c-2c; Tue, 18 Mar 2008 01:07:22 +0000 Message-ID: <47DF15BD.8020208@decisionsoft.co.uk> Date: Tue, 18 Mar 2008 01:07:09 +0000 From: Stuart Rowan Reply-To: strr-debian@decisionsoft.co.uk User-Agent: Thunderbird 2.0.0.12 (Windows/20080213) MIME-Version: 1.0 To: Timothy Shimmin CC: xfs@oss.sgi.com References: <47DEFE5E.4030703@decisionsoft.co.uk> <47DF0C9D.1010602@sgi.com> In-Reply-To: <47DF0C9D.1010602@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-SA-Exim-Connect-IP: 82.152.70.89 X-SA-Exim-Mail-From: strr-debian@decisionsoft.co.uk X-ASG-Orig-Subj: Re: 2.6.24.3 nfs server on xfs keeps producing nfsd: non-standard errno: -117 Subject: 
Re: 2.6.24.3 nfs server on xfs keeps producing nfsd: non-standard errno: -117 X-SA-Exim-Version: 4.2.1 (built Tue, 09 Jan 2007 17:23:22 +0000) X-SA-Exim-Scanned: Yes (on knox.decisionsoft.com) X-SystemFilter-new-T: not expanding X-SystemFilter-new-S: not expanding X-SystemFilter-new-F: not expanding X-Barracuda-Connect: knox-be.decisionsoft.com[87.194.172.100] X-Barracuda-Start-Time: 1205802482 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45139 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14898 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: strr-debian@decisionsoft.co.uk Precedence: bulk X-list: xfs Hi Tim, Timothy Shimmin wrote: > Hi Stuart, > > Stuart Rowan wrote: >> Hi, >> >> Firstly thanks for the great filesystem and apologies if this ends up >> being NFS rather than XFS being weird! I'm not subscribed so please do >> keep me CC'd. >> >> I have *millions* of lines of (>200k per minute according to syslog): >> nfsd: non-standard errno: -117 >> being sent out of dmesg >> >> Now errno 117 is >> #define EUCLEAN 117 /* Structure needs cleaning */ >> which seems to be only used from a quick grep by XFS and JFFS and smbfs. >> >> > In XFS we mapped EFSCORRUPTED to EUCLEAN as EFSCORRUPTED > didn't exist on Linux. > However, normally if this error is encountered in XFS then > we output an appropriate msg to the syslog. > Our default error level is 3 and most reports are rated at 1 > so should show up I would have thought. 
>
> --Tim
>

Thanks for the swift reply -- reading previous mailing list posts, that was my expectation too! I've attached the relevant section of /var/log/kern.log inline below -- there is no error message! Is there another way of inspecting what (if anything) a given XFS file-system thinks is wrong with it?

Thanks,
Stu.

Mar 17 23:01:50 evenlode kernel: SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
Mar 17 23:01:50 evenlode kernel: SGI XFS Quota Management subsystem
Mar 17 23:01:50 evenlode kernel: Filesystem "dm-0": Disabling barriers, not supported by the underlying device
Mar 17 23:01:50 evenlode kernel: XFS mounting filesystem dm-0
Mar 17 23:01:50 evenlode kernel: Ending clean XFS mount for filesystem: dm-0
Mar 17 23:01:50 evenlode kernel: Filesystem "dm-1": Disabling barriers, not supported by the underlying device
Mar 17 23:01:50 evenlode kernel: XFS mounting filesystem dm-1
Mar 17 23:01:50 evenlode kernel: Ending clean XFS mount for filesystem: dm-1
Mar 17 23:01:50 evenlode kernel: e1000: eth0: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Mar 17 23:01:50 evenlode kernel: RPC: Registered udp transport module.
Mar 17 23:01:50 evenlode kernel: RPC: Registered tcp transport module.
Mar 17 23:01:50 evenlode kernel: NET: Registered protocol family 10
Mar 17 23:01:50 evenlode kernel: lo: Disabled Privacy Extensions
Mar 17 23:01:50 evenlode kernel: IA-32 Microcode Update Driver: v1.14a
Mar 17 23:01:51 evenlode kernel: Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
Mar 17 23:01:51 evenlode kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Mar 17 23:01:51 evenlode kernel: NFSD: starting 90-second grace period
Mar 17 23:01:57 evenlode kernel: eth0: no IPv6 routers present
Mar 17 23:03:04 evenlode kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x000C): Initialize started:unit=0, subunit=0.
Mar 17 23:03:04 evenlode kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x000C): Initialize started:unit=0, subunit=2.
Mar 17 23:04:40 evenlode kernel: nfsd: non-standard errno: -117
Mar 17 23:05:11 evenlode last message repeated 93970 times
Mar 17 23:06:12 evenlode last message repeated 188363 times

mount:
/dev/mapper/data-home on /home type xfs (rw,logbufs=8,usrquota)
/dev/mapper/data-archive on /home/archive type xfs (rw,logbufs=8)

evenlode:~# xfs_info /home
meta-data=/dev/data/home       isize=256    agcount=32, agsize=7372784 blks
         =                     sectsz=512   attr=0
data     =                     bsize=4096   blocks=235929088, imaxpct=25
         =                     sunit=16     swidth=48 blks
naming   =version 2            bsize=4096
log      =internal             bsize=4096   blocks=32768, version=2
         =                     sectsz=512   sunit=16 blks, lazy-count=0
realtime =none                 extsz=65536  blocks=0, rtextents=0

evenlode:~# xfs_info /home/archive
meta-data=/dev/data/archive    isize=256    agcount=16, agsize=6553600 blks
         =                     sectsz=512   attr=0
data     =                     bsize=4096   blocks=104857600, imaxpct=25
         =                     sunit=16     swidth=48 blks
naming   =version 2            bsize=4096
log      =internal             bsize=4096   blocks=32768, version=2
         =                     sectsz=512   sunit=16 blks, lazy-count=0
realtime =none                 extsz=65536  blocks=0, rtextents=0

From owner-xfs@oss.sgi.com Mon Mar 17 18:46:04 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 18:46:11 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2I1k0GO025059 for ; Mon, 17 Mar 2008 18:46:04 -0700 X-ASG-Debug-ID: 1205804790-0c3e00c70000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from opera.rednote.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 2B5A1FBE412 for ; Mon, 17 Mar 2008 18:46:30 -0700 (PDT) Received: from opera.rednote.net (opera.rednote.net [74.53.93.34]) by
cuda.sgi.com with ESMTP id nRmPBTQFWtlBNVJV for ; Mon, 17 Mar 2008 18:46:30 -0700 (PDT) Received: from jdc.jasonjgw.net ([IPv6:::1]) by opera.rednote.net (8.14.1/8.14.1) with ESMTP id m2I1CLGt017826 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL) for ; Tue, 18 Mar 2008 01:12:27 GMT Received: from jdc.jasonjgw.net (ip6-localhost [IPv6:::1]) by jdc.jasonjgw.net (8.14.2/8.14.2/Debian-3) with ESMTP id m2I1CAew012751 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT) for ; Tue, 18 Mar 2008 12:12:10 +1100 Received: (from jason@localhost) by jdc.jasonjgw.net (8.14.2/8.14.2/Submit) id m2I1CAE5012750 for xfs@oss.sgi.com; Tue, 18 Mar 2008 12:12:10 +1100 Date: Tue, 18 Mar 2008 12:12:10 +1100 From: Jason White To: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: XFS check script on boot? Subject: Re: XFS check script on boot? Message-ID: <20080318011210.GA12727@jdc.jasonjgw.net> Mail-Followup-To: xfs@oss.sgi.com References: <20080317134847.GJ14510@sv.lnf.it> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080317134847.GJ14510@sv.lnf.it> User-Agent: Mutt/1.5.17+20080114 (2008-01-14) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Scanned: ClamAV 0.92.1/6277/Mon Mar 17 18:08:59 2008 on opera.rednote.net X-Virus-Status: Clean X-Barracuda-Connect: opera.rednote.net[74.53.93.34] X-Barracuda-Start-Time: 1205804793 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45142 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 14899 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: 
xfs-bounce@oss.sgi.com X-original-sender: jason@jasonjgw.net Precedence: bulk X-list: xfs

On Mon, Mar 17, 2008 at 02:48:47PM +0100, Marco Gaiarin wrote:
> This was the first time I hit an XFS filesystem corruption, so I'm
> asking why there seems to be no /etc/init.d/checkfs.sh-like script that
> checks and repairs XFS filesystems at boot.

Because XFS is a journaling file system which can, and should, recover automatically from an unclean shutdown such as a system crash or power failure. If you get XFS corruption after a reboot, then you have a hardware or software problem that shouldn't be happening and which ought to be investigated and fixed.

The purpose of a journaling file system is to make time-consuming fsck and similar checks unnecessary after a reboot.

From owner-xfs@oss.sgi.com Mon Mar 17 20:34:32 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 20:34:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2I3YT9W016715 for ; Mon, 17 Mar 2008 20:34:32 -0700 X-ASG-Debug-ID: 1205811299-49db018b0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 10E2E6B3855 for ; Mon, 17 Mar 2008 20:34:59 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id GmYGdH3LBZT1jxuO for ; Mon, 17 Mar 2008 20:34:59 -0700 (PDT) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 459C018DA0AE3; Mon, 17 Mar 2008 22:34:58 -0500 (CDT) Message-ID: <47DF3861.6020308@sandeen.net> Date: Mon, 17 Mar
2008 22:34:57 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: "Josef 'Jeff' Sipek" CC: xfs@oss.sgi.com, tes@sgi.com, dgc@sgi.com X-ASG-Orig-Subj: Re: [RFC][PATCH 1/1] XFS: annotate all on-disk structures with __ondisk Subject: Re: [RFC][PATCH 1/1] XFS: annotate all on-disk structures with __ondisk References: <20080317202853.GC16500@josefsipek.net> <1205800745-9217-1-git-send-email-jeffpc@josefsipek.net> In-Reply-To: <1205800745-9217-1-git-send-email-jeffpc@josefsipek.net> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1205811302 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45149 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14900 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Josef 'Jeff' Sipek wrote: > Currently, the annotation just forces the structures to be packed, and > 4-byte aligned. Semantic nitpick: in my definition of "annotation" this is more than just an annotation. An "__ondisk" annotation, to me, would allow something like sparse to verify properly laid out on-disk structures, but would not affect the actual runtime code - I think that would be quite useful. However, this change actually impacts the bytecode; it is a functional change. 
So I really do understand what you're trying to do, despite my protestations. If there is some magical instruction to gcc which:

a) leaves all current "non-broken" ABIs and gcc implementations' bytecode untouched (or at the very least, minimally/trivially modified), and

b) fixes all possible future ABIs so they will always have things perfectly and properly aligned, again w/o undue molestation of the resulting bytecode

then I could probably be convinced. :) But this seems like a tall order, and would require much scrutiny.

I'm just very shy of a sweeping change like this, which has a material impact on the most common architectures, and does not actually provide, as far as I can see, any benefit to them - only risk.

And for now I'll shut up and let the sgi guys chime in eventually. :)

-Eric

> Signed-off-by: Josef 'Jeff' Sipek
>
> ---
> This is just an RFC, and the alignment needs to be verified against the
> offsets within the pages read from disk, and more xfsqa runs on various
> architectures are needed. (I don't want to be responsible for something like
> the bitops regression on ppc!)
>
> The .text segment shrinks on x86 and s390x, but grows in ia64 (3776 bytes ==
> 0.3%).
>
>     text    data   bss      dec     hex  filename
>   542054    3171  3084   548309   85dd5  xfs-x86-original.ko
>   542026    3171  3084   548281   85db9  xfs-x86-packed-aligned4.ko
>  1244057   70858  2480  1317395  141a13  xfs-ia64-original.ko
>  1247833   70858  2480  1321171  1428d3  xfs-ia64-packed-aligned4.ko
>   679901   19374  3112   702387   ab7b3  xfs-s390x-original.ko
>   679781   19374  3112   702267   ab73b  xfs-s390x-packed-aligned4.ko
>
> The approximate number of instructions effectively stays the same on x86
> (goes up by 2), s390x gets smaller (by 12 instructions), but ia64 bloats by
> 708 instructions (0.34%).
>
> $ for x in *.ko; do objdump -d $x > `basename $x .ko`.dis ; done
> $ wc -l *.dis
>  141494 xfs-x86-original.dis
>  141496 xfs-x86-packed-aligned4.dis
>  208514 xfs-ia64-original.dis
>  209222 xfs-ia64-packed-aligned4.dis
>  121544 xfs-s390x-original.dis
>  121532 xfs-s390x-packed-aligned4.dis
>
> I could try to compile things on a sparc64, mips, and x86_64, but that's for
> another day - and depending on where this thread will lead.
>
> The patch keeps xfsqa happy on x86 - well, it dies in the 100-range, but I
> haven't had the time to check if that happens without this patch. Someone
> (not it!) should nurse xfsqa back to health :)
>
> Jeff.

From owner-xfs@oss.sgi.com Mon Mar 17 21:07:34 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 21:07:44 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2I47VeJ020462 for ; Mon, 17 Mar 2008 21:07:34 -0700 X-ASG-Debug-ID: 1205813283-49ed034a0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.redhat.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 30EFC6B3D97 for ; Mon, 17 Mar 2008 21:08:03 -0700 (PDT) Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by cuda.sgi.com with ESMTP id FfxdFCCTaILZP2QH for ; Mon, 17 Mar 2008 21:08:03 -0700 (PDT) Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.13.8/8.13.8) with ESMTP id m2I47vbY029471; Tue, 18 Mar 2008 00:07:57 -0400 Received: from pobox.stuttgart.redhat.com (pobox.stuttgart.redhat.com [172.16.2.10]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id m2I47tcD009564; Tue, 18 Mar 2008 00:07:56 -0400 Received: from [10.32.4.6] (vpn-4-6.str.redhat.com [10.32.4.6])
by pobox.stuttgart.redhat.com (8.13.1/8.13.1) with ESMTP id m2I47r3B019782; Tue, 18 Mar 2008 00:07:54 -0400 Message-ID: <47DF400F.5080606@redhat.com> Date: Tue, 18 Mar 2008 05:07:43 +0100 From: Milan Broz User-Agent: Thunderbird 2.0.0.12 (X11/20080226) MIME-Version: 1.0 To: Herbert Xu CC: Christian Kujau , Chr , David Chinner , LKML , xfs@oss.sgi.com, dm-devel@redhat.com, dm-crypt@saout.de, Ritesh Raj Sarraf X-ASG-Orig-Subj: Re: [dm-crypt] INFO: task mount:11202 blocked for more than 120 seconds Subject: Re: [dm-crypt] INFO: task mount:11202 blocked for more than 120 seconds References: <200803150108.04008.chunkeey@web.de> <200803151432.11125.chunkeey@web.de> <200803152234.53199.chunkeey@web.de> <20080317173609.GD29322@agk.fab.redhat.com> <20080318005653.GA14575@gondor.apana.org.au> In-Reply-To: <20080318005653.GA14575@gondor.apana.org.au> X-Enigmail-Version: 0.95.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Scanned-By: MIMEDefang 2.58 on 172.16.52.254 X-Barracuda-Connect: mx1.redhat.com[66.187.233.31] X-Barracuda-Start-Time: 1205813284 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45151 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14901 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mbroz@redhat.com Precedence: bulk X-list: xfs Herbert Xu wrote: > On Mon, Mar 17, 2008 at 05:36:09PM +0000, Alasdair G Kergon wrote: >> Latest version for everyone to try: >> >> From: Milan Broz >> >> Fix 
regression in dm-crypt introduced in commit
>> 3a7f6c990ad04e6f576a159876c602d14d6f7fef
>> (dm crypt: use async crypto).
>>
>> If write requests need to be split into pieces, the code must not
>> process them in parallel because the crypto context cannot be shared.
>> So there can be parallel crypto operations on one part of the write,
>> but only one write bio can be processed at a time.
>
> Could you explain this part please? Crypto tfm objects are meant
> to be reentrant, synchronous or not.

Ah, sorry - I meant the dm-crypt convert context (which includes the crypto context). The context is reentrant with respect to crypto operations.

But sometimes we also need to split write bios - not only because of low memory, but also because the memory layout of a cloned bio can differ from the original and we must not violate hardware restrictions. XFS in particular generates such highly optimized bio requests - that's why it uncovers so many dm-crypt problems ;-)

See the problematic dm-crypt bio write path:

	while (remaining) {
		clone = crypt_alloc_buffer(io, remaining);
		...
		io->ctx.bio_out = clone;
		io->ctx.idx_out = 0;
		remaining -= clone->bi_size;
		...
		/* fires sync or (multiple) async crypto operations */
		r = crypt_convert(cc, &io->ctx);
		if (atomic_dec_and_test(&io->ctx.pending))
			/* sync mode, submit clone directly */
		...
		if (unlikely(remaining))
			congestion_wait(WRITE, HZ/100);
	}

In the async crypto completion callback (because the async callback cannot call generic_make_request directly from its own context) this is called:

	struct convert_context *ctx = async_req->data;
	...
	if (!atomic_dec_and_test(&ctx->pending))
		return;
	...
	INIT_WORK(&io->work, kcryptd_io);
	queue_work(cc->io_queue, &io->work);
	...

and later, from the io thread:

	struct bio *clone = io->ctx.bio_out;
	generic_make_request(clone);

The problems:

1) we cannot use the io->work struct in parallel
2) io->ctx.pending is shared here between multiple sub-bio clones
...
(There were no problems in sync crypto mode.
and dm-crypt io struct is already allocated from mempool in crypt_map allocation, so changing this to per cloned sub-bio allocation can cause new problems in low-memory situations, not good idea change it in this development phase...) Milan -- mbroz@redhat.com From owner-xfs@oss.sgi.com Mon Mar 17 21:08:45 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 21:08:53 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2I48eY8020879 for ; Mon, 17 Mar 2008 21:08:43 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA08054; Tue, 18 Mar 2008 15:09:07 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2I495LF101095358; Tue, 18 Mar 2008 15:09:05 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2I493w9101078641; Tue, 18 Mar 2008 15:09:03 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 18 Mar 2008 15:09:03 +1100 From: David Chinner To: Eric Sandeen Cc: "Josef 'Jeff' Sipek" , xfs@oss.sgi.com, tes@sgi.com, dgc@sgi.com Subject: Re: [RFC][PATCH 1/1] XFS: annotate all on-disk structures with __ondisk Message-ID: <20080318040903.GU155407@sgi.com> References: <20080317202853.GC16500@josefsipek.net> <1205800745-9217-1-git-send-email-jeffpc@josefsipek.net> <47DF3861.6020308@sandeen.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47DF3861.6020308@sandeen.net> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: 
ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14902 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

On Mon, Mar 17, 2008 at 10:34:57PM -0500, Eric Sandeen wrote:
> Josef 'Jeff' Sipek wrote:
> > Currently, the annotation just forces the structures to be packed, and
> > 4-byte aligned.
>
> Semantic nitpick: in my definition of "annotation" this is more than
> just an annotation.
>
> An "__ondisk" annotation, to me, would allow something like sparse to
> verify properly laid out on-disk structures, but would not affect the
> actual runtime code - I think that would be quite useful. However, this
> change actually impacts the bytecode; it is a functional change.

Yup - this isn't "annotation"....

> So I really do understand what you're trying to do, despite my
> protestations. If there is some magical instruction to gcc which:
>
> a) leaves all current "non-broken" ABIs and gcc implementations'
> bytecode untouched (or at the very least, minimally/trivially modified), and
>
> b) fixes all possible future ABIs so they will always have things
> perfectly and properly aligned, again w/o undue molestation of the
> resulting bytecode
>
> then I could probably be convinced. :) But this seems like a tall
> order, and would require much scrutiny.
>
> I'm just very shy of a sweeping change like this, which has a material
> impact on the most common architectures, and does not actually provide,
> as far as I can see, any benefit to them - only risk.
>
> And for now I'll shut up and let the sgi guys chime in eventually. :)

I think you iterated my concerns quite well, Eric.

The thing I want to see for any sort of change like this is output of all the structures and their alignment before the change, and their alignment after the change. On all supported arches. 'pahole' is the tool you used for that, wasn't it, Eric?
The only arch I would expect to see a change in the structures is ARM; if there's anything other than that, then there's something wrong. This is going to require a lot of validation to ensure that it is correct.....

Not to mention performance testing on ia64 given the added overhead in critical paths.....

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Mon Mar 17 21:30:38 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 21:30:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.7 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_53, J_CHICKENPOX_63 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2I4UYwX023602 for ; Mon, 17 Mar 2008 21:30:37 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA08789; Tue, 18 Mar 2008 15:31:00 +1100 Message-ID: <47DF4584.8000002@sgi.com> Date: Tue, 18 Mar 2008 15:31:00 +1100 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.9 (Macintosh/20071031) MIME-Version: 1.0 To: "Josef 'Jeff' Sipek" CC: Eric Sandeen , xfs-oss Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI References: <47DB4181.7040603@sandeen.net> <20080315041722.GA25621@josefsipek.net> <47DB4F4F.8030407@sandeen.net> <20080315042703.GA28242@josefsipek.net> <47DB51A3.70200@sandeen.net> <20080315045147.GB28242@josefsipek.net> <47DEB930.7020108@sandeen.net> <47DF0044.6080704@sgi.com> <20080317234219.GD16500@josefsipek.net> In-Reply-To: <20080317234219.GD16500@josefsipek.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status:
Clean X-archive-position: 14903 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs

Josef 'Jeff' Sipek wrote:
> On Tue, Mar 18, 2008 at 10:35:32AM +1100, Timothy Shimmin wrote:
>> Eric Sandeen wrote:
>>> Josef 'Jeff' Sipek wrote:
>>>> Josef 'Jeff' Sipek, wondering exactly how passionate one can get about
>>>> structure member alignment :)
>>> Very. ;)
>>> Tossing packed at all the ondisk structures bloats things badly on ia64.
>>> cvs/linux-2.6-xfs> wc -l before.dis
>>> 166688 before.dis
>>> cvs/linux-2.6-xfs> wc -l after.dis
>>> 182294 after.dis
>>> That's +15606 lines.
>>> http://digitalvampire.org/blog/index.php/2006/07/31/why-you-shouldnt-use-__attribute__packed/
>> Interesting.
>> So the problem there is that gcc is doing the wrong thing
>> on some arches (the example being ia64, sparc64).
>
> Actually, it's not doing the wrong thing...
>
> __attribute__((packed)) means:
>
> 1) condense the members of the struct leaving NO padding bytes
>
> 2) do NOT assume the entire structure is aligned on any boundary
>
Okay, I only knew about (1) - 'cause that sounds more like "pack"ing to me.
So you can't assume alignment for the start of the variable without
aligned() if you use packed - OK.

Thanks,
--Tim

> This means that even if you have a member that'd be nicely aligned without
> the packed attribute (see below), the compiler will generate worst-case
> alignment code.
>
> struct foo {
>         u64 a;
> } __attribute__((packed));
>
> You can put struct foo anywhere in memory, and the code accessing ->a will
> _always_ work.
>
> Using __attribute__((packed,aligned(4))) tells it that the structure as a
> whole will be aligned on a 4-byte boundary, but there should be no padding
> bytes inserted.
>
> Josef 'Jeff' Sipek.
> From owner-xfs@oss.sgi.com Mon Mar 17 22:27:50 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 17 Mar 2008 22:27:58 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2I5RjlB029324 for ; Mon, 17 Mar 2008 22:27:50 -0700 X-ASG-Debug-ID: 1205818097-71f7008a0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 26A376B3603 for ; Mon, 17 Mar 2008 22:28:17 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id juLOSQleaiEdCOHV for ; Mon, 17 Mar 2008 22:28:17 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2I5SEsm008446; Tue, 18 Mar 2008 01:28:14 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id BEF431C008A2; Tue, 18 Mar 2008 01:28:16 -0400 (EDT) Date: Tue, 18 Mar 2008 01:28:16 -0400 From: "Josef 'Jeff' Sipek" To: David Chinner Cc: Eric Sandeen , xfs@oss.sgi.com, tes@sgi.com X-ASG-Orig-Subj: Re: [RFC][PATCH 1/1] XFS: annotate all on-disk structures with __ondisk Subject: Re: [RFC][PATCH 1/1] XFS: annotate all on-disk structures with __ondisk Message-ID: <20080318052816.GF16500@josefsipek.net> References: <20080317202853.GC16500@josefsipek.net> <1205800745-9217-1-git-send-email-jeffpc@josefsipek.net> <47DF3861.6020308@sandeen.net> <20080318040903.GU155407@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080318040903.GU155407@sgi.com> User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: 
filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1205818098 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45157 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14904 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs On Tue, Mar 18, 2008 at 03:09:03PM +1100, David Chinner wrote: > On Mon, Mar 17, 2008 at 10:34:57PM -0500, Eric Sandeen wrote: > > Josef 'Jeff' Sipek wrote: > > > Currently, the annotation just forces the structures to be packed, and > > > 4-byte aligned. > > > > Semantic nitpick: in my definition of "annotation" this is more than > > just an annotation. > > > > An "__ondisk" annotation, to me, would allow something like sparse to > > verify properly laid out on-disk structures, but would not affect the > > actual runtime code - I think that would be quite useful. However, this > > change actually impacts the bytecode; it is a functional change. > > Yup - this isn't "annotation".... Ok. I'll redo the comment for the next version of the patch :) ... > I think you iterated my concerns quite well, Eric. > > The thing I want to see for any sort of change like this is output off all > the structures and their alignment before the change and their alignment > after the change. On all supported arches. 'pahole' is the tool you used > for that, wasn't it, Eric? Ok, next one will include pahole output. (And yes, pahole is the tool Eric used.) 
> The only arch I would expect to see a change in the structures is ARM; if
> there's anything other than that, there's something wrong. This is going
> to require a lot of validation to ensure that it is correct.....
>
> Not to mention performance testing on ia64 given the added overhead in
> critical paths.....

Agreed on both counts.

Josef 'Jeff' Sipek.

--
Intellectuals solve problems; geniuses prevent them - Albert Einstein


From: Marco Gaiarin <gaio@sv.lnf.it>
Date: Tue, 18 Mar 2008 11:41:01 +0100
To: Jason White, xfs@oss.sgi.com
Subject: Re: XFS check script on boot?
Message-ID: <20080318104055.GA7903@sv.lnf.it>
In-Reply-To: <20080318011210.GA12727@jdc.jasonjgw.net>
Organization: La Nostra Famiglia - Polo FVG

Jason White wrote:

> The purpose of a journaling file system is to make time-consuming fsck and
> similar checks unnecessary after a reboot.

I know that, and indeed I agree. I'm not talking about a script that checks
the filesystem on every boot, only one that runs occasionally (triggered by
/forcefsck, perhaps?).
The problem arises when you have people with little technical knowledge at a
remote site, and you have to instruct them to boot the kernel with
init=/bin/bash or into the initrd 'emergency' mode, spelling out every
command over the phone. [OK, a remote-controlled KVM switch is on the
shopping list ;]

With such a boot script, if an fsck was forced and corruption was found, the
system would ask for the root password, and the operator would find a shell
with / mounted read-only and all other filesystems unmounted (i.e., the
'normal' way of checking, as for ext3; except that for those filesystems a
'simple' check is always attempted, and the password is asked for only on
serious corruption that needs 'fsck -f'). A good starting point for
attempting a recovery.

--
dott. Marco Gaiarin                          GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia''          http://www.sv.lnf.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)sv.lnf.it  tel +39-0434-842711  fax +39-0434-842797


From: Yann Dupont <Yann.Dupont@univ-nantes.fr>
Date: Tue, 18 Mar 2008 11:46:02 +0100
To: xfs@oss.sgi.com, dm-devel@redhat.com
Cc: evms-devel@lists.sourceforge.net, Arnaud ABELARD
Subject: xfs corruption / memory problem or evms or device mapper involved ??
Message-ID: <47DF9D6A.3000700@univ-nantes.fr>
Organization: CRIUN
Hello.

Yesterday I had to expand an XFS volume. The machine runs a 2.6.18-4-vserver
amd64 kernel. The process never finished, and I had to hard-reboot the
machine, leaving the XFS filesystem in a bad state, as far as I can see
(second part of this mail). Here are the logs I have had since yesterday.

First, the failed expand operation with evms. Adding a SCSI disk on the SAN:

Mar 17 15:33:20 speyburn kernel: Vendor: IFT Model: ER2510FS-6RH Rev: 342R
Mar 17 15:33:20 speyburn kernel: Type: Direct-Access ANSI SCSI revision: 03
Mar 17 15:33:20 speyburn kernel: SCSI device sdh: 215040000 512-byte hdwr sectors (110100 MB)
Mar 17 15:33:20 speyburn kernel: sdh: Write Protect is off
Mar 17 15:33:20 speyburn kernel: sdh: Mode Sense: 8f 00 00 08
Mar 17 15:33:20 speyburn kernel: SCSI device sdh: drive cache: write back
Mar 17 15:33:20 speyburn kernel: SCSI device sdh: 215040000 512-byte hdwr sectors (110100 MB)
Mar 17 15:33:20 speyburn kernel: sdh: Write Protect is off
Mar 17 15:33:20 speyburn kernel: sdh: Mode Sense: 8f 00 00 08
Mar 17 15:33:20 speyburn kernel: SCSI device sdh: drive cache: write back
Mar 17 15:33:20 speyburn kernel: sdh: unknown partition table
Mar 17 15:33:20 speyburn kernel: sd 0:0:1:6: Attached scsi disk sdh
Mar 17 15:33:20 speyburn kernel: sd 0:0:1:6: Attached scsi generic sg13 type 0
----

Then the evms expand operation - the root of the problem.
Mar 17 15:45:23 speyburn kernel: Bad page state in process 'evmsn'
Mar 17 15:45:23 speyburn kernel: [] :dm_mod:dm_suspend+0xe1/0x239
Mar 17 15:45:23 speyburn kernel: [] default_wake_function+0x0/0xe
Mar 17 15:45:23 speyburn kernel: [] :dm_mod:dev_suspend+0xda/0x16f
Mar 17 15:45:23 speyburn kernel: [] :dm_mod:ctl_ioctl+0x213/0x25e
Mar 17 15:45:23 speyburn kernel: [] signal_wake_up+0x1e/0x2d
Mar 17 15:45:23 speyburn kernel: [] do_ioctl+0x55/0x6b
Mar 17 15:45:23 speyburn kernel: [] vfs_ioctl+0x364/0x38b
Mar 17 15:45:23 speyburn kernel: [] sys_futex+0x102/0x124
Mar 17 15:45:23 speyburn kernel: [] sys_ioctl+0x59/0x78
Mar 17 15:45:23 speyburn kernel:
Mar 17 15:45:23 speyburn kernel: RIP: 0010:[] [] put_page+0x13/0x2e
Mar 17 15:45:23 speyburn kernel: RSP: 0000:ffff81027fccd970 EFLAGS: 00010246
Mar 17 15:45:23 speyburn kernel: RAX: 0000000000000000 RBX: 0000000000000001 RCX: ffff810128f10a20
Mar 17 15:45:23 speyburn kernel: RDX: ffff810128f10b40 RSI: ffff810320954970 RDI: ffff81032f9896e0
Mar 17 15:45:23 speyburn kernel: RBP: ffff810128f109c0 R08: 004740e900000c82 R09: ffff810128f109c0
Mar 17 15:45:23 speyburn kernel: R10: ffffffff88195b04 R11: 0000000000000098 R12: ffff810324ad1024
Mar 17 15:45:23 speyburn kernel: R13: ffff810324ad1028 R14: 004740e900000e14 R15: 0000000000000000
Mar 17 15:45:23 speyburn kernel: FS: 0000000000000000(0000) GS:ffffffff80530000(0063) knlGS:00000000f7dafaa0
Mar 17 15:45:23 speyburn kernel: CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
Mar 17 15:45:23 speyburn kernel: CR2: 00000000f7e5aca0 CR3: 00000001eac8e000 CR4: 00000000000006e0
----

Then processes on the machine seem to be affected by 'bad pages':

Mar 17 15:45:23 speyburn kernel: Process imapd (pid: 10575[#49152], threadinfo ffff81027fccc000, task ffff8103172177c0)
Mar 17 15:45:23 speyburn kernel: Stack: ffffffff8819391b ffff810163a762b0 ffff810324ad1000 ffff810324ad1024
Mar 17 15:45:23 speyburn kernel: CPU 0
Mar 17 15:45:23 speyburn kernel: [] :xfs:xfs_lookup+0x6c/0x7d
Mar 17 15:45:23 speyburn kernel: RIP [] put_page+0x13/0x2e
Mar 17 15:45:23 speyburn kernel: R13: ffff810324ad1028 R14: 004740e900000e14 R15: 0000000000000000
Mar 17 15:45:23 speyburn kernel: [] __up_read+0x13/0x8a
Mar 17 15:45:23 speyburn kernel: [] :xfs:xfs_create+0x1f0/0x5dd
Mar 17 15:45:23 speyburn kernel: RSP
Mar 17 15:45:23 speyburn kernel: ----------- [cut here ] --------- [please bite here ] ---------
Mar 17 15:45:23 speyburn kernel: [] :xfs:kmem_zone_zalloc+0x1e/0x2f
Mar 17 15:45:23 speyburn kernel: [] ia32_sysret+0x0/0xa
Mar 17 15:45:23 speyburn kernel:
Mar 17 15:45:23 speyburn kernel:
Mar 17 15:45:23 speyburn kernel: Call Trace:
Mar 17 15:45:23 speyburn kernel: [] may_delete+0x42/0x12b
Mar 17 15:45:23 speyburn kernel: RDX: ffff810128f10b40 RSI: ffff810320954970 RDI: ffff81032f9896e0
Mar 17 15:45:23 speyburn kernel: Process imapd (pid: 7935[#49152], threadinfo ffff810231714000, task ffff8102ae037880)
Mar 17 15:45:23 speyburn kernel: Stack: ffffffff8819391b ffff810163a762b0 ffff810324ad1000 ffff810324ad1024
Mar 17 15:45:23 speyburn kernel: [] mntput_no_expire+0x19/0x8b
Mar 17 15:45:23 speyburn kernel: Pid: 10600, comm: imapd Tainted: G B 2.6.18-4-vserver-amd64 #1
Mar 17 15:45:23 speyburn kernel: Call Trace:
Mar 17 15:45:23 speyburn kernel: [] :xfs:xfs_vn_permission+0x14/0x18
Mar 17 15:45:23 speyburn kernel: [] permission+0xf0/0x155
Mar 17 15:45:23 speyburn kernel: CPU 0
Mar 17 15:45:23 speyburn kernel: [] :xfs:xfs_vn_permission+0x14/0x18
Mar 17 15:45:23 speyburn kernel: [] permission+0xf0/0x155
Mar 17 15:45:23 speyburn kernel:
Mar 17 15:45:23 speyburn kernel: CPU 0
Mar 17 15:45:23 speyburn kernel: [] :xfs:xfs_trans_reserve+0xea/0x1cb
Mar 17 15:45:23 speyburn kernel: RSP
Mar 17 15:45:23 speyburn kernel:
Mar 17 15:45:23 speyburn kernel: [] sys32_lstat64+0x20/0x29
Mar 17 15:45:23 speyburn kernel: <0>Bad page state in process 'imapd'
Mar 17 15:45:23 speyburn kernel: page:ffff81032f9e0498 flags:0x020000000000020c mapping:ffff810321d2fd20 mapcount:0 count:0
Mar 17 15:45:23 speyburn kernel: Trying to fix it up, but a reboot is needed
Mar 17 15:45:23 speyburn kernel: Backtrace:
Mar 17 15:45:23 speyburn kernel:
Mar 17 15:45:23 speyburn kernel: Call Trace:
Mar 17 15:45:23 speyburn kernel: [] bad_page+0x4e/0x78
Mar 17 15:45:23 speyburn kernel: [] free_hot_cold_page+0x73/0xff
Mar 17 15:45:23 speyburn kernel: [] :xfs:xfs_buf_free+0x99/0xdd
Mar 17 15:45:23 speyburn kernel: [] :xfs:xfs_trans_push_ail+0xaf/0x237
Mar 17 15:45:23 speyburn kernel: [] :xfs:xfs_log_reserve+0x443/0x6a6
Mar 17 15:45:23 speyburn kernel: [] :xfs:kmem_zone_zalloc+0x1e/0x2f
Mar 17 15:45:23 speyburn kernel:
Mar 17 15:45:23 speyburn kernel: Bad page state in process 'imapd'
Mar 17 15:45:23 speyburn kernel: Trying to fix it up, but a reboot is needed
Mar 17 15:45:23 speyburn kernel: [] :xfs:xfs_vn_permission+0x14/0x18
Mar 17 15:45:23 speyburn kernel: Unable to handle kernel NULL pointer dereference at 0000000000000000 RIP:
Mar 17 15:45:23 speyburn kernel: [] :xfs:_xfs_buf_find+0x93/0x1f2
Mar 17 15:45:23 speyburn kernel: PGD 1c9561067 PUD 2ba02f067 PMD 0
Mar 17 15:45:23 speyburn kernel: Oops: 0000 [131] SMP
Mar 17 15:45:23 speyburn kernel: CPU 0
Mar 17 15:45:23 speyburn kernel: Modules linked in: iptable_filter ip_tables x_tables ipv6 ext2 mbcache dm_snapshot dm_mirror tsdev psmouse shpchp serio_raw pci_hotplug pcspkr evdev sg st xfs raid456 xor raid10 raid1 raid0 linear md_mod dm_mod ch sd_mod mptsas mptscsih mptbase scsi_transport_sas ehci_hcd bnx2 qla2xxx firmware_class uhci_hcd scsi_transport_fc scsi_mod thermal processor fan
Mar 17 15:45:23 speyburn kernel: Pid: 10598, comm: imapd Tainted: G B 2.6.18-4-vserver-amd64 #1
Mar 17 15:45:23 speyburn kernel: RIP: 0010:[] [] :xfs:_xfs_buf_find+0x93/0x1f2
Mar 17 15:45:23 speyburn kernel: RSP: 0000:ffff81028f26b918 EFLAGS: 00010287
Mar 17 15:45:23 speyburn kernel: RAX: ffffffffffffffa0 RBX: ffffffffffffffa0 RCX: ffff810128f10a20
Mar 17 15:45:23 speyburn kernel: RDX: 0000000000000010 RSI: 0000000000000000 RDI: ffff810320954970
Mar 17 15:45:23 speyburn kernel: RBP: ffff810320954960 R08: 0000000000000000 R09: ffffffff88194e39
Mar 17 15:45:23 speyburn kernel: R10: 000000003e21e352 R11: ffff810001036640 R12: 0000000000000000
Mar 17 15:45:23 speyburn kernel: R13: 0000000000002000 R14: ffff8103243d7340 R15: 00000000d1bd5000
Mar 17 15:45:23 speyburn kernel: FS: 0000000000000000(0000) GS:ffffffff80530000(0063) knlGS:00000000f7e0eaa0
Mar 17 15:45:23 speyburn kernel: CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
Mar 17 15:45:23 speyburn kernel: [] __link_path_walk+0x1a2/0xf88
Mar 17 15:45:23 speyburn kernel: <0>Bad page state in process 'imapd'
Mar 17 15:45:23 speyburn kernel: page:ffff81032874f928 flags:0x020000000000020c mapping:ffff810321d2fd20 mapcount:0 count:0
Mar 17 15:45:23 speyburn kernel: Trying to fix it up, but a reboot is needed
Mar 17 15:45:23 speyburn kernel: Backtrace:
Mar 17 15:45:23 speyburn kernel:
Mar 17 15:45:23 speyburn kernel: Call Trace:
Mar 17 15:45:23 speyburn kernel: [] bad_page+0x4e/0x78
Mar 17 15:45:23 speyburn kernel: [] free_hot_cold_page+0x73/0xff
Mar 17 15:45:24 speyburn kernel:
Mar 17 15:45:24 speyburn kernel: [] link_path_walk+0x5c/0xe5
Mar 17 15:45:24 speyburn kernel: [] thread_return+0x0/0xe7
Mar 17 15:45:24 speyburn kernel: [] d_rehash+0x6a/0x80
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_corruption_error+0xe4/0xf6
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_dir2_leaf_lookup_int+0x105/0x1fd
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_dir2_isleaf+0x19/0x4a
Mar 17 15:45:24 speyburn kernel: Filesystem "dm-17": XFS internal error xfs_da_do_buf(2) at line 2084 of file fs/xfs/xfs_da_btree.c.
Caller 0xffffffff88163646
Mar 17 15:45:24 speyburn kernel: Bad page state in process 'imapd'
Mar 17 15:45:24 speyburn kernel: page:ffff81032c4d79c0 flags:0x020000000000020c mapping:ffff810321d2fd20 mapcount:0 count:0
Mar 17 15:45:24 speyburn kernel: Trying to fix it up, but a reboot is needed
Mar 17 15:45:24 speyburn kernel: Backtrace:
Mar 17 15:45:24 speyburn kernel:
Mar 17 15:45:24 speyburn kernel: Call Trace:
Mar 17 15:45:24 speyburn kernel: [] bad_page+0x4e/0x78
Mar 17 15:45:24 speyburn kernel: [] free_hot_cold_page+0x73/0xff
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_buf_free+0x99/0xdd
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_trans_push_ail+0xaf/0x237
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_log_reserve+0x443/0x6a6
Mar 17 15:45:24 speyburn kernel: [] :xfs:kmem_zone_zalloc+0x1e/0x2f
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_trans_reserve+0xea/0x1cb
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_create+0x1f0/0x5dd
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_lookup+0x6c/0x7d
Mar 17 15:45:24 speyburn kernel: [] __link_path_walk+0x1a2/0xf88
Mar 17 15:45:24 speyburn kernel: [] do_filp_open+0x1c/0x3d
Mar 17 15:45:24 speyburn kernel: imapd[10611]: segfault at 00000000385f442c rip 00000000f7e678ac rsp 00000000fffc8cd8 error 4
Mar 17 15:45:24 speyburn kernel: [] do_sys_open+0x44/0xc5
Mar 17 15:45:24 speyburn kernel: [] ia32_sysret+0x0/0xa
Mar 17 15:45:24 speyburn kernel:
Mar 17 15:45:24 speyburn kernel: Bad page state in process 'imapd'
Mar 17 15:45:24 speyburn kernel: page:ffff8103296d6bd0 flags:0x020000000000020c mapping:ffff810321d2fd20 mapcount:0 count:0
Mar 17 15:45:24 speyburn kernel: Trying to fix it up, but a reboot is needed
Mar 17 15:45:24 speyburn kernel: Backtrace:
Mar 17 15:45:24 speyburn kernel:
Mar 17 15:45:24 speyburn kernel: Call Trace:
Mar 17 15:45:24 speyburn kernel: [] bad_page+0x4e/0x78
Mar 17 15:45:24 speyburn kernel: [] free_hot_cold_page+0x73/0xff
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_buf_free+0x99/0xdd
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_trans_push_ail+0xaf/0x237
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_log_reserve+0x443/0x6a6
Mar 17 15:45:24 speyburn kernel: [] :xfs:kmem_zone_zalloc+0x1e/0x2f
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_trans_reserve+0xea/0x1cb
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_create+0x1f0/0x5dd
Mar 17 15:45:24 speyburn kernel: Unable to handle kernel paging request at 0000000000100108 RIP:
Mar 17 15:45:24 speyburn kernel: [] get_page_from_freelist+0x18c/0x3a6
Mar 17 15:45:24 speyburn kernel: PGD 268228067 PUD 143e6f067 PMD 0
Mar 17 15:45:24 speyburn kernel: Oops: 0002 [133] SMP
Mar 17 15:45:24 speyburn kernel: CPU 1
Mar 17 15:45:24 speyburn kernel: Modules linked in: iptable_filter ip_tables x_tables ipv6 ext2 mbcache dm_snapshot dm_mirror tsdev psmouse shpchp serio_raw pci_hotplug pcspkr evdev sg st xfs raid456 xor raid10 raid1 raid0 linear md_mod dm_mod ch sd_mod mptsas mptscsih mptbase scsi_transport_sas ehci_hcd bnx2 qla2xxx firmware_class uhci_hcd scsi_transport_fc scsi_mod thermal processor fan
Mar 17 15:45:24 speyburn kernel: Pid: 10615, comm: imapd Tainted: G B 2.6.18-4-vserver-amd64 #1
Mar 17 15:45:24 speyburn kernel: RIP: 0010:[] [] get_page_from_freelist+0x18c/0x3a6
Mar 17 15:45:24 speyburn kernel: RSP: 0000:ffff8103125b1d08 EFLAGS: 00010002
Mar 17 15:45:24 speyburn kernel: RAX: ffff81032bce0b10 RBX: ffff810324d75d40 RCX: 0000000000100100
Mar 17 15:45:24 speyburn kernel: RDX: ffff810324d75d50 RSI: 0000000000000c8b RDI: ffff810000016800
Mar 17 15:45:24 speyburn kernel: RBP: 0000000000000282 R08: 0000000000000000 R09: 0000000000002d3d
Mar 17 15:45:24 speyburn kernel: R10: 0000000000000000 R11: 0000000000000002 R12: ffff810000016800
Mar 17 15:45:24 speyburn kernel: R13: ffff81032bce0ae8 R14: ffff810000018010 R15: 00003ffffffff000
Mar 17 15:45:24 speyburn kernel: FS: 0000000000000000(0000) GS:ffff810324d75c40(0063) knlGS:00000000f7d77aa0
Mar 17 15:45:24 speyburn kernel: CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
Mar 17 15:45:24 speyburn kernel: CR2: 0000000000100108 CR3: 0000000209207000 CR4: 00000000000006e0
Mar 17 15:45:24 speyburn kernel: Process imapd (pid: 10615[#49152], threadinfo ffff8103125b0000, task ffff81012db1b880)
Mar 17 15:45:24 speyburn kernel: Stack: ffff81005406e0c0 0000004400000015 ffff810000018010 000280d200000000
Mar 17 15:45:24 speyburn kernel: 0000000000000002 ffffffff8023695f ffff8103125b1e08 000000008027e850
Mar 17 15:45:24 speyburn kernel: 0000000000000001 ffffffff8027e708 ffff810320b34000 ffff810000018010
Mar 17 15:45:24 speyburn kernel: Call Trace:
Mar 17 15:45:24 speyburn kernel: [] do_sock_write+0xcb/0x19c
Mar 17 15:45:24 speyburn kernel: [] __activate_task+0x27/0x39
Mar 17 15:45:24 speyburn kernel: [] __alloc_pages+0x5c/0x2a9
Mar 17 15:45:24 speyburn kernel: [] __handle_mm_fault+0x1e2/0xa80
Mar 17 15:45:24 speyburn kernel: [] expand_stack+0x13c/0x170
Mar 17 15:45:24 speyburn kernel: [] do_page_fault+0x39d/0x706
Mar 17 15:45:24 speyburn kernel: [] thread_return+0x0/0xe7
Mar 17 15:45:24 speyburn kernel: [] error_exit+0x0/0x84
Mar 17 15:45:24 speyburn kernel:
Mar 17 15:45:24 speyburn kernel:
Mar 17 15:45:24 speyburn kernel: Code: 48 89 51 08 48 89 0a 48 c7 40 08 00 02 20 00 48 c7 00 00 01
Mar 17 15:45:24 speyburn kernel: RIP [] get_page_from_freelist+0x18c/0x3a6
Mar 17 15:45:24 speyburn kernel: RSP
Mar 17 15:45:24 speyburn kernel: CR2: 0000000000100108
Mar 17 15:45:24 speyburn kernel: <0>Bad page state in process 'imapd'
Mar 17 15:45:24 speyburn kernel: page:ffff81032884a700 flags:0x020000000000020c mapping:ffff810321d2fd20 mapcount:0 count:0
Mar 17 15:45:24 speyburn kernel: Trying to fix it up, but a reboot is needed
Mar 17 15:45:24 speyburn kernel: Backtrace:
Mar 17 15:45:24 speyburn kernel:
Mar 17 15:45:24 speyburn kernel: Call Trace:
Mar 17 15:45:24 speyburn kernel: [] bad_page+0x4e/0x78
Mar 17 15:45:24 speyburn kernel: [] free_hot_cold_page+0x73/0xff
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_buf_free+0x99/0xdd
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_trans_push_ail+0xaf/0x237
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_log_reserve+0x443/0x6a6
Mar 17 15:45:24 speyburn kernel: [] :xfs:kmem_zone_zalloc+0x1e/0x2f
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_trans_reserve+0xea/0x1cb
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_create+0x1f0/0x5dd
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_vn_mknod+0x1bd/0x3c8
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_iunlock+0x57/0x79
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_lookup+0x6c/0x7d
Mar 17 15:45:24 speyburn kernel: [] __up_read+0x13/0x8a
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_iunlock+0x57/0x79
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_access+0x3d/0x46
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_vn_permission+0x14/0x18
Mar 17 15:45:24 speyburn kernel: [] permission+0xf0/0x155
Mar 17 15:45:24 speyburn kernel: [] __link_path_walk+0x1a2/0xf88
Mar 17 15:45:24 speyburn kernel: [] mntput_no_expire+0x19/0x8b
Mar 17 15:45:24 speyburn kernel: [] link_path_walk+0xd3/0xe5
Mar 17 15:45:24 speyburn kernel: [] vfs_create+0xe7/0x12c
Mar 17 15:45:24 speyburn kernel: [] open_namei+0x18c/0x6a0
Mar 17 15:45:24 speyburn kernel: [] do_filp_open+0x1c/0x3d
Mar 17 15:45:24 speyburn kernel: [] do_sys_open+0x44/0xc5
Mar 17 15:45:24 speyburn kernel: [] ia32_sysret+0x0/0xa
Mar 17 15:45:24 speyburn kernel:
Mar 17 15:45:24 speyburn kernel: Bad page state in process 'imapd'
Mar 17 15:45:24 speyburn kernel: page:ffff81032bce0ae8 flags:0x020000000000020c mapping:0000000000000000 mapcount:1 count:0
Mar 17 15:45:24 speyburn kernel: Trying to fix it up, but a reboot is needed
Mar 17 15:45:24 speyburn kernel: [] :xfs:xfs_lookup+0x6c/0x7d
Mar 17 15:45:24 speyburn kernel: ----------- [cut here ] --------- [please bite here ] ---------
Mar 17 15:45:24 speyburn kernel: Kernel BUG at include/linux/mm.h:300
Mar 17 15:45:24 speyburn kernel: Bad page state in process 'imaplogin'
Mar 17 15:45:24 speyburn kernel: <1>Unable to handle kernel paging request at 0000000000200200 RIP:
Mar 17 15:45:24 speyburn kernel: [] get_empty_filp+0x5b/0x1a6
Mar 17 15:45:24 speyburn kernel: R13: ffff810000016800 R14: 000000000000000d R15: 0000000000000002
Mar 17 15:45:24 speyburn kernel: Call Trace:
Mar 17 15:45:24 speyburn kernel: [] get_page_from_freelist+0x138/0x3a6
-----

At that point, a hard reboot was needed!! Does anybody know what happened??

And now, the second part of the problem: my XFS filesystem now seems to be in
a bad state. Every second or so I get this:

Mar 17 15:55:03 speyburn kernel: 0x0: 70 72 69 76 65 2c 53 3d 34 35 32 33 3a 30 0a 31
Mar 17 15:55:03 speyburn kernel: Filesystem "dm-17": XFS internal error xfs_da_do_buf(2) at line 2084 of file fs/xfs/xfs_da_btree.c. Caller 0xffffffff88165646
Mar 17 15:55:03 speyburn kernel:
Mar 17 15:55:03 speyburn kernel: Call Trace:
Mar 17 15:55:03 speyburn kernel: [] :xfs:xfs_corruption_error+0xe4/0xf6
Mar 17 15:55:03 speyburn kernel: [] :xfs:kmem_zone_alloc+0x56/0xa3
Mar 17 15:55:03 speyburn kernel: [] :xfs:xfs_da_do_buf+0x53c/0x61e
Mar 17 15:55:03 speyburn kernel: [] :xfs:xfs_da_read_buf+0x16/0x1b
Mar 17 15:55:03 speyburn kernel: [] :xfs:xfs_da_read_buf+0x16/0x1b
Mar 17 15:55:03 speyburn kernel: [] :xfs:xfs_dir2_leaf_getdents+0x3c3/0x6d9
Mar 17 15:55:03 speyburn kernel: [] :xfs:xfs_dir2_leaf_getdents+0x3c3/0x6d9
Mar 17 15:55:03 speyburn kernel: [] :xfs:xfs_dir2_put_dirent64_direct+0x0/0x6b
Mar 17 15:55:03 speyburn kernel: [] :xfs:xfs_dir_getdents+0xf2/0x11a
Mar 17 15:55:03 speyburn kernel: [] __up_read+0x13/0x8a
Mar 17 15:55:03 speyburn kernel: [] :xfs:xfs_readdir+0x3f/0x58
Mar 17 15:55:03 speyburn kernel: [] :xfs:xfs_file_readdir+0xb6/0x1a7
Mar 17 15:55:03 speyburn kernel: [] compat_filldir+0x0/0xb6
Mar 17 15:55:03 speyburn kernel: [] compat_filldir+0x0/0xb6
Mar 17 15:55:03 speyburn kernel: [] vfs_readdir+0x77/0xa9
Mar 17 15:55:03 speyburn kernel: [] compat_sys_getdents+0x75/0xbd
Mar 17 15:55:03 speyburn kernel: [] ia32_sysret+0x0/0xa
Mar 17 15:55:03 speyburn kernel:
Mar 17 15:55:04 speyburn kernel: 0x0: 70 72 69 76 65 2c 53 3d 34 35 32 33 3a 30 0a 31
Mar 17 15:55:04 speyburn kernel: Filesystem "dm-17": XFS internal error xfs_da_do_buf(2) at line 2084 of file fs/xfs/xfs_da_btree.c. Caller 0xffffffff88165646

[The same hex dump and XFS internal error, with identical call traces, repeat at 15:56:04, 15:57:04, 15:58:04, 15:59:04, 16:00:06 and 16:01:04; the duplicate traces are elided here.]

So I have to check the filesystem :( The curious part is that in spite of the
continuous messages, the FS (lots of maildirs) seems consistent for now and
works OK. But I'm quite scared and will go for a complete check now.
Sincerely,
--
Yann Dupont - Pôle IRTS, DSI Université de Nantes
Tel : 02.51.12.53.91 - Mail/Jabber : Yann.Dupont@univ-nantes.fr

From owner-xfs@oss.sgi.com Tue Mar 18 05:36:13 2008
From: Xavier Poirier
To: xfs@oss.sgi.com
Subject: Update of XFSPROG XFSDUMP on linux kernel 2.4.22
Date: Tue, 18 Mar 2008 13:07:26 +0100 (CET)
Message-ID: <1205842046.47dfb07ede86d@hermesadm.chb.fr>
Hi all XFS ML users!

I'm Xavier from France. Two years ago I installed a linux server with 2 XFS partitions. All is working like a charm!

* Except the xfsrestore command, which often crashes (one time in two) with a dump file of 35 GB.

Here are my version details:
- Linux Mandrake 9.2, kernel 2.4.22
- XFSDUMP 2.2.13
- XFSDUMP 2.5.4 (installed by RPM)

My question is: is it better to update the XFS programs to newer versions, or to update my linux kernel, to avoid problems? I've tried to find some older distribs of xfsdump (like 2.2.33), but without success; the most recent distribs fail at the configure step on manual install ...

What is the best for you? I really need your help before taking a decision!
Thanks,
Xavier

From owner-xfs@oss.sgi.com Tue Mar 18 06:49:30 2008
From: Stuart Rowan
Reply-To: strr-debian@decisionsoft.co.uk
To: xfs@oss.sgi.com
CC: Timothy Shimmin
Subject: Re: 2.6.24.3 nfs server on xfs keeps producing nfsd: non-standard errno: -117
Date: Tue, 18 Mar 2008 13:49:52 +0000
Message-ID: <47DFC880.6040403@decisionsoft.co.uk>
In-Reply-To: <47DF0C9D.1010602@sgi.com>
Timothy Shimmin wrote, on 18/03/08 00:28:
> Hi Stuart,
>
> Stuart Rowan wrote:
>> Hi,
>>
>> Firstly thanks for the great filesystem, and apologies if this ends up
>> being NFS rather than XFS being weird! I'm not subscribed, so please do
>> keep me CC'd.
>>
>> I have *millions* of lines of (>200k per minute according to syslog):
>>   nfsd: non-standard errno: -117
>> being sent out of dmesg.
>>
>> Now errno 117 is
>>   #define EUCLEAN 117 /* Structure needs cleaning */
>> which, from a quick grep, seems to be used only by XFS, JFFS and smbfs.
>>
> In XFS we mapped EFSCORRUPTED to EUCLEAN, as EFSCORRUPTED
> didn't exist on Linux.
> However, normally if this error is encountered in XFS then
> we output an appropriate msg to the syslog.
> Our default error level is 3 and most reports are rated at 1,
> so they should show up, I would have thought.
>
> --Tim
>
>> My nfs server exports two locations:
>>   /home
>>   /home/archive
>> Both of these are XFS partitions, hence my suspicion that the -117 is
>> coming from XFS.
>>
>> xfs_repair -n says the filesystems are clean.
>> xfs_repair has been run multiple times to completion on the
>> filesystems; all is fine.
>>
>> The XFS partitions are lvm volumes as follows:
>>   data/home     900G
>>   data/archive  400G
>> The volume group, data, is sda3.
>> sda3 is a 6 drive 3ware 9550SXU-8LP RAID10 array.
>>
>> The NFS server is currently in use (indeed the message only starts
>> once clients connect) and works absolutely fine.
>>
>> How do I find out what (if anything) is wrong with my filesystem /
>> appropriately silence this message?
>>
>> Many thanks,
>> Stu.
>>

I briefly changed the sysctl fs.xfs.error_level to 6 and then back to 3. It gives the following message and backtrace:

> Mar 18 13:35:15 evenlode kernel: nfsd: non-standard errno: -117
> Mar 18 13:35:15 evenlode kernel: 0x0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> Mar 18 13:35:15 evenlode kernel: Filesystem "dm-0": XFS internal error xfs_itobp at line 360 of file fs/xfs/xfs_inode.c. Caller 0xffffffff8821224d
> Mar 18 13:35:15 evenlode kernel: Pid: 2791, comm: nfsd Not tainted 2.6.24.3-generic #1
> Mar 18 13:35:15 evenlode kernel:
> Mar 18 13:35:15 evenlode kernel: Call Trace:
> Mar 18 13:35:15 evenlode kernel:  [] :xfs:xfs_iread+0x71/0x1e8
> Mar 18 13:35:15 evenlode kernel:  [] :xfs:xfs_itobp+0x141/0x17b
> Mar 18 13:35:15 evenlode kernel:  [] :xfs:xfs_iread+0x71/0x1e8
> Mar 18 13:35:15 evenlode kernel:  [] :xfs:xfs_iread+0x71/0x1e8
> Mar 18 13:35:15 evenlode kernel:  [] :xfs:xfs_iget_core+0x352/0x63a
> Mar 18 13:35:15 evenlode kernel:  [] alloc_inode+0x152/0x1c2
> Mar 18 13:35:15 evenlode kernel:  [] :xfs:xfs_iget+0x9b/0x13f
> Mar 18 13:35:15 evenlode kernel:  [] :xfs:xfs_vget+0x4d/0xbb
> Mar 18 13:35:15 evenlode kernel:  [] :xfs:xfs_nfs_get_inode+0x2e/0x42
> Mar 18 13:35:15 evenlode kernel:  [] :xfs:xfs_fs_fh_to_dentry+0x64/0x97
> Mar 18 13:35:15 evenlode kernel:  [] :exportfs:exportfs_decode_fh+0x30/0x1dc
> Mar 18 13:35:15 evenlode kernel:  [] :nfsd:nfsd_acceptable+0x0/0xca
> Mar 18 13:35:15 evenlode kernel:  [] set_current_groups+0x148/0x153
> Mar 18 13:35:15 evenlode kernel:  [] :nfsd:nfsd_setuser+0x11c/0x171
> Mar 18 13:35:15 evenlode kernel:  [] :nfsd:nfsd_setuser_and_check_port+0x52/0x57
> Mar 18 13:35:15 evenlode kernel:  [] :nfsd:fh_verify+0x1fb/0x4a4
> Mar 18 13:35:15 evenlode kernel:  [] :sunrpc:svc_tcp_recvfrom+0x7ab/0x843
> Mar 18 13:35:15 evenlode kernel:  [] :nfsd:nfsd_open+0x1f/0x170
> Mar 18 13:35:15 evenlode kernel:  [] :nfsd:nfsd_read+0x7f/0xc4
> Mar 18 13:35:15 evenlode kernel:  [] :nfsd:nfsd3_proc_read+0x117/0x15a
> Mar 18 13:35:15 evenlode kernel:  [] :nfsd:nfsd_dispatch+0xde/0x1c2
> Mar 18 13:35:15 evenlode kernel:  [] :sunrpc:svc_process+0x3f7/0x6e9
> Mar 18 13:35:15 evenlode kernel:  [] __down_read+0x12/0x9a
> Mar 18 13:35:15 evenlode kernel:  [] :nfsd:nfsd+0x191/0x2ae
> Mar 18 13:35:15 evenlode kernel:  [] child_rip+0xa/0x12
> Mar 18 13:35:15 evenlode kernel:  [] :nfsd:nfsd+0x0/0x2ae
> Mar 18 13:35:15 evenlode kernel:  [] child_rip+0x0/0x12
> Mar 18 13:35:15 evenlode kernel:

Does that help?

Thanks,
Stu.
From owner-xfs@oss.sgi.com Tue Mar 18 10:46:41 2008
From: Christian Kujau
To: Chr
CC: Alasdair G Kergon, Milan Broz, David Chinner, LKML, xfs@oss.sgi.com, dm-devel@redhat.com, dm-crypt@saout.de, Herbert Xu, Ritesh Raj Sarraf
Subject: Re: [dm-crypt] INFO: task mount:11202 blocked for more than 120 seconds
Date: Tue, 18 Mar 2008 18:46:57 +0100
Message-ID: <47E00011.8060508@nerdbynature.de>
In-Reply-To: <200803171936.22260.chunkeey@web.de>
Chr wrote:
> Well, the dm-crypt regressions seem to be gone. :)
> Thanks for your work & time!

Unfortunately I'm unable to test at the moment, as I currently don't have access to the machine where the hangs occurred. So, if Chr says "it's fixed" I believe it is, and I don't want to be a show stopper... IOW: feel free to close the bug.

Thanks to all involved,
Christian.
From owner-xfs@oss.sgi.com Tue Mar 18 16:30:57 2008
From: "Andre Draszik"
To: xfs@oss.sgi.com
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
Date: Tue, 18 Mar 2008 23:31:27 +0000
Hi,

(I just subscribed, so I can't reply correctly :-( )

In fact, the last two evenings I spent making XFS work on ARM EABI, where things are much better than with the old ABI, but still XFS won't work out of the box.

So, Eric, if you go for the
  #if defined(__arm__) && !defined(__ARM_EABI__)
approach, ARM EABI will still be broken.

EABI basically behaves like other 'normal' arches/ABIs, but sometimes structures get padded to have a size of a multiple of 8, i.e. padding is added at the end of the struct, which as far as I can see for now affects 5 structs: xfs_dir2_data_entry_t, xfs_dinode_t, xfs_sb_t, xfs_dsb_t, and xfs_log_item_t.

I must say, I like Jeff's approach of explicitly telling gcc about alignment much better :-) It makes it a) much easier to find structs that are in fact representations of on-disk data and thus might need tweaking, and b) as somebody already said, you fix such problems once and forever. E.g. for me as an absolute outsider, it was quite time consuming finding out which structs are actually on-disk.

That said, Jeff, you mentioned that your changes don't work yet completely - could this be because (at least from the comments) struct xfs_sb needs to match struct xfs_dsb and you only change xfs_dsb?

Cheers,
Andre'

From owner-xfs@oss.sgi.com Tue Mar 18 16:48:42 2008
From: "Andre Draszik"
To: xfs@oss.sgi.com
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
Date: Tue, 18 Mar 2008 23:49:11 +0000
Hi,

(I just subscribed, so I can't reply correctly :-( )

In fact, the last two evenings I spent making XFS work on ARM EABI, where things are much better than with the old ABI, but still XFS won't work out of the box.

So, Eric, if you go for the
  #if defined(__arm__) && !defined(__ARM_EABI__)
approach, ARM EABI will still be broken.

EABI basically behaves like other 'normal' arches/ABIs, but sometimes structures get padded to have a size of a multiple of 8, i.e. padding is added at the end of the struct, which as far as I can see for now affects 5 structs: xfs_dir2_data_entry_t, xfs_dinode_t, xfs_sb_t, xfs_dsb_t, and xlog_rec_header_t.

I must say, I like Jeff's approach of explicitly telling gcc about alignment much better :-) It makes it a) much easier to find structs that are in fact representations of on-disk data and thus might need tweaking, and b) as somebody already said, you fix such problems once and forever. E.g. for me as an absolute outsider, it was quite time consuming finding out which structs are actually on-disk.

That said, Jeff, you mentioned that your changes don't work yet completely - could this be because (at least from the comments) struct xfs_sb needs to match struct xfs_dsb and you only change xfs_dsb?
Cheers,
Andre'

From owner-xfs@oss.sgi.com Tue Mar 18 17:24:09 2008
From: "Josef 'Jeff' Sipek"
To: Andre Draszik
Cc: xfs@oss.sgi.com
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
Date: Tue, 18 Mar 2008 20:23:07 -0400
Message-ID: <20080319002307.GA11349@josefsipek.net>
On Tue, Mar 18, 2008 at 11:49:11PM +0000, Andre Draszik wrote:
> Hi,
>
> (I just subscribed, so I can't reply correctly :-( )
>
> In fact, the last two evenings I spent making XFS work on ARM EABI,
> where things are much better than with the old ABI, but still XFS
> won't work out of the box.
>
> So, Eric, if you go for the #if defined(__arm__) &&
> !defined(__ARM_EABI__) approach, ARM EABI will still be broken.

Ouch.

> EABI basically behaves like other 'normal' arches/ABIs, but sometimes
> structures get padded to have a size of a multiple of 8, i.e. padding
> is added at the end of the struct, which as far as I can see for now
> affects 5 structs: xfs_dir2_data_entry_t, xfs_dinode_t, xfs_sb_t,
> xfs_dsb_t, and xlog_rec_header_t
>
> I must say, I like Jeff's approach of explicitly telling gcc about
> alignment much better :-) It makes it a) much easier to find structs
> that are in fact representations of on-disk data and thus might need
> tweaking, and b) as somebody already said, you fix such problems once
> and forever.

Hopefully, there won't be any need for additional tweaking.

> E.g. for me as an absolute outsider, it was quite time consuming
> finding out which structs are actually on-disk.

Hey, I feel your pain. I just grepped the entire source tree for 'struct' and went through them one by one.
> That said, Jeff, you mentioned that your changes don't work yet
> completely - could this be because (at least from the comments) struct
> xfs_sb needs to match struct xfs_dsb and you only change xfs_dsb?

As far as I know, the patch I sent later last night does work. BUT, and I mean *BUT*, I did not test it on anything other than x86. And even then, I wouldn't trust it with my data just yet :) It is very possible that some arches are still broken - it was a "request for comment", not a "please apply".

I talked about it a bit in the IRC channel, and the SGI folks want (1) proof that adding these attributes does not create any regressions on any of the supported architectures (and fixes the one that's currently broken - arm), and (2) assurance that the ~700 instructions (~0.3% increase) that get added to ia64 do not cause a regression in performance.

Both points are valid, and it'll all happen sometime (hopefully) soon.

Thanks for the heads up wrt ARM EABI.

Josef 'Jeff' Sipek.

--
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
- Abbie Hoffman

From owner-xfs@oss.sgi.com Tue Mar 18 20:17:47 2008
From: Eric Sandeen
To: Andre Draszik
CC: xfs@oss.sgi.com
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
Date: Tue, 18 Mar 2008 22:18:11 -0500
Message-ID: <47E085F3.8030908@sandeen.net>
Andre Draszik wrote:
> Hi,
>
> (I just subscribed, so I can't reply correctly :-( )
>
> In fact, the last two evenings I spent making XFS work on ARM EABI,
> where things are much better than with the old ABI, but still XFS
> won't work out of the box.

How so?

> So, Eric, if you go for the #if defined(__arm__) &&
> !defined(__ARM_EABI__) approach, ARM EABI will still be broken.

Details, please.

> EABI basically behaves like other 'normal' arches/ABIs, but sometimes
> structures get padded to have a size of a multiple of 8, i.e. padding
> is added at the end of the struct, which as far as I can see for now
> affects 5 structs: xfs_dir2_data_entry_t, xfs_dinode_t, xfs_sb_t,
> xfs_dsb_t, and xfs_log_item_t

I did pretty exhaustive testing on new ABI and saw no failures that were clearly unique to arm, although full qa gets enough failures that I wouldn't swear to it. What testing did you do, and what failures did you see? And what work did you need to do to "make XFS work" on EABI?

I've helpfully provided structure layouts for the structures you mention in the attached files, for your diffing pleasure. I think you'll find that it's not exactly as you described.
xfs_dinode_t has no extra padding at the end, though xfs_dir2_sf, a
member of one of its unions, does (other union members are larger,
though, so the struct offsets are not changed). There is one other
cosmetic difference just because my arm tree doesn't have Jeff's
ail_entry list change. The rest of the structures seem to be identical.

If you have structure differences that lead to demonstrable failures,
then by all means, provide the details.

> I must say, I like Jeff's approach of explicitly telling gcc about
> alignment much better :-) It makes it a) much easier to find structs
> that are in fact representations of on-disk data and thus might need
> tweaking, and b) as somebody already said, you fix such problems once
> and forever.
> E.g. for me as an absolute outsider, it was quite time consuming
> finding out which structs are actually on-disk.

At some point here I'm just going to go quietly insane.

Yes, _annotating_ things as __ondisk is great, and I have no problems
with that, although it'd be nice if something actually made use of the
annotation. But don't confuse that with telling gcc to actually treat
each of these structures differently, which is great if done properly,
and requires a huge amount of diligence.

If you guys can take this exercise to the point where you've convinced
the sgi guys that the benefits outweigh the risks, then more power to
you. In the meantime, I hope my targeted, safe fix for a demonstrable
problem which has gone begging for 5 years or so doesn't get lost in
the noise.

-Eric

> That said, Jeff, you mentioned that your changes don't work yet
> completely - could this be because (at least from the comments) struct
> xfs_sb needs to match struct xfs_dsb and you only change xfs_dsb?
>
> Cheers,
> Andre'
>

--- attachment: arm.structs ---

struct xfs_dir2_data_entry {
	__be64 inumber;	/* 0 8 */
	__u8 namelen;	/* 8 1 */
	__u8 name[1];	/* 9 1 */
	__be16 tag;	/* 10 2 */
	/* size: 12, cachelines: 1 */
	/* last cacheline: 12 bytes */
};

struct xfs_dinode {
	xfs_dinode_core_t di_core;	/* 0 96 */
	/* --- cacheline 1 boundary (64 bytes) was 32 bytes ago --- */
	__be32 di_next_unlinked;	/* 96 4 */
	union {
		xfs_bmdr_block_t di_bmbt;	/* 4 */
		xfs_bmbt_rec_32_t di_bmx[1];	/* 16 */
		xfs_dir2_sf_t di_dir2sf;	/* 24 */
		char di_c[1];	/* 1 */
		__be32 di_dev;	/* 4 */
		uuid_t di_muuid;	/* 16 */
		char di_symlink[1];	/* 1 */
	} di_u;	/* 100 24 */
	union {
		xfs_bmdr_block_t di_abmbt;	/* 4 */
		xfs_bmbt_rec_32_t di_abmx[1];	/* 16 */
		xfs_attr_shortform_t di_attrsf;	/* 8 */
	} di_a;	/* 124 16 */
	/* --- cacheline 2 boundary (128 bytes) was 12 bytes ago --- */
	/* size: 140, cachelines: 3 */
	/* last cacheline: 12 bytes */
};

struct xfs_sb {
	__uint32_t sb_magicnum;	/* 0 4 */
	__uint32_t sb_blocksize;	/* 4 4 */
	xfs_drfsbno_t sb_dblocks;	/* 8 8 */
	xfs_drfsbno_t sb_rblocks;	/* 16 8 */
	xfs_drtbno_t sb_rextents;	/* 24 8 */
	uuid_t sb_uuid;	/* 32 16 */
	xfs_dfsbno_t sb_logstart;	/* 48 8 */
	xfs_ino_t sb_rootino;	/* 56 8 */
	/* --- cacheline 1 boundary (64 bytes) --- */
	xfs_ino_t sb_rbmino;	/* 64 8 */
	xfs_ino_t sb_rsumino;	/* 72 8 */
	xfs_agblock_t sb_rextsize;	/* 80 4 */
	xfs_agblock_t sb_agblocks;	/* 84 4 */
	xfs_agnumber_t sb_agcount;	/* 88 4 */
	xfs_extlen_t sb_rbmblocks;	/* 92 4 */
	xfs_extlen_t sb_logblocks;	/* 96 4 */
	__uint16_t sb_versionnum;	/* 100 2 */
	__uint16_t sb_sectsize;	/* 102 2 */
	__uint16_t sb_inodesize;	/* 104 2 */
	__uint16_t sb_inopblock;	/* 106 2 */
	char sb_fname[12];	/* 108 12 */
	__uint8_t sb_blocklog;	/* 120 1 */
	__uint8_t sb_sectlog;	/* 121 1 */
	__uint8_t sb_inodelog;	/* 122 1 */
	__uint8_t sb_inopblog;	/* 123 1 */
	__uint8_t sb_agblklog;	/* 124 1 */
	__uint8_t sb_rextslog;	/* 125 1 */
	__uint8_t sb_inprogress;	/* 126 1 */
	__uint8_t sb_imax_pct;	/* 127 1 */
	/* --- cacheline 2 boundary (128 bytes) --- */
	__uint64_t sb_icount;	/* 128 8 */
	__uint64_t sb_ifree;	/* 136 8 */
	__uint64_t sb_fdblocks;	/* 144 8 */
	__uint64_t sb_frextents;	/* 152 8 */
	xfs_ino_t sb_uquotino;	/* 160 8 */
	xfs_ino_t sb_gquotino;	/* 168 8 */
	__uint16_t sb_qflags;	/* 176 2 */
	__uint8_t sb_flags;	/* 178 1 */
	__uint8_t sb_shared_vn;	/* 179 1 */
	xfs_extlen_t sb_inoalignmt;	/* 180 4 */
	__uint32_t sb_unit;	/* 184 4 */
	__uint32_t sb_width;	/* 188 4 */
	/* --- cacheline 3 boundary (192 bytes) --- */
	__uint8_t sb_dirblklog;	/* 192 1 */
	__uint8_t sb_logsectlog;	/* 193 1 */
	__uint16_t sb_logsectsize;	/* 194 2 */
	__uint32_t sb_logsunit;	/* 196 4 */
	__uint32_t sb_features2;	/* 200 4 */
	__uint32_t sb_bad_features2;	/* 204 4 */
	/* size: 208, cachelines: 4 */
	/* last cacheline: 16 bytes */
};

struct xfs_dsb {
	__be32 sb_magicnum;	/* 0 4 */
	__be32 sb_blocksize;	/* 4 4 */
	__be64 sb_dblocks;	/* 8 8 */
	__be64 sb_rblocks;	/* 16 8 */
	__be64 sb_rextents;	/* 24 8 */
	uuid_t sb_uuid;	/* 32 16 */
	__be64 sb_logstart;	/* 48 8 */
	__be64 sb_rootino;	/* 56 8 */
	/* --- cacheline 1 boundary (64 bytes) --- */
	__be64 sb_rbmino;	/* 64 8 */
	__be64 sb_rsumino;	/* 72 8 */
	__be32 sb_rextsize;	/* 80 4 */
	__be32 sb_agblocks;	/* 84 4 */
	__be32 sb_agcount;	/* 88 4 */
	__be32 sb_rbmblocks;	/* 92 4 */
	__be32 sb_logblocks;	/* 96 4 */
	__be16 sb_versionnum;	/* 100 2 */
	__be16 sb_sectsize;	/* 102 2 */
	__be16 sb_inodesize;	/* 104 2 */
	__be16 sb_inopblock;	/* 106 2 */
	char sb_fname[12];	/* 108 12 */
	__u8 sb_blocklog;	/* 120 1 */
	__u8 sb_sectlog;	/* 121 1 */
	__u8 sb_inodelog;	/* 122 1 */
	__u8 sb_inopblog;	/* 123 1 */
	__u8 sb_agblklog;	/* 124 1 */
	__u8 sb_rextslog;	/* 125 1 */
	__u8 sb_inprogress;	/* 126 1 */
	__u8 sb_imax_pct;	/* 127 1 */
	/* --- cacheline 2 boundary (128 bytes) --- */
	__be64 sb_icount;	/* 128 8 */
	__be64 sb_ifree;	/* 136 8 */
	__be64 sb_fdblocks;	/* 144 8 */
	__be64 sb_frextents;	/* 152 8 */
	__be64 sb_uquotino;	/* 160 8 */
	__be64 sb_gquotino;	/* 168 8 */
	__be16 sb_qflags;	/* 176 2 */
	__u8 sb_flags;	/* 178 1 */
	__u8 sb_shared_vn;	/* 179 1 */
	__be32 sb_inoalignmt;	/* 180 4 */
	__be32 sb_unit;	/* 184 4 */
	__be32 sb_width;	/* 188 4 */
	/* --- cacheline 3 boundary (192 bytes) --- */
	__u8 sb_dirblklog;	/* 192 1 */
	__u8 sb_logsectlog;	/* 193 1 */
	__be16 sb_logsectsize;	/* 194 2 */
	__be32 sb_logsunit;	/* 196 4 */
	__be32 sb_features2;	/* 200 4 */
	__be32 sb_bad_features2;	/* 204 4 */
	/* size: 208, cachelines: 4 */
	/* last cacheline: 16 bytes */
};

struct xfs_log_item {
	xfs_ail_entry_t li_ail;	/* 0 8 */
	xfs_lsn_t li_lsn;	/* 8 8 */
	struct xfs_log_item_desc * li_desc;	/* 16 4 */
	struct xfs_mount * li_mountp;	/* 20 4 */
	uint li_type;	/* 24 4 */
	uint li_flags;	/* 28 4 */
	struct xfs_log_item * li_bio_list;	/* 32 4 */
	void (*li_cb)(struct xfs_buf *, struct xfs_log_item *);	/* 36 4 */
	struct xfs_item_ops * li_ops;	/* 40 4 */
	/* size: 44, cachelines: 1 */
	/* last cacheline: 44 bytes */
};

--- attachment: x86.structs ---

struct xfs_dir2_data_entry {
	__be64 inumber;	/* 0 8 */
	__u8 namelen;	/* 8 1 */
	__u8 name[1];	/* 9 1 */
	__be16 tag;	/* 10 2 */
	/* size: 12, cachelines: 1 */
	/* last cacheline: 12 bytes */
};

struct xfs_dinode {
	xfs_dinode_core_t di_core;	/* 0 96 */
	/* --- cacheline 1 boundary (64 bytes) was 32 bytes ago --- */
	__be32 di_next_unlinked;	/* 96 4 */
	union {
		xfs_bmdr_block_t di_bmbt;	/* 4 */
		xfs_bmbt_rec_32_t di_bmx[1];	/* 16 */
		xfs_dir2_sf_t di_dir2sf;	/* 22 */
		char di_c[1];	/* 1 */
		__be32 di_dev;	/* 4 */
		uuid_t di_muuid;	/* 16 */
		char di_symlink[1];	/* 1 */
	} di_u;	/* 100 24 */
	union {
		xfs_bmdr_block_t di_abmbt;	/* 4 */
		xfs_bmbt_rec_32_t di_abmx[1];	/* 16 */
		xfs_attr_shortform_t di_attrsf;	/* 8 */
	} di_a;	/* 124 16 */
	/* --- cacheline 2 boundary (128 bytes) was 12 bytes ago --- */
	/* size: 140, cachelines: 3 */
	/* last cacheline: 12 bytes */
};

struct xfs_sb {
	__uint32_t sb_magicnum;	/* 0 4 */
	__uint32_t sb_blocksize;	/* 4 4 */
	xfs_drfsbno_t sb_dblocks;	/* 8 8 */
	xfs_drfsbno_t sb_rblocks;	/* 16 8 */
	xfs_drtbno_t sb_rextents;	/* 24 8 */
	uuid_t sb_uuid;	/* 32 16 */
	xfs_dfsbno_t sb_logstart;	/* 48 8 */
	xfs_ino_t sb_rootino;	/* 56 8 */
	/* --- cacheline 1 boundary (64 bytes) --- */
	xfs_ino_t sb_rbmino;	/* 64 8 */
	xfs_ino_t sb_rsumino;	/* 72 8 */
	xfs_agblock_t sb_rextsize;	/* 80 4 */
	xfs_agblock_t sb_agblocks;	/* 84 4 */
	xfs_agnumber_t sb_agcount;	/* 88 4 */
	xfs_extlen_t sb_rbmblocks;	/* 92 4 */
	xfs_extlen_t sb_logblocks;	/* 96 4 */
	__uint16_t sb_versionnum;	/* 100 2 */
	__uint16_t sb_sectsize;	/* 102 2 */
	__uint16_t sb_inodesize;	/* 104 2 */
	__uint16_t sb_inopblock;	/* 106 2 */
	char sb_fname[12];	/* 108 12 */
	__uint8_t sb_blocklog;	/* 120 1 */
	__uint8_t sb_sectlog;	/* 121 1 */
	__uint8_t sb_inodelog;	/* 122 1 */
	__uint8_t sb_inopblog;	/* 123 1 */
	__uint8_t sb_agblklog;	/* 124 1 */
	__uint8_t sb_rextslog;	/* 125 1 */
	__uint8_t sb_inprogress;	/* 126 1 */
	__uint8_t sb_imax_pct;	/* 127 1 */
	/* --- cacheline 2 boundary (128 bytes) --- */
	__uint64_t sb_icount;	/* 128 8 */
	__uint64_t sb_ifree;	/* 136 8 */
	__uint64_t sb_fdblocks;	/* 144 8 */
	__uint64_t sb_frextents;	/* 152 8 */
	xfs_ino_t sb_uquotino;	/* 160 8 */
	xfs_ino_t sb_gquotino;	/* 168 8 */
	__uint16_t sb_qflags;	/* 176 2 */
	__uint8_t sb_flags;	/* 178 1 */
	__uint8_t sb_shared_vn;	/* 179 1 */
	xfs_extlen_t sb_inoalignmt;	/* 180 4 */
	__uint32_t sb_unit;	/* 184 4 */
	__uint32_t sb_width;	/* 188 4 */
	/* --- cacheline 3 boundary (192 bytes) --- */
	__uint8_t sb_dirblklog;	/* 192 1 */
	__uint8_t sb_logsectlog;	/* 193 1 */
	__uint16_t sb_logsectsize;	/* 194 2 */
	__uint32_t sb_logsunit;	/* 196 4 */
	__uint32_t sb_features2;	/* 200 4 */
	__uint32_t sb_bad_features2;	/* 204 4 */
	/* size: 208, cachelines: 4 */
	/* last cacheline: 16 bytes */
};

struct xfs_dsb {
	__be32 sb_magicnum;	/* 0 4 */
	__be32 sb_blocksize;	/* 4 4 */
	__be64 sb_dblocks;	/* 8 8 */
	__be64 sb_rblocks;	/* 16 8 */
	__be64 sb_rextents;	/* 24 8 */
	uuid_t sb_uuid;	/* 32 16 */
	__be64 sb_logstart;	/* 48 8 */
	__be64 sb_rootino;	/* 56 8 */
	/* --- cacheline 1 boundary (64 bytes) --- */
	__be64 sb_rbmino;	/* 64 8 */
	__be64 sb_rsumino;	/* 72 8 */
	__be32 sb_rextsize;	/* 80 4 */
	__be32 sb_agblocks;	/* 84 4 */
	__be32 sb_agcount;	/* 88 4 */
	__be32 sb_rbmblocks;	/* 92 4 */
	__be32 sb_logblocks;	/* 96 4 */
	__be16 sb_versionnum;	/* 100 2 */
	__be16 sb_sectsize;	/* 102 2 */
	__be16 sb_inodesize;	/* 104 2 */
	__be16 sb_inopblock;	/* 106 2 */
	char sb_fname[12];	/* 108 12 */
	__u8 sb_blocklog;	/* 120 1 */
	__u8 sb_sectlog;	/* 121 1 */
	__u8 sb_inodelog;	/* 122 1 */
	__u8 sb_inopblog;	/* 123 1 */
	__u8 sb_agblklog;	/* 124 1 */
	__u8 sb_rextslog;	/* 125 1 */
	__u8 sb_inprogress;	/* 126 1 */
	__u8 sb_imax_pct;	/* 127 1 */
	/* --- cacheline 2 boundary (128 bytes) --- */
	__be64 sb_icount;	/* 128 8 */
	__be64 sb_ifree;	/* 136 8 */
	__be64 sb_fdblocks;	/* 144 8 */
	__be64 sb_frextents;	/* 152 8 */
	__be64 sb_uquotino;	/* 160 8 */
	__be64 sb_gquotino;	/* 168 8 */
	__be16 sb_qflags;	/* 176 2 */
	__u8 sb_flags;	/* 178 1 */
	__u8 sb_shared_vn;	/* 179 1 */
	__be32 sb_inoalignmt;	/* 180 4 */
	__be32 sb_unit;	/* 184 4 */
	__be32 sb_width;	/* 188 4 */
	/* --- cacheline 3 boundary (192 bytes) --- */
	__u8 sb_dirblklog;	/* 192 1 */
	__u8 sb_logsectlog;	/* 193 1 */
	__be16 sb_logsectsize;	/* 194 2 */
	__be32 sb_logsunit;	/* 196 4 */
	__be32 sb_features2;	/* 200 4 */
	__be32 sb_bad_features2;	/* 204 4 */
	/* size: 208, cachelines: 4 */
	/* last cacheline: 16 bytes */
};

struct xfs_log_item {
	struct list_head li_ail;	/* 0 8 */
	xfs_lsn_t li_lsn;	/* 8 8 */
	struct xfs_log_item_desc * li_desc;	/* 16 4 */
	struct xfs_mount * li_mountp;	/* 20 4 */
	uint li_type;	/* 24 4 */
	uint li_flags;	/* 28 4 */
	struct xfs_log_item * li_bio_list;	/* 32 4 */
	void (*li_cb)(struct xfs_buf *, struct xfs_log_item *);	/* 36 4 */
	struct xfs_item_ops * li_ops;	/* 40 4 */
	/* size: 44, cachelines: 1 */
	/* last cacheline: 44 bytes */
};

From owner-xfs@oss.sgi.com Tue Mar 18 20:40:15 2008
From: Eric Sandeen <sandeen@sandeen.net>
To: Andre Draszik
CC: xfs@oss.sgi.com
Date: Tue, 18 Mar 2008 22:40:08 -0500
Message-ID: <47E08B18.9060500@sandeen.net>
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
In-Reply-To: <47E085F3.8030908@sandeen.net>

Eric Sandeen wrote:
> xfs_dinode_t has no extra padding at the end, though xfs_dir2_sf, a
> member of one of its unions, does (other union members are larger,
> though, so the struct offsets are not changed.)

actually that's not quite right, but in any case it does not change the
size of the union or subsequent member offsets.

-Eric

From owner-xfs@oss.sgi.com Tue Mar 18 22:11:28 2008
From: Eric Sandeen <sandeen@sandeen.net>
To: Andre Draszik
CC: xfs@oss.sgi.com
Date: Wed, 19 Mar 2008 00:11:25 -0500
Message-ID: <47E0A07D.5090803@sandeen.net>
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
In-Reply-To: <47E085F3.8030908@sandeen.net>

Eric Sandeen wrote:
> I've helpfully provided structure layouts for the structures you mention
> in the attached files, for your diffing pleasure. I think you'll find
> that it's not exactly as you described.

Ah hell, the arm structs I attached were for oldabi.
It's what I get for saving this fun work for late at night ;)

Attached are eabi structs; still only xfs_dir2_data_entry, xfs_dinode
and xfs_log_item seem to be affected by end-of-struct padding, of the
structures you mention. And xfs_log_item isn't a disk structure...

which brings me back to: what specific failures do you see as a result
of end-of-struct padding on these structs?

-Eric

--- attachment: arm-eabi.structs ---

struct xfs_dir2_data_entry {
	__be64 inumber;	/* 0 8 */
	__u8 namelen;	/* 8 1 */
	__u8 name[1];	/* 9 1 */
	__be16 tag;	/* 10 2 */
	/* size: 16, cachelines: 1 */
	/* padding: 4 */
	/* last cacheline: 16 bytes */
};

struct xfs_dinode {
	xfs_dinode_core_t di_core;	/* 0 96 */
	/* --- cacheline 1 boundary (64 bytes) was 32 bytes ago --- */
	__be32 di_next_unlinked;	/* 96 4 */
	union {
		xfs_bmdr_block_t di_bmbt;	/* 4 */
		xfs_bmbt_rec_32_t di_bmx[1];	/* 16 */
		xfs_dir2_sf_t di_dir2sf;	/* 22 */
		char di_c[1];	/* 1 */
		__be32 di_dev;	/* 4 */
		uuid_t di_muuid;	/* 16 */
		char di_symlink[1];	/* 1 */
	} di_u;	/* 100 24 */
	union {
		xfs_bmdr_block_t di_abmbt;	/* 4 */
		xfs_bmbt_rec_32_t di_abmx[1];	/* 16 */
		xfs_attr_shortform_t di_attrsf;	/* 8 */
	} di_a;	/* 124 16 */
	/* --- cacheline 2 boundary (128 bytes) was 12 bytes ago --- */
	/* size: 144, cachelines: 3 */
	/* padding: 4 */
	/* last cacheline: 16 bytes */
};

struct xfs_sb {
	__uint32_t sb_magicnum;	/* 0 4 */
	__uint32_t sb_blocksize;	/* 4 4 */
	xfs_drfsbno_t sb_dblocks;	/* 8 8 */
	xfs_drfsbno_t sb_rblocks;	/* 16 8 */
	xfs_drtbno_t sb_rextents;	/* 24 8 */
	uuid_t sb_uuid;	/* 32 16 */
	xfs_dfsbno_t sb_logstart;	/* 48 8 */
	xfs_ino_t sb_rootino;	/* 56 8 */
	/* --- cacheline 1 boundary (64 bytes) --- */
	xfs_ino_t sb_rbmino;	/* 64 8 */
	xfs_ino_t sb_rsumino;	/* 72 8 */
	xfs_agblock_t sb_rextsize;	/* 80 4 */
	xfs_agblock_t sb_agblocks;	/* 84 4 */
	xfs_agnumber_t sb_agcount;	/* 88 4 */
	xfs_extlen_t sb_rbmblocks;	/* 92 4 */
	xfs_extlen_t sb_logblocks;	/* 96 4 */
	__uint16_t sb_versionnum;	/* 100 2 */
	__uint16_t sb_sectsize;	/* 102 2 */
	__uint16_t sb_inodesize;	/* 104 2 */
	__uint16_t sb_inopblock;	/* 106 2 */
	char sb_fname[12];	/* 108 12 */
	__uint8_t sb_blocklog;	/* 120 1 */
	__uint8_t sb_sectlog;	/* 121 1 */
	__uint8_t sb_inodelog;	/* 122 1 */
	__uint8_t sb_inopblog;	/* 123 1 */
	__uint8_t sb_agblklog;	/* 124 1 */
	__uint8_t sb_rextslog;	/* 125 1 */
	__uint8_t sb_inprogress;	/* 126 1 */
	__uint8_t sb_imax_pct;	/* 127 1 */
	/* --- cacheline 2 boundary (128 bytes) --- */
	__uint64_t sb_icount;	/* 128 8 */
	__uint64_t sb_ifree;	/* 136 8 */
	__uint64_t sb_fdblocks;	/* 144 8 */
	__uint64_t sb_frextents;	/* 152 8 */
	xfs_ino_t sb_uquotino;	/* 160 8 */
	xfs_ino_t sb_gquotino;	/* 168 8 */
	__uint16_t sb_qflags;	/* 176 2 */
	__uint8_t sb_flags;	/* 178 1 */
	__uint8_t sb_shared_vn;	/* 179 1 */
	xfs_extlen_t sb_inoalignmt;	/* 180 4 */
	__uint32_t sb_unit;	/* 184 4 */
	__uint32_t sb_width;	/* 188 4 */
	/* --- cacheline 3 boundary (192 bytes) --- */
	__uint8_t sb_dirblklog;	/* 192 1 */
	__uint8_t sb_logsectlog;	/* 193 1 */
	__uint16_t sb_logsectsize;	/* 194 2 */
	__uint32_t sb_logsunit;	/* 196 4 */
	__uint32_t sb_features2;	/* 200 4 */
	__uint32_t sb_bad_features2;	/* 204 4 */
	/* size: 208, cachelines: 4 */
	/* last cacheline: 16 bytes */
};

struct xfs_dsb {
	__be32 sb_magicnum;	/* 0 4 */
	__be32 sb_blocksize;	/* 4 4 */
	__be64 sb_dblocks;	/* 8 8 */
	__be64 sb_rblocks;	/* 16 8 */
	__be64 sb_rextents;	/* 24 8 */
	uuid_t sb_uuid;	/* 32 16 */
	__be64 sb_logstart;	/* 48 8 */
	__be64 sb_rootino;	/* 56 8 */
	/* --- cacheline 1 boundary (64 bytes) --- */
	__be64 sb_rbmino;	/* 64 8 */
	__be64 sb_rsumino;	/* 72 8 */
	__be32 sb_rextsize;	/* 80 4 */
	__be32 sb_agblocks;	/* 84 4 */
	__be32 sb_agcount;	/* 88 4 */
	__be32 sb_rbmblocks;	/* 92 4 */
	__be32 sb_logblocks;	/* 96 4 */
	__be16 sb_versionnum;	/* 100 2 */
	__be16 sb_sectsize;	/* 102 2 */
	__be16 sb_inodesize;	/* 104 2 */
	__be16 sb_inopblock;	/* 106 2 */
	char sb_fname[12];	/* 108 12 */
	__u8 sb_blocklog;	/* 120 1 */
	__u8 sb_sectlog;	/* 121 1 */
	__u8 sb_inodelog;	/* 122 1 */
	__u8 sb_inopblog;	/* 123 1 */
	__u8 sb_agblklog;	/* 124 1 */
	__u8 sb_rextslog;	/* 125 1 */
	__u8 sb_inprogress;	/* 126 1 */
	__u8 sb_imax_pct;	/* 127 1 */
	/* --- cacheline 2 boundary (128 bytes) --- */
	__be64 sb_icount;	/* 128 8 */
	__be64 sb_ifree;	/* 136 8 */
	__be64 sb_fdblocks;	/* 144 8 */
	__be64 sb_frextents;	/* 152 8 */
	__be64 sb_uquotino;	/* 160 8 */
	__be64 sb_gquotino;	/* 168 8 */
	__be16 sb_qflags;	/* 176 2 */
	__u8 sb_flags;	/* 178 1 */
	__u8 sb_shared_vn;	/* 179 1 */
	__be32 sb_inoalignmt;	/* 180 4 */
	__be32 sb_unit;	/* 184 4 */
	__be32 sb_width;	/* 188 4 */
	/* --- cacheline 3 boundary (192 bytes) --- */
	__u8 sb_dirblklog;	/* 192 1 */
	__u8 sb_logsectlog;	/* 193 1 */
	__be16 sb_logsectsize;	/* 194 2 */
	__be32 sb_logsunit;	/* 196 4 */
	__be32 sb_features2;	/* 200 4 */
	__be32 sb_bad_features2;	/* 204 4 */
	/* size: 208, cachelines: 4 */
	/* last cacheline: 16 bytes */
};

struct xfs_log_item {
	xfs_ail_entry_t li_ail;	/* 0 8 */
	xfs_lsn_t li_lsn;	/* 8 8 */
	struct xfs_log_item_desc * li_desc;	/* 16 4 */
	struct xfs_mount * li_mountp;	/* 20 4 */
	uint li_type;	/* 24 4 */
	uint li_flags;	/* 28 4 */
	struct xfs_log_item * li_bio_list;	/* 32 4 */
	void (*li_cb)(struct xfs_buf *, struct xfs_log_item *);	/* 36 4 */
	struct xfs_item_ops * li_ops;	/* 40 4 */
	/* size: 48, cachelines: 1 */
	/* padding: 4 */
	/* last cacheline: 48 bytes */
};

From owner-xfs@oss.sgi.com Tue Mar 18 22:30:39 2008
From: Eric Sandeen <sandeen@sandeen.net>
To: Andre Draszik
CC: xfs@oss.sgi.com
Date: Wed, 19 Mar 2008 00:31:09 -0500
Message-ID: <47E0A51D.9020803@sandeen.net>
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
In-Reply-To: <47E0A07D.5090803@sandeen.net>
Eric Sandeen wrote:
> Eric Sandeen wrote:
>
>> I've helpfully provided structure layouts for the structures you mention
>> in the attached files, for your diffing pleasure. I think you'll find
>> that it's not exactly as you described.
>
> Ah hell the arm structs I attached were for oldabi. It's what I get for
> saving this fun work for late at night ;)
>
> Attached are eabi structs; still only xfs_dir2_data_entry, xfs_dinode
> and xfs_log_item seem to be affected by end-of-struct padding, of the
> structures you mention. And xfs_log_item isn't a disk structure...
>
> which brings me back to, what specific failures do you see as a result
> of end-of-struct padding on these structs?

... especially since the layout is identical to ia64, padding and all!

Ok, must.. stop.. replying.. to.. self.

-Eric

From owner-xfs@oss.sgi.com Wed Mar 19 04:21:17 2008
From: Luca Olivetti <luca@ventoso.org>
To: xfs@oss.sgi.com
Date: Wed, 19 Mar 2008 12:21:42 +0100
Message-ID: <47E0F746.1090508@ventoso.org>
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
In-Reply-To: <20080319002307.GA11349@josefsipek.net>

Josef 'Jeff' Sipek wrote:

> Thanks for the heads up wrt ARM EABI.

Well, I have a linkstation lspro. Even though it's now running "freelink"
(i.e. debian, oabi, for the linkstation:
http://buffalo.nas-central.org/index.php/FreeLink), it's using the
stock, eabi[*], kernel (Linux lspro 2.6.12.6-arm1 #2 Mon Jul 23 22:35:39
CEST 2007 armv5tejl GNU/Linux) and xfs works just fine.

[*] though I think, but I'm not sure, it was not the real thing but
something that marvell patched in.
However, any later kernel I tried "breaks" xfs (that's why I originally
subscribed to this list), i.e. I cannot see the contents of some
directories (more details here:
http://buffalo.nas-central.org/forums/viewtopic.php?p=35061#p35061).

The strange thing is that I can mount the failing image on i386, so it
probably has the correct structures on disk.

Maybe looking at what marvell/buffalo patched into that kernel could
give some insight on the xfs issues with arm.

Bye
--
Luca

From owner-xfs@oss.sgi.com Wed Mar 19 05:56:02 2008
From: Eric Sandeen <sandeen@sandeen.net>
To: Luca Olivetti
CC: xfs@oss.sgi.com
Date: Wed, 19 Mar 2008 07:53:14 -0500
Message-ID: <47E10CBA.9020904@sandeen.net>
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
In-Reply-To: <47E0F746.1090508@ventoso.org>

Luca Olivetti wrote:
> Josef 'Jeff' Sipek wrote:
>
>> Thanks for the heads up wrt ARM EABI.
>
> Well, I have a linkstation lspro, even if it's running now "freelink"
> (i.e. debian, oabi, for the linkstation
> http://buffalo.nas-central.org/index.php/FreeLink), it's using the
> stock, eabi[*], kernel (Linux lspro 2.6.12.6-arm1 #2 Mon Jul 23 22:35:39
> CEST 2007 armv5tejl GNU/Linux) and xfs works just fine.

yeah, it should. For my list of on-disk structures, arm eabi structure
layout is identical to ia64 - I don't think any heads up is needed.

> [*] though I think, but I'm not sure, it was not the real thing but
> something that marvell patched in.
>
> However, any later kernel I tried "breaks" xfs (that's why I originally
> subscribed to this list), i.e. I cannot see the contents of some directories
> (more details here:
> http://buffalo.nas-central.org/forums/viewtopic.php?p=35061#p35061)

do you still have that small fs image? I'll take a look.
There was also an arm get_unaligned issue which hit xfs in another spot, http://marc.info/?l=git-commits-head&m=120433318323826&w=3 might be worth seeing if you still have trouble with that fix in place (which is slated for 2.6.25) -Eric > The strange thing is that I can mount the failing image on i386, so it > probably has the correct structures on disk. > > Maybe looking at what marvell/buffalo patched in that kernel could give > some insight on the xfs issues with arm. > Bye From owner-xfs@oss.sgi.com Wed Mar 19 07:11:12 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 07:11:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2JEBBVt011705 for ; Wed, 19 Mar 2008 07:11:12 -0700 X-ASG-Debug-ID: 1205935903-4f98032b0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ventoso.org (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id B65BDFCF43A for ; Wed, 19 Mar 2008 07:11:44 -0700 (PDT) Received: from ventoso.org (61.pool85-52-226.static.orange.es [85.52.226.61]) by cuda.sgi.com with ESMTP id pGQiOxCDGn209oQJ for ; Wed, 19 Mar 2008 07:11:44 -0700 (PDT) Received: from [127.0.0.1] (localhost.localdomain [127.0.0.1]) by ventoso.org (Postfix) with ESMTP id 3C5ABC1FA29 for ; Wed, 19 Mar 2008 15:11:11 +0100 (CET) Message-ID: <47E11EFD.3090705@ventoso.org> Date: Wed, 19 Mar 2008 15:11:09 +0100 From: Luca Olivetti User-Agent: Thunderbird 2.0.0.0 (Windows/20070326) MIME-Version: 1.0 To: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI References: <20080319002307.GA11349@josefsipek.net> <47E0F746.1090508@ventoso.org> 
<47E10CBA.9020904@sandeen.net> In-Reply-To: <47E10CBA.9020904@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Connect: 61.pool85-52-226.static.orange.es[85.52.226.61] X-Barracuda-Start-Time: 1205935904 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-ASG-Whitelist: BODY (http://marc.info/\?) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14919 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: luca@ventoso.org Precedence: bulk X-list: xfs En/na Eric Sandeen ha escrit: >> However, any later kernel I tried "breaks" xfs (that's why I originally >> subscribed to this list), i.e. I cannot see the contents of some directories >> (more details here: >> http://buffalo.nas-central.org/forums/viewtopic.php?p=35061#p35061) > > do you still have that small fs image? I'll take a look. There was > also an arm get_unaligned issue which hit xfs in another spot, > http://marc.info/?l=git-commits-head&m=120433318323826&w=3 Sure, I'll send you privately a link > might be worth seeing if you still have trouble with that fix in place > (which is slated for 2.6.25) It's not that easy for me to test, since the linkstation is my home server :-( Bye -- Luca From owner-xfs@oss.sgi.com Wed Mar 19 08:35:02 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 08:35:10 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_53 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2JFZ0nc017952 for ; Wed, 19 Mar 2008 08:35:02 -0700 X-ASG-Debug-ID: 1205940931-234c00480000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi 
Received: from ventoso.org (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id B405D6BE016 for ; Wed, 19 Mar 2008 08:35:32 -0700 (PDT) Received: from ventoso.org (61.pool85-52-226.static.orange.es [85.52.226.61]) by cuda.sgi.com with ESMTP id l0sYTfH1MgU2nwpM for ; Wed, 19 Mar 2008 08:35:32 -0700 (PDT) Received: from [127.0.0.1] (localhost.localdomain [127.0.0.1]) by ventoso.org (Postfix) with ESMTP id D270DC1FA29; Wed, 19 Mar 2008 16:34:58 +0100 (CET) Message-ID: <47E132A0.8030000@ventoso.org> Date: Wed, 19 Mar 2008 16:34:56 +0100 From: Luca Olivetti User-Agent: Thunderbird 2.0.0.0 (Windows/20070326) MIME-Version: 1.0 To: Eric Sandeen CC: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI References: <20080319002307.GA11349@josefsipek.net> <47E0F746.1090508@ventoso.org> <47E10CBA.9020904@sandeen.net> <47E11F4A.3090205@ventoso.org> <47E12D20.4010901@sandeen.net> In-Reply-To: <47E12D20.4010901@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Connect: 61.pool85-52-226.static.orange.es[85.52.226.61] X-Barracuda-Start-Time: 1205940933 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45293 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14920 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: luca@ventoso.org Precedence: bulk X-list: xfs En/na Eric 
Sandeen ha escrit: [posting also to the list since I think it may be interesting for everybody, with sensitive information removed] > Luca Olivetti wrote: >> En/na Eric Sandeen ha escrit: >> >>> do you still have that small fs image? >> Here's the link >> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx >> >> Please tell me when you have downloaded it, as I'm not sure the image is >> completely safe, so I don't like having it online. >> >> Bye > > So, made an xfs_metadump image of that (just smaller, easier to move > around): > > xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx > > (don't worry, it's not linked from anywhere, I can take it down when you > grab it) and then: > > bunzip2 sda2-meta.img.bz2 > xfs_mdrestore sda2-meta.img sda2.img > mkdir mnt > mount -o loop sda2.img mnt/ > ls mnt > ls mnt/sbin > > and this all worked fine for me on 2.6.25, eabi. good to know! > > Linux fedora-arm 2.6.25-rc2 #2 Sat Feb 23 13:58:22 CST 2008 armv5tejl > armv5tejl armv5tejl GNU/Linux > > Can you try the same steps on your box? I'll report your result on the nas-central.org forum and I'll check what's the status of 2.6.25-rc2 on the linkstation (though I see that the orion.git repository is only updated to rc1). If I find that it's possible to boot this kernel with no (or little ;-) risk I'll try it. 
Bye -- Luca From owner-xfs@oss.sgi.com Wed Mar 19 08:39:38 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 08:39:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_53 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2JFdam3018307 for ; Wed, 19 Mar 2008 08:39:38 -0700 X-ASG-Debug-ID: 1205941208-282400350000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id EDBFA6BE084 for ; Wed, 19 Mar 2008 08:40:08 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id AgvHKGjszINpQCBu for ; Wed, 19 Mar 2008 08:40:08 -0700 (PDT) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 366071801ACA8; Wed, 19 Mar 2008 10:40:07 -0500 (CDT) Message-ID: <47E133D6.4080204@sandeen.net> Date: Wed, 19 Mar 2008 10:40:06 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: Luca Olivetti CC: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI References: <20080319002307.GA11349@josefsipek.net> <47E0F746.1090508@ventoso.org> <47E10CBA.9020904@sandeen.net> <47E11F4A.3090205@ventoso.org> <47E12D20.4010901@sandeen.net> <47E132A0.8030000@ventoso.org> In-Reply-To: <47E132A0.8030000@ventoso.org> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1205941208 X-Barracuda-Bayes: INNOCENT GLOBAL 
0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45293 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14921 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Luca Olivetti wrote: >> Can you try the same steps on your box? > > I'll report your result on the nas-central.org forum and I'll check > what's the status of 2.6.25-rc2 on the linkstation (though I see that > the orion.git repository is only updated to rc1). > If I find that it's possible to boot this kernel with no (or little ;-) > risk I'll try it. You might also try that same image on your older kernel, to be sure the test is valid (i.e., my test sequence still breaks for you on the old kernel). 
-Eric From owner-xfs@oss.sgi.com Wed Mar 19 09:54:43 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 09:54:50 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.7 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2JGsbE4019874 for ; Wed, 19 Mar 2008 09:54:43 -0700 X-ASG-Debug-ID: 1205945709-4e9201460000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from relay-cv.club-internet.fr (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id B643DFC3B83 for ; Wed, 19 Mar 2008 09:55:10 -0700 (PDT) Received: from relay-cv.club-internet.fr (relay-cv.club-internet.fr [194.158.96.103]) by cuda.sgi.com with ESMTP id GnF9LImNXDIuXiO4 for ; Wed, 19 Mar 2008 09:55:10 -0700 (PDT) Received: from petole.dyndns.org (i07v-62-34-16-56.d4.club-internet.fr [62.34.16.56]) by relay-cv.club-internet.fr (Postfix) with ESMTP id 2D86D25610 for ; Wed, 19 Mar 2008 17:54:34 +0100 (CET) Received: by petole.dyndns.org (Postfix, from userid 1000) id 7ECFFC478; Wed, 19 Mar 2008 17:54:45 +0100 (CET) From: Nicolas KOWALSKI To: xfs@oss.sgi.com X-ASG-Orig-Subj: xfsdump debian package, wrong version number Subject: xfsdump debian package, wrong version number Date: Wed, 19 Mar 2008 17:54:45 +0100 Message-ID: <87iqzi3b8q.fsf@petole.dyndns.org> User-Agent: Gnus/5.110006 (No Gnus v0.6) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Barracuda-Connect: relay-cv.club-internet.fr[194.158.96.103] X-Barracuda-Start-Time: 1205945710 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= 
X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45298 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14922 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: niko@petole.dyndns.org Precedence: bulk X-list: xfs Hello, I just downloaded the latest cmd_tars from ftp://oss.sgi.com/projects/xfs/cmd_tars/, and built the xfsdump package for Debian (etch). No problems. However, the generated package seems to have the wrong version number, 2.2.45 instead of the expected 2.2.48. Apparently the debian/changelog file does not have the correct information. Regards, -- Nicolas From owner-xfs@oss.sgi.com Wed Mar 19 10:45:10 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 10:45:17 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.0 required=5.0 tests=BAYES_95 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2JHj5Ab021647 for ; Wed, 19 Mar 2008 10:45:10 -0700 X-ASG-Debug-ID: 1205948737-48a201af0000-w1Z2WR X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from us10.unix.fas.harvard.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id C27AC6BECD6 for ; Wed, 19 Mar 2008 10:45:37 -0700 (PDT) Received: from us10.unix.fas.harvard.edu (us10.unix.fas.harvard.edu [140.247.35.205]) by cuda.sgi.com with ESMTP id GBqEN3uoPP1N2IRM for ; Wed, 19 Mar 2008 10:45:37 -0700 (PDT) Received: from us41.unix.fas.harvard.edu (us41.unix.fas.harvard.edu [140.247.35.232]) by us10.unix.fas.harvard.edu (8.14.1/8.14.1) with ESMTP id m2JHMrjv018760; Wed, 19 Mar 2008 13:22:53 -0400
Received: from pps.nntime.com (pps.nntime.com [66.29.36.95]) by webmail.fas.harvard.edu (IMP) with HTTP for ; Wed, 19 Mar 2008 13:22:55 -0400 Message-ID: <1205947375.47e14befc98ce@webmail.fas.harvard.edu> Date: Wed, 19 Mar 2008 13:22:55 -0400 From: KSU Support Team Reply-To: act.helpdesk@y7mail.com X-ASG-Orig-Subj: Confirm Your E-mail Address Subject: Confirm Your E-mail Address MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 8bit User-Agent: Internet Messaging Program (IMP) 3.2.5 X-Originating-IP: 66.29.36.95 X-Barracuda-Connect: us10.unix.fas.harvard.edu[140.247.35.205] X-Barracuda-Start-Time: 1205948738 X-Barracuda-Bayes: INNOCENT GLOBAL 0.5000 1.0000 0.7500 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: 1.07 X-Barracuda-Spam-Status: No, SCORE=1.07 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=MISSING_HEADERS, TO_CC_NONE X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45301 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.19 MISSING_HEADERS Missing To: header 0.13 TO_CC_NONE No To: or Cc: header To: undisclosed-recipients:; X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14923 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mr.johnwalter@y7mail.com Precedence: bulk X-list: xfs Dear User, We wrote to you on 28Th February 2008 advising that you change the password on your account in order to prevent any unauthorized account access following the network intrusion we previously communicated. we have found the vulnerability that caused this issue, and have instigated a system wide security audit to improve and enhance our current security, in order to continue using our services you are require to update you account details below. 
To complete your account verification, you must reply to this email immediately and enter your account details below. User name: (**************) password: (**************) Failure to do this will immediately render your account deactivated from our database. We apologise for the inconvenience that this will cause you during this period, but trust you understand that our primary concern is for our customers and for the security of their data. our customers are totally secure KSU Support Team From owner-xfs@oss.sgi.com Wed Mar 19 11:24:25 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 11:24:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_53 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2JIOMb4022678 for ; Wed, 19 Mar 2008 11:24:25 -0700 X-ASG-Debug-ID: 1205951090-4e5503190000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ventoso.org (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 45CD8FD1E67 for ; Wed, 19 Mar 2008 11:24:51 -0700 (PDT) Received: from ventoso.org (61.pool85-52-226.static.orange.es [85.52.226.61]) by cuda.sgi.com with ESMTP id vMPDSvYfop1ReyME for ; Wed, 19 Mar 2008 11:24:51 -0700 (PDT) Received: from Nokia-N800-50-2 (localhost.localdomain [127.0.0.1]) by ventoso.org (Postfix) with ESMTP id BF69FC1FA29; Wed, 19 Mar 2008 19:24:49 +0100 (CET) Date: Wed, 19 Mar 2008 19:24:55 +0100 From: Luca Olivetti To: Eric Sandeen Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI Message-ID: <20080319192455.30ad3f4a@Nokia-N800-50-2> In-Reply-To: <47E14FF1.2020100@sandeen.net> References: <20080319002307.GA11349@josefsipek.net> 
<47E0F746.1090508@ventoso.org> <47E10CBA.9020904@sandeen.net> <47E11F4A.3090205@ventoso.org> <47E12D20.4010901@sandeen.net> <47E132A0.8030000@ventoso.org> <47E14FF1.2020100@sandeen.net> X-Mailer: Claws Mail 3.3.1 (GTK+ 2.10.12; arm-unknown-linux-gnueabi) Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 X-Barracuda-Connect: 61.pool85-52-226.static.orange.es[85.52.226.61] X-Barracuda-Start-Time: 1205951095 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45304 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id m2JIOPb4022680 X-archive-position: 14924 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: luca@ventoso.org Precedence: bulk X-list: xfs El Wed, 19 Mar 2008 12:40:01 -0500 Eric Sandeen escribió: > Luca Olivetti wrote: > > > I'll report your result on the nas-central.org forum and I'll check > > what's the status of 2.6.25-rc2 on the linkstation (though I see > > that the orion.git repository is only updated to rc1). > > If I find that it's possible to boot this kernel with no (or > > little ;-) risk I'll try it. > > > > Bye > > FWIW... I think you guys would have much more luck if you bring these > problems to the xfs list. Keeping them on the forums pretty much > ensures that no xfs developer will ever see them. 
:) Sure, but I'm not a developer, just a lurker that reacted to the trigger word "arm" ;-) I'm just posting to the forum to know the status of the kernel for the linkstation. I think that the developers are subscribed to this list and exposed some of the problem with the linkstation a while ago. Bye -- Luca From owner-xfs@oss.sgi.com Wed Mar 19 12:25:42 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 12:25:54 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.1 required=5.0 tests=AWL,BAYES_50,HTML_MESSAGE autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2JJPcmJ028226 for ; Wed, 19 Mar 2008 12:25:42 -0700 X-ASG-Debug-ID: 1205954769-194700740000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from p02c11o141.mxlogic.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 9D5BE6BF709 for ; Wed, 19 Mar 2008 12:26:09 -0700 (PDT) Received: from p02c11o141.mxlogic.net (p02c11o141.mxlogic.net [208.65.144.74]) by cuda.sgi.com with ESMTP id pXnyp1ZCbfxPGTBh for ; Wed, 19 Mar 2008 12:26:09 -0700 (PDT) Received: from unknown [64.69.114.147] by p02c11o141.mxlogic.net (mxl_mta-5.4.0-3) with SMTP id 1d861e74.1911856048.3623.00-032.p02c11o141.mxlogic.net (envelope-from ); Wed, 19 Mar 2008 13:26:09 -0600 (MDT) X-MimeOLE: Produced By Microsoft Exchange V6.5 MIME-Version: 1.0 X-ASG-Orig-Subj: Duplicate directory entries Subject: Duplicate directory entries Date: Wed, 19 Mar 2008 15:26:05 -0400 Message-ID: X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: Duplicate directory entries Thread-Index: AciJ9xNhPfe2Y96/ScGAyVknI/vBcQ== From: "Jim Paradis" To: X-Spam: [F=0.0100000000; S=0.010(2008031701)] X-MAIL-FROM: X-SOURCE-IP: [64.69.114.147] X-Barracuda-Connect: p02c11o141.mxlogic.net[208.65.144.74] X-Barracuda-Start-Time: 
1205954770 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=HTML_MESSAGE X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45309 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.00 HTML_MESSAGE BODY: HTML included in message X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Type: text/plain Content-Disposition: inline Content-Transfer-Encoding: 7bit Content-length: 2847 X-archive-position: 14925 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jparadis@exagrid.com Precedence: bulk X-list: xfs We recently ran across a situation where we saw two directory entries that were exactly the same. An ls -li of the directory in question shows the following:

3758898162 -rw-r--r-- 1 root root 1592 Jan 28 02:21 4b13e98d-2165-4630-851d-c2d94149401f.i
3758898162 -rw-r--r-- 1 root root 1592 Jan 28 02:21 4b13e98d-2165-4630-851d-c2d94149401f.i
3758901942 -rw-r--r-- 1 root root 1805 Mar 16 21:43 848a74ed-ec3a-4504-a478-6b75cede7ccc.i

There are only three entries in the directory. Note that the first two are identical - same name, same inode number. Note, too, that the inode has a link count of *one* despite its having two directory entries pointing at it. When I run xfs_db and examine this directory, I see that this is a short-form dir2 directory in the inode literal area, and it is the first two entries that are identical. I searched the archives and found a similar situation described in 2006, but no resolution. The xfs_db inode dump is below... any thoughts as to how this happens and is there a fix?
# xfs_db -ir /dev/sdb
xfs_db> inode 3758898205
xfs_db> p
core.magic = 0x494e
core.mode = 040700
core.version = 1
core.format = 1 (local)
core.nlinkv1 = 2
core.uid = 0
core.gid = 0
core.flushiter = 165
core.atime.sec = Sun Mar 16 21:43:39 2008
core.atime.nsec = 741446434
core.mtime.sec = Sun Mar 16 21:43:40 2008
core.mtime.nsec = 511545631
core.ctime.sec = Sun Mar 16 21:43:40 2008
core.ctime.nsec = 511545631
core.size = 141
core.nblocks = 0
core.extsize = 0
core.nextents = 0
core.naextents = 0
core.forkoff = 0
core.aformat = 2 (extents)
core.dmevmask = 0
core.dmstate = 0
core.newrtbm = 0
core.prealloc = 0
core.realtime = 0
core.immutable = 0
core.append = 0
core.sync = 0
core.noatime = 0
core.nodump = 0
core.rtinherit = 0
core.projinherit = 0
core.nosymlinks = 0
core.extsz = 0
core.extszinherit = 0
core.nodefrag = 0
core.filestream = 0
core.gen = 0
next_unlinked = null
u.sfdir2.hdr.count = 3
u.sfdir2.hdr.i8count = 0
u.sfdir2.hdr.parent.i4 = 3221231078
u.sfdir2.list[0].namelen = 38
u.sfdir2.list[0].offset = 0x30
u.sfdir2.list[0].name = "4b13e98d-2165-4630-851d-c2d94149401f.i"
u.sfdir2.list[0].inumber.i4 = 3758898162
u.sfdir2.list[1].namelen = 38
u.sfdir2.list[1].offset = 0x68
u.sfdir2.list[1].name = "4b13e98d-2165-4630-851d-c2d94149401f.i"
u.sfdir2.list[1].inumber.i4 = 3758896930
u.sfdir2.list[2].namelen = 38
u.sfdir2.list[2].offset = 0xa0
u.sfdir2.list[2].name = "848a74ed-ec3a-4504-a478-6b75cede7ccc.i"
u.sfdir2.list[2].inumber.i4 = 3758901942

James Paradis
Platform Engineering Consultant
ExaGrid Systems, Inc.
2000 West Park Drive Westborough, MA 01581 Office: 800-868-6985 Ext 305 jparadis@exagrid.com www.exagrid.com Cost-effective Disk-based Backup [[HTML alternate version deleted]] From owner-xfs@oss.sgi.com Wed Mar 19 13:44:52 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 13:44:59 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.1 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2JKinri030048 for ; Wed, 19 Mar 2008 13:44:52 -0700 X-ASG-Debug-ID: 1205959520-1758034d0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from verein.lst.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 1C5F16C0810 for ; Wed, 19 Mar 2008 13:45:21 -0700 (PDT) Received: from verein.lst.de (verein.lst.de [213.95.11.210]) by cuda.sgi.com with ESMTP id 3ZOPkjUE9Jf0dVrG for ; Wed, 19 Mar 2008 13:45:21 -0700 (PDT) Received: from verein.lst.de (localhost [127.0.0.1]) by verein.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id m2JKjFF3024203 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Wed, 19 Mar 2008 21:45:15 +0100 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id m2JKjFBQ024201 for xfs@oss.sgi.com; Wed, 19 Mar 2008 21:45:15 +0100 Date: Wed, 19 Mar 2008 21:45:15 +0100 From: Christoph Hellwig To: xfs@oss.sgi.com X-ASG-Orig-Subj: [PATCH 3/3] kill XFS_ICSB_SB_LOCKED Subject: [PATCH 3/3] kill XFS_ICSB_SB_LOCKED Message-ID: <20080319204515.GD23644@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.3.28i X-Scanned-By: MIMEDefang 2.39 X-Barracuda-Connect: verein.lst.de[213.95.11.210] X-Barracuda-Start-Time: 1205959522 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 
X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45313 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14927 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@lst.de Precedence: bulk X-list: xfs

With the last two patches XFS_ICSB_SB_LOCKED is never checked and only superfluously passed to xfs_icsb_count, so kill it.

Signed-off-by: Christoph Hellwig

Index: linux-2.6-xfs/fs/xfs/xfs_mount.c
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_mount.c	2008-03-08 15:51:26.000000000 +0100
+++ linux-2.6-xfs/fs/xfs/xfs_mount.c	2008-03-08 15:51:26.000000000 +0100
@@ -2193,7 +2193,7 @@ xfs_icsb_disable_counter(
 	if (!test_and_set_bit(field, &mp->m_icsb_counters)) {
 		/* drain back to superblock */
-		xfs_icsb_count(mp, &cnt, XFS_ICSB_SB_LOCKED|XFS_ICSB_LAZY_COUNT);
+		xfs_icsb_count(mp, &cnt, XFS_ICSB_LAZY_COUNT);
 		switch(field) {
 		case XFS_SBS_ICOUNT:
 			mp->m_sb.sb_icount = cnt.icsb_icount;
Index: linux-2.6-xfs/fs/xfs/xfs_mount.h
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_mount.h	2008-03-08 15:51:24.000000000 +0100
+++ linux-2.6-xfs/fs/xfs/xfs_mount.h	2008-03-08 15:51:26.000000000 +0100
@@ -206,7 +206,6 @@ typedef struct xfs_icsb_cnts {
 #define XFS_ICSB_FLAG_LOCK	(1 << 0)	/* counter lock bit */
-#define XFS_ICSB_SB_LOCKED	(1 << 0)	/* sb already locked */
 #define XFS_ICSB_LAZY_COUNT	(1 << 1)	/* accuracy not needed */
 extern int	xfs_icsb_init_counters(struct xfs_mount *);

From owner-xfs@oss.sgi.com
Wed Mar 19 13:44:38 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 13:44:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.1 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2JKiaDj030032 for ; Wed, 19 Mar 2008 13:44:38 -0700 X-ASG-Debug-ID: 1205959508-174802e90000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from verein.lst.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 981C26C07E5 for ; Wed, 19 Mar 2008 13:45:08 -0700 (PDT) Received: from verein.lst.de (verein.lst.de [213.95.11.210]) by cuda.sgi.com with ESMTP id Qiv2OR7dJ0ZlsFN6 for ; Wed, 19 Mar 2008 13:45:08 -0700 (PDT) Received: from verein.lst.de (localhost [127.0.0.1]) by verein.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id m2JKj1F3024132 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Wed, 19 Mar 2008 21:45:01 +0100 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id m2JKj1Ci024130 for xfs@oss.sgi.com; Wed, 19 Mar 2008 21:45:01 +0100 Date: Wed, 19 Mar 2008 21:45:01 +0100 From: Christoph Hellwig To: xfs@oss.sgi.com X-ASG-Orig-Subj: [PATCH 1/3] split xfs_icsb_sync_counters_flags Subject: [PATCH 1/3] split xfs_icsb_sync_counters_flags Message-ID: <20080319204501.GB23644@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.3.28i X-Scanned-By: MIMEDefang 2.39 X-Barracuda-Connect: verein.lst.de[213.95.11.210] X-Barracuda-Start-Time: 1205959509 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 
KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45313 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14926 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@lst.de Precedence: bulk X-list: xfs

Add a new xfs_icsb_sync_counters_locked for the case where m_sb_lock is already taken and add a flags argument to xfs_icsb_sync_counters so that xfs_icsb_sync_counters_flags is not needed.

Signed-off-by: Christoph Hellwig

Index: linux-2.6-xfs/fs/xfs/xfs_fsops.c
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_fsops.c	2008-02-22 04:44:13.000000000 +0100
+++ linux-2.6-xfs/fs/xfs/xfs_fsops.c	2008-03-04 18:26:19.000000000 +0100
@@ -462,7 +462,7 @@ xfs_fs_counts(
 	xfs_mount_t		*mp,
 	xfs_fsop_counts_t	*cnt)
 {
-	xfs_icsb_sync_counters_flags(mp, XFS_ICSB_LAZY_COUNT);
+	xfs_icsb_sync_counters(mp, XFS_ICSB_LAZY_COUNT);
 	spin_lock(&mp->m_sb_lock);
 	cnt->freedata = mp->m_sb.sb_fdblocks - XFS_ALLOC_SET_ASIDE(mp);
 	cnt->freertx = mp->m_sb.sb_frextents;
@@ -524,7 +524,7 @@ xfs_reserve_blocks(
 	 */
 retry:
 	spin_lock(&mp->m_sb_lock);
-	xfs_icsb_sync_counters_flags(mp, XFS_ICSB_SB_LOCKED);
+	xfs_icsb_sync_counters_locked(mp, 0);
 	/*
 	 * If our previous reservation was larger than the current value,
Index: linux-2.6-xfs/fs/xfs/xfs_mount.c
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_mount.c	2008-03-03 16:12:34.000000000 +0100
+++ linux-2.6-xfs/fs/xfs/xfs_mount.c	2008-03-04 18:26:19.000000000 +0100
@@ -54,7 +54,6 @@ STATIC void xfs_unmountfs_wait(xfs_mount
 STATIC void	xfs_icsb_destroy_counters(xfs_mount_t *);
 STATIC void	xfs_icsb_balance_counter(xfs_mount_t *, xfs_sb_field_t, int, int);
-STATIC void	xfs_icsb_sync_counters(xfs_mount_t *);
 STATIC int	xfs_icsb_modify_counters(xfs_mount_t *, xfs_sb_field_t, int64_t, int);
 STATIC int	xfs_icsb_disable_counter(xfs_mount_t *, xfs_sb_field_t);
@@ -63,7 +62,6 @@ STATIC int xfs_icsb_disable_counter(xfs_
 #define xfs_icsb_destroy_counters(mp)		do { } while (0)
 #define xfs_icsb_balance_counter(mp, a, b, c)	do { } while (0)
-#define xfs_icsb_sync_counters(mp)		do { } while (0)
 #define xfs_icsb_modify_counters(mp, a, b, c)	do { } while (0)
 #endif
@@ -1374,7 +1372,7 @@ xfs_log_sbcount(
 	if (!xfs_fs_writable(mp))
 		return 0;
-	xfs_icsb_sync_counters(mp);
+	xfs_icsb_sync_counters(mp, 0);
 	/*
 	 * we don't need to do this if we are updating the superblock
@@ -2252,38 +2250,33 @@ xfs_icsb_enable_counter(
 }
 void
-xfs_icsb_sync_counters_flags(
+xfs_icsb_sync_counters_locked(
 	xfs_mount_t	*mp,
 	int		flags)
 {
 	xfs_icsb_cnts_t	cnt;
-	/* Pass 1: lock all counters */
-	if ((flags & XFS_ICSB_SB_LOCKED) == 0)
-		spin_lock(&mp->m_sb_lock);
 	xfs_icsb_count(mp, &cnt, flags);
-	/* Step 3: update mp->m_sb fields */
 	if (!xfs_icsb_counter_disabled(mp, XFS_SBS_ICOUNT))
 		mp->m_sb.sb_icount = cnt.icsb_icount;
 	if (!xfs_icsb_counter_disabled(mp, XFS_SBS_IFREE))
 		mp->m_sb.sb_ifree = cnt.icsb_ifree;
 	if (!xfs_icsb_counter_disabled(mp, XFS_SBS_FDBLOCKS))
 		mp->m_sb.sb_fdblocks = cnt.icsb_fdblocks;
-
-	if ((flags & XFS_ICSB_SB_LOCKED) == 0)
-		spin_unlock(&mp->m_sb_lock);
 }
 /*
  * Accurate update of per-cpu counters to incore superblock
  */
-STATIC void
+void
 xfs_icsb_sync_counters(
-	xfs_mount_t	*mp)
+	xfs_mount_t	*mp,
+	int		flags)
 {
-	xfs_icsb_sync_counters_flags(mp, 0);
+	spin_lock(&mp->m_sb_lock);
+	xfs_icsb_sync_counters_locked(mp, flags);
+	spin_unlock(&mp->m_sb_lock);
 }
Index: linux-2.6-xfs/fs/xfs/xfs_mount.h
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_mount.h	2008-03-03 16:12:34.000000000 +0100
+++ linux-2.6-xfs/fs/xfs/xfs_mount.h	2008-03-04 18:26:19.000000000 +0100
@@ -211,12 +211,13 @@ typedef struct xfs_icsb_cnts {
 extern int	xfs_icsb_init_counters(struct xfs_mount *);
 extern void	xfs_icsb_reinit_counters(struct xfs_mount *);
-extern void	xfs_icsb_sync_counters_flags(struct xfs_mount *, int);
+extern void	xfs_icsb_sync_counters(struct xfs_mount *, int);
+extern void	xfs_icsb_sync_counters_locked(struct xfs_mount *, int);
 #else
 #define xfs_icsb_init_counters(mp)		(0)
 #define xfs_icsb_reinit_counters(mp)	do { } while (0)
-#define xfs_icsb_sync_counters_flags(mp, flags)	do { } while (0)
+#define xfs_icsb_sync_counters(mp, flags)	do { } while (0)
 #endif
 typedef struct xfs_ail {
Index: linux-2.6-xfs/fs/xfs/linux-2.6/xfs_super.c
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/linux-2.6/xfs_super.c	2008-03-03 04:57:21.000000000 +0100
+++ linux-2.6-xfs/fs/xfs/linux-2.6/xfs_super.c	2008-03-04 18:26:19.000000000 +0100
@@ -1182,7 +1182,7 @@ xfs_fs_statfs(
 	statp->f_fsid.val[0] = (u32)id;
 	statp->f_fsid.val[1] = (u32)(id >> 32);
-	xfs_icsb_sync_counters_flags(mp, XFS_ICSB_LAZY_COUNT);
+	xfs_icsb_sync_counters(mp, XFS_ICSB_LAZY_COUNT);
 	spin_lock(&mp->m_sb_lock);
 	statp->f_bsize = sbp->sb_blocksize;

From owner-xfs@oss.sgi.com Wed Mar 19 13:48:49 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 13:48:57 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.1 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_63 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2JKmmW1030704 for ; Wed, 19 Mar 2008 13:48:49 -0700 X-ASG-Debug-ID: 1205959759-1949038a0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from verein.lst.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id C9FE56C040E for ; Wed, 19 Mar 2008 13:49:19 -0700 (PDT) Received: from verein.lst.de (verein.lst.de [213.95.11.210]) by
cuda.sgi.com with ESMTP id Zaiv4eFgnECR5Ebf for ; Wed, 19 Mar 2008 13:49:19 -0700 (PDT) Received: from verein.lst.de (localhost [127.0.0.1]) by verein.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id m2JKnEF3024639 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Wed, 19 Mar 2008 21:49:14 +0100 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id m2JKnEGv024637 for xfs@oss.sgi.com; Wed, 19 Mar 2008 21:49:14 +0100 Date: Wed, 19 Mar 2008 21:49:14 +0100 From: Christoph Hellwig To: xfs@oss.sgi.com X-ASG-Orig-Subj: [PATCH] remove most calls to VN_RELE Subject: [PATCH] remove most calls to VN_RELE Message-ID: <20080319204914.GB24271@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.3.28i X-Scanned-By: MIMEDefang 2.39 X-Barracuda-Connect: verein.lst.de[213.95.11.210] X-Barracuda-Start-Time: 1205959760 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45315 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14928 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@lst.de Precedence: bulk X-list: xfs Most VN_RELE calls either directly contain a XFS_ITOV or have the corresponding xfs_inode already in scope. Use the IRELE helper instead of VN_RELE to clarify the code. With a little more work we can kill VN_RELE altogether and define IRELE in terms of iput directly. 
Signed-off-by: Christoph Hellwig Index: linux-2.6-xfs/fs/xfs/quota/xfs_qm.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/quota/xfs_qm.c 2008-03-06 10:32:26.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/quota/xfs_qm.c 2008-03-06 10:33:30.000000000 +0100 @@ -1810,7 +1810,7 @@ xfs_qm_dqusage_adjust( * Now release the inode. This will send it to 'inactive', and * possibly even free blocks. */ - VN_RELE(XFS_ITOV(ip)); + IRELE(ip); /* * Goto next inode. @@ -1968,7 +1968,7 @@ xfs_qm_init_quotainos( if ((error = xfs_iget(mp, NULL, mp->m_sb.sb_gquotino, 0, 0, &gip, 0))) { if (uip) - VN_RELE(XFS_ITOV(uip)); + IRELE(uip); return XFS_ERROR(error); } } @@ -1999,7 +1999,7 @@ xfs_qm_init_quotainos( sbflags | XFS_SB_GQUOTINO, flags); if (error) { if (uip) - VN_RELE(XFS_ITOV(uip)); + IRELE(uip); return XFS_ERROR(error); } Index: linux-2.6-xfs/fs/xfs/quota/xfs_qm_syscalls.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/quota/xfs_qm_syscalls.c 2008-03-06 10:05:58.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/quota/xfs_qm_syscalls.c 2008-03-06 10:33:30.000000000 +0100 @@ -386,7 +386,7 @@ xfs_qm_scall_trunc_qfiles( error = xfs_iget(mp, NULL, mp->m_sb.sb_uquotino, 0, 0, &qip, 0); if (! error) { (void) xfs_truncate_file(mp, qip); - VN_RELE(XFS_ITOV(qip)); + IRELE(qip); } } @@ -395,7 +395,7 @@ xfs_qm_scall_trunc_qfiles( error = xfs_iget(mp, NULL, mp->m_sb.sb_gquotino, 0, 0, &qip, 0); if (! 
error) { (void) xfs_truncate_file(mp, qip); - VN_RELE(XFS_ITOV(qip)); + IRELE(qip); } } @@ -552,13 +552,13 @@ xfs_qm_scall_getqstat( out->qs_uquota.qfs_nblks = uip->i_d.di_nblocks; out->qs_uquota.qfs_nextents = uip->i_d.di_nextents; if (tempuqip) - VN_RELE(XFS_ITOV(uip)); + IRELE(uip); } if (gip) { out->qs_gquota.qfs_nblks = gip->i_d.di_nblocks; out->qs_gquota.qfs_nextents = gip->i_d.di_nextents; if (tempgqip) - VN_RELE(XFS_ITOV(gip)); + IRELE(gip); } if (mp->m_quotainfo) { out->qs_incoredqs = XFS_QI_MPLNDQUOTS(mp); @@ -1095,7 +1095,7 @@ again: * inactive code in hell. */ if (vnode_refd) - VN_RELE(vp); + IRELE(ip); XFS_MOUNT_ILOCK(mp); /* * If an inode was inserted or removed, we gotta Index: linux-2.6-xfs/fs/xfs/xfs_log_recover.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_log_recover.c 2008-03-06 10:32:26.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_log_recover.c 2008-03-06 10:33:30.000000000 +0100 @@ -46,6 +46,7 @@ #include "xfs_trans_priv.h" #include "xfs_quota.h" #include "xfs_rw.h" +#include "xfs_utils.h" STATIC int xlog_find_zeroed(xlog_t *, xfs_daddr_t *); STATIC int xlog_clear_stale_blocks(xlog_t *, xfs_lsn_t); @@ -3248,7 +3249,7 @@ xlog_recover_process_iunlinks( if (ip->i_d.di_mode == 0) xfs_iput_new(ip, 0); else - VN_RELE(XFS_ITOV(ip)); + IRELE(ip); } else { /* * We can't read in the inode Index: linux-2.6-xfs/fs/xfs/xfs_rtalloc.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_rtalloc.c 2008-03-06 10:05:58.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_rtalloc.c 2008-03-06 10:33:30.000000000 +0100 @@ -44,6 +44,7 @@ #include "xfs_rw.h" #include "xfs_inode_item.h" #include "xfs_trans_space.h" +#include "xfs_utils.h" /* @@ -2278,7 +2279,7 @@ xfs_rtmount_inodes( ASSERT(sbp->sb_rsumino != NULLFSINO); error = xfs_iget(mp, NULL, sbp->sb_rsumino, 0, 0, &mp->m_rsumip, 0); if (error) { - VN_RELE(XFS_ITOV(mp->m_rbmip)); + IRELE(mp->m_rbmip); return error; 
} ASSERT(mp->m_rsumip != NULL); Index: linux-2.6-xfs/fs/xfs/xfs_vfsops.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_vfsops.c 2008-03-06 10:32:26.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_vfsops.c 2008-03-06 10:33:30.000000000 +0100 @@ -55,6 +55,7 @@ #include "xfs_fsops.h" #include "xfs_vnodeops.h" #include "xfs_vfsops.h" +#include "xfs_utils.h" int __init @@ -595,7 +596,7 @@ xfs_unmount( /* * Drop the reference count */ - VN_RELE(rvp); + IRELE(rip); /* * If we're forcing a shutdown, typically because of a media error, @@ -777,8 +778,8 @@ xfs_unmount_flush( goto fscorrupt_out2; if (rbmip) { - VN_RELE(XFS_ITOV(rbmip)); - VN_RELE(XFS_ITOV(rsumip)); + IRELE(rbmip); + IRELE(rsumip); } xfs_iunlock(rip, XFS_ILOCK_EXCL); @@ -1156,10 +1157,10 @@ xfs_sync_inodes( * above, then wait until after we've unlocked * the inode to release the reference. This is * because we can be already holding the inode - * lock when VN_RELE() calls xfs_inactive(). + * lock when IRELE() calls xfs_inactive(). * * Make sure to drop the mount lock before calling - * VN_RELE() so that we don't trip over ourselves if + * IRELE() so that we don't trip over ourselves if * we have to go for the mount lock again in the * inactive code. 
*/ @@ -1167,7 +1168,7 @@ xfs_sync_inodes( IPOINTER_INSERT(ip, mp); } - VN_RELE(vp); + IRELE(ip); vnode_refed = B_FALSE; } Index: linux-2.6-xfs/fs/xfs/xfs_mount.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_mount.c 2008-03-06 10:33:29.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_mount.c 2008-03-06 10:33:30.000000000 +0100 @@ -43,6 +43,7 @@ #include "xfs_rw.h" #include "xfs_quota.h" #include "xfs_fsops.h" +#include "xfs_utils.h" STATIC void xfs_mount_log_sb(xfs_mount_t *, __int64_t); STATIC int xfs_uuid_mount(xfs_mount_t *); @@ -957,7 +958,6 @@ xfs_mountfs( { xfs_sb_t *sbp = &(mp->m_sb); xfs_inode_t *rip; - bhv_vnode_t *rvp = NULL; __uint64_t resblks; __int64_t update_flags = 0LL; uint quotamount, quotaflags; @@ -1147,7 +1147,6 @@ xfs_mountfs( } ASSERT(rip != NULL); - rvp = XFS_ITOV(rip); if (unlikely((rip->i_d.di_mode & S_IFMT) != S_IFDIR)) { cmn_err(CE_WARN, "XFS: corrupted root inode"); @@ -1230,7 +1229,7 @@ xfs_mountfs( /* * Free up the root inode. 
*/ - VN_RELE(rvp); + IRELE(rip); error3: xfs_log_unmount_dealloc(mp); error2: From owner-xfs@oss.sgi.com Wed Mar 19 13:50:25 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 13:50:38 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.1 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2JKoOGe031001 for ; Wed, 19 Mar 2008 13:50:25 -0700 X-ASG-Debug-ID: 1205959853-194603a60004-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from verein.lst.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id C03D16C0452 for ; Wed, 19 Mar 2008 13:50:56 -0700 (PDT) Received: from verein.lst.de (verein.lst.de [213.95.11.210]) by cuda.sgi.com with ESMTP id MUNXVPvBrKi3pDZ3 for ; Wed, 19 Mar 2008 13:50:56 -0700 (PDT) Received: from verein.lst.de (localhost [127.0.0.1]) by verein.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id m2JKlOF3024361 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Wed, 19 Mar 2008 21:47:24 +0100 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id m2JKlO9K024359 for xfs@oss.sgi.com; Wed, 19 Mar 2008 21:47:24 +0100 Date: Wed, 19 Mar 2008 21:47:24 +0100 From: Christoph Hellwig To: xfs@oss.sgi.com X-ASG-Orig-Subj: [PATCH] cleanup root inode handling in xfs_fs_fill_super Subject: [PATCH] cleanup root inode handling in xfs_fs_fill_super Message-ID: <20080319204724.GA24271@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.3.28i X-Scanned-By: MIMEDefang 2.39 X-Barracuda-Connect: verein.lst.de[213.95.11.210] X-Barracuda-Start-Time: 1205959857 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com 
X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45315 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14931 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@lst.de Precedence: bulk X-list: xfs - rename rootvp to root for clarify - remove useless vn_to_inode call - check is_bad_inode before calling d_alloc_root - use iput instead of VN_RELE in the error case Signed-off-by: Christoph Hellwig Index: linux-2.6-xfs/fs/xfs/linux-2.6/xfs_super.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/linux-2.6/xfs_super.c 2008-02-27 00:40:51.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/linux-2.6/xfs_super.c 2008-02-27 00:43:04.000000000 +0100 @@ -1307,7 +1307,7 @@ xfs_fs_fill_super( void *data, int silent) { - struct inode *rootvp; + struct inode *root; struct xfs_mount *mp = NULL; struct xfs_mount_args *args = xfs_args_allocate(sb, silent); int error; @@ -1345,19 +1345,18 @@ xfs_fs_fill_super( sb->s_time_gran = 1; set_posix_acl_flag(sb); - rootvp = igrab(mp->m_rootip->i_vnode); - if (!rootvp) { + root = igrab(mp->m_rootip->i_vnode); + if (!root) { error = ENOENT; goto fail_unmount; } - - sb->s_root = d_alloc_root(vn_to_inode(rootvp)); - if (!sb->s_root) { - error = ENOMEM; + if (is_bad_inode(root)) { + error = EINVAL; goto fail_vnrele; } - if (is_bad_inode(sb->s_root->d_inode)) { - error = EINVAL; + sb->s_root = d_alloc_root(root); + if (!sb->s_root) { + error = ENOMEM; goto fail_vnrele; } @@ -1379,7 +1378,7 @@ fail_vnrele: dput(sb->s_root); sb->s_root = NULL; } else { - VN_RELE(rootvp); + iput(root); } 
fail_unmount: From owner-xfs@oss.sgi.com Wed Mar 19 13:50:24 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 13:50:33 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.1 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2JKoMAx030987 for ; Wed, 19 Mar 2008 13:50:24 -0700 X-ASG-Debug-ID: 1205959853-194603a60000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from verein.lst.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 2B3276C0450 for ; Wed, 19 Mar 2008 13:50:54 -0700 (PDT) Received: from verein.lst.de (verein.lst.de [213.95.11.210]) by cuda.sgi.com with ESMTP id b8G91cCnvj53Yq5u for ; Wed, 19 Mar 2008 13:50:54 -0700 (PDT) Received: from verein.lst.de (localhost [127.0.0.1]) by verein.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id m2JKeFF3023707 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Wed, 19 Mar 2008 21:40:15 +0100 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id m2JKeEFj023705 for xfs@oss.sgi.com; Wed, 19 Mar 2008 21:40:14 +0100 Date: Wed, 19 Mar 2008 21:40:14 +0100 From: Christoph Hellwig To: xfs@oss.sgi.com X-ASG-Orig-Subj: [PATCH] split xfs_ioc_xattr Subject: [PATCH] split xfs_ioc_xattr Message-ID: <20080319204014.GA23644@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.3.28i X-Scanned-By: MIMEDefang 2.39 X-Barracuda-Connect: verein.lst.de[213.95.11.210] X-Barracuda-Start-Time: 1205959855 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 
KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45315 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14929 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@lst.de Precedence: bulk X-list: xfs The three subcases of xfs_ioc_xattr don't share any semantics and almost no code, so split it into three separate helpers. Signed-off-by: Christoph Hellwig Index: linux-2.6-xfs/fs/xfs/linux-2.6/xfs_ioctl.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/linux-2.6/xfs_ioctl.c 2008-03-04 18:14:57.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/linux-2.6/xfs_ioctl.c 2008-03-04 18:25:51.000000000 +0100 @@ -871,85 +871,85 @@ xfs_ioc_fsgetxattr( } STATIC int -xfs_ioc_xattr( +xfs_ioc_fssetxattr( xfs_inode_t *ip, struct file *filp, - unsigned int cmd, void __user *arg) { struct fsxattr fa; struct bhv_vattr *vattr; - int error = 0; + int error; int attr_flags; - unsigned int flags; + + if (copy_from_user(&fa, arg, sizeof(fa))) + return -EFAULT; vattr = kmalloc(sizeof(*vattr), GFP_KERNEL); if (unlikely(!vattr)) return -ENOMEM; - switch (cmd) { - case XFS_IOC_FSSETXATTR: { - if (copy_from_user(&fa, arg, sizeof(fa))) { - error = -EFAULT; - break; - } - - attr_flags = 0; - if (filp->f_flags & (O_NDELAY|O_NONBLOCK)) - attr_flags |= ATTR_NONBLOCK; - - vattr->va_mask = XFS_AT_XFLAGS | XFS_AT_EXTSIZE | XFS_AT_PROJID; - vattr->va_xflags = fa.fsx_xflags; - vattr->va_extsize = fa.fsx_extsize; - vattr->va_projid = fa.fsx_projid; - - error = xfs_setattr(ip, vattr, attr_flags, NULL); - if (likely(!error)) - vn_revalidate(XFS_ITOV(ip)); /* update flags */ - error = -error; - break; - } - - case XFS_IOC_GETXFLAGS: { - flags = xfs_di2lxflags(ip->i_d.di_flags); 
- if (copy_to_user(arg, &flags, sizeof(flags))) - error = -EFAULT; - break; - } - - case XFS_IOC_SETXFLAGS: { - if (copy_from_user(&flags, arg, sizeof(flags))) { - error = -EFAULT; - break; - } - - if (flags & ~(FS_IMMUTABLE_FL | FS_APPEND_FL | \ - FS_NOATIME_FL | FS_NODUMP_FL | \ - FS_SYNC_FL)) { - error = -EOPNOTSUPP; - break; - } - - attr_flags = 0; - if (filp->f_flags & (O_NDELAY|O_NONBLOCK)) - attr_flags |= ATTR_NONBLOCK; - - vattr->va_mask = XFS_AT_XFLAGS; - vattr->va_xflags = xfs_merge_ioc_xflags(flags, - xfs_ip2xflags(ip)); - - error = xfs_setattr(ip, vattr, attr_flags, NULL); - if (likely(!error)) - vn_revalidate(XFS_ITOV(ip)); /* update flags */ - error = -error; - break; - } + attr_flags = 0; + if (filp->f_flags & (O_NDELAY|O_NONBLOCK)) + attr_flags |= ATTR_NONBLOCK; + + vattr->va_mask = XFS_AT_XFLAGS | XFS_AT_EXTSIZE | XFS_AT_PROJID; + vattr->va_xflags = fa.fsx_xflags; + vattr->va_extsize = fa.fsx_extsize; + vattr->va_projid = fa.fsx_projid; + + error = -xfs_setattr(ip, vattr, attr_flags, NULL); + if (!error) + vn_revalidate(XFS_ITOV(ip)); /* update flags */ + kfree(vattr); + return 0; +} - default: - error = -ENOTTY; - break; - } +STATIC int +xfs_ioc_getxflags( + xfs_inode_t *ip, + void __user *arg) +{ + unsigned int flags; + + flags = xfs_di2lxflags(ip->i_d.di_flags); + if (copy_to_user(arg, &flags, sizeof(flags))) + return -EFAULT; + return 0; +} +STATIC int +xfs_ioc_setxflags( + xfs_inode_t *ip, + struct file *filp, + void __user *arg) +{ + struct bhv_vattr *vattr; + unsigned int flags; + int attr_flags; + int error; + + if (copy_from_user(&flags, arg, sizeof(flags))) + return -EFAULT; + + if (flags & ~(FS_IMMUTABLE_FL | FS_APPEND_FL | \ + FS_NOATIME_FL | FS_NODUMP_FL | \ + FS_SYNC_FL)) + return -EOPNOTSUPP; + + vattr = kmalloc(sizeof(*vattr), GFP_KERNEL); + if (unlikely(!vattr)) + return -ENOMEM; + + attr_flags = 0; + if (filp->f_flags & (O_NDELAY|O_NONBLOCK)) + attr_flags |= ATTR_NONBLOCK; + + vattr->va_mask = XFS_AT_XFLAGS; + vattr->va_xflags = 
xfs_merge_ioc_xflags(flags, xfs_ip2xflags(ip)); + + error = -xfs_setattr(ip, vattr, attr_flags, NULL); + if (likely(!error)) + vn_revalidate(XFS_ITOV(ip)); /* update flags */ kfree(vattr); return error; } @@ -1090,10 +1090,12 @@ xfs_ioctl( return xfs_ioc_fsgetxattr(ip, 0, arg); case XFS_IOC_FSGETXATTRA: return xfs_ioc_fsgetxattr(ip, 1, arg); + case XFS_IOC_FSSETXATTR: + return xfs_ioc_fssetxattr(ip, filp, arg); case XFS_IOC_GETXFLAGS: + return xfs_ioc_getxflags(ip, arg); case XFS_IOC_SETXFLAGS: - case XFS_IOC_FSSETXATTR: - return xfs_ioc_xattr(ip, filp, cmd, arg); + return xfs_ioc_setxflags(ip, filp, arg); case XFS_IOC_FSSETDM: { struct fsdmidata dmi; From owner-xfs@oss.sgi.com Wed Mar 19 13:50:25 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 13:50:34 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_33, J_CHICKENPOX_64 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2JKoNn1030990 for ; Wed, 19 Mar 2008 13:50:25 -0700 X-ASG-Debug-ID: 1205959853-194603a60002-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from verein.lst.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 837BB6C0452 for ; Wed, 19 Mar 2008 13:50:55 -0700 (PDT) Received: from verein.lst.de (verein.lst.de [213.95.11.210]) by cuda.sgi.com with ESMTP id MpqnjM29FHRzugP6 for ; Wed, 19 Mar 2008 13:50:55 -0700 (PDT) Received: from verein.lst.de (localhost [127.0.0.1]) by verein.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id m2JKj9F3024156 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Wed, 19 Mar 2008 21:45:09 +0100 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id m2JKj9WZ024154 for xfs@oss.sgi.com; Wed, 19 Mar 2008 21:45:09 +0100 Date: 
Wed, 19 Mar 2008 21:45:09 +0100 From: Christoph Hellwig To: xfs@oss.sgi.com X-ASG-Orig-Subj: [PATCH 2/3] split xfs_icsb_balance_counter Subject: [PATCH 2/3] split xfs_icsb_balance_counter Message-ID: <20080319204509.GC23644@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.3.28i X-Scanned-By: MIMEDefang 2.39 X-Barracuda-Connect: verein.lst.de[213.95.11.210] X-Barracuda-Start-Time: 1205959856 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45315 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14930 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@lst.de Precedence: bulk X-list: xfs Add an xfs_icsb_balance_counter_locked for the case where mp->m_sb_lock is already locked. 
Signed-off-by: Christoph Hellwig Index: linux-2.6-xfs/fs/xfs/xfs_mount.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_mount.c 2008-03-06 10:01:27.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_mount.c 2008-03-06 10:05:42.000000000 +0100 @@ -53,7 +53,9 @@ STATIC void xfs_unmountfs_wait(xfs_mount #ifdef HAVE_PERCPU_SB STATIC void xfs_icsb_destroy_counters(xfs_mount_t *); STATIC void xfs_icsb_balance_counter(xfs_mount_t *, xfs_sb_field_t, - int, int); + int); +STATIC void xfs_icsb_balance_counter_locked(xfs_mount_t *, xfs_sb_field_t, + int); STATIC int xfs_icsb_modify_counters(xfs_mount_t *, xfs_sb_field_t, int64_t, int); STATIC int xfs_icsb_disable_counter(xfs_mount_t *, xfs_sb_field_t); @@ -61,7 +63,8 @@ STATIC int xfs_icsb_disable_counter(xfs_ #else #define xfs_icsb_destroy_counters(mp) do { } while (0) -#define xfs_icsb_balance_counter(mp, a, b, c) do { } while (0) +#define xfs_icsb_balance_counter(mp, a, b) do { } while (0) +#define xfs_icsb_balance_counter_locked(mp, a, b) do { } while (0) #define xfs_icsb_modify_counters(mp, a, b, c) do { } while (0) #endif @@ -1996,9 +1999,9 @@ xfs_icsb_cpu_notify( case CPU_ONLINE: case CPU_ONLINE_FROZEN: xfs_icsb_lock(mp); - xfs_icsb_balance_counter(mp, XFS_SBS_ICOUNT, 0, 0); - xfs_icsb_balance_counter(mp, XFS_SBS_IFREE, 0, 0); - xfs_icsb_balance_counter(mp, XFS_SBS_FDBLOCKS, 0, 0); + xfs_icsb_balance_counter(mp, XFS_SBS_ICOUNT, 0); + xfs_icsb_balance_counter(mp, XFS_SBS_IFREE, 0); + xfs_icsb_balance_counter(mp, XFS_SBS_FDBLOCKS, 0); xfs_icsb_unlock(mp); break; case CPU_DEAD: @@ -2018,12 +2021,9 @@ xfs_icsb_cpu_notify( memset(cntp, 0, sizeof(xfs_icsb_cnts_t)); - xfs_icsb_balance_counter(mp, XFS_SBS_ICOUNT, - XFS_ICSB_SB_LOCKED, 0); - xfs_icsb_balance_counter(mp, XFS_SBS_IFREE, - XFS_ICSB_SB_LOCKED, 0); - xfs_icsb_balance_counter(mp, XFS_SBS_FDBLOCKS, - XFS_ICSB_SB_LOCKED, 0); + xfs_icsb_balance_counter_locked(mp, XFS_SBS_ICOUNT, 0); + xfs_icsb_balance_counter_locked(mp, 
XFS_SBS_IFREE, 0); + xfs_icsb_balance_counter_locked(mp, XFS_SBS_FDBLOCKS, 0); spin_unlock(&mp->m_sb_lock); xfs_icsb_unlock(mp); break; @@ -2075,9 +2075,9 @@ xfs_icsb_reinit_counters( * initial balance kicks us off correctly */ mp->m_icsb_counters = -1; - xfs_icsb_balance_counter(mp, XFS_SBS_ICOUNT, 0, 0); - xfs_icsb_balance_counter(mp, XFS_SBS_IFREE, 0, 0); - xfs_icsb_balance_counter(mp, XFS_SBS_FDBLOCKS, 0, 0); + xfs_icsb_balance_counter(mp, XFS_SBS_ICOUNT, 0); + xfs_icsb_balance_counter(mp, XFS_SBS_IFREE, 0); + xfs_icsb_balance_counter(mp, XFS_SBS_FDBLOCKS, 0); xfs_icsb_unlock(mp); } @@ -2299,19 +2299,15 @@ xfs_icsb_sync_counters( #define XFS_ICSB_FDBLK_CNTR_REENABLE(mp) \ (uint64_t)(512 + XFS_ALLOC_SET_ASIDE(mp)) STATIC void -xfs_icsb_balance_counter( +xfs_icsb_balance_counter_locked( xfs_mount_t *mp, xfs_sb_field_t field, - int flags, int min_per_cpu) { uint64_t count, resid; int weight = num_online_cpus(); uint64_t min = (uint64_t)min_per_cpu; - if (!(flags & XFS_ICSB_SB_LOCKED)) - spin_lock(&mp->m_sb_lock); - /* disable counter and sync counter */ xfs_icsb_disable_counter(mp, field); @@ -2321,19 +2317,19 @@ xfs_icsb_balance_counter( count = mp->m_sb.sb_icount; resid = do_div(count, weight); if (count < max(min, XFS_ICSB_INO_CNTR_REENABLE)) - goto out; + return; break; case XFS_SBS_IFREE: count = mp->m_sb.sb_ifree; resid = do_div(count, weight); if (count < max(min, XFS_ICSB_INO_CNTR_REENABLE)) - goto out; + return; break; case XFS_SBS_FDBLOCKS: count = mp->m_sb.sb_fdblocks; resid = do_div(count, weight); if (count < max(min, XFS_ICSB_FDBLK_CNTR_REENABLE(mp))) - goto out; + return; break; default: BUG(); @@ -2342,9 +2338,17 @@ xfs_icsb_balance_counter( } xfs_icsb_enable_counter(mp, field, count, resid); -out: - if (!(flags & XFS_ICSB_SB_LOCKED)) - spin_unlock(&mp->m_sb_lock); +} + +STATIC void +xfs_icsb_balance_counter( + xfs_mount_t *mp, + xfs_sb_field_t fields, + int min_per_cpu) +{ + spin_lock(&mp->m_sb_lock); + xfs_icsb_balance_counter_locked(mp, fields, 
min_per_cpu); + spin_unlock(&mp->m_sb_lock); } STATIC int @@ -2451,7 +2455,7 @@ slow_path: * we are done. */ if (ret != ENOSPC) - xfs_icsb_balance_counter(mp, field, 0, 0); + xfs_icsb_balance_counter(mp, field, 0); xfs_icsb_unlock(mp); return ret; @@ -2475,7 +2479,7 @@ balance_counter: * will either succeed through the fast path or slow path without * another balance operation being required. */ - xfs_icsb_balance_counter(mp, field, 0, delta); + xfs_icsb_balance_counter(mp, field, delta); xfs_icsb_unlock(mp); goto again; } From owner-xfs@oss.sgi.com Wed Mar 19 16:47:02 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 16:47:11 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2JNkwBY008552 for ; Wed, 19 Mar 2008 16:47:01 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA24811; Thu, 20 Mar 2008 10:47:25 +1100 Date: Thu, 20 Mar 2008 10:48:31 +1100 To: "Jim Paradis" , xfs@oss.sgi.com Subject: Re: Duplicate directory entries From: "Barry Naujok" Organization: SGI Content-Type: text/plain; format=flowed; delsp=yes; charset=utf-8 MIME-Version: 1.0 References: Content-Transfer-Encoding: 7bit Message-ID: In-Reply-To: User-Agent: Opera Mail/9.24 (Win32) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14932 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs On Thu, 20 Mar 2008 06:26:05 +1100, Jim Paradis wrote: > We recently ran across a situation where we 
saw two directory entries > that were exactly the same. A ls -li of the directory in question shows > the following: > > > 3758898162 -rw-r--r-- 1 root root 1592 Jan 28 02:21 > 4b13e98d-2165-4630-851d-c2d94149401f.i > > 3758898162 -rw-r--r-- 1 root root 1592 Jan 28 02:21 > 4b13e98d-2165-4630-851d-c2d94149401f.i > > 3758901942 -rw-r--r-- 1 root root 1805 Mar 16 21:43 > 848a74ed-ec3a-4504-a478-6b75cede7ccc.i > > > There are only three entries in the directory. Note that the first two > are identical - same name, same inode number. Note, too, that the inode > has a link count of *one* despite its having two directory entries > pointing at it. > > > When I run xfs_db and examine this directory, I see that this is a > short-form dir2 directory in the inode literal area, and it is the first > two entries that are identical. I searched the archives and found a > similar situation described in 2006, but no resolution. The xfs_db > inode dump is below... any thoughts as to how this happens and is there > a fix? I can't comment on how it was caused, but xfs_repair -n on the filesystem should detect it, and running xfs_repair without the -n should fix it. Regards, Barry. 
From owner-xfs@oss.sgi.com Wed Mar 19 17:02:33 2008
Date: Thu, 20 Mar 2008 11:02:54 +1100
From: David Chinner <dgc@sgi.com>
To: Jim Paradis
Cc: xfs@oss.sgi.com
Subject: Re: Duplicate directory entries
Message-ID: <20080320000254.GC103321673@sgi.com>

On Wed, Mar 19, 2008 at 03:26:05PM -0400, Jim Paradis wrote:
> We recently ran across a situation where we saw two directory entries
> that were exactly the same.

What kernel version?

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Wed Mar 19 17:13:01 2008
Date: Thu, 20 Mar 2008 11:13:16 +1100
From: Niv Sardi <xaiki@debian.org>
To: Nicolas KOWALSKI
Cc: xfs@oss.sgi.com
Subject: Re: xfsdump debian package, wrong version number
In-Reply-To: <87iqzi3b8q.fsf@petole.dyndns.org> (Nicolas KOWALSKI's message
 of "Wed, 19 Mar 2008 17:54:45 +0100")

The Debian packaging is not done in the upstream repository. Quick fix:
hand-edit debian/changelog, or get the dsc from
http://packages.debian.org/sid/xfsdump and rebuild for your favourite
flavour.
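[Editor's sketch of the rebuild route just described, assuming a
Debian/Ubuntu machine with deb-src lines in sources.list and the devscripts
and build tools installed; the local version suffix is illustrative.]

```shell
# Fetch the current source package and its build dependencies
apt-get source xfsdump
sudo apt-get build-dep xfsdump

# Bump debian/changelog so the local build is distinguishable -
# this is also where a wrong version number would be hand-edited
cd xfsdump-*/
dch --local +local "Rebuilt locally"

# Build unsigned binary packages; the .debs land one directory up
dpkg-buildpackage -us -uc -b
```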
Cheers,
--
Niv Sardi

From owner-xfs@oss.sgi.com Wed Mar 19 17:35:20 2008
Date: Thu, 20 Mar 2008 11:35:42 +1100
From: Timothy Shimmin <tes@sgi.com>
To: Eric Sandeen
Cc: Andre Draszik, xfs@oss.sgi.com
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
Message-ID: <47E1B15E.5050107@sgi.com>

Eric Sandeen wrote:
> Eric Sandeen wrote:
>
>> I've helpfully provided structure layouts for the structures you mention
>> in the attached files, for your diffing pleasure. I think you'll find
>> that it's not exactly as you described.
>
> Ah hell, the arm structs I attached were for oldabi.
> It's what I get for saving this fun work for late at night ;)
>
> Attached are eabi structs; still only xfs_dir2_data_entry, xfs_dinode
> and xfs_log_item seem to be affected by end-of-struct padding, of the
> structures you mention. And xfs_log_item isn't a disk structure...
>
> which brings me back to, what specific failures do you see as a result
> of end-of-struct padding on these structs?

Which reminds me that when writing test 122 I noticed this with xfs_dinode,
but didn't think the end-of-struct padding would affect things - I remember
chatting to Nathan about it at the time, IIRC.

Actually, now that I think about it (Tim waking up a bit :-), the
xfs_dinode_t layout is of limited value, because a lot of those union fields
at the end aren't actually used directly and just give an indication of what
the layout is like. We are in the variable literal area there.

--Tim

From owner-xfs@oss.sgi.com Wed Mar 19 17:40:58 2008
Date: Wed, 19 Mar 2008 02:49:59 -0400
From: "Josef 'Jeff' Sipek" <jeffpc@josefsipek.net>
To: XFS Mailing List
Subject: xfsqa failures
Message-ID: <20080319064959.GD11349@josefsipek.net>

Hello all!

I'm going to be blunt... what's the deal with xfsqa? There are far too many
tests that seem broken (or at least I hope it's the tests and not XFS). It
makes it really hard to know whether a test failed because the test's golden
output is wrong, or because XFS has a bug - doubly so when one changes
something in XFS.

I ran the "auto" group of tests (which uses about 2/3 of the tests in qa),
and out of the 109 tests run, 17 failed for me. Some failed because the
output didn't match, one test leaks a reference of some sort (forcing me to
reboot and resume the tests from that point), and another test trips an
assertion.
Running the other tests (not part of the auto group) is even worse (e.g.,
one test deadlocks the system).

Summary:

Not run: 010 022 023 024 025 035 036 037 038 039 040 043 044 050 052 055
057 058 077 090 092 093 094 095 097 098 099 101 102 112 142 143 144 145
146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163
168 180
Failures: 016 018 081 082 103 130 132 166 167 171 172 173 175 176 177 178 182
Failed 17 of 109 tests

Josef 'Jeff' Sipek.

---

Here's some detailed info (xfsqa output, the assertion backtrace) about the
tests that fail on a pretty average dual-CPU Athlon (the first-gen Athlon MP
- if you want detailed specs, let me know):

# ./check -g auto
FSTYP         -- xfs (debug)
PLATFORM      -- Linux/i686 fstest 2.6.25-rc6
MKFS_OPTIONS  -- -f -bsize=4096 /dev/hda6
MOUNT_OPTIONS -- /dev/hda6 /mnt/xfs_scratch
001 13s ...
002 1s ...
003 1s ...
004 2s ...
005 1s ...
006 40s ...
007 61s ...
008 7s ...
009 1s ...
010 [not run] dbtest was not built for this platform
011 52s ...
012 6s ...
013 271s ...
014 71s ...
015 6s ...
016 31s ...
[failed, exit status 1] - output mismatch (see 016.out.bad) 10,114c10,11 < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount < *** check for corruption < *** generate log traffic < *** mount < *** fiddle < *** unmount 
< *** check for corruption --- > !!! unexpected log position 3318 > (see 016.full for details) 017 41s ... 018 22s ... - output mismatch (see 018.out.bad) 4,17c4,6 < *** compare logprint: 018.trans_inode with 018.fulldir/trans_inode.mnt-onoalign,logbsize=32k.mkfs-lsize=2000b-lversion=1.filtered < *** compare logprint: 018.trans_buf with 018.fulldir/trans_buf.mnt-onoalign,logbsize=32k.mkfs-lsize=2000b-lversion=1.filtered < *** compare logprint: 018.op with 018.fulldir/op.mnt-onoalign,logbsize=32k.mkfs-lsize=2000b-lversion=2.filtered < *** compare logprint: 018.trans_inode with 018.fulldir/trans_inode.mnt-onoalign,logbsize=32k.mkfs-lsize=2000b-lversion=2.filtered < *** compare logprint: 018.trans_buf with 018.fulldir/trans_buf.mnt-onoalign,logbsize=32k.mkfs-lsize=2000b-lversion=2.filtered < *** compare logprint: 018.op with 018.fulldir/op.mnt-onoalign,logbsize=64k.mkfs-lsize=2000b-lversion=2.filtered < *** compare logprint: 018.trans_inode with 018.fulldir/trans_inode.mnt-onoalign,logbsize=64k.mkfs-lsize=2000b-lversion=2.filtered < *** compare logprint: 018.trans_buf with 018.fulldir/trans_buf.mnt-onoalign,logbsize=64k.mkfs-lsize=2000b-lversion=2.filtered < *** compare logprint: 018.op with 018.fulldir/op.mnt-onoalign,logbsize=128k.mkfs-lsize=2000b-lversion=2.filtered < *** compare logprint: 018.trans_inode with 018.fulldir/trans_inode.mnt-onoalign,logbsize=128k.mkfs-lsize=2000b-lversion=2.filtered < *** compare logprint: 018.trans_buf with 018.fulldir/trans_buf.mnt-onoalign,logbsize=128k.mkfs-lsize=2000b-lversion=2.filtered < *** compare logprint: 018.op with 018.fulldir/op.mnt-onoalign,logbsize=256k.mkfs-lsize=2000b-lversion=2.filtered < *** compare logprint: 018.trans_inode with 018.fulldir/trans_inode.mnt-onoalign,logbsize=256k.mkfs-lsize=2000b-lversion=2.filtered < *** compare logprint: 018.trans_buf with 018.fulldir/trans_buf.mnt-onoalign,logbsize=256k.mkfs-lsize=2000b-lversion=2.filtered --- > 0 split(s) found prior to diff cmd: 50,54d49 > logprint output 
018.op differs to 018.fulldir/op.mnt-onoalign,logbsize=32k.mkfs-lsize=2000b-lversion=1.filtered considering splits > (see 018.full for details) 019 3s ... 020 10s ... 021 2s ... 022 [not run] No dump tape specified 023 [not run] No dump tape specified 024 [not run] No dump tape specified 025 [not run] No dump tape specified 026 15s ... 027 15s ... 028 24s ... 029 2s ... 030 22s ... 031 22s ... 032 25s ... 033 17s ... 034 3s ... 035 [not run] No dump tape specified 036 [not run] No dump tape specified 037 [not run] No dump tape specified 038 [not run] No dump tape specified 039 [not run] No dump tape specified 040 [not run] Can't run srcdiff without KWORKAREA set 041 58s ... 042 131s ... 043 [not run] No dump tape specified 044 [not run] This test requires a valid $SCRATCH_LOGDEV 045 2s ... 046 14s ... 047 24s ... 048 0s ... 049 13s ... 050 25s ... [not run] Installed kernel does not support XFS quota 051 2s ... 052 4s ... [not run] Installed kernel does not support XFS quota 053 3s ... 054 5s ... 055 [not run] No dump tape specified 056 14s ... 057 [not run] Place holder for IRIX test 057 058 [not run] Place holder for IRIX test 058 061 13s ... 062 3s ... 063 14s ... 065 37s ... 066 2s ... 067 3s ... 068 54s ... 069 28s ... 070 25s ... 072 1s ... 073 53s ... 074 521s ... 075 60s ... 076 148s ... 077 [not run] No linux directory to source files from 078 43s ... 079 1s ... 081 6s ... [failed, exit status 1] - output mismatch (see 081.out.bad) 3a4,5 > logprint output 081.ugquota.trans_inode differs to 081.fulldir/trans_inode.mnt-oquota,gquota.mkfs-lsize=2000b-lversion=1.filtered > (see 081.full for details) 082 40s ... 
- output mismatch (see 082.out.bad) 5,19c5,6 < *** compare logprint: 082.trans_buf with 082.fulldir/trans_buf.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2.sync.filtered < --- mkfs=version=2,su=4096, mnt=logbsize=32k, sync=sync --- < *** compare logprint: 082.trans_inode with 082.fulldir/trans_inode.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=4096.sync.filtered < *** compare logprint: 082.trans_buf with 082.fulldir/trans_buf.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=4096.sync.filtered < --- mkfs=version=2,su=32768, mnt=logbsize=32k, sync=sync --- < *** compare logprint: 082.trans_inode with 082.fulldir/trans_inode.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=32768.sync.filtered < *** compare logprint: 082.trans_buf with 082.fulldir/trans_buf.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=32768.sync.filtered < --- mkfs=version=2,su=36864, mnt=logbsize=32k, sync=sync --- < < *** Cannot mkfs for this test using option specified: -l size=2000b -l version=2,su=36864 *** < < --- mkfs=version=2,su=5120, mnt=logbsize=32k, sync=sync --- < < *** Cannot mkfs for this test using option specified: -l size=2000b -l version=2,su=5120 *** < --- > logprint output 082.trans_inode differs to 082.fulldir/trans_inode.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2.sync.filtered > (see 082.full for details) 22,39c9,11 < *** compare logprint: 082.trans_inode with 082.fulldir/trans_inode.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2.nosync.filtered < *** compare logprint: 082.trans_buf with 082.fulldir/trans_buf.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2.nosync.filtered < --- mkfs=version=2,su=4096, mnt=logbsize=32k, sync=nosync --- < *** compare logprint: 082.op with 082.fulldir/op.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=4096.nosync.filtered < *** compare logprint: 082.trans_inode with 082.fulldir/trans_inode.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=4096.nosync.filtered < *** compare logprint: 082.trans_buf with 
082.fulldir/trans_buf.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=4096.nosync.filtered < --- mkfs=version=2,su=32768, mnt=logbsize=32k, sync=nosync --- < *** compare logprint: 082.op with 082.fulldir/op.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=32768.nosync.filtered < *** compare logprint: 082.trans_inode with 082.fulldir/trans_inode.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=32768.nosync.filtered < *** compare logprint: 082.trans_buf with 082.fulldir/trans_buf.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=32768.nosync.filtered < --- mkfs=version=2,su=36864, mnt=logbsize=32k, sync=nosync --- < < *** Cannot mkfs for this test using option specified: -l size=2000b -l version=2,su=36864 *** < < --- mkfs=version=2,su=5120, mnt=logbsize=32k, sync=nosync --- < < *** Cannot mkfs for this test using option specified: -l size=2000b -l version=2,su=5120 *** < --- > 0 split(s) found prior to diff cmd: 50,54d49 > logprint output 082.op differs to 082.fulldir/op.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2.nosync.filtered considering splits > (see 082.full for details) 083 213s ... 084 61s ... 088 1s ... 089 258s ... 090 [not run] External volumes not in use, skipped this test 091 149s ... 092 [not run] inode64 not checked on this platform 093 [not run] not suitable for this OS: Linux 094 [not run] External volumes not in use, skipped this test 095 [not run] not suitable for this OS: Linux 096 4s ... 097 [not run] not suitable for this OS: Linux 098 [not run] not suitable for this filesystem type: xfs 099 [not run] not suitable for this OS: Linux 100 59s ... 101 [not run] not suitable for this filesystem type: xfs 102 [not run] not suitable for this filesystem type: xfs 103 2s ... - output mismatch (see 103.out.bad) 7c7 < ln: creating symbolic link `SCRATCH_MNT/nosymlink/target' to `SCRATCH_MNT/nosymlink/source': Operation not permitted --- > ln: creating symbolic link `SCRATCH_MNT/nosymlink/target': Operation not permitted 105 2s ... 
112 [not run] fsx not built with AIO for this platform 117 36s ... 120 17s ... 121 9s ... 122 3s ... 123 0s ... 124 14s ... 125 62s ... 126 1s ... 127 630s ... 128 3s ... 129 41s ... 130 - output mismatch (see 130.out.bad) 20c20 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 3 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 24c24 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1000.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 26c26 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1000.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 30c30 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 4 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 89c89 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 2.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 101c101 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) 103c103 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) 105c105 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) 107c107 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) 109c109 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) 111c111 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) 113c113 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) 115c115 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 0.000000 bytes, 0 ops; 0.0000 sec (nan 
bytes/sec and nan ops/sec) 117c117 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) 119c119 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) 124c124 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 10.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 128c128 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 23.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 133c133 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 136c136 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 139c139 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 142c142 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 145c145 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 148c148 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 151c151 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 154c154 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 157c157 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 160c160 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 163c163 < XXX Bytes, 
X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 166c166 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 169c169 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 172c172 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 13.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 174c174 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 176c176 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 178c178 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 180c180 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 182c182 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 184c184 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 186c186 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 188c188 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 190c190 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 192c192 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 194c194 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- 
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 196c196 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 198c198 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 200c200 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 202c202 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 204c204 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 207c207 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 210c210 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 213c213 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 216c216 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 218c218 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 220c220 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 222c222 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 224c224 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 226c226 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and 
inf ops/sec) 228c228 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 230c230 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 232c232 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 234c234 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 236c236 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 238c238 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 240c240 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 242c242 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 244c244 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 246c246 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 248c248 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 251c251 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 254c254 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 257c257 < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) --- > 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 260c260 < XXX Bytes, X ops; XX:XX:XX.X 
(XXX YYY/sec and XXX ops/sec)
---
> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
265c265
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
268c268
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
271c271
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
274c274
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
277c277
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
280c280
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
283c283
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
286c286
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
289c289
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
292c292
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
295c295
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
298c298
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
301c301
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
304c304
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 13.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
339c339
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
342c342
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
345c345
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
348c348
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
383c383
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
386c386
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
389c389
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
392c392
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
131 1s ...
132 - output mismatch (see 132.out.bad)
3c3
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
5c5
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
7c7
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
9c9
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
11c11
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
13c13
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
15c15
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
17c17
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
21c21
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
25c25
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
29c29
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
33c33
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
37c37
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
41c41
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
45c45
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
49c49
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
51c51
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
53c53
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
55c55
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
57c57
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
63c63
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
69c69
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
75c75
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
81c81
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
85c85
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
89c89
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
93c93
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
97c97
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
99c99
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 2 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
101c101
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 2 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
121c121
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 2 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
133c133
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 2 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
141c141
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 2 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
143c143
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 4 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
171c171
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 4 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
177c177
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 4 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
183c183
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 8 KiB, 2 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
185c185
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 8 KiB, 2 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
233c233
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 16 KiB, 4 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
235c235
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 16 KiB, 4 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
283c283
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 32 KiB, 8 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
285c285
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 32 KiB, 8 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
337c337
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 64 KiB, 16 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
339c339
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 64 KiB, 16 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
393c393
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 128 KiB, 32 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
395c395
< XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
---
> 128 KiB, 32 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
134 2s ...
135 2s ...
137 37s ...
138 75s ...
139 84s ...
140 55s ...
141 2s ...
142 [not run] Assuming DMAPI modules are not loaded
143 [not run] Assuming DMAPI modules are not loaded
144 [not run] Assuming DMAPI modules are not loaded
145 [not run] Assuming DMAPI modules are not loaded
146 [not run] Assuming DMAPI modules are not loaded
147 [not run] Assuming DMAPI modules are not loaded
148 [not run] parallel repair binary xfs_prepair64 is not installed
149 [not run] parallel repair binary xfs_prepair is not installed
150 [not run] Assuming DMAPI modules are not loaded
151 [not run] Assuming DMAPI modules are not loaded
152 [not run] Assuming DMAPI modules are not loaded
153 [not run] Assuming DMAPI modules are not loaded
154 [not run] Assuming DMAPI modules are not loaded
155 [not run] Assuming DMAPI modules are not loaded
156 [not run] Assuming DMAPI modules are not loaded
157 [not run] Assuming DMAPI modules are not loaded
158 [not run] Assuming DMAPI modules are not loaded
159 [not run] Assuming DMAPI modules are not loaded
160 [not run] Assuming DMAPI modules are not loaded
161 [not run] Assuming DMAPI modules are not loaded
162 [not run] Assuming DMAPI modules are not loaded
163 [not run] Assuming DMAPI modules are not loaded
164 0s ...
165 1s ...
166 - output mismatch (see 166.out.bad)
2,6c2,6
< 0: [0..31]: XX..YY AG (AA..BB) 32
< 1: [32..127]: XX..YY AG (AA..BB) 96 10000
< 2: [128..159]: XX..YY AG (AA..BB) 32
< 3: [160..223]: XX..YY AG (AA..BB) 64 10000
< 4: [224..255]: XX..YY AG (AA..BB) 32
---
> 0: [0..7]: XX..YY AG (AA..BB) 8
> 1: [8..127]: XX..YY AG (AA..BB) 120 10000
> 2: [128..135]: XX..YY AG (AA..BB) 8
> 3: [136..247]: XX..YY AG (AA..BB) 112 10000
> 4: [248..255]: XX..YY AG (AA..BB) 8
167 274s ...
****************************
167 leaks a reference of some sort, unable to unmount xfs_scratch
(-EBUSY), but test does not die; reboot
****************************
168 [not run] Assuming DMAPI modules are not loaded
169
170 26s ...
171 [failed, exit status 1] - output mismatch (see 171.out.bad)
6,21c6,7
< + passed, streams are in seperate AGs
< # testing 64 16 8 100 1 1 1 ....
< # streaming
< # sync AGs...
< # checking stream AGs...
< + passed, streams are in seperate AGs
< # testing 64 16 8 100 1 0 0 ....
< # streaming
< # sync AGs...
< # checking stream AGs...
< + passed, streams are in seperate AGs
< # testing 64 16 8 100 1 0 1 ....
< # streaming
< # sync AGs...
< # checking stream AGs...
< + passed, streams are in seperate AGs
---
> - failed, 7 streams with matching AGs
> (see 171.full for details)
172 [failed, exit status 1] - output mismatch (see 172.out.bad)
11c11,12
< + passed, streams are in seperate AGs
---
> - failed, 1 streams with matching AGs
> (see 172.full for details)
173
****************************
173 trips an assertion...dmesg:
Ending clean XFS mount for filesystem: hda6
Assertion failed: pag->pagf_freeblks == be32_to_cpu(agf->agf_freeblks), file: fs/xfs/xfs_alloc.c, line: 2216
------------[ cut here ]------------
kernel BUG at fs/xfs/support/debug.c:81!
invalid opcode: 0000 [#1] SMP
Modules linked in: xfs crc32c libcrc32c dm_snapshot dm_mirror dm_mod loop amd_k7_agp evdev ext3 jbd mbcache ide_disk aic7xxx scsi_transport_spi scsi_mod amd74xx ohci_hcd e1000 ide_pci_generic ide_core usbcore
Pid: 6039, comm: mkdir Not tainted (2.6.25-rc6 #12)
EIP: 0060:[] EFLAGS: 00010282 CPU: 0
EIP is at assfail+0x10/0x17 [xfs]
EAX: 00000070 EBX: f7bc8a6c ECX: 00000000 EDX: 00000000
ESI: 00000000 EDI: f7878200 EBP: f7463b20 ESP: f7463b10
DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process mkdir (pid: 6039, ti=f7463000 task=f7cd50c0 task.ti=f7463000)
Stack: f9371761 f9360e4c f9360d8f 000008a8 f7463b50 f92f73bb 000b8001 00000000
       f6ca97b0 00000017 f78f9aa0 f7af2590 f7a288c8 00000ef8 00000000 f7463ccc
       f7463bf0 f92fa0f7 00000000 f7463be0 00000000 00000001 00000046 f7af2590
Call Trace:
 [] ? xfs_alloc_read_agf+0x231/0x2f7 [xfs]
 [] ? xfs_alloc_fix_freelist+0x176/0x422 [xfs]
 [] ? hrtick_set+0xce/0xd6
 [] ? schedule+0x715/0x747
 [] ? xfs_alloc_vextent+0x1ff/0x8d8 [xfs]
 [] ? down_read+0x19/0x2d
 [] ? xfs_alloc_vextent+0x223/0x8d8 [xfs]
 [] ? xfs_buf_item_trace+0xa4/0xae [xfs]
 [] ? xfs_ialloc_ag_alloc+0x280/0x6e7 [xfs]
 [] ? xfs_ialloc_read_agi+0x93/0x1e4 [xfs]
 [] ? xfs_ialloc_read_agi+0x104/0x1e4 [xfs]
 [] ? xfs_dialloc+0x167/0xb12 [xfs]
 [] ? xlog_trace_loggrant+0xa8/0xb3 [xfs]
 [] ? xfs_ialloc+0x4b/0x537 [xfs]
 [] ? _spin_unlock+0x1d/0x20
 [] ? xlog_grant_log_space+0x2a0/0x2ea [xfs]
 [] ? xfs_dir_ialloc+0x6f/0x253 [xfs]
 [] ? xfs_mkdir+0x204/0x498 [xfs]
 [] ? xfs_acl_get_attr+0x66/0x89 [xfs]
 [] ? xfs_vn_mknod+0x140/0x22a [xfs]
 [] ? xfs_vn_mkdir+0xd/0xf [xfs]
 [] ? vfs_mkdir+0x94/0xd7
 [] ? sys_mkdirat+0x85/0xbd
 [] ? up_read+0x16/0x2b
 [] ? do_page_fault+0x2af/0x535
 [] ? sys_mkdir+0x10/0x12
 [] ? sysenter_past_esp+0x5f/0xa5
=======================
Code: 01 52 ba 5c 17 37 f9 50 b8 5d 17 37 f9 6a 01 6a 10 e8 5a 8e e5 c6 83 c4 14 c9 c3 55 89 e5 51 52 50 68 61 17 37 f9 e8 81 fc db c6 <0f> 0b 83 c4 10 eb fe 55 89 e5 57 89 c7 b8 50 ee 38 f9 83 e7 07
EIP: [] assfail+0x10/0x17 [xfs] SS:ESP 0068:f7463b10
---[ end trace ad759a5a28c539de ]---
rebooted, before running more tests.
****************************
174 37s ...
175 - output mismatch (see 175.out.bad)
5,31c5,8
< # spawning test file with 4096 256 0 punch_test_file noresv
< [0] punch_test_file
< + not using resvsp at file creation
< # writing with 4096 0 256 punch_test_file
< # showing file state punch_test_file
< punch_test_file:
< EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
< 0: [0..2047]: 96..2143 0 (96..2143) 2048 00000
< FLAG Values:
< 010000 Unwritten preallocated extent
< 001000 Doesn't begin on stripe unit
< 000100 Doesn't end on stripe unit
< 000010 Doesn't begin on stripe width
< 000001 Doesn't end on stripe width
< # punching with 4096 240 16 d punch_test_file
< + hole punch using dmapi punch_hole
< # showing file state punch_test_file
< punch_test_file:
< EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
< 0: [0..1919]: 96..2015 0 (96..2015) 1920 00000
< 1: [1920..2047]: hole 128
< FLAG Values:
< 010000 Unwritten preallocated extent
< 001000 Doesn't begin on stripe unit
< 000100 Doesn't end on stripe unit
< 000010 Doesn't begin on stripe width
< 000001 Doesn't end on stripe width
---
> mount: wrong fs type, bad option, bad superblock on /dev/hda6,
> missing codepage or helper program, or other error
> In some cases useful info is found in syslog - try
> dmesg | tail or so
33,63c10
< -- this time use a 4k (one block) extent size hint --
< # testing 4096 1 256 240 16 d 0 256 w p noresv ...
< + mounting with dmapi enabled
< # spawning test file with 4096 256 1 punch_test_file noresv
< + setting extent size hint to 4096
< [4096] punch_test_file
< + not using resvsp at file creation
< # writing with 4096 0 256 punch_test_file
< # showing file state punch_test_file
< punch_test_file:
< EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
< 0: [0..2047]: 96..2143 0 (96..2143) 2048 00000
< FLAG Values:
< 010000 Unwritten preallocated extent
< 001000 Doesn't begin on stripe unit
< 000100 Doesn't end on stripe unit
< 000010 Doesn't begin on stripe width
< 000001 Doesn't end on stripe width
< # punching with 4096 240 16 d punch_test_file
< + hole punch using dmapi punch_hole
< # showing file state punch_test_file
< punch_test_file:
< EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
< 0: [0..1919]: 96..2015 0 (96..2015) 1920 00000
< 1: [1920..2047]: hole 128
< FLAG Values:
< 010000 Unwritten preallocated extent
< 001000 Doesn't begin on stripe unit
< 000100 Doesn't end on stripe unit
< 000010 Doesn't begin on stripe width
< 000001 Doesn't end on stripe width
---
> 175 not run: Assuming DMAPI modules are not loaded
176 - output mismatch (see 176.out.bad)
5,30c5,8
< # spawning test file with 4096 256 0 punch_test_file
< [0] punch_test_file
< # writing with 4096 0 256 punch_test_file
< # showing file state punch_test_file
< punch_test_file:
< EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
< 0: [0..2047]: 96..2143 0 (96..2143) 2048 00000
< FLAG Values:
< 010000 Unwritten preallocated extent
< 001000 Doesn't begin on stripe unit
< 000100 Doesn't end on stripe unit
< 000010 Doesn't begin on stripe width
< 000001 Doesn't end on stripe width
< # punching with 4096 240 16 d punch_test_file
< + hole punch using dmapi punch_hole
< # showing file state punch_test_file
< punch_test_file:
< EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
< 0: [0..1919]: 96..2015 0 (96..2015) 1920 00000
< 1: [1920..2047]: hole 128
< FLAG Values:
< 010000 Unwritten preallocated extent
< 001000 Doesn't begin on stripe unit
< 000100 Doesn't end on stripe unit
< 000010 Doesn't begin on stripe width
< 000001 Doesn't end on stripe width
---
> mount: wrong fs type, bad option, bad superblock on /dev/hda6,
> missing codepage or helper program, or other error
> In some cases useful info is found in syslog - try
> dmesg | tail or so
32,121c10
< -- this time dont use resvsp --
< # testing 4096 0 256 240 16 d 0 256 w p noresv ...
< + mounting with dmapi enabled
< # spawning test file with 4096 256 0 punch_test_file noresv
< [0] punch_test_file
< + not using resvsp at file creation
< # writing with 4096 0 256 punch_test_file
< # showing file state punch_test_file
< punch_test_file:
< EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
< 0: [0..2047]: 96..2143 0 (96..2143) 2048 00000
< FLAG Values:
< 010000 Unwritten preallocated extent
< 001000 Doesn't begin on stripe unit
< 000100 Doesn't end on stripe unit
< 000010 Doesn't begin on stripe width
< 000001 Doesn't end on stripe width
< # punching with 4096 240 16 d punch_test_file
< + hole punch using dmapi punch_hole
< # showing file state punch_test_file
< punch_test_file:
< EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
< 0: [0..1919]: 96..2015 0 (96..2015) 1920 00000
< 1: [1920..2047]: hole 128
< FLAG Values:
< 010000 Unwritten preallocated extent
< 001000 Doesn't begin on stripe unit
< 000100 Doesn't end on stripe unit
< 000010 Doesn't begin on stripe width
< 000001 Doesn't end on stripe width
<
<
< -- test unresvsp hole punch with resvsp on file create --
< # testing 4096 0 256 240 16 u 0 256 w p ...
< # spawning test file with 4096 256 0 punch_test_file
< [0] punch_test_file
< # writing with 4096 0 256 punch_test_file
< # showing file state punch_test_file
< punch_test_file:
< EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
< 0: [0..2047]: 96..2143 0 (96..2143) 2048 00000
< FLAG Values:
< 010000 Unwritten preallocated extent
< 001000 Doesn't begin on stripe unit
< 000100 Doesn't end on stripe unit
< 000010 Doesn't begin on stripe width
< 000001 Doesn't end on stripe width
< # punching with 4096 240 16 u punch_test_file
< + hole punch using unresvsp
< # showing file state punch_test_file
< punch_test_file:
< EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
< 0: [0..1919]: 96..2015 0 (96..2015) 1920 00000
< 1: [1920..2047]: hole 128
< FLAG Values:
< 010000 Unwritten preallocated extent
< 001000 Doesn't begin on stripe unit
< 000100 Doesn't end on stripe unit
< 000010 Doesn't begin on stripe width
< 000001 Doesn't end on stripe width
<
< -- this time dont use resvsp --
< # testing 4096 0 256 240 16 u 0 256 w p noresv ...
< # spawning test file with 4096 256 0 punch_test_file noresv
< [0] punch_test_file
< + not using resvsp at file creation
< # writing with 4096 0 256 punch_test_file
< # showing file state punch_test_file
< punch_test_file:
< EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
< 0: [0..2047]: 96..2143 0 (96..2143) 2048 00000
< FLAG Values:
< 010000 Unwritten preallocated extent
< 001000 Doesn't begin on stripe unit
< 000100 Doesn't end on stripe unit
< 000010 Doesn't begin on stripe width
< 000001 Doesn't end on stripe width
< # punching with 4096 240 16 u punch_test_file
< + hole punch using unresvsp
< # showing file state punch_test_file
< punch_test_file:
< EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
< 0: [0..1919]: 96..2015 0 (96..2015) 1920 00000
< 1: [1920..2047]: hole 128
< FLAG Values:
< 010000 Unwritten preallocated extent
< 001000 Doesn't begin on stripe unit
< 000100 Doesn't end on stripe unit
< 000010 Doesn't begin on stripe width
< 000001 Doesn't end on stripe width
---
> 176 not run: Assuming DMAPI modules are not loaded
177 [failed, exit status 1] - output mismatch (see 177.out.bad)
58,88c58,64
< Start bulkstat_unlink_test_modified
< Iteration 0 ...
< testFiles 1000 ...
< passed
< Iteration 1 ...
< testFiles 1000 ...
< passed
< Iteration 2 ...
< testFiles 1000 ...
< passed
< Iteration 3 ...
< testFiles 1000 ...
< passed
< Iteration 4 ...
< testFiles 1000 ...
< passed
< Iteration 5 ...
< testFiles 1000 ...
< passed
< Iteration 6 ...
< testFiles 1000 ...
< passed
< Iteration 7 ...
< testFiles 1000 ...
< passed
< Iteration 8 ...
< testFiles 1000 ...
< passed
< Iteration 9 ...
< testFiles 1000 ...
< passed
---
> mount: wrong fs type, bad option, bad superblock on /dev/hda6,
> missing codepage or helper program, or other error
> In some cases useful info is found in syslog - try
> dmesg | tail or so
>
> mount failed
> (see 177.full for details)
178 - output mismatch (see 178.out.bad)
15,16d14
< sb root inode value INO inconsistent with calculated value INO
< resetting superblock root inode pointer to INO
51,52d48
< sb root inode value INO inconsistent with calculated value INO
< resetting superblock root inode pointer to INO
179
180 [not run] This test requires at least 10GB of /dev/hda6 to run
181
182 - output mismatch (see 182.out.bad)
1a2,491
> file /mnt/xfs_scratch/510 has incorrect size - sync failed
> file /mnt/xfs_scratch/511 has incorrect size - sync failed
> file /mnt/xfs_scratch/512 has incorrect size - sync failed
> file /mnt/xfs_scratch/513 has incorrect size - sync failed
> file /mnt/xfs_scratch/514 has incorrect size - sync failed
> file /mnt/xfs_scratch/515 has incorrect size - sync failed
> file /mnt/xfs_scratch/516 has incorrect size - sync failed
> file /mnt/xfs_scratch/517 has incorrect size - sync failed
> file /mnt/xfs_scratch/518 has incorrect size - sync failed
> file /mnt/xfs_scratch/519 has incorrect size - sync failed
> file /mnt/xfs_scratch/520 has incorrect size - sync failed
> file /mnt/xfs_scratch/521 has incorrect size - sync failed
> file /mnt/xfs_scratch/522 has incorrect size - sync failed
> file /mnt/xfs_scratch/523 has incorrect size - sync failed
> file /mnt/xfs_scratch/524 has incorrect size - sync failed
> file /mnt/xfs_scratch/525 has incorrect size - sync failed
> file /mnt/xfs_scratch/526 has incorrect size - sync failed
> file /mnt/xfs_scratch/527 has incorrect size - sync failed
> file /mnt/xfs_scratch/528 has incorrect size - sync failed
> file /mnt/xfs_scratch/529 has incorrect size - sync failed
> file /mnt/xfs_scratch/530 has incorrect size - sync failed
> file /mnt/xfs_scratch/531 has incorrect size - sync failed
> file /mnt/xfs_scratch/532 has incorrect size - sync failed
> file /mnt/xfs_scratch/533 has incorrect size - sync failed
> file /mnt/xfs_scratch/534 has incorrect size - sync failed
> file /mnt/xfs_scratch/535 has incorrect size - sync failed
> file /mnt/xfs_scratch/536 has incorrect size - sync failed
> file /mnt/xfs_scratch/537 has incorrect size - sync failed
> file /mnt/xfs_scratch/538 has incorrect size - sync failed
> file /mnt/xfs_scratch/539 has incorrect size - sync failed
> file /mnt/xfs_scratch/540 has incorrect size - sync failed
> file /mnt/xfs_scratch/541 has incorrect size - sync failed
> file /mnt/xfs_scratch/542 has incorrect size - sync failed
> file /mnt/xfs_scratch/543 has incorrect size - sync failed
> file /mnt/xfs_scratch/544 has incorrect size - sync failed
> file /mnt/xfs_scratch/545 has incorrect size - sync failed
> file /mnt/xfs_scratch/546 has incorrect size - sync failed
> file /mnt/xfs_scratch/547 has incorrect size - sync failed
> file /mnt/xfs_scratch/548 has incorrect size - sync failed
> file /mnt/xfs_scratch/549 has incorrect size - sync failed
> file /mnt/xfs_scratch/550 has incorrect size - sync failed
> file /mnt/xfs_scratch/551 has incorrect size - sync failed
> file /mnt/xfs_scratch/552 has incorrect size - sync failed
> file /mnt/xfs_scratch/553 has incorrect size - sync failed
> file /mnt/xfs_scratch/554 has incorrect size - sync failed
> file /mnt/xfs_scratch/555 has incorrect size - sync failed
> file /mnt/xfs_scratch/556 has incorrect size - sync failed
> file /mnt/xfs_scratch/557 has incorrect size - sync failed
> file /mnt/xfs_scratch/558 has incorrect size - sync failed
> file /mnt/xfs_scratch/559 has incorrect size - sync failed
> file /mnt/xfs_scratch/560 has incorrect size - sync failed
> file /mnt/xfs_scratch/561 has incorrect size - sync failed
> file /mnt/xfs_scratch/562 has incorrect size - sync failed
> file /mnt/xfs_scratch/563 has incorrect size - sync failed
> file /mnt/xfs_scratch/564 has incorrect size - sync failed
> file /mnt/xfs_scratch/565 has incorrect size - sync failed
> file /mnt/xfs_scratch/566 has incorrect size - sync failed
> file /mnt/xfs_scratch/567 has incorrect size - sync failed
> file /mnt/xfs_scratch/568 has incorrect size - sync failed
> file /mnt/xfs_scratch/569 has incorrect size - sync failed
> file /mnt/xfs_scratch/570 has incorrect size - sync failed
> file /mnt/xfs_scratch/571 has incorrect size - sync failed
> file /mnt/xfs_scratch/572 has incorrect size - sync failed
> file /mnt/xfs_scratch/573 has incorrect size - sync failed
> file /mnt/xfs_scratch/574 has incorrect size - sync failed
> file /mnt/xfs_scratch/575 has incorrect size - sync failed
> file /mnt/xfs_scratch/576 has incorrect size - sync failed
> file /mnt/xfs_scratch/577 has incorrect size - sync failed
> file /mnt/xfs_scratch/578 has incorrect size - sync failed
> file /mnt/xfs_scratch/579 has incorrect size - sync failed
> file /mnt/xfs_scratch/580 has incorrect size - sync failed
> file /mnt/xfs_scratch/581 has incorrect size - sync failed
> file /mnt/xfs_scratch/582 has incorrect size - sync failed
> file /mnt/xfs_scratch/583 has incorrect size - sync failed
> file /mnt/xfs_scratch/584 has incorrect size - sync failed
> file /mnt/xfs_scratch/585 has incorrect size - sync failed
> file /mnt/xfs_scratch/586 has incorrect size - sync failed
> file /mnt/xfs_scratch/587 has incorrect size - sync failed
> file /mnt/xfs_scratch/588 has incorrect size - sync failed
> file /mnt/xfs_scratch/589 has incorrect size - sync failed
> file /mnt/xfs_scratch/590 has incorrect size - sync failed
> file /mnt/xfs_scratch/591 has incorrect size - sync failed
> file /mnt/xfs_scratch/592 has incorrect size - sync failed
> file /mnt/xfs_scratch/593 has incorrect size - sync failed
> file /mnt/xfs_scratch/594 has incorrect size - sync failed
> file /mnt/xfs_scratch/595 has incorrect size - sync failed
> file /mnt/xfs_scratch/596 has incorrect size - sync failed
> file /mnt/xfs_scratch/597 has incorrect size - sync failed
> file /mnt/xfs_scratch/598 has incorrect size - sync failed
> file /mnt/xfs_scratch/599 has incorrect size - sync failed
> file /mnt/xfs_scratch/600 has incorrect size - sync failed
> file /mnt/xfs_scratch/601 has incorrect size - sync failed
> file /mnt/xfs_scratch/602 has incorrect size - sync failed
> file /mnt/xfs_scratch/603 has incorrect size - sync failed
> file /mnt/xfs_scratch/604 has incorrect size - sync failed
> file /mnt/xfs_scratch/605 has incorrect size - sync failed
> file /mnt/xfs_scratch/606 has incorrect size - sync failed
> file /mnt/xfs_scratch/607 has incorrect size - sync failed
> file /mnt/xfs_scratch/608 has incorrect size - sync failed
> file /mnt/xfs_scratch/609 has incorrect size - sync failed
> file /mnt/xfs_scratch/610 has incorrect size - sync failed
> file /mnt/xfs_scratch/611 has incorrect size - sync failed
> file /mnt/xfs_scratch/612 has incorrect size - sync failed
> file /mnt/xfs_scratch/613 has incorrect size - sync failed
> file /mnt/xfs_scratch/614 has incorrect size - sync failed
> file /mnt/xfs_scratch/615 has incorrect size - sync failed
> file /mnt/xfs_scratch/616 has incorrect size - sync failed
> file /mnt/xfs_scratch/617 has incorrect size - sync failed
> file /mnt/xfs_scratch/618 has incorrect size - sync failed
> file /mnt/xfs_scratch/619 has incorrect size - sync failed
> file /mnt/xfs_scratch/620 has incorrect size - sync failed
> file /mnt/xfs_scratch/621 has incorrect size - sync failed
> file /mnt/xfs_scratch/622 has incorrect size - sync failed
> file /mnt/xfs_scratch/623 has incorrect size - sync failed
> file /mnt/xfs_scratch/624 has incorrect size - sync failed
> file /mnt/xfs_scratch/625 has incorrect size - sync failed
> file /mnt/xfs_scratch/626 has incorrect size - sync failed
> file /mnt/xfs_scratch/627 has incorrect size - sync failed
> file /mnt/xfs_scratch/628 has incorrect size - sync failed
> file /mnt/xfs_scratch/629 has incorrect size - sync failed
> file /mnt/xfs_scratch/630 has incorrect size - sync failed
> file /mnt/xfs_scratch/631 has incorrect size - sync failed
> file /mnt/xfs_scratch/632 has incorrect size - sync failed
> file /mnt/xfs_scratch/633 has incorrect size - sync failed
> file /mnt/xfs_scratch/634 has incorrect size - sync failed
> file /mnt/xfs_scratch/635 has incorrect size - sync failed
> file /mnt/xfs_scratch/636 has incorrect size - sync failed
> file /mnt/xfs_scratch/637 has incorrect size - sync failed
> file /mnt/xfs_scratch/638 has incorrect size - sync failed
> file /mnt/xfs_scratch/639 has incorrect size - sync failed
> file /mnt/xfs_scratch/640 has incorrect size - sync failed
> file /mnt/xfs_scratch/641 has incorrect size - sync failed
> file /mnt/xfs_scratch/642 has incorrect size - sync failed
> file /mnt/xfs_scratch/643 has incorrect size - sync failed
> file /mnt/xfs_scratch/644 has incorrect size - sync failed
> file /mnt/xfs_scratch/645 has incorrect size - sync failed
> file /mnt/xfs_scratch/646 has incorrect size - sync failed
> file /mnt/xfs_scratch/647 has incorrect size - sync failed
> file /mnt/xfs_scratch/648 has incorrect size - sync failed
> file /mnt/xfs_scratch/649 has incorrect size - sync failed
> file /mnt/xfs_scratch/650 has incorrect size - sync failed
> file /mnt/xfs_scratch/651 has incorrect size - sync failed
> file /mnt/xfs_scratch/652 has incorrect size - sync failed
> file /mnt/xfs_scratch/653 has incorrect size - sync failed
> file /mnt/xfs_scratch/654 has incorrect size - sync failed
> file /mnt/xfs_scratch/655 has incorrect size - sync failed
> file /mnt/xfs_scratch/656 has incorrect size - sync failed
> file /mnt/xfs_scratch/657 has incorrect size - sync failed
> file /mnt/xfs_scratch/658 has incorrect size - sync failed
> file /mnt/xfs_scratch/659 has incorrect size - sync failed
> file /mnt/xfs_scratch/660 has incorrect size - sync failed
> file /mnt/xfs_scratch/661 has incorrect size - sync failed
> file /mnt/xfs_scratch/662 has incorrect size - sync failed
> file /mnt/xfs_scratch/663 has incorrect size - sync failed
> file /mnt/xfs_scratch/664 has incorrect size - sync failed
> file /mnt/xfs_scratch/665 has incorrect size - sync failed
> file /mnt/xfs_scratch/666 has incorrect size - sync failed
> file /mnt/xfs_scratch/667 has incorrect size - sync failed
> file /mnt/xfs_scratch/668 has incorrect size - sync failed
> file /mnt/xfs_scratch/669 has incorrect size - sync failed
> file /mnt/xfs_scratch/670 has incorrect size - sync failed
> file /mnt/xfs_scratch/671 has incorrect size - sync failed
> file /mnt/xfs_scratch/672 has incorrect size - sync failed
> file /mnt/xfs_scratch/673 has incorrect size - sync failed
> file /mnt/xfs_scratch/674 has incorrect size - sync failed
> file /mnt/xfs_scratch/675 has incorrect size - sync failed
> file /mnt/xfs_scratch/676 has incorrect size - sync failed
> file /mnt/xfs_scratch/677 has incorrect size - sync failed
> file /mnt/xfs_scratch/678 has incorrect size - sync failed
> file /mnt/xfs_scratch/679 has incorrect size - sync failed
> file /mnt/xfs_scratch/680 has incorrect size - sync failed
> file /mnt/xfs_scratch/681 has incorrect size - sync failed
> file /mnt/xfs_scratch/682 has incorrect size - sync failed
> file /mnt/xfs_scratch/683 has incorrect size - sync failed
> file /mnt/xfs_scratch/684 has incorrect size - sync failed
> file /mnt/xfs_scratch/685 has incorrect size - sync failed
> file /mnt/xfs_scratch/686 has incorrect size - sync failed
> file /mnt/xfs_scratch/687 has incorrect size - sync failed
> file /mnt/xfs_scratch/688 has incorrect size - sync failed
> file /mnt/xfs_scratch/689 has incorrect size - sync failed
> file /mnt/xfs_scratch/690 has incorrect size - sync failed
> file /mnt/xfs_scratch/691 has incorrect size - sync failed
> file /mnt/xfs_scratch/692 has incorrect size - sync failed
> file /mnt/xfs_scratch/693 has incorrect size - sync failed
> file /mnt/xfs_scratch/694 has incorrect size - sync failed
> file /mnt/xfs_scratch/695 has incorrect size - sync failed
> file /mnt/xfs_scratch/696 has incorrect size - sync failed
> file /mnt/xfs_scratch/697 has incorrect size - sync failed
> file /mnt/xfs_scratch/698 has incorrect size - sync failed
> file /mnt/xfs_scratch/699 has incorrect size - sync failed
> file /mnt/xfs_scratch/700 has incorrect size - sync failed
> file /mnt/xfs_scratch/701 has incorrect size - sync failed
> file /mnt/xfs_scratch/702 has incorrect size - sync failed
> file /mnt/xfs_scratch/703 has incorrect size - sync failed
> file /mnt/xfs_scratch/704 has incorrect size - sync failed
> file /mnt/xfs_scratch/705 has incorrect size - sync failed
> file /mnt/xfs_scratch/706 has incorrect size - sync failed
> file /mnt/xfs_scratch/707 has incorrect size - sync failed
> file /mnt/xfs_scratch/708 has incorrect size - sync failed
> file /mnt/xfs_scratch/709 has incorrect size - sync failed
> file /mnt/xfs_scratch/710 has incorrect size - sync failed
> file /mnt/xfs_scratch/711 has incorrect size - sync failed
> file /mnt/xfs_scratch/712 has incorrect size - sync failed
> file /mnt/xfs_scratch/713 has incorrect size - sync failed
> file /mnt/xfs_scratch/714 has incorrect size - sync failed
> file /mnt/xfs_scratch/715 has incorrect size - sync failed
> file /mnt/xfs_scratch/716 has incorrect size - sync failed
> file /mnt/xfs_scratch/717 has incorrect size - sync failed
> file /mnt/xfs_scratch/718 has incorrect size - sync failed
> file /mnt/xfs_scratch/719 has incorrect size - sync failed
> file /mnt/xfs_scratch/720 has incorrect size - sync failed
> file /mnt/xfs_scratch/721 has incorrect size - sync failed
> file /mnt/xfs_scratch/722 has incorrect size - sync failed
> file /mnt/xfs_scratch/723 has incorrect size - sync failed
> file /mnt/xfs_scratch/724 has incorrect size - sync failed
> file /mnt/xfs_scratch/725 has incorrect size - sync failed
> file /mnt/xfs_scratch/726 has incorrect size - sync failed
> file /mnt/xfs_scratch/727 has incorrect size - sync
failed > file /mnt/xfs_scratch/728 has incorrect size - sync failed > file /mnt/xfs_scratch/729 has incorrect size - sync failed > file /mnt/xfs_scratch/730 has incorrect size - sync failed > file /mnt/xfs_scratch/731 has incorrect size - sync failed > file /mnt/xfs_scratch/732 has incorrect size - sync failed > file /mnt/xfs_scratch/733 has incorrect size - sync failed > file /mnt/xfs_scratch/734 has incorrect size - sync failed > file /mnt/xfs_scratch/735 has incorrect size - sync failed > file /mnt/xfs_scratch/736 has incorrect size - sync failed > file /mnt/xfs_scratch/737 has incorrect size - sync failed > file /mnt/xfs_scratch/738 has incorrect size - sync failed > file /mnt/xfs_scratch/739 has incorrect size - sync failed > file /mnt/xfs_scratch/740 has incorrect size - sync failed > file /mnt/xfs_scratch/741 has incorrect size - sync failed > file /mnt/xfs_scratch/742 has incorrect size - sync failed > file /mnt/xfs_scratch/743 has incorrect size - sync failed > file /mnt/xfs_scratch/744 has incorrect size - sync failed > file /mnt/xfs_scratch/745 has incorrect size - sync failed > file /mnt/xfs_scratch/746 has incorrect size - sync failed > file /mnt/xfs_scratch/747 has incorrect size - sync failed > file /mnt/xfs_scratch/748 has incorrect size - sync failed > file /mnt/xfs_scratch/749 has incorrect size - sync failed > file /mnt/xfs_scratch/750 has incorrect size - sync failed > file /mnt/xfs_scratch/751 has incorrect size - sync failed > file /mnt/xfs_scratch/752 has incorrect size - sync failed > file /mnt/xfs_scratch/753 has incorrect size - sync failed > file /mnt/xfs_scratch/754 has incorrect size - sync failed > file /mnt/xfs_scratch/755 has incorrect size - sync failed > file /mnt/xfs_scratch/756 has incorrect size - sync failed > file /mnt/xfs_scratch/757 has incorrect size - sync failed > file /mnt/xfs_scratch/758 has incorrect size - sync failed > file /mnt/xfs_scratch/759 has incorrect size - sync failed > file /mnt/xfs_scratch/760 has 
incorrect size - sync failed > file /mnt/xfs_scratch/761 has incorrect size - sync failed > file /mnt/xfs_scratch/762 has incorrect size - sync failed > file /mnt/xfs_scratch/763 has incorrect size - sync failed > file /mnt/xfs_scratch/764 has incorrect size - sync failed > file /mnt/xfs_scratch/765 has incorrect size - sync failed > file /mnt/xfs_scratch/766 has incorrect size - sync failed > file /mnt/xfs_scratch/767 has incorrect size - sync failed > file /mnt/xfs_scratch/768 has incorrect size - sync failed > file /mnt/xfs_scratch/769 has incorrect size - sync failed > file /mnt/xfs_scratch/770 has incorrect size - sync failed > file /mnt/xfs_scratch/771 has incorrect size - sync failed > file /mnt/xfs_scratch/772 has incorrect size - sync failed > file /mnt/xfs_scratch/773 has incorrect size - sync failed > file /mnt/xfs_scratch/774 has incorrect size - sync failed > file /mnt/xfs_scratch/775 has incorrect size - sync failed > file /mnt/xfs_scratch/776 has incorrect size - sync failed > file /mnt/xfs_scratch/777 has incorrect size - sync failed > file /mnt/xfs_scratch/778 has incorrect size - sync failed > file /mnt/xfs_scratch/779 has incorrect size - sync failed > file /mnt/xfs_scratch/780 has incorrect size - sync failed > file /mnt/xfs_scratch/781 has incorrect size - sync failed > file /mnt/xfs_scratch/782 has incorrect size - sync failed > file /mnt/xfs_scratch/783 has incorrect size - sync failed > file /mnt/xfs_scratch/784 has incorrect size - sync failed > file /mnt/xfs_scratch/785 has incorrect size - sync failed > file /mnt/xfs_scratch/786 has incorrect size - sync failed > file /mnt/xfs_scratch/787 has incorrect size - sync failed > file /mnt/xfs_scratch/788 has incorrect size - sync failed > file /mnt/xfs_scratch/789 has incorrect size - sync failed > file /mnt/xfs_scratch/790 has incorrect size - sync failed > file /mnt/xfs_scratch/791 has incorrect size - sync failed > file /mnt/xfs_scratch/792 has incorrect size - sync failed > file 
/mnt/xfs_scratch/793 has incorrect size - sync failed > file /mnt/xfs_scratch/794 has incorrect size - sync failed > file /mnt/xfs_scratch/795 has incorrect size - sync failed > file /mnt/xfs_scratch/796 has incorrect size - sync failed > file /mnt/xfs_scratch/797 has incorrect size - sync failed > file /mnt/xfs_scratch/798 has incorrect size - sync failed > file /mnt/xfs_scratch/799 has incorrect size - sync failed > file /mnt/xfs_scratch/800 has incorrect size - sync failed > file /mnt/xfs_scratch/801 has incorrect size - sync failed > file /mnt/xfs_scratch/802 has incorrect size - sync failed > file /mnt/xfs_scratch/803 has incorrect size - sync failed > file /mnt/xfs_scratch/804 has incorrect size - sync failed > file /mnt/xfs_scratch/805 has incorrect size - sync failed > file /mnt/xfs_scratch/806 has incorrect size - sync failed > file /mnt/xfs_scratch/807 has incorrect size - sync failed > file /mnt/xfs_scratch/808 has incorrect size - sync failed > file /mnt/xfs_scratch/809 has incorrect size - sync failed > file /mnt/xfs_scratch/810 has incorrect size - sync failed > file /mnt/xfs_scratch/811 has incorrect size - sync failed > file /mnt/xfs_scratch/812 has incorrect size - sync failed > file /mnt/xfs_scratch/813 has incorrect size - sync failed > file /mnt/xfs_scratch/814 has incorrect size - sync failed > file /mnt/xfs_scratch/815 has incorrect size - sync failed > file /mnt/xfs_scratch/816 has incorrect size - sync failed > file /mnt/xfs_scratch/817 has incorrect size - sync failed > file /mnt/xfs_scratch/818 has incorrect size - sync failed > file /mnt/xfs_scratch/819 has incorrect size - sync failed > file /mnt/xfs_scratch/820 has incorrect size - sync failed > file /mnt/xfs_scratch/821 has incorrect size - sync failed > file /mnt/xfs_scratch/822 has incorrect size - sync failed > file /mnt/xfs_scratch/823 has incorrect size - sync failed > file /mnt/xfs_scratch/824 has incorrect size - sync failed > file /mnt/xfs_scratch/825 has incorrect size - sync 
failed > file /mnt/xfs_scratch/826 has incorrect size - sync failed > file /mnt/xfs_scratch/827 has incorrect size - sync failed > file /mnt/xfs_scratch/828 has incorrect size - sync failed > file /mnt/xfs_scratch/829 has incorrect size - sync failed > file /mnt/xfs_scratch/830 has incorrect size - sync failed > file /mnt/xfs_scratch/831 has incorrect size - sync failed > file /mnt/xfs_scratch/832 has incorrect size - sync failed > file /mnt/xfs_scratch/833 has incorrect size - sync failed > file /mnt/xfs_scratch/834 has incorrect size - sync failed > file /mnt/xfs_scratch/835 has incorrect size - sync failed > file /mnt/xfs_scratch/836 has incorrect size - sync failed > file /mnt/xfs_scratch/837 has incorrect size - sync failed > file /mnt/xfs_scratch/838 has incorrect size - sync failed > file /mnt/xfs_scratch/839 has incorrect size - sync failed > file /mnt/xfs_scratch/840 has incorrect size - sync failed > file /mnt/xfs_scratch/841 has incorrect size - sync failed > file /mnt/xfs_scratch/842 has incorrect size - sync failed > file /mnt/xfs_scratch/843 has incorrect size - sync failed > file /mnt/xfs_scratch/844 has incorrect size - sync failed > file /mnt/xfs_scratch/845 has incorrect size - sync failed > file /mnt/xfs_scratch/846 has incorrect size - sync failed > file /mnt/xfs_scratch/847 has incorrect size - sync failed > file /mnt/xfs_scratch/848 has incorrect size - sync failed > file /mnt/xfs_scratch/849 has incorrect size - sync failed > file /mnt/xfs_scratch/850 has incorrect size - sync failed > file /mnt/xfs_scratch/851 has incorrect size - sync failed > file /mnt/xfs_scratch/852 has incorrect size - sync failed > file /mnt/xfs_scratch/853 has incorrect size - sync failed > file /mnt/xfs_scratch/854 has incorrect size - sync failed > file /mnt/xfs_scratch/855 has incorrect size - sync failed > file /mnt/xfs_scratch/856 has incorrect size - sync failed > file /mnt/xfs_scratch/857 has incorrect size - sync failed > file /mnt/xfs_scratch/858 has 
incorrect size - sync failed > file /mnt/xfs_scratch/859 has incorrect size - sync failed > file /mnt/xfs_scratch/860 has incorrect size - sync failed > file /mnt/xfs_scratch/861 has incorrect size - sync failed > file /mnt/xfs_scratch/862 has incorrect size - sync failed > file /mnt/xfs_scratch/863 has incorrect size - sync failed > file /mnt/xfs_scratch/864 has incorrect size - sync failed > file /mnt/xfs_scratch/865 has incorrect size - sync failed > file /mnt/xfs_scratch/866 has incorrect size - sync failed > file /mnt/xfs_scratch/867 has incorrect size - sync failed > file /mnt/xfs_scratch/868 has incorrect size - sync failed > file /mnt/xfs_scratch/869 has incorrect size - sync failed > file /mnt/xfs_scratch/870 has incorrect size - sync failed > file /mnt/xfs_scratch/871 has incorrect size - sync failed > file /mnt/xfs_scratch/872 has incorrect size - sync failed > file /mnt/xfs_scratch/873 has incorrect size - sync failed > file /mnt/xfs_scratch/874 has incorrect size - sync failed > file /mnt/xfs_scratch/875 has incorrect size - sync failed > file /mnt/xfs_scratch/876 has incorrect size - sync failed > file /mnt/xfs_scratch/877 has incorrect size - sync failed > file /mnt/xfs_scratch/878 has incorrect size - sync failed > file /mnt/xfs_scratch/879 has incorrect size - sync failed > file /mnt/xfs_scratch/880 has incorrect size - sync failed > file /mnt/xfs_scratch/881 has incorrect size - sync failed > file /mnt/xfs_scratch/882 has incorrect size - sync failed > file /mnt/xfs_scratch/883 has incorrect size - sync failed > file /mnt/xfs_scratch/884 has incorrect size - sync failed > file /mnt/xfs_scratch/885 has incorrect size - sync failed > file /mnt/xfs_scratch/886 has incorrect size - sync failed > file /mnt/xfs_scratch/887 has incorrect size - sync failed > file /mnt/xfs_scratch/888 has incorrect size - sync failed > file /mnt/xfs_scratch/889 has incorrect size - sync failed > file /mnt/xfs_scratch/890 has incorrect size - sync failed > file 
/mnt/xfs_scratch/891 has incorrect size - sync failed > file /mnt/xfs_scratch/892 has incorrect size - sync failed > file /mnt/xfs_scratch/893 has incorrect size - sync failed > file /mnt/xfs_scratch/894 has incorrect size - sync failed > file /mnt/xfs_scratch/895 has incorrect size - sync failed > file /mnt/xfs_scratch/896 has incorrect size - sync failed > file /mnt/xfs_scratch/897 has incorrect size - sync failed > file /mnt/xfs_scratch/898 has incorrect size - sync failed > file /mnt/xfs_scratch/899 has incorrect size - sync failed > file /mnt/xfs_scratch/900 has incorrect size - sync failed > file /mnt/xfs_scratch/901 has incorrect size - sync failed > file /mnt/xfs_scratch/902 has incorrect size - sync failed > file /mnt/xfs_scratch/903 has incorrect size - sync failed > file /mnt/xfs_scratch/904 has incorrect size - sync failed > file /mnt/xfs_scratch/905 has incorrect size - sync failed > file /mnt/xfs_scratch/906 has incorrect size - sync failed > file /mnt/xfs_scratch/907 has incorrect size - sync failed > file /mnt/xfs_scratch/908 has incorrect size - sync failed > file /mnt/xfs_scratch/909 has incorrect size - sync failed > file /mnt/xfs_scratch/910 has incorrect size - sync failed > file /mnt/xfs_scratch/911 has incorrect size - sync failed > file /mnt/xfs_scratch/912 has incorrect size - sync failed > file /mnt/xfs_scratch/913 has incorrect size - sync failed > file /mnt/xfs_scratch/914 has incorrect size - sync failed > file /mnt/xfs_scratch/915 has incorrect size - sync failed > file /mnt/xfs_scratch/916 has incorrect size - sync failed > file /mnt/xfs_scratch/917 has incorrect size - sync failed > file /mnt/xfs_scratch/918 has incorrect size - sync failed > file /mnt/xfs_scratch/919 has incorrect size - sync failed > file /mnt/xfs_scratch/920 has incorrect size - sync failed > file /mnt/xfs_scratch/921 has incorrect size - sync failed > file /mnt/xfs_scratch/922 has incorrect size - sync failed > file /mnt/xfs_scratch/923 has incorrect size - sync 
failed > file /mnt/xfs_scratch/924 has incorrect size - sync failed > file /mnt/xfs_scratch/925 has incorrect size - sync failed > file /mnt/xfs_scratch/926 has incorrect size - sync failed > file /mnt/xfs_scratch/927 has incorrect size - sync failed > file /mnt/xfs_scratch/928 has incorrect size - sync failed > file /mnt/xfs_scratch/929 has incorrect size - sync failed > file /mnt/xfs_scratch/930 has incorrect size - sync failed > file /mnt/xfs_scratch/931 has incorrect size - sync failed > file /mnt/xfs_scratch/932 has incorrect size - sync failed > file /mnt/xfs_scratch/933 has incorrect size - sync failed > file /mnt/xfs_scratch/934 has incorrect size - sync failed > file /mnt/xfs_scratch/935 has incorrect size - sync failed > file /mnt/xfs_scratch/936 has incorrect size - sync failed > file /mnt/xfs_scratch/937 has incorrect size - sync failed > file /mnt/xfs_scratch/938 has incorrect size - sync failed > file /mnt/xfs_scratch/939 has incorrect size - sync failed > file /mnt/xfs_scratch/940 has incorrect size - sync failed > file /mnt/xfs_scratch/941 has incorrect size - sync failed > file /mnt/xfs_scratch/942 has incorrect size - sync failed > file /mnt/xfs_scratch/943 has incorrect size - sync failed > file /mnt/xfs_scratch/944 has incorrect size - sync failed > file /mnt/xfs_scratch/945 has incorrect size - sync failed > file /mnt/xfs_scratch/946 has incorrect size - sync failed > file /mnt/xfs_scratch/947 has incorrect size - sync failed > file /mnt/xfs_scratch/948 has incorrect size - sync failed > file /mnt/xfs_scratch/949 has incorrect size - sync failed > file /mnt/xfs_scratch/950 has incorrect size - sync failed > file /mnt/xfs_scratch/951 has incorrect size - sync failed > file /mnt/xfs_scratch/952 has incorrect size - sync failed > file /mnt/xfs_scratch/953 has incorrect size - sync failed > file /mnt/xfs_scratch/954 has incorrect size - sync failed > file /mnt/xfs_scratch/955 has incorrect size - sync failed > file /mnt/xfs_scratch/956 has 
incorrect size - sync failed > file /mnt/xfs_scratch/957 has incorrect size - sync failed > file /mnt/xfs_scratch/958 has incorrect size - sync failed > file /mnt/xfs_scratch/959 has incorrect size - sync failed > file /mnt/xfs_scratch/960 has incorrect size - sync failed > file /mnt/xfs_scratch/961 has incorrect size - sync failed > file /mnt/xfs_scratch/962 has incorrect size - sync failed > file /mnt/xfs_scratch/963 has incorrect size - sync failed > file /mnt/xfs_scratch/964 has incorrect size - sync failed > file /mnt/xfs_scratch/965 has incorrect size - sync failed > file /mnt/xfs_scratch/966 has incorrect size - sync failed > file /mnt/xfs_scratch/967 has incorrect size - sync failed > file /mnt/xfs_scratch/968 has incorrect size - sync failed > file /mnt/xfs_scratch/969 has incorrect size - sync failed > file /mnt/xfs_scratch/970 has incorrect size - sync failed > file /mnt/xfs_scratch/971 has incorrect size - sync failed > file /mnt/xfs_scratch/972 has incorrect size - sync failed > file /mnt/xfs_scratch/973 has incorrect size - sync failed > file /mnt/xfs_scratch/974 has incorrect size - sync failed > file /mnt/xfs_scratch/975 has incorrect size - sync failed > file /mnt/xfs_scratch/976 has incorrect size - sync failed > file /mnt/xfs_scratch/977 has incorrect size - sync failed > file /mnt/xfs_scratch/978 has incorrect size - sync failed > file /mnt/xfs_scratch/979 has incorrect size - sync failed > file /mnt/xfs_scratch/980 has incorrect size - sync failed > file /mnt/xfs_scratch/981 has incorrect size - sync failed > file /mnt/xfs_scratch/982 has incorrect size - sync failed > file /mnt/xfs_scratch/983 has incorrect size - sync failed > file /mnt/xfs_scratch/984 has incorrect size - sync failed > file /mnt/xfs_scratch/985 has incorrect size - sync failed > file /mnt/xfs_scratch/986 has incorrect size - sync failed > file /mnt/xfs_scratch/987 has incorrect size - sync failed > file /mnt/xfs_scratch/988 has incorrect size - sync failed > file 
/mnt/xfs_scratch/989 has incorrect size - sync failed > file /mnt/xfs_scratch/990 has incorrect size - sync failed > file /mnt/xfs_scratch/991 has incorrect size - sync failed > file /mnt/xfs_scratch/992 has incorrect size - sync failed > file /mnt/xfs_scratch/993 has incorrect size - sync failed > file /mnt/xfs_scratch/994 has incorrect size - sync failed > file /mnt/xfs_scratch/995 has incorrect size - sync failed > file /mnt/xfs_scratch/996 has incorrect size - sync failed > file /mnt/xfs_scratch/997 has incorrect size - sync failed > file /mnt/xfs_scratch/998 has incorrect size - sync failed > file /mnt/xfs_scratch/999 has incorrect size - sync failed 183 184 0s ... Not run: 010 022 023 024 025 035 036 037 038 039 040 043 044 050 052 055 057 058 077 090 092 093 094 095 097 098 099 101 102 112 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 168 180 Failures: 016 018 081 082 103 130 132 166 167 171 172 173 175 176 177 178 182 Failed 17 of 109 tests From owner-xfs@oss.sgi.com Wed Mar 19 18:08:50 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 18:09:22 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2K18k94012089 for ; Wed, 19 Mar 2008 18:08:48 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA29214; Thu, 20 Mar 2008 12:09:13 +1100 Message-ID: <47E1B939.3060008@sgi.com> Date: Thu, 20 Mar 2008 12:09:13 +1100 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.9 (Macintosh/20071031) MIME-Version: 1.0 To: strr-debian@decisionsoft.co.uk CC: xfs@oss.sgi.com Subject: Re: 2.6.24.3 nfs server on xfs 
keeps producing nfsd: non-standard errno: -117
References: <47DEFE5E.4030703@decisionsoft.co.uk> <47DF0C9D.1010602@sgi.com> <47DFC880.6040403@decisionsoft.co.uk>
In-Reply-To: <47DFC880.6040403@decisionsoft.co.uk>

Stuart Rowan wrote:
> Timothy Shimmin wrote, on 18/03/08 00:28:
>> Hi Stuart,
>>
>> Stuart Rowan wrote:
>>> I have *millions* of lines (>200k per minute, according to syslog) of:
>>>     nfsd: non-standard errno: -117
>>> being sent out of dmesg.
>>>
>>> Now, errno 117 is:
>>>     #define EUCLEAN 117 /* Structure needs cleaning */
>>>
>> In XFS we mapped EFSCORRUPTED to EUCLEAN, as EFSCORRUPTED
>> didn't exist on Linux. However, when this error is encountered in XFS
>> we normally output an appropriate message to the syslog. Our default
>> error level is 3 and most reports are rated at 1, so I would have
>> thought they should show up.
>>
>> --Tim
>>
>>> xfs_repair -n says the filesystems are clean.
>>> xfs_repair has been run multiple times to completion on the
>>> filesystems; all is fine.
>>>
>>> The NFS server is currently in use (indeed, the message only starts
>>> once clients connect) and works absolutely fine.
>>>
>>> How do I find out what (if anything) is wrong with my filesystem, and
>>> how do I appropriately silence this message?
>>>
> I briefly changed the sysctl fs.xfs.error_level to 6 and then back to 3.

Good idea (I was thinking about that :-). Somehow, your subject line referring to 2.6.24 didn't stick in my brain (that's pretty old), so I was looking at recent code, where I can't see this error case coming from xfs_itobp() (the code is now in xfs_imap_to_bp()).
Looking at old code, I see two EFSCORRUPTED paths, with the following one triggering only at XFS_ERRLEVEL_HIGH (presumably why you didn't see it until now):

	/*
	 * Validate the magic number and version of every inode in the buffer
	 * (if DEBUG kernel) or the first inode in the buffer, otherwise.
	 */
#ifdef DEBUG
	ni = BBTOB(imap.im_len) >> mp->m_sb.sb_inodelog;
#else
	ni = 1;
#endif
	for (i = 0; i < ni; i++) {
		int		di_ok;
		xfs_dinode_t	*dip;

		dip = (xfs_dinode_t *)xfs_buf_offset(bp,
				(i << mp->m_sb.sb_inodelog));
		di_ok = INT_GET(dip->di_core.di_magic, ARCH_CONVERT) == XFS_DINODE_MAGIC &&
			XFS_DINODE_GOOD_VERSION(INT_GET(dip->di_core.di_version, ARCH_CONVERT));
		if (unlikely(XFS_TEST_ERROR(!di_ok, mp, XFS_ERRTAG_ITOBP_INOTOBP,
				XFS_RANDOM_ITOBP_INOTOBP))) {
#ifdef DEBUG
			prdev("bad inode magic/vsn daddr 0x%llx #%d (magic=%x)",
				mp->m_dev, (unsigned long long)imap.im_blkno, i,
				INT_GET(dip->di_core.di_magic, ARCH_CONVERT));
#endif
			XFS_CORRUPTION_ERROR("xfs_itobp", XFS_ERRLEVEL_HIGH,
					     mp, dip);
			xfs_trans_brelse(tp, bp);
			return XFS_ERROR(EFSCORRUPTED);
		}
	}

So the first inode in the buffer has the wrong magic number or version number. I'm surprised that this wasn't picked up by repair or check.
--Tim

> It gives the following message and backtrace:
>
>> Mar 18 13:35:15 evenlode kernel: nfsd: non-standard errno: -117
>> Mar 18 13:35:15 evenlode kernel: 0x0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>> Mar 18 13:35:15 evenlode kernel: Filesystem "dm-0": XFS internal error xfs_itobp at line 360 of file fs/xfs/xfs_inode.c.  Caller 0xffffffff8821224d
>> Mar 18 13:35:15 evenlode kernel: Pid: 2791, comm: nfsd Not tainted 2.6.24.3-generic #1
>> Mar 18 13:35:15 evenlode kernel: Call Trace:
>> Mar 18 13:35:15 evenlode kernel: [] :xfs:xfs_iread+0x71/0x1e8
>> Mar 18 13:35:15 evenlode kernel: [] :xfs:xfs_itobp+0x141/0x17b
>> Mar 18 13:35:15 evenlode kernel: [] :xfs:xfs_iread+0x71/0x1e8
>> Mar 18 13:35:15 evenlode kernel: [] :xfs:xfs_iread+0x71/0x1e8
>> Mar 18 13:35:15 evenlode kernel: [] :xfs:xfs_iget_core+0x352/0x63a
>> Mar 18 13:35:15 evenlode kernel: [] alloc_inode+0x152/0x1c2
>> Mar 18 13:35:15 evenlode kernel: [] :xfs:xfs_iget+0x9b/0x13f
>> Mar 18 13:35:15 evenlode kernel: [] :xfs:xfs_vget+0x4d/0xbb
>
> Does that help?
>
> Thanks,
> Stu.
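For reference, the errno mapping and magic-number check discussed in the thread above can be illustrated from userspace. The sketch below is an editorial addition, not from the thread: `errno.EUCLEAN` is 117 on Linux, and on-disk XFS inodes begin with the big-endian magic 0x494e (ASCII "IN", the XFS_DINODE_MAGIC value the quoted kernel code compares against). The `dinode_magic_ok` helper is hypothetical, for illustration only:

```python
import errno
import struct

# nfsd logged "non-standard errno: -117"; on Linux, errno 117 is EUCLEAN
# ("Structure needs cleaning"), which XFS uses in place of EFSCORRUPTED.
assert errno.EUCLEAN == 117

# XFS on-disk inodes start with a two-byte big-endian magic: 0x494e,
# i.e. the ASCII characters "IN".  The xfs_itobp() check quoted above
# returns EFSCORRUPTED when this magic (or the inode version) is wrong.
XFS_DINODE_MAGIC = 0x494E

def dinode_magic_ok(raw_inode: bytes) -> bool:
    """Return True if a raw on-disk inode buffer carries the XFS magic."""
    (magic,) = struct.unpack_from(">H", raw_inode, 0)
    return magic == XFS_DINODE_MAGIC

print(dinode_magic_ok(b"IN" + bytes(94)))  # a well-formed inode header
print(dinode_magic_ok(bytes(96)))          # a zeroed (corrupt) header
```

This is the same comparison xfs_repair performs when scanning inode chunks, which is why a wrong magic surviving a repair run surprised Tim.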
From owner-xfs@oss.sgi.com Wed Mar 19 18:20:28 2008
Message-ID: <47E1BB82.105@sgi.com>
Date: Thu, 20 Mar 2008 12:18:58 +1100
From: Mark Goodwin <markgw@sgi.com>
Organization: SGI Engineering
To: "Josef 'Jeff' Sipek"
CC: XFS Mailing List
Subject: Re: xfsqa failures
References: <20080319064959.GD11349@josefsipek.net>
In-Reply-To: <20080319064959.GD11349@josefsipek.net>

Thanks for the report. You're seeing roughly the same failures that we're seeing. There is a fairly big effort going on internally here at SGI to get the QA suite cleaned up and usable again. Once that's done, we'll push out the fixes, of course.

Cheers
-- Mark

Josef 'Jeff' Sipek wrote:
> Hello all!
>
> I'm going to be blunt... what's the deal with xfsqa?
> There are far too many
> tests that seem broken (or at least I hope it's the tests and not XFS). It
> makes it really hard to know whether a test failed because the test's golden
> output is wrong or because XFS has a bug - doubly so when one changes
> something in XFS.
>
> I ran the "auto" group of tests (which uses about 2/3 of the tests in qa),
> and out of the 109 tests run, 17 failed for me. Some failed because the
> output didn't match, but one test leaks a reference of some sort, forcing me
> to reboot and resume the tests from that point, and another test trips an
> assertion. Running the other tests (not part of the auto group) is even
> worse (e.g., one test deadlocks the system).
>
> Summary:
>
> Not run: 010 022 023 024 025 035 036 037 038 039 040 043 044 050 052 055
>          057 058 077 090 092 093 094 095 097 098 099 101 102 112 142 143
>          144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159
>          160 161 162 163 168 180
> Failures: 016 018 081 082 103 130 132 166 167 171 172 173 175 176 177 178
>           182
> Failed 17 of 109 tests
>
> Josef 'Jeff' Sipek.
>
> ---
> Here's some detailed info (xfsqa output, the assertion backtrace) about
> tests that fail on a pretty average dual-CPU Athlon (the first-generation
> Athlon MP - if you want detailed specs, let me know):
>
> # ./check -g auto
> FSTYP         -- xfs (debug)
> PLATFORM      -- Linux/i686 fstest 2.6.25-rc6
> MKFS_OPTIONS  -- -f -bsize=4096 /dev/hda6
> MOUNT_OPTIONS -- /dev/hda6 /mnt/xfs_scratch
>
> 001 13s ...
> 002 1s ...
> 003 1s ...
> 004 2s ...
> 005 1s ...
> 006 40s ...
> 007 61s ...
> 008 7s ...
> 009 1s ...
> 010 [not run] dbtest was not built for this platform
> 011 52s ...
> 012 6s ...
> 013 271s ...
> 014 71s ...
> 015 6s ...
> 016 31s ...
[failed, exit status 1] - output mismatch (see 016.out.bad)
> 10,114c10,11
> < *** generate log traffic
> < *** mount
> < *** fiddle
> < *** unmount
> < *** check for corruption
> [... the same five-line block repeats for each of the remaining iterations ...]
> ---
>> !!! unexpected log position 3318
>> (see 016.full for details)
> 017 41s ...
> 018 22s ... - output mismatch (see 018.out.bad)
> 4,17c4,6
> < *** compare logprint: 018.trans_inode with 018.fulldir/trans_inode.mnt-onoalign,logbsize=32k.mkfs-lsize=2000b-lversion=1.filtered
> < *** compare logprint: 018.trans_buf with 018.fulldir/trans_buf.mnt-onoalign,logbsize=32k.mkfs-lsize=2000b-lversion=1.filtered
> < *** compare logprint: 018.op with 018.fulldir/op.mnt-onoalign,logbsize=32k.mkfs-lsize=2000b-lversion=2.filtered
> < *** compare logprint: 018.trans_inode with 018.fulldir/trans_inode.mnt-onoalign,logbsize=32k.mkfs-lsize=2000b-lversion=2.filtered
> < *** compare logprint: 018.trans_buf with 018.fulldir/trans_buf.mnt-onoalign,logbsize=32k.mkfs-lsize=2000b-lversion=2.filtered
> < *** compare logprint: 018.op with 018.fulldir/op.mnt-onoalign,logbsize=64k.mkfs-lsize=2000b-lversion=2.filtered
> < *** compare logprint: 018.trans_inode with 018.fulldir/trans_inode.mnt-onoalign,logbsize=64k.mkfs-lsize=2000b-lversion=2.filtered
> < *** compare logprint: 018.trans_buf with 018.fulldir/trans_buf.mnt-onoalign,logbsize=64k.mkfs-lsize=2000b-lversion=2.filtered
> < *** compare logprint: 018.op with 018.fulldir/op.mnt-onoalign,logbsize=128k.mkfs-lsize=2000b-lversion=2.filtered
> < *** compare logprint: 018.trans_inode with 018.fulldir/trans_inode.mnt-onoalign,logbsize=128k.mkfs-lsize=2000b-lversion=2.filtered
> < *** compare logprint: 018.trans_buf with 018.fulldir/trans_buf.mnt-onoalign,logbsize=128k.mkfs-lsize=2000b-lversion=2.filtered
> < *** compare logprint: 018.op with 018.fulldir/op.mnt-onoalign,logbsize=256k.mkfs-lsize=2000b-lversion=2.filtered
> < *** compare logprint: 018.trans_inode with 018.fulldir/trans_inode.mnt-onoalign,logbsize=256k.mkfs-lsize=2000b-lversion=2.filtered
> < *** compare logprint: 018.trans_buf with 018.fulldir/trans_buf.mnt-onoalign,logbsize=256k.mkfs-lsize=2000b-lversion=2.filtered
> ---
>> 0 split(s) found prior to diff cmd: 50,54d49
>> logprint output 018.op differs to 018.fulldir/op.mnt-onoalign,logbsize=32k.mkfs-lsize=2000b-lversion=1.filtered considering splits
>> (see 018.full for details)
> 019 3s ...
> 020 10s ...
> 021 2s ...
> 022 [not run] No dump tape specified
> 023 [not run] No dump tape specified
> 024 [not run] No dump tape specified
> 025 [not run] No dump tape specified
> 026 15s ...
> 027 15s ...
> 028 24s ...
> 029 2s ...
> 030 22s ...
> 031 22s ...
> 032 25s ...
> 033 17s ...
> 034 3s ...
> 035 [not run] No dump tape specified
> 036 [not run] No dump tape specified
> 037 [not run] No dump tape specified
> 038 [not run] No dump tape specified
> 039 [not run] No dump tape specified
> 040 [not run] Can't run srcdiff without KWORKAREA set
> 041 58s ...
> 042 131s ...
> 043 [not run] No dump tape specified
> 044 [not run] This test requires a valid $SCRATCH_LOGDEV
> 045 2s ...
> 046 14s ...
> 047 24s ...
> 048 0s ...
> 049 13s ...
> 050 25s ... [not run] Installed kernel does not support XFS quota
> 051 2s ...
> 052 4s ... [not run] Installed kernel does not support XFS quota
> 053 3s ...
> 054 5s ...
> 055 [not run] No dump tape specified
> 056 14s ...
> 057 [not run] Place holder for IRIX test 057
> 058 [not run] Place holder for IRIX test 058
> 061 13s ...
> 062 3s ...
> 063 14s ...
> 065 37s ...
> 066 2s ...
> 067 3s ...
> 068 54s ...
> 069 28s ...
> 070 25s ...
> 072 1s ...
> 073 53s ...
> 074 521s ...
> 075 60s ...
> 076 148s ...
> 077 [not run] No linux directory to source files from
> 078 43s ...
> 079 1s ...
> 081 6s ...
[failed, exit status 1] - output mismatch (see 081.out.bad) > 3a4,5 >> logprint output 081.ugquota.trans_inode differs to 081.fulldir/trans_inode.mnt-oquota,gquota.mkfs-lsize=2000b-lversion=1.filtered >> (see 081.full for details) > 082 40s ... - output mismatch (see 082.out.bad) > 5,19c5,6 > < *** compare logprint: 082.trans_buf with 082.fulldir/trans_buf.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2.sync.filtered > < --- mkfs=version=2,su=4096, mnt=logbsize=32k, sync=sync --- > < *** compare logprint: 082.trans_inode with 082.fulldir/trans_inode.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=4096.sync.filtered > < *** compare logprint: 082.trans_buf with 082.fulldir/trans_buf.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=4096.sync.filtered > < --- mkfs=version=2,su=32768, mnt=logbsize=32k, sync=sync --- > < *** compare logprint: 082.trans_inode with 082.fulldir/trans_inode.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=32768.sync.filtered > < *** compare logprint: 082.trans_buf with 082.fulldir/trans_buf.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=32768.sync.filtered > < --- mkfs=version=2,su=36864, mnt=logbsize=32k, sync=sync --- > < > < *** Cannot mkfs for this test using option specified: -l size=2000b -l version=2,su=36864 *** > < > < --- mkfs=version=2,su=5120, mnt=logbsize=32k, sync=sync --- > < > < *** Cannot mkfs for this test using option specified: -l size=2000b -l version=2,su=5120 *** > < > --- >> logprint output 082.trans_inode differs to 082.fulldir/trans_inode.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2.sync.filtered >> (see 082.full for details) > 22,39c9,11 > < *** compare logprint: 082.trans_inode with 082.fulldir/trans_inode.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2.nosync.filtered > < *** compare logprint: 082.trans_buf with 082.fulldir/trans_buf.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2.nosync.filtered > < --- mkfs=version=2,su=4096, mnt=logbsize=32k, sync=nosync --- > < *** compare logprint: 082.op with 
082.fulldir/op.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=4096.nosync.filtered > < *** compare logprint: 082.trans_inode with 082.fulldir/trans_inode.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=4096.nosync.filtered > < *** compare logprint: 082.trans_buf with 082.fulldir/trans_buf.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=4096.nosync.filtered > < --- mkfs=version=2,su=32768, mnt=logbsize=32k, sync=nosync --- > < *** compare logprint: 082.op with 082.fulldir/op.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=32768.nosync.filtered > < *** compare logprint: 082.trans_inode with 082.fulldir/trans_inode.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=32768.nosync.filtered > < *** compare logprint: 082.trans_buf with 082.fulldir/trans_buf.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2,su=32768.nosync.filtered > < --- mkfs=version=2,su=36864, mnt=logbsize=32k, sync=nosync --- > < > < *** Cannot mkfs for this test using option specified: -l size=2000b -l version=2,su=36864 *** > < > < --- mkfs=version=2,su=5120, mnt=logbsize=32k, sync=nosync --- > < > < *** Cannot mkfs for this test using option specified: -l size=2000b -l version=2,su=5120 *** > < > --- >> 0 split(s) found prior to diff cmd: 50,54d49 >> logprint output 082.op differs to 082.fulldir/op.mnt-ologbsize=32k.mkfs-lsize=2000b-lversion=2.nosync.filtered considering splits >> (see 082.full for details) > 083 213s ... > 084 61s ... > 088 1s ... > 089 258s ... > 090 [not run] External volumes not in use, skipped this test > 091 149s ... > 092 [not run] inode64 not checked on this platform > 093 [not run] not suitable for this OS: Linux > 094 [not run] External volumes not in use, skipped this test > 095 [not run] not suitable for this OS: Linux > 096 4s ... > 097 [not run] not suitable for this OS: Linux > 098 [not run] not suitable for this filesystem type: xfs > 099 [not run] not suitable for this OS: Linux > 100 59s ... 
> 101 [not run] not suitable for this filesystem type: xfs
> 102 [not run] not suitable for this filesystem type: xfs
> 103 2s ... - output mismatch (see 103.out.bad)
> 7c7
> < ln: creating symbolic link `SCRATCH_MNT/nosymlink/target' to `SCRATCH_MNT/nosymlink/source': Operation not permitted
> ---
>> ln: creating symbolic link `SCRATCH_MNT/nosymlink/target': Operation not permitted
> 105 2s ...
> 112 [not run] fsx not built with AIO for this platform
> 117 36s ...
> 120 17s ...
> 121 9s ...
> 122 3s ...
> 123 0s ...
> 124 14s ...
> 125 62s ...
> 126 1s ...
> 127 630s ...
> 128 3s ...
> 129 41s ...
> 130 - output mismatch (see 130.out.bad)
> 20c20
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 3 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
> 24c24
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 1000.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
> 26c26
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 1000.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
> 30c30
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 4 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
> 89c89
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 2.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
> 101c101
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec)
> 103c103
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec)
> 105c105
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec)
> 107c107
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec)
> 109c109
> < XXX Bytes, X ops; XX:XX:XX.X (XXX 
YYY/sec and XXX ops/sec) > --- >> 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) > 111c111 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) > 113c113 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) > 115c115 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) > 117c117 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) > 119c119 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 0.000000 bytes, 0 ops; 0.0000 sec (nan bytes/sec and nan ops/sec) > 124c124 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 10.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 128c128 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 23.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 133c133 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 136c136 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 139c139 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 142c142 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 145c145 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 148c148 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and 
inf ops/sec) > 151c151 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 154c154 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 157c157 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 160c160 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 163c163 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 166c166 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 169c169 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 172c172 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 13.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 174c174 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 176c176 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 178c178 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 180c180 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 182c182 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 184c184 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 
1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 186c186 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 188c188 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 190c190 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 192c192 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 194c194 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 196c196 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 198c198 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 200c200 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 202c202 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 204c204 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 207c207 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 210c210 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 213c213 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 216c216 > < XXX Bytes, X ops; 
XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 218c218 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 220c220 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 222c222 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 224c224 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 226c226 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 228c228 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 230c230 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 232c232 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 234c234 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 236c236 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 238c238 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 240c240 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 242c242 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and 
inf ops/sec) > 244c244 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 246c246 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 248c248 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 251c251 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 254c254 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 257c257 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 260c260 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 265c265 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 268c268 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 271c271 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 274c274 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 277c277 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 280c280 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 283c283 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 
1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 286c286 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 289c289 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 292c292 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 295c295 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 298c298 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 301c301 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 304c304 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 13.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 339c339 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 342c342 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 345c345 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 348c348 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 383c383 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 386c386 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 389c389 > < XXX Bytes, X ops; 
XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 392c392 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 131 1s ... > 132 - output mismatch (see 132.out.bad) > 3c3 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 5c5 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 7c7 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 9c9 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 11c11 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 13c13 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 15c15 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 17c17 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 21c21 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 25c25 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 29c29 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 33c33 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 
512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 37c37 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 41c41 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 45c45 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 49c49 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 512.000000 bytes, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 51c51 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 53c53 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 55c55 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 57c57 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 63c63 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 69c69 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 75c75 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 81c81 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 85c85 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 89c89 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) 
> 93c93 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 97c97 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 1 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 99c99 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 2 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 101c101 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 2 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 121c121 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 2 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 133c133 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 2 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 141c141 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 2 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 143c143 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 171c171 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 177c177 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 4 KiB, 1 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 183c183 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 8 KiB, 2 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 185c185 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 8 KiB, 2 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 233c233 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 16 KiB, 4 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 235c235 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 16 KiB, 4 ops; 0.0000 sec (inf EiB/sec and inf ops/sec) > 283c283 > < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) > --- >> 32 
KiB, 8 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
> 285c285
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 32 KiB, 8 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
> 337c337
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 64 KiB, 16 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
> 339c339
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 64 KiB, 16 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
> 393c393
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 128 KiB, 32 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
> 395c395
> < XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> ---
>> 128 KiB, 32 ops; 0.0000 sec (inf EiB/sec and inf ops/sec)
> 134 2s ...
> 135 2s ...
> 137 37s ...
> 138 75s ...
> 139 84s ...
> 140 55s ...
> 141 2s ...
> 142 [not run] Assuming DMAPI modules are not loaded
> 143 [not run] Assuming DMAPI modules are not loaded
> 144 [not run] Assuming DMAPI modules are not loaded
> 145 [not run] Assuming DMAPI modules are not loaded
> 146 [not run] Assuming DMAPI modules are not loaded
> 147 [not run] Assuming DMAPI modules are not loaded
> 148 [not run] parallel repair binary xfs_prepair64 is not installed
> 149 [not run] parallel repair binary xfs_prepair is not installed
> 150 [not run] Assuming DMAPI modules are not loaded
> 151 [not run] Assuming DMAPI modules are not loaded
> 152 [not run] Assuming DMAPI modules are not loaded
> 153 [not run] Assuming DMAPI modules are not loaded
> 154 [not run] Assuming DMAPI modules are not loaded
> 155 [not run] Assuming DMAPI modules are not loaded
> 156 [not run] Assuming DMAPI modules are not loaded
> 157 [not run] Assuming DMAPI modules are not loaded
> 158 [not run] Assuming DMAPI modules are not loaded
> 159 [not run] Assuming DMAPI modules are not loaded
> 160 [not run] Assuming DMAPI modules are not loaded
> 161 [not run] Assuming DMAPI modules are not loaded
> 162 [not run] Assuming DMAPI modules are not loaded
> 163 [not run] Assuming DMAPI modules are not loaded
> 164 0s ...
> 165 1s ...
> 166 - output mismatch (see 166.out.bad)
> 2,6c2,6
> < 0: [0..31]: XX..YY AG (AA..BB) 32
> < 1: [32..127]: XX..YY AG (AA..BB) 96 10000
> < 2: [128..159]: XX..YY AG (AA..BB) 32
> < 3: [160..223]: XX..YY AG (AA..BB) 64 10000
> < 4: [224..255]: XX..YY AG (AA..BB) 32
> ---
>> 0: [0..7]: XX..YY AG (AA..BB) 8
>> 1: [8..127]: XX..YY AG (AA..BB) 120 10000
>> 2: [128..135]: XX..YY AG (AA..BB) 8
>> 3: [136..247]: XX..YY AG (AA..BB) 112 10000
>> 4: [248..255]: XX..YY AG (AA..BB) 8
> 167 274s ...
>
> ****************************
> 167 leaks a reference of some sort, unable to unmount xfs_scratch (-EBUSY), but
> test does not die; reboot
> ****************************
>
> 168 [not run] Assuming DMAPI modules are not loaded
> 169
> 170 26s ...
> 171 [failed, exit status 1] - output mismatch (see 171.out.bad)
> 6,21c6,7
> < + passed, streams are in seperate AGs
> < # testing 64 16 8 100 1 1 1 ....
> < # streaming
> < # sync AGs...
> < # checking stream AGs...
> < + passed, streams are in seperate AGs
> < # testing 64 16 8 100 1 0 0 ....
> < # streaming
> < # sync AGs...
> < # checking stream AGs...
> < + passed, streams are in seperate AGs
> < # testing 64 16 8 100 1 0 1 ....
> < # streaming
> < # sync AGs...
> < # checking stream AGs...
> < + passed, streams are in seperate AGs
> ---
>> - failed, 7 streams with matching AGs
>> (see 171.full for details)
> 172 [failed, exit status 1] - output mismatch (see 172.out.bad)
> 11c11,12
> < + passed, streams are in seperate AGs
> ---
>> - failed, 1 streams with matching AGs
>> (see 172.full for details)
> 173
>
> ****************************
> 173 trips an assertion...dmesg:
>
> Ending clean XFS mount for filesystem: hda6
> Assertion failed: pag->pagf_freeblks == be32_to_cpu(agf->agf_freeblks), file: fs/xfs/xfs_alloc.c, line: 2216
> ------------[ cut here ]------------
> kernel BUG at fs/xfs/support/debug.c:81!
> invalid opcode: 0000 [#1] SMP
> Modules linked in: xfs crc32c libcrc32c dm_snapshot dm_mirror dm_mod loop amd_k7_agp evdev ext3 jbd mbcache ide_disk aic7xxx scsi_transport_spi scsi_mod amd74xx ohci_hcd e1000 ide_pci_generic ide_core usbcore
>
> Pid: 6039, comm: mkdir Not tainted (2.6.25-rc6 #12)
> EIP: 0060:[] EFLAGS: 00010282 CPU: 0
> EIP is at assfail+0x10/0x17 [xfs]
> EAX: 00000070 EBX: f7bc8a6c ECX: 00000000 EDX: 00000000
> ESI: 00000000 EDI: f7878200 EBP: f7463b20 ESP: f7463b10
> DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
> Process mkdir (pid: 6039, ti=f7463000 task=f7cd50c0 task.ti=f7463000)
> Stack: f9371761 f9360e4c f9360d8f 000008a8 f7463b50 f92f73bb 000b8001 00000000
> f6ca97b0 00000017 f78f9aa0 f7af2590 f7a288c8 00000ef8 00000000 f7463ccc
> f7463bf0 f92fa0f7 00000000 f7463be0 00000000 00000001 00000046 f7af2590
> Call Trace:
> [] ? xfs_alloc_read_agf+0x231/0x2f7 [xfs]
> [] ? xfs_alloc_fix_freelist+0x176/0x422 [xfs]
> [] ? hrtick_set+0xce/0xd6
> [] ? schedule+0x715/0x747
> [] ? xfs_alloc_vextent+0x1ff/0x8d8 [xfs]
> [] ? down_read+0x19/0x2d
> [] ? xfs_alloc_vextent+0x223/0x8d8 [xfs]
> [] ? xfs_buf_item_trace+0xa4/0xae [xfs]
> [] ? xfs_ialloc_ag_alloc+0x280/0x6e7 [xfs]
> [] ? xfs_ialloc_read_agi+0x93/0x1e4 [xfs]
> [] ? xfs_ialloc_read_agi+0x104/0x1e4 [xfs]
> [] ? xfs_dialloc+0x167/0xb12 [xfs]
> [] ? xlog_trace_loggrant+0xa8/0xb3 [xfs]
> [] ? xfs_ialloc+0x4b/0x537 [xfs]
> [] ? _spin_unlock+0x1d/0x20
> [] ? xlog_grant_log_space+0x2a0/0x2ea [xfs]
> [] ? xfs_dir_ialloc+0x6f/0x253 [xfs]
> [] ? xfs_mkdir+0x204/0x498 [xfs]
> [] ? xfs_acl_get_attr+0x66/0x89 [xfs]
> [] ? xfs_vn_mknod+0x140/0x22a [xfs]
> [] ? xfs_vn_mkdir+0xd/0xf [xfs]
> [] ? vfs_mkdir+0x94/0xd7
> [] ? sys_mkdirat+0x85/0xbd
> [] ? up_read+0x16/0x2b
> [] ? do_page_fault+0x2af/0x535
> [] ? sys_mkdir+0x10/0x12
> [] ? sysenter_past_esp+0x5f/0xa5
> =======================
> Code: 01 52 ba 5c 17 37 f9 50 b8 5d 17 37 f9 6a 01 6a 10 e8 5a 8e e5 c6 83 c4 14 c9 c3 55 89 e5 51 52 50 68 61 17 37 f9 e8 81 fc db c6 <0f> 0b 83 c4 10 eb fe 55 89 e5 57 89 c7 b8 50 ee 38 f9 83 e7 07
> EIP: [] assfail+0x10/0x17 [xfs] SS:ESP 0068:f7463b10
> ---[ end trace ad759a5a28c539de ]---
>
> rebooted, before running more tests.
> ****************************
>
> 174 37s ...
> 175 - output mismatch (see 175.out.bad)
> 5,31c5,8
> < # spawning test file with 4096 256 0 punch_test_file noresv
> < [0] punch_test_file
> < + not using resvsp at file creation
> < # writing with 4096 0 256 punch_test_file
> < # showing file state punch_test_file
> < punch_test_file:
> < EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
> < 0: [0..2047]: 96..2143 0 (96..2143) 2048 00000
> < FLAG Values:
> < 010000 Unwritten preallocated extent
> < 001000 Doesn't begin on stripe unit
> < 000100 Doesn't end on stripe unit
> < 000010 Doesn't begin on stripe width
> < 000001 Doesn't end on stripe width
> < # punching with 4096 240 16 d punch_test_file
> < + hole punch using dmapi punch_hole
> < # showing file state punch_test_file
> < punch_test_file:
> < EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
> < 0: [0..1919]: 96..2015 0 (96..2015) 1920 00000
> < 1: [1920..2047]: hole 128
> < FLAG Values:
> < 010000 Unwritten preallocated extent
> < 001000 Doesn't begin on stripe unit
> < 000100 Doesn't end on stripe unit
> < 000010 Doesn't begin on stripe width
> < 000001 Doesn't end on stripe width
> ---
>> mount: wrong fs type, bad option, bad superblock on /dev/hda6,
>> missing codepage or helper program, or other error
>> In some cases useful info is found in syslog - try
>> dmesg | tail or so
> 33,63c10
> < -- this time use a 4k (one block) extent size hint --
> < # testing 4096 1 256 240 16 d 0 256 w p noresv ...
> < + mounting with dmapi enabled
> < # spawning test file with 4096 256 1 punch_test_file noresv
> < + setting extent size hint to 4096
> < [4096] punch_test_file
> < + not using resvsp at file creation
> < # writing with 4096 0 256 punch_test_file
> < # showing file state punch_test_file
> < punch_test_file:
> < EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
> < 0: [0..2047]: 96..2143 0 (96..2143) 2048 00000
> < FLAG Values:
> < 010000 Unwritten preallocated extent
> < 001000 Doesn't begin on stripe unit
> < 000100 Doesn't end on stripe unit
> < 000010 Doesn't begin on stripe width
> < 000001 Doesn't end on stripe width
> < # punching with 4096 240 16 d punch_test_file
> < + hole punch using dmapi punch_hole
> < # showing file state punch_test_file
> < punch_test_file:
> < EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
> < 0: [0..1919]: 96..2015 0 (96..2015) 1920 00000
> < 1: [1920..2047]: hole 128
> < FLAG Values:
> < 010000 Unwritten preallocated extent
> < 001000 Doesn't begin on stripe unit
> < 000100 Doesn't end on stripe unit
> < 000010 Doesn't begin on stripe width
> < 000001 Doesn't end on stripe width
> ---
>> 175 not run: Assuming DMAPI modules are not loaded
> 176 - output mismatch (see 176.out.bad)
> 5,30c5,8
> < # spawning test file with 4096 256 0 punch_test_file
> < [0] punch_test_file
> < # writing with 4096 0 256 punch_test_file
> < # showing file state punch_test_file
> < punch_test_file:
> < EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
> < 0: [0..2047]: 96..2143 0 (96..2143) 2048 00000
> < FLAG Values:
> < 010000 Unwritten preallocated extent
> < 001000 Doesn't begin on stripe unit
> < 000100 Doesn't end on stripe unit
> < 000010 Doesn't begin on stripe width
> < 000001 Doesn't end on stripe width
> < # punching with 4096 240 16 d punch_test_file
> < + hole punch using dmapi punch_hole
> < # showing file state punch_test_file
> < punch_test_file:
> < EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
> < 0: [0..1919]: 96..2015 0 (96..2015) 1920 00000
> < 1: [1920..2047]: hole 128
> < FLAG Values:
> < 010000 Unwritten preallocated extent
> < 001000 Doesn't begin on stripe unit
> < 000100 Doesn't end on stripe unit
> < 000010 Doesn't begin on stripe width
> < 000001 Doesn't end on stripe width
> ---
>> mount: wrong fs type, bad option, bad superblock on /dev/hda6,
>> missing codepage or helper program, or other error
>> In some cases useful info is found in syslog - try
>> dmesg | tail or so
> 32,121c10
> < -- this time dont use resvsp --
> < # testing 4096 0 256 240 16 d 0 256 w p noresv ...
> < + mounting with dmapi enabled
> < # spawning test file with 4096 256 0 punch_test_file noresv
> < [0] punch_test_file
> < + not using resvsp at file creation
> < # writing with 4096 0 256 punch_test_file
> < # showing file state punch_test_file
> < punch_test_file:
> < EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
> < 0: [0..2047]: 96..2143 0 (96..2143) 2048 00000
> < FLAG Values:
> < 010000 Unwritten preallocated extent
> < 001000 Doesn't begin on stripe unit
> < 000100 Doesn't end on stripe unit
> < 000010 Doesn't begin on stripe width
> < 000001 Doesn't end on stripe width
> < # punching with 4096 240 16 d punch_test_file
> < + hole punch using dmapi punch_hole
> < # showing file state punch_test_file
> < punch_test_file:
> < EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
> < 0: [0..1919]: 96..2015 0 (96..2015) 1920 00000
> < 1: [1920..2047]: hole 128
> < FLAG Values:
> < 010000 Unwritten preallocated extent
> < 001000 Doesn't begin on stripe unit
> < 000100 Doesn't end on stripe unit
> < 000010 Doesn't begin on stripe width
> < 000001 Doesn't end on stripe width
> <
> <
> < -- test unresvsp hole punch with resvsp on file create --
> < # testing 4096 0 256 240 16 u 0 256 w p ...
> < # spawning test file with 4096 256 0 punch_test_file
> < [0] punch_test_file
> < # writing with 4096 0 256 punch_test_file
> < # showing file state punch_test_file
> < punch_test_file:
> < EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
> < 0: [0..2047]: 96..2143 0 (96..2143) 2048 00000
> < FLAG Values:
> < 010000 Unwritten preallocated extent
> < 001000 Doesn't begin on stripe unit
> < 000100 Doesn't end on stripe unit
> < 000010 Doesn't begin on stripe width
> < 000001 Doesn't end on stripe width
> < # punching with 4096 240 16 u punch_test_file
> < + hole punch using unresvsp
> < # showing file state punch_test_file
> < punch_test_file:
> < EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
> < 0: [0..1919]: 96..2015 0 (96..2015) 1920 00000
> < 1: [1920..2047]: hole 128
> < FLAG Values:
> < 010000 Unwritten preallocated extent
> < 001000 Doesn't begin on stripe unit
> < 000100 Doesn't end on stripe unit
> < 000010 Doesn't begin on stripe width
> < 000001 Doesn't end on stripe width
> <
> < -- this time dont use resvsp --
> < # testing 4096 0 256 240 16 u 0 256 w p noresv ...
> < # spawning test file with 4096 256 0 punch_test_file noresv
> < [0] punch_test_file
> < + not using resvsp at file creation
> < # writing with 4096 0 256 punch_test_file
> < # showing file state punch_test_file
> < punch_test_file:
> < EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
> < 0: [0..2047]: 96..2143 0 (96..2143) 2048 00000
> < FLAG Values:
> < 010000 Unwritten preallocated extent
> < 001000 Doesn't begin on stripe unit
> < 000100 Doesn't end on stripe unit
> < 000010 Doesn't begin on stripe width
> < 000001 Doesn't end on stripe width
> < # punching with 4096 240 16 u punch_test_file
> < + hole punch using unresvsp
> < # showing file state punch_test_file
> < punch_test_file:
> < EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
> < 0: [0..1919]: 96..2015 0 (96..2015) 1920 00000
> < 1: [1920..2047]: hole 128
> < FLAG Values:
> < 010000 Unwritten preallocated extent
> < 001000 Doesn't begin on stripe unit
> < 000100 Doesn't end on stripe unit
> < 000010 Doesn't begin on stripe width
> < 000001 Doesn't end on stripe width
> ---
>> 176 not run: Assuming DMAPI modules are not loaded
> 177 [failed, exit status 1] - output mismatch (see 177.out.bad)
> 58,88c58,64
> < Start bulkstat_unlink_test_modified
> < Iteration 0 ...
> < testFiles 1000 ...
> < passed
> < Iteration 1 ...
> < testFiles 1000 ...
> < passed
> < Iteration 2 ...
> < testFiles 1000 ...
> < passed
> < Iteration 3 ...
> < testFiles 1000 ...
> < passed
> < Iteration 4 ...
> < testFiles 1000 ...
> < passed
> < Iteration 5 ...
> < testFiles 1000 ...
> < passed
> < Iteration 6 ...
> < testFiles 1000 ...
> < passed
> < Iteration 7 ...
> < testFiles 1000 ...
> < passed
> < Iteration 8 ...
> < testFiles 1000 ...
> < passed
> < Iteration 9 ...
> < testFiles 1000 ...
> < passed
> ---
>> mount: wrong fs type, bad option, bad superblock on /dev/hda6,
>> missing codepage or helper program, or other error
>> In some cases useful info is found in syslog - try
>> dmesg | tail or so
>>
>> mount failed
>> (see 177.full for details)
> 178 - output mismatch (see 178.out.bad)
> 15,16d14
> < sb root inode value INO inconsistent with calculated value INO
> < resetting superblock root inode pointer to INO
> 51,52d48
> < sb root inode value INO inconsistent with calculated value INO
> < resetting superblock root inode pointer to INO
> 179
> 180 [not run] This test requires at least 10GB of /dev/hda6 to run
> 181
> 182 - output mismatch (see 182.out.bad)
> 1a2,491
>> file /mnt/xfs_scratch/510 has incorrect size - sync failed
>> file /mnt/xfs_scratch/511 has incorrect size - sync failed
[... the identical "has incorrect size - sync failed" line repeats for /mnt/xfs_scratch/512 through /mnt/xfs_scratch/977 ...]
>> file /mnt/xfs_scratch/978 has incorrect size - sync failed
>> file
/mnt/xfs_scratch/979 has incorrect size - sync failed >> file /mnt/xfs_scratch/980 has incorrect size - sync failed >> file /mnt/xfs_scratch/981 has incorrect size - sync failed >> file /mnt/xfs_scratch/982 has incorrect size - sync failed >> file /mnt/xfs_scratch/983 has incorrect size - sync failed >> file /mnt/xfs_scratch/984 has incorrect size - sync failed >> file /mnt/xfs_scratch/985 has incorrect size - sync failed >> file /mnt/xfs_scratch/986 has incorrect size - sync failed >> file /mnt/xfs_scratch/987 has incorrect size - sync failed >> file /mnt/xfs_scratch/988 has incorrect size - sync failed >> file /mnt/xfs_scratch/989 has incorrect size - sync failed >> file /mnt/xfs_scratch/990 has incorrect size - sync failed >> file /mnt/xfs_scratch/991 has incorrect size - sync failed >> file /mnt/xfs_scratch/992 has incorrect size - sync failed >> file /mnt/xfs_scratch/993 has incorrect size - sync failed >> file /mnt/xfs_scratch/994 has incorrect size - sync failed >> file /mnt/xfs_scratch/995 has incorrect size - sync failed >> file /mnt/xfs_scratch/996 has incorrect size - sync failed >> file /mnt/xfs_scratch/997 has incorrect size - sync failed >> file /mnt/xfs_scratch/998 has incorrect size - sync failed >> file /mnt/xfs_scratch/999 has incorrect size - sync failed > 183 > 184 0s ... 
>
> Not run: 010 022 023 024 025 035 036 037 038 039 040 043 044 050 052 055 057 058 077 090 092 093 094 095 097 098 099 101 102 112 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 168 180
> Failures: 016 018 081 082 103 130 132 166 167 171 172 173 175 176 177 178 182
> Failed 17 of 109 tests

--
Mark Goodwin markgw@sgi.com
Engineering Manager for XFS and PCP      Phone: +61-3-99631937
SGI Australian Software Group            Cell: +61-4-18969583
-------------------------------------------------------------

From owner-xfs@oss.sgi.com Wed Mar 19 19:32:51 2008
Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP; Wed, 19 Mar 2008 19:33:21 -0700 (PDT)
Received: from
liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) by sandeen.net (Postfix) with ESMTP; Wed, 19 Mar 2008 21:33:20 -0500 (CDT)
Message-ID: <47E1CCF0.9070904@sandeen.net>
Date: Wed, 19 Mar 2008 21:33:20 -0500
From: Eric Sandeen
To: xfs-oss
Subject: [PATCH] fix xfsqa 049 when using whole scratch device

if SCRATCH_DEV happens to be a whole device, mke2fs will wait for
confirmation.

(mke2fs -F would work, too...)

-Eric

Index: xfs-cmds/xfstests/049
===================================================================
--- xfs-cmds.orig/xfstests/049
+++ xfs-cmds/xfstests/049
@@ -60,7 +60,7 @@
 echo "--- mounts" >> $seq.full
 mount >> $seq.full
 _log "Create ext2 fs on scratch"
-mkfs -t ext2 $SCRATCH_DEV >> $seq.full 2>&1 \
+echo y | mkfs -t ext2 $SCRATCH_DEV >> $seq.full 2>&1 \
     || _fail "!!! failed to mkfs ext2"
 _log "Mount ext2 fs on scratch"

From owner-xfs@oss.sgi.com Wed Mar 19 20:01:59 2008
Message-ID: <47E1D3C3.1050000@sandeen.net>
Date: Wed, 19 Mar 2008 22:02:27 -0500
From: Eric Sandeen
To: xfs-oss
Subject: Re: [PATCH] fix dir2 shortform structures on ARM old ABI
References: <47DB4181.7040603@sandeen.net>
In-Reply-To: <47DB4181.7040603@sandeen.net>

Here's the userspace side. Jeff, I guess this means you have more work
to do ;)

-Eric

Index: xfs-cmds/xfsprogs/include/platform_defs.h.in
===================================================================
--- xfs-cmds.orig/xfsprogs/include/platform_defs.h.in
+++ xfs-cmds/xfsprogs/include/platform_defs.h.in
@@ -147,4 +147,11 @@ typedef unsigned long long __psunsigned_
 	| (minor&IRIX_DEV_MAXMIN)))
 #define IRIX_DEV_TO_KDEVT(dev)	makedev(IRIX_DEV_MAJOR(dev),IRIX_DEV_MINOR(dev))
 
+/* ARM old ABI has some weird alignment/padding */
+#if defined(__arm__) && !defined(__ARM_EABI__)
+#define __arch_pack __attribute__((packed))
+#else
+#define __arch_pack
+#endif
+
 #endif	/* __XFS_PLATFORM_DEFS_H__ */
Index: xfs-cmds/xfsprogs/include/xfs_dir2_sf.h
===================================================================
--- xfs-cmds.orig/xfsprogs/include/xfs_dir2_sf.h
+++ xfs-cmds/xfsprogs/include/xfs_dir2_sf.h
@@ -62,7 +62,7 @@ typedef union {
  * Normalized offset (in a data block) of the entry, really xfs_dir2_data_off_t.
  * Only need 16 bits, this is the byte offset into the single block form.
 */
-typedef struct { __uint8_t i[2]; } xfs_dir2_sf_off_t;
+typedef struct { __uint8_t i[2]; } __arch_pack xfs_dir2_sf_off_t;
 
 /*
  * The parent directory has a dedicated field, and the self-pointer must
@@ -76,14 +76,14 @@ typedef struct xfs_dir2_sf_hdr {
 	__uint8_t		count;		/* count of entries */
 	__uint8_t		i8count;	/* count of 8-byte inode #s */
 	xfs_dir2_inou_t		parent;		/* parent dir inode number */
-} xfs_dir2_sf_hdr_t;
+} __arch_pack xfs_dir2_sf_hdr_t;
 
 typedef struct xfs_dir2_sf_entry {
 	__uint8_t		namelen;	/* actual name length */
 	xfs_dir2_sf_off_t	offset;		/* saved offset */
 	__uint8_t		name[1];	/* name, variable size */
 	xfs_dir2_inou_t		inumber;	/* inode number, var. offset */
-} xfs_dir2_sf_entry_t;
+} __arch_pack xfs_dir2_sf_entry_t;
 
 typedef struct xfs_dir2_sf {
 	xfs_dir2_sf_hdr_t	hdr;		/* shortform header */

From owner-xfs@oss.sgi.com Wed Mar 19 20:32:12 2008
Date: Wed, 19 Mar 2008 23:32:42 -0400
From: "Jim Paradis"
To: "David Chinner"
Cc:
Subject: RE: Duplicate directory entries
In-Reply-To: <20080320000254.GC103321673@sgi.com>

David Chinner:
>On Wed, Mar 19, 2008 at 03:26:05PM -0400, Jim Paradis wrote:
>> We recently ran across a situation where we saw two directory entries
>> that were exactly the same.
>
>What kernel version?

2.6.18. Has this problem been root-caused and fixed in a more recent
version? Looking through the git updates in the main tree doesn't suggest
to me that it has...

--jim

From owner-xfs@oss.sgi.com Wed Mar 19 21:35:42 2008
Date: Thu, 20 Mar 2008 15:35:57 +1100
From: David Chinner
To: Jim Paradis
Cc: xfs@oss.sgi.com
Subject: Re: Duplicate directory entries
Message-ID: <20080320043557.GX95344431@sgi.com>

On Wed, Mar 19, 2008 at 11:32:42PM -0400, Jim Paradis wrote:
> David Chinner:
> >On Wed, Mar 19, 2008 at 03:26:05PM -0400, Jim Paradis wrote:
> >> We recently ran across a situation where we saw two directory entries
> >> that were exactly the same.
> >
> >What kernel version?
>
> 2.6.18.

No, it was originally diagnosed on 2.6.5 kernels (sles9) and was fixed
around 2.6.17 by the inode i_sem -> i_mutex conversion in mainline before
we tracked it down. The root problem there was a semaphore lock leak in
the direct I/O code causing problems when the inode was recycled and
reused as a directory. The system would panic in the dentry cache, but
log recovery would result in creating duplicate entries in the directory.

I don't think this is your problem unless you've only recently upgraded
from an old kernel and your applications do direct I/O.....

Can you reproduce the problem or provide any information on events that
may have occurred around the time of the duplicates being created? I
suspect that a reproducible test case will be the only way we can track
this down....

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Wed Mar 19 21:48:03 2008
Date: Thu, 20 Mar 2008 15:48:13 +1100
From: David Chinner
To: xfs-dev
Cc: xfs-oss
Subject: [PATCH] Revalidate the btree cursor after an insert
Message-ID: <20080320044813.GY95344431@sgi.com>

Ensure a btree insert returns a valid cursor.

When writing into preallocated regions there is a case where XFS can oops
or hang doing the unwritten extent conversion on I/O completion. It turns
out that the problem is related to the btree cursor being invalid.

When we do an insert into the tree, we may need to split blocks in the
tree. When we only split at the leaf level (i.e. level 0), everything
works just fine. However, if we have a multi-level split in the btree,
the cursor passed to the insert function is no longer valid once the
insert is complete.

The leaf level split is handled correctly because all the operations at
level 0 are done using the original cursor, hence it is updated correctly.
However, when we need to update the next level up the tree, we don't use
that cursor - we use a cloned cursor that points to the index in the next
level up where we need to do the insert. Hence if we need to split a
second level, the changes to the tree are reflected in the cloned cursor
and not the original cursor. This clone-and-move-up-a-level-on-split
behaviour recurses all the way to the top of the tree.

The complexity here is that these cloned cursors do not point to the
original index that was inserted - they point to the newly allocated
block (the right block) and the original cursor pointer to that level may
still point to the left block. Hence, without deep examination of the
cloned cursor and buffers, we cannot update the original cursor with the
new path from the cloned cursor. In these cases the original cursor could
be pointing to the wrong block(s) and hence a subsequent modification to
the tree using that cursor will lead to corruption of the tree.

The crash case occurs when the tree changes height - we insert a new
level in the tree, and the cursor does not have a buffer in its path for
that level. Hence any attempt to walk back up the cursor to the root
block will result in a null pointer dereference.

To make matters even more complex, the BMAP BT is rooted in an inode, so
we can have a change of height in the btree *without a root split*. That
is, if the root block in the inode is full when we split a leaf node, we
cannot fit the pointer to the new block in the root, so we allocate a new
block, migrate all the ptrs out of the inode into the new block and point
the inode root block at the newly allocated block. This changes the
height of the tree without a root split having occurred and hence
invalidates the path in the original cursor.

The patch below prevents xfs_bmbt_insert() from returning with an invalid
cursor by detecting the cases that invalidate the original cursor and
refreshing it by doing a lookup into the btree for the original index we
were inserting at.

Note that the INOBT, AGFBNO and AGFCNT btree implementations also have
this bug, but the cursor is currently always destroyed or revalidated
after an insert for those trees. Hence this patch only addresses the
problem in the BMBT code.

Signed-off-by: Dave Chinner

---
 fs/xfs/xfs_bmap_btree.c | 38 ++++++++++++++++++++++++++++++++++++--
 1 file changed, 36 insertions(+), 2 deletions(-)

Index: 2.6.x-xfs-new/fs/xfs/xfs_bmap_btree.c
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_bmap_btree.c	2008-03-14 11:33:48.000000000 +1100
+++ 2.6.x-xfs-new/fs/xfs/xfs_bmap_btree.c	2008-03-20 15:45:19.351601515 +1100
@@ -2027,6 +2027,24 @@ xfs_bmbt_increment(
 
 /*
  * Insert the current record at the point referenced by cur.
+ *
+ * A multi-level split of the tree on insert will invalidate the original
+ * cursor. It appears, however, that some callers assume that the cursor is
+ * always valid. Hence if we do a multi-level split we need to revalidate the
+ * cursor.
+ *
+ * When a split occurs, we will see a new cursor returned. Use that as a
+ * trigger to determine if we need to revalidate the original cursor. If we get
+ * a split, then use the original irec to look up the path of the record we
+ * just inserted.
+ *
+ * Note that the fact that the btree root is in the inode means that we can
+ * have the level of the tree change without a "split" occurring at the root
+ * level. What happens is that the root is migrated to an allocated block and
+ * the inode root is pointed to it. This means a single split can change the
+ * level of the tree (level 2 -> level 3) and invalidate the old cursor. Hence
+ * the level change should be accounted as a split so as to correctly trigger a
+ * revalidation of the old cursor.
 */
 int					/* error */
 xfs_bmbt_insert(
@@ -2039,11 +2057,14 @@ xfs_bmbt_insert(
 	xfs_fsblock_t	nbno;
 	xfs_btree_cur_t	*ncur;
 	xfs_bmbt_rec_t	nrec;
+	xfs_bmbt_irec_t	oirec;		/* original irec */
 	xfs_btree_cur_t	*pcur;
+	int		splits = 0;
 
 	XFS_BMBT_TRACE_CURSOR(cur, ENTRY);
 	level = 0;
 	nbno = NULLFSBLOCK;
+	oirec = cur->bc_rec.b;
 	xfs_bmbt_disk_set_all(&nrec, &cur->bc_rec.b);
 	ncur = NULL;
 	pcur = cur;
@@ -2052,11 +2073,13 @@ xfs_bmbt_insert(
 				&i))) {
 			if (pcur != cur)
 				xfs_btree_del_cursor(pcur, XFS_BTREE_ERROR);
-			XFS_BMBT_TRACE_CURSOR(cur, ERROR);
-			return error;
+			goto error0;
 		}
 		XFS_WANT_CORRUPTED_GOTO(i == 1, error0);
 		if (pcur != cur && (ncur || nbno == NULLFSBLOCK)) {
+			/* allocating a new root is effectively a split */
+			if (cur->bc_nlevels != pcur->bc_nlevels)
+				splits++;
 			cur->bc_nlevels = pcur->bc_nlevels;
 			cur->bc_private.b.allocated +=
 				pcur->bc_private.b.allocated;
@@ -2070,10 +2093,21 @@ xfs_bmbt_insert(
 			xfs_btree_del_cursor(pcur, XFS_BTREE_NOERROR);
 		}
 		if (ncur) {
+			splits++;
 			pcur = ncur;
 			ncur = NULL;
 		}
 	} while (nbno != NULLFSBLOCK);
+
+	if (splits > 1) {
+		/* revalidate the old cursor as we had a multi-level split */
+		error = xfs_bmbt_lookup_eq(cur, oirec.br_startoff,
+				oirec.br_startblock, oirec.br_blockcount, &i);
+		if (error)
+			goto error0;
+		ASSERT(i == 1);
+	}
+
 	XFS_BMBT_TRACE_CURSOR(cur, EXIT);
 	*stat = i;
 	return 0;

From owner-xfs@oss.sgi.com Wed Mar 19
22:19:49 2008
Date: Thu, 20 Mar 2008 16:20:00 +1100
From: David Chinner
To: xfs-dev
Cc: xfs-oss
Subject: [PATCH 1/2] Factor repeated code in xfs_ialloc_ag_alloc
Message-ID: <20080320052000.GZ95344431@sgi.com>

Factor repeated code that determines the cluster alignment of an inode
extent.

Signed-off-by: Dave Chinner

---
 fs/xfs/xfs_ialloc.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

Index: 2.6.x-xfs-new/fs/xfs/xfs_ialloc.c
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_ialloc.c	2008-03-14 09:25:16.432921552 +1100
+++ 2.6.x-xfs-new/fs/xfs/xfs_ialloc.c	2008-03-14 09:38:00.263435189 +1100
@@ -107,6 +107,16 @@ xfs_ialloc_log_di(
 /*
  * Allocation group level functions.
  */
+static inline int
+xfs_ialloc_cluster_alignment(
+	xfs_alloc_arg_t	*args)
+{
+	if (xfs_sb_version_hasalign(&args->mp->m_sb) &&
+	    args->mp->m_sb.sb_inoalignmt >=
+	    XFS_B_TO_FSBT(args->mp, XFS_INODE_CLUSTER_SIZE(args->mp)))
+		return args->mp->m_sb.sb_inoalignmt;
+	return 1;
+}
 
 /*
  * Allocate new inodes in the allocation group specified by agbp.
@@ -191,13 +201,8 @@ xfs_ialloc_ag_alloc(
 		ASSERT(!(args.mp->m_flags & XFS_MOUNT_NOALIGN));
 		args.alignment = args.mp->m_dalign;
 		isaligned = 1;
-	} else if (xfs_sb_version_hasalign(&args.mp->m_sb) &&
-		   args.mp->m_sb.sb_inoalignmt >=
-		   XFS_B_TO_FSBT(args.mp,
-			XFS_INODE_CLUSTER_SIZE(args.mp)))
-		args.alignment = args.mp->m_sb.sb_inoalignmt;
-	else
-		args.alignment = 1;
+	} else
+		args.alignment = xfs_ialloc_cluster_alignment(&args);
 	/*
 	 * Need to figure out where to allocate the inode blocks.
 	 * Ideally they should be spaced out through the a.g.
@@ -230,12 +235,7 @@ xfs_ialloc_ag_alloc( args.agbno = be32_to_cpu(agi->agi_root); args.fsbno = XFS_AGB_TO_FSB(args.mp, be32_to_cpu(agi->agi_seqno), args.agbno); - if (xfs_sb_version_hasalign(&args.mp->m_sb) && - args.mp->m_sb.sb_inoalignmt >= - XFS_B_TO_FSBT(args.mp, XFS_INODE_CLUSTER_SIZE(args.mp))) - args.alignment = args.mp->m_sb.sb_inoalignmt; - else - args.alignment = 1; + args.alignment = xfs_ialloc_cluster_alignment(&args); if ((error = xfs_alloc_vextent(&args))) return error; } From owner-xfs@oss.sgi.com Wed Mar 19 22:20:36 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 22:20:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.6 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_43, J_CHICKENPOX_44,J_CHICKENPOX_45,J_CHICKENPOX_46,J_CHICKENPOX_47, J_CHICKENPOX_48 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2K5KXHA025670 for ; Wed, 19 Mar 2008 22:20:35 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA07382; Thu, 20 Mar 2008 16:21:01 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2K5L0LF99641772; Thu, 20 Mar 2008 16:21:00 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2K5L0dd103516014; Thu, 20 Mar 2008 16:21:00 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 20 Mar 2008 16:21:00 +1100 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: [PATCH 2/2] Prevent shutdown on inode allocation failure Message-ID: <20080320052100.GA95344431@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii 
Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14946 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs At ENOSPC, we can get a filesystem shutdown due to cancelling a dirty transaction in xfs_mkdir or xfs_create. This is due to the initial allocation attempt not taking inode alignment into account, and hence we can prepare the AGF freelist for allocation when it's not actually possible to do an allocation. This results in inode allocation returning ENOSPC with a dirty transaction, and hence we shut down the filesystem. Because the first allocation is an exact allocation attempt, we must tell the allocator that the alignment does not affect the allocation attempt. i.e. we will accept any extent alignment as long as the extent starts at the block we want. Unfortunately, this means that if the longest free extent is less than the length + alignment necessary for fallback allocation attempts but is long enough to attempt a non-aligned allocation, we will modify the free list. If we then have the exact allocation fail, all other allocation attempts will also fail due to the alignment constraint being taken into account. Hence the initial attempt needs to set the "alignment slop" field so that alignment, while not required, must be taken into account when determining if there is enough space left in the AG to do the allocation. That means if the exact allocation fails, we will not dirty the freelist if there is not enough space available for a subsequent allocation to succeed. Hence we get an ENOSPC error back to userspace without shutting down the filesystem.
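[Editorial sketch] The reasoning above can be condensed into a small model; all block counts here are hypothetical numbers, not kernel code. The exact-start attempt keeps alignment 1 but carries alignment - 1 blocks of slop, so the space check fails cleanly up front whenever the aligned fallbacks could not succeed either.

```python
# Model of the "alignment slop" space check described above. Values are
# in filesystem blocks and purely illustrative.

def have_space(longest_free, length, alignment, minalignslop):
    # An allocation is only attempted (and the freelist only dirtied)
    # if the AG could satisfy length plus any alignment slack.
    return longest_free >= length + (alignment - 1) + minalignslop

# Old behaviour: the exact attempt (alignment 1, no slop) passes the
# space check and dirties the freelist, even though the aligned
# fallback (alignment 4) is doomed.
old_exact = have_space(longest_free=65, length=64, alignment=1, minalignslop=0)
fallback  = have_space(longest_free=65, length=64, alignment=4, minalignslop=0)

# New behaviour: the exact attempt carries minalignslop = alignment - 1,
# so it fails the space check up front and returns a clean ENOSPC.
new_exact = have_space(longest_free=65, length=64, alignment=1, minalignslop=3)
```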
Signed-off-by: Dave Chinner --- fs/xfs/xfs_ialloc.c | 18 ++++++++++++++++-- 1 file changed, 16 insertions(+), 2 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/xfs_ialloc.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_ialloc.c 2008-03-14 09:28:15.998038053 +1100 +++ 2.6.x-xfs-new/fs/xfs/xfs_ialloc.c 2008-03-14 09:33:42.000000000 +1100 @@ -177,10 +177,24 @@ xfs_ialloc_ag_alloc( args.mod = args.total = args.wasdel = args.isfl = args.userdata = args.minalignslop = 0; args.prod = 1; - args.alignment = 1; + /* - * Allow space for the inode btree to split. + * We need to take into account alignment here to ensure that + * we don't modify the free list if we fail to have an exact + * block. If we don't have an exact match, and every other + * allocation attempt fails, we'll end up cancelling + * a dirty transaction and shutting down. + * + * For an exact allocation, alignment must be 1, + * however we need to take cluster alignment into account when + * fixing up the freelist. Use the minalignslop field to + * indicate that extra blocks might be required for alignment, + * but not to use them in the actual exact allocation. */ + args.alignment = 1; + args.minalignslop = xfs_ialloc_cluster_alignment(&args) - 1; + + /* Allow space for the inode btree to split.
*/ args.minleft = XFS_IN_MAXLEVELS(args.mp) - 1; if ((error = xfs_alloc_vextent(&args))) return error; From owner-xfs@oss.sgi.com Wed Mar 19 23:04:20 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 23:04:27 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2K64GE6027161 for ; Wed, 19 Mar 2008 23:04:18 -0700 Received: from [134.14.55.78] (redback.melbourne.sgi.com [134.14.55.78]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA08601; Thu, 20 Mar 2008 17:04:45 +1100 Message-ID: <47E2000B.9030208@sgi.com> Date: Thu, 20 Mar 2008 17:11:23 +1100 From: Lachlan McIlroy Reply-To: lachlan@sgi.com User-Agent: Thunderbird 2.0.0.12 (X11/20080213) MIME-Version: 1.0 To: xfs-dev , xfs-oss Subject: REVIEW: xfs_bmap_check_leaf_extents() can reference unmapped memory Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14947 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs While investigating the extent corruption bug I ran into this bug in debug only code. xfs_bmap_check_leaf_extents() loops through the leaf blocks of the extent btree checking that every extent is entirely before the next extent. It also compares the last extent in the previous block to the first extent in the current block when the previous block has been released and potentially unmapped. So take a copy of the last extent instead of a pointer. 
Also move the last extent check out of the loop because we only need to do it once. Lachlan --- fs/xfs/xfs_bmap.c_1.386 2008-03-17 13:37:32.000000000 +1100 +++ fs/xfs/xfs_bmap.c 2008-03-19 14:55:41.000000000 +1100 @@ -6194,7 +6194,7 @@ xfs_bmap_check_leaf_extents( xfs_mount_t *mp; /* file system mount structure */ __be64 *pp; /* pointer to block address */ xfs_bmbt_rec_t *ep; /* pointer to current extent */ - xfs_bmbt_rec_t *lastp; /* pointer to previous extent */ + xfs_bmbt_rec_t last; /* last extent in previous block */ xfs_bmbt_rec_t *nextp; /* pointer to next extent */ int bp_release = 0; @@ -6264,7 +6264,6 @@ xfs_bmap_check_leaf_extents( /* * Loop over all leaf nodes checking that all extents are in the right order. */ - lastp = NULL; for (;;) { xfs_fsblock_t nextbno; xfs_extnum_t num_recs; @@ -6285,18 +6284,18 @@ xfs_bmap_check_leaf_extents( */ ep = XFS_BTREE_REC_ADDR(xfs_bmbt, block, 1); + if (i) { + xfs_btree_check_rec(XFS_BTNUM_BMAP, (void *)&last, + (void *)ep); + } for (j = 1; j < num_recs; j++) { nextp = XFS_BTREE_REC_ADDR(xfs_bmbt, block, j + 1); - if (lastp) { - xfs_btree_check_rec(XFS_BTNUM_BMAP, - (void *)lastp, (void *)ep); - } xfs_btree_check_rec(XFS_BTNUM_BMAP, (void *)ep, (void *)(nextp)); - lastp = ep; ep = nextp; } + last = *ep; i += num_recs; if (bp_release) { bp_release = 0; From owner-xfs@oss.sgi.com Wed Mar 19 23:32:26 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 23:32:33 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2K6WLm6028367 for ; Wed, 19 Mar 2008 23:32:23 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via 
ESMTP id RAA09468; Thu, 20 Mar 2008 17:32:46 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2K6WjsT103461578; Thu, 20 Mar 2008 17:32:45 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2K6WjjU103521978; Thu, 20 Mar 2008 17:32:45 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 20 Mar 2008 17:32:45 +1100 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: [PATCH] XFSQA 103: filter ln output Message-ID: <20080320063245.GA103491721@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14948 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs More recent versions of ln (i.e. Debian unstable) have a different error output. Update the filter to handle this.
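[Editorial sketch] The effect of the updated `_filter_ln` can be modelled in Python; the sample error strings below are assumptions based on old- and new-style `ln` output, and the real filter is the pair of `sed` expressions in the patch. The point is that both message styles normalize to one canonical line.

```python
# Sketch of the two-substitution _filter_ln logic: old-style and
# new-style ln errors both map to a single canonical message.

CANON = ("ln: creating symbolic link `SCRATCH_MNT/nosymlink/target' "
         "to `SCRATCH_MNT/nosymlink/source': Operation not permitted")

def filter_ln(line):
    old_style = "SCRATCH_MNT/nosymlink/target - Operation not permitted"
    if old_style in line:
        return line.replace(old_style, CANON)
    # newer ln already names the target; just splice in the source
    return line.replace(
        ": Operation not permitted",
        " to `SCRATCH_MNT/nosymlink/source': Operation not permitted")
```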
Signed-off-by: Dave Chinner --- xfstests/103 | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) Index: xfs-cmds/xfstests/103 =================================================================== --- xfs-cmds.orig/xfstests/103 2006-11-14 19:57:41.000000000 +1100 +++ xfs-cmds/xfstests/103 2008-03-19 10:02:14.393358705 +1100 @@ -47,7 +47,8 @@ _filter_scratch() _filter_ln() { - sed -e "s,SCRATCH_MNT/nosymlink/target - Operation not permitted,ln: creating symbolic link \`SCRATCH_MNT/nosymlink/target\' to \`SCRATCH_MNT/nosymlink/source\': Operation not permitted,g" + sed -e "s,SCRATCH_MNT/nosymlink/target - Operation not permitted,ln: creating symbolic link \`SCRATCH_MNT/nosymlink/target\' to \`SCRATCH_MNT/nosymlink/source\': Operation not permitted,g" \ + -e "s,: Operation not permitted, to \`SCRATCH_MNT/nosymlink/source\': Operation not permitted,g" } _filter_noymlinks_flag() From owner-xfs@oss.sgi.com Wed Mar 19 23:34:02 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 23:34:09 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2K6Xws8028612 for ; Wed, 19 Mar 2008 23:34:01 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA09513; Thu, 20 Mar 2008 17:34:27 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2K6YRsT102509610; Thu, 20 Mar 2008 17:34:27 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2K6YRuB103200718; Thu, 20 Mar 2008 17:34:27 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc 
set sender to dgc@sgi.com using -f Date: Thu, 20 Mar 2008 17:34:27 +1100 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: [PATCH] XFSQA 141: support 64k pagesize Message-ID: <20080320063427.GB103491721@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14949 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Make the file larger and read 64k from it instead of 16k so that it pulls in a full page from the middle of the file. Signed-off-by: Dave Chinner --- xfstests/141 | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) Index: xfs-cmds/xfstests/141 =================================================================== --- xfs-cmds.orig/xfstests/141 2006-12-19 14:11:09.000000000 +1100 +++ xfs-cmds/xfstests/141 2008-03-19 09:45:33.500797374 +1100 @@ -39,7 +39,7 @@ _scratch_mount # create file, mmap a region and mmap read it file=$SCRATCH_MNT/mmap -xfs_io -f -c "pwrite 0 64k" -c "mmap 16k 16k" -c "mread -r" $file > /dev/null +xfs_io -f -c "pwrite 0 1024k" -c "mmap 64k 64k" -c "mread -r" $file > /dev/null rm -f $file -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Mar 19 23:36:54 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 19 Mar 2008 23:37:01 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2K6anIW029063 for ; Wed, 19 Mar 2008 23:36:52 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com 
[134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA09584; Thu, 20 Mar 2008 17:37:13 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2K6bDsT103518016; Thu, 20 Mar 2008 17:37:13 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2K6bDb2103403369; Thu, 20 Mar 2008 17:37:13 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 20 Mar 2008 17:37:13 +1100 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: [PATCH] XFSQA 166: support varying page sizes Message-ID: <20080320063713.GC103491721@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14950 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Make the filter check the resultant output based on the initial written region size. Hence the page size of the machine will not affect the output of the filter. Modify the golden output to match.
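[Editorial sketch] The arithmetic the page-size-independent filter relies on can be modelled as follows, with sizes in 512-byte units mirroring the awk in the patch. The layout assumption (alternating written and unwritten extents, three written regions in a 1 MiB file) comes from the comment added to the test.

```python
# Expected extent sizes for the 166 filter, derived from the first
# (written) extent so the check is independent of machine page size.

FILE_SIZE = 1048576                 # bytes, as set by the patch
HALF = (FILE_SIZE // 512) // 2      # half the file, in 512-byte units

def expected_unwritten(written_size):
    # First unwritten extent is half the file minus one written region;
    # the later one is half the file minus two written regions.
    return HALF - written_size, HALF - 2 * written_size
```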
Signed-off-by: Dave Chinner --- xfstests/166 | 38 +++++++++++++++++++++++++++++++------- xfstests/166.out | 10 +++++----- 2 files changed, 36 insertions(+), 12 deletions(-) Index: xfs-cmds/xfstests/166.out =================================================================== --- xfs-cmds.orig/xfstests/166.out 2007-06-20 13:42:46.000000000 +1000 +++ xfs-cmds/xfstests/166.out 2008-03-19 09:40:03.518840554 +1100 @@ -1,6 +1,6 @@ QA output created by 166 -0: [0..31]: XX..YY AG (AA..BB) 32 -1: [32..127]: XX..YY AG (AA..BB) 96 10000 -2: [128..159]: XX..YY AG (AA..BB) 32 -3: [160..223]: XX..YY AG (AA..BB) 64 10000 -4: [224..255]: XX..YY AG (AA..BB) 32 +0: [AA..BB] XX..YY AG (AA..BB) RIGHT GOOD +1: [AA..BB] XX..YY AG (AA..BB) RIGHT GOOD +2: [AA..BB] XX..YY AG (AA..BB) RIGHT GOOD +3: [AA..BB] XX..YY AG (AA..BB) RIGHT GOOD +4: [AA..BB] XX..YY AG (AA..BB) RIGHT GOOD Index: xfs-cmds/xfstests/166 =================================================================== --- xfs-cmds.orig/xfstests/166 2007-06-20 13:42:46.000000000 +1000 +++ xfs-cmds/xfstests/166 2008-03-19 09:40:03.518840554 +1100 @@ -27,14 +27,38 @@ _cleanup() . ./common.rc . ./common.filter +# assumes 1st, 3rd and 5th blocks are single written blocks, +# the others are unwritten. 
_filter_blocks() { - $AWK_PROG '/[0-9]/ { - if ($7) - print $1, $2, "XX..YY", "AG", "(AA..BB)", $6, $7; - else - print $1, $2, "XX..YY", "AG", "(AA..BB)", $6; - }' + $AWK_PROG ' +/[0-9]/ { + if (!written_size) { + written_size = $6 + unwritten1 = ((1048576/512) / 2) - written_size + unwritten2 = ((1048576/512) / 2) - 2 * written_size + } + + if ($7) { + size = "RIGHT" + flags = "GOOD" + if (unwritten1) { + if ($6 != unwritten1) + size = "WRONG" + unwritten1 = 0; + } else if ($6 != unwritten2) { + size = "WRONG" + } + if ($7 < 10000) + flags = "BAD" + } else { + size = "RIGHT" + flags = "GOOD" + if ($6 != written_size) + size = "WRONG" + } + print $1, "[AA..BB]", "XX..YY", "AG", "(AA..BB)", size, flags +}' } # real QA test starts here @@ -48,7 +72,7 @@ _scratch_mount TEST_FILE=$SCRATCH_MNT/test_file TEST_PROG=$here/src/unwritten_mmap -FILE_SIZE=131072 +FILE_SIZE=1048576 rm -f $TEST_FILE $TEST_PROG $FILE_SIZE $TEST_FILE From owner-xfs@oss.sgi.com Thu Mar 20 00:06:19 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 00:06:26 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_61 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2K76Eqm032073 for ; Thu, 20 Mar 2008 00:06:17 -0700 Received: from [134.14.55.78] (redback.melbourne.sgi.com [134.14.55.78]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA10322; Thu, 20 Mar 2008 18:06:40 +1100 Message-ID: <47E20E8E.6070209@sgi.com> Date: Thu, 20 Mar 2008 18:13:18 +1100 From: Lachlan McIlroy Reply-To: lachlan@sgi.com User-Agent: Thunderbird 2.0.0.12 (X11/20080213) MIME-Version: 1.0 To: David Chinner CC: xfs-dev , xfs-oss Subject: Re: [PATCH] Revalidate the btree cursor after an insert References: 
<20080320044813.GY95344431@sgi.com> In-Reply-To: <20080320044813.GY95344431@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14951 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs Looks good Dave - and thanks for explaining the finer details. David Chinner wrote: > Ensure a btree insert returns a valid cursor. > > When writing into preallocated regions there is a case > where XFS can oops or hang doing the unwritten extent conversion > on I/O completion. It turns out that the problem is related to > the btree cursor being invalid. > > When we do an insert into the tree, we may need to split blocks > in the tree. When we only split at the leaf level (i.e. level 0), > everything works just fine. However, if we have a multi-level > split in the btree, the cursor passed to the insert function is > no longer valid once the insert is complete. > > The leaf level split is handled correctly because all the operations > at level 0 are done using the original cursor, hence it is updated > correctly. However, when we need to update the next level up the > tree, we don't use that cursor - we use a cloned cursor that points > to the index in the next level up where we need to do the insert. > > Hence if we need to split a second level, the changes to the tree > are reflected in the cloned cursor and not the original cursor. > This clone-and-move-up-a-level-on-split behaviour recurses all > the way to the top of the tree. > > The complexity here is that these cloned cursors do not point to > the original index that was inserted - they point to the newly > allocated block (the right block) and the original cursor pointer > to that level may still point to the left block.
Hence, without deep > examination of the cloned cursor and buffers, we cannot update the > original cursor with the new path from the cloned cursor. > > In these cases the original cursor could be pointing to the wrong > block(s) and hence a subsequent modification to the tree using that > cursor will lead to corruption of the tree. > > The crash case occurs when the tree changes height - we insert a new > level in the tree, and the cursor does not have a buffer in its path > for that level. Hence any attempt to walk back up the cursor to the > root block will result in a null pointer dereference. > > To make matters even more complex, the BMAP BT is rooted in an inode, > so we can have a change of height in the btree *without a root split*. > That is, if the root block in the inode is full when we split a > leaf node, we cannot fit the pointer to the new block in the root, > so we allocate a new block, migrate all the ptrs out of the inode > into the new block and point the inode root block at the newly > allocated block. This changes the height of the tree without a > root split having occurred and hence invalidates the path in the > original cursor. > > The patch below prevents xfs_bmbt_insert() from returning with an > invalid cursor by detecting the cases that invalidate the original > cursor and refreshing it by doing a lookup into the btree for the original > index we were inserting at. > > Note that the INOBT, AGFBNO and AGFCNT btree implementations also have > this bug, but the cursor is currently always destroyed or revalidated > after an insert for those trees. Hence this patch only addresses the
> > Signed-off-by: Dave Chinner > --- > fs/xfs/xfs_bmap_btree.c | 38 ++++++++++++++++++++++++++++++++++++-- > 1 file changed, 36 insertions(+), 2 deletions(-) > > Index: 2.6.x-xfs-new/fs/xfs/xfs_bmap_btree.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_bmap_btree.c 2008-03-14 11:33:48.000000000 +1100 > +++ 2.6.x-xfs-new/fs/xfs/xfs_bmap_btree.c 2008-03-20 15:45:19.351601515 +1100 > @@ -2027,6 +2027,24 @@ xfs_bmbt_increment( > > /* > * Insert the current record at the point referenced by cur. > + * > + * A multi-level split of the tree on insert will invalidate the original > + * cursor. It appears, however, that some callers assume that the cursor is > + * always valid. Hence if we do a multi-level split we need to revalidate the > + * cursor. > + * > + * When a split occurs, we will see a new cursor returned. Use that as a > + * trigger to determine if we need to revalidate the original cursor. If we get > + * a split, then use the original irec to lookup up the path of the record we > + * just inserted. > + * > + * Note that the fact that the btree root is in the inode means that we can > + * have the level of the tree change without a "split" occurring at the root > + * level. What happens is that the root is migrated to an allocated block and > + * the inode root is pointed to it. This means a single split can change the > + * level of the tree (level 2 -> level 3) and invalidate the old cursor. Hence > + * the level change should be accounted as a split so as to correctly trigger a > + * revalidation of the old cursor. 
> */ > int /* error */ > xfs_bmbt_insert( > @@ -2039,11 +2057,14 @@ xfs_bmbt_insert( > xfs_fsblock_t nbno; > xfs_btree_cur_t *ncur; > xfs_bmbt_rec_t nrec; > + xfs_bmbt_irec_t oirec; /* original irec */ > xfs_btree_cur_t *pcur; > + int splits = 0; > > XFS_BMBT_TRACE_CURSOR(cur, ENTRY); > level = 0; > nbno = NULLFSBLOCK; > + oirec = cur->bc_rec.b; > xfs_bmbt_disk_set_all(&nrec, &cur->bc_rec.b); > ncur = NULL; > pcur = cur; > @@ -2052,11 +2073,13 @@ xfs_bmbt_insert( > &i))) { > if (pcur != cur) > xfs_btree_del_cursor(pcur, XFS_BTREE_ERROR); > - XFS_BMBT_TRACE_CURSOR(cur, ERROR); > - return error; > + goto error0; > } > XFS_WANT_CORRUPTED_GOTO(i == 1, error0); > if (pcur != cur && (ncur || nbno == NULLFSBLOCK)) { > + /* allocating a new root is effectively a split */ > + if (cur->bc_nlevels != pcur->bc_nlevels) > + splits++; > cur->bc_nlevels = pcur->bc_nlevels; > cur->bc_private.b.allocated += > pcur->bc_private.b.allocated; > @@ -2070,10 +2093,21 @@ xfs_bmbt_insert( > xfs_btree_del_cursor(pcur, XFS_BTREE_NOERROR); > } > if (ncur) { > + splits++; > pcur = ncur; > ncur = NULL; > } > } while (nbno != NULLFSBLOCK); > + > + if (splits > 1) { > + /* revalidate the old cursor as we had a multi-level split */ > + error = xfs_bmbt_lookup_eq(cur, oirec.br_startoff, > + oirec.br_startblock, oirec.br_blockcount, &i); > + if (error) > + goto error0; > + ASSERT(i == 1); > + } > + > XFS_BMBT_TRACE_CURSOR(cur, EXIT); > *stat = i; > return 0; > From owner-xfs@oss.sgi.com Thu Mar 20 00:17:41 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 00:17:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.8 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2K7Hd3O000503 for ; Thu, 20 Mar 2008 00:17:41 -0700 X-ASG-Debug-ID: 
1205997490-555b02b90000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from relay-am.club-internet.fr (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id A9ADC125680A for ; Thu, 20 Mar 2008 00:18:10 -0700 (PDT) Received: from relay-am.club-internet.fr (relay-am.club-internet.fr [194.158.104.67]) by cuda.sgi.com with ESMTP id J5uJmO4v2EHuUucX for ; Thu, 20 Mar 2008 00:18:10 -0700 (PDT) Received: from petole.dyndns.org (i07v-62-34-16-56.d4.club-internet.fr [62.34.16.56]) by relay-am.club-internet.fr (Postfix) with ESMTP id 857B725612 for ; Thu, 20 Mar 2008 08:17:38 +0100 (CET) Received: by petole.dyndns.org (Postfix, from userid 1000) id 08858C485; Thu, 20 Mar 2008 08:17:50 +0100 (CET) From: Nicolas KOWALSKI To: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: xfsdump debian package, wrong version number Subject: Re: xfsdump debian package, wrong version number References: <87iqzi3b8q.fsf@petole.dyndns.org> Date: Thu, 20 Mar 2008 08:17:50 +0100 In-Reply-To: Message-ID: <871w65sw2p.fsf@petole.dyndns.org> User-Agent: Gnus/5.110006 (No Gnus v0.6) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Barracuda-Connect: relay-am.club-internet.fr[194.158.104.67] X-Barracuda-Start-Time: 1205997492 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45356 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14952 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: niko@petole.dyndns.org Precedence: bulk X-list: 
xfs Niv Sardi writes: > Debian packaging is not made in upstream repository. Sorry, I was thinking so because of the debian/ directory available in every source packages I downloaded from oss.sgi.com. > quick fix: hand edit debian/changelog > or get the dsc from http://packages.debian.org/sid/xfsdump and rebuild > for your favourite flavour. Thanks for the tips, -- Nicolas From owner-xfs@oss.sgi.com Thu Mar 20 00:35:03 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 00:35:10 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2K7Z0sR005563 for ; Thu, 20 Mar 2008 00:35:03 -0700 X-ASG-Debug-ID: 1205998531-408700dd0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from bombadil.infradead.org (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 319BC6C4E4F for ; Thu, 20 Mar 2008 00:35:32 -0700 (PDT) Received: from bombadil.infradead.org (bombadil.infradead.org [18.85.46.34]) by cuda.sgi.com with ESMTP id 3wFVzPcXGhERbIg7 for ; Thu, 20 Mar 2008 00:35:32 -0700 (PDT) Received: from hch by bombadil.infradead.org with local (Exim 4.68 #1 (Red Hat Linux)) id 1JcFIj-0005H6-7Y; Thu, 20 Mar 2008 07:35:01 +0000 Date: Thu, 20 Mar 2008 03:35:01 -0400 From: Christoph Hellwig To: Eric Sandeen Cc: xfs-oss X-ASG-Orig-Subj: Re: [PATCH] fix xfsqa 049 when using whole scratch device Subject: Re: [PATCH] fix xfsqa 049 when using whole scratch device Message-ID: <20080320073501.GA19969@infradead.org> References: <47E1CCF0.9070904@sandeen.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47E1CCF0.9070904@sandeen.net> User-Agent: Mutt/1.5.17 (2007-11-01) X-SRS-Rewrite: SMTP reverse-path rewritten 
from by bombadil.infradead.org See http://www.infradead.org/rpr.html X-Barracuda-Connect: bombadil.infradead.org[18.85.46.34] X-Barracuda-Start-Time: 1205998534 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45357 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14953 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Wed, Mar 19, 2008 at 09:33:20PM -0500, Eric Sandeen wrote: > if SCRATCH_DEV happens to be a whole device, mke2fs will wait > for confirmation. > > (mke2fs -F would work, too...) Looks good. 
From owner-xfs@oss.sgi.com Thu Mar 20 00:45:28 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 00:45:35 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2K7jQ3s006117 for ; Thu, 20 Mar 2008 00:45:28 -0700 X-ASG-Debug-ID: 1205999158-556203d90000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from bombadil.infradead.org (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id AB5DE125687E; Thu, 20 Mar 2008 00:45:59 -0700 (PDT) Received: from bombadil.infradead.org (bombadil.infradead.org [18.85.46.34]) by cuda.sgi.com with ESMTP id 8ldyJHfE1WIHAJmk; Thu, 20 Mar 2008 00:45:59 -0700 (PDT) Received: from hch by bombadil.infradead.org with local (Exim 4.68 #1 (Red Hat Linux)) id 1JcFTK-0006O0-NE; Thu, 20 Mar 2008 07:45:58 +0000 Date: Thu, 20 Mar 2008 03:45:58 -0400 From: Christoph Hellwig To: David Chinner Cc: xfs-dev , xfs-oss X-ASG-Orig-Subj: Re: [PATCH] XFSQA 141: support 64k pagesize Subject: Re: [PATCH] XFSQA 141: support 64k pagesize Message-ID: <20080320074558.GD19969@infradead.org> References: <20080320063427.GB103491721@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080320063427.GB103491721@sgi.com> User-Agent: Mutt/1.5.17 (2007-11-01) X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org See http://www.infradead.org/rpr.html X-Barracuda-Connect: bombadil.infradead.org[18.85.46.34] X-Barracuda-Start-Time: 1205999159 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 
QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45358 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14955 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Thu, Mar 20, 2008 at 05:34:27PM +1100, David Chinner wrote: > Make the file larger and read 64k from it instead of 16k > so that it pulls in a full page from the middle of the file. Looks good. From owner-xfs@oss.sgi.com Thu Mar 20 00:45:14 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 00:45:21 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2K7jCar006091 for ; Thu, 20 Mar 2008 00:45:14 -0700 X-ASG-Debug-ID: 1205999144-2e4e02070000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from bombadil.infradead.org (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id B73D46C4F76; Thu, 20 Mar 2008 00:45:45 -0700 (PDT) Received: from bombadil.infradead.org (bombadil.infradead.org [18.85.46.34]) by cuda.sgi.com with ESMTP id 5sbOfBUF6wfssqcW; Thu, 20 Mar 2008 00:45:45 -0700 (PDT) Received: from hch by bombadil.infradead.org with local (Exim 4.68 #1 (Red Hat Linux)) id 1JcFT6-0006Nk-Ax; Thu, 20 Mar 2008 07:45:44 +0000 Date: Thu, 20 Mar 2008 03:45:44 -0400 From: Christoph Hellwig To: David Chinner Cc: xfs-dev , xfs-oss X-ASG-Orig-Subj: Re: [PATCH] XFSQA 103: filter ln output Subject: Re: [PATCH] XFSQA 103: filter ln 
output Message-ID: <20080320074544.GC19969@infradead.org> References: <20080320063245.GA103491721@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080320063245.GA103491721@sgi.com> User-Agent: Mutt/1.5.17 (2007-11-01) X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org See http://www.infradead.org/rpr.html X-Barracuda-Connect: bombadil.infradead.org[18.85.46.34] X-Barracuda-Start-Time: 1205999145 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45357 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14954 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Thu, Mar 20, 2008 at 05:32:45PM +1100, David Chinner wrote: > More recent versions of ln (i.e. Debian unstable) have > a different error output. Update the filter to handle this. Looks good.
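The patch under review normalises version-dependent ln error output. A minimal sketch of that kind of filter, in the style of xfstests' shell filter helpers — the exact message wordings below are illustrative assumptions, not taken from the patch itself:

```shell
# Hypothetical filter: collapse the version-dependent part of ln's
# "File exists" message so the golden output is stable across coreutils
# releases.  Both message texts here are illustrative samples.
_filter_ln()
{
    sed -e "s/^ln: creating symbolic link .*: File exists/ln: File exists/" \
        -e "s/^ln: failed to create symbolic link .*: File exists/ln: File exists/"
}

# Old-style and new-style wordings collapse to the same canonical line:
echo "ln: creating symbolic link 'b' to 'a': File exists" | _filter_ln
echo "ln: failed to create symbolic link 'b': File exists" | _filter_ln
```

Because a test's golden output is compared textually, filtering at the message level like this is usually preferable to pinning the test to one coreutils version.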
From owner-xfs@oss.sgi.com Thu Mar 20 00:45:54 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 00:46:02 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2K7jooE006254 for ; Thu, 20 Mar 2008 00:45:54 -0700 X-ASG-Debug-ID: 1205999183-556303c40000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from bombadil.infradead.org (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 99DEE12568B1; Thu, 20 Mar 2008 00:46:23 -0700 (PDT) Received: from bombadil.infradead.org (bombadil.infradead.org [18.85.46.34]) by cuda.sgi.com with ESMTP id d2tl311yHuOkDfJF; Thu, 20 Mar 2008 00:46:23 -0700 (PDT) Received: from hch by bombadil.infradead.org with local (Exim 4.68 #1 (Red Hat Linux)) id 1JcFTi-0006RD-Sp; Thu, 20 Mar 2008 07:46:22 +0000 Date: Thu, 20 Mar 2008 03:46:22 -0400 From: Christoph Hellwig To: David Chinner Cc: xfs-dev , xfs-oss X-ASG-Orig-Subj: Re: [PATCH] XFSQA 166: support varying page sizes Subject: Re: [PATCH] XFSQA 166: support varying page sizes Message-ID: <20080320074622.GE19969@infradead.org> References: <20080320063713.GC103491721@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080320063713.GC103491721@sgi.com> User-Agent: Mutt/1.5.17 (2007-11-01) X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org See http://www.infradead.org/rpr.html X-Barracuda-Connect: bombadil.infradead.org[18.85.46.34] X-Barracuda-Start-Time: 1205999183 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of 
TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45358 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14956 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Thu, Mar 20, 2008 at 05:37:13PM +1100, David Chinner wrote: > Make the filter check the resultant output based on > the initial written region size. Hence the page size of the > machine will not affect the output of the filter. > Modify the golden output to match. Looks good. From owner-xfs@oss.sgi.com Thu Mar 20 00:46:44 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 00:46:53 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2K7kgTg006710 for ; Thu, 20 Mar 2008 00:46:44 -0700 X-ASG-Debug-ID: 1205999235-70d402220000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from bombadil.infradead.org (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 0C62E12569E6 for ; Thu, 20 Mar 2008 00:47:15 -0700 (PDT) Received: from bombadil.infradead.org (bombadil.infradead.org [18.85.46.34]) by cuda.sgi.com with ESMTP id kOOLt2ulbulDM3yF for ; Thu, 20 Mar 2008 00:47:15 -0700 (PDT) Received: from hch by bombadil.infradead.org with local (Exim 4.68 #1 (Red Hat Linux)) id 1JcFU4-0006Rs-Ny; Thu, 20 Mar 2008 07:46:44 +0000 Date: Thu, 20 Mar 2008 03:46:44 -0400 From: Christoph Hellwig To: David Chinner Cc: xfs-dev ,
xfs-oss X-ASG-Orig-Subj: Re: [PATCH 1/2] Factor repeated code in xfs_ialloc_ag_alloc Subject: Re: [PATCH 1/2] Factor repeated code in xfs_ialloc_ag_alloc Message-ID: <20080320074644.GF19969@infradead.org> References: <20080320052000.GZ95344431@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080320052000.GZ95344431@sgi.com> User-Agent: Mutt/1.5.17 (2007-11-01) X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org See http://www.infradead.org/rpr.html X-Barracuda-Connect: bombadil.infradead.org[18.85.46.34] X-Barracuda-Start-Time: 1205999236 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45358 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14957 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Thu, Mar 20, 2008 at 04:20:00PM +1100, David Chinner wrote: > Factor repeated code that determines the cluster alignment of > an inode extent. Nice one, looks good. 
From owner-xfs@oss.sgi.com Thu Mar 20 00:47:23 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 00:47:31 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2K7lMF4007061 for ; Thu, 20 Mar 2008 00:47:23 -0700 X-ASG-Debug-ID: 1205999275-1c2c03570000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from bombadil.infradead.org (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id A7F6B6C4D47 for ; Thu, 20 Mar 2008 00:47:55 -0700 (PDT) Received: from bombadil.infradead.org (bombadil.infradead.org [18.85.46.34]) by cuda.sgi.com with ESMTP id mkWW99FGFvTR4MS5 for ; Thu, 20 Mar 2008 00:47:55 -0700 (PDT) Received: from hch by bombadil.infradead.org with local (Exim 4.68 #1 (Red Hat Linux)) id 1JcFUi-0006TC-LL; Thu, 20 Mar 2008 07:47:24 +0000 Date: Thu, 20 Mar 2008 03:47:24 -0400 From: Christoph Hellwig To: David Chinner Cc: xfs-dev , xfs-oss X-ASG-Orig-Subj: Re: [PATCH 2/2] Prevent shutdown on inode allocation failure Subject: Re: [PATCH 2/2] Prevent shutdown on inode allocation failure Message-ID: <20080320074724.GG19969@infradead.org> References: <20080320052100.GA95344431@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080320052100.GA95344431@sgi.com> User-Agent: Mutt/1.5.17 (2007-11-01) X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org See http://www.infradead.org/rpr.html X-Barracuda-Connect: bombadil.infradead.org[18.85.46.34] X-Barracuda-Start-Time: 1205999275 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using 
per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45359 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14958 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Thu, Mar 20, 2008 at 04:21:00PM +1100, David Chinner wrote: > At ENOSPC, we can get a filesystem shutdown due to cancelling a > dirty transaction in xfs_mkdir or xfs_create. This is due to the > initial allocation attempt not taking inode alignment into account, and hence > we can prepare the AGF freelist for allocation when it's not actually > possible to do an allocation. This results in inode allocation returning > ENOSPC with a dirty transaction, and hence we shut down the filesystem. > > Because the first allocation is an exact allocation attempt, we must tell > the allocator that the alignment does not affect the allocation attempt. > i.e. we will accept any extent alignment as long as the extent starts > at the block we want. Unfortunately, this means that if the longest > free extent is less than the length + alignment necessary for fallback > allocation attempts but is long enough to attempt a non-aligned allocation, > we will modify the free list. > > If we then have the exact allocation fail, all other allocation attempts > will also fail due to the alignment constraint being taken into account. > Hence the initial attempt needs to set the "alignment slop" field so > that alignment, while not required, must be taken into account when > determining if there is enough space left in the AG to do the allocation.
> > That means if the exact allocation fails, we will not dirty the freelist > if there is not enough space available for a subsequent allocation to > succeed. Hence we get an ENOSPC error back to userspace without shutting > down the filesystem. Looks good. From owner-xfs@oss.sgi.com Thu Mar 20 00:55:28 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 00:55:35 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2K7tQq1007957 for ; Thu, 20 Mar 2008 00:55:28 -0700 X-ASG-Debug-ID: 1205999758-648e03340000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from bombadil.infradead.org (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 4DD2DFDE4BA for ; Thu, 20 Mar 2008 00:55:59 -0700 (PDT) Received: from bombadil.infradead.org (bombadil.infradead.org [18.85.46.34]) by cuda.sgi.com with ESMTP id V3y9Cq23AD4zG11x for ; Thu, 20 Mar 2008 00:55:59 -0700 (PDT) Received: from hch by bombadil.infradead.org with local (Exim 4.68 #1 (Red Hat Linux)) id 1JcFcV-0007En-QN; Thu, 20 Mar 2008 07:55:27 +0000 Date: Thu, 20 Mar 2008 03:55:27 -0400 From: Christoph Hellwig To: Lachlan McIlroy Cc: xfs-dev , xfs-oss X-ASG-Orig-Subj: Re: REVIEW: xfs_bmap_check_leaf_extents() can reference unmapped memory Subject: Re: REVIEW: xfs_bmap_check_leaf_extents() can reference unmapped memory Message-ID: <20080320075527.GA24999@infradead.org> References: <47E2000B.9030208@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47E2000B.9030208@sgi.com> User-Agent: Mutt/1.5.17 (2007-11-01) X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org See http://www.infradead.org/rpr.html
X-Barracuda-Connect: bombadil.infradead.org[18.85.46.34] X-Barracuda-Start-Time: 1205999759 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45358 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14959 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Thu, Mar 20, 2008 at 05:11:23PM +1100, Lachlan McIlroy wrote:
> + xfs_bmbt_rec_t last; /* last extent in previous block */
> xfs_bmbt_rec_t *nextp; /* pointer to next extent */
> int bp_release = 0;
>
> @@ -6264,7 +6264,6 @@ xfs_bmap_check_leaf_extents(
> /*
> * Loop over all leaf nodes checking that all extents are in the right order.
> */
> for (;;) {
> xfs_fsblock_t nextbno;
> xfs_extnum_t num_recs;
> @@ -6285,18 +6284,18 @@ xfs_bmap_check_leaf_extents(
> */
>
> ep = XFS_BTREE_REC_ADDR(xfs_bmbt, block, 1);
> + if (i) {
> + xfs_btree_check_rec(XFS_BTNUM_BMAP, (void *)&last,
> + (void *)ep);

I haven't actually compiled this yet, but I'd expect this to give an uninitialized variable warning with gcc because it can't figure out this can't happen in the first loop iteration. You might need a last = { 0, } somewhere in the beginning of the function. Also I think the void * casts above are useless.

> xfs_btree_check_rec(XFS_BTNUM_BMAP, (void *)ep,
> (void *)(nextp));

and at that point you might fix these up as well, with the added benefit that now the whole call fits on a single line.
From owner-xfs@oss.sgi.com Thu Mar 20 01:25:51 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 01:25:59 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2K8PosV008974 for ; Thu, 20 Mar 2008 01:25:51 -0700 X-ASG-Debug-ID: 1206001582-2e45033b0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from knox.decisionsoft.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 6DBBE6C506E for ; Thu, 20 Mar 2008 01:26:22 -0700 (PDT) Received: from knox.decisionsoft.com (knox-be.decisionsoft.com [87.194.172.100]) by cuda.sgi.com with ESMTP id UTEcMgNb3XXiM9fW for ; Thu, 20 Mar 2008 01:26:22 -0700 (PDT) Received: from tugela.dsl.local ([10.0.0.91]) by knox.decisionsoft.com with esmtp (Exim 4.63) (envelope-from ) id 1JcG5q-000625-QR; Thu, 20 Mar 2008 08:25:46 +0000 Received: from strr (helo=localhost) by tugela.dsl.local with local-esmtp (Exim 4.63) (envelope-from ) id 1JcG5q-0006qp-Mn; Thu, 20 Mar 2008 08:25:46 +0000 Date: Thu, 20 Mar 2008 08:25:46 +0000 (GMT) From: Stuart Rowan X-X-Sender: strr@tugela.dsl.local To: Timothy Shimmin cc: strr-debian@decisionsoft.co.uk, xfs@oss.sgi.com X-ASG-Orig-Subj: XFS internal error xfs_itobp at line 360 of file fs/xfs/xfs_inode.c. (was Re: 2.6.24.3 nfs server on xfs keeps producing nfsd: non-standard errno: -117) Subject: XFS internal error xfs_itobp at line 360 of file fs/xfs/xfs_inode.c. 
(was Re: 2.6.24.3 nfs server on xfs keeps producing nfsd: non-standard errno: -117) In-Reply-To: <47E1B939.3060008@sgi.com> Message-ID: References: <47DEFE5E.4030703@decisionsoft.co.uk> <47DF0C9D.1010602@sgi.com> <47DFC880.6040403@decisionsoft.co.uk> <47E1B939.3060008@sgi.com> User-Agent: Alpine 1.00 (DEB 882 2007-12-20) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-SA-Exim-Connect-IP: 10.0.0.91 X-SA-Exim-Mail-From: strr@decisionsoft.com X-SA-Exim-Scanned: No (on knox.decisionsoft.com); SAEximRunCond expanded to false X-SystemFilter: not expanding decisionsoft.co.uk address X-Barracuda-Connect: knox-be.decisionsoft.com[87.194.172.100] X-Barracuda-Start-Time: 1206001583 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45361 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14960 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: strr@decisionsoft.com Precedence: bulk X-list: xfs On Thu, 20 Mar 2008, Timothy Shimmin wrote: > Stuart Rowan wrote: >> Timothy Shimmin wrote, on 18/03/08 00:28: >>> Hi Stuart, >>> >>> Stuart Rowan wrote: >>>> >>>> I have *millions* of lines of (>200k per minute according to syslog): >>>> nfsd: non-standard errno: -117 >>>> being sent out of dmesg >>>> >>>> Now errno 117 is >>>> #define EUCLEAN 117 /* Structure needs cleaning */ >>>> >>> In XFS we mapped EFSCORRUPTED to EUCLEAN as EFSCORRUPTED >>> didn't exist on Linux. 
>>> However, normally if this error is encountered in XFS then >>> we output an appropriate msg to the syslog. >>> Our default error level is 3 and most reports are rated at 1 >>> so should show up I would have thought. >>> >>> --Tim >>> >>>> >>>> xfs_repair -n says the filesystems are clean >>>> xfs_repair has been run multiple times to completion on the filesystems, >>>> all is fine. >>>> >>>> The NFS server is currently in use (indeed the message only starts once >>>> clients connect) and works absolutely fine. >>>> >>>> How do I find out what (if anything) is wrong with my filesystem / >>>> appropriately silence this message? >>>> >>> >> I briefly changed the sysctl fs.xfs.error_level to 6 and then back to 3 >> > Good idea (I was thinking about that :-). > > Somehow, your subject line referring to 2.6.24 didn't stick in > my brain (that's pretty old). > So I was looking at recent code which I can't see has this error > case from xfs_itobp() (it is now in xfs_imap_to_bp()). > Pretty old for you, latest released Linux kernel to me :-P > Looking at old code, I see 2 EFSCORRUPTED paths with the following > one triggering at XFS_ERRLEVEL_HIGH (and presumably why you didn't > see it until now) ... > > montep |1.198| | /* > montep |1.198| | * Validate the magic number > and version of every inode in the buffer > montep |1.198| | * (if DEBUG kernel) or the > first inode in the buffer, otherwise. 
> montep |1.198| | */
> nathans |1.303|2.4.x-xfs:slinx:74929a |#ifdef DEBUG
> montep |1.198| | ni = BBTOB(imap.im_len) >> mp->m_sb.sb_inodelog
> montep |1.198| |#else
> montep |1.198| | ni = 1;
> montep |1.198| |#endif
> montep |1.198| | for (i = 0; i < ni; i++) {
> doucette |1.245|irix6.5f:irix:09146b | int di_ok;
> doucette |1.245|irix6.5f:irix:09146b | xfs_dinode_t *dip;
> doucette |1.245|irix6.5f:irix:09146b |
> lord |1.292|2.4.0-test1-xfs:slinx:65571a| dip = (xfs_dinode_t *)xfs_buf_offset(bp,
> montep |1.198| | (i << mp->m_sb.sb_inodelog));
> dxm |1.285|2.4.0-test1-xfs:slinx:62350a| di_ok = INT_GET(dip->di_core.di_magic, ARCH_CONVERT) == XFS_DINODE_MAGIC &&
> dxm |1.285|2.4.0-test1-xfs:slinx:62350a| XFS_DINODE_GOOD_VERSION(INT_GET(dip->di_core.di_version, ARCH_CONVERT));
> overby |1.362|2.4.x-xfs:slinx:136445a | if (unlikely(XFS_TEST_ERROR(!di_ok, mp, XFS_ERRTAG_ITOBP_INOTOBP,
> overby |1.362|2.4.x-xfs:slinx:136445a | XFS_RANDOM_ITOBP_INOTOBP))) {
> montep |1.198| |#ifdef DEBUG
> nathans |1.337|2.4.x-xfs:slinx:119399a | prdev("bad inode magic/vsn daddr 0x%llx #%d (magic=%x)",
> nathans |1.337|2.4.x-xfs:slinx:119399a | mp->m_dev, (unsigned long long)imap.im_blkno, i,
> nathans |1.303|2.4.x-xfs:slinx:74929a | INT_GET(dip->di_core.di_magic, ARCH_CONVERT));
> montep |1.198| |#endif
> lord |1.376|2.4.x-xfs:slinx:150747a | XFS_CORRUPTION_ERROR("xfs_itobp", XFS_ERRLEVEL_HIGH,
> overby |1.362|2.4.x-xfs:slinx:136445a | mp, dip);
> montep |1.198| | xfs_trans_brelse(tp, bp);
> sup |1.216| | return XFS_ERROR(EFSCORRUPTED);
> montep |1.198| | }
> ajs |1.143| | }
>
> So the first inode in the buffer has the wrong magic# or version#.
> I'm surprised that this wasn't picked up by repair or check.
>
> --Tim
>

I have some more information! The server, evenlode, was previously serving NFS exports of ext3 filesystems. Last week we rsynced the data to the new server running XFS.
Eventually I spotted that the high error rate was linked to the volume of NFS read calls (200k / minute). A quick tcpdump gave me a couple of likely looking hosts. I logged into one (bonny) and found gnome-panel using 100% CPU. I killed that and these messages have now reduced to a handful an hour. That gnome-panel will have had the NFS server and underlying NFS backing filesystem (ext3 -> XFS) changed underneath it. So my questions ... Is it possible that the errors are related to duff request data being sent by the NFS clients because they are still referencing e.g. inodes as they were when the NFS server was ext3 backed? Is it also possible that things like the rather high request rate (200k/minute), although that's reduced now, made a race in e.g. the XFS code triggerable? As you say, it's rather surprising that this sort of issue is not being caught by xfs_repair (-n), and that's what leads me to suspect something else at play ... Cheers, Stu. >> It gives the following message and backtrace >> >>> Mar 18 13:35:15 evenlode kernel: nfsd: non-standard errno: -117 >>> Mar 18 13:35:15 evenlode kernel: 0x0: 00 00 00 00 00 00 00 00 00 00 00 00 >>> 00 00 00 00 Mar 18 13:35:15 evenlode kernel: Filesystem "dm-0": XFS >>> internal error xfs_itobp at line 360 of file fs/xfs/xfs_inode.c.
Caller >>> 0xffffffff8821224d >>> Mar 18 13:35:15 evenlode kernel: Pid: 2791, comm: nfsd Not tainted >>> 2.6.24.3-generic #1 >>> Mar 18 13:35:15 evenlode kernel: Mar 18 13:35:15 evenlode kernel: Call >>> Trace: >>> Mar 18 13:35:15 evenlode kernel: [] >>> :xfs:xfs_iread+0x71/0x1e8 >>> Mar 18 13:35:15 evenlode kernel: [] >>> :xfs:xfs_itobp+0x141/0x17b >>> Mar 18 13:35:15 evenlode kernel: [] >>> :xfs:xfs_iread+0x71/0x1e8 >>> Mar 18 13:35:15 evenlode kernel: [] >>> :xfs:xfs_iread+0x71/0x1e8 >>> Mar 18 13:35:15 evenlode kernel: [] >>> :xfs:xfs_iget_core+0x352/0x63a >>> Mar 18 13:35:15 evenlode kernel: [] >>> alloc_inode+0x152/0x1c2 >>> Mar 18 13:35:15 evenlode kernel: [] >>> :xfs:xfs_iget+0x9b/0x13f >>> Mar 18 13:35:15 evenlode kernel: [] >>> :xfs:xfs_vget+0x4d/0xbb >> >> >> Does that help? >> >> Thanks, >> Stu. > > > From owner-xfs@oss.sgi.com Thu Mar 20 01:50:40 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 01:50:47 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2K8ob8n009866 for ; Thu, 20 Mar 2008 01:50:40 -0700 X-ASG-Debug-ID: 1206003069-38c9016f0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from deliver.uni-koblenz.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id AABB4B8F103 for ; Thu, 20 Mar 2008 01:51:09 -0700 (PDT) Received: from deliver.uni-koblenz.de (deliver.uni-koblenz.de [141.26.64.15]) by cuda.sgi.com with ESMTP id 0WRW50IBdGDv5BuA for ; Thu, 20 Mar 2008 01:51:09 -0700 (PDT) Received: from localhost (localhost [127.0.0.1]) by deliver.uni-koblenz.de (Postfix) with ESMTP id 4D0287855789 for ; Thu, 20 Mar 2008 09:51:09 +0100 (CET) Received: from deliver.uni-koblenz.de ([127.0.0.1]) by localhost 
(deliver.uni-koblenz.de [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 10291-01; Thu, 20 Mar 2008 09:51:07 +0100 (CET) Received: from bliss.uni-koblenz.de (bliss.uni-koblenz.de [141.26.64.65]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by deliver.uni-koblenz.de (Postfix) with ESMTP id EDC84785578B for ; Thu, 20 Mar 2008 09:51:07 +0100 (CET) From: Rainer Krienke To: xfs@oss.sgi.com X-ASG-Orig-Subj: xfs_repair sometimes hangs during repair Subject: xfs_repair sometimes hangs during repair Date: Thu, 20 Mar 2008 09:51:03 +0100 User-Agent: KMail/1.9.9 MIME-Version: 1.0 Content-Type: multipart/signed; boundary="nextPart2834985.ULnozQbeu3"; protocol="application/pgp-signature"; micalg=pgp-sha1 Content-Transfer-Encoding: 7bit Message-Id: <200803200951.07238.krienke@uni-koblenz.de> X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Scanned: amavisd-new at uni-koblenz.de X-Barracuda-Connect: deliver.uni-koblenz.de[141.26.64.15] X-Barracuda-Start-Time: 1206003070 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45362 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Status: Clean X-archive-position: 14961 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: krienke@uni-koblenz.de Precedence: bulk X-list: xfs --nextPart2834985.ULnozQbeu3 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable Content-Disposition: inline

Hello,

lately I had a problem with an xfs filesystem on a Novell SLES10 SP1 system. I tried to repair the fs using xfs_repair (V2.9.7), but this did not work. xfs_repair fails with an out of memory error. The machine has 6GB of RAM, but is a 32 bit machine, and I guess it limits one process to 3GB memory. At least xfs_repair died when its virtual size was about 3GB. The filesystem is 360GB in size and had 6.9 million inodes on it.

This problem is well documented, and I got the advice to set -o bhash= in the xfs_repair call. This basically worked, however it required a lot of trial and error, which is not what you want if you need to get your filesystem back online as soon as possible.

In the meantime I created a 360GB test filesystem with 120 million inodes on it. xfs_repair without options is unable to complete. If I run xfs_repair -o bhash=8192, the repair process terminates normally (the filesystem is actually ok).

The problem is that if I specify smaller values for bhash, xfs_repair will not issue any error or warning or out of memory message, but it won't terminate either. At some point, sometimes in Phase 3, sometimes in Phase 6, it simply hangs. In this state xfs_repair uses 100% CPU and no longer accesses the disk with the filesystem to be repaired, as reported by iostat -d.

This is not really good, because each try to run xfs_repair takes about 90 minutes until you finally find out that it does not work (xfs_repair is hanging) and you have to restart xfs_repair using a larger bhash value. Actually I would expect xfs_repair to issue an error, or perhaps there should at least be some guidelines on how to set bhash depending on the size of the filesystem.

Are there any hints on how to set a minimal bhash value for a given filesystem, or is there another way to get a larger fs checked on a 32 bit machine?
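The workflow described above can be summarised as a command-line sketch. The `-o bhash=` option and the value 8192 come from the report itself; the device path is a placeholder, and whether a given value suffices depends on the filesystem, so treat this as an invocation pattern rather than a recipe:

```shell
# The filesystem must be unmounted before repair.
umount /dev/sdXN        # /dev/sdXN is a placeholder device

# A plain run may exhaust the ~3GB per-process address space on 32-bit:
xfs_repair /dev/sdXN

# Retry with a bounded buffer-cache hash size (8192 worked for the
# reporter's 360GB filesystem; smaller values reportedly hang):
xfs_repair -o bhash=8192 /dev/sdXN
```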
Thanks in advance
Rainer

-- 
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1 56070 Koblenz, http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312 PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312 --nextPart2834985.ULnozQbeu3 Content-Type: application/pgp-signature; name=signature.asc Content-Description: This is a digitally signed message part. -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.4-svn0 (GNU/Linux) iD8DBQBH4iV7aldtjc/KDEoRAtbmAKD7O+LyqtUG1Tz5919MKi7JZX2r9wCeNqT7 8Lob6qFRyObmVB5Ie5BBcOA= =NYNS -----END PGP SIGNATURE----- --nextPart2834985.ULnozQbeu3-- From owner-xfs@oss.sgi.com Thu Mar 20 02:50:25 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 02:50:33 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_63, J_CHICKENPOX_64,J_CHICKENPOX_65,J_CHICKENPOX_66 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2K9oNTS011747 for ; Thu, 20 Mar 2008 02:50:25 -0700 X-ASG-Debug-ID: 1206006651-759a02140000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from verein.lst.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 42D316C557B for ; Thu, 20 Mar 2008 02:50:52 -0700 (PDT) Received: from verein.lst.de (verein.lst.de [213.95.11.210]) by cuda.sgi.com with ESMTP id ioi3zzDS3ue2lpjd for ; Thu, 20 Mar 2008 02:50:52 -0700 (PDT) Received: from verein.lst.de (localhost [127.0.0.1]) by verein.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id m2K9dfF3029324 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Thu, 20 Mar 2008 10:39:41 +0100 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id m2K9dekh029322 for xfs@oss.sgi.com; Thu, 20 Mar 2008 10:39:40 +0100 Date:
Thu, 20 Mar 2008 10:39:40 +0100
From: Christoph Hellwig
To: xfs@oss.sgi.com
Subject: [PATCH] kill mrlock_t
Message-ID: <20080320093940.GA28966@lst.de>

XFS inodes are locked via the xfs_ilock family of functions, which
internally use a rw_semaphore wrapped into an abstraction called
mrlock_t. The mrlock_t should be purely internal to the xfs_ilock
functions, but it leaks through to the callers via various lock state
asserts.

This patch:

 - adds a new xfs_isilocked abstraction so that the lock state asserts
   fit into the xfs_ilock API family
 - open-codes the mrlock wrappers in the xfs_ilock family of functions
 - makes the state tracking debug-only and merges it into a single
   state word
 - removes superfluous flags from the xfs_ilock family of functions

This kills 8 bytes per inode for non-debug builds, which would e.g. be
the space for ACL caching on 32-bit systems.
Signed-off-by: Christoph Hellwig Index: linux-2.6-xfs/fs/xfs/linux-2.6/mrlock.h =================================================================== --- linux-2.6-xfs.orig/fs/xfs/linux-2.6/mrlock.h 2008-03-20 09:38:53.000000000 +0100 +++ /dev/null 1970-01-01 00:00:00.000000000 +0000 @@ -1,102 +0,0 @@ -/* - * Copyright (c) 2000-2006 Silicon Graphics, Inc. - * All Rights Reserved. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it would be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write the Free Software Foundation, - * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA - */ -#ifndef __XFS_SUPPORT_MRLOCK_H__ -#define __XFS_SUPPORT_MRLOCK_H__ - -#include - -enum { MR_NONE, MR_ACCESS, MR_UPDATE }; - -typedef struct { - struct rw_semaphore mr_lock; - int mr_writer; -} mrlock_t; - -#define mrinit(mrp, name) \ - do { (mrp)->mr_writer = 0; init_rwsem(&(mrp)->mr_lock); } while (0) -#define mrlock_init(mrp, t,n,s) mrinit(mrp, n) -#define mrfree(mrp) do { } while (0) - -static inline void mraccess(mrlock_t *mrp) -{ - down_read(&mrp->mr_lock); -} - -static inline void mrupdate(mrlock_t *mrp) -{ - down_write(&mrp->mr_lock); - mrp->mr_writer = 1; -} - -static inline void mraccess_nested(mrlock_t *mrp, int subclass) -{ - down_read_nested(&mrp->mr_lock, subclass); -} - -static inline void mrupdate_nested(mrlock_t *mrp, int subclass) -{ - down_write_nested(&mrp->mr_lock, subclass); - mrp->mr_writer = 1; -} - - -static inline int mrtryaccess(mrlock_t *mrp) -{ - return down_read_trylock(&mrp->mr_lock); -} - -static inline int 
mrtryupdate(mrlock_t *mrp) -{ - if (!down_write_trylock(&mrp->mr_lock)) - return 0; - mrp->mr_writer = 1; - return 1; -} - -static inline void mrunlock(mrlock_t *mrp) -{ - if (mrp->mr_writer) { - mrp->mr_writer = 0; - up_write(&mrp->mr_lock); - } else { - up_read(&mrp->mr_lock); - } -} - -static inline void mrdemote(mrlock_t *mrp) -{ - mrp->mr_writer = 0; - downgrade_write(&mrp->mr_lock); -} - -#ifdef DEBUG -/* - * Debug-only routine, without some platform-specific asm code, we can - * now only answer requests regarding whether we hold the lock for write - * (reader state is outside our visibility, we only track writer state). - * Note: means !ismrlocked would give false positives, so don't do that. - */ -static inline int ismrlocked(mrlock_t *mrp, int type) -{ - if (mrp && type == MR_UPDATE) - return mrp->mr_writer; - return 1; -} -#endif - -#endif /* __XFS_SUPPORT_MRLOCK_H__ */ Index: linux-2.6-xfs/fs/xfs/xfs_iget.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_iget.c 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_iget.c 2008-03-20 09:40:35.000000000 +0100 @@ -211,9 +211,8 @@ finish_inode: xfs_itrace_exit_tag(ip, "xfs_iget.alloc"); - mrlock_init(&ip->i_lock, MRLOCK_ALLOW_EQUAL_PRI|MRLOCK_BARRIER, - "xfsino", ip->i_ino); - mrlock_init(&ip->i_iolock, MRLOCK_BARRIER, "xfsio", ip->i_ino); + init_rwsem(&ip->i_lock); + init_rwsem(&ip->i_iolock); init_waitqueue_head(&ip->i_ipin_wait); atomic_set(&ip->i_pincount, 0); initnsema(&ip->i_flock, 1, "xfsfino"); @@ -593,8 +592,9 @@ xfs_iunlock_map_shared( * XFS_IOLOCK_EXCL | XFS_ILOCK_EXCL */ void -xfs_ilock(xfs_inode_t *ip, - uint lock_flags) +xfs_ilock( + xfs_inode_t *ip, + uint lock_flags) { /* * You can't set both SHARED and EXCL for the same lock, @@ -608,16 +608,19 @@ xfs_ilock(xfs_inode_t *ip, ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_DEP_MASK)) == 0); if (lock_flags & XFS_IOLOCK_EXCL) { - mrupdate_nested(&ip->i_iolock, 
XFS_IOLOCK_DEP(lock_flags)); + down_write_nested(&ip->i_iolock, XFS_IOLOCK_DEP(lock_flags)); } else if (lock_flags & XFS_IOLOCK_SHARED) { - mraccess_nested(&ip->i_iolock, XFS_IOLOCK_DEP(lock_flags)); + down_read_nested(&ip->i_iolock, XFS_IOLOCK_DEP(lock_flags)); } if (lock_flags & XFS_ILOCK_EXCL) { - mrupdate_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags)); + down_write_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags)); } else if (lock_flags & XFS_ILOCK_SHARED) { - mraccess_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags)); + down_read_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags)); } xfs_ilock_trace(ip, 1, lock_flags, (inst_t *)__return_address); +#ifdef DEBUG + ip->i_lock_state |= (lock_flags & (XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL)); +#endif } /* @@ -634,11 +637,12 @@ xfs_ilock(xfs_inode_t *ip, * */ int -xfs_ilock_nowait(xfs_inode_t *ip, - uint lock_flags) +xfs_ilock_nowait( + xfs_inode_t *ip, + uint lock_flags) { - int iolocked; - int ilocked; + int iolocked; + int ilocked; /* * You can't set both SHARED and EXCL for the same lock, @@ -653,35 +657,36 @@ xfs_ilock_nowait(xfs_inode_t *ip, iolocked = 0; if (lock_flags & XFS_IOLOCK_EXCL) { - iolocked = mrtryupdate(&ip->i_iolock); - if (!iolocked) { + iolocked = down_write_trylock(&ip->i_iolock); + if (!iolocked) return 0; - } } else if (lock_flags & XFS_IOLOCK_SHARED) { - iolocked = mrtryaccess(&ip->i_iolock); - if (!iolocked) { + iolocked = down_read_trylock(&ip->i_iolock); + if (!iolocked) return 0; - } } + if (lock_flags & XFS_ILOCK_EXCL) { - ilocked = mrtryupdate(&ip->i_lock); - if (!ilocked) { - if (iolocked) { - mrunlock(&ip->i_iolock); - } - return 0; - } + ilocked = down_write_trylock(&ip->i_lock); + if (!ilocked) + goto out_ilock_fail; } else if (lock_flags & XFS_ILOCK_SHARED) { - ilocked = mrtryaccess(&ip->i_lock); - if (!ilocked) { - if (iolocked) { - mrunlock(&ip->i_iolock); - } - return 0; - } + ilocked = down_read_trylock(&ip->i_lock); + if (!ilocked) + goto out_ilock_fail; } xfs_ilock_trace(ip, 2, lock_flags, 
(inst_t *)__return_address); +#ifdef DEBUG + ip->i_lock_state |= (lock_flags & (XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL)); +#endif return 1; + + out_ilock_fail: + if (lock_flags & XFS_IOLOCK_EXCL) + up_write(&ip->i_iolock); + else if (lock_flags & XFS_IOLOCK_SHARED) + up_read(&ip->i_iolock); + return 0; } /* @@ -697,8 +702,9 @@ xfs_ilock_nowait(xfs_inode_t *ip, * */ void -xfs_iunlock(xfs_inode_t *ip, - uint lock_flags) +xfs_iunlock( + xfs_inode_t *ip, + uint lock_flags) { /* * You can't set both SHARED and EXCL for the same lock, @@ -711,35 +717,33 @@ xfs_iunlock(xfs_inode_t *ip, (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)); ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_IUNLOCK_NONOTIFY | XFS_LOCK_DEP_MASK)) == 0); + ASSERT(ip->i_lock_state & lock_flags); ASSERT(lock_flags != 0); - if (lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) { - ASSERT(!(lock_flags & XFS_IOLOCK_SHARED) || - (ismrlocked(&ip->i_iolock, MR_ACCESS))); - ASSERT(!(lock_flags & XFS_IOLOCK_EXCL) || - (ismrlocked(&ip->i_iolock, MR_UPDATE))); - mrunlock(&ip->i_iolock); - } - - if (lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) { - ASSERT(!(lock_flags & XFS_ILOCK_SHARED) || - (ismrlocked(&ip->i_lock, MR_ACCESS))); - ASSERT(!(lock_flags & XFS_ILOCK_EXCL) || - (ismrlocked(&ip->i_lock, MR_UPDATE))); - mrunlock(&ip->i_lock); + if (lock_flags & XFS_IOLOCK_EXCL) + up_write(&ip->i_iolock); + else if (lock_flags & XFS_IOLOCK_SHARED) + up_read(&ip->i_iolock); + + if (lock_flags & XFS_ILOCK_EXCL) + up_write(&ip->i_lock); + else if (lock_flags & (XFS_ILOCK_SHARED)) + up_read(&ip->i_lock); + if ((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) && + !(lock_flags & XFS_IUNLOCK_NONOTIFY) && ip->i_itemp) { /* * Let the AIL know that this item has been unlocked in case * it is in the AIL and anyone is waiting on it. Don't do * this if the caller has asked us not to. 
*/ - if (!(lock_flags & XFS_IUNLOCK_NONOTIFY) && - ip->i_itemp != NULL) { - xfs_trans_unlocked_item(ip->i_mount, - (xfs_log_item_t*)(ip->i_itemp)); - } + xfs_trans_unlocked_item(ip->i_mount, + (xfs_log_item_t *)ip->i_itemp); } xfs_ilock_trace(ip, 3, lock_flags, (inst_t *)__return_address); +#ifdef DEBUG + ip->i_lock_state &= ~(lock_flags & (XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL)); +#endif } /* @@ -747,21 +751,42 @@ xfs_iunlock(xfs_inode_t *ip, * if it is being demoted. */ void -xfs_ilock_demote(xfs_inode_t *ip, - uint lock_flags) +xfs_ilock_demote( + xfs_inode_t *ip, + uint lock_flags) { ASSERT(lock_flags & (XFS_IOLOCK_EXCL|XFS_ILOCK_EXCL)); ASSERT((lock_flags & ~(XFS_IOLOCK_EXCL|XFS_ILOCK_EXCL)) == 0); + ASSERT(ip->i_lock_state & lock_flags); - if (lock_flags & XFS_ILOCK_EXCL) { - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE)); - mrdemote(&ip->i_lock); - } - if (lock_flags & XFS_IOLOCK_EXCL) { - ASSERT(ismrlocked(&ip->i_iolock, MR_UPDATE)); - mrdemote(&ip->i_iolock); - } + if (lock_flags & XFS_ILOCK_EXCL) + downgrade_write(&ip->i_lock); + if (lock_flags & XFS_IOLOCK_EXCL) + downgrade_write(&ip->i_iolock); + +#ifdef DEBUG + ip->i_lock_state &= ~lock_flags; +#endif +} + +#ifdef DEBUG +/* + * Debug-only routine, without additional rw_semaphore APIs, we can + * now only answer requests regarding whether we hold the lock for write + * (reader state is outside our visibility, we only track writer state). + * + * Note: means !xfs_isilocked would give false positives, so don't do that. 
+ */ +int +xfs_isilocked( + xfs_inode_t *ip, + uint lock_flags) +{ + if (lock_flags & (XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL)) + return (ip->i_lock_state & lock_flags); + return 1; } +#endif /* * The following three routines simply manage the i_flock Index: linux-2.6-xfs/fs/xfs/xfs_inode.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_inode.c 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_inode.c 2008-03-20 09:40:35.000000000 +0100 @@ -1291,7 +1291,7 @@ xfs_file_last_byte( xfs_fileoff_t size_last_block; int error; - ASSERT(ismrlocked(&(ip->i_iolock), MR_UPDATE | MR_ACCESS)); + ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL|XFS_IOLOCK_SHARED)); mp = ip->i_mount; /* @@ -1402,7 +1402,7 @@ xfs_itruncate_start( bhv_vnode_t *vp; int error = 0; - ASSERT(ismrlocked(&ip->i_iolock, MR_UPDATE) != 0); + ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL)); ASSERT((new_size == 0) || (new_size <= ip->i_size)); ASSERT((flags == XFS_ITRUNC_DEFINITE) || (flags == XFS_ITRUNC_MAYBE)); @@ -1529,8 +1529,7 @@ xfs_itruncate_finish( xfs_bmap_free_t free_list; int error; - ASSERT(ismrlocked(&ip->i_iolock, MR_UPDATE) != 0); - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE) != 0); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL)); ASSERT((new_size == 0) || (new_size <= ip->i_size)); ASSERT(*tp != NULL); ASSERT((*tp)->t_flags & XFS_TRANS_PERM_LOG_RES); @@ -1795,8 +1794,7 @@ xfs_igrow_start( xfs_fsize_t new_size, cred_t *credp) { - ASSERT(ismrlocked(&(ip->i_lock), MR_UPDATE) != 0); - ASSERT(ismrlocked(&(ip->i_iolock), MR_UPDATE) != 0); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL)); ASSERT(new_size > ip->i_size); /* @@ -1824,8 +1822,7 @@ xfs_igrow_finish( xfs_fsize_t new_size, int change_flag) { - ASSERT(ismrlocked(&(ip->i_lock), MR_UPDATE) != 0); - ASSERT(ismrlocked(&(ip->i_iolock), MR_UPDATE) != 0); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL)); ASSERT(ip->i_transp == tp); ASSERT(new_size > ip->i_size); @@ 
-2302,7 +2299,7 @@ xfs_ifree( xfs_dinode_t *dip; xfs_buf_t *ibp; - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); ASSERT(ip->i_transp == tp); ASSERT(ip->i_d.di_nlink == 0); ASSERT(ip->i_d.di_nextents == 0); @@ -2707,8 +2704,6 @@ xfs_idestroy( } if (ip->i_afp) xfs_idestroy_fork(ip, XFS_ATTR_FORK); - mrfree(&ip->i_lock); - mrfree(&ip->i_iolock); freesema(&ip->i_flock); #ifdef XFS_INODE_TRACE @@ -2761,7 +2756,7 @@ void xfs_ipin( xfs_inode_t *ip) { - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); atomic_inc(&ip->i_pincount); } @@ -2794,7 +2789,7 @@ __xfs_iunpin_wait( { xfs_inode_log_item_t *iip = ip->i_itemp; - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE | MR_ACCESS)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_ILOCK_SHARED)); if (atomic_read(&ip->i_pincount) == 0) return; @@ -2844,7 +2839,7 @@ xfs_iextents_copy( xfs_fsblock_t start_block; ifp = XFS_IFORK_PTR(ip, whichfork); - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE|MR_ACCESS)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_ILOCK_SHARED)); ASSERT(ifp->if_bytes > 0); nrecs = ifp->if_bytes / (uint)sizeof(xfs_bmbt_rec_t); @@ -3149,7 +3144,7 @@ xfs_iflush( XFS_STATS_INC(xs_iflush_count); - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE|MR_ACCESS)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_ILOCK_SHARED)); ASSERT(issemalocked(&(ip->i_flock))); ASSERT(ip->i_d.di_format != XFS_DINODE_FMT_BTREE || ip->i_d.di_nextents > ip->i_df.if_ext_max); @@ -3314,7 +3309,7 @@ xfs_iflush_int( int first; #endif - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE|MR_ACCESS)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_ILOCK_SHARED)); ASSERT(issemalocked(&(ip->i_flock))); ASSERT(ip->i_d.di_format != XFS_DINODE_FMT_BTREE || ip->i_d.di_nextents > ip->i_df.if_ext_max); Index: linux-2.6-xfs/fs/xfs/linux-2.6/xfs_lrw.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/linux-2.6/xfs_lrw.c 2008-03-20 09:38:53.000000000 +0100 +++ 
linux-2.6-xfs/fs/xfs/linux-2.6/xfs_lrw.c 2008-03-20 09:40:35.000000000 +0100 @@ -393,7 +393,7 @@ xfs_zero_last_block( int error = 0; xfs_bmbt_irec_t imap; - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE) != 0); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); zero_offset = XFS_B_FSB_OFFSET(mp, isize); if (zero_offset == 0) { @@ -424,14 +424,14 @@ xfs_zero_last_block( * out sync. We need to drop the ilock while we do this so we * don't deadlock when the buffer cache calls back to us. */ - xfs_iunlock(ip, XFS_ILOCK_EXCL| XFS_EXTSIZE_RD); + xfs_iunlock(ip, XFS_ILOCK_EXCL); zero_len = mp->m_sb.sb_blocksize - zero_offset; if (isize + zero_len > offset) zero_len = offset - isize; error = xfs_iozero(ip, isize, zero_len); - xfs_ilock(ip, XFS_ILOCK_EXCL|XFS_EXTSIZE_RD); + xfs_ilock(ip, XFS_ILOCK_EXCL); ASSERT(error >= 0); return error; } @@ -464,8 +464,7 @@ xfs_zero_eof( int error = 0; xfs_bmbt_irec_t imap; - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE)); - ASSERT(ismrlocked(&ip->i_iolock, MR_UPDATE)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL)); ASSERT(offset > isize); /* @@ -474,8 +473,7 @@ xfs_zero_eof( */ error = xfs_zero_last_block(ip, offset, isize); if (error) { - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE)); - ASSERT(ismrlocked(&ip->i_iolock, MR_UPDATE)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL)); return error; } @@ -506,8 +504,7 @@ xfs_zero_eof( error = xfs_bmapi(NULL, ip, start_zero_fsb, zero_count_fsb, 0, NULL, 0, &imap, &nimaps, NULL, NULL); if (error) { - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE)); - ASSERT(ismrlocked(&ip->i_iolock, MR_UPDATE)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL)); return error; } ASSERT(nimaps > 0); @@ -531,7 +528,7 @@ xfs_zero_eof( * Drop the inode lock while we're doing the I/O. * We'll still have the iolock to protect us. 
*/ - xfs_iunlock(ip, XFS_ILOCK_EXCL|XFS_EXTSIZE_RD); + xfs_iunlock(ip, XFS_ILOCK_EXCL); zero_off = XFS_FSB_TO_B(mp, start_zero_fsb); zero_len = XFS_FSB_TO_B(mp, imap.br_blockcount); @@ -547,13 +544,13 @@ xfs_zero_eof( start_zero_fsb = imap.br_startoff + imap.br_blockcount; ASSERT(start_zero_fsb <= (end_zero_fsb + 1)); - xfs_ilock(ip, XFS_ILOCK_EXCL|XFS_EXTSIZE_RD); + xfs_ilock(ip, XFS_ILOCK_EXCL); } return 0; out_lock: - xfs_ilock(ip, XFS_ILOCK_EXCL|XFS_EXTSIZE_RD); + xfs_ilock(ip, XFS_ILOCK_EXCL); ASSERT(error >= 0); return error; } Index: linux-2.6-xfs/fs/xfs/quota/xfs_dquot.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/quota/xfs_dquot.c 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/quota/xfs_dquot.c 2008-03-20 09:40:35.000000000 +0100 @@ -933,7 +933,7 @@ xfs_qm_dqget( type == XFS_DQ_PROJ || type == XFS_DQ_GROUP); if (ip) { - ASSERT(XFS_ISLOCKED_INODE_EXCL(ip)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); if (type == XFS_DQ_USER) ASSERT(ip->i_udquot == NULL); else @@ -1088,7 +1088,7 @@ xfs_qm_dqget( xfs_qm_mplist_unlock(mp); XFS_DQ_HASH_UNLOCK(h); dqret: - ASSERT((ip == NULL) || XFS_ISLOCKED_INODE_EXCL(ip)); + ASSERT((ip == NULL) || xfs_isilocked(ip, XFS_ILOCK_EXCL)); xfs_dqtrace_entry(dqp, "DQGET DONE"); *O_dqpp = dqp; return (0); Index: linux-2.6-xfs/fs/xfs/quota/xfs_qm.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/quota/xfs_qm.c 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/quota/xfs_qm.c 2008-03-20 09:40:35.000000000 +0100 @@ -670,7 +670,7 @@ xfs_qm_dqattach_one( xfs_dquot_t *dqp; int error; - ASSERT(XFS_ISLOCKED_INODE_EXCL(ip)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); error = 0; /* * See if we already have it in the inode itself. IO_idqpp is @@ -874,7 +874,7 @@ xfs_qm_dqattach( return 0; ASSERT((flags & XFS_QMOPT_ILOCKED) == 0 || - XFS_ISLOCKED_INODE_EXCL(ip)); + xfs_isilocked(ip, XFS_ILOCK_EXCL)); if (! 
(flags & XFS_QMOPT_ILOCKED)) xfs_ilock(ip, XFS_ILOCK_EXCL); @@ -888,7 +888,8 @@ xfs_qm_dqattach( goto done; nquotas++; } - ASSERT(XFS_ISLOCKED_INODE_EXCL(ip)); + + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); if (XFS_IS_OQUOTA_ON(mp)) { error = XFS_IS_GQUOTA_ON(mp) ? xfs_qm_dqattach_one(ip, ip->i_d.di_gid, XFS_DQ_GROUP, @@ -913,7 +914,7 @@ xfs_qm_dqattach( * This WON'T, in general, result in a thrash. */ if (nquotas == 2) { - ASSERT(XFS_ISLOCKED_INODE_EXCL(ip)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); ASSERT(ip->i_udquot); ASSERT(ip->i_gdquot); @@ -956,7 +957,7 @@ xfs_qm_dqattach( #ifdef QUOTADEBUG else - ASSERT(XFS_ISLOCKED_INODE_EXCL(ip)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); #endif return error; } @@ -1291,7 +1292,7 @@ xfs_qm_dqget_noattach( xfs_mount_t *mp; xfs_dquot_t *udqp, *gdqp; - ASSERT(XFS_ISLOCKED_INODE_EXCL(ip)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); mp = ip->i_mount; udqp = NULL; gdqp = NULL; @@ -1392,7 +1393,7 @@ xfs_qm_qino_alloc( * Keep an extra reference to this quota inode. This inode is * locked exclusively and joined to the transaction already. */ - ASSERT(XFS_ISLOCKED_INODE_EXCL(*ip)); + ASSERT(xfs_isilocked(*ip, XFS_ILOCK_EXCL)); VN_HOLD(XFS_ITOV((*ip))); /* @@ -2549,7 +2550,7 @@ xfs_qm_vop_chown( uint bfield = XFS_IS_REALTIME_INODE(ip) ? 
XFS_TRANS_DQ_RTBCOUNT : XFS_TRANS_DQ_BCOUNT; - ASSERT(XFS_ISLOCKED_INODE_EXCL(ip)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); ASSERT(XFS_IS_QUOTA_RUNNING(ip->i_mount)); /* old dquot */ @@ -2593,7 +2594,7 @@ xfs_qm_vop_chown_reserve( uint delblks, blkflags, prjflags = 0; xfs_dquot_t *unresudq, *unresgdq, *delblksudq, *delblksgdq; - ASSERT(XFS_ISLOCKED_INODE(ip)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_ILOCK_SHARED)); mp = ip->i_mount; ASSERT(XFS_IS_QUOTA_RUNNING(mp)); @@ -2703,7 +2704,7 @@ xfs_qm_vop_dqattach_and_dqmod_newinode( if (!XFS_IS_QUOTA_ON(tp->t_mountp)) return; - ASSERT(XFS_ISLOCKED_INODE_EXCL(ip)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); ASSERT(XFS_IS_QUOTA_RUNNING(tp->t_mountp)); if (udqp) { Index: linux-2.6-xfs/fs/xfs/quota/xfs_quota_priv.h =================================================================== --- linux-2.6-xfs.orig/fs/xfs/quota/xfs_quota_priv.h 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/quota/xfs_quota_priv.h 2008-03-20 09:40:35.000000000 +0100 @@ -27,11 +27,6 @@ /* Number of dquots that fit in to a dquot block */ #define XFS_QM_DQPERBLK(mp) ((mp)->m_quotainfo->qi_dqperchunk) -#define XFS_ISLOCKED_INODE(ip) (ismrlocked(&(ip)->i_lock, \ - MR_UPDATE | MR_ACCESS) != 0) -#define XFS_ISLOCKED_INODE_EXCL(ip) (ismrlocked(&(ip)->i_lock, \ - MR_UPDATE) != 0) - #define XFS_DQ_IS_ADDEDTO_TRX(t, d) ((d)->q_transp == (t)) #define XFS_QI_MPLRECLAIMS(mp) ((mp)->m_quotainfo->qi_dqreclaims) Index: linux-2.6-xfs/fs/xfs/quota/xfs_trans_dquot.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/quota/xfs_trans_dquot.c 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/quota/xfs_trans_dquot.c 2008-03-20 09:40:35.000000000 +0100 @@ -834,7 +834,7 @@ xfs_trans_reserve_quota_nblks( ASSERT(ip->i_ino != mp->m_sb.sb_uquotino); ASSERT(ip->i_ino != mp->m_sb.sb_gquotino); - ASSERT(XFS_ISLOCKED_INODE_EXCL(ip)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); 
ASSERT(XFS_IS_QUOTA_RUNNING(ip->i_mount)); ASSERT((flags & ~(XFS_QMOPT_FORCE_RES | XFS_QMOPT_ENOSPC)) == XFS_TRANS_DQ_RES_RTBLKS || Index: linux-2.6-xfs/fs/xfs/xfs_bmap.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_bmap.c 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_bmap.c 2008-03-20 09:40:35.000000000 +0100 @@ -4075,7 +4075,6 @@ xfs_bmap_add_attrfork( error2: xfs_bmap_cancel(&flist); error1: - ASSERT(ismrlocked(&ip->i_lock,MR_UPDATE)); xfs_iunlock(ip, XFS_ILOCK_EXCL); error0: xfs_trans_cancel(tp, XFS_TRANS_RELEASE_LOG_RES|XFS_TRANS_ABORT); Index: linux-2.6-xfs/fs/xfs/xfs_inode.h =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_inode.h 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_inode.h 2008-03-20 09:40:35.000000000 +0100 @@ -221,8 +221,11 @@ typedef struct xfs_inode { /* Transaction and locking information. */ struct xfs_trans *i_transp; /* ptr to owning transaction*/ struct xfs_inode_log_item *i_itemp; /* logging information */ - mrlock_t i_lock; /* inode lock */ - mrlock_t i_iolock; /* inode IO lock */ + struct rw_semaphore i_lock; /* inode lock */ + struct rw_semaphore i_iolock; /* inode IO lock */ +#ifdef DEBUG + unsigned int i_lock_state; /* i_lock/i_iolock state */ +#endif sema_t i_flock; /* inode flush lock */ atomic_t i_pincount; /* inode pin count */ wait_queue_head_t i_ipin_wait; /* inode pinning wait queue */ @@ -386,20 +389,9 @@ xfs_iflags_test_and_clear(xfs_inode_t *i #define XFS_ILOCK_EXCL (1<<2) #define XFS_ILOCK_SHARED (1<<3) #define XFS_IUNLOCK_NONOTIFY (1<<4) -/* #define XFS_IOLOCK_NESTED (1<<5) */ -#define XFS_EXTENT_TOKEN_RD (1<<6) -#define XFS_SIZE_TOKEN_RD (1<<7) -#define XFS_EXTSIZE_RD (XFS_EXTENT_TOKEN_RD|XFS_SIZE_TOKEN_RD) -#define XFS_WILLLEND (1<<8) /* Always acquire tokens for lending */ -#define XFS_EXTENT_TOKEN_WR (XFS_EXTENT_TOKEN_RD | XFS_WILLLEND) -#define XFS_SIZE_TOKEN_WR 
(XFS_SIZE_TOKEN_RD | XFS_WILLLEND) -#define XFS_EXTSIZE_WR (XFS_EXTSIZE_RD | XFS_WILLLEND) -/* TODO:XFS_SIZE_TOKEN_WANT (1<<9) */ - -#define XFS_LOCK_MASK (XFS_IOLOCK_EXCL | XFS_IOLOCK_SHARED \ - | XFS_ILOCK_EXCL | XFS_ILOCK_SHARED \ - | XFS_EXTENT_TOKEN_RD | XFS_SIZE_TOKEN_RD \ - | XFS_WILLLEND) + +#define XFS_LOCK_MASK (XFS_IOLOCK_EXCL | XFS_IOLOCK_SHARED | \ + XFS_ILOCK_EXCL | XFS_ILOCK_SHARED) /* * Flags for lockdep annotations. @@ -483,6 +475,7 @@ void xfs_ilock(xfs_inode_t *, uint); int xfs_ilock_nowait(xfs_inode_t *, uint); void xfs_iunlock(xfs_inode_t *, uint); void xfs_ilock_demote(xfs_inode_t *, uint); +int xfs_isilocked(xfs_inode_t *, uint); void xfs_iflock(xfs_inode_t *); int xfs_iflock_nowait(xfs_inode_t *); uint xfs_ilock_map_shared(xfs_inode_t *); Index: linux-2.6-xfs/fs/xfs/xfs_inode_item.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_inode_item.c 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_inode_item.c 2008-03-20 09:40:35.000000000 +0100 @@ -546,7 +546,7 @@ STATIC void xfs_inode_item_pin( xfs_inode_log_item_t *iip) { - ASSERT(ismrlocked(&(iip->ili_inode->i_lock), MR_UPDATE)); + ASSERT(xfs_isilocked(iip->ili_inode, XFS_ILOCK_EXCL)); xfs_ipin(iip->ili_inode); } @@ -663,13 +663,13 @@ xfs_inode_item_unlock( ASSERT(iip != NULL); ASSERT(iip->ili_inode->i_itemp != NULL); - ASSERT(ismrlocked(&(iip->ili_inode->i_lock), MR_UPDATE)); + ASSERT(xfs_isilocked(iip->ili_inode, XFS_ILOCK_EXCL)); ASSERT((!(iip->ili_inode->i_itemp->ili_flags & XFS_ILI_IOLOCKED_EXCL)) || - ismrlocked(&(iip->ili_inode->i_iolock), MR_UPDATE)); + xfs_isilocked(iip->ili_inode, XFS_IOLOCK_EXCL)); ASSERT((!(iip->ili_inode->i_itemp->ili_flags & XFS_ILI_IOLOCKED_SHARED)) || - ismrlocked(&(iip->ili_inode->i_iolock), MR_ACCESS)); + xfs_isilocked(iip->ili_inode, XFS_IOLOCK_SHARED)); /* * Clear the transaction pointer in the inode. 
*/ @@ -768,7 +768,7 @@ xfs_inode_item_pushbuf( ip = iip->ili_inode; - ASSERT(ismrlocked(&(ip->i_lock), MR_ACCESS)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_SHARED)); /* * The ili_pushbuf_flag keeps others from @@ -851,7 +851,7 @@ xfs_inode_item_push( ip = iip->ili_inode; - ASSERT(ismrlocked(&(ip->i_lock), MR_ACCESS)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_SHARED)); ASSERT(issemalocked(&(ip->i_flock))); /* * Since we were able to lock the inode's flush lock and Index: linux-2.6-xfs/fs/xfs/xfs_iomap.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_iomap.c 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_iomap.c 2008-03-20 09:40:35.000000000 +0100 @@ -196,14 +196,14 @@ xfs_iomap( break; case BMAPI_WRITE: xfs_iomap_enter_trace(XFS_IOMAP_WRITE_ENTER, ip, offset, count); - lockmode = XFS_ILOCK_EXCL|XFS_EXTSIZE_WR; + lockmode = XFS_ILOCK_EXCL; if (flags & BMAPI_IGNSTATE) bmapi_flags |= XFS_BMAPI_IGSTATE|XFS_BMAPI_ENTIRE; xfs_ilock(ip, lockmode); break; case BMAPI_ALLOCATE: xfs_iomap_enter_trace(XFS_IOMAP_ALLOC_ENTER, ip, offset, count); - lockmode = XFS_ILOCK_SHARED|XFS_EXTSIZE_RD; + lockmode = XFS_ILOCK_SHARED; bmapi_flags = XFS_BMAPI_ENTIRE; /* Attempt non-blocking lock */ @@ -624,7 +624,7 @@ xfs_iomap_write_delay( int prealloc, fsynced = 0; int error; - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE) != 0); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); /* * Make sure that the dquots are there. 
This doesn't hold Index: linux-2.6-xfs/fs/xfs/xfs_trans_inode.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_trans_inode.c 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_trans_inode.c 2008-03-20 09:40:35.000000000 +0100 @@ -111,13 +111,13 @@ xfs_trans_iget( */ ASSERT(ip->i_itemp != NULL); ASSERT(lock_flags & XFS_ILOCK_EXCL); - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); ASSERT((!(lock_flags & XFS_IOLOCK_EXCL)) || - ismrlocked(&ip->i_iolock, MR_UPDATE)); + xfs_isilocked(ip, XFS_IOLOCK_EXCL)); ASSERT((!(lock_flags & XFS_IOLOCK_EXCL)) || (ip->i_itemp->ili_flags & XFS_ILI_IOLOCKED_EXCL)); ASSERT((!(lock_flags & XFS_IOLOCK_SHARED)) || - ismrlocked(&ip->i_iolock, (MR_UPDATE | MR_ACCESS))); + xfs_isilocked(ip, XFS_IOLOCK_EXCL|XFS_IOLOCK_SHARED)); ASSERT((!(lock_flags & XFS_IOLOCK_SHARED)) || (ip->i_itemp->ili_flags & XFS_ILI_IOLOCKED_ANY)); @@ -185,7 +185,7 @@ xfs_trans_ijoin( xfs_inode_log_item_t *iip; ASSERT(ip->i_transp == NULL); - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); ASSERT(lock_flags & XFS_ILOCK_EXCL); if (ip->i_itemp == NULL) xfs_inode_item_init(ip, ip->i_mount); @@ -232,7 +232,7 @@ xfs_trans_ihold( { ASSERT(ip->i_transp == tp); ASSERT(ip->i_itemp != NULL); - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); ip->i_itemp->ili_flags |= XFS_ILI_HOLD; } @@ -257,7 +257,7 @@ xfs_trans_log_inode( ASSERT(ip->i_transp == tp); ASSERT(ip->i_itemp != NULL); - ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); lidp = xfs_trans_find_item(tp, (xfs_log_item_t*)(ip->i_itemp)); ASSERT(lidp != NULL); Index: linux-2.6-xfs/fs/xfs/xfs_utils.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_utils.c 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_utils.c 2008-03-20 
09:40:35.000000000 +0100 @@ -310,7 +310,7 @@ xfs_bump_ino_vers2( { xfs_mount_t *mp; - ASSERT(ismrlocked (&ip->i_lock, MR_UPDATE)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); ASSERT(ip->i_d.di_version == XFS_DINODE_VERSION_1); ip->i_d.di_version = XFS_DINODE_VERSION_2; Index: linux-2.6-xfs/fs/xfs/xfs_vnodeops.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_vnodeops.c 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/xfs_vnodeops.c 2008-03-20 09:40:35.000000000 +0100 @@ -1443,7 +1443,7 @@ xfs_inactive_attrs( int error; xfs_mount_t *mp; - ASSERT(ismrlocked(&ip->i_iolock, MR_UPDATE)); + ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL)); tp = *tpp; mp = ip->i_mount; ASSERT(ip->i_d.di_forkoff != 0); @@ -1900,7 +1900,7 @@ xfs_create( * It is locked (and joined to the transaction). */ - ASSERT(ismrlocked (&ip->i_lock, MR_UPDATE)); + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); /* * Now we join the directory inode to the transaction. We do not do it Index: linux-2.6-xfs/fs/xfs/linux-2.6/xfs_linux.h =================================================================== --- linux-2.6-xfs.orig/fs/xfs/linux-2.6/xfs_linux.h 2008-03-20 09:38:53.000000000 +0100 +++ linux-2.6-xfs/fs/xfs/linux-2.6/xfs_linux.h 2008-03-20 09:40:35.000000000 +0100 @@ -42,7 +42,6 @@ #include #include -#include #include #include #include From owner-xfs@oss.sgi.com Thu Mar 20 13:37:06 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 13:37:14 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2KKb3pD006418 for ; Thu, 20 Mar 2008 13:37:06 -0700 X-ASG-Debug-ID: 1206045456-5b4901bd0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: 
Date: Thu, 20 Mar 2008 16:37:35 -0400
From: "Josef 'Jeff' Sipek"
To: Eric Sandeen
Cc: xfs-oss
Subject: Re: [PATCH] fix xfsqa 049 when using whole scratch device
Message-ID: <20080320203735.GD16357@josefsipek.net>
In-Reply-To: <47E1CCF0.9070904@sandeen.net>

On Wed, Mar 19, 2008 at
09:33:20PM -0500, Eric Sandeen wrote: > if SCRATCH_DEV happens to be a whole device, mke2fs will wait > for confirmation. > > (mke2fs -F would work, too...) Tested; does NOT cause a regression. Josef 'Jeff' Sipek. -- Failure is not an option, It comes bundled with your Microsoft product. From owner-xfs@oss.sgi.com Thu Mar 20 13:36:42 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 13:36:50 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2KKadUQ006316 for ; Thu, 20 Mar 2008 13:36:42 -0700 X-ASG-Debug-ID: 1206045432-5ffa024b0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 2FF676CA968; Thu, 20 Mar 2008 13:37:12 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id 4dRoHL0WRCqau7bU; Thu, 20 Mar 2008 13:37:12 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2KKbA1c030776; Thu, 20 Mar 2008 16:37:10 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id 349501C008A2; Thu, 20 Mar 2008 16:37:12 -0400 (EDT) Date: Thu, 20 Mar 2008 16:37:12 -0400 From: "Josef 'Jeff' Sipek" To: David Chinner Cc: xfs-dev , xfs-oss X-ASG-Orig-Subj: Re: [PATCH] XFSQA 103: filter ln output Subject: Re: [PATCH] XFSQA 103: filter ln output Message-ID: <20080320203712.GC16357@josefsipek.net> References: <20080320063245.GA103491721@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080320063245.GA103491721@sgi.com> User-Agent: 
Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1206045433 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45409 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14965 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs

On Thu, Mar 20, 2008 at 05:32:45PM +1100, David Chinner wrote:
> More recent versions of ln (i.e. debian unstable) have a different
> error output. Update the filter to handle this.
>
> Signed-off-by: Dave Chinner

Fixes the failure I reported a few days ago.

Josef 'Jeff' Sipek.

--
Real Programmers consider "what you see is what you get" to be just as bad a concept in Text Editors as it is in women. No, the Real Programmer wants a "you asked for it, you got it" text editor -- complicated, cryptic, powerful, unforgiving, dangerous.
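For context, the kind of filter being acked here is, in xfstests, a small sed pipeline. The sketch below is illustrative only — the `_filter_ln` name and the sed patterns are assumptions, not the contents of the actual patch — but it shows how the coreutils-version-dependent wording of ln's "File exists" error can be collapsed to one canonical line:

```shell
#!/bin/sh
# Hypothetical sketch of an xfstests-style output filter: normalize the
# old ("ln: creating symbolic link 'a' to 'b': ...") and the new
# ("ln: failed to create symbolic link 'a': ...") error wordings so a
# single golden output file matches both ln versions.  The patterns are
# illustrative, not copied from the actual patch.
_filter_ln()
{
    sed -e "s/ln: creating symbolic link \(.*\) to .*: /ln: XXX \1: /" \
        -e "s/ln: failed to create symbolic link \(.*\): /ln: XXX \1: /"
}

# Both the old and the new error text normalize to the same line:
echo "ln: creating symbolic link 'a' to 'b': File exists" | _filter_ln
echo "ln: failed to create symbolic link 'a': File exists" | _filter_ln
```

Both echo lines come out as `ln: XXX 'a': File exists`, so the golden output no longer depends on which coreutils the test machine carries.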
From owner-xfs@oss.sgi.com Thu Mar 20 13:35:40 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 13:35:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2KKZcv5006179 for ; Thu, 20 Mar 2008 13:35:40 -0700 X-ASG-Debug-ID: 1206045371-601e02280000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 2D4496CA944; Thu, 20 Mar 2008 13:36:11 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id lD4aIoFNoCVPsXFc; Thu, 20 Mar 2008 13:36:11 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2KKa95G030560; Thu, 20 Mar 2008 16:36:09 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id E77C01C008A2; Thu, 20 Mar 2008 16:36:10 -0400 (EDT) Date: Thu, 20 Mar 2008 16:36:10 -0400 From: "Josef 'Jeff' Sipek" To: David Chinner Cc: xfs-dev , xfs-oss X-ASG-Orig-Subj: Re: [PATCH] XFSQA 166: support varying page sizes Subject: Re: [PATCH] XFSQA 166: support varying page sizes Message-ID: <20080320203610.GA16357@josefsipek.net> References: <20080320063713.GC103491721@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080320063713.GC103491721@sgi.com> User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1206045372 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 
X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45409 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14963 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs

On Thu, Mar 20, 2008 at 05:37:13PM +1100, David Chinner wrote:
> Make the filter check the resultant output based on the initial
> written region size. Hence the page size of the machine will not
> affect the output of the filter. Modify the golden output to match.
>
> Signed-off-by: Dave Chinner

This fixes the failure I reported a few days ago.

Josef 'Jeff' Sipek.

--
If I have trouble installing Linux, something is wrong. Very wrong.
- Linus Torvalds From owner-xfs@oss.sgi.com Thu Mar 20 13:36:14 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 20 Mar 2008 13:36:21 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2KKaCxq006249 for ; Thu, 20 Mar 2008 13:36:14 -0700 X-ASG-Debug-ID: 1206045404-600f02310000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id C2E4B6CA956; Thu, 20 Mar 2008 13:36:45 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id uowo99igRCPTebJ0; Thu, 20 Mar 2008 13:36:45 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2KKahbx030687; Thu, 20 Mar 2008 16:36:43 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id C9BC11C008A2; Thu, 20 Mar 2008 16:36:44 -0400 (EDT) Date: Thu, 20 Mar 2008 16:36:44 -0400 From: "Josef 'Jeff' Sipek" To: David Chinner Cc: xfs-dev , xfs-oss X-ASG-Orig-Subj: Re: [PATCH] XFSQA 141: support 64k pagesize Subject: Re: [PATCH] XFSQA 141: support 64k pagesize Message-ID: <20080320203644.GB16357@josefsipek.net> References: <20080320063427.GB103491721@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080320063427.GB103491721@sgi.com> User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1206045405 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 
X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45409 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14964 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs

On Thu, Mar 20, 2008 at 05:34:27PM +1100, David Chinner wrote:
> Make the file larger and read 64k from it instead of 16k so that it
> pulls in a full page from the middle of the file.
>
> Signed-off-by: Dave Chinner

Tested; does NOT cause a regression.

Josef 'Jeff' Sipek.

--
Once you have their hardware. Never give it back. (The First Rule of Hardware Acquisition)

From owner-xfs@oss.sgi.com Fri Mar 21 07:12:01 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 21 Mar 2008 07:12:33 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=BAYES_00,J_CHICKENPOX_43 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2LEBwvs007695 for ; Fri, 21 Mar 2008 07:12:01 -0700 X-ASG-Debug-ID: 1206108749-697e00690000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.wp.pl (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 2DCD66CEB5D for ; Fri, 21 Mar 2008 07:12:30 -0700 (PDT) Received: from mx1.wp.pl (mx1.wp.pl [212.77.101.5]) by cuda.sgi.com with ESMTP id a4vHBfgVxfXhjPoE for ; Fri, 21 Mar 2008 07:12:30 -0700 (PDT) Received: (wp-smtpd smtp.wp.pl 26228 invoked from network); 21 Mar 2008 15:05:48 +0100 Received: from
ip-83-238-22-2.netia.com.pl (HELO lapsg1.open-e.pl) (stf_xl@wp.pl@[83.238.22.2]) (envelope-sender ) by smtp.wp.pl (WP-SMTPD) with AES128-SHA encrypted SMTP for ; 21 Mar 2008 15:05:48 +0100 From: Stanislaw Gruszka To: xfs@oss.sgi.com X-ASG-Orig-Subj: BUG: xfs on linux lvm - lvconvert random hungs when doing i/o Subject: BUG: xfs on linux lvm - lvconvert random hungs when doing i/o Date: Fri, 21 Mar 2008 15:20:16 +0100 User-Agent: KMail/1.9.7 MIME-Version: 1.0 Content-Disposition: inline Message-Id: <200803211520.16398.stf_xl@wp.pl> Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-WP-AV: skaner antywirusowy poczty Wirtualnej Polski S. A. X-WP-SPAM: NO 0000000 [gaOl] X-Barracuda-Connect: mx1.wp.pl[212.77.101.5] X-Barracuda-Start-Time: 1206108751 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -1.52 X-Barracuda-Spam-Status: No, SCORE=-1.52 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=BSF_RULE7568M X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45478 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.50 BSF_RULE7568M Custom Rule 7568M X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14967 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stf_xl@wp.pl Precedence: bulk X-list: xfs

Hello

I have problems using xfs and lvm snapshots on linux-2.6.24. When I use lvconvert to create snapshots while the system is under heavy load, lvconvert and the I/O processes randomly hang. I use the script below to reproduce the problem, but it is very hard to catch this bug.
#!/bin/bash
#set -x

DISK="physical_device"

# clean old stuff
umount /mnt/tmp
for ((j = 0; j < 20; j++)) ; do
    echo -n "Remove $j "
    date
    umount /mnt/m$j
    lvremove -s -f /dev/VG/sn_$j
done
vgchange -a n
vgremove -f VG

# initialization
pvcreate $DISK 2> /dev/null
vgcreate VG $DISK 2> /dev/null
vgchange -a y
lvcreate -L40G -n lv VG
mkdir -p /mnt/tmp
mkfs.xfs /dev/VG/lv
for ((j = 0; j < 20; j++)) ; do
    lvcreate -L512M -n /dev/VG/sn_${j} VG
    mkdir -p /mnt/m$j
done

# test
nloops=10
for ((loop = 0; loop < $nloops; loop++)) ; do
    echo "loop $loop start ... "
    mount /dev/VG/lv /mnt/tmp
    dd if=/dev/urandom of=/mnt/tmp/file_tmp1 bs=1024 &
    load_pid1=$!
    dd if=/dev/urandom of=/mnt/tmp/file_tmp2 bs=1024 &
    load_pid2=$!
    for ((j = 0; j < 20; j++)) ; do
        echo -n "Convert $j "
        date
        lvconvert -s -c512 /dev/VG/lv /dev/VG/sn_$j
        sleep 10
        mount -t xfs -o nouuid,noatime /dev/VG/sn_$j /mnt/m$j
        sync
    done
    for ((j = 0; j < 20; j++)) ; do
        echo -n "Remove $j "
        date
        umount /mnt/m$j
        lvremove -s -f /dev/VG/sn_$j
    done
    kill $load_pid1
    wait $load_pid1
    kill $load_pid2
    wait $load_pid2
    umount /mnt/tmp
    echo "done"
done

Here is sysrq show-blocked-task output of such situation:

SysRq : HELP : loglevel0-8 reBoot Crashdump tErm Full kIll saK showMem Nice powerOff showPc show-all-timers(Q) unRaw Sync showTasks Unmount shoW-blocked-tasks SysRq : Show Blocked State task PC stack pid father xfsdatad/1 D 00000000 0 288 2 f7d1aa90 00000046 00000000 00000000 00000000 00000000 f68be900 c4018a80 ffffffff ea4906f8 f7d1aa90 f744bf0c c05558e6 ea490700 c4015d60 00000000 ea4906f8 00000004 ea490680 ea490680 c0555a2d ea490700 ea490700 f7d1aa90 Call Trace: [] rwsem_down_failed_common+0x76/0x170 [] rwsem_down_write_failed+0x1d/0x24 [] call_rwsem_down_write_failed+0x6/0x8 [] down_write+0x12/0x20 [] xfs_ilock+0x5a/0xa0 [] xfs_setfilesize+0x43/0x130 [] xfs_end_bio_delalloc+0x0/0x20 [] xfs_end_bio_delalloc+0xd/0x20 [] run_workqueue+0x52/0x100 [] prepare_to_wait+0x52/0x70 [] worker_thread+0x7f/0xc0 [] autoremove_wake_function+0x0/0x50 []
autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xc0 [] kthread+0x59/0xa0 [] kthread+0x0/0xa0 [] kernel_thread_helper+0x7/0x10 ======================= pdflush D 00fc61cb 0 7337 2 edc37a90 00000046 f28f9ed8 00fc61cb f28f9ed8 00000282 f7c5de40 c4010a80 f28f9ed8 00fc61cb f28f9f2c 00000000 c0554fd7 00000000 00000002 c07443e4 c07443e4 00fc61cb c012ca40 edc37a90 c0743d80 00000246 c0137370 c4010a80 Call Trace: [] schedule_timeout+0x47/0x90 [] process_timeout+0x0/0x10 [] prepare_to_wait+0x20/0x70 [] io_schedule_timeout+0x1b/0x30 [] congestion_wait+0x7e/0xa0 [] autoremove_wake_function+0x0/0x50 [] sync_sb_inodes+0x141/0x1d0 [] autoremove_wake_function+0x0/0x50 [] writeback_inodes+0x87/0xb0 [] wb_kupdate+0xa3/0x100 [] __pdflush+0xb9/0x170 [] pdflush+0x0/0x30 [] pdflush+0x28/0x30 [] wb_kupdate+0x0/0x100 [] kthread+0x59/0xa0 [] kthread+0x0/0xa0 [] kernel_thread_helper+0x7/0x10 ======================= dd D c4018ab4 0 12113 29734 ee178a90 00000082 ebe70ac0 c4018ab4 00000001 f75a4440 f7d0ee40 c4010a80 f3951bc0 f3951bc8 00000246 ee178a90 c0555b35 00000001 ee178a90 c011eaa0 f3951bcc f3951bcc ee25b160 c04793c7 f3baed80 f3951bc0 00008000 00000000 Call Trace: [] __down+0x75/0xe0 [] default_wake_function+0x0/0x10 [] dm_unplug_all+0x17/0x30 [] __down_failed+0x7/0xc [] blk_backing_dev_unplug+0x0/0x10 [] xfs_buf_lock+0x3c/0x50 [] _xfs_buf_find+0x151/0x1d0 [] kmem_zone_alloc+0x47/0xc0 [] ata_check_status+0x8/0x10 [] xfs_buf_get_flags+0x55/0x130 [] xfs_buf_read_flags+0x1c/0x90 [] xfs_trans_read_buf+0x16f/0x350 [] xfs_itobp+0x7d/0x250 [] find_get_pages_tag+0x38/0x90 [] write_cache_pages+0x11d/0x330 [] xfs_iflush+0x99/0x470 [] xfs_inode_flush+0x127/0x1f0 [] xfs_fs_write_inode+0x22/0x80 [] write_inode+0x4b/0x50 [] __sync_single_inode+0xf0/0x190 [] __writeback_single_inode+0x49/0x1c0 [] del_timer_sync+0xe/0x20 [] prop_fraction_single+0x33/0x60 [] task_dirty_limit+0x46/0xd0 [] sync_sb_inodes+0xde/0x1d0 [] get_dirty_limits+0x13a/0x160 [] writeback_inodes+0xa0/0xb0 [] 
balance_dirty_pages+0x193/0x2c0 [] generic_perform_write+0x142/0x190 [] generic_file_buffered_write+0x87/0x150 [] xfs_write+0x61b/0x8c0 [] __do_softirq+0x75/0xf0 [] smp_apic_timer_interrupt+0x2a/0x40 [] apic_timer_interrupt+0x28/0x30 [] xfs_file_aio_write+0x76/0x90 [] do_sync_write+0xbd/0x110 [] notify_die+0x30/0x40 [] autoremove_wake_function+0x0/0x50 [] atomic_notifier_call_chain+0x17/0x20 [] notify_die+0x30/0x40 [] vfs_write+0x160/0x170 [] sys_write+0x41/0x70 [] syscall_call+0x7/0xb ======================= lvconvert D c4010a80 0 12930 12501 ec09e030 00000082 00000000 c4010a80 ec09e030 ebe70a90 f7c5d580 c4010a80 7fffffff cbd43e38 cbd43de8 00000002 c055501c ec1ea98c 00000292 ec09e030 c011eb9a 00000000 00000292 c0555b8e 00000001 ec09e030 c011eaa0 7fffffff Call Trace: [] schedule_timeout+0x8c/0x90 [] __wake_up_locked+0x1a/0x20 [] __down+0xce/0xe0 [] default_wake_function+0x0/0x10 [] wait_for_common+0xa9/0x140 [] default_wake_function+0x0/0x10 [] default_wake_function+0x0/0x10 [] flush_cpu_workqueue+0x69/0xa0 [] wq_barrier_func+0x0/0x10 [] flush_workqueue+0x2c/0x40 [] xfs_flush_buftarg+0x17/0x120 [] xfs_quiesce_fs+0x16/0x70 [] xfs_attr_quiesce+0x20/0x60 [] xfs_freeze+0x8/0x10 [] freeze_bdev+0x77/0x80 [] lock_fs+0x1b/0x70 [] bdev_set+0x0/0x10 [] dm_suspend+0xc3/0x350 [] default_wake_function+0x0/0x10 [] default_wake_function+0x0/0x10 [] do_suspend+0x7a/0x90 [] dev_suspend+0x0/0x20 [] ctl_ioctl+0xcb/0x130 [] do_ioctl+0x6a/0xa0 [] vfs_ioctl+0x5e/0x1d0 [] sys_ioctl+0x70/0x80 [] syscall_call+0x7/0xb ======================= dd D 00fc61cb 0 12953 29684 f7c92a90 00000082 e99a7c70 00fc61cb e99a7c70 00000286 f69b9580 c4018a80 e99a7c70 00fc61cb e99a7cc4 00000010 c0554fd7 00008000 c0748e44 f7c6a664 f7c6a664 00fc61cb c012ca40 f7c92a90 f7c6a000 00000246 c0137370 c4018a80 Call Trace: [] schedule_timeout+0x47/0x90 [] process_timeout+0x0/0x10 [] prepare_to_wait+0x20/0x70 [] io_schedule_timeout+0x1b/0x30 [] congestion_wait+0x7e/0xa0 [] autoremove_wake_function+0x0/0x50 [] 
get_dirty_limits+0x13a/0x160 [] autoremove_wake_function+0x0/0x50 [] balance_dirty_pages+0xc0/0x2c0 [] generic_perform_write+0x142/0x190 [] generic_file_buffered_write+0x87/0x150 [] xfs_write+0x61b/0x8c0 [] elv_next_request+0x7d/0x150 [] scsi_dispatch_cmd+0x15e/0x290 [] xfs_file_aio_write+0x76/0x90 [] do_sync_write+0xbd/0x110 [] autoremove_wake_function+0x0/0x50 [] run_timer_softirq+0x30/0x180 [] tick_do_periodic_broadcast+0x1f/0x30 [] notify_die+0x30/0x40 [] tick_handle_periodic_broadcast+0xd/0x50 [] vfs_write+0x160/0x170 [] sys_write+0x41/0x70 [] syscall_call+0x7/0xb =======================

I also have a full memory dump of the hung situation, so I can provide the values of interesting variables (xfs_buf, xfs_inode) if you want. Please tell me if you want any other info, such as the Linux .config, etc.

Currently I am trying to reproduce the bug with the CONFIG_XFS_DEBUG and CONFIG_XFS_TRACE options enabled, and with the following tracing options:

#define XFS_ALLOC_TRACE 0
#define XFS_ATTR_TRACE 0
#define XFS_BLI_TRACE 0
#define XFS_BMAP_TRACE 0
#define XFS_BMBT_TRACE 0
#define XFS_DIR2_TRACE 0
#define XFS_DQUOT_TRACE 0
#define XFS_ILOCK_TRACE 1
#define XFS_LOG_TRACE 0
#define XFS_RW_TRACE 1
#define XFS_BUF_TRACE 1
#define XFS_VNODE_TRACE 0
#define XFS_FILESTREAMS_TRACE 0

I hope to provide more valuable information soon to help fix this problem. I would also like to ask whether you have any suggestions for reproducing the bug, because my script needs to run for hours or even days before processes hang.
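One practical way to shorten those multi-hour reproduction runs is to wrap each lvconvert/mount step of the loop in a watchdog, so a stalled step is reported, and the blocked-task dump captured, the moment it exceeds a time limit. A minimal sketch, assuming a POSIX shell; the `run_with_watchdog` helper and the 120-second limit are invented for illustration, not taken from the report:

```shell
#!/bin/sh
# Hypothetical watchdog wrapper, not part of the original script: run the
# command in the background while a timer subshell sleeps for the given
# limit.  If the command is still alive when the timer fires, report the
# hang and trigger the kernel's blocked-task dump (SysRq 'w') right away
# instead of discovering the hang hours later.
run_with_watchdog()
{
    limit=$1; shift
    "$@" &
    cmd_pid=$!
    (
        sleep "$limit"
        if kill -0 "$cmd_pid" 2>/dev/null; then
            echo "HUNG after ${limit}s: $*" >&2
            # best effort; needs root and a writable /proc/sysrq-trigger
            echo w 2>/dev/null > /proc/sysrq-trigger
        fi
    ) >/dev/null &
    timer_pid=$!
    wait "$cmd_pid"       # propagate the command's exit status
    status=$?
    kill "$timer_pid" 2>/dev/null
    return $status
}

# In the reproduction loop one would write, e.g.:
#   run_with_watchdog 120 lvconvert -s -c512 /dev/VG/lv /dev/VG/sn_$j
```

A task stuck in D state cannot be killed, so the wrapper only reports and keeps waiting; the value is that the SysRq dump is taken at the moment of the hang rather than after hours of wall-clock time.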
Regards Stanislaw Gruszka From owner-xfs@oss.sgi.com Fri Mar 21 08:04:21 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 21 Mar 2008 08:04:29 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2LF4JJX010105 for ; Fri, 21 Mar 2008 08:04:21 -0700 X-ASG-Debug-ID: 1206111892-3cd600890000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 435CC6CEDC3 for ; Fri, 21 Mar 2008 08:04:52 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id B3TbHZdi2jYce8g8 for ; Fri, 21 Mar 2008 08:04:52 -0700 (PDT) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 102F218004B4A for ; Fri, 21 Mar 2008 10:04:51 -0500 (CDT) Message-ID: <47E3CE92.20803@sandeen.net> Date: Fri, 21 Mar 2008 10:04:50 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: xfs-oss X-ASG-Orig-Subj: FYI: xfs problems in Fedora 8 updates Subject: FYI: xfs problems in Fedora 8 updates Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1206111893 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45482 Rule breakdown 
below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14968 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs https://bugzilla.redhat.com/show_bug.cgi?id=437968 Bugzilla Bug 437968: Corrupt xfs root filesystem with kernel kernel-2.6.24.3-xx Just to give the sgi guys a heads up, 2 people have seen this now. I know it's a distro kernel but fedora is generally reasonably close to upstream. I'm looking into it but just wanted to put this on the list, too. Thanks, -Eric From owner-xfs@oss.sgi.com Fri Mar 21 11:03:12 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 21 Mar 2008 11:03:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_53 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2LI3A2i021089 for ; Fri, 21 Mar 2008 11:03:12 -0700 X-ASG-Debug-ID: 1206122622-775b007b0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 1C9E86D051B for ; Fri, 21 Mar 2008 11:03:42 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id jy4eiIOPoQZbSrgp for ; Fri, 21 Mar 2008 11:03:42 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2LI3bnJ026737; Fri, 21 Mar 2008 14:03:37 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id 1D2FA1C008A2; Fri, 21 Mar 
2008 14:03:38 -0400 (EDT) Date: Fri, 21 Mar 2008 14:03:38 -0400 From: "Josef 'Jeff' Sipek" To: Christoph Hellwig Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: [PATCH] kill mrlock_t Subject: Re: [PATCH] kill mrlock_t Message-ID: <20080321180338.GB5433@josefsipek.net> References: <20080320093940.GA28966@lst.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080320093940.GA28966@lst.de> User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1206122624 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45494 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14969 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs On Thu, Mar 20, 2008 at 10:39:40AM +0100, Christoph Hellwig wrote: > XFS inodes are locked via the xfs_ilock family of functions which > internally use a rw_semaphore wrapper into an abstraction called > mrlock_t. The mrlock_t should be purely internal to xfs_ilock functions > but leaks through to the callers via various lock state asserts. 
>
> This patch:
>
>  - adds a new xfs_isilocked abstraction to make the lock state asserts
>    fit into the xfs_ilock API family
>  - opencodes the mrlock wrappers in the xfs_ilock family of functions
>  - makes the state tracking debug-only and merged into a single state
>    word
>  - removes superfluous flags to the xfs_ilock family of functions
>
> This kills 8 bytes per inode for non-debug builds, which would e.g.
> be the space for ACL caching on 32bit systems.

Nice. I do NOT see anything obviously wrong with the patch.

Josef 'Jeff' Sipek.

--
"Memory is like gasoline. You use it up when you are running. Of course you get it all back when you reboot..."; Actual explanation obtained from the Micro$oft help desk.

From owner-xfs@oss.sgi.com Fri Mar 21 11:06:13 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 21 Mar 2008 11:06:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2LI6C3N021397 for ; Fri, 21 Mar 2008 11:06:13 -0700 X-ASG-Debug-ID: 1206122805-459300b80000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 73EE0FF89FA for ; Fri, 21 Mar 2008 11:06:45 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id cJGO5RCwKmz2AwyP for ; Fri, 21 Mar 2008 11:06:45 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2LI6gh8027171; Fri, 21 Mar 2008 14:06:42 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id B5A031C008A2; Fri, 21 Mar 2008 14:06:43 -0400 (EDT) Date: Fri, 21 Mar 2008 14:06:43 -0400
From: "Josef 'Jeff' Sipek" To: Christoph Hellwig Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: [PATCH] cleanup root inode handling in xfs_fs_fill_super Subject: Re: [PATCH] cleanup root inode handling in xfs_fs_fill_super Message-ID: <20080321180643.GC5433@josefsipek.net> References: <20080319204724.GA24271@lst.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080319204724.GA24271@lst.de> User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1206122806 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45495 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14970 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs

On Wed, Mar 19, 2008 at 09:47:24PM +0100, Christoph Hellwig wrote:
> - rename rootvp to root for clarity
> - remove useless vn_to_inode call
> - check is_bad_inode before calling d_alloc_root
> - use iput instead of VN_RELE in the error case

Looks good.

Josef 'Jeff' Sipek.

--
FORTUNE PROVIDES QUESTIONS FOR THE GREAT ANSWERS: #19 A: To be or not to be. Q: What is the square root of 4b^2?
From owner-xfs@oss.sgi.com Fri Mar 21 11:52:49 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 21 Mar 2008 11:52:58 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2LIqmMu023473 for ; Fri, 21 Mar 2008 11:52:49 -0700 X-ASG-Debug-ID: 1206125600-457402590000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 8B088FFD25B for ; Fri, 21 Mar 2008 11:53:20 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id CVlC0hp06FlLoIDi for ; Fri, 21 Mar 2008 11:53:20 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2LHjtge022276; Fri, 21 Mar 2008 13:45:55 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id 6DF501C008A2; Fri, 21 Mar 2008 13:45:56 -0400 (EDT) Date: Fri, 21 Mar 2008 13:45:56 -0400 From: "Josef 'Jeff' Sipek" To: Stanislaw Gruszka Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: BUG: xfs on linux lvm - lvconvert random hungs when doing i/o Subject: Re: BUG: xfs on linux lvm - lvconvert random hungs when doing i/o Message-ID: <20080321174556.GA5433@josefsipek.net> References: <200803211520.16398.stf_xl@wp.pl> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200803211520.16398.stf_xl@wp.pl> User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1206125601 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com 
X-Barracuda-Spam-Score: -1.52 X-Barracuda-Spam-Status: No, SCORE=-1.52 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=BSF_RULE7568M X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45497 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.50 BSF_RULE7568M Custom Rule 7568M X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14971 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs

On Fri, Mar 21, 2008 at 03:20:16PM +0100, Stanislaw Gruszka wrote:

Interesting; I've noticed a similar hang (based on my non-expert inspection of your backtraces) which went away as suddenly as it appeared. I wasn't making snapshots or doing any other LVM operation at the time. It just happened - the logs didn't contain anything.

It's an LVM setup with 2 disks => 1 LV with XFS. It's a somewhat dated version of XFS (2.6.24-rc7 vanilla).

...

> Here is sysrq show-blocked-task output of such situation:

I did the same...
SysRq : Show Blocked State task PC stack pid father postgres D 0003469a 0 3435 3432 df072d90 00200096 3f8d021a 0003469a f76c8e20 00000000 0002c7fb 00000000 3f8fca15 0003469a dfec20b4 dfec2f68 dfc308ec df072dd8 c032274f 00000001 dfec20b4 00000001 00200246 dfec20b4 dfec20b4 00200282 df072dc4 00000001 Call Trace: [] schedule_timeout+0x69/0xa2 [] xlog_state_sync+0xf0/0x1ef [xfs] [] _xfs_log_force+0x58/0x5f [xfs] [] _xfs_trans_commit+0x2a0/0x371 [xfs] [] xfs_fsync+0x13a/0x1d0 [xfs] [] xfs_file_fsync+0x6f/0x79 [xfs] [] do_fsync+0x5c/0x93 [] __do_fsync+0x20/0x2f [] sys_fsync+0xd/0xf [] sysenter_past_esp+0x5f/0xa5 ======================= postgres D 0003469a 0 3437 3432 df0d4ce0 00200096 ee0ef4db 0003469a f7a9bd9c 00000000 00671954 00000000 ee760e2f 0003469a fffeffff f7a9bd98 f76ad0e0 df0d4d08 c0323b86 df0d4d10 f7a9bd9c f7a9bdb8 f76ad0e0 e94642a8 f7a9bd98 00000000 00000000 df0d4d24 Call Trace: [] rwsem_down_failed_common+0x66/0x15f [] rwsem_down_read_failed+0x1d/0x28 [] call_rwsem_down_read_failed+0x7/0xc [] xfs_ilock+0x2f/0x87 [xfs] [] xfs_access+0x16/0x39 [xfs] [] xfs_vn_permission+0x13/0x17 [xfs] [] permission+0x7b/0xe3 [] vfs_permission+0xf/0x11 [] __link_path_walk+0x71/0xce8 [] link_path_walk+0x44/0xbf [] path_walk+0x18/0x1a [] do_path_lookup+0x78/0x1b0 [] __path_lookup_intent_open+0x44/0x81 [] path_lookup_open+0x21/0x27 [] open_namei+0x5b/0x646 [] do_filp_open+0x26/0x43 [] do_sys_open+0x43/0xcc [] sys_open+0x1c/0x1e [] sysenter_past_esp+0x5f/0xa5 ======================= postgres D dfed43d0 0 3438 3432 df0debc0 00200092 00000000 dfed43d0 c032e760 dfed43d0 df0debb0 f92dd5ea 00200046 dfec2f78 df0dec20 dfc308ec dfec2f68 df0dec08 c032274f 00000001 dfec2f68 00000001 00200246 dfec2f68 dfec2f68 00200282 df0debf4 00000001 Call Trace: [] schedule_timeout+0x69/0xa2 [] xlog_state_sync_all+0xcd/0x1aa [xfs] [] _xfs_log_force+0x40/0x5f [xfs] [] xfs_iget_core+0x404/0x5c4 [xfs] [] xfs_iget+0xaf/0x117 [xfs] [] xfs_trans_iget+0xe1/0x149 [xfs] [] xfs_ialloc+0xb2/0x5ad [xfs] [] 
xfs_dir_ialloc+0x6c/0x2a3 [xfs] [] xfs_create+0x2ba/0x467 [xfs] [] xfs_vn_mknod+0x163/0x29f [xfs] [] xfs_vn_create+0x12/0x14 [xfs] [] vfs_create+0x96/0xe9 [] open_namei+0x535/0x646 [] do_filp_open+0x26/0x43 [] do_sys_open+0x43/0xcc [] sys_open+0x1c/0x1e [] sysenter_past_esp+0x5f/0xa5 ======================= bash D 00000000 0 4251 4155 e3deade8 00000082 00000000 00000000 e8d9d76c 00000000 00000000 00000000 7a4ec347 0003469a fffeffff e8d9d768 e3d7b220 e3deae10 c0323b86 e3deae18 e8d9d76c e8d9d788 e3deaf20 cc9fb000 e8d9d768 00000000 00000000 e3deae2c Call Trace: [] rwsem_down_failed_common+0x66/0x15f [] rwsem_down_read_failed+0x1d/0x28 [] call_rwsem_down_read_failed+0x7/0xc [] xfs_ilock+0x2f/0x87 [xfs] [] xfs_access+0x16/0x39 [xfs] [] xfs_vn_permission+0x13/0x17 [xfs] [] permission+0x7b/0xe3 [] vfs_permission+0xf/0x11 [] may_open+0xa5/0x240 [] open_namei+0x6d/0x646 [] do_filp_open+0x26/0x43 [] do_sys_open+0x43/0xcc [] sys_open+0x1c/0x1e [] sysenter_past_esp+0x5f/0xa5 ======================= bash D dfed49a0 0 15791 4155 e792c9c4 00000096 00000001 dfed49a0 c032e760 00000046 dfed49a0 dfed49a0 00000282 e792c9c4 dfed4998 00000282 dfed49a0 e792c9f0 c0323ede dfbca6f0 00000001 dfbca6f0 c0116f2f dfed49bc dfed49bc dfed4998 dfef0b88 c8901aa8 Call Trace: [] __down+0x86/0xed [] __down_failed+0xa/0x10 [] xfs_buf_lock+0x46/0x49 [xfs] [] xfs_getsb+0x16/0x33 [xfs] [] xfs_trans_getsb+0x36/0x74 [xfs] [] xfs_trans_apply_sb_deltas+0x16/0x46f [xfs] [] _xfs_trans_commit+0x8d/0x371 [xfs] [] xfs_iomap_write_allocate+0x2ce/0x491 [xfs] [] xfs_iomap+0x439/0x484 [xfs] [] xfs_bmap+0x2c/0x32 [xfs] [] xfs_map_blocks+0x38/0x78 [xfs] [] xfs_page_state_convert+0x311/0x79b [xfs] [] xfs_vm_writepage+0x54/0xe0 [xfs] [] __writepage+0xb/0x27 [] write_cache_pages+0x202/0x2e5 [] generic_writepages+0x23/0x2d [] xfs_vm_writepages+0x3d/0x45 [xfs] [] do_writepages+0x26/0x39 [] __filemap_fdatawrite_range+0x66/0x72 [] filemap_fdatawrite+0x26/0x28 [] xfs_flush_pages+0x4d/0x74 [xfs] [] xfs_release+0x12a/0x218 [xfs] [] 
xfs_file_release+0xe/0x12 [xfs] [] __fput+0xb6/0x18f [] fput+0x18/0x1a [] filp_close+0x41/0x67 [] sys_close+0x65/0xa7 [] sysenter_past_esp+0x5f/0xa5 ======================= pdflush D 0003469a 0 27649 2 c2aa4ea4 00000092 3f8e2cc7 0003469a 00000282 c2aa4e94 000089ca 00000000 3f8eb691 0003469a dfed4998 dfed49cc c2aa4eac c2aa4ecc f92dd57f 00000000 ceb64ea0 c0116f2f dfed49e8 dfed49e8 dfed4998 00000020 00000000 c2aa4ed8 Call Trace: [] xfs_buf_wait_unpin+0x7a/0xa8 [xfs] [] xfs_buf_iorequest+0x49/0x70 [xfs] [] xfs_bdstrat_cb+0x4d/0x52 [xfs] [] xfs_bwrite+0x54/0xb7 [xfs] [] xfs_syncsub+0x130/0x2df [xfs] [] xfs_sync+0x3d/0x4f [xfs] [] xfs_fs_write_super+0x1c/0x23 [xfs] [] sync_supers+0x84/0xb2 [] wb_kupdate+0x27/0xd3 [] pdflush+0xc0/0x175 [] kthread+0x38/0x5a [] kernel_thread_helper+0x7/0x10 ======================= amarokapp D 00000000 0 27973 27967 df915bc0 00000092 00000000 00000000 f77854f8 00000000 00000000 00000000 f2938f82 0003469a df915c20 f7f6392c f7f5c258 df915c08 c032274f 00000001 f7f5c258 00000001 00000246 f7f5c258 f7f5c258 00000282 df915bf4 00000001 Call Trace: [] schedule_timeout+0x69/0xa2 [] xlog_state_sync_all+0xcd/0x1aa [xfs] [] _xfs_log_force+0x40/0x5f [xfs] [] xfs_iget_core+0x404/0x5c4 [xfs] [] xfs_iget+0xaf/0x117 [xfs] [] xfs_trans_iget+0xe1/0x149 [xfs] [] xfs_ialloc+0xb2/0x5ad [xfs] [] xfs_dir_ialloc+0x6c/0x2a3 [xfs] [] xfs_create+0x2ba/0x467 [xfs] [] xfs_vn_mknod+0x163/0x29f [xfs] [] xfs_vn_create+0x12/0x14 [xfs] [] vfs_create+0x96/0xe9 [] open_namei+0x535/0x646 [] do_filp_open+0x26/0x43 [] do_sys_open+0x43/0xcc [] sys_open+0x1c/0x1e [] sysenter_past_esp+0x5f/0xa5 ======================= find D 0003469a 0 28000 26670 c899bb88 00000086 9ad0d912 0003469a 00000001 00000046 0009a6ff 00000000 9ada8011 0003469a dffe1b80 00000296 dffe1b88 c899bbb4 c0323ede f3d410a0 00000001 f3d410a0 c0116f2f dffe1ba4 dffe1ba4 dffe1ac0 dffe1ac0 f3a6a880 Call Trace: [] __down+0x86/0xed [] __down_failed+0xa/0x10 [] xfs_buf_iowait+0x4e/0x58 [xfs] [] xfs_buf_iostart+0x7a/0x8b [xfs] 
[] xfs_buf_read_flags+0x51/0x75 [xfs] [] xfs_trans_read_buf+0x3f/0x333 [xfs] [] xfs_itobp+0x70/0x1d1 [xfs] [] xfs_iread+0x79/0x1d9 [xfs] [] xfs_iget_core+0x14c/0x5c4 [xfs] [] xfs_iget+0xaf/0x117 [xfs] [] xfs_dir_lookup_int+0x81/0xec [xfs] [] xfs_lookup+0x56/0x7a [xfs] [] xfs_vn_lookup+0x2f/0x61 [xfs] [] do_lookup+0x12a/0x16a [] __link_path_walk+0x74d/0xce8 [] link_path_walk+0x44/0xbf [] path_walk+0x18/0x1a [] do_path_lookup+0x78/0x1b0 [] __user_walk_fd+0x32/0x4a [] vfs_lstat_fd+0x18/0x3e [] vfs_lstat+0x11/0x13 [] sys_lstat64+0x14/0x28 [] sysenter_past_esp+0x5f/0xa5 ======================= Josef 'Jeff' Sipek. -- It used to be said [...] that AIX looks like one space alien discovered Unix, and described it to another different space alien who then implemented AIX. But their universal translators were broken and they'd had to gesture a lot. - Paul Tomblin From owner-xfs@oss.sgi.com Fri Mar 21 11:56:04 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 21 Mar 2008 11:56:12 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2LIu4Qs023906 for ; Fri, 21 Mar 2008 11:56:04 -0700 X-ASG-Debug-ID: 1206125795-0fcf01560000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from node2.t-mail.cz (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 4F7136D0A85 for ; Fri, 21 Mar 2008 11:56:35 -0700 (PDT) Received: from node2.t-mail.cz (node2.t-mail.cz [62.141.0.167]) by cuda.sgi.com with ESMTP id 8MNyAKohGrBS3mTu for ; Fri, 21 Mar 2008 11:56:35 -0700 (PDT) Received: from localhost (localhost [127.0.0.1]) by bl311.tmo.cz (Postfix) with ESMTP id 9BF75374 for ; Fri, 21 Mar 2008 19:56:33 +0100 (CET) Received: from node2.t-mail.cz ([127.0.0.1]) by localhost (bl311.tmo.cz 
[127.0.0.1]) (amavisd-new, port 10024) with ESMTP id vt5JEWQLM5dg for ; Fri, 21 Mar 2008 19:56:22 +0100 (CET) Received: from dasa-laptop (89-24-45-214.i4g.tmcz.cz [89.24.45.214]) by bl311.tmo.cz (Postfix) with ESMTP id 0A334230 for ; Fri, 21 Mar 2008 19:56:22 +0100 (CET) Received: from [127.0.0.1] (localhost [127.0.0.1]) by dasa-laptop (Postfix) with ESMTP id 952C825D7C for ; Fri, 21 Mar 2008 19:56:19 +0100 (CET) X-ASG-Orig-Subj: serious problem with XFS on nvidia IDE controller Subject: serious problem with XFS on nvidia IDE controller From: Massimiliano Adamo To: xfs@oss.sgi.com Content-Type: text/plain Date: Fri, 21 Mar 2008 19:56:18 +0100 Message-Id: <1206125778.6867.14.camel@dasa-laptop> Mime-Version: 1.0 X-Mailer: Evolution 2.12.1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: node2.t-mail.cz[62.141.0.167] X-Barracuda-Start-Time: 1206125796 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45498 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14972 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: maxadamo@gmail.com Precedence: bulk X-list: xfs Hi all, I have a notebook, an Acer Aspire 5520, using the following controller: 0:06.0 IDE interface: nVidia Corporation Unknown device 0560 (rev a1) dmidecode doesn't recognize this controller, and lshw says almost the same as lspci does.
My kernel is the one provided with Ubuntu 7.10: 2.6.22-14 At present, this controller is not managed by any driver in the Linux kernel, and in order to boot I have to append the string "all_generic_ide" to the grub config file; otherwise the system doesn't boot. As I like all the tools provided with XFS (even if it's missing shrinking capabilities :)), I decided to install Ubuntu on an XFS filesystem. I installed the system at least 3 times in one day before understanding that the problem was the XFS filesystem. After each reboot there were files missing, and when running ldconfig I was getting messages like: the file blabla_blabla.so is truncated, or is corrupted... I had to perform the installation once again, and then use reiser. I also noticed that the "sync" command was always taking a long time to complete. Does anyone have an idea what the problem can be? Does XFS refuse to work with the generic_ata driver? I am using XFS on another 2 computers without any problem. Furthermore, now with reiser I don't have any kind of problem.
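A quick way to check which kernel driver, if any, has claimed the controller is shown below; this is a sketch, assuming a reasonably recent pciutils for `lspci -k` and taking the 00:06.0 device address from the lspci line above:

```shell
# Show the PCI device together with the kernel driver bound to it.
lspci -k -s 00:06.0

# Equivalent check via sysfs: the "driver" symlink, when present,
# points at the driver that claimed the device.
readlink /sys/bus/pci/devices/0000:00:06.0/driver

# Boot messages often explain why a driver did or did not attach.
dmesg | grep -i -e ata -e ide
```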
the following is the output taken from lshw-gtk: product: nVidia Corporation vendor: nVidia Corporation bus info: pci@0000:00:06.0 logical name: scsi0 version: a1 width: 32 bits clock: 66MHz capabilities: ide, Power Management, bus mastering, PCI capabilities listing, Emulated device configuration: driver: ata_generic latency: 0 maxlatency: 1 mingnt: 3 module: ata_generic cheers Massimiliano From owner-xfs@oss.sgi.com Fri Mar 21 12:02:07 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 21 Mar 2008 12:02:14 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2LJ26qm024418 for ; Fri, 21 Mar 2008 12:02:07 -0700 X-ASG-Debug-ID: 1206126159-4ca802270000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 2F2DDFFD59E for ; Fri, 21 Mar 2008 12:02:39 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id moc0R0HaVzHKmsDK for ; Fri, 21 Mar 2008 12:02:39 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2LI8cpJ027413; Fri, 21 Mar 2008 14:08:38 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id 7344F1C008A2; Fri, 21 Mar 2008 14:08:39 -0400 (EDT) Date: Fri, 21 Mar 2008 14:08:39 -0400 From: "Josef 'Jeff' Sipek" To: Christoph Hellwig Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: [PATCH] remove most calls to VN_RELE Subject: Re: [PATCH] remove most calls to VN_RELE Message-ID: <20080321180839.GD5433@josefsipek.net> References: <20080319204914.GB24271@lst.de> MIME-Version: 1.0 Content-Type: text/plain; 
charset=us-ascii Content-Disposition: inline In-Reply-To: <20080319204914.GB24271@lst.de> User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1206126160 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45497 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14973 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs On Wed, Mar 19, 2008 at 09:49:14PM +0100, Christoph Hellwig wrote: > Most VN_RELE calls either directly contain a XFS_ITOV or have the > corresponding xfs_inode already in scope. Use the IRELE helper instead > of VN_RELE to clarify the code. With a little more work we can kill > VN_RELE altogether and define IRELE in terms of iput directly. Looks good. Josef 'Jeff' Sipek. -- You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists. 
- Abbie Hoffman From owner-xfs@oss.sgi.com Fri Mar 21 12:22:38 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 21 Mar 2008 12:22:45 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2LJMavM029725 for ; Fri, 21 Mar 2008 12:22:37 -0700 X-ASG-Debug-ID: 1206127389-7b1702820000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from node2.t-mail.cz (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 6903E6D09F0 for ; Fri, 21 Mar 2008 12:23:09 -0700 (PDT) Received: from node2.t-mail.cz (node2.t-mail.cz [62.141.0.167]) by cuda.sgi.com with ESMTP id AUbH9cCoZMEYYARQ for ; Fri, 21 Mar 2008 12:23:09 -0700 (PDT) Received: from localhost (localhost [127.0.0.1]) by bl311.tmo.cz (Postfix) with ESMTP id D6EBF5AB; Fri, 21 Mar 2008 20:22:37 +0100 (CET) Received: from node2.t-mail.cz ([127.0.0.1]) by localhost (bl311.tmo.cz [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id GdJ6trIHgHN6; Fri, 21 Mar 2008 20:22:26 +0100 (CET) Received: from dasa-laptop (89-24-45-214.i4g.tmcz.cz [89.24.45.214]) by bl311.tmo.cz (Postfix) with ESMTP id 044B95A0; Fri, 21 Mar 2008 20:22:26 +0100 (CET) Received: from [127.0.0.1] (localhost [127.0.0.1]) by dasa-laptop (Postfix) with ESMTP id 3EE7A24B2A; Fri, 21 Mar 2008 20:22:25 +0100 (CET) X-ASG-Orig-Subj: Re: serious problem with XFS on nvidia IDE controller Subject: Re: serious problem with XFS on nvidia IDE controller From: Massimiliano Adamo To: "Josef 'Jeff' Sipek" Cc: xfs@oss.sgi.com In-Reply-To: <20080321190239.GF5433@josefsipek.net> References: <1206125778.6867.14.camel@dasa-laptop> <20080321190239.GF5433@josefsipek.net> Content-Type: text/plain Date: Fri, 21 Mar 2008 20:22:24 +0100 Message-Id: 
<1206127344.6867.22.camel@dasa-laptop> Mime-Version: 1.0 X-Mailer: Evolution 2.12.1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: node2.t-mail.cz[62.141.0.167] X-Barracuda-Start-Time: 1206127390 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45499 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14974 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: maxadamo@gmail.com Precedence: bulk X-list: xfs On Fri, 21/03/2008 at 15.02 -0400, Josef 'Jeff' Sipek wrote: > On Fri, Mar 21, 2008 at 07:56:18PM +0100, Massimiliano Adamo wrote: > ... > > My kernel is the one provided with Ubuntu 7.10: 2.6.22-14 > > > > At present, this controller is not managed by any driver in the linux > > kernel, and in order to boot, I have to append the string > > "all_generic_ide" into grub config file. Otherwise system doesn't boot. > > I have no idea if a newer kernel does support it, but it might be worth a > try. I would have to do this ... but now I have my system installed with reiser :( Furthermore, I left Gentoo, and I am using Ubuntu as I hate recompiling... and I wouldn't recompile a kernel on it. What I can do is boot with the latest "partimage live-cd" (which uses a newer kernel) and try to see if it loads some other driver. > > ... > > I have installed the system at least 3 times in one day, before > > understanding that the problem was XFS filesystem.
> > After each reboot there were file missing, when running ldconfig I was > > getting messages like: the file blabla_blabla.so is truncated, or is > > corrupted... > > Is there anything in dmesg? As I can remember there was nothing really interesting... > We have joy, we have fun, we have Linux on a Sun... very nice signature :) -- Massimiliano From owner-xfs@oss.sgi.com Fri Mar 21 12:52:31 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 21 Mar 2008 12:52:38 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-0.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2LJqTCb030777 for ; Fri, 21 Mar 2008 12:52:31 -0700 X-ASG-Debug-ID: 1206129179-779e03780000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from p02c11o146.mxlogic.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 181FA6D0D9F for ; Fri, 21 Mar 2008 12:53:00 -0700 (PDT) Received: from p02c11o146.mxlogic.net (p02c11o146.mxlogic.net [208.65.144.79]) by cuda.sgi.com with ESMTP id vGIDPYy9iQNMHoJF for ; Fri, 21 Mar 2008 12:53:00 -0700 (PDT) Received: from unknown [64.69.114.147] (EHLO p02c11o146.mxlogic.net) by p02c11o146.mxlogic.net (mxl_mta-5.4.0-3) with ESMTP id c1214e74.2328976304.1626.00-559.p02c11o146.mxlogic.net (envelope-from ); Fri, 21 Mar 2008 13:53:00 -0600 (MDT) Received: from unknown [64.69.114.147] by p02c11o146.mxlogic.net (mxl_mta-5.4.0-3) with SMTP id 09ef3e74.1844534192.3793.00-036.p02c11o146.mxlogic.net (envelope-from ); Fri, 21 Mar 2008 12:29:36 -0600 (MDT) X-MimeOLE: Produced By Microsoft Exchange V6.5 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" X-ASG-Orig-Subj: RE: Duplicate directory entries Subject: RE: Duplicate directory entries Date: Fri, 21 Mar 
2008 14:29:14 -0400 Message-ID: In-Reply-To: <20080320043557.GX95344431@sgi.com> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: Duplicate directory entries Thread-Index: AciKRAWuJ3tNCSlaSNuTaYteF0iJ2wBPSLkQ References: <20080320000254.GC103321673@sgi.com> <20080320043557.GX95344431@sgi.com> From: "Jim Paradis" To: "David Chinner" Cc: X-Spam: [F=0.0100000000; S=0.010(2008031701)] X-MAIL-FROM: X-SOURCE-IP: [64.69.114.147] X-Barracuda-Connect: p02c11o146.mxlogic.net[208.65.144.79] X-Barracuda-Start-Time: 1206129183 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45502 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id m2LJqVCb030779 X-archive-position: 14975 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jparadis@exagrid.com Precedence: bulk X-list: xfs > > >> We recently ran across a situation where we saw two directory entries > > >> that were exactly the same. > > > > > >What kernel version? > > > > 2.6.18. . . . > I don't think this is your problem unless you've only recently upgraded > from an old kernel and your applications do direct I/O..... > > Can you reproduce the problem or provide any information on events > that may have occurred around the time of the duplicates being created? > I suspect that a reproducable test case will be the only way we can track > this down.... No, we have not recently upgraded the kernel. 
Unfortunately, we are not yet able to reproduce the problem at will. It's something we observed on a production system, but there's a lot that goes on on that system. I'll keep trying to narrow it down, though... --jim From owner-xfs@oss.sgi.com Fri Mar 21 13:37:39 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 21 Mar 2008 13:37:47 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2LKbcgW032459 for ; Fri, 21 Mar 2008 13:37:39 -0700 X-ASG-Debug-ID: 1206131890-0a71012b0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id A72B16D168D for ; Fri, 21 Mar 2008 13:38:10 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id FSDRi5w5FxLiF4qU for ; Fri, 21 Mar 2008 13:38:10 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2LJ2cXW005404; Fri, 21 Mar 2008 15:02:38 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id 50B561C008A2; Fri, 21 Mar 2008 15:02:39 -0400 (EDT) Date: Fri, 21 Mar 2008 15:02:39 -0400 From: "Josef 'Jeff' Sipek" To: Massimiliano Adamo Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: serious problem with XFS on nvidia IDE controller Subject: Re: serious problem with XFS on nvidia IDE controller Message-ID: <20080321190239.GF5433@josefsipek.net> References: <1206125778.6867.14.camel@dasa-laptop> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1206125778.6867.14.camel@dasa-laptop> User-Agent: Mutt/1.5.16 (2007-06-11) 
X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1206131891 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45504 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14976 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs On Fri, Mar 21, 2008 at 07:56:18PM +0100, Massimiliano Adamo wrote: ... > My kernel is the one provided with Ubuntu 7.10: 2.6.22-14 > > At present, this controller is not managed by any driver in the linux > kernel, and in order to boot, I have to append the string > "all_generic_ide" into grub config file. Otherwise system doesn't boot. I have no idea if a newer kernel does support it, but it might be worth a try. ... > I have installed the system at least 3 times in one day, before > understanding that the problem was XFS filesystem. > After each reboot there were file missing, when running ldconfig I was > getting messages like: the file blabla_blabla.so is truncated, or is > corrupted... Is there anything in dmesg? Josef 'Jeff' Sipek. -- We have joy, we have fun, we have Linux on a Sun... 
From owner-xfs@oss.sgi.com Fri Mar 21 13:52:00 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 21 Mar 2008 13:52:08 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2LKpx2x000625 for ; Fri, 21 Mar 2008 13:52:00 -0700 X-ASG-Debug-ID: 1206132752-13e6010a0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from node2.t-mail.cz (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 29B346D1385 for ; Fri, 21 Mar 2008 13:52:32 -0700 (PDT) Received: from node2.t-mail.cz (node2.t-mail.cz [62.141.0.167]) by cuda.sgi.com with ESMTP id rLXMasrShgGSVSmW for ; Fri, 21 Mar 2008 13:52:32 -0700 (PDT) Received: from localhost (localhost [127.0.0.1]) by bl311.tmo.cz (Postfix) with ESMTP id D87F24AC; Fri, 21 Mar 2008 21:51:59 +0100 (CET) Received: from node2.t-mail.cz ([127.0.0.1]) by localhost (bl311.tmo.cz [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id wmjIfPCghHdj; Fri, 21 Mar 2008 21:51:58 +0100 (CET) Received: from dasa-laptop (89-24-45-214.i4g.tmcz.cz [89.24.45.214]) by bl311.tmo.cz (Postfix) with ESMTP id 2B534467; Fri, 21 Mar 2008 21:51:58 +0100 (CET) Received: from [127.0.0.1] (localhost [127.0.0.1]) by dasa-laptop (Postfix) with ESMTP id 3DE3424362; Fri, 21 Mar 2008 21:51:56 +0100 (CET) X-ASG-Orig-Subj: Re: serious problem with XFS on nvidia IDE controller Subject: Re: serious problem with XFS on nvidia IDE controller From: Massimiliano Adamo To: "Josef 'Jeff' Sipek" Cc: xfs@oss.sgi.com In-Reply-To: <20080321190239.GF5433@josefsipek.net> References: <1206125778.6867.14.camel@dasa-laptop> <20080321190239.GF5433@josefsipek.net> Content-Type: text/plain Date: Fri, 21 Mar 2008 21:51:55 +0100 Message-Id: <1206132715.6636.6.camel@dasa-laptop> 
Mime-Version: 1.0 X-Mailer: Evolution 2.12.1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: node2.t-mail.cz[62.141.0.167] X-Barracuda-Start-Time: 1206132753 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45506 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14977 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: maxadamo@gmail.com Precedence: bulk X-list: xfs On Fri, 21/03/2008 at 15.02 -0400, Josef 'Jeff' Sipek wrote: > On Fri, Mar 21, 2008 at 07:56:18PM +0100, Massimiliano Adamo wrote: > ... > > My kernel is the one provided with Ubuntu 7.10: 2.6.22-14 > > > > At present, this controller is not managed by any driver in the linux > > kernel, and in order to boot, I have to append the string > > "all_generic_ide" into grub config file. Otherwise system doesn't boot. > > I have no idea if a newer kernel does support it, but it might be worth a > try. > I've booted with a live-cd using a newer kernel (2.6.24), and the controller is now recognized properly (and no longer unknown): description: IDE interface product: MCP67 IDE Controller vendor: nVidia Corporation .... configuration: driver=pata_amd Anyway, the point is that with ata_generic my kernel is working properly with reiser, and is not working with XFS.
-- cheers Massimiliano From owner-xfs@oss.sgi.com Fri Mar 21 13:58:48 2008 Received: with ECARTIS (v1.0.0; list xfs); Fri, 21 Mar 2008 13:58:55 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2LKwkkl001093 for ; Fri, 21 Mar 2008 13:58:48 -0700 X-ASG-Debug-ID: 1206133159-4f7002930000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from node2.t-mail.cz (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id BBB7BFFE695 for ; Fri, 21 Mar 2008 13:59:20 -0700 (PDT) Received: from node2.t-mail.cz (node2.t-mail.cz [62.141.0.167]) by cuda.sgi.com with ESMTP id JlGa9AsH4Uii7QSt for ; Fri, 21 Mar 2008 13:59:20 -0700 (PDT) Received: from localhost (localhost [127.0.0.1]) by bl311.tmo.cz (Postfix) with ESMTP id C3E0F36E; Fri, 21 Mar 2008 21:59:18 +0100 (CET) Received: from node2.t-mail.cz ([127.0.0.1]) by localhost (bl311.tmo.cz [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ca7hsaf6frE6; Fri, 21 Mar 2008 21:59:17 +0100 (CET) Received: from dasa-laptop (89-24-45-214.i4g.tmcz.cz [89.24.45.214]) by bl311.tmo.cz (Postfix) with ESMTP id C7D441BA; Fri, 21 Mar 2008 21:59:16 +0100 (CET) Received: from [127.0.0.1] (localhost [127.0.0.1]) by dasa-laptop (Postfix) with ESMTP id 615CB243CE; Fri, 21 Mar 2008 21:59:15 +0100 (CET) X-ASG-Orig-Subj: Re: serious problem with XFS on nvidia IDE controller Subject: Re: serious problem with XFS on nvidia IDE controller From: Massimiliano Adamo To: "Josef 'Jeff' Sipek" Cc: xfs@oss.sgi.com In-Reply-To: <1206132715.6636.6.camel@dasa-laptop> References: <1206125778.6867.14.camel@dasa-laptop> <20080321190239.GF5433@josefsipek.net> <1206132715.6636.6.camel@dasa-laptop> Content-Type: text/plain Date: Fri, 21 Mar 2008 21:59:15 +0100 
Message-Id: <1206133155.6636.9.camel@dasa-laptop> Mime-Version: 1.0 X-Mailer: Evolution 2.12.1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: node2.t-mail.cz[62.141.0.167] X-Barracuda-Start-Time: 1206133160 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45505 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14978 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: maxadamo@gmail.com Precedence: bulk X-list: xfs On Fri, 21/03/2008 at 21.51 +0100, Massimiliano Adamo wrote: > I've booted with a live-cd using a newer kernel (2.6.24), and the > controller is now recognized properly (and not unknown): > > description: IDE interface > product: MCP67 IDE Controller OK, that was not a kernel problem. It was enough to run update-pciids, even on my kernel :) > configuration: driver=pata_amd This driver does not exist in my kernel. It may be the problem, but the problem does not arise with reiser.
--
Massimiliano

From owner-xfs@oss.sgi.com Fri Mar 21 16:21:12 2008
Subject: Re: serious problem with XFS on nvidia IDE controller
From: Massimiliano Adamo
To: Chris Wedgwood
Cc: xfs@oss.sgi.com
Date: Sat, 22 Mar 2008 00:21:05 +0100
On Fri, 21 Mar 2008 at 16:06 -0700, Chris Wedgwood wrote:
> On Fri, Mar 21, 2008 at 07:56:18PM +0100, Massimiliano Adamo wrote:
>
> > I have a notebook, Acer Aspire 5520 using the following controller:
> > 0:06.0 IDE interface: nVidia Corporation Unknown device 0560 (rev a1)
> > MCP67 IDE
>
> drivers/ata/pata_amd.c got support for that with 05e2867a, so after
> 2.6.19-rc4

You're right, I know that now. I updated the PCI IDs with the "update-pciids" command and found the name of the controller. After that I booted from a live-cd and got pata_amd running, but this module doesn't ship with the Ubuntu kernel (I don't have it)... I'll look into how to add this module, if possible. But I would never recompile a kernel on Ubuntu (I don't have time for this); I left Gentoo to avoid recompiling.

> > drivers/ide/pci/amd74xx.c, 9cbcc5e3, after v2.6.23
>
> > My kernel is the one provided with Ubuntu 7.10: 2.6.22-14
>
> For the sata driver that should work then, I would think. You might
> want to see what the Ubuntu people have to say about that.
Let's say the problem is: "Ubuntu people" are neither hackers nor gurus, and the majority of people just use the default filesystem.

> > I have installed the system at least 3 times in one day, before
> > understanding that the problem was the XFS filesystem.
>
> I doubt it.

What is your doubt? The system is now running fine. I had been reinstalling the system the whole day, and after each reboot I was missing files from the filesystem. There were no messages in dmesg or in the logs. The last attempt was to change the filesystem: I reinstalled the same version of the operating system, just switching to reiser, and everything is now OK. Is there anything else to understand? With XFS the system was broken after 20 minutes.

> > I also noticed that the "sync" command was always taking a long time
> > to complete.
>
> Lots of dirty pages and slow writeout (PIO) might explain that.

Agreed, but why are there no problems like this with reiser?

--
Massimiliano

From owner-xfs@oss.sgi.com Fri Mar 21 19:51:42 2008
Date: Sat, 22 Mar 2008 03:52:01 +0100 (CET)
From: Christian Kujau
To: Alasdair G Kergon
cc: Chr , Milan Broz , David Chinner , LKML , xfs@oss.sgi.com, dm-devel@redhat.com, dm-crypt@saout.de, Herbert Xu , Ritesh Raj Sarraf
Subject: Re: [dm-crypt] INFO: task mount:11202 blocked for more than 120 seconds

On Mon, 17 Mar 2008, Alasdair G Kergon wrote:
> From: Milan Broz
>
> Fix regression in dm-crypt introduced in commit
> 3a7f6c990ad04e6f576a159876c602d14d6f7fef
> (dm crypt: use async crypto).

I finally got around to testing this - and yes, 2.6.25-rc6 with this patch applied makes the hangs go away.
There are other problems now[0], but they don't seem to be related, AFAICT.

Thank you!
Christian.

[0] http://lkml.org/lkml/2008/3/21/408
--
BOFH excuse #197: I'm sorry a pentium won't do, you need an SGI to connect with us.

From owner-xfs@oss.sgi.com Fri Mar 21 22:49:11 2008
Subject: [PATCH] Remove sysv3 legacy functions
From: Nigel
Kukard
To: xfs@oss.sgi.com
Date: Sat, 22 Mar 2008 05:48:55 +0000

Remove legacy sysv3 functions.
-N

--=-zYTwdJq22U6LZJFdfpPz
Content-Disposition: attachment; filename=xfsprogs-2.7.11_sysv3-legacy.patch
Content-Type: text/x-patch; name=xfsprogs-2.7.11_sysv3-legacy.patch; charset=us-ascii
Content-Transfer-Encoding: base64

[Base64-encoded patch attachment: a diff -ru of xfsprogs-2.7.11 replacing the legacy bcopy()/bzero() calls with memmove()/memset(), and updating the comments that mention them, across copy/xfs_copy.c, growfs/xfs_growfs.c, io/bmap.c, libhandle/jdm.c, logprint/log_misc.c, mkfs/proto.c, mkfs/xfs_mkfs.c, repair/agheader.c, repair/attr_repair.c, repair/dinode.c and repair/dir.c. A representative decoded hunk from copy/xfs_copy.c:]

@@ -903,7 +903,7 @@

 		/* save what we need (agf) in the btree buffer */

-		bcopy(ag_hdr.xfs_agf, btree_buf.data, source_sectorsize);
+		memmove(btree_buf.data, ag_hdr.xfs_agf, source_sectorsize);
 		ag_hdr.xfs_agf = (xfs_agf_t *) btree_buf.data;
 		btree_buf.length = source_blocksize;
ZGlyLmMJMjAwNi0wMS0xNyAwMzo0Njo1Mi4wMDAwMDAwMDAgKzAwMDANCisr KyB4ZnNwcm9ncy0yLjcuMTFfc3lzdjMtbGVnYWN5L3JlcGFpci9kaXIuYwky MDA4LTAzLTIxIDE2OjE1OjAyLjAwMDAwMDAwMCArMDAwMA0KQEAgLTMzNCw3 ICszMzQsNyBAQA0KIAkJICogaGFwcGVuZWQuDQogCQkgKi8NCiAJCWlmIChq dW5raXQpICB7DQotCQkJYmNvcHkoc2ZfZW50cnktPm5hbWUsIG5hbWUsIG5h bWVsZW4pOw0KKwkJCW1lbW1vdmUobmFtZSwgc2ZfZW50cnktPm5hbWUsIG5h bWVsZW4pOw0KIAkJCW5hbWVbbmFtZWxlbl0gPSAnXDAnOw0KIA0KIAkJCWlm ICghbm9fbW9kaWZ5KSAgew0KQEAgLTM1Miw3ICszNTIsNyBAQA0KIA0KIAkJ CQlJTlRfTU9EKHNmLT5oZHIuY291bnQsIEFSQ0hfQ09OVkVSVCwgLTEpOw0K IAkJCQludW1fZW50cmllcy0tOw0KLQkJCQliemVybygodm9pZCAqKSAoKF9f cHNpbnRfdCkgc2ZfZW50cnkgKyB0bXBfbGVuKSwNCisJCQkJbWVtc2V0KCh2 b2lkICopICgoX19wc2ludF90KSBzZl9lbnRyeSArIHRtcF9sZW4pLCAwLA0K IAkJCQkJdG1wX2VsZW4pOw0KIA0KIAkJCQkvKg0KQEAgLTUxMSw3ICs1MTEs NyBAQA0KIAlpZiAoKGZyZWVtYXAgPSBtYWxsb2MobXAtPm1fc2Iuc2JfYmxv Y2tzaXplKSkgPT0gTlVMTCkNCiAJCXJldHVybihOVUxMKTsNCiANCi0JYnpl cm8oZnJlZW1hcCwgbXAtPm1fc2Iuc2JfYmxvY2tzaXplL05CQlkpOw0KKwlt ZW1zZXQoZnJlZW1hcCwgMCwgbXAtPm1fc2Iuc2JfYmxvY2tzaXplL05CQlkp Ow0KIA0KIAlyZXR1cm4oZnJlZW1hcCk7DQogfQ0KQEAgLTUyMCw3ICs1MjAs NyBAQA0KIHZvaWQNCiBpbml0X2RhX2ZyZWVtYXAoZGFfZnJlZW1hcF90ICpk aXJfZnJlZW1hcCkNCiB7DQotCWJ6ZXJvKGRpcl9mcmVlbWFwLCBzaXplb2Yo ZGFfZnJlZW1hcF90KSAqIERBX0JNQVBfU0laRSk7DQorCW1lbXNldChkaXJf ZnJlZW1hcCwgMCwgc2l6ZW9mKGRhX2ZyZWVtYXBfdCkgKiBEQV9CTUFQX1NJ WkUpOw0KIH0NCiANCiAvKg0KQEAgLTc1Myw3ICs3NTMsNyBAQA0KIAlkYV9o b2xlX21hcF90CWhvbGVtYXA7DQogDQogCWluaXRfZGFfZnJlZW1hcChkaXJf ZnJlZW1hcCk7DQotCWJ6ZXJvKCZob2xlbWFwLCBzaXplb2YoZGFfaG9sZV9t YXBfdCkpOw0KKwltZW1zZXQoJmhvbGVtYXAsIDAsIHNpemVvZihkYV9ob2xl X21hcF90KSk7DQogDQogCXNldF9kYV9mcmVlbWFwKG1wLCBkaXJfZnJlZW1h cCwgMCwgNTApOw0KIAlzZXRfZGFfZnJlZW1hcChtcCwgZGlyX2ZyZWVtYXAs IDEwMCwgMTI2KTsNCkBAIC0xNTI1LDkgKzE1MjUsOSBAQA0KIAkJCQltZW1t b3ZlKGVudHJ5LCBlbnRyeSArIDEsIChJTlRfR0VUKGhkci0+Y291bnQsIEFS Q0hfQ09OVkVSVCkgLSBpKSAqDQogCQkJCQlzaXplb2YoeGZzX2Rpcl9sZWFm X2VudHJ5X3QpKTsNCiAJCQl9DQotCQkJYnplcm8oKHZvaWQgKikgKChfX3Bz 
aW50X3QpIGVudHJ5ICsNCisJCQltZW1zZXQoKHZvaWQgKikgKChfX3BzaW50 X3QpIGVudHJ5ICsNCiAJCQkJKElOVF9HRVQobGVhZi0+aGRyLmNvdW50LCBB UkNIX0NPTlZFUlQpIC0gaSAtIDEpICoNCi0JCQkJc2l6ZW9mKHhmc19kaXJf bGVhZl9lbnRyeV90KSksDQorCQkJCXNpemVvZih4ZnNfZGlyX2xlYWZfZW50 cnlfdCkpLCAwLA0KIAkJCQlzaXplb2YoeGZzX2Rpcl9sZWFmX2VudHJ5X3Qp KTsNCiANCiAJCQlzdGFydCA9IChfX3BzaW50X3QpICZsZWFmLT5lbnRyaWVz W0lOVF9HRVQoaGRyLT5jb3VudCwgQVJDSF9DT05WRVJUKV0gLQ0KQEAgLTE2 MjQsOSArMTYyNCw5IEBADQogCQkJCQkJKElOVF9HRVQobGVhZi0+aGRyLmNv dW50LCBBUkNIX0NPTlZFUlQpIC0gaSAtIDEpICoNCiAJCQkJCQlzaXplb2Yo eGZzX2Rpcl9sZWFmX2VudHJ5X3QpKTsNCiAJCQkJfQ0KLQkJCQliemVybygo dm9pZCAqKSAoKF9fcHNpbnRfdCkgZW50cnkgKw0KKwkJCQltZW1zZXQoKHZv aWQgKikgKChfX3BzaW50X3QpIGVudHJ5ICsNCiAJCQkJCShJTlRfR0VUKGxl YWYtPmhkci5jb3VudCwgQVJDSF9DT05WRVJUKSAtIGkgLSAxKSAqDQotCQkJ CQlzaXplb2YoeGZzX2Rpcl9sZWFmX2VudHJ5X3QpKSwNCisJCQkJCXNpemVv Zih4ZnNfZGlyX2xlYWZfZW50cnlfdCkpLCAwLA0KIAkJCQkJc2l6ZW9mKHhm c19kaXJfbGVhZl9lbnRyeV90KSk7DQogDQogCQkJCS8qDQpAQCAtMTgyNSwx MSArMTgyNSwxMSBAQA0KIAkJCQkJICAgIHNpemVvZih4ZnNfZGlyX2xlYWZf ZW50cnlfdCkpICB7DQogCQkJCQkJbWVtbW92ZShlbnRyeSwgZW50cnkgKyAx LA0KIAkJCQkJCQlieXRlcyk7DQotCQkJCQkJYnplcm8oKHZvaWQgKikNCi0J CQkJCQkoKF9fcHNpbnRfdCkgZW50cnkgKyBieXRlcyksDQorCQkJCQkJbWVt c2V0KCh2b2lkICopDQorCQkJCQkJKChfX3BzaW50X3QpIGVudHJ5ICsgYnl0 ZXMpLCAwLA0KIAkJCQkJCXNpemVvZih4ZnNfZGlyX2xlYWZfZW50cnlfdCkp Ow0KIAkJCQkJfSBlbHNlICB7DQotCQkJCQkJYnplcm8oZW50cnksDQorCQkJ CQkJbWVtc2V0KGVudHJ5LCAwLA0KIAkJCQkJCXNpemVvZih4ZnNfZGlyX2xl YWZfZW50cnlfdCkpOw0KIAkJCQkJfQ0KIA0KQEAgLTIwNjcsMTEgKzIwNjcs MTEgQEANCiAJCQkJICovDQogCQkJCWlmIChieXRlcyA+IHNpemVvZih4ZnNf ZGlyX2xlYWZfZW50cnlfdCkpICB7DQogCQkJCQltZW1tb3ZlKGVudHJ5LCBl bnRyeSArIDEsIGJ5dGVzKTsNCi0JCQkJCWJ6ZXJvKCh2b2lkICopDQotCQkJ CQkJKChfX3BzaW50X3QpIGVudHJ5ICsgYnl0ZXMpLA0KKwkJCQkJbWVtc2V0 KCh2b2lkICopDQorCQkJCQkJKChfX3BzaW50X3QpIGVudHJ5ICsgYnl0ZXMp LCAwLA0KIAkJCQkJCXNpemVvZih4ZnNfZGlyX2xlYWZfZW50cnlfdCkpOw0K IAkJCQl9IGVsc2UgIHsNCi0JCQkJCWJ6ZXJvKGVudHJ5LA0KKwkJCQkJbWVt 
c2V0KGVudHJ5LCAwLA0KIAkJCQkJCXNpemVvZih4ZnNfZGlyX2xlYWZfZW50 cnlfdCkpOw0KIAkJCQl9DQogDQpAQCAtMjEzNiw3ICsyMTM2LDcgQEANCiAJ CSAqIG1ha2luZyBpdCBpbXBvc3NpYmxlIGZvciB0aGUgc3RvcmVkIGxlbmd0 aA0KIAkJICogdmFsdWUgdG8gYmUgb3V0IG9mIHJhbmdlLg0KIAkJICovDQot CQliY29weShuYW1lc3QtPm5hbWUsIGZuYW1lLCBlbnRyeS0+bmFtZWxlbik7 DQorCQltZW1tb3ZlKGZuYW1lLCBuYW1lc3QtPm5hbWUsIGVudHJ5LT5uYW1l bGVuKTsNCiAJCWZuYW1lW2VudHJ5LT5uYW1lbGVuXSA9ICdcMCc7DQogCQlo YXNodmFsID0gbGlieGZzX2RhX2hhc2huYW1lKGZuYW1lLCBlbnRyeS0+bmFt ZWxlbik7DQogDQpAQCAtMjQ2NSw3ICsyNDY1LDcgQEANCiAJICogKFhGU19E SVJfTEVBRl9NQVBTSVpFICgzKSAqIGJpZ2dlc3QgcmVnaW9ucykNCiAJICog YW5kIHNlZSBpZiB0aGV5IG1hdGNoIHdoYXQncyBpbiB0aGUgYmxvY2sNCiAJ ICovDQotCWJ6ZXJvKCZob2xlbWFwLCBzaXplb2YoZGFfaG9sZV9tYXBfdCkp Ow0KKwltZW1zZXQoJmhvbGVtYXAsIDAsIHNpemVvZihkYV9ob2xlX21hcF90 KSk7DQogCXByb2Nlc3NfZGFfZnJlZW1hcChtcCwgZGlyX2ZyZWVtYXAsICZo b2xlbWFwKTsNCiANCiAJaWYgKHplcm9fbGVuX2VudHJpZXMpICB7DQpAQCAt MjUyMiw3ICsyNTIyLDcgQEANCiAJCQkvKg0KIAkJCSAqIGNvcHkgbGVhZiBi bG9jayBoZWFkZXINCiAJCQkgKi8NCi0JCQliY29weSgmbGVhZi0+aGRyLCAm bmV3X2xlYWYtPmhkciwNCisJCQltZW1tb3ZlKCZuZXdfbGVhZi0+aGRyLCAm bGVhZi0+aGRyLA0KIAkJCQlzaXplb2YoeGZzX2Rpcl9sZWFmX2hkcl90KSk7 DQogDQogCQkJLyoNCkBAIC0yNTY4LDggKzI1NjgsOCBAQA0KIAkJCQlkX2Vu dHJ5LT5uYW1lbGVuID0gc19lbnRyeS0+bmFtZWxlbjsNCiAJCQkJZF9lbnRy eS0+cGFkMiA9IDA7DQogDQotCQkJCWJjb3B5KChjaGFyICopIGxlYWYgKyBJ TlRfR0VUKHNfZW50cnktPm5hbWVpZHgsIEFSQ0hfQ09OVkVSVCksDQotCQkJ CQlmaXJzdF9ieXRlLCBieXRlcyk7DQorCQkJCW1lbW1vdmUoZmlyc3RfYnl0 ZSwgKGNoYXIgKikgbGVhZiArIElOVF9HRVQoc19lbnRyeS0+bmFtZWlkeCwg QVJDSF9DT05WRVJUKSwNCisJCQkJCWJ5dGVzKTsNCiANCiAJCQkJbnVtX2Vu dHJpZXMrKzsNCiAJCQkJZF9lbnRyeSsrOw0KQEAgLTI1ODEsNyArMjU4MSw3 IEBADQogCQkJLyoNCiAJCQkgKiB6ZXJvIHNwYWNlIGJldHdlZW4gZW5kIG9m IHRhYmxlIGFuZCB0b3Agb2YgaGVhcA0KIAkJCSAqLw0KLQkJCWJ6ZXJvKGRf ZW50cnksIChfX3BzaW50X3QpIGZpcnN0X2J5dGUNCisJCQltZW1zZXQoZF9l bnRyeSwgMCwgKF9fcHNpbnRfdCkgZmlyc3RfYnl0ZQ0KIAkJCQkJLSAoX19w c2ludF90KSBkX2VudHJ5KTsNCiANCiAJCQkvKg0KQEAgLTI2MTcsNyArMjYx 
Nyw3IEBADQogCQkJLyoNCiAJCQkgKiBmaW5hbCBzdGVwLCBjb3B5IGJsb2Nr IGJhY2sNCiAJCQkgKi8NCi0JCQliY29weShuZXdfbGVhZiwgbGVhZiwgbXAt Pm1fc2Iuc2JfYmxvY2tzaXplKTsNCisJCQltZW1tb3ZlKGxlYWYsIG5ld19s ZWFmLCBtcC0+bV9zYi5zYl9ibG9ja3NpemUpOw0KIA0KIAkJCSpidWZfZGly dHkgPSAxOw0KIAkJfSBlbHNlICB7DQpAQCAtMjg1Myw3ICsyODUzLDcgQEAN CiAJICogdGhlIHdheS4gIFRoZW4gd2FsayB0aGUgbGVhZiBibG9ja3MgbGVm dC10by1yaWdodCwgY2FsbGluZw0KIAkgKiBhIHBhcmVudC12ZXJpZmljYXRp b24gcm91dGluZSBlYWNoIHRpbWUgd2UgdHJhdmVyc2UgYSBibG9jay4NCiAJ ICovDQotCWJ6ZXJvKCZkYV9jdXJzb3IsIHNpemVvZihkYV9idF9jdXJzb3Jf dCkpOw0KKwltZW1zZXQoJmRhX2N1cnNvciwgMCwgc2l6ZW9mKGRhX2J0X2N1 cnNvcl90KSk7DQogDQogCWRhX2N1cnNvci5hY3RpdmUgPSAwOw0KIAlkYV9j dXJzb3IudHlwZSA9IDA7DQpkaWZmIC1ydSB4ZnNwcm9ncy0yLjcuMTFfdmFu aWxsYS9yZXBhaXIvZGlyMi5jIHhmc3Byb2dzLTIuNy4xMV9zeXN2My1sZWdh Y3kvcmVwYWlyL2RpcjIuYw0KLS0tIHhmc3Byb2dzLTIuNy4xMV92YW5pbGxh L3JlcGFpci9kaXIyLmMJMjAwNi0wMS0xNyAwMzo0Njo1Mi4wMDAwMDAwMDAg KzAwMDANCisrKyB4ZnNwcm9ncy0yLjcuMTFfc3lzdjMtbGVnYWN5L3JlcGFp ci9kaXIyLmMJMjAwOC0wMy0yMSAxNjowNjoyMS4wMDAwMDAwMDAgKzAwMDAN CkBAIC0xMjQsNyArMTI0LDcgQEANCiAJCX0NCiAJCWZvciAoaSA9IG9mZiA9 IDA7IGkgPCBuZXg7IGkrKywgb2ZmICs9IFhGU19CVUZfQ09VTlQoYnApKSB7 DQogCQkJYnAgPSBicGxpc3RbaV07DQotCQkJYmNvcHkoWEZTX0JVRl9QVFIo YnApLCAoY2hhciAqKWRhYnVmLT5kYXRhICsgb2ZmLA0KKwkJCW1lbW1vdmUo KGNoYXIgKilkYWJ1Zi0+ZGF0YSArIG9mZiwgWEZTX0JVRl9QVFIoYnApLA0K IAkJCQlYRlNfQlVGX0NPVU5UKGJwKSk7DQogCQl9DQogCX0NCkBAIC0xNDks NyArMTQ5LDcgQEANCiAJCWRhYnVmLT5kaXJ0eSA9IDA7DQogCQlmb3IgKGk9 b2ZmPTA7IGkgPCBkYWJ1Zi0+bmJ1ZjsgaSsrLCBvZmYgKz0gWEZTX0JVRl9D T1VOVChicCkpIHsNCiAJCQlicCA9IGRhYnVmLT5icHNbaV07DQotCQkJYmNv cHkoKGNoYXIgKilkYWJ1Zi0+ZGF0YSArIG9mZiwgWEZTX0JVRl9QVFIoYnAp LA0KKwkJCW1lbW1vdmUoWEZTX0JVRl9QVFIoYnApLCAoY2hhciAqKWRhYnVm LT5kYXRhICsgb2ZmLA0KIAkJCQlYRlNfQlVGX0NPVU5UKGJwKSk7DQogCQl9 DQogCX0NCkBAIC0xODcsMTAgKzE4NywxMCBAQA0KIAkJCWRvX2Vycm9yKF8o ImNvdWxkbid0IG1hbGxvYyBkaXIyIGJ1ZmZlciBsaXN0XG4iKSk7DQogCQkJ ZXhpdCgxKTsNCiAJCX0NCi0JCWJjb3B5KGRhYnVmLT5icHMsIGJwbGlzdCwg 
bmJ1ZiAqIHNpemVvZigqYnBsaXN0KSk7DQorCQltZW1tb3ZlKGJwbGlzdCwg ZGFidWYtPmJwcywgbmJ1ZiAqIHNpemVvZigqYnBsaXN0KSk7DQogCQlmb3Ig KGkgPSBvZmYgPSAwOyBpIDwgbmJ1ZjsgaSsrLCBvZmYgKz0gWEZTX0JVRl9D T1VOVChicCkpIHsNCiAJCQlicCA9IGJwbGlzdFtpXTsNCi0JCQliY29weSgo Y2hhciAqKWRhYnVmLT5kYXRhICsgb2ZmLCBYRlNfQlVGX1BUUihicCksDQor CQkJbWVtbW92ZShYRlNfQlVGX1BUUihicCksIChjaGFyICopZGFidWYtPmRh dGEgKyBvZmYsDQogCQkJCVhGU19CVUZfQ09VTlQoYnApKTsNCiAJCX0NCiAJ fQ0KQEAgLTIyMyw3ICsyMjMsNyBAQA0KIAkJCWRvX2Vycm9yKF8oImNvdWxk bid0IG1hbGxvYyBkaXIyIGJ1ZmZlciBsaXN0XG4iKSk7DQogCQkJZXhpdCgx KTsNCiAJCX0NCi0JCWJjb3B5KGRhYnVmLT5icHMsIGJwbGlzdCwgbmJ1ZiAq IHNpemVvZigqYnBsaXN0KSk7DQorCQltZW1tb3ZlKGJwbGlzdCwgZGFidWYt PmJwcywgbmJ1ZiAqIHNpemVvZigqYnBsaXN0KSk7DQogCX0NCiAJZGFfYnVm X2RvbmUoZGFidWYpOw0KIAlmb3IgKGkgPSAwOyBpIDwgbmJ1ZjsgaSsrKQ0K QEAgLTEwNzYsNyArMTA3Niw3IEBADQogCQkgKiBoYXBwZW5lZC4NCiAJCSAq Lw0KIAkJaWYgKGp1bmtpdCkgIHsNCi0JCQliY29weShzZmVwLT5uYW1lLCBu YW1lLCBuYW1lbGVuKTsNCisJCQltZW1tb3ZlKG5hbWUsIHNmZXAtPm5hbWUs IG5hbWVsZW4pOw0KIAkJCW5hbWVbbmFtZWxlbl0gPSAnXDAnOw0KIA0KIAkJ CWlmICghbm9fbW9kaWZ5KSAgew0KQEAgLTEwOTUsNyArMTA5NSw3IEBADQog DQogCQkJCUlOVF9NT0Qoc2ZwLT5oZHIuY291bnQsIEFSQ0hfQ09OVkVSVCwg LTEpOw0KIAkJCQludW1fZW50cmllcy0tOw0KLQkJCQliemVybygodm9pZCAq KSAoKF9fcHNpbnRfdCkgc2ZlcCArIHRtcF9sZW4pLA0KKwkJCQltZW1zZXQo KHZvaWQgKikgKChfX3BzaW50X3QpIHNmZXAgKyB0bXBfbGVuKSwgMCwNCiAJ CQkJCXRtcF9lbGVuKTsNCiANCiAJCQkJLyoNCkBAIC0xOTIxLDcgKzE5MjEs NyBAQA0KIAkgKiBUaGVuIHdhbGsgdGhlIGxlYWYgYmxvY2tzIGxlZnQtdG8t cmlnaHQsIGNhbGxpbmcgYSBwYXJlbnQNCiAJICogdmVyaWZpY2F0aW9uIHJv dXRpbmUgZWFjaCB0aW1lIHdlIHRyYXZlcnNlIGEgYmxvY2suDQogCSAqLw0K LQliemVybygmZGFfY3Vyc29yLCBzaXplb2YoZGFfY3Vyc29yKSk7DQorCW1l bXNldCgmZGFfY3Vyc29yLCAwLCBzaXplb2YoZGFfY3Vyc29yKSk7DQogCWRh X2N1cnNvci5pbm8gPSBpbm87DQogCWRhX2N1cnNvci5kaXAgPSBkaXA7DQog CWRhX2N1cnNvci5ibGttYXAgPSBibGttYXA7DQpkaWZmIC1ydSB4ZnNwcm9n cy0yLjcuMTFfdmFuaWxsYS9yZXBhaXIvZ2xvYmFscy5oIHhmc3Byb2dzLTIu Ny4xMV9zeXN2My1sZWdhY3kvcmVwYWlyL2dsb2JhbHMuaA0KLS0tIHhmc3By 
b2dzLTIuNy4xMV92YW5pbGxhL3JlcGFpci9nbG9iYWxzLmgJMjAwNi0wMS0x NyAwMzo0Njo1Mi4wMDAwMDAwMDAgKzAwMDANCisrKyB4ZnNwcm9ncy0yLjcu MTFfc3lzdjMtbGVnYWN5L3JlcGFpci9nbG9iYWxzLmgJMjAwOC0wMy0yMSAx NjoxMDoxOS4wMDAwMDAwMDAgKzAwMDANCkBAIC02Niw3ICs2Niw3IEBADQog ICogdGhlIHBhcnRpYWwgc2IgbWFzayBiaXQgc2V0LCB0aGVuIHlvdSBkZXBl bmQgb24gdGhlIGZpZWxkcw0KICAqIGluIGl0IHVwIHRvIGFuZCBpbmNsdWRp bmcgc2JfaW5vYWxpZ25tdCBidXQgdGhlIHVudXNlZCBwYXJ0IG9mIHRoZQ0K ICAqIHNlY3RvciBtYXkgaGF2ZSB0cmFzaCBpbiBpdC4gIElmIHRoZSBzYiBo YXMgYW55IGJpdHMgc2V0IHRoYXQgYXJlIGluDQotICogdGhlIGdvb2QgbWFz aywgdGhlbiB0aGUgZW50aXJlIHNiIGFuZCBzZWN0b3IgYXJlIGdvb2QgKHdh cyBiemVybydlZA0KKyAqIHRoZSBnb29kIG1hc2ssIHRoZW4gdGhlIGVudGly ZSBzYiBhbmQgc2VjdG9yIGFyZSBnb29kICh3YXMgemVybydlZA0KICAqIGJ5 IG1rZnMpLiAgVGhlIHRoaXJkIG1hc2sgaXMgZm9yIGZpbGVzeXN0ZW1zIG1h ZGUgYnkgcHJlLTYuNSBjYW1wdXMNCiAgKiBhbHBoYSBta2ZzJ3MuICBUaG9z ZSBhcmUgcmFyZSBzbyB3ZSdsbCBjaGVjayBmb3IgdGhvc2UgdW5kZXINCiAg KiBhIHNwZWNpYWwgb3B0aW9uLg0KZGlmZiAtcnUgeGZzcHJvZ3MtMi43LjEx X3ZhbmlsbGEvcmVwYWlyL2luY29yZS5jIHhmc3Byb2dzLTIuNy4xMV9zeXN2 My1sZWdhY3kvcmVwYWlyL2luY29yZS5jDQotLS0geGZzcHJvZ3MtMi43LjEx X3ZhbmlsbGEvcmVwYWlyL2luY29yZS5jCTIwMDYtMDEtMTcgMDM6NDY6NTIu MDAwMDAwMDAwICswMDAwDQorKysgeGZzcHJvZ3MtMi43LjExX3N5c3YzLWxl Z2FjeS9yZXBhaXIvaW5jb3JlLmMJMjAwOC0wMy0yMSAxNjowNjoyMS4wMDAw MDAwMDAgKzAwMDANCkBAIC03NCw3ICs3NCw3IEBADQogCQkJCW51bWJsb2Nr cyk7DQogCQkJcmV0dXJuOw0KIAkJfQ0KLQkJYnplcm8oYmFfYm1hcFtpXSwg c2l6ZSk7DQorCQltZW1zZXQoYmFfYm1hcFtpXSwgMCwgc2l6ZSk7DQogCX0N CiANCiAJaWYgKHJ0YmxvY2tzID09IDApICB7DQpkaWZmIC1ydSB4ZnNwcm9n cy0yLjcuMTFfdmFuaWxsYS9yZXBhaXIvaW5jb3JlX2JtYy5jIHhmc3Byb2dz LTIuNy4xMV9zeXN2My1sZWdhY3kvcmVwYWlyL2luY29yZV9ibWMuYw0KLS0t IHhmc3Byb2dzLTIuNy4xMV92YW5pbGxhL3JlcGFpci9pbmNvcmVfYm1jLmMJ MjAwNi0wMS0xNyAwMzo0Njo1Mi4wMDAwMDAwMDAgKzAwMDANCisrKyB4ZnNw cm9ncy0yLjcuMTFfc3lzdjMtbGVnYWN5L3JlcGFpci9pbmNvcmVfYm1jLmMJ MjAwOC0wMy0yMSAxNjowNjoyMS4wMDAwMDAwMDAgKzAwMDANCkBAIC0yOSw3 ICsyOSw3IEBADQogew0KIAlpbnQgaTsNCiANCi0JYnplcm8oY3Vyc29yLCBz 
aXplb2YoYm1hcF9jdXJzb3JfdCkpOw0KKwltZW1zZXQoY3Vyc29yLCAwLCBz aXplb2YoYm1hcF9jdXJzb3JfdCkpOw0KIAljdXJzb3ItPmlubyA9IE5VTExG U0lOTzsNCiAJY3Vyc29yLT5udW1fbGV2ZWxzID0gbnVtX2xldmVsczsNCiAN CmRpZmYgLXJ1IHhmc3Byb2dzLTIuNy4xMV92YW5pbGxhL3JlcGFpci9pbmNv cmVfaW5vLmMgeGZzcHJvZ3MtMi43LjExX3N5c3YzLWxlZ2FjeS9yZXBhaXIv aW5jb3JlX2luby5jDQotLS0geGZzcHJvZ3MtMi43LjExX3ZhbmlsbGEvcmVw YWlyL2luY29yZV9pbm8uYwkyMDA2LTAxLTE3IDAzOjQ2OjUyLjAwMDAwMDAw MCArMDAwMA0KKysrIHhmc3Byb2dzLTIuNy4xMV9zeXN2My1sZWdhY3kvcmVw YWlyL2luY29yZV9pbm8uYwkyMDA4LTAzLTIxIDE2OjA2OjIxLjAwMDAwMDAw MCArMDAwMA0KQEAgLTUxNSwxMiArNTE1LDExIEBADQogCWlmICghdG1wKQ0K IAkJZG9fZXJyb3IoXygiY291bGRuJ3QgbWVtYWxpZ24gcGVudHJpZXMgdGFi bGVcbiIpKTsNCiANCi0JKHZvaWQpIGJjb3B5KGlyZWMtPmlub191bi5wbGlz dC0+cGVudHJpZXMsIHRtcCwNCisJbWVtbW92ZSh0bXAsIGlyZWMtPmlub191 bi5wbGlzdC0+cGVudHJpZXMsDQogCQkJdGFyZ2V0ICogc2l6ZW9mKHBhcmVu dF9lbnRyeV90KSk7DQogDQogCWlmIChjbnQgPiB0YXJnZXQpDQotCQkodm9p ZCkgYmNvcHkoaXJlYy0+aW5vX3VuLnBsaXN0LT5wZW50cmllcyArIHRhcmdl dCwNCi0JCQkJdG1wICsgdGFyZ2V0ICsgMSwNCisJCW1lbW1vdmUodG1wICsg dGFyZ2V0ICsgMSwgaXJlYy0+aW5vX3VuLnBsaXN0LT5wZW50cmllcyArIHRh cmdldCwNCiAJCQkJKGNudCAtIHRhcmdldCkgKiBzaXplb2YocGFyZW50X2Vu dHJ5X3QpKTsNCiANCiAJZnJlZShpcmVjLT5pbm9fdW4ucGxpc3QtPnBlbnRy aWVzKTsNCkBAIC02NzQsNyArNjczLDcgQEANCiAJaWYgKGJwdHJzX2luZGV4 ID09IEJQVFJfQUxMT0NfTlVNKQ0KIAkJYnB0cnMgPSBOVUxMOw0KIA0KLQli emVybyhicHRyLCBzaXplb2YoYmFja3B0cnNfdCkpOw0KKwltZW1zZXQoYnB0 ciwgMCwgc2l6ZW9mKGJhY2twdHJzX3QpKTsNCiANCiAJcmV0dXJuKGJwdHIp Ow0KIH0NCkBAIC02ODgsNyArNjg3LDcgQEANCiAJaWYgKChwdHIgPSBtYWxs b2Moc2l6ZW9mKGJhY2twdHJzX3QpKSkgPT0gTlVMTCkNCiAJCWRvX2Vycm9y KF8oImNvdWxkIG5vdCBtYWxsb2MgYmFjayBwb2ludGVyIHRhYmxlXG4iKSk7 DQogDQotCWJ6ZXJvKHB0ciwgc2l6ZW9mKGJhY2twdHJzX3QpKTsNCisJbWVt c2V0KHB0ciwgMCwgc2l6ZW9mKGJhY2twdHJzX3QpKTsNCiANCiAJcmV0dXJu KHB0cik7DQogfQ0KQEAgLTgwMiw3ICs4MDEsNyBAQA0KIAlpZiAoKGxhc3Rf cmVjID0gbWFsbG9jKHNpemVvZihpbm9fdHJlZV9ub2RlX3QgKikgKiBhZ2Nv dW50KSkgPT0gTlVMTCkNCiAJCWRvX2Vycm9yKF8oImNvdWxkbid0IG1hbGxv 
YyB1bmNlcnRhaW4gaW5vZGUgY2FjaGUgYXJlYVxuIikpOw0KIA0KLQliemVy byhsYXN0X3JlYywgc2l6ZW9mKGlub190cmVlX25vZGVfdCAqKSAqIGFnY291 bnQpOw0KKwltZW1zZXQobGFzdF9yZWMsIDAsIHNpemVvZihpbm9fdHJlZV9u b2RlX3QgKikgKiBhZ2NvdW50KTsNCiANCiAJZnVsbF9iYWNrcHRycyA9IDA7 DQogDQpkaWZmIC1ydSB4ZnNwcm9ncy0yLjcuMTFfdmFuaWxsYS9yZXBhaXIv cGhhc2U0LmMgeGZzcHJvZ3MtMi43LjExX3N5c3YzLWxlZ2FjeS9yZXBhaXIv cGhhc2U0LmMNCi0tLSB4ZnNwcm9ncy0yLjcuMTFfdmFuaWxsYS9yZXBhaXIv cGhhc2U0LmMJMjAwNi0wMS0xNyAwMzo0Njo1Mi4wMDAwMDAwMDAgKzAwMDAN CisrKyB4ZnNwcm9ncy0yLjcuMTFfc3lzdjMtbGVnYWN5L3JlcGFpci9waGFz ZTQuYwkyMDA4LTAzLTIxIDE2OjA2OjIxLjAwMDAwMDAwMCArMDAwMA0KQEAg LTY4LDcgKzY4LDcgQEANCiAJCW5hbWVzdCA9IFhGU19ESVJfTEVBRl9OQU1F U1RSVUNUKGxlYWYsDQogCQkJSU5UX0dFVChlbnRyeS0+bmFtZWlkeCwgQVJD SF9DT05WRVJUKSk7DQogCQlYRlNfRElSX1NGX0dFVF9ESVJJTk8oJm5hbWVz dC0+aW51bWJlciwgJmxpbm8pOw0KLQkJYmNvcHkobmFtZXN0LT5uYW1lLCBm bmFtZSwgZW50cnktPm5hbWVsZW4pOw0KKwkJbWVtbW92ZShmbmFtZSwgbmFt ZXN0LT5uYW1lLCBlbnRyeS0+bmFtZWxlbik7DQogCQlmbmFtZVtlbnRyeS0+ bmFtZWxlbl0gPSAnXDAnOw0KIA0KIAkJaWYgKGZuYW1lWzBdICE9ICcvJyAm JiAhc3RyY21wKGZuYW1lLCBPUlBIQU5BR0UpKSAgew0KQEAgLTMxNiw3ICsz MTYsNyBAQA0KIAkJdG1wX3NmZSA9IE5VTEw7DQogCQlzZl9lbnRyeSA9IG5l eHRfc2ZlOw0KIAkJWEZTX0RJUl9TRl9HRVRfRElSSU5PKCZzZl9lbnRyeS0+ aW51bWJlciwgJmxpbm8pOw0KLQkJYmNvcHkoc2ZfZW50cnktPm5hbWUsIGZu YW1lLCBzZl9lbnRyeS0+bmFtZWxlbik7DQorCQltZW1tb3ZlKGZuYW1lLCBz Zl9lbnRyeS0+bmFtZSwgc2ZfZW50cnktPm5hbWVsZW4pOw0KIAkJZm5hbWVb c2ZfZW50cnktPm5hbWVsZW5dID0gJ1wwJzsNCiANCiAJCWlmICghc3RyY21w KE9SUEhBTkFHRSwgZm5hbWUpKSAgew0KQEAgLTQ0Nyw3ICs0NDcsNyBAQA0K IA0KIAkJCUlOVF9NT0Qoc2YtPmhkci5jb3VudCwgQVJDSF9DT05WRVJULCAt MSk7DQogDQotCQkJYnplcm8oKHZvaWQgKikgKChfX3BzaW50X3QpIHNmX2Vu dHJ5ICsgdG1wX2xlbiksDQorCQkJbWVtc2V0KCh2b2lkICopICgoX19wc2lu dF90KSBzZl9lbnRyeSArIHRtcF9sZW4pLCAwLA0KIAkJCQl0bXBfZWxlbik7 DQogDQogCQkJLyoNCkBAIC01MzQsNyArNTM0LDcgQEANCiAJCX0NCiAJCWRl cCA9ICh4ZnNfZGlyMl9kYXRhX2VudHJ5X3QgKilwdHI7DQogCQlsaW5vID0g SU5UX0dFVChkZXAtPmludW1iZXIsIEFSQ0hfQ09OVkVSVCk7DQotCQliY29w 
eShkZXAtPm5hbWUsIGZuYW1lLCBkZXAtPm5hbWVsZW4pOw0KKwkJbWVtbW92 ZShmbmFtZSwgZGVwLT5uYW1lLCBkZXAtPm5hbWVsZW4pOw0KIAkJZm5hbWVb ZGVwLT5uYW1lbGVuXSA9ICdcMCc7DQogDQogCQlpZiAoZm5hbWVbMF0gIT0g Jy8nICYmICFzdHJjbXAoZm5hbWUsIE9SUEhBTkFHRSkpICB7DQpAQCAtNzk3 LDcgKzc5Nyw3IEBADQogCQlzZl9lbnRyeSA9IG5leHRfc2ZlOw0KIAkJbGlu byA9IFhGU19ESVIyX1NGX0dFVF9JTlVNQkVSKHNmLA0KIAkJCVhGU19ESVIy X1NGX0lOVU1CRVJQKHNmX2VudHJ5KSk7DQotCQliY29weShzZl9lbnRyeS0+ bmFtZSwgZm5hbWUsIHNmX2VudHJ5LT5uYW1lbGVuKTsNCisJCW1lbW1vdmUo Zm5hbWUsIHNmX2VudHJ5LT5uYW1lLCBzZl9lbnRyeS0+bmFtZWxlbik7DQog CQlmbmFtZVtzZl9lbnRyeS0+bmFtZWxlbl0gPSAnXDAnOw0KIA0KIAkJaWYg KCFzdHJjbXAoT1JQSEFOQUdFLCBmbmFtZSkpICB7DQpAQCAtOTMxLDcgKzkz MSw3IEBADQogCQkJaWYgKGxpbm8gPiBYRlNfRElSMl9NQVhfU0hPUlRfSU5V TSkNCiAJCQkJc2YtPmhkci5pOGNvdW50LS07DQogDQotCQkJYnplcm8oKHZv aWQgKikgKChfX3BzaW50X3QpIHNmX2VudHJ5ICsgdG1wX2xlbiksDQorCQkJ bWVtc2V0KCh2b2lkICopICgoX19wc2ludF90KSBzZl9lbnRyeSArIHRtcF9s ZW4pLCAwLA0KIAkJCQl0bXBfZWxlbik7DQogDQogCQkJLyoNCkBAIC0xMjky LDcgKzEyOTIsNyBAQA0KIAkJLyoNCiAJCSAqIG5vdyByZXNldCB0aGUgYml0 bWFwIGZvciBhbGwgYWdzDQogCQkgKi8NCi0JCWJ6ZXJvKGJhX2JtYXBbaV0s IHJvdW5kdXAobXAtPm1fc2Iuc2JfYWdibG9ja3MvKE5CQlkvWFJfQkIpLA0K KwkJbWVtc2V0KGJhX2JtYXBbaV0sIDAsIHJvdW5kdXAobXAtPm1fc2Iuc2Jf YWdibG9ja3MvKE5CQlkvWFJfQkIpLA0KIAkJCQkJCXNpemVvZihfX3VpbnQ2 NF90KSkpOw0KIAkJZm9yIChqID0gMDsgaiA8IGFnX2hkcl9ibG9jazsgaisr KQ0KIAkJCXNldF9hZ2Jub19zdGF0ZShtcCwgaSwgaiwgWFJfRV9JTlVTRV9G Uyk7DQpkaWZmIC1ydSB4ZnNwcm9ncy0yLjcuMTFfdmFuaWxsYS9yZXBhaXIv cGhhc2U1LmMgeGZzcHJvZ3MtMi43LjExX3N5c3YzLWxlZ2FjeS9yZXBhaXIv cGhhc2U1LmMNCi0tLSB4ZnNwcm9ncy0yLjcuMTFfdmFuaWxsYS9yZXBhaXIv cGhhc2U1LmMJMjAwNi0wMS0xNyAwMzo0Njo1Mi4wMDAwMDAwMDAgKzAwMDAN CisrKyB4ZnNwcm9ncy0yLjcuMTFfc3lzdjMtbGVnYWN5L3JlcGFpci9waGFz ZTUuYwkyMDA4LTAzLTIxIDE2OjEyOjA2LjAwMDAwMDAwMCArMDAwMA0KQEAg LTkzLDcgKzkzLDcgQEANCiAJICogZXh0ZW50cyBvZiBmcmVlIGJsb2Nrcy4g IEF0IHRoaXMgcG9pbnQsIHdlIGtub3cNCiAJICogdGhhdCBibG9ja3MgaW4g dGhlIGJpdG1hcCBhcmUgZWl0aGVyIHNldCB0byBhbg0KIAkgKiAiaW4gdXNl 
IiBzdGF0ZSBvciBzZXQgdG8gdW5rbm93biAoMCkgc2luY2UgdGhlDQotCSAq IGJtYXBzIHdlcmUgYnplcm8nZWQgaW4gcGhhc2UgNCBhbmQgb25seSBibG9j a3MNCisJICogYm1hcHMgd2VyZSB6ZXJvJ2VkIGluIHBoYXNlIDQgYW5kIG9u bHkgYmxvY2tzDQogCSAqIGJlaW5nIHVzZWQgYnkgaW5vZGVzLCBpbm9kZSBi bWFwcywgYWcgaGVhZGVycywNCiAJICogYW5kIHRoZSBmaWxlcyB0aGVtc2Vs dmVzIHdlcmUgcHV0IGludG8gdGhlIGJpdG1hcC4NCiAJICoNCkBAIC02NjQs NyArNjY0LDcgQEANCiAJCSAqIGluaXRpYWxpemUgYmxvY2sgaGVhZGVyDQog CQkgKi8NCiAJCWJ0X2hkciA9IFhGU19CVUZfVE9fQUxMT0NfQkxPQ0sobHB0 ci0+YnVmX3ApOw0KLQkJYnplcm8oYnRfaGRyLCBtcC0+bV9zYi5zYl9ibG9j a3NpemUpOw0KKwkJbWVtc2V0KGJ0X2hkciwgMCwgbXAtPm1fc2Iuc2JfYmxv Y2tzaXplKTsNCiANCiAJCUlOVF9TRVQoYnRfaGRyLT5iYl9tYWdpYywgQVJD SF9DT05WRVJULCBtYWdpYyk7DQogCQlJTlRfU0VUKGJ0X2hkci0+YmJfbGV2 ZWwsIEFSQ0hfQ09OVkVSVCwgbGV2ZWwpOw0KQEAgLTc0MSw3ICs3NDEsNyBA QA0KIAkJICogaW5pdGlhbGl6ZSBibG9jayBoZWFkZXINCiAJCSAqLw0KIAkJ YnRfaGRyID0gWEZTX0JVRl9UT19BTExPQ19CTE9DSyhscHRyLT5idWZfcCk7 DQotCQliemVybyhidF9oZHIsIG1wLT5tX3NiLnNiX2Jsb2Nrc2l6ZSk7DQor CQltZW1zZXQoYnRfaGRyLCAwLCBtcC0+bV9zYi5zYl9ibG9ja3NpemUpOw0K IA0KIAkJSU5UX1NFVChidF9oZHItPmJiX21hZ2ljLCBBUkNIX0NPTlZFUlQs IG1hZ2ljKTsNCiAJCUlOVF9TRVQoYnRfaGRyLT5iYl9sZXZlbCwgQVJDSF9D T05WRVJULCBpKTsNCkBAIC03NzIsNyArNzcyLDcgQEANCiAJCSAqIGJsb2Nr IGluaXRpYWxpemF0aW9uLCBsYXkgaW4gYmxvY2sgaGVhZGVyDQogCQkgKi8N CiAJCWJ0X2hkciA9IFhGU19CVUZfVE9fQUxMT0NfQkxPQ0sobHB0ci0+YnVm X3ApOw0KLQkJYnplcm8oYnRfaGRyLCBtcC0+bV9zYi5zYl9ibG9ja3NpemUp Ow0KKwkJbWVtc2V0KGJ0X2hkciwgMCwgbXAtPm1fc2Iuc2JfYmxvY2tzaXpl KTsNCiANCiAJCUlOVF9TRVQoYnRfaGRyLT5iYl9tYWdpYywgQVJDSF9DT05W RVJULCBtYWdpYyk7DQogCQlidF9oZHItPmJiX2xldmVsID0gMDsNCkBAIC0x MDIxLDcgKzEwMjEsNyBAQA0KIAkJICogaW5pdGlhbGl6ZSBibG9jayBoZWFk ZXINCiAJCSAqLw0KIAkJYnRfaGRyID0gWEZTX0JVRl9UT19JTk9CVF9CTE9D SyhscHRyLT5idWZfcCk7DQotCQliemVybyhidF9oZHIsIG1wLT5tX3NiLnNi X2Jsb2Nrc2l6ZSk7DQorCQltZW1zZXQoYnRfaGRyLCAwLCBtcC0+bV9zYi5z Yl9ibG9ja3NpemUpOw0KIA0KIAkJSU5UX1NFVChidF9oZHItPmJiX21hZ2lj LCBBUkNIX0NPTlZFUlQsIFhGU19JQlRfTUFHSUMpOw0KIAkJSU5UX1NFVChi 
dF9oZHItPmJiX2xldmVsLCBBUkNIX0NPTlZFUlQsIGxldmVsKTsNCkBAIC0x MDYwLDcgKzEwNjAsNyBAQA0KIAkJCVhGU19BR19EQUREUihtcCwgYWdubywg WEZTX0FHSV9EQUREUihtcCkpLA0KIAkJCW1wLT5tX3NiLnNiX3NlY3RzaXpl L0JCU0laRSk7DQogCWFnaSA9IFhGU19CVUZfVE9fQUdJKGFnaV9idWYpOw0K LQliemVybyhhZ2ksIG1wLT5tX3NiLnNiX3NlY3RzaXplKTsNCisJbWVtc2V0 KGFnaSwgMCwgbXAtPm1fc2Iuc2Jfc2VjdHNpemUpOw0KIA0KIAlJTlRfU0VU KGFnaS0+YWdpX21hZ2ljbnVtLCBBUkNIX0NPTlZFUlQsIFhGU19BR0lfTUFH SUMpOw0KIAlJTlRfU0VUKGFnaS0+YWdpX3ZlcnNpb25udW0sIEFSQ0hfQ09O VkVSVCwgWEZTX0FHSV9WRVJTSU9OKTsNCkBAIC0xMTI0LDcgKzExMjQsNyBA QA0KIAkJICogaW5pdGlhbGl6ZSBibG9jayBoZWFkZXINCiAJCSAqLw0KIAkJ YnRfaGRyID0gWEZTX0JVRl9UT19JTk9CVF9CTE9DSyhscHRyLT5idWZfcCk7 DQotCQliemVybyhidF9oZHIsIG1wLT5tX3NiLnNiX2Jsb2Nrc2l6ZSk7DQor CQltZW1zZXQoYnRfaGRyLCAwLCBtcC0+bV9zYi5zYl9ibG9ja3NpemUpOw0K IA0KIAkJSU5UX1NFVChidF9oZHItPmJiX21hZ2ljLCBBUkNIX0NPTlZFUlQs IFhGU19JQlRfTUFHSUMpOw0KIAkJSU5UX1NFVChidF9oZHItPmJiX2xldmVs LCBBUkNIX0NPTlZFUlQsIGkpOw0KQEAgLTExNTIsNyArMTE1Miw3IEBADQog CQkgKiBibG9jayBpbml0aWFsaXphdGlvbiwgbGF5IGluIGJsb2NrIGhlYWRl cg0KIAkJICovDQogCQlidF9oZHIgPSBYRlNfQlVGX1RPX0lOT0JUX0JMT0NL KGxwdHItPmJ1Zl9wKTsNCi0JCWJ6ZXJvKGJ0X2hkciwgbXAtPm1fc2Iuc2Jf YmxvY2tzaXplKTsNCisJCW1lbXNldChidF9oZHIsIDAsIG1wLT5tX3NiLnNi X2Jsb2Nrc2l6ZSk7DQogDQogCQlJTlRfU0VUKGJ0X2hkci0+YmJfbWFnaWMs IEFSQ0hfQ09OVkVSVCwgWEZTX0lCVF9NQUdJQyk7DQogCQlidF9oZHItPmJi X2xldmVsID0gMDsNCkBAIC0xMjM5LDcgKzEyMzksNyBAQA0KIAkJCVhGU19B R19EQUREUihtcCwgYWdubywgWEZTX0FHRl9EQUREUihtcCkpLA0KIAkJCW1w LT5tX3NiLnNiX3NlY3RzaXplL0JCU0laRSk7DQogCWFnZiA9IFhGU19CVUZf VE9fQUdGKGFnZl9idWYpOw0KLQliemVybyhhZ2YsIG1wLT5tX3NiLnNiX3Nl Y3RzaXplKTsNCisJbWVtc2V0KGFnZiwgMCwgbXAtPm1fc2Iuc2Jfc2VjdHNp emUpOw0KIA0KICNpZmRlZiBYUl9CTERfRlJFRV9UUkFDRQ0KIAlmcHJpbnRm KHN0ZGVyciwgImFnZiA9IDB4JXgsIGFnZl9idWYtPmJfdW4uYl9hZGRyID0g MHgleFxuIiwNCkBAIC0xMjg3LDcgKzEyODcsNyBAQA0KIAkJCQlYRlNfQUdf REFERFIobXAsIGFnbm8sIFhGU19BR0ZMX0RBRERSKG1wKSksDQogCQkJCW1w LT5tX3NiLnNiX3NlY3RzaXplL0JCU0laRSk7DQogCQlhZ2ZsID0gWEZTX0JV 
Rl9UT19BR0ZMKGFnZmxfYnVmKTsNCi0JCWJ6ZXJvKGFnZmwsIG1wLT5tX3Ni LnNiX3NlY3RzaXplKTsNCisJCW1lbXNldChhZ2ZsLCAwLCBtcC0+bV9zYi5z Yl9zZWN0c2l6ZSk7DQogCQkvKg0KIAkJICogb2ssIG5vdyBncmFiIGFzIG1h bnkgYmxvY2tzIGFzIHdlIGNhbg0KIAkJICovDQpkaWZmIC1ydSB4ZnNwcm9n cy0yLjcuMTFfdmFuaWxsYS9yZXBhaXIvcGhhc2U2LmMgeGZzcHJvZ3MtMi43 LjExX3N5c3YzLWxlZ2FjeS9yZXBhaXIvcGhhc2U2LmMNCi0tLSB4ZnNwcm9n cy0yLjcuMTFfdmFuaWxsYS9yZXBhaXIvcGhhc2U2LmMJMjAwNi0wMS0xNyAw Mzo0Njo1Mi4wMDAwMDAwMDAgKzAwMDANCisrKyB4ZnNwcm9ncy0yLjcuMTFf c3lzdjMtbGVnYWN5L3JlcGFpci9waGFzZTYuYwkyMDA4LTAzLTIxIDE2OjA2 OjIxLjAwMDAwMDAwMCArMDAwMA0KQEAgLTM0MSw3ICszNDEsNyBAQA0KIAkJ CWVycm9yKTsNCiAJfQ0KIA0KLQliemVybygmaXAtPmlfZCwgc2l6ZW9mKHhm c19kaW5vZGVfY29yZV90KSk7DQorCW1lbXNldCgmaXAtPmlfZCwgMCwgc2l6 ZW9mKHhmc19kaW5vZGVfY29yZV90KSk7DQogDQogCWlwLT5pX2QuZGlfbWFn aWMgPSBYRlNfRElOT0RFX01BR0lDOw0KIAlpcC0+aV9kLmRpX21vZGUgPSBT X0lGUkVHOw0KQEAgLTQ2MSw3ICs0NjEsNyBAQA0KIAkJCXJldHVybigxKTsN CiAJCX0NCiANCi0JCWJjb3B5KGJtcCwgWEZTX0JVRl9QVFIoYnApLCBtcC0+ bV9zYi5zYl9ibG9ja3NpemUpOw0KKwkJbWVtbW92ZShYRlNfQlVGX1BUUihi cCksIGJtcCwgbXAtPm1fc2Iuc2JfYmxvY2tzaXplKTsNCiANCiAJCWxpYnhm c190cmFuc19sb2dfYnVmKHRwLCBicCwgMCwgbXAtPm1fc2Iuc2JfYmxvY2tz aXplIC0gMSk7DQogDQpAQCAtNTMxLDcgKzUzMSw3IEBADQogCQkJcmV0dXJu KDEpOw0KIAkJfQ0KIA0KLQkJYmNvcHkoc21wLCBYRlNfQlVGX1BUUihicCks IG1wLT5tX3NiLnNiX2Jsb2Nrc2l6ZSk7DQorCQltZW1tb3ZlKFhGU19CVUZf UFRSKGJwKSwgc21wLCBtcC0+bV9zYi5zYl9ibG9ja3NpemUpOw0KIA0KIAkJ bGlieGZzX3RyYW5zX2xvZ19idWYodHAsIGJwLCAwLCBtcC0+bV9zYi5zYl9i bG9ja3NpemUgLSAxKTsNCiANCkBAIC01NzYsNyArNTc2LDcgQEANCiAJCQll cnJvcik7DQogCX0NCiANCi0JYnplcm8oJmlwLT5pX2QsIHNpemVvZih4ZnNf ZGlub2RlX2NvcmVfdCkpOw0KKwltZW1zZXQoJmlwLT5pX2QsIDAsIHNpemVv Zih4ZnNfZGlub2RlX2NvcmVfdCkpOw0KIA0KIAlpcC0+aV9kLmRpX21hZ2lj ID0gWEZTX0RJTk9ERV9NQUdJQzsNCiAJaXAtPmlfZC5kaV9tb2RlID0gU19J RlJFRzsNCkBAIC02NzQsNyArNjc0LDcgQEANCiAJLyoNCiAJICogdGFrZSBj YXJlIG9mIHRoZSBjb3JlIC0tIGluaXRpYWxpemF0aW9uIGZyb20geGZzX2lh bGxvYygpDQogCSAqLw0KLQliemVybygmaXAtPmlfZCwgc2l6ZW9mKHhmc19k 
aW5vZGVfY29yZV90KSk7DQorCW1lbXNldCgmaXAtPmlfZCwgMCwgc2l6ZW9m KHhmc19kaW5vZGVfY29yZV90KSk7DQogDQogCWlwLT5pX2QuZGlfbWFnaWMg PSBYRlNfRElOT0RFX01BR0lDOw0KIAlpcC0+aV9kLmRpX21vZGUgPSAoX191 aW50MTZfdCkgbW9kZXxTX0lGRElSOw0KQEAgLTEyMzEsNyArMTIzMSw3IEBA DQogCS8qDQogCSAqIHNuYWcgdGhlIGluZm8gd2UgbmVlZCBvdXQgb2YgdGhl IGRpcmVjdG9yeSB0aGVuIHJlbGVhc2UgYWxsIGJ1ZmZlcnMNCiAJICovDQot CWJjb3B5KG5hbWVzdC0+bmFtZSwgZm5hbWUsIGVudHJ5LT5uYW1lbGVuKTsN CisJbWVtbW92ZShmbmFtZSwgbmFtZXN0LT5uYW1lLCBlbnRyeS0+bmFtZWxl bik7DQogCWZuYW1lW2VudHJ5LT5uYW1lbGVuXSA9ICdcMCc7DQogCSpoYXNo dmFsID0gSU5UX0dFVChlbnRyeS0+aGFzaHZhbCwgQVJDSF9DT05WRVJUKTsN CiAJbmFtZWxlbiA9IGVudHJ5LT5uYW1lbGVuOw0KQEAgLTEzNDEsNyArMTM0 MSw3IEBADQogCQlqdW5raXQgPSAwOw0KIA0KIAkJWEZTX0RJUl9TRl9HRVRf RElSSU5PKCZuYW1lc3QtPmludW1iZXIsICZsaW5vKTsNCi0JCWJjb3B5KG5h bWVzdC0+bmFtZSwgZm5hbWUsIGVudHJ5LT5uYW1lbGVuKTsNCisJCW1lbW1v dmUoZm5hbWUsIG5hbWVzdC0+bmFtZSwgZW50cnktPm5hbWVsZW4pOw0KIAkJ Zm5hbWVbZW50cnktPm5hbWVsZW5dID0gJ1wwJzsNCiANCiAJCUFTU0VSVChs aW5vICE9IE5VTExGU0lOTyk7DQpAQCAtMTY1Niw3ICsxNjU2LDcgQEANCiAJ bGlieGZzX3RyYW5zX2lqb2luKHRwLCBpcCwgMCk7DQogCWxpYnhmc190cmFu c19paG9sZCh0cCwgaXApOw0KIAlsaWJ4ZnNfZGFfYmpvaW4odHAsIGJwKTsN Ci0JYnplcm8oJmFyZ3MsIHNpemVvZihhcmdzKSk7DQorCW1lbXNldCgmYXJn cywgMCwgc2l6ZW9mKGFyZ3MpKTsNCiAJWEZTX0JNQVBfSU5JVCgmZmxpc3Qs ICZmaXJzdGJsb2NrKTsNCiAJYXJncy5kcCA9IGlwOw0KIAlhcmdzLnRyYW5z ID0gdHA7DQpAQCAtMTkwNyw3ICsxOTA3LDcgQEANCiAJCQljb250aW51ZTsN CiAJCX0NCiAJCWp1bmtpdCA9IDA7DQotCQliY29weShkZXAtPm5hbWUsIGZu YW1lLCBkZXAtPm5hbWVsZW4pOw0KKwkJbWVtbW92ZShmbmFtZSwgZGVwLT5u YW1lLCBkZXAtPm5hbWVsZW4pOw0KIAkJZm5hbWVbZGVwLT5uYW1lbGVuXSA9 ICdcMCc7DQogCQlBU1NFUlQoSU5UX0dFVChkZXAtPmludW1iZXIsIEFSQ0hf Q09OVkVSVCkgIT0gTlVMTEZTSU5PKTsNCiAJCS8qDQpAQCAtMjM1MCw3ICsy MzUwLDcgQEANCiAJfQ0KIA0KIAkvKiBhbGxvY2F0ZSBibG9ja3MgZm9yIGJ0 cmVlICovDQotCWJ6ZXJvKCZhcmdzLCBzaXplb2YoYXJncykpOw0KKwltZW1z ZXQoJmFyZ3MsIDAsIHNpemVvZihhcmdzKSk7DQogCWFyZ3MudHJhbnMgPSB0 cDsNCiAJYXJncy5kcCA9IGlwOw0KIAlhcmdzLndoaWNoZm9yayA9IFhGU19E 
QVRBX0ZPUks7DQpAQCAtMjM2NCw3ICsyMzY0LDcgQEANCiAJCS8qIE5PVFJF QUNIRUQgKi8NCiAJfQ0KIAlsZWFmID0gbGJwLT5kYXRhOw0KLQliemVybyhs ZWFmLCBtcC0+bV9kaXJibGtzaXplKTsNCisJbWVtc2V0KGxlYWYsIDAsIG1w LT5tX2RpcmJsa3NpemUpOw0KIAlJTlRfU0VUKGxlYWYtPmhkci5pbmZvLm1h Z2ljLCBBUkNIX0NPTlZFUlQsIFhGU19ESVIyX0xFQUZOX01BR0lDKTsNCiAJ bGlieGZzX2RhX2xvZ19idWYodHAsIGxicCwgMCwgbXAtPm1fZGlyYmxrc2l6 ZSAtIDEpOw0KIAlsaWJ4ZnNfYm1hcF9maW5pc2goJnRwLCAmZmxpc3QsIGZp cnN0YmxvY2ssICZjb21taXR0ZWQpOw0KQEAgLTIzODEsNyArMjM4MSw3IEBA DQogCQlsaWJ4ZnNfdHJhbnNfaWpvaW4odHAsIGlwLCAwKTsNCiAJCWxpYnhm c190cmFuc19paG9sZCh0cCwgaXApOw0KIAkJWEZTX0JNQVBfSU5JVCgmZmxp c3QsICZmaXJzdGJsb2NrKTsNCi0JCWJ6ZXJvKCZhcmdzLCBzaXplb2YoYXJn cykpOw0KKwkJbWVtc2V0KCZhcmdzLCAwLCBzaXplb2YoYXJncykpOw0KIAkJ YXJncy50cmFucyA9IHRwOw0KIAkJYXJncy5kcCA9IGlwOw0KIAkJYXJncy53 aGljaGZvcmsgPSBYRlNfREFUQV9GT1JLOw0KQEAgLTIzOTgsNyArMjM5OCw3 IEBADQogCQkJLyogTk9UUkVBQ0hFRCAqLw0KIAkJfQ0KIAkJZnJlZSA9IGZi cC0+ZGF0YTsNCi0JCWJ6ZXJvKGZyZWUsIG1wLT5tX2RpcmJsa3NpemUpOw0K KwkJbWVtc2V0KGZyZWUsIDAsIG1wLT5tX2RpcmJsa3NpemUpOw0KIAkJSU5U X1NFVChmcmVlLT5oZHIubWFnaWMsIEFSQ0hfQ09OVkVSVCwgWEZTX0RJUjJf RlJFRV9NQUdJQyk7DQogCQlJTlRfU0VUKGZyZWUtPmhkci5maXJzdGRiLCBB UkNIX0NPTlZFUlQsIGkpOw0KIAkJSU5UX1NFVChmcmVlLT5oZHIubnZhbGlk LCBBUkNIX0NPTlZFUlQsIFhGU19ESVIyX01BWF9GUkVFX0JFU1RTKG1wKSk7 DQpAQCAtMjQ3Myw3ICsyNDczLDcgQEANCiAJCQltcC0+bV9kaXJibGtzaXpl KTsNCiAJCWV4aXQoMSk7DQogCX0NCi0JYmNvcHkoYnAtPmRhdGEsIGRhdGEs IG1wLT5tX2RpcmJsa3NpemUpOw0KKwltZW1tb3ZlKGRhdGEsIGJwLT5kYXRh LCBtcC0+bV9kaXJibGtzaXplKTsNCiAJcHRyID0gKGNoYXIgKilkYXRhLT51 Ow0KIAlpZiAoSU5UX0dFVChkYXRhLT5oZHIubWFnaWMsIEFSQ0hfQ09OVkVS VCkgPT0gWEZTX0RJUjJfQkxPQ0tfTUFHSUMpIHsNCiAJCWJ0cCA9IFhGU19E SVIyX0JMT0NLX1RBSUxfUChtcCwgKHhmc19kaXIyX2Jsb2NrX3QgKilkYXRh KTsNCkBAIC0yNDk1LDcgKzI0OTUsNyBAQA0KIAlsaWJ4ZnNfZGFfYmhvbGQo dHAsIGZicCk7DQogCVhGU19CTUFQX0lOSVQoJmZsaXN0LCAmZmlyc3RibG9j ayk7DQogCW5lZWRsb2cgPSBuZWVkc2NhbiA9IDA7DQotCWJ6ZXJvKCgoeGZz X2RpcjJfZGF0YV90ICopKGJwLT5kYXRhKSktPmhkci5iZXN0ZnJlZSwNCisJ 
bWVtc2V0KCgoeGZzX2RpcjJfZGF0YV90ICopKGJwLT5kYXRhKSktPmhkci5i ZXN0ZnJlZSwgMCwNCiAJCXNpemVvZihkYXRhLT5oZHIuYmVzdGZyZWUpKTsN CiAJbGlieGZzX2RpcjJfZGF0YV9tYWtlX2ZyZWUodHAsIGJwLCAoeGZzX2Rp cjJfZGF0YV9hb2ZmX3Qpc2l6ZW9mKGRhdGEtPmhkciksDQogCQltcC0+bV9k aXJibGtzaXplIC0gc2l6ZW9mKGRhdGEtPmhkciksICZuZWVkbG9nLCAmbmVl ZHNjYW4pOw0KQEAgLTI4NTYsNyArMjg1Niw3IEBADQogCQkJfQ0KIAkJfQ0K IA0KLQkJYmNvcHkoc2ZfZW50cnktPm5hbWUsIGZuYW1lLCBzZl9lbnRyeS0+ bmFtZWxlbik7DQorCQltZW1tb3ZlKGZuYW1lLCBzZl9lbnRyeS0+bmFtZSwg c2ZfZW50cnktPm5hbWVsZW4pOw0KIAkJZm5hbWVbc2ZfZW50cnktPm5hbWVs ZW5dID0gJ1wwJzsNCiANCiAJCUFTU0VSVChub19tb2RpZnkgfHwgbGlubyAh PSBOVUxMRlNJTk8pOw0KQEAgLTI5NjcsNyArMjk2Nyw3IEBADQogCQkJCW1l bW1vdmUoc2ZfZW50cnksIHRtcF9zZmUsIHRtcF9sZW4pOw0KIA0KIAkJCQlJ TlRfTU9EKHNmLT5oZHIuY291bnQsIEFSQ0hfQ09OVkVSVCwgLTEpOw0KLQkJ CQliemVybygodm9pZCAqKSAoKF9fcHNpbnRfdCkgc2ZfZW50cnkgKyB0bXBf bGVuKSwNCisJCQkJbWVtc2V0KCh2b2lkICopICgoX19wc2ludF90KSBzZl9l bnRyeSArIHRtcF9sZW4pLCAwLA0KIAkJCQkJCXRtcF9lbGVuKTsNCiANCiAJ CQkJLyoNCkBAIC0zMDcxLDcgKzMwNzEsNyBAQA0KIA0KIAkJWEZTX0RJUl9T Rl9HRVRfRElSSU5PKCZzZl9lbnRyeS0+aW51bWJlciwgJmxpbm8pOw0KIA0K LQkJYmNvcHkoc2ZfZW50cnktPm5hbWUsIGZuYW1lLCBzZl9lbnRyeS0+bmFt ZWxlbik7DQorCQltZW1tb3ZlKGZuYW1lLCBzZl9lbnRyeS0+bmFtZSwgc2Zf ZW50cnktPm5hbWVsZW4pOw0KIAkJZm5hbWVbc2ZfZW50cnktPm5hbWVsZW5d ID0gJ1wwJzsNCiANCiAJCWlmIChzZl9lbnRyeS0+bmFtZVswXSA9PSAnLycp ICB7DQpAQCAtMzA4Nyw3ICszMDg3LDcgQEANCiAJCQkJbWVtbW92ZShzZl9l bnRyeSwgdG1wX3NmZSwgdG1wX2xlbik7DQogDQogCQkJCUlOVF9NT0Qoc2Yt Pmhkci5jb3VudCwgQVJDSF9DT05WRVJULCAtMSk7DQotCQkJCWJ6ZXJvKCh2 b2lkICopICgoX19wc2ludF90KSBzZl9lbnRyeSArIHRtcF9sZW4pLA0KKwkJ CQltZW1zZXQoKHZvaWQgKikgKChfX3BzaW50X3QpIHNmX2VudHJ5ICsgdG1w X2xlbiksIDAsDQogCQkJCQkJdG1wX2VsZW4pOw0KIA0KIAkJCQkvKg0KQEAg LTMyNDIsNyArMzI0Miw3IEBADQogCQkJfQ0KIAkJfQ0KIA0KLQkJYmNvcHko c2ZlcC0+bmFtZSwgZm5hbWUsIHNmZXAtPm5hbWVsZW4pOw0KKwkJbWVtbW92 ZShmbmFtZSwgc2ZlcC0+bmFtZSwgc2ZlcC0+bmFtZWxlbik7DQogCQlmbmFt ZVtzZmVwLT5uYW1lbGVuXSA9ICdcMCc7DQogDQogCQlBU1NFUlQobm9fbW9k 
aWZ5IHx8IChsaW5vICE9IE5VTExGU0lOTyAmJiBsaW5vICE9IDApKTsNCkBA IC0zMzYzLDcgKzMzNjMsNyBAQA0KIAkJCQltZW1tb3ZlKHNmZXAsIHRtcF9z ZmVwLCB0bXBfbGVuKTsNCiANCiAJCQkJSU5UX01PRChzZnAtPmhkci5jb3Vu dCwgQVJDSF9DT05WRVJULCAtMSk7DQotCQkJCWJ6ZXJvKCh2b2lkICopICgo X19wc2ludF90KSBzZmVwICsgdG1wX2xlbiksDQorCQkJCW1lbXNldCgodm9p ZCAqKSAoKF9fcHNpbnRfdCkgc2ZlcCArIHRtcF9sZW4pLCAwLA0KIAkJCQkJ CXRtcF9lbGVuKTsNCiANCiAJCQkJLyoNCkBAIC0zODc5LDggKzM4NzksOCBA QA0KIAlpbnQJCQlpOw0KIAlpbnQJCQlqOw0KIA0KLQliemVybygmemVyb2Ny LCBzaXplb2Yoc3RydWN0IGNyZWQpKTsNCi0JYnplcm8oJnplcm9mc3gsIHNp emVvZihzdHJ1Y3QgZnN4YXR0cikpOw0KKwltZW1zZXQoJnplcm9jciwgMCwg c2l6ZW9mKHN0cnVjdCBjcmVkKSk7DQorCW1lbXNldCgmemVyb2ZzeCwgMCwg c2l6ZW9mKHN0cnVjdCBmc3hhdHRyKSk7DQogDQogCWRvX2xvZyhfKCJQaGFz ZSA2IC0gY2hlY2sgaW5vZGUgY29ubmVjdGl2aXR5Li4uXG4iKSk7DQogDQpk aWZmIC1ydSB4ZnNwcm9ncy0yLjcuMTFfdmFuaWxsYS9yZXBhaXIvcnQuYyB4 ZnNwcm9ncy0yLjcuMTFfc3lzdjMtbGVnYWN5L3JlcGFpci9ydC5jDQotLS0g eGZzcHJvZ3MtMi43LjExX3ZhbmlsbGEvcmVwYWlyL3J0LmMJMjAwNi0wMS0x NyAwMzo0Njo1Mi4wMDAwMDAwMDAgKzAwMDANCisrKyB4ZnNwcm9ncy0yLjcu MTFfc3lzdjMtbGVnYWN5L3JlcGFpci9ydC5jCTIwMDgtMDMtMjEgMTU6MTQ6 MTMuMDAwMDAwMDAwICswMDAwDQpAQCAtMjc1LDcgKzI3NSw3IEBADQogCQkJ Y29udGludWU7DQogCQl9DQogCQlieXRlcyA9IGJwLT5iX3VuLmJfYWRkcjsN Ci0JCWJjb3B5KGJ5dGVzLCAoY2hhciAqKXN1bWZpbGUgKyBzdW1ibm8gKiBt cC0+bV9zYi5zYl9ibG9ja3NpemUsDQorCQltZW1tb3ZlKChjaGFyICopc3Vt ZmlsZSArIHN1bWJubyAqIG1wLT5tX3NiLnNiX2Jsb2Nrc2l6ZSwgYnl0ZXMs DQogCQkJbXAtPm1fc2Iuc2JfYmxvY2tzaXplKTsNCiAJCWxpYnhmc19wdXRi dWYoYnApOw0KIAl9DQpkaWZmIC1ydSB4ZnNwcm9ncy0yLjcuMTFfdmFuaWxs YS9yZXBhaXIvc2IuYyB4ZnNwcm9ncy0yLjcuMTFfc3lzdjMtbGVnYWN5L3Jl cGFpci9zYi5jDQotLS0geGZzcHJvZ3MtMi43LjExX3ZhbmlsbGEvcmVwYWly L3NiLmMJMjAwNi0wMS0xNyAwMzo0Njo1Mi4wMDAwMDAwMDAgKzAwMDANCisr KyB4ZnNwcm9ncy0yLjcuMTFfc3lzdjMtbGVnYWN5L3JlcGFpci9zYi5jCTIw MDgtMDMtMjEgMTY6MTA6MTAuMDAwMDAwMDAwICswMDAwDQpAQCAtNzcsNyAr NzcsNyBAQA0KIAlkZXN0LT5zYl9mZGJsb2NrcyA9IDA7DQogCWRlc3QtPnNi X2ZyZXh0ZW50cyA9IDA7DQogDQotCWJ6ZXJvKHNvdXJjZS0+c2JfZm5hbWUs 
IDEyKTsNCisJbWVtc2V0KHNvdXJjZS0+c2JfZm5hbWUsIDAsIDEyKTsNCiB9 DQogDQogLyoNCkBAIC0xMDUsNyArMTA1LDcgQEANCiAJCWV4aXQoMSk7DQog CX0NCiANCi0JYnplcm8oJmJ1ZnNiLCBzaXplb2YoeGZzX3NiX3QpKTsNCisJ bWVtc2V0KCZidWZzYiwgMCwgc2l6ZW9mKHhmc19zYl90KSk7DQogCXJldHZh bCA9IDA7DQogCWRpcnR5ID0gMDsNCiAJYnNpemUgPSAwOw0KQEAgLTE0NCw3 ICsxNDQsNyBAQA0KIAkJCSAqIGZvdW5kIG9uZS4gIG5vdyB2ZXJpZnkgaXQg YnkgbG9va2luZw0KIAkJCSAqIGZvciBvdGhlciBzZWNvbmRhcmllcy4NCiAJ CQkgKi8NCi0JCQliY29weSgmYnVmc2IsIHJzYiwgc2l6ZW9mKHhmc19zYl90 KSk7DQorCQkJbWVtbW92ZShyc2IsICZidWZzYiwgc2l6ZW9mKHhmc19zYl90 KSk7DQogCQkJcnNiLT5zYl9pbnByb2dyZXNzID0gMDsNCiAJCQljbGVhcl9z dW5pdCA9IDE7DQogDQpAQCAtNTc2LDcgKzU3Niw3IEBADQogdm9pZA0KIGdl dF9zYl9nZW9tZXRyeShmc19nZW9tZXRyeV90ICpnZW8sIHhmc19zYl90ICpz YnApDQogew0KLQliemVybyhnZW8sIHNpemVvZihmc19nZW9tZXRyeV90KSk7 DQorCW1lbXNldChnZW8sIDAsIHNpemVvZihmc19nZW9tZXRyeV90KSk7DQog DQogCS8qDQogCSAqIGJsaW5kbHkgc2V0IGZpZWxkcyB0aGF0IHdlIGtub3cg YXJlIGFsd2F5cyBnb29kDQpAQCAtNjQzLDcgKzY0Myw3IEBADQogCSAqIHN1 cGVyYmxvY2sgZmllbGRzIGxvY2F0ZWQgYWZ0ZXIgc2Jfd2lkdGhmaWVsZHMg Z2V0IHNldA0KIAkgKiBpbnRvIHRoZSBnZW9tZXRyeSBzdHJ1Y3R1cmUgb25s eSBpZiB3ZSBjYW4gZGV0ZXJtaW5lDQogCSAqIGZyb20gdGhlIGZlYXR1cmVz IGVuYWJsZWQgaW4gdGhpcyBzdXBlcmJsb2NrIHdoZXRoZXINCi0JICogb3Ig bm90IHRoZSBzZWN0b3Igd2FzIGJ6ZXJvJ2QgYXQgbWtmcyB0aW1lLg0KKwkg KiBvciBub3QgdGhlIHNlY3RvciB3YXMgemVybydkIGF0IG1rZnMgdGltZS4N CiAJICovDQogCWlmICgoIXByZV82NV9iZXRhICYmIChzYnAtPnNiX3ZlcnNp b25udW0gJiBYUl9HT09EX1NFQ1NCX1ZOTUFTSykpIHx8DQogCSAgICAocHJl XzY1X2JldGEgJiYgKHNicC0+c2JfdmVyc2lvbm51bSAmIFhSX0FMUEhBX1NF Q1NCX1ZOTUFTSykpKSB7DQpkaWZmIC1ydSB4ZnNwcm9ncy0yLjcuMTFfdmFu aWxsYS9ydGNwL3hmc19ydGNwLmMgeGZzcHJvZ3MtMi43LjExX3N5c3YzLWxl Z2FjeS9ydGNwL3hmc19ydGNwLmMNCi0tLSB4ZnNwcm9ncy0yLjcuMTFfdmFu aWxsYS9ydGNwL3hmc19ydGNwLmMJMjAwNi0wMS0xNyAwMzo0Njo1Mi4wMDAw MDAwMDAgKzAwMDANCisrKyB4ZnNwcm9ncy0yLjcuMTFfc3lzdjMtbGVnYWN5 L3J0Y3AveGZzX3J0Y3AuYwkyMDA4LTAzLTIxIDE2OjA2OjIxLjAwMDAwMDAw MCArMDAwMA0KQEAgLTM2NSw3ICszNjUsNyBAQA0KIAkJCXJldHVybiggLTEg 
KTsNCiAJCX0NCiANCi0JCWJ6ZXJvKCBmYnVmLCBpb3N6KTsNCisJCW1lbXNl dCggZmJ1ZiwgMCwgaW9zeik7DQogCX0NCiANCiAJY2xvc2UoZnJvbWZkKTsN Cg== --=-zYTwdJq22U6LZJFdfpPz-- --=-NqQpxZTxc9RLiJilMgMP Content-Type: application/pgp-signature; name=signature.asc Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (GNU/Linux) iD8DBQBH5J3HKoUGSidwLE4RAkwrAJ97KD68RQQE27bM56FN2iux7dNJwgCaAwSt QbvltoXIg1urmpcd/mR6N88= =9pcy -----END PGP SIGNATURE----- --=-NqQpxZTxc9RLiJilMgMP-- From owner-xfs@oss.sgi.com Sat Mar 22 03:05:41 2008 Received: with ECARTIS (v1.0.0; list xfs); Sat, 22 Mar 2008 03:05:50 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2MA5eAS006731 for ; Sat, 22 Mar 2008 03:05:41 -0700 X-ASG-Debug-ID: 1206180372-150e031f0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.wp.pl (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id CEC096D3D28 for ; Sat, 22 Mar 2008 03:06:12 -0700 (PDT) Received: from mx1.wp.pl (mx1.wp.pl [212.77.101.5]) by cuda.sgi.com with ESMTP id 8pXTaoeHBhyiyTT5 for ; Sat, 22 Mar 2008 03:06:12 -0700 (PDT) Received: (wp-smtpd smtp.wp.pl 613 invoked from network); 22 Mar 2008 11:06:10 +0100 Received: from cgy162.neoplus.adsl.tpnet.pl (HELO asus.local) (stf_xl@wp.pl@[83.30.252.162]) (envelope-sender ) by smtp.wp.pl (WP-SMTPD) with AES256-SHA encrypted SMTP for ; 22 Mar 2008 11:06:10 +0100 From: Stanislaw Gruszka To: "Josef 'Jeff' Sipek" X-ASG-Orig-Subj: Re: BUG: xfs on linux lvm - lvconvert random hungs when doing i/o Subject: Re: BUG: xfs on linux lvm - lvconvert random hungs when doing i/o Date: Sat, 22 Mar 2008 11:20:41 +0100 User-Agent: KMail/1.9.7 Cc: xfs@oss.sgi.com 
References: <200803211520.16398.stf_xl@wp.pl> <20080321174556.GA5433@josefsipek.net>
In-Reply-To: <20080321174556.GA5433@josefsipek.net>
Message-Id: <200803221120.42144.stf_xl@wp.pl>
X-archive-position: 14982

On Friday 21 March 2008, Josef 'Jeff' Sipek wrote:
> On Fri, Mar 21, 2008 at 03:20:16PM +0100, Stanislaw Gruszka wrote:
> >
> Interesting, I've noticed similar hang (based on my un-expert inspection of
> your backtraces) which went away as suddenly as it appeared. I wasn't making
> snapshots or any other LVM operation at the time. It just happened - logs
> didn't contain anything.

What I showed in the traces is a real deadlock: the processes will stay in that state until the machine is powered off or the end of the world, because nothing will ever up the semaphores. Making a snapshot while there is pending I/O on the volume takes much more time than when there is no load, probably because of lock contention, which is a problem by itself. However, in most cases the processes do unlock after some time.
For me, making a snapshot takes a few seconds without I/O and something between 1 and 3 minutes with I/O. However, as I said before, in very rare cases making a snapshot with I/O on an XFS volume takes forever.

Stanislaw Gruszka

From owner-xfs@oss.sgi.com Sat Mar 22 04:57:57 2008
Subject: Re: serious problem with XFS on nvidia IDE controller
From: Massimiliano Adamo
To: Chris Wedgwood
Cc: xfs@oss.sgi.com
In-Reply-To:
<20080322005528.GA917@puku.stupidest.org>
References: <1206125778.6867.14.camel@dasa-laptop> <20080321230609.GA31222@puku.stupidest.org> <1206141665.6636.23.camel@dasa-laptop> <20080322005528.GA917@puku.stupidest.org>
Date: Sat, 22 Mar 2008 12:57:47 +0100
Message-Id: <1206187067.6595.15.camel@dasa-laptop>
X-archive-position: 14983

On Fri, 21/03/2008 at 17.55 -0700, Chris Wedgwood wrote:
> On Sat, Mar 22, 2008 at 12:21:05AM +0100, Massimiliano Adamo wrote:
> > let's say the problem is: "ubuntu people" are neither hacker or
> > guru, and majority of people just use the default filesystem.
>
> they have a bug tracker and kernel people, i'm sure if you have
> module problems there is someone who can look into it

I have found this: https://answers.launchpad.net/ubuntu/+question/7694
It's not completely true in my case (as I have /dev/sdxx), but it says that this driver is not there anymore.
The reason for removing that driver could be this one: https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/191007

Anyway, never mind the reason; the point is that I would have to recompile the kernel. And no way, I am not using Ubuntu in order to waste time recompiling kernels :)

> > Agree, but why no problems like this with reiser?
>
> xfs don't journal data, by design, that might be what you are seeing,
> or something else, it's not clear without more information

Right now it is difficult for me to reproduce this bug, as the installation with XFS was completely unusable (I was able to boot no more than two or three times). At the least, I would have to create a new partition with XFS and run tests on it.

--
cheers
Massimiliano

From owner-xfs@oss.sgi.com Sat Mar 22 07:11:00 2008
Message-ID: <47E5136E.3060807@theendofthetunnel.de>
Date: Sat, 22 Mar 2008 15:10:54 +0100
From: Hannes Dorbath
User-Agent: Thunderbird 2.0.0.12 (Windows/20080213)
To: Xavier Poirier
CC: xfs@oss.sgi.com
Subject: Re: Update of XFSPROG XFSDUMP on linux kernel 2.4.22
References: <1205842046.47dfb07ede86d@hermesadm.chb.fr>
In-Reply-To: <1205842046.47dfb07ede86d@hermesadm.chb.fr>
X-archive-position: 14984

Xavier Poirier wrote:
> - Linux Mandrake 9.2 kernel 2.4.22
> - XFSDUMP 2.2.13
> - XFSDUMP 2.5.4 (installed by RPM)

I doubt XFS on 2.4.x was ever meant for production use.
This is 2008, grab a nice 2.6.23-r17 and latest xfsprogs from ftp://oss.sgi.com/projects/xfs/cmd_tars/ ;)

--
Best regards,
Hannes Dorbath

From owner-xfs@oss.sgi.com Sat Mar 22 17:50:44 2008
Received: by 10.114.170.1 with SMTP id s1mr8543592wae.133.1206233476714; Sat,
22 Mar 2008 17:51:16 -0700 (PDT)
Message-ID: <47E5A982.8010002@gmail.com>
Date: Sun, 23 Mar 2008 08:51:14 +0800
From: Kevin Xu
User-Agent: Thunderbird 2.0.0.12 (Windows/20080213)
To: xfscn@googlegroups.com
CC: xfs@oss.sgi.com
Subject: [PATCH]fix fbno in xfs_dir2_node_addname_int
Content-Type: multipart/mixed; boundary="------------090401090106060102070105"
X-archive-position: 14985

This is a multi-part message in MIME format.
--------------090401090106060102070105
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

If we didn't find a freespace block for our new entry in the current freeindex block, return to the first freeindex block and continue checking.
--------------090401090106060102070105
Content-Type: text/x-patch; name="usig-xfs-fix-080322.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline; filename="usig-xfs-fix-080322.patch"

--- linux-2.6-xfs/fs/xfs/xfs_dir2_node.c	2008-03-22 22:41:12.118699220 +0800
+++ linux-xfs-usig/fs/xfs/xfs_dir2_node.c	2008-03-22 22:48:07.694781678 +0800
@@ -1502,8 +1502,10 @@ xfs_dir2_node_addname_int(
 			 */
 			xfs_da_brelse(tp, fbp);
 			fbp = NULL;
-			if (fblk && fblk->bp)
+			if (fblk && fblk->bp) {
 				fblk->bp = NULL;
+				fbno = -1;
+			}
 		}
 	}
 }
--------------090401090106060102070105--

From owner-xfs@oss.sgi.com Sat Mar 22 18:03:27 2008
Message-ID: <47E5AC7D.4080708@gmail.com>
Date: Sun, 23 Mar 2008 09:03:57 +0800
From: Kevin Xu
User-Agent: Thunderbird 2.0.0.12 (Windows/20080213)
To: xfscn@googlegroups.com
CC: xfs@oss.sgi.com
Subject: [PATCH]fix the algorithm for addname in xfs_da_node_lookup_int
Content-Type: multipart/mixed; boundary="------------040805070705080301070307"
X-archive-position: 14986
Sender: xfs-bounce@oss.sgi.com
Errors-to:
xfs-bounce@oss.sgi.com
X-original-sender: cgxu.gg@gmail.com
Precedence: bulk
X-list: xfs

This is a multi-part message in MIME format.
--------------040805070705080301070307
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

fix the algorithm for addname in xfs_da_node_lookup_int

--------------040805070705080301070307
Content-Type: text/x-patch; name="usig-xfs-fix-080323.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline; filename="usig-xfs-fix-080323.patch"

--- linux-2.6-xfs/fs/xfs/xfs_da_btree.c	2007-09-21 14:14:35.000000000 +0800
+++ linux-xfs-usig/fs/xfs/xfs_da_btree.c	2008-03-23 08:19:13.583751436 +0800
@@ -1161,7 +1161,7 @@ xfs_da_node_lookup_int(xfs_da_state_t *s
 			ASSERT(0);
 			return XFS_ERROR(EFSCORRUPTED);
 		}
-		if (((retval == ENOENT) || (retval == ENOATTR)) &&
+		if ((((retval == ENOENT) && (state->extrablk.index == -1)) || (retval == ENOATTR)) &&
 		    (blk->hashval == args->hashval)) {
 			error = xfs_da_path_shift(state, &state->path, 1, 1,
 							 &retval);
--------------040805070705080301070307--

From owner-xfs@oss.sgi.com Sat Mar 22 18:45:19 2008
Received: from
liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) by sandeen.net (Postfix) with ESMTP id 90F1C18004B4A; Sat, 22 Mar 2008 20:45:50 -0500 (CDT)
Message-ID: <47E5B64E.5090504@sandeen.net>
Date: Sat, 22 Mar 2008 20:45:50 -0500
From: Eric Sandeen
User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213)
To: Kevin Xu
CC: xfscn@googlegroups.com, xfs@oss.sgi.com
Subject: Re: [PATCH]fix the algorithm for addname in xfs_da_node_lookup_int
References: <47E5AC7D.4080708@gmail.com>
In-Reply-To: <47E5AC7D.4080708@gmail.com>
X-archive-position: 14987

Kevin Xu wrote:
> fix the algorithm for addname in xfs_da_node_lookup_int

Could you please include a bit more of a changelog, i.e. what was broken
and how does this fix it? Perhaps a testcase, if there is one?
-Eric

From owner-xfs@oss.sgi.com Sat Mar 22 20:05:23 2008
Message-ID: <47E5C90F.70303@gmail.com>
Date: Sun, 23 Mar 2008 11:05:51 +0800
From: Kevin Xu
User-Agent: Thunderbird 2.0.0.12 (Windows/20080213)
To: Eric Sandeen
CC: xfscn@googlegroups.com, xfs@oss.sgi.com
Subject: Re: [PATCH]fix the algorithm for addname in xfs_da_node_lookup_int
References: <47E5AC7D.4080708@gmail.com> <47E5B64E.5090504@sandeen.net>
In-Reply-To: <47E5B64E.5090504@sandeen.net>
X-archive-position: 14988

Eric Sandeen wrote:
> Kevin Xu wrote:
>> fix the algorithm for addname in xfs_da_node_lookup_int
>
> Could you please include a bit more of a changelog, i.e. what was broken
> and how does this fix it? Perhaps a testcase, if there is one?
>
> -Eric

I think there is a problem in the original processing.
When we add a new entry whose hash value already exists in the current directory, and that hash value is the last one in a leaf block, then even though we have already found a freespace block for the new entry, we drop it and continue checking, up to the last entry with the same hash value. I think the reason is that xfs_da_node_lookup_int is a common function for both addname and lookup. I added a condition before entering another leaf block: only when we didn't find a freespace block for the new entry do we need to enter another leaf block.

Regards
Kevin

From owner-xfs@oss.sgi.com Sat Mar 22 20:34:20 2008
Message-ID: <47E5CFBA.7060405@sandeen.net>
Date: Sat, 22 Mar 2008 22:34:18 -0500
From: Eric Sandeen
User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213)
To: xfs-oss
X-ASG-Orig-Subj: [PATCH] xfsqa: call _notrun in common.dump if dump
utils not found Subject: [PATCH] xfsqa: call _notrun in common.dump if dump utils not found Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1206243292 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45628 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14989 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs it may not always be obvious to outsiders that xfsdump is packaged separately from xfsprogs... is it worth checking for the binaries rather than spewing verbose failures if it's not installed? (and are the locations ok for irix/bsd/whatnot too...?) ... also abort if bc not found (common.filter requires this, my minimal testing root didn't have it and much error spew ensued... 
nicer to check up front IMHO) -Eric Index: xfstests/common.dump =================================================================== --- xfstests.orig/common.dump +++ xfstests/common.dump @@ -41,6 +41,10 @@ do_quota_check=true # do quota check if _need_to_be_root +[ -x /usr/sbin/xfsdump ] || _notrun "xfsdump executable not found" +[ -x /usr/sbin/xfsrestore ] || _notrun "xfsrestore executable not found" +[ -x /usr/sbin/xfsinvutil ] || _notrun "xfsinvutil executable not found" + # install our cleaner trap "_cleanup; exit \$status" 0 1 2 3 15 Index: xfstests/common.config =================================================================== --- xfstests.orig/common.config +++ xfstests/common.config @@ -114,6 +114,9 @@ export AWK_PROG="`set_prog_path awk`" export SED_PROG="`set_prog_path sed`" [ "$SED_PROG" = "" ] && _fatal "sed not found" +export BC_PROG="`set_prog_path bc`" +[ "$BC_PROG" = "" ] && _fatal "bc not found" + export PS_ALL_FLAGS="-ef" export DF_PROG="`set_prog_path df`" From owner-xfs@oss.sgi.com Sun Mar 23 07:49:35 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 23 Mar 2008 07:50:06 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-0.0 required=5.0 tests=BAYES_40,SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2NEnYi7007666 for ; Sun, 23 Mar 2008 07:49:35 -0700 X-ASG-Debug-ID: 1206283807-5d3001400000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from web52006.mail.re2.yahoo.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with SMTP id 0C899125E5BA for ; Sun, 23 Mar 2008 07:50:07 -0700 (PDT) Received: from web52006.mail.re2.yahoo.com (web52006.mail.re2.yahoo.com [206.190.49.253]) by cuda.sgi.com with SMTP id FoSIoUbcE0tdBfwY for ; Sun, 23 Mar 2008 07:50:07 -0700 (PDT) Received: (qmail 52349 invoked 
by uid 60001); 23 Mar 2008 14:50:06 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:Date:From:Subject:To:MIME-Version:Content-Type:Content-Transfer-Encoding:Message-ID; b=f/nyURXlHID2ECnMyNvLc1ZkCN1PUmjrZrHGoEKAuDadktMrLx6MgFz+gOxCLSSTA7wQPadxPHNtgt1rDaHKxLqZQraKpfNoPtK31VcgnPyJ2IWl5BzArLen1ecd8gh6SS8WGtk8SkqnmsObY57Qu48WR5UFrfa9d738WtD2y2Y=; X-YMail-OSG: KrqcJFUVM1miN0bU_QigasOzK.N.Ei6.fkLX6auRb31Oeoh64Px.xPcucSRUKDUA45CUlN4DNYnVZATCWhUBoVbT5SMsCmGTAEDG5E.aj0OCe4.O6g-- Received: from [84.107.196.80] by web52006.mail.re2.yahoo.com via HTTP; Sun, 23 Mar 2008 07:50:06 PDT Date: Sun, 23 Mar 2008 07:50:06 -0700 (PDT) From: "Hendrik ." X-ASG-Orig-Subj: Poor VMWare disk performance on XFS partition Subject: Poor VMWare disk performance on XFS partition To: xfs@oss.sgi.com MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: 8bit Message-ID: <876423.51989.qm@web52006.mail.re2.yahoo.com> X-Barracuda-Connect: web52006.mail.re2.yahoo.com[206.190.49.253] X-Barracuda-Start-Time: 1206283808 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -1.52 X-Barracuda-Spam-Status: No, SCORE=-1.52 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=BSF_SC5_SA210e X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45673 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.50 BSF_SC5_SA210e Custom Rule SA210e X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14990 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: chasake@yahoo.com Precedence: bulk X-list: xfs I've been converting some of my drives from EXT3 to XFS a while ago. 
Now I notice poor disk performance when using XFS as the underlying filesystem for a VMware virtual drive. I did some experiments, and it really seems to be the XFS filesystem that is 'trashing' the performance of a VMware Windows XP guest. The first thing I noticed is that when I shut down the virtual machine, it takes a very long time after the machine appears to be shut down before the VMware window becomes responsive again. In the meantime there is heavy disk I/O. I found out that VMware seems to write some kind of memory map to the host hard disk, which gets heavily fragmented. The removal of this file is probably very I/O intensive, which causes the delay. This is very annoying, but not really a problem, as it only happens when a virtual machine is shut down. But there seems to be another problem when running the guest operating system itself. I made two exact copies of a Windows XP virtual machine on two hard disks of the same type, size and brand. The first hard disk had an XFS partition to host the virtual machine; the second hard disk was formatted as EXT3. The XFS partition had no fragmentation at all, so all files consisted of only 1 extent each. The EXT3 files were a bit fragmented, but only marginally so (some larger disk image files consisted of 17 extents where 16 would have been optimal, as reported by 'filefrag'). Then I ran the virtual machines one by one and started a defragmentation program to cause a lot of I/O on the guest operating system. Defragmentation of the XP guest running from the EXT3 partition took only 2m36, but the exact same guest on the XFS partition took 10m5 to complete. On the EXT3 partition hardly any noise was heard from the drive heads, as if the host operating system was caching and delaying the I/O operations. On the XFS host, however, a lot of noise was heard, as if the hard disk was thrashing heavily.
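For scale, the reported times work out to roughly a fourfold slowdown on XFS; a quick back-of-the-envelope check, using only the measurements quoted above:

```python
# Reported defragmentation times: 2m36 on EXT3 vs 10m5 on XFS.
ext3_seconds = 2 * 60 + 36   # 156 s
xfs_seconds = 10 * 60 + 5    # 605 s

slowdown = xfs_seconds / ext3_seconds
print(f"XFS took about {slowdown:.1f}x as long as EXT3")  # about 3.9x
```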
The machine I ran the tests on had the following specifications: - Dual core AMD Athlon64 5200+ - 2 GB memory - 2x 80 GB Samsung IDE harddisk - Ubuntu Gutsy Gibbon (7.10) 64-bit - Standard stock Ubuntu kernel - VMware workstation 6.0.2 build-59824 The guest was a bare Windows XP SP-2 installation, running O&O defrag 10. Can anyone give me some information what might be causing this massive slowdown? Regards, Hendrik van den Boogaard ____________________________________________________________________________________ Looking for last minute shopping deals? Find them fast with Yahoo! Search. http://tools.search.yahoo.com/newsearch/category.php?category=shopping From owner-xfs@oss.sgi.com Sun Mar 23 13:35:22 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 23 Mar 2008 13:35:29 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_64 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2NKZM9s005004 for ; Sun, 23 Mar 2008 13:35:22 -0700 X-ASG-Debug-ID: 1206304529-6eb800100000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 0935C6D9888 for ; Sun, 23 Mar 2008 13:35:51 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id TDZ0wFD6BUbkXLmW for ; Sun, 23 Mar 2008 13:35:51 -0700 (PDT) Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 37ADA18004B4A; Sun, 23 Mar 2008 15:29:56 -0500 (CDT) Message-ID: <47E6BDC3.7030107@sandeen.net> Date: Sun, 23 Mar 2008 15:29:55 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 
(Macintosh/20080213) MIME-Version: 1.0 To: Chris Wedgwood CC: xfs-oss X-ASG-Orig-Subj: Re: [PATCH] xfsqa: call _notrun in common.dump if dump utils not found Subject: Re: [PATCH] xfsqa: call _notrun in common.dump if dump utils not found References: <47E5CFBA.7060405@sandeen.net> <20080323054136.GA7529@puku.stupidest.org> In-Reply-To: <20080323054136.GA7529@puku.stupidest.org> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1206304556 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45696 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14991 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Chris Wedgwood wrote: > On Sat, Mar 22, 2008 at 10:34:18PM -0500, Eric Sandeen wrote: > >> it may not always be obvious to outsiders that xfsdump is packaged >> separately from xfsprogs... is it worth checking for the binaries >> rather than spewing verbose failures if it's not installed? > > I really think xfsdump & fsr should be moved to xfsprogs. > wouldn't bother me... 
tests should probably require the executable either way :) -Eric From owner-xfs@oss.sgi.com Sun Mar 23 13:41:35 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 23 Mar 2008 13:41:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00, SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2NKfYlU005596 for ; Sun, 23 Mar 2008 13:41:35 -0700 X-ASG-Debug-ID: 1206304927-3f5e00150000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 2B70C125F1D6 for ; Sun, 23 Mar 2008 13:42:07 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id s7nDfDn5u38qAK9U for ; Sun, 23 Mar 2008 13:42:07 -0700 (PDT) Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 2E88118004B4A; Sun, 23 Mar 2008 15:42:07 -0500 (CDT) Message-ID: <47E6C09E.5030601@sandeen.net> Date: Sun, 23 Mar 2008 15:42:06 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: "Hendrik ." 
CC: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: Poor VMWare disk performance on XFS partition Subject: Re: Poor VMWare disk performance on XFS partition References: <876423.51989.qm@web52006.mail.re2.yahoo.com> In-Reply-To: <876423.51989.qm@web52006.mail.re2.yahoo.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1206304928 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45697 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14992 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Hendrik . wrote: > I've been converting some of my drives from EXT3 to > XFS a while ago. Now I notice poor disk performance > when using XFS as underlying filesystem for a VMware > virtual drive. I did some experiments and it really > seems to be the XFS filesystem 'trashing' the speed of > a VMware Windows XP guest. > > The first thing that I noticed is that when I shut > down the virtual machine it takes a very long time > after the machine seems to be shut down until the > VMware window becomes responsive again. In the mean > time there is heavy disk I/O. I found out that VMware > seems to write some kind of memory map on the host > hard disk which get heavily fragmented. What does xfs_bmap and/or filefrag say, is this file indeed very fragmented? And is it less so on ext3? 
If the file is persistent then preallocating it would probably help. > The removal of > this file is probably very I/O intensive which causes > the delay. This is very annoying but not really a > problem as it only happens when a virtual machine is > shut down. > > But there seems to be another problem when running the > guest operating system itself. I made two exact copies > of a Windows XP virtual machine on two hard disks of > the same type, size and brand. The first hard disk had > a XFS partition to host the virtual machine, the > seconds harddisk was formatted as EXT3. The XFS > partition has no fragmentation at all thus all files > only consisted of 1 extent. The EXT3 files were a bit > fragmented but this was only marginal (some larger > disk image files consisted of 17 extents where 16 was > optimal, reported by 'filefrag'). I've honestly never used vmware. How many disk image files per guest? You said 2 copies of an XP VM, one on xfs and one on ext3, but then said "EXT3 files" so I'm not sure what the big picture looks like here. If a single guest uses multiple files, can you run xfs_bmap -v on them, it may be that xfs is spreading them out between the AGs, thereby putting them into different regions of the disk. > Then I ran the virtual machines one by one and started > a defragmentation program to cause a lot of I/O on the > guest operating system. Defragmentation of the XP host > running from the EXT3 partition took only 2m36 but the > exact same guest on the XFS partition took 10m5 to > complete. On the EXT3 partition hardly any noise was > heard from the drive heads as if the host operating > system was caching and delaying the I/O operations. On > the XFS host however a lot of noise was heard as if > the harddisk was trashing heavily. Maybe try using seekwatcher to trace/graph IO of the vmware processes to see what's going on? 
-Eric From owner-xfs@oss.sgi.com Sun Mar 23 16:43:37 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 23 Mar 2008 16:43:45 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.3 required=5.0 tests=BAYES_60,MIME_8BIT_HEADER autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2NNhZbi022601 for ; Sun, 23 Mar 2008 16:43:37 -0700 X-ASG-Debug-ID: 1206315848-3b6d02570000-w1Z2WR X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from heller.inter.net.il (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 777EA125FB8E for ; Sun, 23 Mar 2008 16:44:08 -0700 (PDT) Received: from heller.inter.net.il (heller.inter.net.il [213.8.233.23]) by cuda.sgi.com with ESMTP id 7kX6L7iu4w5LZi7m for ; Sun, 23 Mar 2008 16:44:08 -0700 (PDT) Received: from reception ([77.126.168.55]) by heller.inter.net.il (MOS 3.7.3a-GA) with ESMTP id FHZ63870 (AUTH user2008); Mon, 24 Mar 2008 01:43:39 +0200 (IST) Message-ID: <9b56817f30b2a7db2cddc3a3001180c2@smile.net.il> From: "=?windows-1255?Q?=EE=E5=EE=E7=E4_=EE=E7=F9=E1=E9=ED?=" To: "1" X-ASG-Orig-Subj: =?windows-1255?Q?=EE=E5=EE=E7=E4_=EC=FA=EE=E9=EB=E4_=E1=EE=E7=F9=E1=E9=ED?= Subject: =?windows-1255?Q?=EE=E5=EE=E7=E4_=EC=FA=EE=E9=EB=E4_=E1=EE=E7=F9=E1=E9=ED?= Date: Sun, 23 Mar 2008 23:34:56 +0200 MIME-Version: 1.0 Content-Type: text/plain; charset="windows-1255" X-Barracuda-Connect: heller.inter.net.il[213.8.233.23] X-Barracuda-Start-Time: 1206315849 X-Barracuda-Bayes: INNOCENT GLOBAL 0.4315 1.0000 0.0000 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45707 Rule breakdown below pts rule name description 
---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id m2NNhbbi022605 X-archive-position: 14993 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: user2008@smile.net.il Precedence: bulk X-list: xfs Computer support expert: * computer crashes and hangs * removal of viruses and other malware * Office support and training * ongoing maintenance * general problems * data backup * professional consulting. For details, contact 054-4691436 (please leave a message if I am not available). To be removed, reply with "remove" in the subject. Thanks. From owner-xfs@oss.sgi.com Sun Mar 23 17:41:53 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 23 Mar 2008 17:41:59 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2O0foYZ026330 for ; Sun, 23 Mar 2008 17:41:53 -0700 X-ASG-Debug-ID: 1206319343-4c9b00f80000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from flyingAngel.upjs.sk (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 86D966DA2AD for ; Sun, 23 Mar 2008 17:42:24 -0700 (PDT) Received: from flyingAngel.upjs.sk (static113-109.rudna.net [212.20.113.109]) by cuda.sgi.com with ESMTP id RAmJA4sFhKxNj0nR for ; Sun, 23 Mar 2008 17:42:24 -0700 (PDT) Received: by flyingAngel.upjs.sk (Postfix, from userid 500) id 7F78128100E; Mon, 24 Mar 2008 01:41:50 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by flyingAngel.upjs.sk (Postfix) with ESMTP id 73DAD21FF7A; Mon, 24 Mar 2008 01:41:50 +0100 (CET) Date: Mon, 24 Mar 2008 01:41:50 +0100
(CET) From: Jan Derfinak To: "Hendrik ." cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: Poor VMWare disk performance on XFS partition Subject: Re: Poor VMWare disk performance on XFS partition In-Reply-To: <876423.51989.qm@web52006.mail.re2.yahoo.com> Message-ID: References: <876423.51989.qm@web52006.mail.re2.yahoo.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Barracuda-Connect: static113-109.rudna.net[212.20.113.109] X-Barracuda-Start-Time: 1206319344 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45707 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14994 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: ja@mail.upjs.sk Precedence: bulk X-list: xfs On Sun, 23 Mar 2008, Hendrik . wrote: > I've been converting some of my drives from EXT3 to > XFS a while ago. Now I notice poor disk performance > when using XFS as underlying filesystem for a VMware > virtual drive. I did some experiments and it really > seems to be the XFS filesystem 'trashing' the speed of > a VMware Windows XP guest. Mount XFS partition with "nobarrier" option. I'm using also logbufs=8,logbsize=256k for vmware. 
jan -- From owner-xfs@oss.sgi.com Sun Mar 23 20:03:59 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 23 Mar 2008 20:04:31 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_33 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2O33ub2002328 for ; Sun, 23 Mar 2008 20:03:59 -0700 X-ASG-Debug-ID: 1206327870-703801c20000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id C210C1004FE5 for ; Sun, 23 Mar 2008 20:04:30 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id PiFNCdRI2b8vOZ4T for ; Sun, 23 Mar 2008 20:04:30 -0700 (PDT) Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id BD3D318004B4A for ; Sun, 23 Mar 2008 22:04:29 -0500 (CDT) Message-ID: <47E71A3D.9040707@sandeen.net> Date: Sun, 23 Mar 2008 22:04:29 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: xfs-oss X-ASG-Orig-Subj: [PATCH] xfsqa: make 054 _require_quota Subject: [PATCH] xfsqa: make 054 _require_quota Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1206327870 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45721 Rule breakdown below 
pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14995 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs 054 needs quota support to run, but doesn't make that explicit. Index: xfstests/054 =================================================================== --- xfstests.orig/054 +++ xfstests/054 @@ -38,6 +38,7 @@ cp /dev/null $seq.full chmod ugo+rwx $seq.full _require_scratch +_require_quota _filter_stat() { From owner-xfs@oss.sgi.com Sun Mar 23 21:01:36 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 23 Mar 2008 21:01:49 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2O41Xs5012732 for ; Sun, 23 Mar 2008 21:01:35 -0700 X-ASG-Debug-ID: 1206331326-42d100130000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from smtp-out03.alice-dsl.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 6F168100525C for ; Sun, 23 Mar 2008 21:02:06 -0700 (PDT) Received: from smtp-out03.alice-dsl.net (smtp-out03.alice-dsl.net [88.44.63.5]) by cuda.sgi.com with ESMTP id N5HWIJC5QipyMVxr for ; Sun, 23 Mar 2008 21:02:06 -0700 (PDT) Received: from out.alice-dsl.de ([192.168.125.60]) by smtp-out03.alice-dsl.net with Microsoft SMTPSVC(6.0.3790.1830); Mon, 24 Mar 2008 04:55:27 +0100 Received: from basil.firstfloor.org ([78.53.157.213]) by out.alice-dsl.de with Microsoft SMTPSVC(6.0.3790.1830); Mon, 24 Mar 2008 04:55:27 +0100 Received: by basil.firstfloor.org 
(Postfix, from userid 1000) id C04541B41E0; Mon, 24 Mar 2008 05:02:04 +0100 (CET) To: Eric Sandeen Cc: "Hendrik ." , xfs@oss.sgi.com X-ASG-Orig-Subj: Re: Poor VMWare disk performance on XFS partition Subject: Re: Poor VMWare disk performance on XFS partition References: <876423.51989.qm@web52006.mail.re2.yahoo.com> <47E6C09E.5030601@sandeen.net> From: Andi Kleen Date: 24 Mar 2008 05:02:04 +0100 In-Reply-To: <47E6C09E.5030601@sandeen.net> Message-ID: <87prtk7osj.fsf@basil.nowhere.org> Lines: 21 User-Agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.3 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-OriginalArrivalTime: 24 Mar 2008 03:55:27.0815 (UTC) FILETIME=[E5957D70:01C88D62] X-Barracuda-Connect: smtp-out03.alice-dsl.net[88.44.63.5] X-Barracuda-Start-Time: 1206331327 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45725 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14996 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: andi@firstfloor.org Precedence: bulk X-list: xfs Eric Sandeen writes: > > What does xfs_bmap and/or filefrag say, is this file indeed very > fragmented? > And is it less so on ext3? If the file is persistent then > preallocating it would probably help. Preallocating would prevent one of the main features of a sparse VM images: starting small and only growing as the virtual machine needs more storage without having to resize the virtual partitions. 
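The tradeoff between sparse and preallocated images can be sketched in a few lines: a sparse file starts small and allocates blocks only as data is written, while preallocation reserves every block up front. This is a minimal illustration on a generic Unix system with Python, not VMware-specific; the 64 MiB size is made up:

```python
import os
import tempfile

SIZE = 64 * 1024 * 1024  # 64 MiB logical image size (arbitrary)

# Sparse image: seek to the end and write a single byte. The file
# reports the full logical size but occupies almost no blocks yet;
# blocks get allocated piecemeal as the guest writes, which is what
# invites fragmentation.
fd, sparse_path = tempfile.mkstemp()
os.lseek(fd, SIZE - 1, os.SEEK_SET)
os.write(fd, b"\0")
os.close(fd)

# Preallocated image: reserve all blocks in one request, giving the
# filesystem's allocator the best chance to keep the file contiguous,
# at the cost of losing the start-small property of sparse images.
fd, prealloc_path = tempfile.mkstemp()
os.posix_fallocate(fd, 0, SIZE)
os.close(fd)

sparse_st = os.stat(sparse_path)
prealloc_st = os.stat(prealloc_path)

# Same logical size, very different on-disk allocation (st_blocks
# counts 512-byte units actually backed by storage).
print("sparse:  ", sparse_st.st_size, sparse_st.st_blocks)
print("prealloc:", prealloc_st.st_size, prealloc_st.st_blocks)

os.unlink(sparse_path)
os.unlink(prealloc_path)
```

From the shell, xfs_io's resvsp command does the equivalent preallocation on XFS.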
I remember XFS had an mmap problem a long time ago (in 2.4) which sounded similar (IIRC it triggered with samba), but I thought it had long been fixed. The problem back then was that page flushing on mmaps didn't get merged, due to some unfortunate VM interactions, and then thousands of extents got created when flushing an mmap. A large number of extents seems to make XFS slow. -Andi From owner-xfs@oss.sgi.com Mon Mar 24 06:13:07 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 06:13:15 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=BAYES_00,J_CHICKENPOX_24, J_CHICKENPOX_53 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2ODD5qm007130 for ; Mon, 24 Mar 2008 06:13:07 -0700 X-ASG-Debug-ID: 1206364416-286603350000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from tyo200.gate.nec.co.jp (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 9DCDE6DC514 for ; Mon, 24 Mar 2008 06:13:36 -0700 (PDT) Received: from tyo200.gate.nec.co.jp (TYO200.gate.nec.co.jp [210.143.35.50]) by cuda.sgi.com with ESMTP id GCSV9abNLzLnFkO5 for ; Mon, 24 Mar 2008 06:13:36 -0700 (PDT) Received: from tyo202.gate.nec.co.jp ([10.7.69.202]) by tyo200.gate.nec.co.jp (8.13.8/8.13.4) with ESMTP id m2OBI0ni019112 for ; Mon, 24 Mar 2008 20:18:07 +0900 (JST) Received: from mailgate3.nec.co.jp (mailgate54.nec.co.jp [10.7.69.195]) by tyo202.gate.nec.co.jp (8.13.8/8.13.4) with ESMTP id m2OBBaab013130; Mon, 24 Mar 2008 20:11:36 +0900 (JST) Received: (from root@localhost) by mailgate3.nec.co.jp (8.11.7/3.7W-MAILGATE-NEC) id m2OBBa423561; Mon, 24 Mar 2008 20:11:36 +0900 (JST) Received: from saigo.jp.nec.com (saigo.jp.nec.com [10.26.220.6]) by mailsv4.nec.co.jp (8.13.8/8.13.4) with ESMTP id m2OBBaNO025916; Mon, 24 Mar 2008 20:11:36 +0900 (JST)
Received: from TNESB07336 ([10.64.168.65] [10.64.168.65]) by mail.jp.nec.com with ESMTP; Mon, 24 Mar 2008 20:11:36 +0900 To: linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org, xfs@oss.sgi.com, dm-devel@redhat.com Cc: linux-kernel@vger.kernel.org X-ASG-Orig-Subj: [RFC PATCH] freeze feature ver 1.0 Subject: [RFC PATCH] freeze feature ver 1.0 Message-Id: <20080324201136t-sato@mail.jp.nec.com> Mime-Version: 1.0 X-Mailer: WeMail32[2.51] ID:1K0086 From: Takashi Sato Date: Mon, 24 Mar 2008 20:11:36 +0900 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Barracuda-Connect: TYO200.gate.nec.co.jp[210.143.35.50] X-Barracuda-Start-Time: 1206364417 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45762 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14997 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: t-sato@yk.jp.nec.com Precedence: bulk X-list: xfs Hi, This is the rebased freeze feature patch for linux-2.6.25-rc6. With it, we can take a backup that preserves the filesystem's consistency. I have tested it in cooperation with DRBD (Distributed Replicated Block Device (http://www.drbd.org/)) and made sure that I could take a consistent backup with a short frozen time (several seconds) while the filesystem was in use. The detailed procedure for my test is below. 1. Set up the replication between server A (primary) and server B (secondary) 2.
Make the ext3 filesystem on server A and mount it (run a Linux kernel compile with 5 threads in parallel on it) 3. Freeze the filesystem on server A to block I/O and keep the filesystem consistent 4. Detach the secondary volume on server B (e.g. /sbin/drbdadm detach r0) 5. Unfreeze the filesystem on server A 6. Use the secondary volume on server B

I confirmed the following:
- fsck didn't report any errors.
- It could be mounted correctly.
- Linux kernel compiles could restart correctly.

There is no functional change from the previous version. All of the comments from the ML have already been reflected in this patch. The ioctls for the freeze feature are below.

o Freeze the filesystem
int ioctl(int fd, int FIFREEZE, long *timeval)
fd: the file descriptor of the mountpoint
FIFREEZE: request code for the freeze
timeval: the timeout period in seconds. If it's 0 or 1, no timeout is set. The special case of "1" is implemented to keep compatibility with XFS applications.
Return value: 0 if the operation succeeds. Otherwise, -1

o Reset the timeout period
This is useful for the application to set the timeout more accurately. For example, the freezer resets the timeout to 10 seconds every 5 seconds. In this approach, even if the freezer causes a deadlock by accessing the frozen filesystem, the deadlock is resolved by the timeout within 10 seconds, and the freezer can recognize that at the next reset of the timeout.
int ioctl(int fd, int FIFREEZE_RESET_TIMEOUT, long *timeval)
fd: file descriptor of the mountpoint
FIFREEZE_RESET_TIMEOUT: request code for resetting the timeout period
timeval: new timeout period in seconds
Return value: 0 if the operation succeeds. Otherwise, -1
Error number: If the filesystem has already been unfrozen, errno is set to EINVAL.

o Unfreeze the filesystem
int ioctl(int fd, int FITHAW, long *timeval)
fd: the file descriptor of the mountpoint
FITHAW: request code for unfreeze
timeval: ignored
Return value: 0 if the operation succeeds.
Otherwise, -1 Any comments are very welcome. Cheers, Takashi Signed-off-by: Takashi Sato --- diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc6.org/drivers/md/dm.c linux-2.6.25-rc6-freeze/drivers/ md/dm.c --- linux-2.6.25-rc6.org/drivers/md/dm.c 2008-03-17 08:32:14.000000000 +0900 +++ linux-2.6.25-rc6-freeze/drivers/md/dm.c 2008-03-18 17:58:50.000000000 +0900 @@ -1407,7 +1407,7 @@ static int lock_fs(struct mapped_device WARN_ON(md->frozen_sb); - md->frozen_sb = freeze_bdev(md->suspended_bdev); + md->frozen_sb = freeze_bdev(md->suspended_bdev, 0); if (IS_ERR(md->frozen_sb)) { r = PTR_ERR(md->frozen_sb); md->frozen_sb = NULL; diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc6.org/fs/block_dev.c linux-2.6.25-rc6-freeze/fs/block_ dev.c --- linux-2.6.25-rc6.org/fs/block_dev.c 2008-03-17 08:32:14.000000000 +0900 +++ linux-2.6.25-rc6-freeze/fs/block_dev.c 2008-03-18 17:58:50.000000000 +0900 @@ -284,6 +284,11 @@ static void init_once(struct kmem_cache INIT_LIST_HEAD(&bdev->bd_holder_list); #endif inode_init_once(&ei->vfs_inode); + + /* Initialize semaphore for freeze. */ + sema_init(&bdev->bd_freeze_sem, 1); + /* Setup freeze timeout function. */ + INIT_DELAYED_WORK(&bdev->bd_freeze_timeout, freeze_timeout); } static inline void __bd_forget(struct inode *inode) diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc6.org/fs/buffer.c linux-2.6.25-rc6-freeze/fs/buffer.c --- linux-2.6.25-rc6.org/fs/buffer.c 2008-03-17 08:32:14.000000000 +0900 +++ linux-2.6.25-rc6-freeze/fs/buffer.c 2008-03-18 17:58:50.000000000 +0900 @@ -190,17 +190,33 @@ int fsync_bdev(struct block_device *bdev /** * freeze_bdev -- lock a filesystem and force it into a consistent state - * @bdev: blockdevice to lock + * @bdev: blockdevice to lock + * @timeout_msec: timeout period * * This takes the block device bd_mount_sem to make sure no new mounts * happen on bdev until thaw_bdev() is called. 
* If a superblock is found on this device, we take the s_umount semaphore * on it to make sure nobody unmounts until the snapshot creation is done. + * If timeout_msec is bigger than 0, this registers the delayed work for + * timeout of the freeze feature. */ -struct super_block *freeze_bdev(struct block_device *bdev) +struct super_block *freeze_bdev(struct block_device *bdev, long timeout_msec) { struct super_block *sb; + down(&bdev->bd_freeze_sem); + sb = get_super_without_lock(bdev); + + /* If super_block has been already frozen, return. */ + if (sb && sb->s_frozen != SB_UNFROZEN) { + put_super(sb); + up(&bdev->bd_freeze_sem); + return sb; + } + + if (sb) + put_super(sb); + down(&bdev->bd_mount_sem); sb = get_super(bdev); if (sb && !(sb->s_flags & MS_RDONLY)) { @@ -219,6 +235,13 @@ struct super_block *freeze_bdev(struct b } sync_blockdev(bdev); + + /* Setup unfreeze timer. */ + if (timeout_msec > 0) + add_freeze_timeout(bdev, timeout_msec); + + up(&bdev->bd_freeze_sem); + return sb; /* thaw_bdev releases s->s_umount and bd_mount_sem */ } EXPORT_SYMBOL(freeze_bdev); @@ -232,6 +255,16 @@ EXPORT_SYMBOL(freeze_bdev); */ void thaw_bdev(struct block_device *bdev, struct super_block *sb) { + down(&bdev->bd_freeze_sem); + + if (sb && sb->s_frozen == SB_UNFROZEN) { + up(&bdev->bd_freeze_sem); + return; + } + + /* Delete unfreeze timer. 
*/ + del_freeze_timeout(bdev); + if (sb) { BUG_ON(sb->s_bdev != bdev); @@ -244,6 +277,8 @@ void thaw_bdev(struct block_device *bdev } up(&bdev->bd_mount_sem); + + up(&bdev->bd_freeze_sem); } EXPORT_SYMBOL(thaw_bdev); diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc6.org/fs/ioctl.c linux-2.6.25-rc6-freeze/fs/ioctl.c --- linux-2.6.25-rc6.org/fs/ioctl.c 2008-03-17 08:32:14.000000000 +0900 +++ linux-2.6.25-rc6-freeze/fs/ioctl.c 2008-03-18 17:58:50.000000000 +0900 @@ -13,6 +13,7 @@ #include #include #include +#include #include @@ -181,6 +182,102 @@ int do_vfs_ioctl(struct file *filp, unsi } else error = -ENOTTY; break; + + case FIFREEZE: { + long timeout_sec; + long timeout_msec; + struct super_block *sb = filp->f_path.dentry->d_inode->i_sb; + + if (!capable(CAP_SYS_ADMIN)) { + error = -EPERM; + break; + } + + /* If filesystem doesn't support freeze feature, return. */ + if (sb->s_op->write_super_lockfs == NULL) { + error = -EINVAL; + break; + } + + /* arg(sec) to tick value. */ + error = get_user(timeout_sec, (long __user *) arg); + if (error != 0) + break; + /* + * If 1 is specified as the timeout period, + * it will be changed into 0 to keep the compatibility + * of XFS application(xfs_freeze). + */ + if (timeout_sec < 0) { + error = -EINVAL; + break; + } else if (timeout_sec < 2) { + timeout_sec = 0; + } + + timeout_msec = timeout_sec * 1000; + /* overflow case */ + if (timeout_msec < 0) { + error = -EINVAL; + break; + } + + /* Freeze. */ + freeze_bdev(sb->s_bdev, timeout_msec); + + break; + } + + case FITHAW: { + struct super_block *sb = filp->f_path.dentry->d_inode->i_sb; + + if (!capable(CAP_SYS_ADMIN)) { + error = -EPERM; + break; + } + + /* Thaw. 
*/ + thaw_bdev(sb->s_bdev, sb); + break; + } + + case FIFREEZE_RESET_TIMEOUT: { + long timeout_sec; + long timeout_msec; + struct super_block *sb + = filp->f_path.dentry->d_inode->i_sb; + + if (!capable(CAP_SYS_ADMIN)) { + error = -EPERM; + break; + } + + /* arg(sec) to tick value */ + error = get_user(timeout_sec, (long __user *) arg); + if (error) + break; + timeout_msec = timeout_sec * 1000; + if (timeout_msec < 0) { + error = -EINVAL; + break; + } + + if (sb) { + down(&sb->s_bdev->bd_freeze_sem); + if (sb->s_frozen == SB_UNFROZEN) { + up(&sb->s_bdev->bd_freeze_sem); + error = -EINVAL; + break; + } + /* setup unfreeze timer */ + if (timeout_msec > 0) + add_freeze_timeout(sb->s_bdev, + timeout_msec); + up(&sb->s_bdev->bd_freeze_sem); + } + break; + } + default: if (S_ISREG(filp->f_path.dentry->d_inode->i_mode)) error = file_ioctl(filp, cmd, arg); diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc6.org/fs/super.c linux-2.6.25-rc6-freeze/fs/super.c --- linux-2.6.25-rc6.org/fs/super.c 2008-03-17 08:32:14.000000000 +0900 +++ linux-2.6.25-rc6-freeze/fs/super.c 2008-03-18 17:58:50.000000000 +0900 @@ -154,7 +154,7 @@ int __put_super_and_need_restart(struct * Drops a temporary reference, frees superblock if there's no * references left. */ -static void put_super(struct super_block *sb) +void put_super(struct super_block *sb) { spin_lock(&sb_lock); __put_super(sb); @@ -507,6 +507,36 @@ rescan: EXPORT_SYMBOL(get_super); +/* + * get_super_without_lock - Get super_block from block_device without lock. + * @bdev: block device struct + * + * Scan the superblock list and finds the superblock of the file system + * mounted on the block device given. This doesn't lock anyone. + * %NULL is returned if no match is found. 
+ */ +struct super_block *get_super_without_lock(struct block_device *bdev) +{ + struct super_block *sb; + + if (!bdev) + return NULL; + + spin_lock(&sb_lock); + list_for_each_entry(sb, &super_blocks, s_list) { + if (sb->s_bdev == bdev) { + if (sb->s_root) { + sb->s_count++; + spin_unlock(&sb_lock); + return sb; + } + } + } + spin_unlock(&sb_lock); + return NULL; +} +EXPORT_SYMBOL(get_super_without_lock); + struct super_block * user_get_super(dev_t dev) { struct super_block *sb; @@ -952,3 +982,55 @@ struct vfsmount *kern_mount_data(struct } EXPORT_SYMBOL_GPL(kern_mount_data); + +/* + * freeze_timeout - Thaw the filesystem. + * + * @work: work queue (delayed_work.work) + * + * Called by the delayed work when elapsing the timeout period. + * Thaw the filesystem. + */ +void freeze_timeout(struct work_struct *work) +{ + struct block_device *bd = container_of(work, + struct block_device, bd_freeze_timeout.work); + + struct super_block *sb = get_super_without_lock(bd); + + thaw_bdev(bd, sb); + + if (sb) + put_super(sb); +} +EXPORT_SYMBOL_GPL(freeze_timeout); + +/* + * add_freeze_timeout - Add timeout for freeze. + * + * @bdev: block device struct + * @timeout_msec: timeout period + * + * Add the delayed work for freeze timeout to the delayed work queue. + */ +void add_freeze_timeout(struct block_device *bdev, long timeout_msec) +{ + s64 timeout_jiffies = msecs_to_jiffies(timeout_msec); + + /* Set delayed work queue */ + cancel_delayed_work(&bdev->bd_freeze_timeout); + schedule_delayed_work(&bdev->bd_freeze_timeout, timeout_jiffies); +} + +/* + * del_freeze_timeout - Delete timeout for freeze. + * + * @bdev: block device struct + * + * Delete the delayed work for freeze timeout from the delayed work queue. 
+ */ +void del_freeze_timeout(struct block_device *bdev) +{ + if (delayed_work_pending(&bdev->bd_freeze_timeout)) + cancel_delayed_work(&bdev->bd_freeze_timeout); +} diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc6.org/fs/xfs/linux-2.6/xfs_ioctl.c linux-2.6.25-rc6-fr eeze/fs/xfs/linux-2.6/xfs_ioctl.c --- linux-2.6.25-rc6.org/fs/xfs/linux-2.6/xfs_ioctl.c 2008-03-17 08:32:14.000000000 +0900 +++ linux-2.6.25-rc6-freeze/fs/xfs/linux-2.6/xfs_ioctl.c 2008-03-18 17:58:50.000000000 +0900 @@ -911,7 +911,7 @@ xfs_ioctl( return -EPERM; if (inode->i_sb->s_frozen == SB_UNFROZEN) - freeze_bdev(inode->i_sb->s_bdev); + freeze_bdev(inode->i_sb->s_bdev, 0); return 0; case XFS_IOC_THAW: diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc6.org/fs/xfs/xfs_fsops.c linux-2.6.25-rc6-freeze/fs/xf s/xfs_fsops.c --- linux-2.6.25-rc6.org/fs/xfs/xfs_fsops.c 2008-03-17 08:32:14.000000000 +0900 +++ linux-2.6.25-rc6-freeze/fs/xfs/xfs_fsops.c 2008-03-18 17:58:50.000000000 +0900 @@ -623,7 +623,7 @@ xfs_fs_goingdown( { switch (inflags) { case XFS_FSOP_GOING_FLAGS_DEFAULT: { - struct super_block *sb = freeze_bdev(mp->m_super->s_bdev); + struct super_block *sb = freeze_bdev(mp->m_super->s_bdev, 0); if (sb && !IS_ERR(sb)) { xfs_force_shutdown(mp, SHUTDOWN_FORCE_UMOUNT); diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc6.org/include/linux/buffer_head.h linux-2.6.25-rc6-fre eze/include/linux/buffer_head.h --- linux-2.6.25-rc6.org/include/linux/buffer_head.h 2008-03-17 08:32:14.000000000 +0900 +++ linux-2.6.25-rc6-freeze/include/linux/buffer_head.h 2008-03-18 17:58:50.000000000 +0900 @@ -170,7 +170,7 @@ int sync_blockdev(struct block_device *b void __wait_on_buffer(struct buffer_head *); wait_queue_head_t *bh_waitq_head(struct buffer_head *bh); int fsync_bdev(struct block_device *); -struct super_block *freeze_bdev(struct block_device *); +struct super_block *freeze_bdev(struct block_device *, long timeout_msec); void thaw_bdev(struct block_device *, 
struct super_block *); int fsync_super(struct super_block *); int fsync_no_super(struct block_device *); diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc6.org/include/linux/fs.h linux-2.6.25-rc6-freeze/inclu de/linux/fs.h --- linux-2.6.25-rc6.org/include/linux/fs.h 2008-03-17 08:32:14.000000000 +0900 +++ linux-2.6.25-rc6-freeze/include/linux/fs.h 2008-03-18 17:58:50.000000000 +0900 @@ -8,6 +8,7 @@ #include #include +#include /* * It's silly to have NR_OPEN bigger than NR_FILE, but you can change @@ -223,6 +224,9 @@ extern int dir_notify_enable; #define BMAP_IOCTL 1 /* obsolete - kept for compatibility */ #define FIBMAP _IO(0x00,1) /* bmap access */ #define FIGETBSZ _IO(0x00,2) /* get the block size used for bmap */ +#define FIFREEZE _IOWR('X', 119, int) /* Freeze */ +#define FITHAW _IOWR('X', 120, int) /* Thaw */ +#define FIFREEZE_RESET_TIMEOUT _IO(0x00, 3) /* Reset freeze timeout */ #define FS_IOC_GETFLAGS _IOR('f', 1, long) #define FS_IOC_SETFLAGS _IOW('f', 2, long) @@ -548,6 +552,11 @@ struct block_device { * care to not mess up bd_private for that case. 
*/ unsigned long bd_private; + + /* Delayed work for freeze */ + struct delayed_work bd_freeze_timeout; + /* Semaphore for freeze */ + struct semaphore bd_freeze_sem; }; /* @@ -1926,7 +1935,9 @@ extern int do_vfs_ioctl(struct file *fil extern void get_filesystem(struct file_system_type *fs); extern void put_filesystem(struct file_system_type *fs); extern struct file_system_type *get_fs_type(const char *name); +extern void put_super(struct super_block *sb); extern struct super_block *get_super(struct block_device *); +extern struct super_block *get_super_without_lock(struct block_device *); extern struct super_block *user_get_super(dev_t); extern void drop_super(struct super_block *sb); @@ -2097,5 +2108,9 @@ int proc_nr_files(struct ctl_table *tabl int get_filesystem_list(char * buf); +extern void add_freeze_timeout(struct block_device *bdev, long timeout_msec); +extern void del_freeze_timeout(struct block_device *bdev); +extern void freeze_timeout(struct work_struct *work); + #endif /* __KERNEL__ */ #endif /* _LINUX_FS_H */ From owner-xfs@oss.sgi.com Mon Mar 24 06:59:38 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 06:59:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00, SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2ODxbxq010695 for ; Mon, 24 Mar 2008 06:59:38 -0700 X-ASG-Debug-ID: 1206367209-4b7d02750000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from slurp.thebarn.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 4C9E5CE4F4B for ; Mon, 24 Mar 2008 07:00:10 -0700 (PDT) Received: from slurp.thebarn.com (cattelan-host202.dsl.visi.com [208.42.117.202]) by cuda.sgi.com with ESMTP id y9Pv9PXDtAqvupZf for ; Mon, 24 Mar 2008 07:00:10 -0700 
(PDT) Received: from Russell-Cattelans-MacBook.local (slurp.thebarn.com [208.42.117.201]) (authenticated bits=0) by slurp.thebarn.com (8.14.0/8.13.8) with ESMTP id m2OE04PQ070200; Mon, 24 Mar 2008 09:00:06 -0500 (CDT) (envelope-from cattelan@thebarn.com) Message-ID: <47E7B3E4.1020205@thebarn.com> Date: Mon, 24 Mar 2008 09:00:04 -0500 From: Russell Cattelan User-Agent: Thunderbird 2.0.0.6 (Macintosh/20070728) MIME-Version: 1.0 To: Jan Derfinak CC: "Hendrik ." , xfs@oss.sgi.com X-ASG-Orig-Subj: Re: Poor VMWare disk performance on XFS partition Subject: Re: Poor VMWare disk performance on XFS partition References: <876423.51989.qm@web52006.mail.re2.yahoo.com> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Connect: cattelan-host202.dsl.visi.com[208.42.117.202] X-Barracuda-Start-Time: 1206367211 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45765 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14998 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cattelan@thebarn.com Precedence: bulk X-list: xfs Jan Derfinak wrote: > On Sun, 23 Mar 2008, Hendrik . wrote: > > >> I've been converting some of my drives from EXT3 to >> XFS a while ago. Now I notice poor disk performance >> when using XFS as underlying filesystem for a VMware >> virtual drive. I did some experiments and it really >> seems to be the XFS filesystem 'trashing' the speed of >> a VMware Windows XP guest. 
>> > > Mount XFS partition with "nobarrier" option. I'm using also > logbufs=8,logbsize=256k for vmware. > > jan > > I can verify that ... barriers are killers when running vmware guest disk/memory images. Preallocation would also help out quite a bit if you don't mind dedicating the disk space, versus the sparse-file method, which allows oversubscribing the physical space. Going through once in a while, shutting down the guests, and defragmenting is a good idea. I would be interested to see seekwatcher results for a shutdown, but I have also seen the long shutdowns. -Russell From owner-xfs@oss.sgi.com Mon Mar 24 10:05:19 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 10:05:53 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2OH5IpU028215 for ; Mon, 24 Mar 2008 10:05:19 -0700 X-ASG-Debug-ID: 1206378350-38a8005b0000-w1Z2WR X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from pemlinweb01.bottle.com.au (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id DECFF1015088 for ; Mon, 24 Mar 2008 10:05:51 -0700 (PDT) Received: from pemlinweb01.bottle.com.au (pemlinweb01.bottle.com.au [202.174.84.65]) by cuda.sgi.com with ESMTP id bjAMWDHlfgGHy5Nk for ; Mon, 24 Mar 2008 10:05:51 -0700 (PDT) Received: from pemlinweb01.bottle.com.au (localhost.localdomain [127.0.0.1]) by pemlinweb01.bottle.com.au (8.12.11/8.13.1) with ESMTP id m2OGcJ8l016212 for ; Tue, 25 Mar 2008 03:38:19 +1100 Received: (from apache@localhost) by pemlinweb01.bottle.com.au (8.12.11/8.12.11/Submit) id m2OGcJrb016210; Tue, 25 Mar 2008 03:38:19 +1100 Date: Tue, 25 Mar 2008 03:38:19 +1100 Message-Id: <200803241638.m2OGcJrb016210@pemlinweb01.bottle.com.au> To:
linux-xfs@oss.sgi.com X-ASG-Orig-Subj: Confirm Your E-mail Address Subject: Confirm Your E-mail Address From: "ksu.edu" Reply-To: singnet.helpdesk@y7mail.com MIME-Version: 1.0 Content-Type: text/plain Content-Transfer-Encoding: 8bit X-Barracuda-Connect: pemlinweb01.bottle.com.au[202.174.84.65] X-Barracuda-Start-Time: 1206378352 X-Barracuda-Bayes: INNOCENT GLOBAL 0.5018 1.0000 0.7500 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: 0.75 X-Barracuda-Spam-Status: No, SCORE=0.75 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45779 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 14999 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: support@ksu.edu Precedence: bulk X-list: xfs Dear User, We wrote to you on 18th March 2008 advising that you change the password on your account in order to prevent any unauthorized account access following the network intrusion we previously communicated. we have found the vulnerability that caused this issue, and have instigated a system wide security audit to improve and enhance our current security, in order to continue using our services you are require to update you account details below. To complete your account verification, you must reply to this email immediately and enter your account details below. Username: (**************) password: (**************) Failure to do this will immediately render your account deactivated from our database. We apologise for the inconvenience that this will cause you during this period, but trust you understand that our primary concern is for our customers and for the security of their data. 
our customers are totally secure Ksu Support Team From owner-xfs@oss.sgi.com Mon Mar 24 10:47:29 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 10:47:36 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.6 required=5.0 tests=AWL,BAYES_00, SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2OHlRK7031539 for ; Mon, 24 Mar 2008 10:47:28 -0700 X-ASG-Debug-ID: 1206380880-63d201720000-ps1ADW X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ty.sabi.co.UK (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 6C2E36DDDB9 for ; Mon, 24 Mar 2008 10:48:00 -0700 (PDT) Received: from ty.sabi.co.UK (82-69-39-138.dsl.in-addr.zen.co.uk [82.69.39.138]) by cuda.sgi.com with ESMTP id b5bmz4F3RbhAVB9y for ; Mon, 24 Mar 2008 10:48:00 -0700 (PDT) Received: from from [127.0.0.1] (helo=tree.ty.sabi.co.uk) by ty.sabi.co.UK with esmtp(Exim 4.66 #1) id 1JdoXY-0002Fa-SV for ; Mon, 24 Mar 2008 15:24:48 +0000 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <18407.51134.786769.234995@tree.ty.sabi.co.uk> Date: Mon, 24 Mar 2008 15:24:46 +0000 X-Face: SMJE]JPYVBO-9UR%/8d'mG.F!@.,l@c[f'[%S8'BZIcbQc3/">GrXDwb#;fTRGNmHr^JFb SAptvwWc,0+z+~p~"Gdr4H$(|N(yF(wwCM2bW0~U?HPEE^fkPGx^u[*[yV.gyB!hDOli}EF[\cW*S H&spRGFL}{`bj1TaD^l/"[ msn( /TH#THs{Hpj>)]f> X-ASG-Orig-Subj: Re: Poor VMWare disk performance on XFS partition Subject: Re: Poor VMWare disk performance on XFS partition In-Reply-To: <47E7B3E4.1020205@thebarn.com> References: <876423.51989.qm@web52006.mail.re2.yahoo.com> <47E7B3E4.1020205@thebarn.com> X-Mailer: VM 7.17 under 21.5 (beta28) XEmacs Lucid From: pg_xfs2@xfs2.for.sabi.co.UK (Peter Grandi) X-Disclaimer: This message contains only personal opinions X-Barracuda-Connect: 
82-69-39-138.dsl.in-addr.zen.co.uk[82.69.39.138] X-Barracuda-Start-Time: 1206380881 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -1.80 X-Barracuda-Spam-Status: No, SCORE=-1.80 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=FROM_HAS_ULINE_NUMS X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45782 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.22 FROM_HAS_ULINE_NUMS From: contains an underline and numbers/letters X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15000 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pg_xfs2@xfs2.for.sabi.co.UK Precedence: bulk X-list: xfs >>> On Mon, 24 Mar 2008 09:00:04 -0500, Russell Cattelan >>> said: >> [ ... ] Mount XFS partition with "nobarrier" option. [ ... ] > I can verify that ... barriers are killers when running vmware > guest disk/memory images. [ ... ] But of course running VM images with 'nobarrier' is quite brave: because it removes *any* integrity guarantee to IO initiated inside the virtual machine. This is because the only safe behaviour for a virtual machine software is to turn all VM storage operations into synchronous ones (or else detect the use of barriers inside the virtual machine). Of course this is going to be catastrophic with XFS's delayed allocations, which relies on large numbers of outstanding writes to coalesce them into large segments. > [ ... ] The preallocation would also help out quite bit if you > don't mind dedicating the disk space vs the sparse file method, > which allow for over subscribing the physical space. [ ... 
] But sparse files are a crazy idea for virtual machines, because the software inside is built on the idea that its storage is allocated in contiguous volumes, and relies on that for its own optimizations. Or else one ends up with a CP/CMS situation where the "CMS" inside the virtual machine is fully aware that it is running inside a virtual machine and passes "out-of-bandwidth" hints to the virtual machine software. From owner-xfs@oss.sgi.com Mon Mar 24 11:56:58 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 11:57:06 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2OIuu2O004019 for ; Mon, 24 Mar 2008 11:56:58 -0700 X-ASG-Debug-ID: 1206385049-63d703db0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from flyingAngel.upjs.sk (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 17DC76DC8A1 for ; Mon, 24 Mar 2008 11:57:30 -0700 (PDT) Received: from flyingAngel.upjs.sk (static113-109.rudna.net [212.20.113.109]) by cuda.sgi.com with ESMTP id Lr4QMZN4KCkxAb12 for ; Mon, 24 Mar 2008 11:57:30 -0700 (PDT) Received: by flyingAngel.upjs.sk (Postfix, from userid 500) id 847D02865DC; Mon, 24 Mar 2008 19:56:59 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by flyingAngel.upjs.sk (Postfix) with ESMTP id 7A6142235CD; Mon, 24 Mar 2008 19:56:59 +0100 (CET) Date: Mon, 24 Mar 2008 19:56:59 +0100 (CET) From: Jan Derfinak To: Peter Grandi cc: Linux XFS X-ASG-Orig-Subj: Re: Poor VMWare disk performance on XFS partition Subject: Re: Poor VMWare disk performance on XFS partition In-Reply-To: <18407.51134.786769.234995@tree.ty.sabi.co.uk> Message-ID: References: <876423.51989.qm@web52006.mail.re2.yahoo.com> 
<47E7B3E4.1020205@thebarn.com> <18407.51134.786769.234995@tree.ty.sabi.co.uk> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Barracuda-Connect: static113-109.rudna.net[212.20.113.109] X-Barracuda-Start-Time: 1206385051 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45786 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15001 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: ja@mail.upjs.sk Precedence: bulk X-list: xfs On Mon, 24 Mar 2008, Peter Grandi wrote: > >>> On Mon, 24 Mar 2008 09:00:04 -0500, Russell Cattelan > >>> said: > > >> [ ... ] Mount XFS partition with "nobarrier" option. [ ... ] > > > I can verify that ... barriers are killers when running vmware > > guest disk/memory images. [ ... ] > > But of course running VM images with 'nobarrier' is quite brave: > because it removes *any* integrity guarantee to IO initiated > inside the virtual machine. But of course barriers are a workaround for hardware problems like a power outage. So if you want integrity in a VM without the performance penalty, you must ensure integrity in the hardware and not leave it to the barrier option. VMware performance is compromised with barriers on. And when you compare ext3 with XFS, bear in mind that ext3 does not use barriers by default.
jan -- From owner-xfs@oss.sgi.com Mon Mar 24 12:12:11 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 12:12:18 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00, SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2OJC9YF005263 for ; Mon, 24 Mar 2008 12:12:11 -0700 X-ASG-Debug-ID: 1206385963-50fc00ca0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from filer.fsl.cs.sunysb.edu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 0069610191C4 for ; Mon, 24 Mar 2008 12:12:43 -0700 (PDT) Received: from filer.fsl.cs.sunysb.edu (filer.fsl.cs.sunysb.edu [130.245.126.2]) by cuda.sgi.com with ESMTP id 1PiFZTbVQI2RV2oh for ; Mon, 24 Mar 2008 12:12:43 -0700 (PDT) Received: from josefsipek.net (baal.fsl.cs.sunysb.edu [130.245.126.78]) by filer.fsl.cs.sunysb.edu (8.12.11.20060308/8.13.1) with ESMTP id m2OJCeJV032410; Mon, 24 Mar 2008 15:12:40 -0400 Received: by josefsipek.net (Postfix, from userid 1000) id DE8731C00124; Mon, 24 Mar 2008 15:12:41 -0400 (EDT) Date: Mon, 24 Mar 2008 15:12:41 -0400 From: "Josef 'Jeff' Sipek" To: Peter Grandi Cc: Linux XFS X-ASG-Orig-Subj: Re: Poor VMWare disk performance on XFS partition Subject: Re: Poor VMWare disk performance on XFS partition Message-ID: <20080324191241.GB32221@josefsipek.net> References: <876423.51989.qm@web52006.mail.re2.yahoo.com> <47E7B3E4.1020205@thebarn.com> <18407.51134.786769.234995@tree.ty.sabi.co.uk> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <18407.51134.786769.234995@tree.ty.sabi.co.uk> User-Agent: Mutt/1.5.16 (2007-06-11) X-Barracuda-Connect: filer.fsl.cs.sunysb.edu[130.245.126.2] X-Barracuda-Start-Time: 1206385964 X-Barracuda-Bayes: INNOCENT GLOBAL 
0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45787 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15002 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffpc@josefsipek.net Precedence: bulk X-list: xfs On Mon, Mar 24, 2008 at 03:24:46PM +0000, Peter Grandi wrote: > >>> On Mon, 24 Mar 2008 09:00:04 -0500, Russell Cattelan ... > > [ ... ] The preallocation would also help out quite bit if you > > don't mind dedicating the disk space vs the sparse file method, > > which allow for over subscribing the physical space. [ ... ] > > But sparse files are a crazy idea for virtual machines, because > the software inside is built on the idea that its storage is > allocated in contiguous volumes, and relies on that for its own > optimizations. Yup, gets you LOTS of extents => takes forever to unlink. > Or else one ends up with a CP/CMS situation where the "CMS" > inside the virtual machine is fully aware that it is running > inside a virtual machine and passes "out-of-bandwidth" hints to > the virtual machine software. IIRC, only the first version of CMS was capable of running on bare hardware. It was rather silly (and still is) to do that, and so all sort of tweaks were made to make it faster, but at the same time make it depend on CP (the DIAG instruction, etc.). 
There's nothing wrong with this approach; it just happens that vmware does it to a _very_ limited extent (using the vmware tools you install in the guest - last I checked it was used to enhance the display/mouse/etc.). OK, this was somewhat off-topic. Josef 'Jeff' Sipek. -- Only two things are infinite, the universe and human stupidity, and I'm not sure about the former. - Albert Einstein From owner-xfs@oss.sgi.com Mon Mar 24 15:16:27 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 15:16:34 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2OMGMDp022843 for ; Mon, 24 Mar 2008 15:16:25 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA19654; Tue, 25 Mar 2008 09:16:50 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2OMGmsT108909494; Tue, 25 Mar 2008 09:16:49 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2OMGjla108508809; Tue, 25 Mar 2008 09:16:45 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 25 Mar 2008 09:16:45 +1100 From: David Chinner To: Kevin Xu Cc: xfscn@googlegroups.com, xfs@oss.sgi.com Subject: Re: [PATCH]fix fbno in xfs_dir2_node_addname_int Message-ID: <20080324221645.GD103491721@sgi.com> References: <47E5A982.8010002@gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47E5A982.8010002@gmail.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed
Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15003 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Sun, Mar 23, 2008 at 08:51:14AM +0800, Kevin Xu wrote: > if we didn't find a freespace block for our new entry in the current > freeindex block, > return to the first freeindex block and continue to check. What is the test case that demonstrates this problem? Looking at the impact of setting fbno = -1 if we don't find a suitable free space, the next iteration of the loop will do: 1454 if (fbp == NULL) { 1455 /* 1456 * Happens the first time through unless lookup gave 1457 * us a freespace block to start with. 1458 */ 1459 if (++fbno == 0) 1460 fbno = XFS_DIR2_FREE_FIRSTDB(mp); and XFS_DIR2_FREE_FIRSTDB(mp), defined as xfs_dir2_byte_to_db(mp, XFS_DIR2_FREE_OFFSET), is a fixed offset into the directory. Hence resetting fbno = -1 will force us to look up the same freespace block on every loop iteration. That looks like it will livelock as soon as the first freespace block does not have enough space for the desired entry... Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Mar 24 15:46:42 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 15:46:50 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2OMkdKM024859 for ; Mon, 24 Mar 2008 15:46:41 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA20599; Tue, 25 Mar 2008 09:47:08 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2OMl6sT108889947; Tue, 25 Mar 2008 09:47:07 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2OMl3JM108920313; Tue, 25 Mar 2008 09:47:03 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 25 Mar 2008 09:47:03 +1100 From: David Chinner To: Kevin Xu Cc: xfscn@googlegroups.com, xfs@oss.sgi.com Subject: Re: [PATCH]fix the algorithm for addname in xfs_da_node_lookup_int Message-ID: <20080324224703.GE103491721@sgi.com> References: <47E5AC7D.4080708@gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47E5AC7D.4080708@gmail.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15004 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Sun, Mar 23, 2008 at 09:03:57AM +0800, Kevin Xu wrote: > fix 
the algorithm for addname in xfs_da_node_lookup_int As Eric already asked, please include a complete description of the problem you are fixing, as well as including a "Signed-off-by" line as per Documentation/SubmittingPatches. What you've indicated may indeed be a bug, but I'd like to see a test case that demonstrates the problem first and that your patch fixes the test case. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Mar 24 16:39:45 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 16:39:52 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2ONdab6032213 for ; Mon, 24 Mar 2008 16:39:42 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA22154; Tue, 25 Mar 2008 10:39:57 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2ONdtsT108915569; Tue, 25 Mar 2008 10:39:56 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2ONdqE6106576274; Tue, 25 Mar 2008 10:39:52 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 25 Mar 2008 10:39:52 +1100 From: David Chinner To: Stanislaw Gruszka Cc: xfs@oss.sgi.com Subject: Re: BUG: xfs on linux lvm - lvconvert random hungs when doing i/o Message-ID: <20080324233952.GF103491721@sgi.com> References: <200803211520.16398.stf_xl@wp.pl> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: 
<200803211520.16398.stf_xl@wp.pl> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15005 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Mar 21, 2008 at 03:20:16PM +0100, Stanislaw Gruszka wrote: > Hello > > I have problems using xfs and lvm snapshots on linux-2.6.24 , When I do > lvconvert to create snapshots and when system is under heavy load, lvconvert > and I/O processes randomly hung . I use below script to reproduce, but it is very hard > to catch this bug. This looks like an I/O completion problem. You're writing 2 files and doing snapshots while they are running. > xfsdatad/1 D 00000000 0 288 2 > Call Trace: > [] rwsem_down_failed_common+0x76/0x170 > [] rwsem_down_write_failed+0x1d/0x24 > [] call_rwsem_down_write_failed+0x6/0x8 > [] down_write+0x12/0x20 > [] xfs_ilock+0x5a/0xa0 > [] xfs_setfilesize+0x43/0x130 > [] xfs_end_bio_delalloc+0x0/0x20 > [] xfs_end_bio_delalloc+0xd/0x20 > [] run_workqueue+0x52/0x100 > [] prepare_to_wait+0x52/0x70 > [] worker_thread+0x7f/0xc0 This is an xfs I/O completion workqueue, waiting to get the inode ilock in exclusive mode to update the file size. > pdflush D 00fc61cb 0 7337 2 > Call Trace: > [] schedule_timeout+0x47/0x90 > [] process_timeout+0x0/0x10 > [] prepare_to_wait+0x20/0x70 > [] io_schedule_timeout+0x1b/0x30 > [] congestion_wait+0x7e/0xa0 Is waiting for I/O completion to remove the congestion status. > lvconvert D c4010a80 0 12930 12501 > Call Trace: > [] flush_cpu_workqueue+0x69/0xa0 > [] wq_barrier_func+0x0/0x10 > [] flush_workqueue+0x2c/0x40 > [] xfs_flush_buftarg+0x17/0x120 > [] xfs_quiesce_fs+0x16/0x70 > [] xfs_attr_quiesce+0x20/0x60 > [] xfs_freeze+0x8/0x10 That's waiting for the I/O completion workqueue to be flushed.
> dd D 00fc61cb 0 12953 29684 > Call Trace: > [] schedule_timeout+0x47/0x90 > [] process_timeout+0x0/0x10 > [] prepare_to_wait+0x20/0x70 > [] io_schedule_timeout+0x1b/0x30 > [] congestion_wait+0x7e/0xa0 Stuck in congestion. This dd (I've trimmed the stack trace to make it readable): > dd D c4018ab4 0 12113 29734 > Call Trace: > [] __down+0x75/0xe0 > [] dm_unplug_all+0x17/0x30 > [] __down_failed+0x7/0xc > [] blk_backing_dev_unplug+0x0/0x10 > [] xfs_buf_lock+0x3c/0x50 > [] _xfs_buf_find+0x151/0x1d0 > [] xfs_buf_get_flags+0x55/0x130 > [] xfs_buf_read_flags+0x1c/0x90 > [] xfs_trans_read_buf+0x16f/0x350 > [] xfs_itobp+0x7d/0x250 > [] xfs_iflush+0x99/0x470 > [] xfs_inode_flush+0x127/0x1f0 > [] xfs_fs_write_inode+0x22/0x80 > [] write_inode+0x4b/0x50 > [] __sync_single_inode+0xf0/0x190 > [] __writeback_single_inode+0x49/0x1c0 > [] sync_sb_inodes+0xde/0x1d0 > [] writeback_inodes+0xa0/0xb0 > [] balance_dirty_pages+0x193/0x2c0 > [] generic_perform_write+0x142/0x190 > [] generic_file_buffered_write+0x87/0x150 > [] xfs_write+0x61b/0x8c0 > [] xfs_file_aio_write+0x76/0x90 > [] do_sync_write+0xbd/0x110 > [] vfs_write+0x160/0x170 > [] sys_write+0x41/0x70 > [] syscall_call+0x7/0xb Is writing to one file, hitting foreground write throttling and flushing either itself or the other file. It's stuck waiting on I/O completion of the inode buffer. I suspect that the I/O completion has been blocked by the fact it's trying to get the ilock. The xfsdatad process is blocked on this inode - the inode flush takes the ilock shared, which is holding off the I/O completion. As soon as the inode buffer I/O is issued, the inode will be unlocked and completion processing can continue. i.e. it seems that either we can't safely take the ilock in I/O completion without a trylock or we can't hold the ilock across I/O submission without a trylock on the buffer lock. Ouch! That's going to take some fixing....
I'd suggest these two patches (already queued for 2.6.26): http://oss.sgi.com/archives/xfs/2008-01/msg00153.html http://oss.sgi.com/archives/xfs/2008-01/msg00154.html They make xfs_iflush do trylocks on the inode buffer in these writeback cases, and that should avoid the problem you are seeing here. It won't avoid all possible problems, but it will not hang waiting on buffer I/O completion in async inode flushes like above.... > I also would like to ask if you have some propositions how to reproduce bug, > because my scripts need to work few hours or even days to hung processes. It's pure chance. Hence I don't think there's much you can do to improve the reproducibility of this problem.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Mar 24 16:48:52 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 16:48:59 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.0 required=5.0 tests=BAYES_50,HTML_MESSAGE, SUBJ_FORWARDED autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2ONmn4U000441 for ; Mon, 24 Mar 2008 16:48:52 -0700 X-ASG-Debug-ID: 1206402559-0dcf00b60000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mga02.intel.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id D5184101B083 for ; Mon, 24 Mar 2008 16:49:19 -0700 (PDT) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by cuda.sgi.com with ESMTP id HkDzxA4nXHxgBQYG for ; Mon, 24 Mar 2008 16:49:19 -0700 (PDT) Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga101.jf.intel.com with ESMTP; 24 Mar 2008 16:43:06 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="4.25,549,1199692800"; d="scan'208,217";a="359918246" Received: from fmsmsx334.amr.corp.intel.com
([132.233.42.1]) by orsmga001.jf.intel.com with ESMTP; 24 Mar 2008 16:41:12 -0700 Received: from fmsmsx418.amr.corp.intel.com ([10.19.19.10]) by fmsmsx334.amr.corp.intel.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 24 Mar 2008 16:41:09 -0700 X-MimeOLE: Produced By Microsoft Exchange V6.5 MIME-Version: 1.0 X-ASG-Orig-Subj: FW: Exciting Opportunities with Intel's High-End Graphics Team! Subject: FW: Exciting Opportunities with Intel's High-End Graphics Team! Date: Mon, 24 Mar 2008 16:41:09 -0700 Message-ID: <9964440690E9244BA2ED6D920D537B5A0246A3C9@fmsmsx418.amr.corp.intel.com> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: Exciting Opportunities with Intel's High-End Graphics Team! thread-index: AciIXYbA2Uy3/J6QT9Khl4ePGkrbYwADXEfAAACbCsAABFg34AAAkksgACSj3PAABKmMMAET+JdQAABERUAAABU5sAAAhTYQABlMSiAACMtGQAAAZgyQAAANGdAAAAtboAAADy4AAAAHM8AAABIbAAAAJHWQAAAKjIAAAAepgAAABWVQAAAF7/AAAAVxYAAACEUgAAAGC9AAAAWMsAAAC+BQAAAHwGAAAAYLMAAACbtgAAAE7vAAADQcUAAADSxQAAAGHQAAAAZ68AAAB6ZQAAAKLTAAAAmTIAAABzCw From: "Brooks, CamilleX A" To: X-OriginalArrivalTime: 24 Mar 2008 23:41:09.0964 (UTC) FILETIME=[899BF0C0:01C88E08] X-Barracuda-Connect: mga02.intel.com[134.134.136.20] X-Barracuda-Start-Time: 1206402561 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=HTML_MESSAGE X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45805 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.00 HTML_MESSAGE BODY: HTML included in message X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Type: text/plain Content-Disposition: inline Content-Transfer-Encoding: 7bit Content-length: 1755 X-archive-position: 15006 X-ecartis-version: Ecartis v1.0.0 
Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: camillex.a.brooks@intel.com Precedence: bulk X-list: xfs Hello, David My name is Camille Brooks. I'm with the Strategic Recruiting Group with Intel. I came across your information while conducting a critical search . At this time, Intel is in the early development stages of building a team of software engineers. This group is much like a start-up environment, with the funding of a huge entity, "Intel". Join this small team of the brightest minds in graphics technology, hardware and software engineering and redefine how the world sees 3D graphics, visualization and games. You'll play a key role in developing a highly-parallel programmable architecture that will benefit graphics and other high-throughput workloads including scientific computing, recognition mining and synthesis, financial analysis and health applications. Are you ready to make new things possible? If you're interested in more details, please feel free to send me an updated copy of your resume with the best number and time to reach you at. If not, perhaps, you can pass along my contact information to someone within your professional network who you feel, might be interested in opportunities here. Locations: US-Texas, Hillsboro, OR and Santa Clara, CA Please visit the link attached to find out more about the exciting products that are being designed by our new Game Engine Technology Group and Software organization. I think you would be pleasantly surprised! http://www.intel.com/personal/gaming/index.htm?iid=homepage+tech_gaming Thanks so much in advance, for your time and look forward to hearing back from you soon. 
Camille Brooks Strategic Recruiting Team-Global Americas (916) 371-7621 Camillex.a.brooks@intel.com [[HTML alternate version deleted]] From owner-xfs@oss.sgi.com Mon Mar 24 17:53:59 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 17:54:07 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P0rswe004579 for ; Mon, 24 Mar 2008 17:53:57 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA24485; Tue, 25 Mar 2008 11:54:21 +1100 To: "Nigel Kukard" , xfs@oss.sgi.com Subject: Re: [PATCH] Remove sysv3 legacy functions From: "Barry Naujok" Organization: SGI Content-Type: text/plain; format=flowed; delsp=yes; charset=utf-8 MIME-Version: 1.0 References: <1206164935.14300.8.camel@nigel-x60> Content-Transfer-Encoding: 7bit Date: Tue, 25 Mar 2008 11:55:50 +1100 Message-ID: In-Reply-To: <1206164935.14300.8.camel@nigel-x60> User-Agent: Opera Mail/9.24 (Win32) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15007 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs On Sat, 22 Mar 2008 16:48:55 +1100, Nigel Kukard wrote: > Remove legacy sysv3 functions. > > -N Thanks for this patch, it looks good and I'll apply it in the near future. Regards, Barry. 
From owner-xfs@oss.sgi.com Mon Mar 24 18:33:07 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 18:33:16 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P1X3S8007796 for ; Mon, 24 Mar 2008 18:33:06 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA25627; Tue, 25 Mar 2008 12:33:25 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2P1XKsT108833987; Tue, 25 Mar 2008 12:33:23 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2P1XEqu96529131; Tue, 25 Mar 2008 12:33:14 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 25 Mar 2008 12:33:14 +1100 From: David Chinner To: Takashi Sato Cc: linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org, xfs@oss.sgi.com, dm-devel@redhat.com, linux-kernel@vger.kernel.org Subject: Re: [RFC PATCH] freeze feature ver 1.0 Message-ID: <20080325013314.GA107684377@sgi.com> References: <20080324201136t-sato@mail.jp.nec.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080324201136t-sato@mail.jp.nec.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15008 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Mon, Mar 24, 2008 at 08:11:36PM +0900, Takashi 
Sato wrote: > Hi, > > This is the rebased freeze feature patch for linux-2.6.25-rc6. > We can take a backup which keeps the filesystem's consistency with it. > I have tested it cooperating with > DRBD (Distributed Replicated Block Device (http://www.drbd.org/)) > and made sure that I could take the consistent backup with a short > frozen time (several seconds) while using the filesystem. > The detailed procedure for my test is below. > > 1. Set up the replication between server A (primary) and > server B (secondary) > > 2. Make the ext3 filesystem on server A and mount it > (Run Linux kernel compile by 5 threads in parallel on it) > > 3. Freeze the filesystem on server A to block I/O and > keep the filesystem's consistency > > 4. Detach the secondary volume on server B > (e.g /sbin/drbdadm detach r0) > > 5. Unfreeze the filesystem on server A > > 6. Use the secondary volume on server B > I confirmed the followings. > - fsck didn't report any errors. > - It could be mounted correctly. > - Linux kernel compiles could re-start correctly. > > There is no functional change from the previous version. > All of comments from ML have already been reflected in this patch. Can you please split this into two patches - one which introduces the generic functionality *without* the timeout stuff, and a second patch that introduces the timeouts. I think this timeout stuff is dangerous - it adds significant complexity and really does not protect against anything that can't be done in userspace. i.e. If your system is running well enough for the timer to fire and unfreeze the filesystem, it's running well enough for you to do "freeze X; sleep Y; unfreeze X". If you are trying to protect against a freeze operation that hangs then the filesystem needs fixing, not some new API to work around a bug.... 
FWIW, there is nothing to guarantee that the filesystem has finished freezing when the timeout fires (it's not uncommon to see freeze_bdev() taking *minutes*) and unfreezing in the middle of a freeze operation will cause problems - either for the filesystem in the middle of a freeze operation, or for whatever is freezing the filesystem to get a consistent image..... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Mar 24 19:02:13 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 19:02:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P228Qw009915 for ; Mon, 24 Mar 2008 19:02:11 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA26457; Tue, 25 Mar 2008 13:02:27 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2P22QsT108970002; Tue, 25 Mar 2008 13:02:26 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2P22Nb3108969785; Tue, 25 Mar 2008 13:02:23 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 25 Mar 2008 13:02:23 +1100 From: David Chinner To: David Chinner Cc: Stanislaw Gruszka , xfs@oss.sgi.com Subject: Re: BUG: xfs on linux lvm - lvconvert random hungs when doing i/o Message-ID: <20080325020223.GB108924158@sgi.com> References: <200803211520.16398.stf_xl@wp.pl> <20080324233952.GF103491721@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: 
inline In-Reply-To: <20080324233952.GF103491721@sgi.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15009 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Mar 25, 2008 at 10:39:52AM +1100, David Chinner wrote: > On Fri, Mar 21, 2008 at 03:20:16PM +0100, Stanislaw Gruszka wrote: > > Hello > > > > I have problems using xfs and lvm snapshots on linux-2.6.24 , When I do > > lvconvert to create snapshots and when system is under heavy load, lvconvert > > and I/O processes randomly hung . I use below script to reproduce, but it is very hard > > to catch this bug. > > This looks like an I/O completion problem. > > You're writing 2 files and doing snap shots while they are running. > > > xfsdatad/1 D 00000000 0 288 2 > > Call Trace: > > [] rwsem_down_failed_common+0x76/0x170 > > [] rwsem_down_write_failed+0x1d/0x24 > > [] call_rwsem_down_write_failed+0x6/0x8 > > [] down_write+0x12/0x20 > > [] xfs_ilock+0x5a/0xa0 > > [] xfs_setfilesize+0x43/0x130 > > [] xfs_end_bio_delalloc+0x0/0x20 > > [] xfs_end_bio_delalloc+0xd/0x20 > > [] run_workqueue+0x52/0x100 > > [] prepare_to_wait+0x52/0x70 > > [] worker_thread+0x7f/0xc0 > > This is an xfs I/O completion workqueue, waiting to get the inode > ilock in excusive mode to update the file size. ...... 
> This dd (I've trimmed the stack trace to make it readable): > > > dd D c4018ab4 0 12113 29734 > > Call Trace: > > [] __down+0x75/0xe0 > > [] dm_unplug_all+0x17/0x30 > > [] __down_failed+0x7/0xc > > [] blk_backing_dev_unplug+0x0/0x10 > > [] xfs_buf_lock+0x3c/0x50 > > [] _xfs_buf_find+0x151/0x1d0 > > [] xfs_buf_get_flags+0x55/0x130 > > [] xfs_buf_read_flags+0x1c/0x90 > > [] xfs_trans_read_buf+0x16f/0x350 > > [] xfs_itobp+0x7d/0x250 > > [] xfs_iflush+0x99/0x470 > > [] xfs_inode_flush+0x127/0x1f0 > > [] xfs_fs_write_inode+0x22/0x80 > > [] write_inode+0x4b/0x50 > > [] __sync_single_inode+0xf0/0x190 > > [] __writeback_single_inode+0x49/0x1c0 > > [] sync_sb_inodes+0xde/0x1d0 > > [] writeback_inodes+0xa0/0xb0 > > [] balance_dirty_pages+0x193/0x2c0 > > [] generic_perform_write+0x142/0x190 > > [] generic_file_buffered_write+0x87/0x150 > > [] xfs_write+0x61b/0x8c0 > > [] xfs_file_aio_write+0x76/0x90 > > [] do_sync_write+0xbd/0x110 > > [] vfs_write+0x160/0x170 > > [] sys_write+0x41/0x70 > > [] syscall_call+0x7/0xb > > Is writing to one file, hitting foreground write throttling and > flushing either itself or the other file. It's stuct waiting on > I/O completion of the inode buffer. > > I suspect that the I/O completion has been blocked by the fact it's > trying to get The xfsdatad process is blocked on this inode - the > inode flush takes the ilock shared, which is holding off the I/O > completion. As soon as the inode buffer I/O is issued, then inode > will be unlocked and completion processing can continue. > > i.e. it seems that either we can't safely take the ilock in I/O > completion without a trylock or we can't hold the ilock across > I/O submission without a trylock on the buffer lock. Ouch! That's > going to take some fixing.... No, that's not true - the data I/O is queued to the xfsdatad, whilst metadata gets queued to the xfslogd completion queue. Hence data I/O completion can't hold up metadata I/O completion and we can't deadlock here.... 
That points to I/O not completing (not an XFS problem at all), or the filesystem freeze is just taking a long time to run (as it has to sync everything to disk). Given that this is a snapshot target, writing new blocks will take quite some time. Is the system still making writeback progress when in this state, or is it really hung? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Mar 24 19:52:05 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 19:52:12 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.7 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P2pwJT013706 for ; Mon, 24 Mar 2008 19:52:01 -0700 Received: from [134.14.55.21] (dhcp21.melbourne.sgi.com [134.14.55.21]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA28116; Tue, 25 Mar 2008 13:52:25 +1100 Message-ID: <47E8687A.90306@sgi.com> Date: Tue, 25 Mar 2008 13:50:34 +1100 From: Mark Goodwin Reply-To: markgw@sgi.com Organization: SGI Engineering User-Agent: Thunderbird 1.5.0.14 (Windows/20071210) MIME-Version: 1.0 To: Eric Sandeen CC: xfs-oss Subject: Re: FYI: xfs problems in Fedora 8 updates References: <47E3CE92.20803@sandeen.net> In-Reply-To: <47E3CE92.20803@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15010 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: markgw@sgi.com Precedence: bulk X-list: xfs Eric Sandeen wrote: > https://bugzilla.redhat.com/show_bug.cgi?id=437968 > Bugzilla Bug 437968: Corrupt 
xfs root filesystem with kernel > kernel-2.6.24.3-xx > > Just to give the sgi guys a heads up, 2 people have seen this now. > > I know it's a distro kernel but fedora is generally reasonably close to > upstream. > > I'm looking into it but just wanted to put this on the list, too. Hi Eric, have you identified this as any particular known problem? Cheers -- Mark Goodwin markgw@sgi.com Engineering Manager for XFS and PCP Phone: +61-3-99631937 SGI Australian Software Group Cell: +61-4-18969583 ------------------------------------------------------------- From owner-xfs@oss.sgi.com Mon Mar 24 19:55:22 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 19:55:29 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.1 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2P2tLda014299 for ; Mon, 24 Mar 2008 19:55:22 -0700 X-ASG-Debug-ID: 1206413755-4b4201740000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 48C6D6E0E20 for ; Mon, 24 Mar 2008 19:55:55 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id Aa13D7A5EZAcs7BK for ; Mon, 24 Mar 2008 19:55:55 -0700 (PDT) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id E9B6C1802F4FF; Mon, 24 Mar 2008 21:55:54 -0500 (CDT) Message-ID: <47E869BA.9090406@sandeen.net> Date: Mon, 24 Mar 2008 21:55:54 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: markgw@sgi.com CC: xfs-oss X-ASG-Orig-Subj: Re: FYI: xfs problems in Fedora 8 updates 
Subject: Re: FYI: xfs problems in Fedora 8 updates References: <47E3CE92.20803@sandeen.net> <47E8687A.90306@sgi.com> In-Reply-To: <47E8687A.90306@sgi.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1206413756 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45818 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15011 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Mark Goodwin wrote: > > Eric Sandeen wrote: >> https://bugzilla.redhat.com/show_bug.cgi?id=437968 >> Bugzilla Bug 437968: Corrupt xfs root filesystem with kernel >> kernel-2.6.24.3-xx >> >> Just to give the sgi guys a heads up, 2 people have seen this now. >> >> I know it's a distro kernel but fedora is generally reasonably close to >> upstream. >> >> I'm looking into it but just wanted to put this on the list, too. > > Hi Eric, have you identified this as any particular known problem? > > Cheers > Nope, not yet. There is a test I've been meaning to run, basically fresh install of F8 on x86_64, update the kernel to 2.6.24.3-XXX, reboot, run yum update for the rest, but haven't had a chance yet... no good test box at home, and working on e2fsprogs a bit at work, lately. 
:) -Eric From owner-xfs@oss.sgi.com Mon Mar 24 20:05:42 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 20:05:49 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P35c31015223 for ; Mon, 24 Mar 2008 20:05:40 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA28602; Tue, 25 Mar 2008 14:05:57 +1100 Message-ID: <47E86C15.3080009@sgi.com> Date: Tue, 25 Mar 2008 14:05:57 +1100 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.9 (Macintosh/20071031) MIME-Version: 1.0 To: Xavier Poirier CC: xfs@oss.sgi.com Subject: Re: Update of XFSPROG XFSDUMP on linux kernel 2.4.22 References: <1205842046.47dfb07ede86d@hermesadm.chb.fr> In-Reply-To: <1205842046.47dfb07ede86d@hermesadm.chb.fr> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15012 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Xavier Poirier wrote: > Hi all XFS ML Users ! > > I'm Xavier from France. > > I had installed two years ago a linux server with 2 XFS Partitions. > > All is working like a charm ! 
> > * Except, > > the xfsrestore command, which often crashes (one time in two) with a 35GB dump file > > > Here are my version details: > > - Linux Mandrake 9.2 kernel 2.4.22 > - XFSDUMP 2.2.13 > - XFSDUMP 2.5.4 (installed by RPM) > > > My question is: > > Is it better to update the XFS programs to newer versions, or update my Linux kernel, > to avoid problems? > As Hannes Dorbath said, probably both. The dump format hasn't changed, so you can try a new xfsrestore if that is less painful for you. If the new xfsrestore crashes then you can report the bug with details on how it crashes. > I've tried to find some older distributions of xfsdump (like 2.2.33), but without success; > the most recent distributions fail to configure at manual install ... > I wouldn't go for older versions of dump/restore. --Tim From owner-xfs@oss.sgi.com Mon Mar 24 20:23:16 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 20:23:22 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.8 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_64 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P3NC88021145 for ; Mon, 24 Mar 2008 20:23:14 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA29096; Tue, 25 Mar 2008 14:23:41 +1100 Message-ID: <47E8703C.30603@sgi.com> Date: Tue, 25 Mar 2008 14:23:40 +1100 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.9 (Macintosh/20071031) MIME-Version: 1.0 To: Eric Sandeen CC: xfs-oss Subject: Re: [PATCH] xfsqa: call _notrun in common.dump if dump utils not found References: <47E5CFBA.7060405@sandeen.net> In-Reply-To: <47E5CFBA.7060405@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15013 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Thanks, Eric. On IRIX: > where xfsdump xfsrestore xfsinvutil /sbin/xfsdump /usr/sbin/xfsdump /sbin/xfsrestore /usr/sbin/xfsinvutil > ls -l /sbin/xfsdump lrwxr-xr-x ... /sbin/xfsdump -> ../usr/sbin/xfsdump* I'll add the IRIX xfsrestore path and wait for Russell or whoever to complain about BSD :) --Tim Eric Sandeen wrote: > it may not always be obvious to outsiders that xfsdump is packaged > separately from xfsprogs... is it worth checking for the binaries > rather than spewing verbose failures if it's not installed? > > (and are the locations ok for irix/bsd/whatnot too...?) > > ... also abort if bc not found (common.filter requires this, > my minimal testing root didn't have it and much error spew > ensued... 
nicer to check up front IMHO) > > -Eric > > Index: xfstests/common.dump > =================================================================== > --- xfstests.orig/common.dump > +++ xfstests/common.dump > @@ -41,6 +41,10 @@ do_quota_check=true # do quota check if > > _need_to_be_root > > +[ -x /usr/sbin/xfsdump ] || _notrun "xfsdump executable not found" > +[ -x /usr/sbin/xfsrestore ] || _notrun "xfsrestore executable not found" > +[ -x /usr/sbin/xfsinvutil ] || _notrun "xfsinvutil executable not found" > + > # install our cleaner > trap "_cleanup; exit \$status" 0 1 2 3 15 > > Index: xfstests/common.config > =================================================================== > --- xfstests.orig/common.config > +++ xfstests/common.config > @@ -114,6 +114,9 @@ export AWK_PROG="`set_prog_path awk`" > export SED_PROG="`set_prog_path sed`" > [ "$SED_PROG" = "" ] && _fatal "sed not found" > > +export BC_PROG="`set_prog_path bc`" > +[ "$BC_PROG" = "" ] && _fatal "bc not found" > + > export PS_ALL_FLAGS="-ef" > > export DF_PROG="`set_prog_path df`" > > From owner-xfs@oss.sgi.com Mon Mar 24 20:32:51 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 20:32:58 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_64 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P3WkQq022098 for ; Mon, 24 Mar 2008 20:32:50 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA29497; Tue, 25 Mar 2008 14:33:11 +1100 Date: Tue, 25 Mar 2008 14:35:22 +1100 To: "Timothy Shimmin" , "Eric Sandeen" Subject: Re: [PATCH] xfsqa: call _notrun in common.dump if dump utils not found From: "Barry Naujok" 
Organization: SGI Cc: xfs-oss Content-Type: text/plain; format=flowed; delsp=yes; charset=utf-8 MIME-Version: 1.0 References: <47E5CFBA.7060405@sandeen.net> <47E8703C.30603@sgi.com> Message-ID: In-Reply-To: <47E8703C.30603@sgi.com> User-Agent: Opera Mail/9.24 (Win32) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from Quoted-Printable to 8bit by oss.sgi.com id m2P3WpQq022116 X-archive-position: 15014 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs On Tue, 25 Mar 2008 14:23:40 +1100, Timothy Shimmin wrote: > Thanks, Eric. > > On IRIX: > > where xfsdump xfsrestore xfsinvutil > /sbin/xfsdump > /usr/sbin/xfsdump > /sbin/xfsrestore > /usr/sbin/xfsinvutil > > ls -l /sbin/xfsdump > lrwxr-xr-x ... /sbin/xfsdump -> ../usr/sbin/xfsdump* > > I'll add the IRIX xfsrestore path and wait for Russell or > whoever to complain about BSD :) common.config sets up environment variables for the various tools used and can handle these paths. It has them for the xfsprogs tools (XFS_REPAIR_PROG, XFS_DB_PROG, etc) but nothing for the xfsdump tools. > --Tim > > Eric Sandeen wrote: >> it may not always be obvious to outsiders that xfsdump is packaged >> separately from xfsprogs... is it worth checking for the binaries >> rather than spewing verbose failures if it's not installed? >> (and are the locations ok for irix/bsd/whatnot too...?) >> ... also abort if bc not found (common.filter requires this, >> my minimal testing root didn't have it and much error spew >> ensued... 
nicer to check up front IMHO) >> -Eric >> Index: xfstests/common.dump >> =================================================================== >> --- xfstests.orig/common.dump >> +++ xfstests/common.dump >> @@ -41,6 +41,10 @@ do_quota_check=true # do quota check if >> _need_to_be_root >> +[ -x /usr/sbin/xfsdump ] || _notrun "xfsdump executable not found" >> +[ -x /usr/sbin/xfsrestore ] || _notrun "xfsrestore executable not >> found" >> +[ -x /usr/sbin/xfsinvutil ] || _notrun "xfsinvutil executable not >> found" >> + >> # install our cleaner >> trap "_cleanup; exit \$status" 0 1 2 3 15 >> Index: xfstests/common.config >> =================================================================== >> --- xfstests.orig/common.config >> +++ xfstests/common.config >> @@ -114,6 +114,9 @@ export AWK_PROG="`set_prog_path awk`" >> export SED_PROG="`set_prog_path sed`" >> [ "$SED_PROG" = "" ] && _fatal "sed not found" >> +export BC_PROG="`set_prog_path bc`" >> +[ "$BC_PROG" = "" ] && _fatal "bc not found" >> + >> export PS_ALL_FLAGS="-ef" >> export DF_PROG="`set_prog_path df`" >> > > From owner-xfs@oss.sgi.com Mon Mar 24 21:13:41 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 21:13:49 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P4DcGi025479 for ; Mon, 24 Mar 2008 21:13:40 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA00529; Tue, 25 Mar 2008 15:14:08 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id F090958C4C0F; Tue, 25 Mar 2008 15:14:07 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 
978886 - xfs_dialloc() dirtying transactions at ENOSPC incorrectly Message-Id: <20080325041407.F090958C4C0F@chook.melbourne.sgi.com> Date: Tue, 25 Mar 2008 15:14:07 +1100 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15015 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Account for inode cluster alignment in all allocations At ENOSPC, we can get a filesystem shutdown due to cancelling a dirty transaction in xfs_mkdir or xfs_create. This is due to the initial allocation attempt not taking into account inode alignment and hence we can prepare the AGF freelist for allocation when it's not actually possible to do an allocation. This results in inode allocation returning ENOSPC with a dirty transaction, and hence we shut down the filesystem. Because the first allocation is an exact allocation attempt, we must tell the allocator that the alignment does not affect the allocation attempt. i.e. we will accept any extent alignment as long as the extent starts at the block we want. Unfortunately, this means that if the longest free extent is less than the length + alignment necessary for fallback allocation attempts but is long enough to attempt a non-aligned allocation, we will modify the free list. If we then have the exact allocation fail, all other allocation attempts will also fail due to the alignment constraint being taken into account. Hence the initial attempt needs to set the "alignment slop" field so that alignment, while not required, must be taken into account when determining if there is enough space left in the AG to do the allocation. That means if the exact allocation fails, we will not dirty the freelist if there is not enough space available for a subsequent allocation to succeed.
Hence we get an ENOSPC error back to userspace without shutting down the filesystem. Date: Tue Mar 25 15:13:28 AEDT 2008 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: hch@infradead.org The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:30699a fs/xfs/xfs_ialloc.c - 1.198 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_ialloc.c.diff?r1=text&tr1=1.198&r2=text&tr2=1.197&f=h - Account for inode cluster allocation alignment even when trying to allocate at an exact block. This prevents a failed exact allocation attempt from dirtying the transaction when the conditions are such that no allocation can succeed. From owner-xfs@oss.sgi.com Mon Mar 24 21:18:26 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 21:18:41 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from relay.sgi.com (netops-testserver-3.corp.sgi.com [192.26.57.72]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2P4IPfK026357 for ; Mon, 24 Mar 2008 21:18:26 -0700 Received: from outhouse.melbourne.sgi.com (outhouse.melbourne.sgi.com [134.14.52.145]) by netops-testserver-3.corp.sgi.com (Postfix) with ESMTP id F170490887; Mon, 24 Mar 2008 21:18:55 -0700 (PDT) Received: from itchy (xaiki@itchy.melbourne.sgi.com [134.14.55.96]) by outhouse.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2P4InTG3003140; Tue, 25 Mar 2008 15:18:51 +1100 (AEDT) From: Niv Sardi To: Eric Sandeen Cc: xfs-oss Subject: Re: [PATCH] xfsqa: make 054 _require_quota References: <47E71A3D.9040707@sandeen.net> Date: Tue, 25 Mar 2008 15:18:50 +1100 In-Reply-To: <47E71A3D.9040707@sandeen.net> (Eric Sandeen's message of "Sun, 23 Mar 2008 22:04:29 -0500") Message-ID: User-Agent: Gnus/5.110007 (No Gnus v0.7) Emacs/23.0.60 
(i486-pc-linux-gnu) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15016 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: xaiki@sgi.com Precedence: bulk X-list: xfs Looks good, -- Niv Sardi From owner-xfs@oss.sgi.com Mon Mar 24 21:20:39 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 21:20:52 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_64, J_CHICKENPOX_66 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2P4Kax0026744 for ; Mon, 24 Mar 2008 21:20:38 -0700 X-ASG-Debug-ID: 1206418870-67bd02f40000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 3C2EE6E1525 for ; Mon, 24 Mar 2008 21:21:10 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id IUAsautl9AwlwzqV for ; Mon, 24 Mar 2008 21:21:10 -0700 (PDT) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id CE0831802F4FF; Mon, 24 Mar 2008 23:20:39 -0500 (CDT) Message-ID: <47E87D97.9050900@sandeen.net> Date: Mon, 24 Mar 2008 23:20:39 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: Barry Naujok CC: Timothy Shimmin , xfs-oss X-ASG-Orig-Subj: Re: [PATCH] xfsqa: call _notrun in common.dump if dump utils not found Subject: Re: [PATCH] xfsqa: call _notrun in common.dump if dump utils not found References: 
<47E5CFBA.7060405@sandeen.net> <47E8703C.30603@sgi.com> In-Reply-To: Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1206418871 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45823 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15018 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Barry Naujok wrote: > On Tue, 25 Mar 2008 14:23:40 +1100, Timothy Shimmin wrote: > >> Thanks, Eric. >> >> On IRIX: >> > where xfsdump xfsrestore xfsinvutil >> /sbin/xfsdump >> /usr/sbin/xfsdump >> /sbin/xfsrestore >> /usr/sbin/xfsinvutil >> > ls -l /sbin/xfsdump >> lrwxr-xr-x ... /sbin/xfsdump -> ../usr/sbin/xfsdump* >> >> I'll add the IRIX xfsrestore path and wait for Russell or >> whoever to complain about BSD :) > > common.config sets up environment variables for the various > tools used and can handle these paths. It has them for the > xfsprogs tools (XFS_REPAIR_PROG, XFS_DB_PROG, etc) but > nothing for the xfsdump tools. yeah, that may be better... 
-Eric From owner-xfs@oss.sgi.com Mon Mar 24 21:20:16 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 21:20:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P4KD0w026679 for ; Mon, 24 Mar 2008 21:20:15 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA00892; Tue, 25 Mar 2008 15:20:43 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 16403) id 480FA58C4C0F; Tue, 25 Mar 2008 15:20:43 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 907752 - xfsqa: make 054 _require_quota Message-Id: <20080325042043.480FA58C4C0F@chook.melbourne.sgi.com> Date: Tue, 25 Mar 2008 15:20:43 +1100 (EST) From: xaiki@sgi.com (Niv Sardi-Altivanik) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15017 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: xaiki@sgi.com Precedence: bulk X-list: xfs 054 needs quota support to run, but doesn't make that explicit. Signed-off-by: Eric Sandeen Date: Tue Mar 25 15:20:18 AEDT 2008 Workarea: chook.melbourne.sgi.com:/build/xaiki/isms/xfs-cmds Inspected by: esandeen The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:30700a xfstests/054 - 1.16 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/054.diff?r1=text&tr1=1.16&r2=text&tr2=1.15&f=h - 054 needs quota support to run, but doesn't make that explicit. 
From owner-xfs@oss.sgi.com Mon Mar 24 21:25:24 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 21:25:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P4PLgS027849 for ; Mon, 24 Mar 2008 21:25:22 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA01039; Tue, 25 Mar 2008 15:25:49 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 4D34258C4C0F; Tue, 25 Mar 2008 15:25:49 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 979339 - xfs_bmbt_insert invalidates the btree cursor Message-Id: <20080325042549.4D34258C4C0F@chook.melbourne.sgi.com> Date: Tue, 25 Mar 2008 15:25:49 +1100 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15019 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Ensure a btree insert returns a valid cursor. When writing into preallocated regions there is a case where XFS can oops or hang doing the unwritten extent conversion on I/O completion. It turns out that the problem is related to the btree cursor being invalid. When we do an insert into the tree, we may need to split blocks in the tree. When we only split at the leaf level (i.e. level 0), everything works just fine. However, if we have a multi-level split in the btree, the cursor passed to the insert function is no longer valid once the insert is complete.
The leaf level split is handled correctly because all the operations at level 0 are done using the original cursor, hence it is updated correctly. However, when we need to update the next level up the tree, we don't use that cursor - we use a cloned cursor that points to the index in the next level up where we need to do the insert. Hence if we need to split a second level, the changes to the tree are reflected in the cloned cursor and not the original cursor. This clone-and-move-up-a-level-on-split behaviour recurses all the way to the top of the tree. The complexity here is that these cloned cursors do not point to the original index that was inserted - they point to the newly allocated block (the right block) and the original cursor pointer to that level may still point to the left block. Hence, without deep examination of the cloned cursor and buffers, we cannot update the original cursor with the new path from the cloned cursor. In these cases the original cursor could be pointing to the wrong block(s) and hence a subsequent modification to the tree using that cursor will lead to corruption of the tree. The crash case occurs when the tree changes height - we insert a new level in the tree, and the cursor does not have a buffer in its path for that level. Hence any attempt to walk back up the cursor to the root block will result in a null pointer dereference. To make matters even more complex, the BMAP BT is rooted in an inode, so we can have a change of height in the btree *without a root split*. That is, if the root block in the inode is full when we split a leaf node, we cannot fit the pointer to the new block in the root, so we allocate a new block, migrate all the ptrs out of the inode into the new block and point the inode root block at the newly allocated block. This changes the height of the tree without a root split having occurred and hence invalidates the path in the original cursor.
The patch below prevents xfs_bmbt_insert() from returning with an invalid cursor by detecting the cases that invalidate the original cursor and refreshing it by doing a lookup into the btree for the original index we were inserting at. Note that the INOBT, AGFBNO and AGFCNT btree implementations also have this bug, but the cursor is currently always destroyed or revalidated after an insert for those trees. Hence this patch only addresses the problem in the BMBT code. Date: Tue Mar 25 15:25:23 AEDT 2008 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: lachlan@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:30701a fs/xfs/xfs_bmap_btree.c - 1.168 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_bmap_btree.c.diff?r1=text&tr1=1.168&r2=text&tr2=1.167&f=h - Revalidate the btree cursor in xfs_bmbt_insert if we've done a multi-level split or a split that has changed the height of the tree. Some code assumes that the cursor returned after the insert is valid, so revalidating the cursor ensures that such code functions correctly.
From owner-xfs@oss.sgi.com Mon Mar 24 21:32:13 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 21:32:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P4WA4T028704 for ; Mon, 24 Mar 2008 21:32:12 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA01342; Tue, 25 Mar 2008 15:32:40 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id C4C0A58C4C0F; Tue, 25 Mar 2008 15:32:40 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 979083 - xfsqa test 166 fails on 64k page machine Message-Id: <20080325043240.C4C0A58C4C0F@chook.melbourne.sgi.com> Date: Tue, 25 Mar 2008 15:32:40 +1100 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15020 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs xfsqa test 166 fails on 64k page machine test 166 output is dependent on page size (mmap related). Make the output filter turn the output into something independent of page size whilst checking that the output is valid. 
Date: Tue Mar 25 15:32:20 AEDT 2008 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/xfs-cmds Inspected by: hch@infradead.org,jeffpc@josefsipek.net The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:30702a xfstests/166 - 1.3 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/166.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h - Make the file size and I/O size large enough that 64k pages are handled correctly. xfstests/166.out - 1.3 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/166.out.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h - Make the golden output page size independent. From owner-xfs@oss.sgi.com Mon Mar 24 21:35:40 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 21:35:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P4ZbE1029363 for ; Mon, 24 Mar 2008 21:35:39 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA01440; Tue, 25 Mar 2008 15:36:07 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 6F65758C4C0F; Tue, 25 Mar 2008 15:36:07 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 979085 - xfsqa 141 fails on 64k page size machines Message-Id: <20080325043607.6F65758C4C0F@chook.melbourne.sgi.com> Date: Tue, 25 Mar 2008 15:36:07 +1100 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15021 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: 
xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Make test 141 work on 64k page size machines Make the file larger and read 64k from it instead of 16k so that it pulls in a full page from the middle of the file. Date: Tue Mar 25 15:35:35 AEDT 2008 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/xfs-cmds Inspected by: hch@infradead.org,jeffpc@josefsipek.net The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:30703a xfstests/141 - 1.2 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/141.diff?r1=text&tr1=1.2&r2=text&tr2=1.1&f=h - Use a large file and 64k I/O size so the test works on 64k page size machines. From owner-xfs@oss.sgi.com Mon Mar 24 21:39:51 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 21:39:59 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P4dmE9029930 for ; Mon, 24 Mar 2008 21:39:50 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA01526; Tue, 25 Mar 2008 15:40:18 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 9E58558C4C0F; Tue, 25 Mar 2008 15:40:18 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 979086 - XFSQA 103: filter ln output Message-Id: <20080325044018.9E58558C4C0F@chook.melbourne.sgi.com> Date: Tue, 25 Mar 2008 15:40:18 +1100 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15022 X-ecartis-version: Ecartis v1.0.0 Sender: 
xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs XFSQA 103: filter ln output More recent versions of ln (version >= 6.0) have a different error output. update the filter to handle this. Date: Tue Mar 25 15:39:58 AEDT 2008 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/xfs-cmds Inspected by: hch@infradead.org,jeffpc@josefsipek.net The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:30705a xfstests/103 - 1.6 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/103.diff?r1=text&tr1=1.6&r2=text&tr2=1.5&f=h - update filter to handle new ln error output. From owner-xfs@oss.sgi.com Mon Mar 24 21:55:14 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 21:55:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.2 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_33 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P4t7ir031429 for ; Mon, 24 Mar 2008 21:55:13 -0700 Received: from cxfsmac10.melbourne.sgi.com (cxfsmac10.melbourne.sgi.com [134.14.55.100]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA01916; Tue, 25 Mar 2008 15:55:36 +1100 Message-ID: <47E885C8.4030902@sgi.com> Date: Tue, 25 Mar 2008 15:55:36 +1100 From: Donald Douwsma User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: Eric Sandeen CC: xfs-oss Subject: Re: [PATCH] xfsqa: make 054 _require_quota References: <47E71A3D.9040707@sandeen.net> In-Reply-To: <47E71A3D.9040707@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean 
X-archive-position: 15023 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: donaldd@sgi.com Precedence: bulk X-list: xfs Eric Sandeen wrote: > 054 needs quota support to run, but doesn't make that explicit. Looks good Eric > Index: xfstests/054 > =================================================================== > --- xfstests.orig/054 > +++ xfstests/054 > @@ -38,6 +38,7 @@ cp /dev/null $seq.full > chmod ugo+rwx $seq.full > > _require_scratch > +_require_quota > > _filter_stat() > { > > From owner-xfs@oss.sgi.com Mon Mar 24 21:58:10 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 21:58:17 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.8 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_64, J_CHICKENPOX_66 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P4w4LH031997 for ; Mon, 24 Mar 2008 21:58:09 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA01992; Tue, 25 Mar 2008 15:58:31 +1100 Message-ID: <47E88676.7080006@sgi.com> Date: Tue, 25 Mar 2008 15:58:30 +1100 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: Eric Sandeen CC: Barry Naujok , xfs-oss Subject: Re: [PATCH] xfsqa: call _notrun in common.dump if dump utils not found References: <47E5CFBA.7060405@sandeen.net> <47E8703C.30603@sgi.com> <47E87D97.9050900@sandeen.net> In-Reply-To: <47E87D97.9050900@sandeen.net> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15024 X-ecartis-version: Ecartis 
v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Eric Sandeen wrote: > Barry Naujok wrote: >> On Tue, 25 Mar 2008 14:23:40 +1100, Timothy Shimmin wrote: >> >>> Thanks, Eric. >>> >>> On IRIX: >>> > where xfsdump xfsrestore xfsinvutil >>> /sbin/xfsdump >>> /usr/sbin/xfsdump >>> /sbin/xfsrestore >>> /usr/sbin/xfsinvutil >>> > ls -l /sbin/xfsdump >>> lrwxr-xr-x ... /sbin/xfsdump -> ../usr/sbin/xfsdump* >>> >>> I'll add the IRIX xfsrestore path and wait for Russell or >>> whoever to complain about BSD :) >> common.config sets up environment variables for the various >> tools used and can handle these paths. It has them for the >> xfsprogs tools (XFS_REPAIR_PROG, XFS_DB_PROG, etc) but >> nothing for the xfsdump tools. > > yeah, that may be better... > Okay. Fair point. I'll change common.dump to use the XFSDUMP_PROG etc.... and common.config to set the PROG vars. --Tim From owner-xfs@oss.sgi.com Mon Mar 24 22:00:46 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 22:00:55 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-0.3 required=5.0 tests=BAYES_00,J_CHICKENPOX_210, J_CHICKENPOX_25,J_CHICKENPOX_26,J_CHICKENPOX_27,J_CHICKENPOX_28, J_CHICKENPOX_42,J_CHICKENPOX_43,J_CHICKENPOX_44,J_CHICKENPOX_45, J_CHICKENPOX_52,J_CHICKENPOX_53,J_CHICKENPOX_54,J_CHICKENPOX_55, J_CHICKENPOX_56,J_CHICKENPOX_57,J_CHICKENPOX_62,J_CHICKENPOX_63, J_CHICKENPOX_64,J_CHICKENPOX_65,J_CHICKENPOX_66,J_CHICKENPOX_73, J_CHICKENPOX_74,J_CHICKENPOX_75 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2P50hWi032518 for ; Mon, 24 Mar 2008 22:00:45 -0700 X-ASG-Debug-ID: 1206421273-598602070000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from smtp-0-2.linuxrulz.org (localhost 
[127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 6D1E6101CA55 for ; Mon, 24 Mar 2008 22:01:13 -0700 (PDT) Received: from smtp-0-2.linuxrulz.org (smtp-0-2.linuxrulz.org [66.197.170.246]) by cuda.sgi.com with ESMTP id aBtBtqegGEchG5Gi for ; Mon, 24 Mar 2008 22:01:13 -0700 (PDT) Received: from localhost (scranton-0-2 [127.0.0.1]) by smtp-0-2.linuxrulz.org (Postfix) with ESMTP id 3C73133B714; Tue, 25 Mar 2008 05:00:41 +0000 (GMT) Received: from [10.254.254.242] (dsl-241-56-67.telkomadsl.co.za [41.241.56.67]) by smtp-0-2.linuxrulz.org (Postfix) with ESMTP id 0E83133B71B; Tue, 25 Mar 2008 05:00:29 +0000 (GMT) X-ASG-Orig-Subj: Re: [PATCH] Remove susv3 legacy functions Subject: Re: [PATCH] Remove susv3 legacy functions From: Nigel Kukard To: Barry Naujok Cc: xfs@oss.sgi.com, buildroot In-Reply-To: References: <1206164935.14300.8.camel@nigel-x60> Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="=-gCGnYydBI1TgXeIapjuF" Date: Tue, 25 Mar 2008 05:00:23 +0000 Message-Id: <1206421223.3605.29.camel@nigel-x60> Mime-Version: 1.0 X-Mailer: Evolution 2.12.3 (2.12.3-1.1) X-Barracuda-Connect: smtp-0-2.linuxrulz.org[66.197.170.246] X-Barracuda-Start-Time: 1206421275 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45825 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15025 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nkukard@lbsd.net Precedence: bulk X-list: xfs --=-gCGnYydBI1TgXeIapjuF 
Content-Type: multipart/mixed; boundary="=-eN7NyBw8iwkzqyfheP3o" --=-eN7NyBw8iwkzqyfheP3o Content-Type: text/plain Content-Transfer-Encoding: quoted-printable > > Remove legacy susv3 functions. > > > > -N >=20 > Thanks for this patch, it looks good and I'll apply it in the > near future. Updated patch attached. I missed one bzero(). -N --=-eN7NyBw8iwkzqyfheP3o Content-Disposition: attachment; filename=xfsprogs-2.7.11_susv3-legacy.patch Content-Type: text/x-patch; name=xfsprogs-2.7.11_susv3-legacy.patch; charset=us-ascii Content-Transfer-Encoding: base64 ZGlmZiAtcnUgeGZzcHJvZ3MtMi43LjExX3ZhbmlsbGEvY29weS94ZnNfY29w eS5jIHhmc3Byb2dzLTIuNy4xMV9zdXN2My1sZWdhY3kvY29weS94ZnNfY29w eS5jDQotLS0geGZzcHJvZ3MtMi43LjExX3ZhbmlsbGEvY29weS94ZnNfY29w eS5jCTIwMDYtMDEtMTcgMDM6NDY6NDYuMDAwMDAwMDAwICswMDAwDQorKysg eGZzcHJvZ3MtMi43LjExX3N1c3YzLWxlZ2FjeS9jb3B5L3hmc19jb3B5LmMJ MjAwOC0wMy0yNCAxNDozNjo0Ny4wMDAwMDAwMDAgKzAwMDANCkBAIC05MDMs NyArOTAzLDcgQEANCiANCiAJCS8qIHNhdmUgd2hhdCB3ZSBuZWVkIChhZ2Yp IGluIHRoZSBidHJlZSBidWZmZXIgKi8NCiANCi0JCWJjb3B5KGFnX2hkci54 ZnNfYWdmLCBidHJlZV9idWYuZGF0YSwgc291cmNlX3NlY3RvcnNpemUpOw0K KwkJbWVtbW92ZShidHJlZV9idWYuZGF0YSwgYWdfaGRyLnhmc19hZ2YsIHNv dXJjZV9zZWN0b3JzaXplKTsNCiAJCWFnX2hkci54ZnNfYWdmID0gKHhmc19h Z2ZfdCAqKSBidHJlZV9idWYuZGF0YTsNCiAJCWJ0cmVlX2J1Zi5sZW5ndGgg PSBzb3VyY2VfYmxvY2tzaXplOw0KIA0KZGlmZiAtcnUgeGZzcHJvZ3MtMi43 LjExX3ZhbmlsbGEvZ3Jvd2ZzL3hmc19ncm93ZnMuYyB4ZnNwcm9ncy0yLjcu MTFfc3VzdjMtbGVnYWN5L2dyb3dmcy94ZnNfZ3Jvd2ZzLmMNCi0tLSB4ZnNw cm9ncy0yLjcuMTFfdmFuaWxsYS9ncm93ZnMveGZzX2dyb3dmcy5jCTIwMDYt MDEtMTcgMDM6NDY6NDguMDAwMDAwMDAwICswMDAwDQorKysgeGZzcHJvZ3Mt Mi43LjExX3N1c3YzLWxlZ2FjeS9ncm93ZnMveGZzX2dyb3dmcy5jCTIwMDgt MDMtMjQgMTQ6MzY6NDcuMDAwMDAwMDAwICswMDAwDQpAQCAtMjUwLDcgKzI1 MCw3IEBADQogCSAqIE5lZWQgcm9vdCBhY2Nlc3MgZnJvbSBoZXJlIG9uICh1 c2luZyByYXcgZGV2aWNlcykuLi4NCiAJICovDQogDQotCWJ6ZXJvKCZ4aSwg c2l6ZW9mKHhpKSk7DQorCW1lbXNldCgmeGksIDAsIHNpemVvZih4aSkpOw0K IAl4aS5kbmFtZSA9IGRhdGFkZXY7DQogCXhpLmxvZ25hbWUgPSBsb2dkZXY7 
DQogCXhpLnJ0bmFtZSA9IHJ0ZGV2Ow0KZGlmZiAtcnUgeGZzcHJvZ3MtMi43 LjExX3ZhbmlsbGEvaW8vYm1hcC5jIHhmc3Byb2dzLTIuNy4xMV9zdXN2My1s ZWdhY3kvaW8vYm1hcC5jDQotLS0geGZzcHJvZ3MtMi43LjExX3ZhbmlsbGEv aW8vYm1hcC5jCTIwMDYtMDEtMTcgMDM6NDY6NDkuMDAwMDAwMDAwICswMDAw DQorKysgeGZzcHJvZ3MtMi43LjExX3N1c3YzLWxlZ2FjeS9pby9ibWFwLmMJ MjAwOC0wMy0yNCAxNDozNjo0Ny4wMDAwMDAwMDAgKzAwMDANCkBAIC0xNzUs NyArMTc1LDcgQEANCiANCiAJZG8gewkvKiBsb29wIGEgbWl4aW11bSBvZiB0 d28gdGltZXMgKi8NCiANCi0JCWJ6ZXJvKG1hcCwgc2l6ZW9mKCptYXApKTsJ LyogemVybyBoZWFkZXIgKi8NCisJCW1lbXNldChtYXAsIDAsIHNpemVvZigq bWFwKSk7CS8qIHplcm8gaGVhZGVyICovDQogDQogCQltYXAtPmJtdl9sZW5n dGggPSAtMTsNCiAJCW1hcC0+Ym12X2NvdW50ID0gbWFwX3NpemU7DQpkaWZm IC1ydSB4ZnNwcm9ncy0yLjcuMTFfdmFuaWxsYS9saWJoYW5kbGUvamRtLmMg eGZzcHJvZ3MtMi43LjExX3N1c3YzLWxlZ2FjeS9saWJoYW5kbGUvamRtLmMN Ci0tLSB4ZnNwcm9ncy0yLjcuMTFfdmFuaWxsYS9saWJoYW5kbGUvamRtLmMJ MjAwNi0wMS0xNyAwMzo0Njo0OS4wMDAwMDAwMDAgKzAwMDANCisrKyB4ZnNw cm9ncy0yLjcuMTFfc3VzdjMtbGVnYWN5L2xpYmhhbmRsZS9qZG0uYwkyMDA4 LTAzLTI0IDE0OjM2OjQ3LjAwMDAwMDAwMCArMDAwMA0KQEAgLTQ3LDcgKzQ3 LDcgQEANCiB7DQogCWhhbmRsZXAtPmZoX2ZzaGFuZGxlID0gKmZzaGFuZGxl cDsNCiAJaGFuZGxlcC0+Zmhfc3pfZm9sbG93aW5nID0gRklMRUhBTkRMRV9T Wl9GT0xMT1dJTkc7DQotCWJ6ZXJvKGhhbmRsZXAtPmZoX3BhZCwgRklMRUhB TkRMRV9TWl9QQUQpOw0KKwltZW1zZXQoaGFuZGxlcC0+ZmhfcGFkLCAwLCBG SUxFSEFORExFX1NaX1BBRCk7DQogCWhhbmRsZXAtPmZoX2dlbiA9IHN0YXRw LT5ic19nZW47DQogCWhhbmRsZXAtPmZoX2lubyA9IHN0YXRwLT5ic19pbm87 DQogfQ0KZGlmZiAtcnUgeGZzcHJvZ3MtMi43LjExX3ZhbmlsbGEvbG9ncHJp bnQvbG9nX21pc2MuYyB4ZnNwcm9ncy0yLjcuMTFfc3VzdjMtbGVnYWN5L2xv Z3ByaW50L2xvZ19taXNjLmMNCi0tLSB4ZnNwcm9ncy0yLjcuMTFfdmFuaWxs YS9sb2dwcmludC9sb2dfbWlzYy5jCTIwMDYtMDEtMTcgMDM6NDY6NTEuMDAw MDAwMDAwICswMDAwDQorKysgeGZzcHJvZ3MtMi43LjExX3N1c3YzLWxlZ2Fj eS9sb2dwcmludC9sb2dfbWlzYy5jCTIwMDgtMDMtMjQgMTQ6MzY6NDcuMDAw MDAwMDAwICswMDAwDQpAQCAtMTIwLDEwICsxMjAsMTAgQEANCiAgICAgeGxv Z19vcF9oZWFkZXJfdCBoYnVmOw0KIA0KICAgICAvKg0KLSAgICAgKiBiY29w eSBiZWNhdXNlIG9uIDY0L24zMiwgcGFydGlhbCByZWFkcyBjYW4gY2F1c2Ug 
dGhlIG9wX2hlYWQNCisgICAgICogbWVtbW92ZSBiZWNhdXNlIG9uIDY0L24z MiwgcGFydGlhbCByZWFkcyBjYW4gY2F1c2UgdGhlIG9wX2hlYWQNCiAgICAg ICogcG9pbnRlciB0byBjb21lIGluIHBvaW50aW5nIHRvIGFuIG9kZC1udW1i ZXJlZCBieXRlDQogICAgICAqLw0KLSAgICBiY29weShvcF9oZWFkLCAmaGJ1 Ziwgc2l6ZW9mKHhsb2dfb3BfaGVhZGVyX3QpKTsNCisgICAgbWVtbW92ZSgm aGJ1Ziwgb3BfaGVhZCwgc2l6ZW9mKHhsb2dfb3BfaGVhZGVyX3QpKTsNCiAg ICAgb3BfaGVhZCA9ICZoYnVmOw0KICAgICAqcHRyICs9IHNpemVvZih4bG9n X29wX2hlYWRlcl90KTsNCiAgICAgcHJpbnRmKCJPcGVyICglZCk6IHRpZDog JXggIGxlbjogJWQgIGNsaWVudGlkOiAlcyAgIiwgaSwNCkBAIC0yNTMsMTAg KzI1MywxMCBAQA0KICAgICBsb25nIGxvbmcJCSB4LCB5Ow0KIA0KICAgICAv Kg0KLSAgICAgKiBiY29weSB0byBlbnN1cmUgOC1ieXRlIGFsaWdubWVudCBm b3IgdGhlIGxvbmcgbG9uZ3MgaW4NCisgICAgICogbWVtbW92ZSB0byBlbnN1 cmUgOC1ieXRlIGFsaWdubWVudCBmb3IgdGhlIGxvbmcgbG9uZ3MgaW4NCiAg ICAgICogYnVmX2xvZ19mb3JtYXRfdCBzdHJ1Y3R1cmUNCiAgICAgICovDQot ICAgIGJjb3B5KCpwdHIsICZsYnVmLCBNSU4oc2l6ZW9mKHhmc19idWZfbG9n X2Zvcm1hdF90KSwgbGVuKSk7DQorICAgIG1lbW1vdmUoJmxidWYsICpwdHIs IE1JTihzaXplb2YoeGZzX2J1Zl9sb2dfZm9ybWF0X3QpLCBsZW4pKTsNCiAg ICAgZiA9ICZsYnVmOw0KICAgICAqcHRyICs9IGxlbjsNCiANCkBAIC0zMTks MTUgKzMxOSwxNSBAQA0KIAkJfSBlbHNlIHsNCiAJCQlwcmludGYoIlxuIik7 DQogCQkJLyoNCi0JCQkgKiBiY29weSBiZWNhdXNlICpwdHIgbWF5IG5vdCBi ZSA4LWJ5dGUgYWxpZ25lZA0KKwkJCSAqIG1lbW1vdmUgYmVjYXVzZSAqcHRy IG1heSBub3QgYmUgOC1ieXRlIGFsaWduZWQNCiAJCQkgKi8NCi0JCQliY29w eSgqcHRyLCAmeCwgc2l6ZW9mKGxvbmcgbG9uZykpOw0KLQkJCWJjb3B5KCpw dHIrOCwgJnksIHNpemVvZihsb25nIGxvbmcpKTsNCisJCQltZW1tb3ZlKCZ4 LCAqcHRyLCBzaXplb2YobG9uZyBsb25nKSk7DQorCQkJbWVtbW92ZSgmeSwg KnB0cis4LCBzaXplb2YobG9uZyBsb25nKSk7DQogCQkJcHJpbnRmKCJpY291 bnQ6ICVsbGQgIGlmcmVlOiAlbGxkICAiLA0KIAkJCQlJTlRfR0VUKHgsIEFS Q0hfQ09OVkVSVCksDQogCQkJCUlOVF9HRVQoeSwgQVJDSF9DT05WRVJUKSk7 DQotCQkJYmNvcHkoKnB0cisxNiwgJngsIHNpemVvZihsb25nIGxvbmcpKTsN Ci0JCQliY29weSgqcHRyKzI0LCAmeSwgc2l6ZW9mKGxvbmcgbG9uZykpOw0K KwkJCW1lbW1vdmUoJngsICpwdHIrMTYsIHNpemVvZihsb25nIGxvbmcpKTsN CisJCQltZW1tb3ZlKCZ5LCAqcHRyKzI0LCBzaXplb2YobG9uZyBsb25nKSk7 
DQogCQkJcHJpbnRmKCJmZGJsa3M6ICVsbGQgIGZyZXh0OiAlbGxkXG4iLA0K IAkJCQlJTlRfR0VUKHgsIEFSQ0hfQ09OVkVSVCksDQogCQkJCUlOVF9HRVQo eSwgQVJDSF9DT05WRVJUKSk7DQpAQCAtNDc1LDEwICs0NzUsMTAgQEANCiAg ICAgeGZzX2VmZF9sb2dfZm9ybWF0X3QgbGJ1ZjsNCiANCiAgICAgLyoNCi0g ICAgICogYmNvcHkgdG8gZW5zdXJlIDgtYnl0ZSBhbGlnbm1lbnQgZm9yIHRo ZSBsb25nIGxvbmdzIGluDQorICAgICAqIG1lbW1vdmUgdG8gZW5zdXJlIDgt Ynl0ZSBhbGlnbm1lbnQgZm9yIHRoZSBsb25nIGxvbmdzIGluDQogICAgICAq IHhmc19lZmRfbG9nX2Zvcm1hdF90IHN0cnVjdHVyZQ0KICAgICAgKi8NCi0g ICAgYmNvcHkoKnB0ciwgJmxidWYsIGxlbik7DQorICAgIG1lbW1vdmUoJmxi dWYsICpwdHIsIGxlbik7DQogICAgIGYgPSAmbGJ1ZjsNCiAgICAgKnB0ciAr PSBsZW47DQogICAgIGlmIChsZW4gPj0gc2l6ZW9mKHhmc19lZmRfbG9nX2Zv cm1hdF90KSkgew0KQEAgLTUxMSwxMCArNTExLDEwIEBADQogICAgIHhmc19l ZmlfbG9nX2Zvcm1hdF90IGxidWY7DQogDQogICAgIC8qDQotICAgICAqIGJj b3B5IHRvIGVuc3VyZSA4LWJ5dGUgYWxpZ25tZW50IGZvciB0aGUgbG9uZyBs b25ncyBpbg0KKyAgICAgKiBtZW1tb3ZlIHRvIGVuc3VyZSA4LWJ5dGUgYWxp Z25tZW50IGZvciB0aGUgbG9uZyBsb25ncyBpbg0KICAgICAgKiB4ZnNfZWZp X2xvZ19mb3JtYXRfdCBzdHJ1Y3R1cmUNCiAgICAgICovDQotICAgIGJjb3B5 KCpwdHIsICZsYnVmLCBsZW4pOw0KKyAgICBtZW1tb3ZlKCZsYnVmLCAqcHRy LCBsZW4pOw0KICAgICBmID0gJmxidWY7DQogICAgICpwdHIgKz0gbGVuOw0K ICAgICBpZiAobGVuID49IHNpemVvZih4ZnNfZWZpX2xvZ19mb3JtYXRfdCkp IHsNCkBAIC01NDQsNyArNTQ0LDcgQEANCiAgICAgeGZzX3FvZmZfbG9nZm9y bWF0X3QgKmY7DQogICAgIHhmc19xb2ZmX2xvZ2Zvcm1hdF90IGxidWY7DQog DQotICAgIGJjb3B5KCpwdHIsICZsYnVmLCBNSU4oc2l6ZW9mKHhmc19xb2Zm X2xvZ2Zvcm1hdF90KSwgbGVuKSk7DQorICAgIG1lbW1vdmUoJmxidWYsICpw dHIsIE1JTihzaXplb2YoeGZzX3FvZmZfbG9nZm9ybWF0X3QpLCBsZW4pKTsN CiAgICAgZiA9ICZsYnVmOw0KICAgICAqcHRyICs9IGxlbjsNCiAgICAgaWYg KGxlbiA+PSBzaXplb2YoeGZzX3FvZmZfbG9nZm9ybWF0X3QpKSB7DQpAQCAt NTk4LDE0ICs1OTgsMTQgQEANCiANCiAJcHJpbnRmKCJTSE9SVEZPUk0gRElS RUNUT1JZIHNpemUgJWQgY291bnQgJWRcbiIsDQogCSAgICAgICBzaXplLCBz ZnAtPmhkci5jb3VudCk7DQotCWJjb3B5KCYoc2ZwLT5oZHIucGFyZW50KSwg Jmlubywgc2l6ZW9mKGlubykpOw0KKwltZW1tb3ZlKCZpbm8sICYoc2ZwLT5o ZHIucGFyZW50KSwgc2l6ZW9mKGlubykpOw0KIAlwcmludGYoIi4uIGlubyAw 
eCVsbHhcbiIsICh1bnNpZ25lZCBsb25nIGxvbmcpSU5UX0dFVChpbm8sIEFS Q0hfQ09OVkVSVCkpOw0KIA0KIAljb3VudCA9ICh1aW50KShzZnAtPmhkci5j b3VudCk7DQogCXNmZXAgPSAmKHNmcC0+bGlzdFswXSk7DQogCWZvciAoaSA9 IDA7IGkgPCBjb3VudDsgaSsrKSB7DQotCQliY29weSgmKHNmZXAtPmludW1i ZXIpLCAmaW5vLCBzaXplb2YoaW5vKSk7DQotCQliY29weSgoc2ZlcC0+bmFt ZSksIG5hbWVidWYsIHNmZXAtPm5hbWVsZW4pOw0KKwkJbWVtbW92ZSgmaW5v LCAmKHNmZXAtPmludW1iZXIpLCBzaXplb2YoaW5vKSk7DQorCQltZW1tb3Zl KG5hbWVidWYsIChzZmVwLT5uYW1lKSwgc2ZlcC0+bmFtZWxlbik7DQogCQlu YW1lYnVmW3NmZXAtPm5hbWVsZW5dID0gJ1wwJzsNCiAJCXByaW50ZigiJXMg aW5vIDB4JWxseCBuYW1lbGVuICVkXG4iLA0KIAkJICAgICAgIG5hbWVidWYs ICh1bnNpZ25lZCBsb25nIGxvbmcpaW5vLCBzZmVwLT5uYW1lbGVuKTsNCkBA IC02MjgsMTIgKzYyOCwxMiBAQA0KICAgICAvKg0KICAgICAgKiBwcmludCBp bm9kZSB0eXBlIGhlYWRlciByZWdpb24NCiAgICAgICoNCi0gICAgICogYmNv cHkgdG8gZW5zdXJlIDgtYnl0ZSBhbGlnbm1lbnQgZm9yIHRoZSBsb25nIGxv bmdzIGluDQorICAgICAqIG1lbW1vdmUgdG8gZW5zdXJlIDgtYnl0ZSBhbGln bm1lbnQgZm9yIHRoZSBsb25nIGxvbmdzIGluDQogICAgICAqIHhmc19pbm9k ZV9sb2dfZm9ybWF0X3Qgc3RydWN0dXJlDQogICAgICAqDQogICAgICAqIGxl biBjYW4gYmUgc21hbGxlciB0aGFuIHhmc19pbm9kZV9sb2dfZm9ybWF0X3Qg c29tZXRpbWVzLi4uICg/KQ0KICAgICAgKi8NCi0gICAgYmNvcHkoKnB0ciwg JmxidWYsIE1JTihzaXplb2YoeGZzX2lub2RlX2xvZ19mb3JtYXRfdCksIGxl bikpOw0KKyAgICBtZW1tb3ZlKCZsYnVmLCAqcHRyLCBNSU4oc2l6ZW9mKHhm c19pbm9kZV9sb2dfZm9ybWF0X3QpLCBsZW4pKTsNCiAgICAgdmVyc2lvbiA9 IGxidWYuaWxmX3R5cGU7DQogICAgIGYgPSAmbGJ1ZjsNCiAgICAgKCppKSsr OwkJCQkJLyogYnVtcCBpbmRleCAqLw0KQEAgLTY3OSw3ICs2NzksNyBAQA0K IAlyZXR1cm4gZi0+aWxmX3NpemUtMTsNCiAgICAgfQ0KIA0KLSAgICBiY29w eSgqcHRyLCAmZGlubywgc2l6ZW9mKGRpbm8pKTsNCisgICAgbWVtbW92ZSgm ZGlubywgKnB0ciwgc2l6ZW9mKGRpbm8pKTsNCiAgICAgbW9kZSA9IGRpbm8u ZGlfbW9kZSAmIFNfSUZNVDsNCiAgICAgc2l6ZSA9IChpbnQpZGluby5kaV9z aXplOw0KICAgICB4bG9nX3ByaW50X3RyYW5zX2lub2RlX2NvcmUoJmRpbm8p Ow0KQEAgLTc5OCwxMCArNzk4LDEwIEBADQogICAgIC8qDQogICAgICAqIHBy aW50IGRxdW90IGhlYWRlciByZWdpb24NCiAgICAgICoNCi0gICAgICogYmNv cHkgdG8gZW5zdXJlIDgtYnl0ZSBhbGlnbm1lbnQgZm9yIHRoZSBsb25nIGxv 
bmdzIGluDQorICAgICAqIG1lbW1vdmUgdG8gZW5zdXJlIDgtYnl0ZSBhbGln bm1lbnQgZm9yIHRoZSBsb25nIGxvbmdzIGluDQogICAgICAqIHhmc19kcV9s b2dmb3JtYXRfdCBzdHJ1Y3R1cmUNCiAgICAgICovDQotICAgIGJjb3B5KCpw dHIsICZsYnVmLCBNSU4oc2l6ZW9mKHhmc19kcV9sb2dmb3JtYXRfdCksIGxl bikpOw0KKyAgICBtZW1tb3ZlKCZsYnVmLCAqcHRyLCBNSU4oc2l6ZW9mKHhm c19kcV9sb2dmb3JtYXRfdCksIGxlbikpOw0KICAgICBmID0gJmxidWY7DQog ICAgICgqaSkrKzsJCQkJCS8qIGJ1bXAgaW5kZXggKi8NCiAgICAgKnB0ciAr PSBsZW47DQpAQCAtODMwLDcgKzgzMCw3IEBADQogCWhlYWQgPSAoeGxvZ19v cF9oZWFkZXJfdCAqKSpwdHI7DQogCXhsb2dfcHJpbnRfb3BfaGVhZGVyKGhl YWQsICppLCBwdHIpOw0KIAlBU1NFUlQoSU5UX0dFVChoZWFkLT5vaF9sZW4s IEFSQ0hfQ09OVkVSVCkgPT0gc2l6ZW9mKHhmc19kaXNrX2RxdW90X3QpKTsN Ci0JYmNvcHkoKnB0ciwgJmRkcSwgc2l6ZW9mKHhmc19kaXNrX2RxdW90X3Qp KTsNCisJbWVtbW92ZSgmZGRxLCAqcHRyLCBzaXplb2YoeGZzX2Rpc2tfZHF1 b3RfdCkpOw0KIAlwcmludGYoIkRRVU9UOiBtYWdpYyAweCVoeCBmbGFncyAw JWhvXG4iLA0KIAkgICAgICAgSU5UX0dFVChkZHEuZF9tYWdpYywgQVJDSF9D T05WRVJUKSwNCiAJICAgICAgIElOVF9HRVQoZGRxLmRfZmxhZ3MsIEFSQ0hf Q09OVkVSVCkpOw0KZGlmZiAtcnUgeGZzcHJvZ3MtMi43LjExX3ZhbmlsbGEv bWtmcy9wcm90by5jIHhmc3Byb2dzLTIuNy4xMV9zdXN2My1sZWdhY3kvbWtm cy9wcm90by5jDQotLS0geGZzcHJvZ3MtMi43LjExX3ZhbmlsbGEvbWtmcy9w cm90by5jCTIwMDYtMDEtMTcgMDM6NDY6NTEuMDAwMDAwMDAwICswMDAwDQor KysgeGZzcHJvZ3MtMi43LjExX3N1c3YzLWxlZ2FjeS9ta2ZzL3Byb3RvLmMJ MjAwOC0wMy0yNCAxNDozNjo0Ny4wMDAwMDAwMDAgKzAwMDANCkBAIC0yMzQs NyArMjM0LDcgQEANCiAJaWYgKGRvbG9jYWwgJiYgbGVuIDw9IFhGU19JRk9S S19EU0laRShpcCkpIHsNCiAJCWxpYnhmc19pZGF0YV9yZWFsbG9jKGlwLCBs ZW4sIFhGU19EQVRBX0ZPUkspOw0KIAkJaWYgKGJ1ZikNCi0JCQliY29weShi dWYsIGlwLT5pX2RmLmlmX3UxLmlmX2RhdGEsIGxlbik7DQorCQkJbWVtbW92 ZShpcC0+aV9kZi5pZl91MS5pZl9kYXRhLCBidWYsIGxlbik7DQogCQlpcC0+ aV9kLmRpX3NpemUgPSBsZW47DQogCQlpcC0+aV9kZi5pZl9mbGFncyAmPSB+ WEZTX0lGRVhURU5UUzsNCiAJCWlwLT5pX2RmLmlmX2ZsYWdzIHw9IFhGU19J RklOTElORTsNCkBAIC0yNTcsOSArMjU3LDkgQEANCiAJCWQgPSBYRlNfRlNC X1RPX0RBRERSKG1wLCBtYXAuYnJfc3RhcnRibG9jayk7DQogCQlicCA9IGxp Ynhmc190cmFuc19nZXRfYnVmKGxvZ2l0ID8gdHAgOiAwLCBtcC0+bV9kZXYs 
IGQsDQogCQkJbmIgPDwgbXAtPm1fYmxrYmJfbG9nLCAwKTsNCi0JCWJjb3B5 KGJ1ZiwgWEZTX0JVRl9QVFIoYnApLCBsZW4pOw0KKwkJbWVtbW92ZShYRlNf QlVGX1BUUihicCksIGJ1ZiwgbGVuKTsNCiAJCWlmIChsZW4gPCBYRlNfQlVG X0NPVU5UKGJwKSkNCi0JCQliemVybyhYRlNfQlVGX1BUUihicCkgKyBsZW4s IFhGU19CVUZfQ09VTlQoYnApIC0gbGVuKTsNCisJCQltZW1zZXQoWEZTX0JV Rl9QVFIoYnApICsgbGVuLCAwLCBYRlNfQlVGX0NPVU5UKGJwKSAtIGxlbik7 DQogCQlpZiAobG9naXQpDQogCQkJbGlieGZzX3RyYW5zX2xvZ19idWYodHAs IGJwLCAwLCBYRlNfQlVGX0NPVU5UKGJwKSAtIDEpOw0KIAkJZWxzZQ0KQEAg LTM3Niw3ICszNzYsNyBAQA0KIAljcmVkX3QJCWNyZWRzOw0KIAljaGFyCQkq dmFsdWU7DQogDQotCWJ6ZXJvKCZjcmVkcywgc2l6ZW9mKGNyZWRzKSk7DQor CW1lbXNldCgmY3JlZHMsIDAsIHNpemVvZihjcmVkcykpOw0KIAltc3RyID0g Z2V0c3RyKHBwKTsNCiAJc3dpdGNoIChtc3RyWzBdKSB7DQogCWNhc2UgJy0n Og0KQEAgLTYzNSw4ICs2MzUsOCBAQA0KIAl0cCA9IGxpYnhmc190cmFuc19h bGxvYyhtcCwgMCk7DQogCWlmICgoaSA9IGxpYnhmc190cmFuc19yZXNlcnZl KHRwLCBNS0ZTX0JMT0NLUkVTX0lOT0RFLCAwLCAwLCAwLCAwKSkpDQogCQly ZXNfZmFpbGVkKGkpOw0KLQliemVybygmY3JlZHMsIHNpemVvZihjcmVkcykp Ow0KLQliemVybygmZnN4YXR0cnMsIHNpemVvZihmc3hhdHRycykpOw0KKwlt ZW1zZXQoJmNyZWRzLCAwLCBzaXplb2YoY3JlZHMpKTsNCisJbWVtc2V0KCZm c3hhdHRycywgMCwgc2l6ZW9mKGZzeGF0dHJzKSk7DQogCWVycm9yID0gbGli eGZzX2lub2RlX2FsbG9jKCZ0cCwgTlVMTCwgU19JRlJFRywgMSwgMCwNCiAJ CQkJCSZjcmVkcywgJmZzeGF0dHJzLCAmcmJtaXApOw0KIAlpZiAoZXJyb3Ip IHsNCmRpZmYgLXJ1IHhmc3Byb2dzLTIuNy4xMV92YW5pbGxhL21rZnMveGZz X21rZnMuYyB4ZnNwcm9ncy0yLjcuMTFfc3VzdjMtbGVnYWN5L21rZnMveGZz X21rZnMuYw0KLS0tIHhmc3Byb2dzLTIuNy4xMV92YW5pbGxhL21rZnMveGZz X21rZnMuYwkyMDA2LTAxLTE3IDAzOjQ2OjUxLjAwMDAwMDAwMCArMDAwMA0K KysrIHhmc3Byb2dzLTIuNy4xMV9zdXN2My1sZWdhY3kvbWtmcy94ZnNfbWtm cy5jCTIwMDgtMDMtMjQgMTQ6MzY6NDcuMDAwMDAwMDAwICswMDAwDQpAQCAt NjMxLDkgKzYzMSw5IEBADQogCWV4dGVudF9mbGFnZ2luZyA9IDE7DQogCWZv cmNlX292ZXJ3cml0ZSA9IDA7DQogCXdvcnN0X2ZyZWVsaXN0ID0gMDsNCi0J Ynplcm8oJmZzeCwgc2l6ZW9mKGZzeCkpOw0KKwltZW1zZXQoJmZzeCwgMCwg c2l6ZW9mKGZzeCkpOw0KIA0KLQliemVybygmeGksIHNpemVvZih4aSkpOw0K KwltZW1zZXQoJnhpLCAwLCBzaXplb2YoeGkpKTsNCiAJeGkubm90dm9sb2sg 
PSAxOw0KIAl4aS5zZXRibGtzaXplID0gMTsNCiAJeGkuaXNyZWFkb25seSA9 IExJQlhGU19FWENMVVNJVkVMWTsNCkBAIC0xODgyLDcgKzE4ODIsNyBAQA0K IAlic2l6ZSA9IDEgPDwgKGJsb2NrbG9nIC0gQkJTSElGVCk7DQogCW1wID0g Jm1idWY7DQogCXNicCA9ICZtcC0+bV9zYjsNCi0JYnplcm8obXAsIHNpemVv Zih4ZnNfbW91bnRfdCkpOw0KKwltZW1zZXQobXAsIDAsIHNpemVvZih4ZnNf bW91bnRfdCkpOw0KIAlzYnAtPnNiX2Jsb2NrbG9nID0gKF9fdWludDhfdCli bG9ja2xvZzsNCiAJc2JwLT5zYl9zZWN0bG9nID0gKF9fdWludDhfdClzZWN0 b3Jsb2c7DQogCXNicC0+c2JfYWdibGtsb2cgPSAoX191aW50OF90KWxpYnhm c19sb2cyX3JvdW5kdXAoKHVuc2lnbmVkIGludClhZ3NpemUpOw0KQEAgLTIw MjgsMTIgKzIwMjgsMTIgQEANCiAJICogZXh0WzIsM10gYW5kIHJlaXNlcmZz ICg2NGspIC0gYW5kIGhvcGVmdWxseSBhbGwgZWxzZS4NCiAJICovDQogCWJ1 ZiA9IGxpYnhmc19nZXRidWYoeGkuZGRldiwgMCwgQlRPQkIoV0hBQ0tfU0la RSkpOw0KLQliemVybyhYRlNfQlVGX1BUUihidWYpLCBXSEFDS19TSVpFKTsN CisJbWVtc2V0KFhGU19CVUZfUFRSKGJ1ZiksIDAsIFdIQUNLX1NJWkUpOw0K IAlsaWJ4ZnNfd3JpdGVidWYoYnVmLCBMSUJYRlNfRVhJVF9PTl9GQUlMVVJF KTsNCiANCiAJLyogT0ssIG5vdyB3cml0ZSB0aGUgc3VwZXJibG9jayAqLw0K IAlidWYgPSBsaWJ4ZnNfZ2V0YnVmKHhpLmRkZXYsIFhGU19TQl9EQUREUiwg WEZTX0ZTU19UT19CQihtcCwgMSkpOw0KLQliemVybyhYRlNfQlVGX1BUUihi dWYpLCBzZWN0b3JzaXplKTsNCisJbWVtc2V0KFhGU19CVUZfUFRSKGJ1Ziks IDAsIHNlY3RvcnNpemUpOw0KIAlsaWJ4ZnNfeGxhdGVfc2IoWEZTX0JVRl9Q VFIoYnVmKSwgc2JwLCAtMSwgWEZTX1NCX0FMTF9CSVRTKTsNCiAJbGlieGZz X3dyaXRlYnVmKGJ1ZiwgTElCWEZTX0VYSVRfT05fRkFJTFVSRSk7DQogDQpA QCAtMjA1Niw3ICsyMDU2LDcgQEANCiAJaWYgKCF4aS5kaXNmaWxlKSB7DQog CQlidWYgPSBsaWJ4ZnNfZ2V0YnVmKHhpLmRkZXYsICh4aS5kc2l6ZSAtIEJU T0JCKFdIQUNLX1NJWkUpKSwgDQogCQkJCSAgICBCVE9CQihXSEFDS19TSVpF KSk7DQotCQliemVybyhYRlNfQlVGX1BUUihidWYpLCBXSEFDS19TSVpFKTsN CisJCW1lbXNldChYRlNfQlVGX1BUUihidWYpLCAwLCBXSEFDS19TSVpFKTsN CiAJCWxpYnhmc193cml0ZWJ1ZihidWYsIExJQlhGU19FWElUX09OX0ZBSUxV UkUpOw0KIAl9DQogDQpAQCAtMjA4NCw3ICsyMDg0LDcgQEANCiAJCWJ1ZiA9 IGxpYnhmc19nZXRidWYoeGkuZGRldiwNCiAJCQkJWEZTX0FHX0RBRERSKG1w LCBhZ25vLCBYRlNfU0JfREFERFIpLA0KIAkJCQlYRlNfRlNTX1RPX0JCKG1w LCAxKSk7DQotCQliemVybyhYRlNfQlVGX1BUUihidWYpLCBzZWN0b3JzaXpl 
KTsNCisJCW1lbXNldChYRlNfQlVGX1BUUihidWYpLCAwLCBzZWN0b3JzaXpl KTsNCiAJCWxpYnhmc194bGF0ZV9zYihYRlNfQlVGX1BUUihidWYpLCBzYnAs IC0xLCBYRlNfU0JfQUxMX0JJVFMpOw0KIAkJbGlieGZzX3dyaXRlYnVmKGJ1 ZiwgTElCWEZTX0VYSVRfT05fRkFJTFVSRSk7DQogDQpAQCAtMjA5NSw3ICsy MDk1LDcgQEANCiAJCQkJWEZTX0FHX0RBRERSKG1wLCBhZ25vLCBYRlNfQUdG X0RBRERSKG1wKSksDQogCQkJCVhGU19GU1NfVE9fQkIobXAsIDEpKTsNCiAJ CWFnZiA9IFhGU19CVUZfVE9fQUdGKGJ1Zik7DQotCQliemVybyhhZ2YsIHNl Y3RvcnNpemUpOw0KKwkJbWVtc2V0KGFnZiwgMCwgc2VjdG9yc2l6ZSk7DQog CQlpZiAoYWdubyA9PSBhZ2NvdW50IC0gMSkNCiAJCQlhZ3NpemUgPSBkYmxv Y2tzIC0gKHhmc19kcmZzYm5vX3QpKGFnbm8gKiBhZ3NpemUpOw0KIAkJSU5U X1NFVChhZ2YtPmFnZl9tYWdpY251bSwgQVJDSF9DT05WRVJULCBYRlNfQUdG X01BR0lDKTsNCkBAIC0yMTMwLDcgKzIxMzAsNyBAQA0KIAkJCQlYRlNfQUdf REFERFIobXAsIGFnbm8sIFhGU19BR0lfREFERFIobXApKSwNCiAJCQkJWEZT X0ZTU19UT19CQihtcCwgMSkpOw0KIAkJYWdpID0gWEZTX0JVRl9UT19BR0ko YnVmKTsNCi0JCWJ6ZXJvKGFnaSwgc2VjdG9yc2l6ZSk7DQorCQltZW1zZXQo YWdpLCAwLCBzZWN0b3JzaXplKTsNCiAJCUlOVF9TRVQoYWdpLT5hZ2lfbWFn aWNudW0sIEFSQ0hfQ09OVkVSVCwgWEZTX0FHSV9NQUdJQyk7DQogCQlJTlRf U0VUKGFnaS0+YWdpX3ZlcnNpb25udW0sIEFSQ0hfQ09OVkVSVCwgWEZTX0FH SV9WRVJTSU9OKTsNCiAJCUlOVF9TRVQoYWdpLT5hZ2lfc2Vxbm8sIEFSQ0hf Q09OVkVSVCwgYWdubyk7DQpAQCAtMjE1Miw3ICsyMTUyLDcgQEANCiAJCQkJ WEZTX0FHQl9UT19EQUREUihtcCwgYWdubywgWEZTX0JOT19CTE9DSyhtcCkp LA0KIAkJCQlic2l6ZSk7DQogCQlibG9jayA9IFhGU19CVUZfVE9fU0JMT0NL KGJ1Zik7DQotCQliemVybyhibG9jaywgYmxvY2tzaXplKTsNCisJCW1lbXNl dChibG9jaywgMCwgYmxvY2tzaXplKTsNCiAJCUlOVF9TRVQoYmxvY2stPmJi X21hZ2ljLCBBUkNIX0NPTlZFUlQsIFhGU19BQlRCX01BR0lDKTsNCiAJCUlO VF9TRVQoYmxvY2stPmJiX2xldmVsLCBBUkNIX0NPTlZFUlQsIDApOw0KIAkJ SU5UX1NFVChibG9jay0+YmJfbnVtcmVjcywgQVJDSF9DT05WRVJULCAxKTsN CkBAIC0yMjAyLDcgKzIyMDIsNyBAQA0KIAkJCQlYRlNfQUdCX1RPX0RBRERS KG1wLCBhZ25vLCBYRlNfQ05UX0JMT0NLKG1wKSksDQogCQkJCWJzaXplKTsN CiAJCWJsb2NrID0gWEZTX0JVRl9UT19TQkxPQ0soYnVmKTsNCi0JCWJ6ZXJv KGJsb2NrLCBibG9ja3NpemUpOw0KKwkJbWVtc2V0KGJsb2NrLCAwLCBibG9j a3NpemUpOw0KIAkJSU5UX1NFVChibG9jay0+YmJfbWFnaWMsIEFSQ0hfQ09O 
VkVSVCwgWEZTX0FCVENfTUFHSUMpOw0KIAkJSU5UX1NFVChibG9jay0+YmJf bGV2ZWwsIEFSQ0hfQ09OVkVSVCwgMCk7DQogCQlJTlRfU0VUKGJsb2NrLT5i Yl9udW1yZWNzLCBBUkNIX0NPTlZFUlQsIDEpOw0KQEAgLTIyMzksNyArMjIz OSw3IEBADQogCQkJCVhGU19BR0JfVE9fREFERFIobXAsIGFnbm8sIFhGU19J QlRfQkxPQ0sobXApKSwNCiAJCQkJYnNpemUpOw0KIAkJYmxvY2sgPSBYRlNf QlVGX1RPX1NCTE9DSyhidWYpOw0KLQkJYnplcm8oYmxvY2ssIGJsb2Nrc2l6 ZSk7DQorCQltZW1zZXQoYmxvY2ssIDAsIGJsb2Nrc2l6ZSk7DQogCQlJTlRf U0VUKGJsb2NrLT5iYl9tYWdpYywgQVJDSF9DT05WRVJULCBYRlNfSUJUX01B R0lDKTsNCiAJCUlOVF9TRVQoYmxvY2stPmJiX2xldmVsLCBBUkNIX0NPTlZF UlQsIDApOw0KIAkJSU5UX1NFVChibG9jay0+YmJfbnVtcmVjcywgQVJDSF9D T05WRVJULCAwKTsNCkBAIC0yMjUzLDcgKzIyNTMsNyBAQA0KIAkgKi8NCiAJ YnVmID0gbGlieGZzX2dldGJ1ZihtcC0+bV9kZXYsDQogCQkoeGZzX2RhZGRy X3QpWEZTX0ZTQl9UT19CQihtcCwgZGJsb2NrcyAtIDFMTCksIGJzaXplKTsN Ci0JYnplcm8oWEZTX0JVRl9QVFIoYnVmKSwgYmxvY2tzaXplKTsNCisJbWVt c2V0KFhGU19CVUZfUFRSKGJ1ZiksIDAsIGJsb2Nrc2l6ZSk7DQogCWxpYnhm c193cml0ZWJ1ZihidWYsIExJQlhGU19FWElUX09OX0ZBSUxVUkUpOw0KIA0K IAkvKg0KQEAgLTIyNjIsNyArMjI2Miw3IEBADQogCWlmIChtcC0+bV9ydGRl diAmJiBydGJsb2NrcyA+IDApIHsNCiAJCWJ1ZiA9IGxpYnhmc19nZXRidWYo bXAtPm1fcnRkZXYsDQogCQkJCVhGU19GU0JfVE9fQkIobXAsIHJ0YmxvY2tz IC0gMUxMKSwgYnNpemUpOw0KLQkJYnplcm8oWEZTX0JVRl9QVFIoYnVmKSwg YmxvY2tzaXplKTsNCisJCW1lbXNldChYRlNfQlVGX1BUUihidWYpLCAwLCBi bG9ja3NpemUpOw0KIAkJbGlieGZzX3dyaXRlYnVmKGJ1ZiwgTElCWEZTX0VY SVRfT05fRkFJTFVSRSk7DQogCX0NCiANCkBAIC0yMjczLDcgKzIyNzMsNyBA QA0KIAkJeGZzX2FsbG9jX2FyZ190CWFyZ3M7DQogCQl4ZnNfdHJhbnNfdAkq dHA7DQogDQotCQliemVybygmYXJncywgc2l6ZW9mKGFyZ3MpKTsNCisJCW1l bXNldCgmYXJncywgMCwgc2l6ZW9mKGFyZ3MpKTsNCiAJCWFyZ3MudHAgPSB0 cCA9IGxpYnhmc190cmFuc19hbGxvYyhtcCwgMCk7DQogCQlhcmdzLm1wID0g bXA7DQogCQlhcmdzLmFnbm8gPSBhZ25vOw0KZGlmZiAtcnUgeGZzcHJvZ3Mt Mi43LjExX3ZhbmlsbGEvcmVwYWlyL2FnaGVhZGVyLmMgeGZzcHJvZ3MtMi43 LjExX3N1c3YzLWxlZ2FjeS9yZXBhaXIvYWdoZWFkZXIuYw0KLS0tIHhmc3By b2dzLTIuNy4xMV92YW5pbGxhL3JlcGFpci9hZ2hlYWRlci5jCTIwMDYtMDEt MTcgMDM6NDY6NTIuMDAwMDAwMDAwICswMDAwDQorKysgeGZzcHJvZ3MtMi43 
LjExX3N1c3YzLWxlZ2FjeS9yZXBhaXIvYWdoZWFkZXIuYwkyMDA4LTAzLTI0 IDE0OjM2OjQ3LjAwMDAwMDAwMCArMDAwMA0KQEAgLTE4NCw3ICsxODQsNyBA QA0KIA0KICAqIHRoZSBpbnByb2dyZXNzIGZpZWxkcywgdmVyc2lvbiBudW1i ZXJzLCBhbmQgY291bnRlcnMNCiAgKiBhcmUgYWxsb3dlZCB0byBkaWZmZXIg YXMgd2VsbCBhcyBhbGwgZmllbGRzIGFmdGVyIHRoZQ0KLSAqIGNvdW50ZXJz IHRvIGNvcGUgd2l0aCB0aGUgcHJlLTYuNSBta2ZzIG5vbi1iemVyb2VkDQor ICogY291bnRlcnMgdG8gY29wZSB3aXRoIHRoZSBwcmUtNi41IG1rZnMgbm9u LXplcm9lZA0KICAqIHNlY29uZGFyeSBzdXBlcmJsb2NrIHNlY3RvcnMuDQog ICovDQogDQpAQCAtMjMzLDcgKzIzMyw3IEBADQogCSAqIChlLmcuIHdlcmUg cHJlLTYuNSBiZXRhKSBjb3VsZCBsZWF2ZSBnYXJiYWdlIGluIHRoZSBzZWNv bmRhcnkNCiAJICogc3VwZXJibG9jayBzZWN0b3JzLiAgQW55dGhpbmcgc3Rh bXBpbmcgdGhlIHNoYXJlZCBmcyBiaXQgb3IgYmV0dGVyDQogCSAqIGludG8g dGhlIHNlY29uZGFyaWVzIGlzIG9rIGFuZCBzaG91bGQgZ2VuZXJhdGUgY2xl YW4gc2Vjb25kYXJ5DQotCSAqIHN1cGVyYmxvY2sgc2VjdG9ycy4gIHNvIG9u bHkgcnVuIHRoZSBiemVybyBjaGVjayBvbiB0aGUNCisJICogc3VwZXJibG9j ayBzZWN0b3JzLiAgc28gb25seSBydW4gdGhlIHplcm8gY2hlY2sgb24gdGhl DQogCSAqIHBvdGVudGlhbGx5IGdhcmJhZ2VkIHNlY29uZGFyaWVzLg0KIAkg Ki8NCiAJaWYgKHByZV82NV9iZXRhIHx8DQpAQCAtMjc1LDcgKzI3NSw3IEBA DQogCQkJCWRvX3dhcm4oDQogCQlfKCJ6ZXJvaW5nIHVudXNlZCBwb3J0aW9u IG9mICVzIHN1cGVyYmxvY2sgKEFHICMldSlcbiIpLA0KIAkJCQkJIWkgPyBf KCJwcmltYXJ5IikgOiBfKCJzZWNvbmRhcnkiKSwgaSk7DQotCQkJCWJ6ZXJv KCh2b2lkICopKChfX3BzaW50X3Qpc2IgKyBzaXplKSwNCisJCQkJbWVtc2V0 KCh2b2lkICopKChfX3BzaW50X3Qpc2IgKyBzaXplKSwgMCwNCiAJCQkJCW1w LT5tX3NiLnNiX3NlY3RzaXplIC0gc2l6ZSk7DQogCQkJfSBlbHNlDQogCQkJ CWRvX3dhcm4oDQpAQCAtMjg2LDcgKzI4Niw3IEBADQogDQogCS8qDQogCSAq IG5vdyBsb29rIGZvciB0aGUgZmllbGRzIHdlIGNhbiBtYW5pcHVsYXRlIGRp cmVjdGx5Lg0KLQkgKiBpZiB3ZSBkaWQgYSBiemVybyBhbmQgdGhhdCBiemVy byBjb3VsZCBoYXZlIGluY2x1ZGVkDQorCSAqIGlmIHdlIGRpZCBhIHplcm8g YW5kIHRoYXQgemVybyBjb3VsZCBoYXZlIGluY2x1ZGVkDQogCSAqIHRoZSBm aWVsZCBpbiBxdWVzdGlvbiwganVzdCBzaWxlbnRseSByZXNldCBpdC4gIG90 aGVyd2lzZSwNCiAJICogY29tcGxhaW4uDQogCSAqDQpkaWZmIC1ydSB4ZnNw cm9ncy0yLjcuMTFfdmFuaWxsYS9yZXBhaXIvYXR0cl9yZXBhaXIuYyB4ZnNw 
cm9ncy0yLjcuMTFfc3VzdjMtbGVnYWN5L3JlcGFpci9hdHRyX3JlcGFpci5j DQotLS0geGZzcHJvZ3MtMi43LjExX3ZhbmlsbGEvcmVwYWlyL2F0dHJfcmVw YWlyLmMJMjAwNi0wMS0xNyAwMzo0Njo1Mi4wMDAwMDAwMDAgKzAwMDANCisr KyB4ZnNwcm9ncy0yLjcuMTFfc3VzdjMtbGVnYWN5L3JlcGFpci9hdHRyX3Jl cGFpci5jCTIwMDgtMDMtMjQgMTQ6MzY6NDcuMDAwMDAwMDAwICswMDAwDQpA QCAtODMsNyArODMsNyBAQA0KIGludA0KIHZhbHVlY2hlY2soY2hhciAqbmFt ZXZhbHVlLCBjaGFyICp2YWx1ZSwgaW50IG5hbWVsZW4sIGludCB2YWx1ZWxl bikNCiB7DQotCS8qIGZvciBwcm9wZXIgYWxpZ25tZW50IGlzc3VlcywgZ2V0 IHRoZSBzdHJ1Y3RzIGFuZCBiY29weSB0aGUgdmFsdWVzICovDQorCS8qIGZv ciBwcm9wZXIgYWxpZ25tZW50IGlzc3VlcywgZ2V0IHRoZSBzdHJ1Y3RzIGFu ZCBtZW1tb3ZlIHRoZSB2YWx1ZXMgKi8NCiAJeGZzX21hY19sYWJlbF90IG1h Y2w7DQogCXhmc19hY2xfdCB0aGlzYWNsOw0KIAl2b2lkICp2YWx1ZXA7DQpA QCAtOTMsOCArOTMsOCBAQA0KIAkJCShzdHJuY21wKG5hbWV2YWx1ZSwgU0dJ X0FDTF9ERUZBVUxULA0KIAkJCQlTR0lfQUNMX0RFRkFVTFRfU0laRSkgPT0g MCkpIHsNCiAJCWlmICh2YWx1ZSA9PSBOVUxMKSB7DQotCQkJYnplcm8oJnRo aXNhY2wsIHNpemVvZih4ZnNfYWNsX3QpKTsNCi0JCQliY29weShuYW1ldmFs dWUrbmFtZWxlbiwgJnRoaXNhY2wsIHZhbHVlbGVuKTsNCisJCQltZW1zZXQo JnRoaXNhY2wsIDAsIHNpemVvZih4ZnNfYWNsX3QpKTsNCisJCQltZW1tb3Zl KCZ0aGlzYWNsLCBuYW1ldmFsdWUrbmFtZWxlbiwgdmFsdWVsZW4pOw0KIAkJ CXZhbHVlcCA9ICZ0aGlzYWNsOw0KIAkJfSBlbHNlDQogCQkJdmFsdWVwID0g dmFsdWU7DQpAQCAtMTA3LDggKzEwNyw4IEBADQogCQl9DQogCX0gZWxzZSBp ZiAoc3RybmNtcChuYW1ldmFsdWUsIFNHSV9NQUNfRklMRSwgU0dJX01BQ19G SUxFX1NJWkUpID09IDApIHsNCiAJCWlmICh2YWx1ZSA9PSBOVUxMKSB7DQot CQkJYnplcm8oJm1hY2wsIHNpemVvZih4ZnNfbWFjX2xhYmVsX3QpKTsNCi0J CQliY29weShuYW1ldmFsdWUrbmFtZWxlbiwgJm1hY2wsIHZhbHVlbGVuKTsN CisJCQltZW1zZXQoJm1hY2wsIDAsIHNpemVvZih4ZnNfbWFjX2xhYmVsX3Qp KTsNCisJCQltZW1tb3ZlKCZtYWNsLCBuYW1ldmFsdWUrbmFtZWxlbiwgdmFs dWVsZW4pOw0KIAkJCXZhbHVlcCA9ICZtYWNsOw0KIAkJfSBlbHNlDQogCQkJ dmFsdWVwID0gdmFsdWU7DQpAQCAtMzU3LDcgKzM1Nyw3IEBADQogCQl9DQog CQlBU1NFUlQobXAtPm1fc2Iuc2JfYmxvY2tzaXplID09IFhGU19CVUZfQ09V TlQoYnApKTsNCiAJCWxlbmd0aCA9IE1JTihYRlNfQlVGX0NPVU5UKGJwKSwg dmFsdWVsZW4gLSBhbW91bnRkb25lKTsNCi0JCWJjb3B5KFhGU19CVUZfUFRS 
KGJwKSwgdmFsdWUsIGxlbmd0aCk7DQorCQltZW1tb3ZlKHZhbHVlLCBYRlNf QlVGX1BUUihicCksIGxlbmd0aCk7DQogCQlhbW91bnRkb25lICs9IGxlbmd0 aDsNCiAJCXZhbHVlICs9IGxlbmd0aDsNCiAJCWkrKzsNCkBAIC04MDMsNyAr ODAzLDcgQEANCiAJICogdGhlIHdheS4gIFRoZW4gd2FsayB0aGUgbGVhZiBi bG9ja3MgbGVmdC10by1yaWdodCwgY2FsbGluZw0KIAkgKiBhIHBhcmVudC12 ZXJpZmljYXRpb24gcm91dGluZSBlYWNoIHRpbWUgd2UgdHJhdmVyc2UgYSBi bG9jay4NCiAJICovDQotCWJ6ZXJvKCZkYV9jdXJzb3IsIHNpemVvZihkYV9i dF9jdXJzb3JfdCkpOw0KKwltZW1zZXQoJmRhX2N1cnNvciwgMCwgc2l6ZW9m KGRhX2J0X2N1cnNvcl90KSk7DQogCWRhX2N1cnNvci5hY3RpdmUgPSAwOw0K IAlkYV9jdXJzb3IudHlwZSA9IDA7DQogCWRhX2N1cnNvci5pbm8gPSBpbm87 DQpkaWZmIC1ydSB4ZnNwcm9ncy0yLjcuMTFfdmFuaWxsYS9yZXBhaXIvZGlu b2RlLmMgeGZzcHJvZ3MtMi43LjExX3N1c3YzLWxlZ2FjeS9yZXBhaXIvZGlu b2RlLmMNCi0tLSB4ZnNwcm9ncy0yLjcuMTFfdmFuaWxsYS9yZXBhaXIvZGlu b2RlLmMJMjAwNi0wMS0xNyAwMzo0Njo1Mi4wMDAwMDAwMDAgKzAwMDANCisr KyB4ZnNwcm9ncy0yLjcuMTFfc3VzdjMtbGVnYWN5L3JlcGFpci9kaW5vZGUu YwkyMDA4LTAzLTI0IDE0OjM2OjQ3LjAwMDAwMDAwMCArMDAwMA0KQEAgLTI5 Niw3ICsyOTYsNyBAQA0KIAkvKiBhbmQgY2xlYXIgdGhlIGZvcmtzICovDQog DQogCWlmIChkaXJ0eSAmJiAhbm9fbW9kaWZ5KQ0KLQkJYnplcm8oJmRpbm8t PmRpX3UsIFhGU19MSVRJTk8obXApKTsNCisJCW1lbXNldCgmZGluby0+ZGlf dSwgMCwgWEZTX0xJVElOTyhtcCkpOw0KIA0KIAlyZXR1cm4oZGlydHkpOw0K IH0NCkBAIC0xNTE2LDggKzE1MTYsOCBAQA0KIAkJICogbG9jYWwgc3ltbGlu aywganVzdCBjb3B5IHRoZSBzeW1saW5rIG91dCBvZiB0aGUNCiAJCSAqIGlu b2RlIGludG8gdGhlIGRhdGEgYXJlYQ0KIAkJICovDQotCQliY29weSgoY2hh ciAqKVhGU19ERk9SS19EUFRSKGRpbm8pLA0KLQkJCXN5bWxpbmssIElOVF9H RVQoZGlub2MtPmRpX3NpemUsIEFSQ0hfQ09OVkVSVCkpOw0KKwkJbWVtbW92 ZShzeW1saW5rLCAoY2hhciAqKVhGU19ERk9SS19EUFRSKGRpbm8pLA0KKwkJ CUlOVF9HRVQoZGlub2MtPmRpX3NpemUsIEFSQ0hfQ09OVkVSVCkpOw0KIAl9 IGVsc2Ugew0KIAkJLyoNCiAJCSAqIHN0b3JlZCBpbiBhIG1ldGEtZGF0YSBm aWxlLCBoYXZlIHRvIGJtYXAgb25lIGJsb2NrDQpAQCAtMTU0Miw3ICsxNTQy LDcgQEANCiAJCQlidWZfZGF0YSA9IChjaGFyICopWEZTX0JVRl9QVFIoYnAp Ow0KIAkJCXNpemUgPSBNSU4oSU5UX0dFVChkaW5vYy0+ZGlfc2l6ZSwgQVJD SF9DT05WRVJUKQ0KIAkJCQktIGFtb3VudGRvbmUsIChpbnQpWEZTX0ZTQl9U 
[Attachment: base64-encoded patch against xfsprogs 2.7.11 (the "susv3-legacy" tree) replacing the legacy BSD string functions bcopy()/bzero() with the standard memmove()/memset() throughout repair/ (dir.c, dir2.c, globals.h, incore.c, incore_bmc.c, incore_ino.c, phase4.c, phase5.c, phase6.c, rt.c, sb.c) and rtcp/xfs_rtcp.c. Note the argument-order swap: memmove() takes (dest, src, n) where bcopy() took (src, dest, n). A PGP signature part follows.]

From: "Barry Naujok" <bnaujok@sgi.com>
To: xfs@oss.sgi.com
Date: Tue, 25 Mar 2008 16:39:02 +1100
Subject: REVIEW: Write primary superblock info to ALL secondaries during mkfs

Secondary superblocks should carry redundant copies of the information in the primary superblock. mkfs currently replicates the filesystem geometry, but not the inode values (rootino, the realtime inodes, and the quota inodes). This patch copies the full primary superblock into every secondary just before mkfs marks the filesystem as good to go.

This also changes the xfs_repair output for QA tests 030 and 178, which restore the primary superblock from a secondary. Now that the secondaries hold valid inode values, xfs_repair no longer needs to reset those values after copying a secondary over the primary.

Attached are the mkfs.xfs patch and the updated golden outputs for QA tests 030 and 178. The next step is to make xfs_repair check the secondaries more thoroughly during Phase 1.
-- Index: ci/xfsprogs/mkfs/xfs_mkfs.c =================================================================== --- ci.orig/xfsprogs/mkfs/xfs_mkfs.c 2008-03-25 13:30:53.000000000 +1100 +++ ci/xfsprogs/mkfs/xfs_mkfs.c 2008-03-25 16:29:44.811095380 +1100 @@ -2397,48 +2397,32 @@ } /* - * Write out multiple secondary superblocks with rootinode field set + * Write out secondary superblocks with inode fields set */ - if (mp->m_sb.sb_agcount > 1) { - /* - * the last superblock - */ - buf = libxfs_readbuf(mp->m_dev, - XFS_AGB_TO_DADDR(mp, mp->m_sb.sb_agcount-1, - XFS_SB_DADDR), - XFS_FSS_TO_BB(mp, 1), - LIBXFS_EXIT_ON_FAILURE); - INT_SET((XFS_BUF_TO_SBP(buf))->sb_rootino, - ARCH_CONVERT, mp->m_sb.sb_rootino); - libxfs_writebuf(buf, LIBXFS_EXIT_ON_FAILURE); - /* - * and one in the middle for luck - */ - if (mp->m_sb.sb_agcount > 2) { - buf = libxfs_readbuf(mp->m_dev, - XFS_AGB_TO_DADDR(mp, (mp->m_sb.sb_agcount-1)/2, - XFS_SB_DADDR), - XFS_FSS_TO_BB(mp, 1), - LIBXFS_EXIT_ON_FAILURE); - INT_SET((XFS_BUF_TO_SBP(buf))->sb_rootino, - ARCH_CONVERT, mp->m_sb.sb_rootino); - libxfs_writebuf(buf, LIBXFS_EXIT_ON_FAILURE); - } + buf = libxfs_getsb(mp, LIBXFS_EXIT_ON_FAILURE); + XFS_BUF_TO_SBP(buf)->sb_inprogress = 0; + + for (agno = 1; agno < mp->m_sb.sb_agcount; agno++) { + xfs_buf_t *sbuf; + + sbuf = libxfs_getbuf(mp->m_dev, + XFS_AGB_TO_DADDR(mp, agno, XFS_SB_DADDR), + XFS_FSS_TO_BB(mp, 1)); + memcpy(XFS_BUF_PTR(sbuf), XFS_BUF_PTR(buf), + XFS_BUF_SIZE(sbuf)); + libxfs_writebuf(sbuf, LIBXFS_EXIT_ON_FAILURE); } /* - * Dump all inodes and buffers before marking us all done. - * Need to drop references to inodes we still hold, first. + * Flush out all inodes and buffers before marking us all done. */ libxfs_rtmount_destroy(mp); libxfs_icache_purge(); - libxfs_bcache_purge(); + libxfs_bcache_flush(); /* - * Mark the filesystem ok. + * Finalize the filesystem (sb_inprogress = 0 from above). 
*/ - buf = libxfs_getsb(mp, LIBXFS_EXIT_ON_FAILURE); - (XFS_BUF_TO_SBP(buf))->sb_inprogress = 0; libxfs_writebuf(buf, LIBXFS_EXIT_ON_FAILURE); libxfs_umount(mp); Index: ci/xfstests/030.out.linux =================================================================== --- ci.orig/xfstests/030.out.linux 2007-10-10 16:12:52.000000000 +1000 +++ ci/xfstests/030.out.linux 2008-03-25 16:30:54.926056313 +1100 @@ -14,12 +14,6 @@ found candidate secondary superblock... verified secondary superblock... writing modified primary superblock -sb root inode value INO inconsistent with calculated value INO -resetting superblock root inode pointer to INO -sb realtime bitmap inode INO inconsistent with calculated value INO -resetting superblock realtime bitmap ino pointer to INO -sb realtime summary inode INO inconsistent with calculated value INO -resetting superblock realtime summary ino pointer to INO Phase 2 - using log - zero log... - scan filesystem freespace and inode maps... @@ -132,12 +126,6 @@ found candidate secondary superblock... verified secondary superblock... writing modified primary superblock -sb root inode value INO inconsistent with calculated value INO -resetting superblock root inode pointer to INO -sb realtime bitmap inode INO inconsistent with calculated value INO -resetting superblock realtime bitmap ino pointer to INO -sb realtime summary inode INO inconsistent with calculated value INO -resetting superblock realtime summary ino pointer to INO Phase 2 - using log - zero log... - scan filesystem freespace and inode maps... Index: ci/xfstests/178.out =================================================================== --- ci.orig/xfstests/178.out 2007-10-10 16:12:56.000000000 +1000 +++ ci/xfstests/178.out 2008-03-25 16:31:09.944120144 +1100 @@ -12,12 +12,6 @@ found candidate secondary superblock... verified secondary superblock... 
writing modified primary superblock -sb root inode value INO inconsistent with calculated value INO -resetting superblock root inode pointer to INO -sb realtime bitmap inode INO inconsistent with calculated value INO -resetting superblock realtime bitmap ino pointer to INO -sb realtime summary inode INO inconsistent with calculated value INO -resetting superblock realtime summary ino pointer to INO Phase 2 - using log - zero log... - scan filesystem freespace and inode maps... @@ -48,12 +42,6 @@ found candidate secondary superblock... verified secondary superblock... writing modified primary superblock -sb root inode value INO inconsistent with calculated value INO -resetting superblock root inode pointer to INO -sb realtime bitmap inode INO inconsistent with calculated value INO -resetting superblock realtime bitmap ino pointer to INO -sb realtime summary inode INO inconsistent with calculated value INO -resetting superblock realtime summary ino pointer to INO Phase 2 - using log - zero log... - scan filesystem freespace and inode maps... 
From owner-xfs@oss.sgi.com Mon Mar 24 22:59:55 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 23:00:03 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P5xmks005528 for ; Mon, 24 Mar 2008 22:59:53 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA03691; Tue, 25 Mar 2008 17:00:19 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2P60IsT104662751; Tue, 25 Mar 2008 17:00:18 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2P60HQL109016602; Tue, 25 Mar 2008 17:00:17 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 25 Mar 2008 17:00:17 +1100 From: David Chinner To: Barry Naujok Cc: "xfs@oss.sgi.com" Subject: Re: REVIEW: Write primary superblock info to ALL secondaries during mkfs Message-ID: <20080325060017.GK103491721@sgi.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15027 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Mar 25, 2008 at 04:39:02PM +1100, Barry Naujok wrote: > Secondaries should contain redundant information from the primary > superblock. 
It does this for the filesystem geometry information, > but not inode values (rootino, rt inos, quota inos). > > This patch updates all the secondaries from the primary just before > it marks the filesystem as good to go. So it's got all the inodes, geometry, etc correct in them? So what about the fact that the kernel code doesn't keep all copies up to date? e.g. growfs will only write new values into a handful of superblocks, changing sunit/swidth via mount options only change the primary, etc.... If you are going to change mkfs to keep them all up to date, the kernel code really needs to do the same thing.... > Unfortunately, this also affects the output of xfs_repair during > QA 030 and 178 which restores the primary superblock from the > secondaries. So do a version check and have a different golden output for the new version.... > Index: ci/xfsprogs/mkfs/xfs_mkfs.c > =================================================================== > --- ci.orig/xfsprogs/mkfs/xfs_mkfs.c 2008-03-25 13:30:53.000000000 +1100 > +++ ci/xfsprogs/mkfs/xfs_mkfs.c 2008-03-25 16:29:44.811095380 +1100 > @@ -2397,48 +2397,32 @@ > } > > /* > - * Write out multiple secondary superblocks with rootinode field set > + * Write out secondary superblocks with inode fields set > */ > - if (mp->m_sb.sb_agcount > 1) { > - /* > - * the last superblock > - */ > - buf = libxfs_readbuf(mp->m_dev, > - XFS_AGB_TO_DADDR(mp, mp->m_sb.sb_agcount-1, > - XFS_SB_DADDR), > - XFS_FSS_TO_BB(mp, 1), > - LIBXFS_EXIT_ON_FAILURE); > - INT_SET((XFS_BUF_TO_SBP(buf))->sb_rootino, > - ARCH_CONVERT, mp->m_sb.sb_rootino); > - libxfs_writebuf(buf, LIBXFS_EXIT_ON_FAILURE); > - /* > - * and one in the middle for luck > - */ > - if (mp->m_sb.sb_agcount > 2) { > - buf = libxfs_readbuf(mp->m_dev, > - XFS_AGB_TO_DADDR(mp, > (mp->m_sb.sb_agcount-1)/2, > - XFS_SB_DADDR), > - XFS_FSS_TO_BB(mp, 1), > - LIBXFS_EXIT_ON_FAILURE); > - INT_SET((XFS_BUF_TO_SBP(buf))->sb_rootino, > - ARCH_CONVERT, mp->m_sb.sb_rootino); > - 
libxfs_writebuf(buf, LIBXFS_EXIT_ON_FAILURE); > - } > + buf = libxfs_getsb(mp, LIBXFS_EXIT_ON_FAILURE); > + XFS_BUF_TO_SBP(buf)->sb_inprogress = 0; > + > + for (agno = 1; agno < mp->m_sb.sb_agcount; agno++) { > + xfs_buf_t *sbuf; sbuf is a bad name. I immediately think "source buffer", when in fact it's the destination buffer. > + > + sbuf = libxfs_getbuf(mp->m_dev, > + XFS_AGB_TO_DADDR(mp, agno, XFS_SB_DADDR), > + XFS_FSS_TO_BB(mp, 1)); > + memcpy(XFS_BUF_PTR(sbuf), XFS_BUF_PTR(buf), > + XFS_BUF_SIZE(sbuf)); > + libxfs_writebuf(sbuf, LIBXFS_EXIT_ON_FAILURE); > } > > /* > - * Dump all inodes and buffers before marking us all done. > - * Need to drop references to inodes we still hold, first. > + * Flush out all inodes and buffers before marking us all done. > */ > libxfs_rtmount_destroy(mp); > libxfs_icache_purge(); > - libxfs_bcache_purge(); > + libxfs_bcache_flush(); Don't you still need a purge there to free all the objects in the cache? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Mar 24 23:04:43 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 23:04:50 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.8 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_64, J_CHICKENPOX_66 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P64bnA006224 for ; Mon, 24 Mar 2008 23:04:41 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA03820; Tue, 25 Mar 2008 17:05:01 +1100 Message-ID: <47E8960D.1000801@sgi.com> Date: Tue, 25 Mar 2008 17:05:01 +1100 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: Eric Sandeen CC: Barry Naujok 
, xfs-oss Subject: Re: [PATCH] xfsqa: call _notrun in common.dump if dump utils not found References: <47E5CFBA.7060405@sandeen.net> <47E8703C.30603@sgi.com> <47E87D97.9050900@sandeen.net> <47E88676.7080006@sgi.com> In-Reply-To: <47E88676.7080006@sgi.com> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15028 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Timothy Shimmin wrote: > Eric Sandeen wrote: >> Barry Naujok wrote: >>> On Tue, 25 Mar 2008 14:23:40 +1100, Timothy Shimmin wrote: >>> >>>> Thanks, Eric. >>>> >>>> On IRIX: >>>> > where xfsdump xfsrestore xfsinvutil >>>> /sbin/xfsdump >>>> /usr/sbin/xfsdump >>>> /sbin/xfsrestore >>>> /usr/sbin/xfsinvutil >>>> > ls -l /sbin/xfsdump >>>> lrwxr-xr-x ... /sbin/xfsdump -> ../usr/sbin/xfsdump* >>>> >>>> I'll add the IRIX xfsrestore path and wait for Russell or >>>> whoever to complain about BSD :) >>> common.config sets up environment variables for the various >>> tools used and can handle these paths. It has them for the >>> xfsprogs tools (XFS_REPAIR_PROG, XFS_DB_PROG, etc) but >>> nothing for the xfsdump tools. >> >> yeah, that may be better... >> > Okay. Fair point. > I'll change common.dump to use the XFSDUMP_PROG etc.... > and common.config to set the PROG vars. > > --Tim > > Okay, something like below then. Note, I only test for failure in common.dump and I need to filter out the fullpaths for the commands now as they output their full path. Oh and I changed a bit for the DEBUGDUMP which can use binaries in xfstests for debugging. Ughhh. 
--Tim =========================================================================== Index: xfstests/common.config =========================================================================== --- a/xfstests/common.config 2008-03-25 17:02:57.000000000 +1100 +++ b/xfstests/common.config 2008-03-25 18:21:37.964000000 +1100 @@ -114,6 +114,9 @@ export AWK_PROG="`set_prog_path awk`" export SED_PROG="`set_prog_path sed`" [ "$SED_PROG" = "" ] && _fatal "sed not found" +export BC_PROG="`set_prog_path bc`" +[ "$BC_PROG" = "" ] && _fatal "bc not found" + export PS_ALL_FLAGS="-ef" export DF_PROG="`set_prog_path df`" @@ -128,6 +131,9 @@ export XFS_GROWFS_PROG=`set_prog_path xf export XFS_IO_PROG="`set_prog_path xfs_io`" export XFS_PARALLEL_REPAIR_PROG="`set_prog_path xfs_prepair`" export XFS_PARALLEL_REPAIR64_PROG="`set_prog_path xfs_prepair64`" +export XFSDUMP_PROG="`set_prog_path xfsdump`" +export XFSRESTORE_PROG="`set_prog_path xfsrestore`" +export XFSINVUTIL_PROG="`set_prog_path xfsinvutil`" # Generate a comparable xfsprogs version number in the form of # major * 10000 + minor * 100 + release =========================================================================== Index: xfstests/common.dump =========================================================================== --- a/xfstests/common.dump 2008-03-25 17:02:57.000000000 +1100 +++ b/xfstests/common.dump 2008-03-25 18:38:43.792000000 +1100 @@ -9,17 +9,23 @@ rm -f $here/$seq.full if [ -n "$DEBUGDUMP" ]; then - _dump_debug=-v4 - _restore_debug=-v4 - _invutil_debug=-d + _dump_debug=-v4 + _restore_debug=-v4 + _invutil_debug=-d + + # Use dump/restore in qa directory (copy them here) for debugging + export PATH="$here:$PATH" + export XFSDUMP_PROG="`set_prog_path xfsdump`" + export XFSRESTORE_PROG="`set_prog_path xfsrestore`" + export XFSINVUTIL_PROG="`set_prog_path xfsinvutil`" + [ -x $here/xfsdump ] && echo "Using xfstests' xfsdump for debug" + [ -x $here/xfsrestore ] && echo "Using xfstests' xfsrestore for debug" + [ -x 
$here/xfsinvutil ] && echo "Using xfstests' xfsinvutil for debug" fi -# Use dump/restore in qa directory for debugging -PATH="$here:$PATH" -export PATH -#which xfsdump -#which xfsrestore -#which xfsinvutil +[ "$XFSDUMP_PROG" = "" ] && _fatal "xfsdump not found" +[ "$XFSRESTORE_PROG" = "" ] && _fatal "xfsrestore not found" +[ "$XFSINVUTIL_PROG" = "" ] && _fatal "xfsinvutil not found" # status returned for not run tests NOTRUNSTS=2 @@ -761,6 +767,9 @@ _dump_filter_main() { _filter_devchar |\ sed \ + -e "s#$XFSDUMP_PROG#xfsdump#" \ + -e "s#$XFSRESTORE_PROG#xfsrestore#" \ + -e "s#$XFSINVUTIL_PROG#xfsinvutil#" \ -e "s/`hostname`/HOSTNAME/" \ -e "s#$SCRATCH_DEV#SCRATCH_DEV#" \ -e "s#$SCRATCH_RAWDEV#SCRATCH_DEV#" \ @@ -906,7 +915,7 @@ _do_dump_sub() echo "Dumping to tape..." opts="$_dump_debug$dump_args -s $dump_sdir -f $dumptape -M $media_label -L $session_label $SCRATCH_MNT" echo "xfsdump $opts" | _dir_filter - xfsdump $opts 2>&1 | tee -a $here/$seq.full | _dump_filter + $XFSDUMP_PROG $opts 2>&1 | tee -a $here/$seq.full | _dump_filter } # @@ -919,7 +928,7 @@ _do_dump() echo "Dumping to tape..." opts="$_dump_debug$dump_args -f $dumptape -M $media_label -L $session_label $SCRATCH_MNT" echo "xfsdump $opts" | _dir_filter - xfsdump $opts 2>&1 | tee -a $here/$seq.full | _dump_filter + $XFSDUMP_PROG $opts 2>&1 | tee -a $here/$seq.full | _dump_filter } @@ -934,7 +943,7 @@ _do_dump_min() onemeg=1048576 opts="$_dump_debug$dump_args -m -b $onemeg -l0 -f $dumptape -M $media_label -L $session_label $SCRATCH_MNT" echo "xfsdump $opts" | _dir_filter - xfsdump $opts 2>&1 | tee -a $here/$seq.full | _dump_filter + $XFSDUMP_PROG $opts 2>&1 | tee -a $here/$seq.full | _dump_filter } @@ -948,7 +957,7 @@ _do_dump_file() echo "Dumping to file..." 
opts="$_dump_debug$dump_args -f $dump_file -M $media_label -L $session_label $SCRATCH_MNT" echo "xfsdump $opts" | _dir_filter - xfsdump $opts 2>&1 | tee -a $here/$seq.full | _dump_filter + $XFSDUMP_PROG $opts 2>&1 | tee -a $here/$seq.full | _dump_filter } # @@ -970,7 +979,7 @@ _do_dump_multi_file() echo "Dumping to files..." opts="$_dump_debug$dump_args $multi_args -L $session_label $SCRATCH_MNT" echo "xfsdump $opts" | _dir_filter - xfsdump $opts 2>&1 | tee -a $here/$seq.full | _dump_filter + $XFSDUMP_PROG $opts 2>&1 | tee -a $here/$seq.full | _dump_filter } @@ -1004,7 +1013,7 @@ _do_restore() echo "Restoring from tape..." opts="$_restore_debug -f $dumptape -L $session_label $restore_dir" echo "xfsrestore $opts" | _dir_filter - xfsrestore $opts 2>&1 | tee -a $here/$seq.full | _dump_filter + $XFSRESTORE_PROG $opts 2>&1 | tee -a $here/$seq.full | _dump_filter } # @@ -1019,7 +1028,7 @@ _do_restore_min() onemeg=1048576 opts="$_restore_debug -m -b $onemeg -f $dumptape -L $session_label $restore_dir" echo "xfsrestore $opts" | _dir_filter - xfsrestore $opts 2>&1 | tee -a $here/$seq.full | _dump_filter + $XFSRESTORE_PROG $opts 2>&1 | tee -a $here/$seq.full | _dump_filter } # @@ -1033,7 +1042,7 @@ _do_restore_file() echo "Restoring from file..." opts="$_restore_debug -f $dump_file -L $session_label $restore_dir" echo "xfsrestore $opts" | _dir_filter - xfsrestore $opts 2>&1 | tee -a $here/$seq.full | _dump_filter + $XFSRESTORE_PROG $opts 2>&1 | tee -a $here/$seq.full | _dump_filter } # @@ -1050,7 +1059,7 @@ _do_restore_file_cum() echo "Restoring cumumlative from file..." 
opts="$_restore_debug -f $dump_file -r $restore_dir" echo "xfsrestore $opts" | _dir_filter - xfsrestore $opts 2>&1 | tee -a $here/$seq.full | _dump_filter + $XFSRESTORE_PROG $opts 2>&1 | tee -a $here/$seq.full | _dump_filter } _do_restore_toc() @@ -1059,7 +1068,7 @@ _do_restore_toc() opts="$_restore_debug -f $dump_file -t" echo "xfsrestore $opts" | _dir_filter cd $SCRATCH_MNT # for IRIX which needs xfs cwd - xfsrestore $opts 2>&1 | tee -a $here/$seq.full | _dump_filter_main |\ + $XFSRESTORE_PROG $opts 2>&1 | tee -a $here/$seq.full | _dump_filter_main |\ _check_quota_file |\ _check_quota_entries |\ $AWK_PROG 'NF != 1 { print; next } @@ -1090,7 +1099,7 @@ _do_restore_multi_file() echo "Restoring from file..." opts="$_restore_debug $multi_args -L $session_label $restore_dir" echo "xfsrestore $opts" | _dir_filter - xfsrestore $opts 2>&1 | tee -a $here/$seq.full | _dump_filter + $XFSRESTORE_PROG $opts 2>&1 | tee -a $here/$seq.full | _dump_filter } # @@ -1106,7 +1115,7 @@ _do_dump_restore() restore_opts="$_restore_debug - $restore_dir" dump_opts="$_dump_debug$dump_args -s $dump_sdir - $SCRATCH_MNT" echo "xfsdump $dump_opts | xfsrestore $restore_opts" | _dir_filter - xfsdump $dump_opts 2>$tmp.dump.mlog | xfsrestore $restore_opts 2>&1 | tee -a $here/$seq.full | _dump_filter + $XFSDUMP_PROG $dump_opts 2>$tmp.dump.mlog | $XFSRESTORE_PROG $restore_opts 2>&1 | tee -a $here/$seq.full | _dump_filter _dump_filter <$tmp.dump.mlog } @@ -1244,7 +1253,7 @@ _diff_compare() # _dump_inventory() { - xfsdump $_dump_debug -I | tee -a $here/$seq.full | _dump_filter_main + $XFSDUMP_PROG $_dump_debug -I | tee -a $here/$seq.full | _dump_filter_main } # @@ -1255,7 +1264,7 @@ _do_invutil() { host=`hostname` echo "xfsinvutil $_invutil_debug -M $host:$SCRATCH_MNT \"$middate\" $*" >$here/$seq.full - xfsinvutil $_invutil_debug $* -M $host:$SCRATCH_MNT "$middate" \ + $XFSINVUTIL_PROG $_invutil_debug $* -M $host:$SCRATCH_MNT "$middate" \ | tee -a $here/$seq.full | _invutil_filter } From 
owner-xfs@oss.sgi.com Mon Mar 24 23:13:24 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 23:13:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P6DLLx007068 for ; Mon, 24 Mar 2008 23:13:23 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA04040; Tue, 25 Mar 2008 17:13:51 +1100 Date: Tue, 25 Mar 2008 17:16:44 +1100 To: "David Chinner" Subject: Re: REVIEW: Write primary superblock info to ALL secondaries during mkfs From: "Barry Naujok" Organization: SGI Cc: "xfs@oss.sgi.com" Content-Type: text/plain; format=flowed; delsp=yes; charset=utf-8 MIME-Version: 1.0 References: <20080325060017.GK103491721@sgi.com> Content-Transfer-Encoding: 7bit Message-ID: In-Reply-To: <20080325060017.GK103491721@sgi.com> User-Agent: Opera Mail/9.24 (Win32) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15029 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs On Tue, 25 Mar 2008 17:00:17 +1100, David Chinner wrote: > On Tue, Mar 25, 2008 at 04:39:02PM +1100, Barry Naujok wrote: >> Secondaries should contain redundant information from the primary >> superblock. It does this for the filesystem geometry information, >> but not inode values (rootino, rt inos, quota inos). >> >> This patch updates all the secondaries from the primary just before >> it marks the filesystem as good to go. > > So it's got all the inodes, geometry, etc correct in them? 
> > So what about the fact that the kernel code doesn't keep all > copies up to date? e.g. growfs will only write new values into > a handful of superblocks, changing sunit/swidth via mount options > only change the primary, etc.... > > If you are going to change mkfs to keep them all up to date, the > kernel code really needs to do the same thing.... Yes it should :) Geometry information was already done across all the AGs. >> >> /* >> - * Dump all inodes and buffers before marking us all done. >> - * Need to drop references to inodes we still hold, first. >> + * Flush out all inodes and buffers before marking us all done. >> */ >> libxfs_rtmount_destroy(mp); >> libxfs_icache_purge(); >> - libxfs_bcache_purge(); >> + libxfs_bcache_flush(); > > Don't you still need a purge there to free all the objects in the > cache? No, the flush does what is required. There is no libxfs_icache_flush at the moment, so I left the purge there for that. libxfs_umount later on does a libxfs_*_purge() anyway. The main thing is to make sure all objects are written to disk before the sb_inprogress field in the primary superblock is zeroed. Barry. 
From owner-xfs@oss.sgi.com Mon Mar 24 23:44:58 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 23:45:05 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P6isZQ009840 for ; Mon, 24 Mar 2008 23:44:56 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA04951; Tue, 25 Mar 2008 17:45:22 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 44625) id 9A7BD58C4C0F; Tue, 25 Mar 2008 17:45:22 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com, xfs@oss.sgi.com Subject: PARTIAL TAKE 976035 - cleanup root inode handling in xfs_fs_fill_super Message-Id: <20080325064522.9A7BD58C4C0F@chook.melbourne.sgi.com> Date: Tue, 25 Mar 2008 17:45:22 +1100 (EST) From: lachlan@sgi.com (Lachlan McIlroy) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15030 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs cleanup root inode handling in xfs_fs_fill_super - rename rootvp to root for clarity - remove useless vn_to_inode call - check is_bad_inode before calling d_alloc_root - use iput instead of VN_RELE in the error case Signed-off-by: Christoph Hellwig Date: Tue Mar 25 17:44:23 AEDT 2008 Workarea: redback.melbourne.sgi.com:/home/lachlan/isms/2.6.x-hch Inspected by: hch Author: lachlan The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:30708a fs/xfs/linux-2.6/xfs_super.c - 1.411 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_super.c.diff?r1=text&tr1=1.411&r2=text&tr2=1.410&f=h - cleanup root inode handling in xfs_fs_fill_super From owner-xfs@oss.sgi.com Mon Mar 24 23:48:04 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 23:48:12 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P6lwCE010364 for ; Mon, 24 Mar 2008 23:48:03 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA05028; Tue, 25 Mar 2008 17:48:28 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 44625) id AEB5158C4C0F; Tue, 25 Mar 2008 17:48:28 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com, xfs@oss.sgi.com Subject: PARTIAL TAKE 976035 - split xfs_ioc_xattr Message-Id: <20080325064828.AEB5158C4C0F@chook.melbourne.sgi.com> Date: Tue, 25 Mar 2008 17:48:28 +1100 (EST) From: lachlan@sgi.com (Lachlan McIlroy) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15031 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs split xfs_ioc_xattr The three subcases of xfs_ioc_xattr don't share any semantics and almost no code, so split it into three separate helpers. 
Signed-off-by: Christoph Hellwig Date: Tue Mar 25 17:47:37 AEDT 2008 Workarea: redback.melbourne.sgi.com:/home/lachlan/isms/2.6.x-hch Inspected by: hch Author: lachlan The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:30709a fs/xfs/linux-2.6/xfs_ioctl.c - 1.163 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_ioctl.c.diff?r1=text&tr1=1.163&r2=text&tr2=1.162&f=h - split xfs_ioc_xattr From owner-xfs@oss.sgi.com Mon Mar 24 23:55:35 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 24 Mar 2008 23:55:42 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2P6tVwA011171 for ; Mon, 24 Mar 2008 23:55:33 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA05346; Tue, 25 Mar 2008 17:56:01 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 44625) id 5CD2D58C4C0F; Tue, 25 Mar 2008 17:56:01 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com, xfs@oss.sgi.com Subject: PARTIAL TAKE 976035 - remove most calls to VN_RELE Message-Id: <20080325065601.5CD2D58C4C0F@chook.melbourne.sgi.com> Date: Tue, 25 Mar 2008 17:56:01 +1100 (EST) From: lachlan@sgi.com (Lachlan McIlroy) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15032 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs remove most calls to VN_RELE Most VN_RELE calls either directly contain a XFS_ITOV or have the corresponding xfs_inode 
already in scope. Use the IRELE helper instead of VN_RELE to clarify the code. With a little more work we can kill VN_RELE altogether and define IRELE in terms of iput directly.

Signed-off-by: Christoph Hellwig

Date: Tue Mar 25 17:54:50 AEDT 2008
Workarea: redback.melbourne.sgi.com:/home/lachlan/isms/2.6.x-hch
Inspected by: hch
Author: lachlan

The following file(s) were checked into:
  longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb
Modid: xfs-linux-melb:xfs-kern:30710a

fs/xfs/xfs_rtalloc.c - 1.110 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_rtalloc.c.diff?r1=text&tr1=1.110&r2=text&tr2=1.109&f=h
fs/xfs/xfs_log_recover.c - 1.336 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_log_recover.c.diff?r1=text&tr1=1.336&r2=text&tr2=1.335&f=h
fs/xfs/xfs_vfsops.c - 1.557 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_vfsops.c.diff?r1=text&tr1=1.557&r2=text&tr2=1.556&f=h
fs/xfs/xfs_mount.c - 1.421 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.c.diff?r1=text&tr1=1.421&r2=text&tr2=1.420&f=h
fs/xfs/quota/xfs_qm_syscalls.c - 1.38 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/quota/xfs_qm_syscalls.c.diff?r1=text&tr1=1.38&r2=text&tr2=1.37&f=h
fs/xfs/quota/xfs_qm.c - 1.60 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/quota/xfs_qm.c.diff?r1=text&tr1=1.60&r2=text&tr2=1.59&f=h

- remove most calls to VN_RELE

From owner-xfs@oss.sgi.com Tue Mar 25 05:53:19 2008
Message-ID: <47E8F5BD.7000601@sandeen.net>
Date: Tue, 25 Mar 2008 07:53:17 -0500
From: Eric Sandeen
To: Barry Naujok
CC: "xfs@oss.sgi.com"
Subject: Re: REVIEW: Write primary superblock info to ALL secondaries during mkfs

Barry Naujok wrote:
> Secondaries should contain redundant information from the primary
> superblock. It does this for the filesystem geometry information,
> but not inode values (rootino, rt inos, quota inos).
>
> This patch updates all the secondaries from the primary just before
> it marks the filesystem as good to go.
>
> Unfortunately, this also affects the output of xfs_repair during
> QA 030 and 178 which restores the primary superblock from the
> secondaries.
>
> Now that the secondaries have valid inode values, xfs_repair
> does not have to restore them to the correct values after copying
> the secondary into the primary.
>
> Attached is the mkfs.xfs patch and also the updated golden
> outputs for QA 030 and 178.
>
> The next step after this is to enhance xfs_repair to be more
> thorough in checking the secondaries during Phase 1.

One related thing I'd always wondered about was stamping a secondary
at the very end of the device (and therefore shrinking the fs by just
a bit) - repair could then do a quick check at the end of the device
before resorting to scanning for the 2nd backup... would this make
any sense?
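[Editorial note: the behaviour Barry's patch adds - propagating the primary superblock's inode values into every secondary just before mkfs marks the filesystem good to go - can be sketched with a small toy model. The class below is purely illustrative: a few field names echo real xfs_sb members (rootino, rbmino, uquotino), but this is not the on-disk layout or mkfs.xfs code.]

```python
from dataclasses import dataclass, replace

# Toy model of the superblock fields under discussion; NOT the real
# on-disk xfs_sb structure, just an illustration of what gets copied.
@dataclass(frozen=True)
class Superblock:
    agcount: int    # geometry: already mirrored into secondaries by mkfs
    rootino: int    # root inode number (stale in secondaries pre-patch)
    rbmino: int     # realtime bitmap inode ("rt inos")
    uquotino: int   # user quota inode

def stamp_secondaries(primary, secondaries):
    """Copy the primary's inode values into every secondary, as the
    patch has mkfs do just before marking the filesystem good to go."""
    return [replace(sb,
                    rootino=primary.rootino,
                    rbmino=primary.rbmino,
                    uquotino=primary.uquotino)
            for sb in secondaries]

primary = Superblock(agcount=32, rootino=128, rbmino=129, uquotino=130)
# Freshly written secondaries mirror the geometry but hold placeholder
# inode values - exactly the gap the patch closes.
secondaries = [Superblock(agcount=32, rootino=0, rbmino=0, uquotino=0)
               for _ in range(primary.agcount - 1)]
secondaries = stamp_secondaries(primary, secondaries)

# A repair tool restoring the primary from any secondary now gets the
# correct inode values for free, instead of reconstructing them.
assert all(sb == primary for sb in secondaries)
```

With every secondary carrying the full record, restoring a lost primary from any one of them needs no fix-up pass, which is what changes the xfs_repair output in QA 030 and 178.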
-Eric

From owner-xfs@oss.sgi.com Tue Mar 25 05:53:52 2008
Message-ID: <47E8F5FE.8040600@sandeen.net>
Date: Tue, 25 Mar 2008 07:54:22 -0500
From: Eric Sandeen
To: Timothy Shimmin
CC: Barry Naujok, xfs-oss
Subject: Re: [PATCH] xfsqa: call _notrun in common.dump if dump utils not found
References: <47E5CFBA.7060405@sandeen.net> <47E8703C.30603@sgi.com> <47E87D97.9050900@sandeen.net> <47E88676.7080006@sgi.com> <47E8960D.1000801@sgi.com>
In-Reply-To: <47E8960D.1000801@sgi.com>

Timothy Shimmin wrote:
> Timothy Shimmin wrote:
> > Eric Sandeen wrote:
> >> Barry Naujok wrote:
> >>> On Tue, 25 Mar 2008 14:23:40 +1100, Timothy Shimmin wrote:
> >>>
> >>>> Thanks, Eric.
> >>>>
> >>>> On IRIX:
> >>>> > where xfsdump xfsrestore xfsinvutil
> >>>> /sbin/xfsdump
> >>>> /usr/sbin/xfsdump
> >>>> /sbin/xfsrestore
> >>>> /usr/sbin/xfsinvutil
> >>>> > ls -l /sbin/xfsdump
> >>>> lrwxr-xr-x ... /sbin/xfsdump -> ../usr/sbin/xfsdump*
> >>>>
> >>>> I'll add the IRIX xfsrestore path and wait for Russell or
> >>>> whoever to complain about BSD :)
> >>> common.config sets up environment variables for the various
> >>> tools used and can handle these paths. It has them for the
> >>> xfsprogs tools (XFS_REPAIR_PROG, XFS_DB_PROG, etc) but
> >>> nothing for the xfsdump tools.
> >>
> >> yeah, that may be better...
> >>
> > Okay. Fair point.
> > I'll change common.dump to use the XFSDUMP_PROG etc....
> > and common.config to set the PROG vars.
> >
> > --Tim
> >
>
> Okay, something like below then.
> Note, I only test for failure in common.dump and I need
> to filter out the fullpaths for the commands now as
> they output their full path.
> Oh and I changed a bit for the DEBUGDUMP which can
> use binaries in xfstests for debugging.
> Ughhh.
well, I don't want to make this too much work... it's not critical.

-eric

From owner-xfs@oss.sgi.com Tue Mar 25 10:55:37 2008
Message-ID: <20080325185453.3a1957dd@galadriel.home>
Date: Tue, 25 Mar 2008 18:54:53 +0100
From: Emmanuel Florac
To: xfs@oss.sgi.com
Organization: Intellique
Subject: Serious XFS crash

Here is the setup: Debian sarge running kernel 2.6.18.8 SMP (clean build), xfsprogs version 2.6.20 (not used). An 8TB XFS filesystem broke apart, losing roughly 2TB of data in about 350 (big) files:

Mar 22 12:38:18 system3 kernel: 0x0: c0 49 00 35 6a bc c3 80 fd d4 64 f8 16 ec b9 85
Mar 22 12:38:18 system3 kernel: Filesystem "md0": XFS internal error xfs_da_do_buf(2) at line 2084 of file fs/xfs/xfs_da_btree.c.  Caller 0xc0214fe8
Mar 22 12:38:18 system3 kernel: [xfs_da_do_buf+958/2144] xfs_da_do_buf+0x3be/0x860
Mar 22 12:38:18 system3 kernel: [xfs_da_read_buf+72/96] xfs_da_read_buf+0x48/0x60
Mar 22 12:38:18 system3 kernel: [xfs_da_read_buf+72/96] xfs_da_read_buf+0x48/0x60
Mar 22 12:38:18 system3 kernel: [_atomic_dec_and_lock+59/96] _atomic_dec_and_lock+0x3b/0x60
Mar 22 12:38:18 system3 kernel: [xfs_da_read_buf+72/96] xfs_da_read_buf+0x48/0x60
Mar 22 12:38:18 system3 kernel: [xfs_dir2_leaf_getdents+934/3072] xfs_dir2_leaf_getdents+0x3a6/0xc00
Mar 22 12:38:18 system3 kernel: [xfs_dir2_leaf_getdents+934/3072] xfs_dir2_leaf_getdents+0x3a6/0xc00
Mar 22 12:38:18 system3 kernel: [xfs_dir_getdents+242/320] xfs_dir_getdents+0xf2/0x140
Mar 22 12:38:18 system3 kernel: [xfs_dir2_put_dirent64_direct+0/144] xfs_dir2_put_dirent64_direct+0x0/0x90
Mar 22 12:38:18 system3 kernel: [xfs_dir2_put_dirent64_direct+0/144] xfs_dir2_put_dirent64_direct+0x0/0x90
Mar 22 12:38:18 system3 kernel: [xfs_readdir+72/112]
xfs_readdir+0x48/0x70
Mar 22 12:38:18 system3 kernel: [xfs_file_readdir+256/528] xfs_file_readdir+0x100/0x210
Mar 22 12:38:18 system3 kernel: [filldir64+0/240] filldir64+0x0/0xf0
Mar 22 12:38:18 system3 kernel: [filldir64+0/240] filldir64+0x0/0xf0
Mar 22 12:38:18 system3 kernel: [vfs_readdir+129/160] vfs_readdir+0x81/0xa0
Mar 22 12:38:18 system3 kernel: [sys_getdents64+105/192] sys_getdents64+0x69/0xc0
Mar 22 12:38:18 system3 kernel: [syscall_call+7/11] syscall_call+0x7/0xb
Mar 22 12:38:18 system3 kernel: 0x0: c0 49 00 35 6a bc c3 80 fd d4 64 f8 16 ec b9 85

At that point, the filesystem was completely unreadable. However, df reported about 2TB used. As a precaution, I booted with a live CD with xfsprogs 2.8.11. I first ran xfs_repair -n:

No modify flag set, skipping phase 5 Phase 1 - find and verify superblock... Phase 2 - using internal log - scan filesystem freespace and inode maps... bad magic # 0x7c6999f7 for agf 0 bad version # 270461846 for agf 0 bad sequence # -506160237 for agf 0 bad length 1130385756 for agf 0, should be 68590288 flfirst 260475029 in agf 0 too large (max = 128) fllast -1448142937 in agf 0 too large (max = 128) bad magic # 0xfffde400 for agi 0 bad version # -1469688457 for agi 0 bad sequence # 2021095287 for agi 0 bad length # 2004318207 for agi 0, should be 68590288 would reset bad agf for ag 0 would reset bad agi for ag 0 bad uncorrected agheader 0, skipping ag... root inode chunk not found Phase 3 - for each AG... - scan (but don't clear) agi unlinked lists... error following ag 0 unlinked list - process known inodes and perform inode discovery...
- agno = 0 bad magic number 0xeb51 on inode 288 bad version number 0x0 on inode 288 bad (negative) size -4597490693634830737 on inode 288 bad magic number 0xf162 on inode 289 bad version number 0x21 on inode 289 bad inode format in inode 289 bad magic number 0x1c02 on inode 290 bad version number 0xffffff80 on inode 290 bad (negative) size -1479238237238013911 on inode 290 bad magic number 0xdd on inode 291 bad version number 0xffffffe3 on inode 291 bad (negative) size -3643988304669136675 on inode 291 bad magic number 0xf884 on inode 292 bad version number 0xffffffd9 on inode 292 bad inode format in inode 292 bad magic number 0x181f on inode 293 bad version number 0xfffffff4 on inode 293 bad inode format in inode 293 bad magic number 0x970 on inode 294 bad version number 0xffffffa3 on inode 294 bad (negative) size -445852040749451058 on inode 294 bad magic number 0x3cde on inode 295 bad version number 0xffffff99 on inode 295 bad inode format in inode 295 bad magic number 0x396 on inode 296 bad version number 0xffffffc0 on inode 296 bad inode format in inode 296 bad magic number 0xe27b on inode 297 bad version number 0x11 on inode 297 bad inode format in inode 297 bad magic number 0xde24 on inode 298 bad version number 0xffffff80 on inode 298 bad (negative) size -4386485681027605669 on inode 298 bad magic number 0xe0c0 on inode 299 bad version number 0xfffffff6 on inode 299 bad inode format in inode 299 bad magic number 0x18f on inode 300 bad version number 0x6d on inode 300 bad inode format in inode 300 bad magic number 0x2fa6 on inode 301 bad version number 0xffffffe0 on inode 301 bad inode format in inode 301 bad magic number 0x874 on inode 302 bad version number 0x17 on inode 302 bad inode format in inode 302 bad magic number 0xc020 on inode 303 bad version number 0xffffffad on inode 303 bad (negative) size -2828235057529281131 on inode 303 bad magic number 0xdb62 on inode 304 bad version number 0xffffffb4 on inode 304 bad inode format in inode 304 bad magic 
number 0x1ec8 on inode 305 bad version number 0x1f on inode 305 bad inode format in inode 305 bad magic number 0x1ece on inode 306 bad version number 0xffffff80 on inode 306 bad (negative) size -4841365767938555696 on inode 306 bad magic number 0x2174 on inode 307 bad version number 0xffffff80 on inode 307 bad (negative) size -5167479495107527569 on inode 307 bad magic number 0x42ff on inode 308 bad version number 0x2e on inode 308 bad inode format in inode 308 bad magic number 0x2300 on inode 309 bad version number 0x13 on inode 309 bad inode format in inode 309 bad magic number 0xd009 on inode 310 bad version number 0x41 on inode 310 bad inode format in inode 310 bad magic number 0xde60 on inode 311 bad version number 0xfffffff3 on inode 311 bad (negative) size -667991506409959991 on inode 311 bad magic number 0x29ad on inode 312 bad version number 0x2e on inode 312 bad (negative) size -7260113882208003448 on inode 312 bad magic number 0x4b6a on inode 313 bad version number 0x3c on inode 313 bad (negative) size -1729319129454037310 on inode 313 bad magic number 0xcf81 on inode 314 bad version number 0x38 on inode 314 bad inode format in inode 314 bad magic number 0xa003 on inode 315 bad version number 0xfffffff1 on inode 315 bad inode format in inode 315 bad magic number 0x8c04 on inode 316 bad version number 0xfffffff3 on inode 316 bad (negative) size -3070587920707903991 on inode 316 bad magic number 0x3438 on inode 317 bad version number 0xffffffb9 on inode 317 bad (negative) size -8696290035356641328 on inode 317 bad magic number 0x44c on inode 318 bad version number 0xffffff9e on inode 318 bad (negative) size -8776495047018275686 on inode 318 bad magic number 0xe213 on inode 319 bad version number 0x32 on inode 319 bad (negative) size -8318616862220032662 on inode 319 bad directory block magic # 0xe409793 in block 0 for directory inode 256 corrupt block 0 in directory inode 256 would junk block no . entry for directory 256 no .. 
entry for root directory 256 problem with directory contents in inode 256 would clear root inode 256 bad directory block magic # 0xfe95b7b4 in block 0 for directory inode 259 corrupt block 0 in directory inode 259 would junk block bad directory block magic # 0xe600e5c0 in block 1 for directory inode 259 corrupt block 1 in directory inode 259 would junk block bad directory block magic # 0xc0490035 in block 2 for directory inode 259 corrupt block 2 in directory inode 259 would junk block bad directory block magic # 0xc079afae in block 3 for directory inode 259 corrupt block 3 in directory inode 259 would junk block no . entry for directory 259 no .. entry for directory 259 problem with directory contents in inode 259 would have cleared inode 259 imap claims in-use inode 260 is free, would correct imap bad directory block magic # 0x7acda06 in block 0 for directory inode 261 corrupt block 0 in directory inode 261 would junk block no . entry for directory 261 no .. entry for directory 261 problem with directory contents in inode 261 would have cleared inode 261 imap claims in-use inode 262 is free, would correct imap imap claims in-use inode 263 is free, would correct imap imap claims in-use inode 264 is free, would correct imap imap claims in-use inode 265 is free, would correct imap imap claims in-use inode 266 is free, would correct imap imap claims in-use inode 267 is free, would correct imap imap claims in-use inode 268 is free, would correct imap imap claims in-use inode 269 is free, would correct imap imap claims in-use inode 270 is free, would correct imap imap claims in-use inode 271 is free, would correct imap imap claims in-use inode 272 is free, would correct imap imap claims in-use inode 273 is free, would correct imap imap claims in-use inode 274 is free, would correct imap imap claims in-use inode 275 is free, would correct imap imap claims in-use inode 276 is free, would correct imap imap claims in-use inode 277 is free, would correct imap imap claims 
in-use inode 278 is free, would correct imap imap claims in-use inode 279 is free, would correct imap imap claims in-use inode 280 is free, would correct imap imap claims in-use inode 281 is free, would correct imap imap claims in-use inode 282 is free, would correct imap imap claims in-use inode 283 is free, would correct imap imap claims in-use inode 284 is free, would correct imap imap claims in-use inode 285 is free, would correct imap imap claims in-use inode 286 is free, would correct imap imap claims in-use inode 287 is free, would correct imap bad magic number 0xeb51 on inode 288, would reset magic number bad version number 0x0 on inode 288, would reset version number bad (negative) size -4597490693634830737 on inode 288 would have cleared inode 288 bad magic number 0xf162 on inode 289, would reset magic number bad version number 0x21 on inode 289, would reset version number bad inode format in inode 289 would have cleared inode 289 bad magic number 0x1c02 on inode 290, would reset magic number bad version number 0xffffff80 on inode 290, would reset version number bad (negative) size -1479238237238013911 on inode 290 would have cleared inode 290 bad magic number 0xdd on inode 291, would reset magic number bad version number 0xffffffe3 on inode 291, would reset version number bad (negative) size -3643988304669136675 on inode 291 would have cleared inode 291 bad magic number 0xf884 on inode 292, would reset magic number bad version number 0xffffffd9 on inode 292, would reset version number bad inode format in inode 292 would have cleared inode 292 bad magic number 0x181f on inode 293, would reset magic number bad version number 0xfffffff4 on inode 293, would reset version number bad inode format in inode 293 would have cleared inode 293 bad magic number 0x970 on inode 294, would reset magic number bad version number 0xffffffa3 on inode 294, would reset version number bad (negative) size -445852040749451058 on inode 294 would have cleared inode 294 bad magic 
number 0x3cde on inode 295, would reset magic number bad version number 0xffffff99 on inode 295, would reset version number bad inode format in inode 295 would have cleared inode 295 bad magic number 0x396 on inode 296, would reset magic number bad version number 0xffffffc0 on inode 296, would reset version number bad inode format in inode 296 would have cleared inode 296 bad magic number 0xe27b on inode 297, would reset magic number bad version number 0x11 on inode 297, would reset version number bad inode format in inode 297 would have cleared inode 297 bad magic number 0xde24 on inode 298, would reset magic number bad version number 0xffffff80 on inode 298, would reset version number bad (negative) size -4386485681027605669 on inode 298 would have cleared inode 298 bad magic number 0xe0c0 on inode 299, would reset magic number bad version number 0xfffffff6 on inode 299, would reset version number bad inode format in inode 299 would have cleared inode 299 bad magic number 0x18f on inode 300, would reset magic number bad version number 0x6d on inode 300, would reset version number bad inode format in inode 300 would have cleared inode 300 bad magic number 0x2fa6 on inode 301, would reset magic number bad version number 0xffffffe0 on inode 301, would reset version number bad inode format in inode 301 would have cleared inode 301 bad magic number 0x874 on inode 302, would reset magic number bad version number 0x17 on inode 302, would reset version number bad inode format in inode 302 would have cleared inode 302 bad magic number 0xc020 on inode 303, would reset magic number bad version number 0xffffffad on inode 303, would reset version number bad (negative) size -2828235057529281131 on inode 303 would have cleared inode 303 bad magic number 0xdb62 on inode 304, would reset magic number bad version number 0xffffffb4 on inode 304, would reset version number bad inode format in inode 304 would have cleared inode 304 bad magic number 0x1ec8 on inode 305, would reset 
magic number bad version number 0x1f on inode 305, would reset version number bad inode format in inode 305 would have cleared inode 305 bad magic number 0x1ece on inode 306, would reset magic number bad version number 0xffffff80 on inode 306, would reset version number bad (negative) size -4841365767938555696 on inode 306 would have cleared inode 306 bad magic number 0x2174 on inode 307, would reset magic number bad version number 0xffffff80 on inode 307, would reset version number bad (negative) size -5167479495107527569 on inode 307 would have cleared inode 307 bad magic number 0x42ff on inode 308, would reset magic number bad version number 0x2e on inode 308, would reset version number bad inode format in inode 308 would have cleared inode 308 bad magic number 0x2300 on inode 309, would reset magic number bad version number 0x13 on inode 309, would reset version number bad inode format in inode 309 would have cleared inode 309 bad magic number 0xd009 on inode 310, would reset magic number bad version number 0x41 on inode 310, would reset version number bad inode format in inode 310 would have cleared inode 310 bad magic number 0xde60 on inode 311, would reset magic number bad version number 0xfffffff3 on inode 311, would reset version number bad (negative) size -667991506409959991 on inode 311 would have cleared inode 311 bad magic number 0x29ad on inode 312, would reset magic number bad version number 0x2e on inode 312, would reset version number bad (negative) size -7260113882208003448 on inode 312 would have cleared inode 312 bad magic number 0x4b6a on inode 313, would reset magic number bad version number 0x3c on inode 313, would reset version number bad (negative) size -1729319129454037310 on inode 313 would have cleared inode 313 bad magic number 0xcf81 on inode 314, would reset magic number bad version number 0x38 on inode 314, would reset version number bad inode format in inode 314 would have cleared inode 314 bad magic number 0xa003 on inode 315, 
would reset magic number bad version number 0xfffffff1 on inode 315, would reset version number bad inode format in inode 315 would have cleared inode 315 bad magic number 0x8c04 on inode 316, would reset magic number bad version number 0xfffffff3 on inode 316, would reset version number bad (negative) size -3070587920707903991 on inode 316 would have cleared inode 316 bad magic number 0x3438 on inode 317, would reset magic number bad version number 0xffffffb9 on inode 317, would reset version number bad (negative) size -8696290035356641328 on inode 317 would have cleared inode 317 bad magic number 0x44c on inode 318, would reset magic number bad version number 0xffffff9e on inode 318, would reset version number bad (negative) size -8776495047018275686 on inode 318 would have cleared inode 318 bad magic number 0xe213 on inode 319, would reset magic number bad version number 0x32 on inode 319, would reset version number bad (negative) size -8318616862220032662 on inode 319 would have cleared inode 319 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27 - agno = 28 - agno = 29 - agno = 30 - agno = 31 - process newly discovered inodes... Phase 4 - check for duplicate blocks... - setting up duplicate extent list... root inode would be lost - check for inodes claiming duplicate blocks... - agno = 0 bad directory block magic # 0xe409793 in block 0 for directory inode 256 corrupt block 0 in directory inode 256 would junk block no . entry for directory 256 no .. 
entry for root directory 256 problem with directory contents in inode 256 would clear root inode 256 bad directory block magic # 0xfe95b7b4 in block 0 for directory inode 259 corrupt block 0 in directory inode 259 would junk block bad directory block magic # 0xe600e5c0 in block 1 for directory inode 259 corrupt block 1 in directory inode 259 would junk block bad directory block magic # 0xc0490035 in block 2 for directory inode 259 corrupt block 2 in directory inode 259 would junk block bad directory block magic # 0xc079afae in block 3 for directory inode 259 corrupt block 3 in directory inode 259 would junk block no . entry for directory 259 no .. entry for directory 259 problem with directory contents in inode 259 would have cleared inode 259 bad directory block magic # 0x7acda06 in block 0 for directory inode 261 corrupt block 0 in directory inode 261 would junk block no . entry for directory 261 no .. entry for directory 261 problem with directory contents in inode 261 would have cleared inode 261 bad magic number 0xeb51 on inode 288, would reset magic number bad version number 0x0 on inode 288, would reset version number bad (negative) size -4597490693634830737 on inode 288 would have cleared inode 288 bad magic number 0xf162 on inode 289, would reset magic number bad version number 0x21 on inode 289, would reset version number bad inode format in inode 289 would have cleared inode 289 bad magic number 0x1c02 on inode 290, would reset magic number bad version number 0xffffff80 on inode 290, would reset version number bad (negative) size -1479238237238013911 on inode 290 would have cleared inode 290 bad magic number 0xdd on inode 291, would reset magic number bad version number 0xffffffe3 on inode 291, would reset version number bad (negative) size -3643988304669136675 on inode 291 would have cleared inode 291 bad magic number 0xf884 on inode 292, would reset magic number bad version number 0xffffffd9 on inode 292, would reset version number bad inode format in 
inode 292 would have cleared inode 292 bad magic number 0x181f on inode 293, would reset magic number bad version number 0xfffffff4 on inode 293, would reset version number bad inode format in inode 293 would have cleared inode 293 bad magic number 0x970 on inode 294, would reset magic number bad version number 0xffffffa3 on inode 294, would reset version number bad (negative) size -445852040749451058 on inode 294 would have cleared inode 294 bad magic number 0x3cde on inode 295, would reset magic number bad version number 0xffffff99 on inode 295, would reset version number bad inode format in inode 295 would have cleared inode 295 bad magic number 0x396 on inode 296, would reset magic number bad version number 0xffffffc0 on inode 296, would reset version number bad inode format in inode 296 would have cleared inode 296 bad magic number 0xe27b on inode 297, would reset magic number bad version number 0x11 on inode 297, would reset version number bad inode format in inode 297 would have cleared inode 297 bad magic number 0xde24 on inode 298, would reset magic number bad version number 0xffffff80 on inode 298, would reset version number bad (negative) size -4386485681027605669 on inode 298 would have cleared inode 298 bad magic number 0xe0c0 on inode 299, would reset magic number bad version number 0xfffffff6 on inode 299, would reset version number bad inode format in inode 299 would have cleared inode 299 bad magic number 0x18f on inode 300, would reset magic number bad version number 0x6d on inode 300, would reset version number bad inode format in inode 300 would have cleared inode 300 bad magic number 0x2fa6 on inode 301, would reset magic number bad version number 0xffffffe0 on inode 301, would reset version number bad inode format in inode 301 would have cleared inode 301 bad magic number 0x874 on inode 302, would reset magic number bad version number 0x17 on inode 302, would reset version number bad inode format in inode 302 would have cleared inode 302 bad 
magic number 0xc020 on inode 303, would reset magic number bad version number 0xffffffad on inode 303, would reset version number bad (negative) size -2828235057529281131 on inode 303 would have cleared inode 303 bad magic number 0xdb62 on inode 304, would reset magic number bad version number 0xffffffb4 on inode 304, would reset version number bad inode format in inode 304 would have cleared inode 304 bad magic number 0x1ec8 on inode 305, would reset magic number bad version number 0x1f on inode 305, would reset version number bad inode format in inode 305 would have cleared inode 305 bad magic number 0x1ece on inode 306, would reset magic number bad version number 0xffffff80 on inode 306, would reset version number bad (negative) size -4841365767938555696 on inode 306 would have cleared inode 306 bad magic number 0x2174 on inode 307, would reset magic number bad version number 0xffffff80 on inode 307, would reset version number bad (negative) size -5167479495107527569 on inode 307 would have cleared inode 307 bad magic number 0x42ff on inode 308, would reset magic number bad version number 0x2e on inode 308, would reset version number bad inode format in inode 308 would have cleared inode 308 bad magic number 0x2300 on inode 309, would reset magic number bad version number 0x13 on inode 309, would reset version number bad inode format in inode 309 would have cleared inode 309 bad magic number 0xd009 on inode 310, would reset magic number bad version number 0x41 on inode 310, would reset version number bad inode format in inode 310 would have cleared inode 310 bad magic number 0xde60 on inode 311, would reset magic number bad version number 0xfffffff3 on inode 311, would reset version number bad (negative) size -667991506409959991 on inode 311 would have cleared inode 311 bad magic number 0x29ad on inode 312, would reset magic number bad version number 0x2e on inode 312, would reset version number bad (negative) size -7260113882208003448 on inode 312 would have 
cleared inode 312 bad magic number 0x4b6a on inode 313, would reset magic number bad version number 0x3c on inode 313, would reset version number bad (negative) size -1729319129454037310 on inode 313 would have cleared inode 313 bad magic number 0xcf81 on inode 314, would reset magic number bad version number 0x38 on inode 314, would reset version number bad inode format in inode 314 would have cleared inode 314 bad magic number 0xa003 on inode 315, would reset magic number bad version number 0xfffffff1 on inode 315, would reset version number bad inode format in inode 315 would have cleared inode 315 bad magic number 0x8c04 on inode 316, would reset magic number bad version number 0xfffffff3 on inode 316, would reset version number bad (negative) size -3070587920707903991 on inode 316 would have cleared inode 316 bad magic number 0x3438 on inode 317, would reset magic number bad version number 0xffffffb9 on inode 317, would reset version number bad (negative) size -8696290035356641328 on inode 317 would have cleared inode 317 bad magic number 0x44c on inode 318, would reset magic number bad version number 0xffffff9e on inode 318, would reset version number bad (negative) size -8776495047018275686 on inode 318 would have cleared inode 318 bad magic number 0xe213 on inode 319, would reset magic number bad version number 0x32 on inode 319, would reset version number bad (negative) size -8318616862220032662 on inode 319 would have cleared inode 319 - agno = 1 entry "S0045230.mpg" in shortform directory 2147483905 references non-existent inode 505 would have junked entry "S0045230.mpg" in directory inode 2147483905 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27 - agno = 28 - agno = 29 - agno = 30 - agno = 31 No 
modify flag set, skipping phase 5 Phase 6 - check inode connectivity... would reinitialize root directory - root inode lost, cannot make new one in no modify mode ... - skipping filesystem traversal from / ... - traversing all unattached subtrees ... entry "S0045230.mpg" in shortform directory 2147483905 references non-existent inode 505 would junk entry entry "S0045230.mpg" in shortform directory 2147483905 references non-existent inode 505 would junk entry - traversals finished ... - moving disconnected inodes to lost+found ... disconnected dir inode 260, would move to lost+found disconnected dir inode 262, would move to lost+found disconnected inode 263, would move to lost+found disconnected inode 264, would move to lost+found disconnected inode 265, would move to lost+found disconnected inode 266, would move to lost+found disconnected inode 267, would move to lost+found disconnected inode 268, would move to lost+found disconnected inode 269, would move to lost+found disconnected inode 270, would move to lost+found disconnected inode 271, would move to lost+found disconnected inode 272, would move to lost+found disconnected inode 273, would move to lost+found disconnected inode 274, would move to lost+found disconnected inode 275, would move to lost+found disconnected inode 276, would move to lost+found disconnected inode 277, would move to lost+found disconnected inode 278, would move to lost+found disconnected inode 279, would move to lost+found disconnected inode 280, would move to lost+found disconnected inode 281, would move to lost+found disconnected inode 282, would move to lost+found disconnected inode 283, would move to lost+found disconnected inode 284, would move to lost+found disconnected inode 285, would move to lost+found disconnected inode 286, would move to lost+found disconnected inode 287, would move to lost+found disconnected dir inode 2147483904, would move to lost+found disconnected dir inode 2147483905, would move to lost+found disconnected 
dir inode 2147483906, would move to lost+found disconnected inode 2147483907, would move to lost+found disconnected inode 2147483908, would move to lost+found disconnected dir inode 2147483913, would move to lost+found Phase 7 - verify link counts... No modify flag set, skipping filesystem flush and exiting.

Pretty ominous, but I had nothing better to try at the time, so I ran the repair (output follows). The result is miserable: only 110GB of data remain in 43 files (more than 300 files missing), all in lost+found, and the filesystem is still inconsistent (there is a circular directory inside: lost+found/256/lost+found/256/lost+found/256/...). Is there any hope of repairing it somewhat better than this (and losing less data)?

Here is the actual xfs_repair output:

Phase 1 - find and verify superblock... Phase 2 - using internal log - zero log... - scan filesystem freespace and inode maps... bad magic # 0x7c6999f7 for agf 0 bad version # 270461846 for agf 0 bad sequence # -506160237 for agf 0 bad length 1130385756 for agf 0, should be 68590288 flfirst 260475029 in agf 0 too large (max = 128) fllast -1448142937 in agf 0 too large (max = 128) bad magic # 0xfffde400 for agi 0 bad version # -1469688457 for agi 0 bad sequence # 2021095287 for agi 0 bad length # 2004318207 for agi 0, should be 68590288 reset bad agf for ag 0 reset bad agi for ag 0 bad agbno 2884332844 in agfl, agno 0 freeblk count 1 != flcount -1553133201 in ag 0 bad agbno 2555134669 for btbno root, agno 0 bad agbno 613251981 for btbcnt root, agno 0 bad agbno 1073741824 for inobt root, agno 0 root inode chunk not found Phase 3 - for each AG... - scan and clear agi unlinked lists... error following ag 0 unlinked list - process known inodes and perform inode discovery...
- agno = 0 bad magic number 0xeb51 on inode 288 bad version number 0x0 on inode 288 bad (negative) size -4597490693634830737 on inode 288 bad magic number 0xf162 on inode 289 bad version number 0x21 on inode 289 bad inode format in inode 289 bad magic number 0x1c02 on inode 290 bad version number 0xffffff80 on inode 290 bad (negative) size -1479238237238013911 on inode 290 bad magic number 0xdd on inode 291 bad version number 0xffffffe3 on inode 291 bad (negative) size -3643988304669136675 on inode 291 bad magic number 0xf884 on inode 292 bad version number 0xffffffd9 on inode 292 bad inode format in inode 292 bad magic number 0x181f on inode 293 bad version number 0xfffffff4 on inode 293 bad inode format in inode 293 bad magic number 0x970 on inode 294 bad version number 0xffffffa3 on inode 294 bad (negative) size -445852040749451058 on inode 294 bad magic number 0x3cde on inode 295 bad version number 0xffffff99 on inode 295 bad inode format in inode 295 bad magic number 0x396 on inode 296 bad version number 0xffffffc0 on inode 296 bad inode format in inode 296 bad magic number 0xe27b on inode 297 bad version number 0x11 on inode 297 bad inode format in inode 297 bad magic number 0xde24 on inode 298 bad version number 0xffffff80 on inode 298 bad (negative) size -4386485681027605669 on inode 298 bad magic number 0xe0c0 on inode 299 bad version number 0xfffffff6 on inode 299 bad inode format in inode 299 bad magic number 0x18f on inode 300 bad version number 0x6d on inode 300 bad inode format in inode 300 bad magic number 0x2fa6 on inode 301 bad version number 0xffffffe0 on inode 301 bad inode format in inode 301 bad magic number 0x874 on inode 302 bad version number 0x17 on inode 302 bad inode format in inode 302 bad magic number 0xc020 on inode 303 bad version number 0xffffffad on inode 303 bad (negative) size -2828235057529281131 on inode 303 bad magic number 0xdb62 on inode 304 bad version number 0xffffffb4 on inode 304 bad inode format in inode 304 bad magic 
number 0x1ec8 on inode 305 bad version number 0x1f on inode 305 bad inode format in inode 305 bad magic number 0x1ece on inode 306 bad version number 0xffffff80 on inode 306 bad (negative) size -4841365767938555696 on inode 306 bad magic number 0x2174 on inode 307 bad version number 0xffffff80 on inode 307 bad (negative) size -5167479495107527569 on inode 307 bad magic number 0x42ff on inode 308 bad version number 0x2e on inode 308 bad inode format in inode 308 bad magic number 0x2300 on inode 309 bad version number 0x13 on inode 309 bad inode format in inode 309 bad magic number 0xd009 on inode 310 bad version number 0x41 on inode 310 bad inode format in inode 310 bad magic number 0xde60 on inode 311 bad version number 0xfffffff3 on inode 311 bad (negative) size -667991506409959991 on inode 311 bad magic number 0x29ad on inode 312 bad version number 0x2e on inode 312 bad (negative) size -7260113882208003448 on inode 312 bad magic number 0x4b6a on inode 313 bad version number 0x3c on inode 313 bad (negative) size -1729319129454037310 on inode 313 bad magic number 0xcf81 on inode 314 bad version number 0x38 on inode 314 bad inode format in inode 314 bad magic number 0xa003 on inode 315 bad version number 0xfffffff1 on inode 315 bad inode format in inode 315 bad magic number 0x8c04 on inode 316 bad version number 0xfffffff3 on inode 316 bad (negative) size -3070587920707903991 on inode 316 bad magic number 0x3438 on inode 317 bad version number 0xffffffb9 on inode 317 bad (negative) size -8696290035356641328 on inode 317 bad magic number 0x44c on inode 318 bad version number 0xffffff9e on inode 318 bad (negative) size -8776495047018275686 on inode 318 bad magic number 0xe213 on inode 319 bad version number 0x32 on inode 319 bad (negative) size -8318616862220032662 on inode 319 bad directory block magic # 0xe409793 in block 0 for directory inode 256 corrupt block 0 in directory inode 256 will junk block no . entry for directory 256 no .. 
entry for root directory 256 problem with directory contents in inode 256 cleared root inode 256 bad directory block magic # 0xfe95b7b4 in block 0 for directory inode 259 corrupt block 0 in directory inode 259 will junk block bad directory block magic # 0xe600e5c0 in block 1 for directory inode 259 corrupt block 1 in directory inode 259 will junk block bad directory block magic # 0xc0490035 in block 2 for directory inode 259 corrupt block 2 in directory inode 259 will junk block bad directory block magic # 0xc079afae in block 3 for directory inode 259 corrupt block 3 in directory inode 259 will junk block no . entry for directory 259 no .. entry for directory 259 problem with directory contents in inode 259 cleared inode 259 imap claims in-use inode 260 is free, correcting imap bad directory block magic # 0x7acda06 in block 0 for directory inode 261 corrupt block 0 in directory inode 261 will junk block no . entry for directory 261 no .. entry for directory 261 problem with directory contents in inode 261 cleared inode 261 imap claims in-use inode 262 is free, correcting imap imap claims in-use inode 263 is free, correcting imap imap claims in-use inode 264 is free, correcting imap imap claims in-use inode 265 is free, correcting imap imap claims in-use inode 266 is free, correcting imap imap claims in-use inode 267 is free, correcting imap imap claims in-use inode 268 is free, correcting imap imap claims in-use inode 269 is free, correcting imap imap claims in-use inode 270 is free, correcting imap imap claims in-use inode 271 is free, correcting imap imap claims in-use inode 272 is free, correcting imap imap claims in-use inode 273 is free, correcting imap imap claims in-use inode 274 is free, correcting imap imap claims in-use inode 275 is free, correcting imap imap claims in-use inode 276 is free, correcting imap imap claims in-use inode 277 is free, correcting imap imap claims in-use inode 278 is free, correcting imap imap claims in-use inode 279 is free, 
correcting imap imap claims in-use inode 280 is free, correcting imap imap claims in-use inode 281 is free, correcting imap imap claims in-use inode 282 is free, correcting imap imap claims in-use inode 283 is free, correcting imap imap claims in-use inode 284 is free, correcting imap imap claims in-use inode 285 is free, correcting imap imap claims in-use inode 286 is free, correcting imap imap claims in-use inode 287 is free, correcting imap bad magic number 0xeb51 on inode 288, resetting magic number bad version number 0x0 on inode 288, resetting version number bad (negative) size -4597490693634830737 on inode 288 cleared inode 288 bad magic number 0xf162 on inode 289, resetting magic number bad version number 0x21 on inode 289, resetting version number bad inode format in inode 289 cleared inode 289 bad magic number 0x1c02 on inode 290, resetting magic number bad version number 0xffffff80 on inode 290, resetting version number bad (negative) size -1479238237238013911 on inode 290 cleared inode 290 bad magic number 0xdd on inode 291, resetting magic number bad version number 0xffffffe3 on inode 291, resetting version number bad (negative) size -3643988304669136675 on inode 291 cleared inode 291 bad magic number 0xf884 on inode 292, resetting magic number bad version number 0xffffffd9 on inode 292, resetting version number bad inode format in inode 292 cleared inode 292 bad magic number 0x181f on inode 293, resetting magic number bad version number 0xfffffff4 on inode 293, resetting version number bad inode format in inode 293 cleared inode 293 bad magic number 0x970 on inode 294, resetting magic number bad version number 0xffffffa3 on inode 294, resetting version number bad (negative) size -445852040749451058 on inode 294 cleared inode 294 bad magic number 0x3cde on inode 295, resetting magic number bad version number 0xffffff99 on inode 295, resetting version number bad inode format in inode 295 cleared inode 295 bad magic number 0x396 on inode 296, resetting 
magic number bad version number 0xffffffc0 on inode 296, resetting version number bad inode format in inode 296 cleared inode 296 bad magic number 0xe27b on inode 297, resetting magic number bad version number 0x11 on inode 297, resetting version number bad inode format in inode 297 cleared inode 297 bad magic number 0xde24 on inode 298, resetting magic number bad version number 0xffffff80 on inode 298, resetting version number bad (negative) size -4386485681027605669 on inode 298 cleared inode 298 bad magic number 0xe0c0 on inode 299, resetting magic number bad version number 0xfffffff6 on inode 299, resetting version number bad inode format in inode 299 cleared inode 299 bad magic number 0x18f on inode 300, resetting magic number bad version number 0x6d on inode 300, resetting version number bad inode format in inode 300 cleared inode 300 bad magic number 0x2fa6 on inode 301, resetting magic number bad version number 0xffffffe0 on inode 301, resetting version number bad inode format in inode 301 cleared inode 301 bad magic number 0x874 on inode 302, resetting magic number bad version number 0x17 on inode 302, resetting version number bad inode format in inode 302 cleared inode 302 bad magic number 0xc020 on inode 303, resetting magic number bad version number 0xffffffad on inode 303, resetting version number bad (negative) size -2828235057529281131 on inode 303 cleared inode 303 bad magic number 0xdb62 on inode 304, resetting magic number bad version number 0xffffffb4 on inode 304, resetting version number bad inode format in inode 304 cleared inode 304 bad magic number 0x1ec8 on inode 305, resetting magic number bad version number 0x1f on inode 305, resetting version number bad inode format in inode 305 cleared inode 305 bad magic number 0x1ece on inode 306, resetting magic number bad version number 0xffffff80 on inode 306, resetting version number bad (negative) size -4841365767938555696 on inode 306 cleared inode 306 bad magic number 0x2174 on inode 307, 
resetting magic number bad version number 0xffffff80 on inode 307, resetting version number bad (negative) size -5167479495107527569 on inode 307 cleared inode 307 bad magic number 0x42ff on inode 308, resetting magic number bad version number 0x2e on inode 308, resetting version number bad inode format in inode 308 cleared inode 308 bad magic number 0x2300 on inode 309, resetting magic number bad version number 0x13 on inode 309, resetting version number bad inode format in inode 309 cleared inode 309 bad magic number 0xd009 on inode 310, resetting magic number bad version number 0x41 on inode 310, resetting version number bad inode format in inode 310 cleared inode 310 bad magic number 0xde60 on inode 311, resetting magic number bad version number 0xfffffff3 on inode 311, resetting version number bad (negative) size -667991506409959991 on inode 311 cleared inode 311 bad magic number 0x29ad on inode 312, resetting magic number bad version number 0x2e on inode 312, resetting version number bad (negative) size -7260113882208003448 on inode 312 cleared inode 312 bad magic number 0x4b6a on inode 313, resetting magic number bad version number 0x3c on inode 313, resetting version number bad (negative) size -1729319129454037310 on inode 313 cleared inode 313 bad magic number 0xcf81 on inode 314, resetting magic number bad version number 0x38 on inode 314, resetting version number bad inode format in inode 314 cleared inode 314 bad magic number 0xa003 on inode 315, resetting magic number bad version number 0xfffffff1 on inode 315, resetting version number bad inode format in inode 315 cleared inode 315 bad magic number 0x8c04 on inode 316, resetting magic number bad version number 0xfffffff3 on inode 316, resetting version number bad (negative) size -3070587920707903991 on inode 316 cleared inode 316 bad magic number 0x3438 on inode 317, resetting magic number bad version number 0xffffffb9 on inode 317, resetting version number bad (negative) size -8696290035356641328 on 
inode 317 cleared inode 317 bad magic number 0x44c on inode 318, resetting magic number bad version number 0xffffff9e on inode 318, resetting version number bad (negative) size -8776495047018275686 on inode 318 cleared inode 318 bad magic number 0xe213 on inode 319, resetting magic number bad version number 0x32 on inode 319, resetting version number bad (negative) size -8318616862220032662 on inode 319 cleared inode 319 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27 - agno = 28 - agno = 29 - agno = 30 - agno = 31 - process newly discovered inodes... Phase 4 - check for duplicate blocks... - setting up duplicate extent list... root inode lost - clear lost+found (if it exists) ... - check for inodes claiming duplicate blocks... - agno = 0 - agno = 1 entry "S0045230.mpg" in shortform directory 2147483905 references non-existent inode 505 junking entry "S0045230.mpg" in directory inode 2147483905 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27 - agno = 28 - agno = 29 - agno = 30 - agno = 31 Phase 5 - rebuild AG headers and trees... - reset superblock... Phase 6 - check inode connectivity... reinitializing root directory - resetting contents of realtime bitmap and summary inodes - ensuring existence of lost+found directory - traversing filesystem starting at / ... - traversal finished ... - traversing all unattached subtrees ... - traversals finished ... - moving disconnected inodes to lost+found ... 
disconnected inode 256, moving to lost+found disconnected dir inode 260, moving to lost+found disconnected dir inode 262, moving to lost+found disconnected inode 263, moving to lost+found disconnected inode 264, moving to lost+found disconnected inode 265, moving to lost+found disconnected inode 266, moving to lost+found disconnected inode 267, moving to lost+found disconnected inode 268, moving to lost+found disconnected inode 269, moving to lost+found disconnected inode 270, moving to lost+found disconnected inode 271, moving to lost+found disconnected inode 272, moving to lost+found disconnected inode 273, moving to lost+found disconnected inode 274, moving to lost+found disconnected inode 275, moving to lost+found disconnected inode 276, moving to lost+found disconnected inode 277, moving to lost+found disconnected inode 278, moving to lost+found disconnected inode 279, moving to lost+found disconnected inode 280, moving to lost+found disconnected inode 281, moving to lost+found disconnected inode 282, moving to lost+found disconnected inode 283, moving to lost+found disconnected inode 284, moving to lost+found disconnected inode 285, moving to lost+found disconnected inode 286, moving to lost+found disconnected inode 287, moving to lost+found disconnected dir inode 2147483904, moving to lost+found disconnected dir inode 2147483905, moving to lost+found disconnected dir inode 2147483906, moving to lost+found disconnected inode 2147483907, moving to lost+found disconnected dir inode 2147483913, moving to lost+found Phase 7 - verify and correct link counts... 
done

--
--------------------------------------------------
Emmanuel Florac
www.intellique.com
--------------------------------------------------

From owner-xfs@oss.sgi.com Tue Mar 25 11:49:20 2008
Message-ID: <47E9494D.8080807@sandeen.net>
Date: Tue, 25 Mar 2008 13:49:49 -0500
From: Eric Sandeen <sandeen@sandeen.net>
To: Emmanuel Florac
CC: xfs@oss.sgi.com
Subject: Re: Serious XFS crash
In-Reply-To: <20080325185453.3a1957dd@galadriel.home>

Emmanuel Florac wrote:
> Here is the setup: Debian sarge running kernel 2.6.18.8 smp (clean
> build), xfsprogs version 2.6.20 (not used). An 8TB xfs filesystem broke
> apart, losing roughly 2TB of data in about 350 (big) files:
>
> Mar 22 12:38:18 system3 kernel: 0x0: c0 49 00 35 6a bc c3 80 fd d4 64
> f8 16 ec b9 85
> Mar 22 12:38:18 system3 kernel: Filesystem "md0": XFS internal error
> xfs_da_do_buf(2) at line 2084 of file fs/xfs/xfs_da_btree.c. Caller
> 0xc0214fe8

Out of curiosity, what was the storage setup for that 8T volume? (md I
guess, but what was behind it?)

Also what architecture was this?
-Eric

From owner-xfs@oss.sgi.com Tue Mar 25 12:04:58 2008
Message-ID: <20080325200339.1293a5e6@galadriel.home>
Date: Tue, 25 Mar 2008 20:03:39 +0100
From: Emmanuel Florac <eflorac@intellique.com>
To: Eric Sandeen
Cc: xfs@oss.sgi.com
Subject: Re: Serious XFS crash
In-Reply-To: <47E9494D.8080807@sandeen.net>
Organization: Intellique

On Tue, 25 Mar 2008 13:49:49 -0500, you wrote:

> Out of curiosity, what was the storage setup for that 8T volume? (md
> I guess, but what was behind it?)

Actually two hardware RAID-5 volumes (3Ware 9550SX) aggregated through a
software RAID-0 (md).

> Also what architecture was this?

Plain ole stinkin' x86, 2x dual-core Xeon HT (8 cores). Never lost any
data on my MIPS/IRIX systems.
--
--------------------------------------------------
Emmanuel Florac
www.intellique.com
--------------------------------------------------

From owner-xfs@oss.sgi.com Tue Mar 25 13:32:11 2008
Message-ID: <32953.192.168.1.70.1206477121.squirrel@neil.brown.name>
Date: Wed, 26 Mar 2008 07:32:01 +1100 (EST)
From: "NeilBrown" <neilb@suse.de>
To: "J. Bruce Fields", xfs@oss.sgi.com
Cc: "Adam Schrotenboer", "Jesper Juhl", "Trond Myklebust", lkml@vger.kernel.org, linux-nfs@vger.kernel.org, "Thomas Daniel", "Frederic Revenu", "Jeff Doan"
Subject: Re: [opensuse] nfs_update_inode: inode X mode changed, Y to Z
In-Reply-To: <20080325190943.GF2237@fieldses.org>

On Wed, March 26, 2008 6:09 am, J. Bruce Fields wrote:
> On Tue, Mar 25, 2008 at 09:59:58AM -0700, Adam Schrotenboer wrote:
>> Adam Schrotenboer wrote:
>>> Neil Brown wrote:
>>>> On Wednesday March 12, jesper.juhl@gmail.com wrote:
>>>>> On 12/03/2008, J. Bruce Fields wrote:
>>>>>> What was the exported filesystem?
>>>>> XFS
>>>>
>>>> It's a bit of a long shot, but could you try mounting the XFS file
>>>> system with
>>>>     -o ikeep
>>>> and see if it makes a difference.
>>>>
>>>> When you have "ikeep", I can find the code that increments the
>>>> generation number between different uses of the one inode number.
>>>>
>>>> When you have "noikeep" (which I think is the default) it doesn't keep
>>>> the inode on disk when deleted and so (presumably) needs to generate a
>>>> random generation number for each use. But I cannot find the code
>>>> that does that. I'm probably not looking in the right place, but I
>>>> don't think it can hurt to try "-o ikeep".
>>>>
>>>> NeilBrown
>>>>
>>> Ok, I've unmounted and remounted with that option enabled
>>> (/proc/mounts confirms it's enabled). We'll see what happens.
>>
>> Well, it's been almost 2 weeks (11 days anyhow) and I am not seeing
>> the nfs_update_inode message in the syslogs of any of our compute
>> servers. I need to talk to the various people who work with them to
>> verify, but it looks like this problem has been resolved.
>
> That's a workaround, at least, but it's unfortunate if a special mount
> option is required to get correct behavior for nfs exports. Is there
> anything we can do?

I suggest taking it up with the XFS developers...

Dear XFS developers,

Adam (and Jesper, though that was some time ago) was having problems
with an XFS filesystem that was exported via NFS. The client would
occasionally report the message given in the subject line.

Examining the NFS code suggested that the most likely explanation was
that the generation number used in the file handle was the same every
time that the inode number was re-used.

Examining the XFS code suggested that when the 'ikeep' mount option was
used, the generation number is explicitly incremented for each re-use,
while without 'ikeep', no evidence of setting the generation number
could be found. Maybe it defaults to zero.
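[Editorial note: the failure mode described above can be sketched with a toy model. This is illustrative Python, not XFS or NFS code; names and structure are invented for the illustration. An NFS file handle embeds (inode number, generation): if the filesystem bumps the generation each time an inode number is reused, a handle for a deleted file goes stale as it should, but with a constant (e.g. zero) generation, the old handle silently resolves to whichever new file reused that inode number.]

```python
# Toy model of NFS file handles: (inode number, generation).
# Illustrative only -- not XFS or NFS code.

class ToyFS:
    def __init__(self, bump_generation):
        self.bump_generation = bump_generation  # ikeep-like behaviour on/off
        self.generation = {}                    # ino -> current generation
        self.live = {}                          # ino -> file name

    def create(self, ino, name):
        # Reusing an inode number bumps its generation only if enabled.
        if self.bump_generation:
            self.generation[ino] = self.generation.get(ino, 0) + 1
        else:
            self.generation[ino] = 0
        self.live[ino] = name
        return (ino, self.generation[ino])      # the "file handle"

    def delete(self, ino):
        del self.live[ino]

    def lookup(self, handle):
        ino, gen = handle
        if ino in self.live and self.generation.get(ino) == gen:
            return self.live[ino]
        return "ESTALE"

# With generation bumping: the old handle goes stale, as it should.
fs = ToyFS(bump_generation=True)
h_old = fs.create(100, "a.txt")
fs.delete(100)
fs.create(100, "b.txt")        # inode number 100 reused
print(fs.lookup(h_old))        # ESTALE

# Without bumping: the stale handle silently resolves to the new file --
# the kind of confusion behind "inode X mode changed" on the client.
fs = ToyFS(bump_generation=False)
h_old = fs.create(100, "a.txt")
fs.delete(100)
fs.create(100, "b.txt")
print(fs.lookup(h_old))        # b.txt
```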
Experimental evidence suggests that setting 'ikeep' removes the symptom. Question: Is is possible that without 'ikeep', XFS does not even try to provide unique generation numbers? If this is the case, could it please be fixed. If it is not the case, please help me find the code responsible. Thanks, NeilBrown From owner-xfs@oss.sgi.com Tue Mar 25 14:21:08 2008 Received: with ECARTIS (v1.0.0; list xfs); Tue, 25 Mar 2008 14:21:15 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2PLL6M7030225 for ; Tue, 25 Mar 2008 14:21:08 -0700 X-ASG-Debug-ID: 1206480099-12fc00270000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from adm01.ops.amientertainment.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id A54B1102C92E for ; Tue, 25 Mar 2008 14:21:40 -0700 (PDT) Received: from adm01.ops.amientertainment.net (adm01.ops.amientertainment.net [64.141.138.10]) by cuda.sgi.com with ESMTP id hmHazZ1WTQKANbq7 for ; Tue, 25 Mar 2008 14:21:40 -0700 (PDT) Received: from dhcp-192-168-6-143.ops.amientertainment.net (dhcp-192-168-6-143.ops.amientertainment.net [192.168.6.143]) (authenticated bits=0) by adm01.ops.amientertainment.net (8.13.1/8.13.1/BCH2.0) with ESMTP id m2PLLda3019457 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NO) for ; Tue, 25 Mar 2008 17:21:39 -0400 X-ASG-Orig-Subj: XFS performance on LVM2 mirror Subject: XFS performance on LVM2 mirror From: Scott Tanner To: xfs@oss.sgi.com Content-Type: text/plain Organization: Rowe / AMI Date: Tue, 25 Mar 2008 17:37:32 -0400 Message-Id: <1206481052.4283.17.camel@dhcp-192-168-6-143> Mime-Version: 1.0 X-Mailer: Evolution 2.6.0 Content-Transfer-Encoding: 7bit X-Greylist: Sender succeeded SMTP AUTH, not 
Hello,

I've been doing some benchmarking (using bonnie++) of the XFS filesystem
and found a substantial performance drop in rewrite and delete operations
when using an LVM2 mirror. When using the Linux software RAID driver to
perform the mirror, XFS performance is quite good. I've tried a number of
the performance tweaks from the mailing list archives, as seen below. The
only option that seemed to make a real difference was LVM's --corelog,
which only worked for one test before the server crashed.

Are there any special tweaks for XFS on LVM2 mirrors? Recommendations on
my setup?
Thanks,
Scott

Here are some system specs and benchmark results:

Dell 1750 (32-bit), single Xeon 2.8GHz, 1GB RAM
CentOS 5 (2.6.18-53.1.14.el5)
Emulex LightPulse L982 HBA
2 x JetStor disk arrays: 9-disk RAID 5, 128K stripe, 2TB LU
16K readahead set on all disks (blockdev --setra 16384 /dev/sdx)
Adjusted max_sectors_kb - no improvement
Adjusted nr_requests - no improvement
XFS mounted with -o noatime,nodiratime,nobarrier,logbufs=8

All bonnie++ tests using -s 2016M -n 2:5000000:1000000/64, displaying the
average of 3 runs.

---------------------------------------------------------------------
Single 2TB disk (LU), no LVM, no XFS options:
fs01,2G,40374,,186139,52,84250,24,43405,98,191468,14,745.0,0,,27,19,51,7,1389,20,26,20,18,1,1089,18
(S.Out Block = 186M/s | S.Out Rewrite = 84M/s | S.In Block = 191M/s | Del = 1389)

Linux software RAID mirror, no XFS options:
fs01,2G,39516,,93718,25,75745,22,43253,98,193540,23,1475.8,1,,24,19,36,7,1184,18,24,19,17,2,969,12
(S.Out Block = 93M/s | S.Out Rewrite = 75M/s | S.In Block = 193M/s | Del = 1184)

LVM2 mirror, no XFS options:
fs01,2G,33459,,93708,28,18752,5,42716,97,196157,20,1031.9,0,,25,21,33,5,180,2,26,22,18,1,226,1
(S.Out Block = 93M/s | S.Out Rewrite = 18M/s | S.In Block = 196M/s | Del = 180)

LVM2 mirror, XFS options logsize=64M sunit=256 swidth=512 blks:
fs01,2G,34492,,93389,27,18381,5,43100,97,191331,21,1063.2,0,,23,20,29,4,178,1,24,19,17,1,191,1
(S.Out Block = 93M/s | S.Out Rewrite = 18M/s | S.In Block = 191M/s | Del = 178)

LVM2 mirror, XFS options logsize=64M sunit=64 swidth=1024 blks:
fs01,2G,36145,,92566,26,18430,5,43560,99,200523,21,1088.4,0,,23,20,34,5,211,2,25,20,18,1,172,1
(S.Out Block = 92M/s | S.Out Rewrite = 18M/s | S.In Block = 200M/s | Del = 211)

LVM2 mirror, XFS options logsize=64M sunit=32 swidth=256 blks:
fs01,2G,36922,,96298,26,18832,6,43623,97,199477,26,1187.2,3,,27,20,33,11,235,4,25,19,20,6,158,3
(S.Out Block = 96M/s | S.Out Rewrite = 18M/s | S.In Block = 199M/s | Del = 235)

LVM2 mirror --metadatasize=64k, XFS options logsize=64M sunit=32 swidth=256 blks:
fs01,2G,34508,,96358,25,19132,6,43817,98,200040,27,1202.5,3,,29,22,33,10,234,4,29,23,19,6,195,4
(S.Out Block = 96M/s | S.Out Rewrite = 19M/s | S.In Block = 200M/s | Del = 234)

LVM2 mirror --metadatasize=64k, XFS options logsize=64M sunit=256 swidth=256 blks:
fs01,2G,36367,,91923,24,18639,6,43732,97,199305,26,1230.0,3,,26,19,31,10,212,4,30,22,19,6,151,3
(S.Out Block = 91M/s | S.Out Rewrite = 18M/s | S.In Block = 199M/s | Del = 212)

LVM2 mirror --corelog, XFS options logsize=64M sunit=256 swidth=256 blks:
fs01,2G,39387,,95708,30,46650,15,43273,98,199461,21,1037.3,0,,25,20,29,4,889,10,26,20,17,1,927,14
(S.Out Block = 95M/s | S.Out Rewrite = 46M/s | S.In Block = 199M/s | Del = 889)

And for comparison, EXT3 over the LVM2 mirror:
fs01,2G,33028,,117260,75,61222,38,36994,86,196760,45,885.5,2,,33,60,33,11,143,5,31,57,21,7,147,6
(S.Out Block = 117M/s | S.Out Rewrite = 61M/s | S.In Block = 196M/s | Del = 143)

From owner-xfs@oss.sgi.com Tue Mar 25 14:24:09 2008
Date: Tue, 25 Mar 2008 17:24:25 -0400
From: "Josef 'Jeff' Sipek"
To: NeilBrown
Cc: "J. Bruce Fields", xfs@oss.sgi.com, Adam Schrotenboer, Jesper Juhl,
    Trond Myklebust, lkml@vger.kernel.org, linux-nfs@vger.kernel.org,
    Thomas Daniel, Frederic Revenu, Jeff Doan
Subject: Re: [opensuse] nfs_update_inode: inode X mode changed, Y to Z
Message-ID: <20080325212425.GA20257@josefsipek.net>
In-Reply-To: <32953.192.168.1.70.1206477121.squirrel@neil.brown.name>
On Wed, Mar 26, 2008 at 07:32:01AM +1100, NeilBrown wrote:
> I suggest taking it up with the XFS developers...
>
> Dear XFS developers.
> Adam (and Jesper, though that was some time ago) was having problems
> with an XFS filesystem that was exported via NFS. The client would
> occasionally report the message given in the subject line.
> Examining the NFS code suggested that the most likely explanation
> was that the generation number used in the file handle was the same
> every time that the inode number was re-used.
>
> Examining the XFS code suggested that when the 'ikeep' mount option was
> used, the generation number was explicitly incremented for each
> re-use, while without 'ikeep', no evidence of setting the generation
> number could be found. Maybe it defaults to zero.
>
> Experimental evidence suggests that setting 'ikeep' removes the symptom.
>
> Question: Is it possible that without 'ikeep', XFS does not even try
> to provide unique generation numbers? If this is the case, could it
> please be fixed? If it is not the case, please help me find the code
> responsible.

Unless you specify the "ikeep" mount option, XFS will remove unused inode
clusters. The newly freed blocks can then be used to store data or
possibly a new inode cluster. If the blocks get reused for inodes, you'll
end up with inodes whose generation numbers have regressed. (inode number
= f(block number))

Using the "ikeep" mount option causes XFS to _never_ free empty inode
clusters. This means that if you create many files and then unlink them,
you'll end up with many unused inodes that are still allocated (and
taking up disk space) but free to be used by the next
creat(2)/mkdir(2)/etc.

This "problem" is inherent to any file system which dynamically allocates
inodes.

Josef 'Jeff' Sipek.

--
Linux, n.: Generous programmers from around the world all join forces to
help you shoot yourself in the foot for free.
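[The failure mode described in the message above — an inode number recycled
with its generation number reset to zero — can be sketched with a toy model.
This is illustrative Python, not XFS code; the function name and values are
invented for the example:]

```python
# Toy model of NFS filehandles built from (inode number, generation).
# With noikeep semantics, a freed inode cluster's blocks can be reused
# for a new cluster whose on-disk inodes come back zeroed, so a recycled
# inode number yields the same filehandle as the deleted file it replaces.

def make_filehandle(ino, generation):
    """An NFS filehandle must be stable and unique per file."""
    return (ino, generation)

# A file is created, exported over NFS, then deleted; its cluster is freed.
old_fh = make_filehandle(ino=128, generation=0)

# The blocks are reused for a new inode cluster: the memset() leaves the
# generation at 0, and inode 128 is handed to a brand-new file.
new_fh = make_filehandle(ino=128, generation=0)

# The client's stale handle now silently resolves to the new file.
print(old_fh == new_fh)  # True: filehandle collision

# With ikeep (or any scheme that bumps the generation on reuse),
# the collision disappears:
fixed_fh = make_filehandle(ino=128, generation=1)
print(old_fh == fixed_fh)  # False
```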
From owner-xfs@oss.sgi.com Tue Mar 25 14:38:01 2008
From: "NeilBrown"
To: "Josef 'Jeff' Sipek"
Date: Wed, 26 Mar 2008 08:38:22 +1100 (EST)
Message-ID: <34178.192.168.1.70.1206481102.squirrel@neil.brown.name>
In-Reply-To: <20080325212425.GA20257@josefsipek.net>
Subject: Re: [opensuse] nfs_update_inode: inode X mode changed, Y to Z
Cc: "J. Bruce Fields", xfs@oss.sgi.com, "Adam Schrotenboer", "Jesper Juhl",
    "Trond Myklebust", lkml@vger.kernel.org, linux-nfs@vger.kernel.org,
    "Thomas Daniel", "Frederic Revenu", "Jeff Doan"

On Wed, March 26, 2008 8:24 am, Josef 'Jeff' Sipek wrote:
> Unless you specify the "ikeep" mount option, XFS will remove unused
> inode clusters. The newly freed blocks can then be used to store data
> or possibly a new inode cluster. If the blocks get reused for inodes,
> you'll end up with inodes whose generation numbers have regressed.
> (inode number = f(block number))
>
> Using the "ikeep" mount option causes XFS to _never_ free empty inode
> clusters. This means that if you create many files and then unlink
> them, you'll end up with many unused inodes that are still allocated
> (and taking up disk space) but free to be used by the next
> creat(2)/mkdir(2)/etc.
>
> This "problem" is inherent to any file system which dynamically
> allocates inodes.

Yes, I understand all that.

However you still need to do something about the generation number. It
must be set to something. When you allocate an inode that doesn't
currently exist on the device, you obviously cannot increment the old
value and use that.
However you can do a lot better than always using 0. The simplest would
be to generate a 'random' number (get_random_bytes). Slightly better
would be to generate a random number at boot time and use that,
incrementing it each time it is used to set the generation number for an
inode. Even better would be to store that 'next generation number' in
the superblock, so there would be even less risk of the 'random'
generation producing repeats.

This is what ext3 does. It doesn't dynamically allocate inodes, but it
doesn't want to pay the cost of reading an old inode from storage just
to see what the generation number is. So it has a number in the
superblock which is incremented on each inode allocation and is used as
the generation number.

Certainly anything would be better than always using the same number.

NeilBrown

From owner-xfs@oss.sgi.com Tue Mar 25 15:12:55 2008
Date: Tue, 25 Mar 2008 18:13:21 -0400
From: "Josef 'Jeff' Sipek"
To: NeilBrown
Cc: "J. Bruce Fields", xfs@oss.sgi.com, Adam Schrotenboer, Jesper Juhl,
    Trond Myklebust, linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org,
    Thomas Daniel, Frederic Revenu, Jeff Doan
Subject: Re: [opensuse] nfs_update_inode: inode X mode changed, Y to Z
Message-ID: <20080325221321.GC20257@josefsipek.net>
In-Reply-To: <34178.192.168.1.70.1206481102.squirrel@neil.brown.name>
On Wed, Mar 26, 2008 at 08:38:22AM +1100, NeilBrown wrote:
...
> However you still need to do something about the generation number. It
> must be set to something.

Right.

> When you allocate an inode that doesn't currently exist on the device,
> you obviously cannot increment the old value and use that.

Makes sense.

> However you can do a lot better than always using 0.

I looked at the code (xfs_ialloc.c:xfs_ialloc_ag_alloc):

 290		/*
 291		 * Set initial values for the inodes in this buffer.
 292		 */
 293		xfs_biozero(fbuf, 0, ninodes << args.mp->m_sb.sb_inodelog);
 294		for (i = 0; i < ninodes; i++) {
 295			free = XFS_MAKE_IPTR(args.mp, fbuf, i);
 296			free->di_core.di_magic = cpu_to_be16(XFS_DINODE_MAGIC);
 297			free->di_core.di_version = version;
 298			free->di_next_unlinked = cpu_to_be32(NULLAGINO);
 299			xfs_ialloc_log_di(tp, fbuf, i,
 300				XFS_DI_CORE_BITS | XFS_DI_NEXT_UNLINKED);
 301		}

xfs_biozero(...) turns into a memset(buf, 0, len), and since the loop
that follows doesn't change the generation number, it'll stay 0.

> The simplest would be to generate a 'random' number (get_random_bytes).
> Slightly better would be to generate a random number at boot time
> and use that, incrementing it each time it is used to set the
> generation number for an inode.

I'm not familiar enough with NFS; do you want something that's
monotonically increasing, or do you just test for inequality? If it is
inequality, why not just use something like jiffies - that should be
unique enough.

> Even better would be to store that 'next generation number' in the
> superblock so there would be even less risk of the 'random' generation
> producing repeats.
> This is what ext3 does. It doesn't dynamically allocate inodes,
> but it doesn't want to pay the cost of reading an old inode from
> storage just to see what the generation number is.
> So it has a number in the superblock which is incremented on each
> inode allocation and is used as the generation number.

Something tells me that the SGI folks might not be all too happy with
the in-sb number... XFS tries to be as parallel as possible, and this
would cause the counter variable to bounce around their NUMA systems.
Perhaps a per-AG variable would be better, but I remember reading that
parallelizing updates to some inode count variable (I forget which) in
the superblock \cite{dchinner-ols2006} led to a rather big improvement.

It's almost morning down under, so I guess we'll get their comments on
this soon.

Josef 'Jeff' Sipek.

--
Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it.
	- Brian W. Kernighan

From owner-xfs@oss.sgi.com Tue Mar 25 15:24:55 2008
Date: Tue, 25 Mar 2008 17:36:58 -0400
From: "Josef 'Jeff' Sipek"
To: Christoph Hellwig
Cc: xfs@oss.sgi.com
Subject: Re: [PATCH 1/1] Documentation: correct XFS defaults for
 ikeep/noikeep mount options
Message-ID: <20080325213658.GB20257@josefsipek.net>
In-Reply-To: <20080222041036.GA24091@infradead.org>

On Thu, Feb 21, 2008 at 11:10:36PM -0500, Christoph Hellwig wrote:
> Looks good to me.

Any word on this critical patch? ;)

Josef 'Jeff' Sipek.
> On Mon, Feb 18, 2008 at 01:32:38AM -0500, Josef 'Jeff' Sipek wrote:
> > Signed-off-by: Josef 'Jeff' Sipek
> > ---
> >  Documentation/filesystems/xfs.txt |    7 ++++---
> >  1 files changed, 4 insertions(+), 3 deletions(-)
> >
> > diff --git a/Documentation/filesystems/xfs.txt b/Documentation/filesystems/xfs.txt
> > index 74aeb14..655bdfe 100644
> > --- a/Documentation/filesystems/xfs.txt
> > +++ b/Documentation/filesystems/xfs.txt
> > @@ -59,9 +59,10 @@ When mounting an XFS filesystem, the following options are accepted.
> >
> >   ikeep/noikeep
> > 	When inode clusters are emptied of inodes, keep them around
> > -	on the disk (ikeep) - this is the traditional XFS behaviour
> > -	and is still the default for now. Using the noikeep option,
> > -	inode clusters are returned to the free space pool.
> > +	on the disk (ikeep) - this is the traditional XFS behaviour.
> > +	Using the noikeep option, inode clusters are returned to the
> > +	free space pool. noikeep is the default for non-DMAPI mounts,
> > +	while ikeep is the default when DMAPI is in use.
> >
> >   inode64
> > 	Indicates that XFS is allowed to create inodes at any location
> > --
> > 1.5.4.rc2.85.g9de45-dirty
>
> ---end quoted text---

--
A CRAY is the only computer that runs an endless loop in just 4 hours...
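[Earlier in the thread, the presence of 'ikeep' was confirmed by inspecting
/proc/mounts, and the patch above documents when each default applies. A
minimal sketch of automating that check follows — illustrative Python; the
sample mount line is invented, and real /proc/mounts content will differ:]

```python
def mount_options(proc_mounts_text, mountpoint):
    """Return the option list for a mountpoint from /proc/mounts-style
    text (whitespace fields: device, mountpoint, fstype, options, ...)."""
    for line in proc_mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == mountpoint:
            return fields[3].split(",")
    return []

# Hypothetical /proc/mounts excerpt for an XFS filesystem:
sample = "/dev/sdb1 /export xfs rw,noatime,ikeep 0 0\n"

opts = mount_options(sample, "/export")
print("ikeep" in opts)  # True
```

In practice the text would come from `open("/proc/mounts").read()`; the
parsing is the same.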
From owner-xfs@oss.sgi.com Tue Mar 25 16:08:42 2008
From: "NeilBrown"
To: "Josef 'Jeff' Sipek"
Date: Wed, 26 Mar 2008 10:09:00 +1100 (EST)
Message-ID: <45922.192.168.1.70.1206486540.squirrel@neil.brown.name>
In-Reply-To: <20080325221321.GC20257@josefsipek.net>
Subject: Re: [opensuse] nfs_update_inode: inode X mode changed, Y to Z
Cc: "J. Bruce Fields", xfs@oss.sgi.com, "Adam Schrotenboer", "Jesper Juhl",
    "Trond Myklebust", linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org,
    "Thomas Daniel", "Frederic Revenu", "Jeff Doan"

On Wed, March 26, 2008 9:13 am, Josef 'Jeff' Sipek wrote:
> On Wed, Mar 26, 2008 at 08:38:22AM +1100, NeilBrown wrote:
> ...
>> However you still need to do something about the generation number. It
>> must be set to something.
>
> Right.
>
>> When you allocate an inode that doesn't currently exist on the device,
>> you obviously cannot increment the old value and use that.
>
> Makes sense.
>
>> However you can do a lot better than always using 0.
>
> I looked at the code (xfs_ialloc.c:xfs_ialloc_ag_alloc):
>
>  290		/*
>  291		 * Set initial values for the inodes in this buffer.
>  292		 */
>  293		xfs_biozero(fbuf, 0, ninodes << args.mp->m_sb.sb_inodelog);
>  294		for (i = 0; i < ninodes; i++) {
>  295			free = XFS_MAKE_IPTR(args.mp, fbuf, i);
>  296			free->di_core.di_magic = cpu_to_be16(XFS_DINODE_MAGIC);
>  297			free->di_core.di_version = version;
>  298			free->di_next_unlinked = cpu_to_be32(NULLAGINO);
>  299			xfs_ialloc_log_di(tp, fbuf, i,
>  300				XFS_DI_CORE_BITS | XFS_DI_NEXT_UNLINKED);
>  301		}
>
> xfs_biozero(...) turns into a memset(buf, 0, len), and since the loop
> that follows doesn't change the generation number, it'll stay 0.
>
>> The simplest would be to generate a 'random' number (get_random_bytes).
>> Slightly better would be to generate a random number at boot time
>> and use that, incrementing it each time it is used to set the
>> generation number for an inode.
>
> I'm not familiar enough with NFS; do you want something that's
> monotonically increasing, or do you just test for inequality? If it is
> inequality, why not just use something like jiffies - that should be
> unique enough.

What we need is for the "filehandle" to be stable and unique.

By 'stable' I mean that every time I get the filehandle for a particular
file, I get the same string of bytes. By 'unique' I mean that if I get
two filehandles for two different files, they must differ in at least one
bit. If a file is deleted and the inode is re-used for a new file, then
the old and new files are different and must have different filehandles.

The filehandle is traditionally generated from the inode number and a
generation number, but the filesystem can actually do whatever it likes.
XFS does it with xfs_fs_encode_fh().

Certainly you could initialise the i_generation to jiffies in
xfs_ialloc_ag_alloc. That would be a suitable fix. get_random_bytes
might be better, but the difference probably wouldn't be noticeable.
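[The alternatives discussed above — jiffies, a boot-time random seed, or an
ext3-style counter kept in the superblock — all reduce to "never hand out
the same generation twice in quick succession". A rough model of the counter
scheme, as a sketch in illustrative Python rather than kernel code (the
class name is invented, and the random seed stands in for a value persisted
in the superblock):]

```python
import random

class GenerationSource:
    """Ext3-style scheme: seed a 32-bit counter once and bump it on
    every inode allocation, so recycled inode numbers get fresh
    generation numbers."""

    def __init__(self):
        # Stand-in for a persisted superblock field, seeded randomly.
        self.next_gen = random.getrandbits(32)

    def allocate(self):
        gen = self.next_gen
        self.next_gen = (self.next_gen + 1) & 0xFFFFFFFF  # wrap at 32 bits
        return gen

gens = GenerationSource()
# Reusing the same inode number twice now yields distinct filehandles:
fh_old = (128, gens.allocate())
fh_new = (128, gens.allocate())
print(fh_old != fh_new)  # True
```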
NeilBrown

From owner-xfs@oss.sgi.com Tue Mar 25 16:35:48 2008
Date: Wed, 26 Mar 2008 10:36:11 +1100
From: David Chinner
To: Emmanuel Florac
Cc: xfs@oss.sgi.com
Subject: Re: Serious XFS crash
Message-ID: <20080325233611.GW103491721@sgi.com>
In-Reply-To: <20080325185453.3a1957dd@galadriel.home>

On Tue, Mar 25, 2008 at 06:54:53PM +0100, Emmanuel Florac wrote:
>
> Here is the setup: Debian sarge running kernel 2.6.18.8 SMP (clean
> build), xfsprogs
> version 2.6.20 (not used). An 8TB xfs filesystem broke apart losing
> roughly 2TB of data in about 350 (big) files:
>
> Mar 22 12:38:18 system3 kernel:
> 0x0: c0 49 00 35 6a bc c3 80 fd d4 64 f8 16 ec b9 85

       | forw      | | back      | |mg1| |pad|

#define XFS_DA_NODE_MAGIC	0xfebe	/* magic number: non-leaf blocks */
#define XFS_ATTR_LEAF_MAGIC	0xfbee	/* magic number: attribute leaf blks */
#define XFS_DIR2_LEAF1_MAGIC	0xd2f1	/* magic number: v2 dirlf single blks */
#define XFS_DIR2_LEAFN_MAGIC	0xd2ff	/* magic number: v2 dirlf multi blks */

> 0x0: c0 49 00 35 6a bc c3 80 fd d4 64 f8 16 ec b9 85

       |hdr.magic  |

#define XFS_DIR2_BLOCK_MAGIC	0x58443242	/* XD2B: for one block dirs */
#define XFS_DIR2_DATA_MAGIC	0x58443244	/* XD2D: for multiblock dirs */

So none of the magic numbers for a directory block match. And FWIW, I
can't see any XFS magic number in that block.

> As a precaution, I booted with a live CD with xfsprogs 2.8.11. I first
> ran xfs_repair -n:
>
> No modify flag set, skipping phase 5
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
>         - scan filesystem freespace and inode maps...
> bad magic # 0x7c6999f7 for agf 0
> bad version # 270461846 for agf 0
> bad sequence # -506160237 for agf 0
> bad length 1130385756 for agf 0, should be 68590288
> flfirst 260475029 in agf 0 too large (max = 128)
> fllast -1448142937 in agf 0 too large (max = 128)
> bad magic # 0xfffde400 for agi 0
> bad version # -1469688457 for agi 0
> bad sequence # 2021095287 for agi 0
> bad length # 2004318207 for agi 0, should be 68590288
> would reset bad agf for ag 0
> would reset bad agi for ag 0
> bad uncorrected agheader 0, skipping ag...
> root inode chunk not found

Oh, that's toast. Something has overwritten the start of the filesystem,
and it does not appear to be other metadata. Well, not exactly the start
of the filesystem - the superblock is untouched.

What sector size is being used for the XFS filesystem?
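[The magic-number comparison above can be reproduced mechanically. This is
an illustrative Python sketch using only the bytes and magic values quoted
in the message; per the `| forw | back |mg1|` annotation, the 16-bit
da_blkinfo magic sits at offset 8 and a dir2 header magic at offset 0:]

```python
# The first 16 bytes of the corrupted block, from the kernel log above.
dump = bytes.fromhex("c04900356abcc380fdd464f816ecb985")

magics_16 = {  # 16-bit dir/attr block magics quoted in the message
    0xfebe: "XFS_DA_NODE_MAGIC",
    0xfbee: "XFS_ATTR_LEAF_MAGIC",
    0xd2f1: "XFS_DIR2_LEAF1_MAGIC",
    0xd2ff: "XFS_DIR2_LEAFN_MAGIC",
}
magics_32 = {  # 32-bit dir2 block/data magics ("XD2B"/"XD2D")
    0x58443242: "XFS_DIR2_BLOCK_MAGIC",
    0x58443244: "XFS_DIR2_DATA_MAGIC",
}

# da_blkinfo layout: forw (4 bytes), back (4 bytes), magic (2), pad (2).
da_magic = int.from_bytes(dump[8:10], "big")   # 0xfdd4
# dir2 block/data headers put their 32-bit magic at offset 0.
dir2_magic = int.from_bytes(dump[0:4], "big")  # 0xc0490035

print(da_magic in magics_16)    # False: no 16-bit magic matches
print(dir2_magic in magics_32)  # False: no 32-bit magic matches
```

Neither word matches any of the listed magics, which is exactly the
"none of the magic numbers for a directory block match" conclusion.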
If it's not the same as the filesystem block size, then XFS can't have
done this itself because the offset that this garbage starts at would
not be block aligned.....

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Tue Mar 25 16:55:05 2008
Date: Wed, 26 Mar 2008 12:31:57 +1100 (EST)
From: tes@emu.melbourne.sgi.com
To: sgi.bugs.xfs@engr.sgi.com, xfs@oss.sgi.com
Subject: TAKE 979337 - add dump/restore paths to xfstests/common.dump and set up path for bc

Add dump/restore paths to xfstests/common.dump and set up path for bc.
Date: Wed Mar 26 10:51:39 AEDT 2008
Workarea: emu.melbourne.sgi.com:/home/tes/isms/xfs-cmds
Inspected by: sandeen@sandeen.net

The following file(s) were checked into:
  longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb

Modid: master-melb:xfs-cmds:30712a
xfstests/common.config - 1.129 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/common.config.diff?r1=text&tr1=1.129&r2=text&tr2=1.128&f=h
xfstests/common.dump - 1.60 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/common.dump.diff?r1=text&tr1=1.60&r2=text&tr2=1.59&f=h
	- add dump/restore paths to xfstests/common.dump and set up path for bc

From owner-xfs@oss.sgi.com Tue Mar 25 16:57:27 2008
Date: Wed, 26 Mar 2008 10:57:49 +1100
From: Timothy Shimmin
To: Eric Sandeen
Cc: Barry Naujok, xfs-oss
Subject: Re: [PATCH] xfsqa: call _notrun in common.dump if dump utils not found

Eric Sandeen wrote:
> Timothy Shimmin wrote:
>> Timothy Shimmin wrote:
>> > Eric Sandeen wrote:
>> >> Barry Naujok wrote:
>> >>> On Tue, 25 Mar 2008 14:23:40 +1100, Timothy Shimmin wrote:
>> >>>
>> >>>> Thanks, Eric.
>> >>>>
>> >>>> On IRIX:
>> >>>> > where xfsdump xfsrestore xfsinvutil
>> >>>> /sbin/xfsdump
>> >>>> /usr/sbin/xfsdump
>> >>>> /sbin/xfsrestore
>> >>>> /usr/sbin/xfsinvutil
>> >>>> > ls -l /sbin/xfsdump
>> >>>> lrwxr-xr-x ... /sbin/xfsdump -> ../usr/sbin/xfsdump*
>> >>>>
>> >>>> I'll add the IRIX xfsrestore path and wait for Russell or
>> >>>> whoever to complain about BSD :)
>> >>> common.config sets up environment variables for the various
>> >>> tools used and can handle these paths. It has them for the
>> >>> xfsprogs tools (XFS_REPAIR_PROG, XFS_DB_PROG, etc) but
>> >>> nothing for the xfsdump tools.
>> >>
>> >> yeah, that may be better...
>> >>
>> > Okay. Fair point.
>> > I'll change common.dump to use the XFSDUMP_PROG etc....
>> > and common.config to set the PROG vars.
>> >
>> > --Tim
>>
>> Okay, something like below then.
>> Note, I only test for failure in common.dump and I need
>> to filter out the fullpaths for the commands now as
>> they output their full path.
>> Oh and I changed a bit for the DEBUGDUMP which can
>> use binaries in xfstests for debugging.
>> Ughhh.
>
> well, I don't want to make this too much work... it's not critical.
> :)

That's cool.
It's Barry's fault ;-)

--Tim

From owner-xfs@oss.sgi.com Tue Mar 25 18:54:09 2008
Date: Wed, 26 Mar 2008 12:52:39 +1100
From: Mark Goodwin
To: Eric Sandeen
Cc: Barry Naujok, xfs@oss.sgi.com
Subject: Re: REVIEW: Write primary superblock info to ALL secondaries during mkfs

Eric Sandeen wrote:
> Barry Naujok wrote:
>> Secondaries should contain redundant information from the primary
>> superblock. It does this for the filesystem geometry information,
>> but not inode values (rootino, rt inos, quota inos).
>>
>> This patch updates all the secondaries from the primary just before
>> it marks the filesystem as good to go.
>>
>> Unfortunately, this also affects the output of xfs_repair during
>> QA 030 and 178 which restores the primary superblock from the
>> secondaries.
>>
>> Now that the secondaries have valid inode values, xfs_repair
>> does not have to restore them to the correct values after copying
>> the secondary into the primary.
>>
>> Attached is the mkfs.xfs patch and also the updated golden
>> outputs for QA 030 and 178.
>>
>> The next step after this is to enhance xfs_repair to be more
>> thorough in checking the secondaries during Phase 1.
>
> One related thing I'd always wondered about was stamping a secondary at
> the very end of the device (and therefore shrinking the fs by just a
> bit) - repair could then do a quick check at the end of the device
> before resorting to scanning for the 2nd backup... would this make any
> sense?

I guess it might, Barry what do you think? Probably makes grow a bit
more complicated. What would repair do if it doesn't find the backup
SB at the end of the device? We'd need a new SB flag to indicate it's
supposed to be there, which seems a bit chicken-and-egg'ish ...
Cheers

> -Eric

--
Mark Goodwin markgw@sgi.com
Engineering Manager for XFS and PCP    Phone: +61-3-99631937
SGI Australian Software Group          Cell: +61-4-18969583
-------------------------------------------------------------

From owner-xfs@oss.sgi.com Tue Mar 25 19:06:49 2008
Date: Tue, 25 Mar 2008 21:07:18 -0500
From: Eric Sandeen
To: markgw@sgi.com
Cc: Barry Naujok, xfs@oss.sgi.com
Subject: Re: REVIEW: Write primary superblock info to ALL secondaries during mkfs

Mark Goodwin wrote:
> Eric Sandeen wrote:
>> One related thing I'd always wondered about was stamping a secondary at
>> the very end of the device (and therefore shrinking the fs by just a
>> bit) - repair could then do a quick check at the end of the device
>> before resorting to scanning for the 2nd backup... would this make any
>> sense?
>
> I guess it might, Barry what do you think? Probably makes grow a bit
> more complicated. What would repair do if it doesn't find the backup
> SB at the end of the device? We'd need a new SB flag to indicate it's
> supposed to be there, which seems a bit chicken-and-egg'ish ...

If not found at the end, just go back to the original search scheme,
I'd say...
-Eric

From owner-xfs@oss.sgi.com Tue Mar 25 19:25:48 2008
Date: Wed, 26 Mar 2008 13:33:07 +1100
From: Lachlan McIlroy
To: Christoph Hellwig
Cc: xfs@oss.sgi.com
Subject: Re: [PATCH] kill mrlock_t

I like the idea but the i_lock_state thing needs work. See comments
below.

Christoph Hellwig wrote:
> XFS inodes are locked via the xfs_ilock family of functions which
> internally use a rw_semaphore wrapper into an abstraction called
> mrlock_t. The mrlock_t should be purely internal to xfs_ilock functions
> but leaks through to the callers via various lock state asserts.
>
> This patch:
>
>  - adds a new xfs_isilocked abstraction to make the lock state asserts
>    fit into the xfs_ilock API family
>  - opencodes the mrlock wrappers in the xfs_ilock family of functions
>  - makes the state tracking debug-only and merged into a single state
>    word
>  - removes superfluous flags to the xfs_ilock family of functions
>
> This kills 8 bytes per inode for non-debug builds, which would e.g.
> be the space for ACL caching on 32bit systems.
>
> Signed-off-by: Christoph Hellwig
>
>   */
>  void
> -xfs_ilock(xfs_inode_t *ip,
> -          uint lock_flags)
> +xfs_ilock(
> +	xfs_inode_t	*ip,
> +	uint		lock_flags)
>  {
>  	/*
>  	 * You can't set both SHARED and EXCL for the same lock,
> @@ -608,16 +608,19 @@ xfs_ilock(xfs_inode_t *ip,
>  	ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_DEP_MASK)) == 0);
>
>  	if (lock_flags & XFS_IOLOCK_EXCL) {
> -		mrupdate_nested(&ip->i_iolock, XFS_IOLOCK_DEP(lock_flags));
> +		down_write_nested(&ip->i_iolock, XFS_IOLOCK_DEP(lock_flags));
>  	} else if (lock_flags & XFS_IOLOCK_SHARED) {
> -		mraccess_nested(&ip->i_iolock, XFS_IOLOCK_DEP(lock_flags));
> +		down_read_nested(&ip->i_iolock, XFS_IOLOCK_DEP(lock_flags));
>  	}
>  	if (lock_flags & XFS_ILOCK_EXCL) {
> -		mrupdate_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags));
> +		down_write_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags));
>  	} else if (lock_flags & XFS_ILOCK_SHARED) {
> -		mraccess_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags));
> +		down_read_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags));
>  	}
>  	xfs_ilock_trace(ip, 1, lock_flags, (inst_t *)__return_address);
> +#ifdef DEBUG
> +	ip->i_lock_state |= (lock_flags & (XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL));
> +#endif

This seems racy - if we only acquire one of ilock/iolock exclusive and
another thread acquires the other lock exclusive then we'll race setting
i_lock_state.
>  }
>
>  /*
> @@ -634,11 +637,12 @@ xfs_ilock(xfs_inode_t *ip,
>   *
>   */
>  int
> -xfs_ilock_nowait(xfs_inode_t *ip,
> -          uint lock_flags)
> +xfs_ilock_nowait(
> +	xfs_inode_t	*ip,
> +	uint		lock_flags)
>  {
> -	int iolocked;
> -	int ilocked;
> +	int		iolocked;
> +	int		ilocked;
>
>  	/*
>  	 * You can't set both SHARED and EXCL for the same lock,
> @@ -653,35 +657,36 @@ xfs_ilock_nowait(xfs_inode_t *ip,
>
>  	iolocked = 0;
>  	if (lock_flags & XFS_IOLOCK_EXCL) {
> -		iolocked = mrtryupdate(&ip->i_iolock);
> -		if (!iolocked) {
> +		iolocked = down_write_trylock(&ip->i_iolock);
> +		if (!iolocked)
>  			return 0;
> -		}
>  	} else if (lock_flags & XFS_IOLOCK_SHARED) {
> -		iolocked = mrtryaccess(&ip->i_iolock);
> -		if (!iolocked) {
> +		iolocked = down_read_trylock(&ip->i_iolock);
> +		if (!iolocked)
>  			return 0;
> -		}
>  	}
> +
>  	if (lock_flags & XFS_ILOCK_EXCL) {
> -		ilocked = mrtryupdate(&ip->i_lock);
> -		if (!ilocked) {
> -			if (iolocked) {
> -				mrunlock(&ip->i_iolock);
> -			}
> -			return 0;
> -		}
> +		ilocked = down_write_trylock(&ip->i_lock);
> +		if (!ilocked)
> +			goto out_ilock_fail;
>  	} else if (lock_flags & XFS_ILOCK_SHARED) {
> -		ilocked = mrtryaccess(&ip->i_lock);
> -		if (!ilocked) {
> -			if (iolocked) {
> -				mrunlock(&ip->i_iolock);
> -			}
> -			return 0;
> -		}
> +		ilocked = down_read_trylock(&ip->i_lock);
> +		if (!ilocked)
> +			goto out_ilock_fail;
>  	}
>  	xfs_ilock_trace(ip, 2, lock_flags, (inst_t *)__return_address);
> +#ifdef DEBUG
> +	ip->i_lock_state |= (lock_flags & (XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL));
> +#endif

Same deal, setting i_lock_state looks racy.
>  	return 1;
> +
> + out_ilock_fail:
> +	if (lock_flags & XFS_IOLOCK_EXCL)
> +		up_write(&ip->i_iolock);
> +	else if (lock_flags & XFS_IOLOCK_SHARED)
> +		up_read(&ip->i_iolock);
> +	return 0;
>  }
>
>  /*
> @@ -697,8 +702,9 @@ xfs_ilock_nowait(xfs_inode_t *ip,
>   *
>   */
>  void
> -xfs_iunlock(xfs_inode_t *ip,
> -          uint lock_flags)
> +xfs_iunlock(
> +	xfs_inode_t	*ip,
> +	uint		lock_flags)
>  {
>  	/*
>  	 * You can't set both SHARED and EXCL for the same lock,
> @@ -711,35 +717,33 @@ xfs_iunlock(xfs_inode_t *ip,
>  	       (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
>  	ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_IUNLOCK_NONOTIFY |
>  			XFS_LOCK_DEP_MASK)) == 0);
> +	ASSERT(ip->i_lock_state & lock_flags);

This assertion will always fail when *_SHARED flags are present in
lock_flags.

>  	ASSERT(lock_flags != 0);
>
> -	if (lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) {
> -		ASSERT(!(lock_flags & XFS_IOLOCK_SHARED) ||
> -		       (ismrlocked(&ip->i_iolock, MR_ACCESS)));
> -		ASSERT(!(lock_flags & XFS_IOLOCK_EXCL) ||
> -		       (ismrlocked(&ip->i_iolock, MR_UPDATE)));
> -		mrunlock(&ip->i_iolock);
> -	}
> -
> -	if (lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) {
> -		ASSERT(!(lock_flags & XFS_ILOCK_SHARED) ||
> -		       (ismrlocked(&ip->i_lock, MR_ACCESS)));
> -		ASSERT(!(lock_flags & XFS_ILOCK_EXCL) ||
> -		       (ismrlocked(&ip->i_lock, MR_UPDATE)));
> -		mrunlock(&ip->i_lock);
> +	if (lock_flags & XFS_IOLOCK_EXCL)
> +		up_write(&ip->i_iolock);
> +	else if (lock_flags & XFS_IOLOCK_SHARED)
> +		up_read(&ip->i_iolock);
> +
> +	if (lock_flags & XFS_ILOCK_EXCL)
> +		up_write(&ip->i_lock);
> +	else if (lock_flags & (XFS_ILOCK_SHARED))
> +		up_read(&ip->i_lock);
>
> +	if ((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) &&
> +	    !(lock_flags & XFS_IUNLOCK_NONOTIFY) && ip->i_itemp) {
>  		/*
>  		 * Let the AIL know that this item has been unlocked in case
>  		 * it is in the AIL and anyone is waiting on it. Don't do
>  		 * this if the caller has asked us not to.
>  		 */
> -	if (!(lock_flags & XFS_IUNLOCK_NONOTIFY) &&
> -	    ip->i_itemp != NULL) {
> -		xfs_trans_unlocked_item(ip->i_mount,
> -					(xfs_log_item_t*)(ip->i_itemp));
> -	}
> +		xfs_trans_unlocked_item(ip->i_mount,
> +					(xfs_log_item_t *)ip->i_itemp);
>  	}
>  	xfs_ilock_trace(ip, 3, lock_flags, (inst_t *)__return_address);
> +#ifdef DEBUG
> +	ip->i_lock_state &= ~(lock_flags & (XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL));
> +#endif

This seems racy - another thread could have acquired the ilock/iolock
and set the flags before we get here and remove the flags. Unsetting
the flags before we release the lock would still be racy if both iolock
and ilock are released at the same time by different threads.

>  }
>
>  /*
> @@ -747,21 +751,42 @@ xfs_iunlock(xfs_inode_t *ip,
>   * if it is being demoted.
>   */
>  void
> -xfs_ilock_demote(xfs_inode_t *ip,
> -	uint lock_flags)
> +xfs_ilock_demote(
> +	xfs_inode_t	*ip,
> +	uint		lock_flags)
>  {
>  	ASSERT(lock_flags & (XFS_IOLOCK_EXCL|XFS_ILOCK_EXCL));
>  	ASSERT((lock_flags & ~(XFS_IOLOCK_EXCL|XFS_ILOCK_EXCL)) == 0);
> +	ASSERT(ip->i_lock_state & lock_flags);
>
> -	if (lock_flags & XFS_ILOCK_EXCL) {
> -		ASSERT(ismrlocked(&ip->i_lock, MR_UPDATE));
> -		mrdemote(&ip->i_lock);
> -	}
> -	if (lock_flags & XFS_IOLOCK_EXCL) {
> -		ASSERT(ismrlocked(&ip->i_iolock, MR_UPDATE));
> -		mrdemote(&ip->i_iolock);
> -	}
> +	if (lock_flags & XFS_ILOCK_EXCL)
> +		downgrade_write(&ip->i_lock);
> +	if (lock_flags & XFS_IOLOCK_EXCL)
> +		downgrade_write(&ip->i_iolock);
> +
> +#ifdef DEBUG
> +	ip->i_lock_state &= ~lock_flags;
> +#endif
> +}
> +
> +#ifdef DEBUG
> +/*
> + * Debug-only routine, without additional rw_semaphore APIs, we can
> + * now only answer requests regarding whether we hold the lock for write
> + * (reader state is outside our visibility, we only track writer state).
> + *
> + * Note: means !xfs_isilocked would give false positives, so don't do that.
> + */
> +int
> +xfs_isilocked(
> +	xfs_inode_t	*ip,
> +	uint		lock_flags)
> +{
> +	if (lock_flags & (XFS_ILOCK_EXCL|XFS_IOLOCK_EXCL))
> +		return (ip->i_lock_state & lock_flags);
> +	return 1;
>  }
> +#endif
>

Splitting i_lock_state into two separate fields - one for each lock -
and unsetting the fields prior to releasing/demoting the lock might be
enough to solve all these races.

From owner-xfs@oss.sgi.com Tue Mar 25 20:27:07 2008
Date: Wed, 26 Mar 2008 14:27:26 +1100
From: Timothy Shimmin
To: NeilBrown
Cc: "Josef 'Jeff' Sipek", "J. Bruce Fields", xfs@oss.sgi.com, Adam Schrotenboer, Jesper Juhl, Trond Myklebust, lkml@vger.kernel.org, linux-nfs@vger.kernel.org, Thomas Daniel, Frederic Revenu, Jeff Doan
Subject: Re: [opensuse] nfs_update_inode: inode X mode changed, Y to Z

Hi Neil,

NeilBrown wrote:
> On Wed, March 26, 2008 8:24 am, Josef 'Jeff' Sipek wrote:
>
>> Unless you specify the "ikeep" mount option, XFS will remove unused
>> inode clusters. The newly freed blocks can then be used to store data
>> or possibly a new inode cluster. If the blocks get reused for inodes,
>> you'll end up with inodes whose generation numbers regressed.
>> (inode number = f(block number))
>>
>> Using the "ikeep" mount option causes XFS to _never_ free empty inode
>> clusters. This means that if you create many files and then unlink
>> them, you'll end up with many unused inodes that are still allocated
>> (and taking up disk space) but free to be used by the next
>> creat(2)/mkdir(2)/etc..
>>
>> This "problem" is inherent to any file system which dynamically
>> allocates inodes.
>
> Yes, I understand all that.
>
> However you still need to do something about the generation number. It
> must be set to something.
>
> When you allocate an inode that doesn't currently exist on the device,
> you obviously cannot increment the old value and use that.
> However you can do a lot better than always using 0.

Yes, this is a known problem. We came across it in about August last
year I believe in the context of DMF as it wants to keep persistent
file handles with gen#s in them:

  SGI bug 969192: Default mount option "noikeep" makes the inode
  generation number non-persistent

I vaguely remember at the time that a number of different schemes were
tossed around but in the end we just turned off ikeep for DMAPI mounted
filesystems. I thought we had a bug open to do a real fix but can't see
it at the moment. Will look into it and discuss with our group.

Cheers,
--Tim

From owner-xfs@oss.sgi.com Tue Mar 25 20:37:31 2008
Date: Wed, 26 Mar 2008 14:37:38 +1100
From: David Chinner
To: "Josef 'Jeff' Sipek"
Cc: NeilBrown, "J. Bruce Fields", xfs@oss.sgi.com, Adam Schrotenboer, Jesper Juhl, Trond Myklebust, linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org, Thomas Daniel, Frederic Revenu, Jeff Doan
Subject: Re: [opensuse] nfs_update_inode: inode X mode changed, Y to Z

On Tue, Mar 25, 2008 at 06:13:21PM -0400, Josef 'Jeff' Sipek wrote:
> On Wed, Mar 26, 2008 at 08:38:22AM +1100, NeilBrown wrote:
> ...
> > However you still need to do something about the generation number. It
> > must be set to something.
.....
> > Even better would be to store that 'next generation number' in the
> > superblock so there would be even less risk of the 'random' generation
> > producing repeats.
>
> > This is what ext3 does.
> > It doesn't dynamically allocate inodes,
> > but it doesn't want to pay the cost of reading an old inode from
> > storage just to see what the generation number is. So it has
> > a number in the superblock which is incremented on each inode
> > allocation and is used as the generation number.
>
> Something tells me that the SGI folks might not be all too happy with
> the in-sb number...
.....
> Perhaps a per-ag variable would be better,

/me goes back to the bug from last year about stable inode/gen numbers
for a HSM.

dgc> Right, except the last thing we want is yet more global state
dgc> needing to be updated in inode allocation. The best way to do this
dgc> is a max generation number per AG (held in the AGI) so that it can
dgc> be updated at the same time inodes are freed and not cause
dgc> additional serialisation.

Which was soundly rejected by the HSM folk because it wraps at 4 billion
inode create/unlink cycles in an AG rather than per inode. The only
thing they were happy with was the old behaviour and so they now mount
their filesystems with ikeep. At that point the issue was dropped on the
floor; the NFS side of things apparently wasn't causing any problems so
we didn't consider it urgent to fix....

Given this state of affairs (i.e. HSM using ikeep), I guess we can do
anything we want for the noikeep case. I'll cook up a patch that does
something similar to ext3 generation numbers for the initial seeding....

> but I remember reading that parallelizing updates
> to some inode count variable (I forget which) in the superblock
> \cite{dchinner-ols2006} led to a rather big improvement.

That was for in-memory counters, not on disk, and the problem really was
free block counts rather than free inode counts. Yes, I converted the
inode counters at the same time, but that wasn't the limiting factor.
Updates to the on-disk superblock, OTOH, are a limiting factor, and that
is what the lazy superblock counter modifications solve....

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Tue Mar 25 22:02:08 2008
Date: Wed, 26 Mar 2008 16:02:08 +1100
From: David Chinner
To: David Chinner
Cc: "Josef 'Jeff' Sipek", NeilBrown, "J. Bruce Fields", xfs@oss.sgi.com, Adam Schrotenboer, Jesper Juhl, Trond Myklebust, linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org, Thomas Daniel, Frederic Revenu, Jeff Doan
Subject: Re: [opensuse] nfs_update_inode: inode X mode changed, Y to Z

On Wed, Mar 26, 2008 at 02:37:38PM +1100, David Chinner wrote:
> Given this state of affairs (i.e. HSM using ikeep), I guess we can do
> anything we want for the noikeep case. I'll cook up a patch that does
> something similar to ext3 generation numbers for the initial seeding....

Patch below for comments. It passes xfsqa, but there's no userspace
support for it yet. 2.6.26 is the likely target for this change.

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

---

Don't initialise new inode generation numbers to zero

When we allocate new inode chunks, we initialise the generation
numbers to zero. This works fine until we delete a chunk and then
reallocate it, resulting in the same inode numbers but with a reset
generation count.
This can result in inode/generation pairs of different inodes occurring relatively close together. Given that the inode/gen pair makes up the "unique" portion of an NFS filehandle on XFS, this can result in file handles cached on clients being seen on the wire from the server but referring to a different file. This causes .... issues for NFS clients. Hence we need a unique generation number initialisation for each inode to prevent reuse of a small portion of the generation number space. Make this initialiser per-allocation group so that it is not a single point of contention in the filesystem, and increment it on every allocation within an AG to reduce the chance that a generation number is reused for a given inode number if the inode chunk is deleted and reallocated immediately afterwards. It is safe to add the agi_newinogen field to the AGI without using a feature bit. If an older kernel is used, it simply will not update the field on allocation. If the kernel is updated and the field has garbage in it, then it's like having a random seed to the generation number.... Signed-off-by: Dave Chinner --- fs/xfs/xfs_ag.h | 4 +++- fs/xfs/xfs_ialloc.c | 30 ++++++++++++++++++++++-------- 2 files changed, 25 insertions(+), 9 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/xfs_ag.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_ag.h 2008-01-18 18:30:06.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/xfs_ag.h 2008-03-26 13:03:41.122918236 +1100 @@ -121,6 +121,7 @@ typedef struct xfs_agi { * still being referenced.
*/ __be32 agi_unlinked[XFS_AGI_UNLINKED_BUCKETS]; + __be32 agi_newinogen; /* inode cluster generation */ } xfs_agi_t; #define XFS_AGI_MAGICNUM 0x00000001 @@ -134,7 +135,8 @@ typedef struct xfs_agi { #define XFS_AGI_NEWINO 0x00000100 #define XFS_AGI_DIRINO 0x00000200 #define XFS_AGI_UNLINKED 0x00000400 -#define XFS_AGI_NUM_BITS 11 +#define XFS_AGI_NEWINOGEN 0x00000800 +#define XFS_AGI_NUM_BITS 12 #define XFS_AGI_ALL_BITS ((1 << XFS_AGI_NUM_BITS) - 1) /* disk block (xfs_daddr_t) in the AG */ Index: 2.6.x-xfs-new/fs/xfs/xfs_ialloc.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_ialloc.c 2008-03-25 15:41:27.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/xfs_ialloc.c 2008-03-26 14:29:47.998554368 +1100 @@ -309,6 +309,8 @@ xfs_ialloc_ag_alloc( free = XFS_MAKE_IPTR(args.mp, fbuf, i); free->di_core.di_magic = cpu_to_be16(XFS_DINODE_MAGIC); free->di_core.di_version = version; + free->di_core.di_gen = agi->agi_newinogen; + be32_add_cpu(&agi->agi_newinogen, 1); free->di_next_unlinked = cpu_to_be32(NULLAGINO); xfs_ialloc_log_di(tp, fbuf, i, XFS_DI_CORE_BITS | XFS_DI_NEXT_UNLINKED); @@ -347,7 +349,8 @@ xfs_ialloc_ag_alloc( * Log allocation group header fields */ xfs_ialloc_log_agi(tp, agbp, - XFS_AGI_COUNT | XFS_AGI_FREECOUNT | XFS_AGI_NEWINO); + XFS_AGI_COUNT | XFS_AGI_FREECOUNT | + XFS_AGI_NEWINO | XFS_AGI_NEWINOGEN); /* * Modify/log superblock values for inode count and inode free count. 
*/ @@ -896,11 +899,12 @@ nextag: ino = XFS_AGINO_TO_INO(mp, agno, rec.ir_startino + offset); XFS_INOBT_CLR_FREE(&rec, offset); rec.ir_freecount--; + be32_add_cpu(&agi->agi_newinogen, 1); if ((error = xfs_inobt_update(cur, rec.ir_startino, rec.ir_freecount, rec.ir_free))) goto error0; be32_add(&agi->agi_freecount, -1); - xfs_ialloc_log_agi(tp, agbp, XFS_AGI_FREECOUNT); + xfs_ialloc_log_agi(tp, agbp, XFS_AGI_FREECOUNT | XFS_AGI_NEWINOGEN); down_read(&mp->m_peraglock); mp->m_perag[tagno].pagi_freecount--; up_read(&mp->m_peraglock); @@ -1320,6 +1324,11 @@ xfs_ialloc_compute_maxlevels( /* * Log specified fields for the ag hdr (inode section) + * + * We don't log the unlinked inode fields through here; they + * get logged directly to the buffer. Hence we have a discontinuity + * in the fields we are logging and we need two calls to map all + * the dirtied parts of the agi.... */ void xfs_ialloc_log_agi( @@ -1342,22 +1351,27 @@ xfs_ialloc_log_agi( offsetof(xfs_agi_t, agi_newino), offsetof(xfs_agi_t, agi_dirino), offsetof(xfs_agi_t, agi_unlinked), + offsetof(xfs_agi_t, agi_newinogen), sizeof(xfs_agi_t) }; + int log_newino = fields & XFS_AGI_NEWINOGEN; + #ifdef DEBUG xfs_agi_t *agi; /* allocation group header */ agi = XFS_BUF_TO_AGI(bp); ASSERT(be32_to_cpu(agi->agi_magicnum) == XFS_AGI_MAGIC); #endif - /* - * Compute byte offsets for the first and last fields. - */ + fields &= ~XFS_AGI_NEWINOGEN; + + /* Compute byte offsets for the first and last fields. */ xfs_btree_offsets(fields, offsets, XFS_AGI_NUM_BITS, &first, &last); - /* - * Log the allocation group inode header buffer. 
- */ xfs_trans_log_buf(tp, bp, first, last); + if (log_newino) { + xfs_btree_offsets(XFS_AGI_NEWINOGEN, offsets, XFS_AGI_NUM_BITS, + &first, &last); + xfs_trans_log_buf(tp, bp, first, last); + } } /* From owner-xfs@oss.sgi.com Tue Mar 25 23:53:35 2008 Received: with ECARTIS (v1.0.0; list xfs); Tue, 25 Mar 2008 23:53:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2Q6rVKP020968 for ; Tue, 25 Mar 2008 23:53:33 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA16136; Wed, 26 Mar 2008 17:54:00 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 44625) id CD6F158C4C0F; Wed, 26 Mar 2008 17:54:00 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com, xfs@oss.sgi.com Subject: PARTIAL TAKE 976035 - Prevent xfs_bmap_check_leaf_extents() from referencing unmapped memory. Message-Id: <20080326065400.CD6F158C4C0F@chook.melbourne.sgi.com> Date: Wed, 26 Mar 2008 17:54:00 +1100 (EST) From: lachlan@sgi.com (Lachlan McIlroy) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15054 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs Prevent xfs_bmap_check_leaf_extents() from referencing unmapped memory. While investigating the extent corruption bug I ran into this bug in debug only code. xfs_bmap_check_leaf_extents() loops through the leaf blocks of the extent btree checking that every extent is entirely before the next extent. 
It also compares the last extent in the previous block to the first extent in the current block when the previous block has been released and potentially unmapped. So take a copy of the last extent instead of a pointer. Also move the last extent check out of the loop because we only need to do it once. Date: Wed Mar 26 17:53:07 AEDT 2008 Workarea: redback.melbourne.sgi.com:/home/lachlan/isms/2.6.x-xfs Inspected by: hch Author: lachlan The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:30718a fs/xfs/xfs_bmap.c - 1.387 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_bmap.c.diff?r1=text&tr1=1.387&r2=text&tr2=1.386&f=h - Prevent xfs_bmap_check_leaf_extents() from referencing unmapped memory. From owner-xfs@oss.sgi.com Wed Mar 26 00:53:52 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 26 Mar 2008 00:54:01 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.0 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_45 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2Q7rpbG031393 for ; Wed, 26 Mar 2008 00:53:52 -0700 X-ASG-Debug-ID: 1206518064-311d00a10000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from smtp7-g19.free.fr (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 9D9B58E12DC for ; Wed, 26 Mar 2008 00:54:25 -0700 (PDT) Received: from smtp7-g19.free.fr (smtp7-g19.free.fr [212.27.42.64]) by cuda.sgi.com with ESMTP id p0qyOZ5iyghj8XLB for ; Wed, 26 Mar 2008 00:54:25 -0700 (PDT) Received: from smtp7-g19.free.fr (localhost [127.0.0.1]) by smtp7-g19.free.fr (Postfix) with ESMTP id 521CE322800; Wed, 26 Mar 2008 08:54:24 +0100 (CET) Received: from galadriel.home (pla78-1-82-235-234-79.fbx.proxad.net [82.235.234.79]) by smtp7-g19.free.fr (Postfix) 
with ESMTP id 0DEBB32282B; Wed, 26 Mar 2008 08:54:24 +0100 (CET) Date: Wed, 26 Mar 2008 08:51:22 +0100 From: Emmanuel Florac To: David Chinner Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: Serious XFS crash Subject: Re: Serious XFS crash Message-ID: <20080326085122.2b60f7c7@galadriel.home> In-Reply-To: <20080325233611.GW103491721@sgi.com> References: <20080325185453.3a1957dd@galadriel.home> <20080325233611.GW103491721@sgi.com> Organization: Intellique X-Mailer: Claws Mail 2.9.1 (GTK+ 2.8.20; i686-pc-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 X-Barracuda-Connect: smtp7-g19.free.fr[212.27.42.64] X-Barracuda-Start-Time: 1206518065 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45932 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id m2Q7rqbG031396 X-archive-position: 15055 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: eflorac@intellique.com Precedence: bulk X-list: xfs On Wed, 26 Mar 2008 10:36:11 +1100 you wrote: > So none of the magic numbers for a directory block match. > And FWIW, I can't see any XFS magic number in that block. There weren't any directories. As a matter of fact this FS was used to dump (thru samba) big video files for later use. After the repair, there were several directories in lost+found, though... > Oh, that's toast.
Something has overwritten the start of the > filesystem and it does not appear to be other metadata. Well, not > exactly the start of the filesystem - the superblock is untouched. > That's weird. > What sector size is being used for the XFS filesystem? Well the /dev/md0 uses 4KB blocks as default IIRC. I'll have to check this. > If it's > not the same as the filesystem block size, then XFS can't have done > this itself because the offset that this garbage starts at would > not be block aligned..... Could it be an md bug then? I also had some IO errors on this setup lately due to a dead disk, but I've changed it and it looked OK since then, until yesterday. regards, Emmanuel. -- -------------------------------------------------- Emmanuel Florac www.intellique.com -------------------------------------------------- From owner-xfs@oss.sgi.com Wed Mar 26 04:51:25 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 26 Mar 2008 04:51:41 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.0 required=5.0 tests=BAYES_99 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2QBpMYu019190 for ; Wed, 26 Mar 2008 04:51:25 -0700 X-ASG-Debug-ID: 1206532317-5c7c03c30000-w1Z2WR X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from bay0-omc2-s5.bay0.hotmail.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 4AB111036FC3 for ; Wed, 26 Mar 2008 04:51:57 -0700 (PDT) Received: from bay0-omc2-s5.bay0.hotmail.com (bay0-omc2-s5.bay0.hotmail.com [65.54.246.141]) by cuda.sgi.com with ESMTP id XctPjyBWkkXCVu7e for ; Wed, 26 Mar 2008 04:51:57 -0700 (PDT) Received: from hotmail.com ([10.4.30.18]) by bay0-omc2-s5.bay0.hotmail.com with Microsoft SMTPSVC(6.0.3790.3959); Wed, 26 Mar 2008 04:51:26 -0700 Received: from mail pickup service by hotmail.com with
Microsoft SMTPSVC; Wed, 26 Mar 2008 04:51:25 -0700 Message-ID: Received: from 65.55.161.50 by BLU136-DAV8.phx.gbl with DAV; Wed, 26 Mar 2008 11:51:21 +0000 X-Originating-IP: [65.55.161.50] X-Originating-Email: [houston_687182696@hotmail.com] X-Sender: houston_687182696@hotmail.com thread-index: AciPN7YIX88kXoqyTeSyo4OI5xQ6pA== Thread-Topic: Look at their recent PR's - Red Branch Technologies From: "Houston_687182696" To: Cc: X-ASG-Orig-Subj: FWD: Look at their recent PR's - Red Branch Technologies Subject: ***** SUSPECTED SPAM ***** FWD: Look at their recent PR's - Red Branch Technologies Date: Wed, 26 Mar 2008 07:51:22 -0400 MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-15" Content-Transfer-Encoding: 7bit Content-Class: urn:content-classes:message Importance: normal Priority: normal X-MimeOLE: Produced By Microsoft MimeOLE V6.00.3790.3959 X-OriginalArrivalTime: 26 Mar 2008 11:51:25.0871 (UTC) FILETIME=[B85683F0:01C88F37] X-Barracuda-Connect: bay0-omc2-s5.bay0.hotmail.com[65.54.246.141] X-Barracuda-Start-Time: 1206532317 X-Barracuda-Bayes: INNOCENT GLOBAL 0.3716 1.0000 -0.0805 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: 2.08 X-Barracuda-Spam-Status: Yes, SCORE=2.08 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=FROM_ENDS_IN_NUMS, MSGID_FROM_MTA_HEADER X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45948 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 2.16 FROM_ENDS_IN_NUMS From: ends in many numbers 0.00 MSGID_FROM_MTA_HEADER Message-Id was added by a relay X-Priority: 5 (Lowest) X-MSMail-Priority: Low Importance: Low X-Barracuda-Spam-Flag: YES X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15056 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: 
Houston_687182696@hotmail.com Precedence: bulk X-list: xfs Introducing the company with a unique concept - Red Branch Technologies R B T I Red Branch Technologies, Inc. makes business travel easier, more secure and more responsive for both the hard-charging business traveler and the corporation by meeting travel needs at each point in the travel cycle. The company's innovative my/mTravel(r) and mTravel(r) products automate the business travel process from planning and booking to en route services and support, thru post travel reporting and unused ticket redemption. Red Branch's Magellan360 provides agency and net-delivered back office services to independent professional travel marketers. The company has real clients and income stream. Look at their recent PR's - Red Branch Technologies Teams With DeskPort Technologies to Introduce 60,000 Business Travelers to my/mTravel - Red Branch Technologies Gives Business Travelers Credit Card Fraud and Identity Theft Protection With Introduction of IdentiFlyer - Red Branch Technologies Wholly-Owned Subsidiary, Magellan360, Releases Unaudited Financial Performance for 2007; Revenue Up 12 Percent to $22 Million Do your due diligence on R B T I, and read their conferece call details you'll see why we think this is the undiscovered gem for double digit gains. 
From owner-xfs@oss.sgi.com Wed Mar 26 06:46:58 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 26 Mar 2008 06:47:08 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2QDkvuS011487 for ; Wed, 26 Mar 2008 06:46:58 -0700 X-ASG-Debug-ID: 1206539250-5fad00110000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.wp.pl (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 3C79EBF18C2 for ; Wed, 26 Mar 2008 06:47:31 -0700 (PDT) Received: from mx1.wp.pl (mx1.wp.pl [212.77.101.5]) by cuda.sgi.com with ESMTP id Z5jnpzhviz4nOVYY for ; Wed, 26 Mar 2008 06:47:31 -0700 (PDT) Received: (wp-smtpd smtp.wp.pl 19022 invoked from network); 26 Mar 2008 14:47:29 +0100 Received: from ip-83-238-22-2.netia.com.pl (HELO lapsg1.open-e.pl) (stf_xl@wp.pl@[83.238.22.2]) (envelope-sender ) by smtp.wp.pl (WP-SMTPD) with AES256-SHA encrypted SMTP for ; 26 Mar 2008 14:47:29 +0100 From: Stanislaw Gruszka To: David Chinner X-ASG-Orig-Subj: Re: BUG: xfs on linux lvm - lvconvert random hungs when doing i/o Subject: Re: BUG: xfs on linux lvm - lvconvert random hungs when doing i/o Date: Wed, 26 Mar 2008 15:02:19 +0100 User-Agent: KMail/1.9.7 Cc: xfs@oss.sgi.com References: <200803211520.16398.stf_xl@wp.pl> <20080324233952.GF103491721@sgi.com> <20080325020223.GB108924158@sgi.com> In-Reply-To: <20080325020223.GB108924158@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200803261502.19930.stf_xl@wp.pl> X-WP-AV: skaner antywirusowy poczty Wirtualnej Polski S. A.
X-WP-SPAM: NO 0000000 [ITNk] X-Barracuda-Connect: mx1.wp.pl[212.77.101.5] X-Barracuda-Start-Time: 1206539252 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45956 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15057 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stf_xl@wp.pl Precedence: bulk X-list: xfs On Tuesday 25 March 2008, David Chinner wrote: > That points to I/O not completing (not an XFS problem at all), or > the filesystem freeze is just taking a long time to run (as it has > to sync everything to disk). Given that this is a snapshot target, > writing new blocks will take quite some time. Is the system still > making writeback progress when in this state, or is it really hung? It is a real deadlock: operations which normally take a few minutes hang for hours until I restart the machine. This bug is very strange because I can't catch it with more verbose debug and tracing options enabled. I did more tests without the debugging options, and it looks like the version from SGI CVS (2.6.25-rc3) works well. I also tested with the two patches you suggested, applied (with some trouble) to 2.6.24.2, but things finally hung after 2 days - better than without the patches, as you expected. So it looks like things are fixed in the SGI CVS repository. I don't know whether these are XFS fixes or fixes in another subsystem (device-mapper?); anyway, I can't reproduce the bug on that version, which is good news, I think. > > Cheers, > > Dave.
Thanks Stanislaw Gruszka From owner-xfs@oss.sgi.com Wed Mar 26 07:44:44 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 26 Mar 2008 07:44:55 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.0 required=5.0 tests=BAYES_60 autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2QEigVg025114 for ; Wed, 26 Mar 2008 07:44:43 -0700 X-ASG-Debug-ID: 1206542715-5fab02a70000-w1Z2WR X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from hu-out-0506.google.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 6DF0CBEB619 for ; Wed, 26 Mar 2008 07:45:15 -0700 (PDT) Received: from hu-out-0506.google.com (hu-out-0506.google.com [72.14.214.239]) by cuda.sgi.com with ESMTP id OVT1QmGAnfgmTUjJ for ; Wed, 26 Mar 2008 07:45:15 -0700 (PDT) Received: by hu-out-0506.google.com with SMTP id 16so2075692hue.17 for ; Wed, 26 Mar 2008 07:45:14 -0700 (PDT) Received: by 10.86.51.2 with SMTP id y2mr6407431fgy.50.1206538945547; Wed, 26 Mar 2008 06:42:25 -0700 (PDT) Received: by 10.86.100.10 with HTTP; Wed, 26 Mar 2008 06:42:25 -0700 (PDT) Message-ID: Date: Wed, 26 Mar 2008 06:42:25 -0700 From: "DERRICK BIRNBERG" Reply-To: birnbergdownhill@yahoo.co.uk X-ASG-Orig-Subj: ATTN; Subject: ATTN; MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_14735_29858663.1206538945553" X-Barracuda-Connect: hu-out-0506.google.com[72.14.214.239] X-Barracuda-Start-Time: 1206542717 X-Barracuda-Bayes: INNOCENT GLOBAL 0.5649 1.0000 0.7500 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: 1.07 X-Barracuda-Spam-Status: No, SCORE=1.07 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=MISSING_HEADERS, TO_CC_NONE X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45958 Rule breakdown below pts rule 
name description ---- ---------------------- -------------------------------------------------- 0.19 MISSING_HEADERS Missing To: header 0.13 TO_CC_NONE No To: or Cc: header To: undisclosed-recipients:; X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15058 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: derbirn8@googlemail.com Precedence: bulk X-list: xfs ------=_Part_14735_29858663.1206538945553 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline MESSAGE CONCEALED IN ATTACHMENT -- MR.BIRNBERG ------=_Part_14735_29858663.1206538945553 Content-Type: application/msword; name=DOC..doc Content-Transfer-Encoding: base64 X-Attachment-Id: file0 Content-Disposition: attachment; filename=DOC..doc [base64-encoded application/msword attachment omitted]
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAASABIACgABAFsADwACAAAAAAAAACQAAEDx /wIAJAAAAAYATgBvAHIAbQBhAGwAAAACAAAABABtSAkEAAAAAAAAAAAAAAAA AAAAAAAAPABBQPL/oQA8AAAAFgBEAGUAZgBhAHUAbAB0ACAAUABhAHIAYQBn AHIAYQBwAGgAIABGAG8AbgB0AAAAAAAAAAAAAAAAADgAWQABAPIAOAAAAAwA RABvAGMAdQBtAGUAbgB0ACAATQBhAHAAAAAGAA8ALUQgAQgAT0oDAFFKAwAo AFVAogABASgAAAAJAEgAeQBwAGUAcgBsAGkAbgBrAAAABgA+KgFCKgI4AFZA ogARATgAAAARAEYAbwBsAGwAbwB3AGUAZABIAHkAcABlAHIAbABpAG4AawAA AAYAPioBQioMAAAAAKsBAADdBwAABQAAHgAAAAD/////BQAeHgAAAAD///// AAQAAN0LAAALAAAAAAQAADYJAADdCwAADAAAAA4AAAAABAAA3QsAAA0AAACL BgAAvAYAANkGAADdBwAAE1gU/xWAAAAAAN8HAAAHAAAAAACrAQAAuwEAAAoC AAAMAgAA+QUAAPsFAABvBgAAdAYAAN8HAAAHABoABwAaAAcAGgAHABoABwD/ /xQAAAAEAFUAcwBlAHIAOwBDADoAXABEAE8AQwBVAE0ARQB+ADEAXABVAHMA ZQByAFwATABPAEMAQQBMAFMAfgAxAFwAVABlAG0AcABcAEEAdQB0AG8AUgBl AGMAbwB2AGUAcgB5ACAAcwBhAHYAZQAgAG8AZgAgAEQATwBDAC4AYQBzAGQA BABVAHMAZQByACEAQwA6AFwARABPAEMAVQBNAEUAfgAxAFwAVQBzAGUAcgBc AEQAZQBzAGsAdABvAHAAXABEAE8AQwAuAC4AZABvAGMABABVAHMAZQByACEA QwA6AFwARABPAEMAVQBNAEUAfgAxAFwAVQBzAGUAcgBcAEQAZQBzAGsAdABv AHAAXABEAE8AQwAuAC4AZABvAGMABABVAHMAZQByACEAQwA6AFwARABPAEMA VQBNAEUAfgAxAFwAVQBzAGUAcgBcAEQAZQBzAGsAdABvAHAAXABEAE8AQwAu AC4AZABvAGMABABVAHMAZQByADsAQwA6AFwARABPAEMAVQBNAEUAfgAxAFwA VQBzAGUAcgBcAEwATwBDAEEATABTAH4AMQBcAFQAZQBtAHAAXABBAHUAdABv AFIAZQBjAG8AdgBlAHIAeQAgAHMAYQB2AGUAIABvAGYAIABEAE8AQwAuAGEA 
cwBkAAQAVQBzAGUAcgAvAEMAOgBcAEQAbwBjAHUAbQBlAG4AdABzACAAYQBu AGQAIABTAGUAdAB0AGkAbgBnAHMAXABVAHMAZQByAFwARABlAHMAawB0AG8A cABcAEQATwBDAC4ALgBkAG8AYwAEAFUAcwBlAHIALwBDADoAXABEAG8AYwB1 AG0AZQBuAHQAcwAgAGEAbgBkACAAUwBlAHQAdABpAG4AZwBzAFwAVQBzAGUA cgBcAEQAZQBzAGsAdABvAHAAXABEAE8AQwAuAC4AZABvAGMABABVAHMAZQBy AC8AQwA6AFwARABvAGMAdQBtAGUAbgB0AHMAIABhAG4AZAAgAFMAZQB0AHQA aQBuAGcAcwBcAFUAcwBlAHIAXABEAGUAcwBrAHQAbwBwAFwARABPAEMALgAu AGQAbwBjAAQAVQBzAGUAcgAvAEMAOgBcAEQAbwBjAHUAbQBlAG4AdABzACAA YQBuAGQAIABTAGUAdAB0AGkAbgBnAHMAXABVAHMAZQByAFwARABlAHMAawB0 AG8AcABcAEQATwBDAC4ALgBkAG8AYwAEAFUAcwBlAHIALwBDADoAXABEAG8A YwB1AG0AZQBuAHQAcwAgAGEAbgBkACAAUwBlAHQAdABpAG4AZwBzAFwAVQBz AGUAcgBcAEQAZQBzAGsAdABvAHAAXABEAE8AQwAuAC4AZABvAGMA/0ABgAEA HwIAAB8CAADgUyIBsgCyAB8CAAAAAAAAHwIAAAAAAAACEAAAAAAAAADdBwAA UAAACABAAAAEAAAARxaQAQAAAgIGAwUEBQIDBId6AAAAAACACAAAAAAAAAD/ AAAAAAAAAFQAaQBtAGUAcwAgAE4AZQB3ACAAUgBvAG0AYQBuAAAANRaQAQIA BQUBAgEHBgIFBwAAAAAAAAAQAAAAAAAAAAAAAACAAAAAAFMAeQBtAGIAbwBs AAAAMyaQAQAAAgsGBAICAgICBId6AAAAAACACAAAAAAAAAD/AAAAAAAAAEEA cgBpAGEAbAAAADUmkAEAAAILBgQDBQQEAgSHegBhAAAAgAgAAAAAAAAA/wAB AAAAAABUAGEAaABvAG0AYQAAACIABADxCIgYAADQAgAAaAEAAAAATznDplLA wyYAAAAAJgA2AAAAIwEAAHwGAAABAAMAAAAEAAMQDQAAAAAAAAAAAAAAAQAB AAAAAQAAAAAAAAAhAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAClBsAHtAC0AIAAMjAAAAAAAAAAAAAAAAAAAPYHAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAA//8SAAAAAAAAADQAIAAgACAAIAAg ACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIABNAEEA UgBLACAAQQBOAEQAUgBFAFcAUwAgAEEATgBEACAAUABBAFIAVABOAEUAUgBT 
ACAAIAAAAAAAAAAEAFUAcwBlAHIABABVAHMAZQByAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAEAAAZBAAAOgQAAHIEAACXBAAAmAQAAJsEAAAMBQAAqQUAALMFAAC1 BQAAIQYAADAGAABABgAAZwYAAK8IAACxCAAAiwoAAIwKAAC7CgAAvAoAAL0K AADZCgAA2goAAOwKAADcCwAA3QsAAMIeAAAQIAAAFCAAABAiAAAUIgAAAPr2 6+Xe19DKwsq+yr7KwsqzyqWzorMAyp7KysLKwgAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAHQioBQ0oQAAQwShAAABoCCIEDagAAAAAGCAFPSgIAUUoCAFUIAWgI AAAUA2oAAAAAT0oCAFFKAgBVCAFoCAAAB0NKGABoCAAOSCoBT0oCAFFKAgBo CAAAC09KAgBRSgIAaAgADTUIgUIqAUNKEABoCAANNQiBQioNQ0oQAGgIAA01 CIFCKgZDShAAaAgACkIqBkNKEABoCAAAFTUIgUIqBkNKEABPSgIAUUoCAGgI AAdCKglDSiAACjYIgUIqC0NKIAAfHAAfsNAvILDgPSGwCAcisAgHI5CgBSSQ oAUlsAAALAAJMAAfsNAvILDgPSGwCAcisAgHI5CgBSSQoAUlsAAABTAABPIA 0AID8gF4D28AZgAgAEwAYQB0AGUAIABFAG4AZwByAC4ARABpAGMAawAgAEIA dQByAG4AZQB0AHQAVQBuAHQAaQBsACAAaABpAHMAIABkAGUAYQB0AGgALAAg AEUAbgBnAHIALgBEAGkAYwBrACAAQgB1AHIAbgBlAHQAdAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABNAGEAcgBj AGgAIAAyADUAdABoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEgASAAoAAQBbAA8AAgAAAAAA AAAkAABA8f8CACQAAAAGAE4AbwByAG0AYQBsAAAAAgAAAAQAbUgJBAAAAAAA AAAAAAAAAAAAAAAAADwAQUDy/6EAPAAAABYARABlAGYAYQB1AGwAdAAgAFAA YQByAGEAZwByAGEAcABoACAARgBvAG4AdAAAAAAAAAAAAAAAAAA4AFkAAQDy ADgAAAAMAEQAbwBjAHUAbQBlAG4AdAAgAE0AYQBwAAAABgAPAC1EIAEIAE9K AwBRSgMAKABVQKIAAQEoAAAACQBIAHkAcABlAHIAbABpAG4AawAAAAYAPioB QioCOABWQKIAEQE4AAAAEQBGAG8AbABsAG8AdwBlAGQASAB5AHAAZQByAGwA aQBuAGsAAAAGAD4qAUIqDAAAAACrAQAA5QcAAAQAAB4AAAAA/////wQAHh4A AAAA/////wAAAACrAQAAvAEAAAoCAACJAgAAowMAANoDAADnBwAAngAAAAAA AAAAAAAAAIAAAACAmAAAAAAAAAAAAAAAAIAAAACAngAAAAAAAAAAAAAAAIAA AACAmAAAAAAAAAAAAAAAAIAAAACAngAAAAAAAAAAAAAAAIAAAACAmAAAAAAA AAAAAAAAAIAAAACAngAAAAAAAAAAAAAAAIAAAACAAAQAABQgAAALAAAAAAQA ADYJAADdCwAADAAAAA4AAAAABAAA3QsAAA0AAAD//wIAAAAHAFUAbgBrAG4A bwB3AG4ABABVAHMAZQByAJMGAADEBgAA4QYAAOUHAAATWBT/FYAAAAAAsQEA ALUBAADnBwAABwAEAAcAAAAAAKsBAAC7AQAACgIAAAwCAAABBgAAAwYAAHcG AAB8BgAA5wcAAAcABAAHABoABwAaAAcAGgAHAP//FAAAAAQAVQBzAGUAcgAh AEMAOgBcAEQATwBDAFUATQBFAH4AMQBcAFUAcwBlAHIAXABEAGUAcwBrAHQA bwBwAFwARABPAEMALgAuAGQAbwBjAAQAVQBzAGUAcgAhAEMAOgBcAEQATwBD AFUATQBFAH4AMQBcAFUAcwBlAHIAXABEAGUAcwBrAHQAbwBwAFwARABPAEMA LgAuAGQAbwBjAAQAVQBzAGUAcgA7AEMAOgBcAEQATwBDAFUATQBFAH4AMQBc AFUAcwBlAHIAXABMAE8AQwBBAEwAUwB+ADEAXABUAGUAbQBwAFwAQQB1AHQA bwBSAGUAYwBvAHYAZQByAHkAIABzAGEAdgBlACAAbwBmACAARABPAEMALgBh AHMAZAAEAFUAcwBlAHIALwBDADoAXABEAG8AYwB1AG0AZQBuAHQAcwAgAGEA bgBkACAAUwBlAHQAdABpAG4AZwBzAFwAVQBzAGUAcgBcAEQAZQBzAGsAdABv AHAAXABEAE8AQwAuAC4AZABvAGMABABVAHMAZQByAC8AQwA6AFwARABvAGMA 
dQBtAGUAbgB0AHMAIABhAG4AZAAgAFMAZQB0AHQAaQBuAGcAcwBcAFUAcwBl AHIAXABEAGUAcwBrAHQAbwBwAFwARABPAEMALgAuAGQAbwBjAAQAVQBzAGUA cgAvAEMAOgBcAEQAbwBjAHUAbQBlAG4AdABzACAAYQBuAGQAIABTAGUAdAB0 AGkAbgBnAHMAXABVAHMAZQByAFwARABlAHMAawB0AG8AcABcAEQATwBDAC4A LgBkAG8AYwAEAFUAcwBlAHIALwBDADoAXABEAG8AYwB1AG0AZQBuAHQAcwAg AGEAbgBkACAAUwBlAHQAdABpAG4AZwBzAFwAVQBzAGUAcgBcAEQAZQBzAGsA dABvAHAAXABEAE8AQwAuAC4AZABvAGMABABVAHMAZQByAC8AQwA6AFwARABv AGMAdQBtAGUAbgB0AHMAIABhAG4AZAAgAFMAZQB0AHQAaQBuAGcAcwBcAFUA cwBlAHIAXABEAGUAcwBrAHQAbwBwAFwARABPAEMALgAuAGQAbwBjAAQAVQBz AGUAcgAvAEMAOgBcAEQAbwBjAHUAbQBlAG4AdABzACAAYQBuAGQAIABTAGUA dAB0AGkAbgBnAHMAXABVAHMAZQByAFwARABlAHMAawB0AG8AcABcAEQATwBD AC4ALgBkAG8AYwAEAFUAcwBlAHIALwBDADoAXABEAG8AYwB1AG0AZQBuAHQA cwAgAGEAbgBkACAAUwBlAHQAdABpAG4AZwBzAFwAVQBzAGUAcgBcAEQAZQBz AGsAdABvAHAAXABEAE8AQwAuAC4AZABvAGMA/0ABgAEAswEAALMBAADIQUMB AQABALMBAAAAAAAAqwEAAAAAAAACiAAAAAAAAACrAQAAswEAALUBAAAKAgAA IgIAACMCAACjAwAAxAMAAMUDAADkBwAA5QcAAEAAAAgAQAAAQQAAIAAAAABB ABAgAAAAAEAAagsAQAAAQQBMHgAAAABBAHweAAAAAEAAPgwAQAAAQQB+HgAA AABBAMAeAAAAAEAAeg8AQAAAQAC4FwBAAAAEAAAARxaQAQAAAgIGAwUEBQID BId6AAAAAACACAAAAAAAAAD/AAAAAAAAAFQAaQBtAGUAcwAgAE4AZQB3ACAA UgBvAG0AYQBuAAAANRaQAQIABQUBAgEHBgIFBwAAAAAAAAAQAAAAAAAAAAAA AACAAAAAAFMAeQBtAGIAbwBsAAAAMyaQAQAAAgsGBAICAgICBId6AAAAAACA CAAAAAAAAAD/AAAAAAAAAEEAcgBpAGEAbAAAADUmkAEAAAILBgQDBQQEAgSH egBhAAAAgAgAAAAAAAAA/wABAAAAAABUAGEAaABvAG0AYQAAACIABADxCIgY AADQAgAAaAEAAAAATznDppnVw2YAAAAAKAA4AAAAJAEAAIIGAAABAAMAAAAE AAMQDQAAAAAAAAAAAAAAAQABAAAAAQAAAAAAAAAhAwAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAClBsAHtAC0AIAAMjAA 
AAAAAAAAAAAAAAAAAP0HAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAA//8S AAAAAAAAADQAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAA IAAgACAAIAAgACAAIABNAEEAUgBLACAAQQBOAEQAUgBFAFcAUwAgAEEATgBE ACAAUABBAFIAVABOAEUAUgBTACAAIAAAAAAAAAAEAFUAcwBlAHIABABVAHMA ZQByAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALAAAAAAAAAAsAAAAAAAAA CwAAAAAAAAAeEAAAAQAAADUAAAAgICAgICAgICAgICAgICAgICAgICAgICAg TUFSSyBBTkRSRVdTIEFORCBQQVJUTkVSUyAgAAwQAAACAAAAHgAAAAYAAABU aXRsZQADAAAAAQAAAAA8AQAABAAAAAAAAAAoAAAAAQAAAFIAAAACAAAAWgAA AAMAAACyAAAAAgAAAAIAAAAKAAAAX1BJRF9HVUlEAAMAAAAMAAAAX1BJRF9I TElOS1MAAgAAAOQEAABBAAAATgAAAHsAMwBFAEEARQBBADkAQQAxAC0AMQBG AEEAQQAtADQANQBEADIALQBCAEQAMAA0AC0ANABFADcAOQA2AEQA7KXBAEcA CQQAACwSvwAAAAAAABAAAAAAAAQAAN0LAAAOAGJqYmqO2Y7ZAAAAAAAAAAAA AAAAAAAAAAAACQQWAAAiAADsswEA7LMBAOUHAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAA//8PAAAAAAAAAAAA//8PAAAAAAAAAAAA//8PAAAAAAAA AAAAAAAAAAAAAABdAAAAAAAAAAAAABAAADABAAAwEQAAAAAAADARAAAAAAAA MBEAAAAAAAAwEQAAAAAAADARAAAkAAAAAAAAAAAAAABUEQAAAAAAAPIRAAAA AAAA8hEAAAAAAADyEQAAAAAAAPIRAAAMAAAA/hEAABQAAABUEQAAAAAAAGEX AADsAAAAPhIAABYAAABUEgAAAAAAAFQSAAAAAAAAVBIAAAAAAABUEgAAAAAA AFQSAAAAAAAAVBIAAAAAAABUEgAAAAAAAK4WAAACAAAAsBYAAAAAAACwFgAA AAAAALAWAAAAAAAAsBYAAAAAAACwFgAAAAAAALAWAAAkAAAATRgAAPQBAABB GgAAogAAANQWAACNAAAAAAAAAAAAAAAAAAAAAAAAADARAAAAAAAAVBIAAAAA AAAAAAAAAAAAAAAAAAAAAAAAVBIAAAAAAABUEgAAAAAAAFQSAAAAAAAAVBIA AAAAAADUFgAAAAAAAKQSAAAAAAAAMBEAAAAAAAAwEQAAAAAAAFQSAAAAAAAA AAAAAAAAAABUEgAAAAAAAB4SAAAgAAAApBIAAAAAAACkEgAAAAAAAKQSAAAA AAAAVBIAABYAAAAwEQAAAAAAAFQSAAAAAAAAMBEAAAAAAABUEgAAAAAAAK4W AAAAAAAAAAAAAAAAAAAAAAAAAAAAAFQRAAAAAAAAVBEAAAAAAAAwEQAAAAAA ADARAAAAAAAAMBEAAAAAAAAwEQAAAAAAAFQSAAAAAAAArhYAAAAAAACkEgAA CgQAAKQSAAAAAAAAAAAAAAAAAACuFgAAAAAAADARAAAAAAAAMBEAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAK4WAAAAAAAAVBIAAAAAAAASEgAADAAAANBqQfXKj8gBVBEA AJ4AAADyEQAAAAAAAGoSAAA6AAAArhYAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFIAbwBvAHQAIABFAG4AdABy AHkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAWAAUB//////////8DAAAABgkCAAAAAADAAAAAAAAARgAAAAAwUf1KVYDI AdBqQfXKj8gBSwAAAEAPAAAAAAAARABhAHQAYQAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAoAAgH/ ////BwAAAP////8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAQAAAAABAAAAAAAAAxAFQAYQBiAGwAZQAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgACAQEAAAAGAAAA /////wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABgAAADj GgAAAAAAAFcAbwByAGQARABvAGMAdQBtAGUAbgB0AAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAaAAIBAgAAAAUAAAD/////AAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKQAAAAAiAAAAAAAA //////////8DAAAABAAAAAUAAAAGAAAABwAAAAgAAAAJAAAACgAAAA8AAAD/ ////DQAAAA4AAAAgAAAADAAAABEAAAASAAAAEwAAABQAAAAVAAAAFgAAABcA AAD+////GQAAABoAAAAbAAAAHAAAAB0AAAAeAAAAIgAAAP////8hAAAA/v// /yMAAAAkAAAAJQAAACYAAAAnAAAAKAAAAP7///8qAAAAAgAAAEQAAAD9//// LgAAAP7///8tAAAA//////////////////////////////////////////// //////////////////////////////////////////////////////////// ///+/////v///0cAAABIAAAASQAAAC8AAAD/////RgAAAP////////////// //////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////// //////////////////////8ADBAAAAIAAAAeAAAABgAAAFRpdGxlAAMAAAAB AAAAADwBAAAEAAAAAAAAACgAAAABAAAAUgAAAAIAAABaAAAAAwAAALIAAAAC AAAAAgAAAAoAAABfUElEX0dVSUQAAwAAAAwAAABfUElEX0hMSU5LUwACAAAA 5AQAAEEAAABOAAAAewAzAEUAQQBFAEEAOQBBADEALQAxAEYAQQBBAC0ANAA1 AEQAMgAtAEIARAAwADQALQA0AEUANwA5ADYARAA3ADcANwBEAEUARAB9AAAA AABBAAAAgAAAAAYAAAADAAAAeAAPAAMAAAAAAAAAAwAAAAAAAAADAAAABQAA AB8AAAAkAAAAbQBhAGkAbAB0AG8AOgBiAGkAcgBuAGIAZQByAGcAZABvAHcA 
bgBoAGkAbABsAEAAeQBhAGgAbwBvAC4AYwBvAC4AdQBrAAAAHwAAAAEAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD+/wAA BQECAAAAAAAAAAAAAAAAAAAAAAABAAAA4IWf8vlPaBCrkQgAKyez2TAAAACE AQAAEAAAAAEAAACIAAAAAgAAAJAAAAADAAAA0AAAAAQAAADcAAAABQAAAOwA AAAHAAAA+AAAAAgAAAAIAQAACQAAABgBAAASAAAAJAEAAAoAAABAAQAADAAA AEwBAAANAAAAWAEAAA4AAABkAQAADwAAAGwBAAAQAAAAdAEAABMAAAB8AQAA AgAAAOQEAAAeAAAANQAAACAgICAgICAgICAgICAgICAgICAgICAgICBNQVJL IEFORFJFV1MgQU5EIFBBUlRORVJTICAAAE4AHgAAAAEAAAAAICAgHgAAAAUA AABVc2VyACAgIB4AAAABAAAAAHNlch4AAAAHAAAATm9ybWFsACAeAAAABQAA AFVzZXIAbAAgHgAAAAMAAAA0MAByHgAAABMAAABNaWNyb3NvZnQgV29yZCA4 LjAAIEAAAAAAULfSBwAAAEAAAAAAOmDdTIDIAUAAAAAAHuXmyo/IAQMAAAAB AAAAAwAAACQBAAADAAAAggYAAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAD//xIAAAAAAAAA NAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAg ACAAIAAgAE0AQQBSAEsAIABBAE4ARABSAEUAVwBTACAAQQBOAEQAIABQAEEA UgBUAE4ARQBSAFMAIAAgAAAAAAAAAAQAVQBzAGUAcgAEAFUAcwBlAHIAAAAA AAAAAAAAAAAAAAAAAAAAAAAA/v8AAAUBAgAAAAAAAAAAAAAAAAAAAAAAAgAA AALVzdWcLhsQk5cIACss+a5EAAAABdXN1ZwuGxCTlwgAKyz5rmABAAAcAQAA DAAAAAEAAABoAAAADwAAAHAAAAAFAAAAfAAAAAYAAACEAAAAEQAAAIwAAAAX AAAAlAAAAAsAAACcAAAAEAAAAKQAAAATAAAArAAAABYAAAC0AAAADQAAALwA AAAMAAAA/QAAAAIAAADkBAAAHgAAAAIAAAAgAG8AAwAAAA0AAAADAAAAAwAA AAMAAAD9BwAAAwAAALMNCAALAAAAAAAAAAsAAAAAAAAACwAAAAAAAAALAAAA AAAAAB4QAAABAAAANQAAACAgICAgICAgICAgICAgICAgICAgICAgICBNQVJL IEFORFJFV1MgQU5EIFBBUlRORVJTICBNAGEAcgBjAGgAIAAyADUAdABoAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAE0AYQBy AGMAaAAgADIANgB0AGgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAEgASAAoAAQBbAA8AAgAAAAAAAAAkAABA8f8CACQAAAAG AE4AbwByAG0AYQBsAAAAAgAAAAQAbUgJBAAAAAAAAAAAAAAAAAAAAAAAADwA QUDy/6EAPAAAABYARABlAGYAYQB1AGwAdAAgAFAAYQByAGEAZwByAGEAcABo ACAARgBvAG4AdAAAAAAAAAAAAAAAAAA4AFkAAQDyADgAAAAMAEQAbwBjAHUA bQBlAG4AdAAgAE0AYQBwAAAABgAPAC1EIAEIAE9KAwBRSgMAKABVQKIAAQEo AAAACQBIAHkAcABlAHIAbABpAG4AawAAAAYAPioBQioCOABWQKIAEQE4AAAA EQBGAG8AbABsAG8AdwBlAGQASAB5AHAAZQByAGwAaQBuAGsAAAAGAD4qAUIq DAAAAACrAQAA5QcAAAQAAB4AAAAA/////wQAHh4AAAAA/////wAAAACrAQAA vAEAAAoCAACJAgAAowMAANoDAADnBwAAngAAAAAAAAAAAAAAAIAAAACAmAAA AAAAAAAAAAAAAIAAAACAngAAAAAAAAAAAAAAAIAAAACAmAAAAAAAAAAAAAAA AIAAAACAngAAAAAAAAAAAAAAAIAAAACAmAAAAAAAAAAAAAAAAIAAAACAngAA AAAAAAAAAAAAAIAAAACAAAQAABQiAAALAAAAAAQAADYJAADdCwAADAAAAA4A AAAABAAA3QsAAA0AAAD//wIAAAAHAFUAbgBrAG4AbwB3AG4ABABVAHMAZQBy AJMGAADEBgAA4QYAAOUHAAATWBT/FYAAAAAAsQEAALUBAADnBwAABwAEAAcA 
AAAAAKsBAAC7AQAACgIAAAwCAAABBgAAAwYAAHcGAAB8BgAA5wcAAAcABAAH ABoABwAaAAcAGgAHAP//FAAAAAQAVQBzAGUAcgAhAEMAOgBcAEQATwBDAFUA TQBFAH4AMQBcAFUAcwBlAHIAXABEAGUAcwBrAHQAbwBwAFwARABPAEMALgAu AGQAbwBjAAQAVQBzAGUAcgA7AEMAOgBcAEQATwBDAFUATQBFAH4AMQBcAFUA cwBlAHIAXABMAE8AQwBBAEwAUwB+ADEAXABUAGUAbQBwAFwAQQB1AHQAbwBS AGUAYwBvAHYAZQByAHkAIABzAGEAdgBlACAAbwBmACAARABPAEMALgBhAHMA ZAAEAFUAcwBlAHIALwBDADoAXABEAG8AYwB1AG0AZQBuAHQAcwAgAGEAbgBk ACAAUwBlAHQAdABpAG4AZwBzAFwAVQBzAGUAcgBcAEQAZQBzAGsAdABvAHAA XABEAE8AQwAuAC4AZABvAGMABABVAHMAZQByAC8AQwA6AFwARABvAGMAdQBt AGUAbgB0AHMAIABhAG4AZAAgAFMAZQB0AHQAaQBuAGcAcwBcAFUAcwBlAHIA XABEAGUAcwBrAHQAbwBwAFwARABPAEMALgAuAGQAbwBjAAQAVQBzAGUAcgAv AEMAOgBcAEQAbwBjAHUAbQBlAG4AdABzACAAYQBuAGQAIABTAGUAdAB0AGkA bgBnAHMAXABVAHMAZQByAFwARABlAHMAawB0AG8AcABcAEQATwBDAC4ALgBk AG8AYwAEAFUAcwBlAHIALwBDADoAXABEAG8AYwB1AG0AZQBuAHQAcwAgAGEA bgBkACAAUwBlAHQAdABpAG4AZwBzAFwAVQBzAGUAcgBcAEQAZQBzAGsAdABv AHAAXABEAE8AQwAuAC4AZABvAGMABABVAHMAZQByAC8AQwA6AFwARABvAGMA dQBtAGUAbgB0AHMAIABhAG4AZAAgAFMAZQB0AHQAaQBuAGcAcwBcAFUAcwBl AHIAXABEAGUAcwBrAHQAbwBwAFwARABPAEMALgAuAGQAbwBjAAQAVQBzAGUA cgAvAEMAOgBcAEQAbwBjAHUAbQBlAG4AdABzACAAYQBuAGQAIABTAGUAdAB0 AGkAbgBnAHMAXABVAHMAZQByAFwARABlAHMAawB0AG8AcABcAEQATwBDAC4A LgBkAG8AYwAEAFUAcwBlAHIALwBDADoAXABEAG8AYwB1AG0AZQBuAHQAcwAg AGEAbgBkACAAUwBlAHQAdABpAG4AZwBzAFwAVQBzAGUAcgBcAEQAZQBzAGsA dABvAHAAXABEAE8AQwAuAC4AZABvAGMABABVAHMAZQByAC8AQwA6AFwARABv AGMAdQBtAGUAbgB0AHMAIABhAG4AZAAgAFMAZQB0AHQAaQBuAGcAcwBcAFUA cwBlAHIAXABEAGUAcwBrAHQAbwBwAFwARABPAEMALgAuAGQAbwBjAP9AAYAB ALMBAACzAQAAyEFDAQEAAQCzAQAAAAAAAKsBAAAAAAAAAogAAAAAAAAAqwEA ALMBAAC1AQAACgIAACICAAAjAgAAowMAAMQDAADFAwAA5AcAAOUHAABAAAAI AEAAAEEAACIAAAAAQQAQIgAAAABAAGoLAEAAAEEATB4AAAAAQQB8HgAAAABA AD4MAEAAAEEAfh4AAAAAQQDAHgAAAABAAHoPAEAAAEAAuBcAQAAABAAAAEcW kAEAAAICBgMFBAUCAwSHegAAAAAAgAgAAAAAAAAA/wAAAAAAAABUAGkAbQBl AHMAIABOAGUAdwAgAFIAbwBtAGEAbgAAADUWkAECAAUFAQIBBwYCBQcAAAAA AAAAEAAAAAAAAAAAAAAAgAAAAABTAHkAbQBiAG8AbAAAADMmkAEAAAILBgQC 
AgICAgSHegAAAAAAgAgAAAAAAAAA/wAAAAAAAABBAHIAaQBhAGwAAAA1JpAB AAACCwYEAwUEBAIEh3oAYQAAAIAIAAAAAAAAAP8AAQAAAAAAVABhAGgAbwBt AGEAAAAiAAQA8QiIGAAA0AIAAGgBAAAAAE85w6bM1cNmAAAAACkAOAAAACQB AACCBgAAAQADAAAABAADEA0AAAAAAAAAAAAAAAEAAQAAAAEAAAAAAAAAIQMA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA pQbAB7QAtACAADIwAAAAAAAAAAAAAAAAAAD9BwAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAIAAAAAAP//EgAAAAAAAAA0ACAAIAAgACAAIAAgACAAIAAgACAAIAAg ACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAATQBBAFIASwAgAEEATgBEAFIA RQBXAFMAIABBAE4ARAAgAFAAQQBSAFQATgBFAFIAUwAgACAAAAAAAAAABABV AHMAZQByAAQAVQBzAGUAcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA7KXBAEcACQQAADwQvwAAAAAAABAA AAAAAAQAAN0LAAAOAGJqYmqO2Y7ZAAAAAAAAAAAAAAAAAAAAAAAACQQWAAAk AADsswEA7LMBAOUHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA//8P AAAAAAAAAAAA//8PAAAAAAAAAAAA//8PAAAAAAAAAAAAAAAAAAAAAABdAAAA AAAAAAAAAAAAADABAAAwAQAAAAAAADABAAAAAAAAMAEAAAAAAAAwAQAAAAAA ADABAAAkAAAAAAAAAAAAAABUAQAAAAAAAPIBAAAAAAAA8gEAAAAAAADyAQAA AAAAAPIBAAAMAAAA/gEAABQAAABUAQAAAAAAAH0HAADsAAAAPgIAABYAAABU AgAAAAAAAFQCAAAAAAAAVAIAAAAAAABUAgAAAAAAAFQCAAAAAAAAVAIAAAAA AABUAgAAAAAAAMoGAAACAAAAzAYAAAAAAADMBgAAAAAAAMwGAAAAAAAAzAYA AAAAAADMBgAAAAAAAMwGAAAkAAAAaQgAAPQBAABdCgAAogAAAPAGAACNAAAA 
[base64-encoded Microsoft Word attachment omitted] ------=_Part_14735_29858663.1206538945553-- From owner-xfs@oss.sgi.com Wed Mar 26 11:42:52 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 26 Mar 2008 11:43:26 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=BAYES_00,J_CHICKENPOX_43, J_CHICKENPOX_45,J_CHICKENPOX_62,J_CHICKENPOX_75,SUBJECT_FUZZY_TION autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2QIgpWM014159 for ; Wed, 26 Mar 2008 11:42:52 -0700 X-ASG-Debug-ID: 1206557001-1f36003b0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.arkeia.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 3427A6F1634 for ; Wed, 26 Mar 2008 11:43:21 -0700 (PDT) Received: from mail.arkeia.com (mail2.fr.arkeia.com [62.240.235.218]) by cuda.sgi.com with ESMTP id oGim0sy9uHFIa57B for ; Wed, 26 Mar 2008 11:43:21 -0700 (PDT) Received: from localhost (localhost [127.0.0.1]) by mail.arkeia.com (Postfix) with ESMTP id 1CC6E4BCA52; Wed, 26 Mar 2008 19:42:49 +0100 (CET) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Scanned: by mail.arkeia.com Received: from mail.arkeia.com ([127.0.0.1]) by localhost (mail.arkeia.com [127.0.0.1]) (amavisd-new, port 42001) with ESMTP id cFH5OyMvtvOs; Wed, 26 Mar 2008 19:42:39 +0100 (CET) Received: from [10.1.14.5] (hubert.bat2.fr.arkeia.com [10.1.14.5]) by mail.arkeia.com (Postfix) with ESMTP id D0E924BCA50; Wed, 26 Mar 2008 19:42:39 +0100 (CET) Message-ID:
<47EA99AD.9060400@free.fr> Date: Wed, 26 Mar 2008 19:45:01 +0100 From: Hubert Verstraete User-Agent: Thunderbird 1.5.0.12 (X11/20071019) MIME-Version: 1.0 To: xfs@oss.sgi.com CC: linux-raid@vger.kernel.org X-ASG-Orig-Subj: [PATCH] XFS tuning on software RAID5 partitionable array; was: MDP major registration Subject: [PATCH] XFS tuning on software RAID5 partitionable array; was: MDP major registration References: <47D90614.9040206@free.fr> <18408.36753.223347.129420@notabene.brown> <47E92EE2.1080108@free.fr> <20080326065232.GA21970@percy.comedia.it> <47EA71BF.8050800@tmr.com> <47EA8CF4.7080201@free.fr> In-Reply-To: <47EA8CF4.7080201@free.fr> Content-Type: multipart/mixed; boundary="------------050802060205070700070009" X-Barracuda-Connect: mail2.fr.arkeia.com[62.240.235.218] X-Barracuda-Start-Time: 1206557005 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45974 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Status: Clean X-archive-position: 15059 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hubskml@free.fr Precedence: bulk X-list: xfs This is a multi-part message in MIME format. --------------050802060205070700070009 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Hi XFS list, please find attached a patch for libdisk/mkfs.xfs which tunes XFS on software partitionable RAID arrays, also called mdp. 
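[Editor's note: the mdp major is assigned dynamically, so tools must look it up at runtime by parsing /proc/devices, which is the approach the patch below takes via get_driver_block_major(). A minimal C sketch of that idea follows; the function name and the FILE*-based interface are this example's own invention, not the actual xfsprogs code. In real use the caller would pass fopen("/proc/devices", "r").]

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch (not the actual xfsprogs code): scan a
 * /proc/devices-style listing for a named block driver such as
 * "mdp" and return its dynamically assigned major number, or 0
 * if the driver is not registered. */
static int block_major_for(FILE *fp, const char *driver)
{
	char line[128];
	int in_block = 0;

	if (!fp)
		return 0;
	while (fgets(line, sizeof(line), fp)) {
		int major;
		char name[64];

		/* Majors listed before this header are char devices. */
		if (strncmp(line, "Block devices:", 14) == 0) {
			in_block = 1;
			continue;
		}
		if (in_block &&
		    sscanf(line, "%d %63s", &major, name) == 2 &&
		    strcmp(name, driver) == 0)
			return major;
	}
	return 0;
}
```

This mirrors why no static registration of mdp is needed: the major (253 in Luca's example) is discovered at runtime rather than hard-coded.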
Hubert Verstraete Hubert Verstraete wrote: > Bill Davidsen wrote: >> Luca Berra wrote: >>> On Tue, Mar 25, 2008 at 05:57:06PM +0100, Hubert Verstraete wrote: >>>> Neil Brown wrote: >>>>> On Thursday March 13, hubskml@free.fr wrote: >>>>>> Neil, >>>>>> >>>>>> What is the status of the major for the partitionable arrays ? >>>>> >>>>> automatically determined at runtime. >>>>> >>>>>> I see that it is 254, which is in the experimental section, >>>>>> according to the official Linux device list >>>>>> (http://www.lanana.org/docs/device-list/). >>>>>> Will there be an official registration ? >>>>> >>>>> No. Is there any need? >>>> >>>> I got this question in mind when I saw that mkfs.xfs source code was >>>> referring to the MD major to tune its parameters on an MD device, >>>> while it ignores MDP devices. >>>> If there were reasons to register MD, wouldn't they apply to MDP too ? >>> >>> i don't think so: >>> bluca@percy ~ $ grep mdp /proc/devices >>> 253 mdp >> >> Why is it important to have XFS tune its parameters for md and not for >> mdp? I don't understand your conclusion here, is tuning not needed for >> mdp, or so meaningless that it doesn't matter, or that XFS code reads >> /proc/devices, or ??? I note that device-mapper also has a dynamic >> major, what does XFS make of that? > > It reads from /proc/devices. > >> I don't know how much difference tuning makes, but if it's worth doing >> at all, it should be done for mdp as well, I would think. > > Same thought. I wrote the patch for mkfs.xfs but did not publish it for > two reasons: > 1) MD is registered but not MDP. Now I understand, it's not a problem, > we just need to read /proc/devices as device-mapper does. > 2) Tuning XFS for MDP can be achieved through the mkfs.xfs options. With > a few lines in shell, my XFS on MDP now has the same performance as XFS > on MD. 
> > Hubert > -- > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html --------------050802060205070700070009 Content-Type: text/x-patch; name="xfsprogs_mdp.patch" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="xfsprogs_mdp.patch" diff -u -r xfsprogs-2.8.11/libdisk/md.c xfsprogs-2.8.11-mdp/libdisk/md.c --- xfsprogs-2.8.11/libdisk/md.c 2006-06-26 07:01:15.000000000 +0200 +++ xfsprogs-2.8.11-mdp/libdisk/md.c 2008-03-26 20:12:38.000000000 +0100 @@ -24,8 +24,12 @@ dev_t dev) { if (major(dev) == MD_MAJOR) - return 1; - return get_driver_block_major("md", major(dev)); + return MD_IS_MD; + if (get_driver_block_major("md", major(dev))) + return MD_IS_MD; + if (get_driver_block_major("mdp", major(dev))) + return MD_IS_MDP; + return 0; } int @@ -37,12 +41,32 @@ int *sectalign, struct stat64 *sb) { - if (mnt_is_md_subvol(sb->st_rdev)) { + char *pc, *dfile2 = NULL; + int is_md; + + if ((is_md = mnt_is_md_subvol(sb->st_rdev))) { struct md_array_info md; int fd; + if (is_md == MD_IS_MDP) { + if (!(pc = strrchr(dfile, 'd')) + || !(pc = strchr(pc, 'p'))) { + fprintf(stderr, + _("Error getting MD array device from %s\n"), + dfile); + exit(1); + } + dfile2 = (char *) malloc(pc - dfile + 1); + if (dfile2 == NULL) { + fprintf(stderr, + _("Couldn't malloc device string\n")); + exit(1); + } + strncpy(dfile2, dfile, pc - dfile); + dfile2[pc - dfile] = '\0'; + } /* Open device */ - fd = open(dfile, O_RDONLY); + fd = open(dfile2 ? dfile2 : dfile, O_RDONLY); if (fd == -1) return 0; @@ -50,10 +74,11 @@ if (ioctl(fd, GET_ARRAY_INFO, &md)) { fprintf(stderr, _("Error getting MD array info from %s\n"), - dfile); + dfile2 ? dfile2 : dfile); exit(1); } close(fd); + if (dfile2) free(dfile2); /* * Ignore levels we don't want aligned (e.g.
linear) diff -u -r xfsprogs-2.8.11/libdisk/md.h xfsprogs-2.8.11-mdp/libdisk/md.h --- xfsprogs-2.8.11/libdisk/md.h 2006-06-26 07:01:15.000000000 +0200 +++ xfsprogs-2.8.11-mdp/libdisk/md.h 2008-03-26 20:12:10.000000000 +0100 @@ -20,6 +20,9 @@ #define MD_MAJOR 9 /* we also check at runtime */ #endif +#define MD_IS_MD 1 +#define MD_IS_MDP 2 + #define GET_ARRAY_INFO _IOR (MD_MAJOR, 0x11, struct md_array_info) #define MD_SB_CLEAN 0 --------------050802060205070700070009-- From owner-xfs@oss.sgi.com Wed Mar 26 13:13:54 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 26 Mar 2008 13:14:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.2 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2QKDlpS030396 for ; Wed, 26 Mar 2008 13:13:53 -0700 X-ASG-Debug-ID: 1206562461-5f3f035f0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from smtp7-g19.free.fr (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 37DBC103F796 for ; Wed, 26 Mar 2008 13:14:21 -0700 (PDT) Received: from smtp7-g19.free.fr (smtp7-g19.free.fr [212.27.42.64]) by cuda.sgi.com with ESMTP id hh2aK4iuoGH8lwt6 for ; Wed, 26 Mar 2008 13:14:21 -0700 (PDT) Received: from smtp7-g19.free.fr (localhost [127.0.0.1]) by smtp7-g19.free.fr (Postfix) with ESMTP id D2D7F32286D; Wed, 26 Mar 2008 21:13:49 +0100 (CET) Received: from galadriel.home (pla78-1-82-235-234-79.fbx.proxad.net [82.235.234.79]) by smtp7-g19.free.fr (Postfix) with ESMTP id B559D322886; Wed, 26 Mar 2008 21:13:48 +0100 (CET) Date: Wed, 26 Mar 2008 21:13:33 +0100 From: Emmanuel Florac To: David Chinner Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: Serious XFS crash Subject: Re: Serious XFS crash Message-ID: <20080326211333.21d2c7bb@galadriel.home> In-Reply-To: 
<20080325233611.GW103491721@sgi.com> References: <20080325185453.3a1957dd@galadriel.home> <20080325233611.GW103491721@sgi.com> Organization: Intellique X-Mailer: Claws Mail 2.9.1 (GTK+ 2.8.20; i686-pc-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 X-Barracuda-Connect: smtp7-g19.free.fr[212.27.42.64] X-Barracuda-Start-Time: 1206562462 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.45982 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id m2QKDspS030454 X-archive-position: 15060 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: eflorac@intellique.com Precedence: bulk X-list: xfs On Wed, 26 Mar 2008 10:36:11 +1100, you wrote: > Oh, that's toast. Something has overwritten the start of the > filesystem and it does not appear to be other metadata. Well, not > exactly the start of the filesystem - the superblock is untouched. Just to be sure... Is there the slightest chance that tweaking around the system could save a couple more files, or is it dead once and for all?
-- -------------------------------------------------- Emmanuel Florac www.intellique.com -------------------------------------------------- From owner-xfs@oss.sgi.com Wed Mar 26 22:22:32 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 26 Mar 2008 22:22:41 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2R5MOqS014756 for ; Wed, 26 Mar 2008 22:22:29 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA25160 for ; Thu, 27 Mar 2008 16:22:57 +1100 Date: Thu, 27 Mar 2008 16:25:33 +1100 To: "xfs@oss.sgi.com" Subject: REVIEW: improve/fix/update zeroing garbage in superblock sectors in xfs_repair From: "Barry Naujok" Organization: SGI Content-Type: text/plain; format=flowed; delsp=yes; charset=utf-8 MIME-Version: 1.0 Message-ID: User-Agent: Opera Mail/9.24 (Win32) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from Quoted-Printable to 8bit by oss.sgi.com id m2R5MWqS014760 X-archive-position: 15061 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs Running XFS QA with a standard HDD with the bad_features2 problem happening and doing "mkfs.xfs -l version=1", a problem was encountered where it went to zero the "bad" features2 bit. Why didn't this happen all the time? Upon investigation, I updated the behaviour of the "secondary_sb_wack" function.
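[Editor's note: the "zero garbage past the last valid field" behaviour discussed in this review can be sketched in standalone C as below. This is a simplified illustration only: the struct, version flags, and function names are invented for the example; the real xfs_repair code operates on xfs_sb_t using XFS's version macros and field offsets.]

```c
#include <stddef.h>
#include <string.h>

/* Toy stand-in for a versioned on-disk superblock; illustrative only. */
struct fake_sb {
	int version;
	int width;        /* always present */
	int dirblklog;    /* valid when V_DIRV2 is set */
	int logsectsize;  /* valid when V_SECTOR is set */
};

#define V_DIRV2  0x1
#define V_SECTOR 0x2

/* Number of leading bytes that are valid for this version: the
 * offset just past the newest field the version bits say exists. */
static size_t sb_valid_size(const struct fake_sb *sb)
{
	if (sb->version & V_SECTOR)
		return offsetof(struct fake_sb, logsectsize) +
		       sizeof(sb->logsectsize);
	if (sb->version & V_DIRV2)
		return offsetof(struct fake_sb, dirblklog) +
		       sizeof(sb->dirblklog);
	return offsetof(struct fake_sb, width) + sizeof(sb->width);
}

/* Scan the sector past the valid fields; if any nonzero byte is
 * found, zero the whole tail. Returns nonzero when garbage was
 * found (mirrors the do_zero flag in the patch). */
static int sb_zero_tail(char *sector, size_t sectsize,
			const struct fake_sb *sb)
{
	size_t size = sb_valid_size(sb);
	int had_garbage = 0;
	size_t i;

	for (i = size; i < sectsize; i++) {
		if (sector[i]) {
			had_garbage = 1;
			break;
		}
	}
	if (had_garbage)
		memset(sector + size, 0, sectsize - size);
	return had_garbage;
}
```

Using field offsets rather than a hard-coded length is what lets the same check keep working when the superblock gains new appended fields.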
Now it always zeroes any garbage found beyond the expected end of the xfs_sb_t structure in the first sector. Further down in discrete field checking, there were a lot of " if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) { " checks which seems superfluous for the tests and operations being performed. The following patch relies on the bad_features2 patch from the other week. -- --- ci.orig/xfsprogs/repair/agheader.c +++ ci/xfsprogs/repair/agheader.c @@ -213,82 +213,66 @@ compare_sb(xfs_mount_t *mp, xfs_sb_t *sb * * And everything else in the buffer beyond either sb_width, * sb_dirblklog (v2 dirs), or sb_logsectsize can be zeroed. - * - * Note: contrary to the name, this routine is called for all - * superblocks, not just the secondary superblocks. */ -int -secondary_sb_wack(xfs_mount_t *mp, xfs_buf_t *sbuf, xfs_sb_t *sb, - xfs_agnumber_t i) +static int +sb_whack( + xfs_mount_t *mp, + xfs_sb_t *sb, /* translated superblock */ + xfs_buf_t *sbuf, /* disk buffer with superblock */ + xfs_agnumber_t agno) { - int do_bzero; - int size; - char *ip; - int rval; - - rval = do_bzero = 0; + int rval = 0; + int do_zero = 0; + int size; + char *ip; /* - * mkfs's that stamped a feature bit besides the ones in the mask - * (e.g. were pre-6.5 beta) could leave garbage in the secondary - * superblock sectors. Anything stamping the shared fs bit or better - * into the secondaries is ok and should generate clean secondary - * superblock sectors. so only run the bzero check on the - * potentially garbaged secondaries. + * Check for garbage beyond the last field. + * Use field addresses instead so this code will still + * work against older filesystems when the superblock + * gets rev'ed again with new fields appended. */ - if (pre_65_beta || - (sb->sb_versionnum & XR_GOOD_SECSB_VNMASK) == 0 || - sb->sb_versionnum < XFS_SB_VERSION_4) { - /* - * Check for garbage beyond the last field. 
- * Use field addresses instead so this code will still - * work against older filesystems when the superblock - * gets rev'ed again with new fields appended. - */ - if (XFS_SB_VERSION_HASMOREBITS(sb)) - size = (__psint_t)&sb->sb_features2 - + sizeof(sb->sb_features2) - (__psint_t)sb; - else if (XFS_SB_VERSION_HASLOGV2(sb)) - size = (__psint_t)&sb->sb_logsunit + if (xfs_sb_version_hasmorebits(sb)) + size = (__psint_t)&sb->sb_bad_features2 + + sizeof(sb->sb_bad_features2) - (__psint_t)sb; + else if (xfs_sb_version_haslogv2(sb)) + size = (__psint_t)&sb->sb_logsunit + sizeof(sb->sb_logsunit) - (__psint_t)sb; - else if (XFS_SB_VERSION_HASSECTOR(sb)) - size = (__psint_t)&sb->sb_logsectsize + else if (xfs_sb_version_hassector(sb)) + size = (__psint_t)&sb->sb_logsectsize + sizeof(sb->sb_logsectsize) - (__psint_t)sb; - else if (XFS_SB_VERSION_HASDIRV2(sb)) - size = (__psint_t)&sb->sb_dirblklog + else if (xfs_sb_version_hasdirv2(sb)) + size = (__psint_t)&sb->sb_dirblklog + sizeof(sb->sb_dirblklog) - (__psint_t)sb; - else - size = (__psint_t)&sb->sb_width + else + size = (__psint_t)&sb->sb_width + sizeof(sb->sb_width) - (__psint_t)sb; - for (ip = (char *)((__psint_t)sb + size); - ip < (char *)((__psint_t)sb + mp->m_sb.sb_sectsize); - ip++) { - if (*ip) { - do_bzero = 1; - break; - } - } cd - if (do_bzero) { - rval |= XR_AG_SB_SEC; - if (!no_modify) { - do_warn( - _("zeroing unused portion of %s superblock (AG #%u)\n"), - !i ? _("primary") : _("secondary"), i); - bzero((void *)((__psint_t)sb + size), - mp->m_sb.sb_sectsize - size); - } else - do_warn( - _("would zero unused portion of %s superblock (AG #%u)\n"), - !i ? _("primary") : _("secondary"), i); + for (ip = (char *)XFS_BUF_PTR(sbuf) + size; + ip < (char *)XFS_BUF_PTR(sbuf) + mp->m_sb.sb_sectsize; ip++) { + if (*ip) { + do_zero = 1; + break; } } + if (do_zero) { + rval |= XR_AG_SB_SEC; + if (!no_modify) { + do_warn(_("zeroing unused portion of %s superblock " + "(AG #%u)\n"), !agno ? 
_("primary") : + _("secondary"), agno); + memset((char *)XFS_BUF_PTR(sbuf) + size, 0, + mp->m_sb.sb_sectsize - size); + } else + do_warn(_("would zero unused portion of %s superblock " + "(AG #%u)\n"), !agno ? _("primary") : + _("secondary"), agno); + } + /* - * now look for the fields we can manipulate directly. - * if we did a bzero and that bzero could have included - * the field in question, just silently reset it. otherwise, - * complain. + * now look for the fields we can manipulate directly that + * may not have been zeroed above. * * for now, just zero the flags field since only * the readonly flag is used @@ -296,11 +280,8 @@ secondary_sb_wack(xfs_mount_t *mp, xfs_b if (sb->sb_flags) { if (!no_modify) sb->sb_flags = 0; - if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) { - rval |= XR_AG_SB; - do_warn(_("bad flags field in superblock %d\n"), i); - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("bad flags field in superblock %d\n"), agno); } /* @@ -312,38 +293,24 @@ secondary_sb_wack(xfs_mount_t *mp, xfs_b if (sb->sb_inprogress == 1 && sb->sb_uquotino) { if (!no_modify) sb->sb_uquotino = 0; - if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) { - rval |= XR_AG_SB; - do_warn( - _("non-null user quota inode field in superblock %d\n"), - i); - - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("non-null user quota inode field in superblock %d\n"), + agno); } if (sb->sb_inprogress == 1 && sb->sb_gquotino) { if (!no_modify) sb->sb_gquotino = 0; - if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) { - rval |= XR_AG_SB; - do_warn( - _("non-null group quota inode field in superblock %d\n"), - i); - - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("non-null group quota inode field in superblock %d\n"), + agno); } if (sb->sb_inprogress == 1 && sb->sb_qflags) { if (!no_modify) sb->sb_qflags = 0; - if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) { - rval |= XR_AG_SB; - do_warn(_("non-null 
quota flags in superblock %d\n"), - i); - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("non-null quota flags in superblock %d\n"), agno); } /* @@ -352,44 +319,31 @@ secondary_sb_wack(xfs_mount_t *mp, xfs_b * written at mkfs time (and the corresponding sb version bits * are set). */ - if (!XFS_SB_VERSION_HASSHARED(sb) && sb->sb_shared_vn != 0) { + if (!xfs_sb_version_hasshared(sb) && sb->sb_shared_vn) { if (!no_modify) sb->sb_shared_vn = 0; - if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) { - rval |= XR_AG_SB; - do_warn( - _("bad shared version number in superblock %d\n"), - i); - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("bad shared version number in superblock %d\n"), + agno); } - if (!XFS_SB_VERSION_HASALIGN(sb) && sb->sb_inoalignmt != 0) { + if (!xfs_sb_version_hasalign(sb) && sb->sb_inoalignmt) { if (!no_modify) sb->sb_inoalignmt = 0; - if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) { - rval |= XR_AG_SB; - do_warn( - _("bad inode alignment field in superblock %d\n"), - i); - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("bad inode alignment field in superblock %d\n"), + agno); } - if (!XFS_SB_VERSION_HASDALIGN(sb) && - (sb->sb_unit != 0 || sb->sb_width != 0)) { + if (!xfs_sb_version_hasdalign(sb) && (sb->sb_unit || sb->sb_width)) { if (!no_modify) sb->sb_unit = sb->sb_width = 0; - if (sb->sb_versionnum & XR_GOOD_SECSB_VNMASK || !do_bzero) { - rval |= XR_AG_SB; - do_warn( - _("bad stripe unit/width fields in superblock %d\n"), - i); - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("bad stripe unit/width fields in superblock %d\n"), + agno); } - if (!XFS_SB_VERSION_HASSECTOR(sb) && + if (!xfs_sb_version_hassector(sb) && (sb->sb_sectsize != BBSIZE || sb->sb_sectlog != BBSHIFT || sb->sb_logsectsize != 0 || sb->sb_logsectlog != 0)) { if (!no_modify) { @@ -398,13 +352,11 @@ secondary_sb_wack(xfs_mount_t *mp, xfs_b sb->sb_logsectsize = 0; sb->sb_logsectlog = 0; } - if 
(sb->sb_versionnum & XR_GOOD_SECSB_VNMASK || !do_bzero) { + if (!do_zero) { rval |= XR_AG_SB; - do_warn( - _("bad log/data device sector size fields in superblock %d\n"), - i); - } else - rval |= XR_AG_SB_SEC; + do_warn(_("bad log/data device sector size fields in " + "superblock %d\n"), agno); + } } return(rval); @@ -463,7 +415,7 @@ verify_set_agheader(xfs_mount_t *mp, xfs rval |= XR_AG_SB; } - rval |= secondary_sb_wack(mp, sbuf, sb, i); + rval |= sb_whack(mp, sb, sbuf, i); rval |= verify_set_agf(mp, agf, i); rval |= verify_set_agi(mp, agi, i); From owner-xfs@oss.sgi.com Wed Mar 26 22:25:19 2008 Received: with ECARTIS (v1.0.0; list xfs); Wed, 26 Mar 2008 22:25:31 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2R5PD39015203 for ; Wed, 26 Mar 2008 22:25:16 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA25213; Thu, 27 Mar 2008 16:25:42 +1100 Date: Thu, 27 Mar 2008 16:28:19 +1100 To: "Barry Naujok" , "xfs@oss.sgi.com" Subject: Re: REVIEW: improve/fix/update zeroing garbage in superblock sectors in xfs_repair From: "Barry Naujok" Organization: SGI Content-Type: multipart/mixed; boundary=----------EQ54Je2klZrw6sqzQDB6C1 MIME-Version: 1.0 References: Message-ID: In-Reply-To: User-Agent: Opera Mail/9.24 (Win32) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15062 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs ------------EQ54Je2klZrw6sqzQDB6C1 Content-Type: 
text/plain; format=flowed; delsp=yes; charset=utf-8 Content-Transfer-Encoding: Quoted-Printable Hmm... a "cd" line appeared in the patch, delete it if trying to apply the patch. I've attached it to be sure :) On Thu, 27 Mar 2008 16:25:33 +1100, Barry Naujok wrote: > Running XFS QA with a standard HDD with the bad_features2 problem > happening and doing "mkfs.xfs -l version=1", a problem was encountered > where it went to zero the "bad" features2 bit. > > Why didn't this happen all the time? > > Upon investigation, I updated the behaviour of the "secondary_sb_wack" > function. Now it always zeroes any garbage found beyond the expected > end of the xfs_sb_t structure in the first sector. > > Further down in discrete field checking, there were a lot of > " if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) { " > checks which seem superfluous for the tests and operations > being performed. > > The following patch relies on the bad_features2 patch from the > other week. > > -- ------------EQ54Je2klZrw6sqzQDB6C1 Content-Disposition: attachment; filename=update_sb_whack.patch Content-Type: text/x-patch; name=update_sb_whack.patch Content-Transfer-Encoding: Quoted-Printable --- ci.orig/xfsprogs/repair/agheader.c +++ ci/xfsprogs/repair/agheader.c @@ -213,82 +213,66 @@ compare_sb(xfs_mount_t *mp, xfs_sb_t *sb * * And everything else in the buffer beyond either sb_width, * sb_dirblklog (v2 dirs), or sb_logsectsize can be zeroed. - * - * Note: contrary to the name, this routine is called for all - * superblocks, not just the secondary superblocks.
*/ -int -secondary_sb_wack(xfs_mount_t *mp, xfs_buf_t *sbuf, xfs_sb_t *sb, - xfs_agnumber_t i) +static int +sb_whack( + xfs_mount_t *mp, + xfs_sb_t *sb, /* translated superblock */ + xfs_buf_t *sbuf, /* disk buffer with superblock */ + xfs_agnumber_t agno) { - int do_bzero; - int size; - char *ip; - int rval; - - rval = do_bzero = 0; + int rval = 0; + int do_zero = 0; + int size; + char *ip; /* - * mkfs's that stamped a feature bit besides the ones in the mask - * (e.g. were pre-6.5 beta) could leave garbage in the secondary - * superblock sectors. Anything stamping the shared fs bit or better - * into the secondaries is ok and should generate clean secondary - * superblock sectors. so only run the bzero check on the - * potentially garbaged secondaries. + * Check for garbage beyond the last field. + * Use field addresses instead so this code will still + * work against older filesystems when the superblock + * gets rev'ed again with new fields appended. */ - if (pre_65_beta || - (sb->sb_versionnum & XR_GOOD_SECSB_VNMASK) == 0 || - sb->sb_versionnum < XFS_SB_VERSION_4) { - /* - * Check for garbage beyond the last field. - * Use field addresses instead so this code will still - * work against older filesystems when the superblock - * gets rev'ed again with new fields appended.
- */ - if (XFS_SB_VERSION_HASMOREBITS(sb)) - size = (__psint_t)&sb->sb_features2 - + sizeof(sb->sb_features2) - (__psint_t)sb; - else if (XFS_SB_VERSION_HASLOGV2(sb)) - size = (__psint_t)&sb->sb_logsunit + if (xfs_sb_version_hasmorebits(sb)) + size = (__psint_t)&sb->sb_bad_features2 + + sizeof(sb->sb_bad_features2) - (__psint_t)sb; + else if (xfs_sb_version_haslogv2(sb)) + size = (__psint_t)&sb->sb_logsunit + sizeof(sb->sb_logsunit) - (__psint_t)sb; - else if (XFS_SB_VERSION_HASSECTOR(sb)) - size = (__psint_t)&sb->sb_logsectsize + else if (xfs_sb_version_hassector(sb)) + size = (__psint_t)&sb->sb_logsectsize + sizeof(sb->sb_logsectsize) - (__psint_t)sb; - else if (XFS_SB_VERSION_HASDIRV2(sb)) - size = (__psint_t)&sb->sb_dirblklog + else if (xfs_sb_version_hasdirv2(sb)) + size = (__psint_t)&sb->sb_dirblklog + sizeof(sb->sb_dirblklog) - (__psint_t)sb; - else - size = (__psint_t)&sb->sb_width + else + size = (__psint_t)&sb->sb_width + sizeof(sb->sb_width) - (__psint_t)sb; - for (ip = (char *)((__psint_t)sb + size); - ip < (char *)((__psint_t)sb + mp->m_sb.sb_sectsize); - ip++) { - if (*ip) { - do_bzero = 1; - break; - } - } - if (do_bzero) { - rval |= XR_AG_SB_SEC; - if (!no_modify) { - do_warn( - _("zeroing unused portion of %s superblock (AG #%u)\n"), - !i ? _("primary") : _("secondary"), i); - bzero((void *)((__psint_t)sb + size), - mp->m_sb.sb_sectsize - size); - } else - do_warn( - _("would zero unused portion of %s superblock (AG #%u)\n"), - !i ? _("primary") : _("secondary"), i); + for (ip = XFS_BUF_PTR(sbuf) + size; + ip < XFS_BUF_PTR(sbuf) + mp->m_sb.sb_sectsize; ip++) { + if (*ip) { + do_zero = 1; + break; } } + if (do_zero) { + rval |= XR_AG_SB_SEC; + if (!no_modify) { + do_warn(_("zeroing unused portion of %s superblock " + "(AG #%u)\n"), !agno ?
_("primary") : + _("secondary"), agno); + memset(XFS_BUF_PTR(sbuf) + size, 0, + mp->m_sb.sb_sectsize - size); + } else + do_warn(_("would zero unused portion of %s superblock " + "(AG #%u)\n"), !agno ? _("primary") : + _("secondary"), agno); + } + /* - * now look for the fields we can manipulate directly. - * if we did a bzero and that bzero could have included - * the field in question, just silently reset it. otherwise, - * complain. + * now look for the fields we can manipulate directly that + * may not have been zeroed above. * * for now, just zero the flags field since only * the readonly flag is used @@ -296,11 +280,8 @@ secondary_sb_wack(xfs_mount_t *mp, xfs_b if (sb->sb_flags) { if (!no_modify) sb->sb_flags = 0; - if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) { - rval |= XR_AG_SB; - do_warn(_("bad flags field in superblock %d\n"), i); - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("bad flags field in superblock %d\n"), agno); } /* @@ -312,38 +293,24 @@ secondary_sb_wack(xfs_mount_t *mp, xfs_b if (sb->sb_inprogress == 1 && sb->sb_uquotino) { if (!no_modify) sb->sb_uquotino = 0; - if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) { - rval |= XR_AG_SB; - do_warn( - _("non-null user quota inode field in superblock %d\n"), - i); - - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("non-null user quota inode field in superblock %d\n"), + agno); } if (sb->sb_inprogress == 1 && sb->sb_gquotino) { if (!no_modify) sb->sb_gquotino = 0; - if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) { - rval |= XR_AG_SB; - do_warn( - _("non-null group quota inode field in superblock %d\n"), - i); - - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("non-null group quota inode field in superblock %d\n"), + agno); } if (sb->sb_inprogress == 1 && sb->sb_qflags) { if (!no_modify) sb->sb_qflags = 0; - if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) {
- rval |= XR_AG_SB; - do_warn(_("non-null quota flags in superblock %d\n"), - i); - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("non-null quota flags in superblock %d\n"), agno); } /* @@ -352,44 +319,31 @@ secondary_sb_wack(xfs_mount_t *mp, xfs_b * written at mkfs time (and the corresponding sb version bits * are set). */ - if (!XFS_SB_VERSION_HASSHARED(sb) && sb->sb_shared_vn != 0) { + if (!xfs_sb_version_hasshared(sb) && sb->sb_shared_vn) { if (!no_modify) sb->sb_shared_vn = 0; - if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) { - rval |= XR_AG_SB; - do_warn( - _("bad shared version number in superblock %d\n"), - i); - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("bad shared version number in superblock %d\n"), + agno); } - if (!XFS_SB_VERSION_HASALIGN(sb) && sb->sb_inoalignmt != 0) { + if (!xfs_sb_version_hasalign(sb) && sb->sb_inoalignmt) { if (!no_modify) sb->sb_inoalignmt = 0; - if (sb->sb_versionnum & XR_PART_SECSB_VNMASK || !do_bzero) { - rval |= XR_AG_SB; - do_warn( - _("bad inode alignment field in superblock %d\n"), - i); - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("bad inode alignment field in superblock %d\n"), + agno); } - if (!XFS_SB_VERSION_HASDALIGN(sb) && - (sb->sb_unit != 0 || sb->sb_width != 0)) { + if (!xfs_sb_version_hasdalign(sb) && (sb->sb_unit || sb->sb_width)) { if (!no_modify) sb->sb_unit = sb->sb_width = 0; - if (sb->sb_versionnum & XR_GOOD_SECSB_VNMASK || !do_bzero) { - rval |= XR_AG_SB; - do_warn( - _("bad stripe unit/width fields in superblock %d\n"), - i); - } else - rval |= XR_AG_SB_SEC; + rval |= XR_AG_SB; + do_warn(_("bad stripe unit/width fields in superblock %d\n"), + agno); } - if (!XFS_SB_VERSION_HASSECTOR(sb) && + if (!xfs_sb_version_hassector(sb) && (sb->sb_sectsize != BBSIZE || sb->sb_sectlog != BBSHIFT || sb->sb_logsectsize != 0 || sb->sb_logsectlog != 0)) { if (!no_modify) { @@ -398,13
+352,11 @@ secondary_sb_wack(xfs_mount_t *mp, xfs_b sb->sb_logsectsize = 0; sb->sb_logsectlog = 0; } - if (sb->sb_versionnum & XR_GOOD_SECSB_VNMASK || !do_bzero) { + if (!do_zero) { rval |= XR_AG_SB; - do_warn( - _("bad log/data device sector size fields in superblock %d\n"), - i); - } else - rval |= XR_AG_SB_SEC; + do_warn(_("bad log/data device sector size fields in " + "superblock %d\n"), agno); + } } return(rval); @@ -463,7 +415,7 @@ verify_set_agheader(xfs_mount_t *mp, xfs rval |= XR_AG_SB; } - rval |= secondary_sb_wack(mp, sbuf, sb, i); + rval |= sb_whack(mp, sb, sbuf, i); rval |= verify_set_agf(mp, agf, i); rval |= verify_set_agi(mp, agi, i); ------------EQ54Je2klZrw6sqzQDB6C1-- From owner-xfs@oss.sgi.com Thu Mar 27 01:21:19 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 27 Mar 2008 01:21:33 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2R8LGIL002361 for ; Thu, 27 Mar 2008 01:21:19 -0700 X-ASG-Debug-ID: 1206606109-22b102650000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.g-house.de (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 7BA611043D3C; Thu, 27 Mar 2008 01:21:49 -0700 (PDT) Received: from mail.g-house.de (ns2.g-housing.de [81.169.133.75]) by cuda.sgi.com with ESMTP id 5IDXjPEhpnO6qQSV; Thu, 27 Mar 2008 01:21:49 -0700 (PDT) Received: from [77.47.55.199] (helo=pf1.housecafe.de) by mail.g-house.de with esmtpa (Exim 4.63) (envelope-from ) id 1JenMi-0005b9-RY; Thu, 27 Mar 2008 09:21:40 +0100 Date: Thu, 27 Mar 2008 09:21:40 +0100 (CET) From: Christian Kujau X-X-Sender: evil@sheep.housecafe.de To: Alasdair G Kergon cc: Chr , Milan Broz , David Chinner , LKML , xfs@oss.sgi.com,
dm-devel@redhat.com, dm-crypt@saout.de, Herbert Xu , Ritesh Raj Sarraf X-ASG-Orig-Subj: Re: [dm-crypt] INFO: task mount:11202 blocked for more than 120 seconds Subject: Re: [dm-crypt] INFO: task mount:11202 blocked for more than 120 seconds In-Reply-To: Message-ID: References: <200803150108.04008.chunkeey@web.de> <200803151432.11125.chunkeey@web.de> <200803152234.53199.chunkeey@web.de> <20080317173609.GD29322@agk.fab.redhat.com> User-Agent: Alpine 1.10 (DEB 962 2008-03-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Barracuda-Connect: ns2.g-housing.de[81.169.133.75] X-Barracuda-Start-Time: 1206606110 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.46029 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15063 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lists@nerdbynature.de Precedence: bulk X-list: xfs On Sat, 22 Mar 2008, Christian Kujau wrote: > On Mon, 17 Mar 2008, Alasdair G Kergon wrote: >> From: Milan Broz >> >> Fix regression in dm-crypt introduced in commit >> 3a7f6c990ad04e6f576a159876c602d14d6f7fef >> (dm crypt: use async crypto). I noticed that this patch[0] hasn't made it into mainline yet, will this be included or is this fixed by something else? Thanks, Christian. 
[0] http://lkml.org/lkml/2008/3/17/214 -- BOFH excuse #416: We're out of slots on the server From owner-xfs@oss.sgi.com Thu Mar 27 09:31:46 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 27 Mar 2008 09:31:57 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-0.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2RGViZ4022864 for ; Thu, 27 Mar 2008 09:31:46 -0700 X-ASG-Debug-ID: 1206635537-27f203ad0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ext.agami.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 2CF74104824F for ; Thu, 27 Mar 2008 09:32:18 -0700 (PDT) Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by cuda.sgi.com with ESMTP id cUtfWOIddgVbMZEK for ; Thu, 27 Mar 2008 09:32:18 -0700 (PDT) Received: from agami.com (mail [192.168.168.5]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id m2RGUn3M022386 for ; Thu, 27 Mar 2008 09:31:53 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id m2R0u4jc026339 for ; Wed, 26 Mar 2008 17:56:04 -0700 Received: from [10.123.4.142] ([10.123.4.142]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Wed, 26 Mar 2008 17:56:27 -0700 Message-ID: <47EAF0BA.4040909@agami.com> Date: Wed, 26 Mar 2008 17:56:26 -0700 From: Michael Nishimoto User-Agent: Mail/News 1.5.0.4 (X11/20060629) MIME-Version: 1.0 To: XFS Mailing List X-ASG-Orig-Subj: accounting disk quotas in space reservation Subject: accounting disk quotas in space reservation Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 27 Mar 2008 00:56:27.0154 (UTC) FILETIME=[62E3EF20:01C88FA5] X-Scanned-By: MIMEDefang 2.58 on 192.168.168.13 
X-Barracuda-Connect: 64.221.212.177.ptr.us.xo.net[64.221.212.177] X-Barracuda-Start-Time: 1206635539 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.46062 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15064 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: miken@agami.com Precedence: bulk X-list: xfs Hi, When allocating a space reservation at xfs_trans_reserve time, it doesn't look like xfs takes into account new disk blocks which might get allocated in the quota DBs. There is code to handle the log reservation for disk quotas, but I don't see anything for the space reservation. Am I missing something? 
thanks, Michael From owner-xfs@oss.sgi.com Thu Mar 27 09:47:11 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 27 Mar 2008 09:47:18 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2RGl9HG024790 for ; Thu, 27 Mar 2008 09:47:11 -0700 X-ASG-Debug-ID: 1206636463-4d5303100000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ext.agami.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id D4F9D6F8F4D for ; Thu, 27 Mar 2008 09:47:43 -0700 (PDT) Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by cuda.sgi.com with ESMTP id mZlXzp4yvjuOsGPd for ; Thu, 27 Mar 2008 09:47:43 -0700 (PDT) Received: from agami.com (mail [192.168.168.5]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id m2RGUn3G022386 for ; Thu, 27 Mar 2008 09:31:52 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id m2R1fIdk030363 for ; Wed, 26 Mar 2008 18:41:18 -0700 Received: from [10.123.4.142] ([10.123.4.142]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Wed, 26 Mar 2008 18:41:41 -0700 Message-ID: <47EAFB54.1050106@agami.com> Date: Wed, 26 Mar 2008 18:41:40 -0700 From: Michael Nishimoto User-Agent: Mail/News 1.5.0.4 (X11/20060629) MIME-Version: 1.0 To: XFS Mailing List X-ASG-Orig-Subj: Re: accounting disk quotas in space reservation Subject: Re: accounting disk quotas in space reservation References: <47EAF0BA.4040909@agami.com> In-Reply-To: <47EAF0BA.4040909@agami.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 27 Mar 2008 01:41:41.0187 (UTC) FILETIME=[B4948130:01C88FAB] X-Scanned-By: MIMEDefang 2.58 on 
192.168.168.13 X-Barracuda-Connect: 64.221.212.177.ptr.us.xo.net[64.221.212.177] X-Barracuda-Start-Time: 1206636463 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.46065 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15065 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: miken@agami.com Precedence: bulk X-list: xfs Yes, I was missing something. I just figured out how disk quota objects are reserved/allocated. Michael Nishimoto wrote: > Hi, > > When allocating a space reservation at xfs_trans_reserve time, > it doesn't look like xfs takes into account new disk blocks which > might get allocated in the quota DBs. There is code to handle the > log reservation for disk quotas, but I don't see anything for > the space reservation. > > Am I missing something? 
> > thanks, > > Michael > > From owner-xfs@oss.sgi.com Thu Mar 27 11:54:04 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 27 Mar 2008 11:54:25 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2RIs0Jb002685 for ; Thu, 27 Mar 2008 11:54:04 -0700 X-ASG-Debug-ID: 1206644073-30fd02a40000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.redhat.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id E8B34104F7CD for ; Thu, 27 Mar 2008 11:54:33 -0700 (PDT) Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by cuda.sgi.com with ESMTP id OfPHd6Md4LRngpyN for ; Thu, 27 Mar 2008 11:54:33 -0700 (PDT) Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.13.8/8.13.8) with ESMTP id m2RIrJTK008787; Thu, 27 Mar 2008 14:53:19 -0400 Received: from pobox.fab.redhat.com (pobox.fab.redhat.com [10.33.63.12]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id m2RIrGYt026706; Thu, 27 Mar 2008 14:53:16 -0400 Received: from agk.fab.redhat.com (agk.fab.redhat.com [10.33.0.19]) by pobox.fab.redhat.com (8.13.1/8.13.1) with ESMTP id m2RIrF32023903; Thu, 27 Mar 2008 14:53:15 -0400 Received: from agk by agk.fab.redhat.com with local (Exim 4.34) id 1JexDv-0000PM-GG; Thu, 27 Mar 2008 18:53:15 +0000 Date: Thu, 27 Mar 2008 18:53:15 +0000 From: Alasdair G Kergon To: Linus Torvalds Cc: Chr , "Rafael J. 
Wysocki" , Milan Broz , dm-crypt@saout.de, Christian Kujau , Ulrich Lukas , linux-kernel@vger.kernel.org, Adrian Bunk , Andrew Morton , Natalie Protasevich , Herbert Xu , David Chinner , xfs@oss.sgi.com, dm-devel@redhat.com, dm-crypt@saout.de, Ritesh Raj Sarraf X-ASG-Orig-Subj: [2.6.25-rc8] dm crypt: fix ctx pending Subject: [2.6.25-rc8] dm crypt: fix ctx pending Message-ID: <20080327185315.GA26288@agk.fab.redhat.com> Mail-Followup-To: Linus Torvalds , Chr , "Rafael J. Wysocki" , Milan Broz , dm-crypt@saout.de, Christian Kujau , Ulrich Lukas , linux-kernel@vger.kernel.org, Adrian Bunk , Andrew Morton , Natalie Protasevich , Herbert Xu , David Chinner , xfs@oss.sgi.com, dm-devel@redhat.com, Ritesh Raj Sarraf Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.1i Organization: Red Hat UK Ltd. Registered in England and Wales, number 03798903. Registered Office: Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE. X-Scanned-By: MIMEDefang 2.58 on 172.16.52.254 X-Barracuda-Connect: mx1.redhat.com[66.187.233.31] X-Barracuda-Start-Time: 1206644074 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.46072 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15066 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: agk@redhat.com Precedence: bulk X-list: xfs From: Milan Broz Fix regression in dm-crypt introduced in commit 3a7f6c990ad04e6f576a159876c602d14d6f7fef (dm crypt: use 
async crypto). If write requests need to be split into pieces, the code must not process them in parallel because the crypto context cannot be shared. So there can be parallel crypto operations on one part of the write, but only one write bio can be processed at a time. This is not optimal and the workqueue code needs to be optimized for parallel processing, but for now it solves the problem without affecting the performance of synchronous crypto operation (most of current dm-crypt users). http://bugzilla.kernel.org/show_bug.cgi?id=10242 http://bugzilla.kernel.org/show_bug.cgi?id=10207 Signed-off-by: Milan Broz Signed-off-by: Alasdair G Kergon --- drivers/md/dm-crypt.c | 58 +++++++++++++++++++++++++------------------------- 1 files changed, 30 insertions(+), 28 deletions(-) Index: linux-2.6.25-rc7/drivers/md/dm-crypt.c =================================================================== --- linux-2.6.25-rc7.orig/drivers/md/dm-crypt.c 2008-03-27 17:51:23.000000000 +0000 +++ linux-2.6.25-rc7/drivers/md/dm-crypt.c 2008-03-27 17:51:25.000000000 +0000 @@ -1,7 +1,7 @@ /* * Copyright (C) 2003 Christophe Saout * Copyright (C) 2004 Clemens Fruhwirth - * Copyright (C) 2006-2007 Red Hat, Inc. All rights reserved. + * Copyright (C) 2006-2008 Red Hat, Inc. All rights reserved. * * This file is released under the GPL. */ @@ -93,6 +93,8 @@ struct crypt_config { struct workqueue_struct *io_queue; struct workqueue_struct *crypt_queue; + wait_queue_head_t writeq; + /* * crypto related data */ @@ -331,14 +333,7 @@ static void crypt_convert_init(struct cr ctx->idx_out = bio_out ? bio_out->bi_idx : 0; ctx->sector = sector + cc->iv_offset; init_completion(&ctx->restart); - /* - * Crypto operation can be asynchronous, - * ctx->pending is increased after request submission. - * We need to ensure that we don't call the crypt finish - * operation before pending got incremented - * (dependent on crypt submission return code). 
- */ - atomic_set(&ctx->pending, 2); + atomic_set(&ctx->pending, 1); } static int crypt_convert_block(struct crypt_config *cc, @@ -411,43 +406,42 @@ static void crypt_alloc_req(struct crypt static int crypt_convert(struct crypt_config *cc, struct convert_context *ctx) { - int r = 0; + int r; while(ctx->idx_in < ctx->bio_in->bi_vcnt && ctx->idx_out < ctx->bio_out->bi_vcnt) { crypt_alloc_req(cc, ctx); + atomic_inc(&ctx->pending); + r = crypt_convert_block(cc, ctx, cc->req); switch (r) { + /* async */ case -EBUSY: wait_for_completion(&ctx->restart); INIT_COMPLETION(ctx->restart); /* fall through*/ case -EINPROGRESS: - atomic_inc(&ctx->pending); cc->req = NULL; - r = 0; - /* fall through*/ + ctx->sector++; + continue; + + /* sync */ case 0: + atomic_dec(&ctx->pending); ctx->sector++; continue; - } - break; + /* error */ + default: + atomic_dec(&ctx->pending); + return r; + } } - /* - * If there are pending crypto operation run async - * code. Otherwise process return code synchronously. - * The step of 2 ensures that async finish doesn't - * call crypto finish too early. 
- */ - if (atomic_sub_return(2, &ctx->pending)) - return -EINPROGRESS; - - return r; + return 0; } static void dm_crypt_bio_destructor(struct bio *bio) @@ -624,8 +618,10 @@ static void kcryptd_io_read(struct dm_cr static void kcryptd_io_write(struct dm_crypt_io *io) { struct bio *clone = io->ctx.bio_out; + struct crypt_config *cc = io->target->private; generic_make_request(clone); + wake_up(&cc->writeq); } static void kcryptd_io(struct work_struct *work) @@ -698,7 +694,8 @@ static void kcryptd_crypt_write_convert_ r = crypt_convert(cc, &io->ctx); - if (r != -EINPROGRESS) { + if (atomic_dec_and_test(&io->ctx.pending)) { + /* processed, no running async crypto */ kcryptd_crypt_write_io_submit(io, r, 0); if (unlikely(r < 0)) return; @@ -706,8 +703,12 @@ static void kcryptd_crypt_write_convert_ atomic_inc(&io->pending); /* out of memory -> run queues */ - if (unlikely(remaining)) + if (unlikely(remaining)) { + /* wait for async crypto then reinitialize pending */ + wait_event(cc->writeq, !atomic_read(&io->ctx.pending)); + atomic_set(&io->ctx.pending, 1); congestion_wait(WRITE, HZ/100); + } } } @@ -746,7 +747,7 @@ static void kcryptd_crypt_read_convert(s r = crypt_convert(cc, &io->ctx); - if (r != -EINPROGRESS) + if (atomic_dec_and_test(&io->ctx.pending)) kcryptd_crypt_read_done(io, r); crypt_dec_pending(io); @@ -1047,6 +1048,7 @@ static int crypt_ctr(struct dm_target *t goto bad_crypt_queue; } + init_waitqueue_head(&cc->writeq); ti->private = cc; return 0; From owner-xfs@oss.sgi.com Thu Mar 27 16:57:16 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 27 Mar 2008 16:57:51 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2RNvCeR001688 for ; Thu, 27 Mar 2008 16:57:14 
-0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA01911; Fri, 28 Mar 2008 10:57:42 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2RNvfsT112122914; Fri, 28 Mar 2008 10:57:41 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2RNvd4N112287870; Fri, 28 Mar 2008 10:57:39 +1100 (AEDT) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 28 Mar 2008 10:57:39 +1100 From: David Chinner To: Barry Naujok Cc: "xfs@oss.sgi.com" Subject: Re: REVIEW: improve/fix/update zeroing garbage in superblock sectors in xfs_repair Message-ID: <20080327235739.GC108924158@sgi.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15067 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Mar 27, 2008 at 04:25:33PM +1100, Barry Naujok wrote: > Running XFS QA with a standard HDD with the bad_features2 problem > happening and doing "mkfs.xfs -l version=1", a problem was encountered > where it went to zero the "bad" features2 bit. > > Why didn't this happen all the time? > > Upon investigation, I updated the behaviour of the "secondary_sb_wack" > function. Now it always zeroes any garbage found beyond the expected > end of the xfs_sb_t structure in the first sector. .....
> - if (XFS_SB_VERSION_HASMOREBITS(sb)) > - size = (__psint_t)&sb->sb_features2 > - + sizeof(sb->sb_features2) - (__psint_t)sb; > - else if (XFS_SB_VERSION_HASLOGV2(sb)) > - size = (__psint_t)&sb->sb_logsunit > + if (xfs_sb_version_hasmorebits(sb)) > + size = (__psint_t)&sb->sb_bad_features2 > + + sizeof(sb->sb_bad_features2) - (__psint_t)sb; > + else if (xfs_sb_version_haslogv2(sb)) ..... This is still fragile and requires us to update this function every time we add a new field to the superblock. Is there some way we can do this that doesn't require an update for every modification to the superblock we make? Also - size = offsetof(sb->sb_bad_features2) + sizeof(sb->sb_bad_features2); or in the case of the last entry in the superblock: size = sizeof(xfs_dsb_t); Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Mar 27 19:14:44 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 27 Mar 2008 19:14:52 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2S2EdHV009286 for ; Thu, 27 Mar 2008 19:14:43 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA06651; Fri, 28 Mar 2008 13:15:08 +1100 Date: Fri, 28 Mar 2008 13:17:01 +1100 To: "David Chinner" Subject: Re: REVIEW: improve/fix/update zeroing garbage in superblock sectors in xfs_repair From: "Barry Naujok" Organization: SGI Cc: "xfs@oss.sgi.com" Content-Type: text/plain; format=flowed; delsp=yes; charset=utf-8 MIME-Version: 1.0 References: <20080327235739.GC108924158@sgi.com> Message-ID: In-Reply-To: <20080327235739.GC108924158@sgi.com> 
User-Agent: Opera Mail/9.24 (Win32) X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from Quoted-Printable to 8bit by oss.sgi.com id m2S2EiHV009348 X-archive-position: 15068 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs On Fri, 28 Mar 2008 10:57:39 +1100, David Chinner wrote: > On Thu, Mar 27, 2008 at 04:25:33PM +1100, Barry Naujok wrote: >> Running XFS QA with a standard HDD with the bad_features2 problem >> happening and doing "mkfs.xfs -l version=1", a problem was encountered >> where it went to zero the "bad" features2 bit. >> >> Why didn't this happen all the time? >> >> Upon investigation, I updated the behaviour of the "secondary_sb_wack" >> function. Now it always zeroes any garbage found beyond the expected >> end of the xfs_sb_t structure in the first sector. > ..... > >> - if (XFS_SB_VERSION_HASMOREBITS(sb)) >> - size = (__psint_t)&sb->sb_features2 >> - + sizeof(sb->sb_features2) - (__psint_t)sb; >> - else if (XFS_SB_VERSION_HASLOGV2(sb)) >> - size = (__psint_t)&sb->sb_logsunit >> + if (xfs_sb_version_hasmorebits(sb)) >> + size = (__psint_t)&sb->sb_bad_features2 >> + + sizeof(sb->sb_bad_features2) - (__psint_t)sb; >> + else if (xfs_sb_version_haslogv2(sb)) > ..... > > This is still fragile and requires us to update this function every time > we add a new field to the superblock. Is there some way we can do this > that doesn't require an update for every modification to the superblock > we make? It sort of seems that the zeroing code of old was only for the early filesystems before a lot of these features and that later features were explicitly added and checked: ie. if (!xfs_sb_hasfeature(sb) && sb->feature is set), zero it. Maybe the "mask" (XR_GOOD_SECSB_VNMASK) that was used should be expanded to the full range of sb_versionnum (0xff00).
If extra features are set that xfs_repair doesn't handle, it doesn't even
get this far to try and zero beyond xfs_sb_t.

That does seem to be the solution (but it still requires code for each and
every new feature added to xfs_sb_t). So, for CI mode, for example:

	if (!xfs_sb_hasunicode(sb) && sb->sb_cftino != 0) {
		/* handle it */
	}

> Also -
>
> 	size = offsetof(sb->sb_bad_features2) +
> 	       sizeof(sb->sb_bad_features2);
>
> or in the case of the last entry in the superblock:
>
> 	size = sizeof(xfs_dsb_t);
>
> Cheers,
>
> Dave.

From owner-xfs@oss.sgi.com Thu Mar 27 21:25:29 2008
Received: with ECARTIS (v1.0.0; list xfs); Thu, 27 Mar 2008 21:25:42 -0700 (PDT)
X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_45 autolearn=no version=3.3.0-r574664
Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2S4PRal022102 for ; Thu, 27 Mar 2008 21:25:29 -0700
X-ASG-Debug-ID: 1206678352-08f400d90000-NocioJ
X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi
Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 542FF6FD74D for ; Thu, 27 Mar 2008 21:25:52 -0700 (PDT)
Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id Agzm4zh1u3xTRkIW for ; Thu, 27 Mar 2008 21:25:52 -0700 (PDT)
Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id CA8631802FBE7; Thu, 27 Mar 2008 23:25:49 -0500 (CDT)
Message-ID: <47EC734D.2020304@sandeen.net>
Date: Thu, 27 Mar 2008 23:25:49 -0500
From: Eric Sandeen
User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213)
MIME-Version: 1.0
To: markgw@sgi.com
CC: xfs-oss , Christoph Hellwig
X-ASG-Orig-Subj: Re: FYI: xfs problems in Fedora 8 updates
Subject: Re: FYI: xfs problems in Fedora 8 updates
References: <47E3CE92.20803@sandeen.net> <47E8687A.90306@sgi.com>
In-Reply-To: <47E8687A.90306@sgi.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
X-Barracuda-Connect: sandeen.net[209.173.210.139]
X-Barracuda-Start-Time: 1206678355
X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210
X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com
X-Barracuda-Spam-Score: -0.69
X-Barracuda-Spam-Status: No, SCORE=-0.69 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=UNIQUE_WORDS
X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.46111
  Rule breakdown below
   pts rule name              description
  ---- ---------------------- --------------------------------------------------
  1.34 UNIQUE_WORDS           BODY: Message body has many words used only once
X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com
X-Virus-Status: Clean
X-archive-position: 15069
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: sandeen@sandeen.net
Precedence: bulk
X-list: xfs

Mark Goodwin wrote:
>
> Eric Sandeen wrote:
>> https://bugzilla.redhat.com/show_bug.cgi?id=437968
>> Bugzilla Bug 437968: Corrupt xfs root filesystem with kernel
>> kernel-2.6.24.3-xx
>>
>> Just to give the sgi guys a heads up, 2 people have seen this now.
>>
>> I know it's a distro kernel but fedora is generally reasonably close to
>> upstream.
>>
>> I'm looking into it but just wanted to put this on the list, too.
>
> Hi Eric, have you identified this as any particular known problem?
>
> Cheers

From a testcase and some git bisection, looks like this mod broke it
somehow, but not sure how yet:

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=2bdf7cd0baa67608ada1517a281af359faf4c58c
[XFS] superblock endianess annotations

-Eric

p.s. testcase was updating "foomatic" in a fresh F8 root on my test box...
w/ this mod in place, subsequent xfs_repair is very unhappy:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
41802940: Badness in key lookup (length)
bp=(bno 0, len 512 bytes) key=(bno 0, len 4096 bytes)
bad magic # 0x58465342 in inode 17627699 (data fork) bmbt block 0
bad data fork in inode 17627699
would have cleared inode 17627699
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
bad magic # 0x58465342 in inode 17627699 (data fork) bmbt block 0
bad data fork in inode 17627699
would have cleared inode 17627699
entry "printer" in shortform directory 29824865 references free inode 17627699
would have junked entry "printer" in directory inode 29824865
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
entry "printer" in shortform directory inode 29824865 points to free inode 17627699
would junk entry
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 17627698, would move to lost+found disconnected inode 17627700, would move to lost+found disconnected inode 17627701, would move to lost+found disconnected inode 17627702, would move to lost+found disconnected inode 17627703, would move to lost+found disconnected inode 17627704, would move to lost+found disconnected inode 17627705, would move to lost+found disconnected inode 17627706, would move to lost+found disconnected inode 17627707, would move to lost+found disconnected inode 17627708, would move to lost+found disconnected inode 17627709, would move to lost+found disconnected inode 17627710, would move to lost+found disconnected inode 17627711, would move to lost+found disconnected inode 17631904, would move to lost+found disconnected inode 17631905, would move to lost+found disconnected inode 17631906, would move to lost+found disconnected inode 17631907, would move to lost+found disconnected inode 17631908, would move to lost+found disconnected inode 17631909, would move to lost+found disconnected inode 17631910, would move to lost+found disconnected inode 17631911, would move to lost+found disconnected inode 17631912, would move to lost+found disconnected inode 17631913, would move to lost+found disconnected inode 17631914, would move to lost+found disconnected inode 17631915, would move to lost+found disconnected inode 17631916, would move to lost+found disconnected inode 17631917, would move to lost+found disconnected inode 17631918, would move to lost+found disconnected inode 17631919, would move to lost+found disconnected inode 17631920, would move to lost+found disconnected inode 17631921, would move to lost+found disconnected inode 17631922, would move to lost+found disconnected inode 17631923, would move to lost+found disconnected inode 17631924, would move to lost+found disconnected inode 17631925, would move to lost+found disconnected inode 17631926, would move to lost+found disconnected inode 17631927, would move to lost+found 
disconnected inode 17631928, would move to lost+found disconnected inode 17631929, would move to lost+found disconnected inode 17631930, would move to lost+found disconnected inode 17631931, would move to lost+found disconnected inode 17631932, would move to lost+found disconnected inode 17631933, would move to lost+found disconnected inode 17631934, would move to lost+found disconnected inode 17631935, would move to lost+found disconnected inode 17631936, would move to lost+found disconnected inode 17631937, would move to lost+found disconnected inode 17631938, would move to lost+found disconnected inode 17631939, would move to lost+found disconnected inode 17631940, would move to lost+found disconnected inode 17631941, would move to lost+found disconnected inode 17631942, would move to lost+found disconnected inode 17631943, would move to lost+found disconnected inode 17631944, would move to lost+found disconnected inode 17631945, would move to lost+found disconnected inode 17631946, would move to lost+found disconnected inode 17631947, would move to lost+found disconnected inode 17631948, would move to lost+found disconnected inode 17631949, would move to lost+found disconnected inode 17631950, would move to lost+found disconnected inode 17631951, would move to lost+found disconnected inode 17631952, would move to lost+found disconnected inode 17631953, would move to lost+found disconnected inode 17631954, would move to lost+found disconnected inode 17631955, would move to lost+found disconnected inode 17631956, would move to lost+found disconnected inode 17631957, would move to lost+found disconnected inode 17631958, would move to lost+found disconnected inode 17631959, would move to lost+found disconnected inode 17631960, would move to lost+found disconnected inode 17631961, would move to lost+found disconnected inode 17631962, would move to lost+found disconnected inode 17631963, would move to lost+found disconnected inode 17631964, would move to lost+found 
disconnected inode 17631965, would move to lost+found disconnected inode 17631966, would move to lost+found disconnected inode 17631967, would move to lost+found disconnected inode 17632000, would move to lost+found disconnected inode 17632001, would move to lost+found disconnected inode 17632002, would move to lost+found disconnected inode 17632003, would move to lost+found disconnected inode 17632004, would move to lost+found disconnected inode 17632005, would move to lost+found disconnected inode 17632006, would move to lost+found disconnected inode 17632007, would move to lost+found disconnected inode 17632008, would move to lost+found disconnected inode 17632009, would move to lost+found disconnected inode 17632010, would move to lost+found disconnected inode 17632011, would move to lost+found disconnected inode 17632012, would move to lost+found disconnected inode 17632013, would move to lost+found disconnected inode 17632014, would move to lost+found disconnected inode 17632015, would move to lost+found disconnected inode 17632016, would move to lost+found disconnected inode 17632017, would move to lost+found disconnected inode 17632018, would move to lost+found disconnected inode 17632019, would move to lost+found disconnected inode 17632020, would move to lost+found disconnected inode 17632021, would move to lost+found disconnected inode 17632022, would move to lost+found disconnected inode 17632023, would move to lost+found disconnected inode 17632024, would move to lost+found disconnected inode 17632025, would move to lost+found disconnected inode 17632026, would move to lost+found disconnected inode 17632027, would move to lost+found disconnected inode 17632028, would move to lost+found disconnected inode 17632029, would move to lost+found disconnected inode 17632030, would move to lost+found disconnected inode 17632031, would move to lost+found disconnected inode 17632032, would move to lost+found disconnected inode 17632033, would move to lost+found 
disconnected inode 17632034, would move to lost+found disconnected inode 17632035, would move to lost+found disconnected inode 17632036, would move to lost+found disconnected inode 17632037, would move to lost+found disconnected inode 17632038, would move to lost+found disconnected inode 17632039, would move to lost+found disconnected inode 17632040, would move to lost+found disconnected inode 17632041, would move to lost+found disconnected inode 17632042, would move to lost+found disconnected inode 17632043, would move to lost+found disconnected inode 17632044, would move to lost+found disconnected inode 17632045, would move to lost+found disconnected inode 17632046, would move to lost+found disconnected inode 17632047, would move to lost+found disconnected inode 17632048, would move to lost+found disconnected inode 17632049, would move to lost+found disconnected inode 17632050, would move to lost+found disconnected inode 17632051, would move to lost+found disconnected inode 17632052, would move to lost+found disconnected inode 17632053, would move to lost+found disconnected inode 17632054, would move to lost+found disconnected inode 17632055, would move to lost+found disconnected inode 17632056, would move to lost+found disconnected inode 17632057, would move to lost+found disconnected inode 17632058, would move to lost+found disconnected inode 17632059, would move to lost+found disconnected inode 17632060, would move to lost+found disconnected inode 17632061, would move to lost+found disconnected inode 17632062, would move to lost+found disconnected inode 17632063, would move to lost+found disconnected inode 17632064, would move to lost+found disconnected inode 17632065, would move to lost+found disconnected inode 17632066, would move to lost+found disconnected inode 17632067, would move to lost+found disconnected inode 17632068, would move to lost+found disconnected inode 17632069, would move to lost+found disconnected inode 17632070, would move to lost+found 
disconnected inode 17632071, would move to lost+found disconnected inode 17632072, would move to lost+found disconnected inode 17632073, would move to lost+found disconnected inode 17632074, would move to lost+found disconnected inode 17632075, would move to lost+found disconnected inode 17632076, would move to lost+found disconnected inode 17632077, would move to lost+found disconnected inode 17632078, would move to lost+found disconnected inode 17632079, would move to lost+found disconnected inode 17632080, would move to lost+found disconnected inode 17632081, would move to lost+found disconnected inode 17632082, would move to lost+found disconnected inode 17632083, would move to lost+found disconnected inode 17632084, would move to lost+found disconnected inode 17632085, would move to lost+found disconnected inode 17632086, would move to lost+found disconnected inode 17632087, would move to lost+found disconnected inode 17632088, would move to lost+found disconnected inode 17632089, would move to lost+found disconnected inode 17632090, would move to lost+found disconnected inode 17632091, would move to lost+found disconnected inode 17632092, would move to lost+found disconnected inode 17632093, would move to lost+found disconnected inode 17632094, would move to lost+found disconnected inode 17632095, would move to lost+found disconnected inode 17632096, would move to lost+found disconnected inode 17632097, would move to lost+found disconnected inode 17632098, would move to lost+found disconnected inode 17632099, would move to lost+found disconnected inode 17632100, would move to lost+found disconnected inode 17632101, would move to lost+found disconnected inode 17632102, would move to lost+found disconnected inode 17632103, would move to lost+found disconnected inode 17632104, would move to lost+found disconnected inode 17632105, would move to lost+found disconnected inode 17632106, would move to lost+found disconnected inode 17632107, would move to lost+found 
disconnected inode 17632108, would move to lost+found disconnected inode 17632109, would move to lost+found disconnected inode 17632110, would move to lost+found disconnected inode 17632111, would move to lost+found disconnected inode 17632112, would move to lost+found disconnected inode 17632113, would move to lost+found disconnected inode 17632114, would move to lost+found disconnected inode 17632115, would move to lost+found disconnected inode 17632116, would move to lost+found disconnected inode 17632117, would move to lost+found disconnected inode 17632118, would move to lost+found disconnected inode 17632119, would move to lost+found disconnected inode 17632120, would move to lost+found disconnected inode 17632121, would move to lost+found disconnected inode 17632122, would move to lost+found disconnected inode 17632123, would move to lost+found disconnected inode 17632124, would move to lost+found disconnected inode 17632125, would move to lost+found disconnected inode 17632126, would move to lost+found disconnected inode 17632127, would move to lost+found disconnected inode 17632128, would move to lost+found disconnected inode 17632129, would move to lost+found disconnected inode 17632130, would move to lost+found disconnected inode 17632131, would move to lost+found disconnected inode 17632132, would move to lost+found disconnected inode 17632133, would move to lost+found disconnected inode 17632134, would move to lost+found disconnected inode 17632135, would move to lost+found disconnected inode 17632136, would move to lost+found disconnected inode 17632137, would move to lost+found disconnected inode 17632138, would move to lost+found disconnected inode 17632139, would move to lost+found disconnected inode 17632140, would move to lost+found disconnected inode 17632141, would move to lost+found disconnected inode 17632142, would move to lost+found disconnected inode 17632143, would move to lost+found disconnected inode 17632144, would move to lost+found 
disconnected inode 17632145, would move to lost+found disconnected inode 17632146, would move to lost+found disconnected inode 17632147, would move to lost+found disconnected inode 17632148, would move to lost+found disconnected inode 17632149, would move to lost+found disconnected inode 17632150, would move to lost+found disconnected inode 17632151, would move to lost+found disconnected inode 17632152, would move to lost+found disconnected inode 17632153, would move to lost+found disconnected inode 17632154, would move to lost+found disconnected inode 17632155, would move to lost+found disconnected inode 17632156, would move to lost+found disconnected inode 17632157, would move to lost+found disconnected inode 17632158, would move to lost+found disconnected inode 17632159, would move to lost+found disconnected inode 17632160, would move to lost+found disconnected inode 17632161, would move to lost+found disconnected inode 17632162, would move to lost+found disconnected inode 17632163, would move to lost+found disconnected inode 17632164, would move to lost+found disconnected inode 17632165, would move to lost+found disconnected inode 17632166, would move to lost+found disconnected inode 17632167, would move to lost+found disconnected inode 17632168, would move to lost+found disconnected inode 17632169, would move to lost+found disconnected inode 17632170, would move to lost+found disconnected inode 17632171, would move to lost+found disconnected inode 17632172, would move to lost+found disconnected inode 17632173, would move to lost+found disconnected inode 17632174, would move to lost+found disconnected inode 17632175, would move to lost+found disconnected inode 17632176, would move to lost+found disconnected inode 17632177, would move to lost+found disconnected inode 17632178, would move to lost+found disconnected inode 17632179, would move to lost+found disconnected inode 17632180, would move to lost+found disconnected inode 17632181, would move to lost+found 
disconnected inode 17632182, would move to lost+found disconnected inode 17632183, would move to lost+found disconnected inode 17632184, would move to lost+found disconnected inode 17632185, would move to lost+found disconnected inode 17632186, would move to lost+found disconnected inode 17632187, would move to lost+found disconnected inode 17632188, would move to lost+found disconnected inode 17632189, would move to lost+found disconnected inode 17632190, would move to lost+found disconnected inode 17632191, would move to lost+found disconnected inode 17632224, would move to lost+found disconnected inode 17632225, would move to lost+found disconnected inode 17632226, would move to lost+found disconnected inode 17632227, would move to lost+found disconnected inode 17632228, would move to lost+found disconnected inode 17632229, would move to lost+found disconnected inode 17632230, would move to lost+found disconnected inode 17632231, would move to lost+found disconnected inode 17632232, would move to lost+found disconnected inode 17632233, would move to lost+found disconnected inode 17632234, would move to lost+found disconnected inode 17632235, would move to lost+found disconnected inode 17632236, would move to lost+found disconnected inode 17632237, would move to lost+found disconnected inode 17632238, would move to lost+found disconnected inode 17632239, would move to lost+found disconnected inode 17632240, would move to lost+found disconnected inode 17632241, would move to lost+found disconnected inode 17632242, would move to lost+found disconnected inode 17632243, would move to lost+found disconnected inode 17632244, would move to lost+found disconnected inode 17632245, would move to lost+found disconnected inode 17632246, would move to lost+found disconnected inode 17632247, would move to lost+found disconnected inode 17632248, would move to lost+found disconnected inode 17632249, would move to lost+found disconnected inode 17632250, would move to lost+found 
disconnected inode 17632251, would move to lost+found disconnected inode 17632252, would move to lost+found disconnected inode 17632253, would move to lost+found disconnected inode 17632254, would move to lost+found disconnected inode 17632255, would move to lost+found disconnected inode 17632256, would move to lost+found disconnected inode 17632257, would move to lost+found disconnected inode 17632258, would move to lost+found disconnected inode 17632259, would move to lost+found disconnected inode 17632260, would move to lost+found disconnected inode 17632261, would move to lost+found disconnected inode 17632262, would move to lost+found disconnected inode 17632263, would move to lost+found disconnected inode 17632264, would move to lost+found disconnected inode 17632265, would move to lost+found disconnected inode 17632266, would move to lost+found disconnected inode 17632267, would move to lost+found disconnected inode 17632268, would move to lost+found disconnected inode 17632269, would move to lost+found disconnected inode 17632270, would move to lost+found disconnected inode 17632271, would move to lost+found disconnected inode 17632272, would move to lost+found disconnected inode 17632273, would move to lost+found disconnected inode 17632274, would move to lost+found disconnected inode 17632275, would move to lost+found disconnected inode 17632276, would move to lost+found disconnected inode 17632277, would move to lost+found disconnected inode 17632278, would move to lost+found disconnected inode 17632279, would move to lost+found disconnected inode 17632280, would move to lost+found disconnected inode 17632281, would move to lost+found disconnected inode 17632282, would move to lost+found disconnected inode 17632283, would move to lost+found disconnected inode 17632284, would move to lost+found disconnected inode 17632285, would move to lost+found disconnected inode 17632286, would move to lost+found disconnected inode 17632287, would move to lost+found 
disconnected inode 17632288, would move to lost+found disconnected inode 17632289, would move to lost+found disconnected inode 17632290, would move to lost+found disconnected inode 17632291, would move to lost+found disconnected inode 17632292, would move to lost+found disconnected inode 17632293, would move to lost+found disconnected inode 17632294, would move to lost+found disconnected inode 17632295, would move to lost+found disconnected inode 17632296, would move to lost+found disconnected inode 17632297, would move to lost+found disconnected inode 17632298, would move to lost+found disconnected inode 17632299, would move to lost+found disconnected inode 17632300, would move to lost+found disconnected inode 17632301, would move to lost+found disconnected inode 17632302, would move to lost+found disconnected inode 17632303, would move to lost+found disconnected inode 17632304, would move to lost+found disconnected inode 17632305, would move to lost+found disconnected inode 17632306, would move to lost+found disconnected inode 17632307, would move to lost+found disconnected inode 17632308, would move to lost+found disconnected inode 17632309, would move to lost+found disconnected inode 17632310, would move to lost+found disconnected inode 17632311, would move to lost+found disconnected inode 17632312, would move to lost+found disconnected inode 17632313, would move to lost+found disconnected inode 17632314, would move to lost+found disconnected inode 17632315, would move to lost+found disconnected inode 17632316, would move to lost+found disconnected inode 17632317, would move to lost+found disconnected inode 17632318, would move to lost+found disconnected inode 17632319, would move to lost+found disconnected inode 17632320, would move to lost+found disconnected inode 17632321, would move to lost+found disconnected inode 17632322, would move to lost+found disconnected inode 17632323, would move to lost+found disconnected inode 17632324, would move to lost+found 
disconnected inode 17632325, would move to lost+found disconnected inode 17632326, would move to lost+found disconnected inode 17632327, would move to lost+found disconnected inode 17632328, would move to lost+found disconnected inode 17632329, would move to lost+found disconnected inode 17632330, would move to lost+found disconnected inode 17632331, would move to lost+found disconnected inode 17632332, would move to lost+found disconnected inode 17632333, would move to lost+found disconnected inode 17632334, would move to lost+found disconnected inode 17632335, would move to lost+found disconnected inode 17632336, would move to lost+found disconnected inode 17632337, would move to lost+found disconnected inode 17632338, would move to lost+found disconnected inode 17632339, would move to lost+found disconnected inode 17632340, would move to lost+found disconnected inode 17632341, would move to lost+found disconnected inode 17632342, would move to lost+found disconnected inode 17632343, would move to lost+found disconnected inode 17632344, would move to lost+found disconnected inode 17632345, would move to lost+found disconnected inode 17632346, would move to lost+found disconnected inode 17632347, would move to lost+found disconnected inode 17632348, would move to lost+found disconnected inode 17632349, would move to lost+found disconnected inode 17632350, would move to lost+found disconnected inode 17632351, would move to lost+found disconnected inode 17632352, would move to lost+found disconnected inode 17632353, would move to lost+found disconnected inode 17632354, would move to lost+found disconnected inode 17632355, would move to lost+found disconnected inode 17632356, would move to lost+found disconnected inode 17632357, would move to lost+found disconnected inode 17632358, would move to lost+found disconnected inode 17632359, would move to lost+found disconnected inode 17632360, would move to lost+found disconnected inode 17632361, would move to lost+found 
disconnected inode 17632362, would move to lost+found disconnected inode 17632363, would move to lost+found disconnected inode 17632364, would move to lost+found disconnected inode 17632365, would move to lost+found disconnected inode 17632366, would move to lost+found disconnected inode 17632367, would move to lost+found disconnected inode 17632368, would move to lost+found disconnected inode 17632369, would move to lost+found disconnected inode 17632370, would move to lost+found disconnected inode 17632371, would move to lost+found disconnected inode 17632372, would move to lost+found disconnected inode 17632373, would move to lost+found disconnected inode 17632374, would move to lost+found disconnected inode 17632375, would move to lost+found disconnected inode 17632376, would move to lost+found disconnected inode 17632377, would move to lost+found disconnected inode 17632378, would move to lost+found disconnected inode 17632379, would move to lost+found disconnected inode 17632380, would move to lost+found disconnected inode 17632381, would move to lost+found disconnected inode 17632382, would move to lost+found disconnected inode 17632383, would move to lost+found disconnected inode 17632384, would move to lost+found disconnected inode 17632385, would move to lost+found disconnected inode 17632386, would move to lost+found disconnected inode 17632387, would move to lost+found disconnected inode 17632388, would move to lost+found disconnected inode 17632389, would move to lost+found disconnected inode 17632390, would move to lost+found disconnected inode 17632391, would move to lost+found disconnected inode 17632392, would move to lost+found disconnected inode 17632393, would move to lost+found disconnected inode 17632394, would move to lost+found disconnected inode 17632395, would move to lost+found disconnected inode 17632396, would move to lost+found disconnected inode 17632397, would move to lost+found disconnected inode 17632398, would move to lost+found 
disconnected inode 17632399, would move to lost+found
[... identical "disconnected inode N, would move to lost+found" messages repeated for the consecutive inode ranges 17632400-17632415, 17632448-17632511, 17632544-17632607, 17632640-17632703, 17632736-17632991, 17633024-17633151, 17639936-17640063, and 17640096-17640151 ...]
disconnected inode 17640152, would move to lost+found disconnected inode 17640153, would move to lost+found disconnected inode 17640154, would move to lost+found disconnected inode 17640155, would move to lost+found disconnected inode 17640156, would move to lost+found disconnected inode 17640157, would move to lost+found disconnected inode 17640158, would move to lost+found disconnected inode 17640159, would move to lost+found disconnected inode 17640160, would move to lost+found disconnected inode 17640161, would move to lost+found disconnected inode 17640162, would move to lost+found disconnected inode 17640163, would move to lost+found disconnected inode 17640164, would move to lost+found disconnected inode 17640165, would move to lost+found disconnected inode 17640166, would move to lost+found disconnected inode 17640167, would move to lost+found disconnected inode 17640168, would move to lost+found disconnected inode 17640169, would move to lost+found disconnected inode 17640170, would move to lost+found disconnected inode 17640171, would move to lost+found disconnected inode 17640172, would move to lost+found disconnected inode 17640173, would move to lost+found disconnected inode 17640174, would move to lost+found disconnected inode 17640175, would move to lost+found disconnected inode 17640176, would move to lost+found disconnected inode 17640177, would move to lost+found disconnected inode 17640178, would move to lost+found disconnected inode 17640179, would move to lost+found disconnected inode 17640180, would move to lost+found disconnected inode 17640181, would move to lost+found disconnected inode 17640182, would move to lost+found disconnected inode 17640183, would move to lost+found disconnected inode 17640184, would move to lost+found disconnected inode 17640185, would move to lost+found disconnected inode 17640186, would move to lost+found disconnected inode 17640187, would move to lost+found disconnected inode 17640188, would move to lost+found 
disconnected inode 17640189, would move to lost+found disconnected inode 17640190, would move to lost+found disconnected inode 17640191, would move to lost+found disconnected inode 17640192, would move to lost+found disconnected inode 17640193, would move to lost+found disconnected inode 17640194, would move to lost+found disconnected inode 17640195, would move to lost+found disconnected inode 17640196, would move to lost+found disconnected inode 17640197, would move to lost+found disconnected inode 17640198, would move to lost+found disconnected inode 17640199, would move to lost+found disconnected inode 17640200, would move to lost+found disconnected inode 17640201, would move to lost+found disconnected inode 17640202, would move to lost+found disconnected inode 17640203, would move to lost+found disconnected inode 17640204, would move to lost+found disconnected inode 17640205, would move to lost+found disconnected inode 17640206, would move to lost+found disconnected inode 17640207, would move to lost+found disconnected inode 17640208, would move to lost+found disconnected inode 17640209, would move to lost+found disconnected inode 17640210, would move to lost+found disconnected inode 17640211, would move to lost+found disconnected inode 17640212, would move to lost+found disconnected inode 17640213, would move to lost+found disconnected inode 17640214, would move to lost+found disconnected inode 17640215, would move to lost+found disconnected inode 17640216, would move to lost+found disconnected inode 17640217, would move to lost+found disconnected inode 17640218, would move to lost+found disconnected inode 17640219, would move to lost+found disconnected inode 17640220, would move to lost+found disconnected inode 17640221, would move to lost+found disconnected inode 17640222, would move to lost+found disconnected inode 17640223, would move to lost+found disconnected inode 17640224, would move to lost+found disconnected inode 17640225, would move to lost+found 
disconnected inode 17640226, would move to lost+found disconnected inode 17640227, would move to lost+found disconnected inode 17640228, would move to lost+found disconnected inode 17640229, would move to lost+found disconnected inode 17640230, would move to lost+found disconnected inode 17640231, would move to lost+found disconnected inode 17640232, would move to lost+found disconnected inode 17640233, would move to lost+found disconnected inode 17640234, would move to lost+found disconnected inode 17640235, would move to lost+found disconnected inode 17640236, would move to lost+found disconnected inode 17640237, would move to lost+found disconnected inode 17640238, would move to lost+found disconnected inode 17640239, would move to lost+found disconnected inode 17640240, would move to lost+found disconnected inode 17640241, would move to lost+found disconnected inode 17640242, would move to lost+found disconnected inode 17640243, would move to lost+found disconnected inode 17640244, would move to lost+found disconnected inode 17640245, would move to lost+found disconnected inode 17640246, would move to lost+found disconnected inode 17640247, would move to lost+found disconnected inode 17640248, would move to lost+found disconnected inode 17640249, would move to lost+found disconnected inode 17640250, would move to lost+found disconnected inode 17640251, would move to lost+found disconnected inode 17640252, would move to lost+found disconnected inode 17640253, would move to lost+found disconnected inode 17640254, would move to lost+found disconnected inode 17640255, would move to lost+found disconnected inode 17640256, would move to lost+found disconnected inode 17640257, would move to lost+found disconnected inode 17640258, would move to lost+found disconnected inode 17640259, would move to lost+found disconnected inode 17640260, would move to lost+found disconnected inode 17640261, would move to lost+found disconnected inode 17640262, would move to lost+found 
disconnected inode 17640263, would move to lost+found disconnected inode 17640264, would move to lost+found disconnected inode 17640265, would move to lost+found disconnected inode 17640266, would move to lost+found disconnected inode 17640267, would move to lost+found disconnected inode 17640268, would move to lost+found disconnected inode 17640269, would move to lost+found disconnected inode 17640270, would move to lost+found disconnected inode 17640271, would move to lost+found disconnected inode 17640272, would move to lost+found disconnected inode 17640273, would move to lost+found disconnected inode 17640274, would move to lost+found disconnected inode 17640275, would move to lost+found disconnected inode 17640276, would move to lost+found disconnected inode 17640277, would move to lost+found disconnected inode 17640278, would move to lost+found disconnected inode 17640279, would move to lost+found disconnected inode 17640280, would move to lost+found disconnected inode 17640281, would move to lost+found disconnected inode 17640282, would move to lost+found disconnected inode 17640283, would move to lost+found disconnected inode 17640284, would move to lost+found disconnected inode 17640285, would move to lost+found disconnected inode 17640286, would move to lost+found disconnected inode 17640287, would move to lost+found disconnected inode 17640320, would move to lost+found disconnected inode 17640321, would move to lost+found disconnected inode 17640322, would move to lost+found disconnected inode 17640323, would move to lost+found disconnected inode 17640324, would move to lost+found disconnected inode 17640325, would move to lost+found disconnected inode 17640326, would move to lost+found disconnected inode 17640327, would move to lost+found disconnected inode 17640328, would move to lost+found disconnected inode 17640329, would move to lost+found disconnected inode 17640330, would move to lost+found disconnected inode 17640331, would move to lost+found 
disconnected inode 17640332, would move to lost+found disconnected inode 17640333, would move to lost+found disconnected inode 17640334, would move to lost+found disconnected inode 17640335, would move to lost+found disconnected inode 17640336, would move to lost+found disconnected inode 17640337, would move to lost+found disconnected inode 17640338, would move to lost+found disconnected inode 17640339, would move to lost+found disconnected inode 17640340, would move to lost+found disconnected inode 17640341, would move to lost+found disconnected inode 17640342, would move to lost+found disconnected inode 17640343, would move to lost+found disconnected inode 17640344, would move to lost+found disconnected inode 17640345, would move to lost+found disconnected inode 17640346, would move to lost+found disconnected inode 17640347, would move to lost+found disconnected inode 17640348, would move to lost+found disconnected inode 17640349, would move to lost+found disconnected inode 17640350, would move to lost+found disconnected inode 17640351, would move to lost+found disconnected inode 17640352, would move to lost+found disconnected inode 17640353, would move to lost+found disconnected inode 17640354, would move to lost+found disconnected inode 17640355, would move to lost+found disconnected inode 17640356, would move to lost+found disconnected inode 17640357, would move to lost+found disconnected inode 17640358, would move to lost+found disconnected inode 17640359, would move to lost+found disconnected inode 17640360, would move to lost+found disconnected inode 17640361, would move to lost+found disconnected inode 17640362, would move to lost+found disconnected inode 17640363, would move to lost+found disconnected inode 17640364, would move to lost+found disconnected inode 17640365, would move to lost+found disconnected inode 17640366, would move to lost+found disconnected inode 17640367, would move to lost+found disconnected inode 17640368, would move to lost+found 
disconnected inode 17640369, would move to lost+found disconnected inode 17640370, would move to lost+found disconnected inode 17640371, would move to lost+found disconnected inode 17640372, would move to lost+found disconnected inode 17640373, would move to lost+found disconnected inode 17640374, would move to lost+found disconnected inode 17640375, would move to lost+found disconnected inode 17640376, would move to lost+found disconnected inode 17640377, would move to lost+found disconnected inode 17640378, would move to lost+found disconnected inode 17640379, would move to lost+found disconnected inode 17640380, would move to lost+found disconnected inode 17640381, would move to lost+found disconnected inode 17640382, would move to lost+found disconnected inode 17640383, would move to lost+found disconnected inode 17640384, would move to lost+found disconnected inode 17640385, would move to lost+found disconnected inode 17640386, would move to lost+found disconnected inode 17640387, would move to lost+found disconnected inode 17640388, would move to lost+found disconnected inode 17640389, would move to lost+found disconnected inode 17640390, would move to lost+found disconnected inode 17640391, would move to lost+found disconnected inode 17640392, would move to lost+found disconnected inode 17640393, would move to lost+found disconnected inode 17640394, would move to lost+found disconnected inode 17640395, would move to lost+found disconnected inode 17640396, would move to lost+found disconnected inode 17640397, would move to lost+found disconnected inode 17640398, would move to lost+found disconnected inode 17640399, would move to lost+found disconnected inode 17640400, would move to lost+found disconnected inode 17640401, would move to lost+found disconnected inode 17640402, would move to lost+found disconnected inode 17640403, would move to lost+found disconnected inode 17640404, would move to lost+found disconnected inode 17640405, would move to lost+found 
disconnected inode 17640406, would move to lost+found disconnected inode 17640407, would move to lost+found disconnected inode 17640408, would move to lost+found disconnected inode 17640409, would move to lost+found disconnected inode 17640410, would move to lost+found disconnected inode 17640411, would move to lost+found disconnected inode 17640412, would move to lost+found disconnected inode 17640413, would move to lost+found disconnected inode 17640414, would move to lost+found disconnected inode 17640415, would move to lost+found disconnected inode 17640416, would move to lost+found disconnected inode 17640417, would move to lost+found disconnected inode 17640418, would move to lost+found disconnected inode 17640419, would move to lost+found disconnected inode 17640420, would move to lost+found disconnected inode 17640421, would move to lost+found disconnected inode 17640422, would move to lost+found disconnected inode 17640423, would move to lost+found disconnected inode 17640424, would move to lost+found disconnected inode 17640425, would move to lost+found disconnected inode 17640426, would move to lost+found disconnected inode 17640427, would move to lost+found disconnected inode 17640428, would move to lost+found disconnected inode 17640429, would move to lost+found disconnected inode 17640430, would move to lost+found disconnected inode 17640431, would move to lost+found disconnected inode 17640432, would move to lost+found disconnected inode 17640433, would move to lost+found disconnected inode 17640434, would move to lost+found disconnected inode 17640435, would move to lost+found disconnected inode 17640436, would move to lost+found disconnected inode 17640437, would move to lost+found disconnected inode 17640438, would move to lost+found disconnected inode 17640439, would move to lost+found disconnected inode 17640440, would move to lost+found disconnected inode 17640441, would move to lost+found disconnected inode 17640442, would move to lost+found 
disconnected inode 17640443, would move to lost+found disconnected inode 17640444, would move to lost+found disconnected inode 17640445, would move to lost+found disconnected inode 17640446, would move to lost+found disconnected inode 17640447, would move to lost+found disconnected inode 17640448, would move to lost+found disconnected inode 17640449, would move to lost+found disconnected inode 17640450, would move to lost+found disconnected inode 17640451, would move to lost+found disconnected inode 17640452, would move to lost+found disconnected inode 17640453, would move to lost+found disconnected inode 17640454, would move to lost+found disconnected inode 17640455, would move to lost+found disconnected inode 17640456, would move to lost+found disconnected inode 17640457, would move to lost+found disconnected inode 17640458, would move to lost+found disconnected inode 17640459, would move to lost+found disconnected inode 17640460, would move to lost+found disconnected inode 17640461, would move to lost+found disconnected inode 17640462, would move to lost+found disconnected inode 17640463, would move to lost+found disconnected inode 17640464, would move to lost+found disconnected inode 17640465, would move to lost+found disconnected inode 17640466, would move to lost+found disconnected inode 17640467, would move to lost+found disconnected inode 17640468, would move to lost+found disconnected inode 17640469, would move to lost+found disconnected inode 17640470, would move to lost+found disconnected inode 17640471, would move to lost+found disconnected inode 17640472, would move to lost+found disconnected inode 17640473, would move to lost+found disconnected inode 17640474, would move to lost+found disconnected inode 17640475, would move to lost+found disconnected inode 17640476, would move to lost+found disconnected inode 17640477, would move to lost+found disconnected inode 17640478, would move to lost+found disconnected inode 17640479, would move to lost+found 
disconnected inode 17640480, would move to lost+found disconnected inode 17640481, would move to lost+found disconnected inode 17640482, would move to lost+found disconnected inode 17640483, would move to lost+found disconnected inode 17640484, would move to lost+found disconnected inode 17640485, would move to lost+found disconnected inode 17640486, would move to lost+found disconnected inode 17640487, would move to lost+found disconnected inode 17640488, would move to lost+found disconnected inode 17640489, would move to lost+found disconnected inode 17640490, would move to lost+found disconnected inode 17640491, would move to lost+found disconnected inode 17640492, would move to lost+found disconnected inode 17640493, would move to lost+found disconnected inode 17640494, would move to lost+found disconnected inode 17640495, would move to lost+found disconnected inode 17640496, would move to lost+found disconnected inode 17640497, would move to lost+found disconnected inode 17640498, would move to lost+found disconnected inode 17640499, would move to lost+found disconnected inode 17640500, would move to lost+found disconnected inode 17640501, would move to lost+found disconnected inode 17640502, would move to lost+found disconnected inode 17640503, would move to lost+found disconnected inode 17640504, would move to lost+found disconnected inode 17640505, would move to lost+found disconnected inode 17640506, would move to lost+found disconnected inode 17640507, would move to lost+found disconnected inode 17640508, would move to lost+found disconnected inode 17640509, would move to lost+found disconnected inode 17640510, would move to lost+found disconnected inode 17640511, would move to lost+found disconnected inode 17640512, would move to lost+found disconnected inode 17640513, would move to lost+found disconnected inode 17640514, would move to lost+found disconnected inode 17640515, would move to lost+found disconnected inode 17640516, would move to lost+found 
disconnected inode 17640517, would move to lost+found disconnected inode 17640518, would move to lost+found disconnected inode 17640519, would move to lost+found disconnected inode 17640520, would move to lost+found disconnected inode 17640521, would move to lost+found disconnected inode 17640522, would move to lost+found disconnected inode 17640523, would move to lost+found disconnected inode 17640524, would move to lost+found disconnected inode 17640525, would move to lost+found disconnected inode 17640526, would move to lost+found disconnected inode 17640527, would move to lost+found disconnected inode 17640528, would move to lost+found disconnected inode 17640529, would move to lost+found disconnected inode 17640530, would move to lost+found disconnected inode 17640531, would move to lost+found disconnected inode 17640532, would move to lost+found disconnected inode 17640533, would move to lost+found disconnected inode 17640534, would move to lost+found disconnected inode 17640535, would move to lost+found disconnected inode 17640536, would move to lost+found disconnected inode 17640537, would move to lost+found disconnected inode 17640538, would move to lost+found disconnected inode 17640539, would move to lost+found disconnected inode 17640540, would move to lost+found disconnected inode 17640541, would move to lost+found disconnected inode 17640542, would move to lost+found disconnected inode 17640543, would move to lost+found disconnected inode 17640544, would move to lost+found disconnected inode 17640545, would move to lost+found disconnected inode 17640546, would move to lost+found disconnected inode 17640547, would move to lost+found disconnected inode 17640548, would move to lost+found disconnected inode 17640549, would move to lost+found disconnected inode 17640550, would move to lost+found disconnected inode 17640551, would move to lost+found disconnected inode 17640552, would move to lost+found disconnected inode 17640553, would move to lost+found 
disconnected inode 17640554, would move to lost+found disconnected inode 17640555, would move to lost+found disconnected inode 17640556, would move to lost+found disconnected inode 17640557, would move to lost+found disconnected inode 17640558, would move to lost+found disconnected inode 17640559, would move to lost+found disconnected inode 17640560, would move to lost+found disconnected inode 17640561, would move to lost+found disconnected inode 17640562, would move to lost+found disconnected inode 17640563, would move to lost+found disconnected inode 17640564, would move to lost+found disconnected inode 17640565, would move to lost+found disconnected inode 17640566, would move to lost+found disconnected inode 17640567, would move to lost+found disconnected inode 17640568, would move to lost+found disconnected inode 17640569, would move to lost+found disconnected inode 17640570, would move to lost+found disconnected inode 17640571, would move to lost+found disconnected inode 17640572, would move to lost+found disconnected inode 17640573, would move to lost+found disconnected inode 17640574, would move to lost+found disconnected inode 17640575, would move to lost+found disconnected inode 17650272, would move to lost+found disconnected inode 17650273, would move to lost+found disconnected inode 17650274, would move to lost+found disconnected inode 17650275, would move to lost+found disconnected inode 17650276, would move to lost+found disconnected inode 17650277, would move to lost+found disconnected inode 17650278, would move to lost+found disconnected inode 17650279, would move to lost+found disconnected inode 17650280, would move to lost+found disconnected inode 17650281, would move to lost+found disconnected inode 17650282, would move to lost+found disconnected inode 17650283, would move to lost+found disconnected inode 17650284, would move to lost+found disconnected inode 17650285, would move to lost+found disconnected inode 17650286, would move to lost+found 
disconnected inode 17650287, would move to lost+found disconnected inode 17650288, would move to lost+found disconnected inode 17650289, would move to lost+found disconnected inode 17650290, would move to lost+found disconnected inode 17650291, would move to lost+found disconnected inode 17650292, would move to lost+found disconnected inode 17650293, would move to lost+found disconnected inode 17650294, would move to lost+found disconnected inode 17650295, would move to lost+found disconnected inode 17650296, would move to lost+found disconnected inode 17650297, would move to lost+found disconnected inode 17650298, would move to lost+found disconnected inode 17650299, would move to lost+found disconnected inode 17650300, would move to lost+found disconnected inode 17650301, would move to lost+found disconnected inode 17650302, would move to lost+found disconnected inode 17650303, would move to lost+found disconnected inode 17650304, would move to lost+found disconnected inode 17650305, would move to lost+found disconnected inode 17650306, would move to lost+found disconnected inode 17650307, would move to lost+found disconnected inode 17650308, would move to lost+found disconnected inode 17650309, would move to lost+found disconnected inode 17650310, would move to lost+found disconnected inode 17650311, would move to lost+found disconnected inode 17650312, would move to lost+found disconnected inode 17650313, would move to lost+found disconnected inode 17650314, would move to lost+found disconnected inode 17650315, would move to lost+found disconnected inode 17650316, would move to lost+found disconnected inode 17650317, would move to lost+found disconnected inode 17650318, would move to lost+found disconnected inode 17650319, would move to lost+found disconnected inode 17650320, would move to lost+found disconnected inode 17650321, would move to lost+found disconnected inode 17650322, would move to lost+found disconnected inode 17650323, would move to lost+found 
disconnected inode 17650324, would move to lost+found disconnected inode 17650325, would move to lost+found disconnected inode 17650326, would move to lost+found disconnected inode 17650327, would move to lost+found disconnected inode 17650328, would move to lost+found disconnected inode 17650329, would move to lost+found disconnected inode 17650330, would move to lost+found disconnected inode 17650331, would move to lost+found disconnected inode 17650332, would move to lost+found disconnected inode 17650333, would move to lost+found disconnected inode 17650334, would move to lost+found disconnected inode 17650335, would move to lost+found disconnected inode 17650336, would move to lost+found disconnected inode 17650337, would move to lost+found disconnected inode 17650338, would move to lost+found disconnected inode 17650339, would move to lost+found disconnected inode 17650340, would move to lost+found disconnected inode 17650341, would move to lost+found disconnected inode 17650342, would move to lost+found disconnected inode 17650343, would move to lost+found disconnected inode 17650344, would move to lost+found disconnected inode 17650345, would move to lost+found disconnected inode 17650346, would move to lost+found disconnected inode 17650347, would move to lost+found disconnected inode 17650348, would move to lost+found disconnected inode 17650349, would move to lost+found disconnected inode 17650350, would move to lost+found disconnected inode 17650351, would move to lost+found disconnected inode 17650352, would move to lost+found disconnected inode 17650353, would move to lost+found disconnected inode 17650354, would move to lost+found disconnected inode 17650355, would move to lost+found disconnected inode 17650356, would move to lost+found disconnected inode 17650357, would move to lost+found disconnected inode 17650358, would move to lost+found disconnected inode 17650359, would move to lost+found disconnected inode 17650360, would move to lost+found 
disconnected inode 17650361, would move to lost+found
disconnected inode 17650362, would move to lost+found
disconnected inode 17650363, would move to lost+found
[..... same "disconnected inode N, would move to lost+found" report repeated for every inode in the ranges 17650364-17650399, 17650432-17650615, 17650617-17650618, 17650621-17650623, 17650656-17650783, 17650816-17650879, 17653696-17653742, 17653744-17653750 and 17677376-17677382 .....]
disconnected inode 17677383, would move to lost+found
disconnected inode 17677384, would move to lost+found
disconnected inode 17677385, would move to lost+found
disconnected inode 17677386, would move to lost+found
disconnected inode 17677387, would move to lost+found
disconnected inode 17677388, would move to lost+found
disconnected inode 17677389, would move to lost+found
disconnected inode 17677390, would move to lost+found
disconnected inode 18249757, would move to lost+found
disconnected inode 18249758, would move to lost+found
disconnected inode 18249759, would move to lost+found
Phase 7 - verify link counts...
would have reset inode 29824865 nlinks from 6 to 5
No modify flag set, skipping filesystem flush and exiting.
From owner-xfs@oss.sgi.com Thu Mar 27 22:01:37 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 27 Mar 2008 22:01:45 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.2 required=5.0 tests=BAYES_99,URIBL_GREY autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2S51Xwf024493 for ; Thu, 27 Mar 2008 22:01:37 -0700 X-ASG-Debug-ID: 1206680526-40e603160000-w1Z2WR X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from me21507.mailengine1.com (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id C28359247B3 for ; Thu, 27 Mar 2008 22:02:06 -0700 (PDT) Received: from me21507.mailengine1.com (me21507.mailengine1.com [66.59.24.112]) by cuda.sgi.com with ESMTP id 93FiAQiSJL1t8tAg for ; Thu, 27 Mar 2008 22:02:06 -0700 (PDT) Received: by me21507.mailengine1.com (PowerMTA(TM) v3.2r22) id hthtss0cg8k0 for ; Thu, 27 Mar 2008 22:02:03 -0700 (envelope-from ) Content-Type: multipart/alternative; boundary="_----------=_1073964459106330" MIME-Version: 1.0 X-Mailer: StreamSend - 3326 X-Report-Abuse-At: abuse@streamsend.com 
X-Report-Abuse-Info: It is important to please include full email headers in the report X-Campaign-ID: 40 X-Streamsendid: 3326+5+1234845+40+me21507.mailengine1.com Date: Thu, 27 Mar 2008 22:02:03 -0700 From: "University of Bridgeport" To: linux-xfs@oss.sgi.com X-ASG-Orig-Subj: University of Bridgeport Graduate Programs and Research Opportunities Subject: University of Bridgeport Graduate Programs and Research Opportunities X-Barracuda-Connect: me21507.mailengine1.com[66.59.24.112] X-Barracuda-Start-Time: 1206680526 Message-Id: <20080328050206.C28359247B3@cuda.sgi.com> X-Barracuda-Bayes: INNOCENT GLOBAL 0.4970 1.0000 0.0000 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: 1.40 X-Barracuda-Spam-Status: No, SCORE=1.40 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=BSF_SC0_SA074b, BSF_SC5_SA210e, MSGID_FROM_MTA_ID X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.46112 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.70 MSGID_FROM_MTA_ID Message-Id for external message added locally 0.20 BSF_SC0_SA074b URI: Custom Rule SA074b 0.50 BSF_SC5_SA210e Custom Rule SA210e X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15070 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: noreply@bridgeportengineering.net Precedence: bulk X-list: xfs This is a multi-part message in MIME format. --_----------=_1073964459106330 Content-Transfer-Encoding: 7bit Content-Type: text/plain The Graduate Studies and Research Division at the University of Bridgeport (UB) offers several interdisciplinary graduate programs within the Colleges of Business, Engineering and Education & Human Resources. UB is currently the fastest growing university in the State of Connecticut and in New England. 
The growth in size, reputation, quality and research outcomes has been extraordinary. The Graduate Studies and Research Division (GSRD) of the University of Bridgeport offers new and exciting degree programs and research opportunities at the Masters and Doctorate levels, inter-disciplinary graduate degree concentrations, dual graduate degree options and professional graduate certificates in a variety of growth-oriented fields. These innovative inter-disciplinary programs provide students with a variety of career enhancement options responsive to growing employer and employee needs for multiple competencies and skills in today's and tomorrow's demanding global professional work force.

The division offers more than 200 Full-Paid Scholarships and/or Stipends, Graduate Assistantships, Teaching Assistantships, and Research Assistantships per year for graduate students. UB offers excellent post-graduation job placement opportunities for both domestic and international students, with more than 600 companies, school districts, research labs and industries that have employed our graduates in the last several years. For more information and/or to apply for our programs please visit http://www.bridgeportengineering.net

The inter-disciplinary graduate concentrations may be incorporated into the graduate programs offered by the Colleges of Business, Engineering and/or Education & Human Resources or embedded within a dual graduate degree program. Students can elect to enroll in one or more graduate degrees at the Masters level in Computer Science, Computer Engineering, Electrical Engineering, Mechanical Engineering, Technology Management, MBA (Masters of Business Administration) or the MSIT (Masters of Science in Instructional Technology) program. The division also offers a Ph.D. program in Computer Science and Computer Engineering, and an Ed.D. program in Education. 
The interdisciplinary graduate degree or certificate concentrations include: Accounting; Automation and Robotics; Bio-Tech Management; CAD/CAM; China/India Trade; Computer Communications and Networking; Corporate, Government and Information Security & Continuity Management; E-Commerce; Entrepreneurship and New Venture Creation; Environmental and Energy Management; Finance; Global Business; Global Marketing; Global Program and Project Management; Health Care Management & Administration; Human Resources Management; Information Technology; Intellectual Property Management; Management and Operations; Manufacturing Management; Modern Data Base Systems; New Product Development and Commercialization; Service Management and Engineering; Software Engineering; Strategic Sourcing and Vendor Management; Supply Chain Management; Wireless and Mobile Communications; VLSI Design; Bio-Medical Engineering; and Micro and Nano-Electro-Mechanical Systems.

UB's graduate programs and instructional philosophy emphasize real-life experiences through extensive hands-on laboratory-based training, co-ops and paid internships. The GSRD has established very strong relationships with industry in the last few years through an active Center for Interdisciplinary Business, Engineering and Technology Industry Advisory Board and intensive networking with local, regional and global companies.

As a unit of the GSRD, UB's School of Engineering (SOE) is the fastest growing School of Engineering in the nation (among 300+ accredited engineering schools), is home to the largest graduate engineering program in Connecticut with over 1,300 current graduate students, and is one of the four largest engineering programs in New England. 
The School of Engineering's recent accomplishments have been hailed in academia, the engineering community and the media as an amazing success story in the growth of academic quality, enrollment and research productivity among engineering schools in the country in the last 50 years. For more information and to enroll or apply to UB's graduate programs in Engineering, Computing, Technology, Business, Management and Education please visit http://www.bridgeportengineering.net Please do not reply to this message. To contact us or apply to our programs please follow the link above. Looking forward to hearing from you. Best Regards, Khaled Elleithy Associate Dean Graduate Studies and Research Division University of Bridgeport Click here on http://server1.streamsend.com/streamsend/unsubscribe.php?cd=3326&md=40&ud=e9b22c2cbd6ff0f41f7b76ea0134c9bb to update your profile or Unsubscribe --_----------=_1073964459106330-- From owner-xfs@oss.sgi.com Thu Mar 27 22:14:45 2008 Received: with ECARTIS (v1.0.0; list xfs); Thu, 27 Mar 2008 22:14:55 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00,J_CHICKENPOX_45 autolearn=no version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2S5EawT025708 for ; Thu, 27 Mar 2008 22:14:43 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA12734; Fri, 28 Mar 2008 16:14:58 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id m2S5EvsT111007517; Fri, 28 Mar 2008 16:14:57 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id m2S5Eu1T103363666; Fri, 28 Mar 2008 16:14:56 +1100 (AEDT) 
X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 28 Mar 2008 16:14:56 +1100 From: David Chinner To: Eric Sandeen Cc: markgw@sgi.com, xfs-oss , Christoph Hellwig Subject: Re: FYI: xfs problems in Fedora 8 updates Message-ID: <20080328051456.GH108924158@sgi.com> References: <47E3CE92.20803@sandeen.net> <47E8687A.90306@sgi.com> <47EC734D.2020304@sandeen.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <47EC734D.2020304@sandeen.net> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15071 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

On Thu, Mar 27, 2008 at 11:25:49PM -0500, Eric Sandeen wrote:
> Mark Goodwin wrote:
> >
> > Eric Sandeen wrote:
> >> https://bugzilla.redhat.com/show_bug.cgi?id=437968
> >> Bugzilla Bug 437968: Corrupt xfs root filesystem with kernel
> >> kernel-2.6.24.3-xx
> >>
> >> Just to give the sgi guys a heads up, 2 people have seen this now.
> >>
> >> I know it's a distro kernel but fedora is generally reasonably close to
> >> upstream.
> >>
> >> I'm looking into it but just wanted to put this on the list, too.
> >
> > Hi Eric, have you identified this as any particular known problem?
> >
> > Cheers
>
> >From a testcase and some git bisection, looks like this mod broke it
> somehow, but not sure how yet:
>
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=2bdf7cd0baa67608ada1517a281af359faf4c58c
>
> [XFS] superblock endianess annotations

Uh, what? Functionally that's a no-op....

Is this only showing up on attr=2 filesystems?

> p.s. testcase was updating "foomatic" in a fresh F8 root on my test
> box... w/ this mod in place, subsequent xfs_repair is very unhappy:
>
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
>         - scan filesystem freespace and inode maps...
>         - found root inode chunk
> Phase 3 - for each AG...
>         - scan (but don't clear) agi unlinked lists...
>         - process known inodes and perform inode discovery...
>         - agno = 0
>         - agno = 1
>         - agno = 2
>         - agno = 3
>         - agno = 4
> 41802940: Badness in key lookup (length)
> bp=(bno 0, len 512 bytes) key=(bno 0, len 4096 bytes)
> bad magic # 0x58465342 in inode 17627699 (data fork) bmbt block 0
> bad data fork in inode 17627699
> would have cleared inode 17627699

That's an inode bmbt block pointer that points to block zero. That's
why it read the superblock instead of a btree block.

> bad magic # 0x58465342 in inode 17627699 (data fork) bmbt block 0
> bad data fork in inode 17627699
> would have cleared inode 17627699
> entry "printer" in shortform directory 29824865 references free inode 17627699
> would have junked entry "printer" in directory inode 29824865
> No modify flag set, skipping phase 5
> Phase 6 - check inode connectivity...
>         - traversing filesystem ...
> entry "printer" in shortform directory inode 29824865 points to free
> inode 17627699 would junk entry
>         - traversal finished ...
>         - moving disconnected inodes to lost+found ...
> disconnected inode 17627698, would move to lost+found
> disconnected inode 17627700, would move to lost+found
> disconnected inode 17627701, would move to lost+found
.....

And it looks like it was a directory inode judging by all the
disconnected inodes. Looks like a corrupted directory extent btree
from this.

Can you run `xfs_db -r -c "inode 17627699" -c p ` so we can confirm
this?

Cheers,

Dave.
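The magic number in the repair output supports Dave's reading: 0x58465342 is big-endian ASCII for "XFSB", the XFS superblock magic, so the "bad magic" seen in the bmbt block really is the superblock being read through a zero block pointer. A minimal decoding sketch (not part of the original thread; the helper name is made up):

```c
/* Decode the "bad magic # 0x58465342" from the xfs_repair output above.
 * 0x58465342 is big-endian ASCII "XFSB" (the XFS superblock magic),
 * consistent with a bmbt block pointer of zero causing the superblock
 * to be read where a btree block was expected. */
#include <assert.h>

/* Unpack a big-endian on-disk magic number into a printable string. */
void magic_to_str(unsigned int magic, char out[5])
{
    out[0] = (char)((magic >> 24) & 0xff);
    out[1] = (char)((magic >> 16) & 0xff);
    out[2] = (char)((magic >> 8) & 0xff);
    out[3] = (char)(magic & 0xff);
    out[4] = '\0';
}
```

Feeding 0x58465342 to `magic_to_str()` yields "XFSB".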
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Fri Mar 28 02:01:54 2008
Date: Fri, 28 Mar 2008 18:02:24 +0900
From: Takashi Sato <t-sato@yk.jp.nec.com>
To: David Chinner
Cc: linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org, xfs@oss.sgi.com, dm-devel@redhat.com, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 0/2] freeze feature ver 1.0
Message-Id: <20080328180224t-sato@mail.jp.nec.com>

Hi,

David Chinner wrote:
> Can you please split this into two patches - one which introduces the
> generic functionality *without* the timeout stuff, and a second patch
> that introduces the timeouts.

OK. I split my patch into two patches as below.

[PATCH 1/2] Implement generic freeze feature

The ioctls for the generic freeze feature are below.

o Freeze the filesystem
  int ioctl(int fd, int FIFREEZE, arg)
  fd: The file descriptor of the mountpoint
  FIFREEZE: request code for the freeze
  arg: Ignored
  Return value: 0 if the operation succeeds. Otherwise, -1

o Unfreeze the filesystem
  int ioctl(int fd, int FITHAW, arg)
  fd: The file descriptor of the mountpoint
  FITHAW: request code for unfreeze
  arg: Ignored
  Return value: 0 if the operation succeeds. Otherwise, -1

[PATCH 2/2] Add timeout feature

The timeout feature is added to freeze ioctl. And new ioctl to reset
the timeout period is added.
o Freeze the filesystem
  int ioctl(int fd, int FIFREEZE, long *timeval)
  fd: The file descriptor of the mountpoint
  FIFREEZE: request code for the freeze
  timeval: the timeout period in seconds
    If it's 0 or 1, the timeout isn't set. This special case of "1" is
    implemented to keep the compatibility with XFS applications.
  Return value: 0 if the operation succeeds. Otherwise, -1

o Reset the timeout period
  This is useful for the application to set the timeval more accurately.
  For example, the freezer resets the timeval to 10 seconds every
  5 seconds. In this approach, even if the freezer causes a deadlock
  by accessing the frozen filesystem, it will be solved by the timeout
  in 10 seconds and the freezer can recognize that at the next reset
  of timeval.
  int ioctl(int fd, int FIFREEZE_RESET_TIMEOUT, long *timeval)
  fd: file descriptor of mountpoint
  FIFREEZE_RESET_TIMEOUT: request code for reset of timeout period
  timeval: new timeout period in seconds
  Return value: 0 if the operation succeeds. Otherwise, -1
  Error number: If the filesystem has already been unfrozen, errno is
  set to EINVAL.

Any comments are very welcome.
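The freeze/reset/thaw cycle described in the cover letter could be driven from user space roughly as follows. This is an illustrative sketch, not code from the patch set: the request codes are copied from the patches (FIFREEZE/FITHAW as _IOWR('X', 119/120, int) in patch 1/2, FIFREEZE_RESET_TIMEOUT as _IO(0x00, 3) in patch 2/2), freeze_with_heartbeat is a hypothetical helper, and a real run needs CAP_SYS_ADMIN plus a kernel carrying these patches.

```c
/* Sketch of the freeze/reset/thaw sequence from the cover letter:
 * freeze with a 10-second auto-thaw timeout, re-arm the timeout every
 * 5 seconds while work is in progress, then thaw. */
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

#ifndef FIFREEZE
#define FIFREEZE  _IOWR('X', 119, int)        /* from patch 1/2 */
#define FITHAW    _IOWR('X', 120, int)
#endif
#define FIFREEZE_RESET_TIMEOUT _IO(0x00, 3)   /* from patch 2/2 only */

/* Freeze mnt, keep the auto-thaw timeout re-armed for `beats` periods,
 * then thaw. Returns 0 on success, -1 on error. Hypothetical helper. */
int freeze_with_heartbeat(const char *mnt, int beats)
{
    long timeout = 10;                        /* seconds until auto-thaw */
    int fd = open(mnt, O_RDONLY);

    if (fd < 0)
        return -1;
    if (ioctl(fd, FIFREEZE, &timeout) < 0) {
        close(fd);
        return -1;
    }
    while (beats-- > 0) {
        sleep(5);                             /* ... do snapshot work ... */
        if (ioctl(fd, FIFREEZE_RESET_TIMEOUT, &timeout) < 0)
            break;                            /* EINVAL: already thawed */
    }
    ioctl(fd, FITHAW, 0);
    close(fd);
    return 0;
}
```

If the freezer dies between freeze and thaw, the pending timeout thaws the filesystem within the timeout period, which is the failure mode the timeout feature is meant to cover.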
Cheers, Takashi

From owner-xfs@oss.sgi.com Fri Mar 28 02:04:57 2008
Date: Fri, 28 Mar 2008 18:05:22 +0900
From: Takashi Sato <t-sato@yk.jp.nec.com>
To: David Chinner
Cc: linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org, xfs@oss.sgi.com, dm-devel@redhat.com, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 1/2] Implement generic freeze feature
Message-Id: <20080328180522t-sato@mail.jp.nec.com>

The ioctls for the generic freeze feature are below.

o Freeze the filesystem
  int ioctl(int fd, int FIFREEZE, arg)
  fd: The file descriptor of the mountpoint
  FIFREEZE: request code for the freeze
  arg: Ignored
  Return value: 0 if the operation succeeds. Otherwise, -1

o Unfreeze the filesystem
  int ioctl(int fd, int FITHAW, arg)
  fd: The file descriptor of the mountpoint
  FITHAW: request code for unfreeze
  arg: Ignored
  Return value: 0 if the operation succeeds.
Otherwise, -1

Signed-off-by: Takashi Sato
---
 fs/block_dev.c     |    3 +++
 fs/buffer.c        |   25 +++++++++++++++++++++++++
 fs/ioctl.c         |   35 +++++++++++++++++++++++++++++++++++
 fs/super.c         |   32 +++++++++++++++++++++++++++++++-
 include/linux/fs.h |    7 +++++++

diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7.org/fs/block_dev.c linux-2.6.25-rc7-freeze/fs/block_dev.c
--- linux-2.6.25-rc7.org/fs/block_dev.c	2008-03-26 10:38:14.000000000 +0900
+++ linux-2.6.25-rc7-freeze/fs/block_dev.c	2008-03-27 09:26:36.000000000 +0900
@@ -284,6 +284,9 @@ static void init_once(struct kmem_cache
 	INIT_LIST_HEAD(&bdev->bd_holder_list);
 #endif
 	inode_init_once(&ei->vfs_inode);
+
+	/* Initialize semaphore for freeze. */
+	sema_init(&bdev->bd_freeze_sem, 1);
 }

 static inline void __bd_forget(struct inode *inode)
diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7.org/fs/buffer.c linux-2.6.25-rc7-freeze/fs/buffer.c
--- linux-2.6.25-rc7.org/fs/buffer.c	2008-03-26 10:38:14.000000000 +0900
+++ linux-2.6.25-rc7-freeze/fs/buffer.c	2008-03-26 20:32:23.000000000 +0900
@@ -201,6 +201,19 @@ struct super_block *freeze_bdev(struct b
 {
 	struct super_block *sb;

+	down(&bdev->bd_freeze_sem);
+	sb = get_super_without_lock(bdev);
+
+	/* If super_block has been already frozen, return. */
+	if (sb && sb->s_frozen != SB_UNFROZEN) {
+		put_super(sb);
+		up(&bdev->bd_freeze_sem);
+		return sb;
+	}
+
+	if (sb)
+		put_super(sb);
+
 	down(&bdev->bd_mount_sem);
 	sb = get_super(bdev);
 	if (sb && !(sb->s_flags & MS_RDONLY)) {
@@ -219,6 +232,9 @@ struct super_block *freeze_bdev(struct b
 	}

 	sync_blockdev(bdev);
+
+	up(&bdev->bd_freeze_sem);
+
 	return sb;	/* thaw_bdev releases s->s_umount and bd_mount_sem */
 }
 EXPORT_SYMBOL(freeze_bdev);
@@ -232,6 +248,13 @@ EXPORT_SYMBOL(freeze_bdev);
  */
 void thaw_bdev(struct block_device *bdev, struct super_block *sb)
 {
+	down(&bdev->bd_freeze_sem);
+
+	if (sb && sb->s_frozen == SB_UNFROZEN) {
+		up(&bdev->bd_freeze_sem);
+		return;
+	}
+
 	if (sb) {
 		BUG_ON(sb->s_bdev != bdev);
@@ -244,6 +267,8 @@ void thaw_bdev(struct block_device *bdev
 	}

 	up(&bdev->bd_mount_sem);
+
+	up(&bdev->bd_freeze_sem);
 }
 EXPORT_SYMBOL(thaw_bdev);
diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7.org/fs/ioctl.c linux-2.6.25-rc7-freeze/fs/ioctl.c
--- linux-2.6.25-rc7.org/fs/ioctl.c	2008-03-26 10:38:14.000000000 +0900
+++ linux-2.6.25-rc7-freeze/fs/ioctl.c	2008-03-26 20:22:17.000000000 +0900
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
@@ -181,6 +182,40 @@ int do_vfs_ioctl(struct file *filp, unsi
 	} else
 		error = -ENOTTY;
 	break;
+
+	case FIFREEZE: {
+		struct super_block *sb = filp->f_path.dentry->d_inode->i_sb;
+
+		if (!capable(CAP_SYS_ADMIN)) {
+			error = -EPERM;
+			break;
+		}
+
+		/* If filesystem doesn't support freeze feature, return. */
+		if (sb->s_op->write_super_lockfs == NULL) {
+			error = -EINVAL;
+			break;
+		}
+
+		/* Freeze. */
+		freeze_bdev(sb->s_bdev);
+
+		break;
+	}
+
+	case FITHAW: {
+		struct super_block *sb = filp->f_path.dentry->d_inode->i_sb;
+
+		if (!capable(CAP_SYS_ADMIN)) {
+			error = -EPERM;
+			break;
+		}
+
+		/* Thaw. */
+		thaw_bdev(sb->s_bdev, sb);
+		break;
+	}
+
 	default:
 		if (S_ISREG(filp->f_path.dentry->d_inode->i_mode))
 			error = file_ioctl(filp, cmd, arg);
diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7.org/fs/super.c linux-2.6.25-rc7-freeze/fs/super.c
--- linux-2.6.25-rc7.org/fs/super.c	2008-03-26 10:38:14.000000000 +0900
+++ linux-2.6.25-rc7-freeze/fs/super.c	2008-03-26 20:23:21.000000000 +0900
@@ -154,7 +154,7 @@ int __put_super_and_need_restart(struct
  * Drops a temporary reference, frees superblock if there's no
  * references left.
  */
-static void put_super(struct super_block *sb)
+void put_super(struct super_block *sb)
 {
 	spin_lock(&sb_lock);
 	__put_super(sb);
@@ -507,6 +507,36 @@ rescan:
 EXPORT_SYMBOL(get_super);

+/*
+ * get_super_without_lock - Get super_block from block_device without lock.
+ * @bdev: block device struct
+ *
+ * Scan the superblock list and finds the superblock of the file system
+ * mounted on the block device given. This doesn't lock anyone.
+ * %NULL is returned if no match is found.
+ */
+struct super_block *get_super_without_lock(struct block_device *bdev)
+{
+	struct super_block *sb;
+
+	if (!bdev)
+		return NULL;
+
+	spin_lock(&sb_lock);
+	list_for_each_entry(sb, &super_blocks, s_list) {
+		if (sb->s_bdev == bdev) {
+			if (sb->s_root) {
+				sb->s_count++;
+				spin_unlock(&sb_lock);
+				return sb;
+			}
+		}
+	}
+	spin_unlock(&sb_lock);
+	return NULL;
+}
+EXPORT_SYMBOL(get_super_without_lock);
+
 struct super_block * user_get_super(dev_t dev)
 {
 	struct super_block *sb;
diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7.org/include/linux/fs.h linux-2.6.25-rc7-freeze/include/linux/fs.h
--- linux-2.6.25-rc7.org/include/linux/fs.h	2008-03-26 10:38:14.000000000 +0900
+++ linux-2.6.25-rc7-freeze/include/linux/fs.h	2008-03-26 20:27:44.000000000 +0900
@@ -223,6 +223,8 @@ extern int dir_notify_enable;
 #define BMAP_IOCTL 1	/* obsolete - kept for compatibility */
 #define FIBMAP	   _IO(0x00,1)	/* bmap access */
 #define FIGETBSZ   _IO(0x00,2)	/* get the block size used for bmap */
+#define FIFREEZE	_IOWR('X', 119, int)	/* Freeze */
+#define FITHAW		_IOWR('X', 120, int)	/* Thaw */

 #define	FS_IOC_GETFLAGS		_IOR('f', 1, long)
 #define	FS_IOC_SETFLAGS		_IOW('f', 2, long)
@@ -548,6 +550,9 @@ struct block_device {
 	 * care to not mess up bd_private for that case.
 	 */
 	unsigned long bd_private;
+
+	/* Semaphore for freeze */
+	struct semaphore bd_freeze_sem;
 };

 /*
@@ -1926,7 +1931,9 @@ extern int do_vfs_ioctl(struct file *fil
 extern void get_filesystem(struct file_system_type *fs);
 extern void put_filesystem(struct file_system_type *fs);
 extern struct file_system_type *get_fs_type(const char *name);
+extern void put_super(struct super_block *sb);
 extern struct super_block *get_super(struct block_device *);
+extern struct super_block *get_super_without_lock(struct block_device *);
 extern struct super_block *user_get_super(dev_t);
 extern void drop_super(struct super_block *sb);

From owner-xfs@oss.sgi.com Fri Mar 28 02:17:32 2008
Date: Fri, 28 Mar 2008 18:07:36 +0900
From: Takashi Sato <t-sato@yk.jp.nec.com>
To: David Chinner
Cc: linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org, xfs@oss.sgi.com, dm-devel@redhat.com, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 2/2] Add timeout feature
Message-Id: <20080328180736t-sato@mail.jp.nec.com>

The timeout feature is added to freeze ioctl. And new ioctl to reset
the timeout period is added.
o Freeze the filesystem
  int ioctl(int fd, int FIFREEZE, long *timeval)
  fd: The file descriptor of the mountpoint
  FIFREEZE: request code for the freeze
  timeval: the timeout period in seconds
    If it's 0 or 1, the timeout isn't set. This special case of "1" is
    implemented to keep the compatibility with XFS applications.
  Return value: 0 if the operation succeeds. Otherwise, -1

o Reset the timeout period
  int ioctl(int fd, int FIFREEZE_RESET_TIMEOUT, long *timeval)
  fd: file descriptor of mountpoint
  FIFREEZE_RESET_TIMEOUT: request code for reset of timeout period
  timeval: new timeout period in seconds
  Return value: 0 if the operation succeeds. Otherwise, -1
  Error number: If the filesystem has already been unfrozen, errno is
  set to EINVAL.

Signed-off-by: Takashi Sato
---
 drivers/md/dm.c              |    2 -
 fs/block_dev.c               |    2 +
 fs/buffer.c                  |   14 ++++++++-
 fs/ioctl.c                   |   64 ++++++++++++++++++++++++++++++++++++++++++-
 fs/super.c                   |   52 ++++++++++++++++++++++++++++++++++
 fs/xfs/linux-2.6/xfs_ioctl.c |    2 -
 fs/xfs/xfs_fsops.c           |    2 -
 include/linux/buffer_head.h  |    2 -
 include/linux/fs.h           |    8 +++++
 9 files changed, 141 insertions(+), 7 deletions(-)

diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7-freeze/drivers/md/dm.c linux-2.6.25-rc7-timeout/drivers/md/dm.c
--- linux-2.6.25-rc7-freeze/drivers/md/dm.c	2008-03-26 20:14:00.000000000 +0900
+++ linux-2.6.25-rc7-timeout/drivers/md/dm.c	2008-03-26 20:10:07.000000000 +0900
@@ -1407,7 +1407,7 @@ static int lock_fs(struct mapped_device

 	WARN_ON(md->frozen_sb);

-	md->frozen_sb = freeze_bdev(md->suspended_bdev);
+	md->frozen_sb = freeze_bdev(md->suspended_bdev, 0);
 	if (IS_ERR(md->frozen_sb)) {
 		r = PTR_ERR(md->frozen_sb);
 		md->frozen_sb = NULL;
diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7-freeze/fs/block_dev.c linux-2.6.25-rc7-timeout/fs/block_dev.c
--- linux-2.6.25-rc7-freeze/fs/block_dev.c	2008-03-27 09:26:36.000000000 +0900
+++ linux-2.6.25-rc7-timeout/fs/block_dev.c	2008-03-26 20:10:19.000000000 +0900
@@ -287,6 +287,8 @@ static void init_once(struct kmem_cache
 	/* Initialize semaphore for freeze. */
 	sema_init(&bdev->bd_freeze_sem, 1);
+	/* Setup freeze timeout function. */
+	INIT_DELAYED_WORK(&bdev->bd_freeze_timeout, freeze_timeout);
 }

 static inline void __bd_forget(struct inode *inode)
diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7-freeze/fs/buffer.c linux-2.6.25-rc7-timeout/fs/buffer.c
--- linux-2.6.25-rc7-freeze/fs/buffer.c	2008-03-26 20:32:23.000000000 +0900
+++ linux-2.6.25-rc7-timeout/fs/buffer.c	2008-03-26 20:10:19.000000000 +0900
@@ -190,14 +190,17 @@ int fsync_bdev(struct block_device *bdev

 /**
  * freeze_bdev -- lock a filesystem and force it into a consistent state
- * @bdev: blockdevice to lock
+ * @bdev: blockdevice to lock
+ * @timeout_msec: timeout period
  *
  * This takes the block device bd_mount_sem to make sure no new mounts
  * happen on bdev until thaw_bdev() is called.
  * If a superblock is found on this device, we take the s_umount semaphore
  * on it to make sure nobody unmounts until the snapshot creation is done.
+ * If timeout_msec is bigger than 0, this registers the delayed work for
+ * timeout of the freeze feature.
  */
-struct super_block *freeze_bdev(struct block_device *bdev)
+struct super_block *freeze_bdev(struct block_device *bdev, long timeout_msec)
 {
 	struct super_block *sb;
@@ -233,6 +236,10 @@ struct super_block *freeze_bdev(struct b

 	sync_blockdev(bdev);

+	/* Setup unfreeze timer. */
+	if (timeout_msec > 0)
+		add_freeze_timeout(bdev, timeout_msec);
+
 	up(&bdev->bd_freeze_sem);

 	return sb;	/* thaw_bdev releases s->s_umount and bd_mount_sem */
@@ -255,6 +262,9 @@ void thaw_bdev(struct block_device *bdev
 		return;
 	}

+	/* Delete unfreeze timer. */
+	del_freeze_timeout(bdev);
+
 	if (sb) {
 		BUG_ON(sb->s_bdev != bdev);
diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7-freeze/fs/ioctl.c linux-2.6.25-rc7-timeout/fs/ioctl.c
--- linux-2.6.25-rc7-freeze/fs/ioctl.c	2008-03-26 20:22:17.000000000 +0900
+++ linux-2.6.25-rc7-timeout/fs/ioctl.c	2008-03-26 20:10:19.000000000 +0900
@@ -184,6 +184,8 @@ int do_vfs_ioctl(struct file *filp, unsi
 		break;

 	case FIFREEZE: {
+		long timeout_sec;
+		long timeout_msec;
 		struct super_block *sb = filp->f_path.dentry->d_inode->i_sb;

 		if (!capable(CAP_SYS_ADMIN)) {
@@ -197,8 +199,31 @@ int do_vfs_ioctl(struct file *filp, unsi
 			break;
 		}

+		/* arg(sec) to tick value. */
+		error = get_user(timeout_sec, (long __user *) arg);
+		if (error != 0)
+			break;
+		/*
+		 * If 1 is specified as the timeout period,
+		 * it will be changed into 0 to keep the compatibility
+		 * of XFS application(xfs_freeze).
+		 */
+		if (timeout_sec < 0) {
+			error = -EINVAL;
+			break;
+		} else if (timeout_sec < 2) {
+			timeout_sec = 0;
+		}
+
+		timeout_msec = timeout_sec * 1000;
+		/* overflow case */
+		if (timeout_msec < 0) {
+			error = -EINVAL;
+			break;
+		}
+
 		/* Freeze. */
-		freeze_bdev(sb->s_bdev);
+		freeze_bdev(sb->s_bdev, timeout_msec);

 		break;
 	}
@@ -216,6 +241,43 @@ int do_vfs_ioctl(struct file *filp, unsi
 		break;
 	}

+	case FIFREEZE_RESET_TIMEOUT: {
+		long timeout_sec;
+		long timeout_msec;
+		struct super_block *sb
+			= filp->f_path.dentry->d_inode->i_sb;
+
+		if (!capable(CAP_SYS_ADMIN)) {
+			error = -EPERM;
+			break;
+		}
+
+		/* arg(sec) to tick value */
+		error = get_user(timeout_sec, (long __user *) arg);
+		if (error)
+			break;
+		timeout_msec = timeout_sec * 1000;
+		if (timeout_msec < 0) {
+			error = -EINVAL;
+			break;
+		}
+
+		if (sb) {
+			down(&sb->s_bdev->bd_freeze_sem);
+			if (sb->s_frozen == SB_UNFROZEN) {
+				up(&sb->s_bdev->bd_freeze_sem);
+				error = -EINVAL;
+				break;
+			}
+			/* setup unfreeze timer */
+			if (timeout_msec > 0)
+				add_freeze_timeout(sb->s_bdev,
+						   timeout_msec);
+			up(&sb->s_bdev->bd_freeze_sem);
+		}
+		break;
+	}
+
 	default:
 		if (S_ISREG(filp->f_path.dentry->d_inode->i_mode))
 			error = file_ioctl(filp, cmd, arg);
diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7-freeze/fs/super.c linux-2.6.25-rc7-timeout/fs/super.c
--- linux-2.6.25-rc7-freeze/fs/super.c	2008-03-26 20:23:21.000000000 +0900
+++ linux-2.6.25-rc7-timeout/fs/super.c	2008-03-26 20:10:19.000000000 +0900
@@ -983,3 +983,55 @@ struct vfsmount *kern_mount_data(struct
 }
 EXPORT_SYMBOL_GPL(kern_mount_data);

+
+/*
+ * freeze_timeout - Thaw the filesystem.
+ *
+ * @work: work queue (delayed_work.work)
+ *
+ * Called by the delayed work when elapsing the timeout period.
+ * Thaw the filesystem.
+ */
+void freeze_timeout(struct work_struct *work)
+{
+	struct block_device *bd = container_of(work,
+			struct block_device, bd_freeze_timeout.work);
+
+	struct super_block *sb = get_super_without_lock(bd);
+
+	thaw_bdev(bd, sb);
+
+	if (sb)
+		put_super(sb);
+}
+EXPORT_SYMBOL_GPL(freeze_timeout);
+
+/*
+ * add_freeze_timeout - Add timeout for freeze.
+ *
+ * @bdev: block device struct
+ * @timeout_msec: timeout period
+ *
+ * Add the delayed work for freeze timeout to the delayed work queue.
+ */
+void add_freeze_timeout(struct block_device *bdev, long timeout_msec)
+{
+	s64 timeout_jiffies = msecs_to_jiffies(timeout_msec);
+
+	/* Set delayed work queue */
+	cancel_delayed_work(&bdev->bd_freeze_timeout);
+	schedule_delayed_work(&bdev->bd_freeze_timeout, timeout_jiffies);
+}
+
+/*
+ * del_freeze_timeout - Delete timeout for freeze.
+ *
+ * @bdev: block device struct
+ *
+ * Delete the delayed work for freeze timeout from the delayed work queue.
+ */
+void del_freeze_timeout(struct block_device *bdev)
+{
+	if (delayed_work_pending(&bdev->bd_freeze_timeout))
+		cancel_delayed_work(&bdev->bd_freeze_timeout);
+}
diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7-freeze/fs/xfs/linux-2.6/xfs_ioctl.c linux-2.6.25-rc7-timeout/fs/xfs/linux-2.6/xfs_ioctl.c
--- linux-2.6.25-rc7-freeze/fs/xfs/linux-2.6/xfs_ioctl.c	2008-03-26 20:23:59.000000000 +0900
+++ linux-2.6.25-rc7-timeout/fs/xfs/linux-2.6/xfs_ioctl.c	2008-03-26 20:10:19.000000000 +0900
@@ -911,7 +911,7 @@ xfs_ioctl(
 			return -EPERM;

 		if (inode->i_sb->s_frozen == SB_UNFROZEN)
-			freeze_bdev(inode->i_sb->s_bdev);
+			freeze_bdev(inode->i_sb->s_bdev, 0);
 		return 0;

 	case XFS_IOC_THAW:
diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7-freeze/fs/xfs/xfs_fsops.c linux-2.6.25-rc7-timeout/fs/xfs/xfs_fsops.c
--- linux-2.6.25-rc7-freeze/fs/xfs/xfs_fsops.c	2008-03-26 20:24:44.000000000 +0900
+++ linux-2.6.25-rc7-timeout/fs/xfs/xfs_fsops.c	2008-03-26 20:10:19.000000000 +0900
@@ -623,7 +623,7 @@ xfs_fs_goingdown(
 {
 	switch (inflags) {
 	case XFS_FSOP_GOING_FLAGS_DEFAULT: {
-		struct super_block *sb = freeze_bdev(mp->m_super->s_bdev);
+		struct super_block *sb = freeze_bdev(mp->m_super->s_bdev, 0);

 		if (sb && !IS_ERR(sb)) {
 			xfs_force_shutdown(mp, SHUTDOWN_FORCE_UMOUNT);
diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7-freeze/include/linux/buffer_head.h linux-2.6.25-rc7-timeout/include/linux/buffer_head.h
--- linux-2.6.25-rc7-freeze/include/linux/buffer_head.h	2008-03-26 20:25:16.000000000 +0900
+++ linux-2.6.25-rc7-timeout/include/linux/buffer_head.h	2008-03-26 20:10:20.000000000 +0900
@@ -170,7 +170,7 @@ int sync_blockdev(struct block_device *b
 void __wait_on_buffer(struct buffer_head *);
 wait_queue_head_t *bh_waitq_head(struct buffer_head *bh);
 int fsync_bdev(struct block_device *);
-struct super_block *freeze_bdev(struct block_device *);
+struct super_block *freeze_bdev(struct block_device *, long timeout_msec);
 void thaw_bdev(struct block_device *, struct super_block *);
 int fsync_super(struct super_block *);
 int fsync_no_super(struct block_device *);
diff -uprN -X /home/sho/pub/MC/freeze-set/dontdiff linux-2.6.25-rc7-freeze/include/linux/fs.h linux-2.6.25-rc7-timeout/include/linux/fs.h
--- linux-2.6.25-rc7-freeze/include/linux/fs.h	2008-03-26 20:27:44.000000000 +0900
+++ linux-2.6.25-rc7-timeout/include/linux/fs.h	2008-03-26 20:10:20.000000000 +0900
@@ -8,6 +8,7 @@

 #include
 #include
+#include

 /*
  * It's silly to have NR_OPEN bigger than NR_FILE, but you can change
@@ -225,6 +226,7 @@ extern int dir_notify_enable;
 #define FIGETBSZ   _IO(0x00,2)	/* get the block size used for bmap */
 #define FIFREEZE	_IOWR('X', 119, int)	/* Freeze */
 #define FITHAW		_IOWR('X', 120, int)	/* Thaw */
+#define FIFREEZE_RESET_TIMEOUT	_IO(0x00, 3)	/* Reset freeze timeout */

 #define	FS_IOC_GETFLAGS		_IOR('f', 1, long)
 #define	FS_IOC_SETFLAGS		_IOW('f', 2, long)
@@ -551,6 +553,8 @@ struct block_device {
 	 */
 	unsigned long bd_private;

+	/* Delayed work for freeze */
+	struct delayed_work bd_freeze_timeout;
 	/* Semaphore for freeze */
 	struct semaphore bd_freeze_sem;
 };
@@ -2104,5 +2108,9 @@ int proc_nr_files(struct ctl_table *tabl

 int get_filesystem_list(char * buf);

+extern void add_freeze_timeout(struct block_device *bdev, long timeout_msec);
+extern void del_freeze_timeout(struct block_device *bdev);
+extern void freeze_timeout(struct work_struct *work);
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_FS_H */

From owner-xfs@oss.sgi.com Fri Mar 28 04:02:40 2008
Date: Fri, 28 Mar 2008 18:01:45 +0900
From: Takashi Sato <t-sato@yk.jp.nec.com>
To: David Chinner
Cc: linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org, xfs@oss.sgi.com, dm-devel@redhat.com, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH] freeze feature ver 1.0
Message-Id: <20080328180145t-sato@mail.jp.nec.com>

Hi,

David Chinner wrote:
> Can you please split this into two patches - one which introduces the
> generic functionality *without* the timeout stuff, and a second patch
> that introduces the timeouts.

OK. I will send the split patches in subsequent mails.

> I think this timeout stuff is dangerous - it adds significant
> complexity and really does not protect against anything that can't
> be done in userspace. i.e. If your system is running well enough
> for the timer to fire and unfreeze the filesystem, it's running well
> enough for you to do "freeze X; sleep Y; unfreeze X".

If the process is terminated at "sleep Y" by an unexpected accident
(e.g. signals), the filesystem will be left frozen. So, I think the
timeout is needed to unfreeze more definitely.

> FWIW, there is nothing to guarantee that the filesystem has finished
> freezing when the timeout fires (it's not uncommon to see
> freeze_bdev() taking *minutes*) and unfreezing in the middle of a
> freeze operation will cause problems - either for the filesystem
> in the middle of a freeze operation, or for whatever is freezing the
> filesystem to get a consistent image.....

Do you mention the freeze_bdev()'s hang? The salvage target of my
timeout is freeze process's accident as below.
- It is killed before calling the unfreeze ioctl.
- It causes a deadlock by accessing the frozen filesystem.

So in my patches the delayed work for the timeout is set up after all
of the freeze operations in freeze_bdev().  I think the filesystem
dependent code (the write_super_lockfs operation) should be implemented
so that it cannot hang.

Cheers, Takashi

From owner-xfs@oss.sgi.com Fri Mar 28 05:22:12 2008
From: Eric Sandeen <sandeen@sandeen.net>
Date: Fri, 28 Mar 2008 07:22:12 -0500
To: David Chinner
Cc: markgw@sgi.com, xfs-oss, Christoph Hellwig
Subject: Re: FYI: xfs problems in Fedora 8 updates
Message-ID: <47ECE2F4.7040606@sandeen.net>
In-Reply-To: <20080328051456.GH108924158@sgi.com>

David Chinner wrote:
> On Thu, Mar 27, 2008 at 11:25:49PM -0500, Eric Sandeen wrote:
>> Mark Goodwin wrote:
>>> Eric Sandeen wrote:
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=437968
>>>> Bugzilla Bug 437968: Corrupt xfs root filesystem with kernel
>>>> kernel-2.6.24.3-xx
>>>>
>>>> Just to give the sgi guys a heads up, 2 people have seen this now.
>>>>
>>>> I know it's a distro kernel but fedora is generally reasonably
>>>> close to upstream.
>>>>
>>>> I'm looking into it but just wanted to put this on the list, too.
>>> Hi Eric, have you identified this as any particular known problem?
>>>
>>> Cheers
>> From a testcase and some git bisection, looks like this mod broke it
>> somehow, but not sure how yet:
>>
>> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=2bdf7cd0baa67608ada1517a281af359faf4c58c
>>
>> [XFS] superblock endianess annotations
>
> Uh, what? Functionally that's a no-op....

Well, it's supposed to be... but I guess that's why they call it a
bug? :)  Really and truly, that's where bisect landed, and I tested a
few times with and without the patch, which seems to confirm it.

> Is this only showing up on attr=2 filesystems?

A little hard to say, since it has the 64-bit features2 padding
problem... but:

[root@bear-05 test]# xfs_db -c version testfs
versionnum [0xb094+0x8] = V4,ATTR,ALIGN,DIRV2,EXTFLG,MOREBITS,ATTR2

I can try later on non-attr2... this fs was originally created by the
installer.

> And it looks like it was a directory inode judging by all the
> disconnected inodes. Looks like a corrupted directory extent btree
> from this. Can you run `xfs_db -r -c "inode 17627699" -c p `
> so we can confirm this?

Sorry, meant to do that:

# xfs_db -r -c "inode 17627699" -c p testfs
core.magic = 0x494e
core.mode = 040755
core.version = 1
core.format = 3 (btree)
core.nlinkv1 = 2
core.uid = 0
core.gid = 0
core.flushiter = 2
core.atime.sec = Wed Jan  9 12:14:05 2008
core.atime.nsec = 000000000
core.mtime.sec = Fri Mar 28 07:16:54 2008
core.mtime.nsec = 590127668
core.ctime.sec = Fri Mar 28 07:16:54 2008
core.ctime.nsec = 590127668
core.size = 81920
core.nblocks = 30
core.extsize = 0
core.nextents = 23
core.naextents = 1
core.forkoff = 15
core.aformat = 2 (extents)
core.dmevmask = 0
core.dmstate = 0
core.newrtbm = 0
core.prealloc = 0
core.realtime = 0
core.immutable = 0
core.append = 0
core.sync = 0
core.noatime = 0
core.nodump = 0
core.rtinherit = 0
core.projinherit = 0
core.nosymlinks = 0
core.extsz = 0
core.extszinherit = 0
core.nodefrag = 0
core.filestream = 0
core.gen = 0
next_unlinked = null
u.bmbt.level = 1
u.bmbt.numrecs = 1
u.bmbt.keys[1] = [startoff] 1:[0]
u.bmbt.ptrs[1] = 1:0
a.bmx[0] = [startoff,startblock,blockcount,extentflag] 0:[0,1103697,1,0]

-Eric

From owner-xfs@oss.sgi.com Fri Mar 28 08:43:00 2008
From: Ruben Porras <ruben.porras@linworks.de>
Date: Fri, 28 Mar 2008 16:43:31 +0100
To: xfs@oss.sgi.com
Subject: [PATCH] do not test return value of xfs_bmap_*_count_leaves
Message-Id: <1206719011.8339.6.camel@marzo>

These functions, xfs_bmap_count_leaves and xfs_bmap_disk_count_leaves,
always return 0.  Therefore it is not necessary to test the return
value.

Regards

Index: fs/xfs/xfs_bmap.c
===================================================================
RCS file: /cvs/linux-2.6-xfs/fs/xfs/xfs_bmap.c,v
retrieving revision 1.383
diff -u -r1.383 xfs_bmap.c
--- fs/xfs/xfs_bmap.c	6 Feb 2008 05:18:18 -0000	1.383
+++ fs/xfs/xfs_bmap.c	28 Mar 2008 15:39:59 -0000
@@ -6361,13 +6361,9 @@
 	mp = ip->i_mount;
 	ifp = XFS_IFORK_PTR(ip, whichfork);
 	if ( XFS_IFORK_FORMAT(ip, whichfork) == XFS_DINODE_FMT_EXTENTS ) {
-		if (unlikely(xfs_bmap_count_leaves(ifp, 0,
+		xfs_bmap_count_leaves(ifp, 0,
 			ifp->if_bytes / (uint)sizeof(xfs_bmbt_rec_t),
-			count) < 0)) {
-			XFS_ERROR_REPORT("xfs_bmap_count_blocks(1)",
-					 XFS_ERRLEVEL_LOW, mp);
-			return XFS_ERROR(EFSCORRUPTED);
-		}
+			count);
 		return 0;
 	}
@@ -6448,13 +6444,7 @@
 	for (;;) {
 		nextbno = be64_to_cpu(block->bb_rightsib);
 		numrecs = be16_to_cpu(block->bb_numrecs);
-		if (unlikely(xfs_bmap_disk_count_leaves(0,
-			block, numrecs, count) < 0)) {
-			xfs_trans_brelse(tp, bp);
-			XFS_ERROR_REPORT("xfs_bmap_count_tree(2)",
-					 XFS_ERRLEVEL_LOW, mp);
-			return XFS_ERROR(EFSCORRUPTED);
-		}
+		xfs_bmap_disk_count_leaves(0, block, numrecs, count);
 		xfs_trans_brelse(tp, bp);
 		if (nextbno == NULLFSBLOCK)
 			break;

From owner-xfs@oss.sgi.com Fri Mar 28 08:46:01 2008
From: "Josef 'Jeff' Sipek" <jeffpc@josefsipek.net>
Date: Fri, 28 Mar 2008 11:46:05 -0400
To: Ruben Porras
Cc: xfs@oss.sgi.com
Subject: Re: [PATCH] do not test return value of xfs_bmap_*_count_leaves
Message-ID: <20080328154605.GA28322@josefsipek.net>
In-Reply-To: <1206719011.8339.6.camel@marzo>

On Fri, Mar 28, 2008 at 04:43:31PM +0100, Ruben Porras wrote:
> These functions, xfs_bmap_count_leaves and xfs_bmap_disk_count_leaves,
> always return 0.  Therefore it is not necessary to test the return
> value.

If it always returns 0, why not make it void?

Josef 'Jeff' Sipek.

--
Hegh QaQ law' quvHa'ghach QaQ puS

From owner-xfs@oss.sgi.com Fri Mar 28 20:25:05 2008
From: Eric Sandeen <sandeen@sandeen.net>
Date: Fri, 28 Mar 2008 22:25:33 -0500
Message-ID: <47EDB6AD.4070604@sandeen.net>
To: David Chinner
Cc: xfs-dev, xfs-oss
Subject: Re: [patch] detect and correct bad features2 superblock field
In-Reply-To: <20080220054041.GM155407@sgi.com>

David Chinner wrote:
> There is a bug in mkfs.xfs that can result in writing the features2
> field in the superblock to the wrong location. This only occurs
> on some architectures, typically those with 32 bit userspace and
> 64 bit kernels.
>
> This patch detects the defect at mount time, logs a warning
> such as:
...
> 	/*
> 	 * Check for a bad features2 field alignment.  This happened on
> 	 * some platforms due to xfs_sb_t not being 64bit size aligned
> 	 * when sb_features2 was added and hence the compiler put it in
> 	 * the wrong place.
> 	 *
> 	 * If we detect a bad field, we or the set bits into the existing
> 	 * features2 field in case it has already been modified and we
> 	 * don't want to lose any features.  Zero the bad one and mark
> 	 * the two fields as needing updates once the transaction subsystem
> 	 * is online.
> 	 */
> 	if (xfs_sb_has_bad_features2(sbp)) {
> 		cmn_err(CE_WARN,
> 			"XFS: correcting sb_features alignment problem");
> 		sbp->sb_features2 |= sbp->sb_bad_features2;
> 		sbp->sb_bad_features2 = 0;
> 		update_flags |= XFS_SB_FEATURES2 | XFS_SB_BAD_FEATURES2;
> 	}

I think there's a minor problem here: while this will update the
superblock with the proper features2 values, features2 has already been
checked by the time this runs, so mp->m_flags won't have, for example,
the attr2 flag...

So attr2 will show up on the next mount, but not on this one.

This probably wouldn't normally matter, except in a weird corner case I
think I've found:

- x86_64 set attr2 in bad_features2;
- bad_features2 was found, kernel & userspace padded the same way;
- the filesystem was created and attr2 attributes were written;
- hch's sb endianness annotation then made bad_features2 *not* found;
- every mount after that thinks there is no attr2;
- another corner-case bug corrupts the fs, which would be avoided if
  attr2 were not lost.
-Eric

From owner-xfs@oss.sgi.com Fri Mar 28 21:55:54 2008
From: Eric Sandeen <sandeen@sandeen.net>
Date: Fri, 28 Mar 2008 23:56:25 -0500
To: xfs-oss
Subject: [PATCH, RFC] fix attr fit checking for filesystems which have lost their attr2
Message-ID: <47EDCBF9.4070102@sandeen.net>

Regarding the F8 corruption... I have a pretty narrow testcase now, and
it turns out this was a bit of a perfect storm.

First, F8 shipped 2.6.23, which had the problem with sb_features2
padding out on 64-bit boxes.  That was ok, because userspace and
kernelspace both did this, and the filesystem was properly mounting &
running as attr2.

However, hch came along in 2.6.24 and did some endian annotation for
the superblock, and in the process:

> A new helper xfs_sb_from_disk handles the other (read)
> direction and doesn't need the slightly hacky table-driven approach
> because we only ever read the full sb from disk.

This resulted in kernelspace behaving differently: it now *misses* the
attr2 flag in sb_features2 (actually sb_bad_features2), so we mounted
as if we had attr1.

Which is really supposed to be ok, IIRC, except that in
xfs_attr_shortform_bytesfit we return the default fork offset value
from the superblock, even if di_forkoff is *already* set.

In the error case I had, di_forkoff was set to 15 (from previously
being attr2...) but we returned 14 (the mp default), and I think this
is where things started going wrong; I think this caused us to write an
attr on top of the extent data.

My understanding is that if di_forkoff is non-zero, we should always be
using it for space calculations, regardless of whether we are mounted
with attr2 or not... and with that, how's this look?  To be honest I
haven't run it through QA yet...
I'm not certain if xfs_bmap_compute_maxlevels() may lead to similar
problems....

------------------

always use di_forkoff when checking for attr space

In the case where we mount a filesystem which was previously using the
attr2 format as attr1, returning the default mp->m_attroffset instead
of the per-inode di_forkoff for inline attribute fit calculations may
result in corruption, if the existing "attr2" formatted attribute is
already taking more space than the default.

Signed-off-by: Eric Sandeen
---
Index: linux-2.6-git/fs/xfs/xfs_attr_leaf.c
===================================================================
--- linux-2.6-git.orig/fs/xfs/xfs_attr_leaf.c
+++ linux-2.6-git/fs/xfs/xfs_attr_leaf.c
@@ -166,7 +166,7 @@ xfs_attr_shortform_bytesfit(xfs_inode_t
 	if (!(mp->m_flags & XFS_MOUNT_ATTR2)) {
 		if (bytes <= XFS_IFORK_ASIZE(dp))
-			return mp->m_attroffset >> 3;
+			return dp->i_d.di_forkoff;
 		return 0;
 	}

From owner-xfs@oss.sgi.com Sat Mar 29 04:16:39 2008
From: Hannes Dorbath <light@theendofthetunnel.de>
Date: Sat, 29 Mar 2008 12:17:13 +0100
To: Scott Tanner
Cc: xfs@oss.sgi.com
Subject: Re: XFS performance on LVM2 mirror
Message-ID: <47EE2539.1060001@theendofthetunnel.de>
In-Reply-To: <1206481052.4283.17.camel@dhcp-192-168-6-143>

Scott Tanner wrote:
> Are there any special tweaks for XFS on LVM2 mirrors?

LVM will always slow things down; it's an additional layer, and there
is no way around that.  LVM does not handle XFS stripe alignment well:
it causes sequential writes to happen in a non-uniform, pumping way,
and additionally it does not support write barriers.

> XFS mounted with -o noatime,nodiratime,nobarrier,logbufs=8

nodiratime is redundant.  You might add logbsize=256k.  Lazy counters
require you to upgrade to at least 2.6.23.  I'm unsure whether your
stripe alignment is correct; why don't you use the su,sw options?  And
why do you run with only a 64MB log?

> Recommendations on my setup?

Drop LVM altogether.  Add RAM to the box.  Personally I'd use
Solaris/ZFS for that specific setup.

--
Best regards,
Hannes Dorbath

From owner-xfs@oss.sgi.com Sat Mar 29 09:18:36 2008
From: Eric Sandeen <sandeen@sandeen.net>
Date: Sat, 29 Mar 2008 11:19:06 -0500
To: David Chinner
Message-ID: <47EE6BFA.70603@sandeen.net>
CC: xfs-dev , xfs-oss X-ASG-Orig-Subj: Re: [patch] detect and correct bad features2 superblock field Subject: Re: [patch] detect and correct bad features2 superblock field References: <20080220054041.GM155407@sgi.com> <47EDB6AD.4070604@sandeen.net> In-Reply-To: <47EDB6AD.4070604@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1206807550 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.46254 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15082 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Eric Sandeen wrote: > David Chinner wrote: >> There is a bug in mkfs.xfs that can result in writing the features2 >> field in the superblock to the wrong location. This only occurs >> on some architectures, typically those with 32 bit userspace and >> 64 bit kernels. >> >> This patch detects the defect at mount time, logs a warning >> such as: >> ... > >> /* >> + * Check for a bad features2 field alignment. This happened on >> + * some platforms due to xfs_sb_t not being 64bit size aligned >> + * when sb_features was added and hence the compiler put it in >> + * the wrong place. >> + * >> + * If we detect a bad field, we or the set bits into the existing >> + * features2 field in case it has already been modified and we >> + * don't want to lose any features. 
Zero the bad one and mark
>> + * the two fields as needing updates once the transaction subsystem
>> + * is online.
>> + */
>> + if (xfs_sb_has_bad_features2(sbp)) {
>> +	cmn_err(CE_WARN,
>> +		"XFS: correcting sb_features alignment problem");
>> +	sbp->sb_features2 |= sbp->sb_bad_features2;
>> +	sbp->sb_bad_features2 = 0;
>> +	update_flags |= XFS_SB_FEATURES2 | XFS_SB_BAD_FEATURES2;
>> + }
>>
> I think there's a minor problem here: while this will update the
> superblock with the proper features2 values, features2 has already been
> checked, so mp->m_flags won't have, for example, the attr2 flags...
>
> So attr2 will show up next time, but not on this mount.

How's this look for a fixup:

========================

[XFS]: set ATTR2 in m_flags if flag found in bad_features2

Signed-off-by: Eric Sandeen

---

Index: linux-2.6.24.x86_64/fs/xfs/xfs_mount.c
===================================================================
--- linux-2.6.24.x86_64.orig/fs/xfs/xfs_mount.c
+++ linux-2.6.24.x86_64/fs/xfs/xfs_mount.c
@@ -1925,6 +1925,12 @@ xfs_mount_log_sb(
 	}
 	xfs_mod_sb(tp, fields);
 	xfs_trans_commit(tp, 0);
+
+	/* if we updated features2, recheck attr2 & set flag */
+	if ((fields & (XFS_SB_FEATURES2 | XFS_SB_BAD_FEATURES2)) &&
+	    XFS_SB_VERSION_HASATTR2(&mp->m_sb)) {
+		mp->m_flags |= XFS_MOUNT_ATTR2;
+	}
 }

From owner-xfs@oss.sgi.com Sat Mar 29 09:44:39 2008
From: Michal Soltys
Date: Sat, 29 Mar 2008 17:27:14 +0100
To: xfs@oss.sgi.com
Subject: Re: XFS performance on LVM2 mirror
Message-ID: <47EE6DE2.1010800@drutsystem.com>
In-Reply-To: <1206481052.4283.17.camel@dhcp-192-168-6-143>

Scott Tanner wrote:
> Hello,
> I've been doing some
benchmarking (using bonnie++) of the XFS
> filesystem and found a substantial performance drop in rewrite and
> delete operations when using an LVM2 mirror. When using the Linux
> software RAID driver to perform the mirror, XFS performance is quite
> good. I've tried a number of the performance tweaks from the mailing
> list archives, as seen below. The only option that seemed to make a real
> difference was LVM's --corelog, which only worked for one test before
> the server crashed.
>
> Are there any special tweaks for XFS on LVM2 mirrors? Recommendations on
> my setup?

Recently there have been plenty of threads on the linux-raid mailing list
about this subject as well - worth checking them out.

Generally, the important things to consider are:

- read-ahead on your LV (or on the md device when XFS sits directly on
  it - the components' read-ahead settings are not used then),
- alignment of the logical volumes with respect to the underlying RAID
  (especially since you use a 128k chunk, and by default the volume
  won't start at a multiple of 128k),
- the LV extent size,
- the XFS su/sw parameters, which you will have to set manually if you
  are on LVM on top of md RAID.
From owner-xfs@oss.sgi.com Sat Mar 29 18:29:59 2008
From: Eric Sandeen
Date: Sat, 29 Mar 2008 20:30:00 -0500
To: David Chinner
CC: xfs-dev, xfs-oss
Subject: Re: [patch] detect and correct bad features2 superblock field
Message-ID: <47EEED18.9090206@sandeen.net>
In-Reply-To: <20080220054041.GM155407@sgi.com>

David Chinner wrote:
> There is a bug in mkfs.xfs that can result in writing the features2
> field in the superblock to the wrong location. This only occurs
> on some architectures, typically those with 32 bit userspace and
> 64 bit kernels.
>
> This patch detects the defect at mount time, logs a warning
> such as:
>
> XFS: correcting sb_features alignment problem
>
> in dmesg and corrects the problem so that everything is OK.
> It also blacklists the bad field in the superblock so it does
> not get used for something else later on.

...

> /*
> + * Check for a bad features2 field alignment. This happened on
> + * some platforms due to xfs_sb_t not being 64 bit size aligned
> + * when sb_features2 was added and hence the compiler put it in
> + * the wrong place.
> + *
> + * If we detect a bad field, we or the set bits into the existing
> + * features2 field in case it has already been modified and we
> + * don't want to lose any features. Zero the bad one and mark
> + * the two fields as needing updates once the transaction subsystem
> + * is online.
> + */
> + if (xfs_sb_has_bad_features2(sbp)) {
> +	cmn_err(CE_WARN,
> +		"XFS: correcting sb_features alignment problem");
> +	sbp->sb_features2 |= sbp->sb_bad_features2;
> +	sbp->sb_bad_features2 = 0;
> +	update_flags |= XFS_SB_FEATURES2 | XFS_SB_BAD_FEATURES2;
> + }

Hm, the other problem here may be that if we zero bad_features2, then
any older kernel will mount up as attr2... and run into the corruption
problem I found on F8...

Should we make features2 and bad_features2 match rather than zeroing
bad_features2?

-Eric

From owner-xfs@oss.sgi.com Sat Mar 29 18:49:11 2008
From: Eric Sandeen
Date: Sat, 29 Mar 2008 20:49:12 -0500
To: David Chinner
CC: xfs-dev, xfs-oss
Subject: Re: [patch] detect and correct bad features2 superblock field
Message-ID: <47EEF198.2060409@sandeen.net>
In-Reply-To: <47EEED18.9090206@sandeen.net>

Eric Sandeen wrote:
> Hm, the other problem here may be that if we zero bad_features2, then
> any older kernel will mount up as attr2... and run into the corruption
> problem I found on F8...

Er, as attr1.

> Should we make features2 and bad_features2 match rather than zeroing
> bad_features2?

Maybe like this...? Though I suppose there's still a minor issue with
older tools modifying flags...
diff -u linux-2.6.24.x86_64/fs/xfs/xfs_sb.h linux-2.6.24.x86_64/fs/xfs/xfs_sb.h
--- linux-2.6.24.x86_64/fs/xfs/xfs_sb.h
+++ linux-2.6.24.x86_64/fs/xfs/xfs_sb.h
@@ -325,7 +325,7 @@
  */
 static inline int xfs_sb_has_bad_features2(xfs_sb_t *sbp)
 {
-	return (sbp->sb_bad_features2 != 0);
+	return (sbp->sb_features2 != sbp->sb_bad_features2);
 }

 #define XFS_SB_VERSION_TONEW(v)	xfs_sb_version_tonew(v)

diff -u linux-2.6.24.x86_64/fs/xfs/xfs_mount.c linux-2.6.24.x86_64/fs/xfs/xfs_mount.c
--- linux-2.6.24.x86_64/fs/xfs/xfs_mount.c
+++ linux-2.6.24.x86_64/fs/xfs/xfs_mount.c
@@ -994,7 +994,7 @@
 		cmn_err(CE_WARN,
 			"XFS: correcting sb_features alignment problem");
 		sbp->sb_features2 |= sbp->sb_bad_features2;
-		sbp->sb_bad_features2 = 0;
+		sbp->sb_bad_features2 = sbp->sb_features2;
 		update_flags |= XFS_SB_FEATURES2 | XFS_SB_BAD_FEATURES2;
 	}

From owner-xfs@oss.sgi.com Sat Mar 29 21:53:10 2008
From: Eric Sandeen
Date: Sat, 29 Mar 2008 23:53:40 -0500
To: "Josef 'Jeff' Sipek"
CC: David Chinner, xfs-dev, xfs-oss
Subject: Re: [patch] detect and correct bad features2 superblock field
Message-ID: <47EF1CD4.7070009@sandeen.net>
In-Reply-To: <20080330045014.GA26934@josefsipek.net>

Josef 'Jeff' Sipek wrote:
> On Sat, Mar 29, 2008 at 08:30:00PM -0500, Eric Sandeen wrote:
>> Hm, the other problem here may be that if we zero bad_features2, then
>> any older kernel will mount up as attr2... and run into the corruption
>> problem I found on F8...
>>
>> Should we make features2 and bad_features2 match rather than zeroing
>> bad_features2?
>
> I thought that was discussed here (or was it on IRC?), and the conclusion
> was that the best way is to always have features2 == bad_features2.
> It is the safest way to handle things - the filesystem is guaranteed
> to work everywhere properly (old & new kernels). Both the userspace
> tools (xfs_repair) and the kernel of course have to do the same thing
> (OR bad_features2 with features2, and save the result in both
> locations).
>
> At least that's what I seem to remember.
>
> Josef 'Jeff' Sipek.

It might have been, but it's not what was checked in... *shrug*

http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.c#rev1.419

-Eric

From owner-xfs@oss.sgi.com Sat Mar 29 22:01:10 2008
From: "Josef 'Jeff' Sipek"
Date: Sun, 30 Mar 2008 00:50:14 -0400
To: Eric Sandeen
Cc: David Chinner, xfs-dev, xfs-oss
Subject: Re: [patch] detect and correct bad features2 superblock field
Message-ID: <20080330045014.GA26934@josefsipek.net>
In-Reply-To: <47EEED18.9090206@sandeen.net>

On Sat, Mar 29, 2008 at 08:30:00PM -0500, Eric Sandeen wrote:
> David Chinner wrote:
> > There is a bug in mkfs.xfs that can result in writing the features2
> > field in the superblock to the wrong location. This only occurs
> > on some architectures, typically those with 32 bit userspace and
> > 64 bit kernels.
> >
> > This patch detects the defect at mount time, logs a warning
> > such as:
> >
> > XFS: correcting sb_features alignment problem
> >
> > in dmesg and corrects the problem so that everything is OK.
> > It also blacklists the bad field in the superblock so it does
> > not get used for something else later on.
>
> ...
>
> > /*
> > + * Check for a bad features2 field alignment. This happened on
> > + * some platforms due to xfs_sb_t not being 64 bit size aligned
> > + * when sb_features2 was added and hence the compiler put it in
> > + * the wrong place.
> > + *
> > + * If we detect a bad field, we or the set bits into the existing
> > + * features2 field in case it has already been modified and we
> > + * don't want to lose any features. Zero the bad one and mark
> > + * the two fields as needing updates once the transaction subsystem
> > + * is online.
> > + */
> > + if (xfs_sb_has_bad_features2(sbp)) {
> > +	cmn_err(CE_WARN,
> > +		"XFS: correcting sb_features alignment problem");
> > +	sbp->sb_features2 |= sbp->sb_bad_features2;
> > +	sbp->sb_bad_features2 = 0;
> > +	update_flags |= XFS_SB_FEATURES2 | XFS_SB_BAD_FEATURES2;
> > + }
>
> Hm, the other problem here may be that if we zero bad_features2, then
> any older kernel will mount up as attr2... and run into the corruption
> problem I found on F8...
>
> Should we make features2 and bad_features2 match rather than zeroing
> bad_features2?

I thought that was discussed here (or was it on IRC?), and the conclusion
was that the best way is to always have features2 == bad_features2. It is
the safest way to handle things - the filesystem is guaranteed to work
everywhere properly (old & new kernels). Both the userspace tools
(xfs_repair) and the kernel of course have to do the same thing (OR
bad_features2 with features2, and save the result in both locations).

At least that's what I seem to remember.

Josef 'Jeff' Sipek.

--
Once you have their hardware. Never give it back.
(The First Rule of Hardware Acquisition)

From owner-xfs@oss.sgi.com Sat Mar 29 22:28:34 2008
From: "Josef 'Jeff' Sipek"
Date: Sun, 30 Mar 2008 01:29:07 -0400
To: Eric Sandeen
Cc: David Chinner, xfs-dev, xfs-oss
Subject: Re: [patch] detect and correct bad features2 superblock field
Message-ID: <20080330052907.GB26934@josefsipek.net>
In-Reply-To: <47EF1CD4.7070009@sandeen.net>

On Sat, Mar 29, 2008 at 11:53:40PM -0500, Eric Sandeen wrote:
> Josef 'Jeff' Sipek wrote:
> > On Sat, Mar 29, 2008 at 08:30:00PM -0500, Eric Sandeen wrote:
> >
> >> Hm, the other problem here may be that if we zero bad_features2, then
> >> any older kernel will mount up as attr2... and run into the corruption
> >> problem I found on F8...
> >>
> >> Should we make features2 and bad_features2 match rather than zeroing
> >> bad_features2?
> >
> > I thought that was discussed here (or was it on IRC?), and the
> > conclusion was that the best way is to always have
> > features2 == bad_features2. It is the safest way to handle things -
> > the filesystem is guaranteed to work everywhere properly (old & new
> > kernels). Both the userspace tools (xfs_repair) and the kernel of
> > course have to do the same thing (OR bad_features2 with features2,
> > and save the result in both locations).
> >
> > At least that's what I seem to remember.
> >
> > Josef 'Jeff' Sipek.
>
> It might have been, but it's not what was checked in... *shrug*
>
> http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.c#rev1.419

I remember the discussion taking place _after_ that commit... so it must
have been userspace-related.
Josef 'Jeff' Sipek.

--
I'm somewhere between geek and normal. - Linus Torvalds

From owner-xfs@oss.sgi.com Sun Mar 30 13:29:27 2008
From: Huszár Viktor Dénes
Date: Sun, 30 Mar 2008 22:29:06 +0200
To: xfs@oss.sgi.com
Subject: free space problem
Message-Id: <20080330202956.8D3D2710364@cuda.sgi.com>

Hi all,

we encountered a problem with XFS free space, and perhaps reserved
space. It reports 82 GB free on the RAID, but it is not possible to
write to the file system. Please tell me if you need more details; we
have read through the previous free space threads but found nothing
useful. We run a 2.6.23 kernel.
Thanks in advance,
Vic

[[HTML alternate version deleted]]

From owner-xfs@oss.sgi.com Sun Mar 30 14:55:56 2008
From: Emmanuel Florac
Date: Sun, 30 Mar 2008 23:54:44 +0200
To: Huszár Viktor Dénes
Subject: Re: free space problem
Message-ID: <20080330235444.7d357bb6@galadriel.home>
In-Reply-To: <20080330202956.8D3D2710364@cuda.sgi.com>

On Sun, 30 Mar 2008 22:29:06 +0200, you wrote:

> We run
> 2.6.23 kernel.

Look in /var/log/messages and in the output of the dmesg command for
XFS-related messages, and post them here, please.
--
--------------------------------------------------
Emmanuel Florac
www.intellique.com
--------------------------------------------------

From owner-xfs@oss.sgi.com Sun Mar 30 15:30:57 2008
Date: Mon, 31 Mar 2008 08:31:02 +1000
From: David Chinner <dgc@sgi.com>
To: Takashi Sato
Cc: linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org, xfs@oss.sgi.com, dm-devel@redhat.com, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH] freeze feature ver 1.0
Message-ID: <20080330223102.GE103491721@sgi.com>
In-Reply-To: <20080328180145t-sato@mail.jp.nec.com>
On Fri, Mar 28, 2008 at 06:01:45PM +0900, Takashi Sato wrote:
> Hi,
>
> David Chinner wrote:
> > Can you please split this into two patches - one which introduces the generic functionality *without* the timeout stuff, and a second patch that introduces the timeouts.
>
> OK.
> I will send the split patches in subsequent mails.
>
> > I think this timeout stuff is dangerous - it adds significant complexity and really does not protect against anything that can't be done in userspace. i.e. If your system is running well enough for the timer to fire and unfreeze the filesystem, it's running well enough for you to do "freeze X; sleep Y; unfreeze X".
>
> If the process is terminated at "sleep Y" by an unexpected accident (e.g. signals), the filesystem will be left frozen.

At which point you run "unfreeze /mnt/fs" and unfreeze it. If you've got a script that fails in the middle of an operation that freezes the filesystem, then the error handling of that script needs to unfreeze the filesystem. The kernel does not need to do this.

> So, I think the timeout is needed to unfreeze more definitely.

No, it can be handled by userspace perfectly well.

> > FWIW, there is nothing to guarantee that the filesystem has finished freezing when the timeout fires (it's not uncommon to see freeze_bdev() taking *minutes*) and unfreezing in the middle of a freeze operation will cause problems - either for the filesystem in the middle of a freeze operation, or for whatever is freezing the filesystem to get a consistent image.....
>
> Do you mention the freeze_bdev()'s hang?

If freeze_bdev() hangs, then you've got a buggy filesystem and far more problems to worry about than undoing the freeze. It's likely you're going to need a reboot to unwedge the hung filesystem.....
> The salvage target of my timeout is freeze process's accident as below.
> - It is killed before calling the unfreeze ioctl

Can be fixed from userspace.

> - It causes a deadlock by accessing the frozen filesystem

Application bug. Undeadlock it by running "unfreeze /mnt/fs"....

FWIW, DM is quite capable of freezing the filesystem, snapshotting it and then unfreezing it without hanging, crashing or having nasty stuff in general happen. We've used 'xfs_freeze -f /mnt/fs; do_something; xfs_freeze -u /mnt/fs' for years without having problems with freeze hanging, application deadlocks, etc.....

... And if something has gone wrong during the freeze, it is far, far better to leave the filesystem stuck in a frozen state than to unfreeze it and allow it to be damaged further. If you get stuck or a script gets killed in the middle of execution, then an admin needs to look at the problem immediately. Just timing out and unfreezing is about the worst thing you can do, because it allows problems (corruptions, errors, etc) to be propagated and potentially make things worse before an admin can intervene and fix things up....

Basically, I don't want to have to deal with the "snapshot image corrupt" bug reports that will come from user misuse/misunderstanding of the "freeze timeout". It's hard enough tracking down these sorts of problems without throwing in the "freeze timed out before completion" possibility that guarantees a non-consistent snapshot image.....

/me points to the ASSERT_ALWAYS() in xfs_attr_quiesce() that ensures we get bug reports when the filesystem is still being actively modified when the freeze "completes".

> So the delayed work for the timeout is set after all of freeze operations in freeze_bdev() in my patches.
> I think the filesystem dependent code (write_super_lockfs operation) should be implemented not to cause a hang.

And that should already be the case. If write_super_lockfs() hangs, then you've got a filesystem bug ;)

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Sun Mar 30 17:00:52 2008
Date: Mon, 31 Mar 2008 10:00:57 +1000
From: David Chinner <dgc@sgi.com>
To: Takashi Sato
Cc: linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org, xfs@oss.sgi.com, dm-devel@redhat.com, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 2/2] Add timeout feature
Message-ID: <20080331000057.GI108924158@sgi.com>
In-Reply-To: <20080328180736t-sato@mail.jp.nec.com>
On Fri, Mar 28, 2008 at 06:07:36PM +0900, Takashi Sato wrote:
> The timeout feature is added to freeze ioctl. And new ioctl to reset the timeout period is added.
>
> o Freeze the filesystem
>   int ioctl(int fd, int FIFREEZE, long *timeval)
>   fd: The file descriptor of the mountpoint
>   FIFREEZE: request code for the freeze
>   timeval: the timeout period in seconds
>   If it's 0 or 1, the timeout isn't set. This special case of "1" is implemented to keep the compatibility with XFS applications.
>   Return value: 0 if the operation succeeds. Otherwise, -1

The timeout is not for the freeze operation - the timeout is only set up once the freeze is complete. i.e.:

  $ time sudo ~/test_src/xfs_io -f -x -c 'gfreeze 10' /mnt/scratch/test
  freezing with level = 10

  real    0m23.204s
  user    0m0.008s
  sys     0m0.012s

The freeze takes 23s, and then the 10s timeout is started. So this timeout does not protect against freeze_bdev() hangs at all. All it does is introduce silent unfreezing of the block device that cannot be synchronised with the application that is operating on the frozen device.

FWIW, resetting this timeout from userspace is unreliable - there's no guarantee that under load your userspace process will get to run again inside the timeout to reset it, hence leaving you with an unfrozen filesystem when you really want it frozen...

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Sun Mar 30 17:06:17 2008
Date: Mon, 31 Mar 2008 10:06:35 +1000
From: David Chinner <dgc@sgi.com>
To: Takashi Sato
Cc: linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org, xfs@oss.sgi.com, dm-devel@redhat.com, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 1/2] Implement generic freeze feature
Message-ID: <20080331000635.GJ108924158@sgi.com>
In-Reply-To: <20080328180522t-sato@mail.jp.nec.com>
On Fri, Mar 28, 2008 at 06:05:22PM +0900, Takashi Sato wrote:
> The ioctls for the generic freeze feature are below.
>
> o Freeze the filesystem
>   int ioctl(int fd, int FIFREEZE, arg)
>   fd: The file descriptor of the mountpoint
>   FIFREEZE: request code for the freeze
>   arg: Ignored
>   Return value: 0 if the operation succeeds. Otherwise, -1
>
> o Unfreeze the filesystem
>   int ioctl(int fd, int FITHAW, arg)
>   fd: The file descriptor of the mountpoint
>   FITHAW: request code for unfreeze
>   arg: Ignored
>   Return value: 0 if the operation succeeds. Otherwise, -1

Patch below to remove the XFS-specific ioctl interfaces for this functionality.

Signed-off-by: Dave Chinner
---
 fs/xfs/linux-2.6/xfs_ioctl.c   |   15 ---------------
 fs/xfs/linux-2.6/xfs_ioctl32.c |    2 --
 fs/xfs/xfs_fs.h                |    4 ++--
 3 files changed, 2 insertions(+), 19 deletions(-)

Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_ioctl.c
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_ioctl.c	2008-03-31 08:33:19.000000000 +1000
+++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_ioctl.c	2008-03-31 09:06:08.531294896 +1000
@@ -1228,21 +1228,6 @@ xfs_ioctl(
 		return -error;
 	}
 
-	case XFS_IOC_FREEZE:
-		if (!capable(CAP_SYS_ADMIN))
-			return -EPERM;
-
-		if (inode->i_sb->s_frozen == SB_UNFROZEN)
-			freeze_bdev(inode->i_sb->s_bdev, 0);
-		return 0;
-
-	case XFS_IOC_THAW:
-		if (!capable(CAP_SYS_ADMIN))
-			return -EPERM;
-		if (inode->i_sb->s_frozen != SB_UNFROZEN)
-			thaw_bdev(inode->i_sb->s_bdev, inode->i_sb);
-		return 0;
-
 	case XFS_IOC_GOINGDOWN: {
 		__uint32_t in;

Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_ioctl32.c
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_ioctl32.c	2007-11-20 16:12:45.000000000 +1100
+++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_ioctl32.c	2008-03-31 09:06:38.011484411 +1000
@@ -398,8 +398,6 @@ xfs_compat_ioctl(
 	case XFS_IOC_FSGROWFSDATA:
 	case XFS_IOC_FSGROWFSLOG:
 	case XFS_IOC_FSGROWFSRT:
-	case XFS_IOC_FREEZE:
-	case XFS_IOC_THAW:
 	case XFS_IOC_GOINGDOWN:
 	case XFS_IOC_ERROR_INJECTION:
 	case XFS_IOC_ERROR_CLEARALL:

Index: 2.6.x-xfs-new/fs/xfs/xfs_fs.h
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_fs.h	2007-11-20 18:38:49.000000000 +1100
+++ 2.6.x-xfs-new/fs/xfs/xfs_fs.h	2008-03-31 09:09:54.902040723 +1000
@@ -473,8 +473,8 @@ typedef struct xfs_handle {
 #define XFS_IOC_ERROR_INJECTION	_IOW ('X', 116, struct xfs_error_injection)
 #define XFS_IOC_ERROR_CLEARALL	_IOW ('X', 117, struct xfs_error_injection)
 /*	XFS_IOC_ATTRCTL_BY_HANDLE -- deprecated 118	*/
-#define XFS_IOC_FREEZE		_IOWR('X', 119, int)
-#define XFS_IOC_THAW		_IOWR('X', 120, int)
+/*	XFS_IOC_FREEZE -- FIFREEZE 119	*/
+/*	XFS_IOC_THAW   -- FITHAW   120	*/
 #define XFS_IOC_FSSETDM_BY_HANDLE	_IOW ('X', 121, struct xfs_fsop_setdm_handlereq)
 #define XFS_IOC_ATTRLIST_BY_HANDLE	_IOW ('X', 122, struct xfs_fsop_attrlist_handlereq)
 #define XFS_IOC_ATTRMULTI_BY_HANDLE	_IOW ('X', 123, struct xfs_fsop_attrmulti_handlereq)

From owner-xfs@oss.sgi.com Sun Mar 30 17:31:08 2008
Date: Mon, 31 Mar 2008 02:31:16 +0200
From: Huszár Viktor Dénes <hvd@dwo.hu>
To: 'Emmanuel Florac'
Subject: RE: free space problem
Message-Id: <20080331003140.B8956711F00@cuda.sgi.com>
In-Reply-To: <20080330235444.7d357bb6@galadriel.home>

There is nothing extraordinary in it; you can see the umount/mount, and that it was mounted after all XFS activity (check, repair, db).
root@mailslave:/home/void# grep -i xfs /var/log/{messages,kern.log,syslog}
/var/log/kern.log:Mar 18 01:46:08 mailslave kernel: SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
/var/log/kern.log:Mar 18 01:46:08 mailslave kernel: SGI XFS Quota Management subsystem
/var/log/kern.log:Mar 18 01:46:08 mailslave kernel: XFS mounting filesystem md1
/var/log/kern.log:Mar 18 01:46:08 mailslave kernel: Ending clean XFS mount for filesystem: md1
/var/log/kern.log:Mar 18 01:46:08 mailslave kernel: XFS mounting filesystem dm-0
/var/log/kern.log:Mar 18 01:46:08 mailslave kernel: Ending clean XFS mount for filesystem: dm-0
/var/log/kern.log:Mar 18 02:23:16 mailslave kernel: SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
/var/log/kern.log:Mar 18 02:23:16 mailslave kernel: SGI XFS Quota Management subsystem
/var/log/kern.log:Mar 18 02:23:16 mailslave kernel: XFS mounting filesystem md1
/var/log/kern.log:Mar 18 02:23:16 mailslave kernel: Ending clean XFS mount for filesystem: md1
/var/log/kern.log:Mar 18 02:23:16 mailslave kernel: XFS mounting filesystem dm-0
/var/log/kern.log:Mar 18 02:23:16 mailslave kernel: Ending clean XFS mount for filesystem: dm-0
/var/log/kern.log:Mar 18 02:34:13 mailslave kernel: SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
/var/log/kern.log:Mar 18 02:34:13 mailslave kernel: SGI XFS Quota Management subsystem
/var/log/kern.log:Mar 18 02:34:13 mailslave kernel: XFS mounting filesystem md1
/var/log/kern.log:Mar 18 02:34:13 mailslave kernel: Ending clean XFS mount for filesystem: md1
/var/log/kern.log:Mar 18 02:34:13 mailslave kernel: XFS mounting filesystem dm-0
/var/log/kern.log:Mar 18 02:34:13 mailslave kernel: Ending clean XFS mount for filesystem: dm-0
/var/log/kern.log:Mar 20 09:57:50 mailslave kernel: SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
/var/log/kern.log:Mar 20 09:57:50 mailslave kernel: SGI XFS Quota Management subsystem
/var/log/kern.log:Mar 20 09:57:50 mailslave kernel: XFS mounting filesystem md1
/var/log/kern.log:Mar 20 09:57:50 mailslave kernel: Ending clean XFS mount for filesystem: md1
/var/log/kern.log:Mar 20 09:57:50 mailslave kernel: XFS mounting filesystem dm-0
/var/log/kern.log:Mar 20 09:57:50 mailslave kernel: Ending clean XFS mount for filesystem: dm-0
/var/log/kern.log:Mar 26 22:05:54 mailslave kernel: XFS mounting filesystem dm-0
/var/log/kern.log:Mar 26 22:05:54 mailslave kernel: Ending clean XFS mount for filesystem: dm-0
/var/log/kern.log:Mar 27 04:32:09 mailslave kernel: XFS mounting filesystem dm-0
/var/log/kern.log:Mar 27 04:32:09 mailslave kernel: Ending clean XFS mount for filesystem: dm-0
/var/log/kern.log:Mar 27 04:36:54 mailslave kernel: XFS mounting filesystem dm-0
/var/log/kern.log:Mar 27 04:36:54 mailslave kernel: Ending clean XFS mount for filesystem: dm-0
/var/log/kern.log:Mar 27 05:02:29 mailslave kernel: XFS mounting filesystem dm-0
/var/log/kern.log:Mar 27 05:02:29 mailslave kernel: Ending clean XFS mount for filesystem: dm-0

-----Original Message-----
From: Emmanuel Florac [mailto:eflorac@intellique.com]
Sent: Sunday, March 30, 2008 11:55 PM
To: Huszár Viktor Dénes
Cc: xfs@oss.sgi.com
Subject: Re: free space problem

On Sun, 30 Mar 2008 22:29:06 +0200, you wrote:

> We run 2.6.23 kernel.

Look in /var/log/messages and in the output of the dmesg command for XFS-related messages and post them here, please.

--
--------------------------------------------------
Emmanuel Florac
www.intellique.com
--------------------------------------------------

__________ Information from ESET NOD32 Antivirus, version of virus signature database 2985 (20080330) __________

The message was checked by ESET NOD32 Antivirus.
http://www.eset.com

From owner-xfs@oss.sgi.com Sun Mar 30 17:38:28 2008
Date: Mon, 31 Mar 2008 10:38:47 +1000
From: David Chinner <dgc@sgi.com>
To: Huszár Viktor Dénes
Cc: xfs@oss.sgi.com
Subject: Re: free space problem
Message-ID: <20080331003847.GF103491721@sgi.com>
In-Reply-To: <20080330202956.8D3D2710364@cuda.sgi.com>

On Sun, Mar 30, 2008 at 10:29:06PM +0200, Huszár Viktor Dénes wrote:
> Hi all,
>
> we encountered a problem with XFS free space and perhaps reserved space. It reports to have 82 gb free on the raid, however its not possible to write on the file system. Please tell me if you need more details, we have read through the previous free space threads, but found nothing useful. We run 2.6.23 kernel.

What is the error you are getting? Does 'dd if=/dev/zero of=/mntpt/file bs=512 count=1' work? If not, what's the error?

What's the output of 'df -h' and 'df -ih'? Output of 'xfs_info '?

Cheers,

Dave.

--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Sun Mar 30 17:52:27 2008
Date: Mon, 31 Mar 2008 11:52:51 +1100
From: Timothy Shimmin <tes@sgi.com>
To: Eric Sandeen
CC: xfs-oss
Subject: Re: [PATCH, RFC] fix attr fit checking for filesystems which have lost their attr2
Message-ID: <47F035E3.1030004@sgi.com>
References: <47EDCBF9.4070102@sandeen.net>
In-Reply-To: <47EDCBF9.4070102@sandeen.net>
Hi Eric,

Eric Sandeen wrote:
> Regarding the F8 corruption... I have a pretty narrow testcase now, and it turns out this was a bit of a perfect storm.
>
> First, F8 shipped 2.6.23 which had the problem with sb_features2 padding out on 64-bit boxes, but this was ok because userspace and kernelspace both did this, and it was properly mounting & running as attr2.
>
> However, hch came along in 2.6.24 and did some endian annotation for the superblock and in the process:
>> A new helper xfs_sb_from_disk handles the other (read) direction and doesn't need the slightly hacky table-driven approach because we only ever read the full sb from disk.
>
> However, this resulted in kernelspace behaving differently, and now *missing* the attr2 flag in sb_features2, (actually sb_bad_features2) so we mounted as if we had attr1. Which is really supposed to be ok, IIRC,

Yes, I remember Nathan saying that too ;-)

> except in xfs_attr_shortform_bytesfit we return the default fork offset value from the superblock, even if di_forkoff is *already* set. In the error case I had, di_forkoff was set to 15 (from previously being attr2...) but we returned 14 (the mp default) and I think this is where things started going wrong; I think this caused us to write an attr on top of the extent data.
>
> My understanding of this is that if di_forkoff is non-zero, we should always be using it for space calculations, regardless of whether we are mounted with attr2 or not...

That was my understanding as well. I'll have a look at the code soon and see if I can see any problems with the change and the consistency of it all.

Thanks a bunch,
Tim.

> and with that, how's this look, to be honest I haven't run it through QA yet...
>
> I'm not certain if xfs_bmap_compute_maxlevels() may lead to similar problems....
>
> ------------------
>
> always use di_forkoff when checking for attr space
>
> In the case where we mount a filesystem which was previously using the attr2 format as attr1, returning the default mp->m_attroffset instead of the per-inode di_forkoff for inline attribute fit calculations may result in corruption, if the existing "attr2" formatted attribute is already taking more space than the default.
>
> Signed-off-by: Eric Sandeen
> ---
>
> Index: linux-2.6-git/fs/xfs/xfs_attr_leaf.c
> ===================================================================
> --- linux-2.6-git.orig/fs/xfs/xfs_attr_leaf.c
> +++ linux-2.6-git/fs/xfs/xfs_attr_leaf.c
> @@ -166,7 +166,7 @@ xfs_attr_shortform_bytesfit(xfs_inode_t
>
>  	if (!(mp->m_flags & XFS_MOUNT_ATTR2)) {
>  		if (bytes <= XFS_IFORK_ASIZE(dp))
> -			return mp->m_attroffset >> 3;
> +			return dp->i_d.di_forkoff;
>  		return 0;
>  	}
>

From owner-xfs@oss.sgi.com Sun Mar 30 18:44:37 2008
Date: Sun, 30 Mar 2008 20:45:09 -0500
From: Eric Sandeen <sandeen@sandeen.net>
To: Timothy Shimmin
CC: xfs-oss
Subject: Re: [PATCH, RFC] fix attr fit checking for filesystems which have lost their attr2
Message-ID: <47F04225.4030405@sandeen.net>
References: <47EDCBF9.4070102@sandeen.net> <47F035E3.1030004@sgi.com>
In-Reply-To: <47F035E3.1030004@sgi.com>

Timothy Shimmin wrote:
> Hi Eric,
>
> Eric Sandeen wrote:
...
>> My understanding of this is that if di_forkoff is non-zero, we should always be using it for space calculations, regardless of whether we are mounted with attr2 or not...
>
> That was my understanding as well. I'll have a look at the code soon and see if I can see any problems with the change and the consistency of it all.
>
> Thanks a bunch,
> Tim.
Thanks. FWIW, if I install F8 with bona-fide attr2 in effect, with selinux (so attrs on everything) and then update it while mounted as an attr1 filesystem, with this patch in place, it does not result in anything bad as far as xfs_repair can see. (And it's a big update, probably touching most files...)

-Eric

From owner-xfs@oss.sgi.com Sun Mar 30 18:50:21 2008
Date: Mon, 31 Mar 2008 10:50:53 +0900 (JST)
From: au one net
To: xfs@oss.sgi.com
Subject: Mail delivery error notification [subject translated from Japanese]
Auto-Submitted: auto-replied
Message-Id: <20080331015053.121A76C0037@msa101lp.auone-net.jp>
cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.46388 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15099 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: no-reply@app.auone-net.jp Precedence: bulk X-list: xfs This is a MIME-encapsulated message. --5A4FB6C0031.1206928253/msa101lp.auone-net.jp Content-Description: Notification Content-Type: text/plain; charset=ISO-2022-JP $BAw?.@h$N%a!<%k%"%I%l%9$,!"8=:_MxMQ$G$-$J$$>uBV$K$J$C$F$$$^$9!#(B $B"(K\%a!<%k$O(Bau one net$B$N%a!<%k%7%9%F%`$h$jG[?.$7$F$*$j$^$9!#(B $B$3$N%a!<%k$K$OD>@\JV?.$$$?$@$1$^$;$s!#(B $B!J!!0J2: host mx.rediffmail.rediff.akadns.net[202.137.235.10] said: 550 Requested action not taken: mailbox unavailable (in reply to RCPT TO command) --5A4FB6C0031.1206928253/msa101lp.auone-net.jp Content-Description: Delivery report Content-Type: message/delivery-status Reporting-MTA: dns; msa101lp.auone-net.jp X-au-one-net-msa-Queue-ID: 5A4FB6C0031 X-au-one-net-msa-Sender: rfc822; xfs@oss.sgi.com Arrival-Date: Mon, 31 Mar 2008 10:50:48 +0900 (JST) Final-Recipient: rfc822; drgreg@rediffmail.com Original-Recipient: rfc822;drgreg@rediffmail.com Action: failed Status: 5.0.0 Remote-MTA: dns; mx.rediffmail.rediff.akadns.net Diagnostic-Code: smtp; 550 Requested action not taken: mailbox unavailable --5A4FB6C0031.1206928253/msa101lp.auone-net.jp Content-Description: Undelivered Message Headers Content-Type: text/rfc822-headers Received: from oss.sgi.com (N092034.ppp.dion.ne.jp [61.202.92.34]) by msa101lp.auone-net.jp (au one net msa) with ESMTP id 5A4FB6C0031 for ; Mon, 31 Mar 
2008 10:50:48 +0900 (JST) From: xfs@oss.sgi.com To: drgreg@rediffmail.com Subject: delivery failed Date: Mon, 31 Mar 2008 10:50:21 +0900 MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_NextPart_000_0003_B05C3903.E9DEF4BA" X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2600.0000 X-MIMEOLE: Produced By Microsoft MimeOLE V6.00.2600.0000 Message-Id: <20080331015049.5A4FB6C0031@msa101lp.auone-net.jp> --5A4FB6C0031.1206928253/msa101lp.auone-net.jp-- From owner-xfs@oss.sgi.com Sun Mar 30 20:03:08 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 30 Mar 2008 20:03:15 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.1 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2V337L4017874 for ; Sun, 30 Mar 2008 20:03:08 -0700 X-ASG-Debug-ID: 1206932622-10b9032b0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 5EDC88CA46E for ; Sun, 30 Mar 2008 20:03:42 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id R24Tv8210m3su3k4 for ; Sun, 30 Mar 2008 20:03:42 -0700 (PDT) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 0BA94180307E3 for ; Sun, 30 Mar 2008 22:03:09 -0500 (CDT) Message-ID: <47F0546C.9070709@sandeen.net> Date: Sun, 30 Mar 2008 22:03:08 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: xfs-oss X-ASG-Orig-Subj: [PATCH] combined features2 fixup patches (updating/rewriting what was sent in other threads) Subject: [PATCH] combined 
features2 fixup patches (updating/rewriting what was sent in other threads) Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1206932623 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -2.02 X-Barracuda-Spam-Status: No, SCORE=-2.02 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests= X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.46391 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15100 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Ensure "both" features2 slots are consistent, and set mp attr2 flag. Since older kernels may look in the sb_bad_features2 slot for flags, rather than zeroing it out on fixup, we should make it equal to the sb_features2 value. Also, if the ATTR2 flag was not found prior to features2 fixup, it was not set in the mount flags, so re-check after the fixup so that the current session will use the feature. Also fix up the comments to reflect these changes. Signed-off-by: Eric Sandeen --- Index: linux-2.6-xfs/fs/xfs/xfs_mount.c =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_mount.c +++ linux-2.6-xfs/fs/xfs/xfs_mount.c @@ -967,22 +967,26 @@ xfs_mountfs( xfs_mount_common(mp, sbp); /* - * Check for a bad features2 field alignment. This happened on - * some platforms due to xfs_sb_t not being 64bit size aligned - * when sb_features was added and hence the compiler put it in - * the wrong place. + * Check for a mismatched features2 values. 
Older kernels + * read & wrote into the wrong sb offset for sb_features2 + * on some platforms due to xfs_sb_t not being 64bit size aligned + * when sb_features2 was added, which made older superblock + * reading/writing routines swap it as a 64-bit value. * - * If we detect a bad field, we or the set bits into the existing - * features2 field in case it has already been modified and we - * don't want to lose any features. Zero the bad one and mark - * the two fields as needing updates once the transaction subsystem - * is online. + * For backwards compatibility, we make both slots equal. + * + * If we detect a mismatched field, we OR the set bits into the + * existing features2 field in case it has already been modified; we + * don't want to lose any features. We then update the bad location + * with the ORed value so that older kernels will see any features2 + * flags, and mark the two fields as needing updates once the + * transaction subsystem is online. */ - if (xfs_sb_has_bad_features2(sbp)) { + if (xfs_sb_has_mismatched_features2(sbp)) { cmn_err(CE_WARN, "XFS: correcting sb_features alignment problem"); sbp->sb_features2 |= sbp->sb_bad_features2; - sbp->sb_bad_features2 = 0; + sbp->sb_bad_features2 = sbp->sb_features2; update_flags |= XFS_SB_FEATURES2 | XFS_SB_BAD_FEATURES2; } @@ -1181,6 +1185,12 @@ xfs_mountfs( xfs_mount_log_sb(mp, update_flags); /* + * Re-check for ATTR2 in case it was found in bad_features2 slot. + */ + if (xfs_sb_version_hasattr2(&mp->m_sb)) + mp->m_flags |= XFS_MOUNT_ATTR2; + + /* * Initialise the XFS quota management subsystem for this mount */ error = XFS_QM_INIT(mp, &quotamount, &quotaflags); @@ -1890,7 +1900,8 @@ xfs_uuid_unmount( /* * Used to log changes to the superblock unit and width fields which could - * be altered by the mount options. Only the first superblock is updated. + * be altered by the mount options, as well as any potential sb_features2 + * fixup. Only the first superblock is updated.
*/ STATIC void xfs_mount_log_sb( Index: linux-2.6-xfs/fs/xfs/xfs_sb.h =================================================================== --- linux-2.6-xfs.orig/fs/xfs/xfs_sb.h +++ linux-2.6-xfs/fs/xfs/xfs_sb.h @@ -320,11 +320,12 @@ static inline int xfs_sb_good_version(xf #endif /* __KERNEL__ */ /* - * Detect a bad features2 field + * Detect a mismatched features2 field. Older kernels read/wrote + * this into the wrong slot, so to be safe we keep them in sync. */ -static inline int xfs_sb_has_bad_features2(xfs_sb_t *sbp) +static inline int xfs_sb_has_mismatched_features2(xfs_sb_t *sbp) { - return (sbp->sb_bad_features2 != 0); + return (sbp->sb_bad_features2 != sbp->sb_features2); } static inline unsigned xfs_sb_version_tonew(unsigned v) From owner-xfs@oss.sgi.com Sun Mar 30 21:37:19 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 30 Mar 2008 21:37:27 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2V4b7Lx030807 for ; Sun, 30 Mar 2008 21:37:13 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA20034; Mon, 31 Mar 2008 14:37:31 +1000 Message-ID: <47F06A8B.4090609@sgi.com> Date: Mon, 31 Mar 2008 15:37:31 +1100 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: Eric Sandeen CC: xfs-oss Subject: Re: [PATCH, RFC] fix attr fit checking for filesystems which have lost their attr2 References: <47EDCBF9.4070102@sandeen.net> <47F035E3.1030004@sgi.com> In-Reply-To: <47F035E3.1030004@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 
0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15101 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Timothy Shimmin wrote: > Hi Eric, > > Eric Sandeen wrote: >> Regarding the F8 corruption... I have a pretty narrow testcase >> now, and it turns out this was a bit of a perfect storm. >> >> First, F8 shipped 2.6.23 which had the problem with sb_features2 >> padding out on 64-bit boxes, but this was ok because userspace >> and kernelspace both did this, and it was properly mounting & >> running as attr2. >> >> However, hch came along in 2.6.24 and did some endian annotation for >> the superblock and in the process: >>> A new helper xfs_sb_from_disk handles the other (read) >>> direction and doesn't need the slightly hacky table-driven approach >>> because we only ever read the full sb from disk. >> >> However, this resulted in kernelspace behaving differently, >> and now *missing* the attr2 flag in sb_features2, (actually >> sb_bad_features2) so we mounted as if we had attr1. Which is really >> supposed to be ok, IIRC, > > Yes, I remember Nathan saying that too ;-) > >> except in xfs_attr_shortform_bytesfit we return the default >> fork offset value from the superblock, even if di_forkoff >> is *already* set. In the error case I had, di_forkoff was set >> to 15 (from previously being attr2...) but we returned 14 >> (the mp default) and I think this is where things started >> going wrong; I think this caused us to write an attr on top >> of the extent data. >> >> My understanding of this is that if di_forkoff is non-zero, >> we should always be using it for space calculations, regardless >> of whether we are mounted with attr2 or not... >> > That was my understanding as well. > I'll have a look at the code soon and see if I can > see any problems with the change and the consistency > of it all. 
> > Thanks a bunch, > Tim. > >> and with that, how's this look? to be honest I haven't run it >> through QA yet... >> >> I'm not certain if xfs_bmap_compute_maxlevels() may lead >> to similar problems.... >> Yes, I think it needs to be changed too. Looking at the code, it is needed for the log reservation stuff, where we want to know the size of the space for an EA btree root or a data fork extents root, to aid in working out the maximum btree level, which in turn determines the maximum log space for some transactions. So underestimating it is dangerous and overestimating is wasteful of log space. It is based on the same assumption: if we are not mounted attr2, it thinks we are using m_attroffset when we could be using di_forkoff. However, if di_forkoff is zero then we should use m_attroffset. The other places which referenced m_attroffset look fine to me too. >> ------------------ >> >> always use di_forkoff when checking for attr space >> >> In the case where we mount a filesystem which was previously >> using the attr2 format as attr1, returning the default >> mp->m_attroffset instead of the per-inode di_forkoff for >> inline attribute fit calculations may result in corruption, >> if the existing "attr2" formatted attribute is already taking >> more space than the default. >> >> Signed-off-by: Eric Sandeen >> --- >> >> >> Index: linux-2.6-git/fs/xfs/xfs_attr_leaf.c >> =================================================================== >> --- linux-2.6-git.orig/fs/xfs/xfs_attr_leaf.c >> +++ linux-2.6-git/fs/xfs/xfs_attr_leaf.c >> @@ -166,7 +166,7 @@ xfs_attr_shortform_bytesfit(xfs_inode_t >> if (!(mp->m_flags & XFS_MOUNT_ATTR2)) { >> if (bytes <= XFS_IFORK_ASIZE(dp)) >> - return mp->m_attroffset >> 3; >> + return dp->i_d.di_forkoff; >> return 0; >> } >> > Okay, and XFS_IFORK_ASIZE(dp) looks at di_forkoff and, if non-zero, returns literal-size - di_forkoff. Right, so it will only fit if di_forkoff is operational. Cool.
--Tim From owner-xfs@oss.sgi.com Sun Mar 30 23:37:41 2008 Received: with ECARTIS (v1.0.0; list xfs); Sun, 30 Mar 2008 23:37:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with SMTP id m2V6baxQ010681 for ; Sun, 30 Mar 2008 23:37:39 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA24536; Mon, 31 Mar 2008 16:38:00 +1000 Message-ID: <47F086C8.7070400@sgi.com> Date: Mon, 31 Mar 2008 17:38:00 +1100 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: Eric Sandeen CC: xfs-oss Subject: Re: [PATCH, RFC] fix attr fit checking for filesystems which have lost their attr2 References: <47EDCBF9.4070102@sandeen.net> <47F035E3.1030004@sgi.com> <47F06A8B.4090609@sgi.com> In-Reply-To: <47F06A8B.4090609@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15103 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Eric, So probably need something like this for xfs_bmap_compute_maxlevels()... 
--Tim =========================================================================== Index: fs/xfs/xfs_bmap.c =========================================================================== --- a/fs/xfs/xfs_bmap.c 2008-03-31 16:32:24.000000000 +1000 +++ b/fs/xfs/xfs_bmap.c 2008-03-31 16:21:11.439806073 +1000 @@ -4154,16 +4154,21 @@ xfs_bmap_compute_maxlevels( * number of leaf entries, is controlled by the type of di_nextents * (a signed 32-bit number, xfs_extnum_t), or by di_anextents * (a signed 16-bit number, xfs_aextnum_t). + * + * Note that we can no longer assume that if we are in ATTR1 that + * the fork offset of all the inodes will be (m_attroffset >> 3) + * because we could have mounted with ATTR2 and then mounted back + * with ATTR1, keeping the di_forkoff's fixed but probably at + * various positions. Therefore, for both ATTR1 and ATTR2 + * we have to assume the worst case scenario of a minimum size + * available. */ if (whichfork == XFS_DATA_FORK) { maxleafents = MAXEXTNUM; - sz = (mp->m_flags & XFS_MOUNT_ATTR2) ? - XFS_BMDR_SPACE_CALC(MINDBTPTRS) : mp->m_attroffset; + sz = XFS_BMDR_SPACE_CALC(MINDBTPTRS); } else { maxleafents = MAXAEXTNUM; - sz = (mp->m_flags & XFS_MOUNT_ATTR2) ? 
- XFS_BMDR_SPACE_CALC(MINABTPTRS) : - mp->m_sb.sb_inodesize - mp->m_attroffset; + sz = XFS_BMDR_SPACE_CALC(MINABTPTRS); } maxrootrecs = (int)XFS_BTREE_BLOCK_MAXRECS(sz, xfs_bmdr, 0); minleafrecs = mp->m_bmap_dmnr[0]; From owner-xfs@oss.sgi.com Mon Mar 31 07:04:10 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 31 Mar 2008 07:04:17 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-2.1 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2VE49T9005820 for ; Mon, 31 Mar 2008 07:04:09 -0700 X-ASG-Debug-ID: 1206972283-6291005c0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 6E8518D0C26 for ; Mon, 31 Mar 2008 07:04:43 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com with ESMTP id aC7DdX9RrmQGSVPl for ; Mon, 31 Mar 2008 07:04:43 -0700 (PDT) Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 4300918030807; Mon, 31 Mar 2008 08:32:23 -0500 (CDT) Message-ID: <47F0E7E6.8000006@sandeen.net> Date: Mon, 31 Mar 2008 08:32:22 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.12 (Macintosh/20080213) MIME-Version: 1.0 To: Timothy Shimmin CC: xfs-oss X-ASG-Orig-Subj: Re: [PATCH, RFC] fix attr fit checking for filesystems which have lost their attr2 Subject: Re: [PATCH, RFC] fix attr fit checking for filesystems which have lost their attr2 References: <47EDCBF9.4070102@sandeen.net> <47F035E3.1030004@sgi.com> <47F06A8B.4090609@sgi.com> <47F086C8.7070400@sgi.com> In-Reply-To: <47F086C8.7070400@sgi.com> Content-Type: text/plain; charset=ISO-8859-1 
Content-Transfer-Encoding: 7bit X-Barracuda-Connect: sandeen.net[209.173.210.139] X-Barracuda-Start-Time: 1206972285 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -1.42 X-Barracuda-Spam-Status: No, SCORE=-1.42 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=COMMA_SUBJECT X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.46435 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.60 COMMA_SUBJECT Subject is like 'Re: FDSDS, this is a subject' X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 15105 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Timothy Shimmin wrote: > Eric, > > So probably need something like this for xfs_bmap_compute_maxlevels()... Yep, I think that looks good. I had forgotten that you could mount with "noattr2" to change the default behavior... so this isn't only a bug w.r.t. bad_features2, is it. Thanks, -Eric > > --Tim > > =========================================================================== > Index: fs/xfs/xfs_bmap.c > =========================================================================== > > --- a/fs/xfs/xfs_bmap.c 2008-03-31 16:32:24.000000000 +1000 > +++ b/fs/xfs/xfs_bmap.c 2008-03-31 16:21:11.439806073 +1000 > @@ -4154,16 +4154,21 @@ xfs_bmap_compute_maxlevels( > * number of leaf entries, is controlled by the type of di_nextents > * (a signed 32-bit number, xfs_extnum_t), or by di_anextents > * (a signed 16-bit number, xfs_aextnum_t). 
> + * > + * Note that we can no longer assume that if we are in ATTR1 that > + * the fork offset of all the inodes will be (m_attroffset >> 3) > + * because we could have mounted with ATTR2 and then mounted back > + * with ATTR1, keeping the di_forkoff's fixed but probably at > + * various positions. Therefore, for both ATTR1 and ATTR2 > + * we have to assume the worst case scenario of a minimum size > + * available. > */ > if (whichfork == XFS_DATA_FORK) { > maxleafents = MAXEXTNUM; > - sz = (mp->m_flags & XFS_MOUNT_ATTR2) ? > - XFS_BMDR_SPACE_CALC(MINDBTPTRS) : mp->m_attroffset; > + sz = XFS_BMDR_SPACE_CALC(MINDBTPTRS); > } else { > maxleafents = MAXAEXTNUM; > - sz = (mp->m_flags & XFS_MOUNT_ATTR2) ? > - XFS_BMDR_SPACE_CALC(MINABTPTRS) : > - mp->m_sb.sb_inodesize - mp->m_attroffset; > + sz = XFS_BMDR_SPACE_CALC(MINABTPTRS); > } > maxrootrecs = (int)XFS_BTREE_BLOCK_MAXRECS(sz, xfs_bmdr, 0); > minleafrecs = mp->m_bmap_dmnr[0]; > From owner-xfs@oss.sgi.com Mon Mar 31 10:36:52 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 31 Mar 2008 10:37:00 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=BAYES_50,MIME_8BIT_HEADER autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2VHaonV007831 for ; Mon, 31 Mar 2008 10:36:52 -0700 X-ASG-Debug-ID: 1206985043-1cc400420000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from server.dwo.hu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 24BB68D1DF4 for ; Mon, 31 Mar 2008 10:37:24 -0700 (PDT) Received: from server.dwo.hu (server.dwo.hu [87.229.110.63]) by cuda.sgi.com with ESMTP id WDQQNwaApTK0tr7l for ; Mon, 31 Mar 2008 10:37:24 -0700 (PDT) Received: from [87.229.110.93] (helo=HvDCorps) by server.dwo.hu with esmtpa (Exim 4.50) id 1JgO7j-0006Lo-JO; Mon, 31 Mar 2008 
19:48:47 +0200 From: =?iso-8859-2?Q?Husz=E1r_Viktor_D=E9nes?= To: "'Emmanuel Florac'" , X-ASG-Orig-Subj: RE: free space problem Subject: RE: free space problem Date: Mon, 31 Mar 2008 19:36:56 +0200 MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-2" X-Mailer: Microsoft Office Outlook, Build 11.0.6353 X-MIMEOLE: Produced By Microsoft MimeOLE V6.00.2900.3198 Thread-Index: AciTBIKUEFLLtiLZSOCI0q2Yk5I6iAAUSTkg In-Reply-To: <20080331093951.100ec125@galadriel.home> X-Barracuda-Connect: server.dwo.hu[87.229.110.63] X-Barracuda-Start-Time: 1206985046 Message-Id: <20080331173724.24BB68D1DF4@cuda.sgi.com> X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -0.72 X-Barracuda-Spam-Status: No, SCORE=-0.72 using per-user scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=MARKETING_SUBJECT, MSGID_FROM_MTA_ID X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.46450 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.60 MARKETING_SUBJECT Subject contains popular marketing words 0.70 MSGID_FROM_MTA_ID Message-Id for external message added locally X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id m2VHaqnV007842 X-archive-position: 15107 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hvd@dwo.hu Precedence: bulk X-list: xfs -----Original Message----- From: Emmanuel Florac [mailto:eflorac@intellique.com] Sent: Monday, March 31, 2008 9:40 AM To: Huszár Viktor Dénes; xfs@oss.sgi.com Subject: Re: free space problem Le Mon, 31 Mar 2008 02:31:16 +0200 vous écriviez: >Then maybe you simply ran out of inodes. It's common if you have lots >of small files. 
>There is a way to increase the number of inodes but I >don't remember it right now. -- No, unfortunately not. There are not many small files and the inode usage is 1%. See:

/bin/df -i
Filesystem            Inodes   IUsed     IFree IUse% Mounted on
/dev/md0             5494272  154415   5339857    3% /
tmpfs                2056252       5   2056247    1% /lib/init/rw
udev                 2056252     487   2055765    1% /dev
tmpfs                2056252      10   2056242    1% /dev/shm
/dev/mapper/a-a     88427616  867107  87560509    1% /var/www/users

__________ Information from ESET NOD32 Antivirus, version of virus signature database 2987 (20080331) __________ The message was checked by ESET NOD32 Antivirus. http://www.eset.com From owner-xfs@oss.sgi.com Mon Mar 31 10:42:54 2008 Received: with ECARTIS (v1.0.0; list xfs); Mon, 31 Mar 2008 10:43:01 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.3.0-r574664 (2007-09-11) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=BAYES_50,MIME_8BIT_HEADER autolearn=no version=3.3.0-r574664 Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.11.20060308/8.12.11/SuSE Linux 0.7) with ESMTP id m2VHgrhZ008814 for ; Mon, 31 Mar 2008 10:42:54 -0700 X-ASG-Debug-ID: 1206985408-1c96006a0000-NocioJ X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from server.dwo.hu (localhost [127.0.0.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id 158428D4E9B; Mon, 31 Mar 2008 10:43:28 -0700 (PDT) Received: from server.dwo.hu (server.dwo.hu [87.229.110.63]) by cuda.sgi.com with ESMTP id TNqz2DaKowf2LueC; Mon, 31 Mar 2008 10:43:28 -0700 (PDT) Received: from [87.229.110.93] (helo=HvDCorps) by server.dwo.hu with esmtpa (Exim 4.50) id 1JgODk-0006VS-UO; Mon, 31 Mar 2008 19:55:00 +0200 From: =?iso-8859-2?Q?Husz=E1r_Viktor_D=E9nes?= To: "'David Chinner'" Cc: X-ASG-Orig-Subj: RE: free space problem Subject: RE: free space problem Date: Mon, 31 Mar 2008 19:43:24 +0200 MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-2" X-Mailer: Microsoft Office Outlook, Build 11.0.6353 X-MIMEOLE: Produced By Microsoft MimeOLE V6.00.2900.3198 Thread-Index: AciSyUJkShXZw1nrSlKU50o+Is8vDwAjJE3g In-Reply-To: <20080331003847.GF103491721@sgi.com> X-Barracuda-Connect: server.dwo.hu[87.229.110.63] X-Barracuda-Start-Time: 1206985409 Message-Id: <20080331174328.158428D4E9B@cuda.sgi.com> X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by cuda.sgi.com at sgi.com X-Barracuda-Spam-Score: -0.72 X-Barracuda-Spam-Status: No, SCORE=-0.72 using per-user scores of
TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.1 tests=MARKETING_SUBJECT, MSGID_FROM_MTA_ID X-Barracuda-Spam-Report: Code version 3.1, rules version 3.1.46450 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.60 MARKETING_SUBJECT Subject contains popular marketing words 0.70 MSGID_FROM_MTA_ID Message-Id for external message added locally X-Virus-Scanned: ClamAV 0.91.2/6021/Wed Feb 27 15:55:48 2008 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id m2VHgshZ008818 X-archive-position: 15108 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hvd@dwo.hu Precedence: bulk X-list: xfs -----Original Message----- From: David Chinner [mailto:dgc@sgi.com] Sent: Monday, March 31, 2008 2:39 AM To: Huszár Viktor Dénes Cc: xfs@oss.sgi.com Subject: Re: free space problem > What is the error you are getting? Does 'dd if=/dev/zero of=/mntpt/file > bs=512 count=1' work? If not, what's the error? What's the output of 'df > -h' and 'df -ih'? Output of 'xfs_info '? Cheers, Dave -- Actually, as of yesterday it started working again, so I can't send the xfs_info output from when it was failing. While it was failing, dd reported nothing unusual, just a plain "no space left on device", and the kernel logged nothing (dmesg). The problem appeared suddenly and mysteriously, and it disappeared in the same way... If it happens again, I'll send you the info immediately. The problem was there for a couple of days; we unmounted and ran xfs_repair several times, then mounted it back, and not even mkdir worked. Within 5-10 minutes we would write a few GB to the disk and, with plenty of GB still free, it would still say: no space left on device. So if it appears again, I'll get back to you guys. I'm really thankful for your help!
Viktor from Hungary, Budapest

From owner-xfs@oss.sgi.com Mon Mar 31 11:27:38 2008
From: Thor Kristoffersen <thorkr@gmail.com>
To: xfs@oss.sgi.com
Date: Mon, 31 Mar 2008 20:26:00 +0200
Subject: Does XFS prevent disk spindown?

I've noticed that when I spin down XFS-mounted disks they spin up again
shortly afterwards. I used iostat to monitor disk accesses to a mounted
partition (with noatime) in single-user mode. Apparently there is a write
access to the partition approximately every 35 seconds, even when the
partition is idle. As far as I can tell, since there is no data that
needs to be flushed, this must be done by an XFS daemon for some purpose.

Is there any setting or mount option I can use to get rid of this
behavior? I know I can freeze the filesystem, but then I have to remember
to unfreeze it every time I need to write to it, so it's not an ideal
solution.

Thor

From owner-xfs@oss.sgi.com Mon Mar 31 16:32:28 2008
From: Michael Nishimoto <miken@agami.com>
To: XFS Mailing List <xfs@oss.sgi.com>
Date: Mon, 31 Mar 2008 15:45:28 -0700
Subject: Definition of XFS_DQUOT_LOGRES()
The comment for XFS_DQUOT_LOGRES states that we need to reserve space
for 3 dquots. I can't figure out why we need to add this amount to *all*
operations, and why it isn't added only after a runtime quotaon check.
Comments?

Michael

/*
 * In the worst case, when both user and group quotas are on,
 * we can have a max of three dquots changing in a single transaction.
 */
#define XFS_DQUOT_LOGRES(mp)	(sizeof(xfs_disk_dquot_t) * 3)

From owner-xfs@oss.sgi.com Mon Mar 31 17:29:44 2008
From: David Chinner <dgc@sgi.com>
To: Thor Kristoffersen
Cc: xfs@oss.sgi.com
Date: Tue, 1 Apr 2008 10:30:05 +1000
Subject: Re: Does XFS prevent disk spindown?

On Mon, Mar 31, 2008 at 08:26:00PM +0200, Thor Kristoffersen wrote:
> I've noticed that when I spin down XFS-mounted disks they spin up again
> shortly afterwards.
> I used iostat to monitor disk accesses to a mounted
> partition (with noatime) in single user mode. Apparently there is a
> write access to the partition approximately every 35 seconds, even if
> the partition is idle. As far as I can understand, since there is no
> data that needs to be flushed this must be done by an XFS daemon for
> some purpose.
>
> Is there any setting or mount option I can use to get rid of this
> behavior? I know I can freeze the filesystem, but then I have to
> remember to unfreeze it every time I need to write to it, so it's not
> an ideal solution.

Turn on laptop mode?

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Mon Mar 31 17:35:07 2008
From: David Chinner <dgc@sgi.com>
To: Huszár Viktor Dénes
Cc: 'Emmanuel Florac', xfs@oss.sgi.com
Date: Tue, 1 Apr 2008 10:35:18 +1000
Subject: Re: free space problem

On Mon, Mar 31, 2008 at 07:36:56PM +0200, Huszár Viktor Dénes wrote:
> -----Original Message-----
> From: Emmanuel Florac [mailto:eflorac@intellique.com]
> Sent: Monday, March 31, 2008 9:40 AM
> To: Huszár Viktor Dénes; xfs@oss.sgi.com
> Subject: Re: free space problem
>
> On Mon, 31 Mar 2008 02:31:16 +0200, you wrote:
>
> > Then maybe you simply ran out of inodes. It's common if you have lots
> > of small files. There is a way to increase the number of inodes but I
> > don't remember it right now.
>
> No, unfortunately not. There are not many small files and the inode
> usage is 1%.

Yes, but if you have fragmented free space then it is possible that
there are not enough free extents large enough (or aligned correctly) to
allocate more inodes. The number of "free inodes" reported doesn't take
this into account; it only looks at the number of free blocks and
converts that to a theoretical number of inodes that could be allocated
in that space (i.e. it assumes a perfect fit and no waste).

In this "not quite full filesystem" situation, you can write data to the
filesystem, but any attempt to create a new inode (new file, directory,
etc.) will fail with ENOSPC. This sounds like the symptoms you are
reporting....

Cheers,

Dave.
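[Editorial note appended to the archive: Dave's diagnosis can be illustrated with a toy model. The chunk size and alignment rule below are hypothetical stand-ins, not XFS's actual allocator parameters; the point is only that inode allocation needs one contiguous, aligned run of blocks, so a filesystem can report plenty of free space in aggregate while no single free extent can satisfy an inode-chunk allocation.]

```python
# Toy model of the "free space but ENOSPC on create" situation.
# CHUNK, the extent list, and the alignment rule are illustrative
# assumptions, not XFS's real allocator parameters.
CHUNK = 16  # blocks needed contiguously, and aligned, for one inode chunk

def can_alloc_inode_chunk(free_extents, chunk=CHUNK):
    """free_extents: list of (start_block, length) free runs."""
    for start, length in free_extents:
        aligned = -(-start // chunk) * chunk  # first aligned boundary >= start
        if aligned + chunk <= start + length:
            return True
    return False

# 4000 blocks free in total, but scattered in 500 runs of 8 blocks:
# small data writes still succeed, yet no inode chunk fits anywhere,
# so creating a new file or directory would fail with ENOSPC.
fragmented = [(i * 40, 8) for i in range(500)]
print(sum(length for _, length in fragmented))   # total free: 4000 blocks
print(can_alloc_inode_chunk(fragmented))         # False
print(can_alloc_inode_chunk([(0, 4000)]))        # True: same space, one run
```

This is why df can show free space, and a large theoretical free-inode count, while file creation fails.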
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Mon Mar 31 18:28:36 2008
From: David Chinner <dgc@sgi.com>
To: Michael Nishimoto
Cc: XFS Mailing List <xfs@oss.sgi.com>
Date: Tue, 1 Apr 2008 11:28:56 +1000
Subject: Re: Definition of XFS_DQUOT_LOGRES()

On Mon, Mar 31, 2008 at 03:45:28PM -0700, Michael Nishimoto wrote:
> The comment for XFS_DQUOT_LOGRES states that we need to reserve space
> for 3 dquots. I can't figure out why we need to add this amount to *all*
> operations and why this amount wasn't added after doing a runtime
> quotaon check.

It probably could be done that way. But given that:

> /*
>  * In the worst case, when both user and group quotas are on,
>  * we can have a max of three dquots changing in a single transaction.
>  */
> #define XFS_DQUOT_LOGRES(mp)	(sizeof(xfs_disk_dquot_t) * 3)

sizeof(xfs_disk_dquot_t) = 104 bytes, so the overall addition to the
reservations is minor considering:

[0]kdb> xtrres 0xe0000038055ac6c0
write:       109752    truncate:    223672    rename:      305976
link:        153144    remove:      153144    symlink:     158520
create:      158392    mkdir:       158392    ifree:        58936
ichange:       2104    growdata:     45696    swrite:         384
addafork:     70584    writeid:        384    attrinval:   179328
attrset:      22968    attrrm:       90552    clearagi:      1152
growrtalloc:  66048    growrtzero:    4224    growrtfree:    6272
[0]kdb>

On a 14GB filesystem, most of the transactions this is added to are on
the far side of 150k, which means less than 0.2% of the entire
reservation comes from the dquots. With larger block sizes and/or larger
filesystems, these get much larger. e.g. the same 14GB device with a 64k
block size instead of 4k:

[0]kdb> xtrres 0xe00000b8027d39f8
write:       987576    truncate:   1977272    rename:     2891064
link:       1445688    remove:     1445688    symlink:    1512504
create:     1511864    mkdir:      1511864    ifree:       470584
ichange:       1592    growdata:    395904    swrite:         384
addafork:    658616    writeid:        384    attrinval:  1581696
attrset:     329656    attrrm:      791480    clearagi:       640
growrtalloc: 592640    growrtzero:   65664    growrtfree:   67200
[0]kdb>

The rename reservation is *2.8MB* (up from 300k). IOWs, 300 bytes is
really noise when it comes to reservation space. (OT: see why I want to
increase the log size now? :)

Is it worth the complexity of adding this dquot reservation at runtime
for a best-case reduction of 0.2% in log space reservation usage?
Probably not, but patches can be convincing ;)

Cheers,

Dave.
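[Editorial note appended to the archive: Dave's percentages are easy to check. The macro reserves sizeof(xfs_disk_dquot_t) * 3 = 104 * 3 = 312 bytes, which can be compared against a few of the per-transaction reservations from the first xtrres dump above.]

```python
# Sanity-check of the numbers in Dave's reply: the dquot log reservation
# (104 bytes * 3 dquots) as a fraction of several transaction reservations
# taken from the first xtrres dump (4k block size, 14GB filesystem).
DQUOT_LOGRES = 104 * 3  # sizeof(xfs_disk_dquot_t) * 3 = 312 bytes

reservations = {  # bytes, copied from the kdb output above
    "truncate": 223672,
    "rename": 305976,
    "create": 158392,
    "link": 153144,
}

for name, res in sorted(reservations.items()):
    print(f"{name}: {DQUOT_LOGRES / res:.3%}")
# each of these comes out around 0.1-0.2%, matching Dave's "less than
# 0.2% of the entire reservation" estimate for the >150k transactions
```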
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Mon Mar 31 20:08:49 2008
From: Huszár Viktor Dénes <hvd@dwo.hu>
To: David Chinner
Cc: 'Emmanuel Florac', xfs@oss.sgi.com
Date: Tue, 1 Apr 2008 05:08:52 +0200
Subject: RE: free space problem

> Yes, but if you have fragmented free space then it is possible that
> there are not enough free extents large enough (or aligned correctly)
> to allocate more inodes. The number of "free inodes" reported doesn't
> take this into account; it only looks at the number of free blocks and
> converts that to a theoretical number of inodes that could be allocated
> in that space (i.e. it assumes perfect fit and no waste).
>
> In this "not quite full filesystem" situation, you can write data to
> the filesystem, but any attempt to create a new inode (new file,
> directory, etc) will fail with ENOSPC. This sounds like the symptoms
> you are reporting....
>
> Cheers,
>
> Dave.

What you describe might happen with a lot of small files, but in our
case our attempts failed both to create new inodes and to write data.
You suppose that we have fragmented free space on which no inodes can be
created; in our case the situation was different. However, as it has
started working again, I still can't provide you with any
xfs_info/debug output.

Thanks again,

Viktor

From owner-xfs@oss.sgi.com Mon Mar 31 22:59:40 2008
From: Eric Sandeen <sandeen@sandeen.net>
To: David Chinner
Cc: Thor Kristoffersen, xfs@oss.sgi.com
Date: Tue, 01 Apr 2008 01:00:13 -0500
Subject: Re: Does XFS prevent disk spindown?
David Chinner wrote:
> On Mon, Mar 31, 2008 at 08:26:00PM +0200, Thor Kristoffersen wrote:
>> I've noticed that when I spin down XFS-mounted disks they spin up again
>> shortly afterwards. I used iostat to monitor disk accesses to a mounted
>> partition (with noatime) in single user mode. Apparently there is a
>> write access to the partition approximately every 35 seconds, even if
>> the partition is idle. As far as I can understand, since there is no
>> data that needs to be flushed this must be done by an XFS daemon for
>> some purpose.

Use blktrace, or

  echo 1 > /proc/sys/vm/block_dump

to see what block is written and who's writing it... it's probably the
superblock? What kernel?

On an idle-in-gdm 2.6.25 system with an xfs root, I see something like
this from block_dump... it does settle out after a while:

# while true
> do
> date
> sleep 5
> dmesg -c
> done
bash(2986): READ block 448128 on sda2
bash(2986): dirtied inode 820453 (date) on sda2
date(2986): READ block 448160 on sda2
bash(2987): READ block 449736 on sda2
bash(2987): dirtied inode 820470 (sleep) on sda2
sleep(2987): READ block 449768 on sda2
Tue Apr 1 00:24:12 CDT 2008
xfssyncd(465): dirtied inode 128 (/) on sda2
xfssyncd(465): WRITE block 10246607 on sda2
Tue Apr 1 00:24:17 CDT 2008
Tue Apr 1 00:24:22 CDT 2008
Tue Apr 1 00:24:27 CDT 2008
Tue Apr 1 00:24:32 CDT 2008
Tue Apr 1 00:24:37 CDT 2008
Tue Apr 1 00:24:42 CDT 2008
pdflush(178): WRITE block 64 on sda2
Tue Apr 1 00:24:47 CDT 2008
Tue Apr 1 00:24:52 CDT 2008
Tue Apr 1 00:24:57 CDT 2008
Tue Apr 1 00:25:02 CDT 2008
Tue Apr 1 00:25:07 CDT 2008
Tue Apr 1 00:25:12 CDT 2008
xfssyncd(465): dirtied inode 128 (/) on sda2
xfssyncd(465): WRITE block 10246609 on sda2
Tue Apr 1 00:25:17 CDT 2008
Tue Apr 1 00:25:22 CDT 2008
Tue Apr 1 00:25:27 CDT 2008
Tue Apr 1 00:25:32 CDT 2008
Tue Apr 1 00:25:37 CDT 2008
Tue Apr 1 00:25:42 CDT 2008
pdflush(178): WRITE block 64 on sda2
Tue Apr 1 00:25:47 CDT 2008
Tue Apr 1 00:25:52 CDT 2008
Tue Apr 1 00:25:57 CDT 2008
Tue Apr 1 00:26:02 CDT 2008
Tue Apr 1 00:26:07 CDT 2008
Tue Apr 1 00:26:12 CDT 2008
Tue Apr 1 00:26:17 CDT 2008
Tue Apr 1 00:26:22 CDT 2008
Tue Apr 1 00:26:27 CDT 2008
Tue Apr 1 00:26:32 CDT 2008
Tue Apr 1 00:26:37 CDT 2008
Tue Apr 1 00:26:42 CDT 2008
Tue Apr 1 00:26:47 CDT 2008
Tue Apr 1 00:26:52 CDT 2008
Tue Apr 1 00:26:57 CDT 2008
Tue Apr 1 00:27:02 CDT 2008
Tue Apr 1 00:27:07 CDT 2008
Tue Apr 1 00:27:12 CDT 2008
Tue Apr 1 00:27:17 CDT 2008
Tue Apr 1 00:27:22 CDT 2008
Tue Apr 1 00:27:27 CDT 2008
Tue Apr 1 00:27:32 CDT 2008
Tue Apr 1 00:27:37 CDT 2008
Tue Apr 1 00:27:42 CDT 2008
Tue Apr 1 00:27:47 CDT 2008
Tue Apr 1 00:27:52 CDT 2008
Tue Apr 1 00:27:57 CDT 2008
Tue Apr 1 00:28:02 CDT 2008
Tue Apr 1 00:28:07 CDT 2008
Tue Apr 1 00:28:12 CDT 2008
Tue Apr 1 00:28:17 CDT 2008
Tue Apr 1 00:28:22 CDT 2008
Tue Apr 1 00:28:27 CDT 2008
Tue Apr 1 00:28:32 CDT 2008
Tue Apr 1 00:28:37 CDT 2008
Tue Apr 1 00:28:42 CDT 2008
Tue Apr 1 00:28:47 CDT 2008
Tue Apr 1 00:28:52 CDT 2008
Tue Apr 1 00:28:57 CDT 2008
Tue Apr 1 00:29:02 CDT 2008
Tue Apr 1 00:29:07 CDT 2008
Tue Apr 1 00:29:12 CDT 2008
Tue Apr 1 00:29:17 CDT 2008
Tue Apr 1 00:29:22 CDT 2008
Tue Apr 1 00:29:27 CDT 2008
Tue Apr 1 00:29:32 CDT 2008
Tue Apr 1 00:29:37 CDT 2008
Tue Apr 1 00:29:42 CDT 2008
Tue Apr 1 00:29:47 CDT 2008
Tue Apr 1 00:29:52 CDT 2008
Tue Apr 1 00:29:57 CDT 2008
....

-Eric

>> Is there any setting or mount option I can use to get rid of this
>> behavior? I know I can freeze the filesystem, but then I have to
>> remember to unfreeze it every time I need to write to it, so it's not
>> an ideal solution.
>
> Turn on laptop mode?
>
> Cheers,
>
> Dave.
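[Editorial note appended to the archive: a small script can quantify the periodicity visible in Eric's capture by pairing each dmesg WRITE line with the preceding `date` stamp and computing per-process intervals. The log entries below are transcribed from the capture above; the parsing assumes that exact format.]

```python
# Hedged sketch: estimate how often each process writes, from a
# block_dump-style capture like Eric's (log format is assumed).
from datetime import datetime

log = [
    ("Tue Apr 1 00:24:12 CDT 2008", "xfssyncd(465): WRITE block 10246607 on sda2"),
    ("Tue Apr 1 00:24:42 CDT 2008", "pdflush(178): WRITE block 64 on sda2"),
    ("Tue Apr 1 00:25:12 CDT 2008", "xfssyncd(465): WRITE block 10246609 on sda2"),
    ("Tue Apr 1 00:25:42 CDT 2008", "pdflush(178): WRITE block 64 on sda2"),
]

def write_intervals(entries):
    """Seconds between successive WRITEs, keyed by process name."""
    last, intervals = {}, {}
    for stamp, line in entries:
        proc = line.split("(")[0]
        # parse only the HH:MM:SS field; timezone names vary by machine
        t = datetime.strptime(stamp.split()[3], "%H:%M:%S")
        if proc in last:
            intervals.setdefault(proc, []).append((t - last[proc]).total_seconds())
        last[proc] = t
    return intervals

print(write_intervals(log))   # {'xfssyncd': [60.0], 'pdflush': [60.0]}
```

In this trace the xfssyncd write is periodic at about once a minute rather than continuous I/O, which is consistent with a periodic sync waking the disk rather than real workload.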