From owner-xfs@oss.sgi.com Mon Jan 1 23:36:54 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 01 Jan 2007 23:36:59 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l027aqqw013711 for ; Mon, 1 Jan 2007 23:36:53 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA06103; Tue, 2 Jan 2007 18:35:58 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l027Zw7Y80360519; Tue, 2 Jan 2007 18:35:58 +1100 (AEDT) Received: (from allanr@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l027Zwpt80352840; Tue, 2 Jan 2007 18:35:58 +1100 (AEDT) Date: Tue, 2 Jan 2007 18:35:58 +1100 (AEDT) From: Allan Randall Message-Id: <200701020735.l027Zwpt80352840@snort.melbourne.sgi.com> To: linux-xfs@oss.sgi.com, asg-qa@melbourne.sgi.com Subject: TAKE - Dmapi QA build X-archive-position: 10156 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: allanr@snort.melbourne.sgi.com Precedence: bulk X-list: xfs Dmapi QA build fix Date: Tue Jan 2 18:34:57 AEDT 2007 Workarea: snort.melbourne.sgi.com:/home/allanr/isms/xfs-cmds-2 Inspected by: ddiss The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:27826a xfstests/dmapi/Makefile.in - 1.9 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/dmapi/Makefile.in.diff?r1=text&tr1=1.9&r2=text&tr2=1.8&f=h - added default make option xfstests/include/buildmacros - 1.8 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/include/buildmacros.diff?r1=text&tr1=1.8&r2=text&tr2=1.7&f=h - removed special case for dmapi dir From owner-xfs@oss.sgi.com Tue Jan 2 03:36:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 02 Jan 2007 03:36:10 -0800 (PST) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l02Ba3qw010191 for ; Tue, 2 Jan 2007 03:36:04 -0800 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1H1heM-00061E-4u; Tue, 02 Jan 2007 11:17:46 +0000 Date: Tue, 2 Jan 2007 11:17:46 +0000 From: "'Christoph Hellwig'" To: "Chen, Kenneth W" Cc: "'Christoph Hellwig'" , "'Andrew Morton'" , Dmitriy Monakhov , Dmitriy Monakhov , linux-kernel@vger.kernel.org, Linux Memory Management , devel@openvz.org, xfs@oss.sgi.com Subject: Re: [PATCH] incorrect error handling inside generic_file_direct_write Message-ID: <20070102111746.GA22657@infradead.org> Mail-Followup-To: 'Christoph Hellwig' , "Chen, Kenneth W" , 'Andrew Morton' , Dmitriy Monakhov , Dmitriy Monakhov , linux-kernel@vger.kernel.org, Linux Memory Management , devel@openvz.org, xfs@oss.sgi.com References: <20061215104341.GA20089@infradead.org> <000101c7207a$48c138f0$ff0da8c0@amr.corp.intel.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <000101c7207a$48c138f0$ff0da8c0@amr.corp.intel.com> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 10158 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org 
Precedence: bulk X-list: xfs Content-Length: 1377 Lines: 33 On Fri, Dec 15, 2006 at 10:53:18AM -0800, Chen, Kenneth W wrote: > Christoph Hellwig wrote on Friday, December 15, 2006 2:44 AM > > So we're doing the sync_page_range once in __generic_file_aio_write > > with i_mutex held. > > > > > > > mutex_lock(&inode->i_mutex); > > > - ret = __generic_file_aio_write_nolock(iocb, iov, nr_segs, > > > - &iocb->ki_pos); > > > + ret = __generic_file_aio_write(iocb, iov, nr_segs, pos); > > > mutex_unlock(&inode->i_mutex); > > > > > > if (ret > 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) { > > > > And then another time after it's unlocked, this seems wrong. > > > I didn't invent that mess though. > > I should've ask the question first: in 2.6.20-rc1, generic_file_aio_write > will call sync_page_range twice, once from __generic_file_aio_write_nolock > and once within the function itself. Is it redundant? Can we delete the > one in the top level function? Like the following? Really? I'm looking at -rc3 now as -rc1 is rather old and it's definitly not the case there. I also can't remember ever doing this - when I started the generic read/write path untangling I had exactly the same situation that's now in -rc3: - generic_file_aio_write_nolock calls sync_page_range_nolock - generic_file_aio_write calls sync_page_range - __generic_file_aio_write_nolock doesn't call any sync_page_range variant From owner-xfs@oss.sgi.com Fri Jan 5 14:33:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 05 Jan 2007 14:33:07 -0800 (PST) Received: from service.eng.exegy.net (68-191-203-42.static.stls.mo.charter.com [68.191.203.42]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l05MWxqw011921 for ; Fri, 5 Jan 2007 14:33:00 -0800 Received: from HANAFORD.eng.exegy.net (hanaford.eng.exegy.net [10.19.1.4]) by service.eng.exegy.net (8.13.1/8.13.1) with ESMTP id l05MJnIS010019 for ; Fri, 5 Jan 2007 16:19:49 -0600 X-Ninja-PIM: Scanned by Ninja X-Ninja-AttachmentFiltering: (no action) Received: from [10.19.4.98] ([10.19.4.98]) by HANAFORD.eng.exegy.net with Microsoft SMTPSVC(6.0.3790.1830); Fri, 5 Jan 2007 16:19:48 -0600 Message-ID: <459ECF04.4090803@exegy.com> Date: Fri, 05 Jan 2007 16:19:48 -0600 From: Dave Lloyd User-Agent: Thunderbird 1.5.0.5 (X11/20060815) MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: [Fwd: [Fwd: xfs write speed regression 2.6.18.1 to 2.6.19.1]] Content-Type: multipart/mixed; boundary="------------060803090605060507010505" X-OriginalArrivalTime: 05 Jan 2007 22:19:48.0567 (UTC) FILETIME=[9CA8DE70:01C73117] X-archive-position: 10174 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dlloyd@exegy.com Precedence: bulk X-list: xfs Content-Length: 24707 Lines: 478 This is a multi-part message in MIME format. --------------060803090605060507010505 Content-Type: text/plain; charset="ISO-8859-1"; format="flowed" Content-Transfer-Encoding: 7bit From a co-worker. Anyone know what might have changed this between 2.6.18 and 2.6.19 when the issue first appeared? -- Dave Lloyd Product Support Engineer, Exegy, Inc. +1.314.450.5342 dlloyd@exegy.com -------- Original Message -------- Subject: [Fwd: xfs write speed regression 2.6.18.1 to 2.6.19.1] Date: Fri, 05 Jan 2007 16:16:11 -0600 From: Mr. Berkley Shands To: Dave Lloyd The short summary is under 2.6.18.* xfs is able to maintain a write rate of > 900MB/Sec for the first TB of data. Peak is ~256MB/Sec per raid X 4 raids. Under 2.6.19.1 this rate drops to ~220MB/Sec. 
and the allocations are no longer smooth starting at the outside edges. under 2.6.20-rc3 the speeds have gone back up some, but they are 10% slower than 2.6.18. and the allocations, as shown by the sequential writes (attached) are random. If I went all the way out to the inside tracks, you would be at about 490MB/Sec. Something changed. 2.6.19 was unstable, with XFS panics on a regular basis. 2.6.19.1 has not had an error yet.. (knock head on wall repeatedly). berkley -- //E. F. Berkley Shands, MSc// **Exegy Inc.** 3668 S. Geyer Road, Suite 300 St. Louis, MO 63127 Direct: (314) 450-5348 Cell: (314) 303-2546 Office: (314) 450-5353 Fax: (314) 450-5354 This e-mail and any documents accompanying it may contain legally privileged and/or confidential information belonging to Exegy, Inc. Such information may be protected from disclosure by law. The information is intended for use by only the addressee. If you are not the intended recipient, you are hereby notified that any disclosure or use of the information is strictly prohibited. If you have received this e-mail in error, please immediately contact the sender by e-mail or phone regarding instructions for return or destruction and do not use or disclose the content to others. --------------060803090605060507010505 Content-Type: text/plain; name="2.6.20-rc3run.txt" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="2.6.20-rc3run.txt" Data: Writing, 8192 MB, Buffer: 128 KB, Time: 58322 MS, Rate: 140.462, to /s2/GigaData.0 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 58321 MS, Rate: 140.464, to /s0/GigaData.0 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 58325 MS, Rate: 140.454, to /s1/GigaData.0 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 58356 MS, Rate: 140.380, to /s3/GigaData.0 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 43031 MS, Rate: 190.374, to /s2/GigaData.1 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 43043 MS, Rate: 190.321, to /s3/GigaData.1 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 44439 MS, Rate: 184.343, to /s1/GigaData.1 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 44439 MS, Rate: 184.343, to /s0/GigaData.1 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38326 MS, Rate: 213.745, to /s0/GigaData.2 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 47993 MS, Rate: 170.692, to /s3/GigaData.2 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 47999 MS, Rate: 170.670, to /s1/GigaData.2 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 47998 MS, Rate: 170.674, to /s2/GigaData.2 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 46339 MS, Rate: 176.784, to /s1/GigaData.3 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 46337 MS, Rate: 176.792, to /s2/GigaData.3 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 46333 MS, Rate: 176.807, to /s3/GigaData.3 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 46369 MS, Rate: 176.670, to /s0/GigaData.3 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 44952 MS, Rate: 182.239, to /s1/GigaData.4 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 44952 MS, Rate: 182.239, to /s0/GigaData.4 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 44952 MS, Rate: 182.239, to /s3/GigaData.4 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 44951 MS, Rate: 182.243, to /s2/GigaData.4 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41365 MS, Rate: 198.042, to /s2/GigaData.5 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41357 MS, Rate: 198.080, to /s3/GigaData.5 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41365 MS, Rate: 198.042, to /s0/GigaData.5 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41365 MS, Rate: 198.042, to 
/s1/GigaData.5 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38159 MS, Rate: 214.681, to /s1/GigaData.6 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38168 MS, Rate: 214.630, to /s0/GigaData.6 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39051 MS, Rate: 209.777, to /s3/GigaData.6 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39055 MS, Rate: 209.755, to /s2/GigaData.6 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37189 MS, Rate: 220.280, to /s2/GigaData.7 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37189 MS, Rate: 220.280, to /s0/GigaData.7 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37190 MS, Rate: 220.274, to /s1/GigaData.7 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37189 MS, Rate: 220.280, to /s3/GigaData.7 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 34397 MS, Rate: 238.160, to /s3/GigaData.8 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 34876 MS, Rate: 234.889, to /s0/GigaData.8 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 34870 MS, Rate: 234.930, to /s1/GigaData.8 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 34880 MS, Rate: 234.862, to /s2/GigaData.8 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 36910 MS, Rate: 221.945, to /s0/GigaData.9 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 36896 MS, Rate: 222.029, to /s1/GigaData.9 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 36909 MS, Rate: 221.951, to /s3/GigaData.9 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 36930 MS, Rate: 221.825, to /s2/GigaData.9 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38086 MS, Rate: 215.092, to /s3/GigaData.10 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38082 MS, Rate: 215.115, to /s1/GigaData.10 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38079 MS, Rate: 215.132, to /s2/GigaData.10 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38118 MS, Rate: 214.912, to /s0/GigaData.10 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 45715 MS, Rate: 179.197, to /s0/GigaData.11 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 45714 MS, Rate: 179.201, to /s3/GigaData.11 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 45709 MS, Rate: 179.221, to /s1/GigaData.11 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 45745 MS, Rate: 179.080, to /s2/GigaData.11 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37772 MS, Rate: 216.880, to /s2/GigaData.12 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37772 MS, Rate: 216.880, to /s0/GigaData.12 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37957 MS, Rate: 215.823, to /s3/GigaData.12 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37966 MS, Rate: 215.772, to /s1/GigaData.12 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 34264 MS, Rate: 239.085, to /s1/GigaData.13 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39469 MS, Rate: 207.555, to /s0/GigaData.13 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39469 MS, Rate: 207.555, to /s3/GigaData.13 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39469 MS, Rate: 207.555, to /s2/GigaData.13 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 36136 MS, Rate: 226.699, to /s2/GigaData.14 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 36137 MS, Rate: 226.693, to /s0/GigaData.14 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 35995 MS, Rate: 227.587, to /s3/GigaData.14 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 36174 MS, Rate: 226.461, to /s1/GigaData.14 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 35942 MS, Rate: 227.923, to /s2/GigaData.15 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38528 MS, Rate: 212.625, to /s0/GigaData.15 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38529 MS, Rate: 212.619, to /s1/GigaData.15 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 
38529 MS, Rate: 212.619, to /s3/GigaData.15 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 35233 MS, Rate: 232.509, to /s1/GigaData.16 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39777 MS, Rate: 205.948, to /s0/GigaData.16 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39784 MS, Rate: 205.912, to /s2/GigaData.16 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39778 MS, Rate: 205.943, to /s3/GigaData.16 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41775 MS, Rate: 196.098, to /s3/GigaData.17 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41767 MS, Rate: 196.136, to /s1/GigaData.17 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41775 MS, Rate: 196.098, to /s0/GigaData.17 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41803 MS, Rate: 195.967, to /s2/GigaData.17 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39528 MS, Rate: 207.245, to /s0/GigaData.18 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39529 MS, Rate: 207.240, to /s1/GigaData.18 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39529 MS, Rate: 207.240, to /s3/GigaData.18 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39561 MS, Rate: 207.073, to /s2/GigaData.18 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37776 MS, Rate: 216.857, to /s2/GigaData.19 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37779 MS, Rate: 216.840, to /s1/GigaData.19 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37805 MS, Rate: 216.691, to /s3/GigaData.19 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37806 MS, Rate: 216.685, to /s0/GigaData.19 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38755 MS, Rate: 211.379, to /s1/GigaData.20 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 48546 MS, Rate: 168.747, to /s3/GigaData.20 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 48544 MS, Rate: 168.754, to /s0/GigaData.20 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 48536 MS, Rate: 168.782, to /s2/GigaData.20 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41471 MS, Rate: 197.536, to /s2/GigaData.21 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41476 MS, Rate: 197.512, to /s3/GigaData.21 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 52133 MS, Rate: 157.137, to /s1/GigaData.21 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 52135 MS, Rate: 157.131, to /s0/GigaData.21 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 49764 MS, Rate: 164.617, to /s1/GigaData.22 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 49757 MS, Rate: 164.640, to /s3/GigaData.22 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 49761 MS, Rate: 164.627, to /s2/GigaData.22 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 49764 MS, Rate: 164.617, to /s0/GigaData.22 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39496 MS, Rate: 207.413, to /s2/GigaData.23 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39491 MS, Rate: 207.440, to /s1/GigaData.23 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39493 MS, Rate: 207.429, to /s3/GigaData.23 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39490 MS, Rate: 207.445, to /s0/GigaData.23 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41942 MS, Rate: 195.317, to /s3/GigaData.24 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41942 MS, Rate: 195.317, to /s0/GigaData.24 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41977 MS, Rate: 195.154, to /s1/GigaData.24 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 41983 MS, Rate: 195.127, to /s2/GigaData.24 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38669 MS, Rate: 211.849, to /s3/GigaData.25 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38672 MS, Rate: 211.833, to /s2/GigaData.25 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39293 MS, Rate: 208.485, to /s0/GigaData.25 
Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39292 MS, Rate: 208.490, to /s1/GigaData.25 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39109 MS, Rate: 209.466, to /s2/GigaData.26 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 40761 MS, Rate: 200.976, to /s3/GigaData.26 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 40765 MS, Rate: 200.957, to /s1/GigaData.26 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 40766 MS, Rate: 200.952, to /s0/GigaData.26 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39390 MS, Rate: 207.972, to /s2/GigaData.27 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39397 MS, Rate: 207.935, to /s3/GigaData.27 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39393 MS, Rate: 207.956, to /s0/GigaData.27 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 39392 MS, Rate: 207.961, to /s1/GigaData.27 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 51472 MS, Rate: 159.154, to /s2/GigaData.28 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 51471 MS, Rate: 159.158, to /s3/GigaData.28 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 51474 MS, Rate: 159.148, to /s0/GigaData.28 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 51473 MS, Rate: 159.151, to /s1/GigaData.28 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 46326 MS, Rate: 176.834, to /s3/GigaData.29 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 46327 MS, Rate: 176.830, to /s2/GigaData.29 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 46329 MS, Rate: 176.822, to /s1/GigaData.29 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 46330 MS, Rate: 176.818, to /s0/GigaData.29 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38183 MS, Rate: 214.546, to /s3/GigaData.30 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38183 MS, Rate: 214.546, to /s2/GigaData.30 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38812 MS, Rate: 211.069, to /s0/GigaData.30 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 38813 MS, Rate: 211.063, to /s1/GigaData.30 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37634 MS, Rate: 217.676, to /s0/GigaData.31 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 37564 MS, Rate: 218.081, to /s2/GigaData.31 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 40069 MS, Rate: 204.447, to /s3/GigaData.31 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 40082 MS, Rate: 204.381, to /s1/GigaData.31 Fastest for filesystem s0 234.889 MB/s /s0/GigaData.8 Slowest for filesystem s0 140.464 MB/s /s0/GigaData.0 Fastest for filesystem s1 239.085 MB/s /s1/GigaData.13 Slowest for filesystem s1 140.454 MB/s /s1/GigaData.0 Fastest for filesystem s2 234.862 MB/s /s2/GigaData.8 Slowest for filesystem s2 140.462 MB/s /s2/GigaData.0 Fastest for filesystem s3 238.160 MB/s /s3/GigaData.8 Slowest for filesystem s3 140.380 MB/s /s3/GigaData.0 Max write speed (striped): 946.996 Min write speed (striped): 561.76 --------------060803090605060507010505 Content-Type: message/rfc822; name="xfs write speed regression 2.6.18.1 to 2.6.19.1.eml" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="xfs write speed regression 2.6.18.1 to 2.6.19.1.eml" Message-ID: <45951A1B.8070206@exegy.com> Date: Fri, 29 Dec 2006 07:37:31 -0600 From: "Mr. Berkley Shands" User-Agent: Thunderbird 1.5.0.9 (X11/20061222) MIME-Version: 1.0 To: xfs-masters@oss.sgi.com, linux-kernel@vger.kernel.org CC: Dave Lloyd Subject: xfs write speed regression 2.6.18.1 to 2.6.19.1 Content-Type: multipart/alternative; boundary="------------050406070902090104070606" This is a multi-part message in MIME format. 
--------------050406070902090104070606 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Write speeds have decreased 10% to 30% between 2.6.18.1 and 2.6.19.1. Read speeds are unchanged at 1.15 GB/Sec. Using a SuperMicro H8DC8 or Tyan 2895 or a Tyan 2915 (Socket F) with 4 2.2GHz Opterons, 16GB RAM, dual LSI 8408E SAS controllers into 4 X 4 Raid0s. XFS file system. The only difference is the Kernel Rev. 16 Seagate 7200.10 Sata drives, with 3.AAE firmware (very important!). LSI 8408E firmware rev is 1.02.01-0158. Adapter readahead is disabled. (Enabling readahead with this firmware costs 25% in write performance :-( ) Under 2.6.18.1, I/O peaks at 256.1 MB/Sec into each raid0 - 1GB/Sec. The average is 230 MB/Sec over the first TB. With 2.6.19.1, the peak is 220 MB/Sec, and the average is 170 MB/Sec. EXT3 runs 2-3X slower than XFS for this benchmark, so it is hard to see where the regression appeared. I'm not really too worried about it, but that much of a decrease is worth reporting. Since there were significant XFS changes between the revs, it might not be worthwhile for me to chase the exact update that causes this issue. berkley -- //E. F. Berkley Shands, MSc// **Exegy Inc.** 3668 S. Geyer Road, Suite 300 St. Louis, MO 63127 Direct: (314) 450-5348 Cell: (314) 303-2546 Office: (314) 450-5353 Fax: (314) 450-5354 This e-mail and any documents accompanying it may contain legally privileged and/or confidential information belonging to Exegy, Inc. Such information may be protected from disclosure by law. The information is intended for use by only the addressee. If you are not the intended recipient, you are hereby notified that any disclosure or use of the information is strictly prohibited. If you have received this e-mail in error, please immediately contact the sender by e-mail or phone regarding instructions for return or destruction and do not use or disclose the content to others. --------------050406070902090104070606 Content-Type: text/html; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Write speeds have decreased 10% to 30% between 2.6.18.1 and 2.6.19.1.
--------------050406070902090104070606-- --------------060803090605060507010505-- From owner-xfs@oss.sgi.com Sat Jan 6 08:57:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 06 Jan 2007 08:57:16 -0800 (PST) Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l06GvAqw027991 for ; Sat, 6 Jan 2007 08:57:11 -0800 Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 3604B18011EB7; Sat, 6 Jan 2007 10:56:17 -0600 (CST) Message-ID: <459FD4B0.4000502@sandeen.net> Date: Sat, 06 Jan 2007 10:56:16 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (Macintosh/20061207) MIME-Version: 1.0 To: Dave Lloyd CC: linux-xfs@oss.sgi.com Subject: Re: [Fwd: [Fwd: xfs write speed regression 2.6.18.1 to 2.6.19.1]] References: <459ECF04.4090803@exegy.com> In-Reply-To: <459ECF04.4090803@exegy.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10179 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 242 Lines: 10 > Under 2.6.19.1 this rate drops to ~220MB/Sec. and the allocations are no > longer smooth > starting at the outside edges. What do you mean by "allocations are no longer smooth" Do you have any data showing the allocation changes? -Eric From owner-xfs@oss.sgi.com Sun Jan 7 13:38:42 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 07 Jan 2007 13:38:45 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l07Lccqw000700 for ; Sun, 7 Jan 2007 13:38:40 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id IAA10087; Mon, 8 Jan 2007 08:37:37 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l07Lba7Y86233986; Mon, 8 Jan 2007 08:37:36 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l07LbZes86388742; Mon, 8 Jan 2007 08:37:35 +1100 (AEDT) Date: Mon, 8 Jan 2007 08:37:34 +1100 From: David Chinner To: linux-kernel Mailing List Cc: xfs@oss.sgi.com Subject: Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock() Message-ID: <20070107213734.GS44411608@melbourne.sgi.com> References: <20070104001420.GA32440@m.safari.iki.fi> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070104001420.GA32440@m.safari.iki.fi> User-Agent: Mutt/1.4.2.1i X-archive-position: 10181 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 6087 Lines: 152 On Thu, Jan 04, 2007 at 02:14:21AM +0200, Sami Farin wrote: > just a simple test I did... 
> xfs_freeze -f /mnt/newtest > cp /etc/fstab /mnt/newtest > xfs_freeze -u /mnt/newtest > > 2007-01-04 01:44:30.341979500 <4>BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock() > 2007-01-04 01:44:30.385771500 <4> [] dump_trace+0x215/0x21a > 2007-01-04 01:44:30.385774500 <4> [] show_trace_log_lvl+0x1a/0x30 > 2007-01-04 01:44:30.385775500 <4> [] show_trace+0x12/0x14 > 2007-01-04 01:44:30.385777500 <4> [] dump_stack+0x19/0x1b > 2007-01-04 01:44:30.385778500 <4> [] debug_mutex_unlock+0x69/0x120 > 2007-01-04 01:44:30.385779500 <4> [] __mutex_unlock_slowpath+0x44/0xf0 > 2007-01-04 01:44:30.385780500 <4> [] mutex_unlock+0x8/0xa > 2007-01-04 01:44:30.385782500 <4> [] thaw_bdev+0x57/0x6e > 2007-01-04 01:44:30.385791500 <4> [] xfs_ioctl+0x7ce/0x7d3 > 2007-01-04 01:44:30.385793500 <4> [] xfs_file_ioctl+0x33/0x54 > 2007-01-04 01:44:30.385794500 <4> [] do_ioctl+0x76/0x85 > 2007-01-04 01:44:30.385795500 <4> [] vfs_ioctl+0x59/0x1aa > 2007-01-04 01:44:30.385796500 <4> [] sys_ioctl+0x67/0x77 > 2007-01-04 01:44:30.385797500 <4> [] syscall_call+0x7/0xb > 2007-01-04 01:44:30.385799500 <4> [<001be410>] 0x1be410 > 2007-01-04 01:44:30.385800500 <4> ======================= > > fstab was there just fine after -u. Oh, that still hasn't been fixed? Generic bug, not XFS - the global semaphore->mutex cleanup converted the bd_mount_sem to a mutex, and mutexes complain loudly when a the process unlocking the mutex is not the process that locked it. Basically, the generic code is broken - the bd_mount_mutex needs to be reverted back to a semaphore because it is locked and unlocked by different processes. The following patch does this.... BTW, Sami, can you cc xfs@oss.sgi.com on XFS bug reports in future; you'll get more XFS savvy eyes there..... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group --- Revert bd_mount_mutex back to a semaphore so that xfs_freeze -f /mnt/newtest; xfs_freeze -u /mnt/newtest works safely and doesn't produce lockdep warnings. Signed-off-by: Dave Chinner --- fs/block_dev.c | 2 +- fs/buffer.c | 6 +++--- fs/gfs2/ops_fstype.c | 4 ++-- fs/super.c | 4 ++-- include/linux/fs.h | 2 +- 5 files changed, 9 insertions(+), 9 deletions(-) Index: 2.6.x-xfs-new/fs/block_dev.c =================================================================== --- 2.6.x-xfs-new.orig/fs/block_dev.c 2006-12-22 10:53:20.000000000 +1100 +++ 2.6.x-xfs-new/fs/block_dev.c 2007-01-08 08:26:15.843378600 +1100 @@ -263,7 +263,7 @@ static void init_once(void * foo, kmem_c { memset(bdev, 0, sizeof(*bdev)); mutex_init(&bdev->bd_mutex); - mutex_init(&bdev->bd_mount_mutex); + sema_init(&bdev->bd_mount_sem, 1); INIT_LIST_HEAD(&bdev->bd_inodes); INIT_LIST_HEAD(&bdev->bd_list); #ifdef CONFIG_SYSFS Index: 2.6.x-xfs-new/fs/buffer.c =================================================================== --- 2.6.x-xfs-new.orig/fs/buffer.c 2006-12-12 12:04:51.000000000 +1100 +++ 2.6.x-xfs-new/fs/buffer.c 2007-01-08 08:28:40.832542651 +1100 @@ -179,7 +179,7 @@ int fsync_bdev(struct block_device *bdev * freeze_bdev -- lock a filesystem and force it into a consistent state * @bdev: blockdevice to lock * - * This takes the block device bd_mount_mutex to make sure no new mounts + * This takes the block device bd_mount_sem to make sure no new mounts * happen on bdev until thaw_bdev() is called. * If a superblock is found on this device, we take the s_umount semaphore * on it to make sure nobody unmounts until the snapshot creation is done. 
@@ -188,7 +188,7 @@ struct super_block *freeze_bdev(struct b { struct super_block *sb; - mutex_lock(&bdev->bd_mount_mutex); + down(&bdev->bd_mount_sem); sb = get_super(bdev); if (sb && !(sb->s_flags & MS_RDONLY)) { sb->s_frozen = SB_FREEZE_WRITE; @@ -230,7 +230,7 @@ void thaw_bdev(struct block_device *bdev drop_super(sb); } - mutex_unlock(&bdev->bd_mount_mutex); + up(&bdev->bd_mount_sem); } EXPORT_SYMBOL(thaw_bdev); Index: 2.6.x-xfs-new/fs/gfs2/ops_fstype.c =================================================================== --- 2.6.x-xfs-new.orig/fs/gfs2/ops_fstype.c 2006-12-12 12:04:58.000000000 +1100 +++ 2.6.x-xfs-new/fs/gfs2/ops_fstype.c 2007-01-08 08:27:12.847973663 +1100 @@ -867,9 +867,9 @@ static int gfs2_get_sb_meta(struct file_ error = -EBUSY; goto error; } - mutex_lock(&sb->s_bdev->bd_mount_mutex); + down(&sb->s_bdev->bd_mount_sem); new = sget(fs_type, test_bdev_super, set_bdev_super, sb->s_bdev); - mutex_unlock(&sb->s_bdev->bd_mount_mutex); + up(&sb->s_bdev->bd_mount_sem); if (IS_ERR(new)) { error = PTR_ERR(new); goto error; Index: 2.6.x-xfs-new/fs/super.c =================================================================== --- 2.6.x-xfs-new.orig/fs/super.c 2006-12-22 11:45:59.000000000 +1100 +++ 2.6.x-xfs-new/fs/super.c 2007-01-08 08:24:20.718330640 +1100 @@ -736,9 +736,9 @@ int get_sb_bdev(struct file_system_type * will protect the lockfs code from trying to start a snapshot * while we are mounting */ - mutex_lock(&bdev->bd_mount_mutex); + down(&bdev->bd_mount_sem); s = sget(fs_type, test_bdev_super, set_bdev_super, bdev); - mutex_unlock(&bdev->bd_mount_mutex); + up(&bdev->bd_mount_sem); if (IS_ERR(s)) goto error_s; Index: 2.6.x-xfs-new/include/linux/fs.h =================================================================== --- 2.6.x-xfs-new.orig/include/linux/fs.h 2006-12-12 12:06:31.000000000 +1100 +++ 2.6.x-xfs-new/include/linux/fs.h 2007-01-08 08:24:53.602060200 +1100 @@ -456,7 +456,7 @@ struct block_device { struct inode * bd_inode; /* will die */ int bd_openers; struct mutex bd_mutex; /* open/close mutex */ - struct mutex bd_mount_mutex; /* mount mutex */ + struct semaphore bd_mount_sem; struct list_head bd_inodes; void * bd_holder; int bd_holders; From owner-xfs@oss.sgi.com Sun Jan 7 14:24:54 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 07 Jan 2007 14:25:00 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l07MOoqw008067 for ; Sun, 7 Jan 2007 14:24:53 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA11204; Mon, 8 Jan 2007 09:23:49 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l07MNk7Y86439344; Mon, 8 Jan 2007 09:23:46 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l07MNf2H82495547; Mon, 8 Jan 2007 09:23:41 +1100 (AEDT) Date: Mon, 8 Jan 2007 09:23:41 +1100 From: David Chinner To: Hugh Dickins Cc: Sami Farin <7atbggg02@sneakemail.com>, Nathan Scott , xfs@oss.sgi.com, Nick Piggin , linux-kernel@vger.kernel.org Subject: Re: BUG: warning at mm/truncate.c:60/cancel_dirty_page() Message-ID: <20070107222341.GT33919298@melbourne.sgi.com> References: <20070106023907.GA7766@m.safari.iki.fi> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i 
X-archive-position: 10182 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 4187 Lines: 94 On Sat, Jan 06, 2007 at 09:11:07PM +0000, Hugh Dickins wrote: > On Sat, 6 Jan 2007, Sami Farin wrote: > > > Linux 2.6.19.1 SMP [2] on Pentium D... > > I was running dt-15.14 [2] and I ran > > "cinfo datafile" (it does mincore()). > > Well it went OK but when I ran "strace cinfo datafile"...: > > 04:18:48.062466 mincore(0x37f1f000, 2147266560, > > You rightly noted in a followup that there have been changes to > mincore, but I doubt they have any bearing on this: I think the > BUG just happened at the same time as your mincore. > > > ... > > 2007-01-06 04:19:03.788181500 <4>BUG: warning at mm/truncate.c:60/cancel_dirty_page() > > 2007-01-06 04:19:03.788221500 <4> [] dump_trace+0x215/0x21a > > 2007-01-06 04:19:03.788223500 <4> [] show_trace_log_lvl+0x1a/0x30 > > 2007-01-06 04:19:03.788224500 <4> [] show_trace+0x12/0x14 > > 2007-01-06 04:19:03.788225500 <4> [] dump_stack+0x19/0x1b > > 2007-01-06 04:19:03.788227500 <4> [] cancel_dirty_page+0x7e/0x80 > > 2007-01-06 04:19:03.788228500 <4> [] truncate_complete_page+0x1a/0x47 > > 2007-01-06 04:19:03.788229500 <4> [] truncate_inode_pages_range+0x114/0x2ae > > 2007-01-06 04:19:03.788245500 <4> [] truncate_inode_pages+0x1a/0x1c > > 2007-01-06 04:19:03.788247500 <4> [] fs_flushinval_pages+0x40/0x77 > > 2007-01-06 04:19:03.788248500 <4> [] xfs_write+0x8c4/0xb68 > > 2007-01-06 04:19:03.788250500 <4> [] xfs_file_aio_write+0x7e/0x95 > > 2007-01-06 04:19:03.788251500 <4> [] do_sync_write+0xca/0x119 > > 2007-01-06 04:19:03.788265500 <4> [] vfs_write+0x187/0x18c > > 2007-01-06 04:19:03.788267500 <4> [] sys_write+0x3d/0x64 > > 2007-01-06 04:19:03.788268500 <4> [] syscall_call+0x7/0xb > > 2007-01-06 04:19:03.788269500 <4> [<001cf410>] 0x1cf410 > > 2007-01-06 04:19:03.788289500 <4> ======================= > > So... XFS uses truncate_inode_pages when serving the write system call. Only when you are doing direct I/O. XFS does direct writes without the i_mutex held, so it has to invalidate the range of cached pages while holding it's own locks to ensure direct I/O cache semantics are kept. > That's very inventive, Not really - been doing it for years. > and now it looks like Linus' cancel_dirty_page > and new warning have caught it out. VM people expect it to be called > either when freeing an inode no longer in use, or when doing a truncate, > after ensuring that all pages mapped into userspace have been taken out. Ok, so we are punching a hole in the middle of the address space because we are doing direct I/O on it and need to invalidate the cache. How are you supposed to invalidate a range of pages in a mapping for this case, then? invalidate_mapping_pages() would appear to be the candidate (the generic code uses this), but it _skips_ pages that are already mapped. invalidate_mapping_pages() then advises you to use truncate_inode_pages(): /** * invalidate_mapping_pages - Invalidate all the unlocked pages of one inode * @mapping: the address_space which holds the pages to invalidate * @start: the offset 'from' which to invalidate * @end: the offset 'to' which to invalidate (inclusive) * * This function only removes the unlocked pages, if you want to * remove all the pages of one inode, you must call truncate_inode_pages. * * invalidate_mapping_pages() will not block on IO activity. 
It will not * invalidate pages which are dirty, locked, under writeback or mapped into * pagetables. */ We want to remove all pages within the range given, so, as directed by the comment here, we use truncate_inode_pages(). Says nothing about mappings needing to be removed first so I guess that's where we've been caught..... I think we can use invalidate_inode_pages2_range(), but that doesn't handle partial page invalidations. I think this will be ok, but it's going to need some serious fsx testing on blocksize != page size configs. So, am I correct in assuming we should be calling invalidate_inode_pages2_range() instead of truncate_inode_pages()? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Jan 7 14:49:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 07 Jan 2007 14:49:14 -0800 (PST) Received: from smtp.osdl.org (smtp.osdl.org [65.172.181.24]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l07Mn7qw012412 for ; Sun, 7 Jan 2007 14:49:08 -0800 Received: from shell0.pdx.osdl.net (fw.osdl.org [65.172.181.6]) by smtp.osdl.org (8.12.8/8.12.8) with ESMTP id l07MmCWi005515 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Sun, 7 Jan 2007 14:48:13 -0800 Received: from box (shell0.pdx.osdl.net [10.9.0.31]) by shell0.pdx.osdl.net (8.13.1/8.11.6) with SMTP id l07MmCQm026914; Sun, 7 Jan 2007 14:48:12 -0800 Date: Sun, 7 Jan 2007 14:48:12 -0800 From: Andrew Morton To: David Chinner Cc: Hugh Dickins , Sami Farin <7atbggg02@sneakemail.com>, Nathan Scott , xfs@oss.sgi.com, Nick Piggin , linux-kernel@vger.kernel.org Subject: Re: BUG: warning at mm/truncate.c:60/cancel_dirty_page() Message-Id: <20070107144812.96357ff9.akpm@osdl.org> In-Reply-To: <20070107222341.GT33919298@melbourne.sgi.com> References: <20070106023907.GA7766@m.safari.iki.fi> <20070107222341.GT33919298@melbourne.sgi.com> X-Mailer: Sylpheed version 2.2.7 (GTK+ 2.8.17; x86_64-unknown-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.167 $ X-Scanned-By: MIMEDefang 2.36 X-archive-position: 10183 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@osdl.org Precedence: bulk X-list: xfs Content-Length: 476 Lines: 15 On Mon, 8 Jan 2007 09:23:41 +1100 David Chinner wrote: > How are you supposed to invalidate a range of pages in a mapping for > this case, then? invalidate_mapping_pages() would appear to be the > candidate (the generic code uses this), but it _skips_ pages that > are already mapped. unmap_mapping_range()? > So, am I correct in assuming we should be calling invalidate_inode_pages2_range() > instead of truncate_inode_pages()? That would be conventional. 
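(A sketch added for context, not part of the original thread: the "conventional" route Andrew points at is invalidate_inode_pages2_range(), which takes page indices rather than byte offsets and, unlike truncate_inode_pages(), also unmaps pages still present in page tables before dropping them. The helper name below is invented purely for illustration and the offset-to-index rounding is deliberately naive; the real fix, posted later in the thread, also has to cope with partial pages.)

#include <linux/fs.h>
#include <linux/pagemap.h>

/*
 * Illustration only: drop the cached pages covering the byte range
 * [first, last] ahead of a direct I/O write.
 */
static int example_flushinval_range(struct address_space *mapping,
				    loff_t first, loff_t last)
{
	pgoff_t start = first >> PAGE_CACHE_SHIFT;	/* page index holding 'first' */
	pgoff_t end   = last  >> PAGE_CACHE_SHIFT;	/* page index holding 'last' */

	/* write back dirty pages, then invalidate them (mapped pages included) */
	filemap_write_and_wait(mapping);
	return invalidate_inode_pages2_range(mapping, start, end);
}
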
From owner-xfs@oss.sgi.com Sun Jan 7 15:05:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 07 Jan 2007 15:05:54 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l07N5jqw015757 for ; Sun, 7 Jan 2007 15:05:47 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA12109; Mon, 8 Jan 2007 10:04:43 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l07N4f7Y86367460; Mon, 8 Jan 2007 10:04:41 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l07N4ax085800729; Mon, 8 Jan 2007 10:04:36 +1100 (AEDT) Date: Mon, 8 Jan 2007 10:04:36 +1100 From: David Chinner To: Andrew Morton Cc: David Chinner , Hugh Dickins , Sami Farin <7atbggg02@sneakemail.com>, xfs@oss.sgi.com, Nick Piggin , linux-kernel@vger.kernel.org Subject: Re: BUG: warning at mm/truncate.c:60/cancel_dirty_page() Message-ID: <20070107230436.GU33919298@melbourne.sgi.com> References: <20070106023907.GA7766@m.safari.iki.fi> <20070107222341.GT33919298@melbourne.sgi.com> <20070107144812.96357ff9.akpm@osdl.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070107144812.96357ff9.akpm@osdl.org> User-Agent: Mutt/1.4.2.1i X-archive-position: 10184 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 2018 Lines: 66 On Sun, Jan 07, 2007 at 02:48:12PM -0800, Andrew Morton wrote: > On Mon, 8 Jan 2007 09:23:41 +1100 > David Chinner wrote: > > > How are you supposed to invalidate a range of pages in a mapping for > > this case, then? invalidate_mapping_pages() would appear to be the > > candidate (the generic code uses this), but it _skips_ pages that > > are already mapped. > > unmap_mapping_range()? /me looks at how it's used in invalidate_inode_pages2_range() and decides it's easier not to call this directly. > > So, am I correct in assuming we should be calling invalidate_inode_pages2_range() > > instead of truncate_inode_pages()? > > That would be conventional. .... 
in that case the following patch should fix the warning: --- fs/xfs/linux-2.6/xfs_fs_subr.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_fs_subr.c 2006-12-12 12:05:17.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c 2007-01-08 09:30:22.056571711 +1100 @@ -21,6 +21,8 @@ int fs_noerr(void) { return 0; } int fs_nosys(void) { return ENOSYS; } void fs_noval(void) { return; } +#define XFS_OFF_TO_PCSIZE(off) \ + (((off) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT) void fs_tosspages( bhv_desc_t *bdp, @@ -32,7 +34,9 @@ fs_tosspages( struct inode *ip = vn_to_inode(vp); if (VN_CACHED(vp)) - truncate_inode_pages(ip->i_mapping, first); + invalidate_inode_pages2_range(ip->i_mapping, + XFS_OFF_TO_PCSIZE(first), + XFS_OFF_TO_PCSIZE(last)); } void @@ -49,7 +53,9 @@ fs_flushinval_pages( if (VN_TRUNC(vp)) VUNTRUNCATE(vp); filemap_write_and_wait(ip->i_mapping); - truncate_inode_pages(ip->i_mapping, first); + invalidate_inode_pages2_range(ip->i_mapping, + XFS_OFF_TO_PCSIZE(first), + XFS_OFF_TO_PCSIZE(last)); } } -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Jan 7 15:15:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 07 Jan 2007 15:15:13 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l07NF6qw017480 for ; Sun, 7 Jan 2007 15:15:07 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA12240; Mon, 8 Jan 2007 10:14:08 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l07NE67Y84013920; Mon, 8 Jan 2007 10:14:07 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l07NE26U86531861; Mon, 8 Jan 2007 10:14:02 +1100 (AEDT) Date: Mon, 8 Jan 2007 10:14:02 +1100 From: David Chinner To: Haar =?iso-8859-1?Q?J=E1nos?= Cc: David Chinner , linux-xfs@oss.sgi.com, linux-kernel@vger.kernel.org Subject: Re: xfslogd-spinlock bug? Message-ID: <20070107231402.GU44411608@melbourne.sgi.com> References: <000d01c72127$3d7509b0$0400a8c0@dcccs> <20061217224457.GN33919298@melbourne.sgi.com> <026501c72237$0464f7a0$0400a8c0@dcccs> <20061218062444.GH44411608@melbourne.sgi.com> <027b01c7227d$0e26d1f0$0400a8c0@dcccs> <20061218223637.GP44411608@melbourne.sgi.com> <001a01c722fd$df5ca710$0400a8c0@dcccs> <20061219025229.GT33919298@melbourne.sgi.com> <20061219044700.GW33919298@melbourne.sgi.com> <041601c729b6$f81e4af0$0400a8c0@dcccs> Mime-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <041601c729b6$f81e4af0$0400a8c0@dcccs> User-Agent: Mutt/1.4.2.1i X-archive-position: 10185 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1910 Lines: 53 On Wed, Dec 27, 2006 at 01:58:06PM +0100, Haar János wrote: > Hello, > > ----- Original Message ----- > From: "David Chinner" > To: "David Chinner" > Cc: "Haar János" ; ; > > Sent: Tuesday, December 19, 2006 5:47 AM > Subject: Re: xfslogd-spinlock bug? 
> > > > On Tue, Dec 19, 2006 at 01:52:29PM +1100, David Chinner wrote: > > > > The filesystem was being shutdown so xfs_inode_item_destroy() just > > frees the inode log item without removing it from the AIL. I'll fix that, > > and see if i have any luck.... > > > > So I'd still try that patch i sent in the previous email... > > I still using the patch, but didnt shows any messages at this point. > > I'v got 3 crash/reboot, but 2 causes nbd disconneted, and this one: > > Dec 27 13:41:29 dy-base BUG: warning at > kernel/mutex.c:220/__mutex_unlock_common_slowpath() > Dec 27 13:41:29 dy-base Unable to handle kernel paging request at > 0000000066604480 RIP: > Dec 27 13:41:29 dy-base [] resched_task+0x12/0x64 > Dec 27 13:41:29 dy-base PGD 115246067 PUD 0 > Dec 27 13:41:29 dy-base Oops: 0000 [1] SMP > Dec 27 13:41:29 dy-base CPU 1 > Dec 27 13:41:29 dy-base Modules linked in: nbd rd netconsole e1000 video > Dec 27 13:41:29 dy-base Pid: 4069, comm: httpd Not tainted 2.6.19 #3 > Dec 27 13:41:29 dy-base RIP: 0010:[] [] > resched_task+0x12/0x64 > Dec 27 13:41:29 dy-base RSP: 0018:ffff810105c01b78 EFLAGS: 00010083 > Dec 27 13:41:29 dy-base RAX: ffffffff807d5800 RBX: 00001749fd97c214 RCX: Different corruption in RBX here. Looks like semi-random garbage there. I wonder - what's the mac and ip address(es) of your machine and nbd servers? (i.e. I suspect this is a nbd problem, not an XFS problem) Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Jan 7 15:34:39 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 07 Jan 2007 15:34:43 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l07NYaqw020499 for ; Sun, 7 Jan 2007 15:34:37 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA12707; Mon, 8 Jan 2007 10:33:37 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l07NXa7Y83348863; Mon, 8 Jan 2007 10:33:36 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l07NXYaI86362939; Mon, 8 Jan 2007 10:33:34 +1100 (AEDT) Date: Mon, 8 Jan 2007 10:33:34 +1100 From: David Chinner To: Dave Lloyd Cc: linux-xfs@oss.sgi.com Subject: Re: [Fwd: [Fwd: xfs write speed regression 2.6.18.1 to 2.6.19.1]] Message-ID: <20070107233334.GW44411608@melbourne.sgi.com> References: <459ECF04.4090803@exegy.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <459ECF04.4090803@exegy.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10186 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1186 Lines: 41 On Fri, Jan 05, 2007 at 04:19:48PM -0600, Dave Lloyd wrote: > From a co-worker. Anyone know what might have changed this between > 2.6.18 and 2.6.19 when the issue first appeared? IIRC< a bunch of changes went into the generic buffered I/O path to fix deadlocks on writes if we take a page fault during the copyin. That caused a performance regression for buffered I/O of around that sort of figure, and the regression is slowly being fixed up as per: > under 2.6.20-rc3 the speeds have gone back up some, but they are 10% > slower than 2.6.18. 
So I don't think this is an XFS problem as such. Still, I will try to do some local tests to check it out. > and the allocations, as shown by the sequential writes (attached) are > random. ???? > If I went all the way out to the inside tracks, you would be at about > 490MB/Sec. > > Something changed. 2.6.19 was unstable, with XFS panics on a regular basis. Got any stack traces? > 2.6.19.1 has not had an error yet.. (knock head on wall repeatedly). We didn't push any changes into 2.6.19.1, so that implies bugs in the generic code, not XFS.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Jan 7 20:04:10 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 07 Jan 2007 20:04:16 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l08447qw032484 for ; Sun, 7 Jan 2007 20:04:09 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA19029; Mon, 8 Jan 2007 15:03:11 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0843A7Y85711269; Mon, 8 Jan 2007 15:03:10 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l08439wn86331993; Mon, 8 Jan 2007 15:03:09 +1100 (AEDT) Date: Mon, 8 Jan 2007 15:03:09 +1100 From: David Chinner To: xfs-dev@sgi.com Cc: xfs@oss.sgi.com Subject: Review: fix mapping invalidation callouts Message-ID: <20070108040309.GX33919298@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 10187 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 2150 Lines: 69 With the recent cancel_dirty_page() changes, a warning was added if we cancel a dirty page that is still mapped into the page tables. This happens in XFS from fs_tosspages() and fs_flushinval_pages() because they call truncate_inode_pages(). truncate_inode_pages() does not invalidate existing page mappings; it is expected that this is called only when truncating the file or destroying the inode, and in both these cases there can be no mapped ptes. However, we call this when doing direct I/O writes to remove pages from the page cache. As a result, we can rip a page from the page cache that still has mappings attached. The correct fix is to use invalidate_inode_pages2_range() instead of truncate_inode_pages(). They essentially do the same thing, but the former also removes any pte mappings before removing the page from the page cache. Comments? Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/linux-2.6/xfs_fs_subr.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_fs_subr.c 2006-12-12 12:05:17.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c 2007-01-08 09:30:22.056571711 +1100 @@ -21,6 +21,8 @@ int fs_noerr(void) { return 0; } int fs_nosys(void) { return ENOSYS; } void fs_noval(void) { return; } +#define XFS_OFF_TO_PCSIZE(off) \ + (((off) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT) void fs_tosspages( bhv_desc_t *bdp, @@ -32,7 +34,9 @@ fs_tosspages( struct inode *ip = vn_to_inode(vp); if (VN_CACHED(vp)) - truncate_inode_pages(ip->i_mapping, first); + invalidate_inode_pages2_range(ip->i_mapping, + XFS_OFF_TO_PCSIZE(first), + XFS_OFF_TO_PCSIZE(last)); } void @@ -49,7 +53,9 @@ fs_flushinval_pages( if (VN_TRUNC(vp)) VUNTRUNCATE(vp); filemap_write_and_wait(ip->i_mapping); - truncate_inode_pages(ip->i_mapping, first); + invalidate_inode_pages2_range(ip->i_mapping, + XFS_OFF_TO_PCSIZE(first), + XFS_OFF_TO_PCSIZE(last)); } } From owner-xfs@oss.sgi.com Sun Jan 7 20:45:28 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 07 Jan 2007 20:45:32 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l084jHqw010322 for ; Sun, 7 Jan 2007 20:45:19 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA20144; Mon, 8 Jan 2007 15:44:16 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l084iF7Y86375981; Mon, 8 Jan 2007 15:44:16 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l084iEUf86564446; Mon, 8 Jan 2007 15:44:14 +1100 (AEDT) Date: Mon, 8 Jan 2007 15:44:14 +1100 From: David Chinner To: xfs-dev@sgi.com Cc: xfs@oss.sgi.com Subject: Review: make growing by >2TB work Message-ID: <20070108044414.GC44411608@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 10188 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 14997 Lines: 414 Growing a filesystem by > 2TB currently causes an overflow in the transaction subsystem. Make transaction deltas and associated elements explicitly 64 bit types so that we don't get overflows. Comments? Cheers, Dave. 
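(An illustrative back-of-the-envelope, added for context and not part of the original mail: the superblock deltas are carried in units of filesystem blocks, and a signed 32-bit field tops out at 2^31 - 1 = 2,147,483,647. Assuming, say, a 1 KiB block size, a grow of 2 TiB is 2^41 / 2^10 = 2^31 blocks, which already wraps; larger grows or smaller blocks only make it worse, whereas an int64_t delta comfortably covers any filesystem size XFS can address.)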
-- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/xfs_bmap.c | 26 +++++++++++++------------- fs/xfs/xfs_mount.c | 18 ++++++------------ fs/xfs/xfs_mount.h | 7 ++++--- fs/xfs/xfs_trans.c | 32 ++++++++++++++++---------------- fs/xfs/xfs_trans.h | 42 +++++++++++++++++++++--------------------- 5 files changed, 60 insertions(+), 65 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/xfs_mount.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_mount.c 2006-12-04 11:25:57.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/xfs_mount.c 2006-12-04 11:33:31.470695330 +1100 @@ -52,11 +52,11 @@ STATIC void xfs_unmountfs_wait(xfs_mount #ifdef HAVE_PERCPU_SB STATIC void xfs_icsb_destroy_counters(xfs_mount_t *); -STATIC void xfs_icsb_balance_counter(xfs_mount_t *, xfs_sb_field_t, int, -int); +STATIC void xfs_icsb_balance_counter(xfs_mount_t *, xfs_sb_field_t, + int, int); STATIC void xfs_icsb_sync_counters(xfs_mount_t *); STATIC int xfs_icsb_modify_counters(xfs_mount_t *, xfs_sb_field_t, - int, int); + int64_t, int); STATIC int xfs_icsb_disable_counter(xfs_mount_t *, xfs_sb_field_t); #else @@ -136,14 +136,9 @@ xfs_mount_init(void) mp->m_flags |= XFS_MOUNT_NO_PERCPU_SB; } - AIL_LOCKINIT(&mp->m_ail_lock, "xfs_ail"); spinlock_init(&mp->m_sb_lock, "xfs_sb"); mutex_init(&mp->m_ilock); initnsema(&mp->m_growlock, 1, "xfs_grow"); - /* - * Initialize the AIL. - */ - xfs_trans_ail_init(mp); atomic_set(&mp->m_active_trans, 0); @@ -1255,7 +1250,7 @@ xfs_mod_sb(xfs_trans_t *tp, __int64_t fi */ int xfs_mod_incore_sb_unlocked(xfs_mount_t *mp, xfs_sb_field_t field, - int delta, int rsvd) + int64_t delta, int rsvd) { int scounter; /* short counter for 32 bit fields */ long long lcounter; /* long counter for 64 bit fields */ @@ -1287,7 +1282,6 @@ xfs_mod_incore_sb_unlocked(xfs_mount_t * mp->m_sb.sb_ifree = lcounter; return 0; case XFS_SBS_FDBLOCKS: - lcounter = (long long) mp->m_sb.sb_fdblocks - XFS_ALLOC_SET_ASIDE(mp); res_used = (long long)(mp->m_resblks - mp->m_resblks_avail); @@ -1418,7 +1412,7 @@ xfs_mod_incore_sb_unlocked(xfs_mount_t * * routine to do the work. */ int -xfs_mod_incore_sb(xfs_mount_t *mp, xfs_sb_field_t field, int delta, int rsvd) +xfs_mod_incore_sb(xfs_mount_t *mp, xfs_sb_field_t field, int64_t delta, int rsvd) { unsigned long s; int status; @@ -2091,7 +2085,7 @@ int xfs_icsb_modify_counters( xfs_mount_t *mp, xfs_sb_field_t field, - int delta, + int64_t delta, int rsvd) { xfs_icsb_cnts_t *icsbp; Index: 2.6.x-xfs-new/fs/xfs/xfs_mount.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_mount.h 2006-12-04 11:25:57.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/xfs_mount.h 2006-12-04 11:33:31.470695330 +1100 @@ -565,10 +565,11 @@ xfs_daddr_to_agbno(struct xfs_mount *mp, /* * This structure is for use by the xfs_mod_incore_sb_batch() routine. 
+ * xfs_growfs can specify a few fields which are more than int limit */ typedef struct xfs_mod_sb { xfs_sb_field_t msb_field; /* Field to modify, see below */ - int msb_delta; /* Change to make to specified field */ + int64_t msb_delta; /* Change to make to specified field */ } xfs_mod_sb_t; #define XFS_MOUNT_ILOCK(mp) mutex_lock(&((mp)->m_ilock)) @@ -586,9 +587,9 @@ extern int xfs_unmountfs(xfs_mount_t *, extern void xfs_unmountfs_close(xfs_mount_t *, struct cred *); extern int xfs_unmountfs_writesb(xfs_mount_t *); extern int xfs_unmount_flush(xfs_mount_t *, int); -extern int xfs_mod_incore_sb(xfs_mount_t *, xfs_sb_field_t, int, int); +extern int xfs_mod_incore_sb(xfs_mount_t *, xfs_sb_field_t, int64_t, int); extern int xfs_mod_incore_sb_unlocked(xfs_mount_t *, xfs_sb_field_t, - int, int); + int64_t, int); extern int xfs_mod_incore_sb_batch(xfs_mount_t *, xfs_mod_sb_t *, uint, int); extern struct xfs_buf *xfs_getsb(xfs_mount_t *, int); Index: 2.6.x-xfs-new/fs/xfs/xfs_trans.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_trans.c 2006-12-04 11:25:38.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/xfs_trans.c 2006-12-04 11:33:31.470695330 +1100 @@ -339,7 +339,7 @@ xfs_trans_reserve( */ if (blocks > 0) { error = xfs_mod_incore_sb(tp->t_mountp, XFS_SBS_FDBLOCKS, - -blocks, rsvd); + -((int64_t)blocks), rsvd); if (error != 0) { current_restore_flags_nested(&tp->t_pflags, PF_FSTRANS); return (XFS_ERROR(ENOSPC)); @@ -380,7 +380,7 @@ xfs_trans_reserve( */ if (rtextents > 0) { error = xfs_mod_incore_sb(tp->t_mountp, XFS_SBS_FREXTENTS, - -rtextents, rsvd); + -((int64_t)rtextents), rsvd); if (error) { error = XFS_ERROR(ENOSPC); goto undo_log; @@ -410,7 +410,7 @@ undo_log: undo_blocks: if (blocks > 0) { (void) xfs_mod_incore_sb(tp->t_mountp, XFS_SBS_FDBLOCKS, - blocks, rsvd); + (int64_t)blocks, rsvd); tp->t_blk_res = 0; } @@ -432,7 +432,7 @@ void xfs_trans_mod_sb( xfs_trans_t *tp, uint field, - long delta) + int64_t delta) { switch (field) { @@ -663,62 +663,62 @@ xfs_trans_unreserve_and_mod_sb( if (tp->t_flags & XFS_TRANS_SB_DIRTY) { if (tp->t_icount_delta != 0) { msbp->msb_field = XFS_SBS_ICOUNT; - msbp->msb_delta = (int)tp->t_icount_delta; + msbp->msb_delta = tp->t_icount_delta; msbp++; } if (tp->t_ifree_delta != 0) { msbp->msb_field = XFS_SBS_IFREE; - msbp->msb_delta = (int)tp->t_ifree_delta; + msbp->msb_delta = tp->t_ifree_delta; msbp++; } if (tp->t_fdblocks_delta != 0) { msbp->msb_field = XFS_SBS_FDBLOCKS; - msbp->msb_delta = (int)tp->t_fdblocks_delta; + msbp->msb_delta = tp->t_fdblocks_delta; msbp++; } if (tp->t_frextents_delta != 0) { msbp->msb_field = XFS_SBS_FREXTENTS; - msbp->msb_delta = (int)tp->t_frextents_delta; + msbp->msb_delta = tp->t_frextents_delta; msbp++; } if (tp->t_dblocks_delta != 0) { msbp->msb_field = XFS_SBS_DBLOCKS; - msbp->msb_delta = (int)tp->t_dblocks_delta; + msbp->msb_delta = tp->t_dblocks_delta; msbp++; } if (tp->t_agcount_delta != 0) { msbp->msb_field = XFS_SBS_AGCOUNT; - msbp->msb_delta = (int)tp->t_agcount_delta; + msbp->msb_delta = tp->t_agcount_delta; msbp++; } if (tp->t_imaxpct_delta != 0) { msbp->msb_field = XFS_SBS_IMAX_PCT; - msbp->msb_delta = (int)tp->t_imaxpct_delta; + msbp->msb_delta = tp->t_imaxpct_delta; msbp++; } if (tp->t_rextsize_delta != 0) { msbp->msb_field = XFS_SBS_REXTSIZE; - msbp->msb_delta = (int)tp->t_rextsize_delta; + msbp->msb_delta = tp->t_rextsize_delta; msbp++; } if (tp->t_rbmblocks_delta != 0) { msbp->msb_field = XFS_SBS_RBMBLOCKS; - msbp->msb_delta = (int)tp->t_rbmblocks_delta; + 
msbp->msb_delta = tp->t_rbmblocks_delta; msbp++; } if (tp->t_rblocks_delta != 0) { msbp->msb_field = XFS_SBS_RBLOCKS; - msbp->msb_delta = (int)tp->t_rblocks_delta; + msbp->msb_delta = tp->t_rblocks_delta; msbp++; } if (tp->t_rextents_delta != 0) { msbp->msb_field = XFS_SBS_REXTENTS; - msbp->msb_delta = (int)tp->t_rextents_delta; + msbp->msb_delta = tp->t_rextents_delta; msbp++; } if (tp->t_rextslog_delta != 0) { msbp->msb_field = XFS_SBS_REXTSLOG; - msbp->msb_delta = (int)tp->t_rextslog_delta; + msbp->msb_delta = tp->t_rextslog_delta; msbp++; } } Index: 2.6.x-xfs-new/fs/xfs/xfs_trans.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_trans.h 2006-12-04 11:25:57.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/xfs_trans.h 2006-12-04 11:33:31.474694802 +1100 @@ -350,25 +350,25 @@ typedef struct xfs_trans { xfs_trans_callback_t t_callback; /* transaction callback */ void *t_callarg; /* callback arg */ unsigned int t_flags; /* misc flags */ - long t_icount_delta; /* superblock icount change */ - long t_ifree_delta; /* superblock ifree change */ - long t_fdblocks_delta; /* superblock fdblocks chg */ - long t_res_fdblocks_delta; /* on-disk only chg */ - long t_frextents_delta;/* superblock freextents chg*/ - long t_res_frextents_delta; /* on-disk only chg */ + int64_t t_icount_delta; /* superblock icount change */ + int64_t t_ifree_delta; /* superblock ifree change */ + int64_t t_fdblocks_delta; /* superblock fdblocks chg */ + int64_t t_res_fdblocks_delta; /* on-disk only chg */ + int64_t t_frextents_delta;/* superblock freextents chg*/ + int64_t t_res_frextents_delta; /* on-disk only chg */ #ifdef DEBUG - long t_ag_freeblks_delta; /* debugging counter */ - long t_ag_flist_delta; /* debugging counter */ - long t_ag_btree_delta; /* debugging counter */ + int64_t t_ag_freeblks_delta; /* debugging counter */ + int64_t t_ag_flist_delta; /* debugging counter */ + int64_t t_ag_btree_delta; /* debugging counter */ #endif - long t_dblocks_delta;/* superblock dblocks change */ - long t_agcount_delta;/* superblock agcount change */ - long t_imaxpct_delta;/* superblock imaxpct change */ - long t_rextsize_delta;/* superblock rextsize chg */ - long t_rbmblocks_delta;/* superblock rbmblocks chg */ - long t_rblocks_delta;/* superblock rblocks change */ - long t_rextents_delta;/* superblocks rextents chg */ - long t_rextslog_delta;/* superblocks rextslog chg */ + int64_t t_dblocks_delta;/* superblock dblocks change */ + int64_t t_agcount_delta;/* superblock agcount change */ + int64_t t_imaxpct_delta;/* superblock imaxpct change */ + int64_t t_rextsize_delta;/* superblock rextsize chg */ + int64_t t_rbmblocks_delta;/* superblock rbmblocks chg */ + int64_t t_rblocks_delta;/* superblock rblocks change */ + int64_t t_rextents_delta;/* superblocks rextents chg */ + int64_t t_rextslog_delta;/* superblocks rextslog chg */ unsigned int t_items_free; /* log item descs free */ xfs_log_item_chunk_t t_items; /* first log item desc chunk */ xfs_trans_header_t t_header; /* header for in-log trans */ @@ -932,9 +932,9 @@ typedef struct xfs_trans { #define xfs_trans_set_sync(tp) ((tp)->t_flags |= XFS_TRANS_SYNC) #ifdef DEBUG -#define xfs_trans_agblocks_delta(tp, d) ((tp)->t_ag_freeblks_delta += (long)d) -#define xfs_trans_agflist_delta(tp, d) ((tp)->t_ag_flist_delta += (long)d) -#define xfs_trans_agbtree_delta(tp, d) ((tp)->t_ag_btree_delta += (long)d) +#define xfs_trans_agblocks_delta(tp, d) ((tp)->t_ag_freeblks_delta += (int64_t)d) +#define xfs_trans_agflist_delta(tp, d) 
((tp)->t_ag_flist_delta += (int64_t)d) +#define xfs_trans_agbtree_delta(tp, d) ((tp)->t_ag_btree_delta += (int64_t)d) #else #define xfs_trans_agblocks_delta(tp, d) #define xfs_trans_agflist_delta(tp, d) @@ -950,7 +950,7 @@ xfs_trans_t *_xfs_trans_alloc(struct xfs xfs_trans_t *xfs_trans_dup(xfs_trans_t *); int xfs_trans_reserve(xfs_trans_t *, uint, uint, uint, uint, uint); -void xfs_trans_mod_sb(xfs_trans_t *, uint, long); +void xfs_trans_mod_sb(xfs_trans_t *, uint, int64_t); struct xfs_buf *xfs_trans_get_buf(xfs_trans_t *, struct xfs_buftarg *, xfs_daddr_t, int, uint); int xfs_trans_read_buf(struct xfs_mount *, xfs_trans_t *, Index: 2.6.x-xfs-new/fs/xfs/xfs_bmap.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_bmap.c 2006-12-04 11:25:38.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/xfs_bmap.c 2006-12-04 11:33:31.478694275 +1100 @@ -684,7 +684,7 @@ xfs_bmap_add_extent( ASSERT(nblks <= da_old); if (nblks < da_old) xfs_mod_incore_sb(ip->i_mount, XFS_SBS_FDBLOCKS, - (int)(da_old - nblks), rsvd); + (int64_t)(da_old - nblks), rsvd); } /* * Clear out the allocated field, done with it now in any case. @@ -1209,7 +1209,7 @@ xfs_bmap_add_extent_delay_real( diff = (int)(temp + temp2 - STARTBLOCKVAL(PREV.br_startblock) - (cur ? cur->bc_private.b.allocated : 0)); if (diff > 0 && - xfs_mod_incore_sb(ip->i_mount, XFS_SBS_FDBLOCKS, -diff, rsvd)) { + xfs_mod_incore_sb(ip->i_mount, XFS_SBS_FDBLOCKS, -((int64_t)diff), rsvd)) { /* * Ick gross gag me with a spoon. */ @@ -1220,7 +1220,7 @@ xfs_bmap_add_extent_delay_real( diff--; if (!diff || !xfs_mod_incore_sb(ip->i_mount, - XFS_SBS_FDBLOCKS, -diff, rsvd)) + XFS_SBS_FDBLOCKS, -((int64_t)diff), rsvd)) break; } if (temp2) { @@ -1228,7 +1228,7 @@ xfs_bmap_add_extent_delay_real( diff--; if (!diff || !xfs_mod_incore_sb(ip->i_mount, - XFS_SBS_FDBLOCKS, -diff, rsvd)) + XFS_SBS_FDBLOCKS, -((int64_t)diff), rsvd)) break; } } @@ -2015,7 +2015,7 @@ xfs_bmap_add_extent_hole_delay( if (oldlen != newlen) { ASSERT(oldlen > newlen); xfs_mod_incore_sb(ip->i_mount, XFS_SBS_FDBLOCKS, - (int)(oldlen - newlen), rsvd); + (int64_t)(oldlen - newlen), rsvd); /* * Nothing to do for disk quota accounting here. */ @@ -3359,7 +3359,7 @@ xfs_bmap_del_extent( */ ASSERT(da_old >= da_new); if (da_old > da_new) - xfs_mod_incore_sb(mp, XFS_SBS_FDBLOCKS, (int)(da_old - da_new), + xfs_mod_incore_sb(mp, XFS_SBS_FDBLOCKS, (int64_t)(da_old - da_new), rsvd); if (delta) { /* DELTA: report the original extent. 
*/ @@ -4929,28 +4929,28 @@ xfs_bmapi( if (rt) { error = xfs_mod_incore_sb(mp, XFS_SBS_FREXTENTS, - -(extsz), (flags & + -((int64_t)extsz), (flags & XFS_BMAPI_RSVBLOCKS)); } else { error = xfs_mod_incore_sb(mp, XFS_SBS_FDBLOCKS, - -(alen), (flags & + -((int64_t)alen), (flags & XFS_BMAPI_RSVBLOCKS)); } if (!error) { error = xfs_mod_incore_sb(mp, XFS_SBS_FDBLOCKS, - -(indlen), (flags & + -((int64_t)indlen), (flags & XFS_BMAPI_RSVBLOCKS)); if (error && rt) xfs_mod_incore_sb(mp, XFS_SBS_FREXTENTS, - extsz, (flags & + (int64_t)extsz, (flags & XFS_BMAPI_RSVBLOCKS)); else if (error) xfs_mod_incore_sb(mp, XFS_SBS_FDBLOCKS, - alen, (flags & + (int64_t)alen, (flags & XFS_BMAPI_RSVBLOCKS)); } @@ -5616,13 +5616,13 @@ xfs_bunmapi( rtexts = XFS_FSB_TO_B(mp, del.br_blockcount); do_div(rtexts, mp->m_sb.sb_rextsize); xfs_mod_incore_sb(mp, XFS_SBS_FREXTENTS, - (int)rtexts, rsvd); + (int64_t)rtexts, rsvd); (void)XFS_TRANS_RESERVE_QUOTA_NBLKS(mp, NULL, ip, -((long)del.br_blockcount), 0, XFS_QMOPT_RES_RTBLKS); } else { xfs_mod_incore_sb(mp, XFS_SBS_FDBLOCKS, - (int)del.br_blockcount, rsvd); + (int64_t)del.br_blockcount, rsvd); (void)XFS_TRANS_RESERVE_QUOTA_NBLKS(mp, NULL, ip, -((long)del.br_blockcount), 0, XFS_QMOPT_RES_REGBLKS); From owner-xfs@oss.sgi.com Sun Jan 7 22:11:47 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 07 Jan 2007 22:11:52 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l086Bhqw020794 for ; Sun, 7 Jan 2007 22:11:45 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA22344; Mon, 8 Jan 2007 17:10:43 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l086Ag7Y86453615; Mon, 8 Jan 2007 17:10:42 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l086Afep85364320; Mon, 8 Jan 2007 17:10:41 +1100 (AEDT) Date: Mon, 8 Jan 2007 17:10:40 +1100 From: David Chinner To: xfs-dev@sgi.com Cc: xfs@oss.sgi.com Subject: Review: fix block reservation to work with per-cpu counters Message-ID: <20070108061040.GD44411608@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 10189 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 6551 Lines: 206 Currently, XFS_IOC_SET_RESBLKS will not work properly when per-cpu superblock counters are enabled. Reservations can be lost silently as they are applied to the incore superblock instead of the currently active counters. Rather than try to shoe-horn the current reservation code into the per-cpu counters or vice-versa, we lock the superblock and snap the current counter state and work on that number. Once we work out exactly how much we need to "allocate" to the reserved area, we drop the lock and call xfs_mod_incore_sb() which will do all the right things w.r.t to the counter state. If we fail to get as much as we want (i.e. ENOSPC is returned) we go back to the start and try to allocate as much of what is left. Comments? Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/xfs_fsops.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++----- fs/xfs/xfs_mount.c | 16 ++------------- fs/xfs/xfs_mount.h | 2 - fs/xfs/xfs_vfsops.c | 2 - 4 files changed, 54 insertions(+), 20 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/xfs_fsops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_fsops.c 2006-12-12 12:05:20.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/xfs_fsops.c 2006-12-22 00:30:53.770384646 +1100 @@ -460,7 +460,7 @@ xfs_fs_counts( { unsigned long s; - xfs_icsb_sync_counters_lazy(mp); + xfs_icsb_sync_counters_flags(mp, XFS_ICSB_LAZY_COUNT); s = XFS_SB_LOCK(mp); cnt->freedata = mp->m_sb.sb_fdblocks - XFS_ALLOC_SET_ASIDE(mp); cnt->freertx = mp->m_sb.sb_frextents; @@ -491,7 +491,7 @@ xfs_reserve_blocks( __uint64_t *inval, xfs_fsop_resblks_t *outval) { - __int64_t lcounter, delta; + __int64_t lcounter, delta, fdblks_delta; __uint64_t request; unsigned long s; @@ -504,17 +504,35 @@ xfs_reserve_blocks( } request = *inval; + + /* + * With per-cpu counters, this becomes an interesting + * problem. we needto work out if we are freeing or allocation + * blocks first, then we can do the modification as necessary. + * + * We do this under the XFS_SB_LOCK so that if we are near + * ENOSPC, we will hold out any changes while we work out + * what to do. This means that the amount of free space can + * change while we do this, so we need to retry if we end up + * trying to reserve more space than is available. + * + * We also use the xfs_mod_incore_sb() interface so that we + * don't have to care about whether per cpu counter are + * enabled, disabled or even compiled in.... + */ +retry: s = XFS_SB_LOCK(mp); + xfs_icsb_sync_counters_flags(mp, XFS_ICSB_SB_LOCKED); /* * If our previous reservation was larger than the current value, * then move any unused blocks back to the free pool. */ - + fdblks_delta = 0; if (mp->m_resblks > request) { lcounter = mp->m_resblks_avail - request; if (lcounter > 0) { /* release unused blocks */ - mp->m_sb.sb_fdblocks += lcounter; + fdblks_delta = lcounter; mp->m_resblks_avail -= lcounter; } mp->m_resblks = request; @@ -522,24 +540,50 @@ xfs_reserve_blocks( __int64_t free; free = mp->m_sb.sb_fdblocks - XFS_ALLOC_SET_ASIDE(mp); + if (!free) + goto out; /* ENOSPC and fdblks_delta = 0 */ + delta = request - mp->m_resblks; lcounter = free - delta; if (lcounter < 0) { /* We can't satisfy the request, just get what we can */ mp->m_resblks += free; mp->m_resblks_avail += free; + fdblks_delta = -free; mp->m_sb.sb_fdblocks = XFS_ALLOC_SET_ASIDE(mp); } else { + fdblks_delta = -delta; mp->m_sb.sb_fdblocks = lcounter + XFS_ALLOC_SET_ASIDE(mp); mp->m_resblks = request; mp->m_resblks_avail += delta; } } - +out: outval->resblks = mp->m_resblks; outval->resblks_avail = mp->m_resblks_avail; XFS_SB_UNLOCK(mp, s); + + if (fdblks_delta) { + /* + * If we are putting blocks back here, m_resblks_avail is + * already at it's max so this will put it in the free pool. + * + * If we need space, we'll either succeed in getting it + * from the free block count or we'll get an enospc. If + * we get a ENOSPC, it means things changed while we were + * calculating fdblks_delta and so we should try again to + * see if there is anything left to reserve. + * + * Don't set the reserved flag here - we don't want to reserve + * the extra reserve blocks from the reserve..... 
+ */ + int error; + error = xfs_mod_incore_sb(mp, XFS_SBS_FDBLOCKS, fdblks_delta, 0); + if (error == ENOSPC) + goto retry; + } + return 0; } Index: 2.6.x-xfs-new/fs/xfs/xfs_mount.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_mount.c 2006-12-12 18:02:03.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/xfs_mount.c 2006-12-21 22:53:35.669131775 +1100 @@ -1963,8 +1963,8 @@ xfs_icsb_enable_counter( xfs_icsb_unlock_all_counters(mp); } -STATIC void -xfs_icsb_sync_counters_int( +void +xfs_icsb_sync_counters_flags( xfs_mount_t *mp, int flags) { @@ -1996,17 +1996,7 @@ STATIC void xfs_icsb_sync_counters( xfs_mount_t *mp) { - xfs_icsb_sync_counters_int(mp, 0); -} - -/* - * lazy addition used for things like df, background sb syncs, etc - */ -void -xfs_icsb_sync_counters_lazy( - xfs_mount_t *mp) -{ - xfs_icsb_sync_counters_int(mp, XFS_ICSB_LAZY_COUNT); + xfs_icsb_sync_counters_flags(mp, 0); } /* Index: 2.6.x-xfs-new/fs/xfs/xfs_mount.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_mount.h 2006-12-20 22:59:33.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/xfs_mount.h 2006-12-21 22:52:35.596932143 +1100 @@ -312,7 +312,7 @@ typedef struct xfs_icsb_cnts { #define XFS_ICSB_LAZY_COUNT (1 << 1) /* accuracy not needed */ extern int xfs_icsb_init_counters(struct xfs_mount *); -extern void xfs_icsb_sync_counters_lazy(struct xfs_mount *); +extern void xfs_icsb_sync_counters_flags(struct xfs_mount *, int); #else #define xfs_icsb_init_counters(mp) (0) Index: 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vfsops.c 2006-12-12 15:40:58.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c 2006-12-22 00:28:42.851623181 +1100 @@ -815,7 +815,7 @@ xfs_statvfs( statp->f_type = XFS_SB_MAGIC; - xfs_icsb_sync_counters_lazy(mp); + xfs_icsb_sync_counters_flags(mp, XFS_ICSB_LAZY_COUNT); s = XFS_SB_LOCK(mp); statp->f_bsize = sbp->sb_blocksize; lsize = sbp->sb_logstart ? sbp->sb_logblocks : 0; From owner-xfs@oss.sgi.com Mon Jan 8 01:21:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 01:21:16 -0800 (PST) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l089LAqw025065 for ; Mon, 8 Jan 2007 01:21:11 -0800 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1H3qVI-0004XJ-7x; Mon, 08 Jan 2007 09:09:16 +0000 Date: Mon, 8 Jan 2007 09:09:16 +0000 From: Christoph Hellwig To: David Chinner Cc: xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: fix mapping invalidation callouts Message-ID: <20070108090916.GA17121@infradead.org> References: <20070108040309.GX33919298@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070108040309.GX33919298@melbourne.sgi.com> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 10191 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs Content-Length: 1111 Lines: 25 On Mon, Jan 08, 2007 at 03:03:09PM +1100, David Chinner wrote: > With the recent cancel_dirty_page() changes, a warning was > added if we cancel a dirty page that is still mapped into > the page tables. 
> This happens in XFS from fs_tosspages() and fs_flushinval_pages() > because they call truncate_inode_pages(). > > truncate_inode_pages() does not invalidate existing page mappings; > it is expected that this is called only when truncating the file > or destroying the inode and in both these cases there can be > no mapped ptes. However, we call this when doing direct I/O writes > to remove pages from the page cache. As a result, we can rip > a page from the page cache that still has mappings attached. > > The correct fix is to use invalidate_inode_pages2_range() instead > of truncate_inode_pages(). They essentially do the same thing, but > the former also removes any pte mappings before removing the page > from the page cache. > > Comments? Generally looks good. But I feel a little cautious about changes in this area, so we should throw all possible test loads at this before committing it. From owner-xfs@oss.sgi.com Mon Jan 8 01:21:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 01:21:15 -0800 (PST) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l089L8qw025051 for ; Mon, 8 Jan 2007 01:21:09 -0800 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1H3qYE-0004gm-G8; Mon, 08 Jan 2007 09:12:18 +0000 Date: Mon, 8 Jan 2007 09:12:18 +0000 From: Christoph Hellwig To: David Chinner Cc: xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: make growing by >2TB work Message-ID: <20070108091218.GB17121@infradead.org> References: <20070108044414.GC44411608@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070108044414.GC44411608@melbourne.sgi.com> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 10190 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs Content-Length: 808 Lines: 27 On Mon, Jan 08, 2007 at 03:44:14PM +1100, David Chinner wrote: > Growing a filesystem by > 2TB currently causes an overflow > in the transaction subsystem. Make transaction deltas and associated > elements explicitly 64 bit types so that we don't get overflows. > > Comments? Looks good. > > - AIL_LOCKINIT(&mp->m_ail_lock, "xfs_ail"); > spinlock_init(&mp->m_sb_lock, "xfs_sb"); > mutex_init(&mp->m_ilock); > initnsema(&mp->m_growlock, 1, "xfs_grow"); > - /* > - * Initialize the AIL. > - */ > - xfs_trans_ail_init(mp); This seems unrelated (?) > -xfs_mod_incore_sb(xfs_mount_t *mp, xfs_sb_field_t field, int delta, int rsvd) > +xfs_mod_incore_sb(xfs_mount_t *mp, xfs_sb_field_t field, int64_t delta, int rsvd) This seems to go over the 80 chars line length with your patch, just break the line.
From owner-xfs@oss.sgi.com Mon Jan 8 02:45:56 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 02:46:00 -0800 (PST) Received: from gw02.mail.saunalahti.fi (gw02.mail.saunalahti.fi [195.197.172.116]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08Ajsqw006620 for ; Mon, 8 Jan 2007 02:45:56 -0800 Received: from mrp2.mail.saunalahti.fi (mrp2.mail.saunalahti.fi [62.142.5.31]) by gw02.mail.saunalahti.fi (Postfix) with ESMTP id 0EDFE13978A for ; Mon, 8 Jan 2007 12:23:34 +0200 (EET) Received: from [192.168.0.151] (unknown [62.142.247.178]) (using SSLv3 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mrp2.mail.saunalahti.fi (Postfix) with ESMTP id E1752598004 for ; Mon, 8 Jan 2007 12:23:32 +0200 (EET) Subject: xfs_repair: corrupt inode error From: Jyrki Muukkonen To: xfs@oss.sgi.com Content-Type: text/plain Date: Mon, 08 Jan 2007 12:23:32 +0200 Message-Id: <1168251812.20568.8.camel@mustis> Mime-Version: 1.0 X-Mailer: Evolution 2.8.1 Content-Transfer-Encoding: 7bit X-archive-position: 10192 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jyrki.muukkonen@futurice.fi Precedence: bulk X-list: xfs Content-Length: 871 Lines: 25 Got this error in phase 6 when running xfs_repair 2.8.18 on ~1.2TB partition over the weekend (it took around 60 hours to get to this point :). On earlier versions xfs_repair aborted after ~15-20 hours with "invalid inode type" error. ... disconnected inode 4151889519, moving to lost+found disconnected inode 4151889543, moving to lost+found corrupt inode 4151889543 (btree). This is a bug. Please report it to xfs@oss.sgi.com. cache_node_purge: refcount was 1, not zero (node=0x132650d0) fatal error -- 117 - couldn't iget disconnected inode I've got the full log (both stderr and stdout) and can put that somewhere if needed. It's about 80MB uncompressed and around 7MB gzipped. Running the xfs_repair without multithreading and with -v might also be possible if that's going to help. 
-- Jyrki Muukkonen Futurice Oy jyrki.muukkonen@futurice.fi +358 41 501 7322 From owner-xfs@oss.sgi.com Mon Jan 8 02:52:57 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 02:53:02 -0800 (PST) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08Aquqw008011 for ; Mon, 8 Jan 2007 02:52:57 -0800 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1H3s6l-00088J-FE; Mon, 08 Jan 2007 10:52:03 +0000 Date: Mon, 8 Jan 2007 10:52:03 +0000 From: Christoph Hellwig To: David Chinner Cc: xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: fix block reservation to work with per-cpu counters Message-ID: <20070108105203.GA31252@infradead.org> References: <20070108061040.GD44411608@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070108061040.GD44411608@melbourne.sgi.com> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 10193 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs Content-Length: 929 Lines: 22 On Mon, Jan 08, 2007 at 05:10:40PM +1100, David Chinner wrote: > Currently, XFS_IOC_SET_RESBLKS will not work properly when > per-cpu superblock counters are enabled. Reservations can be lost > silently as they are applied to the incore superblock instead of > the currently active counters. > > Rather than try to shoe-horn the current reservation code into > the per-cpu counters or vice-versa, we lock the superblock > and snap the current counter state and work on that number. > Once we work out exactly how much we need to "allocate" to > the reserved area, we drop the lock and call xfs_mod_incore_sb() > which will do all the right things w.r.t to the counter state. > > If we fail to get as much as we want (i.e. ENOSPC is returned) > we go back to the start and try to allocate as much of what is > left. > > Comments? Sounds okay. Reservations shouldn't be frequent enough for this to have a performance impact. 
From owner-xfs@oss.sgi.com Mon Jan 8 04:14:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 04:14:14 -0800 (PST) Received: from pne-smtpout4-sn1.fre.skanova.net (pne-smtpout4-sn1.fre.skanova.net [81.228.11.168]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08CE5qw021469 for ; Mon, 8 Jan 2007 04:14:08 -0800 Received: from safari.iki.fi (80.223.106.128) by pne-smtpout4-sn1.fre.skanova.net (7.2.075) id 44A36A0A008DA590 for xfs@oss.sgi.com; Mon, 8 Jan 2007 12:03:24 +0100 Received: (qmail 10778 invoked by uid 500); 8 Jan 2007 11:03:23 -0000 Date: Mon, 8 Jan 2007 13:03:23 +0200 From: Sami Farin To: linux-kernel Mailing List , xfs@oss.sgi.com Subject: Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock() Message-ID: <20070108110323.GA3803@m.safari.iki.fi> Mail-Followup-To: linux-kernel Mailing List , xfs@oss.sgi.com References: <20070104001420.GA32440@m.safari.iki.fi> <20070107213734.GS44411608@melbourne.sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070107213734.GS44411608@melbourne.sgi.com> User-Agent: Mutt/1.5.13 (2006-08-11) X-archive-position: 10194 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: safari-xfs@safari.iki.fi Precedence: bulk X-list: xfs Content-Length: 869 Lines: 29 On Mon, Jan 08, 2007 at 08:37:34 +1100, David Chinner wrote: ... > > fstab was there just fine after -u. > > Oh, that still hasn't been fixed? Looked like it =) > Generic bug, not XFS - the global > semaphore->mutex cleanup converted the bd_mount_sem to a mutex, and > mutexes complain loudly when a the process unlocking the mutex is > not the process that locked it. > > Basically, the generic code is broken - the bd_mount_mutex needs to > be reverted back to a semaphore because it is locked and unlocked > by different processes. The following patch does this.... > > BTW, Sami, can you cc xfs@oss.sgi.com on XFS bug reports in future; > you'll get more XFS savvy eyes there..... Forgot to. Thanks for patch. It fixed the issue, no more warnings. BTW. the fix is not in 2.6.git, either. -- Do what you love because life is too short for anything else. 
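The ownership rule behind the warning is easy to demonstrate from userspace. A minimal sketch (a pthreads analogy only, not the kernel code or the patch): a POSIX semaphore may be released by a different thread than the one that acquired it, whereas an owner-checked mutex refuses, which is the same constraint the kernel's mutex debugging enforces on a converted bd_mount_sem across freeze and thaw.

/*
 * Userspace analogy (illustration only): semaphores have no owner, so
 * releasing from another context is legal; an error-checking mutex
 * reports EPERM.  Build with: cc -pthread demo.c
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t m;
static sem_t s;

static void *thaw_thread(void *arg)
{
	(void)arg;
	sem_post(&s);				/* ok: semaphores have no owner */
	printf("mutex unlock from other thread: %s\n",
	       strerror(pthread_mutex_unlock(&m)));	/* EPERM: not the owner */
	return NULL;
}

int main(void)
{
	pthread_t t;
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&m, &attr);
	sem_init(&s, 0, 1);

	/* "freeze": one context takes both */
	sem_wait(&s);
	pthread_mutex_lock(&m);

	/* "thaw" is attempted from a different context */
	pthread_create(&t, NULL, thaw_thread, NULL);
	pthread_join(t, NULL);

	pthread_mutex_unlock(&m);	/* only the owner may do this */
	sem_destroy(&s);
	return 0;
}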
From owner-xfs@oss.sgi.com Mon Jan 8 05:08:46 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 05:08:53 -0800 (PST) Received: from web31704.mail.mud.yahoo.com (web31704.mail.mud.yahoo.com [68.142.201.184]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l08D8iqw003127 for ; Mon, 8 Jan 2007 05:08:46 -0800 Received: (qmail 79111 invoked by uid 60001); 8 Jan 2007 12:41:07 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:Date:From:Subject:To:MIME-Version:Content-Type:Content-Transfer-Encoding:Message-ID; b=VLrCwkPST9v1+AfD3UCSbDzdDULB1OtBKId/alYx4gebrPsK4xhbvmUpRlUIUoq8tf6Sizyz0XLKrU7tw8F9bC+vbaUwsCsWwypmOAknlSw2IfeNO6F6R/jBh3RGytim5cu6h9DZ395dwbxt/mCxJmeLIV/XgYL02Gn/cTemOd0=; X-YMail-OSG: jyt.HCQVM1m9RPrZndjctcoQhySkIG.9.rx7tWMRYK3a.pSSRDmJRiGodRqnHeJsyDfdtqt1PGcoi0LkQQUbyedXiJDX80ndWG2KHNLICzkcdLof9hxOVh7WREbBD21aufMhsCvrNE9grjk- Received: from [212.150.66.71] by web31704.mail.mud.yahoo.com via HTTP; Mon, 08 Jan 2007 04:41:07 PST Date: Mon, 8 Jan 2007 04:41:07 -0800 (PST) From: Heilige Gheist Subject: kmem_alloc deadlock in SLES9 SP3 To: linux-xfs@oss.sgi.com MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: 8bit Message-ID: <674311.78396.qm@web31704.mail.mud.yahoo.com> X-archive-position: 10195 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hgheist@yahoo.com Precedence: bulk X-list: xfs Content-Length: 835 Lines: 25 I'm getting occassional system freezes preceded by spurious kmem_deadlock messages. The system is running SLES9 SP3, xfs with large (~1GB) fragmented files, using real-time section. The message is Jan 8 06:27:55 ce-9 kernel: XFS: possible memory allocation deadlock in kmem_alloc (mode:0x2d0) ce-9:~ # uname -a Linux ce-9 2.6.5-7.276-bigsmp #2 SMP Tue Sep 19 05:27:23 IDT 2006 i686 i686 i386 GNU/Linux The similar bug report http://oss.sgi.com/bugzilla/show_bug.cgi?id=410 recommends upgrading to 2.6.17 to make use of new incore extent management code. Is there a version of commercial Linux (RHEL/SLES) that incorporates this fix? SLES10 is based on 2.6.16 kernel. --alan __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! 
Mail has the best spam protection around http://mail.yahoo.com From owner-xfs@oss.sgi.com Mon Jan 8 05:08:49 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 05:08:57 -0800 (PST) Received: from pne-smtpout4-sn2.hy.skanova.net (pne-smtpout4-sn2.hy.skanova.net [81.228.8.154]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08D8jqw003133 for ; Mon, 8 Jan 2007 05:08:49 -0800 Received: from safari.iki.fi (80.223.106.128) by pne-smtpout4-sn2.hy.skanova.net (7.2.075) id 44A2EAB8008C1173 for xfs@oss.sgi.com; Mon, 8 Jan 2007 12:58:18 +0100 Received: (qmail 12273 invoked by uid 500); 8 Jan 2007 11:58:17 -0000 Date: Mon, 8 Jan 2007 13:58:17 +0200 From: Sami Farin To: xfs@oss.sgi.com, linux-kernel@vger.kernel.org Cc: Andrew Morton , David Chinner , Hugh Dickins , Nick Piggin Subject: Re: BUG: warning at mm/truncate.c:60/cancel_dirty_page() Message-ID: <20070108115816.GB3803@m.safari.iki.fi> Mail-Followup-To: xfs@oss.sgi.com, linux-kernel@vger.kernel.org, Andrew Morton , David Chinner , Hugh Dickins , Nick Piggin References: <20070106023907.GA7766@m.safari.iki.fi> <20070107222341.GT33919298@melbourne.sgi.com> <20070107144812.96357ff9.akpm@osdl.org> <20070107230436.GU33919298@melbourne.sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070107230436.GU33919298@melbourne.sgi.com> User-Agent: Mutt/1.5.13 (2006-08-11) X-archive-position: 10196 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: safari-xfs@safari.iki.fi Precedence: bulk X-list: xfs Content-Length: 1002 Lines: 28 On Mon, Jan 08, 2007 at 10:04:36 +1100, David Chinner wrote: > On Sun, Jan 07, 2007 at 02:48:12PM -0800, Andrew Morton wrote: > > On Mon, 8 Jan 2007 09:23:41 +1100 > > David Chinner wrote: > > > > > How are you supposed to invalidate a range of pages in a mapping for > > > this case, then? invalidate_mapping_pages() would appear to be the > > > candidate (the generic code uses this), but it _skips_ pages that > > > are already mapped. > > > > unmap_mapping_range()? > > /me looks at how it's used in invalidate_inode_pages2_range() and > decides it's easier not to call this directly. > > > > So, am I correct in assuming we should be calling invalidate_inode_pages2_range() > > > instead of truncate_inode_pages()? > > > > That would be conventional. > > .... in that case the following patch should fix the warning: I tried dt+strace+cinfo with this patch applied and got no warnings. Thanks for quick fix. -- Do what you love because life is too short for anything else. 
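For reference, the byte-offset to page-index conversion the fix relies on, pulled out into a standalone sketch. This assumes 4 KiB pages; in the patch itself the round-up is done by the XFS_OFF_TO_PCSIZE() macro, since invalidate_inode_pages2_range() takes page indices rather than the byte offset truncate_inode_pages() took.

/*
 * Illustration only (not kernel code): the round-up conversion from a
 * byte range to the page-index range to invalidate, with 4096-byte pages.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1ULL << PAGE_SHIFT)
#define OFF_TO_INDEX(off)	(((off) + PAGE_SIZE - 1) >> PAGE_SHIFT)

int main(void)
{
	/* arbitrary byte range of a direct I/O write */
	uint64_t first = 8192, last = 20000;

	printf("invalidate page indices %llu..%llu\n",
	       (unsigned long long)OFF_TO_INDEX(first),
	       (unsigned long long)OFF_TO_INDEX(last));	/* 2..5 */
	return 0;
}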
From owner-xfs@oss.sgi.com Mon Jan 8 05:40:55 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 05:40:59 -0800 (PST) Received: from web59111.mail.re1.yahoo.com (web59111.mail.re1.yahoo.com [66.196.101.22]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l08Deqqw009792 for ; Mon, 8 Jan 2007 05:40:54 -0800 Received: (qmail 57541 invoked by uid 60001); 8 Jan 2007 13:13:13 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:Date:From:Subject:To:MIME-Version:Content-Type:Content-Transfer-Encoding:Message-ID; b=PEPdSTFHJLkQuWr5Bf+GaX+C5qw7z/cJl2tAjkD5fE8hCMhxNMmcxZP0Ie9uruNvlsDSTlkY4o1pLhkqk+TvpfT8HMi+amT4JnQOXusDvGMZnpYYqlrez2QdgkPIpXrO1R4UIu1ZbWUkPCf7Pr90SL46n7lMAe2jdnlWVjF0qG0=; X-YMail-OSG: gH7XkVsVM1lGAJQUO6aYVyqlEWke5ZYcOIwttweIcwf9ltYhEW8z6Iz1df84fPB9px_FMdp0vQtZdR3O29S_WyQKt5VtSJ00wn6qBKBt5453EvvGYHYQMqdGOeo6ft06fXQZGgyJBtPC2.wxBVKNMvdty8QPPI2HsNywWhPtP25_ Received: from [213.132.154.79] by web59111.mail.re1.yahoo.com via HTTP; Mon, 08 Jan 2007 05:13:12 PST Date: Mon, 8 Jan 2007 05:13:12 -0800 (PST) From: Dave N Subject: What's wrong with XFS? To: xfs@oss.sgi.com MIME-Version: 1.0 Message-ID: <936386.57179.qm@web59111.mail.re1.yahoo.com> Content-Type: text/plain Content-Disposition: inline Content-Transfer-Encoding: 7bit X-archive-position: 10198 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mutex1@yahoo.com Precedence: bulk X-list: xfs Content-Length: 2408 Lines: 20 Hi, Can someone enlighten me what the issue is with XFS? I've been hearing a lot of good things on the Net about XFS. How it's lightening fast, how it has features other file systems do not have (like GRIO, real time volumes, allocate on flush, etc), how it scales very well, etc... but what I didn't hear about is how fast XFS screws things up if something wrong happens. Because of the good things I heard about XFS, I too decided to try it out (been using Ext3 or ReiserFS here for most of the time). Now I'm very disappointed in XFS. I live in an area where power outages are common and I do not have an UPS here. I have a few computers all running on XFS and thought that XFS will give me similar data-integrity like Ext3 or ReiserFS. Now, for the past few weeks I've been experiencing "strange behavior" from XFS. One time, I was reading an article on the Net and had only my Firefox browser open. Then we had a power outage for a short period of time, and when I logged in again into KDE, I was surprised to find out that all my desktop icons were messed up all over the place. The other time, again power outage, only this time I was working on a small text file. Booted up again only to find out that the file I was working on contained garbage and I had to start all over again. I also heard that XFS depends heavily on the application side for its data-integrity. XFS "thinks" that the application will use the proper calls when writing to disk. What???? How is it the task of the application to ensure the safety of your files??? IMO, programs are there to provide the tools to be productive, NOT to ensure the data safety of your files, that's the task of the file system. Even MySQL provides me with better data-integrity here. If I'm doing some database transaction and the power fails, I can be pretty sure that *most* of the time, MySQL will be just fine next time I boot up. Why oh why such a beautiful file system like XFS is so terrible at data-integrity? 
Look what Sun Microsystems did with their new ZFS file system... full atomicity, CRC checksumming and other features to ensure data-integrity... why can't XFS have such things? Thanks for listening to my preaching here guys Cheers! __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com [[HTML alternate version deleted]] From owner-xfs@oss.sgi.com Mon Jan 8 06:46:55 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 06:47:03 -0800 (PST) Received: from smtp113.sbc.mail.mud.yahoo.com (smtp113.sbc.mail.mud.yahoo.com [68.142.198.212]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l08Eksqw022022 for ; Mon, 8 Jan 2007 06:46:55 -0800 Received: (qmail 71865 invoked from network); 8 Jan 2007 14:46:01 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@24.5.75.45 with login) by smtp113.sbc.mail.mud.yahoo.com with SMTP; 8 Jan 2007 14:46:01 -0000 X-YMail-OSG: yCHEEAoVM1nr1jeANQdaVdHJaA5a7Isfj4K_s1adPn_f.p85cuQDjXyxjiGIjWWkCboecBArvkIeQVjguqxS6.Zltop6E1due9XHtVVEzN9USZaoLc8.J52CUCnRb1Zr57.FO_DrRmV.zz6wNGwnF0lvPGMkrlTMvIAMWB3mu8SMcD0l5V6ZrpTOLlUG Received: by tuatara.stupidest.org (Postfix, from userid 10000) id 8FFE6182614B; Mon, 8 Jan 2007 06:45:49 -0800 (PST) Date: Mon, 8 Jan 2007 06:45:49 -0800 From: Chris Wedgwood To: Dave N Cc: xfs@oss.sgi.com Subject: Re: What's wrong with XFS? Message-ID: <20070108144549.GA12073@tuatara.stupidest.org> References: <936386.57179.qm@web59111.mail.re1.yahoo.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <936386.57179.qm@web59111.mail.re1.yahoo.com> X-archive-position: 10199 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs Content-Length: 2415 Lines: 60 On Mon, Jan 08, 2007 at 05:13:12AM -0800, Dave N wrote: > KDE, I was surprised to find out that all my desktop icons were > messed up all over the place. KDE made assumptions which are not only not true on linux but not true elsewhere either. Last I checked KDE dealt with the common cases that were problematic much better now. > The other time, again power outage, only this time I was working on > a small text file. Booted up again only to find out that the file I > was working on contained garbage and I had to start all over again. The file should not have contained garbage. Also, if you open+truncate+write a file it should be flushed very soon after close these days, the window is fairly small now. > I also heard that XFS depends heavily on the application side for > its data-integrity. XFS "thinks" that the application will use the > proper calls when writing to disk. What???? How is it the task of > the application to ensure the safety of your files??? It's always been that way, for many many years, even before Linux existed. If you want your applications to be portable and reliable then you have to do it right. MTAs are a good example of applications which typically get this right because people care about lost email and the authors typically take some effort to make sure it's right. > IMO, programs are there to provide the tools to be productive, NOT > to ensure the data safety of your files, that's the task of the file > system. Even MySQL provides me with better data-integrity here.
Does MySQL allow me to read or write 100s of MB/s continuously on cheap hardware (for not so cheap hardware I could ask for 7GB/s). > Why oh why such a beautiful file system like XFS is so terrible at > data-integrity? There is a cost to full data journalling. Personally even with ext3 I find the impact of this high enough that I don't use it. > Look what Sun Microsystems did with their new ZFS file > system... full atomicity, CRC checksumming and other features to > ensure data-integrity... You could argue XFS is showing its age, it's far from a new filesystem these days. ZFS is a very different animal to most traditional filesystems. > why can't XFS have such things? Because the realities of life sometimes collide with what people want ideally. Linux can't have ZFS for licensing reasons but you can have Solaris with ZFS: http://opensolaris.org/os/downloads/on/ From owner-xfs@oss.sgi.com Mon Jan 8 06:54:45 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 06:54:51 -0800 (PST) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08Eshqw023908 for ; Mon, 8 Jan 2007 06:54:44 -0800 Received: from [194.173.12.131] (helo=[172.25.16.7]) by mrelayeu.kundenserver.de (node=mrelayeu0) with ESMTP (Nemesis), id 0MKwh2-1H3vgP2b4x-00070B; Mon, 08 Jan 2007 15:41:06 +0100 Message-ID: <45A25800.6060603@gmx.net> Date: Mon, 08 Jan 2007 15:41:04 +0100 From: Klaus Strebel User-Agent: Thunderbird 1.5.0.9 (Windows/20061207) MIME-Version: 1.0 To: Dave N CC: xfs@oss.sgi.com Subject: Re: What's wrong with XFS? References: <936386.57179.qm@web59111.mail.re1.yahoo.com> In-Reply-To: <936386.57179.qm@web59111.mail.re1.yahoo.com> Content-Type: text/plain; charset=ISO-8859-15 Content-Transfer-Encoding: 8bit X-Provags-ID: kundenserver.de abuse@kundenserver.de login:8a7df7300d3d15a4f701302fdde7adf9 X-archive-position: 10200 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: klaus.strebel@gmx.net Precedence: bulk X-list: xfs Content-Length: 1173 Lines: 30 Dave N schrieb: > Hi, > > Even MySQL provides me with better data-integrity here. If I'm doing some database transaction and the power fails, I can be pretty sure that *most* of the time, MySQL will be just fine next time I boot up. Hallo Dave, MySQL is an application which takes care of data-integrity ( which XFS depends on, as you stated yourself ;-) ). XFS takes care of the filesystem-integrity, to enable your MySQL to find the files whose content-integrity it is caring for ( as an application, you see ;-) ) > > Why oh why such a beautiful file system like XFS is so terrible at data-integrity? Look what Sun Microsystems did with their new ZFS file system... full atomicity, CRC checksumming and other features to ensure data-integrity... why can't XFS have such things? To mount multi-gigabyte filesystems after some kind of disaster in minutes, not in hours or days ;-). It's only caring for meta-data, not the data. > > Thanks for listening to my preaching here guys > > Cheers! -- Mit freundlichen Grüssen / best regards Klaus Strebel, Dipl.-Inform.
(FH), mailto:klaus.strebel@gmx.net /"\ \ / ASCII RIBBON CAMPAIGN X AGAINST HTML MAIL / \ From owner-xfs@oss.sgi.com Mon Jan 8 07:13:16 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 07:13:19 -0800 (PST) Received: from extgat1.local.navi.pl (ip-83-238-212-180.netia.com.pl [83.238.212.180]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08FDDqw027209 for ; Mon, 8 Jan 2007 07:13:15 -0800 Received: from venus.local.navi.pl (www.local.navi.pl [192.168.1.10]) by extgat1.local.navi.pl (8.13.1/8.13.1) with ESMTP id l08EobEL003960 for ; Mon, 8 Jan 2007 15:50:37 +0100 Received: from venus.local.navi.pl (venus.local.navi.pl [192.168.1.10]) by venus.local.navi.pl (8.13.1/8.13.1) with ESMTP id l08Eobxm032566 for ; Mon, 8 Jan 2007 15:50:37 +0100 Subject: Re: What's wrong with XFS? From: Olaf =?iso-8859-2?Q?Fr=B1czyk?= To: xfs@oss.sgi.com In-Reply-To: <936386.57179.qm@web59111.mail.re1.yahoo.com> References: <936386.57179.qm@web59111.mail.re1.yahoo.com> Content-Type: text/plain; charset=UTF-8 Date: Mon, 08 Jan 2007 15:50:37 +0100 Message-Id: <1168267837.29690.14.camel@venus.local.navi.pl> Mime-Version: 1.0 X-Mailer: Evolution 2.0.2 (2.0.2-3) Content-Transfer-Encoding: 8bit X-archive-position: 10201 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: olaf@cbk.poznan.pl Precedence: bulk X-list: xfs Content-Length: 2829 Lines: 35 On Mon, 2007-01-08 at 05:13 -0800, Dave N wrote: > Hi, > > Can someone enlighten me what the issue is with XFS? I've been hearing a lot of good things on the Net about XFS. How it's lightening fast, how it has features other file systems do not have (like GRIO, real time volumes, allocate on flush, etc), how it scales very well, etc... but what I didn't hear about is how fast XFS screws things up if something wrong happens. Because of the good things I heard about XFS, I too decided to try it out (been using Ext3 or ReiserFS here for most of the time). Now I'm very disappointed in XFS. I live in an area where power outages are common and I do not have an UPS here. I have a few computers all running on XFS and thought that XFS will give me similar data-integrity like Ext3 or ReiserFS. Now, for the past few weeks I've been experiencing "strange behavior" from XFS. One time, I was reading an article on the Net and had only my Firefox browser open. Then we had a power outage for a short period of time, and when I logged in again into > KDE, I was surprised to find out that all my desktop icons were messed up all over the place. The other time, again power outage, only this time I was working on a small text file. Booted up again only to find out that the file I was working on contained garbage and I had to start all over again. > > I also heard that XFS depends heavily on the application side for its data-integrity. XFS "thinks" that the application will use the proper calls when writing to disk. What???? How is it the task of the application to ensure the safety of your files??? IMO, programs are there to provide the tools to be productive, NOT to ensure the data safety of your files, that's the task of the file system. Even MySQL provides me with better data-integrity here. If I'm doing some database transaction and the power fails, I can be pretty sure that *most* of the time, MySQL will be just fine next time I boot up. > > Why oh why such a beautiful file system like XFS is so terrible at data-integrity? Look what Sun Microsystems did with their new ZFS file system... 
full atomicity, CRC checksumming and other features to ensure data-integrity... why can't XFS have such things? > > Thanks for listening to my preaching here guys > > Cheers! Hi, It is nothing wrong with XFS - your expectations are wrong. You expect data to be journaled, but XFS does journal metadata only, not data. So, the thing that you get is filesystem integrity not data integrity. If you want data integrity you need properly written applications and __it is__ application's job to care about it's data. It is nothing unusual here. If you need data journaling then you need another filesystem - eg. ext3. I suppose that you find all of it in FAQ. Regards, Olaf -- Olaf FrÄ…czyk From owner-xfs@oss.sgi.com Mon Jan 8 07:13:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 07:13:21 -0800 (PST) Received: from extgat1.local.navi.pl (ip-83-238-212-180.netia.com.pl [83.238.212.180]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08FDDr0027209 for ; Mon, 8 Jan 2007 07:13:16 -0800 Received: from venus.local.navi.pl (venus.local.navi.pl [192.168.1.10]) by extgat1.local.navi.pl (8.13.1/8.13.1) with ESMTP id l08EnFLv003957; Mon, 8 Jan 2007 15:49:16 +0100 Received: from venus.local.navi.pl (venus.local.navi.pl [192.168.1.10]) by venus.local.navi.pl (8.13.1/8.13.1) with ESMTP id l08EnFc0032437; Mon, 8 Jan 2007 15:49:15 +0100 Subject: Re: What's wrong with XFS? From: Olaf Fraczyk To: Dave N Cc: xfs@oss.sgi.com In-Reply-To: <936386.57179.qm@web59111.mail.re1.yahoo.com> References: <936386.57179.qm@web59111.mail.re1.yahoo.com> Content-Type: text/plain Organization: NAVI Date: Mon, 08 Jan 2007 15:49:15 +0100 Message-Id: <1168267755.29690.13.camel@venus.local.navi.pl> Mime-Version: 1.0 X-Mailer: Evolution 2.0.2 (2.0.2-3) Content-Transfer-Encoding: 7bit X-archive-position: 10202 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: olaf@navi.pl Precedence: bulk X-list: xfs Content-Length: 2826 Lines: 35 On Mon, 2007-01-08 at 05:13 -0800, Dave N wrote: > Hi, > > Can someone enlighten me what the issue is with XFS? I've been hearing a lot of good things on the Net about XFS. How it's lightening fast, how it has features other file systems do not have (like GRIO, real time volumes, allocate on flush, etc), how it scales very well, etc... but what I didn't hear about is how fast XFS screws things up if something wrong happens. Because of the good things I heard about XFS, I too decided to try it out (been using Ext3 or ReiserFS here for most of the time). Now I'm very disappointed in XFS. I live in an area where power outages are common and I do not have an UPS here. I have a few computers all running on XFS and thought that XFS will give me similar data-integrity like Ext3 or ReiserFS. Now, for the past few weeks I've been experiencing "strange behavior" from XFS. One time, I was reading an article on the Net and had only my Firefox browser open. Then we had a power outage for a short period of time, and when I logged in again into > KDE, I was surprised to find out that all my desktop icons were messed up all over the place. The other time, again power outage, only this time I was working on a small text file. Booted up again only to find out that the file I was working on contained garbage and I had to start all over again. > > I also heard that XFS depends heavily on the application side for its data-integrity. XFS "thinks" that the application will use the proper calls when writing to disk. What???? 
How is it the task of the application to ensure the safety of your files??? IMO, programs are there to provide the tools to be productive, NOT to ensure the data safety of your files, that's the task of the file system. Even MySQL provides me with better data-integrity here. If I'm doing some database transaction and the power fails, I can be pretty sure that *most* of the time, MySQL will be just fine next time I boot up. > > Why oh why such a beautiful file system like XFS is so terrible at data-integrity? Look what Sun Microsystems did with their new ZFS file system... full atomicity, CRC checksumming and other features to ensure data-integrity... why can't XFS have such things? > > Thanks for listening to my preaching here guys > > Cheers! Hi, It is nothing wrong with XFS - your expectations are wrong. You expect data to be journaled, but XFS does journal metadata only, not data. So, the thing that you get is filesystem integrity not data integrity. If you want data integrity you need properly written applications and __it is__ application's job to care about it's data. It is nothing unusual here. If you need data journaling then you need another filesystem - eg. ext3. I suppose that you find all of it in FAQ. Regards, Olaf -- Olaf Fraczyk NAVI From owner-xfs@oss.sgi.com Mon Jan 8 07:25:19 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 07:25:24 -0800 (PST) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.171]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08FPHqw029952 for ; Mon, 8 Jan 2007 07:25:18 -0800 Received: from [194.173.12.131] (helo=[172.25.16.7]) by mrelayeu.kundenserver.de (node=mrelayeu6) with ESMTP (Nemesis), id 0ML29c-1H3wMK0doj-0007pa; Mon, 08 Jan 2007 16:24:24 +0100 Message-ID: <45A26227.2080907@gmx.net> Date: Mon, 08 Jan 2007 16:24:23 +0100 From: Klaus Strebel User-Agent: Thunderbird 1.5.0.9 (Windows/20061207) MIME-Version: 1.0 To: Dave N CC: xfs@oss.sgi.com Subject: Re: What's wrong with XFS? References: <936386.57179.qm@web59111.mail.re1.yahoo.com> <20070108144549.GA12073@tuatara.stupidest.org> In-Reply-To: <20070108144549.GA12073@tuatara.stupidest.org> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 8bit X-Provags-ID: kundenserver.de abuse@kundenserver.de login:8a7df7300d3d15a4f701302fdde7adf9 X-archive-position: 10203 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: klaus.strebel@gmx.net Precedence: bulk X-list: xfs Content-Length: 2805 Lines: 80 Chris Wedgwood schrieb: > On Mon, Jan 08, 2007 at 05:13:12AM -0800, Dave N wrote: > >> KDE, I was surprised to find out that all my desktop icons were >> messed up all over the place. > > KDE made assumptions which are not only not true on linux but not true > elsewhere either. Last I checked KDE dealt with the common cases that > were problematic much better now. > >> The other time, again power outage, only this time I was working on >> a small text file. Booted up again only to find out that the file I >> was working on contained garbage and I had to start all over again. > > The file should not have contained garbage. Also, if you > open+truncate+write a file it should be flushed very soon after close > these days, the window is fairly small now. > >> I also heard that XFS depends heavily on the application side for >> its data-integrity. XFS "thinks" that the application will use the >> proper calls when writing to disk. What???? 
How is it the task of >> the application to ensure the safety of your files??? > > It's always been that way, for many many years, even before Linux > existed. If you want your applictions to be portable and reliable > then you have to do do it right. > > MTAs are a good example of applications which typically get this right > because people case about lost email and the authors typically take > some effort into make sure it's right. > >> IMO, programs are there to provide the tools to be productive, NOT >> to ensure the data safety of your files, that's the task of the file >> system. Even MySQL provides me with better data-integrity here. > > Does MySQL allow me to read or write 100s of MB/s continuously on > cheap hardware (for not so cheap hardware I could ask 7GB/s). > >> Why oh why such a beautiful file system like XFS is so terrible at >> data-integrity? > > There is a cost to full data journalling. Personally even with ext3 I > find the impact of this high enough I don't use it. > >> Look what Sun Microsystems did with their new ZFS file >> system... full atomicity, CRC checksumming and other features to >> ensure data-integrity... > > You could argue XFS is showing it's age, it's far from a new > filesystem these days. > > ZFS is a very different animal to most traditional filesystems. > >> why can't XFS have such things? > > Because the realities of life sometime collide with what people want > ideally. > > Linux can't have ZFS for licensing reasons but you can have Solaris > with ZFS: http://opensolaris.org/os/downloads/on/ > > FYI, just found this ;-) Klaus -- Mit freundlichen Grüssen / best regards Klaus Strebel, Dipl.-Inform. (FH), mailto:klaus.strebel@gmx.net /"\ \ / ASCII RIBBON CAMPAIGN X AGAINST HTML MAIL / \ From owner-xfs@oss.sgi.com Mon Jan 8 07:59:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 07:59:48 -0800 (PST) Received: from sumo.dreamhost.com (sumo.dreamhost.com [66.33.216.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08Fxgqw004776 for ; Mon, 8 Jan 2007 07:59:43 -0800 Received: from spaceymail-a1.dreamhost.com (sd-green-bigip-62.dreamhost.com [208.97.132.62]) by sumo.dreamhost.com (Postfix) with ESMTP id 1584C17E75A for ; Mon, 8 Jan 2007 07:35:38 -0800 (PST) Received: from jupiter.solar.net (cpe-24-27-90-21.houston.res.rr.com [24.27.90.21]) by spaceymail-a1.dreamhost.com (Postfix) with ESMTP id 308AC194F6C for ; Mon, 8 Jan 2007 07:35:35 -0800 (PST) From: Joe Bacom Reply-To: joe@docsimple.com To: xfs@oss.sgi.com Subject: Re: What's wrong with XFS? Date: Mon, 8 Jan 2007 09:35:36 -0600 User-Agent: KMail/1.9.1 References: <936386.57179.qm@web59111.mail.re1.yahoo.com> <1168267755.29690.13.camel@venus.local.navi.pl> In-Reply-To: <1168267755.29690.13.camel@venus.local.navi.pl> MIME-Version: 1.0 Content-Type: multipart/signed; boundary="nextPart1755204.IqRS27UC8K"; protocol="application/pgp-signature"; micalg=pgp-sha1 Content-Transfer-Encoding: 7bit Message-Id: <200701080935.36736.joe@docsimple.com> X-archive-position: 10204 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: joe@docsimple.com Precedence: bulk X-list: xfs Content-Length: 4237 Lines: 107 --nextPart1755204.IqRS27UC8K Content-Type: text/plain; charset="iso-8859-6" Content-Transfer-Encoding: quoted-printable Content-Disposition: inline The solution to Dave's problem seems obvious to me. 
From owner-xfs@oss.sgi.com Mon Jan 8 07:59:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 07:59:48 -0800 (PST) Received: from sumo.dreamhost.com (sumo.dreamhost.com [66.33.216.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08Fxgqw004776 for ; Mon, 8 Jan 2007 07:59:43 -0800 Received: from spaceymail-a1.dreamhost.com (sd-green-bigip-62.dreamhost.com [208.97.132.62]) by sumo.dreamhost.com (Postfix) with ESMTP id 1584C17E75A for ; Mon, 8 Jan 2007 07:35:38 -0800 (PST) Received: from jupiter.solar.net (cpe-24-27-90-21.houston.res.rr.com [24.27.90.21]) by spaceymail-a1.dreamhost.com (Postfix) with ESMTP id 308AC194F6C for ; Mon, 8 Jan 2007 07:35:35 -0800 (PST) From: Joe Bacom Reply-To: joe@docsimple.com To: xfs@oss.sgi.com Subject: Re: What's wrong with XFS? Date: Mon, 8 Jan 2007 09:35:36 -0600 User-Agent: KMail/1.9.1 References: <936386.57179.qm@web59111.mail.re1.yahoo.com> <1168267755.29690.13.camel@venus.local.navi.pl> In-Reply-To: <1168267755.29690.13.camel@venus.local.navi.pl> MIME-Version: 1.0 Content-Type: multipart/signed; boundary="nextPart1755204.IqRS27UC8K"; protocol="application/pgp-signature"; micalg=pgp-sha1 Content-Transfer-Encoding: 7bit Message-Id: <200701080935.36736.joe@docsimple.com> X-archive-position: 10204 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: joe@docsimple.com Precedence: bulk X-list: xfs Content-Length: 4237 Lines: 107 --nextPart1755204.IqRS27UC8K Content-Type: text/plain; charset="iso-8859-6" Content-Transfer-Encoding: quoted-printable Content-Disposition: inline The solution to Dave's problem seems obvious to me. If you care about your data and your hardware, buy a UPS with power conditioning, configure Linux to shut down when the battery gets low and enjoy the peace of mind knowing that even if you're away from your machine and the power goes off, the system will take care of itself. Joe On Monday 08 January 2007 08:49, you wrote: > On Mon, 2007-01-08 at 05:13 -0800, Dave N wrote: > > Hi, > > > > Can someone enlighten me what the issue is with XFS? I've been hearing a > > lot of good things on the Net about XFS. How it's lightning fast, how it > > has features other file systems do not have (like GRIO, real time > > volumes, allocate on flush, etc), how it scales very well, etc... but > > what I didn't hear about is how fast XFS screws things up if something > > wrong happens. Because of the good things I heard about XFS, I too > > decided to try it out (been using Ext3 or ReiserFS here for most of the > > time). Now I'm very disappointed in XFS. I live in an area where power > > outages are common and I do not have a UPS here. I have a few computers > > all running on XFS and thought that XFS will give me similar > > data-integrity like Ext3 or ReiserFS. Now, for the past few weeks I've > > been experiencing "strange behavior" from XFS. One time, I was reading an > > article on the Net and had only my Firefox browser open. Then we had a > > power outage for a short period of time, and when I logged in again into > > KDE, I was surprised to find out that all my desktop icons were messed up > > all over the place. The other time, again power outage, only this time I > > was working on a small text file. Booted up again only to find out that > > the file I was working on contained garbage and I had to start all over > > again. > > > > I also heard that XFS depends heavily on the application side for its > > data-integrity. XFS "thinks" that the application will use the proper > > calls when writing to disk. What???? How is it the task of the > > application to ensure the safety of your files??? IMO, programs are there > > to provide the tools to be productive, NOT to ensure the data safety of > > your files, that's the task of the file system. Even MySQL provides me > > with better data-integrity here. If I'm doing some database transaction > > and the power fails, I can be pretty sure that *most* of the time, MySQL > > will be just fine next time I boot up. > > > > Why oh why such a beautiful file system like XFS is so terrible at > > data-integrity? Look what Sun Microsystems did with their new ZFS file > > system... full atomicity, CRC checksumming and other features to ensure > > data-integrity... why can't XFS have such things? > > > > Thanks for listening to my preaching here guys > > > > Cheers! > > Hi, > > There is nothing wrong with XFS - your expectations are wrong. > > You expect data to be journaled, but XFS journals metadata only, not > data. So what you get is filesystem integrity, not data > integrity. > If you want data integrity you need properly written applications, and > __it is__ the application's job to take care of its data. There is nothing > unusual here. > > If you need data journaling then you need another filesystem - e.g. ext3. > > You will find all of this in the FAQ. > > Regards, > > Olaf -- A Cringester who requested anonymity says when a friend ran Microsoft BS (Baseline Security) Analyzer on an XP Pro SP2 machine, the cumulative size of the patches that were required exceeded the size of the original OS. I'm not surprised.
The volume of Microsoft BS I've analyzed could fill Bill Gates' house. Source: Robert X. Cringely, InfoWorld, Sept. 4, 2006, Issue 36 Penguin: Linux version 2.6.16, 8010.09 BogoMips --nextPart1755204.IqRS27UC8K Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2 (GNU/Linux) iD8DBQBFomTILtK/s7Pra28RAr03AKCELLtTC2Nt5ZM5KDQDZsW7cQQBFwCeLn7Q 6hKiX3Z67OVHCmIcM5vAD8w= =Is2P -----END PGP SIGNATURE----- --nextPart1755204.IqRS27UC8K-- From owner-xfs@oss.sgi.com Mon Jan 8 08:08:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 08:08:14 -0800 (PST) Received: from gw03.mail.saunalahti.fi (gw03.mail.saunalahti.fi [195.197.172.111]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08G86qw006890 for ; Mon, 8 Jan 2007 08:08:07 -0800 Received: from mrp3.mail.saunalahti.fi (mrp3.mail.saunalahti.fi [62.142.5.32]) by gw03.mail.saunalahti.fi (Postfix) with ESMTP id 129C521638F for ; Mon, 8 Jan 2007 18:07:13 +0200 (EET) Received: from [192.168.0.151] (unknown [62.142.247.178]) (using SSLv3 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mrp3.mail.saunalahti.fi (Postfix) with ESMTP id E647B708001 for ; Mon, 8 Jan 2007 18:07:11 +0200 (EET) Subject: Re: xfs_repair: corrupt inode error From: Jyrki Muukkonen To: xfs@oss.sgi.com In-Reply-To: <1168251812.20568.8.camel@mustis> References: <1168251812.20568.8.camel@mustis> Content-Type: text/plain Date: Mon, 08 Jan 2007 18:07:11 +0200 Message-Id: <1168272431.21580.14.camel@mustis> Mime-Version: 1.0 X-Mailer: Evolution 2.8.1 Content-Transfer-Encoding: 7bit X-archive-position: 10205 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jyrki.muukkonen@futurice.fi Precedence: bulk X-list: xfs Content-Length: 1612 Lines: 43 On ma, 2007-01-08 at 12:23 +0200, Jyrki Muukkonen wrote: > Got this error in phase 6 when running xfs_repair 2.8.18 on ~1.2TB > partition over the weekend (it took around 60 hours to get to this > point :). On earlier versions xfs_repair aborted after ~15-20 hours with > "invalid inode type" error. > > ... > disconnected inode 4151889519, moving to lost+found > disconnected inode 4151889543, moving to lost+found > corrupt inode 4151889543 (btree). This is a bug. > Please report it to xfs@oss.sgi.com. > cache_node_purge: refcount was 1, not zero (node=0x132650d0) > > fatal error -- 117 - couldn't iget disconnected inode > > I've got the full log (both stderr and stdout) and can put that > somewhere if needed. It's about 80MB uncompressed and around 7MB > gzipped. Running the xfs_repair without multithreading and with -v might > also be possible if that's going to help. > Some more information: - running 64bit Ubuntu Edgy 2.6.17-10-generic - one processor so xfs_repair was run with two threads - 1.5GB RAM, 3GB swap (at some point the xfs_repair process took a bit over 2GB) - filesystem is ~1.14TB with about ~1.4 million files - most of the files are in subdirectories by date (/something/YYYY/MM/DD/), ~2-10 thousand per day So is there a way to skip / ignore this error? I could do some testing with different command line options and small code patches if that's going to help solve the bug. Most of the files have been recovered from backups, raw disk images etc. but unfortunately some are still missing. 
-- Jyrki Muukkonen Futurice Oy jyrki.muukkonen@futurice.fi +358 41 501 7322 From owner-xfs@oss.sgi.com Mon Jan 8 08:41:50 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 08:41:57 -0800 (PST) Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08Gfnqw019671 for ; Mon, 8 Jan 2007 08:41:50 -0800 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id l08GeuH4014409; Mon, 8 Jan 2007 11:40:56 -0500 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l08GetR4031260; Mon, 8 Jan 2007 11:40:55 -0500 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l08GeswC000958; Mon, 8 Jan 2007 11:40:55 -0500 Message-ID: <45A27416.8030600@sandeen.net> Date: Mon, 08 Jan 2007 10:40:54 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.8 (X11/20061107) MIME-Version: 1.0 To: linux-kernel Mailing List , xfs@oss.sgi.com Subject: Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock() References: <20070104001420.GA32440@m.safari.iki.fi> <20070107213734.GS44411608@melbourne.sgi.com> <20070108110323.GA3803@m.safari.iki.fi> In-Reply-To: <20070108110323.GA3803@m.safari.iki.fi> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10206 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 958 Lines: 35 Sami Farin wrote: > On Mon, Jan 08, 2007 at 08:37:34 +1100, David Chinner wrote: > ... >>> fstab was there just fine after -u. >> Oh, that still hasn't been fixed? > > Looked like it =) Hm, it was proposed upstream a while ago: http://lkml.org/lkml/2006/9/27/137 I guess it got lost? -Eric >> Generic bug, not XFS - the global >> semaphore->mutex cleanup converted the bd_mount_sem to a mutex, and >> mutexes complain loudly when a the process unlocking the mutex is >> not the process that locked it. >> >> Basically, the generic code is broken - the bd_mount_mutex needs to >> be reverted back to a semaphore because it is locked and unlocked >> by different processes. The following patch does this.... >> >> BTW, Sami, can you cc xfs@oss.sgi.com on XFS bug reports in future; >> you'll get more XFS savvy eyes there..... > > Forgot to. > > Thanks for patch. It fixed the issue, no more warnings. > > BTW. the fix is not in 2.6.git, either. 
> From owner-xfs@oss.sgi.com Mon Jan 8 09:32:28 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 09:32:33 -0800 (PST) Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08HWRqw028035 for ; Mon, 8 Jan 2007 09:32:28 -0800 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id l08HH2oP008025; Mon, 8 Jan 2007 12:17:02 -0500 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l08HH2NJ011087; Mon, 8 Jan 2007 12:17:02 -0500 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l08HH1KB005167; Mon, 8 Jan 2007 12:17:02 -0500 Message-ID: <45A27C8D.5010109@sandeen.net> Date: Mon, 08 Jan 2007 11:17:01 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.8 (X11/20061107) MIME-Version: 1.0 To: "Mr. Berkley Shands" CC: Dave Lloyd , linux-xfs@oss.sgi.com Subject: Re: XFS and 2.6.18 -> 2.6.20-rc3 References: <45A27BC7.2020709@exegy.com> In-Reply-To: <45A27BC7.2020709@exegy.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10207 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 301 Lines: 12 Mr. Berkley Shands wrote: > Dave Lloyd (our in-house Idea Guy) looked at the allocation groups... > Non-sequential, random... > > What data would you like to see? xfs_bmap -v on files where you consider the allocation to have changed between kernels, would show exactly how it has changed. -Eric From owner-xfs@oss.sgi.com Mon Jan 8 09:33:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 09:33:07 -0800 (PST) Received: from service.eng.exegy.net (68-191-203-42.static.stls.mo.charter.com [68.191.203.42]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08HX1qw028138 for ; Mon, 8 Jan 2007 09:33:03 -0800 Received: from HANAFORD.eng.exegy.net (hanaford.eng.exegy.net [10.19.1.4]) by service.eng.exegy.net (8.13.1/8.13.1) with ESMTP id l08HDiFI030693; Mon, 8 Jan 2007 11:13:44 -0600 X-Ninja-PIM: Scanned by Ninja X-Ninja-AttachmentFiltering: (no action) Received: from [10.19.4.86] ([10.19.4.86]) by HANAFORD.eng.exegy.net over TLS secured channel with Microsoft SMTPSVC(6.0.3790.1830); Mon, 8 Jan 2007 11:13:43 -0600 Message-ID: <45A27BC7.2020709@exegy.com> Date: Mon, 08 Jan 2007 11:13:43 -0600 From: "Mr. Berkley Shands" User-Agent: Thunderbird 1.5.0.9 (X11/20061222) MIME-Version: 1.0 To: Eric Sandeen CC: Dave Lloyd , linux-xfs@oss.sgi.com Subject: XFS and 2.6.18 -> 2.6.20-rc3 X-OriginalArrivalTime: 08 Jan 2007 17:13:43.0520 (UTC) FILETIME=[597A5600:01C73348] Content-Type: text/plain Content-Disposition: inline Content-Transfer-Encoding: 7bit X-archive-position: 10208 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bshands@exegy.com Precedence: bulk X-list: xfs Content-Length: 2153 Lines: 71 My testbench is a 4 core Opteron (dual 275's) into two LSI8408E SAS controllers, into 16 Seagate 7200.10 320GB satas. Redhat ES4.4 (Centos 4.4). A slightly newer parted is needed than the contemporary of Moses that is shipped with the O/S. 
I have a standard burn-in script that takes the 4 4-drive raid0's, puts a GPT label on them, and aligns the partitions to stripe boundaries. It then proceeds to write 8GB files concurrently onto all 4 raid drives. Under 2.6.18.1 the write speeds start at 265MB/Sec and decrease mostly monotonically down to ~160MB/Sec, indicating that the files start on the outside (fastest tracks) and work in. All 4 raids are within 7-8MB/Sec of each other (usually they are identical in speed). By the time of 2.6.20-rc3, the same testbench shows a 10% across-the-board decrease in throughput for writes. Reads are unaffected. But now the allocation order for virgin file systems is random, usually starting at the slow 140MB/Sec, then bouncing up to 220MB/Sec, then around and around. No two raids get the same write speeds at the same time. Dave Lloyd (our in-house Idea Guy) looked at the allocation groups... Non-sequential, random... What data would you like to see? The run logs from 2.6.18.1 and 2.6.20-rc3? Want the scripts? The xfs-debug dumps of a few files? Berkley -- //E. F. Berkley Shands, MSc// **Exegy Inc.** 3668 S. Geyer Road, Suite 300 St. Louis, MO 63127 Direct: (314) 450-5348 Cell: (314) 303-2546 Office: (314) 450-5353 Fax: (314) 450-5354 This e-mail and any documents accompanying it may contain legally privileged and/or confidential information belonging to Exegy, Inc. Such information may be protected from disclosure by law. The information is intended for use by only the addressee. If you are not the intended recipient, you are hereby notified that any disclosure or use of the information is strictly prohibited. If you have received this e-mail in error, please immediately contact the sender by e-mail or phone regarding instructions for return or destruction and do not use or disclose the content to others.
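For reference, the core of a burn-in test like the one described above can be as small as the sketch below: one large sequential file per filesystem, written in 1 MiB chunks, with the throughput printed at the end. The file size, chunk size and output path are illustrative rather than taken from Berkley's script; run one copy per mount point to get the concurrent-write pattern.

/* write_test.c - minimal sketch of a sequential-write burn-in.
 * Build: cc -O2 -o write_test write_test.c
 * Run one instance per filesystem, e.g. ./write_test /raid0/bigfile
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define BUF_SIZE  (1 << 20)            /* 1 MiB per write()  */
#define FILE_SIZE (8ULL << 30)         /* 8 GiB per file     */

int main(int argc, char **argv)
{
        static char buf[BUF_SIZE];
        unsigned long long written = 0;
        struct timeval t0, t1;
        double secs;
        int fd;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <output-file>\n", argv[0]);
                return 1;
        }
        memset(buf, 0xab, sizeof(buf));

        fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        gettimeofday(&t0, NULL);
        while (written < FILE_SIZE) {
                ssize_t n = write(fd, buf, BUF_SIZE);
                if (n < 0) {
                        perror("write");
                        return 1;
                }
                written += n;
        }
        fsync(fd);                     /* include the flush in the timing */
        gettimeofday(&t1, NULL);
        close(fd);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("%llu bytes in %.1f s = %.1f MB/s\n",
               written, secs, written / secs / 1e6);
        return 0;
}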
[[HTML alternate version deleted]] From owner-xfs@oss.sgi.com Mon Jan 8 15:02:21 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 15:02:25 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l08N2Iqw018556 for ; Mon, 8 Jan 2007 15:02:20 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA21141; Tue, 9 Jan 2007 10:01:21 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l08N1K7Y85174448; Tue, 9 Jan 2007 10:01:20 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l08N1J2w87293173; Tue, 9 Jan 2007 10:01:19 +1100 (AEDT) Date: Tue, 9 Jan 2007 10:01:19 +1100 From: David Chinner To: Christoph Hellwig Cc: David Chinner , xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: make growing by >2TB work Message-ID: <20070108230119.GA33919298@melbourne.sgi.com> References: <20070108044414.GC44411608@melbourne.sgi.com> <20070108091218.GB17121@infradead.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070108091218.GB17121@infradead.org> User-Agent: Mutt/1.4.2.1i X-archive-position: 10211 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1095 Lines: 42 On Mon, Jan 08, 2007 at 09:12:18AM +0000, Christoph Hellwig wrote: > On Mon, Jan 08, 2007 at 03:44:14PM +1100, David Chinner wrote: > > Growing a filesystem by > 2TB currently causes an overflow > > in the transaction subsystem. Make transaction deltas and associated > > elements explicitly 64 bit types so that we don't get overflows. > > > > Comments? > > Looks good. > > > > > - AIL_LOCKINIT(&mp->m_ail_lock, "xfs_ail"); > > spinlock_init(&mp->m_sb_lock, "xfs_sb"); > > mutex_init(&mp->m_ilock); > > initnsema(&mp->m_growlock, 1, "xfs_grow"); > > - /* > > - * Initialize the AIL. > > - */ > > - xfs_trans_ail_init(mp); > > This seems unrelated (?) Ahhh - leakage from a recent patch series reordering.... > > -xfs_mod_incore_sb(xfs_mount_t *mp, xfs_sb_field_t field, int delta, int rsvd) > > +xfs_mod_incore_sb(xfs_mount_t *mp, xfs_sb_field_t field, int64_t delta, int rsvd) > > This seems to be over 80 chars linelength with your patch, just break > the line. Will do. Thanks, Christoph. Cheers, Dave. 
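A side note on the overflow itself: a grow of a few terabytes, once it is expressed as a block count, no longer fits in a signed 32-bit delta when the block size is small, which is why the deltas become int64_t. The numbers below (512-byte blocks, a 3TB grow) are only illustrative; the real fields and units are the ones in the patch under review.

/* overflow.c - why a 32-bit transaction delta breaks for large grows. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t grow_bytes = 3ULL << 40;            /* grow the fs by 3 TB       */
        uint64_t block_size = 512;                   /* smallest XFS block size   */

        int64_t delta64 = grow_bytes / block_size;   /* 64-bit delta: fits        */
        int32_t delta32 = (int32_t)delta64;          /* what an 'int' delta keeps */

        printf("blocks to add : %lld\n", (long long)delta64);
        printf("as 32-bit int : %d (overflowed)\n", delta32);
        return 0;
}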
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jan 8 15:05:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 15:05:33 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l08N5Pqw019389 for ; Mon, 8 Jan 2007 15:05:28 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA21325; Tue, 9 Jan 2007 10:04:31 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l08N4U7Y87188352; Tue, 9 Jan 2007 10:04:30 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l08N4TLh84785003; Tue, 9 Jan 2007 10:04:29 +1100 (AEDT) Date: Tue, 9 Jan 2007 10:04:29 +1100 From: David Chinner To: Christoph Hellwig Cc: David Chinner , xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: fix mapping invalidation callouts Message-ID: <20070108230429.GB33919298@melbourne.sgi.com> References: <20070108040309.GX33919298@melbourne.sgi.com> <20070108090916.GA17121@infradead.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070108090916.GA17121@infradead.org> User-Agent: Mutt/1.4.2.1i X-archive-position: 10212 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1669 Lines: 41 On Mon, Jan 08, 2007 at 09:09:16AM +0000, Christoph Hellwig wrote: > On Mon, Jan 08, 2007 at 03:03:09PM +1100, David Chinner wrote: > > With the recent cancel_dirty_page() changes, a warning was > > added if we cancel a dirty page that is still mapped into > > the page tables. > > This happens in XFS from fs_tosspages() and fs_flushinval_pages() > > because they call truncate_inode_pages(). > > > > truncate_inode_pages() does not invalidate existing page mappings; > > it is expected taht this is called only when truncating the file > > or destroying the inode and on both these cases there can be > > no mapped ptes. However, we call this when doing direct I/O writes > > to remove pages from the page cache. As a result, we can rip > > a page from the page cache that still has mappings attached. > > > > The correct fix is to use invalidate_inode_pages2_range() instead > > of truncate_inode_pages(). They essentially do the same thing, but > > the former also removes any pte mappings before removing the page > > from the page cache. > > > > Comments? > > Generally looks good. But I feel a little cautios about changes in this > area, so we should throw all possible test loads at this before commiting > it. Yup - fsx is one test that I really want to hit with this. The guy that reported the initial problem has replied saying this patch fixes the warnings (good start ;), but I'll hold off pushing it for a little while to test it more. This (or something like it) will need to go into 2.6.20 before it is released so we've got limited time to test this one out.... Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jan 8 15:48:39 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 15:48:43 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l08NmZqw026489 for ; Mon, 8 Jan 2007 15:48:37 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA22609; Tue, 9 Jan 2007 10:47:34 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l08NlU7Y87180398; Tue, 9 Jan 2007 10:47:31 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l08NlSkV86353415; Tue, 9 Jan 2007 10:47:28 +1100 (AEDT) Date: Tue, 9 Jan 2007 10:47:28 +1100 From: David Chinner To: Eric Sandeen Cc: linux-kernel Mailing List , xfs@oss.sgi.com, akpm@osdl.org Subject: bd_mount_mutex -> bd_mount_sem (was Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock()) Message-ID: <20070108234728.GC33919298@melbourne.sgi.com> References: <20070104001420.GA32440@m.safari.iki.fi> <20070107213734.GS44411608@melbourne.sgi.com> <20070108110323.GA3803@m.safari.iki.fi> <45A27416.8030600@sandeen.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45A27416.8030600@sandeen.net> User-Agent: Mutt/1.4.2.1i X-archive-position: 10213 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 519 Lines: 25 On Mon, Jan 08, 2007 at 10:40:54AM -0600, Eric Sandeen wrote: > Sami Farin wrote: > > On Mon, Jan 08, 2007 at 08:37:34 +1100, David Chinner wrote: > > ... > >>> fstab was there just fine after -u. > >> Oh, that still hasn't been fixed? > > > > Looked like it =) > > Hm, it was proposed upstream a while ago: > > http://lkml.org/lkml/2006/9/27/137 > > I guess it got lost? Seems like it. Andrew, did this ever get queued for merge? Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jan 8 15:58:21 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 15:58:26 -0800 (PST) Received: from smtp.osdl.org (smtp.osdl.org [65.172.181.24]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l08NwJqw028326 for ; Mon, 8 Jan 2007 15:58:20 -0800 Received: from shell0.pdx.osdl.net (fw.osdl.org [65.172.181.6]) by smtp.osdl.org (8.12.8/8.12.8) with ESMTP id l08NugWi016413 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Mon, 8 Jan 2007 15:56:43 -0800 Received: from akpm.corp.google.com (shell0.pdx.osdl.net [10.9.0.31]) by shell0.pdx.osdl.net (8.13.1/8.11.6) with SMTP id l08Nub4n023266; Mon, 8 Jan 2007 15:56:39 -0800 Date: Mon, 8 Jan 2007 15:56:36 -0800 From: Andrew Morton To: David Chinner Cc: linux-kernel Mailing List , xfs@oss.sgi.com, Ingo Molnar Subject: Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock() Message-Id: <20070108155636.a68dce33.akpm@osdl.org> In-Reply-To: <20070107213734.GS44411608@melbourne.sgi.com> References: <20070104001420.GA32440@m.safari.iki.fi> <20070107213734.GS44411608@melbourne.sgi.com> X-Mailer: Sylpheed version 2.2.7 (GTK+ 2.8.6; i686-pc-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.167 $ X-Scanned-By: MIMEDefang 2.36 X-archive-position: 10214 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@osdl.org Precedence: bulk X-list: xfs Content-Length: 2303 Lines: 46 On Mon, 8 Jan 2007 08:37:34 +1100 David Chinner wrote: > On Thu, Jan 04, 2007 at 02:14:21AM +0200, Sami Farin wrote: > > just a simple test I did... > > xfs_freeze -f /mnt/newtest > > cp /etc/fstab /mnt/newtest > > xfs_freeze -u /mnt/newtest > > > > 2007-01-04 01:44:30.341979500 <4>BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock() > > 2007-01-04 01:44:30.385771500 <4> [] dump_trace+0x215/0x21a > > 2007-01-04 01:44:30.385774500 <4> [] show_trace_log_lvl+0x1a/0x30 > > 2007-01-04 01:44:30.385775500 <4> [] show_trace+0x12/0x14 > > 2007-01-04 01:44:30.385777500 <4> [] dump_stack+0x19/0x1b > > 2007-01-04 01:44:30.385778500 <4> [] debug_mutex_unlock+0x69/0x120 > > 2007-01-04 01:44:30.385779500 <4> [] __mutex_unlock_slowpath+0x44/0xf0 > > 2007-01-04 01:44:30.385780500 <4> [] mutex_unlock+0x8/0xa > > 2007-01-04 01:44:30.385782500 <4> [] thaw_bdev+0x57/0x6e > > 2007-01-04 01:44:30.385791500 <4> [] xfs_ioctl+0x7ce/0x7d3 > > 2007-01-04 01:44:30.385793500 <4> [] xfs_file_ioctl+0x33/0x54 > > 2007-01-04 01:44:30.385794500 <4> [] do_ioctl+0x76/0x85 > > 2007-01-04 01:44:30.385795500 <4> [] vfs_ioctl+0x59/0x1aa > > 2007-01-04 01:44:30.385796500 <4> [] sys_ioctl+0x67/0x77 > > 2007-01-04 01:44:30.385797500 <4> [] syscall_call+0x7/0xb > > 2007-01-04 01:44:30.385799500 <4> [<001be410>] 0x1be410 > > 2007-01-04 01:44:30.385800500 <4> ======================= > > > > fstab was there just fine after -u. > > Oh, that still hasn't been fixed? Generic bug, not XFS - the global > semaphore->mutex cleanup converted the bd_mount_sem to a mutex, and > mutexes complain loudly when a the process unlocking the mutex is > not the process that locked it. > > Basically, the generic code is broken - the bd_mount_mutex needs to > be reverted back to a semaphore because it is locked and unlocked > by different processes. The following patch does this.... > > ... 
> > Revert bd_mount_mutex back to a semaphore so that xfs_freeze -f /mnt/newtest; > xfs_freeze -u /mnt/newtest works safely and doesn't produce lockdep warnings. Sad. The alternative would be to implement mutex_unlock_dont_warn_if_a_different_task_did_it(). Ingo? Possible? From owner-xfs@oss.sgi.com Mon Jan 8 16:20:13 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 16:20:18 -0800 (PST) Received: from smtp.osdl.org (smtp.osdl.org [65.172.181.24]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l090KBqw003980 for ; Mon, 8 Jan 2007 16:20:12 -0800 Received: from shell0.pdx.osdl.net (fw.osdl.org [65.172.181.6]) by smtp.osdl.org (8.12.8/8.12.8) with ESMTP id l090JHWi017068 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Mon, 8 Jan 2007 16:19:18 -0800 Received: from akpm.corp.google.com (shell0.pdx.osdl.net [10.9.0.31]) by shell0.pdx.osdl.net (8.13.1/8.11.6) with SMTP id l090JHZR023726; Mon, 8 Jan 2007 16:19:17 -0800 Date: Mon, 8 Jan 2007 16:19:17 -0800 From: Andrew Morton To: David Chinner Cc: Eric Sandeen , linux-kernel Mailing List , xfs@oss.sgi.com Subject: Re: bd_mount_mutex -> bd_mount_sem (was Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock()) Message-Id: <20070108161917.73a4c2c6.akpm@osdl.org> In-Reply-To: <20070108234728.GC33919298@melbourne.sgi.com> References: <20070104001420.GA32440@m.safari.iki.fi> <20070107213734.GS44411608@melbourne.sgi.com> <20070108110323.GA3803@m.safari.iki.fi> <45A27416.8030600@sandeen.net> <20070108234728.GC33919298@melbourne.sgi.com> X-Mailer: Sylpheed version 2.2.7 (GTK+ 2.8.6; i686-pc-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.167 $ X-Scanned-By: MIMEDefang 2.36 X-archive-position: 10215 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@osdl.org Precedence: bulk X-list: xfs Content-Length: 674 Lines: 23 On Tue, 9 Jan 2007 10:47:28 +1100 David Chinner wrote: > On Mon, Jan 08, 2007 at 10:40:54AM -0600, Eric Sandeen wrote: > > Sami Farin wrote: > > > On Mon, Jan 08, 2007 at 08:37:34 +1100, David Chinner wrote: > > > ... > > >>> fstab was there just fine after -u. > > >> Oh, that still hasn't been fixed? > > > > > > Looked like it =) > > > > Hm, it was proposed upstream a while ago: > > > > http://lkml.org/lkml/2006/9/27/137 > > > > I guess it got lost? > > Seems like it. Andrew, did this ever get queued for merge? Seems not. I think people were hoping that various nasties in there would go away. We return to userspace with a kernel lock held?? 
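The rule being tripped over here is visible from userspace as well: a mutex (POSIX or kernel) is owned by the task that locked it and may only be unlocked by that task, while a semaphore carries no owner and can be released by anyone - which is what freeze/thaw needs, since the unlock may come from a different process. A small standalone illustration using pthreads (not the kernel primitives, but the same ownership rule):

/* lock_owner.c - unlock-from-another-task with a mutex vs a semaphore.
 * Build: cc -o lock_owner lock_owner.c -lpthread
 */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t mtx;
static sem_t sem;

static void *other_task(void *arg)
{
        int err;

        (void)arg;
        err = pthread_mutex_unlock(&mtx);          /* not the owner: EPERM  */
        printf("mutex unlock from other thread: %s\n",
               err ? strerror(err) : "ok");

        sem_post(&sem);                            /* no owner: always fine */
        printf("semaphore up from other thread: ok\n");
        return NULL;
}

int main(void)
{
        pthread_mutexattr_t attr;
        pthread_t t;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(&mtx, &attr);
        sem_init(&sem, 0, 0);

        pthread_mutex_lock(&mtx);                  /* "freeze": take the lock...  */
        pthread_create(&t, NULL, other_task, NULL);
        pthread_join(t, NULL);                     /* ..."thaw" happens elsewhere */

        sem_wait(&sem);
        return 0;
}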
From owner-xfs@oss.sgi.com Mon Jan 8 17:23:14 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 17:23:18 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l091NBqw013912 for ; Mon, 8 Jan 2007 17:23:13 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA25466; Tue, 9 Jan 2007 12:22:15 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l091ME7Y86576869; Tue, 9 Jan 2007 12:22:14 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l091MCQV86870835; Tue, 9 Jan 2007 12:22:12 +1100 (AEDT) Date: Tue, 9 Jan 2007 12:22:12 +1100 From: David Chinner To: "Mr. Berkley Shands" Cc: Eric Sandeen , Dave Lloyd , linux-xfs@oss.sgi.com Subject: Re: XFS and 2.6.18 -> 2.6.20-rc3 Message-ID: <20070109012212.GG44411608@melbourne.sgi.com> References: <45A27BC7.2020709@exegy.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45A27BC7.2020709@exegy.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10216 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 2079 Lines: 63 On Mon, Jan 08, 2007 at 11:13:43AM -0600, Mr. Berkley Shands wrote: > My testbench is a 4 core Opteron (dual 275's) into > two LSI8408E SAS controllers, into 16 Seagate 7200.10 320GB satas. > Redhat ES4.4 (Centos 4.4). A slightly newer parted is needed > than the contemporary of Moses that is shipped with the O/S. > > I have a standard burn in script that takes the 4 4-drive raid0's > and puts a GPT label on them, aligns the partitions to stripe > boundary's. It then proceeds to write 8GB files concurrently > onto all 4 raid drives. How many files are being written at the same time to each filesystem? buffered or direct I/O? I/O size? how much memory in the machine? What size I/Os are actually hitting the disks? > Under 2.6.18.1 the write speeds start at 265MB/Sec and decrease > mostly monotonically down to ~160MB/Sec, indicating that > the files start on the outside (fastest tracks) and work in. So you are filling the entire disk with this test? > All 4 raids are within 7-8MB/Sec of each other (usually they > are identical in speed). > > By the time of 2.6.20-rc3, the same testbench shows > a 10% across the board decrease in throughput for writes. > Reads are unaffected. Reads being unaffected indicates the files are not being fragmented badly. > But now the allocation order for virgin file systems are random, How did you determine this? > usually starting at the slow 140MB/Sec, then bouncing up to 220MB/Sec, > then around and around. No two raids get the same write speeds at the > same time. > > Dave Lloyd (our in-house Idea Guy) looked at the allocation groups... > Non-sequential, random... > > What data would you like to see? First thing to do is run a set of write tests to the _raw_ devices, not to the filesystem so we can rule out a driver/hardware problem. Can you do something as simple as concurrent writes to each raid lun to see if .18 and .20 perform the same? > The run logs from 2.6.18.1 and 2.6.20-rc3? > Want the scripts? Yes please. Cheers, Dave. 
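One way to run the raw-device comparison suggested above is a small O_DIRECT writer pointed at each raid lun, timed externally and run once per lun on both kernels. This is only a sketch and it overwrites the target device, so the device name, transfer size and run length are deliberately placeholders; O_DIRECT also needs an aligned buffer, hence posix_memalign():

/* rawwrite.c - sequential O_DIRECT writes straight to a block device.
 * WARNING: destroys data on the target; use scratch luns only.
 * Build: cc -O2 -o rawwrite rawwrite.c   Run: time ./rawwrite /dev/sdX
 */
#define _GNU_SOURCE                    /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define IO_SIZE  (1 << 20)             /* 1 MiB per write, sector aligned */
#define IO_COUNT 4096                  /* 4 GiB total                     */

int main(int argc, char **argv)
{
        void *buf;
        int fd, i;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <block-device>\n", argv[0]);
                return 1;
        }
        if (posix_memalign(&buf, 4096, IO_SIZE)) {   /* O_DIRECT alignment */
                perror("posix_memalign");
                return 1;
        }
        memset(buf, 0x5a, IO_SIZE);

        fd = open(argv[1], O_WRONLY | O_DIRECT);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        for (i = 0; i < IO_COUNT; i++) {
                if (write(fd, buf, IO_SIZE) != IO_SIZE) {
                        perror("write");
                        return 1;
                }
        }
        close(fd);
        return 0;
}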
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jan 8 19:13:35 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 19:13:39 -0800 (PST) Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l093DYqw030656 for ; Mon, 8 Jan 2007 19:13:35 -0800 Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 399E518011EB7; Mon, 8 Jan 2007 21:12:41 -0600 (CST) Message-ID: <45A30828.6000508@sandeen.net> Date: Mon, 08 Jan 2007 21:12:40 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (Macintosh/20061207) MIME-Version: 1.0 To: Andrew Morton CC: David Chinner , linux-kernel Mailing List , xfs@oss.sgi.com Subject: Re: bd_mount_mutex -> bd_mount_sem (was Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock()) References: <20070104001420.GA32440@m.safari.iki.fi> <20070107213734.GS44411608@melbourne.sgi.com> <20070108110323.GA3803@m.safari.iki.fi> <45A27416.8030600@sandeen.net> <20070108234728.GC33919298@melbourne.sgi.com> <20070108161917.73a4c2c6.akpm@osdl.org> In-Reply-To: <20070108161917.73a4c2c6.akpm@osdl.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10217 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 838 Lines: 26 Andrew Morton wrote: > On Tue, 9 Jan 2007 10:47:28 +1100 > David Chinner wrote: > >> On Mon, Jan 08, 2007 at 10:40:54AM -0600, Eric Sandeen wrote: >>> Sami Farin wrote: >>>> On Mon, Jan 08, 2007 at 08:37:34 +1100, David Chinner wrote: >>>> ... >>>>>> fstab was there just fine after -u. >>>>> Oh, that still hasn't been fixed? >>>> Looked like it =) >>> Hm, it was proposed upstream a while ago: >>> >>> http://lkml.org/lkml/2006/9/27/137 >>> >>> I guess it got lost? >> Seems like it. Andrew, did this ever get queued for merge? > > Seems not. I think people were hoping that various nasties in there > would go away. We return to userspace with a kernel lock held?? Is a semaphore any worse than the current mutex in this respect? At least unlocking from another thread doesn't violate semaphore rules. 
:) -Eric From owner-xfs@oss.sgi.com Mon Jan 8 19:18:56 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 19:19:00 -0800 (PST) Received: from smtp.osdl.org (smtp.osdl.org [65.172.181.24]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l093Isqw032012 for ; Mon, 8 Jan 2007 19:18:55 -0800 Received: from shell0.pdx.osdl.net (fw.osdl.org [65.172.181.6]) by smtp.osdl.org (8.12.8/8.12.8) with ESMTP id l093I0Wi021812 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Mon, 8 Jan 2007 19:18:01 -0800 Received: from box (shell0.pdx.osdl.net [10.9.0.31]) by shell0.pdx.osdl.net (8.13.1/8.11.6) with SMTP id l093I0g2027122; Mon, 8 Jan 2007 19:18:00 -0800 Date: Mon, 8 Jan 2007 19:18:00 -0800 From: Andrew Morton To: Eric Sandeen Cc: David Chinner , linux-kernel Mailing List , xfs@oss.sgi.com Subject: Re: bd_mount_mutex -> bd_mount_sem (was Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock()) Message-Id: <20070108191800.9d83ff5e.akpm@osdl.org> In-Reply-To: <45A30828.6000508@sandeen.net> References: <20070104001420.GA32440@m.safari.iki.fi> <20070107213734.GS44411608@melbourne.sgi.com> <20070108110323.GA3803@m.safari.iki.fi> <45A27416.8030600@sandeen.net> <20070108234728.GC33919298@melbourne.sgi.com> <20070108161917.73a4c2c6.akpm@osdl.org> <45A30828.6000508@sandeen.net> X-Mailer: Sylpheed version 2.2.7 (GTK+ 2.8.17; x86_64-unknown-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.167 $ X-Scanned-By: MIMEDefang 2.36 X-archive-position: 10218 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@osdl.org Precedence: bulk X-list: xfs Content-Length: 1066 Lines: 30 On Mon, 08 Jan 2007 21:12:40 -0600 Eric Sandeen wrote: > Andrew Morton wrote: > > On Tue, 9 Jan 2007 10:47:28 +1100 > > David Chinner wrote: > > > >> On Mon, Jan 08, 2007 at 10:40:54AM -0600, Eric Sandeen wrote: > >>> Sami Farin wrote: > >>>> On Mon, Jan 08, 2007 at 08:37:34 +1100, David Chinner wrote: > >>>> ... > >>>>>> fstab was there just fine after -u. > >>>>> Oh, that still hasn't been fixed? > >>>> Looked like it =) > >>> Hm, it was proposed upstream a while ago: > >>> > >>> http://lkml.org/lkml/2006/9/27/137 > >>> > >>> I guess it got lost? > >> Seems like it. Andrew, did this ever get queued for merge? > > > > Seems not. I think people were hoping that various nasties in there > > would go away. We return to userspace with a kernel lock held?? > > Is a semaphore any worse than the current mutex in this respect? At > least unlocking from another thread doesn't violate semaphore rules. :) I assume that if we weren't returning to userspace with a lock held, this mutex problem would simply go away. 
From owner-xfs@oss.sgi.com Mon Jan 8 19:39:04 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 19:39:07 -0800 (PST) Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l093d2qw002793 for ; Mon, 8 Jan 2007 19:39:03 -0800 Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id AC17318011EB7; Mon, 8 Jan 2007 21:38:06 -0600 (CST) Message-ID: <45A30E1D.4030401@sandeen.net> Date: Mon, 08 Jan 2007 21:38:05 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (Macintosh/20061207) MIME-Version: 1.0 To: Andrew Morton CC: David Chinner , linux-kernel Mailing List , xfs@oss.sgi.com Subject: Re: bd_mount_mutex -> bd_mount_sem (was Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock()) References: <20070104001420.GA32440@m.safari.iki.fi> <20070107213734.GS44411608@melbourne.sgi.com> <20070108110323.GA3803@m.safari.iki.fi> <45A27416.8030600@sandeen.net> <20070108234728.GC33919298@melbourne.sgi.com> <20070108161917.73a4c2c6.akpm@osdl.org> <45A30828.6000508@sandeen.net> <20070108191800.9d83ff5e.akpm@osdl.org> In-Reply-To: <20070108191800.9d83ff5e.akpm@osdl.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10219 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 1354 Lines: 37 Andrew Morton wrote: > On Mon, 08 Jan 2007 21:12:40 -0600 > Eric Sandeen wrote: > >> Andrew Morton wrote: >>> On Tue, 9 Jan 2007 10:47:28 +1100 >>> David Chinner wrote: >>> >>>> On Mon, Jan 08, 2007 at 10:40:54AM -0600, Eric Sandeen wrote: >>>>> Sami Farin wrote: >>>>>> On Mon, Jan 08, 2007 at 08:37:34 +1100, David Chinner wrote: >>>>>> ... >>>>>>>> fstab was there just fine after -u. >>>>>>> Oh, that still hasn't been fixed? >>>>>> Looked like it =) >>>>> Hm, it was proposed upstream a while ago: >>>>> >>>>> http://lkml.org/lkml/2006/9/27/137 >>>>> >>>>> I guess it got lost? >>>> Seems like it. Andrew, did this ever get queued for merge? >>> Seems not. I think people were hoping that various nasties in there >>> would go away. We return to userspace with a kernel lock held?? >> Is a semaphore any worse than the current mutex in this respect? At >> least unlocking from another thread doesn't violate semaphore rules. :) > > I assume that if we weren't returning to userspace with a lock held, this > mutex problem would simply go away. > Well nobody's asserting that the filesystem must always be locked & unlocked by the same thread, are they? That'd be a strange rule to enforce upon the userspace doing the filesystem management wouldn't it? Or am I thinking about this wrong... 
-Eric From owner-xfs@oss.sgi.com Mon Jan 8 19:52:24 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 19:52:30 -0800 (PST) Received: from smtp.osdl.org (smtp.osdl.org [65.172.181.24]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l093qMqw005202 for ; Mon, 8 Jan 2007 19:52:23 -0800 Received: from shell0.pdx.osdl.net (fw.osdl.org [65.172.181.6]) by smtp.osdl.org (8.12.8/8.12.8) with ESMTP id l093pRWi022547 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Mon, 8 Jan 2007 19:51:27 -0800 Received: from box (shell0.pdx.osdl.net [10.9.0.31]) by shell0.pdx.osdl.net (8.13.1/8.11.6) with SMTP id l093pR2w027728; Mon, 8 Jan 2007 19:51:27 -0800 Date: Mon, 8 Jan 2007 19:51:27 -0800 From: Andrew Morton To: Eric Sandeen Cc: David Chinner , linux-kernel Mailing List , xfs@oss.sgi.com Subject: Re: bd_mount_mutex -> bd_mount_sem (was Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock()) Message-Id: <20070108195127.67fe86b8.akpm@osdl.org> In-Reply-To: <45A30E1D.4030401@sandeen.net> References: <20070104001420.GA32440@m.safari.iki.fi> <20070107213734.GS44411608@melbourne.sgi.com> <20070108110323.GA3803@m.safari.iki.fi> <45A27416.8030600@sandeen.net> <20070108234728.GC33919298@melbourne.sgi.com> <20070108161917.73a4c2c6.akpm@osdl.org> <45A30828.6000508@sandeen.net> <20070108191800.9d83ff5e.akpm@osdl.org> <45A30E1D.4030401@sandeen.net> X-Mailer: Sylpheed version 2.2.7 (GTK+ 2.8.17; x86_64-unknown-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.167 $ X-Scanned-By: MIMEDefang 2.36 X-archive-position: 10220 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@osdl.org Precedence: bulk X-list: xfs Content-Length: 1841 Lines: 48 On Mon, 08 Jan 2007 21:38:05 -0600 Eric Sandeen wrote: > Andrew Morton wrote: > > On Mon, 08 Jan 2007 21:12:40 -0600 > > Eric Sandeen wrote: > > > >> Andrew Morton wrote: > >>> On Tue, 9 Jan 2007 10:47:28 +1100 > >>> David Chinner wrote: > >>> > >>>> On Mon, Jan 08, 2007 at 10:40:54AM -0600, Eric Sandeen wrote: > >>>>> Sami Farin wrote: > >>>>>> On Mon, Jan 08, 2007 at 08:37:34 +1100, David Chinner wrote: > >>>>>> ... > >>>>>>>> fstab was there just fine after -u. > >>>>>>> Oh, that still hasn't been fixed? > >>>>>> Looked like it =) > >>>>> Hm, it was proposed upstream a while ago: > >>>>> > >>>>> http://lkml.org/lkml/2006/9/27/137 > >>>>> > >>>>> I guess it got lost? > >>>> Seems like it. Andrew, did this ever get queued for merge? > >>> Seems not. I think people were hoping that various nasties in there > >>> would go away. We return to userspace with a kernel lock held?? > >> Is a semaphore any worse than the current mutex in this respect? At > >> least unlocking from another thread doesn't violate semaphore rules. :) > > > > I assume that if we weren't returning to userspace with a lock held, this > > mutex problem would simply go away. > > > > Well nobody's asserting that the filesystem must always be locked & > unlocked by the same thread, are they? That'd be a strange rule to > enforce upon the userspace doing the filesystem management wouldn't it? > Or am I thinking about this wrong... I don't even know what code we're talking about here... I'm under the impression that XFS will return to userspace with a filesystem lock held, under the expectation (ie: requirement) that userspace will later come in and release that lock. 
If that's not true, then what _is_ happening in there? If that _is_ true then, well, that sucks a bit. From owner-xfs@oss.sgi.com Mon Jan 8 20:14:28 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 20:14:34 -0800 (PST) Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l094EQqw008710 for ; Mon, 8 Jan 2007 20:14:28 -0800 Received: from edge (unknown [124.178.235.100]) by postoffice.aconex.com (Postfix) with ESMTP id A1763AAC240; Tue, 9 Jan 2007 15:03:35 +1100 (EST) Subject: Re: bd_mount_mutex -> bd_mount_sem (was Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock()) From: Nathan Scott Reply-To: nscott@aconex.com To: Andrew Morton Cc: Eric Sandeen , David Chinner , linux-kernel Mailing List , xfs@oss.sgi.com In-Reply-To: <20070108195127.67fe86b8.akpm@osdl.org> References: <20070104001420.GA32440@m.safari.iki.fi> <20070107213734.GS44411608@melbourne.sgi.com> <20070108110323.GA3803@m.safari.iki.fi> <45A27416.8030600@sandeen.net> <20070108234728.GC33919298@melbourne.sgi.com> <20070108161917.73a4c2c6.akpm@osdl.org> <45A30828.6000508@sandeen.net> <20070108191800.9d83ff5e.akpm@osdl.org> <45A30E1D.4030401@sandeen.net> <20070108195127.67fe86b8.akpm@osdl.org> Content-Type: text/plain Organization: Aconex Date: Tue, 09 Jan 2007 15:17:03 +1100 Message-Id: <1168316223.32113.83.camel@edge> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-archive-position: 10221 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs Content-Length: 3178 Lines: 77 On Mon, 2007-01-08 at 19:51 -0800, Andrew Morton wrote: > On Mon, 08 Jan 2007 21:38:05 -0600 > Eric Sandeen wrote: > > > Andrew Morton wrote: > > > On Mon, 08 Jan 2007 21:12:40 -0600 > > > Eric Sandeen wrote: > > > > > >> Andrew Morton wrote: > > >>> On Tue, 9 Jan 2007 10:47:28 +1100 > > >>> David Chinner wrote: > > >>> > > >>>> On Mon, Jan 08, 2007 at 10:40:54AM -0600, Eric Sandeen wrote: > > >>>>> Sami Farin wrote: > > >>>>>> On Mon, Jan 08, 2007 at 08:37:34 +1100, David Chinner wrote: > > >>>>>> ... > > >>>>>>>> fstab was there just fine after -u. > > >>>>>>> Oh, that still hasn't been fixed? > > >>>>>> Looked like it =) > > >>>>> Hm, it was proposed upstream a while ago: > > >>>>> > > >>>>> http://lkml.org/lkml/2006/9/27/137 > > >>>>> > > >>>>> I guess it got lost? > > >>>> Seems like it. Andrew, did this ever get queued for merge? > > >>> Seems not. I think people were hoping that various nasties in there > > >>> would go away. We return to userspace with a kernel lock held?? > > >> Is a semaphore any worse than the current mutex in this respect? At > > >> least unlocking from another thread doesn't violate semaphore rules. :) > > > > > > I assume that if we weren't returning to userspace with a lock held, this > > > mutex problem would simply go away. > > > > > > > Well nobody's asserting that the filesystem must always be locked & > > unlocked by the same thread, are they? That'd be a strange rule to > > enforce upon the userspace doing the filesystem management wouldn't it? > > Or am I thinking about this wrong... > > I don't even know what code we're talking about here... > > I'm under the impression that XFS will return to userspace with a > filesystem lock held, under the expectation (ie: requirement) that > userspace will later come in and release that lock. 
Its not really XFS, its more the generic device freezing code (freeze_bdev) which is called by both XFS and the device mapper suspend interface (both of which are exposed to userspace via ioctls). These interfaces are used when doing block level snapshots which are "filesystem coherent". > If that's not true, then what _is_ happening in there? This particular case was a device mapper stack trace, hence the confusion, I think. Both XFS and DM are making the same generic block layer call here though (freeze_bdev). > If that _is_ true then, well, that sucks a bit. Indeed, its a fairly ordinary interface, but thats too late to go fix now I guess (since its exposed to userspace already). A remount flag along the lines of readonly may have been a better way to go... perhaps. *shrug*... not clear - I guess the problem the original authors had there (whoever they were, I dunno), was that the block layer wants to call up to the filesystem to quiesce itself, and at some later user-defined point to unquiesce itself... which is a bit of a layering violation. >From a quick look, there seems to be a bug in the original patch - it is passing -EAGAIN back without wrapping it up in ERR_PTR(), which it needs to since freeze_bdev returns a struct super_block pointer. cheers. -- Nathan From owner-xfs@oss.sgi.com Mon Jan 8 20:50:13 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 20:50:19 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l094oAqw019034 for ; Mon, 8 Jan 2007 20:50:12 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA00888; Tue, 9 Jan 2007 15:49:14 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l094nB7Y87330283; Tue, 9 Jan 2007 15:49:11 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l094n74D83809180; Tue, 9 Jan 2007 15:49:07 +1100 (AEDT) Date: Tue, 9 Jan 2007 15:49:07 +1100 From: David Chinner To: Nathan Scott Cc: Andrew Morton , Eric Sandeen , David Chinner , linux-kernel Mailing List , xfs@oss.sgi.com Subject: Re: bd_mount_mutex -> bd_mount_sem (was Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock()) Message-ID: <20070109044907.GH33919298@melbourne.sgi.com> References: <20070107213734.GS44411608@melbourne.sgi.com> <20070108110323.GA3803@m.safari.iki.fi> <45A27416.8030600@sandeen.net> <20070108234728.GC33919298@melbourne.sgi.com> <20070108161917.73a4c2c6.akpm@osdl.org> <45A30828.6000508@sandeen.net> <20070108191800.9d83ff5e.akpm@osdl.org> <45A30E1D.4030401@sandeen.net> <20070108195127.67fe86b8.akpm@osdl.org> <1168316223.32113.83.camel@edge> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1168316223.32113.83.camel@edge> User-Agent: Mutt/1.4.2.1i X-archive-position: 10222 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 4027 Lines: 99 On Tue, Jan 09, 2007 at 03:17:03PM +1100, Nathan Scott wrote: > On Mon, 2007-01-08 at 19:51 -0800, Andrew Morton wrote: > > On Mon, 08 Jan 2007 21:38:05 -0600 > > Eric Sandeen wrote: > > > > > Andrew Morton wrote: > > > > On Mon, 08 Jan 2007 21:12:40 -0600 > > > > Eric Sandeen 
wrote: > > > > > > > >> Andrew Morton wrote: > > > >>> On Tue, 9 Jan 2007 10:47:28 +1100 > > > >>> David Chinner wrote: > > > >>> > > > >>>> On Mon, Jan 08, 2007 at 10:40:54AM -0600, Eric Sandeen wrote: > > > >>>>> Sami Farin wrote: > > > >>>>>> On Mon, Jan 08, 2007 at 08:37:34 +1100, David Chinner wrote: > > > >>>>>> ... > > > >>>>>>>> fstab was there just fine after -u. > > > >>>>>>> Oh, that still hasn't been fixed? > > > >>>>>> Looked like it =) > > > >>>>> Hm, it was proposed upstream a while ago: > > > >>>>> > > > >>>>> http://lkml.org/lkml/2006/9/27/137 > > > >>>>> > > > >>>>> I guess it got lost? > > > >>>> Seems like it. Andrew, did this ever get queued for merge? > > > >>> Seems not. I think people were hoping that various nasties in there > > > >>> would go away. We return to userspace with a kernel lock held?? > > > >> Is a semaphore any worse than the current mutex in this respect? At > > > >> least unlocking from another thread doesn't violate semaphore rules. :) > > > > > > > > I assume that if we weren't returning to userspace with a lock held, this > > > > mutex problem would simply go away. > > > > > > > > > > Well nobody's asserting that the filesystem must always be locked & > > > unlocked by the same thread, are they? That'd be a strange rule to > > > enforce upon the userspace doing the filesystem management wouldn't it? > > > Or am I thinking about this wrong... > > > > I don't even know what code we're talking about here... > > > > I'm under the impression that XFS will return to userspace with a > > filesystem lock held, under the expectation (ie: requirement) that > > userspace will later come in and release that lock. > > Its not really XFS, its more the generic device freezing code > (freeze_bdev) which is called by both XFS and the device mapper > suspend interface (both of which are exposed to userspace via > ioctls). These interfaces are used when doing block level > snapshots which are "filesystem coherent". > > > If that's not true, then what _is_ happening in there? > > This particular case was a device mapper stack trace, hence the > confusion, I think. Both XFS and DM are making the same generic > block layer call here though (freeze_bdev). Yup. it's the freeze_bdev/thaw_bdev use of the bd_mount_mutex() that's the problem. I fail to see _why_ we need to hold a lock across the freeze/thaw - the only reason i can think of is to hold out new calls to sget() (via get_sb_bdev()) while the filesystem is frozen though I'm not sure why you'd need to do that. Can someone explain why we are holding the lock from freeze to thaw? FWIW, the comment in get_sb_bdev() seems to imply s_umount is supposed to ensure that we don't get unmounted while frozen. Indeed, in the comment above freeze_bdev: * If a superblock is found on this device, we take the s_umount semaphore * on it to make sure nobody unmounts until the snapshot creation is done. implies this as well, but freeze_bdev does not take the s_umount semaphore, nor does any filesystem that implements ->write_super_lockfs() So the code is clearly at odds with the comments here. IMO, you should be able to unmount a frozen filesystem - behaviour should be the same as crashing while frozen, so i think the comments about "snapshots" are pretty dodgy as well. > > If that _is_ true then, well, that sucks a bit. > > Indeed, its a fairly ordinary interface, but thats too late to go > fix now I guess (since its exposed to userspace already). 
Userspace knows nothing about that lock, so we can change that without changing the the userspace API. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jan 8 22:00:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 22:00:22 -0800 (PST) Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0960Gqw029476 for ; Mon, 8 Jan 2007 22:00:17 -0800 Received: from edge (unknown [124.178.235.100]) by postoffice.aconex.com (Postfix) with ESMTP id 235EBAAC204; Tue, 9 Jan 2007 16:49:24 +1100 (EST) Subject: Re: [**BULK SPAM**] Re: bd_mount_mutex -> bd_mount_sem (was Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock()) From: Nathan Scott Reply-To: nscott@aconex.com To: David Chinner Cc: Andrew Morton , Eric Sandeen , linux-kernel Mailing List , xfs@oss.sgi.com In-Reply-To: <20070109044907.GH33919298@melbourne.sgi.com> References: <20070107213734.GS44411608@melbourne.sgi.com> <20070108110323.GA3803@m.safari.iki.fi> <45A27416.8030600@sandeen.net> <20070108234728.GC33919298@melbourne.sgi.com> <20070108161917.73a4c2c6.akpm@osdl.org> <45A30828.6000508@sandeen.net> <20070108191800.9d83ff5e.akpm@osdl.org> <45A30E1D.4030401@sandeen.net> <20070108195127.67fe86b8.akpm@osdl.org> <1168316223.32113.83.camel@edge> <20070109044907.GH33919298@melbourne.sgi.com> Content-Type: text/plain Organization: Aconex Date: Tue, 09 Jan 2007 17:02:53 +1100 Message-Id: <1168322573.32113.86.camel@edge> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-archive-position: 10223 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs Content-Length: 1235 Lines: 34 On Tue, 2007-01-09 at 15:49 +1100, David Chinner wrote: > On Tue, Jan 09, 2007 at 03:17:03PM +1100, Nathan Scott wrote: > > On Mon, 2007-01-08 at 19:51 -0800, Andrew Morton wrote: > > > If that's not true, then what _is_ happening in there? > > > > This particular case was a device mapper stack trace, hence the > > confusion, I think. Both XFS and DM are making the same generic > > block layer call here though (freeze_bdev). > > Yup. it's the freeze_bdev/thaw_bdev use of the bd_mount_mutex() > that's the problem. I fail to see _why_ we need to hold a lock > across the freeze/thaw - the only reason i can think of is to > hold out new calls to sget() (via get_sb_bdev()) while the > filesystem is frozen though I'm not sure why you'd need to > do that. Can someone explain why we are holding the lock from > freeze to thaw? Not me. If it's really not needed, then... > > > If that _is_ true then, well, that sucks a bit. > > > > Indeed, its a fairly ordinary interface, but thats too late to go > > fix now I guess (since its exposed to userspace already). > > Userspace knows nothing about that lock, so we can change that without > changing the the userspace API. ...that would be true, AFAICS. cheers. 
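On the ERR_PTR() point above: a kernel function that returns a pointer - as freeze_bdev() returns a struct super_block pointer - reports failure by encoding the negative errno value into the pointer with ERR_PTR(), and callers check it with IS_ERR()/PTR_ERR(). A miniature of that convention; in the kernel the helpers come from <linux/err.h> and the struct is the real superblock, both are stubbed here only so the example builds on its own:

/* err_ptr_demo.c - the error-in-pointer convention, in miniature. */
#include <errno.h>
#include <stdio.h>

#define MAX_ERRNO 4095
#define ERR_PTR(err) ((void *)(long)(err))
#define PTR_ERR(ptr) ((long)(ptr))
#define IS_ERR(ptr)  ((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

struct super_block { int stub; };      /* stand-in for the real thing */

static struct super_block *freeze_demo(int fail)
{
        static struct super_block sb;

        if (fail)
                return ERR_PTR(-EAGAIN);   /* error travels inside the pointer */
        return &sb;
}

int main(void)
{
        struct super_block *sb = freeze_demo(1);

        if (IS_ERR(sb))
                printf("freeze failed: %ld\n", PTR_ERR(sb));
        else
                printf("frozen ok\n");
        return 0;
}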
-- Nathan From owner-xfs@oss.sgi.com Mon Jan 8 23:26:38 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 23:26:44 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l097QZqw011375 for ; Mon, 8 Jan 2007 23:26:37 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA04480; Tue, 9 Jan 2007 18:25:39 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l097Pb7Y86106048; Tue, 9 Jan 2007 18:25:38 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l097PZdT87347416; Tue, 9 Jan 2007 18:25:35 +1100 (AEDT) Date: Tue, 9 Jan 2007 18:25:35 +1100 From: David Chinner To: David Chinner Cc: "Mr. Berkley Shands" , Eric Sandeen , Dave Lloyd , linux-xfs@oss.sgi.com Subject: Re: XFS and 2.6.18 -> 2.6.20-rc3 Message-ID: <20070109072535.GH44411608@melbourne.sgi.com> References: <45A27BC7.2020709@exegy.com> <20070109012212.GG44411608@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070109012212.GG44411608@melbourne.sgi.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10224 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1196 Lines: 35 On Tue, Jan 09, 2007 at 12:22:12PM +1100, David Chinner wrote: > On Mon, Jan 08, 2007 at 11:13:43AM -0600, Mr. Berkley Shands wrote: > > My testbench is a 4 core Opteron (dual 275's) into > > two LSI8408E SAS controllers, into 16 Seagate 7200.10 320GB satas. > > Redhat ES4.4 (Centos 4.4). A slightly newer parted is needed > > than the contemporary of Moses that is shipped with the O/S. > > > > I have a standard burn in script that takes the 4 4-drive raid0's > > and puts a GPT label on them, aligns the partitions to stripe > > boundary's. It then proceeds to write 8GB files concurrently > > onto all 4 raid drives. I just ran up a similar test - single large file per device on a 4 core Xeon (woodcrest) with 16GB RAM, a single PCI-X SAS HBA and 12x10krpm 300GB SAS disks split into 3x4 disk dm raid zero stripes on 2.6.18 and 2.6.20-rc3. I see the same thing - 2.6.20-rc3 is more erractic and quite a bit slower than 2.6.18 when going through XFS. I suggest trying this on 2.6.20-rc3: # echo 10 > /proc/sys/vm/dirty_ratio That restored most of the lost performance and consistency in my testing.... Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jan 8 23:57:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 08 Jan 2007 23:57:06 -0800 (PST) Received: from fallback.mail.elte.hu (fallback.mail.elte.hu [157.181.151.13]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l097uwqw016475 for ; Mon, 8 Jan 2007 23:57:01 -0800 Received: from mx2.mail.elte.hu ([157.181.151.9]) by fallback.mail.elte.hu with esmtp (Exim) id 1H4Ak5-00074g-52 from for ; Tue, 09 Jan 2007 07:45:53 +0100 Received: from elvis.elte.hu ([157.181.1.14]) by mx2.mail.elte.hu with esmtp (Exim) id 1H4Aj5-00062Q-0v from ; Tue, 09 Jan 2007 07:44:51 +0100 Received: by elvis.elte.hu (Postfix, from userid 1004) id 6242D3E243E; Tue, 9 Jan 2007 07:44:11 +0100 (CET) Date: Tue, 9 Jan 2007 07:41:13 +0100 From: Ingo Molnar To: Andrew Morton Cc: David Chinner , linux-kernel Mailing List , xfs@oss.sgi.com, Arjan van de Ven , Peter Zijlstra Subject: Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock() Message-ID: <20070109064113.GB5569@elte.hu> References: <20070104001420.GA32440@m.safari.iki.fi> <20070107213734.GS44411608@melbourne.sgi.com> <20070108155636.a68dce33.akpm@osdl.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070108155636.a68dce33.akpm@osdl.org> User-Agent: Mutt/1.4.2.2i Received-SPF: softfail (mx2: transitioning domain of elte.hu does not designate 157.181.1.14 as permitted sender) client-ip=157.181.1.14; envelope-from=mingo@elte.hu; helo=elvis.elte.hu; X-ELTE-VirusStatus: clean X-ELTE-SpamScore: -2.6 X-ELTE-SpamLevel: X-ELTE-SpamCheck: no X-ELTE-SpamVersion: ELTE 2.0 X-ELTE-SpamCheck-Details: score=-2.6 required=5.9 tests=BAYES_00 autolearn=no SpamAssassin version=3.0.3 -2.6 BAYES_00 BODY: Bayesian spam probability is 0 to 1% [score: 0.0000] X-archive-position: 10225 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mingo@elte.hu Precedence: bulk X-list: xfs Content-Length: 619 Lines: 17 * Andrew Morton wrote: > > Revert bd_mount_mutex back to a semaphore so that xfs_freeze -f > > /mnt/newtest; xfs_freeze -u /mnt/newtest works safely and doesn't > > produce lockdep warnings. > > Sad. The alternative would be to implement > mutex_unlock_dont_warn_if_a_different_task_did_it(). Ingo? Possible? i'd like to avoid it as much as i'd like to avoid having to add spin_unlock_dont_warn_if_a_different_task_did_it(). Unlocking by a different task is usually a sign of messy locking and bugs lurking. Is it really true that XFS's use of bd_mount_mutex is safe and justified? 
Ingo From owner-xfs@oss.sgi.com Tue Jan 9 02:36:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 09 Jan 2007 02:36:17 -0800 (PST) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l09AaAqw014667 for ; Tue, 9 Jan 2007 02:36:11 -0800 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1H4Dq8-00047R-W9; Tue, 09 Jan 2007 10:04:21 +0000 Date: Tue, 9 Jan 2007 10:04:20 +0000 From: Christoph Hellwig To: Andrew Morton Cc: Eric Sandeen , David Chinner , linux-kernel Mailing List , xfs@oss.sgi.com Subject: Re: bd_mount_mutex -> bd_mount_sem (was Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock()) Message-ID: <20070109100420.GB14713@infradead.org> Mail-Followup-To: Christoph Hellwig , Andrew Morton , Eric Sandeen , David Chinner , linux-kernel Mailing List , xfs@oss.sgi.com References: <20070104001420.GA32440@m.safari.iki.fi> <20070107213734.GS44411608@melbourne.sgi.com> <20070108110323.GA3803@m.safari.iki.fi> <45A27416.8030600@sandeen.net> <20070108234728.GC33919298@melbourne.sgi.com> <20070108161917.73a4c2c6.akpm@osdl.org> <45A30828.6000508@sandeen.net> <20070108191800.9d83ff5e.akpm@osdl.org> <45A30E1D.4030401@sandeen.net> <20070108195127.67fe86b8.akpm@osdl.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070108195127.67fe86b8.akpm@osdl.org> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 10226 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs Content-Length: 703 Lines: 16 On Mon, Jan 08, 2007 at 07:51:27PM -0800, Andrew Morton wrote: > I don't even know what code we're talking about here... > > I'm under the impression that XFS will return to userspace with a > filesystem lock held, under the expectation (ie: requirement) that > userspace will later come in and release that lock. > > If that's not true, then what _is_ happening in there? > > If that _is_ true then, well, that sucks a bit. It's not a filesystem lock. It's a per-blockdevice lock that prevents multiple people from freezing the filesystem at the same time, aswell as providing exclusion between a frozen filesystem an mount-related activity. It's a traditional text-box example for a semaphore. 
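The mount-side exclusion Christoph mentions comes from the mount path taking the same per-blockdevice lock around sget(), so a new mount of a frozen device simply sleeps until thaw; roughly (paraphrased from the thread, not the exact fs/super.c source):

static struct super_block *mount_bdev_sketch(struct file_system_type *fs_type,
					     struct block_device *bdev)
{
	struct super_block *s;

	down(&bdev->bd_mount_sem);	/* sleeps while the device is frozen */
	s = sget(fs_type, test_bdev_super, set_bdev_super, bdev);
	up(&bdev->bd_mount_sem);
	return s;
}

This is the sget() hold-out David asks about earlier in the thread; whether unmount should also be held out (via s_umount) while frozen is the part the comments promise but the code does not currently do.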
From owner-xfs@oss.sgi.com Tue Jan 9 02:36:15 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 09 Jan 2007 02:36:19 -0800 (PST) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l09AaDqw014683 for ; Tue, 9 Jan 2007 02:36:14 -0800 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1H4Doi-00046D-8I; Tue, 09 Jan 2007 10:02:52 +0000 Date: Tue, 9 Jan 2007 10:02:52 +0000 From: Christoph Hellwig To: Andrew Morton Cc: David Chinner , Eric Sandeen , linux-kernel Mailing List , xfs@oss.sgi.com Subject: Re: bd_mount_mutex -> bd_mount_sem (was Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock()) Message-ID: <20070109100252.GA14713@infradead.org> Mail-Followup-To: Christoph Hellwig , Andrew Morton , David Chinner , Eric Sandeen , linux-kernel Mailing List , xfs@oss.sgi.com References: <20070104001420.GA32440@m.safari.iki.fi> <20070107213734.GS44411608@melbourne.sgi.com> <20070108110323.GA3803@m.safari.iki.fi> <45A27416.8030600@sandeen.net> <20070108234728.GC33919298@melbourne.sgi.com> <20070108161917.73a4c2c6.akpm@osdl.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070108161917.73a4c2c6.akpm@osdl.org> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 10227 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs Content-Length: 313 Lines: 7 On Mon, Jan 08, 2007 at 04:19:17PM -0800, Andrew Morton wrote: > Seems not. I think people were hoping that various nasties in there > would go away. We return to userspace with a kernel lock held?? Well, there might be nicer solutions, but for now we should revert the broken commit to change the lock type. 
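In code terms, the revert amounts to roughly the following rename plus the matching changes at the lock/unlock sites (a sketch of the shape of the change, not the actual commit):

-	struct mutex		bd_mount_mutex;		/* in struct block_device */
+	struct semaphore	bd_mount_sem;

-	mutex_lock(&bdev->bd_mount_mutex);
+	down(&bdev->bd_mount_sem);

-	mutex_unlock(&bdev->bd_mount_mutex);
+	up(&bdev->bd_mount_sem);

with the semaphore set up via sema_init(&bdev->bd_mount_sem, 1) where the mutex was initialised before. down() and up() carry no ownership information, so the mutex debug code has nothing to object to when xfs_freeze -u runs in a different process than xfs_freeze -f.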
From owner-xfs@oss.sgi.com Tue Jan 9 03:58:36 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 09 Jan 2007 03:58:40 -0800 (PST) Received: from imr2.americas.sgi.com (imr2.americas.sgi.com [198.149.16.18]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l09BwZqw028773 for ; Tue, 9 Jan 2007 03:58:35 -0800 Received: from [134.15.160.26] (vpn-emea-sw-emea-160-26.emea.sgi.com [134.15.160.26]) by imr2.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id l09BVjnc76677151; Tue, 9 Jan 2007 03:31:46 -0800 (PST) Message-ID: <45A38332.40506@sgi.com> Date: Tue, 09 Jan 2007 11:57:38 +0000 From: Lachlan McIlroy Reply-To: lachlan@sgi.com Organization: SGI User-Agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.7.12) Gecko/20050920 X-Accept-Language: en-us, en MIME-Version: 1.0 To: David Chinner CC: Christoph Hellwig , xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: fix mapping invalidation callouts References: <20070108040309.GX33919298@melbourne.sgi.com> <20070108090916.GA17121@infradead.org> <20070108230429.GB33919298@melbourne.sgi.com> In-Reply-To: <20070108230429.GB33919298@melbourne.sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10228 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs Content-Length: 1855 Lines: 43 David Chinner wrote: > On Mon, Jan 08, 2007 at 09:09:16AM +0000, Christoph Hellwig wrote: > >>On Mon, Jan 08, 2007 at 03:03:09PM +1100, David Chinner wrote: >> >>>With the recent cancel_dirty_page() changes, a warning was >>>added if we cancel a dirty page that is still mapped into >>>the page tables. >>>This happens in XFS from fs_tosspages() and fs_flushinval_pages() >>>because they call truncate_inode_pages(). >>> >>>truncate_inode_pages() does not invalidate existing page mappings; >>>it is expected taht this is called only when truncating the file >>>or destroying the inode and on both these cases there can be >>>no mapped ptes. However, we call this when doing direct I/O writes >>>to remove pages from the page cache. As a result, we can rip >>>a page from the page cache that still has mappings attached. >>> >>>The correct fix is to use invalidate_inode_pages2_range() instead >>>of truncate_inode_pages(). They essentially do the same thing, but >>>the former also removes any pte mappings before removing the page >>>from the page cache. >>> >>>Comments? >> >>Generally looks good. But I feel a little cautios about changes in this >>area, so we should throw all possible test loads at this before commiting >>it. > > > Yup - fsx is one test that I really want to hit with this. The guy that > reported the initial problem has replied saying this patch fixes the > warnings (good start ;), but I'll hold off pushing it for a little > while to test it more. This (or something like it) will need to go > into 2.6.20 before it is released so we've got limited time to > test this one out.... > This patch fixes fs_tosspages() and fs_flushinval_pages() but will a call to fs_flush_pages() with flags including B_INVAL work correctly? I can't see any code that passes B_INVAL into fs_flush_pages() but it should probably support it. 
From owner-xfs@oss.sgi.com Tue Jan 9 16:11:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 09 Jan 2007 16:11:33 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0A0BPqw006749 for ; Tue, 9 Jan 2007 16:11:28 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA00967; Wed, 10 Jan 2007 11:10:29 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0A0AS7Y82688746; Wed, 10 Jan 2007 11:10:29 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0A0ASPk88365186; Wed, 10 Jan 2007 11:10:28 +1100 (AEDT) Date: Wed, 10 Jan 2007 11:10:28 +1100 From: David Chinner To: Lachlan McIlroy Cc: David Chinner , Christoph Hellwig , xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: fix mapping invalidation callouts Message-ID: <20070110001028.GO33919298@melbourne.sgi.com> References: <20070108040309.GX33919298@melbourne.sgi.com> <20070108090916.GA17121@infradead.org> <20070108230429.GB33919298@melbourne.sgi.com> <45A38332.40506@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45A38332.40506@sgi.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10234 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 548 Lines: 22 On Tue, Jan 09, 2007 at 11:57:38AM +0000, Lachlan McIlroy wrote: > > This patch fixes fs_tosspages() and fs_flushinval_pages() but will a > call to fs_flush_pages() with flags including B_INVAL work correctly? By definition fs_flush_pages() only flushes pages. If you need to flush and invalidate pages, you use fs_flushinval_pages(). Passing B_INVAL to fs_flush_pages() is broken code. > I can't see any code that passes B_INVAL into fs_flush_pages() good ;) Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jan 9 16:34:56 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 09 Jan 2007 16:35:01 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0A0Yqqw015502 for ; Tue, 9 Jan 2007 16:34:54 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA01703; Wed, 10 Jan 2007 11:33:56 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0A0Xt7Y88512258; Wed, 10 Jan 2007 11:33:56 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0A0XsmI88548583; Wed, 10 Jan 2007 11:33:54 +1100 (AEDT) Date: Wed, 10 Jan 2007 11:33:54 +1100 From: David Chinner To: linux-fsdevel@vger.kernel.org Cc: hch@infradead.org, linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: [PATCH 1 of 2]: Make BH_Unwritten a first class bufferhead flag V2 Message-ID: <20070110003354.GN44411608@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 10235 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 4280 Lines: 112 Version 2: - separate buffer_delay in generic code into buffer_delay and buffer_unwritten - include XFS changes as a second patch: - remove XFS use of buffer_delay to indicate buffer_unwritten - remove XFS hack to silently clear "lost" unwritten flags Version 1: Currently, XFS uses BH_PrivateStart for flagging unwritten extent state in a bufferhead. Recently, i found the long standing mmap/unwritten extent conversion bug, and it was to do with partial page invalidation not clearing the unwritten flag from bufferheads attached to the page but beyond EOF. See here for a full explaination: http://oss.sgi.com/archives/xfs/2006-12/msg00196.html The solution I have checked into the XFS dev tree involves duplicating code from block_invalidatepage to clear the unwritten flag from the bufferhead(s), and then calling block_invalidatepage() to do the rest. Christoph suggested that this would be better solved by pushing the unwritten flag into the common buffer head flags and just adding the call to discard_buffer(): http://oss.sgi.com/archives/xfs/2006-12/msg00239.html The following patch makes BH_Unwritten a first class citizen. Patch against 2.6.20-rc3. 
Signed-Off-By: Dave Chinner --- fs/buffer.c | 4 +++- fs/xfs/linux-2.6/xfs_linux.h | 10 ---------- include/linux/buffer_head.h | 2 ++ 3 files changed, 5 insertions(+), 11 deletions(-) Index: 2.6.x-xfs-new/fs/buffer.c =================================================================== --- 2.6.x-xfs-new.orig/fs/buffer.c 2007-01-08 14:32:39.688130559 +1100 +++ 2.6.x-xfs-new/fs/buffer.c 2007-01-09 11:00:02.659186970 +1100 @@ -1437,6 +1437,7 @@ static void discard_buffer(struct buffer clear_buffer_req(bh); clear_buffer_new(bh); clear_buffer_delay(bh); + clear_buffer_unwritten(bh); unlock_buffer(bh); } @@ -1820,6 +1821,7 @@ static int __block_prepare_write(struct continue; } if (!buffer_uptodate(bh) && !buffer_delay(bh) && + !buffer_unwritten(bh) && (block_start < from || block_end > to)) { ll_rw_block(READ, 1, &bh); *wait_bh++=bh; @@ -2541,7 +2543,7 @@ int block_truncate_page(struct address_s if (PageUptodate(page)) set_buffer_uptodate(bh); - if (!buffer_uptodate(bh) && !buffer_delay(bh)) { + if (!buffer_uptodate(bh) && !buffer_delay(bh) && !buffer_unwritten(bh)) { err = -EIO; ll_rw_block(READ, 1, &bh); wait_on_buffer(bh); Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_linux.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_linux.h 2006-12-12 12:05:17.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_linux.h 2007-01-09 10:58:30.459212715 +1100 @@ -109,16 +109,6 @@ #undef HAVE_PERCPU_SB /* per cpu superblock counters are a 2.6 feature */ #endif -/* - * State flag for unwritten extent buffers. - * - * We need to be able to distinguish between these and delayed - * allocate buffers within XFS. The generic IO path code does - * not need to distinguish - we use the BH_Delay flag for both - * delalloc and these ondisk-uninitialised buffers. 
- */ -BUFFER_FNS(PrivateStart, unwritten); - #define restricted_chown xfs_params.restrict_chown.val #define irix_sgid_inherit xfs_params.sgid_inherit.val #define irix_symlink_mode xfs_params.symlink_mode.val Index: 2.6.x-xfs-new/include/linux/buffer_head.h =================================================================== --- 2.6.x-xfs-new.orig/include/linux/buffer_head.h 2006-12-12 12:06:29.000000000 +1100 +++ 2.6.x-xfs-new/include/linux/buffer_head.h 2007-01-09 10:58:30.535202804 +1100 @@ -34,6 +34,7 @@ enum bh_state_bits { BH_Write_EIO, /* I/O error on write */ BH_Ordered, /* ordered write */ BH_Eopnotsupp, /* operation not supported (barrier) */ + BH_Unwritten, /* Buffer is allocated on disk but not written */ BH_PrivateStart,/* not a state bit, but the first bit available * for private allocation by other entities @@ -126,6 +127,7 @@ BUFFER_FNS(Boundary, boundary) BUFFER_FNS(Write_EIO, write_io_error) BUFFER_FNS(Ordered, ordered) BUFFER_FNS(Eopnotsupp, eopnotsupp) +BUFFER_FNS(Unwritten, unwritten) #define bh_offset(bh) ((unsigned long)(bh)->b_data & ~PAGE_MASK) #define touch_buffer(bh) mark_page_accessed(bh->b_page) From owner-xfs@oss.sgi.com Tue Jan 9 16:39:19 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 09 Jan 2007 16:39:24 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0A0dGqw016419 for ; Tue, 9 Jan 2007 16:39:18 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA01793; Wed, 10 Jan 2007 11:38:21 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0A0cK7Y88506750; Wed, 10 Jan 2007 11:38:20 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0A0cJL888586851; Wed, 10 Jan 2007 11:38:19 +1100 (AEDT) Date: Wed, 10 Jan 2007 11:38:19 +1100 From: David Chinner To: linux-fsdevel@vger.kernel.org Cc: hch@infradead.org, linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: [PATCH 2 of 2]: Make XFS use BH_Unwritten and BH_Delay correctly Message-ID: <20070110003819.GO44411608@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 10236 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1017 Lines: 33 Don't hide buffer_unwritten behind buffer_delay() and remove the hack that clears unexpected buffer_unwritten() states now that it can't happen. 
Signed-Off-By: Dave Chinner --- fs/xfs/linux-2.6/xfs_aops.c | 3 --- 1 file changed, 29 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_aops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_aops.c 2007-01-08 12:21:40.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_aops.c 2007-01-09 11:05:09.763127643 +1100 @@ -58,8 +58,6 @@ xfs_count_page_state( do { if (buffer_uptodate(bh) && !buffer_mapped(bh)) (*unmapped) = 1; - else if (buffer_unwritten(bh) && !buffer_delay(bh)) - clear_buffer_unwritten(bh); else if (buffer_unwritten(bh)) (*unwritten) = 1; else if (buffer_delay(bh)) @@ -1271,7 +1269,6 @@ __xfs_get_blocks( if (direct) bh_result->b_private = inode; set_buffer_unwritten(bh_result); - set_buffer_delay(bh_result); } } From owner-xfs@oss.sgi.com Tue Jan 9 17:35:26 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 09 Jan 2007 17:35:31 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0A1ZNqw028340 for ; Tue, 9 Jan 2007 17:35:25 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA03348; Wed, 10 Jan 2007 12:34:25 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0A1YM7Y88584080; Wed, 10 Jan 2007 12:34:22 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0A1YJ3C81529686; Wed, 10 Jan 2007 12:34:19 +1100 (AEDT) Date: Wed, 10 Jan 2007 12:34:19 +1100 From: David Chinner To: Christoph Hellwig , Andrew Morton , Eric Sandeen , David Chinner , linux-kernel Mailing List , xfs@oss.sgi.com Subject: Re: bd_mount_mutex -> bd_mount_sem (was Re: xfs_file_ioctl / xfs_freeze: BUG: warning at kernel/mutex-debug.c:80/debug_mutex_unlock()) Message-ID: <20070110013419.GP33919298@melbourne.sgi.com> References: <20070107213734.GS44411608@melbourne.sgi.com> <20070108110323.GA3803@m.safari.iki.fi> <45A27416.8030600@sandeen.net> <20070108234728.GC33919298@melbourne.sgi.com> <20070108161917.73a4c2c6.akpm@osdl.org> <45A30828.6000508@sandeen.net> <20070108191800.9d83ff5e.akpm@osdl.org> <45A30E1D.4030401@sandeen.net> <20070108195127.67fe86b8.akpm@osdl.org> <20070109100420.GB14713@infradead.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070109100420.GB14713@infradead.org> User-Agent: Mutt/1.4.2.1i X-archive-position: 10237 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 2061 Lines: 55 On Tue, Jan 09, 2007 at 10:04:20AM +0000, Christoph Hellwig wrote: > On Mon, Jan 08, 2007 at 07:51:27PM -0800, Andrew Morton wrote: > > I don't even know what code we're talking about here... > > > > I'm under the impression that XFS will return to userspace with a > > filesystem lock held, under the expectation (ie: requirement) that > > userspace will later come in and release that lock. > > > > If that's not true, then what _is_ happening in there? > > > > If that _is_ true then, well, that sucks a bit. > > It's not a filesystem lock. It's a per-blockdevice lock that prevents > multiple people from freezing the filesystem at the same time, aswell > as providing exclusion between a frozen filesystem an mount-related > activity. 
It's a traditional text-box example for a semaphore. This can be done without needing to hold a semaphore across the freeze/thaw. In the XFS case, we never try to lock the semaphore a second time - the freeze code checks if the filesystem is not already (being) frozen before calling freeze_bdev(). On thaw it also checks that the filesystem is frozen before calling thaw_bdev(). IOWs, you can safely do: # xfs_freeze -f /dev/sda1; xfs_freeze -f /dev/sda1; xfs_freeze -f /dev/sda1; # xfs_freeze -u /dev/sda1; xfs_freeze -u /dev/sda1; xfs_freeze -u /dev/sda1; And the filesystem will only be frozen once and thawed once. The second and subsequent incantations of the freeze/thaw are effectively ignored and don't block. IMO, if we need to prevent certain operations from occurring when the filesystem is frozen, those operations need to explicitly check the frozen state and block i.e. do something like: wait_event(sb->s_wait_unfrozen, (sb->s_frozen < SB_FREEZE_WRITE)); If you need to prevent unmounts from occurring while snapshotting a frozen filesystem, then the snapshot code needs to take the s_umount semaphore while the snapshot is in progress. We should not be making frozen filesystems unmountable.... Thoughts? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jan 9 20:55:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 09 Jan 2007 20:55:09 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0A4t0qw030229 for ; Tue, 9 Jan 2007 20:55:02 -0800 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA08306; Wed, 10 Jan 2007 15:54:01 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 3041E58CF82A; Wed, 10 Jan 2007 15:54:01 +1100 (EST) To: xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 959978 - growing an XFS filesystem by more than 2TB is broken Message-Id: <20070110045401.3041E58CF82A@chook.melbourne.sgi.com> Date: Wed, 10 Jan 2007 15:54:01 +1100 (EST) From: dgc@sgi.com (David Chinner) X-archive-position: 10238 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1726 Lines: 40 Make growfs work for amounts greater than 2TB The free block modification code has a 32bit interface, limiting the size the filesystem can be grown even on 64 bit machines. On 32 bit machines, there are other 32bit variables in transaction structures and interfaces that need to be expanded to allow this to work. Date: Wed Jan 10 15:53:25 AEDT 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: hch@infradead.org The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:27894a fs/xfs/xfs_mount.h - 1.231 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.h.diff?r1=text&tr1=1.231&r2=text&tr2=1.230&f=h - Update xfs_mod_sb_t to 64bit deltas. fs/xfs/xfs_mount.c - 1.390 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.c.diff?r1=text&tr1=1.390&r2=text&tr2=1.389&f=h - Modify the core superblock modification code to handle 64 bit deltas so as to allow single modifications of greater than 2TB. 
fs/xfs/xfs_trans.c - 1.177 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_trans.c.diff?r1=text&tr1=1.177&r2=text&tr2=1.176&f=h - Superblock modification deltas changed to 64 bit types to allow single modifications of greater than 32 bits on 32 bit platforms. fs/xfs/xfs_trans.h - 1.143 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_trans.h.diff?r1=text&tr1=1.143&r2=text&tr2=1.142&f=h - Transaction deltas converted to explicit 64bit types. fs/xfs/xfs_bmap.c - 1.361 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_bmap.c.diff?r1=text&tr1=1.361&r2=text&tr2=1.360&f=h - Change free block modifications to use 64 bit type casts. From owner-xfs@oss.sgi.com Tue Jan 9 21:44:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 09 Jan 2007 21:44:23 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0A5iFqw006143 for ; Tue, 9 Jan 2007 21:44:16 -0800 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA09520; Wed, 10 Jan 2007 16:43:14 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id A64F058CF82A; Wed, 10 Jan 2007 16:43:14 +1100 (EST) To: xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Cc: sdoyon@max-t.com Subject: TAKE 956323 - File system block reservation mechanism is broken Message-Id: <20070110054314.A64F058CF82A@chook.melbourne.sgi.com> Date: Wed, 10 Jan 2007 16:43:14 +1100 (EST) From: dgc@sgi.com (David Chinner) X-archive-position: 10239 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1648 Lines: 39 Fix block reservation mechanism. The block reservation mechanism has been broken since the per-cpu superblock counters were introduced. Make the block reservation code work with the per-cpu counters by syncing the counters, snapshotting the amount of available space and then doing a modifcation of the counter state according to the result. Continue in a loop until we either have no space available or we reserve some space. Date: Wed Jan 10 16:42:29 AEDT 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: hch@infradead.org The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:27895a fs/xfs/xfs_vfsops.c - 1.513 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_vfsops.c.diff?r1=text&tr1=1.513&r2=text&tr2=1.512&f=h - Use the flags superblock counter sync function variant. fs/xfs/xfs_mount.h - 1.232 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.h.diff?r1=text&tr1=1.232&r2=text&tr2=1.231&f=h - Change the per-cpu counter sync interface to export a flags variant rather than specific, similar functions. fs/xfs/xfs_mount.c - 1.391 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.c.diff?r1=text&tr1=1.391&r2=text&tr2=1.390&f=h - Change the per-cpu counter sync interface to export a flags variant rather than specific, similar functions. fs/xfs/xfs_fsops.c - 1.120 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_fsops.c.diff?r1=text&tr1=1.120&r2=text&tr2=1.119&f=h - Fix the block reservation mechanism to work with per-cpu superblock counters. 
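The reservation loop described in the checkin notes, sync the per-cpu counters, snapshot the free space, then try to apply the delta and retry on races, looks roughly like this (a sketch of the described algorithm only: xfs_mod_incore_sb() and the sb_fdblocks/m_resblks_avail fields are real, the counter-sync helper name is assumed from the "flags variant" mentioned above, and the wrapper and local names are made up; the committed xfs_fsops.c code handles more cases):

static void reserve_blocks_sketch(xfs_mount_t *mp, __int64_t wanted)
{
	__int64_t	free, delta;
	int		error;

	for (;;) {
		/* fold the per-cpu counters back into the superblock */
		xfs_icsb_sync_counters_flags(mp, 0);

		free = mp->m_sb.sb_fdblocks;		/* snapshot of available space */
		if (free <= 0)
			break;				/* nothing left to reserve */

		delta = (wanted < free) ? wanted : free;
		error = xfs_mod_incore_sb(mp, XFS_SBS_FDBLOCKS, -delta, 0);
		if (!error) {
			mp->m_resblks_avail += delta;	/* reservation made */
			break;
		}
		/* lost a race with an allocation; resync and try again */
	}
}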
From owner-xfs@oss.sgi.com Tue Jan 9 22:24:46 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 09 Jan 2007 22:24:50 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0A6Ohqw012997 for ; Tue, 9 Jan 2007 22:24:45 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA10572; Wed, 10 Jan 2007 17:23:46 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0A6Nj7Y88512724; Wed, 10 Jan 2007 17:23:45 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0A6Nilf87739300; Wed, 10 Jan 2007 17:23:44 +1100 (AEDT) Date: Wed, 10 Jan 2007 17:23:44 +1100 From: David Chinner To: David Chinner Cc: xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: fix mapping invalidation callouts Message-ID: <20070110062344.GR33919298@melbourne.sgi.com> References: <20070108040309.GX33919298@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070108040309.GX33919298@melbourne.sgi.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10240 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 2083 Lines: 65 On Mon, Jan 08, 2007 at 03:03:09PM +1100, David Chinner wrote: > With the recent cancel_dirty_page() changes, a warning was > added if we cancel a dirty page that is still mapped into > the page tables. > This happens in XFS from fs_tosspages() and fs_flushinval_pages() > because they call truncate_inode_pages(). > > truncate_inode_pages() does not invalidate existing page mappings; > it is expected taht this is called only when truncating the file > or destroying the inode and on both these cases there can be > no mapped ptes. However, we call this when doing direct I/O writes > to remove pages from the page cache. As a result, we can rip > a page from the page cache that still has mappings attached. > > The correct fix is to use invalidate_inode_pages2_range() instead > of truncate_inode_pages(). They essentially do the same thing, but > the former also removes any pte mappings before removing the page > from the page cache. > > Comments? > > Cheers, > > Dave. > -- > Dave Chinner > Principal Engineer > SGI Australian Software Group > > > --- > fs/xfs/linux-2.6/xfs_fs_subr.c | 10 ++++++++-- > 1 file changed, 8 insertions(+), 2 deletions(-) > > Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_fs_subr.c 2006-12-12 12:05:17.000000000 +1100 > +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c 2007-01-08 09:30:22.056571711 +1100 > @@ -21,6 +21,8 @@ int fs_noerr(void) { return 0; } > int fs_nosys(void) { return ENOSYS; } > void fs_noval(void) { return; } > > +#define XFS_OFF_TO_PCSIZE(off) \ > + (((off) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT) I don't think this is right. Assuming 4k page size, first = 2k, last = 6k will result in invalidating page indexes 1 and 2 i.e. offset 4k -> 12k. In fact, we want to invalidate pages 0 and 1. IOWs, I think it should be: +#define XFS_OFF_TO_PCINDEX(off) ((off) >> PAGE_CACHE_SHIFT) Comments? Cheers, Dave. 
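A quick standalone check of the off-by-one Dave describes, using his 2k/6k example with 4k pages (plain userspace C, only to show the arithmetic):

#include <stdio.h>

#define PAGE_CACHE_SHIFT	12			/* assume 4k pages */
#define PAGE_CACHE_SIZE		(1UL << PAGE_CACHE_SHIFT)

/* the macro from the patch: rounds the byte offset up before shifting */
#define XFS_OFF_TO_PCSIZE(off) \
	(((off) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT)

/* the proposed replacement: a plain byte-offset-to-page-index conversion */
#define XFS_OFF_TO_PCINDEX(off)	((off) >> PAGE_CACHE_SHIFT)

int main(void)
{
	unsigned long first = 2048, last = 6144;	/* 2k and 6k */

	printf("round-up: %lu..%lu\n",
	       XFS_OFF_TO_PCSIZE(first), XFS_OFF_TO_PCSIZE(last));
	printf("shift:    %lu..%lu\n",
	       XFS_OFF_TO_PCINDEX(first), XFS_OFF_TO_PCINDEX(last));
	return 0;
}

This prints 1..2 for the round-up version (bytes 4k-12k, missing page 0) and 0..1 for the plain shift, which are the pages that actually contain offsets 2k-6k, matching the correction above.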
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jan 10 00:40:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 00:40:33 -0800 (PST) Received: from internal-mail-relay1.corp.sgi.com (internal-mail-relay1.corp.sgi.com [198.149.32.52]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0A8eSqw007823 for ; Wed, 10 Jan 2007 00:40:29 -0800 Received: from [134.15.160.10] (vpn-emea-sw-emea-160-10.emea.sgi.com [134.15.160.10]) by internal-mail-relay1.corp.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id l0A8dXbj69791541; Wed, 10 Jan 2007 00:39:34 -0800 (PST) Message-ID: <45A4A645.5010708@sgi.com> Date: Wed, 10 Jan 2007 08:39:33 +0000 From: Lachlan McIlroy Reply-To: lachlan@sgi.com Organization: SGI User-Agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.7.12) Gecko/20050920 X-Accept-Language: en-us, en MIME-Version: 1.0 To: David Chinner CC: xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: fix mapping invalidation callouts References: <20070108040309.GX33919298@melbourne.sgi.com> <20070110062344.GR33919298@melbourne.sgi.com> In-Reply-To: <20070110062344.GR33919298@melbourne.sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10241 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs Content-Length: 2077 Lines: 63 David Chinner wrote: > On Mon, Jan 08, 2007 at 03:03:09PM +1100, David Chinner wrote: > >>With the recent cancel_dirty_page() changes, a warning was >>added if we cancel a dirty page that is still mapped into >>the page tables. >>This happens in XFS from fs_tosspages() and fs_flushinval_pages() >>because they call truncate_inode_pages(). >> >>truncate_inode_pages() does not invalidate existing page mappings; >>it is expected taht this is called only when truncating the file >>or destroying the inode and on both these cases there can be >>no mapped ptes. However, we call this when doing direct I/O writes >>to remove pages from the page cache. As a result, we can rip >>a page from the page cache that still has mappings attached. >> >>The correct fix is to use invalidate_inode_pages2_range() instead >>of truncate_inode_pages(). They essentially do the same thing, but >>the former also removes any pte mappings before removing the page >>from the page cache. >> >>Comments? >> >>Cheers, >> >>Dave. >>-- >>Dave Chinner >>Principal Engineer >>SGI Australian Software Group >> >> >>--- >> fs/xfs/linux-2.6/xfs_fs_subr.c | 10 ++++++++-- >> 1 file changed, 8 insertions(+), 2 deletions(-) >> >>Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c >>=================================================================== >>--- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_fs_subr.c 2006-12-12 12:05:17.000000000 +1100 >>+++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c 2007-01-08 09:30:22.056571711 +1100 >>@@ -21,6 +21,8 @@ int fs_noerr(void) { return 0; } >> int fs_nosys(void) { return ENOSYS; } >> void fs_noval(void) { return; } >> >>+#define XFS_OFF_TO_PCSIZE(off) \ >>+ (((off) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT) > > > I don't think this is right. > > Assuming 4k page size, first = 2k, last = 6k will result in > invalidating page indexes 1 and 2 i.e. offset 4k -> 12k. In fact, > we want to invalidate pages 0 and 1. > > IOWs, I think it should be: > > +#define XFS_OFF_TO_PCINDEX(off) ((off) >> PAGE_CACHE_SHIFT) > > Comments? 
> Makes sense to me. From owner-xfs@oss.sgi.com Wed Jan 10 04:21:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 04:21:23 -0800 (PST) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0ACLAqw015999 for ; Wed, 10 Jan 2007 04:21:17 -0800 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1H4c9l-0007Yf-JD; Wed, 10 Jan 2007 12:02:13 +0000 Date: Wed, 10 Jan 2007 12:02:13 +0000 From: Christoph Hellwig To: David Chinner Cc: linux-fsdevel@vger.kernel.org, hch@infradead.org, linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: [PATCH 1 of 2]: Make BH_Unwritten a first class bufferhead flag V2 Message-ID: <20070110120213.GA28534@infradead.org> Mail-Followup-To: Christoph Hellwig , David Chinner , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, xfs@oss.sgi.com References: <20070110003354.GN44411608@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070110003354.GN44411608@melbourne.sgi.com> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 10242 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs Content-Length: 34 Lines: 2 The two patches look good to me. From owner-xfs@oss.sgi.com Wed Jan 10 05:30:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 05:30:15 -0800 (PST) Received: from service.eng.exegy.net (68-191-203-42.static.stls.mo.charter.com [68.191.203.42]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0ADU9qw029849 for ; Wed, 10 Jan 2007 05:30:10 -0800 Received: from HANAFORD.eng.exegy.net (hanaford.eng.exegy.net [10.19.1.4]) by service.eng.exegy.net (8.13.1/8.13.1) with ESMTP id l0ADTFeu012629; Wed, 10 Jan 2007 07:29:15 -0600 X-Ninja-PIM: Scanned by Ninja X-Ninja-AttachmentFiltering: (no action) Received: from [10.19.4.86] ([10.19.4.86]) by HANAFORD.eng.exegy.net over TLS secured channel with Microsoft SMTPSVC(6.0.3790.1830); Wed, 10 Jan 2007 07:29:15 -0600 Message-ID: <45A4EA2B.5050505@exegy.com> Date: Wed, 10 Jan 2007 07:29:15 -0600 From: "Mr. Berkley Shands" User-Agent: Thunderbird 1.5.0.9 (X11/20061222) MIME-Version: 1.0 To: David Chinner CC: Eric Sandeen , Dave Lloyd , linux-xfs@oss.sgi.com Subject: Re: XFS and 2.6.18 -> 2.6.20-rc3 References: <45A27BC7.2020709@exegy.com> <20070109012212.GG44411608@melbourne.sgi.com> <20070109072535.GH44411608@melbourne.sgi.com> In-Reply-To: <20070109072535.GH44411608@melbourne.sgi.com> X-OriginalArrivalTime: 10 Jan 2007 13:29:15.0381 (UTC) FILETIME=[52AAEA50:01C734BB] Content-Type: text/plain Content-Disposition: inline Content-Transfer-Encoding: 7bit X-archive-position: 10243 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bshands@exegy.com Precedence: bulk X-list: xfs Content-Length: 5024 Lines: 117 With a fresh install of the O/S on a non-broken motherboard, the change to /proc/sys/vm/dirty_ratio restores most of the lost performance from 2.6.18, as of 2.6.20-rc4. The difference is 10% to 15% without the dirty_ratio change (40 is the default, 10 gives the old performance). 
Data: Writing, 8192 MB, Buffer: 128 KB, Time: 33571 MS, Rate: 244.020, to /s0/GigaData.0 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 33480 MS, Rate: 244.683, to /s2/GigaData.0 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 33570 MS, Rate: 244.027, to /s1/GigaData.0 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 33579 MS, Rate: 243.962, to /s3/GigaData.0 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 31486 MS, Rate: 260.179, to /s0/GigaData.1 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 31537 MS, Rate: 259.758, to /s2/GigaData.1 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 31538 MS, Rate: 259.750, to /s3/GigaData.1 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 31549 MS, Rate: 259.660, to /s1/GigaData.1 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 32205 MS, Rate: 254.370, to /s2/GigaData.2 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 32205 MS, Rate: 254.370, to /s1/GigaData.2 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 32218 MS, Rate: 254.268, to /s3/GigaData.2 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 32220 MS, Rate: 254.252, to /s0/GigaData.2 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 33536 MS, Rate: 244.275, to /s0/GigaData.3 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 33583 MS, Rate: 243.933, to /s3/GigaData.3 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 34016 MS, Rate: 240.828, to /s1/GigaData.3 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 34018 MS, Rate: 240.814, to /s2/GigaData.3 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 34121 MS, Rate: 240.087, to /s1/GigaData.4 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 34123 MS, Rate: 240.073, to /s0/GigaData.4 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 34114 MS, Rate: 240.136, to /s2/GigaData.4 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 34122 MS, Rate: 240.080, to /s3/GigaData.4 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 30386 MS, Rate: 269.598, to /s0/GigaData.9 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 30384 MS, Rate: 269.616, to /s1/GigaData.9 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 30717 MS, Rate: 266.693, to /s3/GigaData.9 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 30726 MS, Rate: 266.615, to /s2/GigaData.9 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 30088 MS, Rate: 272.268, to /s0/GigaData.10 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 30079 MS, Rate: 272.349, to /s2/GigaData.10 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 30086 MS, Rate: 272.286, to /s1/GigaData.10 Data: Writing, 8192 MB, Buffer: 128 KB, Time: 29989 MS, Rate: 273.167, to /s3/GigaData.10 So whatever needs to be tweaked in the VM system seems to be the key. Thanks to all for getting this regression repaired. Berkley David Chinner wrote: > On Tue, Jan 09, 2007 at 12:22:12PM +1100, David Chinner wrote: > >> On Mon, Jan 08, 2007 at 11:13:43AM -0600, Mr. Berkley Shands wrote: >> >>> My testbench is a 4 core Opteron (dual 275's) into >>> two LSI8408E SAS controllers, into 16 Seagate 7200.10 320GB satas. >>> Redhat ES4.4 (Centos 4.4). A slightly newer parted is needed >>> than the contemporary of Moses that is shipped with the O/S. >>> >>> I have a standard burn in script that takes the 4 4-drive raid0's >>> and puts a GPT label on them, aligns the partitions to stripe >>> boundary's. It then proceeds to write 8GB files concurrently >>> onto all 4 raid drives. >>> > > I just ran up a similar test - single large file per device on a 4 > core Xeon (woodcrest) with 16GB RAM, a single PCI-X SAS HBA and > 12x10krpm 300GB SAS disks split into 3x4 disk dm raid zero stripes > on 2.6.18 and 2.6.20-rc3. 
> > I see the same thing - 2.6.20-rc3 is more erractic and quite a > bit slower than 2.6.18 when going through XFS. > > I suggest trying this on 2.6.20-rc3: > > # echo 10 > /proc/sys/vm/dirty_ratio > > That restored most of the lost performance and consistency > in my testing.... > > Cheers, > > Dave. > -- //E. F. Berkley Shands, MSc// **Exegy Inc.** 3668 S. Geyer Road, Suite 300 St. Louis, MO 63127 Direct: (314) 450-5348 Cell: (314) 303-2546 Office: (314) 450-5353 Fax: (314) 450-5354 This e-mail and any documents accompanying it may contain legally privileged and/or confidential information belonging to Exegy, Inc. Such information may be protected from disclosure by law. The information is intended for use by only the addressee. If you are not the intended recipient, you are hereby notified that any disclosure or use of the information is strictly prohibited. If you have received this e-mail in error, please immediately contact the sender by e-mail or phone regarding instructions for return or destruction and do not use or disclose the content to others. [[HTML alternate version deleted]] From owner-xfs@oss.sgi.com Wed Jan 10 05:56:12 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 05:56:16 -0800 (PST) Received: from ciao.gmane.org (main.gmane.org [80.91.229.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0ADuAqw002875 for ; Wed, 10 Jan 2007 05:56:12 -0800 Received: from root by ciao.gmane.org with local (Exim 4.43) id 1H4duw-0001C7-G7 for linux-xfs@oss.sgi.com; Wed, 10 Jan 2007 14:55:02 +0100 Received: from p54a57214.dip.t-dialin.net ([84.165.114.20]) by main.gmane.org with esmtp (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 10 Jan 2007 14:55:02 +0100 Received: from christoph.bier by p54a57214.dip.t-dialin.net with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 10 Jan 2007 14:55:02 +0100 X-Injected-Via-Gmane: http://gmane.org/ To: linux-xfs@oss.sgi.com From: Christoph Bier Subject: Mounting an external HDD fails each second time after xfs_repair Date: Wed, 10 Jan 2007 14:35:34 +0100 Message-ID: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Complaints-To: usenet@sea.gmane.org X-Gmane-NNTP-Posting-Host: p54a57214.dip.t-dialin.net User-Agent: Mozilla/5.0 (X11; U; Linux i686; de-AT; rv:1.7.8) Gecko/20061113 Debian/1.7.8-1sarge8 Mnenhy/0.7.1 X-Accept-Language: de, de-de, de-at, en X-archive-position: 10244 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: christoph.bier@web.de Precedence: bulk X-list: xfs Content-Length: 3497 Lines: 100 Hi all, I use 14 partitions (one extended partition; partition table see below) on my new external 400GB HDD that is managed by LVM2 on Debian Sarge with a vanilla kernel 2.6.19. The first time I mounted the HDD on my desktop everything worked fine and I was able to copy 71GB of data. I unmounted and exported the HDD and imported and mounted it on my laptop (commands see below) running Ubuntu Edgy. Fine, too, I was able to read the data. I exported again and imported again on my desktop. But now mounting fails with mount: /dev/mm-extern/audiovideo: can't read superblock /var/log/syslog prints: [Output: http://www.zvisionwelt.de/tmpdownloads/mount-failure-syslog.output] The funny thing is that I just wanted to have a recent syslog print to post here and tried again to mount the HDD and now it works! 
[/var/log/syslog output: http://www.zvisionwelt.de/tmpdownloads/mount-success-syslog.output] I didn't change anything else after the mount failure mentioned above! BUT: Reading fails while I wanted to have a look at the files in lost+found. [/var/log/syslog output: http://www.zvisionwelt.de/tmpdownloads/read-failure-syslog.output] Yesterday I used xfs_repair after the mount failure with the result that two directories were removed: # xfs_repair /dev/mm-extern/audiovideo [Output: http://www.zvisionwelt.de/tmpdownloads/xfs_repair.output] Then mounting worked again. But this morning it failed again as I wrote above. I could repeat the scenario: xfs_repair -> mounting works -> unmount -> mounting again fails -> xfs_repair -> mounting works -> unmount -> mounting again fails -> waiting for about 90 minutes (without xfs_repair) it worked again. Strange ... Looking for answers I found this message tonight: Citation from this message: "After [xfs_repair] the file system is mountable for one time again." Any ideas what's going wrong here? I found . But this should be fixed since 2.6.17.7. Here is some more information: Partition table: http://www.zvisionwelt.de/tmpdownloads/partition-table.txt The HDD has one volume group named "mm-extern". The logical volume is named "audiovideo". fstab entry: /dev/mm-extern/audiovideo /media/samsung xfs defaults,noatime 0 0 # xfs_info /media/samsung/ meta-data=/media/samsung isize=256 agcount=16, agsize=6104384 blks = sectsz=512 data = bsize=4096 blocks=97670144, imaxpct=25 = sunit=0 swidth=0 blks, unwritten=1 naming =version 2 bsize=4096 log =internal bsize=4096 blocks=32768, version=1 = sectsz=512 sunit=0 blks realtime =none extsz=65536 blocks=0, rtextents=0 Importing and mounting of the external HDD: vgscan vgimport mm-extern vgchange -a y /dev/mm-extern mount /media/samsung Exporting and unmounting: umount /media/samsung vgchange -a n /dev/mm-extern vgexport mm-extern # dpkg -l lvm\* | grep ^ii ii lvm-common 1.5.17 The Logical Volume Manager for Linux (common ii lvm2 2.01.04-5 The Linux Logical Volume Manager # dpkg -l xfsprogs | grep ^ii ii xfsprogs 2.6.20-1 Utilities for managing the XFS filesystem I successfully use xfs with LVM2 on an internal HDD. 
Best, Christoph -- +++ Typografie-Regeln: http://zvisionwelt.de/downloads.html (1.6) From owner-xfs@oss.sgi.com Wed Jan 10 06:15:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 06:15:52 -0800 (PST) Received: from ciao.gmane.org (main.gmane.org [80.91.229.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0AEFkqw006747 for ; Wed, 10 Jan 2007 06:15:48 -0800 Received: from list by ciao.gmane.org with local (Exim 4.43) id 1H4eE2-00049Y-Hp for linux-xfs@oss.sgi.com; Wed, 10 Jan 2007 15:14:46 +0100 Received: from p54a57214.dip.t-dialin.net ([84.165.114.20]) by main.gmane.org with esmtp (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 10 Jan 2007 15:14:46 +0100 Received: from christoph.bier by p54a57214.dip.t-dialin.net with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 10 Jan 2007 15:14:46 +0100 X-Injected-Via-Gmane: http://gmane.org/ To: linux-xfs@oss.sgi.com From: Christoph Bier Subject: Re: Mounting an external HDD fails each second time after xfs_repair Date: Wed, 10 Jan 2007 15:14:35 +0100 Message-ID: References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Complaints-To: usenet@sea.gmane.org X-Gmane-NNTP-Posting-Host: p54a57214.dip.t-dialin.net User-Agent: Mozilla/5.0 (X11; U; Linux i686; de-AT; rv:1.7.8) Gecko/20061113 Debian/1.7.8-1sarge8 Mnenhy/0.7.1 X-Accept-Language: de, de-de, de-at, en In-Reply-To: X-archive-position: 10245 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: christoph.bier@web.de Precedence: bulk X-list: xfs Content-Length: 468 Lines: 16 Christoph Bier schrieb am 10.01.2007 14:35: [...] > I didn't change anything else after the mount failure mentioned > above! BUT: Reading fails while I wanted to have a look at the files > in lost+found. > [/var/log/syslog output: > http://www.zvisionwelt.de/tmpdownloads/read-failure-syslog.output] Only files in lost+found seem to be affected. I successfully opened some others files. [...] 
-- +++ Typografie-Regeln: http://zvisionwelt.de/downloads.html (1.6) From owner-xfs@oss.sgi.com Wed Jan 10 06:37:45 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 06:37:49 -0800 (PST) Received: from relay.sw.ru (mailhub.sw.ru [195.214.233.200]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0AEbhqw011879 for ; Wed, 10 Jan 2007 06:37:44 -0800 Received: from localhost ([192.168.0.119]) by relay.sw.ru (8.13.4/8.13.4) with ESMTP id l0AEacW4004584; Wed, 10 Jan 2007 17:36:40 +0300 (MSK) To: David Chinner Cc: Dmitriy Monakhov , Dmitriy Monakhov , linux-kernel@vger.kernel.org, devel@openvz.org, Andrew Morton , xfs@oss.sgi.com Subject: Re: [PATCH] incorrect direct io error handling References: <87d56he3tn.fsf@sw.ru> <20061218221515.GN44411608@melbourne.sgi.com> <87psagto4v.fsf@sw.ru> <20061220142631.GZ44411608@melbourne.sgi.com> From: Dmitriy Monakhov Date: Wed, 10 Jan 2007 17:36:57 +0300 In-Reply-To: <20061220142631.GZ44411608@melbourne.sgi.com> (David Chinner's message of "Thu, 21 Dec 2006 01:26:31 +1100") Message-ID: <87irffkkxi.fsf@sw.ru> User-Agent: Gnus/5.1008 (Gnus v5.10.8) Emacs/21.4 (gnu/linux) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-archive-position: 10246 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dmonakhov@sw.ru Precedence: bulk X-list: xfs Content-Length: 4230 Lines: 97 Sorry for long delay (russian holidays are very hard time :) ) David Chinner writes: > On Tue, Dec 19, 2006 at 09:07:12AM +0300, Dmitriy Monakhov wrote: >> David Chinner writes: >> > On Mon, Dec 18, 2006 at 04:22:44PM +0300, Dmitriy Monakhov wrote: >> >> diff --git a/mm/filemap.c b/mm/filemap.c >> >> index 8332c77..7c571dd 100644 >> >> --- a/mm/filemap.c >> >> +++ b/mm/filemap.c > > > >> > You comment in the first hunk that i_mutex may not be held here, >> > but there's no comment in __generic_file_aio_write_nolock() that the >> > i_mutex must be held for !S_ISBLK devices. >> Any one may call directly call generic_file_direct_write() with i_mutex not held. > > Only block devices based on the implementation (i.e. buffered I/O is > done here). but one can't call vmtruncate without the i_mutex held, > so if a filesystem is calling generic_file_direct_write() it won't > be able to use __generic_file_aio_write_nolock() without the i_mutex > held (because it can right now if it doesn't need the buffered I/O > fallback path), then > >> > >> >> @@ -2341,6 +2353,13 @@ ssize_t generic_file_aio_write_nolock(st >> >> ssize_t ret; >> >> >> >> BUG_ON(iocb->ki_pos != pos); >> >> + /* >> >> + * generic_file_buffered_write() may be called inside >> >> + * __generic_file_aio_write_nolock() even in case of >> >> + * O_DIRECT for non S_ISBLK files. So i_mutex must be held. >> >> + */ >> >> + if (!S_ISBLK(inode->i_mode)) >> >> + BUG_ON(!mutex_is_locked(&inode->i_mutex)); >> >> >> >> ret = __generic_file_aio_write_nolock(iocb, iov, nr_segs, >> >> &iocb->ki_pos); >> > >> > I note that you comment here in generic_file_aio_write_nolock(), >> > but it's not immediately obvious that this is refering to the >> > vmtruncate() call in __generic_file_aio_write_nolock(). >> This is not about vmtruncate(). __generic_file_aio_write_nolock() may >> call generic_file_buffered_write() even in case of O_DIRECT for !S_ISBLK, and > > No, the need for i_mutex is currently dependent on doing direct I/O > and the return value from generic_file_buffered_write(). > A filesystem that doesn't fall back to buffered I/O (e.g. 
XFS) can currently > use generic_file_aio_write_nolock() without needing to hold i_mutex. > use generic_file_aio_write_nolock() without needing to hold i_mutex. But it doesn't use it. XFS implement it's own write method with it's own locking rules and explicitly call generic_file_direct_write() in case of O_DIRECT. BTW XFS correctly handling ENOSPC in case of O_DIRECT (fs corruption not happend after error occur). > > Your change prevents that by introducing a vmtruncate() before the > generic_file_buffered_write() return value check, which means that a > filesystem now _must_ hold the i_mutex when calling > generic_file_aio_write_nolock() even when it doesn't do buffered I/O > through this path. Yes it's so. But it is just explicitly document the fact that every fs call generic_file_aio_write_nolock() with i_mutex held (where is no any fs that invoke it without i_mutex). As i understand Andrew Morton think so too: http://lkml.org/lkml/2006/12/12/67 I guess we can make that a rule (document it, add BUG_ON(!mutex_is_locked(..)) if it isn't a blockdev) if needs be. After really checking that this matches reality for all callers. > >> generic_file_buffered_write() has documented locking rules (i_mutex held). >> IMHO it is important to explicitly document this . And after we realize >> that i_mutex always held, vmtruncate() may be safely called. > > I don't think changing the locking semantics of > generic_file_aio_write_nolock() to require a lock for all > filesystem-based users is a good way to fix a filesystem specific > direct I/O problem which can be easily fixed in filesystem specific > code - i.e. call vmtruncate() in ext3_file_write() on failure.... Where are more than 10 filesystems where we have to fix it then. And fix is almost the same for all fs, so we have to do many copy/paste work IMHO fix it inside generic_file_aio_write_nolock is realy straightforward way. What do you think? > > Cheers, > > Dave. > -- > Dave Chinner > Principal Engineer > SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jan 10 07:22:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 07:22:08 -0800 (PST) Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0AFM0qw020227 for ; Wed, 10 Jan 2007 07:22:01 -0800 Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 20FBF18011EB7; Wed, 10 Jan 2007 09:21:07 -0600 (CST) Message-ID: <45A50462.40601@sandeen.net> Date: Wed, 10 Jan 2007 09:21:06 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (Macintosh/20061207) MIME-Version: 1.0 To: Christoph Bier CC: linux-xfs@oss.sgi.com Subject: Re: Mounting an external HDD fails each second time after xfs_repair References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10248 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 1377 Lines: 37 Christoph Bier wrote: > Hi all, > > I use 14 partitions (one extended partition; partition table see > below) on my new external 400GB HDD that is managed by LVM2 on > Debian Sarge with a vanilla kernel 2.6.19. > > The first time I mounted the HDD on my desktop everything worked > fine and I was able to copy 71GB of data. 
I unmounted and exported > the HDD and imported and mounted it on my laptop (commands see > below) running Ubuntu Edgy. Fine, too, I was able to read the data. > I exported again and imported again on my desktop. But now mounting > fails with > > mount: /dev/mm-extern/audiovideo: can't read superblock > > /var/log/syslog prints: > [Output: > http://www.zvisionwelt.de/tmpdownloads/mount-failure-syslog.output] These are not xfs errors, you have device problems: Jan 10 11:43:21 localhost kernel: sd 0:0:0:0: SCSI error: return code = 0x00070000 Jan 10 11:43:21 localhost kernel: end_request: I/O error, dev sda, sector 234300481 Jan 10 11:43:22 localhost kernel: I/O error in filesystem ("dm-1") meta-data dev dm-1 block 0x17495339 ("xlog_bread") error 5 buf count 262144 Jan 10 11:43:22 localhost kernel: XFS: empty log check failed Jan 10 11:43:22 localhost kernel: XFS: log mount/recovery failed: error 5 Jan 10 11:43:22 localhost kernel: XFS: log mount failed XFS is responding -properly- to an I/O error from your disk. -Eric From owner-xfs@oss.sgi.com Wed Jan 10 08:13:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 08:13:14 -0800 (PST) Received: from ciao.gmane.org (main.gmane.org [80.91.229.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0AGD7qw029106 for ; Wed, 10 Jan 2007 08:13:09 -0800 Received: from list by ciao.gmane.org with local (Exim 4.43) id 1H4g3Y-0003Ak-IQ for linux-xfs@oss.sgi.com; Wed, 10 Jan 2007 17:12:05 +0100 Received: from p54a57214.dip.t-dialin.net ([84.165.114.20]) by main.gmane.org with esmtp (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 10 Jan 2007 17:12:04 +0100 Received: from christoph.bier by p54a57214.dip.t-dialin.net with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 10 Jan 2007 17:12:04 +0100 X-Injected-Via-Gmane: http://gmane.org/ To: linux-xfs@oss.sgi.com From: Christoph Bier Subject: Re: Mounting an external HDD fails each second time after xfs_repair Date: Wed, 10 Jan 2007 17:11:55 +0100 Message-ID: References: <45A50462.40601@sandeen.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Complaints-To: usenet@sea.gmane.org X-Gmane-NNTP-Posting-Host: p54a57214.dip.t-dialin.net User-Agent: Mozilla/5.0 (X11; U; Linux i686; de-AT; rv:1.7.8) Gecko/20061113 Debian/1.7.8-1sarge8 Mnenhy/0.7.1 X-Accept-Language: de, de-de, de-at, en In-Reply-To: <45A50462.40601@sandeen.net> X-archive-position: 10249 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: christoph.bier@web.de Precedence: bulk X-list: xfs Content-Length: 1585 Lines: 43 Eric Sandeen schrieb am 10.01.2007 16:21: > Christoph Bier wrote: >> Hi all, >> >> I use 14 partitions (one extended partition; partition table see >> below) on my new external 400GB HDD that is managed by LVM2 on >> Debian Sarge with a vanilla kernel 2.6.19. >> >> The first time I mounted the HDD on my desktop everything worked >> fine and I was able to copy 71GB of data. I unmounted and exported >> the HDD and imported and mounted it on my laptop (commands see >> below) running Ubuntu Edgy. Fine, too, I was able to read the data. >> I exported again and imported again on my desktop. 
But now mounting >> fails with >> >> mount: /dev/mm-extern/audiovideo: can't read superblock >> >> /var/log/syslog prints: >> [Output: >> http://www.zvisionwelt.de/tmpdownloads/mount-failure-syslog.output] > > These are not xfs errors, you have device problems: > > Jan 10 11:43:21 localhost kernel: sd 0:0:0:0: SCSI error: return code = > 0x00070000 > Jan 10 11:43:21 localhost kernel: end_request: I/O error, dev sda, > sector 234300481 > Jan 10 11:43:22 localhost kernel: I/O error in filesystem ("dm-1") > meta-data dev dm-1 block 0x17495339 ("xlog_bread") error 5 buf > count 262144 > Jan 10 11:43:22 localhost kernel: XFS: empty log check failed > Jan 10 11:43:22 localhost kernel: XFS: log mount/recovery failed: error 5 > Jan 10 11:43:22 localhost kernel: XFS: log mount failed > > XFS is responding -properly- to an I/O error from your disk. Hm. Ok, thanks for your answer! Best, Christoph -- +++ Typografie-Regeln: http://zvisionwelt.de/downloads.html (1.6) From owner-xfs@oss.sgi.com Wed Jan 10 09:20:14 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 09:20:21 -0800 (PST) Received: from netcenter.hu (ns.netcenter.hu [195.228.254.57]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0AHKBqw013602 for ; Wed, 10 Jan 2007 09:20:13 -0800 Received: from dcccs (dsl5401D093.pool.t-online.hu [84.1.208.147]) by netcenter.hu (8.13.4/8.12.8) with SMTP id l0AHGYT0029561; Wed, 10 Jan 2007 18:16:34 +0100 Message-ID: <049c01c734db$5089aa70$0400a8c0@dcccs> From: "Janos Haar" To: "David Chinner" Cc: , , References: <000d01c72127$3d7509b0$0400a8c0@dcccs> <20061217224457.GN33919298@melbourne.sgi.com> <026501c72237$0464f7a0$0400a8c0@dcccs> <20061218062444.GH44411608@melbourne.sgi.com> <027b01c7227d$0e26d1f0$0400a8c0@dcccs> <20061218223637.GP44411608@melbourne.sgi.com> <001a01c722fd$df5ca710$0400a8c0@dcccs> <20061219025229.GT33919298@melbourne.sgi.com> <20061219044700.GW33919298@melbourne.sgi.com> <041601c729b6$f81e4af0$0400a8c0@dcccs> <20070107231402.GU44411608@melbourne.sgi.com> Subject: Re: xfslogd-spinlock bug? Date: Wed, 10 Jan 2007 18:18:08 +0100 MIME-Version: 1.0 Content-Type: text/plain; charset="ISO-8859-1" Content-Transfer-Encoding: 8bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2800.1807 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1896 X-archive-position: 10250 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: djani22@netcenter.hu Precedence: bulk X-list: xfs Content-Length: 7071 Lines: 170 ----- Original Message ----- From: "David Chinner" To: "Haar János" Cc: "David Chinner" ; ; Sent: Monday, January 08, 2007 12:14 AM Subject: Re: xfslogd-spinlock bug? > On Wed, Dec 27, 2006 at 01:58:06PM +0100, Haar János wrote: > > Hello, > > > > ----- Original Message ----- > > From: "David Chinner" > > To: "David Chinner" > > Cc: "Haar János" ; ; > > > > Sent: Tuesday, December 19, 2006 5:47 AM > > Subject: Re: xfslogd-spinlock bug? > > > > > > > On Tue, Dec 19, 2006 at 01:52:29PM +1100, David Chinner wrote: > > > > > > The filesystem was being shutdown so xfs_inode_item_destroy() just > > > frees the inode log item without removing it from the AIL. I'll fix that, > > > and see if i have any luck.... > > > > > > So I'd still try that patch i sent in the previous email... > > > > I still using the patch, but didnt shows any messages at this point. 
> > > > I'v got 3 crash/reboot, but 2 causes nbd disconneted, and this one: > > > > Dec 27 13:41:29 dy-base BUG: warning at > > kernel/mutex.c:220/__mutex_unlock_common_slowpath() > > Dec 27 13:41:29 dy-base Unable to handle kernel paging request at > > 0000000066604480 RIP: > > Dec 27 13:41:29 dy-base [] resched_task+0x12/0x64 > > Dec 27 13:41:29 dy-base PGD 115246067 PUD 0 > > Dec 27 13:41:29 dy-base Oops: 0000 [1] SMP > > Dec 27 13:41:29 dy-base CPU 1 > > Dec 27 13:41:29 dy-base Modules linked in: nbd rd netconsole e1000 video > > Dec 27 13:41:29 dy-base Pid: 4069, comm: httpd Not tainted 2.6.19 #3 > > Dec 27 13:41:29 dy-base RIP: 0010:[] [] > > resched_task+0x12/0x64 > > Dec 27 13:41:29 dy-base RSP: 0018:ffff810105c01b78 EFLAGS: 00010083 > > Dec 27 13:41:29 dy-base RAX: ffffffff807d5800 RBX: 00001749fd97c214 RCX: > > Different corruption in RBX here. Looks like semi-random garbage there. > I wonder - what's the mac and ip address(es) of your machine and nbd > servers? dy-base: eth0 Link encap:Ethernet HWaddr 00:90:27:A2:7B:8B eth0:1 Link encap:Ethernet HWaddr 00:90:27:A2:7B:8B eth0:2 Link encap:Ethernet HWaddr 00:90:27:A2:7B:8B eth1 Link encap:Ethernet HWaddr 00:07:E9:32:E6:D8 eth1:1 Link encap:Ethernet HWaddr 00:07:E9:32:E6:D8 eth1:2 Link encap:Ethernet HWaddr 00:07:E9:32:E6:D8 eth2 Link encap:Ethernet HWaddr 00:07:E9:32:E6:D9 node1-4: 00:0E:0C:A0:E5:7E 00:0E:0C:A0:EF:5E 00:0E:0C:A0:E9:58 00:0E:0C:A0:EF:A3 Some new stuff: Jan 8 18:11:16 dy-base BUG: warning at kernel/mutex.c:220/__mutex_unlock_common_slowpath() Jan 8 18:11:16 dy-base Unable to handle kernel NULL pointer dereference at 0000000000000008 RIP: Jan 8 18:11:16 dy-base [] resched_task+0x1a/0x64 Jan 8 18:11:16 dy-base PGD 9859d067 PUD 4e347067 PMD 0 Jan 8 18:11:16 dy-base Oops: 0000 [1] SMP Jan 8 18:11:16 dy-base CPU 3 Jan 8 18:11:16 dy-base Modules linked in: nbd rd netconsole e1000 Jan 8 18:11:16 dy-base Pid: 3471, comm: ls Not tainted 2.6.19 #4 Jan 8 18:11:16 dy-base RIP: 0010:[] [] resched_task+0x1a/0x64 Jan 8 18:11:16 dy-base RSP: 0000:ffff81011fd1fb10 EFLAGS: 00010097 Jan 8 18:11:16 dy-base RAX: ffffffff80810800 RBX: 000004f0e2850659 RCX: ffff81010c0a2000 Jan 8 18:11:16 dy-base RDX: 0000000000000000 RSI: ffff81000583c368 RDI: ffff81002b809830 Jan 8 18:11:16 dy-base RBP: ffff81011fd1fb10 R08: 0000000000000000 R09: 0000000000000080 Jan 8 18:11:16 dy-base R10: 0000000000000080 R11: ffff81000584d280 R12: ffff8100d852a7b0 Jan 8 18:11:16 dy-base R13: 0000000000000003 R14: 0000000000000001 R15: 0000000000000001 Jan 8 18:11:16 dy-base FS: 00002b69786c8a90(0000) GS:ffff81011fcb98c0(0000) knlGS:0000000000000000 Jan 8 18:11:16 dy-base CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b Jan 8 18:11:16 dy-base CR2: 0000000000000008 CR3: 0000000056ec5000 CR4: 00000000000006e0 Jan 8 18:11:16 dy-base Process ls (pid: 3471, threadinfo ffff810080d80000, task ffff8100dcdea770) Jan 8 18:11:16 dy-base Stack: ffff81011fd1fb90 ffffffff80223f3f ffff81011fd1fb50 0000000000000282 Jan 8 18:11:16 dy-base 0000000000000001 0000000000000001 ffff81000583ba00 00000003032ca4b8 Jan 8 18:11:16 dy-base 000000000000000a 0000000000000082 ffff810103539d00 ffff810013867bf8 Jan 8 18:11:16 dy-base Call Trace: Jan 8 18:11:16 dy-base [] try_to_wake_up+0x3a7/0x3dc Jan 8 18:11:16 dy-base [] default_wake_function+0xd/0xf Jan 8 18:11:16 dy-base [] autoremove_wake_function+0x11/0x38 Jan 8 18:11:16 dy-base [] __wake_up_common+0x3e/0x68 Jan 8 18:11:16 dy-base [] __wake_up+0x38/0x50 Jan 8 18:11:16 dy-base [] sk_stream_write_space+0x5d/0x83 Jan 8 18:11:16 dy-base [] 
tcp_check_space+0x8f/0xcd Jan 8 18:11:16 dy-base [] tcp_rcv_established+0x116/0x76e Jan 8 18:11:16 dy-base [] tcp_v4_do_rcv+0x2d/0x322 Jan 8 18:11:16 dy-base [] tcp_v4_rcv+0x8bb/0x925 Jan 8 18:11:16 dy-base [] ip_local_deliver_finish+0x0/0x1ce Jan 8 18:11:16 dy-base [] ip_local_deliver+0x172/0x238 Jan 8 18:11:16 dy-base [] ip_rcv+0x44f/0x497 Jan 8 18:11:16 dy-base [] :e1000:e1000_alloc_rx_buffers+0x1e7/0x2cb Jan 8 18:11:16 dy-base [] netif_receive_skb+0x1ee/0x255 Jan 8 18:11:16 dy-base [] process_backlog+0x8a/0x10f Jan 8 18:11:16 dy-base [] net_rx_action+0xa9/0x16e Jan 8 18:11:16 dy-base [] __do_softirq+0x57/0xc7 Jan 8 18:11:16 dy-base [] call_softirq+0x1c/0x28 Jan 8 18:11:16 dy-base [] do_softirq+0x34/0x87 Jan 8 18:11:16 dy-base [] irq_exit+0x3f/0x41 Jan 8 18:11:16 dy-base [] do_IRQ+0xa9/0xc7 Jan 8 18:11:16 dy-base [] ret_from_intr+0x0/0xa Jan 8 18:11:16 dy-base Jan 8 18:11:16 dy-base Jan 8 18:11:16 dy-base Code: 48 03 42 08 8b 00 85 c0 7e 0a 0f 0b 68 ed 80 64 80 c2 f0 03 Jan 8 18:11:16 dy-base RIP [] resched_task+0x1a/0x64 Jan 8 18:11:16 dy-base RSP Jan 8 18:11:16 dy-base CR2: 0000000000000008 Jan 8 18:11:16 dy-base <0>Kernel panic - not syncing: Fatal exception Jan 8 18:11:16 dy-base Jan 8 18:11:16 dy-base Rebooting in 5 seconds.. (i have disabled the slab debuggint, because i need more perf.) Thanks, Janos > > (i.e. I suspect this is a nbd problem, not an XFS problem) > > Cheers, > > Dave. > -- > Dave Chinner > Principal Engineer > SGI Australian Software Group > - > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Please read the FAQ at http://www.tux.org/lkml/ From owner-xfs@oss.sgi.com Wed Jan 10 14:09:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 14:09:53 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0AM9iqw010560 for ; Wed, 10 Jan 2007 14:09:46 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA04696; Thu, 11 Jan 2007 09:08:48 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0AM8k7Y89475357; Thu, 11 Jan 2007 09:08:46 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0AM8hM689432358; Thu, 11 Jan 2007 09:08:43 +1100 (AEDT) Date: Thu, 11 Jan 2007 09:08:43 +1100 From: David Chinner To: "Mr. Berkley Shands" Cc: David Chinner , Eric Sandeen , Dave Lloyd , linux-xfs@oss.sgi.com Subject: Re: XFS and 2.6.18 -> 2.6.20-rc3 Message-ID: <20070110220843.GA44411608@melbourne.sgi.com> References: <45A27BC7.2020709@exegy.com> <20070109012212.GG44411608@melbourne.sgi.com> <20070109072535.GH44411608@melbourne.sgi.com> <45A4EA2B.5050505@exegy.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45A4EA2B.5050505@exegy.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10251 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 814 Lines: 30 On Wed, Jan 10, 2007 at 07:29:15AM -0600, Mr. 
Berkley Shands wrote: > With a fresh install of the O/S on a non-broken motherboard, the change to > > /proc/sys/vm/dirty_ratio > > restores most of the lost performance from 2.6.18, > as of 2.6.20-rc4. The difference is 10% to 15% without the dirty_ratio > change (40 is the default, 10 gives the old performance). .... > So whatever needs to be tweaked in the VM system seems to be the key. > > Thanks to all for getting this regression repaired. Well, it's not repaired as such - you've got a WAR for the problem. I'll report the problem to lkml so that the VM gurus can try to really fix the problem.... Thanks for confirming that the dirty_ratio tweak also worked for you. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jan 10 15:05:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 15:05:49 -0800 (PST) Received: from ty.sabi.co.UK (82-69-39-138.dsl.in-addr.zen.co.uk [82.69.39.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0AN5gqw021828 for ; Wed, 10 Jan 2007 15:05:44 -0800 Received: from from [127.0.0.1] (helo=base.ty.sabi.co.UK) by ty.sabi.co.UK with esmtp(Exim 4.62 #1) id 1H4m6L-0004ta-Ap for ; Wed, 10 Jan 2007 22:39:21 +0000 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <17829.27416.823078.9589@base.ty.sabi.co.UK> Date: Wed, 10 Jan 2007 22:39:20 +0000 X-Face: SMJE]JPYVBO-9UR%/8d'mG.F!@.,l@c[f'[%S8'BZIcbQc3/">GrXDwb#;fTRGNmHr^JFb SAptvwWc,0+z+~p~"Gdr4H$(|N(yF(wwCM2bW0~U?HPEE^fkPGx^u[*[yV.gyB!hDOli}EF[\cW*S H&spRGFL}{`bj1TaD^l/"[ msn( /TH#THs{Hpj>)]f> Subject: Re: Mounting an external HDD fails each second time after xfs_repair In-Reply-To: References: X-Mailer: VM 7.17 under 21.4 (patch 20) XEmacs Lucid From: pg_xfs@xfs.to.sabi.co.UK (Peter Grandi) X-Disclaimer: This message contains only personal opinions X-archive-position: 10252 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pg_xfs@xfs.to.sabi.co.UK Precedence: bulk X-list: xfs Content-Length: 1133 Lines: 27 >>> On Wed, 10 Jan 2007 14:35:34 +0100, Christoph Bier >>> said: christoph.bier> Hi all, I use 14 partitions (one extended christoph.bier> partition; partition table see below) on my new christoph.bier> external 400GB HDD that is managed by LVM2 on christoph.bier> Debian Sarge with a vanilla kernel 2.6.19. Fascinating setup :-). We can of course assume, given the care with which you have designed your setup, and the astuteness of your choice to omit any details as to the hardware (what type of HD interface, what type of external bus, the bridge and the host adapter chipsets, the disk and the power supply ratings) that you have advisedly chosen the external disk for 100% reliable operation, as you have checked thoroughly that the power supply of the case is sufficient and that both chipsets (USB? FW?) are well known for being bug-free and reliable for use with the relevant GNU/Linux mass storage driver... :-) christoph.bier> The first time I mounted the HDD on my desktop christoph.bier> everything worked fine [ ... ] A typo here: you typed "worked fine" instead of "seemed to work fine". 
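Stepping back to the dirty_ratio workaround that Berkley confirmed in the "XFS and 2.6.18 -> 2.6.20-rc3" message above: purely for completeness, here is a minimal sketch of applying it programmatically. It is nothing more than a transliteration of writing 10 (down from the default of 40) into /proc/sys/vm/dirty_ratio; it has to run as root, the value is only the one reported to recover the 2.6.18-era throughput on that particular box, and, as Dave points out, this is a workaround for the VM regression, not a fix.

#include <stdio.h>

/* Lower vm.dirty_ratio from its default (40) to the value (10) that
 * restored the old write throughput in the report above.  Equivalent
 * to: echo 10 > /proc/sys/vm/dirty_ratio */
int main(void)
{
	FILE *f = fopen("/proc/sys/vm/dirty_ratio", "w");

	if (!f) {
		perror("/proc/sys/vm/dirty_ratio");
		return 1;
	}
	fprintf(f, "10\n");
	return fclose(f) ? 1 : 0;
}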
From owner-xfs@oss.sgi.com Wed Jan 10 19:35:42 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 19:35:46 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0B3Zcqw007918 for ; Wed, 10 Jan 2007 19:35:41 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA13941; Thu, 11 Jan 2007 14:34:36 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0B3YY7Y85407471; Thu, 11 Jan 2007 14:34:35 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0B3YUYo88980444; Thu, 11 Jan 2007 14:34:30 +1100 (AEDT) Date: Thu, 11 Jan 2007 14:34:30 +1100 From: David Chinner To: Janos Haar Cc: David Chinner , linux-xfs@oss.sgi.com, linux-kernel@vger.kernel.org Subject: Re: xfslogd-spinlock bug? Message-ID: <20070111033430.GA33919298@melbourne.sgi.com> References: <026501c72237$0464f7a0$0400a8c0@dcccs> <20061218062444.GH44411608@melbourne.sgi.com> <027b01c7227d$0e26d1f0$0400a8c0@dcccs> <20061218223637.GP44411608@melbourne.sgi.com> <001a01c722fd$df5ca710$0400a8c0@dcccs> <20061219025229.GT33919298@melbourne.sgi.com> <20061219044700.GW33919298@melbourne.sgi.com> <041601c729b6$f81e4af0$0400a8c0@dcccs> <20070107231402.GU44411608@melbourne.sgi.com> <049c01c734db$5089aa70$0400a8c0@dcccs> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <049c01c734db$5089aa70$0400a8c0@dcccs> User-Agent: Mutt/1.4.2.1i X-archive-position: 10253 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 2455 Lines: 57 On Wed, Jan 10, 2007 at 06:18:08PM +0100, Janos Haar wrote: > From: "David Chinner" > > Different corruption in RBX here. Looks like semi-random garbage there. > > I wonder - what's the mac and ip address(es) of your machine and nbd > > servers? > > dy-base: no matches. Oh well, it was a long shot. > Some new stuff: > Jan 8 18:11:16 dy-base RAX: ffffffff80810800 RBX: 000004f0e2850659 RCX: RBX trashed again with more random garbage. 
> Jan 8 18:11:16 dy-base [] default_wake_function+0xd/0xf > Jan 8 18:11:16 dy-base [] > autoremove_wake_function+0x11/0x38 > Jan 8 18:11:16 dy-base [] __wake_up_common+0x3e/0x68 > Jan 8 18:11:16 dy-base [] __wake_up+0x38/0x50 > Jan 8 18:11:16 dy-base [] > sk_stream_write_space+0x5d/0x83 > Jan 8 18:11:16 dy-base [] tcp_check_space+0x8f/0xcd > Jan 8 18:11:16 dy-base [] > tcp_rcv_established+0x116/0x76e > Jan 8 18:11:16 dy-base [] tcp_v4_do_rcv+0x2d/0x322 > Jan 8 18:11:16 dy-base [] tcp_v4_rcv+0x8bb/0x925 > Jan 8 18:11:16 dy-base [] > ip_local_deliver_finish+0x0/0x1ce > Jan 8 18:11:16 dy-base [] ip_local_deliver+0x172/0x238 > Jan 8 18:11:16 dy-base [] ip_rcv+0x44f/0x497 > Jan 8 18:11:16 dy-base [] > :e1000:e1000_alloc_rx_buffers+0x1e7/0x2cb > Jan 8 18:11:16 dy-base [] netif_receive_skb+0x1ee/0x255 > Jan 8 18:11:16 dy-base [] process_backlog+0x8a/0x10f > Jan 8 18:11:16 dy-base [] net_rx_action+0xa9/0x16e > Jan 8 18:11:16 dy-base [] __do_softirq+0x57/0xc7 > Jan 8 18:11:16 dy-base [] call_softirq+0x1c/0x28 > Jan 8 18:11:16 dy-base [] do_softirq+0x34/0x87 > Jan 8 18:11:16 dy-base [] irq_exit+0x3f/0x41 > Jan 8 18:11:16 dy-base [] do_IRQ+0xa9/0xc7 > Jan 8 18:11:16 dy-base [] ret_from_intr+0x0/0xa ...... > > (i.e. I suspect this is a nbd problem, not an XFS problem) There's something seriously wrong in your kernel that has, AFAICT, nothing to do with XFS. I suggest talking to the NBD folk as that is the only unusualy thing that I can see that you are using.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jan 10 22:24:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 22:24:53 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0B6Ojqw009412 for ; Wed, 10 Jan 2007 22:24:46 -0800 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA17847; Thu, 11 Jan 2007 17:23:46 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 7D6AC58CF82A; Thu, 11 Jan 2007 17:23:46 +1100 (EST) To: xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 960028 - XFS tripping bug in cancel_dirty_pages in 2.6.20-rc3 Message-Id: <20070111062346.7D6AC58CF82A@chook.melbourne.sgi.com> Date: Thu, 11 Jan 2007 17:23:46 +1100 (EST) From: dgc@sgi.com (David Chinner) X-archive-position: 10254 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1090 Lines: 28 Unmap pages before removing them from the page cache. The new cancel_dirty_pages() code found that XFS was removing page from the page cache that had dirty page table entries. XFS invalidates page cache pages via internal interfaces which are implemented via truncate_inode_pages which does not remove the page mapping first. Switch to using invalidate_inode_pages2_range() which does almost the same thing except it also removes page table mappings as expected by cancel_dirty_pages. 
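For readers unfamiliar with the two page cache calls named in that description, the sketch below shows the shape of the conversion being made. It is only an illustration against the 2.6.20-era APIs, not the actual xfs-kern:27909a diff: the helper name and the plain inode argument are hypothetical simplifications of the real code in fs/xfs/linux-2.6/xfs_fs_subr.c.

/*
 * Illustrative sketch (hypothetical helper, simplified arguments):
 * convert the byte range to page indices and call
 * invalidate_inode_pages2_range(), which removes pte mappings before
 * dropping the pages, instead of truncate_inode_pages(), which does not.
 */
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static void example_flushinval_pages(struct inode *inode, loff_t first, loff_t last)
{
	struct address_space *mapping = inode->i_mapping;

	/* Old call: the pages are removed, but any mapped ptes are left
	 * behind, which is what trips the cancel_dirty_pages() warning. */
	/* truncate_inode_pages(mapping, first); */

	/* New call: unmap the ptes first, then remove the pages. */
	invalidate_inode_pages2_range(mapping,
				      first >> PAGE_CACHE_SHIFT,
				      last >> PAGE_CACHE_SHIFT);
}

As the review thread later in this digest shows, the byte-offset-to-page-index conversion and the partial-page cases still needed further iteration, so treat the above only as a picture of the intended API swap.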
Date: Thu Jan 11 17:23:10 AEDT 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: hch@infradead.org,lachlan The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:27909a fs/xfs/linux-2.6/xfs_fs_subr.c - 1.48 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_fs_subr.c.diff?r1=text&tr1=1.48&r2=text&tr2=1.47&f=h - Convert truncate_inode_pages to invalidate_inode_pages2_range so that page table mappings are removed before we remove the pages from the page cache. From owner-xfs@oss.sgi.com Wed Jan 10 22:50:56 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 10 Jan 2007 22:51:00 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0B6orqw014825 for ; Wed, 10 Jan 2007 22:50:55 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA18495; Thu, 11 Jan 2007 17:50:00 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0B6nx7Y84658087; Thu, 11 Jan 2007 17:49:59 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0B6nwK889469774; Thu, 11 Jan 2007 17:49:58 +1100 (AEDT) Date: Thu, 11 Jan 2007 17:49:58 +1100 From: David Chinner To: Lachlan McIlroy Cc: David Chinner , xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: fix mapping invalidation callouts Message-ID: <20070111064958.GC33919298@melbourne.sgi.com> References: <20070108040309.GX33919298@melbourne.sgi.com> <20070110062344.GR33919298@melbourne.sgi.com> <45A4A645.5010708@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45A4A645.5010708@sgi.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10255 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 2603 Lines: 81 On Wed, Jan 10, 2007 at 08:39:33AM +0000, Lachlan McIlroy wrote: > David Chinner wrote: > >On Mon, Jan 08, 2007 at 03:03:09PM +1100, David Chinner wrote: > > > >>With the recent cancel_dirty_page() changes, a warning was > >>added if we cancel a dirty page that is still mapped into > >>the page tables. > >>This happens in XFS from fs_tosspages() and fs_flushinval_pages() > >>because they call truncate_inode_pages(). > >> > >>truncate_inode_pages() does not invalidate existing page mappings; > >>it is expected taht this is called only when truncating the file > >>or destroying the inode and on both these cases there can be > >>no mapped ptes. However, we call this when doing direct I/O writes > >>to remove pages from the page cache. As a result, we can rip > >>a page from the page cache that still has mappings attached. > >> > >>The correct fix is to use invalidate_inode_pages2_range() instead > >>of truncate_inode_pages(). They essentially do the same thing, but > >>the former also removes any pte mappings before removing the page > >>from the page cache. > >> > >>Comments? > >> > >>Cheers, > >> > >>Dave. 
> >>-- > >>Dave Chinner > >>Principal Engineer > >>SGI Australian Software Group > >> > >> > >>--- > >>fs/xfs/linux-2.6/xfs_fs_subr.c | 10 ++++++++-- > >>1 file changed, 8 insertions(+), 2 deletions(-) > >> > >>Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c > >>=================================================================== > >>--- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_fs_subr.c 2006-12-12 > >>12:05:17.000000000 +1100 > >>+++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c 2007-01-08 > >>09:30:22.056571711 +1100 > >>@@ -21,6 +21,8 @@ int fs_noerr(void) { return 0; } > >>int fs_nosys(void) { return ENOSYS; } > >>void fs_noval(void) { return; } > >> > >>+#define XFS_OFF_TO_PCSIZE(off) \ > >>+ (((off) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT) > > > > > >I don't think this is right. > > > >Assuming 4k page size, first = 2k, last = 6k will result in > >invalidating page indexes 1 and 2 i.e. offset 4k -> 12k. In fact, > >we want to invalidate pages 0 and 1. > > > >IOWs, I think it should be: > > > >+#define XFS_OFF_TO_PCINDEX(off) ((off) >> PAGE_CACHE_SHIFT) > > > >Comments? > > > > Makes sense to me. Yeah, you'd think so. The first xfsqa run I do -after- checking it in (been running for 24 hours) I get a stack dump with the warning in cancel_dirty_page(), so clearly this isn't right either. I'm not sure WTF is going on here. Chatz, don't push that mod yet.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jan 11 00:01:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 11 Jan 2007 00:01:36 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0B81Sqw031720 for ; Thu, 11 Jan 2007 00:01:30 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id TAA20075; Thu, 11 Jan 2007 19:00:34 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0B80X7Y89660575; Thu, 11 Jan 2007 19:00:34 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0B80WGe89639904; Thu, 11 Jan 2007 19:00:32 +1100 (AEDT) Date: Thu, 11 Jan 2007 19:00:32 +1100 From: David Chinner To: David Chinner Cc: Lachlan McIlroy , xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: fix mapping invalidation callouts Message-ID: <20070111080032.GD33919298@melbourne.sgi.com> References: <20070108040309.GX33919298@melbourne.sgi.com> <20070110062344.GR33919298@melbourne.sgi.com> <45A4A645.5010708@sgi.com> <20070111064958.GC33919298@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070111064958.GC33919298@melbourne.sgi.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10256 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1470 Lines: 47 On Thu, Jan 11, 2007 at 05:49:58PM +1100, David Chinner wrote: > On Wed, Jan 10, 2007 at 08:39:33AM +0000, Lachlan McIlroy wrote: > > David Chinner wrote: > > >>+#define XFS_OFF_TO_PCSIZE(off) \ > > >>+ (((off) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT) > > > > > > > > >I don't think this is right. > > > > > >Assuming 4k page size, first = 2k, last = 6k will result in > > >invalidating page indexes 1 and 2 i.e. 
offset 4k -> 12k. In fact, > > >we want to invalidate pages 0 and 1. > > > > > >IOWs, I think it should be: > > > > > >+#define XFS_OFF_TO_PCINDEX(off) ((off) >> PAGE_CACHE_SHIFT) > > > > > >Comments? > > > > > > > Makes sense to me. > > Yeah, you'd think so. The first xfsqa run I do -after- checking it in > (been running for 24 hours) I get a stack dump with the warning > in cancel_dirty_page(), so clearly this isn't right either. I'm > not sure WTF is going on here. Of course, I just realised that this is 2.6.19 that I'm testing on (fmeh) and so the code is different - cancel-dirty_page() doesn't exist in this tree, and the warning is coming from invalidate_inode_pages2_range() because invalidate_complete_page2() is returning an error for some reason..... Looks like it's a partial page truncation problem. invalidate_inode_pages2_range() fails on partial page truncation when part of the page (i.e. a bufferhead) is dirty. This looks like a _big_ mess. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jan 11 00:02:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 11 Jan 2007 00:02:45 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0B82bqw032039 for ; Thu, 11 Jan 2007 00:02:39 -0800 Received: from [134.15.251.13] ([134.15.251.13]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id TAA20095; Thu, 11 Jan 2007 19:01:39 +1100 Message-ID: <45A5EEE1.6000802@melbourne.sgi.com> Date: Thu, 11 Jan 2007 19:01:37 +1100 From: David Chatterton Reply-To: chatz@melbourne.sgi.com Organization: SGI User-Agent: Thunderbird 1.5.0.9 (Windows/20061207) MIME-Version: 1.0 To: David Chinner CC: Lachlan McIlroy , xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: fix mapping invalidation callouts References: <20070108040309.GX33919298@melbourne.sgi.com> <20070110062344.GR33919298@melbourne.sgi.com> <45A4A645.5010708@sgi.com> <20070111064958.GC33919298@melbourne.sgi.com> In-Reply-To: <20070111064958.GC33919298@melbourne.sgi.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10257 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: chatz@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 2770 Lines: 86 David Chinner wrote: > On Wed, Jan 10, 2007 at 08:39:33AM +0000, Lachlan McIlroy wrote: >> David Chinner wrote: >>> On Mon, Jan 08, 2007 at 03:03:09PM +1100, David Chinner wrote: >>> >>>> With the recent cancel_dirty_page() changes, a warning was >>>> added if we cancel a dirty page that is still mapped into >>>> the page tables. >>>> This happens in XFS from fs_tosspages() and fs_flushinval_pages() >>>> because they call truncate_inode_pages(). >>>> >>>> truncate_inode_pages() does not invalidate existing page mappings; >>>> it is expected taht this is called only when truncating the file >>>> or destroying the inode and on both these cases there can be >>>> no mapped ptes. However, we call this when doing direct I/O writes >>>> to remove pages from the page cache. As a result, we can rip >>>> a page from the page cache that still has mappings attached. >>>> >>>> The correct fix is to use invalidate_inode_pages2_range() instead >>>> of truncate_inode_pages(). They essentially do the same thing, but >>>> the former also removes any pte mappings before removing the page >>> >from the page cache. 
>>>> Comments? >>>> >>>> Cheers, >>>> >>>> Dave. >>>> -- >>>> Dave Chinner >>>> Principal Engineer >>>> SGI Australian Software Group >>>> >>>> >>>> --- >>>> fs/xfs/linux-2.6/xfs_fs_subr.c | 10 ++++++++-- >>>> 1 file changed, 8 insertions(+), 2 deletions(-) >>>> >>>> Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c >>>> =================================================================== >>>> --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_fs_subr.c 2006-12-12 >>>> 12:05:17.000000000 +1100 >>>> +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c 2007-01-08 >>>> 09:30:22.056571711 +1100 >>>> @@ -21,6 +21,8 @@ int fs_noerr(void) { return 0; } >>>> int fs_nosys(void) { return ENOSYS; } >>>> void fs_noval(void) { return; } >>>> >>>> +#define XFS_OFF_TO_PCSIZE(off) \ >>>> + (((off) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT) >>> >>> I don't think this is right. >>> >>> Assuming 4k page size, first = 2k, last = 6k will result in >>> invalidating page indexes 1 and 2 i.e. offset 4k -> 12k. In fact, >>> we want to invalidate pages 0 and 1. >>> >>> IOWs, I think it should be: >>> >>> +#define XFS_OFF_TO_PCINDEX(off) ((off) >> PAGE_CACHE_SHIFT) >>> >>> Comments? >>> >> Makes sense to me. > > Yeah, you'd think so. The first xfsqa run I do -after- checking it in > (been running for 24 hours) I get a stack dump with the warning > in cancel_dirty_page(), so clearly this isn't right either. I'm > not sure WTF is going on here. > > Chatz, don't push that mod yet.... > Ack. Lets get Donald to pull 2.6.20-rc into 2.6.x-xfs, or do we need to wait until you have this fixed? David -- David Chatterton XFS Engineering Manager SGI Australia From owner-xfs@oss.sgi.com Thu Jan 11 02:48:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 11 Jan 2007 02:48:22 -0800 (PST) Received: from www708.sakura.ne.jp (www708.sakura.ne.jp [59.106.19.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0BAmGqw005907 for ; Thu, 11 Jan 2007 02:48:17 -0800 Received: from www708.sakura.ne.jp (localhost [127.0.0.1]) by www708.sakura.ne.jp (8.12.11/8.12.11) with ESMTP id l0BAAssK006725 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO) for ; Thu, 11 Jan 2007 19:10:54 +0900 (JST) (envelope-from china-career@www708.sakura.ne.jp) Received: (from china-career@localhost) by www708.sakura.ne.jp (8.12.11/8.12.11/Submit) id l0BAAsPj006724; Thu, 11 Jan 2007 19:10:54 +0900 (JST) (envelope-from china-career) Date: Thu, 11 Jan 2007 19:10:54 +0900 (JST) Message-Id: <200701111010.l0BAAsPj006724@www708.sakura.ne.jp> To: xfs@oss.sgi.com Subject: PART-TIME JOB OFFER From: Raymond Limited Reply-To: Millert1000@gmail.com MIME-Version: 1.0 Content-Type: text/plain Content-Transfer-Encoding: 8bit X-archive-position: 10258 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: Contacttanya@Raymondindia.com Precedence: bulk X-list: xfs Content-Length: 419 Lines: 20 Good Day, Are you looking for a lucrative job? The job takes only 3-5 hours a week , And it a chance for you to make over $3,000 extra per month depending on how usefull you are to the company. Also you do not need to resume at any office to get started ,Its a work from home and you do not pay any fee to get started . Try now without risking your current job. 
Do get back to me if interested Thanks Tanya From owner-xfs@oss.sgi.com Thu Jan 11 02:48:20 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 11 Jan 2007 02:48:24 -0800 (PST) Received: from www708.sakura.ne.jp (www708.sakura.ne.jp [59.106.19.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0BAmGr0005907 for ; Thu, 11 Jan 2007 02:48:18 -0800 Received: from www708.sakura.ne.jp (localhost [127.0.0.1]) by www708.sakura.ne.jp (8.12.11/8.12.11) with ESMTP id l0BA21mM095570 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO) for ; Thu, 11 Jan 2007 19:02:01 +0900 (JST) (envelope-from china-career@www708.sakura.ne.jp) Received: (from china-career@localhost) by www708.sakura.ne.jp (8.12.11/8.12.11/Submit) id l0BA21uY095569; Thu, 11 Jan 2007 19:02:01 +0900 (JST) (envelope-from china-career) Date: Thu, 11 Jan 2007 19:02:01 +0900 (JST) Message-Id: <200701111002.l0BA21uY095569@www708.sakura.ne.jp> To: xfs@oss.sgi.com Subject: PART-TIME JOB OFFER From: Raymond Limited Reply-To: Miller.t1000@gmail.com MIME-Version: 1.0 Content-Type: text/plain Content-Transfer-Encoding: 8bit X-archive-position: 10259 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: Contacttanya@Raymondindia.com Precedence: bulk X-list: xfs Content-Length: 418 Lines: 19 Good Day, Are you looking for a lucrative job? The job takes only 3-5 hours a week , And it a chance for you to make over $3,000 extra per month depending on how usefull you are to the company. Also you do not need to resume at any office to get started ,Its a work from home and you do not pay any fee to get started . Try now without risking your current job. Do get back to me if interested Thanks Tanya From owner-xfs@oss.sgi.com Thu Jan 11 12:17:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 11 Jan 2007 12:17:57 -0800 (PST) Received: from netcenter.hu (ns.netcenter.hu [195.228.254.57]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0BKHoqw029286 for ; Thu, 11 Jan 2007 12:17:52 -0800 Received: from dcccs (dsl5401D093.pool.t-online.hu [84.1.208.147]) by netcenter.hu (8.13.4/8.12.8) with SMTP id l0BKEDTv012911; Thu, 11 Jan 2007 21:14:13 +0100 Message-ID: <04e601c735bd$4bd17de0$0400a8c0@dcccs> From: "Janos Haar" To: "David Chinner" Cc: , , References: <026501c72237$0464f7a0$0400a8c0@dcccs> <20061218062444.GH44411608@melbourne.sgi.com> <027b01c7227d$0e26d1f0$0400a8c0@dcccs> <20061218223637.GP44411608@melbourne.sgi.com> <001a01c722fd$df5ca710$0400a8c0@dcccs> <20061219025229.GT33919298@melbourne.sgi.com> <20061219044700.GW33919298@melbourne.sgi.com> <041601c729b6$f81e4af0$0400a8c0@dcccs> <20070107231402.GU44411608@melbourne.sgi.com> <049c01c734db$5089aa70$0400a8c0@dcccs> <20070111033430.GA33919298@melbourne.sgi.com> Subject: Re: xfslogd-spinlock bug? Date: Thu, 11 Jan 2007 21:15:51 +0100 MIME-Version: 1.0 Content-Type: text/plain; charset="ISO-8859-1" Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2800.1807 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1896 X-archive-position: 10264 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: djani22@netcenter.hu Precedence: bulk X-list: xfs Content-Length: 3133 Lines: 84 ----- Original Message ----- From: "David Chinner" To: "Janos Haar" Cc: "David Chinner" ; ; Sent: Thursday, January 11, 2007 4:34 AM Subject: Re: xfslogd-spinlock bug? 
> On Wed, Jan 10, 2007 at 06:18:08PM +0100, Janos Haar wrote: > > From: "David Chinner" > > > Different corruption in RBX here. Looks like semi-random garbage there. > > > I wonder - what's the mac and ip address(es) of your machine and nbd > > > servers? > > > > dy-base: > > no matches. Oh well, it was a long shot. > > > Some new stuff: > > Jan 8 18:11:16 dy-base RAX: ffffffff80810800 RBX: 000004f0e2850659 RCX: > > RBX trashed again with more random garbage. > > > Jan 8 18:11:16 dy-base [] default_wake_function+0xd/0xf > > Jan 8 18:11:16 dy-base [] > > autoremove_wake_function+0x11/0x38 > > Jan 8 18:11:16 dy-base [] __wake_up_common+0x3e/0x68 > > Jan 8 18:11:16 dy-base [] __wake_up+0x38/0x50 > > Jan 8 18:11:16 dy-base [] > > sk_stream_write_space+0x5d/0x83 > > Jan 8 18:11:16 dy-base [] tcp_check_space+0x8f/0xcd > > Jan 8 18:11:16 dy-base [] > > tcp_rcv_established+0x116/0x76e > > Jan 8 18:11:16 dy-base [] tcp_v4_do_rcv+0x2d/0x322 > > Jan 8 18:11:16 dy-base [] tcp_v4_rcv+0x8bb/0x925 > > Jan 8 18:11:16 dy-base [] > > ip_local_deliver_finish+0x0/0x1ce > > Jan 8 18:11:16 dy-base [] ip_local_deliver+0x172/0x238 > > Jan 8 18:11:16 dy-base [] ip_rcv+0x44f/0x497 > > Jan 8 18:11:16 dy-base [] > > :e1000:e1000_alloc_rx_buffers+0x1e7/0x2cb > > Jan 8 18:11:16 dy-base [] netif_receive_skb+0x1ee/0x255 > > Jan 8 18:11:16 dy-base [] process_backlog+0x8a/0x10f > > Jan 8 18:11:16 dy-base [] net_rx_action+0xa9/0x16e > > Jan 8 18:11:16 dy-base [] __do_softirq+0x57/0xc7 > > Jan 8 18:11:16 dy-base [] call_softirq+0x1c/0x28 > > Jan 8 18:11:16 dy-base [] do_softirq+0x34/0x87 > > Jan 8 18:11:16 dy-base [] irq_exit+0x3f/0x41 > > Jan 8 18:11:16 dy-base [] do_IRQ+0xa9/0xc7 > > Jan 8 18:11:16 dy-base [] ret_from_intr+0x0/0xa > ...... > > > (i.e. I suspect this is a nbd problem, not an XFS problem) > > There's something seriously wrong in your kernel that has, AFAICT, > nothing to do with XFS. I suggest talking to the NBD folk as that is > the only unusualy thing that I can see that you are using.... > > Cheers, OK, i will try.... Thanks a lot, Janos > > Dave. 
> -- > Dave Chinner > Principal Engineer > SGI Australian Software Group > - > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Please read the FAQ at http://www.tux.org/lkml/ From owner-xfs@oss.sgi.com Thu Jan 11 15:06:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 11 Jan 2007 15:06:42 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0BN6Tqw026541 for ; Thu, 11 Jan 2007 15:06:31 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 41F681A08D49A; Thu, 11 Jan 2007 18:05:36 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 347D0A009879; Thu, 11 Jan 2007 18:05:36 -0500 (EST) Date: Thu, 11 Jan 2007 18:05:36 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: linux-raid@vger.kerne.org cc: xfs@oss.sgi.com Subject: Tweaking/Optimizing MD RAID: 195MB/s write, 181MB/s read (so far) Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10265 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 1346 Lines: 53 With 4 Raptor 150s & XFS (default XFS options): # Stripe tests: echo 8192 > /sys/block/md3/md/stripe_cache_size # DD TESTS [WRITE] DEFAULT: $ dd if=/dev/zero of=10gb.no.optimizations.out bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 96.6988 seconds, 111 MB/s 8192 STRIPE CACHE $ dd if=/dev/zero of=10gb.8192k.stripe.out bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 55.0628 seconds, 195 MB/s $ 16384 STRIPE CACHE $ dd if=/dev/zero of=10gb.16384k.stripe.out bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 56.2793 seconds, 191 MB/s # DD TESTS [READ] DEFAULT: $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M 298+0 records in 297+0 records out 311427072 bytes (311 MB) copied, 3.5453 seconds, 87.8 MB/s 2048K READ AHEAD $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 85.4632 seconds, 126 MB/s 8192K READ AHEAD $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 64.9454 seconds, 165 MB/s 16384K READ AHEAD $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 59.3119 seconds, 181 MB/s From owner-xfs@oss.sgi.com Thu Jan 11 15:18:37 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 11 Jan 2007 15:18:42 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0BNIXqw028723 for ; Thu, 11 Jan 2007 15:18:36 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA14775; Fri, 12 Jan 2007 10:17:31 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0BNHS7Y90349016; Fri, 12 Jan 2007 10:17:28 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id 
l0BNHOFP90321019; Fri, 12 Jan 2007 10:17:24 +1100 (AEDT) Date: Fri, 12 Jan 2007 10:17:24 +1100 From: David Chinner To: Justin Piszcz Cc: linux-raid@vger.kerne.org, xfs@oss.sgi.com Subject: Re: Tweaking/Optimizing MD RAID: 195MB/s write, 181MB/s read (so far) Message-ID: <20070111231724.GH44411608@melbourne.sgi.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-archive-position: 10266 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 533 Lines: 22 On Thu, Jan 11, 2007 at 06:05:36PM -0500, Justin Piszcz wrote: > With 4 Raptor 150s & XFS (default XFS options): I need more context for this to be meaningful in any way. What type of md config are you using here? RAID0, 1, 5.....? What's the raw device throughput (i.e. read and write to /dev/md3)? output of 'growfs_xfs -n '? (i.e. is XFS doing aligned or unaligned allocation/IO)? If it's raid0, how does it compare with a dm stripe? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jan 11 15:20:40 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 11 Jan 2007 15:20:45 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0BNKeqw029198 for ; Thu, 11 Jan 2007 15:20:40 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id E345B1A009EEE; Thu, 11 Jan 2007 18:19:46 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id DDACAA009879; Thu, 11 Jan 2007 18:19:46 -0500 (EST) Date: Thu, 11 Jan 2007 18:19:46 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: David Chinner cc: linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Tweaking/Optimizing MD RAID: 195MB/s write, 181MB/s read (so far) In-Reply-To: <20070111231724.GH44411608@melbourne.sgi.com> Message-ID: References: <20070111231724.GH44411608@melbourne.sgi.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10267 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 1245 Lines: 41 RAID5 with 128kb chunk size. Raid0 was 317MB/s write and 279MB/s read. # xfs_growfs -n /dev/md3 meta-data=/dev/md3 isize=256 agcount=16, agsize=6868160 blks = sectsz=4096 attr=0 data = bsize=4096 blocks=109890528, imaxpct=25 = sunit=32 swidth=96 blks, unwritten=1 naming =version 2 bsize=4096 log =internal bsize=4096 blocks=32768, version=2 = sectsz=4096 sunit=1 blks realtime =none extsz=393216 blocks=0, rtextents=0 On Fri, 12 Jan 2007, David Chinner wrote: > On Thu, Jan 11, 2007 at 06:05:36PM -0500, Justin Piszcz wrote: > > With 4 Raptor 150s & XFS (default XFS options): > > I need more context for this to be meaningful in any way. > > What type of md config are you using here? RAID0, 1, 5.....? > > What's the raw device throughput (i.e. read and write to /dev/md3)? > > output of 'growfs_xfs -n '? (i.e. is XFS doing aligned > or unaligned allocation/IO)? > > If it's raid0, how does it compare with a dm stripe? > > Cheers, > > Dave. 
> -- > Dave Chinner > Principal Engineer > SGI Australian Software Group > From owner-xfs@oss.sgi.com Thu Jan 11 15:33:00 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 11 Jan 2007 15:33:05 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0BNWxqw031505 for ; Thu, 11 Jan 2007 15:33:00 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 879FB1A08D49B; Thu, 11 Jan 2007 18:32:06 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 83576A009879; Thu, 11 Jan 2007 18:32:06 -0500 (EST) Date: Thu, 11 Jan 2007 18:32:06 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: David Chinner cc: linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Tweaking/Optimizing MD RAID: 195MB/s write, 181MB/s read (so far) In-Reply-To: <20070111231724.GH44411608@melbourne.sgi.com> Message-ID: References: <20070111231724.GH44411608@melbourne.sgi.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10268 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 655 Lines: 28 Have not tried DM stripe btw. On Fri, 12 Jan 2007, David Chinner wrote: > On Thu, Jan 11, 2007 at 06:05:36PM -0500, Justin Piszcz wrote: > > With 4 Raptor 150s & XFS (default XFS options): > > I need more context for this to be meaningful in any way. > > What type of md config are you using here? RAID0, 1, 5.....? > > What's the raw device throughput (i.e. read and write to /dev/md3)? > > output of 'growfs_xfs -n '? (i.e. is XFS doing aligned > or unaligned allocation/IO)? > > If it's raid0, how does it compare with a dm stripe? > > Cheers, > > Dave. > -- > Dave Chinner > Principal Engineer > SGI Australian Software Group > > From owner-xfs@oss.sgi.com Thu Jan 11 15:39:13 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 11 Jan 2007 15:39:18 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0BNdAqw000650 for ; Thu, 11 Jan 2007 15:39:12 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 91E461A08D49B; Thu, 11 Jan 2007 18:38:17 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 8C009A009879; Thu, 11 Jan 2007 18:38:17 -0500 (EST) Date: Thu, 11 Jan 2007 18:38:17 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write) Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10270 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 6434 Lines: 151 Using 4 raptor 150s: Without the tweaks, I get 111MB/s write and 87MB/s read. With the tweaks, 195MB/s write and 211MB/s read. Using kernel 2.6.19.1. 
Without the tweaks and with the tweaks: # Stripe tests: echo 8192 > /sys/block/md3/md/stripe_cache_size # DD TESTS [WRITE] DEFAULT: (512K) $ dd if=/dev/zero of=10gb.no.optimizations.out bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 96.6988 seconds, 111 MB/s 8192 STRIPE CACHE $ dd if=/dev/zero of=10gb.8192k.stripe.out bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 55.0628 seconds, 195 MB/s (and again...) 10737418240 bytes (11 GB) copied, 61.9902 seconds, 173 MB/s (and again...) 10737418240 bytes (11 GB) copied, 61.3053 seconds, 175 MB/s ** maybe 16384 is better, need to do more testing. 16384 STRIPE CACHE $ dd if=/dev/zero of=10gb.16384k.stripe.out bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 56.2793 seconds, 191 MB/s 32768 STRIPE CACHE $ dd if=/dev/zero of=10gb.32768.stripe.out bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 55.8382 seconds, 192 MB/s # Set readahead. blockdev --setra 16384 /dev/md3 # DD TESTS [READ] DEFAULT: (1536K READ AHEAD) $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M 298+0 records in 297+0 records out 311427072 bytes (311 MB) copied, 3.5453 seconds, 87.8 MB/s 2048K READ AHEAD $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 85.4632 seconds, 126 MB/s 8192K READ AHEAD $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 64.9454 seconds, 165 MB/s 16384K READ AHEAD $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 59.3119 seconds, 181 MB/s 32768 READ AHEAD $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 56.6329 seconds, 190 MB/s 65536 READ AHEAD $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 54.9768 seconds, 195 MB/s 131072 READ AHEAD 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 52.0896 seconds, 206 MB/s 262144 READ AHEAD** $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 50.8496 seconds, 211 MB/s (and again..) 
10737418240 bytes (11 GB) copied, 51.2064 seconds, 210 MB/s*** 524288 READ AHEAD $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 59.6508 seconds, 180 MB/s Output (vmstat) during a write test: procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 2 1 172 730536 12 952740 0 0 0 357720 1836 107450 0 80 6 15 1 1 172 485016 12 1194448 0 0 0 171760 1604 42853 0 38 16 46 1 0 172 243960 12 1432140 0 0 0 223088 1598 63118 0 44 25 31 0 0 172 77428 12 1596240 0 0 0 199736 1559 56939 0 36 28 36 2 0 172 50328 12 1622796 0 0 16 87496 1726 31251 0 27 73 0 2 1 172 50600 12 1622052 0 0 0 313432 1739 88026 0 53 16 32 1 1 172 51012 12 1621216 0 0 0 200656 1586 56349 0 38 9 53 0 3 172 50084 12 1622408 0 0 0 204320 1588 67085 0 40 24 36 1 1 172 51716 12 1620760 0 0 0 245672 1608 81564 0 61 13 26 0 2 172 51168 12 1621432 0 0 0 212740 1622 67203 0 44 22 34 0 2 172 51940 12 1620516 0 0 0 203704 1614 59396 0 42 24 35 0 0 172 51188 12 1621348 0 0 0 171744 1582 56664 0 38 28 34 1 0 172 52264 12 1620812 0 0 0 143792 1724 43543 0 39 59 2 0 1 172 48292 12 1623984 0 0 16 248784 1610 73980 0 40 19 41 0 2 172 51868 12 1620596 0 0 0 209184 1571 60611 0 40 20 40 1 1 172 51168 12 1621340 0 0 0 205020 1620 70048 0 38 27 34 2 0 172 51076 12 1621508 0 0 0 236400 1658 81582 0 59 13 29 0 0 172 51284 12 1621064 0 0 0 138739 1611 40220 0 30 34 36 1 0 172 52020 12 1620376 0 0 4 170200 1752 52315 0 38 58 5 Output (vmstat) during a read test: procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 1 0 172 53484 12 1769396 0 0 0 0 1005 54 0 0 100 0 0 0 172 53148 12 1740380 0 0 221752 0 1562 11779 0 22 70 9 0 0 172 53868 12 1709048 0 0 231764 16 1708 14658 0 37 54 9 2 0 172 53384 12 1768236 0 0 189604 8 1646 8507 0 28 59 13 2 0 172 53920 12 1758856 0 0 253708 0 1716 17665 0 37 63 0 0 0 172 50704 12 1739872 0 0 239700 0 1654 10949 0 41 54 5 1 0 172 50796 12 1684120 0 0 206236 0 1722 16610 0 43 57 0 2 0 172 53012 12 1768192 0 0 217876 12 1709 17022 0 34 66 0 0 0 172 50676 12 1761664 0 0 252840 8 1711 15985 0 38 62 0 0 0 172 53676 12 1736192 0 0 240072 0 1686 7530 0 42 54 4 0 0 172 52892 12 1686740 0 0 211924 0 1707 16284 0 38 62 0 2 0 172 53536 12 1767580 0 0 212668 0 1680 18409 0 34 62 5 0 0 172 50488 12 1760780 0 0 251972 9 1719 15818 0 41 59 0 0 0 172 53912 12 1736916 0 0 241932 8 1645 12602 0 37 54 9 1 0 172 53296 12 1656072 0 0 180800 0 1723 15826 0 41 59 0 1 1 172 51208 12 1770156 0 0 242800 0 1738 11146 1 30 64 6 2 0 172 53604 12 1756452 0 0 251104 0 1652 10315 0 39 59 2 0 0 172 53268 12 1739120 0 0 244536 0 1679 18972 0 44 56 0 1 0 172 53256 12 1664920 0 0 187620 0 1668 19003 0 39 53 8 1 0 172 53716 12 1767424 0 0 234244 0 1711 17040 0 32 64 5 2 0 172 53680 12 1760680 0 0 255196 0 1695 9895 0 38 61 1 From owner-xfs@oss.sgi.com Thu Jan 11 17:21:34 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 11 Jan 2007 17:21:38 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0C1LVqw021954 for ; Thu, 11 Jan 2007 17:21:33 -0800 Received: from pcbnaujok (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA18610; Fri, 12 Jan 2007 12:20:31 +1100 Message-Id: <200701120120.MAA18610@larry.melbourne.sgi.com> From: "Barry Naujok" To: "'Jyrki 
Muukkonen'" , Subject: RE: xfs_repair: corrupt inode error Date: Fri, 12 Jan 2007 12:25:48 +1100 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Office Outlook, Build 11.0.6353 Thread-Index: AcczPzUWNOSh3kMgRnm1p0xEIAfyJQCqLagQ X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3028 In-Reply-To: <1168272431.21580.14.camel@mustis> X-archive-position: 10271 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 2111 Lines: 66 > -----Original Message----- > From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com] > On Behalf Of Jyrki Muukkonen > Sent: Tuesday, 9 January 2007 3:07 AM > To: xfs@oss.sgi.com > Subject: Re: xfs_repair: corrupt inode error > > On ma, 2007-01-08 at 12:23 +0200, Jyrki Muukkonen wrote: > > Got this error in phase 6 when running xfs_repair 2.8.18 on ~1.2TB > > partition over the weekend (it took around 60 hours to get to this > > point :). On earlier versions xfs_repair aborted after > ~15-20 hours with > > "invalid inode type" error. > > > > ... > > disconnected inode 4151889519, moving to lost+found > > disconnected inode 4151889543, moving to lost+found > > corrupt inode 4151889543 (btree). This is a bug. > > Please report it to xfs@oss.sgi.com. > > cache_node_purge: refcount was 1, not zero (node=0x132650d0) > > > > fatal error -- 117 - couldn't iget disconnected inode > > > > I've got the full log (both stderr and stdout) and can put that > > somewhere if needed. It's about 80MB uncompressed and around 7MB > > gzipped. Running the xfs_repair without multithreading and > with -v might > > also be possible if that's going to help. > > > > Some more information: > - running 64bit Ubuntu Edgy 2.6.17-10-generic > - one processor so xfs_repair was run with two threads > - 1.5GB RAM, 3GB swap (at some point the xfs_repair process took a bit > over 2GB) > - filesystem is ~1.14TB with about ~1.4 million files > - most of the files are in subdirectories by date > (/something/YYYY/MM/DD/), ~2-10 thousand per day > > So is there a way to skip / ignore this error? I could do some testing > with different command line options and small code patches if that's > going to help solve the bug. > > Most of the files have been recovered from backups, raw disk > images etc. > but unfortunately some are still missing. > > -- > Jyrki Muukkonen > Futurice Oy > jyrki.muukkonen@futurice.fi > +358 41 501 7322 Would it be possible to run xfs_db and print out the inode above: # xfs_db xfs_db> inode 4151889543 xfs_db> print and email the output back? Regards, Barry. 
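(The same inspection spelled out with an explicit device argument -- /dev/sdX1 below is only a placeholder for whichever block device holds the damaged filesystem -- and with -r so xfs_db opens it read-only:

  # xfs_db -r /dev/sdX1
  xfs_db> inode 4151889543
  xfs_db> print
  xfs_db> quit
)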
From owner-xfs@oss.sgi.com Fri Jan 12 00:49:12 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 12 Jan 2007 00:49:17 -0800 (PST) Received: from gw02.mail.saunalahti.fi (gw02.mail.saunalahti.fi [195.197.172.116]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0C8nAqw016756 for ; Fri, 12 Jan 2007 00:49:12 -0800 Received: from mrp2.mail.saunalahti.fi (mrp2.mail.saunalahti.fi [62.142.5.31]) by gw02.mail.saunalahti.fi (Postfix) with ESMTP id 02616139544; Fri, 12 Jan 2007 10:48:15 +0200 (EET) Received: from [192.168.0.151] (unknown [62.142.247.178]) (using SSLv3 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mrp2.mail.saunalahti.fi (Postfix) with ESMTP id 82204598001; Fri, 12 Jan 2007 10:48:12 +0200 (EET) Subject: RE: xfs_repair: corrupt inode error From: Jyrki Muukkonen To: Barry Naujok Cc: xfs@oss.sgi.com In-Reply-To: <200701120120.MAA18610@larry.melbourne.sgi.com> References: <200701120120.MAA18610@larry.melbourne.sgi.com> Content-Type: text/plain Date: Fri, 12 Jan 2007 10:48:12 +0200 Message-Id: <1168591692.6015.1.camel@mustis> Mime-Version: 1.0 X-Mailer: Evolution 2.8.1 Content-Transfer-Encoding: 7bit X-archive-position: 10272 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jyrki.muukkonen@futurice.fi Precedence: bulk X-list: xfs Content-Length: 3326 Lines: 122 On pe, 2007-01-12 at 12:25 +1100, Barry Naujok wrote: > > > -----Original Message----- > > From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com] > > On Behalf Of Jyrki Muukkonen > > Sent: Tuesday, 9 January 2007 3:07 AM > > To: xfs@oss.sgi.com > > Subject: Re: xfs_repair: corrupt inode error > > > > On ma, 2007-01-08 at 12:23 +0200, Jyrki Muukkonen wrote: > > > Got this error in phase 6 when running xfs_repair 2.8.18 on ~1.2TB > > > partition over the weekend (it took around 60 hours to get to this > > > point :). On earlier versions xfs_repair aborted after > > ~15-20 hours with > > > "invalid inode type" error. > > > > > > ... > > > disconnected inode 4151889519, moving to lost+found > > > disconnected inode 4151889543, moving to lost+found > > > corrupt inode 4151889543 (btree). This is a bug. > > > Please report it to xfs@oss.sgi.com. > > > cache_node_purge: refcount was 1, not zero (node=0x132650d0) > > > > > > fatal error -- 117 - couldn't iget disconnected inode > > > > > > I've got the full log (both stderr and stdout) and can put that > > > somewhere if needed. It's about 80MB uncompressed and around 7MB > > > gzipped. Running the xfs_repair without multithreading and > > with -v might > > > also be possible if that's going to help. > > > > > > > Some more information: > > - running 64bit Ubuntu Edgy 2.6.17-10-generic > > - one processor so xfs_repair was run with two threads > > - 1.5GB RAM, 3GB swap (at some point the xfs_repair process took a bit > > over 2GB) > > - filesystem is ~1.14TB with about ~1.4 million files > > - most of the files are in subdirectories by date > > (/something/YYYY/MM/DD/), ~2-10 thousand per day > > > > So is there a way to skip / ignore this error? I could do some testing > > with different command line options and small code patches if that's > > going to help solve the bug. > > > > Most of the files have been recovered from backups, raw disk > > images etc. > > but unfortunately some are still missing. 
> > > > -- > > Jyrki Muukkonen > > Futurice Oy > > jyrki.muukkonen@futurice.fi > > +358 41 501 7322 > > Would it be possible to run xfs_db and print out the inode above: > > # xfs_db > xfs_db> inode 4151889543 > xfs_db> print > > and email the output back? > > Regards, > Barry. > > OK, here it is: xfs_db> inode 4151889543 xfs_db> print core.magic = 0x494e core.mode = 0102672 core.version = 1 core.format = 3 (btree) core.nlinkv1 = 2308 core.uid = 721387 core.gid = 475570 core.flushiter = 7725 core.atime.sec = Sun Mar 16 17:15:13 2008 core.atime.nsec = 000199174 core.mtime.sec = Wed Dec 28 01:58:50 2011 core.mtime.nsec = 016845061 core.ctime.sec = Tue Aug 22 19:57:39 2006 core.ctime.nsec = 148761321 core.size = 1880085426117611906 core.nblocks = 0 core.extsize = 0 core.nextents = 0 core.naextents = 0 core.forkoff = 0 core.aformat = 2 (extents) core.dmevmask = 0x1010905 core.dmstate = 11 core.newrtbm = 0 core.prealloc = 1 core.realtime = 0 core.immutable = 0 core.append = 0 core.sync = 0 core.noatime = 0 core.nodump = 0 core.rtinherit = 0 core.projinherit = 1 core.nosymlinks = 0 core.extsz = 0 core.extszinherit = 0 core.nodefrag = 0 core.gen = 51072068 next_unlinked = null u.bmbt.level = 18550 u.bmbt.numrecs = 0 -- Jyrki Muukkonen Futurice Oy jyrki.muukkonen@futurice.fi +358 41 501 7322 From owner-xfs@oss.sgi.com Fri Jan 12 06:34:49 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 12 Jan 2007 06:34:56 -0800 (PST) Received: from hobbit.corpit.ru (hobbit.corpit.ru [81.13.94.6]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0CEYkqw020912 for ; Fri, 12 Jan 2007 06:34:48 -0800 Received: from [192.168.1.1] (paltus.tls.msk.ru [192.168.1.1]) by hobbit.corpit.ru (Postfix) with ESMTP id 81B613565B; Fri, 12 Jan 2007 17:01:25 +0300 (MSK) (envelope-from mjt@tls.msk.ru) Message-ID: <45A794B4.5020608@tls.msk.ru> Date: Fri, 12 Jan 2007 17:01:24 +0300 From: Michael Tokarev User-Agent: Thunderbird 1.5.0.5 (X11/20060813) MIME-Version: 1.0 To: Justin Piszcz CC: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write) References: In-Reply-To: X-Enigmail-Version: 0.94.0.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10273 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mjt@tls.msk.ru Precedence: bulk X-list: xfs Content-Length: 1267 Lines: 44 Justin Piszcz wrote: > Using 4 raptor 150s: > > Without the tweaks, I get 111MB/s write and 87MB/s read. > With the tweaks, 195MB/s write and 211MB/s read. > > Using kernel 2.6.19.1. > > Without the tweaks and with the tweaks: > > # Stripe tests: > echo 8192 > /sys/block/md3/md/stripe_cache_size > > # DD TESTS [WRITE] > > DEFAULT: (512K) > $ dd if=/dev/zero of=10gb.no.optimizations.out bs=1M count=10240 > 10240+0 records in > 10240+0 records out > 10737418240 bytes (11 GB) copied, 96.6988 seconds, 111 MB/s [] > 8192K READ AHEAD > $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M > 10240+0 records in > 10240+0 records out > 10737418240 bytes (11 GB) copied, 64.9454 seconds, 165 MB/s What exactly are you measuring? Linear read/write, like copying one device to another (or to /dev/null), in large chunks? I don't think it's an interesting test. Hint: how many times a day you plan to perform such a copy? 
(By the way, for a copy of one block device to another, try using O_DIRECT, with two dd processes doing the copy - one reading, and another writing - this way, you'll get best results without huge affect on other things running on the system. Like this: dd if=/dev/onedev bs=1M iflag=direct | dd of=/dev/twodev bs=1M oflag=direct ) /mjt From owner-xfs@oss.sgi.com Fri Jan 12 06:39:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 12 Jan 2007 06:39:38 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0CEdSqw022058 for ; Fri, 12 Jan 2007 06:39:30 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 6D0DE1A08D49C; Fri, 12 Jan 2007 09:38:35 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 66E76A06E85E; Fri, 12 Jan 2007 09:38:35 -0500 (EST) Date: Fri, 12 Jan 2007 09:38:35 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Michael Tokarev cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write) In-Reply-To: <45A794B4.5020608@tls.msk.ru> Message-ID: References: <45A794B4.5020608@tls.msk.ru> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10274 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 5764 Lines: 143 On Fri, 12 Jan 2007, Michael Tokarev wrote: > Justin Piszcz wrote: > > Using 4 raptor 150s: > > > > Without the tweaks, I get 111MB/s write and 87MB/s read. > > With the tweaks, 195MB/s write and 211MB/s read. > > > > Using kernel 2.6.19.1. > > > > Without the tweaks and with the tweaks: > > > > # Stripe tests: > > echo 8192 > /sys/block/md3/md/stripe_cache_size > > > > # DD TESTS [WRITE] > > > > DEFAULT: (512K) > > $ dd if=/dev/zero of=10gb.no.optimizations.out bs=1M count=10240 > > 10240+0 records in > > 10240+0 records out > > 10737418240 bytes (11 GB) copied, 96.6988 seconds, 111 MB/s > [] > > 8192K READ AHEAD > > $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M > > 10240+0 records in > > 10240+0 records out > > 10737418240 bytes (11 GB) copied, 64.9454 seconds, 165 MB/s > > What exactly are you measuring? Linear read/write, like copying one > device to another (or to /dev/null), in large chunks? Check bonnie benchmarks below. > > I don't think it's an interesting test. Hint: how many times a day > you plan to perform such a copy? It is a measurement of raw performance. > > (By the way, for a copy of one block device to another, try using > O_DIRECT, with two dd processes doing the copy - one reading, and > another writing - this way, you'll get best results without huge > affect on other things running on the system. Like this: > > dd if=/dev/onedev bs=1M iflag=direct | > dd of=/dev/twodev bs=1M oflag=direct > ) Interesting, I will take this into consideration-- however, an untar test shows a 2:1 improvement, see below. 
> > /mjt > Decompress/unrar a DVD-sized file: On the following RAID volumes with the same set of [4] 150GB raptors: RAID 0] 1:13.16 elapsed @ 49% CPU RAID 4] 2:05.85 elapsed @ 30% CPU RAID 5] 2:01.94 elapsed @ 32% CPU RAID 6] 2:39.34 elapsed @ 24% CPU RAID 10] 1:52.37 elapsed @ 32% CPU RAID 5 Tweaked (8192 stripe_cache & 16384 setra/blockdev):: RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU I did not tweak raid 0, but seeing how RAID5 tweaked is faster than RAID0 is good enough for me :) RAID0 did 278MB/s read and 317MB/s write (by the way) Here are the bonnie results, the times alone speak for themselves, from 8 minutes to min and 48-59 seconds. # No optimizations: # Run Benchmarks Default Bonnie: [nr_requests=128,max_sectors_kb=512,stripe_cache_size=256,read_ahead=1536] default_run1,4000M,42879,98,105436,19,41081,11,46277,96,87845,15,639.2,1,16:100000:16/64,380,4,29642,99,2990,18,469,5,11784,40,1712,12 default_run2,4000M,47145,99,108664,19,40931,11,46466,97,94158,16,634.8,0,16:100000:16/64,377,4,16990,56,2850,17,431,4,21066,71,1800,13 default_run3,4000M,43653,98,109063,19,40898,11,46447,97,97141,16,645.8,1,16:100000:16/64,373,4,22302,75,2793,16,420,4,16708,56,1794,13 default_run4,4000M,46485,98,110664,20,41102,11,46443,97,93616,16,631.3,1,16:100000:16/64,363,3,14484,49,2802,17,388,4,25532,86,1604,12 default_run5,4000M,43813,98,109800,19,41214,11,46457,97,92563,15,635.1,1,16:100000:16/64,376,4,28990,95,2827,17,388,4,22874,76,1817,13 169.88user 44.01system 8:02.98elapsed 44%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (6major+1102minor)pagefaults 0swaps 161.60user 44.33system 7:53.14elapsed 43%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (13major+1095minor)pagefaults 0swaps 166.64user 45.24system 8:00.07elapsed 44%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (13major+1096minor)pagefaults 0swaps 161.90user 44.66system 8:00.85elapsed 42%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (13major+1094minor)pagefaults 0swaps 167.61user 44.12system 8:03.26elapsed 43%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (13major+1092minor)pagefaults 0swaps All optimizations [bonnie++] 168.08user 46.05system 5:55.13elapsed 60%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (16major+1092minor)pagefaults 0swaps 162.65user 46.21system 5:48.47elapsed 59%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (7major+1101minor)pagefaults 0swaps 168.06user 45.74system 5:59.84elapsed 59%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (7major+1102minor)pagefaults 0swaps 168.00user 46.18system 5:58.77elapsed 59%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (13major+1095minor)pagefaults 0swaps 167.98user 45.53system 5:56.49elapsed 59%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (5major+1101minor)pagefaults 0swaps c6300-optimized:4000M,43976,99,167209,29,73109,22,43471,91,208572,40,511.4,1,16:100000:16/64,1109,12,26948,89,2469,14,1051,11,29037,97,2167,16 c6300-optimized:4000M,47455,99,190212,35,70402,21,43167,92,206290,40,503.3,1,16:100000:16/64,1071,11,29893,99,2804,16,1059,12,24887,84,2090,16 c6300-optimized:4000M,43979,99,172543,29,71811,21,41760,87,201870,39,498.9,1,16:100000:16/64,1042,11,30276,99,2800,16,1063,12,29491,99,2257,17 c6300-optimized:4000M,43824,98,164585,29,73470,22,43098,90,207003,40,489.1,1,16:100000:16/64,1045,11,30288,98,2512,15,1018,11,27365,92,2097,16 c6300-optimized:4000M,44003,99,194250,32,71055,21,43327,91,196553,38,505.8,1,16:100000:16/64,1031,11,30278,98,2474,14,1049,12,28068,94,2027,15 txt version of optimized 
results: Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP c6300-optimiz 47455 99 190212 35 70402 21 43167 92 206290 40 503.3 1 c6300-optimiz 43979 99 172543 29 71811 21 41760 87 201870 39 498.9 1 c6300-optimiz 43824 98 164585 29 73470 22 43098 90 207003 40 489.1 1 c6300-optimiz 44003 99 194250 32 71055 21 43327 91 196553 38 505.8 1 From owner-xfs@oss.sgi.com Fri Jan 12 09:38:45 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 12 Jan 2007 09:39:27 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0CHcgqw024928 for ; Fri, 12 Jan 2007 09:38:44 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id A744E1A000257; Fri, 12 Jan 2007 12:37:48 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 8FAF7A000257; Fri, 12 Jan 2007 12:37:48 -0500 (EST) Date: Fri, 12 Jan 2007 12:37:48 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Michael Tokarev cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write) In-Reply-To: Message-ID: References: <45A794B4.5020608@tls.msk.ru> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10276 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 6407 Lines: 154 RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU This should be 1:14 not 1:06(was with a similarly sized file but not the same) the 1:14 is the same file as used with the other benchmarks. and to get that I used 256mb read-ahead and 16384 stripe size ++ 128 max_sectors_kb (same size as my sw raid5 chunk size) On Fri, 12 Jan 2007, Justin Piszcz wrote: > > > On Fri, 12 Jan 2007, Michael Tokarev wrote: > > > Justin Piszcz wrote: > > > Using 4 raptor 150s: > > > > > > Without the tweaks, I get 111MB/s write and 87MB/s read. > > > With the tweaks, 195MB/s write and 211MB/s read. > > > > > > Using kernel 2.6.19.1. > > > > > > Without the tweaks and with the tweaks: > > > > > > # Stripe tests: > > > echo 8192 > /sys/block/md3/md/stripe_cache_size > > > > > > # DD TESTS [WRITE] > > > > > > DEFAULT: (512K) > > > $ dd if=/dev/zero of=10gb.no.optimizations.out bs=1M count=10240 > > > 10240+0 records in > > > 10240+0 records out > > > 10737418240 bytes (11 GB) copied, 96.6988 seconds, 111 MB/s > > [] > > > 8192K READ AHEAD > > > $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M > > > 10240+0 records in > > > 10240+0 records out > > > 10737418240 bytes (11 GB) copied, 64.9454 seconds, 165 MB/s > > > > What exactly are you measuring? Linear read/write, like copying one > > device to another (or to /dev/null), in large chunks? > Check bonnie benchmarks below. > > > > I don't think it's an interesting test. Hint: how many times a day > > you plan to perform such a copy? > It is a measurement of raw performance. > > > > (By the way, for a copy of one block device to another, try using > > O_DIRECT, with two dd processes doing the copy - one reading, and > > another writing - this way, you'll get best results without huge > > affect on other things running on the system. 
Like this: > > > > dd if=/dev/onedev bs=1M iflag=direct | > > dd of=/dev/twodev bs=1M oflag=direct > > ) > Interesting, I will take this into consideration-- however, an untar test > shows a 2:1 improvement, see below. > > > > /mjt > > > > Decompress/unrar a DVD-sized file: > > On the following RAID volumes with the same set of [4] 150GB raptors: > > RAID 0] 1:13.16 elapsed @ 49% CPU > RAID 4] 2:05.85 elapsed @ 30% CPU > RAID 5] 2:01.94 elapsed @ 32% CPU > RAID 6] 2:39.34 elapsed @ 24% CPU > RAID 10] 1:52.37 elapsed @ 32% CPU > > RAID 5 Tweaked (8192 stripe_cache & 16384 setra/blockdev):: > > RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU > > I did not tweak raid 0, but seeing how RAID5 tweaked is faster than RAID0 > is good enough for me :) > > RAID0 did 278MB/s read and 317MB/s write (by the way) > > Here are the bonnie results, the times alone speak for themselves, from 8 > minutes to min and 48-59 seconds. > > # No optimizations: > # Run Benchmarks > Default Bonnie: > [nr_requests=128,max_sectors_kb=512,stripe_cache_size=256,read_ahead=1536] > default_run1,4000M,42879,98,105436,19,41081,11,46277,96,87845,15,639.2,1,16:100000:16/64,380,4,29642,99,2990,18,469,5,11784,40,1712,12 > default_run2,4000M,47145,99,108664,19,40931,11,46466,97,94158,16,634.8,0,16:100000:16/64,377,4,16990,56,2850,17,431,4,21066,71,1800,13 > default_run3,4000M,43653,98,109063,19,40898,11,46447,97,97141,16,645.8,1,16:100000:16/64,373,4,22302,75,2793,16,420,4,16708,56,1794,13 > default_run4,4000M,46485,98,110664,20,41102,11,46443,97,93616,16,631.3,1,16:100000:16/64,363,3,14484,49,2802,17,388,4,25532,86,1604,12 > default_run5,4000M,43813,98,109800,19,41214,11,46457,97,92563,15,635.1,1,16:100000:16/64,376,4,28990,95,2827,17,388,4,22874,76,1817,13 > > 169.88user 44.01system 8:02.98elapsed 44%CPU (0avgtext+0avgdata > 0maxresident)k > 0inputs+0outputs (6major+1102minor)pagefaults 0swaps > 161.60user 44.33system 7:53.14elapsed 43%CPU (0avgtext+0avgdata > 0maxresident)k > 0inputs+0outputs (13major+1095minor)pagefaults 0swaps > 166.64user 45.24system 8:00.07elapsed 44%CPU (0avgtext+0avgdata > 0maxresident)k > 0inputs+0outputs (13major+1096minor)pagefaults 0swaps > 161.90user 44.66system 8:00.85elapsed 42%CPU (0avgtext+0avgdata > 0maxresident)k > 0inputs+0outputs (13major+1094minor)pagefaults 0swaps > 167.61user 44.12system 8:03.26elapsed 43%CPU (0avgtext+0avgdata > 0maxresident)k > 0inputs+0outputs (13major+1092minor)pagefaults 0swaps > > > All optimizations [bonnie++] > > 168.08user 46.05system 5:55.13elapsed 60%CPU (0avgtext+0avgdata > 0maxresident)k > 0inputs+0outputs (16major+1092minor)pagefaults 0swaps > 162.65user 46.21system 5:48.47elapsed 59%CPU (0avgtext+0avgdata > 0maxresident)k > 0inputs+0outputs (7major+1101minor)pagefaults 0swaps > 168.06user 45.74system 5:59.84elapsed 59%CPU (0avgtext+0avgdata > 0maxresident)k > 0inputs+0outputs (7major+1102minor)pagefaults 0swaps > 168.00user 46.18system 5:58.77elapsed 59%CPU (0avgtext+0avgdata > 0maxresident)k > 0inputs+0outputs (13major+1095minor)pagefaults 0swaps > 167.98user 45.53system 5:56.49elapsed 59%CPU (0avgtext+0avgdata > 0maxresident)k > 0inputs+0outputs (5major+1101minor)pagefaults 0swaps > > c6300-optimized:4000M,43976,99,167209,29,73109,22,43471,91,208572,40,511.4,1,16:100000:16/64,1109,12,26948,89,2469,14,1051,11,29037,97,2167,16 > c6300-optimized:4000M,47455,99,190212,35,70402,21,43167,92,206290,40,503.3,1,16:100000:16/64,1071,11,29893,99,2804,16,1059,12,24887,84,2090,16 > 
c6300-optimized:4000M,43979,99,172543,29,71811,21,41760,87,201870,39,498.9,1,16:100000:16/64,1042,11,30276,99,2800,16,1063,12,29491,99,2257,17 > c6300-optimized:4000M,43824,98,164585,29,73470,22,43098,90,207003,40,489.1,1,16:100000:16/64,1045,11,30288,98,2512,15,1018,11,27365,92,2097,16 > c6300-optimized:4000M,44003,99,194250,32,71055,21,43327,91,196553,38,505.8,1,16:100000:16/64,1031,11,30278,98,2474,14,1049,12,28068,94,2027,15 > > txt version of optimized results: > > Version 1.03 ------Sequential Output------ --Sequential Input- > --Random- > -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- > --Seeks-- > Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP > /sec %CP > c6300-optimiz 47455 99 190212 35 70402 21 43167 92 206290 > 40 503.3 1 > c6300-optimiz 43979 99 172543 29 71811 21 41760 87 201870 > 39 498.9 1 > c6300-optimiz 43824 98 164585 29 73470 22 43098 90 207003 > 40 489.1 1 > c6300-optimiz 44003 99 194250 32 71055 21 43327 91 196553 > 38 505.8 1 > > From owner-xfs@oss.sgi.com Fri Jan 12 11:56:56 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 12 Jan 2007 11:57:07 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0CJusqw017594 for ; Fri, 12 Jan 2007 11:56:55 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id A89A31A000257; Fri, 12 Jan 2007 14:56:00 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id A17EEA000257; Fri, 12 Jan 2007 14:56:00 -0500 (EST) Date: Fri, 12 Jan 2007 14:56:00 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Al Boldi cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write) In-Reply-To: <200701122235.30288.a1426z@gawab.com> Message-ID: References: <200701122235.30288.a1426z@gawab.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10277 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 1116 Lines: 38 On Fri, 12 Jan 2007, Al Boldi wrote: > Justin Piszcz wrote: > > RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU > > > > This should be 1:14 not 1:06(was with a similarly sized file but not the > > same) the 1:14 is the same file as used with the other benchmarks. and to > > get that I used 256mb read-ahead and 16384 stripe size ++ 128 > > max_sectors_kb (same size as my sw raid5 chunk size) > > max_sectors_kb is probably your key. On my system I get twice the read > performance by just reducing max_sectors_kb from default 512 to 192. > > Can you do a fresh reboot to shell and then: > $ cat /sys/block/hda/queue/* > $ cat /proc/meminfo > $ echo 3 > /proc/sys/vm/drop_caches > $ dd if=/dev/hda of=/dev/null bs=1M count=10240 > $ echo 192 > /sys/block/hda/queue/max_sectors_kb > $ echo 3 > /proc/sys/vm/drop_caches > $ dd if=/dev/hda of=/dev/null bs=1M count=10240 > > > Thanks! > > -- > Al > > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Ok. 
sec From owner-xfs@oss.sgi.com Fri Jan 12 12:51:04 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 12 Jan 2007 12:51:36 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0CKp0qw031823 for ; Fri, 12 Jan 2007 12:51:03 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id E541A1A000257; Fri, 12 Jan 2007 15:15:09 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id BFA8CA000257; Fri, 12 Jan 2007 15:15:09 -0500 (EST) Date: Fri, 12 Jan 2007 15:15:09 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Al Boldi cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write) In-Reply-To: Message-ID: References: <200701122235.30288.a1426z@gawab.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10278 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 6427 Lines: 216 Btw, max sectors did improve my performance a little bit but stripe_cache+read_ahead were the main optimizations that made everything go faster by about ~1.5x. I have individual bonnie++ benchmarks of [only] the max_sector_kb tests as well, it improved the times from 8min/bonnie run -> 7min 11 seconds or so, see below and then after that is what you requested. # Options used: # blockdev --setra 1536 /dev/md3 (back to default) # cat /sys/block/sd{e,g,i,k}/queue/max_sectors_kb # value: 512 # value: 512 # value: 512 # value: 512 # Test with, chunksize of raid array (128) # echo 128 > /sys/block/sde/queue/max_sectors_kb # echo 128 > /sys/block/sdg/queue/max_sectors_kb # echo 128 > /sys/block/sdi/queue/max_sectors_kb # echo 128 > /sys/block/sdk/queue/max_sectors_kb max_sectors_kb128_run1:max_sectors_kb128_run1,4000M,46522,98,109829,19,42776,12,46527,97,86206,14,647.7,1,16:100000:16/64,874,9,29123,97,2778,16,852,9,25399,86,1396,10 max_sectors_kb128_run2:max_sectors_kb128_run2,4000M,44037,99,107971,19,42420,12,46385,97,85773,14,628.8,1,16:100000:16/64,981,10,23006,77,3185,19,848,9,27891,94,1737,13 max_sectors_kb128_run3:max_sectors_kb128_run3,4000M,46501,98,108313,19,42558,12,46314,97,87697,15,617.0,1,16:100000:16/64,864,9,29795,99,2744,16,897,9,29021,98,1439,10 max_sectors_kb128_run4:max_sectors_kb128_run4,4000M,40750,98,108959,19,42519,12,45027,97,86484,14,637.0,1,16:100000:16/64,929,10,29641,98,2476,14,883,9,29529,99,1867,13 max_sectors_kb128_run5:max_sectors_kb128_run5,4000M,46664,98,108387,19,42801,12,46423,97,87379,14,642.5,0,16:100000:16/64,925,10,29756,99,2759,16,915,10,28694,97,1215,8 162.54user 43.96system 7:12.02elapsed 47%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (5major+1104minor)pagefaults 0swaps 168.75user 43.51system 7:14.49elapsed 48%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (13major+1092minor)pagefaults 0swaps 162.76user 44.18system 7:12.26elapsed 47%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (13major+1096minor)pagefaults 0swaps 178.91user 43.39system 7:24.39elapsed 50%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (13major+1094minor)pagefaults 0swaps 162.45user 43.86system 7:11.26elapsed 47%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (13major+1092minor)pagefaults 0swaps --------------- # cat 
/sys/block/sd[abcdefghijk]/queue/* cat: /sys/block/sda/queue/iosched: Is a directory 32767 512 128 128 noop [anticipatory] cat: /sys/block/sdb/queue/iosched: Is a directory 32767 512 128 128 noop [anticipatory] cat: /sys/block/sdc/queue/iosched: Is a directory 32767 128 128 128 noop [anticipatory] cat: /sys/block/sdd/queue/iosched: Is a directory 32767 128 128 128 noop [anticipatory] cat: /sys/block/sde/queue/iosched: Is a directory 32767 128 128 128 noop [anticipatory] cat: /sys/block/sdf/queue/iosched: Is a directory 32767 128 128 128 noop [anticipatory] cat: /sys/block/sdg/queue/iosched: Is a directory 32767 128 128 128 noop [anticipatory] cat: /sys/block/sdh/queue/iosched: Is a directory 32767 128 128 128 noop [anticipatory] cat: /sys/block/sdi/queue/iosched: Is a directory 32767 128 128 128 noop [anticipatory] cat: /sys/block/sdj/queue/iosched: Is a directory 32767 128 128 128 noop [anticipatory] cat: /sys/block/sdk/queue/iosched: Is a directory 32767 128 128 128 noop [anticipatory] # (note I am only using four of these (which are raptors, in raid5 for md3)) # cat /proc/meminfo MemTotal: 2048904 kB MemFree: 1299980 kB Buffers: 1408 kB Cached: 58032 kB SwapCached: 0 kB Active: 65012 kB Inactive: 33796 kB HighTotal: 1153312 kB HighFree: 1061792 kB LowTotal: 895592 kB LowFree: 238188 kB SwapTotal: 2200760 kB SwapFree: 2200760 kB Dirty: 8 kB Writeback: 0 kB AnonPages: 39332 kB Mapped: 20248 kB Slab: 37116 kB SReclaimable: 10580 kB SUnreclaim: 26536 kB PageTables: 1284 kB NFS_Unstable: 0 kB Bounce: 0 kB CommitLimit: 3225212 kB Committed_AS: 111056 kB VmallocTotal: 114680 kB VmallocUsed: 3828 kB VmallocChunk: 110644 kB # # echo 3 > /proc/sys/vm/drop_caches # dd if=/dev/md3 of=/dev/null bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 399.352 seconds, 26.9 MB/s # for i in sde sdg sdi sdk; do echo 192 > /sys/block/"$i"/queue/max_sectors_kb; echo "Set /sys/block/"$i"/queue/max_sectors_kb to 192kb"; done Set /sys/block/sde/queue/max_sectors_kb to 192kb Set /sys/block/sdg/queue/max_sectors_kb to 192kb Set /sys/block/sdi/queue/max_sectors_kb to 192kb Set /sys/block/sdk/queue/max_sectors_kb to 192kb # echo 3 > /proc/sys/vm/drop_caches # dd if=/dev/md3 of=/dev/null bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 398.069 seconds, 27.0 MB/s Awful performance with your numbers/drop_caches settings.. ! What were your tests designed to show? Justin. On Fri, 12 Jan 2007, Justin Piszcz wrote: > > > On Fri, 12 Jan 2007, Al Boldi wrote: > > > Justin Piszcz wrote: > > > RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU > > > > > > This should be 1:14 not 1:06(was with a similarly sized file but not the > > > same) the 1:14 is the same file as used with the other benchmarks. and to > > > get that I used 256mb read-ahead and 16384 stripe size ++ 128 > > > max_sectors_kb (same size as my sw raid5 chunk size) > > > > max_sectors_kb is probably your key. On my system I get twice the read > > performance by just reducing max_sectors_kb from default 512 to 192. > > > > Can you do a fresh reboot to shell and then: > > $ cat /sys/block/hda/queue/* > > $ cat /proc/meminfo > > $ echo 3 > /proc/sys/vm/drop_caches > > $ dd if=/dev/hda of=/dev/null bs=1M count=10240 > > $ echo 192 > /sys/block/hda/queue/max_sectors_kb > > $ echo 3 > /proc/sys/vm/drop_caches > > $ dd if=/dev/hda of=/dev/null bs=1M count=10240 > > > > > > Thanks! 
> > > > -- > > Al > > > > - > > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > > the body of a message to majordomo@vger.kernel.org > > More majordomo info at http://vger.kernel.org/majordomo-info.html > > > > Ok. sec > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > From owner-xfs@oss.sgi.com Fri Jan 12 13:00:15 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 12 Jan 2007 13:00:24 -0800 (PST) Received: from raad.intranet ([212.12.190.123]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0CL0Aqw001179 for ; Fri, 12 Jan 2007 13:00:12 -0800 Received: from localhost ([10.0.0.111]) by raad.intranet (8.8.7/8.8.7) with ESMTP id XAA24886; Fri, 12 Jan 2007 23:58:47 +0300 From: Al Boldi To: Justin Piszcz Subject: Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write) Date: Sat, 13 Jan 2007 00:00:48 +0300 User-Agent: KMail/1.5 Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com References: In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset="windows-1256" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200701130000.48717.a1426z@gawab.com> X-archive-position: 10279 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: a1426z@gawab.com Precedence: bulk X-list: xfs Content-Length: 1765 Lines: 53 Justin Piszcz wrote: > Btw, max sectors did improve my performance a little bit but > stripe_cache+read_ahead were the main optimizations that made everything > go faster by about ~1.5x. I have individual bonnie++ benchmarks of > [only] the max_sector_kb tests as well, it improved the times from > 8min/bonnie run -> 7min 11 seconds or so, see below and then after that is > what you requested. > > # echo 3 > /proc/sys/vm/drop_caches > # dd if=/dev/md3 of=/dev/null bs=1M count=10240 > 10240+0 records in > 10240+0 records out > 10737418240 bytes (11 GB) copied, 399.352 seconds, 26.9 MB/s > # for i in sde sdg sdi sdk; do echo 192 > > /sys/block/"$i"/queue/max_sectors_kb; echo "Set > /sys/block/"$i"/queue/max_sectors_kb to 192kb"; done > Set /sys/block/sde/queue/max_sectors_kb to 192kb > Set /sys/block/sdg/queue/max_sectors_kb to 192kb > Set /sys/block/sdi/queue/max_sectors_kb to 192kb > Set /sys/block/sdk/queue/max_sectors_kb to 192kb > # echo 3 > /proc/sys/vm/drop_caches > # dd if=/dev/md3 of=/dev/null bs=1M count=10240 > 10240+0 records in > 10240+0 records out > 10737418240 bytes (11 GB) copied, 398.069 seconds, 27.0 MB/s > > Awful performance with your numbers/drop_caches settings.. ! Can you repeat with /dev/sda only? With fresh reboot to shell, then: $ cat /sys/block/sda/queue/max_sectors_kb $ echo 3 > /proc/sys/vm/drop_caches $ dd if=/dev/sda of=/dev/null bs=1M count=10240 $ echo 192 > /sys/block/sda/queue/max_sectors_kb $ echo 3 > /proc/sys/vm/drop_caches $ dd if=/dev/sda of=/dev/null bs=1M count=10240 $ echo 128 > /sys/block/sda/queue/max_sectors_kb $ echo 3 > /proc/sys/vm/drop_caches $ dd if=/dev/sda of=/dev/null bs=1M count=10240 > What were your tests designed to show? A problem with the block-io. Thanks! 
-- Al From owner-xfs@oss.sgi.com Fri Jan 12 13:41:32 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 12 Jan 2007 13:41:41 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0CLfUqw008787 for ; Fri, 12 Jan 2007 13:41:31 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id BCDCA1A000259; Fri, 12 Jan 2007 16:40:27 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id B160BA000257; Fri, 12 Jan 2007 16:40:27 -0500 (EST) Date: Fri, 12 Jan 2007 16:40:27 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Al Boldi cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write) In-Reply-To: <200701130000.48717.a1426z@gawab.com> Message-ID: References: <200701130000.48717.a1426z@gawab.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10280 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 2997 Lines: 100 On Sat, 13 Jan 2007, Al Boldi wrote: > Justin Piszcz wrote: > > Btw, max sectors did improve my performance a little bit but > > stripe_cache+read_ahead were the main optimizations that made everything > > go faster by about ~1.5x. I have individual bonnie++ benchmarks of > > [only] the max_sector_kb tests as well, it improved the times from > > 8min/bonnie run -> 7min 11 seconds or so, see below and then after that is > > what you requested. > > > > # echo 3 > /proc/sys/vm/drop_caches > > # dd if=/dev/md3 of=/dev/null bs=1M count=10240 > > 10240+0 records in > > 10240+0 records out > > 10737418240 bytes (11 GB) copied, 399.352 seconds, 26.9 MB/s > > # for i in sde sdg sdi sdk; do echo 192 > > > /sys/block/"$i"/queue/max_sectors_kb; echo "Set > > /sys/block/"$i"/queue/max_sectors_kb to 192kb"; done > > Set /sys/block/sde/queue/max_sectors_kb to 192kb > > Set /sys/block/sdg/queue/max_sectors_kb to 192kb > > Set /sys/block/sdi/queue/max_sectors_kb to 192kb > > Set /sys/block/sdk/queue/max_sectors_kb to 192kb > > # echo 3 > /proc/sys/vm/drop_caches > > # dd if=/dev/md3 of=/dev/null bs=1M count=10240 > > 10240+0 records in > > 10240+0 records out > > 10737418240 bytes (11 GB) copied, 398.069 seconds, 27.0 MB/s > > > > Awful performance with your numbers/drop_caches settings.. ! > > Can you repeat with /dev/sda only? > > With fresh reboot to shell, then: > $ cat /sys/block/sda/queue/max_sectors_kb > $ echo 3 > /proc/sys/vm/drop_caches > $ dd if=/dev/sda of=/dev/null bs=1M count=10240 > > $ echo 192 > /sys/block/sda/queue/max_sectors_kb > $ echo 3 > /proc/sys/vm/drop_caches > $ dd if=/dev/sda of=/dev/null bs=1M count=10240 > > $ echo 128 > /sys/block/sda/queue/max_sectors_kb > $ echo 3 > /proc/sys/vm/drop_caches > $ dd if=/dev/sda of=/dev/null bs=1M count=10240 > > > What were your tests designed to show? > > A problem with the block-io. > > > Thanks! > > -- > Al > > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Here you go: For sda-- (is a 74GB raptor only)-- but ok. 
# uptime 16:25:38 up 1 min, 3 users, load average: 0.23, 0.14, 0.05 # cat /sys/block/sda/queue/max_sectors_kb 512 # echo 3 > /proc/sys/vm/drop_caches # dd if=/dev/sda of=/dev/null bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 150.891 seconds, 71.2 MB/s # # # # echo 192 > /sys/block/sda/queue/max_sectors_kb # echo 3 > /proc/sys/vm/drop_caches # dd if=/dev/sda of=/dev/null bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 150.192 seconds, 71.5 MB/s # echo 128 > /sys/block/sda/queue/max_sectors_kb # echo 3 > /proc/sys/vm/drop_caches # dd if=/dev/sda of=/dev/null bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 150.15 seconds, 71.5 MB/s Does this show anything useful? Justin. From owner-xfs@oss.sgi.com Fri Jan 12 14:38:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 12 Jan 2007 14:38:49 -0800 (PST) Received: from gaimboi.tmr.com (mail.tmr.com [64.65.253.246]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0CMcTqw020429 for ; Fri, 12 Jan 2007 14:38:30 -0800 Received: from [127.0.0.1] (gaimboi.tmr.com [127.0.0.1]) by gaimboi.tmr.com (8.12.8/8.12.8) with ESMTP id l0CKfVHT030948; Fri, 12 Jan 2007 15:41:31 -0500 Message-ID: <45A7F27B.3080402@tmr.com> Date: Fri, 12 Jan 2007 15:41:31 -0500 From: Bill Davidsen Organization: TMR Associates Inc, Schenectady NY User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.8) Gecko/20061105 SeaMonkey/1.0.6 MIME-Version: 1.0 To: Justin Piszcz CC: Al Boldi , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write) References: <200701122235.30288.a1426z@gawab.com> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10281 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: davidsen@tmr.com Precedence: bulk X-list: xfs Content-Length: 2195 Lines: 63 Justin Piszcz wrote: > # echo 3 > /proc/sys/vm/drop_caches > # dd if=/dev/md3 of=/dev/null bs=1M count=10240 > 10240+0 records in > 10240+0 records out > 10737418240 bytes (11 GB) copied, 399.352 seconds, 26.9 MB/s > # for i in sde sdg sdi sdk; do echo 192 > > /sys/block/"$i"/queue/max_sectors_kb; echo "Set > /sys/block/"$i"/queue/max_sectors_kb to 192kb"; done > Set /sys/block/sde/queue/max_sectors_kb to 192kb > Set /sys/block/sdg/queue/max_sectors_kb to 192kb > Set /sys/block/sdi/queue/max_sectors_kb to 192kb > Set /sys/block/sdk/queue/max_sectors_kb to 192kb > # echo 3 > /proc/sys/vm/drop_caches > # dd if=/dev/md3 of=/dev/null bs=1M count=10240 > 10240+0 records in > 10240+0 records out > 10737418240 bytes (11 GB) copied, 398.069 seconds, 27.0 MB/s > > Awful performance with your numbers/drop_caches settings.. ! > > What were your tests designed to show? > To start, I expect then to show change in write, not read... and IIRC (I didn't look it up) drop_caches just flushes the caches so you start with known memory contents, none. > > Justin. > > On Fri, 12 Jan 2007, Justin Piszcz wrote: > > >> On Fri, 12 Jan 2007, Al Boldi wrote: >> >> >>> Justin Piszcz wrote: >>> >>>> RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU >>>> >>>> This should be 1:14 not 1:06(was with a similarly sized file but not the >>>> same) the 1:14 is the same file as used with the other benchmarks. 
and to >>>> get that I used 256mb read-ahead and 16384 stripe size ++ 128 >>>> max_sectors_kb (same size as my sw raid5 chunk size) >>>> >>> max_sectors_kb is probably your key. On my system I get twice the read >>> performance by just reducing max_sectors_kb from default 512 to 192. >>> >>> Can you do a fresh reboot to shell and then: >>> $ cat /sys/block/hda/queue/* >>> $ cat /proc/meminfo >>> $ echo 3 > /proc/sys/vm/drop_caches >>> $ dd if=/dev/hda of=/dev/null bs=1M count=10240 >>> $ echo 192 > /sys/block/hda/queue/max_sectors_kb >>> $ echo 3 > /proc/sys/vm/drop_caches >>> $ dd if=/dev/hda of=/dev/null bs=1M count=10240 >>> >>> -- bill davidsen CTO TMR Associates, Inc Doing interesting things with small computers since 1979 From owner-xfs@oss.sgi.com Fri Jan 12 14:38:34 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 12 Jan 2007 14:38:50 -0800 (PST) Received: from gaimboi.tmr.com (mail.tmr.com [64.65.253.246]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0CMcTr0020429 for ; Fri, 12 Jan 2007 14:38:32 -0800 Received: from [127.0.0.1] (gaimboi.tmr.com [127.0.0.1]) by gaimboi.tmr.com (8.12.8/8.12.8) with ESMTP id l0CKWDHT030931; Fri, 12 Jan 2007 15:32:13 -0500 Message-ID: <45A7F04D.6030804@tmr.com> Date: Fri, 12 Jan 2007 15:32:13 -0500 From: Bill Davidsen Organization: TMR Associates Inc, Schenectady NY User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.8) Gecko/20061105 SeaMonkey/1.0.6 MIME-Version: 1.0 To: Al Boldi CC: Justin Piszcz , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write) References: <200701122235.30288.a1426z@gawab.com> In-Reply-To: <200701122235.30288.a1426z@gawab.com> Content-Type: text/plain Content-Disposition: inline Content-Transfer-Encoding: 7bit X-archive-position: 10282 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: davidsen@tmr.com Precedence: bulk X-list: xfs Content-Length: 2157 Lines: 55 Al Boldi wrote: > Justin Piszcz wrote: > >> RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU >> >> This should be 1:14 not 1:06(was with a similarly sized file but not the >> same) the 1:14 is the same file as used with the other benchmarks. and to >> get that I used 256mb read-ahead and 16384 stripe size ++ 128 >> max_sectors_kb (same size as my sw raid5 chunk size) >> > > max_sectors_kb is probably your key. On my system I get twice the read > performance by just reducing max_sectors_kb from default 512 to 192. > > Can you do a fresh reboot to shell and then: > $ cat /sys/block/hda/queue/* > $ cat /proc/meminfo > $ echo 3 > /proc/sys/vm/drop_caches > $ dd if=/dev/hda of=/dev/null bs=1M count=10240 > $ echo 192 > /sys/block/hda/queue/max_sectors_kb > $ echo 3 > /proc/sys/vm/drop_caches > $ dd if=/dev/hda of=/dev/null bs=1M count=10240 > You can find even more joy on large writes, assuming you have a recent 2.6 kernel. Look at the /proc/sys/vm/dirty_* values. By making the dirty ratio smaller, and the background dirty ratio smaller, you can reduce or eliminate the bursty behavior of Linux disk writes. However, see my previous thread on poor RAID-5 write performance, there's still something not optimal. Note that RAID-10, which does more i/o, is faster with default tuning than RAID-5 by about N-1 times (N = array drives).
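(A minimal sketch of the dirty-writeback knobs being referred to, assuming a 2.6-era kernel; the percentages are illustrative only:

  # Percent of memory that may be dirty before background writeback starts.
  echo 5 > /proc/sys/vm/dirty_background_ratio

  # Percent of memory at which a writing process is itself forced to flush;
  # lowering both smooths out the large write bursts.
  echo 10 > /proc/sys/vm/dirty_ratio

  # Equivalent via sysctl:
  sysctl -w vm.dirty_background_ratio=5 vm.dirty_ratio=10
)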
I would say that the numbers posted are interesting, but most people don't have a spare GB or more to use for buffer, particularly if you have multiple arrays on your disks. Before someone says "why do that..." here's why: /boot - should be mirrored so the BIOS will boot if a drive fails swap - RAID-1, because for an given tuning, it's faster than RAID-5. Note: RAID-10 is faster yet, but Fedora and SuSE rescue CDs don't like RAID-10 swap. critical - stuff you can't afford to lose, RAID-6\ normal - RAID-5 That's why I have partitions of the same drives at different RAID levels, and with various tuning settings, depending on how they are used. -- bill davidsen CTO TMR Associates, Inc Doing interesting things with small computers since 1979 [[HTML alternate version deleted]] From owner-xfs@oss.sgi.com Sat Jan 13 01:41:34 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 13 Jan 2007 01:41:39 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0D9fXqw020811 for ; Sat, 13 Jan 2007 01:41:34 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id E9CA51A000259; Sat, 13 Jan 2007 04:40:39 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id E4DD0A000257; Sat, 13 Jan 2007 04:40:39 -0500 (EST) Date: Sat, 13 Jan 2007 04:40:39 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Al Boldi cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write) In-Reply-To: <200701130911.38240.a1426z@gawab.com> Message-ID: References: <200701130000.48717.a1426z@gawab.com> <200701130911.38240.a1426z@gawab.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10283 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 2096 Lines: 66 On Sat, 13 Jan 2007, Al Boldi wrote: > Justin Piszcz wrote: > > On Sat, 13 Jan 2007, Al Boldi wrote: > > > Justin Piszcz wrote: > > > > Btw, max sectors did improve my performance a little bit but > > > > stripe_cache+read_ahead were the main optimizations that made > > > > everything go faster by about ~1.5x. I have individual bonnie++ > > > > benchmarks of [only] the max_sector_kb tests as well, it improved the > > > > times from 8min/bonnie run -> 7min 11 seconds or so, see below and > > > > then after that is what you requested. > > > > > > Can you repeat with /dev/sda only? > > > > For sda-- (is a 74GB raptor only)-- but ok. > > Do you get the same results for the 150GB-raptor on sd{e,g,i,k}? 
> > > # uptime > > 16:25:38 up 1 min, 3 users, load average: 0.23, 0.14, 0.05 > > # cat /sys/block/sda/queue/max_sectors_kb > > 512 > > # echo 3 > /proc/sys/vm/drop_caches > > # dd if=/dev/sda of=/dev/null bs=1M count=10240 > > 10240+0 records in > > 10240+0 records out > > 10737418240 bytes (11 GB) copied, 150.891 seconds, 71.2 MB/s > > # echo 192 > /sys/block/sda/queue/max_sectors_kb > > # echo 3 > /proc/sys/vm/drop_caches > > # dd if=/dev/sda of=/dev/null bs=1M count=10240 > > 10240+0 records in > > 10240+0 records out > > 10737418240 bytes (11 GB) copied, 150.192 seconds, 71.5 MB/s > > # echo 128 > /sys/block/sda/queue/max_sectors_kb > > # echo 3 > /proc/sys/vm/drop_caches > > # dd if=/dev/sda of=/dev/null bs=1M count=10240 > > 10240+0 records in > > 10240+0 records out > > 10737418240 bytes (11 GB) copied, 150.15 seconds, 71.5 MB/s > > > > > > Does this show anything useful? > > Probably a latency issue. md is highly latency sensitive. > > What CPU type/speed do you have? Bootlog/dmesg? > > > Thanks! > > -- > Al > > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > > What CPU type/speed do you have? Bootlog/dmesg? Core Duo E6300 The speed is great since I have tweaked the various settings.. From owner-xfs@oss.sgi.com Sun Jan 14 18:42:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 14 Jan 2007 18:42:09 -0800 (PST) Received: from ruth.realtime.net (mercury.realtime.net [205.238.132.86]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0F2g3qw006106 for ; Sun, 14 Jan 2007 18:42:04 -0800 Received: from [192.168.2.120] (cpe-66-68-187-7.austin.res.rr.com [66.68.187.7]) by realtime.net (Realtime Communications Advanced E-Mail Services V9.2) with ESMTP id 44866804-1817707 for multiple; Sun, 14 Jan 2007 20:00:53 -0600 Message-ID: <45AAE04A.7070003@johngroves.net> Date: Sun, 14 Jan 2007 20:00:42 -0600 From: John Groves Reply-To: jgl@johngroves.net User-Agent: Mozilla Thunderbird 1.0.7 (Windows/20050923) X-Accept-Language: en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: dmapi vs. filesystem filter drivers Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Authenticated-User: jg@bga.com X-archive-position: 10293 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jgl@johngroves.net Precedence: bulk X-list: xfs Content-Length: 386 Lines: 11 Some time ago on this list I recall seeing some discussion of dmapi vs. filesystem filter drivers, and I think somebody said that the linux filesystem interface was being redesigned to provide reasonable support for filter drivers. Can anybody confirm whether a filter driver interface will be supported, and point me to more information on the status of that work? 
Thanks, John From owner-xfs@oss.sgi.com Sun Jan 14 18:47:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 14 Jan 2007 18:47:12 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0F2l5qw007561 for ; Sun, 14 Jan 2007 18:47:07 -0800 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA20784; Mon, 15 Jan 2007 13:46:07 +1100 Message-ID: <45AAEB4F.6090309@sgi.com> Date: Mon, 15 Jan 2007 13:47:43 +1100 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.8 (X11/20061025) MIME-Version: 1.0 To: jgl@johngroves.net CC: linux-xfs@oss.sgi.com Subject: Re: dmapi vs. filesystem filter drivers References: <45AAE04A.7070003@johngroves.net> In-Reply-To: <45AAE04A.7070003@johngroves.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10294 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs Content-Length: 495 Lines: 19 John Groves wrote: > Some time ago on this list I recall seeing some discussion of dmapi > vs. filesystem filter drivers, and I think somebody said that the > linux filesystem interface was being redesigned to provide reasonable > support for filter drivers. > > Can anybody confirm whether a filter driver interface will be > supported, and point me to more information on the status of that work? > > Thanks, > John > Hi John, Currently there is no plan for such support. Regards, Vlad From owner-xfs@oss.sgi.com Mon Jan 15 10:43:36 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 15 Jan 2007 10:43:45 -0800 (PST) Received: from ciao.gmane.org (main.gmane.org [80.91.229.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0FIhXqw000562 for ; Mon, 15 Jan 2007 10:43:35 -0800 Received: from list by ciao.gmane.org with local (Exim 4.43) id 1H6Wmv-00071T-Gz for linux-xfs@oss.sgi.com; Mon, 15 Jan 2007 19:42:33 +0100 Received: from p54a56174.dip.t-dialin.net ([84.165.97.116]) by main.gmane.org with esmtp (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Mon, 15 Jan 2007 19:42:33 +0100 Received: from christoph.bier by p54a56174.dip.t-dialin.net with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Mon, 15 Jan 2007 19:42:33 +0100 X-Injected-Via-Gmane: http://gmane.org/ To: linux-xfs@oss.sgi.com From: Christoph Bier Subject: Re: Mounting an external HDD fails each second time after xfs_repair Date: Mon, 15 Jan 2007 19:42:22 +0100 Message-ID: References: <17829.27416.823078.9589@base.ty.sabi.co.UK> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Complaints-To: usenet@sea.gmane.org X-Gmane-NNTP-Posting-Host: p54a56174.dip.t-dialin.net User-Agent: Mozilla/5.0 (X11; U; Linux i686; de-AT; rv:1.7.8) Gecko/20061113 Debian/1.7.8-1sarge8 Mnenhy/0.7.1 X-Accept-Language: de, de-de, de-at, en In-Reply-To: <17829.27416.823078.9589@base.ty.sabi.co.UK> X-archive-position: 10296 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: christoph.bier@web.de Precedence: bulk X-list: xfs Content-Length: 517 Lines: 17 Peter Grandi schrieb am 10.01.2007 23:39: [...] > We can of course assume, [...] 
that you have advisedly > chosen the external disk for 100% reliable operation, as you > have checked thoroughly that the power supply of the case is > sufficient and that both chipsets (USB? FW?) are well known for > being bug-free and reliable for use with the relevant GNU/Linux > mass storage driver... :-) Yes, you can. But OT here. Regards, Christoph -- +++ Typografie-Regeln: http://zvisionwelt.de/downloads.html (1.6) From owner-xfs@oss.sgi.com Mon Jan 15 14:32:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 15 Jan 2007 14:32:22 -0800 (PST) Received: from bycast.com (bycast.com [209.139.229.1] (may be forged)) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0FMWHqw017458 for ; Mon, 15 Jan 2007 14:32:17 -0800 Received: from [192.168.110.200] (account kjamieson HELO [192.168.110.200]) by bycast.com (CommuniGate Pro SMTP 4.3.9) with ESMTPA id 1472555 for xfs@oss.sgi.com; Mon, 15 Jan 2007 13:31:22 -0800 Message-ID: <45ABF2AA.2010209@bycast.com> Date: Mon, 15 Jan 2007 13:31:22 -0800 From: Kevin Jamieson Reply-To: kjamieson@bycast.com Organization: Bycast User-Agent: Icedove 1.5.0.9 (X11/20061220) MIME-Version: 1.0 To: xfs@oss.sgi.com Subject: Re: Review: fix block reservation to work with per-cpu counters Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10298 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: kjamieson@bycast.com Precedence: bulk X-list: xfs Content-Length: 1273 Lines: 35 This fails to build on non-SMP systems: CC [M] fs/xfs/xfs_fsops.o fs/xfs/xfs_fsops.c: In function 'xfs_fs_counts': fs/xfs/xfs_fsops.c:463: warning: implicit declaration of function 'xfs_icsb_sync_counters_flags' fs/xfs/xfs_fsops.c:463: error: 'XFS_ICSB_LAZY_COUNT' undeclared (first use in this function) fs/xfs/xfs_fsops.c:463: error: (Each undeclared identifier is reported only once fs/xfs/xfs_fsops.c:463: error: for each function it appears in.) 
fs/xfs/xfs_fsops.c: In function 'xfs_reserve_blocks': fs/xfs/xfs_fsops.c:525: error: 'XFS_ICSB_SB_LOCKED' undeclared (first use in this function) make[1]: *** [fs/xfs/xfs_fsops.o] Error 1 make: *** [fs/xfs/xfs.ko] Error 2 The define for not HAVE_PERCPU_SB needs to be changed: Index: fs/xfs/xfs_mount.h =================================================================== RCS file: /cvs/linux-2.6-xfs/fs/xfs/xfs_mount.h,v retrieving revision 1.232 diff -u -r1.232 xfs_mount.h --- fs/xfs/xfs_mount.h 10 Jan 2007 14:42:52 -0000 1.232 +++ fs/xfs/xfs_mount.h 15 Jan 2007 21:13:09 -0000 @@ -311,7 +311,7 @@ #else #define xfs_icsb_init_counters(mp) (0) -#define xfs_icsb_sync_counters_lazy(mp) do { } while (0) +#define xfs_icsb_sync_counters_flags(mp, flags) do { } while (0) #endif typedef struct xfs_mount { From owner-xfs@oss.sgi.com Mon Jan 15 15:48:15 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 15 Jan 2007 15:48:20 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0FNmBqw000467 for ; Mon, 15 Jan 2007 15:48:14 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA20204; Tue, 16 Jan 2007 10:47:13 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0FNlB7Y94279286; Tue, 16 Jan 2007 10:47:12 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0FNlAUZ93604157; Tue, 16 Jan 2007 10:47:10 +1100 (AEDT) Date: Tue, 16 Jan 2007 10:47:10 +1100 From: David Chinner To: Kevin Jamieson Cc: xfs@oss.sgi.com, chatz@sgi.com Subject: Re: Review: fix block reservation to work with per-cpu counters Message-ID: <20070115234710.GI44411608@melbourne.sgi.com> References: <45ABF2AA.2010209@bycast.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45ABF2AA.2010209@bycast.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10299 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1109 Lines: 35 On Mon, Jan 15, 2007 at 01:31:22PM -0800, Kevin Jamieson wrote: > This fails to build on non-SMP systems: > > CC [M] fs/xfs/xfs_fsops.o > fs/xfs/xfs_fsops.c: In function 'xfs_fs_counts': > fs/xfs/xfs_fsops.c:463: warning: implicit declaration of function > 'xfs_icsb_sync_counters_flags' > fs/xfs/xfs_fsops.c:463: error: 'XFS_ICSB_LAZY_COUNT' undeclared (first use > in this function) > fs/xfs/xfs_fsops.c:463: error: (Each undeclared identifier is reported only > once > fs/xfs/xfs_fsops.c:463: error: for each function it appears in.) > fs/xfs/xfs_fsops.c: In function 'xfs_reserve_blocks': > fs/xfs/xfs_fsops.c:525: error: 'XFS_ICSB_SB_LOCKED' undeclared (first use > in this function) > make[1]: *** [fs/xfs/xfs_fsops.o] Error 1 > make: *** [fs/xfs/xfs.ko] Error 2 > > > The define for not HAVE_PERCPU_SB needs to be changed: Sorry about that - my fault. This fix looks good - I'll try to find some time to check this in (I'm at LCA right now and have limited net access). Chatz, maybe you could check this in? Cheers, Dave. 
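For anyone reading along, the breakage comes down to the usual real-versus-stub split in xfs_mount.h: SMP builds get the real per-cpu superblock counter functions, UP builds get do-nothing macros, and the macro list had not been updated for the renamed helper. Roughly like this -- a trimmed sketch, with the SMP-side prototypes approximate rather than the literal header:

#ifdef HAVE_PERCPU_SB
/* SMP: real per-cpu superblock counter implementation */
extern int	xfs_icsb_init_counters(struct xfs_mount *);
extern void	xfs_icsb_sync_counters_flags(struct xfs_mount *, int);
#else
/* UP: no per-cpu counters, so every helper the SMP code calls must be
 * stubbed out here as well -- including any newly renamed one */
#define xfs_icsb_init_counters(mp)		(0)
#define xfs_icsb_sync_counters_flags(mp, flags)	do { } while (0)
#endif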
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jan 15 16:46:43 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 15 Jan 2007 16:46:48 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0G0keqw019016 for ; Mon, 15 Jan 2007 16:46:42 -0800 Received: from pcbnaujok (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA21869 for ; Tue, 16 Jan 2007 11:45:46 +1100 Message-Id: <200701160045.LAA21869@larry.melbourne.sgi.com> From: "Barry Naujok" To: Subject: FW: REVIEW: 031 QA failure with xfs_repair Date: Tue, 16 Jan 2007 11:51:06 +1100 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Office Outlook, Build 11.0.6353 Thread-Index: Acc4/VNfRwpmO6G5RrO3MD7i3HhgzgACtREw X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3028 X-archive-position: 10300 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 3763 Lines: 132 QA 030 and 031 are failing with xfs_repair as the "zeroed log" is different on IRIX and Linux. xfs_repair must be coded to handle the different platform log formats. ========================== Need a review please: 383 -> p_rdiff -um libxfs libxlog ======================================================================== === xfsprogs/libxfs/darwin.c ======================================================================== === --- /usr/tmp/TmpDir.5868489-0/xfsprogs/libxfs/darwin.c_1.12 Mon Jan 15 17:30:37 2007 +++ xfsprogs/libxfs/darwin.c Mon Jan 15 16:53:25 2007 @@ -22,6 +22,7 @@ #include #include +int platform_has_uuid = 0; extern char *progname; int ======================================================================== === xfsprogs/libxfs/freebsd.c ======================================================================== === --- /usr/tmp/TmpDir.5868489-0/xfsprogs/libxfs/freebsd.c_1.15 Mon Jan 15 17:30:37 2007 +++ xfsprogs/libxfs/freebsd.c Mon Jan 15 16:53:35 2007 @@ -22,6 +22,7 @@ #include #include +int platform_has_uuid = 0; extern char *progname; int ======================================================================== === xfsprogs/libxfs/init.h ======================================================================== === --- /usr/tmp/TmpDir.5868489-0/xfsprogs/libxfs/init.h_1.12 Mon Jan 15 17:30:37 2007 +++ xfsprogs/libxfs/init.h Mon Jan 15 16:52:09 2007 @@ -34,5 +34,6 @@ extern int platform_align_blockdev (void); extern int platform_aio_init (int aio_count); extern int platform_nproc(void); +extern int platform_has_uuid; #endif /* LIBXFS_INIT_H */ ======================================================================== === xfsprogs/libxfs/irix.c ======================================================================== === --- /usr/tmp/TmpDir.5868489-0/xfsprogs/libxfs/irix.c_1.13 Mon Jan 15 17:30:37 2007 +++ xfsprogs/libxfs/irix.c Mon Jan 15 16:53:49 2007 @@ -21,6 +21,7 @@ #include #include +int platform_has_uuid = 0; extern char *progname; extern __int64_t findsize(char *); ======================================================================== === xfsprogs/libxfs/linux.c ======================================================================== === --- /usr/tmp/TmpDir.5868489-0/xfsprogs/libxfs/linux.c_1.16 Mon Jan 15 17:30:37 2007 +++ xfsprogs/libxfs/linux.c 
Mon Jan 15 16:53:10 2007 @@ -26,6 +26,7 @@ #include #include +int platform_has_uuid = 1; extern char *progname; static int max_block_alignment; ======================================================================== === xfsprogs/libxlog/xfs_log_recover.c ======================================================================== === --- /usr/tmp/TmpDir.5868489-0/xfsprogs/libxlog/xfs_log_recover.c_1.30 Mon Jan 15 17:30:37 2007 +++ xfsprogs/libxlog/xfs_log_recover.c Mon Jan 15 16:54:50 2007 @@ -258,6 +258,7 @@ uint first_half_cycle, last_half_cycle; uint stop_on_cycle; int error, log_bbnum = log->l_logBBsize; + extern int platform_has_uuid; /* Is the end of the log device zeroed? */ if ((error = xlog_find_zeroed(log, &first_blk)) == -1) { @@ -264,7 +265,7 @@ *return_head_blk = first_blk; /* Is the whole lot zeroed? */ - if (!first_blk) { + if (!first_blk && platform_has_uuid) { /* Linux XFS shouldn't generate totally zeroed logs - * mkfs etc write a dummy unmount record to a fresh * log so we can store the uuid in there From owner-xfs@oss.sgi.com Mon Jan 15 17:04:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 15 Jan 2007 17:04:13 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0G145qw022117 for ; Mon, 15 Jan 2007 17:04:07 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA22242; Tue, 16 Jan 2007 12:03:11 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0G13A7Y88046406; Tue, 16 Jan 2007 12:03:10 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0G13AiW94176375; Tue, 16 Jan 2007 12:03:10 +1100 (AEDT) Date: Tue, 16 Jan 2007 12:03:10 +1100 From: David Chinner To: Barry Naujok Cc: xfs@oss.sgi.com Subject: Re: FW: REVIEW: 031 QA failure with xfs_repair Message-ID: <20070116010310.GM44411608@melbourne.sgi.com> References: <200701160045.LAA21869@larry.melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200701160045.LAA21869@larry.melbourne.sgi.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10301 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 341 Lines: 19 On Tue, Jan 16, 2007 at 11:51:06AM +1100, Barry Naujok wrote: > > QA 030 and 031 are failing with xfs_repair as the "zeroed log" is > different > on IRIX and Linux. > > xfs_repair must be coded to handle the different platform log > formats. Looks ok. Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jan 15 17:07:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 15 Jan 2007 17:07:53 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0G17jqw023071 for ; Mon, 15 Jan 2007 17:07:46 -0800 Received: from pcbnaujok (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA22290 for ; Tue, 16 Jan 2007 12:06:51 +1100 Message-Id: <200701160106.MAA22290@larry.melbourne.sgi.com> From: "Barry Naujok" To: Subject: RE: REVIEW: 031 QA failure with xfs_repair Date: Tue, 16 Jan 2007 12:12:18 +1100 MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_NextPart_000_051B_01C73967.91347F60" X-Mailer: Microsoft Office Outlook, Build 11.0.6353 Thread-Index: Acc4/VNfRwpmO6G5RrO3MD7i3HhgzgACtREwAADCTqA= X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3028 In-Reply-To: <200701160045.LAA21869@larry.melbourne.sgi.com> X-archive-position: 10302 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 6224 Lines: 168 This is a multi-part message in MIME format. ------=_NextPart_000_051B_01C73967.91347F60 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Well, Dave has already replied, but here is the patch again with out the line break crap and tabs intact. > -----Original Message----- > From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com] > On Behalf Of Barry Naujok > Sent: Tuesday, 16 January 2007 11:51 AM > To: xfs@oss.sgi.com > Subject: FW: REVIEW: 031 QA failure with xfs_repair > > > QA 030 and 031 are failing with xfs_repair as the "zeroed log" is > different > on IRIX and Linux. > > xfs_repair must be coded to handle the different platform log > formats. 
> > ========================== > > Need a review please: (see attached) ------=_NextPart_000_051B_01C73967.91347F60 Content-Type: application/octet-stream; name="031-fix.diff" Content-Transfer-Encoding: quoted-printable Content-Disposition: attachment; filename="031-fix.diff" =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D xfsprogs/libxfs/darwin.c =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D --- /usr/tmp/TmpDir.5868489-0/xfsprogs/libxfs/darwin.c_1.12 Mon Jan 15 = 17:30:37 2007 +++ xfsprogs/libxfs/darwin.c Mon Jan 15 16:53:25 2007 @@ -22,6 +22,7 @@ #include #include +int platform_has_uuid =3D 0; extern char *progname; int =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D xfsprogs/libxfs/freebsd.c =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D --- /usr/tmp/TmpDir.5868489-0/xfsprogs/libxfs/freebsd.c_1.15 Mon Jan 15 = 17:30:37 2007 +++ xfsprogs/libxfs/freebsd.c Mon Jan 15 16:53:35 2007 @@ -22,6 +22,7 @@ #include #include +int platform_has_uuid =3D 0; extern char *progname; int =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D xfsprogs/libxfs/init.h =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D --- /usr/tmp/TmpDir.5868489-0/xfsprogs/libxfs/init.h_1.12 Mon Jan 15 = 17:30:37 2007 +++ xfsprogs/libxfs/init.h Mon Jan 15 16:52:09 2007 @@ -34,5 +34,6 @@ extern int platform_align_blockdev (void); extern int platform_aio_init (int aio_count); extern int platform_nproc(void); +extern int platform_has_uuid; #endif /* LIBXFS_INIT_H */ =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D xfsprogs/libxfs/irix.c =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D --- /usr/tmp/TmpDir.5868489-0/xfsprogs/libxfs/irix.c_1.13 Mon Jan 15 = 17:30:37 2007 +++ xfsprogs/libxfs/irix.c Mon Jan 15 16:53:49 2007 @@ -21,6 +21,7 @@ #include #include +int platform_has_uuid =3D 0; extern char *progname; extern __int64_t findsize(char *); =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D xfsprogs/libxfs/linux.c 
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D --- /usr/tmp/TmpDir.5868489-0/xfsprogs/libxfs/linux.c_1.16 Mon Jan 15 = 17:30:37 2007 +++ xfsprogs/libxfs/linux.c Mon Jan 15 16:53:10 2007 @@ -26,6 +26,7 @@ #include #include +int platform_has_uuid =3D 1; extern char *progname; static int max_block_alignment; =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D xfsprogs/libxlog/xfs_log_recover.c =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D --- /usr/tmp/TmpDir.5868489-0/xfsprogs/libxlog/xfs_log_recover.c_1.30 Mon= Jan 15 17:30:37 2007 +++ xfsprogs/libxlog/xfs_log_recover.c Mon Jan 15 16:54:50 2007 @@ -258,6 +258,7 @@ uint first_half_cycle, last_half_cycle; uint stop_on_cycle; int error, log_bbnum =3D log->l_logBBsize; + extern int platform_has_uuid; /* Is the end of the log device zeroed? */ if ((error =3D xlog_find_zeroed(log, &first_blk)) =3D=3D -1) { @@ -264,7 +265,7 @@ *return_head_blk =3D first_blk; /* Is the whole lot zeroed? */ - if (!first_blk) { + if (!first_blk && platform_has_uuid) { /* Linux XFS shouldn't generate totally zeroed logs - * mkfs etc write a dummy unmount record to a fresh * log so we can store the uuid in there ------=_NextPart_000_051B_01C73967.91347F60-- From owner-xfs@oss.sgi.com Mon Jan 15 22:13:06 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 15 Jan 2007 22:13:11 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0G6D0r0015798 for ; Mon, 15 Jan 2007 22:13:05 -0800 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA29557; Tue, 16 Jan 2007 17:08:45 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 16344) id 0D2E958F92CD; Tue, 16 Jan 2007 17:08:45 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: PARTIAL TAKE 957103 - Message-Id: <20070116060845.0D2E958F92CD@chook.melbourne.sgi.com> Date: Tue, 16 Jan 2007 17:08:45 +1100 (EST) From: vapo@sgi.com (Vlad Apostolov) X-archive-position: 10304 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs Content-Length: 631 Lines: 17 Fix update_26_xfs script to avoid problems with new files and deleting mainline-patches Date: Tue Jan 16 17:08:22 AEDT 2007 Workarea: chook.melbourne.sgi.com:/build/vapo/isms/xfs-cmds Inspected by: chatz The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:27936a xfsmisc/update_26_xfs - 1.4 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsmisc/update_26_xfs.diff?r1=text&tr1=1.4&r2=text&tr2=1.3&f=h - Exit if lsdiff is not in path, otherwise new files will not be marked fetal. Add mainline-patches to skip list so it is not deleted. 
From owner-xfs@oss.sgi.com Mon Jan 15 22:13:04 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 15 Jan 2007 22:13:08 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0G6D0qw015798 for ; Mon, 15 Jan 2007 22:13:02 -0800 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA29384; Tue, 16 Jan 2007 17:01:54 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 16344) id 22E5358F92CD; Tue, 16 Jan 2007 17:01:54 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: PARTIAL TAKE 957103 - Message-Id: <20070116060154.22E5358F92CD@chook.melbourne.sgi.com> Date: Tue, 16 Jan 2007 17:01:54 +1100 (EST) From: vapo@sgi.com (Vlad Apostolov) X-archive-position: 10303 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs Content-Length: 441 Lines: 14 Date: Tue Jan 16 17:01:21 AEDT 2007 Workarea: chook.melbourne.sgi.com:/build/vapo/isms/linux-xfs Inspected by: chatz The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: 2.6.x-xfs-melb:linux:27934a mainline-patches/Kconfig - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/Kconfig.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h - Reinstate mainline patch From owner-xfs@oss.sgi.com Mon Jan 15 22:13:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 15 Jan 2007 22:13:14 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0G6D0r2015798 for ; Mon, 15 Jan 2007 22:13:08 -0800 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA29407; Tue, 16 Jan 2007 17:03:14 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 16344) id 1E6BB58F92CD; Tue, 16 Jan 2007 17:03:14 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: PARTIAL TAKE 957103 - Merge 2.6.x-xfs up to more recent kernels and new kdb versions Message-Id: <20070116060314.1E6BB58F92CD@chook.melbourne.sgi.com> Date: Tue, 16 Jan 2007 17:03:14 +1100 (EST) From: vapo@sgi.com (Vlad Apostolov) X-archive-position: 10305 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs Content-Length: 3090 Lines: 44 Date: Tue Jan 16 17:02:31 AEDT 2007 Workarea: chook.melbourne.sgi.com:/build/vapo/isms/linux-xfs Inspected by: chatz The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: 2.6.x-xfs-melb:linux:27935a mainline-patches/linux-2.6/xfs_sysctl.c - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/linux-2.6/xfs_sysctl.c.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/linux-2.6/xfs_vfs.c - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/linux-2.6/xfs_vfs.c.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/xfs_vfsops.c - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/xfs_vfsops.c.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/xfs_iget.c - 1.3 - new 
http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/xfs_iget.c.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/linux-2.6/xfs_vfs.h - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/linux-2.6/xfs_vfs.h.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/Makefile-linux-2.6 - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/Makefile-linux-2.6.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/linux-2.6/xfs_globals.c - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/linux-2.6/xfs_globals.c.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/linux-2.6/xfs_linux.h - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/linux-2.6/xfs_linux.h.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/linux-2.6/xfs_sysctl.h - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/linux-2.6/xfs_sysctl.h.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/linux-2.6/xfs_super.c - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/linux-2.6/xfs_super.c.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/linux-2.6/xfs_super.h - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/linux-2.6/xfs_super.h.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/xfs_acl.c - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/xfs_acl.c.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/xfs.h - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/xfs.h.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/linux-2.6/xfs_vnode.h - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/linux-2.6/xfs_vnode.h.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/quota/xfs_qm_bhv.c - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/quota/xfs_qm_bhv.c.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h mainline-patches/xfs_behavior.h - 1.3 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/mainline-patches/xfs_behavior.h.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h - Reinstate delete file. 
From owner-xfs@oss.sgi.com Mon Jan 15 22:15:15 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 15 Jan 2007 22:15:19 -0800 (PST) Received: from tyo200.gate.nec.co.jp (TYO200.gate.nec.co.jp [210.143.35.50]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0G6FDqw016970 for ; Mon, 15 Jan 2007 22:15:15 -0800 Received: from tyo201.gate.nec.co.jp ([10.7.69.201]) by tyo200.gate.nec.co.jp (8.13.8/8.13.4) with ESMTP id l0G5h0Nj007175 for ; Tue, 16 Jan 2007 14:43:07 +0900 (JST) Received: from mailgate4.nec.co.jp (mailgate53.nec.co.jp [10.7.69.184]) by tyo201.gate.nec.co.jp (8.13.8/8.13.4) with ESMTP id l0G5bJro003972 for ; Tue, 16 Jan 2007 14:37:19 +0900 (JST) Received: (from root@localhost) by mailgate4.nec.co.jp (8.11.7/3.7W-MAILGATE-NEC) id l0G5bJf21908 for xfs@oss.sgi.com; Tue, 16 Jan 2007 14:37:19 +0900 (JST) Received: from secsv3.tnes.nec.co.jp (tnesvc2.tnes.nec.co.jp [10.1.101.15]) by mailsv3.nec.co.jp (8.11.7/3.7W-MAILSV4-NEC) with ESMTP id l0G5bJI18645 for ; Tue, 16 Jan 2007 14:37:19 +0900 (JST) Received: from tnesvc2.tnes.nec.co.jp ([10.1.101.15]) by secsv3.tnes.nec.co.jp (ExpressMail 5.10) with SMTP id 20070116.143718.67102512 for ; Tue, 16 Jan 2007 14:37:18 +0900 Received: FROM tnessv1.tnes.nec.co.jp BY tnesvc2.tnes.nec.co.jp ; Tue Jan 16 14:37:18 2007 +0900 Received: from rifu.bsd.tnes.nec.co.jp (rifu.bsd.tnes.nec.co.jp [10.1.104.1]) by tnessv1.tnes.nec.co.jp (Postfix) with ESMTP id EA456AE4B3 for ; Tue, 16 Jan 2007 14:37:16 +0900 (JST) Received: from TNESG9305.tnes.nec.co.jp (TNESG9305.bsd.tnes.nec.co.jp [10.1.104.199]) by rifu.bsd.tnes.nec.co.jp (8.12.11/3.7W/BSD-TNES-MX01) with SMTP id l0G5bI9P024720; Tue, 16 Jan 2007 14:37:18 +0900 Message-Id: <200701160537.AA04877@TNESG9305.tnes.nec.co.jp> Date: Tue, 16 Jan 2007 14:37:04 +0900 To: xfs@oss.sgi.com Subject: [PATCH] fix extent length in xfs_io bmap From: Utako Kusaka MIME-Version: 1.0 X-Mailer: AL-Mail32 Version 1.13 Content-Type: text/plain; charset=us-ascii X-archive-position: 10306 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: utako@tnes.nec.co.jp Precedence: bulk X-list: xfs Content-Length: 3436 Lines: 98 Hi, In bmap command in xfs_io, there is a difference in the length of the extent group between "bmap" and "bmap -n nn". It occurs if the file size > max extent size and the extents are allocated contiguously. This patch fixes it. 
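A toy illustration of the clamp the patch below introduces (numbers are made up; in the real code bmv_entries is whatever XFS_IOC_GETBMAPX filled in, and map[0] is the getbmapx header, so extent i lives in map[i + 1]):

#include <stdio.h>

static int min(int a, int b) { return a < b ? a : b; }

int main(void)
{
	int bmv_entries = 6;	/* extents the ioctl returned (hypothetical) */
	int nflag = 2;		/* user asked for "bmap -n 2" */
	int egcnt = nflag ? min(nflag, bmv_entries) : bmv_entries;

	/* only the first egcnt extent records are walked when printing */
	printf("printing %d of %d extent records\n", egcnt, bmv_entries);
	return 0;
}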
Test fs: # xfs_info /dev/sx8/14p1 meta-data=/dev/sx8/14p1 isize=256 agcount=16, agsize=7631000 blks = sectsz=512 attr=0 data = bsize=4096 blocks=122096000, imaxpct=25 = sunit=0 swidth=0 blks, unwritten=1 naming =version 2 bsize=4096 log =internal bsize=4096 blocks=32768, version=1 = sectsz=512 sunit=0 blks realtime =none extsz=4096 blocks=0, rtextents=0 Example: u.bmx[0-5] = [startoff,startblock,blockcount,extentflag] 0:[ 0, 1048588,2097088,0] 1:[2097088, 3145676,2097088,0] 2:[4194176, 5242764,2097088,0] 3:[6291264, 7339852, 291136,0] 4:[6582400, 8388616,2097088,0] 5:[8679488,10485704,1806272,0] # xfs_io file2 xfs_io> bmap -v file2: EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL 0: [0..52659199]: 8388704..61047903 0 (8388704..61047903) 52659200 ...* 1: [52659200..83886079]: 61048064..92274943 1 (64..31226943) 31226880 ...** xfs_io> bmap -v -n 1 file2: EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL 0: [0..16776703]: 8388704..25165407 0 (8388704..25165407) 16776704 ...* xfs_io> bmap -v -n 2 file2: EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL 0: [0..52659199]: 8388704..61047903 0 (8388704..61047903) 52659200 1: [52659200..69435903]: 61048064..77824767 1 (64..16776767) 16776704 ...** Signed-off-by: Utako Kusaka --- --- xfsprogs-2.8.18-orgn/io/bmap.c 2006-12-13 13:57:22.000000000 +0900 +++ xfsprogs-2.8.18/io/bmap.c 2006-12-21 11:07:09.573116475 +0900 @@ -78,6 +78,7 @@ bmap_f( int bmv_iflags = 0; /* flags for XFS_IOC_GETBMAPX */ int i = 0; int c; + int egcnt; while ((c = getopt(argc, argv, "adln:pv")) != EOF) { switch (c) { @@ -136,7 +137,7 @@ bmap_f( } } - map_size = nflag ? nflag+1 : 32; /* initial guess - 256 */ + map_size = nflag ? nflag+2 : 32; /* initial guess - 256 */ map = malloc(map_size*sizeof(*map)); if (map == NULL) { fprintf(stderr, _("%s: malloc of %d bytes failed.\n"), @@ -232,9 +233,10 @@ bmap_f( return 0; } } + egcnt = nflag ? min(nflag, map->bmv_entries) : map->bmv_entries; printf("%s:\n", file->name); if (!vflag) { - for (i = 0; i < map->bmv_entries; i++) { + for (i = 0; i < egcnt; i++) { printf("\t%d: [%lld..%lld]: ", i, (long long) map[i + 1].bmv_offset, (long long)(map[i + 1].bmv_offset + @@ -288,7 +290,7 @@ bmap_f( * Go through the extents and figure out the width * needed for all columns. */ - for (i = 0; i < map->bmv_entries; i++) { + for (i = 0; i < egcnt; i++) { snprintf(rbuf, sizeof(rbuf), "[%lld..%lld]:", (long long) map[i + 1].bmv_offset, (long long)(map[i + 1].bmv_offset + @@ -325,7 +327,7 @@ bmap_f( aoff_w, _("AG-OFFSET"), tot_w, _("TOTAL"), flg ? 
_(" FLAGS") : ""); - for (i = 0; i < map->bmv_entries; i++) { + for (i = 0; i < egcnt; i++) { flg = FLG_NULL; if (map[i + 1].bmv_oflags & BMV_OF_PREALLOC) { flg |= FLG_PRE; From owner-xfs@oss.sgi.com Tue Jan 16 04:07:32 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 16 Jan 2007 04:07:39 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0GC7Sqw024516 for ; Tue, 16 Jan 2007 04:07:31 -0800 Received: from [134.15.251.9] ([134.15.251.9]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id XAA07308; Tue, 16 Jan 2007 23:06:23 +1100 Message-ID: <45ACBFB4.5060405@melbourne.sgi.com> Date: Tue, 16 Jan 2007 23:06:12 +1100 From: David Chatterton Reply-To: chatz@melbourne.sgi.com Organization: SGI User-Agent: Thunderbird 1.5.0.9 (Windows/20061207) MIME-Version: 1.0 To: David Chinner CC: Kevin Jamieson , xfs@oss.sgi.com, chatz@sgi.com Subject: Re: Review: fix block reservation to work with per-cpu counters References: <45ABF2AA.2010209@bycast.com> <20070115234710.GI44411608@melbourne.sgi.com> In-Reply-To: <20070115234710.GI44411608@melbourne.sgi.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10308 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: chatz@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 1157 Lines: 40 David Chinner wrote: > On Mon, Jan 15, 2007 at 01:31:22PM -0800, Kevin Jamieson wrote: >> This fails to build on non-SMP systems: >> >> CC [M] fs/xfs/xfs_fsops.o >> fs/xfs/xfs_fsops.c: In function 'xfs_fs_counts': >> fs/xfs/xfs_fsops.c:463: warning: implicit declaration of function >> 'xfs_icsb_sync_counters_flags' >> fs/xfs/xfs_fsops.c:463: error: 'XFS_ICSB_LAZY_COUNT' undeclared (first use >> in this function) >> fs/xfs/xfs_fsops.c:463: error: (Each undeclared identifier is reported only >> once >> fs/xfs/xfs_fsops.c:463: error: for each function it appears in.) >> fs/xfs/xfs_fsops.c: In function 'xfs_reserve_blocks': >> fs/xfs/xfs_fsops.c:525: error: 'XFS_ICSB_SB_LOCKED' undeclared (first use >> in this function) >> make[1]: *** [fs/xfs/xfs_fsops.o] Error 1 >> make: *** [fs/xfs/xfs.ko] Error 2 >> >> >> The define for not HAVE_PERCPU_SB needs to be changed: > > Sorry about that - my fault. This fix looks good - I'll try to find some > time to check this in (I'm at LCA right now and have limited net > access). > > Chatz, maybe you could check this in? > Done. 
David -- David Chatterton XFS Engineering Manager SGI Australia From owner-xfs@oss.sgi.com Tue Jan 16 04:29:21 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 16 Jan 2007 04:29:25 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0GCTHqw002343 for ; Tue, 16 Jan 2007 04:29:19 -0800 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id XAA07241; Tue, 16 Jan 2007 23:04:07 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 1113) id 59C3A58F92CD; Tue, 16 Jan 2007 23:04:07 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 956323 - File system block reservation mechanism is broken Message-Id: <20070116120407.59C3A58F92CD@chook.melbourne.sgi.com> Date: Tue, 16 Jan 2007 23:04:07 +1100 (EST) From: chatz@sgi.com (David Chatterton) X-archive-position: 10309 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: chatz@sgi.com Precedence: bulk X-list: xfs Content-Length: 572 Lines: 19 Fix block reservation changes for non-SMP systems. Signed-off-by: Kevin Jamieson Date: Tue Jan 16 23:03:21 AEDT 2007 Workarea: chook.melbourne.sgi.com:/build/chatz/isms/xfs-linux Inspected by: kjamieson,dgc The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-kern/xfs-linux-melb Modid: xfs-linux-melb:xfs-kern:27940a xfs_mount.h - 1.233 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.h.diff?r1=text&tr1=1.233&r2=text&tr2=1.232&f=h - No-op xfs_icsb_sync_counters_flags on non-SMP builds. From owner-xfs@oss.sgi.com Tue Jan 16 08:54:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 16 Jan 2007 08:54:07 -0800 (PST) Received: from waldorf.loreland.org (ip186.digipost.co.nz [203.110.30.186] (may be forged)) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0GGrwqw027653 for ; Tue, 16 Jan 2007 08:54:00 -0800 Received: from localhost (localhost [127.0.0.1]) by waldorf.loreland.org (Postfix) with ESMTP id CB4E2428A0D for ; Wed, 17 Jan 2007 05:30:47 +1300 (NZDT) Received: from waldorf.loreland.org ([127.0.0.1]) by localhost (waldorf.loreland.org [127.0.0.1]) (amavisd-new, port 10024) with LMTP id PdAKPmv33ATG for ; Wed, 17 Jan 2007 05:30:42 +1300 (NZDT) Received: from colo.loreland.org (localhost [127.0.0.1]) by waldorf.loreland.org (Postfix) with ESMTP id 207DE41AD46 for ; Wed, 17 Jan 2007 05:30:42 +1300 (NZDT) Received: from 193.203.83.22 (SquirrelMail authenticated user jamesb) by colo.loreland.org with HTTP; Wed, 17 Jan 2007 05:30:42 +1300 (NZDT) Message-ID: <32920.193.203.83.22.1168965042.squirrel@colo.loreland.org> Date: Wed, 17 Jan 2007 05:30:42 +1300 (NZDT) Subject: problem with latest xfsprogs progress code From: "James Braid" To: xfs@oss.sgi.com Reply-To: jamesb@loreland.org User-Agent: SquirrelMail/1.4.9a MIME-Version: 1.0 Content-Type: text/plain;charset=iso-8859-1 Content-Transfer-Encoding: 8bit X-Priority: 3 (Normal) Importance: Normal X-archive-position: 10313 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jamesb@loreland.org Precedence: bulk X-list: xfs Content-Length: 449 Lines: 14 Running 2.8.18 xfs_repair on a largeish (65TB, ~70M inodes) filesystem on an x86_64 machine gives the following "progress" output: 12:15:36: process known inodes and inode discovery - 1461632 of 0 
inod es done 12:15:36: Phase 3: elapsed time 14 minutes, 32 seconds - processed 100 571 inodes per minute 12:15:36: Phase 3: 0% done - estimated remaining time 3364 weeks, 3 da ys, 7 hours, 30 minutes, 45 seconds Is this a known bug? Thanks, James From owner-xfs@oss.sgi.com Wed Jan 17 03:01:14 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 17 Jan 2007 03:01:19 -0800 (PST) Received: from waldorf.loreland.org (ip186.digipost.co.nz [203.110.30.186] (may be forged)) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0HB1Cqw028140 for ; Wed, 17 Jan 2007 03:01:14 -0800 Received: from localhost (localhost [127.0.0.1]) by waldorf.loreland.org (Postfix) with ESMTP id DAFCD428A0D for ; Thu, 18 Jan 2007 00:00:16 +1300 (NZDT) Received: from waldorf.loreland.org ([127.0.0.1]) by localhost (waldorf.loreland.org [127.0.0.1]) (amavisd-new, port 10024) with LMTP id nUTMnzRbYGb8 for ; Thu, 18 Jan 2007 00:00:14 +1300 (NZDT) Received: from colo.loreland.org (localhost [127.0.0.1]) by waldorf.loreland.org (Postfix) with ESMTP id D568341AD46 for ; Thu, 18 Jan 2007 00:00:14 +1300 (NZDT) Received: from 193.203.83.22 (SquirrelMail authenticated user jamesb) by colo.loreland.org with HTTP; Thu, 18 Jan 2007 00:00:14 +1300 (NZDT) Message-ID: <53858.193.203.83.22.1169031614.squirrel@colo.loreland.org> In-Reply-To: <32920.193.203.83.22.1168965042.squirrel@colo.loreland.org> References: <32920.193.203.83.22.1168965042.squirrel@colo.loreland.org> Date: Thu, 18 Jan 2007 00:00:14 +1300 (NZDT) Subject: Re: problem with latest xfsprogs progress code From: "James Braid" To: xfs@oss.sgi.com Reply-To: jamesb@loreland.org User-Agent: SquirrelMail/1.4.9a MIME-Version: 1.0 Content-Type: text/plain;charset=iso-8859-1 Content-Transfer-Encoding: 8bit X-Priority: 3 (Normal) Importance: Normal X-archive-position: 10316 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jamesb@loreland.org Precedence: bulk X-list: xfs Content-Length: 2993 Lines: 56 I'm now seeing the following output - it's been sitting at this point for over 13 hours now... earlier versions of xfs_repair would finish quite a bit faster. Any ideas whats going on? 
- 03:00:37: traversing filesystem - 0 of 55 allocation groups done - 03:15:37: traversing filesystem - 0 of 55 allocation groups done - 03:30:37: traversing filesystem - 0 of 55 allocation groups done - 03:45:37: traversing filesystem - 0 of 55 allocation groups done - 04:00:37: traversing filesystem - 0 of 55 allocation groups done - 04:15:37: traversing filesystem - 0 of 55 allocation groups done - 04:30:37: traversing filesystem - 0 of 55 allocation groups done - 04:45:37: traversing filesystem - 0 of 55 allocation groups done - 05:00:37: traversing filesystem - 0 of 55 allocation groups done - 05:15:37: traversing filesystem - 0 of 55 allocation groups done - 05:30:37: traversing filesystem - 0 of 55 allocation groups done - 05:45:37: traversing filesystem - 0 of 55 allocation groups done - 06:00:37: traversing filesystem - 0 of 55 allocation groups done - 06:15:37: traversing filesystem - 0 of 55 allocation groups done - 06:30:37: traversing filesystem - 0 of 55 allocation groups done - 06:45:37: traversing filesystem - 0 of 55 allocation groups done - 07:00:37: traversing filesystem - 0 of 55 allocation groups done - 07:15:37: traversing filesystem - 0 of 55 allocation groups done - 07:30:37: traversing filesystem - 0 of 55 allocation groups done - 07:45:37: traversing filesystem - 0 of 55 allocation groups done - 08:00:37: traversing filesystem - 0 of 55 allocation groups done - 08:15:37: traversing filesystem - 0 of 55 allocation groups done - 08:30:37: traversing filesystem - 0 of 55 allocation groups done - 08:45:37: traversing filesystem - 0 of 55 allocation groups done - 09:00:37: traversing filesystem - 0 of 55 allocation groups done - 09:15:37: traversing filesystem - 0 of 55 allocation groups done - 09:30:37: traversing filesystem - 0 of 55 allocation groups done - 09:45:37: traversing filesystem - 0 of 55 allocation groups done - 10:00:37: traversing filesystem - 0 of 55 allocation groups done - 10:15:37: traversing filesystem - 0 of 55 allocation groups done - 10:30:37: traversing filesystem - 0 of 55 allocation groups done > Running 2.8.18 xfs_repair on a largeish (65TB, ~70M inodes) filesystem on > an x86_64 machine gives the following "progress" output: > > 12:15:36: process known inodes and inode discovery - 1461632 of 0 inod > es done > 12:15:36: Phase 3: elapsed time 14 minutes, 32 seconds - processed 100 > 571 inodes per minute > 12:15:36: Phase 3: 0% done - estimated remaining time 3364 weeks, 3 da > ys, 7 hours, 30 minutes, 45 seconds > > Is this a known bug? 
> > Thanks, James > > > From owner-xfs@oss.sgi.com Wed Jan 17 06:25:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 17 Jan 2007 06:25:19 -0800 (PST) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.183]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0HEP8qw018463 for ; Wed, 17 Jan 2007 06:25:10 -0800 Received: from [194.173.12.131] (helo=[172.25.16.7]) by mrelayeu.kundenserver.de (node=mrelayeu5) with ESMTP (Nemesis), id 0ML25U-1H7BSq0EIy-00058x; Wed, 17 Jan 2007 15:08:32 +0100 Message-ID: <45AE2DDF.5000602@gmx.net> Date: Wed, 17 Jan 2007 15:08:31 +0100 From: Klaus Strebel User-Agent: Thunderbird 1.5.0.9 (Windows/20061207) MIME-Version: 1.0 To: jamesb@loreland.org CC: xfs@oss.sgi.com Subject: Re: problem with latest xfsprogs progress code References: <32920.193.203.83.22.1168965042.squirrel@colo.loreland.org> <53858.193.203.83.22.1169031614.squirrel@colo.loreland.org> In-Reply-To: <53858.193.203.83.22.1169031614.squirrel@colo.loreland.org> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 8bit X-Provags-ID: kundenserver.de abuse@kundenserver.de login:8a7df7300d3d15a4f701302fdde7adf9 X-archive-position: 10317 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: klaus.strebel@gmx.net Precedence: bulk X-list: xfs Content-Length: 3584 Lines: 74 James Braid schrieb: > I'm now seeing the following output - it's been sitting at this point for > over 13 hours now... earlier versions of xfs_repair would finish quite a > bit faster. Any ideas whats going on? > > - 03:00:37: traversing filesystem - 0 of 55 allocation groups done > - 03:15:37: traversing filesystem - 0 of 55 allocation groups done > - 03:30:37: traversing filesystem - 0 of 55 allocation groups done > - 03:45:37: traversing filesystem - 0 of 55 allocation groups done > - 04:00:37: traversing filesystem - 0 of 55 allocation groups done > - 04:15:37: traversing filesystem - 0 of 55 allocation groups done > - 04:30:37: traversing filesystem - 0 of 55 allocation groups done > - 04:45:37: traversing filesystem - 0 of 55 allocation groups done > - 05:00:37: traversing filesystem - 0 of 55 allocation groups done > - 05:15:37: traversing filesystem - 0 of 55 allocation groups done > - 05:30:37: traversing filesystem - 0 of 55 allocation groups done > - 05:45:37: traversing filesystem - 0 of 55 allocation groups done > - 06:00:37: traversing filesystem - 0 of 55 allocation groups done > - 06:15:37: traversing filesystem - 0 of 55 allocation groups done > - 06:30:37: traversing filesystem - 0 of 55 allocation groups done > - 06:45:37: traversing filesystem - 0 of 55 allocation groups done > - 07:00:37: traversing filesystem - 0 of 55 allocation groups done > - 07:15:37: traversing filesystem - 0 of 55 allocation groups done > - 07:30:37: traversing filesystem - 0 of 55 allocation groups done > - 07:45:37: traversing filesystem - 0 of 55 allocation groups done > - 08:00:37: traversing filesystem - 0 of 55 allocation groups done > - 08:15:37: traversing filesystem - 0 of 55 allocation groups done > - 08:30:37: traversing filesystem - 0 of 55 allocation groups done > - 08:45:37: traversing filesystem - 0 of 55 allocation groups done > - 09:00:37: traversing filesystem - 0 of 55 allocation groups done > - 09:15:37: traversing filesystem - 0 of 55 allocation groups done > - 09:30:37: traversing filesystem - 0 of 55 allocation groups done > - 09:45:37: traversing filesystem - 0 of 55 allocation groups done > - 
10:00:37: traversing filesystem - 0 of 55 allocation groups done > - 10:15:37: traversing filesystem - 0 of 55 allocation groups done > - 10:30:37: traversing filesystem - 0 of 55 allocation groups done > > > >> Running 2.8.18 xfs_repair on a largeish (65TB, ~70M inodes) filesystem on >> an x86_64 machine gives the following "progress" output: >> >> 12:15:36: process known inodes and inode discovery - 1461632 of 0 inod >> es done >> 12:15:36: Phase 3: elapsed time 14 minutes, 32 seconds - processed 100 >> 571 inodes per minute >> 12:15:36: Phase 3: 0% done - estimated remaining time 3364 weeks, 3 da >> ys, 7 hours, 30 minutes, 45 seconds >> >> Is this a known bug? Hi James, why do you think that this is a bug? You have an almost infinitely large filesystem, so the file-system check will also run for an almost infinitely long time ;-). You see, not all that's possible is really desirable. Ciao Klaus Btw. i wouldn't expect this xfs_repair run to finish without running out of memory :-(. -- Mit freundlichen Grüssen / best regards Klaus Strebel, Dipl.-Inform. (FH), mailto:klaus.strebel@gmx.net /"\ \ / ASCII RIBBON CAMPAIGN X AGAINST HTML MAIL / \ From owner-xfs@oss.sgi.com Wed Jan 17 07:55:45 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 17 Jan 2007 07:55:51 -0800 (PST) Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0HFtiqw006822 for ; Wed, 17 Jan 2007 07:55:45 -0800 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id l0HFsjAJ016294; Wed, 17 Jan 2007 10:54:45 -0500 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0HFsebQ011801; Wed, 17 Jan 2007 10:54:40 -0500 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0HFsd38017648; Wed, 17 Jan 2007 10:54:39 -0500 Message-ID: <45AE46C2.6090005@sandeen.net> Date: Wed, 17 Jan 2007 09:54:42 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (X11/20061219) MIME-Version: 1.0 To: Klaus Strebel CC: jamesb@loreland.org, xfs@oss.sgi.com Subject: Re: problem with latest xfsprogs progress code References: <32920.193.203.83.22.1168965042.squirrel@colo.loreland.org> <53858.193.203.83.22.1169031614.squirrel@colo.loreland.org> <45AE2DDF.5000602@gmx.net> In-Reply-To: <45AE2DDF.5000602@gmx.net> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10320 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 1056 Lines: 31 Klaus Strebel wrote: >>> Running 2.8.18 xfs_repair on a largeish (65TB, ~70M inodes) filesystem on >>> an x86_64 machine gives the following "progress" output: >>> >>> 12:15:36: process known inodes and inode discovery - 1461632 of 0 inod >>> es done >>> 12:15:36: Phase 3: elapsed time 14 minutes, 32 seconds - processed 100 >>> 571 inodes per minute >>> 12:15:36: Phase 3: 0% done - estimated remaining time 3364 weeks, 3 da >>> ys, 7 hours, 30 minutes, 45 seconds >>> >>> Is this a known bug? > Hi James, > > why do you think that this is a bug? You have an almost infinitely large > filesystem, so the file-system check will also run for an almost > infinitely long time ;-). > > You see, not all that's possible is really desirable. 
Well, while 65TB is impressive*, and repairing it quickly is indeed a challenge, it probably still should not take 64+ years. ;-) Sounds like something is in fact going wrong. -Eric *it amuses me to see xfs users refer to nearly 100T as largeISH; clearly you all do not suffer from lowered expectations. :) From owner-xfs@oss.sgi.com Wed Jan 17 09:38:53 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 17 Jan 2007 09:39:01 -0800 (PST) Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0HHcqqw002128 for ; Wed, 17 Jan 2007 09:38:53 -0800 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id l0HHbswF012538; Wed, 17 Jan 2007 12:37:54 -0500 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0HHbslt015897; Wed, 17 Jan 2007 12:37:54 -0500 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0HHbrR6032532; Wed, 17 Jan 2007 12:37:54 -0500 Message-ID: <45AE5EF5.8050604@sandeen.net> Date: Wed, 17 Jan 2007 11:37:57 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (X11/20061219) MIME-Version: 1.0 To: xfs-masters@oss.sgi.com CC: ", linux-kernel@vger.kernel.org, xfs@oss.sgi.com, "Eric W. Biederman" Subject: Re: [xfs-masters] [PATCH 14/59] sysctl: C99 convert xfs ctl_tables References: <11689656301563-git-send-email-ebiederm@xmission.com> In-Reply-To: <11689656301563-git-send-email-ebiederm@xmission.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10321 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 1742 Lines: 71 Eric W. Biederman wrote: > From: Eric W. Biederman - unquoted > > Signed-off-by: Eric W. Biederman > --- > fs/xfs/linux-2.6/xfs_sysctl.c | 258 ++++++++++++++++++++++++++++------------ > 1 files changed, 180 insertions(+), 78 deletions(-) Oh no, 100 more XFS LOC! ;) Minor nits below... 
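(For anyone who has not followed the sysctl rework: after the conversion each ctl_table entry uses C99 designated initializers, so one cleaned-up entry -- with the stray strategy NULLs and spacing nits flagged below removed -- reads roughly like this; the field values are lifted from the quoted hunks and are illustrative only.)

	{
		.ctl_name	= XFS_PANIC_MASK,
		.procname	= "panic_mask",
		.data		= &xfs_params.panic_mask.val,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= &proc_dointvec_minmax,
		.strategy	= &sysctl_intvec,
		.extra1		= &xfs_params.panic_mask.min,
		.extra2		= &xfs_params.panic_mask.max
	},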
> + { > + .ctl_name = XFS_PANIC_MASK, > + .procname = "panic_mask", > + .data = &xfs_params.panic_mask.val, > + .maxlen = sizeof(int), > + .mode = 0644, > + .proc_handler = &proc_dointvec_minmax, Extra space here > + { > + .ctl_name = XFS_INHERIT_NODUMP, > + .procname = "inherit_nodump", > + .data = &xfs_params.inherit_nodump.val, > + .maxlen = sizeof(int), > + .mode = 0644, > + .proc_handler = &proc_dointvec_minmax, > + .strategy = &sysctl_intvec, NULL, don't think you want the NULL here > + .extra1 = &xfs_params.inherit_nodump.min, > + .extra2 = &xfs_params.inherit_nodump.max > + }, > + { > + .ctl_name = XFS_INHERIT_NOATIME, > + .procname = "inherit_noatime", > + .data = &xfs_params.inherit_noatim.val, > + .maxlen = sizeof(int), > + .mode = 0644, > + .proc_handler = &proc_dointvec_minmax, > + .strategy = &sysctl_intvec, NULL, or here > + { > + .ctl_name = XFS_BUF_AGE, > + .procname = "age_buffer_centisecs", > + .data = &xfs_params.xfs_buf_age.val, > + .maxlen = sizeof(int), > + .mode = 0644, > + .proc_handler = &proc_dointvec_minmax, > + .strategy = &sysctl_intvec, NULL, or here > + { > + .ctl_name = XFS_STATS_CLEAR, > + .procname = "stats_clear", > + .data = &xfs_params.stats_clear.val, > + .maxlen = sizeof(int), > + .mode = 0644, > + .proc_handler = &xfs_stats_clear_proc_handler, extra space here Thanks, -Eric From owner-xfs@oss.sgi.com Wed Jan 17 14:52:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 17 Jan 2007 14:52:36 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0HMqSqw006264 for ; Wed, 17 Jan 2007 14:52:30 -0800 Received: from [134.14.55.18] (dhcp18.melbourne.sgi.com [134.14.55.18]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA03696; Thu, 18 Jan 2007 09:51:26 +1100 Message-ID: <45AEA86E.1060003@melbourne.sgi.com> Date: Thu, 18 Jan 2007 09:51:26 +1100 From: David Chatterton Reply-To: chatz@melbourne.sgi.com Organization: SGI User-Agent: Thunderbird 1.5.0.9 (Windows/20061207) MIME-Version: 1.0 To: jamesb@loreland.org CC: Eric Sandeen , Klaus Strebel , xfs@oss.sgi.com Subject: Re: problem with latest xfsprogs progress code References: <32920.193.203.83.22.1168965042.squirrel@colo.loreland.org> <53858.193.203.83.22.1169031614.squirrel@colo.loreland.org> <45AE2DDF.5000602@gmx.net> <45AE46C2.6090005@sandeen.net> In-Reply-To: <45AE46C2.6090005@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10323 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: chatz@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 1728 Lines: 58 Eric Sandeen wrote: > Klaus Strebel wrote: > >>>> Running 2.8.18 xfs_repair on a largeish (65TB, ~70M inodes) filesystem on >>>> an x86_64 machine gives the following "progress" output: >>>> >>>> 12:15:36: process known inodes and inode discovery - 1461632 of 0 inod >>>> es done >>>> 12:15:36: Phase 3: elapsed time 14 minutes, 32 seconds - processed 100 >>>> 571 inodes per minute >>>> 12:15:36: Phase 3: 0% done - estimated remaining time 3364 weeks, 3 da >>>> ys, 7 hours, 30 minutes, 45 seconds >>>> >>>> Is this a known bug? >> Hi James, >> >> why do you think that this is a bug? You have an almost infinitely large >> filesystem, so the file-system check will also run for an almost >> infinitely long time ;-). >> >> You see, not all that's possible is really desirable. 
> > Well, while 65TB is impressive*, and repairing it quickly is indeed a > challenge, it probably still should not take 64+ years. ;-) > > Sounds like something is in fact going wrong. > > -Eric > > *it amuses me to see xfs users refer to nearly 100T as largeISH; clearly > you all do not suffer from lowered expectations. :) > Barry is at linux.conf.au this week, he knows this code better than anyone else. Phase 3 is scanning the inodes in each allocation group, building up a map of filesystem blocks that are marked as used. See http://oss.sgi.com/projects/xfs/training/xfs_slides_11_repair.pdf Scanning an AG and its inodes should not be taking this long. Are you under memory pressure and the machine is just swapping to death? Are you seeing I/O errors on the storage? Is the storage using AVT mode and the luns are flipping between controllers? Thanks, David -- David Chatterton XFS Engineering Manager SGI Australia From owner-xfs@oss.sgi.com Thu Jan 18 02:36:39 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 18 Jan 2007 02:36:45 -0800 (PST) Received: from poster.science.ru.nl (poster.science.ru.nl [131.174.30.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0IAabqw028340 for ; Thu, 18 Jan 2007 02:36:39 -0800 Received: from smeltpunt.science.ru.nl [131.174.16.145] (helo=smeltpunt.science.ru.nl) by poster.science.ru.nl (8.13.7/5.11) with ESMTP id l0IALHRS003751 for ; Thu, 18 Jan 2007 11:21:17 +0100 (MET) Received: from brielle.hef.kun.nl [131.174.192.149] (helo=[131.174.192.149]) by smeltpunt.science.ru.nl (8.13.7/5.11) with ESMTP id l0IALC22024507 for ; Thu, 18 Jan 2007 11:21:12 +0100 (MET) Message-ID: <45AF4A1A.50805@science.ru.nl> Date: Thu, 18 Jan 2007 11:21:14 +0100 From: Wim Janssen User-Agent: Thunderbird 1.5.0.8 (X11/20061025) MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: xfs-questions Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Scanned-By: MIMEDefang 2.56 on 131.174.16.145 X-archive-position: 10327 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: WGHM.Janssen@science.ru.nl Precedence: bulk X-list: xfs Content-Length: 1732 Lines: 43 Hello Folks, Yesterday I downloaded from the SGI-ftpsite the module: kernel-module-xfs-2.6.9-22.EL-0.1-1.src.rpm This because I like to work with the xfs-filesystem on my storage-unit. ( a Dell PowerEdge1950 with a MD1000-diskarray ) As Eric pointed out in the readme-file he tested it a bit with the 2.6.9-22 kernel and it might also work with other RHEL kernels. On my storage-unit I installed the 64-bit RHEL4 AS with the kernel: 2.6.9-42.ELsmp x86_64 My questions: - If I install the xfs-module, how can I test that it is working properly ??? - How can I be sure about the stability ??? Do you have some test about this ??? - Does a more recent version of the module exists ??? - How do I have to deal with the kernel-updates I get from RedHat ??? - Can you give me some hints/advice how to proceed and avoid difficulties. Because the storage-unit is going to be a very critical part of our IT-infra-structure. For many years I have worked with the xfs-filesystem on the SGI Indigo- and Challenge-systems and was always very pleased with the stability, performance and managability. That's why I consider to implement the xfs-filesystem for our storage-unit. I hope you can help me with my questions. Kind regards and many thanks in advance, Wim Janssen P.S. It would be nice if I can report a success in the neaar future. 
-- Radboud Universiteit Nijmegen Phone: +31(0)24-3652097 [ Wim Janssen ] Hoge Energie Fysica Fax: +31(0)24-3652191 Systeembeheer Toernooiveld 1 E-mail: W.Janssen@hef.ru.nl 6525 ED Nijmegen URL: http://www.hef.ru.nl From owner-xfs@oss.sgi.com Thu Jan 18 08:49:50 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 18 Jan 2007 08:49:54 -0800 (PST) Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0IGnnqw014331 for ; Thu, 18 Jan 2007 08:49:49 -0800 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id l0IGmtDO003808; Thu, 18 Jan 2007 11:48:55 -0500 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0IGmn5e001924; Thu, 18 Jan 2007 11:48:49 -0500 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0IGmmFb027382; Thu, 18 Jan 2007 11:48:49 -0500 Message-ID: <45AFA4F6.5040806@sandeen.net> Date: Thu, 18 Jan 2007 10:48:54 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (X11/20061219) MIME-Version: 1.0 To: Wim Janssen CC: linux-xfs@oss.sgi.com Subject: Re: xfs-questions References: <45AF4A1A.50805@science.ru.nl> In-Reply-To: <45AF4A1A.50805@science.ru.nl> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10328 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 2139 Lines: 67 Wim Janssen wrote: > Hello Folks, > > Yesterday I downloaded from the SGI-ftpsite the module: > kernel-module-xfs-2.6.9-22.EL-0.1-1.src.rpm > This because I like to work with the xfs-filesystem on my storage-unit. > ( a Dell PowerEdge1950 with a MD1000-diskarray ) > As Eric pointed out in the readme-file he tested it a bit with the > 2.6.9-22 kernel > and it might also work with other RHEL kernels. > On my storage-unit I installed the 64-bit RHEL4 AS with the kernel: > 2.6.9-42.ELsmp x86_64 I would suggest http://sandeen.net/rhel4_xfs/kernel-module-xfs-2.6.9-42.0.2.EL-0.2-1.src.rpm which is a little more up to date; see also http://sandeen.net/rhel4_xfs/ for more info > My questions: > - If I install the xfs-module, how can I test that it is working > properly ??? you could use the xfstests suite in xfs cvs > - How can I be sure about the stability ??? or you could test it in your environment > Do you have some test about this ??? > - Does a more recent version of the module exists ??? see above > - How do I have to deal with the kernel-updates I get from RedHat ??? This does not change your kernel, though it may change your support when it's loaded. When you update your rh kernel, you'll need to rebuild the above src.rpm for the new kernel. > - Can you give me some hints/advice how to proceed and avoid difficulties. > Because the storage-unit is going to be a very critical part of our > IT-infra-structure. Then you should test the heck out of it before it's fully deployed :) Let me know if you have trouble, Thanks, -Eric > For many years I have worked with the xfs-filesystem on the SGI Indigo- and > Challenge-systems and was always very pleased with the stability, > performance > and managability. That's why I consider to implement the xfs-filesystem > for our storage-unit. > I hope you can help me with my questions. 
> > Kind regards and many thanks in advance, > > > Wim Janssen > P.S. > It would be nice if I can report a success in the neaar future. > From owner-xfs@oss.sgi.com Fri Jan 19 08:25:12 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 19 Jan 2007 08:25:27 -0800 (PST) Received: from coraid.com (ns1.coraid.com [65.14.39.133]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0JGP9qw026643 for ; Fri, 19 Jan 2007 08:25:12 -0800 Received: from coraid.com ([205.185.197.207]) by coraid.com; Fri Jan 19 11:17:13 EST 2007 Date: Fri, 19 Jan 2007 11:21:08 -0500 From: "Ed L. Cashin" To: Christoph Hellwig Cc: linux-kernel@vger.kernel.org, Andrew Morton , xfs@oss.sgi.com, Alan Cox Subject: Re: Re: bio pages with zero page reference count Message-ID: <20070119162108.GG16715@coraid.com> Reply-To: support@coraid.com References: <20061209234305.c65b4e14.akpm@osdl.org> <20061218175300.GM23156@coraid.com> <20061218222109.GA23156@coraid.com> <20061218225343.GA30167@infradead.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20061218225343.GA30167@infradead.org> User-Agent: Mutt/1.5.11+cvs20060126 X-archive-position: 10329 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: ecashin@coraid.com Precedence: bulk X-list: xfs Content-Length: 3277 Lines: 81 On Mon, Dec 18, 2006 at 10:53:43PM +0000, Christoph Hellwig wrote: > On Mon, Dec 18, 2006 at 05:21:09PM -0500, Ed L. Cashin wrote: ... > > If anyone has a better reference, I'd like to see it. > > I searched around a little bit and found these: > > http://groups.google.at/group/open-iscsi/browse_frm/thread/17fbe253cf1f69dd/f26cf19b0fee9147?tvc=1&q=kmalloc+iscsi+%22christoph+hellwig%22&hl=de#f26cf19b0fee9147 > http://www.ussg.iu.edu/hypermail/linux/kernel/0408.3/0061.html > > But that's not the conclusion I was looking for. So it sounds like you've been advocating a general discussion of this issue for a few years now. To summarize the issue: 1) users of the block layer assume that it's fine to associate pages that have a zero reference count with a bio before requesting I/O, 2) intermediaries like iscsi, aoe, and drbd, associate the pages with the frags of skbuffs, but 3) when the network layer has to linearize the skbuff for a network device that doesn't support scatter gather, it winds up doing a get_page and put_page on each page in the frags, despite the fact that the page reference count on each may already be zero. The network layer is assuming that it's OK to do use these operations on any page in the frags. Maybe the discussion is slow to start because too many parts of the kernel are involved. Here are a couple of specific questions. Maybe they'll help get the ball rolling. 1) What are the disadvantages of making the network layer *not* to assume it's correct to use get/put_page on the frags when it linearizes an sk_buff? For example, the network layer could omit the get/put_page when the page reference count is zero. 2) What are the disadvantages of having one part of the kernel (e.g., XFS) reference a page before handing it off to another part of the kernel, e.g., in a bio? This change would require multiple parts of the kernel to change behavior, but it seems conceptually cleaner, since the reference count would reflect the reality that the page does have an owner (XFS or whoever). I don't know how practical the implementation would be. 
3) It seems messy to handle this is in each of the individual intermediary drivers that sit between the block and network layers, but if that really is the place to do it, then is there a problem with simply incrementing the page reference counts upon getting a bio from the block layer, and later decrementing them before giving them back with bio_endio? bio_for_each_segment(bv, bio, i) atomic_inc(&bv->bv_page->_count); ... [and later] bio_for_each_segment(bv, bio, i) atomic_dec(&bv->bv_page->_count); bio_endio(bio, bytes_done, error); That seems to eliminate problems aoe users have with XFS on AoE devices that are accessible via network devices that don't support scatter gather, but is it the right fix? Andrew Morton changed "count" to "_count" to stop folks from directly manipulating the page struct member, but I don't see any get/put_page type operations that fit what the aoe driver has to do in this case. -- Ed L Cashin From owner-xfs@oss.sgi.com Fri Jan 19 13:36:33 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 19 Jan 2007 13:36:40 -0800 (PST) Received: from mail.atipa.com (125.14.124.24.cm.sunflower.com [24.124.14.125]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0JLaVqw006946 for ; Fri, 19 Jan 2007 13:36:32 -0800 Received: from [192.168.100.172] ([192.168.100.172]) by mail.atipa.com with Microsoft SMTPSVC(6.0.3790.1830); Fri, 19 Jan 2007 15:25:02 -0600 Message-ID: <45B136D2.5080305@atipa.com> Date: Fri, 19 Jan 2007 15:23:30 -0600 From: Roger Heflin User-Agent: Thunderbird 1.5.0.9 (X11/20070102) MIME-Version: 1.0 To: Eric Sandeen CC: linux-xfs@oss.sgi.com Subject: Re: xfs-questions References: <45AF4A1A.50805@science.ru.nl> <45AFA4F6.5040806@sandeen.net> In-Reply-To: <45AFA4F6.5040806@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 19 Jan 2007 21:25:02.0671 (UTC) FILETIME=[47E56DF0:01C73C10] X-archive-position: 10332 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: rheflin@atipa.com Precedence: bulk X-list: xfs Content-Length: 894 Lines: 31 Eric Sandeen wrote: > Wim Janssen wrote: >> Hello Folks, >> >> Yesterday I downloaded from the SGI-ftpsite the module: >> kernel-module-xfs-2.6.9-22.EL-0.1-1.src.rpm >> This because I like to work with the xfs-filesystem on my storage-unit. >> ( a Dell PowerEdge1950 with a MD1000-diskarray ) >> As Eric pointed out in the readme-file he tested it a bit with the >> 2.6.9-22 kernel >> and it might also work with other RHEL kernels. >> On my storage-unit I installed the 64-bit RHEL4 AS with the kernel: >> 2.6.9-42.ELsmp x86_64 > > I would suggest > http://sandeen.net/rhel4_xfs/kernel-module-xfs-2.6.9-42.0.2.EL-0.2-1.src.rpm > > which is a little more up to date; see also > http://sandeen.net/rhel4_xfs/ > for more info > Eric, Very nice, it is very annoying the Redhat has a 8TB limit on filesystems. I am testing it with 2.6.9-42.0.3. 
Roger From owner-xfs@oss.sgi.com Fri Jan 19 13:40:39 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 19 Jan 2007 13:40:47 -0800 (PST) Received: from ampex.com (postal.ampex.com [65.201.33.131]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0JLeXqw008008 for ; Fri, 19 Jan 2007 13:40:34 -0800 Received: from rwc-serv.ampex.com (rwc-serv [136.185.11.2]) by ampex.com (8.12.10/8.12.9) with ESMTP id l0JLRg4k003189 for ; Fri, 19 Jan 2007 13:27:42 -0800 (PST) Received: from ampex.com (dhcp-41 [136.185.34.80]) by rwc-serv.ampex.com (8.13.1/8.13.1) with ESMTP id l0JLRd0g014704 for ; Fri, 19 Jan 2007 13:27:42 -0800 Message-ID: <45B137CA.3020206@ampex.com> Date: Fri, 19 Jan 2007 13:27:38 -0800 From: Les Oxley User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.4) Gecko/20030624 Netscape/7.1 (ax) X-Accept-Language: en-us, en MIME-Version: 1.0 To: xfs@oss.sgi.com Subject: EXTENT BOUNDARIES Content-Type: multipart/mixed; boundary="------------060404040304010301010007" X-MailScanner-Information: Please contact the ISP for more information X-MailScanner: Not scanned: please contact your Internet E-Mail Service Provider for details X-MailScanner-SpamCheck: not spam (whitelisted), SpamAssassin (score=0.665, required 3, SUBJ_ALL_CAPS) X-archive-position: 10334 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: les@ampex.com Precedence: bulk X-list: xfs Content-Length: 32347 Lines: 548 This is a multi-part message in MIME format. --------------060404040304010301010007 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Hello, We are looking into running XFS on a 3TB FLASH MEMORY MODULE. We have a question regarding the extent boundaries. See the attached PowerPoint drawing, xfs.ppt We are running Linux. Our media is 3 million contiguous 4KB blocks. We would like to define an extent size of 1MB and this tracks the erasure block size of the flash memory, and that greatly improves perfomance. We are trying to understand where XFS places the extent boundaries with reference to the contiguous block sequence. Is this deterministic as indicated in the drawing ? That is, are the extent boundaries on 256 block boundaries. Any help would be greatly appreciated. Les Oxley Ampex Corporation Redwood City California. 
--------------060404040304010301010007 Content-Type: application/vnd.ms-powerpoint; name="xfs.ppt" Content-Transfer-Encoding: base64 Content-Disposition: inline; filename="xfs.ppt"
[base64-encoded PowerPoint attachment "xfs.ppt" omitted]
--------------060404040304010301010007--
From owner-xfs@oss.sgi.com Fri Jan 19 17:13:14 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 19 Jan 2007 17:13:18 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0K1D9qw030820 for ; Fri, 19 Jan 2007 17:13:12 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com
[134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA24248 for ; Sat, 20 Jan 2007 11:51:51 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0K0pp7Y97493958 for ; Sat, 20 Jan 2007 11:51:51 +1100 (AEDT) Received: (from mg@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0K0polh75392878 for xfs@oss.sgi.com; Sat, 20 Jan 2007 11:51:50 +1100 (EST) Date: Sat, 20 Jan 2007 11:51:50 +1100 (EST) From: Michael Gigante Message-Id: <200701200051.l0K0polh75392878@snort.melbourne.sgi.com> To: xfs@oss.sgi.com Subject: Work on XFS at SGI? X-archive-position: 10335 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mg@snort.melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 516 Lines: 16 We have an opening in the XFS team for an experienced filesystems engineer. The position is located in SGI's Melbourne engineering team. Please contact Mark Goodwin (markgw) and I if you would like further details. Mike ----------------------------------------------------------------------- Michael Gigante, mg@sgi.com Director, File Serving Technologies +61 3 9834 8251 Silicon Graphics Inc, Australian Software Group, Melbourne Australia From owner-xfs@oss.sgi.com Fri Jan 19 19:03:59 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 19 Jan 2007 19:04:04 -0800 (PST) Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0K33vqw012840 for ; Fri, 19 Jan 2007 19:03:59 -0800 Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 6CF6C18033AEA; Fri, 19 Jan 2007 21:03:03 -0600 (CST) Message-ID: <45B18665.8070703@sandeen.net> Date: Fri, 19 Jan 2007 21:03:01 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (Macintosh/20061207) MIME-Version: 1.0 To: Roger Heflin CC: linux-xfs@oss.sgi.com Subject: Re: xfs-questions References: <45AF4A1A.50805@science.ru.nl> <45AFA4F6.5040806@sandeen.net> <45B136D2.5080305@atipa.com> In-Reply-To: <45B136D2.5080305@atipa.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10336 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 408 Lines: 21 Roger Heflin wrote: > Eric, > > Very nice, it is very annoying the Redhat has a 8TB limit > on filesystems. Well, ext3 can do 16T now... upstream and in RHEL5 as a preview. Feel free to test that too if you need a RHEL-supported solution (and if 16T is enough for you). > I am testing it with 2.6.9-42.0.3. Great, let me know if you have trouble. 
Thanks, -Eric > Roger > From owner-xfs@oss.sgi.com Fri Jan 19 19:45:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 19 Jan 2007 19:45:46 -0800 (PST) Received: from smtp108.sbc.mail.mud.yahoo.com (smtp108.sbc.mail.mud.yahoo.com [68.142.198.207]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0K3jdqw017043 for ; Fri, 19 Jan 2007 19:45:41 -0800 Received: (qmail 12924 invoked from network); 20 Jan 2007 03:44:46 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@24.5.75.45 with login) by smtp108.sbc.mail.mud.yahoo.com with SMTP; 20 Jan 2007 03:44:45 -0000 X-YMail-OSG: 1b95hPkVM1lYNba3bNTo4N_oVyUHZxCL_8O1YkYH4qsW4cZRLZCYiVMKOw7PfogP1UBU1TGCOmpBPitNPqWRUx48iWz3btBFFg8kirK3aAWYlTpkrdQnLXqWC53Szfk- Received: by tuatara.stupidest.org (Postfix, from userid 10000) id CE6FD1826121; Fri, 19 Jan 2007 19:44:43 -0800 (PST) Date: Fri, 19 Jan 2007 19:44:43 -0800 From: Chris Wedgwood To: Les Oxley Cc: xfs@oss.sgi.com Subject: Re: EXTENT BOUNDARIES Message-ID: <20070120034443.GA27654@tuatara.stupidest.org> References: <45B137CA.3020206@ampex.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45B137CA.3020206@ampex.com> X-archive-position: 10337 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs Content-Length: 1367 Lines: 39 On Fri, Jan 19, 2007 at 01:27:38PM -0800, Les Oxley wrote: > We are looking into running XFS on a 3TB FLASH MEMORY MODULE. neat, you need to send me some of those does it wear level and do other clever stuff? most conventional flash storage devices don't last long with journalling filesystems... > We have a question regarding the extent boundaries. See the > attached PowerPoint drawing, xfs.ppt We are running Linux. you're asking linux people to read a powerpoint file? :-) you might have better luck with something else > Our media is 3 million contiguous 4KB blocks. That's not 3TB, surely it's < 12GB? > We would like to define an extent size of 1MB and this tracks the > erasure block size of the flash memory, and that greatly improves > perfomance. We are trying to understand where XFS places the extent > boundaries with reference to the contiguous block sequence. Is this > deterministic as indicated in the drawing ? That is, are the extent > boundaries on 256 block boundaries. i didn't open the powerpoint file, so i migth not be answering this very well... extent boundaries do not have to be on any boundary you can tweak mkfs.xfs to affect where AGs start, is that of value to you? 
now, there are also rt volumes which can have larger extents so those might be of value to you, but you would still need somewhere for the metadata From owner-xfs@oss.sgi.com Fri Jan 19 20:35:34 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 19 Jan 2007 20:35:41 -0800 (PST) Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0K4ZWqw000461 for ; Fri, 19 Jan 2007 20:35:34 -0800 Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id D2FC818033AEA; Fri, 19 Jan 2007 22:34:39 -0600 (CST) Message-ID: <45B19BDD.2050808@sandeen.net> Date: Fri, 19 Jan 2007 22:34:37 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (Macintosh/20061207) MIME-Version: 1.0 To: Les Oxley CC: xfs@oss.sgi.com Subject: Re: EXTENT BOUNDARIES References: <45B137CA.3020206@ampex.com> In-Reply-To: <45B137CA.3020206@ampex.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10338 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 1449 Lines: 38 Les Oxley wrote: > > Hello, > > We are looking into running XFS on a 3TB FLASH MEMORY MODULE. We have a > question regarding the extent boundaries. > See the attached PowerPoint drawing, xfs.ppt We are running Linux. > Our media is 3 million contiguous 4KB blocks. We would like to define > an extent size of 1MB and this tracks the erasure block size > of the flash memory, and that greatly improves perfomance. We are trying > to understand where XFS places the extent boundaries with reference to > the contiguous block sequence. > Is this deterministic as indicated in the drawing ? That is, are the > extent boundaries on 256 block boundaries. > > Any help would be greatly appreciated. > > Les Oxley > Ampex Corporation > Redwood City > California. extents by definition land on filesystem block boundaries, and can in general be any number of filesystem blocks, starting & ending most anywhere on the block device. If you wish to always allocate in 1m chunks, you might consider using the xfs realtime subvolume, see the extsize description in the mkfs.xfs man page. I'm not sure how much buffered IO to the realtime subvol has been tested; pretty sure it works at this point, though the sgi guys will correct me if I'm wrong... it's not exactly the normal mode of operation. Using the realtime subvol, however, all your file -metadata- will still be allocated on the main data volume, in much smaller pieces. 
-Eric From owner-xfs@oss.sgi.com Sat Jan 20 04:24:37 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 20 Jan 2007 04:24:45 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0KCOXqw032560 for ; Sat, 20 Jan 2007 04:24:35 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 1DD761A052755; Sat, 20 Jan 2007 07:23:39 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 194C2A002262; Sat, 20 Jan 2007 07:23:39 -0500 (EST) Date: Sat, 20 Jan 2007 07:23:39 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: linux-kernel@vger.kernel.org cc: linux-raid@vger.kernel.org, xfs@oss.sgi.com, Neil Brown Subject: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) Message-ID: MIME-Version: 1.0 Content-Type: MULTIPART/MIXED; BOUNDARY="-1463747160-1358739522-1169295819=:29223" X-archive-position: 10339 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 13690 Lines: 257 This message is in MIME format. The first part should be readable text, while the remaining parts are likely unreadable without MIME-aware tools. ---1463747160-1358739522-1169295819=:29223 Content-Type: TEXT/PLAIN; charset=US-ASCII My .config is attached, please let me know if any other information is needed and please CC (lkml) as I am not on the list, thanks! Running Kernel 2.6.19.2 on a MD RAID5 volume. Copying files over Samba to the RAID5 running XFS. Any idea what happened here? [473795.214705] BUG: unable to handle kernel paging request at virtual address fffb92b0 [473795.214715] printing eip: [473795.214718] c0358b14 [473795.214721] *pde = 00003067 [473795.214723] *pte = 00000000 [473795.214726] Oops: 0000 [#1] [473795.214729] PREEMPT SMP [473795.214736] CPU: 0 [473795.214737] EIP: 0060:[] Not tainted VLI [473795.214738] EFLAGS: 00010286 (2.6.19.2 #1) [473795.214746] EIP is at copy_data+0x6c/0x179 [473795.214750] eax: 00000000 ebx: 00001000 ecx: 00000354 edx: fffb9000 [473795.214754] esi: fffb92b0 edi: da86c2b0 ebp: 00001000 esp: f7927dc4 [473795.214757] ds: 007b es: 007b ss: 0068 [473795.214761] Process md4_raid5 (pid: 1305, ti=f7926000 task=f7ea9030 task.ti=f7926000) [473795.214765] Stack: c1ba7c40 00000003 f5538c80 00000001 da86c000 00000009 00000000 0000006c [473795.214790] 00001000 da8536a8 aa6fee90 f5538c80 00000190 c0358d00 aa6fee88 0000ffff [473795.214863] d7c5794c 00000001 da853488 f6fbec70 f6fbebc0 00000001 00000005 00000001 [473795.214876] Call Trace: [473795.214880] [] compute_parity5+0xdf/0x497 [473795.214887] [] handle_stripe+0x930/0x2986 [473795.214892] [] find_busiest_group+0x124/0x4fd [473795.214898] [] release_stripe+0x21/0x2e [473795.214902] [] raid5d+0x100/0x161 [473795.214907] [] md_thread+0x40/0x103 [473795.214912] [] autoremove_wake_function+0x0/0x4b [473795.214917] [] md_thread+0x0/0x103 [473795.214922] [] kthread+0xfc/0x100 [473795.214926] [] kthread+0x0/0x100 [473795.214930] [] kernel_thread_helper+0x7/0x1c [473795.214935] ======================= [473795.214938] Code: 14 39 d1 0f 8d 10 01 00 00 89 c8 01 c0 01 c8 01 c0 01 c0 89 44 24 1c eb 51 89 d9 c1 e9 02 8b 7c 24 10 01 f7 8b 44 24 18 8d 34 02 a5 89 d9 83 e1 03 74 02 f3 a4 c7 44 24 04 03 00 00 00 89 14 [473795.215017] EIP: [] copy_data+0x6c/0x179 SS:ESP 0068:f7927dc4 [473795.215024] <6>note: md4_raid5[1305] exited with preempt_count 2 # 
mdadm -D /dev/md4 /dev/md4: Version : 01.00.03 Creation Time : Wed Jan 10 15:58:52 2007 Raid Level : raid5 Array Size : 1562834432 (1490.44 GiB 1600.34 GB) Device Size : 781417216 (372.61 GiB 400.09 GB) Raid Devices : 5 Total Devices : 5 Preferred Minor : 4 Persistence : Superblock is persistent Update Time : Sat Jan 20 07:15:01 2007 State : active Active Devices : 5 Working Devices : 5 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 128K Name : 4 UUID : 7f453e18:893e4dd9:6e810372:4c724f49 Events : 33 Number Major Minor RaidDevice State 0 8 33 0 active sync /dev/sdc1 1 8 81 1 active sync /dev/sdf1 2 8 113 2 active sync /dev/sdh1 3 8 65 3 active sync /dev/sde1 5 8 49 4 active sync /dev/sdd1 ---1463747160-1358739522-1169295819=:29223 Content-Type: APPLICATION/octet-stream; name=config.bz2 Content-Transfer-Encoding: BASE64 Content-ID: Content-Description: Content-Disposition: attachment; filename=config.bz2 QlpoOTFBWSZTWQ8rjMoACWRfgGgQXOf//z////C////gYCH8dAAG+bHoBIfX yB2z65Ruqvrdd7zAADm9ZT3MU9b7eq90zzsHXzyt92oeQG867pQ65B2293Jd sBd614Xd13u9avW3uYfd76y+7m2vaZ3nb6wNTCCZNBoEyCaTEwhMAFDaJ4Ka PSD1BpoRoEyTQ1GgknqaGmmgAAAAAAMQQQDSAlTxPUJ5QAGgAYgaAAEmkoJo jTSp7TSm1HiNRoNAAAaBoDRoAikJMmiYKZ5EnppMhoAZGmjTQ0AaAASIggAI gIUaj1PUPRHqAAAAAGnh+ZP9B/5I8UnNxWVizytgaOmUrUJUYgJjZFFBZjFh H6KwevdDVDtrql+j5sNHRpFR+7h8+nzuppdLsuGkP8WVm1pKAZKjEXBAJe7i WYv9xMaL/NMplpcQGP1fVM6Rfr+JDHtYvkyYmILKrJRIpWQr6HGZjRyqCJWK LUqCrbaWoqsqKBUFPVmZKlaKMiNQlS62sVMo0oKRVFBRFRKW0tXLK40TW0Ii oCmFoVka2oxBRURQURKy2WpC2igsIKExhcpAG2VK1lRtKlanva41BEPNISuW yVBEKMm2UAcsigChWAqqChW0aVK1g2yopVtIVILbSKFtK0VlYsixsAlZCGjJ CmFpSW1a1pa/WSxUFTFt4zDJB9aVQcEq0ou+GYW2URWKI20KhSpYqVUClsG2 JV4uCrjSoVErVLZdmkTGopS1QKhRi21GVZattqLKkAKZMr0919GWtHnfTBf0 95ablL0yl5e6OA7C/ALmWRYEP5LvvAAhI8+IyfcRyv6TxpQ9N4S/ibZOOUfF P9tFg9KbYwHT/u4j+9mzF7x9c/mqWbqxsnT5mWrduIZd12+H5NJw8kH6uZJ9 rezpzvRFRk6JwmqfuTh8kfl5j57art0VzKAcQiWOlSBgtIPwN5myF8JTjUVn QgwL7bZX4byLjYp6ddo/ibhPIsY5jQWZFx9ctovzPzFXI9Xz+vdPs4fUtOD9 f35OqJuv2nV7FQpRk7yfXxV/MpwsLyq6MX/R7rA6x9HBlmqzEfZxbaXd3jas q81ecR/IcZ7Vrt0wFOen24Vzo5NQRtKAQeivSb5uAe7CSWG9c8U/anyhCpz8 7J6Idrf2zc3hQowzuxpaKsAVtwjUozki/ek1vgNzak1J1cLNfi65yiIqzcm/ e1iHyErTfuKbqOg1cr1hKs63rwcEfVjxPMKXxuhXngNxbLdslMGzPlZdcJe/ I+DInSWMG5DE6dGPC/G2lTOEhcTHt58ujjOjq2lItNt+vA9TRGrxg6n93k5D c5TNzTgnHfndql9F38V1G1BXVVa3jebMyhKHGs1DSgjZ8U51clVtxDROEcHq MONelmtb7VoKVmDDbvbpQmudC8zqN/M5bZiVzRtOSZQhY+rhNtKGpcUejOVn G96bYojzeK1x5TZvdr+OM8wtV7jRpGvbaD7z5pjXlG3pvOBLeXlY6W/t69ev FFj0L59W5gwwrq5bfk2q/e3oOIWa4Y0xmU1d0a1T1vsnrrl04xt6bLK2413B vG6WGEqsAsj2rtKsw7XHdmtt8xc6y5onPdMwZ5QWrPS3xXjhXCWWGIrxeuFG 7cNY0lhEobzWyxOf29nf3bu3zJ7omnoPzhjGAMYPbV7fP3vd40UaRnfEfSEQ xjAGMEOVnwKZWZnfMH7ZnLG5qhaKityR/wQrK+a3Pkuw9HLHAfMDpBAQPltf C/nkiTL3gl0T7ZzR2t1yyrTxbzqYY3pYPiLvteP5T86cPVDz7q7/LeNNMPSS eOfL8beB+NBIfOsdOrOIEJ/mmR86Td3DTPv/bagUL21DQarJNDBmak5OYF1+ pDCAtIZD6Zr4QoqUKOrbLVWgv6lo61xVfC6UPKOp1zjx/j8YXp+C7ubiEp9H yG79bLUnv+VY/cmdYn+/z/W29jA18is5zW9dNqgf3t6iFa114Dj65VWS0uU+ jmlfSZ/OIR030fSm/GBR4k23ihNrtsjgjXzM/DWKVF4iMXBDaCJ6xirFBqOt jh8rMZWQ48m3ZTE37Y8NteinbzvgYysZE9tSLIEi30qcAAXTkI3zER2PcEGF cczZi5Fquy3NkT9MXnwn9ZU6nZX5kDtnYMLzaTmCBBlIXVum3X9cDNaTcex9 J2mg2+DvlBEPNCESuBxhuJJ9Q+frtXhfOM9171lrVhytJbnj4a8vNMm3cfLS JW6ZDqzEk+iIKFC35HCZi0rooUWkNTNruws1Ub0k83Zg3X8naNKOI5xTF/cf SFhhF4ChjIaZXNAFtFBAMAvWNU+lnwHMFh7W/x5+noaxJzzu8ODI6vFhyEMx YiH6/FJne2txwgUQGezpqL0a0/Twd4bdSdaDp2MebqPHmZQOpGQ3yPZ3HYwy y9j21nZo262kY5+OcYlEC+cKy/NDV+iXu4c1nVYvqZosNP346Sg4WdF0zd3K /IPhf5M9uI2u1tRM6GtVaOf1yTwxgVgsDVOq1JOCl1LDKFpO792GFk7CKvXt 
oBE32ek4lKtQaOVOIcKEuaH18Wx+66/UfjwR+r4D8T54MelvtqUabAbbTbbE LlWbl01uws03uCrgvMzdcRmhW2qiLXnrJPiyjq6Isq6eGOfFsjqBQC+zPvmz IT3KXTlx1Hcpr3oSI/4u5IpwoSAeVxmU ---1463747160-1358739522-1169295819=:29223-- From owner-xfs@oss.sgi.com Sat Jan 20 04:47:28 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 20 Jan 2007 04:47:33 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0KClRqw006699 for ; Sat, 20 Jan 2007 04:47:28 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 5F77B1A052758; Sat, 20 Jan 2007 07:46:33 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 5A5F9A002262; Sat, 20 Jan 2007 07:46:33 -0500 (EST) Date: Sat, 20 Jan 2007 07:46:33 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: linux-kernel@vger.kernel.org cc: linux-raid@vger.kernel.org, xfs@oss.sgi.com, Neil Brown Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) In-Reply-To: Message-ID: References: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10340 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 2576 Lines: 58 On Sat, 20 Jan 2007, Justin Piszcz wrote: > My .config is attached, please let me know if any other information is > needed and please CC (lkml) as I am not on the list, thanks! > > Running Kernel 2.6.19.2 on a MD RAID5 volume. Copying files over Samba to > the RAID5 running XFS. > > Any idea what happened here? > It happened again under heavy read I/O when I was running md5sum -c on some of my files. [ 551.942958] BUG: unable to handle kernel paging request at virtual address fffb97b0 [ 551.942970] printing eip: [ 551.942972] c0358bd8 [ 551.942974] *pde = 00003067 [ 551.942976] *pte = 00000000 [ 551.942980] Oops: 0002 [#1] [ 551.942982] PREEMPT SMP [ 551.942989] CPU: 0 [ 551.942990] EIP: 0060:[] Not tainted VLI [ 551.942991] EFLAGS: 00010286 (2.6.19.2 #1) [ 551.942999] EIP is at copy_data+0x130/0x179 [ 551.943001] eax: 00000000 ebx: 00001000 ecx: 00000214 edx: fffb9000 [ 551.943005] esi: dd2007b0 edi: fffb97b0 ebp: 00001000 esp: f76ffe1c [ 551.943007] ds: 007b es: 007b ss: 0068 [ 551.943011] Process md4_raid5 (pid: 1309, ti=f76fe000 task=f7081560 task.ti=f76fe000) [ 551.943013] Stack: c1d880c0 00000003 cd2f0540 00000000 dd200000 0000000e 00000000 000000a8 [ 551.943027] 00001000 cd2f0540 dd1f1adc f6435c48 dd1f1ad8 c035a977 34f3db20 c027be16 [ 551.943043] c0553328 00000002 00000002 c01146b9 f6435c48 c0553328 f6435c48 dd1f193c [ 551.943056] Call Trace: [ 551.943059] [] handle_stripe+0x1ca/0x2986 [ 551.943065] [] __next_cpu+0x22/0x33 [ 551.943072] [] find_busiest_group+0x124/0x4fd [ 551.943136] [] __wake_up+0x32/0x43 [ 551.943140] [] release_stripe+0x21/0x2e [ 551.943145] [] raid5d+0x100/0x161 [ 551.943150] [] md_thread+0x40/0x103 [ 551.943155] [] autoremove_wake_function+0x0/0x4b [ 551.943160] [] md_thread+0x0/0x103 [ 551.943165] [] kthread+0xfc/0x100 [ 551.943169] [] kthread+0x0/0x100 [ 551.943173] [] kernel_thread_helper+0x7/0x1c [ 551.943178] ======================= [ 551.943180] Code: 8b 4c 24 08 8b 41 2c 8b 4c 24 1c 03 54 08 08 8b 44 24 0c 85 c0 0f 85 3a ff ff ff 89 d9 c1 e9 02 8b 44 24 18 8d 3c 02 03 74 24 10 a5 89 d9 83 e1 03 74 02 f3 a4 e9 37 ff ff ff 01 ee 89 74 24 [ 551.943254] EIP: [] copy_data+0x130/0x179 SS:ESP 0068:f76ffe1c [ 
551.943262] <6>note: md4_raid5[1309] exited with preempt_count 3 I will run resync/check on this array and then see if that fixes it. Justin. From owner-xfs@oss.sgi.com Sun Jan 21 11:28:30 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 21 Jan 2007 11:28:53 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0LJSTqw025801 for ; Sun, 21 Jan 2007 11:28:30 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id C34DA1A00052F; Sun, 21 Jan 2007 14:27:34 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id C2E90A00226A; Sun, 21 Jan 2007 14:27:34 -0500 (EST) Date: Sun, 21 Jan 2007 14:27:34 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: linux-kernel@vger.kernel.org cc: linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10350 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 8458 Lines: 197 Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke the OOM killer and kill all of my processes? Doing this on a single disk 2.6.19.2 is OK, no issues. However, this happens every time! Anything to try? Any other output needed? Can someone shed some light on this situation? Thanks. The last lines of vmstat 1 (right before it kill -9'd my shell/ssh) procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 7 764 50348 12 1269988 0 0 53632 172 1902 4600 1 8 29 62 0 7 764 49420 12 1260004 0 0 53632 34368 1871 6357 2 11 48 40 0 6 764 39608 12 1237420 0 0 52880 94696 1891 7424 1 12 47 39 0 6 764 44264 12 1226064 0 0 42496 29723 1908 6035 1 9 31 58 0 6 764 26672 12 1214180 0 0 43520 117472 1944 6189 1 13 0 87 0 7 764 9132 12 1211732 0 0 22016 80400 1570 3304 1 8 0 92 1 8 764 9512 12 1200388 0 0 33288 62212 1687 4843 1 9 0 91 0 5 764 13980 12 1197096 0 0 5012 161 1619 2115 1 4 42 54 0 5 764 29604 12 1197220 0 0 0 112 1548 1602 0 3 50 48 0 5 764 49692 12 1197396 0 0 0 152 1484 1438 1 3 50 47 0 5 764 73128 12 1197644 0 0 0 120 1463 1392 1 3 49 47 0 4 764 99460 12 1197704 0 0 24 168 1545 1803 1 3 39 57 0 4 764 100088 12 1219296 0 0 11672 75450 1614 1371 0 5 73 22 0 6 764 50404 12 1269072 0 0 53632 145 1989 3871 1 9 34 56 0 6 764 51500 12 1267684 0 0 53632 608 1834 4437 1 8 21 71 4 5 764 51424 12 1266792 0 0 53504 7584 1847 4393 2 9 48 42 0 6 764 51456 12 1263736 0 0 53636 9804 1880 4326 1 10 9 81 0 6 764 50640 12 1263060 0 0 53504 4392 1929 4430 1 8 28 63 0 6 764 50956 12 1257884 24 0 50724 17214 1858 4755 1 11 35 54 0 6 764 48360 12 1247692 0 0 50840 48880 1871 6242 1 10 0 89 0 6 764 40028 12 1225860 0 0 42512 93346 1770 5599 2 11 0 87 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 6 764 20372 12 1211744 0 0 21512 123664 1747 4378 0 9 3 88 0 6 764 12140 12 1188584 0 0 20224 111244 1628 4244 1 9 15 76 0 7 764 11280 12 1171144 0 0 22300 80936 1669 5314 1 8 11 80 0 6 764 12168 12 1162840 0 0 28168 44072 1808 5065 1 8 0 92 1 7 3132 123740 12 1051848 0 2368 3852 34246 2097 2376 0 5 0 94 1 6 3132 19996 12 1155664 0 0 51752 290 1999 2136 2 8 0 91 The last lines of 
iostat (right before it kill -9'd my shell/ssh) avg-cpu: %user %nice %system %iowait %steal %idle 0.51 0.00 7.65 91.84 0.00 0.00 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 87.63 3873.20 254.64 65.98 21905.15 24202.06 287.61 144.37 209.54 3.22 103.30 sdb 0.00 3873.20 1.03 132.99 12.37 60045.36 896.25 41.19 398.53 6.93 92.89 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sde 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdf 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdg 0.00 30.93 0.00 9.28 0.00 157.73 34.00 0.01 1.11 1.11 1.03 sdh 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdi 12.37 0.00 7.22 1.03 78.35 1.03 19.25 0.04 5.38 5.38 4.43 sdj 12.37 0.00 6.19 1.03 74.23 1.03 20.86 0.02 2.43 2.43 1.75 sdk 0.00 30.93 0.00 9.28 0.00 157.73 34.00 0.01 0.56 0.56 0.52 md0 0.00 0.00 0.00 610.31 0.00 2441.24 8.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 347.42 996.91 22181.44 7917.53 44.78 0.00 0.00 0.00 0.00 md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 md3 0.00 0.00 1.03 9.28 4.12 164.95 32.80 0.00 0.00 0.00 0.00 md4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 avg-cpu: %user %nice %system %iowait %steal %idle 1.02 0.00 12.24 86.73 0.00 0.00 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 142.00 2497.00 403.00 38.00 34828.00 15112.00 226.49 142.77 181.95 2.27 100.10 sdb 0.00 2498.00 8.00 27.00 36.00 12844.00 736.00 3.63 45.31 6.06 21.20 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sde 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdf 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdg 7.00 25.00 1.00 11.00 32.00 144.00 29.33 0.04 3.58 2.58 3.10 sdh 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdi 10.00 7.00 4.00 1.00 56.00 32.00 35.20 0.01 2.40 2.40 1.20 sdj 10.00 22.00 4.00 8.00 56.00 120.00 29.33 0.03 2.50 2.50 3.00 sdk 7.00 10.00 1.00 4.00 32.00 56.00 35.20 0.01 2.40 2.40 1.20 md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 558.00 7028.00 34664.00 56076.00 23.92 0.00 0.00 0.00 0.00 md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 md3 0.00 0.00 0.00 11.00 0.00 168.00 30.55 0.00 0.00 0.00 0.00 md4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 The last lines of dmesg: [ 5947.199985] lowmem_reserve[]: 0 0 0 [ 5947.199992] DMA: 0*4kB 1*8kB 1*16kB 0*32kB 1*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3544kB [ 5947.200010] Normal: 1*4kB 0*8kB 1*16kB 1*32kB 0*64kB 1*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 2740kB [ 5947.200035] HighMem: 98*4kB 35*8kB 9*16kB 69*32kB 4*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3664kB [ 5947.200052] Swap cache: add 789, delete 189, find 16/17, race 0+0 [ 5947.200055] Free swap = 2197628kB [ 5947.200058] Total swap = 2200760kB [ 5947.200060] Free swap: 2197628kB [ 5947.205664] 517888 pages of RAM [ 5947.205671] 288512 pages of HIGHMEM [ 5947.205673] 5666 reserved pages [ 5947.205675] 257163 pages shared [ 5947.205678] 600 pages swap cached [ 5947.205680] 88876 pages dirty [ 5947.205682] 115111 pages writeback [ 5947.205684] 5608 pages mapped [ 5947.205686] 49367 pages slab [ 5947.205688] 541 pages pagetables [ 5947.205795] Out of memory: kill process 1853 (named) score 9937 or a child [ 5947.205801] Killed process 1853 (named) [ 5947.206616] bash invoked oom-killer: gfp_mask=0x84d0, order=0, oomkilladj=0 [ 
5947.206621] [] out_of_memory+0x17b/0x1b0 [ 5947.206631] [] __alloc_pages+0x29c/0x2f0 [ 5947.206636] [] __pte_alloc+0x1d/0x90 [ 5947.206643] [] copy_page_range+0x357/0x380 [ 5947.206649] [] copy_process+0x765/0xfc0 [ 5947.206655] [] alloc_pid+0x1b9/0x280 [ 5947.206662] [] do_fork+0x79/0x1e0 [ 5947.206674] [] do_pipe+0x5f/0xc0 [ 5947.206680] [] sys_clone+0x36/0x40 [ 5947.206686] [] syscall_call+0x7/0xb [ 5947.206691] [] __sched_text_start+0x853/0x950 [ 5947.206698] ======================= From owner-xfs@oss.sgi.com Sun Jan 21 23:09:22 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 21 Jan 2007 23:09:27 -0800 (PST) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0M79Iqw007915 for ; Sun, 21 Jan 2007 23:09:20 -0800 Received: from pcbnaujok (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA29051; Mon, 22 Jan 2007 18:08:14 +1100 Message-Id: <200701220708.SAA29051@larry.melbourne.sgi.com> From: "Barry Naujok" To: "'Utako Kusaka'" , Subject: RE: [PATCH] fix extent length in xfs_io bmap Date: Mon, 22 Jan 2007 18:14:06 +1100 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Office Outlook, Build 11.0.6353 Thread-Index: Acc5NaxL+BYHxF3cSP+IHsTJ/pphcQEvxmIw X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3028 In-Reply-To: <200701160537.AA04877@TNESG9305.tnes.nec.co.jp> X-archive-position: 10354 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 4009 Lines: 131 Hi Utako, I'll push this patch in for the next xfsprogs update. Thanks, Barry. > -----Original Message----- > From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com] > On Behalf Of Utako Kusaka > Sent: Tuesday, 16 January 2007 4:37 PM > To: xfs@oss.sgi.com > Subject: [PATCH] fix extent length in xfs_io bmap > > Hi, > > In bmap command in xfs_io, there is a difference in the > length of the extent group > between "bmap" and "bmap -n nn". > It occurs if the file size > max extent size and the extents > are allocated contiguously. > This patch fixes it. 
> > Test fs: > # xfs_info /dev/sx8/14p1 > meta-data=/dev/sx8/14p1 isize=256 agcount=16, > agsize=7631000 blks > = sectsz=512 attr=0 > data = bsize=4096 > blocks=122096000, imaxpct=25 > = sunit=0 swidth=0 blks, > unwritten=1 > naming =version 2 bsize=4096 > log =internal bsize=4096 blocks=32768, version=1 > = sectsz=512 sunit=0 blks > realtime =none extsz=4096 blocks=0, rtextents=0 > > > Example: > u.bmx[0-5] = [startoff,startblock,blockcount,extentflag] > 0:[ 0, 1048588,2097088,0] > 1:[2097088, 3145676,2097088,0] > 2:[4194176, 5242764,2097088,0] > 3:[6291264, 7339852, 291136,0] > 4:[6582400, 8388616,2097088,0] > 5:[8679488,10485704,1806272,0] > > # xfs_io file2 > xfs_io> bmap -v > file2: > EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET > TOTAL > 0: [0..52659199]: 8388704..61047903 0 > (8388704..61047903) 52659200 ...* > 1: [52659200..83886079]: 61048064..92274943 1 > (64..31226943) 31226880 ...** > xfs_io> bmap -v -n 1 > file2: > EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET > TOTAL > 0: [0..16776703]: 8388704..25165407 0 > (8388704..25165407) 16776704 ...* > xfs_io> bmap -v -n 2 > file2: > EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET > TOTAL > 0: [0..52659199]: 8388704..61047903 0 > (8388704..61047903) 52659200 > 1: [52659200..69435903]: 61048064..77824767 1 > (64..16776767) 16776704 ...** > > > Signed-off-by: Utako Kusaka > --- > > --- xfsprogs-2.8.18-orgn/io/bmap.c 2006-12-13 > 13:57:22.000000000 +0900 > +++ xfsprogs-2.8.18/io/bmap.c 2006-12-21 11:07:09.573116475 +0900 > @@ -78,6 +78,7 @@ bmap_f( > int bmv_iflags = 0; /* flags for > XFS_IOC_GETBMAPX */ > int i = 0; > int c; > + int egcnt; > > while ((c = getopt(argc, argv, "adln:pv")) != EOF) { > switch (c) { > @@ -136,7 +137,7 @@ bmap_f( > } > } > > - map_size = nflag ? nflag+1 : 32; /* initial > guess - 256 */ > + map_size = nflag ? nflag+2 : 32; /* initial > guess - 256 */ > map = malloc(map_size*sizeof(*map)); > if (map == NULL) { > fprintf(stderr, _("%s: malloc of %d bytes failed.\n"), > @@ -232,9 +233,10 @@ bmap_f( > return 0; > } > } > + egcnt = nflag ? min(nflag, map->bmv_entries) : map->bmv_entries; > printf("%s:\n", file->name); > if (!vflag) { > - for (i = 0; i < map->bmv_entries; i++) { > + for (i = 0; i < egcnt; i++) { > printf("\t%d: [%lld..%lld]: ", i, > (long long) map[i + 1].bmv_offset, > (long long)(map[i + 1].bmv_offset + > @@ -288,7 +290,7 @@ bmap_f( > * Go through the extents and figure out the width > * needed for all columns. > */ > - for (i = 0; i < map->bmv_entries; i++) { > + for (i = 0; i < egcnt; i++) { > snprintf(rbuf, sizeof(rbuf), "[%lld..%lld]:", > (long long) map[i + 1].bmv_offset, > (long long)(map[i + 1].bmv_offset + > @@ -325,7 +327,7 @@ bmap_f( > aoff_w, _("AG-OFFSET"), > tot_w, _("TOTAL"), > flg ? 
_(" FLAGS") : ""); > - for (i = 0; i < map->bmv_entries; i++) { > + for (i = 0; i < egcnt; i++) { > flg = FLG_NULL; > if (map[i + 1].bmv_oflags & BMV_OF_PREALLOC) { > flg |= FLG_PRE; > > From owner-xfs@oss.sgi.com Sun Jan 21 23:58:57 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 21 Jan 2007 23:59:02 -0800 (PST) Received: from tyo200.gate.nec.co.jp (TYO200.gate.nec.co.jp [210.143.35.50]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0M7wtqw012650 for ; Sun, 21 Jan 2007 23:58:56 -0800 Received: from tyo202.gate.nec.co.jp ([10.7.69.202]) by tyo200.gate.nec.co.jp (8.13.8/8.13.4) with ESMTP id l0M7w0AW002165 for ; Mon, 22 Jan 2007 16:58:00 +0900 (JST) Received: from mailgate3.nec.co.jp (mailgate54.nec.co.jp [10.7.69.195]) by tyo202.gate.nec.co.jp (8.13.8/8.13.4) with ESMTP id l0M7sd8I029066 for ; Mon, 22 Jan 2007 16:54:39 +0900 (JST) Received: (from root@localhost) by mailgate3.nec.co.jp (8.11.7/3.7W-MAILGATE-NEC) id l0M7scd17409 for xfs@oss.sgi.com; Mon, 22 Jan 2007 16:54:38 +0900 (JST) Received: from secsv3.tnes.nec.co.jp (tnesvc2.tnes.nec.co.jp [10.1.101.15]) by mailsv4.nec.co.jp (8.11.7/3.7W-MAILSV4-NEC) with ESMTP id l0M7scp12576 for ; Mon, 22 Jan 2007 16:54:38 +0900 (JST) Received: from tnesvc2.tnes.nec.co.jp ([10.1.101.15]) by secsv3.tnes.nec.co.jp (ExpressMail 5.10) with SMTP id 20070122.165438.07803072 for ; Mon, 22 Jan 2007 16:54:38 +0900 Received: FROM tnessv1.tnes.nec.co.jp BY tnesvc2.tnes.nec.co.jp ; Mon Jan 22 16:54:37 2007 +0900 Received: from rifu.bsd.tnes.nec.co.jp (rifu.bsd.tnes.nec.co.jp [10.1.104.1]) by tnessv1.tnes.nec.co.jp (Postfix) with ESMTP id 067FEAE4B3; Mon, 22 Jan 2007 16:54:35 +0900 (JST) Received: from TNESG9305.tnes.nec.co.jp (TNESG9305.bsd.tnes.nec.co.jp [10.1.104.199]) by rifu.bsd.tnes.nec.co.jp (8.12.11/3.7W/BSD-TNES-MX01) with SMTP id l0M7sbmQ027971; Mon, 22 Jan 2007 16:54:37 +0900 Message-Id: <200701220754.AA04898@TNESG9305.tnes.nec.co.jp> Date: Mon, 22 Jan 2007 16:54:30 +0900 To: xfs@oss.sgi.com Subject: [PATCH]segmentation fault in xfs_io mwrite command #2 From: Utako Kusaka MIME-Version: 1.0 X-Mailer: AL-Mail32 Version 1.13 Content-Type: text/plain; charset=us-ascii X-archive-position: 10355 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: utako@tnes.nec.co.jp Precedence: bulk X-list: xfs Content-Length: 665 Lines: 24 Hi, My previous patch is not enough. If mmap offset is not 0, segmentation fault occurs in mwrite. This patch fixes it. 
Example: # xfs_io -f file7 -c "mmap 4096 4096" -c "mwrite" Segmentation fault Signed-off-by: Utako Kusaka --- --- xfsprogs-2.8.18-orgn/io/mmap.c 2006-12-13 13:57:22.000000000 +0900 +++ xfsprogs-2.8.18/io/mmap.c 2007-01-19 16:29:44.000000000 +0900 @@ -562,6 +562,7 @@ mwrite_f( if (!start) return 0; + offset -= mapping->offset; if (rflag) { for (tmp = offset + length -1; tmp >= offset; tmp--) ((char *)mapping->addr)[tmp] = seed; From owner-xfs@oss.sgi.com Mon Jan 22 05:15:37 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 05:15:42 -0800 (PST) Received: from mx2.suse.de (ns2.suse.de [195.135.220.15]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0MDFZqw031582 for ; Mon, 22 Jan 2007 05:15:36 -0800 Received: from Relay1.suse.de (mail2.suse.de [195.135.221.8]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx2.suse.de (Postfix) with ESMTP id 6FFAE21270 for ; Mon, 22 Jan 2007 13:55:21 +0100 (CET) From: Jean Delvare Organization: SuSE Linux To: xfs@oss.sgi.com Subject: Memory allocation in xfs Date: Mon, 22 Jan 2007 13:58:09 +0100 User-Agent: KMail/1.9.1 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200701221358.09300.jdelvare@suse.de> X-archive-position: 10360 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jdelvare@suse.de Precedence: bulk X-list: xfs Content-Length: 1962 Lines: 52 Hi all, While investigating a customer-reported issue, I noticed the following code in the xfs filesystem (in linux/fs/xfs/linux-2.6/kmem.c): #define MAX_VMALLOCS 6 #define MAX_SLAB_SIZE 0x20000 void * kmem_alloc(size_t size, int flags) { int retries = 0; int lflags = kmem_flags_convert(flags); void *ptr; do { if (size < MAX_SLAB_SIZE || retries > MAX_VMALLOCS) ptr = kmalloc(size, lflags); else ptr = __vmalloc(size, lflags, PAGE_KERNEL); if (ptr || (flags & (KM_MAYFAIL|KM_NOSLEEP))) return ptr; if (!(++retries % 100)) printk(KERN_ERR "XFS: possible memory allocation " "deadlock in %s (mode:0x%x)\n", __FUNCTION__, lflags); blk_congestion_wait(WRITE, HZ/50); } while (1); } If I read it correctly, it first chooses between kmalloc and vmalloc based on size, picking kmalloc if the size is less than 128 kB, and vmalloc if it's larger. So far, so good, makes sense to me. Then, if 6 attempts at vmalloc failed, it switches to kmalloc regardless of the size. I read in LDD3 that some architectures have a relatively small address space reserved for vmalloc, I guess this explains why this fallback was implemented. Am I correct? I wonder if it's really a good idea to then insist on kmalloc if kmalloc fails too, but at this point it probably no longer matters, we're doomed... What I am curious about is why the fallback in the other direction wasn't implemented. If we need an amount of memory less than 128 kB and kmalloc keeps failing, isn't it worth trying vmalloc? Disclaimer: I am just trying to learn how the memory management works in Linux 2.6, so I might as well be totally wrong. 
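A rough sketch of the two-way fallback being asked about here -- hypothetical code, not the actual fs/xfs/linux-2.6/kmem.c: the kmem_alloc_twoway() name and the retry policy are invented for the illustration, while MAX_SLAB_SIZE, __vmalloc() and blk_congestion_wait() are carried over from the quoted function. The idea is simply to try the size-preferred allocator first and, if it fails, give the other pool a chance before sleeping and looping:

/*
 * Hypothetical sketch only -- NOT the code in fs/xfs/linux-2.6/kmem.c.
 * Prefer kmalloc() for small requests and __vmalloc() for large ones,
 * and if the preferred allocator fails, try the other one before
 * sleeping and retrying.
 */
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/blkdev.h>

#define MAX_SLAB_SIZE	0x20000		/* 128 kB, as in the quoted code */

static void *kmem_alloc_twoway(size_t size, gfp_t lflags)
{
	void *ptr;
	int retries = 0;

	for (;;) {
		/* allocator preferred for this size */
		if (size < MAX_SLAB_SIZE)
			ptr = kmalloc(size, lflags);
		else
			ptr = __vmalloc(size, lflags, PAGE_KERNEL);
		if (ptr)
			return ptr;

		/* the other allocator, in case only one pool is exhausted */
		if (size < MAX_SLAB_SIZE)
			ptr = __vmalloc(size, lflags, PAGE_KERNEL);
		else
			ptr = kmalloc(size, lflags);
		if (ptr)
			return ptr;

		if (!(++retries % 100))
			printk(KERN_ERR "possible memory allocation deadlock "
			       "(size %zu)\n", size);
		blk_congestion_wait(WRITE, HZ/50);	/* as in the quoted code */
	}
}

Whether such a fallback would help in practice depends on which pool is actually depleted; for sub-page sizes the vmalloc() leg mostly just burns a whole page, which may be part of why only the one-way fallback was implemented.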
Thanks, -- Jean Delvare Suse L3 From owner-xfs@oss.sgi.com Mon Jan 22 10:49:40 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 10:49:46 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0MIncqw009367 for ; Mon, 22 Jan 2007 10:49:40 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 08BB51A00052F; Mon, 22 Jan 2007 13:48:44 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 05319A050ADA; Mon, 22 Jan 2007 13:48:44 -0500 (EST) Date: Mon, 22 Jan 2007 13:48:44 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Pavel Machek cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: <20070122133735.GB4493@ucw.cz> Message-ID: References: <20070122133735.GB4493@ucw.cz> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10362 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 1153 Lines: 35 On Mon, 22 Jan 2007, Pavel Machek wrote: > On Sun 2007-01-21 14:27:34, Justin Piszcz wrote: > > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke > > the OOM killer and kill all of my processes? > > > > Doing this on a single disk 2.6.19.2 is OK, no issues. However, this > > happens every time! > > > > Anything to try? Any other output needed? Can someone shed some light on > > this situation? > > Is it highmem-related? Can you try it with mem=256M? > > Pavel > -- > (english) http://www.livejournal.com/~pavelmachek > (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > I will give this a try later or tomorrow, I cannot have my machine crash at the moment. Also, the onboard video on the Intel 965 chipset uses 128MB, not sure if that has anything to do with it because after the system kill -9's all the processes etc, my terminal looks like garbage. Justin. 
From owner-xfs@oss.sgi.com Mon Jan 22 11:59:10 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 12:00:11 -0800 (PST) Received: from smtp.osdl.org (smtp.osdl.org [65.172.181.24]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0MJx9qw026900 for ; Mon, 22 Jan 2007 11:59:10 -0800 Received: from shell0.pdx.osdl.net (fw.osdl.org [65.172.181.6]) by smtp.osdl.org (8.12.8/8.12.8) with ESMTP id l0MJv3N8008016 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Mon, 22 Jan 2007 11:57:04 -0800 Received: from box (shell0.pdx.osdl.net [10.9.0.31]) by shell0.pdx.osdl.net (8.13.1/8.11.6) with SMTP id l0MJv3uU032713; Mon, 22 Jan 2007 11:57:03 -0800 Date: Mon, 22 Jan 2007 11:57:03 -0800 From: Andrew Morton To: Justin Piszcz Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 Message-Id: <20070122115703.97ed54f3.akpm@osdl.org> In-Reply-To: References: X-Mailer: Sylpheed version 2.2.7 (GTK+ 2.8.17; x86_64-unknown-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.170 $ X-Scanned-By: MIMEDefang 2.36 X-archive-position: 10363 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@osdl.org Precedence: bulk X-list: xfs Content-Length: 3347 Lines: 85 > On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz wrote: > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke > the OOM killer and kill all of my processes? What's that? Software raid or hardware raid? If the latter, which driver? > Doing this on a single disk 2.6.19.2 is OK, no issues. However, this > happens every time! > > Anything to try? Any other output needed? Can someone shed some light on > this situation? > > Thanks. 
> > > The last lines of vmstat 1 (right before it kill -9'd my shell/ssh) > > procs -----------memory---------- ---swap-- -----io---- -system-- > ----cpu---- > r b swpd free buff cache si so bi bo in cs us sy id > wa > 0 7 764 50348 12 1269988 0 0 53632 172 1902 4600 1 8 > 29 62 > 0 7 764 49420 12 1260004 0 0 53632 34368 1871 6357 2 11 > 48 40 The wordwrapping is painful :( > > The last lines of dmesg: > [ 5947.199985] lowmem_reserve[]: 0 0 0 > [ 5947.199992] DMA: 0*4kB 1*8kB 1*16kB 0*32kB 1*64kB 1*128kB 1*256kB > 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3544kB > [ 5947.200010] Normal: 1*4kB 0*8kB 1*16kB 1*32kB 0*64kB 1*128kB 0*256kB > 1*512kB 0*1024kB 1*2048kB 0*4096kB = 2740kB > [ 5947.200035] HighMem: 98*4kB 35*8kB 9*16kB 69*32kB 4*64kB 1*128kB > 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3664kB > [ 5947.200052] Swap cache: add 789, delete 189, find 16/17, race 0+0 > [ 5947.200055] Free swap = 2197628kB > [ 5947.200058] Total swap = 2200760kB > [ 5947.200060] Free swap: 2197628kB > [ 5947.205664] 517888 pages of RAM > [ 5947.205671] 288512 pages of HIGHMEM > [ 5947.205673] 5666 reserved pages > [ 5947.205675] 257163 pages shared > [ 5947.205678] 600 pages swap cached > [ 5947.205680] 88876 pages dirty > [ 5947.205682] 115111 pages writeback > [ 5947.205684] 5608 pages mapped > [ 5947.205686] 49367 pages slab > [ 5947.205688] 541 pages pagetables > [ 5947.205795] Out of memory: kill process 1853 (named) score 9937 or a > child > [ 5947.205801] Killed process 1853 (named) > [ 5947.206616] bash invoked oom-killer: gfp_mask=0x84d0, order=0, > oomkilladj=0 > [ 5947.206621] [] out_of_memory+0x17b/0x1b0 > [ 5947.206631] [] __alloc_pages+0x29c/0x2f0 > [ 5947.206636] [] __pte_alloc+0x1d/0x90 > [ 5947.206643] [] copy_page_range+0x357/0x380 > [ 5947.206649] [] copy_process+0x765/0xfc0 > [ 5947.206655] [] alloc_pid+0x1b9/0x280 > [ 5947.206662] [] do_fork+0x79/0x1e0 > [ 5947.206674] [] do_pipe+0x5f/0xc0 > [ 5947.206680] [] sys_clone+0x36/0x40 > [ 5947.206686] [] syscall_call+0x7/0xb > [ 5947.206691] [] __sched_text_start+0x853/0x950 > [ 5947.206698] ======================= Important information from the oom-killing event is missing. Please send it all. >From your earlier reports we have several hundred MB of ZONE_NORMAL memory which has gone awol. Please include /proc/meminfo from after the oom-killing. Please work out what is using all that slab memory, via /proc/slabinfo. After the oom-killing, please see if you can free up the ZONE_NORMAL memory via a few `echo 3 > /proc/sys/vm/drop_caches' commands. See if you can work out what happened to the missing couple-of-hundred MB from ZONE_NORMAL. 
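For the slab accounting requested above, a throwaway user-space helper along these lines can summarise /proc/slabinfo after the OOM kill. It is only an illustrative sketch, not something from this thread: the field layout assumed (name, active_objs, num_objs, objsize at the start of each line) is the /proc/slabinfo version 2.x format used by 2.6 kernels, and the 1 MB reporting threshold is arbitrary.

/*
 * Illustrative helper only: report which slab caches hold the most memory,
 * assuming the /proc/slabinfo 2.x line format:
 *   name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> ...
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/slabinfo", "r");
	char line[512], name[64];
	unsigned long active, num, objsize;

	if (!f) {
		perror("/proc/slabinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* skip the version banner and the "# name ..." header */
		if (line[0] == '#' || strncmp(line, "slabinfo", 8) == 0)
			continue;
		if (sscanf(line, "%63s %lu %lu %lu",
			   name, &active, &num, &objsize) != 4)
			continue;
		/* report caches holding roughly 1 MB or more */
		if (num * objsize >= 1024 * 1024)
			printf("%-24s %8lu kB (%lu objs x %lu bytes)\n",
			       name, num * objsize / 1024, num, objsize);
	}
	fclose(f);
	return 0;
}

Comparing that output against LowFree and Slab in /proc/meminfo, before and after the `echo 3 > /proc/sys/vm/drop_caches' runs, should show whether the missing ZONE_NORMAL memory is sitting in slab or has gone elsewhere.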
From owner-xfs@oss.sgi.com Mon Jan 22 12:54:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 12:54:15 -0800 (PST) Received: from lucidpixels.com (lucidpixels.com [75.144.35.66] (may be forged)) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0MKrwqw011878 for ; Mon, 22 Jan 2007 12:54:08 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 654091A052EC0; Mon, 22 Jan 2007 15:20:16 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 49E73A050ADA; Mon, 22 Jan 2007 15:20:16 -0500 (EST) Date: Mon, 22 Jan 2007 15:20:16 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Andrew Morton cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: <20070122115703.97ed54f3.akpm@osdl.org> Message-ID: References: <20070122115703.97ed54f3.akpm@osdl.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10364 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 5670 Lines: 141 > What's that? Software raid or hardware raid? If the latter, which driver? Software RAID (md) On Mon, 22 Jan 2007, Andrew Morton wrote: > > On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz wrote: > > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke > > the OOM killer and kill all of my processes? > > What's that? Software raid or hardware raid? If the latter, which driver? > > > Doing this on a single disk 2.6.19.2 is OK, no issues. However, this > > happens every time! > > > > Anything to try? Any other output needed? Can someone shed some light on > > this situation? > > > > Thanks. 
> > > > > > The last lines of vmstat 1 (right before it kill -9'd my shell/ssh) > > > > procs -----------memory---------- ---swap-- -----io---- -system-- > > ----cpu---- > > r b swpd free buff cache si so bi bo in cs us sy id > > wa > > 0 7 764 50348 12 1269988 0 0 53632 172 1902 4600 1 8 > > 29 62 > > 0 7 764 49420 12 1260004 0 0 53632 34368 1871 6357 2 11 > > 48 40 > > The wordwrapping is painful :( > > > > > The last lines of dmesg: > > [ 5947.199985] lowmem_reserve[]: 0 0 0 > > [ 5947.199992] DMA: 0*4kB 1*8kB 1*16kB 0*32kB 1*64kB 1*128kB 1*256kB > > 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3544kB > > [ 5947.200010] Normal: 1*4kB 0*8kB 1*16kB 1*32kB 0*64kB 1*128kB 0*256kB > > 1*512kB 0*1024kB 1*2048kB 0*4096kB = 2740kB > > [ 5947.200035] HighMem: 98*4kB 35*8kB 9*16kB 69*32kB 4*64kB 1*128kB > > 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3664kB > > [ 5947.200052] Swap cache: add 789, delete 189, find 16/17, race 0+0 > > [ 5947.200055] Free swap = 2197628kB > > [ 5947.200058] Total swap = 2200760kB > > [ 5947.200060] Free swap: 2197628kB > > [ 5947.205664] 517888 pages of RAM > > [ 5947.205671] 288512 pages of HIGHMEM > > [ 5947.205673] 5666 reserved pages > > [ 5947.205675] 257163 pages shared > > [ 5947.205678] 600 pages swap cached > > [ 5947.205680] 88876 pages dirty > > [ 5947.205682] 115111 pages writeback > > [ 5947.205684] 5608 pages mapped > > [ 5947.205686] 49367 pages slab > > [ 5947.205688] 541 pages pagetables > > [ 5947.205795] Out of memory: kill process 1853 (named) score 9937 or a > > child > > [ 5947.205801] Killed process 1853 (named) > > [ 5947.206616] bash invoked oom-killer: gfp_mask=0x84d0, order=0, > > oomkilladj=0 > > [ 5947.206621] [] out_of_memory+0x17b/0x1b0 > > [ 5947.206631] [] __alloc_pages+0x29c/0x2f0 > > [ 5947.206636] [] __pte_alloc+0x1d/0x90 > > [ 5947.206643] [] copy_page_range+0x357/0x380 > > [ 5947.206649] [] copy_process+0x765/0xfc0 > > [ 5947.206655] [] alloc_pid+0x1b9/0x280 > > [ 5947.206662] [] do_fork+0x79/0x1e0 > > [ 5947.206674] [] do_pipe+0x5f/0xc0 > > [ 5947.206680] [] sys_clone+0x36/0x40 > > [ 5947.206686] [] syscall_call+0x7/0xb > > [ 5947.206691] [] __sched_text_start+0x853/0x950 > > [ 5947.206698] ======================= > > Important information from the oom-killing event is missing. Please send > it all. > > >From your earlier reports we have several hundred MB of ZONE_NORMAL memory > which has gone awol. > > Please include /proc/meminfo from after the oom-killing. > > Please work out what is using all that slab memory, via /proc/slabinfo. > > After the oom-killing, please see if you can free up the ZONE_NORMAL memory > via a few `echo 3 > /proc/sys/vm/drop_caches' commands. See if you can > work out what happened to the missing couple-of-hundred MB from > ZONE_NORMAL. > > I believe this is the first part of it (hopefully): 2908kB active:86104kB inactive:1061904kB present:1145032kB pages_scanned:0 all_unreclaimable? 
no [ 5947.199985] lowmem_reserve[]: 0 0 0 [ 5947.199992] DMA: 0*4kB 1*8kB 1*16kB 0*32kB 1*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3544kB [ 5947.200010] Normal: 1*4kB 0*8kB 1*16kB 1*32kB 0*64kB 1*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 2740kB [ 5947.200035] HighMem: 98*4kB 35*8kB 9*16kB 69*32kB 4*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3664kB [ 5947.200052] Swap cache: add 789, delete 189, find 16/17, race 0+0 [ 5947.200055] Free swap = 2197628kB [ 5947.200058] Total swap = 2200760kB [ 5947.200060] Free swap: 2197628kB [ 5947.205664] 517888 pages of RAM [ 5947.205671] 288512 pages of HIGHMEM [ 5947.205673] 5666 reserved pages [ 5947.205675] 257163 pages shared [ 5947.205678] 600 pages swap cached [ 5947.205680] 88876 pages dirty [ 5947.205682] 115111 pages writeback [ 5947.205684] 5608 pages mapped [ 5947.205686] 49367 pages slab [ 5947.205688] 541 pages pagetables [ 5947.205795] Out of memory: kill process 1853 (named) score 9937 or a child [ 5947.205801] Killed process 1853 (named) [ 5947.206616] bash invoked oom-killer: gfp_mask=0x84d0, order=0, oomkilladj=0 [ 5947.206621] [] out_of_memory+0x17b/0x1b0 [ 5947.206631] [] __alloc_pages+0x29c/0x2f0 [ 5947.206636] [] __pte_alloc+0x1d/0x90 [ 5947.206643] [] copy_page_range+0x357/0x380 [ 5947.206649] [] copy_process+0x765/0xfc0 [ 5947.206655] [] alloc_pid+0x1b9/0x280 [ 5947.206662] [] do_fork+0x79/0x1e0 [ 5947.206674] [] do_pipe+0x5f/0xc0 [ 5947.206680] [] sys_clone+0x36/0x40 [ 5947.206686] [] syscall_call+0x7/0xb [ 5947.206691] [] __sched_text_start+0x853/0x950 [ 5947.206698] ======================= I will have to include the other parts when I am near the machine and can reboot it locally :) Justin. From owner-xfs@oss.sgi.com Mon Jan 22 13:22:19 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 13:22:23 -0800 (PST) Received: from wx-out-0506.google.com (wx-out-0506.google.com [66.249.82.232]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0MLMHqw016565 for ; Mon, 22 Jan 2007 13:22:18 -0800 Received: by wx-out-0506.google.com with SMTP id t4so1316743wxc for ; Mon, 22 Jan 2007 13:21:23 -0800 (PST) DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:sender:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition:x-google-sender-auth; b=a68IbKM2Z1sWvKVE1HSCDl8BIMbvNR51IbemYEnsYuhJ8fort1hGSR/Ttie09vVmvz2yiXVMERJKp5GtvhOEOItxDsDCI79wpbY6XrMYOrO/xPQKwFOqX0yHri1KMvSDfM+6GBlyrwkyJ+KSvCPGwt5oZFQeCrLT5Nz6rRC3iEc= Received: by 10.90.32.14 with SMTP id f14mr6845742agf.1169499142860; Mon, 22 Jan 2007 12:52:22 -0800 (PST) Received: by 10.90.95.5 with HTTP; Mon, 22 Jan 2007 12:52:22 -0800 (PST) Message-ID: <68c491a60701221252i30d28955pde6a4e987a1d248f@mail.gmail.com> Date: Mon, 22 Jan 2007 21:52:22 +0100 From: "=?ISO-8859-1?Q?Martin_Schr=F6der?=" To: xfs@oss.sgi.com Subject: [OT] Spam on this list MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Google-Sender-Auth: ecc14f71d9ab8aa8 X-archive-position: 10365 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: martin@oneiros.de Precedence: bulk X-list: xfs Content-Length: 214 Lines: 9 This list is a constant distributor of spam, maybe because it accepts non-member contributions. A plea to those who are able to do so: Please fix this by allowing only contributions from members. 
Best Martin From owner-xfs@oss.sgi.com Mon Jan 22 13:26:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 13:26:18 -0800 (PST) Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0MLQ2qw018009 for ; Mon, 22 Jan 2007 13:26:05 -0800 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id l0ML10MU028278; Mon, 22 Jan 2007 16:01:00 -0500 Received: from mail.boston.redhat.com (mail.boston.redhat.com [172.16.76.12]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0ML107T024795; Mon, 22 Jan 2007 16:01:00 -0500 Received: from [172.16.83.145] (dhcp83-145.boston.redhat.com [172.16.83.145]) by mail.boston.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id l0ML0wkJ031155; Mon, 22 Jan 2007 16:00:59 -0500 Message-ID: <45B5261B.1050104@redhat.com> Date: Mon, 22 Jan 2007 16:01:15 -0500 From: Chuck Ebbert Organization: Red Hat User-Agent: Thunderbird 1.5.0.9 (X11/20061219) MIME-Version: 1.0 To: Justin Piszcz CC: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com, Neil Brown Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10366 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cebbert@redhat.com Precedence: bulk X-list: xfs Content-Length: 2649 Lines: 56 Justin Piszcz wrote: > My .config is attached, please let me know if any other information is > needed and please CC (lkml) as I am not on the list, thanks! > > Running Kernel 2.6.19.2 on a MD RAID5 volume. Copying files over Samba to > the RAID5 running XFS. > > Any idea what happened here? 
> > [473795.214705] BUG: unable to handle kernel paging request at virtual > address fffb92b0 > [473795.214715] printing eip: > [473795.214718] c0358b14 > [473795.214721] *pde = 00003067 > [473795.214723] *pte = 00000000 > [473795.214726] Oops: 0000 [#1] > [473795.214729] PREEMPT SMP > [473795.214736] CPU: 0 > [473795.214737] EIP: 0060:[] Not tainted VLI > [473795.214738] EFLAGS: 00010286 (2.6.19.2 #1) > [473795.214746] EIP is at copy_data+0x6c/0x179 > [473795.214750] eax: 00000000 ebx: 00001000 ecx: 00000354 edx: fffb9000 > [473795.214754] esi: fffb92b0 edi: da86c2b0 ebp: 00001000 esp: f7927dc4 > [473795.214757] ds: 007b es: 007b ss: 0068 > [473795.214761] Process md4_raid5 (pid: 1305, ti=f7926000 task=f7ea9030 task.ti=f7926000) > [473795.214765] Stack: c1ba7c40 00000003 f5538c80 00000001 da86c000 00000009 00000000 0000006c > [473795.214790] 00001000 da8536a8 aa6fee90 f5538c80 00000190 c0358d00 aa6fee88 0000ffff > [473795.214863] d7c5794c 00000001 da853488 f6fbec70 f6fbebc0 00000001 00000005 00000001 > [473795.214876] Call Trace: > [473795.214880] [] compute_parity5+0xdf/0x497 > [473795.214887] [] handle_stripe+0x930/0x2986 > [473795.214892] [] find_busiest_group+0x124/0x4fd > [473795.214898] [] release_stripe+0x21/0x2e > [473795.214902] [] raid5d+0x100/0x161 > [473795.214907] [] md_thread+0x40/0x103 > [473795.214912] [] autoremove_wake_function+0x0/0x4b > [473795.214917] [] md_thread+0x0/0x103 > [473795.214922] [] kthread+0xfc/0x100 > [473795.214926] [] kthread+0x0/0x100 > [473795.214930] [] kernel_thread_helper+0x7/0x1c > [473795.214935] ======================= > [473795.214938] Code: 14 39 d1 0f 8d 10 01 00 00 89 c8 01 c0 01 c8 01 c0 > 01 c0 89 44 24 1c eb 51 89 d9 c1 e9 02 8b 7c 24 10 01 f7 8b 44 24 18 8d 34 > 02 a5 89 d9 83 e1 03 74 02 f3 a4 c7 44 24 04 03 00 00 00 89 14 > [473795.215017] EIP: [] copy_data+0x6c/0x179 SS:ESP > 0068:f7927dc4 > Without digging too deeply, I'd say you've hit the same bug Sami Farin and others have reported starting with 2.6.19: pages mapped with kmap_atomic() become unmapped during memcpy() or similar operations. Try disabling preempt -- that seems to be the common factor. 
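To make the suspected failure mode concrete, the pattern at issue looks roughly like the sketch below. This is a simplified, hypothetical fragment -- copy_highpage_fragment() is an invented name, not the real copy_data() in drivers/md/raid5.c -- shown with the 2.6.19-era API where kmap_atomic() takes a KM_* slot. kmap_atomic() mappings are per-CPU fixmap slots that are only valid while preemption stays disabled; if anything in the copy path sleeps, faults, or otherwise lets the task be scheduled away while the mapping is live, the slot can be reused or unmapped underneath the memcpy(), which is consistent with the unhandled paging requests in the two oopses.

/*
 * Simplified, hypothetical fragment -- not the actual copy_data() from
 * drivers/md/raid5.c.  It only illustrates the kmap_atomic()/memcpy()
 * pattern under discussion.
 */
#include <linux/highmem.h>
#include <linux/string.h>

static void copy_highpage_fragment(struct page *to, struct page *from,
				   unsigned int offset, unsigned int len)
{
	/* kmap_atomic() pins a per-CPU fixmap slot and disables preemption */
	char *dst = kmap_atomic(to, KM_USER0);
	char *src = kmap_atomic(from, KM_USER1);

	/*
	 * Nothing here may sleep, fault, or re-enable preemption: if the
	 * task were preempted or migrated now, dst/src could end up
	 * pointing at fixmap slots that have been reused or unmapped.
	 */
	memcpy(dst + offset, src + offset, len);

	kunmap_atomic(src, KM_USER1);
	kunmap_atomic(dst, KM_USER0);	/* preemption re-enabled here */
}

If that is indeed what is being hit, rebuilding without CONFIG_PREEMPT, as suggested, narrows the windows in which the task can be scheduled away and is a reasonable way to test the theory.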
From owner-xfs@oss.sgi.com Mon Jan 22 13:36:25 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 13:36:30 -0800 (PST) Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0MLaNqw021068 for ; Mon, 22 Jan 2007 13:36:24 -0800 Received: from edge (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id A9856AAC8C9; Tue, 23 Jan 2007 08:23:14 +1100 (EST) Subject: Re: Re: memory allocation in xfs From: Nathan Scott Reply-To: nscott@aconex.com To: Jean Delvare Cc: xfs@oss.sgi.com In-Reply-To: <200701221343.41653.jdelvare@suse.de> References: <200701171354.19187.jdelvare@suse.de> <1169416074.18017.54.camel@edge> <200701221343.41653.jdelvare@suse.de> Content-Type: text/plain; charset=UTF-8 Organization: Aconex Date: Tue, 23 Jan 2007 08:34:14 +1100 Message-Id: <1169501655.18017.110.camel@edge> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 8bit X-archive-position: 10367 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs Content-Length: 2968 Lines: 59 On Mon, 2007-01-22 at 13:43 +0100, Jean Delvare wrote: > Hi Nathan, > > Le Dimanche 21 Janvier 2007 22:47, Nathan Scott a écrit : > > On Wed, 2007-01-17 at 13:54 +0100, Jean Delvare wrote: > > > While investigating Suse bug #177066, I noticed the following code in the > > > xfs filesystem (in linux/fs/xfs/linux-2.6/kmem.c): > > > > Sorry for the delay, I've been at LCA the last week or so. > > Heh, no worry :-) > > > > If I read it correctly, it first chooses between kmalloc and vmalloc > > > based on size, picking kmalloc if the size is less than 128 kB, and > > > vmalloc if it's larger. So far, so good, makes sense to me. > > > > > > Then, if 6 attempts at vmalloc failed, it switches to kmalloc regardless > > > of the size. That's what I don't understand. I am not a kernel memory > > > management guru, in fact I'm just trying to learn how these things work, > > > but isn't kmalloc _less_ likely to succeed than vmalloc for a large size, > > > given that kmalloc must find a physical block of that size, while vmalloc > > > only needs a virtual one? > > > > I'm also not a VM guru, and implemented here based on others advice; > > as I understand things, vmalloc space on some platforms can become > > quite easily depleted such that we can get into the situation that > > all of vmalloc space is allocated and cannot be released - whereas > > for kmalloc space there is a chance that some other part of the > > kernel will free things up and allow us to make a large allocation. > > You might be able to get more information by following up to the > > XFS list, as the people who were reviewing my changes and suggested > > this change, will be there. > > Correct, I read about the limited vmalloc space in LDD3 too. The book says > it's "relatively small" on "some architectures" but doesn't give details. I > ended up testing on my i386 and x86_64 systems and I found 631 MB and 32 TB, > respectively, which I don't think qualify as small. So I guess the > "relatively small" was referring to other architectures. > > Anyway, it makes some sense to try kmalloc() if vmalloc() fails 6 times, after > all there's nothing to lose. But I wonder if it really makes sense to insist > on kmalloc() if kmalloc() is failing too. And I wonder why the other fallback > (from kmalloc() to vmalloc()) wasn't implemented. 
But I also agree that we > are in trouble if the allocations start failing, and the solution it to make > sure xfs isn't asking for unreasonably large amounts of memory (which I am > aware has been addressed since.) *nod* ... it is largely academic now, there are very few places where we will ask for large memory allocations anymore (Mandy fixed by far the worst offender with the incore extent changes), and IIRC the only places where we now attempt large allocations (other than debug/trace allocs) are in places where we allow an allocation failure and can fall back to allocating in smaller sizes. cheers. -- Nathan From owner-xfs@oss.sgi.com Mon Jan 22 13:46:35 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 13:46:39 -0800 (PST) Received: from mail-ausfall.charite.de (mail-ausfall.charite.de [193.175.70.131]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0MLkXqw023760 for ; Mon, 22 Jan 2007 13:46:35 -0800 Received: from localhost (localhost [127.0.0.1]) by mail-ausfall.charite.de (Postfix) with ESMTP id 7B1FC23FC18; Mon, 22 Jan 2007 22:30:03 +0100 (CET) Received: from mail-ausfall.charite.de ([127.0.0.1]) by localhost (mail-ausfall.charite.de [127.0.0.1]) (amavisd-new, port 10025) with LMTP id sKK+1-hCrCwR; Mon, 22 Jan 2007 22:29:58 +0100 (CET) Received: from postamt.charite.de (postamt.charite.de [160.45.207.132]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mail-ausfall.charite.de (Postfix) with ESMTP id F243223FC13; Mon, 22 Jan 2007 22:29:57 +0100 (CET) Received: by postamt.charite.de (Postfix, from userid 7945) id CCC4D220BB8; Mon, 22 Jan 2007 22:29:57 +0100 (CET) Date: Mon, 22 Jan 2007 22:29:57 +0100 From: Ralf Hildebrandt To: Martin =?utf-8?B?U2NocsO2ZGVy?= Cc: xfs@oss.sgi.com Subject: Re: [OT] Spam on this list Message-ID: <20070122212957.GN27538@charite.de> References: <68c491a60701221252i30d28955pde6a4e987a1d248f@mail.gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <68c491a60701221252i30d28955pde6a4e987a1d248f@mail.gmail.com> User-Agent: Mutt/1.5.13 (2006-08-11) X-archive-position: 10368 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: Ralf.Hildebrandt@charite.de Precedence: bulk X-list: xfs Content-Length: 596 Lines: 15 * Martin Schröder : > This list is a constant distributor of spam, maybe because it accepts > non-member contributions. > > A plea to those who are able to do so: Please fix this by allowing > only contributions from members. Yes please. It's extremely annoying. -- Ralf Hildebrandt (i.A. des IT-Zentrums) Ralf.Hildebrandt@charite.de Charite - Universitätsmedizin Berlin Tel. +49 (0)30-450 570-155 Gemeinsame Einrichtung von FU- und HU-Berlin Fax. 
+49 (0)30-450 570-962 IT-Zentrum Standort CBF send no mail to plonk@charite.de From owner-xfs@oss.sgi.com Mon Jan 22 14:16:00 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 14:16:17 -0800 (PST) Received: from mx1.suse.de (ns.suse.de [195.135.220.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0MMFwqw030509 for ; Mon, 22 Jan 2007 14:16:00 -0800 Received: from Relay2.suse.de (mail2.suse.de [195.135.221.8]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.suse.de (Postfix) with ESMTP id 19CDF124E1; Mon, 22 Jan 2007 23:00:00 +0100 (CET) From: Neil Brown To: Chuck Ebbert Date: Tue, 23 Jan 2007 08:59:36 +1100 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <17845.13256.284461.992275@notabene.brown> Cc: Justin Piszcz , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) In-Reply-To: message from Chuck Ebbert on Monday January 22 References: <45B5261B.1050104@redhat.com> X-Mailer: VM 7.19 under Emacs 21.4.1 X-face: [Gw_3E*Gng}4rRrKRYotwlE?.2|**#s9D Justin Piszcz wrote: > > My .config is attached, please let me know if any other information is > > needed and please CC (lkml) as I am not on the list, thanks! > > > > Running Kernel 2.6.19.2 on a MD RAID5 volume. Copying files over Samba to > > the RAID5 running XFS. > > > > Any idea what happened here? .... > > > Without digging too deeply, I'd say you've hit the same bug Sami Farin > and others > have reported starting with 2.6.19: pages mapped with kmap_atomic() > become unmapped > during memcpy() or similar operations. Try disabling preempt -- that > seems to be the > common factor. That is exactly the conclusion I had just come to (a kmap_atomic page must be being unmapped during memcpy). I wasn't aware that others had reported it - thanks for that. Turning off CONFIG_PREEMPT certainly seems like a good idea. 
NeilBrown From owner-xfs@oss.sgi.com Mon Jan 22 15:02:55 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 15:03:04 -0800 (PST) X-Spam-oss-Status: No, score=0.6 required=5.0 tests=AWL,BAYES_50, FH_HOST_EQ_D_D_D_D,FH_HOST_EQ_D_D_D_DB,J_CHICKENPOX_21,J_CHICKENPOX_24, J_CHICKENPOX_64,J_CHICKENPOX_91,RDNS_DYNAMIC autolearn=no version=3.2.0-pre1-r497472 Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0MN2rqw012230 for ; Mon, 22 Jan 2007 15:02:54 -0800 Received: from agami.com ([192.168.168.140]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id l0MMhVEB015240 for ; Mon, 22 Jan 2007 14:43:34 -0800 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id l0MMhUd5008501 for ; Mon, 22 Jan 2007 14:43:30 -0800 Received: from [10.123.4.142] ([10.123.4.142]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 22 Jan 2007 14:43:30 -0800 Message-ID: <45B53E11.8080406@agami.com> Date: Mon, 22 Jan 2007 14:43:29 -0800 From: Michael Nishimoto User-Agent: Mail/News 1.5.0.4 (X11/20060629) MIME-Version: 1.0 To: XFS Mailing List CC: Chandan Talukdar Subject: xfs_repair speedup changes Content-Type: multipart/mixed; boundary="------------090708000700070403030802" X-OriginalArrivalTime: 22 Jan 2007 22:43:30.0062 (UTC) FILETIME=[BCF58AE0:01C73E76] X-Scanned-By: MIMEDefang 2.58 on 192.168.168.13 X-archive-position: 10370 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: miken@agami.com Precedence: bulk X-list: xfs Content-Length: 47315 Lines: 1794 This is a multi-part message in MIME format. --------------090708000700070403030802 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Hi everyone, agami Systems started on a project to speed up xfs_repair before we knew that SGI was working on the same task. Similar to SGI's solution, our approach uses readahead to shorten the runtime. Agami also wanted to change the existing code as little as possible. By releasing this patch, we hope to start a discussion which will lead to continued improvements in xfs_repair runtimes. Our patch has a couple of ideas which should benefit SGI's code. Using our NAS platform which has 4 CPUs and runs XFS over software RAID5, we have seen 5 to 8 times speedup, depending on resources allocated to a run. The test filesystem had 1.4TB of data with 24M files. Unfortunately, I have not been able to run the latest CVS code against our system due to kernel differences. SGI's advantages ---------------- 1. User space cache with maximum number of entries a. means that xfs_repair will cause less interference with other mounted filesystems. b. allows tracking of cache behavior. 2. Rewrite phase7 to eliminate unnecessary transaction overhead. agami's advantages ------------------ 1. Doesn't depend on AIO & generic DIO working correctly. Will work with older linux kernels. 2. Parallelism model provides additional benefits a. In phases 3 and 4, many threads can be used to prefetch inode blocks regardless of AG count. b. By processing one AG at a time, drives spend less time seeking when multiple AGs are placed on a single drive due to the volume geometry. c. By placing each prefetch in its own thread, more parallelism is achieved especially when retrieving directory blocks. Chandan Talukdar performed all the xfs_repair work over last summer. 
Because the work was done on an old base, I have ported it forward to a CVS date of May 17, 2006. I chose this date because it allows a cleaner patch to be delivered. I would like to hear suggestions for how to proceed. Michael Nishimoto --------------090708000700070403030802 Content-Type: text/plain; name="xfs_repair.patch" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="xfs_repair.patch" diff -Nru xfsprogs-old/include/builddefs.in xfsprogs-new2/include/builddefs.in --- xfsprogs-old/include/builddefs.in 2006-04-27 21:02:55.000000000 -0700 +++ xfsprogs-new2/include/builddefs.in 2007-01-12 13:58:43.000000000 -0800 @@ -106,7 +106,7 @@ GCFLAGS = $(OPTIMIZER) $(DEBUG) -funsigned-char -fno-strict-aliasing -Wall \ -DVERSION=\"$(PKG_VERSION)\" -DLOCALEDIR=\"$(PKG_LOCALE_DIR)\" \ - -DPACKAGE=\"$(PKG_NAME)\" -I$(TOPDIR)/include + -DPACKAGE=\"$(PKG_NAME)\" -I$(TOPDIR)/include -pthread # First, Global, Platform, Local CFLAGS CFLAGS += $(FCFLAGS) $(GCFLAGS) $(PCFLAGS) $(LCFLAGS) diff -Nru xfsprogs-old/libxfs/xfs.h xfsprogs-new2/libxfs/xfs.h --- xfsprogs-old/libxfs/xfs.h 2006-05-02 08:35:40.000000000 -0700 +++ xfsprogs-new2/libxfs/xfs.h 2007-01-05 12:08:33.000000000 -0800 @@ -94,6 +94,7 @@ #define xfs_itobp libxfs_itobp #define xfs_ichgtime libxfs_ichgtime #define xfs_bmapi libxfs_bmapi +#define xfs_bmapi_single libxfs_bmapi_single #define xfs_bmap_finish libxfs_bmap_finish #define xfs_bmap_del_free libxfs_bmap_del_free #define xfs_bunmapi libxfs_bunmapi diff -Nru xfsprogs-old/repair/dino_chunks.c xfsprogs-new2/repair/dino_chunks.c --- xfsprogs-old/repair/dino_chunks.c 2005-11-11 06:27:22.000000000 -0800 +++ xfsprogs-new2/repair/dino_chunks.c 2007-01-05 12:10:12.000000000 -0800 @@ -920,7 +920,38 @@ ino_tree_node_t *ino_rec, *first_ino_rec, *prev_ino_rec; first_ino_rec = ino_rec = findfirst_inode_rec(agno); +#ifdef PHASE_3_4 + ino_tree_node_t *first_ra_rec, *ra_rec; + int iters = 0; + int chunklen = BBTOB(XFS_FSB_TO_BB(mp, XFS_IALLOC_BLOCKS(mp))); + + first_ra_rec = ra_rec = first_ino_rec; + /* + * before we start processing, insert 'rahead' number of + * nodes into the read ahead queue. call insert_nodes() + * with readdirblks set to 1 implying that the directory + * blocks of the directory inodes within this chunk should + * also be read in. + */ + if (insert_nodes(rahead, agno, &first_ra_rec, &ra_rec, 1, chunklen)) + do_error(_("failed to allocate memory. aborting\n")); +#endif while (ino_rec != NULL) { +#ifdef PHASE_3_4 + iters++; + /* + * after a set of 'radelta' number of nodes have been processed, + * insert another 'radelta' nodes into the read ahead queue. + * call insert_nodes() with readdirblks set to 1 implying that + * the directory blocks of the directory inodes within this + * chunk should also be read in. + */ + if (iters % radelta == 0) { + if (insert_nodes(radelta, agno, &first_ra_rec, + &ra_rec, 1, chunklen)) + do_error(_("failed to allocate memory. aborting\n")); + } +#endif /* * paranoia - step through inode records until we step * through a full allocation of inodes. 
this could diff -Nru xfsprogs-old/repair/globals.h xfsprogs-new2/repair/globals.h --- xfsprogs-old/repair/globals.h 2005-11-11 06:27:22.000000000 -0800 +++ xfsprogs-new2/repair/globals.h 2007-01-08 14:04:14.000000000 -0800 @@ -193,4 +193,10 @@ EXTERN __uint32_t sb_unit; EXTERN __uint32_t sb_width; +/* Used to increase performance by doing readahead */ + +EXTERN int numthreads; +EXTERN int rahead; +EXTERN int radelta; + #endif /* _XFS_REPAIR_GLOBAL_H */ diff -Nru xfsprogs-old/repair/Makefile xfsprogs-new2/repair/Makefile --- xfsprogs-old/repair/Makefile 2005-11-11 06:27:22.000000000 -0800 +++ xfsprogs-new2/repair/Makefile 2007-01-12 16:09:54.000000000 -0800 @@ -14,13 +14,18 @@ CFILES = agheader.c attr_repair.c avl.c avl64.c bmap.c dino_chunks.c \ dinode.c dir.c dir2.c dir_stack.c globals.c incore.c \ incore_bmc.c init.c incore_ext.c incore_ino.c io.c phase1.c \ - phase2.c phase3.c phase4.c phase5.c phase6.c phase7.c rt.c sb.c \ - scan.c versions.c xfs_repair.c + phase2.c phase3.c phase4.c phase5.c phase6.c phase7.c queue.c rt.c \ + sb.c scan.c threads.c versions.c xfs_repair.c -LLDLIBS = $(LIBXFS) $(LIBXLOG) $(LIBUUID) +LLDLIBS = $(LIBXFS) $(LIBXLOG) $(LIBUUID) $(LIBPTHREAD) LTDEPENDENCIES = $(LIBXFS) $(LIBXLOG) LLDFLAGS = -static +# PHASE_3_4: Enable read ahead for phase 3 and 4 +# PHASE_6: Enable read ahead for phase 6 +# PHASE_7: Enable read ahead for phase 7 +CFLAGS += -DPHASE_3_4 -DPHASE_6 -DPHASE_7 + default: $(LTCOMMAND) globals.o: globals.h diff -Nru xfsprogs-old/repair/phase6.c xfsprogs-new2/repair/phase6.c --- xfsprogs-old/repair/phase6.c 2006-05-12 09:03:02.000000000 -0700 +++ xfsprogs-new2/repair/phase6.c 2007-01-17 14:15:14.000000000 -0800 @@ -28,10 +28,18 @@ #include "err_protos.h" #include "dinode.h" #include "versions.h" +#ifdef PHASE_6 +#include +#include "queue.h" +#include "threads.h" +#endif static struct cred zerocr; static struct fsxattr zerofsx; static int orphanage_entered; +#ifdef PHASE_6 +static queue_t dir_queue; +#endif /* * Data structures and routines to keep track of directory entries @@ -1476,8 +1484,33 @@ add_inode_reached(irec, ino_offset); add_inode_ref(current_irec, current_ino_offset); - if (!is_inode_refchecked(lino, irec, ino_offset)) + if (!is_inode_refchecked(lino, irec, ino_offset)) { +#ifdef PHASE_6 + qnode_t *node; + node = alloc_qnode(&Q, sizeof(rahead_t)); + if (node != NULL) { + rahead_t *nd = (rahead_t *)(node->data); + nd->type = IDIR; + nd->u_ra.ino = lino; + nd->readdirblks = 1; + queue_insert(&Q, node); + } else { + do_error(_("failed to allocate memory. " + "aborting\n")); + } + node = alloc_qnode(&dir_queue, + sizeof(xfs_ino_t)); + if (node != NULL) { + *((xfs_ino_t*)(node->data)) = lino; + queue_insert(&dir_queue, node); + } else { + do_error("failed to allocate memory. " + "aborting\n"); + } +#else push_dir(stack, lino); +#endif + } } else { junkit = 1; do_warn( @@ -2037,9 +2070,38 @@ add_inode_ref(current_irec, current_ino_offset); if (!is_inode_refchecked( INT_GET(dep->inumber, ARCH_CONVERT), irec, - ino_offset)) + ino_offset)) { +#ifdef PHASE_6 + qnode_t *node; + node = alloc_qnode(&Q, sizeof(rahead_t)); + if (node != NULL) { + rahead_t *ra= (rahead_t *)(node->data); + ra->type = IDIR; + ra->u_ra.ino = INT_GET(dep->inumber, + ARCH_CONVERT); + ra->readdirblks = 1; + queue_insert(&Q, node); + } else { + do_error(_("failed to allocate memory." 
+ " aborting\n")); + } + + node = alloc_qnode(&dir_queue, + sizeof(xfs_ino_t)); + if (node != NULL) { + xfs_ino_t*ino=(xfs_ino_t*)(node->data); + *ino = INT_GET(dep->inumber, + ARCH_CONVERT); + queue_insert(&dir_queue, node); + } else { + do_error("failed to allocate memory. " + "aborting\n"); + } +#else push_dir(stack, - INT_GET(dep->inumber, ARCH_CONVERT)); + INT_GET(dep->inumber, ARCH_CONVERT)); +#endif + } } else { junkit = 1; do_warn( @@ -2944,8 +3006,34 @@ add_inode_ref(current_irec, current_ino_offset); if (!is_inode_refchecked(lino, irec, - ino_offset)) + ino_offset)) { +#ifdef PHASE_6 + qnode_t *node; + node= alloc_qnode(&Q,sizeof(rahead_t)); + if (node != NULL) { + rahead_t *ra = + (rahead_t *)(node->data); + ra->type = IDIR; + ra->u_ra.ino = lino; + ra->readdirblks = 1; + queue_insert(&Q, node); + } else { + do_error(_("failed to allocate memory. aborting\n")); + } + + node = alloc_qnode(&dir_queue, + sizeof(xfs_ino_t)); + if (node != NULL) { + *((xfs_ino_t *)(node->data)) = + lino; + queue_insert(&dir_queue, node); + } else { + do_error("failed to allocate memory. aborting\n"); + } +#else push_dir(stack, lino); +#endif + } } else { junkit = 1; do_warn( @@ -3339,8 +3427,35 @@ add_inode_ref(current_irec, current_ino_offset); if (!is_inode_refchecked(lino, irec, - ino_offset)) + ino_offset)) { +#ifdef PHASE_6 + qnode_t *node; + node = alloc_qnode(&Q, + sizeof(rahead_t)); + if (node != NULL) { + rahead_t *ra = + (rahead_t*)(node->data); + ra->type = IDIR; + ra->u_ra.ino = lino; + ra->readdirblks = 1; + queue_insert(&Q, node); + } else { + do_error(_("failed to allocate memory. aborting\n")); + } + + node = alloc_qnode(&dir_queue, + sizeof(xfs_ino_t)); + if (node != NULL) { + *((xfs_ino_t*)(node->data)) = + lino; + queue_insert(&dir_queue, node); + } else { + do_error("failed to allocate memory. aborting\n"); + } +#else push_dir(stack, lino); +#endif + } } else { junkit = 1; do_warn(_("entry \"%s\" in directory inode %llu" @@ -3466,6 +3581,21 @@ int ino_offset, need_dot, committed; int dirty, num_illegal, error, nres; +#ifdef PHASE_6 + /* + * pull directory inode # off directory queue + * + * open up directory inode, check all entries, + * then call prune_dir_entries to remove all + * remaining illegal directory entries. + */ + qnode_t *node; + while (queue_remove(&dir_queue, &node, 0)) { + wake_if_sleeping(); + + ino = *((xfs_ino_t*)(node->data)); + free_qnode(&dir_queue, node); +#else /* * pull directory inode # off directory stack * @@ -3474,7 +3604,8 @@ * remaining illegal directory entries. */ - while ((ino = pop_dir(stack)) != NULLFSINO) { + while ((ino = pop_dir(stack)) != NULLFSINO) { +#endif irec = find_inode_rec(XFS_INO_TO_AGNO(mp, ino), XFS_INO_TO_AGINO(mp, ino)); ASSERT(irec != NULL); @@ -3953,7 +4084,19 @@ orphanage_ino = mk_orphanage(mp); } +#ifdef PHASE_6 + /* + * when the read ahead code is enabled, we do the namespace walk + * in a breadth first manner as opposed to the depth first model + * used earlier. to implement BF traversal, we will be using a + * queue. this queue is named 'dir_queue', and will be used in + * process_dirstack(). note that this should not be confused + * with the read ahead queue named 'Q'. + */ + queue_init(&dir_queue); +#else dir_stack_init(&stack); +#endif mark_standalone_inodes(mp); @@ -3963,7 +4106,34 @@ if (!need_root_inode) { do_log(_(" - traversing filesystem starting at / ... 
\n")); +#ifdef PHASE_6 + qnode_t *node; + /* + * insert root dir in the read ahead queue + */ + node = alloc_qnode(&Q, sizeof(rahead_t)); + if (node != NULL) { + rahead_t *ra = (rahead_t *)(node->data); + ra->type = IDIR; + ra->u_ra.ino = mp->m_sb.sb_rootino; + ra->readdirblks = 1; + queue_insert(&Q, node); + } else { + do_error(_("failed to allocate memory. aborting\n")); + } + /* + * insert root dir into the directory processing queue + */ + node = alloc_qnode(&dir_queue, sizeof(xfs_ino_t)); + if (node != NULL) { + *((xfs_ino_t*)(node->data)) = mp->m_sb.sb_rootino; + queue_insert(&dir_queue, node); + } else { + do_error("failed to allocate memory. aborting\n"); + } +#else push_dir(&stack, mp->m_sb.sb_rootino); +#endif process_dirstack(mp, &stack); do_log(_(" - traversal finished ... \n")); @@ -4017,9 +4187,33 @@ ino = XFS_AGINO_TO_INO(mp, i, j + irec->ino_startnum); if (inode_isadir(irec, j) && - !is_inode_refchecked(ino, - irec, j)) { + !is_inode_refchecked(ino, irec, j)) { +#ifdef PHASE_6 + qnode_t *node; + node=alloc_qnode(&Q, sizeof(rahead_t)); + if (node != NULL) { + rahead_t *ra = + (rahead_t *)(node->data); + ra->type = IDIR; + ra->u_ra.ino = ino; + ra->readdirblks = 1; + queue_insert(&Q, node); + } else { + do_error(_("failed to allocate memory. aborting\n")); + } + + node = alloc_qnode(&dir_queue, + sizeof(xfs_ino_t)); + if (node != NULL) { + *((xfs_ino_t*)(node->data)) = + ino; + queue_insert(&dir_queue, node); + } else { + do_error("failed to allocate memory. aborting\n"); + } +#else push_dir(&stack, ino); +#endif process_dirstack(mp, &stack); } } diff -Nru xfsprogs-old/repair/phase7.c xfsprogs-new2/repair/phase7.c --- xfsprogs-old/repair/phase7.c 2005-11-11 06:27:22.000000000 -0800 +++ xfsprogs-new2/repair/phase7.c 2007-01-06 08:38:12.000000000 -0800 @@ -25,6 +25,10 @@ #include "err_protos.h" #include "dinode.h" #include "versions.h" +#ifdef PHASE_7 +#include "queue.h" +#include "threads.h" +#endif /* dinoc is a pointer to the IN-CORE dinode core */ void @@ -91,8 +95,45 @@ */ for (i = 0; i < glob_agcount; i++) { irec = findfirst_inode_rec(i); +#ifdef PHASE_7 + int iter = 0; + ino_tree_node_t *ra_irec; + ino_tree_node_t *ra_first_irec; + int chunklen = BBTOB(XFS_FSB_TO_BB(mp, XFS_IALLOC_BLOCKS(mp))); + + ra_first_irec = ra_irec = irec; + /* + * before we start processing, insert 'rahead' number of + * nodes into the read ahead queue. call insert_nodes() + * with readdirblks set to 0 implying that the directory + * blocks of the directory inodes within this chunk should + * not be read in. that's because phase 7 looks into only + * the inode structure. + */ + if (insert_nodes(rahead, i, &ra_first_irec, &ra_irec, + 0, chunklen)) + do_error(_("failed to allocate memory. aborting\n")); +#endif while (irec != NULL) { +#ifdef PHASE_7 + iter++; + /* + * after a set of 'radelta' num of nodes have been + * processed, insert another 'radelta' nodes into the + * read ahead queue. call insert_nodes() with + * readdirblks set to 0 implying that the directory + * blocks of the directory inodes within this chunk + * should not be read in. that's because phase 7 + * looks into only the inode structure. + */ + if (iter % radelta == 0) { + if (insert_nodes(radelta, i, &ra_first_irec, + &ra_irec, 0, chunklen)) + do_error(_("failed to allocate memory. 
" + "aborting\n")); + } +#endif for (j = 0; j < XFS_INODES_PER_CHUNK; j++) { ASSERT(is_inode_confirmed(irec, j)); diff -Nru xfsprogs-old/repair/queue.c xfsprogs-new2/repair/queue.c --- xfsprogs-old/repair/queue.c 1969-12-31 16:00:00.000000000 -0800 +++ xfsprogs-new2/repair/queue.c 2007-01-22 11:32:23.000000000 -0800 @@ -0,0 +1,187 @@ +/* + * Copyright (c) 2006-2007 agami Systems, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of version 2 of the GNU General Public + * License as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + * + * Contact information: + * agami Systems, Inc., + * 1269 Innsbruck Drive, + * Sunnyvale, CA 94089, or: + * + * http://www.agami.com + */ + +#include +#include +#include +#include "queue.h" + +/* + * The routines in this file implement a generic queue data structure + */ + +int +queue_init( + queue_t *Q) +{ + pthread_mutex_init(&Q->qmutex, NULL); + pthread_cond_init(&Q->qcond_wait, NULL); + Q->head = Q->tail = NULL; + Q->waiters = 0; + Q->status = 0; + Q->fl.head = NULL; + Q->fl.cnt = 0; + pthread_mutex_init(&Q->fl.listmutex, NULL); + + return 0; +} + +/* + * This routine is useful for waking up blocked waiters on an + * empty queue + */ +int +q_empty_wakeup_all( + queue_t *Q) +{ + pthread_mutex_lock(&Q->qmutex); + + if (Q->head == Q->tail) { + pthread_cond_broadcast(&Q->qcond_wait); + Q->status = 1; + pthread_mutex_unlock(&Q->qmutex); + return 0; + } else { + pthread_mutex_unlock(&Q->qmutex); + return 1; + } +} + +/* + * This queue can be used in two modes namely blocking/non-blocking. + * In the blocking mode, threads issuing a delete wait till data is + * available. Non blocking deletes return immediatly if the queue + * is empty. 
+ */ +int +queue_remove( + queue_t *Q, + qnode_t **data, + int blocking) +{ + pthread_mutex_lock(&Q->qmutex); + + if (Q->status) { + pthread_mutex_unlock(&Q->qmutex); + return 0; + } + + if (!blocking) { + if ((Q->head == NULL) && (Q->tail == NULL)) { + pthread_mutex_unlock(&Q->qmutex); + return 0; + } + } + + Q->waiters++; + + while ((Q->head == NULL) && (Q->tail == NULL) && (Q->status == 0)) { + pthread_cond_wait(&Q->qcond_wait, &Q->qmutex); + } + + Q->waiters--; + + if (Q->status) { + pthread_mutex_unlock(&Q->qmutex); + return 0; + } + + *data = Q->tail; + if (Q->head == Q->tail) + Q->tail = Q->head = NULL; + else + Q->tail = Q->tail->next; + + pthread_mutex_unlock(&Q->qmutex); + return 1; +} + +void +queue_insert( + queue_t *Q, + qnode_t *data) +{ + + pthread_mutex_lock(&Q->qmutex); + + if (Q->head == NULL) { + Q->head = Q->tail = data; + } else { + Q->head->next = data; + Q->head = data; + } + + if (Q->waiters) { + pthread_cond_signal(&Q->qcond_wait); + } + + pthread_mutex_unlock(&Q->qmutex); + return; +} + +void +free_qnode( + queue_t *Q, + qnode_t *node) +{ + pthread_mutex_lock(&Q->fl.listmutex); + + node->next = Q->fl.head; + Q->fl.head = node; + Q->fl.cnt++; + + pthread_mutex_unlock(&Q->fl.listmutex); + return; +} + +qnode_t* +alloc_qnode( + queue_t *Q, + int size) +{ + qnode_t *node = NULL; + pthread_mutex_lock(&Q->fl.listmutex); + + if (Q->fl.cnt > 0) { + node = Q->fl.head; + Q->fl.head = (Q->fl.head)->next; + Q->fl.cnt--; + node->next = NULL; + } else { + if ((node = (qnode_t*)malloc(sizeof(qnode_t))) == NULL) { + pthread_mutex_unlock(&Q->fl.listmutex); + return NULL; + } + if ((node->data = malloc(size)) == NULL) { + free(node); + pthread_mutex_unlock(&Q->fl.listmutex); + return NULL; + } + node->next = NULL; + } + pthread_mutex_unlock(&Q->fl.listmutex); + return node; +} + diff -Nru xfsprogs-old/repair/queue.h xfsprogs-new2/repair/queue.h --- xfsprogs-old/repair/queue.h 1969-12-31 16:00:00.000000000 -0800 +++ xfsprogs-new2/repair/queue.h 2007-01-22 11:32:33.000000000 -0800 @@ -0,0 +1,78 @@ +/* + * Copyright (c) 2006-2007 agami Systems, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of version 2 of the GNU General Public + * License as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + * + * Contact information: + * agami Systems, Inc., + * 1269 Innsbruck Drive, + * Sunnyvale, CA 94089, or: + * + * http://www.agami.com + */ + +typedef struct qnode { + void *data; + struct qnode *next; +} qnode_t; + +typedef struct freelist { + qnode_t *head; + int cnt; + pthread_mutex_t listmutex; +} freelist_t; + +typedef struct queue { + qnode_t *head; + qnode_t *tail; + pthread_mutex_t qmutex; + pthread_cond_t qcond_wait; + int waiters; + /* + * status can be either 0 or 1. 0 signifying queue is being + * used; 1 signifying queue no longer being used. 
+ */ + int status; + freelist_t fl; +} queue_t; + +int +queue_init( + queue_t *Q); + +void +queue_insert( + queue_t *Q, + qnode_t *data); + +int +queue_remove( + queue_t *Q, + qnode_t **data, + int blocking); + +int +q_empty_wakeup_all( + queue_t *Q); + +qnode_t* +alloc_qnode( + queue_t *Q, + int size); + +void +free_qnode( + queue_t *Q, + qnode_t *node); diff -Nru xfsprogs-old/repair/threads.c xfsprogs-new2/repair/threads.c --- xfsprogs-old/repair/threads.c 1969-12-31 16:00:00.000000000 -0800 +++ xfsprogs-new2/repair/threads.c 2007-01-22 11:32:10.000000000 -0800 @@ -0,0 +1,821 @@ +/* + * Copyright (c) 2006-2007 agami Systems, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of version 2 of the GNU General Public + * License as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + * + * Contact information: + * agami Systems, Inc., + * 1269 Innsbruck Drive, + * Sunnyvale, CA 94089, or: + * + * http://www.agami.com + */ + +/* + * this file consists of all the routines needed to implement the read + * ahead algorithm. as of now, we do two kinds of read ahead namely + * 1) a complete inode chunk (denoted by ICHUNK in the queue node) + * 2) a directory inode (denoted by IDIR in the queue node) + * To begin with 'numthreads' threads are created by start_threads(), + * which is an exported routine. the worker threads start with the + * thread_worker() routine. this essentially plucks off nodes from + * the read ahead queue, and depending on their type, dispatches them + * to the appropriate routines. + * + * note: according to the current algorithm of xfs_repair, phase 3,4, + * and 7 issues ICHUNK read ahead requests. during phase 3 and 4, + * process_aginodes() queues in nodes to the read ahead queue, while + * in phase 7, phase7() does it. phase 6 issues IDIR requests. The + * insertions to the read ahead queue happens in lf_block_dir_entry_check(), + * longform_dir2_entry_check_data(), shortform_dir_entry_check(), + * shortform_dir2_entry_check(), phase6(). + */ + +#include +#include +#include "avl.h" +#include "globals.h" +#include "incore.h" +#include "bmap.h" +#include "versions.h" +#include "dinode.h" +#include "queue.h" +#include "threads.h" +#include "err_protos.h" +#include "dir2.h" + +#define BBTOOFF64(bbs) (((xfs_off_t)(bbs)) << BBSHIFT) + +static pthread_t *mythread; +static xfs_mount_t *mptr; +static int fd; +static int chunklen; +static char *pool; +static char **buf; +static int *tid; +/* + * phase 6 specific. we maintain two global counters for the read ahead + * in phase 6. 'p6_processed' maintains the count of directory inodes that + * the main thread has completed processing. 'p6_read' maintains the count + * of directory inodes that the read ahead threads have fetched into memory. + * in order to avoid flooding the page cache, we allow read aheads only if + * the delta between 'p6_read' & 'p6_processed' is less than 'rahead', which + * is our global read ahead tunable. 
in case the delta exceeds the value, + * the worker threads sleep on their individual conditional variables. + * before going to sleep, the threads queue their conditional variables in + * the 'p6_sleep_Q' queue. this is done in the routine 'good_to_read()'. + * when the delta gets below the threshold, the main thread wakes up the + * sleeping worker threads. this happens in 'wake_if_sleeping()'. + */ +static int p6_processed = 0; +static int p6_read = 0; +static pthread_mutex_t p6_mutex = PTHREAD_MUTEX_INITIALIZER; +static queue_t p6_sleep_Q; +/* + * read ahead queue + */ +queue_t Q; + +extern int libxfs_bmapi_single(xfs_trans_t *, xfs_inode_t *, int, + xfs_fsblock_t *, xfs_fileoff_t); + +static void +sleep100ms(void); + +static int +ichunk_dirblks_rahead( + int tid, + char *ichunk); + +static int +thread_read_dir2( + int tid, + xfs_dinode_t *dip, + blkmap_t *blkmap); + +static int +block_dir2_rahead( + int tid, + blkmap_t *blkmap); + +static int +thread_read_exinode( + int tid, + xfs_dinode_t *dip, + __uint64_t *nex, + blkmap_t **blkmapp); + +static void +ichunk_rahead( + int tid, + xfs_daddr_t blkno, + int len, + int readdirblks); + +static void +idir_rahead( + int tid, + xfs_ino_t ino, + int readdirblks); + +static void +dir2_rahead( + int tid, + xfs_inode_t *ip); + +static int +leaf_node_dir2_rahead( + int tid, + blkmap_t *blkmap); + +static void +good_to_read( + pthread_cond_t *cond); + +/* + * pluck nodes off the read ahead queue, and call the appropriate + * routine depending on the type i.e ICHUNK or IDIR + */ + +static void* +thread_worker( + void* arg) +{ + nodetype_t type; + xfs_daddr_t blkno; + int len; + xfs_ino_t ino; + int readdirblks; + qnode_t *node; + int tid = *((int*)arg); + pthread_cond_t p6_sleep_cond = PTHREAD_COND_INITIALIZER; + + while (queue_remove(&Q, &node, 1)) { + type = ((rahead_t*)(node->data))->type; + readdirblks = ((rahead_t*)(node->data))->readdirblks; + + switch (type) { + case ICHUNK: + blkno = ((rahead_t*)(node->data))->u_ra.ichunk.blkno; + len = ((rahead_t*)(node->data))->u_ra.ichunk.len; + ichunk_rahead(tid, blkno, len, readdirblks); + break; + case IDIR: + ino = ((rahead_t*)(node->data))->u_ra.ino; + good_to_read(&p6_sleep_cond); + idir_rahead(tid, ino, readdirblks); + break; + default: + do_error(_("should never be reached\n")); + break; + } + + free_qnode(&Q, node); + } + + pthread_exit(NULL); + return arg; +} + +/* + * read in an inode chunk. if readdirblks is set, then read in the + * directory blocks of all the directory inodes within the chunk. + */ +static void +ichunk_rahead( + int tid, + xfs_daddr_t blkno, + int len, + int readdirblks) +{ + if (pread64(fd, buf[tid], len, BBTOOFF64(blkno)) < 0) { + do_warn(_("failed read ahead of inode chunk at block %ld.\n"), blkno); + return; + } + + if (readdirblks) { + if (ichunk_dirblks_rahead(tid, buf[tid])) { + do_warn(_("suspect inode chunk at block %ld.\n"), blkno); + } + } + return; +} + +/* + * given an inode number, read in the inode. if readdirblks is set, + * read in the directory blocks for the inode. 
+ */ +static void +idir_rahead( + int tid, + xfs_ino_t ino, + int readdirblks) +{ + xfs_inode_t *ip; + int error; + + if ((error = libxfs_iget(mptr, NULL, ino, 0, &ip, 0))) { + do_warn(_("couldn't map inode %llu, err = %d\n"),ino, error); + return; + } + + if (readdirblks) { + switch (ip->i_d.di_format) { + case XFS_DINODE_FMT_EXTENTS: + case XFS_DINODE_FMT_BTREE: + if (XFS_SB_VERSION_HASDIRV2(&mptr->m_sb)) + dir2_rahead(tid, ip); + break; + default: + break; + } + } + + libxfs_iput(ip, 0); +} + +/* + * read in the block map for an extent type inode. stripped down + * version of 'process_bmbt_reclist_int()' + * + * return: 0 on success, 1 on failure + */ +static int +thread_read_exinode( + int tid, + xfs_dinode_t *dip, + __uint64_t *nex, + blkmap_t **blkmapp) +{ + int i; + xfs_dfilblks_t c; /* count */ + xfs_dfilblks_t cp = 0; /* prev count */ + xfs_dfsbno_t s; /* start */ + xfs_dfsbno_t sp = 0; /* prev start */ + xfs_dfiloff_t o = 0; /* offset */ + xfs_dfiloff_t op = 0; /* prev offset */ + int flag; /* extent flag */ + xfs_bmbt_rec_32_t *rp; + + rp = (xfs_bmbt_rec_32_t *)XFS_DFORK_PTR(dip, XFS_DATA_FORK); + *nex = XFS_DFORK_NEXTENTS(dip, XFS_DATA_FORK); + + for (i = 0; i < *nex; i++, rp++) { + convert_extent(rp, &o, &s, &c, &flag); + + if (i > 0 && op + cp > o) { + return 1; + } + + op = o; + cp = c; + sp = s; + + if (c == 0) { + return 1; + } + + if (!verify_dfsbno(mptr, s)) { + return 1; + } + + if (!verify_dfsbno(mptr, s + c - 1)) { + return 1; + } + + if (s + c - 1 < s) { + return 1; + } + + if (o > fs_max_file_offset) { + return 1; + } + + blkmap_set_ext(blkmapp, o, s, c); + } + return 0; +} + +/* + * carve out disk inodes out of the inode chunk one by one. if inode not + * a v2 directory, ignore. otherwise read in it's directory blocks. what + * do we do in case we get an error in any of the directory inodes within + * the chunk ? we take the optimistic view, and move on to the next inode + * in the chunk. + * + * return: 0 if all the inodes in the chunk were read without any errors, + * 1 otherwise. 
+ */ +static int +ichunk_dirblks_rahead( + int tid, + char *ichunk) +{ + xfs_dinode_t *dino; + xfs_dinode_core_t *dinoc; + __uint64_t nextents; + blkmap_t *dblkmap = NULL; + int done = 0; + int icnt = 0; + int irec_offset = 0; + int err = 0; + + while (!done) { + dino = (xfs_dinode_t*)(ichunk + + (icnt << mptr->m_sb.sb_inodelog)); + + icnt++; + irec_offset++; + + if(icnt == XFS_IALLOC_INODES(mptr) && + irec_offset == XFS_INODES_PER_CHUNK) { + done = 1; + } else if (irec_offset == XFS_INODES_PER_CHUNK) { + irec_offset = 0; + } + + dinoc = &dino->di_core; + + if (INT_GET(dinoc->di_magic, ARCH_CONVERT) != XFS_DINODE_MAGIC) + continue; + + if (!XFS_DINODE_GOOD_VERSION(dinoc->di_version) || + (!fs_inode_nlink && + dinoc->di_version > XFS_DINODE_VERSION_1)) { + continue; + } + + if (INT_GET(dinoc->di_size, ARCH_CONVERT) < 0) + continue; + + if ((INT_GET(dinoc->di_mode, ARCH_CONVERT) & S_IFMT) != + S_IFDIR) + continue; + + nextents = INT_GET(dinoc->di_nextents, ARCH_CONVERT); + if (nextents > INT_GET(dinoc->di_nblocks, ARCH_CONVERT) || + nextents > XFS_MAX_INCORE_EXTENTS) + nextents = 1; + + if (INT_GET(dinoc->di_size, ARCH_CONVERT) <= + XFS_DFORK_DSIZE(dino, mptr) && + (dinoc->di_format != XFS_DINODE_FMT_LOCAL)) { + continue; + } + + if (dinoc->di_format == XFS_DINODE_FMT_EXTENTS) { + dblkmap = blkmap_alloc(nextents); + } else { + continue; + } + + nextents = 0; + if (thread_read_exinode(tid, dino, &nextents, &dblkmap)) { + err = 1; + blkmap_free(dblkmap); + continue; + } + + if (nextents > MAXEXTNUM) { + blkmap_free(dblkmap); + continue; + } + + if (nextents != INT_GET(dinoc->di_nextents, ARCH_CONVERT)) { + blkmap_free(dblkmap); + continue; + } + + if (XFS_SB_VERSION_HASDIRV2(&mptr->m_sb)) + if (thread_read_dir2(tid, dino, dblkmap)) + err = 1; + + blkmap_free(dblkmap); + } + + return err; +} + +/* + * call the appropriate directory routine based on the type i.e block/leaf. + * + * return: 0 if the directory blocks were read without any errors, 1 otherwise. + */ +static int +thread_read_dir2( + int tid, + xfs_dinode_t *dip, + blkmap_t *blkmap) +{ + int res = 0; + xfs_dfiloff_t last = 0; + + if (blkmap) + last = blkmap_last_off(blkmap); + + if (last == mptr->m_dirblkfsbs && + dip->di_core.di_format == XFS_DINODE_FMT_EXTENTS) { + res = block_dir2_rahead(tid, blkmap); + } else if (last >= mptr->m_dirleafblk + mptr->m_dirblkfsbs && + dip->di_core.di_format == XFS_DINODE_FMT_EXTENTS) { + res = leaf_node_dir2_rahead(tid, blkmap); + } + + return res; +} + +/* + * read in the blocks of a block type v2 directory. stripped down + * version of 'process_block_dir2()'. the idea is to bring the blocks + * into the page cache so that the main thread has a cache hit. it's + * fine to ignore any read error that we might see since the main + * thread will catch up behind us and clean things up. we just log + * the failure and proceed. + * + * return: 0 on a successful read of the block, 1 otherwise. 
+ */ +static int +block_dir2_rahead( + int tid, + blkmap_t *blkmap) +{ + bmap_ext_t lbmp; + bmap_ext_t *bmp; + xfs_dir2_block_t *block; + xfs_dabuf_t *bp; + int nex; + + nex = blkmap_getn(blkmap, mptr->m_dirdatablk, mptr->m_dirblkfsbs, &bmp, + &lbmp); + if (nex == 0) { + return 1; + } + bp = da_read_buf(mptr, nex, bmp); + if (bmp != &lbmp) + free(bmp); + + if (bp == NULL) { + return 1; + } + + block = bp->data; + if (INT_GET(block->hdr.magic, ARCH_CONVERT) != XFS_DIR2_BLOCK_MAGIC) + do_warn(_("bad directory block magic in block %lu\n"), + XFS_FSB_TO_DADDR(mptr, bmp[0].startblock)); + + da_brelse(bp); + return 0; +} + +static void sleep100ms(void) +{ + struct timespec ts; + ts.tv_sec = 0; + ts.tv_nsec = 100*1000*1000; + + nanosleep(&ts, NULL); + return; +} + +void start_threads( + xfs_mount_t *mp) +{ + char* temp; + int i; + + if ((mptr = (xfs_mount_t*)malloc(sizeof(xfs_mount_t))) == NULL) + do_error("failed to allocate memory.\n"); + memcpy(mptr, mp, sizeof(xfs_mount_t)); + /* + * chunklen is the length of an inode chunk + */ + chunklen = BBTOB(XFS_FSB_TO_BB(mptr, XFS_IALLOC_BLOCKS(mptr))); + /* + * allocate enough memory for the worker threads + */ + if((pool = (char*)malloc(chunklen * numthreads)) == NULL) + do_error("failed to allocate memory.\n"); + buf = (char**)malloc(numthreads * sizeof(char*)); + if (buf == NULL) + do_error("failed to allocate memory.\n"); + temp = pool; + for (i = 0; i < numthreads; i++) { + buf[i] = temp; + temp += chunklen; + } + fd = libxfs_device_to_fd(mptr->m_dev); + if (queue_init(&Q)) + do_error("failed to initialize read ahead queue.\n"); + if (queue_init(&p6_sleep_Q)) + do_error("failed to initialize sleep queue for phase 6.\n"); + mythread = (pthread_t*)malloc(numthreads * sizeof(pthread_t)); + if (mythread == NULL) + do_error("failed to allocate memory.\n"); + tid = (int*)malloc(numthreads * sizeof(int)); + if (tid == NULL) + do_error("failed to allocate memory.\n"); + for (i = 0; i < numthreads; i++) { + tid[i] = i; + if (pthread_create(&mythread[i], NULL, thread_worker, (void*)&tid[i])) + do_error("failed to create worker threads.\n"); + } + return; +} + +void stop_threads(void) { + + int i; + + unblock_threads(); + + for (i = 0; i < numthreads; i++) { + if (pthread_join(mythread[i], NULL)) + do_warn(_("thread %d failed to join. continuing"), i); + } + + free(mptr); + free(pool); + free(buf); + free(tid); + + return; +} + +void +unblock_threads(void) +{ + /* + * loop every 100ms to see if the read ahead queue is empty. + * if so, wake up all worker threads. + */ + while (1) { + if (q_empty_wakeup_all(&Q)) { + sleep100ms(); + } else { + break; + } + } +} + +/* + * directory block read ahead code for phase 6. the idea is to bring + * the blocks into the page cache so that the main thread has a cache hit. + * it's fine to ignore any read error that we might see since the main + * thread will catch up behind us and clean things up. we just log the + * failure and proceed. 
+ */ +static void +dir2_rahead( + int tid, + xfs_inode_t *ip) +{ + xfs_fileoff_t da_bno; + xfs_fileoff_t next_da_bno; + int j; + xfs_fsblock_t fsb; + xfs_daddr_t blkno; + int len; + int nfsb; + int error; + char *buf; + + for (da_bno = 0, next_da_bno = 0; next_da_bno != NULLFILEOFF; + da_bno = next_da_bno) { + + next_da_bno = da_bno + mptr->m_dirblkfsbs - 1; + if (libxfs_bmap_next_offset(NULL, ip, &next_da_bno, + XFS_DATA_FORK)) + break; + + if (mptr->m_dirblkfsbs == 1) { + error = libxfs_bmapi_single(NULL, ip, XFS_DATA_FORK, + &fsb, da_bno); + if (error != 0) { + do_warn("bmap block err: %d in inode: %llu\n", + error, ip->i_ino); + return; + } + if (fsb == NULLFSBLOCK) { + return; + } + blkno = XFS_FSB_TO_DADDR(mptr, fsb); + len = XFS_FSB_TO_BB(mptr, 1); + + if ((buf = (char*)malloc(BBTOB(len))) == NULL) { + do_error("malloc failed in thread %d\n", tid); + } + + if (pread64(fd, buf, BBTOB(len), BBTOOFF64(blkno))< 0){ + do_warn(_("failed read of block: %ld. " + "continuing\n"), blkno); + } + + free(buf); + } else if ((nfsb = mptr->m_dirblkfsbs) > 1) { + xfs_fsblock_t firstblock; + xfs_bmbt_irec_t *mapp; + int nmap; + + mapp = malloc(sizeof(*mapp) * nfsb); + + if (mapp == NULL) { + do_error("cannot allocate memory for map\n"); + } + + firstblock = NULLFSBLOCK; + nmap = nfsb; + error = libxfs_bmapi(NULL, ip, da_bno, nfsb, + XFS_BMAPI_METADATA | + XFS_BMAPI_AFLAG(XFS_DATA_FORK), + &firstblock, 0, mapp, &nmap, + NULL); + if (error) { + do_warn("bmap block err: %d in inode: %llu\n", + error, ip->i_ino); + free(mapp); + return; + } + + for (j = 0; j < nmap; j++) { + blkno= XFS_FSB_TO_DADDR(mptr, + mapp[j].br_startblock); + len = XFS_FSB_TO_BB(mptr, + mapp[j].br_blockcount); + + if ((buf = (char*)malloc(BBTOB(len))) == NULL){ + do_error("malloc failed in thread %d\n", + tid); + } + + if (pread64(fd, buf, BBTOB(len), + BBTOOFF64(blkno)) < 0) { + do_warn(_("failed read of block: %ld. " + "continuing\n"), blkno); + } + free(buf); + } + free(mapp); + } else { + do_warn("invalid mptr->m_dirblkfsbs: %d\n", + mptr->m_dirblkfsbs); + return; + } + } + return; +} + +/* + * read in blocks of a leaf type v2 directory. stripped down version + * of 'process_leaf_node_dir2()'. the idea is to bring the blocks into + * the page cache so that the main thread has a cache hit. it's fine to + * ignore any read error that we might see since the main thread will + * catch up behind us and clean things up. we just log the failure and + * proceed. + * + * return: 0 on a successful read of all the blocks, 1 otherwise. + * + * Todo: this code reads only the directory data blocks. need to enhance + * it to read the internal node and leaf blocks. 
+ */ +static int +leaf_node_dir2_rahead( + int tid, + blkmap_t *blkmap) +{ + xfs_dfiloff_t dbno; + xfs_dfiloff_t ndbno; + bmap_ext_t lbmp; + bmap_ext_t *bmp; + xfs_dabuf_t *bp; + xfs_dir2_data_t *data; + int nex; + int t; + int err = 0; + + ndbno = NULLDFILOFF; + while ((dbno = blkmap_next_off(blkmap,ndbno,&t)) < mptr->m_dirleafblk){ + nex = blkmap_getn(blkmap, dbno, mptr->m_dirblkfsbs, + &bmp, &lbmp); + ndbno = dbno + mptr->m_dirblkfsbs - 1; + if (nex == 0) { + err = 1; + continue; + } + bp = da_read_buf(mptr, nex, bmp); + if (bmp != &lbmp) + free(bmp); + if (bp == NULL) { + err = 1; + continue; + } + data = bp->data; + if (INT_GET(data->hdr.magic, ARCH_CONVERT) != + XFS_DIR2_DATA_MAGIC) + do_warn(_("bad directory block magic # %#x in " + "block %lu\n"), + INT_GET(data->hdr.magic, ARCH_CONVERT), + XFS_FSB_TO_DADDR(mptr, bmp[0].startblock)); + + da_brelse(bp); + } + return err; +} + +int +insert_nodes( + int numnodes, + int agno, + ino_tree_node_t **first_ra_recp, + ino_tree_node_t **ra_recp, + int readdirblks, + int chunklen) +{ + int i, ra_inos; + xfs_agblock_t agbno; + qnode_t *ranode; + + for (i = 0; i < numnodes && *ra_recp != NULL; i++) { + ra_inos = XFS_INODES_PER_CHUNK; + while (ra_inos < XFS_IALLOC_INODES(mptr) && *ra_recp != NULL) { + if ((*ra_recp = next_ino_rec(*ra_recp)) != NULL) + ra_inos += XFS_INODES_PER_CHUNK; + } + ranode = alloc_qnode(&Q, sizeof(rahead_t)); + if (ranode != NULL) { + rahead_t *ra = (rahead_t*)(ranode->data); + agbno = XFS_AGINO_TO_AGBNO(mptr, + (*first_ra_recp)->ino_startnum); + ra->type = ICHUNK; + ra->u_ra.ichunk.blkno = XFS_AGB_TO_DADDR(mptr, agno, + agbno); + ra->u_ra.ichunk.len = chunklen; + ra->readdirblks = readdirblks; + queue_insert(&Q, ranode); + } else { + return 1; + } + if (*ra_recp != NULL) + *first_ra_recp = *ra_recp = next_ino_rec(*ra_recp); + } + return 0; +} +/* + * refer to notes at the begining of the file for details about the + * working of this routine. + */ +static void +good_to_read( + pthread_cond_t *cond) +{ + qnode_t *node; + pthread_mutex_lock(&p6_mutex); + p6_read++; + if (p6_read - p6_processed < rahead) { + pthread_mutex_unlock(&p6_mutex); + return; + } else { + node = alloc_qnode(&p6_sleep_Q, sizeof(pthread_cond_t*)); + if (node != NULL) { + (pthread_cond_t*)(node->data) = cond; + queue_insert(&p6_sleep_Q, node); + pthread_cond_wait(cond, &p6_mutex); + } else { + do_error("failed to allocate memory. aborting\n"); + } + pthread_mutex_unlock(&p6_mutex); + return; + } +} +/* + * refer to notes at the begining of the file for details about the + * working of this routine. + */ +void +wake_if_sleeping(void) +{ + qnode_t *node; + pthread_mutex_lock(&p6_mutex); + p6_processed++; + if (queue_remove(&p6_sleep_Q, &node, 0)) { + pthread_cond_t *cond = (pthread_cond_t*)(node->data); + free_qnode(&p6_sleep_Q, node); + pthread_cond_signal(cond); + pthread_mutex_unlock(&p6_mutex); + return; + } else { + pthread_mutex_unlock(&p6_mutex); + return; + } +} diff -Nru xfsprogs-old/repair/threads.h xfsprogs-new2/repair/threads.h --- xfsprogs-old/repair/threads.h 1969-12-31 16:00:00.000000000 -0800 +++ xfsprogs-new2/repair/threads.h 2007-01-22 11:32:16.000000000 -0800 @@ -0,0 +1,81 @@ +/* + * Copyright (c) 2006-2007 agami Systems, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of version 2 of the GNU General Public + * License as published by the Free Software Foundation. 
+ * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + * + * Contact information: + * agami Systems, Inc., + * 1269 Innsbruck Drive, + * Sunnyvale, CA 94089, or: + * + * http://www.agami.com + */ + +/* + * There are two kind of objects that are being prefetched right + * now namely inode chunks and directory inodes. Inode chunks + * are operated on in phase 3, phase 4, and phase 7. Phase 6 + * operates on individual directory inodes. + */ +typedef enum nodetype { + ICHUNK, + IDIR +} nodetype_t; + +typedef struct rahead { + nodetype_t type; + union { + struct { + xfs_daddr_t blkno; + int len; + } ichunk; + xfs_ino_t ino; + } u_ra; + /* + * setting readdirblks to 1 signifies that the directory + * blocks for the directory inodes within this inode chunk + * should also be read in. + */ + int readdirblks; +} rahead_t; + +typedef struct buftype { + char *buf; + int len; +} buf_t; + +void +start_threads( + xfs_mount_t *mptr); + +void +stop_threads(void); + +void +unblock_threads(void); + +void +wake_if_sleeping(void); + +int +insert_nodes( + int numnodes, + int agno, + ino_tree_node_t **first_ra_recp, + ino_tree_node_t **ra_recp, + int readdirblks, + int chunklen); + +extern queue_t Q; diff -Nru xfsprogs-old/repair/xfs_repair.c xfsprogs-new2/repair/xfs_repair.c --- xfsprogs-old/repair/xfs_repair.c 2005-11-11 06:27:22.000000000 -0800 +++ xfsprogs-new2/repair/xfs_repair.c 2007-01-17 14:07:20.000000000 -0800 @@ -25,6 +25,8 @@ #include "protos.h" #include "incore.h" #include "err_protos.h" +#include "queue.h" +#include "threads.h" #define rounddown(x, y) (((x)/(y))*(y)) @@ -52,6 +54,16 @@ "assume_xfs", #define PRE_65_BETA 1 "fs_is_pre_65_beta", +#define IHASH_SIZE 2 + "ihash", +#define BHASH_SIZE 3 + "bhash", +#define NUMTHREADS 4 + "numthreads", +#define RAHEAD 5 + "rahead", +#define RADELTA 6 + "radelta", NULL }; @@ -171,6 +183,14 @@ fs_has_extflgbit_allowed = 1; pre_65_beta = 0; fs_shared_allowed = 1; + /* + * default values of numthreads, rahead, and radelta if not + * overriden by user supplied [-o] suboptions. 
+ */ + numthreads = 10; + rahead = 100; + radelta = 10; + /* * XXX have to add suboption processing here @@ -202,6 +222,27 @@ PRE_65_BETA); pre_65_beta = 1; break; + case NUMTHREADS: + if (!val) + do_error("value for 'numthreads' needs to be specified\n"); + int inp_numthreads = atoi(val); + if (inp_numthreads > numthreads) + numthreads = inp_numthreads; + break; + case RAHEAD: + if (!val) + do_error("value for 'rahead' needs to be specified\n"); + int inp_rahead = atoi(val); + if (inp_rahead > rahead) + rahead = inp_rahead; + break; + case RADELTA: + if (!val) + do_error("value for 'radelta' needs to be specified\n"); + int inp_radelta = atoi(val); + if (inp_radelta > radelta) + radelta = inp_radelta; + break; default: unknown('o', val); break; @@ -496,6 +537,9 @@ phase2(mp); +#if defined(PHASE_3_4) || defined(PHASE_6) || defined(PHASE_7) + start_threads(mp); +#endif phase3(mp); phase4(mp); @@ -513,6 +557,9 @@ do_warn( _("Inode allocation btrees are too corrupted, skipping phases 6 and 7\n")); } +#if defined(PHASE_3_4) || defined(PHASE_6) || defined(PHASE_7) + stop_threads(); +#endif if (lost_quotas && !have_uquotino && !have_gquotino) { if (!no_modify) { --------------090708000700070403030802-- From owner-xfs@oss.sgi.com Mon Jan 22 16:00:55 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 16:01:01 -0800 (PST) X-Spam-oss-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0N00qqw027859 for ; Mon, 22 Jan 2007 16:00:54 -0800 Received: from pcbnaujok (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA23050; Tue, 23 Jan 2007 10:59:52 +1100 Message-Id: <200701222359.KAA23050@larry.melbourne.sgi.com> From: "Barry Naujok" To: "'Michael Nishimoto'" , "'XFS Mailing List'" Cc: "'Chandan Talukdar'" Subject: RE: xfs_repair speedup changes Date: Tue, 23 Jan 2007 10:59:57 +1100 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Office Outlook, Build 11.0.6353 Thread-Index: Acc+eYXLzq7sybG0RKO5VqEkqXlKmgABmBRw X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3028 In-Reply-To: <45B53E11.8080406@agami.com> X-archive-position: 10371 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 3294 Lines: 82 Hi Michael, It's going to take a little bit of time for me to digest this patch and to see how it compares to the other work performed by us. On the surface, it looks quite interesting and I'll benchmark and analyse the two to see how it compares and integrate the best solution for the majority of cases. I'm confused on why the kernel should make any difference to running xfs_repair. You should be able to get the 2.8.18 xfsprogs tarball from the FTP site, compile and test it: ftp://oss.sgi.com/projects/xfs/cmd_tars/xfsprogs_2.8.18-1.tar.gz I'm currently working on converting the phase 2-5 block map to an extent based format which will improve memory consumption in addition to speeding it up in most cases. The only other forseable change is trying to merge phase 3 and phase 6 with directory checking, but I'm not sure how practical/feasible this is and whether the amount of work will provide a significant performance increase. 
Regards, Barry. > -----Original Message----- > From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com] > On Behalf Of Michael Nishimoto > Sent: Tuesday, 23 January 2007 9:43 AM > To: XFS Mailing List > Cc: Chandan Talukdar > Subject: xfs_repair speedup changes > > Hi everyone, > > agami Systems started on a project to speed up xfs_repair before > we knew that SGI was working on the same task. Similar to SGI's > solution, our approach uses readahead to shorten the runtime. Agami > also wanted to change the existing code as little as possible. > > By releasing this patch, we hope to start a discussion which will > lead to continued improvements in xfs_repair runtimes. Our patch > has a couple of ideas which should benefit SGI's code. Using our > NAS platform which has 4 CPUs and runs XFS over software RAID5, > we have seen 5 to 8 times speedup, depending on resources allocated > to a run. The test filesystem had 1.4TB of data with 24M files. > Unfortunately, I have not been able to run the latest CVS code > against our system due to kernel differences. > > SGI's advantages > ---------------- > 1. User space cache with maximum number of entries > a. means that xfs_repair will cause less interference > with other mounted filesystems. > b. allows tracking of cache behavior. > 2. Rewrite phase7 to eliminate unnecessary transaction overhead. > > agami's advantages > ------------------ > 1. Doesn't depend on AIO & generic DIO working correctly. Will > work with older linux kernels. > 2. Parallelism model provides additional benefits > a. In phases 3 and 4, many threads can be used to prefetch > inode blocks regardless of AG count. > b. By processing one AG at a time, drives spend less time seeking > when multiple AGs are placed on a single drive due to > the volume > geometry. > c. By placing each prefetch in its own thread, more parallelism > is achieved especially when retrieving directory blocks. > > Chandan Talukdar performed all the xfs_repair work over last summer. > Because the work was done on an old base, I have ported it forward to > a CVS date of May 17, 2006. I chose this date because it allows a > cleaner patch to be delivered. > > I would like to hear suggestions for how to proceed. > > Michael Nishimoto > From owner-xfs@oss.sgi.com Mon Jan 22 16:11:10 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 16:11:22 -0800 (PST) X-Spam-oss-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from amd.ucw.cz (gprs189-60.eurotel.cz [160.218.189.60]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0N0Atqw030364 for ; Mon, 22 Jan 2007 16:11:00 -0800 Received: by amd.ucw.cz (Postfix, from userid 8) id 150F32C06E; Tue, 23 Jan 2007 00:47:59 +0100 (CET) Date: Tue, 23 Jan 2007 00:47:59 +0100 From: Pavel Machek To: Justin Piszcz Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 Message-ID: <20070122234759.GF17637@elf.ucw.cz> References: <20070122133735.GB4493@ucw.cz> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Warning: Reading this can be dangerous to your mental health. 
User-Agent: Mutt/1.5.11+cvs20060126 X-archive-position: 10372 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pavel@ucw.cz Precedence: bulk X-list: xfs Content-Length: 1502 Lines: 42 On Mon 2007-01-22 13:48:44, Justin Piszcz wrote: > > > On Mon, 22 Jan 2007, Pavel Machek wrote: > > > On Sun 2007-01-21 14:27:34, Justin Piszcz wrote: > > > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke > > > the OOM killer and kill all of my processes? > > > > > > Doing this on a single disk 2.6.19.2 is OK, no issues. However, this > > > happens every time! > > > > > > Anything to try? Any other output needed? Can someone shed some light on > > > this situation? > > > > Is it highmem-related? Can you try it with mem=256M? > > > > Pavel > > -- > > (english) http://www.livejournal.com/~pavelmachek > > (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html > > - > > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > > the body of a message to majordomo@vger.kernel.org > > More majordomo info at http://vger.kernel.org/majordomo-info.html > > > > I will give this a try later or tomorrow, I cannot have my machine crash > at the moment. > > Also, the onboard video on the Intel 965 chipset uses 128MB, not sure if > that has anything to do with it because after the system kill -9's all the > processes etc, my terminal looks like garbage. That looks like separate problem. Switch to text mode console (vgacon, not fbcon) for tests. Pavel -- (english) http://www.livejournal.com/~pavelmachek (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html From owner-xfs@oss.sgi.com Mon Jan 22 16:39:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 16:39:29 -0800 (PST) X-Spam-oss-Status: No, score=-1.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0N0dDqw010904 for ; Mon, 22 Jan 2007 16:39:15 -0800 Received: from [134.14.55.84] (shark.melbourne.sgi.com [134.14.55.84]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA24318; Tue, 23 Jan 2007 11:38:11 +1100 Message-ID: <45B558B5.6080403@sgi.com> Date: Tue, 23 Jan 2007 11:37:09 +1100 From: Donald Douwsma User-Agent: Thunderbird 1.5.0.8 (X11/20060911) MIME-Version: 1.0 To: Andrew Morton CC: Justin Piszcz , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 References: <20070122115703.97ed54f3.akpm@osdl.org> In-Reply-To: <20070122115703.97ed54f3.akpm@osdl.org> X-Enigmail-Version: 0.94.1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-archive-position: 10373 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: donaldd@sgi.com Precedence: bulk X-list: xfs Content-Length: 11172 Lines: 230 Andrew Morton wrote: >> On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz wrote: >> Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke >> the OOM killer and kill all of my processes? > > What's that? Software raid or hardware raid? If the latter, which driver? 
I've hit this using local disk while testing xfs built against 2.6.20-rc4 (SMP x86_64) dmesg follows, I'm not sure if anything in this is useful after the first event as our automated tests continued on after the failure. > Please include /proc/meminfo from after the oom-killing. > > Please work out what is using all that slab memory, via /proc/slabinfo. Sorry I didnt pick this up ether. I'll try to reproduce this and gather some more detailed info for a single event. Donald ... XFS mounting filesystem sdb5 Ending clean XFS mount for filesystem: sdb5 XFS mounting filesystem sdb5 Ending clean XFS mount for filesystem: sdb5 hald invoked oom-killer: gfp_mask=0x200d2, order=0, oomkilladj=0 Call Trace: [] out_of_memory+0x70/0x25d [] __alloc_pages+0x22c/0x2b5 [] alloc_page_vma+0x71/0x76 [] read_swap_cache_async+0x45/0xd8 [] swapin_readahead+0x60/0xd3 [] __handle_mm_fault+0x703/0x9d8 [] do_page_fault+0x42b/0x7b3 [] do_readv_writev+0x176/0x18b [] thread_return+0x0/0xed [] __const_udelay+0x2c/0x2d [] scsi_done+0x0/0x17 [] error_exit+0x0/0x84 Mem-info: Node 0 DMA per-cpu: CPU 0: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 CPU 1: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 CPU 2: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 CPU 3: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 Node 0 DMA32 per-cpu: CPU 0: Hot: hi: 186, btch: 31 usd: 31 Cold: hi: 62, btch: 15 usd: 53 CPU 1: Hot: hi: 186, btch: 31 usd: 2 Cold: hi: 62, btch: 15 usd: 60 CPU 2: Hot: hi: 186, btch: 31 usd: 20 Cold: hi: 62, btch: 15 usd: 47 CPU 3: Hot: hi: 186, btch: 31 usd: 25 Cold: hi: 62, btch: 15 usd: 56 Active:76 inactive:495856 dirty:0 writeback:0 unstable:0 free:3680 slab:9119 mapped:32 pagetables:637 Node 0 DMA free:8036kB min:24kB low:28kB high:36kB active:0kB inactive:1856kB present:9376kB pages_scanned:3296 all_unreclaimable? yes lowmem_reserve[]: 0 2003 2003 Node 0 DMA32 free:6684kB min:5712kB low:7140kB high:8568kB active:304kB inactive:1981624kB present:2052068kB pages_scanned:4343329 all_unreclaimable? 
yes lowmem_reserve[]: 0 0 0 Node 0 DMA: 1*4kB 0*8kB 0*16kB 1*32kB 1*64kB 0*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 8036kB Node 0 DMA32: 273*4kB 29*8kB 1*16kB 1*32kB 1*64kB 1*128kB 2*256kB 1*512kB 0*1024kB 0*2048kB 1*4096kB = 6684kB Swap cache: add 741048, delete 244661, find 84826/143198, race 680+239 Free swap = 1088524kB Total swap = 3140668kB Free swap: 1088524kB 524224 pages of RAM 9619 reserved pages 259 pages shared 496388 pages swap cached No available memory (MPOL_BIND): kill process 3492 (hald) score 0 or a child Killed process 3626 (hald-addon-acpi) top invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0 Call Trace: [] out_of_memory+0x70/0x25d [] __alloc_pages+0x22c/0x2b5 [] alloc_pages_current+0x74/0x79 [] __page_cache_alloc+0xb/0xe [] __do_page_cache_readahead+0xa1/0x217 [] io_schedule+0x28/0x33 [] __wait_on_bit_lock+0x5b/0x66 [] __lock_page+0x72/0x78 [] do_page_cache_readahead+0x4e/0x5a [] filemap_nopage+0x140/0x30c [] __handle_mm_fault+0x1fb/0x9d8 [] do_page_fault+0x42b/0x7b3 [] __wake_up+0x43/0x50 [] tty_ldisc_deref+0x71/0x76 [] error_exit+0x0/0x84 Mem-info: Node 0 DMA per-cpu: CPU 0: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 CPU 1: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 CPU 2: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 CPU 3: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 Node 0 DMA32 per-cpu: CPU 0: Hot: hi: 186, btch: 31 usd: 31 Cold: hi: 62, btch: 15 usd: 53 CPU 1: Hot: hi: 186, btch: 31 usd: 2 Cold: hi: 62, btch: 15 usd: 60 CPU 2: Hot: hi: 186, btch: 31 usd: 1 Cold: hi: 62, btch: 15 usd: 10 CPU 3: Hot: hi: 186, btch: 31 usd: 25 Cold: hi: 62, btch: 15 usd: 26 Active:90 inactive:496233 dirty:0 writeback:0 unstable:0 free:3485 slab:9119 mapped:32 pagetables:637 Node 0 DMA free:8036kB min:24kB low:28kB high:36kB active:0kB inactive:1856kB present:9376kB pages_scanned:3328 all_unreclaimable? yes lowmem_reserve[]: 0 2003 2003 Node 0 DMA32 free:5904kB min:5712kB low:7140kB high:8568kB active:360kB inactive:1983092kB present:2052068kB pages_scanned:4587649 all_unreclaimable? 
yes lowmem_reserve[]: 0 0 0 Node 0 DMA: 1*4kB 0*8kB 0*16kB 1*32kB 1*64kB 0*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 8036kB Node 0 DMA32: 78*4kB 29*8kB 1*16kB 1*32kB 1*64kB 1*128kB 2*256kB 1*512kB 0*1024kB 0*2048kB 1*4096kB = 5904kB Swap cache: add 741067, delete 244673, find 84826/143210, race 680+239 Free swap = 1088572kB Total swap = 3140668kB Free swap: 1088572kB 524224 pages of RAM 9619 reserved pages 290 pages shared 496396 pages swap cached No available memory (MPOL_BIND): kill process 7914 (top) score 0 or a child Killed process 7914 (top) nscd invoked oom-killer: gfp_mask=0x200d2, order=0, oomkilladj=0 Call Trace: [] out_of_memory+0x70/0x25d [] __alloc_pages+0x22c/0x2b5 [] alloc_page_vma+0x71/0x76 [] read_swap_cache_async+0x45/0xd8 [] __handle_mm_fault+0x713/0x9d8 [] do_page_fault+0x42b/0x7b3 [] try_to_del_timer_sync+0x51/0x5a [] del_timer_sync+0xc/0x16 [] schedule_timeout+0x92/0xad [] process_timeout+0x0/0xb [] sys_epoll_wait+0x3e0/0x421 [] default_wake_function+0x0/0xf [] error_exit+0x0/0x84 Mem-info: Node 0 DMA per-cpu: CPU 0: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 CPU 1: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 CPU 2: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 CPU 3: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 Node 0 DMA32 per-cpu: CPU 0: Hot: hi: 186, btch: 31 usd: 30 Cold: hi: 62, btch: 15 usd: 53 CPU 1: Hot: hi: 186, btch: 31 usd: 2 Cold: hi: 62, btch: 15 usd: 60 CPU 2: Hot: hi: 186, btch: 31 usd: 0 Cold: hi: 62, btch: 15 usd: 14 CPU 3: Hot: hi: 186, btch: 31 usd: 25 Cold: hi: 62, btch: 15 usd: 26 Active:91 inactive:496325 dirty:0 writeback:0 unstable:0 free:3425 slab:9119 mapped:32 pagetables:637 Node 0 DMA free:8036kB min:24kB low:28kB high:36kB active:0kB inactive:1856kB present:9376kB pages_scanned:3328 all_unreclaimable? yes lowmem_reserve[]: 0 2003 2003 Node 0 DMA32 free:5664kB min:5712kB low:7140kB high:8568kB active:364kB inactive:1983372kB present:2052068kB pages_scanned:4610273 all_unreclaimable? 
yes lowmem_reserve[]: 0 0 0 Node 0 DMA: 1*4kB 0*8kB 0*16kB 1*32kB 1*64kB 0*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 8036kB Node 0 DMA32: 18*4kB 29*8kB 1*16kB 1*32kB 1*64kB 1*128kB 2*256kB 1*512kB 0*1024kB 0*2048kB 1*4096kB = 5664kB Swap cache: add 741069, delete 244674, find 84826/143212, race 680+239 Free swap = 1088576kB Total swap = 3140668kB Free swap: 1088576kB 524224 pages of RAM 9619 reserved pages 293 pages shared 496396 pages swap cached No available memory (MPOL_BIND): kill process 4166 (nscd) score 0 or a child Killed process 4166 (nscd) xfs_repair invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0 Call Trace: [] out_of_memory+0x70/0x25d [] __alloc_pages+0x22c/0x2b5 [] alloc_pages_current+0x74/0x79 [] __page_cache_alloc+0xb/0xe [] __do_page_cache_readahead+0xa1/0x217 [] do_page_cache_readahead+0x4e/0x5a [] filemap_nopage+0x140/0x30c [] __handle_mm_fault+0x1fb/0x9d8 [] do_page_fault+0x42b/0x7b3 [] autoremove_wake_function+0x0/0x2e [] up_write+0x9/0xb [] sys_mprotect+0x645/0x764 [] error_exit+0x0/0x84 Mem-info: Node 0 DMA per-cpu: CPU 0: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 CPU 1: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 CPU 2: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 CPU 3: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 Node 0 DMA32 per-cpu: CPU 0: Hot: hi: 186, btch: 31 usd: 30 Cold: hi: 62, btch: 15 usd: 53 CPU 1: Hot: hi: 186, btch: 31 usd: 2 Cold: hi: 62, btch: 15 usd: 60 CPU 2: Hot: hi: 186, btch: 31 usd: 30 Cold: hi: 62, btch: 15 usd: 14 CPU 3: Hot: hi: 186, btch: 31 usd: 25 Cold: hi: 62, btch: 15 usd: 26 Active:91 inactive:496247 dirty:0 writeback:0 unstable:0 free:3394 slab:9119 mapped:32 pagetables:637 Node 0 DMA free:8036kB min:24kB low:28kB high:36kB active:0kB inactive:1856kB present:9376kB pages_scanned:3328 all_unreclaimable? yes lowmem_reserve[]: 0 2003 2003 Node 0 DMA32 free:5540kB min:5712kB low:7140kB high:8568kB active:364kB inactive:1983300kB present:2052068kB pages_scanned:4631841 all_unreclaimable? yes lowmem_reserve[]: 0 0 0 Node 0 DMA: 1*4kB 0*8kB 0*16kB 1*32kB 1*64kB 0*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 8036kB Node 0 DMA32: 1*4kB 22*8kB 1*16kB 1*32kB 1*64kB 1*128kB 2*256kB 1*512kB 0*1024kB 0*2048kB 1*4096kB = 5540kB Swap cache: add 741070, delete 244674, find 84826/143212, race 680+239 Free swap = 1088576kB Total swap = 3140668kB Free swap: 1088576kB 524224 pages of RAM 9619 reserved pages 293 pages shared 496397 pages swap cached No available memory (MPOL_BIND): kill process 17869 (xfs_repair) score 0 or a child Killed process 17869 (xfs_repair) klogd invoked oom-killer: gfp_mask=0x200d2, order=0, oomkilladj=0 Call Trace: [] out_of_memory+0x70/0x25d [] __alloc_pages+0x22c/0x2b5 [] __pagevec_lru_add_active+0xce/0xde [] alloc_page_vma+0x71/0x76 [] read_swap_cache_async+0x45/0xd8 [] __handle_mm_fault+0x713/0x9d8 [] do_page_fault+0x42b/0x7b3 [] autoremove_wake_function+0x0/0x2e [] error_exit+0x0/0x84 ... 
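Two things in the dumps above go together: every kill message carries the (MPOL_BIND) tag, and in the final report Node 0 DMA32 shows free:5540kB against min:5712kB with all_unreclaimable set, i.e. the only zone the allocation was allowed to use is below its minimum watermark and the scanner has given up on reclaiming anything from it. As a rough userspace sketch of what a bound memory policy looks like (illustrative only, not taken from any of the processes in these logs; the node mask and allocation size are made up, and it assumes the numactl/libnuma development headers for set_mempolicy()):

/*
 * Minimal sketch: bind this task's page allocations to node 0, the way a
 * process running under a (MPOL_BIND) constrained policy would be.
 * Assumes libnuma; build with: gcc demo.c -lnuma
 */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        unsigned long nodemask = 1UL;   /* bit 0 set => node 0 only */
        size_t len = 64UL << 20;        /* 64MB, an arbitrary example size */
        char *buf;

        /*
         * Under MPOL_BIND the allocator may only use the listed nodes.
         * Once those zones fall below their min watermark and are marked
         * all_unreclaimable, there is no fallback, and the kernel reports
         * "No available memory (MPOL_BIND)" and invokes the OOM killer,
         * as in the logs above.
         */
        if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8) != 0) {
                perror("set_mempolicy");
                return EXIT_FAILURE;
        }

        buf = malloc(len);
        if (buf)
                memset(buf, 1, len);    /* touch the memory under the policy */
        free(buf);
        return 0;
}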
From owner-xfs@oss.sgi.com Mon Jan 22 17:13:21 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 17:13:31 -0800 (PST) X-Spam-oss-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp.osdl.org (smtp.osdl.org [65.172.181.24]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0N1DKqw019330 for ; Mon, 22 Jan 2007 17:13:21 -0800 Received: from shell0.pdx.osdl.net (fw.osdl.org [65.172.181.6]) by smtp.osdl.org (8.12.8/8.12.8) with ESMTP id l0N1C73U017729 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Mon, 22 Jan 2007 17:12:07 -0800 Received: from box (shell0.pdx.osdl.net [10.9.0.31]) by shell0.pdx.osdl.net (8.13.1/8.11.6) with SMTP id l0N1C6WW007622; Mon, 22 Jan 2007 17:12:06 -0800 Date: Mon, 22 Jan 2007 17:12:06 -0800 From: Andrew Morton To: Donald Douwsma Cc: Justin Piszcz , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 Message-Id: <20070122171206.fc2faf5f.akpm@osdl.org> In-Reply-To: <45B558B5.6080403@sgi.com> References: <20070122115703.97ed54f3.akpm@osdl.org> <45B558B5.6080403@sgi.com> X-Mailer: Sylpheed version 2.2.7 (GTK+ 2.8.17; x86_64-unknown-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.170 $ X-Scanned-By: MIMEDefang 2.36 X-archive-position: 10374 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@osdl.org Precedence: bulk X-list: xfs Content-Length: 2350 Lines: 52 On Tue, 23 Jan 2007 11:37:09 +1100 Donald Douwsma wrote: > Andrew Morton wrote: > >> On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz wrote: > >> Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke > >> the OOM killer and kill all of my processes? > > > > What's that? Software raid or hardware raid? If the latter, which driver? > > I've hit this using local disk while testing xfs built against 2.6.20-rc4 (SMP x86_64) > > dmesg follows, I'm not sure if anything in this is useful after the first event as our automated tests continued on > after the failure. This looks different. > ... > > Mem-info: > Node 0 DMA per-cpu: > CPU 0: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 > CPU 1: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 > CPU 2: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 > CPU 3: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0 > Node 0 DMA32 per-cpu: > CPU 0: Hot: hi: 186, btch: 31 usd: 31 Cold: hi: 62, btch: 15 usd: 53 > CPU 1: Hot: hi: 186, btch: 31 usd: 2 Cold: hi: 62, btch: 15 usd: 60 > CPU 2: Hot: hi: 186, btch: 31 usd: 20 Cold: hi: 62, btch: 15 usd: 47 > CPU 3: Hot: hi: 186, btch: 31 usd: 25 Cold: hi: 62, btch: 15 usd: 56 > Active:76 inactive:495856 dirty:0 writeback:0 unstable:0 free:3680 slab:9119 mapped:32 pagetables:637 No dirty pages, no pages under writeback. > Node 0 DMA free:8036kB min:24kB low:28kB high:36kB active:0kB inactive:1856kB present:9376kB pages_scanned:3296 > all_unreclaimable? yes > lowmem_reserve[]: 0 2003 2003 > Node 0 DMA32 free:6684kB min:5712kB low:7140kB high:8568kB active:304kB inactive:1981624kB present:2052068kB Inactive list is filled. > pages_scanned:4343329 all_unreclaimable? yes We scanned our guts out and decided that nothing was reclaimable. 
> No available memory (MPOL_BIND): kill process 3492 (hald) score 0 or a child > No available memory (MPOL_BIND): kill process 7914 (top) score 0 or a child > No available memory (MPOL_BIND): kill process 4166 (nscd) score 0 or a child > No available memory (MPOL_BIND): kill process 17869 (xfs_repair) score 0 or a child But in all cases a constrained memory policy was in use. From owner-xfs@oss.sgi.com Mon Jan 22 18:08:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 18:08:15 -0800 (PST) X-Spam-oss-Status: No, score=-2.2 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from mx1.suse.de (ns.suse.de [195.135.220.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0N286qw031101 for ; Mon, 22 Jan 2007 18:08:09 -0800 Received: from Relay2.suse.de (mail2.suse.de [195.135.221.8]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.suse.de (Postfix) with ESMTP id EDFCB12261; Tue, 23 Jan 2007 03:07:12 +0100 (CET) From: Neil Brown To: "Dan Williams" Date: Tue, 23 Jan 2007 13:06:40 +1100 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <17845.28080.887134.987438@notabene.brown> Cc: "Chuck Ebbert" , "Justin Piszcz" , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) In-Reply-To: message from Dan Williams on Monday January 22 References: <45B5261B.1050104@redhat.com> <17845.13256.284461.992275@notabene.brown> X-Mailer: VM 7.19 under Emacs 21.4.1 X-face: [Gw_3E*Gng}4rRrKRYotwlE?.2|**#s9D On 1/22/07, Neil Brown wrote: > > On Monday January 22, cebbert@redhat.com wrote: > > > Justin Piszcz wrote: > > > > My .config is attached, please let me know if any other information is > > > > needed and please CC (lkml) as I am not on the list, thanks! > > > > > > > > Running Kernel 2.6.19.2 on a MD RAID5 volume. Copying files over Samba to > > > > the RAID5 running XFS. > > > > > > > > Any idea what happened here? > > .... > > > > > > > Without digging too deeply, I'd say you've hit the same bug Sami Farin > > > and others > > > have reported starting with 2.6.19: pages mapped with kmap_atomic() > > > become unmapped > > > during memcpy() or similar operations. Try disabling preempt -- that > > > seems to be the > > > common factor. > > > > That is exactly the conclusion I had just come to (a kmap_atomic page > > must be being unmapped during memcpy). I wasn't aware that others had > > reported it - thanks for that. > > > > Turning off CONFIG_PREEMPT certainly seems like a good idea. > > > Coming from an ARM background I am not yet versed in the inner > workings of kmap_atomic, but if you have time for a question I am > curious as to why spin_lock(&sh->lock) is not sufficient pre-emption > protection for copy_data() in this case? > Presumably there is a bug somewhere. kmap_atomic itself calls inc_preempt_count so that preemption should be disabled at least until the kunmap_atomic is called. But apparently not. The symptoms point exactly to the page getting unmapped when it shouldn't. Until that bug is found and fixed, the work around of turning of CONFIG_PREEMPT seems to make sense. Of course it would be great if someone who can easily reproduce this bug could do the 'git bisect' thing to find out where the bug crept in..... 
NeilBrown From owner-xfs@oss.sgi.com Mon Jan 22 19:08:47 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 19:08:50 -0800 (PST) X-Spam-oss-Status: No, score=0.8 required=5.0 tests=AWL,BAYES_50, J_CHICKENPOX_43,SUBJ_ALL_CAPS autolearn=no version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0N38iqw010790 for ; Mon, 22 Jan 2007 19:08:46 -0800 Received: from pcbnaujok (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA28050; Tue, 23 Jan 2007 14:07:44 +1100 Message-Id: <200701230307.OAA28050@larry.melbourne.sgi.com> From: "Barry Naujok" To: "'Les Oxley'" Cc: Subject: RE: EXTENT BOUNDARIES Date: Tue, 23 Jan 2007 14:08:29 +1100 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Office Outlook, Build 11.0.6353 Thread-Index: Acc8TH8L1gsIRHEuRTG6DU0RpXquDACTa4xQ X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3028 In-Reply-To: <45B19BDD.2050808@sandeen.net> X-archive-position: 10376 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 2800 Lines: 85 > -----Original Message----- > From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com] > On Behalf Of Eric Sandeen > Sent: Saturday, 20 January 2007 3:35 PM > To: Les Oxley > Cc: xfs@oss.sgi.com > Subject: Re: EXTENT BOUNDARIES > > Les Oxley wrote: > > > > Hello, > > > > We are looking into running XFS on a 3TB FLASH MEMORY > MODULE. We have a > > question regarding the extent boundaries. > > See the attached PowerPoint drawing, xfs.ppt We are running Linux. > > Our media is 3 million contiguous 4KB blocks. We would > like to define > > an extent size of 1MB and this tracks the erasure block size > > of the flash memory, and that greatly improves perfomance. > We are trying > > to understand where XFS places the extent boundaries with > reference to > > the contiguous block sequence. > > Is this deterministic as indicated in the drawing ? That > is, are the > > extent boundaries on 256 block boundaries. > > > > Any help would be greatly appreciated. > > > > Les Oxley > > Ampex Corporation > > Redwood City > > California. > > extents by definition land on filesystem block boundaries, and can in > general be any number of filesystem blocks, starting & ending most > anywhere on the block device. > > If you wish to always allocate in 1m chunks, you might consider using > the xfs realtime subvolume, see the extsize description in > the mkfs.xfs > man page. I'm not sure how much buffered IO to the realtime > subvol has > been tested; pretty sure it works at this point, though the sgi guys > will correct me if I'm wrong... it's not exactly the normal mode of > operation. > > Using the realtime subvol, however, all your file -metadata- > will still > be allocated on the main data volume, in much smaller pieces. > > -Eric If you don't need to use 100% of the space for your data, you can give XFS a hint to align on a stripe unit if it's applicable. If you allocate 1MB chunks at a time (either via a write or prealloc) with an filesystem with sunit=1MB (2048 sectors if using 512 bytes sectors for mkfs.xfs command), it will align to the stripe unit where there is space available. Once the aligned space is full, it will allocate in the remaining space. 
Metadata such as inodes, directories, etc will not be nicely aligned. If your total write or prealloc is smaller than 512KB it will not nicely align, but find suitable space. I would recommending some experimentation to see if either of the above ideas are suitable for your purpose. For the sunit idea and 512 byte sector size, the following mkfs command should work: # mkfs.xfs -b 4096 -d sunit=2048,swidth=4096 To see it in action using dd: # dd if=/dev/zero of= bs=1048576 # xfs_bmap -v You should see the block range aligned to 2048 sectors. From owner-xfs@oss.sgi.com Mon Jan 22 19:27:57 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 19:28:02 -0800 (PST) X-Spam-oss-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from nf-out-0910.google.com (nf-out-0910.google.com [64.233.182.191]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0N3Rtqw014708 for ; Mon, 22 Jan 2007 19:27:57 -0800 Received: by nf-out-0910.google.com with SMTP id x30so87037nfb for ; Mon, 22 Jan 2007 19:27:02 -0800 (PST) DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=YyPyqkif8t9RDwskczse5LiMPC6QKh4KCunEEhev/m5UCY6L5HrffWLTZehiTS+g+69dBURTq5ETIaxomAk3IZaO/Z7eLSL6b7LTpIht17I1Lz5JK1bBBf98/M1UmgCo8w1SlEo0m3bpFLmYGIZD6/+3fGqcNwCg+FIN+cNCeM4= Received: by 10.82.138.6 with SMTP id l6mr6182903bud.1169516651781; Mon, 22 Jan 2007 17:44:11 -0800 (PST) Received: by 10.82.176.11 with HTTP; Mon, 22 Jan 2007 17:44:11 -0800 (PST) Message-ID: Date: Mon, 22 Jan 2007 18:44:11 -0700 From: "Dan Williams" To: "Neil Brown" Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) Cc: "Chuck Ebbert" , "Justin Piszcz" , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com In-Reply-To: <17845.13256.284461.992275@notabene.brown> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline References: <45B5261B.1050104@redhat.com> <17845.13256.284461.992275@notabene.brown> X-archive-position: 10377 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dan.j.williams@gmail.com Precedence: bulk X-list: xfs Content-Length: 1245 Lines: 36 On 1/22/07, Neil Brown wrote: > On Monday January 22, cebbert@redhat.com wrote: > > Justin Piszcz wrote: > > > My .config is attached, please let me know if any other information is > > > needed and please CC (lkml) as I am not on the list, thanks! > > > > > > Running Kernel 2.6.19.2 on a MD RAID5 volume. Copying files over Samba to > > > the RAID5 running XFS. > > > > > > Any idea what happened here? > .... > > > > > Without digging too deeply, I'd say you've hit the same bug Sami Farin > > and others > > have reported starting with 2.6.19: pages mapped with kmap_atomic() > > become unmapped > > during memcpy() or similar operations. Try disabling preempt -- that > > seems to be the > > common factor. > > That is exactly the conclusion I had just come to (a kmap_atomic page > must be being unmapped during memcpy). I wasn't aware that others had > reported it - thanks for that. > > Turning off CONFIG_PREEMPT certainly seems like a good idea. 
> Coming from an ARM background I am not yet versed in the inner workings of kmap_atomic, but if you have time for a question I am curious as to why spin_lock(&sh->lock) is not sufficient pre-emption protection for copy_data() in this case? > NeilBrown Regards, Dan From owner-xfs@oss.sgi.com Mon Jan 22 21:59:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 21:59:38 -0800 (PST) X-Spam-oss-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0N5xSqw014137 for ; Mon, 22 Jan 2007 21:59:30 -0800 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA01261; Tue, 23 Jan 2007 16:38:32 +1100 Received: by chook.melbourne.sgi.com (Postfix, from userid 1161) id D30BB58FF490; Tue, 23 Jan 2007 16:38:31 +1100 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 959693 - Thread stack size problem with parallized xfs_repair Message-Id: <20070123053831.D30BB58FF490@chook.melbourne.sgi.com> Date: Tue, 23 Jan 2007 16:38:31 +1100 (EST) From: bnaujok@sgi.com (Barry Naujok) X-archive-position: 10378 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs Content-Length: 538 Lines: 17 Fix xfs_repair dying with setting stackspace for threads Date: Tue Jan 23 16:37:51 AEDT 2007 Workarea: chook.melbourne.sgi.com:/home/bnaujok/isms/repair Inspected by: mvalluri@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:27983a xfsprogs/repair/threads.c - 1.2 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/repair/threads.c.diff?r1=text&tr1=1.2&r2=text&tr2=1.1&f=h - Fix xfs_repair dying setting stackspace for threads From owner-xfs@oss.sgi.com Mon Jan 22 22:18:34 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 22:18:39 -0800 (PST) X-Spam-oss-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0N6IVqw018338 for ; Mon, 22 Jan 2007 22:18:33 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA02263; Tue, 23 Jan 2007 17:17:33 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0N6HW7Y86832343; Tue, 23 Jan 2007 17:17:32 +1100 (AEDT) Received: (from bnaujok@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0N6HVYp101231640; Tue, 23 Jan 2007 17:17:31 +1100 (AEDT) Date: Tue, 23 Jan 2007 17:17:31 +1100 (AEDT) From: Barry Naujok Message-Id: <200701230617.l0N6HVYp101231640@snort.melbourne.sgi.com> To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: PARTIAL TAKE 960195 - Patch for attr and acl to correctly use libtool while crosscompiling X-archive-position: 10379 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@snort.melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 1160 Lines: 25 Fix cross-compile issues with 
libtool and compiler. Date: Tue Jan 23 17:16:34 AEDT 2007 Workarea: snort.melbourne.sgi.com:/home/bnaujok/isms/xfs-cmds Inspected by: Diego 'Flameeyes' Pettenò [flameeyes@gentoo.org] The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:27984a attr/VERSION - 1.67 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/attr/VERSION.diff?r1=text&tr1=1.67&r2=text&tr2=1.66&f=h attr/doc/CHANGES - 1.79 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/attr/doc/CHANGES.diff?r1=text&tr1=1.79&r2=text&tr2=1.78&f=h attr/include/builddefs.in - 1.32 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/attr/include/builddefs.in.diff?r1=text&tr1=1.32&r2=text&tr2=1.31&f=h attr/m4/package_utilies.m4 - 1.8 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/attr/m4/package_utilies.m4.diff?r1=text&tr1=1.8&r2=text&tr2=1.7&f=h attr/m4/package_globals.m4 - 1.6 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/attr/m4/package_globals.m4.diff?r1=text&tr1=1.6&r2=text&tr2=1.5&f=h - Fix cross-compile issues with libtool and compiler. From owner-xfs@oss.sgi.com Mon Jan 22 22:24:34 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 22 Jan 2007 22:24:39 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0N6OUqw019813 for ; Mon, 22 Jan 2007 22:24:33 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA02408; Tue, 23 Jan 2007 17:23:33 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0N6NW7Y101128069; Tue, 23 Jan 2007 17:23:32 +1100 (AEDT) Received: (from bnaujok@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0N6NVFd100598439; Tue, 23 Jan 2007 17:23:31 +1100 (AEDT) Date: Tue, 23 Jan 2007 17:23:31 +1100 (AEDT) From: Barry Naujok Message-Id: <200701230623.l0N6NVFd100598439@snort.melbourne.sgi.com> To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 960195 - Patch for attr and acl to correctly use libtool while crosscompiling X-archive-position: 10380 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@snort.melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 819 Lines: 21 Use libtool correctly when cross-compiling Date: Tue Jan 23 17:23:10 AEDT 2007 Workarea: snort.melbourne.sgi.com:/home/bnaujok/isms/xfs-cmds Inspected by: Diego 'Flameeyes' Pettenò [flameeyes@gentoo.org] The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:27985a acl/VERSION - 1.83 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/acl/VERSION.diff?r1=text&tr1=1.83&r2=text&tr2=1.82&f=h acl/doc/CHANGES - 1.93 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/acl/doc/CHANGES.diff?r1=text&tr1=1.93&r2=text&tr2=1.92&f=h acl/m4/package_attrdev.m4 - 1.6 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/acl/m4/package_attrdev.m4.diff?r1=text&tr1=1.6&r2=text&tr2=1.5&f=h - Use libtool correctly when cross-compiling From owner-xfs@oss.sgi.com Tue Jan 23 01:19:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 01:19:22 -0800 (PST) X-Spam-oss-Status: No, 
score=-2.3 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from spitz.ucw.cz (gprs189-60.eurotel.cz [160.218.189.60]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0N9JCqw029313 for ; Tue, 23 Jan 2007 01:19:15 -0800 Received: by spitz.ucw.cz (Postfix, from userid 0) id 877D62787F; Mon, 22 Jan 2007 13:37:36 +0000 (UTC) Date: Mon, 22 Jan 2007 13:37:35 +0000 From: Pavel Machek To: Justin Piszcz Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 Message-ID: <20070122133735.GB4493@ucw.cz> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.9i X-archive-position: 10381 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pavel@ucw.cz Precedence: bulk X-list: xfs Content-Length: 576 Lines: 17 On Sun 2007-01-21 14:27:34, Justin Piszcz wrote: > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke > the OOM killer and kill all of my processes? > > Doing this on a single disk 2.6.19.2 is OK, no issues. However, this > happens every time! > > Anything to try? Any other output needed? Can someone shed some light on > this situation? Is it highmem-related? Can you try it with mem=256M? Pavel -- (english) http://www.livejournal.com/~pavelmachek (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html From owner-xfs@oss.sgi.com Tue Jan 23 02:57:24 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 02:57:30 -0800 (PST) X-Spam-oss-Status: No, score=-2.6 required=5.0 tests=BAYES_00,SPF_HELO_FAIL autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66] (may be forged)) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NAvNqw022005 for ; Tue, 23 Jan 2007 02:57:24 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id D5AE61A00052F; Tue, 23 Jan 2007 05:56:28 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id CD05CA050ADA; Tue, 23 Jan 2007 05:56:28 -0500 (EST) Date: Tue, 23 Jan 2007 05:56:28 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Neil Brown cc: Chuck Ebbert , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) In-Reply-To: <17845.13256.284461.992275@notabene.brown> Message-ID: References: <45B5261B.1050104@redhat.com> <17845.13256.284461.992275@notabene.brown> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10382 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 1320 Lines: 41 On Tue, 23 Jan 2007, Neil Brown wrote: > On Monday January 22, cebbert@redhat.com wrote: > > Justin Piszcz wrote: > > > My .config is attached, please let me know if any other information is > > > needed and please CC (lkml) as I am not on the list, thanks! > > > > > > Running Kernel 2.6.19.2 on a MD RAID5 volume. Copying files over Samba to > > > the RAID5 running XFS. > > > > > > Any idea what happened here? > .... 
> > > > > Without digging too deeply, I'd say you've hit the same bug Sami Farin > > and others > > have reported starting with 2.6.19: pages mapped with kmap_atomic() > > become unmapped > > during memcpy() or similar operations. Try disabling preempt -- that > > seems to be the > > common factor. > > That is exactly the conclusion I had just come to (a kmap_atomic page > must be being unmapped during memcpy). I wasn't aware that others had > reported it - thanks for that. > > Turning off CONFIG_PREEMPT certainly seems like a good idea. > > NeilBrown > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Is this a bug that can or will be fixed or should I disable pre-emption on critical and/or server machines? Justin. From owner-xfs@oss.sgi.com Tue Jan 23 04:00:14 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 04:00:22 -0800 (PST) X-Spam-oss-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_05, SPF_HELO_FAIL autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66] (may be forged)) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NC0Dqw004266 for ; Tue, 23 Jan 2007 04:00:14 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 534A91A00052F; Tue, 23 Jan 2007 06:59:19 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 501E1A050ADA; Tue, 23 Jan 2007 06:59:19 -0500 (EST) Date: Tue, 23 Jan 2007 06:59:19 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Michael Tokarev cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) In-Reply-To: <45B5ECAA.6000100@tls.msk.ru> Message-ID: References: <45B5261B.1050104@redhat.com> <17845.13256.284461.992275@notabene.brown> <45B5ECAA.6000100@tls.msk.ru> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10383 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 794 Lines: 30 On Tue, 23 Jan 2007, Michael Tokarev wrote: > Justin Piszcz wrote: > [] > > Is this a bug that can or will be fixed or should I disable pre-emption on > > critical and/or server machines? > > Disabling pre-emption on critical and/or server machines seems to be a good > idea in the first place. IMHO anyway.. ;) > > /mjt > So for a server system, the following options should be as follows: Preemption Model (No Forced Preemption (Server)) ---> [ ] Preempt The Big Kernel Lock Also, my mobo has HPET timer support in the BIOS, is there any reason to use this on a server? I do run X on it via the Intel 965 chipset video. So bottom line is make sure not to use preemption on servers or else you will get weird spinlock/deadlocks on RAID devices--GOOD To know! Thanks! Justin. 
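To put the kmap_atomic() discussion in this thread in concrete terms: kmap_atomic() maps a (possibly highmem) page into a per-CPU fixmap slot and raises the preempt count, so the mapping is only guaranteed to survive until the matching kunmap_atomic(), and nothing in between may sleep. Below is a minimal sketch of that pattern against the 2.6.19-era API with explicit KM_* slots; it illustrates the interface only and is not the actual copy_data() from drivers/md/raid5.c:

#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Illustrative fragment only.  kmap_atomic() bumps the preempt count, so
 * the per-CPU KM_USER0/KM_USER1 mappings must stay valid until the
 * matching kunmap_atomic(); the code in between must not sleep.
 */
static void copy_one_page(struct page *dst, struct page *src)
{
        void *d = kmap_atomic(dst, KM_USER0);   /* preemption disabled here */
        void *s = kmap_atomic(src, KM_USER1);

        memcpy(d, s, PAGE_SIZE);                /* mappings must not go away */

        kunmap_atomic(s, KM_USER1);
        kunmap_atomic(d, KM_USER0);             /* preemption enabled again */
}

If the mapping is torn down or the slot reused in the middle of the memcpy(), the copy reads from or lands in an address that no longer points at the intended page, which matches the symptom reported in this thread; turning off CONFIG_PREEMPT is a workaround for whatever is breaking that guarantee, not an explanation of it.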
From owner-xfs@oss.sgi.com Tue Jan 23 04:49:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 04:49:09 -0800 (PST) X-Spam-oss-Status: No, score=-1.6 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from hobbit.corpit.ru (hobbit.corpit.ru [81.13.94.6]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NCn2qw020935 for ; Tue, 23 Jan 2007 04:49:03 -0800 Received: from paltus.tls.msk.ru (paltus.tls.msk.ru [192.168.1.1]) by hobbit.corpit.ru (Postfix) with ESMTP id 52D9F356A5; Tue, 23 Jan 2007 15:48:06 +0300 (MSK) (envelope-from mjt@tls.msk.ru) Received: from [192.168.1.200] (mjt.ppp.tls.msk.ru [192.168.1.200]) by paltus.tls.msk.ru (Postfix) with ESMTP id B8A7D7F8B; Tue, 23 Jan 2007 15:48:04 +0300 (MSK) (envelope-from mjt@tls.msk.ru) Message-ID: <45B60403.1060201@tls.msk.ru> Date: Tue, 23 Jan 2007 15:48:03 +0300 From: Michael Tokarev Organization: Telecom Service, JSC User-Agent: Icedove 1.5.0.8 (X11/20061128) MIME-Version: 1.0 To: Justin Piszcz CC: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) References: <45B5261B.1050104@redhat.com> <17845.13256.284461.992275@notabene.brown> <45B5ECAA.6000100@tls.msk.ru> In-Reply-To: X-Enigmail-Version: 0.94.1.0 OpenPGP: id=4F9CF57E Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10384 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mjt@tls.msk.ru Precedence: bulk X-list: xfs Content-Length: 817 Lines: 21 Justin Piszcz wrote: > > On Tue, 23 Jan 2007, Michael Tokarev wrote: > >> Disabling pre-emption on critical and/or server machines seems to be a good >> idea in the first place. IMHO anyway.. ;) > > So bottom line is make sure not to use preemption on servers or else you > will get weird spinlock/deadlocks on RAID devices--GOOD To know! This is not a reason. The reason is that preemption usually works worse on servers, esp. high-loaded servers - the more often you interrupt (kernel) work, the more needless context switches you'll have, and the slower the whole thing works. Another point is that with preemption enabled, we have more chances to hit one or another bug somewhere. Those bugs should be found and fixed for sure, but important servers/data usually aren't the place for bughunting.
/mjt From owner-xfs@oss.sgi.com Tue Jan 23 05:06:06 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 05:06:12 -0800 (PST) X-Spam-oss-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50, J_CHICKENPOX_14,J_CHICKENPOX_43,J_CHICKENPOX_44,J_CHICKENPOX_45, J_CHICKENPOX_46,J_CHICKENPOX_47,J_CHICKENPOX_48 autolearn=no version=3.2.0-pre1-r497472 Received: from gw02.mail.saunalahti.fi (gw02.mail.saunalahti.fi [195.197.172.116]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0ND63qw025207 for ; Tue, 23 Jan 2007 05:06:05 -0800 Received: from mrp1.mail.saunalahti.fi (mrp1.mail.saunalahti.fi [62.142.5.30]) by gw02.mail.saunalahti.fi (Postfix) with ESMTP id 6D77E139135; Tue, 23 Jan 2007 15:05:09 +0200 (EET) Received: from [192.168.0.151] (unknown [62.142.247.178]) (using SSLv3 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mrp1.mail.saunalahti.fi (Postfix) with ESMTP id 1F57033C002; Tue, 23 Jan 2007 15:05:06 +0200 (EET) Subject: RE: xfs_repair: corrupt inode error From: Jyrki Muukkonen To: Barry Naujok Cc: xfs@oss.sgi.com In-Reply-To: <200701142346.KAA16770@larry.melbourne.sgi.com> References: <200701142346.KAA16770@larry.melbourne.sgi.com> Content-Type: text/plain Date: Tue, 23 Jan 2007 15:05:05 +0200 Message-Id: <1169557505.6383.23.camel@mustis> Mime-Version: 1.0 X-Mailer: Evolution 2.8.1 Content-Transfer-Encoding: 7bit X-archive-position: 10386 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jyrki.muukkonen@futurice.fi Precedence: bulk X-list: xfs Content-Length: 4670 Lines: 161 On ma, 2007-01-15 at 10:52 +1100, Barry Naujok wrote: > It appears the inode is corrupted. The size appears to be wrong, and > there are no blocks allocated to the inode. > > Also, looking at the u.bmbt info, this is most definitely wrong. > > To get xfs_repair to wipe the inode and continue, do the following > commands: > > # xfs_db -x > > xfs_db> inode 4151889543 > xfs_db> write core.mode 0 > xfs_db> quit > > # xfs_repair > > > -----Original Message----- > > From: Jyrki Muukkonen [mailto:jyrki.muukkonen@futurice.fi] > > Sent: Friday, 12 January 2007 7:48 PM > > To: Barry Naujok > > Cc: xfs@oss.sgi.com > > Subject: RE: xfs_repair: corrupt inode error > > > > On pe, 2007-01-12 at 12:25 +1100, Barry Naujok wrote: > > > > > > > -----Original Message----- > > > > From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com] > > > > On Behalf Of Jyrki Muukkonen > > > > Sent: Tuesday, 9 January 2007 3:07 AM > > > > To: xfs@oss.sgi.com > > > > Subject: Re: xfs_repair: corrupt inode error > > > > > > > > On ma, 2007-01-08 at 12:23 +0200, Jyrki Muukkonen wrote: > > > > > Got this error in phase 6 when running xfs_repair > > 2.8.18 on ~1.2TB > > > > > partition over the weekend (it took around 60 hours to > > get to this > > > > > point :). On earlier versions xfs_repair aborted after > > > > ~15-20 hours with > > > > > "invalid inode type" error. > > > > > > > > > > ... > > > > > disconnected inode 4151889519, moving to lost+found > > > > > disconnected inode 4151889543, moving to lost+found > > > > > corrupt inode 4151889543 (btree). This is a bug. > > > > > Please report it to xfs@oss.sgi.com. > > > > > cache_node_purge: refcount was 1, not zero (node=0x132650d0) > > > > > > > > > > fatal error -- 117 - couldn't iget disconnected inode > > > > > > > > > > I've got the full log (both stderr and stdout) and can put that > > > > > somewhere if needed. 
It's about 80MB uncompressed and around 7MB > > > > > gzipped. Running the xfs_repair without multithreading and > > > > with -v might > > > > > also be possible if that's going to help. > > > > > > > > > > > > > Some more information: > > > > - running 64bit Ubuntu Edgy 2.6.17-10-generic > > > > - one processor so xfs_repair was run with two threads > > > > - 1.5GB RAM, 3GB swap (at some point the xfs_repair > > process took a bit > > > > over 2GB) > > > > - filesystem is ~1.14TB with about ~1.4 million files > > > > - most of the files are in subdirectories by date > > > > (/something/YYYY/MM/DD/), ~2-10 thousand per day > > > > > > > > So is there a way to skip / ignore this error? I could do > > some testing > > > > with different command line options and small code > > patches if that's > > > > going to help solve the bug. > > > > > > > > Most of the files have been recovered from backups, raw disk > > > > images etc. > > > > but unfortunately some are still missing. > > > > > > > > -- > > > > Jyrki Muukkonen > > > > Futurice Oy > > > > jyrki.muukkonen@futurice.fi > > > > +358 41 501 7322 > > > > > > Would it be possible to run xfs_db and print out the inode above: > > > > > > # xfs_db > > > xfs_db> inode 4151889543 > > > xfs_db> print > > > > > > and email the output back? > > > > > > Regards, > > > Barry. > > > > > > > > > > OK, here it is: > > > > xfs_db> inode 4151889543 > > xfs_db> print > > core.magic = 0x494e > > core.mode = 0102672 > > core.version = 1 > > core.format = 3 (btree) > > core.nlinkv1 = 2308 > > core.uid = 721387 > > core.gid = 475570 > > core.flushiter = 7725 > > core.atime.sec = Sun Mar 16 17:15:13 2008 > > core.atime.nsec = 000199174 > > core.mtime.sec = Wed Dec 28 01:58:50 2011 > > core.mtime.nsec = 016845061 > > core.ctime.sec = Tue Aug 22 19:57:39 2006 > > core.ctime.nsec = 148761321 > > core.size = 1880085426117611906 > > core.nblocks = 0 > > core.extsize = 0 > > core.nextents = 0 > > core.naextents = 0 > > core.forkoff = 0 > > core.aformat = 2 (extents) > > core.dmevmask = 0x1010905 > > core.dmstate = 11 > > core.newrtbm = 0 > > core.prealloc = 1 > > core.realtime = 0 > > core.immutable = 0 > > core.append = 0 > > core.sync = 0 > > core.noatime = 0 > > core.nodump = 0 > > core.rtinherit = 0 > > core.projinherit = 1 > > core.nosymlinks = 0 > > core.extsz = 0 > > core.extszinherit = 0 > > core.nodefrag = 0 > > core.gen = 51072068 > > next_unlinked = null > > u.bmbt.level = 18550 > > u.bmbt.numrecs = 0 > > > > > > > > -- > > Jyrki Muukkonen > > Futurice Oy > > jyrki.muukkonen@futurice.fi > > +358 41 501 7322 > > > Thanks, setting core.mode to 0 on that particular inode helped. 
-- Jyrki Muukkonen Futurice Oy jyrki.muukkonen@futurice.fi +358 41 501 7322 From owner-xfs@oss.sgi.com Tue Jan 23 05:47:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 05:47:49 -0800 (PST) X-Spam-oss-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00, SPF_HELO_FAIL autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66] (may be forged)) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NDlgqw001511 for ; Tue, 23 Jan 2007 05:47:43 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id CDE381A00052F; Tue, 23 Jan 2007 08:46:48 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id CA18EA050ADA; Tue, 23 Jan 2007 08:46:48 -0500 (EST) Date: Tue, 23 Jan 2007 08:46:48 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Michael Tokarev cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) In-Reply-To: <45B60403.1060201@tls.msk.ru> Message-ID: References: <45B5261B.1050104@redhat.com> <17845.13256.284461.992275@notabene.brown> <45B5ECAA.6000100@tls.msk.ru> <45B60403.1060201@tls.msk.ru> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10387 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 1146 Lines: 34 On Tue, 23 Jan 2007, Michael Tokarev wrote: > Justin Piszcz wrote: > > > > On Tue, 23 Jan 2007, Michael Tokarev wrote: > > > >> Disabling pre-emption on critical and/or server machines seems to be a good > >> idea in the first place. IMHO anyway.. ;) > > > > So bottom line is make sure not to use preemption on servers or else you > > will get weird spinlock/deadlocks on RAID devices--GOOD To know! > > This is not a reason. The reason is that preemption usually works worse > on servers, esp. high-loaded servers - the more often you interrupt a > (kernel) work, the more nedleess context switches you'll have, and the > more slow the whole thing works. > > Another point is that with preemption enabled, we have more chances to > hit one or another bug somewhere. Those bugs should be found and fixed > for sure, but important servers/data isn't a place usually for bughunting. > > /mjt > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Thanks for the update/info. Justin. From owner-xfs@oss.sgi.com Tue Jan 23 07:35:46 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 07:35:52 -0800 (PST) X-Spam-oss-Status: No, score=0.5 required=5.0 tests=AWL,BAYES_60,RCVD_BAD_ID autolearn=no version=3.2.0-pre1-r497472 Received: from evaldomino.Falconstor.com (mail1.falconstor.com [216.223.47.230]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NFZiqw021667 for ; Tue, 23 Jan 2007 07:35:45 -0800 Received: from [10.3.4.132] ([10.3.4.132]) by falconstormail.falconstor.net (Lotus Domino Release 5.0.11) with ESMTP id 2007012310105984:2146 ; Tue, 23 Jan 2007 10:10:59 -0500 Message-ID: <45B6277F.20506@falconstor.com> Date: Tue, 23 Jan 2007 10:19:27 -0500 From: "Geir A. Myrestrand" Reply-To: geir.myrestrand@falconstor.com Organization: FalconStor Software, Inc. 
User-Agent: Thunderbird 1.5.0.9 (Windows/20061207) MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Will xfs_growfs succeed on a full file system? X-MIMETrack: Itemize by SMTP Server on FalconstorMail/FalconStor(Release 5.0.11 |July 24, 2002) at 01/23/2007 10:11:00 AM, Serialize by Router on evaldomino/FalconStor(Release 5.0.11 |July 24, 2002) at 01/23/2007 10:35:49 AM, Serialize complete at 01/23/2007 10:35:49 AM Content-Transfer-Encoding: 7bit Content-Type: text/plain; charset=ISO-8859-1; format=flowed X-archive-position: 10388 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: geir.myrestrand@falconstor.com Precedence: bulk X-list: xfs Content-Length: 490 Lines: 17 Does xfs_growfs depend on some space left on the file system in order to be able to grow it? I have a colleague who ran into an issue where a file system resize failed. The file system is 100% full. Aside from analyzing what happened in his case, should XFS be able to grow a file system that is 100% full? The device has already been expanded, it is the XFS file system that fails to resize. I just wonder if that is by design, or whether it is an issue. -- Geir A. Myrestrand From owner-xfs@oss.sgi.com Tue Jan 23 07:44:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 07:44:36 -0800 (PST) X-Spam-oss-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=ham version=3.2.0-pre1-r497472 Received: from wx-out-0506.google.com (wx-out-0506.google.com [66.249.82.238]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NFiUqw023519 for ; Tue, 23 Jan 2007 07:44:31 -0800 Received: by wx-out-0506.google.com with SMTP id t4so1560680wxc for ; Tue, 23 Jan 2007 07:43:36 -0800 (PST) DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=Ng242LHfpHhFJ+0B3HMI/HVT3s04lo+TkvcHec9+9T8pQZnIHlJl9RwTf0CT6v57VhrhyBJ8JNaBYYLTUzuFaZSXNh1IUb0d77xyyZPMk+C4zyGvSQe6QctiMDwz0QXvMFhyiFnRxxCeZ41W12oT/j93RrCUQ08WNfARVi2njXM= Received: by 10.70.130.8 with SMTP id c8mr13296665wxd.1169565421405; Tue, 23 Jan 2007 07:17:01 -0800 (PST) Received: by 10.70.22.9 with HTTP; Tue, 23 Jan 2007 07:16:59 -0800 (PST) Message-ID: Date: Tue, 23 Jan 2007 16:16:59 +0100 From: "Peter Gervai" To: xfs@oss.sgi.com Subject: how to sync / commit data to disk? MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-archive-position: 10389 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: grinapo@gmail.com Precedence: bulk X-list: xfs Content-Length: 631 Lines: 27 Hello, [Tried to search archieves, found nothing, probably my keywords are bad. :-)] What is the recommended way to make sure that a file is written physically to the disk? (apart from the cache of the disk.) This problem seem to have arisen in grub bootloader under Debian linux (and most probably everywhere else): it must be sure that the copied files are there, and can be addressed by C/H/S and modified there, at the given sector address. My educated guess would be xfs_freeze -f sync xfs_freeze -u but I give a large chance to be wrong about it. Ideas? Please cc on me if possible. Thanks. 
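For the first half of the question above, getting a file's contents out to the device, the portable tool is fsync(2) on an open descriptor (or a global sync); what fsync() does not promise is anything about which physical blocks the filesystem chose, which is the part a bootloader that patches sector addresses actually depends on. A minimal userspace sketch of the write-then-fsync sequence, with a made-up file name, just to show the shape of it:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Write a buffer to 'path' and push it out to the device.  fsync()
 * guarantees the file's data and metadata have been handed to stable
 * storage; it says nothing about where the filesystem placed the blocks,
 * so it does not by itself solve the bootloader's sector-address problem.
 */
static int write_durably(const char *path, const char *buf, size_t len)
{
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
                return -1;

        while (len > 0) {
                ssize_t n = write(fd, buf, len);
                if (n < 0) {
                        if (errno == EINTR)
                                continue;
                        close(fd);
                        return -1;
                }
                buf += n;
                len -= n;
        }

        if (fsync(fd) < 0) {
                close(fd);
                return -1;
        }
        return close(fd);
}

int main(void)
{
        static const char msg[] = "example payload\n";

        /* "example.dat" is a made-up name, used here for illustration only. */
        if (write_durably("example.dat", msg, sizeof(msg) - 1) < 0) {
                perror("write_durably");
                return EXIT_FAILURE;
        }
        return 0;
}

The xfs_freeze -f / xfs_freeze -u pair discussed in the replies that follow is about quiescing the whole filesystem so its on-disk image is self-consistent, which is a separate concern from flushing one file's data.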
-- byte-byte, grin From owner-xfs@oss.sgi.com Tue Jan 23 08:09:12 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 08:09:22 -0800 (PST) X-Spam-oss-Status: No, score=-1.4 required=5.0 tests=AWL,BAYES_20, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NG9Bqw028914 for ; Tue, 23 Jan 2007 08:09:12 -0800 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id l0NG8Etj002753; Tue, 23 Jan 2007 11:08:14 -0500 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0NG88Qi000369; Tue, 23 Jan 2007 11:08:08 -0500 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0NG87Nm008728; Tue, 23 Jan 2007 11:08:08 -0500 Message-ID: <45B632F6.50705@sandeen.net> Date: Tue, 23 Jan 2007 10:08:22 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (X11/20061219) MIME-Version: 1.0 To: Peter Gervai CC: xfs@oss.sgi.com Subject: Re: how to sync / commit data to disk? References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10390 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 1397 Lines: 41 Peter Gervai wrote: > Hello, > > [Tried to search archieves, found nothing, probably my keywords are bad. :-)] > > What is the recommended way to make sure that a file is written > physically to the disk? (apart from the cache of the disk.) > > This problem seem to have arisen in grub bootloader under Debian linux > (and most probably everywhere else): it must be sure that the copied > files are there, and can be addressed by C/H/S and modified there, at > the given sector address. > > My educated guess would be > xfs_freeze -f > sync > xfs_freeze -u That's one hack that has been proposed, and may help. Another issue that I've seen with grub is that it seems to like to write directly to the block device WHILE THE FILESYSTEM IS MOUNTED. This is very bad, and causes ext3 grief too. grub seems to think that it can just call "sync" and have everything be happy, but esp. when it's doing reads & writes via both block dev & filesystem, stuff is so out of whack syncs won't save you. I'm not sure how you're invoking grub, but we found that manually specifying --stage2, i.e. install --stage2=/boot/grub/stage2 ... at least caused it to leave the block device alone while the fs is mounted, rather than trying to write the underlying bdev... that was obvious, no? ;-) You could verify this by stracing your grub command, and see what it is doing with the block device. 
-Eric From owner-xfs@oss.sgi.com Tue Jan 23 08:40:26 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 08:40:37 -0800 (PST) X-Spam-oss-Status: No, score=0.1 required=5.0 tests=AWL,BAYES_50,RCVD_BAD_ID autolearn=no version=3.2.0-pre1-r497472 Received: from evaldomino.Falconstor.com (mail1.falconstor.com [216.223.47.230]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NGePqw006501 for ; Tue, 23 Jan 2007 08:40:26 -0800 Received: from [10.3.4.132] ([10.3.4.132]) by falconstormail.falconstor.net (Lotus Domino Release 5.0.11) with ESMTP id 2007012310494791:2166 ; Tue, 23 Jan 2007 10:49:47 -0500 Message-ID: <45B63097.7020504@falconstor.com> Date: Tue, 23 Jan 2007 10:58:15 -0500 From: "Geir A. Myrestrand" Reply-To: geir.myrestrand@falconstor.com Organization: FalconStor Software, Inc. User-Agent: Thunderbird 1.5.0.9 (Windows/20061207) MIME-Version: 1.0 To: xfs@oss.sgi.com CC: Peter Gervai Subject: Re: how to sync / commit data to disk? References: In-Reply-To: X-MIMETrack: Itemize by SMTP Server on FalconstorMail/FalconStor(Release 5.0.11 |July 24, 2002) at 01/23/2007 10:49:47 AM, Serialize by Router on evaldomino/FalconStor(Release 5.0.11 |July 24, 2002) at 01/23/2007 11:40:30 AM, Serialize complete at 01/23/2007 11:40:30 AM Content-Transfer-Encoding: 7bit Content-Type: text/plain; charset=ISO-8859-1; format=flowed X-archive-position: 10391 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: geir.myrestrand@falconstor.com Precedence: bulk X-list: xfs Content-Length: 1063 Lines: 37 Peter Gervai wrote: > Hello, > > [Tried to search archieves, found nothing, probably my keywords are bad. > :-)] > > What is the recommended way to make sure that a file is written > physically to the disk? (apart from the cache of the disk.) > > This problem seem to have arisen in grub bootloader under Debian linux > (and most probably everywhere else): it must be sure that the copied > files are there, and can be addressed by C/H/S and modified there, at > the given sector address. > > My educated guess would be > xfs_freeze -f > sync > xfs_freeze -u > > but I give a large chance to be wrong about it. > > Ideas? > > Please cc on me if possible. Thanks. Call the sync before you freeze the file system, not after. You can't write to the file system when it is frozen, so it makes no sense to call sync after a freeze. I don't think you have any control of whether the data is written physically to the disk or is still in the disk(s) buffer, the buffer you can flush is on the software side. Call sync and freeze. -- Geir A. 
Myrestrand From owner-xfs@oss.sgi.com Tue Jan 23 08:42:23 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 08:42:27 -0800 (PST) X-Spam-oss-Status: No, score=-1.0 required=5.0 tests=AWL,BAYES_50, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NGgMqw007076 for ; Tue, 23 Jan 2007 08:42:23 -0800 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id l0NGepx5021705; Tue, 23 Jan 2007 11:40:51 -0500 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0NGep17011991; Tue, 23 Jan 2007 11:40:51 -0500 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0NGeobX012927; Tue, 23 Jan 2007 11:40:50 -0500 Message-ID: <45B63AA1.8010504@sandeen.net> Date: Tue, 23 Jan 2007 10:41:05 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (X11/20061219) MIME-Version: 1.0 To: geir.myrestrand@falconstor.com CC: linux-xfs@oss.sgi.com Subject: Re: Will xfs_growfs succeed on a full file system? References: <45B6277F.20506@falconstor.com> In-Reply-To: <45B6277F.20506@falconstor.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10392 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 818 Lines: 24 Geir A. Myrestrand wrote: > Does xfs_growfs depend on some space left on the file system in order to > be able to grow it? > > I have a colleague who ran into an issue where a file system resize > failed. The file system is 100% full. > > Aside from analyzing what happened in his case, should XFS be able to > grow a file system that is 100% full? > > The device has already been expanded, it is the XFS file system that > fails to resize. I just wonder if that is by design, or whether it is an > issue. > Off the top of my head, I think it should work ok even if full, although I could be (and apparently I am) wrong here. How exactly did the growfs fail? I actually wasn't able to completely fill my filesystem, got stuck at 20k left. :) but growing that from 50M to 100M worked fine for me. 
-Eric From owner-xfs@oss.sgi.com Tue Jan 23 08:51:49 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 08:51:57 -0800 (PST) X-Spam-oss-Status: No, score=-0.6 required=5.0 tests=AWL,BAYES_50 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp112.sbc.mail.mud.yahoo.com (smtp112.sbc.mail.mud.yahoo.com [68.142.198.211]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0NGpkqw009136 for ; Tue, 23 Jan 2007 08:51:49 -0800 Received: (qmail 85472 invoked from network); 23 Jan 2007 16:50:52 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@24.5.75.45 with login) by smtp112.sbc.mail.mud.yahoo.com with SMTP; 23 Jan 2007 16:50:52 -0000 X-YMail-OSG: Dep4N5cVM1kPtL0L8RfdjBIojBMllsMlobqZVZp97J2_4iFnj_dPoi9jnQShg.Hhkfx3qvHt1x3xehHFIYyrd5fIzFdVRtZrV.S7G2pMA263I8USvuPbELjXOQIS2V8- Received: by tuatara.stupidest.org (Postfix, from userid 10000) id F0DE91826121; Tue, 23 Jan 2007 08:50:50 -0800 (PST) Date: Tue, 23 Jan 2007 08:50:50 -0800 From: Chris Wedgwood To: Peter Gervai Cc: xfs@oss.sgi.com Subject: Re: how to sync / commit data to disk? Message-ID: <20070123165050.GA28720@tuatara.stupidest.org> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-archive-position: 10393 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs Content-Length: 1070 Lines: 30 On Tue, Jan 23, 2007 at 04:16:59PM +0100, Peter Gervai wrote: > This problem seem to have arisen in grub bootloader under Debian > linux (and most probably everywhere else): it must be sure that the > copied files are there, and can be addressed by C/H/S and modified > there, at the given sector address. grub is broken this comes up all the time, there are various work-arounds but it doesn't change the fact that GRUB IS BROKEN it would be nice if someone would just address it from that end > xfs_freeze -f > sync > xfs_freeze -u sync before freeze (actually, I'm not sure a sync there is necessary but it can't hurt) wrt to grub, i thought it did this for xfs anyhow? 
i suggested doing this a couple of years back and i thought that was what it was doing now in some versions of grub (afaik vendors like red hat never took that change, the last conversation i had about this ended up being an argument about whether fsync should place the data in it's final location on disk or not (nothing in the specs says it should) so i gave up and dropped the issue) From owner-xfs@oss.sgi.com Tue Jan 23 09:03:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 09:03:39 -0800 (PST) X-Spam-oss-Status: No, score=-0.6 required=5.0 tests=AWL,BAYES_50,HTML_MESSAGE autolearn=ham version=3.2.0-pre1-r497472 Received: from gaimboi.tmr.com (mail.tmr.com [64.65.253.246]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NH3Oqw011440 for ; Tue, 23 Jan 2007 09:03:27 -0800 Received: from [127.0.0.1] (gaimboi.tmr.com [127.0.0.1]) by gaimboi.tmr.com (8.12.8/8.12.8) with ESMTP id l0NH5iHT005815; Tue, 23 Jan 2007 12:05:46 -0500 Message-ID: <45B64068.9020500@tmr.com> Date: Tue, 23 Jan 2007 12:05:44 -0500 From: Bill Davidsen Organization: TMR Associates Inc, Schenectady NY User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.8) Gecko/20061105 SeaMonkey/1.0.6 MIME-Version: 1.0 To: Justin Piszcz CC: Michael Tokarev , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) References: <45B5261B.1050104@redhat.com> <17845.13256.284461.992275@notabene.brown> <45B5ECAA.6000100@tls.msk.ru> In-Reply-To: Content-Type: text/plain Content-Disposition: inline Content-Transfer-Encoding: 7bit X-archive-position: 10394 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: davidsen@tmr.com Precedence: bulk X-list: xfs Content-Length: 1881 Lines: 59 Justin Piszcz wrote: > On Tue, 23 Jan 2007, Michael Tokarev wrote: > > >> Justin Piszcz wrote: >> [] >> >>> Is this a bug that can or will be fixed or should I disable pre-emption on >>> critical and/or server machines? >>> >> Disabling pre-emption on critical and/or server machines seems to be a good >> idea in the first place. IMHO anyway.. ;) >> >> /mjt >> >> > > So for a server system, the following options should be as follows: > > Preemption Model (No Forced Preemption (Server)) ---> > [ ] Preempt The Big Kernel Lock > > Also, my mobo has HPET timer support in the BIOS, is there any reason to > use this on a server? I do run X on it via the Intel 965 chipset video. > > So bottom line is make sure not to use preemption on servers or else you > will get weird spinlock/deadlocks on RAID devices--GOOD To know! I should actually think it's BAD to know, it has nothing to do with servers, either PREEMPT works safely or it doesn't, like being pregnant there are no grey areas. Justin Piszcz wrote: This is not a reason. The reason is that preemption usually works worse on servers, esp. high-loaded servers - the more often you interrupt a (kernel) work, the more nedleess context switches you'll have, and the more slow the whole thing works. Another point is that with preemption enabled, we have more chances to hit one or another bug somewhere. Those bugs should be found and fixed for sure, but important servers/data isn't a place usually for bughunting. Unfortunately bugs, like big horn sheep, must be hunted where they can be found, however inconvenient that may be. I am curious to know if this applies to voluntary preempt as well. 
-- bill davidsen CTO TMR Associates, Inc Doing interesting things with small computers since 1979 [[HTML alternate version deleted]] From owner-xfs@oss.sgi.com Tue Jan 23 09:22:57 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 09:23:05 -0800 (PST) X-Spam-oss-Status: No, score=-0.7 required=5.0 tests=AWL,BAYES_20, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from astra.simleu.ro (astra.simleu.ro [80.97.18.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NHMuqw016497 for ; Tue, 23 Jan 2007 09:22:57 -0800 Received: from teal.hq.k1024.org (84-75-115-193.dclient.hispeed.ch [84.75.115.193]) by astra.simleu.ro (Postfix) with ESMTP id 218755E; Tue, 23 Jan 2007 18:58:07 +0200 (EET) Received: by teal.hq.k1024.org (Postfix, from userid 4004) id 4F4D740A0AF; Tue, 23 Jan 2007 17:58:03 +0100 (CET) Date: Tue, 23 Jan 2007 17:58:03 +0100 From: Iustin Pop To: Peter Gervai Cc: xfs@oss.sgi.com Subject: Re: how to sync / commit data to disk? Message-ID: <20070123165803.GA17430@teal.hq.k1024.org> Mail-Followup-To: Peter Gervai , xfs@oss.sgi.com References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Linux: This message was written on Linux X-Header: /usr/include gives great headers User-Agent: Mutt/1.5.13 (2006-08-11) X-archive-position: 10395 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: iusty@k1024.org Precedence: bulk X-list: xfs Content-Length: 429 Lines: 17 On Tue, Jan 23, 2007 at 04:16:59PM +0100, Peter Gervai wrote: > My educated guess would be > xfs_freeze -f > sync > xfs_freeze -u > > but I give a large chance to be wrong about it. > > Ideas? I usually unmount the /boot partition if I need to reinstall grub (which is a rare event). Since nowadays klogd doesn't keep /boot/System.map open anymore, there is no reason not to do an umount/grub install/mount sequence. Iustin From owner-xfs@oss.sgi.com Tue Jan 23 09:45:43 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 09:45:52 -0800 (PST) X-Spam-oss-Status: No, score=0.1 required=5.0 tests=AWL,BAYES_50,RCVD_BAD_ID autolearn=no version=3.2.0-pre1-r497472 Received: from evaldomino.Falconstor.com (mail1.falconstor.com [216.223.47.230]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NHjgqw021570 for ; Tue, 23 Jan 2007 09:45:43 -0800 Received: from [10.3.4.132] ([10.3.4.132]) by falconstormail.falconstor.net (Lotus Domino Release 5.0.11) with ESMTP id 2007012312103270:2193 ; Tue, 23 Jan 2007 12:10:32 -0500 Message-ID: <45B64383.4030603@falconstor.com> Date: Tue, 23 Jan 2007 12:18:59 -0500 From: "Geir A. Myrestrand" Reply-To: geir.myrestrand@falconstor.com Organization: FalconStor Software, Inc. User-Agent: Thunderbird 1.5.0.9 (Windows/20061207) MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Re: Will xfs_growfs succeed on a full file system? 
References: <45B6277F.20506@falconstor.com> <45B63AA1.8010504@sandeen.net> In-Reply-To: <45B63AA1.8010504@sandeen.net> X-MIMETrack: Itemize by SMTP Server on FalconstorMail/FalconStor(Release 5.0.11 |July 24, 2002) at 01/23/2007 12:10:32 PM, Serialize by Router on evaldomino/FalconStor(Release 5.0.11 |July 24, 2002) at 01/23/2007 12:45:46 PM, Serialize complete at 01/23/2007 12:45:46 PM Content-Transfer-Encoding: 7bit Content-Type: text/plain; charset=ISO-8859-1; format=flowed X-archive-position: 10396 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: geir.myrestrand@falconstor.com Precedence: bulk X-list: xfs Content-Length: 1758 Lines: 58 Eric Sandeen wrote: > Geir A. Myrestrand wrote: >> Does xfs_growfs depend on some space left on the file system in order to >> be able to grow it? >> >> I have a colleague who ran into an issue where a file system resize >> failed. The file system is 100% full. >> >> Aside from analyzing what happened in his case, should XFS be able to >> grow a file system that is 100% full? >> >> The device has already been expanded, it is the XFS file system that >> fails to resize. I just wonder if that is by design, or whether it is an >> issue. >> > > Off the top of my head, I think it should work ok even if full, although > I could be (and apparently I am) wrong here. How exactly did the growfs > fail? > > I actually wasn't able to completely fill my filesystem, got stuck at > 20k left. :) but growing that from 50M to 100M worked fine for me. > > -Eric The only error I saw in his output was this line:

xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device

He claims that the file system is actually resized after it has been re-mounted. He verifies with df:

After expansion (with xfs_growfs):

# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/vbdi6               93504     93504         0 100% /nas/NASDisk-00006
# df -i
Filesystem             Inodes     IUsed     IFree IUse% Mounted on
/dev/vbdi6                 64         6        58   10% /nas/NASDisk-00006

After re-mount:

# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/vbdi6              200000     93516    106484  47% /nas/NASDisk-00006
# df -i
Filesystem             Inodes     IUsed     IFree IUse% Mounted on
/dev/vbdi6             204800         6    204794    1% /nas/NASDisk-00006

-- Geir A.
Myrestrand From owner-xfs@oss.sgi.com Tue Jan 23 11:18:34 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 11:18:41 -0800 (PST) X-Spam-oss-Status: No, score=-0.2 required=5.0 tests=AWL,BAYES_50, DATE_IN_PAST_12_24 autolearn=no version=3.2.0-pre1-r497472 Received: from slurp.thebarn.com (cattelan-host202.dsl.visi.com [208.42.117.202]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NJIXqw006151 for ; Tue, 23 Jan 2007 11:18:33 -0800 Received: from [127.0.0.1] (lupo.thebarn.com [10.0.0.10]) (authenticated bits=0) by slurp.thebarn.com (8.13.8/8.13.8) with ESMTP id l0NHnRSJ081062; Tue, 23 Jan 2007 11:49:37 -0600 (CST) (envelope-from cattelan@thebarn.com) Subject: Re: [OT] Spam on this list From: Russell Cattelan To: Ralf Hildebrandt Cc: Martin =?ISO-8859-1?Q?Schr=F6der?= , xfs@oss.sgi.com In-Reply-To: <20070122212957.GN27538@charite.de> References: <68c491a60701221252i30d28955pde6a4e987a1d248f@mail.gmail.com> <20070122212957.GN27538@charite.de> Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="=-o8q4PfncurwXPU5RyGBX" Date: Mon, 22 Jan 2007 16:29:59 -0600 Message-Id: <1169504999.28100.45.camel@xenon.msp.redhat.com> Mime-Version: 1.0 X-Mailer: Evolution 2.9.4-1mdv2007.1 X-archive-position: 10397 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cattelan@thebarn.com Precedence: bulk X-list: xfs Content-Length: 1149 Lines: 38 --=-o8q4PfncurwXPU5RyGBX Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable On Mon, 2007-01-22 at 22:29 +0100, Ralf Hildebrandt wrote: > * Martin Schr=C3=B6der : > > This list is a constant distributor of spam, maybe because it accepts > > non-member contributions. > >=20 > > A plea to those who are able to do so: Please fix this by allowing > > only contributions from members. >=20 > Yes please. It's extremely annoying. We tried that once ... the screaming from people who only wanted to=20 subscribe once but post from multiple emails (work, home, gmail) etc... was also quite loud. It's probably time to update SA on oss since the spam count has been=20 going up a bit. 
>=20 --=20 Russell Cattelan --=-o8q4PfncurwXPU5RyGBX Content-Type: application/pgp-signature; name=signature.asc Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.5 (GNU/Linux) iD8DBQBFtTrnNRmM+OaGhBgRAjRaAJ9W3OgAY4M3Pdz4Y6aFrD30w9uJ8gCbBuYt i/ijKIn2n4bOSWGud6kJXp4= =vANp -----END PGP SIGNATURE----- --=-o8q4PfncurwXPU5RyGBX-- From owner-xfs@oss.sgi.com Tue Jan 23 11:24:32 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 11:24:37 -0800 (PST) X-Spam-oss-Status: No, score=-0.5 required=5.0 tests=AWL,BAYES_50 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp102.sbc.mail.mud.yahoo.com (smtp102.sbc.mail.mud.yahoo.com [68.142.198.201]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0NJORqw007561 for ; Tue, 23 Jan 2007 11:24:32 -0800 Received: (qmail 6669 invoked from network); 23 Jan 2007 19:23:30 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@24.5.75.45 with login) by smtp102.sbc.mail.mud.yahoo.com with SMTP; 23 Jan 2007 19:23:29 -0000 X-YMail-OSG: qtnmeXEVM1ly4JCPrTYCw6jHgS3FJXlJY7ft4Phcx7P86burmr3WhhH2Ui8kjmDRL1OfDxpLOw-- Received: by tuatara.stupidest.org (Postfix, from userid 10000) id A0B9F1826121; Tue, 23 Jan 2007 11:23:28 -0800 (PST) Date: Tue, 23 Jan 2007 11:23:28 -0800 From: Chris Wedgwood To: Russell Cattelan Cc: Ralf Hildebrandt , Martin Schr?der , xfs@oss.sgi.com Subject: Re: [OT] Spam on this list Message-ID: <20070123192328.GA32187@tuatara.stupidest.org> References: <68c491a60701221252i30d28955pde6a4e987a1d248f@mail.gmail.com> <20070122212957.GN27538@charite.de> <1169504999.28100.45.camel@xenon.msp.redhat.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1169504999.28100.45.camel@xenon.msp.redhat.com> X-archive-position: 10398 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs Content-Length: 332 Lines: 9 On Mon, Jan 22, 2007 at 04:29:59PM -0600, Russell Cattelan wrote: > It's probably time to update SA on oss since the spam count has been > going up a bit. how about ditch ecartis or whatever cacner there is that munges emails horribly? 
if you leave the email bodies intact people's local filters will work much more effectively From owner-xfs@oss.sgi.com Tue Jan 23 12:33:57 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 12:34:08 -0800 (PST) X-Spam-oss-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=ham version=3.2.0-pre1-r497472 Received: from wx-out-0506.google.com (wx-out-0506.google.com [66.249.82.227]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NKXtqw029574 for ; Tue, 23 Jan 2007 12:33:56 -0800 Received: by wx-out-0506.google.com with SMTP id t4so1641375wxc for ; Tue, 23 Jan 2007 12:33:01 -0800 (PST) DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=A5PzYQf9mmLz9eUxS5zi5F/Sp2cWbumn7UL12CjkzuvPNmvzAytqsn3lpG1w5Mn92GP5+Bayn7KalpP199YXEpVhLOtV65uNb18Mzh+EWRTPDCArIrEU3A+iR7nfdGiqj8c/p8a6kcBJQGrg/5tJFPrdv11eCbkhvhvnspW3X44= Received: by 10.70.96.3 with SMTP id t3mr13804562wxb.1169584381374; Tue, 23 Jan 2007 12:33:01 -0800 (PST) Received: by 10.70.22.9 with HTTP; Tue, 23 Jan 2007 12:33:01 -0800 (PST) Message-ID: Date: Tue, 23 Jan 2007 21:33:01 +0100 From: "Peter Gervai" To: xfs@oss.sgi.com Subject: Re: how to sync / commit data to disk? In-Reply-To: <45B632F6.50705@sandeen.net> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline References: <45B632F6.50705@sandeen.net> X-archive-position: 10400 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: grinapo@gmail.com Precedence: bulk X-list: xfs Content-Length: 3512 Lines: 82 Thanks for the very informative replies! I try to address some questions, and maybe ask some more. The original question was: > > What is the recommended way to make sure that a file is written > > physically to the disk? (apart from the cache of the disk.) On 1/23/07, Eric Sandeen wrote: > Another issue that I've seen with grub is that it seems to like to write > directly to the block device WHILE THE FILESYSTEM IS MOUNTED. It does not seem to do evil in this case: 3243 open("//boot/grub/stage2", O_RDWR) = 5 3243 fstat64(0x5, 0xf7c2e9f4) = 0 3243 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7bc4000 3243 _llseek(5, 0, [0], SEEK_SET) = 0 3243 read(5, "RV\276\3\201\350(\1^\277\370\201f\213-\203}\4\0\17\204\312\0\200|\377\0t>f\213\35f1\300\260\1779E\4\177\3\213E\4)E\4f\1\5\307\4\20\0\211D\2f\211\ \\10\307D\6\0pPf1\300\211D\4f\211D\f\264B\315\23\17\202\237\0\273\0p\353Vf\213\5f1\322f\3674\210T\nf1"..., 512) = 512 3243 write(5, "\352p\202\0\0\0\3\2\377\377\377\0\0\0\0\0\0\0000.97\0/boot/grub/menu.lst\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\3721\300\216\330\216\320\216\300g"..., 512) = 512 3243 close(5) = 0 Now, for me it seems to be a very interesting question that why people would CARE whether it's synced or not, since they write it by the filesystem layer anyway? I do not know, and investigating the reason why 'grub-install' would worry about sync is beyond my available time. The original problem was because grub-install froze xfs, then tried to do the above, which magically fail on the frozen filesystem, hanging the install till the cows come home. 
I tried to fix this, then started wondering how to do it properly, and now that you mentioned it and I checked the trace I really start wondering about why to care... :-o Then Chris Wedgwood said: > GRUB IS BROKEN Apart from that it is so far the best boot manager I found (still I'm open to suggestions of better, free boot managers, but please not on this list), which is completely based on my extremely subjective point of view (which includes XFS support, naturally), and which may be completely opposite to the point of view of any human or nonhuman being, it is not, as you have seen above. It uses a dirty hack which works, I accept that. Specifying --stage2 seems to avoid that hack altogether anyway. > wrt to grub, i thought it did this for xfs anyhow? I accept the "BROKEN" comment regarding this one. ;-) It is a broken implementation. Freeze, then tries to write. Doomed to fail. Iustin Pop commented about: > I usually unmount the /boot partition if I need to reinstall grub Manually it's ok but this is about grub-install, a "vendor" provided script which cannot assume that you have a separate boot partition. "Geir A. Myrestrand" added: > Call the sync before you freeze the file system, not after. You can't > write to the file system when it is frozen, so it makes no sense to call > sync after a freeze. I thought that freeze lets already written but unsynced data be written. It seems to me that sync before freeze seems to be a good way to make sure things are on the disk where they're gonna stay, as far as it is possible in such a hacky case. Thanks, and feel free to comment more, if you like. -- byte-byte, grin From owner-xfs@oss.sgi.com Tue Jan 23 13:44:26 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 13:44:33 -0800 (PST) X-Spam-oss-Status: No, score=-2.2 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NLiOqw013908 for ; Tue, 23 Jan 2007 13:44:26 -0800 Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 1F4721801B769; Tue, 23 Jan 2007 15:43:30 -0600 (CST) Message-ID: <45B68181.2010707@sandeen.net> Date: Tue, 23 Jan 2007 15:43:29 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (Macintosh/20061207) MIME-Version: 1.0 To: Peter Gervai CC: xfs@oss.sgi.com Subject: Re: how to sync / commit data to disk? References: <45B632F6.50705@sandeen.net> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10401 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 2966 Lines: 64 Peter Gervai wrote: > Thanks for the very informative replies! > > I try to address some questions, and maybe ask some more. > > The original question was: >> > What is the recommended way to make sure that a file is written >> > physically to the disk? (apart from the cache of the disk.) > > On 1/23/07, Eric Sandeen wrote: > >> Another issue that I've seen with grub is that it seems to like to write >> directly to the block device WHILE THE FILESYSTEM IS MOUNTED.
> > It does not seem to do evil in this case: > > 3243 open("//boot/grub/stage2", O_RDWR) = 5 > 3243 fstat64(0x5, 0xf7c2e9f4) = 0 > 3243 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, > MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7bc4000 > 3243 _llseek(5, 0, [0], SEEK_SET) = 0 > 3243 read(5, > "RV\276\3\201\350(\1^\277\370\201f\213-\203}\4\0\17\204\312\0\200|\377\0t>f\213\35f1\300\260\1779E\4\177\3\213E\4)E\4f\1\5\307\4\20\0\211D\2f\211\ > > \\10\307D\6\0pPf1\300\211D\4f\211D\f\264B\315\23\17\202\237\0\273\0p\353Vf\213\5f1\322f\3674\210T\nf1"..., > > 512) = 512 > 3243 write(5, > "\352p\202\0\0\0\3\2\377\377\377\0\0\0\0\0\0\0000.97\0/boot/grub/menu.lst\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 > > \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\3721\300\216\330\216\320\216\300g"..., > > 512) = 512 > 3243 close(5) = 0 > > Now, for me it seems to be a very interesting question that why people > would CARE whether it's synced or not, since they write it by the > filesystem layer anyway? I do not know, and investigating the reason > why 'grub-install' would worry about sync is beyond my available time. > > The original problem was because grub-install froze xfs, then tried to > do the above, which magically fail on the frozen filesystem, hanging > the install till the cows come home. I tried to fix this, then started > wondering how to do it properly, and now that you mentioned and I > checked the trace I really start wondering about why to care... :-o The other thing I've seen grub do is to go ahead & write nicely through the filesystem, then do a verification step of trying to go read directly from the block device while the filesystem is still mounted. Again, not treating the filesystem well; until you unmount the fs there's not a lot to guarantee that all your data is where you expect it on disk (especially w/ a journalling fs, where metadata in the journal is really enough for safety, even if its not in its final disk location - and of course grub doesn't replay the journal....) When I saw it hanging, I saw it hanging in this verification step, for the above reasons. it wandered into unwritten metadata blocks. Skipping that verification step would probably be a better idea, it's causing many more problems than it's likely to catch (unless you can unmount or otherwise quiesce the filesystem first...) -Eric From owner-xfs@oss.sgi.com Tue Jan 23 14:26:33 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 14:26:37 -0800 (PST) X-Spam-oss-Status: No, score=-1.8 required=5.0 tests=AWL,BAYES_00, J_CHICKENPOX_72 autolearn=no version=3.2.0-pre1-r497472 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0NMQTqw023486 for ; Tue, 23 Jan 2007 14:26:33 -0800 Received: from edge (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 52833AAC28E; Wed, 24 Jan 2007 09:13:11 +1100 (EST) Subject: Re: how to sync / commit data to disk? 
From: Nathan Scott Reply-To: nscott@aconex.com To: Chris Wedgwood Cc: Peter Gervai , xfs@oss.sgi.com In-Reply-To: <20070123165050.GA28720@tuatara.stupidest.org> References: <20070123165050.GA28720@tuatara.stupidest.org> Content-Type: text/plain Organization: Aconex Date: Wed, 24 Jan 2007 09:24:29 +1100 Message-Id: <1169591069.18017.143.camel@edge> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-archive-position: 10402 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs Content-Length: 923 Lines: 32 On Tue, 2007-01-23 at 08:50 -0800, Chris Wedgwood wrote: > On Tue, Jan 23, 2007 at 04:16:59PM +0100, Peter Gervai wrote: > > > This problem seem to have arisen in grub bootloader under Debian > > linux (and most probably everywhere else): it must be sure that the > > copied files are there, and can be addressed by C/H/S and modified > > there, at the given sector address. > > grub is broken > > this comes up all the time, there are various work-arounds but it > doesn't change the fact that GRUB IS BROKEN > > it would be nice if someone would just address it from that end > > > xfs_freeze -f > > sync > > xfs_freeze -u > > sync before freeze (actually, I'm not sure a sync there is necessary > but it can't hurt) Its not necessary (would be a bug if so). FWIW, this hack can be better achieved in a filesystem independent way by doing a remount,ro ... remount,rw instead of freeze/thaw. cheers. -- Nathan From owner-xfs@oss.sgi.com Tue Jan 23 14:38:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 14:38:15 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0NMc5qw026319 for ; Tue, 23 Jan 2007 14:38:06 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA25007; Wed, 24 Jan 2007 09:37:05 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0NMb37Y102241714; Wed, 24 Jan 2007 09:37:04 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0NMb2Cp102192831; Wed, 24 Jan 2007 09:37:02 +1100 (AEDT) Date: Wed, 24 Jan 2007 09:37:02 +1100 From: David Chinner To: linux-kernel@vger.kernel.org Cc: xfs@oss.sgi.com, akpm@osdl.org Subject: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS Message-ID: <20070123223702.GF33919298@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 10403 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 6967 Lines: 190 With the recent changes to cancel_dirty_pages(), XFS will dump warnings in the syslog because it can truncate_inode_pages() on dirty mapped pages. I've determined that this is indeed correct behaviour for XFS as this can happen in the case of races on mmap()d files with direct I/O. In this case when we do a direct I/O read, we flush the dirty pages to disk, then truncate them out of the page cache. 
Unfortunately, between the flush and the truncate the mmap could dirty the page again. At this point we toss a dirty page that is mapped. None of the existing functions for truncating pages or invalidating pages work in this situation. Invalidating a page only works for non-dirty pages with non-dirty buffers, and they only work for whole pages and XFS requires partial page truncation. On top of that the page invalidation functions don't actually call into the filesystem to invalidate the page and so the filesystem can't actually invalidate the page properly (e.g. do stuff based on private buffer head flags). So that leaves us needing to use truncate semantics an the problem is that none of them unmap pages in a non-racy manner - if they unmap pages they do it separately tothe truncate of the page, leading to races with mmap redirtying the page between the unmap and the truncate ofthe page. Hence we need a truncate function that unmaps the pages while they are locked for truncate in a similar fashion to invalidate_inode_pages2_range(). The following patch (unchanged from the last time it was sent) does this. The XFS changes are in a second patch. The patch has been test on ia64 and x86-64 via XFSQA and a lot of fsx mixing mmap and direct I/O operations. Signed-off-by: Dave Chinner --- Index: 2.6.x-xfs-new/include/linux/mm.h =================================================================== --- 2.6.x-xfs-new.orig/include/linux/mm.h 2007-01-15 15:09:57.000000000 +1100 +++ 2.6.x-xfs-new/include/linux/mm.h 2007-01-16 08:59:24.031897743 +1100 @@ -1060,6 +1060,8 @@ extern unsigned long page_unuse(struct p extern void truncate_inode_pages(struct address_space *, loff_t); extern void truncate_inode_pages_range(struct address_space *, loff_t lstart, loff_t lend); +extern void truncate_unmap_inode_pages_range(struct address_space *, + loff_t lstart, loff_t lend, int unmap); /* generic vm_area_ops exported for stackable file systems */ extern struct page *filemap_nopage(struct vm_area_struct *, unsigned long, int *); Index: 2.6.x-xfs-new/mm/truncate.c =================================================================== --- 2.6.x-xfs-new.orig/mm/truncate.c 2007-01-16 08:59:23.947908876 +1100 +++ 2.6.x-xfs-new/mm/truncate.c 2007-01-16 09:35:53.102924243 +1100 @@ -59,7 +59,7 @@ void cancel_dirty_page(struct page *page WARN_ON(++warncount < 5); } - + if (TestClearPageDirty(page)) { struct address_space *mapping = page->mapping; if (mapping && mapping_cap_account_dirty(mapping)) { @@ -122,16 +122,34 @@ invalidate_complete_page(struct address_ return ret; } +/* + * This is a helper for truncate_unmap_inode_page. Unmap the page we + * are passed. Page must be locked by the caller. + */ +static void +unmap_single_page(struct address_space *mapping, struct page *page) +{ + BUG_ON(!PageLocked(page)); + while (page_mapped(page)) { + unmap_mapping_range(mapping, + (loff_t)page->index << PAGE_CACHE_SHIFT, + PAGE_CACHE_SIZE, 0); + } +} + /** - * truncate_inode_pages - truncate range of pages specified by start and + * truncate_unmap_inode_pages_range - truncate range of pages specified by + * start and end byte offsets and optionally unmap them first. * end byte offsets * @mapping: mapping to truncate * @lstart: offset from which to truncate * @lend: offset to which to truncate + * @unmap: unmap whole truncated pages if non-zero * * Truncate the page cache, removing the pages that are between * specified offsets (and zeroing out partial page - * (if lstart is not page aligned)). 
+ * (if lstart is not page aligned)). If specified, unmap the pages + * before they are removed. * * Truncate takes two passes - the first pass is nonblocking. It will not * block on page locks and it will not block on writeback. The second pass @@ -146,8 +164,8 @@ invalidate_complete_page(struct address_ * mapping is large, it is probably the case that the final pages are the most * recently touched, and freeing happens in ascending file offset order. */ -void truncate_inode_pages_range(struct address_space *mapping, - loff_t lstart, loff_t lend) +void truncate_unmap_inode_pages_range(struct address_space *mapping, + loff_t lstart, loff_t lend, int unmap) { const pgoff_t start = (lstart + PAGE_CACHE_SIZE-1) >> PAGE_CACHE_SHIFT; pgoff_t end; @@ -162,6 +180,14 @@ void truncate_inode_pages_range(struct a BUG_ON((lend & (PAGE_CACHE_SIZE - 1)) != (PAGE_CACHE_SIZE - 1)); end = (lend >> PAGE_CACHE_SHIFT); + /* + * if unmapping, do a range unmap up front to minimise the + * overhead of unmapping the pages + */ + if (unmap) { + unmap_mapping_range(mapping, (loff_t)start << PAGE_CACHE_SHIFT, + (loff_t)end << PAGE_CACHE_SHIFT, 0); + } pagevec_init(&pvec, 0); next = start; while (next <= end && @@ -184,6 +210,8 @@ void truncate_inode_pages_range(struct a unlock_page(page); continue; } + if (unmap) + unmap_single_page(mapping, page); truncate_complete_page(mapping, page); unlock_page(page); } @@ -195,6 +223,8 @@ void truncate_inode_pages_range(struct a struct page *page = find_lock_page(mapping, start - 1); if (page) { wait_on_page_writeback(page); + if (unmap) + unmap_single_page(mapping, page); truncate_partial_page(page, partial); unlock_page(page); page_cache_release(page); @@ -224,12 +254,30 @@ void truncate_inode_pages_range(struct a if (page->index > next) next = page->index; next++; + if (unmap) + unmap_single_page(mapping, page); truncate_complete_page(mapping, page); unlock_page(page); } pagevec_release(&pvec); } } +EXPORT_SYMBOL(truncate_unmap_inode_pages_range); + +/** + * truncate_inode_pages_range - truncate range of pages specified by start and + * end byte offsets + * @mapping: mapping to truncate + * @lstart: offset from which to truncate + * @lend: offset to which to truncate + * + * Called under (and serialised by) inode->i_mutex. 
+ */ +void truncate_inode_pages_range(struct address_space *mapping, + loff_t lstart, loff_t lend) +{ + truncate_unmap_inode_pages_range(mapping, lstart, lend, 0); +} EXPORT_SYMBOL(truncate_inode_pages_range); /** @@ -241,7 +289,7 @@ EXPORT_SYMBOL(truncate_inode_pages_range */ void truncate_inode_pages(struct address_space *mapping, loff_t lstart) { - truncate_inode_pages_range(mapping, lstart, (loff_t)-1); + truncate_unmap_inode_pages_range(mapping, lstart, (loff_t)-1, 0); } EXPORT_SYMBOL(truncate_inode_pages); From owner-xfs@oss.sgi.com Tue Jan 23 14:40:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 14:40:09 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0NMe1qw027012 for ; Tue, 23 Jan 2007 14:40:02 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA25067; Wed, 24 Jan 2007 09:39:03 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0NMd17Y102189537; Wed, 24 Jan 2007 09:39:01 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0NMd0Ao102274346; Wed, 24 Jan 2007 09:39:00 +1100 (AEDT) Date: Wed, 24 Jan 2007 09:39:00 +1100 From: David Chinner To: linux-kernel@vger.kernel.org Cc: xfs@oss.sgi.com, akpm@osdl.org Subject: [PATCH 2/2]: Fix BUG in cancel_dirty_pages on XFS Message-ID: <20070123223900.GG33919298@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 10404 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 986 Lines: 34 Make XFS use the new truncate_unmap_inode_pages_range() function. 
Signed-off-by: Dave Chinner --- fs/xfs/linux-2.6/xfs_fs_subr.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_fs_subr.c 2007-01-23 18:42:46.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_fs_subr.c 2007-01-23 18:44:53.955160806 +1100 @@ -32,7 +32,8 @@ fs_tosspages( struct inode *ip = vn_to_inode(vp); if (VN_CACHED(vp)) - truncate_inode_pages(ip->i_mapping, first); + truncate_unmap_inode_pages_range(ip->i_mapping, + first, last, 1); } void @@ -49,7 +50,8 @@ fs_flushinval_pages( if (VN_TRUNC(vp)) VUNTRUNCATE(vp); filemap_write_and_wait(ip->i_mapping); - truncate_inode_pages(ip->i_mapping, first); + truncate_unmap_inode_pages_range(ip->i_mapping, + first, last, 1); } } From owner-xfs@oss.sgi.com Tue Jan 23 14:48:07 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 14:48:11 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0NMm4qw029046 for ; Tue, 23 Jan 2007 14:48:06 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA25473; Wed, 24 Jan 2007 09:47:06 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0NMl57Y102183153; Wed, 24 Jan 2007 09:47:06 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0NMl4kr102205982; Wed, 24 Jan 2007 09:47:04 +1100 (AEDT) Date: Wed, 24 Jan 2007 09:47:04 +1100 From: David Chinner To: xfs-dev@sgi.com Cc: xfs@oss.sgi.com Subject: Review: Fix sub-page zeroing for buffered writes into unwritten extents Message-ID: <20070123224704.GH33919298@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 10405 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 2089 Lines: 59 Simple test case: prealloc large file write 3000 bytes to the middle of the file read back file The data in the block where the 3000 bytes was written has non-zero garbage around it both in memory and on disk. The problem is a buffer mapping problem. When we copy data into an unwritten buffer, we have the create flag set which means we map the buffer. We then mark the buffer as unwritten, and do some more checks. Because the buffer is mapped, we do not set the buffer_new() flag on the buffer, which means when we return to the generic code, it does not do sub-block zeroing of the unwritten areas of the block. The following patch fixes the problem. Comments? Cheers, Dave. 
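For illustration, the test case above can be driven from userspace with xfs_io; the path, sizes and offsets below are assumptions rather than the values used in the original report:

    # preallocate a large file (resvsp allocates unwritten extents)
    xfs_io -f -c 'resvsp 0 10485760' /mnt/scratch/testfile
    # buffered write of 3000 bytes into the middle of the preallocated range
    xfs_io -c 'pwrite 5242880 3000' /mnt/scratch/testfile
    # read back the containing block; the bytes beyond the 3000 written ones
    # should be zero -- without the fix they come back as garbage
    xfs_io -c 'pread -v 5242880 4096' /mnt/scratch/testfile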
-- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/linux-2.6/xfs_aops.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_aops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_aops.c 2007-01-23 18:40:45.255241599 +1100 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_aops.c 2007-01-23 18:49:13.345681246 +1100 @@ -1282,13 +1282,18 @@ __xfs_get_blocks( bh_result->b_bdev = iomap.iomap_target->bt_bdev; /* - * If we previously allocated a block out beyond eof and we are - * now coming back to use it then we will need to flag it as new - * even if it has a disk address. + * If we previously allocated a block out beyond eof and we are now + * coming back to use it then we will need to flag it as new even if it + * has a disk address. + * + * With sub-block writes into unwritten extents we also need to mark + * the buffer as new so that the unwritten parts of the buffer gets + * correctly zeroed. */ if (create && ((!buffer_mapped(bh_result) && !buffer_uptodate(bh_result)) || - (offset >= i_size_read(inode)) || (iomap.iomap_flags & IOMAP_NEW))) + (offset >= i_size_read(inode)) || + (iomap.iomap_flags & (IOMAP_NEW|IOMAP_UNWRITTEN)))) set_buffer_new(bh_result); if (iomap.iomap_flags & IOMAP_DELAY) { From owner-xfs@oss.sgi.com Tue Jan 23 14:59:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 14:59:57 -0800 (PST) X-Spam-oss-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00, J_CHICKENPOX_72 autolearn=no version=3.2.0-pre1-r497472 Received: from smtp113.sbc.mail.mud.yahoo.com (smtp113.sbc.mail.mud.yahoo.com [68.142.198.212]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0NMxpqw032460 for ; Tue, 23 Jan 2007 14:59:52 -0800 Received: (qmail 71485 invoked from network); 23 Jan 2007 22:58:57 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@70.231.250.146 with login) by smtp113.sbc.mail.mud.yahoo.com with SMTP; 23 Jan 2007 22:58:57 -0000 X-YMail-OSG: s79a0fgVM1nfPz_CjQOZPSHV5IEhOsTLdWF2rwPwfRaGEjkEWMA3ppWzSojZxrEfv2M9QDvP3w-- Received: by tuatara.stupidest.org (Postfix, from userid 10000) id 953121826125; Tue, 23 Jan 2007 14:58:55 -0800 (PST) Date: Tue, 23 Jan 2007 14:58:55 -0800 From: Chris Wedgwood To: Nathan Scott Cc: Peter Gervai , xfs@oss.sgi.com Subject: Re: how to sync / commit data to disk? Message-ID: <20070123225855.GA5054@tuatara.stupidest.org> References: <20070123165050.GA28720@tuatara.stupidest.org> <1169591069.18017.143.camel@edge> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1169591069.18017.143.camel@edge> X-archive-position: 10406 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs Content-Length: 279 Lines: 8 On Wed, Jan 24, 2007 at 09:24:29AM +1100, Nathan Scott wrote: > FWIW, this hack can be better achieved in a filesystem independent > way by doing a remount,ro ... remount,rw instead of freeze/thaw. 
except it won't work if there are open read-write files (which can be the case) From owner-xfs@oss.sgi.com Tue Jan 23 18:08:58 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 18:09:02 -0800 (PST) X-Spam-oss-Status: No, score=-0.8 required=5.0 tests=AWL,BAYES_40 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0O28tqw014968 for ; Tue, 23 Jan 2007 18:08:57 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA00598; Wed, 24 Jan 2007 13:07:54 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0O27q7Y94795662; Wed, 24 Jan 2007 13:07:53 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0O27oP1102269405; Wed, 24 Jan 2007 13:07:50 +1100 (AEDT) Date: Wed, 24 Jan 2007 13:07:50 +1100 From: David Chinner To: "Geir A. Myrestrand" Cc: linux-xfs@oss.sgi.com Subject: Re: Will xfs_growfs succeed on a full file system? Message-ID: <20070124020750.GD44411608@melbourne.sgi.com> References: <45B6277F.20506@falconstor.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45B6277F.20506@falconstor.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10407 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1228 Lines: 38 On Tue, Jan 23, 2007 at 10:19:27AM -0500, Geir A. Myrestrand wrote: > Does xfs_growfs depend on some space left on the file system in order to > be able to grow it? Yes, it does require some space to be left because growing the filesystem can require extending the original last AG to the full size and that means there may be btree work to be journalled and hence we have to reserve blocks for that to succeed in all cases. > I have a colleague who ran into an issue where a file system resize > failed. The file system is 100% full. Yup, we hit that in QA recently and have an open bug for it. > Aside from analyzing what happened in his case, should XFS be able to > grow a file system that is 100% full? Yes, it should. > The device has already been expanded, it is the XFS file system that > fails to resize. I just wonder if that is by design, or whether it is an > issue. It's a bug, really, but one you can easily work around by freeing up about 50k of space. If you've got a really large filesystem, then you might need to free more space than that. At some point in my copious amounts of free time I'll fix it properly... Cheers, Dave.
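A sketch of that workaround, reusing the mount point from the df output earlier in the thread (the file being removed is only an example):

    rm /nas/NASDisk-00006/some-unneeded-file   # free a little space (~50k; more on very large filesystems)
    xfs_growfs /nas/NASDisk-00006              # retry the grow
    df -k /nas/NASDisk-00006                   # the new size should now show up without a remount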
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jan 23 18:54:53 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 18:54:57 -0800 (PST) X-Spam-oss-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50,RCVD_BAD_ID autolearn=no version=3.2.0-pre1-r497472 Received: from evaldomino.Falconstor.com (mail1.falconstor.com [216.223.47.230]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0O2sqqw024141 for ; Tue, 23 Jan 2007 18:54:53 -0800 Received: from [10.1.10.196] ([10.1.10.196]) by falconstormail.falconstor.net (Lotus Domino Release 5.0.11) with ESMTP id 2007012321262402:2309 ; Tue, 23 Jan 2007 21:26:24 -0500 Message-ID: <45B6C5C4.50005@falconstor.com> Date: Tue, 23 Jan 2007 21:34:44 -0500 From: "Geir A. Myrestrand" Reply-To: geir.myrestrand@falconstor.com Organization: FalconStor Software, Inc. User-Agent: Thunderbird 1.5.0.9 (Windows/20061207) MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Re: Will xfs_growfs succeed on a full file system? References: <45B6277F.20506@falconstor.com> <20070124020750.GD44411608@melbourne.sgi.com> In-Reply-To: <20070124020750.GD44411608@melbourne.sgi.com> X-MIMETrack: Itemize by SMTP Server on FalconstorMail/FalconStor(Release 5.0.11 |July 24, 2002) at 01/23/2007 09:26:24 PM, Serialize by Router on evaldomino/FalconStor(Release 5.0.11 |July 24, 2002) at 01/23/2007 09:54:56 PM, Serialize complete at 01/23/2007 09:54:56 PM Content-Transfer-Encoding: 7bit Content-Type: text/plain; charset=ISO-8859-1; format=flowed X-archive-position: 10408 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: geir.myrestrand@falconstor.com Precedence: bulk X-list: xfs Content-Length: 1411 Lines: 40 David Chinner wrote: > On Tue, Jan 23, 2007 at 10:19:27AM -0500, Geir A. Myrestrand wrote: >> Does xfs_growfs depend on some space left on the file system in order to >> be able to grow it? > > Yes, it does require some space to be left because growing the > filesystem can require extending the original last AG to the full > size and that means there may be btree work to be journalled > and hence we have to reserve blocks for that to succeed in all > cases. > >> I have a colleague who ran into an issue where a file system resize >> failed. The file system is 100% full. > > Yup, we hit that in QA recently and have an open bug for it. > >> Aside from analyzing what happened in his case, should XFS be able to >> grow a file system that is 100% full? > > Yes, it should. > >> The device has already been expanded, it is the XFS file system that >> fails to resize. I just wonder if that is by design, or whether it is an >> issue. > > It's a bug, really, but one you can easily work around by freeing > up about 50k of space. If you've got a really large filesystem, then > you might need to free more space than that. > > At some point in my copious amounts of free time I'll fix it properly... Thanks a lot for your feedback David. We look forward to a fix, but just knowing that it is an issue and that we easily can work around it helps a lot. Thanks again! -- Geir A. 
Myrestrand From owner-xfs@oss.sgi.com Tue Jan 23 22:22:53 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 22:22:58 -0800 (PST) X-Spam-oss-Status: No, score=-0.5 required=5.0 tests=AWL,BAYES_50 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp101.sbc.mail.mud.yahoo.com (smtp101.sbc.mail.mud.yahoo.com [68.142.198.200]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0O6Mqqw002766 for ; Tue, 23 Jan 2007 22:22:53 -0800 Received: (qmail 3681 invoked from network); 24 Jan 2007 06:21:52 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@24.5.75.45 with login) by smtp101.sbc.mail.mud.yahoo.com with SMTP; 24 Jan 2007 06:21:51 -0000 X-YMail-OSG: NtHfQ8cVM1m.MoP5R11ZSNNr2bOqACR_tTykvMVo69vZnvyDyODnf6y_6g0i7txEEzath46clYd.ge2aT4X5_L90CIUpDarnpPCSTJBTUMQ9DdbcEHNBUORBkJl3AM8- Received: by tuatara.stupidest.org (Postfix, from userid 10000) id 31C6F1826121; Tue, 23 Jan 2007 22:21:50 -0800 (PST) Date: Tue, 23 Jan 2007 22:21:50 -0800 From: Chris Wedgwood To: Peter Gervai Cc: xfs@oss.sgi.com Subject: Re: how to sync / commit data to disk? Message-ID: <20070124062150.GA12989@tuatara.stupidest.org> References: <45B632F6.50705@sandeen.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-archive-position: 10409 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs Content-Length: 2766 Lines: 58 On Tue, Jan 23, 2007 at 09:33:01PM +0100, Peter Gervai wrote: > Now, for me it seems to be a very interesting question that why > people would CARE whether it's synced or not, since they write it by > the filesystem layer anyway? I do not know, and investigating the > reason why 'grub-install' would worry about sync is beyond my > available time. there was/is a case where grub assumes a sync will put the file in its final place, it then pokes about using its own fs code talking to the block device --- and that's where it breaks

the reason is sync/fsync have to put the data down somewhere that's not volatile, that isn't going to get lost --- the specification doesn't require that those system calls put the data down in their final place or indeed put the filesystem in a state where it's safe to poke about under a mounted filesystem (though freeze essentially does that) > The original problem was because grub-install froze xfs, then tried > to do the above, which magically fail on the frozen filesystem, > hanging the install till the cows come home. I tried to fix this, > then started wondering how to do it properly, and now that you > mentioned and I checked the trace I really start wondering about why > to care... :-o i guess it wouldn't be hard to add a freeze/unfreeze hacky thing to use as a super-duper-sync to allow people to do nasty things, i'm not sure if that's really a good idea long term though > Apart from that it is so far the best boot manager I found (still I'm > open to suggestions of better, free boot managers, but please not on > this list), which is completely based on my extremely subjective > point of view (which includes XFS support, naturally), and which may > be completely opposite to the point of view of any human or nonhuman > being, it is not, as you have seen above. It uses a dirty hack which > works, I accept that. Specifying --stage2 seems to avoid that hack > altogether anyway.
to be quite honest, almost all the bootloaders suck in one way or another, you're just best off picking whatever has the least pain and living with it (arguably this applies to most software in general but boot-loaders are most definately nasty in places) > I accept the "BROKEN" comment regarding this one. ;-) It is a broken > implementation. Freeze, ten tries to write. Doomed to fail. right, so freeze/unfreeze in one call might be a better idea so there is no chance of having it stuck frozen and breaking things > I thought that freeze lets already written but unsynced data to be > written. freeze should push all unwritten data and metadata out to the fs and leave it in a decent state suitable for taking a snapshot, freeze/unfreeze in one go won't really do this, but it might be good enough for grub From owner-xfs@oss.sgi.com Tue Jan 23 23:01:27 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 23 Jan 2007 23:01:32 -0800 (PST) X-Spam-oss-Status: No, score=-0.7 required=5.0 tests=AWL,BAYES_50 autolearn=ham version=3.2.0-pre1-r497472 Received: from nz-out-0506.google.com (nz-out-0506.google.com [64.233.162.239]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0O71Qqw010579 for ; Tue, 23 Jan 2007 23:01:27 -0800 Received: by nz-out-0506.google.com with SMTP id m22so76118nzf for ; Tue, 23 Jan 2007 23:00:31 -0800 (PST) DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:to:subject:cc:mime-version:content-type:content-transfer-encoding:content-disposition; b=HH0BcBFE8E1XpU58ztwrvph+DTQ4s9kPaWnNkRtom9WtZFA09kybIiLcAfVhD1Ro2FmuFGqNMK7y9EPRdPPCUIpxH/yBevqNbuyepc28YHdyZXx2gKPqcboL1I15pGMQi2nXuqSPF8RuHbYi3wrx7r1k7xXz9mSEcghj57k1xD0= Received: by 10.35.121.9 with SMTP id y9mr801592pym.1169620462371; Tue, 23 Jan 2007 22:34:22 -0800 (PST) Received: by 10.35.40.17 with HTTP; Tue, 23 Jan 2007 22:34:22 -0800 (PST) Message-ID: <5d96567b0701232234y2ff15762sbd1aaada5c3a0a0@mail.gmail.com> Date: Wed, 24 Jan 2007 08:34:22 +0200 From: "Raz Ben-Jehuda(caro)" To: dgc@sgi.com Subject: [DISCUSS] xfs allocation bitmap method over linux raid Cc: xfs@oss.sgi.com MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-archive-position: 10410 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: raziebe@gmail.com Precedence: bulk X-list: xfs Content-Length: 1846 Lines: 58 David Hello. I have looked up in LKML and hopefully you are the one to ask in regard to xfs file system in Linux. My name is Raz and I work for a video servers company. These servers demand high throughput from the storage. We applied XFS file system on our machines. A video server reads a file in a sequential manner. So, if a file extent size is not a factor of the stripe unit size a sequential read over a raid would break into several small pieces which is undesirable for performance. I have been examining the bitmap of a file over Linux raid5. According to the documentation XFS tries to align a file on stripe unit size. What I have done is to fix the bitmap allocation method during the writing to be aligned by the stripe unit size. The thing is , though this seems to work , I do not know whether I missed something. The bellow is a patch (a mere two lines) i have applied to the file system and I would be really grateful to have your opinion. 
diff -ru --exclude='*.o' /d1/rt/kernels/linux-2.6.17-UNI/fs/xfs/xfs_iomap.c linux-2.6.17-UNI/fs/xfs/xfs_iomap.c --- /d1/rt/kernels/linux-2.6.17-UNI/fs/xfs/xfs_iomap.c 2006-06-18 01:49:35.000000000 +0000 +++ linux-2.6.17-UNI/fs/xfs/xfs_iomap.c 2006-12-26 14:11:02.000000000 +0000 @@ -441,8 +441,8 @@ if (unlikely(rt)) { if (!(extsz = ip->i_d.di_extsize)) extsz = mp->m_sb.sb_rextsize; - } else { - extsz = ip->i_d.di_extsize; + } else { + extsz = mp->m_dalign; // raz fix alignment to raid stripe unit } isize = ip->i_d.di_size; @@ -663,7 +663,7 @@ if (!(extsz = ip->i_d.di_extsize)) extsz = mp->m_sb.sb_rextsize; } else { - extsz = ip->i_d.di_extsize; + extsz = mp->m_dalign; // raz fix alignment to raid stripe unit } offset_fsb = XFS_B_TO_FSBT(mp, offset); ~ Thank you. -- Raz From owner-xfs@oss.sgi.com Wed Jan 24 04:30:54 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 04:31:02 -0800 (PST) X-Spam-oss-Status: No, score=-0.7 required=5.0 tests=BAYES_20 autolearn=ham version=3.2.0-pre1-r497472 Received: from amsfep18-int.chello.nl (amsfep17-int.chello.nl [213.46.243.15] (may be forged)) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0OCUlqw024860 for ; Wed, 24 Jan 2007 04:30:54 -0800 Received: from [192.168.0.111] (really [62.194.129.232]) by amsfep18-int.chello.nl (InterMail vM.6.01.04.04 201-2131-118-104-20050224) with ESMTP id <20070124121357.MDPZ25065.amsfep18-int.chello.nl@[192.168.0.111]>; Wed, 24 Jan 2007 13:13:57 +0100 Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS From: Peter Zijlstra To: David Chinner Cc: linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org In-Reply-To: <20070123223702.GF33919298@melbourne.sgi.com> References: <20070123223702.GF33919298@melbourne.sgi.com> Content-Type: text/plain Date: Wed, 24 Jan 2007 13:13:55 +0100 Message-Id: <1169640835.6189.14.camel@twins> Mime-Version: 1.0 X-Mailer: Evolution 2.8.1 Content-Transfer-Encoding: 7bit X-archive-position: 10411 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: a.p.zijlstra@chello.nl Precedence: bulk X-list: xfs Content-Length: 2225 Lines: 53 On Wed, 2007-01-24 at 09:37 +1100, David Chinner wrote: > With the recent changes to cancel_dirty_pages(), XFS will > dump warnings in the syslog because it can truncate_inode_pages() > on dirty mapped pages. > > I've determined that this is indeed correct behaviour for XFS > as this can happen in the case of races on mmap()d files with > direct I/O. In this case when we do a direct I/O read, we > flush the dirty pages to disk, then truncate them out of the > page cache. Unfortunately, between the flush and the truncate > the mmap could dirty the page again. At this point we toss a > dirty page that is mapped. This sounds iffy, why not just leave the page in the pagecache if its mapped anyway? > None of the existing functions for truncating pages or invalidating > pages work in this situation. Invalidating a page only works for > non-dirty pages with non-dirty buffers, and they only work for > whole pages and XFS requires partial page truncation. > > On top of that the page invalidation functions don't actually > call into the filesystem to invalidate the page and so the filesystem > can't actually invalidate the page properly (e.g. do stuff based on > private buffer head flags). Have you seen the new launder_page() a_op? 
called from invalidate_inode_pages2_range() > So that leaves us needing to use truncate semantics and the problem > is that none of them unmap pages in a non-racy manner - if they > unmap pages they do it separately to the truncate of the page, > leading to races with mmap redirtying the page between the unmap and > the truncate ofthe page. Isn't there still a race where the page fault path doesn't yet lock the page and can just reinsert it? Nick's pagefault rework should rid us of this by always locking the page in the fault path. > Hence we need a truncate function that unmaps the pages while they > are locked for truncate in a similar fashion to > invalidate_inode_pages2_range(). The following patch (unchanged from > the last time it was sent) does this. The XFS changes are in a > second patch. > > The patch has been test on ia64 and x86-64 via XFSQA and a lot > of fsx mixing mmap and direct I/O operations. > > Signed-off-by: Dave Chinner From owner-xfs@oss.sgi.com Wed Jan 24 06:11:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 06:11:26 -0800 (PST) X-Spam-oss-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp104.mail.mud.yahoo.com (smtp104.mail.mud.yahoo.com [209.191.85.214]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0OEBHqw016067 for ; Wed, 24 Jan 2007 06:11:18 -0800 Received: (qmail 40225 invoked from network); 24 Jan 2007 13:43:42 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com.au; h=Received:X-YMail-OSG:Message-ID:Date:From:User-Agent:X-Accept-Language:MIME-Version:To:CC:Subject:References:In-Reply-To:Content-Type:Content-Transfer-Encoding; b=ww6UHRDFgrxKBffDWQUKiZstr2KZirT5coAfPzSNDavq+UJTTiwdXpTyy/z5HPDHVWXW54OSxEJeulHDkiZeXZ/g1OMHrCKbcoJD3BlhqgMFjMs/jZBE+A8FGK0id5V+5icMx+W07S/drm49NMkV3aZiqpgP0uuPZvAaQWzB03k= ; Received: from unknown (HELO ?192.168.0.1?) (nickpiggin@203.173.3.219 with plain) by smtp104.mail.mud.yahoo.com with SMTP; 24 Jan 2007 13:43:41 -0000 X-YMail-OSG: lHEGZOoVM1n2QDgHw0oNf0PZxd5UlnE3JMqgxlfAWMAoQYeUC.P2zvuR1gur0FZhawxgNvLjsw-- Message-ID: <45B7627B.8050202@yahoo.com.au> Date: Thu, 25 Jan 2007 00:43:23 +1100 From: Nick Piggin User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20051007 Debian/1.7.12-1 X-Accept-Language: en MIME-Version: 1.0 To: Peter Zijlstra CC: David Chinner , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS References: <20070123223702.GF33919298@melbourne.sgi.com> <1169640835.6189.14.camel@twins> In-Reply-To: <1169640835.6189.14.camel@twins> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10413 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nickpiggin@yahoo.com.au Precedence: bulk X-list: xfs Content-Length: 2168 Lines: 55 Peter Zijlstra wrote: > On Wed, 2007-01-24 at 09:37 +1100, David Chinner wrote: > >>With the recent changes to cancel_dirty_pages(), XFS will >>dump warnings in the syslog because it can truncate_inode_pages() >>on dirty mapped pages. >> >>I've determined that this is indeed correct behaviour for XFS >>as this can happen in the case of races on mmap()d files with >>direct I/O. In this case when we do a direct I/O read, we >>flush the dirty pages to disk, then truncate them out of the >>page cache. 
Unfortunately, between the flush and the truncate >>the mmap could dirty the page again. At this point we toss a >>dirty page that is mapped. > > > This sounds iffy, why not just leave the page in the pagecache if its > mapped anyway? And why not just leave it in the pagecache and be done with it? All you need is to do a writeout before a direct IO read, which is what generic dio code does. I guess you'll say that direct writes still need to remove pages, but in that case you'll either have to live with some racyness (which is what the generic code does), or have a higher level synchronisation to prevent buffered + direct IO writes I suppose? >>None of the existing functions for truncating pages or invalidating >>pages work in this situation. Invalidating a page only works for >>non-dirty pages with non-dirty buffers, and they only work for >>whole pages and XFS requires partial page truncation. >> >>On top of that the page invalidation functions don't actually >>call into the filesystem to invalidate the page and so the filesystem >>can't actually invalidate the page properly (e.g. do stuff based on >>private buffer head flags). > > > Have you seen the new launder_page() a_op? called from > invalidate_inode_pages2_range() It would have been nice to make that one into a more potentially useful generic callback. But why was it introduced, exactly? I can't tell from the code or the discussion why NFS couldn't start the IO, and signal the caller to wait_on_page_writeback and retry? That seemed to me like the convetional fix. -- SUSE Labs, Novell Inc. Send instant messages to your online friends http://au.messenger.yahoo.com From owner-xfs@oss.sgi.com Wed Jan 24 06:41:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 06:41:11 -0800 (PST) X-Spam-oss-Status: No, score=-1.7 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from amsfep11-int.chello.nl (amsfep19-int.chello.nl [213.46.243.16] (may be forged)) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0OEf2qw022475 for ; Wed, 24 Jan 2007 06:41:04 -0800 Received: from [192.168.0.111] (really [62.194.129.232]) by amsfep11-int.chello.nl (InterMail vM.6.01.04.04 201-2131-118-104-20050224) with ESMTP id <20070124144006.UFBT1100.amsfep11-int.chello.nl@[192.168.0.111]>; Wed, 24 Jan 2007 15:40:06 +0100 Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS From: Peter Zijlstra To: Nick Piggin Cc: David Chinner , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org In-Reply-To: <45B7627B.8050202@yahoo.com.au> References: <20070123223702.GF33919298@melbourne.sgi.com> <1169640835.6189.14.camel@twins> <45B7627B.8050202@yahoo.com.au> Content-Type: text/plain Date: Wed, 24 Jan 2007 15:40:04 +0100 Message-Id: <1169649604.6189.27.camel@twins> Mime-Version: 1.0 X-Mailer: Evolution 2.8.1 Content-Transfer-Encoding: 7bit X-archive-position: 10414 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: a.p.zijlstra@chello.nl Precedence: bulk X-list: xfs Content-Length: 1388 Lines: 39 On Thu, 2007-01-25 at 00:43 +1100, Nick Piggin wrote: > > Have you seen the new launder_page() a_op? called from > > invalidate_inode_pages2_range() > > It would have been nice to make that one into a more potentially > useful generic callback. That can still be done when the need arises, right? > But why was it introduced, exactly? 
I can't tell from the code or > the discussion why NFS couldn't start the IO, and signal the caller > to wait_on_page_writeback and retry? That seemed to me like the > convetional fix. to quote a bit: On Tue, 19 Dec 2006 18:19:38 -0500 Trond Myklebust wrote: > NFS: Fix race in nfs_release_page() > > invalidate_inode_pages2() may set the dirty bit on a page owing to the call > to unmap_mapping_range() after the page was locked. In order to fix this, > NFS has hooked the releasepage() method. This, however leads to deadlocks > in other parts of the VM. and: > > Now, arguably the VM shouldn't be calling try_to_release_page() with > > __GFP_FS when it's holding a lock on a page. > > > > But otoh, NFS should never be running lock_page() within nfs_release_page() > > against the page which was passed into nfs_release_page(). It'll deadlock > > for sure. > > The reason why it is happening is that the last dirty page from that > inode gets cleaned, resulting in a call to dput(). From owner-xfs@oss.sgi.com Wed Jan 24 08:45:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 08:45:07 -0800 (PST) X-Spam-oss-Status: No, score=-1.8 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from internal-mail-relay1.corp.sgi.com (internal-mail-relay1.corp.sgi.com [198.149.32.52]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0OGj0qw024255 for ; Wed, 24 Jan 2007 08:45:01 -0800 Received: from [134.15.160.4] (vpn-emea-sw-emea-160-4.emea.sgi.com [134.15.160.4]) by internal-mail-relay1.corp.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id l0OGi5bj72860584; Wed, 24 Jan 2007 08:44:05 -0800 (PST) Message-ID: <45B78CD4.1060400@sgi.com> Date: Wed, 24 Jan 2007 16:44:04 +0000 From: Lachlan McIlroy Reply-To: lachlan@sgi.com Organization: SGI User-Agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.7.12) Gecko/20050920 X-Accept-Language: en-us, en MIME-Version: 1.0 To: David Chinner CC: xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: Fix sub-page zeroing for buffered writes into unwritten extents References: <20070123224704.GH33919298@melbourne.sgi.com> In-Reply-To: <20070123224704.GH33919298@melbourne.sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10415 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs Content-Length: 1215 Lines: 39 Dave, I'm trying to understand what the sequence of events is here. If we write to an unwritten extent then will __xfs_get_blocks() be called with create=1 and flags=BMAPI_WRITE? And calling bhv_vop_bmap() with flags set to BMAPI_WRITE will cause xfs_iomap() to set iomap_flags to IOMAP_NEW? The combination of create=1 and iomap_flags=IOMAP_NEW in __xfs_get_blocks() should result in calling set_buffer_new(), right? I must be missing something... Lachlan David Chinner wrote: > Simple test case: > > prealloc large file > write 3000 bytes to the middle of the file > read back file > > The data in the block where the 3000 bytes was written has > non-zero garbage around it both in memory and on disk. > > The problem is a buffer mapping problem. When we copy data > into an unwritten buffer, we have the create flag set which > means we map the buffer. We then mark the buffer as unwritten, > and do some more checks. 
Because the buffer is mapped, we do > not set the buffer_new() flag on the buffer, which means when > we return to the generic code, it does not do sub-block zeroing > of the unwritten areas of the block. > > The following patch fixes the problem. Comments? > > Cheers, > > Dave. From owner-xfs@oss.sgi.com Wed Jan 24 08:53:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 08:53:11 -0800 (PST) X-Spam-oss-Status: No, score=-1.0 required=5.0 tests=AWL,BAYES_50, J_CHICKENPOX_72 autolearn=no version=3.2.0-pre1-r497472 Received: from moving-picture.com (mpc-26.sohonet.co.uk [193.203.82.251]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0OGr2qw026218 for ; Wed, 24 Jan 2007 08:53:05 -0800 Received: from minion.mpc.local ([172.16.11.112] helo=moving-picture.com) by moving-picture.com with esmtp (Exim 4.43) id 1H9kzB-0003GM-KA for xfs@oss.sgi.com; Wed, 24 Jan 2007 16:28:33 +0000 Message-ID: <45B78931.6030809@moving-picture.com> Date: Wed, 24 Jan 2007 16:28:33 +0000 From: James Pearson User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040524 X-Accept-Language: en-us, en MIME-Version: 1.0 To: xfs@oss.sgi.com Subject: Re: how to sync / commit data to disk? Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Disclaimer: This email and any attachments are confidential, may be legally X-Disclaimer: privileged and intended solely for the use of addressee. If you X-Disclaimer: are not the intended recipient of this message, any disclosure, X-Disclaimer: copying, distribution or any action taken in reliance on it is X-Disclaimer: strictly prohibited and may be unlawful. If you have received X-Disclaimer: this message in error, please notify the sender and delete all X-Disclaimer: copies from your system. X-Disclaimer: X-Disclaimer: Email may be susceptible to data corruption, interception and X-Disclaimer: unauthorised amendment, and we do not accept liability for any X-Disclaimer: such corruption, interception or amendment or the consequences X-Disclaimer: thereof. X-archive-position: 10416 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: james-p@moving-picture.com Precedence: bulk X-list: xfs Content-Length: 711 Lines: 26 >> grub is broken >> >> this comes up all the time, there are various work-arounds but it >> doesn't change the fact that GRUB IS BROKEN >> >> it would be nice if someone would just address it from that end >> >> > xfs_freeze -f >> > sync >> > xfs_freeze -u >> >> sync before freeze (actually, I'm not sure a sync there is necessary >> but it can't hurt) > > Its not necessary (would be a bug if so). > > FWIW, this hack can be better achieved in a filesystem independent way > by doing a remount,ro ... remount,rw instead of freeze/thaw. 
This is the way that I do it - based on info in an earlier posting on this list: James Pearson From owner-xfs@oss.sgi.com Wed Jan 24 14:26:06 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 14:26:13 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0OMQ1qw030816 for ; Wed, 24 Jan 2007 14:26:04 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA29737; Thu, 25 Jan 2007 09:24:58 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0OMOs7Y103303836; Thu, 25 Jan 2007 09:24:55 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0OMOpJ6101547176; Thu, 25 Jan 2007 09:24:51 +1100 (AEDT) Date: Thu, 25 Jan 2007 09:24:51 +1100 From: David Chinner To: Peter Zijlstra Cc: David Chinner , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS Message-ID: <20070124222451.GM33919298@melbourne.sgi.com> References: <20070123223702.GF33919298@melbourne.sgi.com> <1169640835.6189.14.camel@twins> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1169640835.6189.14.camel@twins> User-Agent: Mutt/1.4.2.1i X-archive-position: 10417 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 2461 Lines: 63 On Wed, Jan 24, 2007 at 01:13:55PM +0100, Peter Zijlstra wrote: > On Wed, 2007-01-24 at 09:37 +1100, David Chinner wrote: > > With the recent changes to cancel_dirty_pages(), XFS will > > dump warnings in the syslog because it can truncate_inode_pages() > > on dirty mapped pages. > > > > I've determined that this is indeed correct behaviour for XFS > > as this can happen in the case of races on mmap()d files with > > direct I/O. In this case when we do a direct I/O read, we > > flush the dirty pages to disk, then truncate them out of the > > page cache. Unfortunately, between the flush and the truncate > > the mmap could dirty the page again. At this point we toss a > > dirty page that is mapped. > > This sounds iffy, why not just leave the page in the pagecache if its > mapped anyway? Because then fsx fails. > > None of the existing functions for truncating pages or invalidating > > pages work in this situation. Invalidating a page only works for > > non-dirty pages with non-dirty buffers, and they only work for > > whole pages and XFS requires partial page truncation. > > > > On top of that the page invalidation functions don't actually > > call into the filesystem to invalidate the page and so the filesystem > > can't actually invalidate the page properly (e.g. do stuff based on > > private buffer head flags). > > Have you seen the new launder_page() a_op? called from > invalidate_inode_pages2_range() No, but we can't use invalidate_inode_pages2_range() because it doesn't handle partial pages. I tried that first and it left warnings in the syslog and fsx failed. 
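For readers following the thread, this is the hook being referred to: the launder_page() address_space operation added in the 2.6.20 timeframe, which invalidate_inode_pages2_range() calls on a locked dirty page so the filesystem can write the page back before it is invalidated. A minimal sketch of the interface only (not XFS or NFS code); example_write_one_page() is a hypothetical per-filesystem helper:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/writeback.h>

static int example_launder_page(struct page *page)
{
	struct writeback_control wbc = {
		.sync_mode	= WB_SYNC_ALL,
		.nr_to_write	= 1,
	};

	/* invalidate_inode_pages2_range() calls this with the page locked */
	if (!PageDirty(page))
		return 0;

	/* hypothetical helper: write this one page back synchronously */
	return example_write_one_page(page, &wbc);
}

static const struct address_space_operations example_aops = {
	/* ... readpage, writepage, releasepage, ... */
	.launder_page	= example_launder_page,
};

As Dave notes, that path still only deals with whole pages, which is why the truncate-style helper in this patch is needed for XFS's partial-page case.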
> > So that leaves us needing to use truncate semantics and the problem > > is that none of them unmap pages in a non-racy manner - if they > > unmap pages they do it separately to the truncate of the page, > > leading to races with mmap redirtying the page between the unmap and > > the truncate ofthe page. > > Isn't there still a race where the page fault path doesn't yet lock the > page and can just reinsert it? Yes, but it's a tiny race compared to the other mechanisms available. > Nick's pagefault rework should rid us of this by always locking the page > in the fault path. Yes, and that's what I'm relying on to fix the problem completely. invalidate_inode_pages2_range() needs this fix as well to be race free, so it's not like I'm introducing a new problem.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jan 24 14:40:10 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 14:40:16 -0800 (PST) X-Spam-oss-Status: No, score=-0.6 required=5.0 tests=AWL,BAYES_50 autolearn=ham version=3.2.0-pre1-r497472 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0OMe6qw001285 for ; Wed, 24 Jan 2007 14:40:10 -0800 Received: from edge (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 8CCE0AAC367; Thu, 25 Jan 2007 09:26:37 +1100 (EST) Subject: Re: [DISCUSS] xfs allocation bitmap method over linux raid From: Nathan Scott Reply-To: nscott@aconex.com To: "Raz Ben-Jehuda(caro)" Cc: xfs@oss.sgi.com In-Reply-To: <5d96567b0701232234y2ff15762sbd1aaada5c3a0a0@mail.gmail.com> References: <5d96567b0701232234y2ff15762sbd1aaada5c3a0a0@mail.gmail.com> Content-Type: text/plain Organization: Aconex Date: Thu, 25 Jan 2007 09:38:13 +1100 Message-Id: <1169678294.18017.200.camel@edge> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-archive-position: 10418 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs Content-Length: 2538 Lines: 68 Hi Raz, On Wed, 2007-01-24 at 08:34 +0200, Raz Ben-Jehuda(caro) wrote: > David Hello. > I have looked up in LKML and hopefully you are the one to ask in > regard to xfs file system in Linux. > My name is Raz and I work for a video servers company. OOC, which one? (would be nice to put an entry for your company on the http://oss.sgi.com/projects/xfs/users.html page). > These servers demand high throughput from the storage. > We applied XFS file system on our machines. > > A video server reads a file in a sequential manner. So, if a Do you write the file sequentially? Buffered or direct writes? > file extent size is not a factor of the stripe unit size a sequential > read over a raid would break into several small pieces which > is undesirable for performance. > > I have been examining the bitmap of a file over Linux raid5. I've found that, in combination with Jens Axboe's blktrace toolkit to be very useful - if you have a sufficiently recent kernel, I'd highly recommend you check out blktrace, it should help you alot. (bmap == block map, theres no bitmap involved) > According to the documentation XFS tries to align a file on > stripe unit size. > > What I have done is to fix the bitmap allocation method during > the writing to be aligned by the stripe unit size. 
Thats not quite what the patch does, FWIW - it does two things: - forces allocations to be stripe unit sized (not aligned) - and, er, removes some of the per-inode extsize hint code :) > /d1/rt/kernels/linux-2.6.17-UNI/fs/xfs/xfs_iomap.c > linux-2.6.17-UNI/fs/xfs/xfs_iomap.c > --- /d1/rt/kernels/linux-2.6.17-UNI/fs/xfs/xfs_iomap.c 2006-06-18 > 01:49:35.000000000 +0000 > +++ linux-2.6.17-UNI/fs/xfs/xfs_iomap.c 2006-12-26 14:11:02.000000000 +0000 > @@ -441,8 +441,8 @@ > if (unlikely(rt)) { > if (!(extsz = ip->i_d.di_extsize)) > extsz = mp->m_sb.sb_rextsize; > - } else { > - extsz = ip->i_d.di_extsize; > + } else { > + extsz = mp->m_dalign; // raz fix alignment to raid stripe unit > } The real question is, why are your initial writes not being affected by the code in xfs_iomap_eof_align_last_fsb which rounds requests to a stripe unit boundary? Provided you are writing sequentially, you should be seeing xfs_iomap_eof_want_preallocate return true, then later doing stripe unit alignment in xfs_iomap_eof_align_last_fsb (because prealloc got set earlier) ... can you trace your requests through the routines you've modified and find why this is _not_ happening? cheers. -- Nathan From owner-xfs@oss.sgi.com Wed Jan 24 14:48:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 14:48:11 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0OMm1qw003023 for ; Wed, 24 Jan 2007 14:48:03 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA00406; Thu, 25 Jan 2007 09:47:00 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0OMkv7Y102739401; Thu, 25 Jan 2007 09:46:58 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0OMksLp99969928; Thu, 25 Jan 2007 09:46:54 +1100 (AEDT) Date: Thu, 25 Jan 2007 09:46:54 +1100 From: David Chinner To: Nick Piggin Cc: Peter Zijlstra , David Chinner , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS Message-ID: <20070124224654.GN33919298@melbourne.sgi.com> References: <20070123223702.GF33919298@melbourne.sgi.com> <1169640835.6189.14.camel@twins> <45B7627B.8050202@yahoo.com.au> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45B7627B.8050202@yahoo.com.au> User-Agent: Mutt/1.4.2.1i X-archive-position: 10419 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 2255 Lines: 59 On Thu, Jan 25, 2007 at 12:43:23AM +1100, Nick Piggin wrote: > Peter Zijlstra wrote: > >On Wed, 2007-01-24 at 09:37 +1100, David Chinner wrote: > > > >>With the recent changes to cancel_dirty_pages(), XFS will > >>dump warnings in the syslog because it can truncate_inode_pages() > >>on dirty mapped pages. > >> > >>I've determined that this is indeed correct behaviour for XFS > >>as this can happen in the case of races on mmap()d files with > >>direct I/O. In this case when we do a direct I/O read, we > >>flush the dirty pages to disk, then truncate them out of the > >>page cache. 
Unfortunately, between the flush and the truncate > >>the mmap could dirty the page again. At this point we toss a > >>dirty page that is mapped. > > > > > >This sounds iffy, why not just leave the page in the pagecache if its > >mapped anyway? > > And why not just leave it in the pagecache and be done with it? because what is in cache is then not coherent with what is on disk, and a direct read is supposed to read the data that is present in the file at the time it is issued. > All you need is to do a writeout before a direct IO read, which is > what generic dio code does. No, that's not good enough - after writeout but before the direct I/O read is issued a process can fault the page and dirty it. If you do a direct read, followed by a buffered read you should get the same data. The only way to guarantee this is to chuck out any cached pages across the range of the direct I/O so they are fetched again from disk on the next buffered I/O. i.e. coherent at the time the direct I/O is issued. > I guess you'll say that direct writes still need to remove pages, Yup. > but in that case you'll either have to live with some racyness > (which is what the generic code does), or have a higher level > synchronisation to prevent buffered + direct IO writes I suppose? The XFS inode iolock - direct I/O writes take it shared, buffered writes takes it exclusive - so you can't do both at once. Buffered reads take is shared, which is another reason why we need to purge the cache on direct I/O writes - they can operate concurrently (and coherently) with buffered reads. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jan 24 15:00:00 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 15:00:04 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0OMxvqw005536 for ; Wed, 24 Jan 2007 14:59:59 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA00653; Thu, 25 Jan 2007 09:58:55 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0OMws7Y103245037; Thu, 25 Jan 2007 09:58:54 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0OMwqhI103224290; Thu, 25 Jan 2007 09:58:52 +1100 (AEDT) Date: Thu, 25 Jan 2007 09:58:52 +1100 From: David Chinner To: "Raz Ben-Jehuda(caro)" Cc: dgc@sgi.com, xfs@oss.sgi.com Subject: Re: [DISCUSS] xfs allocation bitmap method over linux raid Message-ID: <20070124225852.GO33919298@melbourne.sgi.com> References: <5d96567b0701232234y2ff15762sbd1aaada5c3a0a0@mail.gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <5d96567b0701232234y2ff15762sbd1aaada5c3a0a0@mail.gmail.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10420 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 2589 Lines: 75 On Wed, Jan 24, 2007 at 08:34:22AM +0200, Raz Ben-Jehuda(caro) wrote: > David Hello. > I have looked up in LKML and hopefully you are the one to ask in > regard to xfs file system in Linux. 
> My name is Raz and I work for a video servers company. > These servers demand high throughput from the storage. > We applied XFS file system on our machines. > > A video server reads a file in a sequential manner. So, if a > file extent size is not a factor of the stripe unit size a sequential > read over a raid would break into several small pieces which > is undesirable for performance. > > I have been examining the bitmap of a file over Linux raid5. > According to the documentation XFS tries to align a file on > stripe unit size. Yup. > What I have done is to fix the bitmap allocation method during > the writing to be aligned by the stripe unit size. > The thing is , though this seems to work , I do not know whether I > missed something. > > The bellow is a patch (a mere two lines) i have applied to the > file system and I would be really grateful to have your opinion. > > > diff -ru --exclude='*.o' > /d1/rt/kernels/linux-2.6.17-UNI/fs/xfs/xfs_iomap.c > linux-2.6.17-UNI/fs/xfs/xfs_iomap.c > --- /d1/rt/kernels/linux-2.6.17-UNI/fs/xfs/xfs_iomap.c 2006-06-18 > 01:49:35.000000000 +0000 > +++ linux-2.6.17-UNI/fs/xfs/xfs_iomap.c 2006-12-26 14:11:02.000000000 +0000 > @@ -441,8 +441,8 @@ > if (unlikely(rt)) { > if (!(extsz = ip->i_d.di_extsize)) > extsz = mp->m_sb.sb_rextsize; > - } else { > - extsz = ip->i_d.di_extsize; > + } else { > + extsz = mp->m_dalign; // raz fix alignment to raid stripe unit > } > > isize = ip->i_d.di_size; > @@ -663,7 +663,7 @@ > if (!(extsz = ip->i_d.di_extsize)) > extsz = mp->m_sb.sb_rextsize; > } else { > - extsz = ip->i_d.di_extsize; > + extsz = mp->m_dalign; // raz fix alignment to raid stripe unit > } > > offset_fsb = XFS_B_TO_FSBT(mp, offset); No, that changes the default behaviour of XFS and breaks the extent allocation size hint code, which is what you should be using to do this. i.e: # xfs_io -c "chattr -R +e +E" -c "extsize " /path/to/mnt Will set the inode extent size on all new files and directories in the filesystem to . You'll get a bunch of errors from this command because you cannot change the extsize of a file that already has extents allocated to it, so it's best to apply this right after mkfs when the filesystem is empty. Cheers, Dave. 
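For completeness, the same extent size hint can also be set programmatically on a freshly created, still extent-less file. A minimal sketch, assuming the fsxattr ioctls exposed through the xfsprogs <xfs/xfs_fs.h> header; the value passed in would be whatever stripe unit your RAID configuration uses:

#include <xfs/xfs_fs.h>
#include <sys/ioctl.h>

/* Set the XFS extent size allocation hint on a file with no extents yet. */
static int set_extsize_hint(int fd, unsigned int extsize_bytes)
{
	struct fsxattr fsx;

	if (ioctl(fd, XFS_IOC_FSGETXATTR, &fsx) < 0)
		return -1;

	fsx.fsx_xflags |= XFS_XFLAG_EXTSIZE;	/* honour the hint on this file */
	fsx.fsx_extsize = extsize_bytes;	/* e.g. the raid stripe unit */

	return ioctl(fd, XFS_IOC_FSSETXATTR, &fsx);
}

As with the xfs_io command above, this fails on files that already have extents allocated, so it has to be done at create time.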
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jan 24 15:38:13 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 15:38:18 -0800 (PST) X-Spam-oss-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00, J_CHICKENPOX_23,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0ONcBqw013516 for ; Wed, 24 Jan 2007 15:38:13 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id EB9311A00052F; Wed, 24 Jan 2007 18:37:16 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id D9D02A00226A; Wed, 24 Jan 2007 18:37:16 -0500 (EST) Date: Wed, 24 Jan 2007 18:37:15 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Chuck Ebbert cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com, Neil Brown Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) In-Reply-To: <45B5261B.1050104@redhat.com> Message-ID: References: <45B5261B.1050104@redhat.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10421 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 3124 Lines: 74 On Mon, 22 Jan 2007, Chuck Ebbert wrote: > Justin Piszcz wrote: > > My .config is attached, please let me know if any other information is > > needed and please CC (lkml) as I am not on the list, thanks! > > > > Running Kernel 2.6.19.2 on a MD RAID5 volume. Copying files over Samba to > > the RAID5 running XFS. > > > > Any idea what happened here? 
> > > > [473795.214705] BUG: unable to handle kernel paging request at virtual > > address fffb92b0 > > [473795.214715] printing eip: > > [473795.214718] c0358b14 > > [473795.214721] *pde = 00003067 > > [473795.214723] *pte = 00000000 > > [473795.214726] Oops: 0000 [#1] > > [473795.214729] PREEMPT SMP [473795.214736] CPU: 0 > > [473795.214737] EIP: 0060:[] Not tainted VLI > > [473795.214738] EFLAGS: 00010286 (2.6.19.2 #1) > > [473795.214746] EIP is at copy_data+0x6c/0x179 > > [473795.214750] eax: 00000000 ebx: 00001000 ecx: 00000354 edx: > > fffb9000 > > [473795.214754] esi: fffb92b0 edi: da86c2b0 ebp: 00001000 esp: > > f7927dc4 > > [473795.214757] ds: 007b es: 007b ss: 0068 > > [473795.214761] Process md4_raid5 (pid: 1305, ti=f7926000 task=f7ea9030 > > task.ti=f7926000) > > [473795.214765] Stack: c1ba7c40 00000003 f5538c80 00000001 da86c000 00000009 > > 00000000 0000006c [473795.214790] 00001000 da8536a8 aa6fee90 f5538c80 > > 00000190 c0358d00 aa6fee88 0000ffff [473795.214863] d7c5794c 00000001 > > da853488 f6fbec70 f6fbebc0 00000001 00000005 00000001 [473795.214876] Call > > Trace: > > [473795.214880] [] compute_parity5+0xdf/0x497 > > [473795.214887] [] handle_stripe+0x930/0x2986 > > [473795.214892] [] find_busiest_group+0x124/0x4fd > > [473795.214898] [] release_stripe+0x21/0x2e > > [473795.214902] [] raid5d+0x100/0x161 > > [473795.214907] [] md_thread+0x40/0x103 > > [473795.214912] [] autoremove_wake_function+0x0/0x4b > > [473795.214917] [] md_thread+0x0/0x103 > > [473795.214922] [] kthread+0xfc/0x100 > > [473795.214926] [] kthread+0x0/0x100 > > [473795.214930] [] kernel_thread_helper+0x7/0x1c > > [473795.214935] ======================= > > [473795.214938] Code: 14 39 d1 0f 8d 10 01 00 00 89 c8 01 c0 01 c8 01 c0 01 > > c0 89 44 24 1c eb 51 89 d9 c1 e9 02 8b 7c 24 10 01 f7 8b 44 24 18 8d 34 02 > > a5 89 d9 83 e1 03 74 02 f3 a4 c7 44 24 04 03 00 00 00 89 14 > > [473795.215017] EIP: [] copy_data+0x6c/0x179 SS:ESP 0068:f7927dc4 > > > Without digging too deeply, I'd say you've hit the same bug Sami Farin and > others > have reported starting with 2.6.19: pages mapped with kmap_atomic() become > unmapped > during memcpy() or similar operations. Try disabling preempt -- that seems to > be the > common factor. > > > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > > After I run some other tests, I am going to re-run this test and see if it OOPSes again with PREEMPT off. Justin. 
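For context, the pattern Chuck's diagnosis refers to looks roughly like the sketch below (illustrative only, not the actual raid5 copy_data() code). kmap_atomic() returns a per-CPU mapping that is only valid while the map/copy/unmap sequence stays atomic on that CPU; the reports against 2.6.19 with CONFIG_PREEMPT suggest the mapping gets torn down mid-copy, so the memcpy() lands on an unmapped address and faults just like the oops above.

#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Copy part of one (possibly highmem) page into another using the
 * 2.6.19-era two-argument kmap_atomic() API.  No sleeping is allowed
 * between the map and unmap calls.
 */
static void copy_page_chunk(struct page *dst, struct page *src,
			    unsigned int offset, unsigned int len)
{
	void *d = kmap_atomic(dst, KM_USER0);
	void *s = kmap_atomic(src, KM_USER1);

	memcpy(d + offset, s + offset, len);

	kunmap_atomic(s, KM_USER1);
	kunmap_atomic(d, KM_USER0);
}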
From owner-xfs@oss.sgi.com Wed Jan 24 15:40:02 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 15:40:08 -0800 (PST) X-Spam-oss-Status: No, score=-2.4 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0ONe1qw014102 for ; Wed, 24 Jan 2007 15:40:01 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 391511A00052F; Wed, 24 Jan 2007 18:39:07 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 323B7A00226A; Wed, 24 Jan 2007 18:39:07 -0500 (EST) Date: Wed, 24 Jan 2007 18:39:07 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Pavel Machek cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: Message-ID: References: <20070122133735.GB4493@ucw.cz> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10422 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 1813 Lines: 51 > Is it highmem-related? Can you try it with mem=256M? Bad idea, the kernel crashes & burns when I use mem=256, I had to boot 2.6.20-rc5-6 single to get back into my machine, very nasty. Remember I use an onboard graphics controller that has 128MB of RAM allocated to it and I believe the ICH8 chipset also uses some memory, in any event mem=256 causes the machine to lockup before it can even get to the boot/init processes, the two leds on the keyboard were blinking, caps lock and scroll lock and I saw no console at all! Justin. On Mon, 22 Jan 2007, Justin Piszcz wrote: > > > On Mon, 22 Jan 2007, Pavel Machek wrote: > > > On Sun 2007-01-21 14:27:34, Justin Piszcz wrote: > > > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke > > > the OOM killer and kill all of my processes? > > > > > > Doing this on a single disk 2.6.19.2 is OK, no issues. However, this > > > happens every time! > > > > > > Anything to try? Any other output needed? Can someone shed some light on > > > this situation? > > > > Is it highmem-related? Can you try it with mem=256M? > > > > Pavel > > -- > > (english) http://www.livejournal.com/~pavelmachek > > (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html > > - > > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > > the body of a message to majordomo@vger.kernel.org > > More majordomo info at http://vger.kernel.org/majordomo-info.html > > > > I will give this a try later or tomorrow, I cannot have my machine crash > at the moment. > > Also, the onboard video on the Intel 965 chipset uses 128MB, not sure if > that has anything to do with it because after the system kill -9's all the > processes etc, my terminal looks like garbage. > > Justin. 
> > From owner-xfs@oss.sgi.com Wed Jan 24 15:41:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 15:41:10 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0ONf1qw014548 for ; Wed, 24 Jan 2007 15:41:02 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id BFBFE1A00052F; Wed, 24 Jan 2007 18:40:07 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id B2614A00226A; Wed, 24 Jan 2007 18:40:07 -0500 (EST) Date: Wed, 24 Jan 2007 18:40:07 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Andrew Morton cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: <20070122115703.97ed54f3.akpm@osdl.org> Message-ID: References: <20070122115703.97ed54f3.akpm@osdl.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10423 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 3581 Lines: 92 On Mon, 22 Jan 2007, Andrew Morton wrote: > > On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz wrote: > > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke > > the OOM killer and kill all of my processes? > > What's that? Software raid or hardware raid? If the latter, which driver? > > > Doing this on a single disk 2.6.19.2 is OK, no issues. However, this > > happens every time! > > > > Anything to try? Any other output needed? Can someone shed some light on > > this situation? > > > > Thanks. 
> > > > > > The last lines of vmstat 1 (right before it kill -9'd my shell/ssh) > > > > procs -----------memory---------- ---swap-- -----io---- -system-- > > ----cpu---- > > r b swpd free buff cache si so bi bo in cs us sy id > > wa > > 0 7 764 50348 12 1269988 0 0 53632 172 1902 4600 1 8 > > 29 62 > > 0 7 764 49420 12 1260004 0 0 53632 34368 1871 6357 2 11 > > 48 40 > > The wordwrapping is painful :( > > > > > The last lines of dmesg: > > [ 5947.199985] lowmem_reserve[]: 0 0 0 > > [ 5947.199992] DMA: 0*4kB 1*8kB 1*16kB 0*32kB 1*64kB 1*128kB 1*256kB > > 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3544kB > > [ 5947.200010] Normal: 1*4kB 0*8kB 1*16kB 1*32kB 0*64kB 1*128kB 0*256kB > > 1*512kB 0*1024kB 1*2048kB 0*4096kB = 2740kB > > [ 5947.200035] HighMem: 98*4kB 35*8kB 9*16kB 69*32kB 4*64kB 1*128kB > > 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3664kB > > [ 5947.200052] Swap cache: add 789, delete 189, find 16/17, race 0+0 > > [ 5947.200055] Free swap = 2197628kB > > [ 5947.200058] Total swap = 2200760kB > > [ 5947.200060] Free swap: 2197628kB > > [ 5947.205664] 517888 pages of RAM > > [ 5947.205671] 288512 pages of HIGHMEM > > [ 5947.205673] 5666 reserved pages > > [ 5947.205675] 257163 pages shared > > [ 5947.205678] 600 pages swap cached > > [ 5947.205680] 88876 pages dirty > > [ 5947.205682] 115111 pages writeback > > [ 5947.205684] 5608 pages mapped > > [ 5947.205686] 49367 pages slab > > [ 5947.205688] 541 pages pagetables > > [ 5947.205795] Out of memory: kill process 1853 (named) score 9937 or a > > child > > [ 5947.205801] Killed process 1853 (named) > > [ 5947.206616] bash invoked oom-killer: gfp_mask=0x84d0, order=0, > > oomkilladj=0 > > [ 5947.206621] [] out_of_memory+0x17b/0x1b0 > > [ 5947.206631] [] __alloc_pages+0x29c/0x2f0 > > [ 5947.206636] [] __pte_alloc+0x1d/0x90 > > [ 5947.206643] [] copy_page_range+0x357/0x380 > > [ 5947.206649] [] copy_process+0x765/0xfc0 > > [ 5947.206655] [] alloc_pid+0x1b9/0x280 > > [ 5947.206662] [] do_fork+0x79/0x1e0 > > [ 5947.206674] [] do_pipe+0x5f/0xc0 > > [ 5947.206680] [] sys_clone+0x36/0x40 > > [ 5947.206686] [] syscall_call+0x7/0xb > > [ 5947.206691] [] __sched_text_start+0x853/0x950 > > [ 5947.206698] ======================= > > Important information from the oom-killing event is missing. Please send > it all. > > >From your earlier reports we have several hundred MB of ZONE_NORMAL memory > which has gone awol. > > Please include /proc/meminfo from after the oom-killing. > > Please work out what is using all that slab memory, via /proc/slabinfo. > > After the oom-killing, please see if you can free up the ZONE_NORMAL memory > via a few `echo 3 > /proc/sys/vm/drop_caches' commands. See if you can > work out what happened to the missing couple-of-hundred MB from > ZONE_NORMAL. > > Trying this now. 
From owner-xfs@oss.sgi.com Wed Jan 24 15:43:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 15:43:23 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0ONhGqw015645 for ; Wed, 24 Jan 2007 15:43:17 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id A01A11A00052F; Wed, 24 Jan 2007 18:42:22 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 9F1DDA00226A; Wed, 24 Jan 2007 18:42:22 -0500 (EST) Date: Wed, 24 Jan 2007 18:42:22 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Pavel Machek cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: Message-ID: References: <20070122133735.GB4493@ucw.cz> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10424 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 2031 Lines: 59 And FYI yes I used mem=256M just as you said, not mem=256. Justin. On Wed, 24 Jan 2007, Justin Piszcz wrote: > > Is it highmem-related? Can you try it with mem=256M? > > Bad idea, the kernel crashes & burns when I use mem=256, I had to boot > 2.6.20-rc5-6 single to get back into my machine, very nasty. Remember I > use an onboard graphics controller that has 128MB of RAM allocated to it > and I believe the ICH8 chipset also uses some memory, in any event mem=256 > causes the machine to lockup before it can even get to the boot/init > processes, the two leds on the keyboard were blinking, caps lock and > scroll lock and I saw no console at all! > > Justin. > > On Mon, 22 Jan 2007, Justin Piszcz wrote: > > > > > > > On Mon, 22 Jan 2007, Pavel Machek wrote: > > > > > On Sun 2007-01-21 14:27:34, Justin Piszcz wrote: > > > > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke > > > > the OOM killer and kill all of my processes? > > > > > > > > Doing this on a single disk 2.6.19.2 is OK, no issues. However, this > > > > happens every time! > > > > > > > > Anything to try? Any other output needed? Can someone shed some light on > > > > this situation? > > > > > > Is it highmem-related? Can you try it with mem=256M? > > > > > > Pavel > > > -- > > > (english) http://www.livejournal.com/~pavelmachek > > > (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html > > > - > > > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > > > the body of a message to majordomo@vger.kernel.org > > > More majordomo info at http://vger.kernel.org/majordomo-info.html > > > > > > > I will give this a try later or tomorrow, I cannot have my machine crash > > at the moment. > > > > Also, the onboard video on the Intel 965 chipset uses 128MB, not sure if > > that has anything to do with it because after the system kill -9's all the > > processes etc, my terminal looks like garbage. > > > > Justin. 
> > > > > > From owner-xfs@oss.sgi.com Wed Jan 24 16:06:30 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 16:06:38 -0800 (PST) X-Spam-oss-Status: No, score=-1.3 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp107.mail.mud.yahoo.com (smtp107.mail.mud.yahoo.com [209.191.85.217]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0P06Tqw020608 for ; Wed, 24 Jan 2007 16:06:30 -0800 Received: (qmail 69646 invoked from network); 25 Jan 2007 00:05:35 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com.au; h=Received:X-YMail-OSG:Message-ID:Date:From:User-Agent:X-Accept-Language:MIME-Version:To:CC:Subject:References:In-Reply-To:Content-Type:Content-Transfer-Encoding; b=UVpSNIVgtmAYt2nckF3fCq3Kbthu8KPVUW+ckBSx3JCoZyzQ1ubPTef2D4t4GRhBtyqs4nJT7k9ni3Uj/xMYZ6YZRUMBSaXUX6eciEdscS4UazEesF3UW0/cYbwNxjz3R5211NgUwbaWPyTMcaQRSTeBKwJSEa0raHqdgGiO/7U= ; Received: from unknown (HELO ?192.168.0.1?) (nickpiggin@203.173.3.219 with plain) by smtp107.mail.mud.yahoo.com with SMTP; 25 Jan 2007 00:05:34 -0000 X-YMail-OSG: Yygb12QVM1lgq7WFrc.EDwmOqtIELHor_sHnjJ3JNXS1VLAGmYzNeDWqIgo.Pv6aCZZSWh_Yj_8x.GjBsILBuaoba5wHPWPh3TbYrLIQ7UU2GnokD24- Message-ID: <45B7F43B.9060905@yahoo.com.au> Date: Thu, 25 Jan 2007 11:05:15 +1100 From: Nick Piggin User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20051007 Debian/1.7.12-1 X-Accept-Language: en MIME-Version: 1.0 To: Peter Zijlstra CC: David Chinner , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS References: <20070123223702.GF33919298@melbourne.sgi.com> <1169640835.6189.14.camel@twins> <45B7627B.8050202@yahoo.com.au> <1169649604.6189.27.camel@twins> In-Reply-To: <1169649604.6189.27.camel@twins> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10425 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nickpiggin@yahoo.com.au Precedence: bulk X-list: xfs Content-Length: 1739 Lines: 56 Peter Zijlstra wrote: > On Thu, 2007-01-25 at 00:43 +1100, Nick Piggin wrote: > > >>>Have you seen the new launder_page() a_op? called from >>>invalidate_inode_pages2_range() >> >>It would have been nice to make that one into a more potentially >>useful generic callback. > > > That can still be done when the need arises, right? Yeah I guess so. >>But why was it introduced, exactly? I can't tell from the code or >>the discussion why NFS couldn't start the IO, and signal the caller >>to wait_on_page_writeback and retry? That seemed to me like the >>convetional fix. > > > to quote a bit: > > On Tue, 19 Dec 2006 18:19:38 -0500 > Trond Myklebust wrote: > > >> NFS: Fix race in nfs_release_page() >> >> invalidate_inode_pages2() may set the dirty bit on a page owing to the call >> to unmap_mapping_range() after the page was locked. In order to fix this, >> NFS has hooked the releasepage() method. This, however leads to deadlocks >> in other parts of the VM. > > > and: > > >>>Now, arguably the VM shouldn't be calling try_to_release_page() with >>>__GFP_FS when it's holding a lock on a page. >>> >>>But otoh, NFS should never be running lock_page() within nfs_release_page() >>>against the page which was passed into nfs_release_page(). It'll deadlock >>>for sure. >> >>The reason why it is happening is that the last dirty page from that >>inode gets cleaned, resulting in a call to dput(). 
OK but what's the problem with just failing to release the page if it is dirty, I wonder? In the worst case, page reclaim will just end up doing a writeout to clean it. -- SUSE Labs, Novell Inc. Send instant messages to your online friends http://au.messenger.yahoo.com From owner-xfs@oss.sgi.com Wed Jan 24 16:11:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 16:11:17 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0P0BAqw022058 for ; Wed, 24 Jan 2007 16:11:11 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id E1E571A00052F; Wed, 24 Jan 2007 19:10:16 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id DA507A00226A; Wed, 24 Jan 2007 19:10:16 -0500 (EST) Date: Wed, 24 Jan 2007 19:10:16 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Andrew Morton cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: <20070122115703.97ed54f3.akpm@osdl.org> Message-ID: References: <20070122115703.97ed54f3.akpm@osdl.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10426 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 3856 Lines: 97 On Mon, 22 Jan 2007, Andrew Morton wrote: > > On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz wrote: > > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke > > the OOM killer and kill all of my processes? > > What's that? Software raid or hardware raid? If the latter, which driver? > > > Doing this on a single disk 2.6.19.2 is OK, no issues. However, this > > happens every time! > > > > Anything to try? Any other output needed? Can someone shed some light on > > this situation? > > > > Thanks. 
> > > > > > The last lines of vmstat 1 (right before it kill -9'd my shell/ssh) > > > > procs -----------memory---------- ---swap-- -----io---- -system-- > > ----cpu---- > > r b swpd free buff cache si so bi bo in cs us sy id > > wa > > 0 7 764 50348 12 1269988 0 0 53632 172 1902 4600 1 8 > > 29 62 > > 0 7 764 49420 12 1260004 0 0 53632 34368 1871 6357 2 11 > > 48 40 > > The wordwrapping is painful :( > > > > > The last lines of dmesg: > > [ 5947.199985] lowmem_reserve[]: 0 0 0 > > [ 5947.199992] DMA: 0*4kB 1*8kB 1*16kB 0*32kB 1*64kB 1*128kB 1*256kB > > 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3544kB > > [ 5947.200010] Normal: 1*4kB 0*8kB 1*16kB 1*32kB 0*64kB 1*128kB 0*256kB > > 1*512kB 0*1024kB 1*2048kB 0*4096kB = 2740kB > > [ 5947.200035] HighMem: 98*4kB 35*8kB 9*16kB 69*32kB 4*64kB 1*128kB > > 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3664kB > > [ 5947.200052] Swap cache: add 789, delete 189, find 16/17, race 0+0 > > [ 5947.200055] Free swap = 2197628kB > > [ 5947.200058] Total swap = 2200760kB > > [ 5947.200060] Free swap: 2197628kB > > [ 5947.205664] 517888 pages of RAM > > [ 5947.205671] 288512 pages of HIGHMEM > > [ 5947.205673] 5666 reserved pages > > [ 5947.205675] 257163 pages shared > > [ 5947.205678] 600 pages swap cached > > [ 5947.205680] 88876 pages dirty > > [ 5947.205682] 115111 pages writeback > > [ 5947.205684] 5608 pages mapped > > [ 5947.205686] 49367 pages slab > > [ 5947.205688] 541 pages pagetables > > [ 5947.205795] Out of memory: kill process 1853 (named) score 9937 or a > > child > > [ 5947.205801] Killed process 1853 (named) > > [ 5947.206616] bash invoked oom-killer: gfp_mask=0x84d0, order=0, > > oomkilladj=0 > > [ 5947.206621] [] out_of_memory+0x17b/0x1b0 > > [ 5947.206631] [] __alloc_pages+0x29c/0x2f0 > > [ 5947.206636] [] __pte_alloc+0x1d/0x90 > > [ 5947.206643] [] copy_page_range+0x357/0x380 > > [ 5947.206649] [] copy_process+0x765/0xfc0 > > [ 5947.206655] [] alloc_pid+0x1b9/0x280 > > [ 5947.206662] [] do_fork+0x79/0x1e0 > > [ 5947.206674] [] do_pipe+0x5f/0xc0 > > [ 5947.206680] [] sys_clone+0x36/0x40 > > [ 5947.206686] [] syscall_call+0x7/0xb > > [ 5947.206691] [] __sched_text_start+0x853/0x950 > > [ 5947.206698] ======================= > > Important information from the oom-killing event is missing. Please send > it all. > > >From your earlier reports we have several hundred MB of ZONE_NORMAL memory > which has gone awol. > > Please include /proc/meminfo from after the oom-killing. > > Please work out what is using all that slab memory, via /proc/slabinfo. > > After the oom-killing, please see if you can free up the ZONE_NORMAL memory > via a few `echo 3 > /proc/sys/vm/drop_caches' commands. See if you can > work out what happened to the missing couple-of-hundred MB from > ZONE_NORMAL. > > Running with PREEMPT OFF lets me copy the file!! The machine LAGS occasionally every 5-30-60 seconds or so VERY BADLY, talking 5-10 seconds of lag, but hey, it does not crash!! I will boot the older kernel with preempt on and see if I can get you that information you requested. Justin. 
From owner-xfs@oss.sgi.com Wed Jan 24 16:13:54 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 16:14:01 -0800 (PST) X-Spam-oss-Status: No, score=-0.8 required=5.0 tests=AWL,BAYES_20 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp110.mail.mud.yahoo.com (smtp110.mail.mud.yahoo.com [209.191.85.220]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0P0Dpqw022950 for ; Wed, 24 Jan 2007 16:13:52 -0800 Received: (qmail 95673 invoked from network); 25 Jan 2007 00:12:57 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com.au; h=Received:X-YMail-OSG:Message-ID:Date:From:User-Agent:X-Accept-Language:MIME-Version:To:CC:Subject:References:In-Reply-To:Content-Type:Content-Transfer-Encoding; b=SUG/0qvcxeFnLgrcwo+MdUKRKMnHVXMD1ZALUDcM+Ya7ZBVTClXe6UiXDLpAPx2VK7zB2nqabEsW6KO5jniY3OkpuYzWwKaLVbI/3GxELWldXsLPctfRQsL/MGmmic54bngsJzPy6sv5WZEqeYAZe3uZ7QGwPc8wH6Jl8e3aucI= ; Received: from unknown (HELO ?192.168.0.1?) (nickpiggin@203.173.3.219 with plain) by smtp110.mail.mud.yahoo.com with SMTP; 25 Jan 2007 00:12:56 -0000 X-YMail-OSG: vBQBlUgVM1nL5FVXzyr6L1deC41pJun8tVa9X3TepIrBDJXRYFfqjiFAwiwCOXg2g4MG_wDHEw-- Message-ID: <45B7F5F9.2070308@yahoo.com.au> Date: Thu, 25 Jan 2007 11:12:41 +1100 From: Nick Piggin User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20051007 Debian/1.7.12-1 X-Accept-Language: en MIME-Version: 1.0 To: David Chinner CC: Peter Zijlstra , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS References: <20070123223702.GF33919298@melbourne.sgi.com> <1169640835.6189.14.camel@twins> <45B7627B.8050202@yahoo.com.au> <20070124224654.GN33919298@melbourne.sgi.com> In-Reply-To: <20070124224654.GN33919298@melbourne.sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10427 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nickpiggin@yahoo.com.au Precedence: bulk X-list: xfs Content-Length: 2042 Lines: 49 David Chinner wrote: > On Thu, Jan 25, 2007 at 12:43:23AM +1100, Nick Piggin wrote: >>And why not just leave it in the pagecache and be done with it? > > > because what is in cache is then not coherent with what is on disk, > and a direct read is supposed to read the data that is present > in the file at the time it is issued. So after a writeout it will be coherent of course, so the point in question is what happens when someone comes in and dirties it at the worst possible moment? That relates to the paragraph below... >>All you need is to do a writeout before a direct IO read, which is >>what generic dio code does. > > > No, that's not good enough - after writeout but before the > direct I/O read is issued a process can fault the page and dirty > it. If you do a direct read, followed by a buffered read you should > get the same data. The only way to guarantee this is to chuck out > any cached pages across the range of the direct I/O so they are > fetched again from disk on the next buffered I/O. i.e. coherent > at the time the direct I/O is issued. ... so surely if you do a direct read followed by a buffered read, you should *not* get the same data if there has been some activity to modify that part of the file in the meantime (whether that be a buffered or direct write). 
>>but in that case you'll either have to live with some racyness >>(which is what the generic code does), or have a higher level >>synchronisation to prevent buffered + direct IO writes I suppose? > > > The XFS inode iolock - direct I/O writes take it shared, buffered > writes takes it exclusive - so you can't do both at once. Buffered > reads take is shared, which is another reason why we need to purge > the cache on direct I/O writes - they can operate concurrently > (and coherently) with buffered reads. Ah, I'm glad to see somebody cares about doing the right thing ;) Maybe I'll use XFS for my filesystems in future. -- SUSE Labs, Novell Inc. Send instant messages to your online friends http://au.messenger.yahoo.com From owner-xfs@oss.sgi.com Wed Jan 24 16:34:30 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 16:34:37 -0800 (PST) X-Spam-oss-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from amd.ucw.cz (gprs189-60.eurotel.cz [160.218.189.60]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0P0Y8qw031723 for ; Wed, 24 Jan 2007 16:34:29 -0800 Received: by amd.ucw.cz (Postfix, from userid 8) id DDB8F2C06C; Thu, 25 Jan 2007 01:32:42 +0100 (CET) Date: Thu, 25 Jan 2007 01:32:42 +0100 From: Pavel Machek To: Justin Piszcz Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 Message-ID: <20070125003242.GA23343@elf.ucw.cz> References: <20070122133735.GB4493@ucw.cz> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Warning: Reading this can be dangerous to your mental health. User-Agent: Mutt/1.5.11+cvs20060126 X-archive-position: 10428 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pavel@ucw.cz Precedence: bulk X-list: xfs Content-Length: 771 Lines: 18 Hi! > > Is it highmem-related? Can you try it with mem=256M? > > Bad idea, the kernel crashes & burns when I use mem=256, I had to boot > 2.6.20-rc5-6 single to get back into my machine, very nasty. Remember I > use an onboard graphics controller that has 128MB of RAM allocated to it > and I believe the ICH8 chipset also uses some memory, in any event mem=256 > causes the machine to lockup before it can even get to the boot/init > processes, the two leds on the keyboard were blinking, caps lock and > scroll lock and I saw no console at all! Okay, so try mem=700M or disable CONFIG_HIGHMEM or something. 
Pavel -- (english) http://www.livejournal.com/~pavelmachek (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html From owner-xfs@oss.sgi.com Wed Jan 24 16:36:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 16:36:55 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0P0aiqw032573 for ; Wed, 24 Jan 2007 16:36:47 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA03451; Thu, 25 Jan 2007 11:35:44 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0P0Zf7Y102597619; Thu, 25 Jan 2007 11:35:41 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0P0Zamf103216590; Thu, 25 Jan 2007 11:35:36 +1100 (AEDT) Date: Thu, 25 Jan 2007 11:35:36 +1100 From: David Chinner To: Nick Piggin Cc: David Chinner , Peter Zijlstra , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS Message-ID: <20070125003536.GS33919298@melbourne.sgi.com> References: <20070123223702.GF33919298@melbourne.sgi.com> <1169640835.6189.14.camel@twins> <45B7627B.8050202@yahoo.com.au> <20070124224654.GN33919298@melbourne.sgi.com> <45B7F5F9.2070308@yahoo.com.au> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45B7F5F9.2070308@yahoo.com.au> User-Agent: Mutt/1.4.2.1i X-archive-position: 10430 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 2142 Lines: 53 On Thu, Jan 25, 2007 at 11:12:41AM +1100, Nick Piggin wrote: > David Chinner wrote: > >On Thu, Jan 25, 2007 at 12:43:23AM +1100, Nick Piggin wrote: > > >>And why not just leave it in the pagecache and be done with it? > > > > > >because what is in cache is then not coherent with what is on disk, > >and a direct read is supposed to read the data that is present > >in the file at the time it is issued. > > So after a writeout it will be coherent of course, so the point in > question is what happens when someone comes in and dirties it at the > worst possible moment? That relates to the paragraph below... > > >>All you need is to do a writeout before a direct IO read, which is > >>what generic dio code does. > > > > > >No, that's not good enough - after writeout but before the > >direct I/O read is issued a process can fault the page and dirty > >it. If you do a direct read, followed by a buffered read you should > >get the same data. The only way to guarantee this is to chuck out > >any cached pages across the range of the direct I/O so they are > >fetched again from disk on the next buffered I/O. i.e. coherent > >at the time the direct I/O is issued. > > ... so surely if you do a direct read followed by a buffered read, > you should *not* get the same data if there has been some activity > to modify that part of the file in the meantime (whether that be a > buffered or direct write). Right. And that is what happens in XFS because it purges the caches on direct I/O and forces data to be re-read from disk. 
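In sketch form, the flush-then-invalidate sequence being discussed is the one below (an illustration of the idea, not the generic or XFS code). The whole thread hinges on the window between the two steps, where an mmap write fault can dirty a page again before it is thrown out:

#include <linux/fs.h>
#include <linux/pagemap.h>

static int flush_and_purge(struct address_space *mapping,
			   loff_t start, loff_t end)
{
	int err;

	/* push dirty cached pages out to disk first */
	err = filemap_write_and_wait(mapping);
	if (err)
		return err;

	/*
	 * window: an mmap write fault can re-dirty a page here, which
	 * is the race that leads to tossing dirty mapped pages
	 */

	/* drop the cached pages so later buffered I/O re-reads the disk */
	return invalidate_inode_pages2_range(mapping,
			start >> PAGE_CACHE_SHIFT,
			end >> PAGE_CACHE_SHIFT);
}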
Effectively, if you are mixing direct I/O with other types of I/O (buffered or mmap) then the application really needs to be certain it is doing the right thing because there are races that can occur below the filesystem. All we care about in the filesystem is that what we cache is the same as what is on disk, and that implies that direct I/O needs to purge the cache regardless of the state it is in.... Hence we need to unmap pages and use truncate semantics on them to ensure they are removed from the page cache.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jan 24 16:37:19 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 16:37:25 -0800 (PST) X-Spam-oss-Status: No, score=-1.7 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp103.mail.mud.yahoo.com (smtp103.mail.mud.yahoo.com [209.191.85.213]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0P0bGqw000327 for ; Wed, 24 Jan 2007 16:37:19 -0800 Received: (qmail 32861 invoked from network); 25 Jan 2007 00:36:18 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com.au; h=Received:X-YMail-OSG:Message-ID:Date:From:User-Agent:X-Accept-Language:MIME-Version:To:CC:Subject:References:In-Reply-To:Content-Type:Content-Transfer-Encoding; b=sPaoL1ac3dc1Z7tZ8tjVBZsfFkRN8XGJB81n2Mz2ZxUzPfLiF8nF2Xu6BeB33NzRjW9DWvhXxEDHj3luY60f4o/3a2INP5qr9kKF53auHu+RvlzXc/1zT5i4TTFYFAEjdZzv8dTc3M4ktQmyq0zI+eiG4kd0xt+rlWauBtsvjEM= ; Received: from unknown (HELO ?192.168.0.1?) (nickpiggin@203.173.3.219 with plain) by smtp103.mail.mud.yahoo.com with SMTP; 25 Jan 2007 00:36:16 -0000 X-YMail-OSG: 4Ona76sVM1ky.Jp39_.OBcWeEAiZ6gDARngn0KdR48NuIq7j44EzMPeub5DY0E13i1AQAn2TTA-- Message-ID: <45B7FB71.5030603@yahoo.com.au> Date: Thu, 25 Jan 2007 11:36:01 +1100 From: Nick Piggin User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20051007 Debian/1.7.12-1 X-Accept-Language: en MIME-Version: 1.0 To: Justin Piszcz CC: Andrew Morton , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 References: <20070122115703.97ed54f3.akpm@osdl.org> In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10432 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nickpiggin@yahoo.com.au Precedence: bulk X-list: xfs Content-Length: 941 Lines: 28 Justin Piszcz wrote: > > On Mon, 22 Jan 2007, Andrew Morton wrote: >>After the oom-killing, please see if you can free up the ZONE_NORMAL memory >>via a few `echo 3 > /proc/sys/vm/drop_caches' commands. See if you can >>work out what happened to the missing couple-of-hundred MB from >>ZONE_NORMAL. >> > > Running with PREEMPT OFF lets me copy the file!! The machine LAGS > occasionally every 5-30-60 seconds or so VERY BADLY, talking 5-10 seconds > of lag, but hey, it does not crash!! I will boot the older kernel with > preempt on and see if I can get you that information you requested. It wouldn't be a bad idea to recompile the new kernel with preempt on and get the info from there. It is usually best to be working with the most recent kernels. We can always backport any important fixes if we need to. Thanks, Nick -- SUSE Labs, Novell Inc. 
Send instant messages to your online friends http://au.messenger.yahoo.com From owner-xfs@oss.sgi.com Wed Jan 24 16:37:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 16:37:17 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0P0b7qw032726 for ; Wed, 24 Jan 2007 16:37:09 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 2EB051A048CAB; Wed, 24 Jan 2007 19:36:13 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 2BA16A000878; Wed, 24 Jan 2007 19:36:13 -0500 (EST) Date: Wed, 24 Jan 2007 19:36:13 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Pavel Machek cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: <20070125003242.GA23343@elf.ucw.cz> Message-ID: References: <20070122133735.GB4493@ucw.cz> <20070125003242.GA23343@elf.ucw.cz> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10431 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 1118 Lines: 31 On Thu, 25 Jan 2007, Pavel Machek wrote: > Hi! > > > > Is it highmem-related? Can you try it with mem=256M? > > > > Bad idea, the kernel crashes & burns when I use mem=256, I had to boot > > 2.6.20-rc5-6 single to get back into my machine, very nasty. Remember I > > use an onboard graphics controller that has 128MB of RAM allocated to it > > and I believe the ICH8 chipset also uses some memory, in any event mem=256 > > causes the machine to lockup before it can even get to the boot/init > > processes, the two leds on the keyboard were blinking, caps lock and > > scroll lock and I saw no console at all! > > Okay, so try mem=700M or disable CONFIG_HIGHMEM or something. > Pavel > -- > (english) http://www.livejournal.com/~pavelmachek > (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Ok, this will be my last test for tonight, trying now. Justin. 
From owner-xfs@oss.sgi.com Wed Jan 24 16:35:36 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 16:35:43 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0P0ZTqw032140 for ; Wed, 24 Jan 2007 16:35:31 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id A87CF1A048CAB; Wed, 24 Jan 2007 19:34:30 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id A2FC4A000878; Wed, 24 Jan 2007 19:34:30 -0500 (EST) Date: Wed, 24 Jan 2007 19:34:30 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Andrew Morton cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: <20070122115703.97ed54f3.akpm@osdl.org> Message-ID: References: <20070122115703.97ed54f3.akpm@osdl.org> MIME-Version: 1.0 Content-Type: MULTIPART/MIXED; BOUNDARY="-1463747160-1301739425-1169685270=:4028" X-archive-position: 10429 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 78731 Lines: 1356 This message is in MIME format. The first part should be readable text, while the remaining parts are likely unreadable without MIME-aware tools. ---1463747160-1301739425-1169685270=:4028 Content-Type: TEXT/PLAIN; charset=US-ASCII There is some XFS stuff in the dmesg too, that is why I am continuing to include the XFS mailing list. Scroll down to read more. On Mon, 22 Jan 2007, Andrew Morton wrote: > > On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz wrote: > > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke > > the OOM killer and kill all of my processes? > > What's that? Software raid or hardware raid? If the latter, which driver? > > > Doing this on a single disk 2.6.19.2 is OK, no issues. However, this > > happens every time! > > > > Anything to try? Any other output needed? Can someone shed some light on > > this situation? > > > > Thanks. 
> > > > > > The last lines of vmstat 1 (right before it kill -9'd my shell/ssh) > > > > procs -----------memory---------- ---swap-- -----io---- -system-- > > ----cpu---- > > r b swpd free buff cache si so bi bo in cs us sy id > > wa > > 0 7 764 50348 12 1269988 0 0 53632 172 1902 4600 1 8 > > 29 62 > > 0 7 764 49420 12 1260004 0 0 53632 34368 1871 6357 2 11 > > 48 40 > > The wordwrapping is painful :( > > > > > The last lines of dmesg: > > [ 5947.199985] lowmem_reserve[]: 0 0 0 > > [ 5947.199992] DMA: 0*4kB 1*8kB 1*16kB 0*32kB 1*64kB 1*128kB 1*256kB > > 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3544kB > > [ 5947.200010] Normal: 1*4kB 0*8kB 1*16kB 1*32kB 0*64kB 1*128kB 0*256kB > > 1*512kB 0*1024kB 1*2048kB 0*4096kB = 2740kB > > [ 5947.200035] HighMem: 98*4kB 35*8kB 9*16kB 69*32kB 4*64kB 1*128kB > > 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3664kB > > [ 5947.200052] Swap cache: add 789, delete 189, find 16/17, race 0+0 > > [ 5947.200055] Free swap = 2197628kB > > [ 5947.200058] Total swap = 2200760kB > > [ 5947.200060] Free swap: 2197628kB > > [ 5947.205664] 517888 pages of RAM > > [ 5947.205671] 288512 pages of HIGHMEM > > [ 5947.205673] 5666 reserved pages > > [ 5947.205675] 257163 pages shared > > [ 5947.205678] 600 pages swap cached > > [ 5947.205680] 88876 pages dirty > > [ 5947.205682] 115111 pages writeback > > [ 5947.205684] 5608 pages mapped > > [ 5947.205686] 49367 pages slab > > [ 5947.205688] 541 pages pagetables > > [ 5947.205795] Out of memory: kill process 1853 (named) score 9937 or a > > child > > [ 5947.205801] Killed process 1853 (named) > > [ 5947.206616] bash invoked oom-killer: gfp_mask=0x84d0, order=0, > > oomkilladj=0 > > [ 5947.206621] [] out_of_memory+0x17b/0x1b0 > > [ 5947.206631] [] __alloc_pages+0x29c/0x2f0 > > [ 5947.206636] [] __pte_alloc+0x1d/0x90 > > [ 5947.206643] [] copy_page_range+0x357/0x380 > > [ 5947.206649] [] copy_process+0x765/0xfc0 > > [ 5947.206655] [] alloc_pid+0x1b9/0x280 > > [ 5947.206662] [] do_fork+0x79/0x1e0 > > [ 5947.206674] [] do_pipe+0x5f/0xc0 > > [ 5947.206680] [] sys_clone+0x36/0x40 > > [ 5947.206686] [] syscall_call+0x7/0xb > > [ 5947.206691] [] __sched_text_start+0x853/0x950 > > [ 5947.206698] ======================= > > Important information from the oom-killing event is missing. Please send > it all. > > >From your earlier reports we have several hundred MB of ZONE_NORMAL memory > which has gone awol. > > Please include /proc/meminfo from after the oom-killing. > > Please work out what is using all that slab memory, via /proc/slabinfo. > > After the oom-killing, please see if you can free up the ZONE_NORMAL memory > via a few `echo 3 > /proc/sys/vm/drop_caches' commands. See if you can > work out what happened to the missing couple-of-hundred MB from > ZONE_NORMAL. > > I have done all you said, and I ran a constant loop w/ vmstat & cat /proc/slabinfo toward the end of the file(s) (_after_oom_killer) is when I ran: p34:~# echo 3 > /proc/sys/vm/drop_caches p34:~# echo 3 > /proc/sys/vm/drop_caches p34:~# echo 3 > /proc/sys/vm/drop_caches p34:~# The tarball will yield the following files you requested: 4.0K _proc_meminfo_after_oom_killing.txt 976K _slabinfo_after_oom_killer.txt 460K _slabinfo.txt 8.0K _vmstat_after_oom_killer.txt 4.0K _vmstat.txt I am going back to 2.6.20-rc5-6 w/NO-PRE-EMPT, as this was about 10x more stable in copying operations and everything else, if you need any more diagnostics/crashing(s) in this manner, let me know, because I can make it happen every single time with pre-empt on. 
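As an aside for anyone reproducing this, the drop_caches / meminfo check Andrew asked for can also be scripted. The following is an illustrative sketch only, not part of this thread or of the attached tarball; it assumes a 2.6.16 or later kernel (where /proc/sys/vm/drop_caches exists) and must be run as root:

    /* Illustrative sketch: snapshot /proc/meminfo, drop caches, snapshot again.
     * Equivalent to: cat /proc/meminfo; echo 3 > /proc/sys/vm/drop_caches;
     *                cat /proc/meminfo            (run as root) */
    #include <stdio.h>
    #include <stdlib.h>

    static void dump(const char *path)
    {
        char line[256];
        FILE *f = fopen(path, "r");

        if (!f) {
            perror(path);
            return;
        }
        printf("==== %s ====\n", path);
        while (fgets(line, sizeof(line), f))
            fputs(line, stdout);
        fclose(f);
    }

    int main(void)
    {
        FILE *dc;

        dump("/proc/meminfo");                    /* before dropping caches */

        dc = fopen("/proc/sys/vm/drop_caches", "w");
        if (!dc) {
            perror("/proc/sys/vm/drop_caches");   /* needs root, 2.6.16+ */
            return EXIT_FAILURE;
        }
        fputs("3\n", dc);                         /* 3 = pagecache + dentries/inodes */
        fclose(dc);

        dump("/proc/meminfo");                    /* after: did LowFree recover? */
        return EXIT_SUCCESS;
    }

Comparing the LowFree: line of the two snapshots shows whether the missing ZONE_NORMAL memory actually comes back when the caches are dropped.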
And not sure if it matters, but when I copy 18gb to 18gb.2 I have seen the copy make it to various stages. The size 18gb.2 reached on each attempt:
1. 7.8G
2. 4.3G
3. 1.2G
Back to 2.6.20-rc5 w/ no pre-emption; until then, or if you request anything else, let me know. Also I ran echo t > /proc/sysrq-trigger. I have attached THE ENTIRE syslog/kernel dmesg/ring buffer, so you'll see my system booting up with no problems/errors up until now, at which point I am going back to the old kernel. This is attached as kernel_ring_buffer.txt.bz2.
Andrew, please let me know if any of this helps!
Justin.
---1463747160-1301739425-1169685270=:4028
Content-Type: APPLICATION/octet-stream; name=kernel_ring_buffer.txt.bz2
PtBHWdZr4j5TH5Ln07/XrntP14WUj6oiIiKqqtnzFM/hPlkwu8zOZ9OJjHjr gxj1dhx1S6r1twK1Iq9a0Za+TkZ+s6uEuPn9c+fz38G6S/K0lE+75Q7hEH83 cZCW7rj7L+UIv9Vv2KIf94jsPb0Xp/MW5ePDcp44WndE8eIdwiDx4GQlvJW8 W6hF8R4KIfAjsPHheX8C3Dx4bwnjhad0Tx4h3CIPHgZCW8lx4v1CL4t4KIfA jsPHhen8C3Lx4blPHC07onjxDuEQePAyEt5Ljzv1CL4t4KIfAjsPHhen8D5d P3Xj3JzHfuOglaL38o1CX7v3KIe4jY7915fuLcvHu9zcp27CwTe1YLl0Xb7K IdiNDe1y+xbbxvbbTexYJvasFy6Lt9lEOxGhva5fYtt43ttpvYsE3tWC5dF2 +yiHYjQ3tcvsW28b2203sWCb2rBcui7fZRDsRob2uX2LbeN7bab2LBN7VpTM JG32UQ7EcDe10+xbLxvbbTexYJvasyZVH2+yiHYjQ3tcvsE9fb/F933sd8e9 PXtpPNfLzEIW/VA/pyt+yer+maFotqt7+m9nd7J1rCZdoznOc5Oep1rKThE1 SNPGta0danWspOETVI08a1rR1zOtNlVIGofWta1rR1xPHGUa7b9c65pWvMUj nnk8344wmzqH1rWta0dcXSipSlDSek65F0ATKNhU3nr4zVN0uiRdunv110eu tp27ZQF+zJ25RN9k7dtokWbt27dux7anrlBYWtY27ZRMV1yp4SZkzzjtrSAX 67Habd+uuyZjd0SmG7R27dj21PWkFhi97m9725QXFrWNu05rxrTDhKYxg4tO O2rpEdOvbtXkQO6ru3WY70xC73s76nnYWylBvuo1wlue7Im6Nt973s765Tvv KFxy3Snuz9uaOOVSvam3RO1Wt2bt27Ht27dqoOtNx3RFV+dwm92RIs2976AH 9BJKEEkEEU/TwFVQqqF/SueHZV6/sLpX6XaqtvKu6bS2HXFpvXUh28liFtpX VbquPfv6ttu2UiXqov2qSv9lVhfkO6sawZ0ZjBuWNzFsY2Ew9nw1eWW4AYWM 8LHEhEgkCAJtOKqtJgBiYUYntP+Dieny1tk6Y1n05cczodTavsodTf39eXLp v/+4+T/vh/XQ/4+2m/eJ83dWlLz/Fe/TtXXPxfJs+36NvL1cdfbnl1Lz9Ki7 Al80wqYmSFiYUmJlKxMMTAGJlRiYGJhiZiZYn4/Pr89ttuvz5cgnPKF35UeD t4bIThhMKYKGWKuNCMUOqjjm+DuqrDPLMPnJolzqd+eVT2JCe0GDJTthTEwF dgR2TWJmJiQUgpBSCkFIKQxYSQpBSCkFIKQUgpBSCkFIUKSASkFIKQUgpBSC kFIKQUgwkIJBSCkFIKQUgpBSCkFIWhJKQUgpBSCkFIKQUgpBSFCQsKQUgpBS CkFIKQUgpBSDAAEgpBSCkFIKQUgpBSCkECCQUgpBSCkFIKQUgpBSFsApBSCk FIKQUg0JCkKkFIKQYJBSCkFIKQUgpBSCkFIUpSCkFIKQUgpBSCkFIKQQSCkF IKQUgpBSCkFILGAyUKUgsnTKkFIKQUgpBSCkFIKQUgpBSCkFIKQUgpBSCkFI KQUgpBSCkFIKQUhiWkFIKQUgpBSCkFIKQUgpBSCkFIKQUgpBSCkFIKQUgpBS CkFIKQUgpBSCkFIKQUgpBSCkFIKQUhZLSCkFIKQUgpBSCkFIKQa1gUJBJIpJ FIMQJOszzi+HtMPdunWLrNPOaLi3KbxR5eO2o2ZDqyDWJixMYqxYmLDSlDKp qsFMxZkMyswZgzExiUzFSr5Hwp2qtkH7TVqGa1DTKNGlRHAWRH/ff+kvno+Y PO+wrt5mlXYwtnp8rsa9nPhz339H7xERFTRAcAiQQAx98iVn34RPfEREIkYq K+vDk8p/xRjldkdNLpnKOlxOk6quhrbODZvfHcujhlyy9Fd33ve7tVd/sCtg rd2ubpasWLFmLFizFix0BUDwAaAex+0Z0AoVQFC0Kej6GKKKKKKKKKKSDIcw GUyhzT/MV+4qbUxD5Vco25dy99Oih/MocFDylt57dF2v1y1bWrRzLRo0WjQ0 TRow2aWrTS1WUlllJZZSWIYCUKFCUKFCUaGiaNrZf2RtavpatWWWWGG9lkhZ zOjp3n+L93zPuk+7g+wlp2NRVgqsLWAKa4gtev4/vxetcuQnlOVSwnQJpvww nH3lvd14z5x/ZjOXMNPpFPHuPYPvgfzkplffeN7r3XTj+pTzKdxPdfSwf+th +Q6lPQ/Nx/PP0+YiBERAiBERAiBERAsAAPyJ+4AkffCusMheFcY/gOXtPoPn s8r9Yvtu9PQYCtjII7astJZmYMyH/HBXE+Jolb5T3RZHnlmZmZmIebKZjGG+ J6mUlcIyuCxK/TKczmaipyNrpbIv5piBuwjCwSyQO2CyfYhAKiIQWCgoLlgT Iwk+r90Y+iljGlKDGlL8QhgMkkA4YKKSDugThhhIKEm9D9SFFGoUUahRRvBM BmEIGSOCTKjjiU8WRPWILgcWGMMY4xpVulXrxWYD5ixfGxjgElXR2fGPTzRO Xd3RHd3dEd3d0TsB+YB9gD5gdwCO6nIfUPqvcMe69HuvtVq9Trcdj8Rn6frr 385r9tg3wR/HA1kx31319RvV4ZTMCQpAfyCQxGSfFJBQ3pFFGhuTLlT7TKa4 4PIymsMySchxHIay3ZMyswzJmGYU88yxmDtqFUwo+TEGYLMFjAjdkdDFrZlm WZZlhVosqPfgLcWKXUr7Zak2MUvhfZfX8mYzGY362h8h98bub8Smu6yS+hWJ 9Y+Vob2PQP5yXkiiHkQh8JBkKCqon1DRhkYNLWi54PO87ZQ8/Dz1rWta1rXg eJ4netzzHdvZR3HC1PLFRrAdsTe9Tfl8shbZC2yFvkPIUJmeAOSnupA/FK/U vk9N54UweBntxtWRKPPNUyDe8rp7PPxzMzM48+t4HiG+sqjxyn9qFq0ljDHN 1NV3YVvrKmrm9Fexxco+NyPbkrqYo6WK4qu6rUxkehlmWbnS/kb2yWyyJ6Sx LCxb29pWosWLFEURSRkJCHwiSFZ8GE4r93u+n3/nEREREREQzfu/H0/W961r Wta1rWta1rWtdAHTvXMviLSjQO78jz0lLhMrpKtInU2Je02FfvhspP+t33VQ ++P1BwvsPjc20H1nAL9bwHesMqnS7jLpX1jcdCfne07BbFzRiX41/hVHlfSL tHO9TuPePvpq6wdivE2Edu6HC95rtVHjxbZOEet9d/abxvcTuROq/0+fkchH +evmdLgdx9o0xi0aBRElCgURJ+c5Pm4/M/i0a1jI7zodG9vGNTeta4tvHGNA fAgdkBsnch84B6IQPcd5W9Nq9x0ORbG8Hnr+w2Q1EsiN1eFjMWWMxZYzHSV5 U2r5+INzFDt3HSNUy40+u/LQp428oHx8sBVWAqrAVWzYdEgDAD6DklPgZ2Pe 
cj/xhXhdDR32VYT/yNHzDmliMJdAbHMruNivzvd4C+w3fbWcx+p3Hkl/A4KH 3FxL2W0BXoZKQO/4KiPjb3ES/px2Pl5eH8h9mfLWta1rWtfM+89sHlTneIve fG8KNEBkP1AfcT2D29EH0/nV+xxnKv6FVewd9ebluOOm3n7+6/+Ln/7zbTPh jTV9xHoj0Rq8Kt1bodbJMrIxgxgrGDGDGDGDDFZXy18HzXZxi9S/blPkk6fQ 2ON53cVcPZ7ynXvvIg7dLzK+ok9SyQzn6be0X+0hfxh5JhR1SUJ+eE/ksLIk vHteMyXl+y8ru1S253jGN/Y/bfTravN7B+6nELZQ1T8Ec+l256ce7tnbg7cf HX2ceXHhnjmZ9D6/KHQDc3+RfHhep2eCMF4nY64eoPdWIdsL8+59plH6Fbj6 +nMzBXnXh6/DwzNevvu+vRPheirSVeZek+OVmVmVuVso35F+yH9xiLDKnvPq /k536n2C0YWGFhhYYWGFhhYYWGBhgfeeF9t6/TMzMzwF5UNe1v9eGnWvUfpG DIwYMGDDD6qHSi9oXp9v86buam5wGTVibHOyLaLPo7s7Y+ObifG4gtQ5kD5F CwAGx6gQRux+KZVm8m9a6b8aAdOoh0UCzLWa46duGn6U599eRaPA/AuDh7KZ XnY+fJD9T9haD6D9ed/TX7+2q1rVa1qtfyTtTyX3021Mn0OwPQefmrKep7x7 8zMxVVV+kQgfGZjEkA+WwgBoOA+UAxGsFaK2HG473xuRHFTw+Xy18fneFxp0 9MLzPAPwTvviYbEfEjzw2W5XJXcOy5GKwxYWI/M1T0vdX7r8/Mdyh/6nxvf8 zzPAVKnI8TtZcrC00lP6r5FyGHDxH1FMOVvenDueyx6W17TeqPhBtUbke6n4 L5TsU5EA+GJ9HvD6QQQ+sL+P4X6sYxbbdAnxxbbbbbgEMYtttttwCHuIAB+0 DmfEPqgMNz2QceF8C0N+fnmZmftM+0uBhcT10bnCbB1llaF+wvxtna9fjStd KtJdLRWlHtOfAvmMHpxpqH6ap0Sm0NKHH4W/7jzP5rmRxPOnan7CP3hap5nq X0PrOdOGuW/7DE2VYpmWYMxYyZQyoyZNqjWJrKCkpFtKWBaiWWpaJbUKFCka TSampqampqaNGjRo0ampqaKNFGijU1NTVVqq1VampqahqGoampqahqGoampq aQ1UyMpqpoajVNDQ1GhoaGhoaGDBrBrTWnENNlbWZbTWmS2GS2osYMWzCjQK VBSAIxoCYEiixxEJSJhlIJEokoDCQSMSQVVItEW222pGpGsmEZZNtaNKm0bb ZmzZtQjWFsfz1QcGylByFCvHmAPUKckjjt54j6GJikPQQDkkVJCmoQSvjWDn k43iHBMBxJxDJPugB85DZM557Pxz6CflnP3/Zi222222222222223M8hyO46 n230u4fuP+5wuWec1rU1rU1rU+HcPSAPzwKT3fNVVVVVRFVVVaAZTOFPh7CM JuRhuWDCrDCnP/EcQv5Tj9fp9uta1rWv2UdadxX7r8u3am/B+Pccr/D/twUc 7gRlDsvCGqG1DKJ+2+Jxphh9V2tj5riWP6bDlbjlTqNDRztjnhmGYbjiuZY5 2HM/T0qqGCGgiaiGprSqoYIaCJqIamtKqhghoImohqa0qqGCGgiaiGprSqoY IaCJqIamtKqhghoImohqa0qqGCGgiaiGgzpVVDATQAmgSaDOlVUMBNACaBL6 zyUOq/EriP2vAuI5HspifK4janl+P19fx+1sdr1ulXrY/U9PYH4nC/IZfd6n /a8vDM+Wta1rv/hxI+BD3099PfT5m1OUgB+hAUIezD4h/uBIB7foE+KJ8FKr vbl3d3ZAIHS4st1rMznOc4Q0OlxZbrWZnOc5whoeGZtr9ovw7vyyPkZA+J2L Zsr6LtOB6Ie2+7iR6H8CtVbTyPvwTJMpZMspmLJd09TJ7zLwrzOVsYc9XfZe evpbo+Zr3nY4fy1f3w//1WsZjsT91958Xu8VD0p3fL8jNfTXz+y9SX71P5b8 PsI/o+/n8Lbbbb+iAT8s4D88AmAkh/N/xpCtW2E/WESASe/t/hD6f6X8tn4f ju5wqrwEtWha0a0a0xxxZxxdzhVXgJatC1o1o1pjjizji7nCqvAS1aFrRrRr THFJcPeFcJ1vyt7Kx9NPJp8Nd06qvH6Ndrj8m1tttzrGwPuDJ+34HqAyCSH1 kZDb+SMBuxP357vLdtFi4NU1DjYtTKb7b8W7gZBhZTJm33cOe7gTFMpq5n0N m6wMOXLpy48KfRDzpke1LwyfH7L9/7frHt5V/NTc2T3rod393w1tVl8urVsZ kcet2pxfNcneXLDRz9Ls1PGYG/g0fhvG5sc1ZjB3J505+DwHNcg5crVzNFxW /Lq79eHGdGW3SOlV1WGl3953DjlVgBm6cB9ATILA0AFAAVqaSB/IRNhuoBuG 17fde5aeGpe/VqrTgNhhXF7ynyKfIplPko4Bblqu6w+ZxOpsdTRud15VRxXA r95HYuq+R0Oph2NzY0aU+3jqngF7dccH9V+L90ZZGIhA/jBOgIKgHOvHXTu7 u7u7u7u+AOgESlKUbbbbbbbbbbb/kB/e/x+c6B/NyALAB5wf644SVn8b+6nn kMVR8qHk8cKPcfZB/ADuB6fQkeB0pI+P1P8378fb/gBH+r2cfigr3df4kvF7 kfj7MxGL0GAn6oBOYXKZmkgiasJSUItMLKTEyCd8ZO+eVVVVVVVVVVV7D/n9 odAr5Be7n8deW222fZvvtv777GcW0dGNNGmmiBAgCgAf9r7/rrvYj+iwLu27 iLsC7tu4i7Au7fAMIA8j+BgwQiP7R6OJEkx/X5KSSSSD+ss7u7u7u8GAENjB g2NDY1Zmq7u7u7u7pVbc861rWta1rWtalyJbR4MgG5BQMDpRVDEgczIk3uwY TBZBC1UURVRFPJgUQE3iQ6oSefKqtCTE3LJAkAgEBcXwkkCJXiCSQSSCnkcG G+HnrhVeQ2YzxnJbS2lvqIaOfCw4ngh2oWGTEhqgCwxIKqsgqosWELHEnosF UJMMUAUKM8tpUEAEVdFMjKJCHAwpdklGQTvpVWCAh4gIULKn77S2lugmvZEq Mh5wWQ5MIRCyGDEh1JTvIeEVJlZjyatPWZDJwSgUAFVVRVBQWEmchLDQhA8T wDEBnkjLVVcKuLq5Q4pzcJpbbYje78WMmMp3DcCyHZsCGCeMhWRCZELIONJz uU81Okzvgh2E5SZmNmeinMQs0yPMhuTnTS3aLGQnJaY0ouStmBSkGKoAIqKo qqKqKqirAGQgRGSQSCjCCRFVVVFVVFVSSlDC4IHaRGBmEqoQzk6gcwJuPc0m pkmy5uJMaeMQvMh1RODumSUMwNQqoyTU5IOZtOZhDYGJuQEMpJLM2JS3Jncz lMOYDjAFgbxkCrmARGBAz02ATWDozSzISBsBkAyTTqIgYIco2TqQyHryqqqq 
uTXOw5RnDMZqImKKzRciwwKl7CqhMeIYky4yTCqqjAthgC+A77VVVVVVVUEE EBDwTPUhzO6eEMFDPOjvsDWiZEDDxqZZbXwiHQa2rZnAaQ3dMEMoXIHIdSHT ZNmqZyppcmlxb4235Ocf7rjbbdfI8rwuNObiLw8LuuC8Vwb2yHXmseHA8uLy pxoR0BcBiaIUQWzJIwHnhXHHIOABYzwOAKGbXhwN6u4ZdE53M7jv6xnOu6at mcuGQMEmSFkZubeRJk5J5ERDQ+wd093Vd/I7h2He4HTw0vGulysWa12LZd/D tXjuHoOLmSTs9kVVVXgoittnRJuacJtk5VWtba79g8hmH6iEP5QiSAfyB3le f9PtDt9h5H1lotr/acL+spxsTinqklKUpJv5/Pz7/j+P5fmAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAJT9j5/S+WJ9fz+sfHXxbqTFJS nXOc5znE1MLhlAwQSuMYxjGLiwCSqgSQSszMzM3Hsh/8WhbZbZU8NxT0OMUw OMUwOMUwOMUxyruod1fyFwhIdATlDnt16N7f8fF3yCRUOz2SrsJlJ8pjOXc+ E5gB9BIBoTIw8qYvs3JrW3lvvu167cH9Cu+bXDFDfop/VdLVDBucYzgV7s8t W1LdLinuP6DqsX9pQ7nqe2vr8ebW6IaxkzW5h8879va/m4wQJ0qpJJJJJLUO FiVSJud6RqwiyY95wb+vwDi9HXR/C49O6312lOfRv2ouIpzxF9VJsNHewfVT 2U/+KbeY/33InA71alrohSQ9fmnmGtSBUUUYixDNACqxEUUVSRQVVkhYySFJ IgoKKpIBiMLAQjIIyQdBJCzEzKswzKLTFTGKGo2tlqmYGMhxwaMGMFlgdGa/ ruFXjO9FwuyXhqrDutKbtnRryxPFehQpJ3j7v1a3+PgAAAepy/UpS0rJWsUU xihMCCiimMUJgQUUUxjvBIBqemM8N4mSQD3hiBhVVhEhGUzty1rWta25cNlT BsT3UOII1FJAd66+pD21yAyTO5r9Sqqqqqv5dTJxdLtuWmzaurpxZis9NLQ8 WVJkwsWFiwsWFiyl9RhBw+FytXddLlzu+5vc8HJ0p+jMzCsrCscHBopV0e87 bRJcEl+CAVB7chGBBolLeC1lhbQSKpL2EVNmsCQgLTRIfqIkPiL1fw9RCRii oQLEc5OidWQxxioJqJQ7xVkWaA5UGBvII5OZzu884BYEE84cAYe4cGSgZSSb z1rnfHstcOPHjz679c2juBYHuh4L1tRqwBAsADygJcGAUBYEDFuIpK85Zorx VKvSHhL916xm/VKSnundKRVlnWGgAADVHrz6IksQmfHtNz4QA6QI59XGzPjO ZJI2XO83rIQu84hATxz5cwm90km+rJoAIvfSQTwhBIi6gCyjDEBkIAqjNicV WyItqZzz5SQ8c2Z5vDyhFgAc/wobYY1qhjO9aJxwBzvnlSJO6c5kfGkb3pfe jl9wwI7H1wBgcja9nvi2aPkQLyWqSiayvsLlAVPDiv5u96AAANalJrklrd03 KUl6yndd+tymdWEE/BYA85jj2Xe5B6zefoMu/fMjfAB5nJ8ry7AfSe8s4a5y YZCHBNPFPEOACyXwngcjlHiEK36mJm75JIiALUOcipitx9it7uU8kuHiyuHy mUpRbVq3704Lw/YkrnWJDXTF64tbnXfdr4zrq/fVr+5SeUlbU32PW964AAA1 MTOfnt5x536nKZpSlO5PL1tvNZy17+bvXY+Y8w6deVfOCoroG0W35g2baT1h +W3+GecK53NTmt9iIpc6YbSLvD1Wy2CCsoxMasOLJdLBp5t3kbNYfQBx8W7T 7vc7u7u6AASRJKWrF5O5KUvORWcpWUrL9KJtilFw9AQk2dBXEYwoq4AsSB2B QAnCNEYj7MH1ERS+fENcM37k+xHagNi+AcEKMhQF4eeLo1x6vwrqzV3zy5gB l1NJx6kfGMsEQSmH+Bx/O0H88GClXyizlYSxMyHkjl4OpnvGc64AAAFZiSld a07zmucVyIpuHB1osEmHTUZauzNrHFQIF3EkAcWRVFb7wVeYagdVyd7mVg9M EkOqYkVpVLyJ0xbjRKc2EvRz0czynffe+0HbHh3K2aXd63nXWbmgAABWJIXk v5Lyac7HDq3sR2rHZ3Jfa0p9AMmSP5zzkK+q5oWORJ1KusE8AUZyJWGYybcU 86l5G3YmK3MRl1iSioxTQE/ABnZIxJXeHb28F5anOJrdusVlKyWtNW7xjDWg AACs6GsgTMmiJXMREVyIhKIqECQJwoApNlkE1j0QTtcAclGREKeFxEV4Wz0A QRzjqepOW+cAOc5wnldYne+TuAbu+Ro8snChMxAqHEPxwLujLhFqKqI94ADG wY2GZWe9DqIil74vfc6JJ1MV7xpnQAAA6/Kaq9UgVwG1A+1GkWavvdvnBNYL EGTzP506Y+37yKr5wex3QQDksmprwCIHpmpY8U6PAfsXggAooKHvYyQiGSpN gC+Cjpjo5SifBNmZd4GnW1fa2cWjnObR6IgTTABgQMNKbKFCxeFkTdA6xDQ0 UZXd3jIQFxUFCQhIrWU0721hrYAAD3RPv3Ot3nKc1bd3wRcGmUfHS/YHFOl9 HWQIoiPiRYgeXB7kKxuZXCDwVJI8qpgw9pUa7EXwACVC4lc0HAmjyMvpnh3t /OKewLjyY6YkZl7BUyHnJpqOpWIeah//D6oPvO5cSOKqyPuWg0N7B/fZbE5l DyuQx5bnlUZRK64SmOCvBaVOfYdnq9IiMoR+9gS5AQGgKAG4BIQECk+PFY91 krNXKVYpiK9hadxbeNWqyapXsGLi+Pkhztq5VTsXRmw9Jh2FFiBGVreBbDvA szL5E1uQLtzN7sZLe2rUY4zMMXNbmW5GGJaujemlWbii9yHomrG21j0bbxxN SG4WbSSGXl0js6d2AcKT0bZevFUk2Gc/I1mXM95m9mLLZVdCm6zbgOXMVaBa FKaTK3LzdNcUsOVydjlL5c0+ns1OV3stuMuCbXgaE1kBzcLxsiptQFxXABNW coWJofjxcjiBtaNGTxxrBtTanGma30WbLJ4in5hMJn4FYyZYsxYZMxDBmVgz FlMYLIYypmSwmZEQWQYxZBIjIIQVIgkjJErCyWRlmIxqyawGrE1WQyYYMsjF WIZkxYxTFmMQBEUBoDQP8IFhYwWBBYQRgxP/7aGlvaWln5arWVYYWxYtTMxW FopgNQmYZLZNUp1WStJSVpJW1PX71dff7/j8fb4+30AAAAAAAAAAAAAAAADW ta1rWtAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA/4/1/VN1p+v8vz/P9P6X7f8r nM7777x21o5+rFnpauMYwcJlBjGMYxjGK3d3o97Wvf5Z1V4re2c5ycJpBjGM YxjGK3d3o97WvfWdVeK3tnOcnCaQYxjGMYxit3d6Pe1r37coiIiduzMyIiIi 
IieFZvJgVPjsqmPFGerKyOPqAB5/qDh70Ysnns/A6JoKfAfRAiMxn+Pt82nX 5W19x7Tv8e7nPgD8wfp0/YcHHn4/ZJJJJJJP5+/0fl/T/T+f8P4ZmZmZmZmZ mZmZmZmZnN5VVxtVXOlVd8Kq8h9skgAfIHz8DdXU5vg9cbUXCi0e5rJerJPP TQS+SjK6n1Gocj3YwtT+Ywft8LosssuFy2joK2LjbOVymrGBja0xLWhpiHHF mF8TzPE8bRe7TvGFM6t8Z/W39vYAACs9Sk/hfs06knxS27UpUvXO3b8A8bc0 l42pEaLLvt730Fuk4SyeEQPdmRB0qomKzVPBIpF6lsJMqESEhUTFs79tca5w 3YsTqJz/3gBSp8UvnvHN+g2CSaqbIFQIumyiI/DibxKyADy88rIQICFCUEgj TKn3555Z8j04DJo7MYAORHFEVXWc44q+t26uFZWj0O901q8QLnG3COoEA7NO bUWfDBwo80oodi7GL7A2IJyOYQCPf2DiAD9Otb8Q9Oy+1PpUq7lma1LIQvGJ as6x962zrOdN71rWta1rQA4zPvTrevVJKEEkO63QSgVORdQGic2h9uwFEF0A 0OFta2PRAYcJ4V6bz2mOuiwJ0J5GPNqe9F36677kNa7px51DCOGtQZB2xy+t QM8cWeMwApNvXfGHinlNoeBQe+s8nGube7ePV4VBQfN43TaDqtpG7xrV6iGl azDLW2cX4IGgKDWK6a7ouEYRebWpqq0qKNowCNyspPnOtc3lvmwAAHuSJ3Ky U+YcAFJ3SQDUIBPUc5vEana9UwzGE8671xxeGe/kpXbHPrHHFym92xbbM25v CeKJeN0hpF7AGIRJ7iMxxx7bA1zyXphyhnVurOWeENc3e/UmDxDuooo804xV FF658b1jq7Fg7cJxi5N2xRTGd+DV4rDcqHDxj1gqLi3VpzaZtmLZxfhABu7m dWzprObZzbBQUFzcPrx49jX0E2A+gAOAAPntjYv4vj85PkVd18vSaOHbrGPs c905SU5OZpJJ5KStKTqlPp3p3ffrz1wABe973vlhsCQQfEnoHYBQEgEOETSl 9YblAz7iIhHfuwExzmlQmkrX3vp9/jGN8WHCLw195u738iQDHhOXlKxfHj1r PXXnByxT6KhIUoJ7Xt0n+fFYM7ACI5p4vIq8yQiMPO15p59vPXK67t6Rw1ig ooosU9a6zrd2LFBRZi0WGbQ4ucBUmeeM64u4oaKLkqv2uPB8nGKqV7gFiSfP eNOOEZA7dc3qvlZUdFruthcmU+y1GsShIQBCABKvjtOM3zN761rWgdge/3qd 0/KmmcSUpTnPvL9yXrS9Z6kGnnPrEMvPMwaciHyRaXBOeda4s2sHcJC84qnB cO3ItEqcXHFzFMNRRTVop7W6aj4zzznnjvBDhzm2QYQ2IyZg5BJBKpVEDpk0 cXOQuEg9UYayQBZypiBZs5kQbNESeqb+Vq9Q6dIwiLiTzbqHOVFG+wCWRExB kjzzO9e9/MAa8Iw7vteldmWnRnpVmZlQV5oybGGLFcA95+IXmZ53zvm6AAD3 KSS06p3JT1JSk9Ssl5JXnOv2BJ+ck4wcOnCduRw+s9+2e+b6Tl6RVLz6xlj1 nrs132X6oJ2nQttFFLab3vkxiAGnN0KKc2iijDjfWOOKTwhyzri73QTpAg8g kTmy3DqMOni8gB9mo0CtcbccOkEFG/H4zHHFNk3aPb09LkBrxcuOzJmlAFkc j38r2Wh80QRJh+UTVQTjmiO1l6mLxbOsfS9PrnHNb1zfNgAAPJNSlZBIhEjA tyYCGNUKMAA60FM7ow86eG56pzDHOWRz8RSmrQ4+cBAjrATcXkKIPaYSbu9Y xnO7jRJalkBBJHnK3vgUBAmL1U3qoGfJbOsJoYGmmVElBzZUQDwhHwDPFuwM PCeNxQIXrv4fjuoAgAi+VDPPivWKvIHhIJIJBJPbx+bhf9viYh+KNi1/Lq5l 7MTWr0n7NJJL0pGPjGm8Zz5zgAve973ve99AkD0QHZk2YkEKB2XLKQN0JC44 xUcZj31icAMoQXQhEzC1aXV1opAu6zDssKRKpCSMP1d6uTlkdCHAypglEEji jqU/pTnrCxyp0go65S7KAmluygYSsOo09XUyEwjJ29p4kHPXi46t5CPCZcq7 Zx5kxwvObsV3XvWFp5a+p6kn59T39d+a5zefPNgAAvfgghn/EsABwmMe7Be4 CywhC2AoG8LektDLfa2tV9msT2zTOnzzlzYLt0hvN1blKhWXrxjWWmqXLJ2D jV4Tw5Sod4rrdMASYba7pPAzx3zjji5fKLUNPwd77kMek4QWa82BACbOvPds vHrGGvSeet74N98a61e90nJWnUpX5z1rW9c3zYAADySh0kjNlA+MUjPQOfHZ khk8eL35syzO74GCNi1Z5B8KIBsw2ODeIcsj0iTl7GzrivCYZ7le2mLS3VuN NRUPCVutYtrmnh0rOGi6x0cac2adoVPHGOs8cUymiD2mGeO8ZM8cObioKYTD jPCY1qkmXWbJHY4NeBc5zg4OAD5zjHBU6N0yqxEJ0S6eMd7rnW9/Ye6AHv5V QZ7gPyBZifmkyAfSB2QHA2D0CSQAHMA1sn4fh+Hz89cz6XlJmZmZmbykzMzM zN5SZmZmZmQEAYKOSRgHRiGA9wPmRDXxffIPzPqu+613bfm/TnH4td643nec Nq9YphHKOVU1WbJlzmt07jEtUNC3oNmrVuBrrclbErVSpIlZDdZmm7y8nTO6 6dTe0NVTRMySVeh09twcutm3NUtDMiUIhQk5lUTVUuS5MXpgZCJGTW8JipSu 6WhKcN6Thm6qczNxXExeqk7qp5dnW1Sic08uV8BHKa+K7ufJscOI5XHenszj ZOApndmxhxyjcFktulrx0ulHBb7MQgL9nJLuKyiYRQ6uNkg+pwZAFQPyoudc 9azvgAAAAAAAAAADWta1rWtAAAAAAAOgGQAAAAAAAAAAAAAa1rWta1oAAAAA AAAAAA/Ynre5/G+v4yoYH7DRF3chiwQwP1yKIzMkXgIYFn8VAzGFq2CCgHVE CIg8CEYszVe4yg5TwABtwe6dh8Op5ficTWzkMY2ccSYno+aKKOYsh+tAFgiG wn1xDwH6QQy9J2nxD6OIY2deZ7RNr7wzoZO0UyzGGzAJxfozxwz3e5VXPbrz HsDxMLcVXcvCnduL39qd133Fdb3y8zRXwDfuvTfoU7qX1NlxUNnbE2NrvjKm 9krfIjtZS/Izzyo8ICP9PkDJHAIcoSwHYGqk/OGpSnNfG+b3ve97gAfFK1pP w/arSiUrJnulKfpSvnnlLWpWlOSPogOsXEnFrA2RnwfukSaQ4gQ0BTx6L3q7 Yc5sM+1xj7DxEZk+KoiiiivV+LMj93kCSOMuYlxwWmgiP6/7/9/gACf3mzKj Tv5R9HnLrFmQP2EBEIo7XwcTcBkEjsQDUCJ/fWLwG4vJgPsLY/BggnMh+xJ/ 
b5ez5EQWRRB3SIXCDCCGVavAksCgy3GoXWrUq8QFWtXznzWH3mpKU/KkmJOR uk+OX81ffnm9gAL3ve9+CdISZleUWVFELIkKNWplOAIpVRwhsk6cpy7xTsSp 4ZXqFvtfHl9tdauUMJjDS3GKt5QtFAyglNJyjItqPbdLwqoPFVMhLoJRc+vP nvGN95pAXtJyzCe2u8XNnvBPAnlFRTi08WnnBjnm6eOLRwhhymTsC7zdcWEV RKsyIHQw6s707AkkACx/b+NdQsit+aU1jsnXTWLrXriVv1j7Ur+lNYk+++ud 68555wAGta555598h8GQzuc4y9eL0/kVVVVVVX4BQhhPi+eG4zb1Ya5uYqzV u7h8IoaPWTnzkNdM09xS2Y9U6ejWnvOOapeKYZhWuEq+pzDAsHlkBQslZFFG goGGSTyJuZO6dkDWcLDrVODfPOtcX5Y6Z6SndsJBjvn13rPVnM3jAcPkcpvV Mu0VHqlVltM40SAAQJIxa2su1cOHW8RDBwyJSHrm6s13ifLF943re9gAAPUk iklZJ8yvW+vbm8pMpp56fGM6pDhOKXObXyzDt6VCmLq2hGeE9ycJyveww8Xt yrxm7eXSuHrxeuLt8cUvNzis7tWK8+PWO+72zAREIwhu9GimiWMILuqoKIPd 2zxoRi65hSNJKARZeTnpC1wPoo8ZJBHDSmo18EbXY7dq2HMRrN5jrdr5xfuT 3uSfNeZ3rbfOAAAP8EuntGrwFSp7O/yHLWyqDt4cenDfbvcJp0yp4yXNs8kZ MjnPtg+HFvutWdW21xHDLjK1aowgM6QzMSsmldxySNJBMRBIprMMZkYeWSQD ziMFe1lQ2tFwAjyDxKHYQAV3FnhBk8hxZI2ldwK5cQIiOEuIuInUuEkm7hVA J4QRZGI+Zz4PbDaPnSls3i1mhq0jKLZIVdx4Ak0QgAJAHaKzEzjN5AAAeT3T HzKSSUkmPO/POq1m6yvfRIHnsNxRfsDDRFA86bmJiUCRagSWedSdvZuZgCze RxOJK1BXcUR+Mj4bHCEQPPO7PYjkFuL4o0wWReRkoBfpUMizwQW0hX7oAe2Z DPXF5RXu1mk5He77An0CcPPN9nSPpR3sX2KN7BOmASCSeGY8dyczP2+fWUL9 3d0vNzu98NX7xmJPUn52+29cc8355sAG7u7u7g4hAj7Ox24JH2CEeLyOm7hm BwCAI5iigOUBcp1RBNswnG3Au0mQJMFDy1MQHkXUQQCYINQgaMnmEIpRPnVV wIIJ04R0yfI8UXQVEuINQIAsETX5J3HhTgSQlBPP2/4G/73PyeTEZAZ8iCuR JH0Qva+zLyBh5YOFfiHIij8TbvvDj7f485mqXWRMCBSudtjUt19LXuxs3sAA Bn9v8qKU98v+UveDo5WZXy4VeHw7hEZVRBmBZ4VQcgpDqtVZKXeCcMTftidM wHGPHOpXXV2nv6KiimLRd2guO89c7ebzyV5tFFcoYNVxTIq9WxTyznOcQzaC gw8waHr1rPXV3FFFGoLjTjFtyerctVFON4wZtFLczcJDMx686xvdqdJU5Sum 6vTMeLFvmixUD9SjrgBEGznmq47n5R5OCoQyrx+vuq9qzWPmlJy2O8tOc4AA A+1PU3tS1qR+JHzyASEDyevya2IBA/EQemAD7kIswYSgj8A89KNGAYiwuL2A SzeRRovvj9NX2OQcJOkdBJN1eaHOxdRDiNiJiAQzEwJnd8qdjCDkCxUcQJmF 5UPaSjjBXY5pCN4rw0olcRHOYYhJHTvZq6URBB0RyCB27zvl+91rV1K2x11F dW+JTuUknxXzGtb1zfAAddddddddeZHyXvu1OGFfaSNvi4dWjhzqmQaNH6IM 4oqJUelI+KMteECQXGHPmxbs3eGBDDRRT1ad2ijmlRtHjjOOLWmyuxzSiSYi K9AXAjzDJCKOhRMLNqaihwOEVcYtb5u+O8a4sqprFox8m2Y1q6aytVK9Z8Z1 zTes9AYXTMa895m93a+SVvOMTWk4e1mUYsVvaj2Zkq+gO/BY9yhQqB5/YmZm Ztfst6bHtavZcjLCQelBGy5c8ZjMZmHkfAyrs4nlPcuTuLhenddyEc5yB+nV +namIpfrBKrJ9lNWbOMwJUhfzVuXkQLBwEjVVtQ1Do7CrP2acsRa2gVqhuzm XpEFsk6N2cR0STkxrupNxqUCztB1uVtsJmBul0iM3YVIF3bxZC2YdCXNqIsL DMC9icmi1ZEZuTOUHsKXMy6wM1sM0MDpM5RKezIYuolCKbV7yNgTUsG53Okz vRjjOLnXRPd18shTH0g6jiAGIoQ5IIxmozrqTd9zodB7UKSdQiHJx5L5xnTs etxuxvhy8TaXfTCYpkm2xrapbu0p8SdyVkrJWSspPyt5b5xx73wAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABn71+8nX1rWfbkp+F7YxW0 nTJFKmOJJsoCGSJY46TZQEMkH3SD9RbqmH6d+dr7p0ByeEUUeT7iRXiQ/TOA /1B6mfWffizwMWCMWCMWeg0dhj1495Az0qgzdv0mt+Z5recdjU5IdR3h2PwN RuuqeVgywnfkp3Mq+OMsE9RMtGilWrEMGsyyXmJWWUrJfrH48xdnwODxPU4L pvx7cPZ38/Z2+AAAE+bbpS1t7/a3TuUkpim90ZtaYSXrdXom7kkrmzhnAURd Q1c2qRMwlTdBhAO5NCrizMkacSdNzUCBwcBjM3sbuoDmE/tIaouHXNqLPSWO WbHgGEcLg+tua8gXQLRakZrFQbItA6CGvoEkEAzeQ8Z0bIk0dFvq1rYvivOr zrrvqSnqXkp7tm2Muc5wABe973vAOQACABwASPUhCCcYTCVwVIFtapTHnt7h CzTmY/sWQIhw254e+75U5tZp1YM+m2s+jJyQXL2qiDlS4d3ED0GYEoRlv5mJ jCASBrpl1wUxJdKCfTVCN6uiTFYIjnuklPXMXy1tzgAADkk95mf4JM/FCPoA TxUGkolyKccUB0JynEPWrRwqVrSkNiuEqtzM0tvVkCsyA5WVFJ5HQJm+5lYW z2JFMvznImq5GZ4kpefh0Q/fL3Y19gyAOh1+/g4vV5yuUiiZzFWpSJSIpCVU PQDR+wfeQAIJH5iSep+HPLc8579++AAANSlJJ9KUqYqEcHsC1q9sXnOADLre KYWgxDFMYrS96w1Wuq4J4kLq6GnnPbNdDtRNV0djY7u7zMToT6BEvueBVUX0 EzMlSbnxtTXRamRz0cLl+eV3Ev5nrR33e3d6477zJ6v1ZnettgAAvdwb5gl4 A/rgAkgkhsQmPiQJkWvFaqy0w6Rrqvbpetrkju8ttzqsReO3H3zgznOZ5Pm/ W7X6Dg0H2fkZDU+deywWQ48fRVm1yDgUzH621KGEN0feVIM05dl1ZXors6Cj rc1cLjTqkM9yT9KfMa1nmt+c8AAAeUzn1SkkpAu838R4+R59gaUeoO6iuYrn 
1arULmG8C+K7tgA9j5Cqo3hXOD5zgayt8i8fwD0Qq8QNwbpUOcjg+x2LpW7d Cvu0F593dfY+rCLzwZjpknz7A5wAAX9e2aPgrDJVb5EQMA+KVeebu7AAAfEl 5FZJK+q7tqVrjq179axdA8veErFcX4BByhBwjvWtXQArS0M8C9ZlsKMzMmvJ A4XwrIKgPZ5OX6xVXal2vu1MSi4RmJmPIUFVXgkEj4l8mujTVQJEmF9ZKB91 17pJub1e2O/GnOAAAPKX1SnTrXPRnjDiIb5gpttgQPf8FqrtKx1QfTAHwhtR iomeiQ0dLTN9Zq+KtcCwc3tCgQQ5C6qpALg/f06zQI8yDxHmuXXI+vqPPj9y lg++IbvqP3izOGPo+KY++Zl+/DOKfqJ6GV2e6YJ2am5HrXWs63vW9gAAPKfp SlOr23tKdWlPno5t4/poASNFZGgviBJohw5rnrCPBUWRPzt5UXGxGC6jPmOr EdBEHXfuVg8gZWxvXl5+AMQJy4WjxTVo5EED26weA577WdPSEcO74amaFq21 wVDUpeVXfKg1JEoQKU55IOLaz3nHeL9zFPjXTvzW29gAAKyeBRBfpMRt27Io 7DmHW+eRs3TvY8xTAL69GzU4IPtc4vPFCZycIWGDXiAjLVsMjqqoF0FFLxdo 0bV0LHsRVVUwOGqPYdbT8ABTjrO+sndc5aeOXGN174EPxxBnvCh6J5JnKKKK h6CnfwkKfEDmSTa6D3E9c0m89+5B+iLvQvpRyCIlaFFhfd1OY9EWHWZk2pN7 jWXNOjKGMXVDKTlWIqpZeODlZhomnlRo3Ih1VS9UYEYRtbE1kRo2rqYuLCip hKUcytWylLDlZD5FHjBy5jcmRatwU90O0hRi3iNUFpnchmSXaLnTm1tipRl4 rvJCUa6cxtCsG4Naxy5GW8WS700a625BIA3IFggAHDJMgMQN4ogzjJMIRQMH Y5djx3pwRhmRb6nXhdu2w4cR99OqfuW7371738fGwAAAAAAAAAAAAAAAAAAA Na1rWta0AAAAAAAOwXAAAAAAAAAAAAAAAAAHXXWOB6VPexflfyz45ANiiijt U0xd3c3kA2KKKO1TTF3dzeQDYooo7VNMXd3MgT12TATj4HsjmltKiiDHHrfu BYC0oqhD01bs3Ls1BQ97vGIIBJBJPzzuTnfsGNfQZ5xXajD3g7ByedcAaAki p6cjkqg3uHdtx11OJtw2PBeho4Ll03Ll7W1Po1vXOTpvXLDJYFksZL4uzpma T8upJ/Ixn39NfZr5+33AAAJEd6p2AS7MB7wB2ekBilB18QSf2BkZzwQFadTj L2p7H1mvb3Z3MzcrMyx8djAogy3s1lVM3wwsUjytzMm9PV1A5mZ3j5XkUVtA ZMxjy+PpHOQUsEgyRs3d1nOA3kdyCSVJnrwASK1a605f9XAALivauw0OUV6d u5iSQqp+HI2r1fLutbVlJWk+KU+er45ze984AAA1SNLehey2RUSW4mBLtjGM J5YulsYYIrOzocIqLubPIFYfLjnAQxy3E8GEUWYIDb81kcxdnYOZnejcbfRR Bcc5Pt+eHa2wNmZXJHk52rmyOy1QgjsZ5DY5Yk15TZsWTKVyq26i76711Xvr F719a1fmA2AAA1EkxSBe/mDk8kAJaKGwJnGcOBCcWRmB1CgEzSRpOpoxajh6 cGTk97ELYCIFcHOCCdpWYWpQuiqqIESCKNNUGaWRZiaKs4zyclqOHTG2TLbc 8jkkqKWfKtwaZ0EJQH50RO1lbdddatbOZjd8ct1+ulPjVsstb3wAAN3frgvw wB9eeZzUCCcm+25wciTZED1b6MeUIuI+o+c99x44cNQPGmWaiY5zhC2P/U08 TX8Ez8MGjH+TLqqfByu/VOzXvqoSLSLjSCeE8UgH1fg2eMiiOSZv6VKH8+cS 68HYS8jeK6Skgo9TP+PLQ+ceKPJAk19B7hnO7v58+4b7ma3vgAAC/rPS1VqU xqt5VWWy+nfeb3WOMjnCeDhAmZ7IJ5a7Dc1vkdbded5yxMIBJa1oYVowWIaM AFcKCVtSjYgkyhFg7weBuRhGGZjb7H9XBYAjC3tnyFfvVAqnM3evXVuur0pM qUnVKT01vd8522AAAI7oToeQBquMcqSEAyhEPjGbWiLkZqyyt7rqZnABhgQK 5c4QGCvFq2oCBVDlADTMuVilRhjpoTKIAXTkeuntgoX7dzavIncFq/QIwvbT EHw+N6xqQFIVO2THXiSa7775e+EkJFXZu7V3Izyk25YQj3ZWd73e93d3d3b3 ve973ve9CeEIm6enDpmBwSCiEugTLIsQ8GTKTvvjBMJDwgGvPPT1ok5ZDDIF mcZxmQ43ZA0nrjHGQDaFQmkMMhQGC1kizjTz1iTl53CoE4STlgsgYxZWQ3vG 6XICzTIBvjWt6ArA4ZDzAgh2znfjjlxPbYVk90gzxqgeUCKTveNc7zsk67sF CMSKrxYVIw9bMoHseu3nfHVcZtet3Vc94653br3aSfPxe2Ob1xzgAB11111n WPbrr2mmCMFBiQWDGAKiHASIiZU8HAJPBhAgCfM83a1RcxOaWRJxKh5ndqux oFdp1HkZMSbA553x02PBXcxieoc5u1okQdInocU+kNErLA86CHldlCFlud4v i06rfvWO8Jr9tv1y+t75vTgAAL3/kBqbqATPbl8zyEQaXC0pQUFIdwzg5WrO tWp5RgRWqsVqQ/bcg/F2FmO4q5vo0iyIqBysyFwHRHACcUD7AbmeCy7ie1DZ i4hER9xnXUm5CmYS/P8TfzIkwCr+qknQVKZjlRykxSSStJ3zu+fL423t2AAC 2LX+J670V5nUo+pp+DIV2vFB8Hbmy3aiZqUhFRdunWZzjX9Xyt8fe97ajx1J cR49cDHXTizbt9WyKuLxRT9B29qczLmxcaUFEGhT4nQbqaKzKLpaK2QuB8Cg CHyAaAXwDkGSKYMgGQXADDIPqBUsXkTjiUcLEusOJpWzWZ2tVuFOPLcGmLCw VdK7WbOPRFu8yNmKU03Supqnm3jhLdcbuRDgSSMlTWWXU43I2yDTOibhaULy ANJWxjCFNyTqRpnEWqpXZOU9EKtOwk5zbt7cGhJvavKvaWExm1s1lBQrE7Fs vMyFTE6vy3Cus7vVo7ogZqSrcyUDslSMVmnt0K12txj8KgLryeD9Od/OHI6P dz2ujeruO3JTlitbmpN9jh3zu2tBxrj4Mrp27o33vvujx65Hfm2w+pO5EKQC pJCkiACke5CRR6MSEAiT8UbUZpu3WwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAH4yeVpKVrCLCO/hvnGMYxjGMYxiY3bbbbbVfr8iqqqv m0FJl84oYcW3FzbFinnk3xuOu2Zu3/a3vwvfF3eR1M1nu+A9Dh4dFsd1IHOh HgWqlDj4LLyOS2XU5lsU9Djyc+FODsfKPhNsqzDMMw7+7fV2n2xJLe5M253z 
7a9fR4AAAx7rKnUzawWACaxy4FAAYJNcPR3Y/BHd7uyrlHa1q1rtj33vODZE Aqe5p18/GRx1XTaigXqAgiqrzvWcgbsOOjenrErGSvKrzztdrven3o6ujueP Oz3G9Nryfc47uMNdmLKoyrvjFfU3V7vW9gAAOUn6St/iS0kluc9yvr1r15nS Vrqwo4pRakaJDgkAVW91VzWgt5kADRIiOjrXsdct8YHSz4hV3Fw4J8FTTuEf aSPmM/Xb5we3zVGu0cblnm9FEeA1nujrW35tiMWtXGJqt+69y/fxXeMZG9gA AMeSoumQASTyC74wCEE5VyIRWgZwjTl4ta1jEDgDKApU1G2ZGkQFFVKKLj6r pcwH8MFeHXFefByRx269inXu83Hau9Ej3qubWp+nmEI11YccaVt6WZiUCJ9q y3amBSifWsmtZ1vNe7Y+svSSSu5Jr476zjXM85wAAL3vwSSABOsYkAqh4RER KXdhKSiuszNZiUPpzsBnkkQri56GzWVUeKBOqPOx3bhBZ755jv7HNjeT7cTN pX2rwMweQfcWb5y+A8PAeWB52ZrMwm7fL2ceZFG8qp0zQ97/R+4B++3C+Dnx evU375UzW/VurX1q/eY+d9Z73lrewAL3ve970PcdTPQr0FVpbAyk3vVKQmVT K21atuClNI+fJu4355o2FjnMSyOCHE+fBzgtSW+iI0fBbu2RfDDM5doKWLpl 0KEzJmsuKPh7NbvnWMaxqVtfvrOLZ6+kz71jXrbPOcAAC970AyDsywZZtgAG 0iJqteISyF0ZDStYrWueciOwpbiRwbwnkCPaqaDLvFxECuKBXZrMT5wxHMo1 NfkCePgs81tqRbluO85Njzoea+bOu14X7sxVQD5eRdE34FdvHl5zn1k63u29 gAAOUma1pKXpTUlZTqk+O9396SjZwbuprjgLXT4d0r0SIeIVMueMikUUEgkA VHguantTMtY7u591WdjnA617NXfmHr3MzOj2tyvNsbtdvEF7yAFfa7npNKoj h8mEK5nGd6lKKSlJiT4r1nON51vYAAD4kAnnoED555pMfR+DxpvQPaqpDPPV ADIoHmYvGdYkFjuzaGU1de2a+yHhTizxzzxvSwvrb7uPw3duAL375wV19zKv kCvPtyzWD74ErYqrLOkFqMNO6riLVUJc8jsAOmms3nGMZve9wABbulPPe90m 7yuK15MYxWU4rSTWO8F+6hY+wL7Yngxr+ewE77c2x9XMSvrLdejEtGxd0s8V kAS5GT7AuzlZXy0gBKNGZ5/MUeZblrRUmAESsjyKXnz2LUCWqxrQ9Ax3quu+ 619W1jvezewAAH4eb2kmaymDzCD4apQpN3M2QTBFEBR42xNX8iYE+xySLtzP GRcRxHkdjgo93NNUL9gAR2fLtUczF5lRmVO9Q62pcaHl3Q0juu7t4WCeBjFX ubfgu7LfiOq4PusWMpXkzSP5CSOfYHP1NWguK8V2KB3VbLu6PW7622vK5eG3 JfPvHgiuptuLTP2rq5tRZ1ZVQTjom7Faqbq8FxtNLA4lN0rcCXUiMqMWislr Iur03ho8aswVNSc2jCrKYBwbV02stUUjrk5ExG6ZjA7gkoVsxTw3duqjNeza k1eubJJspaJ0NEPAcMoqAamjoqHdzNKNUmS7tPYy8bMBXT/R+0OICfsfkBXy /OC5QHIQ1EwV7PO8LxkVwZFrIsD0Gx5PHU9U3fXa973AAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGp9ZWkkrK0kBBHACP1rO7MzMzMzMzM zOxHAzo6R+AN0nq3ONGDFS5e+Pfn38b8Z81aY5r7KhDc7/GkFAqehobAGiaD EcrIxZagDouMsRcFtAXJvunC49nXnopsei5h7mz0jCPacmtDuwgUKFgQsGMG DBhiZ77+PgOfPXXWc3fGvj5AAAJO6MSlPxknUr8dc92v7k6F7PjxUBQ4jUpu Fd2Kqu1FACc+ISRFW6+aa+Ft93pHEbbmevM28OT5vB0ptyMD3vkHxWDe7KsV eS6xR5IquyVnn1j82s6YrqP0mOUpJvpze9Z5zgAADXqU+JJWQV3QATzjHni8 u7J5wDwnhCULWIIBtJt+W4httbFrgHYJTccHKADpAfQrECSW7qS7XbPgHLuq ZPh74Luz5E32G6zzogjewCR5ccH7BwSM9tG89QIp+zO1K6C6F1ufWXmt53ve wAAGq90pPVJSks71T1bOeSOJzI8nRw1VbQEYloQjtCdzJuYiOEkGuve9G53T pVUWLhx4XjjjjbOEe6Y1IWfE2rueaecsE+xAJBu7kSRzoXuC7mSv3VFTYEs+ wFw5vnyVma2SxelXFs3NGAPoxvcvdtgAAO5SkpSScM0nVZNdSbtM7dWr5wRR hluUMjL6tFgYU9+Los8ocp2XQ7g54RvQOd873t4ugVFKiOU226fktwZ1V4eU ecg8F+ZTrgojiPOAeDyVkZMPSruIVN2l7+X33jeN7bAAAclJSk6nzJOprRiG O+9B0XGMAii4uCRAzcYMo7zcp413xveSeEOK4xJy9nHGszaaGdM1JM2g+CCF 7lZQYzMXEAF3MxgM8gvzMzL8vmeNu6CHOH6Isir8y5NAB1PoXgqW7hfLr2L5 kjkP3xaHclZTzHMGbMu1F5WhjnO8IG85/PSSTVvivnMfHM+evQAACnqet7SY rShFgs7TvvHHHGtw7SY1quJv2oiqHXWuG6yf7l97SvaJvZ0d5wgNVUSBQngB sgLk/K3h7COy8nret6DmALCcIHTJjvOAwY13YwqvElY8FeXd9/B8uAOngpKf fXI7wW624fXVbdddYrXu9r45Pju2Gcb3vYAADGvPz88C8H45Dk8noR88DdMU 444R4pgFNr3nA6LlKrK+3ExlW+cBDEDgFkAniIHIIE2neNOjTFjN8UlvmnPP HOzW7KprWs8G2V4CeC/HZvLa+RDcVnD5A5JsNPvgHEBBufJg3PQoZlyfCYgF dNs/NxLM+u2u7vzvnd3dAAHfoepuVrStJWtJK1pJWtJSVtnDVcBiNHZ62qYu OEvvmZhzoAwxk27UUPDM3xa1qiDhAAjDEza0L1cgvcgEJc0vi0PdgOgMXrUK ZAJErm0TQimFpatailBYrWFBi0a9vyZy/IVRhozpzIWxSfKN/VLa72873d3d 3QALy1JSlK1kpTw/C9KK0loa5elIZ82zGeRd2sgs4IxXbj7+gOZh9ciwCIzX rre6M4xV5tZFkhwMM6O6a551mznGK4y5QrnNPHV3F1hJJJBJIw8C9jou6u8s WebmZJIMHelRHpGS4zeupeUg82a/FiP1rv5c4Ppjvfmt+vXrgAADE79n4a+r GMfTryS1TPwoaYvPF1xw7kFD2YHtrOtc8cXGCKY1eOGuekgkEkgG9QkRl3Nx 8ji+qpTRA0rQ8Xji6U14y+NGrN6zk0qCqc25aKC8UttCnxKyezPamawrtQHC 
ou67bkGMU1tw4K6pxrwrR10XOrnODnXN0a7+5a+/m2tT+PxK/GszNNA/hyph xBHxDAqmT+LxGxm3snXTsnJeRlnZe5boXLgxoq2VTiJfJu7m9WKryYeBVhys tM1M3mpQVE1IqdaqHgmxsYddaYytoRsOMiZZB07tXe07Q1VVKTWOcDGvcM3J jKyGKLjczam1SZmVL2zVvIF1iUK9E3GlC8xHLvOAUOebzkjec/Ubx7b5ct7l 8+h2zu7ulwZNcfjxeKu+jJZGErSKSSVkpXFpKJ31JaeW+Lse8/DYAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADVPrK/aspOpXqspOpXqpC jU+WT3iiLOjlL8qWIYQqWefY5C4VNepLn1MAVyehXqLohQcgDMnITpZLAdgK 0POPjItZTfBNMhNZMIxKSV/Ga90tJWlK+RSn3pSlPjV8X7x8fPvfyADd3d3d vl/CGuUEJ2QN4B+Cb/cEOwau4oYQBGNJuPeUKR5Bo8A77U3FAZAG1S+AU/Tl QVEkk3dIc2kecZENxgEgRUZ2Fnm7nhbxa1VFTrE3u77Djh14uLbbWtVVRFVQ NgZkEnFuIcIaxZygduTfrV15hmjTIGTGRLc4fWPQA8g73vd3ve7u7u7oAPef JWc5JavObrKStaysrWkpOmL9VxWaABklpVUc8ngVeNPk1ZC+apu3z5MWeVUc sV07Pl2gMIIN+Rz4W4/d+7iMxJwZEmZ55UxnkGPY9qpH1NxRqfk3SWd36lZf d3ve7u7u7ugBiRST8pPmkk9+eXk1Pfle9+a13ianffcipWi2GIjT2s7WVVVa NNKU9AYqyqvmbXtaJxzM18AKMAZiwYIDCqpzHkr5Cu1lxhaEEx5ljDQfewou D9gaOc/L2fJd9ozrtemsELOZfztK47vaSfQ2ZztsAABidUpLSw4PoCvyAjvQ /M+xPnsdZW7SbZnVKu49NXd1VTVJdzLE2LhG5lDNjMz5zkZkvwrpj4rGm3wz Hp0ht+T3kQY9BAoiVnceXYwmqryioo45c9vzYPgFreoiJu5ctzMUd2rt4Lvw r1kxTyvem85NgAAPUk/VNU+mvp1JyBUAdLrrSEXCDlGQDIKtxa1mNaI2lLYe 015bgErdtiuasFzyBd1PKr94HJnhTas83ecFVOkCSMYj7N3A8I/EdCg9jVdx zpwmqqRNwCecEzCBAYy19jnnroj3JB8mcm49VOpjOYx3W/0vrvvGNabAAAYl NzuWralfW+j53vvtS6gLMpTqSKqqn00DvAKpTkGBB5wQjEqVIcFHW5bJHeQa qlh5RcE+SOcu72w727t+HFuTd1267PhmZ2+9VWqkUsKrBjPnXdvGmt7AAAfM kakze3qu9V5WDHOATKAU+W2x+UBFaRAzbc16CGKgW2uZcXrcocJEmFQ5ZAqg 6YkUIaIrjRmp3gMBIkkUFYgrEFZaand471rvPMuhJ4RkHmRWwN9w2LxduLDT KuvzHtSx9ernnnmfPNgAAPczTHve+VrOSte++9Zxht/fPIr3gDc4LHAjzmNe +AYssM8akbV3NSPAXGtve5dqvBmKxxJe8kYeaYiCeA3AgjlbfY1TXLPODDDj whmOcCXIwrw1bLrI9trGtY711KUt6w1fON7bAAAfXNJ9L01jFje+VrhS+Xwr WdoSZiLOQSFrS6vWqUeqqhUVFITEBxNGJg/XSGHToUGeEafGbi56FI8nw+DT nl4nRgmNqpoRzgRTnZi8FdBU91re1t9767v1Or10rfHvrsvjW97AABuvk+bp A5Y0jY0OnMS7NHtudeKxZmLuvXpLMTkx5eFZedqJab4ufoBMbI1S5D/KJ80I ZlYIzcKqlN3331/O97SY/myfDZWajB2cWZrbrvz8PAe9YPeYNWB0l4Lwtd87 Wrkkck2DZB5OgEPH7kfzx9nIAdrWq75fsOF4WzBkpowZifjiAZyIcjNVmanM jcKNS3e2My4kYnDOwlcRNDaqAYavLzGsxJG8jRDDVJ0KlbuCc0lXeTtt2DmC pNqmsRO4FZow9d4dy7y5bRcWhDyUsmatTGyMeRFiYmJmW9h5py3lVlxC3RIZ si6dmcmhsmIozhxyNjG9XIWk6mdP7f0B5ygNEAEAbzhs74+r5OcZuXbda14d vUNl6+E36JOTf7koHqSdbr1J1rOfiY8nl/U+J+vfza/r588bAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAalPfUzKff9PmTXdZnlLWpXc6 erfGrea+bRzszHZptI5+bsh07so06NM043o9jd2828tLXjRz2rg038eLVeCs vaGMSRSEUiDAGMA4uJPGce3Gd999+fPXXXXQAAk/Ga9VnX4Yr5WO7eY66666 6t5nWMeSHgj0kk+cH6cXlVCqYmyq0GKEVU7OqHWZcPKix4cBL3MzBv4AGwf2 jfs5mej5YF/B7cDyBp3RUC4cR89zLwUeX7BLbmgaINVS5FCtO1zkdtjtPtyy Xs25WzZGN7WxK5wVGc4PxvLeNab2AAA19ZqYr8/EnvHnqfG+XMeqsia+nGIm oi0bi64zwIaTbTN2/XTBH6fY55wc4FE37nlbdlnxhipi51sHLVXhEeQcS7mZ MTUQqVW+A3VV7/lcd9q+D+ngx++tfEJyK2wzI+OHK+baNxe7aHPz3vXve93M 7u7s6ADXf0pSTdZWVrWtaUmU6k316zbFe+V6+J2NzEz7q80DuiBTO7taXq0x Bu7nRlkSbjsWcifY1rHTme2o8fRHHk4niqoseRqvt26qjYujN3NYq3uA0iHS EikdyfBgzKq3v0zSnu2dL525wAAByU9yY5uevOl+Saip7Mz4KRVT4Pe1mRt8 AI+oHh27vADsB1VXcRVt1sGz2RVRhzLhulPeBlX2fIvdi+DAVT0HOnzcseMq izUF1cDSag0M36bc5Nb2AAAxryrdJfadddTGMed551eIGIWbnOxCSHnjVw1J 508Ag4eTPOzd0/CCSRUQTcQJIvIZZHh6Z73ZkgG+rtVF3MvzEG39SIHsb5gz LDvo5Fi5N3c2IzLHuheSz6zoyKJyIVOTGWLzoHvfvnBMfVRee+9zz0AAB3at cfl8b3N4rM77uz1bdKSuetcM+TdyMGc3h+913hgOcxX548h5swMzYUNua8Ly mX+j7cZXfN2Xhjr2Fdxnue9EC7teF73z+fnCx6KW6rSFZL8miY6B95My8vu9 3d3d0ACsmq0+u5JWWlKUiT11zeOalc3773jWMjMzIs/SqOeDc3SxcLOj1R5u 3XneodCu74jojQBl7B8HJ873rsgHHfO5mPu3u/q/IXvojrWtTbDLkawC6Raz 4tDUSwfCXlpmtEuvrorXhhx8zyyuOPi9O7cNOzMRJq9P47eZudze7u6AAJ6K 7zJm1633rGXdpWs6pnOfdK76DJ2YGb0jdcc4O2Sjnd8q8hMczEgOQrg7mY6b 
ULuxmZMDFWup7VVVLlRiqpweAeEiz38ubyPMR0X6PXRtTTqqceXVTR6RWbX0 Pry2+bHOAAAEpSnjulJr3zv8/Md9eV7N1bZ53aUtOa++R8uu9O7NX35rs3+v mz5DTALStazFa08E0CoRdBVtI+LWekx+wIGYUsh0uZU84Lr68XZu8iPqfXNU lphFNRIcE/bAH5ZjPPe/XObAAARdSk8rub90vK065fCUg4Q05UGlKImRdQ7r Vmb4SOiRqoqSL3YcLwK88D7MyhCKl6bbdCiCJmXVQYxDYJUY4bh1oHSzxN7W N0hnoteiBgFSAQADbuAo8v4/Qz2/fPkjX8qplfS2bmMb2dLy1gmXciaxaGHF Xu3JlxOXe0cwVk0mHVThmlVw80msThU9hLTaMDdIqRe7NqZF6JqsdHTlVUXU pCtc1FkjbyLWxylJCDrC9nS9yJgNY7GIxaiylTkyJrTpuRUzl7aipnK1yog7 ETImZmUlVzOOHSqr2HWFyotZeQ/3fl+zoaJr3SzCTcLgyBqEsTF5JvyfQE2F J1HS04980vubQ2g0lqmTJtTg7TqSnJKYk6lKz9rmcd/Gfn43sABrWta1rWgA AAAAAHQDIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAANSUn2p7+K0kVnWtc ma0nTvruUovyGj8ikUy3ACgBeNSUo4jDwCGPmp0QEJ+BFaykkla0rWapSdSf eUk1NS/dud8+PTz0AAA8kokp9K75LUzJRO/3f3PJWmNX3alszE7KfKhxz58w ecLZq+WpjlCgRofJbgUGfkMS3+OWR5A7R+Wrih24saKu/0A57HbbnvENQE+o /PZ9D5wO5UCr0CR1WIt4PvvnB0MWro5muMBoveoQIiQgelxk5KFChQoUPV4t W2ZnGQAAElPco9V3empnvua7vzOMXm79ZluL55BOZmrMc1oxVdxeWMWROzjv Xqf58h2fwPCO33o7sycA86YtXbZL5VVTzjFnsbGZgZjf2gd7zevu/QtdIRkR EioH3DXmZve93QAAdakpjzp+q+q1lpjrjm5ves5rNBOXaDIP1S9bTBJk6KpP tPe15bA3VXmkeKu3FmXD5ZnIhXV0BJDMEEYW2FF9meeNySJNxG35hkbeJmSD YlxFpOJmqMDvysvreN7bAAAOSUkpJvfq62+lpScCXfPHvTgJsJJHahc80XdV ECVqVUjlKIhw8yuxuzhaj5OV/lC7vt2tSVIOEpgREEVPjiro845uKSWmTLzz ZdGMqIm/HMAJd6KaXQaQkxI1zOuxLMOfPq8vL7vzzzd3d3d0AK4rPR65WYX7 l7WzSkrri2McrxL1ra5CqLjOmO/PO92D03EekJRsPfRuzYyD403HX7zhvVnd 3Xzg0PO+VBzM8JDuTUVp88G9V27ERspBSzwE88Iqe8HihAseVE9KMleE5xvu Xv9Kz171nmub9OeAAAMyTyT3tfUQd83z3HeD2ZmXc0aD3ZbhjB6nUVt857Gt saPwA986szBWr2oodGRBVGip4GeXC5G2697C6vYWZCs5RGis2Tvyu/UYsve+ d83d3QACT0S2OusWre9164tPW98qY73vN798nKp5k7NzkLHgufOw20C4E8rY bGHW8u3tRgibcUEUoqWn3tf5oGPJQxTlUbK/LeCSW94zbXh65sAABKUfHvnD NdTGHVJ1FeAvq75d2RwIQlCEIrmdEzJ7iEzD0Z9hHFmZjAogPvfPN3b3sIlO Lin5di7HV+p3N47tita16nqvWtYw5vYAADXqoz2O58BIshTt+ZNZ0xsDwiG6 uhUQNu7tvOxBbaApLaS9bHFfczK3vg52TTKhURjuKu6rLk1tWqt9AJN1uz3s xtKNDg+UiTEYZtbWMUSsv+EktPdbczrm+c4AAAVgBrnzZgzreO7POBsqZjo7 kdpBw9+XvnGO5GbPVAdx2eedHey9qqo88eknx0/rg4WPXZFEK1helHWWzl8z NkNYuEULcbBn3NjiuUWRD3o6u8CypSK1xanRAcgEkgGSgJBIAg15HORwjg19 HO+wLcwu5is1WeTjWbT5W5G7ZDiS5IN8sEobUqZMTbZYebc5e3k5LwF5UI7s w8sQIyOc105mnDNKpzbF1GUdoXZyK5TMXhmp4WMEgzY03ReFptUpNbdjBSxC Szd3l5lxMS8RtiA6WPaqTJKczEiqS5q1zWWYzNG5tOXiNQ5WVNF1jeZGzic3 f397z7DU47D2F4ZWPNabaWjpz8Xu4PwhP8v8cWLFixYsWLFizyEPxBI/mp9c p/OUxsUy/dgbFM2KZDRTCPhmWFf2nKOOx/vg/jQ/kpvwqHhYGLlbL/WsnP/Z bWmb2nse9jGMauRON/rP6n9j+x6NNmzZs2bNmzZwutyNX+ND/babGeNwW99r vbvo9zdu3bt27du3buFyqvPwL5X2WHfIdTtDZbUP9RfQCh/mn6Zfyqqqqqqq qgxkVBmMZZhmWOd/s6Oh5x8j4Ie1D3Pe+T5PgMi4+A8D4D0FxIkSJEiRIkSJ EiRIkSJEjq6urq6ur/UcK7z2eRl5oncf409u/x1rWtaj+39q/zP7P3/20y22 6ZbbdMX/bEREREVxcrm8fN5c4mJiYmJ1Op5ISH6ueV5tW2rbVvgDYbleDl6P Y+T5Pk+T1fJ73yfJ0dXV1dXV1dXV1dXV1dXV1dXV1dXV1dXV9SpKnKnsR6gD 8Ac4QPAAAPsAfr+v838z/Z/NTLbbpltt0y2260cAGgDiAGS2kkkllfu/dczM zMzMz85ZH3V8HwfB2cXwfB8HwfB8H5XtTy9uta1rWGrqCwNr3fhTsp81MKYU /QfENiOhzOZ8TY2NGX1XB334mmGGGTmPsPC5jmdDDDqP9oz5DYugcqb6l9iW D5WF8rD5WU42HysPlYfKw+Vh8jD5GHsDmatHia87uMvG4qFrBOBwKiwoOBwO BwLiRIkSJEiRIkSJEj7wLAixI8x129ezMzMzM3guVREREcuXP6kkEh5XrW1i y2jRrRORkuO6I2vW+u3s5vM5/VzALAvhkkkqqhJJJJXZJKwkk3d9qyHxOyjY V7KZbryn0p5r7T2LhNBBkW9M8r7ac44GAjk7rjN1UGjaaoI7+yqDwpA5KUM5 TkwO1MaOVxtaUoeYqvMIOHX/6WLRlfes3oRtTgUD66cz6p0uUcvCYWp40zpg zY50066g4zbXYKrC5l0HStTiWTuncRgkMR3JH9w+QHR0eTk9FChUd+vf73Du 7uHd3znOc5znOc5znP9EkD+BJAgdf71VVVVVzMzMPvMMMMMI/1VWkr7KfnT2 y3Qbg/B3+n+v7nW7HbTRo0po0aU0aNKf1Kow0bH4/E9Z7qfZfG5GfFPbXH4z TjzOg2Vyz8/zndW3Euu/H8auxx9uA494vu8jyMbcnXdXRZeB5Xg2ObhkNTyr v61wbccLgtLjcPB4l3l5HDe8V3lm8dTkG68TQ6dE7nXww5DDGxdLtcark5wE 
ggXAkSDTRoBXPCwMIBoGjzi3EgcHAD1uZ0Krsex+yqe8q+Kvwq/BX+BbiLBi GBXt8czMzvv7/9KGuFP7Y37UTdMn6YysLCr/V7N2WZZn8Vbq/KVcyGkPCw/a pfbQ4G1VP+L9FKnIZUsLIMBkTCskTxf9cIYTCW77+/1PXH05IcT+J/53jTvu 0PP+mZXwSfWd9atj24+HDcV/kWzKMKWZVVyNg40Tgg+JXWUON3U3UmtFkl0q 91ZR9cf6Ttm19Vt9tL6zQxpe/a8zZLuPvPJXRTQVlVcbwtST+Q8z9Pvo3D7C t6elHrbj2sHqr7nnfDpeqS3R+krxn6qnGsfbWgT3N07y8Xi2lLZil0lauiXi ssWWLLF+HYXuLFKvCdziXIKxUaOFsqjWbm81QyjZEyJe2us+Z1FhqOltbU/8 y8C3R1yKfC/3701A/7qELHtdlfPazvmRXlPJCkA9wJI/SqqqqqqvJxYB9cAl bSHYWKvbO6u3XqrJP54sLCwqeC/W0VuYp+yTwoaUvT4BZD2lm0+B4naT4m6H yuEuNztXie/jXfYiZ/LaUrS++2vC6HOidgu+5oP0v9Zo+s4oNFLCvMJ40M/j 4F3cYuFH1WFtCfcat4f2+Mfx/7ddcP462/6a4Z/863Y2eB+NV3DD+g60p6BP W1FT95l9VUfUj3Sh9KG5q95TtHuut2ONSXxKYXtHD+Qbx+ym4Jq5G1HEN77Z P+t628l3ncg9QthDoLzvgbD+BhD1sK7zBXmlhvYocjwtUsHc0ilTxv7IPttv 8gSSBA/WQP4KrP3qikn9RVV/pyFwKKL/FNk6gAk/CcGlVV9f3MGFVVxj9vP9 ahRJJJNVP+H/Efr0/WOtB/1LUZgofdvIinFxCJvapCs3YNUcuxbmaoSqkkUY q90onCYiKtlzBlLSZiBcu8WYkaeYLud1zmOM3Muoi9vIpNK7k1FZlNxl5Ocm P92t2dm+uMpGn2pMROXDrLvJjBAMCV16MzczcVqslIyjb49JbxWYVQDFTUHZ d3GUaWTrKU3E5rmrs2jsictBhqViRNwbi8y6mcwrl7OnYMysm7sWodxc3UKj URb2tynUm/8UVPe7usGbvvduMurx9Yl33MzvZMOQS7zjjNO7cZAdA0rjMFSH SqcmojaJUk0ribiaRuzu0sjZysmr0ZdDJxYiU5ek2bNA0qwOGJCh5FG9ttGM WBycujJCYi9vad0ZEmcxbO41cYoipvZSrN2zUXGbIlwDdKrmHExOG3b17dtb c8Uh3dHapmIV5OzdFZt1QOFReghgq3uhkzZDORKu3gww8TnBe4DY1MODWHQi bk7YeaDm5YszOOoyQtOXtnNyrxQdxXLWYbi9ypwWnIlTYhfz8/oYLBSH6YsW KX54LaVqViqqyTH5uKfa7HkfPjy6yocLYUVup7GVDSyVLK2bsxm1FTmjI2Bk 6FuLNumnlvRLi1WSp0q1uU43/hmw9gNO8F3VILb67rQSDUYYmDgcTcThFbtw pFGleW3JeuTWZqi7Rc0cykwNgrlTEhm9zNjBNTcoTjF3tQ9emRObU1akjHRY dIqZSSQnRp2onUdiZ1MzLUlRqire5yYpOIlGnL0ZbNtK0cUYDU3j1ahRrK2l WqRWE8uCjUxly72LsMzBjYbmnK3KKUXWTWRdvacjMDgQaFXqvKqLtjZux/hH H3M6ylBqMztdzVWXlIzsW7qlK1AoYcuJlCWSZmzsrcEVsbdvDLqZgYlusxh0 SplNyIRyjdbOVWm3EwHmmnQQgyct3s5tSZkLZ3aoRN1Dlio3NRmaqsqbS2UT EXjdrIMTWyt0U4VmZsO8e3jN3gwQ6ovAqgbf/QWu73GdkmZLU3XXeRkzEVXc sZrMC4Vw91VUQNBGxNuTMWZjanSYmzpusjDuAvI3A6mqm81qXJgUqmsmY3Z1 omMvZJPHbzQ8dVdybUTVzM1KdVNw1GNwXGvTdF27j0DgHMmaPO9Sxvsh6NGr ZVz3rR7yty5xVFtBjsqppwI0Xgus12JuqqBsCYKq3FPLSWs6byjVPUd3LwTV 2Z0XZuZ25zAhLuozajIiLmVOuVmRmxsxbV4pVREZVultaXBjbdtzTKDVpOVS LzvACUeDgIrLqgAOcyLZqJBQgT3tTL7dN9E7fIN5ZyqNTp7VCSzFbDIeIdSf eDg4Rp0vsTQHMzncjp73UlfVWJXfYmklD4SE0MqLJB1HDycWUzEYZcREms2y HXeY7qec50XPaQkruOZlhjNFuKQuMNzdYIi4WEmLxzY7UZG3YrbUB2pDuzCM Vk6du7ZDjcySxe45UsYoMVkRypYulFszO3iKjZoyw9vRLnXJam83Rm1rqNqZ GyY1aowyFdkNy9igcmUUsyaNp3YjVa0YnRG6MYiToxEwK2HunZJeWhEzD2Zn aKxNRcSDNkq1nbu9CuzZfdjrMf0Aec4PJ8IsK6HObR1AclKGgl26oV4johND zOZeFwE13EDG6MqXMVgVbDQaq3FUsOYtGa9yBVaXboY9lQIUSi4iuZnd3Fzg Ay322o7rT3sbLFqahYpzM2drcqWN3LcnL2DzAdLFW6GmrUorV0ClW32VcI9g yjQtxWl2bmOyoO3d7aNqNUYZuoJynVbkvIoLczTqOVmbsRG1riVrSWN43Kip JiHFVTWqsgXqJjHhMXtQg5o6KSp5MTNCxLCvY2TEmo7wB1AHKcrnOADTwBpc 0IhRm53s3OTiu1UQTSgxNC+7rLXLY1aNcbOVnRzj2xcVG9ypc2nsd3EKNVyN 3YVNHWHo6BhA5T7PcYCeKq/m7Xz1/A/9eUqffrfV/jVov11+T+i+R+PaP11+ cvq367VfoVnSWca5Bvfr3fnzDSvnaX6rqb6uOi5OS4rm+yvlOFk+w6HVw6Ov Jb+Ep3J3VouXI5ezjLuEuK45YuXXX8UcDQ6H/E57Lkc9Nv/06NvDHG5vHcew 5+lPZTul03PRHq69T2rkLh0rKYsrgs9vfTgmXQcPDpj/c3V9qOeunXvHp3+C YsXe7M9vkOE97Xv2eut4O+83DrBtO1aqdVYrXFNbLTP55Wf4l7V1epenBd1G XU0Op8TR4cdy2U2PdyNr2xwHiXHV76q2Fvms2lWquK3Kw4+J0LdPAudYe3nx x7LY7X5n7F+fyZ+V+eZhrNb7/tUP9c7CQc67+II9AQPPN0iq/u++F8T3B+H7 rNaz0uT0q74+oYqvHv1G49FuaO7K3jrkaG7BpchclpbDq9jZZlXh4uf2DTeP FyOBtHfvv5bx1xHJwtR413d+0A4BozEBt5SmGxjUPTIAuQJGCQYgDkjQHS32 uUrnHm7efl86cN2ap58XPgfRH8LrHLF11YXiqvMqeZqjKZOp41311O33+99l NQwm9PEMCOlD2qb5TDKZ/fr5G0+r7TYmDKeUMTAwYXP2hWj1y7ZVqmxb0wPM +DxLhO9jY8/e6/UxcPU8Nr1Er4Edld9lLoekFeo8pg/DjUbX/Iqdj1PfxPkb 
25HfTDpqmU/LVPaeDgYXMRoOFdf4E4LuVWQdK7/+bA2G53h49We6m3/i/LBU qcrYvA9DaThLy59YOq0rhc/PypWK4o/tDhbPJYp+vI8q8XUW3O1VxnOmjzcp hYLdThKMI4E6WbR/wtHBdKY6ByR6mKtNU1VsvicdiS+MrZbedOHnBj1NDodi xKeL3dj2vbUN7oqcCXgCR/KXFT/QhIBFRyAAgCAe7an6fP5/P8YiIiIiIqqq qqqiIlJJJJJJJJJJL+yOCJgMyR/fOMu8fh+zrHCuzd4Q51+I8gE8gIQEIB2H xvXrT8o/J3q7F3E6H+i8FXlcjnqcU/xulHMwejBzZV3+nbrlt46+vPZcNZm+ Hk8B0EnpA+4O++uQfFz3R6DkDPevMJg62D+UFgLEiuWAgoAp9BOAF9ieTjPP PHTd5o1u980udPK5+VbdS4nBs2T0Ned+4ej2eR6ed7FPRTPNij8j+T8N3szn 6l3eXtXlT2F7TbhZucDyOVvxPzRwPOPU99eh9CsO+eU6OHvvD0w+o6o6UHnf 8FtKL24j6yOfHBhTD9xo+uR9tep3biHGHzJCKCSfVACMgkAICHxz5szMzMzc ge6IRAX7YfU3Xzxxd/WYA+QfP7PtqUSlKJb9QfpNAnE9FntW/zFPBeJ1Ntg8 eXK8DbvZiZMfSeRvf+Ch7clYYrMJMMVjExgTMlmFmQzKWYppiPTlG2IwwMZW F0Xf9O3up9LB06M41sYfb0peWD/LI9LDyY218DfXlmZ18Hx1Pj2Se/x4RRRc cHU+s9vbjiKZBKweAoeFw8an+Yw7HmOpw7Gvj769oxXwh3vDd7GzbCh9RlOO qr51XMyDdG8T1eza6VnPJmPd7Wmnan+FDzeVlM/lEuL9l4hc6YhfMyNTMjLR h7LCU2ud408Od7tZd3EnKRwOY0rpc6HnRckcbyxmM6+6/bIOdYXcCcTFT1C7 tkcDAPbeZX6YJZBvcLkh2qMMWUwMyZ95mo76T+JdYuN1tFOFNaMpwNUTUwu9 bavvu0NH4H109dqpzc+EG9uXedzmpXUqNphTti40m2D4GlDIyYxDkU7BwKWj qacaUzMzMVVemoKfO1VVVXDWCmLVVVVcNSKYtVVVVw1/IxYsUixYsWLFmZ+I xVVVtqfZcFotxS1rr8X1774Y+WaZmZ9NdD7Zr4vZbx2pyItr8yn89ocLLlTF oywy42WrFo0YjBhlxPL5m1F+HkFxhtXAjLv1qK2sv5sNhkRhG2jRGUML+Ywj O8+zbDd+uuPRb8LUWiDRkXoC2v2L2blvr9h4Ph8PI9o1+iaOl+N9R+xFLz7E O/r5YsWKgge8KH28OAn65oOeMnZ8vZpmz8S4aLrgvUsOpg8vDgawPXgbYVcd dNjXdop09ONzfXizPXtNsvpKrwK7tV5LY2tFoUx80GiN2ENrAaY0bh9P5dSY aNLRo143yKtV1zFeL2QeHs8Tj3DnTJlfdsfsttbHPxlXs9mqWw+TDE+QfHfM zO9X1DSC4YTGqsaJNKxxqcugtZTA+A/A0V3t8ZjM44KfpYbVPllHfr1lwPS5 16avutkWUe32VOunhTYXU6Giv0NGlHgZ+eSqPG88qe+WUsXl8baS74YkuIXc 6neWCzanllVXbiB0pgHPjL6FPxaTkXMwWYssSvwpnM3UtJrHpDuNUr+5cD2T uPjVMPib9VD7I8eurGQLbEYFtiMC2z7Q/QT9v5sENNQ/Tfq/aXQdKczEL7Po 544E4V5hxzL4WjvR7fpfpZyqLdQ4HI43My4Kb9XhtRdFNuRyM4mVcoy9uHpc 1DfhbDc/oGUU4HqOLkp83PmXKu4na6U4cz3clWCAgIIdlk885A4k3gCh5Ojl eq6mfgff7vzX0e9w+PzRrKvua9/+Paj7KLb43c3O5z27vcPiHjv1Ny+o3vkr zr87w1+J2DRHyrYkXLhGAYEZhm8BTKXC/Q7yB21YEDil6XgdxMjyUPrh8rvW 8YuBi9afXaaBavlTa+s3U0b0919al5zJhYsDMsGWZjDKYdDx+pqML1D3w2On SDtB0g19f1+Zxy/kpPIvtXrc1faLzTlsvYa7nqTmNHBTa6yado4dnz+30e06 Bd5kp8H2fcf67+s4F4K9ay0XvOff3+invpzPD+A0vsXqOT5Vl6Hao4mdVDwo tUNRPWNxLRgZH6nTiA4tw2A4v34y4gOXj5W222222222222220h7piakPgCG wZ2EmponO5HS2NIudtdTct6fCnhTlDam5yofNx2G5XCC7NtNWtGrXXY2vXT4 3sr1dnB2UcFYsvceO3dXZ6sC+Hsi4Xx0U7jFdZ5/XbHMxfE9i07enf1G2SPP gJ9VMK9gnidveLFow9Rg1CtKhC2r1lOgt/Xwp6y+ktFZTBimFojyr72kbwyH SOU4ugNHK3h30LglMpw9cT+g+7jcF/5KGhcINo/knOnci7lxPErvrz+Gq1rV a1qta21fAwYLmcDaZfWezz3Te7kp5bS5NkcIOkcaZT403LcRwluYg3FkPbOW 5eY42i4L/md5VKmiLwLkaYn8TByo2DmSQ+UAPbwq/1i+6ltLaW0tpbS2ltLa W0tBkPyRCQ4QkGVix9+AJobmXfGqbQewJ+6FhwkYZIywvU/ffC/M4XNU8DxJ 4WXs70edlDbzXKr21X5rD4v9x4UcDkOZdTRo70pxLrftq6knAc6i0qXcZCmw XPcdN5brK8Kw1V4869tJ0YRQkUh8w/D9lPqFEUKoIp8tfWSAf22Rfuffa6qu xXgth6N7k199dWv3p9DrXpnC+uwHGCRtAA8DMooqf3UAbADmGGgfr7EYYAcB ufLof2koUDftcMBOFGiKEBYQjogIBTjoZDm/lLRZS5Tl16et7aeg9vsnR6u4 8OPh9UPDgW98qsQ9PDlmPNyO/yGu89/o9fQOdHPfQ8zvu3ag5GGKZMUxYpv3 3QAJr3juSPUAGwAAzXnV3d34iIiF6bp/dnifHvxjGMYxjGMYxjGMYxjHAHvI BA8ED3Agj1AQkKiZjamVRrKH1000UxMMwGdfgf1jKP0MpgyOVfHjSco63c4Z hwMVWML4WFVgwjVl+FhT6sZrrTLWbIymTQwYtMpv+WND7t4FwynGGR120knd DbC0pmiw0NU4znSNiPKfXVGyL7fvLgrn+j8Nr9d+UR17Xl7xWFlh7/jtG9GV hlog1S8UHDa3w1mnOW1PKkaEu6l09ovMmEYTDiudOShlOZZJHIyrdZRP+Zw4 baIO8/zGh/YYRuj++w4l4WgfqYD6a43ksLgsknCPPFmLMjtQ+fQrVD5veLVx plgxXHRqTkMsvcQbUt+RglwPptXx26ncR6GuB7x387Q1q0NatDWkkYZVPDey d57ZHjg7xGBe7a3NiDB2Luh5evuuB5weguF+Zyk7G4uhtprlB/dBlutkXGI7 palZG6Ng/9CxF3yvwqvcWKsLVeBcBX0vDKyIeNR9v38YNimPPnTuWYVZXqpe 
8fVTrrr83p7TuanGmGVxLRuX7y3LgMh+KbmSEhgDYT5qqqqqqqrzMH2nAjjY upcenk4J/wrCPPe+4rgG2QYjDvjjWpK+q2Hs4HusDvSeCMoXA5b4eVzXytr4 W3BfvoYb1J6a6eJedTgCyuJea0qWT2VRvezOJ5x5qaZZ7vMVPI+DalsXOuw5 XchueDc0phKwpwMxXslwrwpHY9DX5ZmZ6qazWU40jmWrmm5lbWxG9Dc3Q2P1 zcwzhrpwtt+Qv5+EDgD5wLnB/LD/HphJEpQEUkiUo/zpMzKkzMqTMypMzKkz JhGEYRng5IkAHgAXOGQZLamyYTFNksiym0ramyMmprbMMGMWWy2g3plNqfuY xjhTKbw1b3A1tNLfW+tbU3pgkLjBYWhKGSYkLJYJc4yCYkMk0bcodcd2Oia8 wPypvo8iqVNTDPdTxpwt0+llycZaLdcp0vYU+ZuUwbmI9laNH1WrYmEbG9Nm 1Ww0aLIbWqbF+0bmTiNVWDaqN95WrVMkpU0b1WGQeCmtU+27qjisDucTEpsc elBkPd9V9thfll36hocVhTjtxlpaUx1VxUuF0u+GgWZmWZZ3LjXS2C4zC2Ru hcC2rnXGp94K0OqmuCMHKXNaKfV6brpudUchd1YlqHcaqporDQNU1aso/zwY NKw8LjXdRmRP1upHp2dtsmmzNuRymX2m+YyPsvQ4Tg4LVra1Sjw8uBeRLYwH C7K2ph3nuLxOZPIG19w6N6WWU0YOaynCmVP5TivnLfm4QYm7V7ynKDwI3g+H CvYO3BTyLqfC2oniV6FND5GSv+himGEwxAdCnY8cDGw9L8IPddqbi4pJx9Hi LcbUyD0eD2GwZPe9KdZcDQcI2mViMMcD2Mr/5t9iWAyPhw7XIwjevC8E/mHJ zK5nJ1HRPJgMTx7GyasjIqVMtGqtHeRhlluxJ0hYiwwbQwcoPyU2oWim15GE YMD3u5bvgTuu4bdrai6JTKLWMi3JaNMQ1LDNKfZTTcmS3KXQ20NL1jvKyDvA qnGb3QV0ajVkRyp6WxolHThrnLHyX+83VtRZY01XMzwFqOUG0tLnDrcPgc97 atXbmt5V3txOS6QdDiNlDRaMW1NzuvuLfcriLC4Q1Bx2GcVN0wysRMRsw4b2 xbR6+C5RxKMDqVy5HJKcadEp/WbE2WD67Vpbiyz7KaHvdFGpw4Hxo2sqtja1 ZGLlXoeqORjYyMisoslPgwwYYzDJxpzDyVf51DDruadYXSFPbTzDSh13Ooys ld+/0B86avVeJT7aavuQb9V4WH4HIGQwHGmbmqcTkHok5mI8jVJjDVJqsDLC OgbnZ4f33+VuibHSNLkYTRRxXa6qey2vA7LatyPBPmdHGZdLjcQveUdbL7Y2 q+E5Q5HLR6ZQ7o/i+svI51d2fODreBcxLwiv9UGQ+BkU2p2JezcHiuR/UDll v6nCVbIOf8KcoTnfO8aH9Z/mRw/L3FeIfcaaL66f9BubHyJ9yHjGqdJ/ovhx jf2+dE+xSxBie1z/Z+zKZF+dZGpHxtE8FlzuoqVOFNXC2P8jDbhX2PhTlX2L 3ary4GkOb83l31qjjvCtCypYO1otOV738v+UHvbr6j7QcZy5odxHmRqjwvff grT+PfcLZfz+dc66n52K43yvZoeIWDaNE5uZG4OlPBMp1IbROFkdEdX7rYpt gvwKcT5etpE6mrSO6cvAr4QafB9BPTpL0pT7OVqcFyj39BOVgmQsMQyYZGFl YGWDFUz5qGpOmFmAdrKnlT8Sb01BX0dNhufH3+gO66EeCwj+FvThX9BxtHfG lTcyMs7ZQ3twu8TDO8vMuK84OGqHB3Jyp0NWWGIYg/JTySsM+c8nQhzO0e2G 1zFxtOGoMQ0WW5tbX8OEOL11P98PUvSNv0WF0jB/If4KHNFdA70v7rBhissH TaHs51bxptBs9hhhgFU6hOJR6QeRqeBe48FDjT2cb+Obd13x7lU9rSNF2It0 TAxDBM2sgsjIymGWEyn4o01MIwyq2t6n2Dw3HSXgRcSTfepZH3JvenBsm1t9 u7dN7ff04OCcLhsU+yhsoeKHYvVNWIbp4MTusRrVvWq2OBq+o53ErcWqY7jV ojkapxjmVpTShhbkbStQbGU7sMwzDMi5F3TLrg2HcXT4v344lucJe2y0h2HM 4VvTrbcuFM8M0OWjRayaGLjBqRrc5Foatl30cTzleEjCMI9fe0TuOVVvB3OM tTKE62UOJBo0j/8DKYM/TFoZCwZUwDAqQKAySVFFFAoMklRRRQKDJJUUUUCg ySVFFFAoMklRRRQKDJJUUUUCgySVFFFAoMAKiiigUEhC4zGYzBowjsXsJxHt 2J4qG90ryTkUuRtRcgZTug3GSmKey2jR7aucOTanVei81K/5di3OmSmI4e9Z U75XQnLkiYsxNIczwXsNo8YXwNF7Tv5L8juDlxvgatV+elR3ll3mF0LgTguA rgRwNLq1yuRbphwnArumlzC3hW1N9cZcC2++2ejvNCj/0uPpc7TtTDgjKMov /o3LzxDveeZZ9a8zxL4VR3VcRsHI8QbBcKyDcWxoYcdi56I3tHJfBQ5k8Djw tJXN/3C3CzxbTamFcBaWpv1qtarVy2aLVfqbdyziXWyhlwJvc8WhzDjb7MYo pmIJEBOyAGoAb5DmHmdZIVHMtDjltMLqh3qVeVPO7U4U2tjVNl8KGwdi2ris k/mMOXCY0tscminD0IO2qbrl5G21Mg3KnhYF71Dc0p4cFpTvUN4mh7wOWStZ K1k0AxBlCHvDARCyM6Yp2qm6U86eZ0b1Rwh5MVLsmbn3d5eBWVatqOrlLNy6 AytHK+74F3F6noh0SnMeS2MLy4nmatq6lzSmiO5hwTQMpOxmrBhGH3bDRKYX ciusPZTqd1hiZHESwc7j3R83a12YvWdw6ltHLITWk8LvsK4FwrsYcuJsftoY qne8DnaKbVd6NqYGYZTuOR6wbD0+Mk6HH3nDl+p1q/41cCtwNfCqfrXgS9qH xvYV3rvHtvhRZI6YzI7Hxd9412cU0VW8uEGVyxR8rdWWV3EfpWEPM5kXuB0p 8q1E0NFO4Vt1ifiYSnAd58W0XG+JhwJ/yQwuDFMKyMllHzPfJzr5A3w6Iyqb 9jgI8D2HDhaScMBZByjcaplRbUbbNScsGJZvRRtBiu9uudOO5LKS9WRLxjvC u6L4SyTjWq1ajkdtKW2UuUHZhacZ61a0uKhy2UNlMU1XE+sxUZYf6Sqf7D/I fccreFW5uNzDaVVhiEsrBK7owKeVbw5YceVpGtpf6LIvsHj4ryxZizKez3et PkO9tcA2L3pTgb04Bw22zMzMUaq4vi2jJ3HhFxLjezuJvxXQ9+9Id4T/OvN6 pY//F3JFOFCQTHz7EQ== ---1463747160-1301739425-1169685270=:4028 Content-Type: APPLICATION/octet-stream; name=preempt.tar.bz2 
Content-Transfer-Encoding: BASE64 Content-ID: Content-Description: Content-Disposition: attachment; filename=preempt.tar.bz2 QlpoOTFBWSZTWRCFytMGXnd/lP//wCBKY//1P3/fwP////ABAAABAAhgT13v kiSAAAlAAKKVRKpSRVCglRUlSqUUkQhUoUIqgKAkgoKhEiRVCCFBQeQakKik FApVAokVUoiQkUCqpQVQKFJKKFISQUVSQoqKlChRCiKlIqqCiAkpQUSVFAJE SUhUugBoAABQAAoAAAKAAAAKAAHGTBNDIZGRk0NAGgyMIBoNGmQxDQArJMm2 qSYTJo0yZDAmjQNGCZGgDE00YQyMcZME0MhkZGTQ0AaDIwgGg0aZDENABJSm 0oyNMmmRhGmQ8phBo000GQ0yMAQeo0AJqkgQQIBCAhMmjImRkemgmmg0NM1N oTNEClJAJAgQSEgGjQNAYQaABo0BoAeof0C3J++wbAEghgkNefhObYJCQQgf SiOjg2IJVrV9NkKjVYqTMkAsxUonOEiNWKCZkFKjVYZWFQQNYqSpmUCp/Pr+ 0DZQVqIrZVkGTCFjEMyg7/y5aP5dWoR3srLVpY/uO9bUf3Mq5xL3MYcujrfG bMQLnKrrKWa0qBzzppi0w051actWMm7CqzZqttmW5s2bmqzEyxZZUD+YGARX Xta75Qd8CGZBdd8ZXNZXe23ysigxDAN3Bpxixs0dGHGCo2sUo8YKlqyiL3lQ vgyUAhfoxQRROqIrioO9Bqg80G1BtQeSLIMDIMDzQYqtQZBgOE1ixmEyu+Cs leuRlVZZlGIyDIPM9A1oZZWUbQarUFqDuDhGBiqwOQ3BuDYGA3Qag2BxKOJT mDcHIa3pvLQNTuDdtN3ZVtRscV0DoFtbXFbUbNnaDeS2BviVb1ZMq3g2BsDa 5DsHPRV1rSNatZBxUuqqyLatBhbDvBvB1QcwcI0HTQZRzvQdoN4mQYDoHINg YDaBgVwDIOoNoOINA3BtyU3cg2bWQcwZNw1UdwbA2g2BqDUDkG0LgGA5BuDi hby0Q7C04tg3pWriDINBxVxJq1tao3DUu0HaDgOaq7S6iLoOkjYwTaqumwx3 VYpNI2LQNU0Mjaq0jYytmYk/6XDZDLXAO9tBmXa8XNxbb5nizOpMpMejuDSN VbktKNpTHjJ6VV6qrzdm/aXZnnjp49NNjjM44buzfd6Y8k6Ni4yMrJOAvGFv orrfZ4dnp6tju5vHTtxbUd8NLV6ls9Hrh405d8uHbHmzlw2cNaztpJ56Y7Z1 24ZVaGYYYWNtVPRMMTLMdOKKW9V5VZdzhyzj0aNnj09OXjDZwzW+Y6uUtd42 i36VW3rGzh2quwdVW1Vqq2qeDQN2KrY3N3bZyeuOxOAeqrxuejtb1ycmz0no 7G0raNOnTHJvu9OXjtx4zvGy4YbMmzWlu5eNzRxl23aOsuzjLRZi7ctGf6/t p0VprxmNc6e2nUlLeekw5GT2IZTom05ct5v7LMntkBxBqg6g1BvBpzDdG3Fq 4OHDo4jDxxXktnp42u4bbO20eVVu5O+bLZ4xyw78ZyzZjbXbbTa3ZvO3TT1s yu2dMjXUZkb1VtTxN7MMLwjpc9uNtcb+N7rxyneyd7xHIYWx1cvWZ0sbtVb3 fbmqu27t401vjvhu4cvHjk5tvS2YurbqunDhp08ZyJtFgZQrIMsKyKyslGVe rxuV35eVdh3Lhs9G1so79ZrPWzYOWWIxGQ4JrJVwTs7XVnZta1a2Ya8dapb2 NmOmNPHLZ5y9G2Y9eah4ekm9xhWitNOW4mw3YmW5PRMvVhwY3Z3m3Wzzyl5V Zzz3ji7DxZd7FzasdMk8SdQa30IWtb3VEV4IXEGQeKDm5t+dxsmG3VxmlVa3 26B6cFbSuoNUdheKlvb0LdzQ4B3O8VgbVWuM5xHNVbw0rtXTt0c7M5482zbQ YDbrUHMHaDoHhBsG4dQbAso1/rKsYl1VWW9VY08ZFzFnegdUezp48vLY9OG/ ttjLdvbtrd2rgO4NBnadHV5cnadA8Wt676bTebW9c1tq44EzkHinnGsta0G/ orGq41sT0qvL1b+zty2cG7T2MrxMMOGGqrlVZlVYjEGJiDey2rxtvPIslgtu 3KDAZB0DCF9aIrFTkMpXdMzGgzWYazMo1BmlVmZmCZkGYDFWVWZmFYosqsqr LIsRlYjAy1DAzU1aJYYamMYaSTUsE1VaKmrAxGVCyDSqsZDFFiMBaQtNBlGs sJDUDEFgMgZAwKyDINQZQZBqDSDAYUYoyQyCyYmIGKrKkyqsApgZQsyWTCGT BZLFgMKrJlqDKDUqsDJlWMIxViUsKsBlqDLUqrAYhaBpVNWWQZWSwMsZYKWV LUGgaBhlMqwMWKTBiMkZUswLBYRlVYyqsIx7FV1tb6K0LzDwc1vtzXpXc2vF cZnbUg7gyBtRzVXBXKq899bOs0b9tneOM5catY59L3jxBvBtBlXmWykypd6s B6+NelYDgTaLViq5KdyV6H9VzB6UPAOJsoYktrKRtV2aPh6PK23ctWmnUbcs szF5qatNirKjmVXFRNByHoHqnKt9rMXGyj02ZQ42qzF6q8VXDGDg8aYydGmq xtnDYeN6q33zTHbuVaqbZvYCsB3g5OEYje9G169uqyjK9IN6qsg9AZBur0ur z4tvffWdM93L0x6I8NyMyNCacPTxS5uA4rUHog3BxBvbcnWMOXs542Nzdwem 16b2bujThjd3R0uTk4O6e1VY0dNm7Z4zgvOs0a60nq71quLnVquq08F5g1Bo tB2DXiTeHi885cN2McWGMjqq2FW1B5QW1llDzQ3try2tWVqvGXtenZptZ0B0 6bExh6qrZVej4eXjLHW7prtp7vGzZy4cNG3Dxt09WvC87XFumWVq5vFsqrAa elZ3BxC3puWDwx7ZqlpvRw2acHZ5VXI2pnTL2VXoqtnXWarMY1o1jdpqDmDU GSvJW/fss9PTdjlVdg2zbxpNaBrW2u29rfWZoM673FZatirOgdStWz26Zd1X sYx29Mm7F7LkHbzV35xWHPfaq9j2cPVv63KsakbJNqNXijLS8Lnzx1XibLZH AcFF5SnJVeYeZHNPCq658oe0q9pViU6SmSjuUYHgdvG/rZixjGMZGNW5rlXN VkMqt6rnHpljmbM6bLd7b2HR35LEch0G13ZgcB1eeVpzq7WXFvb1taasxmmS eKq8Y4uKywXYMLCPESs32UrxzpvtZzsDNtQcwZIWQYGMFhVcg2Q7MRx1K3Vx GMUxiwy5uebbdtbpsDZissMMYZWLDC8TAZVjMbqrZ6YxjG+/TY1pjVo0w4Nn Dh63b+ObGaMMNzdOTowliq7VWQdQbwbhomILchW0Dtksb+MFjLxLw5nicdKt Sdg7wYDiB2LtWWWZistBsgwGAxBYDINda1u77cbV3zrfWttGtt6ra2VWUOOj 
FphmMMNw6DuDkHAOgdkcBl23JvdYyjSjbGXUHYmBlbg4BkwysZZZWViFgO4N A3BgOYuJzZWTTLFllSt7SYsYVmGtWnFDqsYjeR3bGpa1qzmDoHZVZDvZDEZS 1lWZRkHfe6jattLszjlHEGg0GoOYMg7wZBtvb79JrKc3Mrib777bwaDvTO6N tpMoO0GKrZ4rzvnbqzDjzNY0cnM3q6VWXDFYXSMZTZVbwalaSpxBszLGZRmS 4qsc5GZMh5YYHjRqnMDvHOO9mc2LLtNN2d4N7rCvHMrYm2MYxlYls2YWGZYK 46lljI61y7VM7A0Dv1ZRhW9W2VTqdcbeNc5086nRjdlNb6bN+W7c533burvL JMZVhgmYsYmWVZIsYpgGMisMbQxTKsNNGXRoy0wyyxmMrLDFi7VWaBirx5v1 jTGGMMYrGGsNG2pmjiusmS6NZqWssyyytdA70bsozLekHuxIge9VaosQfkD1 qgVuiK8ufNel2uwO6qumSEMxUzGYkmLLGLuJKfUfWK9+XoQ5ZEp0yV1RBiq6 vY0ex7GXs8PiTjLMpKzIXalHxefDlfQHLE6YSl0gHRXs5vlBwyJxVLiq9x7n u6ccuFItVUXZ8+xx2fWyqx7N2h2MIsyHZB71V60PSRuyyslWgmA1B3qmsSbJ T/NcAyJ/PBle1/876sgtWKzBWWMxH31u2pGsWQYbhjGGVixhlMlgvuacYqtJ rKtFWGMraDSC5kLe0qrjKZk6BiTWVWYswGcayRgOKgzjA+qZLXFoslm1itY1 kGLkGif8g4uNqqyDjmDMoNA6g2VvJ9LEZV3B8wfMHN1wtB4g7D/zraVhu8tG sNKrLTnR/71TcP6gcevE7LWxqsvGRmVanIcBktpRlS9YO4Mg62BvXvaQv6gY DZtU0GlmWKqxZWWVGWKtVXoD6wafao2o600GfO+rVrRGUe0GB0qurEbW8qyV bM0ldqrDZ3j/xTGRsDGVMyZD6gyWxwQu9dWg5vyjZuzapjexkV/5NzZutztb NjZpzpwcxuxttQZK5rRqlsbjzhuqvtOL2ScqNnSLGmzZs0RtYljTEsc2jds0 VjJwxLZsuIZLg3taOLicBgb2pYs0GS3mVwqw4XFd4OYNVzU3ctj1atVmzsD5 A8bG+GXnVWq1Z9JWWq1NVls01MbGTa009m9Vhpm7TVYb5jpVabNjdbLZiewP bhbHMuDaag5rlcg5uBxK1cVxb3KxwDeqf8YP+N/zB6A/3A/3g9yLih86DSq/ qg9iF6wfvB7g2g2/WStv7TH+Z+OH+a6Xfb/OY3bunbzdzabWy5Y6dOHd2x03 ldQ2DmVWSN0v6tQaB+VqDcHIpTXG/GtnhHXlHjmHFwjvaZRmI8duINBtA1EL IOwa37UGzxdrLXm145xG2balvVXfHpSl45K3YTi0abksrebQbwOAds3g5Dnv vgMKv4qIvNdgea5QRNVVvt1KnaDu289iieA2DtmwNqD09HYTkNO8m3mDeDze N6qu+932k4tpWx12vNd8S321ztbw27B03o27bQaMS7AzkNBvyDaDeubi2u7l webtm7G+9tttu8b7Gndrt0OummtVW/uQEzVUhscVeGx0ZxSIjMzMu+3LT1jp ucunJzXHdVYqrvrXdqbHE2Tpbbbep1qNU8VXLtdunRxnjxp34zlsssbMcnjN nDlKm+9B/AojlBgNsIOfXxdMk3geVCudJSPSSlKy1IMQq4qqykHp129OFOCJ gmjzBqBsmiDLMKlXmoLBVNZEB6VVhBJ5lFF7ZA5g4BoUTWVVc86BSHOCuINo NQLWdc6VCzEHyB/C0ojtlUo5yiEXGNUGEHbISWYJTWEqu2UTbLdD3BpIn+1D AhccaZcIzRAP/iVYVSR5kkENoldmYRilSwTKqwyVGQZBUrKDJCyDJCyQsqVS sgyKQ/jB1B1BtBqDUGoPEG1BkHxCVaoMBkGQpUcA+AYD5CIiMlKIiUTHRVxo Egjx9gAB9z7gDNjWtARiIiMQALADHAI5gefDUACbRKIiCmqgCzWZvLMzeWo7 Wa1p/SD6VVKlX+cH90H06g6g9IO8GoNQag2g1BqDVBtB/pilPqD+KD+CQv+U GoNSFgtQYK1EVkGQaosgyVkhagyoMssGhNWMS0aJg1VZBiVNKmUH8URXQP+P 9z9n8lfnA8PvarPy/O/FX+5SA/GVNmWGSFWMYxpjMmjWhH5MRamNjVozRrTR mWhA1hQRmNspSVrJFXCRPVC4gyC+gMBqFZEYiYVWQZZQdVxB3rv1Scm1fpl2 toW1sqvCOoV3RPEHFOEjgTztjGbdbuzpybXLJmbIyzNwwMrem1bFoOIG6Rar Z29KsPUrHLd07aktpVenijSK2l6449UucMlcOjoTc62x5emRXCqufMh4rWcC 4oeVWoNEaeKOYOLqBojdoHN23DaK9FcUcDtRxYr1etuGIm60bpERvaK5xVvS P7o7ovVYg2rcyxDUDexB/j24LLjJZlPRSsi2g4g2ranVLINqDxgNWtvO4P/C 6XpcblFlL1vDSV35iuAcJhC4gykbpigt1kDQZQPEG3XXXAO9w1CroHLSq5MF V2jFVtipRuLBSsoypf4oP/Gq9Qf5Qf5Qag2B8SK4ByDIHuDiDYpgP2Sq/oB/ kg3B1BqDiDQOAcxP6QbQcglLwDtUHeQuYPEHA+tVXytIPYH0huj4gfKDPtbV zB19AdZNJfoDtGrhVapvMMddZk41Nda7yUpW/Nq6g1yukG0G9A1qDMg776oO /76Irn79vPfb5e/064vS989i9HCrZkV409LN6DgvZf9/+jz/d3OQ5rF3Qysr IKyRoNVValZ9jzvnfMQZye3sdWXGuedqlTuJK4g6XNc7Qb1teNcwZxvr0t7x sUGZ42tRhW9kt9bZnPa567dqHNwDoMJtJtkmdaod0MltK452tpb221tLjLfa g5gxBrgMZq2qrA5zuGgfKvHYHGs278cd8iucqt+rq5IL/UDn5y9ag4QdwOem qraU8Mw9p7fD1VdnfJjgyeFHdGz5g+S4ssnPftdI9Ltcr0uXo1K5DzQeg5iv F2ve0lXFBvXSo7Yqry9a5DayK/JX1chtegMXR+xpuqvFVkGPZPN/hHzUn1AN xOesqtqrYYK3qskoc8BuHjijwDmSVyjgOqjqqtG2tw3wNqxSN6qyBFx/HBx2 R70HXAOwbUG/Y43u9HwcHVua73ctzprulSVZzt7985/ZJRvSco70MrQYj/MG 1JipfrX5fFsquUYqsg7Z+/w/tMQfXuS+jIG2/kNKq33BlnGKD5UGN1VjhqjE w4bNJcpYS1bWJYl35u/T8nbY+6VPxiq1BjrHlBpF9/pfEr7ywU9QZeGVTkWV WVgMj78B6R8Iyg+av399bKN6yU3ZrM8k/Gzdcfv1FRv1hh01AuWbffUDZ8c2 0nEmKW7ZU9doP1cUaRxdr4WNiq8bW11OwPet7Es2B85QbilPUGwnvc9VVtF8 
sDbK6g+2+/4ycQPp78fW7RXwe0r6l+TJD1kvpIiNu23mx3szh+Nt2cY5ZoGU H4r+v37Sq3499RV5+vFxfiQuCid7UG9tB8HRVL6KZZavfINV8W6C1eJq3ofd TVfP5Xq8MsXmg9KDSqXXfeDONQez5hzV87dVcQ73rcQZb/f8oNm0rmV+Gmis VX03P2vdsB+vxIfblaJXpV5UvybTbY3bvu65Vd4CzIZeGhPp1X4o+AZNXb27 WBtcV65B896qsOMfOudJf52/IPNHVIPYHwDiS7Xyrmm1eWlfJxfQV/Gn0XHU WK4ai19hpqvv8pHHy9d9QflvbYpinxRxxvBvnVWB+zL5j45g9lff86q+4NfE H5JlR+dB8W+whfb0oPp7Fe+VmXrrchd4PxAyDQOLyripfOvCve+fW1c6TMIO QavFHN9tI+K6V/oF3tK+fydV1Z8eD1U9LV98vw+y2raW6M2NQe/0tjZV7/e3 3fPxWmWHF9zdRxPH326oMive11e/mpdK+bUXmwOvn68/cNBy517afj26vwV0 4BwxdS+sqrc+T8ekvjvesrYfHT6UHnm+NbQZITIOtT4mJ/Db42NzD85H1K8l bzYh+ObNS2lmZKuNLUdkcIf8Afp+V9J+dBqg6UGwNVtB4tIr3FbVVsVbFN1n 0xtgvcGP65/CZXxdJZdnVzB4gxxyc3z0znT9QOSvxWSW1G69bzFiDYFtKsrz tbhh4BvtlqQaQasBqoMBgpTxgPa40J08WszV61W9fO71I3ngG8W1J4B5jftu H1tA5vcrjuV0rrhgMt/equlNc7RVXfffUGwNg5o5wNbKqyuyrUjoGS3xLjFQ P6lVo1PQPEHiDkh3UsyLveze4quErzB3g64XLrSRM5SaRkDzJxsDcO03t+yD q1OlCsgceIO2/xBlBlQc9uu3jnZ1vZrvBqDUG8H0pB7m0p2D51+Ei2itoYGn eL2tVxR2u9Hwtwv6fkX9FPyXKq9vr1qXYGhbleqjeSlK+Lgg4/PnzDap9FV8 0Pl+ea+iPkjhHpbiFqon1z7/r1ccVCv1wYDIPy35zMWLGYWKyl9eNA/LC3LK q81+d3nvzfQ/OD7B8G0rt/L4MY8/jiVr0ba7Z/H7yuD5lfLh7ePi+Mt9g5Gw awGz7eq6lXu1c173sjnmqtUe2wfWH07el6B+u7vntxuV3VfB8vkOrnfZ+Ta4 f067fsxx+qr2ak+KmlVaPKX6bpX66OG/1o+TZfLs/D8KD5V2va7wd4Mg8wZd q71xtB2g9RSmkGIe/vvB9A+d8Or2fcT4cQfZn05X8n7vubuHPzs+m2KjSnjb JKs+uyn5UG8Qsg/HQdA9fvewvs9FV4U+9aDL04qXxXCmofXJnJ5vaB8YvnR3 0jOPrtuMh9eY41GJVaVWXwde25AT9qnNS5Bk5vqHytlb9aDXz9rXj0qvSq9b 5eq/K+W7iDeXZzUGp8e3Fxb3bgHF9iuoPcHvfOwP0DK+lfGg9JC/SHVGiibB lkg+PnbQe6H2afTllKn0/e3qt1V7tH3mG2xoXe6v04pp31WZZfSvpD+AHa6p iw4oNQewMgyqmQMBgWRR8VV+7Z0YP2ZF+X5N27hLmq3rTly2TkPv/XnDOOV7 N2043dU/L9NhOGx5dbPa8NlV+x8Clzss2+Kq/hvKp0YxyquTT0Y3XvxPfly5 ektmV5yobWQdrfst7ejdGCsDIkua/0wfzg4ufFe3jy8RZjnTdteeFenJt3OH LhLbffvfvpr9Sv2opZKXL8P4eN5F/WYqv5fy0dX8KrwDaaKWoMqrZeWbh+kH qqtvYOOA22DL3r3rztUHjnPt29r0vRdV7mm7ijsCyvyZbyX6q3DSV3ifqBpJ 0H5+Kq7UdB3/XQaUXxcINKFZ7UHvxB9L4B5uNk51QcFVpVfs+Hten02Hw2Y/ X5Pnz5N8K+DIZdlqvk+3vw3Bom9TgNqljJ1vBtJ7Xwl98F7UN73vq+gbW9Dn K9l7+lXyXH18wN74svZHiT3g2vz9t26H7OzR7P19Wxhc1zfhVfiTZtxz1a+M VXp5s1D2totoMuHIZqz29Nrl9yq+z3bnpjH1K6Ccyp+lyT7V7mwPi3rqRk/i uoPZ3UwGTxBiD5bwfJ8gylckyYJ9fy2kvwcKrSq32e08PyqvZs2fHon2fT7P xuT7fz9/hnv9PhLKq1+jAPit3e0rgGI/VXavD7qDhQZQX1wGfrz6QfSDzdHi 86qLh49PNfb1s48eeniHr+wHRq8V4oivzoOYNCK4tpC+UDSHe1c1zbVlfiY3 U7C+WNDZy5bsu3Q4w4OTF+n6cVVs7pdXe6yjbQrbxB137bbXZQdC5BquAZZO 1HiltbpVv3toPiaJWwU71kb0FtB9VvX53ihy5NGGW5bP2a39vfb96rlrefAP ZbMeBjetWWzGbevPrRVzS4sLUjaLJA9KrXr5NVLiV45ktoMID+yqtniqztVZ f2MwwwyomKmLDGNoqtL3eNSuKiLJ5lG9Li2r09IMpVUvSD0DQcKrkeTZLuqQ 2M1thhzy6SpjnuDlTYTtA2g437a2Q7t5XfGrV86Dft466618s8cZ1kGQeIPu F+X8nscl7VX8HdV7mit+Yf0vbn9E8o4Vz2rYGLJz2Qdd/z54++vlSnt9zKg+ 1B+r5V63qdgZgPZ6fT6X39B2vUHrRtOKvfKzLi3B4vKDLDMr5mU+rH2vdxsV Xt3u4ZbiZd8+LJW79BdHsw+zhLzWh9N1V5VWdm9sDwDdV2y+/OvWxxltBkhb 9eM8fLV11tnlQrAe1eT9VTLzBkxVecVWeebcOoGl4ks724aesH6w2tl3kfjv 3JN0srr8EtSps+u6N62xNytlaz+cBq+vw7/DqXu/qwc5XlQavl7dKFceleOq C9QedQeaVyGvXvuDeW/03m0+nXHz46ua7fLxR+LhGoGKrJCvYGSF9rJLUx9j 9m8G9fTv+WbtriMquTp8tEPFV3buHiXKu9j5eRHCq4o3OS8nM2Z6dNb8lxp2 1eHB04q7OBvg3HHWHFt7WzYvyqsgfs/J+z3XquNVpq1aSXsK1WytsZ2nsh6p p7PRVbyt/nntu786O97VVy4i565nUG8upXRcbalLzG7Z50lbPPl5zF6zkYMn rTub6r1rW/PXf9Squz+T8n3cQeErxqvi4TztDN/tvPpvzxfUPx2sl2/f2lB+ qg83naqtWPlfX8qve8xbhv9AdQbQOYNt9QcQYg8/HXeL5x9Yg2ttLOdkW1FY V0+3hq84iuLyvNBqDYNUGrxA+Vimo+DTlR41EPQ/pafBAT3qvyl0bpe3cnx7 VXX13z3u0+h3w7+qq53IOvt1d64u9ff8muy9ZQa6B59LeD8tZ9cd3HZ/H4na fXsffs+W9Vj0B96q+z5sdNLzB9sPqqQ/DtRbGvfRvVeXF8Bt8eFzWVlWTvUH rQeoO8HYHze1JyTm31Q61e10G9xauIOBbVvZb1q4RtWdUcOZ9Jjtci9Pk785 
5Q2XjOqPeltly9vaDqVyDi6aQt1O1VYVYDtO3et2+/pp1NaSvKrGyq/o5I1G 0vu2PQp6I8araskeki6DVeQy2svRr3Qe9LqLaxVmOcm8S44S4I2FPNJ2Og44 R4RzFVxOaytAp1BtHeDBV4oMFcA2d1taU8YDYN4OIha7283gyDQm+3QPEmWq 5rtlcA47HEuyDORC8ZA34zbnntq1BqDSq+yNsqr5d0ej4bnHvfZIiNrvlv5g bdevW/xdXPp9PGuA1x7Z6+1lbfexb+7zbRXpW8Hm63DZR6ceaG1Jr4mwq283 V5ud5HVvt8B1fSD9ONrjt67HXElVz42quG75aK7KziePDU/lKnsuJXMG4UNb qrVT4JYPe/G2z7qMP5fEfDBPvVfThKYwHw+Hx7ujHUmeiq+Cq0fGGelXHXpK fdm9vqKem1S97rs/P409+CvibFYxjrbY1vVW7dsl69Sj85K2SwLb2etc+TVb YdsDcm5NDFVhq2Nm1eaDb14wbyt4MB3lV3qrSraDaD3rhVvXjiDuHdN1WqeH ejYbNLfDYdMbUqvk+z8ulcK7ep1VWxXbN5jZlDf1+OeT9lajZDXpUFtBvBgO QYQuJ8SeFYtrZbu+XGj9jGlO1V67N7l48Rm+zx9r0benZonCW13z3K2b+wO/ cGuK2pVUua959oNg3VtB++0S2qYezIMY0691XpUcPNqrJ8Ntt5XBMS6j43Bv VdnJN3HNVnu2M+E0t4b1XJrqpNe+iD4VXo6eDp17swenHO7T10ZsauE3TZwZ mcbe/vtma+/sc32e7nFfY775Pq6bua9bPjOVPTvmM1ZQfVB6vWp87eD3ve+Q /KT4V9F+nSq4D5cG1Vfb7/f78M+3P2fd7/M92np7q4+PH8ePscTkntVfH5bF aPf2v6Z8fLyfdsfj3lt7eTk2S++z4z7cFnvah1hu7r3gxR3BtBvBxxW0u1lx iDvBlB9uvNZfZkq8Fa++qr2NdHH6x2cg7n2fY+b8jK9T0+seny+/8A33Zz9N OzH1vrznOCq8JU3B1eoNXyrr1yYuwPpVW8HiXcczZ6++9O6VXB1PRz56o8W3 p6dMthtzzBz35vNEV5qrihsHaV2/ANQdw7ejp1drV5N679iqzqQdgYDur7gy rlK9Yqyd6qsfc3Wxth4qvl39Pbj5ldZe5qg7F1Xqsx0XGSceK7ByDSVtOyPa d6remKt0aRZZKPmDOoN4Mg6g/Iq1BhNoNQyu9irttJrLLzXkGOwO0Drkom0H HGwbg6bV4tTe2a22kHHbLNbXPyVAGtuO/tt1BtBqDFsjipfaynAdi+17W8XQ cJq76VeYPFG4cB7uIwHabbeQdYjRdHWv20+XHYPzB+CvX2zrl2VX4Kr93IPW CzcWphzVc5Vd1NismmyzHtaaoN5C1fF7bg9r0BlbbCK1e0GoGudQeI7rZ1Xx zXV8plfL093aHp3nXQeMDtbOOYcd63rzXpxPg1VWrccGrlkt9Hv+2p430bPZ PHfn2z6/k/VdJbudVV58o9WjBph+n17fL59PTfp9/NHx/Cq/G9Sl6fGFWUDy DAYDINZZ5g1NgYhbIYqmQZI1B6A1C3g5uHBwatNNVW7FVs9gcHu5Zw+D7Kbd K60x3fO3fCvGMm6xamKr9z0cDxY4Blb19rh+LJ9TOEu81XtYZUWtquo3Pj22 FfF8hpHurDkZWGSLm+XWnWpXWxu0G7Z31v5xt+r9DjzHf4t5XUGLVYH6mLjH FriTZrn2r7PXK83bfza9XcjzRgl5fLt3uaD7oMccg1rWsyDtZIT4RsD49M3g +sRnmH4r8G0vUvlpeMWu9asqscvsaHR1o934mp1ab3RycENl7Wew758s4Bnk 0vOaqD1J3qq7n113nKqwDf52VXd96v3q9Hp8mz9cqfTVjSfZid3sxdNibg4a rVi/X5g56y36vFfFw72q+l84N4eOyO+2kKnUGMZ2btYa1xgMLLbP263S93ul JJUAAEAgNr6xfrhjAz65mZmZhP4AcRCu4P74PYHgHvA9gYDYGA2B/VFKfzf1 wftqFfOjL+iQs/zf49X+uKU/4Qa+EPaD+vr/0/6fs6uAZ0h+fy2v9sUpl/2z 0B/hoP91BwJgZWUYxv+3eg1B+7/Rr+pDP/3OIMcIfnQag+v0+v0+v35Qdu22 z0gyD+sH1/jqq/Sg7f+d+4HpB6wfugdQZA9AfANoMkn74MQfoDgHzBsqX8lV YDiayMJl8I2qrA0wMMsyMA1Kp1SWgypvlVsDWdQfwwahtVXZFgaDuDzBuFf7 IMdkMssUdkHpB/qBsDAage8HcG8DAcg1BvBkDyDyD1B/9QcILcHwD3g0DeDp BewOoPeDYH8oMB/MDuDoHwDkHxB8oOgfAOa+Li1RaUaUZZaatWqel7QcQdsQ bA2ugd4PAP93uD0g4B2BwDaDi2B2B1B+kH+tB8oN4OAegP/sHkGQe4P3wbA+ wMB+dekoiq7wZRH0B4g+VVCVbA3ztVSpV/LvA5B+IPWKkr+G+wPtgO3++B+A esg3g/fdwf7LAbQPzB63uvFBdA+GkH9yD/gh2g/cg/hg0D9boH97VeK8QdQO oPag/QG0DhB8kF94KSrsD6g3QbA+IPuD9gHeB7A9wYD7fcH/QLkEIfZVfdVa gaRS/0CqalRSrUhbA1AxBklgxLAn7qDaDQN4OqQsfmD8gag0DMBvFSVoHUG/ wDcGA2qUlfUGVVcSFgMg/sg7X2g6INA5B1B/JEV+5BuDqQu++ILuDsDUHiD5 Ay0DEP9iPpBiq7ofSBtA1X4BnEHMHGgywMocA2B7g7wOqRSqyBlEdA8IkfKD aD5yF7A4BzB2g8g6B4g3g+UlX4jGYpZipjIMszLMMykyKylFfeDKiPwDCF9Z Yix5g1BqUyhWwMDiBgV8A+sHEHIb1CsTAZK+YP+tFKdv2/Gtaedwd1Vz7VV9 BlVgYzEZTBDFWCDkHkGA7wZB3B8gYDAfaD1IWA53pC1/hwH5A/qQ9ets64lU toPnA2oH7b6QPSvyB6IL3g6B3IXvyH9qDA/SDzekHtW1VXoDyDeB2VSpV9MR bwYqyDQPuQvpB9YNUHvUHuD8oHxA0D2g2gcQPWg5BoGA2QXEHuDiBxINA2gb yF2lV+QNgfx0DeDEF8oMIV9Acg0DIGQMB+l0gtfODaqq2gcxFZBgPi/QGQeA bQMBsD0BiVRWiFltA/i7A+oMiK2oL3BwDcGoG8HiB2B7QdwfJBf/14kLkHag 4B9bUG1yD/n5B+2DxBiqvxt3lBb0FvBqg7wMB4BtSFvIV94P8GgdA7Kq/Kqr tQNaB5qqu4O0GEqTpBd6B8gcg/oBr7/4qNCK6kL0kL84G0HreKC4BvEV8l+Q Mg6BkDAf3+EJFXbtVfNVfMDtVdaghDUquwfxg3/eDiQvMDm5cO0HeDeD9lAz L9sGIOr9IH2g3g4/seQe8D+9A8A9QfnB7g9u4N6D21VV6X0g97IN6wHCG0D3 
qpUq/wfvg/OB+AdQfwg57QdweYNIPEGqqrmgZxtA9IPrvliyyyZLKysjMsmZ mMzMyjEVT+31B+QNoH+LYGdgeAfEHiDUD9UDIP0gdoP3yqyDQPWBgNQfqfMG gfnQbQZBoHZEjAaB8QPERX8MDL+K4ByDqD9KC5iK837wbA9Qbg/vrpBf8uAf zQPFIX8d5g+7iD5QcSFbA+UHsD5g4QX9sHtvB/ZByD0B+qD7g9IitQfSB+2I rQPX+n+j+z/t6g5gz9dYD/CD0IWoPEHir9oN/2A+8GpC/fBl7wPNBag0DkHI OoO4P1YDUD/JXIP2gyD8VwDxBkGQPYH+DcHgHmBoHiDlBehVWqDEF7eKg/St qIrUG4NgewP5QcREDb8gf3QYDAfFA2BvQaoirwg2oP7XrX8kG8RXkH7oGA0D 8A/Wg1IXYHzBqilPUHMGqDAbqq/1AQhsT+zKZir+AGKplBkH1B8IMBgPzgyD 6/FcqDaDwDqD972QfuriDaDRGQbQaBkH5VtA81VeAZgOIMg7QOoNoNQUlXcG vhZXiDYGgfzwcQN4MQdqqvzygfi8Qd0F3B6gW1A0D8gZB5kLaDIOvnVSpV5g 6B/W+veDJC3B70H/QRW8RP5N4HmB71UqVe38+A8g9wagf9a/7V8A9ag+gNVV XFIWyC+QPSDsg2gb3mD6v6a4B+AYDcHdB2VEdgbMg/0QZA19YPwgvWD8IOxK k8n/+YoKyTKazsFfeIgWXk/8b//+AQJDH/+pcZ72B////4AACAAIIAgAQwIx 8+vvcaEQAADVAAKBwBHSgdFAAAAAB0CgEQAOcMBE5aGgOsrrQCdAA1QDqgKA SAAAAOgAAAAAdDoAAANMgoUBQoUACgAoJJJ2ANAKBIquAQAkkgqUSJKKABQA FSEgVRIHMJoDQGjRhGgxGmJkxNBhGgZAMmAkgTTUTRNIoaAAAGRoAA0AAABz CaA0Bo0YRoMRpiZMTQYRoGQDJgJBKRJ6gmaJgEwGkxMAmJkYAAExGBSkiSTe oaNJshkCNDU00GmjTJ6TQMjNJ6JpoeoFKSAghBMiJpU9TTTRoGAaAamjQbUM hkHqf5dT2et29MxZyCdVY9w1Xn83UlI67gba2kko397UgjZlVSDGQCp0Yrqs aVlUlRqsKIk2wpBZiCWqyEDKxAQIy7jEf3IJdBoVYPciNjSTsG+kwTt9NiWO 0Qzb3WqhmDK1L6i36ZW3SVCA7ASm0KXbwdOM9nl5zoCnLKDbb3jLT1mNMx1c MzjKXWUof+tDKkNzh1tTTo2a21lVU84RJwbmgaLBJKMBAjdNtGp4SMhYUijI k5OTk5OTycih4MhV2hlSrk5acGGmVE5yY7ctlJcY5O9s6Mt888trMUcG7Qxw cmjo3EPVQr2yEqVL8sFFJV1ua9RrLLbbIOkzHCtyQyW1jzJZC2VwkkLJYOWM lpkg2ZXLbJGR3I1lrysdbmVszCQSSVEkhFBICihpQ3UNlDZQ7UMoZQyh6UMo aoYhxS5peqWhqtV6VqhlqhqkxDBlDKO0N6GUOaG6G6GyGIbyGqGyHshwhzQ3 Q5oaQ3q68IapqDI6Q5UO6HKhlDA0hiHVDqhxQ6ocIdUPFDZWqGFzQzKHdDmh ih3Q1QyhiHKG6GyGIbEMpXCGUOaG1DihpDdDaXKG9yh3Wm9DqhqhpNUPCGkN qGyGqGiHSGydIYh0huhwKyquqWotqG9Dlid0O6HaG8HMS5ocxciyhg2QwjKH dDgaoaQ3ofMeEYdDo0eKHkaR2RlDahwdHJDlDXbp48u2Oc4N1jMeWnWqGzw6 eMeG7dtbGXDhY3cMaOWWUPKnJGFzQ0bjDB2h2c1KuaA6QDVQDYFhsJsODEuC x6CiiCinEOhq2HMvMnFDqQ8jwOK3SFKQEmE8Dmzgos0OaGJMEBLhRuDg4Nze wZYsOGjWjcxFW1DikO6HShtQ1Q8UOJ01o6uaGnbR0cvF03cWq7eHEjhDXJva 2bNs2eWWWzdueVdCyDmOchmVhGSaOzbzI4rc6Y7Y7Y3zs0aeGyHNDY0Y8O5o 7McseGPLiaOzhpyh0HLly22M01pppw3pDVDBYMtbGiHFHl4TwbtjK0MbcOGy tzkMLB0MurRvvQ6ob21DxOsrMGGVjYabmzajfjUHg8m6Gx4oc0PNDfrMcacD kaEvJlUPCqTkS1Qyh0oakNcax2buDZ26cmzTW9b0OkMkNKNIdjocIaliWKtO LC1Bs4NONLhOojQ+QrdIDSENqHFDpDqQ3ocUNFWJ/1oZ5raukNukM6yvBtba eeLzLEPPo30zexpPDM4cbGrxhuhp6jMLuh4oaG0N0hkMkNLjbnKuqYnEhshl DlDBL8qpMob0MQyhnqE1S1QwaUyhiskMUaQyhhGQmiTKjBhlRlZUMGqGIZQx Q1QyhkkMIYosQwhhDKVlDKGqGKGUNUNSGIZCYhkjEMrEMEZQwJiGESyhhIxD KmVaoaUNKmMgxFYxqhqtSLEMUNIZFoqwWMoYMGSGqGUNRWZLKGUMGUMKMHhQ 4bsMxOWNMYYxvskNIZIeaG9D/7V/865CcYm+PWV1vpaxGb6SbRFEzJuw0N6a YsNScIajJQhCDRNObw3eh8S+VTZJtQ2bSPiR3MQ1UNVDhDtDxpmeM21vs2ym VSEMxu5g0QiccYnDGUFQrUbdYvFu5emnb12hwJeJSeTvwenbs7Z3jFXCrxI8 1B1Q3mTyd9mx5PXg3NO3mhjmwz0YcPVu51uwxuhxbWYoYh1Q9UPNDyeiNqGU PSGUMNHq8M3Zy6eumeXpu55anhRiT00eTqQ6Q80DlcDlG1EEEGEsUkAwYEhw ICrLNbnmh5od6TI3i80ucreo2EuFDtRbVHaTT0bGxwcnjbgadvDTZjTLS5Q2 Q7oeW3J4HThmAsoyZZ9O0KiSiRWFhqAk2aNlIjdDx2cPB4oZVdIcUNq82yym Ml429UPVD0oeju2aJjlsux1QylxetqGpT0dnB2ZpDcdtrZDo4ad+tk3YnXLl rmdtIdTqMoc9ujjls5YO3mvI8DXFwwcB5OqvAlvVG8Xmh5odjzQ7ob77IaQy h6obtUc0Y6d3GY2VpWzznk7VxQyTm5vBuGFwJyeDc6rKsHLhw4eG21pNbtNm tukM8eJxtp2w9PNDJDuosoZEsoZQyh2h5ocUNUNhpZSaix1Q1PBjFrNNWZlh jA0yG1DahzQyh3Q3obxDdJMFqhyPEmk7wW9DqhiHJDpDUhpDEMUWIZQyG1Du lmLVDhDZDhDhDojz4t42htbbIcqHSHKGDFYymIYhwhpDZDEMd4mRkuKGS8LM MpxQ26mSuKHMh4oYO6GkMoZQyhtQyh1Qyhu4GMMo1Q0LFDihlDpTrMpqhqmK w8MNUOKWRLpQ6oaIb0N6HihvQ4obnNjDhDlDehxGrMYZjGWTDOudObMZjGeH jnhwQ1UZixhMIbG+WxsyYyrMscxpDoPFhkx4Y4Q6QyUCekP9aGEP5Q+KlKbq 
k671QzKGeDVDLETB6PR2htVVTdkl0pRbHCGzJJbRVZQ4daPLGFRuhOZekmzC YwRNhC8HilZlLM4tUN2BvRbK80MdnZpqqTeqTRbNnl5l5oYpzQ5rlD3EbRGI b0PGeTr4C5C7I+eh8VDUlfUMXLQyp7dTKGFlDVDFFvEsFNMHKGlNIYmIbFD9 TShuhiHNDehvLFDSHFDY/BxQ6Q4Ta8oekNNj/vQ6O6H+tHihlwY/7kP2hps5 NNMmTRpxQ7pGIe6HCGUNIbM/+jJD1IZ3DEN0xWLUXYQ56HfU1QyX7ofah+aH LmhvQ3oZQ7obtUPJNqxjGUwwtn5Q02Ysskrh21Q5tm2yGzQ2n/bWpTU3obZQ +U8Rb0McE2NNMoaMbBssNls1VsbLZq7rU2NqGibFiZOqG1DSdTpD6Q0dz+j8 LTRpjEtlqNmO9ZWmmWxptm1c0NbJhmUdlpshxW83e0hwNhhwQ4IbHI++h+KH FD7EPuQ5VDyUOZQ+2hxEu1Q8qHcQ2obUNvIY2bN7dyvI5GN+H8N2xqjGqJpQ 0hrVDEOlCuaH+io2quKHOZzf7FtmazM3aa643IbBLKHSh/5aG6HKKuaGKsnF DYh/ZDbwzt24eWmj1t5cthy8OG2Mxpw3ocMeZHZadNNuHbDHTW6HlD+VCtjy cqPIkpp6docWx4kPVDolXkeRqh3sodkeKGUOaT0cnhHKNyycoddUPEh0hvQ3 5bOmMbbOcNHZsdTk6p6kFN9RUdeIswbQQm3hzrbPXKHch1ty8FknVDycHg6N HJ1eIl1tkgJxEvmltpbaVS4yXCS3m022b5xptqEg2xtjTbGxmyRjBttJtMbY xlZGmMzMMC5YEB2YAZ27kU/XchtXdM278bupmaLJNtq6IzUVFxm5h2e4ydG8 t9ZG53BO62+91WkApZbu94gopMkse9PL3WPO3KjHyNGzUzttb1OZdVTwOyaw dhhkwyZkzC+ckhvIZQ1lIdDFDgpMpA+8EJ5pDVCekOWqlHnFDxvI3oYQ41EN Zmt6HGIcbeueMcIq81RZUl1hM8aO8ovONsqWsnOSQb4NyHHeyHO2jjBJTvxo 0yJtv3sJZ3zztuQOOtUNs8ds1UPHfVDmhnTihmCDTCMq9JAbxst9zSQGO4gN MCQE3MCAznVRKQFMIBuJ1VJAQyStgB8grYjd2iGSTJ3QgBI1rUtO2BJCMt3t BCc98xzhy86JyMS73ob0N0Ma7rG0hzxp411zN0i2208lvOtsoeasrqrEnMh7 yu3X6K78cJVPgBZRZVimUxVMYqo4+bMRaoYetKGixDKMKGeWBR48wQeHex9V 6RTRrVevqcAXcOOytldIh427585551xopMyhtvs44acRAXO23OtbaNJlKxuk Nl1ruluoedolziVvvpDOtb5phIda22I3yIysSHetuc1m22/eJDWXGEdYhmNq GXYx9CGIq+2hkVLjlOxx62NtaNVrRFrvjxusy95KPfvShX9qHnlt1p52VGtt UO8W2KkpmZijzlecpL4xQN8RXWtMxQ9Y1lU41pTvImszPWSNYhzkobbNfOtM 9YakYa0kazMkbZVzhM9aq34bW+O8cSb76OKsEKZjWUQLXekNsL1lQ88ab608 ZtgS2xrbVYLXpaJmj2OiQDjaMkE2NjaGNmZizDMzeomKxZJiRMoYksoYEJih kSyhkSyJZRRMoYpVX5kN6HXobUNUNUNUPVDZQyhzSo0oYhlDFEquCHUhiD2C AYSQlOuOt51zzd3e+syI3l7zI1u93vbT1UOWh/usr0oc1arvFKkYrmrRSpGV /KHVcIe6H3oe6H3oaoaoaobUNUNUNKG1D+dQrrCuKGJPRXv1pJ+NeKuCu5WK 3r0UPdiX40NUNRLJNUMVqqTKGUNSsoYsiWqGFDFaoaoakyJaoYoeeqTeuFdp D3fhukhux4ulbUTqe3q+OFLeyVSc+U8bGmZmtZqk79kbTGYttYxmkUNsoFwM FE9v83bGmmwou7BQ6MBVw4NG7MxRNmBLZjGG22lG2UkBdZqNrR5mOusrzBmS Q5wkktYgSVUO6GIdIYhoqxQwpiGRLuh7N28jxQ3KuCnih6TiLqhq6XJNeH+9 4bHShwQwS449XjL/Js90OLTn0fL0oeZDyL0zRDw0erz76V69UM7Q33439HZu cHk3PV6HhDvEOi5ODh5eG29bUN1DRxQ89EM8HnZDzxQ1Q2Tjd4rvTflDW1Xi NeYITOlRkptlN4rtDo5lnWlhubyOVpk0Q0yQ+KHqhqhvLx21Q2UMMIctOzYS 8o5vTzbpDqhlVK80MlSuVZVVbVhKfJQ+WL4Avlr0oelDahuhuouEOUMIf6UO KGyGId6R8yHz0N0OtQ1Q4UNIcIc0P+ZDahzFVTyh2UPES/ouqHqhy2qT7Ifz 932bBxIb/pD8m9L+n7IdUOqGz+n8v6QzVDahvQ89HHlCE3oa6kOYXsFx76f1 79OOYXuyHHW3jbW8nr3hHUQz1xxv72d8kbuE0Y78bUNneKXgl79au8ucWMYi budk8b+m+2q21r4MyNte8q0+M9ZlZlNlUnnjv3goTpgyRp5kWLt05YIREtP1 LFvg+D97hiQ4ZeKHrRS0oY2oYhqhjxjicwHRbJAQDLiXJsdHWlm5F2akyFLW J5Uk2odfHPG4Zc5rzrbJ0cq30eHqYdGc0M8+ducbuNWyKsyhjGt+Xnit8c8j NSYbUOq2Rm/HC3vOQ5hjuhg5FhhcGjRb87eeGlDehihmqsGDWsXjZmzYYhlD lS8UPKGmteNjs4NVVhlTffbNVdYPG/BYJgxGzdjQg3FUE4i4sEgLZICmSAge X7qVz8BDX2MWHyUO5DsSyhu8Onzb3kpuxs6UOqHBw/lD+x04odH00+zR5bUP Kh9ORm701pd0OFDeocMet6Hps/ufBwXDyhhtQ85N6HY8yPmkNDAmqGKLcWKi 3h4obIbIjqhyMcMyyralOLCpiH2q0pRcv9FDt5ofCh27ek+WnaHocqBoQOiA pCUIgOR3mZQa7cQCiG8Mbbh63rncgMwYwtEs7xZCQHKQACArNc/qOUVcUNnk akb0NYV/hipj7Pw2SfLKHdpnjH8fD4obEn5yfWSHqhs3Q+t6l5ZUj8UMsoYj 6xxQ41LTGR/nHdpG+6f2GWxwYNfgoZaGxlJx73NkbowjQjeh9q5epHaHh93N DeR8vy6+8+/p1ntQwwZ84qT4bKLWIbb7UMy2oZrG7aLV5m46ed0N0OlIaUK/ CG1Q2fw5PmhvUN6H8HxxuQzKWZ5yQ+fnqh2+qHyVfCEJ8aR/ZGGGEbV5xtQ0 1sNhhhlK1CWIb60fl938RLYlXnVDURxPiPBScsc2x6Gyi0fih46+nzorz8Ym lD2oZovTKi1Q3Moc6NqH6ob0NqXrXqyhpDSQ3Q3of32SHr+NUPHvyh5qcmae coaI5N1sGNmkMNphIfdD0h9jen2mbeVGnfj87vHr1Q5ek09O6G5s5fxWUNza 
n2Y2cHy4odPmvbtDwh4fl8qvd/lPsfduq/Kh51VF+PuofjP1ns/P42xh85SX mh8kMoaicn005ykNkOvOD1P9SGT9vXFz8dHef24/mhybuL8qZw1T9pcUN+DQ 9S+H4vu+v6JvF9Pbn6+Hmh6Huh+2neNiP5PTd+eXrz4H0frz8/1EuEVeWqHj 7n80OKH067f6qGtUP+5DPK9fkHFDUhRFcklUard3lrYkAxIDYkIbobUNpVyh pDzQ2HxpZ2+EMTw3cUPFDT9rHg/uhxR+HoduKj1G+pDyVd1teyk5qLChiGKF dN8X5bIcuMnX9jPBxUOseemmxto2Ohy29Pk/s4qJx7NPCGxomXdDSN3aG3ih 79Byh7Q5oZCOaGUNUO0N6G1DbuuTapV/NDRph0/lDzQ9UPCh4obSaocUPNDL jfWaRVuqyhs87SGyHqhrDeQ6Nc8NuOueta5+t/w5KTrnKo1kMz3Q88bbsec3 yg9+d/3v7b79vzQ8KGFDPfXx1Q6obUOKHSHgvHDt1wirpVbUPNDoS72kb0No 8/VPqXmh9GnaHUEJ78Z4iHxef5yXPxQ/FD4ob0NqoslD8PjaUn96GIZQ9tRw XBZFZJhU0WDDjHcpwfih7of0+xqhwjpiPnk608Xz/ahu2oeo8jLozaxgxd5J fS5fyx6r9eFqv438+921D/KhjzezLD9akfGJms0vrDcyxto0Pf+hDY+k+p9U P5fz5Hv67Gk4HpRtIaDjahtv71sfkw2w+R49v53558UhttQ/PNDVD90P39G1 Dqh7UK6+Xv45Q59ZqhmnPVD8fbihr7Pri6LjDWGfwxvmUPA+R+vjfuzv1heT wdetG58aIcNpDWGv5UNBLWqH732zxprxYh+W4vmlwaoeK2Sf4H5aQ3F+XPPj 2tmEPL7j8Ydxtg6wnYxDZdg8JskNqHj9s6NpBTqhwh7pfwfnbLr3613+qPmj XyYffpw4keOPgodtWTYw/LN/44Q/O6HLihjwh3+D2/Jo4iX2oYSrvZIfZ03+ Ptqh9K3R8jq+w9m5NMQ9YcPox3YJaaodSZQ8/J7699ubyzz4Hj0b7Uvr8X0+ +e4h2Og7HS1x1kwy7ahlDuIZQykYQxDEMA/CHhNnL09mh+HAmzZs80uXLf4l tXioNmh/kvKGlF4cUMcY2YXLNvaqt22iF/10Nf7iG9Dh8N3s8bnxcT663+c/ O5zKA3EuzHWuI3LE8E6z1BSCEkwgSs8eeP1+L9fq1Q8Q9HrXe9DaGxGJ8tkN n9m7HxTt/lQ0PYziaGT7nz8/DcofL1YzLSvnM/T0noq6fafas3i3qMof0q0f 5obKco8Ib0P8KGQNSHTRSfZ+XyofXVDWyGMGb/VaYYobqG3dDbyeXto/2ZbP 0+/fTsz0htojuo6Q+KjzbP8PS3PVD96E/D4vu0+D1Ie9KHzQ9htNZPnwP2Pq hyvf1P0fzO/m9HzX5fDj5oaqxejU8fD5Lz4qHK+g74oN6HDfkfC6aQ+D8cUM dUPKH3fj7UPch9fPy67od0PrVemvk+TXYac0NUPn1cUPvXhuc+fXvr+fju33 vintDja+aHt8G5YOTwHx4ePT080NkMperRvSGUhlRefj+WyHWcke6Hwwy6aM YfdweceT6UPs+/nnmr/CHyON77nsw+x9NjvbMN8s61Zm2NM3329qpNKHVDSp NRLohuhwy2PZ4vpxjw8IY26dO0wNUNt9aKGu0N0PvjpDjShtcXMamm9D7b8b n+UvCGzfnuUmxmol/hxQ/J/L2k6OZp28mj3Vg4yh9pvr3RHtDCqd0NUNqG1D VDAS5DqQyhwodEn9pWSYtqQ+G9QrVDutmz1Q1IKeaHlDmhtTuKgtaQIWmvyO 87Ku6EByUkBgICI1rIlCDVPkpCWmgdwSebdICs1qZ9vsEgNvHdDVD1Q4/B5e WyH3fyvPfM6TXR9+XpDiQ3+fVp1o9a9+kPO/ip+3w2KH3UMvdbMIfKGMQ8Gf HTn16Q9bYx7zWm7YTDqh7PMhttlqtIZQ8/XjvM7xlfpodusZt5bnX1odcwPC ktICAQOiiToz1jQgPCk7pAXUfHfDeh5iXrbOfXlpmjnavfDwUm26Hx4PDKH3 tOnM6OsIeLzuPZ1httkjk8Un9ZS+zyx8tOz0HfczNj9swboYYrdW4+a8XzWw 8fPVs8NJAPuDq3gjxDjJiQZVHn0nQ2S01YcJCDeUnX2G0h9sQ7eUNe/TVDR9 mUPNDcOPgN0PPC4b58dN1bb/Oxf1fFeuHeax09Mbuj3v8GRHxQwJfKGxohr4 XVD6PA2Y3PpTxQ8UOzd4cGdNqGw2D26aNuaeT9XnavvQxDc/D4LQ41Q9WFwb UNng9vFDjE4oYPqMos90Gw2bM32OA0DnU81OoeMq4u5Gbh96edbgrh57Ya27 475/ZDtub0MHZh93g27l4Yfe74rdvnRi/RjarB7Ye/Tp99Jty4ZfhSGn7UOv dsboY438Z2aQyh8EOfbY/VD9ZQ06oc35t4gIjqw9L9pwJdB1LkCA+GFha3Rp LY5uuflQ4oYobVw4b+r26UPYyQ4eXcgplOvOe+PTg52ZjOvL0btMfdbn5ezf jXrTv0hwUNnt7emzqPPPps+LvfZj72nKh84Escoa5en28reQDAcHcMj4zxtu jx4GkDPU8nkN5Rx3EBnpmH2422qvjbaKj5PtVGbvp65ednVDb3Zefrf7c+B8 3wNhsbft3ay1sy3YUNKH0h8UPSHpGOKGnLR023F0p3Q1S5e3NNm8ejTZ0bLS XSHg14c0OaHCGjpl1bUNkPFiGzitN/TTXDOxp2WNLwcKvBtQ/ltbKNhqbmHW Mhbjaho66t2xG/pDqh6q3eaHKGIeo5Q1Q8qty6Dod0uKh8bzVUnl6od1eqGR NlDFcIdPK04QwjLOIS3Y1Q0xtQ5Q9R6Lc6NOkN+9tedb89dyHVSBNL49PTxT AO/LcNmxIDG4lIDEgMUPsrz4z0zah4wWbmLzBCby8L8G5Do9a4cbNue6z6oc /e5fR4GnQ0MGvGdDbweK2Qy9Px9UPkzxzvQ6Ojz/HrrPUv4a8ft6Xny4Ptv8 7N8UhnBpqr83zP3EvcVHeqHNyb2sePgfcwbXw7yOfSfaeBrMrPu87VbMYwuD tQ578eO+j34WmFjhjTFfTvQ2qHwofT29G/D2ofw3Ozj7cny9+d+vPDpD8m49 KGJ9PPjv34Huyhh7lv1seGil72Q7eaHCjihqh4dbEfZv/VD1Q4+fj49m8Q+X RNq2QHYHS5jRCXeeAOHN0kwvUOQAzMkUg2Qx7vKotUNUMQ+kMEtx0bHy7dnk eKHdquxZdm3A7GfTt7cG9DZDa2kFPofmhjVDucmxftuhuhuapeKHm7vmtG1D Fg9nUwulbsoeaH23OXxZsUGFB4BgXKL0d93cXqLrNPvM3cwPWu0EtN1sCazj foMpCvKXQMCUdmt+0eUbCcesfR8VXvf58cvWqGmHbZ6a+vG6h3IZQ8Ucb+qG 
9DlxR57146+uqHXxwFldtc6OeA7MD6CI7AWkuW4YrRD2p5Eig7gTs0c65YeO 62QkYHMHfjt0MdvJ2SJGS+vI+lzQwD2hzI499UPTo579PTN5a1tx4de3zzbq H1QyhtlDMsg6Xou50S+3w62UQnNY5ydgoJDs6pdRh8cQFgOc863vgrJn2lE4 EhoHYLDkgChdp6ODbbkdcWg7LFaJAb0EICocD7oa4NpD5odQbUhpxW6lo4Qw 9jT7ZQ8seFUnyfeujdDrm/CHxQ80PhXg2kfCQ3QxD34zZ8WN8zr7IbvZwaI9 Oz9n3bbOTDDD1Q0dH8uCfTcNRcUbPR7HRiH1Q1U4o1Q0sKPKHmhvQyhxQ/c1 QxVtQ1Q+7Ttx2h2hpDhY3Yho5JVvQ4OEPueT1vU6OGteOOG9jZ1UQIQHG1WG znRNITOQRLQ3LXodcLHGdZ+UUobd7efVDmhxQw1352zbWxHUjVDhXWGF7obU O0O/e9/N4Q07VorfV9M7V3o2eN3Ph8KH0ofd3H8A6+1Dy+h2GntobUMvy7e8 PrwGx5On32UO4l0bsDZDHHFDlDxn24H7w6VJzQ6oaId4R2eP0+G/I1dh6Njd 1wzrc5ssuWqn40/Dg+np5Q4348e/TUr1y5kYF31QUWb7IAgS17A9DkoQHbOe +diudcsNzjn7+/POfuht3v421fqdBVqh0oeUMQxDKGpc0NJshkh/47yaiyhl Dah7Q1Q4oeiOGnD+WjYV91Gza4McEwyvn7VH5fF49Gxyhu3Ol2d+jtxHDem9 UdNXoytuJO6HaOY0K03Yngsk7hRQMIHDg4CwQXFVFz40uLaih+a3HGcPMN5e SzkNZzzJsRIDFhQcJSG1Dwyh2+27jvW9r5fLZ5X6YoeHw6/hrn43+Xvfr2oc SGw3GcnhDih9Yir7UNkPLbjCNPqh71t58H7oZcOkKQZ60dJAeWml137mzZvE gLGSA1038/POih+qH4O3BbkfZDnRH7Uw5neW4o6PHFpUMI0F+UgO6wnZgeiQ PDeo9eKHHZetiP+JDkUqR7i3swGLjdnmQ4Ek+FD0IfMh5UPkIfEhiGyGIbIf DUK9dDwVSfNEvrqFfhQ7NDjQ9aGqHb9VQrgh76h6lDahlDShxofdQ1Q/Ch8a huJfRQyh60P1IfWod7oQ+mh9NDwEOShhD0oc6G1DIHTQxJ7SHBD/CGyD+IxD gpiHqWmDGGMMrFlDSi6hbhjKYjSHVD/Ohuh3Q3oeEPVDah/rodoaodyHuh/1 IbIYhoh/qoeENyGIcoaob0MIekPSHwh/2UOFFuh9kPqhpDeh0ovlD6h3QwbI Yn+4hpD/1IeEOkPshyh9qH3odIfZDkPshwhpHv5ocCWkNcoeKHlD5Q80N0Ok N0NqGkOkOaH5UPrkO5Q3ocEOVD7UOKGUOVD4qH3t0OdDEPddmu0gpHqhipfl D3Q+6JKNkPJSpGiG6H7ofNRE/JD6iHOh10htQ8fIQ0Q50ONRboZIe/Q/Chwo eKQ8lDSH3r7oY+7xQ5Ic0P9ah/KGxDiQ+kMYQ/AUUdodSG8hsh3qHeQ9FeIh xIcqHbQxDoQ/JDgUqR3KHgru0NkNpKvMIaVBRqJbIb1sQ1IYhhDjXkUOFDZD kodKJdKHhQ2oaIbgidVsh3Q/CG6GIaqInQhiG8SxDKH2UO/Q5Ih5Q5Q6of7N UnzXdea/zkOkPMS9KLhDyhofFD8oalWkPyoc9DuV16Gq5yGEPChyUOqG9qhl Vuhsh+EPJDlVURhDFS6Q9IB+aG1D+Il8ocIc0O6HpDpDzQ3od6FPCMKjAxYM MsUp4KGUg8KGUhxoZQ76GIcYoboaVqhkl9kP4ocUOaHFUX5Q+a1X/iFK5odo fMsVTFMpQ4Q7QxDzQyh5UOZDEMQ6KHESxDZEsQ6EPXQ0SmqHMQ69UPsUeH7Q 7UXuhyh2Je+aH+xQ/qh59UPs2Q9oekNyHdUFGyh3UPbEuah3aGlD6KH0h+yH 2IaQ+aGxDgh8SjlDSGIbKLih9kOJDhEaQ2IbxLtD9of2rdDziHChii7tDEk7 yHWQ0hhDCGIf27UX5oaIaUOapMoYh/ZD8UPKGxDENkPhDIiTQlsQ/vXaH6Q2 rVUmVF9kOUN0NEN6H1Xoh4Q+qHlD8KLzEuEO1DdD+NUOEPKHivdDxQxDpKLV Raof9FbKHZDEPKG6JcUk/ihiHaH7Q/aHQhiHoh2h1QwKV0ouxD7Icof6UNKk 4iXmJfshtQ891FwhvVJ/CH3odIYQxDdSKjmh6Q9IclDhopUjUhyIeZDxIbxL wQ47oeqGqH9qof3oZIcfsh+KG9DtD5If7RDwh8IfxQ+EP+yvhDdQxD8KHzGI em1b0PmQ+SlSP70P0Q/pDuh/mh1Q8oeKGpDxQ1IcCGiH5obesxjGMYymMwww YYYYZZVRcUP2hohpD2h2h/FDuhoh/ghlD9kP1XdD91/khpJuh8EMQ1Q+9DEO VD+qGUNq2Q8IBiGyH0Q9VSf4IZuhxQ6of1UXFUnn/JDZD2h9Vwh0ot0P/MQ8 Il4ofit6HzQ3pJshy0Oyh20OCi89eVDeh/4UOUPSH80P0h7qk1Q/BDpqk0h7 PZxQ2oezMw53tVuhyocRLVDr0OY6UPHXDwoc9DUS/xQz7EPiotUNIdIcod0P JDCH964QxDKH926HdDKGEPhDZDpDshpJ3Q4UXhQ0oYou4lzWUO+2VSaobobI dKHmQ4FKQ5qxDjWq6EPGhiGIdsQ7Q5UNKKPchsodP/JQ2qk9If4IYhpD+K/a HuQ2iXaH2Q0qFfCHVDShiG5DopUjZD8kMIarShqh+kPmQxDEP1Qyh8vblD5r IlvQ/3a8K9IeaG1f5vqQ5oaoeJDKG1DSGUOTvtyHyh+0N8Q5oZQ0Q6obUNBR R2ENONDSGiP10NyG9DJDpJN6/XdD+FF2h7UWwhpD9oZQ5iW1DKH0UqR5ocof 976ruh+olwh9KH/FSbiq2IcSHSUqR2kOKHaQ0Q7Nf+exzIdkod1DRDlEtlFy h6od1DYh5ob196/9rhDVfdDSHCHiQ7VVXaGzKH/FQwh1X4SfwovihsUdhSvS v/+LuSKcKEh8kD1bgA== ---1463747160-1301739425-1169685270=:4028-- From owner-xfs@oss.sgi.com Wed Jan 24 16:48:38 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 16:48:44 -0800 (PST) X-Spam-oss-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp106.mail.mud.yahoo.com (smtp106.mail.mud.yahoo.com [209.191.85.216]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) 
with SMTP id l0P0mbqw003752 for ; Wed, 24 Jan 2007 16:48:38 -0800 Received: (qmail 21043 invoked from network); 25 Jan 2007 00:47:42 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com.au; h=Received:X-YMail-OSG:Message-ID:Date:From:User-Agent:X-Accept-Language:MIME-Version:To:CC:Subject:References:In-Reply-To:Content-Type:Content-Transfer-Encoding; b=vBsEclqnEYA8YFCRqLDii4+FtB7DZDDdP5F6GPqFgZGAwKEuIw50BXakUHJ5Sq3llarKX7HLGBcoyyA5JI424T0SSO/d7RuESib/OdLqAKOE1eOXIpcIUiF/hr3VH242tveqxLUgWVFusKOhdPY7D1nU0J1Es8MNRklu3oaFg/4= ; Received: from unknown (HELO ?192.168.0.1?) (nickpiggin@203.173.3.219 with plain) by smtp106.mail.mud.yahoo.com with SMTP; 25 Jan 2007 00:47:40 -0000 X-YMail-OSG: 788orj0VM1lNOySuZHAG_Jy0qusI3Mo0w.zlGU5qPFZdFi47on4ZeYCemSuKohnaqJj5bJvGZw-- Message-ID: <45B7FE1C.3070807@yahoo.com.au> Date: Thu, 25 Jan 2007 11:47:24 +1100 From: Nick Piggin User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20051007 Debian/1.7.12-1 X-Accept-Language: en MIME-Version: 1.0 To: David Chinner CC: Peter Zijlstra , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS References: <20070123223702.GF33919298@melbourne.sgi.com> <1169640835.6189.14.camel@twins> <45B7627B.8050202@yahoo.com.au> <20070124224654.GN33919298@melbourne.sgi.com> <45B7F5F9.2070308@yahoo.com.au> <20070125003536.GS33919298@melbourne.sgi.com> In-Reply-To: <20070125003536.GS33919298@melbourne.sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10433 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nickpiggin@yahoo.com.au Precedence: bulk X-list: xfs Content-Length: 2096 Lines: 52 David Chinner wrote: > On Thu, Jan 25, 2007 at 11:12:41AM +1100, Nick Piggin wrote: >>... so surely if you do a direct read followed by a buffered read, >>you should *not* get the same data if there has been some activity >>to modify that part of the file in the meantime (whether that be a >>buffered or direct write). > > > Right. And that is what happens in XFS because it purges the > caches on direct I/O and forces data to be re-read from disk. And that is critical for direct IO writes, of course. > Effectively, if you are mixing direct I/O with other types of I/O > (buffered or mmap) then the application really needs to be certain > it is doing the right thing because there are races that can occur > below the filesystem. All we care about in the filesystem is that > what we cache is the same as what is on disk, and that implies that > direct I/O needs to purge the cache regardless of the state it is in.... > > Hence we need to unmap pages and use truncate semantics on them to > ensure they are removed from the page cache.... OK, I understand that this does need to happen (at least for writes), so you need to fix it regardless of the DIO read issue. But I'm just interested about DIO reads. I think you can get pretty reasonable semantics without discarding pagecache, but the semantics are weaker in one aspect. DIO read 1. writeback page 2. read from disk Now your read will pick up data no older than 1. And if a buffered write happens after 2, then there is no problem either. So if you are doing a buffered write and DIO read concurrently, you want synchronisation so the buffered write happens either before 1 or after 2 -- the DIO read will see either all or none of the write. 
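[Editorial illustration, not part of the original mail: a minimal, self-contained userspace sketch of the two-step direct read described above. Step 1 (writeback) is modelled with fsync() and step 2 with an O_DIRECT pread(); the file name and the 4096-byte alignment are assumptions made purely for the example, not anything XFS does.]

    /*
     * Hypothetical sketch of the two-step direct read: a buffered write that
     * completes before step 1 is visible to the direct read, one issued after
     * step 2 is not; the interesting window is the gap in between.
     */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "dio-read-demo.dat";   /* hypothetical file name */
        char page[4096];
        void *dbuf;
        int fd, dfd;

        memset(page, 'A', sizeof(page));

        /* Buffered write: dirties the page cache only. */
        fd = open(path, O_CREAT | O_RDWR, 0644);
        if (fd < 0 || pwrite(fd, page, sizeof(page), 0) != (ssize_t)sizeof(page))
            return 1;

        /* Step 1: write the dirty page back to disk. */
        fsync(fd);

        /*
         * A buffered (or mmap) write landing here, between writeback and the
         * disk read, is the race being discussed: depending on writeback
         * timing the direct read may see none, some, or all of it.
         */

        /* Step 2: read the same range straight from disk, bypassing the cache. */
        dfd = open(path, O_RDONLY | O_DIRECT);
        if (dfd < 0 || posix_memalign(&dbuf, 4096, sizeof(page)) != 0)
            return 1;
        if (pread(dfd, dbuf, sizeof(page), 0) != (ssize_t)sizeof(page))
            return 1;

        printf("direct read starts with: %c\n", *(char *)dbuf);
        close(dfd);
        close(fd);
        free(dbuf);
        return 0;
    }

[Builds with any C compiler on Linux; on a filesystem that does not support O_DIRECT the second open() or the pread() will simply fail.]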
Supposing your pagecache isn't invalidated, then a buffered write (from mmap, if XFS doesn't allow write(2)) comes in between 1 and 2, then the DIO read will find either none, some, or all of that write. So I guess what you are preventing is the "some" case. Am I right? -- SUSE Labs, Novell Inc. Send instant messages to your online friends http://au.messenger.yahoo.com From owner-xfs@oss.sgi.com Wed Jan 24 16:59:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 16:59:49 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0P0xdqw006488 for ; Wed, 24 Jan 2007 16:59:40 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 508FD1A048CAD; Wed, 24 Jan 2007 19:58:45 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 0A2C9A000878; Wed, 24 Jan 2007 19:58:44 -0500 (EST) Date: Wed, 24 Jan 2007 19:58:44 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Pavel Machek cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: <20070125003242.GA23343@elf.ucw.cz> Message-ID: References: <20070122133735.GB4493@ucw.cz> <20070125003242.GA23343@elf.ucw.cz> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10434 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 2937 Lines: 69 On Thu, 25 Jan 2007, Pavel Machek wrote: > Hi! > > > > Is it highmem-related? Can you try it with mem=256M? > > > > Bad idea, the kernel crashes & burns when I use mem=256, I had to boot > > 2.6.20-rc5-6 single to get back into my machine, very nasty. Remember I > > use an onboard graphics controller that has 128MB of RAM allocated to it > > and I believe the ICH8 chipset also uses some memory, in any event mem=256 > > causes the machine to lockup before it can even get to the boot/init > > processes, the two leds on the keyboard were blinking, caps lock and > > scroll lock and I saw no console at all! > > Okay, so try mem=700M or disable CONFIG_HIGHMEM or something. > Pavel > -- > (english) http://www.livejournal.com/~pavelmachek > (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Looks like you may be onto something Pavel, has not invoked the OOM killer yet using mem=700M, I see the swap increasing and the speed of the copy is only 14-15MB/s, when mem= is off (giving me all memory w/preempt) I get 45-65MB/s. 
With append="mem=700M", seen below: top - 19:38:46 up 1 min, 3 users, load average: 1.24, 0.38, 0.13 Tasks: 172 total, 1 running, 171 sleeping, 0 stopped, 0 zombie Cpu(s): 10.6%us, 4.6%sy, 1.3%ni, 69.6%id, 13.9%wa, 0.0%hi, 0.1%si, 0.0%st Mem: 705512k total, 699268k used, 6244k free, 12k buffers Swap: 2200760k total, 18520k used, 2182240k free, 34968k cached (with mem=700M): procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 2 0 40276 7456 12 23184 0 0 13196 14904 1204 4756 1 2 50 47 0 2 40276 6156 12 24264 0 0 20096 18352 1293 6400 1 4 49 46 0 2 40276 6264 12 24264 0 0 15504 17836 1219 5202 0 2 50 48 0 2 40276 6144 12 24376 0 0 14348 14453 1190 4815 0 3 49 48 0 2 40276 6156 12 24504 0 0 11396 12724 1169 3724 1 2 50 48 0 1 40276 7532 12 23272 0 0 11412 13121 1183 4017 0 2 50 48 0 1 40276 7944 12 22644 0 0 19084 19144 1234 6548 0 4 50 46 Almost there, looks like its going to make it. -rw-r--r-- 1 user group 18630127104 2007-01-21 12:41 18gb -rw-r--r-- 1 user group 15399329792 2007-01-24 19:55 18gb.copy Hrmm, with preempt off it works and with preempt ON and mem=700M it works. I guess I will run with preempt off as I'd prefer to have the memory available-- and tolerate the large lag bursts w/ no preemption. Yup it worked. -rw-r--r-- 1 user group 18630127104 2007-01-21 12:41 18gb -rw-r--r-- 1 user group 18630127104 2007-01-21 12:41 18gb.copy Justin. From owner-xfs@oss.sgi.com Wed Jan 24 17:48:50 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 17:48:57 -0800 (PST) X-Spam-oss-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp109.sbc.mail.re2.yahoo.com (smtp109.sbc.mail.re2.yahoo.com [68.142.229.96]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0P1mnqw016663 for ; Wed, 24 Jan 2007 17:48:50 -0800 Received: (qmail 77857 invoked from network); 25 Jan 2007 01:21:15 -0000 Received: from unknown (HELO ?65.42.80.213?) (wdcizek@sbcglobal.net@65.42.80.213 with plain) by smtp109.sbc.mail.re2.yahoo.com with SMTP; 25 Jan 2007 01:21:15 -0000 X-YMail-OSG: oFOGhjwVM1lk4hHSStit.C7O0QawBqv3kSBhOBbZR0ycnfM6.kdXihocVbfU7FAW9Zjgrw18cS12ujMhfGcuabd9arBZHCOEijgZxu6XUHeZAJXiigy7EcVc9SZrK6UIms7r3MDG0pQA9P4- Message-ID: <45B80610.5010804@rcn.com> Date: Wed, 24 Jan 2007 19:21:20 -0600 From: Bill Cizek User-Agent: Thunderbird 1.5.0.9 (X11/20061206) MIME-Version: 1.0 To: Justin Piszcz CC: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 References: <20070122115703.97ed54f3.akpm@osdl.org> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10435 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cizek@rcn.com Precedence: bulk X-list: xfs Content-Length: 1357 Lines: 35 Justin Piszcz wrote: > On Mon, 22 Jan 2007, Andrew Morton wrote: > >>> On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz wrote: >>> Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke >>> the OOM killer and kill all of my processes? >>> > Running with PREEMPT OFF lets me copy the file!! The machine LAGS > occasionally every 5-30-60 seconds or so VERY BADLY, talking 5-10 seconds > of lag, but hey, it does not crash!! 
I will boot the older kernel with > preempt on and see if I can get you that information you requested. > Justin, According to your kernel_ring_buffer.txt (attached to another email), you are using "anticipatory" as your io scheduler: 289 Jan 24 18:35:25 p34 kernel: [ 0.142130] io scheduler noop registered 290 Jan 24 18:35:25 p34 kernel: [ 0.142194] io scheduler anticipatory registered (default) I had a problem with this scheduler where my system would occasionally lockup during heavy I/O. Sometimes it would fix itself, sometimes I had to reboot. I changed to the "CFQ" io scheduler and my system has worked fine since then. CFQ has to be built into the kernel (under BlockLayer/IOSchedulers). It can be selected as default or you can set it during runtime: echo cfq > /sys/block//queue/scheduler ... Hope this helps, Bill From owner-xfs@oss.sgi.com Wed Jan 24 17:53:16 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 17:53:21 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0P1rCqw017920 for ; Wed, 24 Jan 2007 17:53:14 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA05539; Thu, 25 Jan 2007 12:52:11 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0P1q87Y102797508; Thu, 25 Jan 2007 12:52:09 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0P1q4nY103377850; Thu, 25 Jan 2007 12:52:04 +1100 (AEDT) Date: Thu, 25 Jan 2007 12:52:04 +1100 From: David Chinner To: Nick Piggin Cc: David Chinner , Peter Zijlstra , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS Message-ID: <20070125015204.GV33919298@melbourne.sgi.com> References: <20070123223702.GF33919298@melbourne.sgi.com> <1169640835.6189.14.camel@twins> <45B7627B.8050202@yahoo.com.au> <20070124224654.GN33919298@melbourne.sgi.com> <45B7F5F9.2070308@yahoo.com.au> <20070125003536.GS33919298@melbourne.sgi.com> <45B7FE1C.3070807@yahoo.com.au> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45B7FE1C.3070807@yahoo.com.au> User-Agent: Mutt/1.4.2.1i X-archive-position: 10436 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 3394 Lines: 80 On Thu, Jan 25, 2007 at 11:47:24AM +1100, Nick Piggin wrote: > David Chinner wrote: > >On Thu, Jan 25, 2007 at 11:12:41AM +1100, Nick Piggin wrote: > > >>... so surely if you do a direct read followed by a buffered read, > >>you should *not* get the same data if there has been some activity > >>to modify that part of the file in the meantime (whether that be a > >>buffered or direct write). > > > > > >Right. And that is what happens in XFS because it purges the > >caches on direct I/O and forces data to be re-read from disk. > > And that is critical for direct IO writes, of course. 
> > >Effectively, if you are mixing direct I/O with other types of I/O > >(buffered or mmap) then the application really needs to be certain > >it is doing the right thing because there are races that can occur > >below the filesystem. All we care about in the filesystem is that > >what we cache is the same as what is on disk, and that implies that > >direct I/O needs to purge the cache regardless of the state it is in.... > > > >Hence we need to unmap pages and use truncate semantics on them to > >ensure they are removed from the page cache.... > > OK, I understand that this does need to happen (at least for writes), > so you need to fix it regardless of the DIO read issue. > > But I'm just interested about DIO reads. I think you can get pretty > reasonable semantics without discarding pagecache, but the semantics > are weaker in one aspect. > > DIO read > 1. writeback page > 2. read from disk > > Now your read will pick up data no older than 1. And if a buffered > write happens after 2, then there is no problem either. > > So if you are doing a buffered write and DIO read concurrently, you > want synchronisation so the buffered write happens either before 1 > or after 2 -- the DIO read will see either all or none of the write. > > Supposing your pagecache isn't invalidated, then a buffered write > (from mmap, if XFS doesn't allow write(2)) comes in between 1 and 2, > then the DIO read will find either none, some, or all of that write. > > So I guess what you are preventing is the "some" case. Am I right? No. The only thing that will happen here is that the direct read will see _none_ of the write because the mmap write occurred during the DIO read to a different set of pages in memory. There is no "some" or "all" case here. IOWs, at a single point in time we have 2 different views of the one file which are both apparently valid and that is what we are trying to avoid. We have a coherency problem here which is solved by forcing the mmap write to reread the data off disk.... Look at it this way - direct I/O in XFS implies an I/O barrier (similar to a memory barrier). Writing back and tossing out of the page cache at the start of the direct IO gives us an I/O coherency barrier - everything before the direct IO is sync'd to disk before the direct IO can proceed, and everything after the direct IO has started must be fetched from disk again. Because mmap I/O doesn't necessarily need I/O to change the state of a page (think of a read fault then a later write fault), to make the I/O barrier work correctly with mmap() we need to ensure that it will fault the page from disk again. We can only do that by unmapping the pages before tossing them from the page cache..... Cheers, Dave. 
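[Editorial illustration, not XFS code: the "two different views of the one file" situation described above, reproduced from userspace. A store through a MAP_SHARED mapping dirties the page only in memory, while an O_DIRECT pread() of the same offset goes to disk; whether the two agree depends on the filesystem flushing and invalidating around the direct read, which is exactly the barrier being discussed. The file name and the 4096-byte alignment are assumptions for the example.]

    /*
     * Hypothetical demo: an mmap store versus a direct read of the same
     * file offset.  The two printed characters may differ unless the
     * filesystem writes back and invalidates around the O_DIRECT read.
     */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "dio-mmap-demo.dat";   /* hypothetical file name */
        char block[4096];
        void *map, *dbuf;
        int fd, dfd;

        /* Create a one-block file with known contents, flushed to disk. */
        memset(block, 'o', sizeof(block));        /* 'o' for "old" */
        fd = open(path, O_CREAT | O_RDWR, 0644);
        if (fd < 0 || pwrite(fd, block, sizeof(block), 0) != (ssize_t)sizeof(block))
            return 1;
        fsync(fd);

        /* Dirty the page through a shared mapping; no msync(), so the new
         * byte may exist only in memory at this point. */
        map = mmap(NULL, sizeof(block), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED)
            return 1;
        *(char *)map = 'n';                       /* 'n' for "new" */

        /* Direct read of the same offset, straight from disk. */
        dfd = open(path, O_RDONLY | O_DIRECT);
        if (dfd < 0 || posix_memalign(&dbuf, 4096, sizeof(block)) != 0)
            return 1;
        if (pread(dfd, dbuf, sizeof(block), 0) != (ssize_t)sizeof(block))
            return 1;

        printf("in memory: %c, direct read from disk: %c\n",
               *(char *)map, *(char *)dbuf);

        munmap(map, sizeof(block));
        close(dfd);
        close(fd);
        free(dbuf);
        return 0;
    }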
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jan 24 18:02:23 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 18:02:29 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp110.mail.mud.yahoo.com (smtp110.mail.mud.yahoo.com [209.191.85.220]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0P22Lqw020292 for ; Wed, 24 Jan 2007 18:02:23 -0800 Received: (qmail 15058 invoked from network); 25 Jan 2007 02:01:27 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com.au; h=Received:X-YMail-OSG:Message-ID:Date:From:User-Agent:X-Accept-Language:MIME-Version:To:CC:Subject:References:In-Reply-To:Content-Type:Content-Transfer-Encoding; b=TZgO3pupe1MQVVE88rJ/UzTlIa+LCIRpCNlrE7pa02IPnaV7up8xmGdn81SAf5xqXUheIOh1rX/gg+WETb2jloarRsJZz/XQRCVrbF8WhnVsGkT0Z7ImZHe/Lr16S8IGmtRGyoI1zRFGTA3u4go8gNhHVaAU5nH6Cw85Lrf2n3E= ; Received: from unknown (HELO ?192.168.0.1?) (nickpiggin@203.173.3.219 with plain) by smtp110.mail.mud.yahoo.com with SMTP; 25 Jan 2007 02:01:25 -0000 X-YMail-OSG: x9UYv3IVM1nJ4zGsPm15.IJkb.3i2CZe1L_89fxjdwgiI4CoOGHAxgnZyOi4_XYR0C6wZEv3Gg-- Message-ID: <45B80F65.6010206@yahoo.com.au> Date: Thu, 25 Jan 2007 13:01:09 +1100 From: Nick Piggin User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20051007 Debian/1.7.12-1 X-Accept-Language: en MIME-Version: 1.0 To: David Chinner CC: Peter Zijlstra , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS References: <20070123223702.GF33919298@melbourne.sgi.com> <1169640835.6189.14.camel@twins> <45B7627B.8050202@yahoo.com.au> <20070124224654.GN33919298@melbourne.sgi.com> <45B7F5F9.2070308@yahoo.com.au> <20070125003536.GS33919298@melbourne.sgi.com> <45B7FE1C.3070807@yahoo.com.au> <20070125015204.GV33919298@melbourne.sgi.com> In-Reply-To: <20070125015204.GV33919298@melbourne.sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10437 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nickpiggin@yahoo.com.au Precedence: bulk X-list: xfs Content-Length: 3958 Lines: 92 David Chinner wrote: > On Thu, Jan 25, 2007 at 11:47:24AM +1100, Nick Piggin wrote: > >>David Chinner wrote: >> >>>On Thu, Jan 25, 2007 at 11:12:41AM +1100, Nick Piggin wrote: >> >>>>... so surely if you do a direct read followed by a buffered read, >>>>you should *not* get the same data if there has been some activity >>>>to modify that part of the file in the meantime (whether that be a >>>>buffered or direct write). >>> >>> >>>Right. And that is what happens in XFS because it purges the >>>caches on direct I/O and forces data to be re-read from disk. >> >>And that is critical for direct IO writes, of course. >> >> >>>Effectively, if you are mixing direct I/O with other types of I/O >>>(buffered or mmap) then the application really needs to be certain >>>it is doing the right thing because there are races that can occur >>>below the filesystem. All we care about in the filesystem is that >>>what we cache is the same as what is on disk, and that implies that >>>direct I/O needs to purge the cache regardless of the state it is in.... >>> >>>Hence we need to unmap pages and use truncate semantics on them to >>>ensure they are removed from the page cache.... 
>> >>OK, I understand that this does need to happen (at least for writes), >>so you need to fix it regardless of the DIO read issue. >> >>But I'm just interested about DIO reads. I think you can get pretty >>reasonable semantics without discarding pagecache, but the semantics >>are weaker in one aspect. >> >>DIO read >>1. writeback page >>2. read from disk >> >>Now your read will pick up data no older than 1. And if a buffered >>write happens after 2, then there is no problem either. >> >>So if you are doing a buffered write and DIO read concurrently, you >>want synchronisation so the buffered write happens either before 1 >>or after 2 -- the DIO read will see either all or none of the write. >> >>Supposing your pagecache isn't invalidated, then a buffered write >>(from mmap, if XFS doesn't allow write(2)) comes in between 1 and 2, >>then the DIO read will find either none, some, or all of that write. >> >>So I guess what you are preventing is the "some" case. Am I right? > > > No. The only thing that will happen here is that the direct read > will see _none_ of the write because the mmap write occurred during > the DIO read to a different set of pages in memory. There is no > "some" or "all" case here. But if the buffers get partially or completely written back in the meantime, then the DIO read could see that. > IOWs, at a single point in time we have 2 different views > of the one file which are both apparently valid and that is what > we are trying to avoid. We have a coherency problem here which is > solved by forcing the mmap write to reread the data off disk.... I don't see why the mmap write needs to reread data off disk. The data on disk won't get changed by the DIO read. > Look at it this way - direct I/O in XFS implies an I/O barrier > (similar to a memory barrier). Writing back and tossing out of the > page cache at the start of the direct IO gives us an I/O coherency > barrier - everything before the direct IO is sync'd to disk before > the direct IO can proceed, and everything after the direct IO has > started must be fetched from disk again. > > Because mmap I/O doesn't necessarily need I/O to change the state > of a page (think of a read fault then a later write fault), to make > the I/O barrier work correctly with mmap() we need to ensure that > it will fault the page from disk again. We can only do that by > unmapping the pages before tossing them from the page cache..... OK, but direct IO *reads* do not conceptually invalidate pagecache sitting on top of those blocks. Pagecache becomes invalid when the page no longer represents the most uptodate copy of the data (eg. in the case of a direct IO write). -- SUSE Labs, Novell Inc. 
Send instant messages to your online friends http://au.messenger.yahoo.com From owner-xfs@oss.sgi.com Wed Jan 24 19:43:42 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 19:43:48 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0P3hcqw007343 for ; Wed, 24 Jan 2007 19:43:41 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA08015; Thu, 25 Jan 2007 14:42:35 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0P3gW7Y103461459; Thu, 25 Jan 2007 14:42:32 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0P3gRiV103434624; Thu, 25 Jan 2007 14:42:27 +1100 (AEDT) Date: Thu, 25 Jan 2007 14:42:27 +1100 From: David Chinner To: Nick Piggin Cc: David Chinner , Peter Zijlstra , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS Message-ID: <20070125034227.GX33919298@melbourne.sgi.com> References: <20070123223702.GF33919298@melbourne.sgi.com> <1169640835.6189.14.camel@twins> <45B7627B.8050202@yahoo.com.au> <20070124224654.GN33919298@melbourne.sgi.com> <45B7F5F9.2070308@yahoo.com.au> <20070125003536.GS33919298@melbourne.sgi.com> <45B7FE1C.3070807@yahoo.com.au> <20070125015204.GV33919298@melbourne.sgi.com> <45B80F65.6010206@yahoo.com.au> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45B80F65.6010206@yahoo.com.au> User-Agent: Mutt/1.4.2.1i X-archive-position: 10438 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 4237 Lines: 99 On Thu, Jan 25, 2007 at 01:01:09PM +1100, Nick Piggin wrote: > David Chinner wrote: > >On Thu, Jan 25, 2007 at 11:47:24AM +1100, Nick Piggin wrote: > > > >>David Chinner wrote: > >> > >>>On Thu, Jan 25, 2007 at 11:12:41AM +1100, Nick Piggin wrote: > >>But I'm just interested about DIO reads. I think you can get pretty > >>reasonable semantics without discarding pagecache, but the semantics > >>are weaker in one aspect. > >> > >>DIO read > >>1. writeback page > >>2. read from disk > >> > >>Now your read will pick up data no older than 1. And if a buffered > >>write happens after 2, then there is no problem either. > >> > >>So if you are doing a buffered write and DIO read concurrently, you > >>want synchronisation so the buffered write happens either before 1 > >>or after 2 -- the DIO read will see either all or none of the write. > >> > >>Supposing your pagecache isn't invalidated, then a buffered write > >>(from mmap, if XFS doesn't allow write(2)) comes in between 1 and 2, > >>then the DIO read will find either none, some, or all of that write. > >> > >>So I guess what you are preventing is the "some" case. Am I right? > > > > > >No. The only thing that will happen here is that the direct read > >will see _none_ of the write because the mmap write occurred during > >the DIO read to a different set of pages in memory. There is no > >"some" or "all" case here. 
> > But if the buffers get partially or completely written back in the > meantime, then the DIO read could see that. Only if you can dirty them and flush them to disk while the direct read is waiting in the I/O queue (remember, the direct read flushes dirty cached data before being issued). Given that we don't lock the inode in the buffered I/O *writeback* path, we have to stop pages being dirtied in the page cache up front so we don't have mmap writeback over the top of the direct read. Hence we have to prevent mmap from dirtying the same file offset we are doing direct reads on until the direct read has been issued. i.e. we need a barrier. > >IOWs, at a single point in time we have 2 different views > >of the one file which are both apparently valid and that is what > >we are trying to avoid. We have a coherency problem here which is > >solved by forcing the mmap write to reread the data off disk.... > > I don't see why the mmap write needs to reread data off disk. The > data on disk won't get changed by the DIO read. No, but the data _in memory_ will, and now when the direct read completes it will return data that is different to what is in the page cache. For direct I/O we define the correct data to be what is on disk, not what is in memory, so any time we bypass what is in memory, we need to ensure that we prevent the data being changed again in memory before we issue the disk I/O. > >Look at it this way - direct I/O in XFS implies an I/O barrier > >(similar to a memory barrier). Writing back and tossing out of the > >page cache at the start of the direct IO gives us an I/O coherency > >barrier - everything before the direct IO is sync'd to disk before > >the direct IO can proceed, and everything after the direct IO has > >started must be fetched from disk again. > > > >Because mmap I/O doesn't necessarily need I/O to change the state > >of a page (think of a read fault then a later write fault), to make > >the I/O barrier work correctly with mmap() we need to ensure that > >it will fault the page from disk again. We can only do that by > >unmapping the pages before tossing them from the page cache..... > > OK, but direct IO *reads* do not conceptually invalidate pagecache > sitting on top of those blocks. Pagecache becomes invalid when the > page no longer represents the most uptodate copy of the data (eg. > in the case of a direct IO write). In theory, yes. In practice, if you don't invalidate the page cache you have no mechanism of synchronising mmap with direct I/O, and at that point you have no coherency model that you can work with in your filesystem. You have to be able to guarantee synchronisation between the different methods that can dirty data before you can give any guarantees about data coherency..... Cheers, Dave.
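[Editorial illustration, complementing the earlier sketch and again not XFS code: the synchronisation an application can provide for itself when it mixes mmap and direct I/O. Here msync(MS_SYNC) plays the "everything before the direct I/O is sync'd to disk" half of the barrier described above, so the later O_DIRECT read sees the store, provided nothing dirties the page again before the read completes. The file name and alignment are assumptions for the example.]

    /*
     * Hypothetical demo: flushing a dirty MAP_SHARED page with msync()
     * before issuing a direct read of the same offset, so the on-disk and
     * in-memory views agree.
     */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "dio-msync-demo.dat";  /* hypothetical file name */
        char block[4096];
        void *map, *dbuf;
        int fd, dfd;

        memset(block, 'o', sizeof(block));
        fd = open(path, O_CREAT | O_RDWR, 0644);
        if (fd < 0 || pwrite(fd, block, sizeof(block), 0) != (ssize_t)sizeof(block))
            return 1;
        fsync(fd);

        map = mmap(NULL, sizeof(block), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED)
            return 1;
        *(char *)map = 'n';

        /* The application-side barrier: push the dirty mapped page to disk
         * before issuing the direct read. */
        if (msync(map, sizeof(block), MS_SYNC) != 0)
            return 1;

        dfd = open(path, O_RDONLY | O_DIRECT);
        if (dfd < 0 || posix_memalign(&dbuf, 4096, sizeof(block)) != 0)
            return 1;
        if (pread(dfd, dbuf, sizeof(block), 0) != (ssize_t)sizeof(block))
            return 1;

        /* With the msync() in place both views print 'n'. */
        printf("in memory: %c, direct read from disk: %c\n",
               *(char *)map, *(char *)dbuf);

        munmap(map, sizeof(block));
        close(dfd);
        close(fd);
        free(dbuf);
        return 0;
    }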
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jan 24 19:51:27 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 19:51:32 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0P3pPqw009594 for ; Wed, 24 Jan 2007 19:51:26 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA08234; Thu, 25 Jan 2007 14:50:25 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0P3oO7Y103338212; Thu, 25 Jan 2007 14:50:25 +1100 (AEDT) Received: (from bnaujok@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0P3oN8f103391539; Thu, 25 Jan 2007 14:50:23 +1100 (AEDT) Date: Thu, 25 Jan 2007 14:50:23 +1100 (AEDT) From: Barry Naujok Message-Id: <200701250350.l0P3oN8f103391539@snort.melbourne.sgi.com> To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 960471 - fix extent length in xfs_io bmap X-archive-position: 10439 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@snort.melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 547 Lines: 17 Fix xfs_bmap -n option displaying a truncated extent. Date: Thu Jan 25 14:49:53 AEDT 2007 Workarea: snort.melbourne.sgi.com:/home/bnaujok/isms/xfs-cmds Inspected by: Utako Kusaka The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:27990a xfsprogs/io/bmap.c - 1.13 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/io/bmap.c.diff?r1=text&tr1=1.13&r2=text&tr2=1.12&f=h - Fix xfs_bmap -n option displaying a truncated extent. 
From owner-xfs@oss.sgi.com Wed Jan 24 19:53:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 19:53:16 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0P3r6qw010247 for ; Wed, 24 Jan 2007 19:53:08 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA08268; Thu, 25 Jan 2007 14:52:08 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0P3q77Y103400827; Thu, 25 Jan 2007 14:52:07 +1100 (AEDT) Received: (from bnaujok@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0P3q51G100730153; Thu, 25 Jan 2007 14:52:05 +1100 (AEDT) Date: Thu, 25 Jan 2007 14:52:05 +1100 (AEDT) From: Barry Naujok Message-Id: <200701250352.l0P3q51G100730153@snort.melbourne.sgi.com> To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 960472 - Fix SEGV when using the xfs_io mwrite command #2 X-archive-position: 10440 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@snort.melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 710 Lines: 21 Fix SEGV when using the xfs_io mwrite command Date: Thu Jan 25 14:51:39 AEDT 2007 Workarea: snort.melbourne.sgi.com:/home/bnaujok/isms/xfs-cmds Inspected by: Utako Kusaka The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:27991a xfsprogs/doc/CHANGES - 1.233 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/doc/CHANGES.diff?r1=text&tr1=1.233&r2=text&tr2=1.232&f=h - Update changes history xfsprogs/io/mmap.c - 1.13 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/io/mmap.c.diff?r1=text&tr1=1.13&r2=text&tr2=1.12&f=h - Fix SEGV when using the xfs_io mwrite command From owner-xfs@oss.sgi.com Wed Jan 24 20:26:43 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 20:26:50 -0800 (PST) X-Spam-oss-Status: No, score=-2.1 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp110.mail.mud.yahoo.com (smtp110.mail.mud.yahoo.com [209.191.85.220]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0P4Qfqw022427 for ; Wed, 24 Jan 2007 20:26:42 -0800 Received: (qmail 22960 invoked from network); 25 Jan 2007 04:25:46 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com.au; h=Received:X-YMail-OSG:Message-ID:Date:From:User-Agent:X-Accept-Language:MIME-Version:To:CC:Subject:References:In-Reply-To:Content-Type:Content-Transfer-Encoding; b=5h57CT11AypD6H/bJcdbn1stFHrby4IzQSH+BPnmINE+lW2FJCwQqQ1L6Kx9GRw8VSyC+m6IL48GM7MtcXoMgrzdi+OR8tfilK0Bg9l3pEi+SnNaubdJ7Soh3Iy63RDebtVzMUfJkjLBz3+khS4FjURO8xL8Pf+z5VXHycBejT8= ; Received: from unknown (HELO ?192.168.0.1?) 
(nickpiggin@203.173.3.219 with plain) by smtp110.mail.mud.yahoo.com with SMTP; 25 Jan 2007 04:25:45 -0000 X-YMail-OSG: GDBSSjIVM1m8zYk78_rOBSbBIE66TPRC7xBG6bDwAYop4B6U Message-ID: <45B83139.1040007@yahoo.com.au> Date: Thu, 25 Jan 2007 15:25:29 +1100 From: Nick Piggin User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20051007 Debian/1.7.12-1 X-Accept-Language: en MIME-Version: 1.0 To: David Chinner CC: Peter Zijlstra , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS References: <20070123223702.GF33919298@melbourne.sgi.com> <1169640835.6189.14.camel@twins> <45B7627B.8050202@yahoo.com.au> <20070124224654.GN33919298@melbourne.sgi.com> <45B7F5F9.2070308@yahoo.com.au> <20070125003536.GS33919298@melbourne.sgi.com> <45B7FE1C.3070807@yahoo.com.au> <20070125015204.GV33919298@melbourne.sgi.com> <45B80F65.6010206@yahoo.com.au> <20070125034227.GX33919298@melbourne.sgi.com> In-Reply-To: <20070125034227.GX33919298@melbourne.sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10441 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nickpiggin@yahoo.com.au Precedence: bulk X-list: xfs Content-Length: 2730 Lines: 64 David Chinner wrote: > On Thu, Jan 25, 2007 at 01:01:09PM +1100, Nick Piggin wrote: > >>David Chinner wrote: >>>No. The only thing that will happen here is that the direct read >>>will see _none_ of the write because the mmap write occurred during >>>the DIO read to a different set of pages in memory. There is no >>>"some" or "all" case here. >> >>But if the buffers get partially or completely written back in the >>meantime, then the DIO read could see that. > > > Only if you can dirty them and flush them to disk while the direct > read is waiting in the I/O queue (remember, the direct read flushes > dirty cached data before being issued). Given that we don't lock the > inode in the buffered I/O *writeback* path, we have to stop pages being > dirtied in the page cache up front so we don't have mmap writeback > over the top of the direct read. However unlikely it may be, that is what I'm talking about in my "some" or "all" cases. Note that I'm not talking about a specific implementation (eg. XFS I guess avoids "some"), but just the possible scenarios. > Hence we have to prevent mmap for dirtying the same file offset we > are doing direct reads on until the direct read has been issued. > > i.e. we need a barrier. So you need to eliminate the "some" case? Because of course "none" and "all" are unavoidable. >>>IOWs, at a single point in time we have 2 different views >>>of the one file which are both apparently valid and that is what >>>we are trying to avoid. We have a coherency problem here which is >>>solved by forcing the mmap write to reread the data off disk.... >> >>I don't see why the mmap write needs to reread data off disk. The >>data on disk won't get changed by the DIO read. > > > No, but the data _in memory_ will, and now when the direct read > completes it will data that is different to what is in the page > cache. For direct I/O we define the correct data to be what is on > disk, not what is in memory, so any time we bypass what is in > memory, we need to ensure that we prevent the data being changed > again in memory before we issue the disk I/O. But when you drop your locks, before the direct IO read returns, some guy can mmap and dirty the pagecache anyway. 
By the time the read returns, the data is stale. This obviously must be synchronised in userspace. As I said, you can't avoid "none" or "all", and you can't say that userspace will see the most uptodate copy of the data. All you can say is that it will be no older than when the syscall is first made. Which is what you get if you simply writeback but do not invalidate pagecache for direct IO reads. -- SUSE Labs, Novell Inc. Send instant messages to your online friends http://au.messenger.yahoo.com From owner-xfs@oss.sgi.com Wed Jan 24 21:44:33 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 21:44:39 -0800 (PST) X-Spam-oss-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0P5iUqw004622 for ; Wed, 24 Jan 2007 21:44:32 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA10581; Thu, 25 Jan 2007 16:26:12 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0P5QB7Y103409775; Thu, 25 Jan 2007 16:26:11 +1100 (AEDT) Received: (from donaldd@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0P5QAka102609992; Thu, 25 Jan 2007 16:26:10 +1100 (AEDT) Date: Thu, 25 Jan 2007 16:26:10 +1100 (AEDT) From: Donald Douwsma Message-Id: <200701250526.l0P5QAka102609992@snort.melbourne.sgi.com> To: sgi.bugs.xfs@engr.sgi.com, xfs@oss.sgi.com Subject: TAKE 957441 - xfs_quota manpage contains errors for project quota. X-archive-position: 10442 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: donaldd@snort.melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 552 Lines: 16 Fix errors in the xfs_quota manpage. Date: Thu Jan 25 16:24:18 AEDT 2007 Workarea: snort.melbourne.sgi.com:/home/donaldd/isms/xfs-cmds Inspected by: bnaujok The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:27994a xfsprogs/man/man8/xfs_quota.8 - 1.8 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/man/man8/xfs_quota.8.diff?r1=text&tr1=1.8&r2=text&tr2=1.7&f=h - Fixed errors in the projid file format and the use of the xfs_quota project command. 
From owner-xfs@oss.sgi.com Wed Jan 24 23:41:30 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 24 Jan 2007 23:41:36 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0P7fRqw027409 for ; Wed, 24 Jan 2007 23:41:29 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA12856; Thu, 25 Jan 2007 18:40:25 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0P7eM7Y103353516; Thu, 25 Jan 2007 18:40:22 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0P7eIsj103228133; Thu, 25 Jan 2007 18:40:18 +1100 (AEDT) Date: Thu, 25 Jan 2007 18:40:18 +1100 From: David Chinner To: Nick Piggin Cc: David Chinner , Peter Zijlstra , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS Message-ID: <20070125074018.GB33919298@melbourne.sgi.com> References: <1169640835.6189.14.camel@twins> <45B7627B.8050202@yahoo.com.au> <20070124224654.GN33919298@melbourne.sgi.com> <45B7F5F9.2070308@yahoo.com.au> <20070125003536.GS33919298@melbourne.sgi.com> <45B7FE1C.3070807@yahoo.com.au> <20070125015204.GV33919298@melbourne.sgi.com> <45B80F65.6010206@yahoo.com.au> <20070125034227.GX33919298@melbourne.sgi.com> <45B83139.1040007@yahoo.com.au> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45B83139.1040007@yahoo.com.au> User-Agent: Mutt/1.4.2.1i X-archive-position: 10443 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 3307 Lines: 83 On Thu, Jan 25, 2007 at 03:25:29PM +1100, Nick Piggin wrote: > David Chinner wrote: > >On Thu, Jan 25, 2007 at 01:01:09PM +1100, Nick Piggin wrote: > > > >>David Chinner wrote: > > >>>No. The only thing that will happen here is that the direct read > >>>will see _none_ of the write because the mmap write occurred during > >>>the DIO read to a different set of pages in memory. There is no > >>>"some" or "all" case here. > >> > >>But if the buffers get partially or completely written back in the > >>meantime, then the DIO read could see that. > > > > > >Only if you can dirty them and flush them to disk while the direct > >read is waiting in the I/O queue (remember, the direct read flushes > >dirty cached data before being issued). Given that we don't lock the > >inode in the buffered I/O *writeback* path, we have to stop pages being > >dirtied in the page cache up front so we don't have mmap writeback > >over the top of the direct read. > > However unlikely it may be, that is what I'm talking about in my "some" > or "all" cases. Note that I'm not talking about a specific implementation > (eg. XFS I guess avoids "some"), but just the possible scenarios. Unfortunately, behaviour is different for different filesystems. I can only answer for XFS, which is different to most of the other filesystems in both locking and the way it treats the page cache. IOWs, if you want to talk about details, then have to talk about specific implementations because..... 
> >Hence we have to prevent mmap for dirtying the same file offset we > >are doing direct reads on until the direct read has been issued. > > > >i.e. we need a barrier. > > So you need to eliminate the "some" case? Because of course "none" and > "all" are unavoidable. "all" is avoidable, too, once you've kicked the pages out of the page cache - you just have to block the buffered read triggered by refaulting the page will cause until the direct I/O completes. > >>>IOWs, at a single point in time we have 2 different views > >>>of the one file which are both apparently valid and that is what > >>>we are trying to avoid. We have a coherency problem here which is > >>>solved by forcing the mmap write to reread the data off disk.... > >> > >>I don't see why the mmap write needs to reread data off disk. The > >>data on disk won't get changed by the DIO read. > > > > > >No, but the data _in memory_ will, and now when the direct read > >completes it will data that is different to what is in the page > >cache. For direct I/O we define the correct data to be what is on > >disk, not what is in memory, so any time we bypass what is in > >memory, we need to ensure that we prevent the data being changed > >again in memory before we issue the disk I/O. > > But when you drop your locks, before the direct IO read returns, some > guy can mmap and dirty the pagecache anyway. > By the time the read returns, the data is stale. Only if we leave the page in the page cache. If we toss the page, the time it takes to do the I/O for the page fault is enough for the direct I/o to complete. Sure it's not an absolute guarantee, but if you want an absolute guarantee: > This obviously must be synchronised in > userspace. As I said earlier..... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jan 25 01:09:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 25 Jan 2007 01:10:33 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0P99Uqw017438 for ; Thu, 25 Jan 2007 01:09:31 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 807A31A048CAB; Thu, 25 Jan 2007 04:08:36 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 7CDA0A000878; Thu, 25 Jan 2007 04:08:36 -0500 (EST) Date: Thu, 25 Jan 2007 04:08:36 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Pavel Machek cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: <20070125003242.GA23343@elf.ucw.cz> Message-ID: References: <20070122133735.GB4493@ucw.cz> <20070125003242.GA23343@elf.ucw.cz> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10444 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 2309 Lines: 70 On Thu, 25 Jan 2007, Pavel Machek wrote: > Hi! > > > > Is it highmem-related? Can you try it with mem=256M? > > > > Bad idea, the kernel crashes & burns when I use mem=256, I had to boot > > 2.6.20-rc5-6 single to get back into my machine, very nasty. 
Remember I > > use an onboard graphics controller that has 128MB of RAM allocated to it > > and I believe the ICH8 chipset also uses some memory, in any event mem=256 > > causes the machine to lockup before it can even get to the boot/init > > processes, the two leds on the keyboard were blinking, caps lock and > > scroll lock and I saw no console at all! > > Okay, so try mem=700M or disable CONFIG_HIGHMEM or something. > Pavel > -- > (english) http://www.livejournal.com/~pavelmachek > (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > I forgot to remove the mem=700M with PREEMPT off and this is what I saw this morning: [18005.261875] Killed process 4768 (screen) [18005.262343] Out of memory: kill process 4793 (screen) score 385 or a child [18005.262350] Killed process 4793 (screen) [18005.378536] Out of memory: kill process 4825 (screen) score 385 or a child [18005.378542] Killed process 4825 (screen) [18005.378547] Out of memory: kill process 4825 (screen) score 385 or a child [18005.378553] Killed process 4825 (screen) [18005.413072] Out of memory: kill process 4875 (screen) score 385 or a child [18005.413079] Killed process 4875 (screen) [18005.423735] Out of memory: kill process 4970 (screen) score 385 or a child [18005.423742] Killed process 4970 (screen) [18005.431391] Out of memory: kill process 21365 (xfs_fsr) score 286 or a child [18005.431398] Killed process 21365 (xfs_fsr) $ screen -ls There are screens on: 2532.pts-0.p34 (Dead ???) 3776.pts-2.p34 (Dead ???) 4768.pts-7.p34 (Dead ???) 4793.pts-9.p34 (Dead ???) 4825.pts-11.p34 (Dead ???) 4875.pts-13.p34 (Dead ???) 4970.pts-15.p34 (Dead ???) Lovely... $ uname -r 2.6.20-rc5 Something is seriously wrong with that OOM killer. Justin. From owner-xfs@oss.sgi.com Thu Jan 25 02:27:35 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 25 Jan 2007 02:27:41 -0800 (PST) X-Spam-oss-Status: No, score=-1.4 required=5.0 tests=AWL,BAYES_05 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp102.mail.mud.yahoo.com (smtp102.mail.mud.yahoo.com [209.191.85.212]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0PARWqw032728 for ; Thu, 25 Jan 2007 02:27:33 -0800 Received: (qmail 18968 invoked from network); 25 Jan 2007 10:26:38 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com.au; h=Received:X-YMail-OSG:Message-ID:Date:From:User-Agent:X-Accept-Language:MIME-Version:To:CC:Subject:References:In-Reply-To:Content-Type:Content-Transfer-Encoding; b=VNh/3AsBVfNWk7QyYg27s1mfmOyBgaAXMxco5kLv+Lo7oUk5NGFtMP+Wh3QzvaL2WjiWP0Zb5XTcZuCnIdj7epc9a0+LkLJEn7AiwZWytUzsctoIkgW3Hriw3uZLyCjOrTU2wxbSL9R5txZ6a/NGbI+HHgTB6S529Wton/2t2FA= ; Received: from unknown (HELO ?192.168.0.1?) 
(nickpiggin@203.173.3.219 with plain) by smtp102.mail.mud.yahoo.com with SMTP; 25 Jan 2007 10:26:37 -0000 X-YMail-OSG: ZSiPeooVM1n7QF.FfGSV.EgHDabCwPNbjJOodvf87.4sKdZ.iZjWuuhugALKPqPyhtWv96p9VA-- Message-ID: <45B885CE.4030206@yahoo.com.au> Date: Thu, 25 Jan 2007 21:26:22 +1100 From: Nick Piggin User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20051007 Debian/1.7.12-1 X-Accept-Language: en MIME-Version: 1.0 To: David Chinner CC: Peter Zijlstra , linux-kernel@vger.kernel.org, xfs@oss.sgi.com, akpm@osdl.org Subject: Re: [PATCH 1/2]: Fix BUG in cancel_dirty_pages on XFS References: <1169640835.6189.14.camel@twins> <45B7627B.8050202@yahoo.com.au> <20070124224654.GN33919298@melbourne.sgi.com> <45B7F5F9.2070308@yahoo.com.au> <20070125003536.GS33919298@melbourne.sgi.com> <45B7FE1C.3070807@yahoo.com.au> <20070125015204.GV33919298@melbourne.sgi.com> <45B80F65.6010206@yahoo.com.au> <20070125034227.GX33919298@melbourne.sgi.com> <45B83139.1040007@yahoo.com.au> <20070125074018.GB33919298@melbourne.sgi.com> In-Reply-To: <20070125074018.GB33919298@melbourne.sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10445 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nickpiggin@yahoo.com.au Precedence: bulk X-list: xfs Content-Length: 628 Lines: 17 David Chinner wrote: > Only if we leave the page in the page cache. If we toss the page, > the time it takes to do the I/O for the page fault is enough for > the direct I/o to complete. Sure it's not an absolute guarantee, > but if you want an absolute guarantee: So I guess you *could* relax it in theory... Anyway, don't take my pestering as advocacy for wanting XFS to do something more clever in such a corner case. I think you're quite right to be conservative and share codepaths between direct IO read and write. -- SUSE Labs, Novell Inc. 
Send instant messages to your online friends http://au.messenger.yahoo.com From owner-xfs@oss.sgi.com Thu Jan 25 03:12:33 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 25 Jan 2007 03:12:39 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0PBCWqw009444 for ; Thu, 25 Jan 2007 03:12:33 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 2D4621A048CAB; Thu, 25 Jan 2007 06:11:38 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 27C8BA00226A; Thu, 25 Jan 2007 06:11:38 -0500 (EST) Date: Thu, 25 Jan 2007 06:11:38 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Nick Piggin cc: Andrew Morton , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: <45B7FB71.5030603@yahoo.com.au> Message-ID: References: <20070122115703.97ed54f3.akpm@osdl.org> <45B7FB71.5030603@yahoo.com.au> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10446 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 1311 Lines: 40 On Thu, 25 Jan 2007, Nick Piggin wrote: > Justin Piszcz wrote: > > > > On Mon, 22 Jan 2007, Andrew Morton wrote: > > > >After the oom-killing, please see if you can free up the ZONE_NORMAL memory > > >via a few `echo 3 > /proc/sys/vm/drop_caches' commands. See if you can > > >work out what happened to the missing couple-of-hundred MB from > > >ZONE_NORMAL. > > > > > > > Running with PREEMPT OFF lets me copy the file!! The machine LAGS > > occasionally every 5-30-60 seconds or so VERY BADLY, talking 5-10 seconds of > > lag, but hey, it does not crash!! I will boot the older kernel with preempt > > on and see if I can get you that information you requested. > > It wouldn't be a bad idea to recompile the new kernel with preempt on > and get the info from there. > > It is usually best to be working with the most recent kernels. We can > always backport any important fixes if we need to. > > Thanks, > Nick > > -- > SUSE Labs, Novell Inc. > Send instant messages to your online friends http://au.messenger.yahoo.com - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > In my tests for the most part I am using the latest kernels. Justin. 
From owner-xfs@oss.sgi.com Thu Jan 25 03:14:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 25 Jan 2007 03:14:11 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0PBE1qw010014 for ; Thu, 25 Jan 2007 03:14:05 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id B83181A048CAB; Thu, 25 Jan 2007 06:13:07 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id B08E6A00226A; Thu, 25 Jan 2007 06:13:07 -0500 (EST) Date: Thu, 25 Jan 2007 06:13:07 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Bill Cizek cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: <45B80610.5010804@rcn.com> Message-ID: References: <20070122115703.97ed54f3.akpm@osdl.org> <45B80610.5010804@rcn.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10447 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 1752 Lines: 48 On Wed, 24 Jan 2007, Bill Cizek wrote: > Justin Piszcz wrote: > > On Mon, 22 Jan 2007, Andrew Morton wrote: > > > > > > On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz > > > > wrote: > > > > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to > > > > invoke the OOM killer and kill all of my processes? > > > > > > Running with PREEMPT OFF lets me copy the file!! The machine LAGS > > occasionally every 5-30-60 seconds or so VERY BADLY, talking 5-10 seconds of > > lag, but hey, it does not crash!! I will boot the older kernel with preempt > > on and see if I can get you that information you requested. > > > Justin, > > According to your kernel_ring_buffer.txt (attached to another email), you are > using "anticipatory" as your io scheduler: > 289 Jan 24 18:35:25 p34 kernel: [ 0.142130] io scheduler noop registered > 290 Jan 24 18:35:25 p34 kernel: [ 0.142194] io scheduler anticipatory > registered (default) > > I had a problem with this scheduler where my system would occasionally lockup > during heavy I/O. Sometimes it would fix itself, sometimes I had to reboot. > I changed to the "CFQ" io scheduler and my system has worked fine since then. > > CFQ has to be built into the kernel (under BlockLayer/IOSchedulers). It can > be selected as default or you can set it during runtime: > > echo cfq > /sys/block//queue/scheduler > ... > > Hope this helps, > Bill > > I used to run CFQ awhile back but then I switched over to AS as it has better performance for my workloads, currently, I am running with PREEMPT off, if I see any additional issues, I will switch to the CFQ scheduler. Right now, its the OOM killer that is going crazy. Justin. 
From owner-xfs@oss.sgi.com Thu Jan 25 16:10:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 25 Jan 2007 16:10:16 -0800 (PST) X-Spam-oss-Status: No, score=-2.6 required=5.0 tests=BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from sigma957.cis.mcmaster.ca (sigma957.CIS.McMaster.CA [130.113.64.83]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0Q0A7qw004437 for ; Thu, 25 Jan 2007 16:10:08 -0800 Received: from daltron7.UTS.McMaster.CA (daltron7.UTS.mcmaster.ca [130.113.64.13]) by sigma957.cis.mcmaster.ca (8.13.7/8.13.7) with ESMTP id l0PMZbiP014822; Thu, 25 Jan 2007 17:35:43 -0500 (EST) Received: from coffee.psychology.mcmaster.ca (coffee.Psychology.McMaster.CA [130.113.218.59]) by daltron7.UTS.McMaster.CA (8.13.7/8.13.7) with ESMTP id l0PMYVpr031756; Thu, 25 Jan 2007 17:34:35 -0500 Received: by coffee.psychology.mcmaster.ca (Postfix, from userid 502) id C9D1EEE030D; Thu, 25 Jan 2007 17:34:31 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by coffee.psychology.mcmaster.ca (Postfix) with ESMTP id C721BEE0301; Thu, 25 Jan 2007 17:34:31 -0500 (EST) Date: Thu, 25 Jan 2007 17:34:31 -0500 (EST) From: Mark Hahn X-X-Sender: hahn@coffee.psychology.mcmaster.ca To: Justin Piszcz cc: Pavel Machek , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: Message-ID: References: <20070122133735.GB4493@ucw.cz> <20070125003242.GA23343@elf.ucw.cz> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-PMX-Version-Mac: 4.7.1.128075, Antispam-Engine: 2.4.0.264935, Antispam-Data: 2007.1.25.140433 X-PerlMx-Spam: Gauge=IIIIIII, Probability=7%, Report='__CT 0, __CT_TEXT_PLAIN 0, __HAS_MSGID 0, __MIME_TEXT_ONLY 0, __MIME_VERSION 0, __SANE_MSGID 0' X-archive-position: 10453 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hahn@mcmaster.ca Precedence: bulk X-list: xfs Content-Length: 491 Lines: 13 > Something is seriously wrong with that OOM killer. do you know you don't have to operate in OOM-slaughter mode? "vm.overcommit_memory = 2" in your /etc/sysctl.conf puts you into a mode where the kernel tracks your "committed" memory needs, and will eventually cause some allocations to fail. this is often much nicer than the default random OOM slaughter. (you probably also need to adjust vm.overcommit_ratio with some knowlege of your MemTotal and SwapTotal.) regards, mark hahn. 
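To put numbers on Mark's suggestion: with vm.overcommit_memory=2 the kernel refuses new address space once Committed_AS reaches roughly SwapTotal plus overcommit_ratio percent of MemTotal, so a box with 2GB RAM and 2GB swap and a ratio of 90 gets a commit limit of about 2GB + 1.8GB = 3.8GB. A hedged example of the settings (the ratio here is an arbitrary illustration, not a recommendation):

# /etc/sysctl.conf
vm.overcommit_memory = 2
vm.overcommit_ratio = 90

# apply without a reboot, then check the resulting ceiling
sysctl -p
grep -i commit /proc/meminfo     # CommitLimit vs Committed_AS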
From owner-xfs@oss.sgi.com Thu Jan 25 16:23:00 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 25 Jan 2007 16:23:05 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, J_CHICKENPOX_24,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0Q0Mwqw011796 for ; Thu, 25 Jan 2007 16:23:00 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 3D3EC1A0B936F; Thu, 25 Jan 2007 19:22:04 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 15E65A18DD66; Thu, 25 Jan 2007 19:22:04 -0500 (EST) Date: Thu, 25 Jan 2007 19:22:04 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Mark Hahn cc: Pavel Machek , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2 In-Reply-To: Message-ID: References: <20070122133735.GB4493@ucw.cz> <20070125003242.GA23343@elf.ucw.cz> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10454 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 913 Lines: 31 On Thu, 25 Jan 2007, Mark Hahn wrote: > > Something is seriously wrong with that OOM killer. > > do you know you don't have to operate in OOM-slaughter mode? > > "vm.overcommit_memory = 2" in your /etc/sysctl.conf puts you into a mode where > the kernel tracks your "committed" memory needs, and will eventually cause > some allocations to fail. > this is often much nicer than the default random OOM slaughter. > (you probably also need to adjust vm.overcommit_ratio with some knowlege of > your MemTotal and SwapTotal.) > > regards, mark hahn. > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > > # sysctl -a | grep vm.over vm.overcommit_ratio = 50 vm.overcommit_memory = 0 I'll have to experiment with these options, thanks for the info! Justin. 
From owner-xfs@oss.sgi.com Fri Jan 26 01:26:39 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 26 Jan 2007 01:26:48 -0800 (PST) X-Spam-oss-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from smtp.osdl.org (smtp.osdl.org [65.172.181.24]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0Q9Qbqw016865 for ; Fri, 26 Jan 2007 01:26:38 -0800 Received: from shell0.pdx.osdl.net (fw.osdl.org [65.172.181.6]) by smtp.osdl.org (8.12.8/8.12.8) with ESMTP id l0Q9P6gP028950 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Fri, 26 Jan 2007 01:25:07 -0800 Received: from box (shell0.pdx.osdl.net [10.9.0.31]) by shell0.pdx.osdl.net (8.13.1/8.11.6) with SMTP id l0Q9P5um012579; Fri, 26 Jan 2007 01:25:06 -0800 Date: Fri, 26 Jan 2007 01:25:05 -0800 From: Andrew Morton To: Justin Piszcz Cc: Chuck Ebbert , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com, Neil Brown Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) Message-Id: <20070126012505.d8cb07f2.akpm@osdl.org> In-Reply-To: References: <45B5261B.1050104@redhat.com> X-Mailer: Sylpheed version 2.2.7 (GTK+ 2.8.17; x86_64-unknown-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.172 $ X-Scanned-By: MIMEDefang 2.36 X-archive-position: 10457 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@osdl.org Precedence: bulk X-list: xfs Content-Length: 2054 Lines: 74 On Wed, 24 Jan 2007 18:37:15 -0500 (EST) Justin Piszcz wrote: > > Without digging too deeply, I'd say you've hit the same bug Sami Farin and > > others > > have reported starting with 2.6.19: pages mapped with kmap_atomic() become > > unmapped > > during memcpy() or similar operations. Try disabling preempt -- that seems to > > be the > > common factor. > > > > > > - > > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > > the body of a message to majordomo@vger.kernel.org > > More majordomo info at http://vger.kernel.org/majordomo-info.html > > > > > > After I run some other tests, I am going to re-run this test and see if it > OOPSes again with PREEMPT off. Strange. The below debug patch might catch it - please run with this applied. 
--- a/arch/i386/mm/highmem.c~kmap_atomic-debugging +++ a/arch/i386/mm/highmem.c @@ -30,7 +30,43 @@ void *kmap_atomic(struct page *page, enu { enum fixed_addresses idx; unsigned long vaddr; + static unsigned warn_count = 10; + if (unlikely(warn_count == 0)) + goto skip; + + if (unlikely(in_interrupt())) { + if (in_irq()) { + if (type != KM_IRQ0 && type != KM_IRQ1 && + type != KM_BIO_SRC_IRQ && type != KM_BIO_DST_IRQ && + type != KM_BOUNCE_READ) { + WARN_ON(1); + warn_count--; + } + } else if (!irqs_disabled()) { /* softirq */ + if (type != KM_IRQ0 && type != KM_IRQ1 && + type != KM_SOFTIRQ0 && type != KM_SOFTIRQ1 && + type != KM_SKB_SUNRPC_DATA && + type != KM_SKB_DATA_SOFTIRQ && + type != KM_BOUNCE_READ) { + WARN_ON(1); + warn_count--; + } + } + } + + if (type == KM_IRQ0 || type == KM_IRQ1 || type == KM_BOUNCE_READ) { + if (!irqs_disabled()) { + WARN_ON(1); + warn_count--; + } + } else if (type == KM_SOFTIRQ0 || type == KM_SOFTIRQ1) { + if (irq_count() == 0 && !irqs_disabled()) { + WARN_ON(1); + warn_count--; + } + } +skip: /* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */ pagefault_disable(); if (!PageHighMem(page)) _ From owner-xfs@oss.sgi.com Fri Jan 26 02:35:43 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 26 Jan 2007 02:35:50 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0QAZfqw027958 for ; Fri, 26 Jan 2007 02:35:42 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id 1F38A1A000516; Fri, 26 Jan 2007 04:37:56 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 10CCAA050B16; Fri, 26 Jan 2007 04:37:56 -0500 (EST) Date: Fri, 26 Jan 2007 04:37:55 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Andrew Morton cc: Chuck Ebbert , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com, Neil Brown Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) In-Reply-To: <20070126012505.d8cb07f2.akpm@osdl.org> Message-ID: References: <45B5261B.1050104@redhat.com> <20070126012505.d8cb07f2.akpm@osdl.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10459 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 2451 Lines: 86 On Fri, 26 Jan 2007, Andrew Morton wrote: > On Wed, 24 Jan 2007 18:37:15 -0500 (EST) > Justin Piszcz wrote: > > > > Without digging too deeply, I'd say you've hit the same bug Sami Farin and > > > others > > > have reported starting with 2.6.19: pages mapped with kmap_atomic() become > > > unmapped > > > during memcpy() or similar operations. Try disabling preempt -- that seems to > > > be the > > > common factor. > > > > > > > > > - > > > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > > > the body of a message to majordomo@vger.kernel.org > > > More majordomo info at http://vger.kernel.org/majordomo-info.html > > > > > > > > > > After I run some other tests, I am going to re-run this test and see if it > > OOPSes again with PREEMPT off. > > Strange. The below debug patch might catch it - please run with this > applied. 
> > > --- a/arch/i386/mm/highmem.c~kmap_atomic-debugging > +++ a/arch/i386/mm/highmem.c > @@ -30,7 +30,43 @@ void *kmap_atomic(struct page *page, enu > { > enum fixed_addresses idx; > unsigned long vaddr; > + static unsigned warn_count = 10; > > + if (unlikely(warn_count == 0)) > + goto skip; > + > + if (unlikely(in_interrupt())) { > + if (in_irq()) { > + if (type != KM_IRQ0 && type != KM_IRQ1 && > + type != KM_BIO_SRC_IRQ && type != KM_BIO_DST_IRQ && > + type != KM_BOUNCE_READ) { > + WARN_ON(1); > + warn_count--; > + } > + } else if (!irqs_disabled()) { /* softirq */ > + if (type != KM_IRQ0 && type != KM_IRQ1 && > + type != KM_SOFTIRQ0 && type != KM_SOFTIRQ1 && > + type != KM_SKB_SUNRPC_DATA && > + type != KM_SKB_DATA_SOFTIRQ && > + type != KM_BOUNCE_READ) { > + WARN_ON(1); > + warn_count--; > + } > + } > + } > + > + if (type == KM_IRQ0 || type == KM_IRQ1 || type == KM_BOUNCE_READ) { > + if (!irqs_disabled()) { > + WARN_ON(1); > + warn_count--; > + } > + } else if (type == KM_SOFTIRQ0 || type == KM_SOFTIRQ1) { > + if (irq_count() == 0 && !irqs_disabled()) { > + WARN_ON(1); > + warn_count--; > + } > + } > +skip: > /* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */ > pagefault_disable(); > if (!PageHighMem(page)) > _ > > The RAID5 bug may be hard to trigger, I have only made it happen once so far (but only tried it once, don't like locking up the raid :)), I will re-run the test after applying this patch. Justin. From owner-xfs@oss.sgi.com Fri Jan 26 04:32:55 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 26 Jan 2007 04:33:08 -0800 (PST) X-Spam-oss-Status: No, score=-2.5 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0QCWsqw021354 for ; Fri, 26 Jan 2007 04:32:55 -0800 Received: by lucidpixels.com (Postfix, from userid 1001) id ECB591A000516; Fri, 26 Jan 2007 07:31:50 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id E4E85A59695C; Fri, 26 Jan 2007 07:31:50 -0500 (EST) Date: Fri, 26 Jan 2007 07:31:50 -0500 (EST) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Andrew Morton cc: Chuck Ebbert , linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com, Neil Brown Subject: Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5) In-Reply-To: <20070126012505.d8cb07f2.akpm@osdl.org> Message-ID: References: <45B5261B.1050104@redhat.com> <20070126012505.d8cb07f2.akpm@osdl.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 10460 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 2393 Lines: 80 Just re-ran the test 4-5 times, could not reproduce this one, but I'll keep running this kernel w/patch for a while and see if it happens again. On Fri, 26 Jan 2007, Andrew Morton wrote: > On Wed, 24 Jan 2007 18:37:15 -0500 (EST) > Justin Piszcz wrote: > > > > Without digging too deeply, I'd say you've hit the same bug Sami Farin and > > > others > > > have reported starting with 2.6.19: pages mapped with kmap_atomic() become > > > unmapped > > > during memcpy() or similar operations. Try disabling preempt -- that seems to > > > be the > > > common factor. 
> > > > > > > > > - > > > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > > > the body of a message to majordomo@vger.kernel.org > > > More majordomo info at http://vger.kernel.org/majordomo-info.html > > > > > > > > > > After I run some other tests, I am going to re-run this test and see if it > > OOPSes again with PREEMPT off. > > Strange. The below debug patch might catch it - please run with this > applied. > > > --- a/arch/i386/mm/highmem.c~kmap_atomic-debugging > +++ a/arch/i386/mm/highmem.c > @@ -30,7 +30,43 @@ void *kmap_atomic(struct page *page, enu > { > enum fixed_addresses idx; > unsigned long vaddr; > + static unsigned warn_count = 10; > > + if (unlikely(warn_count == 0)) > + goto skip; > + > + if (unlikely(in_interrupt())) { > + if (in_irq()) { > + if (type != KM_IRQ0 && type != KM_IRQ1 && > + type != KM_BIO_SRC_IRQ && type != KM_BIO_DST_IRQ && > + type != KM_BOUNCE_READ) { > + WARN_ON(1); > + warn_count--; > + } > + } else if (!irqs_disabled()) { /* softirq */ > + if (type != KM_IRQ0 && type != KM_IRQ1 && > + type != KM_SOFTIRQ0 && type != KM_SOFTIRQ1 && > + type != KM_SKB_SUNRPC_DATA && > + type != KM_SKB_DATA_SOFTIRQ && > + type != KM_BOUNCE_READ) { > + WARN_ON(1); > + warn_count--; > + } > + } > + } > + > + if (type == KM_IRQ0 || type == KM_IRQ1 || type == KM_BOUNCE_READ) { > + if (!irqs_disabled()) { > + WARN_ON(1); > + warn_count--; > + } > + } else if (type == KM_SOFTIRQ0 || type == KM_SOFTIRQ1) { > + if (irq_count() == 0 && !irqs_disabled()) { > + WARN_ON(1); > + warn_count--; > + } > + } > +skip: > /* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */ > pagefault_disable(); > if (!PageHighMem(page)) > _ > From owner-xfs@oss.sgi.com Sun Jan 28 03:00:51 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 28 Jan 2007 03:00:57 -0800 (PST) X-Spam-oss-Status: No, score=-1.1 required=5.0 tests=AWL,BAYES_05, J_CHICKENPOX_66 autolearn=no version=3.2.0-pre1-r497472 Received: from py-out-1112.google.com (py-out-1112.google.com [64.233.166.176]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0SB0oqw015036 for ; Sun, 28 Jan 2007 03:00:51 -0800 Received: by py-out-1112.google.com with SMTP id p76so600489pyb for ; Sun, 28 Jan 2007 02:59:56 -0800 (PST) DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=gjrRdDT1/jIKMgJ15aRKiYVsR6myXYtmyifs1QTlqGyaSaqy4U7ymZesfnTbn7zIllysdOFN0AHOMcQuRbZJA2dD7g5wDFpQDy3nZ9K8yA8K/YNnFA0NiQz4/xlNkv58oM9kx3LD2nz1liLx9dO+vtcuv+URY/f5qJ74kWISgDI= Received: by 10.35.40.10 with SMTP id s10mr10471718pyj.1169980343387; Sun, 28 Jan 2007 02:32:23 -0800 (PST) Received: by 10.35.46.19 with HTTP; Sun, 28 Jan 2007 02:32:23 -0800 (PST) Message-ID: <5d96567b0701280232w17e1a187r95d2c59711799b1a@mail.gmail.com> Date: Sun, 28 Jan 2007 12:32:23 +0200 From: "Raz Ben-Jehuda(caro)" To: nscott@aconex.com Subject: Re: [DISCUSS] xfs allocation bitmap method over linux raid Cc: linux-xfs@oss.sgi.com In-Reply-To: <1169678294.18017.200.camel@edge> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline References: <5d96567b0701232234y2ff15762sbd1aaada5c3a0a0@mail.gmail.com> <1169678294.18017.200.camel@edge> X-archive-position: 10468 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: raziebe@gmail.com Precedence: bulk 
X-list: xfs Content-Length: 3180 Lines: 96 first many thanks to your reply. see bellow. On 1/25/07, Nathan Scott wrote: > Hi Raz, > > On Wed, 2007-01-24 at 08:34 +0200, Raz Ben-Jehuda(caro) wrote: > > David Hello. > > I have looked up in LKML and hopefully you are the one to ask in > > regard to xfs file system in Linux. > > OOC, which one? (would be nice to put an entry for your company > on the http://oss.sgi.com/projects/xfs/users.html page). > > > These servers demand high throughput from the storage. > > We applied XFS file system on our machines. > > > > A video server reads a file in a sequential manner. So, if a > > Do you write the file sequentially? Buffered or direct writes? does not matter. even command like: dd if=/dev/zero of=/d1/xxx bs=1M count=1000 will reveil extents of size modulo(stripe unit ) != 0 > > file extent size is not a factor of the stripe unit size a sequential > > read over a raid would break into several small pieces which > > is undesirable for performance. > > > > I have been examining the bitmap of a file over Linux raid5. > > I've found that, in combination with Jens Axboe's blktrace toolkit > to be very useful - if you have a sufficiently recent kernel, I'd > highly recommend you check out blktrace, it should help you alot. > > (bmap == block map, theres no bitmap involved) > > > According to the documentation XFS tries to align a file on > > stripe unit size. > > > > What I have done is to fix the bitmap allocation method during > > the writing to be aligned by the stripe unit size. > > Thats not quite what the patch does, FWIW - it does two things: > - forces allocations to be stripe unit sized (not aligned) which is what i meant. > - and, er, removes some of the per-inode extsize hint code :) what is it? could my fix make any damage ? what sort of a damage ? > > /d1/rt/kernels/linux-2.6.17-UNI/fs/xfs/xfs_iomap.c > > linux-2.6.17-UNI/fs/xfs/xfs_iomap.c > > --- /d1/rt/kernels/linux-2.6.17-UNI/fs/xfs/xfs_iomap.c 2006-06-18 > > 01:49:35.000000000 +0000 > > +++ linux-2.6.17-UNI/fs/xfs/xfs_iomap.c 2006-12-26 14:11:02.000000000 +0000 > > @@ -441,8 +441,8 @@ > > if (unlikely(rt)) { > > if (!(extsz = ip->i_d.di_extsize)) > > extsz = mp->m_sb.sb_rextsize; > > - } else { > > - extsz = ip->i_d.di_extsize; > > + } else { > > + extsz = mp->m_dalign; // raz fix alignment to raid stripe unit > > } > > The real question is, why are your initial writes not being affected by > the code in xfs_iomap_eof_align_last_fsb which rounds requests to a > stripe unit boundary? I debugged xfs_iomap_write_delay: ip->i_d.di_extsize is zero and prealloc is zero. is it correct ? isn't it suppose stripe unit size in pages ? Also , xfs_iomap_eof_align_last_fsb has this line : if (io->io_flags & XFS_IOCORE_RT) ; > Provided you are writing sequentially, you should > be seeing xfs_iomap_eof_want_preallocate return true, then later doing > stripe unit alignment in xfs_iomap_eof_align_last_fsb (because prealloc > got set earlier) ... can you trace your requests through the routines > you've modified and find why this is _not_ happening? > > cheers. 
> > -- > Nathan > > -- Raz From owner-xfs@oss.sgi.com Sun Jan 28 13:51:22 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 28 Jan 2007 13:51:26 -0800 (PST) X-Spam-oss-Status: No, score=-1.8 required=5.0 tests=AWL,BAYES_00, J_CHICKENPOX_66 autolearn=no version=3.2.0-pre1-r497472 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0SLpJqw015508 for ; Sun, 28 Jan 2007 13:51:21 -0800 Received: from edge (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 55129AAC2BD; Mon, 29 Jan 2007 08:37:10 +1100 (EST) Subject: Re: [DISCUSS] xfs allocation bitmap method over linux raid From: Nathan Scott Reply-To: nscott@aconex.com To: "Raz Ben-Jehuda(caro)" Cc: xfs@oss.sgi.com In-Reply-To: <5d96567b0701280232w17e1a187r95d2c59711799b1a@mail.gmail.com> References: <5d96567b0701232234y2ff15762sbd1aaada5c3a0a0@mail.gmail.com> <1169678294.18017.200.camel@edge> <5d96567b0701280232w17e1a187r95d2c59711799b1a@mail.gmail.com> Content-Type: text/plain Organization: Aconex Date: Mon, 29 Jan 2007 08:49:56 +1100 Message-Id: <1170020997.18017.236.camel@edge> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-archive-position: 10471 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs Content-Length: 2105 Lines: 57 On Sun, 2007-01-28 at 12:32 +0200, Raz Ben-Jehuda(caro) wrote: > > OOC, which one? (would be nice to put an entry for your company > > on the http://oss.sgi.com/projects/xfs/users.html page). > > > dd if=/dev/zero of=/d1/xxx bs=1M count=1000 > will reveil extents of size modulo(stripe unit ) != 0 Does using direct IO change things (oflag=direct to dd iirc). > > - and, er, removes some of the per-inode extsize hint code :) > what is it? See the "extsize" command within xfs_io(8). > could my fix make any damage ? > what sort of a damage ? Not really "damage" (as in filesystem integrity), its more that it accidentally breaks existing functionality. > > The real question is, why are your initial writes not being affected by > > the code in xfs_iomap_eof_align_last_fsb which rounds requests to a > > stripe unit boundary? > > I debugged xfs_iomap_write_delay: > ip->i_d.di_extsize is zero and prealloc is zero. is it correct ? prealloc shouldn't be zero for writes that will extend the file size; but now that I think about it, I'm not sure how it could ever get set for a buffered write (delalloc), since by the time we come to do the actual allocation and writes to disk, the inode size will be beyond the allocation offset. Hmm, maybe the logic in there needs a rethink (any thoughts there, Dave/Lachlan?) > isn't it suppose stripe unit size in pages ? No, extsize is not and should not be set unless its explicitly been asked for (see the man page I refered to above). > Also , xfs_iomap_eof_align_last_fsb has this line : > if (io->io_flags & XFS_IOCORE_RT) Are you using the realtime subvolume? You didn't mention that before, so I guess you're not - in which case, the above line is not relevent in your case. > > Provided you are writing sequentially, you should > > be seeing xfs_iomap_eof_want_preallocate return true, then later doing > > stripe unit alignment in xfs_iomap_eof_align_last_fsb (because prealloc > > got set earlier) ... can you trace your requests through the routines > > you've modified and find why this is _not_ happening? cheers. 
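As an aside on the extsize hint Nathan refers to: it is a per-inode setting that an application has to request explicitly, typically on a newly created, still-empty file. A hedged userspace sketch using the fsxattr ioctls from the xfsprogs headers (the 1 MiB value, the function name and the minimal error handling are illustrative only; xfs_io's extsize command drives the same interface):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xfs/xfs_fs.h>		/* struct fsxattr, XFS_IOC_*, XFS_XFLAG_EXTSIZE */

/* Ask XFS to allocate this file in 1 MiB chunks (example value, in bytes;
 * it must be a multiple of the filesystem block size). */
static int set_extsize_hint(const char *path)
{
	struct fsxattr fsx;
	int fd = open(path, O_RDWR);

	if (fd < 0)
		return -1;
	if (ioctl(fd, XFS_IOC_FSGETXATTR, &fsx) < 0)
		goto fail;
	fsx.fsx_xflags |= XFS_XFLAG_EXTSIZE;
	fsx.fsx_extsize = 1024 * 1024;
	if (ioctl(fd, XFS_IOC_FSSETXATTR, &fsx) < 0)
		goto fail;
	close(fd);
	return 0;
fail:
	close(fd);
	return -1;
}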
-- Nathan From owner-xfs@oss.sgi.com Sun Jan 28 14:23:49 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 28 Jan 2007 14:23:53 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0SMNkqw020598 for ; Sun, 28 Jan 2007 14:23:48 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA23510; Mon, 29 Jan 2007 09:22:51 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0SMMo7Y107254865; Mon, 29 Jan 2007 09:22:50 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0SMMn4O107085977; Mon, 29 Jan 2007 09:22:49 +1100 (AEDT) Date: Mon, 29 Jan 2007 09:22:49 +1100 From: David Chinner To: Lachlan McIlroy Cc: David Chinner , xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: Fix sub-page zeroing for buffered writes into unwritten extents Message-ID: <20070128222249.GL33919298@melbourne.sgi.com> References: <20070123224704.GH33919298@melbourne.sgi.com> <45B78CD4.1060400@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45B78CD4.1060400@sgi.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10472 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1580 Lines: 50 On Wed, Jan 24, 2007 at 04:44:04PM +0000, Lachlan McIlroy wrote: > Dave, > > I'm trying to understand what the sequence of events is here. > > If we write to an unwritten extent then will __xfs_get_blocks() > be called with create=1 and flags=BMAPI_WRITE? Yup. > And calling > bhv_vop_bmap() with flags set to BMAPI_WRITE will cause xfs_iomap() > to set iomap_flags to IOMAP_NEW? Only if we allocate an extent in xfs_iomap: In xfs_iomap: 258 phase2: 259 switch (flags & (BMAPI_WRITE|BMAPI_ALLOCATE|BMAPI_UNWRITTEN)) { 260 case BMAPI_WRITE: 261 /* If we found an extent, return it */ 262 if (nimaps && 263 (imap.br_startblock != HOLESTARTBLOCK) && 264 (imap.br_startblock != DELAYSTARTBLOCK)) { 265 xfs_iomap_map_trace(XFS_IOMAP_WRITE_MAP, io, 266 offset, count, iomapp, &imap, flags); 267 break; 268 } We found an extent - an unwritten extent - which means we have a map and the startblock is a real number (i.e. not a hole or delalloc region). Hence we break here and never set the IOMAP_NEW flag which is correct because we didn't just do an allocation. > The combination of create=1 and > iomap_flags=IOMAP_NEW in __xfs_get_blocks() should result in calling > set_buffer_new(), right? Yes, it would, but unwritten extents are not new extents..... Cheers, Dave. 
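Reduced to a sketch, the buffer-state decision under discussion looks like the fragment below. This is not the actual __xfs_get_blocks() code, just an illustration of the rule Dave is stating, with the flag coming back from the xfs_iomap() switch quoted above.

#include <linux/buffer_head.h>

/*
 * Hedged sketch: only a mapping that xfs_iomap() flagged as a fresh
 * allocation (IOMAP_NEW) may be marked "new".  An existing unwritten
 * extent takes the "found an extent" break in the switch above, so
 * IOMAP_NEW stays clear and set_buffer_new() is never called for it.
 */
static void sketch_map_buffer(struct buffer_head *bh, int create,
			      int iomap_flags)
{
	set_buffer_mapped(bh);
	if (create && (iomap_flags & IOMAP_NEW)) {
		/* blocks allocated by this call: nothing valid on disk
		 * yet, so the generic write path must zero around the
		 * written bytes within the page */
		set_buffer_new(bh);
	}
	/* unwritten-extent case: space is allocated but reads as zeroes;
	 * it is not "new", which is why sub-page zeroing needs separate
	 * handling -- the point of the fix being reviewed */
}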
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Jan 28 15:54:06 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 28 Jan 2007 15:54:12 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00, J_CHICKENPOX_66 autolearn=no version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0SNs2qw031932 for ; Sun, 28 Jan 2007 15:54:05 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA25655; Mon, 29 Jan 2007 10:52:59 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0SNqv7Y106751320; Mon, 29 Jan 2007 10:52:57 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0SNqtVS107181650; Mon, 29 Jan 2007 10:52:55 +1100 (AEDT) Date: Mon, 29 Jan 2007 10:52:55 +1100 From: David Chinner To: "Raz Ben-Jehuda(caro)" Cc: nscott@aconex.com, linux-xfs@oss.sgi.com Subject: Re: [DISCUSS] xfs allocation bitmap method over linux raid Message-ID: <20070128235255.GS33919298@melbourne.sgi.com> References: <5d96567b0701232234y2ff15762sbd1aaada5c3a0a0@mail.gmail.com> <1169678294.18017.200.camel@edge> <5d96567b0701280232w17e1a187r95d2c59711799b1a@mail.gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <5d96567b0701280232w17e1a187r95d2c59711799b1a@mail.gmail.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 10473 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1119 Lines: 37 On Sun, Jan 28, 2007 at 12:32:23PM +0200, Raz Ben-Jehuda(caro) wrote: > first many thanks to your reply. > see bellow. > > On 1/25/07, Nathan Scott wrote: > >Hi Raz, > > > >On Wed, 2007-01-24 at 08:34 +0200, Raz Ben-Jehuda(caro) wrote: > >> David Hello. > >> I have looked up in LKML and hopefully you are the one to ask in > >> regard to xfs file system in Linux. > > > > >OOC, which one? (would be nice to put an entry for your company > >on the http://oss.sgi.com/projects/xfs/users.html page). > > > >> These servers demand high throughput from the storage. > >> We applied XFS file system on our machines. > >> > >> A video server reads a file in a sequential manner. So, if a > > > >Do you write the file sequentially? Buffered or direct writes? > does not matter. even command like: > dd if=/dev/zero of=/d1/xxx bs=1M count=1000 > will reveil extents of size modulo(stripe unit ) != 0 Did you make the filesystem with a stripe unit set properly? Can you post the output of 'xfs_info -n /path/to/mntpt'? Cheers, Dave. 
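Separately from the patch itself, the stripe geometry Dave is asking about is normally established when the filesystem is made (or via the sunit/swidth mount options), not by changing the allocator. A hedged example for a 4-data-disk RAID5 with a 64k chunk; the device and mount point are placeholders:

# su = RAID chunk size (stripe unit), sw = number of data disks
mkfs.xfs -d su=64k,sw=4 /dev/md0

# verify: sunit/swidth appear in the data section, in filesystem blocks
xfs_info /mnt/video

If mkfs was run without these options and the md geometry was not auto-detected, xfs_info will show sunit=0, no stripe alignment will be attempted at allocation time, and that alone could explain the unaligned extents being observed.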
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Mon Jan 29 04:56:50 2007
Received: with ECARTIS (v1.0.0; list xfs); Mon, 29 Jan 2007 04:56:57 -0800 (PST)
X-Spam-oss-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472
Received: from imr2.americas.sgi.com (imr2.americas.sgi.com [198.149.16.18]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0TCunqw000786 for ; Mon, 29 Jan 2007 04:56:50 -0800
Received: from [134.15.160.31] (vpn-emea-sw-emea-160-31.emea.sgi.com [134.15.160.31]) by imr2.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id l0TCPjnc80257197; Mon, 29 Jan 2007 04:25:47 -0800 (PST)
Message-ID: <45BDEED7.4040500@sgi.com>
Date: Mon, 29 Jan 2007 12:55:51 +0000
From: Lachlan McIlroy
Reply-To: lachlan@sgi.com
Organization: SGI
User-Agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.7.12) Gecko/20050920
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: David Chinner
CC: xfs-dev@sgi.com, xfs@oss.sgi.com
Subject: Re: Review: Fix sub-page zeroing for buffered writes into unwritten extents
References: <20070123224704.GH33919298@melbourne.sgi.com> <45B78CD4.1060400@sgi.com> <20070128222249.GL33919298@melbourne.sgi.com>
In-Reply-To: <20070128222249.GL33919298@melbourne.sgi.com>
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
X-archive-position: 10475
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: lachlan@sgi.com
Precedence: bulk
X-list: xfs
Content-Length: 1700
Lines: 56

David Chinner wrote:
> On Wed, Jan 24, 2007 at 04:44:04PM +0000, Lachlan McIlroy wrote:
>
>>Dave,
>>
>>I'm trying to understand what the sequence of events is here.
>>
>>If we write to an unwritten extent then will __xfs_get_blocks()
>>be called with create=1 and flags=BMAPI_WRITE?
>
>
> Yup.
>
>
>>And calling
>>bhv_vop_bmap() with flags set to BMAPI_WRITE will cause xfs_iomap()
>>to set iomap_flags to IOMAP_NEW?
>
>
> Only if we allocate an extent in xfs_iomap:
>
> In xfs_iomap:
>
> 258 phase2:
> 259 	switch (flags & (BMAPI_WRITE|BMAPI_ALLOCATE|BMAPI_UNWRITTEN)) {
> 260 	case BMAPI_WRITE:
> 261 		/* If we found an extent, return it */
> 262 		if (nimaps &&
> 263 		    (imap.br_startblock != HOLESTARTBLOCK) &&
> 264 		    (imap.br_startblock != DELAYSTARTBLOCK)) {
> 265 			xfs_iomap_map_trace(XFS_IOMAP_WRITE_MAP, io,
> 266 					offset, count, iomapp, &imap, flags);
> 267 			break;
> 268 		}
>
>
> We found an extent - an unwritten extent - which means we have a map
> and the startblock is a real number (i.e. not a hole or delalloc region).
> Hence we break here and never set the IOMAP_NEW flag which is correct
> because we didn't just do an allocation.

I must have skimmed over the break statement.
Your fix makes sense to me now.

>
>>The combination of create=1 and
>>iomap_flags=IOMAP_NEW in __xfs_get_blocks() should result in calling
>>set_buffer_new(), right?
>
>
> Yes, it would, but unwritten extents are not new extents.....
>
> Cheers,
>
> Dave.
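A minimal, self-contained sketch of the distinction settled in the thread above: a buffer is only marked "new" when the mapping reports a fresh allocation (IOMAP_NEW in the discussion), not merely because the caller asked to create blocks; a write landing in an existing unwritten extent keeps its unwritten state instead. The struct and helper names below are hypothetical stand-ins for illustration, not the actual __xfs_get_blocks()/xfs_iomap() code.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the real iomap flags and buffer-head state. */
#define SKETCH_IOMAP_NEW        0x1  /* extent was allocated by this mapping call */
#define SKETCH_IOMAP_UNWRITTEN  0x2  /* extent already exists but is unwritten    */

struct sketch_buffer {
	bool new_flag;        /* stands in for set_buffer_new()       */
	bool unwritten_flag;  /* stands in for set_buffer_unwritten() */
};

/*
 * Mirror of the decision described above: only a mapping that actually
 * did an allocation (IOMAP_NEW) makes the buffer "new".  A write into an
 * existing unwritten extent takes the other branch, because xfs_iomap()
 * breaks out of phase2 without setting IOMAP_NEW.
 */
static void sketch_map_buffer(struct sketch_buffer *bh, int create,
			      unsigned int iomap_flags)
{
	bh->new_flag = false;
	bh->unwritten_flag = false;

	if (create && (iomap_flags & SKETCH_IOMAP_NEW))
		bh->new_flag = true;
	else if (iomap_flags & SKETCH_IOMAP_UNWRITTEN)
		bh->unwritten_flag = true;
}

int main(void)
{
	struct sketch_buffer bh;

	/* Buffered write into an existing unwritten extent (the case above). */
	sketch_map_buffer(&bh, 1, SKETCH_IOMAP_UNWRITTEN);
	printf("unwritten extent: new=%d unwritten=%d\n",
	       bh.new_flag, bh.unwritten_flag);

	/* Write that allocated a brand new extent. */
	sketch_map_buffer(&bh, 1, SKETCH_IOMAP_NEW);
	printf("new allocation:   new=%d unwritten=%d\n",
	       bh.new_flag, bh.unwritten_flag);

	return 0;
}

In the unwritten case only the unwritten flag ends up set, which is why the sub-page zeroing path must not treat those blocks as freshly allocated.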
From owner-xfs@oss.sgi.com Mon Jan 29 08:38:55 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 29 Jan 2007 08:39:00 -0800 (PST) X-Spam-oss-Status: No, score=-0.9 required=5.0 tests=AWL,BAYES_50, J_CHICKENPOX_35,J_CHICKENPOX_54,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r497472 Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0TGcrqw000628 for ; Mon, 29 Jan 2007 08:38:54 -0800 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.13.1/8.13.1) with ESMTP id l0TGbvx0019282; Mon, 29 Jan 2007 11:37:58 -0500 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0TGbvGC014824; Mon, 29 Jan 2007 11:37:57 -0500 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0TGbtUX016757; Mon, 29 Jan 2007 11:37:56 -0500 Message-ID: <45BE22FD.1040508@sandeen.net> Date: Mon, 29 Jan 2007 10:38:21 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (X11/20061219) MIME-Version: 1.0 To: xfs@oss.sgi.com CC: rpjday@mindspring.com Subject: [PATCH] kill off unused xfs_mac, xfs_cap headers Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10478 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 16377 Lines: 518 Followup to Robert's initial noticing of the unused bits. compile-tested on 2.6, x86_64 --- Remove unused xfs_mac.h, xfs_cap.h headers Signed-off-by: Eric Sandeen xfs-linux/linux-2.4/xfs_ioctl.c | 2 xfs-linux/linux-2.4/xfs_iops.c | 2 xfs-linux/linux-2.4/xfs_ksyms.c | 2 xfs-linux/linux-2.4/xfs_lrw.c | 2 xfs-linux/linux-2.4/xfs_super.c | 2 xfs-linux/linux-2.6/xfs_ioctl.c | 2 xfs-linux/linux-2.6/xfs_iops.c | 2 xfs-linux/linux-2.6/xfs_ksyms.c | 2 xfs-linux/linux-2.6/xfs_lrw.c | 2 xfs-linux/linux-2.6/xfs_super.c | 2 xfs-linux/quota/xfs_dquot.c | 2 xfs-linux/quota/xfs_dquot_item.c | 2 xfs-linux/quota/xfs_qm.c | 2 xfs-linux/quota/xfs_qm_bhv.c | 2 xfs-linux/quota/xfs_qm_stats.c | 2 xfs-linux/quota/xfs_qm_syscalls.c | 2 xfs-linux/quota/xfs_trans_dquot.c | 2 xfs-linux/xfs_acl.c | 1 xfs-linux/xfs_dfrag.c | 1 xfs-linux/xfs_inode.c | 1 xfs-linux/xfs_iomap.c | 2 xfs-linux/xfs_rw.c | 1 xfs-linux/xfs_vnodeops.c | 1 xfs_cap.h | 70 ------------------------- xfs_mac.h | 106 -------------------------------------- Index: xfs-linux/linux-2.4/xfs_ioctl.c =================================================================== --- xfs-linux.orig/linux-2.4/xfs_ioctl.c +++ xfs-linux/linux-2.4/xfs_ioctl.c @@ -42,8 +42,6 @@ #include "xfs_itable.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_buf_item.h" #include "xfs_utils.h" Index: xfs-linux/linux-2.4/xfs_iops.c =================================================================== --- xfs-linux.orig/linux-2.4/xfs_iops.c +++ xfs-linux/linux-2.4/xfs_iops.c @@ -43,8 +43,6 @@ #include "xfs_itable.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_buf_item.h" #include "xfs_utils.h" Index: xfs-linux/linux-2.4/xfs_ksyms.c =================================================================== --- xfs-linux.orig/linux-2.4/xfs_ksyms.c +++ xfs-linux/linux-2.4/xfs_ksyms.c @@ -53,8 +53,6 @@ #include 
"xfs_dir2_node.h" #include "xfs_dir2_trace.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_attr_leaf.h" #include "xfs_inode_item.h" Index: xfs-linux/linux-2.4/xfs_lrw.c =================================================================== --- xfs-linux.orig/linux-2.4/xfs_lrw.c +++ xfs-linux/linux-2.4/xfs_lrw.c @@ -44,8 +44,6 @@ #include "xfs_rw.h" #include "xfs_refcache.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_inode_item.h" #include "xfs_buf_item.h" Index: xfs-linux/linux-2.4/xfs_super.c =================================================================== --- xfs-linux.orig/linux-2.4/xfs_super.c +++ xfs-linux/linux-2.4/xfs_super.c @@ -44,8 +44,6 @@ #include "xfs_itable.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_buf_item.h" #include "xfs_utils.h" Index: xfs-linux/linux-2.6/xfs_ioctl.c =================================================================== --- xfs-linux.orig/linux-2.6/xfs_ioctl.c +++ xfs-linux/linux-2.6/xfs_ioctl.c @@ -41,8 +41,6 @@ #include "xfs_error.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_bmap.h" #include "xfs_buf_item.h" Index: xfs-linux/linux-2.6/xfs_iops.c =================================================================== --- xfs-linux.orig/linux-2.6/xfs_iops.c +++ xfs-linux/linux-2.6/xfs_iops.c @@ -43,8 +43,6 @@ #include "xfs_itable.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_buf_item.h" #include "xfs_utils.h" Index: xfs-linux/linux-2.6/xfs_ksyms.c =================================================================== --- xfs-linux.orig/linux-2.6/xfs_ksyms.c +++ xfs-linux/linux-2.6/xfs_ksyms.c @@ -53,8 +53,6 @@ #include "xfs_dir2_node.h" #include "xfs_dir2_trace.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_attr_leaf.h" #include "xfs_inode_item.h" Index: xfs-linux/linux-2.6/xfs_lrw.c =================================================================== --- xfs-linux.orig/linux-2.6/xfs_lrw.c +++ xfs-linux/linux-2.6/xfs_lrw.c @@ -43,8 +43,6 @@ #include "xfs_itable.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_inode_item.h" #include "xfs_buf_item.h" Index: xfs-linux/linux-2.6/xfs_super.c =================================================================== --- xfs-linux.orig/linux-2.6/xfs_super.c +++ xfs-linux/linux-2.6/xfs_super.c @@ -43,8 +43,6 @@ #include "xfs_itable.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_buf_item.h" #include "xfs_utils.h" Index: xfs-linux/quota/xfs_dquot.c =================================================================== --- xfs-linux.orig/quota/xfs_dquot.c +++ xfs-linux/quota/xfs_dquot.c @@ -43,8 +43,6 @@ #include "xfs_itable.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_buf_item.h" #include "xfs_trans_space.h" Index: xfs-linux/quota/xfs_dquot_item.c =================================================================== --- xfs-linux.orig/quota/xfs_dquot_item.c +++ xfs-linux/quota/xfs_dquot_item.c @@ -43,8 +43,6 @@ #include "xfs_itable.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include 
"xfs_mac.h" #include "xfs_attr.h" #include "xfs_buf_item.h" #include "xfs_trans_priv.h" Index: xfs-linux/quota/xfs_qm.c =================================================================== --- xfs-linux.orig/quota/xfs_qm.c +++ xfs-linux/quota/xfs_qm.c @@ -44,8 +44,6 @@ #include "xfs_bmap.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_buf_item.h" #include "xfs_trans_space.h" Index: xfs-linux/quota/xfs_qm_bhv.c =================================================================== --- xfs-linux.orig/quota/xfs_qm_bhv.c +++ xfs-linux/quota/xfs_qm_bhv.c @@ -44,8 +44,6 @@ #include "xfs_error.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_buf_item.h" #include "xfs_qm.h" Index: xfs-linux/quota/xfs_qm_stats.c =================================================================== --- xfs-linux.orig/quota/xfs_qm_stats.c +++ xfs-linux/quota/xfs_qm_stats.c @@ -43,8 +43,6 @@ #include "xfs_error.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_buf_item.h" #include "xfs_qm.h" Index: xfs-linux/quota/xfs_qm_syscalls.c =================================================================== --- xfs-linux.orig/quota/xfs_qm_syscalls.c +++ xfs-linux/quota/xfs_qm_syscalls.c @@ -46,8 +46,6 @@ #include "xfs_error.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_buf_item.h" #include "xfs_utils.h" Index: xfs-linux/quota/xfs_trans_dquot.c =================================================================== --- xfs-linux.orig/quota/xfs_trans_dquot.c +++ xfs-linux/quota/xfs_trans_dquot.c @@ -43,8 +43,6 @@ #include "xfs_error.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_buf_item.h" #include "xfs_trans_priv.h" Index: xfs-linux/xfs_acl.c =================================================================== --- xfs-linux.orig/xfs_acl.c +++ xfs-linux/xfs_acl.c @@ -31,7 +31,6 @@ #include "xfs_inode.h" #include "xfs_btree.h" #include "xfs_acl.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include Index: xfs-linux/xfs_cap.h =================================================================== --- xfs-linux.orig/xfs_cap.h +++ /dev/null @@ -1,70 +0,0 @@ -/* - * Copyright (c) 2000-2002,2005 Silicon Graphics, Inc. - * All Rights Reserved. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it would be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. 
- * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write the Free Software Foundation, - * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA - */ -#ifndef __XFS_CAP_H__ -#define __XFS_CAP_H__ - -/* - * Capabilities - */ -typedef __uint64_t xfs_cap_value_t; - -typedef struct xfs_cap_set { - xfs_cap_value_t cap_effective; /* use in capability checks */ - xfs_cap_value_t cap_permitted; /* combined with file attrs */ - xfs_cap_value_t cap_inheritable;/* pass through exec */ -} xfs_cap_set_t; - -/* On-disk XFS extended attribute names */ -#define SGI_CAP_FILE "SGI_CAP_FILE" -#define SGI_CAP_FILE_SIZE (sizeof(SGI_CAP_FILE)-1) -#define SGI_CAP_LINUX "SGI_CAP_LINUX" -#define SGI_CAP_LINUX_SIZE (sizeof(SGI_CAP_LINUX)-1) - -/* - * For Linux, we take the bitfields directly from capability.h - * and no longer attempt to keep this attribute ondisk compatible - * with IRIX. Since this attribute is only set on executables, - * it just doesn't make much sense to try. We do use a different - * named attribute though, to avoid confusion. - */ - -#ifdef __KERNEL__ - -#ifdef CONFIG_FS_POSIX_CAP - -#include - -struct bhv_vnode; - -extern int xfs_cap_vhascap(struct bhv_vnode *); -extern int xfs_cap_vset(struct bhv_vnode *, void *, size_t); -extern int xfs_cap_vget(struct bhv_vnode *, void *, size_t); -extern int xfs_cap_vremove(struct bhv_vnode *); - -#define _CAP_EXISTS xfs_cap_vhascap - -#else -#define xfs_cap_vset(v,p,sz) (-EOPNOTSUPP) -#define xfs_cap_vget(v,p,sz) (-EOPNOTSUPP) -#define xfs_cap_vremove(v) (-EOPNOTSUPP) -#define _CAP_EXISTS (NULL) -#endif - -#endif /* __KERNEL__ */ - -#endif /* __XFS_CAP_H__ */ Index: xfs-linux/xfs_dfrag.c =================================================================== --- xfs-linux.orig/xfs_dfrag.c +++ xfs-linux/xfs_dfrag.c @@ -41,7 +41,6 @@ #include "xfs_itable.h" #include "xfs_dfrag.h" #include "xfs_error.h" -#include "xfs_mac.h" #include "xfs_rw.h" /* Index: xfs-linux/xfs_inode.c =================================================================== --- xfs-linux.orig/xfs_inode.c +++ xfs-linux/xfs_inode.c @@ -47,7 +47,6 @@ #include "xfs_utils.h" #include "xfs_dir2_trace.h" #include "xfs_quota.h" -#include "xfs_mac.h" #include "xfs_acl.h" Index: xfs-linux/xfs_iomap.c =================================================================== --- xfs-linux.orig/xfs_iomap.c +++ xfs-linux/xfs_iomap.c @@ -43,8 +43,6 @@ #include "xfs_itable.h" #include "xfs_rw.h" #include "xfs_acl.h" -#include "xfs_cap.h" -#include "xfs_mac.h" #include "xfs_attr.h" #include "xfs_buf_item.h" #include "xfs_trans_space.h" Index: xfs-linux/xfs_mac.h =================================================================== --- xfs-linux.orig/xfs_mac.h +++ /dev/null @@ -1,106 +0,0 @@ -/* - * Copyright (c) 2001-2002,2005 Silicon Graphics, Inc. - * All Rights Reserved. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it would be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. 
- * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write the Free Software Foundation, - * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA - */ -#ifndef __XFS_MAC_H__ -#define __XFS_MAC_H__ - -/* - * Mandatory Access Control - * - * Layout of a composite MAC label: - * ml_list contains the list of categories (MSEN) followed by the list of - * divisions (MINT). This is actually a header for the data structure which - * will have an ml_list with more than one element. - * - * ------------------------------- - * | ml_msen_type | ml_mint_type | - * ------------------------------- - * | ml_level | ml_grade | - * ------------------------------- - * | ml_catcount | - * ------------------------------- - * | ml_divcount | - * ------------------------------- - * | category 1 | - * | . . . | - * | category N | (where N = ml_catcount) - * ------------------------------- - * | division 1 | - * | . . . | - * | division M | (where M = ml_divcount) - * ------------------------------- - */ -#define XFS_MAC_MAX_SETS 250 -typedef struct xfs_mac_label { - __uint8_t ml_msen_type; /* MSEN label type */ - __uint8_t ml_mint_type; /* MINT label type */ - __uint8_t ml_level; /* Hierarchical level */ - __uint8_t ml_grade; /* Hierarchical grade */ - __uint16_t ml_catcount; /* Category count */ - __uint16_t ml_divcount; /* Division count */ - /* Category set, then Division set */ - __uint16_t ml_list[XFS_MAC_MAX_SETS]; -} xfs_mac_label_t; - -/* MSEN label type names. Choose an upper case ASCII character. */ -#define XFS_MSEN_ADMIN_LABEL 'A' /* Admin: low; Mon, 29 Jan 2007 13:50:59 -0800 Received: from edge (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 7AD0EAAC239; Tue, 30 Jan 2007 08:36:39 +1100 (EST) Subject: Re: [DISCUSS] xfs allocation bitmap method over linux raid From: Nathan Scott Reply-To: nscott@aconex.com To: "Raz Ben-Jehuda(caro)" Cc: xfs@oss.sgi.com In-Reply-To: <1170020997.18017.236.camel@edge> References: <5d96567b0701232234y2ff15762sbd1aaada5c3a0a0@mail.gmail.com> <1169678294.18017.200.camel@edge> <5d96567b0701280232w17e1a187r95d2c59711799b1a@mail.gmail.com> <1170020997.18017.236.camel@edge> Content-Type: text/plain Organization: Aconex Date: Tue, 30 Jan 2007 08:49:43 +1100 Message-Id: <1170107383.18017.245.camel@edge> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-archive-position: 10481 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs Content-Length: 1557 Lines: 36 On Mon, 2007-01-29 at 08:49 +1100, Nathan Scott wrote: > prealloc shouldn't be zero for writes that will extend the file size; > but now that I think about it, I'm not sure how it could ever get set > for a buffered write (delalloc), since by the time we come to do the > actual allocation and writes to disk, the inode size will be beyond > the allocation offset. Hmm, maybe the logic in there needs a rethink > (any thoughts there, Dave/Lachlan?) I had a closer look, and remember now how this works - I was looking in the wrong place entirely. For real (not delayed) allocations, the stripe alignment is performed within the allocator, so deep down in the xfs_bmapi -> xfs_bmap_alloc -> xfs_bmap_btalloc call path. In particular, see the big comment mid-way through xfs_bmap_btalloc.. 
 * If we are not low on available data blocks, and the
 * underlying logical volume manager is a stripe, and
 * the file offset is zero then try to allocate data
 * blocks on stripe unit boundary.
 * NOTE: ap->aeof is only set if the allocation length
 * is >= the stripe unit and the allocation offset is
 * at the end of file.

(the "file offset is zero" part seems misleading to me, since it is
not only aligning in that case).

So, the real answer to your "why isn't it aligning" question lies in
there - if you can instrument that code and figure out why you aren't
seeing allocation alignment adjustments inside there, you should be
99% of the way to understanding your problem.

cheers.

--
Nathan

From owner-xfs@oss.sgi.com Mon Jan 29 15:56:06 2007
Received: with ECARTIS (v1.0.0; list xfs); Mon, 29 Jan 2007 15:56:16 -0800 (PST)
X-Spam-oss-Status: No, score=-0.2 required=5.0 tests=AWL,BAYES_50, J_CHICKENPOX_45,RCVD_NUMERIC_HELO autolearn=no version=3.2.0-pre1-r497472
Received: from pem-exsmtp01.silverapp.local (pem-smtp01.silverapp.com [209.43.6.67] (may be forged)) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0TNtxqw030136 for ; Mon, 29 Jan 2007 15:56:02 -0800
Received: from pem-exback01.silverapp.local ([10.1.200.165]) by pem-exsmtp01.silverapp.local with Microsoft SMTPSVC(6.0.3790.1830); Mon, 29 Jan 2007 18:43:16 -0500
Received: from 68.51.66.16 ([68.51.66.16]) by pem-exback01.silverapp.local ([10.1.200.165]) via Exchange Front-End Server exchange.apparatus.net ([10.1.200.132]) with Microsoft Exchange Server HTTP-DAV ; Mon, 29 Jan 2007 23:43:14 +0000
Received: from tmolus by exchange.apparatus.net; 29 Jan 2007 18:42:51 -0500
Subject: xfs_repair leaves things un-repaired.
From: Andrew Jones
Reply-To: ajones@apparatus.net
To: xfs@oss.sgi.com
Content-Type: multipart/mixed; boundary="=-Cc97/zT+/7NvnDD/zFtM"
Organization: Apparatus
Date: Mon, 29 Jan 2007 18:41:36 -0500
Message-Id: <1170114096.12767.9.camel@tmolus.apparatus.net>
Mime-Version: 1.0
X-Mailer: Evolution 2.6.3
X-OriginalArrivalTime: 29 Jan 2007 23:43:16.0568 (UTC) FILETIME=[3F932D80:01C743FF]
X-archive-position: 10482
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: ajones@apparatus.net
Precedence: bulk
X-list: xfs
Content-Length: 131215
Lines: 1747

--=-Cc97/zT+/7NvnDD/zFtM
Content-Type: text/plain
Content-Transfer-Encoding: 7bit

I have a filesystem which I cannot repair with xfs_repair. Running
xfs_repair results in its finding and fixing the same errors, over and
over and over.
Whenever I attempt to manipulate certain directories, the filesystem
shuts itself down:

Jan 29 17:59:02 amnesiac kernel: [] xfs_btree_check_sblock+0x9c/0xab [xfs]
Jan 29 17:59:02 amnesiac kernel: [] xfs_alloc_lookup+0x134/0x35c [xfs]
Jan 29 17:59:02 amnesiac kernel: [] xfs_alloc_lookup+0x134/0x35c [xfs]
Jan 29 17:59:02 amnesiac kernel: [] xfs_free_ag_extent+0x48/0x5fd [xfs]
Jan 29 17:59:02 amnesiac kernel: [] xfs_free_extent+0xb7/0xd4 [xfs]
Jan 29 17:59:02 amnesiac kernel: [] xfs_bmap_finish+0xe6/0x167 [xfs]
Jan 29 17:59:02 amnesiac kernel: [] xfs_itruncate_finish+0x1af/0x2ff [xfs]
Jan 29 17:59:02 amnesiac kernel: [] xfs_inactive+0x254/0x92c [xfs]
Jan 29 17:59:02 amnesiac kernel: [] iput+0x3d/0x66
Jan 29 17:59:02 amnesiac kernel: [] xfs_remove+0x322/0x3a9 [xfs]
Jan 29 17:59:02 amnesiac kernel: [] xfs_validate_fields+0x1e/0x7d [xfs]
Jan 29 17:59:02 amnesiac kernel: [] xfs_vn_unlink+0x2f/0x3b [xfs]
Jan 29 17:59:02 amnesiac kernel: [] inotify_inode_is_dead+0x18/0x6c
Jan 29 17:59:02 amnesiac kernel: [] xfs_fs_clear_inode+0x6d/0xa3 [xfs]
Jan 29 17:59:02 amnesiac kernel: [] clear_inode+0xab/0xd8
Jan 29 17:59:02 amnesiac kernel: [] generic_delete_inode+0xbd/0x10f
Jan 29 17:59:02 amnesiac kernel: [] iput+0x64/0x66
Jan 29 17:59:02 amnesiac kernel: [] do_unlinkat+0xa7/0x113
Jan 29 17:59:02 amnesiac kernel: [] vfs_readdir+0x7d/0x8d
Jan 29 17:59:02 amnesiac kernel: [] filldir64+0x0/0xc3
Jan 29 17:59:02 amnesiac kernel: [] sys_getdents64+0x9b/0xa5
Jan 29 17:59:02 amnesiac kernel: [] sysenter_past_esp+0x56/0x79
Jan 29 17:59:02 amnesiac kernel: xfs_force_shutdown(dm-0,0x8) called from line 4267 of file fs/xfs/xfs_bmap.c. Return address = 0xf94e46f0
Jan 29 17:59:15 amnesiac kernel: xfs_force_shutdown(dm-0,0x1) called from line 424 of file fs/xfs/xfs_rw.c. Return address = 0xf94e46f0
Jan 29 17:59:15 amnesiac kernel: xfs_force_shutdown(dm-0,0x1) called from line 424 of file fs/xfs/xfs_rw.c. Return address = 0xf94e46f0

I think the second and third "xfs_force_shutdown" calls came after I
unmounted, remounted, and attempted to repeat the "rm" that had failed
with the first one, without an xfs_repair attempt in the interregnum.

I tried copying it from one filesystem to a new one, using tar. It
worked fine for a while, but then I had an "unplanned" shutdown due to
a failure in the RAID devices. Since then, the same problems have
arisen.

Is this a normal problem? Should I just give up and copy to a new
filesystem?

root@amnesiac#xfs_info /dev/vg0/home
meta-data=/dev/vg0/home          isize=256    agcount=65, agsize=7325792 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=468855808, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=4096   blocks=0, rtextents=0

root@amnesiac#uname -a
Linux amnesiac 2.6.18-3-686 #1 SMP Sun Dec 10 19:37:06 UTC 2006 i686 GNU/Linux

root@amnesiac#xfs_repair -V
xfs_repair version 2.8.18

The xfs_repair -v output is attached to this message.
--=-Cc97/zT+/7NvnDD/zFtM Content-Disposition: attachment; filename=xfs_repair.out Content-Type: text/plain; name=xfs_repair.out; charset=UTF-8 Content-Transfer-Encoding: base64 U2NyaXB0IHN0YXJ0ZWQgb24gTW9uIDI5IEphbiAyMDA3IDA2OjMzOjM5IFBNIEVTVA0KG10wO2Ft bmVzaWFjOnJvb3QHcm9vdEBhbW5lc2lhYyN4ZnNfcmVhcAgbW0sIG1tLcGFpciAHL2Rldi92ZzAv aG9tZSAtdg0KICAgICAgICAtIGNyZWF0aW5nIDQgd29ya2VyIHRocmVhZChzKQ0KUGhhc2UgMSAt IGZpbmQgYW5kIHZlcmlmeSBzdXBlcmJsb2NrLi4uDQogICAgICAgIC0gcmVwb3J0aW5nIHByb2dy ZXNzIGluIGludGVydmFscyBvZiAxNSBtaW51dGVzDQpQaGFzZSAyIC0gdXNpbmcgaW50ZXJuYWwg bG9nDQogICAgICAgIC0gemVybyBsb2cuLi4NClhGUzogdG90YWxseSB6ZXJvZWQgbG9nDQp6ZXJv X2xvZzogaGVhZCBibG9jayAwIHRhaWwgYmxvY2sgMA0KICAgICAgICAtIHNjYW4gZmlsZXN5c3Rl bSBmcmVlc3BhY2UgYW5kIGlub2RlIG1hcHMuLi4NCiAgICAgICAgLSAxODozNDowMDogc2Nhbm5p bmcgZmlsZXN5c3RlbSBmcmVlc3BhY2UgLSA2NSBvZiA2NSBhbGxvY2F0aW9uIGdyb3VwcyBkb25l DQogICAgICAgIC0gZm91bmQgcm9vdCBpbm9kZSBjaHVuaw0KUGhhc2UgMyAtIGZvciBlYWNoIEFH Li4uDQogICAgICAgIC0gc2NhbiBhbmQgY2xlYXIgYWdpIHVubGlua2VkIGxpc3RzLi4uDQogICAg ICAgIC0gMTg6MzQ6MDA6IHNjYW5uaW5nIGFnaSB1bmxpbmtlZCBsaXN0cyAtIDY1IG9mIDY1IGFs bG9jYXRpb24gZ3JvdXBzIGRvbmUNCiAgICAgICAgLSBwcm9jZXNzIGtub3duIGlub2RlcyBhbmQg cGVyZm9ybSBpbm9kZSBkaXNjb3ZlcnkuLi4NCiAgICAgICAgLSBhZ25vID0gMA0KICAgICAgICAt IGFnbm8gPSAyDQogICAgICAgIC0gYWdubyA9IDENCiAgICAgICAgLSBhZ25vID0gMw0KICAgICAg ICAtIGFnbm8gPSA0DQogICAgICAgIC0gYWdubyA9IDUNCiAgICAgICAgLSBhZ25vID0gNg0KICAg ICAgICAtIGFnbm8gPSA3DQogICAgICAgIC0gYWdubyA9IDgNCiAgICAgICAgLSBhZ25vID0gOQ0K ICAgICAgICAtIGFnbm8gPSAxMA0KICAgICAgICAtIGFnbm8gPSAxMQ0KICAgICAgICAtIGFnbm8g PSAxMg0KICAgICAgICAtIGFnbm8gPSAxMw0KICAgICAgICAtIGFnbm8gPSAxNA0KICAgICAgICAt IGFnbm8gPSAxNQ0KICAgICAgICAtIGFnbm8gPSAxNg0KICAgICAgICAtIGFnbm8gPSAxNw0KICAg ICAgICAtIGFnbm8gPSAxOA0KICAgICAgICAtIGFnbm8gPSAxOQ0KICAgICAgICAtIGFnbm8gPSAy MA0KICAgICAgICAtIGFnbm8gPSAyMQ0KICAgICAgICAtIGFnbm8gPSAyMg0KICAgICAgICAtIGFn bm8gPSAyMw0KICAgICAgICAtIGFnbm8gPSAyNA0KICAgICAgICAtIGFnbm8gPSAyNQ0KICAgICAg ICAtIGFnbm8gPSAyNg0KICAgICAgICAtIGFnbm8gPSAyNw0KICAgICAgICAtIGFnbm8gPSAyOA0K ICAgICAgICAtIGFnbm8gPSAyOQ0KICAgICAgICAtIGFnbm8gPSAzMA0KICAgICAgICAtIGFnbm8g PSAzMQ0KICAgICAgICAtIGFnbm8gPSAzMg0KICAgICAgICAtIGFnbm8gPSAzMw0KICAgICAgICAt IGFnbm8gPSAzNA0KICAgICAgICAtIGFnbm8gPSAzNQ0KICAgICAgICAtIGFnbm8gPSAzNg0KICAg ICAgICAtIGFnbm8gPSAzNw0KICAgICAgICAtIGFnbm8gPSAzOA0KICAgICAgICAtIGFnbm8gPSAz OQ0KICAgICAgICAtIGFnbm8gPSA0MA0KICAgICAgICAtIGFnbm8gPSA0MQ0KICAgICAgICAtIGFn bm8gPSA0Mg0KICAgICAgICAtIGFnbm8gPSA0Mw0KICAgICAgICAtIGFnbm8gPSA0NA0KICAgICAg ICAtIGFnbm8gPSA0NQ0KICAgICAgICAtIGFnbm8gPSA0Ng0KICAgICAgICAtIGFnbm8gPSA0Nw0K ICAgICAgICAtIGFnbm8gPSA0OA0KICAgICAgICAtIGFnbm8gPSA0OQ0KICAgICAgICAtIGFnbm8g PSA1MA0KICAgICAgICAtIGFnbm8gPSA1MQ0KICAgICAgICAtIGFnbm8gPSA1Mg0KICAgICAgICAt IGFnbm8gPSA1Mw0KICAgICAgICAtIGFnbm8gPSA1NA0KICAgICAgICAtIGFnbm8gPSA1NQ0KICAg ICAgICAtIGFnbm8gPSA1Ng0KICAgICAgICAtIGFnbm8gPSA1Nw0KICAgICAgICAtIGFnbm8gPSA1 OA0KICAgICAgICAtIGFnbm8gPSA1OQ0KICAgICAgICAtIGFnbm8gPSA2MA0KICAgICAgICAtIGFn bm8gPSA2MQ0KICAgICAgICAtIGFnbm8gPSA2Mg0KICAgICAgICAtIGFnbm8gPSA2Mw0KICAgICAg ICAtIGFnbm8gPSA2NA0KICAgICAgICAtIDE4OjM1OjAxOiBwcm9jZXNzIGtub3duIGlub2RlcyBh bmQgaW5vZGUgZGlzY292ZXJ5IC0gNDUwNjI0IG9mIDQ1MDYyNCBpbm9kZXMgZG9uZQ0KICAgICAg ICAtIHByb2Nlc3MgbmV3bHkgZGlzY292ZXJlZCBpbm9kZXMuLi4NCiAgICAgICAgLSAxODozNTow MTogcHJvY2VzcyBuZXdseSBkaXNjb3ZlcmVkIGlub2RlcyAtIDY1IG9mIDY1IGFsbG9jYXRpb24g Z3JvdXBzIGRvbmUNClBoYXNlIDQgLSBjaGVjayBmb3IgZHVwbGljYXRlIGJsb2Nrcy4uLg0KICAg ICAgICAtIHNldHRpbmcgdXAgZHVwbGljYXRlIGV4dGVudCBsaXN0Li4uDQogICAgICAgIC0gY2xl 
YXIgbG9zdCtmb3VuZCAoaWYgaXQgZXhpc3RzKSAuLi4NCiAgICAgICAgLSBjbGVhcmluZyBleGlz dGluZyAibG9zdCtmb3VuZCIgaW5vZGUNCiAgICAgICAgLSBkZWxldGluZyBleGlzdGluZyAibG9z dCtmb3VuZCIgZW50cnkNCiAgICAgICAgLSAxODozNTowMjogc2V0dGluZyB1cCBkdXBsaWNhdGUg ZXh0ZW50IGxpc3QgLSA2NSBvZiA2NSBhbGxvY2F0aW9uIGdyb3VwcyBkb25lDQogICAgICAgIC0g Y2hlY2sgZm9yIGlub2RlcyBjbGFpbWluZyBkdXBsaWNhdGUgYmxvY2tzLi4uDQogICAgICAgIC0g YWdubyA9IDANCiAgICAgICAgLSBhZ25vID0gMg0KICAgICAgICAtIGFnbm8gPSAxDQplbnRyeSAi Li4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAxMzQyMTc5MDIgcmVm ZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBh dCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMTM0MjE3OTAyDQplbnRy eSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAyODg1NTk0MjEg cmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRy eSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMjg4NTU5NDIxDQpl bnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAyODg1NTk1 NzYgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBl bnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMjg4NTU5NTc2 DQogICAgICAgIC0gYWdubyA9IDMNCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4g ZGlyZWN0b3J5IGlub2RlIDI5MTk3MDA4NSByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xl YXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkg Zm9yIGRpcmVjdG9yeSAyOTE5NzAwODUNCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIg aW4gZGlyZWN0b3J5IGlub2RlIDQxNTc4Njk5MSByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJ Y2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50 cnkgZm9yIGRpcmVjdG9yeSA0MTU3ODY5OTENCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQg MzIgaW4gZGlyZWN0b3J5IGlub2RlIDI3NjQ3MzQyIHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDEN CgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBl bnRyeSBmb3IgZGlyZWN0b3J5IDI3NjQ3MzQyDQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0 IDMyIGluIGRpcmVjdG9yeSBpbm9kZSA3NzMyMTQ4NSByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQx DQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4g ZW50cnkgZm9yIGRpcmVjdG9yeSA3NzMyMTQ4NQ0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNl dCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUgNzczNjczOTkgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0 MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4u IGVudHJ5IGZvciBkaXJlY3RvcnkgNzczNjczOTkNCiAgICAgICAgLSBhZ25vID0gNA0KICAgICAg ICAtIGFnbm8gPSA1DQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9y eSBpbm9kZSA2OTQ2OTE2NTcgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlu b2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBkaXJl Y3RvcnkgNjk0NjkxNjU3DQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVj dG9yeSBpbm9kZSAxNjI1OTEwODkgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5n IGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBk aXJlY3RvcnkgMTYyNTkxMDg5DQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRp cmVjdG9yeSBpbm9kZSAxODA2NjQ3MTMgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFy aW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZv ciBkaXJlY3RvcnkgMTgwNjY0NzEzDQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGlu IGRpcmVjdG9yeSBpbm9kZSA3NjE5MzczMDIgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNs ZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5 IGZvciBkaXJlY3RvcnkgNzYxOTM3MzAyDQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMy IGluIGRpcmVjdG9yeSBpbm9kZSA3NjE5MzczMTkgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0K CWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVu 
dHJ5IGZvciBkaXJlY3RvcnkgNzYxOTM3MzE5DQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0 IDMyIGluIGRpcmVjdG9yeSBpbm9kZSA1OTc2NTQzMTMgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0 MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4u IGVudHJ5IGZvciBkaXJlY3RvcnkgNTk3NjU0MzEzDQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zm c2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAyMDQyMzA4OTYgcmVmZXJlbmNlcyBmcmVlIGlub2Rl IDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5v IC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMjA0MjMwODk2DQplbnRyeSAiLi4iIGF0IGJsb2NrIDAg b2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSA1MTk4MzgzNzkgcmVmZXJlbmNlcyBmcmVlIGlu b2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4N Cm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgNTE5ODM4Mzc5DQogICAgICAgIC0gYWdubyA9IDYN CmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDIzOTY4 MDMxMyByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGlu IGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAyMzk2ODAz MTMNCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDc2 MzM3Mjg0NyByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVy IGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSA3NjMz NzI4NDcNCiAgICAgICAgLSBhZ25vID0gNw0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAz MiBpbiBkaXJlY3RvcnkgaW5vZGUgMjQ5Njk3MDcyIHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDEN CgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBl bnRyeSBmb3IgZGlyZWN0b3J5IDI0OTY5NzA3Mg0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNl dCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUgMTAwMjcxNDQyMCByZWZlcmVuY2VzIGZyZWUgaW5vZGUg MTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8g Li4gZW50cnkgZm9yIGRpcmVjdG9yeSAxMDAyNzE0NDIwDQogICAgICAgIC0gYWdubyA9IDgNCiAg ICAgICAgLSBhZ25vID0gOQ0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJl Y3RvcnkgaW5vZGUgODYwMTEzMTI0IHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDENCgljbGVhcmlu ZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBlbnRyeSBmb3Ig ZGlyZWN0b3J5IDg2MDExMzEyNA0KICAgICAgICAtIGFnbm8gPSAxMA0KZW50cnkgIi4uIiBhdCBi bG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUgODY2NTAwNDE5IHJlZmVyZW5jZXMg ZnJlZSBpbm9kZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0 IDMyLi4uDQpubyAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDg2NjUwMDQxOQ0KZW50cnkgIi4uIiBh dCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUgODY3MDUyNDI1IHJlZmVyZW5j ZXMgZnJlZSBpbm9kZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zm c2V0IDMyLi4uDQpubyAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDg2NzA1MjQyNQ0KZW50cnkgIi4u IiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUgMTQwMjQyMTY2NCByZWZl cmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0 IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAxNDAyNDIxNjY0DQogICAg ICAgIC0gYWdubyA9IDExDQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVj dG9yeSBpbm9kZSAxNTg0MzI3MDUyIHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDENCgljbGVhcmlu ZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBlbnRyeSBmb3Ig ZGlyZWN0b3J5IDE1ODQzMjcwNTINCiAgICAgICAgLSBhZ25vID0gMTINCiAgICAgICAgLSBhZ25v ID0gMTMNCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2Rl IDE3NjgwODYxMjcgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51 bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3Rvcnkg MTc2ODA4NjEyNw0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3Rvcnkg aW5vZGUgMTY4MTA3NTY1NCByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5v ZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVj 
dG9yeSAxNjgxMDc1NjU0DQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVj dG9yeSBpbm9kZSAxODQ0OTAxMTU0IHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDENCgljbGVhcmlu ZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBlbnRyeSBmb3Ig ZGlyZWN0b3J5IDE4NDQ5MDExNTQNCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4g ZGlyZWN0b3J5IGlub2RlIDE4NDQ5MDExNzggcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNs ZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5 IGZvciBkaXJlY3RvcnkgMTg0NDkwMTE3OA0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAz MiBpbiBkaXJlY3RvcnkgaW5vZGUgMTg0NDk1MzgwMSByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQx DQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4g ZW50cnkgZm9yIGRpcmVjdG9yeSAxODQ0OTUzODAxDQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zm c2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAxODUyNDAwNzQ5IHJlZmVyZW5jZXMgZnJlZSBpbm9k ZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpu byAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDE4NTI0MDA3NDkNCiAgICAgICAgLSBhZ25vID0gMTQN CmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDE4Nzkw NDgzMzYgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBp biBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMTg3OTA0 ODMzNg0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUg MTcyNzgxNjU3NCByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVt YmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAx NzI3ODE2NTc0DQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBp bm9kZSAxNDU5Mzg5MjEzIHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDENCgljbGVhcmluZyBpbm9k ZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBlbnRyeSBmb3IgZGlyZWN0 b3J5IDE0NTkzODkyMTMNCiAgICAgICAgLSBhZ25vID0gMTUNCmVudHJ5ICIuLiIgYXQgYmxvY2sg MCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDIwMTMyNjYxMDQgcmVmZXJlbmNlcyBmcmVl IGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIu Li4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMjAxMzI2NjEwNA0KICAgICAgICAtIGFnbm8g PSAxNg0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUg MjE3MDQ0OTUzMiByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVt YmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAy MTcwNDQ5NTMyDQogICAgICAgIC0gYWdubyA9IDE3DQogICAgICAgIC0gYWdubyA9IDE4DQplbnRy eSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAyMjgxNzA2OTM0 IHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50 cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDIyODE3MDY5MzQN CmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDIyODE3 MDY5NTcgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBp biBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMjI4MTcw Njk1Nw0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUg MjI4MTcwNzIxNyByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVt YmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAy MjgxNzA3MjE3DQogICAgICAgIC0gYWdubyA9IDE5DQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zm c2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAyNTUwMjIxMTk3IHJlZmVyZW5jZXMgZnJlZSBpbm9k ZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpu byAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDI1NTAyMjExOTcNCiAgICAgICAgLSBhZ25vID0gMjAN CmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDI2ODQz NTcxNzAgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBp biBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMjY4NDM1 
NzE3MA0KICAgICAgICAtIGFnbm8gPSAyMQ0KICAgICAgICAtIGFnbm8gPSAyMg0KZW50cnkgIi4u IiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUgMzAzNzAyMzY3MiByZWZl cmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0 IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAzMDM3MDIzNjcyDQplbnRy eSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAzMDUzMDQyMTQ2 IHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50 cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDMwNTMwNDIxNDYN CiAgICAgICAgLSBhZ25vID0gMjMNCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4g ZGlyZWN0b3J5IGlub2RlIDMwODcwMDc5ODYgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNs ZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5 IGZvciBkaXJlY3RvcnkgMzA4NzAwNzk4Ng0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAz MiBpbiBkaXJlY3RvcnkgaW5vZGUgMzA4NzM3MDkzMiByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQx DQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4g ZW50cnkgZm9yIGRpcmVjdG9yeSAzMDg3MzcwOTMyDQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zm c2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAzMDg3NjIxMzU2IHJlZmVyZW5jZXMgZnJlZSBpbm9k ZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpu byAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDMwODc2MjEzNTYNCmVudHJ5ICIuLiIgYXQgYmxvY2sg MCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDMwNTczMjIyOTYgcmVmZXJlbmNlcyBmcmVl IGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIu Li4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMzA1NzMyMjI5Ng0KICAgICAgICAtIGFnbm8g PSAyNA0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUg MzIyMTIyNjM0NyByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVt YmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAz MjIxMjI2MzQ3DQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBp bm9kZSAzMjIyMTcwMTg3IHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDENCgljbGVhcmluZyBpbm9k ZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBlbnRyeSBmb3IgZGlyZWN0 b3J5IDMyMjIxNzAxODcNCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4gZGlyZWN0 b3J5IGlub2RlIDMyMjIyODEzNzUgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5n IGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBk aXJlY3RvcnkgMzIyMjI4MTM3NQ0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBk aXJlY3RvcnkgaW5vZGUgMzIyMzUyMjgzMCByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xl YXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkg Zm9yIGRpcmVjdG9yeSAzMjIzNTIyODMwDQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMy IGluIGRpcmVjdG9yeSBpbm9kZSAzMjQyMzM0MDgyIHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDEN CgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBl bnRyeSBmb3IgZGlyZWN0b3J5IDMyNDIzMzQwODINCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZz ZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDMyNDIzNDE2NzYgcmVmZXJlbmNlcyBmcmVlIGlub2Rl IDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5v IC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMzI0MjM0MTY3Ng0KZW50cnkgIi4uIiBhdCBibG9jayAw IG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUgMzI0Mjg0ODkwMCByZWZlcmVuY2VzIGZyZWUg aW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4u Lg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAzMjQyODQ4OTAwDQplbnRyeSAiLi4iIGF0IGJs b2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAzMjQzMDE4NjM1IHJlZmVyZW5jZXMg ZnJlZSBpbm9kZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0 IDMyLi4uDQpubyAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDMyNDMwMTg2MzUNCmVudHJ5ICIuLiIg YXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDMyNDYxMTUwNTAgcmVmZXJl 
bmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBv ZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMzI0NjExNTA1MA0KZW50cnkg Ii4uIiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUgMzEwNTM0MzMzOCBy ZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5 IGF0IG9mZnNldCAzMi4uLg0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJl Y3RvcnkgaW5vZGUgMzI1MjUxOTYwOCByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJp bmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9y IGRpcmVjdG9yeSAzMjUyNTE5NjA4DQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGlu IGRpcmVjdG9yeSBpbm9kZSAzMjU1MTE5OTA0IHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDENCglj bGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBlbnRy eSBmb3IgZGlyZWN0b3J5IDMyNTUxMTk5MDQNCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQg MzIgaW4gZGlyZWN0b3J5IGlub2RlIDMyNTUxMTk5NDMgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0 MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4u IGVudHJ5IGZvciBkaXJlY3RvcnkgMzI1NTExOTk0Mw0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9m ZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUgMzI1NTExOTk2NCByZWZlcmVuY2VzIGZyZWUgaW5v ZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0K bm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAzMjU1MTE5OTY0DQplbnRyeSAiLi4iIGF0IGJsb2Nr IDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAzMjY5OTc0NDMyIHJlZmVyZW5jZXMgZnJl ZSBpbm9kZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMy Li4uDQpubyAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDMyNjk5NzQ0MzINCmVudHJ5ICIuLiIgYXQg YmxvY2sgMCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDMyNjk5NzQ0ODMgcmVmZXJlbmNl cyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZz ZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMzI2OTk3NDQ4Mw0KZW50cnkgIi4u IiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUgMzI3Mjc3NTcxNyByZWZl cmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0 IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAzMjcyNzc1NzE3DQplbnRy eSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAzMjcyNzc1NzIw IHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50 cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDMyNzI3NzU3MjAN CmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDMyNzQ1 OTQ4MjAgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBp biBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMzI3NDU5 NDgyMA0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAzMTA1MzQzMzM4DQplbnRyeSAiLi4iIGF0 IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAzMTA1MzYxMDY0IHJlZmVyZW5j ZXMgZnJlZSBpbm9kZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zm c2V0IDMyLi4uDQpubyAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDMxMDUzNjEwNjQNCmVudHJ5ICIu LiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDMxMDUzNjExMDggcmVm ZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBh dCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMzEwNTM2MTEwOA0KZW50 cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUgMzEwNTM3MzI4 NiByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVu dHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAzMTA1MzczMjg2 DQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAzMTA1 MzczMzA0IHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIg aW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDMxMDUz NzMzMDQNCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2Rl 
IDMxMDUzNzM2MTMgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51 bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3Rvcnkg MzEwNTM3MzYxMw0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3Rvcnkg aW5vZGUgMzEwNTM3NDY2MCByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5v ZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVj dG9yeSAzMTA1Mzc0NjYwDQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVj dG9yeSBpbm9kZSAzMTA1NTU4MDA5IHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDENCgljbGVhcmlu ZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBlbnRyeSBmb3Ig ZGlyZWN0b3J5IDMxMDU1NTgwMDkNCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4g ZGlyZWN0b3J5IGlub2RlIDMxMDYwMTIxMzYgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNs ZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5 IGZvciBkaXJlY3RvcnkgMzEwNjAxMjEzNg0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAz MiBpbiBkaXJlY3RvcnkgaW5vZGUgMzEwNjAxMjE1NCByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQx DQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4g ZW50cnkgZm9yIGRpcmVjdG9yeSAzMTA2MDEyMTU0DQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zm c2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAzMTA2MDkxMzM0IHJlZmVyZW5jZXMgZnJlZSBpbm9k ZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpu byAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDMxMDYwOTEzMzQNCmVudHJ5ICIuLiIgYXQgYmxvY2sg MCBvZmZzZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDMxMDYwOTE2MTAgcmVmZXJlbmNlcyBmcmVl IGlub2RlIDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIu Li4NCm5vIC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMzEwNjA5MTYxMA0KZW50cnkgIi4uIiBhdCBi bG9jayAwIG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUgMzEwNjA5MTYzNCByZWZlcmVuY2Vz IGZyZWUgaW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNl dCAzMi4uLg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAzMTA2MDkxNjM0DQplbnRyeSAiLi4i IGF0IGJsb2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAzMzMyMDEzMTYxIHJlZmVy ZW5jZXMgZnJlZSBpbm9kZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQg b2Zmc2V0IDMyLi4uDQpubyAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDMzMzIwMTMxNjENCiAgICAg ICAgLSBhZ25vID0gMjUNCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZzZXQgMzIgaW4gZGlyZWN0 b3J5IGlub2RlIDMzNTkyNjkzMjEgcmVmZXJlbmNlcyBmcmVlIGlub2RlIDE0MQ0KCWNsZWFyaW5n IGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5vIC4uIGVudHJ5IGZvciBk aXJlY3RvcnkgMzM1OTI2OTMyMQ0KZW50cnkgIi4uIiBhdCBibG9jayAwIG9mZnNldCAzMiBpbiBk aXJlY3RvcnkgaW5vZGUgMzM4MDU5NTM1MyByZWZlcmVuY2VzIGZyZWUgaW5vZGUgMTQxDQoJY2xl YXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4uLg0Kbm8gLi4gZW50cnkg Zm9yIGRpcmVjdG9yeSAzMzgwNTk1MzUzDQplbnRyeSAiLi4iIGF0IGJsb2NrIDAgb2Zmc2V0IDMy IGluIGRpcmVjdG9yeSBpbm9kZSAzMzgxMjU5NTg0IHJlZmVyZW5jZXMgZnJlZSBpbm9kZSAxNDEN CgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0IDMyLi4uDQpubyAuLiBl bnRyeSBmb3IgZGlyZWN0b3J5IDMzODEyNTk1ODQNCmVudHJ5ICIuLiIgYXQgYmxvY2sgMCBvZmZz ZXQgMzIgaW4gZGlyZWN0b3J5IGlub2RlIDMzODE0MjQ5NjMgcmVmZXJlbmNlcyBmcmVlIGlub2Rl IDE0MQ0KCWNsZWFyaW5nIGlub2RlIG51bWJlciBpbiBlbnRyeSBhdCBvZmZzZXQgMzIuLi4NCm5v IC4uIGVudHJ5IGZvciBkaXJlY3RvcnkgMzM4MTQyNDk2Mw0KZW50cnkgIi4uIiBhdCBibG9jayAw IG9mZnNldCAzMiBpbiBkaXJlY3RvcnkgaW5vZGUgMzQxNzMxODU2MCByZWZlcmVuY2VzIGZyZWUg aW5vZGUgMTQxDQoJY2xlYXJpbmcgaW5vZGUgbnVtYmVyIGluIGVudHJ5IGF0IG9mZnNldCAzMi4u Lg0Kbm8gLi4gZW50cnkgZm9yIGRpcmVjdG9yeSAzNDE3MzE4NTYwDQplbnRyeSAiLi4iIGF0IGJs b2NrIDAgb2Zmc2V0IDMyIGluIGRpcmVjdG9yeSBpbm9kZSAzNDIwMTg5NjgwIHJlZmVyZW5jZXMg ZnJlZSBpbm9kZSAxNDENCgljbGVhcmluZyBpbm9kZSBudW1iZXIgaW4gZW50cnkgYXQgb2Zmc2V0 IDMyLi4uDQpubyAuLiBlbnRyeSBmb3IgZGlyZWN0b3J5IDM0MjAxODk2ODANCmVudHJ5ICIuLiIg 
[base64-encoded MIME attachment: xfs_repair output (decoded summary). Phase 4 emits a long run of messages of the form
	entry ".." at block 0 offset 32 in directory inode <N> references free inode 141
	clearing inode number in entry at offset 32...
	no .. entry for directory <N>
across multiple allocation groups, ending with "18:35:03: check for inodes claiming duplicate blocks - 450624 of 450624 inodes done". Phase 5 rebuilds the AG headers and trees for all 65 allocation groups (done at 18:35:07) and resets the superblock. Phase 6 checks inode connectivity: it resets the realtime bitmap and summary inodes, ensures the existence of the lost+found directory, traverses the filesystem starting at / (65 of 65 allocation groups done at 18:35:55), rebuilds the affected directory inodes, and then reports many "disconnected dir inode <N>, moving to lost+found" and "disconnected inode <N>, moving to lost+found" lines. The attachment continues beyond this section.]
NjYsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMxMjE0OTk1 NTAsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMxMjE0OTk1 NTEsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMxMjE0OTk1 NTgsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMxMjE0OTk1 NjEsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMxMjE1NTEz OTUsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMxMjE1NTEz OTksIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMxMjE1NTE0 MTAsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMxMjE1NTE0 MTcsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMxMjE1NTE0 MzUsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMxMjE1NjY3 OTIsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMxMjE1Njg1 MDAsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDM3MTYy MDEsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTEy NjcsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTEy NzAsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTEy ODUsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTEz MDgsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTEz OTMsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTE0 MDksIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTQ4 NzcsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTQ4 OTAsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTQ5 MDQsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTQ5 MTAsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTQ5 ODMsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTQ5 OTEsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTUw MDEsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTUw MTQsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTUw MTUsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTUw MTcsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMTUw MzUsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMjAx MzgsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMjAx NDMsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMjAx NzEsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMjAx OTEsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMjAy ODYsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMjAy ODcsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMDQyMjAz MzYsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMjEyMjYz MTIsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMjEyMjYz MzksIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMjEyMjYz NDQsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMjEyMjYz NDcsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMjIxNTMw NDUsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMjIxNzAx ODcsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMjIxNzAy MjEsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMjIyODEz NzUsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyMjM1MjI4 MzAsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDIzMzQw 
ODEsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDIzMzQw ODIsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDIzMzQx MzksIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDIzMzQ1 MjUsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDIzMzU1 MjksIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDIzMzU1 NzgsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDIzMzU1 ODMsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDIzNDE2 NjcsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDIzNDE2 NzYsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDI1NDg4 NDgsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDI4NDQ2 MTQsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDI4NDg5 MDAsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDMwMTg2 MzUsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDYxMTUw NTAsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNDYxMTUw NjUsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNTI1MTk2 MDgsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNTUwNDI5 NDUsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNTUwOTA1 MTMsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNTUwOTA1 MTYsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNTUxMDAy NDEsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNTUxMTk5 MDQsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNTUxMTk5 NDMsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNTUxMTk5 NjQsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNjQ4MDAw NDcsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNjk5NzQ0 MzIsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNjk5NzQ0 NDcsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNjk5NzQ0 ODMsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNzI3NzU3 MDEsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNzI3NzU3 MTcsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNzI3NzU3 MjAsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMyNzQ1OTQ4 MjAsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgZGlyIGlub2RlIDMzMzIwMTMx NjEsIG1vdmluZyB0byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgaW5vZGUgMzM1NTQ0NDgzMiwg bW92aW5nIHRvIGxvc3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBpbm9kZSAzMzU1NDQ0ODMzLCBtb3Zp bmcgdG8gbG9zdCtmb3VuZA0KZGlzY29ubmVjdGVkIGlub2RlIDMzNTU0NDQ4MzQsIG1vdmluZyB0 byBsb3N0K2ZvdW5kDQpkaXNjb25uZWN0ZWQgaW5vZGUgMzM1NTQ0NDgzNSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzM1NTQ0NDg0MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzM1NTU2OTI3MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzM1OTI2OTMyMSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzM4MDU4ODg4MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzM4MDU5NTMzNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzM4MDU5NTM1MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzM4MDU5NzA2OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzM4MDYwNDcwNywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzM4MTIyMTAwOSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzM4MTI1OTU4NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzM4MTI3NDAzOSwgbW92aW5nIHRvIGxv 
c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzM4MTQyNDk2MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQxNjg5NzkxMywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQxNjk1NzY0MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQxNjk4NDMwNywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQxNjk4OTU2NywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQxNzMxODU2MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQyMDE4OTY2NSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQyMDE4OTY4MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQyMTU3MDE0NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1MzAzNDI3MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1MzU0NTYxMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1MzU0NTY4OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1MzY3NjkyNywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1MzY3NjkyOSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1MzY3Njk0MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1Mzc3OTQ5NywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1Mzc3OTUwMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1Mzc3OTUzMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1MzgyMDY4MSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1Mzk1Njg2NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDYyMTUyMCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDYyMTUyOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDYyMTUzMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDYyMTUzNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDc2NjQwNiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDc3NjI4MSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDc3NjI4NywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDc3NjMyMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDc3NjMzNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDgxMzY3MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDgxMzY5NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDgxMzcwOSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDkxMzMxMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDkyMTAzNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDkzMTY4OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NDkzMTcwOSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTA1MzIzNiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTA1MzI0MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTA1MzI0OSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTA1MzI1OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTA1MzI2MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTA1MzI2MSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTM2ODA4MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTM2ODA5MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTM2ODEwNSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTM3NDg0MSwgbW92aW5nIHRvIGxv 
c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTM3NDg1MSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTQyNTY4MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTQyNTY4NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NTQyNTY5MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NjczMjEzMCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NzY4MjkyOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NzY4MjkzMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NzY4Mjk0MSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NzY4Mjk0NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NzY4Mjk0NywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NzY4Mjk1OSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NzY4MzA0MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NzY4MzIwOSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NzY4MzIxMCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NzY4MzIxMSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NzY5NTEwOSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1NzY5NTExNywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1ODA2ODYzNSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1ODA2ODY3NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1ODA2ODY4NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1ODA2ODcwMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1ODA2ODcyMywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ1ODA2ODcyNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjY0NTA3NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjY0NTA4MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjY0NTA5MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjY0NTExMCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjY0NTExNSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjgzNzIwMSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2Mjg4MTMwOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2Mjg4Njc0NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjkxODAyMSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjkxODAyNSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjkxODA1NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjkyMjg0NSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjkyMjk1MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjkyMjk2NSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjkyMjk2NywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjkyMjk3MSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjkyNDkyOSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjkyNDkzNSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjkyNDkzNywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MjkyNDk0OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MzEwMzE4MSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MzEwMzE4NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MzEwMzE5NywgbW92aW5nIHRvIGxv 
c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2MzEwMzIwMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjAwOSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjQxMCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjQyOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjQzNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjQzNiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjQzOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjQ0MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjQ0MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjQ0NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjQ0OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjQ1MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjQ1NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjQ1OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjQ3NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUxNjU3MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDUzMDUxMCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDgxNjU5OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDgxNjYwMSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDgxNjYwNiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDgxNjkzNSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDgxNjkzNiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDgxNjk1MSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDgxNjk1MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDgyNjc2NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ2NDgyNjk2MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ3MjQyODc3MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ3MjY0MTU4NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ3MjY0MTYwMSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ3MjY0MTYwOSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ4OTY2MTY2NSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ4OTY2MTY2OSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ4OTY2MTY3MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ4OTY2MTY3NSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ4OTY2MTY3OSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ4OTY2MTY5OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ5MDYxNzM5MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ5MTM2MTk5MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ5MTQ4NTE3NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ5NTA0NjEwOSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ5NTA0NjEyMywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ5NTA1NTIwNiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ5NTA1NTIyMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ5NTI1NzgwOSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ5NTM0Mjg2NywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzQ5NjQyMTU3MiwgbW92aW5nIHRvIGxv 
c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUwODMyMDQ5MSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUxMTM0MTQ1MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUxMjk2NTU4MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUxMzkxMTQxOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUyMTk2NzMzOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUyMjM5NDkxMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUyMjM5NDkyOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUyMjQzNjIwMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUyMjQzNjIwNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUyMjU3MTA4NSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUyNDY0MjcxNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUyNDY3MzA4MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUyNDY4NzQ1NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUyNDY4NzQ5MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUyNDY4NzUyOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUzOTk3Nzc0NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUzOTk3Nzc1MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzUzOTk3Nzc1NSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0MTg0NTMxNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0MTg4MDY5NywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0MjU1NjY3MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0MjU1NjY3NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0MjU1NjY4MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0MjU5MTI0NSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0MjU5MTI3OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0MjU5MTI4MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0Mjc4NDYwMSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0Mjc4NDYwNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0Mjc4NDYxMywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0Mjc5ODE0MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0NDc4NjQzOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU0NDc4NjQ0MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzU2NzE2NjQyNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzYyMzg3OTMzOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzYyMzg3OTM0NywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzYyNTM3OTAyMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzYyOTc2ODA1MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzYyOTc2ODA2MSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzYyOTc2OTAwNywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzYyOTc3MDA2OSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzYyOTk2ODI1NSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzY0MTE2Nzc5NSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzY2MjAzMTI0MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzY2MzQ3NzIxNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzc1ODA5NzU4MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzc1ODA5NzU5MywgbW92aW5nIHRvIGxv 
c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzc1ODA5NzYwNywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzc1ODA5NzYxNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzc1OTQ0MzAyNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzc2NDY0OTIyOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzc2NDY1NTA0MSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzc2NTEyNTYzNiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzc4MDI4MTA5NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzc4NTUyNDY0MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzc4NTUyNDY5NSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzg5MjMxNDUwOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzg5MjMxNDUxMCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzg5MjMxNDUxNCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzg5MjMxNDU0OSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzkwNDg4NjcwNSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzkwOTg2MTM3OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzkxMDE5MDcyMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzk2NTY5OTMzMSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzk2NTY5OTMzMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzk2NzQ0NzU2OSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgMzk2ODgzOTM4MywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDAwOTUyNjE0NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDAwOTUyNjE4MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDAyNzM1NDg5NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDAzNDEwMjUwMSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDAzNDEwMzc5NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDAzNDIwMzU1NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDAzNDI2OTMyMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDA0NTU0MTA2NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDA0OTY5Mjg0NywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyMTY1NTg0NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyMjIyOTg3NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyMjY2MDA4MiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyMjY2MDA5NSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyMjY2MDEwNSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyMjY2MDEyMCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyMzg5MTIyNywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyMzkwODc3MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyMzkwODc3NywgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyMzkwODc4MSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyMzk0NTIzMiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyNDExMzMyNiwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyNDE0MDM4NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyNDE0MDQzMCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyNDE0MDQzOCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDEyNTU5OTU0MCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDE2MDc0OTk3MSwgbW92aW5nIHRvIGxv 
c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDE2MjYwNjU1OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDE2NzA2MzUyOSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDE2Nzc2NDgyOSwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDE2Nzk0Njk1NCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDE3NzUxNDY0OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDE5Mjc1MDc4OCwgbW92aW5nIHRvIGxv c3QrZm91bmQNCmRpc2Nvbm5lY3RlZCBkaXIgaW5vZGUgNDE5Mjk3NTY3NiwgbW92aW5nIHRvIGxv c3QrZm91bmQNClBoYXNlIDcgLSB2ZXJpZnkgYW5kIGNvcnJlY3QgbGluayBjb3VudHMuLi4NCmNh Y2hlX3B1cmdlOiBzaGFrZSBvbiBjYWNoZSAweDgwZDM4ODAgbGVmdCA1IG5vZGVzIT8NCiAgICAg ICAgLSBhZ25vID0gMA0KICAgICAgICAtIGFnbm8gPSAxDQogICAgICAgIC0gYWdubyA9IDINCiAg ICAgICAgLSBhZ25vID0gMw0KICAgICAgICAtIGFnbm8gPSA0DQogICAgICAgIC0gYWdubyA9IDUN CiAgICAgICAgLSBhZ25vID0gNg0KICAgICAgICAtIGFnbm8gPSA3DQogICAgICAgIC0gYWdubyA9 IDgNCiAgICAgICAgLSBhZ25vID0gOQ0KICAgICAgICAtIGFnbm8gPSAxMA0KICAgICAgICAtIGFn bm8gPSAxMQ0KICAgICAgICAtIGFnbm8gPSAxMg0KICAgICAgICAtIGFnbm8gPSAxMw0KICAgICAg ICAtIGFnbm8gPSAxNA0KICAgICAgICAtIGFnbm8gPSAxNQ0KICAgICAgICAtIGFnbm8gPSAxNg0K ICAgICAgICAtIGFnbm8gPSAxNw0KICAgICAgICAtIGFnbm8gPSAxOA0KICAgICAgICAtIGFnbm8g PSAxOQ0KICAgICAgICAtIGFnbm8gPSAyMA0KICAgICAgICAtIGFnbm8gPSAyMQ0KICAgICAgICAt IGFnbm8gPSAyMg0KICAgICAgICAtIGFnbm8gPSAyMw0KICAgICAgICAtIGFnbm8gPSAyNA0KICAg ICAgICAtIGFnbm8gPSAyNQ0KICAgICAgICAtIGFnbm8gPSAyNg0KICAgICAgICAtIGFnbm8gPSAy Nw0KICAgICAgICAtIGFnbm8gPSAyOA0KICAgICAgICAtIGFnbm8gPSAyOQ0KICAgICAgICAtIGFn bm8gPSAzMA0KICAgICAgICAtIGFnbm8gPSAzMQ0KICAgICAgICAtIGFnbm8gPSAzMg0KICAgICAg ICAtIGFnbm8gPSAzMw0KICAgICAgICAtIGFnbm8gPSAzNA0KICAgICAgICAtIGFnbm8gPSAzNQ0K ICAgICAgICAtIGFnbm8gPSAzNg0KICAgICAgICAtIGFnbm8gPSAzNw0KICAgICAgICAtIGFnbm8g PSAzOA0KICAgICAgICAtIGFnbm8gPSAzOQ0KICAgICAgICAtIGFnbm8gPSA0MA0KICAgICAgICAt IGFnbm8gPSA0MQ0KICAgICAgICAtIGFnbm8gPSA0Mg0KICAgICAgICAtIGFnbm8gPSA0Mw0KICAg ICAgICAtIGFnbm8gPSA0NA0KICAgICAgICAtIGFnbm8gPSA0NQ0KICAgICAgICAtIGFnbm8gPSA0 Ng0KICAgICAgICAtIGFnbm8gPSA0Nw0KICAgICAgICAtIGFnbm8gPSA0OA0KICAgICAgICAtIGFn bm8gPSA0OQ0KICAgICAgICAtIGFnbm8gPSA1MA0KICAgICAgICAtIGFnbm8gPSA1MQ0KICAgICAg ICAtIGFnbm8gPSA1Mg0KICAgICAgICAtIGFnbm8gPSA1Mw0KICAgICAgICAtIGFnbm8gPSA1NA0K ICAgICAgICAtIGFnbm8gPSA1NQ0KICAgICAgICAtIGFnbm8gPSA1Ng0KICAgICAgICAtIGFnbm8g PSA1Nw0KICAgICAgICAtIGFnbm8gPSA1OA0KICAgICAgICAtIGFnbm8gPSA1OQ0KICAgICAgICAt IGFnbm8gPSA2MA0KICAgICAgICAtIGFnbm8gPSA2MQ0KICAgICAgICAtIGFnbm8gPSA2Mg0KICAg ICAgICAtIGFnbm8gPSA2Mw0KICAgICAgICAtIGFnbm8gPSA2NA0KICAgICAgICAtIDE4OjM2OjM4 OiB2ZXJpZnkgYW5kIGNvcnJlY3QgbGluayBjb3VudHMgLSA0NTA2MjQgb2YgNDUwNjI0IGlub2Rl cyBkb25lDQpjYWNoZV9wdXJnZTogc2hha2Ugb24gY2FjaGUgMHg4MGQzODgwIGxlZnQgNSBub2Rl cyE/DQpjYWNoZV9wdXJnZTogc2hha2Ugb24gY2FjaGUgMHg4MGQzODgwIGxlZnQgNSBub2RlcyE/ DQoNCiAgICAgICAgWEZTX1JFUEFJUiBTdW1tYXJ5ICAgIE1vbiBKYW4gMjkgMTg6MzY6MzggMjAw Nw0KDQpQaGFzZQkJU3RhcnQJCUVuZAkJRHVyYXRpb24NClBoYXNlIDE6CTAxLzI5IDE4OjMzOjQ5 CTAxLzI5IDE4OjMzOjQ5CQ0KUGhhc2UgMjoJMDEvMjkgMTg6MzM6NDkJMDEvMjkgMTg6MzQ6MDAJ MTEgc2Vjb25kcw0KUGhhc2UgMzoJMDEvMjkgMTg6MzQ6MDAJMDEvMjkgMTg6MzU6MDEJMSBtaW51 dGUsIDEgc2Vjb25kDQpQaGFzZSA0OgkwMS8yOSAxODozNTowMQkwMS8yOSAxODozNTowMwkyIHNl Y29uZHMNClBoYXNlIDU6CTAxLzI5IDE4OjM1OjAzCTAxLzI5IDE4OjM1OjA3CTQgc2Vjb25kcw0K UGhhc2UgNjoJMDEvMjkgMTg6MzU6MDcJMDEvMjkgMTg6MzY6MDQJNTcgc2Vjb25kcw0KUGhhc2Ug NzoJMDEvMjkgMTg6MzY6MDQJMDEvMjkgMTg6MzY6MzgJMzQgc2Vjb25kcw0KDQpUb3RhbCBydW4g dGltZTogMiBtaW51dGVzLCA0OSBzZWNvbmRzDQpkb25lDQobXTA7YW1uZXNpYWM6cm9vdAdyb290 QGFtbmVzaWFjI15ECAhleGl0DQoNClNjcmlwdCBkb25lIG9uIE1vbiAyOSBKYW4gMjAwNyAwNjoz 
Njo0MiBQTSBFU1QNCm== --=-Cc97/zT+/7NvnDD/zFtM-- From owner-xfs@oss.sgi.com Mon Jan 29 16:12:07 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 29 Jan 2007 16:12:10 -0800 (PST) X-Spam-oss-Status: No, score=0.4 required=5.0 tests=AWL,BAYES_50, J_CHICKENPOX_23,J_CHICKENPOX_45 autolearn=no version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0U0C3qw031856 for ; Mon, 29 Jan 2007 16:12:05 -0800 Received: from pcbnaujok (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA00558; Tue, 30 Jan 2007 11:10:56 +1100 Message-Id: <200701300010.LAA00558@larry.melbourne.sgi.com> From: "Barry Naujok" To: , Subject: RE: xfs_repair leaves things un-repaired. Date: Tue, 30 Jan 2007 11:14:58 +1100 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Office Outlook, Build 11.0.6353 Thread-Index: AcdEASpg0FWU0AdPSVmpW/2Tyke5XwAAXCFQ X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3028 In-Reply-To: <1170114096.12767.9.camel@tmolus.apparatus.net> X-archive-position: 10483 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 4836 Lines: 113 Hi Andrew, > -----Original Message----- > From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com] > On Behalf Of Andrew Jones > Sent: Tuesday, 30 January 2007 10:42 AM > To: xfs@oss.sgi.com > Subject: xfs_repair leaves things un-repaired. > > I have a filesystem which I cannot repair with xfs_repair. Running > xfs_repair results in its finding and fixing the same errors, over and > over and over. 
Whenever I attempt to manipulate certain directories, > the filesystem shuts itself down: > > Jan 29 17:59:02 amnesiac kernel: [] xfs_btree_check_sblock > +0x9c/0xab [xfs] > Jan 29 17:59:02 amnesiac kernel: [] xfs_alloc_lookup > +0x134/0x35c [xfs] > Jan 29 17:59:02 amnesiac kernel: [] xfs_alloc_lookup > +0x134/0x35c [xfs] > Jan 29 17:59:02 amnesiac kernel: [] xfs_free_ag_extent > +0x48/0x5fd [xfs] > Jan 29 17:59:02 amnesiac kernel: [] > xfs_free_extent+0xb7/0xd4 > [xfs] > Jan 29 17:59:02 amnesiac kernel: [] xfs_bmap_finish > +0xe6/0x167 [xfs] > Jan 29 17:59:02 amnesiac kernel: [] xfs_itruncate_finish > +0x1af/0x2ff [xfs] > Jan 29 17:59:02 amnesiac kernel: [] > xfs_inactive+0x254/0x92c > [xfs] > Jan 29 17:59:02 amnesiac kernel: [] iput+0x3d/0x66 > Jan 29 17:59:02 amnesiac kernel: [] xfs_remove+0x322/0x3a9 > [xfs] > Jan 29 17:59:02 amnesiac kernel: [] xfs_validate_fields > +0x1e/0x7d [xfs] > Jan 29 17:59:02 amnesiac kernel: [] xfs_vn_unlink+0x2f/0x3b > [xfs] > Jan 29 17:59:02 amnesiac kernel: [] inotify_inode_is_dead > +0x18/0x6c > Jan 29 17:59:02 amnesiac kernel: [] xfs_fs_clear_inode > +0x6d/0xa3 [xfs] > Jan 29 17:59:02 amnesiac kernel: [] clear_inode+0xab/0xd8 > Jan 29 17:59:02 amnesiac kernel: [] generic_delete_inode > +0xbd/0x10f > Jan 29 17:59:02 amnesiac kernel: [] iput+0x64/0x66 > Jan 29 17:59:02 amnesiac kernel: [] do_unlinkat+0xa7/0x113 > Jan 29 17:59:02 amnesiac kernel: [] vfs_readdir+0x7d/0x8d > Jan 29 17:59:02 amnesiac kernel: [] filldir64+0x0/0xc3 > Jan 29 17:59:02 amnesiac kernel: [] > sys_getdents64+0x9b/0xa5 > Jan 29 17:59:02 amnesiac kernel: [] sysenter_past_esp > +0x56/0x79 > Jan 29 17:59:02 amnesiac kernel: xfs_force_shutdown(dm-0,0x8) called > from line 4267 of file fs/xfs/xfs_bmap.c. Return address = 0xf94e46f0 > Jan 29 17:59:15 amnesiac kernel: xfs_force_shutdown(dm-0,0x1) called > from line 424 of file fs/xfs/xfs_rw.c. Return address = 0xf94e46f0 > Jan 29 17:59:15 amnesiac kernel: xfs_force_shutdown(dm-0,0x1) called > from line 424 of file fs/xfs/xfs_rw.c. Return address = 0xf94e46f0 > > I think the second and third "xfs_force_shutdown" calls came after I > unmounted, remounted, and attempted to repeat the "rm" that had failed > with the first one, without an xfs_repair attempt in the interregnum. > > I tried copying it from one filesystem to a new one, using tar. It > worked fine for a while, but then I had an "unplanned" > shutdown due to a > failure in the RAID devices. Since then, the same problems > have arisen. > > Is this a normal problem? Should I just give up and copy to a new > filesystem? The xfs_repair output is valid. All the inodes that are reporting errors are orphaned inodes that were moved into lost+found. At the start of phase 4, the lost+found directory is deleted which causes all the inodes in lost+found to be re-orphaned. The current solution to this problem is to rename lost+found after an xfs_repair run and then unmount and try xfs_repair again. Regarding the shutdown, that is not normal and I personally don't know what the problem is from the trace. If it's a corrupt lost+found that xfs_repair is generating (I gather you are rm'ing lost+found), the second xfs_repair run after a rename should identify the problem with the directory. You can also try running xfs_check on the device as it may pick up something xfs_repair is missing. Regards, Barry. 
> root@amnesiac#xfs_info /dev/vg0/home > meta-data=/dev/vg0/home isize=256 agcount=65, > agsize=7325792 > blks > = sectsz=512 attr=0 > data = bsize=4096 blocks=468855808, > imaxpct=25 > = sunit=0 swidth=0 blks, > unwritten=1 > naming =version 2 bsize=4096 > log =internal bsize=4096 blocks=32768, version=1 > = sectsz=512 sunit=0 blks > realtime =none extsz=4096 blocks=0, rtextents=0 > root@amnesiac#uname -a > Linux amnesiac 2.6.18-3-686 #1 SMP Sun Dec 10 19:37:06 UTC 2006 i686 > GNU/Linux > root@amnesiac#xfs_repair -V > xfs_repair version 2.8.18 > > The xfs_repair -v output is attached to this message. > From owner-xfs@oss.sgi.com Tue Jan 30 05:46:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 30 Jan 2007 05:46:55 -0800 (PST) X-Spam-oss-Status: No, score=0.2 required=5.0 tests=BAYES_50,J_CHICKENPOX_23, J_CHICKENPOX_45 autolearn=no version=3.2.0-pre1-r497472 Received: from pem-exsmtp01.silverapp.local (pem-smtp01.silverapp.com [209.43.6.67] (may be forged)) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0UDkkqw009423 for ; Tue, 30 Jan 2007 05:46:47 -0800 Received: from [192.168.5.165] ([209.43.15.211]) by pem-exsmtp01.silverapp.local with Microsoft SMTPSVC(6.0.3790.1830); Tue, 30 Jan 2007 08:46:06 -0500 Message-ID: <45BF4C0F.8040004@apparatus.net> Date: Tue, 30 Jan 2007 08:45:51 -0500 From: Andrew Jones User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.13) Gecko/20060809 Debian/1.7.13-0.3 X-Accept-Language: en MIME-Version: 1.0 To: Barry Naujok , xfs@oss.sgi.com Subject: Re: RE: xfs_repair leaves things un-repaired. References: <200701300010.LAA00558@larry.melbourne.sgi.com> In-Reply-To: <200701300010.LAA00558@larry.melbourne.sgi.com> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 30 Jan 2007 13:46:06.0780 (UTC) FILETIME=[FDC527C0:01C74474] X-archive-position: 10486 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: ajones@apparatus.net Precedence: bulk X-list: xfs Content-Length: 1177 Lines: 28 Barry Naujok wrote: >Hi Andrew, > >The xfs_repair output is valid. All the inodes that are reporting errors >are orphaned inodes that were moved into lost+found. At the start of >phase 4, the lost+found directory is deleted which causes all the inodes >in lost+found to be re-orphaned. The current solution to this problem is >to rename lost+found after an xfs_repair run and then unmount and try >xfs_repair again. > >Regarding the shutdown, that is not normal and I personally don't know >what the problem is from the trace. If it's a corrupt lost+found that >xfs_repair is generating (I gather you are rm'ing lost+found), the >second xfs_repair run after a rename should identify the problem with >the directory. You can also try running xfs_check on the device as it >may pick up something xfs_repair is missing. > >Regards, >Barry. > > Thanks a lot for the clear explanation. I still don't know why it bombs out and shuts down the filesystem when the corrupt directories are manipulated, but I don't particularly care, in this case. Moving lost+found and re-running xfs_repair has worked out the problem. I can now manipulate the contents of lost+found safely. 
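[For reference, the rename-and-rerun sequence Barry describes, and that Andrew followed, boils down to roughly the following. This is a sketch, not part of the original mails: the device and mount point come from Andrew's /dev/vg0/home example, and the lost+found.old name is illustrative.

    umount /home
    xfs_repair /dev/vg0/home                   # first pass: orphaned inodes are reconnected under lost+found
    mount /dev/vg0/home /home
    mv /home/lost+found /home/lost+found.old   # rename so the next pass does not delete and re-orphan it
    umount /home
    xfs_repair /dev/vg0/home                   # second pass should now complete cleanly
    xfs_check /dev/vg0/home                    # optional cross-check, as Barry suggests
    mount /dev/vg0/home /home

The contents of lost+found.old can then be inspected and removed normally on the mounted filesystem.]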
From owner-xfs@oss.sgi.com Tue Jan 30 10:42:38 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 30 Jan 2007 10:42:46 -0800 (PST) X-Spam-oss-Status: No, score=-0.5 required=5.0 tests=AWL,BAYES_50 autolearn=ham version=3.2.0-pre1-r497472 Received: from pne-smtpout3-sn1.fre.skanova.net (pne-smtpout3-sn1.fre.skanova.net [81.228.11.120]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0UIgZqw015209 for ; Tue, 30 Jan 2007 10:42:37 -0800 Received: from safari.iki.fi (80.223.106.128) by pne-smtpout3-sn1.fre.skanova.net (7.2.075) id 45AE1FD5000B0C8E for xfs@oss.sgi.com; Tue, 30 Jan 2007 18:32:32 +0100 Received: (qmail 6319 invoked by uid 500); 30 Jan 2007 17:32:28 -0000 Date: Tue, 30 Jan 2007 19:32:27 +0200 From: Sami Farin To: XFS Mailing List Cc: linux-kernel Mailing List Subject: XFS internal error xfs_da_do_buf Message-ID: <20070130173227.GA6017@m.safari.iki.fi> Mail-Followup-To: XFS Mailing List , linux-kernel Mailing List MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.13 (2006-08-11) X-archive-position: 10492 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: safari-xfs@safari.iki.fi Precedence: bulk X-list: xfs Content-Length: 3090 Lines: 61 I setup namespace for /tmp and /var/tmp ( pam_namespace.so into /etc/pam.d/{su,login} ) and something did not like something I did: [322593.844838] 0x0: 00 00 00 00 2b 00 00 11 20 21 00 00 00 68 ff ff [322593.844854] Filesystem "sda8": XFS internal error xfs_da_do_buf(2) at line 2087 of file fs/xfs/xfs_da_btree.c. Caller 0xc0230808 [322593.844997] [] dump_trace+0x215/0x21a [322593.845024] [] show_trace_log_lvl+0x1a/0x30 [322593.845044] [] show_trace+0x12/0x14 [322593.845062] [] dump_stack+0x19/0x1b [322593.845080] [] xfs_error_report+0x55/0x5b [322593.845430] [] xfs_corruption_error+0x3f/0x59 [322593.845780] [] xfs_da_do_buf+0x716/0x7c1 [322593.846124] [] xfs_da_read_buf+0x2f/0x35 [322593.846464] [] xfs_attr_leaf_get+0x3c/0xa3 [322593.846798] [] xfs_attr_fetch+0xb0/0xfe [322593.847127] [] xfs_acl_iaccess+0x59/0xc2 [322593.847451] [] xfs_iaccess+0x14b/0x189 [322593.847806] [] xfs_access+0x34/0x53 [322593.848182] [] xfs_vn_permission+0x12/0x17 [322593.848564] [] permission+0xe4/0xe6 [322593.848755] [] vfs_permission+0xf/0x11 [322593.848942] [] sys_faccessat+0xa6/0x144 [322593.849120] [] sys_access+0x20/0x22 [322593.849292] [] syscall_call+0x7/0xb [322593.849310] [<00f4f410>] 0xf4f410 [322593.849334] ======================= [322593.849358] 0x0: 00 00 00 00 2b 00 00 11 20 21 00 00 00 68 ff ff [322593.849364] Filesystem "sda8": XFS internal error xfs_da_do_buf(2) at line 2087 of file fs/xfs/xfs_da_btree.c. 
Caller 0xc0230808 [322593.849484] [] dump_trace+0x215/0x21a [322593.849512] [] show_trace_log_lvl+0x1a/0x30 [322593.849530] [] show_trace+0x12/0x14 [322593.849548] [] dump_stack+0x19/0x1b [322593.849573] [] xfs_error_report+0x55/0x5b [322593.849913] [] xfs_corruption_error+0x3f/0x59 [322593.850258] [] xfs_da_do_buf+0x716/0x7c1 [322593.850598] [] xfs_da_read_buf+0x2f/0x35 [322593.850936] [] xfs_attr_leaf_get+0x3c/0xa3 [322593.851266] [] xfs_attr_fetch+0xb0/0xfe [322593.851593] [] xfs_acl_iaccess+0x59/0xc2 [322593.851917] [] xfs_iaccess+0x14b/0x189 [322593.852268] [] xfs_access+0x34/0x53 [322593.852638] [] xfs_vn_permission+0x12/0x17 [322593.853018] [] permission+0xe4/0xe6 [322593.853204] [] vfs_permission+0xf/0x11 [322593.853387] [] sys_faccessat+0xa6/0x144 [322593.853562] [] sys_access+0x20/0x22 [322593.853734] [] syscall_call+0x7/0xb [322593.853753] [<00f4f410>] 0xf4f410 [322593.853759] ======================= sda8 is /usr , I haven't played with mount --bind or namespaces at /usr (AFAIK). My sda hard disk is not broken and I have SMP kernel 2.6.19.2 + Pentium D. I have got no other BUGs or xfs errors and I can access /usr and other partitions OK. -- Do what you love because life is too short for anything else. From owner-xfs@oss.sgi.com Tue Jan 30 14:00:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 30 Jan 2007 14:00:57 -0800 (PST) X-Spam-oss-Status: No, score=-1.9 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0UM0nqw008903 for ; Tue, 30 Jan 2007 14:00:52 -0800 Received: from edge (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 8C08AAAC21F; Wed, 31 Jan 2007 08:46:21 +1100 (EST) Subject: [PATCH] bump xfsprogs version for xfs_quota(8) update From: Nathan Scott Reply-To: nscott@aconex.com To: bnaujok@sgi.com, donaldd@sgi.com Cc: xfs@oss.sgi.com Content-Type: multipart/mixed; boundary="=-rORJjqAxnrUzuSF4hvNr" Organization: Aconex Date: Wed, 31 Jan 2007 08:59:43 +1100 Message-Id: <1170194383.18017.293.camel@edge> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 X-archive-position: 10493 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs Content-Length: 1911 Lines: 64 --=-rORJjqAxnrUzuSF4hvNr Content-Type: text/plain Content-Transfer-Encoding: 7bit Hi guys, Those xfs_quota(8) errors were reported as a Debian bug today - could you guys bump the version & update the docs (as in this patch) for me? thanks. 
-- Nathan --=-rORJjqAxnrUzuSF4hvNr Content-Disposition: attachment; filename=bump-debian Content-Type: text/x-patch; name=bump-debian; charset=UTF-8 Content-Transfer-Encoding: 7bit Index: xfsprogs/VERSION =================================================================== --- xfsprogs.orig/VERSION 2007-01-31 08:53:02.371648250 +1100 +++ xfsprogs/VERSION 2007-01-31 08:53:10.748171750 +1100 @@ -3,5 +3,5 @@ # PKG_MAJOR=2 PKG_MINOR=8 -PKG_REVISION=18 +PKG_REVISION=19 PKG_BUILD=1 Index: xfsprogs/debian/changelog =================================================================== --- xfsprogs.orig/debian/changelog 2007-01-31 08:52:38.182136500 +1100 +++ xfsprogs/debian/changelog 2007-01-31 08:54:18.016375750 +1100 @@ -1,3 +1,9 @@ +xfsprogs (2.8.19-1) unstable; urgency=low + + * New upstream release (closes: #409063) + + -- Nathan Scott Wed, 31 Jan 2007 08:53:32 +1100 + xfsprogs (2.8.18-1) unstable; urgency=low * New upstream release (closes: #399888) Index: xfsprogs/doc/CHANGES =================================================================== --- xfsprogs.orig/doc/CHANGES 2007-01-31 08:52:38.202137750 +1100 +++ xfsprogs/doc/CHANGES 2007-01-31 08:55:09.939620750 +1100 @@ -1,8 +1,9 @@ -xfsprogs-2.8.x (25 January 2007) +xfsprogs-2.8.19 (31 January 2007) - Fix pthread stack size setting in xfs_repair. - Fix xfs_bmap -n option displaying a truncated extent. - Fix xfs_io mwrite segfault. Thanks to Utako Kusaka for these two fixes. + - Fix errors in xfs_quota(8) man page. xfsprogs-2.8.18 (8 December 2006) - is an installed file, we cannot simply rename it, --=-rORJjqAxnrUzuSF4hvNr-- From owner-xfs@oss.sgi.com Tue Jan 30 14:04:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 30 Jan 2007 14:04:38 -0800 (PST) X-Spam-oss-Status: No, score=-0.8 required=5.0 tests=AWL,BAYES_40 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0UM4Rqw009728 for ; Tue, 30 Jan 2007 14:04:29 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA02501; Wed, 31 Jan 2007 09:03:28 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0UM3R7Y109055575; Wed, 31 Jan 2007 09:03:28 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0UM3QTS109109807; Wed, 31 Jan 2007 09:03:26 +1100 (AEDT) Date: Wed, 31 Jan 2007 09:03:26 +1100 From: David Chinner To: xfs-dev@sgi.com Cc: xfs@oss.sgi.com Subject: Review: freezing sometimes leaves the log dirty Message-ID: <20070130220326.GM33919298@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 10494 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 5360 Lines: 149 When we freeze the filesystem on a system that is under heavy load, the freeze can complete its flushes while there are still transactions active. Hence the freeze completes with a dirty log and dirty metadata buffers still in memory.
The Linux freeze path is a tangled mess - I had to go back to the irix code to work out exactly what we should be doing to work out why the linux code was failing because of the convoluted paths the linux code takes through the generic layers. In short, when we freeze the writes, we should not be quiescing the filesystem at this point. All we should be doing is a blocking data sync because we haven't shut down the transaction subsystem yet. We also need to wait for all direct I/O writes to complete as well. Once the data sync is complete, we can return to the generic code for it to freeze new transactions. Then we can wait for all active transactions to complete before we quiesce the filesystem which flushes out all the dirty metadata buffers. At this point we have a clean filesystem and an empty log so we can safely write the unmount record followed by a dummy record to dirty the log to ensure unlinked list processing on remount if we crash or shut down the machine while the filesystem is frozen. Comments? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/linux-2.6/xfs_super.c | 14 +++++++++++--- fs/xfs/linux-2.6/xfs_vfs.h | 1 + fs/xfs/xfs_vfsops.c | 26 ++++++++++++++++++++++---- 3 files changed, 34 insertions(+), 7 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_super.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_super.c 2007-01-08 14:32:40.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_super.c 2007-01-08 22:46:12.520522391 +1100 @@ -730,9 +730,17 @@ xfs_fs_sync_super( int error; int flags; - if (unlikely(sb->s_frozen == SB_FREEZE_WRITE)) - flags = SYNC_QUIESCE; - else + if (unlikely(sb->s_frozen == SB_FREEZE_WRITE)) { + /* + * First stage of freeze - no more writers will make progress + * now we are here, so we flush delwri and delalloc buffers + * here, then wait for all I/O to complete. Data is frozen at + * that point. Metadata is not frozen, transactions can still + * occur here so don't bother flushing the buftarg (i.e + * SYNC_QUIESCE) because it'll just get dirty again. + */ + flags = SYNC_FSDATA | SYNC_DELWRI | SYNC_WAIT | SYNC_DIO_WAIT; + } else flags = SYNC_FSDATA | (wait ? SYNC_WAIT : 0); error = bhv_vfs_sync(vfsp, flags, NULL); Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_vfs.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_vfs.h 2006-12-22 10:53:22.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_vfs.h 2007-01-08 22:27:26.366619320 +1100 @@ -92,6 +92,7 @@ typedef enum { #define SYNC_REFCACHE 0x0040 /* prune some of the nfs ref cache */ #define SYNC_REMOUNT 0x0080 /* remount readonly, no dummy LRs */ #define SYNC_QUIESCE 0x0100 /* quiesce fileystem for a snapshot */ +#define SYNC_DIO_WAIT 0x0200 /* wait for direct I/O to complete */ #define SHUTDOWN_META_IO_ERROR 0x0001 /* write attempt to metadata failed */ #define SHUTDOWN_LOG_IO_ERROR 0x0002 /* write attempt to the log failed */ Index: 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vfsops.c 2007-01-08 20:06:55.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c 2007-01-08 23:27:54.696637946 +1100 @@ -881,6 +881,10 @@ xfs_statvfs( * this by simply making sure the log gets flushed * if SYNC_BDFLUSH is set, and by actually writing it * out otherwise. 
+ * SYNC_DIO_WAIT - The caller wants us to wait for all direct I/Os + * as well to ensure all data I/O completes before we + * return. Forms the drain side of the write barrier needed + * to safely quiesce the filesystem. * */ /*ARGSUSED*/ @@ -892,10 +896,7 @@ xfs_sync( { xfs_mount_t *mp = XFS_BHVTOM(bdp); - if (unlikely(flags == SYNC_QUIESCE)) - return xfs_quiesce_fs(mp); - else - return xfs_syncsub(mp, flags, NULL); + return xfs_syncsub(mp, flags, NULL); } /* @@ -1181,6 +1182,12 @@ xfs_sync_inodes( } } + /* + * When freezing, we need to wait ensure direct I/O is complete + * as well to ensure all data modification is complete here + */ + if (flags & SYNC_DIO_WAIT) + vn_iowait(vp); if (flags & SYNC_BDFLUSH) { if ((flags & SYNC_ATTR) && @@ -1959,15 +1966,26 @@ xfs_showargs( return 0; } +/* + * Second stage of a freeze. The data is already frozen, now we have to take + * care of the metadata. New transactions are already blocked, so we need to + * wait for any remaining transactions to drain out before proceding. + */ STATIC void xfs_freeze( bhv_desc_t *bdp) { xfs_mount_t *mp = XFS_BHVTOM(bdp); + /* wait for all modifications to complete */ while (atomic_read(&mp->m_active_trans) > 0) delay(100); + /* flush inodes and push all remaining buffers out to disk */ + xfs_quiesce_fs(mp); + + BUG_ON(atomic_read(&mp->m_active_trans) > 0); + /* Push the superblock and write an unmount record */ xfs_log_unmount_write(mp); xfs_unmountfs_writesb(mp); From owner-xfs@oss.sgi.com Tue Jan 30 14:42:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 30 Jan 2007 14:42:09 -0800 (PST) X-Spam-oss-Status: No, score=-2.3 required=5.0 tests=AWL,BAYES_00, SPF_HELO_PASS autolearn=ham version=3.2.0-pre1-r497472 Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0UMfxqw013748 for ; Tue, 30 Jan 2007 14:42:01 -0800 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.13.1/8.13.1) with ESMTP id l0UMf4Zh000750; Tue, 30 Jan 2007 17:41:04 -0500 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0UMf3mT005719; Tue, 30 Jan 2007 17:41:04 -0500 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l0UMf3nL011982; Tue, 30 Jan 2007 17:41:03 -0500 Message-ID: <45BFC99A.9050009@sandeen.net> Date: Tue, 30 Jan 2007 16:41:30 -0600 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.9 (X11/20061219) MIME-Version: 1.0 To: XFS Mailing List , linux-kernel Mailing List Subject: Re: XFS internal error xfs_da_do_buf References: <20070130173227.GA6017@m.safari.iki.fi> In-Reply-To: <20070130173227.GA6017@m.safari.iki.fi> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 10495 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 1010 Lines: 28 Sami Farin wrote: > I setup namespace for /tmp and /var/tmp > ( pam_namespace.so into /etc/pam.d/{su,login} ) > and something did not like something I did: > > [322593.844838] 0x0: 00 00 00 00 2b 00 00 11 20 21 00 00 00 68 ff ff > [322593.844854] Filesystem "sda8": XFS internal error xfs_da_do_buf(2) at line 2087 of file fs/xfs/xfs_da_btree.c. Caller 0xc0230808 ... > sda8 is /usr , I haven't played with mount --bind or namespaces > at /usr (AFAIK). 
> > My sda hard disk is not broken and I have SMP kernel 2.6.19.2 + Pentium D. > I have got no other BUGs or xfs errors and I can access /usr and other > partitions OK. > I'd try xfs_repair. The error above means xfs read something unexpected for metadata, which did not match the magic numbers it was expecting. I suppose the error message could be more informative in that regard... The buffer it read contained this at the beginning: 0x0: 00 00 00 00 2b 00 00 11 20 21 00 00 00 68 ff ff but I'm not sure that offers any real insight. -Eric From owner-xfs@oss.sgi.com Tue Jan 30 16:35:06 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 30 Jan 2007 16:35:11 -0800 (PST) X-Spam-oss-Status: No, score=-1.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0V0Z2qw029509 for ; Tue, 30 Jan 2007 16:35:04 -0800 Received: from linuxbuild.melbourne.sgi.com (linuxbuild.melbourne.sgi.com [134.14.54.115]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA06933; Wed, 31 Jan 2007 11:34:07 +1100 From: donaldd@sgi.com Received: by linuxbuild.melbourne.sgi.com (Postfix, from userid 16365) id 8E5B41B023EB; Wed, 31 Jan 2007 11:34:07 +1100 (EST) To: sgi.bugs.xfs@melbourne.sgi.com, xfs@oss.sgi.com Subject: TAKE 957441 - xfs_quota manpage contains errors for project quota Message-Id: <20070131003407.8E5B41B023EB@linuxbuild.melbourne.sgi.com> Date: Wed, 31 Jan 2007 11:34:07 +1100 (EST) X-archive-position: 10496 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: donaldd@sgi.com Precedence: bulk X-list: xfs Content-Length: 880 Lines: 21 Update changelogs and revision number to indicate updates to xfs_quota manpage. Date: Wed Jan 31 11:31:01 AEDT 2007 Workarea: linuxbuild.melbourne.sgi.com:/home/donaldd/isms/xfs-cmds Inspected by: nathans@debian.org The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:28001a xfsprogs/VERSION - 1.169 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/VERSION.diff?r1=text&tr1=1.169&r2=text&tr2=1.168&f=h xfsprogs/doc/CHANGES - 1.234 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/doc/CHANGES.diff?r1=text&tr1=1.234&r2=text&tr2=1.233&f=h xfsprogs/debian/changelog - 1.149 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/debian/changelog.diff?r1=text&tr1=1.149&r2=text&tr2=1.148&f=h - Update changelogs to indicate updates to xfs_quota manpage. 
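As a rough illustration of the check Eric refers to in the xfs_da_do_buf thread above: xfs_da_do_buf() reads a directory/attribute btree block and sanity-checks the magic number in the buffer before trusting it. The sketch below is not the code in fs/xfs/xfs_da_btree.c - the function name is invented and only a subset of the accepted magics is listed - but it shows the shape of the test. The buffer Sami's kernel dumped (00 00 00 00 2b 00 ...) carries none of the expected magic values, so the read is failed as an internal error and xfs_repair is the right next step, as suggested.

    #include <stdint.h>

    /* A subset of the on-disk magic numbers xfs_da_do_buf() will accept. */
    #define XFS_DA_NODE_MAGIC     0xfebe  /* dir/attr btree interior node */
    #define XFS_ATTR_LEAF_MAGIC   0xfbee  /* attribute leaf block */
    #define XFS_DIR2_LEAFN_MAGIC  0xd2ff  /* dir v2 leaf block, node form */

    /* Illustrative only: non-zero if the block looks like da btree metadata. */
    static int
    da_magic_ok(uint16_t magic)
    {
            switch (magic) {
            case XFS_DA_NODE_MAGIC:
            case XFS_ATTR_LEAF_MAGIC:
            case XFS_DIR2_LEAFN_MAGIC:
                    return 1;
            default:
                    return 0;  /* caller reports "XFS internal error xfs_da_do_buf(2)" */
            }
    }
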
From owner-xfs@oss.sgi.com Tue Jan 30 17:30:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 30 Jan 2007 17:30:24 -0800 (PST) X-Spam-oss-Status: No, score=-2.0 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0V1UFqw002564 for ; Tue, 30 Jan 2007 17:30:16 -0800 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA08446; Wed, 31 Jan 2007 12:29:17 +1100 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l0V1TG7Y107644146; Wed, 31 Jan 2007 12:29:16 +1100 (AEDT) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0V1TFQG109269164; Wed, 31 Jan 2007 12:29:15 +1100 (AEDT) Date: Wed, 31 Jan 2007 12:29:15 +1100 From: David Chinner To: xfs-dev@sgi.com Cc: xfs@oss.sgi.com Subject: Review: Fix bulkstat block count units Message-ID: <20070131012915.GO33919298@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 10497 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1284 Lines: 46 The recent bulkstat improvements introduced a minor regression - they changed the units of the dt_count field from 512 byte blocks to filesystem blocks. This changes the userspace visible fields and can break existing applications that use bulkstat. Comments? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/dmapi/xfs_dm.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/dmapi/xfs_dm.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/dmapi/xfs_dm.c 2007-01-16 10:54:14.000000000 +1100 +++ 2.6.x-xfs-new/fs/xfs/dmapi/xfs_dm.c 2007-01-29 18:09:35.612014885 +1100 @@ -426,7 +426,8 @@ xfs_dip_to_stat( case XFS_DINODE_FMT_BTREE: buf->dt_rdev = 0; buf->dt_blksize = mp->m_sb.sb_blocksize; - buf->dt_blocks = INT_GET(dic->di_nblocks, ARCH_CONVERT); + buf->dt_blocks = XFS_FSB_TO_BB(mp, + INT_GET(dic->di_nblocks, ARCH_CONVERT)); break; } @@ -494,7 +495,8 @@ xfs_ip_to_stat( case XFS_DINODE_FMT_BTREE: buf->dt_rdev = 0; buf->dt_blksize = mp->m_sb.sb_blocksize; - buf->dt_blocks = dic->di_nblocks + ip->i_delayed_blks; + buf->dt_blocks = XFS_FSB_TO_BB(mp, + (dic->di_nblocks + ip->i_delayed_blks)); break; } From owner-xfs@oss.sgi.com Tue Jan 30 17:39:07 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 30 Jan 2007 17:39:14 -0800 (PST) X-Spam-oss-Status: No, score=-2.1 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l0V1d4qw003791 for ; Tue, 30 Jan 2007 17:39:05 -0800 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA08743; Wed, 31 Jan 2007 12:38:05 +1100 Message-ID: <45BFF357.3010806@sgi.com> Date: Wed, 31 Jan 2007 12:39:35 +1100 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.9 (X11/20061206) MIME-Version: 1.0 To: David Chinner CC: 
xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: Fix bulkstat block count units References: <20070131012915.GO33919298@melbourne.sgi.com> In-Reply-To: <20070131012915.GO33919298@melbourne.sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 10498 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs Content-Length: 371 Lines: 19 The patch is looking good Dave. Regards, Vlad David Chinner wrote: > The recent bulkstat improvements introduced a minor > regression - they changed the units of the dt_count field > from 512 byte blocks to filesystem blocks. This changes > the userspace visible fields and can break existing applications > that use bulkstat. > > Comments? > > Cheers, > > Dave. > From owner-xfs@oss.sgi.com Tue Jan 30 20:13:54 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 30 Jan 2007 20:13:57 -0800 (PST) X-Spam-oss-Status: No, score=-1.6 required=5.0 tests=BAYES_00,SUBJ_ALL_CAPS autolearn=no version=3.2.0-pre1-r497472 Received: from imr2.americas.sgi.com (imr2.americas.sgi.com [198.149.16.18]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0V4Drqw017450 for ; Tue, 30 Jan 2007 20:13:53 -0800 Received: from clink.americas.sgi.com (clink.americas.sgi.com [128.162.236.153]) by imr2.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id l0V3gUnc80592736 for ; Tue, 30 Jan 2007 19:42:31 -0800 (PST) Received: from clink.americas.sgi.com by clink.americas.sgi.com (SGI-8.12.5/SGI-client-1.7) via ESMTP id l0V4Cuu215709652; Tue, 30 Jan 2007 22:12:56 -0600 (CST) Received: (from mvalluri@localhost) by clink.americas.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0V4Cusv14683285 for xfs@oss.sgi.com; Tue, 30 Jan 2007 22:12:56 -0600 (CST) Date: Tue, 30 Jan 2007 22:12:56 -0600 (CST) From: Madan Valluri Message-Id: <200701310412.l0V4Cusv14683285@clink.americas.sgi.com> To: xfs@oss.sgi.com Subject: TAKE 959451 - X-archive-position: 10499 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mvalluri@sgi.com Precedence: bulk X-list: xfs Content-Length: 1582 Lines: 37 xfs_repair - 031 QA failure with platform that don't use the uuid field in the super block. Date: Tue Jan 30 20:11:16 PST 2007 Workarea: clink.americas.sgi.com:/data/clink/a01/mvalluri/totrepair Inspected by: bnaujok@sgi.com The following file(s) were checked into: bonnie.engr.sgi.com:/isms/xfs-cmds/master Modid: master:xfs-cmds:220542a xfsprogs/libxlog/xfs_log_recover.c - 1.31 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/libxlog/xfs_log_recover.c.diff?r1=text&tr1=1.31&r2=text&tr2=1.30&f=h - Do not call xlog_header_check_mount(), if platform does not support uuid in super block. xfsprogs/libxfs/init.h - 1.13 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/libxfs/init.h.diff?r1=text&tr1=1.13&r2=text&tr2=1.12&f=h - Declared platform_has_uuid. xfsprogs/libxfs/linux.c - 1.17 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/libxfs/linux.c.diff?r1=text&tr1=1.17&r2=text&tr2=1.16&f=h - Linux has uuid in super block. xfsprogs/libxfs/darwin.c - 1.13 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/libxfs/darwin.c.diff?r1=text&tr1=1.13&r2=text&tr2=1.12&f=h - Darwin has uuid in super block. 
xfsprogs/libxfs/irix.c - 1.14 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/libxfs/irix.c.diff?r1=text&tr1=1.14&r2=text&tr2=1.13&f=h - Irix does not have uuid in super block. xfsprogs/libxfs/freebsd.c - 1.16 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/libxfs/freebsd.c.diff?r1=text&tr1=1.16&r2=text&tr2=1.15&f=h - Freebsd has uuid in super block. From owner-xfs@oss.sgi.com Tue Jan 30 20:19:40 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 30 Jan 2007 20:19:43 -0800 (PST) X-Spam-oss-Status: No, score=-1.6 required=5.0 tests=AWL,BAYES_00, J_CHICKENPOX_45,SUBJ_ALL_CAPS autolearn=no version=3.2.0-pre1-r497472 Received: from omx1.sgi.com (omx1.americas.sgi.com [198.149.16.13]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0V4Jdqw023171 for ; Tue, 30 Jan 2007 20:19:40 -0800 Received: from imr2.americas.sgi.com (imr2.americas.sgi.com [198.149.16.18]) by omx1.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with ESMTP id l0V40lDW006693 for ; Tue, 30 Jan 2007 22:00:47 -0600 Received: from clink.americas.sgi.com (clink.americas.sgi.com [128.162.236.153]) by imr2.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id l0V3UHnc80585414 for ; Tue, 30 Jan 2007 19:30:17 -0800 (PST) Received: from clink.americas.sgi.com by clink.americas.sgi.com (SGI-8.12.5/SGI-client-1.7) via ESMTP id l0V40hu215745393; Tue, 30 Jan 2007 22:00:43 -0600 (CST) Received: (from mvalluri@localhost) by clink.americas.sgi.com (SGI-8.12.5/8.12.5/Submit) id l0V40hUI15749846 for xfs@oss.sgi.com; Tue, 30 Jan 2007 22:00:43 -0600 (CST) Date: Tue, 30 Jan 2007 22:00:43 -0600 (CST) From: Madan Valluri Message-Id: <200701310400.l0V40hUI15749846@clink.americas.sgi.com> To: xfs@oss.sgi.com Subject: TAKE 959450 - X-archive-position: 10500 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mvalluri@sgi.com Precedence: bulk X-list: xfs Content-Length: 567 Lines: 17 QA 033 failing due to root inode being put in lost+found Date: Tue Jan 30 19:58:09 PST 2007 Workarea: clink.americas.sgi.com:/data/clink/a01/mvalluri/totrepair Inspected by: bnaujok@sgi.com The following file(s) were checked into: bonnie.engr.sgi.com:/isms/xfs-cmds/master Modid: master:xfs-cmds:220539a xfsprogs/repair/phase6.c - 1.36 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/repair/phase6.c.diff?r1=text&tr1=1.36&r2=text&tr2=1.35&f=h - Do no skip a rootino even if it doesn't appear to be a directory, in traverse_function(). 
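A rough sketch of what the libxlog part of TAKE 959451 above amounts to; this is not the committed xfsprogs diff. platform_has_uuid is the flag the TAKE declares (set in libxfs/linux.c, darwin.c and freebsd.c, cleared in irix.c), and xlog_header_check_mount() is the existing check that compares a log record header's uuid against the superblock. The wrapper name and the exact signature shown here are assumptions for illustration only.

    extern int platform_has_uuid;   /* per-platform, from the libxfs init code */

    /*
     * Skip the log-vs-superblock uuid comparison on platforms whose libxfs
     * never fills in a superblock uuid; otherwise xfs_repair trips a
     * spurious mismatch (the 031 QA failure described above).
     */
    static int
    header_check(xfs_mount_t *mp, xlog_rec_header_t *head)
    {
            if (!platform_has_uuid)
                    return 0;
            return xlog_header_check_mount(mp, head);
    }
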
From owner-xfs@oss.sgi.com Tue Jan 30 20:25:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 30 Jan 2007 20:25:13 -0800 (PST) X-Spam-oss-Status: No, score=-1.8 required=5.0 tests=AWL,BAYES_00, J_CHICKENPOX_45 autolearn=no version=3.2.0-pre1-r497472 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0V4P7qw024014 for ; Tue, 30 Jan 2007 20:25:08 -0800 Received: from edge (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 53389AAC2D1; Wed, 31 Jan 2007 15:10:35 +1100 (EST) Subject: Re: TAKE 959451 - From: Nathan Scott Reply-To: nscott@aconex.com To: Madan Valluri Cc: xfs@oss.sgi.com In-Reply-To: <200701310412.l0V4Cusv14683285@clink.americas.sgi.com> References: <200701310412.l0V4Cusv14683285@clink.americas.sgi.com> Content-Type: text/plain Organization: Aconex Date: Wed, 31 Jan 2007 15:24:02 +1100 Message-Id: <1170217442.18017.306.camel@edge> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-archive-position: 10501 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs Content-Length: 379 Lines: 16 Hi Madan, On Tue, 2007-01-30 at 22:12 -0600, Madan Valluri wrote: > xfs_repair - 031 QA failure with platform that don't use the uuid field > in the super block. You mean "uuid in the log", I think - all platforms would have it in the super block. > QA 033 failing due to root inode being put in lost+found Ahahahaa - now thats funny; you just made my day... ;) -- Nathan From owner-xfs@oss.sgi.com Tue Jan 30 20:28:57 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 30 Jan 2007 20:29:01 -0800 (PST) X-Spam-oss-Status: No, score=-1.1 required=5.0 tests=AWL,BAYES_05, J_CHICKENPOX_45 autolearn=no version=3.2.0-pre1-r497472 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0V4Suqw024714 for ; Tue, 30 Jan 2007 20:28:57 -0800 Received: from edge (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 4DFD8AAC1B1; Wed, 31 Jan 2007 15:14:26 +1100 (EST) Subject: Re: TAKE 959451 - From: Nathan Scott Reply-To: nscott@aconex.com To: Madan Valluri Cc: xfs@oss.sgi.com In-Reply-To: <1170217442.18017.306.camel@edge> References: <200701310412.l0V4Cusv14683285@clink.americas.sgi.com> <1170217442.18017.306.camel@edge> Content-Type: text/plain Organization: Aconex Date: Wed, 31 Jan 2007 15:27:53 +1100 Message-Id: <1170217673.18017.310.camel@edge> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-archive-position: 10502 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs Content-Length: 737 Lines: 25 On Wed, 2007-01-31 at 15:24 +1100, Nathan Scott wrote: > Hi Madan, > > On Tue, 2007-01-30 at 22:12 -0600, Madan Valluri wrote: > > xfs_repair - 031 QA failure with platform that don't use the uuid field > > in the super block. > > You mean "uuid in the log", I think - all platforms would have it in the > super block. > > > QA 033 failing due to root inode being put in lost+found > > Ahahahaa - now thats funny; you just made my day... 
;) > Actually, you'll want to make sure that the 2 realtime inodes, and the 2 optional quota inodes also don't ever get put in lost+found; I've not looked at these fixes (I missed the request for review :), but they could be in the same boat as the root inode I imagine. cheers. -- Nathan From owner-xfs@oss.sgi.com Tue Jan 30 21:11:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 30 Jan 2007 21:11:12 -0800 (PST) X-Spam-oss-Status: No, score=-0.9 required=5.0 tests=AWL,BAYES_50, J_CHICKENPOX_41,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r497472 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0V5B6qw029198 for ; Tue, 30 Jan 2007 21:11:07 -0800 Received: by sandeen.net (Postfix, from userid 500) id BB0CA18003EF1; Tue, 30 Jan 2007 23:10:11 -0600 (CST) To: xfs@oss.sgi.com Subject: [PATCH] resend - remove unused functions Message-Id: <20070131051011.BB0CA18003EF1@sandeen.net> Date: Tue, 30 Jan 2007 23:10:11 -0600 (CST) From: sandeen@sandeen.net X-archive-position: 10503 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 13050 Lines: 450 Do what the subject says... these functions are never called. These turned up for me when I was messing with marking static functions static.... xfs_bmap.c | 36 ------------------ xfs_bmap_btree.c | 76 -------------------------------------- xfs_bmap_btree.h | 13 ------ xfs_da_btree.c | 15 ------- xfs_da_btree.h | 1 xfs_error.c | 26 ------------- xfs_error.h | 1 xfs_rtalloc.c | 108 ------------------------------------------------------- xfs_rtalloc.h | 18 --------- 9 files changed, 294 deletions(-) Signed-off-by: Eric Sandeen =================================================================== Index: linux/fs/xfs/xfs_bmap.c =================================================================== --- linux.orig/fs/xfs/xfs_bmap.c +++ linux/fs/xfs/xfs_bmap.c @@ -185,16 +185,6 @@ xfs_bmap_btree_to_extents( int *logflagsp, /* inode logging flags */ int whichfork); /* data or attr fork */ -#ifdef DEBUG -/* - * Check that the extents list for the inode ip is in the right order. - */ -STATIC void -xfs_bmap_check_extents( - xfs_inode_t *ip, /* incore inode pointer */ - int whichfork); /* data or attr fork */ -#endif - /* * Called by xfs_bmapi to update file extent records and the btree * after removing space (or undoing a delayed allocation). @@ -6049,32 +6039,6 @@ xfs_bmap_eof( } #ifdef DEBUG -/* - * Check that the extents list for the inode ip is in the right order. 
- */ -STATIC void -xfs_bmap_check_extents( - xfs_inode_t *ip, /* incore inode pointer */ - int whichfork) /* data or attr fork */ -{ - xfs_bmbt_rec_t *ep; /* current extent entry */ - xfs_extnum_t idx; /* extent record index */ - xfs_ifork_t *ifp; /* inode fork pointer */ - xfs_extnum_t nextents; /* number of extents in list */ - xfs_bmbt_rec_t *nextp; /* next extent entry */ - - ifp = XFS_IFORK_PTR(ip, whichfork); - ASSERT(ifp->if_flags & XFS_IFEXTENTS); - nextents = ifp->if_bytes / (uint)sizeof(xfs_bmbt_rec_t); - ep = xfs_iext_get_ext(ifp, 0); - for (idx = 0; idx < nextents - 1; idx++) { - nextp = xfs_iext_get_ext(ifp, idx + 1); - xfs_btree_check_rec(XFS_BTNUM_BMAP, (void *)ep, - (void *)(nextp)); - ep = nextp; - } -} - STATIC xfs_buf_t * xfs_bmap_get_bp( Index: linux/fs/xfs/xfs_bmap_btree.c =================================================================== --- linux.orig/fs/xfs/xfs_bmap_btree.c +++ linux/fs/xfs/xfs_bmap_btree.c @@ -678,47 +678,6 @@ error0: return error; } -#ifdef DEBUG -/* - * Get the data from the pointed-to record. - */ -int -xfs_bmbt_get_rec( - xfs_btree_cur_t *cur, - xfs_fileoff_t *off, - xfs_fsblock_t *bno, - xfs_filblks_t *len, - xfs_exntst_t *state, - int *stat) -{ - xfs_bmbt_block_t *block; - xfs_buf_t *bp; -#ifdef DEBUG - int error; -#endif - int ptr; - xfs_bmbt_rec_t *rp; - - block = xfs_bmbt_get_block(cur, 0, &bp); - ptr = cur->bc_ptrs[0]; -#ifdef DEBUG - if ((error = xfs_btree_check_lblock(cur, block, 0, bp))) - return error; -#endif - if (ptr > be16_to_cpu(block->bb_numrecs) || ptr <= 0) { - *stat = 0; - return 0; - } - rp = XFS_BMAP_REC_IADDR(block, ptr, cur); - *off = xfs_bmbt_disk_get_startoff(rp); - *bno = xfs_bmbt_disk_get_startblock(rp); - *len = xfs_bmbt_disk_get_blockcount(rp); - *state = xfs_bmbt_disk_get_state(rp); - *stat = 1; - return 0; -} -#endif - /* * Insert one record/level. Return information to the caller * allowing the next level up to proceed if necessary. @@ -2016,30 +1975,6 @@ xfs_bmbt_disk_get_blockcount( } /* - * Extract the startblock field from an on disk bmap extent record. - */ -xfs_fsblock_t -xfs_bmbt_disk_get_startblock( - xfs_bmbt_rec_t *r) -{ -#if XFS_BIG_BLKNOS - return (((xfs_fsblock_t)INT_GET(r->l0, ARCH_CONVERT) & XFS_MASK64LO(9)) << 43) | - (((xfs_fsblock_t)INT_GET(r->l1, ARCH_CONVERT)) >> 21); -#else -#ifdef DEBUG - xfs_dfsbno_t b; - - b = (((xfs_dfsbno_t)INT_GET(r->l0, ARCH_CONVERT) & XFS_MASK64LO(9)) << 43) | - (((xfs_dfsbno_t)INT_GET(r->l1, ARCH_CONVERT)) >> 21); - ASSERT((b >> 32) == 0 || ISNULLDSTARTBLOCK(b)); - return (xfs_fsblock_t)b; -#else /* !DEBUG */ - return (xfs_fsblock_t)(((xfs_dfsbno_t)INT_GET(r->l1, ARCH_CONVERT)) >> 21); -#endif /* DEBUG */ -#endif /* XFS_BIG_BLKNOS */ -} - -/* * Extract the startoff field from a disk format bmap extent record. 
*/ xfs_fileoff_t @@ -2049,17 +1984,6 @@ xfs_bmbt_disk_get_startoff( return ((xfs_fileoff_t)INT_GET(r->l0, ARCH_CONVERT) & XFS_MASK64LO(64 - BMBT_EXNTFLAG_BITLEN)) >> 9; } - -xfs_exntst_t -xfs_bmbt_disk_get_state( - xfs_bmbt_rec_t *r) -{ - int ext_flag; - - ext_flag = (int)((INT_GET(r->l0, ARCH_CONVERT)) >> (64 - BMBT_EXNTFLAG_BITLEN)); - return xfs_extent_state(xfs_bmbt_disk_get_blockcount(r), - ext_flag); -} #endif /* XFS_NATIVE_HOST */ Index: linux/fs/xfs/xfs_bmap_btree.h =================================================================== --- linux.orig/fs/xfs/xfs_bmap_btree.h +++ linux/fs/xfs/xfs_bmap_btree.h @@ -315,15 +315,11 @@ extern xfs_exntst_t xfs_bmbt_get_state(x #ifndef XFS_NATIVE_HOST extern void xfs_bmbt_disk_get_all(xfs_bmbt_rec_t *r, xfs_bmbt_irec_t *s); -extern xfs_exntst_t xfs_bmbt_disk_get_state(xfs_bmbt_rec_t *r); extern xfs_filblks_t xfs_bmbt_disk_get_blockcount(xfs_bmbt_rec_t *r); -extern xfs_fsblock_t xfs_bmbt_disk_get_startblock(xfs_bmbt_rec_t *r); extern xfs_fileoff_t xfs_bmbt_disk_get_startoff(xfs_bmbt_rec_t *r); #else #define xfs_bmbt_disk_get_all(r, s) xfs_bmbt_get_all(r, s) -#define xfs_bmbt_disk_get_state(r) xfs_bmbt_get_state(r) #define xfs_bmbt_disk_get_blockcount(r) xfs_bmbt_get_blockcount(r) -#define xfs_bmbt_disk_get_startblock(r) xfs_bmbt_get_blockcount(r) #define xfs_bmbt_disk_get_startoff(r) xfs_bmbt_get_startoff(r) #endif /* XFS_NATIVE_HOST */ @@ -364,15 +360,6 @@ extern void xfs_bmbt_to_bmdr(xfs_bmbt_bl extern int xfs_bmbt_update(struct xfs_btree_cur *, xfs_fileoff_t, xfs_fsblock_t, xfs_filblks_t, xfs_exntst_t); -#ifdef DEBUG -/* - * Get the data from the pointed-to record. - */ -extern int xfs_bmbt_get_rec(struct xfs_btree_cur *, xfs_fileoff_t *, - xfs_fsblock_t *, xfs_filblks_t *, - xfs_exntst_t *, int *); -#endif - #endif /* __KERNEL__ */ #endif /* __XFS_BMAP_BTREE_H__ */ Index: linux/fs/xfs/xfs_da_btree.c =================================================================== --- linux.orig/fs/xfs/xfs_da_btree.c +++ linux/fs/xfs/xfs_da_btree.c @@ -2165,21 +2165,6 @@ xfs_da_reada_buf( return rval; } -/* - * Calculate the number of bits needed to hold i different values. 
- */ -uint -xfs_da_log2_roundup(uint i) -{ - uint rval; - - for (rval = 0; rval < NBBY * sizeof(i); rval++) { - if ((1 << rval) >= i) - break; - } - return(rval); -} - kmem_zone_t *xfs_da_state_zone; /* anchor for state struct zone */ kmem_zone_t *xfs_dabuf_zone; /* dabuf zone */ Index: linux/fs/xfs/xfs_da_btree.h =================================================================== --- linux.orig/fs/xfs/xfs_da_btree.h +++ linux/fs/xfs/xfs_da_btree.h @@ -249,7 +249,6 @@ int xfs_da_shrink_inode(xfs_da_args_t *a xfs_dabuf_t *dead_buf); uint xfs_da_hashname(const uchar_t *name_string, int name_length); -uint xfs_da_log2_roundup(uint i); xfs_da_state_t *xfs_da_state_alloc(void); void xfs_da_state_free(xfs_da_state_t *state); Index: linux/fs/xfs/xfs_error.c =================================================================== --- linux.orig/fs/xfs/xfs_error.c +++ linux/fs/xfs/xfs_error.c @@ -132,32 +132,6 @@ xfs_errortag_add(int error_tag, xfs_moun } int -xfs_errortag_clear(int error_tag, xfs_mount_t *mp) -{ - int i; - int64_t fsid; - - memcpy(&fsid, mp->m_fixedfsid, sizeof(xfs_fsid_t)); - - for (i = 0; i < XFS_NUM_INJECT_ERROR; i++) { - if (xfs_etest_fsid[i] == fsid && xfs_etest[i] == error_tag) { - xfs_etest[i] = 0; - xfs_etest_fsid[i] = 0LL; - kmem_free(xfs_etest_fsname[i], - strlen(xfs_etest_fsname[i]) + 1); - xfs_etest_fsname[i] = NULL; - cmn_err(CE_WARN, "Cleared XFS error tag #%d", - error_tag); - return 0; - } - } - - cmn_err(CE_WARN, "XFS error tag %d not on", error_tag); - - return 1; -} - -int xfs_errortag_clearall_umount(int64_t fsid, char *fsname, int loud) { int i; Index: linux/fs/xfs/xfs_error.h =================================================================== --- linux.orig/fs/xfs/xfs_error.h +++ linux/fs/xfs/xfs_error.h @@ -144,7 +144,6 @@ extern void xfs_error_test_init(void); #endif /* __ANSI_CPP__ */ extern int xfs_errortag_add(int error_tag, xfs_mount_t *mp); -extern int xfs_errortag_clear(int error_tag, xfs_mount_t *mp); extern int xfs_errortag_clearall(xfs_mount_t *mp); extern int xfs_errortag_clearall_umount(int64_t fsid, char *fsname, int loud); #else Index: linux/fs/xfs/xfs_rtalloc.c =================================================================== --- linux.orig/fs/xfs/xfs_rtalloc.c +++ linux/fs/xfs/xfs_rtalloc.c @@ -913,57 +913,6 @@ xfs_rtcheck_alloc_range( } #endif -#ifdef DEBUG -/* - * Check whether the given block in the bitmap has the given value. - */ -STATIC int /* 1 for matches, 0 for not */ -xfs_rtcheck_bit( - xfs_mount_t *mp, /* file system mount structure */ - xfs_trans_t *tp, /* transaction pointer */ - xfs_rtblock_t start, /* bit (block) to check */ - int val) /* 1 for free, 0 for allocated */ -{ - int bit; /* bit number in the word */ - xfs_rtblock_t block; /* bitmap block number */ - xfs_buf_t *bp; /* buf for the block */ - xfs_rtword_t *bufp; /* pointer into the buffer */ - /* REFERENCED */ - int error; /* error value */ - xfs_rtword_t wdiff; /* difference between bit & expected */ - int word; /* word number in the buffer */ - xfs_rtword_t wval; /* word value from buffer */ - - block = XFS_BITTOBLOCK(mp, start); - error = xfs_rtbuf_get(mp, tp, block, 0, &bp); - bufp = (xfs_rtword_t *)XFS_BUF_PTR(bp); - word = XFS_BITTOWORD(mp, start); - bit = (int)(start & (XFS_NBWORD - 1)); - wval = bufp[word]; - xfs_trans_brelse(tp, bp); - wdiff = (wval ^ -val) & ((xfs_rtword_t)1 << bit); - return !wdiff; -} -#endif /* DEBUG */ - -#if 0 -/* - * Check that the given extent (block range) is free already. 
- */ -STATIC int /* error */ -xfs_rtcheck_free_range( - xfs_mount_t *mp, /* file system mount point */ - xfs_trans_t *tp, /* transaction pointer */ - xfs_rtblock_t bno, /* starting block number of extent */ - xfs_extlen_t len, /* length of extent */ - int *stat) /* out: 1 for free, 0 for not */ -{ - xfs_rtblock_t new; /* dummy for xfs_rtcheck_range */ - - return xfs_rtcheck_range(mp, tp, bno, len, 1, &new, stat); -} -#endif - /* * Check that the given range is either all allocated (val = 0) or * all free (val = 1). @@ -2382,60 +2331,3 @@ xfs_rtpick_extent( *pick = b; return 0; } - -#ifdef DEBUG -/* - * Debug code: print out the value of a range in the bitmap. - */ -void -xfs_rtprint_range( - xfs_mount_t *mp, /* file system mount structure */ - xfs_trans_t *tp, /* transaction pointer */ - xfs_rtblock_t start, /* starting block to print */ - xfs_extlen_t len) /* length to print */ -{ - xfs_extlen_t i; /* block number in the extent */ - - cmn_err(CE_DEBUG, "%Ld: ", (long long)start); - for (i = 0; i < len; i++) - cmn_err(CE_DEBUG, "%d", xfs_rtcheck_bit(mp, tp, start + i, 1)); - cmn_err(CE_DEBUG, "\n"); -} - -/* - * Debug code: print the summary file. - */ -void -xfs_rtprint_summary( - xfs_mount_t *mp, /* file system mount structure */ - xfs_trans_t *tp) /* transaction pointer */ -{ - xfs_suminfo_t c; /* summary data */ - xfs_rtblock_t i; /* bitmap block number */ - int l; /* summary information level */ - int p; /* flag for printed anything */ - xfs_fsblock_t sb; /* summary block number */ - xfs_buf_t *sumbp; /* summary block buffer */ - - sumbp = NULL; - for (l = 0; l < mp->m_rsumlevels; l++) { - for (p = 0, i = 0; i < mp->m_sb.sb_rbmblocks; i++) { - (void)xfs_rtget_summary(mp, tp, l, i, &sumbp, &sb, &c); - if (c) { - if (!p) { - cmn_err(CE_DEBUG, "%Ld-%Ld:", 1LL << l, - XFS_RTMIN((1LL << l) + - ((1LL << l) - 1LL), - mp->m_sb.sb_rextents)); - p = 1; - } - cmn_err(CE_DEBUG, " %Ld:%d", (long long)i, c); - } - } - if (p) - cmn_err(CE_DEBUG, "\n"); - } - if (sumbp) - xfs_trans_brelse(tp, sumbp); -} -#endif /* DEBUG */ Index: linux/fs/xfs/xfs_rtalloc.h =================================================================== --- linux.orig/fs/xfs/xfs_rtalloc.h +++ linux/fs/xfs/xfs_rtalloc.h @@ -134,24 +134,6 @@ xfs_rtpick_extent( xfs_rtblock_t *pick); /* result rt extent */ /* - * Debug code: print out the value of a range in the bitmap. - */ -void -xfs_rtprint_range( - struct xfs_mount *mp, /* file system mount structure */ - struct xfs_trans *tp, /* transaction pointer */ - xfs_rtblock_t start, /* starting block to print */ - xfs_extlen_t len); /* length to print */ - -/* - * Debug code: print the summary file. - */ -void -xfs_rtprint_summary( - struct xfs_mount *mp, /* file system mount structure */ - struct xfs_trans *tp); /* transaction pointer */ - -/* * Grow the realtime area of the filesystem. 
*/ int From owner-xfs@oss.sgi.com Wed Jan 31 09:48:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 31 Jan 2007 09:48:17 -0800 (PST) X-Spam-oss-Status: No, score=4.5 required=5.0 tests=BAYES_80,HTML_MESSAGE, MSGID_FROM_MTA_HEADER,RAZOR2_CF_RANGE_51_100,RAZOR2_CF_RANGE_E4_51_100, RAZOR2_CHECK autolearn=no version=3.2.0-pre1-r497472 Received: from ar-pinto.atl.sa.earthlink.net (ar-pinto.atl.sa.earthlink.net [207.69.195.107]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0VHm9qw017537 for ; Wed, 31 Jan 2007 09:48:11 -0800 Message-Id: <200701311748.l0VHm9qw017537@oss.sgi.com> Received: from strange.mail.mindspring.net ([207.69.200.30]) by ar-pinto.atl.sa.earthlink.net with smtp (Exim 4.34) id 1HCJAL-0008Gh-Qw for linux-xfs@oss.sgi.com; Wed, 31 Jan 2007 12:22:37 -0500 From: gminc@mindspring.com Date: Wed, 31 Jan 2007 12:22:23 -0500 (EST) Subject: Re: [WARNING: VIRUS REMOVED]Returned mail: see transcript for details Reply-to: nobody@earthlink.net Precedence: auto_reply MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Content-Type: text/plain Content-Disposition: inline Content-Transfer-Encoding: 7bit X-archive-position: 10505 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: gminc@mindspring.com Precedence: bulk X-list: xfs Content-Length: 594 Lines: 17 I apologize for this automatic reply to your email. To control spam, I now allow incoming messages only from senders I have approved beforehand. If you would like to be added to my list of approved senders, please fill out the short request form (see link below). Once I approve you, I will receive your original message in my inbox. You do not need to resend your message. I apologize for this one-time inconvenience. Click the link below to fill out the request: https://webmail.atl.earthlink.net/wam/addme?a=gminc@mindspring.com&id=1hcja779Z3Nl3oW2 [[HTML alternate version deleted]] From owner-xfs@oss.sgi.com Wed Jan 31 22:29:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 31 Jan 2007 22:29:09 -0800 (PST) X-Spam-oss-Status: No, score=-1.5 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l116T2qw018030 for ; Wed, 31 Jan 2007 22:29:04 -0800 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA22012; Thu, 1 Feb 2007 17:28:04 +1100 From: donaldd@sgi.com Received: by chook.melbourne.sgi.com (Postfix, from userid 16365) id 3BAA858FF64D; Thu, 1 Feb 2007 17:28:04 +1100 (EST) To: xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: PARTIAL TAKE 957103 - Merge 2.6.x-xfs up to more recent kernels and new kdb versions Message-Id: <20070201062804.3BAA858FF64D@chook.melbourne.sgi.com> Date: Thu, 1 Feb 2007 17:28:04 +1100 (EST) X-archive-position: 10506 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: donaldd@sgi.com Precedence: bulk X-list: xfs Content-Length: 644 Lines: 19 Restore revision 1.19 that was clobbered during the last mainline pull modid: xfs-linux-melb:xfs-kern:27915a Date: Thu Feb 1 17:26:51 AEDT 2007 Workarea: chook.melbourne.sgi.com:/home/donaldd/isms/2.6.x-xfs Inspected by: donaldd The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28008a 
fs/xfs/linux-2.6/mrlock.h - 1.21 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/mrlock.h.diff?r1=text&tr1=1.21&r2=text&tr2=1.20&f=h - Restore revision 1.19 that was clobbered during the last mainline pull modid: xfs-linux-melb:xfs-kern:27915a From owner-xfs@oss.sgi.com Wed Jan 31 22:54:04 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 31 Jan 2007 22:54:08 -0800 (PST) X-Spam-oss-Status: No, score=-0.8 required=5.0 tests=AWL,BAYES_05 autolearn=ham version=3.2.0-pre1-r497472 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l116s1qw021257 for ; Wed, 31 Jan 2007 22:54:03 -0800 Received: from [134.14.55.84] (shark.melbourne.sgi.com [134.14.55.84]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA22690; Thu, 1 Feb 2007 17:53:02 +1100 Message-ID: <45C18E22.5010102@sgi.com> Date: Thu, 01 Feb 2007 17:52:18 +1100 From: Donald Douwsma User-Agent: Thunderbird 1.5.0.9 (X11/20070103) MIME-Version: 1.0 To: David Chinner CC: xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: Review: freezing sometimes leaves the log dirty References: <20070130220326.GM33919298@melbourne.sgi.com> In-Reply-To: <20070130220326.GM33919298@melbourne.sgi.com> X-Enigmail-Version: 0.94.0.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-archive-position: 10507 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: donaldd@sgi.com Precedence: bulk X-list: xfs Content-Length: 1408 Lines: 41 Hi Dave, It looks good to me. Donald David Chinner wrote: > When we freeze the filesystem on a system that is under > heavy load, the fleeze can complete it's flushes while there > are still transactions active. Hence the freeze completes > with a dirty log and dirty metadata buffers still in memory. > > The Linux freeze path is a tangled mess - I had to go back > to the irix code to work out exactly what we should be doing > to work out why the linux code was failing because of > the convoluted paths the linux code takes through the > generic layers. > > In short, when we freeze the writes, we should not be > quiescing the filesystem at this point. All we should > be doing is a blocking data sync because we haven't shut down > the transaction subsystem yet. We also need to wait > for all direct I/O writes to complete as well. > > Once the data sync is complete, we can return to the generic > code for it to freeze new transactions. Then we can wait for > all active transactions to complete before we quiesce the > filesystem which flushes out all the dirty metadata buffers. > > At this point we have a clean filesystem and an empty log > so we can safely write the unmount record followed by a > dummy record to dirty the log to ensure unlinked list > processing on remount if we crash or shut down the machine > while the filesystem is frozen. > > Comments? > > Cheers, > > Dave.
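For reference, the ordering Dave describes in the freeze thread boils down to the sequence below. This is a sketch assembled from the posted patch, not additional code: stage one runs from xfs_fs_sync_super() once writes are blocked, stage two is the reworked xfs_freeze(). The wrapper function and its argument list are invented purely to show the order of operations, and the dummy log record mentioned in the description is not visible in the posted hunks.

    /* Sketch only -- mirrors the posted patch, not a separate implementation. */
    static void
    xfs_freeze_sequence_sketch(bhv_vfs_t *vfsp, xfs_mount_t *mp)
    {
            /*
             * Stage 1 (sb->s_frozen == SB_FREEZE_WRITE): writers are blocked,
             * so flush delalloc/delwri data and wait for it, including all
             * direct I/O.  The transaction subsystem is still live, so do
             * not quiesce yet -- metadata would just get dirty again.
             */
            bhv_vfs_sync(vfsp, SYNC_FSDATA | SYNC_DELWRI | SYNC_WAIT | SYNC_DIO_WAIT, NULL);

            /* The generic freeze code now blocks new transactions. */

            /*
             * Stage 2 (xfs_freeze): drain the transactions already in
             * flight, then quiesce to flush inodes and push out all the
             * remaining dirty metadata buffers.
             */
            while (atomic_read(&mp->m_active_trans) > 0)
                    delay(100);
            xfs_quiesce_fs(mp);

            /*
             * Clean filesystem, empty log: write the unmount record and
             * push the superblock; per the description above, a dummy
             * record then re-dirties the log so unlinked list processing
             * still runs on remount if the machine dies while frozen.
             */
            xfs_log_unmount_write(mp);
            xfs_unmountfs_writesb(mp);
    }
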