From owner-xfs@oss.sgi.com Fri Jun 1 10:06:27 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 01 Jun 2007 10:06:32 -0700 (PDT) Received: from wr-out-0506.google.com (wr-out-0506.google.com [64.233.184.227]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l51H6QWt007883 for ; Fri, 1 Jun 2007 10:06:27 -0700 Received: by wr-out-0506.google.com with SMTP id i22so545183wra for ; Fri, 01 Jun 2007 10:06:26 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:subject:from:to:cc:content-type:date:message-id:mime-version:x-mailer; b=pz8R51Z6i4fOJfGtBhElNE7doQo34Mxu8YBbcj/g6nnTqdxmT2b6Ndl4o9hQ4xs4yCfUqOp0MVBiDsaJv6BZoDeH7Hb7a03ss/hI++nEQdiUbMd8u0bby88XUFTQcbzMsD4N6Dnne8xCpeoMkR2ptMYu4n4ZeleqX+zPb0OvHIE= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:subject:from:to:cc:content-type:date:message-id:mime-version:x-mailer; b=RS+qe2Re2iokfmxCG49fuFKhW4o2bj9huEjL2J+HJ47vaj2ba1sFNKlxK5LhpliE7KlNvTJK/JkF0RsNc4HA7LCWG2pQNm28TWS4nIGGqteHerkn3Xoi/wId323QAmk+FceW06rDxk2/qw9hzACnW4dF0KQCHlwF022nOvayuUw= Received: by 10.78.193.19 with SMTP id q19mr1313567huf.1180715979445; Fri, 01 Jun 2007 09:39:39 -0700 (PDT) Received: from ?192.168.1.55? 
( [84.59.122.193]) by mx.google.com with ESMTP id f7sm484088nfh.2007.06.01.09.39.36; Fri, 01 Jun 2007 09:39:36 -0700 (PDT) Subject: XFS shrink functionality From: Ruben Porras To: xfs@oss.sgi.com Cc: iusty@k1024.org, cw@f00f.org Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="=-cNVvu74QiAC041q1y3BN" Date: Fri, 01 Jun 2007 18:39:34 +0200 Message-Id: <1180715974.10796.46.camel@localhost> Mime-Version: 1.0 X-Mailer: Evolution 2.10.1 X-archive-position: 11572 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nahoo82@gmail.com Precedence: bulk X-list: xfs --=-cNVvu74QiAC041q1y3BN Content-Type: text/plain Content-Transfer-Encoding: quoted-printable Hello, I'm investigating the possibility of writing the code necessary to shrink an XFS filesystem myself (I'd be able to dedicate a day per week). While trying to find out whether something has already been done, I came across the mails of a previous attempt [1], [2] (I'm cc'ing the people involved). At first glance the patch is a little outdated and no longer applies (as of Linux 2.6.18, which is the last customised kernel that I was able to run under a Xen environment), because at least the function xfs_fs_geometry has changed. I'm really curious about what happened to these patches and why they were discontinued. The second part was never made public, and there was no answer either. Was there any flaw in the posted code, or anything in XFS that makes it especially hard to shrink [3], that discouraged the development? Given that, the first questions that arise are: would there be some assistance or buy-in from the developers? How doable is it? What are the requirements for a programmer, from your point of view? Thank you. 
[1] http://oss.sgi.com/archives/xfs/2005-08/msg00142.html [2] http://oss.sgi.com/archives/xfs/2005-09/msg00038.html [3] the only limitation that I might think of is not being able to shrink past the internal journal. --=-cNVvu74QiAC041q1y3BN Content-Type: application/pgp-signature; name=signature.asc Content-Description: Dies ist ein digital signierter Nachrichtenteil -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) iD8DBQBGYEvGYubrKblAx+oRAhjZAJ4terI47D96a4JX8wsyge5iA/f6nQCgkTeN 3qlB2vKeB7015poCrDexuWQ= =7rZq -----END PGP SIGNATURE----- --=-cNVvu74QiAC041q1y3BN-- From owner-xfs@oss.sgi.com Fri Jun 1 18:14:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 01 Jun 2007 18:14:11 -0700 (PDT) Received: from silver.tritoncore.com (silver.tritoncore.com [209.59.142.74]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l521E6Wt027310 for ; Fri, 1 Jun 2007 18:14:08 -0700 Received: from powersle by silver.tritoncore.com with local (Exim 4.63) (envelope-from ) id 1HtotX-00045K-VP for xfs@oss.sgi.com; Thu, 31 May 2007 13:57:08 -0400 To: xfs@oss.sgi.com Subject: Vacancy! Vacancy!! Vacancy!!! From: "Kenneth Fabrics Ltd." Reply-To: kennethfabricsltdd@mail2recruiter.com MIME-Version: 1.0 Content-Type: text/plain Content-Transfer-Encoding: 8bit Message-Id: Date: Thu, 31 May 2007 13:57:07 -0400 X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - silver.tritoncore.com X-AntiAbuse: Original Domain - oss.sgi.com X-AntiAbuse: Originator/Caller UID/GID - [32761 32002] / [47 12] X-AntiAbuse: Sender Address Domain - silver.tritoncore.com X-Source: X-Source-Args: X-Source-Dir: X-archive-position: 11573 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: employment.dept@kennethfabricsltd.com Precedence: bulk X-list: xfs From The Desk Of The: Recruitment Manager Mr Kenneth Holley Kenneth Fabrics Limited. 
Dear , KENNETH FABRICS LTD.Is committed to global citizenship by operating in a responsible and sustainable manner around the globe. As part of our Multi Level Marketing scheme, we need capable hands to act as representative/book keeper in the United Kingdom and Canada on the company’s behalf. Kenneth Fabrics Ltd.. Is a new Store under KENNETH FARICS in India? We are into supplies of Raw Materials. We are ranked No.1 among India private enterprises with annual production capacity exceeding 1 million units sold everywhere in India and exported to all over the world including UK, Mexico, Southeast Asia countries and European countries. We have won a good reputation for high-quality products, prompt delivery and close cooperation among our customers. We needs a representative in the United states, United Kingdom, Canada, Mexico, Southeast Asia countries and European countries, to act as our Online Staff through which our customers can pay outstanding bills owed by them to us in your Region via Bank Wire Transfer. JOB DESCRIPTION: 1. Receive payment from Clients by wire transfer and Cheques 2. Deduct 10% which will be your commission on each payment processed. 3. Forward the balance after deducting of 10% commission to offices which shall be provided by you as soon as the fund becomes available. HOW MUCH WILL YOU EARN: 10% from each operation! For instance: you receive £ 5000 or $5000 via wire transfer Or Cheques on our behalf. You will cash the money and keep £ 500 or $500 (10% from £ 5000-$5000) for yourself! At the beginning your commission will equal 10%. After creditable performance, your commission may be reviewed for increment. We are looking only for the Honest and Open – Hearted Individual who satisfies our requirements and glad to offer this job position to you. If our proposals interest you, Do get back to us with your under listed detailed information; Names:.................. Address:................ City:................... Zip Code:............... 
State:.................. Country:................ Home Phone:............. Cell Phone:............. Gender:................. Age..................... Thanks for Reading Our Job Offer Kenneth Fabrics Limited 301-A, World Trade Tower Barakhamba Lane New Delhi -110001 India From owner-xfs@oss.sgi.com Sat Jun 2 14:25:00 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 14:25:03 -0700 (PDT) Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l52LOxWt024035 for ; Sat, 2 Jun 2007 14:25:00 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 57052B000083; Sat, 2 Jun 2007 17:24:58 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 51C8350001A7; Sat, 2 Jun 2007 17:24:58 -0400 (EDT) Date: Sat, 2 Jun 2007 17:24:58 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: linux-kernel@vger.kernel.org cc: xfs@oss.sgi.com Subject: Kernel 2.6.22-rc3 safe to migrate to? Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-archive-position: 11574 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Wondering, as there were a lot of XFS-related issues early on in development. The 2.6.22-rc3 kernel has the Core 2 Duo coretemp patch by ruik, which I want to be running, as long as 2.6.22-rc3 does not have any severe XFS issues. Justin. 
From owner-xfs@oss.sgi.com Sat Jun 2 14:35:55 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 14:35:57 -0700 (PDT) Received: from wa-out-1112.google.com (wa-out-1112.google.com [209.85.146.181]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l52LZsWt026693 for ; Sat, 2 Jun 2007 14:35:54 -0700 Received: by wa-out-1112.google.com with SMTP id l24so1189895waf for ; Sat, 02 Jun 2007 14:35:54 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:message-id:date:from:to:subject:cc:mime-version:content-type:content-transfer-encoding:content-disposition; b=GN6AfKnezeJpV3TkJoPQujAokhgjWw2yXxH39wXfwWmEBMPzjSn5b18uVJz0I44qkZ7JlLX2AjJGOFsCIZ1VL/P3VBlSPD0uHydGGIFfXcrm+nuSlnQGBnj3EUHonhkvz67PqPV03jsfIDTUqzwRMml8yTWRnkwUYfdSXk0+/Ps= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:to:subject:cc:mime-version:content-type:content-transfer-encoding:content-disposition; b=gmPrf1JYRMcZujT3O3OgGZOA+Fet2Agyr5OC5Mv8kUQhx7Whpj/65MZTNhL9JHW3Naw4ZgAhSDyQ5EzvV2c1JT51N+v2KWBIbM/9sl41kS3m0JZ/ljOzE9LANaV3kDL96+kLPIUNYURQZRFa59qIIV0Lz9s4eMysTY8e97RBZ9E= Received: by 10.115.92.2 with SMTP id u2mr3130430wal.1180818448644; Sat, 02 Jun 2007 14:07:28 -0700 (PDT) Received: by 10.114.14.8 with HTTP; Sat, 2 Jun 2007 14:07:28 -0700 (PDT) Message-ID: <5d96567b0706021407q4455b60asd9d23ef82cb90b55@mail.gmail.com> Date: Sun, 3 Jun 2007 00:07:28 +0300 From: "Raz Ben-Jehuda(caro)" To: linux-xfs@oss.sgi.com Subject: corruption bug in 2.6.17 Cc: alkirkco@sgi.com MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-archive-position: 11575 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: raziebe@gmail.com Precedence: bulk X-list: xfs Mandy Hello I have got into a serious trouble with xfs . 
I had some directories returning "no such file or directory". This XFS filesystem is over a RAID5 of 4 disks, with a 1MB stripe unit size. Files are hierarchically extended to 1MB using the "XFS_IOC_FSSETXATTR" ioctl. When I tried to create a new file I got the below oops. Could it be that your fix for 2.6.17.7 solves this problem? Thank you. raz [4499008.609000] Filesystem "md1": XFS internal error xfs_dir2_block_addname at line 105 of file fs/xfs/xfs_dir2_block.c. Caller 0xc10e7e1c [4499008.634000] xfs_dir2_block_addname+0x88b/0x8a9 xfs_dir2_createname+0x11a/0x179 [4499008.654000] xfs_dir2_createname+0x11a/0x179 xfs_bmap_last_offset+0xce/0x128 [4499008.673000] xfs_dir2_isblock+0x32/0x82 xfs_dir2_createname+0x11a/0x179 [4499008.690000] kmem_zone_alloc+0x57/0xc5 xfs_trans_ijoin+0x35/0x7f [4499008.707000] xfs_create+0x45b/0x7aa xfs_vn_mknod+0x2f9/0x37e [4499008.725000] xfs_vn_lookup+0x79/0x96 lookup_mnt+0x32/0x5c [4499008.741000] do_lookup+0x50/0xa8 dput+0x23/0x183 [4499008.755000] __link_path_walk+0x50/0xe17 lookup_mnt+0x32/0x5c [4499008.772000] mntput_no_expire+0x22/0xb1 mntput_no_expire+0x22/0xb1 [4499008.789000] link_path_walk+0x71/0xc9 permission+0x7e/0x9b [4499008.805000] vfs_create+0x7d/0x122 open_namei+0x69e/0x6ff [4499008.820000] page_add_file_rmap+0x29/0x2b do_filp_open+0x43/0x61 [4499008.837000] do_sys_open+0x57/0xe1 sys_open+0x27/0x2b [4499008.852000] syscall_call+0x7/0xb [4499008.863000] Filesystem "md1": XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c. 
Caller 0xc112100e [4499008.886000] xfs_trans_cancel+0x103/0x136 xfs_create+0x2ec/0x7aa [4499008.903000] xfs_create+0x2ec/0x7aa xfs_vn_mknod+0x2f9/0x37e [4499008.918000] xfs_vn_lookup+0x79/0x96 lookup_mnt+0x32/0x5c [4499008.935000] do_lookup+0x50/0xa8 dput+0x23/0x183 [4499008.948000] __link_path_walk+0x50/0xe17 lookup_mnt+0x32/0x5c [4499008.966000] mntput_no_expire+0x22/0xb1 mntput_no_expire+0x22/0xb1 [4499008.983000] link_path_walk+0x71/0xc9 permission+0x7e/0x9b [4499008.999000] vfs_create+0x7d/0x122 open_namei+0x69e/0x6ff [4499009.015000] page_add_file_rmap+0x29/0x2b do_filp_open+0x43/0x61 [4499009.031000] do_sys_open+0x57/0xe1 sys_open+0x27/0x2b [4499009.046000] syscall_call+0x7/0xb [4499009.057000] xfs_force_shutdown(md1,0x8) called from line 1151 of file fs/xfs/xfs_trans.c. Return address = 0xc1131b54 [4499009.078000] Filesystem "md1": Corruption of in-memory data detected. Shutting down filesystem: md1 [4499009.097000] Please umount the filesystem, and rectify the problem(s) -- Raz From owner-xfs@oss.sgi.com Sat Jun 2 14:56:36 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 14:56:37 -0700 (PDT) Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l52LuYWt003639 for ; Sat, 2 Jun 2007 14:56:35 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 8B9C4B000083; Sat, 2 Jun 2007 17:39:13 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 856EA50001A7; Sat, 2 Jun 2007 17:39:13 -0400 (EDT) Date: Sat, 2 Jun 2007 17:39:13 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: "Raz Ben-Jehuda(caro)" cc: linux-xfs@oss.sgi.com, alkirkco@sgi.com Subject: Re: corruption bug in 2.6.17 In-Reply-To: <5d96567b0706021407q4455b60asd9d23ef82cb90b55@mail.gmail.com> Message-ID: References: <5d96567b0706021407q4455b60asd9d23ef82cb90b55@mail.gmail.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; 
format=flowed X-archive-position: 11576 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Raz, 2.6.17 through 2.6.17.6 had the nasty corruption bug you're seeing; best to restore from backup and/or check the FAQ, and get off that kernel range ASAP. On Sun, 3 Jun 2007, Raz Ben-Jehuda(caro) wrote: > Mandy Hello > > I have got into a serious trouble with xfs . I had some directories > returning "no such file or directory". This XFS filesystem is over a > RAID5 of 4 disks, with a 1MB stripe unit size. Files are hierarchically > extended to 1MB using the "XFS_IOC_FSSETXATTR" ioctl. > > When I tried to create a new file I got the below oops. > Could it be that your fix for 2.6.17.7 solves this problem? > Thank you. > raz > > [4499008.609000] Filesystem "md1": XFS internal error > xfs_dir2_block_addname at line 105 of file fs/xfs/xfs_dir2_block.c. > Caller 0xc10e7e1c > [4499008.634000] xfs_dir2_block_addname+0x88b/0x8a9 > xfs_dir2_createname+0x11a/0x179 > [4499008.654000] xfs_dir2_createname+0x11a/0x179 > xfs_bmap_last_offset+0xce/0x128 > [4499008.673000] xfs_dir2_isblock+0x32/0x82 > xfs_dir2_createname+0x11a/0x179 > [4499008.690000] kmem_zone_alloc+0x57/0xc5 > xfs_trans_ijoin+0x35/0x7f > [4499008.707000] xfs_create+0x45b/0x7aa > xfs_vn_mknod+0x2f9/0x37e > [4499008.725000] xfs_vn_lookup+0x79/0x96 > lookup_mnt+0x32/0x5c > [4499008.741000] do_lookup+0x50/0xa8 dput+0x23/0x183 > [4499008.755000] __link_path_walk+0x50/0xe17 > lookup_mnt+0x32/0x5c > [4499008.772000] mntput_no_expire+0x22/0xb1 > mntput_no_expire+0x22/0xb1 > [4499008.789000] link_path_walk+0x71/0xc9 > permission+0x7e/0x9b > [4499008.805000] vfs_create+0x7d/0x122 > open_namei+0x69e/0x6ff > [4499008.820000] page_add_file_rmap+0x29/0x2b > do_filp_open+0x43/0x61 > [4499008.837000] do_sys_open+0x57/0xe1 > sys_open+0x27/0x2b > [4499008.852000] syscall_call+0x7/0xb > [4499008.863000] Filesystem "md1": XFS internal error 
xfs_trans_cancel > at line 1150 of file fs/xfs/xfs_trans.c. Caller 0xc112100e > [4499008.886000] xfs_trans_cancel+0x103/0x136 > xfs_create+0x2ec/0x7aa > [4499008.903000] xfs_create+0x2ec/0x7aa > xfs_vn_mknod+0x2f9/0x37e > [4499008.918000] xfs_vn_lookup+0x79/0x96 > lookup_mnt+0x32/0x5c > [4499008.935000] do_lookup+0x50/0xa8 dput+0x23/0x183 > [4499008.948000] __link_path_walk+0x50/0xe17 > lookup_mnt+0x32/0x5c > [4499008.966000] mntput_no_expire+0x22/0xb1 > mntput_no_expire+0x22/0xb1 > [4499008.983000] link_path_walk+0x71/0xc9 > permission+0x7e/0x9b > [4499008.999000] vfs_create+0x7d/0x122 > open_namei+0x69e/0x6ff > [4499009.015000] page_add_file_rmap+0x29/0x2b > do_filp_open+0x43/0x61 > [4499009.031000] do_sys_open+0x57/0xe1 > sys_open+0x27/0x2b > [4499009.046000] syscall_call+0x7/0xb > [4499009.057000] xfs_force_shutdown(md1,0x8) called from line 1151 of > file fs/xfs/xfs_trans.c. Return address = 0xc1131b54 > [4499009.078000] Filesystem "md1": Corruption of in-memory data > detected. Shutting down filesystem: md1 > [4499009.097000] Please umount the filesystem, and rectify the problem(s) > > > -- > Raz > > From owner-xfs@oss.sgi.com Sat Jun 2 14:59:10 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 14:59:14 -0700 (PDT) Received: from mail.g-house.de (ns2.g-housing.de [81.169.133.75]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l52Lx9Wt005112 for ; Sat, 2 Jun 2007 14:59:10 -0700 Received: from [82.41.246.210] (helo=[10.0.0.30]) by mail.g-house.de with esmtpsa (TLS-1.0:DHE_RSA_AES_256_CBC_SHA:32) (Exim 4.50) id 1Hubck-0003Ai-Dd; Sat, 02 Jun 2007 23:59:02 +0200 Date: Sat, 2 Jun 2007 22:59:01 +0100 (BST) From: Christian Kujau X-X-Sender: evil@sheep.housecafe.de To: Justin Piszcz cc: linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Kernel 2.6.22-rc3 safe to migrate to? 
In-Reply-To: Message-ID: References: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=us-ascii X-archive-position: 11577 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lists@nerdbynature.de Precedence: bulk X-list: xfs On Sat, 2 Jun 2007, Justin Piszcz wrote: > Wondering, as there were a lot of XFS-related issues early on in > development. The 2.6.22-rc3 kernel has the Core 2 Duo coretemp patch by > ruik, which I want to be running, as long as 2.6.22-rc3 does not have any > severe XFS issues. I've been tracking -rc and running 2.6.22-rc3 for a couple of days now; no problems with XFS since this infamous 2.6.17 thingy [0]. But that's pretty useless information, I guess, since your setup will be different from mine. Did you have anything "special" in mind when asking this? C. [0] http://oss.sgi.com/projects/xfs/faq.html#dir2 -- BOFH excuse #339: manager in the cable duct From owner-xfs@oss.sgi.com Sat Jun 2 15:09:32 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 15:09:35 -0700 (PDT) Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l52M9UWt009389 for ; Sat, 2 Jun 2007 15:09:32 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 1F629B000083; Sat, 2 Jun 2007 18:09:30 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 1C358500014A; Sat, 2 Jun 2007 18:09:30 -0400 (EDT) Date: Sat, 2 Jun 2007 18:09:30 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Christian Kujau cc: linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Kernel 2.6.22-rc3 safe to migrate to? 
In-Reply-To: Message-ID: References: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-archive-position: 11578 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs On Sat, 2 Jun 2007, Christian Kujau wrote: > On Sat, 2 Jun 2007, Justin Piszcz wrote: >> Wondering, as there were a lot of XFS-related issues early on in >> development. The 2.6.22-rc3 kernel has the Core 2 Duo coretemp patch by >> ruik, which I want to be running, as long as 2.6.22-rc3 does not have any >> severe XFS issues. > > I've been tracking -rc and running 2.6.22-rc3 for a couple of days now; no problems > with XFS since this infamous 2.6.17 thingy [0]. But that's pretty useless > information, I guess, since your setup will be different from mine. Did you > have anything "special" in mind when asking this? > > C. > > [0] http://oss.sgi.com/projects/xfs/faq.html#dir2 > -- > BOFH excuse #339: > > manager in the cable duct > Thanks, I'll give it a go then--! From owner-xfs@oss.sgi.com Sat Jun 2 15:27:06 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 15:27:09 -0700 (PDT) Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l52MR5Wt016783 for ; Sat, 2 Jun 2007 15:27:06 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 8F780B000083; Sat, 2 Jun 2007 18:27:05 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 8BAB15000149; Sat, 2 Jun 2007 18:27:05 -0400 (EDT) Date: Sat, 2 Jun 2007 18:27:05 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Christian Kujau cc: linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Kernel 2.6.22-rc3 safe to migrate to? 
In-Reply-To: Message-ID: References: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-archive-position: 11579 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Jun 2 18:23:23 p34 upsd[2225]: Can't connect to UPS [belkin] (newhidups-auto): No such file or directory Jun 2 18:24:23 p34 upsmon[2228]: Poll UPS [belkin@localhost] failed - Driver not connected Hmm, something changed with USB in 2.6.22-rc3 from 2.6.21.. On Sat, 2 Jun 2007, Christian Kujau wrote: > On Sat, 2 Jun 2007, Justin Piszcz wrote: >> Wondering, as there were a lot of XFS-related issues early on in >> development. The 2.6.22-rc3 kernel has the Core 2 Duo coretemp patch by >> ruik, which I want to be running, as long as 2.6.22-rc3 does not have any >> severe XFS issues. > > I've been tracking -rc and running 2.6.22-rc3 for a couple of days now; no problems > with XFS since this infamous 2.6.17 thingy [0]. But that's pretty useless > information, I guess, since your setup will be different from mine. Did you > have anything "special" in mind when asking this? > > C. 
> > [0] http://oss.sgi.com/projects/xfs/faq.html#dir2 > -- > BOFH excuse #339: > > manager in the cable duct > > From owner-xfs@oss.sgi.com Sat Jun 2 15:52:59 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 15:53:02 -0700 (PDT) Received: from mail.g-house.de (ns2.g-housing.de [81.169.133.75]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l52MqwWt032611 for ; Sat, 2 Jun 2007 15:52:59 -0700 Received: from [82.41.246.210] (helo=[10.0.0.30]) by mail.g-house.de with esmtpsa (TLS-1.0:DHE_RSA_AES_256_CBC_SHA:32) (Exim 4.50) id 1Hubhc-0003SC-Op; Sun, 03 Jun 2007 00:04:04 +0200 Date: Sat, 2 Jun 2007 23:04:03 +0100 (BST) From: Christian Kujau X-X-Sender: evil@sheep.housecafe.de To: "Raz Ben-Jehuda(caro)" cc: linux-xfs@oss.sgi.com, alkirkco@sgi.com Subject: Re: corruption bug in 2.6.17 In-Reply-To: <5d96567b0706021407q4455b60asd9d23ef82cb90b55@mail.gmail.com> Message-ID: References: <5d96567b0706021407q4455b60asd9d23ef82cb90b55@mail.gmail.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=us-ascii; format=flowed X-archive-position: 11580 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lists@nerdbynature.de Precedence: bulk X-list: xfs On Sun, 3 Jun 2007, Raz Ben-Jehuda(caro) wrote: > When I tried to create a new file I got the below oops. > Could it be that your fix for 2.6.17.7 solves this problem? If you're running a 2.6.17 kernel (< 2.6.17.7) when you're getting these errors, I guess it's most likely this particular 2.6.17 bug. Please read http://oss.sgi.com/projects/xfs/faq.html#dir2 very carefully and upgrade xfsprogs and your kernel. C. 
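[Editorial note: the FAQ advice above boils down to a few concrete checks. A minimal sketch, assuming the affected device is /dev/md1 as in the report and that xfsprogs may or may not be installed; the xfs_repair invocation is only printed, never run, since it must only be used on an unmounted filesystem:]

```shell
#!/bin/sh
# Sketch of the dir2-bug triage from the XFS FAQ. Assumption: the device
# name /dev/md1 comes from the report above; adjust for your system.

DEV=/dev/md1

# 1. Is the running kernel in the affected 2.6.17 - 2.6.17.6 range?
case "$(uname -r)" in
  2.6.17|2.6.17.[0-6])
      echo "affected kernel: upgrade to >= 2.6.17.7 before writing to XFS" ;;
  *)  echo "kernel not in the known-bad 2.6.17 range" ;;
esac

# 2. With the filesystem unmounted, inspect first with the no-modify flag.
#    Guarded so the sketch is harmless on machines without xfsprogs.
if command -v xfs_repair >/dev/null 2>&1; then
    echo "would run: xfs_repair -n $DEV   # inspect; then xfs_repair $DEV"
else
    echo "xfs_repair not installed (xfsprogs package)"
fi
```

The dry-run (`-n`) pass reports damage without touching the disk, which is the safe first step before committing to a full repair or a restore from backup.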
-- BOFH excuse #55: Plumber mistook routing panel for decorative wall fixture From owner-xfs@oss.sgi.com Sat Jun 2 15:54:15 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 15:54:18 -0700 (PDT) Received: from mail.goop.org (gw.goop.org [64.81.55.164]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l52MsEWt000899 for ; Sat, 2 Jun 2007 15:54:14 -0700 Received: by lurch.goop.org (Postfix, from userid 525) id 4E4B72C8058; Sat, 2 Jun 2007 15:53:17 -0700 (PDT) Received: from lurch.goop.org (localhost [127.0.0.1]) by lurch.goop.org (Postfix) with ESMTP id 2EB9B2C8053; Sat, 2 Jun 2007 15:53:17 -0700 (PDT) Received: from [192.168.28.126] (outer-dhcp-126.goop.org [192.168.28.126]) by lurch.goop.org (Postfix) with ESMTP; Sat, 2 Jun 2007 15:53:17 -0700 (PDT) Message-ID: <4661F511.4070207@goop.org> Date: Sat, 02 Jun 2007 15:54:09 -0700 From: Jeremy Fitzhardinge User-Agent: Thunderbird 1.5.0.10 (X11/20070302) MIME-Version: 1.0 To: Justin Piszcz CC: linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Kernel 2.6.22-rc3 safe to migrate to? References: In-Reply-To: Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-archive-position: 11581 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeremy@goop.org Precedence: bulk X-list: xfs Justin Piszcz wrote: > Wondering, as there were a lot of XFS-related issues early on in > development. The 2.6.22-rc3 kernel has the Core 2 Duo coretemp patch > by ruik, which I want to be running, as long as 2.6.22-rc3 does not have > any severe XFS issues. XFS currently has a data-corrupting bug, where files which were appended by small amounts may lose their updates on umount - I see this corrupting hg repos. There's a patch which works for me, and is in 2.6.22-rc3-mm1, but it hasn't been merged upstream yet. 
J From owner-xfs@oss.sgi.com Sat Jun 2 15:55:45 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 15:55:48 -0700 (PDT) Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l52MtiWt001689 for ; Sat, 2 Jun 2007 15:55:45 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 33BA8B000083; Sat, 2 Jun 2007 18:55:45 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 317BD5000091; Sat, 2 Jun 2007 18:55:45 -0400 (EDT) Date: Sat, 2 Jun 2007 18:55:45 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Jeremy Fitzhardinge cc: linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Kernel 2.6.22-rc3 safe to migrate to? In-Reply-To: <4661F511.4070207@goop.org> Message-ID: References: <4661F511.4070207@goop.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-archive-position: 11582 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs On Sat, 2 Jun 2007, Jeremy Fitzhardinge wrote: > Justin Piszcz wrote: >> Wondering, as there were a lot of XFS-related issues early on in >> development. The 2.6.22-rc3 kernel has the Core 2 Duo coretemp patch >> by ruik, which I want to be running, as long as 2.6.22-rc3 does not have >> any severe XFS issues. > > XFS currently has a data-corrupting bug, where files which were appended > by small amounts may lose their updates on umount - I see this > corrupting hg repos. There's a patch which works for me, and is in > 2.6.22-rc3-mm1, but it hasn't been merged upstream yet. > > J > > Ah, that's it. And USB appears to be broken as well, so I'll stick with 2.6.21.3 for now. Thanks! Justin. 
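[Editorial note: the access pattern Jeremy describes - a file grown by many small appends, as a Mercurial repository does with its metadata - looks like the sketch below. This does not reproduce the bug itself (that requires the affected kernel plus an unmount/remount cycle on XFS); it only illustrates the append pattern together with a sync-and-size check:]

```shell
#!/bin/sh
# Illustration of the small-append workload described above. The file
# name comes from mktemp; the 16-byte record size is an arbitrary choice
# standing in for hg's small metadata appends.

F=$(mktemp)

# Append 100 small (16-byte) records.
i=0
while [ $i -lt 100 ]; do
    printf '%015d\n' "$i" >> "$F"   # 15 digits + newline = 16 bytes
    i=$((i + 1))
done

sync    # flush dirty data; on a healthy kernel the size below is stable

SIZE=$(wc -c < "$F" | tr -d ' \t')
echo "appended size: $SIZE bytes"   # 100 x 16 = 1600
rm -f "$F"
```

The bug report is precisely that, on the affected kernels, such recently appended tails could vanish across an unmount, so the post-remount size would come up short of what was written and synced.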
From owner-xfs@oss.sgi.com Sat Jun 2 16:00:46 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 16:00:49 -0700 (PDT) Received: from smtp1.linux-foundation.org (smtp1.linux-foundation.org [207.189.120.13]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l52N0jWt003744; Sat, 2 Jun 2007 16:00:46 -0700 Received: from localhost (phoenix.linux-foundation.org [207.189.120.27]) by smtp1.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with ESMTP id l52Mkqbd009363; Sat, 2 Jun 2007 15:46:53 -0700 Date: Sat, 2 Jun 2007 15:46:47 -0700 (PDT) From: Linus Torvalds To: David Greaves cc: "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , netdev@oss.sgi.com, linux-pm Subject: Re: 2.6.22-rc3 hibernate(?) fails totally - regression In-Reply-To: <4661EFBB.5010406@dgreaves.com> Message-ID: References: <46608E3F.4060201@dgreaves.com> <200706012342.45657.rjw@sisk.pl> <46609FAD.7010203@dgreaves.com> <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=us-ascii Received-SPF: neutral (207.189.120.27 is neither permitted nor denied by domain of torvalds@linux-foundation.org) X-MIMEDefang-Filter: osdl$Revision: 1.179 $ X-Scanned-By: MIMEDefang 2.53 on 207.189.120.13 X-archive-position: 11583 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: torvalds@linux-foundation.org Precedence: bulk X-list: xfs On Sat, 2 Jun 2007, David Greaves wrote: > > Then 2.6.22-rc3 again but CONFIG_DISABLE_CONSOLE_SUSPEND=y > It suspended again. > Froze on restore. > Screen photo here: > http://www.dgreaves.com/pub/2.6.21-rc3-resume-failure.jpg Ok, it wasn't a hidden oops. The DISABLE_CONSOLE_SUSPEND=y thing sometimes shows oopses that are otherwise hidden, but at other times it just causes more problems (hard hangs when trying to display something on a device that is suspended, or behind a bridge that got suspended). 
In your case, the screen output just shows normal resume output, and it apparently just hung for some unknown reason. It *may* be worth trying to do a SysRQ + 't' thing to see what tasks are running (or rather, not running), but since you won't be able to capture it, it's probably not going to be useful. > Then 2.6.22-rc3 again but CONFIG_DISABLE_CONSOLE_SUSPEND=y > This time, before suspending I unmounted my xfs/lvm/raid6 filesystem. > Just a umount, I left the devices/array up. > It suspended again. > This time it resumed without fault. It would be interesting to see what triggered it, since it apparently worked before. So yes, a bisection would be great. Linus From owner-xfs@oss.sgi.com Sat Jun 2 16:02:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 16:02:16 -0700 (PDT) Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l52N29Wt004683; Sat, 2 Jun 2007 16:02:11 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id B51C7E6CAD; Sat, 2 Jun 2007 23:31:21 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id BfTBdpdPbsQE; Sat, 2 Jun 2007 23:29:21 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 2406EE6C62; Sat, 2 Jun 2007 23:31:21 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1Huc83-0003U5-Kv; Sat, 02 Jun 2007 23:31:23 +0100 Message-ID: <4661EFBB.5010406@dgreaves.com> Date: Sat, 02 Jun 2007 23:31:23 +0100 From: David Greaves User-Agent: Icedove 1.5.0.10 (X11/20070329) MIME-Version: 1.0 To: "Rafael J. Wysocki" , Linus Torvalds , xfs@oss.sgi.com Cc: "'linux-kernel@vger.kernel.org'" , netdev@oss.sgi.com, linux-pm Subject: Re: 2.6.22-rc3 hibernate(?) 
fails totally - regression References: <46608E3F.4060201@dgreaves.com> <200706012342.45657.rjw@sisk.pl> <46609FAD.7010203@dgreaves.com> <200706020122.49989.rjw@sisk.pl> In-Reply-To: <200706020122.49989.rjw@sisk.pl> X-Enigmail-Version: 0.94.2.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 11584 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs This started as a non-regression bug-report about wakeonlan During tests I found a real regression and this email is only about 2.6.22-rc3 without Rafael's patches - which I'll happily come back to later :) Rafael J. Wysocki wrote: > On Saturday, 2 June 2007 00:37, David Greaves wrote: >> Rafael J. Wysocki wrote: >>> On Friday, 1 June 2007 23:23, David Greaves wrote: >> The real situation is worse :( > > Ouch. > >> 2.6.22-rc3 (no patches) just hangs on suspend at: >> Suspending consoles >> >> console switching works but needs a hard reset to reboot. >> >> 2.6.22-rc3-skge (with Rafael's patches) > Can you set CONFIG_DISABLE_CONSOLE_SUSPEND in .config and see where exactly it > fails? Given that I expected it to fail I unmounted my 1Tb array data before suspending. It succeeded... so I started digging... So I tried again. 2.6.22-rc3 (vanilla) suspended OK this time but on resume hung at: Stopping tasks done Shrinking memory done (0 pages freed) Freed 0 kb in 0.0 secs (0.0MB/s) Suspending console(s) Then 2.6.22-rc3 again but CONFIG_DISABLE_CONSOLE_SUSPEND=y It suspended again. Froze on restore. Screen photo here: http://www.dgreaves.com/pub/2.6.21-rc3-resume-failure.jpg Then 2.6.22-rc3 again but CONFIG_DISABLE_CONSOLE_SUSPEND=y This time, before suspending I unmounted my xfs/lvm/raid6 filesystem. Just a umount, I left the devices/array up. It suspended again. This time it resumed without fault. 
The machine has another xfs filesystem : /dev/hdb2 on /scratch type xfs (rw) I've started bisecting... any other info needed? David Added Linus as it's a regression Added xfs as unmounting an xfs filesystem 'fixes' it. From owner-xfs@oss.sgi.com Sat Jun 2 17:27:24 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 17:27:28 -0700 (PDT) Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l530RMWt003142 for ; Sat, 2 Jun 2007 17:27:24 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id C6A05B000083; Sat, 2 Jun 2007 20:27:22 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id C019150025A7; Sat, 2 Jun 2007 20:27:22 -0400 (EDT) Date: Sat, 2 Jun 2007 20:27:22 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Christian Kujau cc: Jeremy Fitzhardinge , linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Kernel 2.6.22-rc3 safe to migrate to? In-Reply-To: Message-ID: References: <4661F511.4070207@goop.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-archive-position: 11586 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs On Sun, 3 Jun 2007, Christian Kujau wrote: > On Sat, 2 Jun 2007, Justin Piszcz wrote: >> On Sat, 2 Jun 2007, Jeremy Fitzhardinge wrote: >>> XFS currently has a data-corrupting bug, where files which were appended >>> by small amounts may lose their updates on umount - I see this >>> corrupting hg repos. There's a patch which works for me, and is in >>> 2.6.22-rc3-mm1, but it hasn't been merged upstream yet. > > Just for the record, you mean this one: > http://lkml.org/lkml/2007/5/12/93 ..right? (haven't been bitten by this > one...yet) > >> Ah that's it- and USB appears to be broken as well, I'll stick with >> 2.6.21.3 for now. 
> > Got any pointers? I'm using USB right now, one is i386 with 2.6.22-rc3, > another one is powerpc, tracking -git and USB seems to work. > > C. > -- > BOFH excuse #15: > > temporary routing anomaly > Sent this in another e-mail to LKML, it breaks support for my UPS: From jpiszcz@lucidpixels.com Sat Jun 2 18:43:52 2007 Date: Sat, 2 Jun 2007 18:43:52 -0400 (EDT) From: Justin Piszcz To: linux-kernel@vger.kernel.org Subject: Kernel 2.6.22-rc3 breaks USB: Unable to get HID descriptor (error sending control message: Operation not permitted) I use nut-2.0.4-4 with a UPS attached via USB and from 2.6.21.3 -> 2.6.22-rc3 it stops working, see below. My .config is attached. 2.6.21.3: p34:~# /lib/nut/newhidups -u nut -DDDDDD auto Checking device (050D/0912) (005/002) - VendorID: 050d - ProductID: 0912 - Manufacturer: - Product: UPS - Serial Number: unknown - Bus: 005 Trying to match device Device matches HID descriptor retrieved (Reportlen = 820) Size read for the report descriptor: 820 Report descriptor retrieved (Reportlen = 820) Found HID device Network UPS Tools: New USB/HID UPS driver 0.28 (2.0.4) Report Descriptor size = 820 Report Descriptor: (200 bytes) => 05 84 09 04 A1 01 05 86 09 26 A1 02 85 01 75 08 Detected a UPS: /UPS Using subdriver: Belkin HID 0.1 Looking up 00840004 Looking up 00860026 Looking up 00860040 entering string_to_path() parsing UPS Looking up UPS hid_lookup_usage: found 840004 parsing BELKINConfig Looking up BELKINConfig hid_lookup_usage: found 860026 parsing BELKINConfigVoltage Looking up BELKINConfigVoltage hid_lookup_usage: found 860040 Path depth = 3 0: UPage(84), Usage(4) 1: UPage(86), Usage(26) 2: UPage(86), Usage(40) Entering libusb_get_report =>> Before exponent: 120, 0/0) =>> After conversion: 120.000000 (120), 0/0) Report : (8 bytes) => 01 78 80 00 00 00 00 00 Path: UPS.BELKINConfig.BELKINConfigVoltage, Type: Feature, Value: 120.000000 Looking up 00840004 Looking up 00860026 Looking up 00860042 (works fine) 2.6.22-rc3: p34:~# 
/lib/nut/newhidups -u nut -DDDDDD auto Checking device (050D/0912) (005/002) - VendorID: 050d - ProductID: 0912 - Manufacturer: unknown - Product: unknown - Serial Number: unknown - Bus: 005 Trying to match device Device matches failed to claim USB device, trying 2 more time(s)... detaching kernel driver from USB device... failed to detach kernel driver from USB device... trying again to claim USB device... failed to claim USB device, trying 1 more time(s)... detaching kernel driver from USB device... failed to detach kernel driver from USB device... trying again to claim USB device... failed to claim USB device, trying 0 more time(s)... detaching kernel driver from USB device... failed to detach kernel driver from USB device... trying again to claim USB device... Unable to get HID descriptor (error sending control message: Operation not permitted) [ Part 2, "" Application/OCTET-STREAM (Name: "config-2.6.22-rc3.bz2") ] [ 10KB. ] [ Unable to print this part. ] From owner-xfs@oss.sgi.com Sat Jun 2 17:26:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 17:26:04 -0700 (PDT) Received: from mail.g-house.de (ns2.g-housing.de [81.169.133.75]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l530PxWt002207 for ; Sat, 2 Jun 2007 17:26:00 -0700 Received: from [82.41.246.210] (helo=[10.0.0.30]) by mail.g-house.de with esmtpsa (TLS-1.0:DHE_RSA_AES_256_CBC_SHA:32) (Exim 4.50) id 1Hudub-0008LF-CH; Sun, 03 Jun 2007 02:25:37 +0200 Date: Sun, 3 Jun 2007 01:25:35 +0100 (BST) From: Christian Kujau X-X-Sender: evil@sheep.housecafe.de To: Justin Piszcz cc: Jeremy Fitzhardinge , linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Kernel 2.6.22-rc3 safe to migrate to? 
In-Reply-To: Message-ID: References: <4661F511.4070207@goop.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=us-ascii X-archive-position: 11585 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lists@nerdbynature.de Precedence: bulk X-list: xfs On Sat, 2 Jun 2007, Justin Piszcz wrote: > On Sat, 2 Jun 2007, Jeremy Fitzhardinge wrote: >> XFS currently has a data-corrupting bug, where files which were appended >> by small amounts may lose their updates on umount - I see this >> corrupting hg repos. There's a patch which works for me, and is in >> 2.6.22-rc3-mm1, but it hasn't been merged upstream yet. Just for the record, you mean this one: http://lkml.org/lkml/2007/5/12/93 ..right? (haven't been bitten by this one...yet) > Ah that's it- and USB appears to be broken as well, I'll stick with 2.6.21.3 > for now. Got any pointers? I'm using USB right now, one is i386 with 2.6.22-rc3, another one is powerpc, tracking -git and USB seems to work. C. 
-- BOFH excuse #15: temporary routing anomaly From owner-xfs@oss.sgi.com Sat Jun 2 23:30:56 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 02 Jun 2007 23:31:01 -0700 (PDT) Received: from mail.goop.org (gw.goop.org [64.81.55.164]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l536UtWt020599 for ; Sat, 2 Jun 2007 23:30:56 -0700 Received: by lurch.goop.org (Postfix, from userid 525) id D7F6C2C805B; Sat, 2 Jun 2007 23:29:57 -0700 (PDT) Received: from lurch.goop.org (localhost [127.0.0.1]) by lurch.goop.org (Postfix) with ESMTP id A9B452C8056; Sat, 2 Jun 2007 23:29:57 -0700 (PDT) Received: from [192.168.28.126] (outer-dhcp-126.goop.org [192.168.28.126]) by lurch.goop.org (Postfix) with ESMTP; Sat, 2 Jun 2007 23:29:57 -0700 (PDT) Message-ID: <46626019.3070807@goop.org> Date: Sat, 02 Jun 2007 23:30:49 -0700 From: Jeremy Fitzhardinge User-Agent: Thunderbird 1.5.0.10 (X11/20070302) MIME-Version: 1.0 To: Christian Kujau CC: Justin Piszcz , linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Kernel 2.6.22-rc3 safe to migrate to? References: <4661F511.4070207@goop.org> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 11587 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeremy@goop.org Precedence: bulk X-list: xfs Christian Kujau wrote: > Just for the record, you mean this one: > http://lkml.org/lkml/2007/5/12/93 ..right? (haven't been bitten by > this one...yet) That's the one. It's fairly subtle; there's no metadata/filesystem corruption, and the file just reverts back to a previous length, so if it were append-only, it would just return to a previous state. 
J From owner-xfs@oss.sgi.com Sun Jun 3 00:23:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 00:23:51 -0700 (PDT) Received: from pd3mo2so.prod.shaw.ca (shawidc-mo1.cg.shawcable.net [24.71.223.10]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l537NhWt020808 for ; Sun, 3 Jun 2007 00:23:44 -0700 Received: from pd2mr4so.prod.shaw.ca (pd2mr4so-qfe3.prod.shaw.ca [10.0.141.107]) by l-daemon (Sun ONE Messaging Server 6.0 HotFix 1.01 (built Mar 15 2004)) with ESMTP id <0JJ10086OR3JP670@l-daemon> for xfs@oss.sgi.com; Sun, 03 Jun 2007 00:23:43 -0600 (MDT) Received: from pn2ml3so.prod.shaw.ca ([10.0.121.147]) by pd2mr4so.prod.shaw.ca (Sun Java System Messaging Server 6.2-7.05 (built Sep 5 2006)) with ESMTP id <0JJ100A65R3J1M20@pd2mr4so.prod.shaw.ca> for xfs@oss.sgi.com; Sun, 03 Jun 2007 00:23:43 -0600 (MDT) Received: from [192.168.1.113] ([70.64.1.86]) by l-daemon (Sun ONE Messaging Server 6.0 HotFix 1.01 (built Mar 15 2004)) with ESMTP id <0JJ100DXZR3IE3N3@l-daemon> for xfs@oss.sgi.com; Sun, 03 Jun 2007 00:23:42 -0600 (MDT) Date: Sun, 03 Jun 2007 00:23:48 -0600 From: Robert Hancock Subject: Re: Kernel 2.6.22-rc3 safe to migrate to? 
In-reply-to: To: Justin Piszcz Cc: Christian Kujau , Jeremy Fitzhardinge , linux-kernel@vger.kernel.org, xfs@oss.sgi.com Message-id: <46625E74.8070800@shaw.ca> MIME-version: 1.0 Content-type: text/plain; charset=ISO-8859-1; format=flowed Content-transfer-encoding: 7bit References: User-Agent: Thunderbird 2.0.0.0 (Windows/20070326) X-archive-position: 11588 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hancockr@shaw.ca Precedence: bulk X-list: xfs Justin Piszcz wrote: > Sent this in another e-mail to LKML, it breaks support for my UPS: > > From jpiszcz@lucidpixels.com Sat Jun 2 18:43:52 2007 > Date: Sat, 2 Jun 2007 18:43:52 -0400 (EDT) > From: Justin Piszcz > To: linux-kernel@vger.kernel.org > Subject: Kernel 2.6.22-rc3 breaks USB: Unable to get HID descriptor > (error sending control message: Operation not permitted) > > I use nut-2.0.4-4 with a UPS attached via USB and from 2.6.21.3 -> > 2.6.22-rc3 it > stops working, see below. My .config is attached. > I also have seen this on -rc2-mm1, my APC UPS status was not showing up. I haven't investigated in any more detail. 
-- Robert Hancock Saskatoon, SK, Canada To email, remove "nospam" from hancockr@nospamshaw.ca Home Page: http://www.roberthancock.com/ From owner-xfs@oss.sgi.com Sun Jun 3 01:02:14 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 01:02:18 -0700 (PDT) Received: from mail.goop.org (gw.goop.org [64.81.55.164]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l53829Wt008118 for ; Sun, 3 Jun 2007 01:02:12 -0700 Received: by lurch.goop.org (Postfix, from userid 525) id C96452C805B; Sun, 3 Jun 2007 01:01:09 -0700 (PDT) Received: from lurch.goop.org (localhost [127.0.0.1]) by lurch.goop.org (Postfix) with ESMTP id A8CAF2C8056; Sun, 3 Jun 2007 01:01:09 -0700 (PDT) Received: from [192.168.28.126] (outer-dhcp-126.goop.org [192.168.28.126]) by lurch.goop.org (Postfix) with ESMTP; Sun, 3 Jun 2007 01:01:09 -0700 (PDT) Message-ID: <46627579.3040507@goop.org> Date: Sun, 03 Jun 2007 01:02:01 -0700 From: Jeremy Fitzhardinge User-Agent: Thunderbird 1.5.0.10 (X11/20070302) MIME-Version: 1.0 To: Michal Piotrowski CC: Justin Piszcz , linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Kernel 2.6.22-rc3 safe to migrate to? References: <4661F511.4070207@goop.org> <6bffcb0e0706030059o3e53e647uf3ae1aa3f16609d8@mail.gmail.com> In-Reply-To: <6bffcb0e0706030059o3e53e647uf3ae1aa3f16609d8@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-2 Content-Transfer-Encoding: 7bit X-archive-position: 11589 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeremy@goop.org Precedence: bulk X-list: xfs Michal Piotrowski wrote: > It's already merged into mainline > > http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=df3c7244264f1d12562413aa32d56be802486516 > Oh, good. I hadn't read through the last couple of days of commits. 
J From owner-xfs@oss.sgi.com Sun Jun 3 01:25:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 01:25:34 -0700 (PDT) Received: from wa-out-1112.google.com (wa-out-1112.google.com [209.85.146.182]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l538PSWt019301 for ; Sun, 3 Jun 2007 01:25:29 -0700 Received: by wa-out-1112.google.com with SMTP id l24so1348090waf for ; Sun, 03 Jun 2007 01:25:28 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=qnDdIp58N/s+cogb5y3YWwYZrtZxI1esxvfHwQizEwtpia3oz6PydT8YZi95ijOBjo/5uMwzsBKfquAIjMTeqAbygVRGt6oMy6V8E3lYVGnr/Wyq1mNIkx/9rdat6Qa0epYb9PKzbuq3yIoacsHw/w9WxXEvDgfl1luj2nzcKGE= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=aws8cpktjN3Zi3FuT+Fpb2VlzJ4nB826yZaHSW3MqQ50u1MlScXPtrKZdaGBTE6oGq4zP2oUh7TrDdZayX2Nupm+/tZlymSi77iUAYxx1oElfNq1ezg1oKbMUnWY0ALHYkuJS5ss0xZTX0pysXK/1xwxWWScd+Ep4omP2YW1wPo= Received: by 10.114.190.6 with SMTP id n6mr3618372waf.1180857555600; Sun, 03 Jun 2007 00:59:15 -0700 (PDT) Received: by 10.114.182.4 with HTTP; Sun, 3 Jun 2007 00:59:15 -0700 (PDT) Message-ID: <6bffcb0e0706030059o3e53e647uf3ae1aa3f16609d8@mail.gmail.com> Date: Sun, 3 Jun 2007 09:59:15 +0200 From: "Michal Piotrowski" To: "Jeremy Fitzhardinge" Subject: Re: Kernel 2.6.22-rc3 safe to migrate to? 
Cc: "Justin Piszcz" , linux-kernel@vger.kernel.org, xfs@oss.sgi.com In-Reply-To: <4661F511.4070207@goop.org> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-2; format=flowed Content-Disposition: inline References: <4661F511.4070207@goop.org> Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from base64 to 8bit by oss.sgi.com id l538PTWt019318 X-archive-position: 11590 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: michal.k.k.piotrowski@gmail.com Precedence: bulk X-list: xfs H, On 03/06/07, Jeremy Fitzhardinge wrote:> Justin Piszcz wrote:> > Wondering as their were a lot of XFS related issues early on in> > development..? The 2.6.22-rc3 kernel has the core 2 duo coretemp patch> > by ruik which I want be running as long as 2.6.22-rc3 does not have> > any severe XFS issues?>> XFS currently has a data-corrupting bug, where files which were appended> by small amounts may lose their updates on umount - I see this> corrupting hg repos. 
> There's a patch which works for me, and is in
> 2.6.22-rc3-mm1, but it hasn't been merged upstream yet.

It's already merged into mainline:

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=df3c7244264f1d12562413aa32d56be802486516

Regards,
Michal

--
"Najbardziej brakowało mi twojego milczenia." ("What I missed most was your
silence.") -- Andrzej Sapkowski, "Coś więcej"

From owner-xfs@oss.sgi.com Sun Jun 3 01:41:40 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 01:41:42 -0700 (PDT) Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l538fcWt026312 for ; Sun, 3 Jun 2007 01:41:39 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id B9B3FB000083; Sun, 3 Jun 2007 04:41:38 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id B5DDA500009A; Sun, 3 Jun 2007 04:41:38 -0400 (EDT) Date: Sun, 3 Jun 2007 04:41:38 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Jeremy Fitzhardinge cc: Michal Piotrowski , linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: Kernel 2.6.22-rc3 safe to migrate to? In-Reply-To: <46627579.3040507@goop.org> Message-ID: References: <4661F511.4070207@goop.org> <6bffcb0e0706030059o3e53e647uf3ae1aa3f16609d8@mail.gmail.com> <46627579.3040507@goop.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-archive-position: 11591 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs On Sun, 3 Jun 2007, Jeremy Fitzhardinge wrote: > Michal Piotrowski wrote: >> It's already merged into mainline >> >> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=df3c7244264f1d12562413aa32d56be802486516 >> > > Oh, good. I hadn't read through the last couple of days of commits. > > J > > Nice, any word on the USB bug?
From owner-xfs@oss.sgi.com Sun Jun 3 08:03:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 08:03:56 -0700 (PDT) Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l53F3oWt017809; Sun, 3 Jun 2007 08:03:51 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 44CA5E6E87; Sun, 3 Jun 2007 16:03:42 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id E4XNnJ5bECXv; Sun, 3 Jun 2007 16:01:41 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 782C3E6D03; Sun, 3 Jun 2007 16:03:40 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1HurcR-0004gS-3f; Sun, 03 Jun 2007 16:03:47 +0100 Message-ID: <4662D852.4000005@dgreaves.com> Date: Sun, 03 Jun 2007 16:03:46 +0100 From: David Greaves User-Agent: Icedove 1.5.0.10 (X11/20070329) MIME-Version: 1.0 To: Linus Torvalds , Tejun Heo Cc: "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , netdev@oss.sgi.com, linux-pm , Neil Brown Subject: Re: 2.6.22-rc3 hibernate(?) fails totally - regression References: <46608E3F.4060201@dgreaves.com> <200706012342.45657.rjw@sisk.pl> <46609FAD.7010203@dgreaves.com> <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> In-Reply-To: X-Enigmail-Version: 0.94.2.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 11592 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs Linus Torvalds wrote: > It would be interesting to see what triggered it, since it apparently > worked before. So yes, a bisection would be great. 
Tejun, all the problematic patches are yours - so adding you. Neil, since the problem only occurs whilst an xfs filesystem is mounted on a raid6 array, I've cc'ed you too... OK Got as far as I could... I've run 9 or 10 kernels/bisects and got to a point with 8 of Tejun's changesets where it wouldn't compile: CC drivers/ata/sata_via.o drivers/ata/sata_via.c:120: error: `ata_scsi_device_suspend' undeclared here (not in a function) drivers/ata/sata_via.c:120: error: initializer element is not constant drivers/ata/sata_via.c:120: error: (near initialization for `svia_sht.suspend') drivers/ata/sata_via.c:121: error: `ata_scsi_device_resume' undeclared here (not in a function) drivers/ata/sata_via.c:121: error: initializer element is not constant drivers/ata/sata_via.c:121: error: (near initialization for `svia_sht.resume') make[2]: *** [drivers/ata/sata_via.o] Error 1 git bisect visualise gave: bad: 48aaae7a2fa46e1ed0d0b7677fde79ccfcb8c963 bisect: 54936f8b099325992f0f212a5e366fd5257c6c9c good: 0a3fd051c7036ef71b58863f8e5da7c3dabd9d3f I used: git reset --hard 8575b814097af648dad284bd3087875a11b13d18 git reset --hard e92351bb53c0849fabfa80be53cbf3b0aa166e54 git reset --hard 3a32a8e96694a243ec7e7feb6d76dfc4b1fe90c1 git reset --hard 9666f4009c22f6520ac3fb8a19c9e32ab973e828 to step through - non compiled git reset --hard 1d30c33d8d07868199560b24f10ed6280e78a89c compiled and hung on resume. given the first patch identified is 9666f4009c22f6520ac3fb8a19c9e32ab973e828: "libata: reimplement suspend/resume support using sdev->manage_start_stop" That seems a good candidate... Incidentally, when I compile 1d30c33d8d07868199560b24f10ed6280e78a89c (far side of the implicated changesets) if I umount my xfs over raid6 filesystem (no lvm as I said in the OP) the resume succeeds. David PS I hope I've interpreted bisect correctly - first use and all that... 
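David's manual stepping around the eight changesets that wouldn't compile is, in effect, a bisection in which some commits are untestable. The sketch below is a toy model of that search strategy in Python - it is not git's actual implementation, and the commit statuses are hypothetical, not the real hashes from the report above:

```python
# Toy model of bisecting a linear history that is all-"good" up to some
# commit and all-"bad" afterwards, where some commits are untestable
# because they don't compile (what David hit above).

def first_bad(test, n):
    """Return the index of the first bad commit among commits 0..n-1.

    Assumes test(0) == "good" and test(n-1) == "bad"; test(i) may also
    return "skip" for commits that cannot be built/tested."""
    lo, hi = 0, n - 1                 # invariant: lo is good, hi is bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        result = None
        # Try the midpoint first, then the nearest testable neighbours.
        for cand in sorted(range(lo + 1, hi), key=lambda i: abs(i - mid)):
            r = test(cand)
            if r != "skip":
                result, mid = r, cand
                break
        if result is None:            # every in-between commit is untestable,
            return hi                 # so the first bad commit is in (lo, hi]
        if result == "good":
            lo = mid
        else:
            hi = mid
    return hi

statuses = ["good", "good", "skip", "good", "skip", "bad", "bad"]
print(first_bad(lambda i: statuses[i], len(statuses)))  # -> 5
```

With unbuildable commits in the middle of the suspect range, the search can only narrow the culprit to a span of untestable commits plus the first testable bad one - which is why the 9666f400 changeset above is a candidate rather than a certainty.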
From owner-xfs@oss.sgi.com Sun Jun 3 14:37:59 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 14:38:03 -0700 (PDT) Received: from node21.rhrz.uni-bonn.de (node21-gb.rhrz.uni-bonn.de [131.220.15.211]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l53LbuWt026814 for ; Sun, 3 Jun 2007 14:37:58 -0700 Received: (from wwwserv@localhost) by node21.rhrz.uni-bonn.de (8.9.3/8.9.3) id XAA83218; Sun, 3 Jun 2007 23:37:56 +0200 Date: Sun, 3 Jun 2007 23:37:56 +0200 Message-Id: <200706032137.XAA83218@node21.rhrz.uni-bonn.de> To: xfs@oss.sgi.com Subject: Properties. From: Peter Kok Reply-To: as564ag@yahoo.com.hk MIME-Version: 1.0 Content-Type: text/plain Content-Transfer-Encoding: 8bit X-archive-position: 11593 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: as564ag@yahoo.com.hk Precedence: bulk X-list: xfs Good Day, I wish to introduce myself to you. I am Peter Kok a top Sudanese Goverment official who opposed the war in Dalfour in my country Sudan.Due to my oppostion to the war, the goverment of my country has been persecuting me and my family.Consequently my wife,children and I managed to enter a red cross airplane that was evacuating foreigners and we are presently in Cape Town, South Africa. We wish to invest in properties in your country with your assistance and cooperation.If you are in a good position to help my family, please send an e-mail to the e-mail address below indicating your desire to help my family invest this funds in your country. best regards Hope to meet you soon. 
God bless, Peter Kok E-mail:as564ag@yahoo.com.hk From owner-xfs@oss.sgi.com Sun Jun 3 17:16:55 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 17:16:59 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l540GpWt026604 for ; Sun, 3 Jun 2007 17:16:53 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA27993; Mon, 4 Jun 2007 10:16:41 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l540GcAf113319342; Mon, 4 Jun 2007 10:16:39 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l540GWCC113287721; Mon, 4 Jun 2007 10:16:32 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 4 Jun 2007 10:16:32 +1000 From: David Chinner To: Ruben Porras Cc: xfs@oss.sgi.com, iusty@k1024.org, cw@f00f.org Subject: Re: XFS shrink functionality Message-ID: <20070604001632.GA86004887@sgi.com> References: <1180715974.10796.46.camel@localhost> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1180715974.10796.46.camel@localhost> User-Agent: Mutt/1.4.2.1i X-archive-position: 11594 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Jun 01, 2007 at 06:39:34PM +0200, Ruben Porras wrote: > Hello, > > I'm investigating the possibility to write myself the necessary code to > shrink an xfs filesystem (I'd be able to dedicate a day/week). Trying to > know if something is already done I came across the mails of a previous > intent [0], [1] (I'm cc'ing the people involved). 
Oh, thanks for pointing those out - they're before my time ;)

> At first glance the patch is a little outdated and will no longer apply
> (as of linux 2.6.18, which is the last customised kernel that I was
> able to run under a XEN environment), because at least the function
> xfs_fs_geometry has changed.

Any work on this would need to be done against the current mainline of the
xfs-dev tree. Yes, that patch is out of date, and it also did things that
were not necessary, i.e. walking btrees to work out whether AGs are empty
or not.

> I'm really curious about what happened to these patches and why they were
> discontinued. The second part was never made public, and there was also
> no answer. Was there any flaw in any of the posted code, or anything in
> XFS that makes it especially hard to shrink [3], that discouraged the
> development?

The posted code is only a *tiny* part of the shrink problem.

> After that, the first questions that arose are:
> would there be some assistance/involvement from the developers?

Certainly there's help available. ;)

> How doable is it?

It is doable.

> What are the programmer's requirements from your point of view?

Here's the "simple" bits that will allow you to shrink the filesystem down
to the end of the internal log:

0. Check space is available for the shrink.

1. Mark allocation groups as "don't use - going away soon", so we don't
   put new stuff in them while we are moving all the bits out of them.
   - requires hooks in the allocators to prevent the AG from being
     selected for allocations
   - must still allow allocations for the free lists so that extent
     freeing can succeed - *new transaction required*.
   - also needs an "undo" (e.g. on partial failure), so we need to be
     able to mark allocation groups online again.

2. Move inodes out of offline AGs.
   - On Irix, we have a program called 'xfs_reno' which converts 64 bit
     inode filesystems to 32 bit inode filesystems. This needs to be:
     - released under the GPL (should not be a problem).
     - ported to linux
     - modified to understand that inodes sit in certain AGs and to move
       them out of those AGs as needed.
   - requires filesystem traversal to find all the inodes to be moved.

     % wc -l xfs_reno.c
     1991 xfs_reno.c

   - even with "-o ikeep", this needs to trigger inode cluster deletion
     in offline AGs (needs hooks in xfs_ifree()).

3. Move data out of offline AGs.
   - this is difficult to do efficiently as we do not have a
     block-to-owner reverse mapping in the filesystem. Hence it requires
     a walk of the *entire* filesystem to find the owners of data blocks
     in the AGs being offlined.
   - an xfs_db wrapper might be the best way to do this...

4. Execute the shrink.
   - new transaction - XFS_TRANS_SHRINKFS
   - check the AGs are empty:
     - icount == 0
     - freeblks == mp->m_sb.sb_agblocks (will be a little more than this)
   - check the shrink won't go past the end of the internal log
   - free the AGs, updating superblock fields
   - update the perag structure - not a simple realloc() as there may be
     other threads using the structure at the same time....

Initially, I'd say just support shrinking by whole AGs - you've got to
empty the whole "partial-last-ag" to ensure we can shrink it anyway, so
doing a subsequent grow operation to increase the size afterwards should
be trivial.

Once this all works, we can then tackle the "move the log" problem, which
will allow you to shrink to much smaller sizes.

As you can see, doing a shrink properly is not trivial, which is probably
why it hasn't gone anywhere fast....

Cheers,

Dave.
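The "Execute shrink" checks Dave lists can be sketched outside the kernel. Below is a minimal, hypothetical model in Python (the real checks live in kernel C against the AGF/AGI headers and the perag structures under a transaction; the field names here are illustrative stand-ins):

```python
# Illustrative model of the per-AG emptiness checks for a whole-AG shrink.
# The AG records are hypothetical stand-ins for on-disk AGF/AGI data.
from dataclasses import dataclass

@dataclass
class AG:
    icount: int     # inodes still allocated in this AG
    freeblks: int   # free blocks in this AG
    blocks: int     # total blocks in this AG

def can_shrink(ags, new_agcount, log_ag):
    """Can the filesystem be trimmed to its first new_agcount AGs?

    Every AG being removed must be empty: no inodes, and all blocks free
    (in reality the AG headers consume a few blocks, so the real
    comparison differs slightly). The shrink must also not cut off the
    AG holding the internal log."""
    if new_agcount <= log_ag:
        return False                 # would chop off the internal log
    for ag in ags[new_agcount:]:     # the AGs that would go away
        if ag.icount != 0:
            return False             # inodes not yet moved out (step 2)
        if ag.freeblks != ag.blocks:
            return False             # data/metadata still allocated (step 3)
    return True

ags = [AG(10, 50, 100), AG(0, 100, 100), AG(0, 100, 100), AG(0, 100, 100)]
print(can_shrink(ags, 2, 0))   # -> True: AGs 2 and 3 are empty
```

The model makes the ordering constraint visible: steps 2 and 3 (moving inodes and data out of the doomed AGs) must complete before step 4's checks can ever pass.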
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Jun 3 20:09:56 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 20:10:06 -0700 (PDT) Received: from MFWJ042.mfw.is.co.za (mfwj042.mfw.is.co.za [196.35.77.58]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5439rWt028417 for ; Sun, 3 Jun 2007 20:09:54 -0700 Received: from MailMarshal.Engine ([127.0.0.1]) by MFWJ042.mfw.is.co.za with MailMarshal (v5.5.7.1596) id ; Mon, 04 Jun 2007 05:14:35 +0200 Message-ID: From: mailfwadmin@bec.co.za To: linux-xfs@oss.sgi.com CC: archspec@plascon.co.za Date: Mon, 04 Jun 2007 05:14:35 +0200 Subject: Message Stopped ---- Virus Detected ---- MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="--=614e9445-c29c-4b50-a572-36ff706140c7" X-archive-position: 11595 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mailfwadmin@bec.co.za Precedence: bulk X-list: xfs ----=614e9445-c29c-4b50-a572-36ff706140c7 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Barloworld Plascon MailFirewall has stopped the following message: Message: BA02f6702b.00000001.mml From: linux-xfs@oss.sgi.com To: archspec@plascon.co.za Subject: Because it believes the message or an attachment to the message contains a virus. The virus scanning software used was: McAfee for Marshal W32/Mydoom.o@MM!zip Barloworld Plascon MailFirewall Rule: Plascon Inbound : Block Virus Email Content Security provided by NetIQ MailMarshal. 
----=614e9445-c29c-4b50-a572-36ff706140c7-- From owner-xfs@oss.sgi.com Sun Jun 3 21:31:59 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 21:32:12 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l544VoWt011482 for ; Sun, 3 Jun 2007 21:31:58 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA03794; Mon, 4 Jun 2007 14:31:46 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 225DC58C38C1; Mon, 4 Jun 2007 14:31:46 +1000 (EST) To: xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: PARTIAL TAKE 964092 - synchronous direct I/O write calls are incomplete when returning to user space Message-Id: <20070604043146.225DC58C38C1@chook.melbourne.sgi.com> Date: Mon, 4 Jun 2007 14:31:46 +1000 (EST) From: dgc@sgi.com (David Chinner) X-archive-position: 11596 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs QA test to exercise unwritten extent conversion for sync direct I/O Date: Mon Jun 4 14:31:01 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/xfs-cmds Inspected by: donaldd The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:28769a xfstests/167 - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/167 xfstests/167.out - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/167.out xfstests/src/unwritten_sync.c - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/src/unwritten_sync.c xfstests/group - 1.105 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/group.diff?r1=text&tr1=1.105&r2=text&tr2=1.104&f=h xfstests/src/Makefile - 1.40 - changed 
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/src/Makefile.diff?r1=text&tr1=1.40&r2=text&tr2=1.39&f=h - QA test to exercise unwritten extent conversion for sync direct I/O. From owner-xfs@oss.sgi.com Sun Jun 3 21:41:02 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 21:41:06 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l544exWt015527 for ; Sun, 3 Jun 2007 21:41:01 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA03959; Mon, 4 Jun 2007 14:40:54 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id C4CA958C38C1; Mon, 4 Jun 2007 14:40:54 +1000 (EST) To: xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 965626 - xfsqa - test 030 doesn't test what it's supposed to Message-Id: <20070604044054.C4CA958C38C1@chook.melbourne.sgi.com> Date: Mon, 4 Jun 2007 14:40:54 +1000 (EST) From: dgc@sgi.com (David Chinner) X-archive-position: 11597 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Make sure the repair tests dirty the filesystem before corrupting it. 
Date: Mon Jun 4 14:40:06 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/xfs-cmds Inspected by: ddiss The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:28770a xfstests/common.repair - 1.14 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/common.repair.diff?r1=text&tr1=1.14&r2=text&tr2=1.13&f=h xfstests/030.out.irix - 1.2 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/030.out.irix.diff?r1=text&tr1=1.2&r2=text&tr2=1.1&f=h xfstests/030.out.linux - 1.2 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/030.out.linux.diff?r1=text&tr1=1.2&r2=text&tr2=1.1&f=h xfstests/148.out - 1.2 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/148.out.diff?r1=text&tr1=1.2&r2=text&tr2=1.1&f=h - Make sure the repair tests dirty the filesystem before corrupting it. From owner-xfs@oss.sgi.com Sun Jun 3 21:52:30 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 21:52:33 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l544qPWt021124 for ; Sun, 3 Jun 2007 21:52:28 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA04203; Mon, 4 Jun 2007 14:52:21 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l544qKAf112221586; Mon, 4 Jun 2007 14:52:21 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l544qJKp110131735; Mon, 4 Jun 2007 14:52:19 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 4 Jun 2007 14:52:19 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: Review: Be smarter about handling ENOSPC during writeback 
Message-ID: <20070604045219.GG86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 11598 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs During delayed allocation extent conversion or unwritten extent conversion, we need to reserve some blocks for transaction reservations. We need to reserve these blocks in case a btree split occurs and we need to allocate some blocks. Unfortunately, we've only ever reserved the number of data blocks we are allocating, so in both the unwritten and delalloc case we can get ENOSPC on the transaction reservation. This is bad because in both cases we cannot report the failure to the writing application. The fix is two-fold: 1 - leverage the reserved block infrastructure XFS already has to reserve a small pool of blocks by default to allow specially marked transactions to dip into when we are at ENOSPC. Default setting is min(5%, 1024 blocks). 2 - convert critical transaction reservations to be allowed to dip into this pool. Spots changed are delalloc conversion, unwritten extent conversion and growing a filesystem at ENOSPC. Comments? Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/xfs_fsops.c | 10 +++++++--- fs/xfs/xfs_iomap.c | 22 ++++++++-------------- fs/xfs/xfs_mount.c | 37 +++++++++++++++++++++++++++++++++++-- 3 files changed, 50 insertions(+), 19 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/xfs_fsops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_fsops.c 2007-05-11 10:35:29.288847149 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_fsops.c 2007-05-11 11:13:34.195363437 +1000 @@ -179,6 +179,7 @@ xfs_growfs_data_private( up_write(&mp->m_peraglock); } tp = xfs_trans_alloc(mp, XFS_TRANS_GROWFS); + tp->t_flags |= XFS_TRANS_RESERVE; if ((error = xfs_trans_reserve(tp, XFS_GROWFS_SPACE_RES(mp), XFS_GROWDATA_LOG_RES(mp), 0, 0, 0))) { xfs_trans_cancel(tp, 0); @@ -500,8 +501,9 @@ xfs_reserve_blocks( unsigned long s; /* If inval is null, report current values and return */ - if (inval == (__uint64_t *)NULL) { + if (!outval) + return EINVAL; outval->resblks = mp->m_resblks; outval->resblks_avail = mp->m_resblks_avail; return 0; @@ -564,8 +566,10 @@ retry: } } out: - outval->resblks = mp->m_resblks; - outval->resblks_avail = mp->m_resblks_avail; + if (outval) { + outval->resblks = mp->m_resblks; + outval->resblks_avail = mp->m_resblks_avail; + } XFS_SB_UNLOCK(mp, s); if (fdblks_delta) { Index: 2.6.x-xfs-new/fs/xfs/xfs_mount.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_mount.c 2007-05-11 10:35:29.292846630 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_mount.c 2007-05-11 11:13:47.229662318 +1000 @@ -718,7 +718,7 @@ xfs_mountfs( bhv_vnode_t *rvp = NULL; int readio_log, writeio_log; xfs_daddr_t d; - __uint64_t ret64; + __uint64_t resblks; __int64_t update_flags; uint quotamount, quotaflags; int agno; @@ -835,6 +835,7 @@ xfs_mountfs( */ if ((mfsi_flags & XFS_MFSI_SECOND) == 0 && (mp->m_flags & XFS_MOUNT_NOUUID) == 0) { + __uint64_t ret64; if (xfs_uuid_mount(mp)) { error = XFS_ERROR(EINVAL); 
goto error1; @@ -1127,13 +1128,27 @@ xfs_mountfs( goto error4; } - /* * Complete the quota initialisation, post-log-replay component. */ if ((error = XFS_QM_MOUNT(mp, quotamount, quotaflags, mfsi_flags))) goto error4; + /* + * Now we are mounted, reserve a small amount of unused space for + * privileged transactions. This is needed so that transaction + * space required for critical operations can dip into this pool + * when at ENOSPC. This is needed for operations like create with + * attr, unwritten extent conversion at ENOSPC, etc. Data allocations + * are not allowed to use this reserved space. + * + * We default to 5% or 1024 fsbs of space reserved, whichever is smaller. + * This may drive us straight to ENOSPC on mount, but that implies + * we were already there on the last unmount. + */ + resblks = min_t(__uint64_t, mp->m_sb.sb_dblocks / 20, 1024); + xfs_reserve_blocks(mp, &resblks, NULL); + return 0; error4: @@ -1172,6 +1187,7 @@ xfs_unmountfs(xfs_mount_t *mp, struct cr #if defined(DEBUG) || defined(INDUCE_IO_ERROR) int64_t fsid; #endif + __uint64_t resblks; /* * We can potentially deadlock here if we have an inode cluster @@ -1200,6 +1216,23 @@ xfs_unmountfs(xfs_mount_t *mp, struct cr xfs_binval(mp->m_rtdev_targp); } + /* + * Unreserve any blocks we have so that when we unmount we don't account + * the reserved free space as used. This is really only necessary for + * lazy superblock counting because it trusts the incore superblock + * counters to be absolutely correct on clean unmount. + * + * We don't bother correcting this elsewhere for lazy superblock + * counting because on mount of an unclean filesystem we reconstruct the + * correct counter value and this is irrelevant. + * + * For non-lazy counter filesystems, this doesn't matter at all because + * we only ever apply deltas to the superblock and hence the incore + * value does not matter.... 
+ */ + resblks = 0; + xfs_reserve_blocks(mp, &resblks, NULL); + xfs_log_sbcount(mp, 1); xfs_unmountfs_writesb(mp); xfs_unmountfs_wait(mp); /* wait for async bufs */ Index: 2.6.x-xfs-new/fs/xfs/xfs_iomap.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_iomap.c 2007-05-11 11:13:13.862017149 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_iomap.c 2007-05-11 11:13:34.199362915 +1000 @@ -489,13 +489,13 @@ xfs_iomap_write_direct( if (unlikely(rt)) { resrtextents = qblocks = resaligned; resrtextents /= mp->m_sb.sb_rextsize; - resblks = XFS_DIOSTRAT_SPACE_RES(mp, 0); - quota_flag = XFS_QMOPT_RES_RTBLKS; - } else { - resrtextents = 0; + resblks = XFS_DIOSTRAT_SPACE_RES(mp, 0); + quota_flag = XFS_QMOPT_RES_RTBLKS; + } else { + resrtextents = 0; resblks = qblocks = XFS_DIOSTRAT_SPACE_RES(mp, resaligned); - quota_flag = XFS_QMOPT_RES_REGBLKS; - } + quota_flag = XFS_QMOPT_RES_REGBLKS; + } /* * Allocate and setup the transaction @@ -788,18 +788,12 @@ xfs_iomap_write_allocate( nimaps = 0; while (nimaps == 0) { tp = xfs_trans_alloc(mp, XFS_TRANS_STRAT_WRITE); + tp->t_flags |= XFS_TRANS_RESERVE; nres = XFS_EXTENTADD_SPACE_RES(mp, XFS_DATA_FORK); error = xfs_trans_reserve(tp, nres, XFS_WRITE_LOG_RES(mp), 0, XFS_TRANS_PERM_LOG_RES, XFS_WRITE_LOG_COUNT); - if (error == ENOSPC) { - error = xfs_trans_reserve(tp, 0, - XFS_WRITE_LOG_RES(mp), - 0, - XFS_TRANS_PERM_LOG_RES, - XFS_WRITE_LOG_COUNT); - } if (error) { xfs_trans_cancel(tp, 0); return XFS_ERROR(error); @@ -917,8 +911,8 @@ xfs_iomap_write_unwritten( * from unwritten to real. Do allocations in a loop until * we have covered the range passed in. 
*/ - tp = xfs_trans_alloc(mp, XFS_TRANS_STRAT_WRITE); + tp->t_flags |= XFS_TRANS_RESERVE; error = xfs_trans_reserve(tp, resblks, XFS_WRITE_LOG_RES(mp), 0, XFS_TRANS_PERM_LOG_RES, From owner-xfs@oss.sgi.com Sun Jun 3 21:53:27 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 21:53:31 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l544rNWt021743 for ; Sun, 3 Jun 2007 21:53:26 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA04217; Mon, 4 Jun 2007 14:53:19 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l544rIAf110554529; Mon, 4 Jun 2007 14:53:18 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l544rHLO111675106; Mon, 4 Jun 2007 14:53:17 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 4 Jun 2007 14:53:17 +1000 From: David Chinner To: David Chinner Cc: xfs-dev , xfs-oss Subject: Re: Review - writing to multiple non-contiguous unwritten extents within a page is broken. Message-ID: <20070604045317.GH86004887@sgi.com> References: <20070523092103.GT85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070523092103.GT85884050@sgi.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 11599 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Ping? 
On Wed, May 23, 2007 at 07:21:03PM +1000, David Chinner wrote: > [Nathan - probably another one for you] > > This test run on ia64 (16k page size) on a 4k block size filesystem: > > #!/bin/bash > > file=$1 > rm -f $file > > xfs_io -f \ > -c "truncate 1048576" \ > -c "resvsp 1032192 16384" \ > -c "pwrite 1033216 2560" \ > -c "pwrite 1040384 8192" \ > -c "bmap -vvp" \ > -c "fsync" \ > -c "bmap -vvp" \ > -c "close" \ > $file > > Writing 3 unwritten blocks in a page (first block and last 2) > results in a corrupted write. > > The problem is that the second block on the page is uninitialised > and so is skipped by xfs_page_state_convert. The root cause is that the > xfs_ioend structures are not getting created correctly. > > When we skip the uninitialised block, we add the second unwritten block > we are writing to into the original ioend. While this results in > the correct I/O being sent to disk, it results in an ioend with a > start offset of 0 and a length of 3 blocks. When we do unwritten > extent conversion based on this range, we convert the wrong blocks. > > What we need to be doing is creating two xfs_ioend structures, one > for the first block and one for the second set of blocks in the page. > That way we get two separate I/O completion events and convert the > ranges separately and correctly. > > I've checked xfs_convert_page(), and I don't think it needs any > fix here - it already appears to force multiple ioends to be used in this > case... > > Thoughts? > > Cheers, > > Dave. 
> -- > Dave Chinner > Principal Engineer > SGI Australian Software Group > > --- > fs/xfs/linux-2.6/xfs_aops.c | 13 ++++++++++++- > 1 file changed, 12 insertions(+), 1 deletion(-) > > Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_aops.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_aops.c 2007-05-23 16:33:04.000000000 +1000 > +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_aops.c 2007-05-23 17:52:15.540456674 +1000 > @@ -1008,6 +1008,8 @@ xfs_page_state_convert( > if (buffer_unwritten(bh) || buffer_delay(bh) || > ((buffer_uptodate(bh) || PageUptodate(page)) && > !buffer_mapped(bh) && (unmapped || startio))) { > + int new_ioend = 0; > + > /* > * Make sure we don't use a read-only iomap > */ > @@ -1026,6 +1028,15 @@ xfs_page_state_convert( > } > > if (!iomap_valid) { > + /* > + * if we didn't have a valid mapping then we > + * need to ensure that we put the new mapping > + * in a new ioend structure. This needs to be > + * done to ensure that the ioends correctly > + * reflect the block mappings at io completion > + * for unwritten extent conversion. 
> + */ > + new_ioend = 1; > if (type == IOMAP_NEW) { > size = xfs_probe_cluster(inode, > page, bh, head, 0); > @@ -1045,7 +1056,7 @@ xfs_page_state_convert( > if (startio) { > xfs_add_to_ioend(inode, bh, offset, > type, &ioend, > - !iomap_valid); > + new_ioend); > } else { > set_buffer_dirty(bh); > unlock_buffer(bh); -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Jun 3 22:14:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 22:14:47 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l545EdWt002262 for ; Sun, 3 Jun 2007 22:14:42 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA04599; Mon, 4 Jun 2007 15:14:35 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l545EYAf113392920; Mon, 4 Jun 2007 15:14:35 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l545EXRb113181890; Mon, 4 Jun 2007 15:14:33 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 4 Jun 2007 15:14:33 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: Review: remount read-only path is as broken as freezing was.... Message-ID: <20070604051433.GP85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 11600 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs I recently had a remount,ro test fail in a way I had previously only seen freezing fail. 
That is, it failed because we still had active transactions after calling xfs_quiesce_fs(). Further investigation shows that this path is broken in the same ways that the xfs freeze path was broken (and recently fixed). Make the remount,ro path properly flush the filesystem down to a clean state before returning. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/linux-2.6/xfs_super.c | 2 - fs/xfs/linux-2.6/xfs_vfs.h | 10 +++++++ fs/xfs/xfs_vfsops.c | 54 ++++++++++++++++++++++++------------------- 3 files changed, 42 insertions(+), 24 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_super.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_super.c 2007-05-10 16:56:09.594774832 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_super.c 2007-05-10 17:39:02.374544197 +1000 @@ -726,7 +726,7 @@ xfs_fs_sync_super( * occur here so don't bother flushing the buftarg (i.e * SYNC_QUIESCE) because it'll just get dirty again. */ - flags = SYNC_FSDATA | SYNC_DELWRI | SYNC_WAIT | SYNC_IOWAIT; + flags = SYNC_DATA_QUIESCE; } else flags = SYNC_FSDATA | (wait ? SYNC_WAIT : 0); Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_vfs.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_vfs.h 2007-05-10 16:56:09.590775350 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_vfs.h 2007-05-10 17:39:02.386542627 +1000 @@ -94,6 +94,16 @@ typedef enum { #define SYNC_IOWAIT 0x0100 /* wait for all I/O to complete */ #define SYNC_SUPER 0x0200 /* flush superblock to disk */ +/* + * When remounting a filesystem read-only or freezing the filesystem, + * we have two phases to execute. This first phase is syncing the data + * before we quiesce the filesystem, and the second is flushing all the + * inodes out after we've waited for all the transactions created by + * the first phase to complete. 
+ */ +#define SYNC_DATA_QUIESCE (SYNC_DELWRI|SYNC_FSDATA|SYNC_WAIT|SYNC_IOWAIT) +#define SYNC_INODE_QUIESCE (SYNC_REMOUNT|SYNC_ATTR|SYNC_WAIT) + #define SHUTDOWN_META_IO_ERROR 0x0001 /* write attempt to metadata failed */ #define SHUTDOWN_LOG_IO_ERROR 0x0002 /* write attempt to the log failed */ #define SHUTDOWN_FORCE_UMOUNT 0x0004 /* shutdown from a forced unmount */ Index: 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vfsops.c 2007-05-10 17:38:58.351070788 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c 2007-05-10 17:58:09.958425162 +1000 @@ -665,7 +665,7 @@ xfs_quiesce_fs( * we can write the unmount record. */ do { - xfs_syncsub(mp, SYNC_REMOUNT|SYNC_ATTR|SYNC_WAIT, NULL); + xfs_syncsub(mp, SYNC_INODE_QUIESCE, NULL); pincount = xfs_flush_buftarg(mp->m_ddev_targp, 1); if (!pincount) { count++; @@ -682,6 +682,30 @@ xfs_quiesce_fs( return 0; } +/* + * Second stage of a quiesce. The data is already synced, now we have to take + * care of the metadata. New transactions are already blocked, so we need to + * wait for any remaining transactions to drain out before proceeding. 
+ */ +STATIC void +xfs_attr_quiesce( + xfs_mount_t *mp) +{ + /* wait for all modifications to complete */ + while (atomic_read(&mp->m_active_trans) > 0) + delay(100); + + /* flush inodes and push all remaining buffers out to disk */ + xfs_quiesce_fs(mp); + + ASSERT_ALWAYS(atomic_read(&mp->m_active_trans) == 0); + + /* Push the superblock and write an unmount record */ + xfs_log_sbcount(mp, 1); + xfs_log_unmount_write(mp); + xfs_unmountfs_writesb(mp); +} + STATIC int xfs_mntupdate( bhv_desc_t *bdp, @@ -701,11 +725,7 @@ xfs_mntupdate( mp->m_flags &= ~XFS_MOUNT_BARRIER; } } else if (!(vfsp->vfs_flag & VFS_RDONLY)) { /* rw -> ro */ - bhv_vfs_sync(vfsp, SYNC_FSDATA|SYNC_BDFLUSH|SYNC_ATTR, NULL); - xfs_quiesce_fs(mp); - xfs_log_sbcount(mp, 1); - xfs_log_unmount_write(mp); - xfs_unmountfs_writesb(mp); + bhv_vfs_sync(vfsp, SYNC_DATA_QUIESCE, NULL); + xfs_attr_quiesce(mp); vfsp->vfs_flag |= VFS_RDONLY; } return 0; @@ -1998,9 +2018,9 @@ xfs_showargs( } /* - * Second stage of a freeze. The data is already frozen, now we have to take - * care of the metadata. New transactions are already blocked, so we need to - * wait for any remaining transactions to drain out before proceding. + * Second stage of a freeze. The data is already frozen so we only + * need to take care of the metadata. Once that's done write a dummy + * record to dirty the log in case of a crash while frozen. 
*/ STATIC void xfs_freeze( @@ -2008,19 +2028,7 @@ xfs_freeze( { xfs_mount_t *mp = XFS_BHVTOM(bdp); - /* wait for all modifications to complete */ - while (atomic_read(&mp->m_active_trans) > 0) - delay(100); - - /* flush inodes and push all remaining buffers out to disk */ - xfs_quiesce_fs(mp); - - ASSERT_ALWAYS(atomic_read(&mp->m_active_trans) == 0); - - /* Push the superblock and write an unmount record */ - xfs_log_sbcount(mp, 1); - xfs_log_unmount_write(mp); - xfs_unmountfs_writesb(mp); + xfs_attr_quiesce(mp); xfs_fs_log_dummy(mp); } From owner-xfs@oss.sgi.com Sun Jun 3 22:19:55 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 22:19:57 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l545JoWt004337 for ; Sun, 3 Jun 2007 22:19:54 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA04811; Mon, 4 Jun 2007 15:19:46 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l545JjAf113117607; Mon, 4 Jun 2007 15:19:46 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l545JiJ1113297283; Mon, 4 Jun 2007 15:19:44 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 4 Jun 2007 15:19:44 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: Review: xfs_bmapi does not update previous extent pointer correctly Message-ID: <20070604051944.GQ85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 11601 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs When 
looping across multiple extents, xfs_bmapi will fail to update the previous extent pointer which is used in subsequent loops. As a result, we can end up with the second loop in xfs_bmapi trying to use an incorrect previous extent pointer and assert failures or corrupted in-memory extent lists will result. Correctly update the previous extent at the end of each loop so that we DTRT when processing multiple map requests. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/xfs_bmap.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/xfs_bmap.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_bmap.c 2007-05-23 16:33:00.000000000 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_bmap.c 2007-05-25 11:53:31.949847746 +1000 @@ -5575,10 +5575,10 @@ xfs_bmapi( * Else go on to the next record. */ ep = xfs_iext_get_ext(ifp, ++lastx); - if (lastx >= nextents) { + prev = got; + if (lastx >= nextents) eof = 1; - prev = got; - } else + else xfs_bmbt_get_all(ep, &got); } ifp->if_lastex = lastx; From owner-xfs@oss.sgi.com Sun Jun 3 22:23:45 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 22:23:48 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l545NdWt006180 for ; Sun, 3 Jun 2007 22:23:43 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA04962; Mon, 4 Jun 2007 15:23:35 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l545NYAf110674749; Mon, 4 Jun 2007 15:23:35 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l545NXlg113315390; Mon, 4 Jun 2007 15:23:33 +1000 (AEST) X-Authentication-Warning: 
snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 4 Jun 2007 15:23:33 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: Review: factor extracting extent size hints from the inode Message-ID: <20070604052333.GR85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 11602 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Replace frequently repeated, open coded extraction of the extent size hint from the xfs_inode with a single helper function. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/xfs_bmap.c | 33 +++++++++++---------------------- fs/xfs/xfs_iomap.c | 19 ++++--------------- fs/xfs/xfs_rw.h | 22 ++++++++++++++++++++++ fs/xfs/xfs_vnodeops.c | 17 +++++------------ 4 files changed, 42 insertions(+), 49 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vnodeops.c 2007-05-31 17:07:38.421796043 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c 2007-05-31 17:14:29.188303231 +1000 @@ -197,9 +197,8 @@ xfs_getattr( * realtime extent size or the realtime volume's * extent size. */ - vap->va_blocksize = ip->i_d.di_extsize ? 
- (ip->i_d.di_extsize << mp->m_sb.sb_blocklog) : - (mp->m_sb.sb_rextsize << mp->m_sb.sb_blocklog); + vap->va_blocksize = + xfs_get_extsz_hint(ip) << mp->m_sb.sb_blocklog; } break; } @@ -4094,22 +4093,16 @@ xfs_alloc_file_space( if (XFS_FORCED_SHUTDOWN(mp)) return XFS_ERROR(EIO); - rt = XFS_IS_REALTIME_INODE(ip); - if (unlikely(rt)) { - if (!(extsz = ip->i_d.di_extsize)) - extsz = mp->m_sb.sb_rextsize; - } else { - extsz = ip->i_d.di_extsize; - } - if ((error = XFS_QM_DQATTACH(mp, ip, 0))) return error; if (len <= 0) return XFS_ERROR(EINVAL); + rt = XFS_IS_REALTIME_INODE(ip); + extsz = xfs_get_extsz_hint(ip); + count = len; - error = 0; imapp = &imaps[0]; nimaps = 1; bmapi_flag = XFS_BMAPI_WRITE | (alloc_type ? XFS_BMAPI_PREALLOC : 0); Index: 2.6.x-xfs-new/fs/xfs/xfs_rw.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_rw.h 2007-05-31 17:07:37.000000000 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_rw.h 2007-05-31 17:36:31.711921349 +1000 @@ -77,6 +77,28 @@ xfs_fsb_to_db_io(struct xfs_iocore *io, #define XFS_FREE_EOF_LOCK (1<<0) #define XFS_FREE_EOF_NOLOCK (1<<1) + +/* + * helper function to extract extent size hint from inode + */ +STATIC_INLINE xfs_extlen_t +xfs_get_extsz_hint( + xfs_inode_t *ip) +{ + xfs_extlen_t extsz; + + if (unlikely(ip->i_d.di_flags & XFS_DIFLAG_REALTIME)) { + extsz = (ip->i_d.di_flags & XFS_DIFLAG_EXTSIZE) + ? ip->i_d.di_extsize + : ip->i_mount->m_sb.sb_rextsize; + ASSERT(extsz); + } else { + extsz = (ip->i_d.di_flags & XFS_DIFLAG_EXTSIZE) + ? ip->i_d.di_extsize : 0; + } + return extsz; +} + /* * Prototypes for functions in xfs_rw.c. 
*/ Index: 2.6.x-xfs-new/fs/xfs/xfs_bmap.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_bmap.c 2007-05-29 16:40:12.000000000 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_bmap.c 2007-05-31 17:38:24.429227867 +1000 @@ -2618,8 +2618,7 @@ xfs_bmap_rtalloc( xfs_rtblock_t rtb; mp = ap->ip->i_mount; - align = ap->ip->i_d.di_extsize ? - ap->ip->i_d.di_extsize : mp->m_sb.sb_rextsize; + align = xfs_get_extsz_hint(ap->ip); prod = align / mp->m_sb.sb_rextsize; error = xfs_bmap_extsize_align(mp, ap->gotp, ap->prevp, align, 1, ap->eof, 0, @@ -2727,9 +2726,7 @@ xfs_bmap_btalloc( if (!args) return XFS_ERROR(ENOMEM); mp = ap->ip->i_mount; - align = (ap->userdata && ap->ip->i_d.di_extsize && - (ap->ip->i_d.di_flags & XFS_DIFLAG_EXTSIZE)) ? - ap->ip->i_d.di_extsize : 0; + align = ap->userdata ? xfs_get_extsz_hint(ap->ip) : 0; if (unlikely(align)) { error = xfs_bmap_extsize_align(mp, ap->gotp, ap->prevp, align, 0, ap->eof, 0, ap->conv, @@ -2829,9 +2826,9 @@ xfs_bmap_btalloc( args->total = ap->total; args->minlen = ap->minlen; } - if (unlikely(ap->userdata && ap->ip->i_d.di_extsize && - (ap->ip->i_d.di_flags & XFS_DIFLAG_EXTSIZE))) { - args->prod = ap->ip->i_d.di_extsize; + /* apply extent size hints if obtained earlier */ + if (unlikely(align)) { + args->prod = align; if ((args->mod = (xfs_extlen_t)do_mod(ap->off, args->prod))) args->mod = (xfs_extlen_t)(args->prod - args->mod); } else if (mp->m_sb.sb_blocksize >= NBPP) { @@ -3018,9 +3015,7 @@ xfs_bmap_filestreams( */ mp = ap->ip->i_mount; rt = (ap->ip->i_d.di_flags & XFS_DIFLAG_REALTIME) && ap->userdata; - align = (ap->userdata && ap->ip->i_d.di_extsize && - (ap->ip->i_d.di_flags & XFS_DIFLAG_EXTSIZE)) ? - ap->ip->i_d.di_extsize : 0; + align = ap->userdata ? 
xfs_get_extsz_hint(ap->ip) : 0; if (align) { error = xfs_bmap_extsize_align(mp, ap->gotp, ap->prevp, align, rt, @@ -3166,9 +3161,9 @@ xfs_bmap_filestreams( args.total = ap->total; args.minlen = ap->minlen; } - if (ap->userdata && ap->ip->i_d.di_extsize && - (ap->ip->i_d.di_flags & XFS_DIFLAG_EXTSIZE)) { - args.prod = ap->ip->i_d.di_extsize; + /* apply extent size hint if found earlier */ + if (align) { + args.prod = align; if ((args.mod = (xfs_extlen_t)(do_mod(ap->off, args.prod)))) args.mod = (xfs_extlen_t)(args.prod - args.mod); } else if (mp->m_sb.sb_blocksize >= NBPP) { @@ -5224,12 +5219,7 @@ xfs_bmapi( xfs_extlen_t extsz; /* Figure out the extent size, adjust alen */ - if (rt) { - if (!(extsz = ip->i_d.di_extsize)) - extsz = mp->m_sb.sb_rextsize; - } else { - extsz = ip->i_d.di_extsize; - } + extsz = xfs_get_extsz_hint(ip); if (extsz) { error = xfs_bmap_extsize_align(mp, &got, &prev, extsz, @@ -6170,8 +6160,7 @@ xfs_getbmap( ip->i_d.di_format != XFS_DINODE_FMT_LOCAL) return XFS_ERROR(EINVAL); if (whichfork == XFS_DATA_FORK) { - if ((ip->i_d.di_extsize && (ip->i_d.di_flags & - (XFS_DIFLAG_REALTIME|XFS_DIFLAG_EXTSIZE))) || + if (xfs_get_extsz_hint(ip) || ip->i_d.di_flags & (XFS_DIFLAG_PREALLOC|XFS_DIFLAG_APPEND)){ prealloced = 1; fixlen = XFS_MAXIOFFSET(mp); Index: 2.6.x-xfs-new/fs/xfs/xfs_iomap.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_iomap.c 2007-05-31 17:07:38.000000000 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_iomap.c 2007-05-31 17:30:22.096110172 +1000 @@ -451,19 +451,14 @@ xfs_iomap_write_direct( return XFS_ERROR(error); rt = XFS_IS_REALTIME_INODE(ip); - if (unlikely(rt)) { - if (!(extsz = ip->i_d.di_extsize)) - extsz = mp->m_sb.sb_rextsize; - } else { - extsz = ip->i_d.di_extsize; - } + extsz = xfs_get_extsz_hint(ip); isize = ip->i_size; if (io->io_new_size > isize) isize = io->io_new_size; - offset_fsb = XFS_B_TO_FSBT(mp, offset); - last_fsb = XFS_B_TO_FSB(mp, ((xfs_ufsize_t)(offset + count))); + 
offset_fsb = XFS_B_TO_FSBT(mp, offset); + last_fsb = XFS_B_TO_FSB(mp, ((xfs_ufsize_t)(offset + count))); if ((offset + count) > isize) { error = xfs_iomap_eof_align_last_fsb(mp, io, isize, extsz, &last_fsb); @@ -666,13 +661,7 @@ xfs_iomap_write_delay( if (error) return XFS_ERROR(error); - if (XFS_IS_REALTIME_INODE(ip)) { - if (!(extsz = ip->i_d.di_extsize)) - extsz = mp->m_sb.sb_rextsize; - } else { - extsz = ip->i_d.di_extsize; - } - + extsz = xfs_get_extsz_hint(ip); offset_fsb = XFS_B_TO_FSBT(mp, offset); retry: From owner-xfs@oss.sgi.com Sun Jun 3 22:24:14 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 22:24:16 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l545OAWt006541 for ; Sun, 3 Jun 2007 22:24:13 -0700 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA04971; Mon, 4 Jun 2007 15:24:06 +1000 Message-ID: <4663A283.2030205@sgi.com> Date: Mon, 04 Jun 2007 15:26:27 +1000 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.10 (X11/20070221) MIME-Version: 1.0 To: David Chinner CC: xfs-dev , xfs-oss Subject: Re: Review: xfs_bmapi does not update previous extent pointer correctly References: <20070604051944.GQ85884050@sgi.com> In-Reply-To: <20070604051944.GQ85884050@sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 11603 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs It is looking good Dave, Regards, Vlad David Chinner wrote: > When looping across multiple extents, xfs_bmapi will fail to > update the previous extent pointer which is used in subsequent > loops. 
> > As a result, we can end up with the second loop in xfs_bmapi trying > to use an incorrect previous extent pointer and assert failures or > corrupted in-memory extent lists will result. > > Correctly update the previous extent at the end of each loop so that > we DTRT when processing multiple map requests. > > Cheers, > > Dave. > From owner-xfs@oss.sgi.com Sun Jun 3 23:13:25 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 23:13:28 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l546DLWt025059 for ; Sun, 3 Jun 2007 23:13:23 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA05911; Mon, 4 Jun 2007 16:13:16 +1000 Date: Mon, 04 Jun 2007 16:13:12 +1000 From: Timothy Shimmin To: David Chinner , xfs-dev cc: xfs-oss Subject: Re: Review: Be smarter about handling ENOSPC during writeback Message-ID: In-Reply-To: <20070604045219.GG86004887@sgi.com> References: <20070604045219.GG86004887@sgi.com> X-Mailer: Mulberry/4.0.8 (Mac OS X) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-archive-position: 11604 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs As previously discussed, the idea sounds reasonable to me. I'll look at the patch shortly. --Tim --On 4 June 2007 2:52:19 PM +1000 David Chinner wrote: > > During delayed allocation extent conversion or unwritten extent > conversion, we need to reserve some blocks for transactions > reservations. We need to reserve these blocks in case a btree > split occurs and we need to allocate some blocks. 
> > Unfortunately, we've only ever reserved the number of data blocks we > are allocating, so in both the unwritten and delalloc case we can > get ENOSPC to the transaction reservation. This is bad because in > both cases we cannot report the failure to the writing application. > > The fix is two-fold: > > 1 - leverage the reserved block infrastructure XFS already > has to reserve a small pool of blocks by default to allow > specially marked transactions to dip into when we are at > ENOSPC. > > Default setting is min(5%, 1024 blocks). > > 2 - convert critical transaction reservations to be allowed > to dip into this pool. Spots changed are delalloc > conversion, unwritten extent conversion and growing a > filesystem at ENOSPC. > > Comments? > > Cheers, > > Dave. > -- > Dave Chinner > Principal Engineer > SGI Australian Software Group > > --- > fs/xfs/xfs_fsops.c | 10 +++++++--- > fs/xfs/xfs_iomap.c | 22 ++++++++-------------- > fs/xfs/xfs_mount.c | 37 +++++++++++++++++++++++++++++++++++-- > 3 files changed, 50 insertions(+), 19 deletions(-) > > Index: 2.6.x-xfs-new/fs/xfs/xfs_fsops.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_fsops.c 2007-05-11 10:35:29.288847149 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_fsops.c 2007-05-11 11:13:34.195363437 +1000 > @@ -179,6 +179,7 @@ xfs_growfs_data_private( > up_write(&mp->m_peraglock); > } > tp = xfs_trans_alloc(mp, XFS_TRANS_GROWFS); > + tp->t_flags |= XFS_TRANS_RESERVE; > if ((error = xfs_trans_reserve(tp, XFS_GROWFS_SPACE_RES(mp), > XFS_GROWDATA_LOG_RES(mp), 0, 0, 0))) { > xfs_trans_cancel(tp, 0); > @@ -500,8 +501,9 @@ xfs_reserve_blocks( > unsigned long s; > > /* If inval is null, report current values and return */ > - > if (inval == (__uint64_t *)NULL) { > + if (!outval) > + return EINVAL; > outval->resblks = mp->m_resblks; > outval->resblks_avail = mp->m_resblks_avail; > return 0; > @@ -564,8 +566,10 @@ retry: > } > } > out: > - outval->resblks = mp->m_resblks; > 
- outval->resblks_avail = mp->m_resblks_avail; > + if (outval) { > + outval->resblks = mp->m_resblks; > + outval->resblks_avail = mp->m_resblks_avail; > + } > XFS_SB_UNLOCK(mp, s); > > if (fdblks_delta) { > Index: 2.6.x-xfs-new/fs/xfs/xfs_mount.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_mount.c 2007-05-11 10:35:29.292846630 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_mount.c 2007-05-11 11:13:47.229662318 +1000 > @@ -718,7 +718,7 @@ xfs_mountfs( > bhv_vnode_t *rvp = NULL; > int readio_log, writeio_log; > xfs_daddr_t d; > - __uint64_t ret64; > + __uint64_t resblks; > __int64_t update_flags; > uint quotamount, quotaflags; > int agno; > @@ -835,6 +835,7 @@ xfs_mountfs( > */ > if ((mfsi_flags & XFS_MFSI_SECOND) == 0 && > (mp->m_flags & XFS_MOUNT_NOUUID) == 0) { > + __uint64_t ret64; > if (xfs_uuid_mount(mp)) { > error = XFS_ERROR(EINVAL); > goto error1; > @@ -1127,13 +1128,27 @@ xfs_mountfs( > goto error4; > } > > - > /* > * Complete the quota initialisation, post-log-replay component. > */ > if ((error = XFS_QM_MOUNT(mp, quotamount, quotaflags, mfsi_flags))) > goto error4; > > + /* > + * Now we are mounted, reserve a small amount of unused space for > + * privileged transactions. This is needed so that transaction > + * space required for critical operations can dip into this pool > + * when at ENOSPC. This is needed for operations like create with > + * attr, unwritten extent conversion at ENOSPC, etc. Data allocations > + * are not allowed to use this reserved space. > + * > + * We default to 5% or 1024 fsbs of space reserved, whichever is smaller. > + * This may drive us straight to ENOSPC on mount, but that implies > + * we were already there on the last unmount. 
> + */ > + resblks = min_t(__uint64_t, mp->m_sb.sb_dblocks / 20, 1024); > + xfs_reserve_blocks(mp, &resblks, NULL); > + > return 0; > > error4: > @@ -1172,6 +1187,7 @@ xfs_unmountfs(xfs_mount_t *mp, struct cr > #if defined(DEBUG) || defined(INDUCE_IO_ERROR) > int64_t fsid; > #endif > + __uint64_t resblks; > > /* > * We can potentially deadlock here if we have an inode cluster > @@ -1200,6 +1216,23 @@ xfs_unmountfs(xfs_mount_t *mp, struct cr > xfs_binval(mp->m_rtdev_targp); > } > > + /* > + * Unreserve any blocks we have so that when we unmount we don't account > + * the reserved free space as used. This is really only necessary for > + * lazy superblock counting because it trusts the incore superblock > + * counters to be absolutely correct on clean unmount. > + * > + * We don't bother correcting this elsewhere for lazy superblock > + * counting because on mount of an unclean filesystem we reconstruct the > + * correct counter value and this is irrelevant. > + * > + * For non-lazy counter filesystems, this doesn't matter at all because > + * we only ever apply deltas to the superblock and hence the incore > + * value does not matter.... 
> + */ > + resblks = 0; > + xfs_reserve_blocks(mp, &resblks, NULL); > + > xfs_log_sbcount(mp, 1); > xfs_unmountfs_writesb(mp); > xfs_unmountfs_wait(mp); /* wait for async bufs */ > Index: 2.6.x-xfs-new/fs/xfs/xfs_iomap.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_iomap.c 2007-05-11 11:13:13.862017149 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_iomap.c 2007-05-11 11:13:34.199362915 +1000 > @@ -489,13 +489,13 @@ xfs_iomap_write_direct( > if (unlikely(rt)) { > resrtextents = qblocks = resaligned; > resrtextents /= mp->m_sb.sb_rextsize; > - resblks = XFS_DIOSTRAT_SPACE_RES(mp, 0); > - quota_flag = XFS_QMOPT_RES_RTBLKS; > - } else { > - resrtextents = 0; > + resblks = XFS_DIOSTRAT_SPACE_RES(mp, 0); > + quota_flag = XFS_QMOPT_RES_RTBLKS; > + } else { > + resrtextents = 0; > resblks = qblocks = XFS_DIOSTRAT_SPACE_RES(mp, resaligned); > - quota_flag = XFS_QMOPT_RES_REGBLKS; > - } > + quota_flag = XFS_QMOPT_RES_REGBLKS; > + } > > /* > * Allocate and setup the transaction > @@ -788,18 +788,12 @@ xfs_iomap_write_allocate( > nimaps = 0; > while (nimaps == 0) { > tp = xfs_trans_alloc(mp, XFS_TRANS_STRAT_WRITE); > + tp->t_flags |= XFS_TRANS_RESERVE; > nres = XFS_EXTENTADD_SPACE_RES(mp, XFS_DATA_FORK); > error = xfs_trans_reserve(tp, nres, > XFS_WRITE_LOG_RES(mp), > 0, XFS_TRANS_PERM_LOG_RES, > XFS_WRITE_LOG_COUNT); > - if (error == ENOSPC) { > - error = xfs_trans_reserve(tp, 0, > - XFS_WRITE_LOG_RES(mp), > - 0, > - XFS_TRANS_PERM_LOG_RES, > - XFS_WRITE_LOG_COUNT); > - } > if (error) { > xfs_trans_cancel(tp, 0); > return XFS_ERROR(error); > @@ -917,8 +911,8 @@ xfs_iomap_write_unwritten( > * from unwritten to real. Do allocations in a loop until > * we have covered the range passed in. 
> */ > - > tp = xfs_trans_alloc(mp, XFS_TRANS_STRAT_WRITE); > + tp->t_flags |= XFS_TRANS_RESERVE; > error = xfs_trans_reserve(tp, resblks, > XFS_WRITE_LOG_RES(mp), 0, > XFS_TRANS_PERM_LOG_RES, From owner-xfs@oss.sgi.com Sun Jun 3 23:15:25 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 23:15:28 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l546FLWt025769 for ; Sun, 3 Jun 2007 23:15:23 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA05952; Mon, 4 Jun 2007 16:15:17 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id AF6EF58C38C1; Mon, 4 Jun 2007 16:15:17 +1000 (EST) To: xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 965631 - xfs_bmapi() fails to update previous extent pointer Message-Id: <20070604061517.AF6EF58C38C1@chook.melbourne.sgi.com> Date: Mon, 4 Jun 2007 16:15:17 +1000 (EST) From: dgc@sgi.com (David Chinner) X-archive-position: 11605 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs xfs_bmapi fails to update the previous extent pointer When processing multiple extent maps, xfs_bmapi needs to keep track of the extent behind the one it is currently working on to be able to trim extent ranges correctly. Failing to update the previous pointer can result in corrupted extent lists in memory and this will result in panics or assert failures. Update the previous pointer correctly when we move to the next extent to process. 
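[Editor's note: the invariant this fix restores can be modelled in a few lines of C. This is only an illustrative sketch with invented names (ext_rec_t, walk_extents) and is not the real xfs_bmap.c code: when a mapping request spans several extent records, the "previous" record must be refreshed at the end of every loop iteration, not only the first, or later iterations trim their ranges against stale data.]

```c
#include <assert.h>

/* Illustrative stand-in for an in-core extent record. */
typedef struct ext_rec {
    long start; /* first block offset of the extent */
    long len;   /* length in blocks */
} ext_rec_t;

/*
 * Visit every record overlapping [off, off + count) and keep *prev
 * pointing at the record just processed. The bug was equivalent to
 * updating *prev only on the first pass, so subsequent iterations
 * worked against a stale previous record.
 */
static int walk_extents(const ext_rec_t *recs, int nrecs,
                        long off, long count, ext_rec_t *prev)
{
    long end = off + count;
    int visited = 0;

    for (int i = 0; i < nrecs; i++) {
        if (recs[i].start + recs[i].len <= off || recs[i].start >= end)
            continue;       /* no overlap with the requested range */
        visited++;
        *prev = recs[i];    /* the fix: refresh prev on every iteration */
    }
    return visited;
}
```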
Date: Mon Jun 4 16:14:47 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: vapo@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28773a fs/xfs/xfs_bmap.c - 1.368 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_bmap.c.diff?r1=text&tr1=1.368&r2=text&tr2=1.367&f=h - Update the previous extent pointer correctly in xfs_bmapi. From owner-xfs@oss.sgi.com Sun Jun 3 23:33:36 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 23:33:38 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l546XXWt030048 for ; Sun, 3 Jun 2007 23:33:35 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA06451; Mon, 4 Jun 2007 16:33:29 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l546XSAf113366370; Mon, 4 Jun 2007 16:33:29 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l546XSXf113386261; Mon, 4 Jun 2007 16:33:28 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 4 Jun 2007 16:33:28 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss , asg-qa Subject: Review: fix test 004 to account for reserved space Message-ID: <20070604063328.GT85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 11606 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs With the changes to use some space by default as an in-memory reserved pool, df and statfs will 
now output a free block count that is slightly different to what is held in the superblock. Update the qa test to account for this change. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group --- xfstests/004 | 35 +++++++++++++++++++++++++---------- 1 file changed, 25 insertions(+), 10 deletions(-) Index: xfs-cmds/xfstests/004 =================================================================== --- xfs-cmds.orig/xfstests/004 2006-11-14 19:57:39.000000000 +1100 +++ xfs-cmds/xfstests/004 2007-05-04 16:38:03.957537306 +1000 @@ -67,21 +67,36 @@ xfs_db -r -c "freesp -s" $SCRATCH_DEV >$ echo "xfs_db for $SCRATCH_DEV" >>$seq.full cat $tmp.xfs_db >>$seq.full +eval `$XFS_IO_PROG -x -c resblks $SCRATCH_MNT 2>&1 \ + | $AWK_PROG '/available/ { printf "resblks=%u\n", $5 }'` +echo "resblks gave: resblks=$resblks" >>$seq.full + # check the 'blocks' field from freesp command is OK # since 2.6.18, df does not report the 4 blocks per AG that cannot # be allocated, hence we check for that exact mismatch. +# since ~2.6.22, reserved blocks are used by default and df does +# not report them, hence check for an exact mismatch. 
perl -ne ' - BEGIN { $avail ='$avail' * 512; - $answer="(no xfs_db free blocks line?)" } - /free blocks (\d+)$/ || next; - $freesp = $1 * '$dbsize'; - if ($freesp == $avail) { $answer = "yes"; } - else { + BEGIN { $avail ='$avail' * 512; + $answer="(no xfs_db free blocks line?)" } + /free blocks (\d+)$/ || next; + $freesp = $1 * '$dbsize'; + if ($freesp == $avail) { + $answer = "yes"; + } else { $avail = $avail + (('$agcount' + 1) * '$dbsize' * 4); - if ($freesp == $avail) { $answer = "yes"; } - else { $answer = "no ($freesp != $avail)"; } - } - END { print "$answer\n" } + if ($freesp == $avail) { + $answer = "yes"; + } else { + $avail = $avail + ('$resblks' * '$dbsize'); + if ($freesp == $avail) { + $answer = "yes"; + } else { + $answer = "no ($freesp != $avail)"; + } + } + } + END { print "$answer\n" } ' <$tmp.xfs_db >$tmp.ans ans="`cat $tmp.ans`" echo "Checking blocks column same as df: $ans" From owner-xfs@oss.sgi.com Sun Jun 3 23:58:46 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Jun 2007 23:58:49 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l546whWt008370 for ; Sun, 3 Jun 2007 23:58:45 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA06833; Mon, 4 Jun 2007 16:58:39 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 7054B58C38C1; Mon, 4 Jun 2007 16:58:39 +1000 (EST) To: xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 965630 - xfs should flush the block device on unmount Message-Id: <20070604065839.7054B58C38C1@chook.melbourne.sgi.com> Date: Mon, 4 Jun 2007 16:58:39 +1000 (EST) From: dgc@sgi.com (David Chinner) X-archive-position: 11607 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs 
Flush the block device before closing it on unmount. Date: Mon Jun 4 16:58:12 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: hch@infradead.org The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28774a fs/xfs/linux-2.6/xfs_buf.c - 1.242 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_buf.c.diff?r1=text&tr1=1.242&r2=text&tr2=1.241&f=h - Flush the block device before closing it on unmount. From owner-xfs@oss.sgi.com Mon Jun 4 00:18:47 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 00:18:50 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l547IiWt020789 for ; Mon, 4 Jun 2007 00:18:46 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA07298; Mon, 4 Jun 2007 17:18:40 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 341A958C38C1; Mon, 4 Jun 2007 17:18:40 +1000 (EST) To: xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 964092 - synchronous direct I/O write calls are incomplete when returning to user space Message-Id: <20070604071840.341A958C38C1@chook.melbourne.sgi.com> Date: Mon, 4 Jun 2007 17:18:40 +1000 (EST) From: dgc@sgi.com (David Chinner) X-archive-position: 11608 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Block on unwritten extent conversion during synchronous direct I/O. Currently we do not wait on extent conversion to occur, and hence we can return to userspace from a synchronous direct I/O write without having completed all the actions in the write. 
Hence a read after the write may see zeroes (unwritten extent) rather than the data that was written. Block the I/O completion by triggering a synchronous workqueue flush to ensure that the conversion has occurred before we return to userspace. Date: Mon Jun 4 17:18:01 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: tes@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28775a fs/xfs/linux-2.6/xfs_aops.c - 1.144 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_aops.c.diff?r1=text&tr1=1.144&r2=text&tr2=1.143&f=h - Make unwritten extent conversion occur synchronously for synchronous direct I/O to ensure it is completed before we return to userspace. From owner-xfs@oss.sgi.com Mon Jun 4 01:07:33 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 01:07:47 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5487SWt011430 for ; Mon, 4 Jun 2007 01:07:31 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA08282; Mon, 4 Jun 2007 18:07:22 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5487MAf113202618; Mon, 4 Jun 2007 18:07:22 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5487Ksc113426956; Mon, 4 Jun 2007 18:07:20 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 4 Jun 2007 18:07:20 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: Review: apply transaction deltas atomically to superblock Message-ID: <20070604080720.GV85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii 
Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-archive-position: 11609 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs When testing lazy superblock counters and ENOSPC conditions (test 083), I came across semi-regular assert failures in xfs_mod_incore_sb_batch() where the assert failure was occurring as a result of failing to *undo* block reservations for a transaction that had reserved all the blocks it was going to use up front. That is, applying the transaction delta failed when it should not have failed, and then we also failed to remove the deltas that we had already applied. It turns out that the problem is an interaction between the per-cpu superblock counters and xfs_trans_unreserve_and_mod_sb(). Prior to the per-cpu superblock counters, transaction deltas were applied under the XFS_SB_LOCK() and so were always applied atomically. The per-cpu superblock counters don't hold the XFS_SB_LOCK() and hence are not applied atomically. This was not thought to be a problem because each change that needed to be made had already been validated and reserved. It turns out that xfs_trans_unreserve_and_mod_sb() does something incredibly stupid. It applies changes to the free block count in *two* separate deltas. The first change puts back the *entire reservation* to the superblock and then it takes away what was actually used. So now we have a window where the transaction reservation is undone and another thread can come along and use that reservation. The result is that the second delta to mark the blocks as used fails with ENOSPC, and because the blocks needed for the transaction reservation have been taken by something else, we then fail to get them back for the transaction reservation when we try to undo that delta. So, the fix is to simply calculate what the free block delta is and apply it in a single atomic delta. Cheers, Dave. 
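[Editor's note: the race described above is easy to model with a toy counter. This sketch uses invented names (apply_delta, unreserve_and_mod) rather than the kernel's xfs_mod_incore_sb_batch(): a delta is rejected if it would drive the free-block counter negative, and folding the returned reservation and the usage delta into one signed value removes the window in which another thread could consume the returned blocks.]

```c
#include <assert.h>

/* Reject any delta that would drive the counter negative (toy ENOSPC). */
static int apply_delta(long *fdblocks, long delta)
{
    if (*fdblocks + delta < 0)
        return -1;          /* ENOSPC */
    *fdblocks += delta;
    return 0;
}

/*
 * Old scheme (racy): apply_delta(fd, +blk_res) followed by
 * apply_delta(fd, fdblocks_delta) as two separate steps, leaving a
 * window where another thread can take the returned blocks and make
 * the second step fail. New scheme: fold both into one delta and
 * apply it once.
 */
static int unreserve_and_mod(long *fdblocks, long blk_res, long fdblocks_delta)
{
    return apply_delta(fdblocks, blk_res + fdblocks_delta);
}
```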
-- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/xfs_trans.c | 77 +++++++++++++++++++++++++++-------------------------- 1 file changed, 40 insertions(+), 37 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/xfs_trans.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_trans.c 2007-05-03 15:05:09.000000000 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_trans.c 2007-05-09 12:14:18.429409061 +1000 @@ -638,11 +638,23 @@ xfs_trans_apply_sb_deltas( } /* - * xfs_trans_unreserve_and_mod_sb() is called to release unused - * reservations and apply superblock counter changes to the in-core - * superblock. + * xfs_trans_unreserve_and_mod_sb() is called to release unused reservations + * and apply superblock counter changes to the in-core superblock. The + * t_res_fdblocks_delta and t_res_frextents_delta fields are explicitly NOT + * applied to the in-core superblock. The idea is that that has already been + * done. * * This is done efficiently with a single call to xfs_mod_incore_sb_batch(). + * However, we have to ensure that we modify each superblock field only + * once because the application of the delta values may not be atomic. That can + * lead to ENOSPC races occurring if we have two separate modifications of the + * free space counter to put back the entire reservation and then take away + * what we used. + * + * If we are not logging superblock counters, then the inode allocated/free and + * used block counts are not updated in the on disk superblock. In this case, + * XFS_TRANS_SB_DIRTY will not be set when the transaction is updated but we + * still need to update the incore superblock with the changes. */ STATIC void xfs_trans_unreserve_and_mod_sb( @@ -654,42 +666,43 @@ xfs_trans_unreserve_and_mod_sb( /* REFERENCED */ int error; int rsvd; + int64_t blkdelta = 0; + int64_t rtxdelta = 0; msbp = msb; rsvd = (tp->t_flags & XFS_TRANS_RESERVE) != 0; - /* - * Release any reserved blocks. 
Any that were allocated - * will be taken back again by fdblocks_delta below. - */ - if (tp->t_blk_res > 0) { + /* calculate free blocks delta */ + if (tp->t_blk_res > 0) + blkdelta = tp->t_blk_res; + + if ((tp->t_fdblocks_delta != 0) && + (xfs_sb_version_haslazysbcount(&mp->m_sb) || + (tp->t_flags & XFS_TRANS_SB_DIRTY))) + blkdelta += tp->t_fdblocks_delta; + + if (blkdelta != 0) { msbp->msb_field = XFS_SBS_FDBLOCKS; - msbp->msb_delta = tp->t_blk_res; + msbp->msb_delta = blkdelta; msbp++; } - /* - * Release any reserved real time extents . Any that were - * allocated will be taken back again by frextents_delta below. - */ - if (tp->t_rtx_res > 0) { + /* calculate free realtime extents delta */ + if (tp->t_rtx_res > 0) + rtxdelta = tp->t_rtx_res; + + if ((tp->t_frextents_delta != 0) && + (tp->t_flags & XFS_TRANS_SB_DIRTY)) + rtxdelta = tp->t_frextents_delta; + + if (rtxdelta != 0) { msbp->msb_field = XFS_SBS_FREXTENTS; - msbp->msb_delta = tp->t_rtx_res; + msbp->msb_delta = rtxdelta; msbp++; } - /* - * Apply any superblock modifications to the in-core version. - * The t_res_fdblocks_delta and t_res_frextents_delta fields are - * explicitly NOT applied to the in-core superblock. - * The idea is that that has already been done. - * - * If we are not logging superblock counters, then the inode - * allocated/free and used block counts are not updated in the - * on disk superblock. In this case, XFS_TRANS_SB_DIRTY will - * not be set when the transaction is updated but we still need - * to update the incore superblock with the changes. 
- */ + /* apply remaining deltas */ + if (xfs_sb_version_haslazysbcount(&mp->m_sb) || (tp->t_flags & XFS_TRANS_SB_DIRTY)) { if (tp->t_icount_delta != 0) { @@ -702,19 +715,9 @@ xfs_trans_unreserve_and_mod_sb( msbp->msb_delta = tp->t_ifree_delta; msbp++; } - if (tp->t_fdblocks_delta != 0) { - msbp->msb_field = XFS_SBS_FDBLOCKS; - msbp->msb_delta = tp->t_fdblocks_delta; - msbp++; - } } if (tp->t_flags & XFS_TRANS_SB_DIRTY) { - if (tp->t_frextents_delta != 0) { - msbp->msb_field = XFS_SBS_FREXTENTS; - msbp->msb_delta = tp->t_frextents_delta; - msbp++; - } if (tp->t_dblocks_delta != 0) { msbp->msb_field = XFS_SBS_DBLOCKS; msbp->msb_delta = tp->t_dblocks_delta; From owner-xfs@oss.sgi.com Mon Jun 4 01:28:20 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 01:28:27 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l548SHWt021598 for ; Mon, 4 Jun 2007 01:28:18 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA08635; Mon, 4 Jun 2007 18:28:12 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 49AF658C38C1; Mon, 4 Jun 2007 18:28:12 +1000 (EST) To: xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 965636 - xfs_setfilesize panics on null xfs_inode Message-Id: <20070604082812.49AF658C38C1@chook.melbourne.sgi.com> Date: Mon, 4 Jun 2007 18:28:12 +1000 (EST) From: dgc@sgi.com (David Chinner) X-archive-position: 11610 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Handle null returned from xfs_vtoi() in xfs_setfilesize(). 
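[Editor's note: a minimal sketch of the guard this TAKE describes, with stand-in types (toy_vnode, toy_inode, toy_vtoi) rather than the kernel's: when the vnode-to-xfs_inode lookup can legitimately return NULL, as for CXFS clients, the size-update path must bail out instead of dereferencing the missing inode.]

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the kernel types involved. */
struct toy_inode { long i_size; };
struct toy_vnode { struct toy_inode *backing; /* may be NULL (e.g. CXFS) */ };

static struct toy_inode *toy_vtoi(struct toy_vnode *vp)
{
    return vp->backing;
}

/* Returns 1 if the size was updated, 0 if there was no inode to update. */
static int toy_setfilesize(struct toy_vnode *vp, long new_size)
{
    struct toy_inode *ip = toy_vtoi(vp);

    if (ip == NULL)
        return 0;   /* the fix: don't even try to update the file size */
    if (new_size > ip->i_size)
        ip->i_size = new_size;
    return 1;
}
```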
Date: Mon Jun 4 18:27:21 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: tes@sgi.com, olaf@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28777a fs/xfs/linux-2.6/xfs_aops.c - 1.145 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_aops.c.diff?r1=text&tr1=1.145&r2=text&tr2=1.144&f=h - If we get a null xfs_inode in xfs_setfilesize(), we can't update the file size. Don't even bother trying. From owner-xfs@oss.sgi.com Mon Jun 4 01:46:22 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 01:46:24 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l548kJWt029446 for ; Mon, 4 Jun 2007 01:46:20 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA09062; Mon, 4 Jun 2007 18:46:10 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l548k9Af108660004; Mon, 4 Jun 2007 18:46:10 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l548k8E7113429636; Mon, 4 Jun 2007 18:46:08 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 4 Jun 2007 18:46:08 +1000 From: David Chinner To: Christoph Hellwig Cc: David Chinner , xfs@oss.sgi.com Subject: Re: TAKE 965636 - xfs_setfilesize panics on null xfs_inode Message-ID: <20070604084608.GW85884050@sgi.com> References: <20070604082812.49AF658C38C1@chook.melbourne.sgi.com> <20070604083222.GA16922@infradead.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604083222.GA16922@infradead.org> User-Agent: Mutt/1.4.2.1i 
X-archive-position: 11611 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Mon, Jun 04, 2007 at 09:32:22AM +0100, Christoph Hellwig wrote: > On Mon, Jun 04, 2007 at 06:28:12PM +1000, David Chinner wrote: > > Handle null returned from xfs_vtoi() in xfs_setfilesize(). > > This doesn't make any sense at all. xfs_inodes always live longer than > vnodes. What's the backtrace of the problem you're seeing? The I/O completion handlers are used by CXFS clients as well. Hence we can't assume that the vnode is backed by a xfs_inode.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 4 01:49:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 01:49:46 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l548nhWt031500 for ; Mon, 4 Jun 2007 01:49:44 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1Hv7zC-0004QJ-QH; Mon, 04 Jun 2007 09:32:22 +0100 Date: Mon, 4 Jun 2007 09:32:22 +0100 From: Christoph Hellwig To: David Chinner Cc: xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: Re: TAKE 965636 - xfs_setfilesize panics on null xfs_inode Message-ID: <20070604083222.GA16922@infradead.org> References: <20070604082812.49AF658C38C1@chook.melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604082812.49AF658C38C1@chook.melbourne.sgi.com> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 11612 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On 
Mon, Jun 04, 2007 at 06:28:12PM +1000, David Chinner wrote: > Handle null returned from xfs_vtoi() in xfs_setfilesize(). This doesn't make any sense at all. xfs_inodes always live longer than vnodes. What's the backtrace of the problem you're seeing? From owner-xfs@oss.sgi.com Mon Jun 4 01:52:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 01:52:32 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l548qTWt032585 for ; Mon, 4 Jun 2007 01:52:30 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1Hv8If-0004ah-Qw; Mon, 04 Jun 2007 09:52:29 +0100 Date: Mon, 4 Jun 2007 09:52:29 +0100 From: Christoph Hellwig To: David Chinner Cc: Christoph Hellwig , xfs@oss.sgi.com Subject: Re: TAKE 965636 - xfs_setfilesize panics on null xfs_inode Message-ID: <20070604085229.GA17635@infradead.org> References: <20070604082812.49AF658C38C1@chook.melbourne.sgi.com> <20070604083222.GA16922@infradead.org> <20070604084608.GW85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604084608.GW85884050@sgi.com> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 11613 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Mon, Jun 04, 2007 at 06:46:08PM +1000, David Chinner wrote: > On Mon, Jun 04, 2007 at 09:32:22AM +0100, Christoph Hellwig wrote: > > On Mon, Jun 04, 2007 at 06:28:12PM +1000, David Chinner wrote: > > > Handle null returned from xfs_vtoi() in xfs_setfilesize(). > > > > This doesn't make any sense at all. xfs_inodes always live longer than > > vnodes. What's the backtrace of the problem you're seeing? 
> > The I/O completion handlers are used by CXFS clients as well. > Hence we can't assume that the vnode is backed by a xfs_inode.... Please switch them over to something different instead of crapping up the code like this. From owner-xfs@oss.sgi.com Mon Jun 4 02:07:47 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 02:07:53 -0700 (PDT) Received: from mail5.voith.com (mail5.voith.com [62.225.5.140]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5497iWt003384 for ; Mon, 4 Jun 2007 02:07:46 -0700 Received: from HDHS0111.euro1.voith.net ([172.21.49.6]) by mail5.voith.com with Microsoft SMTPSVC(6.0.3790.3959); Mon, 4 Jun 2007 10:55:35 +0200 X-MimeOLE: Produced By Microsoft Exchange V6.5 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----_=_NextPart_001_01C7A686.1D9B3AE1" Subject: XFS with project quota under linux? Date: Mon, 4 Jun 2007 10:55:35 +0200 Message-ID: <950DD867A5E1B04ABE82A56FCDC03A5E9CE8CF@HDHS0111.euro1.voith.net> X-MS-Has-Attach: yes X-MS-TNEF-Correlator: Thread-Topic: XFS with project quota under linux? Thread-Index: Acemhh8K+x24xSSwTcG6vV51EFsg6A== From: "Jahnke, Steffen" To: X-OriginalArrivalTime: 04 Jun 2007 08:55:35.0690 (UTC) FILETIME=[1DAAA6A0:01C7A686] X-archive-position: 11614 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: Steffen.Jahnke@vs-hydro.com Precedence: bulk X-list: xfs This is a multi-part message in MIME format. ------_=_NextPart_001_01C7A686.1D9B3AE1 Content-Type: text/plain; charset="iso-8859-1" I recently switched the quota usrquota to pquota on our Altix 4700 under SLES10. I then found out that the project quota is not updated if files are moved within the same filesystem. E.g. if I move a file from a different project to a new project it still belongs to the old project.
The same thing happens if I move a file which does not belong to any project but which is on the filesystem mounted with pquota. Some details of our system: hdhu0250:/home/t # cat /etc/*release LSB_VERSION="core-2.0-noarch:core-3.0-noarch:core-2.0-ia64:core-3.0-ia64" SGI ProPack 5SP1 for Linux, Build 501r2-0703010508 SUSE Linux Enterprise Server 10 (ia64) VERSION = 10 hdhu0250:/home/t # uname -a Linux hdhu0250 2.6.16.27-0.9-default #1 SMP Tue Feb 13 09:35:18 UTC 2007 ia64 ia64 ia64 GNU/Linux hdhu0250:/home # rpm -qa |grep xfs xfsdump-2.2.33-12.2 xfsprogs-2.7.11-18.2 Any help would be very much appreciated. Maybe there is a developer version which is already able to handle pquota correctly? Mit freundlichen Grüßen / Best regards Steffen Jahnke ------------------------------------------------------------------------------- Voith Siemens Hydro Power Generation GmbH & Co. KG VSEC-International - tts Alexanderstrasse 11 89522 Heidenheim, Germany Tel +49 7321 37 2955 Fax +49 7321 37 7601 E-Mail steffen.jahnke@vs-hydro.com Internet http://www.voithsiemens.com ------------------------------------------------------------------------------- Handelsregister: Reg. Gericht Ulm, HRA 661052 Sitz der Gesellschaft: Heidenheim Geschäftsführung: Dr. Hermut Kormann (Vorsitzender), Dr. Hermann Jung, Dr. Hans-Peter Sollinger, Dr. Hubert Lienhard, Peter Edelmann, Martin Hennerici, Bertram Staudenmaier, Dr. Roland Münch Persönlich haftende Gesellschafterin: J. M. Voith Verwaltungs GmbH Reg.
Gericht Ulm, HRB 661225 From owner-xfs@oss.sgi.com Mon Jun 4 02:15:42 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 02:15:45 -0700 (PDT) Received: from astra.simleu.ro (astra.simleu.ro [80.97.18.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l549FfWt006143 for ; Mon, 4 Jun 2007 02:15:42 -0700 Received: from teal.hq.k1024.org (84-75-125-186.dclient.hispeed.ch [84.75.125.186]) by astra.simleu.ro (Postfix) with ESMTP id 46E6271; Mon, 4 Jun 2007 11:42:07 +0300 (EEST) Received: by teal.hq.k1024.org (Postfix, from userid 4004) id A411D40FE14; Mon, 4 Jun 2007 10:41:54 +0200 (CEST) Date: Mon, 4 Jun 2007 10:41:54 +0200 From: Iustin Pop To: David Chinner Cc: Ruben Porras , xfs@oss.sgi.com, cw@f00f.org Subject: Re: XFS shrink functionality Message-ID: <20070604084154.GA8273@teal.hq.k1024.org> Mail-Followup-To: David Chinner , Ruben Porras , xfs@oss.sgi.com, cw@f00f.org References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604001632.GA86004887@sgi.com> X-Linux: This message was written on Linux X-Header: /usr/include gives great headers User-Agent:
Mutt/1.5.13 (2006-08-11) X-archive-position: 11615 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: iusty@k1024.org Precedence: bulk X-list: xfs Disclaimer: all the below is based on my weak understanding of the code, I don't claim I'm right below. On Mon, Jun 04, 2007 at 10:16:32AM +1000, David Chinner wrote: > Any work for this would need to be done against current mainline > of the xfs-dev tree. > > Yes, that patch is out of date, and it also did things that were not > necessary i.e. walk btrees to work out if AGs are empty or not. Well, I did what I could based on my own understanding of the code. Sorry if it's ugly :) > > I'm really curious about what happened to these patches and why they were > > discontinued. The second part never was made public, and there was also > > no answer. Was there any flaw in any of the posted code or anything in > > XFS that makes it especially hard to shrink [3] that discouraged the > > development? > > The posted code is only a *tiny* part of the shrink problem. My idea at that time was to start small and be able to shrink an empty filesystem (or empty at least regarding the AGs that you want to clear). The point is that if AGs are lockable outside of a transaction (something like the freeze/unfreeze functionality at the fs level), then by simply copying the conflicting files you ensure that they are allocated on an available AG and when you remove the originals, the to-be-shrunk AGs become free. Yes, utterly non-optimal, but it was the simplest way to do it based on what I knew at the time. > > After that, the first questions that arose are, > > would there be some assistance/groove in from the developers? > > Certainly there's help available. ;) Good to know. If there is at least more documentation about the internals, I could try to find some time to work on this again. > > > What are the programmers requirements from your point of view?
> > Here's the "simple" bits that will allow you to shrink > the filesystem down to the end of the internal log: > > 0. Check space is available for shrink Can be done by actually allocating the space to be freed at the beginning of the transaction. Right? This is actually a bit more than needed, since when freeing an AG you also free some non-available space, but it's ok. > 1. Mark allocation groups as "don't use - going away soon" > - so we don't put new stuff in them while we > are moving all the bits out of them > - requires hooks in the allocators to prevent > the AG from being selected for allocations > - must still allow allocations for the free lists > so that extent freeing can succeed > - *new transaction required*. > - also needs an "undo" (e.g. on partial failure) > so we need to be able to mark allocation groups > online again. So a question: can transactions be nested? Because the offline AG transaction needs to live until the shrink transaction is done. I was more thinking that the offline-AG should be a bit on the AG that could be changed by the admin (like xfs_freeze); this could also help for other reasons than shrink (when on a big FS some AGs lie on a physical device and others on a different device, and you would like to restrict writes to a given AG, as much as possible). > 2. Move inodes out of offline AGs > - On Irix, we have a program called 'xfs_reno' which > converts 64 bit inode filesystems to 32 bit inode > filesystems. This needs to be: > - released under the GPL (should not be a problem). > - ported to linux > - modified to understand inodes sit in certain > AGs and to move them out of those AGs as needed. > - requires filesystem traversal to find all the > inodes to be moved. Interesting. I've read on the mail list of this before, but no other details. > > % wc -l xfs_reno.c > 1991 xfs_reno.c > > - even with "-o ikeep", this needs to trigger inode cluster > deletion in offline AGs (needs hooks in xfs_ifree()).
This part (removal of inodes) is not actually needed if the icount == ifree (I presume this means that all the existing inodes are free). > 3. Move data out of offline AGs. > - this is difficult to do efficiently as we do not have > a block-to-owner reverse mapping in the filesystem. > Hence requires a walk of the *entire* filesystem to find > the owners of data blocks in the AGs being offlined. > - xfs_db wrapper might be the best way to do this... > > > > 4. Execute shrink > - new transaction - XFS_TRANS_SHRINKFS > - check AGs are empty > - icount == 0 > - freeblks == mp->m_sb.sb_agblocks > (will be a little more than this) > - check shrink won't go past end of internal log > - free AGs, updating superblock fields > - update perag structure > - not a simple realloc() as there may > be other threads using the structure at the > same time.... > My suggestion would be to start implementing these steps in reverse. 4) is the most important as it touches the entire FS. If 4) is working correctly, then 1) would be simpler (I think) and 3) can be implemented by just running a forced xfs_fsr against the conflicting files. I don't know about 2). Sorry if I'm blatantly wrong in my statements. Good to have more information! 
regards, iustin From owner-xfs@oss.sgi.com Mon Jun 4 02:21:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 02:21:32 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l549LPWt007722 for ; Mon, 4 Jun 2007 02:21:27 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id TAA09755; Mon, 4 Jun 2007 19:21:19 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l549LHAf110747443; Mon, 4 Jun 2007 19:21:18 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l549LFSE112142526; Mon, 4 Jun 2007 19:21:15 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 4 Jun 2007 19:21:15 +1000 From: David Chinner To: David Chinner , Ruben Porras , xfs@oss.sgi.com, cw@f00f.org Subject: Re: XFS shrink functionality Message-ID: <20070604092115.GX85884050@sgi.com> References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> <20070604084154.GA8273@teal.hq.k1024.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604084154.GA8273@teal.hq.k1024.org> User-Agent: Mutt/1.4.2.1i X-archive-position: 11616 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Mon, Jun 04, 2007 at 10:41:54AM +0200, Iustin Pop wrote: > Disclaimer: all the below is based on my weak understanding of the code, > I don't claim I'm right below. > > On Mon, Jun 04, 2007 at 10:16:32AM +1000, David Chinner wrote: > > Any work for this would need to be done against current mainline > > of the xfs-dev tree. 
> > > > Yes, that patch is out of date, and it also did things that were not > > necessary i.e. walk btrees to work out if AGs are empty or not. > > Well, I did what I could based on my own understanding of the code. > Sorry if it's ugly :) > > > > I'm really curious about what happened to these patches and why they were > > > discontinued. The second part never was made public, and there was also > > > no answer. Was there any flaw in any of the posted code or anything in > > > XFS that makes it especially hard to shrink [3] that discouraged the > > > development? > > > > The posted code is only a *tiny* part of the shrink problem. > > My idea at that time was to start small and be able to shrink an empty > filesystem (or empty at least regarding the AGs that you want to clear). Yes, that is one way of looking at it.... > The point is that if AGs are lockable outside of a transaction > (something like the freeze/unfreeze functionality at the fs level), then > by simply copying the conflicting files you ensure that they are Copying is not good enough - attributes must remain unchanged. The only thing we can't preserve is the inode number.... > allocated on an available AG and when you remove the originals, the > to-be-shrunk AGs become free. Yes, utterly non-optimal, but it was the > simplest way to do it based on what I knew at the time. Not quite that simple, unfortunately. You can't leave the AGs locked in the same way we do for a grow because we need to be able to use the AGs to move stuff about and that requires locking them. Hence we need a separate mechanism to prevent allocation in a given AG outside of locking them. Hence we need: - a transaction to mark AGs "no-allocate" - a transaction to mark AGs "allocatable" - a flag in each AGF/AGI to say the AG is available for allocations (persistent over crashes) - a flag in the per-ag structure to indicate allocation status of the AG.
- everywhere we select an AG for allocation, we need to check this flag and skip the AG if it's not available. FWIW, the transactions can probably just be an extension of xfs_alloc_log_agf() and xfs_alloc_log_agi().... > > > What are the programmers requirements from your point of view? > > > > Here's the "simple" bits that will allow you to shrink > > the filesystem down to the end of the internal log: > > > > 0. Check space is available for shrink > Can be done by actually allocating the space to be freed at the > beginning of the transaction. Right? No, I mean that you need to check that there is sufficient space in the untouched AGs to move all the data from the AGs to be removed into the remaining part of the filesystem. This is not part of a transaction, but still a check that needs to be done before starting.... > > 1. Mark allocation groups as "don't use - going away soon" > > - so we don't put new stuff in them while we > > are moving all the bits out of them > > - requires hooks in the allocators to prevent > > the AG from being selected for allocations > > - must still allow allocations for the free lists > > so that extent freeing can succeed > > - *new transaction required*. > > - also needs an "undo" (e.g. on partial failure) > > so we need to be able to mark allocation groups > > online again. > > So a question: can transactions be nested? No. > Because the offline AG > transaction needs to live until the shrink transaction is done. No it doesn't - the *state* needs to remain until we do the shrink, the transaction only needs to live until it has hit the disk. > I was > more thinking that the offline-AG should be a bit on the AG that could > be changed by the admin (like xfs_freeze); this could also help for > other reasons than shrink (when on a big FS some AGs lie on a physical > device and others on a different device, and you would like to restrict > writes to a given AG, as much as possible). Yes, that's exactly what I'm talking about ;) > > 2.
Move inodes out of offline AGs > > - On Irix, we have a program called 'xfs_reno' which > > converts 64 bit inode filesystems to 32 bit inode > > filesystems. This needs to be: > > - released under the GPL (should not be a problem). > > - ported to linux > > - modified to understand inodes sit in certain > > AGs and to move them out of those AGs as needed. > > - requires filesystem traversal to find all the > > inodes to be moved. > Interesting. I've read on the mail list of this before, but no other > details. > > > > > % wc -l xfs_reno.c > > 1991 xfs_reno.c > > > > - even with "-o ikeep", this needs to trigger inode cluster > > deletion in offline AGs (needs hooks in xfs_ifree()). > This part (removal of inodes) is not actually needed if the icount == > ifree (I presume this means that all the existing inodes are free). Yes, I guess that could be done - it means extra stuffing about when doing the final shrink transaction, though. E.g. making sure that free block counts update correctly given that the AGI btrees will be consuming blocks - easier just to free the clusters as they get emptied, I think.... > > 3. Move data out of offline AGs. > > - this is difficult to do efficiently as we do not have > > a block-to-owner reverse mapping in the filesystem. > > Hence requires a walk of the *entire* filesystem to find > > the owners of data blocks in the AGs being offlined. > > - xfs_db wrapper might be the best way to do this... > > > > > > > > 4. Execute shrink > > - new transaction - XFS_TRANS_SHRINKFS > > - check AGs are empty > > - icount == 0 > > - freeblks == mp->m_sb.sb_agblocks > > (will be a little more than this) > > - check shrink won't go past end of internal log > > - free AGs, updating superblock fields > > - update perag structure > > - not a simple realloc() as there may > > be other threads using the structure at the > > same time.... > > > > My suggestion would be to start implementing these steps in reverse.
4) > is the most important as it touches the entire FS. If 4) is working > correctly, then 1) would be simpler (I think) and 3) can be implemented > by just running a forced xfs_fsr against the conflicting files. I don't > know about 2). Yeah, 1) and 4) are separable parts of the problem and can be done in any order. 2) can be implemented relatively easily as stated above. 3) is the hard one - we need to find the owner of each block (metadata and data) remaining in the AGs to be removed. This may be a directory btree block, an inode extent btree block, a data block, an extended attr block, etc. Moving the data blocks is easy to do (swap extents), but moving the metadata blocks is a major PITA as it will need to be done transactionally and that will require a bunch of new (complex) code to be written, I think. It will be of equivalent complexity to defragmenting metadata.... If we ignore the metadata block problem then finding and moving the data blocks should not be a problem - swap extents can be used for that as well - but it will be extremely time consuming and won't scale to large filesystem sizes.... Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 4 02:41:33 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 02:41:36 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l549fTWt019011 for ; Mon, 4 Jun 2007 02:41:30 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id TAA10180; Mon, 4 Jun 2007 19:41:23 +1000 Date: Mon, 04 Jun 2007 19:41:19 +1000 From: Timothy Shimmin To: David Chinner , xfs-dev cc: xfs-oss Subject: Re: Review: Be smarter about handling ENOSPC during writeback Message-ID: In-Reply-To: <20070604045219.GG86004887@sgi.com> References: <20070604045219.GG86004887@sgi.com> X-Mailer: Mulberry/4.0.8 (Mac OS X) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-archive-position: 11617 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Hi Dave, As an aside, I don't understand the following bit of code in xfs_reserve_blocks(): /* * If our previous reservation was larger than the current value, * then move any unused blocks back to the free pool. */ fdblks_delta = 0; if (mp->m_resblks > request) { lcounter = mp->m_resblks_avail - request; if (lcounter > 0) { /* release unused blocks */ fdblks_delta = lcounter; mp->m_resblks_avail -= lcounter; } I can't see why it is not calculating a delta for: delta = m_resblks - request and using that to update the m_resblks_avail and fdblks_delta; like we do in the case below where the request is the larger one. Instead it is effectively doing: m_resblks_avail = m_resblks_avail - m_resblks_avail + request = request It looks wrong to me. What am I missing?
And why doesn't sb_fdblocks need to be updated in this case? Thanks, --Tim --On 4 June 2007 2:52:19 PM +1000 David Chinner wrote: > > During delayed allocation extent conversion or unwritten extent > conversion, we need to reserve some blocks for transaction > reservations. We need to reserve these blocks in case a btree > split occurs and we need to allocate some blocks. > > Unfortunately, we've only ever reserved the number of data blocks we > are allocating, so in both the unwritten and delalloc case we can > get ENOSPC to the transaction reservation. This is bad because in > both cases we cannot report the failure to the writing application. > > The fix is two-fold: > > 1 - leverage the reserved block infrastructure XFS already > has to reserve a small pool of blocks by default to allow > specially marked transactions to dip into when we are at > ENOSPC. > > Default setting is min(5%, 1024 blocks). > > 2 - convert critical transaction reservations to be allowed > to dip into this pool. Spots changed are delalloc > conversion, unwritten extent conversion and growing a > filesystem at ENOSPC. > > Comments? > > Cheers, > > Dave.
> -- > Dave Chinner > Principal Engineer > SGI Australian Software Group > > --- > fs/xfs/xfs_fsops.c | 10 +++++++--- > fs/xfs/xfs_iomap.c | 22 ++++++++-------------- > fs/xfs/xfs_mount.c | 37 +++++++++++++++++++++++++++++++++++-- > 3 files changed, 50 insertions(+), 19 deletions(-) > > Index: 2.6.x-xfs-new/fs/xfs/xfs_fsops.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_fsops.c 2007-05-11 10:35:29.288847149 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_fsops.c 2007-05-11 11:13:34.195363437 +1000 > @@ -179,6 +179,7 @@ xfs_growfs_data_private( > up_write(&mp->m_peraglock); > } > tp = xfs_trans_alloc(mp, XFS_TRANS_GROWFS); > + tp->t_flags |= XFS_TRANS_RESERVE; > if ((error = xfs_trans_reserve(tp, XFS_GROWFS_SPACE_RES(mp), > XFS_GROWDATA_LOG_RES(mp), 0, 0, 0))) { > xfs_trans_cancel(tp, 0); > @@ -500,8 +501,9 @@ xfs_reserve_blocks( > unsigned long s; > > /* If inval is null, report current values and return */ > - > if (inval == (__uint64_t *)NULL) { > + if (!outval) > + return EINVAL; > outval->resblks = mp->m_resblks; > outval->resblks_avail = mp->m_resblks_avail; > return 0; > @@ -564,8 +566,10 @@ retry: > } > } > out: > - outval->resblks = mp->m_resblks; > - outval->resblks_avail = mp->m_resblks_avail; > + if (outval) { > + outval->resblks = mp->m_resblks; > + outval->resblks_avail = mp->m_resblks_avail; > + } > XFS_SB_UNLOCK(mp, s); > > if (fdblks_delta) { > Index: 2.6.x-xfs-new/fs/xfs/xfs_mount.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_mount.c 2007-05-11 10:35:29.292846630 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_mount.c 2007-05-11 11:13:47.229662318 +1000 > @@ -718,7 +718,7 @@ xfs_mountfs( > bhv_vnode_t *rvp = NULL; > int readio_log, writeio_log; > xfs_daddr_t d; > - __uint64_t ret64; > + __uint64_t resblks; > __int64_t update_flags; > uint quotamount, quotaflags; > int agno; > @@ -835,6 +835,7 @@ xfs_mountfs( > */ > if ((mfsi_flags & XFS_MFSI_SECOND) 
== 0 && > (mp->m_flags & XFS_MOUNT_NOUUID) == 0) { > + __uint64_t ret64; > if (xfs_uuid_mount(mp)) { > error = XFS_ERROR(EINVAL); > goto error1; > @@ -1127,13 +1128,27 @@ xfs_mountfs( > goto error4; > } > > - > /* > * Complete the quota initialisation, post-log-replay component. > */ > if ((error = XFS_QM_MOUNT(mp, quotamount, quotaflags, mfsi_flags))) > goto error4; > > + /* > + * Now we are mounted, reserve a small amount of unused space for > + * privileged transactions. This is needed so that transaction > + * space required for critical operations can dip into this pool > + * when at ENOSPC. This is needed for operations like create with > + * attr, unwritten extent conversion at ENOSPC, etc. Data allocations > + * are not allowed to use this reserved space. > + * > + * We default to 5% or 1024 fsbs of space reserved, whichever is smaller. > + * This may drive us straight to ENOSPC on mount, but that implies > + * we were already there on the last unmount. > + */ > + resblks = min_t(__uint64_t, mp->m_sb.sb_dblocks / 20, 1024); > + xfs_reserve_blocks(mp, &resblks, NULL); > + > return 0; > > error4: > @@ -1172,6 +1187,7 @@ xfs_unmountfs(xfs_mount_t *mp, struct cr > #if defined(DEBUG) || defined(INDUCE_IO_ERROR) > int64_t fsid; > #endif > + __uint64_t resblks; > > /* > * We can potentially deadlock here if we have an inode cluster > @@ -1200,6 +1216,23 @@ xfs_unmountfs(xfs_mount_t *mp, struct cr > xfs_binval(mp->m_rtdev_targp); > } > > + /* > + * Unreserve any blocks we have so that when we unmount we don't account > + * the reserved free space as used. This is really only necessary for > + * lazy superblock counting because it trusts the incore superblock > + * counters to be aboslutely correct on clean unmount. > + * > + * We don't bother correcting this elsewhere for lazy superblock > + * counting because on mount of an unclean filesystem we reconstruct the > + * correct counter value and this is irrelevant. 
> + * > + * For non-lazy counter filesystems, this doesn't matter at all because > + * we only every apply deltas to the superblock and hence the incore > + * value does not matter.... > + */ > + resblks = 0; > + xfs_reserve_blocks(mp, &resblks, NULL); > + > xfs_log_sbcount(mp, 1); > xfs_unmountfs_writesb(mp); > xfs_unmountfs_wait(mp); /* wait for async bufs */ > Index: 2.6.x-xfs-new/fs/xfs/xfs_iomap.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_iomap.c 2007-05-11 11:13:13.862017149 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_iomap.c 2007-05-11 11:13:34.199362915 +1000 > @@ -489,13 +489,13 @@ xfs_iomap_write_direct( > if (unlikely(rt)) { > resrtextents = qblocks = resaligned; > resrtextents /= mp->m_sb.sb_rextsize; > - resblks = XFS_DIOSTRAT_SPACE_RES(mp, 0); > - quota_flag = XFS_QMOPT_RES_RTBLKS; > - } else { > - resrtextents = 0; > + resblks = XFS_DIOSTRAT_SPACE_RES(mp, 0); > + quota_flag = XFS_QMOPT_RES_RTBLKS; > + } else { > + resrtextents = 0; > resblks = qblocks = XFS_DIOSTRAT_SPACE_RES(mp, resaligned); > - quota_flag = XFS_QMOPT_RES_REGBLKS; > - } > + quota_flag = XFS_QMOPT_RES_REGBLKS; > + } > > /* > * Allocate and setup the transaction > @@ -788,18 +788,12 @@ xfs_iomap_write_allocate( > nimaps = 0; > while (nimaps == 0) { > tp = xfs_trans_alloc(mp, XFS_TRANS_STRAT_WRITE); > + tp->t_flags |= XFS_TRANS_RESERVE; > nres = XFS_EXTENTADD_SPACE_RES(mp, XFS_DATA_FORK); > error = xfs_trans_reserve(tp, nres, > XFS_WRITE_LOG_RES(mp), > 0, XFS_TRANS_PERM_LOG_RES, > XFS_WRITE_LOG_COUNT); > - if (error == ENOSPC) { > - error = xfs_trans_reserve(tp, 0, > - XFS_WRITE_LOG_RES(mp), > - 0, > - XFS_TRANS_PERM_LOG_RES, > - XFS_WRITE_LOG_COUNT); > - } > if (error) { > xfs_trans_cancel(tp, 0); > return XFS_ERROR(error); > @@ -917,8 +911,8 @@ xfs_iomap_write_unwritten( > * from unwritten to real. Do allocations in a loop until > * we have covered the range passed in. 
> */ > - > tp = xfs_trans_alloc(mp, XFS_TRANS_STRAT_WRITE); > + tp->t_flags |= XFS_TRANS_RESERVE; > error = xfs_trans_reserve(tp, resblks, > XFS_WRITE_LOG_RES(mp), 0, > XFS_TRANS_PERM_LOG_RES, From owner-xfs@oss.sgi.com Mon Jun 4 07:12:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 07:12:07 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l54EBxWt010770 for ; Mon, 4 Jun 2007 07:12:01 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id AAA16511; Tue, 5 Jun 2007 00:11:57 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l54EBuAf113627423; Tue, 5 Jun 2007 00:11:56 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l54EBs8T112839881; Tue, 5 Jun 2007 00:11:54 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 5 Jun 2007 00:11:54 +1000 From: David Chinner To: Timothy Shimmin Cc: David Chinner , xfs-dev , xfs-oss Subject: Re: Review: Be smarter about handling ENOSPC during writeback Message-ID: <20070604141154.GK86004887@sgi.com> References: <20070604045219.GG86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-archive-position: 11618 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Mon, Jun 04, 2007 at 07:41:19PM +1000, Timothy Shimmin wrote: > Hi Dave, > > As an aside, > I don't understand the following bit of code in xfs_reserve_blocks(): > /* > * If our previous reservation was larger than the current value, > * then move any unused blocks 
back to the free pool. > */ > fdblks_delta = 0; > if (mp->m_resblks > request) { > lcounter = mp->m_resblks_avail - request; > if (lcounter > 0) { /* release unused blocks */ > fdblks_delta = lcounter; > mp->m_resblks_avail -= lcounter; > } >>>> mp->m_resblks = request; > > I can't see why it is not calculating a delta for: > delta = m_resblks - request Because mp->m_resblks_avail is the amount of reservation space we have *unallocated*. i.e. 0 <= mp->m_resblks_avail <= mp->m_resblks IOWs, when we have used some blocks and then change mp->m_resblks, the amount we can return is limited by the available blocks, not the total reservation. > and using that to update the m_resblks_avail and fdblks_delta; > like we do in the case below where the request is the larger one. > Instead it is effectively doing: > m_resblks_avail = m_resblks_avail - m_resblks_avail + request > = request Surprising, but correct. When we reduce mp->m_resblks, mp->m_resblks_avail must be <= mp->m_resblks. IOWs we reduce the available blocks to that of the limit, so any allocated reserved blocks will now immediately be considered as unreserved and when they are freed the space will be immediately returned to the normal pool. Example: resblks = 1000, avail = 750. Set new resblks = 500. avail must be reduced to 500 and 250 must be freed. fdblks_delta = 0 if (1000 > 500) { lcounter = 750 - 500 = 250 if (250 > 0) { fdblks_delta = 250 resblks_avail = 500 } m_resblks = 500; } else { ..... } ..... if (fdblks_delta) { ..... error = xfs_mod_incore_sb(mp, XFS_SBS_FDBLOCKS, fdblks_delta, 0); ..... That "frees" 250 blocks. This is correct behaviour. > It looks wrong to me. > What am I missing? > And why doesn't sb_fdblocks need to be updated in this case. Because if we update mp->m_sb.sb_fdblocks, the value minus the reservation gets written to disk, and that means it is incorrect (xfs_check would fail) as the reservation is purely an in-memory construct... Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 4 07:33:56 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 07:33:59 -0700 (PDT) Received: from mail.lst.de (verein.lst.de [213.95.11.210]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54EXsWt017205 for ; Mon, 4 Jun 2007 07:33:56 -0700 Received: from verein.lst.de (localhost [127.0.0.1]) by mail.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id l54EXro6008984 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 4 Jun 2007 16:33:53 +0200 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id l54EXqwD008980 for xfs@oss.sgi.com; Mon, 4 Jun 2007 16:33:52 +0200 Date: Mon, 4 Jun 2007 16:33:52 +0200 From: Christoph Hellwig To: xfs@oss.sgi.com Subject: [PATCH] get rid of file_count abuse Message-ID: <20070604143352.GA8721@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.3.28i X-Scanned-By: MIMEDefang 2.39 X-archive-position: 11619 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@lst.de Precedence: bulk X-list: xfs A check for file_count is always a bad idea. Linux has the ->release method to deal with cleanups on last close and ->flush is only for the very rare case where we want to perform an operation on every drop of a reference to a file struct. This patch gets rid of vop_close and surrounding code in favour of simply doing the page flushing from ->release. 
Signed-off-by: Christoph Hellwig Index: linux-2.6/fs/xfs/linux-2.6/xfs_file.c =================================================================== --- linux-2.6.orig/fs/xfs/linux-2.6/xfs_file.c 2007-06-01 13:20:26.000000000 +0200 +++ linux-2.6/fs/xfs/linux-2.6/xfs_file.c 2007-06-01 13:20:41.000000000 +0200 @@ -208,15 +208,6 @@ xfs_file_open( } STATIC int -xfs_file_close( - struct file *filp, - fl_owner_t id) -{ - return -bhv_vop_close(vn_from_inode(filp->f_path.dentry->d_inode), 0, - file_count(filp) > 1 ? L_FALSE : L_TRUE, NULL); -} - -STATIC int xfs_file_release( struct inode *inode, struct file *filp) @@ -461,7 +452,6 @@ const struct file_operations xfs_file_op #endif .mmap = xfs_file_mmap, .open = xfs_file_open, - .flush = xfs_file_close, .release = xfs_file_release, .fsync = xfs_file_fsync, #ifdef HAVE_FOP_OPEN_EXEC @@ -484,7 +474,6 @@ const struct file_operations xfs_invis_f #endif .mmap = xfs_file_mmap, .open = xfs_file_open, - .flush = xfs_file_close, .release = xfs_file_release, .fsync = xfs_file_fsync, }; Index: linux-2.6/fs/xfs/linux-2.6/xfs_vnode.h =================================================================== --- linux-2.6.orig/fs/xfs/linux-2.6/xfs_vnode.h 2007-06-01 13:19:59.000000000 +0200 +++ linux-2.6/fs/xfs/linux-2.6/xfs_vnode.h 2007-06-01 13:21:05.000000000 +0200 @@ -129,10 +129,7 @@ typedef enum bhv_vchange { VCHANGE_FLAGS_IOEXCL_COUNT = 4 } bhv_vchange_t; -typedef enum { L_FALSE, L_TRUE } lastclose_t; - typedef int (*vop_open_t)(bhv_desc_t *, struct cred *); -typedef int (*vop_close_t)(bhv_desc_t *, int, lastclose_t, struct cred *); typedef ssize_t (*vop_read_t)(bhv_desc_t *, struct kiocb *, const struct iovec *, unsigned int, loff_t *, int, struct cred *); @@ -203,7 +200,6 @@ typedef int (*vop_iflush_t)(bhv_desc_t * typedef struct bhv_vnodeops { bhv_position_t vn_position; /* position within behavior chain */ vop_open_t vop_open; - vop_close_t vop_close; vop_read_t vop_read; vop_write_t vop_write; vop_sendfile_t vop_sendfile; @@ -249,7 
+245,6 @@ typedef struct bhv_vnodeops { #define VNHEAD(vp) ((vp)->v_bh.bh_first) #define VOP(op, vp) (*((bhv_vnodeops_t *)VNHEAD(vp)->bd_ops)->op) #define bhv_vop_open(vp, cr) VOP(vop_open, vp)(VNHEAD(vp),cr) -#define bhv_vop_close(vp, f,last,cr) VOP(vop_close, vp)(VNHEAD(vp),f,last,cr) #define bhv_vop_read(vp,file,iov,segs,offset,ioflags,cr) \ VOP(vop_read, vp)(VNHEAD(vp),file,iov,segs,offset,ioflags,cr) #define bhv_vop_write(vp,file,iov,segs,offset,ioflags,cr) \ Index: linux-2.6/fs/xfs/xfs_vnodeops.c =================================================================== --- linux-2.6.orig/fs/xfs/xfs_vnodeops.c 2007-06-01 13:17:41.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_vnodeops.c 2007-06-01 13:19:54.000000000 +0200 @@ -77,36 +77,6 @@ xfs_open( return 0; } -STATIC int -xfs_close( - bhv_desc_t *bdp, - int flags, - lastclose_t lastclose, - cred_t *credp) -{ - bhv_vnode_t *vp = BHV_TO_VNODE(bdp); - xfs_inode_t *ip = XFS_BHVTOI(bdp); - - if (XFS_FORCED_SHUTDOWN(ip->i_mount)) - return XFS_ERROR(EIO); - - if (lastclose != L_TRUE || !VN_ISREG(vp)) - return 0; - - /* - * If we previously truncated this file and removed old data in - * the process, we want to initiate "early" writeout on the last - * close. This is an attempt to combat the notorious NULL files - * problem which is particularly noticable from a truncate down, - * buffered (re-)write (delalloc), followed by a crash. What we - * are effectively doing here is significantly reducing the time - * window where we'd otherwise be exposed to that problem. - */ - if (VUNTRUNCATE(vp) && VN_DIRTY(vp) && ip->i_delayed_blks > 0) - return bhv_vop_flush_pages(vp, 0, -1, XFS_B_ASYNC, FI_NONE); - return 0; -} - /* * xfs_getattr */ @@ -1560,6 +1530,22 @@ xfs_release( if (vp->v_vfsp->vfs_flag & VFS_RDONLY) return 0; + if (!XFS_FORCED_SHUTDOWN(ip->i_mount)) { + /* + * If we previously truncated this file and removed old data + * in the process, we want to initiate "early" writeout on + * the last close. 
This is an attempt to combat the notorious + * NULL files problem which is particularly noticable from a + * truncate down, buffered (re-)write (delalloc), followed by + * a crash. What we are effectively doing here is + * significantly reducing the time window where we'd otherwise + * be exposed to that problem. + */ + if (VUNTRUNCATE(vp) && VN_DIRTY(vp) && ip->i_delayed_blks > 0) + bhv_vop_flush_pages(vp, 0, -1, XFS_B_ASYNC, FI_NONE); + } + + #ifdef HAVE_REFCACHE /* If we are in the NFS reference cache then don't do this now */ if (ip->i_refcache) @@ -4678,7 +4664,6 @@ xfs_change_file_space( bhv_vnodeops_t xfs_vnodeops = { BHV_IDENTITY_INIT(VN_BHV_XFS,VNODE_POSITION_XFS), .vop_open = xfs_open, - .vop_close = xfs_close, .vop_read = xfs_read, #ifdef HAVE_SENDFILE .vop_sendfile = xfs_sendfile, From owner-xfs@oss.sgi.com Mon Jun 4 07:36:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 07:36:14 -0700 (PDT) Received: from mail.lst.de (verein.lst.de [213.95.11.210]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54Ea5Wt017971 for ; Mon, 4 Jun 2007 07:36:06 -0700 Received: from verein.lst.de (localhost [127.0.0.1]) by mail.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id l54Ea3o6009169 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 4 Jun 2007 16:36:03 +0200 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id l54Ea2Dx009167 for xfs@oss.sgi.com; Mon, 4 Jun 2007 16:36:02 +0200 Date: Mon, 4 Jun 2007 16:36:02 +0200 From: Christoph Hellwig To: xfs@oss.sgi.com Subject: Re: [PATCH] kill macro noise in xfs_dir2*.h Message-ID: <20070604143602.GA9081@lst.de> References: <20070418175859.GB18315@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070418175859.GB18315@lst.de> User-Agent: Mutt/1.3.28i X-Scanned-By: MIMEDefang 2.39 X-archive-position: 11620 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: 
xfs-bounce@oss.sgi.com X-original-sender: hch@lst.de Precedence: bulk X-list: xfs On Wed, Apr 18, 2007 at 07:59:00PM +0200, Christoph Hellwig wrote: > Remove all the macros that just give inline functions uppercase names. > > Signed-off-by: Christoph Hellwig This patch still hasn't made it to mainline, so here's a version rediffed for latest mainline because it's required for the next patch I'll post: Signed-off-by: Christoph Hellwig Index: linux-2.6/fs/xfs/xfs_dir2.c =================================================================== --- linux-2.6.orig/fs/xfs/xfs_dir2.c 2007-04-28 09:37:26.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_dir2.c 2007-05-12 13:38:34.000000000 +0200 @@ -55,9 +55,9 @@ xfs_dir_mount( XFS_MAX_BLOCKSIZE); mp->m_dirblksize = 1 << (mp->m_sb.sb_blocklog + mp->m_sb.sb_dirblklog); mp->m_dirblkfsbs = 1 << mp->m_sb.sb_dirblklog; - mp->m_dirdatablk = XFS_DIR2_DB_TO_DA(mp, XFS_DIR2_DATA_FIRSTDB(mp)); - mp->m_dirleafblk = XFS_DIR2_DB_TO_DA(mp, XFS_DIR2_LEAF_FIRSTDB(mp)); - mp->m_dirfreeblk = XFS_DIR2_DB_TO_DA(mp, XFS_DIR2_FREE_FIRSTDB(mp)); + mp->m_dirdatablk = xfs_dir2_db_to_da(mp, XFS_DIR2_DATA_FIRSTDB(mp)); + mp->m_dirleafblk = xfs_dir2_db_to_da(mp, XFS_DIR2_LEAF_FIRSTDB(mp)); + mp->m_dirfreeblk = xfs_dir2_db_to_da(mp, XFS_DIR2_FREE_FIRSTDB(mp)); mp->m_attr_node_ents = (mp->m_sb.sb_blocksize - (uint)sizeof(xfs_da_node_hdr_t)) / (uint)sizeof(xfs_da_node_entry_t); @@ -554,7 +554,7 @@ xfs_dir2_grow_inode( */ if (mapp != &map) kmem_free(mapp, sizeof(*mapp) * count); - *dbp = XFS_DIR2_DA_TO_DB(mp, (xfs_dablk_t)bno); + *dbp = xfs_dir2_da_to_db(mp, (xfs_dablk_t)bno); /* * Update file's size if this is the data space and it grew. */ @@ -706,7 +706,7 @@ xfs_dir2_shrink_inode( dp = args->dp; mp = dp->i_mount; tp = args->trans; - da = XFS_DIR2_DB_TO_DA(mp, db); + da = xfs_dir2_db_to_da(mp, db); /* * Unmap the fsblock(s). */ @@ -742,7 +742,7 @@ xfs_dir2_shrink_inode( /* * If the block isn't the last one in the directory, we're done. 
*/ - if (dp->i_d.di_size > XFS_DIR2_DB_OFF_TO_BYTE(mp, db + 1, 0)) + if (dp->i_d.di_size > xfs_dir2_db_off_to_byte(mp, db + 1, 0)) return 0; bno = da; if ((error = xfs_bmap_last_before(tp, dp, &bno, XFS_DATA_FORK))) { Index: linux-2.6/fs/xfs/xfs_dir2_block.c =================================================================== --- linux-2.6.orig/fs/xfs/xfs_dir2_block.c 2007-05-10 10:53:16.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_dir2_block.c 2007-05-12 13:38:34.000000000 +0200 @@ -115,13 +115,13 @@ xfs_dir2_block_addname( xfs_da_brelse(tp, bp); return XFS_ERROR(EFSCORRUPTED); } - len = XFS_DIR2_DATA_ENTSIZE(args->namelen); + len = xfs_dir2_data_entsize(args->namelen); /* * Set up pointers to parts of the block. */ bf = block->hdr.bestfree; - btp = XFS_DIR2_BLOCK_TAIL_P(mp, block); - blp = XFS_DIR2_BLOCK_LEAF_P(btp); + btp = xfs_dir2_block_tail_p(mp, block); + blp = xfs_dir2_block_leaf_p(btp); /* * No stale entries? Need space for entry and new leaf. */ @@ -396,7 +396,7 @@ xfs_dir2_block_addname( * Fill in the leaf entry. */ blp[mid].hashval = cpu_to_be32(args->hashval); - blp[mid].address = cpu_to_be32(XFS_DIR2_BYTE_TO_DATAPTR(mp, + blp[mid].address = cpu_to_be32(xfs_dir2_byte_to_dataptr(mp, (char *)dep - (char *)block)); xfs_dir2_block_log_leaf(tp, bp, lfloglow, lfloghigh); /* @@ -411,7 +411,7 @@ xfs_dir2_block_addname( dep->inumber = cpu_to_be64(args->inumber); dep->namelen = args->namelen; memcpy(dep->name, args->name, args->namelen); - tagp = XFS_DIR2_DATA_ENTRY_TAG_P(dep); + tagp = xfs_dir2_data_entry_tag_p(dep); *tagp = cpu_to_be16((char *)dep - (char *)block); /* * Clean up the bestfree array and log the header, tail, and entry. @@ -455,7 +455,7 @@ xfs_dir2_block_getdents( /* * If the block number in the offset is out of range, we're done. 
*/ - if (XFS_DIR2_DATAPTR_TO_DB(mp, uio->uio_offset) > mp->m_dirdatablk) { + if (xfs_dir2_dataptr_to_db(mp, uio->uio_offset) > mp->m_dirdatablk) { *eofp = 1; return 0; } @@ -471,15 +471,15 @@ xfs_dir2_block_getdents( * Extract the byte offset we start at from the seek pointer. * We'll skip entries before this. */ - wantoff = XFS_DIR2_DATAPTR_TO_OFF(mp, uio->uio_offset); + wantoff = xfs_dir2_dataptr_to_off(mp, uio->uio_offset); block = bp->data; xfs_dir2_data_check(dp, bp); /* * Set up values for the loop. */ - btp = XFS_DIR2_BLOCK_TAIL_P(mp, block); + btp = xfs_dir2_block_tail_p(mp, block); ptr = (char *)block->u; - endptr = (char *)XFS_DIR2_BLOCK_LEAF_P(btp); + endptr = (char *)xfs_dir2_block_leaf_p(btp); p.dbp = dbp; p.put = put; p.uio = uio; @@ -502,7 +502,7 @@ xfs_dir2_block_getdents( /* * Bump pointer for the next iteration. */ - ptr += XFS_DIR2_DATA_ENTSIZE(dep->namelen); + ptr += xfs_dir2_data_entsize(dep->namelen); /* * The entry is before the desired starting point, skip it. */ @@ -513,7 +513,7 @@ xfs_dir2_block_getdents( */ p.namelen = dep->namelen; - p.cook = XFS_DIR2_DB_OFF_TO_DATAPTR(mp, mp->m_dirdatablk, + p.cook = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, ptr - (char *)block); p.ino = be64_to_cpu(dep->inumber); #if XFS_BIG_INUMS @@ -531,7 +531,7 @@ xfs_dir2_block_getdents( */ if (!p.done) { uio->uio_offset = - XFS_DIR2_DB_OFF_TO_DATAPTR(mp, mp->m_dirdatablk, + xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, (char *)dep - (char *)block); xfs_da_brelse(tp, bp); return error; @@ -545,7 +545,7 @@ xfs_dir2_block_getdents( *eofp = 1; uio->uio_offset = - XFS_DIR2_DB_OFF_TO_DATAPTR(mp, mp->m_dirdatablk + 1, 0); + xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk + 1, 0); xfs_da_brelse(tp, bp); @@ -569,8 +569,8 @@ xfs_dir2_block_log_leaf( mp = tp->t_mountp; block = bp->data; - btp = XFS_DIR2_BLOCK_TAIL_P(mp, block); - blp = XFS_DIR2_BLOCK_LEAF_P(btp); + btp = xfs_dir2_block_tail_p(mp, block); + blp = xfs_dir2_block_leaf_p(btp); xfs_da_log_buf(tp, bp, 
(uint)((char *)&blp[first] - (char *)block), (uint)((char *)&blp[last + 1] - (char *)block - 1)); } @@ -589,7 +589,7 @@ xfs_dir2_block_log_tail( mp = tp->t_mountp; block = bp->data; - btp = XFS_DIR2_BLOCK_TAIL_P(mp, block); + btp = xfs_dir2_block_tail_p(mp, block); xfs_da_log_buf(tp, bp, (uint)((char *)btp - (char *)block), (uint)((char *)(btp + 1) - (char *)block - 1)); } @@ -623,13 +623,13 @@ xfs_dir2_block_lookup( mp = dp->i_mount; block = bp->data; xfs_dir2_data_check(dp, bp); - btp = XFS_DIR2_BLOCK_TAIL_P(mp, block); - blp = XFS_DIR2_BLOCK_LEAF_P(btp); + btp = xfs_dir2_block_tail_p(mp, block); + blp = xfs_dir2_block_leaf_p(btp); /* * Get the offset from the leaf entry, to point to the data. */ dep = (xfs_dir2_data_entry_t *) - ((char *)block + XFS_DIR2_DATAPTR_TO_OFF(mp, be32_to_cpu(blp[ent].address))); + ((char *)block + xfs_dir2_dataptr_to_off(mp, be32_to_cpu(blp[ent].address))); /* * Fill in inode number, release the block. */ @@ -675,8 +675,8 @@ xfs_dir2_block_lookup_int( ASSERT(bp != NULL); block = bp->data; xfs_dir2_data_check(dp, bp); - btp = XFS_DIR2_BLOCK_TAIL_P(mp, block); - blp = XFS_DIR2_BLOCK_LEAF_P(btp); + btp = xfs_dir2_block_tail_p(mp, block); + blp = xfs_dir2_block_leaf_p(btp); /* * Loop doing a binary search for our hash value. * Find our entry, ENOENT if it's not there. @@ -713,7 +713,7 @@ xfs_dir2_block_lookup_int( * Get pointer to the entry from the leaf. */ dep = (xfs_dir2_data_entry_t *) - ((char *)block + XFS_DIR2_DATAPTR_TO_OFF(mp, addr)); + ((char *)block + xfs_dir2_dataptr_to_off(mp, addr)); /* * Compare, if it's right give back buffer & entry number. */ @@ -768,20 +768,20 @@ xfs_dir2_block_removename( tp = args->trans; mp = dp->i_mount; block = bp->data; - btp = XFS_DIR2_BLOCK_TAIL_P(mp, block); - blp = XFS_DIR2_BLOCK_LEAF_P(btp); + btp = xfs_dir2_block_tail_p(mp, block); + blp = xfs_dir2_block_leaf_p(btp); /* * Point to the data entry using the leaf entry. 
*/ dep = (xfs_dir2_data_entry_t *) - ((char *)block + XFS_DIR2_DATAPTR_TO_OFF(mp, be32_to_cpu(blp[ent].address))); + ((char *)block + xfs_dir2_dataptr_to_off(mp, be32_to_cpu(blp[ent].address))); /* * Mark the data entry's space free. */ needlog = needscan = 0; xfs_dir2_data_make_free(tp, bp, (xfs_dir2_data_aoff_t)((char *)dep - (char *)block), - XFS_DIR2_DATA_ENTSIZE(dep->namelen), &needlog, &needscan); + xfs_dir2_data_entsize(dep->namelen), &needlog, &needscan); /* * Fix up the block tail. */ @@ -843,13 +843,13 @@ xfs_dir2_block_replace( dp = args->dp; mp = dp->i_mount; block = bp->data; - btp = XFS_DIR2_BLOCK_TAIL_P(mp, block); - blp = XFS_DIR2_BLOCK_LEAF_P(btp); + btp = xfs_dir2_block_tail_p(mp, block); + blp = xfs_dir2_block_leaf_p(btp); /* * Point to the data entry we need to change. */ dep = (xfs_dir2_data_entry_t *) - ((char *)block + XFS_DIR2_DATAPTR_TO_OFF(mp, be32_to_cpu(blp[ent].address))); + ((char *)block + xfs_dir2_dataptr_to_off(mp, be32_to_cpu(blp[ent].address))); ASSERT(be64_to_cpu(dep->inumber) != args->inumber); /* * Change the inode number to the new value. @@ -912,7 +912,7 @@ xfs_dir2_leaf_to_block( mp = dp->i_mount; leaf = lbp->data; ASSERT(be16_to_cpu(leaf->hdr.info.magic) == XFS_DIR2_LEAF1_MAGIC); - ltp = XFS_DIR2_LEAF_TAIL_P(mp, leaf); + ltp = xfs_dir2_leaf_tail_p(mp, leaf); /* * If there are data blocks other than the first one, take this * opportunity to remove trailing empty data blocks that may have @@ -920,7 +920,7 @@ xfs_dir2_leaf_to_block( * These will show up in the leaf bests table. */ while (dp->i_d.di_size > mp->m_dirblksize) { - bestsp = XFS_DIR2_LEAF_BESTS_P(ltp); + bestsp = xfs_dir2_leaf_bests_p(ltp); if (be16_to_cpu(bestsp[be32_to_cpu(ltp->bestcount) - 1]) == mp->m_dirblksize - (uint)sizeof(block->hdr)) { if ((error = @@ -974,14 +974,14 @@ xfs_dir2_leaf_to_block( /* * Initialize the block tail. 
*/ - btp = XFS_DIR2_BLOCK_TAIL_P(mp, block); + btp = xfs_dir2_block_tail_p(mp, block); btp->count = cpu_to_be32(be16_to_cpu(leaf->hdr.count) - be16_to_cpu(leaf->hdr.stale)); btp->stale = 0; xfs_dir2_block_log_tail(tp, dbp); /* * Initialize the block leaf area. We compact out stale entries. */ - lep = XFS_DIR2_BLOCK_LEAF_P(btp); + lep = xfs_dir2_block_leaf_p(btp); for (from = to = 0; from < be16_to_cpu(leaf->hdr.count); from++) { if (be32_to_cpu(leaf->ents[from].address) == XFS_DIR2_NULL_DATAPTR) continue; @@ -1067,7 +1067,7 @@ xfs_dir2_sf_to_block( ASSERT(dp->i_df.if_bytes == dp->i_d.di_size); ASSERT(dp->i_df.if_u1.if_data != NULL); sfp = (xfs_dir2_sf_t *)dp->i_df.if_u1.if_data; - ASSERT(dp->i_d.di_size >= XFS_DIR2_SF_HDR_SIZE(sfp->hdr.i8count)); + ASSERT(dp->i_d.di_size >= xfs_dir2_sf_hdr_size(sfp->hdr.i8count)); /* * Copy the directory into the stack buffer. * Then pitch the incore inode data so we can make extents. @@ -1119,10 +1119,10 @@ xfs_dir2_sf_to_block( /* * Fill in the tail. */ - btp = XFS_DIR2_BLOCK_TAIL_P(mp, block); + btp = xfs_dir2_block_tail_p(mp, block); btp->count = cpu_to_be32(sfp->hdr.count + 2); /* ., .. */ btp->stale = 0; - blp = XFS_DIR2_BLOCK_LEAF_P(btp); + blp = xfs_dir2_block_leaf_p(btp); endoffset = (uint)((char *)blp - (char *)block); /* * Remove the freespace, we'll manage it. @@ -1138,25 +1138,25 @@ xfs_dir2_sf_to_block( dep->inumber = cpu_to_be64(dp->i_ino); dep->namelen = 1; dep->name[0] = '.'; - tagp = XFS_DIR2_DATA_ENTRY_TAG_P(dep); + tagp = xfs_dir2_data_entry_tag_p(dep); *tagp = cpu_to_be16((char *)dep - (char *)block); xfs_dir2_data_log_entry(tp, bp, dep); blp[0].hashval = cpu_to_be32(xfs_dir_hash_dot); - blp[0].address = cpu_to_be32(XFS_DIR2_BYTE_TO_DATAPTR(mp, + blp[0].address = cpu_to_be32(xfs_dir2_byte_to_dataptr(mp, (char *)dep - (char *)block)); /* * Create entry for .. 
*/ dep = (xfs_dir2_data_entry_t *) ((char *)block + XFS_DIR2_DATA_DOTDOT_OFFSET); - dep->inumber = cpu_to_be64(XFS_DIR2_SF_GET_INUMBER(sfp, &sfp->hdr.parent)); + dep->inumber = cpu_to_be64(xfs_dir2_sf_get_inumber(sfp, &sfp->hdr.parent)); dep->namelen = 2; dep->name[0] = dep->name[1] = '.'; - tagp = XFS_DIR2_DATA_ENTRY_TAG_P(dep); + tagp = xfs_dir2_data_entry_tag_p(dep); *tagp = cpu_to_be16((char *)dep - (char *)block); xfs_dir2_data_log_entry(tp, bp, dep); blp[1].hashval = cpu_to_be32(xfs_dir_hash_dotdot); - blp[1].address = cpu_to_be32(XFS_DIR2_BYTE_TO_DATAPTR(mp, + blp[1].address = cpu_to_be32(xfs_dir2_byte_to_dataptr(mp, (char *)dep - (char *)block)); offset = XFS_DIR2_DATA_FIRST_OFFSET; /* @@ -1165,7 +1165,7 @@ xfs_dir2_sf_to_block( if ((i = 0) == sfp->hdr.count) sfep = NULL; else - sfep = XFS_DIR2_SF_FIRSTENTRY(sfp); + sfep = xfs_dir2_sf_firstentry(sfp); /* * Need to preserve the existing offset values in the sf directory. * Insert holes (unused entries) where necessary. @@ -1177,7 +1177,7 @@ xfs_dir2_sf_to_block( if (sfep == NULL) newoffset = endoffset; else - newoffset = XFS_DIR2_SF_GET_OFFSET(sfep); + newoffset = xfs_dir2_sf_get_offset(sfep); /* * There should be a hole here, make one. */ @@ -1186,7 +1186,7 @@ xfs_dir2_sf_to_block( ((char *)block + offset); dup->freetag = cpu_to_be16(XFS_DIR2_DATA_FREE_TAG); dup->length = cpu_to_be16(newoffset - offset); - *XFS_DIR2_DATA_UNUSED_TAG_P(dup) = cpu_to_be16( + *xfs_dir2_data_unused_tag_p(dup) = cpu_to_be16( ((char *)dup - (char *)block)); xfs_dir2_data_log_unused(tp, bp, dup); (void)xfs_dir2_data_freeinsert((xfs_dir2_data_t *)block, @@ -1198,22 +1198,22 @@ xfs_dir2_sf_to_block( * Copy a real entry. 
*/ dep = (xfs_dir2_data_entry_t *)((char *)block + newoffset); - dep->inumber = cpu_to_be64(XFS_DIR2_SF_GET_INUMBER(sfp, - XFS_DIR2_SF_INUMBERP(sfep))); + dep->inumber = cpu_to_be64(xfs_dir2_sf_get_inumber(sfp, + xfs_dir2_sf_inumberp(sfep))); dep->namelen = sfep->namelen; memcpy(dep->name, sfep->name, dep->namelen); - tagp = XFS_DIR2_DATA_ENTRY_TAG_P(dep); + tagp = xfs_dir2_data_entry_tag_p(dep); *tagp = cpu_to_be16((char *)dep - (char *)block); xfs_dir2_data_log_entry(tp, bp, dep); blp[2 + i].hashval = cpu_to_be32(xfs_da_hashname( (char *)sfep->name, sfep->namelen)); - blp[2 + i].address = cpu_to_be32(XFS_DIR2_BYTE_TO_DATAPTR(mp, + blp[2 + i].address = cpu_to_be32(xfs_dir2_byte_to_dataptr(mp, (char *)dep - (char *)block)); offset = (int)((char *)(tagp + 1) - (char *)block); if (++i == sfp->hdr.count) sfep = NULL; else - sfep = XFS_DIR2_SF_NEXTENTRY(sfp, sfep); + sfep = xfs_dir2_sf_nextentry(sfp, sfep); } /* Done with the temporary buffer */ kmem_free(buf, buf_len); Index: linux-2.6/fs/xfs/xfs_dir2_block.h =================================================================== --- linux-2.6.orig/fs/xfs/xfs_dir2_block.h 2007-04-28 09:37:26.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_dir2_block.h 2007-05-12 13:38:34.000000000 +0200 @@ -60,7 +60,6 @@ typedef struct xfs_dir2_block { /* * Pointer to the leaf header embedded in a data block (1-block format) */ -#define XFS_DIR2_BLOCK_TAIL_P(mp,block) xfs_dir2_block_tail_p(mp,block) static inline xfs_dir2_block_tail_t * xfs_dir2_block_tail_p(struct xfs_mount *mp, xfs_dir2_block_t *block) { @@ -71,7 +70,6 @@ xfs_dir2_block_tail_p(struct xfs_mount * /* * Pointer to the leaf entries embedded in a data block (1-block format) */ -#define XFS_DIR2_BLOCK_LEAF_P(btp) xfs_dir2_block_leaf_p(btp) static inline struct xfs_dir2_leaf_entry * xfs_dir2_block_leaf_p(xfs_dir2_block_tail_t *btp) { Index: linux-2.6/fs/xfs/xfs_dir2_data.c =================================================================== --- linux-2.6.orig/fs/xfs/xfs_dir2_data.c 
2007-05-10 10:53:16.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_dir2_data.c 2007-05-12 13:39:34.000000000 +0200 @@ -72,8 +72,8 @@ xfs_dir2_data_check( bf = d->hdr.bestfree; p = (char *)d->u; if (be32_to_cpu(d->hdr.magic) == XFS_DIR2_BLOCK_MAGIC) { - btp = XFS_DIR2_BLOCK_TAIL_P(mp, (xfs_dir2_block_t *)d); - lep = XFS_DIR2_BLOCK_LEAF_P(btp); + btp = xfs_dir2_block_tail_p(mp, (xfs_dir2_block_t *)d); + lep = xfs_dir2_block_leaf_p(btp); endp = (char *)lep; } else endp = (char *)d + mp->m_dirblksize; @@ -107,7 +107,7 @@ xfs_dir2_data_check( */ if (be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG) { ASSERT(lastfree == 0); - ASSERT(be16_to_cpu(*XFS_DIR2_DATA_UNUSED_TAG_P(dup)) == + ASSERT(be16_to_cpu(*xfs_dir2_data_unused_tag_p(dup)) == (char *)dup - (char *)d); dfp = xfs_dir2_data_freefind(d, dup); if (dfp) { @@ -131,12 +131,12 @@ xfs_dir2_data_check( dep = (xfs_dir2_data_entry_t *)p; ASSERT(dep->namelen != 0); ASSERT(xfs_dir_ino_validate(mp, be64_to_cpu(dep->inumber)) == 0); - ASSERT(be16_to_cpu(*XFS_DIR2_DATA_ENTRY_TAG_P(dep)) == + ASSERT(be16_to_cpu(*xfs_dir2_data_entry_tag_p(dep)) == (char *)dep - (char *)d); count++; lastfree = 0; if (be32_to_cpu(d->hdr.magic) == XFS_DIR2_BLOCK_MAGIC) { - addr = XFS_DIR2_DB_OFF_TO_DATAPTR(mp, mp->m_dirdatablk, + addr = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, (xfs_dir2_data_aoff_t) ((char *)dep - (char *)d)); hash = xfs_da_hashname((char *)dep->name, dep->namelen); @@ -147,7 +147,7 @@ xfs_dir2_data_check( } ASSERT(i < be32_to_cpu(btp->count)); } - p += XFS_DIR2_DATA_ENTSIZE(dep->namelen); + p += xfs_dir2_data_entsize(dep->namelen); } /* * Need to have seen all the entries and all the bestfree slots. 
@@ -346,8 +346,8 @@ xfs_dir2_data_freescan( */ p = (char *)d->u; if (be32_to_cpu(d->hdr.magic) == XFS_DIR2_BLOCK_MAGIC) { - btp = XFS_DIR2_BLOCK_TAIL_P(mp, (xfs_dir2_block_t *)d); - endp = (char *)XFS_DIR2_BLOCK_LEAF_P(btp); + btp = xfs_dir2_block_tail_p(mp, (xfs_dir2_block_t *)d); + endp = (char *)xfs_dir2_block_leaf_p(btp); } else endp = (char *)d + mp->m_dirblksize; /* @@ -360,7 +360,7 @@ xfs_dir2_data_freescan( */ if (be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG) { ASSERT((char *)dup - (char *)d == - be16_to_cpu(*XFS_DIR2_DATA_UNUSED_TAG_P(dup))); + be16_to_cpu(*xfs_dir2_data_unused_tag_p(dup))); xfs_dir2_data_freeinsert(d, dup, loghead); p += be16_to_cpu(dup->length); } @@ -370,8 +370,8 @@ xfs_dir2_data_freescan( else { dep = (xfs_dir2_data_entry_t *)p; ASSERT((char *)dep - (char *)d == - be16_to_cpu(*XFS_DIR2_DATA_ENTRY_TAG_P(dep))); - p += XFS_DIR2_DATA_ENTSIZE(dep->namelen); + be16_to_cpu(*xfs_dir2_data_entry_tag_p(dep))); + p += xfs_dir2_data_entsize(dep->namelen); } } } @@ -402,7 +402,7 @@ xfs_dir2_data_init( /* * Get the buffer set up for the block. */ - error = xfs_da_get_buf(tp, dp, XFS_DIR2_DB_TO_DA(mp, blkno), -1, &bp, + error = xfs_da_get_buf(tp, dp, xfs_dir2_db_to_da(mp, blkno), -1, &bp, XFS_DATA_FORK); if (error) { return error; @@ -427,7 +427,7 @@ xfs_dir2_data_init( t=mp->m_dirblksize - (uint)sizeof(d->hdr); d->hdr.bestfree[0].length = cpu_to_be16(t); dup->length = cpu_to_be16(t); - *XFS_DIR2_DATA_UNUSED_TAG_P(dup) = cpu_to_be16((char *)dup - (char *)d); + *xfs_dir2_data_unused_tag_p(dup) = cpu_to_be16((char *)dup - (char *)d); /* * Log it and return it. 
*/ @@ -452,7 +452,7 @@ xfs_dir2_data_log_entry( ASSERT(be32_to_cpu(d->hdr.magic) == XFS_DIR2_DATA_MAGIC || be32_to_cpu(d->hdr.magic) == XFS_DIR2_BLOCK_MAGIC); xfs_da_log_buf(tp, bp, (uint)((char *)dep - (char *)d), - (uint)((char *)(XFS_DIR2_DATA_ENTRY_TAG_P(dep) + 1) - + (uint)((char *)(xfs_dir2_data_entry_tag_p(dep) + 1) - (char *)d - 1)); } @@ -497,8 +497,8 @@ xfs_dir2_data_log_unused( * Log the end (tag) of the unused entry. */ xfs_da_log_buf(tp, bp, - (uint)((char *)XFS_DIR2_DATA_UNUSED_TAG_P(dup) - (char *)d), - (uint)((char *)XFS_DIR2_DATA_UNUSED_TAG_P(dup) - (char *)d + + (uint)((char *)xfs_dir2_data_unused_tag_p(dup) - (char *)d), + (uint)((char *)xfs_dir2_data_unused_tag_p(dup) - (char *)d + sizeof(xfs_dir2_data_off_t) - 1)); } @@ -535,8 +535,8 @@ xfs_dir2_data_make_free( xfs_dir2_block_tail_t *btp; /* block tail */ ASSERT(be32_to_cpu(d->hdr.magic) == XFS_DIR2_BLOCK_MAGIC); - btp = XFS_DIR2_BLOCK_TAIL_P(mp, (xfs_dir2_block_t *)d); - endptr = (char *)XFS_DIR2_BLOCK_LEAF_P(btp); + btp = xfs_dir2_block_tail_p(mp, (xfs_dir2_block_t *)d); + endptr = (char *)xfs_dir2_block_leaf_p(btp); } /* * If this isn't the start of the block, then back up to @@ -587,7 +587,7 @@ xfs_dir2_data_make_free( * Fix up the new big freespace. 
*/ be16_add(&prevdup->length, len + be16_to_cpu(postdup->length)); - *XFS_DIR2_DATA_UNUSED_TAG_P(prevdup) = + *xfs_dir2_data_unused_tag_p(prevdup) = cpu_to_be16((char *)prevdup - (char *)d); xfs_dir2_data_log_unused(tp, bp, prevdup); if (!needscan) { @@ -621,7 +621,7 @@ xfs_dir2_data_make_free( else if (prevdup) { dfp = xfs_dir2_data_freefind(d, prevdup); be16_add(&prevdup->length, len); - *XFS_DIR2_DATA_UNUSED_TAG_P(prevdup) = + *xfs_dir2_data_unused_tag_p(prevdup) = cpu_to_be16((char *)prevdup - (char *)d); xfs_dir2_data_log_unused(tp, bp, prevdup); /* @@ -649,7 +649,7 @@ xfs_dir2_data_make_free( newdup = (xfs_dir2_data_unused_t *)((char *)d + offset); newdup->freetag = cpu_to_be16(XFS_DIR2_DATA_FREE_TAG); newdup->length = cpu_to_be16(len + be16_to_cpu(postdup->length)); - *XFS_DIR2_DATA_UNUSED_TAG_P(newdup) = + *xfs_dir2_data_unused_tag_p(newdup) = cpu_to_be16((char *)newdup - (char *)d); xfs_dir2_data_log_unused(tp, bp, newdup); /* @@ -676,7 +676,7 @@ xfs_dir2_data_make_free( newdup = (xfs_dir2_data_unused_t *)((char *)d + offset); newdup->freetag = cpu_to_be16(XFS_DIR2_DATA_FREE_TAG); newdup->length = cpu_to_be16(len); - *XFS_DIR2_DATA_UNUSED_TAG_P(newdup) = + *xfs_dir2_data_unused_tag_p(newdup) = cpu_to_be16((char *)newdup - (char *)d); xfs_dir2_data_log_unused(tp, bp, newdup); (void)xfs_dir2_data_freeinsert(d, newdup, needlogp); @@ -712,7 +712,7 @@ xfs_dir2_data_use_free( ASSERT(be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG); ASSERT(offset >= (char *)dup - (char *)d); ASSERT(offset + len <= (char *)dup + be16_to_cpu(dup->length) - (char *)d); - ASSERT((char *)dup - (char *)d == be16_to_cpu(*XFS_DIR2_DATA_UNUSED_TAG_P(dup))); + ASSERT((char *)dup - (char *)d == be16_to_cpu(*xfs_dir2_data_unused_tag_p(dup))); /* * Look up the entry in the bestfree table. 
 */
@@ -745,7 +745,7 @@ xfs_dir2_data_use_free(
 		newdup = (xfs_dir2_data_unused_t *)((char *)d + offset + len);
 		newdup->freetag = cpu_to_be16(XFS_DIR2_DATA_FREE_TAG);
 		newdup->length = cpu_to_be16(oldlen - len);
-		*XFS_DIR2_DATA_UNUSED_TAG_P(newdup) =
+		*xfs_dir2_data_unused_tag_p(newdup) =
 			cpu_to_be16((char *)newdup - (char *)d);
 		xfs_dir2_data_log_unused(tp, bp, newdup);
 		/*
@@ -772,7 +772,7 @@ xfs_dir2_data_use_free(
 	else if (matchback) {
 		newdup = dup;
 		newdup->length = cpu_to_be16(((char *)d + offset) - (char *)newdup);
-		*XFS_DIR2_DATA_UNUSED_TAG_P(newdup) =
+		*xfs_dir2_data_unused_tag_p(newdup) =
 			cpu_to_be16((char *)newdup - (char *)d);
 		xfs_dir2_data_log_unused(tp, bp, newdup);
 		/*
@@ -799,13 +799,13 @@ xfs_dir2_data_use_free(
 	else {
 		newdup = dup;
 		newdup->length = cpu_to_be16(((char *)d + offset) - (char *)newdup);
-		*XFS_DIR2_DATA_UNUSED_TAG_P(newdup) =
+		*xfs_dir2_data_unused_tag_p(newdup) =
 			cpu_to_be16((char *)newdup - (char *)d);
 		xfs_dir2_data_log_unused(tp, bp, newdup);
 		newdup2 = (xfs_dir2_data_unused_t *)((char *)d + offset + len);
 		newdup2->freetag = cpu_to_be16(XFS_DIR2_DATA_FREE_TAG);
 		newdup2->length = cpu_to_be16(oldlen - len - be16_to_cpu(newdup->length));
-		*XFS_DIR2_DATA_UNUSED_TAG_P(newdup2) =
+		*xfs_dir2_data_unused_tag_p(newdup2) =
 			cpu_to_be16((char *)newdup2 - (char *)d);
 		xfs_dir2_data_log_unused(tp, bp, newdup2);
 		/*
Index: linux-2.6/fs/xfs/xfs_dir2_data.h
===================================================================
--- linux-2.6.orig/fs/xfs/xfs_dir2_data.h	2007-05-10 10:53:16.000000000 +0200
+++ linux-2.6/fs/xfs/xfs_dir2_data.h	2007-05-12 13:38:34.000000000 +0200
@@ -44,7 +44,7 @@ struct xfs_trans;
 #define	XFS_DIR2_DATA_SPACE	0
 #define	XFS_DIR2_DATA_OFFSET	(XFS_DIR2_DATA_SPACE * XFS_DIR2_SPACE_SIZE)
 #define	XFS_DIR2_DATA_FIRSTDB(mp)	\
-	XFS_DIR2_BYTE_TO_DB(mp, XFS_DIR2_DATA_OFFSET)
+	xfs_dir2_byte_to_db(mp, XFS_DIR2_DATA_OFFSET)
 
 /*
  * Offsets of . and .. in data space (always block 0)
 */
@@ -52,9 +52,9 @@ struct xfs_trans;
 #define	XFS_DIR2_DATA_DOT_OFFSET	\
 	((xfs_dir2_data_aoff_t)sizeof(xfs_dir2_data_hdr_t))
 #define	XFS_DIR2_DATA_DOTDOT_OFFSET	\
-	(XFS_DIR2_DATA_DOT_OFFSET + XFS_DIR2_DATA_ENTSIZE(1))
+	(XFS_DIR2_DATA_DOT_OFFSET + xfs_dir2_data_entsize(1))
 #define	XFS_DIR2_DATA_FIRST_OFFSET	\
-	(XFS_DIR2_DATA_DOTDOT_OFFSET + XFS_DIR2_DATA_ENTSIZE(2))
+	(XFS_DIR2_DATA_DOTDOT_OFFSET + xfs_dir2_data_entsize(2))
 
 /*
  * Structures.
@@ -123,7 +123,6 @@ typedef struct xfs_dir2_data {
 /*
  * Size of a data entry.
  */
-#define XFS_DIR2_DATA_ENTSIZE(n)	xfs_dir2_data_entsize(n)
 static inline int xfs_dir2_data_entsize(int n)
 {
 	return (int)roundup(offsetof(xfs_dir2_data_entry_t, name[0]) + (n) + \
@@ -133,19 +132,16 @@ static inline int xfs_dir2_data_entsize(
 /*
  * Pointer to an entry's tag word.
  */
-#define	XFS_DIR2_DATA_ENTRY_TAG_P(dep)	xfs_dir2_data_entry_tag_p(dep)
 static inline __be16 *
 xfs_dir2_data_entry_tag_p(xfs_dir2_data_entry_t *dep)
 {
 	return (__be16 *)((char *)dep +
-		XFS_DIR2_DATA_ENTSIZE(dep->namelen) - sizeof(__be16));
+		xfs_dir2_data_entsize(dep->namelen) - sizeof(__be16));
 }
 
 /*
  * Pointer to a freespace's tag word.
  */
-#define	XFS_DIR2_DATA_UNUSED_TAG_P(dup)	\
-	xfs_dir2_data_unused_tag_p(dup)
 static inline __be16 *
 xfs_dir2_data_unused_tag_p(xfs_dir2_data_unused_t *dup)
 {
Index: linux-2.6/fs/xfs/xfs_dir2_leaf.c
===================================================================
--- linux-2.6.orig/fs/xfs/xfs_dir2_leaf.c	2007-05-10 10:53:16.000000000 +0200
+++ linux-2.6/fs/xfs/xfs_dir2_leaf.c	2007-05-12 13:38:34.000000000 +0200
@@ -92,7 +92,7 @@ xfs_dir2_block_to_leaf(
 	if ((error = xfs_da_grow_inode(args, &blkno))) {
 		return error;
 	}
-	ldb = XFS_DIR2_DA_TO_DB(mp, blkno);
+	ldb = xfs_dir2_da_to_db(mp, blkno);
 	ASSERT(ldb == XFS_DIR2_LEAF_FIRSTDB(mp));
 	/*
 	 * Initialize the leaf block, get a buffer for it.
	 */
@@ -104,8 +104,8 @@ xfs_dir2_block_to_leaf(
 	leaf = lbp->data;
 	block = dbp->data;
 	xfs_dir2_data_check(dp, dbp);
-	btp = XFS_DIR2_BLOCK_TAIL_P(mp, block);
-	blp = XFS_DIR2_BLOCK_LEAF_P(btp);
+	btp = xfs_dir2_block_tail_p(mp, block);
+	blp = xfs_dir2_block_leaf_p(btp);
 	/*
 	 * Set the counts in the leaf header.
 	 */
@@ -137,9 +137,9 @@ xfs_dir2_block_to_leaf(
 	/*
 	 * Set up leaf tail and bests table.
 	 */
-	ltp = XFS_DIR2_LEAF_TAIL_P(mp, leaf);
+	ltp = xfs_dir2_leaf_tail_p(mp, leaf);
 	ltp->bestcount = cpu_to_be32(1);
-	bestsp = XFS_DIR2_LEAF_BESTS_P(ltp);
+	bestsp = xfs_dir2_leaf_bests_p(ltp);
 	bestsp[0] =  block->hdr.bestfree[0].length;
 	/*
 	 * Log the data header and leaf bests table.
@@ -209,9 +209,9 @@ xfs_dir2_leaf_addname(
 	 */
 	index = xfs_dir2_leaf_search_hash(args, lbp);
 	leaf = lbp->data;
-	ltp = XFS_DIR2_LEAF_TAIL_P(mp, leaf);
-	bestsp = XFS_DIR2_LEAF_BESTS_P(ltp);
-	length = XFS_DIR2_DATA_ENTSIZE(args->namelen);
+	ltp = xfs_dir2_leaf_tail_p(mp, leaf);
+	bestsp = xfs_dir2_leaf_bests_p(ltp);
+	length = xfs_dir2_data_entsize(args->namelen);
 	/*
 	 * See if there are any entries with the same hash value
 	 * and space in their block for the new entry.
	 */
@@ -223,7 +223,7 @@ xfs_dir2_leaf_addname(
 	     index++, lep++) {
 		if (be32_to_cpu(lep->address) == XFS_DIR2_NULL_DATAPTR)
 			continue;
-		i = XFS_DIR2_DATAPTR_TO_DB(mp, be32_to_cpu(lep->address));
+		i = xfs_dir2_dataptr_to_db(mp, be32_to_cpu(lep->address));
 		ASSERT(i < be32_to_cpu(ltp->bestcount));
 		ASSERT(be16_to_cpu(bestsp[i]) != NULLDATAOFF);
 		if (be16_to_cpu(bestsp[i]) >= length) {
@@ -378,7 +378,7 @@ xfs_dir2_leaf_addname(
 	 */
 	else {
 		if ((error =
-		    xfs_da_read_buf(tp, dp, XFS_DIR2_DB_TO_DA(mp, use_block),
+		    xfs_da_read_buf(tp, dp, xfs_dir2_db_to_da(mp, use_block),
 			    -1, &dbp, XFS_DATA_FORK))) {
 			xfs_da_brelse(tp, lbp);
 			return error;
@@ -407,7 +407,7 @@ xfs_dir2_leaf_addname(
 	dep->inumber = cpu_to_be64(args->inumber);
 	dep->namelen = args->namelen;
 	memcpy(dep->name, args->name, dep->namelen);
-	tagp = XFS_DIR2_DATA_ENTRY_TAG_P(dep);
+	tagp = xfs_dir2_data_entry_tag_p(dep);
 	*tagp = cpu_to_be16((char *)dep - (char *)data);
 	/*
 	 * Need to scan fix up the bestfree table.
@@ -529,7 +529,7 @@ xfs_dir2_leaf_addname(
 	 * Fill in the new leaf entry.
 	 */
 	lep->hashval = cpu_to_be32(args->hashval);
-	lep->address = cpu_to_be32(XFS_DIR2_DB_OFF_TO_DATAPTR(mp, use_block,
+	lep->address = cpu_to_be32(xfs_dir2_db_off_to_dataptr(mp, use_block,
 				be16_to_cpu(*tagp)));
 	/*
 	 * Log the leaf fields and give up the buffers.
@@ -567,13 +567,13 @@ xfs_dir2_leaf_check(
 	 * Should factor in the size of the bests table as well.
 	 * We can deduce a value for that from di_size.
 	 */
-	ASSERT(be16_to_cpu(leaf->hdr.count) <= XFS_DIR2_MAX_LEAF_ENTS(mp));
-	ltp = XFS_DIR2_LEAF_TAIL_P(mp, leaf);
+	ASSERT(be16_to_cpu(leaf->hdr.count) <= xfs_dir2_max_leaf_ents(mp));
+	ltp = xfs_dir2_leaf_tail_p(mp, leaf);
 	/*
 	 * Leaves and bests don't overlap.
 	 */
 	ASSERT((char *)&leaf->ents[be16_to_cpu(leaf->hdr.count)] <=
-	       (char *)XFS_DIR2_LEAF_BESTS_P(ltp));
+	       (char *)xfs_dir2_leaf_bests_p(ltp));
 	/*
 	 * Check hash value order, count stale entries.
	 */
@@ -815,12 +815,12 @@ xfs_dir2_leaf_getdents(
 	 * Inside the loop we keep the main offset value as a byte offset
 	 * in the directory file.
 	 */
-	curoff = XFS_DIR2_DATAPTR_TO_BYTE(mp, uio->uio_offset);
+	curoff = xfs_dir2_dataptr_to_byte(mp, uio->uio_offset);
 	/*
 	 * Force this conversion through db so we truncate the offset
 	 * down to get the start of the data block.
 	 */
-	map_off = XFS_DIR2_DB_TO_DA(mp, XFS_DIR2_BYTE_TO_DB(mp, curoff));
+	map_off = xfs_dir2_db_to_da(mp, xfs_dir2_byte_to_db(mp, curoff));
 	/*
 	 * Loop over directory entries until we reach the end offset.
 	 * Get more blocks and readahead as necessary.
@@ -870,7 +870,7 @@ xfs_dir2_leaf_getdents(
 			 */
 			if (1 + ra_want > map_blocks &&
 			    map_off <
-			    XFS_DIR2_BYTE_TO_DA(mp, XFS_DIR2_LEAF_OFFSET)) {
+			    xfs_dir2_byte_to_da(mp, XFS_DIR2_LEAF_OFFSET)) {
 				/*
 				 * Get more bmaps, fill in after the ones
 				 * we already have in the table.
@@ -878,7 +878,7 @@ xfs_dir2_leaf_getdents(
 				nmap = map_size - map_valid;
 				error = xfs_bmapi(tp, dp,
 					map_off,
-					XFS_DIR2_BYTE_TO_DA(mp,
+					xfs_dir2_byte_to_da(mp,
 						XFS_DIR2_LEAF_OFFSET) - map_off,
 					XFS_BMAPI_METADATA, NULL, 0,
 					&map[map_valid], &nmap, NULL, NULL);
@@ -903,7 +903,7 @@ xfs_dir2_leaf_getdents(
 						map[map_valid + nmap - 1].br_blockcount;
 				else
 					map_off =
-						XFS_DIR2_BYTE_TO_DA(mp,
+						xfs_dir2_byte_to_da(mp,
 							XFS_DIR2_LEAF_OFFSET);
 				/*
 				 * Look for holes in the mapping, and
@@ -931,14 +931,14 @@ xfs_dir2_leaf_getdents(
 			 * No valid mappings, so no more data blocks.
 			 */
 			if (!map_valid) {
-				curoff = XFS_DIR2_DA_TO_BYTE(mp, map_off);
+				curoff = xfs_dir2_da_to_byte(mp, map_off);
 				break;
 			}
 			/*
 			 * Read the directory block starting at the first
 			 * mapping.
 			 */
-			curdb = XFS_DIR2_DA_TO_DB(mp, map->br_startoff);
+			curdb = xfs_dir2_da_to_db(mp, map->br_startoff);
 			error = xfs_da_read_buf(tp, dp, map->br_startoff,
 				map->br_blockcount >= mp->m_dirblkfsbs ?
 				    XFS_FSB_TO_DADDR(mp, map->br_startblock) :
@@ -1014,7 +1014,7 @@ xfs_dir2_leaf_getdents(
 			/*
 			 * Having done a read, we need to set a new offset.
			 */
-			newoff = XFS_DIR2_DB_OFF_TO_BYTE(mp, curdb, 0);
+			newoff = xfs_dir2_db_off_to_byte(mp, curdb, 0);
 			/*
 			 * Start of the current block.
 			 */
@@ -1024,7 +1024,7 @@ xfs_dir2_leaf_getdents(
 			 * Make sure we're in the right block.
 			 */
 			else if (curoff > newoff)
-				ASSERT(XFS_DIR2_BYTE_TO_DB(mp, curoff) ==
+				ASSERT(xfs_dir2_byte_to_db(mp, curoff) ==
 				       curdb);
 			data = bp->data;
 			xfs_dir2_data_check(dp, bp);
@@ -1032,7 +1032,7 @@ xfs_dir2_leaf_getdents(
 			 * Find our position in the block.
 			 */
 			ptr = (char *)&data->u;
-			byteoff = XFS_DIR2_BYTE_TO_OFF(mp, curoff);
+			byteoff = xfs_dir2_byte_to_off(mp, curoff);
 			/*
 			 * Skip past the header.
 			 */
@@ -1054,15 +1054,15 @@ xfs_dir2_leaf_getdents(
 					}
 					dep = (xfs_dir2_data_entry_t *)ptr;
 					length =
-					   XFS_DIR2_DATA_ENTSIZE(dep->namelen);
+					   xfs_dir2_data_entsize(dep->namelen);
 					ptr += length;
 				}
 				/*
 				 * Now set our real offset.
 				 */
 				curoff =
-					XFS_DIR2_DB_OFF_TO_BYTE(mp,
-					    XFS_DIR2_BYTE_TO_DB(mp, curoff),
+					xfs_dir2_db_off_to_byte(mp,
+					    xfs_dir2_byte_to_db(mp, curoff),
 					    (char *)ptr - (char *)data);
 				if (ptr >= (char *)data + mp->m_dirblksize) {
 					continue;
@@ -1091,9 +1091,9 @@ xfs_dir2_leaf_getdents(
 
 		p->namelen = dep->namelen;
 
-		length = XFS_DIR2_DATA_ENTSIZE(p->namelen);
+		length = xfs_dir2_data_entsize(p->namelen);
 
-		p->cook = XFS_DIR2_BYTE_TO_DATAPTR(mp, curoff + length);
+		p->cook = xfs_dir2_byte_to_dataptr(mp, curoff + length);
 
 		p->ino = be64_to_cpu(dep->inumber);
 #if XFS_BIG_INUMS
@@ -1121,10 +1121,10 @@ xfs_dir2_leaf_getdents(
 	 * All done.  Set output offset value to current offset.
 	 */
 	*eofp = eof;
-	if (curoff > XFS_DIR2_DATAPTR_TO_BYTE(mp, XFS_DIR2_MAX_DATAPTR))
+	if (curoff > xfs_dir2_dataptr_to_byte(mp, XFS_DIR2_MAX_DATAPTR))
 		uio->uio_offset = XFS_DIR2_MAX_DATAPTR;
 	else
-		uio->uio_offset = XFS_DIR2_BYTE_TO_DATAPTR(mp, curoff);
+		uio->uio_offset = xfs_dir2_byte_to_dataptr(mp, curoff);
 	kmem_free(map, map_size * sizeof(*map));
 	kmem_free(p, sizeof(*p));
 	if (bp)
@@ -1159,7 +1159,7 @@ xfs_dir2_leaf_init(
 	/*
 	 * Get the buffer for the block.
	 */
-	error = xfs_da_get_buf(tp, dp, XFS_DIR2_DB_TO_DA(mp, bno), -1, &bp,
+	error = xfs_da_get_buf(tp, dp, xfs_dir2_db_to_da(mp, bno), -1, &bp,
 		XFS_DATA_FORK);
 	if (error) {
 		return error;
@@ -1181,7 +1181,7 @@ xfs_dir2_leaf_init(
 	 * the block.
 	 */
 	if (magic == XFS_DIR2_LEAF1_MAGIC) {
-		ltp = XFS_DIR2_LEAF_TAIL_P(mp, leaf);
+		ltp = xfs_dir2_leaf_tail_p(mp, leaf);
 		ltp->bestcount = 0;
 		xfs_dir2_leaf_log_tail(tp, bp);
 	}
@@ -1206,9 +1206,9 @@ xfs_dir2_leaf_log_bests(
 	leaf = bp->data;
 	ASSERT(be16_to_cpu(leaf->hdr.info.magic) == XFS_DIR2_LEAF1_MAGIC);
-	ltp = XFS_DIR2_LEAF_TAIL_P(tp->t_mountp, leaf);
-	firstb = XFS_DIR2_LEAF_BESTS_P(ltp) + first;
-	lastb = XFS_DIR2_LEAF_BESTS_P(ltp) + last;
+	ltp = xfs_dir2_leaf_tail_p(tp->t_mountp, leaf);
+	firstb = xfs_dir2_leaf_bests_p(ltp) + first;
+	lastb = xfs_dir2_leaf_bests_p(ltp) + last;
 	xfs_da_log_buf(tp, bp, (uint)((char *)firstb - (char *)leaf),
 		(uint)((char *)lastb - (char *)leaf + sizeof(*lastb) - 1));
 }
@@ -1268,7 +1268,7 @@ xfs_dir2_leaf_log_tail(
 	mp = tp->t_mountp;
 	leaf = bp->data;
 	ASSERT(be16_to_cpu(leaf->hdr.info.magic) == XFS_DIR2_LEAF1_MAGIC);
-	ltp = XFS_DIR2_LEAF_TAIL_P(mp, leaf);
+	ltp = xfs_dir2_leaf_tail_p(mp, leaf);
 	xfs_da_log_buf(tp, bp, (uint)((char *)ltp - (char *)leaf),
 		(uint)(mp->m_dirblksize - 1));
 }
@@ -1312,7 +1312,7 @@ xfs_dir2_leaf_lookup(
 	 */
 	dep = (xfs_dir2_data_entry_t *)
 	      ((char *)dbp->data +
-	       XFS_DIR2_DATAPTR_TO_OFF(dp->i_mount, be32_to_cpu(lep->address)));
+	       xfs_dir2_dataptr_to_off(dp->i_mount, be32_to_cpu(lep->address)));
 	/*
 	 * Return the found inode number.
 	 */
@@ -1381,7 +1381,7 @@ xfs_dir2_leaf_lookup_int(
 		/*
 		 * Get the new data block number.
 		 */
-		newdb = XFS_DIR2_DATAPTR_TO_DB(mp, be32_to_cpu(lep->address));
+		newdb = xfs_dir2_dataptr_to_db(mp, be32_to_cpu(lep->address));
 		/*
 		 * If it's not the same as the old data block number,
 		 * need to pitch the old one and read the new one.
		 */
@@ -1391,7 +1391,7 @@ xfs_dir2_leaf_lookup_int(
 				xfs_da_brelse(tp, dbp);
 			if ((error =
 			    xfs_da_read_buf(tp, dp,
-				    XFS_DIR2_DB_TO_DA(mp, newdb), -1, &dbp,
+				    xfs_dir2_db_to_da(mp, newdb), -1, &dbp,
 				    XFS_DATA_FORK))) {
 				xfs_da_brelse(tp, lbp);
 				return error;
@@ -1404,7 +1404,7 @@ xfs_dir2_leaf_lookup_int(
 		 */
 		dep = (xfs_dir2_data_entry_t *)
 		      ((char *)dbp->data +
-		       XFS_DIR2_DATAPTR_TO_OFF(mp, be32_to_cpu(lep->address)));
+		       xfs_dir2_dataptr_to_off(mp, be32_to_cpu(lep->address)));
 		/*
 		 * If it matches then return it.
 		 */
@@ -1469,20 +1469,20 @@ xfs_dir2_leaf_removename(
 	 * Point to the leaf entry, use that to point to the data entry.
 	 */
 	lep = &leaf->ents[index];
-	db = XFS_DIR2_DATAPTR_TO_DB(mp, be32_to_cpu(lep->address));
+	db = xfs_dir2_dataptr_to_db(mp, be32_to_cpu(lep->address));
 	dep = (xfs_dir2_data_entry_t *)
-	      ((char *)data + XFS_DIR2_DATAPTR_TO_OFF(mp, be32_to_cpu(lep->address)));
+	      ((char *)data + xfs_dir2_dataptr_to_off(mp, be32_to_cpu(lep->address)));
 	needscan = needlog = 0;
 	oldbest = be16_to_cpu(data->hdr.bestfree[0].length);
-	ltp = XFS_DIR2_LEAF_TAIL_P(mp, leaf);
-	bestsp = XFS_DIR2_LEAF_BESTS_P(ltp);
+	ltp = xfs_dir2_leaf_tail_p(mp, leaf);
+	bestsp = xfs_dir2_leaf_bests_p(ltp);
 	ASSERT(be16_to_cpu(bestsp[db]) == oldbest);
 	/*
 	 * Mark the former data entry unused.
 	 */
 	xfs_dir2_data_make_free(tp, dbp,
 		(xfs_dir2_data_aoff_t)((char *)dep - (char *)data),
-		XFS_DIR2_DATA_ENTSIZE(dep->namelen), &needlog, &needscan);
+		xfs_dir2_data_entsize(dep->namelen), &needlog, &needscan);
 	/*
 	 * We just mark the leaf entry stale by putting a null in it.
 	 */
@@ -1602,7 +1602,7 @@ xfs_dir2_leaf_replace(
 	 */
 	dep = (xfs_dir2_data_entry_t *)
 	      ((char *)dbp->data +
-	       XFS_DIR2_DATAPTR_TO_OFF(dp->i_mount, be32_to_cpu(lep->address)));
+	       xfs_dir2_dataptr_to_off(dp->i_mount, be32_to_cpu(lep->address)));
 	ASSERT(args->inumber != be64_to_cpu(dep->inumber));
 	/*
 	 * Put the new inode number in, log it.
@@ -1698,7 +1698,7 @@ xfs_dir2_leaf_trim_data(
 	/*
 	 * Read the offending data block.  We need its buffer.
	 */
-	if ((error = xfs_da_read_buf(tp, dp, XFS_DIR2_DB_TO_DA(mp, db), -1, &dbp,
+	if ((error = xfs_da_read_buf(tp, dp, xfs_dir2_db_to_da(mp, db), -1, &dbp,
 			XFS_DATA_FORK))) {
 		return error;
 	}
@@ -1712,7 +1712,7 @@ xfs_dir2_leaf_trim_data(
 	 */
 	leaf = lbp->data;
-	ltp = XFS_DIR2_LEAF_TAIL_P(mp, leaf);
+	ltp = xfs_dir2_leaf_tail_p(mp, leaf);
 	ASSERT(be16_to_cpu(data->hdr.bestfree[0].length) ==
 	       mp->m_dirblksize - (uint)sizeof(data->hdr));
 	ASSERT(db == be32_to_cpu(ltp->bestcount) - 1);
@@ -1727,7 +1727,7 @@ xfs_dir2_leaf_trim_data(
 	/*
 	 * Eliminate the last bests entry from the table.
 	 */
-	bestsp = XFS_DIR2_LEAF_BESTS_P(ltp);
+	bestsp = xfs_dir2_leaf_bests_p(ltp);
 	be32_add(&ltp->bestcount, -1);
 	memmove(&bestsp[1], &bestsp[0], be32_to_cpu(ltp->bestcount) * sizeof(*bestsp));
 	xfs_dir2_leaf_log_tail(tp, lbp);
@@ -1838,12 +1838,12 @@ xfs_dir2_node_to_leaf(
 	/*
 	 * Set up the leaf tail from the freespace block.
 	 */
-	ltp = XFS_DIR2_LEAF_TAIL_P(mp, leaf);
+	ltp = xfs_dir2_leaf_tail_p(mp, leaf);
 	ltp->bestcount = free->hdr.nvalid;
 	/*
 	 * Set up the leaf bests table.
 	 */
-	memcpy(XFS_DIR2_LEAF_BESTS_P(ltp), free->bests,
+	memcpy(xfs_dir2_leaf_bests_p(ltp), free->bests,
 		be32_to_cpu(ltp->bestcount) * sizeof(leaf->bests[0]));
 	xfs_dir2_leaf_log_bests(tp, lbp, 0, be32_to_cpu(ltp->bestcount) - 1);
 	xfs_dir2_leaf_log_tail(tp, lbp);
Index: linux-2.6/fs/xfs/xfs_dir2_leaf.h
===================================================================
--- linux-2.6.orig/fs/xfs/xfs_dir2_leaf.h	2007-04-28 09:37:26.000000000 +0200
+++ linux-2.6/fs/xfs/xfs_dir2_leaf.h	2007-05-12 13:38:34.000000000 +0200
@@ -32,7 +32,7 @@ struct xfs_trans;
 #define	XFS_DIR2_LEAF_SPACE	1
 #define	XFS_DIR2_LEAF_OFFSET	(XFS_DIR2_LEAF_SPACE * XFS_DIR2_SPACE_SIZE)
 #define	XFS_DIR2_LEAF_FIRSTDB(mp)	\
-	XFS_DIR2_BYTE_TO_DB(mp, XFS_DIR2_LEAF_OFFSET)
+	xfs_dir2_byte_to_db(mp, XFS_DIR2_LEAF_OFFSET)
 
 /*
  * Offset in data space of a data entry.
 */
@@ -82,7 +82,6 @@ typedef struct xfs_dir2_leaf {
  * DB blocks here are logical directory block numbers, not filesystem blocks.
  */
-#define	XFS_DIR2_MAX_LEAF_ENTS(mp)	xfs_dir2_max_leaf_ents(mp)
 static inline int xfs_dir2_max_leaf_ents(struct xfs_mount *mp)
 {
 	return (int)(((mp)->m_dirblksize - (uint)sizeof(xfs_dir2_leaf_hdr_t)) /
@@ -92,7 +91,6 @@ static inline int xfs_dir2_max_leaf_ents
 /*
  * Get address of the bestcount field in the single-leaf block.
  */
-#define	XFS_DIR2_LEAF_TAIL_P(mp,lp)	xfs_dir2_leaf_tail_p(mp, lp)
 static inline xfs_dir2_leaf_tail_t *
 xfs_dir2_leaf_tail_p(struct xfs_mount *mp, xfs_dir2_leaf_t *lp)
 {
@@ -104,7 +102,6 @@ xfs_dir2_leaf_tail_p(struct xfs_mount *m
 /*
  * Get address of the bests array in the single-leaf block.
  */
-#define	XFS_DIR2_LEAF_BESTS_P(ltp)	xfs_dir2_leaf_bests_p(ltp)
 static inline __be16 *
 xfs_dir2_leaf_bests_p(xfs_dir2_leaf_tail_t *ltp)
 {
@@ -114,7 +111,6 @@ xfs_dir2_leaf_bests_p(xfs_dir2_leaf_tail
 /*
  * Convert dataptr to byte in file space
  */
-#define	XFS_DIR2_DATAPTR_TO_BYTE(mp,dp)	xfs_dir2_dataptr_to_byte(mp, dp)
 static inline xfs_dir2_off_t
 xfs_dir2_dataptr_to_byte(struct xfs_mount *mp, xfs_dir2_dataptr_t dp)
 {
@@ -124,7 +120,6 @@ xfs_dir2_dataptr_to_byte(struct xfs_moun
 /*
  * Convert byte in file space to dataptr.  It had better be aligned.
 */
-#define	XFS_DIR2_BYTE_TO_DATAPTR(mp,by)	xfs_dir2_byte_to_dataptr(mp,by)
 static inline xfs_dir2_dataptr_t
 xfs_dir2_byte_to_dataptr(struct xfs_mount *mp, xfs_dir2_off_t by)
 {
@@ -134,7 +129,6 @@ xfs_dir2_byte_to_dataptr(struct xfs_moun
 /*
  * Convert byte in space to (DB) block
  */
-#define	XFS_DIR2_BYTE_TO_DB(mp,by)	xfs_dir2_byte_to_db(mp, by)
 static inline xfs_dir2_db_t
 xfs_dir2_byte_to_db(struct xfs_mount *mp, xfs_dir2_off_t by)
 {
@@ -145,17 +139,15 @@ xfs_dir2_byte_to_db(struct xfs_mount *mp
 /*
  * Convert dataptr to a block number
  */
-#define	XFS_DIR2_DATAPTR_TO_DB(mp,dp)	xfs_dir2_dataptr_to_db(mp, dp)
 static inline xfs_dir2_db_t
 xfs_dir2_dataptr_to_db(struct xfs_mount *mp, xfs_dir2_dataptr_t dp)
 {
-	return XFS_DIR2_BYTE_TO_DB(mp, XFS_DIR2_DATAPTR_TO_BYTE(mp, dp));
+	return xfs_dir2_byte_to_db(mp, xfs_dir2_dataptr_to_byte(mp, dp));
 }
 
 /*
  * Convert byte in space to offset in a block
  */
-#define	XFS_DIR2_BYTE_TO_OFF(mp,by)	xfs_dir2_byte_to_off(mp, by)
 static inline xfs_dir2_data_aoff_t
 xfs_dir2_byte_to_off(struct xfs_mount *mp, xfs_dir2_off_t by)
 {
@@ -166,18 +158,15 @@ xfs_dir2_byte_to_off(struct xfs_mount *m
 /*
  * Convert dataptr to a byte offset in a block
  */
-#define	XFS_DIR2_DATAPTR_TO_OFF(mp,dp)	xfs_dir2_dataptr_to_off(mp, dp)
 static inline xfs_dir2_data_aoff_t
 xfs_dir2_dataptr_to_off(struct xfs_mount *mp, xfs_dir2_dataptr_t dp)
 {
-	return XFS_DIR2_BYTE_TO_OFF(mp, XFS_DIR2_DATAPTR_TO_BYTE(mp, dp));
+	return xfs_dir2_byte_to_off(mp, xfs_dir2_dataptr_to_byte(mp, dp));
 }
 
 /*
  * Convert block and offset to byte in space
  */
-#define	XFS_DIR2_DB_OFF_TO_BYTE(mp,db,o)	\
-	xfs_dir2_db_off_to_byte(mp, db, o)
 static inline xfs_dir2_off_t
 xfs_dir2_db_off_to_byte(struct xfs_mount *mp, xfs_dir2_db_t db,
			xfs_dir2_data_aoff_t o)
@@ -189,7 +178,6 @@ xfs_dir2_db_off_to_byte(struct xfs_mount
 /*
  * Convert block (DB) to block (dablk)
  */
-#define	XFS_DIR2_DB_TO_DA(mp,db)	xfs_dir2_db_to_da(mp, db)
 static inline xfs_dablk_t
 xfs_dir2_db_to_da(struct xfs_mount *mp, xfs_dir2_db_t db)
 {
@@ -199,29 +187,25 @@ xfs_dir2_db_to_da(struct xfs_mount *mp,
 /*
  * Convert byte in space to (DA) block
  */
-#define	XFS_DIR2_BYTE_TO_DA(mp,by)	xfs_dir2_byte_to_da(mp, by)
 static inline xfs_dablk_t
 xfs_dir2_byte_to_da(struct xfs_mount *mp, xfs_dir2_off_t by)
 {
-	return XFS_DIR2_DB_TO_DA(mp, XFS_DIR2_BYTE_TO_DB(mp, by));
+	return xfs_dir2_db_to_da(mp, xfs_dir2_byte_to_db(mp, by));
 }
 
 /*
  * Convert block and offset to dataptr
  */
-#define	XFS_DIR2_DB_OFF_TO_DATAPTR(mp,db,o)	\
-	xfs_dir2_db_off_to_dataptr(mp, db, o)
 static inline xfs_dir2_dataptr_t
 xfs_dir2_db_off_to_dataptr(struct xfs_mount *mp, xfs_dir2_db_t db,
			   xfs_dir2_data_aoff_t o)
 {
-	return XFS_DIR2_BYTE_TO_DATAPTR(mp, XFS_DIR2_DB_OFF_TO_BYTE(mp, db, o));
+	return xfs_dir2_byte_to_dataptr(mp, xfs_dir2_db_off_to_byte(mp, db, o));
 }
 
 /*
  * Convert block (dablk) to block (DB)
  */
-#define	XFS_DIR2_DA_TO_DB(mp,da)	xfs_dir2_da_to_db(mp, da)
 static inline xfs_dir2_db_t
 xfs_dir2_da_to_db(struct xfs_mount *mp, xfs_dablk_t da)
 {
@@ -231,11 +215,10 @@ xfs_dir2_da_to_db(struct xfs_mount *mp,
 /*
  * Convert block (dablk) to byte offset in space
  */
-#define	XFS_DIR2_DA_TO_BYTE(mp,da)	xfs_dir2_da_to_byte(mp, da)
 static inline xfs_dir2_off_t
 xfs_dir2_da_to_byte(struct xfs_mount *mp, xfs_dablk_t da)
 {
-	return XFS_DIR2_DB_OFF_TO_BYTE(mp, XFS_DIR2_DA_TO_DB(mp, da), 0);
+	return xfs_dir2_db_off_to_byte(mp, xfs_dir2_da_to_db(mp, da), 0);
 }
 
 /*
Index: linux-2.6/fs/xfs/xfs_dir2_node.c
===================================================================
--- linux-2.6.orig/fs/xfs/xfs_dir2_node.c	2007-05-10 10:53:16.000000000 +0200
+++ linux-2.6/fs/xfs/xfs_dir2_node.c	2007-05-12 13:38:34.000000000 +0200
@@ -136,14 +136,14 @@ xfs_dir2_leaf_to_node(
 	/*
 	 * Get the buffer for the new freespace block.
	 */
-	if ((error = xfs_da_get_buf(tp, dp, XFS_DIR2_DB_TO_DA(mp, fdb), -1, &fbp,
+	if ((error = xfs_da_get_buf(tp, dp, xfs_dir2_db_to_da(mp, fdb), -1, &fbp,
 			XFS_DATA_FORK))) {
 		return error;
 	}
 	ASSERT(fbp != NULL);
 	free = fbp->data;
 	leaf = lbp->data;
-	ltp = XFS_DIR2_LEAF_TAIL_P(mp, leaf);
+	ltp = xfs_dir2_leaf_tail_p(mp, leaf);
 	/*
 	 * Initialize the freespace block header.
 	 */
@@ -155,7 +155,7 @@ xfs_dir2_leaf_to_node(
 	 * Copy freespace entries from the leaf block to the new block.
 	 * Count active entries.
 	 */
-	for (i = n = 0, from = XFS_DIR2_LEAF_BESTS_P(ltp), to = free->bests;
+	for (i = n = 0, from = xfs_dir2_leaf_bests_p(ltp), to = free->bests;
 	     i < be32_to_cpu(ltp->bestcount); i++, from++, to++) {
 		if ((off = be16_to_cpu(*from)) != NULLDATAOFF)
 			n++;
@@ -215,7 +215,7 @@ xfs_dir2_leafn_add(
 	 * a compact.
 	 */
 
-	if (be16_to_cpu(leaf->hdr.count) == XFS_DIR2_MAX_LEAF_ENTS(mp)) {
+	if (be16_to_cpu(leaf->hdr.count) == xfs_dir2_max_leaf_ents(mp)) {
 		if (!leaf->hdr.stale)
 			return XFS_ERROR(ENOSPC);
 		compact = be16_to_cpu(leaf->hdr.stale) > 1;
@@ -327,7 +327,7 @@ xfs_dir2_leafn_add(
 	 * Insert the new entry, log everything.
 	 */
 	lep->hashval = cpu_to_be32(args->hashval);
-	lep->address = cpu_to_be32(XFS_DIR2_DB_OFF_TO_DATAPTR(mp,
+	lep->address = cpu_to_be32(xfs_dir2_db_off_to_dataptr(mp,
 				args->blkno, args->index));
 	xfs_dir2_leaf_log_header(tp, bp);
 	xfs_dir2_leaf_log_ents(tp, bp, lfloglow, lfloghigh);
@@ -352,7 +352,7 @@ xfs_dir2_leafn_check(
 	leaf = bp->data;
 	mp = dp->i_mount;
 	ASSERT(be16_to_cpu(leaf->hdr.info.magic) == XFS_DIR2_LEAFN_MAGIC);
-	ASSERT(be16_to_cpu(leaf->hdr.count) <= XFS_DIR2_MAX_LEAF_ENTS(mp));
+	ASSERT(be16_to_cpu(leaf->hdr.count) <= xfs_dir2_max_leaf_ents(mp));
 	for (i = stale = 0; i < be16_to_cpu(leaf->hdr.count); i++) {
 		if (i + 1 < be16_to_cpu(leaf->hdr.count)) {
 			ASSERT(be32_to_cpu(leaf->ents[i].hashval) <=
@@ -440,7 +440,7 @@ xfs_dir2_leafn_lookup_int(
 	if (args->addname) {
 		curfdb = curbp ? state->extrablk.blkno : -1;
 		curdb = -1;
-		length = XFS_DIR2_DATA_ENTSIZE(args->namelen);
+		length = xfs_dir2_data_entsize(args->namelen);
 		if ((free = (curbp ? curbp->data : NULL)))
 			ASSERT(be32_to_cpu(free->hdr.magic) ==
 			       XFS_DIR2_FREE_MAGIC);
 	}
@@ -465,7 +465,7 @@ xfs_dir2_leafn_lookup_int(
 		/*
 		 * Pull the data block number from the entry.
 		 */
-		newdb = XFS_DIR2_DATAPTR_TO_DB(mp, be32_to_cpu(lep->address));
+		newdb = xfs_dir2_dataptr_to_db(mp, be32_to_cpu(lep->address));
 		/*
 		 * For addname, we're looking for a place to put the new entry.
 		 * We want to use a data block with an entry of equal
@@ -482,7 +482,7 @@ xfs_dir2_leafn_lookup_int(
 			 * Convert the data block to the free block
 			 * holding its freespace information.
 			 */
-			newfdb = XFS_DIR2_DB_TO_FDB(mp, newdb);
+			newfdb = xfs_dir2_db_to_fdb(mp, newdb);
 			/*
 			 * If it's not the one we have in hand,
 			 * read it in.
@@ -497,7 +497,7 @@ xfs_dir2_leafn_lookup_int(
 				 * Read the free block.
 				 */
 				if ((error = xfs_da_read_buf(tp, dp,
-						XFS_DIR2_DB_TO_DA(mp,
+						xfs_dir2_db_to_da(mp,
 							newfdb),
 						-1, &curbp,
 						XFS_DATA_FORK))) {
@@ -517,7 +517,7 @@ xfs_dir2_leafn_lookup_int(
 			/*
 			 * Get the index for our entry.
 			 */
-			fi = XFS_DIR2_DB_TO_FDINDEX(mp, curdb);
+			fi = xfs_dir2_db_to_fdindex(mp, curdb);
 			/*
 			 * If it has room, return it.
 			 */
@@ -561,7 +561,7 @@ xfs_dir2_leafn_lookup_int(
 				 */
 				if ((error =
 				    xfs_da_read_buf(tp, dp,
-					    XFS_DIR2_DB_TO_DA(mp, newdb), -1,
+					    xfs_dir2_db_to_da(mp, newdb), -1,
 					    &curbp, XFS_DATA_FORK))) {
 					return error;
 				}
@@ -573,7 +573,7 @@ xfs_dir2_leafn_lookup_int(
 			 */
 			dep = (xfs_dir2_data_entry_t *)
 			      ((char *)curbp->data +
-			       XFS_DIR2_DATAPTR_TO_OFF(mp, be32_to_cpu(lep->address)));
+			       xfs_dir2_dataptr_to_off(mp, be32_to_cpu(lep->address)));
 			/*
 			 * Compare the entry, return it if it matches.
 			 */
@@ -876,9 +876,9 @@ xfs_dir2_leafn_remove(
 	/*
 	 * Extract the data block and offset from the entry.
	 */
-	db = XFS_DIR2_DATAPTR_TO_DB(mp, be32_to_cpu(lep->address));
+	db = xfs_dir2_dataptr_to_db(mp, be32_to_cpu(lep->address));
 	ASSERT(dblk->blkno == db);
-	off = XFS_DIR2_DATAPTR_TO_OFF(mp, be32_to_cpu(lep->address));
+	off = xfs_dir2_dataptr_to_off(mp, be32_to_cpu(lep->address));
 	ASSERT(dblk->index == off);
 	/*
 	 * Kill the leaf entry by marking it stale.
@@ -898,7 +898,7 @@ xfs_dir2_leafn_remove(
 	longest = be16_to_cpu(data->hdr.bestfree[0].length);
 	needlog = needscan = 0;
 	xfs_dir2_data_make_free(tp, dbp, off,
-		XFS_DIR2_DATA_ENTSIZE(dep->namelen), &needlog, &needscan);
+		xfs_dir2_data_entsize(dep->namelen), &needlog, &needscan);
 	/*
 	 * Rescan the data block freespaces for bestfree.
 	 * Log the data block header if needed.
@@ -924,8 +924,8 @@ xfs_dir2_leafn_remove(
 		 * Convert the data block number to a free block,
 		 * read in the free block.
 		 */
-		fdb = XFS_DIR2_DB_TO_FDB(mp, db);
-		if ((error = xfs_da_read_buf(tp, dp, XFS_DIR2_DB_TO_DA(mp, fdb),
+		fdb = xfs_dir2_db_to_fdb(mp, db);
+		if ((error = xfs_da_read_buf(tp, dp, xfs_dir2_db_to_da(mp, fdb),
 				-1, &fbp, XFS_DATA_FORK))) {
 			return error;
 		}
@@ -937,7 +937,7 @@ xfs_dir2_leafn_remove(
 		/*
 		 * Calculate which entry we need to fix.
 		 */
-		findex = XFS_DIR2_DB_TO_FDINDEX(mp, db);
+		findex = xfs_dir2_db_to_fdindex(mp, db);
 		longest = be16_to_cpu(data->hdr.bestfree[0].length);
 		/*
 		 * If the data block is now empty we can get rid of it
@@ -1073,7 +1073,7 @@ xfs_dir2_leafn_split(
 	/*
 	 * Initialize the new leaf block.
 	 */
-	error = xfs_dir2_leaf_init(args, XFS_DIR2_DA_TO_DB(mp, blkno),
+	error = xfs_dir2_leaf_init(args, xfs_dir2_da_to_db(mp, blkno),
 		&newblk->bp, XFS_DIR2_LEAFN_MAGIC);
 	if (error) {
 		return error;
@@ -1385,7 +1385,7 @@ xfs_dir2_node_addname_int(
 	dp = args->dp;
 	mp = dp->i_mount;
 	tp = args->trans;
-	length = XFS_DIR2_DATA_ENTSIZE(args->namelen);
+	length = xfs_dir2_data_entsize(args->namelen);
 	/*
 	 * If we came in with a freespace block that means that lookup
 	 * found an entry with our hash value.  This is the freespace
@@ -1438,7 +1438,7 @@ xfs_dir2_node_addname_int(
 		if ((error = xfs_bmap_last_offset(tp, dp, &fo, XFS_DATA_FORK)))
 			return error;
-		lastfbno = XFS_DIR2_DA_TO_DB(mp, (xfs_dablk_t)fo);
+		lastfbno = xfs_dir2_da_to_db(mp, (xfs_dablk_t)fo);
 		fbno = ifbno;
 	}
 	/*
@@ -1474,7 +1474,7 @@ xfs_dir2_node_addname_int(
 			 * to avoid it.
 			 */
 			if ((error = xfs_da_read_buf(tp, dp,
-					XFS_DIR2_DB_TO_DA(mp, fbno), -2, &fbp,
+					xfs_dir2_db_to_da(mp, fbno), -2, &fbp,
 					XFS_DATA_FORK))) {
 				return error;
 			}
@@ -1550,9 +1550,9 @@ xfs_dir2_node_addname_int(
 		 * Get the freespace block corresponding to the data block
 		 * that was just allocated.
 		 */
-		fbno = XFS_DIR2_DB_TO_FDB(mp, dbno);
+		fbno = xfs_dir2_db_to_fdb(mp, dbno);
 		if (unlikely(error = xfs_da_read_buf(tp, dp,
-				XFS_DIR2_DB_TO_DA(mp, fbno), -2, &fbp,
+				xfs_dir2_db_to_da(mp, fbno), -2, &fbp,
 				XFS_DATA_FORK))) {
 			xfs_da_buf_done(dbp);
 			return error;
@@ -1567,14 +1567,14 @@ xfs_dir2_node_addname_int(
 				return error;
 			}
 
-			if (unlikely(XFS_DIR2_DB_TO_FDB(mp, dbno) != fbno)) {
+			if (unlikely(xfs_dir2_db_to_fdb(mp, dbno) != fbno)) {
 				cmn_err(CE_ALERT,
 					"xfs_dir2_node_addname_int: dir ino "
 					"%llu needed freesp block %lld for\n"
 					"  data block %lld, got %lld\n"
 					"  ifbno %llu lastfbno %d\n",
 					(unsigned long long)dp->i_ino,
-					(long long)XFS_DIR2_DB_TO_FDB(mp, dbno),
+					(long long)xfs_dir2_db_to_fdb(mp, dbno),
 					(long long)dbno, (long long)fbno,
 					(unsigned long long)ifbno, lastfbno);
 				if (fblk) {
@@ -1598,7 +1598,7 @@ xfs_dir2_node_addname_int(
 			 * Get a buffer for the new block.
 			 */
 			if ((error = xfs_da_get_buf(tp, dp,
-					XFS_DIR2_DB_TO_DA(mp, fbno),
+					xfs_dir2_db_to_da(mp, fbno),
 					-1, &fbp, XFS_DATA_FORK))) {
 				return error;
 			}
@@ -1623,7 +1623,7 @@ xfs_dir2_node_addname_int(
 		/*
 		 * Set the freespace block index from the data block number.
 		 */
-		findex = XFS_DIR2_DB_TO_FDINDEX(mp, dbno);
+		findex = xfs_dir2_db_to_fdindex(mp, dbno);
 		/*
 		 * If it's after the end of the current entries in the
 		 * freespace block, extend that table.
		 */
@@ -1669,7 +1669,7 @@ xfs_dir2_node_addname_int(
 		 * Read the data block in.
 		 */
 		if (unlikely(
-		    error = xfs_da_read_buf(tp, dp, XFS_DIR2_DB_TO_DA(mp, dbno),
+		    error = xfs_da_read_buf(tp, dp, xfs_dir2_db_to_da(mp, dbno),
			    -1, &dbp, XFS_DATA_FORK))) {
 			if ((fblk == NULL || fblk->bp == NULL) && fbp != NULL)
 				xfs_da_buf_done(fbp);
@@ -1698,7 +1698,7 @@ xfs_dir2_node_addname_int(
 	dep->inumber = cpu_to_be64(args->inumber);
 	dep->namelen = args->namelen;
 	memcpy(dep->name, args->name, dep->namelen);
-	tagp = XFS_DIR2_DATA_ENTRY_TAG_P(dep);
+	tagp = xfs_dir2_data_entry_tag_p(dep);
 	*tagp = cpu_to_be16((char *)dep - (char *)data);
 	xfs_dir2_data_log_entry(tp, dbp, dep);
 	/*
@@ -1904,7 +1904,7 @@ xfs_dir2_node_replace(
 		ASSERT(be32_to_cpu(data->hdr.magic) == XFS_DIR2_DATA_MAGIC);
 		dep = (xfs_dir2_data_entry_t *)
 		      ((char *)data +
-		       XFS_DIR2_DATAPTR_TO_OFF(state->mp, be32_to_cpu(lep->address)));
+		       xfs_dir2_dataptr_to_off(state->mp, be32_to_cpu(lep->address)));
 		ASSERT(inum != be64_to_cpu(dep->inumber));
 		/*
 		 * Fill in the new inode number and log the entry.
@@ -1980,7 +1980,7 @@ xfs_dir2_node_trim_free(
 	 * Blow the block away.
	 */
 	if ((error =
-	    xfs_dir2_shrink_inode(args, XFS_DIR2_DA_TO_DB(mp, (xfs_dablk_t)fo),
+	    xfs_dir2_shrink_inode(args, xfs_dir2_da_to_db(mp, (xfs_dablk_t)fo),
		    bp))) {
 		/*
 		 * Can't fail with ENOSPC since that only happens with no
Index: linux-2.6/fs/xfs/xfs_dir2_node.h
===================================================================
--- linux-2.6.orig/fs/xfs/xfs_dir2_node.h	2007-04-28 09:37:26.000000000 +0200
+++ linux-2.6/fs/xfs/xfs_dir2_node.h	2007-05-12 13:38:34.000000000 +0200
@@ -36,7 +36,7 @@ struct xfs_trans;
 #define	XFS_DIR2_FREE_SPACE	2
 #define	XFS_DIR2_FREE_OFFSET	(XFS_DIR2_FREE_SPACE * XFS_DIR2_SPACE_SIZE)
 #define	XFS_DIR2_FREE_FIRSTDB(mp)	\
-	XFS_DIR2_BYTE_TO_DB(mp, XFS_DIR2_FREE_OFFSET)
+	xfs_dir2_byte_to_db(mp, XFS_DIR2_FREE_OFFSET)
 
 #define	XFS_DIR2_FREE_MAGIC	0x58443246	/* XD2F */
 
@@ -60,7 +60,6 @@ typedef struct xfs_dir2_free {
 /*
  * Convert data space db to the corresponding free db.
  */
-#define	XFS_DIR2_DB_TO_FDB(mp,db)	xfs_dir2_db_to_fdb(mp, db)
 static inline xfs_dir2_db_t
 xfs_dir2_db_to_fdb(struct xfs_mount *mp, xfs_dir2_db_t db)
 {
@@ -70,7 +69,6 @@ xfs_dir2_db_to_fdb(struct xfs_mount *mp,
 /*
  * Convert data space db to the corresponding index in a free db.
  */
-#define	XFS_DIR2_DB_TO_FDINDEX(mp,db)	xfs_dir2_db_to_fdindex(mp, db)
 static inline int
 xfs_dir2_db_to_fdindex(struct xfs_mount *mp, xfs_dir2_db_t db)
 {
Index: linux-2.6/fs/xfs/xfs_dir2_sf.c
===================================================================
--- linux-2.6.orig/fs/xfs/xfs_dir2_sf.c	2007-04-28 09:37:26.000000000 +0200
+++ linux-2.6/fs/xfs/xfs_dir2_sf.c	2007-05-12 13:38:34.000000000 +0200
@@ -89,8 +89,8 @@ xfs_dir2_block_sfsize(
 	mp = dp->i_mount;
 	count = i8count = namelen = 0;
-	btp = XFS_DIR2_BLOCK_TAIL_P(mp, block);
-	blp = XFS_DIR2_BLOCK_LEAF_P(btp);
+	btp = xfs_dir2_block_tail_p(mp, block);
+	blp = xfs_dir2_block_leaf_p(btp);
 	/*
 	 * Iterate over the block's data entries by using the leaf pointers.
	 */
@@ -102,7 +102,7 @@ xfs_dir2_block_sfsize(
 		 * Calculate the pointer to the entry at hand.
 		 */
 		dep = (xfs_dir2_data_entry_t *)
-		      ((char *)block + XFS_DIR2_DATAPTR_TO_OFF(mp, addr));
+		      ((char *)block + xfs_dir2_dataptr_to_off(mp, addr));
 		/*
 		 * Detect . and .., so we can special-case them.
 		 * . is not included in sf directories.
@@ -124,7 +124,7 @@ xfs_dir2_block_sfsize(
 		/*
 		 * Calculate the new size, see if we should give up yet.
 		 */
-		size = XFS_DIR2_SF_HDR_SIZE(i8count) +		/* header */
+		size = xfs_dir2_sf_hdr_size(i8count) +		/* header */
 		       count +					/* namelen */
 		       count * (uint)sizeof(xfs_dir2_sf_off_t) + /* offset */
 		       namelen +				/* name */
@@ -139,7 +139,7 @@ xfs_dir2_block_sfsize(
 	 */
 	sfhp->count = count;
 	sfhp->i8count = i8count;
-	XFS_DIR2_SF_PUT_INUMBER((xfs_dir2_sf_t *)sfhp, &parent, &sfhp->parent);
+	xfs_dir2_sf_put_inumber((xfs_dir2_sf_t *)sfhp, &parent, &sfhp->parent);
 	return size;
 }
 
@@ -199,15 +199,15 @@ xfs_dir2_block_to_sf(
 	 * Copy the header into the newly allocate local space.
 	 */
 	sfp = (xfs_dir2_sf_t *)dp->i_df.if_u1.if_data;
-	memcpy(sfp, sfhp, XFS_DIR2_SF_HDR_SIZE(sfhp->i8count));
+	memcpy(sfp, sfhp, xfs_dir2_sf_hdr_size(sfhp->i8count));
 	dp->i_d.di_size = size;
 	/*
 	 * Set up to loop over the block's entries.
 	 */
-	btp = XFS_DIR2_BLOCK_TAIL_P(mp, block);
+	btp = xfs_dir2_block_tail_p(mp, block);
 	ptr = (char *)block->u;
-	endptr = (char *)XFS_DIR2_BLOCK_LEAF_P(btp);
-	sfep = XFS_DIR2_SF_FIRSTENTRY(sfp);
+	endptr = (char *)xfs_dir2_block_leaf_p(btp);
+	sfep = xfs_dir2_sf_firstentry(sfp);
 	/*
 	 * Loop over the active and unused entries.
 	 * Stop when we reach the leaf/tail portion of the block.
@@ -233,22 +233,22 @@ xfs_dir2_block_to_sf(
 		else if (dep->namelen == 2 &&
 			 dep->name[0] == '.' && dep->name[1] == '.')
 			ASSERT(be64_to_cpu(dep->inumber) ==
-			       XFS_DIR2_SF_GET_INUMBER(sfp, &sfp->hdr.parent));
+			       xfs_dir2_sf_get_inumber(sfp, &sfp->hdr.parent));
 		/*
 		 * Normal entry, copy it into shortform.
		 */
 		else {
 			sfep->namelen = dep->namelen;
-			XFS_DIR2_SF_PUT_OFFSET(sfep,
+			xfs_dir2_sf_put_offset(sfep,
 				(xfs_dir2_data_aoff_t)
 				((char *)dep - (char *)block));
 			memcpy(sfep->name, dep->name, dep->namelen);
 			temp = be64_to_cpu(dep->inumber);
-			XFS_DIR2_SF_PUT_INUMBER(sfp, &temp,
-				XFS_DIR2_SF_INUMBERP(sfep));
-			sfep = XFS_DIR2_SF_NEXTENTRY(sfp, sfep);
+			xfs_dir2_sf_put_inumber(sfp, &temp,
+				xfs_dir2_sf_inumberp(sfep));
+			sfep = xfs_dir2_sf_nextentry(sfp, sfep);
 		}
-		ptr += XFS_DIR2_DATA_ENTSIZE(dep->namelen);
+		ptr += xfs_dir2_data_entsize(dep->namelen);
 	}
 	ASSERT((char *)sfep - (char *)sfp == size);
 	xfs_dir2_sf_check(args);
@@ -294,11 +294,11 @@ xfs_dir2_sf_addname(
 	ASSERT(dp->i_df.if_bytes == dp->i_d.di_size);
 	ASSERT(dp->i_df.if_u1.if_data != NULL);
 	sfp = (xfs_dir2_sf_t *)dp->i_df.if_u1.if_data;
-	ASSERT(dp->i_d.di_size >= XFS_DIR2_SF_HDR_SIZE(sfp->hdr.i8count));
+	ASSERT(dp->i_d.di_size >= xfs_dir2_sf_hdr_size(sfp->hdr.i8count));
 	/*
 	 * Compute entry (and change in) size.
 	 */
-	add_entsize = XFS_DIR2_SF_ENTSIZE_BYNAME(sfp, args->namelen);
+	add_entsize = xfs_dir2_sf_entsize_byname(sfp, args->namelen);
 	incr_isize = add_entsize;
 	objchange = 0;
 #if XFS_BIG_INUMS
@@ -392,7 +392,7 @@ xfs_dir2_sf_addname_easy(
 	/*
 	 * Grow the in-inode space.
 	 */
-	xfs_idata_realloc(dp, XFS_DIR2_SF_ENTSIZE_BYNAME(sfp, args->namelen),
+	xfs_idata_realloc(dp, xfs_dir2_sf_entsize_byname(sfp, args->namelen),
 		XFS_DATA_FORK);
 	/*
 	 * Need to set up again due to realloc of the inode data.
@@ -403,10 +403,10 @@ xfs_dir2_sf_addname_easy(
 	 * Fill in the new entry.
 	 */
 	sfep->namelen = args->namelen;
-	XFS_DIR2_SF_PUT_OFFSET(sfep, offset);
+	xfs_dir2_sf_put_offset(sfep, offset);
 	memcpy(sfep->name, args->name, sfep->namelen);
-	XFS_DIR2_SF_PUT_INUMBER(sfp, &args->inumber,
-		XFS_DIR2_SF_INUMBERP(sfep));
+	xfs_dir2_sf_put_inumber(sfp, &args->inumber,
+		xfs_dir2_sf_inumberp(sfep));
 	/*
 	 * Update the header and inode.
	 */
@@ -463,14 +463,14 @@ xfs_dir2_sf_addname_hard(
 	 * If it's going to end up at the end then oldsfep will point there.
 	 */
 	for (offset = XFS_DIR2_DATA_FIRST_OFFSET,
-	     oldsfep = XFS_DIR2_SF_FIRSTENTRY(oldsfp),
-	     add_datasize = XFS_DIR2_DATA_ENTSIZE(args->namelen),
+	     oldsfep = xfs_dir2_sf_firstentry(oldsfp),
+	     add_datasize = xfs_dir2_data_entsize(args->namelen),
	      eof = (char *)oldsfep == &buf[old_isize];
	     !eof;
-	     offset = new_offset + XFS_DIR2_DATA_ENTSIZE(oldsfep->namelen),
-	     oldsfep = XFS_DIR2_SF_NEXTENTRY(oldsfp, oldsfep),
+	     offset = new_offset + xfs_dir2_data_entsize(oldsfep->namelen),
+	     oldsfep = xfs_dir2_sf_nextentry(oldsfp, oldsfep),
	      eof = (char *)oldsfep == &buf[old_isize]) {
-		new_offset = XFS_DIR2_SF_GET_OFFSET(oldsfep);
+		new_offset = xfs_dir2_sf_get_offset(oldsfep);
 		if (offset + add_datasize <= new_offset)
 			break;
 	}
@@ -495,10 +495,10 @@ xfs_dir2_sf_addname_hard(
 	 * Fill in the new entry, and update the header counts.
 	 */
 	sfep->namelen = args->namelen;
-	XFS_DIR2_SF_PUT_OFFSET(sfep, offset);
+	xfs_dir2_sf_put_offset(sfep, offset);
 	memcpy(sfep->name, args->name, sfep->namelen);
-	XFS_DIR2_SF_PUT_INUMBER(sfp, &args->inumber,
-		XFS_DIR2_SF_INUMBERP(sfep));
+	xfs_dir2_sf_put_inumber(sfp, &args->inumber,
+		xfs_dir2_sf_inumberp(sfep));
 	sfp->hdr.count++;
 #if XFS_BIG_INUMS
 	if (args->inumber > XFS_DIR2_MAX_SHORT_INUM && !objchange)
@@ -508,7 +508,7 @@ xfs_dir2_sf_addname_hard(
 	 * If there's more left to copy, do that.
 	 */
 	if (!eof) {
-		sfep = XFS_DIR2_SF_NEXTENTRY(sfp, sfep);
+		sfep = xfs_dir2_sf_nextentry(sfp, sfep);
 		memcpy(sfep, oldsfep, old_isize - nbytes);
 	}
 	kmem_free(buf, old_isize);
@@ -544,9 +544,9 @@ xfs_dir2_sf_addname_pick(
 	mp = dp->i_mount;
 	sfp = (xfs_dir2_sf_t *)dp->i_df.if_u1.if_data;
-	size = XFS_DIR2_DATA_ENTSIZE(args->namelen);
+	size = xfs_dir2_data_entsize(args->namelen);
 	offset = XFS_DIR2_DATA_FIRST_OFFSET;
-	sfep = XFS_DIR2_SF_FIRSTENTRY(sfp);
+	sfep = xfs_dir2_sf_firstentry(sfp);
 	holefit = 0;
 	/*
 	 * Loop over sf entries.
@@ -555,10 +555,10 @@ xfs_dir2_sf_addname_pick(
 	 */
 	for (i = 0; i < sfp->hdr.count; i++) {
 		if (!holefit)
-			holefit = offset + size <= XFS_DIR2_SF_GET_OFFSET(sfep);
-		offset = XFS_DIR2_SF_GET_OFFSET(sfep) +
-			 XFS_DIR2_DATA_ENTSIZE(sfep->namelen);
-		sfep = XFS_DIR2_SF_NEXTENTRY(sfp, sfep);
+			holefit = offset + size <= xfs_dir2_sf_get_offset(sfep);
+		offset = xfs_dir2_sf_get_offset(sfep) +
+			 xfs_dir2_data_entsize(sfep->namelen);
+		sfep = xfs_dir2_sf_nextentry(sfp, sfep);
 	}
 	/*
 	 * Calculate data bytes used excluding the new entry, if this
@@ -617,18 +617,18 @@ xfs_dir2_sf_check(
 	sfp = (xfs_dir2_sf_t *)dp->i_df.if_u1.if_data;
 	offset = XFS_DIR2_DATA_FIRST_OFFSET;
-	ino = XFS_DIR2_SF_GET_INUMBER(sfp, &sfp->hdr.parent);
+	ino = xfs_dir2_sf_get_inumber(sfp, &sfp->hdr.parent);
 	i8count = ino > XFS_DIR2_MAX_SHORT_INUM;
 
-	for (i = 0, sfep = XFS_DIR2_SF_FIRSTENTRY(sfp);
+	for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp);
 	     i < sfp->hdr.count;
-	     i++, sfep = XFS_DIR2_SF_NEXTENTRY(sfp, sfep)) {
-		ASSERT(XFS_DIR2_SF_GET_OFFSET(sfep) >= offset);
-		ino = XFS_DIR2_SF_GET_INUMBER(sfp, XFS_DIR2_SF_INUMBERP(sfep));
+	     i++, sfep = xfs_dir2_sf_nextentry(sfp, sfep)) {
+		ASSERT(xfs_dir2_sf_get_offset(sfep) >= offset);
+		ino = xfs_dir2_sf_get_inumber(sfp, xfs_dir2_sf_inumberp(sfep));
 		i8count += ino > XFS_DIR2_MAX_SHORT_INUM;
 		offset =
-			XFS_DIR2_SF_GET_OFFSET(sfep) +
-			XFS_DIR2_DATA_ENTSIZE(sfep->namelen);
+			xfs_dir2_sf_get_offset(sfep) +
+			xfs_dir2_data_entsize(sfep->namelen);
 	}
 	ASSERT(i8count == sfp->hdr.i8count);
 	ASSERT(XFS_BIG_INUMS || i8count == 0);
@@ -671,7 +671,7 @@ xfs_dir2_sf_create(
 	ASSERT(dp->i_df.if_flags & XFS_IFINLINE);
 	ASSERT(dp->i_df.if_bytes == 0);
 	i8count = pino > XFS_DIR2_MAX_SHORT_INUM;
-	size = XFS_DIR2_SF_HDR_SIZE(i8count);
+	size = xfs_dir2_sf_hdr_size(i8count);
 	/*
 	 * Make a buffer for the data.
 	 */
@@ -684,7 +684,7 @@ xfs_dir2_sf_create(
 	/*
 	 * Now can put in the inode number, since i8count is set.
*/ - XFS_DIR2_SF_PUT_INUMBER(sfp, &pino, &sfp->hdr.parent); + xfs_dir2_sf_put_inumber(sfp, &pino, &sfp->hdr.parent); sfp->hdr.count = 0; dp->i_d.di_size = size; xfs_dir2_sf_check(args); @@ -727,12 +727,12 @@ xfs_dir2_sf_getdents( sfp = (xfs_dir2_sf_t *)dp->i_df.if_u1.if_data; - ASSERT(dp->i_d.di_size >= XFS_DIR2_SF_HDR_SIZE(sfp->hdr.i8count)); + ASSERT(dp->i_d.di_size >= xfs_dir2_sf_hdr_size(sfp->hdr.i8count)); /* * If the block number in the offset is out of range, we're done. */ - if (XFS_DIR2_DATAPTR_TO_DB(mp, dir_offset) > mp->m_dirdatablk) { + if (xfs_dir2_dataptr_to_db(mp, dir_offset) > mp->m_dirdatablk) { *eofp = 1; return 0; } @@ -747,9 +747,9 @@ xfs_dir2_sf_getdents( * Put . entry unless we're starting past it. */ if (dir_offset <= - XFS_DIR2_DB_OFF_TO_DATAPTR(mp, mp->m_dirdatablk, + xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, XFS_DIR2_DATA_DOT_OFFSET)) { - p.cook = XFS_DIR2_DB_OFF_TO_DATAPTR(mp, 0, + p.cook = xfs_dir2_db_off_to_dataptr(mp, 0, XFS_DIR2_DATA_DOTDOT_OFFSET); p.ino = dp->i_ino; #if XFS_BIG_INUMS @@ -762,7 +762,7 @@ xfs_dir2_sf_getdents( if (!p.done) { uio->uio_offset = - XFS_DIR2_DB_OFF_TO_DATAPTR(mp, mp->m_dirdatablk, + xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, XFS_DIR2_DATA_DOT_OFFSET); return error; } @@ -772,11 +772,11 @@ xfs_dir2_sf_getdents( * Put .. entry unless we're starting past it. 
*/ if (dir_offset <= - XFS_DIR2_DB_OFF_TO_DATAPTR(mp, mp->m_dirdatablk, + xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, XFS_DIR2_DATA_DOTDOT_OFFSET)) { - p.cook = XFS_DIR2_DB_OFF_TO_DATAPTR(mp, mp->m_dirdatablk, + p.cook = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, XFS_DIR2_DATA_FIRST_OFFSET); - p.ino = XFS_DIR2_SF_GET_INUMBER(sfp, &sfp->hdr.parent); + p.ino = xfs_dir2_sf_get_inumber(sfp, &sfp->hdr.parent); #if XFS_BIG_INUMS p.ino += mp->m_inoadd; #endif @@ -787,7 +787,7 @@ xfs_dir2_sf_getdents( if (!p.done) { uio->uio_offset = - XFS_DIR2_DB_OFF_TO_DATAPTR(mp, mp->m_dirdatablk, + xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, XFS_DIR2_DATA_DOTDOT_OFFSET); return error; } @@ -796,23 +796,23 @@ xfs_dir2_sf_getdents( /* * Loop while there are more entries and put'ing works. */ - for (i = 0, sfep = XFS_DIR2_SF_FIRSTENTRY(sfp); + for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp); i < sfp->hdr.count; - i++, sfep = XFS_DIR2_SF_NEXTENTRY(sfp, sfep)) { + i++, sfep = xfs_dir2_sf_nextentry(sfp, sfep)) { - off = XFS_DIR2_DB_OFF_TO_DATAPTR(mp, mp->m_dirdatablk, - XFS_DIR2_SF_GET_OFFSET(sfep)); + off = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, + xfs_dir2_sf_get_offset(sfep)); if (dir_offset > off) continue; p.namelen = sfep->namelen; - p.cook = XFS_DIR2_DB_OFF_TO_DATAPTR(mp, mp->m_dirdatablk, - XFS_DIR2_SF_GET_OFFSET(sfep) + - XFS_DIR2_DATA_ENTSIZE(p.namelen)); + p.cook = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, + xfs_dir2_sf_get_offset(sfep) + + xfs_dir2_data_entsize(p.namelen)); - p.ino = XFS_DIR2_SF_GET_INUMBER(sfp, XFS_DIR2_SF_INUMBERP(sfep)); + p.ino = xfs_dir2_sf_get_inumber(sfp, xfs_dir2_sf_inumberp(sfep)); #if XFS_BIG_INUMS p.ino += mp->m_inoadd; #endif @@ -832,7 +832,7 @@ xfs_dir2_sf_getdents( *eofp = 1; uio->uio_offset = - XFS_DIR2_DB_OFF_TO_DATAPTR(mp, mp->m_dirdatablk + 1, 0); + xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk + 1, 0); return 0; } @@ -865,7 +865,7 @@ xfs_dir2_sf_lookup( ASSERT(dp->i_df.if_bytes == dp->i_d.di_size); 
ASSERT(dp->i_df.if_u1.if_data != NULL); sfp = (xfs_dir2_sf_t *)dp->i_df.if_u1.if_data; - ASSERT(dp->i_d.di_size >= XFS_DIR2_SF_HDR_SIZE(sfp->hdr.i8count)); + ASSERT(dp->i_d.di_size >= xfs_dir2_sf_hdr_size(sfp->hdr.i8count)); /* * Special case for . */ @@ -878,21 +878,21 @@ xfs_dir2_sf_lookup( */ if (args->namelen == 2 && args->name[0] == '.' && args->name[1] == '.') { - args->inumber = XFS_DIR2_SF_GET_INUMBER(sfp, &sfp->hdr.parent); + args->inumber = xfs_dir2_sf_get_inumber(sfp, &sfp->hdr.parent); return XFS_ERROR(EEXIST); } /* * Loop over all the entries trying to match ours. */ - for (i = 0, sfep = XFS_DIR2_SF_FIRSTENTRY(sfp); + for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp); i < sfp->hdr.count; - i++, sfep = XFS_DIR2_SF_NEXTENTRY(sfp, sfep)) { + i++, sfep = xfs_dir2_sf_nextentry(sfp, sfep)) { if (sfep->namelen == args->namelen && sfep->name[0] == args->name[0] && memcmp(args->name, sfep->name, args->namelen) == 0) { args->inumber = - XFS_DIR2_SF_GET_INUMBER(sfp, - XFS_DIR2_SF_INUMBERP(sfep)); + xfs_dir2_sf_get_inumber(sfp, + xfs_dir2_sf_inumberp(sfep)); return XFS_ERROR(EEXIST); } } @@ -934,19 +934,19 @@ xfs_dir2_sf_removename( ASSERT(dp->i_df.if_bytes == oldsize); ASSERT(dp->i_df.if_u1.if_data != NULL); sfp = (xfs_dir2_sf_t *)dp->i_df.if_u1.if_data; - ASSERT(oldsize >= XFS_DIR2_SF_HDR_SIZE(sfp->hdr.i8count)); + ASSERT(oldsize >= xfs_dir2_sf_hdr_size(sfp->hdr.i8count)); /* * Loop over the old directory entries. * Find the one we're deleting. 
*/ - for (i = 0, sfep = XFS_DIR2_SF_FIRSTENTRY(sfp); + for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp); i < sfp->hdr.count; - i++, sfep = XFS_DIR2_SF_NEXTENTRY(sfp, sfep)) { + i++, sfep = xfs_dir2_sf_nextentry(sfp, sfep)) { if (sfep->namelen == args->namelen && sfep->name[0] == args->name[0] && memcmp(sfep->name, args->name, args->namelen) == 0) { - ASSERT(XFS_DIR2_SF_GET_INUMBER(sfp, - XFS_DIR2_SF_INUMBERP(sfep)) == + ASSERT(xfs_dir2_sf_get_inumber(sfp, + xfs_dir2_sf_inumberp(sfep)) == args->inumber); break; } @@ -961,7 +961,7 @@ xfs_dir2_sf_removename( * Calculate sizes. */ byteoff = (int)((char *)sfep - (char *)sfp); - entsize = XFS_DIR2_SF_ENTSIZE_BYNAME(sfp, args->namelen); + entsize = xfs_dir2_sf_entsize_byname(sfp, args->namelen); newsize = oldsize - entsize; /* * Copy the part if any after the removed entry, sliding it down. @@ -1027,7 +1027,7 @@ xfs_dir2_sf_replace( ASSERT(dp->i_df.if_bytes == dp->i_d.di_size); ASSERT(dp->i_df.if_u1.if_data != NULL); sfp = (xfs_dir2_sf_t *)dp->i_df.if_u1.if_data; - ASSERT(dp->i_d.di_size >= XFS_DIR2_SF_HDR_SIZE(sfp->hdr.i8count)); + ASSERT(dp->i_d.di_size >= xfs_dir2_sf_hdr_size(sfp->hdr.i8count)); #if XFS_BIG_INUMS /* * New inode number is large, and need to convert to 8-byte inodes. @@ -1067,28 +1067,28 @@ xfs_dir2_sf_replace( if (args->namelen == 2 && args->name[0] == '.' && args->name[1] == '.') { #if XFS_BIG_INUMS || defined(DEBUG) - ino = XFS_DIR2_SF_GET_INUMBER(sfp, &sfp->hdr.parent); + ino = xfs_dir2_sf_get_inumber(sfp, &sfp->hdr.parent); ASSERT(args->inumber != ino); #endif - XFS_DIR2_SF_PUT_INUMBER(sfp, &args->inumber, &sfp->hdr.parent); + xfs_dir2_sf_put_inumber(sfp, &args->inumber, &sfp->hdr.parent); } /* * Normal entry, look for the name. 
*/ else { - for (i = 0, sfep = XFS_DIR2_SF_FIRSTENTRY(sfp); + for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp); i < sfp->hdr.count; - i++, sfep = XFS_DIR2_SF_NEXTENTRY(sfp, sfep)) { + i++, sfep = xfs_dir2_sf_nextentry(sfp, sfep)) { if (sfep->namelen == args->namelen && sfep->name[0] == args->name[0] && memcmp(args->name, sfep->name, args->namelen) == 0) { #if XFS_BIG_INUMS || defined(DEBUG) - ino = XFS_DIR2_SF_GET_INUMBER(sfp, - XFS_DIR2_SF_INUMBERP(sfep)); + ino = xfs_dir2_sf_get_inumber(sfp, + xfs_dir2_sf_inumberp(sfep)); ASSERT(args->inumber != ino); #endif - XFS_DIR2_SF_PUT_INUMBER(sfp, &args->inumber, - XFS_DIR2_SF_INUMBERP(sfep)); + xfs_dir2_sf_put_inumber(sfp, &args->inumber, + xfs_dir2_sf_inumberp(sfep)); break; } } @@ -1189,22 +1189,22 @@ xfs_dir2_sf_toino4( */ sfp->hdr.count = oldsfp->hdr.count; sfp->hdr.i8count = 0; - ino = XFS_DIR2_SF_GET_INUMBER(oldsfp, &oldsfp->hdr.parent); - XFS_DIR2_SF_PUT_INUMBER(sfp, &ino, &sfp->hdr.parent); + ino = xfs_dir2_sf_get_inumber(oldsfp, &oldsfp->hdr.parent); + xfs_dir2_sf_put_inumber(sfp, &ino, &sfp->hdr.parent); /* * Copy the entries field by field. */ - for (i = 0, sfep = XFS_DIR2_SF_FIRSTENTRY(sfp), - oldsfep = XFS_DIR2_SF_FIRSTENTRY(oldsfp); + for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp), + oldsfep = xfs_dir2_sf_firstentry(oldsfp); i < sfp->hdr.count; - i++, sfep = XFS_DIR2_SF_NEXTENTRY(sfp, sfep), - oldsfep = XFS_DIR2_SF_NEXTENTRY(oldsfp, oldsfep)) { + i++, sfep = xfs_dir2_sf_nextentry(sfp, sfep), + oldsfep = xfs_dir2_sf_nextentry(oldsfp, oldsfep)) { sfep->namelen = oldsfep->namelen; sfep->offset = oldsfep->offset; memcpy(sfep->name, oldsfep->name, sfep->namelen); - ino = XFS_DIR2_SF_GET_INUMBER(oldsfp, - XFS_DIR2_SF_INUMBERP(oldsfep)); - XFS_DIR2_SF_PUT_INUMBER(sfp, &ino, XFS_DIR2_SF_INUMBERP(sfep)); + ino = xfs_dir2_sf_get_inumber(oldsfp, + xfs_dir2_sf_inumberp(oldsfep)); + xfs_dir2_sf_put_inumber(sfp, &ino, xfs_dir2_sf_inumberp(sfep)); } /* * Clean up the inode. 
@@ -1266,22 +1266,22 @@ xfs_dir2_sf_toino8( */ sfp->hdr.count = oldsfp->hdr.count; sfp->hdr.i8count = 1; - ino = XFS_DIR2_SF_GET_INUMBER(oldsfp, &oldsfp->hdr.parent); - XFS_DIR2_SF_PUT_INUMBER(sfp, &ino, &sfp->hdr.parent); + ino = xfs_dir2_sf_get_inumber(oldsfp, &oldsfp->hdr.parent); + xfs_dir2_sf_put_inumber(sfp, &ino, &sfp->hdr.parent); /* * Copy the entries field by field. */ - for (i = 0, sfep = XFS_DIR2_SF_FIRSTENTRY(sfp), - oldsfep = XFS_DIR2_SF_FIRSTENTRY(oldsfp); + for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp), + oldsfep = xfs_dir2_sf_firstentry(oldsfp); i < sfp->hdr.count; - i++, sfep = XFS_DIR2_SF_NEXTENTRY(sfp, sfep), - oldsfep = XFS_DIR2_SF_NEXTENTRY(oldsfp, oldsfep)) { + i++, sfep = xfs_dir2_sf_nextentry(sfp, sfep), + oldsfep = xfs_dir2_sf_nextentry(oldsfp, oldsfep)) { sfep->namelen = oldsfep->namelen; sfep->offset = oldsfep->offset; memcpy(sfep->name, oldsfep->name, sfep->namelen); - ino = XFS_DIR2_SF_GET_INUMBER(oldsfp, - XFS_DIR2_SF_INUMBERP(oldsfep)); - XFS_DIR2_SF_PUT_INUMBER(sfp, &ino, XFS_DIR2_SF_INUMBERP(sfep)); + ino = xfs_dir2_sf_get_inumber(oldsfp, + xfs_dir2_sf_inumberp(oldsfep)); + xfs_dir2_sf_put_inumber(sfp, &ino, xfs_dir2_sf_inumberp(sfep)); } /* * Clean up the inode. 
Index: linux-2.6/fs/xfs/xfs_dir2_sf.h =================================================================== --- linux-2.6.orig/fs/xfs/xfs_dir2_sf.h 2007-04-28 09:37:26.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_dir2_sf.h 2007-05-12 13:38:34.000000000 +0200 @@ -90,7 +90,6 @@ typedef struct xfs_dir2_sf { xfs_dir2_sf_entry_t list[1]; /* shortform entries */ } xfs_dir2_sf_t; -#define XFS_DIR2_SF_HDR_SIZE(i8count) xfs_dir2_sf_hdr_size(i8count) static inline int xfs_dir2_sf_hdr_size(int i8count) { return ((uint)sizeof(xfs_dir2_sf_hdr_t) - \ @@ -98,14 +97,11 @@ static inline int xfs_dir2_sf_hdr_size(i ((uint)sizeof(xfs_dir2_ino8_t) - (uint)sizeof(xfs_dir2_ino4_t))); } -#define XFS_DIR2_SF_INUMBERP(sfep) xfs_dir2_sf_inumberp(sfep) static inline xfs_dir2_inou_t *xfs_dir2_sf_inumberp(xfs_dir2_sf_entry_t *sfep) { return (xfs_dir2_inou_t *)&(sfep)->name[(sfep)->namelen]; } -#define XFS_DIR2_SF_GET_INUMBER(sfp, from) \ - xfs_dir2_sf_get_inumber(sfp, from) static inline xfs_intino_t xfs_dir2_sf_get_inumber(xfs_dir2_sf_t *sfp, xfs_dir2_inou_t *from) { @@ -114,8 +110,6 @@ xfs_dir2_sf_get_inumber(xfs_dir2_sf_t *s (xfs_intino_t)XFS_GET_DIR_INO8((from)->i8)); } -#define XFS_DIR2_SF_PUT_INUMBER(sfp,from,to) \ - xfs_dir2_sf_put_inumber(sfp,from,to) static inline void xfs_dir2_sf_put_inumber(xfs_dir2_sf_t *sfp, xfs_ino_t *from, xfs_dir2_inou_t *to) { @@ -125,24 +119,18 @@ static inline void xfs_dir2_sf_put_inumb XFS_PUT_DIR_INO8(*(from), (to)->i8); } -#define XFS_DIR2_SF_GET_OFFSET(sfep) \ - xfs_dir2_sf_get_offset(sfep) static inline xfs_dir2_data_aoff_t xfs_dir2_sf_get_offset(xfs_dir2_sf_entry_t *sfep) { return INT_GET_UNALIGNED_16_BE(&(sfep)->offset.i); } -#define XFS_DIR2_SF_PUT_OFFSET(sfep,off) \ - xfs_dir2_sf_put_offset(sfep,off) static inline void xfs_dir2_sf_put_offset(xfs_dir2_sf_entry_t *sfep, xfs_dir2_data_aoff_t off) { INT_SET_UNALIGNED_16_BE(&(sfep)->offset.i, off); } -#define XFS_DIR2_SF_ENTSIZE_BYNAME(sfp,len) \ - xfs_dir2_sf_entsize_byname(sfp,len) static inline int 
xfs_dir2_sf_entsize_byname(xfs_dir2_sf_t *sfp, int len) { return ((uint)sizeof(xfs_dir2_sf_entry_t) - 1 + (len) - \ @@ -150,8 +138,6 @@ static inline int xfs_dir2_sf_entsize_by ((uint)sizeof(xfs_dir2_ino8_t) - (uint)sizeof(xfs_dir2_ino4_t))); } -#define XFS_DIR2_SF_ENTSIZE_BYENTRY(sfp,sfep) \ - xfs_dir2_sf_entsize_byentry(sfp,sfep) static inline int xfs_dir2_sf_entsize_byentry(xfs_dir2_sf_t *sfp, xfs_dir2_sf_entry_t *sfep) { @@ -160,19 +146,17 @@ xfs_dir2_sf_entsize_byentry(xfs_dir2_sf_ ((uint)sizeof(xfs_dir2_ino8_t) - (uint)sizeof(xfs_dir2_ino4_t))); } -#define XFS_DIR2_SF_FIRSTENTRY(sfp) xfs_dir2_sf_firstentry(sfp) static inline xfs_dir2_sf_entry_t *xfs_dir2_sf_firstentry(xfs_dir2_sf_t *sfp) { return ((xfs_dir2_sf_entry_t *) \ - ((char *)(sfp) + XFS_DIR2_SF_HDR_SIZE(sfp->hdr.i8count))); + ((char *)(sfp) + xfs_dir2_sf_hdr_size(sfp->hdr.i8count))); } -#define XFS_DIR2_SF_NEXTENTRY(sfp,sfep) xfs_dir2_sf_nextentry(sfp,sfep) static inline xfs_dir2_sf_entry_t * xfs_dir2_sf_nextentry(xfs_dir2_sf_t *sfp, xfs_dir2_sf_entry_t *sfep) { return ((xfs_dir2_sf_entry_t *) \ - ((char *)(sfep) + XFS_DIR2_SF_ENTSIZE_BYENTRY(sfp,sfep))); + ((char *)(sfep) + xfs_dir2_sf_entsize_byentry(sfp,sfep))); } /* From owner-xfs@oss.sgi.com Mon Jun 4 07:40:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 07:40:08 -0700 (PDT) Received: from mail.lst.de (verein.lst.de [213.95.11.210]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54EdxWt019371 for ; Mon, 4 Jun 2007 07:40:00 -0700 Received: from verein.lst.de (localhost [127.0.0.1]) by mail.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id l54Edwo6009368 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 4 Jun 2007 16:39:58 +0200 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id l54EdwoT009366 for xfs@oss.sgi.com; Mon, 4 Jun 2007 16:39:58 +0200 Date: Mon, 4 Jun 2007 16:39:58 +0200 From: Christoph Hellwig To: xfs@oss.sgi.com Subject: [PATCH] use filldir 
internally
Message-ID: <20070604143958.GB9081@lst.de>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.3.28i
X-Scanned-By: MIMEDefang 2.39
X-archive-position: 11621
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: hch@lst.de
Precedence: bulk
X-list: xfs

Currently xfs has a rather complicated internal scheme to allow for different directory formats in IRIX. This patch rips all code related to this out and pushes usage of the Linux filldir callback into the low-level directory code. This does not make the code any less portable, because filldir can be used to create dirents of all possible variations (including the IRIX ones, as proved by the IRIX binary emulation code under arch/mips/). This patch gets rid of an unnecessary copy in the readdir path, about 250 lines of code, and one of the last two users of the uio structure.

Signed-off-by: Christoph Hellwig

Index: linux-2.6/fs/xfs/linux-2.6/xfs_file.c =================================================================== --- linux-2.6.orig/fs/xfs/linux-2.6/xfs_file.c 2007-05-19 00:22:40.000000000 +0200 +++ linux-2.6/fs/xfs/linux-2.6/xfs_file.c 2007-06-01 13:17:15.000000000 +0200 @@ -267,74 +267,29 @@ xfs_file_readdir( void *dirent, filldir_t filldir) { - int error = 0; - bhv_vnode_t *vp = vn_from_inode(filp->f_path.dentry->d_inode); - uio_t uio; - iovec_t iov; - int eof = 0; - caddr_t read_buf; - int namelen, size = 0; - size_t rlen = PAGE_CACHE_SIZE; - xfs_off_t start_offset, curr_offset; - xfs_dirent_t *dbp = NULL; - - /* Try fairly hard to get memory */ - do { - if ((read_buf = kmalloc(rlen, GFP_KERNEL))) - break; - rlen >>= 1; - } while (rlen >= 1024); - - if (read_buf == NULL) - return -ENOMEM; - - uio.uio_iov = &iov; - uio.uio_segflg = UIO_SYSSPACE; - curr_offset = filp->f_pos; - if (filp->f_pos != 0x7fffffff) - uio.uio_offset = filp->f_pos; - else - uio.uio_offset = 0xffffffff; - - while
(!eof) { - uio.uio_resid = iov.iov_len = rlen; - iov.iov_base = read_buf; - uio.uio_iovcnt = 1; - - start_offset = uio.uio_offset; - - error = bhv_vop_readdir(vp, &uio, NULL, &eof); - if ((uio.uio_offset == start_offset) || error) { - size = 0; - break; - } - - size = rlen - uio.uio_resid; - dbp = (xfs_dirent_t *)read_buf; - while (size > 0) { - namelen = strlen(dbp->d_name); - - if (filldir(dirent, dbp->d_name, namelen, - (loff_t) curr_offset & 0x7fffffff, - (ino_t) dbp->d_ino, - DT_UNKNOWN)) { - goto done; - } - size -= dbp->d_reclen; - curr_offset = (loff_t)dbp->d_off /* & 0x7fffffff */; - dbp = (xfs_dirent_t *)((char *)dbp + dbp->d_reclen); - } - } -done: - if (!error) { - if (size == 0) - filp->f_pos = uio.uio_offset & 0x7fffffff; - else if (dbp) - filp->f_pos = curr_offset; - } + struct inode *inode = filp->f_path.dentry->d_inode; + bhv_vnode_t *vp = vn_from_inode(inode); + int error; + size_t bufsize; + + /* + * The Linux API doesn't pass the total size of the buffer + * we read into down to the filesystem. With the filldir concept + * it's not needed for correct information, but the XFS dir2 leaf + * code wants an estimate of the buffer size to calculate its + * readahead window and size the buffers used for mapping to + * physical blocks. + * + * Try to give it an estimate that's good enough, maybe at some + * point we can change the ->readdir prototype to include the + * buffer size.
+ */ + bufsize = (size_t)min_t(loff_t, PAGE_SIZE, inode->i_size); - kfree(read_buf); - return -error; + error = bhv_vop_readdir(vp, dirent, bufsize, &filp->f_pos, filldir); + if (error) + return -error; + return 0; } STATIC int Index: linux-2.6/fs/xfs/linux-2.6/xfs_vnode.h =================================================================== --- linux-2.6.orig/fs/xfs/linux-2.6/xfs_vnode.h 2007-05-19 00:22:40.000000000 +0200 +++ linux-2.6/fs/xfs/linux-2.6/xfs_vnode.h 2007-06-01 13:17:15.000000000 +0200 @@ -167,8 +167,8 @@ typedef int (*vop_rename_t)(bhv_desc_t * typedef int (*vop_mkdir_t)(bhv_desc_t *, bhv_vname_t *, struct bhv_vattr *, bhv_vnode_t **, struct cred *); typedef int (*vop_rmdir_t)(bhv_desc_t *, bhv_vname_t *, struct cred *); -typedef int (*vop_readdir_t)(bhv_desc_t *, struct uio *, struct cred *, - int *); +typedef int (*vop_readdir_t)(bhv_desc_t *, void *dirent, size_t bufsize, + xfs_off_t *offset, filldir_t filldir); typedef int (*vop_symlink_t)(bhv_desc_t *, bhv_vname_t *, struct bhv_vattr*, char *, bhv_vnode_t **, struct cred *); typedef int (*vop_readlink_t)(bhv_desc_t *, struct uio *, int, @@ -278,8 +278,8 @@ typedef struct bhv_vnodeops { #define bhv_vop_mkdir(dp,d,vap,vpp,cr) \ VOP(vop_mkdir, dp)(VNHEAD(dp),d,vap,vpp,cr) #define bhv_vop_rmdir(dp,d,cr) VOP(vop_rmdir, dp)(VNHEAD(dp),d,cr) -#define bhv_vop_readdir(vp,uiop,cr,eofp) \ - VOP(vop_readdir, vp)(VNHEAD(vp),uiop,cr,eofp) +#define bhv_vop_readdir(vp,dirent,bufsize,offset,filldir) \ + VOP(vop_readdir, vp)(VNHEAD(vp),dirent,bufsize,offset,filldir) #define bhv_vop_symlink(dvp,d,vap,tnm,vpp,cr) \ VOP(vop_symlink, dvp)(VNHEAD(dvp),d,vap,tnm,vpp,cr) #define bhv_vop_readlink(vp,uiop,fl,cr) \ Index: linux-2.6/fs/xfs/xfs_vnodeops.c =================================================================== --- linux-2.6.orig/fs/xfs/xfs_vnodeops.c 2007-05-21 16:15:11.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_vnodeops.c 2007-06-01 13:17:15.000000000 +0200 @@ -3247,37 +3247,6 @@ xfs_rmdir( goto std_return; } - 
-/* - * Read dp's entries starting at uiop->uio_offset and translate them into - * bufsize bytes worth of struct dirents starting at bufbase. - */ -STATIC int -xfs_readdir( - bhv_desc_t *dir_bdp, - uio_t *uiop, - cred_t *credp, - int *eofp) -{ - xfs_inode_t *dp; - xfs_trans_t *tp = NULL; - int error = 0; - uint lock_mode; - - vn_trace_entry(BHV_TO_VNODE(dir_bdp), __FUNCTION__, - (inst_t *)__return_address); - dp = XFS_BHVTOI(dir_bdp); - - if (XFS_FORCED_SHUTDOWN(dp->i_mount)) - return XFS_ERROR(EIO); - - lock_mode = xfs_ilock_map_shared(dp); - error = xfs_dir_getdents(tp, dp, uiop, eofp); - xfs_iunlock_map_shared(dp, lock_mode); - return error; -} - - STATIC int xfs_symlink( bhv_desc_t *dir_bdp, Index: linux-2.6/fs/xfs/xfs_dir2.c =================================================================== --- linux-2.6.orig/fs/xfs/xfs_dir2.c 2007-06-01 13:17:03.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_dir2.c 2007-06-01 13:17:15.000000000 +0200 @@ -43,8 +43,6 @@ #include "xfs_dir2_trace.h" #include "xfs_error.h" -static int xfs_dir2_put_dirent64_direct(xfs_dir2_put_args_t *pa); -static int xfs_dir2_put_dirent64_uio(xfs_dir2_put_args_t *pa); void xfs_dir_mount( @@ -293,47 +291,35 @@ xfs_dir_removename( * Read a directory. 
*/ int -xfs_dir_getdents( - xfs_trans_t *tp, - xfs_inode_t *dp, - uio_t *uio, /* caller's buffer control */ - int *eofp) /* out: eof reached */ +xfs_readdir( + bhv_desc_t *dir_bdp, + void *dirent, + size_t bufsize, + xfs_off_t *offset, + filldir_t filldir) { - int alignment; /* alignment required for ABI */ - xfs_dirent_t *dbp; /* malloc'ed buffer */ - xfs_dir2_put_t put; /* entry formatting routine */ + xfs_inode_t *dp = XFS_BHVTOI(dir_bdp); int rval; /* return value */ int v; /* type-checking value */ + vn_trace_entry(BHV_TO_VNODE(dir_bdp), __FUNCTION__, + (inst_t *)__return_address); + + if (XFS_FORCED_SHUTDOWN(dp->i_mount)) + return XFS_ERROR(EIO); + ASSERT((dp->i_d.di_mode & S_IFMT) == S_IFDIR); XFS_STATS_INC(xs_dir_getdents); - /* - * If our caller has given us a single contiguous aligned memory buffer, - * just work directly within that buffer. If it's in user memory, - * lock it down first. - */ - alignment = sizeof(xfs_off_t) - 1; - if ((uio->uio_iovcnt == 1) && - (((__psint_t)uio->uio_iov[0].iov_base & alignment) == 0) && - ((uio->uio_iov[0].iov_len & alignment) == 0)) { - dbp = NULL; - put = xfs_dir2_put_dirent64_direct; - } else { - dbp = kmem_alloc(sizeof(*dbp) + MAXNAMELEN, KM_SLEEP); - put = xfs_dir2_put_dirent64_uio; - } - *eofp = 0; if (dp->i_d.di_format == XFS_DINODE_FMT_LOCAL) - rval = xfs_dir2_sf_getdents(dp, uio, eofp, dbp, put); - else if ((rval = xfs_dir2_isblock(tp, dp, &v))) + rval = xfs_dir2_sf_getdents(dp, dirent, offset, filldir); + else if ((rval = xfs_dir2_isblock(NULL, dp, &v))) ; else if (v) - rval = xfs_dir2_block_getdents(tp, dp, uio, eofp, dbp, put); + rval = xfs_dir2_block_getdents(dp, dirent, offset, filldir); else - rval = xfs_dir2_leaf_getdents(tp, dp, uio, eofp, dbp, put); - if (dbp != NULL) - kmem_free(dbp, sizeof(*dbp) + MAXNAMELEN); + rval = xfs_dir2_leaf_getdents(dp, dirent, bufsize, offset, + filldir); return rval; } @@ -613,77 +599,6 @@ xfs_dir2_isleaf( } /* - * Getdents put routine for 64-bit ABI, direct form. 
- */ -static int -xfs_dir2_put_dirent64_direct( - xfs_dir2_put_args_t *pa) -{ - xfs_dirent_t *idbp; /* dirent pointer */ - iovec_t *iovp; /* io vector */ - int namelen; /* entry name length */ - int reclen; /* entry total length */ - uio_t *uio; /* I/O control */ - - namelen = pa->namelen; - reclen = DIRENTSIZE(namelen); - uio = pa->uio; - /* - * Won't fit in the remaining space. - */ - if (reclen > uio->uio_resid) { - pa->done = 0; - return 0; - } - iovp = uio->uio_iov; - idbp = (xfs_dirent_t *)iovp->iov_base; - iovp->iov_base = (char *)idbp + reclen; - iovp->iov_len -= reclen; - uio->uio_resid -= reclen; - idbp->d_reclen = reclen; - idbp->d_ino = pa->ino; - idbp->d_off = pa->cook; - idbp->d_name[namelen] = '\0'; - pa->done = 1; - memcpy(idbp->d_name, pa->name, namelen); - return 0; -} - -/* - * Getdents put routine for 64-bit ABI, uio form. - */ -static int -xfs_dir2_put_dirent64_uio( - xfs_dir2_put_args_t *pa) -{ - xfs_dirent_t *idbp; /* dirent pointer */ - int namelen; /* entry name length */ - int reclen; /* entry total length */ - int rval; /* return value */ - uio_t *uio; /* I/O control */ - - namelen = pa->namelen; - reclen = DIRENTSIZE(namelen); - uio = pa->uio; - /* - * Won't fit in the remaining space. - */ - if (reclen > uio->uio_resid) { - pa->done = 0; - return 0; - } - idbp = pa->dbp; - idbp->d_reclen = reclen; - idbp->d_ino = pa->ino; - idbp->d_off = pa->cook; - idbp->d_name[namelen] = '\0'; - memcpy(idbp->d_name, pa->name, namelen); - rval = xfs_uio_read((caddr_t)idbp, reclen, uio); - pa->done = (rval == 0); - return rval; -} - -/* * Remove the given block from the directory. * This routine is used for data and free blocks, leaf/node are done * by xfs_da_shrink_inode. 
Index: linux-2.6/fs/xfs/xfs_dir2.h =================================================================== --- linux-2.6.orig/fs/xfs/xfs_dir2.h 2007-05-12 15:16:05.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_dir2.h 2007-06-01 13:17:15.000000000 +0200 @@ -60,21 +60,6 @@ typedef __uint32_t xfs_dir2_db_t; typedef xfs_off_t xfs_dir2_off_t; /* - * For getdents, argument struct for put routines. - */ -typedef int (*xfs_dir2_put_t)(struct xfs_dir2_put_args *pa); -typedef struct xfs_dir2_put_args { - xfs_off_t cook; /* cookie of (next) entry */ - xfs_intino_t ino; /* inode number */ - xfs_dirent_t *dbp; /* buffer pointer */ - char *name; /* directory entry name */ - int namelen; /* length of name */ - int done; /* output: set if value was stored */ - xfs_dir2_put_t put; /* put function ptr (i/o) */ - struct uio *uio; /* uio control structure */ -} xfs_dir2_put_args_t; - -/* * Generic directory interface routines */ extern void xfs_dir_startup(void); @@ -92,8 +77,6 @@ extern int xfs_dir_removename(struct xfs char *name, int namelen, xfs_ino_t ino, xfs_fsblock_t *first, struct xfs_bmap_free *flist, xfs_extlen_t tot); -extern int xfs_dir_getdents(struct xfs_trans *tp, struct xfs_inode *dp, - uio_t *uio, int *eofp); extern int xfs_dir_replace(struct xfs_trans *tp, struct xfs_inode *dp, char *name, int namelen, xfs_ino_t inum, xfs_fsblock_t *first, @@ -101,6 +84,8 @@ extern int xfs_dir_replace(struct xfs_tr extern int xfs_dir_canenter(struct xfs_trans *tp, struct xfs_inode *dp, char *name, int namelen); extern int xfs_dir_ino_validate(struct xfs_mount *mp, xfs_ino_t ino); +extern int xfs_readdir(bhv_desc_t *dir_bdp, void *dirent, size_t bufsize, + xfs_off_t *offset, filldir_t filldir); /* * Utility routines for v2 directories. 
Index: linux-2.6/fs/xfs/xfs_dir2_block.c =================================================================== --- linux-2.6.orig/fs/xfs/xfs_dir2_block.c 2007-06-01 13:17:03.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_dir2_block.c 2007-06-01 13:17:15.000000000 +0200 @@ -432,12 +432,10 @@ xfs_dir2_block_addname( */ int /* error */ xfs_dir2_block_getdents( - xfs_trans_t *tp, /* transaction (NULL) */ xfs_inode_t *dp, /* incore inode */ - uio_t *uio, /* caller's buffer control */ - int *eofp, /* eof reached? (out) */ - xfs_dirent_t *dbp, /* caller's buffer */ - xfs_dir2_put_t put) /* abi's formatting function */ + void *dirent, + xfs_off_t *offset, + filldir_t filldir) { xfs_dir2_block_t *block; /* directory block structure */ xfs_dabuf_t *bp; /* buffer for block */ @@ -447,31 +445,32 @@ xfs_dir2_block_getdents( char *endptr; /* end of the data entries */ int error; /* error return value */ xfs_mount_t *mp; /* filesystem mount point */ - xfs_dir2_put_args_t p; /* arg package for put rtn */ char *ptr; /* current data entry */ int wantoff; /* starting block offset */ + xfs_ino_t ino; + xfs_off_t cook; mp = dp->i_mount; /* * If the block number in the offset is out of range, we're done. */ - if (xfs_dir2_dataptr_to_db(mp, uio->uio_offset) > mp->m_dirdatablk) { - *eofp = 1; + if (xfs_dir2_dataptr_to_db(mp, *offset) > mp->m_dirdatablk) { return 0; } /* * Can't read the block, give up, else get dabuf in bp. */ - if ((error = - xfs_da_read_buf(tp, dp, mp->m_dirdatablk, -1, &bp, XFS_DATA_FORK))) { + error = xfs_da_read_buf(NULL, dp, mp->m_dirdatablk, -1, + &bp, XFS_DATA_FORK); + if (error) return error; - } + ASSERT(bp != NULL); /* * Extract the byte offset we start at from the seek pointer. * We'll skip entries before this. 
*/ - wantoff = xfs_dir2_dataptr_to_off(mp, uio->uio_offset); + wantoff = xfs_dir2_dataptr_to_off(mp, *offset); block = bp->data; xfs_dir2_data_check(dp, bp); /* @@ -480,9 +479,7 @@ xfs_dir2_block_getdents( btp = xfs_dir2_block_tail_p(mp, block); ptr = (char *)block->u; endptr = (char *)xfs_dir2_block_leaf_p(btp); - p.dbp = dbp; - p.put = put; - p.uio = uio; + /* * Loop over the data portion of the block. * Each object is a real entry (dep) or an unused one (dup). @@ -508,33 +505,24 @@ xfs_dir2_block_getdents( */ if ((char *)dep - (char *)block < wantoff) continue; - /* - * Set up argument structure for put routine. - */ - p.namelen = dep->namelen; - p.cook = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, + cook = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, ptr - (char *)block); - p.ino = be64_to_cpu(dep->inumber); + ino = be64_to_cpu(dep->inumber); #if XFS_BIG_INUMS - p.ino += mp->m_inoadd; + ino += mp->m_inoadd; #endif - p.name = (char *)dep->name; - - /* - * Put the entry in the caller's buffer. - */ - error = p.put(&p); /* * If it didn't fit, set the final offset to here & return. */ - if (!p.done) { - uio->uio_offset = - xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, + if (filldir(dirent, dep->name, dep->namelen, cook, + ino, DT_UNKNOWN)) { + *offset = xfs_dir2_db_off_to_dataptr(mp, + mp->m_dirdatablk, (char *)dep - (char *)block); - xfs_da_brelse(tp, bp); - return error; + xfs_da_brelse(NULL, bp); + return 0; } } @@ -542,13 +530,8 @@ xfs_dir2_block_getdents( * Reached the end of the block. * Set the offset to a non-existent block 1 and return. 
*/ - *eofp = 1; - - uio->uio_offset = - xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk + 1, 0); - - xfs_da_brelse(tp, bp); - + *offset = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk + 1, 0); + xfs_da_brelse(NULL, bp); return 0; } Index: linux-2.6/fs/xfs/xfs_dir2_block.h =================================================================== --- linux-2.6.orig/fs/xfs/xfs_dir2_block.h 2007-06-01 13:17:03.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_dir2_block.h 2007-06-01 13:17:15.000000000 +0200 @@ -80,9 +80,8 @@ xfs_dir2_block_leaf_p(xfs_dir2_block_tai * Function declarations. */ extern int xfs_dir2_block_addname(struct xfs_da_args *args); -extern int xfs_dir2_block_getdents(struct xfs_trans *tp, struct xfs_inode *dp, - struct uio *uio, int *eofp, - struct xfs_dirent *dbp, xfs_dir2_put_t put); +extern int xfs_dir2_block_getdents(struct xfs_inode *dp, void *dirent, + xfs_off_t *offset, filldir_t filldir); extern int xfs_dir2_block_lookup(struct xfs_da_args *args); extern int xfs_dir2_block_removename(struct xfs_da_args *args); extern int xfs_dir2_block_replace(struct xfs_da_args *args); Index: linux-2.6/fs/xfs/xfs_dir2_leaf.c =================================================================== --- linux-2.6.orig/fs/xfs/xfs_dir2_leaf.c 2007-06-01 13:17:03.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_dir2_leaf.c 2007-06-01 13:17:15.000000000 +0200 @@ -749,12 +749,11 @@ xfs_dir2_leaf_compact_x1( */ int /* error */ xfs_dir2_leaf_getdents( - xfs_trans_t *tp, /* transaction pointer */ xfs_inode_t *dp, /* incore directory inode */ - uio_t *uio, /* I/O control & vectors */ - int *eofp, /* out: reached end of dir */ - xfs_dirent_t *dbp, /* caller's buffer */ - xfs_dir2_put_t put) /* ABI formatting routine */ + void *dirent, + size_t bufsize, + xfs_off_t *offset, + filldir_t filldir) { xfs_dabuf_t *bp; /* data block buffer */ int byteoff; /* offset in current block */ @@ -763,7 +762,6 @@ xfs_dir2_leaf_getdents( xfs_dir2_data_t *data; /* data block structure */ xfs_dir2_data_entry_t 
*dep; /* data entry */ xfs_dir2_data_unused_t *dup; /* unused entry */ - int eof; /* reached end of directory */ int error = 0; /* error return value */ int i; /* temporary loop index */ int j; /* temporary loop index */ @@ -776,46 +774,38 @@ xfs_dir2_leaf_getdents( xfs_mount_t *mp; /* filesystem mount point */ xfs_dir2_off_t newoff; /* new curoff after new blk */ int nmap; /* mappings to ask xfs_bmapi */ - xfs_dir2_put_args_t *p; /* formatting arg bundle */ char *ptr = NULL; /* pointer to current data */ int ra_current; /* number of read-ahead blks */ int ra_index; /* *map index for read-ahead */ int ra_offset; /* map entry offset for ra */ int ra_want; /* readahead count wanted */ + xfs_ino_t ino; /* * If the offset is at or past the largest allowed value, - * give up right away, return eof. + * give up right away. */ - if (uio->uio_offset >= XFS_DIR2_MAX_DATAPTR) { - *eofp = 1; + if (*offset >= XFS_DIR2_MAX_DATAPTR) return 0; - } + mp = dp->i_mount; - /* - * Setup formatting arguments. - */ - p = kmem_alloc(sizeof(*p), KM_SLEEP); - p->dbp = dbp; - p->put = put; - p->uio = uio; + /* * Set up to bmap a number of blocks based on the caller's * buffer size, the directory block size, and the filesystem * block size. */ - map_size = - howmany(uio->uio_resid + mp->m_dirblksize, - mp->m_sb.sb_blocksize); + map_size = howmany(bufsize + mp->m_dirblksize, mp->m_sb.sb_blocksize); map = kmem_alloc(map_size * sizeof(*map), KM_SLEEP); map_valid = ra_index = ra_offset = ra_current = map_blocks = 0; bp = NULL; - eof = 1; + /* * Inside the loop we keep the main offset value as a byte offset * in the directory file. */ - curoff = xfs_dir2_dataptr_to_byte(mp, uio->uio_offset); + curoff = xfs_dir2_dataptr_to_byte(mp, *offset); + /* * Force this conversion through db so we truncate the offset * down to get the start of the data block. @@ -836,7 +826,7 @@ xfs_dir2_leaf_getdents( * take it out of the mapping. 
*/ if (bp) { - xfs_da_brelse(tp, bp); + xfs_da_brelse(NULL, bp); bp = NULL; map_blocks -= mp->m_dirblkfsbs; /* @@ -862,8 +852,9 @@ xfs_dir2_leaf_getdents( /* * Recalculate the readahead blocks wanted. */ - ra_want = howmany(uio->uio_resid + mp->m_dirblksize, + ra_want = howmany(bufsize + mp->m_dirblksize, mp->m_sb.sb_blocksize) - 1; + /* * If we don't have as many as we want, and we haven't * run out of data blocks, get some more mappings. @@ -876,7 +867,7 @@ xfs_dir2_leaf_getdents( * we already have in the table. */ nmap = map_size - map_valid; - error = xfs_bmapi(tp, dp, + error = xfs_bmapi(NULL, dp, map_off, xfs_dir2_byte_to_da(mp, XFS_DIR2_LEAF_OFFSET) - map_off, @@ -939,7 +930,7 @@ xfs_dir2_leaf_getdents( * mapping. */ curdb = xfs_dir2_da_to_db(mp, map->br_startoff); - error = xfs_da_read_buf(tp, dp, map->br_startoff, + error = xfs_da_read_buf(NULL, dp, map->br_startoff, map->br_blockcount >= mp->m_dirblkfsbs ? XFS_FSB_TO_DADDR(mp, map->br_startblock) : -1, @@ -982,7 +973,7 @@ xfs_dir2_leaf_getdents( * is a very rare case. */ else if (i > ra_current) { - (void)xfs_da_reada_buf(tp, dp, + (void)xfs_da_reada_buf(NULL, dp, map[ra_index].br_startoff + ra_offset, XFS_DATA_FORK); ra_current = i; @@ -1089,46 +1080,39 @@ xfs_dir2_leaf_getdents( */ dep = (xfs_dir2_data_entry_t *)ptr; - p->namelen = dep->namelen; - - length = xfs_dir2_data_entsize(p->namelen); - - p->cook = xfs_dir2_byte_to_dataptr(mp, curoff + length); + length = xfs_dir2_data_entsize(dep->namelen); - p->ino = be64_to_cpu(dep->inumber); + ino = be64_to_cpu(dep->inumber); #if XFS_BIG_INUMS - p->ino += mp->m_inoadd; + ino += mp->m_inoadd; #endif - p->name = (char *)dep->name; - - error = p->put(p); /* * Won't fit. Return to caller. */ - if (!p->done) { - eof = 0; + if (filldir(dirent, dep->name, dep->namelen, + xfs_dir2_byte_to_dataptr(mp, curoff + length), + ino, DT_UNKNOWN)) break; - } + /* * Advance to next entry in the block. */ ptr += length; curoff += length; + bufsize -= length; } /* * All done. 
Set output offset value to current offset. */ - *eofp = eof; if (curoff > xfs_dir2_dataptr_to_byte(mp, XFS_DIR2_MAX_DATAPTR)) - uio->uio_offset = XFS_DIR2_MAX_DATAPTR; + *offset = XFS_DIR2_MAX_DATAPTR; else - uio->uio_offset = xfs_dir2_byte_to_dataptr(mp, curoff); + *offset = xfs_dir2_byte_to_dataptr(mp, curoff); kmem_free(map, map_size * sizeof(*map)); - kmem_free(p, sizeof(*p)); if (bp) - xfs_da_brelse(tp, bp); + xfs_da_brelse(NULL, bp); return error; } Index: linux-2.6/fs/xfs/xfs_dir2_leaf.h =================================================================== --- linux-2.6.orig/fs/xfs/xfs_dir2_leaf.h 2007-06-01 13:17:03.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_dir2_leaf.h 2007-06-01 13:17:15.000000000 +0200 @@ -232,9 +232,9 @@ extern void xfs_dir2_leaf_compact(struct extern void xfs_dir2_leaf_compact_x1(struct xfs_dabuf *bp, int *indexp, int *lowstalep, int *highstalep, int *lowlogp, int *highlogp); -extern int xfs_dir2_leaf_getdents(struct xfs_trans *tp, struct xfs_inode *dp, - struct uio *uio, int *eofp, - struct xfs_dirent *dbp, xfs_dir2_put_t put); +extern int xfs_dir2_leaf_getdents(struct xfs_inode *dp, void *dirent, + size_t bufsize, xfs_off_t *offset, + filldir_t filldir); extern int xfs_dir2_leaf_init(struct xfs_da_args *args, xfs_dir2_db_t bno, struct xfs_dabuf **bpp, int magic); extern void xfs_dir2_leaf_log_ents(struct xfs_trans *tp, struct xfs_dabuf *bp, Index: linux-2.6/fs/xfs/xfs_dir2_sf.c =================================================================== --- linux-2.6.orig/fs/xfs/xfs_dir2_sf.c 2007-06-01 13:17:03.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_dir2_sf.c 2007-06-01 13:17:15.000000000 +0200 @@ -695,19 +695,18 @@ xfs_dir2_sf_create( int /* error */ xfs_dir2_sf_getdents( xfs_inode_t *dp, /* incore directory inode */ - uio_t *uio, /* caller's buffer control */ - int *eofp, /* eof reached? 
(out) */ - xfs_dirent_t *dbp, /* caller's buffer */ - xfs_dir2_put_t put) /* abi's formatting function */ + void *dirent, + xfs_off_t *offset, + filldir_t filldir) { - int error; /* error return value */ int i; /* shortform entry number */ xfs_mount_t *mp; /* filesystem mount point */ xfs_dir2_dataptr_t off; /* current entry's offset */ - xfs_dir2_put_args_t p; /* arg package for put rtn */ xfs_dir2_sf_entry_t *sfep; /* shortform directory entry */ xfs_dir2_sf_t *sfp; /* shortform structure */ - xfs_off_t dir_offset; + xfs_dir2_dataptr_t dot_offset; + xfs_dir2_dataptr_t dotdot_offset; + xfs_ino_t ino; mp = dp->i_mount; @@ -720,8 +719,6 @@ xfs_dir2_sf_getdents( return XFS_ERROR(EIO); } - dir_offset = uio->uio_offset; - ASSERT(dp->i_df.if_bytes == dp->i_d.di_size); ASSERT(dp->i_df.if_u1.if_data != NULL); @@ -732,108 +729,78 @@ xfs_dir2_sf_getdents( /* * If the block number in the offset is out of range, we're done. */ - if (xfs_dir2_dataptr_to_db(mp, dir_offset) > mp->m_dirdatablk) { - *eofp = 1; + if (xfs_dir2_dataptr_to_db(mp, *offset) > mp->m_dirdatablk) return 0; - } /* - * Set up putargs structure. - */ - p.dbp = dbp; - p.put = put; - p.uio = uio; + * Precalculate offsets for . and .. as we will always need them. + * + * XXX(hch): the second argument is sometimes 0 and sometimes + * mp->m_dirdatablk. + */ + dot_offset = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, + XFS_DIR2_DATA_DOT_OFFSET); + dotdot_offset = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, + XFS_DIR2_DATA_DOTDOT_OFFSET); + /* * Put . entry unless we're starting past it. 
*/ - if (dir_offset <= - xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, - XFS_DIR2_DATA_DOT_OFFSET)) { - p.cook = xfs_dir2_db_off_to_dataptr(mp, 0, - XFS_DIR2_DATA_DOTDOT_OFFSET); - p.ino = dp->i_ino; + if (*offset <= dot_offset) { + ino = dp->i_ino; #if XFS_BIG_INUMS - p.ino += mp->m_inoadd; + ino += mp->m_inoadd; #endif - p.name = "."; - p.namelen = 1; - - error = p.put(&p); - - if (!p.done) { - uio->uio_offset = - xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, - XFS_DIR2_DATA_DOT_OFFSET); - return error; + if (filldir(dirent, ".", 1, dotdot_offset, ino, DT_DIR)) { + *offset = dot_offset; + return 0; } } /* * Put .. entry unless we're starting past it. */ - if (dir_offset <= - xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, - XFS_DIR2_DATA_DOTDOT_OFFSET)) { - p.cook = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, - XFS_DIR2_DATA_FIRST_OFFSET); - p.ino = xfs_dir2_sf_get_inumber(sfp, &sfp->hdr.parent); + if (*offset <= dotdot_offset) { + off = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, + XFS_DIR2_DATA_FIRST_OFFSET); + ino = xfs_dir2_sf_get_inumber(sfp, &sfp->hdr.parent); #if XFS_BIG_INUMS - p.ino += mp->m_inoadd; + ino += mp->m_inoadd; #endif - p.name = ".."; - p.namelen = 2; - - error = p.put(&p); - - if (!p.done) { - uio->uio_offset = - xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, - XFS_DIR2_DATA_DOTDOT_OFFSET); - return error; + if (filldir(dirent, "..", 2, off, ino, DT_DIR)) { + *offset = dotdot_offset; + return 0; } } /* * Loop while there are more entries and put'ing works. 
*/ - for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp); - i < sfp->hdr.count; - i++, sfep = xfs_dir2_sf_nextentry(sfp, sfep)) { - + sfep = xfs_dir2_sf_firstentry(sfp); + for (i = 0; i < sfp->hdr.count; i++) { off = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, xfs_dir2_sf_get_offset(sfep)); - if (dir_offset > off) + if (*offset > off) { + sfep = xfs_dir2_sf_nextentry(sfp, sfep); continue; + } - p.namelen = sfep->namelen; - - p.cook = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, - xfs_dir2_sf_get_offset(sfep) + - xfs_dir2_data_entsize(p.namelen)); - - p.ino = xfs_dir2_sf_get_inumber(sfp, xfs_dir2_sf_inumberp(sfep)); + ino = xfs_dir2_sf_get_inumber(sfp, xfs_dir2_sf_inumberp(sfep)); #if XFS_BIG_INUMS - p.ino += mp->m_inoadd; + ino += mp->m_inoadd; #endif - p.name = (char *)sfep->name; - error = p.put(&p); - - if (!p.done) { - uio->uio_offset = off; - return error; + if (filldir(dirent, sfep->name, sfep->namelen, + off + xfs_dir2_data_entsize(sfep->namelen), + ino, DT_UNKNOWN)) { + *offset = off; + return 0; } + sfep = xfs_dir2_sf_nextentry(sfp, sfep); } - /* - * They all fit. 
- */ - *eofp = 1; - - uio->uio_offset = - xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk + 1, 0); - + *offset = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk + 1, 0); return 0; } Index: linux-2.6/fs/xfs/xfs_dir2_sf.h =================================================================== --- linux-2.6.orig/fs/xfs/xfs_dir2_sf.h 2007-06-01 13:17:03.000000000 +0200 +++ linux-2.6/fs/xfs/xfs_dir2_sf.h 2007-06-01 13:17:15.000000000 +0200 @@ -169,9 +169,8 @@ extern int xfs_dir2_block_to_sf(struct x int size, xfs_dir2_sf_hdr_t *sfhp); extern int xfs_dir2_sf_addname(struct xfs_da_args *args); extern int xfs_dir2_sf_create(struct xfs_da_args *args, xfs_ino_t pino); -extern int xfs_dir2_sf_getdents(struct xfs_inode *dp, struct uio *uio, - int *eofp, struct xfs_dirent *dbp, - xfs_dir2_put_t put); +extern int xfs_dir2_sf_getdents(struct xfs_inode *dp, void *dirent, + xfs_off_t *offset, filldir_t filldir); extern int xfs_dir2_sf_lookup(struct xfs_da_args *args); extern int xfs_dir2_sf_removename(struct xfs_da_args *args); extern int xfs_dir2_sf_replace(struct xfs_da_args *args); From owner-xfs@oss.sgi.com Mon Jun 4 07:44:45 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 07:44:47 -0700 (PDT) Received: from mail.lst.de (verein.lst.de [213.95.11.210]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54EihWt020893 for ; Mon, 4 Jun 2007 07:44:45 -0700 Received: from verein.lst.de (localhost [127.0.0.1]) by mail.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id l54Eifo6009791 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 4 Jun 2007 16:44:41 +0200 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id l54EifIt009789 for xfs@oss.sgi.com; Mon, 4 Jun 2007 16:44:41 +0200 Date: Mon, 4 Jun 2007 16:44:41 +0200 From: Christoph Hellwig To: xfs@oss.sgi.com Subject: state of the testsuite Message-ID: <20070604144441.GA9672@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: 
inline User-Agent: Mutt/1.3.28i X-Scanned-By: MIMEDefang 2.39 X-archive-position: 11622 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@lst.de Precedence: bulk X-list: xfs When running the testsuite on Debian -testing I constantly get the same 12 failing testcases (016 041 049 064 071 082 084 104 111 136 140 166). Is this expected or are other people seeing better results? From owner-xfs@oss.sgi.com Mon Jun 4 07:50:58 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 07:51:01 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54EosWt023117 for ; Mon, 4 Jun 2007 07:50:58 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1HvDtW-0007Kt-P8; Mon, 04 Jun 2007 15:50:54 +0100 Date: Mon, 4 Jun 2007 15:50:54 +0100 From: Christoph Hellwig To: David Chinner Cc: xfs-dev , xfs-oss Subject: Re: Review - writing to multiple non-contiguous unwritten extents within a page is broken. 
Message-ID: <20070604145054.GA28033@infradead.org> References: <20070523092103.GT85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070523092103.GT85884050@sgi.com> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 11623 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Wed, May 23, 2007 at 07:21:03PM +1000, David Chinner wrote: > Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_aops.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_aops.c 2007-05-23 16:33:04.000000000 +1000 > +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_aops.c 2007-05-23 17:52:15.540456674 +1000 > @@ -1008,6 +1008,8 @@ xfs_page_state_convert( > if (buffer_unwritten(bh) || buffer_delay(bh) || > ((buffer_uptodate(bh) || PageUptodate(page)) && > !buffer_mapped(bh) && (unmapped || startio))) { > + int new_ioend = 0; > + > /* > * Make sure we don't use a read-only iomap > */ > @@ -1026,6 +1028,15 @@ xfs_page_state_convert( > } > > if (!iomap_valid) { > + /* > + * if we didn't have a valid mapping then we > + * need to ensure that we put the new mapping > + * in a new ioend structure. This needs to be > + * done to ensure that the ioends correctly > + * reflect the block mappings at io completion > + * for unwritten extent conversion. > + */ > + new_ioend = 1; > if (type == IOMAP_NEW) { > size = xfs_probe_cluster(inode, > page, bh, head, 0); > @@ -1045,7 +1056,7 @@ xfs_page_state_convert( > if (startio) { > xfs_add_to_ioend(inode, bh, offset, > type, &ioend, > - !iomap_valid); > + new_ioend); Looks good. I'm pretty sure that we had something like a new_ioend variable in the initial versions of this code, but I don't know when and why we got rid of it. 
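The fix under review forces any buffer whose block mapping had to be freshly looked up into a new ioend, so that unwritten-extent conversion at I/O completion sees exactly one mapping per ioend. A toy userspace sketch of that grouping rule (hypothetical helper name, not the real xfs_add_to_ioend()):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of ioend grouping: each ioend should cover buffers that
 * share one block mapping.  A buffer that required a new mapping
 * lookup therefore always opens a new ioend -- which is what the
 * new_ioend flag in the patch enforces, instead of only starting a
 * new ioend when the previous iomap happened to be invalid. */
static int count_ioends(const int *needed_new_mapping, size_t nbufs)
{
	int ioends = 0;
	size_t i;

	for (i = 0; i < nbufs; i++) {
		/* first buffer, or a buffer with a fresh mapping,
		 * starts a new ioend */
		if (i == 0 || needed_new_mapping[i])
			ioends++;
	}
	return ioends;
}
```

Two non-contiguous unwritten extents within one page thus end up in separate ioends and are converted independently at completion.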
From owner-xfs@oss.sgi.com Mon Jun 4 07:53:00 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 07:53:03 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54EqwWt024112 for ; Mon, 4 Jun 2007 07:52:59 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1HvDvW-0007MS-Vp; Mon, 04 Jun 2007 15:52:58 +0100 Date: Mon, 4 Jun 2007 15:52:58 +0100 From: Christoph Hellwig To: Barry Naujok Cc: "xfs@oss.sgi.com" , xfs-dev Subject: Re: [PATCH 1/3] XFS metadump utility Message-ID: <20070604145258.GB28033@infradead.org> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 11624 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Mon, May 28, 2007 at 03:22:52PM +1000, Barry Naujok wrote: > Back in February, I posted a patch to xfs_db to capture metadata from a > filesystem into a file > http://oss.sgi.com/archives/xfs/2007-02/msg00072.html . > > I have now updated it with the following changes: > - obfuscates directory names and attribute names > - zeros attribute values > - better support of stdin/stdout for redirection. Is this the dpprintf to fprintf changes? These changes look like they really want to be in a separate patch with at least a sentence of an explanation. The newly added files look good to me from a quick glance. 
From owner-xfs@oss.sgi.com Mon Jun 4 07:55:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 07:55:17 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54Et9Wt024954 for ; Mon, 4 Jun 2007 07:55:11 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1HvDxd-0007ON-QA; Mon, 04 Jun 2007 15:55:09 +0100 Date: Mon, 4 Jun 2007 15:55:09 +0100 From: Christoph Hellwig To: Barry Naujok Cc: "xfs@oss.sgi.com" , xfs-dev Subject: Re: [PATCH 3/3] XFS metadump utility Message-ID: <20070604145509.GD28033@infradead.org> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 11626 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Mon, May 28, 2007 at 03:22:58PM +1000, Barry Naujok wrote: > xfs_metadump and xfs_mdrestore man pages. Looks fine. 
From owner-xfs@oss.sgi.com Mon Jun 4 07:54:53 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 07:54:55 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54EsqWt024842 for ; Mon, 4 Jun 2007 07:54:53 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1HvDxM-0007O9-5p; Mon, 04 Jun 2007 15:54:52 +0100 Date: Mon, 4 Jun 2007 15:54:52 +0100 From: Christoph Hellwig To: Barry Naujok Cc: "xfs@oss.sgi.com" , xfs-dev Subject: Re: [PATCH 2/3] XFS metadump utility Message-ID: <20070604145452.GC28033@infradead.org> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 11625 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Mon, May 28, 2007 at 03:22:55PM +1000, Barry Naujok wrote: > xfs_mdrestore Looks good from a quick glance. 
From owner-xfs@oss.sgi.com Mon Jun 4 07:56:19 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 07:56:22 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54EuIWt025824 for ; Mon, 4 Jun 2007 07:56:19 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1HvDyj-0007Pf-0k; Mon, 04 Jun 2007 15:56:17 +0100 Date: Mon, 4 Jun 2007 15:56:16 +0100 From: Christoph Hellwig To: Timothy Shimmin Cc: David Chinner , Michal Marek , xfs@oss.sgi.com Subject: Re: [patch 1/3] Fix XFS_IOC_FSGEOMETRY_V1 in compat mode Message-ID: <20070604145616.GA28425@infradead.org> References: <20070530125954.706423971@suse.cz> <20070530143043.216024061@suse.cz> <20070531023031.GH85884050@sgi.com> <649C7FF68B1450E03D544BD9@timothy-shimmins-power-mac-g5.local> <20070531132615.GO85884050@sgi.com> <0C1BF59AD81186689933280E@timothy-shimmins-power-mac-g5.local> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <0C1BF59AD81186689933280E@timothy-shimmins-power-mac-g5.local> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 11627 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Fri, Jun 01, 2007 at 02:39:48PM +1000, Timothy Shimmin wrote: > > > --On 31 May 2007 11:26:15 PM +1000 David Chinner wrote: > >>Who would want to use XFS_IOC_FSGEOMETRY_V1? > >> > >>Okay it turns out a whole bunch of our xfs-cmds :-) > >>(Such as xfsdump as Michal mentioned) > >>On Sep/2002, Nathan changed a bunch of them to use v1. > >> xfsprogs-2.3.0 (03 September 2002) > >> - Several changes to geometry ioctl callers which will make > >> the tools useable on older kernel versions too. 
> >>So he did this so that new tools would work on the older kernels which > >>didn't support the new geom version. > >>So I guess we are stuck with v1 now. > > > >Not necessarily - we could change the tools to use v4, and if that > >didn't exist, then try v1. That way we don't need to support v1 in > >linux, and the tools still run on old kernels..... > > > The problem with that is the old tools won't run on new kernels. > If you get a new kernel and use an old xfsdump then you are out of luck. > Not sure if we want to require people to bump up to new userspace for this. Yeah, we need to keep supporting this for a while. Fortunately this compat handler is rather trivial. From owner-xfs@oss.sgi.com Mon Jun 4 08:02:35 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 08:02:42 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l54F2XWt028532 for ; Mon, 4 Jun 2007 08:02:34 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id BAA18220; Tue, 5 Jun 2007 01:02:26 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l54F2OAf113076575; Tue, 5 Jun 2007 01:02:25 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l54F2M9w114109131; Tue, 5 Jun 2007 01:02:22 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 5 Jun 2007 01:02:22 +1000 From: David Chinner To: Christoph Hellwig Cc: xfs@oss.sgi.com Subject: Re: state of the testsuite Message-ID: <20070604150222.GL86004887@sgi.com> References: <20070604144441.GA9672@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604144441.GA9672@lst.de> User-Agent: 
Mutt/1.4.2.1i X-archive-position: 11628 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Mon, Jun 04, 2007 at 04:44:41PM +0200, Christoph Hellwig wrote: > When running the testsuite on Debian -testing I constantly get the > 12 failing testcases (016 041 049 064 071 082 084 104 111 136 140 166). > Is this expected or are other people seeing better results? What kernel+platform and what version of the tests? Prior to 2.6.22-rc2, 016 and 144 are the only ones I saw failing regularly. 166 will fail until we get ->page_mkwrite sorted out (but passes in my tree). With 2.6.22-rc2, all the loopback based tests are failing because someone broke the loopback device so lots of tests fail because of that. The other tests I haven't seen fail for some time... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 4 08:08:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 08:08:31 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54F8SWt030604 for ; Mon, 4 Jun 2007 08:08:29 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1HvEAW-0007Vn-HN; Mon, 04 Jun 2007 16:08:28 +0100 Date: Mon, 4 Jun 2007 16:08:28 +0100 From: Christoph Hellwig To: David Chinner Cc: xfs-dev , xfs-oss Subject: Re: Review: remount read-only path is as broken as freezing was.... 
Message-ID: <20070604150828.GB28425@infradead.org> References: <20070604051433.GP85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604051433.GP85884050@sgi.com> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 11629 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Mon, Jun 04, 2007 at 03:14:33PM +1000, David Chinner wrote: > > I recently had a remount,ro test fail in a way I had previously > only seen freezing fail. That is, it failed because we still > had active transactions after calling xfs_quiesce_fs(). Further > investigation shows that this path is broken in the same ways > that the xfs freeze path was broken (and recently fixed). Actually it became more broken due to the fix changing things a little. In general a mount -o ro should be similar to a quiesce in most ways, so the closer we can get them the better. The patch looks good to me. 
One thing is that SYNC_INODE_QUIESCE should probably get a more descriptive name and a comment explaining why it's needed for quiesce (we want inodes updated not only in the log) and why not for remount r/o (because no one cares for that in this case). From owner-xfs@oss.sgi.com Mon Jun 4 08:10:37 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 08:10:41 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54FAZWt031602 for ; Mon, 4 Jun 2007 08:10:36 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1HvECZ-0007Xm-75; Mon, 04 Jun 2007 16:10:35 +0100 Date: Mon, 4 Jun 2007 16:10:34 +0100 From: Christoph Hellwig To: David Chinner Cc: xfs-dev , xfs-oss Subject: Re: Review: factor extracting extent size hints from the inode Message-ID: <20070604151034.GC28425@infradead.org> References: <20070604052333.GR85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604052333.GR85884050@sgi.com> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 11630 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Mon, Jun 04, 2007 at 03:23:33PM +1000, David Chinner wrote: > Replace frequently repeated, open coded extraction of the > extent size hint from the xfs_inode with a single helper > function. Looks good, but I'd suggest not putting in the unlikelys. Realtime or alignment are perfectly normal codepaths and hardcoding them to be predicted not taken sounds like a bad idea. unlikely() should be limited to exceptional error-handling code. 
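A sketch of the point being made here: the kernel's likely()/unlikely() macros wrap GCC's __builtin_expect() to steer static branch prediction, so hinting a perfectly normal path (realtime files, aligned allocations) as "not taken" pessimizes it. The helper below is hypothetical, not the function from the patch under review; only the genuine error check carries a hint.

```c
#include <assert.h>

/* Kernel-style wrappers around GCC's static branch-prediction hint. */
#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)

/* Hypothetical extent-size-hint helper: the exceptional error path is
 * hinted, while the normal realtime-vs-data choice is left unhinted,
 * since either outcome is a legitimate, common code path. */
static int extsize_hint(int is_realtime, int error)
{
	if (unlikely(error))	/* exceptional: hint is appropriate */
		return -1;
	if (is_realtime)	/* normal path: no prediction hint */
		return 4096;
	return 512;
}
```

The hints change only the generated code layout, never the result, so the function behaves identically with or without them.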
From owner-xfs@oss.sgi.com Mon Jun 4 08:12:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 08:12:20 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l54FCFWt032306 for ; Mon, 4 Jun 2007 08:12:17 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id BAA18592; Tue, 5 Jun 2007 01:12:08 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l54FC6Af114148792; Tue, 5 Jun 2007 01:12:06 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l54FC445114060064; Tue, 5 Jun 2007 01:12:04 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 5 Jun 2007 01:12:04 +1000 From: David Chinner To: David Chinner Cc: Christoph Hellwig , xfs@oss.sgi.com Subject: Re: state of the testsuite Message-ID: <20070604151204.GM86004887@sgi.com> References: <20070604144441.GA9672@lst.de> <20070604150222.GL86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604150222.GL86004887@sgi.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 11631 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 05, 2007 at 01:02:22AM +1000, David Chinner wrote: > On Mon, Jun 04, 2007 at 04:44:41PM +0200, Christoph Hellwig wrote: > > When running the testsuite on Debian -testing I constantly get the > > 12 failing testcases (016 041 049 064 071 082 084 104 111 136 140 166). > > Is this expected or are other people seeing better results? > > What kernel+platform and what version of the tests? 
> > Prior to 2.6.22-rc2, 016 and 144 are the only ones I saw failing > regularly. 166 will fail until we get ->page_mkwrite > sorted out (but passes in my tree). Looks like 082 and 088 have been failing for a while in the automated QA as well, but there aren't many other regular failures.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 4 08:12:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 08:12:32 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54FCSWt032372 for ; Mon, 4 Jun 2007 08:12:29 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1HvEEO-0007Zu-Si; Mon, 04 Jun 2007 16:12:28 +0100 Date: Mon, 4 Jun 2007 16:12:28 +0100 From: Christoph Hellwig To: David Chinner Cc: xfs-dev , xfs-oss Subject: Re: Review: apply transaction deltas atomically to superblock Message-ID: <20070604151228.GD28425@infradead.org> References: <20070604080720.GV85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604080720.GV85884050@sgi.com> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 11632 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs Looks good and actually makes the code easier to read. 
From owner-xfs@oss.sgi.com Mon Jun 4 08:13:55 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 08:13:57 -0700 (PDT) Received: from mail.lst.de (verein.lst.de [213.95.11.210]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54FDqWt000912 for ; Mon, 4 Jun 2007 08:13:54 -0700 Received: from verein.lst.de (localhost [127.0.0.1]) by mail.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id l54FDpo6011593 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Mon, 4 Jun 2007 17:13:52 +0200 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id l54FDpcf011591; Mon, 4 Jun 2007 17:13:51 +0200 Date: Mon, 4 Jun 2007 17:13:51 +0200 From: Christoph Hellwig To: David Chinner Cc: Christoph Hellwig , xfs@oss.sgi.com Subject: Re: state of the testsuite Message-ID: <20070604151351.GA11536@lst.de> References: <20070604144441.GA9672@lst.de> <20070604150222.GL86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604150222.GL86004887@sgi.com> User-Agent: Mutt/1.3.28i X-Scanned-By: MIMEDefang 2.39 X-archive-position: 11633 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@lst.de Precedence: bulk X-list: xfs On Tue, Jun 05, 2007 at 01:02:22AM +1000, David Chinner wrote: > On Mon, Jun 04, 2007 at 04:44:41PM +0200, Christoph Hellwig wrote: > > When running the testsuite on Debian -testing I constantly get the > > 12 failing testcases (016 041 049 064 071 082 084 104 111 136 140 166). > > Is this expected or are other people seeing better results? > > What kernel+platform and what version of the tests? Current mainline on i386 (running in qemu) > With 2.6.22-rc2, all the loopback based tests are failing > because someone broke the loopback device so lots of tests > fail because of that. The other tests I haven't seen fail > for some time... Ok, that should account for the loop stuff. 
I already suspected Ken's unlimited number of loop devices patch breaks something. From owner-xfs@oss.sgi.com Mon Jun 4 10:21:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 10:21:44 -0700 (PDT) Received: from mail.lst.de (verein.lst.de [213.95.11.210]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l54HLdWt025752 for ; Mon, 4 Jun 2007 10:21:41 -0700 Received: from verein.lst.de (localhost [127.0.0.1]) by mail.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id l54HLdo6016961 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Mon, 4 Jun 2007 19:21:39 +0200 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id l54HLcGf016959; Mon, 4 Jun 2007 19:21:38 +0200 Date: Mon, 4 Jun 2007 19:21:38 +0200 From: Christoph Hellwig To: David Chinner Cc: Christoph Hellwig , xfs@oss.sgi.com Subject: Re: state of the testsuite Message-ID: <20070604172138.GA16879@lst.de> References: <20070604144441.GA9672@lst.de> <20070604150222.GL86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604150222.GL86004887@sgi.com> User-Agent: Mutt/1.3.28i X-Scanned-By: MIMEDefang 2.39 X-archive-position: 11634 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@lst.de Precedence: bulk X-list: xfs On Tue, Jun 05, 2007 at 01:02:22AM +1000, David Chinner wrote: > With 2.6.22-rc2, all the loopback based tests are failing > because someone broke the loopback device so lots of tests > fail because of that. The other tests I haven't seen fail > for some time... Backing out the dynamic loop device allocations gets me 049 back to pass. 
From owner-xfs@oss.sgi.com Mon Jun 4 17:43:32 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 17:43:35 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l550hTWt016204 for ; Mon, 4 Jun 2007 17:43:30 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA03908; Tue, 5 Jun 2007 10:43:22 +1000 Date: Tue, 05 Jun 2007 10:47:24 +1000 To: "Christoph Hellwig" Subject: Re: [PATCH 1/3] XFS metadump utility From: "Barry Naujok" Organization: SGI Cc: "xfs@oss.sgi.com" , xfs-dev Content-Type: text/plain; format=flowed; delsp=yes; charset=iso-8859-15 MIME-Version: 1.0 References: <20070604145258.GB28033@infradead.org> Content-Transfer-Encoding: 7bit Message-ID: In-Reply-To: <20070604145258.GB28033@infradead.org> User-Agent: Opera Mail/9.10 (Win32) X-archive-position: 11636 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs On Tue, 05 Jun 2007 00:52:58 +1000, Christoph Hellwig wrote: > On Mon, May 28, 2007 at 03:22:52PM +1000, Barry Naujok wrote: >> Back in February, I posted a patch to xfs_db to capture metadata from a >> filesystem into a file >> http://oss.sgi.com/archives/xfs/2007-02/msg00072.html . 
>> >> I have now updated it with the following changes: >> - obfuscates directory names and attribute names >> - zeros attribute values >> - better support of stdin/stdout for redirection. > > Is this the dbprintf to fprintf changes? These changes look like they > really want to be in a separate patch with at least a sentence of an > explanation. Nah, the fprintf instead of dbprintf was required so errors don't go to stdout when using stdout as a target. The better support is using fread/fwrite instead of read/write to a handle. > The newly added files look good to me from a quick glance. > From owner-xfs@oss.sgi.com Mon Jun 4 18:43:45 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 18:43:48 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l551hfWt032232 for ; Mon, 4 Jun 2007 18:43:43 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA05435; Tue, 5 Jun 2007 11:43:40 +1000 Date: Tue, 05 Jun 2007 11:46:59 +1000 To: "xfs@oss.sgi.com" , xfs-dev Subject: [REVIEW 1/3] - xfs_repair speedups (AG stride) From: "Barry Naujok" Organization: SGI Content-Type: multipart/mixed; boundary=----------snCyDrAzmtE7lnFw3tZOqj MIME-Version: 1.0 Message-ID: User-Agent: Opera Mail/9.10 (Win32) X-archive-position: 11637 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs ------------snCyDrAzmtE7lnFw3tZOqj Content-Type: text/plain; format=flowed; delsp=yes; charset=iso-8859-15 Content-Transfer-Encoding: Quoted-Printable As an ongoing series from previous xfs_repair patches, this one changes the method of selecting and performing multithreaded AG processing. 
With most simple filesystems, the current multithreading method actually makes xfs_repair slower by increasing the amount of disk seeking between AGs. The biggest benefit of parallel processing of AGs is when the filesystem is spread across a concat of disks/LUNs. So, now the default is a single thread unless you specify an "ag_stride" using the xfs_repair -o option. Then it will initiate a suitable number of threads based on the number of AGs in the filesystem and the stride value. For example, with a 32 AG filesystem on a concat of 2 equal sized disks/LUNs, you would use "-o ag_stride=16" to get one thread to process AGs 0-15 and the second thread to process AGs 16-31. Generally, the stride value is set to the AG number that is fully within the second part of the concat. ------------snCyDrAzmtE7lnFw3tZOqj Content-Disposition: attachment; filename=ag_stride Content-Type: application/octet-stream; name=ag_stride Content-Transfer-Encoding: Base64 [base64-encoded patch attachment "ag_stride" omitted: it modifies xfsprogs/repair globals.h, init.c, phase3.c, phase4.c and xfs_repair.c, replacing the "thread" -o suboption with "ag_stride" and queueing AG work to worker threads in stride order] ------------snCyDrAzmtE7lnFw3tZOqj-- From owner-xfs@oss.sgi.com Mon Jun 4 18:50:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 18:50:49 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l551odWt001588 for ; Mon, 4 Jun 2007 18:50:41 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by 
larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA05642; Tue, 5 Jun 2007 11:50:39 +1000 Date: Tue, 05 Jun 2007 11:54:00 +1000 To: "xfs@oss.sgi.com" , xfs-dev Subject: [REVIEW 2/3] - xfs_repair enhancements (lost+found handling) From: "Barry Naujok" Organization: SGI Content-Type: multipart/mixed; boundary=----------yUh6rLTYauJAeqPswTCPZX MIME-Version: 1.0 Message-ID: User-Agent: Opera Mail/9.10 (Win32) X-archive-position: 11638 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs ------------yUh6rLTYauJAeqPswTCPZX Content-Type: text/plain; format=flowed; delsp=yes; charset=iso-8859-15 Content-Transfer-Encoding: 7bit This particular patch is in the sequence for part 3 which will rely on I/O being uniform for each piece of metadata. It changes the behaviour of lost+found quite substantially in xfs_repair. Currently, xfs_repair deletes any lost+found directories in phase 3, which will orphan inodes that still reside in lost+found. During phase 6, it's recreated and repopulated with orphaned inodes. When one leaves inodes in lost+found between successive runs, it can cause some confusion as xfs_repair keeps on discovering orphaned inodes. This change leaves any lost+found directory or inode alone during phases 3/4 (other than checking for normal consistency). During phase 6, if it finds lost+found in the root inode, it checks that it's a directory and is consistent and remembers it. If it's corrupted, it's junked or repaired like any other directory. If it's not a directory, it's junked. During the last part of phase 6, if any orphaned inodes are found, it will place them in the previously located lost+found directory, or create it and place them. 
------------yUh6rLTYauJAeqPswTCPZX Content-Disposition: attachment; filename=lost+found Content-Type: application/octet-stream; name=lost+found Content-Transfer-Encoding: Base64 SW5kZXg6IHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvZGlub19jaHVua3MuYwo9 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09Ci0tLSByZXBhaXIub3JpZy94ZnNwcm9n cy9yZXBhaXIvZGlub19jaHVua3MuYwkyMDA3LTA0LTI3IDE0OjA5OjA1LjY5 MDc4MzAwNSArMTAwMAorKysgcmVwYWlyL3hmc3Byb2dzL3JlcGFpci9kaW5v X2NodW5rcy5jCTIwMDctMDQtMjcgMTQ6MTE6NDEuMDAwMDAwMDAwICsxMDAw CkBAIC03NjIsMTggKzc2MiwxNCBAQAogCQkgKi8KIAkJaWYgKGlzX3VzZWQp ICB7CiAJCQlpZiAoaXNfaW5vZGVfZnJlZShpbm9fcmVjLCBpcmVjX29mZnNl dCkpICB7Ci0JCQkJaWYgKHZlcmJvc2UgfHwgbm9fbW9kaWZ5IHx8Ci0JCQkJ ICAgIFhGU19BR0lOT19UT19JTk8obXAsIGFnbm8sIGFnaW5vKSAhPQotCQkJ CQkJCW9sZF9vcnBoYW5hZ2VfaW5vKSAgeworCQkJCWlmICh2ZXJib3NlIHx8 IG5vX21vZGlmeSkgIHsKIAkJCQkJZG9fd2FybihfKCJpbWFwIGNsYWltcyBp bi11c2UgaW5vZGUgIgogCQkJCQkJICAiJWxsdSBpcyBmcmVlLCAiKSwKIAkJ CQkJCVhGU19BR0lOT19UT19JTk8obXAsIGFnbm8sCiAJCQkJCQlhZ2lubykp OwogCQkJCX0KIAotCQkJCWlmICh2ZXJib3NlIHx8ICghbm9fbW9kaWZ5ICYm Ci0JCQkJICAgIFhGU19BR0lOT19UT19JTk8obXAsIGFnbm8sIGFnaW5vKSAh PQotCQkJCQkJb2xkX29ycGhhbmFnZV9pbm8pKQorCQkJCWlmICh2ZXJib3Nl IHx8ICFub19tb2RpZnkpCiAJCQkJCWRvX3dhcm4oXygiY29ycmVjdGluZyBp bWFwXG4iKSk7CiAJCQkJZWxzZQogCQkJCQlkb193YXJuKF8oIndvdWxkIGNv cnJlY3QgaW1hcFxuIikpOwpJbmRleDogcmVwYWlyL3hmc3Byb2dzL3JlcGFp ci9kaXIuYwo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0tLSByZXBhaXIub3Jp Zy94ZnNwcm9ncy9yZXBhaXIvZGlyLmMJMjAwNy0wNC0yNyAxMzoxMzozNS45 NzE5OTM0ODkgKzEwMDAKKysrIHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvZGly LmMJMjAwNy0wNC0yNyAxNDoxMTo0MS4wMDAwMDAwMDAgKzEwMDAKQEAgLTE5 NTIsMTIgKzE5NTIsNiBAQAogCQkJCV8oIlx0d291bGQgY2xlYXIgaW5vIG51 bWJlciBpbiBlbnRyeSAlZC4uLlxuIiksCiAJCQkJCWkpOwogCQkJfQotCQl9 IGVsc2UgaWYgKGxpbm8gPT0gb2xkX29ycGhhbmFnZV9pbm8pICB7Ci0JCQkv KgotCQkJICogZG8gbm90aGluZywgc2lsZW50bHkgaWdub3JlIGl0LCBlbnRy 
eSBoYXMKLQkJCSAqIGFscmVhZHkgYmVlbiBtYXJrZWQgVEJEIHNpbmNlIG9s ZF9vcnBoYW5hZ2VfaW5vCi0JCQkgKiBpcyBzZXQgbm9uLXplcm8uCi0JCQkg Ki8KIAkJfSBlbHNlIGlmICgoaXJlY19wID0gZmluZF9pbm9kZV9yZWMoCiAJ CQkJWEZTX0lOT19UT19BR05PKG1wLCBsaW5vKSwKIAkJCQlYRlNfSU5PX1RP X0FHSU5PKG1wLCBsaW5vKSkpICE9IE5VTEwpICB7CkluZGV4OiByZXBhaXIv eGZzcHJvZ3MvcmVwYWlyL2RpcjIuYwo9PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 Ci0tLSByZXBhaXIub3JpZy94ZnNwcm9ncy9yZXBhaXIvZGlyMi5jCTIwMDct MDQtMjcgMTM6MTM6MzUuOTcxOTkzNDg5ICsxMDAwCisrKyByZXBhaXIveGZz cHJvZ3MvcmVwYWlyL2RpcjIuYwkyMDA3LTA0LTI3IDE0OjExOjQxLjAwMDAw MDAwMCArMTAwMApAQCAtMTQ0MCwxMyArMTQ0MCw2IEBACiAJCX0gZWxzZSBp ZiAoSU5UX0dFVChkZXAtPmludW1iZXIsIEFSQ0hfQ09OVkVSVCkgPT0gbXAt Pm1fc2Iuc2JfZ3F1b3Rpbm8pIHsKIAkJCWNsZWFyaW5vID0gMTsKIAkJCWNs ZWFycmVhc29uID0gXygiZ3JvdXAgcXVvdGEiKTsKLQkJfSBlbHNlIGlmIChJ TlRfR0VUKGRlcC0+aW51bWJlciwgQVJDSF9DT05WRVJUKSA9PSBvbGRfb3Jw aGFuYWdlX2lubykgewotCQkJLyoKLQkJCSAqIERvIG5vdGhpbmcsIHNpbGVu dGx5IGlnbm9yZSBpdCwgZW50cnkgaGFzIGFscmVhZHkKLQkJCSAqIGJlZW4g bWFya2VkIFRCRCBzaW5jZSBvbGRfb3JwaGFuYWdlX2lubyBpcyBzZXQKLQkJ CSAqIG5vbi16ZXJvLgotCQkJICovCi0JCQljbGVhcmlubyA9IDA7CiAJCX0g ZWxzZSBpZiAoKGlyZWNfcCA9IGZpbmRfaW5vZGVfcmVjKAogCQkJCVhGU19J Tk9fVE9fQUdOTyhtcCwgSU5UX0dFVChkZXAtPmludW1iZXIsCiAJCQkJCUFS Q0hfQ09OVkVSVCkpLApJbmRleDogcmVwYWlyL3hmc3Byb2dzL3JlcGFpci9n bG9iYWxzLmgKPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQotLS0gcmVwYWlyLm9y aWcveGZzcHJvZ3MvcmVwYWlyL2dsb2JhbHMuaAkyMDA3LTA0LTI3IDE0OjEx OjQxLjY4MjQxODIzNCArMTAwMAorKysgcmVwYWlyL3hmc3Byb2dzL3JlcGFp ci9nbG9iYWxzLmgJMjAwNy0wNC0yNyAxNDoxMTo0MS4wMDAwMDAwMDAgKzEw MDAKQEAgLTE4OCw5ICsxODgsNiBAQAogRVhURVJOIF9fdWludDY0X3QJc2Jf ZmRibG9ja3M7CS8qIGZyZWUgZGF0YSBibG9ja3MgKi8KIEVYVEVSTiBfX3Vp bnQ2NF90CXNiX2ZyZXh0ZW50czsJLyogZnJlZSByZWFsdGltZSBleHRlbnRz ICovCiAKLUVYVEVSTiB4ZnNfaW5vX3QJb3JwaGFuYWdlX2lubzsKLUVYVEVS TiB4ZnNfaW5vX3QJb2xkX29ycGhhbmFnZV9pbm87Ci0KIC8qIHN1cGVyYmxv 
Y2sgZ2VvbWV0cnkgaW5mbyAqLwogCiBFWFRFUk4geGZzX2V4dGxlbl90CXNi X2lub2FsaWdubXQ7CkluZGV4OiByZXBhaXIveGZzcHJvZ3MvcmVwYWlyL3Bo YXNlMS5jCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KLS0tIHJlcGFpci5vcmln L3hmc3Byb2dzL3JlcGFpci9waGFzZTEuYwkyMDA3LTA0LTI3IDEzOjEzOjM1 Ljk3MTk5MzQ4OSArMTAwMAorKysgcmVwYWlyL3hmc3Byb2dzL3JlcGFpci9w aGFzZTEuYwkyMDA3LTA0LTI3IDE0OjExOjQxLjc4MjQwNTE4MSArMTAwMApA QCAtNjMsNyArNjMsNiBAQAogCW5lZWRfcmJtaW5vID0gMDsKIAluZWVkX3Jz dW1pbm8gPSAwOwogCWxvc3RfcXVvdGFzID0gMDsKLQlvbGRfb3JwaGFuYWdl X2lubyA9ICh4ZnNfaW5vX3QpIDA7CiAKIAkvKgogCSAqIGdldCBBRyAwIGlu dG8gYWcgaGVhZGVyIGJ1ZgpJbmRleDogcmVwYWlyL3hmc3Byb2dzL3JlcGFp ci9waGFzZTQuYwo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0tLSByZXBhaXIu b3JpZy94ZnNwcm9ncy9yZXBhaXIvcGhhc2U0LmMJMjAwNy0wNC0yNyAxNDox MTo0MS42OTA0MTcxODkgKzEwMDAKKysrIHJlcGFpci94ZnNwcm9ncy9yZXBh aXIvcGhhc2U0LmMJMjAwNy0wNC0yNyAxNDoxMTo0MS4wMDAwMDAwMDAgKzEw MDAKQEAgLTMyLDEwMjAgKzMyLDYgQEAKICNpbmNsdWRlICJwcm9ncmVzcy5o IgogCiAKLS8qIEFSR1NVU0VEICovCi1pbnQKLWxmX2Jsb2NrX2RlbGV0ZV9v cnBoYW5hZ2UoeGZzX21vdW50X3QJCSptcCwKLQkJCXhmc19pbm9fdAkJaW5v LAotCQkJeGZzX2Rpcl9sZWFmYmxvY2tfdAkqbGVhZiwKLQkJCWludAkJCSpk aXJ0eSwKLQkJCXhmc19idWZfdAkJKnJvb3Rpbm9fYnAsCi0JCQlpbnQJCQkq cmJ1Zl9kaXJ0eSkKLXsKLQl4ZnNfZGlyX2xlYWZfZW50cnlfdAkqZW50cnk7 Ci0JeGZzX2Rpbm9kZV90CQkqZGlubzsKLQl4ZnNfYnVmX3QJCSpicDsKLQlp bm9fdHJlZV9ub2RlX3QJCSppcmVjOwotCXhmc19pbm9fdAkJbGlubzsKLQl4 ZnNfZGlyX2xlYWZfbmFtZV90CSpuYW1lc3Q7Ci0JeGZzX2FnaW5vX3QJCWFn aW5vOwotCXhmc19hZ251bWJlcl90CQlhZ25vOwotCXhmc19hZ2lub190CQly b290X2FnaW5vOwotCXhmc19hZ251bWJlcl90CQlyb290X2Fnbm87Ci0JaW50 CQkJaTsKLQlpbnQJCQlpbm9fb2Zmc2V0OwotCWludAkJCWlub19kaXJ0eTsK LQlpbnQJCQl1c2VfcmJ1ZjsKLQlpbnQJCQlsZW47Ci0JY2hhcgkJCWZuYW1l W01BWE5BTUVMRU4gKyAxXTsKLQlpbnQJCQlyZXM7Ci0KLQllbnRyeSA9ICZs ZWFmLT5lbnRyaWVzWzBdOwotCSpkaXJ0eSA9IDA7Ci0JdXNlX3JidWYgPSAw OwotCXJlcyA9IDA7Ci0Jcm9vdF9hZ25vID0gWEZTX0lOT19UT19BR05PKG1w 
LCBtcC0+bV9zYi5zYl9yb290aW5vKTsKLQlyb290X2FnaW5vID0gWEZTX0lO T19UT19BR0lOTyhtcCwgbXAtPm1fc2Iuc2Jfcm9vdGlubyk7Ci0KLQlmb3Ig KGkgPSAwOyBpIDwgSU5UX0dFVChsZWFmLT5oZHIuY291bnQsIEFSQ0hfQ09O VkVSVCk7IGVudHJ5KyssIGkrKykgewotCQluYW1lc3QgPSBYRlNfRElSX0xF QUZfTkFNRVNUUlVDVChsZWFmLAotCQkJSU5UX0dFVChlbnRyeS0+bmFtZWlk eCwgQVJDSF9DT05WRVJUKSk7Ci0JCVhGU19ESVJfU0ZfR0VUX0RJUklOTygm bmFtZXN0LT5pbnVtYmVyLCAmbGlubyk7Ci0JCWJjb3B5KG5hbWVzdC0+bmFt ZSwgZm5hbWUsIGVudHJ5LT5uYW1lbGVuKTsKLQkJZm5hbWVbZW50cnktPm5h bWVsZW5dID0gJ1wwJzsKLQotCQlpZiAoZm5hbWVbMF0gIT0gJy8nICYmICFz dHJjbXAoZm5hbWUsIE9SUEhBTkFHRSkpICB7Ci0JCQlhZ2lubyA9IFhGU19J Tk9fVE9fQUdJTk8obXAsIGxpbm8pOwotCQkJYWdubyA9IFhGU19JTk9fVE9f QUdOTyhtcCwgbGlubyk7Ci0KLQkJCW9sZF9vcnBoYW5hZ2VfaW5vID0gbGlu bzsKLQotCQkJaXJlYyA9IGZpbmRfaW5vZGVfcmVjKGFnbm8sIGFnaW5vKTsK LQotCQkJLyoKLQkJCSAqIGlmIHRoZSBvcnBoYW5nZSBpbm9kZSBpcyBpbiB0 aGUgdHJlZSwKLQkJCSAqIGdldCBpdCwgY2xlYXIgaXQsIGFuZCBtYXJrIGl0 IGZyZWUuCi0JCQkgKiB0aGUgaW5vZGVzIGluIHRoZSBvcnBoYW5hZ2Ugd2ls bCBnZXQKLQkJCSAqIHJlYXR0YWNoZWQgdG8gdGhlIG5ldyBvcnBoYW5hZ2Uu Ci0JCQkgKi8KLQkJCWlmIChpcmVjICE9IE5VTEwpICB7Ci0JCQkJaW5vX29m ZnNldCA9IGFnaW5vIC0gaXJlYy0+aW5vX3N0YXJ0bnVtOwotCi0JCQkJLyoK LQkJCQkgKiBjaGVjayBpZiB3ZSBoYXZlIHRvIHVzZSB0aGUgcm9vdCBpbm9k ZQotCQkJCSAqIGJ1ZmZlciBvciByZWFkIG9uZSBpbiBvdXJzZWx2ZXMuICBO b3RlCi0JCQkJICogdGhhdCB0aGUgcm9vdCBpbm9kZSBpcyBhbHdheXMgdGhl IGZpcnN0Ci0JCQkJICogaW5vZGUgb2YgdGhlIGNodW5rIHRoYXQgaXQncyBp biBzbyB0aGVyZQotCQkJCSAqIGFyZSB0d28gcG9zc2libGUgY2FzZXMgd2hl cmUgbG9zdCtmb3VuZAotCQkJCSAqIG1pZ2h0IGJlIGluIHRoZSBzYW1lIGJ1 ZmZlciBhcyB0aGUgcm9vdAotCQkJCSAqIGlub2RlLiAgT25lIGNhc2UgaXMg YSBsYXJnZSBibG9jawotCQkJCSAqIGZpbGVzeXN0ZW0gd2hlcmUgdGhlIHR3 byBpbm9kZXMgYXJlCi0JCQkJICogaW4gZGlmZmVyZW50IGlub2RlIGNodW5r cyBidXQgd2luZAotCQkJCSAqIHVwIGluIHRoZSBzYW1lIGJsb2NrIChtdWx0 aXBsZSBjaHVua3MKLQkJCQkgKiBwZXIgYmxvY2spIGFuZCB0aGUgc2Vjb25k IGNhc2UgKG9uZSBvcgotCQkJCSAqIG1vcmUgYmxvY2tzIHBlciBjaHVuaykg aXMgd2hlcmUgdGhlIHR3bwotCQkJCSAqIGlub2RlcyBhcmUgaW4gdGhlIHNh 
[base64-encoded MIME part, not reproduced here. Decoded, it is a patch against xfsprogs/repair that removes the old orphanage pre-deletion code -- the functions lf_block_delete_orphanage(), longform_delete_orphanage(), shortform_delete_orphanage(), lf2_block_delete_orphanage(), longform2_delete_orphanage(), shortform2_delete_orphanage() and delete_orphanage(), along with the caller that cleared lost+found before the per-AG scan -- and updates repair/xfsprogs/repair/phase6.c (diff dated 2007-04-27): the orphanage_entered flag is replaced by static orphanage_ino/orphanage_ip state, mk_orphanage() now looks up an existing lost+found via dir_lookup() on the root inode and returns its inode number instead of always creating a new one (bad lost+found inodes having already been cleared in phases 3 and 4), and mv_orphanage() is made static, takes only the inode to be moved, and loops with dir_lookup() to make the inode-number filename unique within lost+found.]
CQkmZW50cnlfaW5vX251bSkgPT0gMCkgeworCQlmbmFtZWxlbiA9IHNucHJp bnRmKGZuYW1lLCBzaXplb2YoZm5hbWUpLCAiJWxsdS4lZCIsCisJCQkJKHVu c2lnbmVkIGxvbmcgbG9uZylpbm8sICsraW5jcik7CisJfQogCiAJdHAgPSBs aWJ4ZnNfdHJhbnNfYWxsb2MobXAsIDApOwogCkBAIC05MDcsMTAgKzkzNCwx MCBAQAogCQlkb19lcnJvcihfKCIlZCAtIGNvdWxkbid0IGlnZXQgZGlzY29u bmVjdGVkIGlub2RlXG4iKSwgZXJyKTsKIAogCWlmIChpc2FfZGlyKSAgewot CQlucmVzID0gWEZTX0RJUkVOVEVSX1NQQUNFX1JFUyhtcCwgc3RybGVuKGZu YW1lKSkgKworCQlucmVzID0gWEZTX0RJUkVOVEVSX1NQQUNFX1JFUyhtcCwg Zm5hbWVsZW4pICsKIAkJICAgICAgIFhGU19ESVJFTlRFUl9TUEFDRV9SRVMo bXAsIDIpOwotCQlpZiAoKGVyciA9IGRpcl9sb29rdXAobXAsIHRwLCBpbm9f cCwgIi4uIiwgMiwKLQkJCQkmZW50cnlfaW5vX251bSkpKSB7CisJCWVyciA9 IGRpcl9sb29rdXAobXAsIHRwLCBpbm9fcCwgIi4uIiwgMiwgJmVudHJ5X2lu b19udW0pOworCQlpZiAoZXJyKSB7CiAJCQlBU1NFUlQoZXJyID09IEVOT0VO VCk7CiAKIAkJCWlmICgoZXJyID0gbGlieGZzX3RyYW5zX3Jlc2VydmUodHAs IG5yZXMsCkBAIC05MjEsMjIgKzk0OCwyMiBAQAogCV8oInNwYWNlIHJlc2Vy dmF0aW9uIGZhaWxlZCAoJWQpLCBmaWxlc3lzdGVtIG1heSBiZSBvdXQgb2Yg c3BhY2VcbiIpLAogCQkJCQllcnIpOwogCi0JCQlsaWJ4ZnNfdHJhbnNfaWpv aW4odHAsIGRpcl9pbm9fcCwgMCk7CisJCQlsaWJ4ZnNfdHJhbnNfaWpvaW4o dHAsIG9ycGhhbmFnZV9pcCwgMCk7CiAJCQlsaWJ4ZnNfdHJhbnNfaWpvaW4o dHAsIGlub19wLCAwKTsKIAogCQkJWEZTX0JNQVBfSU5JVCgmZmxpc3QsICZm aXJzdCk7Ci0JCQlpZiAoKGVyciA9IGRpcl9jcmVhdGVuYW1lKG1wLCB0cCwg ZGlyX2lub19wLCBmbmFtZSwKLQkJCQkJCXN0cmxlbihmbmFtZSksIGlubywg JmZpcnN0LAorCQkJaWYgKChlcnIgPSBkaXJfY3JlYXRlbmFtZShtcCwgdHAs IG9ycGhhbmFnZV9pcCwgZm5hbWUsCisJCQkJCQlmbmFtZWxlbiwgaW5vLCAm Zmlyc3QsCiAJCQkJCQkmZmxpc3QsIG5yZXMpKSkKIAkJCQlkb19lcnJvcigK IAlfKCJuYW1lIGNyZWF0ZSBmYWlsZWQgaW4gJXMgKCVkKSwgZmlsZXN5c3Rl bSBtYXkgYmUgb3V0IG9mIHNwYWNlXG4iKSwKIAkJCQkJT1JQSEFOQUdFLCBl cnIpOwogCi0JCQlkaXJfaW5vX3AtPmlfZC5kaV9ubGluaysrOwotCQkJbGli eGZzX3RyYW5zX2xvZ19pbm9kZSh0cCwgZGlyX2lub19wLCBYRlNfSUxPR19D T1JFKTsKKwkJCW9ycGhhbmFnZV9pcC0+aV9kLmRpX25saW5rKys7CisJCQls aWJ4ZnNfdHJhbnNfbG9nX2lub2RlKHRwLCBvcnBoYW5hZ2VfaXAsIFhGU19J TE9HX0NPUkUpOwogCiAJCQlpZiAoKGVyciA9IGRpcl9jcmVhdGVuYW1lKG1w 
LCB0cCwgaW5vX3AsICIuLiIsIDIsCi0JCQkJCQlkaXJfaW5vLCAmZmlyc3Qs ICZmbGlzdCwgbnJlcykpKQorCQkJCQkJb3JwaGFuYWdlX2lubywgJmZpcnN0 LCAmZmxpc3QsIG5yZXMpKSkKIAkJCQlkb19lcnJvcigKIAlfKCJjcmVhdGlv biBvZiAuLiBlbnRyeSBmYWlsZWQgKCVkKSwgZmlsZXN5c3RlbSBtYXkgYmUg b3V0IG9mIHNwYWNlXG4iKSwKIAkJCQkJZXJyKTsKQEAgLTk2MCwyOCArOTg3 LDI4IEBACiAJXygic3BhY2UgcmVzZXJ2YXRpb24gZmFpbGVkICglZCksIGZp bGVzeXN0ZW0gbWF5IGJlIG91dCBvZiBzcGFjZVxuIiksCiAJCQkJCWVycik7 CiAKLQkJCWxpYnhmc190cmFuc19pam9pbih0cCwgZGlyX2lub19wLCAwKTsK KwkJCWxpYnhmc190cmFuc19pam9pbih0cCwgb3JwaGFuYWdlX2lwLCAwKTsK IAkJCWxpYnhmc190cmFuc19pam9pbih0cCwgaW5vX3AsIDApOwogCiAJCQlY RlNfQk1BUF9JTklUKCZmbGlzdCwgJmZpcnN0KTsKIAotCQkJaWYgKChlcnIg PSBkaXJfY3JlYXRlbmFtZShtcCwgdHAsIGRpcl9pbm9fcCwgZm5hbWUsCi0J CQkJCQlzdHJsZW4oZm5hbWUpLCBpbm8sICZmaXJzdCwKKwkJCWlmICgoZXJy ID0gZGlyX2NyZWF0ZW5hbWUobXAsIHRwLCBvcnBoYW5hZ2VfaXAsIGZuYW1l LAorCQkJCQkJZm5hbWVsZW4sIGlubywgJmZpcnN0LAogCQkJCQkJJmZsaXN0 LCBucmVzKSkpCiAJCQkJZG9fZXJyb3IoCiAJXygibmFtZSBjcmVhdGUgZmFp bGVkIGluICVzICglZCksIGZpbGVzeXN0ZW0gbWF5IGJlIG91dCBvZiBzcGFj ZVxuIiksCiAJCQkJCU9SUEhBTkFHRSwgZXJyKTsKIAotCQkJZGlyX2lub19w LT5pX2QuZGlfbmxpbmsrKzsKLQkJCWxpYnhmc190cmFuc19sb2dfaW5vZGUo dHAsIGRpcl9pbm9fcCwgWEZTX0lMT0dfQ09SRSk7CisJCQlvcnBoYW5hZ2Vf aXAtPmlfZC5kaV9ubGluaysrOworCQkJbGlieGZzX3RyYW5zX2xvZ19pbm9k ZSh0cCwgb3JwaGFuYWdlX2lwLCBYRlNfSUxPR19DT1JFKTsKIAogCQkJLyoK IAkJCSAqIGRvbid0IHJlcGxhY2UgLi4gdmFsdWUgaWYgaXQgYWxyZWFkeSBw b2ludHMKIAkJCSAqIHRvIHVzLiAgdGhhdCdsbCBwb3AgYSBsaWJ4ZnMva2Vy bmVsIEFTU0VSVC4KIAkJCSAqLwotCQkJaWYgKGVudHJ5X2lub19udW0gIT0g ZGlyX2lubykgIHsKKwkJCWlmIChlbnRyeV9pbm9fbnVtICE9IG9ycGhhbmFn ZV9pbm8pICB7CiAJCQkJaWYgKChlcnIgPSBkaXJfcmVwbGFjZShtcCwgdHAs IGlub19wLCAiLi4iLAotCQkJCQkJCTIsIGRpcl9pbm8sICZmaXJzdCwKKwkJ CQkJCQkyLCBvcnBoYW5hZ2VfaW5vLCAmZmlyc3QsCiAJCQkJCQkJJmZsaXN0 LCBucmVzKSkpCiAJCQkJCWRvX2Vycm9yKAogCV8oIm5hbWUgcmVwbGFjZSBv cCBmYWlsZWQgKCVkKSwgZmlsZXN5c3RlbSBtYXkgYmUgb3V0IG9mIHNwYWNl XG4iKSwKQEAgLTEwMDQsMTkgKzEwMzEsMTkgQEAKIAkJICogbGlua3MsIHdl 
J3JlIG5vdCBkb2luZyB0aGUgaW5vZGUgYWxsb2NhdGlvbgogCQkgKiBhbHNv IGFjY291bnRlZCBmb3IgaW4gdGhlIGNyZWF0ZQogCQkgKi8KLQkJbnJlcyA9 IFhGU19ESVJFTlRFUl9TUEFDRV9SRVMobXAsIHN0cmxlbihmbmFtZSkpOwor CQlucmVzID0gWEZTX0RJUkVOVEVSX1NQQUNFX1JFUyhtcCwgZm5hbWVsZW4p OwogCQlpZiAoKGVyciA9IGxpYnhmc190cmFuc19yZXNlcnZlKHRwLCBucmVz LCBYRlNfUkVNT1ZFX0xPR19SRVMobXApLCAwLAogCQkJCVhGU19UUkFOU19Q RVJNX0xPR19SRVMsIFhGU19SRU1PVkVfTE9HX0NPVU5UKSkpCiAJCQlkb19l cnJvcigKIAlfKCJzcGFjZSByZXNlcnZhdGlvbiBmYWlsZWQgKCVkKSwgZmls ZXN5c3RlbSBtYXkgYmUgb3V0IG9mIHNwYWNlXG4iKSwKIAkJCQllcnIpOwog Ci0JCWxpYnhmc190cmFuc19pam9pbih0cCwgZGlyX2lub19wLCAwKTsKKwkJ bGlieGZzX3RyYW5zX2lqb2luKHRwLCBvcnBoYW5hZ2VfaXAsIDApOwogCQls aWJ4ZnNfdHJhbnNfaWpvaW4odHAsIGlub19wLCAwKTsKIAogCQlYRlNfQk1B UF9JTklUKCZmbGlzdCwgJmZpcnN0KTsKLQkJaWYgKChlcnIgPSBkaXJfY3Jl YXRlbmFtZShtcCwgdHAsIGRpcl9pbm9fcCwgZm5hbWUsCi0JCQkJc3RybGVu KGZuYW1lKSwgaW5vLCAmZmlyc3QsICZmbGlzdCwgbnJlcykpKQorCQlpZiAo KGVyciA9IGRpcl9jcmVhdGVuYW1lKG1wLCB0cCwgb3JwaGFuYWdlX2lwLCBm bmFtZSwKKwkJCQlmbmFtZWxlbiwgaW5vLCAmZmlyc3QsICZmbGlzdCwgbnJl cykpKQogCQkJZG9fZXJyb3IoCiAJXygibmFtZSBjcmVhdGUgZmFpbGVkIGlu ICVzICglZCksIGZpbGVzeXN0ZW0gbWF5IGJlIG91dCBvZiBzcGFjZVxuIiks CiAJCQkJT1JQSEFOQUdFLCBlcnIpOwpAQCAtMTM2NSw2ICsxMzkyLDI0IEBA CiAJcmV0dXJuKDEpOwogfQogCitzdGF0aWMgaW50CitlbnRyeV9qdW5rZWQo CisJY29uc3QgY2hhciAJKm1zZywKKwljb25zdCBjaGFyCSppbmFtZSwKKwl4 ZnNfaW5vX3QJaW5vMSwKKwl4ZnNfaW5vX3QJaW5vMikKK3sKKwlkb193YXJu KG1zZywgaW5hbWUsIGlubzEsIGlubzIpOworCWlmICghbm9fbW9kaWZ5KSB7 CisJCWlmICh2ZXJib3NlKQorCQkJZG9fd2FybihfKCIsIG1hcmtpbmcgZW50 cnkgdG8gYmUganVua2VkXG4iKSk7CisJCWVsc2UKKwkJCWRvX3dhcm4oIlxu Iik7CisJfSBlbHNlCisJCWRvX3dhcm4oXygiLCB3b3VsZCBqdW5rIGVudHJ5 XG4iKSk7CisJcmV0dXJuICFub19tb2RpZnk7Cit9CisKIC8qCiAgKiBwcm9j ZXNzIGEgbGVhZiBibG9jaywgYWxzbyBjaGVja3MgZm9yIC4uIGVudHJ5CiAg KiBhbmQgY29ycmVjdHMgaXQgdG8gbWF0Y2ggd2hhdCB3ZSB0aGluayAuLiBz aG91bGQgYmUKQEAgLTE0NDIsOSArMTQ4Nyw5IEBACiAJCSAqIHRha2UgY2Fy ZSBvZiBpdCB0aGVuLgogCQkgKi8KIAkJaWYgKGVudHJ5LT5uYW1lbGVuID09 
IDIgJiYgbmFtZXN0LT5uYW1lWzBdID09ICcuJyAmJgotCQkJCW5hbWVzdC0+ bmFtZVsxXSA9PSAnLicpICB7CisJCQkJbmFtZXN0LT5uYW1lWzFdID09ICcu JykKIAkJCWNvbnRpbnVlOwotCQl9CisKIAkJQVNTRVJUKG5vX21vZGlmeSB8 fCAhdmVyaWZ5X2ludW0obXAsIGxpbm8pKTsKIAogCQkvKgpAQCAtMTQ2NCwx NyArMTUwOSw2IEBACiAJCX0KIAogCQkvKgotCQkgKiBzcGVjaWFsIGNhc2Ug dGhlICJsb3N0K2ZvdW5kIiBlbnRyeSBpZiBwb2ludGluZwotCQkgKiB0byB3 aGVyZSB3ZSB0aGluayBsb3N0K2ZvdW5kIHNob3VsZCBiZS4gIGlmIHRoYXQn cwotCQkgKiB0aGUgY2FzZSwgdGhhdCdzIHRoZSBvbmUgd2UgY3JlYXRlZCBp biBwaGFzZSA2LgotCQkgKiBqdXN0IHNraXAgaXQuICBubyBuZWVkIHRvIHBy b2Nlc3MgaXQgYW5kIGl0J3MgLi4KLQkJICogbGluayBpcyBhbHJlYWR5IGFj Y291bnRlZCBmb3IuCi0JCSAqLwotCi0JCWlmIChsaW5vID09IG9ycGhhbmFn ZV9pbm8gJiYgc3RyY21wKGZuYW1lLCBPUlBIQU5BR0UpID09IDApCi0JCQlj b250aW51ZTsKLQotCQkvKgogCQkgKiBza2lwIGVudHJpZXMgd2l0aCBib2d1 cyBpbnVtYmVycyBpZiB3ZSdyZSBpbiBubyBtb2RpZnkgbW9kZQogCQkgKi8K IAkJaWYgKG5vX21vZGlmeSAmJiB2ZXJpZnlfaW51bShtcCwgbGlubykpCkBA IC0xNDg4LDE4ICsxNTIyLDEyIEBACiAKIAkJaWYgKGlyZWMgPT0gTlVMTCkg IHsKIAkJCW5iYWQrKzsKLQkJCWRvX3dhcm4oCi0JXygiZW50cnkgXCIlc1wi IGluIGRpciBpbm9kZSAlbGx1IHBvaW50cyB0byBub24tZXhpc3RlbnQgaW5v ZGUsICIpLAotCQkJCWZuYW1lLCBpbm8pOwotCi0JCQlpZiAoIW5vX21vZGlm eSkgIHsKKwkJCWlmIChlbnRyeV9qdW5rZWQoXygiZW50cnkgXCIlc1wiIGlu IGRpciBpbm9kZSAlbGx1ICIKKwkJCQkJInBvaW50cyB0byBub24tZXhpc3Rl bnQgaW5vZGUgJWxsdSIpLAorCQkJCQlmbmFtZSwgaW5vLCBsaW5vKSkgewog CQkJCW5hbWVzdC0+bmFtZVswXSA9ICcvJzsKIAkJCQkqZGlydHkgPSAxOwot CQkJCWRvX3dhcm4oXygibWFya2luZyBlbnRyeSB0byBiZSBqdW5rZWRcbiIp KTsKLQkJCX0gZWxzZSAgewotCQkJCWRvX3dhcm4oXygid291bGQganVuayBl bnRyeVxuIikpOwogCQkJfQotCiAJCQljb250aW51ZTsKIAkJfQogCkBAIC0x NTExLDUyICsxNTM5LDUxIEBACiAJCSAqIHJlYWxseSBpcyBmcmVlLgogCQkg Ki8KIAkJaWYgKGlzX2lub2RlX2ZyZWUoaXJlYywgaW5vX29mZnNldCkpICB7 Ci0JCQkvKgotCQkJICogZG9uJ3QgY29tcGxhaW4gaWYgdGhpcyBlbnRyeSBw b2ludHMgdG8gdGhlIG9sZAotCQkJICogYW5kIG5vdy1mcmVlIGxvc3QrZm91 bmQgaW5vZGUKLQkJCSAqLwotCQkJaWYgKHZlcmJvc2UgfHwgbm9fbW9kaWZ5 IHx8IGxpbm8gIT0gb2xkX29ycGhhbmFnZV9pbm8pCi0JCQkJZG9fd2FybigK 
LQkJXygiZW50cnkgXCIlc1wiIGluIGRpciBpbm9kZSAlbGx1IHBvaW50cyB0 byBmcmVlIGlub2RlICVsbHUiKSwKLQkJCQkJZm5hbWUsIGlubywgbGlubyk7 CiAJCQluYmFkKys7Ci0KLQkJCWlmICghbm9fbW9kaWZ5KSAgewotCQkJCWlm ICh2ZXJib3NlIHx8IGxpbm8gIT0gb2xkX29ycGhhbmFnZV9pbm8pCi0JCQkJ CWRvX3dhcm4oCi0JCQkJCV8oIiwgbWFya2luZyBlbnRyeSB0byBiZSBqdW5r ZWRcbiIpKTsKLQotCQkJCWVsc2UKLQkJCQkJZG9fd2FybigiXG4iKTsKKwkJ CWlmIChlbnRyeV9qdW5rZWQoXygiZW50cnkgXCIlc1wiIGluIGRpciBpbm9k ZSAlbGx1ICIKKwkJCQkJInBvaW50cyB0byBmcmVlIGlub2RlICVsbHUiKSwK KwkJCQkJZm5hbWUsIGlubywgbGlubykpIHsKIAkJCQluYW1lc3QtPm5hbWVb MF0gPSAnLyc7CiAJCQkJKmRpcnR5ID0gMTsKLQkJCX0gZWxzZSAgewotCQkJ CWRvX3dhcm4oXygiLCB3b3VsZCBqdW5rIGVudHJ5XG4iKSk7CiAJCQl9Ci0K IAkJCWNvbnRpbnVlOwogCQl9CiAKIAkJLyoKKwkJICogY2hlY2sgaWYgdGhp cyBpbm9kZSBpcyBsb3N0K2ZvdW5kIGRpciBpbiB0aGUgcm9vdAorCQkgKi8K KwkJaWYgKGlubyA9PSBtcC0+bV9zYi5zYl9yb290aW5vICYmIHN0cmNtcChm bmFtZSwgT1JQSEFOQUdFKSA9PSAwKSB7CisJCQkvKiByb290IGlub2RlLCAi bG9zdCtmb3VuZCIsIGlmIGl0J3Mgbm90IGEgZGlyZWN0b3J5LAorCQkJICog dHJhc2ggaXQsIG90aGVyd2lzZSwgYXNzaWduIGl0ICovCisJCQlpZiAoIWlu b2RlX2lzYWRpcihpcmVjLCBpbm9fb2Zmc2V0KSkgeworCQkJCW5iYWQrKzsK KwkJCQlpZiAoZW50cnlfanVua2VkKF8oIiVzIChpbm8gJWxsdSkgaW4gcm9v dCAiCisJCQkJCQkiKCVsbHUpIGlzIG5vdCBhIGRpcmVjdG9yeSIpLAorCQkJ CQkJT1JQSEFOQUdFLCBsaW5vLCBpbm8pKSB7CisJCQkJCW5hbWVzdC0+bmFt ZVswXSA9ICcvJzsKKwkJCQkJKmRpcnR5ID0gMTsKKwkJCQl9CisJCQkJY29u dGludWU7CisJCQl9CisJCQkvKgorCQkJICogaWYgdGhpcyBpcyBhIGR1cCwg aXQgd2lsbCBiZSBwaWNrZWQgdXAgYmVsb3csCisJCQkgKiBvdGhlcndpc2Us IG1hcmsgaXQgYXMgdGhlIG9ycGhhbmFnZSBmb3IgbGF0ZXIuCisJCQkgKi8K KwkJCWlmICghb3JwaGFuYWdlX2lubykKKwkJCQlvcnBoYW5hZ2VfaW5vID0g bGlubzsKKwkJfQorCQkvKgogCQkgKiBjaGVjayBmb3IgZHVwbGljYXRlIG5h bWVzIGluIGRpcmVjdG9yeS4KIAkJICovCiAJCWlmICghZGlyX2hhc2hfYWRk KGhhc2h0YWIsIChkYV9ibm8gPDwgbXAtPm1fc2Iuc2JfYmxvY2tsb2cpICsK LQkJCQkJCWVudHJ5LT5uYW1laWR4LAotCQkJCWxpbm8sIGVudHJ5LT5uYW1l bGVuLCBuYW1lc3QtPm5hbWUpKSB7Ci0JCQlkb193YXJuKAotCQlfKCJlbnRy eSBcIiVzXCIgKGlubyAlbGx1KSBpbiBkaXIgJWxsdSBpcyBhIGR1cGxpY2F0 
ZSBuYW1lIiksCi0JCQkJZm5hbWUsIGxpbm8sIGlubyk7CisJCQkJZW50cnkt Pm5hbWVpZHgsIGxpbm8sIGVudHJ5LT5uYW1lbGVuLAorCQkJCW5hbWVzdC0+ bmFtZSkpIHsKIAkJCW5iYWQrKzsKLQkJCWlmICghbm9fbW9kaWZ5KSB7Ci0J CQkJaWYgKHZlcmJvc2UpCi0JCQkJCWRvX3dhcm4oCi0JCQkJCV8oIiwgbWFy a2luZyBlbnRyeSB0byBiZSBqdW5rZWRcbiIpKTsKLQkJCQllbHNlCi0JCQkJ CWRvX3dhcm4oIlxuIik7CisJCQlpZiAoZW50cnlfanVua2VkKF8oImVudHJ5 IFwiJXNcIiAoaW5vICVsbHUpIGluIGRpciAiCisJCQkJCSIlbGx1IGlzIGEg ZHVwbGljYXRlIG5hbWUiKSwKKwkJCQkJZm5hbWUsIGxpbm8sIGlubykpIHsK IAkJCQluYW1lc3QtPm5hbWVbMF0gPSAnLyc7CiAJCQkJKmRpcnR5ID0gMTsK LQkJCX0gZWxzZSB7Ci0JCQkJZG9fd2FybihfKCIsIHdvdWxkIGp1bmsgZW50 cnlcbiIpKTsKIAkJCX0KIAkJCWNvbnRpbnVlOwogCQl9CkBAIC0xNjA0LDcg KzE2MzEsNyBAQAogCQkJaWYgKCFub19tb2RpZnkpICB7CiAJCQkJbmFtZXN0 LT5uYW1lWzBdID0gJy8nOwogCQkJCSpkaXJ0eSA9IDE7Ci0JCQkJaWYgKHZl cmJvc2UgfHwgbGlubyAhPSBvbGRfb3JwaGFuYWdlX2lubykKKwkJCQlpZiAo dmVyYm9zZSkKIAkJCQkJZG9fd2FybigKIAkJCQkJXygiXHR3aWxsIGNsZWFy IGVudHJ5IFwiJXNcIlxuIiksCiAJCQkJCQlmbmFtZSk7CkBAIC0yMTU1LDIx ICsyMTgyLDc1IEBACiAJCXB0ciArPSBYRlNfRElSMl9EQVRBX0VOVFNJWkUo ZGVwLT5uYW1lbGVuKTsKIAkJaW51bSA9IElOVF9HRVQoZGVwLT5pbnVtYmVy LCBBUkNIX0NPTlZFUlQpOwogCQlsYXN0ZnJlZSA9IDA7CisKKwkJaXJlYyA9 IGZpbmRfaW5vZGVfcmVjKFhGU19JTk9fVE9fQUdOTyhtcCwgaW51bSksCisJ CQkJCVhGU19JTk9fVE9fQUdJTk8obXAsIGludW0pKTsKKwkJaWYgKGlyZWMg PT0gTlVMTCkgIHsKKwkJCW5iYWQrKzsKKwkJCWlmIChlbnRyeV9qdW5rZWQo XygiZW50cnkgXCIlc1wiIGluIGRpcmVjdG9yeSBpbm9kZSAiCisJCQkJCSIl bGx1IHBvaW50cyB0byBub24tZXhpc3RlbnQgaW5vZGUgJWxsdSIpLAorCQkJ CQlmbmFtZSwgaXAtPmlfaW5vLCBpbnVtKSkgeworCQkJCWRlcC0+bmFtZVsw XSA9ICcvJzsKKwkJCQlsaWJ4ZnNfZGlyMl9kYXRhX2xvZ19lbnRyeSh0cCwg YnAsIGRlcCk7CisJCQl9CisJCQljb250aW51ZTsKKwkJfQorCQlpbm9fb2Zm c2V0ID0gWEZTX0lOT19UT19BR0lOTyhtcCwgaW51bSkgLSBpcmVjLT5pbm9f c3RhcnRudW07CisKKwkJLyoKKwkJICogaWYgaXQncyBhIGZyZWUgaW5vZGUs IGJsb3cgb3V0IHRoZSBlbnRyeS4KKwkJICogYnkgbm93LCBhbnkgaW5vZGUg dGhhdCB3ZSB0aGluayBpcyBmcmVlCisJCSAqIHJlYWxseSBpcyBmcmVlLgor CQkgKi8KKwkJaWYgKGlzX2lub2RlX2ZyZWUoaXJlYywgaW5vX29mZnNldCkp 
ICB7CisJCQluYmFkKys7CisJCQlpZiAoZW50cnlfanVua2VkKF8oImVudHJ5 IFwiJXNcIiBpbiBkaXJlY3RvcnkgaW5vZGUgIgorCQkJCQkiJWxsdSBwb2lu dHMgdG8gZnJlZSBpbm9kZSAlbGx1IiksCisJCQkJCWZuYW1lLCBpcC0+aV9p bm8sIGludW0pKSB7CisJCQkJZGVwLT5uYW1lWzBdID0gJy8nOworCQkJCWxp Ynhmc19kaXIyX2RhdGFfbG9nX2VudHJ5KHRwLCBicCwgZGVwKTsKKwkJCX0K KwkJCWNvbnRpbnVlOworCQl9CisKKwkJLyoKKwkJICogY2hlY2sgaWYgdGhp cyBpbm9kZSBpcyBsb3N0K2ZvdW5kIGRpciBpbiB0aGUgcm9vdAorCQkgKi8K KwkJaWYgKGludW0gPT0gbXAtPm1fc2Iuc2Jfcm9vdGlubyAmJiBzdHJjbXAo Zm5hbWUsIE9SUEhBTkFHRSkgPT0gMCkgeworCQkJLyoKKwkJCSAqIGlmIGl0 J3Mgbm90IGEgZGlyZWN0b3J5LCB0cmFzaCBpdAorCQkJICovCisJCQlpZiAo IWlub2RlX2lzYWRpcihpcmVjLCBpbm9fb2Zmc2V0KSkgeworCQkJCW5iYWQr KzsKKwkJCQlpZiAoZW50cnlfanVua2VkKF8oIiVzIChpbm8gJWxsdSkgaW4g cm9vdCAiCisJCQkJCQkiKCVsbHUpIGlzIG5vdCBhIGRpcmVjdG9yeSIpLAor CQkJCQkJT1JQSEFOQUdFLCBpbnVtLCBpcC0+aV9pbm8pKSB7CisJCQkJCWRl cC0+bmFtZVswXSA9ICcvJzsKKwkJCQkJbGlieGZzX2RpcjJfZGF0YV9sb2df ZW50cnkodHAsIGJwLCBkZXApOworCQkJCX0KKwkJCQljb250aW51ZTsKKwkJ CX0KKwkJCS8qCisJCQkgKiBpZiB0aGlzIGlzIGEgZHVwLCBpdCB3aWxsIGJl IHBpY2tlZCB1cCBiZWxvdywKKwkJCSAqIG90aGVyd2lzZSwgbWFyayBpdCBh cyB0aGUgb3JwaGFuYWdlIGZvciBsYXRlci4KKwkJCSAqLworCQkJaWYgKCFv cnBoYW5hZ2VfaW5vKQorCQkJCW9ycGhhbmFnZV9pbm8gPSBpbnVtOworCQl9 CisKKwkJLyoKKwkJICogY2hlY2sgZm9yIGR1cGxpY2F0ZSBuYW1lcyBpbiBk aXJlY3RvcnkuCisJCSAqLwogCQlpZiAoIWRpcl9oYXNoX2FkZChoYXNodGFi LCBhZGRyLCBpbnVtLCBkZXAtPm5hbWVsZW4sCiAJCQkJZGVwLT5uYW1lKSkg ewotCQkJZG9fd2FybigKLQkJXygiZW50cnkgXCIlc1wiIChpbm8gJWxsdSkg aW4gZGlyICVsbHUgaXMgYSBkdXBsaWNhdGUgbmFtZSIpLAotCQkJCWZuYW1l LCBpbnVtLCBpcC0+aV9pbm8pOwotCQkJaWYgKCFub19tb2RpZnkpIHsKLQkJ CQlpZiAodmVyYm9zZSkKLQkJCQkJZG9fd2FybigKLQkJCQkJXygiLCBtYXJr aW5nIGVudHJ5IHRvIGJlIGp1bmtlZFxuIikpOwotCQkJCWVsc2UKLQkJCQkJ ZG9fd2FybigiXG4iKTsKLQkJCX0gZWxzZSB7Ci0JCQkJZG9fd2FybihfKCIs IHdvdWxkIGp1bmsgZW50cnlcbiIpKTsKKwkJCW5iYWQrKzsKKwkJCWlmIChl bnRyeV9qdW5rZWQoXygiZW50cnkgXCIlc1wiIChpbm8gJWxsdSkgaW4gZGly ICIKKwkJCQkJIiVsbHUgaXMgYSBkdXBsaWNhdGUgbmFtZSIpLAorCQkJCQlm 
bmFtZSwgaW51bSwgaXAtPmlfaW5vKSkgeworCQkJCWRlcC0+bmFtZVswXSA9 ICcvJzsKKwkJCQlsaWJ4ZnNfZGlyMl9kYXRhX2xvZ19lbnRyeSh0cCwgYnAs IGRlcCk7CiAJCQl9Ci0JCQlkZXAtPm5hbWVbMF0gPSAnLyc7CisJCQljb250 aW51ZTsKIAkJfQogCQkvKgogCQkgKiBza2lwIGJvZ3VzIGVudHJpZXMgKGxl YWRpbmcgJy8nKS4gIHRoZXknbGwgYmUgZGVsZXRlZApAQCAtMjIxMiw2OCAr MjI5MywxMSBAQAogCQkJY29udGludWU7CiAJCX0KIAkJLyoKLQkJICogc3Bl Y2lhbCBjYXNlIHRoZSAibG9zdCtmb3VuZCIgZW50cnkgaWYgcG9pbnRpbmcK LQkJICogdG8gd2hlcmUgd2UgdGhpbmsgbG9zdCtmb3VuZCBzaG91bGQgYmUu ICBpZiB0aGF0J3MKLQkJICogdGhlIGNhc2UsIHRoYXQncyB0aGUgb25lIHdl IGNyZWF0ZWQgaW4gcGhhc2UgNi4KLQkJICoganVzdCBza2lwIGl0LiAgbm8g bmVlZCB0byBwcm9jZXNzIGl0IGFuZCBpdCdzIC4uCi0JCSAqIGxpbmsgaXMg YWxyZWFkeSBhY2NvdW50ZWQgZm9yLgotCQkgKi8KLQkJaWYgKGludW0gPT0g b3JwaGFuYWdlX2lubyAmJiBzdHJjbXAoZm5hbWUsIE9SUEhBTkFHRSkgPT0g MCkKLQkJCWNvbnRpbnVlOwotCQkvKgogCQkgKiBza2lwIGVudHJpZXMgd2l0 aCBib2d1cyBpbnVtYmVycyBpZiB3ZSdyZSBpbiBubyBtb2RpZnkgbW9kZQog CQkgKi8KIAkJaWYgKG5vX21vZGlmeSAmJiB2ZXJpZnlfaW51bShtcCwgaW51 bSkpCiAJCQljb250aW51ZTsKIAkJLyoKLQkJICogb2ssIG5vdyBoYW5kbGUg dGhlIHJlc3Qgb2YgdGhlIGNhc2VzIGJlc2lkZXMgJy4nIGFuZCAnLi4nCi0J CSAqLwotCQlpcmVjID0gZmluZF9pbm9kZV9yZWMoWEZTX0lOT19UT19BR05P KG1wLCBpbnVtKSwKLQkJCQkJWEZTX0lOT19UT19BR0lOTyhtcCwgaW51bSkp OwotCQlpZiAoaXJlYyA9PSBOVUxMKSAgewotCQkJbmJhZCsrOwotCQkJZG9f d2FybihfKCJlbnRyeSBcIiVzXCIgaW4gZGlyZWN0b3J5IGlub2RlICVsbHUg cG9pbnRzICIKLQkJCQkgICJ0byBub24tZXhpc3RlbnQgaW5vZGUsICIpLAot CQkJCWZuYW1lLCBpcC0+aV9pbm8pOwotCQkJaWYgKCFub19tb2RpZnkpICB7 Ci0JCQkJZGVwLT5uYW1lWzBdID0gJy8nOwotCQkJCWxpYnhmc19kaXIyX2Rh dGFfbG9nX2VudHJ5KHRwLCBicCwgZGVwKTsKLQkJCQlkb193YXJuKF8oIm1h cmtpbmcgZW50cnkgdG8gYmUganVua2VkXG4iKSk7Ci0JCQl9IGVsc2UgIHsK LQkJCQlkb193YXJuKF8oIndvdWxkIGp1bmsgZW50cnlcbiIpKTsKLQkJCX0K LQkJCWNvbnRpbnVlOwotCQl9Ci0JCWlub19vZmZzZXQgPSBYRlNfSU5PX1RP X0FHSU5PKG1wLCBpbnVtKSAtIGlyZWMtPmlub19zdGFydG51bTsKLQkJLyoK LQkJICogaWYgaXQncyBhIGZyZWUgaW5vZGUsIGJsb3cgb3V0IHRoZSBlbnRy eS4KLQkJICogYnkgbm93LCBhbnkgaW5vZGUgdGhhdCB3ZSB0aGluayBpcyBm 
cmVlCi0JCSAqIHJlYWxseSBpcyBmcmVlLgotCQkgKi8KLQkJaWYgKGlzX2lu b2RlX2ZyZWUoaXJlYywgaW5vX29mZnNldCkpICB7Ci0JCQkvKgotCQkJICog ZG9uJ3QgY29tcGxhaW4gaWYgdGhpcyBlbnRyeSBwb2ludHMgdG8gdGhlIG9s ZAotCQkJICogYW5kIG5vdy1mcmVlIGxvc3QrZm91bmQgaW5vZGUKLQkJCSAq LwotCQkJaWYgKHZlcmJvc2UgfHwgbm9fbW9kaWZ5IHx8IGludW0gIT0gb2xk X29ycGhhbmFnZV9pbm8pCi0JCQkJZG9fd2FybigKLQlfKCJlbnRyeSBcIiVz XCIgaW4gZGlyZWN0b3J5IGlub2RlICVsbHUgcG9pbnRzIHRvIGZyZWUgaW5v ZGUgJWxsdSIpLAotCQkJCQlmbmFtZSwgaXAtPmlfaW5vLCBpbnVtKTsKLQkJ CW5iYWQrKzsKLQkJCWlmICghbm9fbW9kaWZ5KSAgewotCQkJCWlmICh2ZXJi b3NlIHx8IGludW0gIT0gb2xkX29ycGhhbmFnZV9pbm8pCi0JCQkJCWRvX3dh cm4oCi0JCQkJCV8oIiwgbWFya2luZyBlbnRyeSB0byBiZSBqdW5rZWRcbiIp KTsKLQkJCQllbHNlCi0JCQkJCWRvX3dhcm4oIlxuIik7Ci0JCQkJZGVwLT5u YW1lWzBdID0gJy8nOwotCQkJCWxpYnhmc19kaXIyX2RhdGFfbG9nX2VudHJ5 KHRwLCBicCwgZGVwKTsKLQkJCX0gZWxzZSAgewotCQkJCWRvX3dhcm4oXygi LCB3b3VsZCBqdW5rIGVudHJ5XG4iKSk7Ci0JCQl9Ci0JCQljb250aW51ZTsK LQkJfQotCQkvKgogCQkgKiBjaGVjayBlYXN5IGNhc2UgZmlyc3QsIHJlZ3Vs YXIgaW5vZGUsIGp1c3QgYnVtcAogCQkgKiB0aGUgbGluayBjb3VudCBhbmQg Y29udGludWUKIAkJICovCkBAIC0yMzEyLDcgKzIzMzYsNyBAQAogCQkJaWYg KCFub19tb2RpZnkpICB7CiAJCQkJZGVwLT5uYW1lWzBdID0gJy8nOwogCQkJ CWxpYnhmc19kaXIyX2RhdGFfbG9nX2VudHJ5KHRwLCBicCwgZGVwKTsKLQkJ CQlpZiAodmVyYm9zZSB8fCBpbnVtICE9IG9sZF9vcnBoYW5hZ2VfaW5vKQor CQkJCWlmICh2ZXJib3NlKQogCQkJCQlkb193YXJuKAogCQkJCQlfKCJcdHdp bGwgY2xlYXIgZW50cnkgXCIlc1wiXG4iKSwKIAkJCQkJCWZuYW1lKTsKQEAg LTI3NjcsMzYgKzI3OTEsMTQgQEAKIAkJQVNTRVJUKG5vX21vZGlmeSB8fCBs aW5vICE9IE5VTExGU0lOTyk7CiAJCUFTU0VSVChub19tb2RpZnkgfHwgIXZl cmlmeV9pbnVtKG1wLCBsaW5vKSk7CiAKLQkJLyoKLQkJICogc3BlY2lhbCBj YXNlIHRoZSAibG9zdCtmb3VuZCIgZW50cnkgaWYgaXQncyBwb2ludGluZwot CQkgKiB0byB3aGVyZSB3ZSB0aGluayBsb3N0K2ZvdW5kIHNob3VsZCBiZS4g IGlmIHRoYXQncwotCQkgKiB0aGUgY2FzZSwgdGhhdCdzIHRoZSBvbmUgd2Ug Y3JlYXRlZCBpbiBwaGFzZSA2LgotCQkgKiBqdXN0IHNraXAgaXQuICBubyBu ZWVkIHRvIHByb2Nlc3MgaXQgYW5kIGl0cyAuLgotCQkgKiBsaW5rIGlzIGFs cmVhZHkgYWNjb3VudGVkIGZvci4gIEFsc28gc2tpcCBlbnRyaWVzCi0JCSAq 
IHdpdGggYm9ndXMgaW5vZGUgbnVtYmVycyBpZiB3ZSdyZSBpbiBubyBtb2Rp ZnkgbW9kZS4KLQkJICovCi0KLQkJaWYgKChsaW5vID09IG9ycGhhbmFnZV9p bm8gJiYgc3RyY21wKGZuYW1lLCBPUlBIQU5BR0UpID09IDApCi0JCQkJfHwg KG5vX21vZGlmeSAmJiB2ZXJpZnlfaW51bShtcCwgbGlubykpKSB7Ci0JCQlu ZXh0X3NmZSA9ICh4ZnNfZGlyX3NmX2VudHJ5X3QgKikKLQkJCQkoKF9fcHNp bnRfdCkgc2ZfZW50cnkgKwotCQkJCVhGU19ESVJfU0ZfRU5UU0laRV9CWUVO VFJZKHNmX2VudHJ5KSk7Ci0JCQljb250aW51ZTsKLQkJfQotCiAJCWlyZWMg PSBmaW5kX2lub2RlX3JlYyhYRlNfSU5PX1RPX0FHTk8obXAsIGxpbm8pLAog CQkJCQlYRlNfSU5PX1RPX0FHSU5PKG1wLCBsaW5vKSk7Ci0KLQkJaWYgKGly ZWMgPT0gTlVMTCAmJiBub19tb2RpZnkpICB7Ci0JCQlkb193YXJuKAotXygi ZW50cnkgXCIlc1wiIGluIHNob3J0Zm9ybSBkaXIgJWxsdSByZWZlcmVuY2Vz IG5vbi1leGlzdGVudCBpbm8gJWxsdVxuIiksCisJCWlmIChpcmVjID09IE5V TEwpIHsKKwkJCWRvX3dhcm4oXygiZW50cnkgXCIlc1wiIGluIHNob3J0Zm9y bSBkaXIgJWxsdSAiCisJCQkJInJlZmVyZW5jZXMgbm9uLWV4aXN0ZW50IGlu byAlbGx1IiksCiAJCQkJZm5hbWUsIGlubywgbGlubyk7Ci0JCQlkb193YXJu KF8oIndvdWxkIGp1bmsgZW50cnlcbiIpKTsKLQkJCWNvbnRpbnVlOworCQkJ Z290byBkb19qdW5raXQ7CiAJCX0KLQotCQlBU1NFUlQoaXJlYyAhPSBOVUxM KTsKLQogCQlpbm9fb2Zmc2V0ID0gWEZTX0lOT19UT19BR0lOTyhtcCwgbGlu bykgLSBpcmVjLT5pbm9fc3RhcnRudW07CiAKIAkJLyoKQEAgLTI4MDQsNDIg KzI4MDYsNDIgQEAKIAkJICogYnkgbm93LCBhbnkgaW5vZGUgdGhhdCB3ZSB0 aGluayBpcyBmcmVlCiAJCSAqIHJlYWxseSBpcyBmcmVlLgogCQkgKi8KLQkJ aWYgKGlzX2lub2RlX2ZyZWUoaXJlYywgaW5vX29mZnNldCkpICB7CisJCWlm ICghaXNfaW5vZGVfZnJlZShpcmVjLCBpbm9fb2Zmc2V0KSkgIHsKKwkJCWRv X3dhcm4oXygiZW50cnkgXCIlc1wiIGluIHNob3J0Zm9ybSBkaXIgaW5vZGUg JWxsdSAiCisJCQkJInBvaW50cyB0byBmcmVlIGlub2RlICVsbHUiKSwgZm5h bWUsIGlubywgbGlubyk7CisJCQlnb3RvIGRvX2p1bmtpdDsKKwkJfQorCQkv KgorCQkgKiBjaGVjayBpZiB0aGlzIGlub2RlIGlzIGxvc3QrZm91bmQgZGly IGluIHRoZSByb290CisJCSAqLworCQlpZiAoaW5vID09IG1wLT5tX3NiLnNi X3Jvb3Rpbm8gJiYgc3RyY21wKGZuYW1lLCBPUlBIQU5BR0UpID09IDApIHsK IAkJCS8qCi0JCQkgKiBkb24ndCBjb21wbGFpbiBpZiB0aGlzIGVudHJ5IHBv aW50cyB0byB0aGUgb2xkCi0JCQkgKiBhbmQgbm93LWZyZWUgbG9zdCtmb3Vu ZCBpbm9kZQorCQkJICogaWYgaXQncyBub3QgYSBkaXJlY3RvcnksIHRyYXNo 
IGl0CiAJCQkgKi8KLQkJCWlmICh2ZXJib3NlIHx8IG5vX21vZGlmeSB8fCBs aW5vICE9IG9sZF9vcnBoYW5hZ2VfaW5vKQotCQkJCWRvX3dhcm4oCi1fKCJl bnRyeSBcIiVzXCIgaW4gc2hvcnRmb3JtIGRpciBpbm9kZSAlbGx1IHBvaW50 cyB0byBmcmVlIGlub2RlICVsbHVcbiIpLAotCQkJCQlmbmFtZSwgaW5vLCBs aW5vKTsKLQotCQkJaWYgKCFub19tb2RpZnkpICB7Ci0JCQkJanVua2l0ID0g MTsKLQkJCX0gZWxzZSAgewotCQkJCWRvX3dhcm4oXygid291bGQganVuayBl bnRyeSBcIiVzXCJcbiIpLAotCQkJCQlmbmFtZSk7CisJCQlpZiAoIWlub2Rl X2lzYWRpcihpcmVjLCBpbm9fb2Zmc2V0KSkgeworCQkJCWRvX3dhcm4oXygi JXMgKGlubyAlbGx1KSBpbiByb290ICglbGx1KSBpcyBub3QgIgorCQkJCQki YSBkaXJlY3RvcnkiKSwgT1JQSEFOQUdFLCBsaW5vLCBpbm8pOworCQkJCWdv dG8gZG9fanVua2l0OwogCQkJfQotCQl9IGVsc2UgaWYgKCFkaXJfaGFzaF9h ZGQoaGFzaHRhYiwKLQkJCQkoeGZzX2RpcjJfZGF0YXB0cl90KShzZl9lbnRy eSAtICZzZi0+bGlzdFswXSksCi0JCQkJbGlubywgc2ZfZW50cnktPm5hbWVs ZW4sIHNmX2VudHJ5LT5uYW1lKSkgewogCQkJLyoKLQkJCSAqIGNoZWNrIGZv ciBkdXBsaWNhdGUgbmFtZXMgaW4gZGlyZWN0b3J5LgorCQkJICogaWYgdGhp cyBpcyBhIGR1cCwgaXQgd2lsbCBiZSBwaWNrZWQgdXAgYmVsb3csCisJCQkg KiBvdGhlcndpc2UsIG1hcmsgaXQgYXMgdGhlIG9ycGhhbmFnZSBmb3IgbGF0 ZXIuCiAJCQkgKi8KLQkJCWRvX3dhcm4oCi0JCV8oImVudHJ5IFwiJXNcIiAo aW5vICVsbHUpIGluIGRpciAlbGx1IGlzIGEgZHVwbGljYXRlIG5hbWUiKSwK LQkJCQlmbmFtZSwgbGlubywgaW5vKTsKLQkJCWlmICghbm9fbW9kaWZ5KSB7 Ci0JCQkJanVua2l0ID0gMTsKLQkJCQlpZiAodmVyYm9zZSkKLQkJCQkJZG9f d2FybigKLQkJCQkJXygiLCBtYXJraW5nIGVudHJ5IHRvIGJlIGp1bmtlZFxu IikpOwotCQkJCWVsc2UKLQkJCQkJZG9fd2FybigiXG4iKTsKLQkJCX0gZWxz ZSB7Ci0JCQkJZG9fd2FybihfKCIsIHdvdWxkIGp1bmsgZW50cnlcbiIpKTsK LQkJCX0KLQkJfSBlbHNlIGlmICghaW5vZGVfaXNhZGlyKGlyZWMsIGlub19v ZmZzZXQpKSAgeworCQkJaWYgKCFvcnBoYW5hZ2VfaW5vKQorCQkJCW9ycGhh bmFnZV9pbm8gPSBsaW5vOworCQl9CisJCS8qCisJCSAqIGNoZWNrIGZvciBk dXBsaWNhdGUgbmFtZXMgaW4gZGlyZWN0b3J5LgorCQkgKi8KKwkJaWYgKCFk aXJfaGFzaF9hZGQoaGFzaHRhYiwKKwkJCQkoeGZzX2RpcjJfZGF0YXB0cl90 KShzZl9lbnRyeSAtICZzZi0+bGlzdFswXSksCisJCQkJbGlubywgc2ZfZW50 cnktPm5hbWVsZW4sIHNmX2VudHJ5LT5uYW1lKSkgeworCQkJZG9fd2Fybihf KCJlbnRyeSBcIiVzXCIgKGlubyAlbGx1KSBpbiBkaXIgJWxsdSBpcyBhICIK 
KwkJCQkiZHVwbGljYXRlIG5hbWUiKSwgZm5hbWUsIGxpbm8sIGlubyk7CisJ CQlnb3RvIGRvX2p1bmtpdDsKKwkJfQorCisJCWlmICghaW5vZGVfaXNhZGly KGlyZWMsIGlub19vZmZzZXQpKSAgewogCQkJLyoKIAkJCSAqIGNoZWNrIGVh c3kgY2FzZSBmaXJzdCwgcmVndWxhciBpbm9kZSwganVzdCBidW1wCiAJCQkg KiB0aGUgbGluayBjb3VudCBhbmQgY29udGludWUKQEAgLTI4NjAsOCArMjg2 Miw4IEBACiAJCQkgKi8KIAkJCWlmIChpc19pbm9kZV9yZWFjaGVkKGlyZWMs IGlub19vZmZzZXQpKSAgewogCQkJCWp1bmtpdCA9IDE7Ci0JCQkJZG9fd2Fy bigKLV8oImVudHJ5IFwiJXNcIiBpbiBkaXIgJWxsdSByZWZlcmVuY2VzIGFs cmVhZHkgY29ubmVjdGVkIGRpciBpbm8gJWxsdSxcbiIpLAorCQkJCWRvX3dh cm4oXygiZW50cnkgXCIlc1wiIGluIGRpciAlbGx1IHJlZmVyZW5jZXMgIgor CQkJCQkiYWxyZWFkeSBjb25uZWN0ZWQgZGlyIGlubyAlbGx1LFxuIiksCiAJ CQkJCWZuYW1lLCBpbm8sIGxpbm8pOwogCQkJfSBlbHNlIGlmIChwYXJlbnQg PT0gaW5vKSAgewogCQkJCWFkZF9pbm9kZV9yZWFjaGVkKGlyZWMsIGlub19v ZmZzZXQpOwpAQCAtMjg3MiwxMyArMjg3NCwxNCBAQAogCQkJCQlwdXNoX2Rp cihzdGFjaywgbGlubyk7CiAJCQl9IGVsc2UgIHsKIAkJCQlqdW5raXQgPSAx OwotCQkJCWRvX3dhcm4oCi1fKCJlbnRyeSBcIiVzXCIgaW4gZGlyICVsbHUg bm90IGNvbnNpc3RlbnQgd2l0aCAuLiB2YWx1ZSAoJWxsdSkgaW4gZGlyIGlu byAlbGx1LFxuIiksCisJCQkJZG9fd2FybihfKCJlbnRyeSBcIiVzXCIgaW4g ZGlyICVsbHUgbm90ICIKKwkJCQkJImNvbnNpc3RlbnQgd2l0aCAuLiB2YWx1 ZSAoJWxsdSkgaW4gIgorCQkJCQkiZGlyIGlubyAlbGx1IiksCiAJCQkJCWZu YW1lLCBpbm8sIHBhcmVudCwgbGlubyk7CiAJCQl9CiAJCX0KLQogCQlpZiAo anVua2l0KSAgeworZG9fanVua2l0OgogCQkJaWYgKCFub19tb2RpZnkpICB7 CiAJCQkJdG1wX2VsZW4gPSBYRlNfRElSX1NGX0VOVFNJWkVfQllFTlRSWShz Zl9lbnRyeSk7CiAJCQkJdG1wX3NmZSA9ICh4ZnNfZGlyX3NmX2VudHJ5X3Qg KikKQEAgLTI5MTAsMTIgKzI5MTMsMTIgQEAKIAogCQkJCSppbm9fZGlydHkg PSAxOwogCi0JCQkJaWYgKHZlcmJvc2UgfHwgbGlubyAhPSBvbGRfb3JwaGFu YWdlX2lubykKLQkJCQkJZG9fd2FybigKLQkJCV8oImp1bmtpbmcgZW50cnkg XCIlc1wiIGluIGRpcmVjdG9yeSBpbm9kZSAlbGx1XG4iKSwKLQkJCQkJCWZu YW1lLCBsaW5vKTsKKwkJCQlpZiAodmVyYm9zZSkKKwkJCQkJZG9fd2Fybihf KCJqdW5raW5nIGVudHJ5XG4iKSk7CisJCQkJZWxzZQorCQkJCQlkb193YXJu KCJcbiIpOwogCQkJfSBlbHNlICB7Ci0JCQkJZG9fd2FybihfKCJ3b3VsZCBq dW5rIGVudHJ5IFwiJXNcIlxuIiksIGZuYW1lKTsKKwkJCQlkb193YXJuKF8o 
IndvdWxkIGp1bmsgZW50cnlcbiIpLCBmbmFtZSk7CiAJCQl9CiAJCX0KIApA QCAtMzE3NCwyMyArMzE3Nyw2IEBACiAJCUFTU0VSVChub19tb2RpZnkgfHwg IXZlcmlmeV9pbnVtKG1wLCBsaW5vKSk7CiAKIAkJLyoKLQkJICogc3BlY2lh bCBjYXNlIHRoZSAibG9zdCtmb3VuZCIgZW50cnkgaWYgaXQncyBwb2ludGlu ZwotCQkgKiB0byB3aGVyZSB3ZSB0aGluayBsb3N0K2ZvdW5kIHNob3VsZCBi ZS4gIGlmIHRoYXQncwotCQkgKiB0aGUgY2FzZSwgdGhhdCdzIHRoZSBvbmUg d2UgY3JlYXRlZCBpbiBwaGFzZSA2LgotCQkgKiBqdXN0IHNraXAgaXQuICBu byBuZWVkIHRvIHByb2Nlc3MgaXQgYW5kIGl0cyAuLgotCQkgKiBsaW5rIGlz IGFscmVhZHkgYWNjb3VudGVkIGZvci4KLQkJICovCi0KLQkJaWYgKGxpbm8g PT0gb3JwaGFuYWdlX2lubyAmJiBzdHJjbXAoZm5hbWUsIE9SUEhBTkFHRSkg PT0gMCkgewotCQkJaWYgKGxpbm8gPiBYRlNfRElSMl9NQVhfU0hPUlRfSU5V TSkKLQkJCQlpOCsrOwotCQkJbmV4dF9zZmVwID0gKHhmc19kaXIyX3NmX2Vu dHJ5X3QgKikKLQkJCQkoKF9fcHNpbnRfdCkgc2ZlcCArCi0JCQkJWEZTX0RJ UjJfU0ZfRU5UU0laRV9CWUVOVFJZKHNmcCwgc2ZlcCkpOwotCQkJY29udGlu dWU7Ci0JCX0KLQotCQkvKgogCQkgKiBBbHNvIHNraXAgZW50cmllcyB3aXRo IGJvZ3VzIGlub2RlIG51bWJlcnMgaWYgd2UncmUKIAkJICogaW4gbm8gbW9k aWZ5IG1vZGUuCiAJCSAqLwpAQCAtMzIwNSwxNiArMzE5MSwxMyBAQAogCQlp cmVjID0gZmluZF9pbm9kZV9yZWMoWEZTX0lOT19UT19BR05PKG1wLCBsaW5v KSwKIAkJCQkJWEZTX0lOT19UT19BR0lOTyhtcCwgbGlubykpOwogCi0JCWlm IChpcmVjID09IE5VTEwgJiYgbm9fbW9kaWZ5KSAgeworCQlpZiAoaXJlYyA9 PSBOVUxMKSAgewogCQkJZG9fd2FybihfKCJlbnRyeSBcIiVzXCIgaW4gc2hv cnRmb3JtIGRpcmVjdG9yeSAlbGx1ICIKLQkJCQkgICJyZWZlcmVuY2VzIG5v bi1leGlzdGVudCBpbm9kZSAlbGx1XG4iKSwKKwkJCQkgICJyZWZlcmVuY2Vz IG5vbi1leGlzdGVudCBpbm9kZSAlbGx1IiksCiAJCQkJZm5hbWUsIGlubywg bGlubyk7Ci0JCQlkb193YXJuKF8oIndvdWxkIGp1bmsgZW50cnlcbiIpKTsK LQkJCWNvbnRpbnVlOworCQkJZ290byBkb19qdW5raXQ7CiAJCX0KIAotCQlB U1NFUlQoaXJlYyAhPSBOVUxMKTsKLQogCQlpbm9fb2Zmc2V0ID0gWEZTX0lO T19UT19BR0lOTyhtcCwgbGlubykgLSBpcmVjLT5pbm9fc3RhcnRudW07CiAK IAkJLyoKQEAgLTMyMjMsNDIgKzMyMDYsNDEgQEAKIAkJICogcmVhbGx5IGlz IGZyZWUuCiAJCSAqLwogCQlpZiAoaXNfaW5vZGVfZnJlZShpcmVjLCBpbm9f b2Zmc2V0KSkgIHsKKwkJCWRvX3dhcm4oXygiZW50cnkgXCIlc1wiIGluIHNo b3J0Zm9ybSBkaXJlY3RvcnkgIgorCQkJCSAgImlub2RlICVsbHUgcG9pbnRz 
IHRvIGZyZWUgaW5vZGUgJWxsdSIpLAorCQkJCWZuYW1lLCBpbm8sIGxpbm8p OworCQkJZ290byBkb19qdW5raXQ7CisJCX0KKwkJLyoKKwkJICogY2hlY2sg aWYgdGhpcyBpbm9kZSBpcyBsb3N0K2ZvdW5kIGRpciBpbiB0aGUgcm9vdAor CQkgKi8KKwkJaWYgKGlubyA9PSBtcC0+bV9zYi5zYl9yb290aW5vICYmIHN0 cmNtcChmbmFtZSwgT1JQSEFOQUdFKSA9PSAwKSB7CiAJCQkvKgotCQkJICog ZG9uJ3QgY29tcGxhaW4gaWYgdGhpcyBlbnRyeSBwb2ludHMgdG8gdGhlIG9s ZAotCQkJICogYW5kIG5vdy1mcmVlIGxvc3QrZm91bmQgaW5vZGUKKwkJCSAq IGlmIGl0J3Mgbm90IGEgZGlyZWN0b3J5LCB0cmFzaCBpdAogCQkJICovCi0J CQlpZiAodmVyYm9zZSB8fCBub19tb2RpZnkgfHwgbGlubyAhPSBvbGRfb3Jw aGFuYWdlX2lubykKLQkJCQlkb193YXJuKF8oImVudHJ5IFwiJXNcIiBpbiBz aG9ydGZvcm0gZGlyZWN0b3J5ICIKLQkJCQkJICAiaW5vZGUgJWxsdSBwb2lu dHMgdG8gZnJlZSBpbm9kZSAiCi0JCQkJCSAgIiVsbHVcbiIpLAotCQkJCQlm bmFtZSwgaW5vLCBsaW5vKTsKLQotCQkJaWYgKCFub19tb2RpZnkpICB7Ci0J CQkJanVua2l0ID0gMTsKLQkJCX0gZWxzZSAgewotCQkJCWRvX3dhcm4oXygi d291bGQganVuayBlbnRyeSBcIiVzXCJcbiIpLAotCQkJCQlmbmFtZSk7CisJ CQlpZiAoIWlub2RlX2lzYWRpcihpcmVjLCBpbm9fb2Zmc2V0KSkgeworCQkJ CWRvX3dhcm4oXygiJXMgKGlubyAlbGx1KSBpbiByb290ICglbGx1KSBpcyBu b3QgIgorCQkJCQkiYSBkaXJlY3RvcnkiKSwgT1JQSEFOQUdFLCBsaW5vLCBp bm8pOworCQkJCWdvdG8gZG9fanVua2l0OwogCQkJfQotCQl9IGVsc2UgaWYg KCFkaXJfaGFzaF9hZGQoaGFzaHRhYiwgKHhmc19kaXIyX2RhdGFwdHJfdCkK LQkJCQkJKHNmZXAgLSBYRlNfRElSMl9TRl9GSVJTVEVOVFJZKHNmcCkpLAot CQkJCWxpbm8sIHNmZXAtPm5hbWVsZW4sIHNmZXAtPm5hbWUpKSB7CiAJCQkv KgotCQkJICogY2hlY2sgZm9yIGR1cGxpY2F0ZSBuYW1lcyBpbiBkaXJlY3Rv cnkuCisJCQkgKiBpZiB0aGlzIGlzIGEgZHVwLCBpdCB3aWxsIGJlIHBpY2tl ZCB1cCBiZWxvdywKKwkJCSAqIG90aGVyd2lzZSwgbWFyayBpdCBhcyB0aGUg b3JwaGFuYWdlIGZvciBsYXRlci4KIAkJCSAqLwotCQkJZG9fd2FybigKLQkJ XygiZW50cnkgXCIlc1wiIChpbm8gJWxsdSkgaW4gZGlyICVsbHUgaXMgYSBk dXBsaWNhdGUgbmFtZSIpLAotCQkJCWZuYW1lLCBsaW5vLCBpbm8pOwotCQkJ aWYgKCFub19tb2RpZnkpIHsKLQkJCQlqdW5raXQgPSAxOwotCQkJCWlmICh2 ZXJib3NlKQotCQkJCQlkb193YXJuKAotCQkJCQlfKCIsIG1hcmtpbmcgZW50 cnkgdG8gYmUganVua2VkXG4iKSk7Ci0JCQkJZWxzZQotCQkJCQlkb193YXJu KCJcbiIpOwotCQkJfSBlbHNlIHsKLQkJCQlkb193YXJuKF8oIiwgd291bGQg 
anVuayBlbnRyeVxuIikpOwotCQkJfQotCQl9IGVsc2UgaWYgKCFpbm9kZV9p c2FkaXIoaXJlYywgaW5vX29mZnNldCkpICB7CisJCQlpZiAoIW9ycGhhbmFn ZV9pbm8pCisJCQkJb3JwaGFuYWdlX2lubyA9IGxpbm87CisJCX0KKwkJLyoK KwkJICogY2hlY2sgZm9yIGR1cGxpY2F0ZSBuYW1lcyBpbiBkaXJlY3Rvcnku CisJCSAqLworCQlpZiAoIWRpcl9oYXNoX2FkZChoYXNodGFiLCAoeGZzX2Rp cjJfZGF0YXB0cl90KQorCQkJCQkoc2ZlcCAtIFhGU19ESVIyX1NGX0ZJUlNU RU5UUlkoc2ZwKSksCisJCQkJbGlubywgc2ZlcC0+bmFtZWxlbiwgc2ZlcC0+ bmFtZSkpIHsKKwkJCWRvX3dhcm4oXygiZW50cnkgXCIlc1wiIChpbm8gJWxs dSkgaW4gZGlyICVsbHUgaXMgYSAiCisJCQkJImR1cGxpY2F0ZSBuYW1lIiks IGZuYW1lLCBsaW5vLCBpbm8pOworCQkJZ290byBkb19qdW5raXQ7CisJCX0K KwkJaWYgKCFpbm9kZV9pc2FkaXIoaXJlYywgaW5vX29mZnNldCkpICB7CiAJ CQkvKgogCQkJICogY2hlY2sgZWFzeSBjYXNlIGZpcnN0LCByZWd1bGFyIGlu b2RlLCBqdXN0IGJ1bXAKIAkJCSAqIHRoZSBsaW5rIGNvdW50CkBAIC0zMjk1 LDYgKzMyNzcsNyBAQAogCQl9CiAKIAkJaWYgKGp1bmtpdCkgIHsKK2RvX2p1 bmtpdDoKIAkJCWlmICghbm9fbW9kaWZ5KSAgewogCQkJCXRtcF9lbGVuID0g WEZTX0RJUjJfU0ZfRU5UU0laRV9CWUVOVFJZKHNmcCwgc2ZlcCk7CiAJCQkJ dG1wX3NmZXAgPSAoeGZzX2RpcjJfc2ZfZW50cnlfdCAqKQpAQCAtMzMyNiwx MiArMzMwOSwxMiBAQAogCiAJCQkJKmlub19kaXJ0eSA9IDE7CiAKLQkJCQlp ZiAodmVyYm9zZSB8fCBsaW5vICE9IG9sZF9vcnBoYW5hZ2VfaW5vKQotCQkJ CQlkb193YXJuKF8oImp1bmtpbmcgZW50cnkgXCIlc1wiIGluICIKLQkJCQkJ CSAgImRpcmVjdG9yeSBpbm9kZSAlbGx1XG4iKSwKLQkJCQkJCWZuYW1lLCBs aW5vKTsKKwkJCQlpZiAodmVyYm9zZSkKKwkJCQkJZG9fd2FybihfKCJqdW5r aW5nIGVudHJ5XG4iKSk7CisJCQkJZWxzZQorCQkJCQlkb193YXJuKCJcbiIp OwogCQkJfSBlbHNlICB7Ci0JCQkJZG9fd2FybihfKCJ3b3VsZCBqdW5rIGVu dHJ5IFwiJXNcIlxuIiksIGZuYW1lKTsKKwkJCQlkb193YXJuKF8oIndvdWxk IGp1bmsgZW50cnlcbiIpKTsKIAkJCX0KIAkJfSBlbHNlIGlmIChsaW5vID4g WEZTX0RJUjJfTUFYX1NIT1JUX0lOVU0pCiAJCQlpOCsrOwpAQCAtMzQ2NSwy NSArMzQ0OCw2IEBACiAJCQkgKiBndWFyYW50ZWVkIGJ5IHBoYXNlIDMgYW5k L29yIGJlbG93LgogCQkJICovCiAJCQlhZGRfaW5vZGVfcmVhY2hlZChpcmVj LCBpbm9fb2Zmc2V0KTsKLQkJCS8qCi0JCQkgKiBhY2NvdW50IGZvciBsaW5r IGZvciB0aGUgb3JwaGFuYWdlCi0JCQkgKiAibG9zdCtmb3VuZCIuICBpZiB3 ZSdyZSBydW5uaW5nIGluCi0JCQkgKiBtb2RpZnkgbW9kZSBhbmQgaXQgYWxy 
ZWFkeSBleGlzdGVkLAotCQkJICogd2UgZGVsZXRlZCBpdCBzbyBpdCdzICcu LicgcmVmZXJlbmNlCi0JCQkgKiBuZXZlciBnb3QgY291bnRlZC4gIHNvIGFk ZCBpdCBoZXJlIGlmCi0JCQkgKiB3ZSdyZSBnb2luZyB0byBjcmVhdGUgbG9z dCtmb3VuZC4KLQkJCSAqCi0JCQkgKiBpZiB3ZSdyZSBydW5uaW5nIGluIG5v X21vZGlmeSBtb2RlLAotCQkJICogd2UgbmV2ZXIgZGVsZXRlZCBsb3N0K2Zv dW5kIGFuZCB3ZSdyZQotCQkJICogbm90IGdvaW5nIHRvIGNyZWF0ZSBpdCBz byBkbyBub3RoaW5nLgotCQkJICoKLQkJCSAqIGVpdGhlciB3YXksIHRoZSBj b3VudHMgd2lsbCBtYXRjaCB3aGVuCi0JCQkgKiB3ZSBsb29rIGF0IHRoZSBy b290IGlub2RlJ3MgbmxpbmtzCi0JCQkgKiBmaWVsZCBhbmQgY29tcGFyZSB0 aGF0IHRvIG91ciBpbmNvcmUKLQkJCSAqIGNvdW50IGluIHBoYXNlIDcuCi0J CQkgKi8KLQkJCWlmICghbm9fbW9kaWZ5KQotCQkJCWFkZF9pbm9kZV9yZWYo aXJlYywgaW5vX29mZnNldCk7CiAJCX0KIAogCQlhZGRfaW5vZGVfcmVmY2hl Y2tlZChpbm8sIGlyZWMsIGlub19vZmZzZXQpOwpAQCAtMzU2MiwzNiArMzUy Niw2IEBACiAKIAkJaGFzaHZhbCA9IDA7CiAKLQkJaWYgKCFub19tb2RpZnkg JiYgIW9ycGhhbmFnZV9lbnRlcmVkICYmCi0JCSAgICBpbm8gPT0gbXAtPm1f c2Iuc2Jfcm9vdGlubykgewotCQkJZG9fd2FybihfKCJyZS1lbnRlcmluZyAl cyBpbnRvIHJvb3QgZGlyZWN0b3J5XG4iKSwKLQkJCQlPUlBIQU5BR0UpOwot CQkJdHAgPSBsaWJ4ZnNfdHJhbnNfYWxsb2MobXAsIDApOwotCQkJbnJlcyA9 IFhGU19NS0RJUl9TUEFDRV9SRVMobXAsIHN0cmxlbihPUlBIQU5BR0UpKTsK LQkJCWVycm9yID0gbGlieGZzX3RyYW5zX3Jlc2VydmUodHAsIG5yZXMsCi0J CQkJCVhGU19NS0RJUl9MT0dfUkVTKG1wKSwgMCwKLQkJCQkJWEZTX1RSQU5T X1BFUk1fTE9HX1JFUywKLQkJCQkJWEZTX01LRElSX0xPR19DT1VOVCk7Ci0J CQlpZiAoZXJyb3IpCi0JCQkJcmVzX2ZhaWxlZChlcnJvcik7Ci0JCQlsaWJ4 ZnNfdHJhbnNfaWpvaW4odHAsIGlwLCAwKTsKLQkJCWxpYnhmc190cmFuc19p aG9sZCh0cCwgaXApOwotCQkJWEZTX0JNQVBfSU5JVCgmZmxpc3QsICZmaXJz dCk7Ci0JCQlpZiAoKGVycm9yID0gZGlyX2NyZWF0ZW5hbWUobXAsIHRwLCBp cCwgT1JQSEFOQUdFLAotCQkJCQkJc3RybGVuKE9SUEhBTkFHRSksCi0JCQkJ CQlvcnBoYW5hZ2VfaW5vLCAmZmlyc3QsICZmbGlzdCwKLQkJCQkJCW5yZXMp KSkKLQkJCQlkb19lcnJvcihfKCJjYW4ndCBtYWtlICVzIGVudHJ5IGluIHJv b3QgaW5vZGUgIgotCQkJCQkgICAiJWxsdSwgY3JlYXRlbmFtZSBlcnJvciAl ZFxuIiksCi0JCQkJCU9SUEhBTkFHRSwgaW5vLCBlcnJvcik7Ci0JCQlsaWJ4 ZnNfdHJhbnNfbG9nX2lub2RlKHRwLCBpcCwgWEZTX0lMT0dfQ09SRSk7Ci0J 
CQllcnJvciA9IGxpYnhmc19ibWFwX2ZpbmlzaCgmdHAsICZmbGlzdCwgZmly c3QsICZjb21taXR0ZWQpOwotCQkJQVNTRVJUKGVycm9yID09IDApOwotCQkJ bGlieGZzX3RyYW5zX2NvbW1pdCh0cCwKLQkJCQlYRlNfVFJBTlNfUkVMRUFT RV9MT0dfUkVTIHwgWEZTX1RSQU5TX1NZTkMsIDApOwotCQkJb3JwaGFuYWdl X2VudGVyZWQgPSAxOwotCQl9Ci0KIAkJLyoKIAkJICogaWYgd2UgaGF2ZSB0 byBjcmVhdGUgYSAuLiBmb3IgLywgZG8gaXQgbm93ICpiZWZvcmUqCiAJCSAq IHdlIGRlbGV0ZSB0aGUgYm9ndXMgZW50cmllcywgb3RoZXJ3aXNlIHRoZSBk aXJlY3RvcnkKQEAgLTM4MjMsNiArMzc1Nyw1MSBAQAogfQogCiBzdGF0aWMg dm9pZAorY2hlY2tfZm9yX29ycGhhbmVkX2lub2RlcygKKwl4ZnNfbW91bnRf dAkJKm1wLAorCWlub190cmVlX25vZGVfdAkJKmlyZWMpCit7CisJaW50CQkJ aTsKKwlpbnQJCQllcnI7CisJeGZzX2lub190CQlpbm87CisKKwlmb3IgKGkg PSAwOyBpIDwgWEZTX0lOT0RFU19QRVJfQ0hVTks7IGkrKykgIHsKKwkJQVNT RVJUKGlzX2lub2RlX2NvbmZpcm1lZChpcmVjLCBpKSk7CisJCWlmIChpc19p bm9kZV9mcmVlKGlyZWMsIGkpKQorCQkJY29udGludWU7CisKKwkJaWYgKCFp c19pbm9kZV9yZWFjaGVkKGlyZWMsIGkpKSB7CisJCQlBU1NFUlQoaW5vZGVf aXNhZGlyKGlyZWMsIGkpIHx8CisJCQkJbnVtX2lub2RlX3JlZmVyZW5jZXMo aXJlYywgaSkgPT0gMCk7CisJCQlpbm8gPSBYRlNfQUdJTk9fVE9fSU5PKG1w LCBpLCBpICsgaXJlYy0+aW5vX3N0YXJ0bnVtKTsKKwkJCWlmIChpbm9kZV9p c2FkaXIoaXJlYywgaSkpCisJCQkJZG9fd2FybihfKCJkaXNjb25uZWN0ZWQg ZGlyIGlub2RlICVsbHUsICIpLCBpbm8pOworCQkJZWxzZQorCQkJCWRvX3dh cm4oXygiZGlzY29ubmVjdGVkIGlub2RlICVsbHUsICIpLCBpbm8pOworCQkJ aWYgKCFub19tb2RpZnkpICB7CisJCQkgICAgCWlmICghb3JwaGFuYWdlX2lu bykKKwkJCQkJb3JwaGFuYWdlX2lubyA9IG1rX29ycGhhbmFnZShtcCk7CisJ CQkJaWYgKCFvcnBoYW5hZ2VfaXApIHsKKwkJCQkJZXJyID0gbGlieGZzX2ln ZXQobXAsIE5VTEwsIG9ycGhhbmFnZV9pbm8sIDAsICZvcnBoYW5hZ2VfaXAs IDApOworCQkJCQlpZiAoZXJyKQorCQkJCQkJZG9fZXJyb3IoXygiJWQgLSBj b3VsZG4ndCBpZ2V0IG9ycGhhbmFnZSBpbm9kZVxuIiksIGVycik7CisJCQkJ fQorCQkJCWRvX3dhcm4oXygibW92aW5nIHRvICVzXG4iKSwgT1JQSEFOQUdF KTsKKwkJCQltdl9vcnBoYW5hZ2UobXAsIGlubywgaW5vZGVfaXNhZGlyKGly ZWMsIGkpKTsKKwkJCX0gZWxzZSAgeworCQkJCWRvX3dhcm4oXygid291bGQg bW92ZSB0byAlc1xuIiksIE9SUEhBTkFHRSk7CisJCQl9CisJCQkvKgorCQkJ ICogZm9yIHJlYWQtb25seSBjYXNlLCBldmVuIHRob3VnaCB0aGUgaW5vZGUg 
aXNuJ3QKKwkJCSAqIHJlYWxseSByZWFjaGFibGUsIHNldCB0aGUgZmxhZyAo YW5kIGJ1bXAgb3VyIGxpbmsKKwkJCSAqIGNvdW50KSBhbnl3YXkgdG8gZm9v bCBwaGFzZSA3CisJCQkgKi8KKwkJCWFkZF9pbm9kZV9yZWFjaGVkKGlyZWMs IGkpOworCQl9CisJfQorfQorCitzdGF0aWMgdm9pZAogdHJhdmVyc2VfZnVu Y3Rpb24oeGZzX21vdW50X3QgKm1wLCB4ZnNfYWdudW1iZXJfdCBhZ25vKQog ewogCXJlZ2lzdGVyIGlub190cmVlX25vZGVfdCAqaXJlYzsKQEAgLTM4Nzcs OSArMzg1NiwxMSBAQAogCWRpcl9zdGFja190CQlzdGFjazsKIAlpbnQJCQlp OwogCWludAkJCWo7CisJeGZzX2lub190CQlvcnBoYW5hZ2VfaW5vOwogCiAJ Ynplcm8oJnplcm9jciwgc2l6ZW9mKHN0cnVjdCBjcmVkKSk7CiAJYnplcm8o Jnplcm9mc3gsIHNpemVvZihzdHJ1Y3QgZnN4YXR0cikpOworCW9ycGhhbmFn ZV9pbm8gPSAwOwogCiAJZG9fbG9nKF8oIlBoYXNlIDYgLSBjaGVjayBpbm9k ZSBjb25uZWN0aXZpdHkuLi5cbiIpKTsKIApAQCAtMzk0MywxNSArMzkyNCw2 IEBACiAJCX0KIAl9CiAKLQkvKgotCSAqIG1ha2Ugb3JwaGFuYWdlIChpdCdz IGd1YXJhbnRlZWQgdG8gbm90IGV4aXN0IG5vdykKLQkgKi8KLQlpZiAoIW5v X21vZGlmeSkgIHsKLQkJZG9fbG9nKF8oIiAgICAgICAgLSBlbnN1cmluZyBl eGlzdGVuY2Ugb2YgJXMgZGlyZWN0b3J5XG4iKSwKLQkJCU9SUEhBTkFHRSk7 Ci0JCW9ycGhhbmFnZV9pbm8gPSBta19vcnBoYW5hZ2UobXApOwotCX0KLQog CWRpcl9zdGFja19pbml0KCZzdGFjayk7CiAKIAltYXJrX3N0YW5kYWxvbmVf aW5vZGVzKG1wKTsKQEAgLTQwMzEsNTkgKzQwMDMsMTYgQEAKIAl9CiAKIAlk b19sb2coXygiICAgICAgICAtIHRyYXZlcnNhbHMgZmluaXNoZWQgLi4uIFxu IikpOwotCi0JLyogZmx1c2ggYWxsIGRpcnR5IGRhdGEgYmVmb3JlIGRvaW5n IGxvc3QrZm91bmQgc2VhcmNoICovCi0JbGlieGZzX2JjYWNoZV9mbHVzaCgp OwotCi0JZG9fbG9nKF8oIiAgICAgICAgLSBtb3ZpbmcgZGlzY29ubmVjdGVk IGlub2RlcyB0byBsb3N0K2ZvdW5kIC4uLiBcbiIpKTsKKwlkb19sb2coXygi ICAgICAgICAtIG1vdmluZyBkaXNjb25uZWN0ZWQgaW5vZGVzIHRvICVzIC4u LiBcbiIpLAorCQlPUlBIQU5BR0UpOwogCiAJLyoKIAkgKiBtb3ZlIGFsbCBk aXNjb25uZWN0ZWQgaW5vZGVzIHRvIHRoZSBvcnBoYW5hZ2UKIAkgKi8KIAlm b3IgKGkgPSAwOyBpIDwgZ2xvYl9hZ2NvdW50OyBpKyspICB7CiAJCWlyZWMg PSBmaW5kZmlyc3RfaW5vZGVfcmVjKGkpOwotCi0JCWlmIChpcmVjID09IE5V TEwpCi0JCQljb250aW51ZTsKLQogCQl3aGlsZSAoaXJlYyAhPSBOVUxMKSAg ewotCQkJZm9yIChqID0gMDsgaiA8IFhGU19JTk9ERVNfUEVSX0NIVU5LOyBq KyspICB7Ci0JCQkJQVNTRVJUKGlzX2lub2RlX2NvbmZpcm1lZChpcmVjLCBq 
KSk7Ci0JCQkJaWYgKGlzX2lub2RlX2ZyZWUoaXJlYywgaikpCi0JCQkJCWNv bnRpbnVlOwotCQkJCWlmICghaXNfaW5vZGVfcmVhY2hlZChpcmVjLCBqKSkg ewotCQkJCQlBU1NFUlQoaW5vZGVfaXNhZGlyKGlyZWMsIGopIHx8Ci0JCQkJ CQludW1faW5vZGVfcmVmZXJlbmNlcyhpcmVjLCBqKQotCQkJCQkJPT0gMCk7 Ci0JCQkJCWlubyA9IFhGU19BR0lOT19UT19JTk8obXAsIGksCi0JCQkJCQlq ICsgaXJlYy0+aW5vX3N0YXJ0bnVtKTsKLQkJCQkJaWYgKGlub2RlX2lzYWRp cihpcmVjLCBqKSkKLQkJCQkJCWRvX3dhcm4oCi0JCQkJCV8oImRpc2Nvbm5l Y3RlZCBkaXIgaW5vZGUgJWxsdSwgIiksCi0JCQkJCQkJaW5vKTsKLQkJCQkJ ZWxzZQotCQkJCQkJZG9fd2FybigKLQkJCQkJXygiZGlzY29ubmVjdGVkIGlu b2RlICVsbHUsICIpLAotCQkJCQkJCWlubyk7Ci0JCQkJCWlmICghbm9fbW9k aWZ5KSAgewotCQkJCQkJZG9fd2FybihfKCJtb3ZpbmcgdG8gJXNcbiIpLAot CQkJCQkJCU9SUEhBTkFHRSk7Ci0JCQkJCQltdl9vcnBoYW5hZ2UobXAsIG9y cGhhbmFnZV9pbm8sCi0JCQkJCQkJaW5vLAotCQkJCQkJCWlub2RlX2lzYWRp cihpcmVjLCBqKSk7Ci0JCQkJCX0gZWxzZSAgewotCQkJCQkJZG9fd2Fybihf KCJ3b3VsZCBtb3ZlIHRvICVzXG4iKSwKLQkJCQkJCQlPUlBIQU5BR0UpOwot CQkJCQl9Ci0JCQkJCS8qCi0JCQkJCSAqIGZvciByZWFkLW9ubHkgY2FzZSwg ZXZlbiB0aG91Z2gKLQkJCQkJICogdGhlIGlub2RlIGlzbid0IHJlYWxseSBy ZWFjaGFibGUsCi0JCQkJCSAqIHNldCB0aGUgZmxhZyAoYW5kIGJ1bXAgb3Vy IGxpbmsKLQkJCQkJICogY291bnQpIGFueXdheSB0byBmb29sIHBoYXNlIDcK LQkJCQkJICovCi0JCQkJCWFkZF9pbm9kZV9yZWFjaGVkKGlyZWMsIGopOwot CQkJCX0KLQkJCX0KKwkJCWNoZWNrX2Zvcl9vcnBoYW5lZF9pbm9kZXMobXAs IGlyZWMpOwogCQkJaXJlYyA9IG5leHRfaW5vX3JlYyhpcmVjKTsKIAkJfQog CX0KSW5kZXg6IHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvcGhhc2U3LmMKPT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PQotLS0gcmVwYWlyLm9yaWcveGZzcHJvZ3Mv cmVwYWlyL3BoYXNlNy5jCTIwMDctMDQtMjcgMTM6MTM6MzUuOTcxOTkzNDg5 ICsxMDAwCisrKyByZXBhaXIveGZzcHJvZ3MvcmVwYWlyL3BoYXNlNy5jCTIw MDctMDQtMjcgMTQ6MTE6NDEuMDAwMDAwMDAwICsxMDAwCkBAIC05NiwxMSAr OTYsOCBAQAogCiAJLyoKIAkgKiBjb21wYXJlIGFuZCBzZXQgbGlua3MgZm9y IGFsbCBpbm9kZXMKLQkgKiBidXQgdGhlIGxvc3QrZm91bmQgaW5vZGUuICB3 ZSBrZWVwCi0JICogdGhhdCBjb3JyZWN0IGFzIHdlIGdvLgogCSAqLwotCWlm IChpbm8gIT0gb3JwaGFuYWdlX2lubykKLQkJc2V0X25saW5rcygmaXAtPmlf 
ZCwgaW5vLCBubGlua3MsICZkaXJ0eSk7CisJc2V0X25saW5rcygmaXAtPmlf ZCwgaW5vLCBubGlua3MsICZkaXJ0eSk7CiAKIAlpZiAoIWRpcnR5KSAgewog CQlsaWJ4ZnNfdHJhbnNfaXB1dCh0cCwgaXAsIDApOwo= ------------yUh6rLTYauJAeqPswTCPZX-- From owner-xfs@oss.sgi.com Mon Jun 4 19:21:06 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 19:21:12 -0700 (PDT) Received: from nz-out-0506.google.com (nz-out-0506.google.com [64.233.162.227]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l552L5Wt008672 for ; Mon, 4 Jun 2007 19:21:06 -0700 Received: by nz-out-0506.google.com with SMTP id 4so1043848nzn for ; Mon, 04 Jun 2007 19:21:06 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:message-id:date:from:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=TTTAQNsyhIUi2YqcdKTm/z0hya6pAGR2wiTgS8+GedaxstYt4IrRCEObu6Y6hreRIU3maYJw9E2w0aHXJh9lcUC1jnkBYVJeYNcbjwEAzJBIT2KHymP06Ax8BaYSeFx0GAaM4qkqzRiqRg2Q9bZa5/4cm2aRl0t69DyzdvGjgtw= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=fwepOvTwoSqSAjTdJe03EiBw89htl7fzYiiUs2L0FD22eKM0mPtsmmOfOjnbtkGnhHAze7H7d3draJXoRAuJSkmbGx4+2rwT68dxuy/4PpT07FQPwoGD4hUSI/ipY1HLM6vLQ2x/Aq3INbSM/Ieec+/Qwr/5bUzrGH7rhjYp8qI= Received: by 10.114.73.1 with SMTP id v1mr5550930waa.1181010065707; Mon, 04 Jun 2007 19:21:05 -0700 (PDT) Received: by 10.115.55.14 with HTTP; Mon, 4 Jun 2007 19:21:05 -0700 (PDT) Message-ID: Date: Mon, 4 Jun 2007 22:21:05 -0400 From: "=?ISO-8859-1?Q?Germ=E1n_Po=F3-Caama=F1o?=" To: xfs@oss.sgi.com Subject: Re: Reporting a bug In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Disposition: inline References: Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id 
l552L6Wt008682 X-archive-position: 11640 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: german.poo@gmail.com Precedence: bulk X-list: xfs Adding a little more information: Sysrescue has kernel 2.6.20. Filesystem "sda5": corrupt dinode 786561, (btree extents). Unmount and run xfs_repair. Filesystem "sda5": XFS internal error xfs_bmap_read_extents(1) at line 4565 of file fs/xfs/xfs_bmap.c. Caller 0xc02e7c99 [] xfs_bmap_read_extents+0x488/0x4a2 [] xfs_iread_extents+0xa0/0xbb [] xfs_iext_realloc_direct+0xb3/0xc1 [] xfs_iread_extents+0xa0/0xbb [] xfs_bmap_last_offset+0x94/0xdc [] xfs_dir2_isblock+0x1b/0x60 [] __make_request+0x384/0x495 [] xfs_dir_lookup+0x8e/0xeb [] xfs_bmapi+0x25b/0x1fd7 [] xfs_dir_lookup_int+0x2c/0xd4 [] down_write+0x8/0x10 [] xfs_ilock+0x47/0x67 [] xfs_lookup+0x50/0x76 [] __mutex_lock_slowpath+0x1ac/0x1b4 [] xfs_vn_lookup+0x3b/0x70 [] do_lookup+0xa3/0x140 [] __link_path_walk+0x61d/0xa24 [] link_path_walk+0x42/0xaf [] xfs_setattr+0xdbe/0xe7c [] do_path_lookup+0x144/0x164 [] get_empty_filp+0x4f/0xca [] __path_lookup_intent_open+0x43/0x72 [] path_lookup_open+0x20/0x25 [] open_namei+0x6e/0x523 [] do_page_fault+0x278/0x53f [] do_filp_open+0x2a/0x3e [] xfs_setattr+0xdbe/0xe7c [] do_sys_open+0x47/0xcf [] sys_open+0x1c/0x1e [] syscall_call+0x7/0xb [] svcauth_gss_accept+0x76a/0xadb ======================= Filesystem "sda5": corrupt dinode 786561, (btree extents). Unmount and run xfs_repair. Filesystem "sda5": XFS internal error xfs_bmap_read_extents(1) at line 4565 of file fs/xfs/xfs_bmap.c. 
Caller 0xc02e7c99 [] xfs_bmap_read_extents+0x488/0x4a2 [] xfs_iread_extents+0xa0/0xbb [] xfs_iext_realloc_direct+0xb3/0xc1 [] xfs_iread_extents+0xa0/0xbb [] __make_request+0x384/0x495 [] xfs_bmap_last_offset+0x94/0xdc [] xfs_dir2_isblock+0x1b/0x60 [] xfs_dir_lookup+0x8e/0xeb [] xfs_dir_lookup+0x7d/0xeb [] xfs_dir_lookup_int+0x2c/0xd4 [] down_write+0x8/0x10 [] xfs_ilock+0x47/0x67 [] xfs_lookup+0x50/0x76 [] __mutex_lock_slowpath+0x1ac/0x1b4 [] xfs_dir_lookup_int+0x2c/0xd4 [] xfs_vn_lookup+0x3b/0x70 [] do_lookup+0xa3/0x140 [] __link_path_walk+0x61d/0xa24 [] link_path_walk+0x42/0xaf [] do_path_lookup+0x144/0x164 [] __user_walk_fd+0x30/0x45 [] vfs_stat_fd+0x19/0x40 [] sys_stat64+0xf/0x23 [] syscall_call+0x7/0xb and so on. -- Germán Poó Caamaño http://www.gnome.org/~gpoo/ From owner-xfs@oss.sgi.com Mon Jun 4 19:20:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 19:20:15 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l552K2Wt008374 for ; Mon, 4 Jun 2007 19:20:04 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA06631; Tue, 5 Jun 2007 12:20:00 +1000 Date: Tue, 05 Jun 2007 12:23:32 +1000 To: "xfs@oss.sgi.com" , xfs-dev Subject: [REVIEW 3/3] - xfs_repair speedups (enhanced prefetch) From: "Barry Naujok" Organization: SGI Content-Type: multipart/mixed; boundary=----------IrJlgiTfBCmIeXv1iG6QRc MIME-Version: 1.0 Message-ID: User-Agent: Opera Mail/9.10 (Win32) X-archive-position: 11639 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs ------------IrJlgiTfBCmIeXv1iG6QRc Content-Type: text/plain; format=flowed; delsp=yes; charset=iso-8859-15 Content-Transfer-Encoding: 7bit Back in Jan 2007, Michael Nishimoto from Agami 
posted a patch to a 2.7.x based xfs_repair tree to perform prefetch/readahead which primed the Linux block buffer cache for the main xfs_repair processing threads (http://oss.sgi.com/archives/xfs/2007-01/msg00135.html). Benchmarking this against 2.8.1x xfs_repair at the time revealed very interesting numbers: 2.8.x was very slow using direct I/O and the libxfs cache. Researching this technique and integrating it with the libxfs cache proved rather challenging. Many changes were required:
- proper xfs_buf_t locking for multi-threaded access.
- unified I/O sizes for inodes and metadata blocks.
- serialising as much I/O as possible.
- handling queuing, I/O and processing in parallel while minimising starvation, especially when only a subset of the metadata can be stored in memory.
- smarter work queues.
Unifying the I/O sizes was a significant change which resulted in a lot of improvements in both performance and correctness, in particular with inode blocks. During phase 6, inodes are accessed using xfs_iread/xfs_iget, which use inode "clusters" of either 8KB or the filesystem block size, whichever is greater. Phases 3/4 read using inode "chunks", which can be 16KB or larger. With the libxfs caching method, this meant all data had to be flushed/purged before phase 6 started, and all the inodes read again. Also, one part of the libxfs transaction code didn't release buffers properly; this behaviour has been seen in the past with the infamous "shake on cache 0x######## left # nodes!?" warning. Batch reading and serialising I/O requests in the prefetch code had major benefits when metadata is close together, especially with RAIDs. The AIO/LIO code was also yanked in favour of threaded I/O prefetch. Synchronising the queuing/I/O/processing threads efficiently, especially in low-memory conditions, was the most challenging aspect. Most of the changes for this are in prefetch.c, with minor changes for I/O in the phases. Phase 6 also eliminates the dir_stack code, which is no longer required.
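The per-buffer locking point is the heart of the multi-threading work. A minimal sketch of the idea follows, under the assumption of a simplified stand-in type: the actual patch adds a pthread_mutex_t b_lock field to xfs_buf_t, while sim_buf_t, sim_getbuf and sim_putbuf here are illustrative names, not libxfs API.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/*
 * Simplified stand-in for xfs_buf_t carrying the b_lock field the patch
 * introduces.  A buffer handed out by the cache is returned with its
 * lock held, so prefetch threads and the main repair thread cannot
 * operate on it at the same time; putting the buffer back releases it.
 */
typedef struct sim_buf {
	long		b_blkno;	/* disk address this buffer caches */
	int		b_uptodate;	/* set once a (simulated) read completes */
	pthread_mutex_t	b_lock;		/* serialises access between threads */
} sim_buf_t;

/* Allocate a buffer for a block and return it with b_lock held. */
sim_buf_t *
sim_getbuf(long blkno)
{
	sim_buf_t	*bp;

	bp = calloc(1, sizeof(*bp));
	if (!bp)
		return NULL;
	bp->b_blkno = blkno;
	pthread_mutex_init(&bp->b_lock, NULL);
	pthread_mutex_lock(&bp->b_lock);
	return bp;
}

/* Release the buffer so another thread may lock it. */
void
sim_putbuf(sim_buf_t *bp)
{
	pthread_mutex_unlock(&bp->b_lock);
}
```

With this get/put pairing, a prefetch thread that fills a buffer and a repair phase that reads it never touch the data concurrently, which is what "proper xfs_buf_t locking" buys over the old single-threaded cache.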
It now processes the directory inodes as per the layout of the inode AVL tree (which it did anyway after doing a path traversal). The patch will contain a lot of apparently no-op changes; these are automatic EOL whitespace cleanups. ------------IrJlgiTfBCmIeXv1iG6QRc Content-Disposition: attachment; filename=prefetch Content-Type: application/octet-stream; name=prefetch Content-Transfer-Encoding: Base64 SW5kZXg6IHJlcGFpci94ZnNwcm9ncy9pbmNsdWRlL2NhY2hlLmgKPT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PQotLS0gcmVwYWlyLm9yaWcveGZzcHJvZ3MvaW5jbHVkZS9jYWNo
ZS5oCTIwMDctMDQtMjcgMTM6MTM6MzUuMDAwMDAwMDAwICsxMDAwCisrKyByZXBh
aXIveGZzcHJvZ3MvaW5jbHVkZS9jYWNoZS5oCTIwMDctMDYtMDQgMTc6MjM6NDIu
NDE0ODQ5MTQwICsxMDAwCkBAIC0xOCw2ICsxOCw4IEBACiAjaWZuZGVmIF9fQ0FD
SEVfSF9fCiAjZGVmaW5lIF9fQ0FDSEVfSF9fCiAKKyNkZWZpbmUJSEFTSF9DQUNI
RV9SQVRJTwk4CisKIC8qCiAgKiBTaW1wbGUsIGdlbmVyaWMgaW1wbGVtZW50YXRp
b24gb2YgYSBjYWNoZSAoYXJiaXRyYXJ5IGRhdGEpLgogICogUHJvdmlkZXMgYSBo
YXNoIHRhYmxlIHdpdGggYSBjYXBwZWQgbnVtYmVyIG9mIGNhY2hlIGVudHJpZXMu
CkBAIC0yOCw4ICszMCw5IEBACiBzdHJ1Y3QgY2FjaGVfbm9kZTsKIAogdHlwZWRl
ZGlmCS8qIF9fQ0FDSEVfSF9fICovCkluZGV4OiByZXBhaXIveGZzcHJvZ3MvaW5j bHVkZS9saWJ4ZnMuaAo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0tLSByZXBhaXIub3Jp Zy94ZnNwcm9ncy9pbmNsdWRlL2xpYnhmcy5oCTIwMDctMDQtMjcgMTM6MTM6MzUu MDAwMDAwMDAwICsxMDAwCisrKyByZXBhaXIveGZzcHJvZ3MvaW5jbHVkZS9saWJ4 ZnMuaAkyMDA3LTA2LTA1IDEyOjExOjQyLjc1NjI3NTIwNyArMTAwMApAQCAtMTkw LDggKzE5MCw4IEBACiAjZGVmaW5lIExJQlhGU19NT1VOVF8zMkJJVElOT09QVAkw eDAwMDgKICNkZWZpbmUgTElCWEZTX01PVU5UX0NPTVBBVF9BVFRSCTB4MDAxMAog Ci0jZGVmaW5lIExJQlhGU19JSEFTSFNJWkUoc2JwKQkJKDE8PDE2KQkvKiB0d2Vh ayBiYXNlZCBvbiBpY291bnQ/ICovCi0jZGVmaW5lIExJQlhGU19CSEFTSFNJWkUo c2JwKQkJKDE8PDE2KQkvKiBkaXR0bywgb24gYmxvY2tzIHVzZWQ/ICovCisjZGVm aW5lIExJQlhGU19JSEFTSFNJWkUoc2JwKQkJKDE8PDEwKQorI2RlZmluZSBMSUJY RlNfQkhBU0hTSVpFKHNicCkgCQkoMTw8MTApCiAKIGV4dGVybiB4ZnNfbW91bnRf dAkqbGlieGZzX21vdW50ICh4ZnNfbW91bnRfdCAqLCB4ZnNfc2JfdCAqLAogCQkJ CWRldl90LCBkZXZfdCwgZGV2X3QsIGludCk7CkBAIC0yMTUsMTAgKzIxNSwxNyBA QAogCXhmc19kYWRkcl90CQliX2Jsa25vOwogCXVuc2lnbmVkCQliX2Jjb3VudDsK IAlkZXZfdAkJCWJfZGV2OworCXB0aHJlYWRfbXV0ZXhfdAkJYl9sb2NrOwogCXZv aWQJCQkqYl9mc3ByaXZhdGU7CiAJdm9pZAkJCSpiX2ZzcHJpdmF0ZTI7CiAJdm9p ZAkJCSpiX2ZzcHJpdmF0ZTM7CiAJY2hhcgkJCSpiX2FkZHI7CisjaWZkZWYgWEZT X0JVRl9UUkFDSU5HCisJc3RydWN0IGxpc3RfaGVhZAliX2xvY2tfbGlzdDsKKwlj b25zdCBjaGFyCQkqYl9mdW5jOworCWNvbnN0IGNoYXIJCSpiX2ZpbGU7CisJaW50 CQkJYl9saW5lOworI2VuZGlmCiB9IHhmc19idWZfdDsKIAogZW51bSB4ZnNfYnVm X2ZsYWdzX3QgewkvKiBiX2ZsYWdzIGJpdHMgKi8KQEAgLTI0NiwyNSArMjUzLDQ5 IEBACiAjZGVmaW5lIFhGU19CVUZfRlNQUklWQVRFMyhicCx0eXBlKQkoKHR5cGUp KGJwKS0+Yl9mc3ByaXZhdGUzKQogI2RlZmluZSBYRlNfQlVGX1NFVF9GU1BSSVZB VEUzKGJwLHZhbCkJKGJwKS0+Yl9mc3ByaXZhdGUzID0gKHZvaWQgKikodmFsKQog Ci1leHRlcm4geGZzX2J1Zl90CSpsaWJ4ZnNfZ2V0c2IgKHhmc19tb3VudF90ICos IGludCk7Ci1leHRlcm4geGZzX2J1Zl90CSpsaWJ4ZnNfcmVhZGJ1ZiAoZGV2X3Qs IHhmc19kYWRkcl90LCBpbnQsIGludCk7Ci1leHRlcm4gaW50CWxpYnhmc19yZWFk YnVmciAoZGV2X3QsIHhmc19kYWRkcl90LCB4ZnNfYnVmX3QgKiwgaW50LCBpbnQp 
OwotZXh0ZXJuIGludAlsaWJ4ZnNfd3JpdGVidWYgKHhmc19idWZfdCAqLCBpbnQp OwotZXh0ZXJuIGludAlsaWJ4ZnNfd3JpdGVidWZyICh4ZnNfYnVmX3QgKik7Ci1l eHRlcm4gaW50CWxpYnhmc193cml0ZWJ1Zl9pbnQgKHhmc19idWZfdCAqLCBpbnQp OwotCiAvKiBCdWZmZXIgQ2FjaGUgSW50ZXJmYWNlcyAqLworCiBleHRlcm4gc3Ry dWN0IGNhY2hlCSpsaWJ4ZnNfYmNhY2hlOwogZXh0ZXJuIHN0cnVjdCBjYWNoZV9v cGVyYXRpb25zCWxpYnhmc19iY2FjaGVfb3BlcmF0aW9uczsKLWV4dGVybiB2b2lk CWxpYnhmc19iY2FjaGVfcHVyZ2UgKHZvaWQpOwotZXh0ZXJuIHZvaWQJbGlieGZz X2JjYWNoZV9mbHVzaCAodm9pZCk7Ci1leHRlcm4geGZzX2J1Zl90CSpsaWJ4ZnNf Z2V0YnVmIChkZXZfdCwgeGZzX2RhZGRyX3QsIGludCk7CisKKyNpZmRlZiBYRlNf QlVGX1RSQUNJTkcKKworI2RlZmluZSBsaWJ4ZnNfcmVhZGJ1ZihkZXYsIGRhZGRy LCBsZW4sIGZsYWdzKSBcCisJCWxpYnhmc190cmFjZV9yZWFkYnVmKF9fRlVOQ1RJ T05fXywgX19GSUxFX18sIF9fTElORV9fLCAoZGV2KSwgKGRhZGRyKSwgKGxlbiks IChmbGFncykpCisjZGVmaW5lIGxpYnhmc193cml0ZWJ1ZihidWYsIGZsYWdzKSBc CisJCWxpYnhmc190cmFjZV93cml0ZWJ1ZihfX0ZVTkNUSU9OX18sIF9fRklMRV9f LCBfX0xJTkVfXywgKGJ1ZiksIChmbGFncykpCisjZGVmaW5lIGxpYnhmc19nZXRi dWYoZGV2LCBkYWRkciwgbGVuKSBcCisJCWxpYnhmc190cmFjZV9nZXRidWYoX19G VU5DVElPTl9fLCBfX0ZJTEVfXywgX19MSU5FX18sIChkZXYpLCAoZGFkZHIpLCAo bGVuKSkKKyNkZWZpbmUgbGlieGZzX3B1dGJ1ZihidWYpIFwKKwkJbGlieGZzX3Ry YWNlX3B1dGJ1ZihfX0ZVTkNUSU9OX18sIF9fRklMRV9fLCBfX0xJTkVfXywgKGJ1 ZikpCisKK2V4dGVybiB4ZnNfYnVmX3QgKmxpYnhmc190cmFjZV9yZWFkYnVmKGNv bnN0IGNoYXIgKiwgY29uc3QgY2hhciAqLCBpbnQsIGRldl90LCB4ZnNfZGFkZHJf dCwgaW50LCBpbnQpOworZXh0ZXJuIGludAlsaWJ4ZnNfdHJhY2Vfd3JpdGVidWYo Y29uc3QgY2hhciAqLCBjb25zdCBjaGFyICosIGludCwgeGZzX2J1Zl90ICosIGlu dCk7CitleHRlcm4geGZzX2J1Zl90ICpsaWJ4ZnNfdHJhY2VfZ2V0YnVmKGNvbnN0 IGNoYXIgKiwgY29uc3QgY2hhciAqLCBpbnQsIGRldl90LCB4ZnNfZGFkZHJfdCwg aW50KTsKK2V4dGVybiB2b2lkCWxpYnhmc190cmFjZV9wdXRidWYgKGNvbnN0IGNo YXIgKiwgY29uc3QgY2hhciAqLCBpbnQsIHhmc19idWZfdCAqKTsKKworI2Vsc2UK KworZXh0ZXJuIHhmc19idWZfdCAqbGlieGZzX3JlYWRidWYoZGV2X3QsIHhmc19k YWRkcl90LCBpbnQsIGludCk7CitleHRlcm4gaW50CWxpYnhmc193cml0ZWJ1Zih4 ZnNfYnVmX3QgKiwgaW50KTsKK2V4dGVybiB4ZnNfYnVmX3QgKmxpYnhmc19nZXRi 
dWYoZGV2X3QsIHhmc19kYWRkcl90LCBpbnQpOwogZXh0ZXJuIHZvaWQJbGlieGZz X3B1dGJ1ZiAoeGZzX2J1Zl90ICopOwotZXh0ZXJuIHZvaWQJbGlieGZzX3B1cmdl YnVmICh4ZnNfYnVmX3QgKik7CisKKyNlbmRpZgorCitleHRlcm4geGZzX2J1Zl90 ICpsaWJ4ZnNfZ2V0c2IoeGZzX21vdW50X3QgKiwgaW50KTsKK2V4dGVybiB2b2lk CWxpYnhmc19iY2FjaGVfcHVyZ2Uodm9pZCk7CitleHRlcm4gdm9pZAlsaWJ4ZnNf YmNhY2hlX2ZsdXNoKHZvaWQpOworZXh0ZXJuIHZvaWQJbGlieGZzX3B1cmdlYnVm KHhmc19idWZfdCAqKTsKK2V4dGVybiBpbnQJbGlieGZzX2JjYWNoZV9vdmVyZmxv d2VkKHZvaWQpOworZXh0ZXJuIGludAlsaWJ4ZnNfYmNhY2hlX3VzYWdlKHZvaWQp OwogCiAvKiBCdWZmZXIgKFJhdykgSW50ZXJmYWNlcyAqLwotZXh0ZXJuIHhmc19i dWZfdAkqbGlieGZzX2dldGJ1ZnIgKGRldl90LCB4ZnNfZGFkZHJfdCwgaW50KTsK LWV4dGVybiB2b2lkCWxpYnhmc19wdXRidWZyICh4ZnNfYnVmX3QgKik7CitleHRl cm4geGZzX2J1Zl90ICpsaWJ4ZnNfZ2V0YnVmcihkZXZfdCwgeGZzX2RhZGRyX3Qs IGludCk7CitleHRlcm4gdm9pZAlsaWJ4ZnNfcHV0YnVmcih4ZnNfYnVmX3QgKik7 CisKK2V4dGVybiBpbnQJbGlieGZzX3dyaXRlYnVmX2ludCh4ZnNfYnVmX3QgKiwg aW50KTsKK2V4dGVybiBpbnQJbGlieGZzX3JlYWRidWZyKGRldl90LCB4ZnNfZGFk ZHJfdCwgeGZzX2J1Zl90ICosIGludCwgaW50KTsKIAogZXh0ZXJuIGludCBsaWJ4 ZnNfYmhhc2hfc2l6ZTsKIGV4dGVybiBpbnQgbGlieGZzX2loYXNoX3NpemU7CkBA IC00NzMsNyArNTA0LDcgQEAKIGV4dGVybiB2b2lkCWxpYnhmc19ibWFwX2NhbmNl bCh4ZnNfYm1hcF9mcmVlX3QgKik7CiBleHRlcm4gaW50CWxpYnhmc19ibWFwX25l eHRfb2Zmc2V0ICh4ZnNfdHJhbnNfdCAqLCB4ZnNfaW5vZGVfdCAqLAogCQkJCXhm c19maWxlb2ZmX3QgKiwgaW50KTsKLWV4dGVybiBpbnQJbGlieGZzX2JtYXBfbGFz dF9vZmZzZXQoeGZzX3RyYW5zX3QgKiwgeGZzX2lub2RlX3QgKiwgCitleHRlcm4g aW50CWxpYnhmc19ibWFwX2xhc3Rfb2Zmc2V0KHhmc190cmFuc190ICosIHhmc19p bm9kZV90ICosCiAJCQkJeGZzX2ZpbGVvZmZfdCAqLCBpbnQpOwogZXh0ZXJuIGlu dAlsaWJ4ZnNfYnVubWFwaSAoeGZzX3RyYW5zX3QgKiwgeGZzX2lub2RlX3QgKiwg eGZzX2ZpbGVvZmZfdCwKIAkJCQl4ZnNfZmlsYmxrc190LCBpbnQsIHhmc19leHRu dW1fdCwKQEAgLTU1NSwyOSArNTg2LDEwIEBACiBleHRlcm4gdm9pZCBjbW5fZXJy KGludCwgY2hhciAqLCAuLi4pOwogZW51bSBjZSB7IENFX0RFQlVHLCBDRV9DT05U LCBDRV9OT1RFLCBDRV9XQVJOLCBDRV9BTEVSVCwgQ0VfUEFOSUMgfTsKIAotLyog bGlvIGludGVyZmFjZSAqLwotLyogbGlvX2xpc3RpbygzKSBpbnRlcmZhY2UgKFBP 
U0lYIGxpbmtlZCBhc3luY2hyb25vdXMgSS9PKSAqLwotZXh0ZXJuIGludCBsaWJ4 ZnNfbGlvX2lub19jb3VudDsKLWV4dGVybiBpbnQgbGlieGZzX2xpb19kaXJfY291 bnQ7Ci1leHRlcm4gaW50IGxpYnhmc19saW9fYWlvX2NvdW50OwotCi1leHRlcm4g aW50IGxpYnhmc19saW9faW5pdCh2b2lkKTsKLWV4dGVybiB2b2lkIGxpYnhmc19s aW9fYWxsb2NhdGUodm9pZCk7Ci1leHRlcm4gdm9pZCAqbGlieGZzX2dldF9saW9f YnVmZmVyKGludCB0eXBlKTsKLWV4dGVybiB2b2lkIGxpYnhmc19wdXRfbGlvX2J1 ZmZlcih2b2lkICpidWZmZXIpOwotZXh0ZXJuIGludCBsaWJ4ZnNfcmVhZGJ1Zl9s aXN0KGRldl90IGRldiwgaW50IG5lbnQsIHZvaWQgKnZvaWRwLCBpbnQgdHlwZSk7 Ci0KLXR5cGVkZWYgc3RydWN0ICBsaWJ4ZnNfbGlvX3JlcSB7Ci0JeGZzX2RhZGRy X3QJYmxrbm87Ci0JaW50CQlsZW47CS8qIGJicyAqLwotfSBsaWJ4ZnNfbGlvX3Jl cV90OwotCi0jZGVmaW5lCUxJQlhGU19MSU9fVFlQRV9JTk8JCTB4MQotI2RlZmlu ZQlMSUJYRlNfTElPX1RZUEVfRElSCQkweDIKLSNkZWZpbmUJTElCWEZTX0xJT19U WVBFX1JBVwkJMHgzCiAKICNkZWZpbmUgTElCWEZTX0JCVE9PRkY2NChiYnMpCSgo KHhmc19vZmZfdCkoYmJzKSkgPDwgQkJTSElGVCkKLWV4dGVybiBpbnQgbGlieGZz X25wcm9jKHZvaWQpOworZXh0ZXJuIGludAkJbGlieGZzX25wcm9jKHZvaWQpOwor ZXh0ZXJuIHVuc2lnbmVkIGxvbmcJbGlieGZzX3BoeXNtZW0odm9pZCk7CS8qIGlu IGtpbG9ieXRlcyAqLwogCiAjaW5jbHVkZSA8eGZzL3hmc19pYWxsb2MuaD4KICNp bmNsdWRlIDx4ZnMveGZzX3J0YWxsb2MuaD4KSW5kZXg6IHJlcGFpci94ZnNwcm9n cy9saWJ4ZnMvY2FjaGUuYwo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0tLSByZXBhaXIu b3JpZy94ZnNwcm9ncy9saWJ4ZnMvY2FjaGUuYwkyMDA3LTA0LTI3IDEzOjEzOjM1 LjAwMDAwMDAwMCArMTAwMAorKysgcmVwYWlyL3hmc3Byb2dzL2xpYnhmcy9jYWNo ZS5jCTIwMDctMDYtMDQgMTc6MjM6NDIuNDIyODQ4MTE0ICsxMDAwCkBAIC0zMSw3 ICszMSw2IEBACiAjZGVmaW5lIENBQ0hFX0RFQlVHIDEKICN1bmRlZiBDQUNIRV9B Qk9SVAogLyogI2RlZmluZSBDQUNIRV9BQk9SVCAxICovCi0jZGVmaW5lCUhBU0hf Q0FDSEVfUkFUSU8JOAogCiBzdGF0aWMgdW5zaWduZWQgaW50IGNhY2hlX2dlbmVy aWNfYnVsa3JlbHNlKHN0cnVjdCBjYWNoZSAqLCBzdHJ1Y3QgbGlzdF9oZWFkICop OwogCkBAIC0yNzUsNyArMjc0LDggQEAKIHN0cnVjdCBjYWNoZV9ub2RlICoKIGNh Y2hlX25vZGVfYWxsb2NhdGUoCiAJc3RydWN0IGNhY2hlICoJCWNhY2hlLAotCXN0 cnVjdCBjYWNoZV9oYXNoICoJaGFzaGxpc3QpCisJc3RydWN0IGNhY2hlX2hhc2gg 
KgloYXNobGlzdCwKKwljYWNoZV9rZXlfdAkJa2V5KQogewogCXVuc2lnbmVkIGlu dAkJbm9kZXNmcmVlOwogCXN0cnVjdCBjYWNoZV9ub2RlICoJbm9kZTsKQEAgLTI5 MCw3ICsyOTAsNyBAQAogCXB0aHJlYWRfbXV0ZXhfdW5sb2NrKCZjYWNoZS0+Y19t dXRleCk7CiAJaWYgKCFub2Rlc2ZyZWUpCiAJCXJldHVybiBOVUxMOwotCWlmICgh KG5vZGUgPSBjYWNoZS0+YWxsb2MoKSkpIHsJLyogdWgtb2ggKi8KKwlpZiAoIShu b2RlID0gY2FjaGUtPmFsbG9jKGtleSkpKSB7CS8qIHVoLW9oICovCiAJCXB0aHJl YWRfbXV0ZXhfbG9jaygmY2FjaGUtPmNfbXV0ZXgpOwogCQljYWNoZS0+Y19jb3Vu dC0tOwogCQlwdGhyZWFkX211dGV4X3VubG9jaygmY2FjaGUtPmNfbXV0ZXgpOwpA QCAtMzAyLDYgKzMwMiwxMyBAQAogCXJldHVybiBub2RlOwogfQogCitpbnQKK2Nh Y2hlX292ZXJmbG93ZWQoCisJc3RydWN0IGNhY2hlICoJCWNhY2hlKQoreworCXJl dHVybiAoY2FjaGUtPmNfbWF4Y291bnQgPT0gY2FjaGUtPmNfbWF4KTsKK30KKwog LyoKICAqIExvb2t1cCBpbiB0aGUgY2FjaGUgaGFzaCB0YWJsZS4gIFdpdGggYW55 IGx1Y2sgd2UnbGwgZ2V0IGEgY2FjaGUKICAqIGhpdCwgaW4gd2hpY2ggY2FzZSB0 aGlzIHdpbGwgYWxsIGJlIG92ZXIgcXVpY2tseSBhbmQgcGFpbmxlc3NseS4KQEAg LTM0MSw3ICszNDgsNyBAQAogCQlicmVhazsKIAl9CiAJaWYgKHBvcyA9PSBoZWFk KSB7Ci0JCW5vZGUgPSBjYWNoZV9ub2RlX2FsbG9jYXRlKGNhY2hlLCBoYXNoKTsK KwkJbm9kZSA9IGNhY2hlX25vZGVfYWxsb2NhdGUoY2FjaGUsIGhhc2gsIGtleSk7 CiAJCWlmICghbm9kZSkgewogCQkJcHJpb3JpdHkgPSBjYWNoZV9zaGFrZShjYWNo ZSwgaGFzaCwgcHJpb3JpdHkpOwogCQkJZ290byByZXN0YXJ0OwpAQCAtNDI4LDcg KzQzNSw3IEBACiB9CiAKIC8qCi0gKiBGbHVzaCBhbGwgbm9kZXMgaW4gdGhlIGNh Y2hlIHRvIGRpc2suIAorICogRmx1c2ggYWxsIG5vZGVzIGluIHRoZSBjYWNoZSB0 byBkaXNrLgogICovCiB2b2lkCiBjYWNoZV9mbHVzaCgKQEAgLTQzOSwxMyArNDQ2 LDEzIEBACiAJc3RydWN0IGxpc3RfaGVhZCAqCXBvczsKIAlzdHJ1Y3QgY2FjaGVf bm9kZSAqCW5vZGU7CiAJaW50CQkJaTsKLQkKKwogCWlmICghY2FjaGUtPmZsdXNo KQogCQlyZXR1cm47Ci0JCisKIAlmb3IgKGkgPSAwOyBpIDwgY2FjaGUtPmNfaGFz aHNpemU7IGkrKykgewogCQloYXNoID0gJmNhY2hlLT5jX2hhc2hbaV07Ci0JCQor CiAJCXB0aHJlYWRfbXV0ZXhfbG9jaygmaGFzaC0+Y2hfbXV0ZXgpOwogCQloZWFk ID0gJmhhc2gtPmNoX2xpc3Q7CiAJCWZvciAocG9zID0gaGVhZC0+bmV4dDsgcG9z ICE9IGhlYWQ7IHBvcyA9IHBvcy0+bmV4dCkgewpAQCAtNTA1LDEwICs1MTIsMTAg QEAKIAkJdG90YWwgKz0gaSpoYXNoX2J1Y2tldF9sZW5ndGhzW2ldOwogCQlpZiAo 
aGFzaF9idWNrZXRfbGVuZ3Roc1tpXSA9PSAwKQogCQkJY29udGludWU7Ci0JCWZw cmludGYoZnAsICJIYXNoIGJ1Y2tldHMgd2l0aCAgJTJkIGVudHJpZXMgJTVsZCAo JTNsZCUlKVxuIiwgCisJCWZwcmludGYoZnAsICJIYXNoIGJ1Y2tldHMgd2l0aCAg JTJkIGVudHJpZXMgJTVsZCAoJTNsZCUlKVxuIiwKIAkJCWksIGhhc2hfYnVja2V0 X2xlbmd0aHNbaV0sIChpKmhhc2hfYnVja2V0X2xlbmd0aHNbaV0qMTAwKS9jYWNo ZS0+Y19jb3VudCk7CiAJfQogCWlmIChoYXNoX2J1Y2tldF9sZW5ndGhzW2ldKQkv KiBsYXN0IHJlcG9ydCBidWNrZXQgaXMgdGhlIG92ZXJmbG93IGJ1Y2tldCAqLwot CQlmcHJpbnRmKGZwLCAiSGFzaCBidWNrZXRzIHdpdGggPiUyZCBlbnRyaWVzICU1 bGQgKCUzbGQlJSlcbiIsIAorCQlmcHJpbnRmKGZwLCAiSGFzaCBidWNrZXRzIHdp dGggPiUyZCBlbnRyaWVzICU1bGQgKCUzbGQlJSlcbiIsCiAJCQlpLTEsIGhhc2hf YnVja2V0X2xlbmd0aHNbaV0sICgoY2FjaGUtPmNfY291bnQtdG90YWwpKjEwMCkv Y2FjaGUtPmNfY291bnQpOwogfQpJbmRleDogcmVwYWlyL3hmc3Byb2dzL2xpYnhm cy9saW8uYwo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0tLSByZXBhaXIub3JpZy94ZnNw cm9ncy9saWJ4ZnMvbGlvLmMJMjAwNy0wNC0yNyAxMzoxMzozNS4wMDAwMDAwMDAg KzEwMDAKKysrIC9kZXYvbnVsbAkxOTcwLTAxLTAxIDAwOjAwOjAwLjAwMDAwMDAw MCArMDAwMApAQCAtMSwxOTIgKzAsMCBAQAotI2luY2x1ZGUgPHhmcy9saWJ4ZnMu aD4KLSNpbmNsdWRlICJpbml0LmgiCi0jaW5jbHVkZSAiYWlvLmgiCi0KLSNkZWZp bmUJREVGX1BSRUZFVENIX0lOT1MJMTYKLSNkZWZpbmUJREVGX1BSRUZFVENIX0RJ UlMJMTYKLSNkZWZpbmUJREVGX1BSRUZFVENIX0FJTwkzMgotaW50CWxpYnhmc19s aW9faW5vX2NvdW50ID0gREVGX1BSRUZFVENIX0lOT1M7Ci1pbnQJbGlieGZzX2xp b19kaXJfY291bnQgPSBERUZfUFJFRkVUQ0hfRElSUzsKLWludAlsaWJ4ZnNfbGlv X2Fpb19jb3VudCA9IERFRl9QUkVGRVRDSF9BSU87Ci0KLXN0YXRpYyBwdGhyZWFk X2tleV90IGxpb19pbm9fa2V5Owotc3RhdGljIHB0aHJlYWRfa2V5X3QgbGlvX2Rp cl9rZXk7Ci0KLXZvaWQKLWxpYnhmc19saW9fYWxsb2NhdGUodm9pZCkKLXsKLSNp ZmRlZglfQUlPQ0I2NF9UX0RFRklORUQKLQlzaXplX3QJCXNpemU7Ci0Jdm9pZAkJ KnZvaWRwOwotCi0JLyoKLQkgKiBhbGxvY2F0ZSBhIHBlci10aHJlYWQgYnVmZmVy IHdoaWNoIHdpbGwgYmUgdXNlZCBpbiBsaWJ4ZnNfcmVhZGJ1Zl9saXN0Ci0JICog aW4gdGhlIGZvbGxvd2luZyBvcmRlcjoKLQkgKiBsaWJ4ZnNfbGlvX3JlcV90IGFy cmF5Ci0JICogYWlvY2I2NF90IGFycmF5Ci0JICogYWlvY2I2NF90ICogYXJyYXkK 
[base64-encoded MIME part: fragment of a patch against the xfsprogs "repair" tree.
The decoded diff removes the old libxfs_lio_* prefetch code (aiocb64_t /
lio_listio64 based list I/O), reworks the libxfs buffer cache in rdwr.c --
per-buffer pthread mutex locking, XFS_BUF_TRACING and IO_DEBUG hooks, cache key
changed from a byte count to a basic-block length (bblen) -- and replaces the
PREPAIR_RW_* per-AG locks in xfs_repair with pthread_mutex_lock/unlock on
ag_locks[agno], reading inode chunks cluster-by-cluster into a buffer list.
Files touched in this fragment: repair/xfsprogs/libxfs/rdwr.c,
repair/xfsprogs/repair/Makefile, repair/xfsprogs/repair/dino_chunks.c,
repair/xfsprogs/repair/dir.c, repair/xfsprogs/repair/dir2.c.]
MSk7CiAJCX0KLQkJZm9yIChpID0gb2ZmID0gMDsgaSA8IG5leDsgaSsrLCBvZmYg Kz0gWEZTX0JVRl9DT1VOVChicCkpIHsKKwkJZm9yIChpID0gb2ZmID0gMDsgaSA8 IG5ibG9ja3M7IGkrKywgb2ZmICs9IFhGU19CVUZfQ09VTlQoYnApKSB7CiAJCQli cCA9IGJwbGlzdFtpXTsKIAkJCWJjb3B5KFhGU19CVUZfUFRSKGJwKSwgKGNoYXIg KilkYWJ1Zi0+ZGF0YSArIG9mZiwKIAkJCQlYRlNfQlVGX0NPVU5UKGJwKSk7CkBA IC0xNDAsNyArMTY4LDcgQEAKIAkJZnJlZShicGxpc3QpOwogCXJldHVybiBkYWJ1 ZjsKIGZhaWxlZDoKLQlmb3IgKGkgPSAwOyBpIDwgbmV4OyBpKyspCisJZm9yIChp ID0gMDsgaSA8IG5ibG9ja3M7IGkrKykKIAkJbGlieGZzX3B1dGJ1ZihicGxpc3Rb aV0pOwogCWlmIChicGxpc3QgIT0gYnBhcnJheSkKIAkJZnJlZShicGxpc3QpOwpA QCAtMjM2LDggKzI2NCwxMiBAQAogCQliY29weShkYWJ1Zi0+YnBzLCBicGxpc3Qs IG5idWYgKiBzaXplb2YoKmJwbGlzdCkpOwogCX0KIAlkYV9idWZfZG9uZShkYWJ1 Zik7Ci0JZm9yIChpID0gMDsgaSA8IG5idWY7IGkrKykKKwlmb3IgKGkgPSAwOyBp IDwgbmJ1ZjsgaSsrKSB7CisjaWZkZWYgWFJfUEZfVFJBQ0UKKwkJcGZ0cmFjZSgi cHV0YnVmICVwICglbGx1KSIsIGJwbGlzdFtpXSwgKGxvbmcgbG9uZylYRlNfQlVG X0FERFIoYnBsaXN0W2ldKSk7CisjZW5kaWYKIAkJbGlieGZzX3B1dGJ1ZihicGxp c3RbaV0pOworCX0KIAlpZiAoYnBsaXN0ICE9ICZicCkKIAkJZnJlZShicGxpc3Qp OwogfQpAQCAtODUzLDcgKzg4NSw3IEBACiAKIAlzZnAgPSAmZGlwLT5kaV91LmRp X2RpcjJzZjsKIAltYXhfc2l6ZSA9IFhGU19ERk9SS19EU0laRShkaXAsIG1wKTsK LQludW1fZW50cmllcyA9IElOVF9HRVQoc2ZwLT5oZHIuY291bnQsIEFSQ0hfQ09O VkVSVCk7CisJbnVtX2VudHJpZXMgPSBzZnAtPmhkci5jb3VudDsKIAlpbm9fZGly X3NpemUgPSBJTlRfR0VUKGRpcC0+ZGlfY29yZS5kaV9zaXplLCBBUkNIX0NPTlZF UlQpOwogCW9mZnNldCA9IFhGU19ESVIyX0RBVEFfRklSU1RfT0ZGU0VUOwogCWJh ZF9vZmZzZXQgPSAqcmVwYWlyID0gMDsKQEAgLTE5ODYsOSArMjAxOCw2IEBACiAJ aW50CQkJdDsKIAlibWFwX2V4dF90CQlsYm1wOwogCi0JaWYgKGRvX3ByZWZldGNo KQotCQlwcmVmZXRjaF9kaXIyKG1wLCBibGttYXApOwotCiAJKnJlcGFpciA9ICpk b3QgPSAqZG90ZG90ID0gZ29vZCA9IDA7CiAJKnBhcmVudCA9IE5VTExGU0lOTzsK IAluZGJubyA9IE5VTExERklMT0ZGOwpJbmRleDogcmVwYWlyL3hmc3Byb2dzL3Jl cGFpci9kaXJfc3RhY2suYwo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0tLSByZXBhaXIu b3JpZy94ZnNwcm9ncy9yZXBhaXIvZGlyX3N0YWNrLmMJMjAwNy0wNC0yNyAxMzox 
MzozNS4wMDAwMDAwMDAgKzEwMDAKKysrIC9kZXYvbnVsbAkxOTcwLTAxLTAxIDAw OjAwOjAwLjAwMDAwMDAwMCArMDAwMApAQCAtMSwxMzYgKzAsMCBAQAotLyoKLSAq IENvcHlyaWdodCAoYykgMjAwMC0yMDAxLDIwMDUgU2lsaWNvbiBHcmFwaGljcywg SW5jLgotICogQWxsIFJpZ2h0cyBSZXNlcnZlZC4KLSAqCi0gKiBUaGlzIHByb2dy YW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5k L29yCi0gKiBtb2RpZnkgaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2Vu ZXJhbCBQdWJsaWMgTGljZW5zZSBhcwotICogcHVibGlzaGVkIGJ5IHRoZSBGcmVl IFNvZnR3YXJlIEZvdW5kYXRpb24uCi0gKgotICogVGhpcyBwcm9ncmFtIGlzIGRp c3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQgaXQgd291bGQgYmUgdXNlZnVsLAot ICogYnV0IFdJVEhPVVQgQU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGlt cGxpZWQgd2FycmFudHkgb2YKLSAqIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNT IEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0aGUKLSAqIEdOVSBHZW5l cmFsIFB1YmxpYyBMaWNlbnNlIGZvciBtb3JlIGRldGFpbHMuCi0gKgotICogWW91 IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwg UHVibGljIExpY2Vuc2UKLSAqIGFsb25nIHdpdGggdGhpcyBwcm9ncmFtOyBpZiBu b3QsIHdyaXRlIHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sCi0gKiBJbmMu LCAgNTEgRnJhbmtsaW4gU3QsIEZpZnRoIEZsb29yLCBCb3N0b24sIE1BICAwMjEx MC0xMzAxICBVU0EKLSAqLwotCi0jaW5jbHVkZSA8bGlieGZzLmg+Ci0jaW5jbHVk ZSAiZGlyX3N0YWNrLmgiCi0jaW5jbHVkZSAiZXJyX3Byb3Rvcy5oIgotI2luY2x1 ZGUgInRocmVhZHMuaCIKLQotLyoKLSAqIGEgZGlyZWN0b3J5IHN0YWNrIGZvciBo b2xkaW5nIGRpcmVjdG9yaWVzIHdoaWxlCi0gKiB3ZSB0cmF2ZXJzZSBmaWxlc3lz dGVtIGhpZXJhcmNoeSBzdWJ0cmVlcy4KLSAqIG5hbWVzIGFyZSBraW5kIG9mIG1p c2xlYWRpbmcgYXMgdGhpcyBpcyByZWFsbHkKLSAqIGltcGxlbWVudGVkIGFzIGFu IGlub2RlIHN0YWNrLiAgc28gc3VlIG1lLi4uCi0gKi8KLQotc3RhdGljIGRpcl9z dGFja190CWRpcnN0YWNrX2ZyZWVsaXN0Owotc3RhdGljIGludAkJZGlyc3RhY2tf aW5pdCA9IDA7Ci1zdGF0aWMgcHRocmVhZF9tdXRleF90CWRpcnN0YWNrX211dGV4 Owotc3RhdGljIHB0aHJlYWRfbXV0ZXhhdHRyX3QgZGlyc3RhY2tfbXV0ZXhhdHRy OwotCi0KLXZvaWQKLWRpcl9zdGFja19pbml0KGRpcl9zdGFja190ICpzdGFjaykK LXsKLQlzdGFjay0+Y250ID0gMDsKLQlzdGFjay0+aGVhZCA9IE5VTEw7Ci0KLQlp ZiAoZGlyc3RhY2tfaW5pdCA9PSAwKSAgewotCQlkaXJzdGFja19pbml0ID0gMTsK 
LQkJUFJFUEFJUl9NVFhfQVRUUl9JTklUKCZkaXJzdGFja19tdXRleGF0dHIpOwot I2lmZGVmIFBUSFJFQURfTVVURVhfU1BJTkJMT0NLX05QCi0JCVBSRVBBSVJfTVRY X0FUVFJfU0VUKCZkaXJzdGFja19tdXRleGF0dHIsIFBUSFJFQURfTVVURVhfU1BJ TkJMT0NLX05QKTsKLSNlbmRpZgotCQlQUkVQQUlSX01UWF9MT0NLX0lOSVQoJmRp cnN0YWNrX211dGV4LCAmZGlyc3RhY2tfbXV0ZXhhdHRyKTsKLQkJZGlyX3N0YWNr X2luaXQoJmRpcnN0YWNrX2ZyZWVsaXN0KTsKLQl9Ci0KLQlzdGFjay0+Y250ID0g MDsKLQlzdGFjay0+aGVhZCA9IE5VTEw7Ci0KLQlyZXR1cm47Ci19Ci0KLXN0YXRp YyB2b2lkCi1kaXJfc3RhY2tfcHVzaChkaXJfc3RhY2tfdCAqc3RhY2ssIGRpcl9z dGFja19lbGVtX3QgKmVsZW0pCi17Ci0JQVNTRVJUKHN0YWNrLT5jbnQgPiAwIHx8 IChzdGFjay0+Y250ID09IDAgJiYgc3RhY2stPmhlYWQgPT0gTlVMTCkpOwotCi0J ZWxlbS0+bmV4dCA9IHN0YWNrLT5oZWFkOwotCXN0YWNrLT5oZWFkID0gZWxlbTsK LQlzdGFjay0+Y250Kys7Ci0KLQlyZXR1cm47Ci19Ci0KLXN0YXRpYyBkaXJfc3Rh Y2tfZWxlbV90ICoKLWRpcl9zdGFja19wb3AoZGlyX3N0YWNrX3QgKnN0YWNrKQot ewotCWRpcl9zdGFja19lbGVtX3QgKmVsZW07Ci0KLQlpZiAoc3RhY2stPmNudCA9 PSAwKSAgewotCQlBU1NFUlQoc3RhY2stPmhlYWQgPT0gTlVMTCk7Ci0JCXJldHVy bihOVUxMKTsKLQl9Ci0KLQllbGVtID0gc3RhY2stPmhlYWQ7Ci0KLQlBU1NFUlQo ZWxlbSAhPSBOVUxMKTsKLQotCXN0YWNrLT5oZWFkID0gZWxlbS0+bmV4dDsKLQll bGVtLT5uZXh0ID0gTlVMTDsKLQlzdGFjay0+Y250LS07Ci0KLQlyZXR1cm4oZWxl bSk7Ci19Ci0KLXZvaWQKLXB1c2hfZGlyKGRpcl9zdGFja190ICpzdGFjaywgeGZz X2lub190IGlubykKLXsKLQlkaXJfc3RhY2tfZWxlbV90ICplbGVtOwotCi0JUFJF UEFJUl9NVFhfTE9DSygmZGlyc3RhY2tfbXV0ZXgpOwotCWlmIChkaXJzdGFja19m cmVlbGlzdC5jbnQgPT0gMCkgIHsKLQkJaWYgKChlbGVtID0gbWFsbG9jKHNpemVv ZihkaXJfc3RhY2tfZWxlbV90KSkpID09IE5VTEwpICB7Ci0JCQlQUkVQQUlSX01U WF9VTkxPQ0soJmRpcnN0YWNrX211dGV4KTsKLQkJCWRvX2Vycm9yKAotCQlfKCJj b3VsZG4ndCBtYWxsb2MgZGlyIHN0YWNrIGVsZW1lbnQsIHRyeSBtb3JlIHN3YXBc biIpKTsKLQkJCWV4aXQoMSk7Ci0JCX0KLQl9IGVsc2UgIHsKLQkJZWxlbSA9IGRp cl9zdGFja19wb3AoJmRpcnN0YWNrX2ZyZWVsaXN0KTsKLQl9Ci0JUFJFUEFJUl9N VFhfVU5MT0NLKCZkaXJzdGFja19tdXRleCk7Ci0KLQllbGVtLT5pbm8gPSBpbm87 Ci0KLQlkaXJfc3RhY2tfcHVzaChzdGFjaywgZWxlbSk7Ci0KLQlyZXR1cm47Ci19 Ci0KLXhmc19pbm9fdAotcG9wX2RpcihkaXJfc3RhY2tfdCAqc3RhY2spCi17Ci0J 
ZGlyX3N0YWNrX2VsZW1fdCAqZWxlbTsKLQl4ZnNfaW5vX3QgaW5vOwotCi0JZWxl bSA9IGRpcl9zdGFja19wb3Aoc3RhY2spOwotCi0JaWYgKGVsZW0gPT0gTlVMTCkK LQkJcmV0dXJuKE5VTExGU0lOTyk7Ci0KLQlpbm8gPSBlbGVtLT5pbm87Ci0JZWxl bS0+aW5vID0gTlVMTEZTSU5POwotCi0JUFJFUEFJUl9NVFhfTE9DSygmZGlyc3Rh Y2tfbXV0ZXgpOwotCWRpcl9zdGFja19wdXNoKCZkaXJzdGFja19mcmVlbGlzdCwg ZWxlbSk7Ci0JUFJFUEFJUl9NVFhfVU5MT0NLKCZkaXJzdGFja19tdXRleCk7Ci0K LQlyZXR1cm4oaW5vKTsKLX0KSW5kZXg6IHJlcGFpci94ZnNwcm9ncy9yZXBhaXIv Z2xvYmFscy5oCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KLS0tIHJlcGFpci5vcmlnL3hm c3Byb2dzL3JlcGFpci9nbG9iYWxzLmgJMjAwNy0wNC0yNyAxNDoxMTo0MS4wMDAw MDAwMDAgKzEwMDAKKysrIHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvZ2xvYmFscy5o CTIwMDctMDYtMDQgMTc6MjI6NTkuMTA0NDA0ODgzICsxMDAwCkBAIC0yMyw4ICsy Myw2IEBACiAjZGVmaW5lIEVYVEVSTiBleHRlcm4KICNlbmRpZgogCi0jZGVmaW5l IFRSQUNLX01FTU9SWQotCiAjaWZkZWYgVFJBQ0tfTUVNT1JZCiAjaW5jbHVkZSAi dHJhY2ttZW0uaCIKICNlbmRpZgpAQCAtMTU0LDcgKzE1Miw3IEBACiAvKiBjb25m aWd1cmF0aW9uIHZhcnMgLS0gZnMgZ2VvbWV0cnkgZGVwZW5kZW50ICovCiAKIEVY VEVSTiBpbnQJCWlub2Rlc19wZXJfYmxvY2s7Ci1FWFRFUk4gaW50CQlpbm9kZXNf cGVyX2NsdXN0ZXI7CS8qIGlub2RlcyBwZXIgaW5vZGUgYnVmZmVyICovCitFWFRF Uk4gaW50CQlpbm9kZXNfcGVyX2NsdXN0ZXI7CiBFWFRFUk4gdW5zaWduZWQgaW50 CWdsb2JfYWdjb3VudDsKIEVYVEVSTiBpbnQJCWNodW5rc19wYmxvY2s7CS8qICMg b2YgNjQtaW5vIGNodW5rcyBwZXIgYWxsb2NhdGlvbiAqLwogRVhURVJOIGludAkJ bWF4X3N5bWxpbmtfYmxvY2tzOwpAQCAtMTk4LDExICsxOTYsMTYgQEAKIGV4dGVy biBzaXplX3QgCQl0c19kaXJfZnJlZW1hcF9zaXplOwogZXh0ZXJuIHNpemVfdCAJ CXRzX2F0dHJfZnJlZW1hcF9zaXplOwogCi1FWFRFUk4gcHRocmVhZF9yd2xvY2tf dAkqcGVyX2FnX2xvY2s7CitFWFRFUk4gcHRocmVhZF9tdXRleF90CSphZ19sb2Nr czsKKworRVhURVJOIGludCAJCXJlcG9ydF9pbnRlcnZhbDsKK0VYVEVSTiBfX3Vp bnQ2NF90IAkqcHJvZ19ycHRfZG9uZTsKIAotRVhURVJOIGludCByZXBvcnRfaW50 ZXJ2YWw7Ci1FWFRFUk4gX191aW50NjRfdCAqcHJvZ19ycHRfZG9uZTsKKyNpZmRl ZiBYUl9QRl9UUkFDRQorRVhURVJOIEZJTEUJCSpwZl90cmFjZV9maWxlOworI2Vu ZGlmCiAKIEVYVEVSTiBpbnQJCWFnX3N0cmlkZTsKK0VYVEVSTiBpbnQJCXRocmVh 
ZF9jb3VudDsKIAogI2VuZGlmIC8qIF9YRlNfUkVQQUlSX0dMT0JBTF9IICovCklu ZGV4OiByZXBhaXIveGZzcHJvZ3MvcmVwYWlyL2luY29yZS5jCj09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT0KLS0tIHJlcGFpci5vcmlnL3hmc3Byb2dzL3JlcGFpci9pbmNvcmUu YwkyMDA3LTA0LTI3IDEzOjEzOjM1LjAwMDAwMDAwMCArMTAwMAorKysgcmVwYWly L3hmc3Byb2dzL3JlcGFpci9pbmNvcmUuYwkyMDA3LTA1LTE2IDEyOjAyOjM5LjYx MjY1NDUzMCArMTAwMApAQCAtNjEsMTEgKzYxLDEyIEBACiAJc2l6ZV90IHNpemUg PSAwOwogCiAJYmFfYm1hcCA9IChfX3VpbnQ2NF90KiopbWFsbG9jKGFnbm8qc2l6 ZW9mKF9fdWludDY0X3QgKikpOwotCWlmICghYmFfYm1hcCkgIHsKKwlpZiAoIWJh X2JtYXApCiAJCWRvX2Vycm9yKF8oImNvdWxkbid0IGFsbG9jYXRlIGJsb2NrIG1h cCBwb2ludGVyc1xuIikpOwotCQlyZXR1cm47Ci0JfQotCVBSRVBBSVJfUldfTE9D S19BTExPQyhwZXJfYWdfbG9jaywgYWdubyk7CisJYWdfbG9ja3MgPSBtYWxsb2Mo YWdubyAqIHNpemVvZihwdGhyZWFkX211dGV4X3QpKTsKKwlpZiAoIWFnX2xvY2tz KQorCQlkb19lcnJvcihfKCJjb3VsZG4ndCBhbGxvY2F0ZSBibG9jayBtYXAgbG9j a3NcbiIpKTsKKwogCWZvciAoaSA9IDA7IGkgPCBhZ25vOyBpKyspICB7CiAJCXNp emUgPSByb3VuZHVwKChudW1ibG9ja3MrKE5CQlkvWFJfQkIpLTEpIC8gKE5CQlkv WFJfQkIpLAogCQkgICAgICAgCQlzaXplb2YoX191aW50NjRfdCkpOwpAQCAtNzcs NyArNzgsNyBAQAogCQkJcmV0dXJuOwogCQl9CiAJCWJ6ZXJvKGJhX2JtYXBbaV0s IHNpemUpOwotCQlQUkVQQUlSX1JXX0xPQ0tfSU5JVCgmcGVyX2FnX2xvY2tbaV0s IE5VTEwpOworCQlwdGhyZWFkX211dGV4X2luaXQoJmFnX2xvY2tzW2ldLCBOVUxM KTsKIAl9CiAKIAlpZiAocnRibG9ja3MgPT0gMCkgIHsKSW5kZXg6IHJlcGFpci94 ZnNwcm9ncy9yZXBhaXIvaW5jb3JlLmgKPT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQotLS0g cmVwYWlyLm9yaWcveGZzcHJvZ3MvcmVwYWlyL2luY29yZS5oCTIwMDctMDQtMjcg MTM6NDY6NTEuMDAwMDAwMDAwICsxMDAwCisrKyByZXBhaXIveGZzcHJvZ3MvcmVw YWlyL2luY29yZS5oCTIwMDctMDUtMTYgMTI6MDI6MzkuNjEyNjU0NTMwICsxMDAw CkBAIC0xNiw2ICsxNiwxMCBAQAogICogSW5jLiwgIDUxIEZyYW5rbGluIFN0LCBG aWZ0aCBGbG9vciwgQm9zdG9uLCBNQSAgMDIxMTAtMTMwMSAgVVNBCiAgKi8KIAor I2lmbmRlZiBYRlNfUkVQQUlSX0lOQ09SRV9ICisjZGVmaW5lIFhGU19SRVBBSVJf SU5DT1JFX0gKKworI2luY2x1ZGUgImF2bC5oIgogLyoKICAqIGNvbnRhaW5zIGRl 
ZmluaXRpb24gaW5mb3JtYXRpb24uICBpbXBsZW1lbnRhdGlvbiAoY29kZSkKICAq IGlzIHNwcmVhZCBvdXQgaW4gc2VwYXJhdGUgZmlsZXMuCkBAIC02MDMsNyArNjA3 LDcgQEAKICNkZWZpbmUgYWRkX2lub2RlX3JlZmNoZWNrZWQoaW5vLCBpbm9fcmVj LCBpbm9fb2Zmc2V0KSBcCiAJCVhGU19JTk9QUk9DX1NFVF9QUk9DKChpbm9fcmVj KSwgKGlub19vZmZzZXQpKQogI2RlZmluZSBpc19pbm9kZV9yZWZjaGVja2VkKGlu bywgaW5vX3JlYywgaW5vX29mZnNldCkgXAotCQkoWEZTX0lOT1BST0NfSVNfUFJP Qyhpbm9fcmVjLCBpbm9fb2Zmc2V0KSA9PSAwTEwgPyAwIDogMSkKKwkJKFhGU19J Tk9QUk9DX0lTX1BST0MoaW5vX3JlYywgaW5vX29mZnNldCkgIT0gMExMKQogI2Vs c2UKIHZvaWQgYWRkX2lub2RlX3JlZmNoZWNrZWQoeGZzX2lub190IGlubywKIAkJ CWlub190cmVlX25vZGVfdCAqaW5vX3JlYywgaW50IGlub19vZmZzZXQpOwpAQCAt NjQ3LDMgKzY1MSw1IEBACiB9IGJtYXBfY3Vyc29yX3Q7CiAKIHZvaWQgaW5pdF9i bV9jdXJzb3IoYm1hcF9jdXJzb3JfdCAqY3Vyc29yLCBpbnQgbnVtX2xldmVsKTsK KworI2VuZGlmIC8qIFhGU19SRVBBSVJfSU5DT1JFX0ggKi8KSW5kZXg6IHJlcGFp ci94ZnNwcm9ncy9yZXBhaXIvaW5jb3JlX2V4dC5jCj09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT0KLS0tIHJlcGFpci5vcmlnL3hmc3Byb2dzL3JlcGFpci9pbmNvcmVfZXh0LmMJ MjAwNy0wNC0yNyAxMzoxMzozNS4wMDAwMDAwMDAgKzEwMDAKKysrIHJlcGFpci94 ZnNwcm9ncy9yZXBhaXIvaW5jb3JlX2V4dC5jCTIwMDctMDQtMjcgMTQ6MTI6MzQu MTUxNTY4NzIyICsxMDAwCkBAIC05NSw5ICs5NSw5IEBACiAvKgogICogbG9ja3Mu CiAgKi8KLXN0YXRpYyBwdGhyZWFkX3J3bG9ja190IGV4dF9mbGlzdF9sb2NrOwot c3RhdGljIHB0aHJlYWRfcndsb2NrX3QgcnRfZXh0X3RyZWVfbG9jazsKLXN0YXRp YyBwdGhyZWFkX3J3bG9ja190IHJ0X2V4dF9mbGlzdF9sb2NrOworc3RhdGljIHB0 aHJlYWRfbXV0ZXhfdAlleHRfZmxpc3RfbG9jazsKK3N0YXRpYyBwdGhyZWFkX211 dGV4X3QJcnRfZXh0X3RyZWVfbG9jazsKK3N0YXRpYyBwdGhyZWFkX211dGV4X3QJ cnRfZXh0X2ZsaXN0X2xvY2s7CiAKIC8qCiAgKiBleHRlbnQgdHJlZSBzdHVmZiBp cyBhdmwgdHJlZXMgb2YgZHVwbGljYXRlIGV4dGVudHMsCkBAIC0xMTIsNyArMTEy LDcgQEAKIAlleHRlbnRfdHJlZV9ub2RlX3QgKm5ldzsKIAlleHRlbnRfYWxsb2Nf cmVjX3QgKnJlYzsKIAotCVBSRVBBSVJfUldfV1JJVEVfTE9DSygmZXh0X2ZsaXN0 X2xvY2spOworCXB0aHJlYWRfbXV0ZXhfbG9jaygmZXh0X2ZsaXN0X2xvY2spOwog CWlmIChleHRfZmxpc3QuY250ID09IDApICB7CiAJCUFTU0VSVChleHRfZmxpc3Qu 
bGlzdCA9PSBOVUxMKTsKIApAQCAtMTM5LDcgKzEzOSw3IEBACiAJZXh0X2ZsaXN0 Lmxpc3QgPSAoZXh0ZW50X3RyZWVfbm9kZV90ICopIG5ldy0+YXZsX25vZGUuYXZs X25leHRpbm87CiAJZXh0X2ZsaXN0LmNudC0tOwogCW5ldy0+YXZsX25vZGUuYXZs X25leHRpbm8gPSBOVUxMOwotCVBSRVBBSVJfUldfVU5MT0NLKCZleHRfZmxpc3Rf bG9jayk7CisJcHRocmVhZF9tdXRleF91bmxvY2soJmV4dF9mbGlzdF9sb2NrKTsK IAogCS8qIGluaXRpYWxpemUgbm9kZSAqLwogCkBAIC0xNTUsMTEgKzE1NSwxMSBA QAogdm9pZAogcmVsZWFzZV9leHRlbnRfdHJlZV9ub2RlKGV4dGVudF90cmVlX25v ZGVfdCAqbm9kZSkKIHsKLQlQUkVQQUlSX1JXX1dSSVRFX0xPQ0soJmV4dF9mbGlz dF9sb2NrKTsKKwlwdGhyZWFkX211dGV4X2xvY2soJmV4dF9mbGlzdF9sb2NrKTsK IAlub2RlLT5hdmxfbm9kZS5hdmxfbmV4dGlubyA9IChhdmxub2RlX3QgKikgZXh0 X2ZsaXN0Lmxpc3Q7CiAJZXh0X2ZsaXN0Lmxpc3QgPSBub2RlOwogCWV4dF9mbGlz dC5jbnQrKzsKLQlQUkVQQUlSX1JXX1VOTE9DSygmZXh0X2ZsaXN0X2xvY2spOwor CXB0aHJlYWRfbXV0ZXhfdW5sb2NrKCZleHRfZmxpc3RfbG9jayk7CiAKIAlyZXR1 cm47CiB9CkBAIC0zMjcsMTEgKzMyNywxMSBAQAogCQkgKiBhdmwgdHJlZSBjb2Rl IGRvZXNuJ3QgaGFuZGxlIGR1cHMgc28gaW5zZXJ0CiAJCSAqIG9udG8gbGlua2Vk IGxpc3QgaW4gaW5jcmVhc2luZyBzdGFydGJsb2NrIG9yZGVyCiAJCSAqCi0JCSAq IHdoZW4gY2FsbGVkIGZyb20gbWtfaW5jb3JlX2ZzdHJlZSwgCisJCSAqIHdoZW4g Y2FsbGVkIGZyb20gbWtfaW5jb3JlX2ZzdHJlZSwKIAkJICogc3RhcnRibG9jayBp cyBpbiBpbmNyZWFzaW5nIG9yZGVyLgogCQkgKiBjdXJyZW50IGlzIGFuICJhbmNo b3IiIG5vZGUuCiAJCSAqIHF1aWNrIGNoZWNrIGlmIHRoZSBuZXcgZXh0IGdvZXMg dG8gdGhlIGVuZC4KLQkJICogaWYgc28sIGFwcGVuZCBhdCB0aGUgZW5kLCB1c2lu ZyB0aGUgbGFzdCBmaWVsZCAKKwkJICogaWYgc28sIGFwcGVuZCBhdCB0aGUgZW5k LCB1c2luZyB0aGUgbGFzdCBmaWVsZAogCQkgKiBvZiB0aGUgImFuY2hvciIuCiAJ CSAqLwogCQlBU1NFUlQoY3VycmVudC0+bGFzdCAhPSBOVUxMKTsKQEAgLTM0MSw3 ICszNDEsNyBAQAogCQkJcmV0dXJuOwogCQl9CiAKLQkJLyogCisJCS8qCiAJCSAq IHNjYW4sIHRvIGZpbmQgdGhlIHByb3BlciBsb2NhdGlvbiBmb3IgbmV3IGVudHJ5 LgogCQkgKiB0aGlzIHNjYW4gaXMgKnZlcnkqIGV4cGVuc2l2ZSBhbmQgZ2V0cyB3 b3JzZSB3aXRoCiAJCSAqIHdpdGggaW5jcmVhc2luZyBlbnRyaWVzLgpAQCAtNjcw LDcgKzY3MCw3IEBACiAJcnRfZXh0ZW50X3RyZWVfbm9kZV90ICpuZXc7CiAJcnRf ZXh0ZW50X2FsbG9jX3JlY190ICpyZWM7CiAKLQlQUkVQQUlSX1JXX1dSSVRFX0xP 
Q0soJnJ0X2V4dF9mbGlzdF9sb2NrKTsKKwlwdGhyZWFkX211dGV4X2xvY2soJnJ0 X2V4dF9mbGlzdF9sb2NrKTsKIAlpZiAocnRfZXh0X2ZsaXN0LmNudCA9PSAwKSAg ewogCQlBU1NFUlQocnRfZXh0X2ZsaXN0Lmxpc3QgPT0gTlVMTCk7CiAKQEAgLTY5 Nyw3ICs2OTcsNyBAQAogCXJ0X2V4dF9mbGlzdC5saXN0ID0gKHJ0X2V4dGVudF90 cmVlX25vZGVfdCAqKSBuZXctPmF2bF9ub2RlLmF2bF9uZXh0aW5vOwogCXJ0X2V4 dF9mbGlzdC5jbnQtLTsKIAluZXctPmF2bF9ub2RlLmF2bF9uZXh0aW5vID0gTlVM TDsKLQlQUkVQQUlSX1JXX1VOTE9DSygmcnRfZXh0X2ZsaXN0X2xvY2spOworCXB0 aHJlYWRfbXV0ZXhfdW5sb2NrKCZydF9leHRfZmxpc3RfbG9jayk7CiAKIAkvKiBp bml0aWFsaXplIG5vZGUgKi8KIApAQCAtNzc2LDcgKzc3Niw3IEBACiAJeGZzX2Ry dGJub190IG5ld19zdGFydGJsb2NrOwogCXhmc19leHRsZW5fdCBuZXdfYmxvY2tj b3VudDsKIAotCVBSRVBBSVJfUldfV1JJVEVfTE9DSygmcnRfZXh0X3RyZWVfbG9j ayk7CisJcHRocmVhZF9tdXRleF9sb2NrKCZydF9leHRfdHJlZV9sb2NrKTsKIAlh dmw2NF9maW5kcmFuZ2VzKHJ0X2V4dF90cmVlX3B0ciwgc3RhcnRibG9jayAtIDEs CiAJCXN0YXJ0YmxvY2sgKyBibG9ja2NvdW50ICsgMSwKIAkJKGF2bDY0bm9kZV90 ICoqKSAmZmlyc3QsIChhdmw2NG5vZGVfdCAqKikgJmxhc3QpOwpAQCAtNzk0LDcg Kzc5NCw3IEBACiAJCQlkb19lcnJvcihfKCJkdXBsaWNhdGUgZXh0ZW50IHJhbmdl XG4iKSk7CiAJCX0KIAotCQlQUkVQQUlSX1JXX1VOTE9DSygmcnRfZXh0X3RyZWVf bG9jayk7CisJCXB0aHJlYWRfbXV0ZXhfdW5sb2NrKCZydF9leHRfdHJlZV9sb2Nr KTsKIAkJcmV0dXJuOwogCX0KIApAQCAtODE5LDcgKzgxOSw3IEBACiAJCSAqLwog CQlpZiAoZXh0LT5ydF9zdGFydGJsb2NrIDw9IHN0YXJ0YmxvY2sgJiYKIAkJCQll eHQtPnJ0X2Jsb2NrY291bnQgPj0gYmxvY2tjb3VudCkgewotCQkJUFJFUEFJUl9S V19VTkxPQ0soJnJ0X2V4dF90cmVlX2xvY2spOworCQkJcHRocmVhZF9tdXRleF91 bmxvY2soJnJ0X2V4dF90cmVlX2xvY2spOwogCQkJcmV0dXJuOwogCQl9CiAJCS8q CkBAIC04NDksNyArODQ5LDcgQEAKIAkJZG9fZXJyb3IoXygiZHVwbGljYXRlIGV4 dGVudCByYW5nZVxuIikpOwogCX0KIAotCVBSRVBBSVJfUldfVU5MT0NLKCZydF9l eHRfdHJlZV9sb2NrKTsKKwlwdGhyZWFkX211dGV4X3VubG9jaygmcnRfZXh0X3Ry ZWVfbG9jayk7CiAJcmV0dXJuOwogfQogCkBAIC04NjIsMTIgKzg2MiwxMiBAQAog ewogCWludCByZXQ7CiAKLQlQUkVQQUlSX1JXX1JFQURfTE9DSygmcnRfZXh0X3Ry ZWVfbG9jayk7CisJcHRocmVhZF9tdXRleF9sb2NrKCZydF9leHRfdHJlZV9sb2Nr KTsKIAlpZiAoYXZsNjRfZmluZHJhbmdlKHJ0X2V4dF90cmVlX3B0ciwgYm5vKSAh 
PSBOVUxMKQogCQlyZXQgPSAxOwogCWVsc2UKIAkJcmV0ID0gMDsKLQlQUkVQQUlS X1JXX1VOTE9DSygmcnRfZXh0X3RyZWVfbG9jayk7CisJcHRocmVhZF9tdXRleF91 bmxvY2soJnJ0X2V4dF90cmVlX2xvY2spOwogCXJldHVybihyZXQpOwogfQogCkBA IC04OTcsOSArODk3LDkgQEAKIAogCWJhX2xpc3QgPSBOVUxMOwogCXJ0X2JhX2xp c3QgPSBOVUxMOwotCVBSRVBBSVJfUldfTE9DS19JTklUKCZleHRfZmxpc3RfbG9j aywgTlVMTCk7Ci0JUFJFUEFJUl9SV19MT0NLX0lOSVQoJnJ0X2V4dF90cmVlX2xv Y2ssIE5VTEwpOwotCVBSRVBBSVJfUldfTE9DS19JTklUKCZydF9leHRfZmxpc3Rf bG9jaywgTlVMTCk7CisJcHRocmVhZF9tdXRleF9pbml0KCZleHRfZmxpc3RfbG9j aywgTlVMTCk7CisJcHRocmVhZF9tdXRleF9pbml0KCZydF9leHRfdHJlZV9sb2Nr LCBOVUxMKTsKKwlwdGhyZWFkX211dGV4X2luaXQoJnJ0X2V4dF9mbGlzdF9sb2Nr LCBOVUxMKTsKIAogCWlmICgoZXh0ZW50X3RyZWVfcHRycyA9IG1hbGxvYyhhZ2Nv dW50ICoKIAkJCQkJc2l6ZW9mKGF2bHRyZWVfZGVzY190ICopKSkgPT0gTlVMTCkK SW5kZXg6IHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvaW5jb3JlX2luby5jCj09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT0KLS0tIHJlcGFpci5vcmlnL3hmc3Byb2dzL3JlcGFpci9p bmNvcmVfaW5vLmMJMjAwNy0wNC0yNyAxNDowOTozNy4wMDAwMDAwMDAgKzEwMDAK KysrIHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvaW5jb3JlX2luby5jCTIwMDctMDQt MjcgMTQ6MTI6MzQuMTU1NTY4MjAwICsxMDAwCkBAIC0yNSw3ICsyNSw3IEBACiAj aW5jbHVkZSAidGhyZWFkcy5oIgogI2luY2x1ZGUgImVycl9wcm90b3MuaCIKIAot c3RhdGljIHB0aHJlYWRfcndsb2NrX3QgaW5vX2ZsaXN0X2xvY2s7CitzdGF0aWMg cHRocmVhZF9tdXRleF90CWlub19mbGlzdF9sb2NrOwogZXh0ZXJuIGF2bG5vZGVf dAkqYXZsX2ZpcnN0aW5vKGF2bG5vZGVfdCAqcm9vdCk7CiAKIC8qCkBAIC0yNTks NyArMjU5LDcgQEAKIAlpbm9fdHJlZV9ub2RlX3QgCSppbm9fcmVjOwogCWF2bG5v ZGVfdCAJCSpub2RlOwogCi0JUFJFUEFJUl9SV19XUklURV9MT0NLKCZpbm9fZmxp c3RfbG9jayk7CisJcHRocmVhZF9tdXRleF9sb2NrKCZpbm9fZmxpc3RfbG9jayk7 CiAJaWYgKGlub19mbGlzdC5jbnQgPT0gMCkgIHsKIAkJQVNTRVJUKGlub19mbGlz dC5saXN0ID09IE5VTEwpOwogCkBAIC0yODMsNyArMjgzLDcgQEAKIAlpbm9fZmxp c3QuY250LS07CiAJbm9kZSA9ICZpbm9fcmVjLT5hdmxfbm9kZTsKIAlub2RlLT5h dmxfbmV4dGlubyA9IG5vZGUtPmF2bF9mb3J3ID0gbm9kZS0+YXZsX2JhY2sgPSBO VUxMOwotCVBSRVBBSVJfUldfVU5MT0NLKCZpbm9fZmxpc3RfbG9jayk7CisJcHRo 
cmVhZF9tdXRleF91bmxvY2soJmlub19mbGlzdF9sb2NrKTsKIAogCS8qIGluaXRp YWxpemUgbm9kZSAqLwogCkBAIC0zMTEsNyArMzExLDcgQEAKIAlpbm9fcmVjLT5h dmxfbm9kZS5hdmxfZm9ydyA9IE5VTEw7CiAJaW5vX3JlYy0+YXZsX25vZGUuYXZs X2JhY2sgPSBOVUxMOwogCi0JUFJFUEFJUl9SV19XUklURV9MT0NLKCZpbm9fZmxp c3RfbG9jayk7CisJcHRocmVhZF9tdXRleF9sb2NrKCZpbm9fZmxpc3RfbG9jayk7 CiAJaWYgKGlub19mbGlzdC5saXN0ICE9IE5VTEwpICB7CiAJCUFTU0VSVChpbm9f Zmxpc3QuY250ID4gMCk7CiAJCWlub19yZWMtPmF2bF9ub2RlLmF2bF9uZXh0aW5v ID0gKGF2bG5vZGVfdCAqKSBpbm9fZmxpc3QubGlzdDsKQEAgLTMzMyw5ICszMzMs NyBAQAogCQlmcmVlKGlub19yZWMtPmlub191bi5leF9kYXRhKTsKIAogCX0KLQlQ UkVQQUlSX1JXX1VOTE9DSygmaW5vX2ZsaXN0X2xvY2spOwotCi0JcmV0dXJuOwor CXB0aHJlYWRfbXV0ZXhfdW5sb2NrKCZpbm9fZmxpc3RfbG9jayk7CiB9CiAKIC8q CkBAIC00MDMsOCArNDAxLDYgQEAKIAkgKiBzZXQgY2FjaGUgZW50cnkKIAkgKi8K IAlsYXN0X3JlY1thZ25vXSA9IGlub19yZWM7Ci0KLQlyZXR1cm47CiB9CiAKIC8q CkBAIC00NTIsOCArNDQ4LDYgQEAKIGNsZWFyX3VuY2VydGFpbl9pbm9fY2FjaGUo eGZzX2FnbnVtYmVyX3QgYWdubykKIHsKIAlsYXN0X3JlY1thZ25vXSA9IE5VTEw7 Ci0KLQlyZXR1cm47CiB9CiAKIApAQCAtNTIxLDggKzUxNSw2IEBACiBmcmVlX2lu b2RlX3JlYyh4ZnNfYWdudW1iZXJfdCBhZ25vLCBpbm9fdHJlZV9ub2RlX3QgKmlu b19yZWMpCiB7CiAJZnJlZV9pbm9fdHJlZV9ub2RlKGlub19yZWMpOwotCi0JcmV0 dXJuOwogfQogCiB2b2lkCkBAIC01MzQsNyArNTI2LDYgQEAKIAogCWF2bF9maW5k cmFuZ2VzKGlub2RlX3RyZWVfcHRyc1thZ25vXSwgc3RhcnRfaW5vLAogCQllbmRf aW5vLCAoYXZsbm9kZV90ICoqKSBmaXJzdCwgKGF2bG5vZGVfdCAqKikgbGFzdCk7 Ci0JcmV0dXJuOwogfQogCiAvKgpAQCAtNzE2LDggKzcwNyw2IEBACiAjZW5kaWYK IAlpcmVjLT5pbm9fdW4ucGxpc3QtPnBlbnRyaWVzW3RhcmdldF0gPSBwYXJlbnQ7 CiAJaXJlYy0+aW5vX3VuLnBsaXN0LT5wbWFzayB8PSAoMUxMIDw8IG9mZnNldCk7 Ci0KLQlyZXR1cm47CiB9CiAKIHhmc19pbm9fdApAQCAtODEwLDcgKzc5OSw3IEBA CiAJaW50IGk7CiAJaW50IGFnY291bnQgPSBtcC0+bV9zYi5zYl9hZ2NvdW50Owog Ci0JUFJFUEFJUl9SV19MT0NLX0lOSVQoJmlub19mbGlzdF9sb2NrLCBOVUxMKTsK KwlwdGhyZWFkX211dGV4X2luaXQoJmlub19mbGlzdF9sb2NrLCBOVUxMKTsKIAlp ZiAoKGlub2RlX3RyZWVfcHRycyA9IG1hbGxvYyhhZ2NvdW50ICoKIAkJCQkJc2l6 ZW9mKGF2bHRyZWVfZGVzY190ICopKSkgPT0gTlVMTCkKIAkJZG9fZXJyb3IoXygi 
Y291bGRuJ3QgbWFsbG9jIGlub2RlIHRyZWUgZGVzY3JpcHRvciB0YWJsZVxuIikp OwpAQCAtODQyLDggKzgzMSw2IEBACiAJYnplcm8obGFzdF9yZWMsIHNpemVvZihp bm9fdHJlZV9ub2RlX3QgKikgKiBhZ2NvdW50KTsKIAogCWZ1bGxfaW5vX2V4X2Rh dGEgPSAwOwotCi0JcmV0dXJuOwogfQogCiAjaWZkZWYgWFJfSU5PX1JFRl9ERUJV RwpAQCAtODUzLDggKzg0MCw2IEBACiAJWEZTX0lOT1BST0NfU0VUX1BST0MoKGlu b19yZWMpLCAoaW5vX29mZnNldCkpOwogCiAJQVNTRVJUKGlzX2lub2RlX3JlZmNo ZWNrZWQoaW5vLCBpbm9fcmVjLCBpbm9fb2Zmc2V0KSk7Ci0KLQlyZXR1cm47CiB9 CiAKIGludApJbmRleDogcmVwYWlyL3hmc3Byb2dzL3JlcGFpci9pbml0LmMKPT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PQotLS0gcmVwYWlyLm9yaWcveGZzcHJvZ3MvcmVwYWly L2luaXQuYwkyMDA3LTA0LTI3IDE0OjExOjQxLjAwMDAwMDAwMCArMTAwMAorKysg cmVwYWlyL3hmc3Byb2dzL3JlcGFpci9pbml0LmMJMjAwNy0wNC0yNyAxNDoxMjoz NC4xNTU1NjgyMDAgKzEwMDAKQEAgLTIyLDcgKzIyLDExIEBACiAjaW5jbHVkZSAi cHJvdG9zLmgiCiAjaW5jbHVkZSAiZXJyX3Byb3Rvcy5oIgogI2luY2x1ZGUgInB0 aHJlYWQuaCIKKyNpbmNsdWRlICJhdmwuaCIKKyNpbmNsdWRlICJkaXIuaCIKKyNp bmNsdWRlICJpbmNvcmUuaCIKICNpbmNsdWRlICJwcmVmZXRjaC5oIgorI2luY2x1 ZGUgInJhZGl4LXRyZWUuaCIKICNpbmNsdWRlIDxzeXMvcmVzb3VyY2UuaD4KIAog c3RhdGljIHB0aHJlYWRfa2V5X3QgZGlyYnVmX2tleTsKQEAgLTE0NCw5ICsxNDgs NSBAQAogCXRzX2NyZWF0ZSgpOwogCXRzX2luaXQoKTsKIAlpbmNyZWFzZV9ybGlt aXQoKTsKLQlpZiAoZG9fcHJlZmV0Y2gpIHsKLQkJZG9fcHJlZmV0Y2ggPSBsaWJ4 ZnNfbGlvX2luaXQoKTsKLQkJaWYgKGRvX3ByZWZldGNoKQotCQkJbGlieGZzX2xp b19hbGxvY2F0ZSgpOwotCX0KKwlyYWRpeF90cmVlX2luaXQoKTsKIH0KSW5kZXg6 IHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvcGhhc2UzLmMKPT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PQotLS0gcmVwYWlyLm9yaWcveGZzcHJvZ3MvcmVwYWlyL3BoYXNlMy5jCTIw MDctMDQtMjcgMTQ6MTE6NDEuMDAwMDAwMDAwICsxMDAwCisrKyByZXBhaXIveGZz cHJvZ3MvcmVwYWlyL3BoYXNlMy5jCTIwMDctMDUtMDEgMTY6NTQ6NTguNzExMTc0 NDUyICsxMDAwCkBAIC0yNiw2ICsyNiw3IEBACiAjaW5jbHVkZSAiZGlub2RlLmgi CiAjaW5jbHVkZSAidGhyZWFkcy5oIgogI2luY2x1ZGUgInByb2dyZXNzLmgiCisj aW5jbHVkZSAicHJlZmV0Y2guaCIKIAogLyoKICAqIHdhbGtzIGFuIHVubGlua2Vk 
IGxpc3QsIHJldHVybnMgMSBvbiBhbiBlcnJvciAoYm9ndXMgcG9pbnRlcikgb3IK QEAgLTU5LDcgKzYwLDcgQEAKIAkJCQlhZGRfYWdpbm9kZV91bmNlcnRhaW4oYWdu bywgY3VycmVudF9pbm8sIDEpOwogCQkJCWFnYm5vID0gWEZTX0FHSU5PX1RPX0FH Qk5PKG1wLCBjdXJyZW50X2lubyk7CiAKLQkJCQlQUkVQQUlSX1JXX1dSSVRFX0xP Q0soJnBlcl9hZ19sb2NrW2Fnbm9dKTsKKwkJCQlwdGhyZWFkX211dGV4X2xvY2so JmFnX2xvY2tzW2Fnbm9dKTsKIAkJCQlzd2l0Y2ggKHN0YXRlID0gZ2V0X2FnYm5v X3N0YXRlKG1wLAogCQkJCQkJCWFnbm8sIGFnYm5vKSkgIHsKIAkJCQljYXNlIFhS X0VfVU5LTk9XTjoKQEAgLTY3LDE0ICs2OCwxMSBAQAogCQkJCWNhc2UgWFJfRV9G UkVFMToKIAkJCQkJc2V0X2FnYm5vX3N0YXRlKG1wLCBhZ25vLCBhZ2JubywKIAkJ CQkJCVhSX0VfSU5PKTsKLQkJCQkJUFJFUEFJUl9SV19VTkxPQ0soJnBlcl9hZ19s b2NrW2Fnbm9dKTsKIAkJCQkJYnJlYWs7CiAJCQkJY2FzZSBYUl9FX0JBRF9TVEFU RToKLQkJCQkJUFJFUEFJUl9SV19VTkxPQ0soJnBlcl9hZ19sb2NrW2Fnbm9dKTsK IAkJCQkJZG9fZXJyb3IoXygKIAkJCQkJCSJiYWQgc3RhdGUgaW4gYmxvY2sgbWFw ICVkXG4iKSwKIAkJCQkJCXN0YXRlKTsKLQkJCQkJYWJvcnQoKTsKIAkJCQkJYnJl YWs7CiAJCQkJZGVmYXVsdDoKIAkJCQkJLyoKQEAgLTg5LDkgKzg3LDkgQEAKIAkJ CQkJICovCiAJCQkJCXNldF9hZ2Jub19zdGF0ZShtcCwgYWdubywgYWdibm8sCiAJ CQkJCQlYUl9FX0lOTyk7Ci0JCQkJCVBSRVBBSVJfUldfVU5MT0NLKCZwZXJfYWdf bG9ja1thZ25vXSk7CiAJCQkJCWJyZWFrOwogCQkJCX0KKwkJCQlwdGhyZWFkX211 dGV4X3VubG9jaygmYWdfbG9ja3NbYWdub10pOwogCQkJfQogCQkJY3VycmVudF9p bm8gPSBkaXAtPmRpX25leHRfdW5saW5rZWQ7CiAJCX0gZWxzZSAgewpAQCAtMTQ5 LDIxICsxNDcsNjcgQEAKIAkJbGlieGZzX3B1dGJ1ZihicCk7CiB9CiAKLXZvaWQK LXBhcmFsbGVsX3AzX3Byb2Nlc3NfYWdpbm9kZXMoeGZzX21vdW50X3QgKm1wLCB4 ZnNfYWdudW1iZXJfdCBhZ25vKQorc3RhdGljIHZvaWQKK3Byb2Nlc3NfYWdfZnVu YygKKwl3b3JrX3F1ZXVlX3QJCSp3cSwKKwl4ZnNfYWdudW1iZXJfdCAJCWFnbm8s CisJdm9pZAkJCSphcmcpCiB7CiAJLyoKIAkgKiB0dXJuIG9uIGRpcmVjdG9yeSBw cm9jZXNzaW5nIChpbm9kZSBkaXNjb3ZlcnkpIGFuZAogCSAqIGF0dHJpYnV0ZSBw cm9jZXNzaW5nIChleHRyYV9hdHRyX2NoZWNrKQogCSAqLworCXdhaXRfZm9yX2lu b2RlX3ByZWZldGNoKGFyZyk7CiAJZG9fbG9nKF8oIiAgICAgICAgLSBhZ25vID0g JWRcbiIpLCBhZ25vKTsKLQlwcm9jZXNzX2FnaW5vZGVzKG1wLCBhZ25vLCAxLCAw LCAxKTsKKwlwcm9jZXNzX2FnaW5vZGVzKHdxLT5tcCwgYXJnLCBhZ25vLCAxLCAw 
LCAxKTsKKwljbGVhbnVwX2lub2RlX3ByZWZldGNoKGFyZyk7Cit9CisKK3N0YXRp YyB2b2lkCitwcm9jZXNzX2FncygKKwl4ZnNfbW91bnRfdAkJKm1wKQoreworCWlu dCAJCQlpLCBqOworCXdvcmtfcXVldWVfdAkJKnF1ZXVlczsKKwlwcmVmZXRjaF9h cmdzX3QJCSpwZl9hcmdzWzJdOworCisJcXVldWVzID0gbWFsbG9jKHRocmVhZF9j b3VudCAqIHNpemVvZih3b3JrX3F1ZXVlX3QpKTsKKworCWlmIChhZ19zdHJpZGUp IHsKKwkJLyoKKwkJICogY3JlYXRlIG9uZSB3b3JrZXIgdGhyZWFkIGZvciBlYWNo IHNlZ21lbnQgb2YgdGhlIHZvbHVtZQorCQkgKi8KKwkJZm9yIChpID0gMDsgaSA8 IHRocmVhZF9jb3VudDsgaSsrKSB7CisJCQljcmVhdGVfd29ya19xdWV1ZSgmcXVl dWVzW2ldLCBtcCwgMSk7CisJCQlwZl9hcmdzWzBdID0gTlVMTDsKKwkJCWZvciAo aiA9IGk7IGogPCBtcC0+bV9zYi5zYl9hZ2NvdW50OyBqICs9IGFnX3N0cmlkZSkg eworCQkJCXBmX2FyZ3NbMF0gPSBzdGFydF9pbm9kZV9wcmVmZXRjaChqLCAwLCBw Zl9hcmdzWzBdKTsKKwkJCQlxdWV1ZV93b3JrKCZxdWV1ZXNbaV0sIHByb2Nlc3Nf YWdfZnVuYywgaiwgcGZfYXJnc1swXSk7CisJCQl9CisJCX0KKwkJLyoKKwkJICog d2FpdCBmb3Igd29ya2VycyB0byBjb21wbGV0ZQorCQkgKi8KKwkJZm9yIChpID0g MDsgaSA8IHRocmVhZF9jb3VudDsgaSsrKQorCQkJZGVzdHJveV93b3JrX3F1ZXVl KCZxdWV1ZXNbaV0pOworCX0gZWxzZSB7CisJCXF1ZXVlc1swXS5tcCA9IG1wOwor CQlwZl9hcmdzWzBdID0gc3RhcnRfaW5vZGVfcHJlZmV0Y2goMCwgMCwgTlVMTCk7 CisJCWZvciAoaSA9IDA7IGkgPCBtcC0+bV9zYi5zYl9hZ2NvdW50OyBpKyspIHsK KwkJCXBmX2FyZ3NbKH5pKSAmIDFdID0gc3RhcnRfaW5vZGVfcHJlZmV0Y2goaSAr IDEsIDAsCisJCQkJCXBmX2FyZ3NbaSAmIDFdKTsKKwkJCXByb2Nlc3NfYWdfZnVu YygmcXVldWVzWzBdLCBpLCBwZl9hcmdzW2kgJiAxXSk7CisJCX0KKwl9CisJZnJl ZShxdWV1ZXMpOwogfQogCiB2b2lkCiBwaGFzZTMoeGZzX21vdW50X3QgKm1wKQog ewotCWludCBpLCBqOworCWludCAJCQlpLCBqOworCisJcHJpbnRmKCJNYWluIHRo cmVhZCA9ICVseFxuIiwgcHRocmVhZF9zZWxmKCkpOwogCiAJZG9fbG9nKF8oIlBo YXNlIDMgLSBmb3IgZWFjaCBBRy4uLlxuIikpOwogCWlmICghbm9fbW9kaWZ5KQpA QCAtMTkyLDE2ICsyMzYsOSBAQAogCSAgICAiICAgICAgICAtIHByb2Nlc3Mga25v d24gaW5vZGVzIGFuZCBwZXJmb3JtIGlub2RlIGRpc2NvdmVyeS4uLlxuIikpOwog CiAJc2V0X3Byb2dyZXNzX21zZyhQUk9HX0ZNVF9QUk9DRVNTX0lOTywgKF9fdWlu dDY0X3QpIG1wLT5tX3NiLnNiX2ljb3VudCk7Ci0JaWYgKGFnX3N0cmlkZSkgewot CQlpbnQgCXN0ZXBzID0gKG1wLT5tX3NiLnNiX2FnY291bnQgKyBhZ19zdHJpZGUg 
[base64-encoded attachment fragment, truncated at both ends: a unified diff against xfsprogs ("Index: repair/xfsprogs/repair/phase4.c", phase5.c, phase6.c, opening mid-way through a phase3.c hunk). The patch replaces the per-AG parallel_pN_process_aginodes() / wait_for_workers() scheme with a process_ags() work-queue and inode-prefetch implementation (wait_for_inode_prefetch() / cleanup_inode_prefetch()), marks many repair helpers static, and removes the dir_stack / push_dir code from phase 6. Fragment ends mid-stream.]
bmFnZShtcCk7CisJCQlkb193YXJuKF8oIm1vdmluZyB0byAlc1xuIiksIE9SUEhB TkFHRSk7CisJCQltdl9vcnBoYW5hZ2UobXAsIGlubywgaW5vZGVfaXNhZGlyKGly ZWMsIGkpKTsKKwkJfSBlbHNlICB7CisJCQlkb193YXJuKF8oIndvdWxkIG1vdmUg dG8gJXNcbiIpLCBPUlBIQU5BR0UpOwogCQl9CisJCS8qCisJCSAqIGZvciByZWFk LW9ubHkgY2FzZSwgZXZlbiB0aG91Z2ggdGhlIGlub2RlIGlzbid0CisJCSAqIHJl YWxseSByZWFjaGFibGUsIHNldCB0aGUgZmxhZyAoYW5kIGJ1bXAgb3VyIGxpbmsK KwkJICogY291bnQpIGFueXdheSB0byBmb29sIHBoYXNlIDcKKwkJICovCisJCWFk ZF9pbm9kZV9yZWFjaGVkKGlyZWMsIGkpOwogCX0KIH0KIAogc3RhdGljIHZvaWQK LXRyYXZlcnNlX2Z1bmN0aW9uKHhmc19tb3VudF90ICptcCwgeGZzX2FnbnVtYmVy X3QgYWdubykKK3RyYXZlcnNlX2Z1bmN0aW9uKAorCXdvcmtfcXVldWVfdAkJKndx LAorCXhmc19hZ251bWJlcl90IAkJYWdubywKKwl2b2lkCQkJKmFyZykKIHsKLQly ZWdpc3RlciBpbm9fdHJlZV9ub2RlX3QgKmlyZWM7Ci0JaW50CQkJajsKLQl4ZnNf aW5vX3QJCWlubzsKLQlkaXJfc3RhY2tfdAkJc3RhY2s7CisJaW5vX3RyZWVfbm9k ZV90IAkqaXJlYzsKKwlpbnQJCQlpOworCXByZWZldGNoX2FyZ3NfdAkJKnBmX2Fy Z3MgPSBhcmc7CisKKwl3YWl0X2Zvcl9pbm9kZV9wcmVmZXRjaChwZl9hcmdzKTsK IAogCWlmICh2ZXJib3NlKQogCQlkb19sb2coXygiICAgICAgICAtIGFnbm8gPSAl ZFxuIiksIGFnbm8pOwogCi0JZGlyX3N0YWNrX2luaXQoJnN0YWNrKTsKLQlpcmVj ID0gZmluZGZpcnN0X2lub2RlX3JlYyhhZ25vKTsKLQotCXdoaWxlIChpcmVjICE9 IE5VTEwpICB7Ci0JCWZvciAoaiA9IDA7IGogPCBYRlNfSU5PREVTX1BFUl9DSFVO SzsgaisrKSAgewotCQkJaWYgKCFpbm9kZV9pc2FkaXIoaXJlYywgaikpIHsKLQkJ CQlpbm8gPSBYRlNfQUdJTk9fVE9fSU5PKG1wLCBhZ25vLAotCQkJCQlpcmVjLT5p bm9fc3RhcnRudW0gKyBqKTsKLQkJCQlpZiAobXAtPm1fc2Iuc2Jfcm9vdGlubyAh PSBpbm8pCi0JCQkJCWNvbnRpbnVlOwotCQkJfQorCWZvciAoaXJlYyA9IGZpbmRm aXJzdF9pbm9kZV9yZWMoYWdubyk7IGlyZWM7IGlyZWMgPSBuZXh0X2lub19yZWMo aXJlYykpIHsKKwkJaWYgKGlyZWMtPmlub19pc2FfZGlyID09IDApCisJCQljb250 aW51ZTsKIAotCQkJaW5vID0gWEZTX0FHSU5PX1RPX0lOTyhtcCwgYWdubywKLQkJ CQlpcmVjLT5pbm9fc3RhcnRudW0gKyBqKTsKKwkJaWYgKHBmX2FyZ3MpCisJCQlz ZW1fcG9zdCgmcGZfYXJncy0+cmFfY291bnQpOwogCi0JCQlwdXNoX2Rpcigmc3Rh Y2ssIGlubyk7Ci0JCQlwcm9jZXNzX2RpcnN0YWNrKG1wLCAmc3RhY2spOworCQlm b3IgKGkgPSAwOyBpIDwgWEZTX0lOT0RFU19QRVJfQ0hVTks7IGkrKykgIHsKKwkJ 
CWlmIChpbm9kZV9pc2FkaXIoaXJlYywgaSkpCisJCQkJcHJvY2Vzc19kaXJfaW5v ZGUod3EtPm1wLAorCQkJCQlYRlNfQUdJTk9fVE9fSU5PKHdxLT5tcCwgYWdubywK KwkJCQkJaXJlYy0+aW5vX3N0YXJ0bnVtICsgaSksIGlyZWMsIGkpOwogCQl9Ci0J CWlyZWMgPSBuZXh0X2lub19yZWMoaXJlYyk7CiAJfQotCXJldHVybjsKKwljbGVh bnVwX2lub2RlX3ByZWZldGNoKHBmX2FyZ3MpOwogfQogCiBzdGF0aWMgdm9pZAot dHJhdmVyc2VfYWx0KHhmc19tb3VudF90ICptcCkKK3RyYXZlcnNlX2FncygKKwl4 ZnNfbW91bnRfdCAJCSptcCkKIHsKIAlpbnQJCQlpOworCXdvcmtfcXVldWVfdAkJ KnF1ZXVlczsKKwlwcmVmZXRjaF9hcmdzX3QJCSpwZl9hcmdzWzJdOworCisJcXVl dWVzID0gbWFsbG9jKHRocmVhZF9jb3VudCAqIHNpemVvZih3b3JrX3F1ZXVlX3Qp KTsKKwlxdWV1ZXNbMF0ubXAgPSBtcDsKIAotCXNldF9wcm9ncmVzc19tc2coUFJP R19GTVRfVFJBVkVSU0FMLCAoX191aW50NjRfdCkgZ2xvYl9hZ2NvdW50KTsKLQlm b3IgKGkgPSAwOyBpIDwgbXAtPm1fc2Iuc2JfYWdjb3VudDsgaSsrKSAgewotCQl0 cmF2ZXJzZV9mdW5jdGlvbihtcCwgaSk7Ci0JCVBST0dfUlBUX0lOQyhwcm9nX3Jw dF9kb25lW2ldLCAxKTsKKwlpZiAoIWxpYnhmc19iY2FjaGVfb3ZlcmZsb3dlZCgp KSB7CisJCS8qY3JlYXRlX3dvcmtfcXVldWUoJnF1ZXVlc1swXSwgbXAsIGxpYnhm c19ucHJvYygpKTsKKwkJZm9yIChpID0gMDsgaSA8IGdsb2JfYWdjb3VudDsgaSsr KQorCQkJcXVldWVfd29yaygmcXVldWVzWzBdLCB0cmF2ZXJzZV9mdW5jdGlvbiwg aSwgTlVMTCk7CisJCWRlc3Ryb3lfd29ya19xdWV1ZSgmcXVldWVzWzBdKTsqLwor CQlmb3IgKGkgPSAwOyBpIDwgZ2xvYl9hZ2NvdW50OyBpKyspCisJCQl0cmF2ZXJz ZV9mdW5jdGlvbigmcXVldWVzWzBdLCBpLCBOVUxMKTsKKwl9IGVsc2UgeworCQkv KiBUT0RPOiBBRyBzdHJpZGUgc3VwcG9ydCAqLworCQlwZl9hcmdzWzBdID0gc3Rh cnRfaW5vZGVfcHJlZmV0Y2goMCwgMSwgTlVMTCk7CisJCWZvciAoaSA9IDA7IGkg PCBnbG9iX2FnY291bnQ7IGkrKykgeworCQkJcGZfYXJnc1sofmkpICYgMV0gPSBz dGFydF9pbm9kZV9wcmVmZXRjaChpICsgMSwgMSwKKwkJCQkJcGZfYXJnc1tpICYg MV0pOworCQkJdHJhdmVyc2VfZnVuY3Rpb24oJnF1ZXVlc1swXSwgaSwgcGZfYXJn c1tpICYgMV0pOworCQl9CiAJfQotCXByaW50X2ZpbmFsX3JwdCgpOworCWZyZWUo cXVldWVzKTsKIH0KIAogdm9pZAogcGhhc2U2KHhmc19tb3VudF90ICptcCkKIHsK LQl4ZnNfaW5vX3QJCWlubzsKIAlpbm9fdHJlZV9ub2RlX3QJCSppcmVjOwotCWRp cl9zdGFja190CQlzdGFjazsKIAlpbnQJCQlpOwotCWludAkJCWo7Ci0JeGZzX2lu b190CQlvcnBoYW5hZ2VfaW5vOwogCiAJYnplcm8oJnplcm9jciwgc2l6ZW9mKHN0 
cnVjdCBjcmVkKSk7CiAJYnplcm8oJnplcm9mc3gsIHNpemVvZihzdHJ1Y3QgZnN4 YXR0cikpOwpAQCAtMzkyNCwzNCArMzg3Niw5IEBACiAJCX0KIAl9CiAKLQlkaXJf c3RhY2tfaW5pdCgmc3RhY2spOwotCiAJbWFya19zdGFuZGFsb25lX2lub2Rlcyht cCk7CiAKLQkvKgotCSAqIHB1c2ggcm9vdCBkaXIgb24gc3RhY2ssIHRoZW4gZ28K LQkgKi8KLQlpZiAoIW5lZWRfcm9vdF9pbm9kZSkgIHsKLQkJZG9fbG9nKF8oIiAg ICAgICAgLSB0cmF2ZXJzaW5nIGZpbGVzeXN0ZW0gc3RhcnRpbmcgYXQgLyAuLi4g XG4iKSk7Ci0KLQkJaWYgKGRvX3ByZWZldGNoKSB7Ci0JCQl0cmF2ZXJzZV9hbHQo bXApOwotCQl9IGVsc2UgewotCQkJcHVzaF9kaXIoJnN0YWNrLCBtcC0+bV9zYi5z Yl9yb290aW5vKTsKLQkJCXByb2Nlc3NfZGlyc3RhY2sobXAsICZzdGFjayk7Ci0J CX0KLQotCQlkb19sb2coXygiICAgICAgICAtIHRyYXZlcnNhbCBmaW5pc2hlZCAu Li4gXG4iKSk7Ci0JfSBlbHNlICB7Ci0JCUFTU0VSVChub19tb2RpZnkgIT0gMCk7 Ci0KLQkJZG9fbG9nKAotXygiICAgICAgICAtIHJvb3QgaW5vZGUgbG9zdCwgY2Fu bm90IG1ha2UgbmV3IG9uZSBpbiBubyBtb2RpZnkgbW9kZSAuLi4gXG4iKSk7Ci0J CWRvX2xvZygKLV8oIiAgICAgICAgLSBza2lwcGluZyBmaWxlc3lzdGVtIHRyYXZl cnNhbCBmcm9tIC8gLi4uIFxuIikpOwotCX0KLQotCWRvX2xvZyhfKCIgICAgICAg IC0gdHJhdmVyc2luZyBhbGwgdW5hdHRhY2hlZCBzdWJ0cmVlcyAuLi4gXG4iKSk7 CisJZG9fbG9nKF8oIiAgICAgICAgLSB0cmF2ZXJzaW5nIGZpbGVzeXN0ZW0gLi4u IFxuIikpOwogCiAJaXJlYyA9IGZpbmRfaW5vZGVfcmVjKFhGU19JTk9fVE9fQUdO TyhtcCwgbXAtPm1fc2Iuc2Jfcm9vdGlubyksCiAJCQkJWEZTX0lOT19UT19BR0lO TyhtcCwgbXAtPm1fc2Iuc2Jfcm9vdGlubykpOwpAQCAtMzk2NSw0MiArMzg5Miw5 IEBACiAJfQogCiAJLyoKLQkgKiB0aGVuIHByb2Nlc3MgYWxsIHVucmVhY2hlZCBp bm9kZXMKLQkgKiBieSB3YWxraW5nIGluY29yZSBpbm9kZSB0cmVlCi0JICoKLQkg KglnZXQgbmV4dCB1bnJlYWNoZWQgZGlyZWN0b3J5IGlub2RlICMgZnJvbQotCSAq CQlpbmNvcmUgbGlzdAotCSAqCXB1c2ggaW5vZGUgb24gZGlyIHN0YWNrCi0JICoJ Y2FsbCBwcm9jZXNzX2RpcnN0YWNrCisJICogdGhlbiBwcm9jZXNzIGFsbCBpbm9k ZXMgYnkgd2Fsa2luZyBpbmNvcmUgaW5vZGUgdHJlZQogCSAqLwotCWZvciAoaSA9 IDA7IGkgPCBnbG9iX2FnY291bnQ7IGkrKykgIHsKLQkJaXJlYyA9IGZpbmRmaXJz dF9pbm9kZV9yZWMoaSk7Ci0KLQkJaWYgKGlyZWMgPT0gTlVMTCkKLQkJCWNvbnRp bnVlOwotCi0JCXdoaWxlIChpcmVjICE9IE5VTEwpICB7Ci0JCQlmb3IgKGogPSAw OyBqIDwgWEZTX0lOT0RFU19QRVJfQ0hVTks7IGorKykgIHsKLQkJCQlpZiAoIWlz 
X2lub2RlX2NvbmZpcm1lZChpcmVjLCBqKSkKLQkJCQkJY29udGludWU7Ci0JCQkJ LyoKLQkJCQkgKiBza2lwIGRpcmVjdG9yaWVzIHRoYXQgaGF2ZSBhbHJlYWR5IGJl ZW4KLQkJCQkgKiBwcm9jZXNzZWQsIGV2ZW4gaWYgdGhleSBoYXZlbid0IGJlZW4K LQkJCQkgKiByZWFjaGVkLiAgSWYgdGhleSBhcmUgcmVhY2hhYmxlLCB3ZSdsbAot CQkJCSAqIHBpY2sgdGhlbSB1cCB3aGVuIHdlIHByb2Nlc3MgdGhlaXIgcGFyZW50 LgotCQkJCSAqLwotCQkJCWlubyA9IFhGU19BR0lOT19UT19JTk8obXAsIGksCi0J CQkJCQlqICsgaXJlYy0+aW5vX3N0YXJ0bnVtKTsKLQkJCQlpZiAoaW5vZGVfaXNh ZGlyKGlyZWMsIGopICYmCi0JCQkJCQkhaXNfaW5vZGVfcmVmY2hlY2tlZChpbm8s Ci0JCQkJCQkJaXJlYywgaikpIHsKLQkJCQkJcHVzaF9kaXIoJnN0YWNrLCBpbm8p OwotCQkJCQlwcm9jZXNzX2RpcnN0YWNrKG1wLCAmc3RhY2spOwotCQkJCX0KLQkJ CX0KLQkJCWlyZWMgPSBuZXh0X2lub19yZWMoaXJlYyk7Ci0JCX0KLQl9CisJdHJh dmVyc2VfYWdzKG1wKTsKIAogCWRvX2xvZyhfKCIgICAgICAgIC0gdHJhdmVyc2Fs cyBmaW5pc2hlZCAuLi4gXG4iKSk7CiAJZG9fbG9nKF8oIiAgICAgICAgLSBtb3Zp bmcgZGlzY29ubmVjdGVkIGlub2RlcyB0byAlcyAuLi4gXG4iKSwKQEAgLTQwMTIs NyArMzkwNiw3IEBACiAJZm9yIChpID0gMDsgaSA8IGdsb2JfYWdjb3VudDsgaSsr KSAgewogCQlpcmVjID0gZmluZGZpcnN0X2lub2RlX3JlYyhpKTsKIAkJd2hpbGUg KGlyZWMgIT0gTlVMTCkgIHsKLQkJCWNoZWNrX2Zvcl9vcnBoYW5lZF9pbm9kZXMo bXAsIGlyZWMpOworCQkJY2hlY2tfZm9yX29ycGhhbmVkX2lub2RlcyhtcCwgaSwg aXJlYyk7CiAJCQlpcmVjID0gbmV4dF9pbm9fcmVjKGlyZWMpOwogCQl9CiAJfQpJ bmRleDogcmVwYWlyL3hmc3Byb2dzL3JlcGFpci9waGFzZTcuYwo9PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09Ci0tLSByZXBhaXIub3JpZy94ZnNwcm9ncy9yZXBhaXIvcGhhc2U3 LmMJMjAwNy0wNC0yNyAxNDoxMTo0MS4wMDAwMDAwMDAgKzEwMDAKKysrIHJlcGFp ci94ZnNwcm9ncy9yZXBhaXIvcGhhc2U3LmMJMjAwNy0wNC0yNyAxNDoxMjozNC4x OTE1NjM1MDEgKzEwMDAKQEAgLTI1LDkgKzI1LDcgQEAKICNpbmNsdWRlICJlcnJf cHJvdG9zLmgiCiAjaW5jbHVkZSAiZGlub2RlLmgiCiAjaW5jbHVkZSAidmVyc2lv bnMuaCIKLSNpbmNsdWRlICJwcmVmZXRjaC5oIgogI2luY2x1ZGUgInByb2dyZXNz LmgiCi0jaW5jbHVkZSAidGhyZWFkcy5oIgogCiAvKiBkaW5vYyBpcyBhIHBvaW50 ZXIgdG8gdGhlIElOLUNPUkUgZGlub2RlIGNvcmUgKi8KIHN0YXRpYyB2b2lkCkBA IC0xMTYsNTcgKzExNCw2IEBACiAJfQogfQogCi1zdGF0aWMgdm9pZAotcGhhc2U3 
X2FsdF9mdW5jdGlvbih4ZnNfbW91bnRfdCAqbXAsIHhmc19hZ251bWJlcl90IGFn bm8pCi17Ci0JaW5vX3RyZWVfbm9kZV90IAkqaXJlYzsKLQlpbnQJCQlqOwotCV9f dWludDMyX3QJCW5yZWZzOwotCi0JLyoKLQkgKiB1c2luZyB0aGUgbmxpbmsgdmFs dWVzIG1lbW9yaXNlZCBkdXJpbmcgcGhhc2UzLzQsIGNvbXBhcmUgdG8gdGhlCi0J ICogbmxpbmsgY291bnRlZCBpbiBwaGFzZSA2LCBhbmQgaWYgZGlmZmVyZW50LCB1 cGRhdGUgb24tZGlzay4KLQkgKi8KLQotCWlyZWMgPSBmaW5kZmlyc3RfaW5vZGVf cmVjKGFnbm8pOwotCi0Jd2hpbGUgKGlyZWMgIT0gTlVMTCkgIHsKLQkJZm9yIChq ID0gMDsgaiA8IFhGU19JTk9ERVNfUEVSX0NIVU5LOyBqKyspICB7Ci0JCQlhc3Nl cnQoaXNfaW5vZGVfY29uZmlybWVkKGlyZWMsIGopKTsKLQotCQkJaWYgKGlzX2lu b2RlX2ZyZWUoaXJlYywgaikpCi0JCQkJY29udGludWU7Ci0KLQkJCWFzc2VydChu b19tb2RpZnkgfHwgaXNfaW5vZGVfcmVhY2hlZChpcmVjLCBqKSk7Ci0JCQlhc3Nl cnQobm9fbW9kaWZ5IHx8IGlzX2lub2RlX3JlZmVyZW5jZWQoaXJlYywgaikpOwot Ci0JCQlucmVmcyA9IG51bV9pbm9kZV9yZWZlcmVuY2VzKGlyZWMsIGopOwotCi0g CQkJaWYgKGdldF9pbm9kZV9kaXNrX25saW5rcyhpcmVjLCBqKSAhPSBucmVmcykK LSAJCQkJdXBkYXRlX2lub2RlX25saW5rcyhtcCwgWEZTX0FHSU5PX1RPX0lOTyht cCwKLSAJCQkJCQlhZ25vLCBpcmVjLT5pbm9fc3RhcnRudW0gKyBqKSwKLSAJCQkJ CQlucmVmcyk7Ci0JCX0KLQkJaXJlYyA9IG5leHRfaW5vX3JlYyhpcmVjKTsKLQkJ UFJPR19SUFRfSU5DKHByb2dfcnB0X2RvbmVbYWdub10sIFhGU19JTk9ERVNfUEVS X0NIVU5LKTsKLQl9Ci19Ci0KLXN0YXRpYyB2b2lkCi1waGFzZTdfYWx0KHhmc19t b3VudF90ICptcCkKLXsKLQlpbnQJCWk7Ci0KLQlzZXRfcHJvZ3Jlc3NfbXNnKG5v X21vZGlmeSA/IFBST0dSRVNTX0ZNVF9WUkZZX0xJTksgOiBQUk9HUkVTU19GTVRf Q09SUl9MSU5LLAotCQkoX191aW50NjRfdCkgbXAtPm1fc2Iuc2JfaWNvdW50KTsK LQotCWZvciAoaSA9IDA7IGkgPCBnbG9iX2FnY291bnQ7IGkrKykgIHsKLQkJcXVl dWVfd29yayhwaGFzZTdfYWx0X2Z1bmN0aW9uLCBtcCwgaSk7Ci0JfQotCXdhaXRf Zm9yX3dvcmtlcnMoKTsKLQlwcmludF9maW5hbF9ycHQoKTsKLX0KLQogdm9pZAog cGhhc2U3KHhmc19tb3VudF90ICptcCkKIHsKQEAgLTE4MCwxMSArMTI3LDYgQEAK IAllbHNlCiAJCWRvX2xvZyhfKCJQaGFzZSA3IC0gdmVyaWZ5IGxpbmsgY291bnRz Li4uXG4iKSk7CiAKLQlpZiAoZG9fcHJlZmV0Y2gpIHsKLQkJcGhhc2U3X2FsdCht cCk7Ci0JCXJldHVybjsKLQl9Ci0KIAkvKgogCSAqIGZvciBlYWNoIGFnLCBsb29r IGF0IGVhY2ggaW5vZGUgMSBhdCBhIHRpbWUuIElmIHRoZSBudW1iZXIgb2YKIAkg 
KiBsaW5rcyBpcyBiYWQsIHJlc2V0IGl0LCBsb2cgdGhlIGlub2RlIGNvcmUsIGNv bW1pdCB0aGUgdHJhbnNhY3Rpb24KSW5kZXg6IHJlcGFpci94ZnNwcm9ncy9yZXBh aXIvcHJlZmV0Y2guYwo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0tLSByZXBhaXIub3Jp Zy94ZnNwcm9ncy9yZXBhaXIvcHJlZmV0Y2guYwkyMDA3LTA0LTI3IDEzOjEzOjM1 LjAwMDAwMDAwMCArMTAwMAorKysgcmVwYWlyL3hmc3Byb2dzL3JlcGFpci9wcmVm ZXRjaC5jCTIwMDctMDYtMDUgMTE6MzQ6MDEuOTU3NzMyOTY0ICsxMDAwCkBAIC0x LDYgKzEsNiBAQAogI2luY2x1ZGUgPGxpYnhmcy5oPgotI2luY2x1ZGUgInByZWZl dGNoLmgiCi0jaW5jbHVkZSAiYWlvLmgiCisjaW5jbHVkZSA8cHRocmVhZC5oPgor I2luY2x1ZGUgPHNjaGVkLmg+CiAjaW5jbHVkZSAiYXZsLmgiCiAjaW5jbHVkZSAi Z2xvYmFscy5oIgogI2luY2x1ZGUgImFnaGVhZGVyLmgiCkBAIC0xMyw0NTQgKzEz LDc4MiBAQAogI2luY2x1ZGUgImRpbm9kZS5oIgogI2luY2x1ZGUgImJtYXAuaCIK ICNpbmNsdWRlICJ2ZXJzaW9ucy5oIgorI2luY2x1ZGUgInRocmVhZHMuaCIKKyNp bmNsdWRlICJwcmVmZXRjaC5oIgorI2luY2x1ZGUgInByb2dyZXNzLmgiCisjaW5j bHVkZSAicmFkaXgtdHJlZS5oIgogCiBpbnQgZG9fcHJlZmV0Y2ggPSAxOwogCi1p bm9fdHJlZV9ub2RlX3QgKgotcHJlZmV0Y2hfaW5vZGVfY2h1bmtzKHhmc19tb3Vu dF90ICptcCwKLQkJeGZzX2FnbnVtYmVyX3QgYWdubywKLQkJaW5vX3RyZWVfbm9k ZV90ICppbm9fcmEpCi17Ci0JeGZzX2FnYmxvY2tfdCBhZ2JubzsKLQlsaWJ4ZnNf bGlvX3JlcV90ICpsaW9wOwotCWludCBpOwotCi0JaWYgKGxpYnhmc19saW9faW5v X2NvdW50ID09IDApCi0JCXJldHVybiBOVUxMOwotCi0JbGlvcCA9IChsaWJ4ZnNf bGlvX3JlcV90ICopIGxpYnhmc19nZXRfbGlvX2J1ZmZlcihMSUJYRlNfTElPX1RZ UEVfSU5PKTsKLQlpZiAobGlvcCA9PSBOVUxMKSB7Ci0JCWRvX3ByZWZldGNoID0g MDsKLQkJcmV0dXJuIE5VTEw7CisvKgorICogUGVyZm9ybXMgcHJlZmV0Y2hpbmcg YnkgcHJpbWluZyB0aGUgbGlieGZzIGNhY2hlIGJ5IHVzaW5nIGEgZGVkaWNhdGUg dGhyZWFkCisgKiBzY2FubmluZyBpbm9kZXMgYW5kIHJlYWRpbmcgYmxvY2tzIGlu IGFoZWFkIG9mIHRpbWUgdGhleSBhcmUgcmVxdWlyZWQuCisgKgorICogQW55IEkv TyBlcnJvcnMgY2FuIGJlIHNhZmVseSBpZ25vcmVkLgorICovCisKK3N0YXRpYyB4 ZnNfbW91bnRfdAkqbXA7CitzdGF0aWMgaW50IAkJbXBfZmQ7CitzdGF0aWMgaW50 CQlwZl9tYXhfYnl0ZXM7CitzdGF0aWMgaW50CQlwZl9tYXhfYmJzOworc3RhdGlj IGludAkJcGZfbWF4X2ZzYnM7CitzdGF0aWMgaW50CQlwZl9iYXRjaF9ieXRlczsK 
K3N0YXRpYyBpbnQJCXBmX2JhdGNoX2ZzYnM7CisKKyNkZWZpbmUgQl9JTk9ERQkJ MHgxMDAwMDAwCisjZGVmaW5lIEJfTUVUQQkJMHgyMDAwMDAwCisKKyNkZWZpbmUg SU9fVEhSRVNIT0xECTIwMAorCisjZGVmaW5lIERFRl9CQVRDSF9CWVRFUwkweDEw MDAwCisKK3N0YXRpYyBpbmxpbmUgdm9pZAorcGZfc3RhcnRfcHJvY2Vzc2luZygK KwlwcmVmZXRjaF9hcmdzX3QJCSphcmdzKQoreworCWlmICghYXJncy0+Y2FuX3N0 YXJ0X3Byb2Nlc3NpbmcpIHsKKyNpZmRlZiBYUl9QRl9UUkFDRQorCQlwZnRyYWNl KCJzaWduYWxsaW5nIHByb2Nlc3NpbmcgZm9yIEFHICVkIiwgYXJncy0+YWdubyk7 CisjZW5kaWYKKwkJYXJncy0+Y2FuX3N0YXJ0X3Byb2Nlc3NpbmcgPSAxOworCQlw dGhyZWFkX2NvbmRfc2lnbmFsKCZhcmdzLT5zdGFydF9wcm9jZXNzaW5nKTsKIAl9 Cit9CiAKLQlpZiAoaW5vX3JhID09IE5VTEwpCi0JCWlub19yYSA9IGZpbmRmaXJz dF9pbm9kZV9yZWMoYWdubyk7Ci0KLQlpID0gMDsKLQl3aGlsZSAoaW5vX3JhKSB7 Ci0JCWFnYm5vID0gWEZTX0FHSU5PX1RPX0FHQk5PKG1wLCBpbm9fcmEtPmlub19z dGFydG51bSk7Ci0JCWxpb3BbaV0uYmxrbm8gPSBYRlNfQUdCX1RPX0RBRERSKG1w LCBhZ25vLCBhZ2Jubyk7Ci0JCWxpb3BbaV0ubGVuID0gKGludCkgWEZTX0ZTQl9U T19CQihtcCwgWEZTX0lBTExPQ19CTE9DS1MobXApKTsKLQkJaSsrOwotCQlpbm9f cmEgPSBuZXh0X2lub19yZWMoaW5vX3JhKTsKLQkJaWYgKGkgPj0gbGlieGZzX2xp b19pbm9fY291bnQpCi0JCQlicmVhazsKLQl9Ci0JaWYgKGkpIHsKLQkJaWYgKGxp Ynhmc19yZWFkYnVmX2xpc3QobXAtPm1fZGV2LCBpLCAodm9pZCAqKSBsaW9wLCBM SUJYRlNfTElPX1RZUEVfSU5PKSA9PSAtMSkKLQkJCWRvX3ByZWZldGNoID0gMDsK K3N0YXRpYyBpbmxpbmUgdm9pZAorcGZfc3RhcnRfaW9fd29ya2VycygKKwlwcmVm ZXRjaF9hcmdzX3QJCSphcmdzKQoreworCWlmICghYXJncy0+Y2FuX3N0YXJ0X3Jl YWRpbmcpIHsKKyNpZmRlZiBYUl9QRl9UUkFDRQorCQlwZnRyYWNlKCJzaWduYWxs aW5nIHJlYWRpbmcgZm9yIEFHICVkIiwgYXJncy0+YWdubyk7CisjZW5kaWYKKwkJ YXJncy0+Y2FuX3N0YXJ0X3JlYWRpbmcgPSAxOworCQlwdGhyZWFkX2NvbmRfYnJv YWRjYXN0KCZhcmdzLT5zdGFydF9yZWFkaW5nKTsKIAl9Ci0JbGlieGZzX3B1dF9s aW9fYnVmZmVyKCh2b2lkICopIGxpb3ApOwotCXJldHVybiAoaW5vX3JhKTsKIH0K IAorCiBzdGF0aWMgdm9pZAotcHJlZmV0Y2hfbm9kZSgKLQl4ZnNfbW91bnRfdAkJ Km1wLAotCXhmc19idWZfdAkJKmJwLAotCWRhX2J0X2N1cnNvcl90CQkqZGFfY3Vy c29yKQorcGZfcXVldWVfaW8oCisJcHJlZmV0Y2hfYXJnc190CQkqYXJncywKKwl4 ZnNfZnNibG9ja190CQlmc2JubywKKwlpbnQJCQlibGVuLAorCWludAkJCWZsYWcp 
CiB7Ci0JeGZzX2RhX2ludG5vZGVfdAkqbm9kZTsKLQlsaWJ4ZnNfbGlvX3JlcV90 CSpsaW9wOwotCWludAkJCWk7Ci0JeGZzX2Rmc2Jub190CQlmc2JubzsKKwl4ZnNf YnVmX3QJCSpicDsKIAotCW5vZGUgPSAoeGZzX2RhX2ludG5vZGVfdCAqKVhGU19C VUZfUFRSKGJwKTsKLQlpZiAoSU5UX0dFVChub2RlLT5oZHIuY291bnQsIEFSQ0hf Q09OVkVSVCkgPD0gMSkKKwlicCA9IGxpYnhmc19nZXRidWYobXAtPm1fZGV2LCBY RlNfRlNCX1RPX0RBRERSKG1wLCBmc2JubyksCisJCQlYRlNfRlNCX1RPX0JCKG1w LCBibGVuKSk7CisJaWYgKGJwLT5iX2ZsYWdzICYgTElCWEZTX0JfVVBUT0RBVEUp IHsKKwkJbGlieGZzX3B1dGJ1ZihicCk7CiAJCXJldHVybjsKKwl9CisJYnAtPmJf ZmxhZ3MgfD0gZmxhZzsKIAotCWlmICgobGlvcCA9IChsaWJ4ZnNfbGlvX3JlcV90 ICopIGxpYnhmc19nZXRfbGlvX2J1ZmZlcihMSUJYRlNfTElPX1RZUEVfRElSKSkg PT0gTlVMTCkgewotCQlyZXR1cm47CisJcHRocmVhZF9tdXRleF9sb2NrKCZhcmdz LT5sb2NrKTsKKworCWlmIChmc2JubyA+IGFyZ3MtPmxhc3RfYm5vX3JlYWQpIHsK KwkJcmFkaXhfdHJlZV9pbnNlcnQoJmFyZ3MtPnByaW1hcnlfaW9fcXVldWUsIGZz Ym5vLCBicCk7CisJCWlmIChmbGFnID09IEJfTUVUQSkKKwkJCXJhZGl4X3RyZWVf dGFnX3NldCgmYXJncy0+cHJpbWFyeV9pb19xdWV1ZSwgZnNibm8sIDApOworCQll bHNlIHsKKwkJCWFyZ3MtPmlub2RlX2J1ZnNfcXVldWVkKys7CisJCQlpZiAoYXJn cy0+aW5vZGVfYnVmc19xdWV1ZWQgPT0gSU9fVEhSRVNIT0xEKQorCQkJCXBmX3N0 YXJ0X2lvX3dvcmtlcnMoYXJncyk7CisJCX0KKyNpZmRlZiBYUl9QRl9UUkFDRQor CQlwZnRyYWNlKCJnZXRidWYgJXAgKCVsbHUpIGluIEFHICVkIChmc2JubyA9ICVs dSkgYWRkZWQgdG8gIgorCQkJInByaW1hcnkgcXVldWUgKGlub2RlX2J1ZnNfcXVl dWVkID0gJWQsIGxhc3RfYm5vID0gJWx1KSIsIGJwLAorCQkJKGxvbmcgbG9uZylY RlNfQlVGX0FERFIoYnApLCBhcmdzLT5hZ25vLCBmc2JubywKKwkJCWFyZ3MtPmlu b2RlX2J1ZnNfcXVldWVkLCBhcmdzLT5sYXN0X2Jub19yZWFkKTsKKyNlbmRpZgor CX0gZWxzZSB7CisJCUFTU0VSVChmbGFnID09IEJfTUVUQSk7CisJCXJhZGl4X3Ry ZWVfaW5zZXJ0KCZhcmdzLT5zZWNvbmRhcnlfaW9fcXVldWUsIGZzYm5vLCBicCk7 CisjaWZkZWYgWFJfUEZfVFJBQ0UKKwkJcGZ0cmFjZSgiZ2V0YnVmICVwICglbGx1 KSBpbiBBRyAlZCAoZnNibm8gPSAlbHUpIGFkZGVkIHRvICIKKwkJCSJzZWNvbmRh cnkgcXVldWUgKGxhc3RfYm5vID0gJWx1KSIsIGJwLAorCQkJKGxvbmcgbG9uZylY RlNfQlVGX0FERFIoYnApLCBhcmdzLT5hZ25vLCBmc2JubywKKwkJCWFyZ3MtPmxh c3RfYm5vX3JlYWQpOworI2VuZGlmCiAJfQogCi0JZm9yIChpID0gMDsgaSA8IElO 
VF9HRVQobm9kZS0+aGRyLmNvdW50LCBBUkNIX0NPTlZFUlQpOyBpKyspIHsKLQkJ aWYgKGkgPT0gbGlieGZzX2xpb19kaXJfY291bnQpCi0JCQlicmVhazsKKwlwZl9z dGFydF9wcm9jZXNzaW5nKGFyZ3MpOwogCi0JCWZzYm5vID0gYmxrbWFwX2dldChk YV9jdXJzb3ItPmJsa21hcCwgSU5UX0dFVChub2RlLT5idHJlZVtpXS5iZWZvcmUs IEFSQ0hfQ09OVkVSVCkpOwotCQlpZiAoZnNibm8gPT0gTlVMTERGU0JOTykgewot CQkJbGlieGZzX3B1dF9saW9fYnVmZmVyKCh2b2lkICopIGxpb3ApOwotCQkJcmV0 dXJuOwotCQl9CisJcHRocmVhZF9tdXRleF91bmxvY2soJmFyZ3MtPmxvY2spOwor fQogCi0JCWxpb3BbaV0uYmxrbm8gPSBYRlNfRlNCX1RPX0RBRERSKG1wLCBmc2Ju byk7Ci0JCWxpb3BbaV0ubGVuID0gIFhGU19GU0JfVE9fQkIobXAsIDEpOworc3Rh dGljIGludAorcGZfcmVhZF9ibWJ0X3JlY2xpc3QoCisJcHJlZmV0Y2hfYXJnc190 CQkqYXJncywKKwl4ZnNfYm1idF9yZWNfdAkJKnJwLAorCWludAkJCW51bXJlY3Mp Cit7CisJaW50CQkJaTsKKwl4ZnNfZGZzYm5vX3QJCXM7CQkvKiBzdGFydCAqLwor CXhmc19kZmlsYmxrc190CQljOwkJLyogY291bnQgKi8KKwl4ZnNfZGZpbG9mZl90 CQlvOwkJLyogb2Zmc2V0ICovCisJeGZzX2RmaWxibGtzX3QJCWNwID0gMDsJCS8q IHByZXYgY291bnQgKi8KKwl4ZnNfZGZpbG9mZl90CQlvcCA9IDA7CQkvKiBwcmV2 IG9mZnNldCAqLworCWludAkJCWZsYWc7CQkvKiBleHRlbnQgZmxhZyAqLworCisJ Zm9yIChpID0gMDsgaSA8IG51bXJlY3M7IGkrKywgcnArKykgeworCQljb252ZXJ0 X2V4dGVudCgoeGZzX2JtYnRfcmVjXzMyX3QqKXJwLCAmbywgJnMsICZjLCAmZmxh Zyk7CisKKwkJaWYgKCgoaSA+IDApICYmIChvcCArIGNwID4gbykpIHx8IChjID09 IDApIHx8CisJCQkJKG8gPj0gZnNfbWF4X2ZpbGVfb2Zmc2V0KSkKKwkJCXJldHVy biAwOworCisJCWlmICghdmVyaWZ5X2Rmc2JubyhtcCwgcykgfHwgIXZlcmlmeV9k ZnNibm8obXAsIHMgKyBjIC0gMSkpCisJCQlyZXR1cm4gMDsKKworCQlvcCA9IG87 CisJCWNwID0gYzsKKworCQl3aGlsZSAoYykgeworI2lmZGVmIFhSX1BGX1RSQUNF CisJCQlwZnRyYWNlKCJxdWV1aW5nIGRpciBleHRlbnQgaW4gQUcgJWQiLCBhcmdz LT5hZ25vKTsKKyNlbmRpZgorCQkJcGZfcXVldWVfaW8oYXJncywgcywgMSwgQl9N RVRBKTsKKwkJCWMtLTsKKwkJCXMrKzsKKwkJfQogCX0KKwlyZXR1cm4gMTsKK30K IAotCWlmIChpID4gMSkgewotCQlpZiAobGlieGZzX3JlYWRidWZfbGlzdChtcC0+ bV9kZXYsIGksICh2b2lkICopIGxpb3AsIExJQlhGU19MSU9fVFlQRV9ESVIpID09 IC0xKQotCQkJZG9fcHJlZmV0Y2ggPSAwOwotCX0KKy8qCisgKiBzaW1wbGlmaWVk IHZlcnNpb24gb2YgdGhlIG1haW4gc2Nhbl9sYnRyZWUuIFJldHVybnMgMCB0byBz 
dG9wLgorICovCisKK3N0YXRpYyBpbnQKK3BmX3NjYW5fbGJ0cmVlKAorCXhmc19k ZnNibm9fdAkJZGJubywKKwlpbnQJCQlsZXZlbCwKKwlwcmVmZXRjaF9hcmdzX3QJ CSphcmdzLAorCWludAkJCSgqZnVuYykoeGZzX2J0cmVlX2xibG9ja190CSpibG9j aywKKwkJCQkJaW50CQkJbGV2ZWwsCisJCQkJCXByZWZldGNoX2FyZ3NfdAkJKmFy Z3MpKQoreworCXhmc19idWZfdAkJKmJwOworCWludAkJCXJjOworCisJYnAgPSBs aWJ4ZnNfcmVhZGJ1ZihtcC0+bV9kZXYsIFhGU19GU0JfVE9fREFERFIobXAsIGRi bm8pLAorCQkJWEZTX0ZTQl9UT19CQihtcCwgMSksIDApOworCWlmICghYnApCisJ CXJldHVybiAwOwogCi0JbGlieGZzX3B1dF9saW9fYnVmZmVyKCh2b2lkICopIGxp b3ApOwotCXJldHVybjsKKwlyYyA9ICgqZnVuYykoKHhmc19idHJlZV9sYmxvY2tf dCAqKVhGU19CVUZfUFRSKGJwKSwgbGV2ZWwgLSAxLCBhcmdzKTsKKworCWxpYnhm c19wdXRidWYoYnApOworCisJcmV0dXJuIHJjOwogfQogCi12b2lkCi1wcmVmZXRj aF9kaXIxKAotCXhmc19tb3VudF90CQkqbXAsCi0JeGZzX2RhYmxrX3QJCWJubywK LQlkYV9idF9jdXJzb3JfdAkJKmRhX2N1cnNvcikKK3N0YXRpYyBpbnQKK3BmX3Nj YW5mdW5jX2JtYXAoCisJeGZzX2J0cmVlX2xibG9ja190CSpibG9jaywKKwlpbnQJ CQlsZXZlbCwKKwlwcmVmZXRjaF9hcmdzX3QJCSphcmdzKQogewotCXhmc19kYV9p bnRub2RlX3QJKm5vZGU7Ci0JeGZzX2J1Zl90CQkqYnA7Ci0JeGZzX2Rmc2Jub190 CQlmc2JubzsKKwl4ZnNfYm1idF9yZWNfdAkJKnJwOworCXhmc19ibWJ0X3B0cl90 CQkqcHA7CisJaW50IAkJCW51bXJlY3M7CiAJaW50CQkJaTsKKwl4ZnNfZGZzYm5v X3QJCWRibm87CiAKLQlmc2JubyA9IGJsa21hcF9nZXQoZGFfY3Vyc29yLT5ibGtt YXAsIGJubyk7Ci0JaWYgKGZzYm5vID09IE5VTExERlNCTk8pCi0JCXJldHVybjsK KwkvKgorCSAqIGRvIHNvbWUgdmFsaWRhdGlvbiBvbiB0aGUgYmxvY2sgY29udGVu dHMKKwkgKi8KKwlpZiAoKGJlMzJfdG9fY3B1KGJsb2NrLT5iYl9tYWdpYykgIT0g WEZTX0JNQVBfTUFHSUMpIHx8CisJCQkoYmUxNl90b19jcHUoYmxvY2stPmJiX2xl dmVsKSAhPSBsZXZlbCkpCisJCXJldHVybiAwOwogCi0JYnAgPSBsaWJ4ZnNfcmVh ZGJ1ZihtcC0+bV9kZXYsIFhGU19GU0JfVE9fREFERFIobXAsIGZzYm5vKSwKLQkJ CVhGU19GU0JfVE9fQkIobXAsIDEpLCAwKTsKKwludW1yZWNzID0gYmUxNl90b19j cHUoYmxvY2stPmJiX251bXJlY3MpOwogCi0JaWYgKGJwID09IE5VTEwpCi0JIAly ZXR1cm47CisJaWYgKGxldmVsID09IDApIHsKKwkJaWYgKG51bXJlY3MgPiBtcC0+ bV9ibWFwX2RteHJbMF0pCisJCQlyZXR1cm4gMDsKIAorCQlycCA9IFhGU19CVFJF RV9SRUNfQUREUihtcC0+bV9zYi5zYl9ibG9ja3NpemUsIHhmc19ibWJ0LAorCQkJ 
CWJsb2NrLCAxLCBtcC0+bV9ibWFwX2RteHJbMF0pOwogCi0Jbm9kZSA9ICh4ZnNf ZGFfaW50bm9kZV90ICopWEZTX0JVRl9QVFIoYnApOwotCWlmIChJTlRfR0VUKG5v ZGUtPmhkci5pbmZvLm1hZ2ljLCBBUkNIX0NPTlZFUlQpICE9IFhGU19EQV9OT0RF X01BR0lDKSAgewotCQlsaWJ4ZnNfcHV0YnVmKGJwKTsKLQkJcmV0dXJuOworCQly ZXR1cm4gcGZfcmVhZF9ibWJ0X3JlY2xpc3QoYXJncywgcnAsIG51bXJlY3MpOwog CX0KIAotCXByZWZldGNoX25vZGUobXAsIGJwLCBkYV9jdXJzb3IpOworCWlmIChu dW1yZWNzID4gbXAtPm1fYm1hcF9kbXhyWzFdKQorCQlyZXR1cm4gMDsKIAotCS8q IHNraXAgcHJlZmV0Y2hpbmcgaWYgbmV4dCBsZXZlbCBpcyBsZWFmIGxldmVsICov Ci0JaWYgKElOVF9HRVQobm9kZS0+aGRyLmxldmVsLCBBUkNIX0NPTlZFUlQpID4g MSkgewotCQlmb3IgKGkgPSAwOyBpIDwgSU5UX0dFVChub2RlLT5oZHIuY291bnQs IEFSQ0hfQ09OVkVSVCk7IGkrKykgewotCQkJcHJlZmV0Y2hfZGlyMShtcCwKLQkJ CQlJTlRfR0VUKG5vZGUtPmJ0cmVlW2ldLmJlZm9yZSwgQVJDSF9DT05WRVJUKSwK LQkJCQlkYV9jdXJzb3IpOwotCQl9CisJcHAgPSBYRlNfQlRSRUVfUFRSX0FERFIo bXAtPm1fc2Iuc2JfYmxvY2tzaXplLCB4ZnNfYm1idCwgYmxvY2ssIDEsCisJCQlt cC0+bV9ibWFwX2RteHJbMV0pOworCisJZm9yIChpID0gMDsgaSA8IG51bXJlY3M7 IGkrKykgeworCQlkYm5vID0gYmU2NF90b19jcHUocHBbaV0pOworCQlpZiAoIXZl cmlmeV9kZnNibm8obXAsIGRibm8pKQorCQkJcmV0dXJuIDA7CisJCWlmICghcGZf c2Nhbl9sYnRyZWUoZGJubywgbGV2ZWwsIGFyZ3MsIHBmX3NjYW5mdW5jX2JtYXAp KQorCQkJcmV0dXJuIDA7CiAJfQotCQotCWxpYnhmc19wdXRidWYoYnApOwotCXJl dHVybjsKKwlyZXR1cm4gMTsKIH0KIAotdm9pZAotcHJlZmV0Y2hfZGlyMigKLQl4 ZnNfbW91bnRfdCAgICAgKm1wLAotCWJsa21hcF90ICAgICAgICAqYmxrbWFwKQot ewotCXhmc19kZmlsb2ZmX3QJCWRibm87Ci0JeGZzX2RmaWxvZmZfdAkJcGRibm87 Ci0JYm1hcF9leHRfdAkJKmJtcDsJCi0JaW50CQkJbmV4OwotCWludAkJCWksIGos IHQ7Ci0JbGlieGZzX2xpb19yZXFfdAkqbGlvcDsKIAotCWxpb3AgPSAobGlieGZz X2xpb19yZXFfdCAqKSBsaWJ4ZnNfZ2V0X2xpb19idWZmZXIoTElCWEZTX0xJT19U WVBFX0RJUik7Ci0JaWYgKGxpb3AgPT0gTlVMTCkKK3N0YXRpYyB2b2lkCitwZl9y ZWFkX2J0aW5vZGUoCisJcHJlZmV0Y2hfYXJnc190CQkqYXJncywKKwl4ZnNfZGlu b2RlX3QJCSpkaW5vKQoreworCXhmc19ibWRyX2Jsb2NrX3QJKmRpYjsKKwl4ZnNf Ym1idF9wdHJfdAkJKnBwOworCWludAkJCWk7CisJaW50CQkJbGV2ZWw7CisJaW50 CQkJbnVtcmVjczsKKwlpbnQJCQlkc2l6ZTsKKwl4ZnNfZGZzYm5vX3QJCWRibm87 
CisKKwlkaWIgPSAoeGZzX2JtZHJfYmxvY2tfdCAqKVhGU19ERk9SS19EUFRSKGRp bm8pOworCisJbGV2ZWwgPSBiZTE2X3RvX2NwdShkaWItPmJiX2xldmVsKTsKKwlu dW1yZWNzID0gYmUxNl90b19jcHUoZGliLT5iYl9udW1yZWNzKTsKKworCWlmICgo bnVtcmVjcyA9PSAwKSB8fCAobGV2ZWwgPT0gMCkgfHwKKwkJCShsZXZlbCA+IFhG U19CTV9NQVhMRVZFTFMobXAsIFhGU19EQVRBX0ZPUkspKSkKKwkJcmV0dXJuOwor CS8qCisJICogdXNlIGJtZHIvZGZvcmtfZHNpemUgc2luY2UgdGhlIHJvb3QgYmxv Y2sgaXMgaW4gdGhlIGRhdGEgZm9yaworCSAqLworCWlmIChYRlNfQk1EUl9TUEFD RV9DQUxDKG51bXJlY3MpID4gWEZTX0RGT1JLX0RTSVpFKGRpbm8sIG1wKSkKIAkJ cmV0dXJuOwogCi0JcGRibm8gPSBOVUxMREZJTE9GRjsJLyogcHJldmlvdXMgZGJu byBpcyBOVUxMREZJTE9GRiAqLwotCWkgPSAwOwotCXdoaWxlICgoZGJubyA9IGJs a21hcF9uZXh0X29mZihibGttYXAsIHBkYm5vLCAmdCkpIDwgbXAtPm1fZGlyZnJl ZWJsaykgewotCQlpZiAoaSA9PSBsaWJ4ZnNfbGlvX2Rpcl9jb3VudCkKKwlkc2l6 ZSA9IFhGU19ERk9SS19EU0laRShkaW5vLCBtcCk7CisJcHAgPSBYRlNfQlRSRUVf UFRSX0FERFIoZHNpemUsIHhmc19ibWRyLCBkaWIsIDEsCisJCQlYRlNfQlRSRUVf QkxPQ0tfTUFYUkVDUyhkc2l6ZSwgeGZzX2JtZHIsIDApKTsKKworCWZvciAoaSA9 IDA7IGkgPCBudW1yZWNzOyBpKyspIHsKKwkJZGJubyA9IGJlNjRfdG9fY3B1KHBw W2ldKTsKKwkJaWYgKCF2ZXJpZnlfZGZzYm5vKG1wLCBkYm5vKSkKIAkJCWJyZWFr OwotCQlpZiAoZGJubyA9PSBOVUxMREZJTE9GRikKKwkJaWYgKCFwZl9zY2FuX2xi dHJlZShkYm5vLCBsZXZlbCwgYXJncywgcGZfc2NhbmZ1bmNfYm1hcCkpCiAJCQli cmVhazsKLQkJaWYgKG1wLT5tX2RpcmJsa2ZzYnMgPT0gMSkgewotCQkJeGZzX2Rm c2Jub190IGJsazsKLQotCQkJLyogYXZvaWQgYm1wIHJlYWxsb2MvZnJlZSBvdmVy aGVhZCwgdXNlIGJsa21hcF9nZXQgKi8KLQkJCWJsayA9IGJsa21hcF9nZXQoYmxr bWFwLCBkYm5vKTsKLQkJCWlmIChibGsgPT0gTlVMTERGU0JOTykKLQkJCQlicmVh azsKLQkJCXBkYm5vID0gZGJubzsKLQkJCWxpb3BbaV0uYmxrbm8gPSBYRlNfRlNC X1RPX0RBRERSKG1wLCBibGspOwotCQkJbGlvcFtpXS5sZW4gPSAoaW50KSBYRlNf RlNCX1RPX0JCKG1wLCAxKTsKLQkJCWkrKzsKLQkJfQotCQllbHNlIGlmIChtcC0+ bV9kaXJibGtmc2JzID4gMSkgewotCQkJbmV4ID0gYmxrbWFwX2dldG4oYmxrbWFw LCBkYm5vLCBtcC0+bV9kaXJibGtmc2JzLCAmYm1wLCBOVUxMKTsKLQkJCWlmIChu ZXggPT0gMCkKLQkJCQlicmVhazsKLQkJCXBkYm5vID0gZGJubyArIG1wLT5tX2Rp cmJsa2ZzYnMgLSAxOwotCQkJZm9yIChqID0gMDsgaiA8IG5leDsgaisrKSB7Ci0J 
CQkJbGlvcFtpXS5ibGtubyA9IFhGU19GU0JfVE9fREFERFIobXAsIGJtcFtqXS5z dGFydGJsb2NrKTsKLQkJCQlsaW9wW2ldLmxlbiA9IChpbnQpIFhGU19GU0JfVE9f QkIobXAsIGJtcFtqXS5ibG9ja2NvdW50KTsKLQkJCQlpKys7Ci0JCQkJaWYgKGkg PT0gbGlieGZzX2xpb19kaXJfY291bnQpCi0JCQkJCWJyZWFrOwkvKiBmb3IgbG9v cCAqLwotCQkJfQotCQkJZnJlZShibXApOwotCQl9Ci0JCWVsc2UgewotCQkJZG9f ZXJyb3IoImludmFsaWQgbXAtPm1fZGlyYmxrZnNicyAlZFxuIiwgbXAtPm1fZGly YmxrZnNicyk7Ci0JCX0KLQl9Ci0JaWYgKGkgPiAxKSB7Ci0JCWlmIChsaWJ4ZnNf cmVhZGJ1Zl9saXN0KG1wLT5tX2RldiwgaSwgKHZvaWQgKikgbGlvcCwgTElCWEZT X0xJT19UWVBFX0RJUikgPT0gLTEpCi0JCQlkb19wcmVmZXRjaCA9IDA7CiAJfQot CWxpYnhmc19wdXRfbGlvX2J1ZmZlcigodm9pZCAqKSBsaW9wKTsKIH0KIAogc3Rh dGljIHZvaWQKLXByZWZldGNoX3A2X25vZGUoCi0JeGZzX21vdW50X3QJCSptcCwK LQl4ZnNfaW5vZGVfdAkJKmlwLAotCXhmc19idWZfdAkJKmJwKQorcGZfcmVhZF9l eGlub2RlKAorCXByZWZldGNoX2FyZ3NfdAkJKmFyZ3MsCisJeGZzX2Rpbm9kZV90 CQkqZGlubykKIHsKLQl4ZnNfZGFfaW50bm9kZV90CSpub2RlOwotCWxpYnhmc19s aW9fcmVxX3QJKmxpb3A7Ci0JaW50CQkJaTsKLQl4ZnNfZnNibG9ja190CQlmYmxv Y2s7Ci0JeGZzX2Rmc2Jub190CQlmc2JubzsKLQl4ZnNfYm1idF9pcmVjX3QJCW1h cDsKLQlpbnQJCQlubWFwOwotCWludAkJCWVycm9yOwotCi0Jbm9kZSA9ICh4ZnNf ZGFfaW50bm9kZV90ICopWEZTX0JVRl9QVFIoYnApOwotCWlmIChJTlRfR0VUKG5v ZGUtPmhkci5jb3VudCwgQVJDSF9DT05WRVJUKSA8PSAxKQotCQlyZXR1cm47CisJ cGZfcmVhZF9ibWJ0X3JlY2xpc3QoYXJncywgKHhmc19ibWJ0X3JlY190ICopWEZT X0RGT1JLX0RQVFIoZGlubyksCisJCQliZTMyX3RvX2NwdShkaW5vLT5kaV9jb3Jl LmRpX25leHRlbnRzKSk7Cit9CiAKLQlpZiAoKGxpb3AgPSAobGlieGZzX2xpb19y ZXFfdCAqKSBsaWJ4ZnNfZ2V0X2xpb19idWZmZXIoTElCWEZTX0xJT19UWVBFX0RJ UikpID09IE5VTEwpIHsKLQkJcmV0dXJuOworc3RhdGljIHZvaWQKK3BmX3JlYWRf aW5vZGVfZGlycygKKwlwcmVmZXRjaF9hcmdzX3QJCSphcmdzLAorCXhmc19idWZf dAkJKmJwKQoreworCXhmc19kaW5vZGVfdAkJKmRpbm87CisJaW50CQkJaWNudCA9 IDA7CisJeGZzX2Rpbm9kZV9jb3JlX3QJKmRpbm9jOworCisJZm9yIChpY250ID0g MDsgaWNudCA8IChYRlNfQlVGX0NPVU5UKGJwKSA+PiBtcC0+bV9zYi5zYl9pbm9k ZWxvZyk7IGljbnQrKykgeworCQlkaW5vID0gWEZTX01BS0VfSVBUUihtcCwgYnAs IGljbnQpOworCQlkaW5vYyA9ICZkaW5vLT5kaV9jb3JlOworCisJCS8qCisJCSAq 
IFdlIGFyZSBvbmx5IHByZWZldGNoaW5nIGRpcmVjdG9yeSBjb250ZW50cyBpbiBl eHRlbnRzCisJCSAqLworCQlpZiAoKChiZTE2X3RvX2NwdShkaW5vYy0+ZGlfbW9k ZSkgJiBTX0lGTVQpICE9IFNfSUZESVIpIHx8CisJCQkJKGRpbm9jLT5kaV9mb3Jt YXQgIT0gWEZTX0RJTk9ERV9GTVRfRVhURU5UUyAmJgorCQkJCWRpbm9jLT5kaV9m b3JtYXQgIT0gWEZTX0RJTk9ERV9GTVRfQlRSRUUpKQorCQkJY29udGludWU7CisK KwkJLyoKKwkJICogZG8gc29tZSBjaGVja3Mgb24gdGhlIGlub2RlIHRvIHNlZSBp ZiB3ZSBjYW4gcHJlZmV0Y2gKKwkJICogaXRzIGRpcmVjdG9yeSBkYXRhLiBJdCdz IGEgY3V0IGRvd24gdmVyc2lvbiBvZgorCQkgKiBwcm9jZXNzX2Rpbm9kZV9pbnQo KSBpbiBkaW5vZGUuYy4KKwkJICovCisJCWlmIChiZTE2X3RvX2NwdShkaW5vYy0+ ZGlfbWFnaWMpICE9IFhGU19ESU5PREVfTUFHSUMpCisJCQljb250aW51ZTsKKwor CQlpZiAoIVhGU19ESU5PREVfR09PRF9WRVJTSU9OKGRpbm9jLT5kaV92ZXJzaW9u KSB8fAorCQkJCSghZnNfaW5vZGVfbmxpbmsgJiYgZGlub2MtPmRpX3ZlcnNpb24g PgorCQkJCQlYRlNfRElOT0RFX1ZFUlNJT05fMSkpCisJCQljb250aW51ZTsKKwor CQlpZiAoYmU2NF90b19jcHUoZGlub2MtPmRpX3NpemUpIDw9IFhGU19ERk9SS19E U0laRShkaW5vLCBtcCkpCisJCQljb250aW51ZTsKKworCQlpZiAoKGRpbm9jLT5k aV9mb3Jrb2ZmICE9IDApICYmCisJCQkJKGRpbm9jLT5kaV9mb3Jrb2ZmID49IChY RlNfTElUSU5PKG1wKSA+PiAzKSkpCisJCQljb250aW51ZTsKKworCQlzd2l0Y2gg KGRpbm9jLT5kaV9mb3JtYXQpIHsKKwkJCWNhc2UgWEZTX0RJTk9ERV9GTVRfRVhU RU5UUzoKKwkJCQlwZl9yZWFkX2V4aW5vZGUoYXJncywgZGlubyk7CisJCQkJYnJl YWs7CisJCQljYXNlIFhGU19ESU5PREVfRk1UX0JUUkVFOgorCQkJCXBmX3JlYWRf YnRpbm9kZShhcmdzLCBkaW5vKTsKKwkJCQlicmVhazsKKwkJfQogCX0KK30KIAot CWZibG9jayA9IE5VTExGU0JMT0NLOwotCi0JZm9yIChpID0gMDsgaSA8IElOVF9H RVQobm9kZS0+aGRyLmNvdW50LCBBUkNIX0NPTlZFUlQpOyBpKyspIHsKLQkJaWYg KGkgPT0gbGlieGZzX2xpb19kaXJfY291bnQpCi0JCQlicmVhazsKKyNkZWZpbmUg TUFYX0JVRlMJMTI4CiAKLQkJbm1hcCA9IDE7Ci0JCWVycm9yID0gbGlieGZzX2Jt YXBpKE5VTEwsIGlwLCAoeGZzX2ZpbGVvZmZfdCkKLQkJCQlJTlRfR0VUKG5vZGUt PmJ0cmVlW2ldLmJlZm9yZSwgQVJDSF9DT05WRVJUKSwgMSwKLQkJCQlYRlNfQk1B UElfTUVUQURBVEEsICZmYmxvY2ssIDAsCi0JCQkJJm1hcCwgJm5tYXAsIE5VTEwp OwordHlwZWRlZiBlbnVtIHBmX3doaWNoCit7CisJUEZfUFJJTUFSWSwKKwlQRl9T RUNPTkRBUlksCisJUEZfTUVUQV9PTkxZCit9IHBmX3doaWNoX3Q7CisKKy8qCisg 
KiBwZl9iYXRjaF9yZWFkIG11c3QgYmUgY2FsbGVkIHdpdGggdGhlIGxvY2sgbG9j a2VkLgorICovCiAKLQkJaWYgKGVycm9yIHx8IChubWFwICE9IDEpKSB7Ci0JCQls aWJ4ZnNfcHV0X2xpb19idWZmZXIoKHZvaWQgKikgbGlvcCk7Ci0JCQlyZXR1cm47 CitzdGF0aWMgdm9pZAorcGZfYmF0Y2hfcmVhZCgKKwlwcmVmZXRjaF9hcmdzX3QJ CSphcmdzLAorCXBmX3doaWNoX3QJCXdoaWNoLAorCXZvaWQJCQkqYnVmKQorewor CXN0cnVjdCByYWRpeF90cmVlX3Jvb3QJKnF1ZXVlOworCXhmc19idWZfdAkJKmJw bGlzdFtNQVhfQlVGU107CisJdW5zaWduZWQgaW50CQludW07CisJb2ZmNjRfdAkJ CWZpcnN0X29mZiwgbGFzdF9vZmYsIG5leHRfb2ZmOworCWludAkJCWxlbiwgc2l6 ZTsKKwlpbnQJCQlpOworCWludAkJCWlub2RlX2J1ZnM7CisJdW5zaWduZWQgbG9u ZwkJZnNibm87CisJY2hhcgkJCSpwYnVmOworCisJcXVldWUgPSAod2hpY2ggIT0g UEZfU0VDT05EQVJZKSA/ICZhcmdzLT5wcmltYXJ5X2lvX3F1ZXVlCisJCQkJOiAm YXJncy0+c2Vjb25kYXJ5X2lvX3F1ZXVlOworCisJd2hpbGUgKHJhZGl4X3RyZWVf bG9va3VwX2ZpcnN0KHF1ZXVlLCAmZnNibm8pICE9IE5VTEwpIHsKKworCQlpZiAo d2hpY2ggIT0gUEZfTUVUQV9PTkxZKSB7CisJCQludW0gPSByYWRpeF90cmVlX2dh bmdfbG9va3VwX2V4KHF1ZXVlLAorCQkJCQkodm9pZCoqKSZicGxpc3RbMF0sIGZz Ym5vLAorCQkJCQlmc2JubyArIHBmX21heF9mc2JzLCBNQVhfQlVGUyk7CisJCQlB U1NFUlQobnVtID4gMCk7CisJCQlBU1NFUlQoWEZTX0ZTQl9UT19EQUREUihtcCwg ZnNibm8pID09CisJCQkJWEZTX0JVRl9BRERSKGJwbGlzdFswXSkpOworCQl9IGVs c2UgeworCQkJbnVtID0gcmFkaXhfdHJlZV9nYW5nX2xvb2t1cF90YWcocXVldWUs CisJCQkJCSh2b2lkKiopJmJwbGlzdFswXSwgZnNibm8sCisJCQkJCU1BWF9CVUZT IC8gNCwgMCk7CisJCQlpZiAobnVtID09IDApCisJCQkJcmV0dXJuOwogCQl9CiAK LQkJaWYgKChmc2JubyA9IG1hcC5icl9zdGFydGJsb2NrKSA9PSBIT0xFU1RBUlRC TE9DSykgewotCQkJbGlieGZzX3B1dF9saW9fYnVmZmVyKCh2b2lkICopIGxpb3Ap OwotCQkJcmV0dXJuOworCQkvKgorCQkgKiBkbyBhIGJpZyByZWFkIGlmIDI1JSBv ZiB0aGUgcG90ZW50aWFsIGJ1ZmZlciBpcyB1c2VmdWwsCisJCSAqIG90aGVyd2lz ZSwgZmluZCBhcyBtYW55IGNsb3NlIHRvZ2V0aGVyIGJsb2NrcyBhbmQKKwkJICog cmVhZCB0aGVtIGluIG9uZSByZWFkCisJCSAqLworCQlmaXJzdF9vZmYgPSBMSUJY RlNfQkJUT09GRjY0KFhGU19CVUZfQUREUihicGxpc3RbMF0pKTsKKwkJbGFzdF9v ZmYgPSBMSUJYRlNfQkJUT09GRjY0KFhGU19CVUZfQUREUihicGxpc3RbbnVtLTFd KSkgKworCQkJWEZTX0JVRl9TSVpFKGJwbGlzdFtudW0tMV0pOworCQl3aGlsZSAo 
bGFzdF9vZmYgLSBmaXJzdF9vZmYgPiBwZl9tYXhfYnl0ZXMpIHsKKwkJCW51bS0t OworCQkJbGFzdF9vZmYgPSBMSUJYRlNfQkJUT09GRjY0KFhGU19CVUZfQUREUihi cGxpc3RbbnVtLTFdKSkgKworCQkJCVhGU19CVUZfU0laRShicGxpc3RbbnVtLTFd KTsKKwkJfQorCQlpZiAobnVtIDwgKChsYXN0X29mZiAtIGZpcnN0X29mZikgPj4g KG1wLT5tX3NiLnNiX2Jsb2NrbG9nICsgMykpKSB7CisJCQkvKgorCQkJICogbm90 IGVub3VnaCBibG9ja3MgZm9yIG9uZSBiaWcgcmVhZCwgc28gZGV0ZXJtaW5lCisJ CQkgKiB0aGUgbnVtYmVyIG9mIGJsb2NrcyB0aGF0IGFyZSBjbG9zZSBlbm91Z2gu CisJCQkgKi8KKwkJCWxhc3Rfb2ZmID0gZmlyc3Rfb2ZmICsgWEZTX0JVRl9TSVpF KGJwbGlzdFswXSk7CisJCQlmb3IgKGkgPSAxOyBpIDwgbnVtOyBpKyspIHsKKwkJ CQluZXh0X29mZiA9IExJQlhGU19CQlRPT0ZGNjQoWEZTX0JVRl9BRERSKGJwbGlz dFtpXSkpICsKKwkJCQkJCVhGU19CVUZfU0laRShicGxpc3RbaV0pOworCQkJCWlm IChuZXh0X29mZiAtIGxhc3Rfb2ZmID4gcGZfYmF0Y2hfYnl0ZXMpCisJCQkJCWJy ZWFrOworCQkJCWxhc3Rfb2ZmID0gbmV4dF9vZmY7CisJCQl9CisJCQludW0gPSBp OwogCQl9Ci0JCWxpb3BbaV0uYmxrbm8gPSBYRlNfRlNCX1RPX0RBRERSKG1wLCBm c2Jubyk7Ci0JCWxpb3BbaV0ubGVuID0gIFhGU19GU0JfVE9fQkIobXAsIDEpOwot CX0KIAotCWlmIChpID4gMSkgewotCQlpZiAobGlieGZzX3JlYWRidWZfbGlzdCht cC0+bV9kZXYsIGksICh2b2lkICopIGxpb3AsIExJQlhGU19MSU9fVFlQRV9ESVIp ID09IC0xKQotCQkJZG9fcHJlZmV0Y2ggPSAwOworCQlmb3IgKGkgPSAwOyBpIDwg bnVtOyBpKyspIHsKKwkJCWlmIChyYWRpeF90cmVlX2RlbGV0ZShxdWV1ZSwgWEZT X0RBRERSX1RPX0ZTQihtcCwKKwkJCQkJWEZTX0JVRl9BRERSKGJwbGlzdFtpXSkp KSA9PSBOVUxMKQorCQkJCWRvX2Vycm9yKF8oInByZWZldGNoIGNvcnJ1cHRpb25c biIpKTsKKwkJfQorCisJCWlmICh3aGljaCA9PSBQRl9QUklNQVJZKSB7CisJCQlp ZiAoKGZpcnN0X29mZiA+PiBtcC0+bV9zYi5zYl9ibG9ja2xvZykgPiBwZl9iYXRj aF9mc2JzKQorCQkJCWFyZ3MtPmxhc3RfYm5vX3JlYWQgPSAoZmlyc3Rfb2ZmID4+ IG1wLT5tX3NiLnNiX2Jsb2NrbG9nKTsKKwkJfQorCisJCXB0aHJlYWRfbXV0ZXhf dW5sb2NrKCZhcmdzLT5sb2NrKTsKKworI2lmZGVmIFhSX1BGX1RSQUNFCisJCXBm dHJhY2UoInJlYWRpbmcgYmJzICVsbHUgdG8gJWxsdSAoJWQgYnVmcykgZnJvbSAl cyBxdWV1ZSBpbiBBRyAlZCAobGFzdF9ibm8gPSAlbHUpIiwKKwkJCShsb25nIGxv bmcpWEZTX0JVRl9BRERSKGJwbGlzdFswXSksCisJCQkobG9uZyBsb25nKVhGU19C VUZfQUREUihicGxpc3RbbnVtLTFdKSwgbnVtLAorCQkJKHdoaWNoICE9IFBGX1NF 
Q09OREFSWSkgPyAicHJpIiA6ICJzZWMiLCBhcmdzLT5hZ25vLAorCQkJYXJncy0+ bGFzdF9ibm9fcmVhZCk7CisjZW5kaWYKKwkJLyoKKwkJICogbm93IHJlYWQgdGhl IGRhdGEgYW5kIHB1dCBpbnRvIHRoZSB4ZnNfYnV0X3QncworCQkgKi8KKwkJbGVu ID0gcHJlYWQ2NChtcF9mZCwgYnVmLCAoaW50KShsYXN0X29mZiAtIGZpcnN0X29m ZiksIGZpcnN0X29mZik7CisJCWlmIChsZW4gPiAwKSB7CisJCQkvKgorCQkJICog Z28gdGhyb3VnaCB0aGUgeGZzX2J1Zl90IGxpc3QgY29weWluZyBmcm9tIHRoZQor CQkJICogcmVhZCBidWZmZXIgaW50byB0aGUgeGZzX2J1Zl90J3MgYW5kIHJlbGVh c2UgdGhlbS4KKwkJCSAqLworCQkJbGFzdF9vZmYgPSBmaXJzdF9vZmY7CisJCQlm b3IgKGkgPSAwOyBpIDwgbnVtOyBpKyspIHsKKworCQkJCXBidWYgPSAoKGNoYXIg KilidWYpICsgKExJQlhGU19CQlRPT0ZGNjQoWEZTX0JVRl9BRERSKGJwbGlzdFtp XSkpIC0gZmlyc3Rfb2ZmKTsKKwkJCQlzaXplID0gWEZTX0JVRl9TSVpFKGJwbGlz dFtpXSk7CisJCQkJaWYgKGxlbiA8IHNpemUpCisJCQkJCWJyZWFrOworCQkJCW1l bWNweShYRlNfQlVGX1BUUihicGxpc3RbaV0pLCBwYnVmLCBzaXplKTsKKwkJCQli cGxpc3RbaV0tPmJfZmxhZ3MgfD0gTElCWEZTX0JfVVBUT0RBVEU7CisJCQkJbGVu IC09IHNpemU7CisJCQkJaWYgKGJwbGlzdFtpXS0+Yl9mbGFncyAmIEJfSU5PREUp CisJCQkJCXBmX3JlYWRfaW5vZGVfZGlycyhhcmdzLCBicGxpc3RbaV0pOworI2lm ZGVmIFhSX1BGX1RSQUNFCisJCQkJcGZ0cmFjZSgicHV0YnVmICVwICglbGx1KSBp biBBRyAlZCIsIGJwbGlzdFtpXSwKKwkJCQkJKGxvbmcgbG9uZylYRlNfQlVGX0FE RFIoYnBsaXN0W2ldKSwKKwkJCQkJYXJncy0+YWdubyk7CisjZW5kaWYKKwkJCX0K KwkJfQorCQlpbm9kZV9idWZzID0gMDsKKwkJZm9yIChpID0gMDsgaSA8IG51bTsg aSsrKSB7CisJCQlpZiAoYnBsaXN0W2ldLT5iX2ZsYWdzICYgQl9JTk9ERSkKKwkJ CQlpbm9kZV9idWZzKys7CisJCQlsaWJ4ZnNfcHV0YnVmKGJwbGlzdFtpXSk7CisJ CX0KKwkJcHRocmVhZF9tdXRleF9sb2NrKCZhcmdzLT5sb2NrKTsKKwkJaWYgKHdo aWNoICE9IFBGX1NFQ09OREFSWSkgeworCQkJYXJncy0+aW5vZGVfYnVmc19xdWV1 ZWQgLT0gaW5vZGVfYnVmczsKKyNpZmRlZiBYUl9QRl9UUkFDRQorCQkJcGZ0cmFj ZSgiaW5vZGVfYnVmc19xdWV1ZWQgZm9yIEFHICVkID0gJWQiLCBhcmdzLT5hZ25v LAorCQkJCWFyZ3MtPmlub2RlX2J1ZnNfcXVldWVkKTsKKyNlbmRpZgorCQkJLyoK KwkJCSAqIGlmIHByaW1hcnkgaW5vZGUgcXVldWUgcnVubmluZyBsb3csIHByb2Nl c3MgbWV0YWRhdGEKKwkJCSAqIGluIGJvdGhzIHF1ZXVlcyB0byBhdm9pZCBJL08g c3RhcnZhdGlvbiBhcyB0aGUKKwkJCSAqIHByb2Nlc3NpbmcgdGhyZWFkIHdvdWxk 
IGJlIHdhaXRpbmcgZm9yIGEgbWV0YWRhdGEKKwkJCSAqIGJ1ZmZlcgorCQkJICov CisJCQlpZiAod2hpY2ggPT0gUEZfUFJJTUFSWSAmJiAhYXJncy0+cXVldWluZ19k b25lICYmCisJCQkJCWFyZ3MtPmlub2RlX2J1ZnNfcXVldWVkIDwgSU9fVEhSRVNI T0xEKSB7CisjaWZkZWYgWFJfUEZfVFJBQ0UKKwkJCQlwZnRyYWNlKCJyZWFkaW5n IG1ldGFkYXRhIGJ1ZnMgZnJvbSBwcmltYXJ5IHF1ZXVlIGZvciBBRyAlZCIsCisJ CQkJCWFyZ3MtPmFnbm8pOworI2VuZGlmCisJCQkJcGZfYmF0Y2hfcmVhZChhcmdz LCBQRl9NRVRBX09OTFksIGJ1Zik7CisjaWZkZWYgWFJfUEZfVFJBQ0UKKwkJCQlw ZnRyYWNlKCJyZWFkaW5nIGJ1ZnMgZnJvbSBzZWNvbmRhcnkgcXVldWUgZm9yIEFH ICVkIiwKKwkJCQkJYXJncy0+YWdubyk7CisjZW5kaWYKKwkJCQlwZl9iYXRjaF9y ZWFkKGFyZ3MsIFBGX1NFQ09OREFSWSwgYnVmKTsKKwkJCX0KKwkJfQogCX0KK30K IAotCWxpYnhmc19wdXRfbGlvX2J1ZmZlcigodm9pZCAqKSBsaW9wKTsKLQlyZXR1 cm47CitzdGF0aWMgdm9pZCAqCitwZl9pb193b3JrZXIoCisJdm9pZAkJCSpwYXJh bSkKK3sKKwlwcmVmZXRjaF9hcmdzX3QJCSphcmdzID0gcGFyYW07CisJdm9pZAkJ CSpidWYgPSBtZW1hbGlnbihsaWJ4ZnNfZGV2aWNlX2FsaWdubWVudCgpLCBwZl9t YXhfYnl0ZXMpOworCXN0cnVjdCBzY2hlZF9wYXJhbQlzcGFyYW07CisJaW50CQkJ cG9saWN5OworCisJcHRocmVhZF9nZXRzY2hlZHBhcmFtKHB0aHJlYWRfc2VsZigp LCAmcG9saWN5LCAmc3BhcmFtKTsKKwlzcGFyYW0uc2NoZWRfcHJpb3JpdHkrKzsK KwlwdGhyZWFkX3NldHNjaGVkcGFyYW0ocHRocmVhZF9zZWxmKCksIHBvbGljeSwg JnNwYXJhbSk7CisKKwlwdGhyZWFkX211dGV4X2xvY2soJmFyZ3MtPmxvY2spOwor CXdoaWxlICghYXJncy0+cXVldWluZ19kb25lIHx8IGFyZ3MtPnByaW1hcnlfaW9f cXVldWUuaGVpZ2h0KSB7CisKKyNpZmRlZiBYUl9QRl9UUkFDRQorCQlwZnRyYWNl KCJ3YWl0aW5nIHRvIHN0YXJ0IHByZWZldGNoIEkvTyBmb3IgQUcgJWQiLCBhcmdz LT5hZ25vKTsKKyNlbmRpZgorCQl3aGlsZSAoIWFyZ3MtPmNhbl9zdGFydF9yZWFk aW5nICYmICFhcmdzLT5xdWV1aW5nX2RvbmUpCisJCQlwdGhyZWFkX2NvbmRfd2Fp dCgmYXJncy0+c3RhcnRfcmVhZGluZywgJmFyZ3MtPmxvY2spOworI2lmZGVmIFhS X1BGX1RSQUNFCisJCXBmdHJhY2UoInN0YXJ0aW5nIHByZWZldGNoIEkvTyBmb3Ig QUcgJWQiLCBhcmdzLT5hZ25vKTsKKyNlbmRpZgorCQlwZl9iYXRjaF9yZWFkKGFy Z3MsIFBGX1BSSU1BUlksIGJ1Zik7CisJCXBmX2JhdGNoX3JlYWQoYXJncywgUEZf U0VDT05EQVJZLCBidWYpOworCisjaWZkZWYgWFJfUEZfVFJBQ0UKKwkJcGZ0cmFj ZSgicmFuIG91dCBvZiBidWZzIHRvIHByZWZldGNoIGZvciBBRyAlZCIsIGFyZ3Mt 
PmFnbm8pOworI2VuZGlmCisJCWlmICghYXJncy0+cXVldWluZ19kb25lKQorCQkJ YXJncy0+Y2FuX3N0YXJ0X3JlYWRpbmcgPSAwOworCX0KKwlwdGhyZWFkX211dGV4 X3VubG9jaygmYXJncy0+bG9jayk7CisKKwlmcmVlKGJ1Zik7CisKKyNpZmRlZiBY Ul9QRl9UUkFDRQorCXBmdHJhY2UoImZpbmlzaGVkIHByZWZldGNoIEkvTyBmb3Ig QUcgJWQiLCBhcmdzLT5hZ25vKTsKKyNlbmRpZgorCXJldHVybiBOVUxMOwogfQog Ci12b2lkCi1wcmVmZXRjaF9wNl9kaXIxKAotCXhmc19tb3VudF90CQkqbXAsCi0J eGZzX2lub190CQlpbm8sCi0JeGZzX2lub2RlX3QJCSppcCwKLQl4ZnNfZGFibGtf dAkJZGFfYm5vLAotCXhmc19mc2Jsb2NrX3QJCSpmYmxvY2twKQorc3RhdGljIGlu dAorcGZfY3JlYXRlX3ByZWZldGNoX3RocmVhZCgKKwlwcmVmZXRjaF9hcmdzX3QJ CSphcmdzKTsKKworc3RhdGljIHZvaWQgKgorcGZfcXVldWluZ193b3JrZXIoCisJ dm9pZAkJCSpwYXJhbSkKIHsKLQl4ZnNfZGFfaW50bm9kZV90CSpub2RlOwotCXhm c19idWZfdAkJKmJwOwotCXhmc19kZnNibm9fdAkJZnNibm87Ci0JeGZzX2JtYnRf aXJlY190CQltYXA7Ci0JaW50CQkJbm1hcDsKKwlwcmVmZXRjaF9hcmdzX3QJCSph cmdzID0gcGFyYW07CisJaW50CQkJbnVtX2lub3M7CisJaW5vX3RyZWVfbm9kZV90 CQkqaXJlYzsKKwlpbm9fdHJlZV9ub2RlX3QJCSpjdXJfaXJlYzsKKwlpbnQJCQli bGtzX3Blcl9jbHVzdGVyOworCWludAkJCWlub3NfcGVyX2NsdXN0ZXI7CisJeGZz X2FnYmxvY2tfdAkJYm5vOwogCWludAkJCWk7Ci0JaW50CQkJZXJyb3I7CisJaW50 CQkJZXJyOwogCi0Jbm1hcCA9IDE7Ci0JZXJyb3IgPSBsaWJ4ZnNfYm1hcGkoTlVM TCwgaXAsICh4ZnNfZmlsZW9mZl90KSBkYV9ibm8sIDEsCi0JCQlYRlNfQk1BUElf TUVUQURBVEEsIGZibG9ja3AsIDAsCi0JCQkmbWFwLCAmbm1hcCwgTlVMTCk7Ci0J aWYgKGVycm9yIHx8IChubWFwICE9IDEpKSAgewotCQlyZXR1cm47CisJYmxrc19w ZXJfY2x1c3RlciA9ICBYRlNfSU5PREVfQ0xVU1RFUl9TSVpFKG1wKSA+PiBtcC0+ bV9zYi5zYl9ibG9ja2xvZzsKKwlpZiAoYmxrc19wZXJfY2x1c3RlciA9PSAwKQor CQlibGtzX3Blcl9jbHVzdGVyID0gMTsKKwlpbm9zX3Blcl9jbHVzdGVyID0gYmxr c19wZXJfY2x1c3RlciAqIG1wLT5tX3NiLnNiX2lub3BibG9jazsKKworCWZvciAo aSA9IDA7IGkgPCBQRl9USFJFQURfQ09VTlQ7IGkrKykgeworCQllcnIgPSBwdGhy ZWFkX2NyZWF0ZSgmYXJncy0+aW9fdGhyZWFkc1tpXSwgTlVMTCwKKwkJCQlwZl9p b193b3JrZXIsIGFyZ3MpOworCQlpZiAoZXJyICE9IDApIHsKKwkJCWRvX3dhcm4o XygiZmFpbGVkIHRvIGNyZWF0ZSBwcmVmZXRjaCB0aHJlYWQ6ICVzXG4iKSwKKwkJ CQlzdHJlcnJvcihlcnIpKTsKKwkJCWlmIChpID09IDApIHsKKwkJCQlwZl9zdGFy 
dF9wcm9jZXNzaW5nKGFyZ3MpOworCQkJCXJldHVybiBOVUxMOworCQkJfQorCQkJ LyoKKwkJCSAqIHNpbmNlIHdlIGhhdmUgYXQgbGVhc3Qgb25lIEkvTyB0aHJlYWQs IHVzZSB0aGVtIGZvcgorCQkJICogcHJlZmV0Y2gKKwkJCSAqLworCQkJYnJlYWs7 CisJCX0KIAl9CiAKLQlpZiAoKGZzYm5vID0gbWFwLmJyX3N0YXJ0YmxvY2spID09 IEhPTEVTVEFSVEJMT0NLKQotCQlyZXR1cm47CisjaWZkZWYgWFJfUEZfVFJBQ0UK KwlwZnRyYWNlKCJzdGFydGluZyBwcmVmZXRjaCBmb3IgQUcgJWQiLCBhcmdzLT5h Z25vKTsKKyNlbmRpZgorCisJZm9yIChpcmVjID0gZmluZGZpcnN0X2lub2RlX3Jl YyhhcmdzLT5hZ25vKTsgaXJlYyAhPSBOVUxMOworCQkJaXJlYyA9IG5leHRfaW5v X3JlYyhpcmVjKSkgeworCisJCWN1cl9pcmVjID0gaXJlYzsKKworCQludW1faW5v cyA9IFhGU19JTk9ERVNfUEVSX0NIVU5LOworCQl3aGlsZSAobnVtX2lub3MgPCBY RlNfSUFMTE9DX0lOT0RFUyhtcCkgJiYgaXJlYyAhPSBOVUxMKSB7CisJCQlpcmVj ID0gbmV4dF9pbm9fcmVjKGlyZWMpOworCQkJbnVtX2lub3MgKz0gWEZTX0lOT0RF U19QRVJfQ0hVTks7CisJCX0KKworCQlpZiAoYXJncy0+ZGlyc19vbmx5ICYmIGN1 cl9pcmVjLT5pbm9faXNhX2RpciA9PSAwKQorCQkJY29udGludWU7CisjaWZkZWYg WFJfUEZfVFJBQ0UKKwkJc2VtX2dldHZhbHVlKCZhcmdzLT5yYV9jb3VudCwgJmkp OworCQlwZnRyYWNlKCJxdWV1aW5nIGlyZWMgJXAgaW4gQUcgJWQsIHNlbSBjb3Vu dCA9ICVkIiwKKwkJCWlyZWMsIGFyZ3MtPmFnbm8sIGkpOworI2VuZGlmCisJCXNl bV93YWl0KCZhcmdzLT5yYV9jb3VudCk7CisKKwkJbnVtX2lub3MgPSAwOworCQli bm8gPSBYRlNfQUdJTk9fVE9fQUdCTk8obXAsIGN1cl9pcmVjLT5pbm9fc3RhcnRu dW0pOworCisJCWRvIHsKKwkJCXBmX3F1ZXVlX2lvKGFyZ3MsIFhGU19BR0JfVE9f RlNCKG1wLCBhcmdzLT5hZ25vLCBibm8pLAorCQkJCQlibGtzX3Blcl9jbHVzdGVy LCBCX0lOT0RFKTsKKwkJCWJubyArPSBibGtzX3Blcl9jbHVzdGVyOworCQkJbnVt X2lub3MgKz0gaW5vc19wZXJfY2x1c3RlcjsKKwkJfSB3aGlsZSAobnVtX2lub3Mg PCBYRlNfSUFMTE9DX0lOT0RFUyhtcCkpOworCX0KKworCXB0aHJlYWRfbXV0ZXhf bG9jaygmYXJncy0+bG9jayk7CisKKyNpZmRlZiBYUl9QRl9UUkFDRQorCXBmdHJh Y2UoImZpbmlzaGVkIHF1ZXVpbmcgaW5vZGVzIGZvciBBRyAlZCAoaW5vZGVfYnVm c19xdWV1ZWQgPSAlZCkiLAorCQlhcmdzLT5hZ25vLCBhcmdzLT5pbm9kZV9idWZz X3F1ZXVlZCk7CisjZW5kaWYKKwlhcmdzLT5xdWV1aW5nX2RvbmUgPSAxOworCXBm X3N0YXJ0X2lvX3dvcmtlcnMoYXJncyk7CisJcGZfc3RhcnRfcHJvY2Vzc2luZyhh cmdzKTsKKwlwdGhyZWFkX211dGV4X3VubG9jaygmYXJncy0+bG9jayk7CisKKwkv 
KiBub3cgd2FpdCBmb3IgdGhlIHJlYWRlcnMgdG8gZmluaXNoICovCisJZm9yIChp ID0gMDsgaSA8IFBGX1RIUkVBRF9DT1VOVDsgaSsrKQorCQlpZiAoYXJncy0+aW9f dGhyZWFkc1tpXSkKKwkJCXB0aHJlYWRfam9pbihhcmdzLT5pb190aHJlYWRzW2ld LCBOVUxMKTsKKworI2lmZGVmIFhSX1BGX1RSQUNFCisJcGZ0cmFjZSgicHJlZmV0 Y2ggZm9yIEFHICVkIGZpbmlzaGVkIiwgYXJncy0+YWdubyk7CisjZW5kaWYKKwlw dGhyZWFkX211dGV4X2xvY2soJmFyZ3MtPmxvY2spOworCisJQVNTRVJUKGFyZ3Mt PnByaW1hcnlfaW9fcXVldWUuaGVpZ2h0ID09IDApOworCUFTU0VSVChhcmdzLT5z ZWNvbmRhcnlfaW9fcXVldWUuaGVpZ2h0ID09IDApOworCisJYXJncy0+cHJlZmV0 Y2hfZG9uZSA9IDE7CisJaWYgKGFyZ3MtPm5leHRfYXJncykKKwkJcGZfY3JlYXRl X3ByZWZldGNoX3RocmVhZChhcmdzLT5uZXh0X2FyZ3MpOwogCi0JYnAgPSBsaWJ4 ZnNfcmVhZGJ1ZihtcC0+bV9kZXYsIFhGU19GU0JfVE9fREFERFIobXAsIGZzYm5v KSwKLQkJCVhGU19GU0JfVE9fQkIobXAsIDEpLCAwKTsKKwlwdGhyZWFkX211dGV4 X3VubG9jaygmYXJncy0+bG9jayk7CiAKLQlpZiAoYnAgPT0gTlVMTCkKLQkgCXJl dHVybjsKKwlyZXR1cm4gTlVMTDsKK30KIAorc3RhdGljIGludAorcGZfY3JlYXRl X3ByZWZldGNoX3RocmVhZCgKKwlwcmVmZXRjaF9hcmdzX3QJCSphcmdzKQorewor CWludAkJCWVycjsKIAotCW5vZGUgPSAoeGZzX2RhX2ludG5vZGVfdCAqKVhGU19C VUZfUFRSKGJwKTsKLQlpZiAoSU5UX0dFVChub2RlLT5oZHIuaW5mby5tYWdpYywg QVJDSF9DT05WRVJUKSAhPSBYRlNfREFfTk9ERV9NQUdJQykgIHsKLQkJbGlieGZz X3B1dGJ1ZihicCk7Ci0JCXJldHVybjsKKyNpZmRlZiBYUl9QRl9UUkFDRQorCXBm dHJhY2UoImNyZWF0aW5nIHF1ZXVlIHRocmVhZCBmb3IgQUcgJWQiLCBhcmdzLT5h Z25vKTsKKyNlbmRpZgorCWVyciA9IHB0aHJlYWRfY3JlYXRlKCZhcmdzLT5xdWV1 aW5nX3RocmVhZCwgTlVMTCwKKwkJCXBmX3F1ZXVpbmdfd29ya2VyLCBhcmdzKTsK KwlpZiAoZXJyICE9IDApIHsKKwkJZG9fd2FybihfKCJmYWlsZWQgdG8gY3JlYXRl IHByZWZldGNoIHRocmVhZDogJXNcbiIpLAorCQkJc3RyZXJyb3IoZXJyKSk7CisJ CWNsZWFudXBfaW5vZGVfcHJlZmV0Y2goYXJncyk7CiAJfQogCi0JcHJlZmV0Y2hf cDZfbm9kZShtcCwgaXAsIGJwKTsKLQotCS8qIHNraXAgcHJlZmV0Y2hpbmcgaWYg bmV4dCBsZXZlbCBpcyBsZWFmIGxldmVsICovCi0JaWYgKElOVF9HRVQobm9kZS0+ aGRyLmxldmVsLCBBUkNIX0NPTlZFUlQpID4gMSkgewotCQlmb3IgKGkgPSAwOyBp IDwgSU5UX0dFVChub2RlLT5oZHIuY291bnQsIEFSQ0hfQ09OVkVSVCk7IGkrKykg ewotCQkJKHZvaWQpIHByZWZldGNoX3A2X2RpcjEobXAsIGlubywgaXAsCi0JCQkJ 
SU5UX0dFVChub2RlLT5idHJlZVtpXS5iZWZvcmUsIEFSQ0hfQ09OVkVSVCksCi0J CQkJZmJsb2NrcCk7Ci0JCX0KLQl9Ci0JCi0JbGlieGZzX3B1dGJ1ZihicCk7Ci0J cmV0dXJuOworCXJldHVybiBlcnIgPT0gMDsKIH0KIAotI2RlZmluZQlOTUFQUAk0 Ci0KIHZvaWQKLXByZWZldGNoX3A2X2RpcjIoCi0JeGZzX21vdW50X3QgICAgICpt cCwKLQl4ZnNfaW5vZGVfdAkqaXApCi17Ci0JeGZzX2ZpbGVvZmZfdAkJZGFfYm5v OwotCXhmc19maWxlb2ZmX3QJCW5leHRfZGFfYm5vOwotCWludAkJCWksIGo7Ci0J bGlieGZzX2xpb19yZXFfdAkqbGlvcDsKLQl4ZnNfZnNibG9ja190CQlmc2I7Ci0J aW50CQkJbmZzYjsKLQlpbnQJCQllcnJvcjsKK2luaXRfcHJlZmV0Y2goCisJeGZz X21vdW50X3QJCSpwbXApCit7CisJbXAgPSBwbXA7CisJbXBfZmQgPSBsaWJ4ZnNf ZGV2aWNlX3RvX2ZkKG1wLT5tX2Rldik7CisJcGZfbWF4X2J5dGVzID0gc3lzY29u ZihfU0NfUEFHRV9TSVpFKSA8PCA3OworCXBmX21heF9iYnMgPSBwZl9tYXhfYnl0 ZXMgPj4gQkJTSElGVDsKKwlwZl9tYXhfZnNicyA9IHBmX21heF9ieXRlcyA+PiBt cC0+bV9zYi5zYl9ibG9ja2xvZzsKKwlwZl9iYXRjaF9ieXRlcyA9IERFRl9CQVRD SF9CWVRFUzsKKwlwZl9iYXRjaF9mc2JzID0gREVGX0JBVENIX0JZVEVTID4+ICht cC0+bV9zYi5zYl9ibG9ja2xvZyArIDEpOworfQogCi0JaWYgKChsaW9wID0gKGxp Ynhmc19saW9fcmVxX3QgKikgbGlieGZzX2dldF9saW9fYnVmZmVyKExJQlhGU19M SU9fVFlQRV9ESVIpKSA9PSBOVUxMKSB7Ci0JCXJldHVybjsKLQl9Ci0JaSA9IDA7 Ci0JZm9yIChkYV9ibm8gPSAwLCBuZXh0X2RhX2JubyA9IDA7IG5leHRfZGFfYm5v ICE9IE5VTExGSUxFT0ZGOyBkYV9ibm8gPSBuZXh0X2RhX2JubykgewotCQlpZiAo aSA9PSBsaWJ4ZnNfbGlvX2Rpcl9jb3VudCkKLQkJCWJyZWFrOwotCQluZXh0X2Rh X2JubyA9IGRhX2JubyArIG1wLT5tX2RpcmJsa2ZzYnMgLSAxOwotCQlpZiAobGli eGZzX2JtYXBfbmV4dF9vZmZzZXQoTlVMTCwgaXAsICZuZXh0X2RhX2JubywgWEZT X0RBVEFfRk9SSykpCi0JCQlicmVhazsKK3ByZWZldGNoX2FyZ3NfdCAqCitzdGFy dF9pbm9kZV9wcmVmZXRjaCgKKwl4ZnNfYWdudW1iZXJfdAkJYWdubywKKwlpbnQJ CQlkaXJzX29ubHksCisJcHJlZmV0Y2hfYXJnc190CQkqcHJldl9hcmdzKQorewor CXByZWZldGNoX2FyZ3NfdAkJKmFyZ3M7CiAKLQkJaWYgKG1wLT5tX2RpcmJsa2Zz YnMgPT0gMSkgewotCQkJaWYgKChlcnJvciA9IGxpYnhmc19ibWFwaV9zaW5nbGUo TlVMTCwgaXAsIFhGU19EQVRBX0ZPUkssICZmc2IsIGRhX2JubykpICE9IDApIHsK LQkJCQlsaWJ4ZnNfcHV0X2xpb19idWZmZXIoKHZvaWQgKikgbGlvcCk7Ci0JCQkJ ZG9fcHJlZmV0Y2ggPSAwOwotCQkJCWRvX3dhcm4oInBoYXNlNiBwcmVmZXRjaDog 
Y2Fubm90IGJtYXAgc2luZ2xlIGJsb2NrIGVyciA9ICVkXG4iLCBlcnJvcik7Ci0J CQkJcmV0dXJuOwotCQkJfQotCQkJaWYgKGZzYiA9PSBOVUxMRlNCTE9DSykgewot CQkJCWxpYnhmc19wdXRfbGlvX2J1ZmZlcigodm9pZCAqKSBsaW9wKTsKLQkJCQly ZXR1cm47Ci0JCQl9CisJaWYgKGFnbm8gPj0gbXAtPm1fc2Iuc2JfYWdjb3VudCB8 fCAhZG9fcHJlZmV0Y2gpCisJCXJldHVybiBOVUxMOwogCi0JCQlsaW9wW2ldLmJs a25vID0gWEZTX0ZTQl9UT19EQUREUihtcCwgZnNiKTsKLQkJCWxpb3BbaV0ubGVu ID0gIFhGU19GU0JfVE9fQkIobXAsIDEpOwotCQkJaSsrOwotCQl9Ci0JCWVsc2Ug aWYgKChuZnNiID0gbXAtPm1fZGlyYmxrZnNicykgPiAxKSB7Ci0JCQl4ZnNfZnNi bG9ja190ICAgZmlyc3RibG9jazsKLQkJCXhmc19ibWJ0X2lyZWNfdCBtYXBbTk1B UFBdOwotCQkJeGZzX2JtYnRfaXJlY190ICptYXBwOwotCQkJaW50ICAgICAgICAg ICAgIG5tYXA7Ci0KLQkJCWlmIChuZnNiID4gTk1BUFApIHsKLSAgICAgICAgICAg ICAgICAgICAgICAgIAltYXBwID0gbWFsbG9jKHNpemVvZigqbWFwcCkgKiBuZnNi KTsKLQkJCQlpZiAobWFwcCA9PSBOVUxMKSB7Ci0JCQkJCWxpYnhmc19wdXRfbGlv X2J1ZmZlcigodm9pZCAqKSBsaW9wKTsKLQkJCQkJZG9fcHJlZmV0Y2ggPSAwOwot CQkJCQlkb193YXJuKCJwaGFzZTYgcHJlZmV0Y2g6IGNhbm5vdCBhbGxvY2F0ZSBt ZW0gZm9yIG1hcFxuIik7Ci0JCQkJCXJldHVybjsKLQkJCQl9Ci0JCQl9Ci0JCQll bHNlIHsKLQkJCQltYXBwID0gbWFwOwotCQkJfQotICAgICAgICAgICAgICAgICAg ICAgICAgZmlyc3RibG9jayA9IE5VTExGU0JMT0NLOwotICAgICAgICAgICAgICAg ICAgICAgICAgbm1hcCA9IG5mc2I7Ci0gICAgICAgICAgICAgICAgICAgICAgICBp ZiAoKGVycm9yID0gbGlieGZzX2JtYXBpKE5VTEwsIGlwLCBkYV9ibm8sCi0gICAg ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbmZzYiwKLSAgICAg ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBYRlNfQk1BUElfTUVU QURBVEEgfCBYRlNfQk1BUElfQUZMQUcoWEZTX0RBVEFfRk9SSyksCi0gICAgICAg ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgJmZpcnN0YmxvY2ssIDAs IG1hcHAsICZubWFwLCBOVUxMKSkpIHsKLQkJCQlsaWJ4ZnNfcHV0X2xpb19idWZm ZXIoKHZvaWQgKikgbGlvcCk7Ci0JCQkJZG9fcHJlZmV0Y2ggPSAwOwotCQkJCWRv X3dhcm4oInBoYXNlNiBwcmVmZXRjaDogY2Fubm90IGJtYXAgZXJyID0gJWRcbiIs IGVycm9yKTsKLQkJCQlyZXR1cm47Ci0JCQl9Ci0JCQlmb3IgKGogPSAwOyBqIDwg bm1hcDsgaisrKSB7Ci0JCQkJbGlvcFtpXS5ibGtubyA9IFhGU19GU0JfVE9fREFE RFIobXAsIG1hcHBbal0uYnJfc3RhcnRibG9jayk7Ci0JCQkJbGlvcFtpXS5sZW4g 
PSAoaW50KVhGU19GU0JfVE9fQkIobXAsIG1hcHBbal0uYnJfYmxvY2tjb3VudCk7 Ci0JCQkJaSsrOwotCQkJCWlmIChpID09IGxpYnhmc19saW9fZGlyX2NvdW50KQot CQkJCQlicmVhazsgLyogZm9yIGxvb3AgKi8KLQkJCX0KLQkJCWlmIChtYXBwICE9 IG1hcCkKLQkJCQlmcmVlKG1hcHApOworCWFyZ3MgPSBjYWxsb2MoMSwgc2l6ZW9m KHByZWZldGNoX2FyZ3NfdCkpOwogCi0JCX0KLQkJZWxzZSB7Ci0JCQlkb19lcnJv cigicGhhc2U2OiBpbnZhbGlkIG1wLT5tX2RpcmJsa2ZzYnMgJWRcbiIsIG1wLT5t X2RpcmJsa2ZzYnMpOwotCQl9Ci0JfQotCWlmIChpID4gMSkgewotCQlpZiAobGli eGZzX3JlYWRidWZfbGlzdChtcC0+bV9kZXYsIGksICh2b2lkICopIGxpb3AsIExJ QlhGU19MSU9fVFlQRV9ESVIpID09IC0xKQotCQkJZG9fcHJlZmV0Y2ggPSAwOwor CUlOSVRfUkFESVhfVFJFRSgmYXJncy0+cHJpbWFyeV9pb19xdWV1ZSwgMCk7CisJ SU5JVF9SQURJWF9UUkVFKCZhcmdzLT5zZWNvbmRhcnlfaW9fcXVldWUsIDApOwor CXB0aHJlYWRfbXV0ZXhfaW5pdCgmYXJncy0+bG9jaywgTlVMTCk7CisJcHRocmVh ZF9jb25kX2luaXQoJmFyZ3MtPnN0YXJ0X3JlYWRpbmcsIE5VTEwpOworCXB0aHJl YWRfY29uZF9pbml0KCZhcmdzLT5zdGFydF9wcm9jZXNzaW5nLCBOVUxMKTsKKwlh cmdzLT5hZ25vID0gYWdubzsKKwlhcmdzLT5kaXJzX29ubHkgPSBkaXJzX29ubHk7 CisKKwkvKgorCSAqIHVzZSBvbmx5IDEvNCBvZiB0aGUgbGlieGZzIGNhY2hlIGFz IHdlIGFyZSBvbmx5IGNvdW50aW5nIGlub2RlcworCSAqIGFuZCBub3QgYW55IG90 aGVyIGFzc29jaWF0ZWQgbWV0YWRhdGEgbGlrZSBkaXJlY3RvcmllcworCSAqLwor CisJc2VtX2luaXQoJmFyZ3MtPnJhX2NvdW50LCAwLCBsaWJ4ZnNfYmNhY2hlLT5j X21heGNvdW50IC8gdGhyZWFkX2NvdW50IC8KKwkJKFhGU19JQUxMT0NfQkxPQ0tT KG1wKSAvIChYRlNfSU5PREVfQ0xVU1RFUl9TSVpFKG1wKSA+PiBtcC0+bV9zYi5z Yl9ibG9ja2xvZykpIC8gNCk7CisKKwlpZiAoIXByZXZfYXJncykgeworCQlpZiAo IXBmX2NyZWF0ZV9wcmVmZXRjaF90aHJlYWQoYXJncykpCisJCQlyZXR1cm4gTlVM TDsKKwl9IGVsc2UgeworCQlwdGhyZWFkX211dGV4X2xvY2soJnByZXZfYXJncy0+ bG9jayk7CisJCWlmIChwcmV2X2FyZ3MtPnByZWZldGNoX2RvbmUpIHsKKwkJCWlm ICghcGZfY3JlYXRlX3ByZWZldGNoX3RocmVhZChhcmdzKSkKKwkJCQlhcmdzID0g TlVMTDsKKwkJfSBlbHNlCisJCQlwcmV2X2FyZ3MtPm5leHRfYXJncyA9IGFyZ3M7 CisJCXB0aHJlYWRfbXV0ZXhfdW5sb2NrKCZwcmV2X2FyZ3MtPmxvY2spOwogCX0K LQlsaWJ4ZnNfcHV0X2xpb19idWZmZXIoKHZvaWQgKikgbGlvcCk7CisKKwlyZXR1 cm4gYXJnczsKIH0KIAogdm9pZAotcHJlZmV0Y2hfc2IoeGZzX21vdW50X3QgKm1w 
LCB4ZnNfYWdudW1iZXJfdCAgYWdubykKK3dhaXRfZm9yX2lub2RlX3ByZWZldGNo KAorCXByZWZldGNoX2FyZ3NfdAkJKmFyZ3MpCiB7Ci0JbGlieGZzX2xpb19yZXFf dAkqbGlvcDsKLQotCWlmICgobGlvcCA9IChsaWJ4ZnNfbGlvX3JlcV90ICopIGxp Ynhmc19nZXRfbGlvX2J1ZmZlcihMSUJYRlNfTElPX1RZUEVfUkFXKSkgPT0gTlVM TCkgewotCQlkb19wcmVmZXRjaCA9IDA7CisJaWYgKGFyZ3MgPT0gTlVMTCkKIAkJ cmV0dXJuOwotCX0KIAotCWxpb3BbMF0uYmxrbm8gPSBYRlNfQUdfREFERFIobXAs IGFnbm8sIFhGU19TQl9EQUREUik7Ci0JbGlvcFsxXS5ibGtubyA9IFhGU19BR19E QUREUihtcCwgYWdubywgWEZTX0FHRl9EQUREUihtcCkpOwotCWxpb3BbMl0uYmxr bm8gPSBYRlNfQUdfREFERFIobXAsIGFnbm8sIFhGU19BR0lfREFERFIobXApKTsK LQlsaW9wWzBdLmxlbiA9IFhGU19GU1NfVE9fQkIobXAsIDEpOwotCWxpb3BbMV0u bGVuID0gWEZTX0ZTU19UT19CQihtcCwgMSk7Ci0JbGlvcFsyXS5sZW4gPSBYRlNf RlNTX1RPX0JCKG1wLCAxKTsKLQlpZiAobGlieGZzX3JlYWRidWZfbGlzdChtcC0+ bV9kZXYsIDMsICh2b2lkICopIGxpb3AsIExJQlhGU19MSU9fVFlQRV9SQVcpID09 IC0xKQotCQlkb19wcmVmZXRjaCA9IDA7CisJcHRocmVhZF9tdXRleF9sb2NrKCZh cmdzLT5sb2NrKTsKIAotCWxpYnhmc19wdXRfbGlvX2J1ZmZlcigodm9pZCAqKSBs aW9wKTsKKwl3aGlsZSAoIWFyZ3MtPmNhbl9zdGFydF9wcm9jZXNzaW5nKSB7Cisj aWZkZWYgWFJfUEZfVFJBQ0UKKwkJcGZ0cmFjZSgid2FpdGluZyB0byBzdGFydCBw cm9jZXNzaW5nIEFHICVkIiwgYXJncy0+YWdubyk7CisjZW5kaWYKKwkJcHRocmVh ZF9jb25kX3dhaXQoJmFyZ3MtPnN0YXJ0X3Byb2Nlc3NpbmcsICZhcmdzLT5sb2Nr KTsKKwl9CisjaWZkZWYgWFJfUEZfVFJBQ0UKKwlwZnRyYWNlKCJjYW4gc3RhcnQg cHJvY2Vzc2luZyBBRyAlZCIsIGFyZ3MtPmFnbm8pOworI2VuZGlmCisJcHRocmVh ZF9tdXRleF91bmxvY2soJmFyZ3MtPmxvY2spOwogfQogCiB2b2lkCi1wcmVmZXRj aF9yb290cyh4ZnNfbW91bnRfdCAqbXAsIHhmc19hZ251bWJlcl90IGFnbm8sCi0J CXhmc19hZ2ZfdCAqYWdmLCB4ZnNfYWdpX3QgKmFnaSkKK2NsZWFudXBfaW5vZGVf cHJlZmV0Y2goCisJcHJlZmV0Y2hfYXJnc190CQkqYXJncykKIHsKLQlpbnQJCQlp OwotCWxpYnhmc19saW9fcmVxX3QJKmxpb3A7Ci0KLQlpZiAoKGxpb3AgPSAobGli eGZzX2xpb19yZXFfdCAqKSBsaWJ4ZnNfZ2V0X2xpb19idWZmZXIoTElCWEZTX0xJ T19UWVBFX1JBVykpID09IE5VTEwpIHsKLQkJZG9fcHJlZmV0Y2ggPSAwOworCWlm IChhcmdzID09IE5VTEwpCiAJCXJldHVybjsKLQl9CiAKLQlpID0gMDsKLQlpZiAo YWdmLT5hZ2Zfcm9vdHNbWEZTX0JUTlVNX0JOT10gIT0gMCAmJgotCQkJdmVyaWZ5 
X2FnYm5vKG1wLCBhZ25vLCBhZ2YtPmFnZl9yb290c1tYRlNfQlROVU1fQk5PXSkp IHsKLQkJbGlvcFtpXS5ibGtubyA9IFhGU19BR0JfVE9fREFERFIobXAsIGFnbm8s IGFnZi0+YWdmX3Jvb3RzW1hGU19CVE5VTV9CTk9dKTsKLQkJbGlvcFtpXS5sZW4g PSBYRlNfRlNCX1RPX0JCKG1wLCAxKTsKLQkJaSsrOwotCX0KLQlpZiAoYWdmLT5h Z2Zfcm9vdHNbWEZTX0JUTlVNX0NOVF0gIT0gMCAmJgotCQkJdmVyaWZ5X2FnYm5v KG1wLCBhZ25vLCBhZ2YtPmFnZl9yb290c1tYRlNfQlROVU1fQ05UXSkpIHsKLQkJ bGlvcFtpXS5ibGtubyA9IFhGU19BR0JfVE9fREFERFIobXAsIGFnbm8sIGFnZi0+ YWdmX3Jvb3RzW1hGU19CVE5VTV9DTlRdKTsKLQkJbGlvcFtpXS5sZW4gPSBYRlNf RlNCX1RPX0JCKG1wLCAxKTsKLQkJaSsrOwotCX0KLQlpZiAoYWdpLT5hZ2lfcm9v dCAhPSAwICYmIHZlcmlmeV9hZ2JubyhtcCwgYWdubywgYWdpLT5hZ2lfcm9vdCkp IHsKLQkJbGlvcFtpXS5ibGtubyA9IFhGU19BR0JfVE9fREFERFIobXAsIGFnbm8s IGFnaS0+YWdpX3Jvb3QpOwotCQlsaW9wW2ldLmxlbiA9IFhGU19GU0JfVE9fQkIo bXAsIDEpOwotCQlpKys7Ci0JfQotCWlmIChpID4gMSkgewotCQlpZiAobGlieGZz X3JlYWRidWZfbGlzdChtcC0+bV9kZXYsIGksICh2b2lkICopIGxpb3AsIExJQlhG U19MSU9fVFlQRV9SQVcpID09IC0xKQotCQkJZG9fcHJlZmV0Y2ggPSAwOwotCX0K KyNpZmRlZiBYUl9QRl9UUkFDRQorCXBmdHJhY2UoIndhaXRpbmcgQUcgJWQgcHJl ZmV0Y2ggdG8gZmluaXNoIiwgYXJncy0+YWdubyk7CisjZW5kaWYKKwlpZiAoYXJn cy0+cXVldWluZ190aHJlYWQpCisJCXB0aHJlYWRfam9pbihhcmdzLT5xdWV1aW5n X3RocmVhZCwgTlVMTCk7CisKKyNpZmRlZiBYUl9QRl9UUkFDRQorCXBmdHJhY2Uo IkFHICVkIHByZWZldGNoIGRvbmUiLCBhcmdzLT5hZ25vKTsKKyNlbmRpZgorCXB0 aHJlYWRfbXV0ZXhfZGVzdHJveSgmYXJncy0+bG9jayk7CisJcHRocmVhZF9jb25k X2Rlc3Ryb3koJmFyZ3MtPnN0YXJ0X3JlYWRpbmcpOworCXB0aHJlYWRfY29uZF9k ZXN0cm95KCZhcmdzLT5zdGFydF9wcm9jZXNzaW5nKTsKKwlzZW1fZGVzdHJveSgm YXJncy0+cmFfY291bnQpOworCisJZnJlZShhcmdzKTsKK30KKworI2lmZGVmIFhS X1BGX1RSQUNFCiAKLQlsaWJ4ZnNfcHV0X2xpb19idWZmZXIoKHZvaWQgKikgbGlv cCk7Cit2b2lkCitfcGZ0cmFjZShjb25zdCBjaGFyICpmdW5jLCBjb25zdCBjaGFy ICptc2csIC4uLikKK3sKKwljaGFyCQlidWZbMTI4XTsKKwlzdHJ1Y3QgdGltZXZh bAl0djsKKwl2YV9saXN0IAlhcmdzOworCisJZ2V0dGltZW9mZGF5KCZ0diwgTlVM TCk7CisKKwl2YV9zdGFydChhcmdzLCBtc2cpOworCXZzbnByaW50ZihidWYsIHNp emVvZihidWYpLCBtc2csIGFyZ3MpOworCWJ1ZltzaXplb2YoYnVmKS0xXSA9ICdc 
MCc7CisJdmFfZW5kKGFyZ3MpOworCisJZnByaW50ZihwZl90cmFjZV9maWxlLCAi JWx1LiUwNmx1ICAlczogJXNcbiIsIHR2LnR2X3NlYywgdHYudHZfdXNlYywgZnVu YywgYnVmKTsKIH0KKworI2VuZGlmCkluZGV4OiByZXBhaXIveGZzcHJvZ3MvcmVw YWlyL3ByZWZldGNoLmgKPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQotLS0gcmVwYWlyLm9y aWcveGZzcHJvZ3MvcmVwYWlyL3ByZWZldGNoLmgJMjAwNy0wNC0yNyAxMzoxMzoz NS4wMDAwMDAwMDAgKzEwMDAKKysrIHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvcHJl ZmV0Y2guaAkyMDA3LTA2LTA1IDEyOjA3OjIyLjUwMjI3MjU3MiArMTAwMApAQCAt MSw0NSArMSw1OSBAQAogI2lmbmRlZiBfWEZTX1JFUEFJUl9QUkVGRVRDSF9ICiAj ZGVmaW5lCV9YRlNfUkVQQUlSX1BSRUZFVENIX0gKIAotc3RydWN0IGJsa21hcDsK LXN0cnVjdCBkYV9idF9jdXJzb3I7Ci1zdHJ1Y3QgeGZzX21vdW50OwotCi1leHRl cm4gCWludCBkb19wcmVmZXRjaDsKLQotc3RydWN0IGlub190cmVlX25vZGUgKnBy ZWZldGNoX2lub2RlX2NodW5rcygKLQlzdHJ1Y3QgeGZzX21vdW50ICosCi0JeGZz X2FnbnVtYmVyX3QsCi0Jc3RydWN0IGlub190cmVlX25vZGUgKik7Ci0KLWV4dGVy biB2b2lkIHByZWZldGNoX2RpcjEoCi0Jc3RydWN0IHhmc19tb3VudAkqbXAsCi0J eGZzX2RhYmxrX3QJCWJubywKLQlzdHJ1Y3QgZGFfYnRfY3Vyc29yCSpkYV9jdXJz b3IpOwotCi1leHRlcm4gdm9pZCBwcmVmZXRjaF9kaXIyKAotCXN0cnVjdCB4ZnNf bW91bnQJKm1wLAotCXN0cnVjdCBibGttYXAJCSpibGttYXApOwotCi1leHRlcm4g dm9pZCBwcmVmZXRjaF9wNl9kaXIxKAotCXN0cnVjdCB4ZnNfbW91bnQJKm1wLAot CXhmc19pbm9fdAkJaW5vLAotCXN0cnVjdCB4ZnNfaW5vZGUJKmlwLAotCXhmc19k YWJsa190CQlkYV9ibm8sCi0JeGZzX2ZzYmxvY2tfdAkJKmZibG9ja3ApOwotCi1l eHRlcm4gdm9pZCBwcmVmZXRjaF9wNl9kaXIyKAotCXN0cnVjdCB4ZnNfbW91bnQJ Km1wLAotCXN0cnVjdCB4ZnNfaW5vZGUJKmlwKTsKLQotZXh0ZXJuIHZvaWQgcHJl ZmV0Y2hfc2IoCi0Jc3RydWN0IHhmc19tb3VudAkqbXAsCi0JeGZzX2FnbnVtYmVy X3QJCWFnbm8pOwotCi1leHRlcm4gdm9pZCBwcmVmZXRjaF9yb290cygKLQlzdHJ1 Y3QgeGZzX21vdW50IAkqbXAsCi0JeGZzX2FnbnVtYmVyX3QgCQlhZ25vLAotCXhm c19hZ2ZfdAkJKmFnZiwKLQl4ZnNfYWdpX3QJCSphZ2kpOworI2luY2x1ZGUgPHNl bWFwaG9yZS5oPgorI2luY2x1ZGUgImluY29yZS5oIgorI2luY2x1ZGUgInJhZGl4 LXRyZWUuaCIKKworCitleHRlcm4gaW50IAlkb19wcmVmZXRjaDsKKworI2RlZmlu ZSBQRl9USFJFQURfQ09VTlQJNAorCit0eXBlZGVmIHN0cnVjdCBwcmVmZXRjaF9h 
cmdzIHsKKwlwdGhyZWFkX211dGV4X3QJCWxvY2s7CisJcHRocmVhZF90CQlxdWV1 aW5nX3RocmVhZDsKKwlwdGhyZWFkX3QJCWlvX3RocmVhZHNbUEZfVEhSRUFEX0NP VU5UXTsKKwlzdHJ1Y3QgcmFkaXhfdHJlZV9yb290CXByaW1hcnlfaW9fcXVldWU7 CisJc3RydWN0IHJhZGl4X3RyZWVfcm9vdAlzZWNvbmRhcnlfaW9fcXVldWU7CisJ cHRocmVhZF9jb25kX3QJCXN0YXJ0X3JlYWRpbmc7CisJcHRocmVhZF9jb25kX3QJ CXN0YXJ0X3Byb2Nlc3Npbmc7CisJaW50CQkJYWdubzsKKwlpbnQJCQlkaXJzX29u bHk7CisJdm9sYXRpbGUgaW50CQljYW5fc3RhcnRfcmVhZGluZzsKKwl2b2xhdGls ZSBpbnQJCWNhbl9zdGFydF9wcm9jZXNzaW5nOworCXZvbGF0aWxlIGludAkJcHJl ZmV0Y2hfZG9uZTsKKwl2b2xhdGlsZSBpbnQJCXF1ZXVpbmdfZG9uZTsKKwl2b2xh dGlsZSBpbnQJCWlub2RlX2J1ZnNfcXVldWVkOworCXZvbGF0aWxlIHhmc19mc2Js b2NrX3QJbGFzdF9ibm9fcmVhZDsKKwlzZW1fdAkJCXJhX2NvdW50OworCXN0cnVj dCBwcmVmZXRjaF9hcmdzCSpuZXh0X2FyZ3M7Cit9IHByZWZldGNoX2FyZ3NfdDsK KworCisKK3ZvaWQKK2luaXRfcHJlZmV0Y2goCisJeGZzX21vdW50X3QJCSpwbXAp OworCitwcmVmZXRjaF9hcmdzX3QgKgorc3RhcnRfaW5vZGVfcHJlZmV0Y2goCisJ eGZzX2FnbnVtYmVyX3QJCWFnbm8sCisJaW50CQkJZGlyc19vbmx5LAorCXByZWZl dGNoX2FyZ3NfdAkJKnByZXZfYXJncyk7CisKK3ZvaWQKK3dhaXRfZm9yX2lub2Rl X3ByZWZldGNoKAorCXByZWZldGNoX2FyZ3NfdAkJKmFyZ3MpOworCit2b2lkCitj bGVhbnVwX2lub2RlX3ByZWZldGNoKAorCXByZWZldGNoX2FyZ3NfdAkJKmFyZ3Mp OworCisKKyNpZmRlZiBYUl9QRl9UUkFDRQorI2RlZmluZSBwZnRyYWNlKG1zZy4u LikJX3BmdHJhY2UoX19GVU5DVElPTl9fLCAjIyBtc2cpCit2b2lkCV9wZnRyYWNl KGNvbnN0IGNoYXIgKiwgY29uc3QgY2hhciAqLCAuLi4pOworI2VuZGlmCiAKICNl bmRpZiAvKiBfWEZTX1JFUEFJUl9QUkVGRVRDSF9IICovCkluZGV4OiByZXBhaXIv eGZzcHJvZ3MvcmVwYWlyL3Byb2dyZXNzLmMKPT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQot LS0gcmVwYWlyLm9yaWcveGZzcHJvZ3MvcmVwYWlyL3Byb2dyZXNzLmMJMjAwNy0w NC0yNyAxMzoxMzozNS4wMDAwMDAwMDAgKzEwMDAKKysrIHJlcGFpci94ZnNwcm9n cy9yZXBhaXIvcHJvZ3Jlc3MuYwkyMDA3LTA2LTA1IDEyOjE0OjIzLjgzMTI0MDg5 NSArMTAwMApAQCAtMSw3ICsxLDcgQEAKIAogI2luY2x1ZGUgPGxpYnhmcy5oPgot I2luY2x1ZGUgInByb2dyZXNzLmgiCiAjaW5jbHVkZSAiZ2xvYmFscy5oIgorI2lu Y2x1ZGUgInByb2dyZXNzLmgiCiAjaW5jbHVkZSAiZXJyX3Byb3Rvcy5oIgogI2lu 
Y2x1ZGUgPHNpZ25hbC5oPgogCkBAIC05Niw3ICs5Niw3IEBACiAJdGltZV90CQlz dGFydDsKIAl0aW1lX3QJCWVuZDsKIAl0aW1lX3QJCWR1cmF0aW9uOwotCV9fdWlu dDY0X3QJaXRlbV9jb3VudHNbNF07CQorCV9fdWludDY0X3QJaXRlbV9jb3VudHNb NF07CiB9IHBoYXNlX3RpbWVzX3Q7CiBzdGF0aWMgcGhhc2VfdGltZXNfdCBwaGFz ZV90aW1lc1s4XTsKIApAQCAtMTc3LDcgKzE3Nyw3IEBACiAJLyoKIAkgKiBTcGVj aWZ5IGEgcmVwZWF0aW5nIHRpbWVyIHRoYXQgZmlyZXMgZWFjaCBNU0dfSU5URVJW QUwgc2Vjb25kcy4KIAkgKi8KLQkKKwogCXRpbWVzcGVjLml0X3ZhbHVlLnR2X3Nl YyA9IG1zZ3AtPmludGVydmFsOwogCXRpbWVzcGVjLml0X3ZhbHVlLnR2X25zZWMg PSAwOwogCXRpbWVzcGVjLml0X2ludGVydmFsLnR2X3NlYyA9IG1zZ3AtPmludGVy dmFsOwpAQCAtMjg1LDcgKzI4NSw3IEBACiBzZXRfcHJvZ3Jlc3NfbXNnIChpbnQg cmVwb3J0LCBfX3VpbnQ2NF90IHRvdGFsKQogewogCi0JaWYgKCFkb19wYXJhbGxl bCkKKwlpZiAoIWFnX3N0cmlkZSkKIAkJcmV0dXJuICgwKTsKIAogCWlmIChwdGhy ZWFkX211dGV4X2xvY2soJmdsb2JhbF9tc2dzLm11dGV4KSkKQEAgLTMxNCw4ICsz MTQsOCBAQAogCV9fdWludDY0X3Qgc3VtOwogCW1zZ19ibG9ja190IAkqbXNncCA9 ICZnbG9iYWxfbXNnczsKIAljaGFyCQltc2didWZbRFVSQVRJT05fQlVGX1NJWkVd OwotCQotCWlmICghZG9fcGFyYWxsZWwpCisKKwlpZiAoIWFnX3N0cmlkZSkKIAkJ cmV0dXJuIDA7CiAKIAlpZiAocHRocmVhZF9tdXRleF9sb2NrKCZnbG9iYWxfbXNn cy5tdXRleCkpCkBAIC0zNzksNiArMzc5LDkgQEAKIAl0aW1lX3QgICAgbm93Owog CXN0cnVjdCB0bSAqdG1wOwogCisJaWYgKHZlcmJvc2UgPiAxKQorCQljYWNoZV9y ZXBvcnQoc3RkZXJyLCAibGlieGZzX2JjYWNoZSIsIGxpYnhmc19iY2FjaGUpOwor CiAJbm93ID0gdGltZShOVUxMKTsKIAogCWlmIChlbmQpIHsKQEAgLTQ2MSw3ICs0 NjQsNyBAQAogCQkJfQogCQkJc3RyY2F0KGJ1ZiwgdGVtcCk7CiAJCX0KLQkJCQor CiAJfQogCWlmIChsZW5ndGggPj0gT05FTUlOVVRFKSB7CiAJCW1pbnV0ZXMgPSAo bGVuZ3RoIC0gc3VtKSAvIE9ORU1JTlVURTsKQEAgLTQ4OCw3ICs0OTEsNyBAQAog CQkJc3RyY2F0KGJ1ZiwgXygiLCAiKSk7CiAJCXN0cmNhdChidWYsIHRlbXApOwog CX0KLQkJCisKIAlyZXR1cm4oYnVmKTsKIH0KIApJbmRleDogcmVwYWlyL3hmc3By b2dzL3JlcGFpci9wcm9ncmVzcy5oCj09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KLS0tIHJl cGFpci5vcmlnL3hmc3Byb2dzL3JlcGFpci9wcm9ncmVzcy5oCTIwMDctMDQtMjcg MTM6MTM6MzUuMDAwMDAwMDAwICsxMDAwCisrKyByZXBhaXIveGZzcHJvZ3MvcmVw 
YWlyL3Byb2dyZXNzLmgJMjAwNy0wNS0yOSAxMToyMTowMS40NjkzNDc0MjUgKzEw MDAKQEAgLTIxLDggKzIxLDggQEAKICNkZWZpbmUJUFJPR19GTVRfUkVCVUlMRF9B Rwk5CS8qIFBoYXNlIDUgKi8KIAogI2RlZmluZQlQUk9HX0ZNVF9UUkFWRVJTQUwJ MTAJLyogUGhhc2UgNiAqLwotI2RlZmluZQlQUk9HX0ZNVF9UUkFWRVJTU1VCCTEx CQotI2RlZmluZQlQUk9HX0ZNVF9ESVNDT05JTk9ERQkxMgkKKyNkZWZpbmUJUFJP R19GTVRfVFJBVkVSU1NVQgkxMQorI2RlZmluZQlQUk9HX0ZNVF9ESVNDT05JTk9E RQkxMgogCiAjZGVmaW5lCVBST0dSRVNTX0ZNVF9DT1JSX0xJTksJMTMJLyogUGhh c2UgNyAqLwogI2RlZmluZQlQUk9HUkVTU19GTVRfVlJGWV9MSU5LIAkxNApAQCAt MzgsNiArMzgsNiBAQAogZXh0ZXJuIGNoYXIgKmR1cmF0aW9uKGludCB2YWwsIGNo YXIgKmJ1Zik7CiBleHRlcm4gaW50IGRvX3BhcmFsbGVsOwogCi0jZGVmaW5lCVBS T0dfUlBUX0lOQyhhLGIpIGlmIChkb19wYXJhbGxlbCAmJiBwcm9nX3JwdF9kb25l KSAoYSkgKz0gKGIpCisjZGVmaW5lCVBST0dfUlBUX0lOQyhhLGIpIGlmIChhZ19z dHJpZGUgJiYgcHJvZ19ycHRfZG9uZSkgKGEpICs9IChiKQogCiAjZW5kaWYJLyog X1hGU19SRVBBSVJfUFJPR1JFU1NfUlBUX0hfICovCkluZGV4OiByZXBhaXIveGZz cHJvZ3MvcmVwYWlyL3JhZGl4LXRyZWUuYwo9PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0t LSAvZGV2L251bGwJMTk3MC0wMS0wMSAwMDowMDowMC4wMDAwMDAwMDAgKzAwMDAK KysrIHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvcmFkaXgtdHJlZS5jCTIwMDctMDUt MTcgMTM6MDg6MjYuMzYxMjM5ODEyICsxMDAwCkBAIC0wLDAgKzEsODA1IEBACisv KgorICogQ29weXJpZ2h0IChDKSAyMDAxIE1vbWNoaWwgVmVsaWtvdgorICogUG9y dGlvbnMgQ29weXJpZ2h0IChDKSAyMDAxIENocmlzdG9waCBIZWxsd2lnCisgKiBD b3B5cmlnaHQgKEMpIDIwMDUgU0dJLCBDaHJpc3RvcGggTGFtZXRlciA8Y2xhbWV0 ZXJAc2dpLmNvbT4KKyAqCisgKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2Fy ZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yCisgKiBtb2RpZnkgaXQg dW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5z ZSBhcworICogcHVibGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRp b247IGVpdGhlciB2ZXJzaW9uIDIsIG9yIChhdAorICogeW91ciBvcHRpb24pIGFu eSBsYXRlciB2ZXJzaW9uLgorICoKKyAqIFRoaXMgcHJvZ3JhbSBpcyBkaXN0cmli dXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNlZnVsLCBidXQKKyAq IFdJVEhPVVQgQU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQg 
d2FycmFudHkgb2YKKyAqIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZPUiBB IFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0aGUgR05VCisgKiBHZW5lcmFsIFB1 YmxpYyBMaWNlbnNlIGZvciBtb3JlIGRldGFpbHMuCisgKgorICogWW91IHNob3Vs ZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGlj IExpY2Vuc2UKKyAqIGFsb25nIHdpdGggdGhpcyBwcm9ncmFtOyBpZiBub3QsIHdy aXRlIHRvIHRoZSBGcmVlIFNvZnR3YXJlCisgKiBGb3VuZGF0aW9uLCBJbmMuLCA2 NzUgTWFzcyBBdmUsIENhbWJyaWRnZSwgTUEgMDIxMzksIFVTQS4KKyAqLworCisj aW5jbHVkZSA8bGlieGZzLmg+CisjaW5jbHVkZSAicmFkaXgtdHJlZS5oIgorCisj aWZuZGVmIEFSUkFZX1NJWkUKKyNkZWZpbmUgQVJSQVlfU0laRSh4KSAoc2l6ZW9m KHgpIC8gc2l6ZW9mKCh4KVswXSkpCisjZW5kaWYKKworI2RlZmluZSBSQURJWF9U UkVFX01BUF9TSElGVAk2CisjZGVmaW5lIFJBRElYX1RSRUVfTUFQX1NJWkUJKDFV TCA8PCBSQURJWF9UUkVFX01BUF9TSElGVCkKKyNkZWZpbmUgUkFESVhfVFJFRV9N QVBfTUFTSwkoUkFESVhfVFJFRV9NQVBfU0laRS0xKQorCisjaWZkZWYgUkFESVhf VFJFRV9UQUdTCisjZGVmaW5lIFJBRElYX1RSRUVfVEFHX0xPTkdTCVwKKwkoKFJB RElYX1RSRUVfTUFQX1NJWkUgKyBCSVRTX1BFUl9MT05HIC0gMSkgLyBCSVRTX1BF Ul9MT05HKQorI2VuZGlmCisKK3N0cnVjdCByYWRpeF90cmVlX25vZGUgeworCXVu c2lnbmVkIGludAljb3VudDsKKwl2b2lkCQkqc2xvdHNbUkFESVhfVFJFRV9NQVBf U0laRV07CisjaWZkZWYgUkFESVhfVFJFRV9UQUdTCisJdW5zaWduZWQgbG9uZwl0 YWdzW1JBRElYX1RSRUVfTUFYX1RBR1NdW1JBRElYX1RSRUVfVEFHX0xPTkdTXTsK KyNlbmRpZgorfTsKKworc3RydWN0IHJhZGl4X3RyZWVfcGF0aCB7CisJc3RydWN0 IHJhZGl4X3RyZWVfbm9kZSAqbm9kZTsKKwlpbnQgb2Zmc2V0OworfTsKKworI2Rl ZmluZSBSQURJWF9UUkVFX0lOREVYX0JJVFMgICg4IC8qIENIQVJfQklUICovICog c2l6ZW9mKHVuc2lnbmVkIGxvbmcpKQorI2RlZmluZSBSQURJWF9UUkVFX01BWF9Q QVRIIChSQURJWF9UUkVFX0lOREVYX0JJVFMvUkFESVhfVFJFRV9NQVBfU0hJRlQg KyAyKQorCitzdGF0aWMgdW5zaWduZWQgbG9uZyBoZWlnaHRfdG9fbWF4aW5kZXhb UkFESVhfVFJFRV9NQVhfUEFUSF07CisKKy8qCisgKiBSYWRpeCB0cmVlIG5vZGUg Y2FjaGUuCisgKi8KKworI2RlZmluZSByYWRpeF90cmVlX25vZGVfYWxsb2Mocikg CSgoc3RydWN0IHJhZGl4X3RyZWVfbm9kZSAqKSBcCisJCWNhbGxvYygxLCBzaXpl b2Yoc3RydWN0IHJhZGl4X3RyZWVfbm9kZSkpKQorI2RlZmluZSByYWRpeF90cmVl X25vZGVfZnJlZShuKSAJZnJlZShuKQorCisjaWZkZWYgUkFESVhfVFJFRV9UQUdT 
CisKK3N0YXRpYyBpbmxpbmUgdm9pZCB0YWdfc2V0KHN0cnVjdCByYWRpeF90cmVl X25vZGUgKm5vZGUsIHVuc2lnbmVkIGludCB0YWcsCisJCWludCBvZmZzZXQpCit7 CisJKigoX191aW50MzJfdCAqKW5vZGUtPnRhZ3NbdGFnXSArIChvZmZzZXQgPj4g NSkpIHw9ICgxIDw8IChvZmZzZXQgJiAzMSkpOworfQorCitzdGF0aWMgaW5saW5l IHZvaWQgdGFnX2NsZWFyKHN0cnVjdCByYWRpeF90cmVlX25vZGUgKm5vZGUsIHVu c2lnbmVkIGludCB0YWcsCisJCWludCBvZmZzZXQpCit7CisJX191aW50MzJfdCAJ KnAgPSAoX191aW50MzJfdCopbm9kZS0+dGFnc1t0YWddICsgKG9mZnNldCA+PiA1 KTsKKwlfX3VpbnQzMl90IAltID0gMSA8PCAob2Zmc2V0ICYgMzEpOworCSpwICY9 IH5tOworfQorCitzdGF0aWMgaW5saW5lIGludCB0YWdfZ2V0KHN0cnVjdCByYWRp eF90cmVlX25vZGUgKm5vZGUsIHVuc2lnbmVkIGludCB0YWcsCisJCWludCBvZmZz ZXQpCit7CisJcmV0dXJuIDEgJiAoKChjb25zdCBfX3VpbnQzMl90ICopbm9kZS0+ dGFnc1t0YWddKVtvZmZzZXQgPj4gNV0gPj4gKG9mZnNldCAmIDMxKSk7Cit9CisK Ky8qCisgKiBSZXR1cm5zIDEgaWYgYW55IHNsb3QgaW4gdGhlIG5vZGUgaGFzIHRo aXMgdGFnIHNldC4KKyAqIE90aGVyd2lzZSByZXR1cm5zIDAuCisgKi8KK3N0YXRp YyBpbmxpbmUgaW50IGFueV90YWdfc2V0KHN0cnVjdCByYWRpeF90cmVlX25vZGUg Km5vZGUsIHVuc2lnbmVkIGludCB0YWcpCit7CisJaW50IGlkeDsKKwlmb3IgKGlk eCA9IDA7IGlkeCA8IFJBRElYX1RSRUVfVEFHX0xPTkdTOyBpZHgrKykgeworCQlp ZiAobm9kZS0+dGFnc1t0YWddW2lkeF0pCisJCQlyZXR1cm4gMTsKKwl9CisJcmV0 dXJuIDA7Cit9CisKKyNlbmRpZgorCisvKgorICoJUmV0dXJuIHRoZSBtYXhpbXVt IGtleSB3aGljaCBjYW4gYmUgc3RvcmUgaW50byBhCisgKglyYWRpeCB0cmVlIHdp dGggaGVpZ2h0IEhFSUdIVC4KKyAqLworc3RhdGljIGlubGluZSB1bnNpZ25lZCBs b25nIHJhZGl4X3RyZWVfbWF4aW5kZXgodW5zaWduZWQgaW50IGhlaWdodCkKK3sK KwlyZXR1cm4gaGVpZ2h0X3RvX21heGluZGV4W2hlaWdodF07Cit9CisKKy8qCisg KglFeHRlbmQgYSByYWRpeCB0cmVlIHNvIGl0IGNhbiBzdG9yZSBrZXkgQGluZGV4 LgorICovCitzdGF0aWMgaW50IHJhZGl4X3RyZWVfZXh0ZW5kKHN0cnVjdCByYWRp eF90cmVlX3Jvb3QgKnJvb3QsIHVuc2lnbmVkIGxvbmcgaW5kZXgpCit7CisJc3Ry dWN0IHJhZGl4X3RyZWVfbm9kZSAqbm9kZTsKKwl1bnNpZ25lZCBpbnQgaGVpZ2h0 OworI2lmZGVmIFJBRElYX1RSRUVfVEFHUworCWNoYXIgdGFnc1tSQURJWF9UUkVF X01BWF9UQUdTXTsKKwlpbnQgdGFnOworI2VuZGlmCisKKwkvKiBGaWd1cmUgb3V0 IHdoYXQgdGhlIGhlaWdodCBzaG91bGQgYmUuICAqLworCWhlaWdodCA9IHJvb3Qt 
PmhlaWdodCArIDE7CisJd2hpbGUgKGluZGV4ID4gcmFkaXhfdHJlZV9tYXhpbmRl eChoZWlnaHQpKQorCQloZWlnaHQrKzsKKworCWlmIChyb290LT5ybm9kZSA9PSBO VUxMKSB7CisJCXJvb3QtPmhlaWdodCA9IGhlaWdodDsKKwkJZ290byBvdXQ7CisJ fQorCisjaWZkZWYgUkFESVhfVFJFRV9UQUdTCisJLyoKKwkgKiBQcmVwYXJlIHRo ZSB0YWcgc3RhdHVzIG9mIHRoZSB0b3AtbGV2ZWwgbm9kZSBmb3IgcHJvcGFnYXRp b24KKwkgKiBpbnRvIHRoZSBuZXdseS1wdXNoZWQgdG9wLWxldmVsIG5vZGUocykK KwkgKi8KKwlmb3IgKHRhZyA9IDA7IHRhZyA8IFJBRElYX1RSRUVfTUFYX1RBR1M7 IHRhZysrKSB7CisJCXRhZ3NbdGFnXSA9IDA7CisJCWlmIChhbnlfdGFnX3NldChy b290LT5ybm9kZSwgdGFnKSkKKwkJCXRhZ3NbdGFnXSA9IDE7CisJfQorI2VuZGlm CisJZG8geworCQlpZiAoIShub2RlID0gcmFkaXhfdHJlZV9ub2RlX2FsbG9jKHJv b3QpKSkKKwkJCXJldHVybiAtRU5PTUVNOworCisJCS8qIEluY3JlYXNlIHRoZSBo ZWlnaHQuICAqLworCQlub2RlLT5zbG90c1swXSA9IHJvb3QtPnJub2RlOworCisj aWZkZWYgUkFESVhfVFJFRV9UQUdTCisJCS8qIFByb3BhZ2F0ZSB0aGUgYWdncmVn YXRlZCB0YWcgaW5mbyBpbnRvIHRoZSBuZXcgcm9vdCAqLworCQlmb3IgKHRhZyA9 IDA7IHRhZyA8IFJBRElYX1RSRUVfTUFYX1RBR1M7IHRhZysrKSB7CisJCQlpZiAo dGFnc1t0YWddKQorCQkJCXRhZ19zZXQobm9kZSwgdGFnLCAwKTsKKwkJfQorI2Vu ZGlmCisJCW5vZGUtPmNvdW50ID0gMTsKKwkJcm9vdC0+cm5vZGUgPSBub2RlOwor CQlyb290LT5oZWlnaHQrKzsKKwl9IHdoaWxlIChoZWlnaHQgPiByb290LT5oZWln aHQpOworb3V0OgorCXJldHVybiAwOworfQorCisvKioKKyAqCXJhZGl4X3RyZWVf aW5zZXJ0ICAgIC0gICAgaW5zZXJ0IGludG8gYSByYWRpeCB0cmVlCisgKglAcm9v dDoJCXJhZGl4IHRyZWUgcm9vdAorICoJQGluZGV4OgkJaW5kZXgga2V5CisgKglA aXRlbToJCWl0ZW0gdG8gaW5zZXJ0CisgKgorICoJSW5zZXJ0IGFuIGl0ZW0gaW50 byB0aGUgcmFkaXggdHJlZSBhdCBwb3NpdGlvbiBAaW5kZXguCisgKi8KK2ludCBy YWRpeF90cmVlX2luc2VydChzdHJ1Y3QgcmFkaXhfdHJlZV9yb290ICpyb290LAor CQkJdW5zaWduZWQgbG9uZyBpbmRleCwgdm9pZCAqaXRlbSkKK3sKKwlzdHJ1Y3Qg cmFkaXhfdHJlZV9ub2RlICpub2RlID0gTlVMTCwgKnNsb3Q7CisJdW5zaWduZWQg aW50IGhlaWdodCwgc2hpZnQ7CisJaW50IG9mZnNldDsKKwlpbnQgZXJyb3I7CisK KwkvKiBNYWtlIHN1cmUgdGhlIHRyZWUgaXMgaGlnaCBlbm91Z2guICAqLworCWlm ICgoIWluZGV4ICYmICFyb290LT5ybm9kZSkgfHwKKwkJCWluZGV4ID4gcmFkaXhf dHJlZV9tYXhpbmRleChyb290LT5oZWlnaHQpKSB7CisJCWVycm9yID0gcmFkaXhf 
dHJlZV9leHRlbmQocm9vdCwgaW5kZXgpOworCQlpZiAoZXJyb3IpCisJCQlyZXR1 cm4gZXJyb3I7CisJfQorCisJc2xvdCA9IHJvb3QtPnJub2RlOworCWhlaWdodCA9 IHJvb3QtPmhlaWdodDsKKwlzaGlmdCA9IChoZWlnaHQtMSkgKiBSQURJWF9UUkVF X01BUF9TSElGVDsKKworCW9mZnNldCA9IDA7CQkJLyogdW5pbml0aWFsaXNlZCB2 YXIgd2FybmluZyAqLworCWRvIHsKKwkJaWYgKHNsb3QgPT0gTlVMTCkgeworCQkJ LyogSGF2ZSB0byBhZGQgYSBjaGlsZCBub2RlLiAgKi8KKwkJCWlmICghKHNsb3Qg PSByYWRpeF90cmVlX25vZGVfYWxsb2Mocm9vdCkpKQorCQkJCXJldHVybiAtRU5P TUVNOworCQkJaWYgKG5vZGUpIHsKKwkJCQlub2RlLT5zbG90c1tvZmZzZXRdID0g c2xvdDsKKwkJCQlub2RlLT5jb3VudCsrOworCQkJfSBlbHNlCisJCQkJcm9vdC0+ cm5vZGUgPSBzbG90OworCQl9CisKKwkJLyogR28gYSBsZXZlbCBkb3duICovCisJ CW9mZnNldCA9IChpbmRleCA+PiBzaGlmdCkgJiBSQURJWF9UUkVFX01BUF9NQVNL OworCQlub2RlID0gc2xvdDsKKwkJc2xvdCA9IG5vZGUtPnNsb3RzW29mZnNldF07 CisJCXNoaWZ0IC09IFJBRElYX1RSRUVfTUFQX1NISUZUOworCQloZWlnaHQtLTsK Kwl9IHdoaWxlIChoZWlnaHQgPiAwKTsKKworCWlmIChzbG90ICE9IE5VTEwpCisJ CXJldHVybiAtRUVYSVNUOworCisJQVNTRVJUKG5vZGUpOworCW5vZGUtPmNvdW50 Kys7CisJbm9kZS0+c2xvdHNbb2Zmc2V0XSA9IGl0ZW07CisjaWZkZWYgUkFESVhf VFJFRV9UQUdTCisJQVNTRVJUKCF0YWdfZ2V0KG5vZGUsIDAsIG9mZnNldCkpOwor CUFTU0VSVCghdGFnX2dldChub2RlLCAxLCBvZmZzZXQpKTsKKyNlbmRpZgorCXJl dHVybiAwOworfQorCitzdGF0aWMgaW5saW5lIHZvaWQgKipfX2xvb2t1cF9zbG90 KHN0cnVjdCByYWRpeF90cmVlX3Jvb3QgKnJvb3QsCisJCQkJICAgdW5zaWduZWQg bG9uZyBpbmRleCkKK3sKKwl1bnNpZ25lZCBpbnQgaGVpZ2h0LCBzaGlmdDsKKwlz dHJ1Y3QgcmFkaXhfdHJlZV9ub2RlICoqc2xvdDsKKworCWhlaWdodCA9IHJvb3Qt PmhlaWdodDsKKwlpZiAoaW5kZXggPiByYWRpeF90cmVlX21heGluZGV4KGhlaWdo dCkpCisJCXJldHVybiBOVUxMOworCisJc2hpZnQgPSAoaGVpZ2h0LTEpICogUkFE SVhfVFJFRV9NQVBfU0hJRlQ7CisJc2xvdCA9ICZyb290LT5ybm9kZTsKKworCXdo aWxlIChoZWlnaHQgPiAwKSB7CisJCWlmICgqc2xvdCA9PSBOVUxMKQorCQkJcmV0 dXJuIE5VTEw7CisKKwkJc2xvdCA9IChzdHJ1Y3QgcmFkaXhfdHJlZV9ub2RlICoq KQorCQkJKCgqc2xvdCktPnNsb3RzICsKKwkJCQkoKGluZGV4ID4+IHNoaWZ0KSAm IFJBRElYX1RSRUVfTUFQX01BU0spKTsKKwkJc2hpZnQgLT0gUkFESVhfVFJFRV9N QVBfU0hJRlQ7CisJCWhlaWdodC0tOworCX0KKworCXJldHVybiAodm9pZCAqKilz 
bG90OworfQorCisvKioKKyAqCXJhZGl4X3RyZWVfbG9va3VwX3Nsb3QgICAgLSAg ICBsb29rdXAgYSBzbG90IGluIGEgcmFkaXggdHJlZQorICoJQHJvb3Q6CQlyYWRp eCB0cmVlIHJvb3QKKyAqCUBpbmRleDoJCWluZGV4IGtleQorICoKKyAqCUxvb2t1 cCB0aGUgc2xvdCBjb3JyZXNwb25kaW5nIHRvIHRoZSBwb3NpdGlvbiBAaW5kZXgg aW4gdGhlIHJhZGl4IHRyZWUKKyAqCUByb290LiBUaGlzIGlzIHVzZWZ1bCBmb3Ig dXBkYXRlLWlmLWV4aXN0cyBvcGVyYXRpb25zLgorICovCit2b2lkICoqcmFkaXhf dHJlZV9sb29rdXBfc2xvdChzdHJ1Y3QgcmFkaXhfdHJlZV9yb290ICpyb290LCB1 bnNpZ25lZCBsb25nIGluZGV4KQoreworCXJldHVybiBfX2xvb2t1cF9zbG90KHJv b3QsIGluZGV4KTsKK30KKworLyoqCisgKglyYWRpeF90cmVlX2xvb2t1cCAgICAt ICAgIHBlcmZvcm0gbG9va3VwIG9wZXJhdGlvbiBvbiBhIHJhZGl4IHRyZWUKKyAq CUByb290OgkJcmFkaXggdHJlZSByb290CisgKglAaW5kZXg6CQlpbmRleCBrZXkK KyAqCisgKglMb29rdXAgdGhlIGl0ZW0gYXQgdGhlIHBvc2l0aW9uIEBpbmRleCBp biB0aGUgcmFkaXggdHJlZSBAcm9vdC4KKyAqLwordm9pZCAqcmFkaXhfdHJlZV9s b29rdXAoc3RydWN0IHJhZGl4X3RyZWVfcm9vdCAqcm9vdCwgdW5zaWduZWQgbG9u ZyBpbmRleCkKK3sKKwl2b2lkICoqc2xvdDsKKworCXNsb3QgPSBfX2xvb2t1cF9z bG90KHJvb3QsIGluZGV4KTsKKwlyZXR1cm4gc2xvdCAhPSBOVUxMID8gKnNsb3Qg OiBOVUxMOworfQorCisvKioKKyAqCXJhaWRfdHJlZV9maXJzdF9rZXkgLSBmaW5k IHRoZSBmaXJzdCBpbmRleCBrZXkgaW4gdGhlIHJhZGl4IHRyZWUKKyAqCUByb290 OgkJcmFkaXggdHJlZSByb290CisgKglAaW5kZXg6CQl3aGVyZSB0aGUgZmlyc3Qg aW5kZXggd2lsbCBiZSBwbGFjZWQKKyAqCisgKglSZXR1cm5zIHRoZSBmaXJzdCBl bnRyeSBhbmQgaW5kZXgga2V5IGluIHRoZSByYWRpeCB0cmVlIEByb290LgorICov Cit2b2lkICpyYWRpeF90cmVlX2xvb2t1cF9maXJzdChzdHJ1Y3QgcmFkaXhfdHJl ZV9yb290ICpyb290LCB1bnNpZ25lZCBsb25nICppbmRleCkKK3sKKwl1bnNpZ25l ZCBpbnQgaGVpZ2h0LCBzaGlmdDsKKwlzdHJ1Y3QgcmFkaXhfdHJlZV9ub2RlICpz bG90OworCXVuc2lnbmVkIGxvbmcgaTsKKworCWhlaWdodCA9IHJvb3QtPmhlaWdo dDsKKwkqaW5kZXggPSAwOworCWlmIChoZWlnaHQgPT0gMCkKKwkJcmV0dXJuIE5V TEw7CisKKwlzaGlmdCA9IChoZWlnaHQtMSkgKiBSQURJWF9UUkVFX01BUF9TSElG VDsKKwlzbG90ID0gcm9vdC0+cm5vZGU7CisKKwlmb3IgKDsgaGVpZ2h0ID4gMTsg aGVpZ2h0LS0pIHsKKwkJZm9yIChpID0gMDsgaSA8IFJBRElYX1RSRUVfTUFQX1NJ WkU7IGkrKykgeworCQkJaWYgKHNsb3QtPnNsb3RzW2ldICE9IE5VTEwpCisJCQkJ 
YnJlYWs7CisJCX0KKwkJQVNTRVJUKGkgPCBSQURJWF9UUkVFX01BUF9TSVpFKTsK KworCQkqaW5kZXggfD0gKGkgPDwgc2hpZnQpOworCQlzaGlmdCAtPSBSQURJWF9U UkVFX01BUF9TSElGVDsKKwkJc2xvdCA9IHNsb3QtPnNsb3RzW2ldOworCX0KKwlm b3IgKGkgPSAwOyBpIDwgUkFESVhfVFJFRV9NQVBfU0laRTsgaSsrKSB7CisJCWlm IChzbG90LT5zbG90c1tpXSAhPSBOVUxMKSB7CisJCQkqaW5kZXggfD0gaTsKKwkJ CXJldHVybiBzbG90LT5zbG90c1tpXTsKKwkJfQorCX0KKwlyZXR1cm4gTlVMTDsK K30KKworI2lmZGVmIFJBRElYX1RSRUVfVEFHUworCisvKioKKyAqCXJhZGl4X3Ry ZWVfdGFnX3NldCAtIHNldCBhIHRhZyBvbiBhIHJhZGl4IHRyZWUgbm9kZQorICoJ QHJvb3Q6CQlyYWRpeCB0cmVlIHJvb3QKKyAqCUBpbmRleDoJCWluZGV4IGtleQor ICoJQHRhZzogCQl0YWcgaW5kZXgKKyAqCisgKglTZXQgdGhlIHNlYXJjaCB0YWcg KHdoaWNoIG11c3QgYmUgPCBSQURJWF9UUkVFX01BWF9UQUdTKQorICoJY29ycmVz cG9uZGluZyB0byBAaW5kZXggaW4gdGhlIHJhZGl4IHRyZWUuICBGcm9tCisgKgl0 aGUgcm9vdCBhbGwgdGhlIHdheSBkb3duIHRvIHRoZSBsZWFmIG5vZGUuCisgKgor ICoJUmV0dXJucyB0aGUgYWRkcmVzcyBvZiB0aGUgdGFnZ2VkIGl0ZW0uICAgU2V0 dGluZyBhIHRhZyBvbiBhIG5vdC1wcmVzZW50CisgKglpdGVtIGlzIGEgYnVnLgor ICovCit2b2lkICpyYWRpeF90cmVlX3RhZ19zZXQoc3RydWN0IHJhZGl4X3RyZWVf cm9vdCAqcm9vdCwKKwkJCXVuc2lnbmVkIGxvbmcgaW5kZXgsIHVuc2lnbmVkIGlu dCB0YWcpCit7CisJdW5zaWduZWQgaW50IGhlaWdodCwgc2hpZnQ7CisJc3RydWN0 IHJhZGl4X3RyZWVfbm9kZSAqc2xvdDsKKworCWhlaWdodCA9IHJvb3QtPmhlaWdo dDsKKwlpZiAoaW5kZXggPiByYWRpeF90cmVlX21heGluZGV4KGhlaWdodCkpCisJ CXJldHVybiBOVUxMOworCisJc2hpZnQgPSAoaGVpZ2h0IC0gMSkgKiBSQURJWF9U UkVFX01BUF9TSElGVDsKKwlzbG90ID0gcm9vdC0+cm5vZGU7CisKKwl3aGlsZSAo aGVpZ2h0ID4gMCkgeworCQlpbnQgb2Zmc2V0OworCisJCW9mZnNldCA9IChpbmRl eCA+PiBzaGlmdCkgJiBSQURJWF9UUkVFX01BUF9NQVNLOworCQlpZiAoIXRhZ19n ZXQoc2xvdCwgdGFnLCBvZmZzZXQpKQorCQkJdGFnX3NldChzbG90LCB0YWcsIG9m ZnNldCk7CisJCXNsb3QgPSBzbG90LT5zbG90c1tvZmZzZXRdOworCQlBU1NFUlQo c2xvdCAhPSBOVUxMKTsKKwkJc2hpZnQgLT0gUkFESVhfVFJFRV9NQVBfU0hJRlQ7 CisJCWhlaWdodC0tOworCX0KKworCXJldHVybiBzbG90OworfQorCisvKioKKyAq CXJhZGl4X3RyZWVfdGFnX2NsZWFyIC0gY2xlYXIgYSB0YWcgb24gYSByYWRpeCB0 cmVlIG5vZGUKKyAqCUByb290OgkJcmFkaXggdHJlZSByb290CisgKglAaW5kZXg6 
CQlpbmRleCBrZXkKKyAqCUB0YWc6IAkJdGFnIGluZGV4CisgKgorICoJQ2xlYXIg dGhlIHNlYXJjaCB0YWcgKHdoaWNoIG11c3QgYmUgPCBSQURJWF9UUkVFX01BWF9U QUdTKQorICoJY29ycmVzcG9uZGluZyB0byBAaW5kZXggaW4gdGhlIHJhZGl4IHRy ZWUuICBJZgorICoJdGhpcyBjYXVzZXMgdGhlIGxlYWYgbm9kZSB0byBoYXZlIG5v IHRhZ3Mgc2V0IHRoZW4gY2xlYXIgdGhlIHRhZyBpbiB0aGUKKyAqCW5leHQtdG8t bGVhZiBub2RlLCBldGMuCisgKgorICoJUmV0dXJucyB0aGUgYWRkcmVzcyBvZiB0 aGUgdGFnZ2VkIGl0ZW0gb24gc3VjY2VzcywgZWxzZSBOVUxMLiAgaWU6CisgKglo YXMgdGhlIHNhbWUgcmV0dXJuIHZhbHVlIGFuZCBzZW1hbnRpY3MgYXMgcmFkaXhf dHJlZV9sb29rdXAoKS4KKyAqLwordm9pZCAqcmFkaXhfdHJlZV90YWdfY2xlYXIo c3RydWN0IHJhZGl4X3RyZWVfcm9vdCAqcm9vdCwKKwkJCXVuc2lnbmVkIGxvbmcg aW5kZXgsIHVuc2lnbmVkIGludCB0YWcpCit7CisJc3RydWN0IHJhZGl4X3RyZWVf cGF0aCBwYXRoW1JBRElYX1RSRUVfTUFYX1BBVEhdLCAqcGF0aHAgPSBwYXRoOwor CXN0cnVjdCByYWRpeF90cmVlX25vZGUgKnNsb3Q7CisJdW5zaWduZWQgaW50IGhl aWdodCwgc2hpZnQ7CisJdm9pZCAqcmV0ID0gTlVMTDsKKworCWhlaWdodCA9IHJv b3QtPmhlaWdodDsKKwlpZiAoaW5kZXggPiByYWRpeF90cmVlX21heGluZGV4KGhl aWdodCkpCisJCWdvdG8gb3V0OworCisJc2hpZnQgPSAoaGVpZ2h0IC0gMSkgKiBS QURJWF9UUkVFX01BUF9TSElGVDsKKwlwYXRocC0+bm9kZSA9IE5VTEw7CisJc2xv dCA9IHJvb3QtPnJub2RlOworCisJd2hpbGUgKGhlaWdodCA+IDApIHsKKwkJaW50 IG9mZnNldDsKKworCQlpZiAoc2xvdCA9PSBOVUxMKQorCQkJZ290byBvdXQ7CisK KwkJb2Zmc2V0ID0gKGluZGV4ID4+IHNoaWZ0KSAmIFJBRElYX1RSRUVfTUFQX01B U0s7CisJCXBhdGhwWzFdLm9mZnNldCA9IG9mZnNldDsKKwkJcGF0aHBbMV0ubm9k ZSA9IHNsb3Q7CisJCXNsb3QgPSBzbG90LT5zbG90c1tvZmZzZXRdOworCQlwYXRo cCsrOworCQlzaGlmdCAtPSBSQURJWF9UUkVFX01BUF9TSElGVDsKKwkJaGVpZ2h0 LS07CisJfQorCisJcmV0ID0gc2xvdDsKKwlpZiAocmV0ID09IE5VTEwpCisJCWdv dG8gb3V0OworCisJZG8geworCQlpZiAoIXRhZ19nZXQocGF0aHAtPm5vZGUsIHRh ZywgcGF0aHAtPm9mZnNldCkpCisJCQlnb3RvIG91dDsKKwkJdGFnX2NsZWFyKHBh dGhwLT5ub2RlLCB0YWcsIHBhdGhwLT5vZmZzZXQpOworCQlpZiAoYW55X3RhZ19z ZXQocGF0aHAtPm5vZGUsIHRhZykpCisJCQlnb3RvIG91dDsKKwkJcGF0aHAtLTsK Kwl9IHdoaWxlIChwYXRocC0+bm9kZSk7CitvdXQ6CisJcmV0dXJuIHJldDsKK30K KworI2VuZGlmCisKK3N0YXRpYyB1bnNpZ25lZCBpbnQKK19fbG9va3VwKHN0cnVj 
dCByYWRpeF90cmVlX3Jvb3QgKnJvb3QsIHZvaWQgKipyZXN1bHRzLCB1bnNpZ25l ZCBsb25nIGluZGV4LAorCXVuc2lnbmVkIGludCBtYXhfaXRlbXMsIHVuc2lnbmVk IGxvbmcgKm5leHRfaW5kZXgpCit7CisJdW5zaWduZWQgaW50IG5yX2ZvdW5kID0g MDsKKwl1bnNpZ25lZCBpbnQgc2hpZnQsIGhlaWdodDsKKwlzdHJ1Y3QgcmFkaXhf dHJlZV9ub2RlICpzbG90OworCXVuc2lnbmVkIGxvbmcgaTsKKworCWhlaWdodCA9 IHJvb3QtPmhlaWdodDsKKwlpZiAoaGVpZ2h0ID09IDApCisJCWdvdG8gb3V0Owor CisJc2hpZnQgPSAoaGVpZ2h0LTEpICogUkFESVhfVFJFRV9NQVBfU0hJRlQ7CisJ c2xvdCA9IHJvb3QtPnJub2RlOworCisJZm9yICggOyBoZWlnaHQgPiAxOyBoZWln aHQtLSkgeworCisJCWZvciAoaSA9IChpbmRleCA+PiBzaGlmdCkgJiBSQURJWF9U UkVFX01BUF9NQVNLIDsKKwkJCQlpIDwgUkFESVhfVFJFRV9NQVBfU0laRTsgaSsr KSB7CisJCQlpZiAoc2xvdC0+c2xvdHNbaV0gIT0gTlVMTCkKKwkJCQlicmVhazsK KwkJCWluZGV4ICY9IH4oKDFVTCA8PCBzaGlmdCkgLSAxKTsKKwkJCWluZGV4ICs9 IDFVTCA8PCBzaGlmdDsKKwkJCWlmIChpbmRleCA9PSAwKQorCQkJCWdvdG8gb3V0 OwkvKiAzMi1iaXQgd3JhcGFyb3VuZCAqLworCQl9CisJCWlmIChpID09IFJBRElY X1RSRUVfTUFQX1NJWkUpCisJCQlnb3RvIG91dDsKKworCQlzaGlmdCAtPSBSQURJ WF9UUkVFX01BUF9TSElGVDsKKwkJc2xvdCA9IHNsb3QtPnNsb3RzW2ldOworCX0K KworCS8qIEJvdHRvbSBsZXZlbDogZ3JhYiBzb21lIGl0ZW1zICovCisJZm9yIChp ID0gaW5kZXggJiBSQURJWF9UUkVFX01BUF9NQVNLOyBpIDwgUkFESVhfVFJFRV9N QVBfU0laRTsgaSsrKSB7CisJCWluZGV4Kys7CisJCWlmIChzbG90LT5zbG90c1tp XSkgeworCQkJcmVzdWx0c1tucl9mb3VuZCsrXSA9IHNsb3QtPnNsb3RzW2ldOwor CQkJaWYgKG5yX2ZvdW5kID09IG1heF9pdGVtcykKKwkJCQlnb3RvIG91dDsKKwkJ fQorCX0KK291dDoKKwkqbmV4dF9pbmRleCA9IGluZGV4OworCXJldHVybiBucl9m b3VuZDsKK30KKworLyoqCisgKglyYWRpeF90cmVlX2dhbmdfbG9va3VwIC0gcGVy Zm9ybSBtdWx0aXBsZSBsb29rdXAgb24gYSByYWRpeCB0cmVlCisgKglAcm9vdDoJ CXJhZGl4IHRyZWUgcm9vdAorICoJQHJlc3VsdHM6CXdoZXJlIHRoZSByZXN1bHRz IG9mIHRoZSBsb29rdXAgYXJlIHBsYWNlZAorICoJQGZpcnN0X2luZGV4OglzdGFy dCB0aGUgbG9va3VwIGZyb20gdGhpcyBrZXkKKyAqCUBtYXhfaXRlbXM6CXBsYWNl IHVwIHRvIHRoaXMgbWFueSBpdGVtcyBhdCAqcmVzdWx0cworICoKKyAqCVBlcmZv cm1zIGFuIGluZGV4LWFzY2VuZGluZyBzY2FuIG9mIHRoZSB0cmVlIGZvciBwcmVz ZW50IGl0ZW1zLiAgUGxhY2VzCisgKgl0aGVtIGF0ICpAcmVzdWx0cyBhbmQgcmV0 
dXJucyB0aGUgbnVtYmVyIG9mIGl0ZW1zIHdoaWNoIHdlcmUgcGxhY2VkIGF0Cisg KgkqQHJlc3VsdHMuCisgKgorICoJVGhlIGltcGxlbWVudGF0aW9uIGlzIG5haXZl LgorICovCit1bnNpZ25lZCBpbnQKK3JhZGl4X3RyZWVfZ2FuZ19sb29rdXAoc3Ry dWN0IHJhZGl4X3RyZWVfcm9vdCAqcm9vdCwgdm9pZCAqKnJlc3VsdHMsCisJCQl1 bnNpZ25lZCBsb25nIGZpcnN0X2luZGV4LCB1bnNpZ25lZCBpbnQgbWF4X2l0ZW1z KQoreworCWNvbnN0IHVuc2lnbmVkIGxvbmcgbWF4X2luZGV4ID0gcmFkaXhfdHJl ZV9tYXhpbmRleChyb290LT5oZWlnaHQpOworCXVuc2lnbmVkIGxvbmcgY3VyX2lu ZGV4ID0gZmlyc3RfaW5kZXg7CisJdW5zaWduZWQgaW50IHJldCA9IDA7CisKKwl3 aGlsZSAocmV0IDwgbWF4X2l0ZW1zKSB7CisJCXVuc2lnbmVkIGludCBucl9mb3Vu ZDsKKwkJdW5zaWduZWQgbG9uZyBuZXh0X2luZGV4OwkvKiBJbmRleCBvZiBuZXh0 IHNlYXJjaCAqLworCisJCWlmIChjdXJfaW5kZXggPiBtYXhfaW5kZXgpCisJCQli cmVhazsKKwkJbnJfZm91bmQgPSBfX2xvb2t1cChyb290LCByZXN1bHRzICsgcmV0 LCBjdXJfaW5kZXgsCisJCQkJCW1heF9pdGVtcyAtIHJldCwgJm5leHRfaW5kZXgp OworCQlyZXQgKz0gbnJfZm91bmQ7CisJCWlmIChuZXh0X2luZGV4ID09IDApCisJ CQlicmVhazsKKwkJY3VyX2luZGV4ID0gbmV4dF9pbmRleDsKKwl9CisJcmV0dXJu IHJldDsKK30KKworLyoqCisgKglyYWRpeF90cmVlX2dhbmdfbG9va3VwX2V4IC0g cGVyZm9ybSBtdWx0aXBsZSBsb29rdXAgb24gYSByYWRpeCB0cmVlCisgKglAcm9v dDoJCXJhZGl4IHRyZWUgcm9vdAorICoJQHJlc3VsdHM6CXdoZXJlIHRoZSByZXN1 bHRzIG9mIHRoZSBsb29rdXAgYXJlIHBsYWNlZAorICoJQGZpcnN0X2luZGV4Oglz dGFydCB0aGUgbG9va3VwIGZyb20gdGhpcyBrZXkKKyAqCUBsYXN0X2luZGV4Oglk b24ndCBsb29rdXAgcGFzdCB0aGlzIGtleQorICoJQG1heF9pdGVtczoJcGxhY2Ug dXAgdG8gdGhpcyBtYW55IGl0ZW1zIGF0ICpyZXN1bHRzCisgKgorICoJUGVyZm9y bXMgYW4gaW5kZXgtYXNjZW5kaW5nIHNjYW4gb2YgdGhlIHRyZWUgZm9yIHByZXNl bnQgaXRlbXMgc3RhcnRpbmcKKyAqCUBmaXJzdF9pbmRleCB1bnRpbCBAbGFzdF9p bmRleCB1cCB0byBhcyBtYW55IGFzIEBtYXhfaXRlbXMuICBQbGFjZXMKKyAqCXRo ZW0gYXQgKkByZXN1bHRzIGFuZCByZXR1cm5zIHRoZSBudW1iZXIgb2YgaXRlbXMg d2hpY2ggd2VyZSBwbGFjZWQKKyAqCWF0ICpAcmVzdWx0cy4KKyAqCisgKglUaGUg aW1wbGVtZW50YXRpb24gaXMgbmFpdmUuCisgKi8KK3Vuc2lnbmVkIGludAorcmFk aXhfdHJlZV9nYW5nX2xvb2t1cF9leChzdHJ1Y3QgcmFkaXhfdHJlZV9yb290ICpy b290LCB2b2lkICoqcmVzdWx0cywKKwkJCXVuc2lnbmVkIGxvbmcgZmlyc3RfaW5k 
ZXgsIHVuc2lnbmVkIGxvbmcgbGFzdF9pbmRleCwKKwkJCXVuc2lnbmVkIGludCBt YXhfaXRlbXMpCit7CisJY29uc3QgdW5zaWduZWQgbG9uZyBtYXhfaW5kZXggPSBy YWRpeF90cmVlX21heGluZGV4KHJvb3QtPmhlaWdodCk7CisJdW5zaWduZWQgbG9u ZyBjdXJfaW5kZXggPSBmaXJzdF9pbmRleDsKKwl1bnNpZ25lZCBpbnQgcmV0ID0g MDsKKworCXdoaWxlIChyZXQgPCBtYXhfaXRlbXMgJiYgY3VyX2luZGV4IDwgbGFz dF9pbmRleCkgeworCQl1bnNpZ25lZCBpbnQgbnJfZm91bmQ7CisJCXVuc2lnbmVk IGxvbmcgbmV4dF9pbmRleDsJLyogSW5kZXggb2YgbmV4dCBzZWFyY2ggKi8KKwor CQlpZiAoY3VyX2luZGV4ID4gbWF4X2luZGV4KQorCQkJYnJlYWs7CisJCW5yX2Zv dW5kID0gX19sb29rdXAocm9vdCwgcmVzdWx0cyArIHJldCwgY3VyX2luZGV4LAor CQkJCQltYXhfaXRlbXMgLSByZXQsICZuZXh0X2luZGV4KTsKKwkJcmV0ICs9IG5y X2ZvdW5kOworCQlpZiAobmV4dF9pbmRleCA9PSAwKQorCQkJYnJlYWs7CisJCWN1 cl9pbmRleCA9IG5leHRfaW5kZXg7CisJfQorCXJldHVybiByZXQ7Cit9CisKKyNp ZmRlZiBSQURJWF9UUkVFX1RBR1MKKworc3RhdGljIHVuc2lnbmVkIGludAorX19s b29rdXBfdGFnKHN0cnVjdCByYWRpeF90cmVlX3Jvb3QgKnJvb3QsIHZvaWQgKipy ZXN1bHRzLCB1bnNpZ25lZCBsb25nIGluZGV4LAorCXVuc2lnbmVkIGludCBtYXhf aXRlbXMsIHVuc2lnbmVkIGxvbmcgKm5leHRfaW5kZXgsIHVuc2lnbmVkIGludCB0 YWcpCit7CisJdW5zaWduZWQgaW50IG5yX2ZvdW5kID0gMDsKKwl1bnNpZ25lZCBp bnQgc2hpZnQ7CisJdW5zaWduZWQgaW50IGhlaWdodCA9IHJvb3QtPmhlaWdodDsK KwlzdHJ1Y3QgcmFkaXhfdHJlZV9ub2RlICpzbG90OworCisJc2hpZnQgPSAoaGVp Z2h0IC0gMSkgKiBSQURJWF9UUkVFX01BUF9TSElGVDsKKwlzbG90ID0gcm9vdC0+ cm5vZGU7CisKKwl3aGlsZSAoaGVpZ2h0ID4gMCkgeworCQl1bnNpZ25lZCBsb25n IGkgPSAoaW5kZXggPj4gc2hpZnQpICYgUkFESVhfVFJFRV9NQVBfTUFTSzsKKwor CQlmb3IgKCA7IGkgPCBSQURJWF9UUkVFX01BUF9TSVpFOyBpKyspIHsKKwkJCWlm ICh0YWdfZ2V0KHNsb3QsIHRhZywgaSkpIHsKKwkJCQlBU1NFUlQoc2xvdC0+c2xv dHNbaV0gIT0gTlVMTCk7CisJCQkJYnJlYWs7CisJCQl9CisJCQlpbmRleCAmPSB+ KCgxVUwgPDwgc2hpZnQpIC0gMSk7CisJCQlpbmRleCArPSAxVUwgPDwgc2hpZnQ7 CisJCQlpZiAoaW5kZXggPT0gMCkKKwkJCQlnb3RvIG91dDsJLyogMzItYml0IHdy YXBhcm91bmQgKi8KKwkJfQorCQlpZiAoaSA9PSBSQURJWF9UUkVFX01BUF9TSVpF KQorCQkJZ290byBvdXQ7CisJCWhlaWdodC0tOworCQlpZiAoaGVpZ2h0ID09IDAp IHsJLyogQm90dG9tIGxldmVsOiBncmFiIHNvbWUgaXRlbXMgKi8KKwkJCXVuc2ln 
bmVkIGxvbmcgaiA9IGluZGV4ICYgUkFESVhfVFJFRV9NQVBfTUFTSzsKKworCQkJ Zm9yICggOyBqIDwgUkFESVhfVFJFRV9NQVBfU0laRTsgaisrKSB7CisJCQkJaW5k ZXgrKzsKKwkJCQlpZiAodGFnX2dldChzbG90LCB0YWcsIGopKSB7CisJCQkJCUFT U0VSVChzbG90LT5zbG90c1tqXSAhPSBOVUxMKTsKKwkJCQkJcmVzdWx0c1tucl9m b3VuZCsrXSA9IHNsb3QtPnNsb3RzW2pdOworCQkJCQlpZiAobnJfZm91bmQgPT0g bWF4X2l0ZW1zKQorCQkJCQkJZ290byBvdXQ7CisJCQkJfQorCQkJfQorCQl9CisJ CXNoaWZ0IC09IFJBRElYX1RSRUVfTUFQX1NISUZUOworCQlzbG90ID0gc2xvdC0+ c2xvdHNbaV07CisJfQorb3V0OgorCSpuZXh0X2luZGV4ID0gaW5kZXg7CisJcmV0 dXJuIG5yX2ZvdW5kOworfQorCisvKioKKyAqCXJhZGl4X3RyZWVfZ2FuZ19sb29r dXBfdGFnIC0gcGVyZm9ybSBtdWx0aXBsZSBsb29rdXAgb24gYSByYWRpeCB0cmVl CisgKgkgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGJhc2VkIG9uIGEgdGFn CisgKglAcm9vdDoJCXJhZGl4IHRyZWUgcm9vdAorICoJQHJlc3VsdHM6CXdoZXJl IHRoZSByZXN1bHRzIG9mIHRoZSBsb29rdXAgYXJlIHBsYWNlZAorICoJQGZpcnN0 X2luZGV4OglzdGFydCB0aGUgbG9va3VwIGZyb20gdGhpcyBrZXkKKyAqCUBtYXhf aXRlbXM6CXBsYWNlIHVwIHRvIHRoaXMgbWFueSBpdGVtcyBhdCAqcmVzdWx0cwor ICoJQHRhZzoJCXRoZSB0YWcgaW5kZXggKDwgUkFESVhfVFJFRV9NQVhfVEFHUykK KyAqCisgKglQZXJmb3JtcyBhbiBpbmRleC1hc2NlbmRpbmcgc2NhbiBvZiB0aGUg dHJlZSBmb3IgcHJlc2VudCBpdGVtcyB3aGljaAorICoJaGF2ZSB0aGUgdGFnIGlu ZGV4ZWQgYnkgQHRhZyBzZXQuICBQbGFjZXMgdGhlIGl0ZW1zIGF0ICpAcmVzdWx0 cyBhbmQKKyAqCXJldHVybnMgdGhlIG51bWJlciBvZiBpdGVtcyB3aGljaCB3ZXJl IHBsYWNlZCBhdCAqQHJlc3VsdHMuCisgKi8KK3Vuc2lnbmVkIGludAorcmFkaXhf dHJlZV9nYW5nX2xvb2t1cF90YWcoc3RydWN0IHJhZGl4X3RyZWVfcm9vdCAqcm9v dCwgdm9pZCAqKnJlc3VsdHMsCisJCXVuc2lnbmVkIGxvbmcgZmlyc3RfaW5kZXgs IHVuc2lnbmVkIGludCBtYXhfaXRlbXMsCisJCXVuc2lnbmVkIGludCB0YWcpCit7 CisJY29uc3QgdW5zaWduZWQgbG9uZyBtYXhfaW5kZXggPSByYWRpeF90cmVlX21h eGluZGV4KHJvb3QtPmhlaWdodCk7CisJdW5zaWduZWQgbG9uZyBjdXJfaW5kZXgg PSBmaXJzdF9pbmRleDsKKwl1bnNpZ25lZCBpbnQgcmV0ID0gMDsKKworCXdoaWxl IChyZXQgPCBtYXhfaXRlbXMpIHsKKwkJdW5zaWduZWQgaW50IG5yX2ZvdW5kOwor CQl1bnNpZ25lZCBsb25nIG5leHRfaW5kZXg7CS8qIEluZGV4IG9mIG5leHQgc2Vh cmNoICovCisKKwkJaWYgKGN1cl9pbmRleCA+IG1heF9pbmRleCkKKwkJCWJyZWFr 
OworCQlucl9mb3VuZCA9IF9fbG9va3VwX3RhZyhyb290LCByZXN1bHRzICsgcmV0 LCBjdXJfaW5kZXgsCisJCQkJCW1heF9pdGVtcyAtIHJldCwgJm5leHRfaW5kZXgs IHRhZyk7CisJCXJldCArPSBucl9mb3VuZDsKKwkJaWYgKG5leHRfaW5kZXggPT0g MCkKKwkJCWJyZWFrOworCQljdXJfaW5kZXggPSBuZXh0X2luZGV4OworCX0KKwly ZXR1cm4gcmV0OworfQorCisjZW5kaWYKKworLyoqCisgKglyYWRpeF90cmVlX3No cmluayAgICAtICAgIHNocmluayBoZWlnaHQgb2YgYSByYWRpeCB0cmVlIHRvIG1p bmltYWwKKyAqCUByb290CQlyYWRpeCB0cmVlIHJvb3QKKyAqLworc3RhdGljIGlu bGluZSB2b2lkIHJhZGl4X3RyZWVfc2hyaW5rKHN0cnVjdCByYWRpeF90cmVlX3Jv b3QgKnJvb3QpCit7CisJLyogdHJ5IHRvIHNocmluayB0cmVlIGhlaWdodCAqLwor CXdoaWxlIChyb290LT5oZWlnaHQgPiAxICYmCisJCQlyb290LT5ybm9kZS0+Y291 bnQgPT0gMSAmJgorCQkJcm9vdC0+cm5vZGUtPnNsb3RzWzBdKSB7CisJCXN0cnVj dCByYWRpeF90cmVlX25vZGUgKnRvX2ZyZWUgPSByb290LT5ybm9kZTsKKworCQly b290LT5ybm9kZSA9IHRvX2ZyZWUtPnNsb3RzWzBdOworCQlyb290LT5oZWlnaHQt LTsKKwkJLyogbXVzdCBvbmx5IGZyZWUgemVyb2VkIG5vZGVzIGludG8gdGhlIHNs YWIgKi8KKyNpZmRlZiBSQURJWF9UUkVFX1RBR1MKKwkJdGFnX2NsZWFyKHRvX2Zy ZWUsIDAsIDApOworCQl0YWdfY2xlYXIodG9fZnJlZSwgMSwgMCk7CisjZW5kaWYK KwkJdG9fZnJlZS0+c2xvdHNbMF0gPSBOVUxMOworCQl0b19mcmVlLT5jb3VudCA9 IDA7CisJCXJhZGl4X3RyZWVfbm9kZV9mcmVlKHRvX2ZyZWUpOworCX0KK30KKwor LyoqCisgKglyYWRpeF90cmVlX2RlbGV0ZSAgICAtICAgIGRlbGV0ZSBhbiBpdGVt IGZyb20gYSByYWRpeCB0cmVlCisgKglAcm9vdDoJCXJhZGl4IHRyZWUgcm9vdAor ICoJQGluZGV4OgkJaW5kZXgga2V5CisgKgorICoJUmVtb3ZlIHRoZSBpdGVtIGF0 IEBpbmRleCBmcm9tIHRoZSByYWRpeCB0cmVlIHJvb3RlZCBhdCBAcm9vdC4KKyAq CisgKglSZXR1cm5zIHRoZSBhZGRyZXNzIG9mIHRoZSBkZWxldGVkIGl0ZW0sIG9y IE5VTEwgaWYgaXQgd2FzIG5vdCBwcmVzZW50LgorICovCit2b2lkICpyYWRpeF90 cmVlX2RlbGV0ZShzdHJ1Y3QgcmFkaXhfdHJlZV9yb290ICpyb290LCB1bnNpZ25l ZCBsb25nIGluZGV4KQoreworCXN0cnVjdCByYWRpeF90cmVlX3BhdGggcGF0aFtS QURJWF9UUkVFX01BWF9QQVRIXSwgKnBhdGhwID0gcGF0aDsKKwlzdHJ1Y3QgcmFk aXhfdHJlZV9wYXRoICpvcmlnX3BhdGhwOworCXN0cnVjdCByYWRpeF90cmVlX25v ZGUgKnNsb3Q7CisJdW5zaWduZWQgaW50IGhlaWdodCwgc2hpZnQ7CisJdm9pZCAq cmV0ID0gTlVMTDsKKyNpZmRlZiBSQURJWF9UUkVFX1RBR1MKKwljaGFyIHRhZ3Nb 
UkFESVhfVFJFRV9NQVhfVEFHU107CisJaW50IG5yX2NsZWFyZWRfdGFnczsKKwlp bnQgdGFnOworI2VuZGlmCisJaW50IG9mZnNldDsKKworCWhlaWdodCA9IHJvb3Qt PmhlaWdodDsKKwlpZiAoaW5kZXggPiByYWRpeF90cmVlX21heGluZGV4KGhlaWdo dCkpCisJCWdvdG8gb3V0OworCisJc2hpZnQgPSAoaGVpZ2h0IC0gMSkgKiBSQURJ WF9UUkVFX01BUF9TSElGVDsKKwlwYXRocC0+bm9kZSA9IE5VTEw7CisJc2xvdCA9 IHJvb3QtPnJub2RlOworCisJZm9yICggOyBoZWlnaHQgPiAwOyBoZWlnaHQtLSkg eworCQlpZiAoc2xvdCA9PSBOVUxMKQorCQkJZ290byBvdXQ7CisKKwkJcGF0aHAr KzsKKwkJb2Zmc2V0ID0gKGluZGV4ID4+IHNoaWZ0KSAmIFJBRElYX1RSRUVfTUFQ X01BU0s7CisJCXBhdGhwLT5vZmZzZXQgPSBvZmZzZXQ7CisJCXBhdGhwLT5ub2Rl ID0gc2xvdDsKKwkJc2xvdCA9IHNsb3QtPnNsb3RzW29mZnNldF07CisJCXNoaWZ0 IC09IFJBRElYX1RSRUVfTUFQX1NISUZUOworCX0KKworCXJldCA9IHNsb3Q7CisJ aWYgKHJldCA9PSBOVUxMKQorCQlnb3RvIG91dDsKKworCW9yaWdfcGF0aHAgPSBw YXRocDsKKworI2lmZGVmIFJBRElYX1RSRUVfVEFHUworCS8qCisJICogQ2xlYXIg YWxsIHRhZ3MgYXNzb2NpYXRlZCB3aXRoIHRoZSBqdXN0LWRlbGV0ZWQgaXRlbQor CSAqLworCW5yX2NsZWFyZWRfdGFncyA9IDA7CisJZm9yICh0YWcgPSAwOyB0YWcg PCBSQURJWF9UUkVFX01BWF9UQUdTOyB0YWcrKykgeworCQl0YWdzW3RhZ10gPSAx OworCQlpZiAodGFnX2dldChwYXRocC0+bm9kZSwgdGFnLCBwYXRocC0+b2Zmc2V0 KSkgeworCQkJdGFnX2NsZWFyKHBhdGhwLT5ub2RlLCB0YWcsIHBhdGhwLT5vZmZz ZXQpOworCQkJaWYgKCFhbnlfdGFnX3NldChwYXRocC0+bm9kZSwgdGFnKSkgewor CQkJCXRhZ3NbdGFnXSA9IDA7CisJCQkJbnJfY2xlYXJlZF90YWdzKys7CisJCQl9 CisJCX0KKwl9CisKKwlmb3IgKHBhdGhwLS07IG5yX2NsZWFyZWRfdGFncyAmJiBw YXRocC0+bm9kZTsgcGF0aHAtLSkgeworCQlmb3IgKHRhZyA9IDA7IHRhZyA8IFJB RElYX1RSRUVfTUFYX1RBR1M7IHRhZysrKSB7CisJCQlpZiAodGFnc1t0YWddKQor CQkJCWNvbnRpbnVlOworCisJCQl0YWdfY2xlYXIocGF0aHAtPm5vZGUsIHRhZywg cGF0aHAtPm9mZnNldCk7CisJCQlpZiAoYW55X3RhZ19zZXQocGF0aHAtPm5vZGUs IHRhZykpIHsKKwkJCQl0YWdzW3RhZ10gPSAxOworCQkJCW5yX2NsZWFyZWRfdGFn cy0tOworCQkJfQorCQl9CisJfQorI2VuZGlmCisJLyogTm93IGZyZWUgdGhlIG5v ZGVzIHdlIGRvIG5vdCBuZWVkIGFueW1vcmUgKi8KKwlmb3IgKHBhdGhwID0gb3Jp Z19wYXRocDsgcGF0aHAtPm5vZGU7IHBhdGhwLS0pIHsKKwkJcGF0aHAtPm5vZGUt PnNsb3RzW3BhdGhwLT5vZmZzZXRdID0gTlVMTDsKKwkJcGF0aHAtPm5vZGUtPmNv 
dW50LS07CisKKwkJaWYgKHBhdGhwLT5ub2RlLT5jb3VudCkgeworCQkJaWYgKHBh dGhwLT5ub2RlID09IHJvb3QtPnJub2RlKQorCQkJCXJhZGl4X3RyZWVfc2hyaW5r KHJvb3QpOworCQkJZ290byBvdXQ7CisJCX0KKworCQkvKiBOb2RlIHdpdGggemVy byBzbG90cyBpbiB1c2Ugc28gZnJlZSBpdCAqLworCQlyYWRpeF90cmVlX25vZGVf ZnJlZShwYXRocC0+bm9kZSk7CisJfQorCXJvb3QtPnJub2RlID0gTlVMTDsKKwly b290LT5oZWlnaHQgPSAwOworb3V0OgorCXJldHVybiByZXQ7Cit9CisKKyNpZmRl ZiBSQURJWF9UUkVFX1RBR1MKKy8qKgorICoJcmFkaXhfdHJlZV90YWdnZWQgLSB0 ZXN0IHdoZXRoZXIgYW55IGl0ZW1zIGluIHRoZSB0cmVlIGFyZSB0YWdnZWQKKyAq CUByb290OgkJcmFkaXggdHJlZSByb290CisgKglAdGFnOgkJdGFnIHRvIHRlc3QK KyAqLworaW50IHJhZGl4X3RyZWVfdGFnZ2VkKHN0cnVjdCByYWRpeF90cmVlX3Jv b3QgKnJvb3QsIHVuc2lnbmVkIGludCB0YWcpCit7CisgIAlzdHJ1Y3QgcmFkaXhf dHJlZV9ub2RlICpybm9kZTsKKyAgCXJub2RlID0gcm9vdC0+cm5vZGU7CisgIAlp ZiAoIXJub2RlKQorICAJCXJldHVybiAwOworCXJldHVybiBhbnlfdGFnX3NldChy bm9kZSwgdGFnKTsKK30KKyNlbmRpZgorCitzdGF0aWMgdW5zaWduZWQgbG9uZyBf X21heGluZGV4KHVuc2lnbmVkIGludCBoZWlnaHQpCit7CisJdW5zaWduZWQgaW50 IHRtcCA9IGhlaWdodCAqIFJBRElYX1RSRUVfTUFQX1NISUZUOworCXVuc2lnbmVk IGxvbmcgaW5kZXggPSAofjBVTCA+PiAoUkFESVhfVFJFRV9JTkRFWF9CSVRTIC0g dG1wIC0gMSkpID4+IDE7CisKKwlpZiAodG1wID49IFJBRElYX1RSRUVfSU5ERVhf QklUUykKKwkJaW5kZXggPSB+MFVMOworCXJldHVybiBpbmRleDsKK30KKworc3Rh dGljIHZvaWQgcmFkaXhfdHJlZV9pbml0X21heGluZGV4KHZvaWQpCit7CisJdW5z aWduZWQgaW50IGk7CisKKwlmb3IgKGkgPSAwOyBpIDwgQVJSQVlfU0laRShoZWln aHRfdG9fbWF4aW5kZXgpOyBpKyspCisJCWhlaWdodF90b19tYXhpbmRleFtpXSA9 IF9fbWF4aW5kZXgoaSk7Cit9CisKK3ZvaWQgcmFkaXhfdHJlZV9pbml0KHZvaWQp Cit7CisJcmFkaXhfdHJlZV9pbml0X21heGluZGV4KCk7Cit9CkluZGV4OiByZXBh aXIveGZzcHJvZ3MvcmVwYWlyL3JhZGl4LXRyZWUuaAo9PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09Ci0tLSAvZGV2L251bGwJMTk3MC0wMS0wMSAwMDowMDowMC4wMDAwMDAwMDAg KzAwMDAKKysrIHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvcmFkaXgtdHJlZS5oCTIw MDctMDUtMDMgMTg6Mzk6MDYuNzA1ODE1Njg3ICsxMDAwCkBAIC0wLDAgKzEsNzYg QEAKKy8qCisgKiBDb3B5cmlnaHQgKEMpIDIwMDEgTW9tY2hpbCBWZWxpa292Cisg 
KiBQb3J0aW9ucyBDb3B5cmlnaHQgKEMpIDIwMDEgQ2hyaXN0b3BoIEhlbGx3aWcK KyAqCisgKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiBy ZWRpc3RyaWJ1dGUgaXQgYW5kL29yCisgKiBtb2RpZnkgaXQgdW5kZXIgdGhlIHRl cm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBhcworICogcHVi bGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb247IGVpdGhlciB2 ZXJzaW9uIDIsIG9yIChhdAorICogeW91ciBvcHRpb24pIGFueSBsYXRlciB2ZXJz aW9uLgorICoKKyAqIFRoaXMgcHJvZ3JhbSBpcyBkaXN0cmlidXRlZCBpbiB0aGUg aG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNlZnVsLCBidXQKKyAqIFdJVEhPVVQgQU5Z IFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YK KyAqIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIg UFVSUE9TRS4gIFNlZSB0aGUgR05VCisgKiBHZW5lcmFsIFB1YmxpYyBMaWNlbnNl IGZvciBtb3JlIGRldGFpbHMuCisgKgorICogWW91IHNob3VsZCBoYXZlIHJlY2Vp dmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UKKyAq IGFsb25nIHdpdGggdGhpcyBwcm9ncmFtOyBpZiBub3QsIHdyaXRlIHRvIHRoZSBG cmVlIFNvZnR3YXJlCisgKiBGb3VuZGF0aW9uLCBJbmMuLCA2NzUgTWFzcyBBdmUs IENhbWJyaWRnZSwgTUEgMDIxMzksIFVTQS4KKyAqLworI2lmbmRlZiBfX1hGU19T VVBQT1JUX1JBRElYX1RSRUVfSF9fCisjZGVmaW5lIF9fWEZTX1NVUFBPUlRfUkFE SVhfVFJFRV9IX18KKworI2RlZmluZSBSQURJWF9UUkVFX1RBR1MKKworc3RydWN0 IHJhZGl4X3RyZWVfcm9vdCB7CisJdW5zaWduZWQgaW50CQloZWlnaHQ7CisJc3Ry dWN0IHJhZGl4X3RyZWVfbm9kZQkqcm5vZGU7Cit9OworCisjZGVmaW5lIFJBRElY X1RSRUVfSU5JVChtYXNrKQl7CQkJCQlcCisJLmhlaWdodCA9IDAsCQkJCQkJCVwK Kwkucm5vZGUgPSBOVUxMLAkJCQkJCQlcCit9CisKKyNkZWZpbmUgUkFESVhfVFJF RShuYW1lLCBtYXNrKSBcCisJc3RydWN0IHJhZGl4X3RyZWVfcm9vdCBuYW1lID0g UkFESVhfVFJFRV9JTklUKG1hc2spCisKKyNkZWZpbmUgSU5JVF9SQURJWF9UUkVF KHJvb3QsIG1hc2spCQkJCQlcCitkbyB7CQkJCQkJCQkJXAorCShyb290KS0+aGVp Z2h0ID0gMDsJCQkJCQlcCisJKHJvb3QpLT5ybm9kZSA9IE5VTEw7CQkJCQkJXAor fSB3aGlsZSAoMCkKKworI2lmZGVmIFJBRElYX1RSRUVfVEFHUworI2RlZmluZSBS QURJWF9UUkVFX01BWF9UQUdTIDIKKyNlbmRpZgorCitpbnQgcmFkaXhfdHJlZV9p bnNlcnQoc3RydWN0IHJhZGl4X3RyZWVfcm9vdCAqLCB1bnNpZ25lZCBsb25nLCB2 b2lkICopOwordm9pZCAqcmFkaXhfdHJlZV9sb29rdXAoc3RydWN0IHJhZGl4X3Ry 
ZWVfcm9vdCAqLCB1bnNpZ25lZCBsb25nKTsKK3ZvaWQgKipyYWRpeF90cmVlX2xv b2t1cF9zbG90KHN0cnVjdCByYWRpeF90cmVlX3Jvb3QgKiwgdW5zaWduZWQgbG9u Zyk7Cit2b2lkICpyYWRpeF90cmVlX2xvb2t1cF9maXJzdChzdHJ1Y3QgcmFkaXhf dHJlZV9yb290ICosIHVuc2lnbmVkIGxvbmcgKik7Cit2b2lkICpyYWRpeF90cmVl X2RlbGV0ZShzdHJ1Y3QgcmFkaXhfdHJlZV9yb290ICosIHVuc2lnbmVkIGxvbmcp OwordW5zaWduZWQgaW50CityYWRpeF90cmVlX2dhbmdfbG9va3VwKHN0cnVjdCBy YWRpeF90cmVlX3Jvb3QgKnJvb3QsIHZvaWQgKipyZXN1bHRzLAorCQkJdW5zaWdu ZWQgbG9uZyBmaXJzdF9pbmRleCwgdW5zaWduZWQgaW50IG1heF9pdGVtcyk7Cit1 bnNpZ25lZCBpbnQKK3JhZGl4X3RyZWVfZ2FuZ19sb29rdXBfZXgoc3RydWN0IHJh ZGl4X3RyZWVfcm9vdCAqcm9vdCwgdm9pZCAqKnJlc3VsdHMsCisJCQl1bnNpZ25l ZCBsb25nIGZpcnN0X2luZGV4LCB1bnNpZ25lZCBsb25nIGxhc3RfaW5kZXgsCisJ CQl1bnNpZ25lZCBpbnQgbWF4X2l0ZW1zKTsKKwordm9pZCByYWRpeF90cmVlX2lu aXQodm9pZCk7CisKKyNpZmRlZiBSQURJWF9UUkVFX1RBR1MKK3ZvaWQgKnJhZGl4 X3RyZWVfdGFnX3NldChzdHJ1Y3QgcmFkaXhfdHJlZV9yb290ICpyb290LAorCQkJ dW5zaWduZWQgbG9uZyBpbmRleCwgdW5zaWduZWQgaW50IHRhZyk7Cit2b2lkICpy YWRpeF90cmVlX3RhZ19jbGVhcihzdHJ1Y3QgcmFkaXhfdHJlZV9yb290ICpyb290 LAorCQkJdW5zaWduZWQgbG9uZyBpbmRleCwgdW5zaWduZWQgaW50IHRhZyk7Citp bnQgcmFkaXhfdHJlZV90YWdfZ2V0KHN0cnVjdCByYWRpeF90cmVlX3Jvb3QgKnJv b3QsCisJCQl1bnNpZ25lZCBsb25nIGluZGV4LCB1bnNpZ25lZCBpbnQgdGFnKTsK K3Vuc2lnbmVkIGludAorcmFkaXhfdHJlZV9nYW5nX2xvb2t1cF90YWcoc3RydWN0 IHJhZGl4X3RyZWVfcm9vdCAqcm9vdCwgdm9pZCAqKnJlc3VsdHMsCisJCQl1bnNp Z25lZCBsb25nIGZpcnN0X2luZGV4LCB1bnNpZ25lZCBpbnQgbWF4X2l0ZW1zLAor CQkJdW5zaWduZWQgaW50IHRhZyk7CitpbnQgcmFkaXhfdHJlZV90YWdnZWQoc3Ry dWN0IHJhZGl4X3RyZWVfcm9vdCAqcm9vdCwgdW5zaWduZWQgaW50IHRhZyk7Cisj ZW5kaWYKKworI2VuZGlmIC8qIF9fWEZTX1NVUFBPUlRfUkFESVhfVFJFRV9IX18g Ki8KSW5kZXg6IHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvc2Nhbi5jCj09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT0KLS0tIHJlcGFpci5vcmlnL3hmc3Byb2dzL3JlcGFpci9zY2Fu LmMJMjAwNy0wNC0yNyAxMzoxMzozNS4wMDAwMDAwMDAgKzEwMDAKKysrIHJlcGFp ci94ZnNwcm9ncy9yZXBhaXIvc2Nhbi5jCTIwMDctMDUtMTYgMTI6MDI6MzkuNjQw 
NjUwOTEwICsxMDAwCkBAIC0yNyw3ICsyNyw2IEBACiAjaW5jbHVkZSAic2Nhbi5o IgogI2luY2x1ZGUgInZlcnNpb25zLmgiCiAjaW5jbHVkZSAiYm1hcC5oIgotI2lu Y2x1ZGUgInByZWZldGNoLmgiCiAjaW5jbHVkZSAicHJvZ3Jlc3MuaCIKIAogZXh0 ZXJuIGludCB2ZXJpZnlfc2V0X2FnaGVhZGVyKHhmc19tb3VudF90ICptcCwgeGZz X2J1Zl90ICpzYnVmLCB4ZnNfc2JfdCAqc2IsCkBAIC0xMTQ3LDkgKzExNDYsNiBA QAogCiAJYWdpX2RpcnR5ID0gYWdmX2RpcnR5ID0gc2JfZGlydHkgPSAwOwogCi0J aWYgKGRvX3ByZWZldGNoKQotCQlwcmVmZXRjaF9zYihtcCwgYWdubyk7Ci0KIAlz YmJ1ZiA9IGxpYnhmc19yZWFkYnVmKG1wLT5tX2RldiwgWEZTX0FHX0RBRERSKG1w LCBhZ25vLCBYRlNfU0JfREFERFIpLAogCQkJCVhGU19GU1NfVE9fQkIobXAsIDEp LCAwKTsKIAlpZiAoIXNiYnVmKSAgewpAQCAtMTI0MSw5ICsxMjM3LDYgQEAKIAog CXNjYW5fZnJlZWxpc3QoYWdmKTsKIAotCWlmIChkb19wcmVmZXRjaCkKLQkJIHBy ZWZldGNoX3Jvb3RzKG1wLCBhZ25vLCBhZ2YsIGFnaSk7Ci0KIAlpZiAoSU5UX0dF VChhZ2YtPmFnZl9yb290c1tYRlNfQlROVU1fQk5PXSwgQVJDSF9DT05WRVJUKSAh PSAwICYmCiAJICAgIHZlcmlmeV9hZ2JubyhtcCwgYWdubywKIAkJCUlOVF9HRVQo YWdmLT5hZ2Zfcm9vdHNbWEZTX0JUTlVNX0JOT10sIEFSQ0hfQ09OVkVSVCkpKQpJ bmRleDogcmVwYWlyL3hmc3Byb2dzL3JlcGFpci90aHJlYWRzLmMKPT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PQotLS0gcmVwYWlyLm9yaWcveGZzcHJvZ3MvcmVwYWlyL3RocmVh ZHMuYwkyMDA3LTA0LTI3IDEzOjEzOjM1LjAwMDAwMDAwMCArMTAwMAorKysgcmVw YWlyL3hmc3Byb2dzL3JlcGFpci90aHJlYWRzLmMJMjAwNy0wNC0yNyAxNDoxMjoz NC4yMTk1NTk4NDcgKzEwMDAKQEAgLTEsMTM4ICsxLDYwIEBACiAjaW5jbHVkZSA8 bGlieGZzLmg+Ci0jaW5jbHVkZSAicHRocmVhZC5oIgotI2luY2x1ZGUgInNpZ25h bC5oIgorI2luY2x1ZGUgPHB0aHJlYWQuaD4KKyNpbmNsdWRlIDxzaWduYWwuaD4K ICNpbmNsdWRlICJ0aHJlYWRzLmgiCiAjaW5jbHVkZSAiZXJyX3Byb3Rvcy5oIgog I2luY2x1ZGUgInByb3Rvcy5oIgorI2luY2x1ZGUgImdsb2JhbHMuaCIKIAotaW50 IGRvX3BhcmFsbGVsID0gMTsKLWludCB0aHJlYWRfY291bnQ7Ci0KLS8qIEEgcXVh bnR1bSBvZiB3b3JrICovCi10eXBlZGVmIHN0cnVjdCB3b3JrX3MgewotCXN0cnVj dCB3b3JrX3MJKm5leHQ7Ci0JZGlzcF9mdW5jX3QJKmZ1bmN0aW9uOwotCXhmc19t b3VudF90CSptcDsKLQl4ZnNfYWdudW1iZXJfdAlhZ25vOwotfSB3b3JrX3Q7Ci0K LXR5cGVkZWYgc3RydWN0ICB3b3JrX3F1ZXVlX3MgewotCXdvcmtfdAkJKm5leHQ7 
Ci0Jd29ya190CQkqbGFzdDsKLQlpbnQJCWFjdGl2ZV90aHJlYWRzOwotCWludAkJ d29ya19jb3VudDsKLQlwdGhyZWFkX2NvbmRfdAltY3Y7CS8qIG1haW4gdGhyZWFk IGNvbmRpdGlvbmFsIHZhcmlhYmxlICovCi0JcHRocmVhZF9jb25kX3QJd2N2Owkv KiB3b3JrZXIgdGhyZWFkcyBjb25kaXRpb25hbCB2YXJpYWJsZSAqLwotCXB0aHJl YWRfbXV0ZXhfdAltdXRleDsKLX0gd29ya19xdWV1ZV90OwotCi1zdGF0aWMJd29y a19xdWV1ZV90CXdvcmtfcXVldWU7Ci1zdGF0aWMJcHRocmVhZF90CSp3b3JrX3Ro cmVhZHM7Ci0KLXN0YXRpYwl2b2lkCSp3b3JrZXJfdGhyZWFkKHZvaWQgKmFyZyk7 Ci0KLXN0YXRpYyB2b2lkCi1pbml0X3dvcmtlcnMod29ya19xdWV1ZV90ICp3cSwg aW50IG53KQorc3RhdGljIHZvaWQgKgord29ya2VyX3RocmVhZCh2b2lkICphcmcp CiB7Ci0JaW50CQkJZXJyOwotCXB0aHJlYWRfbXV0ZXhhdHRyX3QJbXR4YXR0cjsK LQotCW1lbXNldCh3cSwgMCwgc2l6ZW9mKHdvcmtfcXVldWVfdCkpOwotCXdxLT5h Y3RpdmVfdGhyZWFkcyA9IG53OwotCi0JcHRocmVhZF9jb25kX2luaXQoJndxLT5t Y3YsIE5VTEwpOwotCXB0aHJlYWRfY29uZF9pbml0KCZ3cS0+d2N2LCBOVUxMKTsK LQlwdGhyZWFkX211dGV4YXR0cl9pbml0KCZtdHhhdHRyKTsKLQotI2lmZGVmCVBU SFJFQURfTVVURVhfU1BJTkJMT0NLX05QCi0JLyogTlAgLSBOb24gUG9ydGFibGUg LSBJcml4ICovCi0JaWYgKChlcnIgPSBwdGhyZWFkX211dGV4YXR0cl9zZXR0eXBl KCZtdHhhdHRyLAotCQkJUFRIUkVBRF9NVVRFWF9TUElOQkxPQ0tfTlApKSA+IDAp IHsKLQkJZG9fZXJyb3IoXygiaW5pdF93b3JrZXJzOiB0aHJlYWQgMHgleDogcHRo cmVhZF9tdXRleGF0dHJfc2V0dHlwZSBlcnJvciAlZDogJXNcbiIpLAotCQkJcHRo cmVhZF9zZWxmKCksIGVyciwgc3RyZXJyb3IoZXJyKSk7Ci0JfQotI2VuZGlmCi0j aWZkZWYJUFRIUkVBRF9NVVRFWF9GQVNUX05QCi0JLyogTlAgLSBOb24gUG9ydGFi bGUgLSBMaW51eCAqLwotCWlmICgoZXJyID0gcHRocmVhZF9tdXRleGF0dHJfc2V0 dHlwZSgmbXR4YXR0ciwKLQkJCVBUSFJFQURfTVVURVhfRkFTVF9OUCkpID4gMCkg ewotCQlkb19lcnJvcihfKCJpbml0X3dvcmtlcnM6IHRocmVhZCAweCV4OiBwdGhy ZWFkX211dGV4YXR0cl9zZXR0eXBlIGVycm9yICVkOiAlc1xuIiksCi0JCQlwdGhy ZWFkX3NlbGYoKSwgZXJyLCBzdHJlcnJvcihlcnIpKTsKLQl9Ci0jZW5kaWYKLQlp ZiAoKGVyciA9IHB0aHJlYWRfbXV0ZXhfaW5pdCgmd3EtPm11dGV4LCAmbXR4YXR0 cikpID4gMCkgewotCQlkb19lcnJvcihfKCJpbml0X3dvcmtlcnM6IHRocmVhZCAw eCV4OiBwdGhyZWFkX211dGV4X2luaXQgZXJyb3IgJWQ6ICVzXG4iKSwKLQkJCXB0 aHJlYWRfc2VsZigpLCBlcnIsIHN0cmVycm9yKGVycikpOwotCX0KLX0KKwl3b3Jr 
X3F1ZXVlX3QJKndxOworCXdvcmtfaXRlbV90CSp3aTsKIAotc3RhdGljIHZvaWQK LXF1aWVzY2Vfd29ya2Vycyh3b3JrX3F1ZXVlX3QgKndxKQotewotCWludAllcnI7 CisJd3EgPSAod29ya19xdWV1ZV90Kilhcmc7CiAKLQlpZiAoKGVyciA9IHB0aHJl YWRfbXV0ZXhfbG9jaygmd3EtPm11dGV4KSkgPiAwKQotCQlkb19lcnJvcihfKCJx dWllc2NlX3dvcmtlcnM6IHRocmVhZCAweCV4OiBwdGhyZWFkX211dGV4X2xvY2sg ZXJyb3IgJWQ6ICVzXG4iKSwKLQkJCXB0aHJlYWRfc2VsZigpLCBlcnIsIHN0cmVy cm9yKGVycikpOwotCWlmICh3cS0+YWN0aXZlX3RocmVhZHMgPiAwKSB7Ci0JCWlm ICgoZXJyID0gcHRocmVhZF9jb25kX3dhaXQoJndxLT5tY3YsICZ3cS0+bXV0ZXgp KSA+IDApCi0JCQlkb19lcnJvcihfKCJxdWllc2NlX3dvcmtlcnM6IHRocmVhZCAw eCV4OiBwdGhyZWFkX2NvbmRfd2FpdCBlcnJvciAlZDogJXNcbiIpLAotCQkJCXB0 aHJlYWRfc2VsZigpLCBlcnIsIHN0cmVycm9yKGVycikpOwotCX0KLQlBU1NFUlQo d3EtPmFjdGl2ZV90aHJlYWRzID09IDApOwotCWlmICgoZXJyID0gcHRocmVhZF9t dXRleF91bmxvY2soJndxLT5tdXRleCkpID4gMCkKLQkJZG9fZXJyb3IoXygicXVp ZXNjZV93b3JrZXJzOiB0aHJlYWQgMHgleDogcHRocmVhZF9tdXRleF91bmxvY2sg ZXJyb3IgJWQ6ICVzXG4iKSwKLQkJCXB0aHJlYWRfc2VsZigpLCBlcnIsIHN0cmVy cm9yKGVycikpOwotfQorCS8qCisJICogTG9vcCBwdWxsaW5nIHdvcmsgZnJvbSB0 aGUgcGFzc2VkIGluIHdvcmsgcXVldWUuCisJICogQ2hlY2sgZm9yIG5vdGlmaWNh dGlvbiB0byBleGl0IGFmdGVyIGV2ZXJ5IGNodW5rIG9mIHdvcmsuCisJICovCisJ d2hpbGUgKDEpIHsKKwkJcHRocmVhZF9tdXRleF9sb2NrKCZ3cS0+bG9jayk7CiAK LXN0YXRpYyB2b2lkCi1zdGFydF93b3JrZXJzKHdvcmtfcXVldWVfdCAqd3EsIHVu c2lnbmVkIHRoY250LCBwdGhyZWFkX2F0dHJfdCAqYXR0cnApCi17Ci0JaW50CQll cnI7Ci0JdW5zaWduZWQgbG9uZwlpOworCQkvKgorCQkgKiBXYWl0IGZvciB3b3Jr LgorCQkgKi8KKwkJd2hpbGUgKHdxLT5uZXh0X2l0ZW0gPT0gTlVMTCAmJiAhd3Et PnRlcm1pbmF0ZSkgeworCQkJQVNTRVJUKHdxLT5pdGVtX2NvdW50ID09IDApOwor CQkJcHRocmVhZF9jb25kX3dhaXQoJndxLT53YWtldXAsICZ3cS0+bG9jayk7CisJ CX0KKwkJaWYgKHdxLT5uZXh0X2l0ZW0gPT0gTlVMTCAmJiB3cS0+dGVybWluYXRl KSB7CisJCQlwdGhyZWFkX211dGV4X3VubG9jaygmd3EtPmxvY2spOworCQkJYnJl YWs7CisJCX0KIAotCWluaXRfd29ya2Vycyh3cSwgdGhjbnQpOworCQkvKgorCQkg KiAgRGVxdWV1ZSB3b3JrIGZyb20gdGhlIGhlYWQgb2YgdGhlIGxpc3QuCisJCSAq LworCQlBU1NFUlQod3EtPml0ZW1fY291bnQgPiAwKTsKKwkJd2kgPSB3cS0+bmV4 
dF9pdGVtOworCQl3cS0+bmV4dF9pdGVtID0gd2ktPm5leHQ7CisJCXdxLT5pdGVt X2NvdW50LS07CiAKLQlpZiAoKHdvcmtfdGhyZWFkcyA9IChwdGhyZWFkX3QgKilt YWxsb2Moc2l6ZW9mKHB0aHJlYWRfdCkgKiB0aGNudCkpID09IE5VTEwpCi0JCWRv X2Vycm9yKF8oImNhbm5vdCBtYWxsb2MgJWxkIGJ5dGVzIGZvciB3b3JrX3RocmVh ZHMgYXJyYXlcbiIpLAotCQkJCXNpemVvZihwdGhyZWFkX3QpICogdGhjbnQpOwor CQlwdGhyZWFkX211dGV4X3VubG9jaygmd3EtPmxvY2spOwogCi0JLyoKLQkqKiAg Q3JlYXRlIHdvcmtlciB0aHJlYWRzCi0JKi8KLQlmb3IgKGkgPSAwOyBpIDwgdGhj bnQ7IGkrKykgewotCQllcnIgPSBwdGhyZWFkX2NyZWF0ZSgmd29ya190aHJlYWRz W2ldLCBhdHRycCwgd29ya2VyX3RocmVhZCwgKHZvaWQgKikgaSk7Ci0JCWlmKGVy ciA+IDApIHsKLQkJICAgICAgICBkb19lcnJvcihfKCJjYW5ub3QgY3JlYXRlIHdv cmtlciB0aHJlYWRzLCBzdGF0dXMgPSBbJWRdICVzXG4iKSwKLQkJCQllcnIsIHN0 cmVycm9yKGVycikpOwotCQl9CisJCSh3aS0+ZnVuY3Rpb24pKHdpLT5xdWV1ZSwg d2ktPmFnbm8sIHdpLT5hcmcpOworCQlmcmVlKHdpKTsKIAl9Ci0JZG9fbG9nKF8o IiAgICAgICAgLSBjcmVhdGluZyAlZCB3b3JrZXIgdGhyZWFkKHMpXG4iKSwgdGhj bnQpOwogCi0JLyoKLQkqKiAgV2FpdCBmb3IgYWxsIHdvcmtlciB0aHJlYWRzIHRv IGluaXRpYWxpemUKLQkqLwotCXF1aWVzY2Vfd29ya2Vycyh3cSk7CisJcmV0dXJu IE5VTEw7CiB9CiAKIHZvaWQKIHRocmVhZF9pbml0KHZvaWQpCiB7Ci0JaW50CQlz dGF0dXM7Ci0JcHRocmVhZF9hdHRyX3QJYXR0cjsKIAlzaWdzZXRfdAlibG9ja2Vk OwogCi0JaWYgKGRvX3BhcmFsbGVsID09IDApCi0JCXJldHVybjsKLQlpZiAodGhy ZWFkX2NvdW50ID09IDApCi0JCXRocmVhZF9jb3VudCA9IDIgKiBsaWJ4ZnNfbnBy b2MoKTsKLQotCWlmICgoc3RhdHVzID0gcHRocmVhZF9hdHRyX2luaXQoJmF0dHIp KSAhPSAwKQotCQlkb19lcnJvcihfKCJzdGF0dXMgZnJvbSBwdGhyZWFkX2F0dHJf aW5pdDogJWQiKSxzdGF0dXMpOwotCi0JaWYgKChzdGF0dXMgPSBwdGhyZWFkX3Nl dGNvbmN1cnJlbmN5KHRocmVhZF9jb3VudCkpICE9IDApCi0JCWRvX2Vycm9yKF8o IlN0YXR1cyBmcm9tIHB0aHJlYWRfc2V0Y29uY3VycmVuY3koJWQpOiAlZCIpLCB0 aHJlYWRfY291bnQsIHN0YXR1cyk7Ci0KIAkvKgogCSAqICBibG9jayBkZWxpdmVy eSBvZiBwcm9ncmVzcyByZXBvcnQgc2lnbmFsIHRvIGFsbCB0aHJlYWRzCiAgICAg ICAgICAqLwpAQCAtMTQwLDE2MCArNjIsOTAgQEAKIAlzaWdhZGRzZXQoJmJsb2Nr ZWQsIFNJR0hVUCk7CiAJc2lnYWRkc2V0KCZibG9ja2VkLCBTSUdBTFJNKTsKIAlw dGhyZWFkX3NpZ21hc2soU0lHX0JMT0NLLCAmYmxvY2tlZCwgTlVMTCk7Ci0KLQlz 
dGFydF93b3JrZXJzKCZ3b3JrX3F1ZXVlLCB0aHJlYWRfY291bnQsICZhdHRyKTsK IH0KIAotLyoKLSAqIERlcXVldWUgZnJvbSB0aGUgaGVhZCBvZiB0aGUgbGlzdC4K LSAqIHdxLT5tdXRleCBoZWxkLgotICovCi1zdGF0aWMgd29ya190ICoKLWRlcXVl dWUod29ya19xdWV1ZV90ICp3cSkKKwordm9pZAorY3JlYXRlX3dvcmtfcXVldWUo CisJd29ya19xdWV1ZV90CQkqd3EsCisJeGZzX21vdW50X3QJCSptcCwKKwlpbnQJ CQlud29ya2VycykKIHsKLQl3b3JrX3QJKndwOworCWludAkJCWVycjsKKwlpbnQJ CQlpOwogCi0JQVNTRVJUKHdxLT53b3JrX2NvdW50ID4gMCk7Ci0Jd3AgPSB3cS0+ bmV4dDsKLQl3cS0+bmV4dCA9IHdwLT5uZXh0OwotCXdxLT53b3JrX2NvdW50LS07 Ci0JaWYgKHdxLT5uZXh0ID09IE5VTEwpIHsKLQkJQVNTRVJUKHdxLT53b3JrX2Nv dW50ID09IDApOwotCQl3cS0+bGFzdCA9IE5VTEw7Ci0JfQotCXdwLT5uZXh0ID0g TlVMTDsKLQlyZXR1cm4gKHdwKTsKLX0KKwltZW1zZXQod3EsIDAsIHNpemVvZih3 b3JrX3F1ZXVlX3QpKTsKIAotc3RhdGljIHZvaWQgKgotd29ya2VyX3RocmVhZCh2 b2lkICphcmcpCi17Ci0Jd29ya19xdWV1ZV90CSp3cTsKLQl3b3JrX3QJCSp3cDsK LQlpbnQJCWVycjsKLQl1bnNpZ25lZCBsb25nCW15aWQ7Ci0KLQl3cSA9ICZ3b3Jr X3F1ZXVlOwotCW15aWQgPSAodW5zaWduZWQgbG9uZykgYXJnOwotCXRzX2luaXQo KTsKLQlsaWJ4ZnNfbGlvX2FsbG9jYXRlKCk7CisJcHRocmVhZF9jb25kX2luaXQo JndxLT53YWtldXAsIE5VTEwpOworCXB0aHJlYWRfbXV0ZXhfaW5pdCgmd3EtPmxv Y2ssIE5VTEwpOwogCi0JLyoKLQkgKiBMb29wIHB1bGxpbmcgd29yayBmcm9tIHRo ZSBnbG9iYWwgd29yayBxdWV1ZS4KLQkgKiBDaGVjayBmb3Igbm90aWZpY2F0aW9u IHRvIGV4aXQgYWZ0ZXIgZXZlcnkgY2h1bmsgb2Ygd29yay4KLQkgKi8KLQl3aGls ZSAoMSkgewotCQlpZiAoKGVyciA9IHB0aHJlYWRfbXV0ZXhfbG9jaygmd3EtPm11 dGV4KSkgPiAwKQotCQkJZG9fZXJyb3IoXygid29ya190aHJlYWQlZDogdGhyZWFk IDB4JXg6IHB0aHJlYWRfbXV0ZXhfbG9jayBlcnJvciAlZDogJXNcbiIpLAotCQkJ CW15aWQsIHB0aHJlYWRfc2VsZigpLCBlcnIsIHN0cmVycm9yKGVycikpOwotCQkv KgotCQkgKiBXYWl0IGZvciB3b3JrLgotCQkgKi8KLQkJd2hpbGUgKHdxLT5uZXh0 ID09IE5VTEwpIHsKLQkJCUFTU0VSVCh3cS0+d29ya19jb3VudCA9PSAwKTsKLQkJ CS8qCi0JCQkgKiBMYXN0IHRocmVhZCBnb2luZyB0byBpZGxlIHNsZWVwIG11c3Qg d2FrZXVwCi0JCQkgKiB0aGUgbWFzdGVyIHRocmVhZC4gIFNhbWUgbXV0ZXggaXMg dXNlZCB0byBsb2NrCi0JCQkgKiBhcm91bmQgdHdvIGRpZmZlcmVudCBjb25kaXRp b24gdmFyaWFibGVzLgotCQkJICovCi0JCQl3cS0+YWN0aXZlX3RocmVhZHMtLTsK 
LQkJCUFTU0VSVCh3cS0+YWN0aXZlX3RocmVhZHMgPj0gMCk7Ci0JCQlpZiAoIXdx LT5hY3RpdmVfdGhyZWFkcykgewotCQkJCWlmICgoZXJyID0gcHRocmVhZF9jb25k X3NpZ25hbCgmd3EtPm1jdikpID4gMCkKLQkJCQkJZG9fZXJyb3IoXygid29ya190 aHJlYWQlZDogdGhyZWFkIDB4JXg6IHB0aHJlYWRfY29uZF9zaWduYWwgZXJyb3Ig JWQ6ICVzXG4iKSwKLQkJCQkJCW15aWQsIHB0aHJlYWRfc2VsZigpLCBlcnIsIHN0 cmVycm9yKGVycikpOwotCQkJfQotCQkJaWYgKChlcnIgPSBwdGhyZWFkX2NvbmRf d2FpdCgmd3EtPndjdiwgJndxLT5tdXRleCkpID4gMCkKLQkJCQlkb19lcnJvcihf KCJ3b3JrX3RocmVhZCVkOiB0aHJlYWQgMHgleDogcHRocmVhZF9jb25kX3dhaXQg ZXJyb3IgJWQ6ICVzXG4iKSwKLQkJCQkJbXlpZCwgcHRocmVhZF9zZWxmKCksIGVy ciwgc3RyZXJyb3IoZXJyKSk7Ci0JCQl3cS0+YWN0aXZlX3RocmVhZHMrKzsKKwl3 cS0+bXAgPSBtcDsKKwl3cS0+dGhyZWFkX2NvdW50ID0gbndvcmtlcnM7CisJd3Et PnRocmVhZHMgPSBtYWxsb2MobndvcmtlcnMgKiBzaXplb2YocHRocmVhZF90KSk7 CisJd3EtPnRlcm1pbmF0ZSA9IDA7CisKKwlmb3IgKGkgPSAwOyBpIDwgbndvcmtl cnM7IGkrKykgeworCQllcnIgPSBwdGhyZWFkX2NyZWF0ZSgmd3EtPnRocmVhZHNb aV0sIE5VTEwsIHdvcmtlcl90aHJlYWQsIHdxKTsKKwkJaWYgKGVyciAhPSAwKSB7 CisJCQlkb19lcnJvcihfKCJjYW5ub3QgY3JlYXRlIHdvcmtlciB0aHJlYWRzLCBl cnJvciA9IFslZF0gJXNcbiIpLAorCQkJCWVyciwgc3RyZXJyb3IoZXJyKSk7CiAJ CX0KLQkJLyoKLQkJICogIERlcXVldWUgd29yayBmcm9tIHRoZSBoZWFkIG9mIHRo ZSBsaXN0LgotCQkgKi8KLQkJQVNTRVJUKHdxLT53b3JrX2NvdW50ID4gMCk7Ci0J CXdwID0gZGVxdWV1ZSh3cSk7Ci0JCWlmICgoZXJyID0gcHRocmVhZF9tdXRleF91 bmxvY2soJndxLT5tdXRleCkpID4gMCkKLQkJCWRvX2Vycm9yKF8oIndvcmtfdGhy ZWFkJWQ6IHRocmVhZCAweCV4OiBwdGhyZWFkX211dGV4X3VubG9jayBlcnJvciAl ZDogJXNcbiIpLAotCQkJCW15aWQsIHB0aHJlYWRfc2VsZigpLCBlcnIsIHN0cmVy cm9yKGVycikpOwotCQkvKgotCQkgKiAgRG8gdGhlIHdvcmsuCi0JCSAqLwotCQko d3AtPmZ1bmN0aW9uKSh3cC0+bXAsIHdwLT5hZ25vKTsKLQotCQlmcmVlKHdwKTsK IAl9Ci0JLyogTk9UIFJFQUNIRUQgKi8KLQlwdGhyZWFkX2V4aXQoTlVMTCk7Ci0J cmV0dXJuIChOVUxMKTsKLX0KIAotaW50Ci1xdWV1ZV93b3JrKGRpc3BfZnVuY190 IGZ1bmMsIHhmc19tb3VudF90ICptcCwgeGZzX2FnbnVtYmVyX3QgYWdubykKLXsK LQl3b3JrX3F1ZXVlX3QgKndxOwotCXdvcmtfdAkqd3A7Cit9CiAKLQlpZiAoZG9f cGFyYWxsZWwgPT0gMCkgewotCQlmdW5jKG1wLCBhZ25vKTsKLQkJcmV0dXJuIDA7 
Ci0JfQotCXdxID0gJndvcmtfcXVldWU7Ci0JLyoKLQkgKiBHZXQgbWVtb3J5IGZv ciBhIG5ldyB3b3JrIHN0cnVjdHVyZS4KLQkgKi8KLQlpZiAoKHdwID0gKHdvcmtf dCAqKW1lbWFsaWduKDgsIHNpemVvZih3b3JrX3QpKSkgPT0gTlVMTCkKLQkJcmV0 dXJuIChFTk9NRU0pOwotCS8qCi0JICogSW5pdGlhbGl6ZSB0aGUgbmV3IHdvcmsg c3RydWN0dXJlLgotCSAqLwotCXdwLT5mdW5jdGlvbiA9IGZ1bmM7Ci0Jd3AtPm1w ID0gbXA7Ci0Jd3AtPmFnbm8gPSBhZ25vOwordm9pZAorcXVldWVfd29yaygKKwl3 b3JrX3F1ZXVlX3QJKndxLAorCXdvcmtfZnVuY190CWZ1bmMsCisJeGZzX2FnbnVt YmVyX3QJYWdubywKKwl2b2lkCQkqYXJnKQoreworCXdvcmtfaXRlbV90CSp3aTsK KworCXdpID0gKHdvcmtfaXRlbV90ICopbWFsbG9jKHNpemVvZih3b3JrX2l0ZW1f dCkpOworCWlmICh3aSA9PSBOVUxMKQorCQlkb19lcnJvcihfKCJjYW5ub3QgYWxs b2NhdGUgd29ya2VyIGl0ZW0sIGVycm9yID0gWyVkXSAlc1xuIiksCisJCQllcnJu bywgc3RyZXJyb3IoZXJybm8pKTsKKworCXdpLT5mdW5jdGlvbiA9IGZ1bmM7CisJ d2ktPmFnbm8gPSBhZ25vOworCXdpLT5hcmcgPSBhcmc7CisJd2ktPnF1ZXVlID0g d3E7CisJd2ktPm5leHQgPSBOVUxMOwogCiAJLyoKIAkgKiAgTm93IHF1ZXVlIHRo ZSBuZXcgd29yayBzdHJ1Y3R1cmUgdG8gdGhlIHdvcmsgcXVldWUuCiAJICovCi0J aWYgKHdxLT5uZXh0ID09IE5VTEwpIHsKLQkJd3EtPm5leHQgPSB3cDsKKwlwdGhy ZWFkX211dGV4X2xvY2soJndxLT5sb2NrKTsKKwlpZiAod3EtPm5leHRfaXRlbSA9 PSBOVUxMKSB7CisJCXdxLT5uZXh0X2l0ZW0gPSB3aTsKKwkJQVNTRVJUKHdxLT5p dGVtX2NvdW50ID09IDApOworCQlwdGhyZWFkX2NvbmRfc2lnbmFsKCZ3cS0+d2Fr ZXVwKTsKIAl9IGVsc2UgewotCQl3cS0+bGFzdC0+bmV4dCA9IHdwOworCQl3cS0+ bGFzdF9pdGVtLT5uZXh0ID0gd2k7CiAJfQotCXdxLT5sYXN0ID0gd3A7Ci0Jd3At Pm5leHQgPSBOVUxMOwotCXdxLT53b3JrX2NvdW50Kys7Ci0KLQlyZXR1cm4gKDAp OworCXdxLT5sYXN0X2l0ZW0gPSB3aTsKKwl3cS0+aXRlbV9jb3VudCsrOworCXB0 aHJlYWRfbXV0ZXhfdW5sb2NrKCZ3cS0+bG9jayk7CiB9CiAKIHZvaWQKLXdhaXRf Zm9yX3dvcmtlcnModm9pZCkKK2Rlc3Ryb3lfd29ya19xdWV1ZSgKKwl3b3JrX3F1 ZXVlX3QJKndxKQogewotCWludAkJZXJyOwotCXdvcmtfcXVldWVfdAkqd3E7CisJ aW50CQlpOwogCi0JaWYgKGRvX3BhcmFsbGVsID09IDApCi0JCXJldHVybjsKLQl3 cSA9ICZ3b3JrX3F1ZXVlOwotCWlmICgoZXJyID0gcHRocmVhZF9tdXRleF9sb2Nr KCZ3cS0+bXV0ZXgpKSA+IDApCi0JCWRvX2Vycm9yKF8oIndhaXRfZm9yX3dvcmtl cnM6IHRocmVhZCAweCV4OiBwdGhyZWFkX211dGV4X2xvY2sgZXJyb3IgJWQ6ICVz 
XG4iKSwKLQkJCXB0aHJlYWRfc2VsZigpLCBlcnIsIHN0cmVycm9yKGVycikpOwot Ci0JQVNTRVJUKHdxLT5hY3RpdmVfdGhyZWFkcyA9PSAwKTsKLQlpZiAod3EtPndv cmtfY291bnQgPiAwKSB7Ci0JCS8qIGdldCB0aGUgd29ya2VycyBnb2luZyAqLwot CQlpZiAoKGVyciA9IHB0aHJlYWRfY29uZF9icm9hZGNhc3QoJndxLT53Y3YpKSA+ IDApCi0JCQlkb19lcnJvcihfKCJ3YWl0X2Zvcl93b3JrZXJzOiB0aHJlYWQgMHgl eDogcHRocmVhZF9jb25kX2Jyb2FkY2FzdCBlcnJvciAlZDogJXNcbiIpLAotCQkJ CXB0aHJlYWRfc2VsZigpLCBlcnIsIHN0cmVycm9yKGVycikpOwotCQkvKiBhbmQg d2FpdCBmb3IgdGhlbSAqLwotCQlpZiAoKGVyciA9IHB0aHJlYWRfY29uZF93YWl0 KCZ3cS0+bWN2LCAmd3EtPm11dGV4KSkgPiAwKQotCQkJZG9fZXJyb3IoXygid2Fp dF9mb3Jfd29ya2VyczogdGhyZWFkIDB4JXg6IHB0aHJlYWRfY29uZF93YWl0IGVy cm9yICVkOiAlc1xuIiksCi0JCQkJcHRocmVhZF9zZWxmKCksIGVyciwgc3RyZXJy b3IoZXJyKSk7Ci0JfQotCUFTU0VSVCh3cS0+YWN0aXZlX3RocmVhZHMgPT0gMCk7 Ci0JQVNTRVJUKHdxLT53b3JrX2NvdW50ID09IDApOworCXB0aHJlYWRfbXV0ZXhf bG9jaygmd3EtPmxvY2spOworCXdxLT50ZXJtaW5hdGUgPSAxOworCXB0aHJlYWRf bXV0ZXhfdW5sb2NrKCZ3cS0+bG9jayk7CisKKwlwdGhyZWFkX2NvbmRfYnJvYWRj YXN0KCZ3cS0+d2FrZXVwKTsKKworCWZvciAoaSA9IDA7IGkgPCB3cS0+dGhyZWFk X2NvdW50OyBpKyspCisJCXB0aHJlYWRfam9pbih3cS0+dGhyZWFkc1tpXSwgTlVM TCk7CiAKLQlpZiAoKGVyciA9IHB0aHJlYWRfbXV0ZXhfdW5sb2NrKCZ3cS0+bXV0 ZXgpKSA+IDApCi0JCWRvX2Vycm9yKF8oIndhaXRfZm9yX3dvcmtlcnM6IHRocmVh ZCAweCV4OiBwdGhyZWFkX211dGV4X3VubG9jayBlcnJvciAlZDogJXNcbiIpLAot CQkJcHRocmVhZF9zZWxmKCksIGVyciwgc3RyZXJyb3IoZXJyKSk7CisJZnJlZSh3 cS0+dGhyZWFkcyk7CisJcHRocmVhZF9tdXRleF9kZXN0cm95KCZ3cS0+bG9jayk7 CisJcHRocmVhZF9jb25kX2Rlc3Ryb3koJndxLT53YWtldXApOwogfQpJbmRleDog cmVwYWlyL3hmc3Byb2dzL3JlcGFpci90aHJlYWRzLmgKPT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PQotLS0gcmVwYWlyLm9yaWcveGZzcHJvZ3MvcmVwYWlyL3RocmVhZHMuaAky MDA3LTA0LTI3IDEzOjEzOjM1LjAwMDAwMDAwMCArMTAwMAorKysgcmVwYWlyL3hm c3Byb2dzL3JlcGFpci90aHJlYWRzLmgJMjAwNy0wNC0yNyAxNDoxMjozNC4yMTk1 NTk4NDcgKzEwMDAKQEAgLTEsMzcgKzEsNDcgQEAKICNpZm5kZWYJX1hGU19SRVBB SVJfVEhSRUFEU19IXwogI2RlZmluZQlfWEZTX1JFUEFJUl9USFJFQURTX0hfCiAK 
LWV4dGVybiBpbnQJCWRvX3BhcmFsbGVsOwotZXh0ZXJuIGludAkJdGhyZWFkX2Nv dW50OwotLyoKLSoqICBsb2NraW5nIHZhcmlhbnRzIC0gcndsb2NrL211dGV4Ci0q LwotI2RlZmluZSBQUkVQQUlSX1JXX0xPQ0tfQVRUUgkJUFRIUkVBRF9QUk9DRVNT X1BSSVZBVEUKLQotI2RlZmluZQlQUkVQQUlSX1JXX0xPQ0tfQUxMT0MobGtwLCBu KQkJCQlcCi0JaWYgKGRvX3BhcmFsbGVsKSB7CQkJCQlcCi0JCWxrcCA9IG1hbGxv YyhuKnNpemVvZihwdGhyZWFkX3J3bG9ja190KSk7CVwKLQkJaWYgKGxrcCA9PSBO VUxMKQkJCQlcCi0JCQlkb19lcnJvcigiY2Fubm90IGFsbG9jICVkIGxvY2tzXG4i LCBuKTsJXAotCQkJLyogTk8gUkVUVVJOICovCQkJCVwKLQl9Ci0jZGVmaW5lIFBS RVBBSVJfUldfTE9DS19JTklUKGwsYSkJaWYgKGRvX3BhcmFsbGVsKSBwdGhyZWFk X3J3bG9ja19pbml0KChsKSwoYSkpCi0jZGVmaW5lIFBSRVBBSVJfUldfUkVBRF9M T0NLKGwpIAlpZiAoZG9fcGFyYWxsZWwpIHB0aHJlYWRfcndsb2NrX3JkbG9jaygo bCkpCi0jZGVmaW5lIFBSRVBBSVJfUldfV1JJVEVfTE9DSyhsKQlpZiAoZG9fcGFy YWxsZWwpIHB0aHJlYWRfcndsb2NrX3dybG9jaygobCkpCi0jZGVmaW5lIFBSRVBB SVJfUldfVU5MT0NLKGwpCQlpZiAoZG9fcGFyYWxsZWwpIHB0aHJlYWRfcndsb2Nr X3VubG9jaygobCkpCi0jZGVmaW5lIFBSRVBBSVJfUldfV1JJVEVfTE9DS19OT1RF U1QobCkJcHRocmVhZF9yd2xvY2tfd3Jsb2NrKChsKSkKLSNkZWZpbmUgUFJFUEFJ Ul9SV19VTkxPQ0tfTk9URVNUKGwpCXB0aHJlYWRfcndsb2NrX3VubG9jaygobCkp Ci0jZGVmaW5lIFBSRVBBSVJfUldfTE9DS19ERUxFVEUobCkJaWYgKGRvX3BhcmFs bGVsKSBwdGhyZWFkX3J3bG9ja19kZXN0cm95KChsKSkKLQotI2RlZmluZSBQUkVQ QUlSX01UWF9MT0NLX0lOSVQobSwgYSkJaWYgKGRvX3BhcmFsbGVsKSBwdGhyZWFk X211dGV4X2luaXQoKG0pLCAoYSkpCi0jZGVmaW5lIFBSRVBBSVJfTVRYX0FUVFJf SU5JVChhKQlpZiAoZG9fcGFyYWxsZWwpIHB0aHJlYWRfbXV0ZXhhdHRyX2luaXQo KGEpKQotI2RlZmluZSBQUkVQQUlSX01UWF9BVFRSX1NFVChhLCBsKQlpZiAoZG9f cGFyYWxsZWwpIHB0aHJlYWRfbXV0ZXhhdHRyX3NldHR5cGUoKGEpLCBsKQotI2Rl ZmluZSBQUkVQQUlSX01UWF9MT0NLKG0pCQlpZiAoZG9fcGFyYWxsZWwpIHB0aHJl YWRfbXV0ZXhfbG9jayhtKQotI2RlZmluZSBQUkVQQUlSX01UWF9VTkxPQ0sobSkJ CWlmIChkb19wYXJhbGxlbCkgcHRocmVhZF9tdXRleF91bmxvY2sobSkKLQotCi10 eXBlZGVmIHZvaWQJZGlzcF9mdW5jX3QoeGZzX21vdW50X3QgKm1wLCB4ZnNfYWdu dW1iZXJfdCBhZ25vKTsKLWV4dGVybglpbnQJcXVldWVfd29yayhkaXNwX2Z1bmNf dCBmdW5jLCB4ZnNfbW91bnRfdCAqbXAsIHhmc19hZ251bWJlcl90IGFnbm8pOwot 
ZXh0ZXJuCXZvaWQJd2FpdF9mb3Jfd29ya2Vycyh2b2lkKTsKK3ZvaWQJdGhyZWFk X2luaXQodm9pZCk7CisKK3N0cnVjdCAgd29ya19xdWV1ZTsKKwordHlwZWRlZiB2 b2lkIHdvcmtfZnVuY190KHN0cnVjdCB3b3JrX3F1ZXVlICosIHhmc19hZ251bWJl cl90LCB2b2lkICopOworCit0eXBlZGVmIHN0cnVjdCB3b3JrX2l0ZW0geworCXN0 cnVjdCB3b3JrX2l0ZW0JKm5leHQ7CisJd29ya19mdW5jX3QJCSpmdW5jdGlvbjsK KwlzdHJ1Y3Qgd29ya19xdWV1ZQkqcXVldWU7CisJeGZzX2FnbnVtYmVyX3QJCWFn bm87CisJdm9pZAkJCSphcmc7Cit9IHdvcmtfaXRlbV90OworCit0eXBlZGVmIHN0 cnVjdCAgd29ya19xdWV1ZSB7CisJd29ya19pdGVtX3QJCSpuZXh0X2l0ZW07CisJ d29ya19pdGVtX3QJCSpsYXN0X2l0ZW07CisJaW50CQkJaXRlbV9jb3VudDsKKwlp bnQJCQl0aHJlYWRfY291bnQ7CisJcHRocmVhZF90CQkqdGhyZWFkczsKKwl4ZnNf bW91bnRfdAkJKm1wOworCXB0aHJlYWRfbXV0ZXhfdAkJbG9jazsKKwlwdGhyZWFk X2NvbmRfdAkJd2FrZXVwOworCWludAkJCXRlcm1pbmF0ZTsKK30gd29ya19xdWV1 ZV90OworCit2b2lkCitjcmVhdGVfd29ya19xdWV1ZSgKKwl3b3JrX3F1ZXVlX3QJ CSp3cSwKKwl4ZnNfbW91bnRfdAkJKm1wLAorCWludAkJCW53b3JrZXJzKTsKKwor dm9pZAorcXVldWVfd29yaygKKwl3b3JrX3F1ZXVlX3QJCSp3cSwKKwl3b3JrX2Z1 bmNfdCAJCWZ1bmMsCisJeGZzX2FnbnVtYmVyX3QgCQlhZ25vLAorCXZvaWQJCQkq YXJnKTsKKwordm9pZAorZGVzdHJveV93b3JrX3F1ZXVlKAorCXdvcmtfcXVldWVf dAkJKndxKTsKIAogI2VuZGlmCS8qIF9YRlNfUkVQQUlSX1RIUkVBRFNfSF8gKi8K SW5kZXg6IHJlcGFpci94ZnNwcm9ncy9yZXBhaXIveGZzX3JlcGFpci5jCj09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT0KLS0tIHJlcGFpci5vcmlnL3hmc3Byb2dzL3JlcGFpci94 ZnNfcmVwYWlyLmMJMjAwNy0wNC0yNyAxNDoxMTo0MS4wMDAwMDAwMDAgKzEwMDAK KysrIHJlcGFpci94ZnNwcm9ncy9yZXBhaXIveGZzX3JlcGFpci5jCTIwMDctMDUt MzEgMTE6MjM6MjUuMjQ0NjI3MDUwICsxMDAwCkBAIC01OSwxNyArNTksMTQgQEAK IAkiaWhhc2giLAogI2RlZmluZQlCSEFTSF9TSVpFCTMKIAkiYmhhc2giLAotI2Rl ZmluZQlQUkVGRVRDSF9JTk9fQ05UCTQKLQkicGZpbm8iLAotI2RlZmluZQlQUkVG RVRDSF9ESVJfQ05UCTUKLQkicGZkaXIiLAotI2RlZmluZQlQUkVGRVRDSF9BSU9f Q05UCTYKLQkicGZhaW8iLAotI2RlZmluZQlBR19TVFJJREUJCTcKKyNkZWZpbmUJ QUdfU1RSSURFCTQKIAkiYWdfc3RyaWRlIiwKIAlOVUxMCiB9OwogCitzdGF0aWMg aW50CWloYXNoX29wdGlvbl91c2VkOworc3RhdGljIGludAliaGFzaF9vcHRpb25f 
dXNlZDsKKwogc3RhdGljIHZvaWQKIHVzYWdlKHZvaWQpCiB7CkBAIC0xODcsOCAr MTg0LDcgQEAKIAlwcmVfNjVfYmV0YSA9IDA7CiAJZnNfc2hhcmVkX2FsbG93ZWQg PSAxOwogCWFnX3N0cmlkZSA9IDA7Ci0JdGhyZWFkX2NvdW50ID0gMDsKLQlkb19w YXJhbGxlbCA9IDA7CisJdGhyZWFkX2NvdW50ID0gMTsKIAlyZXBvcnRfaW50ZXJ2 YWwgPSBQUk9HX1JQVF9ERUZBVUxUOwogCiAJLyoKQEAgLTIyMywxOCArMjE5LDEx IEBACiAJCQkJCWJyZWFrOwogCQkJCWNhc2UgSUhBU0hfU0laRToKIAkJCQkJbGli eGZzX2loYXNoX3NpemUgPSAoaW50KSBzdHJ0b2wodmFsLCAwLCAwKTsKKwkJCQkJ aWhhc2hfb3B0aW9uX3VzZWQgPSAxOwogCQkJCQlicmVhazsKIAkJCQljYXNlIEJI QVNIX1NJWkU6CiAJCQkJCWxpYnhmc19iaGFzaF9zaXplID0gKGludCkgc3RydG9s KHZhbCwgMCwgMCk7Ci0JCQkJCWJyZWFrOwotCQkJCWNhc2UgUFJFRkVUQ0hfSU5P X0NOVDoKLQkJCQkJbGlieGZzX2xpb19pbm9fY291bnQgPSAoaW50KSBzdHJ0b2wo dmFsLCAwLCAwKTsKLQkJCQkJYnJlYWs7Ci0JCQkJY2FzZSBQUkVGRVRDSF9ESVJf Q05UOgotCQkJCQlsaWJ4ZnNfbGlvX2Rpcl9jb3VudCA9IChpbnQpIHN0cnRvbCh2 YWwsIDAsIDApOwotCQkJCQlicmVhazsKLQkJCQljYXNlIFBSRUZFVENIX0FJT19D TlQ6Ci0JCQkJCWxpYnhmc19saW9fYWlvX2NvdW50ID0gKGludCkgc3RydG9sKHZh bCwgMCwgMCk7CisJCQkJCWJoYXNoX29wdGlvbl91c2VkID0gMTsKIAkJCQkJYnJl YWs7CiAJCQkJY2FzZSBBR19TVFJJREU6CiAJCQkJCWFnX3N0cmlkZSA9IChpbnQp IHN0cnRvbCh2YWwsIDAsIDApOwpAQCAtMjcyLDEwICsyNjEsNyBAQAogCQkJcHJp bnRmKF8oIiVzIHZlcnNpb24gJXNcbiIpLCBwcm9nbmFtZSwgVkVSU0lPTik7CiAJ CQlleGl0KDApOwogCQljYXNlICdQJzoKLQkJCWRvX3ByZWZldGNoIF49IDE7Ci0J CQlicmVhazsKLQkJY2FzZSAnTSc6Ci0JCQlkb19wYXJhbGxlbCBePSAxOworCQkJ ZG9fcHJlZmV0Y2ggPSAwOwogCQkJYnJlYWs7CiAJCWNhc2UgJ3QnOgogCQkJcmVw b3J0X2ludGVydmFsID0gKGludCkgc3RydG9sKG9wdGFyZywgMCwgMCk7CkBAIC00 ODMsMTIgKzQ2OSwxOSBAQAogCWJpbmR0ZXh0ZG9tYWluKFBBQ0tBR0UsIExPQ0FM RURJUik7CiAJdGV4dGRvbWFpbihQQUNLQUdFKTsKIAorI2lmZGVmIFhSX1BGX1RS QUNFCisJcGZfdHJhY2VfZmlsZSA9IGZvcGVuKCIvdG1wL3hmc19yZXBhaXJfcHJl ZmV0Y2gudHJhY2UiLCAidyIpOworCXNldHZidWYocGZfdHJhY2VfZmlsZSwgTlVM TCwgX0lPTEJGLCAxMDI0KTsKKyNlbmRpZgorCiAJdGVtcF9tcCA9ICZ4ZnNfbTsK IAlzZXRidWYoc3Rkb3V0LCBOVUxMKTsKIAogCXByb2Nlc3NfYXJncyhhcmdjLCBh cmd2KTsKIAl4ZnNfaW5pdCgmeCk7CiAKKwltc2didWYgPSBtYWxsb2MoRFVSQVRJ 
T05fQlVGX1NJWkUpOworCiAJdGltZXN0YW1wKFBIQVNFX1NUQVJULCAwLCBOVUxM KTsKIAl0aW1lc3RhbXAoUEhBU0VfRU5ELCAwLCBOVUxMKTsKIApAQCAtNTI5LDIy ICs1MjIsNzcgQEAKIAlpbm9kZXNfcGVyX2NsdXN0ZXIgPSBYRlNfSU5PREVfQ0xV U1RFUl9TSVpFKG1wKSA+PiBtcC0+bV9zYi5zYl9pbm9kZWxvZzsKIAogCWlmIChh Z19zdHJpZGUpIHsKLQkJZG9fcGFyYWxsZWwgPSAxOwotCQl0aHJlYWRfY291bnQg PSAobXAtPm1fc2Iuc2JfYWdjb3VudCArIGFnX3N0cmlkZSAtIDEpIC8gYWdfc3Ry aWRlOworCQl0aHJlYWRfY291bnQgPSAoZ2xvYl9hZ2NvdW50ICsgYWdfc3RyaWRl IC0gMSkgLyBhZ19zdHJpZGU7CiAJCXRocmVhZF9pbml0KCk7CiAJfQogCi0JaWYg KGRvX3BhcmFsbGVsICYmIHJlcG9ydF9pbnRlcnZhbCkgeworCWlmIChhZ19zdHJp ZGUgJiYgcmVwb3J0X2ludGVydmFsKSB7CiAJCWluaXRfcHJvZ3Jlc3NfcnB0KCk7 Ci0JCW1zZ2J1ZiA9IG1hbGxvYyhEVVJBVElPTl9CVUZfU0laRSk7CiAJCWlmICht c2didWYpIHsKIAkJCWRvX2xvZyhfKCIgICAgICAgIC0gcmVwb3J0aW5nIHByb2dy ZXNzIGluIGludGVydmFscyBvZiAlc1xuIiksCiAJCQlkdXJhdGlvbihyZXBvcnRf aW50ZXJ2YWwsIG1zZ2J1ZikpOwotCQkJZnJlZShtc2didWYpOwogCQl9CiAJfQog CiAJLyoKKwkgKiBBZGp1c3QgbGlieGZzIGNhY2hlIHNpemVzIGJhc2VkIG9uIHN5 c3RlbSBtZW1vcnksCisJICogZmlsZXN5c3RlbSBzaXplIGFuZCBpbm9kZSBjb3Vu dC4KKwkgKgorCSAqIFdlJ2xsIHNldCB0aGUgY2FjaGUgc2l6ZSBiYXNlZCBvbiAz LzRzIHRoZSBtZW1vcnkgbWludXMKKwkgKiBzcGFjZSB1c2VkIGJ5IHRoZSBpbm9k ZSBBVkwgdHJlZSBhbmQgYmxvY2sgdXNhZ2UgbWFwLgorCSAqCisJICogSW5vZGUg QVZMIHRyZWUgc3BhY2UgaXMgYXBwcm94aW1hdGVseSA0IGJ5dGVzIHBlciBpbm9k ZSwKKwkgKiBibG9jayB1c2FnZSBtYXAgaXMgY3VycmVudGx5IDEgYnl0ZSBmb3Ig MiBibG9ja3MuCisJICoKKwkgKiBXZSBhc3N1bWUgbW9zdCBibG9ja3Mgd2lsbCBi ZSBpbm9kZSBjbHVzdGVycy4KKwkgKgorCSAqIENhbGN1bGF0aW9ucyBhcmUgZG9u ZSBpbiBraWxvYnl0ZSB1bml0cy4KKwkgKi8KKworCWlmICghYmhhc2hfb3B0aW9u X3VzZWQpIHsKKwkJdW5zaWduZWQgbG9uZyAJbWVtX3VzZWQ7CisJCXVuc2lnbmVk IGxvbmcJcGh5c19tZW07CisKKwkJbGlieGZzX2ljYWNoZV9wdXJnZSgpOworCQls aWJ4ZnNfYmNhY2hlX3B1cmdlKCk7CisJCWNhY2hlX2Rlc3Ryb3kobGlieGZzX2lj YWNoZSk7CisJCWNhY2hlX2Rlc3Ryb3kobGlieGZzX2JjYWNoZSk7CisKKwkJbWVt X3VzZWQgPSAobXAtPm1fc2Iuc2JfaWNvdW50ID4+ICgxMCAtIDIpKSArCisJCQkJ CShtcC0+bV9zYi5zYl9kYmxvY2tzID4+ICgxMCArIDEpKTsKKwkJcGh5c19tZW0g 
PSBsaWJ4ZnNfcGh5c21lbSgpICogMyAvIDQ7CisKKwkJaWYgKHBoeXNfbWVtIDw9 IG1lbV91c2VkKSB7CisJCQkvKgorCQkJICogVHVybiBvZmYgcHJlZmV0Y2ggYW5k IG1pbmltaXNlIGxpYnhmcyBjYWNoZSBpZgorCQkJICogcGh5c2ljYWwgbWVtb3J5 IGlzIGRlZW1lZCBpbnN1ZmZpY2llbnQKKwkJCSAqLworCQkJZG9fcHJlZmV0Y2gg PSAwOworCQkJbGlieGZzX2JoYXNoX3NpemUgPSA2NDsKKwkJfSBlbHNlIHsKKwkJ CXBoeXNfbWVtIC09IG1lbV91c2VkOworCQkJaWYgKHBoeXNfbWVtID49ICgxIDw8 IDMwKSkKKwkJCQlwaHlzX21lbSA9IDEgPDwgMzA7CisJCQlsaWJ4ZnNfYmhhc2hf c2l6ZSA9IHBoeXNfbWVtIC8gKEhBU0hfQ0FDSEVfUkFUSU8gKgorCQkJCQkobXAt Pm1faW5vZGVfY2x1c3Rlcl9zaXplID4+IDEwKSk7CisJCQlpZiAobGlieGZzX2Jo YXNoX3NpemUgPCA1MTIpCisJCQkJbGlieGZzX2JoYXNoX3NpemUgPSA1MTI7CisJ CX0KKworCQlpZiAodmVyYm9zZSkKKwkJCWRvX2xvZyhfKCIgICAgICAgIC0gYmxv Y2sgY2FjaGUgc2l6ZSBzZXQgdG8gJWQgZW50cmllc1xuIiksCisJCQkJbGlieGZz X2JoYXNoX3NpemUgKiBIQVNIX0NBQ0hFX1JBVElPKTsKKworCQlpZiAoIWloYXNo X29wdGlvbl91c2VkKQorCQkJbGlieGZzX2loYXNoX3NpemUgPSBsaWJ4ZnNfYmhh c2hfc2l6ZTsKKworCQlsaWJ4ZnNfaWNhY2hlID0gY2FjaGVfaW5pdChsaWJ4ZnNf aWhhc2hfc2l6ZSwKKwkJCQkJCSZsaWJ4ZnNfaWNhY2hlX29wZXJhdGlvbnMpOwor CQlsaWJ4ZnNfYmNhY2hlID0gY2FjaGVfaW5pdChsaWJ4ZnNfYmhhc2hfc2l6ZSwK KwkJCQkJCSZsaWJ4ZnNfYmNhY2hlX29wZXJhdGlvbnMpOworCX0KKworCS8qCiAJ ICogY2FsY3VsYXRlIHdoYXQgbWtmcyB3b3VsZCBkbyB0byB0aGlzIGZpbGVzeXN0 ZW0KIAkgKi8KIAljYWxjX21rZnMobXApOwpAQCAtNTY0LDE2ICs2MTIsMTUgQEAK IAlwaGFzZTIobXApOwogCXRpbWVzdGFtcChQSEFTRV9FTkQsIDIsIE5VTEwpOwog CisJaWYgKGRvX3ByZWZldGNoKQorCQlpbml0X3ByZWZldGNoKG1wKTsKKwogCXBo YXNlMyhtcCk7CiAJdGltZXN0YW1wKFBIQVNFX0VORCwgMywgTlVMTCk7CiAKIAlw aGFzZTQobXApOwogCXRpbWVzdGFtcChQSEFTRV9FTkQsIDQsIE5VTEwpOwogCi0J LyogWFhYOiBuYXRoYW5zIC0gc29tZXRoaW5nIGluIHBoYXNlNCBhaW4ndCBwbGF5 aW5nIGJ5ICovCi0JLyogdGhlIGJ1ZmZlciBjYWNoZSBydWxlcy4uIHdoeSBkb2Vz bid0IElSSVggaGl0IHRoaXM/ICovCi0JbGlieGZzX2JjYWNoZV9mbHVzaCgpOwot CiAJaWYgKG5vX21vZGlmeSkKIAkJcHJpbnRmKF8oIk5vIG1vZGlmeSBmbGFnIHNl dCwgc2tpcHBpbmcgcGhhc2UgNVxuIikpOwogCWVsc2UgewpAQCAtNTg1LDggKzYz Miw2IEBACiAJCXBoYXNlNihtcCk7CiAJCXRpbWVzdGFtcChQSEFTRV9FTkQsIDYs 
IE5VTEwpOwogCi0JCWxpYnhmc19iY2FjaGVfZmx1c2goKTsKLQogCQlwaGFzZTco bXApOwogCQl0aW1lc3RhbXAoUEhBU0VfRU5ELCA3LCBOVUxMKTsKIAl9IGVsc2Ug IHsKQEAgLTY0OCw3ICs2OTMsNyBAQAogCQl9CiAJfQogCi0JaWYgKGRvX3BhcmFs bGVsICYmIHJlcG9ydF9pbnRlcnZhbCkKKwlpZiAoYWdfc3RyaWRlICYmIHJlcG9y dF9pbnRlcnZhbCkKIAkJc3RvcF9wcm9ncmVzc19ycHQoKTsKIAogI2lmZGVmIFRS QUNLX01FTU9SWQpAQCAtNjY3LDEyICs3MTIsNiBAQAogCX0KIAogCS8qCi0JICog RG9uZSwgZmx1c2ggYWxsIGNhY2hlZCBidWZmZXJzIGFuZCBpbm9kZXMuCi0JICov Ci0JbGlieGZzX2ljYWNoZV9wdXJnZSgpOwotCWxpYnhmc19iY2FjaGVfcHVyZ2Uo KTsKLQotCS8qCiAJICogQ2xlYXIgdGhlIHF1b3RhIGZsYWdzIGlmIHRoZXkncmUg b24uCiAJICovCiAJc2JwID0gbGlieGZzX2dldHNiKG1wLCAwKTsKQEAgLTY5OCw2 ICs3MzcsMTEgQEAKIAogCWxpYnhmc193cml0ZWJ1ZihzYnAsIDApOwogCisJLyoK KwkgKiBEb25lLCBmbHVzaCBhbGwgY2FjaGVkIGJ1ZmZlcnMgYW5kIGlub2Rlcy4K KwkgKi8KKwlsaWJ4ZnNfYmNhY2hlX2ZsdXNoKCk7CisKIAlsaWJ4ZnNfdW1vdW50 KG1wKTsKIAlpZiAoeC5ydGRldikKIAkJbGlieGZzX2RldmljZV9jbG9zZSh4LnJ0 ZGV2KTsKQEAgLTcwOCw1ICs3NTIsOCBAQAogCWlmICh2ZXJib3NlKQogCQlzdW1t YXJ5X3JlcG9ydCgpOwogCWRvX2xvZyhfKCJkb25lXG4iKSk7CisjaWZkZWYgWFJf UEZfVFJBQ0UKKwlmY2xvc2UocGZfdHJhY2VfZmlsZSk7CisjZW5kaWYKIAlyZXR1 cm4gKDApOwogfQpJbmRleDogcmVwYWlyL3hmc3Byb2dzL3JlcGFpci9kaW5vZGUu Ywo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09Ci0tLSByZXBhaXIub3JpZy94ZnNwcm9ncy9y ZXBhaXIvZGlub2RlLmMJMjAwNy0wNC0yNyAxNDoxMTo0MS4wMDAwMDAwMDAgKzEw MDAKKysrIHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvZGlub2RlLmMJMjAwNy0wNS0x NiAxMjowMjozOS42MTI2NTQ1MzAgKzEwMDAKQEAgLTUxNCwyOCArNTE0LDYgQEAK IAlyZXR1cm4oTlVMTERGU0JOTyk7CiB9CiAKLS8qCi0gKiBwcm9jZXNzX2JtYnRf cmVjbGlzdF9pbnQgaXMgdGhlIG1vc3QgY29tcHV0ZSBpbnRlbnNpdmUKLSAqIGZ1 bmN0aW9uIGluIHJlcGFpci4gVGhlIGZvbGxvd2luZyBtYWNyb3MgcmVkdWNlIHRo ZQotICogdGhlIGxhcmdlIG51bWJlciBvZiBsb2NrL3VubG9jayBzdGVwcyBpdCB3 b3VsZCBvdGhlcndpc2UKLSAqIGNhbGwuCi0gKi8KLSNkZWZpbmUJUFJPQ0VTU19C TUJUX0RFQ0wodHlwZSwgdmFyKQl0eXBlIHZhcgotCi0jZGVmaW5lCVBST0NFU1Nf Qk1CVF9MT0NLKGFnbm8pCQkJCQkJCVwKLQlpZiAoZG9fcGFyYWxsZWwgJiYgKGFn 
bm8gIT0gbG9ja2VkX2Fnbm8pKSB7CQkJCVwKLQkJaWYgKGxvY2tlZF9hZ25vICE9 IC0xKQkvKiByZWxlYXNlIG9sZCBhZyBsb2NrICovCQlcCi0JCQlQUkVQQUlSX1JX X1VOTE9DS19OT1RFU1QoJnBlcl9hZ19sb2NrW2xvY2tlZF9hZ25vXSk7CVwKLQkJ UFJFUEFJUl9SV19XUklURV9MT0NLX05PVEVTVCgmcGVyX2FnX2xvY2tbYWdub10p OwkJXAotCQlsb2NrZWRfYWdubyA9IGFnbm87CQkJCQkJXAotCX0KLQotI2RlZmlu ZQlQUk9DRVNTX0JNQlRfVU5MT0NLX1JFVFVSTih2YWwpCQkJCQkJXAotCWRvIHsJ CQkJCQkJCQlcCi0JCWlmIChsb2NrZWRfYWdubyAhPSAtMSkgCQkJCQkJXAotCQkJ UFJFUEFJUl9SV19VTkxPQ0tfTk9URVNUKCZwZXJfYWdfbG9ja1tsb2NrZWRfYWdu b10pOwlcCi0JCXJldHVybiAodmFsKTsJCQkJCQkJXAotCX0gd2hpbGUgKDApCiAK IHN0YXRpYyBpbnQKIHByb2Nlc3NfcnRfcmVjKApAQCAtNjg5LDggKzY2Nyw4IEBA CiAJeGZzX2Rmc2Jub190CQllOwogCXhmc19hZ251bWJlcl90CQlhZ25vOwogCXhm c19hZ2Jsb2NrX3QJCWFnYm5vOwotCVBST0NFU1NfQk1CVF9ERUNMCi0JCQkJKHhm c19hZ251bWJlcl90LCBsb2NrZWRfYWdubz0tMSk7CisJeGZzX2FnbnVtYmVyX3QJ CWxvY2tlZF9hZ25vID0gLTE7CisJaW50CQkJZXJyb3IgPSAxOwogCiAJaWYgKHdo aWNoZm9yayA9PSBYRlNfREFUQV9GT1JLKQogCQlmb3JrbmFtZSA9IF8oImRhdGEi KTsKQEAgLTcwOSwxMSArNjg3LDEwIEBACiAJCWVsc2UKIAkJCSpsYXN0X2tleSA9 IG87CiAJCWlmIChpID4gMCAmJiBvcCArIGNwID4gbykgIHsKLQkJCWRvX3dhcm4o Ci0JXygiYm1hcCByZWMgb3V0IG9mIG9yZGVyLCBpbm9kZSAlbGx1IGVudHJ5ICVk ICIKLQkgICJbbyBzIGNdIFslbGx1ICVsbHUgJWxsdV0sICVkIFslbGx1ICVsbHUg JWxsdV1cbiIpLAorCQkJZG9fd2FybihfKCJibWFwIHJlYyBvdXQgb2Ygb3JkZXIs IGlub2RlICVsbHUgZW50cnkgJWQgIgorCSAgCQkJIltvIHMgY10gWyVsbHUgJWxs dSAlbGx1XSwgJWQgWyVsbHUgJWxsdSAlbGx1XVxuIiksCiAJCQkJaW5vLCBpLCBv LCBzLCBjLCBpLTEsIG9wLCBzcCwgY3ApOwotCQkJUFJPQ0VTU19CTUJUX1VOTE9D S19SRVRVUk4oMSk7CisJCQlnb3RvIGRvbmU7CiAJCX0KIAkJb3AgPSBvOwogCQlj cCA9IGM7CkBAIC03MjMsMTAgKzcwMCw5IEBACiAJCSAqIGNoZWNrIG51bWVyaWMg dmFsaWRpdHkgb2YgdGhlIGV4dGVudAogCQkgKi8KIAkJaWYgKGMgPT0gMCkgIHsK LQkJCWRvX3dhcm4oCi0JXygiemVybyBsZW5ndGggZXh0ZW50IChvZmYgPSAlbGx1 LCBmc2JubyA9ICVsbHUpIGluIGlubyAlbGx1XG4iKSwKLQkJCQlvLCBzLCBpbm8p OwotCQkJUFJPQ0VTU19CTUJUX1VOTE9DS19SRVRVUk4oMSk7CisJCQlkb193YXJu KF8oInplcm8gbGVuZ3RoIGV4dGVudCAob2ZmID0gJWxsdSwgIgorCQkJCSJmc2Ju 
byA9ICVsbHUpIGluIGlubyAlbGx1XG4iKSwgbywgcywgaW5vKTsKKwkJCWdvdG8g ZG9uZTsKIAkJfQogCiAJCWlmICh0eXBlID09IFhSX0lOT19SVERBVEEgJiYgd2hp Y2hmb3JrID09IFhGU19EQVRBX0ZPUkspIHsKQEAgLTc0MywzOSArNzE5LDM4IEBA CiAJCQljb250aW51ZTsKIAkJfQogCisJCS8qCisJCSAqIHJlZ3VsYXIgZmlsZSBk YXRhIGZvcmsgb3IgYXR0cmlidXRlIGZvcmsKKwkJICovCiAJCXN3aXRjaCAodmVy aWZ5X2Rmc2Jub19yYW5nZShtcCwgcywgYykpIHsKIAkJCWNhc2UgWFJfREZTQk5P UkFOR0VfVkFMSUQ6CiAJCQkJYnJlYWs7CisKIAkJCWNhc2UgWFJfREZTQk5PUkFO R0VfQkFEU1RBUlQ6Ci0JCQkJZG9fd2FybigKLQlfKCJpbm9kZSAlbGx1IC0gYmFk IGV4dGVudCBzdGFydGluZyBibG9jayBudW1iZXIgJWxsdSwgb2Zmc2V0ICVsbHVc biIpLAorCQkJCWRvX3dhcm4oXygiaW5vZGUgJWxsdSAtIGJhZCBleHRlbnQgc3Rh cnRpbmcgIgorCQkJCQkiYmxvY2sgbnVtYmVyICVsbHUsIG9mZnNldCAlbGx1XG4i KSwKIAkJCQkJaW5vLCBzLCBvKTsKLQkJCQlQUk9DRVNTX0JNQlRfVU5MT0NLX1JF VFVSTigxKTsKKwkJCQlnb3RvIGRvbmU7CisKIAkJCWNhc2UgWFJfREZTQk5PUkFO R0VfQkFERU5EOgotCQkJCWRvX3dhcm4oCi0JXygiaW5vZGUgJWxsdSAtIGJhZCBl eHRlbnQgbGFzdCBibG9jayBudW1iZXIgJWxsdSwgb2Zmc2V0ICVsbHVcbiIpLAor CQkJCWRvX3dhcm4oXygiaW5vZGUgJWxsdSAtIGJhZCBleHRlbnQgbGFzdCBibG9j ayAiCisJCQkJCSJudW1iZXIgJWxsdSwgb2Zmc2V0ICVsbHVcbiIpLAogCQkJCQlp bm8sIHMgKyBjIC0gMSwgbyk7Ci0JCQkJUFJPQ0VTU19CTUJUX1VOTE9DS19SRVRV Uk4oMSk7Ci0JCQljYXNlIFhSX0RGU0JOT1JBTkdFX09WRVJGTE9XOgotCQkJCWRv X3dhcm4oCisJCQkJZ290byBkb25lOwogCi0JXygiaW5vZGUgJWxsdSAtIGJhZCBl eHRlbnQgb3ZlcmZsb3dzIC0gc3RhcnQgJWxsdSwgZW5kICVsbHUsICIKLQkgICJv ZmZzZXQgJWxsdVxuIiksCisJCQljYXNlIFhSX0RGU0JOT1JBTkdFX09WRVJGTE9X OgorCQkJCWRvX3dhcm4oXygiaW5vZGUgJWxsdSAtIGJhZCBleHRlbnQgb3ZlcmZs b3dzIC0gIgorCQkJCQkic3RhcnQgJWxsdSwgZW5kICVsbHUsIG9mZnNldCAlbGx1 XG4iKSwKIAkJCQkJaW5vLCBzLCBzICsgYyAtIDEsIG8pOwotCQkJCVBST0NFU1Nf Qk1CVF9VTkxPQ0tfUkVUVVJOKDEpOworCQkJCWdvdG8gZG9uZTsKIAkJfQogCQlp ZiAobyA+PSBmc19tYXhfZmlsZV9vZmZzZXQpICB7Ci0JCQlkb193YXJuKAotCV8o Imlub2RlICVsbHUgLSBleHRlbnQgb2Zmc2V0IHRvbyBsYXJnZSAtIHN0YXJ0ICVs bHUsIGNvdW50ICVsbHUsICIKLQkgICJvZmZzZXQgJWxsdVxuIiksCisJCQlkb193 YXJuKF8oImlub2RlICVsbHUgLSBleHRlbnQgb2Zmc2V0IHRvbyBsYXJnZSAtICIK 
KwkJCQkic3RhcnQgJWxsdSwgY291bnQgJWxsdSwgb2Zmc2V0ICVsbHVcbiIpLAog CQkJCWlubywgcywgYywgbyk7Ci0JCQlQUk9DRVNTX0JNQlRfVU5MT0NLX1JFVFVS TigxKTsKKwkJCWdvdG8gZG9uZTsKIAkJfQogCi0KLQkJLyoKLQkJICogcmVndWxh ciBmaWxlIGRhdGEgZm9yayBvciBhdHRyaWJ1dGUgZm9yawotCQkgKi8KIAkJaWYg KGJsa21hcHAgJiYgKmJsa21hcHApCiAJCQlibGttYXBfc2V0X2V4dChibGttYXBw LCBvLCBzLCBjKTsKIAkJLyoKQEAgLTc4NSwyNiArNzYwLDM2IEBACiAJCWFnbm8g PSBYRlNfRlNCX1RPX0FHTk8obXAsIHMpOwogCQlhZ2JubyA9IFhGU19GU0JfVE9f QUdCTk8obXAsIHMpOwogCQllID0gcyArIGM7Ci0JCVBST0NFU1NfQk1CVF9MT0NL KGFnbm8pOwotCQlmb3IgKGIgPSBzOyBiIDwgZTsgYisrLCBhZ2JubysrKSAgewot CQkJaWYgKGNoZWNrX2R1cHMgPT0gMSkgIHsKLQkJCQkvKgotCQkJCSAqIGlmIHdl J3JlIGp1c3QgY2hlY2tpbmcgdGhlIGJtYXAgZm9yIGR1cHMsCi0JCQkJICogcmV0 dXJuIGlmIHdlIGZpbmQgb25lLCBvdGhlcndpc2UsIGNvbnRpbnVlCi0JCQkJICog Y2hlY2tpbmcgZWFjaCBlbnRyeSB3aXRob3V0IHNldHRpbmcgdGhlCi0JCQkJICog YmxvY2sgYml0bWFwCi0JCQkJICovCisJCWlmIChhZ25vICE9IGxvY2tlZF9hZ25v KSB7CisJCQlpZiAobG9ja2VkX2Fnbm8gIT0gLTEpCisJCQkJcHRocmVhZF9tdXRl eF91bmxvY2soJmFnX2xvY2tzW2xvY2tlZF9hZ25vXSk7CisJCQlwdGhyZWFkX211 dGV4X2xvY2soJmFnX2xvY2tzW2Fnbm9dKTsKKwkJCWxvY2tlZF9hZ25vID0gYWdu bzsKKwkJfQorCisJCWlmIChjaGVja19kdXBzKSB7CisJCQkvKgorCQkJICogaWYg d2UncmUganVzdCBjaGVja2luZyB0aGUgYm1hcCBmb3IgZHVwcywKKwkJCSAqIHJl dHVybiBpZiB3ZSBmaW5kIG9uZSwgb3RoZXJ3aXNlLCBjb250aW51ZQorCQkJICog Y2hlY2tpbmcgZWFjaCBlbnRyeSB3aXRob3V0IHNldHRpbmcgdGhlCisJCQkgKiBi bG9jayBiaXRtYXAKKwkJCSAqLworCQkJZm9yIChiID0gczsgYiA8IGU7IGIrKywg YWdibm8rKykgIHsKIAkJCQlpZiAoc2VhcmNoX2R1cF9leHRlbnQobXAsIGFnbm8s IGFnYm5vKSkgewotCQkJCQlkb193YXJuKAotCV8oIiVzIGZvcmsgaW4gaW5vICVs bHUgY2xhaW1zIGR1cCBleHRlbnQsIG9mZiAtICVsbHUsICIKLQkgICJzdGFydCAt ICVsbHUsIGNudCAlbGx1XG4iKSwKKwkJCQkJZG9fd2FybihfKCIlcyBmb3JrIGlu IGlubyAlbGx1IGNsYWltcyAiCisJCQkJCQkiZHVwIGV4dGVudCwgb2ZmIC0gJWxs dSwgIgorCQkJCQkJInN0YXJ0IC0gJWxsdSwgY250ICVsbHVcbiIpLAogCQkJCQkJ Zm9ya25hbWUsIGlubywgbywgcywgYyk7Ci0JCQkJCVBST0NFU1NfQk1CVF9VTkxP Q0tfUkVUVVJOKDEpOworCQkJCQlnb3RvIGRvbmU7CiAJCQkJfQotCQkJCWNvbnRp 
bnVlOwogCQkJfQorCQkJKnRvdCArPSBjOworCQkJY29udGludWU7CisJCX0KIAot CQkJLyogUHJvY2VzcyBpbiBjaHVua3Mgb2YgMTYgKFhSX0JCX1VOSVQvWFJfQkIp CisJCWZvciAoYiA9IHM7IGIgPCBlOyBiKyssIGFnYm5vKyspICB7CisJCQkvKgor CQkJICogUHJvY2VzcyBpbiBjaHVua3Mgb2YgMTYgKFhSX0JCX1VOSVQvWFJfQkIp CiAJCQkgKiBmb3IgY29tbW9uIFhSX0VfVU5LTk9XTiB0byBYUl9FX0lOVVNFIHRy YW5zaXRpb24KIAkJCSAqLwogCQkJaWYgKCgoYWdibm8gJiBYUl9CQl9NQVNLKSA9 PSAwKSAmJiAoKHMgKyBjIC0gYikgPj0gKFhSX0JCX1VOSVQvWFJfQkIpKSkgewpA QCAtODE4LDQ1ICs4MDMsNDkgQEAKIAkJCX0KIAogCQkJc3RhdGUgPSBnZXRfYWdi bm9fc3RhdGUobXAsIGFnbm8sIGFnYm5vKTsKKwogCQkJc3dpdGNoIChzdGF0ZSkg IHsKIAkJCWNhc2UgWFJfRV9GUkVFOgogCQkJY2FzZSBYUl9FX0ZSRUUxOgotCQkJ CWRvX3dhcm4oCi0JCQlfKCIlcyBmb3JrIGluIGlubyAlbGx1IGNsYWltcyBmcmVl IGJsb2NrICVsbHVcbiIpLAorCQkJCWRvX3dhcm4oXygiJXMgZm9yayBpbiBpbm8g JWxsdSBjbGFpbXMgZnJlZSAiCisJCQkJCSJibG9jayAlbGx1XG4iKSwKIAkJCQkJ Zm9ya25hbWUsIGlubywgKF9fdWludDY0X3QpIGIpOwogCQkJCS8qIGZhbGwgdGhy b3VnaCAuLi4gKi8KIAkJCWNhc2UgWFJfRV9VTktOT1dOOgogCQkJCXNldF9hZ2Ju b19zdGF0ZShtcCwgYWdubywgYWdibm8sIFhSX0VfSU5VU0UpOwogCQkJCWJyZWFr OworCiAJCQljYXNlIFhSX0VfQkFEX1NUQVRFOgogCQkJCWRvX2Vycm9yKF8oImJh ZCBzdGF0ZSBpbiBibG9jayBtYXAgJWxsdVxuIiksIGIpOwotCQkJCWFib3J0KCk7 Ci0JCQkJYnJlYWs7CisKIAkJCWNhc2UgWFJfRV9GU19NQVA6CiAJCQljYXNlIFhS X0VfSU5POgogCQkJY2FzZSBYUl9FX0lOVVNFX0ZTOgotCQkJCWRvX3dhcm4oCi0J CQlfKCIlcyBmb3JrIGluIGlub2RlICVsbHUgY2xhaW1zIG1ldGFkYXRhIGJsb2Nr ICVsbHVcbiIpLAorCQkJCWRvX3dhcm4oXygiJXMgZm9yayBpbiBpbm9kZSAlbGx1 IGNsYWltcyAiCisJCQkJCSJtZXRhZGF0YSBibG9jayAlbGx1XG4iKSwKIAkJCQkJ Zm9ya25hbWUsIGlubywgKF9fdWludDY0X3QpIGIpOwotCQkJCVBST0NFU1NfQk1C VF9VTkxPQ0tfUkVUVVJOKDEpOworCQkJCWdvdG8gZG9uZTsKKwogCQkJY2FzZSBY Ul9FX0lOVVNFOgogCQkJY2FzZSBYUl9FX01VTFQ6CiAJCQkJc2V0X2FnYm5vX3N0 YXRlKG1wLCBhZ25vLCBhZ2JubywgWFJfRV9NVUxUKTsKLQkJCQlkb193YXJuKAot CQkJXygiJXMgZm9yayBpbiAlcyBpbm9kZSAlbGx1IGNsYWltcyB1c2VkIGJsb2Nr ICVsbHVcbiIpLAorCQkJCWRvX3dhcm4oXygiJXMgZm9yayBpbiAlcyBpbm9kZSAl bGx1IGNsYWltcyAiCisJCQkJCSJ1c2VkIGJsb2NrICVsbHVcbiIpLAogCQkJCQlm 
b3JrbmFtZSwgZnR5cGUsIGlubywgKF9fdWludDY0X3QpIGIpOwotCQkJCVBST0NF U1NfQk1CVF9VTkxPQ0tfUkVUVVJOKDEpOworCQkJCWdvdG8gZG9uZTsKKwogCQkJ ZGVmYXVsdDoKLQkJCQlkb19lcnJvcigKLQkJCV8oImlsbGVnYWwgc3RhdGUgJWQg aW4gYmxvY2sgbWFwICVsbHVcbiIpLAorCQkJCWRvX2Vycm9yKF8oImlsbGVnYWwg c3RhdGUgJWQgaW4gYmxvY2sgbWFwICVsbHVcbiIpLAogCQkJCQlzdGF0ZSwgYik7 Ci0JCQkJYWJvcnQoKTsKIAkJCX0KIAkJfQogCQkqdG90ICs9IGM7CiAJfQotCi0J UFJPQ0VTU19CTUJUX1VOTE9DS19SRVRVUk4oMCk7CisJZXJyb3IgPSAwOworZG9u ZToKKwlpZiAobG9ja2VkX2Fnbm8gIT0gLTEpCisJCXB0aHJlYWRfbXV0ZXhfdW5s b2NrKCZhZ19sb2Nrc1tsb2NrZWRfYWdub10pOworCXJldHVybiBlcnJvcjsKIH0K IAogLyoKSW5kZXg6IHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvZGlub2RlLmgKPT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PQotLS0gcmVwYWlyLm9yaWcveGZzcHJvZ3MvcmVwYWly L2Rpbm9kZS5oCTIwMDctMDQtMjcgMTM6MTM6MzUuMDAwMDAwMDAwICsxMDAwCisr KyByZXBhaXIveGZzcHJvZ3MvcmVwYWlyL2Rpbm9kZS5oCTIwMDctMDQtMjcgMTQ6 MTI6MzQuMzExNTQ3ODM4ICsxMDAwCkBAIC0xOCw2ICsxOCw4IEBACiAjaWZuZGVm IF9YUl9ESU5PREVfSAogI2RlZmluZSBfWFJfRElOT0RFX0gKIAorI2luY2x1ZGUg InByZWZldGNoLmgiCisKIHN0cnVjdCBibGttYXA7CiAKIGludApAQCAtMTE2LDYg KzExOCw3IEBACiAJCQkJeGZzX2FnbnVtYmVyX3QJYWdubyk7CiB2b2lkCiBwcm9j ZXNzX2FnaW5vZGVzKHhmc19tb3VudF90CSptcCwKKwkJcHJlZmV0Y2hfYXJnc190 CSpwZl9hcmdzLAogCQl4ZnNfYWdudW1iZXJfdAlhZ25vLAogCQlpbnQJCWNoZWNr X2RpcnMsCiAJCWludAkJY2hlY2tfZHVwcywKSW5kZXg6IHJlcGFpci94ZnNwcm9n cy9saWJ4ZnMvdHJhbnMuYwo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0tLSByZXBhaXIu b3JpZy94ZnNwcm9ncy9saWJ4ZnMvdHJhbnMuYwkyMDA3LTA0LTI3IDEzOjEzOjM1 LjAwMDAwMDAwMCArMTAwMAorKysgcmVwYWlyL3hmc3Byb2dzL2xpYnhmcy90cmFu cy5jCTIwMDctMDQtMjcgMTQ6MTI6MzQuMzExNTQ3ODM4ICsxMDAwCkBAIC02NDQs MTggKzY0NCwxNyBAQAogCVhGU19CVUZfU0VUX0ZTUFJJVkFURTIoYnAsIE5VTEwp OwkvKiByZW1vdmUgeGFjdCBwdHIgKi8KIAogCWhvbGQgPSAoYmlwLT5ibGlfZmxh Z3MgJiBYRlNfQkxJX0hPTEQpOwotCWlmIChiaXAtPmJsaV9mbGFncyAmIChYRlNf QkxJX0RJUlRZfFhGU19CTElfU1RBTEUpKSB7CisJaWYgKGJpcC0+YmxpX2ZsYWdz 
ICYgWEZTX0JMSV9ESVJUWSkgewogI2lmZGVmIFhBQ1RfREVCVUcKIAkJZnByaW50 ZihzdGRlcnIsICJmbHVzaGluZy9zdGFsaW5nIGJ1ZmZlciAlcCAoaG9sZD0lZClc biIsCiAJCQlicCwgaG9sZCk7CiAjZW5kaWYKLQkJaWYgKGJpcC0+YmxpX2ZsYWdz ICYgWEZTX0JMSV9ESVJUWSkKLQkJCWxpYnhmc193cml0ZWJ1Zl9pbnQoYnAsIDAp OwotCQlpZiAoaG9sZCkKLQkJCWJpcC0+YmxpX2ZsYWdzICY9IH5YRlNfQkxJX0hP TEQ7Ci0JCWVsc2UKLQkJCWxpYnhmc19wdXRidWYoYnApOworCQlsaWJ4ZnNfd3Jp dGVidWZfaW50KGJwLCAwKTsKIAl9CisJaWYgKGhvbGQpCisJCWJpcC0+YmxpX2Zs YWdzICY9IH5YRlNfQkxJX0hPTEQ7CisJZWxzZQorCQlsaWJ4ZnNfcHV0YnVmKGJw KTsKIAkvKiByZWxlYXNlIHRoZSBidWYgaXRlbSAqLwogCWttZW1fem9uZV9mcmVl KHhmc19idWZfaXRlbV96b25lLCBiaXApOwogfQpJbmRleDogcmVwYWlyL3hmc3By b2dzL2xpYnhmcy9NYWtlZmlsZQo9PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0tLSByZXBh aXIub3JpZy94ZnNwcm9ncy9saWJ4ZnMvTWFrZWZpbGUJMjAwNy0wNC0xMiAxNDo1 NTo1My4wMDAwMDAwMDAgKzEwMDAKKysrIHJlcGFpci94ZnNwcm9ncy9saWJ4ZnMv TWFrZWZpbGUJMjAwNy0wNS0yOSAxMDo1NzozOC44NzA1MTk5ODggKzEwMDAKQEAg LTExLDcgKzExLDcgQEAKIExUX0FHRSA9IDAKIAogSEZJTEVTID0geGZzLmggaW5p dC5oCi1DRklMRVMgPSBiaXQuYyBjYWNoZS5jIGluaXQuYyBsaW8uYyBsb2dpdGVt LmMgcmR3ci5jIHRyYW5zLmMgdXRpbC5jIFwKK0NGSUxFUyA9IGJpdC5jIGNhY2hl LmMgaW5pdC5jIGxvZ2l0ZW0uYyByZHdyLmMgdHJhbnMuYyB1dGlsLmMgXAogCXhm c19hbGxvYy5jIHhmc19pYWxsb2MuYyB4ZnNfcnRhbGxvYy5jIFwKIAl4ZnNfaW5v ZGUuYyB4ZnNfYnRyZWUuYyB4ZnNfYWxsb2NfYnRyZWUuYyB4ZnNfaWFsbG9jX2J0 cmVlLmMgXAogCXhmc19ibWFwX2J0cmVlLmMgeGZzX2RhX2J0cmVlLmMgeGZzX2Rp ci5jIHhmc19kaXJfbGVhZi5jIFwKSW5kZXg6IHJlcGFpci94ZnNwcm9ncy9pbmNs dWRlL2xpbnV4LmgKPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQotLS0gcmVwYWlyLm9yaWcv eGZzcHJvZ3MvaW5jbHVkZS9saW51eC5oCTIwMDctMDQtMTIgMTQ6NTU6NTIuMDAw MDAwMDAwICsxMDAwCisrKyByZXBhaXIveGZzcHJvZ3MvaW5jbHVkZS9saW51eC5o CTIwMDctMDUtMjkgMTE6MDY6MDEuMDI5NTI5Njk2ICsxMDAwCkBAIC0xMTksOSAr MTE5LDQgQEAKICNkZWZpbmUgX0JPT0xFQU5fVF9ERUZJTkVECTEKICNlbmRpZgog Ci0jaWZkZWYgX19VU0VfR05VCi10eXBlZGVmIHN0cnVjdCBhaW9jYjY0IGFpb2Ni 
NjRfdDsKLSNkZWZpbmUJX0FJT0NCNjRfVF9ERUZJTkVECTEKLSNlbmRpZgotCiAj ZW5kaWYJLyogX19YRlNfTElOVVhfSF9fICovCkluZGV4OiByZXBhaXIveGZzcHJv Z3MvbGlieGZzL2Rhcndpbi5jCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KLS0tIHJlcGFp ci5vcmlnL3hmc3Byb2dzL2xpYnhmcy9kYXJ3aW4uYwkyMDA3LTA0LTEyIDE0OjU1 OjUzLjAwMDAwMDAwMCArMTAwMAorKysgcmVwYWlyL3hmc3Byb2dzL2xpYnhmcy9k YXJ3aW4uYwkyMDA3LTA1LTMwIDEzOjQ4OjI1Ljk4OTY2NDE1NCArMTAwMApAQCAt MjEsNiArMjEsNyBAQAogI2luY2x1ZGUgPHN5cy9tb3VudC5oPgogI2luY2x1ZGUg PHN5cy9pb2N0bC5oPgogI2luY2x1ZGUgPHhmcy9saWJ4ZnMuaD4KKyNpbmNsdWRl IDxzeXMvc3lzY3RsLmg+CiAKIGludCBwbGF0Zm9ybV9oYXNfdXVpZCA9IDE7CiBl eHRlcm4gY2hhciAqcHJvZ25hbWU7CkBAIC05MCwxMyArOTEsNiBAQAogCSpic3og PSBCQlNJWkU7CiB9CiAKLS8qIEFSR1NVU0VEICovCi1pbnQKLXBsYXRmb3JtX2Fp b19pbml0KGludCBhaW9fY291bnQpCi17Ci0JcmV0dXJuIDA7CQkvKiBhaW8vbGlv X2xpc3RpbyBub3QgYXZhaWxhYmxlICovCi19Ci0KIGNoYXIgKgogcGxhdGZvcm1f ZmluZHJhd3BhdGgoY2hhciAqcGF0aCkKIHsKQEAgLTEyNCw1ICsxMTgsMjggQEAK IGludAogcGxhdGZvcm1fbnByb2Modm9pZCkKIHsKLQlyZXR1cm4gMTsKKwlpbnQJ CW5jcHU7CisJc2l6ZV90CQlsZW4gPSBzaXplb2YobmNwdSk7CisJc3RhdGljIGlu dAltaWJbMl0gPSB7Q1RMX0hXLCBIV19OQ1BVfTsKKworCWlmIChzeXNjdGwobWli LCAyLCAmbmNwdSwgJmxlbiwgTlVMTCwgMCkgPCAwKQorCQluY3B1ID0gMTsKKwor CXJldHVybiBuY3B1OwogfQorCit1bnNpZ25lZCBsb25nCitwbGF0Zm9ybV9waHlz bWVtKHZvaWQpCit7CisJdW5zaWduZWQgbG9uZwlwaHlzbWVtOworCXNpemVfdAkJ bGVuID0gc2l6ZW9mKHBoeXNtZW0pOworCXN0YXRpYyBpbnQJbWliWzJdID0ge0NU TF9IVywgSFdfUEhZU01FTX07CisKKwlpZiAoc3lzY3RsKG1pYiwgMiwgJnBoeXNt ZW0sICZsZW4sIE5VTEwsIDApIDwgMCkgeworCQlmcHJpbnRmKHN0ZGVyciwgXygi JXM6IGNhbid0IGRldGVybWluZSBtZW1vcnkgc2l6ZVxuIiksCisJCQlwcm9nbmFt ZSk7CisJCWV4aXQoMSk7CisJfQorCXJldHVybiBwaHlzbWVtID4+IDEwOworfQor CkluZGV4OiByZXBhaXIveGZzcHJvZ3MvbGlieGZzL2ZyZWVic2QuYwo9PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09Ci0tLSByZXBhaXIub3JpZy94ZnNwcm9ncy9saWJ4ZnMvZnJl ZWJzZC5jCTIwMDctMDQtMTIgMTQ6NTU6NTMuMDAwMDAwMDAwICsxMDAwCisrKyBy 
ZXBhaXIveGZzcHJvZ3MvbGlieGZzL2ZyZWVic2QuYwkyMDA3LTA1LTMwIDEzOjQ4 OjE3LjU4Njc1Mjk1NSArMTAwMApAQCAtMTQ5LDEzICsxNDksNiBAQAogCSpic3og PSAoaW50KXNzaXplOwogfQogCi0vKiBBUkdTVVNFRCAqLwotaW50Ci1wbGF0Zm9y bV9haW9faW5pdChpbnQgYWlvX2NvdW50KQotewotCXJldHVybiAwOwkJLyogYWlv L2xpb19saXN0aW8gbm90IGF2YWlsYWJsZSAqLwotfQotCiBjaGFyICoKIHBsYXRm b3JtX2ZpbmRyYXdwYXRoKGNoYXIgKnBhdGgpCiB7CkBAIC0xODMsNSArMTc2LDI3 IEBACiBpbnQKIHBsYXRmb3JtX25wcm9jKHZvaWQpCiB7Ci0JcmV0dXJuIDE7CisJ aW50CQluY3B1OworCXNpemVfdAkJbGVuID0gc2l6ZW9mKG5jcHUpOworCXN0YXRp YyBpbnQJbWliWzJdID0ge0NUTF9IVywgSFdfTkNQVX07CisKKwlpZiAoc3lzY3Rs KG1pYiwgMiwgJm5jcHUsICZsZW4sIE5VTEwsIDApIDwgMCkKKwkJbmNwdSA9IDE7 CisKKwlyZXR1cm4gbmNwdTsKK30KKwordW5zaWduZWQgbG9uZworcGxhdGZvcm1f cGh5c21lbSh2b2lkKQoreworCXVuc2lnbmVkIGxvbmcJcGh5c21lbTsKKwlzaXpl X3QJCWxlbiA9IHNpemVvZihwaHlzbWVtKTsKKwlzdGF0aWMgaW50CW1pYlsyXSA9 IHtDVExfSFcsIEhXX1BIWVNNRU19OworCisJaWYgKHN5c2N0bChtaWIsIDIsICZw aHlzbWVtLCAmbGVuLCBOVUxMLCAwKSA8IDApIHsKKwkJZnByaW50ZihzdGRlcnIs IF8oIiVzOiBjYW4ndCBkZXRlcm1pbmUgbWVtb3J5IHNpemVcbiIpLAorCQkJcHJv Z25hbWUpOworCQlleGl0KDEpOworCX0KKwlyZXR1cm4gcGh5c21lbSA+PiAxMDsK IH0KSW5kZXg6IHJlcGFpci94ZnNwcm9ncy9saWJ4ZnMvaW5pdC5oCj09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT0KLS0tIHJlcGFpci5vcmlnL3hmc3Byb2dzL2xpYnhmcy9pbml0 LmgJMjAwNy0wNC0xMiAxNDo1NTo1My4wMDAwMDAwMDAgKzEwMDAKKysrIHJlcGFp ci94ZnNwcm9ncy9saWJ4ZnMvaW5pdC5oCTIwMDctMDUtMzAgMTI6NTE6MTkuNDE5 MTY2NDg2ICsxMDAwCkBAIC0zMiw4ICszMiw4IEBACiBleHRlcm4gY2hhciAqcGxh dGZvcm1fZmluZGJsb2NrcGF0aCAoY2hhciAqcGF0aCk7CiBleHRlcm4gaW50IHBs YXRmb3JtX2RpcmVjdF9ibG9ja2RldiAodm9pZCk7CiBleHRlcm4gaW50IHBsYXRm b3JtX2FsaWduX2Jsb2NrZGV2ICh2b2lkKTsKLWV4dGVybiBpbnQgcGxhdGZvcm1f YWlvX2luaXQgKGludCBhaW9fY291bnQpOwogZXh0ZXJuIGludCBwbGF0Zm9ybV9u cHJvYyh2b2lkKTsKK2V4dGVybiB1bnNpZ25lZCBsb25nIHBsYXRmb3JtX3BoeXNt ZW0odm9pZCk7CS8qIGluIGtpbG9ieXRlcyAqLwogZXh0ZXJuIGludCBwbGF0Zm9y bV9oYXNfdXVpZDsKIAogI2VuZGlmCS8qIExJQlhGU19JTklUX0ggKi8KSW5kZXg6 
IHJlcGFpci94ZnNwcm9ncy9saWJ4ZnMvaXJpeC5jCj09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT0KLS0tIHJlcGFpci5vcmlnL3hmc3Byb2dzL2xpYnhmcy9pcml4LmMJMjAwNy0w NC0xMiAxNDo1NTo1My4wMDAwMDAwMDAgKzEwMDAKKysrIHJlcGFpci94ZnNwcm9n cy9saWJ4ZnMvaXJpeC5jCTIwMDctMDUtMzAgMTM6NDg6MzQuMTQwNjA3OTk2ICsx MDAwCkBAIC0xNyw3ICsxNyw2IEBACiAgKi8KIAogI2luY2x1ZGUgPHhmcy9saWJ4 ZnMuaD4KLSNpbmNsdWRlIDxhaW8uaD4KICNpbmNsdWRlIDxkaXNraW5mby5oPgog I2luY2x1ZGUgPHN5cy9zeXNtcC5oPgogCkBAIC02OCwxOSArNjcsNiBAQAogCSpi c3ogPSBCQlNJWkU7CiB9CiAKLWludAotcGxhdGZvcm1fYWlvX2luaXQoaW50IGFp b19jb3VudCkKLXsKLQlzdHJ1Y3QgYWlvaW5pdCBhaW9faW5pdDsKLQotCW1lbXNl dCgmYWlvX2luaXQsIDAsIHNpemVvZihhaW9faW5pdCkpOwotCWFpb19pbml0LmFp b190aHJlYWRzID0gYWlvX2NvdW50OwotCWFpb19pbml0LmFpb19udW11c2VycyA9 IGFpb19jb3VudDsKLQotCWFpb19zZ2lfaW5pdDY0KCZhaW9faW5pdCk7Ci0JcmV0 dXJuICgxKTsJCS8qIGFpby9saW9fbGlzdGlvIGF2YWlsYWJsZSAqLwotfQotCiBj aGFyICoKIHBsYXRmb3JtX2ZpbmRyYXdwYXRoKGNoYXIgKnBhdGgpCiB7CkBAIC0x MTEsMyArOTcsMTUgQEAKIAlyZXR1cm4gc3lzbXAoTVBfTlBST0NTKTsKIH0KIAor dW5zaWduZWQgbG9uZworcGxhdGZvcm1fcGh5c21lbSh2b2lkKQoreworCXN0cnVj dCBybWluZm8gcmk7CisKKwlpZiAoc3lzbXAoTVBfU0FHRVQsIE1QU0FfUk1JTkZP LCAmcmksIHNpemVvZihyaSkpIDwgMCkKKwkJZnByaW50ZihzdGRlcnIsIF8oIiVz OiBjYW4ndCBkZXRlcm1pbmUgbWVtb3J5IHNpemVcbiIpLAorCQkJcHJvZ25hbWUp OworCQlleGl0KDEpOworCX0KKwlyZXR1cm4gKHJpLnBoeXNtZW0gPj4gMTApICog Z2V0cGFnZXNpemUoKTsJLyoga2lsb2J5dGVzICovCit9ClwgTm8gbmV3bGluZSBh dCBlbmQgb2YgZmlsZQpJbmRleDogcmVwYWlyL3hmc3Byb2dzL2xpYnhmcy9saW51 eC5jCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT0KLS0tIHJlcGFpci5vcmlnL3hmc3Byb2dz L2xpYnhmcy9saW51eC5jCTIwMDctMDQtMTIgMTQ6NTU6NTMuMDAwMDAwMDAwICsx MDAwCisrKyByZXBhaXIveGZzcHJvZ3MvbGlieGZzL2xpbnV4LmMJMjAwNy0wNS0z MCAxMzo0NzozNC41ODQzMjQ4OTQgKzEwMDAKQEAgLTIwLDExICsyMCwxMSBAQAog I2luY2x1ZGUgPHhmcy9saWJ4ZnMuaD4KICNpbmNsdWRlIDxtbnRlbnQuaD4KICNp bmNsdWRlIDxzeXMvc3RhdC5oPgotI2luY2x1ZGUgPGFpby5oPgogI3VuZGVmIHVz 
dGF0CiAjaW5jbHVkZSA8c3lzL3VzdGF0Lmg+CiAjaW5jbHVkZSA8c3lzL21vdW50 Lmg+CiAjaW5jbHVkZSA8c3lzL2lvY3RsLmg+CisjaW5jbHVkZSA8c3lzL3N5c2lu Zm8uaD4KIAogaW50IHBsYXRmb3JtX2hhc191dWlkID0gMTsKIGV4dGVybiBjaGFy ICpwcm9nbmFtZTsKQEAgLTE3NCwxOSArMTc0LDYgQEAKIAkJbWF4X2Jsb2NrX2Fs aWdubWVudCA9ICpic3o7CiB9CiAKLWludAotcGxhdGZvcm1fYWlvX2luaXQoaW50 IGFpb19jb3VudCkKLXsKLQlzdHJ1Y3QgYWlvaW5pdCBsY2xfYWlvX2luaXQ7Ci0K LQltZW1zZXQoJmxjbF9haW9faW5pdCwgMCwgc2l6ZW9mKGxjbF9haW9faW5pdCkp OwotCWxjbF9haW9faW5pdC5haW9fdGhyZWFkcyA9IGFpb19jb3VudDsKLQlsY2xf YWlvX2luaXQuYWlvX251bXVzZXJzID0gYWlvX2NvdW50OwotCi0JYWlvX2luaXQo JmxjbF9haW9faW5pdCk7Ci0JcmV0dXJuICgxKTsJCS8qIGFpby9saW9fbGlzdGlv IGF2YWlsYWJsZSAqLwotfQotCiBjaGFyICoKIHBsYXRmb3JtX2ZpbmRyYXdwYXRo KGNoYXIgKnBhdGgpCiB7CkBAIC0yMTgsMyArMjA1LDE2IEBACiB7CiAJcmV0dXJu IHN5c2NvbmYoX1NDX05QUk9DRVNTT1JTX09OTE4pOwogfQorCit1bnNpZ25lZCBs b25nCitwbGF0Zm9ybV9waHlzbWVtKHZvaWQpCit7CisJc3RydWN0IHN5c2luZm8g IHNpOworCisJaWYgKHN5c2luZm8oJnNpKSA8IDApIHsKKwkJZnByaW50ZihzdGRl cnIsIF8oIiVzOiBjYW4ndCBkZXRlcm1pbmUgbWVtb3J5IHNpemVcbiIpLAorCQkJ cHJvZ25hbWUpOworCQlleGl0KDEpOworCX0KKwlyZXR1cm4gKHNpLnRvdGFscmFt ID4+IDEwKSAqIHNpLm1lbV91bml0OwkvKiBraWxvYnl0ZXMgKi8KK30KSW5kZXg6 IHJlcGFpci94ZnNwcm9ncy9yZXBhaXIvZGlyX3N0YWNrLmgKPT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PQotLS0gcmVwYWlyLm9yaWcveGZzcHJvZ3MvcmVwYWlyL2Rpcl9zdGFj ay5oCTIwMDctMDQtMTIgMTQ6NTU6NTcuMDAwMDAwMDAwICsxMDAwCisrKyAvZGV2 L251bGwJMTk3MC0wMS0wMSAwMDowMDowMC4wMDAwMDAwMDAgKzAwMDAKQEAgLTEs MzMgKzAsMCBAQAotLyoKLSAqIENvcHlyaWdodCAoYykgMjAwMC0yMDAxLDIwMDUg U2lsaWNvbiBHcmFwaGljcywgSW5jLgotICogQWxsIFJpZ2h0cyBSZXNlcnZlZC4K LSAqCi0gKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiBy ZWRpc3RyaWJ1dGUgaXQgYW5kL29yCi0gKiBtb2RpZnkgaXQgdW5kZXIgdGhlIHRl cm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBhcwotICogcHVi bGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24uCi0gKgotICog VGhpcyBwcm9ncmFtIGlzIGRpc3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQgaXQg 
d291bGQgYmUgdXNlZnVsLAotICogYnV0IFdJVEhPVVQgQU5ZIFdBUlJBTlRZOyB3 aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YKLSAqIE1FUkNIQU5U QUJJTElUWSBvciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNl ZSB0aGUKLSAqIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZvciBtb3JlIGRl dGFpbHMuCi0gKgotICogWW91IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBv ZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UKLSAqIGFsb25nIHdpdGgg dGhpcyBwcm9ncmFtOyBpZiBub3QsIHdyaXRlIHRoZSBGcmVlIFNvZnR3YXJlIEZv dW5kYXRpb24sCi0gKiBJbmMuLCAgNTEgRnJhbmtsaW4gU3QsIEZpZnRoIEZsb29y LCBCb3N0b24sIE1BICAwMjExMC0xMzAxICBVU0EKLSAqLwotCi10eXBlZGVmIHN0 cnVjdCBkaXJfc3RhY2tfZWxlbSAgewotCXhmc19pbm9fdAkJaW5vOwotCXN0cnVj dCBkaXJfc3RhY2tfZWxlbQkqbmV4dDsKLX0gZGlyX3N0YWNrX2VsZW1fdDsKLQot dHlwZWRlZiBzdHJ1Y3QgZGlyX3N0YWNrICB7Ci0JaW50CQkJY250OwotCWRpcl9z dGFja19lbGVtX3QJKmhlYWQ7Ci19IGRpcl9zdGFja190OwotCi0KLXZvaWQJCWRp cl9zdGFja19pbml0KGRpcl9zdGFja190ICpzdGFjayk7Ci0KLXZvaWQJCXB1c2hf ZGlyKGRpcl9zdGFja190ICpzdGFjaywgeGZzX2lub190IGlubyk7Ci14ZnNfaW5v X3QJcG9wX2RpcihkaXJfc3RhY2tfdCAqc3RhY2spOwo= ------------IrJlgiTfBCmIeXv1iG6QRc-- From owner-xfs@oss.sgi.com Mon Jun 4 19:37:06 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 19:37:09 -0700 (PDT) Received: from nz-out-0506.google.com (nz-out-0506.google.com [64.233.162.239]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l552b4Wt011960 for ; Mon, 4 Jun 2007 19:37:05 -0700 Received: by nz-out-0506.google.com with SMTP id 4so1046391nzn for ; Mon, 04 Jun 2007 19:37:04 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:message-id:date:from:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=V2hx4hEtayAjXpYjkPPxDHkfDO9Z4FDZJzq6O0MNMn6F6x3/0Gu6l2V1eOk5leHqvC68Y10Z+g8vzmVymMx36mVzHIW2KPfE/J7pEqVXFRMbCWLKuUJL5GT1/E1RHopwtQkMwzUwZog0YvonwnSXbX/URiAqBO/I6hyWBQQUGC8= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; 
h=received:message-id:date:from:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=fU2KeVCrzy/Oqr2KILX/7Em+v5q6CdR9yqCYFqwXTG3UoYcysH2yHNTUu8WBIKvEQLpwEQCoqZVvWnzLcBp/QwVC2gp2XpjzqtWELH3aEmXF5zFRPtb+LN7a9LsZtBc1oOX6vBvPM91vPiXo7uX6716pvp3DSEG3Rymp/VAsV2E= Received: by 10.114.106.1 with SMTP id e1mr5450745wac.1181009406802; Mon, 04 Jun 2007 19:10:06 -0700 (PDT) Received: by 10.115.55.14 with HTTP; Mon, 4 Jun 2007 19:10:06 -0700 (PDT) Message-ID: Date: Mon, 4 Jun 2007 22:10:06 -0400 From: "=?ISO-8859-1?Q?Germ=E1n_Po=F3-Caama=F1o?=" To: xfs@oss.sgi.com Subject: Reporting a bug MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Disposition: inline Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id l552b6Wt011965 X-archive-position: 11641 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: german.poo@gmail.com Precedence: bulk X-list: xfs I have been having some problems with an XFS partition in Debian Sarge: After a (supposedly) clean reboot, my machine started up with kernel messages reporting problems, such as XFS_WANT_CORRUPTED_GOTO and XFS_WANT_CORRUPTED_RETURN. The damage was mainly located in /var. But, after cleaning that up, I checked the other partitions. I guessed that my root partition (/dev/sda5) had problems as well. I mounted the partition read-only and ran xfs_repair on it. xfs_repair moved 6 files (all of them ELF binaries) to lost+found. After rebooting the machine, it can't boot anymore. Trying with Sysrescue 0.3.5 I get the following: # xfs_check /dev/sda5 [...] dir 1310848 block 8388608 extra leaf entry fc4e7e74 e7 dir 1310848 block 8388608 extra leaf entry fcdbb5f3 8f dir 1310848 block 8388608 extra leaf entry fddcbf74 164 /usr/bin/xfs_check: line 28: 14691 Segmentation fault xfs_db$DBOPTS -i -p xfs_check -c "check$OPTS" $1 # xfs_repair -n /dev/sda5 Phase 1 - find and verify superblock... 
Phase 2 - using internal log - scan filesystem freespace and inode maps... - found root inode chunk Phase 3 - for each AG... - scan (but don't clear) agi unlinked lists... - process known inodes and perform inode discovery... - agno = 0 - agno = 1 - agno = 2 - agno = 3 bad nextents 12 for inode 786561, would reset to 13 bad directory leaf magic # 0x46e for directory inode 786561 block 8388610 - agno = 4 - agno = 5 bmap rec out of order, inode 1310848 entry 2 [o s c] [8388608 81946 1], 1 [8388608 81946 1] bad data fork in inode 1310848 would have cleared inode 1310848 - agno = 6 - agno = 7 - process newly discovered inodes... Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - check for inodes claiming duplicate blocks... - agno = 0 entry "etc" at block 0 offset 128 in directory inode 128 references free inode 1310848 would clear inode number in entry at offset 128... entry ".." at block 0 offset 32 in directory inode 149 references free inode 1310848 would clear inode number in entry at offset 32... entry ".." at block 0 offset 32 in directory inode 970 references free inode 1310848 would clear inode number in entry at offset 32... entry ".." at block 0 offset 32 in directory inode 17805 references free inode 1310848 would clear inode number in entry at offset 32... entry ".." at block 0 offset 32 in directory inode 42528 references free inode 1310848 would clear inode number in entry at offset 32... - agno = 1 entry ".." at block 0 offset 32 in directory inode 262276 references free inode 1310848 would clear inode number in entry at offset 32... entry ".." at block 0 offset 32 in directory inode 262288 references free inode 1310848 would clear inode number in entry at offset 32... - agno = 2 entry ".." at block 0 offset 32 in directory inode 524569 references free inode 1310848 would clear inode number in entry at offset 32... entry ".." 
at block 0 offset 32 in directory inode 560783 references free inode 1310848 would clear inode number in entry at offset 32... - agno = 3 bad nextents 12 for inode 786561, would reset to 13 entry ".." at block 0 offset 32 in directory inode 786608 references free inode 1310848 would clear inode number in entry at offset 32... - agno = 4 entry ".." at block 0 offset 32 in directory inode 1067905 references free inode 1310848 would clear inode number in entry at offset 32... entry ".." at block 0 offset 32 in directory inode 1067924 references free inode 1310848 would clear inode number in entry at offset 32... - agno = 5 bmap rec out of order, inode 1310848 entry 2 [o s c] [8388608 81946 1], 1 [8388608 81946 1] bad data fork in inode 1310848 would have cleared inode 1310848 entry ".." at block 0 offset 32 in directory inode 1310944 references free inode 1310848 would clear inode number in entry at offset 32... - agno = 6 entry ".." at block 0 offset 32 in directory inode 1573000 references free inode 1310848 would clear inode number in entry at offset 32... entry ".." at block 0 offset 32 in directory inode 1573094 references free inode 1310848 would clear inode number in entry at offset 32... entry ".." at block 0 offset 32 in directory inode 1573120 references free inode 1310848 would clear inode number in entry at offset 32... - agno = 7 entry ".." at block 0 offset 32 in directory inode 1835140 references free inode 1310848 would clear inode number in entry at offset 32... entry ".." at block 0 offset 32 in directory inode 1835168 references free inode 1310848 would clear inode number in entry at offset 32... entry ".." at block 0 offset 32 in directory inode 1854273 references free inode 1310848 would clear inode number in entry at offset 32... entry ".." at block 0 offset 32 in directory inode 1854300 references free inode 1310848 would clear inode number in entry at offset 32... No modify flag set, skipping phase 5 Phase 6 - check inode connectivity... 
- traversing filesystem starting at / ... entry "etc" in directory inode 128 points to free inode 1310848, would junk entry corrupt dinode 786561, (btree extents). This is a bug. Please report it to xfs@oss.sgi.com. corrupt dinode 786561, (btree extents). This is a bug. Please report it to xfs@oss.sgi.com. corrupt dinode 786561, (btree extents). This is a bug. Please report it to xfs@oss.sgi.com. Segmentation fault -- Germán Poó Caamaño http://www.gnome.org/~gpoo/ From owner-xfs@oss.sgi.com Mon Jun 4 19:44:07 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Jun 2007 19:44:10 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l552i4Wt013922 for ; Mon, 4 Jun 2007 19:44:06 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA07293; Tue, 5 Jun 2007 12:43:59 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 1161) id 18DCB58C38C1; Tue, 5 Jun 2007 12:43:59 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 957724 - need an xfsdump/xfscopy style tools that collects metadata only Message-Id: <20070605024359.18DCB58C38C1@chook.melbourne.sgi.com> Date: Tue, 5 Jun 2007 12:43:59 +1000 (EST) From: bnaujok@sgi.com (Barry Naujok) X-archive-position: 11642 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs XFS metadata dump tool Date: Tue Jun 5 12:43:11 AEST 2007 Workarea: chook.melbourne.sgi.com:/home/bnaujok/isms/metadump Inspected by: Christoph Hellwig The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:28782a xfsprogs/mdrestore/xfs_mdrestore.c - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/mdrestore/xfs_mdrestore.c 
xfsprogs/mdrestore/Makefile - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/mdrestore/Makefile - Metadump restore tool xfsprogs/db/metadump.c - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/db/metadump.c - Add metadump to xfs_db xfsprogs/man/man8/xfs_mdrestore.8 - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/man/man8/xfs_mdrestore.8 - Update man page for xfs_metadump xfsprogs/db/xfs_metadump.sh - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/db/xfs_metadump.sh - Script wrapper for xfs_db metadump command xfsprogs/include/xfs_metadump.h - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/include/xfs_metadump.h - Header for xfs_metadump structures xfsprogs/db/metadump.h - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/db/metadump.h - Add metadump to xfs_db xfsprogs/man/man8/xfs_metadump.8 - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/man/man8/xfs_metadump.8 - Update man page for xfs_metadump xfsprogs/db/init.c - 1.17 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/db/init.c.diff?r1=text&tr1=1.17&r2=text&tr2=1.16&f=h xfsprogs/db/Makefile - 1.17 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/db/Makefile.diff?r1=text&tr1=1.17&r2=text&tr2=1.16&f=h xfsprogs/db/command.c - 1.14 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/db/command.c.diff?r1=text&tr1=1.14&r2=text&tr2=1.13&f=h - Add metadump to xfs_db xfsprogs/Makefile - 1.28 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/Makefile.diff?r1=text&tr1=1.28&r2=text&tr2=1.27&f=h - Add xfs_mdrestore directory to makefile xfsprogs/VERSION - 1.172 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/VERSION.diff?r1=text&tr1=1.172&r2=text&tr2=1.171&f=h xfsprogs/doc/CHANGES - 1.241 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/doc/CHANGES.diff?r1=text&tr1=1.241&r2=text&tr2=1.240&f=h - 
Update to version 2.9.0 xfsprogs/man/man8/xfs_db.8 - 1.14 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/man/man8/xfs_db.8.diff?r1=text&tr1=1.14&r2=text&tr2=1.13&f=h xfsprogs/man/man8/xfs_repair.8 - 1.8 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/man/man8/xfs_repair.8.diff?r1=text&tr1=1.8&r2=text&tr2=1.7&f=h - Update man page for xfs_metadump From owner-xfs@oss.sgi.com Tue Jun 5 00:40:35 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 00:40:38 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l557eVWt008006 for ; Tue, 5 Jun 2007 00:40:34 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA14559; Tue, 5 Jun 2007 17:40:26 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id B7F6358C38C1; Tue, 5 Jun 2007 17:40:26 +1000 (EST) To: xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 964465 - Transaction delta counts are not applied atomically Message-Id: <20070605074026.B7F6358C38C1@chook.melbourne.sgi.com> Date: Tue, 5 Jun 2007 17:40:26 +1000 (EST) From: dgc@sgi.com (David Chinner) X-archive-position: 11643 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Apply transaction delta counts atomically to incore counters With the per-cpu superblock counters, batch updates are no longer atomic across the entire batch of changes. This is not an issue if each individual change in the batch is applied atomically. Unfortunately, free block count changes are not applied atomically, and they are applied in a manner guaranteed to cause problems. 
Essentially, the free block count reservation that the transaction took initially is returned to the incore counters before a second delta takes away what is used. Because these two operations are not atomic, we can race with another thread that uses the returned transaction reservation before the transaction takes the space away again, and we can then see ENOSPC reported in a spot where we don't have an ENOSPC condition and should never see one. Fix it up by rolling the two deltas into one so it can be applied safely (i.e. atomically) to the incore counters. Date: Tue Jun 5 17:39:41 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: hch@infradead.org The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28796a fs/xfs/xfs_trans.c - 1.181 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_trans.c.diff?r1=text&tr1=1.181&r2=text&tr2=1.180&f=h - Apply transaction deltas atomically by ensuring we only ever update each counter once per transaction. 
From owner-xfs@oss.sgi.com Tue Jun 5 00:52:19 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 00:52:21 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l557qGWt011448 for ; Tue, 5 Jun 2007 00:52:18 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA14851; Tue, 5 Jun 2007 17:52:12 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id B502958C38C1; Tue, 5 Jun 2007 17:52:12 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 964647 - file corruption seen when DMF recalls files with buffered i/o Message-Id: <20070605075212.B502958C38C1@chook.melbourne.sgi.com> Date: Tue, 5 Jun 2007 17:52:12 +1000 (EST) From: dgc@sgi.com (David Chinner) X-archive-position: 11644 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Map unwritten extents correctly for I/O completion processing If we have multiple unwritten extents within a single page, we fail to tell the I/O completion construction handlers we need a new handle for the second and subsequent blocks in the page. While we still issue the I/O correctly, we do not have the correct ranges recorded in the ioend structures, and hence when we go to convert the unwritten extents we screw it up. Make sure we start a new ioend every time the mapping changes so that we convert the correct ranges on I/O completion. 
Date: Tue Jun 5 17:51:26 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: hch@infradead.org The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28797a fs/xfs/linux-2.6/xfs_aops.c - 1.146 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_aops.c.diff?r1=text&tr1=1.146&r2=text&tr2=1.145&f=h - If the mapping changes between blocks in xfs_page_state_convert, make sure that we also start a new ioend for the new mapping. From owner-xfs@oss.sgi.com Tue Jun 5 01:23:47 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 01:23:50 -0700 (PDT) Received: from astra.simleu.ro (astra.simleu.ro [80.97.18.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l558NjWt020118 for ; Tue, 5 Jun 2007 01:23:47 -0700 Received: from teal.hq.k1024.org (84-75-124-135.dclient.hispeed.ch [84.75.124.135]) by astra.simleu.ro (Postfix) with ESMTP id 276AD152; Tue, 5 Jun 2007 11:23:43 +0300 (EEST) Received: by teal.hq.k1024.org (Postfix, from userid 4004) id 181F0411159; Tue, 5 Jun 2007 10:00:13 +0200 (CEST) Date: Tue, 5 Jun 2007 10:00:12 +0200 From: Iustin Pop To: David Chinner Cc: Ruben Porras , xfs@oss.sgi.com, cw@f00f.org Subject: Re: XFS shrink functionality Message-ID: <20070605080012.GA10677@teal.hq.k1024.org> Mail-Followup-To: David Chinner , Ruben Porras , xfs@oss.sgi.com, cw@f00f.org References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> <20070604084154.GA8273@teal.hq.k1024.org> <20070604092115.GX85884050@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604092115.GX85884050@sgi.com> X-Linux: This message was written on Linux X-Header: /usr/include gives great headers User-Agent: Mutt/1.5.13 (2006-08-11) X-archive-position: 11645 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com 
X-original-sender: iusty@k1024.org Precedence: bulk X-list: xfs On Mon, Jun 04, 2007 at 07:21:15PM +1000, David Chinner wrote: > > allocated on an available AG and when you remove the originals, the > > to-be-shrinked AGs become free. Yes, utterly non-optimal, but it was the > > simplest way to do it based on what I knew at the time. > > Not quite that simple, unfortunately. You can't leave the > AGs locked in the same way we do for a grow because we need > to be able to use the AGs to move stuff about and that > requires locking them. Hence we need a separate mechanism > to prevent allocation in a given AG outside of locking them. > > Hence we need: > > - a transaction to mark AGs "no-allocate" > - a transaction to mark AGs "allocatable" > - a flag in each AGF/AGI to say the AG is available for > allocations (persistent over crashes) > - a flag in the per-ag structure to indicate allocation > status of the AG. > - everywhere we select an AG for allocation, we need to > check this flag and skip the AG if it's not available. > > FWIW, the transactions can probably just be an extension of > xfs_alloc_log_agf() and xfs_alloc_log_agi().... A question: do you think that the cost of having this in the code (especially the last part, checking that flag in every allocation function) is acceptable? I mean, let's say one were to write the patch implementing all this. Does it have a chance of being accepted? Or will people say it's only bloat? ... > > I was > > more thinking that the offline-AG should be a bit on the AG that could > > be changed by the admin (like xfs_freeze); this could also help for > > other reasons than shrink (when on a big FS some AGs lie on a physical > > device and others on a different device, and you would like to restrict > > writes to a given AG, as much as possible). > > Yes, that's exactly what I'm talking about ;) Ah, now I see what you meant by having a transaction for locking/unlocking AGs for allocation. 
> Yeah, 1) and 4) are separable parts of the problem and can be done > in any order. 2) can be implemented relatively easily as stated > above. > > 3) is the hard one - we need to find the owner of each block > (metadata and data) remaining in the AGs to be removed. This may be > a directory btree block, a inode extent btree block, a data block, > and extended attr block, etc. Moving the data blocks is easy to > do (swap extents), but moving the metadata blocks is a major PITA > as it will need to be done transactionally and that will require > a bunch of new (complex) code to be written, I think. It will be > of equivalent complexity to defragmenting metadata.... > > If we ignore the metadata block problem then finding and moving the > data blocks should not be a problem - swap extents can be used for > that as well - but it will be extremely time consuming and won't > scale to large filesystem sizes.... So given these caveats, is there a chance that a) this will actually be useful, and b) it will be accepted? The last time I tried to work on this there was no real feedback, and I'm thinking that maybe the code would be too intrusive and give too little gain to be accepted. 
Thanks for your comments, iustin From owner-xfs@oss.sgi.com Tue Jun 5 07:49:55 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 07:49:59 -0700 (PDT) Received: from wa-out-1112.google.com (wa-out-1112.google.com [209.85.146.178]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l55EnsWt015560 for ; Tue, 5 Jun 2007 07:49:55 -0700 Received: by wa-out-1112.google.com with SMTP id k22so2207115waf for ; Tue, 05 Jun 2007 07:49:54 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:message-id:date:from:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=BplMohzWFsZ9B3BPPIPJb4TGByD38KRXhTn2NbKnYyC5aftP7StMktGhFg6p3Jj4OfyIsiHnc81qCwm76HGCpi3OXMVedjvn73HExVsMNJGB/0g8aK2Ag1L9vzR3/DYnW8bB1/bmmhrrl2OMn5FRn1LcBHqDaDNTqBjPCs+MRlY= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=QZog1A56EtcUVQSsXmFa4Elx51uOvudJg7/hRwMbW1/NTj0a6rYWsAk9fWJXmJr8QoeJtviW1/kNgXliABv0tc796GZwAgOubxu8nqQ/WhfZuvYWaV/qwTnjW8eSQIEvD4cRvkcm/yBFuAWFSFVQ/jh/Cfrys3ZY1sOiNWqIlSg= Received: by 10.114.201.1 with SMTP id y1mr5995386waf.1181054994139; Tue, 05 Jun 2007 07:49:54 -0700 (PDT) Received: by 10.114.13.15 with HTTP; Tue, 5 Jun 2007 07:49:54 -0700 (PDT) Message-ID: <5d96567b0706050749o74bc7701g154e836c43a511be@mail.gmail.com> Date: Tue, 5 Jun 2007 16:49:54 +0200 From: "Raz Ben-Jehuda(caro)" To: linux-xfs@oss.sgi.com Subject: building xfstests MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-archive-position: 11646 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: raziebe@gmail.com Precedence: bulk X-list: xfs Hello, I have downloaded the cvs repository of the xfs-cmds module from the web. 
I am failing to build it; below is the "make" output. ... == dist, log is Logs/dist Wrote: /d1/rt/raz/downloads/xfs-cmds/xfsprogs/build/xfsprogs-2.9.0.src.tar.gz Wrote: /d1/rt/raz/downloads/xfs-cmds/xfsprogs/build/tar/xfsprogs-2.9.0.tar.gz == Building dmapi == clean, log is Logs/clean == configure, log is Logs/configure == default, log is Logs/default == dist, log is Logs/dist Wrote: /d1/rt/raz/downloads/xfs-cmds/dmapi/build/dmapi-2.2.8.src.tar.gz Wrote: /d1/rt/raz/downloads/xfs-cmds/dmapi/build/tar/dmapi-2.2.8.tar.gz == Building xfsdump == clean, log is Logs/clean == configure, log is Logs/configure == default, log is Logs/default = dist, log is Logs/dist Wrote: /d1/rt/raz/downloads/xfs-cmds/xfsdump/build/xfsdump-2.2.45.src.tar.gz Wrote: /d1/rt/raz/downloads/xfs-cmds/xfsdump/build/tar/xfsdump-2.2.45.tar.gz for d in attr acl xfsprogs dmapi xfsdump; do \ ( cd /d1/rt/raz/downloads/xfs-cmds && /bin/cp $d/build/rpm/*.src.rpm /d1/rt/raz/downloads/xfs-cmds/SRPMS ) \ done /bin/cp: cannot stat `attr/build/rpm/*.src.rpm': No such file or directory /bin/cp: cannot stat `acl/build/rpm/*.src.rpm': No such file or directory /bin/cp: cannot stat `xfsprogs/build/rpm/*.src.rpm': No such file or directory /bin/cp: cannot stat `dmapi/build/rpm/*.src.rpm': No such file or directory /bin/cp: cannot stat `xfsdump/build/rpm/*.src.rpm': No such file or directory make: *** [cmds] Error 1 anyone? 
-- Raz From owner-xfs@oss.sgi.com Tue Jun 5 08:07:06 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 08:07:09 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l55F75Wt021248 for ; Tue, 5 Jun 2007 08:07:06 -0700 Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id DB0D91806631F; Tue, 5 Jun 2007 10:07:04 -0500 (CDT) Message-ID: <46657C17.1090105@sandeen.net> Date: Tue, 05 Jun 2007 10:07:03 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.0 (Macintosh/20070326) MIME-Version: 1.0 To: "Raz Ben-Jehuda(caro)" CC: linux-xfs@oss.sgi.com Subject: Re: building xfstests References: <5d96567b0706050749o74bc7701g154e836c43a511be@mail.gmail.com> In-Reply-To: <5d96567b0706050749o74bc7701g154e836c43a511be@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 11647 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Raz Ben-Jehuda(caro) wrote: > Hello > I have downloaded from the web the cvs repository of the > module xfs-cmds. > > I failing to build it. the bellow is the "make" output. ... > Wrote: > /d1/rt/raz/downloads/xfs-cmds/xfsdump/build/xfsdump-2.2.45.src.tar.gz > Wrote: > /d1/rt/raz/downloads/xfs-cmds/xfsdump/build/tar/xfsdump-2.2.45.tar.gz So, everything built - but it just did not build rpms - which is maybe fine with you? 
> for d in attr acl xfsprogs dmapi xfsdump; do \ > ( cd /d1/rt/raz/downloads/xfs-cmds && /bin/cp > $d/build/rpm/*.src.rpm /d1/rt/raz/downloads/xfs-cmds/SRPMS ) \ > done > /bin/cp: cannot stat `attr/build/rpm/*.src.rpm': No such file or directory > /bin/cp: cannot stat `acl/build/rpm/*.src.rpm': No such file or directory > /bin/cp: cannot stat `xfsprogs/build/rpm/*.src.rpm': No such file or > directory > /bin/cp: cannot stat `dmapi/build/rpm/*.src.rpm': No such file or directory > /bin/cp: cannot stat `xfsdump/build/rpm/*.src.rpm': No such file or > directory It's just trying to copy rpms to a central location, and they didn't build for you (maybe you're on debian or similar?) So kind of a dumb top-level makefile (hmm who wrote that stuff... ;-) but it looks like all of your packages built as tarballs. -Eric > make: *** [cmds] Error 1 > > anyone ? > > From owner-xfs@oss.sgi.com Tue Jun 5 13:24:33 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 13:24:38 -0700 (PDT) Received: from mail.g-house.de (ns2.g-housing.de [81.169.133.75]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l55KOWWt021662 for ; Tue, 5 Jun 2007 13:24:33 -0700 Received: from [89.54.183.197] (helo=noname) by mail.g-house.de with esmtpsa (TLS-1.0:DHE_RSA_AES_256_CBC_SHA:32) (Exim 4.50) id 1HvfZs-0006xy-Pu; Tue, 05 Jun 2007 22:24:29 +0200 Date: Tue, 5 Jun 2007 22:24:32 +0200 (CEST) From: Christian Kujau X-X-Sender: dummy@foobar-g4 To: "Raz Ben-Jehuda(caro)" cc: xfs@oss.sgi.com Subject: Re: corruption bug in 2.6.17 In-Reply-To: <5d96567b0706050127y7f5eap7b92cddb5cdae02d@mail.gmail.com> Message-ID: References: <5d96567b0706021407q4455b60asd9d23ef82cb90b55@mail.gmail.com> <5d96567b0706050127y7f5eap7b92cddb5cdae02d@mail.gmail.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=us-ascii X-archive-position: 11648 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lists@nerdbynature.de 
Precedence: bulk X-list: xfs [please reply on-list, so that everybody can help. and top-posting is evil;) ] On Tue, 5 Jun 2007, Raz Ben-Jehuda(caro) wrote: > It is unclear from the web page how can reproduce this bug. Can you > think of a several steps that produces it so I will be able to know > whether 2.7.17.7 fix had fixed it ? I have been bitten by this one too and to "reproduce" it I only had to 1) boot 2.6.17.x (x<7) 2) generate some IO on the xfs-mounted partition...and wait until the fs shut down. The fix is really to use a 2.6.17.7 or later kernel and check your xfs with xfsprogs (version 2.8.10 or later) - if no corruptions are found, you should be fine. C. -- make bzImage, not war From owner-xfs@oss.sgi.com Tue Jun 5 15:23:38 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 15:23:40 -0700 (PDT) Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l55MNbWt025377 for ; Tue, 5 Jun 2007 15:23:38 -0700 Received: from agami.com (mail [192.168.168.5]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id l55MN79r028549 for ; Tue, 5 Jun 2007 15:23:13 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id l55MNRre022798 for ; Tue, 5 Jun 2007 15:23:27 -0700 Received: from [10.123.4.142] ([10.123.4.142]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Tue, 5 Jun 2007 15:23:50 -0700 Message-ID: <4665E276.9020406@agami.com> Date: Tue, 05 Jun 2007 15:23:50 -0700 From: Michael Nishimoto User-Agent: Mail/News 1.5.0.4 (X11/20060629) MIME-Version: 1.0 To: David Chinner CC: Michael Nishimoto , xfs@oss.sgi.com Subject: Re: Reducing memory requirements for high extent xfs files References: <200705301649.l4UGnckA027406@oss.sgi.com> <20070530225516.GB85884050@sgi.com> In-Reply-To: <20070530225516.GB85884050@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit 
X-OriginalArrivalTime: 05 Jun 2007 22:23:50.0448 (UTC) FILETIME=[31353B00:01C7A7C0] X-Scanned-By: MIMEDefang 2.58 on 192.168.168.13 X-archive-position: 11649 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: miken@agami.com Precedence: bulk X-list: xfs David Chinner wrote: > On Wed, May 30, 2007 at 09:49:38AM -0700, Michael Nishimoto wrote: > > Hello, > > > > Has anyone done any work or had thoughts on changes required > > to reduce the total memory footprint of high extent xfs files? > > We changed the way we do memory allocation to avoid needing > large contiguous chunks of memory a bit over a year ago; > that solved the main OOM problem we were getting reported > with highly fragmented files. > > > Obviously, it is important to reduce fragmentation as files > > are generated and to regularly defrag files, but both of these > > alternatives are not complete solutions. > > > > To reduce memory consumption, xfs could bring in extents > > from disk as needed (or just before needed) and could free > > up mappings when certain extent ranges have not been recently > > accessed. A solution should become more aggressive about > > reclaiming extent mapping memory as free memory becomes limited. > > Yes, it could, but that's a pretty major overhaul of the extent > interface which currently assumes everywhere that the entire > extent tree is in core. > > Can you describe the problem you are seeing that leads you to > ask this question? What's the problem you need to solve? > > Cheers, > > Dave. > -- > Dave Chinner > Principal Engineer > SGI Australian Software Group I realize that this work won't be trivial which is why I asked if anyone has thought about all relevant issues. When using NFS over XFS, slowly growing files (can be ascii log files) tend to fragment quite a bit. One system had several hundred files which required more than one page to store the extents. 
Quite a few files had extent counts greater than 10k, and one file had 120k extents. Besides the memory consumption, latency to return the first byte of the file can get noticeable. Michael From owner-xfs@oss.sgi.com Tue Jun 5 16:00:07 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 16:00:12 -0700 (PDT) Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l55N06Wt003482 for ; Tue, 5 Jun 2007 16:00:07 -0700 Received: from [192.168.5.76] (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 17FF892C3E1; Wed, 6 Jun 2007 09:00:05 +1000 (EST) Subject: Re: corruption bug in 2.6.17 From: Nathan Scott To: Christian Kujau Cc: "Raz Ben-Jehuda(caro)" , xfs@oss.sgi.com In-Reply-To: References: <5d96567b0706021407q4455b60asd9d23ef82cb90b55@mail.gmail.com> <5d96567b0706050127y7f5eap7b92cddb5cdae02d@mail.gmail.com> Content-Type: text/plain Date: Wed, 06 Jun 2007 08:58:47 +1000 Message-Id: <1181084327.3176.4.camel@edge.yarra.acx> Mime-Version: 1.0 X-Mailer: Evolution 2.8.0 (2.8.0-33.el5) Content-Transfer-Encoding: 7bit X-archive-position: 11650 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Tue, 2007-06-05 at 22:24 +0200, Christian Kujau wrote: > ... > 2) generate some IO on the xfs-mounted partition...and wait until > the > fs shut down. The problem was actually related to a particular form of btree dir2 transition - so, certain combinations of file names in a relatively large directory with size changes would trigger it. "generate some IO" is a bit too vague to be helpful. cheers. 
-- Nathan From owner-xfs@oss.sgi.com Tue Jun 5 16:01:16 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 16:01:19 -0700 (PDT) Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l55N1FWt003808 for ; Tue, 5 Jun 2007 16:01:16 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 25B72B000867; Tue, 5 Jun 2007 19:01:15 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 243865000168; Tue, 5 Jun 2007 19:01:15 -0400 (EDT) Date: Tue, 5 Jun 2007 19:01:15 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Nathan Scott cc: Christian Kujau , "Raz Ben-Jehuda(caro)" , xfs@oss.sgi.com Subject: Re: corruption bug in 2.6.17 In-Reply-To: <1181084327.3176.4.camel@edge.yarra.acx> Message-ID: References: <5d96567b0706021407q4455b60asd9d23ef82cb90b55@mail.gmail.com> <5d96567b0706050127y7f5eap7b92cddb5cdae02d@mail.gmail.com> <1181084327.3176.4.camel@edge.yarra.acx> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-archive-position: 11651 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs On Wed, 6 Jun 2007, Nathan Scott wrote: > On Tue, 2007-06-05 at 22:24 +0200, Christian Kujau wrote: >> ... >> 2) generate some IO on the xfs-mounted partition...and wait until >> the >> fs shut down. > > The problem was actually related to a particular form of btree > dir2 transition - so, certain combinations of file names in a > relatively large directory with size changes would trigger it. > > "generate some IO" is a bit too vague to be helpful. > > cheers. > > -- > Nathan > > The patch that fixed it: --- linux-2.6.17.6.orig/fs/xfs/xfs_dir2_node.c +++ linux-2.6.17.6/fs/xfs/xfs_dir2_node.c @@ -970,7 +970,7 @@ xfs_dir2_leafn_remove( /* * One less used entry in the free table. 
*/ - free->hdr.nused = cpu_to_be32(-1); + be32_add(&free->hdr.nused, -1); xfs_dir2_free_log_header(tp, fbp); /* * If this was the last entry in the table, we can From owner-xfs@oss.sgi.com Tue Jun 5 16:08:55 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 16:08:58 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l55N8qWt006311 for ; Tue, 5 Jun 2007 16:08:54 -0700 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA07154; Wed, 6 Jun 2007 09:08:44 +1000 Message-ID: <4665ED89.4090202@sgi.com> Date: Wed, 06 Jun 2007 09:11:05 +1000 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.10 (X11/20070221) MIME-Version: 1.0 To: Michael Nishimoto CC: David Chinner , Michael Nishimoto , xfs@oss.sgi.com Subject: Re: Reducing memory requirements for high extent xfs files References: <200705301649.l4UGnckA027406@oss.sgi.com> <20070530225516.GB85884050@sgi.com> <4665E276.9020406@agami.com> In-Reply-To: <4665E276.9020406@agami.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 11652 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs Michael Nishimoto wrote: > > > David Chinner wrote: >> On Wed, May 30, 2007 at 09:49:38AM -0700, Michael Nishimoto wrote: >> > Hello, >> > >> > Has anyone done any work or had thoughts on changes required >> > to reduce the total memory footprint of high extent xfs files? >> >> We changed the way we do memory allocation to avoid needing >> large contiguous chunks of memory a bit over a year ago; >> that solved the main OOM problem we were getting reported >> with highly fragmented files. 
>> >> > Obviously, it is important to reduce fragmentation as files >> > are generated and to regularly defrag files, but both of these >> > alternatives are not complete solutions. >> > >> > To reduce memory consumption, xfs could bring in extents >> > from disk as needed (or just before needed) and could free >> > up mappings when certain extent ranges have not been recently >> > accessed. A solution should become more aggressive about >> > reclaiming extent mapping memory as free memory becomes limited. >> >> Yes, it could, but that's a pretty major overhaul of the extent >> interface which currently assumes everywhere that the entire >> extent tree is in core. >> >> Can you describe the problem you are seeing that leads you to >> ask this question? What's the problem you need to solve? >> >> Cheers, >> >> Dave. >> -- >> Dave Chinner >> Principal Engineer >> SGI Australian Software Group > > I realize that this work won't be trivial which is why I asked if anyone > has thought about all relevant issues. > > When using NFS over XFS, slowly growing files (can be ascii log files) > tend to fragment quite a bit. One system had several hundred files > which required more than one page to store the extents. Quite a few > files had extent counts greater than 10k, and one file had 120k extents. > Besides the memory consumption, latency to return the first byte of the > file can get noticeable. > > Michael > Hi Michael, You could use XFS_XFLAG_EXTSIZE and XFS_XFLAG_RTINHERIT flags to set the extent size hint, which would reduce the file fragmentation in this scenario. Please check the xfsctl man page for more details. 
Regards, Vlad / / From owner-xfs@oss.sgi.com Tue Jun 5 16:15:16 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 16:15:18 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l55NFDWt009150 for ; Tue, 5 Jun 2007 16:15:14 -0700 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA07227; Wed, 6 Jun 2007 09:15:04 +1000 Message-ID: <4665EF04.5020803@sgi.com> Date: Wed, 06 Jun 2007 09:17:24 +1000 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.10 (X11/20070221) MIME-Version: 1.0 To: Vlad Apostolov CC: Michael Nishimoto , David Chinner , Michael Nishimoto , xfs@oss.sgi.com Subject: Re: Reducing memory requirements for high extent xfs files References: <200705301649.l4UGnckA027406@oss.sgi.com> <20070530225516.GB85884050@sgi.com> <4665E276.9020406@agami.com> <4665ED89.4090202@sgi.com> In-Reply-To: <4665ED89.4090202@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 11653 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs Vlad Apostolov wrote: > Michael Nishimoto wrote: >> >> >> David Chinner wrote: >>> On Wed, May 30, 2007 at 09:49:38AM -0700, Michael Nishimoto wrote: >>> > Hello, >>> > >>> > Has anyone done any work or had thoughts on changes required >>> > to reduce the total memory footprint of high extent xfs files? >>> >>> We changed the way we do memory allocation to avoid needing >>> large contiguous chunks of memory a bit over a year ago; >>> that solved the main OOM problem we were getting reported >>> with highly fragmented files. 
>>> >>> > Obviously, it is important to reduce fragmentation as files >>> > are generated and to regularly defrag files, but both of these >>> > alternatives are not complete solutions. >>> > >>> > To reduce memory consumption, xfs could bring in extents >>> > from disk as needed (or just before needed) and could free >>> > up mappings when certain extent ranges have not been recently >>> > accessed. A solution should become more aggressive about >>> > reclaiming extent mapping memory as free memory becomes limited. >>> >>> Yes, it could, but that's a pretty major overhaul of the extent >>> interface which currently assumes everywhere that the entire >>> extent tree is in core. >>> >>> Can you describe the problem you are seeing that leads you to >>> ask this question? What's the problem you need to solve? >>> >>> Cheers, >>> >>> Dave. >>> -- >>> Dave Chinner >>> Principal Engineer >>> SGI Australian Software Group >> >> I realize that this work won't be trivial which is why I asked if anyone >> has thought about all relevant issues. >> >> When using NFS over XFS, slowly growing files (can be ascii log files) >> tend to fragment quite a bit. One system had several hundred files >> which required more than one page to store the extents. Quite a few >> files had extent counts greater than 10k, and one file had 120k extents. >> Besides the memory consumption, latency to return the first byte of the >> file can get noticeable. >> >> Michael >> > Hi Michael, > > You could use XFS_XFLAG_EXTSIZE and XFS_XFLAG_RTINHERIT flags to > set extent hint size, which would reduce the file fragmentation in > this scenario. > Please check xfcntl man page for more details. > > Regards, > Vlad I meant XFS_XFLAG_EXTSZINHERIT not XFS_XFLAG_RTINHERIT. This one should be set on a parent directory. 
Regards, Vlad From owner-xfs@oss.sgi.com Tue Jun 5 16:20:47 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 16:20:49 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l55NKiWt013845 for ; Tue, 5 Jun 2007 16:20:46 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA07545; Wed, 6 Jun 2007 09:20:38 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l55NKaAf114653268; Wed, 6 Jun 2007 09:20:37 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l55NKXvw114255060; Wed, 6 Jun 2007 09:20:33 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 6 Jun 2007 09:20:33 +1000 From: David Chinner To: "Jahnke, Steffen" Cc: xfs@oss.sgi.com Subject: Re: XFS with project quota under linux? Message-ID: <20070605232033.GZ85884050@sgi.com> References: <950DD867A5E1B04ABE82A56FCDC03A5E9CE8CF@HDHS0111.euro1.voith.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <950DD867A5E1B04ABE82A56FCDC03A5E9CE8CF@HDHS0111.euro1.voith.net> User-Agent: Mutt/1.4.2.1i X-archive-position: 11654 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Mon, Jun 04, 2007 at 10:55:35AM +0200, Jahnke, Steffen wrote: > I recently switched the quota usrquota to pquota on our Altix 4700 under > SLES10. I then found out that the project quota is not updated if files are > moved within the same filesystem. E.g. if I move a file from a different > project to a new project it still belongs to the old project. 
The same thing > happens if I move a file which not belongs to any project but which is on > the filesystem mounted with pquota. Working as designed, by the sounds of it. Moving a file around the filesystem does not change the project it is assigned to. Files created (i.e. moved into the filesystem) get assigned a new project id on create which is why there is different behaviour there. Are you trying to use directory quotas here rather than just plain project quotas? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jun 5 16:25:07 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 16:25:10 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l55NP4Wt015325 for ; Tue, 5 Jun 2007 16:25:06 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA07628; Wed, 6 Jun 2007 09:24:59 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l55NOwAf114351195; Wed, 6 Jun 2007 09:24:59 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l55NOv3M114474218; Wed, 6 Jun 2007 09:24:57 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 6 Jun 2007 09:24:57 +1000 From: David Chinner To: =?iso-8859-1?Q?Germ=E1n_Po=F3-Caama=F1o?= Cc: xfs@oss.sgi.com Subject: Re: Reporting a bug Message-ID: <20070605232457.GA85884050@sgi.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: User-Agent: Mutt/1.4.2.1i X-archive-position: 11655 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com 
X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Mon, Jun 04, 2007 at 10:10:06PM -0400, Germán Poó-Caamaño wrote: > I having have some problems with a XFS partition in Debian Sarge: > > After a clean reboot (it supposed to be), my machine started with > kernel messages of problems, such us XFS_WANT_CORRUPTED_GOTO and > XFS_WANT_CORRUPTED_RETURN. > > It mainly was located in /var. But, after cleaning that, I checked > other partitions. I guessed that my root partition (/dev/sda5) was in > problems also. I mounted as readonly partition and I ran xfs_repair > on it. xfs_repair moved 6 files (all of them ELF binaries) to > lost+found. After reboot the machine, it can't boot anymore. Sounds like a critical binary for boot got lost... > Trying with Sysrescue 0.3.5 I get the following: What version of the XFS utilities has that got? You might do better booting knoppix and then downloading the latest tools and running them.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jun 5 18:36:10 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 18:36:13 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l561a7Wt014049 for ; Tue, 5 Jun 2007 18:36:09 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA11001; Wed, 6 Jun 2007 11:36:07 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l561a5Af115221377; Wed, 6 Jun 2007 11:36:05 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l561a1HD115123473; Wed, 6 Jun 2007 11:36:01 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 6 Jun 2007 11:36:01 
+1000 From: David Chinner To: Michael Nishimoto Cc: David Chinner , Michael Nishimoto , xfs@oss.sgi.com Subject: Re: Reducing memory requirements for high extent xfs files Message-ID: <20070606013601.GR86004887@sgi.com> References: <200705301649.l4UGnckA027406@oss.sgi.com> <20070530225516.GB85884050@sgi.com> <4665E276.9020406@agami.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4665E276.9020406@agami.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 11656 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 05, 2007 at 03:23:50PM -0700, Michael Nishimoto wrote: > David Chinner wrote: > >On Wed, May 30, 2007 at 09:49:38AM -0700, Michael Nishimoto wrote: > > > Hello, > > > > > > Has anyone done any work or had thoughts on changes required > > > to reduce the total memory footprint of high extent xfs files? ..... > >Yes, it could, but that's a pretty major overhaul of the extent > >interface which currently assumes everywhere that the entire > >extent tree is in core. > > > >Can you describe the problem you are seeing that leads you to > >ask this question? What's the problem you need to solve? > > I realize that this work won't be trivial which is why I asked if anyone > has thought about all relevant issues. > > When using NFS over XFS, slowly growing files (can be ascii log files) > tend to fragment quite a bit. Oh, that problem. The issue is that allocation beyond EOF (the normal way we prevent fragmentation in this case) gets truncated off on file close. Every NFS request is processed by doing: open write close And so XFS truncates the allocation beyond EOF on close. Hence the next write requires a new allocation and that results in a non-contiguous file because the adjacent blocks have already been used.... Options: - NFS server open file cache to avoid the close. 
- add detection to XFS to determine if the caller is an NFS thread and don't truncate on close. - use preallocation. - preallocation on the file once will result in the XFS_DIFLAG_PREALLOC being set on the inode and it won't truncate on close. - append only flag will work in the same way as the prealloc flag w.r.t preventing truncation on close. - run xfs_fsr Note - I don't think extent size hints alone will help as they don't prevent EOF truncation on close. > One system had several hundred files > which required more than one page to store the extents. I don't consider that a problem as such. We'll always get some level of fragmentation if we don't preallocate. > Quite a few > files had extent counts greater than 10k, and one file had 120k extents. You should run xfs_fsr occasionally.... > Besides the memory consumption, latency to return the first byte of the > file can get noticeable. Yes, that too :/ However, I think we should be trying to fix the root cause of this worst case fragmentation rather than trying to make the rest of the filesystem accommodate an extreme corner case efficiently. i.e. let's look at the test cases and determine what piece of logic we need to add or remove to prevent this cause of fragmentation. Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jun 5 18:51:39 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 18:51:44 -0700 (PDT) Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l561pbWt017857 for ; Tue, 5 Jun 2007 18:51:39 -0700 Received: from edge.local (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 9F07C92C53D; Wed, 6 Jun 2007 11:51:25 +1000 (EST) Subject: Re: XFS shrink functionality From: Nathan Scott Reply-To: nscott@aconex.com To: Iustin Pop Cc: David Chinner , Ruben Porras , xfs@oss.sgi.com, cw@f00f.org In-Reply-To: <20070605080012.GA10677@teal.hq.k1024.org> References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> <20070604084154.GA8273@teal.hq.k1024.org> <20070604092115.GX85884050@sgi.com> <20070605080012.GA10677@teal.hq.k1024.org> Content-Type: text/plain Organization: Aconex Date: Wed, 06 Jun 2007 11:50:10 +1000 Message-Id: <1181094610.3758.2.camel@edge.yarra.acx> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-archive-position: 11657 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Tue, 2007-06-05 at 10:00 +0200, Iustin Pop wrote: > > > So given these caveats, is there a chance that a) this will be > actually > useful and b) will this be accepted? Theres no doubt that its useful, its probably the most frequently requested feature for XFS from the community. I'd imagine its acceptance will depend on code quality, testing, etc, etc. > The last time I tried to work on this there has been no real feedback > and I'm thinking that maybe the code will be too intrusive and will > give > to little gain to be accepted. 
IIRC, most people missed the patch last time cos it got bounced by the list (cant remember why) - that was why I missed it for a long time, anyway. cheers. -- Nathan From owner-xfs@oss.sgi.com Tue Jun 5 18:58:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 18:58:21 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l561wFWt019745 for ; Tue, 5 Jun 2007 18:58:17 -0700 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA11655; Wed, 6 Jun 2007 11:58:08 +1000 Message-ID: <4666153C.9050409@sgi.com> Date: Wed, 06 Jun 2007 12:00:28 +1000 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.10 (X11/20070221) MIME-Version: 1.0 To: David Chinner CC: Michael Nishimoto , Michael Nishimoto , xfs@oss.sgi.com Subject: Re: Reducing memory requirements for high extent xfs files References: <200705301649.l4UGnckA027406@oss.sgi.com> <20070530225516.GB85884050@sgi.com> <4665E276.9020406@agami.com> <20070606013601.GR86004887@sgi.com> In-Reply-To: <20070606013601.GR86004887@sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 11658 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs David Chinner wrote: > On Tue, Jun 05, 2007 at 03:23:50PM -0700, Michael Nishimoto wrote: > >> David Chinner wrote: >> >>> On Wed, May 30, 2007 at 09:49:38AM -0700, Michael Nishimoto wrote: >>> >>>> Hello, >>>> >>>> Has anyone done any work or had thoughts on changes required >>>> to reduce the total memory footprint of high extent xfs files? >>>> > ..... > >>> Yes, it could, but that's a pretty major overhaul of the extent >>> interface which currently assumes everywhere that the entire >>> extent tree is in core. 
>>> >>> Can you describe the problem you are seeing that leads you to >>> ask this question? What's the problem you need to solve? >>> >> I realize that this work won't be trivial which is why I asked if anyone >> has thought about all relevant issues. >> >> When using NFS over XFS, slowly growing files (can be ascii log files) >> tend to fragment quite a bit. >> > > Oh, that problem. > > The issue is that allocation beyond EOF (the normal way we prevent > fragmentation in this case) gets truncated off on file close. > > Even NFS request is processed by doing: > > open > write > close > > And so XFS truncates the allocation beyond EOF on close. Hence > the next write requires a new allocation and that results in > a non-contiguous file because the adjacent blocks have already > been used.... > > Options: > > - NFS server open file cache to avoid the close. > - add detection to XFS to determine if the called is > an NFS thread and don't truncate on close. > - use preallocation. > - preallocation on the file once will result in the > XFS_DIFLAG_PREALLOC being set on the inode and it > won't truncate on close. > - append only flag will work in the same way as the > prealloc flag w.r.t preventing truncation on close. > - run xfs_fsr > > Note - i don't think extent size hints alone will help as they > don't prevent EOF truncation on close. > Dave, I think extent hint should help in this situation. Here is an example of writing 4 chars in a file with extent hint of 16Kb. The file ends up with size of 4 and 8 basic blocks (512 bytes each) allocation in one extent. 
emu:/mnt/scratch1/temp # xfs_io -c "extsize 16384" -f foo emu:/mnt/scratch1/temp # ls -al foo -rw------- 1 root root 0 2007-06-06 12:33 foo emu:/mnt/scratch1/temp # xfs_bmap -l -v foo foo: no extents emu:/mnt/scratch1/temp # echo "abc" > foo emu:/mnt/scratch1/temp # ls -al foo -rw------- 1 root root 4 2007-06-06 12:35 foo emu:/mnt/scratch1/temp # xfs_bmap -l -v foo foo: EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL 0: [0..7]: 326088..326095 0 (326088..326095) 8 Just a warning that the extent hint works at the moment only for contiguous files. There are problems for sparse files (with holes) and extent hint. Regards, Vlad From owner-xfs@oss.sgi.com Tue Jun 5 19:03:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 19:03:03 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5622wWt021282 for ; Tue, 5 Jun 2007 19:03:00 -0700 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA11835; Wed, 6 Jun 2007 12:02:50 +1000 Message-ID: <46661657.2060507@sgi.com> Date: Wed, 06 Jun 2007 12:05:11 +1000 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.10 (X11/20070221) MIME-Version: 1.0 To: Vlad Apostolov CC: David Chinner , Michael Nishimoto , Michael Nishimoto , xfs@oss.sgi.com Subject: Re: Reducing memory requirements for high extent xfs files References: <200705301649.l4UGnckA027406@oss.sgi.com> <20070530225516.GB85884050@sgi.com> <4665E276.9020406@agami.com> <20070606013601.GR86004887@sgi.com> <4666153C.9050409@sgi.com> In-Reply-To: <4666153C.9050409@sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 11659 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs Vlad Apostolov wrote: No, 
Dave is right. The example worked because the extent hint was the same size as the filesystem block. Regards, Vlad >> >> Note - i don't think extent size hints alone will help as they >> don't prevent EOF truncation on close. > Dave, > > I think extent hint should help in this situation. Here is an example > of writing 4 chars in a file with extent hint of 16Kb. The file ends > up with size of 4 and 8 basic blocks (512 bytes each) allocation in > one extent. > > emu:/mnt/scratch1/temp # xfs_io -c "extsize 16384" -f foo > emu:/mnt/scratch1/temp # ls -al foo > -rw------- 1 root root 0 2007-06-06 12:33 foo > emu:/mnt/scratch1/temp # xfs_bmap -l -v foo > foo: no extents > emu:/mnt/scratch1/temp # echo "abc" > foo > emu:/mnt/scratch1/temp # ls -al foo > -rw------- 1 root root 4 2007-06-06 12:35 foo > emu:/mnt/scratch1/temp # xfs_bmap -l -v foo > foo: > EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL > 0: [0..7]: 326088..326095 0 (326088..326095) 8 > > Just a warning that the extent hint works at the moment only for > contiguous files. There are problems for sparse files (with holes) > and extent hint. 
> > Regards, > Vlad > From owner-xfs@oss.sgi.com Tue Jun 5 20:52:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 20:52:06 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.4 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_45, MIME_8BIT_HEADER autolearn=no version=3.2.0-pre1-r499012 Received: from nz-out-0506.google.com (nz-out-0506.google.com [64.233.162.230]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l563q0Wt022962 for ; Tue, 5 Jun 2007 20:52:03 -0700 Received: by nz-out-0506.google.com with SMTP id 4so13363nzn for ; Tue, 05 Jun 2007 20:52:00 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=sRmJ6tg3vCZMVqHk/6fLRdqezLf0PnioW3qhNKkwLbbCHoxx20CIRdFcnXk9elvledYCx8C2fim85q2EoLTgw81UpbJrxWvuVuV9UUHl2d1hBgmXKTpQ9TRB7v8Nwr+dJ7iNuzmYl/LKt0MjPudkkGokdv8ju2lqCyBqlwsP5eY= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=h7Ix38OeVtJCLx8xV1UH4i9OvvNmW5lG5IzP+ljrgnhWs+B1EQmn51IcDs0oHGwOmaVVvzVfH0zAGnfwe6P3YSNcdVjUgrKGENzxh7E1uVwb03hRbC3+0V5FOURXHdBXc46WDGpVl9WpOzc/b2AGwPTVkGE67OP18twSII9BtBU= Received: by 10.115.32.1 with SMTP id k1mr39676waj.1181100176606; Tue, 05 Jun 2007 20:22:56 -0700 (PDT) Received: by 10.115.55.14 with HTTP; Tue, 5 Jun 2007 20:22:56 -0700 (PDT) Message-ID: Date: Tue, 5 Jun 2007 23:22:56 -0400 From: "=?ISO-8859-1?Q?Germ=E1n_Po=F3-Caama=F1o?=" To: "David Chinner" Subject: Re: Reporting a bug Cc: xfs@oss.sgi.com In-Reply-To: <20070605232457.GA85884050@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Disposition: inline References: 
<20070605232457.GA85884050@sgi.com> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id l563q3Wt022992 X-archive-position: 11660 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: german.poo@gmail.com Precedence: bulk X-list: xfs 2007/6/5, David Chinner : > On Mon, Jun 04, 2007 at 10:10:06PM -0400, Germán Poó-Caamaño wrote: > > I have been having some problems with an XFS partition in Debian Sarge: > > > > After a clean reboot (it was supposed to be), my machine started with > > kernel messages of problems, such as XFS_WANT_CORRUPTED_GOTO and > > XFS_WANT_CORRUPTED_RETURN. > > > > It was mainly located in /var. But, after cleaning that, I checked > > other partitions. I guessed that my root partition (/dev/sda5) had > > problems also. I mounted the partition read-only and ran xfs_repair > > on it. xfs_repair moved 6 files (all of them ELF binaries) to > > lost+found. After rebooting the machine, it couldn't boot anymore. > > Sounds like a critical binary for boot got lost... I thought that at first. But the crashes and segfaults I pasted in my second message were also produced when I ran Sysrescue (LiveCD) and tried to work with that filesystem. Anyway, I applied objdump -T to each file in lost+found. A lot of them were important (libgcc, tls/lpthreads, tls/libm, and such). It seems there were duplicates, because for each file in lost+found there was a library in /lib or /lib/tls. Some nasty behavior was something like: # mkdir foo # cd foo # foo: No such file or directory ls over /etc was able to show me group, passwd and shadow, but none of them was available. I copied group- to group. ls showed me two 'group' files. > > Trying with Sysrescue 0.3.5 I get the following: > What version of the XFS utilities has that got?
> You might do better booting knoppix and then downloading the > latest tools and running them.... I used xfsprogs 2.8.18. Unfortunately I didn't have another disk/partition to 'dd' the problematic partition to. I ruled out a memory problem; it passed an overnight memtest session without errors. On another partition (137 GB) with *a lot of* files (I think it's used for maildir), I got 17000+ files in lost+found. It passed xfs_repair. After mounting it again, under a little load, some files were deleted but were still listed by ls. If I try to stat such a file I get 'No such file or directory', but it is still listed in the directory. I knew that XFS required robust hardware; I'm not sure if that statement is still true. Anyway, I thought the hardware was robust and probably made a mistake. -- Germán Poó Caamaño http://www.gnome.org/~gpoo/ From owner-xfs@oss.sgi.com Tue Jun 5 22:37:58 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 22:38:01 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l565btWt017471 for ; Tue, 5 Jun 2007 22:37:57 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA17089; Wed, 6 Jun 2007 15:37:50 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l565bnAf113526369; Wed, 6 Jun 2007 15:37:50 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l565bmtC115116202; Wed, 6 Jun 2007 15:37:48 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 6
Jun 2007 15:37:48 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: review: prevent log tail-pushing unmount deadlock Message-ID: <20070606053748.GV86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11661 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs When we are unmounting the filesystem, we flush all the inodes to disk. Unfortunately, if we have an inode cluster that has just been freed and marked stale sitting in an incore log buffer (i.e. hasn't been flushed to disk), it will be holding all the flush locks on the inodes in that cluster. xfs_iflush_all(), which is called during unmount, walks all the inodes trying to reclaim them, and in doing so calls xfs_finish_reclaim() on each inode. If the inode is dirty, it grabs the flush lock and flushes it. Unfortunately, we find dirty inodes that already have their flush lock held and so we sleep. At this point in the unmount process, we are running single-threaded. There is nothing more that can push on the log to force the transaction holding the inode flush locks to disk and hence we deadlock. The fix is to issue a log force before flushing the inodes on unmount so that all the flush locks will be released before we start flushing the inodes. Recently discovered during testing with filestreams enabled. Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/xfs_mount.c | 11 +++++++++++ 1 file changed, 11 insertions(+) Index: 2.6.x-xfs-new/fs/xfs/xfs_mount.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_mount.c 2007-04-19 13:47:41.272642686 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_mount.c 2007-04-19 13:49:53.643581957 +1000 @@ -1162,6 +1162,17 @@ xfs_unmountfs(xfs_mount_t *mp, struct cr int64_t fsid; #endif + /* + * We can potentially deadlock here if we have an inode cluster + * that has been freed and has its buffer still pinned in memory because + * the transaction is still sitting in an iclog. The stale inodes + * on that buffer will have their flush locks held until the + * transaction hits the disk and the callbacks run. The inode + * flush takes the flush lock unconditionally and with nothing to + * push out the iclog we will never get that unlocked. Hence we + * need to force the log first. + */ + xfs_log_force(mp, (xfs_lsn_t)0, XFS_LOG_FORCE | XFS_LOG_SYNC); xfs_iflush_all(mp); XFS_QM_DQPURGEALL(mp, XFS_QMOPT_QUOTALL | XFS_QMOPT_UMOUNTING); From owner-xfs@oss.sgi.com Tue Jun 5 22:45:39 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 22:45:42 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l565jZWt019371 for ; Tue, 5 Jun 2007 22:45:37 -0700 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA17475; Wed, 6 Jun 2007 15:45:32 +1000 Message-ID: <46664A88.2000807@sgi.com> Date: Wed, 06 Jun 2007 15:47:52 +1000 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.10 (X11/20070221)
MIME-Version: 1.0 To: xfs-dev CC: xfs@oss.sgi.com Subject: [REVIEW 1/2] - setting realtime and extent size hint flags via fcntl(XFS_IOC_FSSETXATTR) Content-Type: multipart/mixed; boundary="------------080609010505080602030404" X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11662 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs This is a multi-part message in MIME format. --------------080609010505080602030404 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit This patch fixes error handling when setting realtime and extent size hint flags via fcntl(XFS_IOC_FSSETXATTR). It also makes the XFS_XFLAG_RTINHERIT/XFS_XFLAG_EXTSZINHERIT and XFS_XFLAG_REALTIME/XFS_XFLAG_EXTSIZE flags mutually exclusive. Currently both the realtime and extent size hint flags can be set at the same time, which is not how the code is designed to work and could cause unexpected behavior. The realtime extent size is taken either from the on-disk xfs inode's di_extsize field (if non-zero) or from the superblock's sb_rextsize field. Parent directory inheritance didn't propagate di_extsize, which is also fixed in this patch.
Regards, Vlad --------------080609010505080602030404 Content-Type: text/plain; name="fix_realtime_and_extent_hint_size_error_handling" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="fix_realtime_and_extent_hint_size_error_handling" Index: linux-xfs/fs/xfs/xfs_inode.c =================================================================== --- linux-xfs.orig/fs/xfs/xfs_inode.c +++ linux-xfs/fs/xfs/xfs_inode.c @@ -793,6 +793,8 @@ _xfs_dic2xflags( if (di_flags & XFS_DIFLAG_ANY) { if (di_flags & XFS_DIFLAG_REALTIME) flags |= XFS_XFLAG_REALTIME; + else if (di_flags & XFS_DIFLAG_EXTSIZE) + flags |= XFS_XFLAG_EXTSIZE; if (di_flags & XFS_DIFLAG_PREALLOC) flags |= XFS_XFLAG_PREALLOC; if (di_flags & XFS_DIFLAG_IMMUTABLE) @@ -807,14 +809,12 @@ _xfs_dic2xflags( flags |= XFS_XFLAG_NODUMP; if (di_flags & XFS_DIFLAG_RTINHERIT) flags |= XFS_XFLAG_RTINHERIT; + else if (di_flags & XFS_DIFLAG_EXTSZINHERIT) + flags |= XFS_XFLAG_EXTSZINHERIT; if (di_flags & XFS_DIFLAG_PROJINHERIT) flags |= XFS_XFLAG_PROJINHERIT; if (di_flags & XFS_DIFLAG_NOSYMLINKS) flags |= XFS_XFLAG_NOSYMLINKS; - if (di_flags & XFS_DIFLAG_EXTSIZE) - flags |= XFS_XFLAG_EXTSIZE; - if (di_flags & XFS_DIFLAG_EXTSZINHERIT) - flags |= XFS_XFLAG_EXTSZINHERIT; if (di_flags & XFS_DIFLAG_NODEFRAG) flags |= XFS_XFLAG_NODEFRAG; } @@ -1200,9 +1200,12 @@ xfs_ialloc( uint di_flags = 0; if ((mode & S_IFMT) == S_IFDIR) { - if (pip->i_d.di_flags & XFS_DIFLAG_RTINHERIT) + if (pip->i_d.di_flags & XFS_DIFLAG_RTINHERIT) { di_flags |= XFS_DIFLAG_RTINHERIT; - if (pip->i_d.di_flags & XFS_DIFLAG_EXTSZINHERIT) { + ip->i_d.di_extsize = pip->i_d.di_extsize; + } + else if (pip->i_d.di_flags & + XFS_DIFLAG_EXTSZINHERIT) { di_flags |= XFS_DIFLAG_EXTSZINHERIT; ip->i_d.di_extsize = pip->i_d.di_extsize; } @@ -1210,8 +1213,10 @@ xfs_ialloc( if (pip->i_d.di_flags & XFS_DIFLAG_RTINHERIT) { di_flags |= XFS_DIFLAG_REALTIME; ip->i_iocore.io_flags |= XFS_IOCORE_RT; + ip->i_d.di_extsize = pip->i_d.di_extsize; } - if (pip->i_d.di_flags & 
XFS_DIFLAG_EXTSZINHERIT) { + else if (pip->i_d.di_flags & + XFS_DIFLAG_EXTSZINHERIT) { di_flags |= XFS_DIFLAG_EXTSIZE; ip->i_d.di_extsize = pip->i_d.di_extsize; } Index: linux-xfs/fs/xfs/xfs_vnodeops.c =================================================================== --- linux-xfs.orig/fs/xfs/xfs_vnodeops.c +++ linux-xfs/fs/xfs/xfs_vnodeops.c @@ -547,15 +547,35 @@ xfs_setattr( } /* - * Can't change realtime flag if any extents are allocated. + * Can't have both realtime and extent hint flags set at + * the same time. + */ + if ((mask & XFS_AT_XFLAGS) && + (((vap->va_xflags & + (XFS_XFLAG_REALTIME | XFS_XFLAG_EXTSIZE)) == + (XFS_XFLAG_REALTIME | XFS_XFLAG_EXTSIZE)) || + ((vap->va_xflags & XFS_XFLAG_REALTIME) && + (ip->i_d.di_flags & XFS_DIFLAG_EXTSIZE)) || + ((vap->va_xflags & XFS_XFLAG_EXTSIZE) && + (ip->i_d.di_flags & XFS_DIFLAG_REALTIME)))) { + code = XFS_ERROR(EINVAL); + goto error_return; + } + + /* + * Can't change realtime and extent hint flags if any extents + * are allocated. */ if ((ip->i_d.di_nextents || ip->i_delayed_blks) && (mask & XFS_AT_XFLAGS) && - (ip->i_d.di_flags & XFS_DIFLAG_REALTIME) != - (vap->va_xflags & XFS_XFLAG_REALTIME)) { + (((ip->i_d.di_flags & XFS_DIFLAG_REALTIME) != + (vap->va_xflags & XFS_XFLAG_REALTIME)) || + ((ip->i_d.di_flags & XFS_DIFLAG_EXTSIZE) != + (vap->va_xflags & XFS_XFLAG_EXTSIZE)))) { code = XFS_ERROR(EINVAL); /* EFBIG? */ goto error_return; } + /* * Extent size must be a multiple of the appropriate block * size, if set at all. 
@@ -563,6 +583,16 @@ xfs_setattr( if ((mask & XFS_AT_EXTSIZE) && vap->va_extsize != 0) { xfs_extlen_t size; + /* + * Either realtime or extent hint flags must be set + * when extent size is non zero + */ + if (!(vap->va_xflags & + (XFS_XFLAG_REALTIME | XFS_XFLAG_EXTSIZE))) { + code = XFS_ERROR(EINVAL); + goto error_return; + } + if ((ip->i_d.di_flags & XFS_DIFLAG_REALTIME) || ((mask & XFS_AT_XFLAGS) && (vap->va_xflags & XFS_XFLAG_REALTIME))) { @@ -817,10 +847,11 @@ xfs_setattr( if ((ip->i_d.di_mode & S_IFMT) == S_IFDIR) { if (vap->va_xflags & XFS_XFLAG_RTINHERIT) di_flags |= XFS_DIFLAG_RTINHERIT; + else if (vap->va_xflags & + XFS_XFLAG_EXTSZINHERIT) + di_flags |= XFS_DIFLAG_EXTSZINHERIT; if (vap->va_xflags & XFS_XFLAG_NOSYMLINKS) di_flags |= XFS_DIFLAG_NOSYMLINKS; - if (vap->va_xflags & XFS_XFLAG_EXTSZINHERIT) - di_flags |= XFS_DIFLAG_EXTSZINHERIT; } else if ((ip->i_d.di_mode & S_IFMT) == S_IFREG) { if (vap->va_xflags & XFS_XFLAG_REALTIME) { di_flags |= XFS_DIFLAG_REALTIME; --------------080609010505080602030404-- From owner-xfs@oss.sgi.com Tue Jun 5 22:47:13 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 22:47:16 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l565lAWt019909 for ; Tue, 5 Jun 2007 22:47:12 -0700 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA17579; Wed, 6 Jun 2007 15:47:07 +1000 Message-ID: <46664AE7.7070600@sgi.com> Date: Wed, 06 Jun 2007 15:49:27 +1000 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.10 (X11/20070221) MIME-Version: 1.0 To: xfs-dev CC: xfs@oss.sgi.com Subject: [REVIEW 2/2] - setting realtime and extent size hint 
flags via fcntl(XFS_IOC_FSSETXATTR) Content-Type: multipart/mixed; boundary="------------050606040702010003040602" X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11663 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs This is a multi-part message in MIME format. --------------050606040702010003040602 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit This patch updates the xfsctl(XFS_IOC_FSSETXATTR) and xfsctl(XFS_IOC_RESVSP64) man page. Regards, Vlad --------------050606040702010003040602 Content-Type: text/plain; name*0="update_XFS_IOC_FSSETXATTR_and_XFS_IOC_RESVSP64_xfsctl_man_page" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename*0="update_XFS_IOC_FSSETXATTR_and_XFS_IOC_RESVSP64_xfsctl_man_pa"; filename*1="ge" Index: xfs-cmds/xfsprogs/man/man3/xfsctl.3 =================================================================== --- xfs-cmds.orig/xfsprogs/man/man3/xfsctl.3 +++ xfs-cmds/xfsprogs/man/man3/xfsctl.3 @@ -251,11 +251,17 @@ and .BR fsx_extsize . The .B fsx_xflags -realtime file bit and the file's extent size may be changed only +realtime file bit +.B XFS_XFLAG_REALTIME, +extent hint bit +.B XFS_XFLAG_EXTSIZE +and the file's extent size may be changed only when the file is empty, except in the case of a directory where the extent size can be set at any time (this value is only used for regular file allocations, so should only be set on a directory in conjunction with the XFS_XFLAG_EXTSZINHERIT flag). +The extent size has to be aligned to the filesystem block size, or to +the realtime extent size when the XFS_XFLAG_REALTIME flag is set.
.TP .B XFS_IOC_GETBMAP @@ -325,7 +331,15 @@ The blocks are allocated, but not zeroed If the XFS filesystem is configured to flag unwritten file extents, performance will be negatively affected when writing to preallocated space, since extra filesystem transactions are required to convert extent flags on -the range of the file written. +the range of the file written. The +.B l_len +field is rounded to the filesystem block size. +For realtime files the +.B l_len +field is rounded to the realtime extent size. If the extent hint is set, the +.B l_len +is rounded to the extent hint size. + If .IR xfs_info (8) reports unwritten=1, then the filesystem was made to flag unwritten extents. --------------050606040702010003040602-- From owner-xfs@oss.sgi.com Tue Jun 5 22:55:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Jun 2007 22:55:08 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.5 required=5.0 tests=AWL,BAYES_99,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from mail.g-house.de (ns2.g-housing.de [81.169.133.75]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l565t1Wt021651 for ; Tue, 5 Jun 2007 22:55:03 -0700 Received: from [89.54.183.197] (helo=noname) by mail.g-house.de with esmtpsa (TLS-1.0:DHE_RSA_AES_256_CBC_SHA:32) (Exim 4.50) id 1HvoTw-0007SG-8G; Wed, 06 Jun 2007 07:54:56 +0200 Date: Wed, 6 Jun 2007 07:54:58 +0200 (CEST) From: Christian Kujau X-X-Sender: dummy@foobar-g4 To: Nathan Scott cc: "Raz Ben-Jehuda(caro)" , xfs@oss.sgi.com Subject: Re: corruption bug in 2.6.17 In-Reply-To: <1181084327.3176.4.camel@edge.yarra.acx> Message-ID: References: <5d96567b0706021407q4455b60asd9d23ef82cb90b55@mail.gmail.com> <5d96567b0706050127y7f5eap7b92cddb5cdae02d@mail.gmail.com> <1181084327.3176.4.camel@edge.yarra.acx> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=us-ascii X-Virus-Scanned: ClamAV version 0.90, clamav-milter 
version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11664 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lists@nerdbynature.de Precedence: bulk X-list: xfs On Wed, 6 Jun 2007, Nathan Scott wrote: > The problem was actually related to a particular form of btree > dir2 transition - so, certain combinations of file names in a > relatively large directory with size changes would trigger it. Sorry, I did not intend to confuse. But all I noticed when the fs was shut down, was that a backup script was running and triggering the bug every morning. I was not aware of any large directories.... thanks, Christian. -- make bzImage, not war From owner-xfs@oss.sgi.com Wed Jun 6 03:18:45 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 06 Jun 2007 03:18:51 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.5 required=5.0 tests=AWL,BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from wa-out-1112.google.com (wa-out-1112.google.com [209.85.146.183]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l56AIiWt006814 for ; Wed, 6 Jun 2007 03:18:45 -0700 Received: by wa-out-1112.google.com with SMTP id k22so115953waf for ; Wed, 06 Jun 2007 03:18:44 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:received:date:from:to:cc:subject:message-id:references:mime-version:content-type:content-disposition:in-reply-to:user-agent; b=quj5J5RUiGykyy4ZYKHZ1ITdmVDDvpbZ59/g51x+SztIe5z1Bab9TR38KwoO+Z50mNI2tdppZ5rfXEce4MKmGOrWt6Jrz9g1SD3q/02UnOGs+arTN2vmJzeobLjvFTl+TNhKD1nghxTsq6R2KHMdNzz/944d96FfJYGPWAOQx/s= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:date:from:to:cc:subject:message-id:references:mime-version:content-type:content-disposition:in-reply-to:user-agent; 
b=j8PyBB4Wplqq0GaWavWUCxz5vtZKbVJKPyQhufZ1VSaQxa8gkd+G9htR3Vq/yUiBUdrm8c2wlre4b8gFIvJosgImAnCSdLPtZMfVUSciRXlgAEB4Wa2TjHuFqhSQbzFlpyDXsdapSEV8dd5DMvUg1xPsC1s2AGW3G9cba09D3MI= Received: by 10.114.175.16 with SMTP id x16mr274636wae.1181125124381; Wed, 06 Jun 2007 03:18:44 -0700 (PDT) Received: from htj.dyndns.org ( [221.139.199.126]) by mx.google.com with ESMTP id m24sm4647644waf.2007.06.06.03.18.41; Wed, 06 Jun 2007 03:18:43 -0700 (PDT) Received: by htj.dyndns.org (Postfix, from userid 1000) id 018C323D4BBB; Wed, 6 Jun 2007 19:18:37 +0900 (KST) Date: Wed, 6 Jun 2007 19:18:37 +0900 From: Tejun Heo To: David Greaves Cc: Linus Torvalds , "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , netdev@oss.sgi.com, linux-pm , Neil Brown , mikpe@it.uu.se Subject: [PATCH] sata_promise: use TF interface for polling NODATA commands Message-ID: <20070606101837.GC29122@htj.dyndns.org> References: <46608E3F.4060201@dgreaves.com> <200706012342.45657.rjw@sisk.pl> <46609FAD.7010203@dgreaves.com> <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <46667160.80905@gmail.com> User-Agent: Mutt/1.5.13 (2006-08-11) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11665 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: htejun@gmail.com Precedence: bulk X-list: xfs sata_promise uses two different command modes - packet and TF. Packet mode is intelligent low-overhead mode while TF is the same old taskfile interface. As with other advanced interface (ahci/sil24), ATA_TFLAG_POLLING has no effect in packet mode. However, PIO commands are issued using TF interface in polling mode, so pdc_interrupt() considers interrupts spurious if ATA_TFLAG_POLLING is set. 
This is broken for polling NODATA commands because command is issued using packet mode but the interrupt handler ignores it due to ATA_TFLAG_POLLING. Fix pdc_qc_issue_prot() such that ATA/ATAPI NODATA commands are issued using TF interface if ATA_TFLAG_POLLING is set. This patch fixes detection failure introduced by polling SETXFERMODE. Signed-off-by: Tejun Heo --- David, please verify this patch. Mikael, does this look okay? Please push this upstream after David and Mikael's ack. Thanks. drivers/ata/sata_promise.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/drivers/ata/sata_promise.c b/drivers/ata/sata_promise.c index 2b924a6..6dc0b01 100644 --- a/drivers/ata/sata_promise.c +++ b/drivers/ata/sata_promise.c @@ -784,9 +784,12 @@ static unsigned int pdc_qc_issue_prot(struct ata_queued_cmd *qc) if (qc->dev->flags & ATA_DFLAG_CDB_INTR) break; /*FALLTHROUGH*/ + case ATA_PROT_NODATA: + if (qc->tf.flags & ATA_TFLAG_POLLING) + break; + /*FALLTHROUGH*/ case ATA_PROT_ATAPI_DMA: case ATA_PROT_DMA: - case ATA_PROT_NODATA: pdc_packet_start(qc); return 0; @@ -800,7 +803,7 @@ static unsigned int pdc_qc_issue_prot(struct ata_queued_cmd *qc) static void pdc_tf_load_mmio(struct ata_port *ap, const struct ata_taskfile *tf) { WARN_ON (tf->protocol == ATA_PROT_DMA || - tf->protocol == ATA_PROT_NODATA); + tf->protocol == ATA_PROT_ATAPI_DMA); ata_tf_load(ap, tf); } @@ -808,7 +811,7 @@ static void pdc_tf_load_mmio(struct ata_port *ap, const struct ata_taskfile *tf) static void pdc_exec_command_mmio(struct ata_port *ap, const struct ata_taskfile *tf) { WARN_ON (tf->protocol == ATA_PROT_DMA || - tf->protocol == ATA_PROT_NODATA); + tf->protocol == ATA_PROT_ATAPI_DMA); ata_exec_command(ap, tf); } From owner-xfs@oss.sgi.com Wed Jun 6 03:39:35 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 06 Jun 2007 03:39:40 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.7 
required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_16, J_CHICKENPOX_32,J_CHICKENPOX_33,J_CHICKENPOX_62 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l56AdXWt012234 for ; Wed, 6 Jun 2007 03:39:34 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 43E21E7328; Wed, 6 Jun 2007 11:39:19 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id K10DXX16RMnQ; Wed, 6 Jun 2007 11:37:07 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 69B14E7358; Wed, 6 Jun 2007 11:39:17 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1HvsvJ-0003D3-Be; Wed, 06 Jun 2007 11:39:29 +0100 Message-ID: <46668EE0.2030509@dgreaves.com> Date: Wed, 06 Jun 2007 11:39:28 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: Tejun Heo Cc: Linus Torvalds , "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown Subject: Re: 2.6.22-rc3 hibernate(?) 
fails totally - regression (xfs on raid6) References: <46608E3F.4060201@dgreaves.com> <200706012342.45657.rjw@sisk.pl> <46609FAD.7010203@dgreaves.com> <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> In-Reply-To: <46667160.80905@gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11666 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs Tejun Heo wrote: > Hello, > > David Greaves wrote: >> Linus Torvalds wrote: >>> It would be interesting to see what triggered it, since it apparently >>> worked before. So yes, a bisection would be great. >> Tejun, all the problematic patches are yours - so adding you. > > Ouch.... that's what everyone says! Just to be clear. This problem is where my system won't resume after s2d unless I umount my xfs over raid6 filesystem. >> given the first patch identified is >> 9666f4009c22f6520ac3fb8a19c9e32ab973e828: "libata: reimplement suspend/resume >> support using sdev->manage_start_stop" >> That seems a good candidate... > > 9ce3075c20d458040138690edfdf6446664ec3ee works, right? Yes git reset --hard ec4883b015c3212f6f6d04fb2ff45f528492f598 vi Makefile make oldconfig make && make install && make modules_install && update-grub init 6 > Can you test > 9666f4009c22f6520ac3fb8a19c9e32ab973e828 by removing > ata_scsi_device_suspend/resume callbacks from sata_via.c? Just delete > all lines referencing those two functions. There were one or two > fallouts from the conversion. 
Yes - after I posted I realised that Andrew's patch fixed the compile failure :)

git reset --hard 9666f4009c22f6520ac3fb8a19c9e32ab973e828

diff --git a/drivers/ata/sata_via.c b/drivers/ata/sata_via.c
index 939c924..bad87b5 100644
--- a/drivers/ata/sata_via.c
+++ b/drivers/ata/sata_via.c
@@ -117,8 +117,6 @@ static struct scsi_host_template svia_sht = {
 	.slave_destroy		= ata_scsi_slave_destroy,
 	.bios_param		= ata_std_bios_param,
 #ifdef CONFIG_PM
-	.suspend		= ata_scsi_device_suspend,
-	.resume			= ata_scsi_device_resume,
 #endif
 };

So now this compiles - but it does cause the problem:

umount /huge
echo platform > /sys/power/disk
echo disk > /sys/power/state
# resumes fine

mount /huge
echo platform > /sys/power/disk
echo disk > /sys/power/state
# won't resume

FWIW, /huge is:

/dev/md0 on /huge type xfs (rw)

cu:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid6 sdf1[0] sde1[1] sdd1[2] sdc1[3] sdb1[4] sda1[5] hdb1[6]
      1225557760 blocks level 6, 256k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 0/234 pages [0KB], 512KB chunk
unused devices:

> How many drives do you have?

8 in total:
  2 pata on the VIA vt8237
  2 sata on sata_via
  4 sata on sata_promise
  +1 pata cdrom

> Behavior difference introduced by the reimplementation is serialization
> of resume sequence, so it takes more time. My test machine had problems
> resuming if resume took too long even with the previous implementation.
> It didn't matter whether the long resuming sequence is caused by too
> many controllers or explicit ssleep(). If time needed for resume
> sequence is over certain threshold, machine hangs while resuming. I
> thought it was a BIOS glitch and didn't dig into it but you might be
> seeing the same issue.

Given the mount/umount behaviour this sounds unlikely... but what do I know?
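The manual reset-rebuild-reboot loop above is what `git bisect` automates. A sketch, assuming the good/bad commits named in this thread (the build, boot, and suspend/resume test of each candidate kernel stays manual):

```shell
# Sketch: let git bisect pick the candidate commits between the
# known-good and known-bad shas mentioned above, instead of
# resetting --hard by hand each time.
git bisect start
git bisect bad  9666f4009c22f6520ac3fb8a19c9e32ab973e828
git bisect good 9ce3075c20d458040138690edfdf6446664ec3ee
# build and boot the checked-out revision, test suspend/resume, then:
git bisect good     # or: git bisect bad
# repeat until git names the first bad commit, then clean up:
git bisect reset
```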
resume does throw up:

ATA: abnormal status 0x7F on port 0x0001b007
ATA: abnormal status 0x7F on port 0x0001b007
ATA: abnormal status 0x7F on port 0x0001a407
ATA: abnormal status 0x7F on port 0x0001a407

which I've not noticed before... oh, alright, I'll check... reboots to 2.6.21, suspend, resume... nope, no such output on resume in 2.6.21.

> Please post dmesg too. Thanks.

Here they are:
- dmesg from 2.6.22-9666f4009c22f6520ac3fb8a19c9e32ab973e828 (ie with the sata_via fix)
- dmesg from resume of the above when /huge is unmounted
- dmesg from resume of 2.6.21

Linux version 2.6.21-TejunTst2-g9666f400-dirty (root@cu.dgreaves.com) (gcc version 3.3.5 (Debian 1:3.3.5-13)) #13 Wed Jun 6 10:16:03 BST 2007 BIOS-provided physical RAM map: BIOS-e820: 0000000000000000 - 000000000009c400 (usable) BIOS-e820: 000000000009c400 - 00000000000a0000 (reserved) BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved) BIOS-e820: 0000000000100000 - 000000003fffc000 (usable) BIOS-e820: 000000003fffc000 - 000000003ffff000 (ACPI data) BIOS-e820: 000000003ffff000 - 0000000040000000 (ACPI NVS) BIOS-e820: 00000000fec00000 - 00000000fec01000 (reserved) BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved) BIOS-e820: 00000000ffff0000 - 0000000100000000 (reserved) 127MB HIGHMEM available. 896MB LOWMEM available. Entering add_active_range(0, 0, 262140) 0 entries of 256 used Zone PFN ranges: DMA 0 -> 4096 Normal 4096 -> 229376 HighMem 229376 -> 262140 early_node_map[1] active PFN ranges 0: 0 -> 262140 On node 0 totalpages: 262140 DMA zone: 32 pages used for memmap DMA zone: 0 pages reserved DMA zone: 4064 pages, LIFO batch:0 Normal zone: 1760 pages used for memmap Normal zone: 223520 pages, LIFO batch:31 HighMem zone: 255 pages used for memmap HighMem zone: 32509 pages, LIFO batch:7 DMI 2.3 present.
ACPI: RSDP 000F62A0, 0014 (r0 ASUS ) ACPI: RSDT 3FFFC000, 0030 (r1 ASUS A7V600 42302E31 MSFT 31313031) ACPI: FACP 3FFFC0B2, 0074 (r1 ASUS A7V600 42302E31 MSFT 31313031) ACPI: DSDT 3FFFC126, 2C4F (r1 ASUS A7V600 1000 MSFT 100000B) ACPI: FACS 3FFFF000, 0040 ACPI: BOOT 3FFFC030, 0028 (r1 ASUS A7V600 42302E31 MSFT 31313031) ACPI: APIC 3FFFC058, 005A (r1 ASUS A7V600 42302E31 MSFT 31313031) ACPI: PM-Timer IO Port: 0xe408 ACPI: Local APIC address 0xfee00000 ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled) Processor #0 6:10 APIC version 16 ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0]) IOAPIC[0]: apic_id 2, version 3, address 0xfec00000, GSI 0-23 ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl edge) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level) ACPI: IRQ0 used by override. ACPI: IRQ2 used by override. ACPI: IRQ9 used by override. Enabling APIC mode: Flat. Using 1 I/O APICs Using ACPI (MADT) for SMP configuration information Allocating PCI resources starting at 50000000 (gap: 40000000:bec00000) Built 1 zonelists. Total pages: 260093 Kernel command line: root=/dev/hda2 ro log_buf_len=128k log_buf_len: 131072 mapped APIC to ffffd000 (fee00000) mapped IOAPIC to ffffc000 (fec00000) Enabling fast FPU save and restore... done. Enabling unmasked SIMD FPU exception support... done. Initializing CPU#0 PID hash table entries: 4096 (order: 12, 16384 bytes) Detected 1999.872 MHz processor. 
Console: colour VGA+ 80x25 Dentry cache hash table entries: 131072 (order: 7, 524288 bytes) Inode-cache hash table entries: 65536 (order: 6, 262144 bytes) Memory: 1034940k/1048560k available (2426k kernel code, 12968k reserved, 888k data, 188k init, 131056k highmem) virtual kernel memory layout: fixmap : 0xfffaa000 - 0xfffff000 ( 340 kB) pkmap : 0xff800000 - 0xffc00000 (4096 kB) vmalloc : 0xf8800000 - 0xff7fe000 ( 111 MB) lowmem : 0xc0000000 - 0xf8000000 ( 896 MB) .init : 0xc0440000 - 0xc046f000 ( 188 kB) .data : 0xc035e9ef - 0xc043cb90 ( 888 kB) .text : 0xc0100000 - 0xc035e9ef (2426 kB) Checking if this processor honours the WP bit even in supervisor mode... Ok. Calibrating delay using timer specific routine.. 4003.08 BogoMIPS (lpj=8006169) Mount-cache hash table entries: 512 CPU: After generic identify, caps: 0383fbff c1cbfbff 00000000 00000000 00000000 00000000 00000000 CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line) CPU: L2 Cache: 512K (64 bytes/line) CPU: After all inits, caps: 0383fbff c1cbfbff 00000000 00000420 00000000 00000000 00000000 Intel machine check architecture supported. Intel machine check reporting enabled on CPU#0. Compat vDSO mapped to ffffe000. CPU: AMD Athlon(TM) MP stepping 00 Checking 'hlt' instruction... OK. 
ACPI: Core revision 20070126 ENABLING IO-APIC IRQs ..TIMER: vector=0x31 apic1=0 pin1=2 apic2=-1 pin2=-1 NET: Registered protocol family 16 ACPI: bus type pci registered PCI: PCI BIOS revision 2.10 entry at 0xf1970, last bus=1 PCI: Using configuration type 1 Setting up standard PCI resources ACPI: Interpreter enabled ACPI: (supports S0 S1 S4 S5) ACPI: Using IOAPIC for interrupt routing ACPI: PCI Root Bridge [PCI0] (0000:00) PCI: Probing PCI hardware (bus 00) PCI: enabled onboard AC97/MC97 devices ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PCI1._PRT] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 9 10 *11 12) ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 9 10 11 12) *0, disabled. ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 7 9 10 11 12) *0, disabled. ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 7 9 10 11 12) *0, disabled. ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 *5 6 7 9 10 11 12) ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 *5 6 7 9 10 11 12) ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 *6 7 9 10 11 12) ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 7 9 10 11 12) *15, disabled. Linux Plug and Play Support v0.97 (c) Adam Belay pnp: PnP ACPI init ACPI: bus type pnp registered pnp: ACPI device : hid PNP0C01 pnp: ACPI device : hid PNP0A03 pnp: ACPI device : hid PNP0C02 pnp: ACPI device : hid PNP0C02 pnp: ACPI device : hid PNP0200 pnp: ACPI device : hid PNP0B00 pnp: ACPI device : hid PNP0800 pnp: ACPI device : hid PNP0C04 pnp: ACPI device : hid PNP0501 pnp: ACPI device : hid PNP0501 pnp: ACPI device : hid PNP0303 pnp: ACPI device : hid PNP0F03 pnp: ACPI device : hid PNPB02F pnp: ACPI device : hid PNP0C02 pnp: PnP ACPI: found 14 devices ACPI: ACPI bus type pnp unregistered SCSI subsystem initialized libata version 2.20 loaded. PCI: Using ACPI for IRQ routing PCI: If a device doesn't work, try "pci=routeirq". 
If it helps, post a report pnp: the driver 'system' has been registered pnp: match found with the PnP device '00:00' and the driver 'system' pnp: 00:00: iomem range 0x0-0x9ffff could not be reserved pnp: 00:00: iomem range 0xf0000-0xfffff could not be reserved pnp: 00:00: iomem range 0x100000-0x3fffffff could not be reserved pnp: 00:00: iomem range 0xfec00000-0xfec000ff could not be reserved pnp: match found with the PnP device '00:02' and the driver 'system' pnp: 00:02: ioport range 0xe400-0xe47f has been reserved pnp: 00:02: ioport range 0xe800-0xe81f has been reserved pnp: 00:02: iomem range 0xfff80000-0xffffffff could not be reserved pnp: 00:02: iomem range 0xffb80000-0xffbfffff has been reserved pnp: match found with the PnP device '00:03' and the driver 'system' pnp: match found with the PnP device '00:0d' and the driver 'system' pnp: 00:0d: ioport range 0x290-0x297 has been reserved pnp: 00:0d: ioport range 0x370-0x375 has been reserved Time: tsc clocksource has been installed. PCI: Bridge: 0000:00:01.0 IO window: disabled. MEM window: disabled. PREFETCH window: disabled. PCI: Setting latency timer of device 0000:00:01.0 to 64 NET: Registered protocol family 2 IP route cache hash table entries: 32768 (order: 5, 131072 bytes) TCP established hash table entries: 131072 (order: 8, 1048576 bytes) TCP bind hash table entries: 65536 (order: 6, 262144 bytes) TCP: Hash tables configured (established 131072 bind 65536) TCP reno registered Simple Boot Flag at 0x3a set to 0x1 Machine check exception polling timer started. highmem bounce pool size: 64 pages SGI XFS with ACLs, no debug enabled io scheduler noop registered io scheduler anticipatory registered (default) io scheduler deadline registered io scheduler cfq registered Boot video device is 0000:00:0a.0 PCI: Bypassing VIA 8237 APIC De-Assert Message atyfb: using auxiliary register aperture atyfb: 3D RAGE II+ (Mach64 GU) [0x4755 rev 0x9a] atyfb: Mach64 BIOS is located at c0000, mapped at c00c0000. 
atyfb: BIOS frequency table: atyfb: PCLK_min_freq 926, PCLK_max_freq 22216, ref_freq 1432, ref_divider 33 atyfb: MCLK_pwd 4200, MCLK_max_freq 6000, XCLK_max_freq 6000, SCLK_freq 5000 atyfb: 4M EDO, 14.31818 MHz XTAL, 222 MHz PLL, 60 Mhz MCLK, 60 MHz XCLK Console: switching to colour frame buffer device 80x30 atyfb: fb0: ATY Mach64 frame buffer device on PCI input: Power Button (FF) as /class/input/input0 ACPI: Power Button (FF) [PWRF] input: Power Button (CM) as /class/input/input1 ACPI: Power Button (CM) [PWRB] netconsole: not configured, aborting Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2 ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx VP_IDE: IDE controller at PCI slot 0000:00:0f.1 ACPI: PCI Interrupt 0000:00:0f.1[A] -> GSI 20 (level, low) -> IRQ 16 VP_IDE: chipset revision 6 VP_IDE: not 100% native mode: will probe irqs later VP_IDE: VIA vt8237 (rev 00) IDE UDMA133 controller on pci0000:00:0f.1 ide0: BM-DMA at 0x9000-0x9007, BIOS settings: hda:DMA, hdb:DMA ide1: BM-DMA at 0x9008-0x900f, BIOS settings: hdc:pio, hdd:DMA Probing IDE interface ide0... Switched to high resolution mode on CPU 0 hda: ST320420A, ATA DISK drive hdb: Maxtor 5A300J0, ATA DISK drive ide0 at 0x1f0-0x1f7,0x3f6 on irq 14 Probing IDE interface ide1... 
hdd: PLEXTOR CD-R PX-W2410A, ATAPI CD/DVD-ROM drive ide1 at 0x170-0x177,0x376 on irq 15 hda: max request size: 128KiB hda: 39851760 sectors (20404 MB) w/2048KiB Cache, CHS=39535/16/63, UDMA(66) hda: cache flushes not supported hda: hda1 hda2 hda3 hdb: max request size: 512KiB hdb: 585940320 sectors (300001 MB) w/2048KiB Cache, CHS=36473/255/63, UDMA(133) hdb: cache flushes supported hdb: hdb1 hdb2 hdd: ATAPI 40X CD-ROM CD-R/RW drive, 4096kB Cache, UDMA(33) Uniform CD-ROM driver Revision: 3.20 sata_promise 0000:00:0d.0: version 2.07 ACPI: PCI Interrupt 0000:00:0d.0[A] -> GSI 16 (level, low) -> IRQ 17 scsi0 : sata_promise scsi1 : sata_promise scsi2 : sata_promise scsi3 : sata_promise ata1: SATA max UDMA/133 cmd 0xf880a200 ctl 0xf880a238 bmdma 0x00000000 irq 0 ata2: SATA max UDMA/133 cmd 0xf880a280 ctl 0xf880a2b8 bmdma 0x00000000 irq 0 ata3: SATA max UDMA/133 cmd 0xf880a300 ctl 0xf880a338 bmdma 0x00000000 irq 0 ata4: SATA max UDMA/133 cmd 0xf880a380 ctl 0xf880a3b8 bmdma 0x00000000 irq 0 ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) ata1.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata1.00: ATA-7: Maxtor 6B250S0, BANC19J0, max UDMA/133 ata1.00: 490234752 sectors, multi 0: LBA48 NCQ (depth 0/32) ata1.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata1.00: configured for UDMA/133 ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) ata2.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata2.00: ATA-7: Maxtor 7Y250M0, YAR51EW0, max UDMA/133 ata2.00: 490234752 sectors, multi 0: LBA48 ata2.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata2.00: configured for UDMA/133 ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) ata3.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata3.00: ATA-7: Maxtor 7Y250M0, YAR51EW0, max UDMA/133 ata3.00: 490234752 sectors, multi 0: LBA48 ata3.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata3.00: configured for 
UDMA/133 ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 300) ata4.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata4.00: ATA-7: Maxtor 6B250S0, BANC1980, max UDMA/133 ata4.00: 490234752 sectors, multi 0: LBA48 NCQ (depth 0/32) ata4.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata4.00: configured for UDMA/133 scsi 0:0:0:0: Direct-Access ATA Maxtor 6B250S0 BANC PQ: 0 ANSI: 5 sd 0:0:0:0: [sda] 490234752 512-byte hardware sectors (251000 MB) sd 0:0:0:0: [sda] Write Protect is off sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 0:0:0:0: [sda] 490234752 512-byte hardware sectors (251000 MB) sd 0:0:0:0: [sda] Write Protect is off sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sda: sda1 sd 0:0:0:0: [sda] Attached SCSI disk scsi 1:0:0:0: Direct-Access ATA Maxtor 7Y250M0 YAR5 PQ: 0 ANSI: 5 sd 1:0:0:0: [sdb] 490234752 512-byte hardware sectors (251000 MB) sd 1:0:0:0: [sdb] Write Protect is off sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 1:0:0:0: [sdb] 490234752 512-byte hardware sectors (251000 MB) sd 1:0:0:0: [sdb] Write Protect is off sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sdb: sdb1 sd 1:0:0:0: [sdb] Attached SCSI disk scsi 2:0:0:0: Direct-Access ATA Maxtor 7Y250M0 YAR5 PQ: 0 ANSI: 5 sd 2:0:0:0: [sdc] 490234752 512-byte hardware sectors (251000 MB) sd 2:0:0:0: [sdc] Write Protect is off sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00 sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 2:0:0:0: [sdc] 490234752 512-byte hardware sectors (251000 MB) sd 2:0:0:0: [sdc] Write Protect is off sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00 sd 2:0:0:0: [sdc] Write 
cache: enabled, read cache: enabled, doesn't support DPO or FUA sdc: sdc1 sd 2:0:0:0: [sdc] Attached SCSI disk scsi 3:0:0:0: Direct-Access ATA Maxtor 6B250S0 BANC PQ: 0 ANSI: 5 sd 3:0:0:0: [sdd] 490234752 512-byte hardware sectors (251000 MB) sd 3:0:0:0: [sdd] Write Protect is off sd 3:0:0:0: [sdd] Mode Sense: 00 3a 00 00 sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 3:0:0:0: [sdd] 490234752 512-byte hardware sectors (251000 MB) sd 3:0:0:0: [sdd] Write Protect is off sd 3:0:0:0: [sdd] Mode Sense: 00 3a 00 00 sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sdd: sdd1 sd 3:0:0:0: [sdd] Attached SCSI disk sata_via 0000:00:0f.0: version 2.1 ACPI: PCI Interrupt 0000:00:0f.0[B] -> GSI 20 (level, low) -> IRQ 16 sata_via 0000:00:0f.0: routed to hard irq line 0 scsi4 : sata_via scsi5 : sata_via ata5: SATA max UDMA/133 cmd 0x0001b000 ctl 0x0001a802 bmdma 0x00019800 irq 0 ata6: SATA max UDMA/133 cmd 0x0001a400 ctl 0x0001a002 bmdma 0x00019808 irq 0 ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 300) ATA: abnormal status 0x7F on port 0x0001b007 ATA: abnormal status 0x7F on port 0x0001b007 ata5.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata5.00: ATA-7: Maxtor 7B250S0, BANC1980, max UDMA/133 ata5.00: 490234752 sectors, multi 16: LBA48 NCQ (depth 0/32) ata5.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata5.00: configured for UDMA/133 ata6: SATA link up 1.5 Gbps (SStatus 113 SControl 300) ATA: abnormal status 0x7F on port 0x0001a407 ATA: abnormal status 0x7F on port 0x0001a407 ata6.00: ata_hpa_resize 1: sectors = 781422768, hpa_sectors = 781422768 ata6.00: ATA-7: ST3400620AS, 3.AAK, max UDMA/133 ata6.00: 781422768 sectors, multi 16: LBA48 NCQ (depth 0/32) ata6.00: ata_hpa_resize 1: sectors = 781422768, hpa_sectors = 781422768 ata6.00: configured for UDMA/133 scsi 4:0:0:0: Direct-Access ATA Maxtor 7B250S0 BANC PQ: 0 ANSI: 5 sd 4:0:0:0: [sde] 490234752 
512-byte hardware sectors (251000 MB) sd 4:0:0:0: [sde] Write Protect is off sd 4:0:0:0: [sde] Mode Sense: 00 3a 00 00 sd 4:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 4:0:0:0: [sde] 490234752 512-byte hardware sectors (251000 MB) sd 4:0:0:0: [sde] Write Protect is off sd 4:0:0:0: [sde] Mode Sense: 00 3a 00 00 sd 4:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sde: sde1 sd 4:0:0:0: [sde] Attached SCSI disk scsi 5:0:0:0: Direct-Access ATA ST3400620AS 3.AA PQ: 0 ANSI: 5 sd 5:0:0:0: [sdf] 781422768 512-byte hardware sectors (400088 MB) sd 5:0:0:0: [sdf] Write Protect is off sd 5:0:0:0: [sdf] Mode Sense: 00 3a 00 00 sd 5:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 5:0:0:0: [sdf] 781422768 512-byte hardware sectors (400088 MB) sd 5:0:0:0: [sdf] Write Protect is off sd 5:0:0:0: [sdf] Mode Sense: 00 3a 00 00 sd 5:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sdf: sdf1 sdf2 sd 5:0:0:0: [sdf] Attached SCSI disk pnp: the driver 'i8042 kbd' has been registered pnp: match found with the PnP device '00:0a' and the driver 'i8042 kbd' pnp: the driver 'i8042 aux' has been registered pnp: match found with the PnP device '00:0b' and the driver 'i8042 aux' PNP: PS/2 Controller [PNP0303:PS2K,PNP0f03:PS2M] at 0x60,0x64 irq 1,12 serio: i8042 KBD port at 0x60,0x64 irq 1 serio: i8042 AUX port at 0x60,0x64 irq 12 mice: PS/2 mouse device common for all mice input: AT Translated Set 2 keyboard as /class/input/input2 md: linear personality registered for level -1 md: raid0 personality registered for level 0 md: raid1 personality registered for level 1 raid6: int32x1 806 MB/s raid6: int32x2 1097 MB/s raid6: int32x4 694 MB/s raid6: int32x8 648 MB/s raid6: mmxx1 1683 MB/s raid6: mmxx2 3028 MB/s raid6: sse1x1 1622 MB/s raid6: sse1x2 2655 MB/s raid6: using algorithm sse1x2 (2655 MB/s) md: raid6 personality registered for level 6 md: raid5 
personality registered for level 5 md: raid4 personality registered for level 4 raid5: automatically using best checksumming function: pIII_sse pIII_sse : 4201.000 MB/sec raid5: using function: pIII_sse (4201.000 MB/sec) device-mapper: ioctl: 4.11.0-ioctl (2006-10-12) initialised: dm-devel@redhat.com TCP cubic registered Using IPI Shortcut mode input: ImPS/2 Logitech Wheel Mouse as /class/input/input3 md: Autodetecting RAID arrays. md: autorun ... md: considering sdf1 ... md: adding sdf1 ... md: adding sde1 ... md: adding sdd1 ... md: adding sdc1 ... md: adding sdb1 ... md: adding sda1 ... md: adding hdb1 ... md: created md0 md: bind md: bind md: bind md: bind md: bind md: bind md: bind md: running: raid5: device sdf1 operational as raid disk 0 raid5: device sde1 operational as raid disk 1 raid5: device sdd1 operational as raid disk 2 raid5: device sdc1 operational as raid disk 3 raid5: device sdb1 operational as raid disk 4 raid5: device sda1 operational as raid disk 5 raid5: device hdb1 operational as raid disk 6 raid5: allocated 7316kB for md0 raid5: raid level 6 set md0 active with 7 out of 7 devices, algorithm 2 RAID5 conf printout: --- rd:7 wd:7 disk 0, o:1, dev:sdf1 disk 1, o:1, dev:sde1 disk 2, o:1, dev:sdd1 disk 3, o:1, dev:sdc1 disk 4, o:1, dev:sdb1 disk 5, o:1, dev:sda1 disk 6, o:1, dev:hdb1 md0: bitmap initialized from disk: read 15/15 pages, set 2 bits, status: 0 created bitmap (234 pages) for device md0 md: ... autorun DONE. Filesystem "hda2": Disabling barriers, not supported by the underlying device XFS mounting filesystem hda2 Ending clean XFS mount for filesystem: hda2 VFS: Mounted root (xfs filesystem) readonly. 
Freeing unused kernel memory: 188k freed NET: Registered protocol family 1 PCI: Enabling device 0000:00:09.0 (0014 -> 0017) ACPI: PCI Interrupt 0000:00:09.0[A] -> GSI 18 (level, low) -> IRQ 18 skge 1.11 addr 0xf6000000 irq 18 chip Yukon rev 1 skge eth0: addr 00:0c:6e:f6:47:ee usbcore: registered new interface driver usbfs usbcore: registered new interface driver hub usbcore: registered new device driver usb USB Universal Host Controller Interface driver v3.0 ACPI: PCI Interrupt 0000:00:10.0[A] -> GSI 21 (level, low) -> IRQ 19 uhci_hcd 0000:00:10.0: UHCI Host Controller uhci_hcd 0000:00:10.0: new USB bus registered, assigned bus number 1 uhci_hcd 0000:00:10.0: irq 19, io base 0x00008800 usb usb1: configuration #1 chosen from 1 choice hub 1-0:1.0: USB hub found hub 1-0:1.0: 2 ports detected ACPI: PCI Interrupt 0000:00:10.1[A] -> GSI 21 (level, low) -> IRQ 19 uhci_hcd 0000:00:10.1: UHCI Host Controller uhci_hcd 0000:00:10.1: new USB bus registered, assigned bus number 2 uhci_hcd 0000:00:10.1: irq 19, io base 0x00008400 usb usb2: configuration #1 chosen from 1 choice hub 2-0:1.0: USB hub found hub 2-0:1.0: 2 ports detected sk98lin: driver has been replaced by the skge driver and is scheduled for removal ACPI: PCI Interrupt 0000:00:10.2[B] -> GSI 21 (level, low) -> IRQ 19 uhci_hcd 0000:00:10.2: UHCI Host Controller uhci_hcd 0000:00:10.2: new USB bus registered, assigned bus number 3 uhci_hcd 0000:00:10.2: irq 19, io base 0x00008000 usb usb3: configuration #1 chosen from 1 choice hub 3-0:1.0: USB hub found hub 3-0:1.0: 2 ports detected ACPI: PCI Interrupt 0000:00:10.3[B] -> GSI 21 (level, low) -> IRQ 19 uhci_hcd 0000:00:10.3: UHCI Host Controller uhci_hcd 0000:00:10.3: new USB bus registered, assigned bus number 4 uhci_hcd 0000:00:10.3: irq 19, io base 0x00007800 usb usb4: configuration #1 chosen from 1 choice hub 4-0:1.0: USB hub found hub 4-0:1.0: 2 ports detected vt596_smbus 0000:00:11.0: VT596_smba = 0xE800 i2c-adapter i2c-0: adapter [SMBus Via Pro adapter at e800] 
registered ACPI: PCI Interrupt 0000:00:10.4[C] -> GSI 21 (level, low) -> IRQ 19 ehci_hcd 0000:00:10.4: EHCI Host Controller ehci_hcd 0000:00:10.4: new USB bus registered, assigned bus number 5 ehci_hcd 0000:00:10.4: irq 19, io mem 0xf4000000 ehci_hcd 0000:00:10.4: USB 2.0 started, EHCI 1.00, driver 10 Dec 2004 usb usb5: configuration #1 chosen from 1 choice hub 5-0:1.0: USB hub found hub 5-0:1.0: 8 ports detected ACPI: PCI Interrupt 0000:00:11.5[C] -> GSI 22 (level, low) -> IRQ 20 PCI: Setting latency timer of device 0000:00:11.5 to 64 codec_read: codec 0 is not valid [0xfe0000] codec_read: codec 0 is not valid [0xfe0000] codec_read: codec 0 is not valid [0xfe0000] codec_read: codec 0 is not valid [0xfe0000] Adding 522100k swap on /dev/hda3. Priority:-1 extents:1 across:522100k Filesystem "hda2": Disabling barriers, not supported by the underlying device i2c-core: driver [eeprom] registered i2c-adapter i2c-0: found normal entry for adapter 0, addr 0x50 i2c-adapter i2c-0: Transaction (pre): STS=42 CNT=14 CMD=00 ADD=a0 DAT=11,00 i2c-adapter i2c-0: SMBus busy (0x42). Resetting... 
i2c-adapter i2c-0: Transaction (post): STS=00 CNT=00 CMD=00 ADD=a0 DAT=11,00 i2c-adapter i2c-0: Transaction (pre): STS=40 CNT=00 CMD=00 ADD=a0 DAT=11,00 i2c-adapter i2c-0: Transaction (post): STS=00 CNT=00 CMD=00 ADD=a0 DAT=11,00 i2c-adapter i2c-0: client [eeprom] registered with bus id 0-0050 i2c-adapter i2c-0: found normal entry for adapter 0, addr 0x51 i2c-adapter i2c-0: Transaction (pre): STS=40 CNT=00 CMD=00 ADD=a2 DAT=11,00 i2c-adapter i2c-0: Transaction (post): STS=00 CNT=00 CMD=00 ADD=a2 DAT=11,00 i2c-adapter i2c-0: found normal entry for adapter 0, addr 0x52 i2c-adapter i2c-0: Transaction (pre): STS=40 CNT=00 CMD=00 ADD=a4 DAT=11,00 i2c-adapter i2c-0: Transaction (post): STS=00 CNT=00 CMD=00 ADD=a4 DAT=11,00 i2c-adapter i2c-0: Transaction (pre): STS=40 CNT=00 CMD=00 ADD=a4 DAT=11,00 i2c-adapter i2c-0: Transaction (post): STS=00 CNT=00 CMD=00 ADD=a4 DAT=11,00 i2c-adapter i2c-0: client [eeprom] registered with bus id 0-0052 i2c-adapter i2c-0: found normal entry for adapter 0, addr 0x53 i2c-adapter i2c-0: Transaction (pre): STS=40 CNT=00 CMD=00 ADD=a6 DAT=11,00 i2c-adapter i2c-0: Transaction (post): STS=00 CNT=00 CMD=00 ADD=a6 DAT=11,00 i2c-adapter i2c-0: found normal entry for adapter 0, addr 0x54 i2c-adapter i2c-0: Transaction (pre): STS=40 CNT=00 CMD=00 ADD=a8 DAT=11,00 i2c-adapter i2c-0: Transaction (post): STS=00 CNT=00 CMD=00 ADD=a8 DAT=11,00 i2c-adapter i2c-0: found normal entry for adapter 0, addr 0x55 i2c-adapter i2c-0: Transaction (pre): STS=40 CNT=00 CMD=00 ADD=aa DAT=11,00 i2c-adapter i2c-0: Transaction (post): STS=00 CNT=00 CMD=00 ADD=aa DAT=11,00 i2c-adapter i2c-0: found normal entry for adapter 0, addr 0x56 i2c-adapter i2c-0: Transaction (pre): STS=40 CNT=00 CMD=00 ADD=ac DAT=11,00 i2c-adapter i2c-0: Transaction (post): STS=00 CNT=00 CMD=00 ADD=ac DAT=11,00 i2c-adapter i2c-0: found normal entry for adapter 0, addr 0x57 i2c-adapter i2c-0: Transaction (pre): STS=40 CNT=00 CMD=00 ADD=ae DAT=11,00 i2c-adapter i2c-0: Transaction (post): STS=00 
CNT=00 CMD=00 ADD=ae DAT=11,00 i2c-adapter i2c-9191: ISA main adapter registered it87: Found IT8712F chip at 0x290, revision 5 i2c-adapter i2c-9191: Driver it87-isa registered i2c-adapter i2c-9191: client [it8712] registered with bus id 9191-0290 Installing knfsd (copyright (C) 1996 okir@monad.swb.de). kjournald starting. Commit interval 5 seconds EXT3 FS on hda1, internal journal EXT3-fs: mounted filesystem with ordered data mode. Filesystem "md0": Disabling barriers, not supported by the underlying device XFS mounting filesystem md0 Ending clean XFS mount for filesystem: md0 XFS mounting filesystem hdb2 Ending clean XFS mount for filesystem: hdb2 skge eth0: enabling interface skge eth0: Link is up at 1000 Mbps, full duplex, flow control both [I umount /dev/md0 here and then suspend2disk/resume] swsusp: Basic memory bitmaps created Stopping tasks ... done. Shrinking memory... done (0 pages freed) Freed 0 kbytes in 0.03 seconds (0.00 MB/s) Suspending console(s) sd 5:0:0:0: [sdf] Synchronizing SCSI cache sd 4:0:0:0: [sde] Synchronizing SCSI cache sd 3:0:0:0: [sdd] Synchronizing SCSI cache sd 2:0:0:0: [sdc] Synchronizing SCSI cache sd 1:0:0:0: [sdb] Synchronizing SCSI cache sd 0:0:0:0: [sda] Synchronizing SCSI cache ACPI: PCI interrupt for device 0000:00:11.5 disabled ACPI: PCI interrupt for device 0000:00:10.4 disabled ACPI: PCI interrupt for device 0000:00:10.3 disabled ACPI: PCI interrupt for device 0000:00:10.2 disabled ACPI: PCI interrupt for device 0000:00:10.1 disabled ACPI: PCI interrupt for device 0000:00:10.0 disabled ACPI: PCI interrupt for device 0000:00:0f.0 disabled skge eth0: disabling interface swsusp: critical section: swsusp: Need to copy 34443 pages Intel machine check architecture supported. Intel machine check reporting enabled on CPU#0. 
PCI: Setting latency timer of device 0000:00:01.0 to 64 PM: Writing back config space on device 0000:00:09.0 at offset 1 (was 2b00014, writing 2b00017) skge eth0: enabling interface Clocksource tsc unstable (delta = 4327744420428 ns) ACPI: PCI Interrupt 0000:00:0d.0[A] -> GSI 16 (level, low) -> IRQ 17 PM: Writing back config space on device 0000:00:0f.0 at offset 1 (was 2900003, writing 2900007) ACPI: PCI Interrupt 0000:00:0f.0[B] -> GSI 20 (level, low) -> IRQ 16 ACPI: PCI Interrupt 0000:00:0f.1[A] -> GSI 20 (level, low) -> IRQ 16 Time: acpi_pm clocksource has been installed. ACPI: PCI Interrupt 0000:00:10.0[A] -> GSI 21 (level, low) -> IRQ 19 usb usb1: root hub lost power or was reset ACPI: PCI Interrupt 0000:00:10.1[A] -> GSI 21 (level, low) -> IRQ 19 usb usb2: root hub lost power or was reset ACPI: PCI Interrupt 0000:00:10.2[B] -> GSI 21 (level, low) -> IRQ 19 usb usb3: root hub lost power or was reset ACPI: PCI Interrupt 0000:00:10.3[B] -> GSI 21 (level, low) -> IRQ 19 usb usb4: root hub lost power or was reset ACPI: PCI Interrupt 0000:00:10.4[C] -> GSI 21 (level, low) -> IRQ 19 PM: Writing back config space on device 0000:00:10.4 at offset 3 (was 802008, writing 802010) usb usb5: root hub lost power or was reset ACPI: PCI Interrupt 0000:00:11.5[C] -> GSI 22 (level, low) -> IRQ 20 PCI: Setting latency timer of device 0000:00:11.5 to 64 pnp: Res cnt 3 pnp: res cnt 3 pnp: Encode io pnp: Encode io pnp: Encode irq pnp: Failed to activate device 00:0a. pnp: Res cnt 1 pnp: res cnt 1 pnp: Encode irq pnp: Failed to activate device 00:0b. 
sd 0:0:0:0: [sda] Starting disk sd 1:0:0:0: [sdb] Starting disk sd 2:0:0:0: [sdc] Starting disk sd 3:0:0:0: [sdd] Starting disk sd 4:0:0:0: [sde] Starting disk ATA: abnormal status 0x7F on port 0x0001b007 ATA: abnormal status 0x7F on port 0x0001b007 ATA: abnormal status 0x7F on port 0x0001a407 ATA: abnormal status 0x7F on port 0x0001a407 ata5.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata5.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata5.00: configured for UDMA/133 sd 4:0:0:0: [sde] 490234752 512-byte hardware sectors (251000 MB) sd 5:0:0:0: [sdf] Starting disk sd 4:0:0:0: [sde] Write Protect is off sd 4:0:0:0: [sde] Mode Sense: 00 3a 00 00 sd 4:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA ata6.00: ata_hpa_resize 1: sectors = 781422768, hpa_sectors = 781422768 ata6.00: ata_hpa_resize 1: sectors = 781422768, hpa_sectors = 781422768 ata6.00: configured for UDMA/133 sd 5:0:0:0: [sdf] 781422768 512-byte hardware sectors (400088 MB) sd 5:0:0:0: [sdf] Write Protect is off sd 5:0:0:0: [sdf] Mode Sense: 00 3a 00 00 sd 5:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Restarting tasks ... done. swsusp: Basic memory bitmaps freed skge eth0: Link is up at 1000 Mbps, full duplex, flow control both [following is the dmesg of 2.6.21 post resume] Stopping tasks ... done. Shrinking memory... done (0 pages freed) Freed 0 kbytes in 0.03 seconds (0.00 MB/s) Suspending console(s) ACPI: PCI interrupt for device 0000:00:11.5 disabled ACPI: PCI interrupt for device 0000:00:10.4 disabled ACPI: PCI interrupt for device 0000:00:10.3 disabled ACPI: PCI interrupt for device 0000:00:10.2 disabled ACPI: PCI interrupt for device 0000:00:10.1 disabled ACPI: PCI interrupt for device 0000:00:10.0 disabled skge eth0: disabling interface swsusp: critical section: swsusp: Need to copy 34078 pages Intel machine check architecture supported. 
Intel machine check reporting enabled on CPU#0. PCI: Setting latency timer of device 0000:00:01.0 to 64 Clocksource tsc unstable (delta = 4337118157984 ns) Time: acpi_pm clocksource has been installed. PM: Writing back config space on device 0000:00:09.0 at offset 1 (was 2b00014, writing 2b00017) skge eth0: enabling interface ACPI: PCI Interrupt 0000:00:0d.0[A] -> GSI 16 (level, low) -> IRQ 17 ACPI: PCI Interrupt 0000:00:0f.0[B] -> GSI 20 (level, low) -> IRQ 16 ACPI: PCI Interrupt 0000:00:0f.1[A] -> GSI 20 (level, low) -> IRQ 16 ACPI: PCI Interrupt 0000:00:10.0[A] -> GSI 21 (level, low) -> IRQ 19 usb usb1: root hub lost power or was reset ACPI: PCI Interrupt 0000:00:10.1[A] -> GSI 21 (level, low) -> IRQ 19 usb usb2: root hub lost power or was reset ACPI: PCI Interrupt 0000:00:10.2[B] -> GSI 21 (level, low) -> IRQ 19 usb usb3: root hub lost power or was reset ACPI: PCI Interrupt 0000:00:10.3[B] -> GSI 21 (level, low) -> IRQ 19 usb usb4: root hub lost power or was reset ACPI: PCI Interrupt 0000:00:10.4[C] -> GSI 21 (level, low) -> IRQ 19 PM: Writing back config space on device 0000:00:10.4 at offset 3 (was 802008, writing 802010) usb usb5: root hub lost power or was reset ACPI: PCI Interrupt 0000:00:11.5[C] -> GSI 22 (level, low) -> IRQ 20 PCI: Setting latency timer of device 0000:00:11.5 to 64 pnp: Res cnt 3 pnp: res cnt 3 pnp: Encode io pnp: Encode io pnp: Encode irq pnp: Failed to activate device 00:0a. pnp: Res cnt 1 pnp: res cnt 1 pnp: Encode irq pnp: Failed to activate device 00:0b. logips2pp: Detected unknown logitech mouse model 1 Restarting tasks ... done. 
skge eth0: Link is up at 1000 Mbps, full duplex, flow control both From owner-xfs@oss.sgi.com Wed Jun 6 10:18:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 06 Jun 2007 10:18:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.5 required=5.0 tests=BAYES_50,FH_HOST_EQ_D_D_D_D, FH_HOST_EQ_D_D_D_DB,RDNS_DYNAMIC autolearn=no version=3.2.0-pre1-r499012 Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l56HIGWt008456 for ; Wed, 6 Jun 2007 10:18:17 -0700 Received: from agami.com (mail [192.168.168.5]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id l56HHq9r012920 for ; Wed, 6 Jun 2007 10:17:52 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id l56HICDM007023 for ; Wed, 6 Jun 2007 10:18:12 -0700 Received: from [127.0.0.1] ([10.123.0.56]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Wed, 6 Jun 2007 10:18:34 -0700 Message-ID: <4666EC56.9000606@agami.com> Date: Wed, 06 Jun 2007 10:18:14 -0700 From: Michael Nishimoto User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.2) Gecko/20040804 Netscape/7.2 (ax) X-Accept-Language: en-us, en MIME-Version: 1.0 To: David Chinner CC: xfs@oss.sgi.com Subject: Re: Reducing memory requirements for high extent xfs files References: <200705301649.l4UGnckA027406@oss.sgi.com> <20070530225516.GB85884050@sgi.com> <4665E276.9020406@agami.com> <20070606013601.GR86004887@sgi.com> In-Reply-To: <20070606013601.GR86004887@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 06 Jun 2007 17:18:35.0049 (UTC) FILETIME=[B6CAE190:01C7A85E] X-Scanned-By: MIMEDefang 2.58 on 192.168.168.13 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11667 
X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: miken@agami.com Precedence: bulk X-list: xfs David Chinner wrote: > On Tue, Jun 05, 2007 at 03:23:50PM -0700, Michael Nishimoto wrote: > > David Chinner wrote: > > >On Wed, May 30, 2007 at 09:49:38AM -0700, Michael Nishimoto wrote: > > > > Hello, > > > > > > > > Has anyone done any work or had thoughts on changes required > > > > to reduce the total memory footprint of high extent xfs files? > ..... > > >Yes, it could, but that's a pretty major overhaul of the extent > > >interface which currently assumes everywhere that the entire > > >extent tree is in core. > > > > > >Can you describe the problem you are seeing that leads you to > > >ask this question? What's the problem you need to solve? > > > > I realize that this work won't be trivial which is why I asked if anyone > > has thought about all relevant issues. > > > > When using NFS over XFS, slowly growing files (can be ascii log files) > > tend to fragment quite a bit. > > Oh, that problem. > > The issue is that allocation beyond EOF (the normal way we prevent > fragmentation in this case) gets truncated off on file close. > > Every NFS request is processed by doing: > > open > write > close > > And so XFS truncates the allocation beyond EOF on close. Hence > the next write requires a new allocation and that results in > a non-contiguous file because the adjacent blocks have already > been used.... > Yes, we diagnosed this same issue. > > Options: > > 1 NFS server open file cache to avoid the close. > 2 add detection to XFS to determine if the caller is > an NFS thread and don't truncate on close. > 3 use preallocation. > 4 preallocation on the file once will result in the > XFS_DIFLAG_PREALLOC being set on the inode and it > won't truncate on close. > 5 append only flag will work in the same way as the > prealloc flag w.r.t preventing truncation on close. 
> 6 run xfs_fsr > We have discussed doing number 1. The problem with number 2, 3, 4, & 5 is that we ended up with a bunch of files which appeared to leak space. If the truncate isn't done at file close time, the extra space sits around forever. > > Note - I don't think extent size hints alone will help as they > don't prevent EOF truncation on close. > > > One system had several hundred files > > which required more than one page to store the extents. > > I don't consider that a problem as such. We'll always get some > level of fragmentation if we don't preallocate. > > > Quite a few > > files had extent counts greater than 10k, and one file had 120k extents. > > you should run xfs_fsr occasionally.... > > > Besides the memory consumption, latency to return the first byte of the > > file can get noticeable. > > Yes, that too :/ > > However, I think we should be trying to fix the root cause of this > worst case fragmentation rather than trying to make the rest of the > filesystem accommodate an extreme corner case efficiently. i.e. > let's look at the test cases and determine what piece of logic we > need to add or remove to prevent this cause of fragmentation. > > Cheers, > > Dave. > -- > Dave Chinner > Principal Engineer > SGI Australian Software Group > I guess there are multiple ways to look at this problem. I have been going under the assumption that xfs' inability to handle a large number of extents is the root cause. When a filesystem is full, defragmentation might not be possible. Also, should we consider a file with 1MB extents as fragmented? A 100GB file with 1MB extents has 100k extents. As disks and, hence, filesystems get larger, it's possible to have a larger number of such files in a filesystem. I still think that trying to not fragment up front is required as well as running xfs_fsr, but I don't think those alone can be a complete solution. 
Getting back to the original question, has there ever been serious thought in what it might take to handle large extent files? What might be involved with trying to page extent blocks? I'm most concerned about the potential locking consequences and streaming performance implications. thanks, Michael From owner-xfs@oss.sgi.com Wed Jun 6 16:47:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 06 Jun 2007 16:47:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l56NleWt018353 for ; Wed, 6 Jun 2007 16:47:42 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA15787; Thu, 7 Jun 2007 09:47:31 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l56NlTAf116274519; Thu, 7 Jun 2007 09:47:30 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l56NlN7Y116352085; Thu, 7 Jun 2007 09:47:23 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 7 Jun 2007 09:47:23 +1000 From: David Chinner To: Michael Nishimoto Cc: David Chinner , xfs@oss.sgi.com Subject: Re: Reducing memory requirements for high extent xfs files Message-ID: <20070606234723.GC86004887@sgi.com> References: <200705301649.l4UGnckA027406@oss.sgi.com> <20070530225516.GB85884050@sgi.com> <4665E276.9020406@agami.com> <20070606013601.GR86004887@sgi.com> <4666EC56.9000606@agami.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4666EC56.9000606@agami.com> User-Agent: 
Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11668 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Wed, Jun 06, 2007 at 10:18:14AM -0700, Michael Nishimoto wrote: > David Chinner wrote: > >On Tue, Jun 05, 2007 at 03:23:50PM -0700, Michael Nishimoto wrote: > >> When using NFS over XFS, slowly growing files (can be ascii log files) > >> tend to fragment quite a bit. > > > >Oh, that problem. ..... > >And so XFS truncates the allocation beyond EOF on close. Hence > >the next write requires a new allocation and that results in > >a non-contiguous file because the adjacent blocks have already > >been used.... > > > Yes, we diagnosed this same issue. > > >Options: > > > > 1 NFS server open file cache to avoid the close. > > 2 add detection to XFS to determine if the caller is > > an NFS thread and don't truncate on close. > > 3 use preallocation. > > 4 preallocation on the file once will result in the > > XFS_DIFLAG_PREALLOC being set on the inode and it > > won't truncate on close. > > 5 append only flag will work in the same way as the > > prealloc flag w.r.t preventing truncation on close. > > 6 run xfs_fsr > > > We have discussed doing number 1. So has the community - there may even be patches floating around... > The problem with number 2, > 3, 4, & 5 is that we ended up with a bunch of files which appeared > to leak space. If the truncate isn't done at file close time, the extra > space sits around forever. That's not a problem for slowly growing log files - they will eventually use the space. I'm not saying that the truncate should be avoided on all files, just the slow growing ones that get fragmented.... 
> >However, I think we should be trying to fix the root cause of this > >worst case fragmentation rather than trying to make the rest of the > >filesystem accommodate an extreme corner case efficiently. i.e. > >let's look at the test cases and determine what piece of logic we > >need to add or remove to prevent this cause of fragmentation. > I guess there are multiple ways to look at this problem. I have been > going under the assumption that xfs' inability to handle a large number > of extents is the root cause. Fair enough. > When a filesystem is full, defragmentation > might not be possible. Yes, that's true. > Also, should we consider a file with 1MB extents as > fragmented? A 100GB file with 1MB extents has 100k extents. Yes, that's fragmented - it has 4 orders of magnitude more extents than optimal - and the extents are too small to allow reads or writes to achieve full bandwidth on high end raid configs.... > As disks > and, hence, filesystems get larger, it's possible to have a larger number > of such files in a filesystem. Yes. But as disks get larger, there's more space available from which to allocate contiguous ranges and so that sort of problem is less likely to occur (until the filesystem gets full). > I still think that trying to not fragment up front is required as well > as running > xfs_fsr, but I don't think those alone can be a complete solution. > > Getting back to the original question, has there ever been serious thought > in what it might take to handle large extent files? Yes, I've thought about it from a relatively high level, but enough to indicate real problems that breed complexity. > What might be involved > with trying to page extent blocks? - Rewriting all of the incore extent handling code to support missing extent ranges (currently uses deltas from the previous block for file offset). 
- changing the bmap btree code to convert to incore, uncompressed format on a block by block basis rather than into a global table - add code to demand read the extent list - needs to use cursors to pin blocks in memory while doing traversals - needs to work in ENOMEM conditions - convert xfs_buf.c to be able to use mempools for both xfs_buf_t and block dev page cache so that we can read blocks when ENOMEM in the writeback path - convert in-core extent structures to use mempools so we can read blocks when -ENOMEM in the writeback path - any new allocated structures will also have to use mempools - add memory shaker interfaces > I'm most concerned about the potential locking consequences and streaming > performance implications. In reality, the worst problem is writeback at ENOMEM. Who cares about locking and performance if it's fundamentally unworkable when the machine is out of memory? Even using mempools we may not be able to demand page extent blocks safely in all cases. This is my big worry about it, and the more I thought about it, the less that demand paging made sense - it gets horrendously complex when you have to start playing by mempool rules and given that the lifetime of modified buffers is determined by the log and AIL flushing behaviour we have serious problems guaranteeing when objects would be returned to the mempool. This is a showstopper issue, IMO. I'm happy to be proven wrong, but it looks *extremely* messy and complex at this point.... Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jun 6 23:44:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 06 Jun 2007 23:44:05 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l576hxWt020613 for ; Wed, 6 Jun 2007 23:44:02 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA25574; Thu, 7 Jun 2007 16:43:55 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l576hrAf116326687; Thu, 7 Jun 2007 16:43:55 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l576hq5k115417772; Thu, 7 Jun 2007 16:43:52 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 7 Jun 2007 16:43:52 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: review: fix i386 build Message-ID: <20070607064352.GN86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11669 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Don't use type-unsafe macros. Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/dmapi/xfs_dm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) Index: 2.6.x-xfs-new/fs/xfs/dmapi/xfs_dm.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/dmapi/xfs_dm.c 2007-06-06 15:55:00.000000000 +1000 +++ 2.6.x-xfs-new/fs/xfs/dmapi/xfs_dm.c 2007-06-06 15:56:00.613530262 +1000 @@ -2654,7 +2654,7 @@ xfs_dm_punch_hole( * make sure we punch the block and not just zero it. */ if (punch_to_eof) - len = roundup((realsize - off), bsize); + len = roundup_64((realsize - off), bsize); xfs_iunlock(xip, XFS_ILOCK_EXCL); From owner-xfs@oss.sgi.com Thu Jun 7 00:03:54 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 00:03:57 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5773oWt032118 for ; Thu, 7 Jun 2007 00:03:52 -0700 Received: from [134.15.64.54] (cf-vpn-sw-corp-64-54.corp.sgi.com [134.15.64.54]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA26284; Thu, 7 Jun 2007 17:03:35 +1000 Message-ID: <4667ADAF.7000904@sgi.com> Date: Thu, 07 Jun 2007 17:03:11 +1000 From: Tim Shimmin User-Agent: Thunderbird 1.5.0.10 (Windows/20070221) MIME-Version: 1.0 To: xfs-dev@sgi.com CC: xfs@oss.sgi.com Subject: review: xfs_growfs_data_private() not logging agf length change Content-Type: multipart/mixed; boundary="------------000807030904070307000007" X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11670 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com 
Precedence: bulk X-list: xfs This is a multi-part message in MIME format. --------------000807030904070307000007 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Looks like we forgot to log the agf_length change here. (cut 'n' pasted patch) --Tim =========================================================================== Index: fs/xfs/xfs_fsops.c =========================================================================== --- a/fs/xfs/xfs_fsops.c 2007-04-17 18:02:46.000000000 +1000 +++ b/fs/xfs/xfs_fsops.c 2007-04-17 17:59:44.467987572 +1000 @@ -328,6 +328,7 @@ xfs_growfs_data_private( be32_add(&agf->agf_length, new); ASSERT(be32_to_cpu(agf->agf_length) == be32_to_cpu(agi->agi_length)); + xfs_alloc_log_agf(tp, bp, XFS_AGF_LENGTH); /* * Free the new space. */ --------------000807030904070307000007 Content-Type: text/plain; name="agf_length.patch" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="agf_length.patch" --- .pc/agf_length.patch/fs/xfs/xfs_fsops.c 2007-06-07 16:19:27.000000000 +1000 +++ fs/xfs/xfs_fsops.c 2007-06-07 16:23:41.302363734 +1000 @@ -332,6 +332,7 @@ xfs_growfs_data_private( be32_add(&agf->agf_length, new); ASSERT(be32_to_cpu(agf->agf_length) == be32_to_cpu(agi->agi_length)); + xfs_alloc_log_agf(tp, bp, XFS_AGF_LENGTH); /* * Free the new space. 
*/ --------------000807030904070307000007-- From owner-xfs@oss.sgi.com Thu Jun 7 00:30:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 00:30:54 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l577UkWt011519 for ; Thu, 7 Jun 2007 00:30:49 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA27101; Thu, 7 Jun 2007 17:30:36 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l577UZAf114495412; Thu, 7 Jun 2007 17:30:36 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l577UYRL113570304; Thu, 7 Jun 2007 17:30:34 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 7 Jun 2007 17:30:34 +1000 From: David Chinner To: Tim Shimmin Cc: xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: review: xfs_growfs_data_private() not logging agf length change Message-ID: <20070607073034.GP86004887@sgi.com> References: <4667ADAF.7000904@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4667ADAF.7000904@sgi.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11671 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Jun 07, 2007 at 05:03:11PM +1000, Tim Shimmin wrote: > Looks like we forgot to log 
the agf_length change here. > > (cut 'n' pasted patch) > > --Tim > > =========================================================================== > Index: fs/xfs/xfs_fsops.c > =========================================================================== > > --- a/fs/xfs/xfs_fsops.c 2007-04-17 18:02:46.000000000 +1000 > +++ b/fs/xfs/xfs_fsops.c 2007-04-17 17:59:44.467987572 +1000 > @@ -328,6 +328,7 @@ xfs_growfs_data_private( > be32_add(&agf->agf_length, new); > ASSERT(be32_to_cpu(agf->agf_length) == > be32_to_cpu(agi->agi_length)); > + xfs_alloc_log_agf(tp, bp, XFS_AGF_LENGTH); > /* > * Free the new space. > */ Yup, looks ok to me. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 7 00:57:22 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 00:57:25 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.0 required=5.0 tests=BAYES_80,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from bay0-omc2-s6.bay0.hotmail.com (bay0-omc2-s6.bay0.hotmail.com [65.54.246.142]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l577vKWt018537 for ; Thu, 7 Jun 2007 00:57:21 -0700 Received: from hotmail.com ([65.54.174.79]) by bay0-omc2-s6.bay0.hotmail.com with Microsoft SMTPSVC(6.0.3790.2668); Thu, 7 Jun 2007 00:45:19 -0700 Received: from mail pickup service by hotmail.com with Microsoft SMTPSVC; Thu, 7 Jun 2007 00:45:19 -0700 Message-ID: Received: from 85.36.106.198 by BAY103-DAV7.phx.gbl with DAV; Thu, 07 Jun 2007 07:45:16 +0000 X-Originating-IP: [85.36.106.198] X-Originating-Email: [pupilla@hotmail.com] X-Sender: pupilla@hotmail.com From: "Marco Berizzi" To: "David Chinner" Cc: "David Chinner" , , , "Marco Berizzi" References: <20070316012520.GN5743@melbourne.sgi.com> <20070316195951.GB5743@melbourne.sgi.com> <20070320064632.GO32602149@melbourne.sgi.com> Subject: Re: XFS internal error 
xfs_da_do_buf(2) at line 2087 of file fs/xfs/xfs_da_btree.c. Caller 0xc01b00bd Date: Thu, 7 Jun 2007 09:44:51 +0200 X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2800.1123 X-MIMEOLE: Produced By Microsoft MimeOLE V6.00.2800.1123 X-OriginalArrivalTime: 07 Jun 2007 07:45:19.0188 (UTC) FILETIME=[CBACC140:01C7A8D7] X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11672 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pupilla@hotmail.com Precedence: bulk X-list: xfs Hi David. Three months ago I wrote the message below. I had built various 2.6.20.x and 2.6.21.x vanilla kernels with all the debug options enabled and linux had never crashed. On June 4, I built linux 2.6.21.3 without any debugging options, and after 2 days linux started printing these errors: Jun 6 09:47:09 Pleiadi kernel: ======================= Jun 6 09:47:09 Pleiadi kernel: 0x0: 28 f1 45 d4 22 53 35 11 09 80 37 5a 47 8a 22 ee Jun 6 09:47:09 Pleiadi kernel: Filesystem "sda8": XFS internal error xfs_da_do_buf(2) at line 2086 of file fs/xfs/xfs_da_btree.c. 
Caller 0xc01b2301 Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_do_buf+0x70c/0x7b1 Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_read_buf+0x30/0x35 Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_read_buf+0x30/0x35 Jun 6 09:47:09 Pleiadi kernel: [] _xfs_buf_lookup_pages+0x1e8/0x2ea Jun 6 09:47:09 Pleiadi kernel: [] _xfs_buf_initialize+0xc8/0xf6 Jun 6 09:47:09 Pleiadi kernel: [] xfs_trans_unreserve_and_mod_sb+0x241/0x264 Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_read_buf+0x30/0x35 Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir2_leaf_lookup_int+0x5c/0x2b6 Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir2_leaf_lookup_int+0x5c/0x2b6 Jun 6 09:47:09 Pleiadi kernel: [] __next_cpu+0x12/0x1f Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir2_leaf_lookup+0x2b/0xe2 Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir2_isleaf+0x1a/0x4f Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir_lookup+0xc1/0x11d Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir_lookup_int+0x34/0x10e Jun 6 09:47:09 Pleiadi kernel: [] _xfs_trans_commit+0x1c7/0x3a2 Jun 6 09:47:09 Pleiadi kernel: [] xfs_lookup+0x5a/0x90 Jun 6 09:47:09 Pleiadi kernel: [] xfs_vn_lookup+0x52/0x93 Jun 6 09:47:09 Pleiadi kernel: [] real_lookup+0xbb/0x116 Jun 6 09:47:09 Pleiadi kernel: [] do_lookup+0x90/0xc2 Jun 6 09:47:09 Pleiadi kernel: [] xfs_vn_lookup+0x0/0x93 Jun 6 09:47:09 Pleiadi kernel: [] __link_path_walk+0x10c/0xcf1 Jun 6 09:47:09 Pleiadi kernel: [] pipe_read+0x23b/0x2bf Jun 6 09:47:09 Pleiadi kernel: [] link_path_walk+0x3e/0xac Jun 6 09:47:09 Pleiadi kernel: [] vfs_read+0xee/0x141 Jun 6 09:47:09 Pleiadi kernel: [] sys_read+0x41/0x6a Jun 6 09:47:09 Pleiadi kernel: [] do_path_lookup+0x11a/0x1ba Jun 6 09:47:09 Pleiadi kernel: [] do_unlinkat+0x41/0x114 Jun 6 09:47:09 Pleiadi kernel: [] vfs_read+0xee/0x141 Jun 6 09:47:09 Pleiadi kernel: [] sys_read+0x41/0x6a Jun 6 09:47:09 Pleiadi kernel: [] syscall_call+0x7/0xb Jun 6 09:47:09 Pleiadi kernel: ======================= PS: I haven't rebooted the system. 
It is printing this message every few seconds on the console: Filesystem "sda8": XFS internal error xfs_da_do_buf(2) at line 2086 of file fs/xfs/xfs_da_btree.c. Caller 0xc01b2301 Here is dmesg output: Jun 4 20:53:05 Pleiadi kernel: sanitize start Jun 4 20:53:05 Pleiadi kernel: sanitize end Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 0000000000000000 size: 000000000009ac00 end: 000000000009ac00 type: 1 Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() type is E820_RAM Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000000009ac00 size: 0000000000005400 end: 00000000000a0000 type: 2 Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000000ce000 size: 0000000000002000 end: 00000000000d0000 type: 2 Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000000e0000 size: 0000000000020000 end: 0000000000100000 type: 2 Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 0000000000100000 size: 000000003fdf0000 end: 000000003fef0000 type: 1 Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() type is E820_RAM Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000003fef0000 size: 000000000000b000 end: 000000003fefb000 type: 3 Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000003fefb000 size: 0000000000005000 end: 000000003ff00000 type: 4 Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000003ff00000 size: 0000000000080000 end: 000000003ff80000 type: 1 Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() type is E820_RAM Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000003ff80000 size: 0000000000080000 end: 0000000040000000 type: 2 Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000e0000000 size: 0000000010000000 end: 00000000f0000000 type: 2 Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000fec00000 size: 0000000000100400 end: 00000000fed00400 type: 2 Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000fee00000 size: 0000000000100000 end: 00000000fef00000 type: 2 Jun 4 20:53:05 Pleiadi 
kernel: copy_e820_map() start: 00000000ffb00000 size: 0000000000100000 end: 00000000ffc00000 type: 2 Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000fff00000 size: 0000000000100000 end: 0000000100000000 type: 2 Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 0000000000000000 - 000000000009ac00 (usable) Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000000009ac00 - 00000000000a0000 (reserved) Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000000ce000 - 00000000000d0000 (reserved) Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved) Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 0000000000100000 - 000000003fef0000 (usable) Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000003fef0000 - 000000003fefb000 (ACPI data) Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000003fefb000 - 000000003ff00000 (ACPI NVS) Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000003ff00000 - 000000003ff80000 (usable) Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000003ff80000 - 0000000040000000 (reserved) Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000e0000000 - 00000000f0000000 (reserved) Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000fec00000 - 00000000fed00400 (reserved) Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000fee00000 - 00000000fef00000 (reserved) Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000ffb00000 - 00000000ffc00000 (reserved) Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000fff00000 - 0000000100000000 (reserved) Jun 4 20:53:05 Pleiadi kernel: Zone PFN ranges: Jun 4 20:53:05 Pleiadi kernel: DMA 0 -> 4096 Jun 4 20:53:05 Pleiadi kernel: Normal 4096 -> 229376 Jun 4 20:53:05 Pleiadi kernel: HighMem 229376 -> 262016 Jun 4 20:53:05 Pleiadi kernel: early_node_map[1] active PFN ranges Jun 4 20:53:05 Pleiadi kernel: 0: 0 -> 262016 Jun 4 20:53:05 Pleiadi kernel: ACPI: RSDP 000F6BA0, 0024 (r2 PTLTD ) Jun 4 20:53:05 Pleiadi kernel: ACPI: XSDT 3FEF5381, 004C (r1 PTLTD ^I XSDT 6040001 LTP 0) Jun 4 20:53:05 Pleiadi kernel: ACPI: FACP 3FEF5441, 00F4 
(r3 FSC 6040001 F4240) Jun 4 20:53:05 Pleiadi kernel: ACPI: DSDT 3FEF5535, 597B (r1 FSC D1649 6040001 MSFT 2000002) Jun 4 20:53:05 Pleiadi kernel: ACPI: FACS 3FEFBFC0, 0040 Jun 4 20:53:05 Pleiadi kernel: ACPI: SPCR 3FEFAEB0, 0050 (r1 PTLTD $UCRTBL$ 6040001 PTL 1) Jun 4 20:53:05 Pleiadi kernel: ACPI: MCFG 3FEFAF00, 0040 (r1 PTLTD MCFG 6040001 LTP 0) Jun 4 20:53:05 Pleiadi kernel: ACPI: APIC 3FEFAF40, 0098 (r1 PTLTD ^I APIC 6040001 LTP 0) Jun 4 20:53:05 Pleiadi kernel: ACPI: BOOT 3FEFAFD8, 0028 (r1 PTLTD $SBFTBL$ 6040001 LTP 1) Jun 4 20:53:05 Pleiadi kernel: Processor #0 15:4 APIC version 20 Jun 4 20:53:05 Pleiadi kernel: Processor #1 15:4 APIC version 20 Jun 4 20:53:05 Pleiadi kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23 Jun 4 20:53:05 Pleiadi kernel: IOAPIC[1]: apic_id 3, version 32, address 0xfec80000, GSI 24-47 Jun 4 20:53:05 Pleiadi kernel: IOAPIC[2]: apic_id 4, version 32, address 0xfec80800, GSI 48-71 Jun 4 20:53:05 Pleiadi kernel: IOAPIC[3]: apic_id 5, version 32, address 0xfec84000, GSI 72-95 Jun 4 20:53:05 Pleiadi kernel: IOAPIC[4]: apic_id 6, version 32, address 0xfec84800, GSI 96-119 Jun 4 20:53:05 Pleiadi kernel: Enabling APIC mode: Flat. Using 5 I/O APICs Jun 4 20:53:05 Pleiadi kernel: Allocating PCI resources starting at 50000000 (gap: 40000000:a0000000) Jun 4 20:53:05 Pleiadi kernel: Built 1 zonelists. Total pages: 259969 Jun 4 20:53:05 Pleiadi kernel: PID hash table entries: 4096 (order: 12, 16384 bytes) Jun 4 20:53:05 Pleiadi kernel: Detected 3200.428 MHz processor. 
Jun 4 20:53:05 Pleiadi kernel: Console: colour VGA+ 80x25 Jun 4 20:53:05 Pleiadi kernel: Dentry cache hash table entries: 131072 (order: 7, 524288 bytes) Jun 4 20:53:05 Pleiadi kernel: Inode-cache hash table entries: 65536 (order: 6, 262144 bytes) Jun 4 20:53:05 Pleiadi kernel: virtual kernel memory layout: Jun 4 20:53:05 Pleiadi kernel: fixmap : 0xfff9d000 - 0xfffff000 ( 392 kB) Jun 4 20:53:05 Pleiadi kernel: pkmap : 0xff800000 - 0xffc00000 (4096 kB) Jun 4 20:53:05 Pleiadi kernel: vmalloc : 0xf8800000 - 0xff7fe000 ( 111 MB) Jun 4 20:53:05 Pleiadi kernel: lowmem : 0xc0000000 - 0xf8000000 ( 896 MB) Jun 4 20:53:05 Pleiadi kernel: .init : 0xc039f000 - 0xc03ce000 ( 188 kB) Jun 4 20:53:05 Pleiadi kernel: .data : 0xc02fd400 - 0xc0398114 ( 619 kB) Jun 4 20:53:05 Pleiadi kernel: .text : 0xc0100000 - 0xc02fd400 (2037 kB) Jun 4 20:53:05 Pleiadi kernel: Checking if this processor honours the WP bit even in supervisor mode... Ok. Jun 4 20:53:05 Pleiadi kernel: Calibrating delay using timer specific routine.. 6403.78 BogoMIPS (lpj=32018905) Jun 4 20:53:05 Pleiadi kernel: Mount-cache hash table entries: 512 Jun 4 20:53:05 Pleiadi kernel: monitor/mwait feature present. Jun 4 20:53:05 Pleiadi kernel: using mwait in idle threads. Jun 4 20:53:05 Pleiadi kernel: CPU0: Intel(R) Xeon(TM) CPU 3.20GHz stepping 0a Jun 4 20:53:05 Pleiadi kernel: Booting processor 1/1 eip 2000 Jun 4 20:53:05 Pleiadi kernel: Calibrating delay using timer specific routine.. 6400.45 BogoMIPS (lpj=32002267) Jun 4 20:53:05 Pleiadi kernel: monitor/mwait feature present. 
Jun 4 20:53:05 Pleiadi kernel: CPU1: Intel(R) Xeon(TM) CPU 3.20GHz stepping 0a Jun 4 20:53:05 Pleiadi kernel: ENABLING IO-APIC IRQs Jun 4 20:53:05 Pleiadi kernel: migration_cost=142 Jun 4 20:53:05 Pleiadi kernel: Setting up standard PCI resources Jun 4 20:53:05 Pleiadi kernel: ACPI: Device [PS2M] status [00000008]: functional but not present; setting present Jun 4 20:53:05 Pleiadi kernel: ACPI: Device [ECP] status [00000008]: functional but not present; setting present Jun 4 20:53:05 Pleiadi kernel: ACPI: Device [COM1] status [00000008]: functional but not present; setting present Jun 4 20:53:05 Pleiadi kernel: PCI quirk: region f000-f07f claimed by ICH4 ACPI/GPIO/TCO Jun 4 20:53:05 Pleiadi kernel: PCI quirk: region f180-f1bf claimed by ICH4 GPIO Jun 4 20:53:05 Pleiadi kernel: PCI: PXH quirk detected, disabling MSI for SHPC device Jun 4 20:53:05 Pleiadi kernel: PCI: PXH quirk detected, disabling MSI for SHPC device Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 9 10 *11 12 14 15) Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 *9 10 11 12 14 15) Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 *5 6 7 9 10 11 12 14 15) Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 7 9 *10 11 12 14 15) Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled. Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled. Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled. 
Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKH] (IRQs 3 *4 5 6 7 9 10 11 12 14 15) Jun 4 20:53:05 Pleiadi kernel: IP route cache hash table entries: 32768 (order: 5, 131072 bytes) Jun 4 20:53:05 Pleiadi kernel: TCP established hash table entries: 131072 (order: 8, 1572864 bytes) Jun 4 20:53:05 Pleiadi kernel: TCP bind hash table entries: 65536 (order: 7, 524288 bytes) Jun 4 20:53:05 Pleiadi kernel: highmem bounce pool size: 64 pages Jun 4 20:53:05 Pleiadi kernel: PNP: PS/2 controller doesn't have AUX irq; using default 12 Jun 4 20:53:05 Pleiadi kernel: nf_conntrack version 0.5.0 (8188 buckets, 65504 max) Jun 4 20:53:05 Pleiadi kernel: ip_tables: (C) 2000-2006 Netfilter Core Team Jun 4 20:53:05 Pleiadi kernel: Using IPI Shortcut mode Jun 4 20:53:05 Pleiadi kernel: VFS: Mounted root (xfs filesystem) readonly. Jun 6 09:47:09 Pleiadi kernel: 0x0: 28 f1 45 d4 22 53 35 11 09 80 37 5a 47 8a 22 ee Jun 6 09:47:09 Pleiadi kernel: Filesystem "sda8": XFS internal error xfs_da_do_buf(2) at line 2086 of file fs/xfs/xfs_da_btree.c. 
Caller 0xc01b2301
Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_do_buf+0x70c/0x7b1
Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_read_buf+0x30/0x35
Jun 6 09:47:09 Pleiadi last message repeated 2 times
Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir2_leaf_lookup_int+0x5c/0x2b6
Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir2_leaf_lookup_int+0x5c/0x2b6
Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir2_leaf_lookup+0x2b/0xe2
Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir2_isleaf+0x1a/0x4f
Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir_lookup+0xc1/0x11d
Jun 6 09:47:09 Pleiadi kernel: [] __block_commit_write+0x7d/0xb0
Jun 6 09:47:09 Pleiadi kernel: [] generic_file_buffered_write+0x2d1/0x682
Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir_lookup_int+0x34/0x10e
Jun 6 09:47:09 Pleiadi kernel: [] xfs_lookup+0x5a/0x90
Jun 6 09:47:09 Pleiadi kernel: [] xfs_vn_lookup+0x52/0x93
Jun 6 09:47:09 Pleiadi kernel: [] real_lookup+0xbb/0x116
Jun 6 09:47:09 Pleiadi kernel: [] do_lookup+0x90/0xc2
Jun 6 09:47:09 Pleiadi kernel: [] xfs_vn_lookup+0x0/0x93
Jun 6 09:47:09 Pleiadi kernel: [] __link_path_walk+0x10c/0xcf1
Jun 6 09:47:09 Pleiadi kernel: [] link_path_walk+0x3e/0xac
Jun 6 09:47:09 Pleiadi kernel: [] get_unused_fd+0x2e/0xb6
Jun 6 09:47:09 Pleiadi kernel: [] do_path_lookup+0x11a/0x1ba
Jun 6 09:47:09 Pleiadi kernel: [] __path_lookup_intent_open+0x50/0x90
Jun 6 09:47:09 Pleiadi kernel: [] path_lookup_open+0x20/0x25
Jun 6 09:47:09 Pleiadi kernel: [] open_namei+0x7a/0x550
Jun 6 09:47:09 Pleiadi kernel: [] do_wp_page+0x20e/0x3ec
Jun 6 09:47:09 Pleiadi kernel: [] do_filp_open+0x2e/0x5b
Jun 6 09:47:09 Pleiadi kernel: [] get_unused_fd+0x2e/0xb6
Jun 6 09:47:09 Pleiadi kernel: [] do_sys_open+0x4e/0xdb
Jun 6 09:47:09 Pleiadi kernel: [] sys_open+0x1c/0x20
Jun 6 09:47:09 Pleiadi kernel: [] syscall_call+0x7/0xb
Jun 6 09:47:09 Pleiadi kernel: =======================
Jun 6 09:47:09 Pleiadi kernel: 0x0: 28 f1 45 d4 22 53 35 11 09 80 37 5a 47 8a 22 ee
Jun 6 09:47:09 Pleiadi kernel: Filesystem "sda8": XFS internal error xfs_da_do_buf(2) at line 2086 of file fs/xfs/xfs_da_btree.c. Caller 0xc01b2301
Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_do_buf+0x70c/0x7b1
Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_read_buf+0x30/0x35
Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_read_buf+0x30/0x35
Jun 6 09:47:09 Pleiadi kernel: [] _xfs_buf_lookup_pages+0x1e8/0x2ea
Jun 6 09:47:09 Pleiadi kernel: [] _xfs_buf_initialize+0xc8/0xf6
Jun 6 09:47:09 Pleiadi kernel: [] xfs_trans_unreserve_and_mod_sb+0x241/0x264
Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_read_buf+0x30/0x35
Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir2_leaf_lookup_int+0x5c/0x2b6
Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir2_leaf_lookup_int+0x5c/0x2b6
Jun 6 09:47:09 Pleiadi kernel: [] __next_cpu+0x12/0x1f
Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir2_leaf_lookup+0x2b/0xe2
Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir2_isleaf+0x1a/0x4f
Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir_lookup+0xc1/0x11d
Jun 6 09:47:09 Pleiadi kernel: [] xfs_dir_lookup_int+0x34/0x10e
Jun 6 09:47:09 Pleiadi kernel: [] _xfs_trans_commit+0x1c7/0x3a2
Jun 6 09:47:09 Pleiadi kernel: [] xfs_lookup+0x5a/0x90
Jun 6 09:47:09 Pleiadi kernel: [] xfs_vn_lookup+0x52/0x93
Jun 6 09:47:09 Pleiadi kernel: [] real_lookup+0xbb/0x116
Jun 6 09:47:09 Pleiadi kernel: [] do_lookup+0x90/0xc2
Jun 6 09:47:09 Pleiadi kernel: [] xfs_vn_lookup+0x0/0x93
Jun 6 09:47:09 Pleiadi kernel: [] __link_path_walk+0x10c/0xcf1
Jun 6 09:47:09 Pleiadi kernel: [] pipe_read+0x23b/0x2bf
Jun 6 09:47:09 Pleiadi kernel: [] link_path_walk+0x3e/0xac
Jun 6 09:47:09 Pleiadi kernel: [] vfs_read+0xee/0x141
Jun 6 09:47:09 Pleiadi kernel: [] sys_read+0x41/0x6a
Jun 6 09:47:09 Pleiadi kernel: [] do_path_lookup+0x11a/0x1ba
Jun 6 09:47:09 Pleiadi kernel: [] do_unlinkat+0x41/0x114
Jun 6 09:47:09 Pleiadi kernel: [] vfs_read+0xee/0x141
Jun 6 09:47:09 Pleiadi kernel: [] sys_read+0x41/0x6a
Jun 6 09:47:09 Pleiadi kernel: [] syscall_call+0x7/0xb
Jun 6 09:47:09 Pleiadi kernel: =======================
..
David Chinner wrote:
> On Mon, Mar 19, 2007 at 11:32:27AM +0100, Marco Berizzi wrote:
> > Marco Berizzi wrote:
> > > David Chinner wrote:
> > >
> > >> Ok, so an ipsec change. And I see from the history below it
> > >> really has nothing to do with this problem. it seems the problem
> > >> has something to do with changes between 2.6.19.1 and 2.6.19.2.
> > >
> > > indeed. Yesterday at 13:00 I have switched from 2.6.19.1 to 2.6.19.2
> > > (without the ipsec fix) and at about 17:30 linux has crashed again.
> > > I have recompiled 2.6.19.2 with all kernel debugging options enabled
> > > and rebooted. Now I'm waiting for the crash...
> >
> > Linux has not been crashed. However here is dmesg output
> > with all debugging option enabled: (search for 'INFO:
> > possible recursive locking detected'). Is that normal?
> .....
> > =============================================
> > [ INFO: possible recursive locking detected ]
> > 2.6.19.2 #1
> > ---------------------------------------------
> > rm/470 is trying to acquire lock:
> > (&(&ip->i_lock)->mr_lock){----}, at: [] xfs_ilock+0x5b/0xa1
> >
> > but task is already holding lock:
> > (&(&ip->i_lock)->mr_lock){----}, at: [] xfs_ilock+0x5b/0xa1
> >
> > other info that might help us debug this:
> > 3 locks held by rm/470:
> > #0: (&inode->i_mutex/1){--..}, at: [] do_unlinkat+0x70/0x115
> > #1: (&inode->i_mutex){--..}, at: [] mutex_lock+0x1c/0x1f
> > #2: (&(&ip->i_lock)->mr_lock){----}, at: [] xfs_ilock+0x5b/0xa1
> >
> > stack backtrace:
> > [] dump_trace+0x215/0x21a
> > [] show_trace_log_lvl+0x1a/0x30
> > [] show_trace+0x12/0x14
> > [] dump_stack+0x19/0x1b
> > [] print_deadlock_bug+0xc0/0xcf
> > [] check_deadlock+0x6a/0x79
> > [] __lock_acquire+0x350/0x970
> > [] lock_acquire+0x75/0x97
> > [] down_write+0x3a/0x54
> > [] xfs_ilock+0x5b/0xa1
> > [] xfs_lock_dir_and_entry+0x105/0x11b
> > [] xfs_remove+0x180/0x47f
> > [] xfs_vn_unlink+0x22/0x4f
> > [] vfs_unlink+0x9e/0xa2
> > [] do_unlinkat+0xa8/0x115
> > [] sys_unlink+0x10/0x12
> > [] syscall_call+0x7/0xb
> > [] 0xb7efaa7d
> > =======================
>
> That's no problem - lockdep just doesn't know that we can nest i_lock
> (we've got to get the annotations for this sorted out).
>
> > Here is the relevant results:
> >
> > Phase 2 - found root inode chunk
> > Phase 3 - ...
> > agno = 0
> > ...
> > agno = 12
> > LEAFN node level is 1 inode 1610612918 bno = 8388608
>
> Hmmm - single bit error in the bno - that reminds of this:
>
> http://oss.sgi.com/projects/xfs/faq.html#dir2
>
> So I'd definitely make sure that is repaired....
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> Principal Engineer
> SGI Australian Software Group

From owner-xfs@oss.sgi.com Thu Jun 7 01:18:45 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 01:18:47 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.7 required=5.0 tests=AWL,BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l578IfWt026810 for ; Thu, 7 Jun 2007 01:18:43 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA28327; Thu, 7 Jun 2007 18:18:35 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l578IXAf108741981; Thu, 7 Jun 2007 18:18:33 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l578IUuT115565036; Thu, 7 Jun 2007 18:18:30 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 7 Jun 2007 18:18:30 +1000 From: David Chinner To: David Chinner , Ruben Porras , xfs@oss.sgi.com, cw@f00f.org Subject: Re: XFS shrink functionality Message-ID:
<20070607081830.GR86004887@sgi.com> References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> <20070604084154.GA8273@teal.hq.k1024.org> <20070604092115.GX85884050@sgi.com> <20070605080012.GA10677@teal.hq.k1024.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070605080012.GA10677@teal.hq.k1024.org> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11673 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 05, 2007 at 10:00:12AM +0200, Iustin Pop wrote: > On Mon, Jun 04, 2007 at 07:21:15PM +1000, David Chinner wrote: > > > allocated on an available AG and when you remove the originals, the > > > to-be-shrinked AGs become free. Yes, utterly non-optimal, but it was the > > > simplest way to do it based on what I knew at the time. > > > > Not quite that simple, unfortunately. You can't leave the > > AGs locked in the same way we do for a grow because we need > > to be able to use the AGs to move stuff about and that > > requires locking them. Hence we need a separate mechanism > > to prevent allocation in a given AG outside of locking them. > > > > Hence we need: > > > > - a transaction to mark AGs "no-allocate" > > - a transaction to mark AGs "allocatable" > > - a flag in each AGF/AGI to say the AG is available for > > allocations (persistent over crashes) > > - a flag in the per-ag structure to indicate allocation > > status of the AG. > > - everywhere we select an AG for allocation, we need to > > check this flag and skip the AG if it's not available. > > > > FWIW, the transactions can probably just be an extension of > > xfs_alloc_log_agf() and xfs_alloc_log_agi().... 
> > A question: do you think that the cost of having this in the code > (especially the last part, check that flag in every allocation function) > is acceptable? I mean, let's say one would write the patch to implement > all this. Does it have a chance to be accepted? Or will people say it's > only bloat? ... Lots of ppl ask for shrink capability on XFS, so if it's implemented and reviewed and passes QA tests, then I see no reason why it wouldn't be accepted... > > Yeah, 1) and 4) are separable parts of the problem and can be done > > in any order. 2) can be implemented relatively easily as stated > > above. > > > > 3) is the hard one - we need to find the owner of each block > > (metadata and data) remaining in the AGs to be removed. This may be > > a directory btree block, a inode extent btree block, a data block, > > and extended attr block, etc. Moving the data blocks is easy to > > do (swap extents), but moving the metadata blocks is a major PITA > > as it will need to be done transactionally and that will require > > a bunch of new (complex) code to be written, I think. It will be > > of equivalent complexity to defragmenting metadata.... > > > > If we ignore the metadata block problem then finding and moving the > > data blocks should not be a problem - swap extents can be used for > > that as well - but it will be extremely time consuming and won't > > scale to large filesystem sizes.... > > So given these caveats, is there a chance that a) this will be actually > useful and b) will this be accepted? Look at it this way - if we get to the point where 3 is a problem, then we've got most of a useful shrinker. That's way ahead of what we have now and in a lot of cases it will just work. The corner cases are the hard bit, but we can work on them incrementally once the rest is done, and in doing so we'll also be introducing the means by which to defragment metadata. IOWs, we kill two birds with one stone at that point in time. 
Likewise for the shrink case that needs to move the log - we've got hooks for userspace tools to move the log, just no implementation. Implementing log moving for shrink will also enable us to do online log resize and internal/external log switching. Once again, two birds with one stone. Hence I don't see these issues as showstoppers at all - getting to the point of a full shrink implementation will give us other features that we need to have anyway.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 7 03:30:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 03:30:15 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.7 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l57AU8Wt028993 for ; Thu, 7 Jun 2007 03:30:09 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 5F34CE6C44; Thu, 7 Jun 2007 11:29:48 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id XBwCbLFtQcNW; Thu, 7 Jun 2007 11:27:36 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 9A42BE6C57; Thu, 7 Jun 2007 11:29:46 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1HwFFl-0001eM-Jk; Thu, 07 Jun 2007 11:30:05 +0100 Message-ID: <4667DE2D.6050903@dgreaves.com> Date: Thu, 07 Jun 2007 11:30:05 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: Tejun Heo Cc: Linus Torvalds , "Rafael J. 
Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown Subject: Re: 2.6.22-rc3 hibernate(?) fails totally - regression (xfs on raid6) References: <46608E3F.4060201@dgreaves.com> <200706012342.45657.rjw@sisk.pl> <46609FAD.7010203@dgreaves.com> <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> In-Reply-To: <46679D56.7040001@gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11674 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs Tejun Heo wrote: > Hello, > > David Greaves wrote: >> Just to be clear. This problem is where my system won't resume after s2d >> unless I umount my xfs over raid6 filesystem. > > This is really weird. I don't see how xfs mount can affect this at all. Indeed. It does :) > How hard does the machine freeze? Can you use sysrq? If so, please > dump sysrq-t. I suspect there is a problem writing to the consoles... I recompiled (rc4+patch) with sysrq support, suspended, resumed and tried sysrq-t but got no output. I *can* change VTs and see the various login prompts, bitmap messages and the console messages. Caps/Num lock lights work. Fearing incompetence I tried sysrq-s sysrq-u sysrq-b and got a reboot so sysrq is OK. Any suggestions on how to see more? Or what to try next? Any other kernel debug options to set? David PS Back in a couple of hours... 
From owner-xfs@oss.sgi.com Thu Jun 7 04:07:32 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 04:07:35 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l57B7TWt004799 for ; Thu, 7 Jun 2007 04:07:30 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id VAA01980; Thu, 7 Jun 2007 21:07:22 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l57B7HAf115363674; Thu, 7 Jun 2007 21:07:18 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l57B7Aqi116423461; Thu, 7 Jun 2007 21:07:10 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 7 Jun 2007 21:07:10 +1000 From: David Chinner To: David Greaves Cc: Tejun Heo , Linus Torvalds , "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown Subject: Re: 2.6.22-rc3 hibernate(?) 
fails totally - regression (xfs on raid6) Message-ID: <20070607110708.GS86004887@sgi.com> References: <200706012342.45657.rjw@sisk.pl> <46609FAD.7010203@dgreaves.com> <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> <4667DE2D.6050903@dgreaves.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4667DE2D.6050903@dgreaves.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11675 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

On Thu, Jun 07, 2007 at 11:30:05AM +0100, David Greaves wrote:
> Tejun Heo wrote:
> >Hello,
> >
> >David Greaves wrote:
> >>Just to be clear. This problem is where my system won't resume after s2d
> >>unless I umount my xfs over raid6 filesystem.
> >
> >This is really weird. I don't see how xfs mount can affect this at all.
> Indeed.
> It does :)

Ok, so let's determine if it really is XFS. Does the lockup happen with
a different filesystem on the md device? Or if you can't test that, does
any other XFS filesystem you have show the same problem?

If it is xfs that is causing the problem, what happens if you remount
read-only instead of unmounting before shutting down? What about
freezing the filesystem?

Cheers,

Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 7 05:34:46 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 05:34:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l57CYiWt031174 for ; Thu, 7 Jun 2007 05:34:46 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1HwGx3-00089a-SQ; Thu, 07 Jun 2007 13:18:53 +0100 Date: Thu, 7 Jun 2007 13:18:53 +0100 From: Christoph Hellwig To: David Chinner Cc: Tim Shimmin , xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: review: xfs_growfs_data_private() not logging agf length change Message-ID: <20070607121853.GA29442@infradead.org> References: <4667ADAF.7000904@sgi.com> <20070607073034.GP86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070607073034.GP86004887@sgi.com> User-Agent: Mutt/1.4.2.2i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11676 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Thu, Jun 07, 2007 at 05:30:34PM +1000, David Chinner wrote: > On Thu, Jun 07, 2007 at 05:03:11PM +1000, Tim Shimmin wrote: > > Looks like we forgot to log the agf_length change here. 
> >
> > (cut 'n' pasted patch)
> >
> > --Tim
> >
> > ===========================================================================
> > Index: fs/xfs/xfs_fsops.c
> > ===========================================================================
> >
> > --- a/fs/xfs/xfs_fsops.c 2007-04-17 18:02:46.000000000 +1000
> > +++ b/fs/xfs/xfs_fsops.c 2007-04-17 17:59:44.467987572 +1000
> > @@ -328,6 +328,7 @@ xfs_growfs_data_private(
> >  be32_add(&agf->agf_length, new);
> >  ASSERT(be32_to_cpu(agf->agf_length) ==
> >  be32_to_cpu(agi->agi_length));
> > + xfs_alloc_log_agf(tp, bp, XFS_AGF_LENGTH);
> >  /*
> >  * Free the new space.
> >  */
>
> Yup, looks ok to me.

Except for the whitespace damage, of course - but that might have been
the cut & pasting.

From owner-xfs@oss.sgi.com Thu Jun 7 05:49:25 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 05:49:27 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l57CnKWt003194 for ; Thu, 7 Jun 2007 05:49:23 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id WAA04257; Thu, 7 Jun 2007 22:49:16 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l57CnFAf116466952; Thu, 7 Jun 2007 22:49:16 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l57CnEPg116437643; Thu, 7 Jun 2007 22:49:14 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 7 Jun 2007 22:49:14 +1000 From: David Chinner To: Christoph Hellwig Cc: David Chinner , Tim Shimmin ,
xfs-dev@sgi.com, xfs@oss.sgi.com Subject: Re: review: xfs_growfs_data_private() not logging agf length change Message-ID: <20070607124914.GD85884050@sgi.com> References: <4667ADAF.7000904@sgi.com> <20070607073034.GP86004887@sgi.com> <20070607121853.GA29442@infradead.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070607121853.GA29442@infradead.org> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11677 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Jun 07, 2007 at 01:18:53PM +0100, Christoph Hellwig wrote: > On Thu, Jun 07, 2007 at 05:30:34PM +1000, David Chinner wrote: > > On Thu, Jun 07, 2007 at 05:03:11PM +1000, Tim Shimmin wrote: > > > Looks like we forgot to log the agf_length change here. > > > > > > (cut 'n' pasted patch) > > > > > > --Tim ..... > > > > Yup, looks ok to me. > > Except for the whitespace damage, of course - but that might have been > the cut & pasting. The attached patch had no ws damage. ;) Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 7 06:05:14 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 06:05:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l57D5CWt009284 for ; Thu, 7 Jun 2007 06:05:13 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id XAA04636; Thu, 7 Jun 2007 23:05:10 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l57D58Af115711774; Thu, 7 Jun 2007 23:05:09 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l57D55AW116168152; Thu, 7 Jun 2007 23:05:05 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 7 Jun 2007 23:05:05 +1000 From: David Chinner To: Marco Berizzi Cc: David Chinner , linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: XFS internal error xfs_da_do_buf(2) at line 2087 of file fs/xfs/xfs_da_btree.c. 
Caller 0xc01b00bd Message-ID: <20070607130505.GE85884050@sgi.com> References: <20070316012520.GN5743@melbourne.sgi.com> <20070316195951.GB5743@melbourne.sgi.com> <20070320064632.GO32602149@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11678 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

On Thu, Jun 07, 2007 at 09:44:51AM +0200, Marco Berizzi wrote:
> Hi David.
> Three months ago I wrote the message below.
> I had built various 2.6.20.x and 2.6.21.x
> vanilla kernel with all the debug options
> enabled and linux had never crashed.
> On june 4, I have builded linux 2.6.21.3 without
> any debugging options and after 2 days linux
> has starting print these errors:
>
> Jun 6 09:47:09 Pleiadi kernel: =======================
> Jun 6 09:47:09 Pleiadi kernel: 0x0: 28 f1 45 d4 22 53 35 11 09 80 37 5a
> 47 8a 22 ee
> Jun 6 09:47:09 Pleiadi kernel: Filesystem "sda8": XFS internal error
> xfs_da_do_buf(2) at line 2086 of file fs/xfs/xfs_da_btree.c. Caller
> 0xc01b2301
> Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_do_buf+0x70c/0x7b1
> Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_read_buf+0x30/0x35
> Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_read_buf+0x30/0x35

The stack trace above is the sign of a corrupted directory.

Chopping out the rest of the top posting (please don't do that) we get
down to 3 months ago:

> > On Mon, Mar 19, 2007 at 11:32:27AM +0100, Marco Berizzi wrote:
> > > Marco Berizzi wrote:
> > > Here is the relevant results:
> > >
> > > Phase 2 - found root inode chunk
> > > Phase 3 - ...
> > > agno = 0
> > > ...
> > > agno = 12
> > > LEAFN node level is 1 inode 1610612918 bno = 8388608
> >
> > Hmmm - single bit error in the bno - that reminds of this:
> >
> > http://oss.sgi.com/projects/xfs/faq.html#dir2
> >
> > So I'd definitely make sure that is repaired....

Where we saw signs of on-disk directory corruption. Have you run
xfs_repair successfully on the filesystem since you reported this? If
you did clean up the error, does xfs_repair report the same sort of
error again? Have you run a 2.6.16-rcX or 2.6.17.[0-6] kernel since you
last reported this problem?

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Thu Jun 7 07:00:03 2007
From: David Greaves <david@dgreaves.com>
Date: Thu, 07 Jun 2007 14:59:58 +0100
To: David Chinner
Cc: Tejun Heo, Linus Torvalds, "Rafael J. Wysocki", xfs@oss.sgi.com,
 linux-kernel@vger.kernel.org, linux-pm, Neil Brown
Subject: Re: 2.6.22-rc3 hibernate(?) fails totally - regression (xfs on raid6)
Message-ID: <46680F5E.6070806@dgreaves.com>

David Chinner wrote:
> On Thu, Jun 07, 2007 at 11:30:05AM +0100, David Greaves wrote:
>> Tejun Heo wrote:
>>> Hello,
>>>
>>> David Greaves wrote:
>>>> Just to be clear. This problem is where my system won't resume after s2d
>>>> unless I umount my xfs over raid6 filesystem.
>>> This is really weird. I don't see how xfs mount can affect this at all.
>> Indeed.
>> It does :)
>
> Ok, so lets determine if it really is XFS.

Seems like a good next step...

> Does the lockup happen with a
> different filesystem on the md device? Or if you can't test that, does
> any other XFS filesystem you have show the same problem?

It's a rather full 1.2Tb raid6 array - can't reformat it - sorry :)

I only noticed the problem when I umounted the fs during tests to
prevent corruption - and it worked. I'm doing a sync each time it
hibernates (see below) and a couple of paranoia xfs_repairs haven't
shown any problems.

I do have another xfs filesystem on /dev/hdb2 (mentioned when I noticed
the md/XFS correlation). It doesn't seem to have/cause any problems.

> If it is xfs that is causing the problem, what happens if you
> remount read-only instead of unmounting before shutting down?

Yes, I'm happy to try these tests.

nb, the hibernate script is:
  ethtool -s eth0 wol g
  sync
  echo platform > /sys/power/disk
  echo disk > /sys/power/state

So there has always been a sync before any hibernate.

cu:~# mount -oremount,ro /huge
cu:~# mount
/dev/hda2 on / type xfs (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
usbfs on /proc/bus/usb type usbfs (rw)
tmpfs on /dev/shm type tmpfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
nfsd on /proc/fs/nfsd type nfsd (rw)
/dev/hda1 on /boot type ext3 (rw)
/dev/md0 on /huge type xfs (ro)
/dev/hdb2 on /scratch type xfs (rw)
tmpfs on /dev type tmpfs (rw,size=10M,mode=0755)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
cu:(pid2862,port1022) on /net type nfs (intr,rw,port=1022,toplvl,map=/usr/share/am-utils/amd.net,noac)
elm:/space on /amd/elm/root/space type nfs (rw,vers=3,proto=tcp)
elm:/space-backup on /amd/elm/root/space-backup type nfs (rw,vers=3,proto=tcp)
elm:/usr/src on /amd/elm/root/usr/src type nfs (rw,vers=3,proto=tcp)

cu:~# /usr/net/bin/hibernate
[this works and resumes]

cu:~# mount -oremount,rw /huge
cu:~# /usr/net/bin/hibernate
[this works and resumes too!]

cu:~# touch /huge/tst
cu:~# /usr/net/bin/hibernate
[but this doesn't even hibernate]

> What about freezing the filesystem?

cu:~# xfs_freeze -f /huge
cu:~# /usr/net/bin/hibernate
[but this doesn't even hibernate - same as the 'touch']

Nb the screen looks like this:
http://www.dgreaves.com/pub/2.6.21-rc4-ptched-suspend-failure.jpg
whether it hangs on suspend or resume.

So I wouldn't say it *is* XFS at fault - but there certainly seems to
be an interaction... At least it's easily reproducible :) Shame about
the sysrq.

I can think of other permutations of freeze/ro/writing tests but I'm
just thrashing really. Happy for you to tell me what to try next ...
David

From owner-xfs@oss.sgi.com Thu Jun 7 07:00:15 2007
From: David Greaves <david@dgreaves.com>
Date: Thu, 07 Jun 2007 15:00:09 +0100
To: Duane Griffin
Cc: Tejun Heo, Linus Torvalds, "Rafael J. Wysocki", xfs@oss.sgi.com,
 linux-kernel@vger.kernel.org, linux-pm, Neil Brown
Subject: Re: 2.6.22-rc3 hibernate(?) fails totally - regression (xfs on raid6)
Message-ID: <46680F69.60105@dgreaves.com>

Duane Griffin wrote:
> On 07/06/07, David Greaves wrote:
>> > How hard does the machine freeze? Can you use sysrq? If so, please
>> > dump sysrq-t.
>> I suspect there is a problem writing to the consoles...
>>
>> I recompiled (rc4+patch) with sysrq support, suspended, resumed and tried
>> sysrq-t but got no output.
>>
>> I *can* change VTs and see the various login prompts, bitmap messages
>> and the console messages. Caps/Num lock lights work.
>>
>> Fearing incompetence I tried sysrq-s sysrq-u sysrq-b and got a reboot
>> so sysrq is OK.
>
> Try sysrq-9 before the sysrq-t. Probably the messages are not being
> printed to console with your default output level.
Good idea :)
Didn't work :(

Cheers

David

From owner-xfs@oss.sgi.com Thu Jun 7 07:11:09 2007
From: "Duane Griffin" <duaneg@dghda.com>
Date: Thu, 7 Jun 2007 14:45:51 +0100
To: "David Greaves"
Cc: "Tejun Heo", "Linus Torvalds", "Rafael J. Wysocki", xfs@oss.sgi.com,
 linux-kernel@vger.kernel.org, linux-pm, "Neil Brown"
Subject: Re: 2.6.22-rc3 hibernate(?) fails totally - regression (xfs on raid6)

On 07/06/07, David Greaves wrote:
> > How hard does the machine freeze? Can you use sysrq? If so, please
> > dump sysrq-t.
> I suspect there is a problem writing to the consoles...
>
> I recompiled (rc4+patch) with sysrq support, suspended, resumed and tried
> sysrq-t but got no output.
>
> I *can* change VTs and see the various login prompts, bitmap messages and the
> console messages. Caps/Num lock lights work.
>
> Fearing incompetence I tried sysrq-s sysrq-u sysrq-b and got a reboot so sysrq
> is OK.

Try sysrq-9 before the sysrq-t. Probably the messages are not being
printed to console with your default output level.

Cheers,
Duane Griffin.
-- "I never could learn to drink that blood and call it wine" - Bob Dylan From owner-xfs@oss.sgi.com Thu Jun 7 08:05:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 08:05:35 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.rtr.ca (rtr.ca [64.26.128.89]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l57F5UWt012159 for ; Thu, 7 Jun 2007 08:05:31 -0700 Received: by mail.rtr.ca (Postfix, from userid 1002) id AE1FB25C0C6; Thu, 7 Jun 2007 10:36:31 -0400 (EDT) Received: from [10.0.0.6] (corey.localnet [10.0.0.6]) by mail.rtr.ca (Postfix) with ESMTP id 856AA25C0B9; Thu, 7 Jun 2007 10:36:31 -0400 (EDT) Message-ID: <466817EF.7090707@rtr.ca> Date: Thu, 07 Jun 2007 10:36:31 -0400 From: Mark Lord User-Agent: Thunderbird 2.0.0.0 (X11/20070326) MIME-Version: 1.0 To: Tejun Heo Cc: David Greaves , Duane Griffin , Linus Torvalds , "Rafael J. Wysocki" , xfs@oss.sgi.com, "linux-kernel@vger.kernel.org" , linux-pm , Neil Brown Subject: Re: 2.6.22-rc3 hibernate(?) 
fails totally - regression (xfs on raid6) References: <46608E3F.4060201@dgreaves.com> <46609FAD.7010203@dgreaves.com> <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> <4667DE2D.6050903@dgreaves.com> <46680F69.60105@dgreaves.com> <46681094.4070103@gmail.com> In-Reply-To: <46681094.4070103@gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11682 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lkml@rtr.ca Precedence: bulk X-list: xfs Tejun Heo wrote: > > Can you setup serial console and/or netconsole (not sure whether this > would work tho)? Since he has good console output already, capturable by digicam, I think a better approach might be to provide a patch with extra instrumentation.. You know.. progress messages and the like, so we can see at what step things stop working. Or would that not help ? David, does scrollback work on your dead console? 
Cheers

From owner-xfs@oss.sgi.com Thu Jun 7 08:20:08 2007
From: David Greaves <david@dgreaves.com>
Date: Thu, 07 Jun 2007 16:20:01 +0100
To: Mark Lord
Cc: Tejun Heo, Duane Griffin, Linus Torvalds, "Rafael J. Wysocki",
 xfs@oss.sgi.com, linux-kernel@vger.kernel.org, linux-pm, Neil Brown
Subject: Re: 2.6.22-rc3 hibernate(?) fails totally - regression (xfs on raid6)
Message-ID: <46682221.8070705@dgreaves.com>

Mark Lord wrote:
> Tejun Heo wrote:
>>
>> Can you setup serial console and/or netconsole (not sure whether this
>> would work tho)?
>
> Since he has good console output already, capturable by digicam,
> I think a better approach might be to provide a patch with extra
> instrumentation.. You know.. progress messages and the like, so we can
> see at what step things stop working. Or would that not help?
>
> David, does scrollback work on your dead console?

hmmmm, scrollback doesn't currently _do_ anything.

But the messages didn't scroll there, they just appear (as the memory
is restored, I assume). The same messages appear during the
fail-to-suspend case too.

Linus said at one point:
> Ok, it wasn't a hidden oops. The DISABLE_CONSOLE_SUSPEND=y thing sometimes
> shows oopses that are otherwise hidden, but at other times it just causes
> more problems (hard hangs when trying to display something on a device
> that is suspended, or behind a bridge that got suspended).
> In your case, the screen output just shows normal resume output, and it
> apparently just hung for some unknown reason.
> It *may* be worth trying to
> do a SysRQ + 't' thing to see what tasks are running (or rather, not
> running), but since you won't be able to capture it, it's probably not
> going to be useful.

So I've since removed DISABLE_CONSOLE_SUSPEND=y
Should I put it back?

I was actually doing the netconsole anyway - but skge is currently a
module - I've avoided making any changes to the config during all these
tests but what the heck...

And wouldn't you know it. Get netconsole working (ie new kernel with
skge builtin) and I get the hang on suspend. Here's the netconsole
output...

swsusp: Basic memory bitmaps created
Stopping tasks ... done.
Shrinking memory... done (0 pages freed)
Freed 0 kbytes in 0.03 seconds (0.00 MB/s)
Suspending console(s)

Given that moving something from module to builtin changes the
behaviour, I thought I'd bring these warnings up again (Andrew or Alan
mentioned similar warnings being problems in another thread...)

Now, I have mentioned these before but there's been a lot going on, so
here you go:

MODPOST vmlinux
WARNING: arch/i386/kernel/built-in.o(.text+0x968f): Section mismatch: reference to .init.text: (between 'mtrr_bp_init' and 'mtrr_ap_init')
WARNING: arch/i386/kernel/built-in.o(.text+0x9781): Section mismatch: reference to .init.text: (between 'mtrr_bp_init' and 'mtrr_ap_init')
WARNING: arch/i386/kernel/built-in.o(.text+0x9786): Section mismatch: reference to .init.text: (between 'mtrr_bp_init' and 'mtrr_ap_init')
WARNING: arch/i386/kernel/built-in.o(.text+0xa25c): Section mismatch: reference to .init.text: (between 'get_mtrr_state' and 'mtrr_wrmsr')
WARNING: arch/i386/kernel/built-in.o(.text+0xa303): Section mismatch: reference to .init.text: (between 'get_mtrr_state' and 'mtrr_wrmsr')
WARNING: arch/i386/kernel/built-in.o(.text+0xa31b): Section mismatch: reference to .init.text: (between 'get_mtrr_state' and 'mtrr_wrmsr')
WARNING: arch/i386/kernel/built-in.o(.text+0xa344): Section mismatch: reference to .init.text: (between 'get_mtrr_state' and 'mtrr_wrmsr')
WARNING: arch/i386/kernel/built-in.o(.exit.text+0x19): Section mismatch: reference to .init.text: (between 'cache_remove_dev' and 'powernow_k6_exit')
WARNING: arch/i386/kernel/built-in.o(.data+0x2160): Section mismatch: reference to .init.text: (between 'thermal_throttle_cpu_notifier' and 'mce_work')
WARNING: kernel/built-in.o(.text+0x14502): Section mismatch: reference to .init.text: (between 'kthreadd' and 'init_waitqueue_head')

David

PS Gotta go - back in a couple of hours - let me know if there are any
more tests to try.

From owner-xfs@oss.sgi.com Thu Jun 7 11:22:58 2007
From: Jaap Struyk <japie@deserver.nl>
Date: Thu, 07 Jun 2007 20:12:43 +0200
To: xfs@oss.sgi.com
Subject: Re: ways to restore data from crashed disk
Message-id: <46684A9B.90908@deserver.nl>

Eric Sandeen schreef:
> For starters, is the file really that big? (9397895168 bytes?)

dd_rhelp /dev/hdd4 hdd4.img
xfs_repair hdd4.img
Phase 1 - find and verify superblock...
superblock read failed, offset 103376846848, size 2048, ag 11, rval 0
fatal error -- Gelukt

("Gelukt" is Dutch for "succeeded" or "done".)

The image is 92G. Big, but my original filesystem was about 120G - the
92G is about the size of the data that was on the disc.

dd_rhelp /dev/hdd4 hdd4.img info:
- Jump pos : 96365061.0 - max file size : 156019623.0
- Biggest hole size : 119309126 k - total holes : 0k
- xferd(succ/err) : 36710498.0k(36710496.0k/2.0k)
- EOF is not found, but between 36710498.0k and 156019623.0k.
ALL your data has been dd_rescued !!
--
Groetjes Japie

From owner-xfs@oss.sgi.com Thu Jun 7 11:56:37 2007
From: Jaap Struyk <japie@deserver.nl>
Date: Thu, 07 Jun 2007 20:56:25 +0200
To: xfs@oss.sgi.com
Subject: Re: ways to restore data from crashed disk
Message-id: <466854D9.3070903@deserver.nl>

xfs_db hdd4.img
xfs_db> sb 0
xfs_db> p
magicnum = 0x58465342
blocksize = 4096
dblocks = 36710528
rblocks = 0
rextents = 0
uuid = 9b1a2a9c-9572-426f-b234-b6ba89bae713
logstart = 33554436
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 16
agblocks = 2294408
agcount = 16
rbmblocks = 0
logblocks = 17925
versionnum = 0x3084
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 22
rextslog = 0
inprogress = 0
imax_pct = 25
icount = 76864
ifree = 20402
fdblocks = 8890535
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 0
features2 = 0

--
Groetjes Japie

From owner-xfs@oss.sgi.com Thu Jun 7 12:32:50 2007
From: Eric Sandeen <sandeen@sandeen.net>
Date: Thu, 07 Jun 2007 14:28:31 -0500
To: Jaap Struyk
Cc: xfs@oss.sgi.com
Message-ID: <46685C5F.5090804@sandeen.net>
Subject: Re: ways to restore
 data from crashed disk

Jaap Struyk wrote:
> Eric Sandeen schreef:
>
>> For starters, is the file really that big? (9397895168 bytes?)
>
> dd_rhelp /dev/hdd4 hdd4.img
> xfs_repair hdd4.img
> Phase 1 - find and verify superblock...
> superblock read failed, offset 103376846848, size 2048, ag 11, rval 0
> fatal error -- Gelukt (gelukt is dutch for succeeded or done)
>
> The image is 92G. big but my original filesystem was about 120G. but the
> 92G is about the size of the data that was on the disc.

When you are talking about sizes, do you mean space used (du) or max
offset (ls -l)? The max offset should be the same for your image file
as for your original device... 120G.

-Eric

> dd_rhelp /dev/hdd4 hdd4.img info:
> - Jump pos : 96365061.0 - max file size : 156019623.0
> - Biggest hole size : 119309126 k - total holes : 0k
> - xferd(succ/err) : 36710498.0k(36710496.0k/2.0k)
> - EOF is not found, but between 36710498.0k and 156019623.0k.
> ALL your data has been dd_rescued !!
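[Editorial note] Two sizes are in play in this exchange, and both are easy to check. The xfs_db superblock dump earlier in the thread gives the real filesystem size as dblocks × blocksize, and Eric's du-versus-ls distinction comes down to sparse files: a rescue image can have a large apparent size (ls -l) while occupying far fewer allocated blocks (du). A minimal sketch, using only the numbers quoted from the superblock plus a temporary file (GNU coreutils `truncate`/`stat` assumed):

```shell
# Filesystem size implied by the superblock dump:
# dblocks (filesystem blocks) * blocksize (bytes per block).
echo $(( 36710528 * 4096 ))     # 150366322688 bytes, roughly 140 GiB

# Apparent size vs allocated size of a sparse file:
img=$(mktemp)
truncate -s 1G "$img"           # 1 GiB apparent size, no blocks allocated
stat -c 'apparent=%s bytes, allocated=%b blocks' "$img"
du -h "$img"                    # near-zero actual disk usage
rm -f "$img"
```

If dd_rhelp wrote the image sparsely, `ls -l hdd4.img` should report the full device size even though `du` only counts the rescued data, which would reconcile the "92G image" with the larger original device.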
From owner-xfs@oss.sgi.com Thu Jun 7 15:28:38 2007
From: David Chinner <dgc@sgi.com>
Date: Fri, 8 Jun 2007 08:28:13 +1000
To: David Greaves
Cc: Tejun Heo, Linus Torvalds, "Rafael J. Wysocki", xfs@oss.sgi.com,
 linux-kernel@vger.kernel.org, linux-pm, Neil Brown
Subject: Re: 2.6.22-rc3 hibernate(?) fails totally - regression (xfs on raid6)
Message-ID: <20070607222813.GG85884050@sgi.com>

On Thu, Jun 07, 2007 at 02:59:58PM +0100, David Greaves wrote:
> David Chinner wrote:
> >On Thu, Jun 07, 2007 at 11:30:05AM +0100, David Greaves wrote:
> >>Tejun Heo wrote:
> >>>Hello,
> >>>
> >>>David Greaves wrote:
> >>>>Just to be clear. This problem is where my system won't resume after s2d
> >>>>unless I umount my xfs over raid6 filesystem.
> >>>This is really weird. I don't see how xfs mount can affect this at all.
> >>Indeed.
> >>It does :)
> >
> >Ok, so lets determine if it really is XFS.
> Seems like a good next step...
>
> >Does the lockup happen with a
> >different filesystem on the md device? Or if you can't test that, does
> >any other XFS filesystem you have show the same problem?
> It's a rather full 1.2Tb raid6 array - can't reformat it - sorry :)

I suspected as much :/

> I only noticed the problem when I umounted the fs during tests to prevent
> corruption - and it worked. I'm doing a sync each time it hibernates (see
> below) and a couple of paranoia xfs_repairs haven't shown any problems.
sync just guarantees that metadata changes are logged and data is on
disk - it doesn't stop the filesystem from doing anything after the
sync...

> I do have another xfs filesystem on /dev/hdb2 (mentioned when I noticed the
> md/XFS correlation). It doesn't seem to have/cause any problems.

Ok, so it's not an obvious XFS problem...

> >If it is xfs that is causing the problem, what happens if you
> >remount read-only instead of unmounting before shutting down?
> Yes, I'm happy to try these tests.
> nb, the hibernate script is:
>   ethtool -s eth0 wol g
>   sync
>   echo platform > /sys/power/disk
>   echo disk > /sys/power/state
>
> So there has always been a sync before any hibernate.
>
> cu:~# mount -oremount,ro /huge
.....
> [this works and resumes]

Ok.

> cu:~# mount -oremount,rw /huge
> cu:~# /usr/net/bin/hibernate
> [this works and resumes too !]

Interesting. That means something in the generic remount code is
affecting this.

> cu:~# touch /huge/tst
> cu:~# /usr/net/bin/hibernate
> [but this doesn't even hibernate]

Ok, so a clean inode is sufficient to prevent hibernate from working.

So, what's different between a sync and a remount? do_remount_sb() does:

599                shrink_dcache_sb(sb);
600                fsync_super(sb);

of which a sync does neither. sync does what fsync_super() does in a
different sort of way, but does not call sync_blockdev() on each block
device. It looks like those are the two main differences between sync
and remount - remount trims the dentry cache and syncs the blockdev,
sync doesn't.

> > What about freezing the filesystem?
> cu:~# xfs_freeze -f /huge
> cu:~# /usr/net/bin/hibernate
> [but this doesn't even hibernate - same as the 'touch']

I suspect that the frozen filesystem might cause other problems in the
hibernate process. However, while a freeze calls sync_blockdev() it
does not trim the dentry cache.....
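[Editorial note] The dentry-cache half of this difference can be observed from userspace without remounting anything: `/proc/sys/fs/dentry-state` exposes the dcache counters on standard Linux procfs, and running `sync` leaves them essentially unchanged, which is consistent with the claim above that sync flushes data and metadata but never calls shrink_dcache_sb(). A minimal, non-destructive sketch (not part of the original mail):

```shell
# The first two fields of dentry-state are total and unused dentry counts.
before=$(awk '{ print $1 }' /proc/sys/fs/dentry-state)

# sync pushes dirty data and metadata to disk...
sync

# ...but does not shrink the dentry cache, so the count stays put
# (modulo unrelated activity on the system).
after=$(awk '{ print $1 }' /proc/sys/fs/dentry-state)
echo "dentries before sync: $before, after sync: $after"
```

By contrast, writing to `/proc/sys/vm/drop_caches` (root only, kernels 2.6.16 and later) explicitly reclaims caches: 1 drops the page cache, 2 drops reclaimable dentries and inodes, 3 drops both - which is exactly what the test sequence that follows exercises.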
So, rather than a remount before hibernate, lets see if we can remove
the dentries some other way to determine if removing excess
dentries/inodes from the caches makes a difference. Can you do:

# touch /huge/foo
# sync
# echo 1 > /proc/sys/vm/drop_caches
# hibernate

# touch /huge/bar
# sync
# echo 2 > /proc/sys/vm/drop_caches
# hibernate

# touch /huge/baz
# sync
# echo 3 > /proc/sys/vm/drop_caches
# hibernate

And see if any of those survive the suspend/resume?

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Thu Jun 7 16:24:31 2007
From: David Chinner <dgc@sgi.com>
Date: Fri, 8 Jun 2007 09:24:18 +1000
To: Christoph Hellwig
Cc: xfs@oss.sgi.com
Subject: Re: [PATCH] kill macro noise in xfs_dir2*.h
Message-ID: <20070607232418.GJ85884050@sgi.com>

On Mon, Jun 04, 2007 at 04:36:02PM +0200, Christoph Hellwig wrote:
> On Wed, Apr 18, 2007 at 07:59:00PM +0200, Christoph Hellwig wrote:
> > Remove all the macros that just give inline functions uppercase names.
> >
> > Signed-off-by: Christoph Hellwig
>
> This patch still hasn't made it to mainline, so here's a version
> rediffed for latest mainline because it's required for the next patch

It's in my test tree - I just haven't had a chance to review it
properly and check it in yet because of all the other stuff I've got to
do right now. Cleanups are not a high priority compared to finding and
fixing the numerous data corruption problems that have been uncovered
recently...

At least it's being QA'd regularly. ;)

Cheers,

Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 7 16:27:27 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 16:27:29 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l57NROWt027299 for ; Thu, 7 Jun 2007 16:27:25 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA22989; Fri, 8 Jun 2007 09:27:20 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l57NRIAf117166048; Fri, 8 Jun 2007 09:27:19 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l57NRGoG117141858; Fri, 8 Jun 2007 09:27:16 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 8 Jun 2007 09:27:16 +1000 From: David Chinner To: Christoph Hellwig Cc: xfs@oss.sgi.com Subject: Re: [PATCH] use filldir internally Message-ID: <20070607232716.GK85884050@sgi.com> References: <20070604143958.GB9081@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604143958.GB9081@lst.de> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11689 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Mon, Jun 04, 2007 at 04:39:58PM +0200, Christoph Hellwig wrote: > Currently xfs has a rather 
complicated internal scheme to allow for > different directory formats in IRIX. This patch rips all code related > to this out and pushes useage of the Linux filldir callback into the > lowlevel directory code. This does not make the code any less portable > because filldir can be used to create dirents of all possible variations > (including the IRIX ones as proved by the IRIX binary emulation code > under arch/mips/). > > This patch get rid of an unessecary copy in the readdir path, about > 250 lines of code and one of the last two users of the uio structure. Looks like a nice cleanup at a quick glance, but I need to spend more time looking at it which I don't have right now. I'll add it to my QA in the meantime.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 7 17:21:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 17:21:07 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l580L2Wt006351 for ; Thu, 7 Jun 2007 17:21:04 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA24352; Fri, 8 Jun 2007 10:20:53 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l580KpAf117146869; Fri, 8 Jun 2007 10:20:52 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l580KnVi115373524; Fri, 8 Jun 2007 10:20:49 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 8 Jun 2007 10:20:49 +1000 From: David 
Chinner To: David Chinner Cc: Christoph Hellwig , xfs@oss.sgi.com Subject: Re: [PATCH] use filldir internally Message-ID: <20070608002049.GM85884050@sgi.com> References: <20070604143958.GB9081@lst.de> <20070607232716.GK85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <20070607232716.GK85884050@sgi.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11690 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Jun 08, 2007 at 09:27:16AM +1000, David Chinner wrote: > On Mon, Jun 04, 2007 at 04:39:58PM +0200, Christoph Hellwig wrote: > > Currently xfs has a rather complicated internal scheme to allow for > > different directory formats in IRIX. This patch rips all code related > > to this out and pushes useage of the Linux filldir callback into the > > lowlevel directory code. This does not make the code any less portable > > because filldir can be used to create dirents of all possible variations > > (including the IRIX ones as proved by the IRIX binary emulation code > > under arch/mips/). > > > > This patch get rid of an unessecary copy in the readdir path, about > > 250 lines of code and one of the last two users of the uio structure. > > Looks like a nice cleanup at a quick glance, but I need to spend more time > looking at it which I don't have right now. I'll add it to my QA in the > meantime.... FYI: CC fs/xfs/linux-2.6/xfs_file.o fs/xfs/linux-2.6/xfs_file.c: In function "xfs_file_readdir": fs/xfs/linux-2.6/xfs_file.c:289: warning: passing argument 4 of '(vp->v_bh.bh_first->bd_ops)->vop_readdir' from incompatible pointer type loff_t * for f_pos, vs xfs_off_t * which is what the interface is defined to take. 
typedef __kernel_loff_t loff_t; typedef long long __kernel_loff_t; typedef __s64 xfs_off_t; So they are both signed 64 bit types on all platforms so the compiler warning is spurious. Just add a cast to bhv_vop_readdir()? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 7 17:39:58 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 17:40:01 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l580dtWt011168 for ; Thu, 7 Jun 2007 17:39:57 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA24781; Fri, 8 Jun 2007 10:39:45 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l580dhAf116632057; Fri, 8 Jun 2007 10:39:44 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l580dfW5117130920; Fri, 8 Jun 2007 10:39:41 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 8 Jun 2007 10:39:41 +1000 From: David Chinner To: David Chinner Cc: Christoph Hellwig , xfs@oss.sgi.com Subject: Re: [PATCH] use filldir internally Message-ID: <20070608003941.GN85884050@sgi.com> References: <20070604143958.GB9081@lst.de> <20070607232716.GK85884050@sgi.com> <20070608002049.GM85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070608002049.GM85884050@sgi.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 
on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11691 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Jun 08, 2007 at 10:20:49AM +1000, David Chinner wrote: > On Fri, Jun 08, 2007 at 09:27:16AM +1000, David Chinner wrote: > > On Mon, Jun 04, 2007 at 04:39:58PM +0200, Christoph Hellwig wrote: > > > Currently xfs has a rather complicated internal scheme to allow for > > > different directory formats in IRIX. This patch rips all code related > > > to this out and pushes useage of the Linux filldir callback into the > > > lowlevel directory code. This does not make the code any less portable > > > because filldir can be used to create dirents of all possible variations > > > (including the IRIX ones as proved by the IRIX binary emulation code > > > under arch/mips/). > > > > > > This patch get rid of an unessecary copy in the readdir path, about > > > 250 lines of code and one of the last two users of the uio structure. > > > > Looks like a nice cleanup at a quick glance, but I need to spend more time > > looking at it which I don't have right now. I'll add it to my QA in the > > meantime.... Hmmmm, this breaks dmapi which calls xfs_get_dirents() directly and that function was removed by this patch. I'm going to have to drop this patch for the moment because I don't have time to fix it up.... Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 7 21:03:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 21:03:50 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.2 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from psmtp09.wxs.nl (psmtp09.wxs.nl [195.121.247.23]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5843kWt027652 for ; Thu, 7 Jun 2007 21:03:47 -0700 Received: from mail.deserver.nl (ip565e92ac.direct-adsl.nl [86.94.146.172]) by psmtp09.wxs.nl (iPlanet Messaging Server 5.2 HotFix 2.15 (built Nov 14 2006)) with ESMTP id <0JJA00CCSTY9M4@psmtp09.wxs.nl> for xfs@oss.sgi.com; Fri, 08 Jun 2007 06:03:46 +0200 (MEST) Received: from localhost (localhost [127.0.0.1]) by mail.deserver.nl (Postfix) with ESMTP id 662A8236CD for ; Fri, 08 Jun 2007 06:03:45 +0200 (CEST) Received: from [192.168.0.14] (unknown [192.168.0.14]) by mail.deserver.nl (Postfix) with ESMTP id 89C31236C4 for ; Fri, 08 Jun 2007 06:03:43 +0200 (CEST) Date: Fri, 08 Jun 2007 06:03:43 +0200 From: Jaap Struyk Subject: Re: ways to restore data from crashed disk In-reply-to: <46685C5F.5090804@sandeen.net> To: xfs@oss.sgi.com Message-id: <4668D51F.8010804@deserver.nl> MIME-version: 1.0 Content-type: text/plain; charset=ISO-8859-1 Content-transfer-encoding: 7BIT User-Agent: Thunderbird 2.0.0.0 (X11/20070326) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: by mailscan at deserver.nl X-Enigmail-Version: 0.95.0 References: <465EA882.3030403@deserver.nl> <465ECF9B.2000500@sandeen.net> <46684A9B.90908@deserver.nl> <46685C5F.5090804@sandeen.net> X-Virus-Status: Clean X-archive-position: 11692 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: japie@deserver.nl Precedence: bulk 
X-list: xfs Eric Sandeen wrote: > when you are talking about sizes, do you mean space used (du) or max > offset (ls -l?) max offset should be the same for your image file as > for your original device... 120G. ls -l But I don't know what to trust anymore; if I look with gparted at my partitions, the old disk gives me a partition of 140G with 106G used space. My new disk has a partition of 200G with 166G used space. If I create a new xfs partition it has about 10% used space (according to gparted, I suspect that's the size of the logfiles?) so from the 166G on the new disk 146G is the "real" used space, so that should be the size of the image file (no matter what ls -l tells me). Is this correct? -- Groetjes Japie From owner-xfs@oss.sgi.com Thu Jun 7 22:28:25 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 22:28:27 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.4 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l585SLWt013610 for ; Thu, 7 Jun 2007 22:28:22 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA03067; Fri, 8 Jun 2007 15:28:14 +1000 Date: Fri, 08 Jun 2007 15:28:14 +1000 From: Timothy Shimmin To: David Chinner , xfs-dev cc: xfs-oss Subject: Re: Review: Be smarter about handling ENOSPC during writeback Message-ID: In-Reply-To: References: <20070604045219.GG86004887@sgi.com> X-Mailer: Mulberry/4.0.8 (Mac OS X) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11693 
X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Hi Dave, Putting the xfs_reserve_blocks discussion to the side.... (discussed separately) Back to the review, looking at the changes: --On 4 June 2007 2:52:19 PM +1000 David Chinner wrote: > > During delayed allocation extent conversion or unwritten extent > conversion, we need to reserve some blocks for transactions > reservations. We need to reserve these blocks in case a btree > split occurs and we need to allocate some blocks. > > Unfortunately, we've only ever reserved the number of data blocks we > are allocating, so in both the unwritten and delalloc case we can > get ENOSPC to the transaction reservation. This is bad because in > both cases we cannot report the failure to the writing application. > > The fix is two-fold: > > 1 - leverage the reserved block infrastructure XFS already > has to reserve a small pool of blocks by default to allow > specially marked transactions to dip into when we are at > ENOSPC. > > Default setting is min(5%, 1024 blocks). > > 2 - convert critical transaction reservations to be allowed > to dip into this pool. Spots changed are delalloc > conversion, unwritten extent conversion and growing a > filesystem at ENOSPC. > > Comments? > > Cheers, > > Dave. 
> -- > Dave Chinner > Principal Engineer > SGI Australian Software Group > > --- > fs/xfs/xfs_fsops.c | 10 +++++++--- * allow xfs_reserve_blocks() to handle a null outval so that we can call xfs_reserve_blocks other than thru ioctl, where we don't care about outval * xfs_growfs_data_private() or's in XFS_TRANS_RESERVE like we do for root EAs -> allow growfs transaction to dip in to reserve space > fs/xfs/xfs_mount.c | 37 +++++++++++++++++++++++++++++++++++-- * xfs_mountfs(): cleanup - restrict a variable (ret64) to the block its used in * xfs_mountfs(): do our xfs_reserve_blocks() for what we think we'll need - pass NULL for 2nd param to it as we don't care (why we changed xfs_fsops.c) - defaults to min(1024 FSBs, 5% dblocks) -> not sure how one would choose this but it sounds big enough * xfs_unmountfs(): xfs_reserve_blocks of zero and so restoring the sb free counter Q: so I guess, for DMF systems which presumably turn this stuff on using the ioctl; we should tell them to stop doing this - they could stuff us up by overriding it maybe and they don't need to. > fs/xfs/xfs_iomap.c | 22 ++++++++-------------- * some whitespace cleanup xfs_iomap_write_allocate(): * delalloc extent conversion - mark transaction for reserved blocks space * don't handle ENOSPC here, as we shouldn't get it now I presume xfs_iomap_write_unwritten * unwritten extent conversion - mark trans for reserved blocks Seems simple enough :) Will we get questions from people about reduced space from df? 
:) --Tim > 3 files changed, 50 insertions(+), 19 deletions(-) > > Index: 2.6.x-xfs-new/fs/xfs/xfs_fsops.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_fsops.c 2007-05-11 10:35:29.288847149 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_fsops.c 2007-05-11 11:13:34.195363437 +1000 > @@ -179,6 +179,7 @@ xfs_growfs_data_private( > up_write(&mp->m_peraglock); > } > tp = xfs_trans_alloc(mp, XFS_TRANS_GROWFS); > + tp->t_flags |= XFS_TRANS_RESERVE; > if ((error = xfs_trans_reserve(tp, XFS_GROWFS_SPACE_RES(mp), > XFS_GROWDATA_LOG_RES(mp), 0, 0, 0))) { > xfs_trans_cancel(tp, 0); > @@ -500,8 +501,9 @@ xfs_reserve_blocks( > unsigned long s; > > /* If inval is null, report current values and return */ > - > if (inval == (__uint64_t *)NULL) { > + if (!outval) > + return EINVAL; > outval->resblks = mp->m_resblks; > outval->resblks_avail = mp->m_resblks_avail; > return 0; > @@ -564,8 +566,10 @@ retry: > } > } > out: > - outval->resblks = mp->m_resblks; > - outval->resblks_avail = mp->m_resblks_avail; > + if (outval) { > + outval->resblks = mp->m_resblks; > + outval->resblks_avail = mp->m_resblks_avail; > + } > XFS_SB_UNLOCK(mp, s); > > if (fdblks_delta) { > Index: 2.6.x-xfs-new/fs/xfs/xfs_mount.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_mount.c 2007-05-11 10:35:29.292846630 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_mount.c 2007-05-11 11:13:47.229662318 +1000 > @@ -718,7 +718,7 @@ xfs_mountfs( > bhv_vnode_t *rvp = NULL; > int readio_log, writeio_log; > xfs_daddr_t d; > - __uint64_t ret64; > + __uint64_t resblks; > __int64_t update_flags; > uint quotamount, quotaflags; > int agno; > @@ -835,6 +835,7 @@ xfs_mountfs( > */ > if ((mfsi_flags & XFS_MFSI_SECOND) == 0 && > (mp->m_flags & XFS_MOUNT_NOUUID) == 0) { > + __uint64_t ret64; > if (xfs_uuid_mount(mp)) { > error = XFS_ERROR(EINVAL); > goto error1; > @@ -1127,13 +1128,27 @@ xfs_mountfs( > goto error4; > } > > - > /* > * 
Complete the quota initialisation, post-log-replay component. > */ > if ((error = XFS_QM_MOUNT(mp, quotamount, quotaflags, mfsi_flags))) > goto error4; > > + /* > + * Now we are mounted, reserve a small amount of unused space for > + * privileged transactions. This is needed so that transaction > + * space required for critical operations can dip into this pool > + * when at ENOSPC. This is needed for operations like create with > + * attr, unwritten extent conversion at ENOSPC, etc. Data allocations > + * are not allowed to use this reserved space. > + * > + * We default to 5% or 1024 fsbs of space reserved, whichever is smaller. > + * This may drive us straight to ENOSPC on mount, but that implies > + * we were already there on the last unmount. > + */ > + resblks = min_t(__uint64_t, mp->m_sb.sb_dblocks / 20, 1024); > + xfs_reserve_blocks(mp, &resblks, NULL); > + > return 0; > > error4: > @@ -1172,6 +1187,7 @@ xfs_unmountfs(xfs_mount_t *mp, struct cr > #if defined(DEBUG) || defined(INDUCE_IO_ERROR) > int64_t fsid; > #endif > + __uint64_t resblks; > > /* > * We can potentially deadlock here if we have an inode cluster > @@ -1200,6 +1216,23 @@ xfs_unmountfs(xfs_mount_t *mp, struct cr > xfs_binval(mp->m_rtdev_targp); > } > > + /* > + * Unreserve any blocks we have so that when we unmount we don't account > + * the reserved free space as used. This is really only necessary for > + * lazy superblock counting because it trusts the incore superblock > + * counters to be aboslutely correct on clean unmount. > + * > + * We don't bother correcting this elsewhere for lazy superblock > + * counting because on mount of an unclean filesystem we reconstruct the > + * correct counter value and this is irrelevant. > + * > + * For non-lazy counter filesystems, this doesn't matter at all because > + * we only every apply deltas to the superblock and hence the incore > + * value does not matter.... 
> + */ > + resblks = 0; > + xfs_reserve_blocks(mp, &resblks, NULL); > + > xfs_log_sbcount(mp, 1); > xfs_unmountfs_writesb(mp); > xfs_unmountfs_wait(mp); /* wait for async bufs */ > Index: 2.6.x-xfs-new/fs/xfs/xfs_iomap.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_iomap.c 2007-05-11 11:13:13.862017149 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_iomap.c 2007-05-11 11:13:34.199362915 +1000 > @@ -489,13 +489,13 @@ xfs_iomap_write_direct( > if (unlikely(rt)) { > resrtextents = qblocks = resaligned; > resrtextents /= mp->m_sb.sb_rextsize; > - resblks = XFS_DIOSTRAT_SPACE_RES(mp, 0); > - quota_flag = XFS_QMOPT_RES_RTBLKS; > - } else { > - resrtextents = 0; > + resblks = XFS_DIOSTRAT_SPACE_RES(mp, 0); > + quota_flag = XFS_QMOPT_RES_RTBLKS; > + } else { > + resrtextents = 0; > resblks = qblocks = XFS_DIOSTRAT_SPACE_RES(mp, resaligned); > - quota_flag = XFS_QMOPT_RES_REGBLKS; > - } > + quota_flag = XFS_QMOPT_RES_REGBLKS; > + } > > /* > * Allocate and setup the transaction > @@ -788,18 +788,12 @@ xfs_iomap_write_allocate( > nimaps = 0; > while (nimaps == 0) { > tp = xfs_trans_alloc(mp, XFS_TRANS_START_WRITE); > + tp->t_flags |= XFS_TRANS_RESERVE; > nres = XFS_EXTENTADD_SPACE_RES(mp, XFS_DATA_FORK); > error = xfs_trans_reserve(tp, nres, > XFS_WRITE_LOG_RES(mp), > 0, XFS_TRANS_PERM_LOG_RES, > XFS_WRITE_LOG_COUNT); > - if (error == ENOSPC) { > - error = xfs_trans_reserve(tp, 0, > - XFS_WRITE_LOG_RES(mp), > - 0, > - XFS_TRANS_PERM_LOG_RES, > - XFS_WRITE_LOG_COUNT); > - } > if (error) { > xfs_trans_cancel(tp, 0); > return XFS_ERROR(error); > @@ -917,8 +911,8 @@ xfs_iomap_write_unwritten( > * from unwritten to real. Do allocations in a loop until > * we have covered the range passed in. 
> */ > - > tp = xfs_trans_alloc(mp, XFS_TRANS_START_WRITE); > + tp->t_flags |= XFS_TRANS_RESERVE; > error = xfs_trans_reserve(tp, resblks, > XFS_WRITE_LOG_RES(mp), 0, > XFS_TRANS_PERM_LOG_RES, From owner-xfs@oss.sgi.com Thu Jun 7 23:24:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Jun 2007 23:24:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l586OfWt023758 for ; Thu, 7 Jun 2007 23:24:43 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA04426; Fri, 8 Jun 2007 16:24:34 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l586OWAf115248565; Fri, 8 Jun 2007 16:24:33 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l586OUaL115423795; Fri, 8 Jun 2007 16:24:30 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 8 Jun 2007 16:24:30 +1000 From: David Chinner To: Christoph Hellwig Cc: xfs@oss.sgi.com Subject: Re: [PATCH] get rid of file_count abuse Message-ID: <20070608062430.GY86004887@sgi.com> References: <20070604143352.GA8721@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604143352.GA8721@lst.de> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11694 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com 
Precedence: bulk X-list: xfs On Mon, Jun 04, 2007 at 04:33:52PM +0200, Christoph Hellwig wrote: > A check for file_count is always a bad idea. Linux has the ->release > method to deal with cleanups on last close and ->flush is only for the > very rare case where we want to perform an operation on every drop of > a reference to a file struct. *nod* > This patch gets rid of vop_close and surrounding code in favour of > simply doing the page flushing from ->release. Added to my qa tree, and mangled to move the filestreams stuff from xfs_close to xfs_release as well. Thanks, Christoph. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Fri Jun 8 00:33:50 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Jun 2007 00:33:51 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l587XkWt007805 for ; Fri, 8 Jun 2007 00:33:48 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA06910; Fri, 8 Jun 2007 17:33:45 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l587XiAf117249269; Fri, 8 Jun 2007 17:33:44 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l587Xgak115891519; Fri, 8 Jun 2007 17:33:42 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 8 Jun 2007 17:33:42 +1000 From: David Chinner To: Timothy Shimmin Cc: David Chinner , xfs-dev , xfs-oss Subject: Re: Review: Be smarter about handling ENOSPC during writeback 
Message-ID: <20070608073342.GW85884050@sgi.com> References: <20070604045219.GG86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11695 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Jun 08, 2007 at 03:28:14PM +1000, Timothy Shimmin wrote: > Hi Dave, > > Putting the xfs_reserve_blocks discussion to the side.... > (discussed separately) *nod* BTW, did you try that patch I sent? > > fs/xfs/xfs_fsops.c | 10 +++++++--- > > * allow xfs_reserve_blocks() to handle a null outval so that > we can call xfs_reserve_blocks other than thru ioctl, > where we don't care about outval > * xfs_growfs_data_private() or's in XFS_TRANS_RESERVE like we do for root > EAs > -> allow growfs transaction to dip in to reserve space Yes, and so now you can grow a completely full filesystem :) > > fs/xfs/xfs_mount.c | 37 +++++++++++++++++++++++++++++++++++-- > > * xfs_mountfs(): cleanup - restrict a variable (ret64) to the block its > used in > * xfs_mountfs(): do our xfs_reserve_blocks() for what we think we'll need > - pass NULL for 2nd param to it as we don't care (why we changed > xfs_fsops.c) > - defaults to min(1024 FSBs, 5% dblocks) > -> not sure how one would choose this but it sounds big enough It's a SWAG. I think it's sufficient to begin with. If it proves to be a problem, then we can change it later.... > * xfs_unmountfs(): xfs_reserve_blocks of zero and so restoring the sb free > counter > > Q: so I guess, for DMF systems which presumably turn this stuff on using > the ioctl; yeah - the rope is long enough ;) > we should tell them to stop doing this - they could stuff us up by > overriding it > maybe and they don't need to. 
All they need to do is check first before setting a new value... > > fs/xfs/xfs_iomap.c | 22 ++++++++-------------- > > * some whitespace cleanup > xfs_iomap_write_allocate(): > * delalloc extent conversion - mark transaction for reserved blocks space > * don't handle ENOSPC here, as we shouldn't get it now I presume We still can, just much more unlikely. I need to do another set of patches for ENOSPC notification but I haven't had a chance yet. > xfs_iomap_write_unwritten > * unwritten extent conversion - mark trans for reserved blocks Ditto. > Seems simple enough :) It's just one of a few to begin with? > Will we get questions from people about reduced space from df? :) If we do, I think you just volunteered to write the FAQ entry ;) Thanks for the review. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Fri Jun 8 00:53:34 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Jun 2007 00:53:37 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.9 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l587rVWt011344 for ; Fri, 8 Jun 2007 00:53:32 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA07763; Fri, 8 Jun 2007 17:53:27 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 1116) id 35F7B58C38C1; Fri, 8 Jun 2007 17:53:27 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com, xfs@oss.sgi.com Subject: TAKE 963528 - xfs_growfs_data_private not logging agf length change Message-Id: <20070608075327.35F7B58C38C1@chook.melbourne.sgi.com> Date: Fri, 8 Jun 2007 17:53:27 +1000 (EST) From: tes@sgi.com (Tim Shimmin) X-Virus-Scanned: ClamAV 
version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11696 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Log the agf_length change in xfs_growfs_data_private(). Date: Fri Jun 8 17:52:33 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/tes/2.6.x-xfs Inspected by: dgc@sgi.com,hch@infradead.org The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28856a fs/xfs/xfs_fsops.c - 1.125 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_fsops.c.diff?r1=text&tr1=1.125&r2=text&tr2=1.124&f=h - Log the agf_length change in xfs_growfs_data_private(). From owner-xfs@oss.sgi.com Fri Jun 8 01:35:36 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Jun 2007 01:35:38 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l588ZXWt018792 for ; Fri, 8 Jun 2007 01:35:35 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA10349; Fri, 8 Jun 2007 18:35:29 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id D7EC658C38F2; Fri, 8 Jun 2007 18:35:28 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 964538 - tail-pushing deadlock when flushing inodes on unmount Message-Id: <20070608083528.D7EC658C38F2@chook.melbourne.sgi.com> Date: Fri, 8 Jun 2007 18:35:28 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 
on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11697 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Prevent deadlock when flushing inodes on unmount When we are unmounting the filesystem, we flush all the inodes to disk. Unfortunately, if we have an inode cluster that has just been freed and marked stale sitting in an incore log buffer (i.e. hasn't been flushed to disk), it will be holding all the flush locks on the inodes in that cluster. xfs_iflush_all(), which is called during unmount, walks all the inodes trying to reclaim them, and in doing so calls xfs_finish_reclaim() on each inode. If the inode is dirty, it grabs the flush lock and flushes it. Unfortunately, we can find dirty inodes that already have their flush lock held, and so we sleep. At this point in the unmount process, we are running single-threaded. There is nothing more that can push on the log to force the transaction holding the inode flush locks to disk and hence we deadlock. The fix is to issue a log force before flushing the inodes on unmount so that all the flush locks will be released before we start flushing the inodes. Date: Fri Jun 8 18:34:39 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: tes@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28862a fs/xfs/xfs_mount.c - 1.396 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.c.diff?r1=text&tr1=1.396&r2=text&tr2=1.395&f=h - Force the log before we flush all the inodes on unmount to prevent a deadlock if any inode flush locks are held by transactions that are not yet on disk.
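The ordering argument in the commit message above can be illustrated with a small userspace toy model. This is purely illustrative — all the `toy_*` names are invented and bear no relation to the real kernel code: an inode whose flush lock is pinned by an unwritten log buffer can only be flushed after the log is forced.

```c
#include <stddef.h>

/* Toy model of the unmount ordering fix: a stale inode cluster sitting
 * in an in-core log buffer holds the flush locks of its inodes, so a
 * single-threaded flush pass would block forever unless the log is
 * forced first. All names here are invented for illustration. */
struct toy_inode { int dirty; int flush_locked; };
struct toy_log   { struct toy_inode *pinned; }; /* unwritten log buffer */

static void toy_log_force(struct toy_log *log)
{
    if (log->pinned) {                  /* writing the log buffer ... */
        log->pinned->flush_locked = 0;  /* ... releases the flush lock */
        log->pinned = NULL;
    }
}

/* Returns 0 on success, -1 if we would deadlock waiting on a flush
 * lock that nothing else can release (we are single-threaded here). */
static int toy_flush_all(struct toy_inode *inodes, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (inodes[i].dirty && inodes[i].flush_locked)
            return -1;                  /* would sleep forever */
        inodes[i].dirty = 0;            /* flushed to disk */
    }
    return 0;
}
```

Calling `toy_log_force()` before `toy_flush_all()` is the whole fix: the flush locks are released before the single-threaded flush begins.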
From owner-xfs@oss.sgi.com Fri Jun 8 01:44:45 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Jun 2007 01:44:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l588ihWt020811 for ; Fri, 8 Jun 2007 01:44:44 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA11195; Fri, 8 Jun 2007 18:44:39 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 1558C58C38F3; Fri, 8 Jun 2007 18:44:38 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 964468 - XFS can ENOSPC in bad places Message-Id: <20070608084439.1558C58C38F3@chook.melbourne.sgi.com> Date: Fri, 8 Jun 2007 18:44:38 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11698 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Prevent ENOSPC from aborting transactions that need to succeed During delayed allocation extent conversion or unwritten extent conversion, we need to reserve some blocks for transaction reservations. We need to reserve these blocks in case a btree split occurs and we need to allocate some blocks. Unfortunately, we've only ever reserved the number of data blocks we are allocating, so in both the unwritten and delalloc case we can get ENOSPC to the transaction reservation. This is bad because in both cases we cannot report the failure to the writing application.
The fix is two-fold: 1 - leverage the reserved block infrastructure XFS already has to reserve a small pool of blocks by default to allow specially marked transactions to dip into when we are at ENOSPC. Default setting is min(5%, 1024 blocks). 2 - convert critical transaction reservations to be allowed to dip into this pool. Spots changed are delalloc conversion, unwritten extent conversion and growing a filesystem at ENOSPC. This also allows growing the filesystem to succeed at ENOSPC. Date: Fri Jun 8 18:44:10 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: tes@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28865a fs/xfs/xfs_mount.c - 1.397 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.c.diff?r1=text&tr1=1.397&r2=text&tr2=1.396&f=h - reserve a small amount of disk blocks by default to use in emergency ENOSPC situations. fs/xfs/xfs_fsops.c - 1.126 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_fsops.c.diff?r1=text&tr1=1.126&r2=text&tr2=1.125&f=h - Allow growfs transaction to use reserved space. fs/xfs/xfs_iomap.c - 1.53 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_iomap.c.diff?r1=text&tr1=1.53&r2=text&tr2=1.52&f=h - Allow unwritten extent conversion and delayed allocation transactions to use reserved space instead of silently failing at ENOSPC.
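The default pool size described in point 1 - min(5% of the data blocks, 1024 blocks) - can be sketched as a small helper. The function name `default_resblks` is illustrative only, not the actual XFS symbol:

```c
#include <stdint.h>

/* Sketch of the described default: reserve the smaller of 5% of the
 * filesystem's data blocks and 1024 blocks as an emergency pool for
 * ENOSPC-critical transactions. Name is hypothetical. */
static uint64_t default_resblks(uint64_t dblocks)
{
    uint64_t resblks = dblocks / 20;    /* 5% of data blocks */
    if (resblks > 1024)
        resblks = 1024;                 /* capped at 1024 blocks */
    return resblks;
}
```

So any filesystem larger than 20480 blocks reserves a flat 1024 blocks; smaller filesystems reserve 5%.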
From owner-xfs@oss.sgi.com Fri Jun 8 01:52:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Jun 2007 01:52:12 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.0 required=5.0 tests=AWL,BAYES_95,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from ug-out-1314.google.com (ug-out-1314.google.com [66.249.92.168]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l588q6Wt022501 for ; Fri, 8 Jun 2007 01:52:08 -0700 Received: by ug-out-1314.google.com with SMTP id 74so1052176ugb for ; Fri, 08 Jun 2007 01:52:06 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:subject:from:to:cc:in-reply-to:references:content-type:date:message-id:mime-version:x-mailer; b=l+hkMqNQSlCffzVw4PM5LprHWtx4J3Qa7r5Yzg8sTKjztjvweDzUQpRCiZSqe2y3P3m0c1vsnQxJVUq0kp9+X7TYm40uLbL62uJDVH2Q2laSsJIVfJb9OGdz/I3zRDnLyQYtJ+NnGnG22MQmiKLoeOvISDBSEV0wJ9/+O7xKEoo= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:subject:from:to:cc:in-reply-to:references:content-type:date:message-id:mime-version:x-mailer; b=aJiW94djuLDMCsrCU+/8vzLqjVEbn3v6zcj9/PZnkzfQHQ9VKRfQcApKnfwxhm00Jf2ITUSukWLuARmM364P8VHBU1BYW7rNyFIbl0pAZT6QgBLYJ9uCDebK273/0Tt+ovOCOH8WO999Qr9m8u1VuxoJ2NbzBwfopvmln8vDRaA= Received: by 10.66.248.8 with SMTP id v8mr1809102ugh.1181291038937; Fri, 08 Jun 2007 01:23:58 -0700 (PDT) Received: from ?192.168.1.10? 
( [84.59.101.174]) by mx.google.com with ESMTP id 27sm4511853ugp.2007.06.08.01.23.56 (version=TLSv1/SSLv3 cipher=RC4-MD5); Fri, 08 Jun 2007 01:23:56 -0700 (PDT) Subject: Re: XFS shrink functionality From: Ruben Porras To: Iustin Pop Cc: David Chinner , xfs@oss.sgi.com, cw@f00f.org In-Reply-To: <20070604084154.GA8273@teal.hq.k1024.org> References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> <20070604084154.GA8273@teal.hq.k1024.org> Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="=-LdQDPymmi+rBggGPnr8T" Date: Fri, 08 Jun 2007 10:23:53 +0200 Message-Id: <1181291033.7510.40.camel@localhost> Mime-Version: 1.0 X-Mailer: Evolution 2.10.2 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11699 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nahoo82@gmail.com Precedence: bulk X-list: xfs --=-LdQDPymmi+rBggGPnr8T Content-Type: text/plain Content-Transfer-Encoding: quoted-printable On Monday, 04.06.2007 at 10:41 +0200, Iustin Pop wrote: > Good to know. If there is at least more documentation about the > internals, I could try to find some time to work on this again. there is now a document explaining the XFS on disk format [0] and some presentations for training courses; I think none of these was available at the time you made the first try, although they are not enough for our purpose. > My suggestion would be to start implementing these steps in reverse. 4) > is the most important as it touches the entire FS. If 4) is working > correctly, then 1) would be simpler (I think) Why do you think that 1) would be simpler after 4)? From what I understand, they are independent. 3) worries me: if walking the entire filesystem is needed, it won't scale...
Since I don't yet know the xfs code I would like to begin with 1); I see it as independent of the other parts, and I can then learn more about the transactions, allocators, and walking through the xfs structures. As you did 4) once, maybe you could try this part of the problem if you find the time, taking David's suggestions into account. [0] http://oss.sgi.com/projects/xfs/papers/xfs_filesystem_structure.pdf Cheers -- Ruben Porras LinWorks GmbH --=-LdQDPymmi+rBggGPnr8T Content-Type: application/pgp-signature; name=signature.asc Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) iD8DBQBGaRIZYubrKblAx+oRAgScAJ908dJzI9U9BLjh5ePQZkp6AfjSSgCgh6gL 5ON+D15BpF2nkNqM/LCiM8w= =b01I -----END PGP SIGNATURE----- --=-LdQDPymmi+rBggGPnr8T-- From owner-xfs@oss.sgi.com Fri Jun 8 02:01:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Jun 2007 02:01:06 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5890xWt024214 for ; Fri, 8 Jun 2007 02:01:00 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id TAA11950; Fri, 8 Jun 2007 19:00:55 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 7309658C38F3; Fri, 8 Jun 2007 19:00:55 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 966004 - cleanup obtaining extent size hints from the inode Message-Id: <20070608090055.7309658C38F3@chook.melbourne.sgi.com> Date: Fri, 8 Jun 2007 19:00:55 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV
version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11700 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Cleanup inode extent size hint extraction Date: Fri Jun 8 19:00:21 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: hch@infradead.org The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28866a fs/xfs/xfs_rw.h - 1.82 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_rw.h.diff?r1=text&tr1=1.82&r2=text&tr2=1.81&f=h - Define a common function for extracting the valid extent size hint from a given inode. fs/xfs/xfs_vnodeops.c - 1.698 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_vnodeops.c.diff?r1=text&tr1=1.698&r2=text&tr2=1.697&f=h - Use xfs_get_extsz_hint rather than open coded statements. fs/xfs/xfs_bmap.c - 1.369 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_bmap.c.diff?r1=text&tr1=1.369&r2=text&tr2=1.368&f=h - Use xfs_get_extsz_hint rather than open coded statements. fs/xfs/xfs_iomap.c - 1.54 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_iomap.c.diff?r1=text&tr1=1.54&r2=text&tr2=1.53&f=h - Use xfs_get_extsz_hint rather than open coded statements. 
From owner-xfs@oss.sgi.com Fri Jun 8 02:39:28 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Jun 2007 02:39:31 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,J_CHICKENPOX_72, URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l589dPWt030741 for ; Fri, 8 Jun 2007 02:39:26 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id TAA13459; Fri, 8 Jun 2007 19:39:20 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id CA13C58C38C1; Fri, 8 Jun 2007 19:39:20 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 964464 - remount r/o path has same flush problems as freeze path Message-Id: <20070608093920.CA13C58C38C1@chook.melbourne.sgi.com> Date: Fri, 8 Jun 2007 19:39:20 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11701 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Fix remount,readonly path to flush everything correctly. The remount readonly path can fail to writeback properly because we still have active transactions after calling xfs_quiesce_fs(). Further investigation shows that this path is broken in the same ways that the xfs freeze path was broken so fix it the same way. 
Date: Fri Jun 8 19:38:52 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: hch@infradead.org The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28869a fs/xfs/xfs_vfsops.c - 1.521 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_vfsops.c.diff?r1=text&tr1=1.521&r2=text&tr2=1.520&f=h - Modify remount,ro path to use data and inode quiesce code. Factor common inode quiesce code. fs/xfs/linux-2.6/xfs_vfs.h - 1.71 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_vfs.h.diff?r1=text&tr1=1.71&r2=text&tr2=1.70&f=h - Define data and inode quiesce flush flags and document what they do. fs/xfs/linux-2.6/xfs_super.c - 1.383 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_super.c.diff?r1=text&tr1=1.383&r2=text&tr2=1.382&f=h - Clean up data quiesce flags. From owner-xfs@oss.sgi.com Fri Jun 8 02:43:27 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Jun 2007 02:43:29 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l589hNWt031969 for ; Fri, 8 Jun 2007 02:43:25 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id TAA13530; Fri, 8 Jun 2007 19:43:19 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 924C058C38C1; Fri, 8 Jun 2007 19:43:19 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 965784 - i386 linker error: xfs_dm_punch_hole - undefined ref to `__divdi3' Message-Id: <20070608094319.924C058C38C1@chook.melbourne.sgi.com> Date: Fri, 8 
Jun 2007 19:43:19 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11702 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Fix i386 dmapi build - use roundup_64 for 64 bit types. Date: Fri Jun 8 19:42:38 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: donaldd@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28870a fs/xfs/dmapi/xfs_dm.c - 1.37 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/dmapi/xfs_dm.c.diff?r1=text&tr1=1.37&r2=text&tr2=1.36&f=h - Use roundup_64() instead of roundup() which cannot handle 64 bit types on 32bit platforms. From owner-xfs@oss.sgi.com Fri Jun 8 03:15:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Jun 2007 03:15:45 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.7 required=5.0 tests=AWL,BAYES_99,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from astra.simleu.ro (astra.simleu.ro [80.97.18.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l58AFdWt005106 for ; Fri, 8 Jun 2007 03:15:41 -0700 Received: from teal.hq.k1024.org (84-75-124-135.dclient.hispeed.ch [84.75.124.135]) by astra.simleu.ro (Postfix) with ESMTP id 67DD0134; Fri, 8 Jun 2007 13:15:39 +0300 (EEST) Received: by teal.hq.k1024.org (Postfix, from userid 4004) id CEC8C40FE2A; Fri, 8 Jun 2007 12:15:32 +0200 (CEST) Date: Fri, 8 Jun 2007 12:15:32 +0200 From: Iustin Pop To: Ruben Porras Cc: David Chinner , xfs@oss.sgi.com, cw@f00f.org Subject: Re: XFS shrink functionality Message-ID: <20070608101532.GA18788@teal.hq.k1024.org> Mail-Followup-To: Ruben Porras , David Chinner , 
xfs@oss.sgi.com, cw@f00f.org References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> <20070604084154.GA8273@teal.hq.k1024.org> <1181291033.7510.40.camel@localhost> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1181291033.7510.40.camel@localhost> X-Linux: This message was written on Linux X-Header: /usr/include gives great headers User-Agent: Mutt/1.5.13 (2006-08-11) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11703 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: iusty@k1024.org Precedence: bulk X-list: xfs On Fri, Jun 08, 2007 at 10:23:53AM +0200, Ruben Porras wrote: > Am Montag, den 04.06.2007, 10:41 +0200 schrieb Iustin Pop: > > Good to know. If there is at least more documentation about the > > internals, I could try to find some time to work on this again. > > there is now a document explaining the XFS on disk format [0] and some > presentations for training courses, I think none of this were available > at the time you made the first try. Although they are not enough for our > purpose. > Yes, just yesterday I found the document and it helps. > > My suggestion would be to start implementing these steps in reverse. 4) > > is the most important as it touches the entire FS. If 4) is working > > correctly, then 1) would be simpler (I think) > > Why do you think that 1) would be simpler after 4)? For what I > understand, they are independent. Not after it in the chronological sense, but in terms of importance. Yes, it was a bad choice of words. > 3) worries me, if walking the entire filesystem is needed, it want > scale...
> > Since I don't know yet the xfs code I would like to begin with 1), I see > it independent from the other parts, and I can then learn more about the > transactions, allocators, and walking through the xfs structures. As you > did 4) one time, maybe you could try with this part of the problem if > you find the needed time, taking David's suggestions into account. I took a look at both items since this discussion started. And honestly, I think 1) is harder than 4), so you're welcome to work on it :) The points that make it harder are that, per David's suggestion, we need to: - define two new transaction types - define two new ioctls - update the ondisk-format (!), if we want persistence of these flags; luckily, there are two spare fields in the AGF structure. - check the list of allocation functions that allocate space from the AG I did some preliminary work on this but just a little. I think that after the weekend I'll send an updated patch of 4). I have one working now with the current CVS tree, just that it's still ugly and needs polishing. Open questions (re. point 4): - the filesystem document says the agf->agf_btreeblks is held only in case we have an extended flag active for the filesystem (XFS_SB_VERSION2_LAZYSBCOUNTBIT); is this true?
without this, I'm not sure how to calculate this number of blocks nicely - or can I assume that an empty AG will *always* have agf_levels = 1 for both Btrees, so there are no extra blocks actually used for the btrees (except for the two reserved ones at the beginning of the AG) - can I assume that an AG with agi->icount == agi->ifree == 0 will have no blocks used for the inode btrees (logically yes, but I'm not sure) thanks, iustin From owner-xfs@oss.sgi.com Fri Jun 8 06:59:40 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Jun 2007 06:59:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.5 required=5.0 tests=AWL,BAYES_50,J_CHICKENPOX_45, URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from bay0-omc1-s13.bay0.hotmail.com (bay0-omc1-s13.bay0.hotmail.com [65.54.246.85]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l58DxcWt010121 for ; Fri, 8 Jun 2007 06:59:40 -0700 Received: from hotmail.com ([65.54.174.86]) by bay0-omc1-s13.bay0.hotmail.com with Microsoft SMTPSVC(6.0.3790.2668); Fri, 8 Jun 2007 06:59:38 -0700 Received: from mail pickup service by hotmail.com with Microsoft SMTPSVC; Fri, 8 Jun 2007 06:59:38 -0700 Message-ID: Received: from 85.36.106.214 by BAY103-DAV14.phx.gbl with DAV; Fri, 08 Jun 2007 13:59:37 +0000 X-Originating-IP: [85.36.106.214] X-Originating-Email: [pupilla@hotmail.com] X-Sender: pupilla@hotmail.com From: "Marco Berizzi" To: "David Chinner" Cc: "David Chinner" , , , "Marco Berizzi" References: <20070316012520.GN5743@melbourne.sgi.com> <20070316195951.GB5743@melbourne.sgi.com> <20070320064632.GO32602149@melbourne.sgi.com> <20070607130505.GE85884050@sgi.com> Subject: Re: XFS internal error xfs_da_do_buf(2) at line 2087 of file fs/xfs/xfs_da_btree.c.
Caller 0xc01b00bd Date: Fri, 8 Jun 2007 15:59:39 +0200 X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2800.1123 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1123 X-OriginalArrivalTime: 08 Jun 2007 13:59:38.0538 (UTC) FILETIME=[40E758A0:01C7A9D5] X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11704 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pupilla@hotmail.com Precedence: bulk X-list: xfs David Chinner wrote: > > Jun 6 09:47:09 Pleiadi kernel: ======================= > > Jun 6 09:47:09 Pleiadi kernel: 0x0: 28 f1 45 d4 22 53 35 11 09 80 37 5a > > 47 8a 22 ee > > Jun 6 09:47:09 Pleiadi kernel: Filesystem "sda8": XFS internal error > > xfs_da_do_buf(2) at line 2086 of file fs/xfs/xfs_da_btree.c. Caller > > 0xc01b2301 > > Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_do_buf+0x70c/0x7b1 > > Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_read_buf+0x30/0x35 > > Jun 6 09:47:09 Pleiadi kernel: [] xfs_da_read_buf+0x30/0x35 > > These above stack trace is the sign of a corrupted directory. > > Chopping out the rest of the top posting (please don't do that) apologies > we get down to 3 months ago: > > > > On Mon, Mar 19, 2007 at 11:32:27AM +0100, Marco Berizzi wrote: > > > > Marco Berizzi wrote: > > > > Here is the relevant results: > > > > > > > > Phase 2 - found root inode chunk > > > > Phase 3 - ... > > > > agno = 0 > > > > ... > > > > agno = 12 > > > > LEAFN node level is 1 inode 1610612918 bno = 8388608 > > > > > > Hmmm - single bit error in the bno - that reminds of this: > > > > > > http://oss.sgi.com/projects/xfs/faq.html#dir2 > > > > > > So I'd definitely make sure that is repaired.... > > Where we saw signs of on disk directory corruption. Have you run > xfs_repair successfully on the filesystem since you reported > this? yes. 
> If you did clean up the error, does xfs_repair report the same sort > of error again? I have run xfs_repair this morning. Here is the report: Phase 1 - find and verify superblock... Phase 2 - using internal log - zero log... - scan filesystem freespace and inode maps... - found root inode chunk Phase 3 - for each AG... - scan and clear agi unlinked lists... - process known inodes and perform inode discovery... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - process newly discovered inodes... Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - clear lost+found (if it exists) ... - check for inodes claiming duplicate blocks... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 Phase 5 - rebuild AG headers and trees... - reset superblock... Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - ensuring existence of lost+found directory - traversing filesystem starting at / ... - traversal finished ... - traversing all unattached subtrees ... - traversals finished ... - moving disconnected inodes to lost+found ... Phase 7 - verify and correct link counts... done > Have you run a 2.6.16-rcX or 2.6.17.[0-6] kernel since you last > reported this problem? No. I have run only 2.6.19.x and 2.6.21.x. After the xfs_repair I remounted the file system. After a few hours linux crashed with this message: BUG: at arch/i386/kernel/smp.c:546 smp_call_function() I also have the monitor bitmap.
From owner-xfs@oss.sgi.com Fri Jun 8 07:45:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Jun 2007 07:45:19 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l58EjEWt017931 for ; Fri, 8 Jun 2007 07:45:15 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id AAA19896; Sat, 9 Jun 2007 00:45:03 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l58EiwAf117628356; Sat, 9 Jun 2007 00:45:00 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l58EitYa117895070; Sat, 9 Jun 2007 00:44:55 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Sat, 9 Jun 2007 00:44:55 +1000 From: David Chinner To: Ruben Porras Cc: Iustin Pop , David Chinner , xfs@oss.sgi.com, cw@f00f.org Subject: Re: XFS shrink functionality Message-ID: <20070608144455.GE86004887@sgi.com> References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> <20070604084154.GA8273@teal.hq.k1024.org> <1181291033.7510.40.camel@localhost> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1181291033.7510.40.camel@localhost> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11705 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Jun 
08, 2007 at 10:23:53AM +0200, Ruben Porras wrote: > Am Montag, den 04.06.2007, 10:41 +0200 schrieb Iustin Pop: > > Good to know. If there is at least more documentation about the > > internals, I could try to find some time to work on this again. > > there is now a document explaining the XFS on disk format [0] and some > presentations for training courses, I think none of this were available > at the time you made the first try. Although they are not enough for our > purpose. There's thousands of lines of code documenting that format as well ;) > > My suggestion would be to start implementing these steps in reverse. 4) > > is the most important as it touches the entire FS. If 4) is working > > correctly, then 1) would be simpler (I think) > > Why do you think that 1) would be simpler after 4)? For what I > understand, they are independent. > > 3) worries me, if walking the entire filesystem is needed, it want > scale... I think walking the filesystem can be avoided effectively by introducing a reverse map that points to the owner of the block (i.e. another btree). Reverse mapping provides other benefits as well, e.g. somewhere to put block checksums and more information for repair and scrubbing. The hard part is the moving of metadata. I haven't really thought deeply about the best method for this; there are lots of options and I don't know what is the best way to proceed there yet. That's not something I need to think about right now, though ;) Cheers, Dave.
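The reverse-map idea above can be pictured as a second index keyed by block number. A toy lookup table stands in here for the proposed btree; the record layout and all names are invented for illustration:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy reverse map: for each allocated extent, record its owner (e.g.
 * an inode number or a metadata type). With such an index, finding who
 * owns the blocks in a to-be-removed region needs only a range lookup,
 * not a walk of every inode in the filesystem. */
struct rmap_rec {
    uint64_t startblock;  /* first block of the extent */
    uint64_t blockcount;  /* extent length in blocks */
    uint64_t owner;       /* owning inode or metadata id */
};

/* A linear scan stands in for a btree range lookup. Returns the owner
 * of 'block', or 0 if no record covers it (block is free). */
static uint64_t rmap_owner(const struct rmap_rec *recs, size_t n,
                           uint64_t block)
{
    for (size_t i = 0; i < n; i++)
        if (block >= recs[i].startblock &&
            block < recs[i].startblock + recs[i].blockcount)
            return recs[i].owner;
    return 0;
}
```

A real implementation would keep these records sorted in a btree per AG so the lookup is logarithmic rather than linear.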
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Fri Jun 8 08:12:38 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Jun 2007 08:12:40 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l58FCYWt022306 for ; Fri, 8 Jun 2007 08:12:36 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id BAA20640; Sat, 9 Jun 2007 01:12:28 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l58FCQAf117389659; Sat, 9 Jun 2007 01:12:27 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l58FCNN8116731816; Sat, 9 Jun 2007 01:12:23 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Sat, 9 Jun 2007 01:12:23 +1000 From: David Chinner To: Ruben Porras , David Chinner , xfs@oss.sgi.com, cw@f00f.org Subject: Re: XFS shrink functionality Message-ID: <20070608151223.GF86004887@sgi.com> References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> <20070604084154.GA8273@teal.hq.k1024.org> <1181291033.7510.40.camel@localhost> <20070608101532.GA18788@teal.hq.k1024.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070608101532.GA18788@teal.hq.k1024.org> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11706 X-ecartis-version: Ecartis v1.0.0 Sender: 
xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Jun 08, 2007 at 12:15:32PM +0200, Iustin Pop wrote: > On Fri, Jun 08, 2007 at 10:23:53AM +0200, Ruben Porras wrote: > > Am Montag, den 04.06.2007, 10:41 +0200 schrieb Iustin Pop: > > > Good to know. If there is at least more documentation about the > > > internals, I could try to find some time to work on this again. > > > > there is now a document explaining the XFS on disk format [0] and some > > presentations for training courses, I think none of this were available > > at the time you made the first try. Although they are not enough for our > > purpose. > > > > Yes, just yesterday I found the document and it helps. > > > > My suggestion would be to start implementing these steps in reverse. 4) > > > is the most important as it touches the entire FS. If 4) is working > > > correctly, then 1) would be simpler (I think) > > > > Why do you think that 1) would be simpler after 4)? For what I > > understand, they are independent. > Not after that in the cronological sense, but in the importance part. > Yes, it was a bad choice of words. > > > 3) worries me, if walking the entire filesystem is needed, it want > > scale... > > > > Since I don't know yet the xfs code I would like to begin with 1), I see > > it independent from the other parts, and I can then learn more about the > > transactions, allocators, and walking through the xfs structures. As you > > did 4) one time, maybe you could try with this part of the problem if > > you find the needed time, taking David's suggestions into account. > > I took a look at both items since this discussion started. And honestly, > I think 1) is harder that 4), so you're welcome to work on it :) The > points that make it harder is that, per David's suggestion, there needs > to be: > - define two new transaction types one new transaction type: XFS_TRANS_AGF_FLAGS and an extension to xfs_alloc_log_agf().
Is about all that is needed there. See the patch here: http://oss.sgi.com/archives/xfs/2007-04/msg00103.html For an example of a very simlar transaction to what is needed (look at xfs_log_sbcount()) and very similar addition to the AGF (xfs_btreeblks). > - define two new ioctls XFS_IOC_ALLOC_ALLOW_AG, parameter xfsagnumber_t. XFS_IOC_ALLOC_DENY_AG, parameter xfsagnumber_t. > - update the ondisk-format (!), if we want persistence of these flags; > luckily, there are two spare fields in the AGF structure. Better to expand, I think. The AGF is a sector in length - we can expand the structure as we need to this size without fear, esp. as the part of the sector outside the structure is guaranteed to be zero. i.e. we can add a fields flag to the end of the AGF structure - old filesystems simple read as "no flags set" and old kernels never look at those bits.... > - check the list of allocation functions that allocate space from the > AG > I did some preliminary work on this but just a little. > > I think that after the weekend I'll send an updated patch of 4). I have > one working now with the current CVS tree, just that it's still ugly and > needs polishing. > > Open questions (re. point 4): > - the filesystem document says the agf->agf_btreeblks is held only in > case we have an extended flag active for the filesystem > (XFS_SB_VERSION2_LAZYSBCOUNTBIT); is this true? without this, I'm not > sure how to calculate this number of blocks nicely Yes, that is true. There's a pre-req for shrinking for the moment :/ > - or can I assume that an empty AG will *always* have agf_levels = 1 > for both Btrees, so there are no extra blocks actually used for the > btrees (except for the two reserved ones at the beggining of the AG Yes, that is a valid assumption. > - can I assume that an AG with agi->icount == agi->ifree == 0 will have > no blocks used for the inode btrees (logically yes, but I'm not sure) yes. Cheers, Dave. 
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Fri Jun 8 08:26:52 2007
Date: Thu, 7 Jun 2007 20:12:58 +0000
From: Pavel Machek
To: David Greaves
Cc: Tejun Heo, Linus Torvalds, "Rafael J. Wysocki", xfs@oss.sgi.com, 'linux-kernel@vger.kernel.org', linux-pm, Neil Brown
Subject: Re: 2.6.22-rc3 hibernate(?)
fails totally - regression (xfs on raid6)
Message-ID: <20070607201258.GB10323@ucw.cz>
In-Reply-To: <4667DE2D.6050903@dgreaves.com>

Hi!

> > How hard does the machine freeze? Can you use sysrq?
> > If so, please dump sysrq-t.
>
> I suspect there is a problem writing to the consoles...
>
> I recompiled (rc4+patch) with sysrq support, suspended,
> resumed and tried sysrq-t but got no output.
>
> I *can* change VTs and see the various login prompts,
> bitmap messages and the console messages. Caps/Num lock
> lights work.
>
> Fearing incompetence I tried sysrq-s, sysrq-u and sysrq-b and
> got a reboot, so sysrq is OK.

Can you increase the console loglevel, by killing klogd or with sysrq-9?
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

From owner-xfs@oss.sgi.com Fri Jun 8 08:58:14 2007
Date: Fri, 08 Jun 2007 10:58:12 -0500
From: Eric Sandeen
To: Jaap Struyk
Cc: xfs@oss.sgi.com
Subject: Re: ways to restore data from crashed disk
Message-ID: <46697C94.40401@sandeen.net>
In-Reply-To: <4668D51F.8010804@deserver.nl>

Jaap Struyk wrote:
> Eric Sandeen schreef:
>
>> when you are talking about sizes, do you mean space used (du) or max
>> offset (ls -l)? max offset should be the same for your image file as
>> for your original device... 120G.
>
> ls -l
> But I don't know what to trust anymore; if I look with gparted at my
> partitions, the old disk gives me a partition of 140G with 106G used
> space. My new disk has a partition of 200G with 166G used space.
> If I create a new xfs partition it has about 10% used space (according
> to gparted; I suspect that's the size of the log?), so of the 166G
> on the new disk, 146G is the "real" used space, and that should be the
> size of the image file (no matter what ls -l tells me).
> Is this correct?

Ok. repair is trying to read a superblock at:

superblock read failed, offset 103376846848, size 2048, ag 11, rval 0

103376846848 bytes... or about 96 GB (base 2) (or 103 GB base 10).

If ls -l on your image file is not at least that big, of course it
can't read it. And if that's smaller than your filesystem, then the
image isn't right... from your db output:

xfs_db> sb 0
xfs_db> p
magicnum = 0x58465342
blocksize = 4096
dblocks = 36710528

it looks like the original filesystem was bigger than your image:

4096 * 36710528 = 150366322688  <-- 140 GB

so it looks like your image file is not correct... I'm not familiar
with the tool you're using; is it somehow compressing a sparse file or
something like that?
-Eric

From owner-xfs@oss.sgi.com Fri Jun 8 09:04:00 2007
Date: Fri, 8 Jun 2007 18:03:18 +0200
From: Iustin Pop
To: David Chinner
Cc: Ruben Porras, xfs@oss.sgi.com, cw@f00f.org
Subject: Re: XFS shrink functionality
Message-ID: <20070608160318.GA25579@teal.hq.k1024.org>
In-Reply-To: <20070608151223.GF86004887@sgi.com>

On Sat, Jun 09, 2007 at 01:12:23AM +1000, David Chinner wrote:
> > I took a look at both items since this discussion started. And
> > honestly, I think 1) is harder than 4), so you're welcome to work on
> > it :) The points that make it harder are that, per David's
> > suggestion, there needs to be:
> > - define two new transaction types
>
> One new transaction type:
>
>	XFS_TRANS_AGF_FLAGS
>
> and an extension to xfs_alloc_log_agf() is about all that is needed
> there.
>
> See the patch here:
>
> http://oss.sgi.com/archives/xfs/2007-04/msg00103.html

Ah, I see now. I was wondering how one can enable the new bits (CVS
xfs_db shows btreeblks, but the 'version' cmd doesn't allow changing
them); it seems that manual xfs_db work + xfs_repair enables them.

> For an example of a very similar transaction to what is needed (look
> at xfs_log_sbcount()) and a very similar addition to the AGF
> (agf_btreeblks).

Just a question: why do you think this per-AG bit needs to be
persistent? I'm just curious. When I first thought about this, I was
thinking this should be an in-core flag only, like the freeze flag is
for the filesystem. The idea being that you don't need to recover this
state after a crash - there is no actual state, just restart the shrink
operation if you want. And no actual filesystem state (e.g. space
allocation or such) changes when you toggle an AG non-allocatable. This
would allow a much simpler implementation of the 'no-alloc' part.

> > - update the ondisk-format (!), if we want persistence of these flags;
> > luckily, there are two spare fields in the AGF structure.
>
> Better to expand, I think. The AGF is a sector in length - we can
> expand the structure as we need up to this size without fear, esp. as
> the part of the sector outside the structure is guaranteed to be
> zero. i.e. we can add a flags field to the end of the AGF
> structure - old filesystems simply read it as "no flags set" and
> old kernels never look at those bits....

Yes, makes sense. Just to make sure: the xfs_agf_t, xfs_agi_t and
xfs_sb_t structures as defined in xfs_sb.h and xfs_ag.h are what is
actually on disk, right? Adding to them, defining the new bits (i.e.
XFS_AGF_FLAGS) and bumping up XFS_AGF_ALL_BITS should take care of the
on-disk part?

> > Open questions (re. point 4):
> > - the filesystem document says agf->agf_btreeblks is maintained only
> > in case we have an extended flag active for the filesystem
> > (XFS_SB_VERSION2_LAZYSBCOUNTBIT); is this true? Without this, I'm
> > not sure how to calculate this number of blocks nicely.
>
> Yes, that is true. There's a pre-req for shrinking for the moment :/
>
> > - or can I assume that an empty AG will *always* have agf_levels = 1
> > for both btrees, so there are no extra blocks actually used for the
> > btrees (except for the two reserved ones at the beginning of the AG)?
>
> Yes, that is a valid assumption.

Ok, perfect. This then eliminates the need for LAZYSBCOUNTBIT.

Just one more question: can I *read* from the mp->m_perag structure, or
do I need a lock even for reads, i.e. down_read, read the fields,
up_read? (As you can see, I don't have much experience w.r.t. kernel
programming.)

> > - can I assume that an AG with agi->icount == agi->ifree == 0 will
> > have no blocks used for the inode btrees (logically yes, but I'm not
> > sure)
>
> yes.

Good. Thanks for your explanations. A patch for shrink when the AGs are
empty will be simpler and nicer than what I have now.
iustin

From owner-xfs@oss.sgi.com Fri Jun 8 12:09:31 2007
Date: Fri, 08 Jun 2007 20:09:25 +0100
From: David Greaves
To: David Chinner
Cc: Tejun Heo, Linus Torvalds, "Rafael J. Wysocki", xfs@oss.sgi.com, 'linux-kernel@vger.kernel.org', linux-pm, Neil Brown
Subject: Re: 2.6.22-rc3 hibernate(?)
fails totally - regression (xfs on raid6)
Message-ID: <4669A965.20403@dgreaves.com>
In-Reply-To: <20070607222813.GG85884050@sgi.com>

I had this as a PS, then I thought, we could all be wasting our time...

I don't like these "Section mismatch" warnings, but that's because I'm
paranoid rather than because I know what they mean.
I'll be happier when someone says "That's OK, I know about them,
they're not the problem":

WARNING: arch/i386/kernel/built-in.o(.text+0x968f): Section mismatch: reference to .init.text: (between 'mtrr_bp_init' and 'mtrr_ap_init')
WARNING: arch/i386/kernel/built-in.o(.text+0x9781): Section mismatch: reference to .init.text: (between 'mtrr_bp_init' and 'mtrr_ap_init')
WARNING: arch/i386/kernel/built-in.o(.text+0x9786): Section mismatch: reference to .init.text: (between 'mtrr_bp_init' and 'mtrr_ap_init')
WARNING: arch/i386/kernel/built-in.o(.text+0xa25c): Section mismatch: reference to .init.text: (between 'get_mtrr_state' and 'mtrr_wrmsr')
WARNING: arch/i386/kernel/built-in.o(.text+0xa303): Section mismatch: reference to .init.text: (between 'get_mtrr_state' and 'mtrr_wrmsr')
WARNING: arch/i386/kernel/built-in.o(.text+0xa31b): Section mismatch: reference to .init.text: (between 'get_mtrr_state' and 'mtrr_wrmsr')
WARNING: arch/i386/kernel/built-in.o(.text+0xa344): Section mismatch: reference to .init.text: (between 'get_mtrr_state' and 'mtrr_wrmsr')
WARNING: arch/i386/kernel/built-in.o(.exit.text+0x19): Section mismatch: reference to .init.text: (between 'cache_remove_dev' and 'powernow_k6_exit')
WARNING: arch/i386/kernel/built-in.o(.data+0x2160): Section mismatch: reference to .init.text: (between 'thermal_throttle_cpu_notifier' and 'mce_work')
WARNING: kernel/built-in.o(.text+0x14502): Section mismatch: reference to .init.text: (between 'kthreadd' and 'init_waitqueue_head')

Andrew Morton said a couple of weeks ago:
> Could the people who write these bugs, please, like, fix them?
> It's not trivial noise. These things lead to kernel crashes.

Anyhow...

David Chinner wrote:
> sync just guarantees that metadata changes are logged and data is
> on disk - it doesn't stop the filesystem from doing anything after
> the sync...

No, but there are no apps accessing the filesystem. It's just available
for NFS serving. Seems safer before potentially hanging the machine?
Also, I made these changes to the kernel:

cu:/boot# diff config-2.6.22-rc4-TejuTst-dbg3-dirty config-2.6.22-rc4-TejuTst-dbg1-dirty
3,4c3,4
< # Linux kernel version: 2.6.22-rc4-TejuTst-dbg3
< # Thu Jun 7 20:00:34 2007
---
> # Linux kernel version: 2.6.22-rc4-TejuTst3
> # Thu Jun 7 10:59:21 2007
242,244c242
< CONFIG_PM_DEBUG=y
< CONFIG_DISABLE_CONSOLE_SUSPEND=y
< # CONFIG_PM_TRACE is not set
---
> # CONFIG_PM_DEBUG is not set

positive: I can now get sysrq-t :)
negative: if I build skge into the kernel, the behaviour changes, so I
can't run netconsole

Just to be sure, I tested and this kernel suspends/restores with /huge
unmounted. It also hangs without an umount, so the behaviour is the
same.

> Ok, so a clean inode is sufficient to prevent hibernate from working.
>
> So, what's different between a sync and a remount?
>
> do_remount_sb() does:
>
> 599          shrink_dcache_sb(sb);
> 600          fsync_super(sb);
>
> of which a sync does neither. sync does what fsync_super() does in a
> different sort of way, but does not call sync_blockdev() on each
> block device. It looks like those are the two main differences between
> sync and remount - remount trims the dentry cache and syncs the
> blockdev, sync doesn't.
>
> > > What about freezing the filesystem?
> >
> > cu:~# xfs_freeze -f /huge
> > cu:~# /usr/net/bin/hibernate
> > [but this doesn't even hibernate - same as the 'touch']
>
> I suspect that the frozen filesystem might cause other problems
> in the hibernate process. However, while a freeze calls
> sync_blockdev(), it does not trim the dentry cache.....
>
> So, rather than a remount before hibernate, let's see if we can
> remove the dentries some other way, to determine if removing excess
> dentries/inodes from the caches makes a difference.
> Can you do:
>
> # touch /huge/foo
> # sync
> # echo 1 > /proc/sys/vm/drop_caches
> # hibernate

success

> # touch /huge/bar
> # sync
> # echo 2 > /proc/sys/vm/drop_caches
> # hibernate

success

> # touch /huge/baz
> # sync
> # echo 3 > /proc/sys/vm/drop_caches
> # hibernate

success

So I added:

# touch /huge/bork
# sync
# hibernate

And it still succeeded - sigh.

So I thought a bit and did:

rm /huge/b* /huge/foo

> Clean boot
> # touch /huge/bar
> # sync
> # echo 2 > /proc/sys/vm/drop_caches
> # hibernate

hangs on suspend (sysrq-b doesn't work)

> Clean boot
> # touch /huge/baz
> # sync
> # echo 3 > /proc/sys/vm/drop_caches
> # hibernate

hangs on suspend (sysrq-b doesn't work)

So I rebooted and hibernated to make sure I'm not seeing random
behaviour - yep, hang on resume (as per usual).

Now I wonder if any other mounts have an effect... reboot and
umount /dev/hdb2 (an xfs fs) - hang on hibernate.

I'm confused. I'm going to order chinese takeaway and then find a
serial cable...

David

PS 2.6.21.1 works fine.
From owner-xfs@oss.sgi.com Fri Jun 8 12:47:43 2007
Date: Fri, 08 Jun 2007 21:47:38 +0200
From: Ruben Porras
To: David Chinner
Cc: xfs@oss.sgi.com, cw@f00f.org
Subject: Re: XFS shrink functionality
Message-Id: <1181332058.6790.1.camel@localhost>
In-Reply-To: <20070608151223.GF86004887@sgi.com>

On Saturday, 09.06.2007 at 01:12 +1000, David Chinner wrote:

Thank you, this last mail explains the pieces I should do pretty well :)

Cheers

From owner-xfs@oss.sgi.com Fri Jun 8 19:15:53 2007
Date: Sat, 9 Jun 2007 12:15:35 +1000
From: David Chinner
To: David Chinner, Ruben Porras, xfs@oss.sgi.com, cw@f00f.org
Subject: Re: XFS shrink functionality
Message-ID: <20070609021535.GG86004887@sgi.com>
In-Reply-To: <20070608160318.GA25579@teal.hq.k1024.org>

On Fri, Jun 08, 2007 at 06:03:18PM +0200, Iustin Pop wrote:
> On Sat, Jun 09, 2007 at 01:12:23AM +1000, David Chinner wrote:
> > > I took a look at both items since this discussion started. And
> > > honestly, I think 1) is harder than 4), so you're welcome to work
> > > on it :) The points that make it harder are that, per David's
> > > suggestion, there needs to be:
> > > - define two new transaction types
> >
> > One new transaction type:
> >
> >	XFS_TRANS_AGF_FLAGS
> >
> > and an extension to xfs_alloc_log_agf() is about all that is needed
> > there.
> >
> > See the patch here:
> >
> > http://oss.sgi.com/archives/xfs/2007-04/msg00103.html
>
> Ah, I see now.
> I was wondering how one can enable the new bits (CVS
> xfs_db shows the btreeblks but the 'version' cmd doesn't allow changing
> them); it seems that manual xfs_db work + xfs_repair allows them.

The xfs_db work needs to be wrapped up in xfs_admin. That's relatively simple to do, but the repair stage is needed to count the btree blocks and update the counter in each AGF. That could probably also be wrapped up in an xfs_db script so conversion wouldn't require you to run repair....

> > For an example of a very similar transaction to what is needed,
> > look at xfs_log_sbcount(), and for a very similar addition to
> > the AGF, look at xfs_btreeblks.

> Just a question: why do you think this per-AG bit needs to be persistent?

Shrinking is not the only reason why you might want to prevent allocation within an AG. While we might be able to get away with a totally in-memory flag for a shrink, I really don't want to have multiple mechanisms for doing roughly the same thing.

e.g. think of fault tolerance - you detect a free space btree corruption, so you prevent allocation and freeing in that AG (by setting the relevant bits) until you can come along and repair it. If you want to do online repair of this sort of corruption, then you need to be able to stop the trees from being used between the time that the corruption is detected and the time it is repaired. That may be longer than the filesystem is currently mounted...

> I'm just curious. When I first thought about this, I was thinking more like
> this should be an in-core flag only, like the freeze flag is for the
> filesystem. The idea being that you don't need to recover this state
> after a crash.

But a freeze is different - it's not modifying the filesystem, just bringing it down into a consistent state. A shrink is a modification operation, and so if it crashes halfway through, we need to ensure that recovery doesn't do silly things. Hence it is best to have all the state associated with the shrink journalled and recoverable, i.e. persistent.
> - there is no actual state, just restart the shrink
> operation if you want. And no actual filesystem state (e.g. space
> allocation or such) is happening when you toggle the AGs not
> allocatable. This would allow a much simpler implementation of the
> 'no-alloc' part.

True, but it would be much more limited in its potential use.

> > > - update the ondisk-format (!), if we want persistence of these flags;
> > > luckily, there are two spare fields in the AGF structure.
> >
> > Better to expand, I think. The AGF is a sector in length - we can
> > expand the structure as we need to this size without fear, esp. as
> > the part of the sector outside the structure is guaranteed to be
> > zero. i.e. we can add a flags field to the end of the AGF
> > structure - old filesystems simply read it as "no flags set" and
> > old kernels never look at those bits....

> Yes, makes sense. Just to make sure: the xfs_agf_t, xfs_agi_t and
> xfs_sb_t structures as defined in xfs_sb.h and xfs_ag.h are what
> is actually on-disk, right? Adding to them, defining the new bits, i.e.
> XFS_AGF_FLAGS, and bumping up XFS_AGF_ALL_BITS should take care of the
> on-disk part?

Don't forget to modify xfs_alloc_log_agf() as well ;)

Cheers,

Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sat Jun 9 00:50:56 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 09 Jun 2007 00:50:58 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.4 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from psmtp08.wxs.nl (psmtp08.wxs.nl [195.121.247.22]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l597osWt001879 for ; Sat, 9 Jun 2007 00:50:56 -0700 Received: from mail.deserver.nl (ip565e92ac.direct-adsl.nl [86.94.146.172]) by psmtp08.wxs.nl (iPlanet Messaging Server 5.2 HotFix 2.15 (built Nov 14 2006)) with ESMTP id <0JJC00LE9Z4U9Z@psmtp08.wxs.nl> for xfs@oss.sgi.com; Sat, 09 Jun 2007 09:50:54 +0200 (MEST) Received: from localhost (localhost [127.0.0.1]) by mail.deserver.nl (Postfix) with ESMTP id 1E8B6236CE for ; Sat, 09 Jun 2007 09:50:54 +0200 (CEST) Received: from [192.168.0.14] (unknown [192.168.0.14]) by mail.deserver.nl (Postfix) with ESMTP id 484D2236C4 for ; Sat, 09 Jun 2007 09:50:51 +0200 (CEST) Date: Sat, 09 Jun 2007 09:50:49 +0200 From: Jaap Struyk Subject: Re: ways to restore data from crashed disk In-reply-to: <46697C94.40401@sandeen.net> To: xfs@oss.sgi.com Message-id: <466A5BD9.7010604@deserver.nl> MIME-version: 1.0 Content-type: text/plain; charset=ISO-8859-1 Content-transfer-encoding: 7BIT User-Agent: Thunderbird 2.0.0.0 (X11/20070326) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: by mailscan at deserver.nl X-Enigmail-Version: 0.95.0 References: <465EA882.3030403@deserver.nl> <465ECF9B.2000500@sandeen.net> <46684A9B.90908@deserver.nl> <46685C5F.5090804@sandeen.net> <4668D51F.8010804@deserver.nl> <46697C94.40401@sandeen.net> X-Virus-Status: Clean X-archive-position: 11715 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com 
X-original-sender: japie@deserver.nl Precedence: bulk X-list: xfs

Eric Sandeen wrote:
> it looks like the original filesystem was bigger than your image:
>
> 4096*36710528
> 150366322688 <-- 140 GB
>
> so it looks like your image file is not correct... I'm not familiar with
> the tool you're using, is it somehow compressing a sparse file or
> something like that?

You are completely right! I used dd_rhelp instead of dd_rescue since every time a bad block is hit my disk isn't accessible anymore and I have to reboot. So I tried another approach: I used a Knoppix CD to boot and ran dd_rescue right away, and with Knoppix the disk stays readable. (So it seems the kernel Knoppix is using isn't killed by bad blocks; mine is.)

Concretely: I dumped my disk onto another, used xfs_repair without problems and got almost all my data back (except for the last 2G that was written).

Thanks for putting me on the right track, or else I would still be wrestling to recover an incomplete image file!

--
Regards,
Japie

From owner-xfs@oss.sgi.com Sat Jun 9 03:26:40 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 09 Jun 2007 03:26:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.5 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.lst.de (verein.lst.de [213.95.11.210]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l59AQbWt029229 for ; Sat, 9 Jun 2007 03:26:39 -0700 Received: from verein.lst.de (localhost [127.0.0.1]) by mail.lst.de (8.12.3/8.12.3/Debian-7.1) with ESMTP id l59AQbo6023340 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Sat, 9 Jun 2007 12:26:37 +0200 Received: (from hch@localhost) by verein.lst.de (8.12.3/8.12.3/Debian-6.6) id l59AQboW023338 for xfs@oss.sgi.com; Sat, 9 Jun 2007 12:26:37 +0200 Date: Sat, 9 Jun 2007 12:26:37 +0200 From: Christoph Hellwig To: xfs@oss.sgi.com Subject: [PATCH] fix 32bit build
Message-ID: <20070609102637.GA23294@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.3.28i X-Scanned-By: MIMEDefang 2.39 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11716 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@lst.de Precedence: bulk X-list: xfs

Signed-off-by: Christoph Hellwig

Index: linux-2.6-xfs/fs/xfs/xfs_mount.c
===================================================================
--- linux-2.6-xfs.orig/fs/xfs/xfs_mount.c	2007-06-09 11:20:51.000000000 +0200
+++ linux-2.6-xfs/fs/xfs/xfs_mount.c	2007-06-09 11:21:43.000000000 +0200
@@ -1154,7 +1154,9 @@ xfs_mountfs(
 	 * This may drive us straight to ENOSPC on mount, but that implies
 	 * we were already there on the last unmount.
 	 */
-	resblks = min_t(__uint64_t, mp->m_sb.sb_dblocks / 20, 1024);
+	resblks = mp->m_sb.sb_dblocks;
+	do_div(resblks, 20);
+	resblks = min_t(__uint64_t, resblks, 1024);
 	xfs_reserve_blocks(mp, &resblks, NULL);
 	return 0;

From owner-xfs@oss.sgi.com Sun Jun 10 09:40:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 10 Jun 2007 09:40:54 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.1 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_28, J_CHICKENPOX_34,J_CHICKENPOX_55,J_CHICKENPOX_65,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from astra.simleu.ro (astra.simleu.ro [80.97.18.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5AGeoWt024818 for ; Sun, 10 Jun 2007 09:40:51 -0700 Received: from teal.hq.k1024.org (84-75-124-135.dclient.hispeed.ch [84.75.124.135]) by astra.simleu.ro (Postfix) with ESMTP id 7700A69 for ; Sun, 10 Jun 2007 19:40:45 +0300 (EEST) Received: by teal.hq.k1024.org (Postfix, from userid 4004) id 01F8040A0A6; Sun, 10 Jun
2007 18:40:14 +0200 (CEST) Date: Sun, 10 Jun 2007 18:40:14 +0200 From: Iustin Pop To: xfs@oss.sgi.com Subject: [PATCH] Implement shrink of empty AGs Message-ID: <20070610164014.GA10936@teal.hq.k1024.org> Mail-Followup-To: xfs@oss.sgi.com MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="eHhjakXzOLJAF9wJ" Content-Disposition: inline X-Linux: This message was written on Linux X-Header: /usr/include gives great headers User-Agent: Mutt/1.5.13 (2006-08-11) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11717 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: iusty@k1024.org Precedence: bulk X-list: xfs

--eHhjakXzOLJAF9wJ
Content-Type: multipart/mixed; boundary="mojUlQ0s9EVzWg2t"
Content-Disposition: inline

--mojUlQ0s9EVzWg2t
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

The attached patch implements shrinking of completely empty allocation groups. The patch is against current CVS and modifies two files:

- xfs_trans.c, to remove two asserts which prevent lowering the number of AGs or filesystem blocks;
- xfs_fsops.c, where it does:
  - modify xfs_growfs_data() to branch to either xfs_growfs_data_private or xfs_shrinkfs_data_private depending on the new size of the fs
  - abstract the last part of xfs_growfs_data_private (the update of all the secondary superblocks) into a separate function, xfs_update_sb(), which is called both from shrink and grow
  - add the new xfs_shrinkfs_data_private function, mostly based on the growfs function

There are many printk()s left in the patch; I left them as they show where I compute some important values. There are also many FIXMEs in the comments showing what parts I didn't understand or was not sure about (not that these are the only ones...).
Probably for a real patch, XFS-specific debug hooks need to be added and the printk()s removed.

The patch works on UML and QEMU virtual machines, both in UP and SMP. I just tested many shrink/grow operations and verified with xfs_repair that the fs is not corrupted. The free space counters seem to be correct after shrink.

Note that you also need to remove the check from xfs_growfs.c that disallows shrinking the filesystem.

regards,
iustin

--mojUlQ0s9EVzWg2t
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=patch-nice-4
Content-Transfer-Encoding: quoted-printable

diff -X ignore -urN linux-2.6-xfs.cvs-orig/fs/xfs/xfs_fsops.c linux-2.6-xfs.shrink/fs/xfs/xfs_fsops.c
--- linux-2.6-xfs.cvs-orig/fs/xfs/xfs_fsops.c	2007-06-09 18:56:21.509308225 +0200
+++ linux-2.6-xfs.shrink/fs/xfs/xfs_fsops.c	2007-06-10 18:32:36.074856477 +0200
@@ -112,6 +112,53 @@
 	return 0;
 }
 
+static void xfs_update_sb(
+	xfs_mount_t	*mp,		/* mount point for filesystem */
+	xfs_agnumber_t	nagimax,
+	xfs_agnumber_t	nagcount)	/* new number of a.g. */
+{
+	xfs_agnumber_t	agno;
+	xfs_buf_t	*bp;
+	xfs_sb_t	*sbp;
+	int		error;
+
+	/* New allocation groups fully initialized, so update mount struct */
+	if (nagimax)
+		mp->m_maxagi = nagimax;
+	if (mp->m_sb.sb_imax_pct) {
+		__uint64_t icount = mp->m_sb.sb_dblocks * mp->m_sb.sb_imax_pct;
+		do_div(icount, 100);
+		mp->m_maxicount = icount << mp->m_sb.sb_inopblog;
+	} else
+		mp->m_maxicount = 0;
+	for (agno = 1; agno < nagcount; agno++) {
+		error = xfs_read_buf(mp, mp->m_ddev_targp,
+			XFS_AGB_TO_DADDR(mp, agno, XFS_SB_BLOCK(mp)),
+			XFS_FSS_TO_BB(mp, 1), 0, &bp);
+		if (error) {
+			xfs_fs_cmn_err(CE_WARN, mp,
+				"error %d reading secondary superblock for ag %d",
+				error, agno);
+			break;
+		}
+		sbp = XFS_BUF_TO_SBP(bp);
+		xfs_xlatesb(sbp, &mp->m_sb, -1, XFS_SB_ALL_BITS);
+		/*
+		 * If we get an error writing out the alternate superblocks,
+		 * just issue a warning and continue.  The real work is
+		 * already done and committed.
+		 */
+		if (!(error = xfs_bwrite(mp, bp))) {
+			continue;
+		} else {
+			xfs_fs_cmn_err(CE_WARN, mp,
+				"write error %d updating secondary superblock for ag %d",
+				error, agno);
+			break;	/* no point in continuing */
+		}
+	}
+}
+
 static int
 xfs_growfs_data_private(
 	xfs_mount_t		*mp,		/* mount point for filesystem */
@@ -135,7 +182,6 @@
 	xfs_rfsblock_t		nfree;
 	xfs_agnumber_t		oagcount;
 	int			pct;
-	xfs_sb_t		*sbp;
 	xfs_trans_t		*tp;
 
 	nb = in->newblocks;
@@ -356,44 +402,228 @@
 	if (error) {
 		return error;
 	}
-	/* New allocation groups fully initialized, so update mount struct */
-	if (nagimax)
-		mp->m_maxagi = nagimax;
-	if (mp->m_sb.sb_imax_pct) {
-		__uint64_t icount = mp->m_sb.sb_dblocks * mp->m_sb.sb_imax_pct;
-		do_div(icount, 100);
-		mp->m_maxicount = icount << mp->m_sb.sb_inopblog;
-	} else
-		mp->m_maxicount = 0;
-	for (agno = 1; agno < nagcount; agno++) {
-		error = xfs_read_buf(mp, mp->m_ddev_targp,
-			XFS_AGB_TO_DADDR(mp, agno, XFS_SB_BLOCK(mp)),
-			XFS_FSS_TO_BB(mp, 1), 0, &bp);
+	xfs_update_sb(mp, nagimax, nagcount);
+	return 0;
+
+ error0:
+	xfs_trans_cancel(tp, XFS_TRANS_ABORT);
+	return error;
+}
+
+static int
+xfs_shrinkfs_data_private(
+	xfs_mount_t		*mp,	/* mount point for filesystem */
+	xfs_growfs_data_t	*in)	/* growfs data input struct */
+{
+	xfs_agf_t		*agf;
+	xfs_agnumber_t		agno;
+	xfs_buf_t		*bp;
+	int			dpct;
+	int			error;
+	xfs_agnumber_t		nagcount;	/* new AG count */
+	xfs_agnumber_t		oagcount;	/* old AG count */
+	xfs_agnumber_t		nagimax = 0;
+	xfs_rfsblock_t		nb, nb_mod;
+	xfs_rfsblock_t		dbdelta;	/* will be used as a
+						   check that we
+						   shrink the fs by
+						   the correct number
+						   of blocks */
+	xfs_rfsblock_t		fdbdelta;	/* will keep track of
+						   how many ag blocks
+						   we need to
+						   remove */
+	int			pct;
+	xfs_trans_t		*tp;
+
+	nb = in->newblocks;
+	pct = in->imaxpct;
+	if (nb >= mp->m_sb.sb_dblocks || pct < 0 || pct > 100)
+		return XFS_ERROR(EINVAL);
+	dpct = pct - mp->m_sb.sb_imax_pct;
+	error = xfs_read_buf(mp, mp->m_ddev_targp,
+			XFS_FSB_TO_BB(mp, nb) - XFS_FSS_TO_BB(mp, 1),
+			XFS_FSS_TO_BB(mp, 1), 0, &bp);
+	if (error)
+		return error;
+	ASSERT(bp);
+	/* FIXME: we release the buffer here manually because we are
+	 * outside of a transaction? The other buffers read using the
+	 * functions which take a tp parameter are not released in
+	 * growfs
+	 */
+	xfs_buf_relse(bp);
+
+	/* Do basic checks (at the fs level) */
+	oagcount = mp->m_sb.sb_agcount;
+	nagcount = nb;
+	nb_mod = do_div(nagcount, mp->m_sb.sb_agblocks);
+	if(nb_mod) {
+		printk("not shrinking on an AG boundary (diff=%d)\n", nb_mod);
+		return XFS_ERROR(ENOSPC);
+	}
+	if(nagcount < 2) {
+		printk("refusing to shrink below 2 AGs\n");
+		return XFS_ERROR(ENOSPC);
+	}
+	if(nagcount >= oagcount) {
+		printk("number of AGs will not decrease\n");
+		return XFS_ERROR(EINVAL);
+	}
+	printk("Cur ag=%d, cur blocks=%llu\n",
+		mp->m_sb.sb_agcount, mp->m_sb.sb_dblocks);
+	printk("New ag=%d, new blocks=%d\n", nagcount, nb);
+
+	printk("Will resize from %llu to %d, delta is %llu\n",
+		mp->m_sb.sb_dblocks, nb, mp->m_sb.sb_dblocks - nb);
+	/* Check to see if we trip over the log section */
+	printk("logstart=%llu logblocks=%u\n",
+		mp->m_sb.sb_logstart, mp->m_sb.sb_logblocks);
+	if (nb < mp->m_sb.sb_logstart + mp->m_sb.sb_logblocks)
+		return XFS_ERROR(EINVAL);
+	/* dbdelta starts at the diff and must become zero */
+	dbdelta = mp->m_sb.sb_dblocks - nb;
+	tp = xfs_trans_alloc(mp, XFS_TRANS_GROWFS);
+	printk("reserving %d\n", XFS_GROWFS_SPACE_RES(mp) + dbdelta);
+	if ((error = xfs_trans_reserve(tp, XFS_GROWFS_SPACE_RES(mp) + dbdelta,
+			XFS_GROWDATA_LOG_RES(mp), 0, 0, 0))) {
+		xfs_trans_cancel(tp, 0);
+		return error;
+	}
+
+	fdbdelta = 0;
+
+	/* Per-AG checks */
+	/* FIXME: do we need to hold m_peraglock while doing this? */
+	/* I think that since we do read and write to the m_perag
+	 * stuff, we should be holding the lock for the entire walk &
+	 * modify of the fs
+	 */
+	/* Note that because we hold the lock, on any error+early
+	 * return, we must either release manually and return, or
+	 * jump to error0
+	 */
+	down_write(&mp->m_peraglock);
+	for(agno = oagcount - 1; agno >= nagcount; agno--) {
+		xfs_extlen_t	usedblks;	/* total used blocks in this a.g. */
+		xfs_extlen_t	freeblks;	/* free blocks in this a.g. */
+		xfs_agblock_t	aglen;		/* this ag's len */
+		struct xfs_perag	*pag;	/* the m_perag structure */
+
+		printk("doing agno=%d\n", agno);
+
+		pag = &mp->m_perag[agno];
+
+		error = xfs_alloc_read_agf(mp, tp, agno, 0, &bp);
 		if (error) {
-			xfs_fs_cmn_err(CE_WARN, mp,
-				"error %d reading secondary superblock for ag %d",
-				error, agno);
-			break;
+			goto error0;
 		}
-		sbp = XFS_BUF_TO_SBP(bp);
-		xfs_xlatesb(sbp, &mp->m_sb, -1, XFS_SB_ALL_BITS);
+		ASSERT(bp);
+		agf = XFS_BUF_TO_AGF(bp);
+		aglen = INT_GET(agf->agf_length, ARCH_CONVERT);
+
+		/* read the pagf/pagi if not already initialized */
+		/* agf should be initialized because of the above read_agf */
+		ASSERT(pag->pagf_init);
+		if (!pag->pagi_init) {
+			if ((error = xfs_ialloc_read_agi(mp, tp, agno, &bp)))
+				goto error0;
+			ASSERT(pag->pagi_init);
+		}
+
 		/*
-		 * If we get an error writing out the alternate superblocks,
-		 * just issue a warning and continue.  The real work is
-		 * already done and committed.
+		 * Check the inodes: as long as we have pagi_count ==
+		 * pagi_freecount == 0, then: a) we don't have to
+		 * update any global inode counters, and b) there are
+		 * no extra blocks in inode btrees
 		 */
-		if (!(error = xfs_bwrite(mp, bp))) {
-			continue;
-		} else {
-			xfs_fs_cmn_err(CE_WARN, mp,
-				"write error %d updating secondary superblock for ag %d",
-				error, agno);
-			break; /* no point in continuing */
+		if(pag->pagi_count > 0 ||
+		   pag->pagi_freecount > 0) {
+			printk("agi %d has %d inodes in total and %d free\n",
+				agno, pag->pagi_count, pag->pagi_freecount);
+			error = XFS_ERROR(ENOSPC);
+			goto error0;
+		}
+
+		/* Check the AGF: if levels[] == 1, then there should
+		 * be no extra blocks in the btrees beyond the ones
+		 * at the beginning of the AG
+		 */
+		if(pag->pagf_levels[XFS_BTNUM_BNOi] > 1 ||
+		   pag->pagf_levels[XFS_BTNUM_CNTi] > 1) {
+			printk("agf %d has level %d bt and %d cnt\n",
+				agno,
+				pag->pagf_levels[XFS_BTNUM_BNOi],
+				pag->pagf_levels[XFS_BTNUM_CNTi]);
+			error = XFS_ERROR(ENOSPC);
+			goto error0;
 		}
+
+		freeblks = pag->pagf_freeblks;
+		printk("Usage: %d prealloc, %d flcount\n",
+			XFS_PREALLOC_BLOCKS(mp), pag->pagf_flcount);
+
+		/* Done gathering data, check sizes */
+		usedblks = XFS_PREALLOC_BLOCKS(mp) + pag->pagf_flcount;
+		printk("agno=%d agf_length=%d computed used=%d"
+			" known free=%d\n", agno, aglen, usedblks, freeblks);
+
+		if(usedblks + freeblks != aglen) {
+			printk("agno %d is not free (%d blocks allocated)\n",
+				agno, aglen-usedblks-freeblks);
+			error = XFS_ERROR(ENOSPC);
+			goto error0;
+		}
+		dbdelta -= aglen;
+		printk("will lower with %d\n",
+			aglen - XFS_PREALLOC_BLOCKS(mp));
+		fdbdelta += aglen - XFS_PREALLOC_BLOCKS(mp);
+	}
+	/*
+	 * Check that we removed all blocks
+	 */
+	ASSERT(!dbdelta);
+	ASSERT(nagcount < oagcount);
+
+	printk("to free: %d, oagcount=%d, nagcount=%d\n",
+		fdbdelta, oagcount, nagcount);
+
+	xfs_trans_agblocks_delta(tp, -((long)fdbdelta));
+	xfs_trans_mod_sb(tp, XFS_TRANS_SB_AGCOUNT, nagcount - oagcount);
+	xfs_trans_mod_sb(tp, XFS_TRANS_SB_DBLOCKS, nb - mp->m_sb.sb_dblocks);
+	xfs_trans_mod_sb(tp, XFS_TRANS_SB_FDBLOCKS, -((int64_t)fdbdelta));
+
+	if (dpct)
+		xfs_trans_mod_sb(tp, XFS_TRANS_SB_IMAXPCT, dpct);
+	error = xfs_trans_commit(tp, 0);
+	if (error) {
+		up_write(&mp->m_peraglock);
+		return error;
 	}
+	/* Free memory as the number of AG has changed */
+	for (agno = nagcount; agno < oagcount; agno++)
+		if (mp->m_perag[agno].pagb_list)
+			kmem_free(mp->m_perag[agno].pagb_list,
+				sizeof(xfs_perag_busy_t) *
+				XFS_PAGB_NUM_SLOTS);
+
+	mp->m_perag = kmem_realloc(mp->m_perag,
+			sizeof(xfs_perag_t) * nagcount,
+			sizeof(xfs_perag_t) * oagcount,
+			KM_SLEEP);
+	/* FIXME: here we could instead just lower
+	 * nagimax to nagcount; is it better this way?
+	 */
+	/* FIXME: why is this flag unconditionally set in growfs? */
+	mp->m_flags |= XFS_MOUNT_32BITINODES;
+	nagimax = xfs_initialize_perag(XFS_MTOVFS(mp), mp, nagcount);
+	up_write(&mp->m_peraglock);
+
+	xfs_update_sb(mp, nagimax, nagcount);
 	return 0;
 
 error0:
+	up_write(&mp->m_peraglock);
 	xfs_trans_cancel(tp, XFS_TRANS_ABORT);
 	return error;
 }
@@ -435,7 +665,10 @@
 	int error;
 	if (!cpsema(&mp->m_growlock))
 		return XFS_ERROR(EWOULDBLOCK);
-	error = xfs_growfs_data_private(mp, in);
+	if(in->newblocks < mp->m_sb.sb_dblocks)
+		error = xfs_shrinkfs_data_private(mp, in);
+	else
+		error = xfs_growfs_data_private(mp, in);
 	vsema(&mp->m_growlock);
 	return error;
 }
@@ -633,7 +866,7 @@
 		xfs_force_shutdown(mp, SHUTDOWN_FORCE_UMOUNT);
 		thaw_bdev(sb->s_bdev, sb);
 	}
-	
+
 	break;
 	}
 	case XFS_FSOP_GOING_FLAGS_LOGFLUSH:
diff -X ignore -urN linux-2.6-xfs.cvs-orig/fs/xfs/xfs_trans.c linux-2.6-xfs.shrink/fs/xfs/xfs_trans.c
--- linux-2.6-xfs.cvs-orig/fs/xfs/xfs_trans.c	2007-06-05 17:40:51.000000000 +0200
+++ linux-2.6-xfs.shrink/fs/xfs/xfs_trans.c	2007-06-07 23:01:03.000000000 +0200
@@ -503,11 +503,9 @@
 		tp->t_res_frextents_delta += delta;
 		break;
 	case XFS_TRANS_SB_DBLOCKS:
-		ASSERT(delta > 0);
 		tp->t_dblocks_delta += delta;
 		break;
 	case XFS_TRANS_SB_AGCOUNT:
-		ASSERT(delta > 0);
 		tp->t_agcount_delta += delta;
 		break;
 	case XFS_TRANS_SB_IMAXPCT:

--mojUlQ0s9EVzWg2t--

--eHhjakXzOLJAF9wJ
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: Digital signature
Content-Disposition: inline

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFGbClu4mx23Z34NMIRAjkXAKCEQ32nH3pbybjrl8aqLDx2rdLGfwCgjfye
BFVIpTFuy+oLETPTf3BF+yA=
=bsEN
-----END PGP SIGNATURE-----

--eHhjakXzOLJAF9wJ--

From owner-xfs@oss.sgi.com Sun Jun 10 13:17:14 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 10 Jun 2007 13:17:19 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.3 required=5.0 tests=AWL,BAYES_99,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5AKHDWt002180 for ; Sun, 10 Jun 2007 13:17:14 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 94E75B0000F1; Sun, 10 Jun 2007 16:17:13 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 8FCA450001A7; Sun, 10 Jun 2007 16:17:13 -0400 (EDT) Date: Sun, 10 Jun 2007 16:17:13 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Iain Rauch cc: Daniel Korstad , Bill Davidsen , Neil Brown , linux-raid@vger.kernel.org, xfs@oss.sgi.com Subject: Re: RAID 6 grow problem In-Reply-To: Message-ID: References: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11718 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs

After you grew the
RAID I am unsure if the XFS filesystem will 'know' about these changes and optimize appropriately, there are sunit= and swidth= you can pass as mount options. However, since you're on the PCI bus, it has to calculate parity information for more drives and it will no doubt be slower. Here is an example, I started with roughly 6 SATA/IDE drives on the PCI bus and the rebuilds use to run at 30-40MB/s, when I got to 10-12 drives, the rebuilds slowed down to 8MB/s, the PCI bus cannot handle it. I would stick with SATA if you want speed. Justin. On Sun, 10 Jun 2007, Iain Rauch wrote: > Well, it's all done now. Thank you all so much for your help. There was no > problem re-syncing from 8 to 16 drives, only that it took 4500 minutes. > > Anyway, here's a pic of the finished product. > http://iain.rauch.co.uk/images/BigNAS.png > > Speeds seem a little slower than before, no idea why. The only things I > changed was to put 4 drives instead of 2 on each SATA controller, and change > to XFS instead of ext3. Chunk size is still the same at 128K. I seem to be > getting around 22MB/s write whereas before it was nearer 30MB/s. This is > just transferring from a 1TB LaCie disk (2x500GB RAID0) so I don't have any > scientific evidence of comparisons. > > I also tried hdparm -tT and it showed almost 80MB/s for an individual drive > and 113MB/s for md0. > > The last things I want to know is am I right in thinking the maximum file > system size I can expand to is 16TB? And also, is it possible to shrink the > size of an array, if I wanted to build the disks into another array to > change file system or another reason? Lastly, would I take a performance hit > if I added USB/FireWire drives into the array - would I be better off > building another NAS and stick with SATA (I'm talking good year off here > hopefully the space will last that long). > > TIA > > > Iain > > > >> Sounds like you are well on your way. >> >> I am not too surprised on the time to completion. 
I probably >> underestimated/exaggerated a bit when I said after a few hours :) >> >> It took me over a day to grow one disk as well. But my experience was on a >> system with an older AMD 754 x64 Mother Board with a couple SATA on board and >> the rest on two PCI cards each with 4 SATA ports. So I have 8 SATA drives on >> my PCI (33Mhz x 4 bytes (32bits) = 133MB/s) bus of which is saturated >> basically after three drives. >> >> But this box sets in the basement and acts as my NAS. So for file access >> across the 100Mb/s network or wireless network, it does just fine. >> >> When I do hdparm -tT /dev/md1 I get read access speeds from 110MB/s - 130MB/s >> and for my individual drives at around 50 - 60 MB/s so the RAID6 outperforms >> (reads) any one drive and I am happy. Bonnie/Bonnie++ is probably a better >> tool for testing, but I was just looking for quick and dirty numbers. >> >> I have friends that have newer MB with half a dozen or almost a dozen SATA >> connectors and PCI-express SATA controller cards. Getting rid of the slow PCI >> bus limitation increases the speed by magnitudes... But this is another >> topic/thread... >> >> >> Congrats on your new kernel and progress! >> Cheers, >> Dan. >> >> ----- Original Message ----- >> From: Iain Rauch >> Sent: Tue, 6/5/2007 12:09pm >> To: Bill Davidsen ; Daniel Korstad ; Neil Brown ; linux-raid@vger.kernel.org; >> Justin Piszcz >> Subject: Re: RAID 6 grow problem >> >> >>>>>>> raid6 reshape wasn't added until 2.6.21. Before that only raid5 was >>>>>>> supported. >>>>>>> You also need to ensure that CONFIG_MD_RAID5_RESHAPE=y. >>>>>>> >>>>>> I don't see that in the config. Should I add it? Then reboot? >>>>>> >> Don't know how I missed it first time, but that is in my config. >> >>>>> You reported that you were running a 2.6.20 kernel, which doesn't >>>>> support raid6 reshape. 
>>>>> You need to compile a 2.6.21 kernel (or >>>>> apt-get install linux-image-2.6.21-1-amd64 >>>>> or whatever) and ensure that CONFIG_MD_RAID5_RESHAPE=y is in the >>>>> .config before compiling. >>>>> >>>> >>>> There only seems to be version 2.6.20 does this matter a lot? Also how do I >>>> specify what is in the config when using apt-get install? >>>> >>> >>> 2.6.20 doesn't support the feature you want, only you can tell if that >>> matters a lot. You don't, either get a raw kernel source and configure, >>> or run what the vendor provides for config. Sorry, those are the option. >> I have finally managed to compile a new kernel (2.6.21) and boot it. >> >>>>>> I used apt-get install mdadm to first install it, which gave me 2.5.x then >>>>>> I >>>>>> downloaded the new source and typed make then make install. Now mdadm -V >>>>>> shows "mdadm - v2.6.2 - 21st May 2007". >>>>>> Is there anyway to check it is installed correctly? >>>>> >>>>> The "mdadm -V" check is sufficient. >>>> >>>> Are you sure because at first I just did the make/make install and mdadm -V >>>> did tell me v2.6.2 but I don't believe it was installed properly because it >>>> didn't recognise my array nor did it make a config file, and cat >>>> /proc/mdstat said no file/directory?? >>> mdadm doesn't control the /proc/mdstat file, it's written by the kernel. >>> The kernel had no active array to mention in the mdstat file. >> I see, thanks. I think it is working OK. >> >> I am currently growing a 4 disk array to an 8 disk array as a test, and if >> it that works I'll use those 8 and add them to my original 8 to make a 16 >> disk array. This will be a while yet as this first grow is going to take >> 2000 minutes. It looks like it's going to work fine, but I'll report back in >> a couple of days. >> >> Thank you so much for your help; Dan, Bill, Neil, Justin and everyone else. >> >> The last thing I would like to know is if it is possible to 'clean' the >> super blocks to make sure they are all OK. 
TIA. >> >> >> Iain > > From owner-xfs@oss.sgi.com Mon Jun 11 00:09:23 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 00:09:28 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.5 required=5.0 tests=BAYES_50,FH_HOST_EQ_D_D_D_D, FH_HOST_EQ_D_D_D_DB,RDNS_DYNAMIC,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from stitch.e-626.net (60.153.216.81.static.spa.siw.siwnet.net [81.216.153.60]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5B79LWt027819 for ; Mon, 11 Jun 2007 00:09:23 -0700 Received: from [130.100.71.110] (hidden-user@146.175.241.83.in-addr.dgcsystems.net [83.241.175.146]) (authenticated bits=0) by stitch.e-626.net (8.14.0/8.13.7) with ESMTP id l5B6pXVU022993 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NOT) for ; Mon, 11 Jun 2007 08:51:54 +0200 Subject: xfs_fsr allocation group optimization From: Johan Andersson To: xfs@oss.sgi.com Content-Type: multipart/mixed; boundary="=-wsRKX+nWyxHC7f7kq87l" Date: Mon, 11 Jun 2007 08:51:32 +0200 Message-Id: <1181544692.19145.44.camel@gentoo-johan.transmode.se> Mime-Version: 1.0 X-Mailer: Evolution 2.8.2.1 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: ClamAV 0.90.2/3398/Sun Jun 10 17:08:42 2007 on stitch.e-626.net X-Virus-Status: Clean X-archive-position: 11719 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: johan@e-626.net Precedence: bulk X-list: xfs --=-wsRKX+nWyxHC7f7kq87l Content-Type: text/plain Content-Transfer-Encoding: 7bit Hi! Last week I discovered that one of our volumes had 87%(!) file fragmentation. Number of extents per file was in the range of thousands! Nothing you are used to when it comes to XFS... The filesystem is 68% full, but was up to 99% full for a short period a couple of weeks earlier. 
The xfs_fsr utility is made for this kind of problem, but after reading more about xfs_fsr, in particular this: http://oss.sgi.com/archives/xfs/2003-02/msg00141.html I was more sceptical. So I ran some tests. And as Chris pointed out, when running xfs_fsr on a badly fragmented filesystem, it completely destroys the locality of files. Then I decided to try to fix this, so I wrote a dirty little proof-of-concept hack to xfs_fsr.c (diff against cvs attached) that finds one name for the inode it defrags, and places the temporary file in its parent directory. This way, it will restore the broken locality. This works fine: after running it on the badly fragmented filesystem, both fragmentation and locality were better than ever! However, this fix is, as I said, "dirty". It uses find -inum to find a filename for an inode. This makes it quite slow. Not much of a problem for a one-time fix like this, but it's not very nice to put into a cron job. There must be a better way to find the filename. But I'm not familiar with the internals of XFS, so I thought I'd ask on this list: Does anyone know of a good way to find one filename that points to a certain inode? I can't use xfs_db ncheck, as the filesystem is mounted. Or is there a way to tell XFS to place the extents of a newly created file in a certain AG? 
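As a rough sketch of the find -inum lookup being discussed (an illustration only, not part of the attached patch; `path_for_inode` is a made-up helper name): the same scan can be done in-process with nftw(3), which avoids forking find for every inode, although it is still a full walk of the directory tree.

```c
/*
 * Sketch (not from the attached patch): find one pathname for a given
 * inode number by walking the tree in-process with nftw(3), instead of
 * popen("find ... -inum ... -print0").  path_for_inode() is a made-up
 * helper name for illustration.
 */
#define _GNU_SOURCE
#include <ftw.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

static ino_t target_ino;
static char found_path[4096];

static int check_entry(const char *path, const struct stat *sb,
                       int typeflag, struct FTW *ftwbuf)
{
    (void)typeflag; (void)ftwbuf;
    if (sb->st_ino == target_ino) {
        snprintf(found_path, sizeof(found_path), "%s", path);
        return 1;               /* non-zero return stops the walk */
    }
    return 0;
}

/* Returns the first path under 'root' whose inode is 'ino', else NULL. */
const char *path_for_inode(const char *root, ino_t ino)
{
    target_ino = ino;
    found_path[0] = '\0';
    /* FTW_MOUNT keeps the walk on one filesystem, like find -xdev */
    if (nftw(root, check_entry, 32, FTW_PHYS | FTW_MOUNT) > 0)
        return found_path;
    return NULL;
}
```

This is still O(files) per lookup, which is the slowness described above; it only removes the fork/exec and output-parsing overhead of shelling out to find.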
/Johan Andersson --=-wsRKX+nWyxHC7f7kq87l Content-Disposition: attachment; filename=xfs_fsr-agfix.diff Content-Type: text/x-patch; name=xfs_fsr-agfix.diff; charset=UTF-8 Content-Transfer-Encoding: 7bit Index: xfs_fsr.c =================================================================== RCS file: /cvs/xfs-cmds/xfsdump/fsr/xfs_fsr.c,v retrieving revision 1.28 diff -u -r1.28 xfs_fsr.c --- xfs_fsr.c 24 May 2007 03:59:42 -0000 1.28 +++ xfs_fsr.c 11 Jun 2007 06:42:29 -0000 @@ -655,10 +655,12 @@ int ret; __s32 buflenout; xfs_bstat_t buf[GRABSZ]; - char fname[64]; + char fname[PATH_MAX+1]; char *tname; + char cmd[64]; jdm_fshandle_t *fshandlep; xfs_ino_t lastino = startino; + FILE *pfname; fsrprintf(_("%s start inode=%llu\n"), mntdir, (unsigned long long)startino); @@ -714,11 +716,20 @@ continue; } - /* Don't know the pathname, so make up something */ - sprintf(fname, "ino=%lld", (long long)p->bs_ino); - - /* Get a tmp file name */ - tname = tmp_next(mntdir); + /* Find (one) filename that this inode belongs to. 
*/ + snprintf(cmd, sizeof(cmd), "find %s -xdev -inum %lld -print0", mntdir, (long long)p->bs_ino); + pfname = popen(cmd, "r"); + fgets(fname, sizeof(fname), pfname); + pclose(pfname); + + if (strlen(fname)) { + tname = gettmpname(fname); + } else { + /* Don't know the pathname, so make up something */ + snprintf(fname, sizeof(fname), "ino=%lld", (long long)p->bs_ino); + /* Get a tmp file name */ + tname = tmp_next(mntdir); + } ret = fsrfile_common(fname, tname, mntdir, fd, p); @@ -1297,6 +1308,8 @@ strcat(buf, sbuf); + fsrprintf(_("gettmpname: fname=%s, buf=%s\n"), fname, buf); + return(buf); } --=-wsRKX+nWyxHC7f7kq87l-- From owner-xfs@oss.sgi.com Mon Jun 11 00:36:07 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 00:36:10 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=5.0 tests=AWL,BAYES_95 autolearn=no version=3.2.0-pre1-r499012 Received: from smtp113.sbc.mail.re2.yahoo.com (smtp113.sbc.mail.re2.yahoo.com [68.142.229.92]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5B7a5Wt003963 for ; Mon, 11 Jun 2007 00:36:06 -0700 Received: (qmail 57443 invoked from network); 11 Jun 2007 07:36:02 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@24.5.75.45 with login) by smtp113.sbc.mail.re2.yahoo.com with SMTP; 11 Jun 2007 07:36:01 -0000 X-YMail-OSG: vPiQo2IVM1nap1xdvuxvLbTszV7V7JKdx.51mgHC5aP4WIpyW3tUJorC39ZYz94k08865hs8iQ-- Received: by tuatara.stupidest.org (Postfix, from userid 10000) id 2AF2C182612A; Mon, 11 Jun 2007 00:35:59 -0700 (PDT) Date: Mon, 11 Jun 2007 00:35:59 -0700 From: Chris Wedgwood To: Johan Andersson Cc: xfs@oss.sgi.com Subject: Re: xfs_fsr allocation group optimization Message-ID: <20070611073559.GA26257@tuatara.stupidest.org> References: <1181544692.19145.44.camel@gentoo-johan.transmode.se> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: 
<1181544692.19145.44.camel@gentoo-johan.transmode.se> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11720 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs On Mon, Jun 11, 2007 at 08:51:32AM +0200, Johan Andersson wrote: > So I run some tests. And as Chris pointed out, when running xfs_fsr > on a badly fragmented filesystem, it completely destroys the > locality of files. You can always do something like: find path/to/defrag/ -type f -print0 | xargs -r0 xfs_fsr -v and check /var/log/daemon.log (or whatever) for a progress report. This will of course only make one pass, so you might want to wrap it. But unless you're really low on space it's probably not useful to make many passes anyhow. You might always want the -d argument (not sure if the man page has this). From owner-xfs@oss.sgi.com Mon Jun 11 01:44:15 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 01:44:18 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=5.0 tests=AWL,BAYES_50, FH_HOST_EQ_D_D_D_D,FH_HOST_EQ_D_D_D_DB,RDNS_DYNAMIC autolearn=no version=3.2.0-pre1-r499012 Received: from stitch.e-626.net (60.153.216.81.static.spa.siw.siwnet.net [81.216.153.60]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5B8iDWt018430 for ; Mon, 11 Jun 2007 01:44:15 -0700 Received: from [130.100.71.110] (hidden-user@146.175.241.83.in-addr.dgcsystems.net [83.241.175.146]) (authenticated bits=0) by stitch.e-626.net (8.14.0/8.13.7) with ESMTP id l5B8hUW7023242 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NOT); Mon, 11 Jun 2007 10:43:51 +0200 Subject: Re: xfs_fsr allocation group optimization From: Johan Andersson To: Chris Wedgwood Cc: xfs@oss.sgi.com In-Reply-To: 
<20070611073559.GA26257@tuatara.stupidest.org> References: <1181544692.19145.44.camel@gentoo-johan.transmode.se> <20070611073559.GA26257@tuatara.stupidest.org> Content-Type: text/plain Date: Mon, 11 Jun 2007 10:43:29 +0200 Message-Id: <1181551409.19145.57.camel@gentoo-johan.transmode.se> Mime-Version: 1.0 X-Mailer: Evolution 2.8.2.1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: ClamAV 0.90.2/3398/Sun Jun 10 17:08:42 2007 on stitch.e-626.net X-Virus-Status: Clean X-archive-position: 11721 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: johan@e-626.net Precedence: bulk X-list: xfs But xfs_fsr sorts inodes by number of extents in order to optimise free space. If you run the "find ..." version on a badly fragmented file system, it won't optimise much at all, since there won't be free chunks large enough for big files. I tried to just copy one file and remove the original in the file system mentioned, but it only got worse. Running xfs_fsr on the whole file system got it down to 0% file frag, 1 extent/file. So xfs_fsr on a whole file system is much more effective than xfs_fsr on each file in the file system, especially if the file system is near full. /Johan Andersson On Mon, 2007-06-11 at 00:35 -0700, Chris Wedgwood wrote: > On Mon, Jun 11, 2007 at 08:51:32AM +0200, Johan Andersson wrote: > > > So I run some tests. And as Chris pointed out, when running xfs_fsr > > on a badly fragmented filesystem, it completely destroys the > > locality of files. > > You can always do something like: > > find path/to/defrag/ -type f -print0 | xargs -r0 xfs_fsr -v > > and check /var/log/daemon.log (or whatever) for a progress report. > > This will of course only make one pass, so you might want to wrap it. > But unless you're really low on space it's probably not useful to make > many passes anyhow. 
> > You might always want the -d argument (not sure if the man page has > this). > > From owner-xfs@oss.sgi.com Mon Jun 11 02:01:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 02:01:44 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.7 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from smtp110.sbc.mail.re2.yahoo.com (smtp110.sbc.mail.re2.yahoo.com [68.142.229.95]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5B91eWt024622 for ; Mon, 11 Jun 2007 02:01:41 -0700 Received: (qmail 71272 invoked from network); 11 Jun 2007 09:01:40 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@24.5.75.45 with login) by smtp110.sbc.mail.re2.yahoo.com with SMTP; 11 Jun 2007 09:01:40 -0000 X-YMail-OSG: Ic34MA8VM1lA3NFb1EyIZfzp_zLNDzCWDmc25onHzQq..FNY3Hr7USK3WfOFnUJJ4gYBw0eCLA-- Received: by tuatara.stupidest.org (Postfix, from userid 10000) id 836F3182612A; Mon, 11 Jun 2007 02:01:38 -0700 (PDT) Date: Mon, 11 Jun 2007 02:01:38 -0700 From: Chris Wedgwood To: Johan Andersson Cc: xfs@oss.sgi.com Subject: Re: xfs_fsr allocation group optimization Message-ID: <20070611090138.GA28907@tuatara.stupidest.org> References: <1181544692.19145.44.camel@gentoo-johan.transmode.se> <20070611073559.GA26257@tuatara.stupidest.org> <1181551409.19145.57.camel@gentoo-johan.transmode.se> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1181551409.19145.57.camel@gentoo-johan.transmode.se> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11722 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs On Mon, Jun 11, 2007 at 10:43:29AM +0200, Johan Andersson wrote: > But xfs_fsr sorts inodes by 
number of extents in order to optimise > free space. yeah, and the 'window' it uses is fairly small, iirc it does a bulkstat of 64 inodes at a time and then sorts from the worst to the best. i have a tree somewhere[1] that has this as a command line option as well as a few other things > If you run the "find ..." on a badly fragmented file system, it > won't optimise much at all, since there won't be free chunks large > enough for big files. yes, that's always going to be the case though with the current simple but arguably myopic algorithm > I tried to just copy one file and remove the original in the file > system mentioned, but it only got worse. long-term we need to teach things like cp and rsync about preallocation, but the generic APIs for this haven't been fully fleshed out. without that you're almost certainly going to get some level of fragmentation for files over a certain size (smaller files usually end up being contiguous because of delayed allocation) > Running xfs_fsr on the whole file system got it down to 0% > file frag, 1 extent/file. using "find .... xfs_fsr" you get temporary files in the same AG as the file you are defragmenting, avoiding the spreading-out effect, but this might not be the least-fragmented file you can get. what's really needed is an attempt to find space near the original file if possible, and if not, then an option to try harder looking in other AGs > So xfs_fsr on a whole file system is much more effective than > xfs_fsr on each file in the file system. Especially if the file > system is near full. well, xfs_fsr doesn't work very well if the filesystem is near full. for the most part it works very well if you have a reasonable amount of free space (say 5%), but what's really needed is a smarter way to defragment, perhaps by tweaking the allocator to avoid some AGs or parts of the device so we can bubble things about. if people are serious about shrink work maybe those APIs could assist here. 
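The preallocation idea for copy tools mentioned above can be sketched with the portable posix_fallocate(3). This is a hedged illustration, not anything from the thread: `copy_prealloc` is a made-up name, and real cp/rsync integration would involve more than this.

```c
/*
 * Sketch of the preallocation idea for copy tools (illustration only;
 * copy_prealloc is a made-up name): reserve the destination's full size
 * with posix_fallocate(3) before writing, so the filesystem's allocator
 * gets a chance to place the file in one contiguous chunk.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int copy_prealloc(const char *src, const char *dst)
{
    int in = open(src, O_RDONLY);
    if (in < 0)
        return -1;

    struct stat sb;
    if (fstat(in, &sb) < 0) {
        close(in);
        return -1;
    }

    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, sb.st_mode & 0777);
    if (out < 0) {
        close(in);
        return -1;
    }

    /* Best effort: a failed preallocation just loses the optimisation. */
    if (sb.st_size > 0)
        (void)posix_fallocate(out, 0, sb.st_size);

    char buf[65536];
    ssize_t n;
    while ((n = read(in, buf, sizeof(buf))) > 0) {
        if (write(out, buf, (size_t)n) != n) {
            n = -1;
            break;
        }
    }
    close(in);
    close(out);
    return n < 0 ? -1 : 0;
}
```

Whether the reservation actually becomes one extent is up to the filesystem; the point is only that the allocator sees the full size before the first byte is written.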
[1] sorry, i'm not sure where. there are also options not to touch recently created files, as that often means they are in use, and to not bother doing the defragment unless the improvement is fairly significant, so it tends to spend its time doing the work that makes the biggest, most useful impact. i think the tree that has all these changes ended up being ugly, and the changes weren't cleanly separated and thus were never posted. if i find the tree i'll just publish it as-is From owner-xfs@oss.sgi.com Mon Jun 11 02:16:37 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 02:16:40 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.8 required=5.0 tests=AWL,BAYES_50, FH_HOST_EQ_D_D_D_D,FH_HOST_EQ_D_D_D_DB,RDNS_DYNAMIC autolearn=no version=3.2.0-pre1-r499012 Received: from stitch.e-626.net (60.153.216.81.static.spa.siw.siwnet.net [81.216.153.60]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5B9GZWt027817 for ; Mon, 11 Jun 2007 02:16:37 -0700 Received: from [130.100.71.110] (hidden-user@146.175.241.83.in-addr.dgcsystems.net [83.241.175.146]) (authenticated bits=0) by stitch.e-626.net (8.14.0/8.13.7) with ESMTP id l5B9Fupf023361 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NOT); Mon, 11 Jun 2007 11:16:17 +0200 Subject: Re: xfs_fsr allocation group optimization From: Johan Andersson To: Chris Wedgwood Cc: xfs@oss.sgi.com In-Reply-To: <20070611090138.GA28907@tuatara.stupidest.org> References: <1181544692.19145.44.camel@gentoo-johan.transmode.se> <20070611073559.GA26257@tuatara.stupidest.org> <1181551409.19145.57.camel@gentoo-johan.transmode.se> <20070611090138.GA28907@tuatara.stupidest.org> Content-Type: text/plain Date: Mon, 11 Jun 2007 11:15:56 +0200 Message-Id: <1181553356.19145.65.camel@gentoo-johan.transmode.se> Mime-Version: 1.0 X-Mailer: Evolution 2.8.2.1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version 
devel-120207 on oss.sgi.com X-Virus-Scanned: ClamAV 0.90.2/3398/Sun Jun 10 17:08:42 2007 on stitch.e-626.net X-Virus-Status: Clean X-archive-position: 11723 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: johan@e-626.net Precedence: bulk X-list: xfs On Mon, 2007-06-11 at 02:01 -0700, Chris Wedgwood wrote: > using "find .... xfs_fsr" you get temporary files in the same AG as > the file you are defragmenting, avoiding the spreading out effect, > but this might not be the least-fragmented file you can get > > what's really needed is an attempt to find space near the original > file if possible and if not then an option to try harder looking in > other AGs This is exactly what the simple but ugly patch I attached achieves, by looking up the filename of the inode it defrags when doing a full file system defrag. And it works well, except that it spends a lot of time finding that file name. As I said, a better option would be if you could tell XFS which AG a newly created file should place its extents in. 
/Johan Andersson From owner-xfs@oss.sgi.com Mon Jun 11 02:30:42 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 02:30:44 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.1 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5B9UaWt032039 for ; Mon, 11 Jun 2007 02:30:40 -0700 Received: from cxfsmac10.melbourne.sgi.com (cxfsmac10.melbourne.sgi.com [134.14.55.100]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id TAA25823; Mon, 11 Jun 2007 19:30:29 +1000 Message-ID: <466D1635.3040005@sgi.com> Date: Mon, 11 Jun 2007 19:30:29 +1000 From: Donald Douwsma User-Agent: Thunderbird 2.0.0.0 (Macintosh/20070326) MIME-Version: 1.0 To: Utako Kusaka CC: xfs-oss Subject: Re: [PATCH] Fix xfs_quota command handling. References: <200705310422.AA05481@TNESG9305.tnes.nec.co.jp> In-Reply-To: <200705310422.AA05481@TNESG9305.tnes.nec.co.jp> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11724 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: donaldd@sgi.com Precedence: bulk X-list: xfs Utako Kusaka wrote: > Hi, > > This is my last patch for xfs_quota, maybe. Thanks for all your work on quotas; the xfs quota code is in much better shape thanks to your efforts. > When no path argument is specified, xfs_quota executes commands repeatedly, > once for each mounted XFS file system. > As a result, I get the same command report many times. > This patch implements a command loop similar to xfs_db's. This bug has been around for some time. 
I've applied your patch and confirmed it fixes the problem. When verifying it with the xfs qa tools I found that the quota test (xfstest/050) fails with expected output problems. The code in test 050 always passes the mount point in so your change should have no effect on it. I suspect this is a failure in the qa script itself. Sorry for the delay in picking up your last five patches. I'll get them checked in as soon as I resolve our qa problem. Donald From owner-xfs@oss.sgi.com Mon Jun 11 03:08:19 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 03:08:22 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.8 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from smtp120.sbc.mail.re3.yahoo.com (smtp120.sbc.mail.re3.yahoo.com [66.196.96.93]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5BA8HWt008547 for ; Mon, 11 Jun 2007 03:08:19 -0700 Received: (qmail 61179 invoked from network); 11 Jun 2007 09:41:37 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@24.5.75.45 with login) by smtp120.sbc.mail.re3.yahoo.com with SMTP; 11 Jun 2007 09:41:37 -0000 X-YMail-OSG: orN9C0cVM1lpFbDuCtF8PUbYCo2D3ZAXn.re9KQH8JBKrWQvJF0eyCdE.SQe3IYThZ53GLDJFg-- Received: by tuatara.stupidest.org (Postfix, from userid 10000) id BD287182612A; Mon, 11 Jun 2007 02:41:33 -0700 (PDT) Date: Mon, 11 Jun 2007 02:41:33 -0700 From: Chris Wedgwood To: Johan Andersson Cc: xfs@oss.sgi.com Subject: Re: xfs_fsr allocation group optimization Message-ID: <20070611094133.GA31108@tuatara.stupidest.org> References: <1181544692.19145.44.camel@gentoo-johan.transmode.se> <20070611073559.GA26257@tuatara.stupidest.org> <1181551409.19145.57.camel@gentoo-johan.transmode.se> <20070611090138.GA28907@tuatara.stupidest.org> <1181553356.19145.65.camel@gentoo-johan.transmode.se> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii 
Content-Disposition: inline In-Reply-To: <1181553356.19145.65.camel@gentoo-johan.transmode.se> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11725 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs On Mon, Jun 11, 2007 at 11:15:56AM +0200, Johan Andersson wrote: > This is exactly what the simple but ugly patch I attached achieves > by looking up the filename of the inode it defrags when doing a full > file system defrag. And it works well, except that it spends a lot > of time finding that file name. As I said, a better option would be > if you could tell XFS in what AG you want extents for a newly > created file to place it's extents in. AGs can be large, you really want to say 'allocate near ...' From owner-xfs@oss.sgi.com Mon Jun 11 03:39:54 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 03:39:57 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.7 required=5.0 tests=AWL,BAYES_50, FH_HOST_EQ_D_D_D_D,FH_HOST_EQ_D_D_D_DB,RDNS_DYNAMIC autolearn=no version=3.2.0-pre1-r499012 Received: from stitch.e-626.net (60.153.216.81.static.spa.siw.siwnet.net [81.216.153.60]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5BAdqWt017548 for ; Mon, 11 Jun 2007 03:39:53 -0700 Received: from [130.100.71.110] (hidden-user@146.175.241.83.in-addr.dgcsystems.net [83.241.175.146]) (authenticated bits=0) by stitch.e-626.net (8.14.0/8.13.7) with ESMTP id l5BAdDmo023558 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NOT); Mon, 11 Jun 2007 12:39:34 +0200 Subject: Re: xfs_fsr allocation group optimization From: Johan Andersson To: Chris Wedgwood Cc: xfs@oss.sgi.com In-Reply-To: <20070611094133.GA31108@tuatara.stupidest.org> References: 
<1181544692.19145.44.camel@gentoo-johan.transmode.se> <20070611073559.GA26257@tuatara.stupidest.org> <1181551409.19145.57.camel@gentoo-johan.transmode.se> <20070611090138.GA28907@tuatara.stupidest.org> <1181553356.19145.65.camel@gentoo-johan.transmode.se> <20070611094133.GA31108@tuatara.stupidest.org> Content-Type: text/plain Date: Mon, 11 Jun 2007 12:39:13 +0200 Message-Id: <1181558353.19145.76.camel@gentoo-johan.transmode.se> Mime-Version: 1.0 X-Mailer: Evolution 2.8.2.1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: ClamAV 0.90.2/3399/Mon Jun 11 09:27:52 2007 on stitch.e-626.net X-Virus-Status: Clean X-archive-position: 11726 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: johan@e-626.net Precedence: bulk X-list: xfs On Mon, 2007-06-11 at 02:41 -0700, Chris Wedgwood wrote: > On Mon, Jun 11, 2007 at 11:15:56AM +0200, Johan Andersson wrote: > > > This is exactly what the simple but ugly patch I attached achieves > > by looking up the filename of the inode it defrags when doing a full > > file system defrag. And it works well, except that it spends a lot > > of time finding that file name. As I said, a better option would be > > if you could tell XFS in what AG you want extents for a newly > > created file to place its extents in. > > AGs can be large, you really want to say 'allocate near ...' Yes, absolutely, if that were possible. But with the current XFS, at least we can place it in the same AG. Given the way xfs_fsr operates now, almost entirely in user space, I don't see any good way to tell XFS where to place the extents, other than creating the temporary file in the same directory as the original file. My question is really: is there a better way than "find -xdev -inum" to find what file points to a given inode? 
It would solve our immediate problem with xfs_fsr destroying locality of files, while still optimising the file system properly. /Johan Andersson From owner-xfs@oss.sgi.com Mon Jun 11 07:33:53 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 07:34:02 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.3 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_52 autolearn=no version=3.2.0-pre1-r499012 Received: from bay0-omc3-s35.bay0.hotmail.com (bay0-omc3-s35.bay0.hotmail.com [65.54.246.235]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5BEXqWt010718 for ; Mon, 11 Jun 2007 07:33:53 -0700 Received: from hotmail.com ([65.54.174.81]) by bay0-omc3-s35.bay0.hotmail.com with Microsoft SMTPSVC(6.0.3790.2668); Mon, 11 Jun 2007 07:33:52 -0700 Received: from mail pickup service by hotmail.com with Microsoft SMTPSVC; Mon, 11 Jun 2007 07:33:52 -0700 Message-ID: Received: from 85.36.106.214 by BAY103-DAV9.phx.gbl with DAV; Mon, 11 Jun 2007 14:33:47 +0000 X-Originating-IP: [85.36.106.214] X-Originating-Email: [pupilla@hotmail.com] X-Sender: pupilla@hotmail.com From: "Marco Berizzi" To: "Satyam Sharma" Cc: , "David Chinner" , , "Andrew Morton" , "Christoph Lameter" References: Subject: Re: 2.6.21.3 Oops (was Re: XFS internal error xfs_da_do_buf(2) at line 2087 of file fs/xfs/xfs_da_btree.c. 
Caller 0xc01b00bd) Date: Mon, 11 Jun 2007 16:33:43 +0200 X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2800.1123 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1123 X-OriginalArrivalTime: 11 Jun 2007 14:33:52.0680 (UTC) FILETIME=[8881CA80:01C7AC35] X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11727 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pupilla@hotmail.com Precedence: bulk X-list: xfs Satyam Sharma wrote: > Hi Marco, Ciao Satyam, thanks for the feedback. > [ Re-adding David, XFS, Andrew and Christoph; this appears to be > some SLAB / fs (?) issue, so I'm a little out of my depth here :-) ] > > > > On 6/8/07, Marco Berizzi wrote: > > >> After few hours linux has crashed with this message: > > >> BUG: at arch/i386/kernel/smp.c:546 smp_call_function() > > Well, _this_ particular "bug" (due to a WARN_ON(irqs_disabled) > that should be avoided when we're panicing) is resolved in the latest > 22-rc4 / -git kernel. However, interestingly, this is not the problem that > crashed your system in the first place. Your box had *already* paniced, > due to unknown reasons, and _then_ hit the aforementioned WARN_ON. > > > > Which kernel (exactly) was this > > > > 2.6.21.3 > > Ok, so apparently what happened here was this: > > Some RCU callback (that calls kmem_cache_free()) oopsed and > panic'ed his box. [ Marco had experienced fs issues lately, so we could > suspect file_free_rcu() here, but I can't really tell from the stack trace; > BTW whats with the rampant disease in the kernel to declare as inline > even those functions exclusively meant to be dereferenced and passed > as pointers to call_rcu()?! 
] > > Sadly, 21.3 (21.4 too, actually) had a busticated smp_send_stop() > that would always go WARN_ON when called by panic() as mentioned > above, which meant that the original dmesg stuff outputted by the oops + > panic got scrolled up and all that we had on the screen was the stack trace > for the WARN_ON when you snapped the pic -- the system didn't write to > syslog messages in time and so the extract below isn't quite useful :-( > > > > and does this occur > > > reproducibly? > > > > I don't know. I try to explain. With all debugging options > > enabled 2.6.21.x has never crashed. After two days 2.6.21.3 > > was running without any debug options, it has crashed. > > Tomorrow morning I will start that linux box with linux 2.6.21.3 > > without any debug options, and I will keep you informed > > (friday evening I have switched back to 2.6.21.3 with debug > > options enabled, so the machine doesn't crash during the week > > end: this system is my company firewall.) > > I hope you're able to reproduce this with various debug options > enabled No, I'm not able to reproduce this error with the debug options enabled. Friday evening (UTC) I compiled 2.6.21.3 with only the slab debug option enabled, and the machine has not crashed in 3 days (so far). > (and/or also try the latest 22-rc4 or -git kernel). Yes, I can try 2.6.22-rc4. I'm going to compile it right now... > Could you > please send the .config that crashed too? 
Here is: # # Automatically generated make config: don't edit # Linux kernel version: 2.6.21.3 # Thu May 31 14:53:05 2007 # CONFIG_X86_32=y CONFIG_GENERIC_TIME=y CONFIG_CLOCKSOURCE_WATCHDOG=y CONFIG_GENERIC_CLOCKEVENTS=y CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y CONFIG_LOCKDEP_SUPPORT=y CONFIG_STACKTRACE_SUPPORT=y CONFIG_SEMAPHORE_SLEEPERS=y CONFIG_X86=y CONFIG_MMU=y CONFIG_ZONE_DMA=y CONFIG_GENERIC_ISA_DMA=y CONFIG_GENERIC_IOMAP=y CONFIG_GENERIC_BUG=y CONFIG_GENERIC_HWEIGHT=y CONFIG_ARCH_MAY_HAVE_PC_FDC=y CONFIG_DMI=y CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" # # Code maturity level options # # CONFIG_EXPERIMENTAL is not set CONFIG_LOCK_KERNEL=y CONFIG_INIT_ENV_ARG_LIMIT=32 # # General setup # CONFIG_LOCALVERSION="" # CONFIG_LOCALVERSION_AUTO is not set CONFIG_SWAP=y CONFIG_SYSVIPC=y # CONFIG_IPC_NS is not set CONFIG_SYSVIPC_SYSCTL=y CONFIG_BSD_PROCESS_ACCT=y # CONFIG_BSD_PROCESS_ACCT_V3 is not set # CONFIG_TASKSTATS is not set # CONFIG_UTS_NS is not set # CONFIG_AUDIT is not set # CONFIG_IKCONFIG is not set # CONFIG_CPUSETS is not set # CONFIG_SYSFS_DEPRECATED is not set # CONFIG_RELAY is not set # CONFIG_BLK_DEV_INITRD is not set CONFIG_SYSCTL=y # CONFIG_EMBEDDED is not set CONFIG_UID16=y CONFIG_SYSCTL_SYSCALL=y CONFIG_KALLSYMS=y # CONFIG_KALLSYMS_EXTRA_PASS is not set CONFIG_HOTPLUG=y CONFIG_PRINTK=y CONFIG_BUG=y CONFIG_ELF_CORE=y CONFIG_BASE_FULL=y CONFIG_FUTEX=y CONFIG_EPOLL=y CONFIG_SHMEM=y CONFIG_SLAB=y CONFIG_VM_EVENT_COUNTERS=y CONFIG_RT_MUTEXES=y # CONFIG_TINY_SHMEM is not set CONFIG_BASE_SMALL=0 # CONFIG_SLOB is not set # # Loadable module support # CONFIG_MODULES=y CONFIG_MODULE_UNLOAD=y # CONFIG_MODVERSIONS is not set # CONFIG_MODULE_SRCVERSION_ALL is not set # CONFIG_KMOD is not set CONFIG_STOP_MACHINE=y # # Block layer # CONFIG_BLOCK=y # CONFIG_LBD is not set # CONFIG_BLK_DEV_IO_TRACE is not set # CONFIG_LSF is not set # # IO Schedulers # CONFIG_IOSCHED_NOOP=y # CONFIG_IOSCHED_AS is not set CONFIG_IOSCHED_DEADLINE=y # 
CONFIG_IOSCHED_CFQ is not set # CONFIG_DEFAULT_AS is not set CONFIG_DEFAULT_DEADLINE=y # CONFIG_DEFAULT_CFQ is not set # CONFIG_DEFAULT_NOOP is not set CONFIG_DEFAULT_IOSCHED="deadline" # # Processor type and features # # CONFIG_TICK_ONESHOT is not set # CONFIG_NO_HZ is not set # CONFIG_HIGH_RES_TIMERS is not set CONFIG_SMP=y CONFIG_X86_PC=y # CONFIG_X86_ELAN is not set # CONFIG_X86_VOYAGER is not set # CONFIG_X86_NUMAQ is not set # CONFIG_X86_SUMMIT is not set # CONFIG_X86_BIGSMP is not set # CONFIG_X86_VISWS is not set # CONFIG_X86_GENERICARCH is not set # CONFIG_X86_ES7000 is not set # CONFIG_M386 is not set # CONFIG_M486 is not set # CONFIG_M586 is not set # CONFIG_M586TSC is not set # CONFIG_M586MMX is not set # CONFIG_M686 is not set # CONFIG_MPENTIUMII is not set # CONFIG_MPENTIUMIII is not set # CONFIG_MPENTIUMM is not set # CONFIG_MCORE2 is not set CONFIG_MPENTIUM4=y # CONFIG_MK6 is not set # CONFIG_MK7 is not set # CONFIG_MK8 is not set # CONFIG_MCRUSOE is not set # CONFIG_MEFFICEON is not set # CONFIG_MWINCHIPC6 is not set # CONFIG_MWINCHIP2 is not set # CONFIG_MWINCHIP3D is not set # CONFIG_MGEODEGX1 is not set # CONFIG_MGEODE_LX is not set # CONFIG_MCYRIXIII is not set # CONFIG_MVIAC3_2 is not set # CONFIG_X86_GENERIC is not set CONFIG_X86_CMPXCHG=y CONFIG_X86_L1_CACHE_SHIFT=7 CONFIG_RWSEM_XCHGADD_ALGORITHM=y # CONFIG_ARCH_HAS_ILOG2_U32 is not set # CONFIG_ARCH_HAS_ILOG2_U64 is not set CONFIG_GENERIC_CALIBRATE_DELAY=y CONFIG_X86_WP_WORKS_OK=y CONFIG_X86_INVLPG=y CONFIG_X86_BSWAP=y CONFIG_X86_POPAD_OK=y CONFIG_X86_CMPXCHG64=y CONFIG_X86_GOOD_APIC=y CONFIG_X86_INTEL_USERCOPY=y CONFIG_X86_USE_PPRO_CHECKSUM=y CONFIG_X86_TSC=y # CONFIG_HPET_TIMER is not set CONFIG_NR_CPUS=2 CONFIG_SCHED_SMT=y # CONFIG_SCHED_MC is not set CONFIG_PREEMPT_NONE=y # CONFIG_PREEMPT_VOLUNTARY is not set # CONFIG_PREEMPT is not set # CONFIG_PREEMPT_BKL is not set CONFIG_X86_LOCAL_APIC=y CONFIG_X86_IO_APIC=y # CONFIG_X86_MCE is not set CONFIG_VM86=y # CONFIG_TOSHIBA is not set # 
CONFIG_I8K is not set # CONFIG_X86_REBOOTFIXUPS is not set # CONFIG_MICROCODE is not set # CONFIG_X86_MSR is not set # CONFIG_X86_CPUID is not set # # Firmware Drivers # # CONFIG_EDD is not set # CONFIG_DELL_RBU is not set # CONFIG_DCDBAS is not set # CONFIG_NOHIGHMEM is not set CONFIG_HIGHMEM4G=y # CONFIG_HIGHMEM64G is not set CONFIG_PAGE_OFFSET=0xC0000000 CONFIG_HIGHMEM=y CONFIG_ARCH_POPULATES_NODE_MAP=y CONFIG_FLATMEM=y CONFIG_FLAT_NODE_MEM_MAP=y # CONFIG_SPARSEMEM_STATIC is not set CONFIG_SPLIT_PTLOCK_CPUS=4 # CONFIG_RESOURCES_64BIT is not set CONFIG_ZONE_DMA_FLAG=1 CONFIG_HIGHPTE=y # CONFIG_MATH_EMULATION is not set # CONFIG_MTRR is not set # CONFIG_EFI is not set CONFIG_IRQBALANCE=y CONFIG_SECCOMP=y CONFIG_HZ_100=y # CONFIG_HZ_250 is not set # CONFIG_HZ_300 is not set # CONFIG_HZ_1000 is not set CONFIG_HZ=100 # CONFIG_KEXEC is not set CONFIG_PHYSICAL_START=0x100000 CONFIG_PHYSICAL_ALIGN=0x100000 # CONFIG_COMPAT_VDSO is not set CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y # # Power management options (ACPI, APM) # CONFIG_PM=y # CONFIG_PM_LEGACY is not set # CONFIG_PM_DEBUG is not set # CONFIG_PM_SYSFS_DEPRECATED is not set # # ACPI (Advanced Configuration and Power Interface) Support # CONFIG_ACPI=y # CONFIG_ACPI_PROCFS is not set # CONFIG_ACPI_AC is not set # CONFIG_ACPI_BATTERY is not set # CONFIG_ACPI_BUTTON is not set # CONFIG_ACPI_FAN is not set # CONFIG_ACPI_PROCESSOR is not set # CONFIG_ACPI_ASUS is not set # CONFIG_ACPI_IBM is not set # CONFIG_ACPI_TOSHIBA is not set CONFIG_ACPI_BLACKLIST_YEAR=0 # CONFIG_ACPI_DEBUG is not set CONFIG_ACPI_EC=y CONFIG_ACPI_POWER=y CONFIG_ACPI_SYSTEM=y CONFIG_X86_PM_TIMER=y # # APM (Advanced Power Management) BIOS Support # # CONFIG_APM is not set # # CPU Frequency scaling # # CONFIG_CPU_FREQ is not set # # Bus options (PCI, PCMCIA, EISA, MCA, ISA) # CONFIG_PCI=y # CONFIG_PCI_GOBIOS is not set # CONFIG_PCI_GOMMCONFIG is not set # CONFIG_PCI_GODIRECT is not set CONFIG_PCI_GOANY=y CONFIG_PCI_BIOS=y CONFIG_PCI_DIRECT=y 
CONFIG_PCI_MMCONFIG=y # CONFIG_PCIEPORTBUS is not set # CONFIG_PCI_MSI is not set # CONFIG_HT_IRQ is not set CONFIG_ISA_DMA_API=y # CONFIG_ISA is not set # CONFIG_MCA is not set # CONFIG_SCx200 is not set # # PCCARD (PCMCIA/CardBus) support # # CONFIG_PCCARD is not set # # PCI Hotplug Support # # # Executable file formats # CONFIG_BINFMT_ELF=y # CONFIG_BINFMT_AOUT is not set # CONFIG_BINFMT_MISC is not set # # Networking # CONFIG_NET=y # # Networking options # # CONFIG_NETDEBUG is not set CONFIG_PACKET=y CONFIG_PACKET_MMAP=y CONFIG_UNIX=y CONFIG_XFRM=y CONFIG_XFRM_USER=y CONFIG_NET_KEY=y CONFIG_INET=y # CONFIG_IP_MULTICAST is not set CONFIG_IP_ADVANCED_ROUTER=y CONFIG_ASK_IP_FIB_HASH=y # CONFIG_IP_FIB_TRIE is not set CONFIG_IP_FIB_HASH=y CONFIG_IP_MULTIPLE_TABLES=y CONFIG_IP_ROUTE_MULTIPATH=y # CONFIG_IP_ROUTE_MULTIPATH_CACHED is not set CONFIG_IP_ROUTE_VERBOSE=y # CONFIG_IP_PNP is not set # CONFIG_NET_IPIP is not set # CONFIG_NET_IPGRE is not set CONFIG_SYN_COOKIES=y # CONFIG_INET_AH is not set CONFIG_INET_ESP=y CONFIG_INET_IPCOMP=y CONFIG_INET_XFRM_TUNNEL=y CONFIG_INET_TUNNEL=y CONFIG_INET_XFRM_MODE_TRANSPORT=y CONFIG_INET_XFRM_MODE_TUNNEL=y # CONFIG_INET_XFRM_MODE_BEET is not set CONFIG_INET_DIAG=y CONFIG_INET_TCP_DIAG=y # CONFIG_TCP_CONG_ADVANCED is not set CONFIG_TCP_CONG_CUBIC=y CONFIG_DEFAULT_TCP_CONG="cubic" # # IP: Virtual Server Configuration # # CONFIG_IP_VS is not set # CONFIG_IPV6 is not set # CONFIG_INET6_XFRM_TUNNEL is not set # CONFIG_INET6_TUNNEL is not set # CONFIG_NETWORK_SECMARK is not set CONFIG_NETFILTER=y # CONFIG_NETFILTER_DEBUG is not set # # Core Netfilter Configuration # CONFIG_NETFILTER_NETLINK=m CONFIG_NETFILTER_NETLINK_QUEUE=m CONFIG_NETFILTER_NETLINK_LOG=m CONFIG_NF_CONNTRACK_ENABLED=y CONFIG_NF_CONNTRACK_SUPPORT=y # CONFIG_IP_NF_CONNTRACK_SUPPORT is not set CONFIG_NF_CONNTRACK=y CONFIG_NF_CT_ACCT=y CONFIG_NF_CONNTRACK_MARK=y CONFIG_NF_CT_PROTO_GRE=m # CONFIG_NF_CONNTRACK_AMANDA is not set CONFIG_NF_CONNTRACK_FTP=m # 
CONFIG_NF_CONNTRACK_IRC is not set CONFIG_NF_CONNTRACK_PPTP=m # CONFIG_NF_CONNTRACK_TFTP is not set CONFIG_NETFILTER_XTABLES=y CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m CONFIG_NETFILTER_XT_TARGET_CONNMARK=m CONFIG_NETFILTER_XT_TARGET_DSCP=m CONFIG_NETFILTER_XT_TARGET_MARK=y CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m CONFIG_NETFILTER_XT_TARGET_NFLOG=m CONFIG_NETFILTER_XT_TARGET_NOTRACK=m CONFIG_NETFILTER_XT_TARGET_TCPMSS=m CONFIG_NETFILTER_XT_MATCH_COMMENT=m CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m CONFIG_NETFILTER_XT_MATCH_CONNMARK=m CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m CONFIG_NETFILTER_XT_MATCH_DCCP=m CONFIG_NETFILTER_XT_MATCH_DSCP=m CONFIG_NETFILTER_XT_MATCH_ESP=m CONFIG_NETFILTER_XT_MATCH_HELPER=y CONFIG_NETFILTER_XT_MATCH_LENGTH=m CONFIG_NETFILTER_XT_MATCH_LIMIT=y CONFIG_NETFILTER_XT_MATCH_MAC=m CONFIG_NETFILTER_XT_MATCH_MARK=y CONFIG_NETFILTER_XT_MATCH_POLICY=y CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m CONFIG_NETFILTER_XT_MATCH_QUOTA=m CONFIG_NETFILTER_XT_MATCH_REALM=m CONFIG_NETFILTER_XT_MATCH_STATE=y CONFIG_NETFILTER_XT_MATCH_STATISTIC=y CONFIG_NETFILTER_XT_MATCH_STRING=m CONFIG_NETFILTER_XT_MATCH_TCPMSS=y CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m # # IP: Netfilter Configuration # CONFIG_NF_CONNTRACK_IPV4=y # CONFIG_NF_CONNTRACK_PROC_COMPAT is not set # CONFIG_IP_NF_QUEUE is not set CONFIG_IP_NF_IPTABLES=y CONFIG_IP_NF_MATCH_IPRANGE=m CONFIG_IP_NF_MATCH_TOS=m # CONFIG_IP_NF_MATCH_RECENT is not set CONFIG_IP_NF_MATCH_ECN=m # CONFIG_IP_NF_MATCH_AH is not set CONFIG_IP_NF_MATCH_TTL=m CONFIG_IP_NF_MATCH_OWNER=m CONFIG_IP_NF_MATCH_ADDRTYPE=m CONFIG_IP_NF_FILTER=y CONFIG_IP_NF_TARGET_REJECT=y CONFIG_IP_NF_TARGET_LOG=m # CONFIG_IP_NF_TARGET_ULOG is not set CONFIG_NF_NAT=y CONFIG_NF_NAT_NEEDED=y CONFIG_IP_NF_TARGET_MASQUERADE=m CONFIG_IP_NF_TARGET_REDIRECT=m CONFIG_IP_NF_TARGET_NETMAP=m CONFIG_IP_NF_TARGET_SAME=m CONFIG_NF_NAT_PROTO_GRE=m CONFIG_NF_NAT_FTP=m # CONFIG_NF_NAT_IRC is not set # CONFIG_NF_NAT_TFTP is not set # CONFIG_NF_NAT_AMANDA is not 
set CONFIG_NF_NAT_PPTP=m # CONFIG_NF_NAT_H323 is not set # CONFIG_NF_NAT_SIP is not set CONFIG_IP_NF_MANGLE=y CONFIG_IP_NF_TARGET_TOS=m CONFIG_IP_NF_TARGET_ECN=m CONFIG_IP_NF_TARGET_TTL=m CONFIG_IP_NF_RAW=m CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m # CONFIG_BRIDGE is not set # CONFIG_VLAN_8021Q is not set # CONFIG_DECNET is not set # CONFIG_LLC2 is not set # CONFIG_IPX is not set # CONFIG_ATALK is not set # # QoS and/or fair queueing # CONFIG_NET_SCHED=y CONFIG_NET_SCH_FIFO=y # CONFIG_NET_SCH_CLK_JIFFIES is not set CONFIG_NET_SCH_CLK_GETTIMEOFDAY=y # CONFIG_NET_SCH_CLK_CPU is not set # # Queueing/Scheduling # # CONFIG_NET_SCH_CBQ is not set CONFIG_NET_SCH_HTB=m CONFIG_NET_SCH_HFSC=m CONFIG_NET_SCH_PRIO=m CONFIG_NET_SCH_RED=m CONFIG_NET_SCH_SFQ=m CONFIG_NET_SCH_TEQL=m CONFIG_NET_SCH_TBF=m CONFIG_NET_SCH_GRED=m CONFIG_NET_SCH_DSMARK=m CONFIG_NET_SCH_NETEM=m CONFIG_NET_SCH_INGRESS=m # # Classification # CONFIG_NET_CLS=y CONFIG_NET_CLS_BASIC=m CONFIG_NET_CLS_TCINDEX=m CONFIG_NET_CLS_ROUTE4=m CONFIG_NET_CLS_ROUTE=y CONFIG_NET_CLS_FW=m CONFIG_NET_CLS_U32=m CONFIG_CLS_U32_PERF=y CONFIG_CLS_U32_MARK=y CONFIG_NET_CLS_RSVP=m # CONFIG_NET_CLS_RSVP6 is not set CONFIG_NET_EMATCH=y CONFIG_NET_EMATCH_STACK=32 CONFIG_NET_EMATCH_CMP=m CONFIG_NET_EMATCH_NBYTE=m CONFIG_NET_EMATCH_U32=m CONFIG_NET_EMATCH_META=m CONFIG_NET_EMATCH_TEXT=m CONFIG_NET_CLS_ACT=y CONFIG_NET_ACT_POLICE=m CONFIG_NET_ACT_GACT=m CONFIG_GACT_PROB=y CONFIG_NET_ACT_MIRRED=m CONFIG_NET_ACT_IPT=m CONFIG_NET_ACT_PEDIT=m # CONFIG_NET_ACT_SIMP is not set # CONFIG_NET_CLS_IND is not set CONFIG_NET_ESTIMATOR=y # # Network testing # # CONFIG_NET_PKTGEN is not set # CONFIG_HAMRADIO is not set # CONFIG_IRDA is not set # CONFIG_BT is not set # CONFIG_IEEE80211 is not set CONFIG_FIB_RULES=y # # Device Drivers # # # Generic Driver Options # CONFIG_STANDALONE=y # CONFIG_PREVENT_FIRMWARE_BUILD is not set # CONFIG_FW_LOADER is not set # CONFIG_SYS_HYPERVISOR is not set # # Connector - unified 
userspace <-> kernelspace linker # # CONFIG_CONNECTOR is not set # # Memory Technology Devices (MTD) # # CONFIG_MTD is not set # # Parallel port support # # CONFIG_PARPORT is not set # # Plug and Play support # CONFIG_PNP=y # CONFIG_PNP_DEBUG is not set # # Protocols # CONFIG_PNPACPI=y # # Block devices # CONFIG_BLK_DEV_FD=m # CONFIG_BLK_CPQ_DA is not set # CONFIG_BLK_CPQ_CISS_DA is not set # CONFIG_BLK_DEV_DAC960 is not set # CONFIG_BLK_DEV_COW_COMMON is not set # CONFIG_BLK_DEV_LOOP is not set # CONFIG_BLK_DEV_NBD is not set # CONFIG_BLK_DEV_SX8 is not set # CONFIG_BLK_DEV_RAM is not set # CONFIG_CDROM_PKTCDVD is not set # CONFIG_ATA_OVER_ETH is not set # # Misc devices # # CONFIG_SGI_IOC4 is not set # CONFIG_SONY_LAPTOP is not set # # ATA/ATAPI/MFM/RLL support # CONFIG_IDE=m CONFIG_BLK_DEV_IDE=m # # Please see Documentation/ide.txt for help/info on IDE drives # # CONFIG_BLK_DEV_IDE_SATA is not set # CONFIG_BLK_DEV_HD_IDE is not set # CONFIG_BLK_DEV_IDEDISK is not set # CONFIG_IDEDISK_MULTI_MODE is not set CONFIG_BLK_DEV_IDECD=m # CONFIG_BLK_DEV_IDEFLOPPY is not set # CONFIG_BLK_DEV_IDESCSI is not set # CONFIG_BLK_DEV_IDEACPI is not set # CONFIG_IDE_TASK_IOCTL is not set # # IDE chipset support/bugfixes # # CONFIG_IDE_GENERIC is not set # CONFIG_BLK_DEV_CMD640 is not set # CONFIG_BLK_DEV_IDEPNP is not set CONFIG_BLK_DEV_IDEPCI=y CONFIG_IDEPCI_SHARE_IRQ=y # CONFIG_BLK_DEV_OFFBOARD is not set # CONFIG_BLK_DEV_GENERIC is not set # CONFIG_BLK_DEV_RZ1000 is not set CONFIG_BLK_DEV_IDEDMA_PCI=y # CONFIG_BLK_DEV_IDEDMA_FORCED is not set CONFIG_IDEDMA_ONLYDISK=y # CONFIG_BLK_DEV_AEC62XX is not set # CONFIG_BLK_DEV_ALI15X3 is not set # CONFIG_BLK_DEV_AMD74XX is not set # CONFIG_BLK_DEV_ATIIXP is not set # CONFIG_BLK_DEV_CMD64X is not set # CONFIG_BLK_DEV_TRIFLEX is not set # CONFIG_BLK_DEV_CY82C693 is not set # CONFIG_BLK_DEV_CS5530 is not set # CONFIG_BLK_DEV_CS5535 is not set # CONFIG_BLK_DEV_HPT34X is not set # CONFIG_BLK_DEV_HPT366 is not set # CONFIG_BLK_DEV_JMICRON 
is not set # CONFIG_BLK_DEV_SC1200 is not set CONFIG_BLK_DEV_PIIX=m # CONFIG_BLK_DEV_IT8213 is not set # CONFIG_BLK_DEV_IT821X is not set # CONFIG_BLK_DEV_NS87415 is not set # CONFIG_BLK_DEV_PDC202XX_OLD is not set # CONFIG_BLK_DEV_PDC202XX_NEW is not set # CONFIG_BLK_DEV_SVWKS is not set # CONFIG_BLK_DEV_SIIMAGE is not set # CONFIG_BLK_DEV_SIS5513 is not set # CONFIG_BLK_DEV_SLC90E66 is not set # CONFIG_BLK_DEV_TRM290 is not set # CONFIG_BLK_DEV_VIA82CXXX is not set # CONFIG_BLK_DEV_TC86C001 is not set # CONFIG_IDE_ARM is not set CONFIG_BLK_DEV_IDEDMA=y # CONFIG_IDEDMA_IVB is not set # CONFIG_BLK_DEV_HD is not set # # SCSI device support # # CONFIG_RAID_ATTRS is not set CONFIG_SCSI=y # CONFIG_SCSI_NETLINK is not set # CONFIG_SCSI_PROC_FS is not set # # SCSI support type (disk, tape, CD-ROM) # CONFIG_BLK_DEV_SD=y # CONFIG_CHR_DEV_ST is not set # CONFIG_CHR_DEV_OSST is not set # CONFIG_BLK_DEV_SR is not set # CONFIG_CHR_DEV_SG is not set # CONFIG_CHR_DEV_SCH is not set # # Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs # # CONFIG_SCSI_MULTI_LUN is not set # CONFIG_SCSI_CONSTANTS is not set # CONFIG_SCSI_LOGGING is not set # CONFIG_SCSI_SCAN_ASYNC is not set # # SCSI Transports # CONFIG_SCSI_SPI_ATTRS=y # CONFIG_SCSI_FC_ATTRS is not set # CONFIG_SCSI_ISCSI_ATTRS is not set # CONFIG_SCSI_SAS_ATTRS is not set # CONFIG_SCSI_SAS_LIBSAS is not set # # SCSI low-level drivers # # CONFIG_ISCSI_TCP is not set # CONFIG_BLK_DEV_3W_XXXX_RAID is not set # CONFIG_SCSI_3W_9XXX is not set # CONFIG_SCSI_ACARD is not set # CONFIG_SCSI_AACRAID is not set # CONFIG_SCSI_AIC7XXX is not set # CONFIG_SCSI_AIC7XXX_OLD is not set # CONFIG_SCSI_AIC79XX is not set # CONFIG_SCSI_AIC94XX is not set # CONFIG_SCSI_DPT_I2O is not set # CONFIG_SCSI_ADVANSYS is not set # CONFIG_SCSI_ARCMSR is not set # CONFIG_MEGARAID_NEWGEN is not set # CONFIG_MEGARAID_LEGACY is not set # CONFIG_MEGARAID_SAS is not set # CONFIG_SCSI_HPTIOP is not set # CONFIG_SCSI_BUSLOGIC is not set # CONFIG_SCSI_DMX3191D is not set # CONFIG_SCSI_EATA is not set # CONFIG_SCSI_FUTURE_DOMAIN is not set # CONFIG_SCSI_GDTH is not set # CONFIG_SCSI_IPS is not set # CONFIG_SCSI_INITIO is not set # CONFIG_SCSI_INIA100 is not set # CONFIG_SCSI_STEX is not set # CONFIG_SCSI_SYM53C8XX_2 is not set # CONFIG_SCSI_QLOGIC_1280 is not set # CONFIG_SCSI_QLA_FC is not set # CONFIG_SCSI_QLA_ISCSI is not set # CONFIG_SCSI_LPFC is not set # CONFIG_SCSI_DC390T is not set # CONFIG_SCSI_NSP32 is not set # CONFIG_SCSI_DEBUG is not set # CONFIG_SCSI_SRP is not set # # Serial ATA (prod) and Parallel ATA (experimental) drivers # # CONFIG_ATA is not set # # Multi-device support (RAID and LVM) # # CONFIG_MD is not set # # Fusion MPT device support # CONFIG_FUSION=y CONFIG_FUSION_SPI=y # CONFIG_FUSION_FC is not set # CONFIG_FUSION_SAS is not set CONFIG_FUSION_MAX_SGE=128 # CONFIG_FUSION_CTL is not set # # IEEE 1394 (FireWire) support # # CONFIG_IEEE1394 is not set # # I2O device support # # CONFIG_I2O is not set # # Macintosh device drivers # 
# CONFIG_MAC_EMUMOUSEBTN is not set # # Network device support # CONFIG_NETDEVICES=y CONFIG_IFB=m # CONFIG_DUMMY is not set # CONFIG_BONDING is not set # CONFIG_EQUALIZER is not set CONFIG_TUN=m # CONFIG_NET_SB1000 is not set # # ARCnet devices # # CONFIG_ARCNET is not set # # PHY device support # # # Ethernet (10 or 100Mbit) # # CONFIG_NET_ETHERNET is not set # # Ethernet (1000 Mbit) # # CONFIG_ACENIC is not set # CONFIG_DL2K is not set CONFIG_E1000=m CONFIG_E1000_NAPI=y # CONFIG_E1000_DISABLE_PACKET_SPLIT is not set # CONFIG_NS83820 is not set # CONFIG_HAMACHI is not set # CONFIG_R8169 is not set # CONFIG_SIS190 is not set # CONFIG_SKGE is not set # CONFIG_SKY2 is not set # CONFIG_SK98LIN is not set CONFIG_TIGON3=m # CONFIG_BNX2 is not set # CONFIG_QLA3XXX is not set # # Ethernet (10000 Mbit) # # CONFIG_CHELSIO_T1 is not set # CONFIG_CHELSIO_T3 is not set # CONFIG_IXGB is not set # CONFIG_S2IO is not set # CONFIG_MYRI10GE is not set # CONFIG_NETXEN_NIC is not set # # Token Ring devices # # CONFIG_TR is not set # # Wireless LAN (non-hamradio) # # CONFIG_NET_RADIO is not set # # Wan interfaces # # CONFIG_WAN is not set # CONFIG_FDDI is not set # CONFIG_PPP is not set # CONFIG_SLIP is not set # CONFIG_NET_FC is not set # CONFIG_NETPOLL is not set # CONFIG_NET_POLL_CONTROLLER is not set # # ISDN subsystem # # CONFIG_ISDN is not set # # Telephony Support # # CONFIG_PHONE is not set # # Input device support # CONFIG_INPUT=y # CONFIG_INPUT_FF_MEMLESS is not set # # Userland interfaces # CONFIG_INPUT_MOUSEDEV=y # CONFIG_INPUT_MOUSEDEV_PSAUX is not set CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024 CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768 # CONFIG_INPUT_JOYDEV is not set # CONFIG_INPUT_TSDEV is not set # CONFIG_INPUT_EVDEV is not set # CONFIG_INPUT_EVBUG is not set # # Input Device Drivers # CONFIG_INPUT_KEYBOARD=y CONFIG_KEYBOARD_ATKBD=y # CONFIG_KEYBOARD_SUNKBD is not set # CONFIG_KEYBOARD_LKKBD is not set # CONFIG_KEYBOARD_XTKBD is not set # CONFIG_KEYBOARD_NEWTON is not set # 
CONFIG_KEYBOARD_STOWAWAY is not set # CONFIG_INPUT_MOUSE is not set # CONFIG_INPUT_JOYSTICK is not set # CONFIG_INPUT_TOUCHSCREEN is not set # CONFIG_INPUT_MISC is not set # # Hardware I/O ports # CONFIG_SERIO=y CONFIG_SERIO_I8042=y # CONFIG_SERIO_SERPORT is not set # CONFIG_SERIO_CT82C710 is not set # CONFIG_SERIO_PCIPS2 is not set CONFIG_SERIO_LIBPS2=y # CONFIG_SERIO_RAW is not set # CONFIG_GAMEPORT is not set # # Character devices # CONFIG_VT=y CONFIG_VT_CONSOLE=y CONFIG_HW_CONSOLE=y # CONFIG_VT_HW_CONSOLE_BINDING is not set # CONFIG_SERIAL_NONSTANDARD is not set # # Serial drivers # # CONFIG_SERIAL_8250 is not set # # Non-8250 serial port support # # CONFIG_SERIAL_JSM is not set CONFIG_UNIX98_PTYS=y # CONFIG_LEGACY_PTYS is not set # # IPMI # # CONFIG_IPMI_HANDLER is not set # # Watchdog Cards # # CONFIG_WATCHDOG is not set CONFIG_HW_RANDOM=y CONFIG_HW_RANDOM_INTEL=m # CONFIG_HW_RANDOM_AMD is not set # CONFIG_HW_RANDOM_GEODE is not set # CONFIG_HW_RANDOM_VIA is not set # CONFIG_NVRAM is not set # CONFIG_RTC is not set # CONFIG_GEN_RTC is not set # CONFIG_DTLK is not set # CONFIG_R3964 is not set # CONFIG_APPLICOM is not set # CONFIG_AGP is not set # CONFIG_DRM is not set # CONFIG_MWAVE is not set # CONFIG_PC8736x_GPIO is not set # CONFIG_NSC_GPIO is not set # CONFIG_CS5535_GPIO is not set # CONFIG_RAW_DRIVER is not set # CONFIG_HPET is not set # CONFIG_HANGCHECK_TIMER is not set # # TPM devices # # # I2C support # # CONFIG_I2C is not set # # SPI support # # CONFIG_SPI is not set # CONFIG_SPI_MASTER is not set # # Dallas's 1-wire bus # # CONFIG_W1 is not set # # Hardware Monitoring support # # CONFIG_HWMON is not set # CONFIG_HWMON_VID is not set # # Multifunction device drivers # # CONFIG_MFD_SM501 is not set # # Multimedia devices # # CONFIG_VIDEO_DEV is not set # # Digital Video Broadcasting Devices # # CONFIG_DVB is not set # # Graphics support # # CONFIG_BACKLIGHT_LCD_SUPPORT is not set # CONFIG_FB is not set # # Console display driver support # 
CONFIG_VGA_CONSOLE=y # CONFIG_VGACON_SOFT_SCROLLBACK is not set # CONFIG_VIDEO_SELECT is not set CONFIG_DUMMY_CONSOLE=y # # Sound # # CONFIG_SOUND is not set # # HID Devices # # CONFIG_HID is not set # # USB support # CONFIG_USB_ARCH_HAS_HCD=y CONFIG_USB_ARCH_HAS_OHCI=y CONFIG_USB_ARCH_HAS_EHCI=y # CONFIG_USB is not set # # NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' # # # USB Gadget Support # # CONFIG_USB_GADGET is not set # # MMC/SD Card support # # CONFIG_MMC is not set # # LED devices # # CONFIG_NEW_LEDS is not set # # LED drivers # # # LED Triggers # # # InfiniBand support # # CONFIG_INFINIBAND is not set # # EDAC - error detection and reporting (RAS) (EXPERIMENTAL) # # # Real Time Clock # # # DMA Engine support # # CONFIG_DMA_ENGINE is not set # # DMA Clients # # # DMA Devices # # # Auxiliary Display support # # # Virtualization # # # File systems # CONFIG_EXT2_FS=m # CONFIG_EXT2_FS_XATTR is not set # CONFIG_EXT2_FS_XIP is not set # CONFIG_EXT3_FS is not set # CONFIG_REISERFS_FS is not set # CONFIG_JFS_FS is not set # CONFIG_FS_POSIX_ACL is not set CONFIG_XFS_FS=y # CONFIG_XFS_QUOTA is not set # CONFIG_XFS_SECURITY is not set # CONFIG_XFS_POSIX_ACL is not set # CONFIG_XFS_RT is not set # CONFIG_OCFS2_FS is not set # CONFIG_MINIX_FS is not set # CONFIG_ROMFS_FS is not set CONFIG_INOTIFY=y CONFIG_INOTIFY_USER=y # CONFIG_QUOTA is not set CONFIG_DNOTIFY=y # CONFIG_AUTOFS_FS is not set # CONFIG_AUTOFS4_FS is not set # CONFIG_FUSE_FS is not set # # CD-ROM/DVD Filesystems # CONFIG_ISO9660_FS=m # CONFIG_JOLIET is not set # CONFIG_ZISOFS is not set # CONFIG_UDF_FS is not set # # DOS/FAT/NT Filesystems # # CONFIG_MSDOS_FS is not set # CONFIG_VFAT_FS is not set # CONFIG_NTFS_FS is not set # # Pseudo filesystems # CONFIG_PROC_FS=y CONFIG_PROC_KCORE=y CONFIG_PROC_SYSCTL=y CONFIG_SYSFS=y # CONFIG_TMPFS is not set # CONFIG_HUGETLBFS is not set # CONFIG_HUGETLB_PAGE is not set CONFIG_RAMFS=y # # Miscellaneous filesystems # # CONFIG_HFSPLUS_FS is not set # 
CONFIG_CRAMFS is not set # CONFIG_VXFS_FS is not set # CONFIG_HPFS_FS is not set # CONFIG_QNX4FS_FS is not set # CONFIG_SYSV_FS is not set # CONFIG_UFS_FS is not set # # Network File Systems # # CONFIG_NFS_FS is not set # CONFIG_NFSD is not set # CONFIG_SMB_FS is not set # CONFIG_CIFS is not set # CONFIG_NCP_FS is not set # CONFIG_CODA_FS is not set # # Partition Types # # CONFIG_PARTITION_ADVANCED is not set CONFIG_MSDOS_PARTITION=y # # Native Language Support # # CONFIG_NLS is not set # # Kernel hacking # CONFIG_TRACE_IRQFLAGS_SUPPORT=y # CONFIG_PRINTK_TIME is not set # CONFIG_ENABLE_MUST_CHECK is not set # CONFIG_MAGIC_SYSRQ is not set # CONFIG_UNUSED_SYMBOLS is not set # CONFIG_DEBUG_FS is not set # CONFIG_HEADERS_CHECK is not set # CONFIG_DEBUG_KERNEL is not set CONFIG_LOG_BUF_SHIFT=15 CONFIG_DEBUG_BUGVERBOSE=y CONFIG_EARLY_PRINTK=y CONFIG_X86_FIND_SMP_CONFIG=y CONFIG_X86_MPPARSE=y CONFIG_DOUBLEFAULT=y # # Security options # # CONFIG_KEYS is not set # CONFIG_SECURITY is not set # # Cryptographic options # CONFIG_CRYPTO=y CONFIG_CRYPTO_ALGAPI=y CONFIG_CRYPTO_BLKCIPHER=y CONFIG_CRYPTO_HASH=y CONFIG_CRYPTO_MANAGER=y CONFIG_CRYPTO_HMAC=y # CONFIG_CRYPTO_NULL is not set # CONFIG_CRYPTO_MD4 is not set CONFIG_CRYPTO_MD5=y CONFIG_CRYPTO_SHA1=y CONFIG_CRYPTO_SHA256=y CONFIG_CRYPTO_SHA512=y # CONFIG_CRYPTO_WP512 is not set # CONFIG_CRYPTO_TGR192 is not set CONFIG_CRYPTO_ECB=m CONFIG_CRYPTO_CBC=y # CONFIG_CRYPTO_PCBC is not set CONFIG_CRYPTO_DES=y # CONFIG_CRYPTO_FCRYPT is not set CONFIG_CRYPTO_BLOWFISH=m # CONFIG_CRYPTO_TWOFISH is not set CONFIG_CRYPTO_TWOFISH_COMMON=m CONFIG_CRYPTO_TWOFISH_586=m CONFIG_CRYPTO_SERPENT=m # CONFIG_CRYPTO_AES is not set CONFIG_CRYPTO_AES_586=y # CONFIG_CRYPTO_CAST5 is not set # CONFIG_CRYPTO_CAST6 is not set # CONFIG_CRYPTO_TEA is not set # CONFIG_CRYPTO_ARC4 is not set # CONFIG_CRYPTO_KHAZAD is not set # CONFIG_CRYPTO_ANUBIS is not set CONFIG_CRYPTO_DEFLATE=y # CONFIG_CRYPTO_MICHAEL_MIC is not set # CONFIG_CRYPTO_CRC32C is not set # 
CONFIG_CRYPTO_CAMELLIA is not set # CONFIG_CRYPTO_TEST is not set # # Hardware crypto devices # # CONFIG_CRYPTO_DEV_PADLOCK is not set # CONFIG_CRYPTO_DEV_GEODE is not set # # Library routines # CONFIG_BITREVERSE=m CONFIG_CRC_CCITT=m CONFIG_CRC16=m CONFIG_CRC32=m CONFIG_LIBCRC32C=m CONFIG_ZLIB_INFLATE=y CONFIG_ZLIB_DEFLATE=y CONFIG_TEXTSEARCH=y CONFIG_TEXTSEARCH_KMP=m CONFIG_TEXTSEARCH_BM=m CONFIG_TEXTSEARCH_FSM=m CONFIG_PLIST=y CONFIG_HAS_IOMEM=y CONFIG_HAS_IOPORT=y CONFIG_GENERIC_HARDIRQS=y CONFIG_GENERIC_IRQ_PROBE=y CONFIG_GENERIC_PENDING_IRQ=y CONFIG_X86_SMP=y CONFIG_X86_HT=y CONFIG_X86_BIOS_REBOOT=y CONFIG_X86_TRAMPOLINE=y CONFIG_KTIME_SCALAR=y > Anyway, I'd have to leave this upto the others Cc:'ed here. Doesn't look > like a known / resolved issue, though. > > > > Also, could you please send the dmesg, > > > > Jun 4 20:53:05 Pleiadi kernel: sanitize start > > Jun 4 20:53:05 Pleiadi kernel: sanitize end > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 0000000000000000 > > size: 000000000009ac00 end: 000000000009ac00 type: 1 > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() type is E820_RAM > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000000009ac00 > > size: 0000000000005400 end: 00000000000a0000 type: 2 > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000000ce000 > > size: 0000000000002000 end: 00000000000d0000 type: 2 > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000000e0000 > > size: 0000000000020000 end: 0000000000100000 type: 2 > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 0000000000100000 > > size: 000000003fdf0000 end: 000000003fef0000 type: 1 > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() type is E820_RAM > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000003fef0000 > > size: 000000000000b000 end: 000000003fefb000 type: 3 > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000003fefb000 > > size: 0000000000005000 end: 000000003ff00000 type: 4 > > Jun 4 
20:53:05 Pleiadi kernel: copy_e820_map() start: 000000003ff00000 > > size: 0000000000080000 end: 000000003ff80000 type: 1 > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() type is E820_RAM > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000003ff80000 > > size: 0000000000080000 end: 0000000040000000 type: 2 > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000e0000000 > > size: 0000000010000000 end: 00000000f0000000 type: 2 > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000fec00000 > > size: 0000000000100400 end: 00000000fed00400 type: 2 > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000fee00000 > > size: 0000000000100000 end: 00000000fef00000 type: 2 > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000ffb00000 > > size: 0000000000100000 end: 00000000ffc00000 type: 2 > > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000fff00000 > > size: 0000000000100000 end: 0000000100000000 type: 2 > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 0000000000000000 - > > 000000000009ac00 (usable) > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000000009ac00 - > > 00000000000a0000 (reserved) > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000000ce000 - > > 00000000000d0000 (reserved) > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000000e0000 - > > 0000000000100000 (reserved) > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 0000000000100000 - > > 000000003fef0000 (usable) > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000003fef0000 - > > 000000003fefb000 (ACPI data) > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000003fefb000 - > > 000000003ff00000 (ACPI NVS) > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000003ff00000 - > > 000000003ff80000 (usable) > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000003ff80000 - > > 0000000040000000 (reserved) > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000e0000000 - > > 00000000f0000000 (reserved) > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 
00000000fec00000 - > > 00000000fed00400 (reserved) > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000fee00000 - > > 00000000fef00000 (reserved) > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000ffb00000 - > > 00000000ffc00000 (reserved) > > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000fff00000 - > > 0000000100000000 (reserved) > > Jun 4 20:53:05 Pleiadi kernel: Zone PFN ranges: > > Jun 4 20:53:05 Pleiadi kernel: DMA 0 -> 4096 > > Jun 4 20:53:05 Pleiadi kernel: Normal 4096 -> 229376 > > Jun 4 20:53:05 Pleiadi kernel: HighMem 229376 -> 262016 > > Jun 4 20:53:05 Pleiadi kernel: early_node_map[1] active PFN ranges > > Jun 4 20:53:05 Pleiadi kernel: 0: 0 -> 262016 > > Jun 4 20:53:05 Pleiadi kernel: ACPI: RSDP 000F6BA0, 0024 (r2 PTLTD ) > > Jun 4 20:53:05 Pleiadi kernel: ACPI: XSDT 3FEF5381, 004C (r1 PTLTD ^I > > XSDT 6040001 LTP 0) > > Jun 4 20:53:05 Pleiadi kernel: ACPI: FACP 3FEF5441, 00F4 (r3 FSC > > 6040001 F4240) > > Jun 4 20:53:05 Pleiadi kernel: ACPI: DSDT 3FEF5535, 597B (r1 FSC > > D1649 6040001 MSFT 2000002) > > Jun 4 20:53:05 Pleiadi kernel: ACPI: FACS 3FEFBFC0, 0040 > > Jun 4 20:53:05 Pleiadi kernel: ACPI: SPCR 3FEFAEB0, 0050 (r1 PTLTD > > $UCRTBL$ 6040001 PTL 1) > > Jun 4 20:53:05 Pleiadi kernel: ACPI: MCFG 3FEFAF00, 0040 (r1 PTLTD > > MCFG 6040001 LTP 0) > > Jun 4 20:53:05 Pleiadi kernel: ACPI: APIC 3FEFAF40, 0098 (r1 PTLTD ^I > > APIC 6040001 LTP 0) > > Jun 4 20:53:05 Pleiadi kernel: ACPI: BOOT 3FEFAFD8, 0028 (r1 PTLTD > > $SBFTBL$ 6040001 LTP 1) > > Jun 4 20:53:05 Pleiadi kernel: Processor #0 15:4 APIC version 20 > > Jun 4 20:53:05 Pleiadi kernel: Processor #1 15:4 APIC version 20 > > Jun 4 20:53:05 Pleiadi kernel: IOAPIC[0]: apic_id 2, version 32, > > address 0xfec00000, GSI 0-23 > > Jun 4 20:53:05 Pleiadi kernel: IOAPIC[1]: apic_id 3, version 32, > > address 0xfec80000, GSI 24-47 > > Jun 4 20:53:05 Pleiadi kernel: IOAPIC[2]: apic_id 4, version 32, > > address 0xfec80800, GSI 48-71 > > Jun 4 20:53:05 Pleiadi kernel: IOAPIC[3]: apic_id 5, 
version 32, > > address 0xfec84000, GSI 72-95 > > Jun 4 20:53:05 Pleiadi kernel: IOAPIC[4]: apic_id 6, version 32, > > address 0xfec84800, GSI 96-119 > > Jun 4 20:53:05 Pleiadi kernel: Enabling APIC mode: Flat. Using 5 I/O > > APICs > > Jun 4 20:53:05 Pleiadi kernel: Allocating PCI resources starting at > > 50000000 (gap: 40000000:a0000000) > > Jun 4 20:53:05 Pleiadi kernel: Built 1 zonelists. Total pages: 259969 > > Jun 4 20:53:05 Pleiadi kernel: PID hash table entries: 4096 (order: 12, > > 16384 bytes) > > Jun 4 20:53:05 Pleiadi kernel: Detected 3200.428 MHz processor. > > Jun 4 20:53:05 Pleiadi kernel: Console: colour VGA+ 80x25 > > Jun 4 20:53:05 Pleiadi kernel: Dentry cache hash table entries: 131072 > > (order: 7, 524288 bytes) > > Jun 4 20:53:05 Pleiadi kernel: Inode-cache hash table entries: 65536 > > (order: 6, 262144 bytes) > > Jun 4 20:53:05 Pleiadi kernel: virtual kernel memory layout: > > Jun 4 20:53:05 Pleiadi kernel: fixmap : 0xfff9d000 - 0xfffff000 > > ( 392 kB) > > Jun 4 20:53:05 Pleiadi kernel: pkmap : 0xff800000 - 0xffc00000 > > (4096 kB) > > Jun 4 20:53:05 Pleiadi kernel: vmalloc : 0xf8800000 - 0xff7fe000 > > ( 111 MB) > > Jun 4 20:53:05 Pleiadi kernel: lowmem : 0xc0000000 - 0xf8000000 > > ( 896 MB) > > Jun 4 20:53:05 Pleiadi kernel: .init : 0xc039f000 - 0xc03ce000 > > ( 188 kB) > > Jun 4 20:53:05 Pleiadi kernel: .data : 0xc02fd400 - 0xc0398114 > > ( 619 kB) > > Jun 4 20:53:05 Pleiadi kernel: .text : 0xc0100000 - 0xc02fd400 > > (2037 kB) > > Jun 4 20:53:05 Pleiadi kernel: Checking if this processor honours the > > WP bit even in supervisor mode... Ok. > > Jun 4 20:53:05 Pleiadi kernel: Calibrating delay using timer specific > > routine.. 6403.78 BogoMIPS (lpj=32018905) > > Jun 4 20:53:05 Pleiadi kernel: Mount-cache hash table entries: 512 > > Jun 4 20:53:05 Pleiadi kernel: monitor/mwait feature present. > > Jun 4 20:53:05 Pleiadi kernel: using mwait in idle threads. 
> > Jun 4 20:53:05 Pleiadi kernel: CPU0: Intel(R) Xeon(TM) CPU 3.20GHz > > stepping 0a > > Jun 4 20:53:05 Pleiadi kernel: Booting processor 1/1 eip 2000 > > Jun 4 20:53:05 Pleiadi kernel: Calibrating delay using timer specific > > routine.. 6400.45 BogoMIPS (lpj=32002267) > > Jun 4 20:53:05 Pleiadi kernel: monitor/mwait feature present. > > Jun 4 20:53:05 Pleiadi kernel: CPU1: Intel(R) Xeon(TM) CPU 3.20GHz > > stepping 0a > > Jun 4 20:53:05 Pleiadi kernel: ENABLING IO-APIC IRQs > > Jun 4 20:53:05 Pleiadi kernel: migration_cost=142 > > Jun 4 20:53:05 Pleiadi kernel: Setting up standard PCI resources > > Jun 4 20:53:05 Pleiadi kernel: ACPI: Device [PS2M] status [00000008]: > > functional but not present; setting present > > Jun 4 20:53:05 Pleiadi kernel: ACPI: Device [ECP] status [00000008]: > > functional but not present; setting present > > Jun 4 20:53:05 Pleiadi kernel: ACPI: Device [COM1] status [00000008]: > > functional but not present; setting present > > Jun 4 20:53:05 Pleiadi kernel: PCI quirk: region f000-f07f claimed by > > ICH4 ACPI/GPIO/TCO > > Jun 4 20:53:05 Pleiadi kernel: PCI quirk: region f180-f1bf claimed by > > ICH4 GPIO > > Jun 4 20:53:05 Pleiadi kernel: PCI: PXH quirk detected, disabling MSI > > for SHPC device > > Jun 4 20:53:05 Pleiadi kernel: PCI: PXH quirk detected, disabling MSI > > for SHPC device > > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKA] (IRQs 3 > > 4 5 6 7 9 10 *11 12 14 15) > > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKB] (IRQs 3 > > 4 5 6 7 *9 10 11 12 14 15) > > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKC] (IRQs 3 > > 4 *5 6 7 9 10 11 12 14 15) > > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKD] (IRQs 3 > > 4 5 6 7 9 *10 11 12 14 15) > > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKE] (IRQs 3 > > 4 5 6 7 9 10 11 12 14 15) *0, disabled. 
> > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKF] (IRQs 3 > > 4 5 6 7 9 10 11 12 14 15) *0, disabled. > > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKG] (IRQs 3 > > 4 5 6 7 9 10 11 12 14 15) *0, disabled. > > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKH] (IRQs 3 > > *4 5 6 7 9 10 11 12 14 15) > > Jun 4 20:53:05 Pleiadi kernel: IP route cache hash table entries: 32768 > > (order: 5, 131072 bytes) > > Jun 4 20:53:05 Pleiadi kernel: TCP established hash table entries: > > 131072 (order: 8, 1572864 bytes) > > Jun 4 20:53:05 Pleiadi kernel: TCP bind hash table entries: 65536 > > (order: 7, 524288 bytes) > > Jun 4 20:53:05 Pleiadi kernel: highmem bounce pool size: 64 pages > > Jun 4 20:53:05 Pleiadi kernel: PNP: PS/2 controller doesn't have AUX > > irq; using default 12 > > Jun 4 20:53:05 Pleiadi kernel: nf_conntrack version 0.5.0 (8188 > > buckets, 65504 max) > > Jun 4 20:53:05 Pleiadi kernel: ip_tables: (C) 2000-2006 Netfilter Core > > Team > > Jun 4 20:53:05 Pleiadi kernel: Using IPI Shortcut mode > > Jun 4 20:53:05 Pleiadi kernel: VFS: Mounted root (xfs filesystem) > > readonly. > > > > > stack trace, etc for when this happened? > > > > I have only a monitor bitmap. Tell me if you want it. > > [ Marco sent me the stack trace photo off-list; attached herewith. 
] > > Satyam >

From owner-xfs@oss.sgi.com Mon Jun 11 08:58:29 2007
Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 08:58:34 -0700 (PDT)
X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=0.8 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012
Received: from smtp121.sbc.mail.re3.yahoo.com (smtp121.sbc.mail.re3.yahoo.com [66.196.96.94]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5BFwQWt028272 for ; Mon, 11 Jun 2007 08:58:29 -0700
Received: (qmail 74325 invoked from network); 11 Jun 2007 15:58:26 -0000
Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@24.5.75.45 with login) by smtp121.sbc.mail.re3.yahoo.com with SMTP; 11 Jun 2007 15:58:26 -0000
X-YMail-OSG: VP7dDZ0VM1kTwBFOqpW250eoHQGX2uP7MQ74zKXOa1sQlZssZcJtbVTS0s4q_uBSQ91VuTX2Hdr_DGUZWK457d9RVGopyZFydfGXDVuTenWt_xXIjLp2b7pnq9Pf52QYGJdZ_MyHkwNn2ho-
Received: by tuatara.stupidest.org (Postfix, from userid 10000) id 025F3182612B; Mon, 11 Jun 2007 08:58:24 -0700 (PDT)
Date: Mon, 11 Jun 2007 08:58:24 -0700
From: Chris Wedgwood
To: Johan Andersson
Cc: xfs@oss.sgi.com
Subject: Re: xfs_fsr allocation group optimization
Message-ID: <20070611155824.GA12668@tuatara.stupidest.org>
References: <1181544692.19145.44.camel@gentoo-johan.transmode.se> <20070611073559.GA26257@tuatara.stupidest.org> <1181551409.19145.57.camel@gentoo-johan.transmode.se> <20070611090138.GA28907@tuatara.stupidest.org> <1181553356.19145.65.camel@gentoo-johan.transmode.se> <20070611094133.GA31108@tuatara.stupidest.org> <1181558353.19145.76.camel@gentoo-johan.transmode.se>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1181558353.19145.76.camel@gentoo-johan.transmode.se>
X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com
X-Virus-Status: Clean
X-archive-position: 11728
X-ecartis-version: Ecartis v1.0.0
Sender:
xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs On Mon, Jun 11, 2007 at 12:39:13PM +0200, Johan Andersson wrote: > In the way xfs_fsr operates now, in almost all user space, I don't > see any good way to tell XFS where to place the extents, other than > creating the temporary file in the same directory as the original > file. Exactly. > My question is really, is there a better way than "find -xdev -inum" > to find what file points to a given inode? You can build then entire tree in-core using bulkstat and readdir, doing the bulkstat first means you can try to optimize the order you do the readdirs in somewhat. From owner-xfs@oss.sgi.com Mon Jun 11 16:08:36 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 16:08:40 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5BN8XWt004298 for ; Mon, 11 Jun 2007 16:08:36 -0700 Received: from edge.local (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 83D5892C3E8; Tue, 12 Jun 2007 09:08:32 +1000 (EST) Subject: Re: xfs_fsr allocation group optimization From: Nathan Scott Reply-To: nscott@aconex.com To: Chris Wedgwood Cc: Johan Andersson , xfs@oss.sgi.com In-Reply-To: <20070611155824.GA12668@tuatara.stupidest.org> References: <1181544692.19145.44.camel@gentoo-johan.transmode.se> <20070611073559.GA26257@tuatara.stupidest.org> <1181551409.19145.57.camel@gentoo-johan.transmode.se> <20070611090138.GA28907@tuatara.stupidest.org> <1181553356.19145.65.camel@gentoo-johan.transmode.se> <20070611094133.GA31108@tuatara.stupidest.org> <1181558353.19145.76.camel@gentoo-johan.transmode.se> <20070611155824.GA12668@tuatara.stupidest.org> 
Content-Type: text/plain Organization: Aconex Date: Tue, 12 Jun 2007 09:07:36 +1000 Message-Id: <1181603256.3758.46.camel@edge.yarra.acx> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11729 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Mon, 2007-06-11 at 08:58 -0700, Chris Wedgwood wrote: > > > > In the way xfs_fsr operates now, in almost all user space, I don't > > see any good way to tell XFS where to place the extents, other than > > creating the temporary file in the same directory as the original > > file. > > Exactly. > > > My question is really, is there a better way than "find -xdev -inum" > > to find what file points to a given inode? > > You can build then entire tree in-core using bulkstat and readdir, > doing the bulkstat first means you can try to optimize the order you > do the readdirs in somewhat. > Probably better to change the kernel extent-swap code to not do alloc-near-tempinode allocations, and instead find a way to pass XFS_ALLOCTYPE_THIS_AG/XFS_ALLOCTYPE_NEAR_BNO/or some saner alloc flag down to the allocator for all extent swapping allocations. cheers. 
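The single-pass idea above — bulkstat first, then readdir, and answer every inode-to-path query from an in-core map instead of re-running `find -xdev -inum` per file — can be sketched in userspace with a plain directory walk. This is a hedged, non-XFS-specific illustration (it stats every entry; a real xfs_fsr helper would use the bulkstat ioctl):

```python
import os

def build_inode_map(root):
    """Walk `root` once, recording every regular file's inode number.

    One pass replaces repeated `find -xdev -inum N` scans: afterwards,
    finding the path(s) for an inode is a dictionary lookup.
    """
    inode_map = {}
    root_dev = os.lstat(root).st_dev
    for dirpath, dirnames, filenames in os.walk(root):
        # stay on one filesystem, like find's -xdev
        dirnames[:] = [d for d in dirnames
                       if os.lstat(os.path.join(dirpath, d)).st_dev == root_dev]
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            if st.st_dev == root_dev:
                # hard links: several paths may share one inode
                inode_map.setdefault(st.st_ino, []).append(path)
    return inode_map
```

After the walk, each lookup is O(1), which is what makes ordering the readdirs by bulkstat output worthwhile in the first place.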
From owner-xfs@oss.sgi.com Mon Jun 11 16:15:00 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 16:15:02 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5BNEuWt005344 for ; Mon, 11 Jun 2007 16:14:59 -0700 Received: from edge.local (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 8CE9392C736; Tue, 12 Jun 2007 09:14:41 +1000 (EST) Subject: Re: Review: Be smarter about handling ENOSPC during writeback From: Nathan Scott Reply-To: nscott@aconex.com To: David Chinner Cc: Timothy Shimmin , xfs-dev , xfs-oss In-Reply-To: <20070608073342.GW85884050@sgi.com> References: <20070604045219.GG86004887@sgi.com> <20070608073342.GW85884050@sgi.com> Content-Type: text/plain Organization: Aconex Date: Tue, 12 Jun 2007 09:13:45 +1000 Message-Id: <1181603625.3758.53.camel@edge.yarra.acx> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11730 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Fri, 2007-06-08 at 17:33 +1000, David Chinner wrote: > On Fri, Jun 08, 2007 at 03:28:14PM +1000, Timothy Shimmin wrote: > > Will we get questions from people about reduced space from df? :) > > If we do, I think you just volunteered to write the FAQ entry ;) It would be more correct of XFS to start doing the right thing by reporting different values for b_free and b_avail in statfs(2) - this code in xfs_mount.c::xfs_statvfs() ... 
statp->f_bfree = statp->f_bavail = sbp->sb_fdblocks - XFS_ALLOC_SET_ASIDE(mp); I know this wasn't done for the original per-mount space reserving ioctls (that was for one user though - dmapi, so I can see why there may have been a shortcut done there) ... but if it affects everyone now, there will be questions asked, and there is a standard interface for reporting this space discrepency that tools like df(1) already use. cheers. From owner-xfs@oss.sgi.com Mon Jun 11 17:22:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 17:22:21 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.7 required=5.0 tests=AWL,BAYES_50,J_CHICKENPOX_43, RCVD_NUMERIC_HELO autolearn=no version=3.2.0-pre1-r499012 Received: from mail34.messagelabs.com (mail34.messagelabs.com [216.82.241.35]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C0MHWt017831 for ; Mon, 11 Jun 2007 17:22:18 -0700 X-VirusChecked: Checked X-Env-Sender: Rene.Salmon@bp.com X-Msg-Ref: server-19.tower-34.messagelabs.com!1181606135!53482335!1 X-StarScan-Version: 5.5.12.11; banners=-,-,- X-Originating-IP: [129.230.248.73] Received: (qmail 19444 invoked from network); 11 Jun 2007 23:55:36 -0000 Received: from unknown (HELO bp1xeuav706.bp1.ad.bp.com) (129.230.248.73) by server-19.tower-34.messagelabs.com with SMTP; 11 Jun 2007 23:55:36 -0000 Received: from BP1XEUEX033.bp1.ad.bp.com ([149.184.176.167]) by bp1xeuav706.bp1.ad.bp.com with InterScan Messaging Security Suite; Tue, 12 Jun 2007 00:55:35 +0100 Received: from BP1XEUEX706-C.bp1.ad.bp.com ([149.182.218.95]) by BP1XEUEX033.bp1.ad.bp.com with Microsoft SMTPSVC(6.0.3790.1830); Tue, 12 Jun 2007 00:55:35 +0100 Received: from 149.179.228.36 ([149.179.228.36]) by BP1XEUEX706-C.bp1.ad.bp.com ([149.182.218.28]) with Microsoft Exchange Server HTTP-DAV ; Mon, 11 Jun 2007 23:55:34 +0000 Received: from holwrs01 by bp1xeuex706-c.bp1.ad.bp.com; 11 Jun 2007 18:55:34 
-0500 Subject: sunit not working From: "Salmon, Rene" To: xfs@oss.sgi.com Content-Type: text/plain Content-Transfer-Encoding: 7bit Date: Mon, 11 Jun 2007 18:55:34 -0500 Message-Id: <1181606134.7873.72.camel@holwrs01> Mime-Version: 1.0 X-Mailer: Evolution 2.8.2 X-OriginalArrivalTime: 11 Jun 2007 23:55:35.0149 (UTC) FILETIME=[00BE71D0:01C7AC84] X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11731 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: Rene.Salmon@bp.com Precedence: bulk X-list: xfs

Hi list,

I have a HW RAID 5 array with chunk size 256KB and stripe size 2560KB. Basically a 10+1+1 array: 10 drives, one parity, one spare. I am trying to create an xfs file system with the appropriate sunit and swidth.

# mkfs.xfs -d sunit=512,swidth=5120 -f /dev/mapper/mpath9
meta-data=/dev/mapper/mpath9   isize=256    agcount=32, agsize=56652352 blks
         =                     sectsz=512   attr=0
data     =                     bsize=4096   blocks=1812874752, imaxpct=25
         =                     sunit=64     swidth=640 blks, unwritten=1
naming   =version 2            bsize=4096
log      =internal log         bsize=4096   blocks=32768, version=1
         =                     sectsz=512   sunit=0 blks
realtime =none                 extsz=65536  blocks=0, rtextents=0

As you can see the sunit gets set to 64 upon creation and not 512 like I asked. Also if I try to give it some mount options it does the same thing.

sgi210a:~ # mount -o sunit=512,swidth=5120 /dev/mapper/mpath9 /mnt/
sgi210a:~ # xfs_info /mnt/
meta-data=/dev/mapper/mpath9   isize=256    agcount=32, agsize=56652352 blks
         =                     sectsz=512   attr=0
data     =                     bsize=4096   blocks=1812874752, imaxpct=25
         =                     sunit=64     swidth=640 blks, unwritten=1
naming   =version 2            bsize=4096
log      =internal             bsize=4096   blocks=32768, version=1
         =                     sectsz=512   sunit=0 blks
realtime =none                 extsz=65536  blocks=0, rtextents=0
sgi210a:~ #

Any ideas?

Last I tried to subscribe to the list by sending email to ecartis@oss.sgi.com a couple of times but was unsuccessful should I send email elsewhere to subscribe? thank you Rene From owner-xfs@oss.sgi.com Mon Jun 11 17:35:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 17:35:03 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5C0YxWt020703 for ; Mon, 11 Jun 2007 17:35:00 -0700 Received: from edge.local (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id B8C9292C38F; Tue, 12 Jun 2007 10:34:59 +1000 (EST) Subject: Re: sunit not working From: Nathan Scott Reply-To: nscott@aconex.com To: "Salmon, Rene" Cc: xfs@oss.sgi.com In-Reply-To: <1181606134.7873.72.camel@holwrs01> References: <1181606134.7873.72.camel@holwrs01> Content-Type: text/plain Organization: Aconex Date: Tue, 12 Jun 2007 10:34:04 +1000 Message-Id: <1181608444.3758.73.camel@edge.yarra.acx> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11732 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Mon, 2007-06-11 at 18:55 -0500, Salmon, Rene wrote: > As you can see the sunit gets set to 64 upon creation and not 512 like I > asked. Also if it try to give it some mount options it does the same > thing. > > sgi210a:~ # mount -o sunit=512,swidth=5120 /dev/mapper/mpath9 /mnt/ Its being reported in units of filesystem blocks, and its specified in 512 byte units. Pretty dopey, but thats why its different. 
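For reference, the conversion behind that explanation can be written out explicitly. This is only a sketch of the arithmetic (512-byte units on the mkfs/mount command line, 4096-byte filesystem blocks in the xfs_info report, as in the output quoted in this thread):

```python
SECTOR = 512   # mkfs.xfs/mount take sunit,swidth in 512-byte units
BSIZE = 4096   # filesystem block size from the mkfs output (bsize=4096)

def reported_value(cmdline_value, bsize=BSIZE):
    """Convert a command-line sunit/swidth (512-byte units) into the
    filesystem-block value that xfs_info reports."""
    return cmdline_value * SECTOR // bsize

# sunit=512 on the command line is reported as 64 blocks (256 KiB either way)
assert reported_value(512) == 64
# swidth=5120 is reported as 640 blocks (2560 KiB either way)
assert reported_value(5120) == 640
```

So the requested stripe geometry was in fact accepted; only the reporting units differ.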
> sgi210a:~ # xfs_info /mnt/ > meta-data=/dev/mapper/mpath9 isize=256 agcount=32, > agsize=56652352 blks > = sectsz=512 attr=0 > data = bsize=4096 blocks=1812874752, > imaxpct=25 > = sunit=64 swidth=640 blks, > unwritten=1 > naming =version 2 bsize=4096 > log =internal bsize=4096 blocks=32768, version=1 > = sectsz=512 sunit=0 blks > realtime =none extsz=65536 blocks=0, rtextents=0 > sgi210a:~ # $ gdb -q (gdb) p 512 * 512 $1 = 262144 (gdb) p 64 * 4096 $2 = 262144 (gdb) (thats 262144 bytes, of course) > Last I tried to subscribe to the list by sending email to > ecartis@oss.sgi.com a couple of times but was unsuccessful should I send > email elsewhere to subscribe? Its a frikkin' lottery. :) Keep trying and keep whining is how I ended up getting back on (whining on IRC on #xfs helps too). cheers. -- Nathan From owner-xfs@oss.sgi.com Mon Jun 11 18:38:22 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 18:38:25 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C1cIWt029334 for ; Mon, 11 Jun 2007 18:38:21 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA18700; Tue, 12 Jun 2007 11:38:12 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5C1c8Af120734884; Tue, 12 Jun 2007 11:38:09 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5C1c3Bc120813180; Tue, 12 Jun 2007 11:38:03 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 12 Jun 2007 11:38:03 +1000 
From: David Chinner To: Nathan Scott Cc: Chris Wedgwood , Johan Andersson , xfs@oss.sgi.com, linux-fsdevel Subject: Re: xfs_fsr allocation group optimization Message-ID: <20070612013803.GI86004887@sgi.com> References: <1181544692.19145.44.camel@gentoo-johan.transmode.se> <20070611073559.GA26257@tuatara.stupidest.org> <1181551409.19145.57.camel@gentoo-johan.transmode.se> <20070611090138.GA28907@tuatara.stupidest.org> <1181553356.19145.65.camel@gentoo-johan.transmode.se> <20070611094133.GA31108@tuatara.stupidest.org> <1181558353.19145.76.camel@gentoo-johan.transmode.se> <20070611155824.GA12668@tuatara.stupidest.org> <1181603256.3758.46.camel@edge.yarra.acx> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1181603256.3758.46.camel@edge.yarra.acx> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11733 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 12, 2007 at 09:07:36AM +1000, Nathan Scott wrote: > On Mon, 2007-06-11 at 08:58 -0700, Chris Wedgwood wrote: > > > In the way xfs_fsr operates now, in almost all user space, I don't > > > see any good way to tell XFS where to place the extents, other than > > > creating the temporary file in the same directory as the original > > > file. > > > > Exactly. > > > > > My question is really, is there a better way than "find -xdev -inum" > > > to find what file points to a given inode? > > > > You can build then entire tree in-core using bulkstat and readdir, > > doing the bulkstat first means you can try to optimize the order you > > do the readdirs in somewhat. 
> > Probably better to change the kernel extent-swap code to not do > alloc-near-tempinode allocations, and instead find a way to pass > XFS_ALLOCTYPE_THIS_AG/XFS_ALLOCTYPE_NEAR_BNO/or some saner alloc > flag down to the allocator for all extent swapping allocations. /me sighs and points to the generic allocation interface I wanted for exactly these reasons: http://marc.info/?l=linux-fsdevel&m=116278169519095&w=2 Instead, we're getting a mostly useless XFS_IOC_RESVSP replacement called sys_fallocate() that provides us with pretty much nothing. Given that sys_fallocate() can't be extended to do this sort of thing, we're going to be stuck with doing our own thing again.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 11 18:41:57 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 18:42:01 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C1frWt029925 for ; Mon, 11 Jun 2007 18:41:55 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA18819; Tue, 12 Jun 2007 11:41:50 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5C1fmAf113651432; Tue, 12 Jun 2007 11:41:49 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5C1fkdw119759772; Tue, 12 Jun 2007 11:41:46 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 12 Jun 2007 11:41:46 +1000 From: David Chinner To: Johan Andersson Cc: Chris 
Wedgwood , xfs@oss.sgi.com Subject: Re: xfs_fsr allocation group optimization Message-ID: <20070612014146.GJ86004887@sgi.com> References: <1181544692.19145.44.camel@gentoo-johan.transmode.se> <20070611073559.GA26257@tuatara.stupidest.org> <1181551409.19145.57.camel@gentoo-johan.transmode.se> <20070611090138.GA28907@tuatara.stupidest.org> <1181553356.19145.65.camel@gentoo-johan.transmode.se> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1181553356.19145.65.camel@gentoo-johan.transmode.se> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11734 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Mon, Jun 11, 2007 at 11:15:56AM +0200, Johan Andersson wrote: > On Mon, 2007-06-11 at 02:01 -0700, Chris Wedgwood wrote: > > using "find .... xfs_fsr" you get temporary files in the same AG as > > the file your are defragmenting, avoiding the spreading out effect, > > but this might not be the least-defragmented file you can get > > > > what's really needed is an attempt to find space near the original > > file if possible and if not then an option to try harder looking in > > other AGs > This is exactly what the simple but ugly patch I attached achieves by > looking up the filename of the inode it defrags when doing a full file > system defrag. And it works well, except that it spends a lot of time > finding that file name. As I said, a better option would be if you could > tell XFS in what AG you want extents for a newly created file to place > it's extents in. Yup. That would be nice. We've got the basis of doing allocation policies with the filestreams code - an arbitrary blob of data associated with an inode that influences the allocation decision. 
Userspace driven allocation hints are more complex and require subtler hooks in the allocation path, though.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 11 18:45:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 18:45:03 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C1iwWt030529 for ; Mon, 11 Jun 2007 18:45:00 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA18910; Tue, 12 Jun 2007 11:44:55 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5C1irAf120883052; Tue, 12 Jun 2007 11:44:54 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5C1iq2D116951696; Tue, 12 Jun 2007 11:44:52 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 12 Jun 2007 11:44:52 +1000 From: David Chinner To: Johan Andersson Cc: xfs@oss.sgi.com Subject: Re: xfs_fsr allocation group optimization Message-ID: <20070612014452.GK86004887@sgi.com> References: <1181544692.19145.44.camel@gentoo-johan.transmode.se> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1181544692.19145.44.camel@gentoo-johan.transmode.se> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11735 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: 
xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Mon, Jun 11, 2007 at 08:51:32AM +0200, Johan Andersson wrote: > Does anyone know of a good way to find one filename that points o a > certain inode? We need an rmap.... We have some prototype linux code that does parent pointers (i.e. each inode has a back pointer to it's parent inode), but that, IIUC, is a long way from prime-time. Tim? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 11 18:46:12 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 18:46:15 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C1k9Wt030893 for ; Mon, 11 Jun 2007 18:46:10 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA19047; Tue, 12 Jun 2007 11:46:02 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5C1k0Af120663727; Tue, 12 Jun 2007 11:46:01 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5C1jwhm120859736; Tue, 12 Jun 2007 11:45:58 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 12 Jun 2007 11:45:58 +1000 From: David Chinner To: Christoph Hellwig Cc: xfs@oss.sgi.com Subject: Re: [PATCH] fix 32bit build Message-ID: <20070612014558.GL86004887@sgi.com> References: <20070609102637.GA23294@lst.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: 
<20070609102637.GA23294@lst.de> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11736 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Sat, Jun 09, 2007 at 12:26:37PM +0200, Christoph Hellwig wrote: > > Signed-off-by: Christoph Hellwig > > Index: linux-2.6-xfs/fs/xfs/xfs_mount.c > =================================================================== > --- linux-2.6-xfs.orig/fs/xfs/xfs_mount.c 2007-06-09 11:20:51.000000000 +0200 > +++ linux-2.6-xfs/fs/xfs/xfs_mount.c 2007-06-09 11:21:43.000000000 +0200 > @@ -1154,7 +1154,9 @@ xfs_mountfs( > * This may drive us straight to ENOSPC on mount, but that implies > * we were already there on the last unmount. > */ > - resblks = min_t(__uint64_t, mp->m_sb.sb_dblocks / 20, 1024); > + resblks = mp->m_sb.sb_dblocks; > + do_div(resblks, 20); > + resblks = min_t(__uint64_t, resblks, 1024); > xfs_reserve_blocks(mp, &resblks, NULL); > > return 0; > /me smacks forehead. I'll get this sorted. Thanks, Christoph. Cheers, Dave. 
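The point of Christoph's change is that dividing a 64-bit quantity with plain `/` in 32-bit kernel code pulls in a compiler division helper the kernel does not provide, so the division has to go through `do_div()` before the result is capped. The computed value is unchanged; as plain arithmetic it is simply (an illustrative sketch, not kernel code):

```python
def mount_reservation(sb_dblocks):
    """Mount-time reserved-block computation from the patch:
    5% of the filesystem's data blocks, capped at 1024 blocks."""
    resblks = sb_dblocks // 20   # do_div(resblks, 20) in the kernel patch
    return min(resblks, 1024)    # min_t(__uint64_t, resblks, 1024)

# a tiny 2000-block filesystem reserves 5% = 100 blocks
assert mount_reservation(2000) == 100
# anything large enough is capped at 1024 blocks
assert mount_reservation(10**9) == 1024
```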
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 11 19:40:42 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 19:40:44 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.8 required=5.0 tests=AWL,BAYES_80,J_CHICKENPOX_28, J_CHICKENPOX_34,J_CHICKENPOX_55 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C2ebWt005900 for ; Mon, 11 Jun 2007 19:40:39 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA20356; Tue, 12 Jun 2007 12:40:32 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5C2eUAf119759759; Tue, 12 Jun 2007 12:40:32 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5C2ePgS119337552; Tue, 12 Jun 2007 12:40:25 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 12 Jun 2007 12:40:25 +1000 From: David Chinner To: xfs@oss.sgi.com Cc: iusty@k1024.org Subject: Re: [PATCH] Implement shrink of empty AGs Message-ID: <20070612024025.GM86004887@sgi.com> References: <20070610164014.GA10936@teal.hq.k1024.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070610164014.GA10936@teal.hq.k1024.org> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11737 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Sun, Jun 10, 
2007 at 06:40:14PM +0200, Iustin Pop wrote: > The attached patch implements shrinking of completely empty allocation > groups. The patch is against current CVS and modifies two files: > - xfs_trans.c to remove two asserts in which prevent lowering the > number of AGs or filesystem blocks; > - xfs_fsops.c where it does: > - modify xfs_growfs_data() to branch to either > xfs_growfs_data_private or xfs_shrinkfs_data private depending on > the new size of the fs > - abstract the last part of xfs_growfs_data_private (the modify of > all the superblocks) into a separate function, xfs_update_sb(), > which is called both from shrink and grow > - add the new xfs_shrinkfs_data_private function, mostly based on > the growfs function comments are all inline.... > > There are many printk()'s left in the patch, I left them as they show > where I compute some important values. There are also many FIXMEs in the > comments showing what parts I didn't understand or was not sure about > (not that these are the only ones...). Probably for a real patch, > xfs-specific debug hooks need to be added and the printk()s removed. > > The patch works on UML and QEMU virtual machines, both in UP and SMP. I > just tested many shrink/grow operations and verified with xfs_repair > that the fs is not corrupted. The free space counters seem to be correct > after shrink. > > Note that you also need to remove the check from xfs_growfs.c of not > allowing to shrink the filesystem. > > regards, > iustin > diff -X ignore -urN linux-2.6-xfs.cvs-orig/fs/xfs/xfs_fsops.c linux-2.6-xfs.shrink/fs/xfs/xfs_fsops.c > --- linux-2.6-xfs.cvs-orig/fs/xfs/xfs_fsops.c 2007-06-09 18:56:21.509308225 +0200 > +++ linux-2.6-xfs.shrink/fs/xfs/xfs_fsops.c 2007-06-10 18:32:36.074856477 +0200 > @@ -112,6 +112,53 @@ > return 0; > } > > +static void xfs_update_sb( STATIC void xfs_growfs_update_sb( > + xfs_mount_t *mp, /* mount point for filesystem */ > + xfs_agnumber_t nagimax, > + xfs_agnumber_t nagcount) /* new number of a.g. 
*/ tabs, not spaces (and tabs are 8 spaces). > +{ > + xfs_agnumber_t agno; > + xfs_buf_t *bp; > + xfs_sb_t *sbp; > + int error; > + > + /* New allocation groups fully initialized, so update mount struct */ > + if (nagimax) > + mp->m_maxagi = nagimax; > + if (mp->m_sb.sb_imax_pct) { > + __uint64_t icount = mp->m_sb.sb_dblocks * mp->m_sb.sb_imax_pct; I'd prefer to have long lines like this split. > + do_div(icount, 100); > + mp->m_maxicount = icount << mp->m_sb.sb_inopblog; > + } else > + mp->m_maxicount = 0; Insert empty line. > + for (agno = 1; agno < nagcount; agno++) { > + error = xfs_read_buf(mp, mp->m_ddev_targp, > + XFS_AGB_TO_DADDR(mp, agno, XFS_SB_BLOCK(mp)), > + XFS_FSS_TO_BB(mp, 1), 0, &bp); > + if (error) { > + xfs_fs_cmn_err(CE_WARN, mp, > + "error %d reading secondary superblock for ag %d", > + error, agno); > + break; > + } > + sbp = XFS_BUF_TO_SBP(bp); > + xfs_xlatesb(sbp, &mp->m_sb, -1, XFS_SB_ALL_BITS); Insert empty line > + /* > + * If we get an error writing out the alternate superblocks, > + * just issue a warning and continue. The real work is > + * already done and committed. > + */ > + if (!(error = xfs_bwrite(mp, bp))) { > + continue; > + } else { > + xfs_fs_cmn_err(CE_WARN, mp, > + "write error %d updating secondary superblock for ag %d", > + error, agno); > + break; /* no point in continuing */ > + } > + } error = xfs_bwrite(mp, bp); if (error) { xfs_fs_cmn_err(...) 
break; } } > +} > + > static int > xfs_growfs_data_private( > xfs_mount_t *mp, /* mount point for filesystem */ > @@ -135,7 +182,6 @@ > xfs_rfsblock_t nfree; > xfs_agnumber_t oagcount; > int pct; > - xfs_sb_t *sbp; > xfs_trans_t *tp; > > nb = in->newblocks; > @@ -356,44 +402,228 @@ > if (error) { > return error; > } > - /* New allocation groups fully initialized, so update mount struct */ > - if (nagimax) > - mp->m_maxagi = nagimax; > - if (mp->m_sb.sb_imax_pct) { > - __uint64_t icount = mp->m_sb.sb_dblocks * mp->m_sb.sb_imax_pct; > - do_div(icount, 100); > - mp->m_maxicount = icount << mp->m_sb.sb_inopblog; > - } else > - mp->m_maxicount = 0; > - for (agno = 1; agno < nagcount; agno++) { > - error = xfs_read_buf(mp, mp->m_ddev_targp, > - XFS_AGB_TO_DADDR(mp, agno, XFS_SB_BLOCK(mp)), > - XFS_FSS_TO_BB(mp, 1), 0, &bp); > + xfs_update_sb(mp, nagimax, nagcount); > + return 0; > + > + error0: > + xfs_trans_cancel(tp, XFS_TRANS_ABORT); > + return error; > +} > + > +static int STATIC int > +xfs_shrinkfs_data_private( > + xfs_mount_t *mp, /* mount point for filesystem */ > + xfs_growfs_data_t *in) /* growfs data input struct */ whitespace issues > +{ > + xfs_agf_t *agf; > + xfs_agnumber_t agno; > + xfs_buf_t *bp; > + int dpct; > + int error; > + xfs_agnumber_t nagcount; /* new AG count */ > + xfs_agnumber_t oagcount; /* old AG count */ > + xfs_agnumber_t nagimax = 0; > + xfs_rfsblock_t nb, nb_mod; > + xfs_rfsblock_t dbdelta; /* will be used as a > + check that we > + shrink the fs by > + the correct number > + of blocks */ > + xfs_rfsblock_t fdbdelta; /* will keep track of > + how many ag blocks > + we need to > + remove */ Long comments like this don't go on the declaration. Put them where the variable is initialised or first used. 
> + int pct; > + xfs_trans_t *tp; > + > + nb = in->newblocks; > + pct = in->imaxpct; > + if (nb >= mp->m_sb.sb_dblocks || pct < 0 || pct > 100) > + return XFS_ERROR(EINVAL); > + dpct = pct - mp->m_sb.sb_imax_pct; This next bit: > + error = xfs_read_buf(mp, mp->m_ddev_targp, > + XFS_FSB_TO_BB(mp, nb) - XFS_FSS_TO_BB(mp, 1), > + XFS_FSS_TO_BB(mp, 1), 0, &bp); > + if (error) > + return error; > + ASSERT(bp); > + /* FIXME: we release the buffer here manually because we are > + * outside of a transaction? The other buffers read using the > + * functions which take a tp parameter are not released in > + * growfs > + */ > + xfs_buf_relse(bp); Should not be necessary - we don't need to check if the new last filesystem block is beyond EOF because we are shrinking.... To answer the FIXME - xfs_trans_commit() releases locked buffers and inodes that have been joined ot the transaction unless they have also been held. So if you are outside a transaction, you do have to ensure you release any buffers you read. > + /* Do basic checks (at the fs level) */ > + oagcount = mp->m_sb.sb_agcount; > + nagcount = nb; > + nb_mod = do_div(nagcount, mp->m_sb.sb_agblocks); > + if(nb_mod) { > + printk("not shrinking on an AG boundary (diff=%d)\n", nb_mod); > + return XFS_ERROR(ENOSPC); EINVAL, I think. > + } > + if(nagcount < 2) { > + printk("refusing to shrink below 2 AGs\n"); > + return XFS_ERROR(ENOSPC); EINVAL. 
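The two review comments above make the same point: a shrink target that is misaligned or too small is a bad argument, not an out-of-space condition, so EINVAL fits where the patch returned ENOSPC. The patch's checks, with the suggested error codes, reduce to the following sketch (the function name is hypothetical; the field names are the patch's):

```python
import errno

def check_shrink_target(newblocks, agblocks, agcount):
    """Validate a proposed shrink size the way the reviewed patch does,
    returning 0 on success or a negative errno.

    newblocks: requested new size in filesystem blocks
    agblocks:  blocks per allocation group (sb_agblocks)
    agcount:   current number of allocation groups (sb_agcount)
    """
    nagcount, nb_mod = divmod(newblocks, agblocks)
    if nb_mod:                  # not shrinking on an AG boundary
        return -errno.EINVAL
    if nagcount < 2:            # refuse to shrink below 2 AGs
        return -errno.EINVAL
    if nagcount >= agcount:     # number of AGs would not decrease
        return -errno.EINVAL
    return 0

# shrink a 20-AG filesystem (100 blocks per AG) down to 10 AGs: accepted
assert check_shrink_target(1000, 100, 20) == 0
# one block off an AG boundary: rejected as a bad argument
assert check_shrink_target(1001, 100, 20) == -errno.EINVAL
```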
> + } > + if(nagcount >= oagcount) { > + printk("number of AGs will not decrease\n"); > + return XFS_ERROR(EINVAL); > + } > + printk("Cur ag=%d, cur blocks=%llu\n", > + mp->m_sb.sb_agcount, mp->m_sb.sb_dblocks); > + printk("New ag=%d, new blocks=%d\n", nagcount, nb); > + > + printk("Will resize from %llu to %d, delta is %llu\n", > + mp->m_sb.sb_dblocks, nb, mp->m_sb.sb_dblocks - nb); > + /* Check to see if we trip over the log section */ > + printk("logstart=%llu logblocks=%u\n", > + mp->m_sb.sb_logstart, mp->m_sb.sb_logblocks); > + if (nb < mp->m_sb.sb_logstart + mp->m_sb.sb_logblocks) > + return XFS_ERROR(EINVAL); Insert empty line > + /* dbdelta starts at the diff and must become zero */ > + dbdelta = mp->m_sb.sb_dblocks - nb; > + tp = xfs_trans_alloc(mp, XFS_TRANS_GROWFS); > + printk("reserving %d\n", XFS_GROWFS_SPACE_RES(mp) + dbdelta); > + if ((error = xfs_trans_reserve(tp, XFS_GROWFS_SPACE_RES(mp) + dbdelta, > + XFS_GROWDATA_LOG_RES(mp), 0, 0, 0))) { > + xfs_trans_cancel(tp, 0); > + return error; > + } What's the dbdelta part of the reservation for? That's reserving dbdelta blocks for *allocations*, so I don't think this is right.... > + > + fdbdelta = 0; > + > + /* Per-AG checks */ > + /* FIXME: do we need to hold m_peraglock while doing this? */ Yes. > + /* I think that since we do read and write to the m_perag > + * stuff, we should be holding the lock for the entire walk & > + * modify of the fs > + */ Deadlock warning! holding the m_peraglock in write mode will cause allocation deadlocks if you are not careful as all allocation/free operations take the m_peraglock in read mode. (And yes, growing an active, loaded filesystem can deadlock because of this.) > + /* Note that because we hold the lock, on any error+early > + * return, we must either release manually and return, or > + * jump to error0 > + */ whitespace. 
> + down_write(&mp->m_peraglock); > + for(agno = oagcount - 1; agno >= nagcount; agno--) { > + xfs_extlen_t usedblks; /* total used blocks in this a.g. */ > + xfs_extlen_t freeblks; /* free blocks in this a.g. */ > + xfs_agblock_t aglen; /* this ag's len */ > + struct xfs_perag *pag; /* the m_perag structure */ > + > + printk("doing agno=%d\n", agno); > + > + pag = &mp->m_perag[agno]; > + > + error = xfs_alloc_read_agf(mp, tp, agno, 0, &bp); > if (error) { > - xfs_fs_cmn_err(CE_WARN, mp, > - "error %d reading secondary superblock for ag %d", > - error, agno); > - break; > + goto error0; > } > - sbp = XFS_BUF_TO_SBP(bp); > - xfs_xlatesb(sbp, &mp->m_sb, -1, XFS_SB_ALL_BITS); > + ASSERT(bp); > + agf = XFS_BUF_TO_AGF(bp); > + aglen = INT_GET(agf->agf_length, ARCH_CONVERT); > + > + /* read the pagf/pagi if not already initialized */ > + /* agf should be initialized because of the above read_agf */ > + ASSERT(pag->pagf_init); > + if (!pag->pagi_init) { > + if ((error = xfs_ialloc_read_agi(mp, tp, agno, &bp))) > + goto error0; I don't think you should be overwriting bp here.... > + ASSERT(pag->pagi_init); > + } > + Because now you have bp potentially pointing to two different buffers. > /* > - * If we get an error writing out the alternate superblocks, > - * just issue a warning and continue. The real work is > - * already done and committed. 
> + * Check the inodes: as long as we have pagi_count == > + * pagi_freecount == 0, then: a) we don't have to > + * update any global inode counters, and b) there are > + * no extra blocks in inode btrees > */ > - if (!(error = xfs_bwrite(mp, bp))) { > - continue; > - } else { > - xfs_fs_cmn_err(CE_WARN, mp, > - "write error %d updating secondary superblock for ag %d", > - error, agno); > - break; /* no point in continuing */ > + if(pag->pagi_count > 0 || > + pag->pagi_freecount > 0) { > + printk("agi %d has %d inodes in total and %d free\n", > + agno, pag->pagi_count, pag->pagi_freecount); > + error = XFS_ERROR(ENOSPC); > + goto error0; > + } > + > + /* Check the AGF: if levels[] == 1, then there should > + * be no extra blocks in the btrees beyond the ones > + * at the beginning of the AG > + */ > + if(pag->pagf_levels[XFS_BTNUM_BNOi] > 1 || > + pag->pagf_levels[XFS_BTNUM_CNTi] > 1) { > + printk("agf %d has level %d bt and %d cnt\n", > + agno, > + pag->pagf_levels[XFS_BTNUM_BNOi], > + pag->pagf_levels[XFS_BTNUM_CNTi]); > + error = XFS_ERROR(ENOSPC); > + goto error0; > } ok, so we have empty AGs here. You might want to check that the inode btree is empty and that the AGI unlinked list is empty. 
> + > + freeblks = pag->pagf_freeblks; > + printk("Usage: %d prealloc, %d flcount\n", > + XFS_PREALLOC_BLOCKS(mp), pag->pagf_flcount); > + > + /* Done gathering data, check sizes */ > + usedblks = XFS_PREALLOC_BLOCKS(mp) + pag->pagf_flcount; > + printk("agno=%d agf_length=%d computed used=%d" > + " known free=%d\n", agno, aglen, usedblks, freeblks); > + > + if(usedblks + freeblks != aglen) { > + printk("agno %d is not free (%d blocks allocated)\n", > + agno, aglen-usedblks-freeblks); > + error = XFS_ERROR(ENOSPC); > + goto error0; > + } > + dbdelta -= aglen; > + printk("will lower with %d\n", > + aglen - XFS_PREALLOC_BLOCKS(mp)); > + fdbdelta += aglen - XFS_PREALLOC_BLOCKS(mp); Ok, so why not just fdbdelta += mp->m_sb.sb_agblocks - XFS_PREALLOC_BLOCKS(mp); > + } > + /* > + * Check that we removed all blocks > + */ > + ASSERT(!dbdelta); > + ASSERT(nagcount < oagcount); Error out, not assert, because at this point we have not changed anything. > + > + printk("to free: %d, oagcount=%d, nagcount=%d\n", > + fdbdelta, oagcount, nagcount); > + > + xfs_trans_agblocks_delta(tp, -((long)fdbdelta)); > + xfs_trans_mod_sb(tp, XFS_TRANS_SB_AGCOUNT, nagcount - oagcount); > + xfs_trans_mod_sb(tp, XFS_TRANS_SB_DBLOCKS, nb - mp->m_sb.sb_dblocks); > + xfs_trans_mod_sb(tp, XFS_TRANS_SB_FDBLOCKS, -((int64_t)fdbdelta)); > + > + if (dpct) > + xfs_trans_mod_sb(tp, XFS_TRANS_SB_IMAXPCT, dpct); > + error = xfs_trans_commit(tp, 0); > + if (error) { > + up_write(&mp->m_peraglock); > + return error; > } > + /* Free memory as the number of AG has changed */ > + for (agno = nagcount; agno < oagcount; agno++) > + if (mp->m_perag[agno].pagb_list) > + kmem_free(mp->m_perag[agno].pagb_list, > + sizeof(xfs_perag_busy_t) * > + XFS_PAGB_NUM_SLOTS); > + > + mp->m_perag = kmem_realloc(mp->m_perag, > + sizeof(xfs_perag_t) * nagcount, > + sizeof(xfs_perag_t) * oagcount, > + KM_SLEEP); This is not really safe - how do we know if all the users of the higher AGs have gone away? 
I think we should simply just zero out the unused AGs and don't worry about a realloc(). > + /* FIXME: here we could instead just lower > + * nagimax to nagcount; is it better this way? > + */ Not really. > + /* FIXME: why is this flag unconditionally set in growfs? */ > + mp->m_flags |= XFS_MOUNT_32BITINODES; good question. I don't think it should be there but I'll have to do some digging.... > + nagimax = xfs_initialize_perag(XFS_MTOVFS(mp), mp, nagcount); > + up_write(&mp->m_peraglock); > + > + xfs_update_sb(mp, nagimax, nagcount); > return 0; > > error0: > + up_write(&mp->m_peraglock); > xfs_trans_cancel(tp, XFS_TRANS_ABORT); > return error; > } > @@ -435,7 +665,10 @@ > int error; > if (!cpsema(&mp->m_growlock)) > return XFS_ERROR(EWOULDBLOCK); > - error = xfs_growfs_data_private(mp, in); > + if(in->newblocks < mp->m_sb.sb_dblocks) > + error = xfs_shrinkfs_data_private(mp, in); > + else > + error = xfs_growfs_data_private(mp, in); Hmmm - that's using the one ioctl to do both grow and shrink. I'd prefer a new shrink ioctl rather than changing the behaviour of an existing ioctl. Looks like a good start ;) Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 11 20:09:49 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 20:09:52 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C39iWt010364 for ; Mon, 11 Jun 2007 20:09:47 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA20969; Tue, 12 Jun 2007 13:09:38 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5C39ZAf120147670; Tue, 12 Jun 2007 13:09:37 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5C39XBB120936657; Tue, 12 Jun 2007 13:09:33 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 12 Jun 2007 13:09:33 +1000 From: David Chinner To: Nathan Scott Cc: David Chinner , Timothy Shimmin , xfs-dev , xfs-oss Subject: Re: Review: Be smarter about handling ENOSPC during writeback Message-ID: <20070612030933.GN86004887@sgi.com> References: <20070604045219.GG86004887@sgi.com> <20070608073342.GW85884050@sgi.com> <1181603625.3758.53.camel@edge.yarra.acx> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1181603625.3758.53.camel@edge.yarra.acx> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11738 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com 
X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 12, 2007 at 09:13:45AM +1000, Nathan Scott wrote: > On Fri, 2007-06-08 at 17:33 +1000, David Chinner wrote: > > On Fri, Jun 08, 2007 at 03:28:14PM +1000, Timothy Shimmin wrote: > > > > Will we get questions from people about reduced space from df? :) > > > > If we do, I think you just volunteered to write the FAQ entry ;) > > It would be more correct of XFS to start doing the right thing by > reporting different values for b_free and b_avail in statfs(2) - > this code in xfs_mount.c::xfs_statvfs() ... > > statp->f_bfree = statp->f_bavail = > sbp->sb_fdblocks - XFS_ALLOC_SET_ASIDE(mp); Ok, yeah, that'd work. Something like: --- fs/xfs/xfs_vfsops.c | 1 + 1 file changed, 1 insertion(+) Index: 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vfsops.c 2007-06-08 21:46:29.000000000 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c 2007-06-12 13:08:49.933837815 +1000 @@ -876,6 +876,7 @@ xfs_statvfs( statp->f_blocks = sbp->sb_dblocks - lsize; statp->f_bfree = statp->f_bavail = sbp->sb_fdblocks - XFS_ALLOC_SET_ASIDE(mp); + statp->f_bfree += mp->m_resblks_avail; fakeinos = statp->f_bfree << sbp->sb_inopblog; #if XFS_BIG_INUMS fakeinos += mp->m_inoadd; Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 11 20:52:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 20:52:50 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.4 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE, WEIRD_PORT autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C3qjWt020944 for ; Mon, 11 Jun 2007 20:52:47 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA21998; Tue, 12 Jun 2007 13:52:40 +1000 Date: Tue, 12 Jun 2007 13:52:40 +1000 From: Timothy Shimmin To: torvalds@linux-foundation.org cc: xfs@oss.sgi.com Subject: [GIT PULL] xfs maintainers file update Message-ID: <50875DEF97A1E8D083501772@boing.melbourne.sgi.com> X-Mailer: Mulberry/4.0.8 (Mac OS X) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11739 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Hi Linus, Keep forgetting to update MAINTAINERS file, since David has left SGI... Please pull from the for-linus branch: git pull git://oss.sgi.com:8090/xfs/xfs-2.6 for-linus Updates: MAINTAINERS | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) thru: commit 78bfd36169398bfc07bca218952a429bf301bc55 Author: Timothy Shimmin Date: Mon Jun 11 20:42:09 2007 -0700 [XFS] Update the MAINTAINERS file entry for XFS. Remove David Chatterton from XFS entry in MAINTAINERS file. 
Signed-off-by: Tim Shimmin From owner-xfs@oss.sgi.com Mon Jun 11 21:25:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 21:25:52 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.1 required=5.0 tests=AWL,BAYES_50,SPF_HELO_PASS, URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5C4PlWt026739 for ; Mon, 11 Jun 2007 21:25:48 -0700 Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 39CE91806631F; Mon, 11 Jun 2007 23:25:46 -0500 (CDT) Message-ID: <466E204B.8060608@sandeen.net> Date: Mon, 11 Jun 2007 23:25:47 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.0 (Macintosh/20070326) MIME-Version: 1.0 To: David Chinner CC: xfs@oss.sgi.com, iusty@k1024.org Subject: Re: [PATCH] Implement shrink of empty AGs References: <20070610164014.GA10936@teal.hq.k1024.org> <20070612024025.GM86004887@sgi.com> In-Reply-To: <20070612024025.GM86004887@sgi.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11740 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs David Chinner wrote: > On Sun, Jun 10, 2007 at 06:40:14PM +0200, Iustin Pop wrote: ... >> + /* FIXME: why is this flag unconditionally set in growfs? */ >> + mp->m_flags |= XFS_MOUNT_32BITINODES; >> + nagimax = xfs_initialize_perag(XFS_MTOVFS(mp), mp, nagcount); > good question. I don't think it should be there but I'll have to > do some digging.... 
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_fsops.c#rev1.72 Thu Dec 6 19:26:09 2001 UTC (5 years, 6 months ago) by lord Add in the 32 bit inode mount flag before re initializing the perag structures in growfs. http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_fsops.c.diff?r1=1.71;r2=1.72 but, it seems harmless because it immediately calls xfs_initialize_perag which does: /* Clear the mount flag if no inode can overflow 32 bits * on this filesystem, or if specifically requested.. */ if ((mp->m_flags & XFS_MOUNT_32BITINOOPT) && ino > max_inum) { mp->m_flags |= XFS_MOUNT_32BITINODES; } else { mp->m_flags &= ~XFS_MOUNT_32BITINODES; } so I think it sets it (or clears it) properly in any case. I'd probably remove the setting before initialize_perag though as it's superfluous... that was added after steve's change... http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.c#rev1.335 Mon Sep 8 05:46:42 2003 UTC (3 years, 9 months ago) by nathans Add inode64 mount option; fix case where growfs can push 32 bit inodes into 64 bit space accidentally - both changes originally from IRIX http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.c.diff?r1=1.334;r2=1.335 (previously it would always clear the flag if max inode was < 32 bits..) ... so yeah, looks like the setting in question can/should go. 
-Eric From owner-xfs@oss.sgi.com Mon Jun 11 22:11:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 22:11:18 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C5BCWt001407 for ; Mon, 11 Jun 2007 22:11:15 -0700 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA23607; Tue, 12 Jun 2007 15:11:08 +1000 Message-ID: <466E2B76.7010707@sgi.com> Date: Tue, 12 Jun 2007 15:13:26 +1000 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.10 (X11/20070221) MIME-Version: 1.0 To: David Chinner CC: xfs-dev , xfs-oss Subject: Re: Review: factor extracting extent size hints from the inode References: <20070604052333.GR85884050@sgi.com> In-Reply-To: <20070604052333.GR85884050@sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11741 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs David Chinner wrote: > Replace frequently repeated, open coded extraction of the > extent size hint from the xfs_inode with a single helper > function. > > Cheers, > > Dave. > Dave, I think XFS_DIFLAG_REALTIME and XFS_DIFLAG_EXTSIZE flags are mutually exclusive. XFS_DIFLAG_REALTIME and di_extsize have been introduced and used on Irix and Linux before XFS_DIFLAG_EXTSIZE. This code: + if (unlikely(ip->i_d.di_flags & XFS_DIFLAG_REALTIME)) { + extsz = (ip->i_d.di_flags & XFS_DIFLAG_EXTSIZE) + ? 
ip->i_d.di_extsize + : ip->i_mount->m_sb.sb_rextsize; + ASSERT(extsz); + } else { shouldn't test for XFS_DIFLAG_EXTSIZE but use di_extsize if non zero. Regards, Vlad From owner-xfs@oss.sgi.com Mon Jun 11 23:08:06 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 23:08:07 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C682Wt010354 for ; Mon, 11 Jun 2007 23:08:04 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA24951; Tue, 12 Jun 2007 16:08:02 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5C681Af120979512; Tue, 12 Jun 2007 16:08:02 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5C680b3120545221; Tue, 12 Jun 2007 16:08:00 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 12 Jun 2007 16:08:00 +1000 From: David Chinner To: Vlad Apostolov Cc: David Chinner , xfs-dev , xfs-oss Subject: Re: Review: factor extracting extent size hints from the inode Message-ID: <20070612060800.GP86004887@sgi.com> References: <20070604052333.GR85884050@sgi.com> <466E2B76.7010707@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <466E2B76.7010707@sgi.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11742 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: 
xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 12, 2007 at 03:13:26PM +1000, Vlad Apostolov wrote: > David Chinner wrote: > >Replace frequently repeated, open coded extraction of the > >extent size hint from the xfs_inode with a single helper > >function. > > > >Cheers, > > > >Dave. > > > Dave, > > I think XFS_DIFLAG_REALTIME and XFS_DIFLAG_EXTSIZE flags are > mutually exclusive. No, that's not true. > XFS_DIFLAG_REALTIME and di_extsize have been introduced and used > on Irix and Linux before XFS_DIFLAG_EXTSIZE. Sure, but look at how we are supposed to set the extent size hint. i.e XFS_IOC_FSSETXATTR, which by your own account should have the XFS_XFLAG_EXTSIZE bit set in it. And that does not matter if the file is realtime or not. See the xfs_io code that sets the extent size hint: 567 if (S_ISREG(stat.st_mode)) { 568 fsx.fsx_xflags |= XFS_XFLAG_EXTSIZE; 569 } else if (S_ISDIR(stat.st_mode)) { 570 fsx.fsx_xflags |= XFS_XFLAG_EXTSZINHERIT; 571 } else { 572 printf(_("invalid target file type - file %s\n"), path); 573 return 0; 574 } 575 fsx.fsx_extsize = extsz; 576 577 if ((xfsctl(path, fd, XFS_IOC_FSSETXATTR, &fsx)) < 0) { 578 printf("%s: XFS_IOC_FSSETXATTR %s: %s\n", 579 progname, path, strerror(errno)); 580 return 0; 581 } If you use this method of setting the extent size hint, then you will *always* get the XFS_DIFLAG_EXTSIZE flag set when you have an extent size hint, regardless of whether it is a realtime file or not. > This code: > > + if (unlikely(ip->i_d.di_flags & XFS_DIFLAG_REALTIME)) { > + extsz = (ip->i_d.di_flags & XFS_DIFLAG_EXTSIZE) > + ? ip->i_d.di_extsize > + : ip->i_mount->m_sb.sb_rextsize; > + ASSERT(extsz); > + } else { > > shouldn't test for XFS_DIFLAG_EXTSIZE but use di_extsize if non zero. Hmmmm - I think having one rule for realtime and a different rule for data to do exactly the same thing is busted. 
Like the realtime code, there are many places where the non-realtime code doesn't bother to check for a valid extent size hint flag - it just reads it blindly. That was part of the problem that the DMF folks tripped over - the flag wasn't set but the hint was, and bad things were happening because it wasn't consistently applied. Also, if we want to fix up the setting of the extent size hint so that it errors out if you don't set the XFS_XFLAG_EXTSIZE as well, then we'd have to make an exception for the realtime inodes. So, there's plenty of reasons for leaving this code as it stands and having both realtime and non-realtime behave the same way. Thoughts? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 11 23:14:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 23:14:54 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C6EmWt011435 for ; Mon, 11 Jun 2007 23:14:50 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA25052; Tue, 12 Jun 2007 16:14:46 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5C6EhAf118552763; Tue, 12 Jun 2007 16:14:45 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5C6Ee8E120803519; Tue, 12 Jun 2007 16:14:40 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 12 Jun 2007 16:14:40 +1000 From: David Chinner To: Marco Berizzi Cc: David Chinner , 
linux-kernel@vger.kernel.org, xfs@oss.sgi.com Subject: Re: XFS internal error xfs_da_do_buf(2) at line 2087 of file fs/xfs/xfs_da_btree.c. Caller 0xc01b00bd Message-ID: <20070612061440.GQ86004887@sgi.com> References: <20070316012520.GN5743@melbourne.sgi.com> <20070316195951.GB5743@melbourne.sgi.com> <20070320064632.GO32602149@melbourne.sgi.com> <20070607130505.GE85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11743 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Jun 08, 2007 at 03:59:39PM +0200, Marco Berizzi wrote: > David Chinner wrote: > > Where we saw signs of on disk directory corruption. Have you run > > xfs_repair successfully on the filesystem since you reported > > this? > > yes. > > > If you did clean up the error, does xfs_repair report the same sort > > of error again? > > I have run xfs_repair this morning. > Here is the report: > > Have you run a 2.6.16-rcX or 2.6.17.[0-6] kernel since you last > > reported this problem? > > No. I have run only 2.6.19.x and 2.6.21.x > > After the xfs_repair I have remounted the file system. > After few hours linux has crashed with this message: > BUG: at arch/i386/kernel/smp.c:546 smp_call_function() > I have also the monitor bitmap. This is sounding like memory corruption, as no corruption is being found on disk by xfs_repair. Have you run memtest86 on that box to see if it's got bad memory? Cheers Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 11 23:40:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Jun 2007 23:40:44 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5C6eeWt015497 for ; Mon, 11 Jun 2007 23:40:41 -0700 Received: from edge.local (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 3D2B492C582; Tue, 12 Jun 2007 16:40:41 +1000 (EST) Subject: Re: Review: factor extracting extent size hints from the inode From: Nathan Scott Reply-To: nscott@aconex.com To: David Chinner Cc: Vlad Apostolov , xfs-dev , xfs-oss In-Reply-To: <20070612060800.GP86004887@sgi.com> References: <20070604052333.GR85884050@sgi.com> <466E2B76.7010707@sgi.com> <20070612060800.GP86004887@sgi.com> Content-Type: text/plain Organization: Aconex Date: Tue, 12 Jun 2007 16:39:46 +1000 Message-Id: <1181630386.3758.90.camel@edge.yarra.acx> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11744 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Tue, 2007-06-12 at 16:08 +1000, David Chinner wrote: > > If you use this method of setting the extent size hint, then you will > *always* get the XFS_DIFLAG_EXTSIZE flag set when you have an extent > size hint, regardless of whether it is a realtime file or not. 
The extsize flag is relatively recent though, and traditionally realtime files could have had their extsize explicitly set with no associated extsize flag (thats just how it was implemented, originally, in realtime). But, not many people use realtime, even fewer would be using the extent size option with realtime (like, none?, on Linux anyway) ... so, you could pretty much make whatever rule you like. cheers. -- Nathan From owner-xfs@oss.sgi.com Tue Jun 12 00:11:06 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 00:11:11 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=-0.0 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from e36.co.us.ibm.com (e36.co.us.ibm.com [32.97.110.154]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5C7B5Wt023491 for ; Tue, 12 Jun 2007 00:11:06 -0700 Received: from d03relay04.boulder.ibm.com (d03relay04.boulder.ibm.com [9.17.195.106]) by e36.co.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5C7B5Wt027819 for ; Tue, 12 Jun 2007 03:11:05 -0400 Received: from d03av02.boulder.ibm.com (d03av02.boulder.ibm.com [9.17.195.168]) by d03relay04.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5C7B5WG031148 for ; Tue, 12 Jun 2007 01:11:05 -0600 Received: from d03av02.boulder.ibm.com (loopback [127.0.0.1]) by d03av02.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5C78ud2015801 for ; Tue, 12 Jun 2007 01:11:05 -0600 Received: from amitarora.in.ibm.com (amitarora.in.ibm.com [9.124.31.181]) by d03av02.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5C6GnKE022529; Tue, 12 Jun 2007 00:16:50 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id 5B8759D4E2; Tue, 12 Jun 2007 11:46:58 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5C6Gvrx015780; Tue, 12 Jun 2007 11:46:57 
+0530 Date: Tue, 12 Jun 2007 11:46:52 +0530 From: "Amit K. Arora" To: David Chinner Cc: Suparna Bhattacharya , torvalds@osdl.org, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, xfs@oss.sgi.com, cmm@us.ibm.com Subject: Re: [PATCH 1/5] fallocate() implementation in i86, x86_64 and powerpc Message-ID: <20070612061652.GA6320@amitarora.in.ibm.com> References: <20070420145918.GY355@devserv.devel.redhat.com> <20070424121632.GA10136@amitarora.in.ibm.com> <20070426175056.GA25321@amitarora.in.ibm.com> <20070426180332.GA7209@amitarora.in.ibm.com> <20070509160102.GA30745@amitarora.in.ibm.com> <20070510005926.GT85884050@sgi.com> <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070512080157.GF85884050@sgi.com> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11745 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs On Sat, May 12, 2007 at 06:01:57PM +1000, David Chinner wrote: > On Fri, May 11, 2007 at 04:33:01PM +0530, Suparna Bhattacharya wrote: > > On Fri, May 11, 2007 at 08:39:50AM +1000, David Chinner wrote: > > > All I'm really interested in right now is that the fallocate > > > _interface_ can be used as a *complete replacement* for the > > > pre-existing XFS-specific ioctls that are already used by > > > applications. What ext4 can or can't do right now is irrelevant to > > > this discussion - the interface definition needs to take priority > > > over implementation.... 
> > > > Would you like to write up an interface definition description (likely > > man page) and post it for review, possibly with a mention of apps using > > it today? > > Yeah, I started doing that yesterday as I figured it was the only way > to cut the discussion short.... > > > One reason for introducing the mode parameter was to allow the interface to > > evolve incrementally as more options / semantic questions are proposed, so > > that we don't have to make all the decisions right now. > > So it would be good to start with a *minimal* definition, even just one mode. > > The rest could follow as subsequent patches, each being reviewed and debated > > separately. Otherwise this discussion can drag on for a long time. > > Minimal definition to replace what applications use on XFS and to > support posix_fallocate are the three that have been mentioned so > far (FA_ALLOCATE, FA_PREALLOCATE, FA_DEALLOCATE). I'll document them > all in a man page... Hi Dave, Did you get time to write the above man page? It will help to push further patches in time (e.g. for FA_PREALLOCATE mode). The idea I had was to push the patch with bare minimum functionality (FA_ALLOCATE and FA_DEALLOCATE modes) and finalize in parallel on other new mode(s) based on the man page you planned to provide. Thanks! -- Regards, Amit Arora > > Cheers, > > Dave. 
> -- > Dave Chinner > Principal Engineer > SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jun 12 00:14:40 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 00:14:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.0 required=5.0 tests=AWL,BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from bay0-omc1-s19.bay0.hotmail.com (bay0-omc1-s19.bay0.hotmail.com [65.54.246.91]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5C7EdWt024452 for ; Tue, 12 Jun 2007 00:14:40 -0700 Received: from hotmail.com ([65.54.174.88]) by bay0-omc1-s19.bay0.hotmail.com with Microsoft SMTPSVC(6.0.3790.2668); Tue, 12 Jun 2007 00:14:40 -0700 Received: from mail pickup service by hotmail.com with Microsoft SMTPSVC; Tue, 12 Jun 2007 00:14:40 -0700 Message-ID: Received: from 85.36.106.198 by BAY103-DAV16.phx.gbl with DAV; Tue, 12 Jun 2007 07:14:35 +0000 X-Originating-IP: [85.36.106.198] X-Originating-Email: [pupilla@hotmail.com] X-Sender: pupilla@hotmail.com From: "Marco Berizzi" To: "David Chinner" Cc: "David Chinner" , , , "Marco Berizzi" References: <20070316012520.GN5743@melbourne.sgi.com> <20070316195951.GB5743@melbourne.sgi.com> <20070320064632.GO32602149@melbourne.sgi.com> <20070607130505.GE85884050@sgi.com> <20070612061440.GQ86004887@sgi.com> Subject: Re: XFS internal error xfs_da_do_buf(2) at line 2087 of file fs/xfs/xfs_da_btree.c. 
Caller 0xc01b00bd Date: Tue, 12 Jun 2007 09:14:27 +0200 X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2800.1123 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1123 X-OriginalArrivalTime: 12 Jun 2007 07:14:40.0432 (UTC) FILETIME=[57C19B00:01C7ACC1] X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11746 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pupilla@hotmail.com Precedence: bulk X-list: xfs David Chinner wrote: > On Fri, Jun 08, 2007 at 03:59:39PM +0200, Marco Berizzi wrote: > > David Chinner wrote: > > > Where we saw signs of on disk directory corruption. Have you run > > > xfs_repair successfully on the filesystem since you reported > > > this? > > > > yes. > > > > > If you did clean up the error, does xfs_repair report the same sort > > > of error again? > > > > I have run xfs_repair this morning. > > Here is the report: > > > > > > Have you run a 2.6.16-rcX or 2.6.17.[0-6] kernel since you last > > > reported this problem? > > > > No. I have run only 2.6.19.x and 2.6.21.x > > > > After the xfs_repair I have remounted the file system. > > After a few hours linux crashed with this message: > > BUG: at arch/i386/kernel/smp.c:546 smp_call_function() > > I have also the monitor bitmap. > > This is sounding like memory corruption if no corruption is being > found on disk by xfs_repair. Have you run memtest86 on that box to > see if it's got bad memory? Yes. I have run memtest for one week: no errors. I have also changed the motherboard, scsi controller and ram. Only the cpu and the 2 hot swap scsi disks were not replaced. IMHO this isn't a hardware problem, because the kernel with debugging options enabled didn't crash for a long time (>1 month). Just for the record, at this moment this box is running 2.6.22-rc4 with no debug options enabled. I will keep you informed. 
Thanks everybody for the support. From owner-xfs@oss.sgi.com Tue Jun 12 01:12:02 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 01:12:06 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C8BxWt005443 for ; Tue, 12 Jun 2007 01:12:01 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA28273; Tue, 12 Jun 2007 18:11:41 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5C8BaAf120366110; Tue, 12 Jun 2007 18:11:36 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5C8BSaR119125781; Tue, 12 Jun 2007 18:11:28 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 12 Jun 2007 18:11:28 +1000 From: David Chinner To: "Amit K. 
Arora" Cc: David Chinner , Suparna Bhattacharya , torvalds@osdl.org, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, xfs@oss.sgi.com, cmm@us.ibm.com Subject: Re: [PATCH 1/5] fallocate() implementation in i86, x86_64 and powerpc Message-ID: <20070612081128.GS86004887@sgi.com> References: <20070424121632.GA10136@amitarora.in.ibm.com> <20070426175056.GA25321@amitarora.in.ibm.com> <20070426180332.GA7209@amitarora.in.ibm.com> <20070509160102.GA30745@amitarora.in.ibm.com> <20070510005926.GT85884050@sgi.com> <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070612061652.GA6320@amitarora.in.ibm.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11747 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 12, 2007 at 11:46:52AM +0530, Amit K. Arora wrote: > On Sat, May 12, 2007 at 06:01:57PM +1000, David Chinner wrote: > > Minimal definition to replace what applications use on XFS and to > > support posix_fallocate are the three that have been mentioned so > > far (FA_ALLOCATE, FA_PREALLOCATE, FA_DEALLOCATE). I'll document them > > all in a man page... > > Hi Dave, > > Did you get time to write the above man page? It will help to push > further patches in time (e.g. for FA_PREALLOCATE mode). No, I didn't. Instead of working on new preallocation stuff, I've been spending all my time fixing bugs found by new and interesting (ab)uses of preallocation and hole punching. 
> The idea I had was to push the patch with bare minimum functionality > (FA_ALLOCATE and FA_DEALLOCATE modes) and finalize in parallel on other > new mode(s) based on the man page you planned to provide. Push them. I'll just make XFS work with whatever is provided. Is there a test harness for the syscall yet? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jun 12 01:52:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 01:52:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C8qEWt013580 for ; Tue, 12 Jun 2007 01:52:16 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA29245; Tue, 12 Jun 2007 18:52:10 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id B425458C38C1; Tue, 12 Jun 2007 18:52:10 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 966145 - fix i386 build Message-Id: <20070612085210.B425458C38C1@chook.melbourne.sgi.com> Date: Tue, 12 Jun 2007 18:52:10 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11748 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Use do_div() on 64 bit types. 
Signed-Off-By: Christoph Hellwig Date: Tue Jun 12 18:51:47 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: hch@lst.de The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28889a fs/xfs/xfs_mount.c - 1.398 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.c.diff?r1=text&tr1=1.398&r2=text&tr2=1.397&f=h - Use do_div() on 64 bit types. From owner-xfs@oss.sgi.com Tue Jun 12 02:49:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 02:49:33 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5C9nRWt024715 for ; Tue, 12 Jun 2007 02:49:30 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id TAA00504; Tue, 12 Jun 2007 19:49:26 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5C9nPAf121030841; Tue, 12 Jun 2007 19:49:25 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5C9nMAV116081632; Tue, 12 Jun 2007 19:49:22 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 12 Jun 2007 19:49:22 +1000 From: David Chinner To: Nathan Scott Cc: David Chinner , Vlad Apostolov , xfs-dev , xfs-oss Subject: Re: Review: factor extracting extent size hints from the inode Message-ID: <20070612094922.GW86004887@sgi.com> References: <20070604052333.GR85884050@sgi.com> <466E2B76.7010707@sgi.com> <20070612060800.GP86004887@sgi.com> 
<1181630386.3758.90.camel@edge.yarra.acx> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1181630386.3758.90.camel@edge.yarra.acx> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11749 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 12, 2007 at 04:39:46PM +1000, Nathan Scott wrote: > On Tue, 2007-06-12 at 16:08 +1000, David Chinner wrote: > > > > If you use this method of setting the extent size hint, then you will > > *always* get the XFS_DIFLAG_EXTSIZE flag set when you have an extent > > size hint, regardless of whether it is a realtime file or not. > > The extsize flag is relatively recent though, and traditionally > realtime files could have had their extsize explicitly set with > no associated extsize flag (thats just how it was implemented, > originally, in realtime). *nod* We've got recent bugs reported because of this assumption and lack of checking of the extent size hint flag where it needs to be checked. Either we have a flag to indicate the di_extsize field is valid or we don't - it's too confusing to have different interfaces just because an inode has a different, unrelated flag set on it. Now that we have a flag, we can't remove support for it..... > But, not many people use realtime, even fewer would be using the > extent size option with realtime (like, none?, on Linux anyway) > ... so, you could pretty much make whatever rule you like. I sorta left that unsaid, but that is yet another reason I think the change should stand. Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jun 12 05:31:43 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 05:31:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.5 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5CCVfWt024677 for ; Tue, 12 Jun 2007 05:31:42 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 6E728E6C6F; Tue, 12 Jun 2007 13:31:28 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id HvPOAnAtUrPl; Tue, 12 Jun 2007 13:28:39 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 75891E6BB7; Tue, 12 Jun 2007 13:31:27 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1Hy5Wz-0004Pi-B3; Tue, 12 Jun 2007 13:31:29 +0100 Message-ID: <466E9220.5050507@dgreaves.com> Date: Tue, 12 Jun 2007 13:31:28 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: David Chinner Cc: Tejun Heo , Linus Torvalds , "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown Subject: Re: 2.6.22-rc3 hibernate(?) 
fails totally - regression (xfs on raid6) References: <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> <4667DE2D.6050903@dgreaves.com> <20070607110708.GS86004887@sgi.com> <46680F5E.6070806@dgreaves.com> <20070607222813.GG85884050@sgi.com> In-Reply-To: <20070607222813.GG85884050@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11750 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs [RESEND since I sent this late last friday and it's probably been buried by now.] I had this as a PS, then I thought, we could all be wasting our time... I don't like these "Section mismatch" warnings but that's because I'm paranoid rather than because I know what they mean. 
I'll be happier when someone says "That's OK, I know about them, they're not the problem" WARNING: arch/i386/kernel/built-in.o(.text+0x968f): Section mismatch: reference to .init.text: (between 'mtrr_bp_init' and 'mtrr_ap_init') WARNING: arch/i386/kernel/built-in.o(.text+0x9781): Section mismatch: reference to .init.text: (between 'mtrr_bp_init' and 'mtrr_ap_init') WARNING: arch/i386/kernel/built-in.o(.text+0x9786): Section mismatch: reference to .init.text: (between 'mtrr_bp_init' and 'mtrr_ap_init') WARNING: arch/i386/kernel/built-in.o(.text+0xa25c): Section mismatch: reference to .init.text: (between 'get_mtrr_state' and 'mtrr_wrmsr') WARNING: arch/i386/kernel/built-in.o(.text+0xa303): Section mismatch: reference to .init.text: (between 'get_mtrr_state' and 'mtrr_wrmsr') WARNING: arch/i386/kernel/built-in.o(.text+0xa31b): Section mismatch: reference to .init.text: (between 'get_mtrr_state' and 'mtrr_wrmsr') WARNING: arch/i386/kernel/built-in.o(.text+0xa344): Section mismatch: reference to .init.text: (between 'get_mtrr_state' and 'mtrr_wrmsr') WARNING: arch/i386/kernel/built-in.o(.exit.text+0x19): Section mismatch: reference to .init.text: (between 'cache_remove_dev' and 'powernow_k6_exit') WARNING: arch/i386/kernel/built-in.o(.data+0x2160): Section mismatch: reference to .init.text: (between 'thermal_throttle_cpu_notifier' and 'mce_work') WARNING: kernel/built-in.o(.text+0x14502): Section mismatch: reference to .init.text: (between 'kthreadd' and 'init_waitqueue_head') I'm paranoid because Andrew Morton said a couple of weeks ago: > Could the people who write these bugs, please, like, fix them? > It's not trivial noise. These things lead to kernel crashes. Anyhow... David Chinner wrote: > sync just guarantees that metadata changes are logged and data is > on disk - it doesn't stop the filesystem from doing anything after > the sync... No, but there are no apps accessing the filesystem. It's just available for NFS serving. 
Seems safer before potentially hanging the machine? Also I made these changes to the kernel: cu:/boot# diff config-2.6.22-rc4-TejuTst-dbg3-dirty config-2.6.22-rc4-TejuTst-dbg1-dirty 3,4c3,4 < # Linux kernel version: 2.6.22-rc4-TejuTst-dbg3 < # Thu Jun 7 20:00:34 2007 --- > # Linux kernel version: 2.6.22-rc4-TejuTst3 > # Thu Jun 7 10:59:21 2007 242,244c242 < CONFIG_PM_DEBUG=y < CONFIG_DISABLE_CONSOLE_SUSPEND=y < # CONFIG_PM_TRACE is not set --- > # CONFIG_PM_DEBUG is not set positive: I can now get sysrq-t :) negative: if I build skge into the kernel the behaviour changes so I can't run netconsole Just to be sure I tested and this kernel suspends/restores with /huge unmounted. It also hangs without an umount so the behaviour is the same. > Ok, so a clean inode is sufficient to prevent hibernate from working. > > So, what's different between a sync and a remount? > > do_remount_sb() does: > > 599 shrink_dcache_sb(sb); > 600 fsync_super(sb); > > of which a sync does neither. sync does what fsync_super() does in > different sort of way, but does not call sync_blockdev() on each > block device. It looks like that is the two main differences between > sync and remount - remount trims the dentry cache and syncs the blockdev, > sync doesn't. > >>> What about freezing the filesystem? >> cu:~# xfs_freeze -f /huge >> cu:~# /usr/net/bin/hibernate >> [but this doesn't even hibernate - same as the 'touch'] > > I suspect that the frozen filesystem might cause other problems > in the hibernate process. However, while a freeze calls sync_blockdev() > it does not trim the dentry cache..... > > So, rather than a remount before hibernate, lets see if we can > remove the dentries some other way to determine if removing excess > dentries/inodes from the caches makes a difference. 
Can you do: > > # touch /huge/foo > # sync > # echo 1 > /proc/sys/vm/drop_caches > # hibernate success > > # touch /huge/bar > # sync > # echo 2 > /proc/sys/vm/drop_caches > # hibernate success > > # touch /huge/baz > # sync > # echo 3 > /proc/sys/vm/drop_caches > # hibernate success So I added # touch /huge/bork # sync # hibernate And it still succeeded - sigh. So I thought a bit and did: rm /huge/b* /huge/foo > Clean boot > # touch /huge/bar > # sync > # echo 2 > /proc/sys/vm/drop_caches > # hibernate hangs on suspend (sysrq-b doesn't work) > Clean boot > # touch /huge/baz > # sync > # echo 3 > /proc/sys/vm/drop_caches > # hibernate hangs on suspend (sysrq-b doesn't work) So I rebooted and hibernated to make sure I'm not having random behaviour - yep, hang on resume (as per usual). Now I wonder if any other mounts have an effect... reboot and umount /dev/hdb2 xfs fs, - hang on hibernate I'm confused. I'm going to order chinese takeaway and then find a serial cable... David PS 2.6.21.1 works fine. PPS the takeaway was nice. 
From owner-xfs@oss.sgi.com Tue Jun 12 06:12:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 06:12:47 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail37.messagelabs.com (mail37.messagelabs.com [216.82.241.51]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5CDChWt031463 for ; Tue, 12 Jun 2007 06:12:44 -0700 X-VirusChecked: Checked X-Env-Sender: Rene.Salmon@bp.com X-Msg-Ref: server-23.tower-37.messagelabs.com!1181653959!23181175!1 X-StarScan-Version: 5.5.12.11; banners=-,-,- X-Originating-IP: [129.230.248.44] Received: (qmail 20680 invoked from network); 12 Jun 2007 13:12:39 -0000 Received: from unknown (HELO BP1XEUAV002.bp1.ad.bp.com) (129.230.248.44) by server-23.tower-37.messagelabs.com with SMTP; 12 Jun 2007 13:12:39 -0000 Received: from BP1XEUEX033.bp1.ad.bp.com ([149.184.176.167]) by BP1XEUAV002.bp1.ad.bp.com with InterScan Messaging Security Suite; Tue, 12 Jun 2007 14:12:38 +0100 Received: from bp1xeuex006.bp1.ad.bp.com ([149.184.176.244]) by BP1XEUEX033.bp1.ad.bp.com with Microsoft SMTPSVC(6.0.3790.1830); Tue, 12 Jun 2007 14:12:38 +0100 Received: from BP1XEUEX706-C.bp1.ad.bp.com ([149.182.218.95]) by bp1xeuex006.bp1.ad.bp.com with Microsoft SMTPSVC(6.0.3790.0); Tue, 12 Jun 2007 14:12:37 +0100 Received: from 172.23.67.233 ([172.23.67.233]) by BP1XEUEX706-C.bp1.ad.bp.com ([149.182.218.28]) via Exchange Front-End Server ammail.bp.com ([172.23.67.38]) with Microsoft Exchange Server HTTP-DAV ; Tue, 12 Jun 2007 13:12:36 +0000 X-rim-org-msg-ref-id: 902286657 Message-ID: <902286657-1181653953-cardhu_decombobulator_blackberry.rim.net-1527539029-@bxe120.bisx.prod.on.blackberry> References: <1181606134.7873.72.camel@holwrs01><1181608444.3758.73.camel@edge.yarra.acx> In-Reply-To: <1181608444.3758.73.camel@edge.yarra.acx> Sensitivity: Normal Importance: Normal 
To: nscott@aconex.com, "Salmon, Rene" Cc: xfs@oss.sgi.com Subject: Re: sunit not working From: salmr0@bp.com Date: Tue, 12 Jun 2007 13:12:15 +0000 Content-Type: text/plain MIME-Version: 1.0 X-OriginalArrivalTime: 12 Jun 2007 13:12:37.0805 (UTC) FILETIME=[5944A5D0:01C7ACF3] X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from base64 to 8bit by oss.sgi.com id l5CDCiWt031468 X-archive-position: 11751 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: salmr0@bp.com Precedence: bulk X-list: xfs Thanks, that helps. Now that I know I have the right sunit and swidth I have a performance related question. If I do a dd on the raw device or to the lun directly I get speeds of around 190-200 MBytes/sec. As soon as I add xfs on top of the lun my speeds go to around 150 MBytes/sec. This is for a single stream write using various block sizes on a 2 Gbit/sec fiber channel card. Is this overhead more or less what you would expect from xfs? Or is there some tuning I need to do? Thanks Rene Rene -----Original Message----- From: Nathan Scott Date: Tue, 12 Jun 2007 10:34:04 To:"Salmon, Rene" Cc:xfs@oss.sgi.com Subject: Re: sunit not working On Mon, 2007-06-11 at 18:55 -0500, Salmon, Rene wrote: > As you can see the sunit gets set to 64 upon creation and not 512 like I > asked. Also if I try to give it some mount options it does the same > thing. > > sgi210a:~ # mount -o sunit=512,swidth=5120 /dev/mapper/mpath9 /mnt/ It's being reported in units of filesystem blocks, and it's specified in 512-byte units. Pretty dopey, but that's why it's different. 
> sgi210a:~ # xfs_info /mnt/ > meta-data=/dev/mapper/mpath9 isize=256 agcount=32, > agsize=56652352 blks > = sectsz=512 attr=0 > data = bsize=4096 blocks=1812874752, > imaxpct=25 > = sunit=64 swidth=640 blks, > unwritten=1 > naming =version 2 bsize=4096 > log =internal bsize=4096 blocks=32768, version=1 > = sectsz=512 sunit=0 blks > realtime =none extsz=65536 blocks=0, rtextents=0 > sgi210a:~ # $ gdb -q (gdb) p 512 * 512 $1 = 262144 (gdb) p 64 * 4096 $2 = 262144 (gdb) (thats 262144 bytes, of course) > Last I tried to subscribe to the list by sending email to > ecartis@oss.sgi.com a couple of times but was unsuccessful should I send > email elsewhere to subscribe? Its a frikkin' lottery. :) Keep trying and keep whining is how I ended up getting back on (whining on IRC on #xfs helps too). cheers. -- Nathan From owner-xfs@oss.sgi.com Tue Jun 12 07:41:33 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 07:41:37 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.1 required=5.0 tests=AWL,BAYES_80, DATE_IN_PAST_24_48 autolearn=no version=3.2.0-pre1-r499012 Received: from spitz.ucw.cz (gprs189-60.eurotel.cz [160.218.189.60]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5CEf1Wt016965 for ; Tue, 12 Jun 2007 07:41:09 -0700 Received: by spitz.ucw.cz (Postfix, from userid 0) id E05FC279F2; Sun, 10 Jun 2007 18:43:48 +0000 (UTC) Date: Sun, 10 Jun 2007 18:43:48 +0000 From: Pavel Machek To: David Greaves Cc: David Chinner , Tejun Heo , Linus Torvalds , "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown Subject: Re: 2.6.22-rc3 hibernate(?) 
fails totally - regression (xfs on raid6) Message-ID: <20070610184348.GA4417@ucw.cz> References: <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> <4667DE2D.6050903@dgreaves.com> <20070607110708.GS86004887@sgi.com> <46680F5E.6070806@dgreaves.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <46680F5E.6070806@dgreaves.com> User-Agent: Mutt/1.5.9i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11752 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pavel@ucw.cz Precedence: bulk X-list: xfs Hi! > >On Thu, Jun 07, 2007 at 11:30:05AM +0100, David Greaves > >wrote: > >>Tejun Heo wrote: > >>>Hello, > >>> > >>>David Greaves wrote: > >>>>Just to be clear. This problem is where my system > >>>>won't resume after s2d > >>>>unless I umount my xfs over raid6 filesystem. > >>>This is really weird. I don't see how xfs mount can > >>>affect this at all. > >>Indeed. > >>It does :) > > > >Ok, so lets determine if it really is XFS. > Seems like a good next step... > > >Does the lockup happen with a > >different filesystem on the md device? Or if you can't > >test that, does > >any other XFS filesystem you have show the same problem? > It's a rather full 1.2Tb raid6 array - can't reformat it > - sorry :) > I only noticed the problem when I umounted the fs during > tests to prevent corruption - and it worked. I'm doing a > sync each time it hibernates (see below) and a couple of > paranoia xfs_repairs haven't shown any problems. > > I do have another xfs filesystem on /dev/hdb2 (mentioned > when I noticed the md/XFS correlation). It doesn't seem > to have/cause any problems. 
> > >If it is xfs that is causing the problem, what happens > >if you > >remount read-only instead of unmounting before shutting > >down? > Yes, I'm happy to try these tests. > nb, the hibernate script is: > ethtool -s eth0 wol g > sync > echo platform > /sys/power/disk > echo disk > /sys/power/state > > So there has always been a sync before any hibernate. > > > cu:~# mount -oremount,ro /huge > cu:~# mount > /dev/hda2 on / type xfs (rw) > proc on /proc type proc (rw) > sysfs on /sys type sysfs (rw) > usbfs on /proc/bus/usb type usbfs (rw) > tmpfs on /dev/shm type tmpfs (rw) > devpts on /dev/pts type devpts (rw,gid=5,mode=620) > nfsd on /proc/fs/nfsd type nfsd (rw) > /dev/hda1 on /boot type ext3 (rw) > /dev/md0 on /huge type xfs (ro) > /dev/hdb2 on /scratch type xfs (rw) > tmpfs on /dev type tmpfs (rw,size=10M,mode=0755) > rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs > (rw) > cu:(pid2862,port1022) on /net type nfs > (intr,rw,port=1022,toplvl,map=/usr/share/am-utils/amd.net,noac) > elm:/space on /amd/elm/root/space type nfs > (rw,vers=3,proto=tcp) > elm:/space-backup on /amd/elm/root/space-backup type nfs > (rw,vers=3,proto=tcp) > elm:/usr/src on /amd/elm/root/usr/src type nfs > (rw,vers=3,proto=tcp) > cu:~# /usr/net/bin/hibernate > [this works and resumes] > > cu:~# mount -oremount,rw /huge > cu:~# /usr/net/bin/hibernate > [this works and resumes too !] > > cu:~# touch /huge/tst > cu:~# /usr/net/bin/hibernate > [but this doesn't even hibernate] This is very probably separate problem... and you should have enough data in dmesg to do something with it. 
Pavel -- (english) http://www.livejournal.com/~pavelmachek (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html From owner-xfs@oss.sgi.com Tue Jun 12 11:00:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 11:00:15 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.6 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5CI06Wt023956 for ; Tue, 12 Jun 2007 11:00:07 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 7CA44E6C5F; Tue, 12 Jun 2007 18:59:52 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id yD+vigTPU6zZ; Tue, 12 Jun 2007 18:57:03 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 5F8C1E6B6D; Tue, 12 Jun 2007 18:59:51 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1HyAey-0004sM-OL; Tue, 12 Jun 2007 19:00:04 +0100 Message-ID: <466EDF24.2000806@dgreaves.com> Date: Tue, 12 Jun 2007 19:00:04 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: Pavel Machek Cc: David Chinner , Tejun Heo , Linus Torvalds , "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown Subject: Re: 2.6.22-rc3 hibernate(?) 
fails totally - regression (xfs on raid6) References: <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> <4667DE2D.6050903@dgreaves.com> <20070607110708.GS86004887@sgi.com> <46680F5E.6070806@dgreaves.com> <20070610184348.GA4417@ucw.cz> In-Reply-To: <20070610184348.GA4417@ucw.cz> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11753 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs Pavel Machek wrote: > Hi! >> cu:~# mount -oremount,ro /huge >> cu:~# /usr/net/bin/hibernate >> [this works and resumes] >> >> cu:~# mount -oremount,rw /huge >> cu:~# /usr/net/bin/hibernate >> [this works and resumes too !] >> >> cu:~# touch /huge/tst >> cu:~# /usr/net/bin/hibernate >> [but this doesn't even hibernate] > > This is very probably separate problem... and you should have enough > data in dmesg to do something with it. What makes you say it's a different problem - it's hanging at the same point visually - it's just that one is pre suspend, one is post suspend. It all feels very related to me - the behaviour all hinges around the same patch too. I'll take a look in dmesg though... David PS, looks like some mail holdups somewhere... 
Received: from spitz.ucw.cz (gprs189-60.eurotel.cz [160.218.189.60]) by mail.ukfsn.org (Postfix) with ESMTP id A9125E6AE9 for ; Tue, 12 Jun 2007 15:41:23 +0100 (BST) Received: by spitz.ucw.cz (Postfix, from userid 0) id E05FC279F2; Sun, 10 Jun 2007 18:43:48 +0000 (UTC) Date: Sun, 10 Jun 2007 18:43:48 +0000 From owner-xfs@oss.sgi.com Tue Jun 12 11:43:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 11:43:52 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.0 required=5.0 tests=BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from smtp2.linux-foundation.org (smtp2.linux-foundation.org [207.189.120.14]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5CIhlWt031475 for ; Tue, 12 Jun 2007 11:43:48 -0700 Received: from imap1.linux-foundation.org (imap1.linux-foundation.org [207.189.120.55]) by smtp2.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with ESMTP id l5CIhhH1007699 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 12 Jun 2007 11:43:44 -0700 Received: from localhost (localhost [127.0.0.1]) by imap1.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with ESMTP id l5CIhbNn018148; Tue, 12 Jun 2007 11:43:38 -0700 Date: Tue, 12 Jun 2007 11:43:37 -0700 (PDT) From: Linus Torvalds To: David Greaves cc: David Chinner , Tejun Heo , "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik Subject: Re: 2.6.22-rc3 hibernate(?) 
fails totally - regression (xfs on raid6) In-Reply-To: <4669A965.20403@dgreaves.com> Message-ID: References: <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> <4667DE2D.6050903@dgreaves.com> <20070607110708.GS86004887@sgi.com> <46680F5E.6070806@dgreaves.com> <20070607222813.GG85884050@sgi.com> <4669A965.20403@dgreaves.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=us-ascii X-MIMEDefang-Filter: osdl$Revision: 1.181 $ X-Scanned-By: MIMEDefang 2.53 on 207.189.120.14 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11754 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: torvalds@linux-foundation.org Precedence: bulk X-list: xfs On Fri, 8 Jun 2007, David Greaves wrote:
> > positive: I can now get sysrq-t :)

Ok, so color me confused, and maybe I have missed some of the emails or skimmed over them too fast (there's been too many of them ;), but

- I haven't actually seen any traces for this (netconsole apparently doesn't work for you, and I'm not surprised: it never really worked well for me over suspend/resume either, but I think I saw a mention of serial console?)

- You apparently bisected it down to the range

	0a3fd051c7036ef71b58863f8e5da7c3dabd9d3f <- works
	1d30c33d8d07868199560b24f10ed6280e78a89c <- breaks

but some of the intermediates in that range didn't compile. Correct? Can you try to bisect down a bit more, despite the compile error? Just do

	git bisect start
	git bisect good 0a3fd051c7036ef71b58863f8e5da7c3dabd9d3f
	git bisect bad 1d30c33d8d07868199560b24f10ed6280e78a89c

and it should pick f4d6d004: libata: ignore EH scheduling during initialization for you to test.
It will apparently break on the fact that "sata_via.c" wants "ata_scsi_device_resume/suspend" for the initialization of the resume/suspend things in the scsi_host_template, but you should just remove those lines, and the compile hopefully completes cleanly after that. IOW, it *should* be easy enough to pinpoint this from 9 changes down to just one. Jeff added to the Cc, since he may not have noticed that one of the most long-running issues is apparently sata-related. (Jeff: David Greaves _also_ had issues with -rc4 due to the SETFXSR change, but that should hopefully be resolved and is presumably an independent bug. Apart from the fact that "sata_via.c" seems problematic) Linus From owner-xfs@oss.sgi.com Tue Jun 12 14:58:27 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 14:58:31 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.1 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from amd.ucw.cz (gprs189-60.eurotel.cz [160.218.189.60]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5CLwKWt001338 for ; Tue, 12 Jun 2007 14:58:23 -0700 Received: by amd.ucw.cz (Postfix, from userid 8) id 0CBF32B9D1; Tue, 12 Jun 2007 23:31:59 +0200 (CEST) Date: Tue, 12 Jun 2007 23:31:59 +0200 From: Pavel Machek To: David Greaves Cc: David Chinner , Tejun Heo , Linus Torvalds , "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown Subject: Re: 2.6.22-rc3 hibernate(?) 
fails totally - regression (xfs on raid6) Message-ID: <20070612213159.GB13747@elf.ucw.cz> References: <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> <4667DE2D.6050903@dgreaves.com> <20070607110708.GS86004887@sgi.com> <46680F5E.6070806@dgreaves.com> <20070610184348.GA4417@ucw.cz> <466EDF24.2000806@dgreaves.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <466EDF24.2000806@dgreaves.com> X-Warning: Reading this can be dangerous to your mental health. User-Agent: Mutt/1.5.11+cvs20060126 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11755 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pavel@ucw.cz Precedence: bulk X-list: xfs Hi!

> >>cu:~# mount -oremount,ro /huge
> >>cu:~# /usr/net/bin/hibernate
> >>[this works and resumes]
> >>
> >>cu:~# mount -oremount,rw /huge
> >>cu:~# /usr/net/bin/hibernate
> >>[this works and resumes too !]
> >>
> >>cu:~# touch /huge/tst
> >>cu:~# /usr/net/bin/hibernate
> >>[but this doesn't even hibernate]
> >
> >This is very probably separate problem... and you should have enough
> >data in dmesg to do something with it.
>
> What makes you say it's a different problem - it's hanging at the same
> point visually - it's just that one is pre suspend, one is post suspend.

Ok, I did not see the visuals.

> It all feels very related to me - the behaviour all hinges around the same
> patch too.
>
> I'll take a look in dmesg though...
>
> David
>
> PS, looks like some mail holdups somewhere...
> Received: from spitz.ucw.cz (gprs189-60.eurotel.cz [160.218.189.60]) > by mail.ukfsn.org (Postfix) with ESMTP id A9125E6AE9 > for ; Tue, 12 Jun 2007 15:41:23 +0100 (BST) > Received: by spitz.ucw.cz (Postfix, from userid 0) > id E05FC279F2; Sun, 10 Jun 2007 18:43:48 +0000 (UTC) > Date: Sun, 10 Jun 2007 18:43:48 +0000 Yep, that's normal, spitz is 0.3kg machine connected over gprs. Okay, I should probably try to sync it more often than once in two days. Pavel -- (english) http://www.livejournal.com/~pavelmachek (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html From owner-xfs@oss.sgi.com Tue Jun 12 16:22:16 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 16:22:21 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5CNMDWt027336 for ; Tue, 12 Jun 2007 16:22:16 -0700 Received: from edge.local (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 9924392C378; Wed, 13 Jun 2007 09:22:11 +1000 (EST) Subject: Re: sunit not working From: Nathan Scott Reply-To: nscott@aconex.com To: salmr0@bp.com Cc: "Salmon, Rene" , xfs@oss.sgi.com In-Reply-To: <902286657-1181653953-cardhu_decombobulator_blackberry.rim.net-1527539029-@bxe120.bisx.prod.on.blackberry> References: <1181606134.7873.72.camel@holwrs01> <1181608444.3758.73.camel@edge.yarra.acx> <902286657-1181653953-cardhu_decombobulator_blackberry.rim.net-1527539029-@bxe120.bisx.prod.on.blackberry> Content-Type: text/plain Organization: Aconex Date: Wed, 13 Jun 2007 09:21:18 +1000 Message-Id: <1181690478.3758.108.camel@edge.yarra.acx> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version 
devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11756 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Tue, 2007-06-12 at 13:12 +0000, salmr0@bp.com wrote:
> Thanks that helps. Now that I know I have the right sunit and swidth
> I have a performance related question.
>
> If I do a dd on the raw device or to the lun directly I get speeds of
> around 190-200 MBytes/sec.
>
> As soon as I add xfs on top of the lun my speeds go to around 150
> MBytes/sec. This is for a single stream write using various block
> sizes on a 2 Gbit/sec fiber channel card.

Reads or writes? What are your I/O sizes? Buffered or direct IO? Including fsync time in there or not? etc, etc. (Actual dd commands used and their output results would be best.)

xfs_io is pretty good for this kind of analysis, as it gives very fine-grained control of operations performed, has an integrated bmap command, etc - use the -F flag for the raw device comparisons.

> Is this overhead more or less what you would expect from xfs? Or is
> there some tuning I need to do?

You should be able to get very close to raw device speeds esp. for a single stream reader/writer, with some tuning. cheers.
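[Editorial note: the kind of comparison Nathan asks for might be sketched as below. This is a minimal sketch, not the exact commands from the thread: /tmp/ddtest and /mnt/xfs/testfile are placeholder paths, and the direct-I/O and xfs_io lines are commented out because they need a filesystem with O_DIRECT support and an XFS mount respectively.]

```shell
# Buffered single-stream write; conv=fsync flushes file data to disk
# before dd reports its rate, so fsync time is included in the figure.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fsync

# The same write with direct I/O, bypassing the page cache. Commented
# out here since O_DIRECT needs filesystem support and an aligned bs:
# dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 oflag=direct

# An xfs_io equivalent on an XFS mount (placeholder path): -f creates
# the file, -d opens it O_DIRECT, pwrite issues the write, and bmap -v
# then prints the file's extent layout.
# xfs_io -f -d -c 'pwrite 0 64m' -c 'bmap -v' /mnt/xfs/testfile

ls -l /tmp/ddtest
```

For the raw-LUN side of the comparison, one would point the dd at the block device itself (which destroys its contents) and compare the reported MB/s against the run through the XFS mount.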
From owner-xfs@oss.sgi.com Tue Jun 12 16:28:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 16:28:33 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5CNSOWt029708 for ; Tue, 12 Jun 2007 16:28:27 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA20853; Wed, 13 Jun 2007 09:28:22 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5CNSJAf120847629; Wed, 13 Jun 2007 09:28:20 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5CNSHQc121703340; Wed, 13 Jun 2007 09:28:17 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 13 Jun 2007 09:28:17 +1000 From: David Chinner To: salmr0@bp.com Cc: nscott@aconex.com, "Salmon, Rene" , xfs@oss.sgi.com Subject: Re: sunit not working Message-ID: <20070612232817.GZ86004887@sgi.com> References: <1181608444.3758.73.camel@edge.yarra.acx> <902286657-1181653953-cardhu_decombobulator_blackberry.rim.net-1527539029-@bxe120.bisx.prod.on.blackberry> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <902286657-1181653953-cardhu_decombobulator_blackberry.rim.net-1527539029-@bxe120.bisx.prod.on.blackberry> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11757 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com 
X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 12, 2007 at 01:12:15PM +0000, salmr0@bp.com wrote: > > Thanks that helps. Now that I know I have the right sunit and swidth I have > a performace related question. > > If I do a dd on the raw device or to the lun directy I get speeds of around > 190-200 MBytes/sec. > > As soon as I add xfs on top of the lun my speeds go to around 150 > MBytes/sec. This is for a single stream write using various block sizes on a > 2 Gbit/sec fiber channel card. That's for buffered I/O, right? That sounds about right - if you do two writes, it should increase a little further. Also, direct I/O should be able to get you to >90% of the raw device capability.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 11 07:14:12 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 18:30:45 -0700 (PDT) Received: from nz-out-0506.google.com (nz-out-0506.google.com [64.233.162.235]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5BEE7Wt006001 for ; Mon, 11 Jun 2007 07:14:10 -0700 Received: by nz-out-0506.google.com with SMTP id 4so1069317nzn for ; Mon, 11 Jun 2007 07:14:08 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:message-id:date:from:to:subject:cc:mime-version:content-type; b=bDXgf3nwxrvoZQ+A0OVNX+KUvLa9No/mQrVv5fjghOIRe5pJyqYhICL7gOhhv/RfOxUTavsrc1A8LxJRg+ZD/VaRrk8CW9EYpLR4O8CPDnmgpdHdBzYk0CdpM7EdGcmdFsB6uXb2+fBlSJlTeLXWaRvIae7LK9oJdzJ9Y/1sZvo= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:to:subject:cc:mime-version:content-type; b=BCclVjOWg779i9ikUGdBiqhsmHAV6/lPrURCAhLXkUo22EVeUImBIffgNqBPYXdZCaLzpNcnW8kdWfJK6sx1ZMQwdzHFOvJ9wtYtkX3t/31EMEf8x6nRhdK86n08PRzZyx2V903qmrhA8/AsvVCSnm5cYQk+DQnZHYidkt4aJNs= Received: by 10.143.33.19 with SMTP id l19mr282124wfj.1181569766761; Mon, 11 Jun 2007 06:49:26 -0700 (PDT) 
Received: by 10.143.3.17 with HTTP; Mon, 11 Jun 2007 06:49:26 -0700 (PDT) Message-ID: Date: Mon, 11 Jun 2007 19:19:26 +0530 From: "Satyam Sharma" To: "Marco Berizzi" Subject: 2.6.21.3 Oops (was Re: XFS internal error xfs_da_do_buf(2) at line 2087 of file fs/xfs/xfs_da_btree.c. Caller 0xc01b00bd) Cc: linux-kernel@vger.kernel.org, "David Chinner" , xfs@oss.sgi.com, "Andrew Morton" , "Christoph Lameter" MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_72026_18516924.1181569766607" X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11758 X-Approved-By: dgc@sgi.com X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: satyam.sharma@gmail.com Precedence: bulk X-list: xfs ------=_Part_72026_18516924.1181569766607 Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline Hi Marco, [ Re-adding David, XFS, Andrew and Christoph; this appears to be some SLAB / fs (?) issue, so I'm a little out of my depth here :-) ] > > On 6/8/07, Marco Berizzi wrote: > >> After few hours linux has crashed with this message: > >> BUG: at arch/i386/kernel/smp.c:546 smp_call_function() Well, _this_ particular "bug" (due to a WARN_ON(irqs_disabled) that should be avoided when we're panicing) is resolved in the latest 22-rc4 / -git kernel. However, interestingly, this is not the problem that crashed your system in the first place. Your box had *already* paniced, due to unknown reasons, and _then_ hit the aforementioned WARN_ON. > > Which kernel (exactly) was this > > 2.6.21.3 Ok, so apparently what happened here was this: Some RCU callback (that calls kmem_cache_free()) oopsed and panic'ed his box. 
[ Marco had experienced fs issues lately, so we could suspect file_free_rcu() here, but I can't really tell from the stack trace; BTW what's with the rampant disease in the kernel to declare as inline even those functions exclusively meant to be dereferenced and passed as pointers to call_rcu()?! ]

Sadly, 21.3 (21.4 too, actually) had a busticated smp_send_stop() that would always go WARN_ON when called by panic() as mentioned above, which meant that the original dmesg stuff outputted by the oops + panic got scrolled up and all that we had on the screen was the stack trace for the WARN_ON when you snapped the pic -- the system didn't write to syslog messages in time and so the extract below isn't quite useful :-(

> > and does this occur
> > reproducibly?
>
> I don't know. I try to explain. With all debugging options
> enabled 2.6.21.x has never crashed. After two days 2.6.21.3
> was running without any debug options, it has crashed.
> Tomorrow morning I will start that linux box with linux 2.6.21.3
> without any debug options, and I will keep you informed
> (friday evening I have switched back to 2.6.21.3 with debug
> options enabled, so the machine doesn't crash during the week
> end: this system is my company firewall.)

I hope you're able to reproduce this with various debug options enabled (and/or also try the latest 22-rc4 or -git kernel). Could you please send the .config that crashed too? Anyway, I'd have to leave this up to the others Cc:'ed here. Doesn't look like a known / resolved issue, though.
> > Also, could you please send the dmesg, > > Jun 4 20:53:05 Pleiadi kernel: sanitize start > Jun 4 20:53:05 Pleiadi kernel: sanitize end > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 0000000000000000 > size: 000000000009ac00 end: 000000000009ac00 type: 1 > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() type is E820_RAM > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000000009ac00 > size: 0000000000005400 end: 00000000000a0000 type: 2 > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000000ce000 > size: 0000000000002000 end: 00000000000d0000 type: 2 > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000000e0000 > size: 0000000000020000 end: 0000000000100000 type: 2 > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 0000000000100000 > size: 000000003fdf0000 end: 000000003fef0000 type: 1 > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() type is E820_RAM > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000003fef0000 > size: 000000000000b000 end: 000000003fefb000 type: 3 > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000003fefb000 > size: 0000000000005000 end: 000000003ff00000 type: 4 > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000003ff00000 > size: 0000000000080000 end: 000000003ff80000 type: 1 > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() type is E820_RAM > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 000000003ff80000 > size: 0000000000080000 end: 0000000040000000 type: 2 > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000e0000000 > size: 0000000010000000 end: 00000000f0000000 type: 2 > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000fec00000 > size: 0000000000100400 end: 00000000fed00400 type: 2 > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000fee00000 > size: 0000000000100000 end: 00000000fef00000 type: 2 > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000ffb00000 > size: 0000000000100000 end: 00000000ffc00000 type: 
2 > Jun 4 20:53:05 Pleiadi kernel: copy_e820_map() start: 00000000fff00000 > size: 0000000000100000 end: 0000000100000000 type: 2 > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 0000000000000000 - > 000000000009ac00 (usable) > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000000009ac00 - > 00000000000a0000 (reserved) > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000000ce000 - > 00000000000d0000 (reserved) > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000000e0000 - > 0000000000100000 (reserved) > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 0000000000100000 - > 000000003fef0000 (usable) > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000003fef0000 - > 000000003fefb000 (ACPI data) > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000003fefb000 - > 000000003ff00000 (ACPI NVS) > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000003ff00000 - > 000000003ff80000 (usable) > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 000000003ff80000 - > 0000000040000000 (reserved) > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000e0000000 - > 00000000f0000000 (reserved) > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000fec00000 - > 00000000fed00400 (reserved) > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000fee00000 - > 00000000fef00000 (reserved) > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000ffb00000 - > 00000000ffc00000 (reserved) > Jun 4 20:53:05 Pleiadi kernel: BIOS-e820: 00000000fff00000 - > 0000000100000000 (reserved) > Jun 4 20:53:05 Pleiadi kernel: Zone PFN ranges: > Jun 4 20:53:05 Pleiadi kernel: DMA 0 -> 4096 > Jun 4 20:53:05 Pleiadi kernel: Normal 4096 -> 229376 > Jun 4 20:53:05 Pleiadi kernel: HighMem 229376 -> 262016 > Jun 4 20:53:05 Pleiadi kernel: early_node_map[1] active PFN ranges > Jun 4 20:53:05 Pleiadi kernel: 0: 0 -> 262016 > Jun 4 20:53:05 Pleiadi kernel: ACPI: RSDP 000F6BA0, 0024 (r2 PTLTD ) > Jun 4 20:53:05 Pleiadi kernel: ACPI: XSDT 3FEF5381, 004C (r1 PTLTD ^I > XSDT 6040001 LTP 0) > Jun 4 20:53:05 Pleiadi kernel: ACPI: FACP 3FEF5441, 00F4 (r3 FSC > 6040001 
F4240) > Jun 4 20:53:05 Pleiadi kernel: ACPI: DSDT 3FEF5535, 597B (r1 FSC > D1649 6040001 MSFT 2000002) > Jun 4 20:53:05 Pleiadi kernel: ACPI: FACS 3FEFBFC0, 0040 > Jun 4 20:53:05 Pleiadi kernel: ACPI: SPCR 3FEFAEB0, 0050 (r1 PTLTD > $UCRTBL$ 6040001 PTL 1) > Jun 4 20:53:05 Pleiadi kernel: ACPI: MCFG 3FEFAF00, 0040 (r1 PTLTD > MCFG 6040001 LTP 0) > Jun 4 20:53:05 Pleiadi kernel: ACPI: APIC 3FEFAF40, 0098 (r1 PTLTD ^I > APIC 6040001 LTP 0) > Jun 4 20:53:05 Pleiadi kernel: ACPI: BOOT 3FEFAFD8, 0028 (r1 PTLTD > $SBFTBL$ 6040001 LTP 1) > Jun 4 20:53:05 Pleiadi kernel: Processor #0 15:4 APIC version 20 > Jun 4 20:53:05 Pleiadi kernel: Processor #1 15:4 APIC version 20 > Jun 4 20:53:05 Pleiadi kernel: IOAPIC[0]: apic_id 2, version 32, > address 0xfec00000, GSI 0-23 > Jun 4 20:53:05 Pleiadi kernel: IOAPIC[1]: apic_id 3, version 32, > address 0xfec80000, GSI 24-47 > Jun 4 20:53:05 Pleiadi kernel: IOAPIC[2]: apic_id 4, version 32, > address 0xfec80800, GSI 48-71 > Jun 4 20:53:05 Pleiadi kernel: IOAPIC[3]: apic_id 5, version 32, > address 0xfec84000, GSI 72-95 > Jun 4 20:53:05 Pleiadi kernel: IOAPIC[4]: apic_id 6, version 32, > address 0xfec84800, GSI 96-119 > Jun 4 20:53:05 Pleiadi kernel: Enabling APIC mode: Flat. Using 5 I/O > APICs > Jun 4 20:53:05 Pleiadi kernel: Allocating PCI resources starting at > 50000000 (gap: 40000000:a0000000) > Jun 4 20:53:05 Pleiadi kernel: Built 1 zonelists. Total pages: 259969 > Jun 4 20:53:05 Pleiadi kernel: PID hash table entries: 4096 (order: 12, > 16384 bytes) > Jun 4 20:53:05 Pleiadi kernel: Detected 3200.428 MHz processor. 
> Jun 4 20:53:05 Pleiadi kernel: Console: colour VGA+ 80x25 > Jun 4 20:53:05 Pleiadi kernel: Dentry cache hash table entries: 131072 > (order: 7, 524288 bytes) > Jun 4 20:53:05 Pleiadi kernel: Inode-cache hash table entries: 65536 > (order: 6, 262144 bytes) > Jun 4 20:53:05 Pleiadi kernel: virtual kernel memory layout: > Jun 4 20:53:05 Pleiadi kernel: fixmap : 0xfff9d000 - 0xfffff000 > ( 392 kB) > Jun 4 20:53:05 Pleiadi kernel: pkmap : 0xff800000 - 0xffc00000 > (4096 kB) > Jun 4 20:53:05 Pleiadi kernel: vmalloc : 0xf8800000 - 0xff7fe000 > ( 111 MB) > Jun 4 20:53:05 Pleiadi kernel: lowmem : 0xc0000000 - 0xf8000000 > ( 896 MB) > Jun 4 20:53:05 Pleiadi kernel: .init : 0xc039f000 - 0xc03ce000 > ( 188 kB) > Jun 4 20:53:05 Pleiadi kernel: .data : 0xc02fd400 - 0xc0398114 > ( 619 kB) > Jun 4 20:53:05 Pleiadi kernel: .text : 0xc0100000 - 0xc02fd400 > (2037 kB) > Jun 4 20:53:05 Pleiadi kernel: Checking if this processor honours the > WP bit even in supervisor mode... Ok. > Jun 4 20:53:05 Pleiadi kernel: Calibrating delay using timer specific > routine.. 6403.78 BogoMIPS (lpj=32018905) > Jun 4 20:53:05 Pleiadi kernel: Mount-cache hash table entries: 512 > Jun 4 20:53:05 Pleiadi kernel: monitor/mwait feature present. > Jun 4 20:53:05 Pleiadi kernel: using mwait in idle threads. > Jun 4 20:53:05 Pleiadi kernel: CPU0: Intel(R) Xeon(TM) CPU 3.20GHz > stepping 0a > Jun 4 20:53:05 Pleiadi kernel: Booting processor 1/1 eip 2000 > Jun 4 20:53:05 Pleiadi kernel: Calibrating delay using timer specific > routine.. 6400.45 BogoMIPS (lpj=32002267) > Jun 4 20:53:05 Pleiadi kernel: monitor/mwait feature present. 
> Jun 4 20:53:05 Pleiadi kernel: CPU1: Intel(R) Xeon(TM) CPU 3.20GHz > stepping 0a > Jun 4 20:53:05 Pleiadi kernel: ENABLING IO-APIC IRQs > Jun 4 20:53:05 Pleiadi kernel: migration_cost=142 > Jun 4 20:53:05 Pleiadi kernel: Setting up standard PCI resources > Jun 4 20:53:05 Pleiadi kernel: ACPI: Device [PS2M] status [00000008]: > functional but not present; setting present > Jun 4 20:53:05 Pleiadi kernel: ACPI: Device [ECP] status [00000008]: > functional but not present; setting present > Jun 4 20:53:05 Pleiadi kernel: ACPI: Device [COM1] status [00000008]: > functional but not present; setting present > Jun 4 20:53:05 Pleiadi kernel: PCI quirk: region f000-f07f claimed by > ICH4 ACPI/GPIO/TCO > Jun 4 20:53:05 Pleiadi kernel: PCI quirk: region f180-f1bf claimed by > ICH4 GPIO > Jun 4 20:53:05 Pleiadi kernel: PCI: PXH quirk detected, disabling MSI > for SHPC device > Jun 4 20:53:05 Pleiadi kernel: PCI: PXH quirk detected, disabling MSI > for SHPC device > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKA] (IRQs 3 > 4 5 6 7 9 10 *11 12 14 15) > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKB] (IRQs 3 > 4 5 6 7 *9 10 11 12 14 15) > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKC] (IRQs 3 > 4 *5 6 7 9 10 11 12 14 15) > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKD] (IRQs 3 > 4 5 6 7 9 *10 11 12 14 15) > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKE] (IRQs 3 > 4 5 6 7 9 10 11 12 14 15) *0, disabled. > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKF] (IRQs 3 > 4 5 6 7 9 10 11 12 14 15) *0, disabled. > Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKG] (IRQs 3 > 4 5 6 7 9 10 11 12 14 15) *0, disabled. 
> Jun 4 20:53:05 Pleiadi kernel: ACPI: PCI Interrupt Link [LNKH] (IRQs 3 > *4 5 6 7 9 10 11 12 14 15) > Jun 4 20:53:05 Pleiadi kernel: IP route cache hash table entries: 32768 > (order: 5, 131072 bytes) > Jun 4 20:53:05 Pleiadi kernel: TCP established hash table entries: > 131072 (order: 8, 1572864 bytes) > Jun 4 20:53:05 Pleiadi kernel: TCP bind hash table entries: 65536 > (order: 7, 524288 bytes) > Jun 4 20:53:05 Pleiadi kernel: highmem bounce pool size: 64 pages > Jun 4 20:53:05 Pleiadi kernel: PNP: PS/2 controller doesn't have AUX > irq; using default 12 > Jun 4 20:53:05 Pleiadi kernel: nf_conntrack version 0.5.0 (8188 > buckets, 65504 max) > Jun 4 20:53:05 Pleiadi kernel: ip_tables: (C) 2000-2006 Netfilter Core > Team > Jun 4 20:53:05 Pleiadi kernel: Using IPI Shortcut mode > Jun 4 20:53:05 Pleiadi kernel: VFS: Mounted root (xfs filesystem) > readonly. > > > stack trace, etc for when this happened? > > I have only a monitor bitmap. Tell me if you want it. [ Marco sent me the stack trace photo off-list; attached herewith. 
] Satyam ------=_Part_72026_18516924.1181569766607 Content-Type: image/jpeg; name=100-0081_IMG1.JPG Content-Transfer-Encoding: base64 X-Attachment-Id: f_f2sxxbgr Content-Disposition: attachment; filename="100-0081_IMG1.JPG" [base64-encoded JPEG attachment (stack trace photo) omitted]
xWfJKWKknrXTDYyd0wUsJeTxTXBlk4PIqGrs06Fj1B4NNjQRHeRn2rWKIvqSK3kncRwe1C5Lk5qk ymhBISpB6+tOEhXBHWm9SFdDu+4jmiIvMzZAX05pNJIq9yOP5HPHNO588HPFJIllktl+nFJI4TFR qMWbcFDY+WnwsZu3Sr6EhMnmkEHaBUI3Zx1Wq5dBxZaReB6VIFLnYamxTYiKLeTyzkt64q8oCfKe TSkrIXUbt2sQOlSC3VV+bkGiKC5GpEPygfLVnJkIParEEknluMDINWQNrHB60rCGLEzyEBuainhk UbT+YNJrU0WqHwjJAIwRTGG5yR607Ex3LTYCADrUSIVBBFK45K5PFnYTikALpjoRQC0QW/yKSwyR Uqy7hk0yBj/fGRxQYwvAGBTsPclHyjHemRRtKWK9KLiSsOii25AoRTzuGRVoUmQgiA8jKmr6yxjo u4USJimiPcI8t19qYzblBxz2qN2NkowAD3pNu88DBrS4WuG3B+9T0feCOg9aW5SVkIwVkCjtRJHv wV6Vna0gY0KqkZHNWGYDjH41o3bQaWlyNY8NgnJqfgg4GSKaZGzI4mbafl4qSMEDPb0qJblx2JX/ AHhUhQAOwp7QeY5dfypNAMkULjacetLHGrHPf1rVbGcndhJy2CKi4EmO9Sapjs5m5HAp6N+/ZiMU 0iW22QsfmyBTlXnpyaa0E1cl8s8BuopDJ8x7D6UMErjhxgdakVcL0/Gpeug0hm7d061LHGGwW4NN A0MnlVZQpH5CmGJiwPSiWhUN7C/dc4H1pyS7F6UIcosY4KsFUcHmnhWVsA9KrcwWjB1CuMjj6VWm +VwAeKLFCFQwyR09qib5nHcUvIp7E20qCF6VGjYQkdamWgokIjfG7fwe1Vpi6t14qWUhUBPzAcUj ESgYHSrS0ByGL80nIwabHL5UhwctUW1C+gg3M5LHmmM2MMorRLUgjLNLjPGKjkOQARUyWo7DoxuQ c/hRnYcstVAVhjgzDIO0e1TL8wGOgqmK1iTbz6iqrbd4K81MdWMbI3lseeKZBL9pbIBGPUVUthp2 ZeYYQ1FGu1Mg5JqVohXuyZjsUf0ppTI4NKwSY1E8zoelRbHc8Hj1qloOJYCiHBHJ75qOMFmYv0zS tqD2JbhNqKRSJhjgjFaPYS2JYmdMoTx61WVSkpxzWQ7EspEiggc1Ix8tVXGSaSWo3oh7LtdcmppM OSO/aqYkQRtsQ7hzVqOIPHuNVYmxWllYsAKkgQmMk0nqy2y0mTtHamuR52BxTElcUN5WTjNEHynJ 79qcSXox6tuLIBg/SmCPYcdTSbGkJLHkD0HahFLOM8ClygPkiz8h65p+37NhRzgUuXUrmsiVQVXc TuzSSExfQ1aWpF9ByHdgAdqjChkc9waGrMlO5Zt48ID1WlMefnHbtTLsMml3oNq8+uKf5mIgSMnv xVPYUdyY/v4MqNv1qAfuo9meahaFNhBGOjHNTl124A/SluSNh5GMVYM6h8EfpSaEtywkBVN2Rj0q EfuznFVFXG3oK0ofBA5qeSXdEAODT5Rp6EfmOwCk9Kd5m9D6iptZhe5D94jI61eMZVBtq2yRwwYy T1HaqgkU47DuKW4loSzMAAo/CoWUgAN1oWiL3Elwy4B5FVI4mc5Py4pxFJXASBWJI344qoE5YnjP Sk9ASKB6fNzVW4wSOP0oTuW9iLOQS3HpVKSQS+wppXMW9TKuxg4Xmq0q7UB6CqWgLcyvklJyP0qS KL34rxk9bHYWEtmd87vl7itCPaqgYwPpVTFsWUTYwAPymrsUS5OOorNuxS7E1q3mf1rS2eW3ByKm T1L5bItKvyg4wasRqSdpOKm/KEUWJbcptxyamMR2AE80blCmDzNpA2461IFPnnHT6VnfWxS0HnCL 
tZd2e+Kkij8phtGatR7iuOHzSlRyKiaEqx3HjNU0SiKaPcFqtOu1galPoKSSI2xswPlqHzSqba2W iM0ytL+75JrPklVs9iKqKFfUzzfBQcisKd9zbwea05RX1Kcq78Ed6gZQ0gGOB1NXFNBLcV8xTdNy mlkj8uUlaEiZMXJcAdW9aFyj7ab0HFErASR5Y/So1+bABpNPconiT5yD0pHxG4GMj1q4vQViZpNw 2Y4oTCjHek22FrEBQk8Dn1qWPhsNyRVrYlkjMrDOMVJHB55GetCWgWHT7lQe3amo21tw49aEtBDS QDnGAfapxtjizV7oNmOUAxAmlD+WM96hqwXLKSiRDkZPrUMSlMtnJ96HqgiWUIIGelT7wVORwKWw 7aiB1VRjn8KImAk5HH0oYNE2BJOAOlR+QUnK557Gq0QRRZWLywSetKqkxnByai+o0+gi25jQMx61 XZMHg02yeoQRfNknJqY565qS0KpJI9KlHBz1p2ZDHIqvnBwaYD8oBHFUkAjKHfI49sVFKzIRTsPZ jkYspPSkijZ23B9o9KljZYyVbCc5p5c4K4q+hNtRirtiI6gUtt8xyBiluFiUIC53dKe5COAenahq xO4xjyeKaimUEjgiqSugE8olevzVPbYQcjPrSGn0Gx4eRiO9TqPJiwOTQrXHuNjbb94ZJppIDfN2 6VT3JvpYkjXfnjmkjJiJGOadibD7d2lDgjBqwYyIkI/Gs5FFiFAjsp5HaqkTMJyRwKIajlsTrGJS SeDSeUAu3PIrTYzE4Yc1EIwfmzQykmW44gRuzUUgy3IxTvcrYcIxtPPNNXETgtzilbUV9BWO5i1N KkkZFJjTsgYFADipbhf3QI4oRN7kKr5aK3f6VPJMCAcfNQWLsBjJxlj60YLQqM896iWo4KzuMwyN kHjvUEkm/AAwKEurLlJFwgGEEVUMm1hitEYdSdpg5AIqtIgZjnqKezK2GvlkGOlKYvMUbeBipmNa kKxsYiAelOV17jFTZtDdkI42IQO9ZytjKsM+9VyiTsSxoeitx3FICdjBRzTWhDIBkgbuDUaosYJP 3jQ1YERl92QOuKbEhaPJHFWtgsSg7yB0poGJCvVfes3uaJXQ1INsmc09pA0mDxSQnsMdRA4B5Bpx TehA45rRuwkrjl+SKoUKAdOfpUrQlrUdIouBkDGKcEwg2j5qbdxWY9VwCD170xOUAUYHvU9RpaD2 QpN83IqJEEm8k7cGqbBK4RDemAeKnUEJjPy046hsNH7welTRWplQ7e3rUJajb0GIu7r2qEHduzxz xVMcdhWzJGPanwZHDdKS3HsiSZhE21RuzUUasG6809iJMuo29jntSRqGnLDr6UlqNMdc/Mg7Go/L YoADitFsBIjAvyOlSGTbGWIytQnqJio4aDdjGfaoVUnlqbKvZFll8uPPX0qKN9zLxg1cdjN6stMc NkjmhXw5OOahLUbdkRtLtTO3LZqbONpPFXLQUW2CyAHcBTpCByRnNRFlSQ2OU5K/pTkiMmQe1Deo kiaPMWfWq5jJQsRx6VS1J2Y+F8RADj8KfKxSQY/Glsy0PBJfAHFNkPlkACqWobDZWcEL0FSsqrKt D2EPbEbkjn8KjjmEL5K53e1Y3ZSsWl/dE5GQad5SxRhjznpV3ElcbG5mIxxippG6jrTTsSMigZj8 v41LcIoZcimndlNWRMqBOT+VM+6wCjJNEggSyLtYVGsjRYweKlagyw2GIYGqkiAnAHXvTsSxm37M gB+Y09/u5PWn0LiVgCRk8CpJHxak9F7U46iZnQjYuQcmpNvmkk8mpkOOxmSf6Px96oEAjc7xkHpQ tAb0KEyFpMjhaqyDcNvarTsYvUpFQGIxk1nSK0smMYUUJ3ZSWhnlBtGOtSxnZg44FePb3jr2LStv 
OcVajGMlqb1FfUfFGwzzkZrRj3pjFQ0XFWZp2sShDzyasCFlhB7g1mbF5NysCemKlK5iBPFQ9QRP GpVMH86tEfJnvVLsS0TQz/aUwBgr1q0U3JmpasxPQZu8xMY4p4OE2g8VXMSnYaseyMuDUcjGSIl/ wq0MyJJzOAACpXjkU7cFX5qFHUmUrlGeQg4qsjFASeT2q7GdinPc8EA5NZBm8z2NawWoPQzZmO8j vWdlijLW8VqZsjIIRQvGOtOV+y/jQ9ytxm8g7T+FSqCRhhVJWRO7GpEyHcG4HUVIgHnbj0NZt3NN h6MrM6flSQJh+TVLYT3JJZCg46GkaNiuCeKaQPRESKUb19qtKQjcirUSHIOWYkDihcL97r71my1q Sui7QO1II9ihVY5oi7gxzRksFLZx3qGaIvIGDcDtVSdiepO4LKOOBTxH8nzGiLBq42Nx0zwKseWH U1o1cz1QkQCjHSp/K+YsDhaiK1sWnZEW/c21e3erjRARbs5omrMcXcYibVBAq4sgeLbtwfWpKlqC qVfipJFxg5y1ISIYVaSQknNWjx04pdAtZkcb73IJyBSbhyx4FVFXJkNiGXJ6CmrF5p4OMUWsUnoX UXaxBNQZEZJ71qkrGbvcWIbuQOtSL8rkEUWC7GSbh7NSRsd2G5NJMciVkGMmiKLMYqZRKTJkO1cq elPR88461MbgxBbl1KjpStA0EY5z9KvYGxAu5Mnk1JHhUyRuqZO4ooZOBCRjvUYfDAj9KuOwmtSU RkNu6Zqwg2E4wc0rFcpTU+WWUjrSRoWGM81ndpjtYki+UYPJHtUw/eEkjj6Vo2yGtRVPl8mnxMGY k9atO6HYkjUliM4FNiLcr79azkrjsPYsxwOoqRcKis1OmrDlsOL4n24qrIpM/pVvQhIsgrEpz1NR 7N3AOBUFPRC+cUIA7Usrb5MGnsw3RG8YnG0HbjvTVTaQuSapszStuW9oU4PalYncMdKkp6CyICAQ c1GP3kmOoFJXbHbQWYkkKR9KcBtABHNVsCdxJFO4EHFPK7eB3qbajvYS1iMO5s7l96rGUTzMAu36 Vb2J1uOYtAgB5qWOISpnGCKWwo6sYsZZifSooRumYEdad7oclqSLAVkI7Cog+1iAKS1ZT0RD5RcE g7SO1NaPeoJ5IrRqxG4YwQKgIAkZe1K4ah5WEBFQySGOPj71RfUCvlhGCeTUEnG09qp6iJsBT60r Ky9qtbBcQoNvH3u9JJ0AH3qzktS1LQcq7lznkVSkmTdhl59QKpJEuVydmWRV9qADzk0SRSY5sSIM 8YppUZyBU3QPcVGDLmkZdjhx+VKwSZcSYEk7etV2BaPYPlqrWDoRpCwQbjuIqPZuBz/Kp3YrWIo4 yrEKeKuJHleTwKtbAPVw3C9PenurNghtvt60risMiHmOeaZJB1IqNRpoRVOFbtip0iMuQTgdqdtb g3dCRjYxXGTTVjYS4Hf1qidycfI5BpU3JlgKHohoTBcAkfnUk3JXaOaa2J2ZIFJxmlRyhPGVp8o1 qSbvNQYFEz+UQFAppalWGAGQhT1pzYD4PBFA+UtnCoGbk9qhdipBPSknqQ4jl2v0GDUed7kntTlq NKw8ShCMDg1Ix28AZqVoVuhqx7n3jg1OysEDDrQ1dk3toRrOcZK809Lje3A+oqoit1JZyAAxGBTB JlM0mWiMTjbtxz64qRlLxjHamtBMjaUyMGIwBTmYO+R+FUlcT2JACMg9aIH7HtUuKuJaE0h2txzT lkE2A3anyjvYkwrOQBg+opIozADGTvb+8alrQRds/lQqelOMPm/h0oiNsZtbIDVIyFMY61T1RK0I N+5wh5NPWMCQ5PFStB7j5AB8oHFFsghJU8076CloyTeN+ccVWaLdKWz8tTe5oiKdgmABkVWlXdBh 
ulNPlJZCsQePA4xVQNsU7T81Naol6MgJBkHr3qnJJmRjjI7VD3Ha5WSTOciqm4Marcm2pTZgsh5y KpyTBZcdjTSaHcyVQRnI5BqxsIQZ6V5h1slRjE4BHFWTliB2qSUaMRCjAFWFyeB0qW9DToWoYhB9 4nn2rWEpSMcZrJgm7k+43G0jj2q0oIwrDcKS0NNi2y4YD9Kmx0AFIGTBRJ90YxTX3DgU4+8iWh4/ doOOaaoXfk8fhU2sKwyQbpAB0qOU5yO4rZBYpSyh4QAuGFZv2gRphuTTWjM7FbzN6kntVGW4JT5R g1aQ+hmXLFIvesZm6EH5q2iZvUbLJgYPJ9aqXW6LZjvWz0IQNLmTb04psOWzjtUsdx0bLg7uSKla TcBxxV20E3ZiBstgUjEoxJ59qzcSxUXLZ6ZpzxlAAadtCepK8e4KueKcRtc7j0oTCQ1pPnBUVGCV fnnNbxasZ2HCdi20dKl8xS3IywrGSNYsXzd7YIxVtYsthetKKsx7kUalC2epNLHIsWVIyactSbWH q2Rg0hTB45FK1ihDGAflGKsquxCCa1iyWrkcRMpIHap41+XBPSqRmHl+Y3y8LSlxDJgDI+lRU1Kj oWQ5aM5HzVNBgKM9RWd7FNjZJd8o5qfbjr0oKRC2RwpxUmSw57U2SwjiVuRwaHiOzjkirWiDcbu3 pwMEVNDKSnTH4UJXDRDARIxC9fpTfL3GnN2Qkiyh8two5PrSvJtbkc0oy0E4iAljhhz606ONVl9T SirMb1Q7bmQimo+0lD0+lWwiJGuAR/DSqTgDtUbFNFvkpweKEUrGQ3SqZndhFaNb5bO7PQHtTkbL FcYqWikNli3KMnmpYpFtGBK7+KcRvYGO6Mk8elQoGHQcVXUL6CygsgIGaS3YQDJGazktSbjtpZiV HHvUqttTnrVrYaYLFv74p0UWZMelPYNxzgxnJGRT8gsD0FGgaksSsuW7VAVKqc+vFRe0hrUGkZO2 TR5u5QCOTWl7goj/AC1MfzctTkA2YPBpctwBLUznCnaKn2iJdrDkd6drohOzIFO8EYxSyfugNoya lblS2Goe5HzU9VMeT1B/Shii7ocoHbgU5eFJHWiDKGF1ZgT1FTzDcAc1djNsZkFMd/WiUlVXHTua kaI1fKOFOBSQL5cALDk0wY8fLyQSPemqpfJU9O1Ta7KjoOgBQF/0qKTMh3R8Gq5bCkxCXZQoPzUw xsFyetTsUtUIYt8eScVCiknK9K0vdE2sRyNz7GoJVHCg/jWfMUlZiM5jUAUxhhQH5NJbjYhbcm0C mH5UAK8Vp0MrEIj8t92cj6U9pOhzTTDlK8qeYQytgjrUyfI2Rz60MRHK2IsgYP0qWFgsfK7vqKm9 mXy3RHOMoOMUioVQEHNW3oCHmL7TIqqdvrTZYiknJ+Uelc/UpqyIGbbKFxxV5YmZcnoK26GbJGcK mKrmT2pXH0JFm2jpzSqN3OM0WC5AVHm5UcVNwRtHHrSbsCQ9VAb5RwKZHmdyGG3HSmtR7ExUIpIF V0f5CTxzWtkZjZm3BBnCipkHzAAc0nGyKSuSRjaSCKaf9Zjp71ncqw8vsO0jPvipPMK4A6UJ3Jaa HMdoFTx7Yny3zH6VWwcuhE8hd+Rge1SGPHGeDVX0Gg2eUdoNRyQbuCeaLlXHldigA/N61CVaUDcO hqNbjvYdJuN2n90CpfvSkEZAqrEt6ir1yOMVIgMwJ+6KSEwdVSRfSpRhy59DTSHexXEvzYxj0qUS OsoDdO1UhONy0Y8KW7+lRLHhCV+WiG5MnZFd28yRUxT8EsYx2oluEWSbAgAxUrAsmOlS2XcI05KH tR5CqM5wKcXYhoQvvPHT1p0kHmx/IdrUJ6jCMNHDg8t61KqBlBPBFXcaLPmqR05+lNDMpHHWs9yr 
F1RxgGo2crhAcH1pXsZtE1w4jVAOT3NSPIrDCjmlcXkV4QIpCCNx+lSrGhyWPPpQlcvYhC7GLBuP SpEIKkD71aJWRD1ZE7bEz2qJXDDANJKxSuxN4K/MPmrPUkyNnpUvVlEDzbDx0qKEBZCxrS1kS9xs rrIMBdtZ7KYmOPmrNofQqIdobPWs+SJGzjqfaqjoIpSWwQcGqkpDAYHIpyZHUoWhwxBGcVdZySF7 V5Z2MkZBIwOeRVuAbwRj5qhj5TSgjATHerKQnd8poSvuBdChjjGSKmDMSDjAqLajW5ejyJwB6VcV zChB6k1EkXcsAeWQwGTipoPmc560mtBsnjyrkdKWHcm89aUNBsa0p8oNjcaUp5gHGDQyGKhO/OMY qCbb8x/iqkNbGbPGYkU9jWBd4RsmrTuzJlFbss5UDio5JMZFbpaE3MKWTCHJzVBZd2OKuN0Iero7 8dvaoZRuYk9R0qpCsDw/uFc/eNDReUu6nZsVrIIofJ+c85p6yBzyMVd7ISVyVlCr1pqjfxTjaxTu hxXcgA4xQUYqNxzSbshdSZVZlx09DSG22gZbc1QkO12RFSkuD0p6lWOD96tFdIl7ljYp6Dmo4oMO STRa4r2HTxbWDU1WYS7geKzejsaReg+TL9DzSxg45600gY6JsZBGTVuPIYZ4WnckaSVkIHSnSw7o sg800x7MhWPywu09etTSYiADd616EMVm2hQo4q35fRsVi9y0tCaJtzEkYFQtGQ5z92lIljlUNyKc u+VSEHze9K44i+W6Dbj5qmaL9yGz83eqjqHUmAGwL0JpyN5WUxz61SKsVQmHBbpU7yhRjsaohodH iMDC806Ztq5xzUPUpIlgCyKAOvrTGtthDE5NJISfQGbb+NQIu6UGqWoth/mZYqRz9Km8oZBNXcLa 3FlnWLAK7hTVBY4AwKi1y2wU7JNvapnjZnDZ6dqpktEu9t2c9akVh3FQ3cjYadqKSfwqjKW2LTRS 1ReyJML3p7qSAppisNZvkCjjHU0zcsT7iOB7U+W4WHLfpc5KJs9sVagjATLc5qdtCktBssXkAE8g +1M88Ajjn3rZLQXUnyGbb2pCFLbcdKhgmrCsCqKM8U6STgLjmicVa4ou7GxAsz5/WmeTuQH0qVoa raw9cKvvQyhhj+KtVsZMjA3EANhqseWTw5z70rhbqRp+7OP1pZcLgnipWjFLsIqMSD605z5b7fWl IcVYjyUYAjirDuqpgClAuS0K/l+aCMYqwjBSoIzitdDGw4kM/A4pZUxgfw1Ety0iu0fQjgCrGC6j jimhqIkrEAAimHEacd6mO45aDlPlsABUc0DRvvVsDuK1exluyGVgSCpxT87htY9Ky3NL2QHCxgY+ WoIz8x7CrQk7lSWM7sDkUMvlx89fSpcdRqV2Vh8rhjyKa0vmXOVHFVYpibSJcqMVK8Zf5j0pNGa0 1IDtj68rUDIm7K9PpSvYrceIgEyKekZhXg5Jpp3JasMDbnKmnsMJjpUS3LTIo4dx5NOeJ4SCT8p7 VfQz6jsDIPT8KdtypH86lI15rkayjOMZIpxV5nBzhad9RSjoLJGxT3quY2IC56UluJLQftzwo5Ht RtcDGatsm2pHGrhiD8oqyCNnA59ajdl7II3Kr0oZy3GK0tYkZIvyAE4NOjhEkZBP50cwuUhMbNGF AyBTvO+z4XBJ+lXJ3RS0LkBLRnPFMVgxAxk1gMsbtrDeKVk82UBOFq4xIk9SJjtkIxnFSLceahwu CPUU2V0HRRkpuY5qYnEYbqc1D3JihjfPNjpUm/yUO4ZJNaJaBsx23OKhaRicj7oppaibGlWkYEGp uS3vTY0tRC4Z8AcCo/tGCfSlyhJ2FJE6bgOlRwMyOwIODWqSSJd2WIoxKRUsgPmhfTvWcjWOxOD8 
+D2qJ/mTBOOalGckTLwNxHSoFmEvKggDrkYpokllcEgipBOFUDHH0p2KIZS24Y4NI2eABkd6SWpS 1LCkAYUcUhX5sdKbQLckWJjlc5xSFSuM1NwloTkgkYp8sxQhQOaEFyTaQMnhqnhISIiQZY9KUlqF gKdsZpFO3JPFVZWsZPcR38vDDmo5EM3zY60bMrVjYlMRAPWrMfyuRmrYEM9ylpkOu4E8YGahZFjw c49BUMqLGNk8dfeo3jz0qFuKTKaRpIzbiQfTFI22NcAc1t0FYpk4XJFVHZlGRUjbKDSFuCNtV9qj cMZNN7C2Kq8fLVS6i8kgVDEZhUb9o4NWI4S6ntivLTvodbepaiUHA71fjj/A0SKuXIlCrx1q+FOB t4qU7CZNGjK4PetDyemTUbO47Eqo3mjPGKsSKQRu5qHKxaRdVjG4U85707yzE5xyaTldFPYljLEn IyakZSoAzjPWlfQEOjQ7wB92pNuHJHWo5hNWKgk8uQg9TUDfJuB7mtYu5LdjPuZdke3rXLXQ8tTk 5z610QjpczbMmQvbIHHNQTXPmxBhwfSrgiGihM+/AHU1AQ0ZPHNataDQgUKucYNEknmRgY5pJNh1 I44i2ASald9rFT1rRaEsI8qPmOfSn/6w4IxSY0KY9ox1ApYl2t7UXshN3LIQAEdvWlwWwMUlqBFI 7h9uML3NQpb5JIY1VrBHcXAMoLGrgjDMSAM1TegPcjiXZuBPNNVSF27jmpTJSuWVAxg8mo449r5q d2UtC0Y/M4HFRM3lHaeTVATKoaUY4NTv8rDncKnqSQbA+WJx6CrGCkQHrTTGkxhUswUdRSeUZn2k 8irbsrA4lqX7iqBgjvQI23kZrPqNE6xlVyegphPmf7v0q5R0BFqNVVQEHFSsBEflODWaTDYkt/mc 7jgkVEIw64Jxg1ewPRjZF/fhl7VKfmGR1+lU1bUafQjC4BJ5NM8nA/xqE3cTLaLlOvSoym+Iux4q rD2QQFV5HWp0XzSVzTiidhPs6Qj7xY+4pAmIwRzVqNhAJNr7mTj6U8JvQsDwe2KzkXshsUfy7Rz9 adH8z7e9OBO5JLBjBzzT4YuvPNUwQqgh+elSlQSPas0tRtXIyod8GjygWIPIFaEoS3QNISOAKcsu ZiMfpTWoSGgeXw3U1M0YkUYHSmiWI1qGxtHzUtrG0e4vlvSla7LjoiZWZmG9eO1EsIuH3fcbsKu9 tBMYIWEoLDoKkZAxVhwKhitoSyx4kA7UyT5Hyfw4qW3sVGKSInykoB4JqzHbE5BPAp7DRCsflvjt Uu0b9pGCe9VclxuMW2VJNgbcfXFEyH7pPSi2glorCwxHbgnpUbRByAw4+lSwSLMYVcjpjpSR/vCd wzRa6KuRspMmAeKWOTY+CufwqoxshSZKyGJsn8qZ/riWxtxQ0yAj4Uk/dpyEbCxajlbKTK0eWcEf dNWJrdzIAGKD2qdUy0MZTHw7fnQ0Weh+lUlqS7sTdjCk5amww4uZAz7s9B6U5XFGIvkIoBIBIqA4 dst0pJWAabgMQoGQKsXEYFuG3AfjRsUloU2UR7TnIqCQBvvHFPUzS1IkVIwdzD25psVyiNgKCexq 0i9xJTtfcSPzpk8oijBLDB7ZpNCs7FTcFYJnOasRxKuRuH50nBlRjoROBs2q4H400QiFxiXIPqaU YtA1dEjqN+ARmn70jgILZak4tsSViuEG0EOAT7093CR4LbjT5WhWuSRkFEJPNNuHXdgHA9adhxjY jidSpCnpU8UgKcsAPrU8mpruiF7tYj8zZFSMysokDYWhRaZNrDre7julKD5T6nimxSBZjEWBYd80 +Vk9Qn2Jwz/lVY3sUeBk49hVcoO4CVLh8oxCjrxVwXUQUMD09qnXYLFQTxXEpJbH4VNLLGjBQ2Ca 
fI2CQ+KZbcNl81Gl9FyW/lVqLaE0NGoRS5GTge1O0/ULZ5WHOR6il7NgTRbZp3Yv8o7VFHeK4K5I 544ppMmxOL6KxYLISx9QM1MLuK7JPKge1JxbKG/bIY0GSxH+7ULXsQ5TOPTFSoO5SiR/b48liTn6 U/7cjpk5x9K35bIiUbssRX8RUjLZ/wB2npNGybckfhWNncjlYk15HFHjJyPaq66nHKgzuH/AatRd iyf7TE6dTx7VE97AVB5x/u00mKSuh8eoxBOh9vlq5HeQvAckhs+lTNNFRQwXUIdTlsf7tMk1WFZ8 fNj/AHaIxbRWxIdTiL4G764qNtRVZAGBx7Cr5AcSzPqsSyKADgdeKWbVIZOQGUey0KDM3ErJexDn LEHttq5JqUEqhVUjHfbScWHKRi7jlAwWyP8AZqJtTRVKfN7nFKMWUlYbFqEZGAWx/u1aXU4SMEN/ 3zV8jC2pNFfR7S3zf980f2hGy5+Y49qjkCSGveRlgRuH/AacuqRO4BDZHfbT5GQk7lwapDI+Pmz/ ALtPW/iEuw5PvilyM25SW1vkmmdUB49RVhf3gw3BFFjGSsySW2CYPrSBGjOD0qG9QsJKvQEc1Ei+ W+AOTRcbWgeUHYgjJFRraGeTA61W6JWgoj8gOh5as/BRAM4qeoyAFTOeKYvzO3HB9qt7FdCsVMeR 2qrcP5eMCpWpJnyLlmJOfaqMcbsmRVvYVrsYp5PHIqlLOGPzDJrC5UYmdAvmEqRyO9aCcR4rzI7m zYiR9CvArQjyz7c80pPUuJZEbRyYxmr6gxyDnilvqNo0YsBjnrVyCNhyeRSkXEmw0j4HFWVyw2Yy fWsZ6lWsTBdhCjqKk5bPGBQlcJFiPCrkU54ywDmnawIkizu3Z49KN+48DHrSigaEkiB5A5FUJ03g nvWkVYyephyrkNk4xXLTyAkgkZ+tdcFdGVrMx5bvypArHIHTFV7idGG7pW0abFK9zON0iyAg81fe 8UDqC1acugLVkAuEVCx5P0pi7ZcNuwPSpjEpoiF9EJduDx7U5riJnznp7VUabbE9ETwslyDsOMev FVJNQQOBznvxRKDJs7k6anCmQQW/CnJeQ7s5OO3FP2TaDlFl1KMKAC2fpT/tySwgKWDD/ZpKm0Vb QiS9VsKxY++2pIruNNw+bH+7VODIimmN8+JwuQ2P92pZLxEmA+bHstNQuNpkklzGsgLK3PT5aFuo 95JDf981DpscVqPjuYtpHzZ/3alivIRwd2f92qjSZVrMWW+RXAAfHstRtOitkKzH/dqnSYnF2J4b mMuAUbcf9irSyRoTkN/3zQ6TZMF3ITPFKxwr5H+xSi5w4HlsR/umpVFpmjVh/mMHY+W3H+zTra7D IzNEwP8Aumn7NtiGC8aT/lmwA77TVppnbAETE+u01XsWtRNCQSz4KtGx/wCAml86baF8g49gabpt olIsiaVMbYWx/umkPmj5vKfP+6aSpMqxeiVpcZhbP+6arMkzXBxCwUf7JodJsl3J4/tHm4W3JH+6 am2ykki3YEf7Jo5HsCiyrILl34gYD/dNTgTy8C3YY/2TSVF3uaW0JEinHSBs/wC6aBBcFiphb6BT V+yZHKyFYrlbr/j2bgf3TUsaXIYn7M2P9001SYrEotbmViFtm4/2TSx2t3tx9nYf8BNP2bG00TtH eLgG0b/vk0wW95H8otWOf9k8UexuS2yeCxvIg2bVjk/3TQ+nXsUgItXOf9k0lQKWiIm0bUXORC4/ 4CauHS79QBHaPnudpodJglcR9Iv3IAt3HvtNJLpWoBdv2Rsj+LaatYe6NLaEsfh+/MAJt3JP+yak i8O6i8RRrZx77TUuiZ2FTwtqYQKLd/rtNT/8ItqkfP2Z8/7poVIpxEk8HatcgMIHyP8AZNTR+DNW 
lGVhkGO200ez1EodR7eEdYSPi3ckn+6f8Kn/AOEO1koEFtJn12n/AAq40kheQ6PwPrbHH2d8j/ZN JJ8P9ZE2TBIT/umhUk2OSJo/AetOCPJk/wC+TSH4e64Fx9nkyP8AZP8AhQ6Iug+P4e65KB/o8gH+ 6f8ACrB+HWuEAfZnbHT5T/hR7BAtiP8A4VxrryAm3kJH+yf8KvL8M9czkwyc9tp/wpOjqC3sRP8A C3XB8wgk+mD/AIU1/hjrs20iCQEf7J/wp+xSLSROPhLrypkQuD9P/rU//hUuvS43Qv8Akf8ACn7K 4raki/CXX1QjyXz9DSx/CLX2TmF8j2/+tUuihuKQh+D+vSjBgf64P+FTRfB/XgCBE+fp/wDWpqmi Ui3B8GNdIy0T5+n/ANalX4MeIFfPlP8Al/8AWo9mhWJv+FK+IASTE5Hb/OKX/hSWuywZMbj8Kapo LFcfA7XyoURPs9P8ipm+AutuAfKbZ6YrRU0gcSeP4Ga+iDEbY7Crh+B+vnGY2LD/AD6Vm6SbKirE c3wC124YF0YkUsv7PmuyqHWJhimqaKaSA/s768kQk8slj7//AFqdD+zxroJZoyWPej2aE7Eifs66 8gKhTz71PF+zdrrJt25+pp+zVibEw/Zi11OAn61Bc/sw67LgbTgds1Kpq4dAb9mPXGC/KQPrUv8A wyzr0kvlsMrjj5qvkRCQn/DI2uS/KcYH+0Kh/wCGUdbil+UYx/tCqUUaJEq/sp65K+CP/HhTJf2S dbjUZwef7wqeRbDLH/DJOtqV4GCP7wpv/DIeuLkgjP8AvCqcFYBkH7I2tMSGI/76FWT+yBrO4HIw P9oUcqJLyfsiawIs8Z/3hUJ/ZD1c5GAP+BCkoRuO10EX7H+rNwxH/fQq9H+yBqi4GRn/AHhQ4JiW go/ZF1YS4JGP94VJJ+x9qcnBI/76FL2aKYR/seaiPukf99Cnn9jnUmcFmGPTIp8qTEmP/wCGO9Q3 BcjHuwqRv2PtQI2ggge4o5ItgKv7IN8Vxnke4p6/seXrkEnDfUU3BILFlf2Obsn5iD+IpifsdXSz E7ht+oqUkJ6D4f2P7pN21htPuKb/AMMc3Z6HA+op8iRS2JW/Y3nYqd/16VYf9j6QShC2R+FUkrCQ 9v2PGH/LT9BToP2O5MYaTj8KVkhNE6/sb7MkP+HFRRfsc9SXwT6AU42CxYP7H+3GJCPXgVO37Iyx 4AfI+go5U2NFeP8AZCVnO5s+nAq4n7Iyn5Q/14FDSRLSuWV/ZFjCFN36Cq5/ZEi4+c5+gqEkNMdJ +yJGXALcfQVYH7I0EbZMh+mBTdmUSn9km3DgF+vsKVv2S4Fzh/0FJRRncen7I9rKAWkOf90VJ/wy LaAcOf8AvkVWlrBYhb9kW0XnzCP+Ainj9kWz3Y3kj/dFCsgH/wDDJtio++cj/ZFNj/ZOsnbPmEH/ AHRTdmgWhYf9lSyX5dxz/uioJf2TLNsHef8AvkUo2QXuIP2TrLIIcj/gIqeP9lWzV+WLfVRTckWS P+yjY7vvn/vkUP8AssWLJ1P/AHyKV0SyJf2WNPzjP/joqVP2XLCBXO7r/sinoNWKkX7L1l5Od5B/ 3RTE/ZesIySz5/4CKFYUrBH+zLp0LnHP/ARUbfswWLE7WP8A3yKd0JEo/ZnssYLYx/sinj9m6w24 B57/ACip0LauMb9m2xkxyQB/sik/4ZssIs/N1/2RRoRbXQRP2c7BODz77RVZv2c7MPlWP/fIp6Gi 2PHPi18ME8FWi3dscAEBhjGa8UtLz7UEfbtLDOKicLRMJas05hlcE89qiWVpI9p6jvXLa4h0e4pk 8kVAwP3h1pWGWG/1QA4z3qlErw55/Gq6E3FjGxfMbnNVCdzM5/AU0iblP7xDAcmnPKUXaRzTkik2 
Q9aybkbmz0FKAytHwCSKIQGJC8USWo07Iz5UMRJH5VlTSBDkDJrNRux300K0Xy4NSCbbLtVfl7ki vLjobyWpp2sqvlVHP0qRLfyZMk0pK7KWxpCUxADBwfap3BiwetJqxcTQgOVDkVdWXa+7tWbdxpal lYi3zKeKtRsBypzUqN2XLYsRlS+SOamjBLFe2aq3KQtSYIGGMYNJJGxQZNNj2GK+MZHSms5Z8kcV KVh3HeWxBGeKyr2c28BbHIOK3hG5n1Os8I/B3VvH1r50TbYjz96uwb9lPU36kH/gQr1aVFJJsiSG D9kvUn+Xj67hU6/sjX7/AC5Ax1ORW/IkQWh+xzcu6/Pgd+RUw/Y6lSTiU/jinyoa0NGD9juSIFml zn6UH9jt2GQ/8qORCbNGL9jcyKuXw34U5v2MgX5bn6CmkkDLy/sYRsmRIVx6AVNH+xnFIu5pD+Qp WixrYsR/sV2wOfNP/fIq1F+xhZ8BpDjt8orRcqC6Jz+xfZ55kJPb5RVlf2NLMdJDn/dFTZDb0LFr +xnZNwXIP+6Ks/8ADHNgHwHP/fIpNISLSfseacrZ3Hb/ALooP7HWmbiQTn/cFCsDHy/si6e6qpPT /ZFOX9j3TUk3df8AgIqtAWhZX9j/AEtpDx/46Kmb9j7SsgHj/gIp3SC5c/4ZI0kAKEHHfaKnH7I+ jI4Yrn/gApNpjT0Jl/ZN0h3JCBf+ACpf+GTtGD/cDY/2RTuZt2ZpRfsp6JGMi3XP+7Uy/sraKvPk L/3xRdFvYkH7L2iYwYFH/ART0/Ze0JBzbqR/u0tESmTx/suaE2P9GUD/AHatN+zJokQ/491x/u0p ML3Jh+zTomABbJ/3zUkf7NmhQZVrZDn/AGaFoJEi/s36Eox9mT/vmrcf7OuheUQ1pGffbVXRaaIo /wBm/REORbp/3zVwfs86Ft2m1TPrtoIb1LEH7PuhQgkWcef92lHwB0EOD9jj/wC+amxV0y2P2fNB dCTZx/8AfNL/AMKG0FYx/oUX/fNMTaHj4B6CVz9iiB/3aevwJ0GJwRYx/wDfNK7HdWGzfA/QWcsL GMf8BpY/ghoTKv8AoUYA/wBmqRF9S0nwT0LeWFlFz/s1MvwS0QkZsY8D/ZpPUpu5Mfgpoj9bGPj/ AGaaPgxoiPj7FHn6U0InHwZ0bP8Ax5R/9808/B/RUXBsowPpTvqCCP4S6MowLKM/hVtPhVoxTiyj B+lDQkxB8JdGHC2Uf5VI3wj0fODZx/lTUi7gnwu0dAF+xR/lVw/DPSY15s4zn2pMm4n/AArHSQo2 2sefpU4+Helr960T8qXQdySH4eaUvS2QZ9qlXwFpKuVW1QEe1QO+hVt/Aun+e4+zJ19K118DacBj 7OmfpV30JQjeCdPRsiBfypn/AAh2nk7fsy59cURsDeo6PwXp8Z/491z9Kmh8J2TOxMK/lQxEv/CI 2UeAIV/KpG8J2kTZES/lRzAVf+EctQ+PJA/CpT4dte0S8ego3EhR4ctG5MY/Kkj8P2gbmMflRcq5 K+hWvTyhn6U8aBbAcxihOw0yGXQ7ccCMD8KcdGgK4WMD8KdwbAaRbj/lmD+FL/ZMCtjywKRLY1tM ghkI2A/hQ+nwDjYOfamgTLcGlw7DlAajOmRdNg259KXULj00+E8BBT/7KhUklBTKQDTockbP0pRp 8Kt9z9KVwZK2mRNghBn6VMltEsZG38hRdCbY5dKhKA4GB2NV2tIVfhBTTQCmwj7oKFsETOE5p3Q2 TLbIOq/pT/sMbKSEFTexIiWUTKMqM/Sk+zIzYC/jijmHYkNisC8Lk1n+Qkmfl5ppgC23bZgVKLWP +7n8KVwJWto2QEL+lM8hNx+Xj6UXGhFtEycLStaruHy8U7kknlIONv6VE8KnOV4+lK40CQRFRxg/ 
SmmFI2xjimmMasSA5Az+FOaNTzt/DFDZNyIxbcHbUzRBcHFK5SEeFZgT0qrs8vOF4oTAUJhTwMmr KxAoAVwabdwG/Z8nB6Cqk4AbhaSBkO9Qo+UZqyGBQnFNkkMbJgAj9KZJHuuflA+tFxjGjwuCvPqa YFAI4zSBjp2CORioY28wDimhEM+BJtHSmFCuMCncaJt2GwBUW0gnHBoYMYu6NuTk0pYs3SkIXdl8 dKWS2LnO6psFyq6GPj7xqFQc4NUJbk4kULUXmGOb1FJoY6UlhkDmmw55BOGqblWuQL87kEc0yRCw +UbSKdyWhqlonDN82ae+TnnFAkivhiMA1KkDsCd360iiB0cLjdSAPFFgnJpDsMLiJOT81VJAz4Ga pMmxWZjAfUU+OEXHJ4BoYIbsEJwRkVVdmDHDEZpahaw1s4C7uRUMj4Yrjn1pjTIU3E43YxVaWTdJ jqRQtRoiMnmc9KYhKPtU5BpjPmf9o6Mt4eMZP8QPWvjLTED2sRxtwuKKmsTKRZ3HkDkiqwulM/lg EN64rnS1ITLkUbiQgVGoKyHNJopjpHCrz0pGYbRtHJFJLQkh3OiFWFUJQYE45z1pxetgsN+VbbIH IqssiyopxzVMdhlwu2YHFUJ1y+ccVN7MTKJYq5XHyHvVeV/KwFoYJXM65R2GM7TVb7NsQj+Oq2GZ /mEPkD5R2rSeZZY1AXafavFOl3Hx5iTKjpV6B96qwHX1prYexrAYxxmrUSC4BzxUt3KRZhjEYOTm rG1ZEyTwO1ZS01L8y22FVFTjNSxxGKULiiDuPcnB/wBJIHTNWSpfgcH1omNKw5VKLxzUhUtgk8el SmS9yQgFQMcU0oc47CtVZoRCTtbaORXP6qwksmGcHeMVrRfvEo/SH9nGNf8AhA7YBRuCDmvfGysh QjJ9a9lbEPcuwR+Qvzc1Oq7uccGi5LJApiyMZpjYuhtUbSKSeoPYgCsrYJyKswI0b89PQ1qZrU0/ uj3PpSxZUHdzUFoeTti6cmp0+VAWFStx7Ff5nmyDhakkYNwB0qyQEh5GOaihZ1ckmnYLl5ZQ8RCn DZqJtyMMnNJIbJg3mAr0x6VBHuHRuKEDHtE0jHB4pELRjDHNAdS8rbVxt5qsZGGQ3ShagywG+UYF R7WL8HNU9BItJGWBY8VXMe1vl4NCYmiw29V+9+tNWVnQhjik0O+gzHryKauc47VRKJtzbgF4qWSO TG4tx6UnuNEfmENgHkUzJZjk02gDaVNS7mxg8ikFwG4cZ4qUr8wwTimSSlCOp4NMWLa4zQUhHJLY zxT1ixHyaL6DtrYUxlPmByKlTcRkj5akGrDJIgc8cVHHHuG0DBqlsJk6x7MH0qZ3ZvpSYrjAzjoc ih4TIQc4amh3JlDouSd1Mw8qYYfSkIYE8rI6U+GNhz1qgRM2GXAGKiUEygE9qQ0PCg57HPpUjt5S gEbj9KV7j21ImQLg45qyEyelUJCNCOvQ01ocJuxU2GEKqOvWpREXOQaQ0OSDYxJ5qRYSVzjmnsD1 GmMk4Iyak8lVXPek2CQhQHGBzTJEymc0hspzNtQHHPtU0cYiTJ71WxBB5LI+eq0sUon3YXbg+lAJ D44i5pFgLEndQC3FaPaOetQldynigbY2ND5Qyac33cYqkSxnlFxgfnTRblGxnNO40SxI2SCMCrG0 YxUt6hYPI+XIprLtA9aGxp2LMS5bpQ0WXJIqdh3uKIsx9eDTLaLbkEcU2D2IZAVPHSm+Wzc1S2J2 LS88mpY0OSetIL3BlypDDmiPKptxUlIYy7Tinxrt4HSmhCbs7gTUMUYClgOvtQIhbIfjpTwuB0pj QqLkCpfKy3pSvqNDo0xnNI6hOetD3BkIGamFsw5zxQIhkiCnkVX2AvgCqQ7jhGVbpipREJGz2oZN 
tRzR72wO1QGLJwaRSIZIijgA8U/hSBQJuwjfI9NYtICARTsJFY7kO0/nSvEduMUDK08e1RinwDCH POab2JHhAvUUu3ywWzmpKRCzeeAGFMihCkgHIFOwMJY95JqrjtQhWJEgQtlqJMK5A6UwICC7cDBF K4wAM5PvQSMKbSAetKISzYBoGQtF1GfxpgiaLqc0gHKpXJxTkh4Oe9IZUFqWk4PSmYy/rTWpRLI2 wYUdKhRc5ZutK2hN7MgizHIWHIq1HOHJyMU7DGIFUHIye1QugZwD0oRLJDbbHyp4qPYVOQaGUhks Pm4A4oS22qctmoaLK8lsrAlhyKhjQDluoqkSVooWklZvQ0lwrIuegNNIVyGO1ZeN24GoWjIY4HAq eo1sQIvJz3qZYgq880WM3oUXTKFh1+lUlRQ2QOT1qkWmOZACeKotHsmBX1pBufNf7Skf/EhMhPyg gfrXxdpcfnWcXOBinP4SZ6I12ZVPTg0v2ZZ/lACsO9cqlqZbEMKsgJz0qGDcpJI4NNjRHKpZ8Y4H SmxMd/NUloOw5lbcec5qqw+Qgc81n1DqRuuFwKzcDbhRyDVsexJdk4XHpWbJlgQDwKhC3KjZ8vAq ttB+Y8CjqOOhWmxIc4ziqUjfxEc0XuJ7mWikuaurEe9eWkdaJ42KSBSOPStOM/vOmFpS0QWLykD7 vWrMSM/J6Vmhx2NBAEbAFWkiWPPGTS3Rb0JI8OvzDGOlW48ygZOD2qY+6C0Q97domBzknrVt/wB2 BilN3EmS26hVJJ4NObAGAMmpQdR0QD8d6iJKE5PPpVKQ1oVlzESzDrXMa3MIkUAEncO1b0vjRK3P 0x/Z5hA8EWrDugNfQ8W1n4HPeva+yiJbk3lFnJP3RUn+sYEHA9KEiWPVCuTnJo3hjtK4Prihbib0 HxQ7ZORmrfkeYNxFVNkRQ4W+3NLtWJck81K1L2Jo5Fb5sfhimb9+eMVVhXHqvSneRg5zQMcUyPlH zVCq7Sc9apMRGse6QgcGpzAQAN1U0JkltEUJLdKRo/l4qRImDYTI605Y92MipZRI6skmMcUxflHP IqooT1HKAvPap0w43DiiQIczgrUWBJyOtJDI2Xn1xUnlk9TxVk2BgsYz/SlDrgH+lOwiTGGyKkKG RRhsGpGRiDe+Oh9aCnO09RTbAdt42mpI08puaABY94JFSrHuTFAEMUR3nmpWXDUmCERMHkcVN97g jimPYnAQIR3+lQk+WuM8VNim9BoTdyTU4jG7NUQKi/LgVIVyBigGhvTpSMnU0ALGGC5HIpyFpDzw BSAlaLcQc8Uhx90GncEhViyM07yV3AsKGIs+WAnI5qYRhYtxGahlrYpELIxHQ9qZEhiU561V7IXU cAGcbqsNGQ2Oq0hkcKjJBFW0UJnikwRCzsx4FL5hBwTTKlohd22TpmlZy2OMCjlJvYThTgUMoAPP NId7kKqOpGaRgGb6VQgU+W/TIp0qBUBAwTQhbD7dSHOKqNGUcgUr6hYRlyRUyhRF0zTYiHPQYzQY 9uR3oAsJCVi6VDjbzii5ViXdv6DApyJuycUWEx6xle+RUzgbQAKLBYi8o9RRywK5/GhjsSxgLDtJ yfpUe7DHHSlYe6GunqOKkjQbSSKd9CbXEUhuAMYp/wDq8HsaQco113ygZ4xTgvBp7BsRRRhZ+TkG p22lyBxRce4xYgck8mm7NoKikFiNIwV6c0wDIINMWwqR5OAamMRRgCaAJSmFzVXauDxS3ECoBjHP 4U6RieF4osWthhXjBOag27XB9KLkvceQWyabyF9qdxbEav8AOQemKiVgQRVWGmOUbWAPNMEBDFic ikhMgYbXJOTQY8gkcVRUdhRCXTAPNQJE+cMc0kS9SUoojPOTSLERgnpQUtiGUneFxxSOuGx2FAbE 
B7nFTQDGSeuKaZDKwUqSScipQihATwaNh3AAhiMZpjQkAgnmgFqRxjaSTTHALk0WEES5BBFSnCjG KTYyMxqsY9ahf5gDmkD2Inm2jb+tNSTdDwelMBsfymk+4xxUodyTyiAD61HInl8dqpCepWlj2Dp1 qNYjuoEWsrnpzUEmFYDFLqMi8zZNjnHfilLhXIHSnYewzz/NO0DGPapWUBhikwRBcDawzVKRWk4H GKIoT3CFgCQ3X2pJsSEKR0quoWI2I6jjFVml2nBWlbUBr7WyGXFVUXbkDoaQbjhGM4zgVTuVVXG0 ZHrQhWKbRmQ8HimpEEc560+paPmz9o+Dz/CTR4x868/jXxHp8Wy0ijB+YDGaKj92wpq5eZTE4VuS O9JPFhgQ1cqWpiP/AOWXNOR9+1DTaDqRXMe1iAazSu7C5xVRZTLgZY1yeuKzLf5HYk5z2qbdRbEk iAqcGqBVsgrwO9NDZC8jmXgfLVNs7yuOTSFAgkT5wvSobmIcqf0qbD6lFY9sZA6CqUg8xQPSptYZ RUdSo61ciDMgboBXm3NkSoRM4wMNV/ztsgQrz9KlvQq5o9U6VPA7nAX7vfNQtgWhoj5sAcYq4kRE ee571m2WtWTIRGgUjJ9akhUjkjik7sp6FxWyfanDgY6ioJ2JJUBiUdDRhmIAGKcRiMhifjrUojOw 85NaKI29CvIcoBnmud1RtzRoFwxI5rSk/fRB+l37P7/8UTbIByqgV75ASJzxXvRWhlJ6mk0eFz2p 5mWMA7P0poVizAwfnGKe3zMeOBUdQEHzLzxTo3w+2m1cSY92Zu+MVUdSzHIoSsUySPrirIUqDVNk JWFUZHHFSq2xvUUJDbsEU+0lsfpU7leuOT7UhlfZnNJFK0pwV6Vb2FHctxgtlTUiwYBAqLlNWK7/ ALsjjip1yDwOKrckkkj8+MYOCKaE3CgEK4HAIpVGTgcCmw9BzrtbHaojIE470RQmSxcAmiMbc0xo exDL0qMqpAyOKEySTOSCBxQyl+hxQAu1j04p/l4780mFw24FO5NIpDyu1RipUYJ9aZNiBV+fJ65o dMTD0oHsWN2Plx0oBzmmA1x5eCB19qeil1O4YNAEnlcjnFO27TgHigCNpNpwODUqMVGCeafQGPJ2 8g0xnO09xQkSS20gNvsH51Jt2wle9LYtBtxGFPNM+WIY28/SpGtyVTuYdhV2SJX46/hQS0VCrJjn ippnbAC0WHfQiWJVfJ61Kqbzimw3B4VVsjr9KYoZOTQg2HLxg1YHKkjtSeoR3CJcpTZFG3B60i2M VCvvSScjiquZvYlhjDZJ4pJY9oJpXKUdLlZRnGKUL1wOabEtywAAoOM1BI/mybccUhkgXZgLTdu6 TmgCORfKJGKYvzgYGKZI4pjBA6UsbhsHH5ijoLdkk0+48cUQ4MeT1qUXcamNxBOKmyo4Bp3EhEhY c54qVR2NUnoNlmMiJCMZqsbctkgYpXGRrHsJHWp1gG0tRcEQtuX7w+XtT1Hr0pEPcbtAJxUgXbzj NNFXGqf3g7CrDbS2BQxdStPFtcHFR4D4OKNwZMIsjOKNhxRYCsFyM08cDmjUBm8jHGMGns4d80hd QV80hXHFA2OGExTJQXfA4FAETfIQveo3U7himCJ1iYL8tV2yCVFCQpaldVOCWNJAoVzmrEkSk5fp j04qXbtXk1GxSVxqcEnHFQFN0oYH5R2oQbEgO1yKZJwcUwRVZPmwBUpJK7fSgpESqeMU4wM5JJ4p ksh8rY2O1NdNmQDzRYRGQGAXpUskaArk5xQFiKQbXOKcFCryeaGNKxWX5t31qvzmqvoDRMGAzULS eb0+XFSkD0EQ55PNMeDch7e9HUl7EDR4UAnOKhOI+FHH0pgi6pCR9PxqBsMuf6UhjlmBUKKQwlie 
9GwmV5d2wcdKkB8zGOKBiyxhDjPNVmwhoQdRhwAcjNEOHfbjBpvQY/Yq5C9e9R+Q7sHH3ajcEPaL zM54rNlj+fAqo6ITRLFbfOPeo7qPY5GOKFuCZU2ZjGKQRFk6cj1qmBRmQg5NQIxAIP3qnoIe371d oHPc1A0O0BRU7AiLyPKPFVHjJYk0y0fPH7RmIvBBcjILL0r4U0qD7PbRs3ORkU6i90lmjMd+TnGa pNJ5ceAd1c0Vdmew6FiwGaSTdHMCpq7CuNnlKDLciohtkUEinbQm7uOuFVmCqOBxUTweWvXk1HSx o11KaxOr9dwqR5CcrjB+lJuwWuilKrBcdqqHA5z81D2IWjKkxyhDHnsarqphABO4+9JMu5n3MrRH OPl9KqZ81+OBTsCGRwgORmnCAoc78j0ry3HQ3TLSjJBHBrVjUSJk9R1rCSKJ4VCqM81oQRsCCRwa FsNltkGPSpg5WHArFrUqDNCNPMiB6VPBIAmMVbWhT1ZLbRhgRTI1w5z0HtUxjdjZbVQ0YIHNTthe nSnblYS2Kync+CKe64OBxVozKciBDt6+9ZWoxB4w+PuMBWlJe+i7aH6WfAC22+A7RyPvIDXvaRhW Jr21ojCSsyaBSHJJ+nFWBJvyNuMe1AFmKMKOamLeXxjih6C6DVG4e9TpHuHAwaZNhrjaCKYAGXg0 DLAjVUGRk0zym3E7qQIekZGQKXbsbBFWthPcsCIYJzSZBGMZI70iloQHdjOOKesu5QQMfWq3J2Yg kLGrEZxkmk1YHqTR7VGCKZI+3gUkhrQRIyWBBwPSnsCh4PFVYlsQgYBzTcYOAaW4R7jELbuRmp2i z0PWqWgPUVECJgHmlVt3agfQcOuMU0x+1AIcsJlUEHGKV0K49KaEyxEmeSeKWZfn+Xio3YW0IWXB GaVUy+KbGtiVVyeeKULgY70gWrJCyAcdaI08xCT2oFbUZtPVaenbjmmhWJSxiGStRsS/PQmmMesv l8MM/hSswcgikJEuFjXBGSaakIk6dqdx7kgG8EYqIDbxQmFiUIIk4p4bAHFJgXFRSAc5o27eSM1J ewPHuwRwaekoAIA5FG4hhO407ayY4qkSxPL38mrHCcAcUmxxRHvySuP0qNmA+UChAwCEoOOlTbTG OKBpAcomM8mkEYc880CbuWFRdhx1qrtwMUATI2c+gpGwyilbUq6K+7axNRgN5wwPlNUQXniCsNvN QiINKQePwpF2JdqqcA01xjBpMdrDSqkjNMkKrxQTYiU7WJ7U12BwAMVSJejIyCuRUiZaML6UPQOp YESkY71MsCx8kUrl2JDJgEYqJhgg00SyUSAEnFSq/wAnWpZSYotj14NMZdje1IZNPb7oAw6VVVdy Z7U7isM2KGHFJI4ibjmqQ0h4jDDPSpba2V8ktyKGTbUYshluCCOKrzN5UpGKSGxgmJ6cfhSKrSOS WwBVsRKV25xUGAc7utIVh+7dGARUM8oiGMZ/CpYW0LCYaEHFMbYyjHX6UCJCoGKh28ZFAEGDnJqf AxnPNNFdCLzWAwo5ojUjlvvd6dyRsqL36UiwKwyBRcbFaDOM00LtOKm9ykh8y7kUKMetRyoYwMVQ EfllY8moG5GaSIvYdG2eNuKgZfKkOOfoKqw7i7tg5HFRSOWGF4oQh0akKAeab5O7PH6UAQCML94V OYg8eccigohWNpAaicFSOOaGJuxE4w/Pf0p0UXqKYrkTrscgCoh82BimDdyV4/JwOtG5m+UdKgTG rH8+0CmPZuScDinsCEEbLAdwzVcjanC/pQncZHbRlyTjGKd5siP8oyKGA9gCD1z9KqyRNwcYFGwy TyWU7m5WoZhtOQDj6ULuLqOJXrtP5VBJGzzLIOMdqp7BcneIxHft61Cl05JUDCioW4XHpd7WO9CR 
duh/Cx/jj8aLrymEemRz+ZKc43EYI617R+1d8XoPBvh+38G6EwUImHIOMbT/AIGtl8Y4u8dT4++D PgXWdW8b6Pd2kSXEAmTzWd8Y+Yf0r9B/+CiFqLbR9IjDBlMXzAHod1DVp3G17p+T8chjVABwKs+E dOlvviNY/Zbg28+8FSCB3FPqS1dWP1w0PV/EY8e6bp2q20d9pskLb53lJI5HpxW78PPB2ieFP2pL 6XSJEh8yOZpUXA5wKi9pDT5VY09f/a10zTfi/J4Rdv3wugu7BxwR36d69X8USxxftGaU1rMJgbWU MFIOORWilfQwUOWXN3OL8deJNE+Cnj/X9WuLuNbu4SVtm4Zztx9a/nr8efEeP4g/EzVtRQbFmmLA gewotYtaqx5Rqepxw3m5nJjSVc8e9fu7YXtv8W/2GdO0fS51WYQxjAYD+9/jVctlccnpyn5r+KP2 S7f4Z+DdB1PWbsM4liLxkqR9+v2d0yCy8YfDrwjNpV4kFnAkZfy2HZ81kndtsbdqfKfBX/BV7ypd Y0i/smWaBYSrlTnkv7Vof8EpZbW88O+KTJcLBJuIUEgf8sz61rF89mTH3In5gftU6b/Yvxw8RCQ+ Y01yxVgc8YAr9f8A9gmzs9Y/Y/1vTnu0ilW3wo3Dr5bVM3y1CeVzhc/A/wAXxjw54xu7G4YvLHJj KjOarTeILezwXWVvbyzWktWXSXun0P8AsX6Xb+Kv2kdLvV+WOAFX8wbccg197f8ABW7wtZ3uu6Ve 2kkdwEgYMu4d39qwjP37DlDXmPZP2LfG+jeOf2Rf+EaNwkV3aogYMQMFQT3+tdH+0rd2WvfsZ2vh +C6TzreNPmDjqCTVx965Pw6dzmP+Cf8ArmmaH+yhq+n31zGLlEAOXHXY1es+MviTouifsfaOs11E I45YOjg/xGpi2kSo20PF/wBrDw5Zftbfs0eF9H0jUBDHuhaRlZegckjn2r8t/wBqz9n/AMMfAHw5 pFnaTi6vliAbCjOd3tXZFe4Zyl7yR+p/7FXxK0P4wfsiv4KuJ0tZY4BDLuIH8LZ6/Wvxx/ao+Dnh 74DeILHQtDC3lwYyhlCj1x1HHeuWHVM3UOXXufuN+xBpdhov7EN7pt/fxQXTWfCeYvXY1fE//BO3 4oaT8IfjprWn38yrNN5gikJ+8NgGc0qTuTB+zkfaHwf+G2lfDL9q3xj8Rp9WV471ppAjOuBuQDjv 2rwr9lL43aP8Rf29/Gesl0to4JZ0R2O3zAY1Oef6Vu9IESqOUrH218Mvi74cn+J3xGka6hR1mkGS wG7919a/m2+InjCbTf2kNW8RWHzNDqSmIg9V+Uk/pTg707k7PlPXP2nf2nNW/aE1XTEvpnFjajJU k4OG3DrX62+EfG+hftD/ALC1j4NjvVtJYoI1LZAxjce/1rLZHRU1p8qJv2XrPw7+x9+ztqmlajqa XSXbpECzKfvKV7fWuR+PP7Qmi/sr/ADw3N4TaIz6hf2q3Cwt/C0m1icZ6CujCOKi2zixF4wUUeQf 8FG9D8IfEP4caF4itxBPrkwRfMUgsNz89/pXsv8AwSu+HGh/CDwn4iutW1VGur3LASOowfLIp1Wn NWLw8nCGp8l+IfgZ4auv2wptY1W6hu9NuL7zvLdlIDArg9fav0U/ai8DaF8ZfGvh7RotcS08OW8e 6S3R02uVcEZB/GuKum6p0Up8ybZ4X8P/AB/4G/Zg/a5fT9BEMVrdJKJGiHG44UZwa/RHTfiN4N8I X/ixxdwHUtVEku4MCc7NvrWyfLG46zu4pdD8b/2eL7wh8IfjF418U6m8VxqXnSNC4wxGYxxwfUV+ XXx4+I2r/tLfH3UfEepyGXT4PMjsYS2QqMAc+owa61NKiZ1VzVE0fs5/wTd/aM0O/wD2f/Evw/1S 
QQTRL5AZ+M/uzzzx3rqf2QPh74R/ZA1bX/EkWppcTyo56rnlMdj7Vz0XcUb0m7npcv7U3hLQfg14 m8bWpjj17UJAySAfNuZCPX1Aryi6+Jnhn41fsLWc3jWSK+157dGHmkMQ/wA2O474rmm5c+h0UEuW 7PAv+CXPw/0LwF8Uo/FGqX8cDLC6xozKMBgP8K7/AP4KH+GvCHxD+PGl+JYri3miMwSYFl+YM4z3 9K15rHPNOVW59U/Gq28I+I/gP4b8K6Jq0Wm6YDF5scLpgqH5GCfQmvzo/b68DfD/AOHfhnw3pXha GB7plWN3iAztL4J4J7Gu+Ml7OxLdps/Rf4aeLfCPhf8AYDHhK4vYnkGnbAMg87WHr71/Nv4a0JNF jltIm328LhIW/wBmsFJcjRFKm41HLudXIyxuV7VSuo9sYI6ZrKKOqV+hEsXmgADbVtV3ggjgd6mT 1HHREMh8qLj5jVfpFlxg0LUZGJ13KqA4PXin3EbrKApwveq2GndDlKqxIGahaMNkn7tQxFXy+xPH anq2Mc8ii10UOnH8QGFqlChYknpV20IZcacKCuMms37QqSbSnPrisuowlXcR6U3crShcfLTAsNGo +WmiJBwRmlqJlZbPEm4cD0qvP+7myxytFhoUDDZAwKqTNvYgjOKa3J1KDLuOQMn3qEkltp6CqauL oQzhWfGOKfEVfgrj8KARBdDy3wec0xo9sIwMVLTLRCsalSzDBHSq3/HxyoxjrUxE3Yakexi3Wptj FgwHNNoSK82+Q7R+NR5WMgYxj2ppAx/lDJcimNHG0WW6UWBECSrbA5GfTiq0SiRmd+hPShaIaJDA jMOeKilJU4xxSaDqVLtVkCgdaVB5Q24z70JDZWizNK2egq6kacheBTZJRkkTeV24x7VR8xWBGM/h UhsPjKtCMjH4VWmjSQ5HA+lLqUhUdo48EZWoAmxTjvTGyiT5BwB1qGRcsVLY9KkUSvJGFwpPzetP EG1CQciktwIIF+cjpSeXnPGaGNIrCQQZBHWppZFeIKBg0rsbVkOlXEAY9PSqHlecuV+QU0ItRnEX J4FNePzkAQ/jQFyGD53aI8Ed6jeEW+Uk+YVKdipIikjjEA2jAPtSR26sgRTxTJ2KOrxqumnuQwFf 1J/8EhiyfAgRt0KqR/3yayfxDcbo/o6+AqbPCgTvgY/Kva2iEUIDctXfT2M2V93lZBGTTDIoBwuD 64qnuV0FA+Xg4qvKQFK9fela7KjseWfFiPd4FvgT/wAsW5/A1/Lj4yiW38e3iscsGP8ASuiX8Jol voY0Mi7tx5B6cUPMXk2rxXkvQzepPs28k8CnrsWPfjjp0pRdwsXrRssBjiuusjudOcVaaYrWPULC bCouK9G06UbBgYxTeg4q532mZkZSe1dsnyoT0NUnoKSdytMqyJgfe9apKzQJg/M1ax2uDKEr7fmf rUDyqsRbH6UPUcSkJXYZIwPSqrybUO78qmwM0rdfnwDgVpqzRPla8V6M6i3boJWI7/StCCEQA5HW m3dFx0RZYEYB7VetojKCV4rO1i+hY8zYBjqKtxgOAW61CiTBDpfncbeg70+SNndctxQNlhogoyOa d5h2gAcfSqUdblX0CNWCkdutSi68wABduKpiIpAY5cjkVTk3SMVIwK0TIsZ8lkZOny+tQ2trt16x UcFXH8xXRhv4hN7H7DeC5P8AimrXjnYK6hZd74Xt7V6skZt3LnlGRRuNPiRQx45qREpVd2AOadHA Gfg9qqL0FufHHxhxJ4s+gNfid+2DMsvjxMjkEgce4rKpqHU+ctyBen6VVntXbbIg5Vg35VHQu2h9 n+Gf2pfI+HkGj3dtmKFApJBr5s8ZalYeMLktbwhc9W24qFsZWszuvgd8Wb74WW99puDPbvkIpzgD 
bjtWf4d+K0nhX4kvq1jGYmbdvABGc4qbNspo+gtR/aoLJPJHZqk0ykM/OeRivIvh/wDF7VPh/rV3 dRys8VyxZkz7YqkrMiR6pY/tSz6fLcSJbCCaUH51znkYrzHw58aLzw34nfVJYftM0sgcFiePyqdU 7gkeofFf9pm7+I9okU9sAQhwQSawPAX7RF/4e8JjT7iI3EUYARXzwBTk2CjYqeOf2mNS8QeEUsbR WtCjqfkzyAaPDf7T1/Y6VaxT2olkiUAMS3FLct3PM/iB8WtZ8WeNLXVwzM0XCqxxgZBr26z/AGrt Qt0Vntx54QqHycinLYiO589p8VdbsviLJr3nuZnYnOexxn+Ve1X/AO1TquoJKI1aGSRSGdSeSRil q1YfLrc8V8C/FPWPh/r93fJIzPMxLAHrxiu58S/tL6rr2kz2aA2/m8NgnntURXKyvI8r8EfFLUvh sZI7X78qnc4PtivL/ERk8V67Lf3p3XD5JbrW8HrcXLY7f4bfFLUfh1KYbQfIDlWzjGKf8V/izq3x XuI/ttw7JHxgn3zVTlcvoeQMFEm0fdArGhuG0rWI7uD5ZozlTSRDPev+GuvENmkMIBZlTbu3GvKd D+NHiDSPiXca8Ll0aVXDYb+8MVnPcmzZ5r4v8V3t94wbxFExbUGnEm88HqM/yr76/Y6/aNhuPi/c al4quyzBXEQY52ggcfnVRfQJJ2PB/wDgoD4vsPHnxYfUNIvWMbB8qoGDkiuV/Zu+GHhXVfDcsmtT RrOw43444+tay6BHQ9b1f4JfDtPD12C9u8pBK/d64+tfH/h34ua58J1udL0a4ZdOSQeXhsYArZtc pK1kYnxT+KniT4u6LDaXl45t0x8pf0Oa1fCvx78VeAPCNvoVndPFbRqApV+w/CuZxLaOZ8YfFTX/ AIg20VtqUzTQRjHzNnNYXwp+IOu/CjVNSfTbx4oLh92wNjHGKKa5VYp2aMTxVqE/jDULrUNRlaS6 mJPPOM1qfCr4k+JPhvpd9Y2WoyR283RQ+OMYqpK7uQnZWL3wOh0LxD8UZpfFkYlY7vmdc54Hfivv aXwp8J/7QcNBbFT22D/GtFtqVH3T4Z+LWtW3w3+KX2jwKVtFO4uYjt7j69q4Xxj8V9b+JEkf9p3k lwEGCHOax5Nbj5r6GV4Q8Yar8O57gaPMYFuMmQhsc4xV7U/if4l1TQZNPmvGaFmB5f0q17rCSuZH hzxbrnhXw7e2VreMouOSobGOMVjah4u8Q6t8MoPD1/dvNDGUxls9D/8AXqWtQtbU6zw78VPE/gzw 1a6dpt41tDEAF2SY6Vx/i7WdT8c6kt5rc7XU4BCljnrXRGTUbGMqd3cueBte1X4eNO+l3LQCVtz4 OO2Kz/EOoXfinUnv7yZpbljnJrO3Y2WqOp0T4j+JdP0R7BdRlWELtVQw6YrzTT4b7TfEEeqLcu1/ GCoYkc5qYx5dhSjc9aufi54q1K3MM987QuMMpftXEaDJdeFPEB1PTrhoJtpVtvG/Pc05N2sS6aWp oad4j1mz1XUbtrx0a8YmTa3qMVjWlnHDncd7t1cinFNKw+TW5FLbLdo0LL8h9q6DQ/Emq+DdNa00 y6+zRnGCHxgUbqw2in4m8T634v0dLPUNRaWFXVgC4PIPFQ+ILW98RaLZ2tzcs0Fvt8tCQcYOaFeK siJQUtzW1XUrq+tbSN7uSVIVwFbtzWnZeMtb0mFxDqDqrdMsBRrcn2atY5s3GqXOoC8utTlnfduC MRgVt3vxE10agsr6gdoHygSDgUoxvK7KjHlWhx7eddeI21fz2e9Zt270rsm8Ya0+ri9a6dpghT73 QHrRNX0K5bu5yUSmG8u5pJ2d7hsvu9cYpNJtLfScrEygt1APWtN42C2ty74esX8K6peXdnM0bXDF 
mUcc4xVyXWNYnt5UlvZDC/BAOetSk47CmlIzpNMnn8MJpb3LfZNysM44x04pzO9vpC6a120sKj5A cfLjpQopu5otEUdFudU0NG2ahIR0UMQAo9Kn1e1u9fsR9tvmYKQy8g9KOW7JslqTR6peyW0KG+Ii i+4Nw5qrq96+u3sV1PLm5jQopz2NVZkOKbuadhqWoJprWkl27RY2jOOlY+m6Z/ZkLR7yw7ZqVoUk iwkGUx1NUpImR8HlaZUVoS/dOKcJdqhCMZo5bsTdiKSL7NNx82aNp6uMg09gtciMYhYDb1qGfLSg ZwKW4WsQywFBgHFOMGEBGSPpUtghj225wD+FRtGofGPnHtVLVBLQdtZsEihgOVIzQ9NAIwu1SAuS KgEgbAK80rAAtlLknr6VEtv85x92kURuoiwv3j9KnWYI4Up+lIQ2Vt7YXOPaoZrHdGOelPYGRLhU IIzUHkgHjpUrcCCSMLJuUfpVe+QRKCBnPtWkWQ0U4IDKdxFTXcYiUALyfalfUEiqLUu65NT3KgAA jAFDLiY8qtcPtAwPWqpUwgx5yaVrEtDYT5eVHLVcts7iCaQhrxlyT0FULjbLgdMUthsaFxyORUUc Y8/ae9K4kiKePEhQDPvTGG3C45HWkiiOQjdtA4+lQzNI0e3JwK06C6kaDauSOasxXKqvK8fSkhkE u0p8owPaq8q7kUR5z34qSSF4C7bvTqKrywfOGUYoBiXEilRtHNV8bUCkUW1GmDPwFUZIpk7lMHHS kMot+9uVf+H0qvOwe4K7cfhQK9itcQbAOM0RZUADr3zU2YJkiqr5bH6VnSthvlyPwoSKYeUWII6d xU00KxqCDzQwRCwLqCenpRPGAoANIq2gyOzaSM5O0VFFbSGfajYUUEXLVzB5QHl8tWTKGZcty1Lq X0JWdZUCEYb6VUjgME4GTtpkJEGsndp7gD5QwPSv6jP+CQkrT/AwSZKgKoA/4CaxfxGjeh/R7+z7 J5vhYseox1+le4y/vG3N92u+m9DJkJwMHrQzLIwwMYrWwIgkUfNx09qqMVaHAzn6UomnQ4D4ooH8 C3o29IW7exr+WT4lReV4+vJSfvSEbfyrf/l2zLqYRURoBjJHFCDyxvPOa8xwujNysx5m3p0qWORR H049KwtylblqOUKyuBgV2OkTRyXAYjPtilC97lM9V0ogIAelei6YoOCBWvNcqOh3mn9tvFdpakvF zyfetIol6shP7ojNRTDLbu9VsrCZUmjUgA1UnuEVdgXn6UxopE7n29wKypT5mAw5FNCaNlossCOB mtZGUEDH6V400dFy/bR+Wxepz+9k3MalI06FiNWdySa14B5UYFDWg09CYKJMg8VYhXfHjGcVGyKW hKqnbgdKkSPd3qCZMeZPLAQipY23sQBitEJEsLZkwOtSXCDfgDH0p21BuxGYvLzzVKdikgxRezFc qzI0rbhxis21mM/iWwUfeMi/zFduG+Imx+xHhaIxaBbpjkLjNdArYYA8e9eqZstI2x8g5FTx/vZi wOKUkJak3XJHWn2n3iepxS6BsfHPxn2L4iLZ/eHPGK/D79rqXZ8RoweSc/zFYMGfPysiksBmnx3Z Tc7HaorFtmnQYtzcahEStoXhPQ7TzV+0ja1hGI/LHoRinHUjY1be7FsMk7c96iTaJsjkGtGrK5N9 bEp2uQGQHFWILxnuDFAnmuOwGcUlqEkSS298qFpLFsDksVNUrC+n1pWWzgaUocNhTxUy0CJcvGut AgSa9tzErYALA1uraX91YLPFZM8bdDtNFrrQd0mctqV3eaRJGlxZNFvOF+U1qyaHq0e1o7JmVhnh TQkrBIrNBqUd2sD2hEh6DBrTi8MazPfCJ7B92CeUPT8qdrolbmDfFtN1GWC4QxFAc5HpVG1ju9T0 
57q0hLxL3wapQ0KT1I9B0/VPEJIgty2BklQTWXeNNFeyQTp5csZwwIqbWH1M5btWlKkkHHpWfHJt jkfduA60kN2JLTQ9W1rT45dPtjIshGCuaxte0i/8I6pHa6lCys4yCwNXy21Etrmfc25W4BB/Csmd t8ztjhaNzO9xPDnhnVvHPnyWFs0kcXdQfTNcqLme21C6srpDFcwttII9qXIWmhI4x9m+bk+tZKQR W8zyRfupD1IHWoSsyrXM+eBL5izrvk/vGs6eyKKNshj/AN0Vo9TOS6GK1tLHcAtcMyeh70++faow uF9q1SuiLWMi71L7KyKPvNworr/EXwz8S+HfBcev3Fq7WL4IOCeD+HtQo62NXsea6dqv2+0jlQ4D Lurpvhz4H1j4iz6nPZWrTR2xIG0E8bc5okrIhbanMLLNFPNaXUey4ibBBFVo2YE5AwKS1RPmitME MwYxgtjg1kQ6dIJpJZXPrQ0aLU9C+FvwZ134salN/Y8Rl2gknp2zXDarpFx4L8Y6jomoqIryCQgg n0FOCuJ2uOA8xAyncMVWLb5CSOaEi0NlmCyAswQ+5xV2NRKh24I9QaTQ2ZrkeZtLBcepxStKgByQ fTmmIglkIIywX8aZI/mEBfu1SQrFpz5UYIIDHpk1Xji2qZGIz70luWaUUyPbgl1/OmyTosRXcoz6 moktRblZZAuAzqw+tPaEBDzge/Fa9AuZeo6muj6ZLL/rNo6DmvfvgV+zJ4g+Ofgka/JGbWxcAxtK dvBHv9KUY3YrpHa+N/2LdZ8F+BrnW4n+2QRYY+WwbAxnt9K+UvDuvR6jpUcrlQzYypP3aco22Juj S1OeOzjDEqT0UA9T2r3n4CfspeK/j5a6nfLa/Zra3yYuccbc9xUapg2krnzV4t8Ka7pnj0eFbNHu NWF4kTIoJwuQG5A96/RuH/gnJfve2cVzcql3Jbs/kl1HIroULq5lOdkrHwX418E6j8JvilqHh/UQ FCOwiwc8AD/Gsi81VNLwrHLucKo5JJqJQsaxd0eifs/fAjxR8cfiVcad9hkjsk3ZZ1IAwAe4r7qj /wCCc8SapPBb3wn1GNWPk5XggVhTm5VOUUnbQ/NHxBpmoeB/jFrfhrWAbe4tpmEKNwGVQMnn61t/ ao/s5cMrADJwa65xtoZu6N74XfCbxN8e/Ftpp2hxKIHOS+/GRke1elftQ/sz3/7N/iuwtrqX97JE zMoIOcHGaSjZXKlJ2R8x6faan8UPGemeG9Et2nu5pV8woCdoDDOfwNfY/wC01+xrrvwIvfD0MzF/ tzJGYsjgs4Xt9ayg73Yqj5Uj6Z1X/gmQvhnwdp+pavqH2RpQvGUxycd6+NP2pPgXpnwcttPk03UP tUjMqsF2nq2O1OnLmvcy5r7Hi7Q/Z0yRk1mXMxdwSOKT3OmKJYovJXd6+tPa28tsv36Ckh2sUhbf vj60ktqWO5jjHSq2JcQ3ljyMe9SvH5aKeopWNI9ipcElxxkVDJF5wLdxTSM5PUqzxN5aE85rShjK w5ZcqKjdjRTly5BHy4pybZznGCOpqoqwMru5ViuMipI4sHbjP1oaK5dCjcqySD0zTriHIBxxQ9iU PFqZiCvApjWzKSmRiptYpkUUBgc4Td9aS5XOD/FUslkSghwQPrUUkrhyvartoK5VkOw7eoPen2+E Q7hn04pNC2HyRoLcEfeNZ1x+5GGGc1Keo9xELx4IHBqWZQQPSiTsxIrtCGxk1n3bLNJ5eMEe1EdS thGgNpBhuWNZLW+WyeDVsRXW3ZHZu2as+T0Cmo2Y7EE24DHoaRlRlUkU7XEO3qw2qMVmt9/BHTvi pcRotH7o4GKoAM0hLDiktNBPQSQ7JRtA21aW1Bbcx+X0qmguVp4BE2QMqe1VbiINgAYFSmMZGqxt 
tIzU5QKvAANDArBQr4PU9ai+zfMzZ+X0oYrGaI1BY0siLJGCPvUkKxmCRkuAoHTrWi0scsmMfmKB pWM+SJVm54X2puA+Wx06cUDKklu24OTx6VIIRIcgY4pk2sylkRHBGfwqRgrEDGDS2K3Kv2Qo+d3H tUMg859uNpHrSa6lLQWPbFlTy1PkiTcu0Zb6UrDbsiSWTy02nv6UtnD5afMevSkyEupDdL5cox09 qhmjDMNtFikynHD5Ls7HNSRxiSkPyKGrKq6dJnglhX9Pn/BH9SvwQ8pxk7Vx/wB8msHrIUlof0ef ABHXQXQrhcjn8K962iP5W5xXoQ0JKdwfmOBge1RRR7ULfpW/QErkivuTpioZFDKBjGKgZ5/8SUz4 Mvu2YW/ka/le+LHPj68h6Osp5H4Vt9hiOdUbcfNnNPaLy8ZPBrhuYNXY3iMnAzTGkA5AwaxaLiaE B8wDjn3ruNKWK3ZBjLHrxUFHpOmxHzFbPy9hXpulfLyTzQtBo7zTTyGrtYpBjAHWuiOxD3KRzkgm o/M3gj0quo5EDuFjAYZPrWdKVLgD86FqStyk85W5KqOneqckmGIHeqSsVJm2q7guetacR8twzDIr yWro6UiwymRtynC+lasGPIA71lsxoulN0a461ZRCFAbqKls0tZEy7lXgZNXoxsTHrWcnqDEigYEk ManVSuBjk07aCtcfLEN4U9akisTK3XCilACWWHYcKePWkWMoB824+9aPQl6iyssZG7nPtVYqGkJA 4FTbUkhWB7kEBtqisnS4fN8VWgxzHIv8xXdhX7xT0R+xfhtvM8PW7EclRW+EDkDGa9Z7mTZO0OzG D+FLKd2No2+uBQTsNVGHGa0rf5eOhxR0Ez4n+M6mXxYT3UmvxM/a+kil+JCtjBAb+YrmkNO5822L go4Azz6U++nTdbwsMq0ig8e9Z21NFrofqVqXhXw98Of2eNN1uSzSSV4kGQmTkkjtX5q+NPFkPiWc 3FkvlKP4SMZppWZm9HY9/wDgr8D5vH3g+TVdXH2e2Cboye4xnv8ASvDtaRLXxU+m2A84AkRgDqK0 lqiftF+58N6vYCXzLFyqocnafT6V9jfsf/CWw1TwFqmvalEJJIzuCsvT5Sf6VnawOVkeofBzW9C+ M3i3UNC+xhCiOATGR0X3+ta6/DPwx8BvD+tXUnlSzicbVOMjj0FKo9AS5dSXxT8NNH+MfwL0vXFh EZLxPyuM/Nnv9K3Piv8A2T8IfgZpWoQaf85VAdkZPVjSg7LUlq58W2fxH0/4ueO9DsXttiOQxUoR nDD1r67/AGpPEWn/AAWfTbaysOJF52xn+9iiKaKburHyN4a+MGk6h8S4p9Ti8uKPPylT7V9dfDr4 paT8UfjX9h0603WCRvvkKEDt36Vb0RNNHyt+1x4Mhvvi42naEqtK7kSCP0yM/pX3p8JP2ddD8J/s 7TfbIo5b/wCzEuWAyDtNVF+7cHKzsfK/7Fdpb33xC1nSp7dZYjv8okeiV8v/ALUPw41fw58UdYuI 7QrZiZtpUHAHFDs0VflY79mXwloXxIupbLUSouHQ+UzAZ6e/vXjfxx+FGp/CDxpd2RG6wlYtE+ew 4p8tkNu7PQP2c/i5/YHi/QtFeDfFIAGOD/eA/rX0x/wUd0XT9O1HS7iytwkhTJIXH8VLdik+h+Z8 d15siSuSQyniuSvNSmnvHsrSIvO7bRgHvxTW5NrH6vf8E5fh3q3hqfV7TXbQNHIGaNnyTgJXwj+0 t4aXUf2i9Y0nSIm+0tcN9xe3Ga0ktNBpa3MmX9mrxRYWMkj/ADqik7Q2TwPTFfL9nc3EOqXlneIY p4pNoBHtWcoWNb2NDUZBa225fvAV0Vh8KPEXiTwLJrltG32YkbCPf8PanTjcyk7WK3iD4J+JdG+G 
1jrVwjIk+35vTJx6VqaP+zd4n1nwvDdRO00bFRv3DHP4VotBz0aOH+MHwT1/4P22mapeQvc27SoM gbgMt6gV+72k+Bpfi3+wrYQrBGsslqu1i3f5qrzBs/ni+JHwf174EXtna6gJJIpYzhhyBzjqK/YX /gldaWknw78Wm5tw7kna5H/TI1L1YP4T8o/jpaO3xp1q2tW2vLfhQc4wDgV7Dqn7HXiC28IPqltc tcKqb2WNg3Tnt9Klq0hQWljwj4P/AAo1v4ua/LZwZimiU742OCCBnoa9mT9jnxZc2uqbpWD2uf3Y cZYBc9MVXkU1qfS//BJvxBeWPxO13Rr23KNbiRSrggjCelebftC/sq6r8bP2mPGGraWuyG0kl3kE DnaD/SrtZGb92ep+eNlZXPhvVZNLupN1xEcNznpVvWtQXTbWWTb8y9AO5rKTsbW1Por4P/slax8Y fhzB4gvmNjaTbWjZyFxntz9Kwf2jP2cNf+A/hCy1KyP2q0kZE3RsCW3NgcCqirojm96x6T8OP2Lt U8efD/TfEOoSNZpdKpRJMKcscDg815z8Xf2UNb+HHjnTNPM2La6PDhxx8wAqdiup7Pqn7ButWeo2 SySgxSrwxdeecV8x/Eb4A+KfCPxvtPCdvbmVp2JXyzuwoIB6D3rVR925m5WlY+ztQ/YCu5LiK0N2 BqQiZjbh1yMfrXxt4M/Z+8T+Jv2h7vwTJDMohL7mZCBhQD1xjvUSVloXCXM7H2hd/wDBPt4INTjs rtLnULZWLW+9flIXPbmvl34C/so618TfEWsWF2MXtsxVYHOMfLnvRGN9xydj3Rv+Cf2tQfD/AFe6 Rg2o2fLxhhwQpNfnjoVhrPi7xFpXh1baVdVkmRJSqE/xAHnHvTauzLmajc+o/wBp/wDZov8A9mbX tFTUI5ZrS7dd+yPcBlwvav2n8faHeaV/wTx0ZfCMBtrpoIvLaEEHq31q0+V2CXwc5D/wT18Ha7rn 7MviSz8chri/mTbGJucZjI9u9fjb+0x+xFqX7PvhdfECu89vqWowRxpGA3l722AcVakkncmnesuZ HY/GX9izUfhb8JNF8UXk3yHy32Mwz97069q/VX/glV8ZW8d+DPFNpcWixR2wZIcZO4eUTWKtKRpU i+Wx8g/s/eHYNR/4KK6y9xaLK73EhTcPu8LX1l+3L4A+I+i/tKWGs+Frll06G1mElus2M5YY4wT0 FaOVp2Mto8rPw7+OPiPVvGXxR1K812E29/buwbfkbuhPWu//AGH/ANma8/ah+M41vUEMXhrTdxYk cSEAMDz9D0NOq9Lm1Fpbn9JH7Nk/hV9f8ZW+hwxK9iZIgUH8Xl5Ffnx+zl4R+Ilp+2lq2savNI3h +eSQwRF8gAhQOMfWuahGycgqRtJz6HU/tnfsD2Pxn+K+u+NY5/s9xZWtx8ihfm+Xd357V+bH7KH7 G83xl+E/ijVr3dZQ2ZxCzADcPLJ7110057nNKspamj+wf8UNQ+FP7Q2j+FLSEXEYBRpWJGACB247 19i/8Fq7NYvH+hSQHddSWrqoH94vgVlKfvOJ0R960TW/4JlfsqQfs9eFLT4keMIxJqOpSx+SSNxT f8uO3cV9F/8ABUfXI9c+JXgEw/u43uInZjxgCcVlh4vW4sSvf5F0Ol/4KU+A/EnxX+Fnh3S/Buqf ZnDRtIyTKvAk5HftX88Hx/8AAniP4S/EKxsvEss92jRsQWG4AgjuBiupUrK6OektWmc7Iwu4DIhz npVVLL9yCRlqwkjtWiJ7yHCLngiqr5mdd3PFCQhjKdxUD8adJGZEA64oehSFlh3QBehqCRWjTy8Z PrihO4noyNbVfILZ5pLe1WNCc8nmqRL1YHZKgX09qhbIbaD8tJRFYgnhXJbP0GKZEQsJUr19qYPc 
I48jgcCpt55GKTKUrmbcgyMD+lTLGWOT09KOguo2RvKTPb0qgitLKHzgVLDqXvMAYqP1rPkP737t Kw7kFx8ki4PXtU7wB3609kStyu1mkQzUfErBQOB1qNypEgtxk7egqp9nM0vzdKkEPIHKDrVZkA4P Wm1cHEqlgj7fSmpArktj5s1SVkIjuoPPwSeRVILuBDDLdqQmU4otgZWPU0y4hELqQcrQ1cexHduo 246d6jkQbcpwPWmtAIFtCSGU1beANnIpslMohcQkDpUMC+d8uMgdzUopoka18xcgYIrMknKSbTVL Uloli+Zee9QTN5OQORUW1B6IqAscAdTVzymKgA896m5SFFupbJ7VRukXcSCR7CmDMprEoQwb8KnW 0LPkDml1BCIoBJdMH1qqYlmkyBim0NCzxpBgNzULSAEbRgUkxFGaJpZM5oIMXyg4P0pARpbMVLHG KqJG0svNA9mSRptlZTyajubVpXUj5T3xVdAIhbeVPlhkVbMkSn5V57kil0E9ymU8xS3Wo2j3kYOM Vn1K6CAruJbkUxbbfLkHaK06EGb5eLzjlallG6b5elQUtinrsG7Ry+7kEE1/TZ/wR3vPtHwVaQj7 oUf+OmsnH3irn9J3wAlkuNAZmPAxgfhXuhk3S8jOa7obGbAxgkjFKg4IArTcpOxWkwHFV5224Hf6 U0Lqee/ErdJ4QvfaFv5Gv5YfikRF8QNQkk/1jSkKPyrf/l2waOXSIIoBOSKXeCwUnJPTivKlKzsQ kNmYxSYz0pZHLMpIob0M1uXo/wB5crj5a7jTFEmDjGKy6lnqWkOZUGR0r0jSnEowBzVpXKTO3sAY lAJzXZWg+UZPPatloRrcc2VYlhVEgMSRxTaK3KFzISOfu1SEOOAeacQ6lVpdk3T61EzBc7hxTe4j diIcYxyKuxRmQhT29a8o60ai4gjwRmrUH7wZAwPSsZrS4Lc0Y0IQMKtB9zAkdBzWSuassRAkHsDV xl2RqoOfepEkSZ444FCnKgg5xVCuWtqyYbq3cYpJd0mFU7aNmT1AZUhBzTpMLhe49q0ewWsR8jDE cU6U7cFR8pqormHYqSo6MSOExzVXQ1EfiWxOP9ZIpz+Irow+kyJaH7B6DEYtCt0z0UZrdiBTIPWv XZnYkVSpyetTryMmpbAdtGenNSRbllyRxii49D4k+L84i8ZSAncDmvw2/a7lV/ifsB/iP8xWb2Mz wXeEfbHx61H5Ylv7ZGPWdBn8RU2W41Jo/Vf40262f7MOlQdVCx/zNfm78OPCqeLvHVpYyALCZAWz 7EVlKWo4u7ufeH7UPxDb4TeCrTw9pkPlQHapkUEcZx9O9effsZ/DLT/Hfiy+1a6QSm1VlQMOvy5/ pWifciW7aO11b466ZpHxB1/SdUtwkMZdY/lJ/hr3v9leez1/4c6y0KiOwdxg4xxtNJq5KV0dz8Dv h1oGgeNpb3TJkaYwvvCY64r4V+Keh6t8T/2hdS0mF2azF1mQdgBjP6VM9UEXzOx7j8d/iZafBv4c ad4W02Tc0LJux6K3t9att+014X8Z/DLSNC1qISM5jVQUJ/i/+vScdLlrdo4L44/Byy+C2raB4x0u ILYR7QyKMdXH+FeleJfj34Q+N+taRa30IlnkXCboycZar+LYUIO58tftc/AG3+H/AIqtpbYnybno Ao+XJxX0v4F8NW/7Pf7P9vrKRf6ZPCP3qjkk5FZVJW90a92Rwf7HVpB8TPiJq2uas4a82vs8zsSv /wBavvfwx4cvr/wf4m+0TZQE+ThgeNhrRO0LGUleR8OfscQy6X8ZNXLD54lkA9/lFfWCWkXxT03x iurW6YidgnOf4M1cVoVfmPwP1K4vfh58RbeTR3dHjukjREHVSwzX6oftEaRZ+Lfgda3+rKsWo7FK 
569T61U3oOJ+ZHwXiT/hamh7iCwcAZP+0K+8/wDgo1JtuNKiAwDAf/QqUVpcqUban5dWmFtkXrjv XvH7Jnhyy8RftEWaXkfmxhHOwrnkYwalOwbn7O6DrcunfG7VNNtgqWwjl+UH0Udq+Qfgr4K0/wAQ ftZ+K7q6hEtxHLLsJHT5AapTtoxSfQ3/AIR65deIP2o/FGj3hV7CJ5Vji35wAg7V+Xn7Wvh+00P4 5aylrCIVFycgDHpVOXMjNS1sfMWvyiGzcHv0r9s/2crS2k/Yks1nj6iL5sdetKBo7NF79pK2jt/2 NNMjWPCbI9r46cmq/wCxqE139lS+juTulgVRHKep+Umqlq7ohvmVza+OGh22r/sWWUuoRLLOPKw5 GecmtjwLrl34b/Yp8MR2kmwYgVm3Y43HNG4k+ZWPDP8AgoJotiv7O+h6kLYS3LRoDKVPdj3rM/4J ZwrP8LvFZY4ABx/36NXpa44s/PfR/g5dfG39qnUtMtZDEEuzJIRjnbtPev3f+CfhjS/C7+IPDkr/ AGieFHUgr0wnt9aUbSdy3pG6Pyi/Z802Xw/+3jqsMDFbZ5JSYh0PC1+pfhyKG8/ad1i12fuBFNuT HX5RSvaVwv7ibPkf9m7w5bW/7dPjKK1QQF5pgygY/gWvob9oq9j/AGavAfizUdOg+13l8xeUgHgl CvaqU+bQym+Zpn8vmm6u/i/V49auE2XFyhdx6E1Y1uANIk0nMMUyswA6gHNTJHTGV9T93fBnxF0H 4u/shaXolhfPocypGVn2hOQScfNxXjnxY8PeKvC3g7wZY3rf25ohngVp5Hzu/e8Hjj1pRu3ymNTR 3PYf+CovjO6+C3wP8LTeH5DZRrLBgRnGR5vT8a/MLx/+0d4j+LmpeFRc27wKLmEeY24ceYPUVU0r hB31P1u/by8Y3nwq/Zt8Oa1pZ3XaNDl84zlznp9K+U/2DPitf/tAftS2mp6vbq0y28oBJJ6gGtm1 yGcfekfox4o0TRvBX7Z0uvX+tEhYplFjlSOcduvGKk+Cet6F8U/2t/El7pcW0wecjMU24JQGsZaP Uq9tjC8CWnh/4WftI+MNU1DWmupJpJT9l+U7MoBjjmvnv9k7xGnjT9ufxZe2gK2O6fbFjG75FwcV q7KOhPO72Pvb4Y7NX8V/EyKZD+7lkCJj/plX4dfBbxxovhH9qS6j1O0Vbn+0FWN9pOB8vesojl/K fo3/AMFT/ix4WstI0W2mRL152RU8sbypL4B4r1K9+ICfA/8AYV8P6vcj7RZQwRYBGe7dh9Kq12Tz Xh7M1v2Pfj1a/tB/AnxV4hs7Y2zIhaFdpGT5ZI6+4rB+AsQ+Pvwk2+PbZVtbW8he2FxnBK/MCM47 1lNu9jeglRhys/OX/gq98SfEI8YaT4ftIpIvC0cTfMoO0kP8vbHr3r2n/gjlJBq+g+NUiJzC7BDt /wCmWaqmnBXYTmnqjhP2Z7+W7/4KJa0gUiaGaVTkYzwtfb/xn+PmrD9v7RPCsMfm2U1rOZkyezKP 5GhpyldGdub3j89v+Cn/AMBX8b/tRaNonhe38qW9DG9MYxxvAbPX+E1678bviPov/BO39mnT/B3h 9Ul8QXECwnbwTklCxx3GRWyV6bTJqO0lY7v/AII8Wk+s+BvGl/qMhm1RpDJK7c7mER712PwM/bCu fHf7Wtx4EOm7DZmRXfDc7QD9O9c9J2jym0p3XIe2+LdX8Qav+1rqmhQFl8PtFOLrnC4wOvboTXmP 7XkU/wAI/wBna7sPh1CqwyKBcPanOOCO2e1dlOXK7HD7JxVj8Uv2JJJYv2gfCn2ppJ9ReItOzLyT uGc199/8FjdTFp8Y/Ck+yTyUG+QBD2lBrBQvNyO1e7aR9meC/wBqjwN4++E/gvw7NKJZcwskWzPK 
yfX3rC/4K7W+naL4b8O6naKRexhRAoXuZP8AGqdoImUnKpzH5Mt+0x8UtD8ReFY75LpdPnuoYz97 ADSAelfo1/wVo8AWsHwO0DW44ka8cxhnPBwZDmnRrraQ6seV3R+F+lxeRAFB3L2NbUh/dBgMms5t XLWxUlthModx+FVZMHAUYNJMCCCUOShGCKVI2iueeVptDTJ53WZiMYx0pUYPhNvPrUpWB6lWW2ER ODknsKrfYH2Hac+1NCHeSFg6bSKoKisSKtMdtBWhDFd3X6VYltt7kDjFIdtCpBE0jlc7QKlkzzgc CkyYooG3Dv5mefSoSru5+tBbIp1MfBFKsJMIyefSo3JaFjjBkGDhqZcRFMnqTVWFYy1tjvDHn61q xW29ySelEtEKJDKUkkKMOKpyWmzO04qUtB9SSFWbgDBp62zI/JzRaw2yoYkWUgnn6VUNuZLgnPAo 6lLUrS26yNzwPWnFRHAQOTnrTkSVp43WFSOlUvIaBS+eDU6C3EWBWjzjrVae1/cYB704jZELcFQh 596ilA2hNuMH0pyJY4gjAXiiS0aP5yc5qLgkRKisMDrTViWLnoe9TYtkkkqiHcOtZT2m2InqWoTa YaFZ7YxKNv3h1qEQq1uST83vTZO5VhXauSeam35HU0ktSrWQiwtOcLwR1NQXNuYmViM1VjNO7Kcd q0055wK0VwhAPakkVsRToJGOeFqtBCiZJ5Hal5DKV1FuP3eKpxxeavHBHtSQDpk8racc1HMFlwcc /SgDOjjcMfm496ZuJmOBxSW4MfJbeahwdpHekChYBl8tVvQRFLFjBJqNlUEZGPoKgaI9ginJU8el VZSTnnBpMbKZmPC9TVlyIhhj8x9KBbkBiKvleaSSBowCeCaLAZOtR/ZdJfecgsK/pn/4I6Hb8G5Y j3C4/wC+TWbfvDsf0o/s8y+VoDo3JBA/Svod41DACuynqIqyRAE4bmovs7qNwOfatR2Iwp6mq88r NIMLxQT1OJ+IXzeEb9e/kt/I1/Kx8YYA/wAQr1X/ANakp/pW1rQZV7o5YHzEzjBNQqRBMvGa8mSv IyvZEm8NMSeRn0qSWUZAI47cUN6EIdbMVuQW6dq77TD84Oc+1QaJXPUNK/dwA/xZFekWMoKIVGDV RumOx19nu3DHWu7t1EaIzc1ruJqxYuZ1cgKOPpVCSQKNuKd7jiZ0oyuD1quTsIzTTB6FC4iaaU7e lUpUyCh6irIZ1dsyqMkc1oJMARkV5DlodjLkak4ZjkdqsI7tccD5aybuNK5cDEHr8vtWqgWRRg4A 9aVrItkucLg9KmjzgH+GsW9S0i7sDDIH4VFbxbGOeapbkW1LYlAHyrzUnmjA3Dk1QrAxcuAoqEHE 3zUugmWjb+aBg4FZzg7yuelaRdkF7Dv+WZDHik8OYu/FNgo48twP1FdOH+O4nqfr5o6Z0mIjritq MEqCeor1mzN6EUpLuOatRgMuabRCJ9u3mnqDySe3SpsM+HfitF53jSUem6vwt/avix8T5XPZyP1F RPREnhbna/A4qK3uMapZKRhTOnOP9oVlfSxSimfr78V/Ctx4s/Z60+LTp1d1jU7d4HQmvyhtL/U/ hxrscsihbmKQE/N1APNRbUTVj9JNRGkftLfB+O5mZF1COMOQeuRk965X9ibWofBnibV9Ju3WOZw2 wlvRMVe+hFtDm/G37Or+KPiLr2rahdBbcs5j5B/hr3X9lbUtN0z4Y674eiuR9oHyR5IH8B/xqr2V h09mH7MPgu6+FXifWL3WL7zEYP5eXBwNtezeHvD+ny2Gu+ItKmjbULlyVbcB1XH9Kia0FBcrZ8jf Ef4ELc+AW8Q63qHm30jqWQspxnr/ACrg7P8AZns/GHhnQ9WsrvaySxsV+Ucbsn+VNaodz62/aa8W 
aVcfD7SPDMkyv9wPgg9G/wDr1896H+zNZ6Z440DVtPuwLWNdzLlR/ED/AEpR91mlN3VzrP20fiTZ 6v4q061tdsywMBJz0wwNel/Grxrp9/8Asq6QkMiM2yMbAeR8x7VEo394zlq9D8u/C3xA1nwRf3Fx ps7QZyvytjORX7Cfs1+ObjUv2e76fVtSDXs0OQDIDg7TTuPlVrnhP7IcAt/iFrE9/dLFu3hGLjnK 19D3niPTfhZ4e8VzXd6twLhyUywPVCO1VzcqMqfW5+cP7PnhPSfF3jW48Ras0bWsZLRK5HpkfqK4 39rH4yTeN/EJs7KQrpsAK4X65HFabwHDczP2YfhBbeI/Fmla1c3/AJCR4baxUdwe/wBK9+/4KNeK bDVtV022sJUmKQkMwI/vVnGetjapqj8z7aFobSIe1ezfsv8AxMsPh38erae8X5WR1346ZwKfW5C0 P2U8Ea3oV/8AFnU9dkvogJUkCguoxuWvkPwL8TdN+Gf7WWuNPcq8N80rRvkEAbQKcldXMk25HpXg 6PQPh38ZPEfjB72N2uTI2Ny91A/pX5B/H/4kD4l/FzWbu2tpJY2nJRljJyMCnH3UDV5Hz/q1pqdz aFU02ViBk5jav19/Y3+L+k63+z9J4c1KQW8tqqjZJxgqpPeqi+UrpY9E+OPxC0TxX+zRHogu0UwI oUBh1GTXEfsgfFfw/wCC/wBmzVbK/uFFwu0ANj+4aOboKC91nTfEv4y6F4h/ZL0zT0uU3M0I2kj1 NeueENV8L3P7M3h7RJryHMflEguv8LH3qvhIhofD/wDwUM/aC0y+8LaX4H0gh9kYyydBtb16d6u/ 8E1/iHo3gXwR4i07VblY5pCVXcR/zzIoi/dKj1PDPh/8YbD4HftcX91nzrO7lf8AeqM7cgDtX6t6 F8bvBHh7xZqWpLPHJeagHZ3wOpXHrQvdi0OOqPzT8I+MNJ8KftwPrKXYNpKspY5GBnbX2X4d/ap8 JWX7TmsTmcbmjmAbH+yPepi7rlZpJe5ZHBfs8/FHwtpn7RfjPxRc3ce6WWVomyMgGMe/tWvpf7Tf hf4j6f4+g165W4tvOYWyyc8eX9fWqguSRilofgba+HNU1bxbqJ0bTTNpPmH7M6q3C49qNX8B+K4t KcLpDzRCRTIXVh8o69vStJ7l05WVmfqF8I08D+O/gBpmlXEsWmXls0RkUYB+Uk4wTXXftTfta6J4 E+GPh3w9oqC7FrLD0B4CvntRCylcVXU9L8ffEnwV+1n8K/DcGvSRGOAxSNHLjqr7u5r5N/aP8R+D x8SvDuh6HBFBYwTI0siLj7sgPrjpUS3YoKyPrv8Abf8AjB4L8cfs42ei2U6yyW5jGMDqGJ9a/KT9 kH48N8B/jzYXAhJtXhdHcg8ZwKeqigiveP1j8SeJ/APi/wCLc/jHULpJ7ryZcIyjPzD657V8xfs7 /te6f8P/ANpzX5LW1WDSrt5PnAIzlQBxVVfehoVb3j6L1nxH8OdI8R+JPFdxLHdajeF3QFQSpK4x wfavkz9hn9pPSfh/+0JruraqhgiuhK1oVUkBSgH4c06avAynG09D7y+Fv7a3hiw8R/ES8nuDuuJ3 8r5euYsV+EPjzxLeeIvHOp65pjbHlv0ljbOCBx/hRFXdxzVpGl498ba3441+0vdQna4S3jJCs2ec 5FfrB8Ov2mvDnxZ/ZfsPCPilAsMMaAxspIOMnvj1rO7jUu9i4xW50vhn9pDwZ+zR8HJdL8NKkcM8 8aSRxrj5T8p6H0NeUfth/tpx33we8JaT4NcwSrd20l0wynyLLlh3/hrZJTqc3Qmu3ZWKP7ZH7R/h v4yfB3QtJsl33xEfnOUIwA/PP0r3z9hH4tfD/wDZr8CXdu2xbq7X94ypncdu3nBp10rWRNJO2p8p 
fFH4/wCk/DD9o+HxV4ViRrm4uMXBwV4ZgCeM9hX2lfftGfD+4+MVr4ylVH8QpayjzNnI3YyM59hU Uny6M15XbQ+WtD/bWttU/aW13xDf24LoJUtGwSMMo7/UV+b/AMXvFGo/tAfHDVtd1iVmijmYWked wVSAf5irclrYz5W56n27+wF+1OfgF4n1fTL8brG7JJY5O35duMV9heFfjj8NPAnxa1bxdptug1mS OUmYRYJJX1z7CueMLTuXJajvBH7e+ka74O8cahcqE1ecukLYPG6Mjr9cV4T+yl+2Na+F/gL4ksPE 7tdXU6/usqX/AOWZHb3q5SvUujSSTieOfsdeN/CXg34jjxbqyATOS0G5PuA44/MV9R/t2/tEeCfj tpaywQia9WMqjNGe5+tbc9okJXR+Vf7M4svAPxa07V9XdpLW1kAjiK5C8g8flX6if8FAv2tvDHxn 0zRI7BC01oyNnyyPuvurGXvIEjY0j9pnwN4k+HPh4a3aq9zb+W3MZOGDZ9a+Zf2+/wBq65+PV5pe gaWP+JLCmSQSOVbI4rGUHzKxrdOOp8LwRiCFYwo44rVFmsa5z+FbNERdzOnnXO3GfwqoF2A4GSe/ pRFDK32NiSQ31q7FaZQMxqxLYe9ssm4g4I9qow27ykqGP1qRp3J4rIpLtDbm9TUOPs85FC1G9AuE 807h8oqjDbZY8cetMBqkIxJGeeOKnuIPkD55pGiRQt490hZj+lTmLyn5+YGk2R1IZkVCSBzVJWLH IHPejoJ7jpYt/J7U9oFKhgaSRb2IvKG3KjketVXO/tj8KpMlxIWU3EeFGMd6cC1vDgjk8ZokStA+ zCJBkbiac7qQBt4+lSirEDH/AEj5R8tIs+CTjIpPcVinOFeXdt4+lVVAeQYO2mNFi5txt2gZ96y5 kaJCAPlpXuJjGV2Rcfdqtc/uYuRmi1wRRM7bQQMH0xU0kLDDdQetGwPUcloI+SfzFRTgPt+Xp7Um wsQXC+W4CjBIpI38xSjjge1HQWxUaJYyWHSqrsJMsc59MUrARp865x+FTJDuUsD9KTWobFFpvLJA GW7mqXk4hLNwO1FgTGxWYnhIJ+btTgoCBAOfWknqW9h1vcrA5BXIp8kP2kFg3FUQlYqRL5IIIzVe 3IRmJGSaFsVYpXCMchuPSnpEvkAqTxUoTRm73lf0FSRf60gCk3YC5OVfGR09qzZjg4wAalBYzRGW fB4FTfYyq8ciqtYBWt/kIB571jy7UYZBJ9MU2IW3b7QCpH51Ozxww7Cu5vpUjMtLdlbIbrTpAkLZ IyfpSAoyugU7V5z6VOkCzoueoptWGizJII3CqKpySGWXkZA6UmxmLrgaayZCOAc/lX9KX/BHCRrz 4V3DEbVTaAP+AmuebtIeyP6Yv2dm36LNx0I/lX0Ow3OeK76T0E1oQzLjBxUXXvjFbbiuDEuoweKq SqdwA7U0Jo43x4uPCd8xHzeS3H4Gv5UvjhIbb4l3cxG0vKfl+uK3WsGJ6I49sqQG4x0xTpF8wAY4 HevHbs2ZvVlSOTyZMkVakkP3iPlNZ7jsTwEmVcdK7qwcI47Gpd0ylsep6IPk+Y5Feg6UCwB9K6Fq gjoz0LTwTtyMV18TZjHFX0B6k+3MQwMYqhcqSo45oQJmewIcHrUMikygmna2wmrleWTynwO9Zc6l VJNOOrE1Y6iMBiAetW9xQ7cZNeLayOk04YmCAGr0TvERn8aFqi4MmGQ2VHHpV63Al++do+lJ7F3u XjsdQFOce1aUEQMWScVk0UhoyseVPJqSIkkY7daV9RCRMftDDHymrPkAMeciquLoTLMYmCgVGxWR +RiqWpNhZF3KMHGO1QLHumODgntSejJZHcwMM5GcVJ4WCW/i2yJH3nH8xXbhV7wrn7BaTEV0eIDr 
gVb2spAPavVEyQFS3NOPyHIHFUQMkkdANo4qykW5sk44oeiE9T4b+KIKeObls5A3V+GX7Wkqx/EZ tx+85P6isZ7E2Z4IrHecHirFziSBQDskUggj2rO2haPV9L+PHiLStIisorl3gjAUKWrg/E95ceLr s3N2drN6HNJK4p+RteDfF914Ltmhs5XTeMNgY9qktNav9N8QDVreYi6wc8460bMmOx3c3xa8S6wr pPeNsbqvmZ4rkNE12/8ACmqyXVhIUlf72GxzSZUFY69vil4i1aGaKW8chwQcv14qPwx8RPEXhfS/ sVteMkfGcPjpRe+45WRm+MPiR4h8V2KafdXrm3BGRu9Kt6J8R9b8MaUljb3zpCuNoD+lVexCjc5v xZrOoeJ7uG7mm3yr/te9dfpXxO8Q2qxRJetHAi427+oqW9RpOKscP4ivbrX9YkunfLNnvVkavqNz o0dlNMTAmMKTTewR8zKitlAkXsQavaD4w1fw1pk1ql27W+MIM+1Ta5dtDF8O+Mdd02R5FuSvPHzV PrHi/WfEOYr+7aW3bkgtmnJXM+Wxz1nqN54dZorO4Y2zD/V5wBWBcsbifbKuWI+Y1cZaWBRsOt9c 1HQUEdpetFGp4jDAVR1vU7jX5hPeyb3AwBnNQo63Kb1MWHdLEQBgCsVtNjiuhcdJ16OB0qrBa6L8 Hi/VoJmEV2+e5JrktYe8utSS9e6d7hehPalqTFJO5S1PxZrV24Bu5NpGCM9a9l+C3xC0PwM0g1u2 Erk/KdhbPFU7jcUfR6/tKeBnimiWxHzIcfuT6fWvgPxZq5v/ABbdXekSNYwSsThBjI/GrexhZ3MK 7vr25sGtpbt/LyDwRWBuuYdNltY7liknLk+tCia2sZbxXcmhx6dJeyG3jI2jjjFb6a9q1pYQwx3r mGMDaCRVPUVkjNu5/wC1Lk3F05nuezt2rMgMumTvJDMyFvvYosOKKP2SWa4eZ7l5GJzz2q5Nf3kb K8V8yyDg8gUDSMVbO5/toXxvZDJ3AxzUU9sz+JX1JZWiuWVlbA6561MV1K8hNC06TQby5mju3kMx OQQOOMVUi0NoTc7byQCdsuMDntTJUdT66+CX7Q+mfCDw7HpMlmJmRcA7Sen0r1nUv20NLvdAvLSP Tx5koI+43GRijmbM3FqR+c6aI1x4hvr3zmiWaTd5a9OlXbq2S6hKzZnxwpYdBRfU1SRTa2kECIlw 8aJjYgHSphY/bpzJNKzSf3iKtgkiSTTS5VJpmlizkqw6nsauJHBFc5KgEdM9qzkwUbMpy3cKXzSt cvvIIxtqtYWyGd5IDtkPfuKuOsRNa3JprZ5QRJMzOTycda0ba2t7GJCwVXAwGPYU02tAcU9SOHT4 La5kkjcASglh/eqnZRw2aSBGGFPKjtVJ2VieXmNaGQXe2UfdI6U6UR3EBgQGMk5JAqZq40rGWNBW 4iaN5meMHJBA6im6tFb/AGWFJJC6joVGaabjsLluWoUtp0j2AoqjHzLitaGwWVd0Mpjx/dFU72KS SKV3YxJOs0jFmz95h3psEdrPevLvkM/clKlq407F+x+ypI7DBPf1qKHVbOKUhUdTnpsotfQVtbk6 WMRnd1IV3B+Ydais7FNPRySSzcZxWi0VhbsW20+C2gaONNiscvgfeNaFpbW8COiRABuvvUJagZtz pENzbLEg8vaRjaKmbTgIlUsXI7mm9BpaFmWwW4sVjPBBGDThpUEibJYhK4HUipuLqZkemKZirLui HRT2rWjtkiQlUAP0p9B3Kq2w83eOfanOrO+48e1G6GlYgNmoZ+OarxgNDhRz3oRPUelr9nGeuaZJ l/l6AULUb0RTkDqdvQU2JyH2rTaJjuWiwgnLdSaqyQMz5z1qEzV7FprdWiBxyO1VZTtjVduM96L6 
6aIVzsOOOfr2raFmYSvI/Kv9m/49an8Jfifp1uC0tvd/LJCScZJC54r6o/4KC/Ayzu5dP8VWYWC5 nXdIBgdW/wDrVl8LNJbI8X/YHsUi/aHsZCoaUW8g3flXrP8AwUhfPxotwD8xjf8A9CFC1CSu7n56 XjPDCmO3619j/sB/BrT/AIofGie/1ONZfssMgSNuf4QauNkVKWh+0Pw81u1OneNdItLYWsVmzomF IziMmv5u/GV80PxjllkQSf8AEzjUhj2LDNVHQzlpofrr+2L4D0E/sxaXdRWipO4jO8L33GsH9hL4 K2Gn/A/WfECWqXepldyMw6nYe4+goqR59SqLvc9T0P4cL8Yf2d9buPFFjBZSxJmJQ2eiE98d6/Nj 9g34D6b4g+P8E93cB7a2V44bc4IYHBz61UVaJEvisfZ37fvwF0nTPijod9YSpYyC5T5EwM/vB619 i/F/wNoni34SaTaeJDH9iEQx5pGCcnHXFEZWumPncI2NzwdbaJ8M/wBnRUsyh0SIoF2cjHPp+NfJ vxR/Zy039pLVfB3iXRIIxYWiK82RjOJA38hWLlbQUYcsubufH/8AwUv+NGmaT4p0zRvDEhtNStuG aNcHhwdv411f7Bvwmu/Ckup/F7xlIVuXidrRZcfdZPwIOVq4WUlc6JLmiz8/v2kPjfL+0x8VdavL pcWsN5ttup+U4Pf3r9sPDcEWl/8ABOCxiB2otknP4NWkrOehzRi+Sx/P14C8A3Hxo8c6JoMU5VJ3 UuRjlQwz+hr9of2w/FMX7GXwM0rwH4YiMd7cwBTKikfKCVJyOOhqkuVNmk4+4j8RvBPgjW/jN46s fC2nq1xdvIDeT5/hBG7J6dDX7TfHPxvon7FnwP0vwPoAVdZuPLTMYxxuKknHfmroSTg2Kt7vKfc3 gWytNY/Ys0GLxTItxA8MZmeYj5jk/Sus/Z60Hwt4d+H2vSeG/Kj02IFmWLGDhCa4bNzv0NeXTnPn zxv4H0/9sT4S22mafCsaWt9A0rkY4Vtx6+1fMP8AwUN+IOifCP4c6L4D0wNBeQRDEiR4xsb16d67 FFbsidTnirHhn/BP74B3/jvxU3xC8b3L3Gk6YjCyFzjDKRuDc47rXJftt/tbX3xe+LJi0Sd4tMsH MUQXjzDkEf5FRX/hpoypx/e2Pqf9iL4Nt4Ctb74ueMpTFdPEzWqy4+6y++DnK9K9S/Yj+Od1+0B+ 1/4i1lpH+xxNKLZTnlCgNXScfZDrrkqH0D4kl8HfGj9pXXfDUmno2qqJdztGRyFGefxr8U/j98Jv +FKfGXWtOjJMJuCUXHQACq5eaJVLc4S1YSRiRuR2FasNoGIc9DWNi3uJJtkcqDjFToxjQY6jrRYo mkXzAGAwfWmw2zyMCTxRa6EtDUj2wZ3ciohkSfKOtVCJUmXfs5iTc1LaHgyEZz2qtjNbjntPNyw4 p32b7Ii56mhml7o1YbUSjBHNR21n5cxLjkVS2INKACdyHGaVbNVl+U1nfWxXQeyGWQhegpiqygBR TtYImjEiquCORUJgWWYk8iqsK1mWYdP+0FsHBHeqC2CwOS/LVMSmaUMQRBgZzVhbYzdPlWobsxRR RurZLdQwGayZ9PErlwMA+1bR2Jkri2yNFg44FXbmJpwD3pisJDYuq4I+aq8MRkndR0HWkndlrRFi 4hCoqgdfaobuxKxjcuT2ovYW5BaRLIw3DDDuRVqcNcyFVGAO9J6sdjNaJlfap+tV7q0eJNxOfpRa wbmfDJhCZBu9OKt+T5gXsMU7iasQrb7WIx3p5tVVuD1qE9QUSEW6oGLHC1nw7eUQ9fWrtoKWgy2t Akjq4zk9ar3GnlSRnp05rN7jtoUN8kSYI5qGWNrqMKp2jvil1FYbHZ7HCjoKhusW+Vx1qkQ73MqW 
ERMCDkUsbl0baKUkWtTKnjdiABxUc9uwUK3A7UWFszFvNPYqArcCmR2rTR4AwV70PQZSO8sVY5AN QXkO3b5ZxmkkBSe22rtc5OKoJGY8r/DWbWpa2IWiETZXkVSubYykY+X3qibalS4tHMqZfhajLZkP y7gO+Ki+pTQiyqGwFxmmFQj4AxRYmxSmUlyTxj0qjgmQluc9Kpj2GxptkJJ5pGmUls846VD3NFsU mXzoySPpWd5BReTmlshC43qVA+aqLfKhjC5b1qUIrxRNCMNzWbczEuQR+lVuFiq5wBkUxVCyZXgU WFYZPES4Oay71SWwq8j2qNmJoosScButOjCxzbiKb1EtBJU81zluPSs9S5kIB4FNIGStCxl4OV71 mSKsUpA6/SgHoVn33Bxn8KYsG9sKcEVMkNaiTWzSJlTtIrPkkYEcfjQkBISsMWSvJ70yN2dufu+9 JoEG7ExReD9KGjbaUIyfehrQRWg/0ckDmrFpMs7txjHqKQya7kLWUiDrtIr9Uf8Agj5keNLuAjEo zz/wGs5rVFN6H9avwCkz4iuI8/Nk5/Kvs25J+VfSuumZ3ditIjJCRjANUo0EUYwelatXY43Qki+Y M96ozJ5SjJzTuU0Z+rMXspAB/CcV/L5/wUFtml+LFwAdhWQ7sfUV0U/hZHU+MbaXZZKB82Rwaex3 QAMMN615FT4mURWuXY7znFSF18zCjAFZPQTJoZTvBAwPSuy0XMrZBxUCieq6WwiZM8mvVdKl2jPT NXFalNne6c+4rnoK9Ct5lWIDHWunoRuyzNLtQEcVWnYiPcwyamKNb6FDcRFms+efecdu9bWJWpQu JNvA5FVw2cFutZtWC2pcihCHmtOJWPIPFePI60XUGOozVuM7FzjNc8jWJqxbTEO1WY1KKR1NGyIe 5Zgl8vhhyauQR5kJxWT95l9CXIPWrEab2x6VqtAT6FzyxO4GOBTvMETeWOtUR1LEchUAd/Wp9+7o u0jvU26il2JYpfLUh+c0MNoAFXEzZI0h+72r0/4BqZPiSkZH3QcfpXpYP4hH6nGJXVSwq2ibUxXq NmW4qbsYDYHpUsYw3SpsF7E42g5HWnXsavpUqg4Yqcn8KloLn5yeObb7Je6mm/dw3P4V/M78fh5n xh1JHP3ZWx+lZszlKzPOnfABJ7V2XwKUaj8adIhDbcTKSenRhQlYalY/U7/gpJ4T1bXNK0ptKgE6 RR/O2cfxZr81P2d/jdf/AAW+IkTXqiK1kfy5Rk9WwKzkhRetz7h/bE+CVp8SPDtt430NlaYJ84XH IY5P6Cvcf2JrmJ/2adatUlV7t4cCFiBg7Dx60c3IOOtz85B+yzr0uj6prGokWdv9sRwA46Y9/pX6 saVK9/8Ashafa6HciW8S3ULtYe9arRcxmtyn+x/HrHhL4ReJG8WzATSg+WzPn/lmR7d68U/Yx8P6 jqPx41S4jRFsSJNku/73yjFNLm1Jl7zMv4weH38P/tW293dhUt/tPMpPX5lr2j9tbTvEPiHxLodz 4Vlxa5HmlZMZG8fXtU82tjV/BY7f44aavir9n/R9Bu7kQ6gVjBwwJyGP+NfIv7Kn7Pms/DL9odLq 9ZX09IpAJC49BSd4smnGx9Q+fbeL/wBtLdp7qRDHKsjg9/lNfQcVh/wjXxk128uNqxGKb94T3201 7zHUslYwPgL8RLI6H4rFtcR3F4znam8cnZXzpH4s8ZW/gPXZLwJptqxIURzdQVPqKlXcrA9Io/PC z/Zbu/id4FvtbsbwXV80ylQ7KOCOefwr9IvDnl/BX9iCCy12ZFvDbKmzcD8xDCr5uQcI3PiH9j/9 n608V60fGGvOotLVSYEfGMYyP1Fee/tf/tEXXxI8bT6RpT7tOtWMY5wM5yMUN3VxJXdj0P8A4J8/ 
B7xDF8ZbHVriRUsDbucmQcdK0/8AgpBcRz/HCNI5AxRXBYHvkU1poEnbQ+EpUD2gQn58da+8v+CZ 3i2w8OfFq/tryQJcyxyGPd6bQKhXbsEl7p+wHgHwz9ll8c3U8yH7Q7mE7h0MeK/mm+JWhXVh8cZL QgvLJqkbKV5+UMua0iuYzk9T9sv2vPDX239kfSooJlMiLGWG4dmNRfsL+ObXSv2a9SsLSaOXVBEA sbOBg7D/APWrSfuIKHW54/rA8aWXwd1e417UxaQS/ciS4DdVIxX5cfAz4rt8F/inY6kLiSRUjYHA 4OcelDlZXFvI9u/aH/aZPxo+LejanJcOlpb3Cb48cffB7/Sv0s/a1gk/aA/Zz0VfC12ittj6SAcb jUJc7ujScdjqvh94Rh8K/sjaP4T1zUFa8nESvl1POSP61X8S/Fex/Zbk8JeBNMKXC3MASSTPbftP Tjoanlcpeg6kloj5f/aF/Y88OeMv2ptC1B71RHKTLLGNpBIda+p/2g/hjqPj3xJpXgnR7lLHw5HE fOMcoGdrcDB46E05JuSsVKdon4wftMfs923ww+MS+HvDkqzyTS5kOQNuCAensa/be1+EvnfsJW2g yX4F0LIAjcvBAb/GtEuTUyTu9D8Pv2bPBEvw3/aC8OafNchWgG1pCw/vCvvT/grJrlldeLNE+xTJ cTiBlIVgcZf2o5nJNFt3SRlfsZaB4Y+APwj13xpfGF9anUyIcgsCUIxwc9QK/LPxx8SdT+NXxMl8 Waw5kgM4a2iY52ISCR69qig2otCre9JH7+W19Y/H79iDTtD0q8W2uhAnG4AjG71+tRfsieErX9mj 4AazYeJdTSR7pljXfIp6oV9vWqjr7pc52p2IPF3xU0f9lT4LaTd6JNHLLqN3biUKw6M20nj2rzr9 r34M+FvjYvhTW3u4xczyRtKwKnIMgyOvcVtW0VkYU1aNz3v9oH4Xr/wrzw54K8JagumacQgnMMij 5Q/I5z2Jr5A8a/sYeEPA/wAZfDGnQ3iTocSTkhfmZXHXms560rFUn79z7T/aj+E0nxU1zRPB+m6k un+G403TCKRcNtbIGD7E15D8E7HwZ+yz+0q+h2Dx7popcSKB6AdjjvWNBO1mOu+Z3PVvBPw2svB/ 7WGvePbrUFaCUTFVLLj5lH+Ffk9+1l8QLX4rftAaveWbboo52U8cHOK7lJRiOnE8WS0VAEA+UVca MmPH3QPSuW5qkUktRKSD+daUMW2Lb1IrREvQWUn5VYYI9KtQlvuY4prQLFo2JZcjk06GBtmF+8Ot X0E7musDSQYYdKrCBYyABSjqw2RdWzPAH40rWm6cAjIHelLc0hqjaWBd4A4pbu0ygwelK9hWGqqx OuB8xHpV5LHLEtzUt9SSzHGmcKuBVWWzMb5xgHpVbj2GTqwiwBg/Skitfs0A/jY+oqkxrU0IrR2t 9wO32qBrML8zfM1STctQRKtLKDAMDnNK1xwZC0H2pQMdOtXPsgki2Y+Wq2Blf7CsDCMjOakksfJY EngUMCCX5Jg4+7ioba2RZJJB0b1pxVlcmQ2O3Kvu6+laMtv5sYYnLelS2JFWaw5AI25qkLQWzlS2 QaUTToVnthIDg7cd6rQQhQQW3r71VxxIPsyyg4TB+lWIrdQoXGQKm+o52KrWhkmIH4VVe0beQRjF BDZW+xmZyOwqktgvns2No+laJ6EyWlyqXPMR+9nhqZ9kJkUFiTSSDZCzwZkKgYx1qsturKSo5FZ2 swWozYck45rPlttxJYZPvTa1BJGVcWnkKcnr2xUB+S3AUc0Ma0ZGto4i8x6a6CaLaw5qU+orXZjz WPlnBJxVUp5cJCnk0PUWzMiSxMCk53Me9UJLVoxuzx6VaG9EUbjlh65qO4zuBx8uOaTSJuZzghcp 
070rJkLk1L0CLHmFTweTVWVBFkY2/Ss7amr2MqRV3Z6EU7KsN4HHvVMiL0M25fD4A4NZ5HlMT97N N7FXK+4rIQRmmSqqShgOO/FShpsjeFpW+UYFVtu/IXt1zQ7WLvoQbSikg/N61EQPLP8Ae71nYjqR icJAQ9UZJo2XkZ9OKaQXKHyGUkjK+mKjljiZCAMfhQxJmZdMYwrDqKSQmRAcYJqRsptCI8hhuNQR wqwIP3u1BKKrKsfT71UZ49rbs9adwJ41MafMck9Ky5AZJcMORUrUUgjh5Y5xiqarjLkc9uKJDiQu GI3g/L3qN18+LGNvvTWgMjkiaa3APb9alCZjHtSGiOM7ctjmiVt0Z5wxovcVrFOH/RztYbvejO6U BRtNTLQqOpb2L5UpJ5Cn+Vfqd/wSNn8r4hyzMfmYEYHuorGbG0f1rfARAnjC5I5yT/Kvs65+V14r qpMlodPN5sY4xiqNpErysMZFdHUbK3mlnZduMelQXCb0+lAdDHvpQlnJkZwD2r+Yb/goYu34wTnO TI7Hp05FdFP4WJI+HjbtbW8SI2QoxTpJWjXJ5BryJv3mKSsFsSpGOholu0hl27Mn1xWMnd6DWpYj O9hjvXY6fE3lK6naQRRLQNj1rQdpKE8nFenaaC+M9auPcOU9H0+ErtBPWu3sISr7WO4CrbY4xNQb WTJ5A7VQmYsMA5HpV0xsyZ1aMHB4qjGpVdwO7NdN1YhJorPhSTjmq8kmzGBmstyuhPaFpup4FasR aJ8Doa8SR1RL3m+X8taMUwt4lOM5rJ6mxbU7z14rWiGMHPFJ7E21LblXwcdPapG3IuR3rNR1Gx1r GTwx+tXGbbJx92tErk7MtRsxYZ6VYgO92GOab1VhsnZAyBemKfE7NJsfp60RXQzbuO2qr7Sc/hUr ssT4qo6MTRWnYshI6jmvYP2b5D/wsFJXOQytj9K9LCfEStD9VI4MhQD+dWQVAIx0r0JbmYxduCWO KjQ5PXirRD3LKp83Sn3ahNLuDjOFP8qiWwWPzj8ZR/ap9UkU4ADdfpX8z37QCj/hcN+zHkyNjH4V hLchq55fLJtyOtWPD15ceFvEVpqVrhZY2Ddcdx/hWttCWj7d8R/t16lr2iDTbi0884xuYt0r4a8V 2LeLNUkunOxXfeFHY1lLcaVj6B8N/tHax4d8Ajw7y8KqEG5jj/PNYfwh+M+tfB7W7u5jmaSG5bcY g3HTHapa5tAWjPSfiV+1xq/jzw1Ppawm3jkIztLf1rC+Dn7Qmt/CPw0lnGWuISBtjJPT8K2ekbAk dh4w/as1vxr4ffT1LWiN1Ck/1rI+FP7T+s/CmJbaAeWyjarqxyRSW1iVoYnxV/aB1j4o6nHLcgrs cNvyScg5Fd7pX7YHiHStDgsZENwsQADsxzxWb91lbo8t+In7QfiTxlrllfRXEkYt+qZx3zXqtp+2 R4gWARsrB9hXeCe9XJ3Q4nnXgn406/4L8dXWvQysHmYsxVuckY/pXpXiX9sPxV4tS6gZ2AlUruLH nIx6VMXYUk2eJfCf4ma58K9YubqGdiXJ3gN944xXpXjf9qPxH4/8PXOmyu1ush6Bj0/GiLs7hONz zn4afGnxF8KNBawtJ2eNiCvz4wBxVz4q/GTXfiz4Vhs7yYskTK2wvnoc0T94uDsir4e+NOv2fgv+ xoG+zQbdpCv1H+TXj8WlmKWSQH98W3E+9PpYlbn0R4I/aa8Q+ALCG0tHaIomN6Mea8a8ceLtU+Iv iybVNUuWlLMSAxz1qk7ktXdzLlb5lwe1N8N6teeDvFiavZNtuVBAwcdaLWKTvofTVh+2D4yt7O4g +1yAS9fnPpj0r5pvPGWrXniJtWc+Zdhsh2ahOzJcT07xF+0d4p8Y+Go9JnvpTAMZVm4wPwrhPhv8 
SNb+E2q3s+mTlVlJOA2O2KucuYIx5TofGf7Qnin4l+G20+9vJBGSMrvzXj+n6RFDZKkqh3UfeIoe qsUo2dynfabDdWkkYQKxOdwr1zwj8ffE3gLwpb6TZXDrDGoAAfHSnB8uwPXQb4q+O/ivxItm09/L uhZSmHzjBz6Vy/j34keJfGXijT9UluCbm24SQycgZBNUu5m1dnT+Ifjb4svvEtnqP9oSb4EIDb+e ufSuoj/ay8bya19oXUZi2CuS/r+FONkNrm0PItQ8ba/f+Op9dublri9ZiVd25Gcf4V64v7U3jdtO W0/tOXydu0qZP/rVM5XKjCx49quqape6/barFOyXqHJcN71pePta1bxfqsGp3t091KqFdrt6nrUp 6WHyWdyl/wAJBePoJs2ncwcDy+1YNjpaNZNCBtiPYCmvd0BpN3PXfAXxA174e6X9m0u+eJAMKA2O Kp+M/iZ4q8aaObDUbx54/NV1LPnG2qg1F3ZnOLasil4l1PUvFPhyws7qctDbgbVJ6EHIq9f+LvEd 5pljbRXbeXbFSuXx0OacnzO5ol7tj0uf46eLtQuoHGpSxiIY4k9/pXPa3498Ta54ug1KS/djHnDl +RyKUhU42O0u/jr4yl1QTjVZfLQFQPMHOfwrzeXV9S1Dxe2vXE5lv8ko7HkZ/wD1UfCtCuS8jv5v jB4t1WOWOa9drV+Npk6D8q89t7CO0neX78rnLtjqaLto05bLQ3hEvlbgM1bjixbgsMipYWdipGi3 MhKDCjtV22jU7iBge9aQIsQxxb5iT8w7Vqwac0e7PU1RT2HxWjQxE7s5qe1Vidm38aroSdEsK+QF x8w71lebF5rIOvvUxumS2X7SP/Q3X7z57VYgt2MIJGCKG9S07FuOLy03FOexxUqRMELOwA+tLoDY 6IRYOCN31q/DBtG7cGP1otoKLJobHzizKelPfY7qrdvaktBN3JZbdQpwmT9KhisyuC4wKuIbIvMq wx4Peo4bWNmwwx+FHKTuVbm0Ech8sZ5pHtxOqgfeHWkUtEX47UQgZOCak+zCV9o4IqrW1DoQm287 dk8jpUcdq7RFSc49aQ1qZsiGPgjip0sN0QOMZq76E9ST7L5Py9qmithEcY3GocdAIb+Ro0+ZNx9R WK1mGG49/alFBcl/spmAwflNUBaLayldv6VVrl3sX1swE34qn5sYYoq5z3xU21JbHW0KjdxjHfFU ZR+9y3Q9KEge5VUi3n2njNEtkXfb29aq1gk+hRl0ockdqpLaqXGRz64oTC2hoXNkrKMde5rFmiWN No4PtTcbsUXYp+U0KHcM1mhDcXORwKrlC4XNqA2GGTWfJZ5bg4FZzQ0iP7yeW547VWuoo7ZOuT9K x1KdkUSmYTnoaw5bfyHBA+Q1UVYzk9SpNKgzuXK9qzndHJBH0q4hLYzriwOdw6VSlgO3aDmna4mt CpJB5UQUDnvioXi8o5Iz+FRJahASFgXJArPuWCsWIzk0kgvqVpbcvg9vSoZlyoQfKKT3H0KzWzPJ yPlHWopI4o03EfLn0oH0M6S3y28dDSsETqu78KjYuOugrSpEvHSseQl7glBhe9JA9CvJtOSBWcYt hJ659qYMJLcFMt09KYtoo+Yj5fSkJIq3VoA5ZD8tUgnmocDGKFqC0KhjXbjGTVaTCqBnioS1BsoS r53ypw4pFgYKcfe781T3EjBlTcrDOCO9V0kLQBD27mnygyWKfbHtPNRrH5kvXpU2sJ6hLF5spxwB Ub4SEIRzUvUpaEEYRkaMnH4VQMYVNvpQwZPHiSPHYUkYVX2npQ9hLcq+VuZgudopfJV48PwRUWKZ SXaTtB/HFRRqxkYHtQ9RRdi0pBidGGflPNfqF/wSOG74o3ETfeXdgf8AARUySBtn9bPwHbzfGFxj 
5Tk9PpX3BJ+8OD1FdFIXQoPgH3qOxBSR2JwK3KRXJMbs3XJqu+XUimK5kagmNOlGOcda/mU/4KEx +V8U5pepRzn8xW9P4WUfBUVz56LJjCvyKmdQ5AHTvXkVF7zBu4+XbHECo6VGNrxb8c1lGOpDdi3A 4lC44IrrdJZpn64UUSV2Te7PUdBy0yrXr+mPiRVI5rSCVhuTO6sVO7rmu+0zK4ye1aJFpl1iAxHa qscpjcrtq4xsDkUJxumPpVHHlvgdKTbRS1RSlYedioZZwpAx+NEWToLbyYIAHHrWrA4LYJzXjy2O uKsacMS5JPNXrZlj+Vl3D6VhIq+paVVbJQYFaNsrOgB+6Klu7KNTcBDt24PrUkEZkTBPAqxMsqVY DHUU6IkMT2qdmNrQniLSEE8Yq7CxUls0Lcl7E6sSS7D6U5nMnOK0jqTYYfuqP4qkkIGC9XbURl3M /lI+D2r1/wDZekN14+SIn7qnn8q9DBx94zbsfrQknygdDTo4DGxJOc13vczQx4RuO48VPbxqGxV2 0M3uaRXYtUNSJOl3GDxsP8qzepfQ/NjxkGaHVRux97p9K/mZ+PJdfi5qG47ikjDn8KxmrMzvrY83 EwmwelUJ7+SC4RI1MszEAKBmtE9Ctmdtc+EdesoDdPpj7O52N/hWJHds6gsPLYdQeMVG4mxzXUck BZs8EDpWhHLiJAOR2NEdyS49yLWB5pj8qjNP8KLqfieA3On2b3EB6HYauWo4mpq/hjWvD0Rv7y0a O275BrL8Oz/8JpcutjbPMyA8+WalbCtdmJea9cabey21zbsk6SBNpU9TXosfgPxPfQwyw2T+W4yu Af8ACoauwWiOe8Yrd+DI1TUIGhn9CDzW94Z8Oav4p0UXttZM0RTcpVSeKGmVayuZng7Q9d8Uaxd2 dtayPJETkbTxgZ9K6+2+Fvio+dJ9jlPl/fG08fpUlqzVzifD+n6x4t1y4sbSzf7RCSHwp4OM1r+I /A/iDwvpb3l/ayCNSATtJ/pVwXcmTRx+laqNWtFljJ2EelaG4QhiWygqmrMhMz4tVW7t3kQOqqcE 7KrXt4FtFZHBZmCgg9zSepVj0vRfgn4v1qK3lht98Tpu+Rs/0ry/xBp1/wCF/Ec9jqELRzI+35gR VRViZaOxYk+UKR1xUMN2samSTjsOO9UwWjO30D4feJNZ0e61OGyaSyRS25QTxjPp7V59o2pNeQbp U8pu60raXKepqXMaKmQ3HtWZ5qjOMnPtRYSGopgXJ5zStIyybT3qkh9SN7E2ylmO7PTFTeUVgBI5 pBa5VEnl/Njcahkmzyx2nNUiZKxZH71RzxUGwr91wOfWmgRcXMjbs9Kg3iJ9z8+gqWtSkZ13rL2d xFGsZkaZwiqozyTgV9Eal+zN4z03wO2tPbs1uF3BSecfTHtRbUd9D540m/e5tf30XlyggSL/AHTX XQMqoAgBUdcVTWpKLghRSJAcCt3yjJGkjcoSBUyLSRc8O+CdW+I3juHR9GiMwEbM5XsVqLxfpF98 O/Et7pmqL5U8JICZ9BVR1Cpamdp4D+DniLxp4KufEMUBXTlG9W9sZ/pXB6JqB1ZCvGV9DWjjoY05 czZvK6K4Rxj8Kpx6bd+K/GNno2kwvNdSqW+VScAEZ5rPlbN1orn1ZH+xz4sBWN5ghK5KeYMj8MV8 7eMvDV58MPE/9kX0RMzZILDsOK15LLQyjWu7DLeYRSNn7o9a9O+DXwr1z42X97FpUf7iLILk4HTP Wp5erNHPU7fxX+y34g8A+FLm8kJm8rlzGQ3b2Fcl8HvgH4j+NHhmbULUG1tARhnO3II961hDmWhn OfKaPxU/Zx1/4Q+E7bUnMlzCzLuZBuGCfasHwZ4cu/iTqtrYaawE0nYtjvUysioy5jv/AIy/AzWv 
gZqdlFqi5hlTOQc85xXnUWwOzDhACatK6KasUBcXl19n+w2j3AnkVVwhPU47V9fWf7FOt3WnWt3O y27zgHa7he+O9ZuVtDNaux5b8Xvgnq3wOnt5J4TLbOQGfqOuOoriIgl4rTIw8vsM0m9CttDu/hJ8 L9W+Oniw6TpQHyKS7B8YxzXvWsfsWanp0l1bXV6scsIOVEinoKqGo6vuo83+D/7JutfETWNShtsM lqxUuWA7Zr1BP2OL7S7O8mnvlVoQcqJF9M0+pC1Vz5RXTrpvF9vodir3F1K38K54yAen1r6c8c/s leIPCHhr+1ZVXCDOzeP89qpQTMp1OWVj5msbwtbtJIMSd1HaraI00O49DUxVmb7oSaJZkVUHI9qd 9jKDDCrehGxO0GFGBU6WccEZYD5jWXUpDYrMTSAtzQbfFyTnAq9y7aEMunENuBwpqMRYfZk59aXL cm9id9JZ0+bmnfYm2ADjFT1sNDZLRWxkc0LFsxgZNW3dGb3Hy23luA3OarPYpMpAFTsVaxWgs2hR t3QdKozQeZ8+OaIjbKkkbuOny1nRWZDHAwKbBbD7uBpoAsZ2kHrUH2HcgVzlgKSEtxn/AAjpv0yX 24qM28sGEHzADrVtiaIFtHeFgTzmoiiKgQctUR3KexUuIHI24qounLLjPUe1aXISsQXdpuJAPFYt lH5UjBhn0obKSFn2wo2Rk1ieSZcEE5qWh3FvLIMo2k578VhzQEJgjLCokkhbj7aP7TFtbjFZ1zp+ 6VlJwBSsK2pm/wBlbYSzHcuaoSWIZxgZzTWiBkElmzIVBxiswaeVQsDRFj6GeisZM4zUrkSAoyYP rRISMhrf/SQiH61BcRL5hUjke1SFrFaOIgEfypWtUkXHcVD3K0K6qzK3GMVmGNWRlI5zU31AjljD Q7cYxVWHGwx7c+9G446DGsVUEZ4FZYXypDk8U4oczKeBllJz8tGQcjbzRLcUdipt+cDqajuI2JAB 4pMd7EUluVwN3ymnfZhbptJzml0J6mTLAInwOTVYwLKTkdPapW4bsorAfNJHBpmzCNgnf34o6lbG etojJlupqjfWeECoeKbZJm+TsIWrIj2x8Dn1p9CWQSg8BDhh1qsysj7nPNRsykVGgEzEk4NV3Pk4 U8ih6ldCdl+z4I5zSb08zpyalkrcbGTHKeM5qpc/K5z19MUIplddrEHGBUrKFcYOKdtCepYFp5kE hzztJFfpX/wSSX7J8VLmZ23OQwx/wEVhUZSR/XL8AJRD4quGYZLE/wAq+32UKwI711UhPcoXMLxO SOlQQLlDnvXR0GSSIFUL0FUmXYcCpAzLk7rWYEdq/mP/AOCjim0+J06jkyOx+nIrppbMD8/rA7Le KMjO0YrUChkI6YryaukySu8gZMU1swQhupNT0IlqyW2ByGc49q7XT2DICvFZ3KSsepaDPt2nqR3r 1zSZC6AtzTg7Mdjv9NO3aD0Peu/tAAvynNdKBD3k2JjH41VdmXkGtdirDVy77mHFZk+5XPOFPSsp LUE7IoM21drdarSOqjBqktBFy2h8vaoPFbUEA34714M2dqZdWEo2c8VqwRAyA9qxbvoNWLkVoZtw BC496mtkbdjPAo5bME9TUUguAeQKfHkk5GF7VS1Y5PUubMKuKneMrhmxj0ptaji7k0bZUnFThwsY GPmNEtEIej4XnkjtQ8pVwQO1XB2RLHp80ZbODVa4m3Lh60S6kM5zU5vKiIHJxxXtn7JUbzeO/NkO Bg8fgK9LCbmUj9dNu0ripmGIyfSuzqRawBt8YyOaeq4Jq0+hLJY3bj0qrqcgXSboHspx+VQ0O+h+ bni9gbTU3zyQc/lX8yXx7kdfi3qrD7pmP9KynqZrVnnUUW2DB9Oteifs86RFq/xw0iG6QSQO3Kkf 
7QqeljRas/WT9uv4i6d8FYbLSrGyB8+EklVPGDjtX4Y6zfX3jPxDHBpibrm5uFAjX0JANVCJi3aV j9Qte/Zh0f4a/A6G71+RF1VogcHBwea+B/AHhLVPiHeXdvpSvKlu20tjvjNTs2ar3jW8Y/CHxV4d 0sNqFqz28k6JlctwTj0r9iPCvhDR/gL+yNpmtW1kj3bW6E/LyScjtQp6WFKNmHwY/s39qH4C6xca rp6wPbrwCp/uE96wf2FfBPg46pqVibSJ7mBWBynJwtX8KJneLPmPx98ObHxB+2IdPMKrYPckmPHc FcV9S/tX/Ecfs+eNPD2gaRpwuLed1Vyqt8vzgdvrSSvqO3uXK37bHwb0zXPhpoWpR2Q+3XUsW5wm SAz4Ne0Louifs3/s7aTKbEytJbBWZYiSckjPFVzIzUm0fCHwo/ac8LfDrxrrl+0XzS72CNGRztwK +y/2Rvi23xo0vxNq+paelnpeWMJfI3L5ee9KMVJjcnFHxzY/Hrwx8L/HviKTTbcT3UjOUCoeTtx2 r7F+CeoWv7QP7N2tXviOwW0cRFkEikfwE98UL4rD3hc/HfwT8Dte8XXWpLokWzTrecRxuTtDAjOa s/E/4GeJ/hvo0V1eJ5kDEFiGz39hTqKzHHVH2b+zD8M/Cvxx+El7pwjig1eOPBYAZJ2k9/wr8w/i P4G1T4QeObzRL5X8uC4Bt2I+8Fx/WlBX1G3Z2P1A/wCCffx41bxN8VrTQb20VrIW0nzbic4Arzr/ AIKI6VbWv7QD/ZUCK28sAO+RVbiau7nxaYNqZJ6Vhabpl78SPEiaDpELvdMhk3Kp420r30HY/oN/ Yt+Gmq+Gf2atcsvE0KNLFbkKd+4/6tq/DjS/hlqvxR+K+qabpELpDHIfmCkAYA71UlyxJv71j0bx D+yb4l8GaJe30rtcJbn5lQhu2e1fOfhy5bWYWdk8hzzsPGPzpLQ1sSX0sltbkoN53BRmvatF/Zq8 S+LPC1pqsG5Um24weBk/SqdhIw/i/wDBfxB8EtEtNQv1e4tpCqlh8w5OO1d38Pv2Z9c+KOjR6jZX K+Q6bljEg6fSpauVE8H8Y+HtR8B+Mm0Ca2d7ppNkeVPqBn9a971D9jjxVb6Tp1yB5klyVxG7gHBO OmM1cY2IerPPvjF+zx4o+FvjHSNPMUpa8xmNBnGWC9hXtq/sD+JZtRDPd4zEz+VvXPHt1oe4lsfJ Wq2154R8W3+hXsDRy20uze6kbsD/AOvU8tsSm70Hem1YcfM9h/Zg8Ay+OvjzoUb2P2ux+/JlSQCG GK/ZP9vv4zX3wQt7bRNP0nzrCaBtzBW+XnHb61EU3O46i5YH4tfC74TX3xw1i9ks51s98wLLvA7e 9dr8b/2dtb+BelwzyRvJDIQgcDIyTgHgUN+8Q3ZIvXn7Kvin/hS2ma+ASbqWLDg84ZselWvjD8Bf EHwn8B6VqF4SsEoXv6tim9FcfMfWH/BK2zW5+MupTyIHcJJtB/3BXkP7cHhH/hMf2n9XgjIgmMzq MHpnHrVQ1Yqj52fr3+zn8Ev7I/YmOjS3AaU2W3zcjj5W/DvX89PizwEfg74ha0Nx9qGxiWyDnH0q 1vqTCPIz2f4PfAHXviv8PdV8QQxv9li+aPcMcbSemPavrX/gl/4OstU+J97cXluXvbSGRR5iEfwZ om+Ro0nLSx6B4j8eeM5P2zp7dIZl0EXZQrhtoBK+31r1H9vH9j+8+MHxA07UvDzC3aC0kaYqQM4O e/sK02XMzmceVXPze+CP7N2s/GHXfEumRgo2nO0Urtxk7N3evRvhZ8Yb79nKDU/BWmW3n6wAU81c 9QuM5HHcVz+05nyo1Sd02fpl+y7BqN9+zdr1346UG5kj3KZGz/Afp3r85dG+PuuWvhO88L+E7Uo7 
ToIZkLDKjr6jvXRTl7OOoVVd2P0v1JG0L9h+FPFyCTU2gVdx+YliGx6d8V8F/sffspa/L4j0fxY0 xsdPdlKfMBuBI9fpXPzOTNqcOTc+tP8AgqzbNGdDjtoxNlPmY/79fi7p8l54i8ZpoOnQNPdTShNq g8LkAn9a3g/skTnZM/eD4X/BTwz8FtI8LadqSRT606ISDgkYbB6Vk/t9XfiqH4neHYfDBmjsEI+0 LFnH+sHse1Yzi4zbexnTd48x7T+0V8GLz4x/s76LZWgC38vls7scHhjnrX5B2X7MurWPxjXwOjFr jY7McjHy4zz071SftNi4S6s/ST9hz4F/8KS+Kmv2M9wGvT5hQbh2SvnP49aJ8RZvjR4huop5F0xZ HIEcmcjA7Yrfl5URKfPK3Q+wv+Ce851b4YeJWztvwcbm4JbyzXwb8SdG+I2m6z4gmurmUWHn/L5b 7vlx9KiOxXNy6HT/ALBPw1u/FnxftNYltleFIXG+Q4POD0r6n/bk8XeINC1+SxtoQNMaN9x3kYoT fNYznBSfMfkTa6cnkF0O4sck+tXfsjOA3RfStbGsdjXtLaKKM7hh+3FNkiJb5hkUrXFPeyGx2iuz A8elV0sSXJ7e9ZpalLRE0VttjJHBpj2jMobNaJBJuxIwNxGABjFVk08nkDkUPQnc0hlVBPTvVaaR bgnyx068VKj1KWhnqrSc46VZ2ZwQMGq5SG9SK5txKBk/NTI4DCw7jvUNWLexau4lufufLjtispYt u4EYFStHYGiU2oigVuzVmS2gQbc02rgnoUjAEGMflVWeDzFCgcinbQaLMMT4A7Cq07eXJwMip2Cx VlQBCyjk9hVNbBWIbGGqdbgNMOWIYc01rVYYyQM5ppu5VtDHltsxHI5NZlpaBGYsMkdKolETW8U7 nd0rMvLVbRAy9O1OTJtqMjh8yAt2rnpFEZI65PpUSVytiC5gHyn7vNRXcfybgOtAGIfMWLYelSfZ /Ij4Gc1DeoWMyWApC2TzVNIE+zkEfpQtGJooTbLZAuM5qldoJEwowfXFVuQjnVjaKQkjn1qVkVlJ I59cUrDuQQ25RSc5zSC22ocHBqWi1sEcAjXJPJqhdRKp6DNRYE7lFk+QlhzWPNJsXCcE1SSFqRCJ o1y7ZBqvcxKIwp5yeuKbWlx36DHtIxjmqNzGpO0DAHoKzbKiU/s4WIsOtVgq529fXNC1GzLkBLsg PfjipJotkK5O40MjcpzxliHj7darRs5lJxgVKVmBXeN1Qsaps2Vx93PemMom23EqG+WqVzF5ERAO TmpH0KUifuVz19qIwXBC/wAqpaIm1yAwbs461XeIsBv6CoeoEC24lkJBwvpVaeII4UfMBSRfQmKi UfTtioo4PNfI4pshDrbKTsuOnrTJtryHI+anokO92Yki75ivb6U2eBy6gHC+1TzdBWNHcUgcKeNh r9Iv+CSlwrfFO4jYfvBu/wDQRWNRWsUmf1yfAg7vFcvHc/yr7tb5NpxkV10ugMpzyPJNgHiofLyc DqPSt2FyCVy7YNQTIYEySDSa0EmZN3H/AKI5PcV/M9/wUdgab4nTMBgIWB/MV0UdIsfU/PTTRmyR lH3hmr0g8uME8fSvIq6zBogdhJHjGKbGGZQOorOWiJtqSKhdhk12OlKUZFcZXHpWMSj0/wAPgiXB +5XsmkKBgA8VvBaky0PQLJvlAxXcWLBkwOK6Ik7Ft1ypxUKoFUZ+8PatdDToVZHYSkDis27AduT8 w9qT3FHQoSwBgM9ahkg8j7wDelDZVixE29VBOCK1UkKuFBr5ypozqSNOCBozy24GtqEYTk4PahJP UHdElpbtbOzO+7cc1pbRvyp+WnLcdi6p2pU8a78DsalaA9i5s2fKTTWyXAzupN3HEsMD5eBxipop 
Q8XIw3rQ3cY1WBYYOatOS7AKMAd8VpYi4xmUE89KyLmXzIzW0TNnNXMZuON+CBX0B+yQ234gNCxy MH+Qr08GtyOh+ve3zLUc4Iqe32/ZcMcn1rq2ZO41kZUBAz9Kl83CglKogch7Vma4fK0i5JG7KHt7 UOwPRH5t+LVxpmqfQ/yr+Yn463BPxY1SLsJjz+VYz0IRwwYyxKucADrXsv7L9r9o+PujxlsMMnj2 IrM1Xu6n3L/wVAlV9e0tYmxIsLA54/irwP8AYL+Gmj654/bUrrZJqMUTmNGx6Z/nWkHqc795tnLf tufFPWtU+LZ0vUEaPT4UdUiOcNyMGv0D/Yz+G2meDf2V9R8TQQKb64h87JGOdjf4VNTfQunLl3Pj 62/bRtvEXg2bTdbtFWdLqNEKBmr9Rbe20zVv2atFm1ghdLaJDsccdTioitLly1946H4XWug6Z8H/ ABF/wj8afYypx5Y4+4a+G/2CbaM/GnXBgCQrIc/8AFapcyuZSl7R2MHVrZtR/beaFTtZJXJ98Fa+ 8/jtP4Qh8bWX9vRRPeK37veOc7h7+tD00LUrR5B/7TPia10f4a6FfPBvsfOiCgKT1fipf2mfiho+ m/syadezWoeF4lCgoe5NTJCUbI/ni0jwZD8Uvi1p0MEYht7q4V2XGMjcMjmv1z/a91V/2dfhJY+H PDcQtFljCv5fHGSD+hq6emplLXQ/HP4e6vY+EvFjahqcZuhHnzdyk8/hX75/DbxLYfGj9l959IQ2 lt5agqq4yCD2NQpWlc1WkbFXVNFg+Cv7HNtqWmwotztjLE/LuPNfnB4u/a8b4ifCW303ULULNtVd 43HnnvTneRGzseCfsy+IdY8N/Hewt/D7SO0mftCL0xkZzj2r7t/4KPaLpNxaaVdlUi1ggeZt6/f5 opbMuotUeZf8E4J0uv2gFtwwV1gl7+wpP+CiDovx1nXcdyFwTj3FOOhVT3D4dluPs+mrJuyGFfpL /wAEuvCem6v401q9mtFe8iSQRPt6DZRH4imrx0P08+HGsX15oPj+G8kBijdxEu7OB5Rr5g/Yi0O1 tPBfjnUpLUfbkZjFLg/88j3+taSXOznUlc0f2LvEt78SPCfj2HxEi3QilKw4bfgeUf61+Ivj7RDY fE3W4Lc+RHHcEIBxgYFNpNGkdWcnfxlLUgP8y859xX73fsd3f9p/sfPc3qmWSKJdjAZP3WNYttSN JPlR1Pxd8KWvxH/Yv0x7yENcz+UE3jkEkiuY/Zd+DEfwFtPDMWrXryi5tuIiAe+O1bWJpvTU4T9q r4f6Xe/tteFlghRUaN2dSMZIkXmvYP2nPF2o+Ev2nfBmi6RZGe0liYy4BAGJFHUe1JtrchTT2PfP ix4bsvEP7S3h6G5gVpVtZG24zjDCsjUPBv2H9qp7u61zydOWOUfYzIgHbt1pS2ug+Hc/Gn/goxLo 998ebh9FCbkZ/NZR1ORXxosbTaVvPBYY/OhSuik7s+2f2EPiq3gf4y6NoaWyzPLGTvOeMEDt9a+0 f+Cp/wAcE0TxHp+kLbCR57d8nB4+bH9acdxzlzPlPIv2Jv2bW0X4b3vjTV7poraS4jkgUYPy46fm K+2/21L+z8V/ssaRqFrbDynuLchmBHVzTil1IknJ2XQ6i7UW37IPg5RGPKMlsOf+uhr59/4KgYX4 H+HooAFjzFn/AL7rRR0M93ZHgX/BLSA2/wAX9Rwct5cm0f8AABXiX7bT3Ft+0drs8UhiuhO+CD9K mC5WazXJY/T34FeI9Ztf2E5L+5u5DP8AYsq2cn7rV+Amia5ceJPE0F1rRknhYElpVPIo8yVLmdkf 0wfsh6n4bT9mW9MHlQ2S2/3emRsNfOP7FGsaRqfx01ptHiCqok5VeD8gol76Vxpc7aPRfEP7QWjD 
49S+Fkts6sbsbm2HsRnn8a90+LXxGvPBnxs0jQbW3NxHeWkgmOD8vIXt7GrcrxsS9fdMrxB4asP2 bvBfizVtGhFzf3rGSUKO+wjtX8/fhTx5qJ+IN14nksWe5lDSSblYYOP/AK1YxpuLuaXW3Y/cT9nj xDL+0f8As1avdagBbxxQEKiHI+4T3r8efCHj69+DfiS7NvYmW3t5NiuVYD9K1+KNib80rn7RwRr+ 0D+yZpuqaoNil4XCjnuT3+lejeN2l8M/Avwdb6V/o0Xm24JTj5fM5FRCPLuN1ObbofN3/BTTV49P 07Q0iUySNDjIXJJ315d+xx8BrL4XeE9T+JHiSJEnkQyWnmDlVKnjseq1cXaokZ1Fem2eQfDj4pah +0V+1vYaotxILWMsLWNuAEJU96/Ur9pb496V8HfG9hY3cHn3FzxnaT1YDtWldXbRNOX7tROt+Mfx D1DQfhP4ev8ASomLSywjaoPQvg10Fr4L0qy1L/hLmiQ+IntZDtx82SOffsO1ZUl7Najat7p+SCfG fxdp3x31/UkSWO5MrhE54BUZ7V9g/sc/GC8+M3j7XNH1+He43ghiTn5P/r1s53kVGnp5ng3jfxrr X7Pvxn8Rab4bDx2Tu7eTHnAwoA6V9S/sheMZ/j38PfEX9v2ytJHkYJLfwE1ndqRShzK3U8c/Zb8c 3nhT9o+78N20KpZqX8sKx6ADt+NRf8FCvHmqR/EeHSU/495InLfN6GrWswk/Z+6z86NL00WtmsYO QPatpdP3Rg5xjtWjkNIS4s8gADDU5olhjCsMv9KjmHbUILMO+T29quPYAuuDxUxepTV0Maz/ANIC KBt70XloIiAvSrbFbQqC12/L3qxFA4VieBVbkRKSwFoyCv6VDbWQVDtGCeuKNg6liKx25BFReQXY jHTpSi7lNIpvb4zkfNnvVhYFhh3NyaUyYlOVNpV1HB61XuoTK+3G3PtULVl+RJ9j224Qndt9qzJY 8OOOafUSWhQ+xPb3TMx3huQPSrMcKNGzNwfpVPYm/K7GUpZroKM7TVp7MKxPUVNi0yjHaESGTt6V E9g0zNKG2qOwNR1Fe5Vdd4wBn1qaG3zAVPNDRSlpYxpE8pipFZjQZY9vpQ7jijJuIhDnjNZsqi5i AYcn2oBokigKwGMVkz2IVC2OKroQZDESxgFenSoJInbHeobtoXa6Kbx+S+5xmoDukz6HpUsRmXNo 9shVuWJ4pPsh+yAYBanEmRhKqhyrrk+uKZNaloyBTbsQlqUlsygw4BFZlzD9nbAHy0J3G1YzWlCS Y7VLKv8Ao5bGPSlJFRMzbuTcTzUTxMrDPJNSIJSMhCMmsOazCXfy8g1Kepa2C4iLuEC/KPakkiWV doXpV30EZVzsifG3OPasdWZpyrLkdqyaHHcWeMbSo7VnRIu0+tCKkU5LfPIGCKYchMEUEoqlMZYE j2qNmYKAR19KTHYVVJiweRWXd24kTA4WkIyo4vIfaTwarahbCNMKapK4rmRZyFZCGG7HTNWoVeWZ wo25ptWBDJIzExA5I6msswusx3ZKnpWaAkjjZ5dhG0UXkKWrgLzSS1DoQzOCgIXBpiQbwCpxTasw Qx4nDhs4xTJEE75xgik2NIrRwYZmIBqNozGmWOc1IyMx7raQLwApOfwr9D/+CS8iwfF6eZvmLBgB 9VFTNXEf17fAq2MXix+clsn9K+8fLMcRBOTXRS0sD2Mp1K4I6n2qIZic4Jya6BakU0ZX5gaz5IWI MhfI9KBFeSTdauXHOK/my/4KYxtbfELco/1rEn8xWsPhZUdT857RVis44wcbVxmgHzhgnIFeRJe+ wk7MQYdyDwB2oK7QMHipmCY5oiCCDzXd6NuCR7uWrJIS3PR9LG2avVtEj+VSa3irA9T07Tjuxgc1 
3Fkm7GBj1rWMgcTSeIb8A81BLFtzg8imrstOyMrJZGbvVByFGcbqpXZNyk7bY2Oee1UQ7CPJ5aqt oUnrYtxjAB7+lbNswJ96+fqROxGjb7nWtNQZl2njHes4jluaUahUVTz74q/Cd3yDoKTkJInjXzCQ eAKtR5VwQOKa1QmWpI9xBJpGPlPuA4pJDWhIikrknirBXaoIGVqU9bDIHUIAw4NLNqTbACMLW8Fd mcjLa8EIY5zmsmXUBNGSDjFdcYE9DNnkym7d1XNfRf7H1sZfH3mlvl2sP0Fd+FVjG9z9f1xEm3rm kxtjro6htoLayMj7e9aEbbnIJoYgbhs4rO1t92iXOB/Af5UXCx+aPjORY9H1Qjk4P/oNfzAfGN/O +MOplvuGRj/Ksp6mb3OIlwcBTwK9v/ZR1W20v9ovSWumCx7WBZug5FK1kVurH6Uf8FDvgqfiVfWm r6XqIEcULZCOvrn+lfjv8KviVqHwF+KtrqCyNKiSCGUA/wB4gZ4prYyirOx+qH7U3g/w78evhjD4 mtZo49SRA25cZPJJH6V6T+xz8QtL8bfswXfhNbkWt9DCIZCSBztPr9amb1HKN0fMHib9kTw38MPh 1NqmrX63dyLiORQ+09Pp9K+zY9W0v9of9kfT9C029W1kSJMkEAjGT3px+Fl392xc+Erab+zN+zZf aRqd8txcSoFDkgknaR2+tcF+wb4HtLPxVqfiW71VY45lby0Z143Lj+lOm76GUVyO55x8RpLL4Yft Up4nmvftFrLcFcZBxuKjt9K9t/ac+GVh8dPH2ja3aaoIIkYOQhX++D3+lOT6Cs3LmF/av+KGlnw7 4Z8FQXCSuJImeTcONsnPt3rsv2r9P0S//ZjsNNivo5ZIY0wAynkEmixpKZ+L3wM8Qx+H/jHoYvGW OFHCl8/7Qr9HP+CmHjKwvodIGm3CXMxiyQrA4+b2pRe6IatZnzH+zL8GfDXxb8G6wmpzxw6g54Jx kHb71+g3w5udE/ZV/ZmuNMmvluZNihSCCehHQVkuxclsN8P/ABA0X9pP9llNDjvRZyqEDkkKe57/ AFr4s+NHwW8I/CT4LafaLLFc6ntQeYNpJOTzwa6VZIxnfnuejfseeH/Dfwb8Cah4w1J4ZdVdC0ZJ BIypyPXtX5wfGv4zal8eviZdanM7RWgn2QLn+Fsc80qa0Npa2P04/YF+Cel/D7x1H4rvtWXcbdx5 bMn8Q/8ArV8y/t4+M9O8U/Hi+ls3EkXmsDt57igmq+do+TrvT0lsgqEhV7Yr7s/4J1/G+x+G3xCv LC+xEtyjkOeP4cUdTROyP2G8LXXhm00Dxfc/2pCJb3e6jzF/55ketfEH7Ifxq0rT38XeEruZFLuy xyMeGHl4znp3oUrI5XF3PS/hrrnhn9l74deLLtr+OeS7JZQrKefLI7Gvwc17xPqHxF8SX+rWVnLK s8u47Y2OKcNUzoi7Iy9QsdWh055JNMmEajDHym/wr+hT/gnhqEC/si3c2pWjC1ihDbWjPQIx6UnG 5crSR+e/7RP7d51CDR9B8PRGPTrW6iEqkMvAcHp9K/TnQfiz4S8XaB4e8QX1zEzWkARUcjjLZ9c9 q3ikYX3Pi79t3x3FrvxU8OeJ/C10kupKwVYlcD5WkG71Pavdb39oe8vfiZ4VtL3R5H1ORQGn8pzg bxnnpTqxutDLDp31PXP2rfjLZfBj9oDw5qVxIJJJIXRu+3c4HavPPGGl6D48+MkvjC78ReVB5MpF uGQjnHvntUxSSszaq77H44fGy7j8Z/G/VI9CtzdweaxyiE56ema4Z/DuvIjQDSJcqc48pu34Vk7J 2NILQ+lP2IdPt7D4/Wmq65/xLxbBo9ko29cHvivrP/gqJrnh3xX4o0280qWOe5WJuUIP8XtTi7aE 
uPv3PU/2IPjnoXjf4FTeGNeZYEsymRJwCVUnv9a9Q/aI+L/hLxn+z+nhvTblI47F4yoGAMoSw71m m3KxpFWbOc+GH7QmheNf2fNM0W+nWNtPaLhu5Ulu9eeftvfEnQvid8GtOihu1EkRTYFI7MTXU3ZW OZe7UueC/wDBNf4h6d8PPi5fy6xIsJZJNhY9torkf23dR03xp8ZdS1TR5t5eRiSvfpUXsbVJe0sf af7F3x80XWvgg/hLxC4WO3QRsHHDYB9cetfBv7V0WgJ4nj0/wtbRxWMalS0Yx349aUHzSsTGHK2z m/AvxP8AEHhL4cXGg29wYrWRcbQ+OMEf1r2D9hD4zxfBb4mmO+TatxG4Z8E9QBVz30NqCUb3P0Ag 8M+B7n9oCXxw13EbvDyEfL14Pr7V3mmftU+FPFnxD1/Vbltt1Yxyx2zsmOq54/Gou3JIzjBqbkeD /s7ftUWnjhvHQ8Ty77czN9lV+Rt8v/GvPv2etZ8GeMr3xFa6jbRLAXKwM64+XZ7muypy8uhzptTZ 9JQfGDwp+zr8B7/R9EZWknUIAi8Z2kdvrXh3wAfwj8RPhddWmvxxLeO6szuO4B9TXPszojG0Wz2D 4ufH3Rvg1+z3YeG/Duy5ETRjavQAE+n1r1j4RfHvQ/iL8J9Cj1qRIRa+WTGT3DZ706rS2MaKetzk /wBsX4jeEPG/iTQJLd1lihxkFe28H1r5d/a2/aDPjfw7aeFNAzDp8eN4UEcA/wCBrOCbnzHTZey5 T5g+D/ieX4PeNNK1OCEYhdYzjPQkZr9Zfidr3gf4z63pms6g0b3EI3EOBx82fX2rok7nKouMkzot X/aY0G/8W6Ro8Z/4lVtET93jKnIry/TP2mrS6/aqmuZblhoUUcqCMjg5xisqqtDQ0+Kpcy9Z+LHh fVv2jri7SEfYXLhyUIyTivc/D/ifwP8ACDxNquvaYkcd9cK7KUXuVx6+1Z0r9TduyPmD4SfGnSvG vxS8QXXiOAbrl3MZIJwCuK+lvDvxZ8J/BDwNrQ0RUNzcgkBVxk7SO1dE0nsZUm07ny1+yr4607Tv ixJ4n1thFdyqx6ZxkDP8qtftmeOdK+Jnj23vdLbeyqwYlcdTWkEkZ1051LnyJZ22ZpEHY1rwwDHP aspbm3QJLyJCA0e49iBVh7dLnBxg0uUZWaMRcEYapoIvK5ZciiOrHJ2I40RpXxx+FPg01pZDz8tV YV7oryWRDE5xirEFq05wfm9qbdkStWM8l33ALtUVVtbMEOSeR2qVK452WwiwFmyOTVlLTyXz96ha Ma1RXnsluZhgBfUVHJYggj0pTdwjEoNZdOOBUssCSqGHUURRViq0YfgLtP0rI+ysZD8pPviqEtGV fs7xzYPIqi0Bjd1zyTVJEzV2PihEIXI5NTSWjeYV7HvSZF3YrwWZCupPJPFZzWTWzkMc1Fuo0ygk GZsAcVaaPZ93igfUwr5SxJArNijNwp3cYoLWhVW0EqOpHGeDVFLHyiQACPep6D3I/JEBIccms+ZQ 2UI+WiLuKSsjOFkuNgGKzryzMDAZxSktRJ6EktklxAAcc1zt1YmxlAB3fSmloJO+hTu185xuNVQp wQTxU7BYzntgwOB+lLFbszD5fl9alu4WKt3b7HIQ5NYVxAfMCv2pLTUe+hiTW6m5OFyPWmTBmhII yoq07iehiJyQfTtVmSN9+7vUyEiN0aIhmA5rIaIrK7j1pWK2JmZnUEdKqEBWOehPpRsJamZdxK8g Cjjuaz5sM+1RyKmTLSsIYhDG28ZJ6VjGDysEnihEt3IpJwrHjcO3FU2JgC55zT2QFOdd0xCjB+lU nDRyBSM1BWxYuEJA28VRkhz3PFShIy5oRO4B4x7VUkjZnIx8oqkxMrQ2auTtPzVbjj2ZXPPrUuWo 
hoZMkgjhLj9yB+FaDSxWMJlZQBnaMdyacmi4xsF5P9kWBJ4mi3D5cqatywLsKMoZScmiInpoWkkj 0+Pg7VHYCmSMbq2E8tk32ZukhQ0Pcroa1k0Rg2x4K9gKvROFTaBj2pWsVeyLmlTXWi3gurCYwXK9 GXivVI/jP4zwpbVJSq9P3g/wq4ySJtcxte8Wat43dJdVunudg+VWOa56KxhkmMojAfPpTk7jSNXU p4bW3ja4UsGIAAGeTTJbYaffRpJbGEOMglSM0oq5aZvDTBfZ3gFB0qBbCDlBGEYHir6CZZd0jURu m4jpkVdjiCWxkIwo9KroQtzp9Nt7q40w3i2kjRAZVjGRxWjpVydShSYjbu7elJaotl6WUXM/2YRm aYdFxmtzR/MtnMMsBt5OwYEfzqlZaEpm+kDNjYoyOtXxeqjpC+S7dBjrRsK51dl4Z1GJAyac+1u4 jaqMlvDbamttKhS4YZwRjpVRaktCebU662UwptY9OBRd3EdrCXZj+AzUrQtq7JIraeO0juzAywNj 5ipFb80SCxE7AMeAo61T0BqzL8VpLZ6etzLavEr4w2w966DRdGeaPzZTuHUZNLcpMtrOvmOh4RTz Xb6Tbzz6d5thaPMrdCENTayJm7DbjR73T0E13ZmHJ5yDXUWNvHsSYD5SOKaaYkaEsUcS+aWOWPAr qtP8J39xbrOlo7IRkZU1DepdtLmSIbq51k2S2/7wHkAEmutvvCepwokUdnIzAckoRW11Yzbdyro+ mSCeSK4jKyoehHSt23hQORsDn3qL3NFqb3kqbQArhh0pI7IT8GqQXtobdppUdtH8o4PXiiKzEbkB OKl6MImmlv5y8HaB6Vo2mmOcyb/l9DRcZNJDMqlIe9XbHSJ2ssyDDcUm7CRXGnNZ3GXJb8KdPZmb DDO30xVJiZftrXzZAMEKPUVdQJZyk7d34VNxNFeVxOWBTqfSsv8AsnfNsQbR1zVJoEiWLT2eTZt4 HU4p81mISI0U/XFIvoWLfTfKfpk45qOTT9wO3160EpFaexYqFxkVVlthboRjkelSkNbmJIjzqWVS QPUVRtIvLnORgemKtaA9y9f6ekVv5x79AKy9O0Azp5qjIbrQKxTvtPFjLkDefSsUxPKWZhgH+GhI tbGNJp7Pk9AKxbvTzdSjacDuKZJdXQILG0eVPvdwBXMbFuedm38KVxPUg1LTikKMuQR1rLudPgul DsfnHtVIm1zm760KvgjK9qhjs89sCkOWhh6h4eVN0ob5s1jrpoklQjoeuaq+g1sZmowfZ9Q8sDgV QvlDNtC7T7Vnew2lYwZrMW067gCTWfe2ymchQFqyNjBupMfKBuAqnHAGQ8YU9sUJDuY8tuFmIxn0 qtc2SHhvv+lRezGtTktWiaF9oT8azJbVnj64NW2DRXaEqBvrIkXy5W3LuWpZGxkTR+WpOBg1z15E XYHH6UrF7lea3V7fcRyCOlc3czxrdD5SM+1RFXYm7DLuAtGD3rGeDyASDljTasK9zI+cHbjjvWXd SrGrKFpAmZEcRmiJU8Zolt0lhqHqUmZchSOAKByPasC7vPPcKq4A68VcVcllOXodo2j0qtFC1wCT /DUzjYUTLuY2klGFwBVO6jDMEYdO5qFqy9irsQngdKpTgRSHPQ9KGrgnYzXgKNuxnNQTBSwqHcpF a7h3IAo6VgzpJGwJ4WmtB2JWXzcE9RVO5YZBK5oJZWkl8srgfpTLk+djjp3qHoCM7dsYkiqaBDIW K800JlkTJHwfmNMkPlJuXg+gpS0BGTKCWyKhUZkA700we5L5QSX5h1r6L/YbYaf+0rAT0ctj9Kie oH9iXwtgb/hKrJVPVefzFfpYv/HuEq4ME77laT5QBniqqxGZyCOO1dCGhHQWSHd89ZsknmDIG2gG 
UFby5mJHavwu/wCCp5Cm2QdWXJ/76rakhrU/EbSyDbuo/gOK0xJujxjpXmV5e+wkiqjeSSwPJ7VL GGc5c1i9SNgSPGSOxrptNfkHvSGemaOmduVr2HRv3aIRVR3K6HqOm4kxkciu90wApntWqRL1RuSY iQNnPtUD5clm/CqS1EtjNZQ+VccVQkAQ/J2rZkplKRmJBB5NNZdgwRzUvQZ8keZuXaO3ekX5lwet c0kjqTuLCcMT6VKVLZfpUvYT3LUcpRRx+NPik35z1qUx3JYU/vHJ96Tf85AHy1cdyXG5cVwSFHBq y0u7AIwRVPQViz5hJGeaspqaROUaPLHocVi9GNaouR7riBkPyMehplnK1nEELbiO5qtRNJGxuaTB X8angvAWKgYxRHcQl3Ltt+TySDX6r/8ABP8Al87Srtx90OB/47XXR6lpXR+oMw+5jrVuNcCtCWrD lG6Q4pkkgHA49aaIY+Hh8dc1zXxCZB4amUjoO9KQuh+U3xgka58AawQcBIm/9BNfyVeKLlZ/iDeb Rhix5/KueejF1KV3OLeDdjdivqH9gL4GWHx3+Pol1EK9vFHIPKYA5GAaSdgkj7j/AG3PCnw4+Fc8 uj2tjDFfKrY2x9CPxr8LtT8S3Wp6/aaPpXmNe3FwiRqin7pYAn9atQvoSnyn6IfFX9jq8+Efwstd Xv7rGosqscsvTJz/ACr5G0S8u9Xsma0tmumj4fCk8/hUxVm0VfmIYBe+IfEGmaNJbS2U15OiqTGR wWA7/Wv3a8Tfs9+C/wBlr4K6VceKohdvL5YLtFuOScdqt6Iznocb8ev2TvDnxV/Z9sfE3h2NbO3f YwaNAOCT6/StL9j79hPw1rPwl1rUrm5F3cqhY5VTg7Cf6VD0VxrY+Z/2UP2ftL8dftP32m3UYksr YyAIy8EAA19Y/tA+IPhT8DPiDc6Jc6ZEbgRvgCEnp+Naw97UlS0Pnn9kH9nHRP2jPjFqviu2+XR7 TePs2wAHKhhweexr6Z1SD4WeINQ8T+H2tLe1ubMvGxZNvITPc+9KWsrDUuh5D+xZ+xroSaf4s8U3 jC70qFmeBCoICeXnjH0r2PQPgn4K/ab+Ceu3GhWKW5siMt5W0khS3c1o4JIcndn5/fsa/s2J8Vvj 8dLvSBaaajo8XBDYAavt/wDbX+E3gDQvB9xZWS28Go23OBgHI59awvzPQUZH43+EV1nWdOL2+nPP ECAjKrHI/AVd8SLe6SLe01CE2jzuoGQRgZx3q4+Zaeh91t+xXLqPwHh13Srw3U6QCRlG3tk9vpX5 +aNrDLf2b6kTH9ndRcRMOc5z061cldiTP34/Yps/AH7R9jqVrBp8bz2alXLxEc7c+tflL+0r4Ntv h58etes7VQluLlgqL24FJauxDl7x5PCjNNjNMt9TbVbySysozNdIfmQA03oO3U3pNB1q10ma5m0u SGKPqxjb/CrXwrs4vHXjvQLG5XdZ3E8ZOR/tgdKnlcjWNj9O/wDgpX8BNF+FPhbQn0m3RJWRcsq4 /jr8tPDUGp67dzQ2dk935bYYhCf5VqlZGN+aTRmau11ovi21hv7cxL3Qg8jIzX752XwM8MeOf2HY tUstKX7WLQOpWI5yAx/pT3KW9j8B/Dlze6PdJBqNkbInhdykZ/OvWobTfJkt+VQ+w90L5zR3ywQx mWRugC5rrNJ0DWb7UWtI7CQsATjY3b8KOW6uUtCLSIry48WyaOLUtdIGLKoJxitPQ9M1LUdY1OA2 DAWhIYhSeQM0kVpsdd8EdEf4gfFDRdPurJrizeVTIChO3DCv0I/4KQ/BK1+Hd/oZ0PTQkLQHzWVC MfP/AIVdPR3HUVo3R+c1jMqwkA528UsUYefLcdzVshao2LXw7feLdy6Xam52feZQTj8q3L3wtf8A 
hzS7eTU7R4IyykkqeeaXN0Dlsftf8NPhdoPjL9j8X1rpiNdmz3K3lnJO1q/ES5TUvCtiovrNoGEi ryCM81SVlccn0PqX9mDRNPvPjFaXOq23n2xic7Qhb0r0T9sC30a/+KUB0SH7HFGj702bc8j1rNNu Ycllc+e9DN1raXbWls8whOGKqT2zX0v+yJ8HLb41/Faza+O2OCJjJDgHJGD3p1J68qM+Wz1Pvj4l fFDw14F+LVv4Pg08bvM8o7YjjqB/WvD/ANtj4CQeEPFGlapo1kZJpYjujjjPdsZ4qqb9mtQqq1j4 50fT72+1u4smtiZ1ydgBOMCvUPhH4Ksda8WiDXlWFFuFUI/fp61d76iTP0O/a4+Euj+Gvg9anSra OBQ8YDKMcbq/NTwXoN5fa7bxwWzX8aXCK42k4GRnpSc7xuEbybP1f/aC+GmlaX+z3a3IskhlGwcL 05Nfl8t39gsI1z0TiiErq5pDY+gf2f8A4CXnxQ0671G+zb2bSDYx7gj3r9APEHh7RP2d/hLZPFbL Pgou8J97JxninUdkc0m3Oxe1HwvpPxY+EZ1T7IFk8nenyc5wTivzZGm6hovk+fabIpMCIMCPaso+ 6dTWiPQ/DvguZfF9i+rWxhsT8xDKccEetfoLp3jvQ73xNZaFptvujMZyQhwMH/69LWUrF30M3X/B mg/Dr4pR6ldKiLIjnGO/Fd94I8ZaZ4+8czW1tbb7dA2ZChx0p1Hyqxio3kfN3x80qz0rxrPFZKqM dxfbXiuhaSZxu6H1qqfwmq0djo7jTGhfaPmrT0+xWPl1yKtMze5f2rPKyou1B7VBFZmVcjpTZUTV sbeOENvBb8KmtiLo7FUqB7UnoVJF+K0Y6mip/Kurv1eN1UHAApELQzWsWlbLDmr1rpqRONwyKANP VbWK2i3KuCR2rnbe2+1RAlevepKLLaeChUDBBpPsJKDA6dTSHYvWFpsJbtTZFjlbyxHgk/exTEy/ c6fFaQBUG5scnFczHCUaRSPlzxTuAz7M11GUTKkeoxWWul7XJdt2OuaqItSvdRJEgaIcelZUmnea 28jGfamFiGXSHKZZiV9KrqZ7OPyo+E780Mqxm+RvkIHJHWql3Z7YyUXc30oQrmOtq0cW6ReT2xVC WONcnyipx2WmCKFtbEwSN1UnvWXcWO6MlRg9qVhPQyWjlayZJUAbtWYukiOEMfyoRCetjnb3Fq+W TcvTGKz5/njBC7RVFMy3tFYEdSfasK5tjDgDt0px1FsZI08tMzudzHpntWXeWXBJHSpa1Ki7owZb UrHuPzEdK566gN2pIJVh7U0JoznsXKAKMnvTLizaNAFHB60lLUmSMpLcRuxfgdjWBLZST3RIbAFP lQ02jM1O3aTn0rAuIdxG0dOtFhSkZRRribavQetJc2y7Co6/SploStTnbi3EmExz2rMntmTKkYxU Js0WhhLEYpjxkGsa+tldm+XBz6Va7kPUyLmRgioByKxpVfzMmlLUajYrXB3RFQMGsN7YSQ7SPmrN IGjPlt3gTC9O+KpTuI7fd27iiwlcyyqn5gvHpiskIqyE45qlowZnyQmSYsTgelMVGj3hTSmxRMaf dHk9DWI7G6ZlPX1rNFsZaRbA0bD5h3qrcWx53ckUN2GolC5nEUYP6YrNdBNhhxRYb0CZNm3mqU+Z CQwGO1TYZnT/ACYUHH0pRCpXd1PpiovqNmPJ8jnHI9KeZN8OAMCm1ci9imsQfg9Ki+y7wQvFLYSK jQJD+7Jy3rUkcAjXcxyacldCW5RZhLOcCq7Iq5IGG+lRsrFCPlUyfmNe9/sXRtL+0TZYbBUnP5is 6mgH9kfwzugPEdhInA28/nX6QJzArjuK2pLRCtYauBnNQO20DFdA0U5Cc9Mj3qpqB8qJGAxQO5m+ 
f5rnjoK/DP8A4KpDKWsy/dAwR9WralvYI6H4haNERA/YN8wp0ZYZwelebiI2mymyVUB+f+KnISpP 8VYLRktXRNHIVkB9a6HS1aG4zjcKJrUhbHrWgszygHoa9f0i2IQCqitLj5uh6ZpmFQYru7NQyLt4 rWAM2lTgY5pZVBX5TnHWnsxpaGZcy4wAvJ71mb+CCOR1rZaozehVDhyTt6e1QO5IGaze5otj5JUn buFVlZpHyvFczdzdqxPyh+ap4mLSZP3e1S9xPUvK4RdpGfSqm8RyAU0homlkwQF61JG+5gOhp7Db saUa8cj5vpQUbgg9KCGTRSZlyDVph+9BJqOo0TPK88wGSqip2bOQDzVrYTNO0uSseD2p7SiRwBxT iQF+wSEOOg4r9aP+Ce8G3w/dtnIZwR/3zXTRNYvQ/UKZ1jQA8tSpPtT5ulaozlIIpRjjnNSCDzpM A9KNhblmDEM/zDNcb8Sxjw3P9KGJb2Pyn+MU3lfDfVlA+9C2T/wE1/I/4kj3/EG6RDgKxz+lYVFc nZkWpFktHwOvFfox/wAElbUp8d5EX7xglJP/AAEVCjc0eqMb/gpiltJ8fb8CBTPGzgtj6Vs/8E4/ 2U9NvLi++IGuxrdPAC1qpAbYCufr1FbU3qc7d0eQ/th/tSaj8b/ipc6FbboNKsZjbsnI3EkEHB+t fo58JvgR4X/Z6/ZJtPGOq2aXElwkbPlMnJyO30pdWy1pG55Z4u1X4d/FGx8Ga9pawwal5sLINoVs eYMjBOe1fpT+1T8H7H9ojwPpOnanN9nt40DcgdQ2R1q2lsS1zIwvG3g+D4P/ALFb6PZzefbRRKkb DHHDY6V8+/8ABMU3Vr8HfGAu7l7oSAmMPj5R5R4rm68oKXQ+YP2Vvi1pfwm/aq1ptYby1kkk2MR2 2gV9NfGb4CeAv2vPineXmmTxNqohl2lApOcZ9fpW8fc0Eom//wAEyfhDqfwP8X+NvDOqgJD5zeW2 c/KIsf1rC+Iv7G3hn4g+LPGF5ot0r6uXdpAgX72ypejuJLm1L/8AwT90vW/Dfww8UeGfEEezTbfM W8nPy+Wfw7mva/h14f0v4YfAbxKfB4SVZvv+X2OwjtmtpO8Rc2pwP/BM3wloSeJr2/upI2164jdr gZBKnbg18bf8FFvB2g3fxguILG9xfzXqxyogHdgD+lY0laWo5xslY+yLf4Qaf+zn+zd4cu7HShqF xO0KSN5ZzhmIJ4ryn9vr9m7SNW+A+l+KY7ZLO+fy2+VcEZY+v0qpGlrI+Xv+Ce37T9zo3jab4fam X1CC5RvJLAnaoXGOOB1rhv2/f2cNP+FvxEk1a3ceTdS7ngAHUkDp1rRaq5m9JWPrP/gj3BBZ+JfF PkQiPzGdsgf9M6+Mv2wEx+0X4mLHLC8b+QqErSuaThy6ngZkZIBKg3koetfpf/wTO/Z+8PeLk13x DqgjudRUlo4nwdvyZx69RU3vIGvdPpC58daZqel+MNM8QaF9ltbZ2SBxC5DDZnPPvX5LfB2wgl+O ehy2Bxp32pPKXGMDevatm7akRbi9T9ef+CssuzwzoTIvyLByP+B18/f8E7vBGrTaTPqs+mINOu50 2SSErlSMUN32Jj7nvdz1P/gpn+znovgu30vX9Pt0SZlAfaOuX9a+1/2cfFsHgP8AYf0m7uY/Mhlj iTZgn72RTa2Za93XufGn7enwG0i8+BGieL7C2S2mk8skhccFzn+VflDA5W1QryCueKme41poesfs /TQS/FLS4rqISRzSqvI6ZYCv6CfGHw/8O+AviNodtbafHuvYG34X1YLRHZod/wADhvA37MPhrw9+ 1Zq7m0jke4SZwpX7vygV3vw1+BWgQa18RXmsYvlmfZ8v/TKltZEqXXufmJ+z947s/h/+0V/ZkVmD 
5l0NnyngZUV97/8ABUP4wReHdP02xFsHM0JyxB4+bFXsxud/dPw2s7ZfshlifKuc/SrCxyS2bKvU uE3fWmaxWtj9vvgb8L9E/Z//AGTIvFktsLi6njRncpySQR2+ldd4s8EaL8cP2QbPxRNarBI8Kyp8 vTqcc/SsGm5XJq1FF2PWf2X9dj8KfsiWN1Im6DyUVQR2Oa8M/au+Dek+KPgFp3iVYEhmbYwIHqx/ wrdPSxnfmdz5p/YZjt7j4yW1hdwiaN4XK7h0xiuy/wCCgPhaw0b4zWVvp0IiaWJ94UY/iAqI/EaS qJaH2N+xF+zxpGkfCLV57lEu57pM7mAO35CO1cF+yt8PIPht+0dqUUEvmvMJGVRj5RtAxxT5feuS /ed0fR3iv4K+Gr/4vTa3ezR/2r9qBVW25HI/HtXuXxJ8RaJpfirTrPUohJPLAwi+XPfH86i7m7E1 Jc6t2PnXwJ8ALTwN468W+K9YjU2zyu9qh52oU/xFfmb4pvrfx18ZF1CyJhsTqEZQbcZG4V0xSUDO m7ysfqH+2MP+LG2EMAwpeMZH+8as/sxfDLTvh78HT4gmjE1xPtkLEdOD/hWEotRsbfA7HeftEakf Gv7O1vdRjy0kaN8e2TX5e/Db4Z3HxV8dafp1nEWtkG64kIwMAgkZ6dKfwxsEdND9BfjP40h+GGn6 L4P0DbE3nRh2iP3VD4I/I19P+M/B+m+JPA1jZ6k6m3UKSXxyQeK1+JIzjDW50eg6Xp3hvwDthC/2 fGABt5BFeP8AjL4XwfFWLQ7zTUVbSIqxPTgNmsn73yNXI8x/au8TWNvLZ6Japi4AySB0APNW/wBn zUNAt9UtII03ar5R+bbz71cEkrkqV9Rf2ldBubnxbZm4JZHUqmOcZNe3+EdIs/gf8JlvmX/SJEGW xyWORWdX3mdFNLlufFV/q0nirxBNfXEm55iWIz0rXsrYRJsQcVtFWikZJ3Nj7Iu8Avj1pWi2oTF8 4HWmtAL0aGaxDhNpPpV60iVLUR45PU4oe5drGxBZwwAYTdn1rctdNgMZHlhWPtUNivctJp6WkiMq 5x1qDUohf3q+XwPpTQ7aGzHpJWAknJHvTF0xWg8xzhR7UySncW63wCg5QVLa6aMrGuVPsKQdS5HY KszhuSD6VHd2m1Nqrx61NhXvoOt7P5FQcDvT7uwTaBGuPU4podrFSG08gbWy+e5FVWhiScIy8npx TbLYzUrQRkAAj6CsGTQJblDIr4XuKWwraFOPSth2k/iasjRkjzl9wp3EU32r8hXcO3FYl9YZ+boK dwbMi308DzJFGDj0rA04XM1y6kbVz61ohbmpOEgu9joCag8keY+5A3tUMT0Zzd3aZyETb7Cua1K0 a3AI4PU1aB6szPtKz4B5Iqte2wXLZ6dqTRmtzmnjW8kJA/Csu609mUjHeixqU2tlhwNvzAelc29q ZpnYjABp2sib62M2S123GccYrPurYGAkjvQmOxztwqiLp1rE8pIyQU/SjqUtjNYMrMVHFQsitb7W Hzn2qWhNXObvLNZG2NwKxTCFJRTyKrWxPUw77cwKBeRXPpHtQnv3pktamTPi1lDgHB7AVXuWDPxS tcWzMKeUQzDavPrisrUJmBLsMms2rFmIbtZsbUK/hWfdhZGwaNgjYyp41ycD8cVzl3YERA7s5qUX YyJz5I5GayWLOCQKewmiiilXO/JHpVC6tTI+QOPSovYixnqV3lQuMe1Z93bJuJBwfalfUOhhsu2X DCmT4jBINOTuKOhi30f2iAfwj6Vnm0ESDB59anZFrVlmCyUZLHJrGu2HnlRxUJ3Zbdjl9Qi3zbVp QmyH7ua2exF7lWRhIB7VVuUJ4ByazAotb5wW60h+QlUHPrWT30GjOmgIzt+93rJZJIn5Y47iqT0E 
0N2tI5UHCmn48iLb39cUmCIGgWdAehqtKjRMUPIpoTRWs2Q7v74pzShvlYcilYRWkZlfKj93Xu37 F8q2/wC0Nauw5LEfnis6i0Gj+xz4YKB4i0+Mc/L/AFr9NbOLZa7T2rSnsWUGG81INkYyevpiugko 3bmRxgbVqjdr5yYPQUCRkxEMSAO3Nfh1/wAFT4i1lCicMef/AB6taT94pH4caIzCyBY5KjFSxPli vSuDEO82Jbmi4VIABy3ek4jPlgc461zdR+RJAgPbkV0dhvLjBxTZB6XoYKOp7ivatGPmIDmqi9LB bqejaZyABXe6cBEct3raKsDdzo4k2zg/w96qT4ivJAPuk03oVHYrTygJtA59ayjBuBwee9NOxDV2 VphhQqDnvWdeTglRtwR7ULVl7I+R2OeAcCgsFAAHPrXFc6JA0p6EcVYiQ9AevQU7CJ48j7x596fy kgyM1TBkrx4ywNLC4cjjBpX6C3L/AJhD0kjYIOcA9qYrMkQrG2WP0wKeFLfNnj0qR2NCOXevTC01 JPLfA5p9BMuJubnOKsQv8248GmKwXcouLbGeQQcV+wH/AAT9cTeG52UbVDDI/CuugroaP01mZZcD HNQom/IPNaoiW5Ip2ZXGKtRMBHjOD60WEtGTA7nUZyBXNfEaMy+F7jB6Cok7FI/J/wCNTBfhlqq4 58lsn/gJr+RHWlEnjnUHDdJCOPpWLdzK+pFeSOtptY55zmv02/4JHQPefHe5njdAscMq4LYzlRTj oXeyM7/gpr8PPEqfHDUNTtLFXtHMhLliPT2rkf8AgnJ+1YPCfiZvBniQrFZXIKx7m46YH6mqRhBH Q/tv/stj4e/Fu38R6Eif2Vf3AlnwQNp3AfyFfp98WNHb4i/sBaRo2hFL67MMSj5vdvT60R2Kv9k/ IPwr+x5rPwl1Pwfq3iS6W1VZIsoJFPPmCv2L/b3vfEep/CPRP+EIl3XD+WGkjfHBfnkZ7UoybYX5 EWvET3Xhj9gizs/Et0DrbQIH3SAlm+b/AOtXKf8ABNfwTrEXwP8AEk11AsHmoTAN3LDyz/Ws4r3x KKa5j8ltf+Ces+Nv2gr+0uXFlcCciMhx8w4z1r7u/YU/Z58T/BT9qzU9Q1JEbSNk2xzJ1yoxXRNd RRnpY/RfwV4z0zXvjl4tjtruPzsTKI1cddleGfsj/DvVfBHxL+I+u60ywWcs8jQ5ccr5X/1qiHvj T5UetfB3xFpfxA+FHjeDT5o5bt9yoFYdTEcV5L+yF4TuvhL+zb4pHiQpbSEZQb85HlmhN3sHL1Pm r/gmXpEvif47+I9XtGxY/vAuTjdmMYr5c/bH8N/8I9+11NqF022J9SHI5Ayy1q48uonPm07H7W/F f4m3Nj+z14ch8PWsOqyuYhy54+cjPGa/PT9uzxl4qsfhfptnrEyqkm0LAk27B3YHHWsqqsuYaneV jD/YS/Zz0v4SeGdS+J/imOM3ZiaS1D4OxShyB0PVa/OP9oD9pjUf2kPi1fTojPp32jFpGc/MDjBx 9a0ou9MdZe+rH6/f8EofhL4i8LX3iTVdWgS1tJSzQYk6r5fp9a/Or9ridLv9orxNsYNm7bOPoKFs FSbbSPCFP2a1Qr91VOc96+4v2DfD+vTaxqepaHqOzerMbfzAozt6etR5msdT9WfhVpNz8Qfh34tX x1Y20Ah3CNzJu3jyyc8471+Ifw4tNL0/9pvS7PSyP7OS6ULtHA+ZcVco+4YvWfKfrF/wVk8NXM+j eG57VxJaLs80bu3mc/pX0z8FdNtfFf7K3hmx8NzpaujQGRkYDChiSPyp/DTuL4pcvY8u/wCCn2iz z/CTSbTT5FvJlaNXYsP7/tXpPws+Hl3qv7DXh3TSVW4AgLfN6MareCKeunY8i/b/APEVh4T/AGWN 
D0G6uUN2oiGwMCeHP+NfhZp+oW9pZsjNkgcYGac48thU3zM9G+D2opZ/Erw3IBu8y4jx/wB9iv6h fi14c0zRtY0DxHqN6sC29uQFZlGfmz3+lJJRWoTutj5v+Bv7Qmg/FP8Aaw12WCZQ1t50ce/jdlQe PWvpnwTD5V/4/e6kVJLmVzAu4cjy8fzqW9UyV8Nj8JPBIi8D/tVRTa0VVDdDYc57rX6qft0fAOH4 9x2Wr298INOtrZy2GX5ucjr9K1teYnouc/AsPbaBq2oafC5lghl2q4HXirseqQ21pnLYNwh+72zS 62N6b05j9+I2PxQ/YZ03SNImSS48uLjeOMFq6KGOH4UfsM6domtyJ9rW3SIIGByx3Afqama5UYuL nI7v4NeDpbz9jrSrJiqSOkRADdOtcT+0/JFov7M2j+GhMJtRbylSMEEn5iO31pQ1VymuTQ+SP2Rv hbrnw7+PulPq8LJvtpCgPpwK6n/gopA1v8YrOW3OJmt5OfT5hTpptsUo31M/9mr9q64+Gvww1bSb tmnuiNsbnJx8hHaum/YW8ZvcfG261LWbsvcXKu0QkPQFQP51qo3Lg+Vanu/xQ+GviXW/2l4tXtph /Zf2jcR5g6bl/wAK+uvHnhDTPiT8T9NnW7VW0+Jt6gjqCGrOMLMzfur1I9N8c2fxm1HxT4adhGtn KYtw/i+TOf1r8q/+FR3OmfFo6Rpsgmjt75A2SBwCDRGd9CqceV3P07/al8D3er/CG0tbRwWjKF8s BwCSak+Dl3DrfwGj0VJAbiELG3PTr/jRNFX52aXxg8Ky6Z+zzZ6aswaRTGhOR6muQ+HdppPwF+Dh 1CMq99NFknjJbBHapqa2HBXlc+EdM8SzeJ/Gy61fnEs12jHP8IyM/wAq/Uv4vaPL46+GVtDo0w3t tIYMBxmtNgvynQeE9I/4Rr4L2mkanMGuGVY2OQeeR/WqV/4ttfg5p+jaLDiUyhVz/wACx2+tKnHR 3Inojzb45/C+z1LxfY3Zm2ySjDYx3Iq54F+AX/CE/Ey01OBw9sI2DEkDGcVMn0RdJaWZt/HrxBaa l450+CLEjKwDe3zCvQfjnbx3PwpgiVgy/LgZ9zQvMty5VynwnoejJY8yjjscV2osdyAxCtCYrQm/ sn7Sw3jDd61odLW1TYoyT14plI1rLSJCgDjAHQVt2mjiR8KvPepY2a0WgiOTHer/ANiCOOKhslbl +Cx85uKiGlLFNkDmgq4s9iVZeTzVm4sfLhKMcg1SJMy00f7Mu7PFbdtYiWdCOtIZak0oJK5zzmqs tl5mAB9aL6DSsPmsPsqjjOfQVWhiLS7SvHuKm5TaY+aJTKQqbQKrPp8KSLIOXx6VS1M7lEqLh5CR nBrJuLeSI704TuoqktCyhJZC8i3AYH0pLewWIEPxiiw0iOTT1GHRc+tYV4qzFkx0otYlrQy3sZJI gqDao71QWxYS5VenU00xbFC/0/z75ZB1A5qjLGYX9TT3Je5kXscqklB0rn7mzn1BRvGBVXsNGTP4 cFsAyAA1Qu9Pbg4+tUmTbUwptL8uUFRipGX7JA25N57ZpPUpHI3iveqVWPa3rWOlg0KYflx196b2 JS1IFTzJCWT5AMdK5u/tsEgZwfapRr0OVubB2bYB0NZ15BsjCkcjimyNTOubZYLUeXyT1rGuI8R5 bhu1ArnPzL5jZYYb6ViT6W0EvmqQQevNUSjDvEkaViB8tZccS7CMc1F9C7FN4Ujcqyg+lYF7HFDE flw30oiZyMNXBiJIrCuX3fw5ptAiqbLc5OMGsOSz2lyeueKiS0JTdzNlXZGAwrnnuAX2hcn6VCN0 yrqEK+V8wAxWFFD5ke5ePrSd2O+pUuYt0ucdKx9jSyMc4Wjl0JloULuIKuV61z89s6zButRsQyhP 
HsfL96rTwIibic1aQjJ3GXPHy+lUQqjI6YqbFxehVOZVOw8is6W1MqHccMKi3KPdmI1o3mEg8VB5 hVNq/ePrVJ3QPR2KMsao5DD5j7VHFH9ljbeck9KUkVoU5spDu+9VDcMD1NSkS9ByNuzn7w71inM0 5H55pNWC9yR4fLTcOmaguWBRWHSpQbIzmXK8de3HSoYGLyFTy1XYlFZYkjvGOMVLawb3fccjNJ7D IJF3Sbc/KO1exfsnKy/tA2hXtL/UVjVdkXFH9knwqfPiLTZAf4P61+mlnMbiAFR+YrWlqkA1o/KJ IHNVpPmGT1roJsUmQ856VWuMrBg96YWM2F/JjZT36V+If/BU0+XY28gGcLg/99VdFalH4aaHiO0L MMl+VGKvSQYcMe9cFb42S9GC2qoCxNReYzSbR+dc3UGTqu0jDc10ukqXlAzz61Qj07RHEb7SMmvX dIiZNrZwKqC1HLY9R03CxKT3r0KwQPGoYcVtexC2OhMe5Pl4rMlOCBj5hQ1zMtGfM/mHGORULHIz jBqrWJ6lLkkkCqskfmJkjkULc0aPkC2hyCX60oUKh9a4kjVoTfhASM1YBAwf4hWqRK3LEsofDAYP eoTcFjx2pNB1HJKNx5yTVqAgZz1FS1qMeW3vxxShRsw2SQfSqeiuF9S3Au4/N8wHSnb92McVG5Vy zKGVFxwtTeblRtX60+hL3LKsSAVOfarMjElRjDGkmAlzGI4SejdK/Xz/AIJ5wPF4TuVY7supz/wG u/D7CR+nDrtfmpom2oSK0ZD3JFXzF3YqdIiUpXsDVyxHCydBmuO+JcnkeEp3Y9qiQtkfk18bphH8 LtUbs8Lf+gmv5H9UjWLxtfx5zmQn9K55P3iFuQXEbOHBOQegruPgB8atV/Z1+IK6jpgOXU7ypIye B2qx7n1x8bP2+9W+LWk/YrnT0ErjDSbmNfnPqnhOWXV7LU7aUreW8yyAj2YHH6U79hJWPsT4kftd eIfiR4Og0K4Q+WiYLknt/wDrru/gX+2drvwV+H8ekxxteQxbVQSEjgfSktB26nEftHftZ6/8drGw jIa3jgKnapPUNkda9k+H/wC35r3gf4f2ulTxNfvGFAMhbjH0pLQlxujzf44ftd6/8a7WwtXL20ED K5hBOCVbI617z8Of+CifiPwR4dGm29qIY1XDbXbmnFq4RTSsfNPi39p7W/FvxSt9cjh+zywsTlSf mGQT/KvpuT/go/4g1KO5hS2NudpRZQWyciqlLoJxPkrwD8bvE3gD4nXfiqO7kaacsZV3ffJAH9K+ pPE3/BQDxL4m8EX9s0JtTcDadjMdwII71UPdQ1HmWp87fs2ftI658DprvyS+y5be45GeMV7N8UP2 zvEnxW8PyaSrvZ2zjDiNjgj8frTVk7jknaxxP7Pn7S+vfs9RzWmlAqp4DAkcYxXB/GD4uav8Ytcu L3UclpH3kkk4NDnfQhQsz2v4aftm+I/hX4Q0/Soka/hgCqPMLDbg+1cJ8af2hdY+NPim3vdTeRoI OVhOSM5yKmesLDUbSub3i79o7xH42+HsPh5JmtrFY9pjVzgjn1+tfOXhTw4vhq5tJ4Bl7cghTx0O aIe7GxbV3dn6E+H/ANvrxRovh+axs4fsUe0rmN254r4suPElx4x8S32rXu5rqeTc5YfeJHWnHcUo 3dy1Ivm25Unrxiu++DfxV1f4E60bvTZGCyA71U4yelOw7nvHi79s/wAVeN/D9/p0kj2sc4IIRydw Ix3HvXyB4VkvfCkllNYv5d7byIxkzjODk1UnpYlR1ufVnxY/ae8U/FW1ttO1KdpbXyioDOT3pnwY /al8S/CHw5c6DZXEkcEfEZUnoB/9eo1ehSjaVzR8X/tY+JfF+h28F3cS3e3G4SE9c16Vo37cni/R 
PBtrpdrK6QxbQEDkYx+FaxlayYuW7Z4d488d6r+0h8RdJXXLopaIMEs3+0D3r9DNM/Zg+G8Ol27S 3kDSsnP3P8aUp80hqHItDn7z4ReBfA3xE8Pz293C0MEikjK9nB9a+pf+CjXxR0Dx74J0yDSNUMTR BRtixz82aJ62SBpOOp+L3w717Ufhz8Sv7as5GjcBhkcZzivqqb9sXxZLrE85upQHBz8x5yPpT5dA UdD5913V9R+ImqvqN6f9JSZZEJOenP8ASvftY/av8Wan4I/sX7TIloF8tsMen5e9aKRMo3Vj0H9m X4N+EPEPhie61q7TzmIOJAuSce5r6T/4Z++Hcnh+8c3UJYRnaML1x9aiCd22JvlskfJfw8+Muu/C BdT0vSbpn01JwIFD4woHoK6Dxx8a9e+Jum2sWoTM0MWGERbgEHIol7zsaJW1PS9G/a28UaF4Ui0u C4kSKLARAx4Arjr74261rvjrRtY1GR7gWzqpjY54LAk1SVlZBbnZ+0un/Gzwj4nfT9emlSO5t7Zl UEc88+vtX5PftF/GFvjF8XL6eFP9HtpGjjPPzA4Na07QiZJPmseYWltHp5EgQc9eK6DQr+40TxDb 6jaMYpYxgEcYqVKxbjc+iz+094nS9A+0SFBxuLGsaw+NGv2Gv3l/b3cqvcZ34PXIxVO1tDPlu7Fj wL8VNW8Nazd3VrK8dzdNmZwfvEjGa3LTxjqeg+KJb+3nLXMz75JCcHNc8FZmjVj0vXP2lvEGvWUl jJO79jkmuV+H3xI1vwlJdi3lZfNbcecdq1dnoEIHd638a/EPifRRY3FzJwQeW9KzH8QX+taLDbXM rSLHgBSc0ShdJlpcpQk02NkRAdpFe7+GPiVq+iaAlpBMwSPAUbqGieXmL2sfE3WfEMVujzMPLYHr 6HNP1zWb/wAU6vaX0sheSH7m49Oc07WQ3G50Ot+J9T12+t55Z2d4xxk9Oa7aP4uavbW4QzOcDHWo sriWhx9zfyX999scFrh2zuPauq1PxJqWqww2k0xeFcfKW9DUyWo2RXukLqLBEGyt+10prNI17AVY LQ1YbRRIWYY/Cra24acFFye1IDUiiklBRxtYVr6fH9lkxjk07FI1IrZmlLAcUrxZzgYpcomS2LJA vzAk+4p93IXQeUueetArEstu0uB0WtWTTlNiMjLDvUvQcdSv9kMtrlxgDpRp1r5L7uopJg1qTTw7 p94pzIBlcfNTSHe5agtMwhiN31pk1sOcLii2pJQ+yFk5HSqk2nx7S2MN9KpIDLt9M2MePve1Vrq1 8iUL1HcUxdSy+lhyGj4WqaabHezlepHUYpbF8wlxpRAKx5VR7Vz8mgquSBgnvikBSu9NMVsR29qz orcR2oK/MT1oJMtNPEsjkA8deKo3GjpLGSOCOtXEkxjYALtAz+FMubECFQo2kCm9Ro5O401lmOWJ 9qydU06ZbJmjXkUkymYdtFusQJkw9Z1/ADCET+VWibFOFo47HDpyOM4rAvIUQb1GfrQ9Rrcy53ju bXcqBTXK3lsAv3Mt9KSKvoc6yNCHZuvtWSsEc4O/IHuKHuTc5W5sWhvQST5fpTLuzivZTuG0jpgU 0iJM5+Wz3MVxkjvWFe6QyE4kPPOKHoEdTnp7dwCoGQOtYjRLbPgjOazuaIxNSgAuN/p2xWRLardD d6dqpMXLdma1vG7HjCjtisCWFVmIA+lNO5L0K08LLAWI5rBmt3nRQCB+NS9yUtTJ1WyJhyOorlLa QXLkBNrDgnFJIt6EVzb+aTGTkis2WJYICMYNFgTvqZ4sWuICS2Me9YCwAM69l9e9JsGzKeDzlYLw PpVSRfJtirct9KloErnOlCincN2azLqPy1A6g1TWhD0ZUlxbAAj6VhXFt5ku4tj2qEikOVERciqd 
ywVMjrUy1KWhVYGWE5HIrANtuiLt8rfShOwWuwJUoCRz61lKwM7F+VpMtIqyhizYHyZqotqrISev bipRLRULeWhPpWPBG00xcnAzQyFoXLmAzKQp2j61mKhVSj/gcUIb1QiW7QJuJ5NMWJYpCwPWhsSW hVdFkYnGMe1SSwbolaPg96i+o7EaxiA+rHmvUv2T7gw/tA27Ku7MuD7cisq2qLTsf2OfC9Ams6YA edv9a/US3uPItUQDBI6itKL0QmTIuVYNWXJ8hOeldQip5gcEY/Ssy4k2Y70DM5GMrMOhFfiT/wAF S4pF063X+8Mn/vqtqO4mz8LdJlL27DHKHaK1UEkvDHmvOrv32J6kskWIwpPSs6RcEEtwKwSAntiJ ywNdPo0bZx0ApsSPUdIfeRxyO9esaUS4Xn5aIse56lo/zYH8NekWP3VWtY6jNjzNmM8Co5k83kDB rRKxFzEmzC57k1ECGODwa0YRRC0hXnGDVIq3JNZN2ZrfofHnmfJjPNNhYpkMMiuY13LUWG+bHHpT lHmhmxjFOIWsOUGaIbD9c0+OEAHB5piiNRREh9aInyhIPFJq4nuaVs4YDI4p6SZcqo6UpbWHuWLc rtZhx7U6CLcNxOB6UoqyHYmzvAGeKuoViTFMGrk9vEFcMDT87rk54IpJE3LE9q1zGQxxzmv2D/4J 9rs8L3AU/KHX+VdtDRDtofpc7eozUkaZXFaGRdSIIo/WrhjHVelQ9yug6O481CqjBrzb4qKV8GSx seeKUyT8qPjjH5fwt1IHoIG/9BNfyIa8nm+PryVeAGIrna1JLJXGAT+dV7CS41PXk0zTbd7u8dCc KhOMfStL2Qk9ToNS8B+KPC6GW/0yRIT0dkbgflWLaXw2GQuFA9TQtEWyzb67CWAbdubpla37OVth G4+wpIlMz9T1NdLiEkh5yAAe5NdRpXgPxN4isPt1rpkkkJI2sEbnP4VL1ehq1ZXKXiXQdX8BNFPq 9m9vFIuQWUjHarPhyDUPFFhcz2dlLLbw9WEbdMZoTsR0MjwxcX/inxLb2FjYyPK6FmCo3GK9Rvfh H4ulvnWPS5UCHujDI/KkpXB7HCKusXPigaNFp7NfgHKBW7fhXpWpfC3xXo2gTX1/psqRxdQEY8df StdxJo8w07xFHqtrHJFGzb8AIVwST7V3ur+Hdb8LeG11C40x4LZiOSjDg/hT6D5kVfDWg694+0+S fQ7N7hEIyQD/AIVp+KtG1T4f6TZ3Gs2TW6y4G4qepOBUS91hFXZR8OvdeJteex021kuZVBztQkcf Sql9PPpd7dwXkTQXFu+x1YEVotSZWTLsepxRafFcMW8pwOcU+6vGnjjSzxJLIwCgHrSsV0PWbX4P +KryySQabK0QXJKoxz+leW/b57XXLiznt/Jmhbay4NUtSW7GtE5d9xFM1XWYtJWF5Dy7BUA7knAp t2HFXZ6Jd/DjxHYaK+qtpsrW2M8xtjH5e1cHo2oHUrXzDHsPcY6UR94U3yux0wlc7TjcBTUnZZi4 BIPWqitQbubcLhoMAbTV6Fi/BJB+lTLcpaBfaa02xlcxuvRhWvDc3flKj3cnA4pA3cyxYXev67a2 9u0t3LuHG3ODn2r0P4leHdY8GazYxavbSoki7huQ8c/SrTJkrIqG9hknuD0CgkcdeK17TwlrmoeF Y9Z+ySJYtg7tpxz+FbR10HeyHaTcGW0LDrWvAqSRjP40Jai3JYbY20h8udo1PZRW5bSyw27+ZdyB DxwBRsS46neeGPhrruoaJPfWdpI1snzGQIeeM+lGmSPeadG7p5bjG4ehrPqO/Q3reNSxYjJqvc3w tNrMN25gv4mtVorlQdmfQfh/4OeItd0SC7ht38rZwGBGf0rj/FPgjUfh/wDvLq1ZDI43Eg/Sp57j 
lZvQt2zK8ajbnPtV+xhkn1GGygQyTyfdUDNVYiV0e0y/BPxFJsBtWVNpJ4P+Fcbouh3z+In0iKAy XYOGUAnGKiUug4rqz14/BXXtOilupoWSKP0ribe4E+WAIdThgR3qoRC6bL1nJCLkknDd8V6J4F8O 3/jnU5jpsLPFCdpbBGe9OStqaU1oelX3wj1qO3nuZYSqxgkheegrkNB1JJ9PIC7ZQQGyMURldEOV 3Y6qw0JZnEknIPrXTXejOkcbWsny8Z5obGiK7u30yWJQm8sQOK9tsPhlrGt6RHNDH5QbB5OKbkth vREer+Fb3wxbp9ojO/IBNPis1uZQZBgYqFqLzLi2hjcBV/d+taA0zfOJFOMUPQLXOpgsW2h8VpWy yXMwLD5BU3Eb8VkbkFiNqVpQaSIJI5EX9KpD6F2eJpdUB27c1ffT916FHamtQ6GnHbE4TGDn0pLm 02kqBzQwWpHaWImwjDp1rXexS3+QDj6UhsfDZ5TLcDNaQiDJsxSauStBZ9NMdjtznkVlxwEuiL2p WsUaK6a8r524FKtkpcgjn6VQkTG0ePKqOPrVOZccAZPehCKrQkYAHNS/ZzJgSJjHpQBVuH8khVjy M88VBc6Ws8wKj65p3AzmhKSFFB49qr21utndGRF3E9eKGCK0mpMJ5FMfBPYVWurWSYKqjipsDKGo acQqx5OSKy4tANmxIbPqDQgiP+xiJHwgya5GaLaXUCqiFilZaeZWyy4H0rN1iAQykqflphY5tFW8 5ReR6imXO5IChGRSG9jh7uPLkkY9qy5LR5s4TApgtincaaYrXG3cDXK6jp0gtTsXn0q0KJyTW09v AgK85GaZfW8vn/KAExzTQpaGXDp8RLFjke9clfQhrk7RhQelS1qOKMvU7NoERzyCMiubkhaQlsVQ uUq3dvuGUH1rmtRjeOMFRmiSvqTHR2MN4HhTewzntisW6ssvvI4FZbs0Whz0zJK5GKy7rT1hiLRn ax6jFAJnHzXOLjykTLdziqc9v5LbsfMa0SsZS1K0zNKAuMe1c7c6exn3K5VR/DTaHHQyLsmKf5uU rDuZI2f92mwfShLQhtszN+HbA+Y98VkXcTBCGqGjRbGKM8ljtUVm3Stcy7kG1axe42iCZfKiBGM1 VlUBC5AyRRccTmTAZYHx97Nc9Onk/f5NVfQzktTJlBmkAY8ds1FexCI8j8aOg0UY13QMAOPes+OE lgGHFS1oF9SYQGOTHVTVCdVnYxY2gd6w6m3Qyfs4BMeeB3qikCx5LjIFXuJMziplBC8KTUMreViM DJFKwXuYkjbbhtw49MUyEK7sVGB6EU0ZvcrtB5spIfHtSyqUgII3HsahlIpxyCZdrH5hUBwGJI+U UR1E9GUpX81gqgYqyzfZ8J2rOWjKRReUb+lenfsoyMfj5b+UMMJhu7dxUz+ES3sf2PfDVQ2raVIp 6gZ/Ov1TsNs1ugIzx3Fa0NgloVpzsJGeaxpEbzwxPyjqK60C2GSyeYxAG1ayrgE8EcipbKK8f7xh xhsV+Ln/AAVGZTpkDt91Rjp/tVtS3JaPwa0aXb5hA+VjkfStZTvkJU4HpXl4j+IwGSDKkE5aqUo3 AAjke1ZxETL+5QMOldPpMbuyuT8vpUydykj1PSnSVAyDGO1eraI27aB0+lOG4nseq6R0GOK9C0b5 wfato6MS1RqzxbcMTkelKZBIMdK26GezKEsQPCn8TVCXEUyjGWx1qr6F3sR3EuF5HNZsm4YOetZt XK8z44U7WwB1q7DHyd1c6R02sAfySTjjtTIpQckD6jFO9gYBcP5gOF9MVeVRkEcA0ibWEng8v5Rz SwxLs2g4pgldE67guAOlSRZVs5571FyFoyxvWMgY/Sp1mDkKRiqLZZRQz4Awaeq/vSDSaJiy/AhV 
gT0qWVQ0gcdKSBhc3YMGegyBX7E/sBR+V4RlA6Fgc/hXZh9mF9D9Jo3wG9jWjbAeWCa0M+pZAVQa RX8oHbUossRvtI9a4H4uoW8LSnPpSkRLQ/KD9oqQWfwj1I43MYGH/jpr+RC+Yt4pukP3t/FYy3M0 7j9QQIp+bDdK/RP/AIJGeD9P8T/HXVl1a2W4aMSGJmGcAIDWiV0OMbnpn/BRH9py00fxtq/g6w00 okLtGZPLYAEe/TvX4z/B7wHrHx0+LWl+FtILzqziS6lHRQpGRnp0NZN2Cnrc/Vz9s39n/wAGfs/+ BbSBJo11ZYwWwFzkH618L/Dv4W+Kfif4bg1HSbWRoJFBSQgjcPyrZx0FA1bX4AeJL34r+HdE1q0e K2lnRpJCCQcOOORiv30/aw8UaT+xn8MdFtNJ0pJ95Rdqo39/HasY/Eat3XKY/wAdvhJo3x8/Y80P xjf262srrFK4K9txJHP0r0j9j/4c+Ar/APZu1q4srCGSXyDkhOh8s+9FveM5S5Vynxp/wTk+H+l6 v+0t4gW7tEkWF5PJBGcAIDXR/ta/tlv8Kvjrq/h+w0JnjiWUBzG4HAHfp3pyhbVBrFam1/wTg8G6 T8cbjxF441OwjfUIixiXBOzKZx69qt3/AO15pOra9418Ma9pLQW9pI8cchhchgEz3471cCX3Pxg8 O+HNU8cfEp5/B2nm60yHUY+CpUBcg+/av6N/2nPh7Zv+xjpV1qFhHBdERLIPTLHNapJie1zxbwRr /gf9mH9jqy1qyjt3v5/KV1jwSzNkdAc11fxr+GGj/tBfsSaP4ov7RLe5kjilXK/d5J7/AErGprI0 i7Ruecf8Ev8ASvAmpeKLvSXso59ZET7pDH6Lzzmvhv8A4KAeFrXwx+0VqdtbxqkBuiJQvpkZ/StE iai6n1TYfsfeHvit+ynbXugGP+0Y7UPiPbyRk4r8gPDL3Xw81/y9Sg2Xtg4Uw89Rz9absnYpO0T+ iL/gml8eh8f9H8Q2OpaYsS2e6OM4Y5Hl571+RP7SWkxaP+0X4pSBMRreNgY9hRHQiW9zyOP5wCOM iut+A/gK3+Lv7SPh7w/eQSz2wPnOREWXKMp69Kmo9DSm+V3P6Cf20fil4X+AHh6Hwy+mt5MtsyoY 4GIHbt9a/BTw18IvEHjKLVNS0u2Y6e84aEAH7uPpWlGPuGFSfNUubfjH4LeJ/h/4Gj12eB5bdsZU Anbn8Papfhx8FfE/xX8Lpd6RAyxyYIkf5Bj6kVSWppzIt/E/4Ua78E9BhuNVhZ1JCmTqMk46iuy8 FfAbxN8QfDdlqtlbtHbzKGBbIByfpWT1lYv7Nzk/id4P1f4V6iLfUYX3MeMg464rDs18+JXfuPSn ZoSdz3n9nK9i8PfF/RnCb0mnTeCPVgK/XD/gp58B9T+Ium6Fc+GrdU2RgylTg/fz0+lKHxBU1Vj8 H7zRrvR/Fd5pl+rQ3MFyEII+8OM1/QfF4R0q3/YAspnt1Ba2T5gvf5q2fdC5uVXPxM+GHgW/8d63 DpWlxPM2w7mCnjFe4+JP2XfEvhPR764liJERydhzgY+lEJNsb2ueU/Br4c6z8VXuVsY5JDF8rllw AcZr0/xj+zj4j+H/AIU+23qNNEXVsKd2PyFW7WEpWP2O/YhEHiL9lbVWvbdQyWxCZ/3Gr8kLz4Te I2h1fVVjcab9pUxMOm3H0qaMefcyqS+0Z/gTw7qHjnxZDpemL9ok8pmkA9RXpfgn4avL8YbXw/q0 Ox1u0JQj+6wq5x0NHNRSfc/Xf9rD4jv8A9L0TTdE09XjcKCFzx8+O1M+Mvw5/wCFsfsz6bqq2gOq yiNhwcjk1klpYPhXOflfN8PdZ8O+ILHS7iFxfTLkJg+uK+3f2S/2friD43yTa5b5MSP5SuPYf1rV 
+5EUJqofZ/h74hf2v+0nf+FZbdPssSygHJ7Af41haV4S8LfDr42a5qdwkMdyPMZM4HG0e/tWEE5S 1HUny6Gl8B/iOvxt8R+KYbm2B0yOcrAxBIZdma/Oz4x6VbeDPiTrUFog+z+eQAOijArpv0FFO1zv /wBmv4EP8T9YuNRuwU06NSNzD72RnvX6ieD/AADpXwp+HF7c6Tbxs2zdlR947T6VjiJWskawlyxO U/Z88VXHxQ0fVRqdp5SjKqpB5BX3r4q+J3wfv/DesatfW8P2fT1ugI9voaUXZCUPtG1p/wAOtTn8 GQ6qVJtmUHOeua7vwr8JdT13Qo7iDKxPggscVTbHdHVeHPhe2k+MrKz1OLz1c7gxGeQRX1T8YvEE 3gY6fbadbB0OAwGePm9qlfFqEnzaIteP9Hj134Xx3ssYWbCtyOnWvi7SoHvevXtV7Mldjtl0/wAi 0DMOlaljArwLIBlT0Boepa2OoaL/AEJUxgmptKsZLd8MNyd6hIGdPe2YmtcqfLQEVZsrkTbIkHA7 1SFuaRszJeFiOnQ1bgtzG+W5PqaaA0FQNIB/FV37CGOT1NMpLQQW0ULbVyX+lS3FoGiUn71JisaA s0aIButILTA460g3HyROECkDb60y20tUlL96GD0Lm7y0Ixk1VSAnBH3qWwFxLfc3zVVOl+RIT13G i4JEEtoXYbPvCkNocZblqBFZ18pcFcn6VC8BZNy8evFNAyndtviConzdziqcVoyjhetADvsMcMDs 4yw9qyrMrfElBjHUEYp9CWUdcge3VHVfmFUIonuE3OME0kUtjMmsZ43IXoayLnQXgBJYEn3p3sBz s8k1p8qjNU5LMXMJeRPwIoTE2VLTSRGpZECjHSucvLbMRIHzZ9KaAxv7HS4YORyOvFVLm0DuTH90 dae4m7HM3kNyLhRGmYu9ULux+YuBx9Kew0zkr+ASZYLz6YrKayb7O0kg2jsKaYpK5yF5a/uyVHFc pdWDrZM6D5sjOatrQI6GPqqvc6XD6jArMi094fvfdpJDk7MoRW3+ksP4axtTs2jDEc80PYS3Oalm PlbSMsOlYssLSwkEc1ktAbOSu9MReUHz96zfs5PDjoKpbiuYhERZsJtIPcVVuLIP823PpVsErnPv bOsjMVrFdGIJY4NJ6CfY5XUVYOQBmsdLZfMxIcZpKQKJCluiXBG3gVkaxtLfKOlLcvY4i5Rrh9oG BTpLdoYMDG0Vm1qKWxilSIiH6VkXVu8kJAciperCOiM2WBreIYY5xzXOvC165UdRVEvUrzae2T6i qMqgxYYZNFyepRljyo28L6VnxK/2rkfJWd76FtWJrqXY4UDmsXUcqNy9am2o7tozosLHzyx9qps4 eMp94+uKaGkYrM0T4xwKhlOPnH3qGO1itICWDEZ9azbrMcnyjH4UnoTuUmPlvnoTVm4Z0gAA3A1H URmtb71GBhu9R+UVBGcih6FblGC3MU5JPBqzcqGkAHJFRLUa0K7yeXJ93IHtXoX7L119n+PtsQMG Sb+oqZr3RLc/sf8Ahc2zU9LQHJCj+dfqtpqlbEH0FXQ0QN3KRkLuSeaoXB284rqBGe6loxg4zUEy MoGBmk9yiJG5yOtfi5/wVHts6NErdGGc/jW9HcD8DvDzBopAP4DtrVaTa/HWvMxH8Rk7DtqxBnPL VHFOJYgGXB9cVz36CSJJIg+3ByBXSaQxVwOopNDuemaLB5bCRT17V7FojqwGRg1cNwe1j07SGKR4 PQ969B0tQoGK1W5KVkaxibcdpyKi2FRg8GtybalaVgvTrVCZwecc0y0rlVl804zn61XkjySuflzU X1NEtD4yjgLckkVZeJlUc1zLY0GrG0ZJLZBpQnlOcdaNyrpF1WDR4xzTUJIGThRQmLcfnaQFNEqG 
ZgB8uKE9QTsizGfLyp5NEatG5zQ0QkX4gW5YcVYOIsH72elLUt7WJYA2S5qeFcsfU+tW9jPZmgny DHUU6OEqhbHB7VmiipqCj7EAB1YGv2e/YCYv4OfAG0Yz+Vd9D4TNJn6PL8rEY71biQhOTxTbAsId oB61Js6mhD6CqwOD3Fef/FhHufCMyglTxzUsmWp+UX7Q8ZPwg1FM/MsLZJ+hr+QrVSf+EsvXU/KJ MVhL4jNDdSB+xADlsg1+rv8AwR1tmb4uaxdEcKkgx/wAVpGVkVzcqPnn/go7dtP8ftfSIgeZcsDz 64r7i/4JhfBbw/8ADX4Tav4qs4IrnXDGWZgOclD6fQVLjexNLS5+Pn7QvjPWPjx+0Vqa69K4Qaok MNu/8KMVyMHmv6FfHdhZ/si/sVaDPotnGZo44gT93jcc9K0crKwlofE1t+2xonxb17wVYSWuNXkM ZYojHnzBnmv18/aS8H+FtWi0n/hLDDJCIsxicjru47jvUqNndlS9xcxxv7TTafZ/sSxWuloF01Y0 EWwcY+avEf8Agm5aW0/7LniVoUWIpEc9s/umoceqMk+d3Pz0/Z1/aPg/Z2+Pup6hcneLhnO0Angq B2r77+G3ivwJ+278UtbgSxSXUvLlDu8RGDtz3NWmuXU2k+aKPoH9hf4PQfs9an47sMZs4JWO3HZY q810j4d+C/2vNP8AHR02yW2vreRkmmEWDu8vPepS5TORofsdfALSPgP+z5qqw26PfwyoPNZcFztP PFdv+1drtz4n/Yngl1AiEskZLK2cctVQYS+FI/CH9kX4Oa5+0j4zTRJb2WbwrYTK8hkxg7MMB6ev evvH/go5+1Pb/C7wFpHw18KBfJiZI5NhICAPyOMjoaj7Zaa5LGZ/wSR022Hxr1WYKpuFjkBYHr8g rwL/AIKCZvP2kPEKl8EzvnnpwKtO7FJ8ySNX/gmR8XfEGm/FSTwtaGW/0TDLKTnamAMdOOhNbv8A wUp+HPh/wZ8aRd6W0Qvp9zTKmOuQOahyvM1lC0EfWn/BH6NmtvF8iuGKu3AOf+WVfnj+015t5+0P 4nJG0/a2z+Qq7mEnd8p4v5I+zSLuwyqRmvvj/gnB8T/D/hr4uaTYXlt5uqSRNiUxk+gPNK3NobOP Q+9P+CrnxI8OaVdWNjPYrc30sDNGWjPGGx1rO/4Jb2beNvg94nF9Eii04hXOePLJ/nWkZcjsc84a NnyR8Tv2yrWWPWvAV7YZMeoxwKwVj8vf271+qGmfCp/CX7GOhp4Uijt7plizLu2cZOavmuzKN7Gb 8Zvh9ofiD9mbw3Y+KXgvNTea3Eu5wxLbzXoXxX+HWo+Avg94R0zwlBFa2yvCJG8zZhfM5/SpUeWe p0837s+Xv+CmHhzQ9F+GemSkxz6sdoLIQx5bnpX4vaYWgsFEgydta1EjOi7s9g+Au2f4teHUK5Bm Tt/tiv6afiX4jurb4p+HNIjRJLKW1dnBf0YDpWOzOiouU/GH/gpB8PdM8O/HaG4sIlgmnctIFHU7 gK/QPU4hD/wT7sUc7VFsnJ/4FWnL0M2uZWPNP+CaPgrTl+FviDXVt1a+hiYpIR1/dk9fwr179knx ncfHjSfH8Ou2yBbWYxxrktkGLPeoXubin0XY6j4KfD3Svhl8FfFuoWFlHBcrl0YDGSIz/hX5xWv7 ZNx4l+FV5pWp2az3RAGQWbsaFdK42r2SP0X/AOCd2nP42/Z11OyfMSyBVX2BQiuO/a48TQfAX9n+ x8LWUP2m4cxxtJg9NxBPH1rWj7mrMq6slFGR/wAE4fhfoWk+IPt5dJdWaJtwOMjI5rtvj54d0LSP 2mNPvLcRvq0k/wAyrjIy6g01LnuW4XSR9a/HweGNO1Kxm8RKkh2ERhxnnPH6102v+M7Xwj8GNM1G 
AAtp/hjwxbiJW2nIUrjDf/Xp0Y2IqrWxwvgP9lPXry3W8m2iJkOw7xzUuq/AvXPBUJvdQ2taLIM/ ODxVuCkTzOLPv7wl+0b4c0/4LwaLGW/49SgGz2P+NfIPwj1DTvD3jlb65OYUlBjXGcDiqaSiXF3Z 9jftIfHHSfHOgwWdkpklOMEoeOa8C/Z68eXXwr8bu0wzbTZLZPsBXO4GyR9wj4q+F11O81KOINeM rfME6kiuE+EfxouLbxVqEt1xZTuSuOwxilbQza1PbtQ+KuhaVZ3slrGDcygn7vU4r5FtoJ/E1898 5MRaQNgdqIqxTR6XNczSKhklLqo4zX0l8P8Ax+i+F1tLzmMAD1zS6lLQ7qHx1ZeH9CJtF2oxHGKr eI/iEmoeFraK3GCWUsce9RbUlmzqXjYT+GbaBU3Fcc/jXV6Z4ispdMgWVfmXHam0NKxY1rxKNYuo VGPJjPA/Gu/bxtbNbwRlc+WOOKhLUqxzk/ipr7XfOZflUHbXP6pI2rX7zOcjPArRbCUdTo/CviE6 LqSleBjBr1WHV7P7e12EDP1zjvSirMcjS0rxwDfTyS9MEJ+VO8J+JhbXV3JKMrITjj2p8vUVzZ8P eKFsLu4RlzGx/pXZQ+J0tdMcWybC3oKhxuWkVPButfYdSM0ozkGq3iC4/tDV5JFGFLUoxsNGxZ6a AivntWrFatIdgGOaLkmkLMqNhAPqa0bKzESbR1qWxpF17Yq6A1dTas7ALkfSs2ylEltotxJA5q2y 4jNZtlXsQQqGUHFW1+Q9KVwbHwpsJbHWonhMgAzSUibkaWuyTB5zT2g2yACncq5DJabpRUc0OJNo 6+tUmFuYx/sPzuDyay/sxDkVrGRDVjOnhYSHcARXOXkZRyYxla1TIsYNzZNcDd90isaeFZX+78w7 1QmjMeyRkIY8+lYcUQgd0C/jTGZUtuYw2Tu5rGu7PdtK96tPQmxnaghWELnNZVpDtBPepGtzGuLd mlYk/L6VW+6jL3pNFNGKI1ZmB+8Kw5oDvbK4x0pxIMORmfcH6jpWJckQ7f7x9q0BGXqUKGMsnMh9 q41dMYD5gVc1IJmHPojLM7tJuHoa4+75Qsq4weaW4M5TxCnnRxsi/MetcdNp589vamD1OWuowzjd 0BrkdUt92p5QfKfatoLQq/KZGpbUQxBP3nriuKmR1cq/y+tNEPU5a40yD7UXC8/SubvGVLx1HSpk W9EczdxCAlUbk9SK5k6f5Du27I96UVqZtcxzl9EBGHByTXP3Vk08RPf1NW9CEtTnmVnBjA5FZc0U I4f5SPapbsi7XOZlKJcOQOBWLcIZonPUVjuxs5Sezae3GeBWFe27LDtJ+UdKpuwloY66f5+CDiqt 9btZTDIBB9DRzA0cvqFoDc7icA1jtH5Wc/Me1S9wZj3quyfP1rJEavGcDkUr6kvYyXgYSZJ4rKli DTgqc59qtuyCLC4tWi+YjOKy5wrsMDnvWF7mjKN5E6qAlZbqY1I65oDoZVw3lwYHyt7VSaJriEBu vvQhIr+VtG3qB7VGbfII6Cobsx7mfJE6ADOBUsoZo1YHp1o6BsZ0rZbJGBVaVd3K8VSWhLeotqxX Ibk0nwyhaL43aaMAiSYc56fMKxmxn9f/AMH1FrFo9sRzhefxr9doES202GNT/DRS1YlqR+R5gJzw KrtgKd3FdZSM9lBbOM1HMxaDYn41LKT6GbDGbeTpkV+cH/BR22+0/Ca4QHAyD+Wa2oP3hH8x2iSR zmbauCjY6das3UyhgCMAelcmK+Ni3YnmSPgEYWrUa7UPH5Vy2ExtrIrSkEcfSt612+aAp5FSxo9K 0bdIBnnFevaNMU2FqpK6C9j1vSFJAbt6V6NZKz7Wzgd61jorEy1OjVmGCPu1NI2xRjvVMVihLLuY 
BuAKzZWBlCgVcdhJiDarncKp+VlHwOe1S9WUtD4nU7X24zSnl9pHFYJ9DexYMe2PG7bjpiotjPIM 9ql9y0kWt6l8EYoEpTtVboWxKgFxgqduKeZDvORzT2Q07k8YCqd1JDKFyQuKzvqJlpZDIu0jNP2m DjNXJitYuQsCBxzVyNS5JHGKybsIuRNhM9/Wm8kEZ5qlqDehTLA3EakZfcOa/fH9jR8/DS0GOiDN d1Fe6Jy0PsjcC+ccVMXAGAMU7WIYgYlgFHWrCSclSvIpoRYgAjYZHJrzv41p/wAUU+OORUyY29D8 QP26pWk+B1wAPkCjn86/lY0oo0965GAJOPyrCS1uQIHLMzdvesdGtVu3KDErA7igzTJsy5pcljbI wKEOeVLptrXdlvAjSLuVTxmlzdykrlq7jgv7hJWiV2XgN6VC+iQXU/mGMK/ZsVqtieo+40q3tLQy 3DBlLAFmPentbWFu8QKSZ6ptiJqZStsaLYs3C6dqF4PPhAmH8Ui4zUsL2aF1i3KoOCUXINQpO4OK a0K7aVYx7mEYWRjncgyT+FTadBbRs5WOR9h+bdGRirUmTGyOzgsba9VJzGrkfdbrWp5KbyxUKetL cpdihdXFrJbSFlL7T8xRd1SaLf2j2qMyFYnHymRNuaOaw3oX76G0iCqMIp/hHep4rewvLQwvGF29 N4xS53cdrK7KdpFbvOsKiTYo4AjzWxFb2UoKOgbaekgwa1UrEaSNlra3uUQeWoC9MUmrRRPBGsvK fSqbKXurQn0qKyfzQBJhfvBo8Vr2SQJbBoBiLtgUlJkNK5p71a3IxzVKTS0ADqdmeuO9XF6ks0YN I8u2KR8WznLr/eNdLbWdtbQQxJCqog4AHSqlsJbmrFp0LSSMF2Bzl8D71PXRIgylThF6DHSsIya0 NNzQgiglk+eMPg9T61qXcC3AJC7foKtNp3KSTVmbXhS5m8F6/Za1ZoPOt1wB0wCc/wBK+04f27/F GqjZGnMfpI1bxldGco6nhnxR+OerfHW5SDUQdkR5GSec5rD0bRI44PlQK3qKG7I0Ubm3/Y6kqo4A FWINMFmHIcjihS6mcl0NrQLK3/s2UoCHB5+Wr+maYuogsjlEQ4xihz1CC5TrdNjjhOxE+Y9eOtdZ BYxxKq7QqHtVp3RSWtzRhtvs2UX5l6kYrStNQW3uobqzGZYiPbvURlZila59q+Gf2qNa/sq2s0h2 rGAPvGub+KXx/wBV8WWQ02UFInYA4J9a2iyJx5noclY6fb6dZW0Khs7eu2ujttPiW+UqeR1GKhsu EbHZ28nlSEom709q6C7iLWiSSNgkjipua2sdh4YsFkmaIK2NpJO3rXY6VMkUTpGuERgMYoi03Ylb nTW9krOH25JHQiut06UaVAAy7N3ah6DZ1RgM8CpjOcdq76xs0g01IsccdqyYkzYng+06asKjb74r c0e1SHTkik5AIwSKz1uM7hY0ggj+X5MVLCoERKqcewrXoJbmxp9u0iAjg+/FbaqbcZbk+1JrQ0Ne 1m81AduPwrQVcLhckn2qBIuR2q+YpIrrLaPy7UlQfxFWJ6mpa2HnWvmY2+1atjaFogBR0CxuRW32 eMZXJJro7WQrsTHFSO5vwWmX3KOBWrBYpeMGHapBHW21gpAz29K2lCOhVVwahuwFOCN4pTkZBrat 7YqQT0qZSLSNOdQADio40G7OP0rK5V9C55WAD0p4ACkYrNszECgdBS49qQBjFJgelAC4HpzSbaAI Xg85uDjFQNaFmHzdKtMpMesYTORWO8Je4IC8fStIsb1Mie1Ds4JyRWBHZeUGLc89K3TIMq7tvNlA Hyg1nf2elpuL81RJyssKXEzuowB7VzflmSaTjpVCM+S2Mqg5wKoX6+RGChzTEYFxGJId3esKWJxw 
ox70AQPEEwG5asS6XZKcjn6U9x7mJcoY/mVeazZkMjbu/emhGbNbKqOxHJNYFxCjKNw+ajULGBcr 9lywGSTVDVpC8EZC4PfFBKVjl7tEfh32g1xOs2cdpEUDZjJ604jexxWtwm2SAxn5cdK5yQoBLk4J 6U2SnqefXcJUkZ79ayLmPyDvZdw+laQZUznZYPtLlkHIrk7+IXCMMfPTe5Kfc5GRVXKsCpHqK5GX TkkvXbPFJlt3Rzt5Yp5hERye9c5d2bSQSDdtakRF6nEJaNaQhWYv+FUru3kuSI0cpQ2K12Y0qizk MTDL9zXKapAs0+GHShopHP3MA2HaK5C53plEPfmpsVbQieEsqp0xXO6xabogd2BnpUPczaOYbfC3 y+lZhblnkJY+mKOWwJnMX26efIHy1Syqn5gT+FJj3KGpRq8ZJGG7cVzMMBSIlTlvekhGc0iqCH5J 9qgtYki3s3TtRJ9BRRlzzmXODxmsW4Y+cAv41KVi73GuPM4U4rGnj8qTkksPapbKWxkNtlnJIyfS qEsu24BAPHBGKEriHCfE5bb8pqKZo5iSrEHPSokhLczbsblX2prHyYg33ge1WloDepltKWfp8vpU dzASBztoXYW7HeWsUIJPNR/Ctivxm01mclROv/oQrOpHQqx/X78KFM6aRNGfkwv86/X7Soo7nS4Z T021nR0YbFa7+Ukjj2rMKtIpyM13IRF5DRoADSKnk7h2NS9xoz5ZSjY7V+cX/BRBvK+Fc8hHy8DH 51tQVpAfzDaOi2s1zu5LtkcdBVqXYRnGa5cV8bEtB6yboMHr2qVW8qLJ5/CuK4WKcTbZsjofaum0 9l84ACktxnp2hy7jtUYPrXrmg/OwBHTuavYFsevaPKNuAK9G0td1qRnvTTYjoUbEaqRx7Usw2OoF bW0EQTgOuTWbvWPPr9Ka2ElqVSoGTnLVQEjb8CgOp8bTt5XI5IqqzmZwSMGuc6Vqywy5wG6ilj3R PvfkdhSY9iy2JcOowKYxLcY+X1pbCZIjKi4X+VPhbY2HGQaE7sUVYuOgwDU3lB0GaVtRtj4Y8nCn ipGBkkAHSm9RdC/Evk5B5qQSFunApOI1sXMb4QOhFFuhYHJwRSgrCaIpk2vHIOzgY/Gv3y/Y6xH8 MrM9mjBr0KHwktH16FznNTqoVCDQ2TYsooZU9amEWZc9qFoFiwz4kAAz715z8a50TwXIDy2RxUyB n4bft5yGw+Ac0hPysBx+dfyo2ke5blkPBkFZTZmnqF3Ibe2fnnacfWv1C/4JXfsU6X+0Bpevax4k 2ukMoMYIDfLsyf5Uoasp66GF+3l4I+G3gJ5dM0KKD+0beQI3lqMg5+tfmT8OtN1n4ofErTfCukx+ bdTqWfYc7QCM5PbrU1dJaDgrJn1t+0h+zTqP7NVhby3kqlmXMkYcHvjtXy7B4nmuNLt7qG0lmt5Q NrLEx6/SrTaRHmdL4D+FmqftKfFTR/CNg72n79HnU/KSFYE8H2Nf0U+Jf2P/AIX/AAUl0rRddFs+ oyKFVpQoOd2PX3pLUHLVI+NP27/+CeK6Tc6Nq+gS/ZtNmwrFcDhnx/KvpDw7/wAEzfD3g/8AZSk8 ST3P2u6W13gsin+FvT6UnFtjnLkR+fX/AAT/AP2Prn9oH4sXmp6rN5WhWausceQRt2huh/Gv1oT9 ib4ffEey8SaZ4cjhW/tAySNEozu2Z55NdEocqRNP3lc/n68VeBdQ+DPjHV/Dt0HuDb3G2MKN3AHt 9axdSuXu9OKlXtbl/lQOu3k8d6z2LhvY/Yr9i79hXRfCXwAk8Y+PJY7mO8CvslKsFJBHt6VqftX/ ALEPh/xH+zjZeK/CAS2s4vLl/dAAFQST69hRCHNqRXnyM/OL9kH4Ban+1n8fNM0m0timjxRsbifH 
BIIPfjpmvtT/AIKN/sPRfCT4ieFtB8Lj/S7qRFkwoX5fNCt09jTUPfsEql4aH2bafscfD/4FaPol r4oaF9RuYgN0wXqWx6ivz0/bV/Z30Hwf8RbKPw7cxlLmQALGV4ywHatakUnoTBtI8++N37K2u/BH 4Z2viRYzeWsm0kr82ATjt9K+d/hNrGn+LfHemNqm3+yGXc2/6jFQ9Uac2p/Q58Nv2O/hr8Z/hXqu qaLDAJYYGy0aA87CfWv5/wDXNFfwh441XRlf/R7afYpPfgU4L3TNu7uXbWLypDkAr71chlSSOUpG 8qo2GAQnFSm0zVbFv7b5UMbvDJFAcYJjIFaF3dsuPIhacjrtUn+VVKegktSO81uewjjae2aFGxgl SK2rrUVsILVyjlJyApCHucVEe5Zp3Trp+upZSKVlZSw46gV11jCAxzyB2rToBdvrxdKsWnl/1QO0 ADPWv0e/4J1/Anwx8fINaW7hcXSE53w452Z6miLsUo8yPlr49fDu3+C3xd1vSITvjFyQgA54A7Cu O0x7x1VUtXLYzyh6VctjOEtbGxbagJnKkfOPvD0NSalcm2tA4XJJGBUXsWrM/Tf9hT9nnQfjR4a1 Rr8r9qX+HAODtzXyF8VvDEPwr+Kmu6GiSGKC4KoTHxgAU1qOcVHU5vSpWkRWSFpSf7qk11ksTvaq GHlSZBAbitG7aEqx9z/sXfAW1+MMWu3GobmWBiqbkz1TNfNXjnwnF8PPiLrunxBhFDdbRlcdhVRs 15mUk9zT8O3cpcSRwM8ZBwQprs/CVlD4p8X2dtqSeUpnUfMPcetU9C6WrP0E/ae+CGn+Afhzbavp MA3BkUhF6gmvibwwwucueHbkqalO5Xwysei2UbpbSPDGXIbuK9x+B3w0l+KviyFbpdlpEh3fUcip uatXPuix8C+FrLX/AOyY44vtgUg8DNfMvxZ+Hknw/wDGKwwgtDPl8AdMHFSrp3ME/esM0yZCT+7J ZBz8td14I8Iv8R9SjEjmGJXH+eaqTNbH0N8SvAdv4LtLdoANxABOPevOo0aSNBGC5HXiobItqbDB rV40cbS3TNdPDHldhPA70ra3Keh2Wn6Rca3ZKqrhcgAivX7XwfbeH9Ii+0nLEDrVIiOrJtb8LBtC S8thtj46VwtvuZl3DI+lD0Q4u7sdr4c0w6vqyQINoPWvXz4Ss7bVxZ8GYg9qzje42+XQpWvgdz4m NqeRyT+Fej/8I1YxXotMjeOOlEnqF9LmL4i8MvoN4I+sR6VT02BhI/HyjpVXuilsakcmzBZdwzW5 IqsibBzTRLZ0FhE0SYznNbthAoYgDH4VEikdbaWwjick7iak060Adj396xkykjWKKy429KnjXeg7 Vm2U3Yn28DPapBhe1ZtkDaKQBRQAUUAFFABQw3AUANCYpnlkH3qkxp2MmbTxGWOck1z89i27jPPt W8ZDZzt3aSK2GPQ1He6f5lsT1OK2jqQc3Dpe6BsVzzW4DuAMVZBgy6Y0yYXpWNc6U1pESx3LVoL2 OfnAEZwv5isnez8AALTsNIxGQrOwzn3qlINxOeTSsFjGm+ZiAMVjSx5OKnYRT1ACOEcYxXMtGtxy RTQ7mRe2pkTHUZrLlc4EbLx2zVAzgtbtQ07rjGOlc3DFBeKEnbp2prQl7HG61Okt28CJwvQ4riL+ 1EGT1OfSkTYyrm0WQozDjuKz5FgiaTjegBwCKaumUzy2PUltbiU7D8x4yK5+TdFIzA8k8VotyJLQ 5rV7c3RDAfN3rmrmEIxwOKJalxXunPSWqW91jOC1Yd9BGbgqDwOpqSbWZymoS29k5CDzc+ormJRv nwBtNT1K0RyWqReVfHI59a53UHCNz19avcV0YN2SqbVXr3rmbu0KuADjFRsO9yhJKAMt26Vzl4qy 
5JPTpUrcTOaaTDnjis2edYd25AwPervclI5ZrjdIePpS7kiB3oCazki7WMbUrpJHUbMAVzM8qidj GOPSlFEyM0Kqo7Y5NZRXbGxfp2FHUnYyIJVRiSPl96oTSqbgnFJoIuxny5DYWqV3MqRMCucd6ze5 r0MWxmWeN2A249RiqyuJgSBj+tWjPyEOFPHfrVbyljckLmlJFW0IrjaIhgYXvWXHAzzMc/L2zRsi Y6siKgnKiql+yttBJyKA6lMfvDgj5fen/D9PL+L+mog6zL/6EKxqMpan9fnwbYwaDpCAdlz+dfrr 4dlU+HbcdwopUl1A1ZdoTpk1iSM8JOwcGuzoESszl3HFR3AbZ/hUpFN2MGVy35ivz9/4KFRC5+E9 xEfY5/Ot6WkhdD+XfTQPtlxkZCNtqV2LN8owB7VxYp++xImHzgdjT1nO7YRkVxNalMZLIsbHaK1t MycORg0R1Ej1Xw/zhhXsGiTByoxtFWwPVtNUK67elelWEvyDaMVpFCtc6ZZPLRQeSaSTIOO9alWs UZFbeeflrJeUeZsx074q1sQ3qEEvztkZPrVVUC7ifve1S9yrHxQu5zgn5atsQgUYrnZpFleMGSYn PANaE5GFX171D0NHsJKhiQLmmJIXXy8dO9K4lqibKqgXbg0KdrbcZFGwIuZAG2kkDKAAactiXuWg vkKOeTUilkUY60RLWxYjYu2COe9XYRtTB7VTIWheZguCDkUxDhvY1Nx3IrgtGyKR96QEV++X7Hsg /wCFY2IHURgHNdtH4QR9gKfmqzt8wjIoM2tSwIdgyDVqP5R1yfeq3B6FkR4TdjvXkvxwRT4TJH38 is2Jn4f/ALfUW/4D3EZOeB/Wv5TdKUxtdIDkB/6VjLchKzI9VbamMZGwmv6Jf+CLjtc/BXxYScKv b/tkapaK47WVz8JP2w9BEvx48VT27+XPLqO1O2WIAFfsF/wTW/ZV0v8AZn+EGofFPxYgudXmhMyy FQxQFDkAjH92s5e9JDpyvFs/Lz9ob4567+3X8b0ttPna30q4v0S3AbG6JmGeD9TX7g69+zj8Of2O vhZ4Y03xFZRXBm8tPNkjzyX29j710OOlzFu0TlvBPwx8DeDP2z9H17ww1uXlik3RQkHOSozwSa+r P2xf2NF/aK+OWkeInvTaR2coIQFefnDd/pUcgktOYl/4KZSXXhP9l/SbCzuGhkjaILKvGMOatfBz V7zVP+Ccg/tC7e5lXTxukfkt8r1pC0mTf2iP51/hf+3Vq37Or3WkWEbr9tBigCbssWG0cD3NfuR/ wT10bU/gV8GvFXxD8ZSi2utWQ3KpI/rERjnB6iipL31E6aCtA/Pb9nbxJZftM/tE+Itcn0j+0LV7 3CM0TEAMo719u/t6fsL6Po/wbg8UadbR2lxE6SBUAH8RP9Kco9DNNqdz8tF/a48a/HfwZo/wn0a2 kJ86JZShbAVW+btjo1fsr8ffEFj+zd+xV4e+Gj3IfV7pIrTAIJG5ihPH+9WtKyj6EYpcx7H+yT8I rX9hz4WeE47S0N/ql9HGJ5vL6kttJJX2rD/b01ybTv2rfAd3PbtOJhj7pO3MyiudNym2ONO0Eet/ t9/sxaR8adX8PS3WqfZLtEBhXK5zvyOvvX4J/trfBrxZ8EfiRYyTTST24UvFIx9GGOgpzk7DWjsf pf8AsXfHHSv2uvghfeBtbiEmo2cPkuCCcnYTnn61+K/x2/Z5m+AfxV1HQ52/4loZvs6AjhRjtTp+ 9EJK07H73f8ABJqwjt/2ZPFfkjYqxnHHX901fhP8XHkX4na3I4+Y3f58CtIbCmuSXKc/q87mxQRc N1JHpX7A/wDBO/4DeGvjz8NtYElohvrVh5zsnLEITWdT3VdGjeh3/wAZPhT8O9U+Dmq6FCbax1qz 
lWJSCoYNgkdT6kVsfs4fsj6D8G/2XbfxV4tYahI8aHzZlBJyD6Y9KxScmkEH1OE+OXw9+HvxS+FX h650SS3tri7nhxt2g4L4PU19BfEr9iPw/wCFfC/gi1WBZGkEYlk2Dn95it2uX3QbtY+aP+Clf7P2 jfA3xDoV1osSiV4SGwuOr4NfA1oUEDyE4YmqkuWNiYzvJo9S+BPwcufj98WdM0NJyttnzJVyOQpB /lX9M37O/wALvC3wR8Z3uh6TaxxXxjcyMi4JIWpitOY1lPlifmnL+zmP2gP27vEBum/0a1uJN8ZA w2Ap719V6f8AC7wJrPxb1LwbZWUaalbxyK2I+hC/X3pxlzSsc+yufj78ePhe3wU+Ler2Kt5we4O1 T2HA7Vw93BvjjZh8p5xV1Y8pdOdz9Xv+CTGmvH4q8TXEsrGLzSQhHAHl19e/Fz9nzw58b4/FWoW0 CxX9sz75QnJOzNRF3NZS5lY+GP2MPh7FdX97YanpYu4becRCZ0JzkflXtP7bX7K9n4K06x1/Tz5M CuqyRKABgt/gKFLWxm3Y+uv2CNX0GbwNqCafGFkTHnEL1O3/AAr8/v2o203x58ZbnSNFTy9Qe+VZ 224yMjPP0NbQfv2FKWlj7dufhN4Y+AvgLSk1eFZppWjRnKZ5LY7V47+0J8PdETXtDutDeJGe6i3L GR/fFOUuYiErH6Y+L9N02TwFpunalGJ4ZNowwzk5wK/OT9pX4GxfD/xVZT6OBGJlIEQwAMnFZp2V y+bmlc+pvg1+z4g+FEs2pRK1/JHuHfBwe9Xf2X9Bu9D8T6na3KBHRiFCnPG2kr7m858qPSrP4RC2 +Ks3iB5MyNIcLx3x/hXsXjHwrp3ivVN9zGjzrE2zPWhX3JcbLmR4n8MvhjElxrcmpRBYkl/dbh22 157pF1LpvxCSOwfy7XzQAqHgjIqpLS5MJNux9VfHmyLaDZsrZPBbn3riPhZYRXwdng3DcAG21C1R Sd2dV8V/hyLK3guo329OPxrz3RrGTWbqO1gyxz8xxRHXcFqfSQaLwpYWljGA0hK5/OvR/Ffhk63Z W2G2gL/WnJ8rM4ux0mmabFbeEFtZfmUFRk1wGveEEi1O0FsuYT97aPek7mkVY6aK1g0DxVCtuN7/ AMQx716DLpCTeLBdB/3mCStDVhSV2bOhTteeMrgldrAN/Kki8OsfEb3Mr8+YKh6jSuXfiXODfwAD KbfT3rlIohHa8DGaEUWoLPZGpPzDPeuma2XyQ+3A9AKslrU3LKJWiVs/hXVRWiLGG7n2rKTLRu20 PydKuQWwizmsZM02ROFHlkY5p6KFTbism7kC0UgCigAooAKKACigAooAKMe1AEUq7MHGaiuYgtvk Dmriyl2Objshd5Vjz71nTxfZVdOtdEGQ9GcotttcgkgVk6jbRxEhetbohq5zcuUXC1VubYyQAk/h TuFjktQtlfKAYrjp4RE21T068VSY1oVbrT/LAccqazPKjVW3HmgTOWuVMTHj5T3rOudqIuPUdqLC uZmpxGSRUPKkVzsmLSQw447YovoCRWePMJbHA9qwpJ0uWIIwVoQ3ocJrkMkgZ05J61ytvo4jj81u TVA9jE1TTduShyTXFX9iU4I/OlclHISMFmcNzjpXLXMBiLsT1PSrQmc3qUKOUUqFGPSuGv7OWK4L k/KDwKe4iteQl4QxGGxXFOrLvDCqtoVF30OB1FXlnZenoayRp7xRlS+8HrmoWg2rM4XUII7ecbB0 PIxUE+ow3ko2x7CvHSmlczk7GLrkAEaueprzrV7dpE3A4we9Xy6Ew95mKmpeSApXJ+lZN5DvJcHk 9qza1ubP3Wci7BFcSc+nFczdS7Y8np7VBPUokJJGcHpWRcLlDnoelNINmc1HCzPtPAHeql5GyTgD 
mhobZjSR+azEnkdqxfKwzbRgn2qU7CexjXEbRoF71Tf7u11z6VL3JMm9RhgAYxWQkZUszdKOoNEE ibYywFZnl7vvLxWUty1sU75lMG1E2464rIQbEDEZqk9B2RG43JuxjNSRHzI9jLgigZUuIyVAI2qK pzsSAi9fWh7EWsylFEYiQTnNZsyiOcjG6kirXRHJF+8DA8DsKu+BT/xdnSpExxKoP/fQrKogitT+ ur4OyD+xtHkxxhe3vX7AaBBHLocEgHLKOMUUga1NCS32ZrKkUkkV1oSTKUq+UMHvVRtyKxBzmgHq c+wIBLfezX5/f8FA5lg+E1y7dDgfzrSn8QWP5d9K3QyXCtyWfIqzNdmGTBT5a4MW/fZTWhW837S3 mAbQKk8zac9jXGndiYu5SwAyW+lb9tIzEL0xVrRiPUdBkFtGhbr7V6/oc6HaSOvtVAj1bSW2uuBj Neiae5hPI4PemnqapHWIAVBFSOTsyetbQdyZGXIHQ5Xn1FQyIZegwB1rS5miq6eWox3qvcRC3UZb Jqd2X0PiqJPLyWPBqYMGAH5cVy3LasNGY3y3Sp5WEmNo4FDVy1qhmS2A3OKsByAABStYIu2gbTv3 N0FWYDjLmo3Je5IBv7YzUhHlHHWtL3Qx29dyqefwq5ImFGDxRdILltnCwqFFNiZulCkJF3G1QPWr PIhHH5Um7sHuV5lKTQM53AMBg/Wv38/ZFsf+LWWDKcB4gfp1r0KHwhc+rxmIEH5sGta2kVlyRis3 uTuWY5A2Vx+NIwzkDrVJ2FJFi2ZuA3Y15R8c0C+FJGxzkUpCSPwt/b5uTZ/AGZ25yFH86/lV0efz musAgF/T2rGWrI6jtRfyoTnptIzX9In/AARWsGs/gT4qQqJpJfuYOf8AlkRT6WG37tj8Jf2y9L17 w38dPEk99o5S2i1MPDNtb5gMH0r9q/2Ef2k9C/ax+At74A1KRIL+O3MXlMeT8h5AP1qIrW5MdI2P y90n9mDWf2aP2yNM0plWLSEvV+zTFgMoHX8K/Xz/AILQ+FNY+JPw28LWnh2y+33LzQjcuTt/e9eM 9OtbU7ydmZVPgR8S/sQfs0eKPg5+1no9x4t1D928Em0SSqe4GO1fof8A8FCfit478LftBeGtK8JJ N/Zcs6/aHh3YI81RzgHtmm5csuUpNOB2n/BUjVftH7M+iRlt2oMYswjkk7z2610vwT8NanB/wTdh lubUrJJpwIhIOR8r9utO3IyYLlTZ+HP7BX7EF5+0Z+0fDrfiKDZpGlBzHbOM8jDqcHnqK/Sz9uPT /Gv7RU+oeAvBFrJYaPpqslxISYw235uMgg8E1nV+NM3pv92e5f8ABM/4GWHwL/ZlvYhDFc+JVCiV sgnfsPp+Fe7fteSazefsPGfURv1Bo0xEG3Yb5vxrWKbTkYzl76R8Bf8ABMv9mc/Bz4Nar8UfEFiL jV7qPzLZAu8puQjaMc9VFfL37R/w98ffGO7t/ipqcLW2j2twktvbOxGF3BuhGf4TWVKTcWiqtua5 95fs4/8ABTPTvjn438KeD4NMMtzHDmRjG+F2sD16d6+v/wBunUoLz4/eCoTAqAYJkYEYxKverguV Dc9jyb9vuDXtU/aM+Hg0JpZLNQDOYxxjzl7j2rzT/gsxrdumjaLaW6rJqDx7ViU5OS+OnWrUOZNm Tn71jz//AIJ1/s/Qfsl/BbWvib4tmWG91CM3EcTkfKTGRjseq1+Pfx2/aH1b9pH4x32sLY+ZZPvN qi7iWBxjj61OH0iy5v8Aeo/oK/4JN6DrGnfsueKpdTszZb4yYlcEceU3rX4PfFiN5fifrSyMrN9q yADnsKqOw6kuedzmJEESjfx8pFfun/wRjtrqy8LeMnNv8jMfLbB+b90am3M7McvhPyb/AGnra9tv 
2l/Eb3Fw1qr6yjbAeo+X1r99/jDZ/wDCRf8ABNzTYbMm/wB1vGFjQbjn5scCtOVQ1JhdI/FfwH+y z4y0Xwh4O1/UnNho6TwMIWkAwPMHYjPY1/Rp8XUm8TaJ4Gl01/Msx5Zds9P3lJe97w57o+BP+Cxl rPFregSWiCSEQHe5OMfPX49aSyX1jlpF6jHzCnLVXCEffZ9+/wDBNee3h/afgV7lYysEoA3D5uBX 7ueFbRz+1Lq8sVvk+XNibB5+Ud6nyNJK6sfPH7ON8+n/ALanjlL7bFcPcymLJ+8PLFUPgjoN1H/w UE8WXs1gVjZ59tyVPI2L36UcvLqjG3Q/P39vya1j/aT1IRTiSVpXzgj5elfKN1eLb28Klg341U5c yKiuVn7Af8Epy903isrEcCQhWxwf3dfoP8LraTQNF8eSXyCFZJmKknqPLNRGNittWc98JdFt9G+C kd9occct5czRM5B/AniuJ/bvn1Ifs82isPNuC8fnAHP8RzVU4XkzKpK1jzf/AIJmpbW3hPxDErL5 rv8AIueT8leSeJ/gXrPh/wDaCn8RXJS3tptRQod46Er/AIU5e7qaqN9T6z/b2sb3X/CWi2umW5u3 eWPlQTj5/avjzVfhdrvgzXdDnvpuTcR/u94PG8dqpL3SFHU/XT4hI11p2hoqD7yE54/jrwH9tK7O kXekTW4D3KKMAnH8VJLSwL4rHqHw1+Otunwiee+kWO7SL7mf4sGua/ZU8bv4v1/Vb26VUlkcmMZ7 baFrob1Fc6e18T69ffG6XTtrf2cGYhjnHGK+gNc0h7nx1BNDJ+7ijYSYNN7WJU7+6aGvMnibQ7+3 sXAdQQxB74r4u8E6PeWni+1tDktE4DH8RSvpYtQ5dT6++P8AbTtotkkKkjKh8Dtu5rsvBulx6b4G gNoo35XpUWsTtc0fif5lx4YtUblyADj61hfD/wAMxeD9Hmv5yDI4yufpTfuhT1TObs75tb15Ll2I y4wDX0H411Sa00+0FsTgryR9aJe8wUdDatBLeeC4wxxIxXrW3Y3cWj21tayjM7JjOO9UtUEnrYy9 C0R7fxz5k/O7OM/hXSzwyQfEMqmWUk9vpUtlI7bTdkPjeYHG7DVi3NxeHxaYhu8oyDP6UtgT1Os+ IscMd1BHwcLz+dcRGgmhIHQUojNmxsHktshsEV1mlwPNb/MuRTeg2a9pY7JwCOK66G3ygBrGTGlq W1hO4CrSDYcEZrnkxsWipEFFABRQAUUAFFABRQAUUAFFAA3zDFQSDPFNaMaMqSxPmFg2BVC6iE2R t5HtW8ZWG1c5+505tvGc1zV3YLG/zHk1umSYZ08RTEevaua1KOSO52g8elWiXuYtxaHJZu1ZFraR PDIzDntxTEYspCx7dv6VydzbbrjbVLQRQ1iMCIKBwK50Iq9VyPpR0FYpXa75QegFYF1Eq3O81A9m WCqvAVxxXFSaJteWUdM1UR7nEXD+XOwxnPSuZ1GfajRY+b2qhPexwqxywXJYyk5/hNUdd3NEvcmp 6ktWPP203ZvkZsZPSsa6gRu+a0RFnc5y7tUkJPUjpXD60SJBGct6YqkUlcxJmMqHnBHSuE1Bnljk JHzA1Seg7JHHT/vNmR8w61zd7dNBLIsYyx4GaloTldHG3cTxAbl+c9axdQg6MvB701oZy1MfV7tY bVHdS4HHAzXH36/bY1kT5R6VVx09GYFzarKu5cZHrxXPXJCR7s4NS9S29TGnt0uIww/GucurRVfg blrHZhcx57JIlOB8x9qx57UyWxzwRVXHa5ztxtjgIPX6VjXEjfZ1YDFEthWMmaVIE3gbmNUZJiy7 imCazYn2MS8ZVUHvn0qjOS4BIzikNaM5u8YzXGF4FZs/zHaBxVeYSKFxG8bcmqc0haPB7Vna4J2M 
xGHXPynsain4AwKJIFqwkgRVBY/QVWecqwyKSWhSepVv5Aygng1nNJjB9aroN2K20IeTkmqU6q2Q ODUdRoqRqSjKRzWf4BjaD4saVhz80y5z/vCoquyJW5/Xz8HbjZoOjwhdxIXn8a/Yjwxvi0C2Vl6K OlZ0i2dHdrhFPrWC4ZCWI+Wu1PQT0KLbLkEYyfpVN4/s5yRuApXsTY5zUUMxLINvNfnh/wAFAYft /wAILlM7SpBP4Zrak7sex/L5aSfarp2X5fLbb9a151808c+tcOJXvsp7FT5YMKRUEu1uVJxnpiuR ImxdTFuoYDOa1rBjM+7pzVvYTPU9Gi8yFOc49a9U0hNhHGcVK3BaHsGituRSRzXpentlAp5rVR1K UtDprYAx8np0qIszoxJ6dBWqjYi421kYoSaryttYqKV9QMuZjLgDgrxVWbCkjGTRF2Za1R8XO29l H6UM3kSgEZPaubc0auiYozE5Oc1LAjBAprXSxEbpkoVYsjGTUaBgSO4qG7mr2JhOSmGHIpY23LwO RUpaEWuW1mAi5GDSF1GAen0oQi2Spiyoq2ExEo6mnayJepZgACnPNPhk5KkYx3pW00LLoUMmWPHa rDKwQc/LQlYlamZcF5dRt1XoWH86/oO/ZP8Al+F2nJ0AiGf1rupfADPq1WEcmQMirQAkQsKTWoIv W0e+AHNTlQhAoQ5CN8p/GvHfjlc/8UsUA5yKZKdj8Lv+ChsZl+A8sI+9lf61/LVoqfZ/tO7nD1k9 GZvVlq9gS/tGRhgEelfcP7C37eeofssWWqaWbMSWhkwhJbkbcdqas9BPQqftXftkx/tE20kMGmpA 8koywDc89ea+Lvg74mu/2dfj5pfiXSgQyQOkwT+IsR1x7ChKwvM+sv2tP2wdZ+OmsaZc2kIsrm3d ZPOjY5+Vg2OfpX1t4K/4KgXWheENIstZsv7RvIUUebLuJ4PXirg7Mlq8bHyz+0J+2t4i+JHxp03X 7AvZRWb/AHYifmG4Hv8ASvuI/wDBUdLqCyludPFxeRgAOwfOc1D1lcSg7WPlD9oH9urU/iT8UtL1 G9jZ7CBCPsvzFc7gR719v6T/AMFaGtPhlHoR01Ft1tyu0F/Q/wCNWnd6lOPu2Pnr9nb/AIKKS/Dz xfql9Dp6wLKWJX5hnK4r6Dg/4KzC007W1ttJjS4u1bDjeM5XFS1zSKiuWKR8ufsw/wDBQLW/hx4h 1q6vkaWO9lMixMWxH8uMCvX/AInf8FQtS8c+EL3TJrLegkHlp8xGBW/MlDlMeRuVzq/hV/wVGl8L /Bq20a50xfLjRV2Hf71yX7Q//BR+8+KXwPh8NabpkcETlEYKX4GTn9DWMUol1E5Hxp+y18VbP9mH 4o22sRWX2qcRtlyh4zj0+lfV/wC1b/wUK1T4767o8lrbfZZIHVxIhbjDg45qlK4nF6Hvukf8FPX0 6302G/sRc3cEJHmEOec+1fDHxo/aV1b4x/H+x8V6nulsrbcsVrkkcsCD68Yq1JKNiHB86Z237WP7 Xet/HnwJp/hKzc6fYxMnmJGxwwVs9/Y182/DjUY/hR4ysb3yfPs4EKhMdiR6VEdNDecb6n61j/gq PJp3wsvtF0fT1g3QMhI3r/CR/WvyM0/ULjxTcnV7nIupjucH1NU3YinG25rarZS3UIVThtwP4V+k H7H37dcn7OnhbUdKTTxIT8u47vm+XGeKhPW5dj4m+NvxDb48ePdY1yWA2spmJTAPORnPP0r7g/Zn /bkvvhj8IbfQNVi+3W1sFVRIT2z2FW5XHbQoftR/toar8bPAVhpOjodNtomR8xkg/K2ehr1H4df8 FAb/AML/AAa0vRpkN3d2qoPMkLZ4JPamtFYlnnfxz/a3uP2obrR/DcsHlSyqN0pJ+XDc9eO9ew6H 
/wAE87OPRrUjWPvKO6USXQpNJXOa8c/s8n9knXNM8W6dqplu4yFZVZeVLDPT6V9ZeDf+Cn4sNa80 WAedkIZyH5JoitLjvd3Pmm6/au1G4/aLfxdCDbqS4aNScPuxz+lfVj/8FGfst9qL29kiXskb4kBb rij4tAa9658b/Cf4a3v7T3xB1TWNXvik1xKXVmYZGQPWvq8fsE20kTQf2v8AMqn5tyU5R0sQ3Zm5 8Gf2hYf2NtXvfDSwrek5Pm5PO0Y7cd61fiT+3fe+PPCN9YWEf2RrluXQsDjGD1q4q24pu6VjG/Zs /a6vvhD4KuNJud155ZAiMmeMD2rvfiJ+2Bd/EfwKbWe1V2kIyCScVcLJGc4Ns8i/Z2+Jd78HfG8t 7byM1rJkvD2zjFeyftDftL6t8VjZLZg2MULrITGTyVbPes3HmZ1xaUbHsnhD9st18J2UV9bC4uYQ AWfdknNefeO/2hLjx349sNSltsW9u3yxjJB+YHP6U2raGb0Z7r44/a3utY1HSmgtgIUxuUbv71c5 8YvizN8T9ftHePyoo4yBjPrmlsRFXnc8zgea6jkiaZgpbOBXrnwu8YXHgLxJbzW4JiWMh19apRsj ok9D6xtP2kbdtRdhbbZcHnB603wp8ep55dVaUEvITs68fLS6HMovmuSfD/4ozeGtOvy+6WW4bdzn 0xVzwp45NleNfSRAzO2e/FSonU9Ue0ax8YBruj+W0WW6DIPFWPDHxAk0PRPLY7l4xjmiyuRLY27v 4itrGmIpTc4I4qS98RzappUUJJXGOBSauxwskVY4msNh38Zr1vTPGh/s5Y5F3gdCaTQbGn/wmE12 YRH8kaY4FW9T8Tm/1aGfHKf40N2RCjeVzoLvxdLfanFMDt2V1Vt44xfGVU3SYPNCRUkUbfxBPFrx u2Y7ia9Ck8akSJIIhuPU80WEiG7nfXbsyu3PvVq2sSIwoPJqVoM62GxazgG454rqNIl820wBtx7V MikbcQPBNb1vCQmc1zyZSLSrtp1YgFFABRQAUUAFFABRQAUUAFFABRQAUYHpzQA3ywxqnMNqkhau LKTMhm87K7cH6VzFxZq91huo710QJa1OYvFZr5wg4HGawW00+cXf5j2rdbESRyVxv+2OGHGelZss ItlO7oaXUT2OXucsSccdqwJ4iCHPWrsNWMPV4mLK+cL6VnqwJKle3UinbQGc9OPN3heoNYNzERH6 moAfY7ZYzu4/Cs2Sfy4nQdCe9NBsea6raf8AEwUDgAGuR1KP7PK8icnPJNaB1ONms/t1+JB0A6Cu f1x2SUBT3pPRikjDaKC8uSsh+bB7VxeoQJG7LHxj2q9kSjlZbZ5JSVboOgrkH/ePJkfOD1NAbHKX b4JAGSOtczeOsZ+foaadnYGro5O5jUK5Hr1rh9RtnkIaP5SD1HeqJMnUWZ9rHt1rlLx1lYkflSYJ XKM0ccGmuGG8npntXDY8qHYR0pdBqJxmoW7yNlHKgHt3rLuoPOiAPp6Ur6ktGALV4yRnFU5oztwe gqJPUNjBvJkXkc1hX9z5W3cMA9OKTRSkcnqDkPvK4APSse5Y3WGA2r/dpjMYopmI9Pasu/n2N8oy KlglcwZbjcTkVT5jhZixOe1O2hPUxjLthOV698VnNCxwwNRsK9ys0e643s3FZ0kYMrnOc9KSKMee MKhUj5vao1ZjCAR070pbDiVppd3y46d6zmkJA3HmiGwS3I5oTeNhTwKqpFsUhuoobsCMxGBnIPNQ XbqrfL1pLUadhkEmVG6qvhZgPirpDIuB5q5OP9oVnUGj+u/4NOItB0Rs54X+dfsx4YmL6LbOw3Lt pU0hJ6mzffvMMvA9Kx5DvOG+7XSthyIHgEXKD5ay7huvFKwJmJI2UKV+dP8AwUAUQfCi7yOOB/Ot 
qC94e5/LlYKYTc5PzF8r9K2IJCkBOfmrjxX8Rg3oZ4m83JZad5qRYBXcT7VxPQSJIxgEnO36VraW T68VrHVETZ6XoMreeoBwK9r0lljK570rageraW2UAX869G0pt0i+1bRGkdDLEyTlgeKevypn1rW/ QzejKrSmTGOMdqfJ9/3qWrFmdI+ck8YqlIGZQajY1ifGJjUzgnoKeNrTnP4ZrK1iyWJcOSWprytv xmlLsSh6XHPTJFSglvnAxnrSiir3EliJ2kGrixlLfcKewxLdljbLjcD7UT4fhRTbCxajRo0CsM1o wcRlcc0tyVEfApifmpZIgHzmhOwi4I1EfPFSxEzDAPAqW7sLDXYwTwOByHA/Wv6CP2Vfm+F2mOf4 oQa7qXwiaufUSEKxJ6CrkGDLj1oCJdtmxuXOAOlTxbmJB6etSkOTJ8hsL3rxr48Q+T4SL5Gdw/nV bEWPwv8A+Ch84h+AkswHJ2j881/LBoYO6YE7gzZOawk7slG1HIsbspGRVLTLmXxBczWun2L3rxNt fEZOD+FKLsEi3d6dqWhNGl7pL2Ab+No2UfmRWlIyacNz8j1rRO+5NuxBbyx3YMiEAelaCol6w3oG I9aNhW1I9Q1e10NMsdpY7VUdya14PDes3VpBPb6DLKJMEOIXP9Kyc7M0UdLlDU7kaVqZt9VtjbXC gkBlPQdetP3m40hb+GxZ7VyNsnlnvWnmJ2sR2F7Fq10bS1tfPuzztCk9K6CTQdT03T3nu9EeCBOs jROP6URdwashuiatZ65a77fEqeprbiMSLjyxu9cU5CRj61rNnpBjjuE3NIwC4XPJOBXRXzN4btI5 JLQxKw+/sNRzu9jRRTLWlJq2tRLLaaZJdW7kYdY2P8hTdb83w5fR297bG1dzwMHn86tOzIbWxu7o rZgGjDs3frVx51i24YKR70nqw5Rkc8H2sMvMp68damvr8yXKwxxNJKw4RVJq1pqUaNtomq29hLI+ jyRqPmy0TDgd+lS6LfLqenJKqlCcZBGMUua4mrGzIxWQENwK07WRvLD461VtBFzDbAgHy1ox2yCJ d6BwPWmkBpWtygUgJtT0FFvEg34Tk01uNmnp9obS6ju7ceXeR/cYV7VZ/G/xsoWJtUlUD7v7zp+l axaW5na+hV8TeOfEnjmBINX1GS4hXsXBrEsLKO0YBYwW9cUdNA2OqSPDKMZ9ParL6fFFIrMmWYen SoNI6s67wv4p1jwBJJJpV48W88gHGO1d7Y/HzxhbOd+pzsXGM7v/AK1XF9xVIHNwtdazq8l/qUpu Z5CeXOetdZaW0Fq/yxgN9KJO5mlZnTWkaDlkBB71v2MWLgCMbVFOJdjttFIgvHJGc1tW0jtM4YZT PerT1CWpbj04sQ6nA9BXW2V4sMQBj5HGQKmQ7aHY6dOgmQsM4r0JrpbyPcg20NXKSsTaJKsDs83I B+teh6PrsE02Ui4+lPoD1R1ttapPKzlAK37BYYLaSRcKwODUkrQ6XSL6WOFd0J2vjBwa7NrdrRAM kEkdKL2NH5HY2lm8aokmQGHpXZWNs0VoyiPcq+1ZvcXQ2/DsWLgMY9oPSuwaJlnIUDrV7Eq5uRxC 5twDziujsLFGtAoNQ2XbQvCI2iBQck1pi0ZIwwU7vcVO7COht2CLJbneMNW3okIinORke9WhTOmW 1Ml0Mc47V0C2hNuSw+YEYpCWh0NpGTboAMt7V02m28ith0P5VLZVjuNJjW5yH5I9RWlERFNsC8fS smwRsxQYGPWtqFT5Y56VhJmnQkorMkKKACigAooAKKACigAooAKKACigAooAKhxlDTW40QHAwpXr XMapabp9oOPet4PUbXU5m4tPJmIJrBusrIAM4roTM9zjr5wl8+Bk81zl0rXKMX7dKpKwmYjQho8Y 
5rmtVkMcqoBk1o9RGFcxNIuW7Gs2VPNIx8tLoBiahALMkA4JrFljCqM1NtQ3MuaURj5Bx9KoTMsj AkbR9Kewzz/xEGjvwyDKniuU1uDbAq5xnrVEX1MDy1sHIBBBHrXF39r50hH5GgowV8PPbu07nn3r jb+2ImJ7mncmO5zEcotL5kcfeBrn9e00xwNLGcKDzirignoefzW6i3Eg4c9eK5O/0/7YhGcHtmh7 jjscRPZy2IKSNu54zWVNLtXa1V5ks4DW4pbjKISgzXKFTa5Dnnpk0nqJOyKxUyqyg5brXFX8ro5X HHSi+hp0MG9j+yoOc5rFuZFcooOPWk1oZ9SrLJHvIPasG+VkiZz92sbFPQ4+aJJlGwYcc9KpalJH cRozjDIMVSIZxN9IZ97Ac9hWNBA8sAdzz6U2WmZ9zIsX3VyT7VmtbtnceKhod7HPXkW2clRlax5S ccDOavoRuyrKPKUgjIqFQsi8DArArl0Mq5g2IRXNyoWfanDDvVImxNGgVcOMtVOeQQth14qG7l2s Z9zOgwVTP4ViTqvm4Pf2qoLQybdxYW2ISvFUmk3tkjOfaoaNUtCo0axygDlqypI990T+dOOgNaE+ FWTBHH0pfC4K/EnS40HWZf8A0IVlVHE/ra+DCbfDGjIOgVefxr9lPBkpHhy2RBnCjrSgLqdTNiKA buSaxJyCR6GupbFOxUkkMalR0FZ8nzLz1o2JWpjBv3pyOlfn9/wUDtRdfCm6Q8Dg5x9a3o7jP5YL VVFzcknKo+0VJnbyOnpXn4v+Ix2JN6lMkcVAoG/JGR2rkepNiwtyYzgLnNbGnMsUoLd6uOhLPUNF iVMPuznpivXtEIdORk09y0j1fRGJtQAMYr0jTLYtCr7tp46VcXYLanWqV2gd6ZcR7Ixn8q1WuopI qC3MiEj5TUccbBDk5NOQJXM+S32uS7de1RBS6kA9Kncfws+K5GyNo+9UgUMgx96uZt3NWiRY8f71 MYYyS3NUtWJEULBWIx81Wtz4wD1qnoIsgGNVwenWm7jJPkE7f7tQxlnzdg5HPalDM209KcUUi5HI zSDnj0oglLXTLzgUmhy0ReZdzhs/LUouVmO3GMdDioMtzRRjsGRkUsXMhA+Ue1JblLYaytJfwRg9 WB/Wv6Bf2UizfDTT0x8qRAE/nXo0vhC59Sbd3Sr8bHaOOaQFzjcD3qwJCTtNESGSMuFyOxrxL48n d4WCdSWH86bCTsj8Lv8AgotGx+A86LwRt/rX8svh3ElvMVPIYA8Vz2uzPYs6lK1rakqeSK/oG/4I /fA7wj4l+EOveJNW06K4uYSHkLJnJEZP9K0UNBvWLZ8If8FBv2kfC/xF1O90DQ7L7NLaThMmJkAw c5ya+J/2SPg9r37Unxjt/Dtqpl0eyVkvLgHjcMEe3Qmoqe6tCKD5j6Y/b4+B/hn9lbUbO002+8+4 LBJI1C53FsdBXzjofw+8V65o8N/YaNJLC4BX5GGR+VQ5aJGnKrtnrP7J/wCzdN+0D+0/Y6D4gs3t FtmLyRPGQG2lT3xX77/tBfE34efstfEnw94GGmbZLoiMeXbsQfnC9QfenGnzO7Lv7lkfPn/BQb9i Xw4vibQPES7bS0uU/eAKAPmcDvX0Z4u/ZO8AeHP2GkuNMsoJGjtQ4mVRyQGOeDWlubQ56bvoz8+f +CYf7NXhrxKmueNNfkieO2DNDHLjpsz656iv0e+Gug+Cv2zvA3jTTdPskjSwLRBxHj/lmW4ya2jB RV2OpJvRH82vij4Oaj8MfiHf+G9BsWvRbz7DtQ8gY54+tHjG11DwCEOrWJs1ZCSSpGD+NYyd5Ci9 NT0f9hj9nDUv20PjRFCUaPQLS4UyTEfexhh149a/ab/gqZ+yn4c+Engnw7pelQRw73jEswUDjzMH 
26VM42map2icj4W+MPwq/Zw+H/hfw6II73U7toomeGLfyX28lT70v/BTX9k/S9K+HGm+M9NhWNpN u1AoH3m/PtWkrRdjF3vc/HvRPBXijUNHS6g0d5bdUz5hRun5VzPhezi8a+OLXTbpzZ4uFjn4xgkj 1+tJWvY6Efoz+1t+xNN8KfhNY+JPCub8fI0rKBlV3c9M9hXwT8D/AIjafovj231vVl8+zVCzoRkg 8Y469quSM4Su2j+m/wDZ5h8B/tW/s965qtppaI1vbsFd4Sv8BPc1/N14m08+GvG2p2aYFrHPtjA9 MCqVNRRFSbU7FmaNUQsOnWtHQLbVtesJ5bbT2kiiYLuCtzSXY1Tsb2saHrGheHY9SutOeC0YD5yj Dr+FReBNI1r4gWtxJpVi1zGh+8oJzxntVfZuTuzX8QaNq/gmxhfUdPaDzCAu9SM5OKm1e01LSLfT ZBpzlbrGMow6nFZxlqUdTqqy+G/EMNlc27QNIpZSykdK3BOsrmNuGz1rSTBItQQXE14kFrGbiQ/w j/61d5pHhPWbzVJLZNNk8xQSf3bdvwq4v3SZKzMvSbHVNX8TXGmw2m6eDPmAZ4IGa9As/COsX9rN J/ZshMX3sIxx39KxcruyNY+6c1o8k+tawLK2hMt1tJMeDxiu8Xwdr+lWU091prqqHurcfpWkHdCl JXLnha3uvFaslnbGaVRyEBODXWt4O1vRtLN3qFnJEmQMlTUqd3ykySWpt+FtPv8AxHuFraNcBDg7 FJx+VdRrWg6n4Shjuby2aKFuhYEVpzW0Ktpc7XwnoWoa/GlxBavJCehVSc1u+ItNuvC7lbyBoFbk FlIolKxk9TS8JeG9W1mBLmG0klgboQh7111/BN4acQXsBjkb7q4NNaopSWxq2caYDAfNXVWIa6Aj jyZc4C4rTZFnfaN4O1SSUiWxfZ1+4a37HT2juXjih/0gceWBWfMQ7naWWjapbwM89u0QHYg9KvaB pEl7qEQc5hknTPPuKbaEz78+IvgOz8OfD21e1iXcNvIHvXhPhqCTV/ENpDKgaEsM8+4qHqODufVH xV8Hw6Vp1q1sFU4Htxmu3+Gvhu01nwpOXQGVV6/hSvdAeM3wuNOvXBj2xK2AcVv6faT3bllU7T7U Xui4rQ6WGAWcRTHJ9qv2MMoClVYj6VO47naaZB5l4nnJ/EODXs3jTw/DZaHbzwIPmAzj60lo7Es8 0t7Uq48wYz04rp9Ps5I5MCM47cVV9RmtBOy3pULz611NnE8sTFgSM9cU2JI9Z8C6JHeZkYZK+teh WL29zfSQrGMj2rmm3c1RmvZrDqTBFwvPQVbhj+duP0qb3QW1NiyxJkbenrVrp0GKyluDCipEFFAB RQAUUAFFABRQAUUAFFABRQAUUAFJigCvLHlwaoXltkjNaxZXQ5m/09GbO7muZuWSJCNvPriuiOxm cDcQq7uwHf0rPaJGXDcg1siW9TnLiNYpyi9KxZLRftJlYdKY7nK6hgux7E1zs42fMDkCqJMm+j/t DDEbSPase6tmMOMdKRSKhjVbbbjk1zt5GUXGancb0OM1BhccEZCn0rlNdtHuSrA4SrtYyaOWnslt ozvO7J6kVk3gjUAL29qRUdjj9cv7hgFZP3Q6EVyN3JDIquz7T9KaDqcNq7hdRV1Xeu081lTXfm2E yt3YYBrZbES1ZxVzH+5VAAR61xWpQmJ8BsY9KiW5S2OY1VPNGX61wt3+5O8807ma3ObkjnvpyYxX Jalpo+1ETt8wPShotK5kiP7PcmQfdCkVxV5A1zcN2GafLZXJc9bIzbnTzI4y2a5PV4Fhk+Qc9+KU dRmC0eGJ/hx6VmyTPLAUc/IDxUSVgbuzFeHbKzqMY71yOojz33K3PepT1GzGuyAuwjaT3rKlk+zI 
mmrmL0ufG37ZHxw8NfDr9rbwv4ssZkuXKNC5i+bG+RR2+lfU/ji38E/Fv42eHfHd3qMKXFpaSbEL L0LBu5z2onoznhTZ+UP/AAUq+P0X7QfxstdM8PQteR2QaOQqpPO4Htmvin/hCvEYkEMehyhh/wBM n/wqFJ2O1R5UkfYX7CkFhp3xdu4fF9oLCYh1gllUjjaPXHev2W+Hl/4Z+Akvi3Wv7dF3FMXdYSyc fu8YwDntVwXM0Ke2h4d+xr+2RpHxQufHOnXbfYobmZhEXG3KmPHf617H8PPF/hD9lb4ReK7i11BL iS7JYBSpJbYQOhrSpG8lYyw/u7nx/wD8E4vGeka/8dNS8W6tcJbzXQdollIXAZQO/wBK4v8A4KM/ 2XffHMazpVxHPOQ4bYwPUj0rRrkeoVl7SSa6HTf8E6fjrb/Dv4gX2nas/lreFmRz6bQK+iv23ovB mhaXcarp6wXOr3koZnjIJyTjsam2lyqsrpIx/wDgmk2n+H/GV5qep3KQ3LK2zzGAwCvPWoP2pZdP 1/8AaMiuLO6SWN7oGUhgQPmHf6VEY6FTfNJeR99/H3UNBvf2brfSlvYXVQm0B1PIJ96/IX4Y/EO7 +EvxEsr2yY7YgUIB6gkZp8hNR80rn7tfs+/GbS/jHqt1d26E3sQKuSpGCRnrX5e/tUQtN8Z9bE/y fvmKj14FQkr2FZtWOn8EfHnUtI+EkOgRgiERiNeTwOf8a95/Yw+JVt4X1G80e8by3mJKZ9AuK1kr rQqlFwbbPrLwx8PtI0L4q3HiRb9WmfcdmV4yB/hXpP8Awmmk6zq2qajvjF3bKyoxOD93NczV5Fbt yOK+F/xHg8d+HtYXVWUNG21dx6/LXIfAfULbTPiLdkMEgUsI/TGBXQ4qxmtza+NWsw6p4raa2kDG Jstj65r33wr4mtfG/wANIbQyBDtGefrWMo6GkFZWOu8I3Nn4E8KSW+8SA4A/Kue1zxlHpnhsJajc 0mM4/KnTWg5aI4Hw7qz6bfQuQSSwycV9I+I1g8Vw20plCbRyOPWlHRj8zq4tXtbXR7XT9wZBjPPo atap4sTT9ctbW3YCADBA+tKWrKgupuy6xZ6N4simj2kODnFej6Xc20et3F0JACwPf2qWHU89gkt7 3xTKZGG0sa9V8PQwaDfy3ImBXB4yPShopM4HVdc/tXWpXT7patCC5Nu4JPy+1OImehaZfqtoCK3L WcbtwOAfaspoa0N2K6Ule9ajDv61zyRTG0VAgooAKKACigAooAKKACigAooAKKACigAooAbN8sJ9 6y458ttHWrgikZ17GFJy3HpXH6pKJCAo249K6YKxLZQvLhYrPG75q4i6fcuWOa1RBiyXgkVkUYxX Is/2m6Kgcjqau2hNrM5m8c2UrqTwTXMahCZwCKFsWiklx9khYYya5e4hZ383PPpSSE3qY905dix6 1k3NwvklcZJq7WBnKzsLO3OOpriNQmNuCwySfajYg5pYmmukkORUut4tCWchs9Bmp3Y2jzPV7ki3 ODzXA7mkUiThq0QrGDeRpK212KgDrivNyXS9nCDAU9elUNmTd3gWJ5Cufwrjp9QaZeAQppW1FE4a eT7TdSrnha4C5jQ3gZhkVpsibambquqLahk8v5T0wK56K9aC3xH9+pbuVLY5O8u5buf959761y18 224KOMiiw4KyMWVEHyjtWDqNyyEjt61LWpDRzssrrCGDZrCv2E8qPngDpU21HFmHf/Nu+XjtxXHz yeU+WGad7EuJUjtwQ0zH5T2rOkbYhI6HpUS1HH3TLa5aRSgOCO9Y17clUI9KzcbF8xlKVkiyTWNd Oynr1p7Ijdla43GLB6+tc/JIyPs+8B1JpLUtqyK8koR+OKxryU+ZhuvtQLZFETeaCCMY9agjljA6 
fNn0ouKxYmkVAMAZ71A7F3wDxSb1GXtOtjhy7ZHasG8bFwykZPrin0DdmFcBg5zWdcEAEk8elQ9A 5Tn8t5mVrBu1H/CbaU56Bxn/AL6Fc83oUj+r39m84+EejFT8rIn86/bX4S2jL4Pswz5GwYpUdAPS 57ZW79KwrxUJAXhh3xXetgK8hNso3HmqN1EzQZU9TQiWVI4jZgbzur5J/bNuVHwlvyfulCOnsa1p r3hpn8bniFfJ8aXUQPzbj0qRsoQM5Jrjxy94q5BJgEKRz605YiHx1A6muGOiG9RxkAwoHHrWnYFg /I6Ubgen6I+IV2jmvYvDt0kqhSpyOuRVoix63oriNgRwK9a0vMaA5zmtFaxSWp2NnMoxGR+lS3Kh OB0qk9B2sU5SNoxwPpUTLxnNUthMz59x2gcgVUmbnaBih6E31Pj3AXkn8Knim3jBGa5Fuaaj3ZJV 2kYb6VDuYYUnOK0tYVybf1cfeoaTcAT1qeYtbE/GME8mnKpb5GNNybElYnS1CKVojXc2F4A9qdhv UnUlOf0q95gngK9GFVoN6EtlIFG3HzfStGMsRzzU7sSJEXzm6VbtyquQevvRa5JLoTFfFtjgbgJF B/MV/Sl8FSF+H9gQOsQ4rvpr3Ab0PXICAOmDVgFcYxWbWor3HLxnHWpIHYPgimhWLzeoPevl79o5 g4tCeOR/Optcln8+/wDwVfnaP4a7x2ZR+tfzfaEofSkaTlmGa55LUIstOm5sDpjBFUbWVLCRoUuH iUnO1RwKd+Ua1NCG2hluvOlm8+bHy7sZFPe1826SfJjkXgFR2qudsVtS+bWK4ujLKd79QSOhrqtP luYlZnv5nLdMgVSlYT1MCTRbd7s3JkKXIOTLjkmtKPV5ncu+qypJ2HHNDmNbGW+lGa8F0Lkxzk5d xjLH1NdBLq93dI8T3rujdTxzSvoTYw7PQzpk6zJfyQ7eFUAVrXd5eX0TwXF4zwyc4yDmqhPlG46E lhBLoGh/Zre+kgjZlJKgckVJd2ravKgvLszoF43kVTqC5bl1bq80rThbW+ovDbBxtCsOMdqXU9IT W7i3uLy78+VVwpYg1lzsrludRBdX1ksMUWqSLCg+5kCs/ULKHUdaW8e5/eg5DHHBq+awJWOkl1y+ W+W4l1WR0wV25B61mf2QJ7kzpcMfMYOzdyapSuK3U1jZGTUBNO5m4ON1XfJS0dnhl+zTHpIvWk5t saRoRXN1qGnsl1qbSx55DMKkt7eKKBFixsHQirT7Buxurx+ZpwUdA4LH2HWv03/Zp/bH8OfDD4ex adLp6y+XGF3eW3pTSV9RSfY9M8Yf8FAvDGp+B57WPTQs7MMERtUXw1/bt8NaJ4RgtZdNDSBRnMbV CppyuRJuwnxi/bc8OfET4fy6PbacIrhlxuEbDHX/ABr8xtA1PXNKPlx6hLFbxsBEgI4FdC0ehNro 67Ur6+1+6jurq5d5U+7k5xXXW/ivWlmjnOpyCONSu3cOQampa5UT0D4AfFGw+HHxOn1bUENyszli zKT2A7V+pmn/ALfHhGTWQv8AZYChTyYmpRSsOcmfDH7S3xrt/jF4tiOixGwaN8+YikcAg968ruNU 1aa3uRcai8iyH+Jh6URfIOKvuM8LeboNootpTFLtwWU9a0n1S+1C0ls7m8kkQtnLe1aRlqPkSL/h ia80O4jeCYxlF2hga7GMS6lqv2u6maZ8HhqufvDjFGitk6ait1BKYZl4GOOK7uXztUSGS5uWndR9 04NZqTWgpU1uaumave22pGWC4aPqPl7VrW1zeXF9I28+a7bi561qnoCh1PS5by8v9LjguL5nQEfK SKuWgisX3OBKSuBUuRFrM+5/2SPj5pPwit75Ht8XMrZLBDyduK86+OPxAi+I/jS41MxLHmX5cZ74 
qIL3rs0t2OY07VUhvI4weCPyrs7RmtNZ+2QP++AIH41d1cHsekWfirWLXEzXDFj23VoaX4guZL6R TOUeTlgDRZbiSsj0jRpDp1o6/aWGevTmuy0HWfKkj8k+W6jAI7ikxKOpu7XF3POZS5fO5T3rpfDG rT6daAW77M9QDilcs9Fs/EEwXFxKXDdq6i2uw0Awfk9DQnYmWp0dlfF4lVQGIrsbK6mEXzMR7VHm WldGvFI8jo27gVpJMHvMs2WHSo6lLRWRpG8MtwCWO4d66ey1eRsjeQapEmtasyThy3zH1rporuWS JtznFNgtDSsWURjs3rXWaaoaLBO4n1qb6lXOosYZYkCA8Gu2s4XiUJ1NZy1A14oTHgZya3oMrgE7 qwmiktC4RjikrEQUUAFFABRQAUUAFFABRQAUUAFFABRQAUE4FAFaUmSPg4rLmk+T5RzW0Cuhx97c Oz85rJacSSncOldCM2cvq7pJLhTj2xXLTy+WdrVqtBLVnP3rFcmNvwrkxPKHdgNpq01YdjnNV3sv LZ55qi8oWAVNwtc5+7nVcms24uFeDA4oROxhsy7So5Peua1ZVtgAv3iae7E2c3qNs8ihs/J3rlry +WJPug4py1FucxHraxy5aPK5rjdRik1LxEWDnycH5am1it0c34qga1TCjJzxiuCu5GkKgjaw61pE NjKvjHdSBQM4Fef3ls6yTjP8XrTuS2cxIyqhjfhsVxV/MVQlfWqEjg5Y2iuJHHRjzXO6rpzRW4kF Uw6mFeSreWYUr8w71wUjm2kK5qNhtmRcx/vC4NYd9GJFyOT3oTLTOcuYAyAqcevFcxrd4oUKo6de KCZq5jIF+zFj+WK5y52pHuPU9OKzT1sHLY5y8vWjcI3Q9K569Ul8E89qbFuZyB5ZCjfcrOuFClwT 8o6VINWMZ4xIBsOM9ay7hkXen8QqZiOeT5JNpJKmpdSCLFgDJ7GpkEEYM7NFFknOay5MGDn5aWxU n0M67P7sBeorGmYLyRuakiUUyPOXOMVVIVcAdKqxdrIp3k4jYCPnHWmecxTPTNTsJF8aiLRFUnr7 UXGGKuo4PWi5OzMS+2eZgDHrXP6gyowIFTI13Rmhd+T90VyjOF8XaaCesi/+hCsZrQjZn9Yn7Nih /hBoqKMEIn86/bH4UKz+CbPdwVQUqSB6HfTy/OOfqKozqW5xx2ru6AmUZy0yYI6VS+crgdu1TsO1 xc7pQZBkV8fftmWwk+Fmorj5QhP5A1tRfvCsfxv6yyy+NruQjBDECl2ck1y41+/YGiOVmKgMOKh3 MvCHjvXAik7EqzLAu5ucVoQXpdVbGAfSqitBnpHh2QuykcKPavc9InTyl2oAe/vTasI9b0QLJGuR +leo6RJhApHFUUtGd1axiKM8ZY9DUayeXneNxq7DmPmuU2AeWCfWs8FpQR90VotDOOrKBYJL14qA 7Xck/wAqJDkrM+NtyNgEc/SrajyFAHOa5+U1EVNwI9KZGQrE1LvsRy6jkYhwuOat7dxAK80crKW4 9UViVPB9akVQvD9R0rS1kK+tiaCQSE56fSneaFVsLk1LdkXEnsn3KN/U9KlVNk7YqIu4PUvRERrn byfap40bzQpPFWlpckmVCspwatsnRu/oKcWCRq+F2+y+LbEkZ8yRcfmK/pR+D1sIvAlj6iIZrtg/ cFJWPSI5fMk5HAq4iiQkdqkgsxq0bgCpyrRY7k0XAtRWvmMMnIr5f/aOTy2tcjjI/nUOViXofzzf 8FcJAvw1RF6l15/4FX86emKE0u3Uf3KxlqyUtTI1i+Om2chyQw4yPWv2T/YB/wCCY6fH74OTeL/E F2YbSVRLG7bT8u0nv9KOXm0LTsj4p/bm+Bvhv4F61aN4X1Rb+beFeNCvALYPQntXzJ8PbXVfjD49 
TwzoFlJc3y58xlQ4XGM88jvUv3Sab5rs734q/DPV/gT4lSw8Q4jlk6KH3Y5xXMLr1tHcGIs3Hota tWQHafs+/CvVv2n/AI/WXhfQ0ke0CO9zJtwFK4OPToa/bvWP+CVXg+21i10261eNdVRCGiJjzuB9 M59Kyh77ZV7RPzU/aa/Yd8WfDX42WXh7Sot9tesdrs23A3BfT3r6J+Pf/BMnVvgF+znba/fXxFx8 juwZT3OR+lNJt2MuazPzc+HHwc8UftIeJdL0/wANxq0LSos7PJs/iHPvxX3R+2n+wlefsc+HdJ1G /mM6zKFYcHaWbHak7pl1JWSPkj9nj4Q6t+0r8dbTwrpaPLZeW7XDgcIVwR7dDX6B/tj/APBOCf4C eBorywuvtV6SE8tiowScdqiLc5Ndi+ZRidB8Af8AgmM/iT4A6br/AIwuhpsl0qP8xXqc+uPSj9pH /gnA/wAP/hMniDwzJ/aMMKBmkXb05PbPYVVJOV7kuXY+XP2Fv2Zv+GtfGYtpNQEE0cbeZAHXrjPQ 80/9pD9kDWfhl+0lZeCNNtvtct1Nx7KGAPT61rGLkEZJH6faZ/wSf0mwjt7fU9RWHUXt2f7OSnUf rX5K/Gb4B+IfhN8Z7jQLMLdRmfbDHv8A4cgdB9a15CFPU5n4w+Ddc+El5ajVrRraOXpuBHfHpWv+ z18PLb47/FBdNvr42FlEGDdMNjB71i9zQ/Wrx1/wSvtdO+Fd9r2j3v2q1SEyZGzspPb6V+NXhqBr GxeJ2LmFghzWyjyoiTtKxtaleD7MEK7QxC/nX6afs7f8E5bv4tfA/UPEVrd7TFCXCqy/3Sf6U5J7 op7XPzNbRI/D/iC7029ZXms5PLlLH+LrXYD7BBCJBGpQ/wB0ZqVNkrVFmCWyd1KII3PTIxW808Qk WN3y3atE3uFk9EXLu+jtIXAO6RULBB3xX3P+x/8AsU63+0P4GfxBehrKxlw8O7A4I9/pQv3krA/d R4N8cfg5P8GfiQdFuI0eGW5EVu+fUgD9TX0z8Z/2M9T+EfwQ03xXdMqrKY8ncONzEY/Ss5ycJ8hL abTPj3Tb2G7uSsLCIyMFVl9TwK+3/F37G+t+Hf2eLTxZcXY2yBG4kU8En/Ck23PlNV3PkDw/dCS3 V/MJCjkHtXTtqlvdwkp1XrxXSo2FzdDf0e4W5jDE4UDkV0cOqxytlThB3qpTSRMdzsvBPhiT4neL bLTLO6EJeQL98DdyPWvsb9oj9l6+/Z5tbG7nffDIuCcg8k47VjKWuhrNqK1PmbTbyJp5GU856V0d neTXEkEdvEZZpZVjUKCfvHFb7IUXpofpT4Y/YcuJvDlvd6ve/ZZJsFVLL/WvH/j/APAO++B1lBeh PtNi7hRL16nHaueMm5WMZM8o0a8t42jICguMhgetei+EfBl38QPHNtpdkDcFgWf2wRWrfK7G8LW1 Pu2H9jhIpwXuQt2AfkBXrXzl8SPCl98M9eSwnhIL8hsHsfWnYzb1Et9TlulihUbpj2Fex/Cj4Q6n 4+8USMY3hhhyCzLjPGe9E3yxuV0Pq5v2bGe1lMdx5sq84yK8TtdLfRNZubWY7JIW24NRCTZmdNYB JzLMZM9sZr27wB8L7nW9IW6c+VG+CpFJvWxrsjf8ZeAZ/DNjFOuZEyMnHvVXRop9f1CG0t13g/ew KelrkLVnqmpeEpvCToScZHQ1HDfSvtPahajvqdVaXWeCcCr9tEZb0Mp+UdamxotjpI/Ke6BAzjrW 7EIzJletCdibG3GokUHdhh6VuWMu9QDyBVXuFjdguo0cKRn8K6O01BGIUDB+lJoS0OvsLyWPBJz+ Nd5ZXBlt926oasWjVspPOQYNbttCRg5rCRUS8Bg5NSEYrFiEopAFFABRQAUUAFFABRQAUUAFFABR 
QAUeYAOmaAKV042kgYrnRJszjrW8NBvQwdYkZ0CoM+9cwzsAA3BroRDOe1iJVnVl5NczcRefPhjg VYo6HNajAyTkQnjv2rmtSkkgXmgpHOSzbowMZrDa5O4ps4pBsUJrZZ854Irn7l/LUgjAFFxWObeQ bGdT3rk9QZ5JPM9O1aRM9jkr3UrsvgfcPbNYN9DLEoJ53UN6jRxNzqOLgxAZ9azZr02YfB+b3oGc jLrc0oJkG4dBzXL30kbz7pG2A1pFilqcrfNHbyF4n+U1xc+oLb3JZjnjkU9xNHm+pM+r38skUhiQ dBXPtIYYyrNkjv61dkkQc3fX4RwFH1rF1XUQ1t5Q5461A76HncszQxld2a5slWkcMcH1pBuZk7Lb xgFt1YskPmOzRttHoaLWHHexztxGcEZxiuLv4BK5x1pXuU3qV0t/JshvIz9a5K+dTMBioW4N3OV1 BVlnxnn1qjNbebgZ5HernsFrFWWRAx7EVhuB5LlucmoHuc7eBYWjKnH4Vj3tvly2cE9aTVzJ7maG WNSQMmsi4vMxlTwe1SVeyMSIPKMSdu9NuduwbjxUvsF7lBwIAHByprFmlUzlhyCaVrMpbDZo9hyO prnLtWt7ofNkHtTuNvoTPAplyOhqGY7m2jqKjcLaGY9yvm7GHIrWW+QAIV49aa0Jtcyb8GR/asKS Hk7uaUhrYx7gMiEg9+K56ZVTxdo8r93XI/4EKiWwLVn9YH7OLK3wn0ZkPy7E/nX7YfCad5PCFspP 8AqaYmtTv0iBc7hVa5Zl6DIFda1FsIsqyRBsY9sVBIFZuOCKUkaR2Mu7jfKsPxya+Rf2wm2/CrUQ 3/PM/wAjWlH4h7n8afiJseM7qNhhgx6UjsWCkdDXNjFeYEHmN5pBO4Uok3qccD6VxWsyWQmDcoJN b1jGuxQRxRexUT03RJNu1QOBXsvh2RSQGGGp7iPZ9GlERUFc/hXqenLtUOBwa0sCOkgDMQA+BWgy 7Tg8+9aRC1ynhsnHOKrNMdhyKHe41oZ7hXHJxVeSNm6dKt6ob1Z8iLbr5W5/vU2LgZJ59K5hpkzA iEkd6k8tDAgx81VZPUpAgEfv71Op3jjr61RnLcbEwRxu5ap5MBsnk/yqW9BxRbjK9MUBc/KBg1nu UOhh2SfN2qUEqc9QTRawjTjcnpyak3GLJPWqvoBIXDQgoOe9WLdCnJbdn1qUUjd8JuLzxdY7hgRS qP1Ff0pfB6VJvBFkw4Bi44rtpfAU9UekxgADirgBK4AxQYlhWOzJFODZIJOBQ0BYt5GilLdRXzH+ 0jIJktXHqOMe9S0S9T+dj/grW+z4dqw/vqMf8Cr+dfTm/wCJbbhTj5OuKxe5Jh66my0kdzvXePzr +wf9h65ns/8AglwkonaMvpPyle3yPVLQJaRP44vHmm6ufFd2Y7qXUL64mCQIw6bsDPHvX9J3/BOb 9m3Sv2Gv2YdS+KPjWQS69qEHnqJQMxEoRtHQ4JUVlX3Q6aUabZ+MvjDxL4m/4KS/tQ2lrplqkVtd 3QnjO8j90rKW6/jX7s3n/BM74d+Ddf0/RNVuoYdXltX+8EByOPX3reS5jnU7aHW/8E6/2etJ/Zm/ at8U6RYst15zSukmBxiMDtTfHn7MHj/xX+3sfFg1CSPQI7w/uvMHILKemPY1EYOBpUlyK7Ok/wCC unxHvPhr8S/BlzolujXCyIsu5ivBmGf0r1X/AIKFeJ7rx7+wFpep3A2JMkRdc+rNWsFyq5LjzrmR /Oh+xr8UtY+GXxo8MWehRobaaZPMDSbeN6j+Vfud/wAFzdRPiH4XeHTIuGcxqVP94ycVKSkxzfMk jhv+CFX7Mt/4K1zxVrt95chvJC9t84JUGLH8xXm//BQPV/Hug/tEw2moagJvD7avEqQm4BwvmKOm 
PrThTUdQldWR+j/7cnwy1/4mfsteHdH8GytZyARMQjbeA5z1zXoPwc+F+reEv2Fm8PayDd6rJaC3 5OfmZWHb6ijks9BVP3S1Pnb/AIJyf8E8tM/Zq8e2+t6hdeVrl3CztEQvcYPv2rodeS11/wD4KlQW dxbq80dvcmNiOwKmtIqxKdkmN/b6+DHxD1n9ozTdd8K6i9vBbW8oa3WUKGG4H0J6Cv58/if8afE/ gv8AatudU8SwStLaXfIcMRtypbHA9KpS6DjrI/bH49fDrwr/AMFC/wBk6013RtjanFbrNHsA3AjL Yx1HQV/NImleIfh3qM1s0z6ZqsGYyVOCfU81Hs9bm2x/XV+xtf6iP+CZIudVu31C5k0r5pJCDk7H 9K/lqjuRb6xMNoBkfOKad9DKT5pWRNr8ZmkKHjKnkdq/p7/4JP38+l/sW+IY7qVrj/R9sRbn/lk1 KcrG91y2Pz+8Tf8ABONNW+DXiD4gX0/k3moXCTWynbySpAHPPUV0P7Pn/BOO0k+BOmeIvGVwunm6 8shPlIBJIxk49K5veuEUkrHh37af7Ecnw8v/AA/d+EpEvIbmRI9gZRwzhc8Zr7D8Lf8ABNTStF8L aRP4jultdSuVUhTs5Ocd8V1bRRlSd5HyJ+1R+xJrPwv+MumW+lRK1lcndvZtuRuAr+ifwP4B1T4f /sR6Xb6WiQajDZr918cjd3qqfu+8FTc/AHS7a7+O/wAdrWy8a3CWz2d8hBMgbcVZW7471/QT+3D8 LdG8afsn2+nXF0sOnxeWyHIwSpJHWicOaopEuLcLo/lO8N+HrT/hZR0ZXAs0ulEMox2Ix7V+8n7R nw18S6P+xpprR6g8ulxpGSu8cgE9vzqZR/e3NL2pJnw7+yV+yfa/Hjw1fXdrKPOPIj44G2vefEv/ AAT/ALHw38MtRu47sHU7UZeP5RyATWyfNcyk7Wfc89/Y6/Y6v/iz4M1XVtfUWVovETAg5yuR1969 a1z9gaDwR8I9Q126vCbViDASF+fIOP1FcdbmdrGtOWrudZ+x1+w5JqWkaH4z1Cc6fCXjeEDbyC3T n3FfXH/BVy8aHwFpEcRBjDJuIPX562pQbepM37R2R+Yf7KXwPufj98S7i1toJILaNXzIUwCQAep4 r3bwr8NJfhV+1tZ+G57dpohKT5jKcfKV79O9Oq23ZFOXs0fpL+3ZZ+KJ9V8P23hh5EjBUzeV6Bxn se1e3eOfhZL8W/2cdM0a9OLpxGZHbrkE1NrLQHC0ec/LrxJ+yjP4Y+Luj+FICZjLC0iyED7qsM+3 evvL9mz4Hw/Bv4wX1u7+fNKHZCcfKAAMcU4p3uyYTvohPFMPi1v2jy8Eko0jzjlQeMZHt9al/btm g0yXTJvKDHZgtj/ardE83NLlPIv2WPhI3jLxPceIL5NumQg+WrjG4EZB5+lfp/4KlsLjRNVbTVVf LBClPXbxUVtTdO6seVfAJtdbxjqP9rM7W7MTEG54xWb8UPgot5ruqa0jkAbjsAHp/wDWrNJ30IlH 2e55f4C+Ec/iDwiuoufKhZ1cD2r7N1A/2N8JLX7CNu3aPkHuapq2olK5uyW39r/B2OadSZTGPvDv zVL4EeEYfDljBfXagSOBjdWUm9jVPQs/HzU0N9E6DAPYD3ryCxuPtFqGWt4qyIWpv2kiuAGOPeum sbkJkKeKbRa0RsaZHukbnBNbViphc5bOanQaZs2lwTPsrqrG5SEsjfepJD3NOzROWZsnNdPaRp5O 5euaZDVmdNYSOmAzEiuhj1RlYIo+WpZSOvtL0xQDaOTXRR3TeQgJ5rGa6lLQ3lmAQZFTLnvXPJWG x1FSIKKACigAooAKKACigAooAKKACigBQcVRun8pMnv6VUVqNbmXcz7kHpiub+2f6O5B5zxXRFBI 
w2vHVSCMmubvtQUrtI5zWqJOc1OQIODuNcy0sssW5uBWnQgybi88roOa5nULozD5hnNBUTm7mURA npXPXd4WAVFwT3pA9TGa8aCXa3JrI1KV7j5QAB9aLE3OaubUwoDnArGuXEDkmqTsFrmFMUuJRIBt Fcfr14Uu9gf5aS1Ytjj54reONpU+aQ+orz+/tJLyUyZO09sVYN2MC6tXs1wT17VxevWZRBJIdynt 1qkJO553dxyOVZCVQ9BWLqultHmRpN+O2aa3G3Y4W8BiG9SVX0rDaZZ42kYbVA7irZFrnBX7G4Qu mQueormdQkljdUX5s9zSZNjDnsHtMuz7h9a5eWffcfdJH0pDRnamVjfgYFcjOzLIRu2571Ld0Wlq Zc8xhyGfJ6c1ysxMBZick1KViXqzNlDm3LFuPTNchOzMxyM01oCZg3GWJqKLMKHPzZpSZe5l3UZa Qk8CsGe5+yue49KSdxLQyL3bKQwHPpWJMpeU7vyoVgaKEvAIUbTWdNZhsHGfWpvqZtWMa5bbkDgV ltEGj2s2T24osCMd2IUqT0qoV4JxxQy1cgjvAykHt0rLldZcv3rNqyH1IZOUBQ/pVOZWQ7gcmpiU Zsihm3N+dVllKzEbsjtVMRLJeZ+Vuax5WK5Kmo3Eiu0v7sDGT3rlJiJvGmlqRxuH/oQrGpc0if1e /szg/wDCndHGfuonP41+13wcla48KWpxwEAqqewp7npVzuWVvSo4v3oJ9K6dkTa5RuZ1kTCrtxWf LchnAxgj2pvUpaBdRGaAYbbivjr9rq33fCrUVLZxGTz9DWlFPmBPU/jU8QyY8eXLkZ+YjFSSSiNm GOlYYv4wZUjkG3co5PXipInHzKPzrz5PUQ12D42npW9aN5pXHUUkykj0jQ+HUk17Lo5ErqwGGFWt xNWPZNGlMoUYr1fT/lUDqK0uFtDqYlCRg9qdLKXwMcVrHYcX0KjbozweDSP+7b1Boe4+pQJXeR2q OVQnIbI9KfQNj5BizLx2FSbgrk4rmCW49GEnTvTZIjuB9KqLKi9BVxcuBjaBViPaFYD+VO5NtRYo SZRnk4pW+Wc+nekhssNIUIAGBU0knlOpHNEEPoWQNzgnnNStKEG0Lk/ShrUlbFuGYRxkY+anwSea CW6UWC5fXbGgTGM96k25wi9R3pJFLY1vDCiLxnZRg5LSL/MV/Sf8H49ngaxjH8MQ5rspK0Avoens xAUKPrVxXyhIP50nuQTK5AAIpzMNnHWmJiwFlUFj3r5k/aVlCJZr7j/0KpZDdkfzqf8ABXW4Efw/ jX+8yn/x6v549Olxo9sAvyhKxkrMmGpz3iKTzbRkU4UuDX9fn7Gkuz/glfGEjZymkcbVzu+R6avL Y0lHmjY/kU0v4oP4L+K9trOpadNCLecBVkhYZXIJPP0r+qX4gWtj+3v+wbbjQr0ebDZh/IQjqAzY I6ihQ59exHNaPKfj/wD8EevBur/D/wDbMstM1W2MEtnBLEElBXsvTNfbP/BaTVPE2hftYeGpvCi3 M2qvbybfIjJCjzBnkA1otGY8uiZ6H/wSD0zx3rH7RfinVPHNxI86tJ5KTPk48sdsA9a+qPH/AO3T qml/tpxfD+DSWa2kuTmYI/AVlHpjvVSd2aTXtIpHi3/Ba4rp3xF8FLCTK8s8YIUZIJmA7V7r+3Y9 xpH/AATq0e0mtpG2xxZKoSfvNUO60J5uSHKfzPfsy6vHD+0B4MjWORpDKgVRGenmL19K/f8A/wCC 688+k/Crw20VtLM26NvljJxiT2oiuXcT0SZw/wDwQu+MfiDxhrPiu2upZTbWUhSJOTx5Wa+Lv27f iPr/AIu/bsTTLyeRrNNZjCROeMeYnOKpS5tEXLVn79fte/Hu4/Zb/Z70fVbe0+3P5SLtAJxliO1e 
4/sdfF6X46fs3aJ4kv7UWzXnlSbGyMZJ9fpWlrrQmo1W07G18TkWH9qXw15Zdk+wTYjRcj7wr4ku oPL/AOCp0F1twwtbofMMf3ajVaE2u7djsfiF+0Hr3/DyPTPCgiaTR5bO5LjnAIK4/nX5Pf8ABczQ tK8B/Fq0lsIRLdXsToYIl3FmZgoOBzQot6lNWPpj/glb8Kbz9lr9k7UvGHjK7NtbXNqZbe2mIGxf LYbcHB6jpX4FfHb412Pxi+LuueJIUeKxknY24RCd6kDnH4VtHWnzBOXvJH9Tv7FupLrX/BL1GVHS P+ysrvQr/A/rX8wS2w+1PK/Lbxt+lZO8VcLcsy3rBVfOZm2gRsc/hX9RP/BEtrXxf+y5fQ3YzbLs Usw4YbDS5eaNzVo4H/gq74z8TfD34QaLY+DYGfRob63MixE4KiTnoD2zXYaZ8ZdE+Of7MHhzSdeR 9LTdB5eIifm3HH3vercUoJmcW22YXxb+CeveCPHngi6ivDe+HFeNQsjgZzKMHA/Guo/4KheI7zQv jB8NYbOaVbeSaLdHEuQf36ipSclcUfdNf/grX4ovPAXgzwzqWmxp9p3xqSzbcAyc17Ve/GvV9O/4 J6WOt4Y3b2S/MCe+7v8AhVxd/dCT5j+Xux8Z6x418a6Lq1zKyzz6lByDnguM1/Sd/wAFFLTVY/2K tIttMMs8jNDuESlifnOelXKVnYHPlhY/nI8K6Vd6RPCNQh+zzi9i27uD1HY1/TX8fdVli/4J36QD udGhiBbHu1ZP4rF3TgkfKP8AwR1tvs/iLXNszPH82I8dP3dfNv7Y/wC0R4l8FftHeJtHsLqVLGa8 YPFnAAwB/WtINWaMpfEl2P2O+AwTUP2HtI+ylTcyRRbzGc5PzVm/tVrdaV+xl4etpiUY3Nqsmf8A fNZRWupVX3Nj1DWZX8P/ALK3g6K1JRPPtVygzwZK+bv+Cq4az+GOiPCpkG+Pdx231qnyyFF8upo/ 8Ev/AIteH7uLUdLjjS3vbc7ZCw25wuT1q38WvizoviH9r2y0iwQSahHKfMkC+jLnmhw5ndFS98+t v2n/AI/WvwS1vSklg897obR8pOMtt7V3XjnxzfWPwY0nVtPjLzTSxYRc9C2DUJa2LlO8OQ7DTdDt dUuLDXruBBrSWjhMnkA9R+lfOvwG8San4v8A2i/Ej6jG0JtpXjhDZ5UqD3reVlGxzwi0zsvE37RV ppHxe/4Rn7PvuDNgttPYj/Gsf9rL4c6h8RvFug6dZLmBgGmyccBxn9KjY0jHlk5Efxj8faf8Bvh9 D4c0tlN2YtiqvYcg9PrXo/7IVy6fB+4urs/MFDHvnCmnUV7Dg9Wz0n4UfFa08e+Jbi1tQQYMq2QR 2zXYaZqM+t+KtV0+4jxaK5GT3GKhblVH7RHL/Fm5l8FfD1bfSosxfKMJ2Fcb8Nviv5HguG11GJjG u1Q20nNOVmhQTR9R6hqkVz8PLeSJQLcbT+GahtNasdftdNjgnAIA+UEetSodSnorGZ8b7CNFgPt/ WvEtEKSoyKcY9qq/QIdjQsYmkuXRmJXNdrp1sseQOlJuxojWgtzESQetbFpKgfJPSktWGxveYskg MYxWlbxrLIXdsEUdQRtacyqSDzzXa2Tpa4JPXtU9Ruz1Ojtdv+sLfhXTQxKYDKOvpS3GjY0XUB0b mujh1FZZCoGMVLQrm3FOZ9pzgCtmK7U4GMY9axlG5a1LXmg9qlxwDWLVgtYSikIKKACigAooAKKA CigAooAKjZuoprUCq8mDjOKzZFYv97K+9aRRWyOa1LVDFclMfLWHk7SF4Ga6EtCWZFzqXlB9wz+F cvLdpcwnA5+laJEtnMtclZOmfrWBqt+8gKqdvPatLXJRhyswteW+ash2MUYJO41LQzltXZp3BU4H 
fFZE8+5gMdO9FgTMK7JeYsBWFds20noau1kO2lzn724M0YVj0rGvLlJbYxgZf1qGhbM564cwaeec stee3RN4Q5HPcVURN3Zhfu2zn5cVh6ney2qgwrkd6ZMkcPqty1z87/e9K4W/uGn2hz8gq0NKyMLW rpJ0CRjaoFcHMWEb5PfvTSFJHJ6kPKjGT+Fc3qFq97AQuETvzTloJHN3EkVtpTQquMdDivM5rl2k II6Ubol7mReSmVjzwKwI73yN3yZB9aaRLumc3qLG7cKpwM1hX0eboL0UVk1qaJmJqUUdwMK+1hXN S4igdnOSKGOxiyX63WnHAKgEdRiuc1C7FvGm1cj6UyTJkudwzs6+1Z/nCHPcntSlG6C9jJuZBJyG yR2rnJFLzb5OFHas0rFXIL0q0XmJwtYBYO4alJ2HuYUkklxfvk4QVE0xicg8rRbQzluZU8fmOxUZ WsmeUIuVGaV7AkZUu11JHDGqsrCCAjOWqL6lx0OcVgqMCDuPtVKLLZ3Vra6FfUnkuFf5VG3FZc0u 5WKnnvUctkO+pnqpCDJyKjEIVsnrS6DsV5lV3BY4xWbcjbIAg4qXoJ7jDH5nA+WsSe2Fp4s0x85G 8f8AoQrGexS0Z/Vb+zQ3/FnNGZj8hRMfnX7a/B2YQ+ELTH3SgopPQTueqXZRTv71iveDkAbfpXYl oBSPIK1WZtoPHzUWsO9zLe7kY7G6Zr5W/a9wvws1AnoYiP0NbUviJWjP4yPFSmHxxdQ5+YMSDUby Fk4+9XHjfjLkLbz+ZCeOaaxWOLnqa4mtAiVokWLIJ4ra09933TUIrqenaKGGzLV7XoUhZUBq0wZ7 JpfyIiDr616jpkhjRVPNXuI7S2k2YBHFWpADKcdK0i+gNW1M1zvbB6CoXznBGKdgTMyRWDknoKZk tCGHFMbZ8leeNmcY+lNQ7Dhea5W7uwJajBNtk2rwasRMykjrWqVimiVw0R3N36VIo80AAYFDVhPc lVtjYU8+tPjg3NlmzUWdy9LEwcB+enpSPycDgZrWCsSTBiHG01bD4kGetTLRkJj/ACi7k1aSPy12 45qb3CJagUHIc5btV2KMxJuzTWhZP4PjN38QLIg4CSgH8xX9MXwpXb4JsiB/yzFd8F7gk9D0qOdI zlhlvpVoMsrZIwtZ2Ey2CucD+VQMfLc/WhCZMEaV1GeOtfK37Ssu69tIwM4/xoehm43P50/+Cual PBUMpOVBA2/8Cr+e3TyX0yDHyrt4Fc8neRMFZmVqkAFswC7jvBIx2r+h39h7/go34X+Gv7ONn4M1 iLEUduImHlk9j/jW1Gyvcp36H5V/t/8Ajbwd8WPEVrb+E4vIYAiQiIqMbueee1dT/wAEx/2rrj9l XxJrui6lI8uiTSExhs8AJgcD8aqHu3Xcy5WbXxr/AG2ItH/ais/GXg1GjWO7EdyCpTcruuT3zwK/ Yy5/4KE/DLxJ4ssdd1WIXGsJZyYYxE7c9RnPsK1io2CS0sj89PA3/BSu90j9rvVdZtbUQeH3mdQ4 LDKsB2/Ov0Ik/bR+Es/xSHix7aOTWvKciYwnIJx3z7Cpppc12PVLQ/Pn4kft66J8f/2pIr7xLDjS NNuNtv8AIW3fMrA4PvX6i/tD/wDBRv4bfET4Kt4eYF4EhwimE9RnHf3q5QTlczlF2Pxa/Y++IHgL wf8AGuXX9aQYgm3WZMeSq8HHXjkV+uX7cP8AwUA+HHxq+FHllDPKoCR5hOQSTjvSqJNaFRWiTPmL /glp+1Z4H/Zht/E9zfyH7TdzbuI8/wDLPHY18Fftz/tG6d8WPj7J4t8MoyXVvqCNuKFdy7gT/KsK UOV3Lmro/Xjw9+3z4K+MvwL03S/G8SSzRxrvikUtyCT6iuR+Ov8AwU+0b4NfB3w3o/w/t1+xwzwI 
Y0DKFXzOeme1dVKybuZJOJ9q/DL/AIKaeAtS1DRta1eT/iZrZuD8hO3J5HWvzr/ab/4KH6TH+1Lp vi/w2S22bypcqV+V3XJ/IUuVczZVnc+7pv23/hkfiPb+MJkRvEAs5dsnl8jd1Gc+wr8W/G/7S1j+ 0d+2fL4o8Vf8gbT7opaxkbgQSrA4PuK0k4qm7CjF+012PZP+Ck37bU3xv8Naf4A8Fu1po4AMzple Fbpg5HQmvhf9kbwt4X0/xs2m+LIY5tPtcrGZVzuAAPtWNJ+5ylThedz+gnx9+3X8Pfhd+yk/hTww gSI23kxxxxEAAhh6n1r+dPw+z3mZmcsjHKg9qdezSSL5dS74hiW6UDOI2YLJ9D1r99v2Of2xfA37 Nn7LE/h+0mxefZwAFTjIUjqD706FrNMqfwkHxQ/4KBeGNS/ZAsYbtjPqEhiyAhPOTWv8G/2gfhp4 z+BugWuuxRrcW/lOd0fdWz60ThfQimrRueeft5f8FFBDfeEtK8HwLNptrPF5hJZcKsoJ6Z7Zr6/1 n9qz4efF0eGtY18Ry3lrEGQSLnaQ2719RWkYJUrdTHVyPgb/AIKBftfQ/tE/FjRNGtDt8OWp3TPz ztcMOD7Zr9BPE/7YPw9h/ZCs/CUDhgtoEWMR9Mbvf3rGhC07sqa7H850epXOhao17ZsWtbe8je3A 7KCD/Sv6UvhH+3v4V8Tfs92UXiYq91HZ5ERBb5gCf51pKHPUv0InH3T8MvFvjxPit8cLjV5B5Ohx 6ijRp/s5B6H6Gv3p+Nv7WfgTUv2UbDw3bzb3ESKkfl9Dk47+9E6d6ty18CPCv+CZ3xa8JfBB9cm1 WdYrmVyYyBnjZj1r4r/bX+Iug/Ev4y63qujfNM8zAttxnOOamMbSbKs7nd/sd/tsa78JdEtPCd3G 9/bm6iihVtx2rnHb61+137eT32t/sm2l3BGqmOaGUpu6bWJ/pWrppq5NS7lqeHfs9ftjeGvFvwM0 XTNfcRNaGLKMM8g5HWof2/P2iPCXxD+HNpZadMJplKlRt9G+tZOndFyXuo/Ir4WeLtW+H/iK61TT 52tXkyG2nGcjFdd4Y+J+o+EfivD4slHn3b3S73YnJViM/wAq1pe7uNbH7eeNvid4F+O+n6Vf6o0T TQqDhwPlO7Pc10+sftc+GtD1DRdBtiG05E6hfukNxRye9cy15jiNc/augu/2i7Fre5KaJBBJHIOm SSMcVr6l+1NomiftBQz2P/HtJHIJnCkZY4ArCSbkdMkoq56rLqPgXUviR/wk8kkTXpDNkgZyce/t Wnd/tX6Rd+J71k+aS3V1iJBGeM1o1dJEfZPzb8aeJ7zx745vtZu3LNJKfJjHIANfdX7KPxjh0/Q7 /RNUKogOxMnqNv8A9etGk1YmOkT6L8Hap4W+Hd3e39i0STNktsI54+tbNp8fNLvPCV9exybbh+mR g5IrLlstR0lqReEPijZa/wDC7bfyhrny/mDeuDT/AIe6vofiHweLeZI1KFc7hjmsuljZrU6H4jfE aDSPBUWl6eVbgLgHoK8i+EOsfY/E0bXUxRYzgA9622VjP1Ppn4u+M7PUkg+zsHf0/GvKtKu1hlY4 5btUWsOJrWOpGK8PGQa7CHUwoPGD9KlopGxY352At3rYjIc5Axn2qdimjXgn8luK24QZQCpxTWpJ uwH7PyeTWml2ZSMHNDQ1sdXHeFoo1H3q6CLUpVhKg0ojL2mXEsZ35rqra7b754z6UMFqdXpl+Jfl z0rduZG2D+Gs3EZfsbomHLcn3rZil3c9qwmi90SDkZorIkKKACigAooAKKACigAzt5pofcM0AI0u B06Vn/aQ7YA4q0ikincziOcAnArOvbxCuAcEVqkKT6HLancDYOMsawZbz7IMnvW6JOcuNQWbfuH6 
VgSXiInHy1pYzbMy4kAgLA5PtXIzgMrOeDTWgIwJJcA7mwDVPcssbKGoeozlzHtdtzZFUJ+5FAGO wcsc9KxJhulYk8UF7I4q7iMfmODkZ6Vz5tGuIPNQlSexFPoSYd0GSPazZrktVmNpFuHcUIRyCRG8 tC+MHOTism+1dLeAqVyR7VQWucDPqK6gpYLtx7Vxms5kGQataFJXRw98jmLfnAHaudubn5BxVIh9 jj9Wf05NcHdz3UsuA5VR2BoZPWxkalNvg8sferjb5yhAxn1oiKaMSe0aJGbcDntmuWuQzZHajYVr ow7mLYcjg+1ZErGRxkVL1KWhyWr2yi6+XO49wK568QrCd3ODUMp7GffTJNZpGq7SOuKyJk8yAYGd o71VroyZhSPuiJA5zWcyqsZOPmNG2hSRzCxEO5JxzVC7uVtsK67yfaspDKGBtKEcH2rKkAi3DGR2 qGrlLRGPc2zCAyBsDPrWFJLgEdfQ0yGuplmVoUKk7c1SdVVOuc0pCic7JhZtpb6VWunZ5QAOBSsW +xG0YmTfjBFYzSBSWK5PpVXsJIpsS2cjAqnI6xEL696lyujTlQlxIsY2ryazy2eCOalE31KU8Ylc AnipGiHl+4pSF1I1VQnzda466JPizTQxO3zF/wDQhWVTYqL1P6sv2ZR5vwa0ZVPyhE/nX7X/AAmI PgqzGMgRiooocj0+SVZbEsR+lZMVq0sYc8LXcnoSRT/uXwDzVC5maLlstVdAKrN5gBxgk+lfLH7X cLSfC3UBjgRk/kDWlHcnqfxbeKb03HxIvX24AZgB+AqOK8MaMGFcmOVpmr2HBmABQY9aja5OckZ9 q4r3QlsTxSKYcsOTV/TZGW42Y4NZrctHqmiYOATyK9n0SfdGmB0p2uI9j0Wdjtz0r1jSGwgLcjtW 0RSR3UUn7vGOfWo4yY8nrV21GtiFiS3AwKZLOCvI4rRCiZhy5Y547VWklARVA5FSQ9z5Hkg2QKQe PSp4cAYXrXNsbbEO0SS4HXvV4MYDgDcTWql0C4uPnw/WnlechsD0pzJvqRiNkOScKavLb4j+Rsmo RaWhEmWPA6VfBDKB+tabENMa0AYgIcGrEERDkvzionqJJouxSbm4HFWmi3SDByRURBqxKThhxzWh vEI2sNwPSqauwRN4HBi8d2uTw0y4/MV/TB8KmC+CrJc9Ixmu+n/DLasj0NBnLDmpkHmMAKjYgsG4 ZJdoSrscW5MmhD6CtJ9nYc18p/tHS7bu0IHzH/Ghohn843/BXy82+CoYgPnJB/Jq/n00hvtGiWzd CErmfxElmBMF2PTvms/ahDNAGYA8hFzmqb5Sok0c9vNOiiEwT46spGa1RZidCWOxv71Tz6hyj7Sy tfKZWRXwee/NaNppNvO4kWII4GAQO1aKegrdzoR9l0yz8l412MRwO5rDNnbxagFWGQkjPER6VLm4 jsi0bHS7su8kKKynq4waScWT+SXLBSMDcuBVxqOxGlx0uj2MV1khYy33TVmb7LausMh8zcOFYcU+ fSw9i9a6Rb2ysY4xHv64HU1ZfSYk2KqBX7t60XZNxmowWU22OcfOewGc0wLp8EK2LKfLHKhkwOKO d3Gh9zFZW7RyI0quvC7I88U+SDTbu7ieUOjdfmjxk0e0ZWhrXMNudUjZXlabYcZj7d6u6Xptneby kYZgeQwwQfpUObY002dHbRxWk+9VHmgY3DtVTV7W1e382fiU/wAQGTTjKwdSa1mtY9IXKtJCuB8y VvWNyv2KNoRhCPl4q+a7Bk7xGdCpXIb7wIpIPDVtb2DxBhbxt+GParjLlBIk/wCEatpNBjsfPLxK QVDADGK27HSg1qkImYRr6DqapVHfUm3RGtPpqzW6q7l3X7rEdBV5dGN0IybhhsGFGOlL2j2JUTQu 
dMEkJDSHJPLY61rWum28loke48dyOtCk0U0jUhjjt42to0XyT1WrUGkxW1rtDFV6KuOg9KpTaFyp o17PQkgtf9ZtiJzitNpbWS1w7s6RnHK0e0dxcqRqWHkTvG8UxRW6cYzV600y1ttalTIa4bJJI60e 06FJdTuNC1tfAOuWWqBPMeOVcA/Uf4V95fGP/goDrPjT4cJokluv2YlVIDsc8+n41rGpfQzlqz4k tdNW4jW5jnaOJiGMYFdFdrFeSwvkttHTFNMtq6OoVztU9K23jS4gQSDI9KT0ZL0ZqxSNYKAZWZT0 AHStuziYajDLNcFQowoPejndiopHSmffqbymVoxnoB1rsLSOOaRJRJ+8zmlGw5I7WO6Bvd6ysr+w rYTebrcJipb7x9asLW0Ox0y5NqwTfwPXjNbtpNLLcyz20m3n5gDUxfUmxv6bfXAcFJmXcOa7+ANa acsYkyTg5pSdxrTY6vSta+y2vkyOQT3xXbaFqQhhcLKSoPQVNrFxOxutQxbwTBiRx/OtoXfmPFLn ZyOlG4meiQTfarmNi5yvqK7Wycy3BZWHTpmk2NI1rZjFKXf14rtYJfNRCKOgdTo4JUCAYHHpWxaX omBVeCKzauWaNo581i1a1rqG3jpj0qoqxO5sR37OmSa6jTrhHgGB81DA1YtVRGCY5HfFby3YG3Jx mlYRpDU+AsfQHmups9RRoNpOSaUkNaGrpt59nl+Xk9q6gX73cgEjYx05qSzYinK8A5FbkbtGFyeC KzkkCLkUvOM1bEgU4I/SsJRGWegBpKgQUUAFFABRQAUUAI3AqCRGDZB49KqI0UXn3MVHUVmyXX2Z cHmtkht9jOurxZ8YHNYNxIiOSzVpGNiGYFzdqy89BWNcTrcAknGK1SDc5u7u1WYKozWFqUfmAMfl qyLamNI5igKg5JrBa48uAq/JoEc7eRqw3MeK56SYZYIcUhmS8jDg9arSTgL1wKNxxKTT8kA5GK52 cqMhjzTCTOfuYBAxxyp9q4XXtamswI0jwpPGKGSYks3mbQeveua8RMlwBGh4HWkCMGDV49PhaMoC PevNb68W81MbRtj9q0ii2rI5jxDPHYy4QFl+lcReXJmQkU2TGRxd3MyWrZPT1rlUmOoIdrYK1V9C db3OI1S5lkvgoOEHWsrUJdsY29ancFqc1fRMED45rjb+Te+1T161UdCZa6GJLbmB+X3A1gXt2vmb AMe9Ju5VtDNllGzaRXIancfZ5AD09qlMRnPeIrlmQHjiuOupwquT0zxSd2N7GS7I8QbHP0rDurtY EYkfgBTTsKxgmfzY8qMZ7Vi3srRYx+NPcdihOPtqBQdrZrMvv3WEI3Ed6ybFymX55ZcYyaz5cRZz Urcp7GFesJMYPy+lYflb2x2qmQZl7FmbZnNUpYlgwhqJlRVjnry12zkjoKh2PLnB+tC2G9yg+Y0I BwKr+WoTI5PejoUyiGZiwIyD61llN5cHgioEtSiiopyevripZpAqc0yepiiRgxOOKnMuY/eo3GVJ 7jywD1rnbufPinTWZPl3Dj/gQpSWhDuf1T/sxED4K6O+flZEx+Zr9tPgu3/FGWisONgxRTRadz0+ dWMjKV2qPanpc7bMxEcDGDXQOxmSqkzj2qtMqMetCEU2jCk5IznivlL9rmcx/CzUc85iYD8jWtF+ 8DXU/io8V7ofiDeRfx7m5/KmxyboyDwawxusyr3GRyM8uzOPep3iWI/3q82+oySIjPAq3aO3ncda TQI9H0NWVgc817b4dnK7UPWrjsDZ7fpfISvU9EkDHaegrSIPY7WKQkYFTo2yMg81ta6KjsVH3DnP HpUcpBUkjikiSkzqqKBwT7VRusRZNUiLanyZKykAE/NUio0IDGuQ3e1iUKGG5Rz3oX5QvOTVpEJE 
sjGWTDc+lMli3pjPzClJ9B8upcXD24Vu1MgyYSV4ApLQe2hLG25AF4z1qXmKQDPy4qpPsUySP5mO Ooq3vZsN2HUUEtlwS7l+UYqRAyKAD81JiYsjEuM8Yq3CSzZJyBUpknQ+DALzxpZHG3ZKoH5iv6Tv hmNngiyOesYr0KT9wt6o9Ig5RVHA71qQgROMDikyWXhOqgnZuJqLy2Y5BwPSoTsxlWVCsi5JPNfL f7RaldStTjA/+vTbuZM/m9/4LCMV8LW0yjgYBH1av5/9LP8AxJrbbxlKx+0TcxvEOoPbaNKEyJWc Rgj34r+lj/gnp/wTp8Fj9kS08d+MjHfXE1oJ385VIX5W75HpRUTaNfhjc/Gr9unWvBV54utYfAdt BEyHMkkY2jaG55ye1ebfsp/AXxL+1z8TbjStIhxplruE9wGwMgA9SMdKmceWNyKcrsm/aX+Eg/Zo +KUuiSXImQOVY5B+bIA6fWuGe7vLRoRHp00iuu4MIW6VMZWRW7Pqj/gn7+y3qX7Y/wC0q9nKBFoF grrcgkfeADDg+2a/eab9lf4Oaf8AGMeBxFaPrccMiAbV3ZGOevuKprmY6q5Kd0fl/wDtH/8ABMS9 0b9qOx0WG+Nlol1KZMgqM7WUAYP419S/t9f8E5/CnwB/Znttcgw1xH5ZLiMfMcn0+lactlY51rHm PyX/AGPP2Urr9sD4laPHcag2maXwxClfnGQcYavvf/gqh+wjov7Ldnolzoed4i2viMDdl8Z4ojFp ic+bRH5BW2qT39iDaxvdPEQJNqk8/hWrf6s2madFdXcLwhhxuQipc+hpGNz6o/YF8E+Gf2jv2irT w5rCSmWaJ3iPkZXaMZ56d6+0/wDgox/wT7Hw9+MXhPw34OjUNdyL5mAF+TzQG9exojFuVyJzUXyn 3Lqf7Bfwz+A2j6Va+K2t/t1woC+cFHOceo714/8AtH/8Ep/+E91/wnc+D8QaZK6PK0YUAr5gz69s 1VZWVkaQjeHOfWL/APBPL4ZaX4+tvDcskDa8bWTClV3ZHXjPuO1fib8ef2Kde8AftUXnhjS5xDHd XJeLMirlQQD/ADpun7pyKraR5X+1d8Cte/ZW1i1k1G2aS2m4LgEgknGcgYpP2UdB0H4i/EWVvFE8 MOmpkRiVhgjA9cUqa6HZB8zP3X8ff8E6PAmsfsyXfizw/JH5Qt/NQxIpB+Vj1yfSv51fDN0Y7Uws MbGAUe1VGm02yPaXny9jsy+ZSQeFGTVv4PaZYfGH4w6Z4e1G4eC0luFwypkEbhxz9auWkS0z9bP+ Cjf7A+j/ALPXw70rW9Cm8pTsD4VV3ZfFfkPBq5jhRY0eVsjlUJH51K1jcTVpHSHWFheOOZGjkI4B XFa0c2yNuDmmikSi9RLUNK4UcYBPWo11jNysIt5eRlSIz0qr6ikjUsNThnnkUMyshwwZcHNdBFff 2lvURSYj77DRJkRY/SRf+O9YsNI01S0s1xGhB44LAH+dfvnH/wAEw9H8PfB77XqVz5WpG1LEEL1w aW2pcFzOx+XPwM+CsHxK/aEsvh6WKNEjlpQPvBCCfbvXsf7cP7Odh+zb8UdN0yzlZ5ZbWRw20DAD Adqy5W53Mpz5Jch89fC7QV+LHjrSPDlw/kJK6s0uOwYevHevq/8Abj/Z80r9nvXtHXS7z7VvhO+I bf72M8V1UY3k2TKWvKfLljqEJt1ch9jEAYSujUJZuDISqnkZFW3aRtFrqbsF0ssHmplkB64rRi1N DIiHcC3Q4oTuxSsd+saW9jlm3MOvtXTeAfBV98YfHekaPpse7f8AO5PGAGGf0olsTfljc/Xu+/Y0 8L6HLbWWo3yJeyDo2wHOcetfH/7UHwAn+A2vWdzBl9OkXG8DuTgdKzTaY6UvaK7PEI76GGL7QBuc 
QctjLtwPwrlTLh+mccVzZhuadBvmcEkU7cdmT0rzrCIlcF8rw1aVhGUlznJNCQPU9M0SMoRzzXtv hds47VrFAeyaE5a6UDivY9L/ANcOKdx20O/gkDQ8cUOplA7ela26kJWJHjKgc5amS7UiyOW78UPQ kylk818EYIqK6fDg0LVXL2PlxNuMkfpUMTFpT2rCTGXrnIgDMOKW3OxBzTtoBM8wOFA2mo/L3cN+ dJajk7FlQvkbRwRVZZPLGOoqbtOwWuizAxz14NWNy5x/F9KtaalQLCWzmPzN/Hoamt5tknI61Lep ZainEUu0/MDWg3yvkD5apMiQMN7A9BV2D51wO1DZnBWO4+EK7fibaE8gyjH5iv6RfCrj+xLdT12i u6n8Bb3OhL5uSpq790AUhN2LKMrHBpxxEeehpE6li3YRSA+1fH3xwf8A4qhSeuDj86mURM/lx/4L KXTR6vp6HksvT/gdfixoxVbBI3HIHFc8nqKOqM+5jW51KyjH3jfRc/8AAhX9mXxFItP+CUOlQM2A tgmWPH9+t4PSxFSXu8p/Ir+zp4F0b4v/AB58P6Vrl3HDpYmXIZlw+HXA59elf0n/APBX3xLq3wF/ ZC0bwt8PYXs9FKxxSPZKSFj3kFeMjBBNRTjao5A9KKR+cv8AwQg/Z10LxR+0I2s3wS8ezgkWNCA2 3Kg9vpX6S/th/wDBR+X9m79sS+8O3Wnxtoa2lzj5mzwBj5R9TWjfO7GaR7r/AMEiPjJp/wC0lqPj rxFpsBtbRrrlSpXIMXv9K+yvAekfDjQv2hZbnT0tD4pdm80oQWycZ7/TtVq3LYU3ryn5Hf8ABWPT 4pP26PAsj/M7zjr/ANd0r6s/4LUXP/CL/skaMnG3fCGIPQbzzTXuKwJcmh/M3+yF8MfGP7avxltf AmmX1w/hG0uFluWOAuIyGweMdCe9fvB/wUy/aZ0n9mH4T+EPgb4TMconuILWdYzjZH5uxxxns5rP DyTqSub142oo+y/itPYfsG/8E/8Aw3N4b06JliSBMnKfLuYHpX5M/GH/AIKZeG/G9h4KupLcprC+ XI7pGxAAkBPzVvZS1OKF7WP0M8V/8FDvhp+0LqHhnwxrluLq5udqorwkgneB6+pr4M/4LI/so2f7 K8OkfELwyTYSyOsUawKB80j4A9e1cleXJZnUo6I9G/4JN/sz3+lpqH7QPxMuCL/yHew+0Ef6t0ye Tgg5TpXqv7K/xA0v/go//wAFDta1zU4RcaRoDTQWMTgkOCquG59CO1dcJr2KZy4in+9sj2n9oP8A 4KAaZ8Df2wNQ8G6nYRR6OkM4VvmPQDHH41/Nj+2F450/4h/HbVfGfhG6bTTZ3nmb1ATJGGxz9Kxm uRJ9zopxu7H9K/7F3jef9uz9hLy/FlqBAlhl7mXP7w7WOecCv5bfiT8LdB8A/FbUNC0tUlsYNRQB 8DHBGOnFXL3IpijK1WyP7BTb27f8EnNOgGPKj0xcY9g9fx3aGxlWCQcqB8tF7xuE42qcxd8RRedB MG4B7V/Y/wD8ExPDP/Cc/wDBOXRtFgQZNnGj/TDZ/nWcYuTNZx5437Hhv7f3jfwn+yV+zBonhv7M ranH5WEVCd21z6V9e/8ABNz9oeT4ifse6nqr2C2y2toSqDd82I2Pf6V0yppteRy0lzczP5f/AI3/ ABs/4aG/aVto5bdbRoNbgVFUnkeYpzzX9ivjrx3B8PfhZ4S0iS2+1LdeUhJUnGX29vrUVNdDenP3 bH5qftk/B7w78HP2vPB2sWtgjzaipWUKn9+VV7V9J/tXfErwl+x34n0y+vrRFtr2Ft5VCcEttHSo hBRMql0z4d/Yx+D3gb9qH9s7X/HljFHOlq0qohQfxIGHfPav058EWUvxG+N/ifw7qGhw2uhWrSRr 
Kdw3/ICDg8d6ejViW3zH5H/Db9mN/BP/AAUI8UDwrYq+nm4le5IG0KwVccjPav1v/a4/Z10L4j/s sahLqFpCt5FB5m8DO0gMa51BuTR0810kz45/4JT/ALOnhv4ffs26h4ourRLi8k2y+ay8n5D/AIV9 aftm+L4/GX7BZ1FE8q3lSNwAP96t6cOV2HWlzRUD8Qv+CX/7LVt8YPjLo/iTUrndYQW7iK0bbh84 IOOvGK/S7/gpr+yfpcPi/wAN65pc403ybqINFHtG7MoPf6VMqTU7kW5Ffsfd3xJ+EOl/Gb9m7RNF 1y4CWDRozM2OSGOOvFeh/BD4eaF8J/glcaFYOj6PaoEUjGCoB9OPWrmnJ3QlH2ycn0Phb9pb9lq0 +PGl+ELrwrBD9jhnhecowGVEoJ9e2a+cP+Co8uj+B4/CfhfwzpsVxrss8WIYwfuCUBicZ9a6aVJS 1Y3VahZH6e/s4/BRNZ+GNnp3iLToUk+z4CZz618k/s7/AA1074R/t+6xZacAius5CAY4CqK55xu7 IcKjasz6n+Kn7Lvhfx18cE8WX9ykeqQT/u1IXPJB789q+fP+CtupXGkfC3SBb/OHkjjVR1yXwOKF FwWopWpNJdTgv2E/2eLL4A/DvUPiP4nEaajPA0ls0pAKgocr26kV9k/sVfGaf9ovR/F+ozp5Vubj ZBjP3Wj96ul7sOYqv8XMbnwb/Zl0X4a/GG48TW1ys2pXG8uny8ZAB6c9q9e+IPhPw58StS1u0mto ZdYEMqJuHIJX/wDVWdXXVGMY+094+W/gv+z7p3wP+CuuSeJY4jPMP3TuQduUI9u9b37N/wAItA8A fCseJ7y3S5a8mjkifbnbnjt9KyjTbR1KZ337cPiL7X+zrb3sCboHkjOCPc18E/8ABP69e5+OkT/a GVPs8m2HHGMCuhK6Mb87O5/4KL34h+MOmMq4P2eTPHX5hXxjpHiZZbRo1OwfStVGyKguXQ6C1nLo vPHrXRWLtbuX6ik2XY37LUldGPc9q0lkMQQ+tFxm3bXpLAYOB3xXQx6xI7KkJ+XPJqk0NHoWn3K3 CopbkV19pfLaSAMA1AGqLtLm4/dnAFTxah5e9CetQyG7s6vSb77LEoIrrG1eNY1UN8/tUMZZn1AS xhSciujsY5Wtkki+RF7dKNho6C1vzdtlh0rUS+AdgOMelQndlWsiS31IwPlhnNXdP1B1unBPJ6Ct XsStTdNw0oyWqeOdkYDrmkgI5QFvdxbmr63fz4JzmmCZfadYEGB1pY7osozU6jjuOefJ44psUhZ+ pzSQSepYNyYWKnrSx3Zk4pDi11G/aPLfb1JpryiN9xFAnuRtqCLJgVBNdNGhbHU8U7AZc+qyOwAG fWmPqEcWCwyfpTtYRTl1BZgxAxisp9QCpgNnNO1hMV7xnstp+6DWPJdoAdzYqSrMxopFgd/m+U1i 6lcjAwciqQnozHW5FuzMxLj6Vj3eoqbZ3J5z0oEznm1Fngz0BrnLm4YsFB/GqSsCKl3fKqLu5I61 z13ONRDhDtFS9y3sc19rS1jIf52HrXPXOpeY3oKaWpm9TmNQvtr5XtXHXOrrFdl2BIPbFU4u5a2K Mt67klR8npXO3DmQls8U/Ijluc3NMsbkbRmsZ75bl2UD5h6UIfLYxZoQ0LtI3OeBXEXzkrjJ/CqS FJ9DCY7xtHX1qIy+RA6E/N2ouRynHPdTIx2Dc3vWVc3K79sv3qVrml+VGQyfOcn5aw51BmLL8orO W9iUzDvpC33fxrGuJmaEIPyosUzCu4ntoC278KxZLrbGpJ69qHoCKc86Mh2jDVSb5YFY4zUvsXYw bxw2WJ59K5pXKXPJO01KiRN6DNQvDJHhDgg1hXFyrLtIwauOjI3Rz14hjAYnmoPMDIADzTk9QtoV 
JFAILfeqvLbgZcGspq5cWZr3aou0/erMnDoMA8HrWaVgkZzPhdo4NQzPth2mrGkZyuVBGMrVKSby 0KCoY2Y5nZWIYfjVfayyb+xptkDJ4yCf7x6VWskbzSsg6UXIaJrpRDINvesq7uPKlCHkHvUrUpKx JGyxqVPSq7TiLhRnNJ6AU75WiXclcFq87JPbMRkeao/WplsUf1UfsYSBPgDooP3WiTH61+2nwUga HwjF85A2DFRT1KPVrd1RwR178VNeLgll9a6kMpEebgY+tUJW8jKjpVIlozNpDg5714z+0KQvw51Q HnMDgf8AfJrWlpMZ/D18W42s/ipexAZkMh59uKxli2kgdT0rDH6yHexXUtEShGTSebj5TXnpEiwR +XJyfxrTiJjlBXnNLZlI9I0NGl27j+Ne5eHYf3Sr37Vdw6nsOifuXVG5b1r13RJPMcA9KIp3C9j0 O2RWwG6dqsNGYs5bI7VveysTuMVwuDnNVpf9bz3p7oCvKwjPSqMp8zg0LRWB6nzNE3mxgkYNNXCy EkVz2uWyOQmY7WJ20sTCE884ovoIGk3NuI5q5FMBgMODSWg5Cz4gJIquYgFU560JFJaGruVQE9O9 LFjzCSPxqugInWNp/uthR2qxb2+7Jz0qLXY1oSrtV8kVqqxKZ6qKbJe4eYGxmplDRuGUYBpg9D0L 4POH+JVqB2lH8xX9G/hg7NHhPYivQpr3CWmdXBmd84wR3q2E2uecmoejFuPMTRLz3qRkLRAnn2qy uhNbt+9AJxxXxn8c5s+LkxnIz296mRm9j+Xr/gspJ/xO9OmA+6MHP+/X4uWN2n2aJinUccVySXvC gZWoXZ0vVtPuNm9Rfw7h6DeM1/cFo2h+G/2hP+CdejeH4tWhtjNYLwsiZH3uxPvW0Itq5M4dT+MP 9qP9nqP9lf4gWlnpGotfXtnIJI5CVBGxge1f0ifsI/tW+H/22P2ZLjwr4rdDqtvamKfz/wC/sJyC cZ61UAiuaJ84/wDBNPW/DP7GH7c3iPwo10pju3me3c4wqqirjIOOpr7K/aM/4J++HP2nv2ntU8f6 9rqrpSQzgW+6NlbcAc8nPatFS5Vcw5/esX/+CYfxT+HHwO+JXjP4c6DLbwFpHEciYAfbFjOc4716 N8B/2TYvBf7a+vfEK/8AEW6zZ5XSJpI8AFR+PalTpv4i6nxpnyT+1nNb/to/8FC9Fg8PamLez0O7 23cqOoyVkR8c8civ0c/4K7/B+x+Lf7PNnpVtqwHklN+x1PRia0UfaP0FUlqjhf8Agl9+zV4Y/Zt/ ZQ1DUNGlhGvahCGefKqzOUZcnn6V8v8Axl/4J9+G7fw0nxa8da+t5rn22KbZK0bBCWyRnIP8Irj5 Gpux0V53ikfaX7RWmaZ+2t+xB4d0LQdVjghm8hmaKRfuhzkc+xr8Z/2oP2DPAH7PXhLwlpUuoxya nhI3J2ZI8zBPB9663BwgcdL4rH034e/4Jb6Dqnif4e+L9F1RYxaCN32lBuHmhj+gr7//AOCi/wDw iP7SXxR8DfDW6vIJo1dLqWMupyYZgemfeuWdOVWJ1ylZ2PSf2xvgPdfELQfDvwv8J3yaL4ZCKbl7 eVRhUf7uDkYIJr5t/ZM+EvgH9hH9rSTwvpl9C11dwTOXJVSdoC9j71tSpPlUWctSV53PNv2mP+Cd tv8AtMfthap4w1XUxBo8Uc+FDIQ2QCDz9K/FTxh+x9pHxF/ayj+H3hm7UaQl4Gv5UKjcEZcr3HKk 1OKTukjohpFtH65f8FFP2mPD/wDwT2/Zl0n4U/D1o4NSnt1gAtzjYm4o3TOOGr+df4FfAbxh+0Hr qadZXXm3S3SNdXEkoBYggk9MdK6a0Lwikc2HfM3Jn9mHxT8K2vwM/wCCXkGh6rfIbm3sFif51J3Y 
f/Gv4wvBt6JNPh2g+Xj5MjtWTi1Gx0P3nY1PEeoAW5hwTJKwRMDPJ4Ff2n/8EyJh+z3/AME+dDv9 Zu03m0jzucdcNRSdma7RZ8kf8FQPhBofxx/Z3sPiLJeiO5EatGq7TwWJx69q9T/4I/8AjHSfG37I upeH0u0jvJoBGV3DK5jYdPxrSUmpWOaNoRkj8rf2lf2IrL4I/GrQZNM1L7Zrdzq0Ekinb90SqGOR 7V/Rr8fLy20/TvA1k1ygmVosruHaUVUqTvcxpS0Pmb/goDY/8JR+1L8OYNMnV5403SYYcKJlJ/Sv Av8Agvd4GfxF4G0J7acYiCmQBh0EmTUqDbNZO6ufOf8AwQT8daJ4W8aeJ7NplhvZJGaNZPlL4jr9 YtX8V/EPXPjD4pnthDp2jQ+aElW5wWGzIOCKapNK4Ne+jG/YL8daVq+qeOoJL6O58StOweXzASW8 rrn8q+qPF2h3EP7Mmu6fqF8Lu+e1bO5wcHY3pUwhZORb0qWPln9gOS0u/wBkS58NG4ja9giWJlDj IO0/412P7T3gC50z/gn/AA6M8wMsMMYb5h23VpGPN7xNR2kfzefsa/tEap+z38ddIkur949Fht3T yA3BzjBxX2D+3n+3TP8AHn4seF7fSLxrTSYLqP7QQcBsSg9/bNaNxkrdTZrmgfsF+0/eat8RP2Ot GTwNdqLlliKPHKB8u459a7n9m3QdS8L/ALIcGneLbwDU7mBYpHaQElmBX29ajk5NDmhPki0ammfE LRf2QvAPhvQbi6E8l35caFgO7be31rwD9oD4I6VYftf+E/HN7dgxGB4/Lcrjc8i4/lVyvTVjalDm jdn6i/Zbh/iRaX0N5s0pbaQGMMMEnpX5v+Dri30//goxqN3eSrFvjuBBkgbgQv8AWojB8nMYKaVS xyP7THhz4gav+1ZpkmiTlNBNwGm2y4BAde2PTNfZv7SPww0f4ueIfD9jqE6NFaATOhIPKOGoa9oj WT55o/JH/gpT+1IutajF8OvDbBdOt2xctEeFKt07joa+0P8AglJqVrp/wZ1zT0nVrlcCMFhnPlms 18HIa1It3R1f7M+jeObf9p/W59fuGOiK8n2dTJkAbRjt65r68HhOz0n4o634vN1mK3WTcgwRgrn+ lKEHflMHL2ULHmnxD1ax/ar+Bt3Npty0cEZVycAcAE963vhVo0V/+yxomk2cq3LW6xK3I7EmrnHk 0KpSvE5v9tDSLy3/AGXLS2tgA0TRbxuxwCc18R/8E7/C19qHxgh1RYNlssEihzx1Ap8vLG4oOzPQ P+CjXhfUI/iRZ6gED2qwSBnz0y1fntokkN5ZRMAGwOvrVJ80Soy5pnpen3scNqQV3Htx0q1p+pSb 2UjKVPLY1udJYSRM+4HDZ6YrZvtSFxPEFGAvHFK1ykzoku1iCrxg+laVjOIMjse9UkD0Ox0iRYX3 k5rfjvjI5JNF7CWp0UF9ttw6CtuwkW5gLH747VD1YJHS2OpMINkgwO1b+nrBIocjn6UNDOotpoVw RyfTFXv7cnaXyR8iD0PWpaCx0NnqQazOOGU4p9tespJPzfWpS1Bs6D7as6JgbWFWoZNkpf8AiPpV SQkX/tHlDluta1neZ4ZqfQC/csGh3IwLd6owXHzAE5INAmzppbpDGuDk4qpBdK0mCeKQRNR7mOWH AI4qrFJtwRzQN6h9r8xyTwR60jXhZCV4NFtA1IlvyMbjzU0lxuQ5bNSxq5QJRWBHSmXl4Zowo7Va F1MoXW5dhO01nSXO18n5sVQivDdbtwI4NJFbRzZJOKhu5WxVu7kW7hBXKX827UlXPHpSsUpDrmWK MYl4HauUurpWlxnC9qoTKTXXlnAOSawroMS+9MenFUkSZMGJbUo/APQ1zl9AYWwsmVFMZzlzNmfY 
SfyrKhvRYtMDzk8UupUnocjcXrBjuHWuY1G6Se1d0kwVOOK05TBM59bwzRqSeorDuZliuSWXcao1 j2Mia5kEpbO1T2rPnuUW3PPz5rOaKZyd3JuYZPzY7VyruttcOyn5ieaFsSzKu9SDcMfxrMnvY1gO FDE96SehNr6nLm7CA5OK5qa5M0xIPSq3Q+pQvb4Wki7R25rm5XW9lkZvWnsTJqxl3srJhF7d6wp7 /cNvpUSRlF6lJUCo8jHiuaa8RbgsvSpW5vujJ1C7MrlTwtc9MqyHCnp605CM2WUM+OmOtUbu4VsK vSs2mmVzmTKuZAT92sq5kAcgLxRexL1KDRhVO4c/SufvplM+0jp04pxd2JqyM6+IeIc/NWaMgDA5 FKejFAqai7CRSBxVWRWuWGPlA61Leg7akcxhb5dvzDviqNxJ5qDHy47VmUZLgecWYYrLvJnlzgcU 7lWKAZrdME5JrLuI9z5zihakjUXfCcrmnvFtgValqwJFWddzqvTFV7iZQ4x1qVsJ6MoTupGQcmsm XaB8/Joi9RvYqvL5Z65qokxaQGqZKRo3cu6FRn5a898RqzX1sgO1TIp/WonsWtz+pn9jA+d+z5oY HRIk/ma/cr4KOH8FW7HpsFTRGer70EYwoBprPmumwk9DPkl8qQnpmqDZ3EnkVQ7ieahwSvQ14R+0 ROsnw91IAYxA56exrSl8Qj+I34r4m+KmoSk/NvYD9K5NiY0GW+YVjjfiGwAzhh171FPECA2K4HuI kiYyoBjitaAlSoAyKmQ0eh6PmMIRyK9s0RzmNs8elXEaPcNCVbgKwPNetaSPJRQRzWkdCZHcRNja avTgOoI/EVW4LQiMK7QRwPpULsWfIHApxB6lNmLkkj5e1Z9yMsATzVvYXU+ZmUy4AOMU4yKHA7/S uY3GyPh8AZzU2xFTHepMxAoQqOoNWJEA4ApdRokjXzcBhSlVt2x1rRFX0JPlXtz9KFz/ABDFZyEt y3DIVkCLxVoO1uxLcg0olshifzpSMHH0rTUsh2L071ZNiW3KyyYI6e1bIkGzaRVJXE9zuvg5CB8U bNl+YmQY/MV/R14biYaJGGGCBXZBvlKa0OhtoXkQqG2+9Xlt2ser7yanqZEm52X5jTyCoyDn0q7g r3CA5kDOOR7V8ffG5tni5HIGCDx+NTIUkfy3f8Fl28vU9PXPJ5/8fr8YrNV+xQlecLyK5b6gkQ3q /bIXQDBPT2r2jwV+0J408A+E7PS9P1ua3t7dAkaCQDj8q1pz5dCZbWOA8Q6vf/EHXZtU1u6a/vZC fmcg4B681p/DnXNR+E+pXE2iXTWjzEmQocc4xT5tboUVZFG48Ratc/EA+I5Lxm1gsSJi3ODjPP4V 9Af8NdfEOTT7nTJdcuGspgV2+ZwBjHp71qq+lmR7JXueFeANS1D4aeNW8QaVcGPUiDumBwTnr/Kv pBv2zPiNqzXCya/OsLg7k84c8fSiNWysaOFzyv4cfFfxX8N/FupaxpV21rc3kvmzypJgu2MZ/Svb vF37ZHxF8aeG7i1uNauJGcgsXk/+tV06qijJwuzK8C/tl/EbwP4MGn2utXCRrgKBLjA/Ksr4vftQ +PPjT8OI/D+p63Pc2mVLxySjBwfp70ueK1KlFyY/4Y/tWeNvgv4FsdA0G6e2tbZVVAkmOB+FeYfG b4p+K/jp4nttX1zUJLm4t+I975285/mKcq/NGxMaVpXPonwr+3F8R/CXh3TNJ0/VpltbeLYSZcd/ pXkutfHnxXffGKHxi9/M+rwhlhuC3zKrEEjp3xRCooqxUoNyufRx/wCClXxRbxBBI2qTbUUru848 579K+ftf+NfizXfj8njq81Ka61BVcQys2SitjI6e1Wq0VsZulrc+jbv/AIKJfErUUu7JtRmW1mUr 
v805wRjpivm/4UfFfxD8KvG19rNjKf7RunLSXO/BJIxWdWSk0axXutHP/FC81P4z/EP/AISHxHdt eagqso3sG+9jJz+Fdj8Hfi34g/Z11i4vdGkLtK2fv4wcY7UpVbtE04KB698cv2yPiL8fvh3Ho2o6 nIkLSo7RmbIwDyORXzvpOmfYrdEVQAoxmipUTWhSj71ya6sm2+dEB58fMZPqOn619Mav+2b8Sr34 BWnhNL10hgaPdGJ+DtJPp71nSmk9TWorox/H37UXjn4gfBHR/Cj38q2cAjEkRk4wrEnt6Gqf7Of7 QfiX9nLXtQfw9I0K3DlhtYjHGK6OdN3MJQuzc8aftQeNPGPxHt/EF5eST6nFJmMs/wB0ZBPOPavT /i3+3T8Q/GniXQr1b6UyWoAI8w4HzA9cVsqyb1EqaSsM8Uft4eONS+K1lrXnSz3tuCsbsxAAJBPO ParH7RX7Znjb9oWG2tdVunbYRuHmEjGcntQ6kVqgUNLHhnwu8c6p8Ffi5aeIdLkaBlidSIz3OP8A CvvjUv8Agpv461e01DTpHkt0nUr5iOxLZGPT3qYVU1YqcL2Z8n/s9ftA+JvgH8TtR1u0u5ZLm9LP KScZJAHYV9d/8PIvHeoPqsN5dzNHcqwVd5IGVx6U4ySjyja96547+z3+2D4q+CFzq8sFzNLJeS+Y QSRtO3GOBX0D8R/+CjPjv4l/DUaRcTOlu2A4EhP4dPenGcUrEThzPU+C5bAa00VxcJtkBB6dKva5 oR1eIiJyhAyp9MVzp2maLRH3B8Cv28/FHwh+GdjoKs91DaosYEjEdPoK6/4tf8FDfGHjzw1p0EMs losc8bFYmJ6Nn0rsU1uznnDmdkeZfHv9qnXfifrfhm4vZ5JPsO1k5JwVcMK6f47/ALZHiv4s3mh7 7yWKGzKv8rZyVcN6VMpqbOqHuw5T6Uf/AIKT+Mb3TrG1jne3EQAJEh+bn6V89fEz9qbxJ4k+LVh4 ihuZbe6tsrvQn5gWBP8AKteeKhynH7Jqdz7Dtv8Agpt4jE1sj2QkZUIaZmfOfWvIbv8AbT8Xa38Q 76/a+l8ho5FUb+mR9KyUklZG9ODjLmZ8uWt1cazr2p6pd/NdXM28k/SvoP4AfH/WPgR4iubmzlfy 5m3GMHjpjtWStc6N5XPuX/h4/rV5BdItsLeR42xKrNkcV5J4I/bc8UXPhTXtPu7iWdLklS7sc4Kk envW0LJ3ZyV6bk9DJ+D/AO1Rrfwq+HGoaLBv+yzLtHJGPlI6fjXefs8/tj638J/D88EjvdxSOpVJ cjAxjtTk4zBR5VZHsvxY/bh1X4oeCBprWwjiYjIDNWP8Av2sdR+E9j9ntrFH28bizDtWcmrWDle5 vfHX9qvUfi3pgtbi1WPcPmbcf618paBINPthEi/KvFDaSsgpxad2drBrKCMAd62bDVdkgbO4elJu 5tY6WC8AcuOMnpW8t4qwqWPJqb2AvxStEQ+7cD29K6WG882MAU1IvodRpmoAAK54FdauuW8SqqgM T3rO7bGjTGvpbAA4Ge1PtvERt7oFRwaoT0Ots9cN1PmX5VzxXdR6ikcAIOBQLUu6drCynOea0YtY V9URWfqDilIaOxgk2FgSMZ9aWW+NsQVbJ9qzBovwaoWTcSB+Nbdnflz8p4qhl3+0BkAngVJPehnQ RvweuKGxW1N2C8WNdqnmrsdwm4D+I0osmS1LTXewlQcmqx1ZIcqQS30qkBLBPvztfGa047trdQme fXNPcZYZ0YfMearrKWyqHApBsN2ggFmyR6c1aMC+WGVuO4NQ9xrUwbyd1cBD8oqW3uS3Oc4rToFj n9Sla4uRtJXHpT/PMabSc+9DApXNyFICtWTLqkttcqg5B96gbKOp30i3HmMMKB61yNnrralqjlBn 
acZNPQnYm1SXz5cbiWB6VymvXckDxqF+tXFEtmWdX/eBgcYrK1TxfJOxX73vVIqJgt4idrQKQQFr Bn1XN2smDt9xQ0U0N1HW0mmGyPBPfFcpreoCzjA3ZY0WJlsctLemaEnORjvXGfbBB5igcFueKu+h MY3YhkRkCjgCsu4kBOR2rNtlvRmNqGpIkLKSM4riorkSgjqR3qug76GbeNjL7jj0rn7gRhGIbO7r xUJ30IbMGSGMwkA5HfNc7cAWqHDfL2FVaw07mB56u/z/AHaqXksNu2YzkmlF62CRgTz/AGlunI61 g3cixuzdBVsxepkvdLLB15z1rntTbbtaI5HeobKjGxQmug0G3tXO3QVYxzx7UktCmZVxGJsNn5RW bdSBM8dvSqtpcRhxv+6Zm9aybklWBX8qiWwyrcTmLGelZc0mWDk/L6VmhsBL1cjK9s1zV0qyuSce 1C0G1dGA8LSEknGDUf3HPpUy3JSKjyb229T71A9yFjORgj0pbjRhTsVkDKc5qrJOqPtJ+ak1Ytam deXPzgdqzZbnycf3TUbj6FWUosmc9arvBuj65FUlYgkjZZIwq/KB3qtdSFDsA3e9KY9jJkkMgwOG FUchc+ZxSQmZzSeW/qtQSTJ5Zz1z6UrWEijIvmEEHFTQwqkeW6n2qg2M+4nWKXAO4elcV4vY/arK UcKsig/nWc9hruf1U/sRPt/Z/wBGPUNChH61+3/wTl3eCbdCOiCs6D1BanqxXKY6Yqmkv2dCrDOe ldtwsV523LlhwOlRB9wxnimgkQ3iFERh+VeD/HyZU+HmpMy5zAwHH+ya0p6SBan8S/xX2D4sahCy 5cO3OPpXGSKF5z1rnxr94tjGzGQqmnyylcKBx3rgRJLZq0rMB0qzbuYJeORmp6jWh6Voc+ZFIHFe 2+HmWWZc/wAq1iJbnuOhQ+UQQOPavXtHPmqCTxTTuWzt7cZRT+tWJGwCSK1jsRJaiGXMS+tNkdWf aeKpCKO35sZ+WqUsY83AOTVbjPl2Q7TtHBo2g9OveuZ7GjY9c4wDzTmjO0Aj5u9SthbjlUoQmcmm 7/32C2CKLDtYvM3mptXhvWmeQYSEY7j61aFuW1YQgKfmIppYysMHA71DVxl0oAAVPT2qfAmAYnAo ii76Fx5lZVCqAR+tTvKViHGCe4p2fUSHRTC3iAC5J6mtFH8z5QOT3NawJselfAiFk+JNshbLpKOf xFf0d6EzS6JCcdhmumOxbWhrLId5A4q2QzAHOTUpGViRQXUZ71ZigKRNk5Ioe4dSeMeZsXviviz4 5W7yeMVXfgAH+dKT0Bn8tH/BZhfN1vTRn5hx/wCP1+M9pbNbWaEnORXJbURQ1O7a0tSVJ8w8KPU9 hX6c/AD/AIJeeOfjp8GtO8WNGLa3uIlkHmSBeDn1HtV8rZE/dhzHx38evhRP+zx8QU0O5mE0hVix Vg2MHHavNPDsd14s1R7XSreS9mGcqiE4/Kib5NCaL9orlu50u60zVZLW/ha1uozhkdSv86dJcqr7 RIo9TmlFmxufA/wPr3x9+La+EvDltJdSkMZJQpwu3HG7BHQ1+j+o/wDBKHxzpVxN5wVnjBJVZQSM fhSTcmOouSNz85vEPg7xR4X+KNx4Sj025uNRW5ESKIWPoD0HvX0P8b/2VfG3wX8H2mq6/bmwt2j3 EluvPTkD0q7voYRdyH9mL9k7xj+1bos17oVrJFZLyksoKBhjPUjFbX7R37Gni79mnwVHq+qRSSR7 1VjF845OOwoneO4ozPlLQdZXUrFXVtqn+9xWywESbidwFJM03NT4a6Hqfxh+Kll4Y8P2z3V08TvJ sUnbtx6Z9a6z4xfC/wAQ/Bz4mR+HtVtJYrueXZFEiFickDOMepqee7CfuWPrzTP+CavxG8Q6dbXc 
dq0ULR7/AJ22sR9CK+LfHuhan8IvifL4Z1a3limXdtZkPO3H+NFOV2ROViK5Y5GJAM9s062umuNU jtLeJ5pipOEQngfStW9StkLql4dF1EQXaNbytyquuDj6Guz+F3wv8S/Hb4hR6R4biDxiNvMkL7cE YxQwTUtj3r4wfsa+Pfgv4KuNS1ixbyo5kXzVJbg9TnFfNGm6gZLaMK/mqV+960JOxEZXlYhur5kL RpzcEHy19TX0L4a/Zi+IGrfCH/hJnsJI7ZovMBGeBg+3tUN8rNm/dPE9DupDo0P2x8XBTnBzzXpf w6+Evi7x7o2r6lpOmyXNtaE/PtbBG3Pp7Vopaak7W8zitPg1nXY7OC1t92pSOqSQAngk4+te4fFH 4HeMfhHoVtqWq2DW1szKuWyA2TjPShSbHJcpwsRNzghgM88GpvJIYHP1rRsnfUngBllJPQdKtR3v mOytwB0NOOg27k0BMz7lbpVwSOJA27KjtVOQkiR7ue4uFitofOnfhV5r1HXvhN4y8G+BrfWr7SpI bKR0G4qw4Jx6VnzO4ptJGRHOJXULggjtTru/XT7aWSQ7FVT19avqCd0d14B+EHin4qeDo9T0zT5J bZ8MpCnBH5V1HiT4TeI/hr4YXUdVspIYcgYZCBz+FOUmtzNSszioLuPU7OGVVGGGRinazdjR7GJg u+WSRVVR3ycUSlyK5rF8x7hZ/AHxjq9jbXUOmTPFKAQSjD+lZ3irwnffD6WK31aF4ZW+7vUjvShN y1RU0kU5NRZMRyHanTJqxpssmr+IrXSdPiM00iFsKMnitfQUbWuz2C2+DXjF9TeFdJmEYz85jYcf lWFLCdC1ltMvlKXS54I9Kz59bApnSWmyQlGAIWpJneBALQ/MxHyitFJ2G0dXd+Ftf0zQY724sZUt mdQHZGAIJ+lSC8Nzq1vZQwmd3X+EE85rNVCG+h3t/p954euoLW4tjCzqTl1K1rLcSWNufLbcc9Ca pSb1LsnEY2ry3YRZew7Vr2esxpmIpz64rV7GdrGhasJEK44J61p2am0lG1iVFJDOvtrh55VK8KK1 pGaeZQHwQelOSHF6nSQXZhuEVs4x6V11tcqj7lPFSOTszT0+fzrhw33at/YnnlxC20A8ihaDuTXF vLczIqsSVPSu1jHl7FP3sUxbs37TLfMzfKtakGsNd3YjKERj2plHVWlz5TkZwB0q7CfMmEpPzjpS eqJN6LU5yyhjye+a37Z5G6tu+vapQ2yCCSWW/cs58odq7a1u82hCH8qGJMrNJPDbOT09a1tGnMtm su7Ix3oepRo6fqDNO5zwDWnqeptFGrRn5jSW4mupFY3s6NvmkwfTNXP7S8/laoC3BeSRA4GB60yz 1WYzvl9/Pc9KLgas2qGLClixqvcayXwkeVPcgUXEZ8fiBrGfYSWNbkesSXDBidq+gqdw20IpNYDZ AGB3NVl1ERRlgfwqrjRQ/tNXcnHGKxLPVjN55P3Q3FDV2JFSXU8AHdVGa/ZrgPngUWG9Cjea2L8i M9BwTiohcW9gCsSgHufWi2pNysL5Yw0vG4Vw+p+IW1C8UY+QdTWiQjF1DU7ePKlCSfauVk1BIv3Y HJ70INhplDQ/OeM9qr6pqUKQRIB1qWy09Dl9QvgsyCI8DrWPqUkFwmW+99KaYjl52CpsTpWHOwiY A1b2BaHP3135ErMD8tYUmptcYwNorIb1M2+iimOT1+lZiW3lIzZ2j0q+gtzltW1BVAUH8K5trjch x0FNJGbumZUtyoUnJA+lY0zrKCc5GeKUnpY0irGLcqvGDkelZOoCBIwsQ+c9TioiraibuYrSfZYj k81hTHzkyeQapO5HLYzpo1ihOPu1iT22yHKthT0GalvUuJiRQtK7oeFFYCyIZGRuce1V0G7GVJLt 
nPJ2elZt5MyuzdV9Kjm6EqJiNPvgDgcDtVbzVeTeR+FRcq1ijdKN5bsfWuduZdsoA5WiIqmhBJLm MrnCmufuyIl255pNExlcos2+AA5+uKpSy+WcZpNDuZbz+ZkqNtRsf9FJb5qSBmAWa3YDacGq87KX O4fSqkVDRCSMDGAFrLu12sq7evtWWxRBeWq26Ajk1Q85pAo6CrRDIZZNj4VeO9QvfCIHAzUvUVzK e6Fzkj5XqlI+5MSctSGVBIE4IzVcx4kJIyPSkJaMgdPMcBR8o7Yq4WWS3KY+YVNy0rnP+QocMetc z4mja4ntoM4BmQ/kamWpL7H9T37EbqvwK0ZO0cCj+dftz8E593hCJgP4R2pUlYcVY9jh3yxFmXaT 2qldMsZCkc/SulFMhEyrweR71XjQSy5XpV7EtFOS6WK52uNwzXjX7QLGT4f6nlAqLA5H/fJrSmry EtGfxF/FWVZPi1qcw+U72AH4CuEkcuAmORWGMjaRoyRoSQDnpSrLtkB25WvPIRZllMIynGetTae3 nfN0A60dSj0jRnWaNCowfpXuPhpdyJxhqaYup7loUx+VRXp2jjyJDuztNEWWek2L77UgDjsamOWI B/KumGqJkIrIxww6dKrzKGfg80O6FcpEMWK1nyRFOR97NUnoI+Zyp8tTnce5pVgMjls/hXJcvcQg IwC/jSy3DZ+Qc961S0HsxygyPno1QyWnmy5z8wpWLkXlPkxbgMkUkVzuG480GZadCFVzjn0qQAIe O9QilsWIjtIXqKtL8oI61rGw0tBYHxJyOfpWmvzcHpUzCLBW8t8dUq55vQAYFJOwz034EnPxKt+7 NKMH8RX9Iegy/ZvD9uuPmwK7ofCNmki5fPerQJjIOeKkixoNLvAVVAHemchjt5qCepZt28sBn+9i viX467pfGaNvwADx+NDQM/lf/wCCzchi8WaWA3Lf/FivyCiDNAgznFYdTO5nzWC3mr6VGRtJ1GD8 t4r+2z4rfEvUvgz/AMEwtHm8PoIrxbKNVkDFcZ3966KdnoTVd48p/EZ8RPiR4k8fa5FqGvN/aWs3 P7qDL7z85x298V/SL/wTl/Ys0H9k39l/Ufih8RHRdRvbY3EUd1tBiJRhtGcHOV6VzYhN1EisNFRg z8RdX8Qap+3F+1g0HhLTXkgupGZCyMuEyuT3r9NNZ/4Iv6zZWN86aiDeJA7+R5icEL09alwbloNS sj9DP+CFX7LsPwk8NeNtQ1ixjXxBaylfM6knySTz+VekfsxfHP4hePv+CgPiLR9d0x08JrJMtu7b 8MNq46jHXPetaa5fdIq1OayOG/aj8X+Cf2dv+CiGltPp/mXOo3J2hIS3JdFzxXZf8HACPr3wa8NR 2kKrbTTwGXJxhPN+b9M1tTpqL1Mar5EeN+EP22vA/wCyn+wVpWleFDC2uJ5FsqJwWZiw7E+or7W8 fR2/xY/4JZW3iHxpaRpqs+nrLtlOT5mHwOcHOQKzxe9kXRj7t2fze/sw/wDBMnxn8fPhpB4nkQ6Z p126Pa+YwQ7D7MOORXnX7cX7FnjH9lSfTRbW32+1vMRDY2752bapwormUZKN2W5bJH9Gf/BC7/gn hp/wG8HWvjrxnbRHxTqcHmx+djcgZdrDseoHapf2lvhd4a8f/wDBU/w9Nq8cDRwwTmKKUgc70IOP rV4anzwcmOs72Ptf4u/EvxHov7b3hbwlpGjCTwxLp87TzDdhSrqAOmOhPevxL/4LdfB228SftU+G vD/gyyW68S30btIIlJ2IJAHJIzg4PeqhTaYqlrHzvL/wSL8YaVpk93cyCa4hhZvK81TyBn0zXzJ+ xbJpHwv/AGo20j4g2sUBNwYYVl752jvjua1VN3dyFO+h94f8Fi/2A102ysfiH4Ji32Ua/voYFBG1 
myTxk8AV+Rv7K37X2p/s6a0L7R9O+2Nd3kaNvDLs3YXtVWtqKn7raP7Lvj9r8XxQ/wCCZUOr6tbK l1dWSu5AJwSHr+J/wnappcEUCMZIlXCbh2onHS4RVpXItVaRPEUN4hMf2ds8d8HOK/tb/ZT1O0+K f/BLqG5uLVYpX0c7cg8sUfHWspR5lfsdMVeJ/KNrn7DfxCtfCDeIXjeKwu5kFod3RW49PWv6vf8A gmd+zHceCv2EtQsdSgjk1afTziUtks3lsM5/KjkcopmUZJ6dj+ZOHwbq/wCy/wDtb2s+vN9oa91e JY41O/aGdVPSv6Z/+Crv7NmtftC/sk6daeFIVjvfNhk8wPtO1XJI71vGPLozOpUbP5OfCv7OHjjS fiovhe7S4N1GGDBlODjHfHvX1VpH7CHxA1rxFeWy2rrBErMNxxnAz6Vlq2y78sT5e0D4a+I9a+NM 3gqG2uG1OJmDp5Zx8uM8496+ztU/4J/eO9M0O9u7iywkSkhUbJ4GemKqN2hOaSPkT4P/AAo8SfFP xfcaHpsbreWsnl3CSfLzgHuPQ1718af2W/FnwL0UanqVm32YMAduSOT16Vmm5M0T0POv2avh54r+ LPxMtdT8OWj3Vra3kayDkAgkE849K/rK/bj8HW9p+wFB9us0iuljj3BRnafmran7z5TGte6R/IN4 UtL7WPFOkaHpQ+1308JbYDycHB6fWvYPHfwc1bwl460jQ/EMRgS/vYk2ycYBcKeuPWtJR5VcpNpH 9RPxOstH/YJ/Y40Z9G0lLzy1ijwEPOWIz8taEfhrS/2zP2HZNVvtMWC8lsDLEuw5DbWIHPuBQ17W zRlQXNc/la1n4KeNPhDaaYurwPDbzFUtlYkbsnA7etfa37Nn7I3iDxV+0t4Vh8TWLLockLTlZFON ysNvUCuWrU5pezOiho9T90f2yP2idN/Za+IPhPwzpmhrcW93hDsjc+XmQL2+teR/8FPPgNpGu/AW x8YW1ssV6JI/mC+rHP8AKuqlFU1ymWIqWqJH8/ng7QtU+LfxH0rw1pcP2y4uJFeXr8iqw3dPY1/S Z+zN/wAE7vDHw6+JsOrXZS61GGB90LKp2nj057VpDZsjE1HGKSOi039p+x1T9r2/+Hw0VVgjaRDL 5b84A/DvXxH/AMFJv2T7vSPjhp2qeFNPMoltZZLhUQgL8w9M9hWDjb3zSCfskz8/fhz8N/EHifUN U+ywvcyQMRLGAT5Z25xwKi+HPgLxJqHxCRbW2mvXhukWWBlOF5BPQelRGrdmqdoH9IPx/wDBVhb/ ALFEE95Yx29ykCsQAflYbq/N/wD4JyeF/CnjXxhYTawkM975RMaSYyenPX1rVUubUmGqueyf8FV/ C1n4R8T6FNpcK24aM7gnGfnr8xv7Vd8kH5Sa2jGyFCWtjoIZB5Iz3rZhlVYcEc+uKq10aI0LCdih HpXTWbFYdz4zS2BnQWF95UZPrVi0vNlyGbmi9xROsj1db+Tb5YXHGa6SKQW0YUHJNJLUpq5rQXYg hJ/iHWrlhr7y7TEpUdyRiqYNWZ1sWsi0YMFy5rXs9ZWWTeQCx9aSQbGtZ3zSzsueO1dJBem3GzAJ PvR0JT1LyXmHCGti2vAG68UimtTtLG9hnKhjhhVi51WRbkiMDYeuDSsSyxDP23fe610VtcNZwgJz 9KlalJEsmpyzxmOQ/LV3Tb8RxiJfuirSIloya0vP9LkXpz0q81yblwMfKtR1K6GlBcxM+6VifbFL JdJE+5Pu+lUD0RKNYkuhg/KD6UkV8lmWTq1R1KirqxJHchcgt8x6VYt9TWCXY65J70wcbEF4Y9+5 TlvU0+G9cgAHFJC3I7jUBuEY+9UEbNd3IiD4ODVAY6XTWl7NExLDsazPMeNm+fahPIJqkTsUdSuM 
QrsbP0rOTVXVWRh24NPYox01J/KcOduDxUUV2yJktuzTIsZt7eGRxtcgd6y1uAshDH5RTbGkZV9q cYcqqgjsa5QTplweueKcdgsRtetDGFbkGsG9nVmBPapY0rHL3V+S7MDgVnx36uQJDSGVZrtXuGKH AFYL3P2ubyyeM9adxHP6mywXzQ7sgd6wp9QRn8sHkU+W5LdtDDutSCuCDyKzb7VTcSDJoa0GtDmb sD7UWyKyLi5xFjGDmo5iuW5VkaNkAOD61z1xJBFuCNkjtiqWpWhzc11sxk4qrcsmwsp5oegkjk7m 888HHasqO63DHTFT0JZWuLjzUPtWPPOu0AnIxUvRjS0MGW8PIzgVzDyF7kjHFaX0M5JoiuWJAUc8 9ao6kvlw5HSsepUWYEB8qDJ6Gq08qqCRxQ0W2ZF3KZoc54FYdwuYQBRHQiepiSTfmKrM4fmUfMOl EnczSsys7GUYHA9Ky54it0Gz8oHNSnqXsRoqOCx49qpNJuJ2j5RSe4J3IGk85s44Hasu5jRnOfw4 pMZCEXadzYxWXJPvyew70FMjJ3xZxnPrWX5mXK7cYoWxO5UkumizlMg96xblmJIXnNStwtZFLZt5 x83rTXxJGWbkiqaBMyJJdq5xU6S71BHXvmoF1JBOEXI5Y1DchYkDdz1xSSuO5kzSiXAUYx3xWHrz BZrRyOBKoP51L0H5n9RH7EZ2fBHS3/heJSP1r9w/gRNs8Jxqy54GKVJF20PaJ7hzJleMVmkfaGYt XVFDa0KjwiTA/hqBpFtmIQcUyDNnKO+evNeR/HFzcfDrU0c/L9nfH/fJrSk7SBo/iA+MUG34o6lF n94kx5HtiuKk1BHHyjB+lZY5+8V0HJc7fvGn296pVhjmvOYAZvLbB5FalgPm4HBpIXU9C0iUB1AG 0ivcNBLkoc8GmkHU9w8PKwZcnmvY9LTdHtbrTiral2O4tP8AR4Av5VM4KsD/ABV0UjKTE3iWTpg0 wuGmxjDCtJJMSTZXluUWTAHPris24uFgGWHX2qZLQaR8xRzDG0D9KkWTb8ymuNGpXNxuY4FTibOA q/N3rS4m9SQSGJxg05mxISBnNDdkVuiQbmz6elBZYowqjr7VMXclj4BskwW3ZFWYMl2yM49apoa2 LKEoOOCan3BCATg0PQeyLJwX44FWXG5Audp7U1qhEtvb/Zvlc7ie9Xo4cgjNQ9xo9O+BOV+JFoVH KygEfiK/o40Z/O0SFiMEgV3w+ALm5GrBMgZNSwFpEO7giloK5cgyEznFPLhX3L19KRLJyd5H94iv iP41ESeMwpHIB7e9DA/lb/4LJgXfjbS1A+6COn+2K/JOyhWPBY59q57WZBnXM5/4STSSnQahDkf8 DFf2VftUahHpn/BKrSrmQbYDaxdB7vW9JW1Iqx93mP5Mv2Lrnwr4h/aN8OzeJpFfT4nDW6zAYyGU g9sHNf0Zf8Fu/D+v/Ev9lnTJ/C+Z/DsTxPJDCchgHJzgZ6DNY1NZXKi/3Wh8n/8AButoek618V9X kNkGuoY5BGzIR5Y8scV2n7bn7eni79m79u3xHpNg76pZmO5/0TcSFwoHRRmtI2MZ3ikz9IP+CLHx g1L48eBPGfiLWNP/ALPaScP5eG6GI5689q+n/hH+1R4H8Y/tRy+BtIwNctlkE2I8cqATzn0NaKN3 czknJ8x+Q/8AwUqt4G/4KT+EUlUeYtzwW9POSvpX/gv1rTad+zforW3ygCPPbjzDmhO8hyaqLTof gj/wSd/Yj1D9sr4sPqutSTQeCtKuBK/moArOgDpjPUcHnNfqf/wVx/bDF54r8GfBrwTvGjxanbC8 a3UlQEnXK5GRgqxrKqr1Vc2U0qSSP0q/4KGePr/9lr9gLw3qHhNVsFtZLYHa+z5d5yPyFfjh4R/4 
KA3P7W3xd+Gmi67paJpskAuJrqQt99JQR14q6kU42REFaWp/WLd6dpPjH4meHl0rVYVgtrR1NvDK p/iB6da/KX9qT9nU+Of+CmXhbUbXUDDPDbTZRSvI3qTV06fsqdkKU/f1PtH4l/tP6X4E/bI8OfDq C383VrnT5380qRgKwB56d6xPDX7O+l2n7eF34ivlF5qjW9x5TuoygIXIB/Cqpqz1IlPn0R+VH7U/ /BRXW/2ff20/E3hu4gF1YYuDDGHZiAqjsPrX84v7UHxQtvj38QdS8Zz2z6XeW94GtAsZ3FuGHB56 gVdVqEWwo+9Ox/Wt/wAE9fEmqfFX/gnO2ofEdM2b6ZlGvDgkbH7HFfydeNdT0HQ/ivfroHlLosWr xJGEOBgkYxWMI81LmZq1etyn9k3xKmS8/wCCT+ly5zE1gmMf8Dr+MrRYvniVTwF4+lOXw2Fe0+Ub rE6h5lbGA4ya/uF/4Jh+CIPiP/wT08M6f5mLd7KPkY6fNWL2sbuahFnwx/wVy/aRi/Zm+EfhvwFo Vgt1PGIzvYMoXZJ6jjvX25/wTn+M2u+LP2GtS1vUU2TxWBaIBicfu2P8xXTZQgkcdKW77n8p+s/G e++N37W1g+sJ5rRavEE5LADepr+1r4l/EC98O6B4Q0qxhR7a5Ee8sxGB5mP5Gpl7xryWjc+EP22N Hsvh/wDtjeBP7Kto45r2Fjckcbv3qg/pXa/8FCv2o7T9jjxHpF5FZJLa3Ns/ndepbb0FKnGz1J5u dWPlD/gltr3hn9qv9ozxh47h09RfRTSJDvjIwHjB7/Sv1J+GNtrN38WPG9p4mEC6N57i1Uzg/J5f p9c1tCC2FODWp+V/wR/ZivdS/bd8Z6x4d22egw3sheSNgA3yKQfyBr9Wf2tfhvoHjX9kvVWvY4r3 Zbk+YcNyFYjpXNKFpNI0vaCkz43/AOCZXw10H4J/sWf8JNZ2Mf2i5SOQuF6naw/pXs/7bni+bx1/ wT8j1eRQonSNyoOcfeopR5ZNiclNJn4+/wDBH39mrwt4p+Mun+LNRuY5tTW2cRWz7flBAz3z1Ffo 9/wVU/Z88Kar428Ka9c3EdtfW97DsjG35v3wOeTntWrTkh8yenc/Qb4i6J4d8ZfA7SIvEvlf2UYl J88gDIJx1xW34AXw74D+Bkk2kiP+wYFHlCLkbcH0/Gl/DsOEfZJp9T5X+M37PGlftpeG/Bus6YQl hZyQzHIAyFk39/pUvxk+MOiWP7VvhDwHpEeNRS0kdtqEBVRxnn8a53RfteczdTkdj6N+PPw38Da3 45s9Q8SC1a/i4t/PIB6g8ZI718vf8FRdTfSv2M4hYRb0e4gjjVBnJLEDpXSlo5MmUPaSUj5L/wCC aX7NFn+zp8INQ+Kvi+KODUri3M1p5xAKqUOV5wRkr0r3D/gm9+0peftLfHPx3rNwzLZpeMtlEc4V DGD396eH9+m2FSHtJ2PuO1+FngmH403GsxC2/wCEkeY7yCN3OM98+navWfHfirRoPiMnh+6+bULy ym8rK9uh/nWerfKbcyUOXsfMPwj+Ael/sr6H8QvEeqhXN7K08ZwDgeURjj6VlfsT+H9B1H4a678R YbOMNqTC6iJXBxsI/wDZahUeUxVS8rHoP7THiVfiF+xsdSdfKimjVyB/wKvwd/YX0xG/aY8OTWch AFs4CgdRkV2RfLA2jNQ0P0X/AOCuDiHxJoEY+8bduP8Agdfkfpk48ll64NLpcqC5veNkXJLDB4Fa iam0eExn3ouWjfinCICGwTWlBJJvBMhZfSluyZHRpfpEg5xUy3xMy7BkGrigVzsdNlE6MzttxWvb 6qqxhdxLfSpehpGRqw3rQ5LHcp7V0lrrSLbKgjx+FK4PVllrtuMdK17U+YowxRvaqb0JZ01pK8AA 
J57Gttbgo4LOd3vSTuJLU2YNS/E4q3pd8y7w5yc8U4lPQ2bS8kMo+bH0rqW1NYNo+8xFDEtSW01Q jeT+FbOjau8UjGST5T0BNSkUjSfVjvIUbhVyx1Yfa0AT9KFoRLVm2Zki1F5WY5PbFRtrLLPsU4B9 aS1ZSNS11CJVIkPzdqklnadAQ+0ZpNagy3bX6W043HPFUo7vy76Ryc5PFCQJ2JdR1Py7EFE3T5GO K0bSSWXTl80BZD3zQkOTuhl5N9mgjQnJHelSRpQCJOnvSsStB88qxReYSAaxkune7DRnAxyaBl6b UVjhaPYGkPeuQvY3ueGfyyOoqo3JlozGTzpXCKcRj+KtRruK3AVjlsVQ7nOsyS3RR+AeelY8kkkJ lGRsU460wMo3m6APuxVG5vllxsP1zTSuVY5nWboWeCO9Yct2QAR1NWrJErcbeX2LVc/erEnuPNhw Gw3fNZlM5i91KBo9pO2Qe3WsGSdpRgUrEbDV+aNhuIPrWU0ptUIDZI70uoLc5ie6NxMx6n1rnpWR d7/xjrWi2FbU5u6uPPBIUj8KyZJio5Y5pvYXUikm3x5ByaxXuWEh3jp6VlbU0vYxZ7kNJ12j2rn5 3EczbR1NXFWJbZj30nmABuuaoyRmL+LCmokNOyMiWSKBsL8xPfFc9fSlXwKWormRPLLGAo6HvVXO Ztv50nqWtjJ1CTy2+QcVmT3PmxfImGHpQ2JnOt5vm7i+wDtmoriYsuM0rGa0Zk5D5XP4GsLUJ1Ul M1Q2ytb4FuQx61kudrFQeKhi3KUsA+8O1ZdwplBOelZsLW1MznPX61XkkAfnpVJBcoyzrLMQBgD2 qodxiJBwAemabQ4FKVsLvBx9Ky55hMm7uKg0sYtxKWI96sFgbbywKTJINxKBQ2MCsuV2YlAfxpEo qs58pk6tWE0hgQj+I04bjlsJbOYUO/kmo5WQx/IMfhRLcS2IbY7EKsu4VnzyAR7UGPwqdhMgtwUu Bjnjmp7pljBONxJ9KQ0UZ2VAoxz34rj/ABLO0dzaIPuNKv8AOs5ItbH9S37EE2/4GaWgHCRKP51+ 4nwFbd4PiYjkKKqnoOLPXrgl33g4B7VCPuEZwa6lsNsr+WUXk5FZ0qNL04FMRSdRFwR+leSfHABv h3qHYCBv/QTRF+8N7H8SvxjRY/i5qcrdXkbA/KvMJ9PZV4I4rHGv3h7IZHaMGAZs1bWzWKQgH5q4 kZ9RzjcvuK2tLXcoweaNhnd6Nl5OR8w717n4VJZF55pRYz27Q2YEZ5PavWtEl+YFjWhV2ekRECBc 8elWEcNgHqO9aRdjNrUBtL5xg1SniDSFjwRVlLQzd6zyYAxj2pW8tnKyD6VS1EfK/meX26U+Payb +grjejLTuSwx4XcOlRqv78kHBqg6kwTc2P46SVzGwGOabV0V0LnmfICKXduGCvXvSirGbuxIWCzE Y5FW/tAxjGGq2VFl1FGBmmiMNkvye1Jq4SdmWrb5gd3bpVpNrAMwyfpTSshJ3ZNJlcE/N6VdtRv5 zg1G5fQ9S+AitB8Vrcno0gwPxFf0baZKDpEKY528mu2mrQM2alpI0SEZ59avq+UyRyaEgtqKsxA2 4zU8a7PmHah2TKJre5VrnkdAe1fFfxpZU8chsbgVaomwkfyo/wDBYi7+z+P9MAXduzgY/wBsV+TZ iMcnB5PUVgtzPc5fVzJp+v6Uy8AXsMjkdlDjNf2uaN8Tvh1+0H+wlovhLU9StdgtU3o8i9V3ep96 6qfwCqv3eU/j6/bU+Evh34Z/FOPT/B0ySR2u5leHGFKkEdM1+9v/AATM/be0z4zfsxXngzxxcIH0 23+zmS6bHm/ITk5wO/asVG7I+GnY8Y/4J/ftM+D/ANkz9ujxDo+nIkWjahJKyPGvyr8iqOQcd6/T 
2JonPzAHntWrat5UWT8x75FVF30Bo1La8VsDOK01OHMm7pTsZoV79ZiCR09quw3ext6jimolXsTr qbXMnHHrW1AIonWQH5vpS2Ze6LMsgncvjrV2yk8qPBbJptlRSsWPO8tgRzntVqC48o896a1Rna5L NeO0yhRUrSsZBzgjrTSFLQ2becKMk1Ml427CjAoGi7Hd8YHXvVh75XUDGKiS1KTJI7iNOc1ftpzO cqetHkNkt2v2M5zuNZ9vdecxfGCPaq2BFg6h9oIGMEdTigJGt4HduMcUibjpbzc+eq+lTtIDEGU4 qkwsQLeGFgQeabeX4j2sPvHrQ0FrFSW787kHBrLe4d3+bkDvSaF1LL3CyR8dRVJrh4SB1zVxQNlG W7JvF5xjrUlzf73KqNo+lPYT1ZRtLza7KTTbu52RKUPO4ZqZeRUSfUNRVkRSM4Haucl3yTqwJVfT FZXsOxOGSaRiWxis2NwjuQcg9KdxNGa908MhJ5pBc+Zx2NC0AxJ7hoZyM8VSlkIIbPy0N3FsZ8t6 BOeeBWfLePvyDxTS0KuZrzszkk9KhiuTNIVzgUNWBu5n3s5VuTkCs2WbeM46VLFsZrzrJMMcGiS5 2571LQ73ZB9pUEZOTVadtpJ4xU21GYFxIrybc4x6Vl+VulJBq7WJ6mZKxSZlBOO9NeYxJgdTTepD 0ZnhkiDFhlqWCZJY2Vh83aoehSZkz8KUJwe1Z02fICselCdwZTgCBDiq0r7UyBkULQZltM0eWHyj 0qi82zLclfTFJlLYSCRTH5o6emKbcTiNS2MA1C3E9Tn5G3yZXrTHkL845HWtEQ9Sq9z83TIqSQhw CMDFZyGjOllAfBGT61A8n2cHPzZqeg0VS6yLg9azJJGWcKx+ShDtcbPIXnCD7tMnKWzgAc/SkhFa SQLgCqd1IkpAUZx14obEVZ5Y5Bsx8w9qzPut833qhAMZMy/NxVNlw55+Wq3GRPMLdSoGQayJtRaD CEblNNhuWFQXEY42ntXMeJZPJtEDHP7xR+tZT2LWh/UB/wAE2YPsnwIsQ/8AFCp/Q1+5fwRiU+Gk x0wKKEieW+p7wqCOFeOo61DuwxOK69x7FKVhJnB49Kr+WIRkdPSqWgtSB2zmvNviyNvge+z0MDY/ I1rR+IcT+J79o4FfjLfwkZZJTz+VeYSjMxk7HoKwx694qS1KUpJkBIwParm3EZZa83QIpjbdeAWG TWvbhpZgCcUmNndabbspUk5Fe1+HZyyIMYAqo6iPd/D7BFAxke9eraVCVIIOFNaITR6FYSoqYx+l XJY9zYB+lap6EtEHl+WmCfmqKNWDnJy1UJEUoFvJuPJ71SlkW4ycc0DPlK2k25BFOlQfLjmsEa2J JJQHCquD7CrXmERgYyfWpZLWo+MjbgjFVw+6UqtUhsk8syOAT0qeNQDlhkUgvcuRNiQAH5fSrHni BsBc0xpE6tuUjoDTtvl7QDkVZG5o+cInUFetWgpj4PeomOKsNDFH2gfjV+WRYLXJGc8U4q4k7M+j /wBkFMfE6CP6/wBK/fOSP5I8HAx0rujojSWxYiyOD1+lXEYxdBU7sykPSRgCcc5qdZfmAI5IqbXG kTwOP3nGRXwp8RJ93ju7UDBw3NKS0Bux/Hb/AMFTrkzftFSBeNjODnvyK+CTOGgLAdetcsdGTe5Q wHwF79c1Ps8s7s/KKtlJaDEKy5YcEUfavs8YYjNK4NFy2Quu9j16VYlkVVCHk+tPcSVmNWcOu0DJ HtVqSYSooUbSOuKaKaJWuj5WSmVHFOtV3KG7npRYzZZYeVx/F61NbMIyN3OaoEy7cybtoQ4x2qVr pQnK4OKQEEVzhSAcZqZWbZtAqSkSwRbPmJ59KuxEndzx2pdR7oeuRgMelODZfPp3q0RsTICXJPIq 
WMugPORSbCxOs+4YIxipYrjB56egpDRat5vKQgHgmr/m/d2jjvVoYl3dqmNoOfpTY7tblcMeR7U9 mJlkP5agg1fSeQgE/d9qZNiZsGQMBUrbicn8Klgi5DOUTJPNSi4WVBuGPSjmsOxPD8pAbpV6Ocwv gD5e9VzAiRvmlwPlWp7a7Khg3bpTbBouQ3QjAOO9axvQpBK0rC3I3uVlcEDHtV9p1bBQYp3sNGos 4S3BX8aaLlpcBV696NDTSw9W2z7SORWpGib/AJl5prRGV7ksvBBA+X0FWVnCNkjg1UCraF23+aUE nBrTinEdwQetHUEKbsxzH0NaELeWhkByp7VbBPU0k2mIN1FTRXIbIA246GoZV7GhbSlPv5NakVz9 oyD90U0S+5AxkVyR0rStH+QEnBrWJD3N62ukK5A5retL5fusOaznuWmMvJxbrhedxp9pceUNud1N EvVm1BKSoPAH1rTF9gGMcYpxVmNvoNtF+0/c4bNdAS0IEedx7mqbsSRfbTDJsI4rQiuwW2jOKuMk Frl60ZyTgYXNaMt6sSAAc/Ss3dspbE9tfEEDOSa2LGUTM4J+Yd6ckCZoiZFTcQOKrw33nSkgcU4h saUNywkIYYHapJJAycNVXsD1HQT4IAOeOavW964cqelKTsRezNGKTy3POc1ZZfKG4nNTuaRFgmW6 XGMEVpwXaQ5VWwR2o3Y5EB1XDHPzCrNpqQlckLhackIpPe5uW7AVZjnFwC2enajoLoLFL5zn27VV l1Mgsg4IpoExIrppbTeeHyKnF4uwBvvU+o2zJkdmnyrfL9akkuiAFzx3p7kXsRCQody1We5aSfcT jFTcd9QjlV3YkA+9QXN35WcCjmGkZJdnyF/GoZpnSIIoy1Fw2GR3UiPiROe9XX1FZX2gYFS0NMxZ 3WGRlJznvWd9pySi8Ad6VxlW5ufOiwD071myXBigCjOT3p7iZneYVcRsdzepqteS4JjzkikkxGbt KJk81VuWIYBTim3ZjtdGdd3LQKFxlj3rMW78ubIHzYpuQrWKsZaaR2Y8Z6VFcSuicdKkHqcszvLP uJwtbEUqMnPapkEXZmK5bziV4WoDOZ1ZS2CKQ2zAKuJG+bIFVmuiZPlPFaNmdyJ2VcsfvE1VnuNq gAZOKm47XMrLMCPU0k7fZ2TB6DmlLULEJlW4Ys3bpWJL8sjbjle1ZLRlLUzwu0nHNNnlJRVAwat7 BazK7/P16Cqf2lYHZmUMvpU3K6GN9raRiUXaM9Kj8zc5VzxTSC2gxVVyQBtI71SmIQlC+4mlclIh BEa4YVQkkHmFe1TuFrGfPneNp4FI8uBuYfKO1FgKPmpPJuXgD2qG7HmEYHNFrCRVMvmMFAwR3qK8 kHBxmptcZTkYXEeFO01mifysKOTQ0BnLP/pRbHzH1pJdwcsTuaosFimVeT5i3WoG3Jlc5qkDK/7x 4sGm+QPJwwyfWi4xqy+VBjOWrjfFkZ+zxMXP+tU/rWcij+o7/gnPILr4C2LZ+7Cv8jX7ofARfM8G xs3BwKmjoylse7STCWFVHIFUpXCrxXdHYhlR16EVJkMcHgU2hIosoZye1eb/ABXXPga+LHIELY/I 1pS+INj+J/8AaVVrX4y6jPnJeY4H1xXl15G6bGB6VjjneRVyE7s4cZBqKNdu4Bq82SZRYgjLDhua 0IWLyKp4Yd6SJbPRNNbKop4avYNBD/IhP41SVho970Q5hRMdO9esaXviC5OVrVLQTep39kPNhJA5 B9K02BdVI4OOauOpLKjTeWTu5qJkw24HrVx2BESj5i3X61Tl4c8dah3KsfKsgMTq5HB6VOdxl4GB WSVixGQhyxHNJA2UYZ70mDRI8vyAYwKfGAi5x+NCYtyXy8SoxPymreRFcEk5Si9x2GqRLcZT5R9K 
swNtmfPI96a3G9i2q4BOfwp4XzEDHgCtJERL8OJMHFWC3mYzkVJY9ZPL7ZNXmjEloSenWqg7OxDj 1Poz9jSRl+KkTsOMHA/Kv3+tpBtQsMgCux/CU3cme4W4c7RtH0qd38uNe4rNEdAgk80HAomw0oI9 OeKpIaJ7GUbZlA4HqK+HviAmfGN9L1A3D9KUtiZK7P47f+CpEaj9oaUsfvs5H5ivz/eRYrYpjp3r l6iirIgt1FwvDbcVIZgSUpMsdFDhwpGKkliCOSRlfSgCCPcXyD8vpirMqYwR3qoie5dgxbJtxknv UG7bKB978KBl4ocf7PpU7IXUFRtxTbsRbUkBcxZxx3zVyORTGCBzSHYnEflOJGOc9qVlDkN1pkkv 2ZXmVug9KtbSs5fPHpQMW3AkDHvmnspQDB/SpkXEmVPOT5jmljVlG3qtUtCXuWYUdvlzVmFcOUJx SRVtCWSLYuc8VNayRgjjJx6VViUrD5IcsSDgZ6VZEhiiBPSmgY9ZBInA60RQKmQOtKSJv0JkkR0E ZyG9cVreYYolVfmpxd9wehZZ8qDjBHYVJBcGd8jgDtTkhommhzIDnI9KnDpOcIMMKztcNiwtq5+f PSrkTGXaW4HpWiELJPidlb7o6YqRcMqk8GnyjuWp/mYMi/L3q6rNcxAr1qnsSiWKDyyC3WryqR8w /Kp6FWLpfMCn17CtC0n2JjGMUJD6DhMzy8D5vWrtuWlZgx5FX0ISNGxYo20/NTdv+kMR1BpR0NFq ascm2PJHz083IchgMmqsK9jRj+RF+XcT3p/mENnGMdqd9BLc0o7ncoOOPSr0MQmfrgVIpGwriKDP WrGnyIm7uPSmtwvoTI4mR+elLbXKtGFz9eK0i7EM0NuJlIb5a3LWXybg7uQelTJDTNK424JzxUVt cIsLFAc+pFFy7Cw3UjFe+K34pTKCDw1XfQmW5r2Uv2ZOTj3qa2vCzE5yPWkI0Nqznk81YjlWMbO4 pX1LS0NGPUisBUct71TiuZCxLfzrRCsy9buT827BrSiu2tnwB97vQ9yS5cTsyBAeafbztEQScYoV x7m1HcvK4bqtWg+7PapuN7FyEoI9wOGqb7VuwAOTQ9SbEvnNGw+bJq+bgsuGPXvQlYq46zlMEjDO cd6hjuNszuxySaa0ZDbuWhMh5NTrfrApAFU9S7kCXG/vyaYZDbzkhuvalbUlltbponD5571HqM53 CVe9F9bAloUVvzEhB70xrtjHnr71oldCd0QpOwJ+arCzbYuTzUSbWw0rk8d/5r7fugVm304Vio5N Qg6jIJsWoB4PrUn2hZIT6ii4XMy2uGRyTU4k8v5j1pp6DIp77M3PU96yri8FqjKRknvS3BGYkmV5 fdWW05ZnGcL607ajbMx7r7KigtUc2oF2AHIq7WQbkF1e4QMV+b1FUFuwjhjyalPoNK6uNvLkKh5+ Y1jxXZVf3h6UNXKhoM8zz5Nw6DpmqFy2MkdfSs+oT1M8XnloARz9KqSzsWJJ4oZC7lBHDc9hWdfa klsm7b37Cm1oRu7lOS/aWMEDAqr5uY89DSWhW5XJPUnBrJuI8Sqw4ocri5R93IsqAdKpFQzhB6dq V7FtGXKzI7Rg496zHRiSjMWA700Q30KqSbUZByc1myBnYjOKjqWlZEKl48lT0qvPK02MdabDceG2 wNg5YVgkZkLZyPSpW5WxURWWckn5TTWTMpz+FUIZI3z7M4NZl0ot3XPLVm1qSmQ3dxjAxmqEgMaD nINC0eoMgkbygOcnvUDybXJblewqnuSUG2tnadpz0qKWb7gJ4qWCHylIuB371jGRgzLnNJAyvhW4 6GqyxCJzxkUpFIhlVTyOPSoZCIYsE5Y9am+hWxT3BEJHOfas5JhCGVhuJ9qBMTMmwH+Gq8bYYjdn 
2pbiuV5DvbjgiuU8Sv5kEasM/vV/nUT0RaP6jv8AgnEBF8B7BO/kr/I1+6fwKTf4ZRT/AHRUUtWM 9rtx5ZKjtVeRCEJAzXoLQVroijDeQGqPeGbJOMVW5DdiBZN2cDivPfiid/ga9U/eMDD9DWlLcR/E 7+1KuPjJfwk7ZEnP9K81lYyR5B5rnxytI0RVQsww3Woktirkk/hXBugY9ZAGwOPpWjAA5Ug4FGwW PQdNnG9ABn3xXr2gs80iYHApoWx75oMp3quMCvXtJbc21j9KfN0HFXPQNNRkGetW5ZS3tXTFaEvc jbbGhDck0kUexODRsPYoGTa5xxTGI35bkGp6DZ8n53J83OKeZMBTnn1rC+hZJNKzRhcc+tSQxKiB j1qHqVYVlJk5+72qZm8oAEZFVFaEoX7SrDaRxTJOvHSkuwXLKy/ZohletaEeJGVsY49KolqxPGon 3YGAD6VfDpHEFIzTbGtENX92mwdT3rRVMRqpOcdaluwRuKM5IA+WrszqbNQOCBVQepo1c+mv2MYU l+JiKT8wB/pX7zKfJ8sP0IruXwkONkaAmRFzsGKVr6Mqfk/MVFtSENU/ICDjNXUTCYqnoPYniwyO oHOOtfDnj1d3jG+jXr82fyqJaoUn1P46P+CooST9oeZTwUdhn8RXwAYcMxPzIa5hRdyOyg3OxAwK klVVIIHzfSmUI96FcDFTBvNfZSAtwhI5drGlcjz92MoKoTLLusw3RjA75p0ar5e8Ci4y/HMowMZz UrENweMUEN2YwHzHxnCjqBUuxFYHGF7UDHKxVueRVryN3IOKEJ6kyqqgAnLetWZCqIAfmp2EVolX ccZA+lXUbAANK1yuaxIgCHKD8KsKwlwRwfTFVbQW7LPl7DkGozbOz7x0qLWNEy5HF03D5asJEiH5 Rg1otjN7kTWhGfmqUcIEbmmmBPHtXA71Z2ApnoaQrFtEjMY3Jz64qxE3mEBeMdKWzAteaImJI3Ma ki2KmcYanuO9ixExUgkUsYCzExjk9aFoD1NaNTsxnnvmpZWXCrVIREqAAoTk+9asEaW8I34Y1bdx A0hyVUYX0q5aN5CYAxSewIsidu65H0q/ESMECo20KY5YnVzTgjjAz1qkxrY1IoxA65PFW2mVXAAy TRe4WNKP938wFRbt0pZe9F7CRfiUrGSTk/Sp4ZUUhNuPXirTuJq5sBj5IGcAdBSPIR7k0gLtqpji PHJp8F22/aOooYb6G2pbyd2cmpLVgTwOT1pRYWJwypIUVeT1OKQrtlwgx61oncmxojMqAg9K1LS+ 2gbxk1pJaEp2ZrLd7nCjkGtsyRiEKoANZdTRFNUMRzWlbyMqMT1qiR0V4Sm1hurUSf7NGFxwaafQ LdS5aakqzY6/hV5pw/zAc5pJal3saUVwjrgjBqzAAYSoOTVplLVFOGUx3HzHkdq6CO83LyOe1Nmf UntjhxuPJp99KFYKDxST1GaFjqAtkUg5zV+LUE3szHH4VEldgPW/WQ/Lx+FOW9dCQvLVcUJlqO72 DB5bvTpdT2xbTVBFEun6z5OVYZLdDipPtyl2BFK12KSsx0E3nREk7QDT/tw4VufShaspIfbzKJSM 1einV1II+YdDTlsSxLefc7Bz9KikdmwucKKzV73Ki1axXu7jlSq8U/7aqYGP0rRSsJ6lae7DuCvQ egqnPeM047KPSlbmGhLi88yRQpxUYuPKnCsd59TRboCWpHNqm59gXip4bsOMYwaTQpWTI5NSWFyg GT61TluipLE5HpQkPcx7q5a4KkHaopLqQz7SGwoHNO1hIyJLkKSqnHNVrm9REAHPrxTSG0Zski3L gN90VCZ/s9xgD5afQEJLe7rgcZU02aJUk68VnZp3Li7aGRdXCxyEYz71TZjJHgVTdkJasjjvRC3I 
5HtVK4nLy+Zng9qzCehQub0eWVCcg1mS33mRggc1SRi2Z8Vy0VxhuhqtqDAS5IyO3FS3qUo6FLzg Izu5PpSSyhUBxih3C5Cn79utUbwkAjoQaz2ZcdSmrbl+Y4qCGcRSMx5NOT0KM+6lEshPasuSXYpx 81WtjF7me5woKthj1pmxuh5z3qNjZbEV0+2MBeCODWazbCAO9K4khI2LSFcYWsyQojEDgZpXsxsZ cOsKAAbqqLIJUJ6EUOQmURsDkn73riqcz7gSRuPai+pFinEjSjDHH41HLtLbeu2pl3BFTem8ttz+ FZU8yiTB6dhilzajaKtzGGUEd6YQFiwRz2qm9BJEaRuIzn5vSqKrjg8fhWaZbSsK4VWyOtUnk8yT a3yj1p3JQyXgA/eUVXmg3HzAOfSpGyskQ8ktnnPSqLx7iTgYqr6AVyAYyA3zVj20ixStvGT9KQrD LrKlWU8Vla9IDHCmOrryfrUSWhUT+oz/AIJvqo+BlmXXG2EY49jX7mfAZhL4Xjc/KMClSRTZ7LdO I5/lNSeZvXniux7Ep6lZnwm0fdqNIlbqOapbES3GuFYNxivNvicpXwVeH0hY/oaqn8RZ/Ev+1jMF +NuoXJGS9wRjHTOK8vmgZH4bgVnjtykV1JByx4qVpFfa2eRXnrYXUYzf6SGA4rRtCfNIxxU3uD0O 90g7SvGa9y8Olxs4xVLQR7foxLOoP4V63piCJVBOTVRQ0egaZJtTrg1feFpWz6GumDshPcW4AbDY 5FY7+cJg6fc7ikyrXLFz5RjBH48VWKhYwaHsTa58nozbCpHBqWOIIgU1z2LiTA7eG59KcnzoQTSR TehDIx24zwKdADL8xOcVXQlE8kClhnjNWUjAXYDn0NSCRKrK6BZByOlX1kA4AphuWYpNnyhevpUx TK46EUA1oWraEMOenrUigRt7VIJ6Fhsq+T901OYg8LH0FaQVgjI+nP2IoxN8Tw5PY/yFfvVLGSsS kdq7b2QOV9Cxt8thk095F5+XikgSGI64KgfTipYpNqbSctQJmnaL/o0hPYV8NeNo/N8ZXsyn5QGH T2qJOxL1P45f+Coyt/w0pKCMq7uf1FfCjBYpSnUCuXdkxViRWU5ZRgelL5AlHTGap6Gi2MyW3WI4 xu/CrFpA08ZK8MKLC2KLReY5JySPati1bFqQB+dMkrNKxI7D0FKbllcKF+U98VO5T2L0Z+YbeSKs IwluNj8H1xRsxbk5BtpMDkU8TeY4DjFX0AsupjGQcj0pkc52cikIv20XnRkDr61YhjC4DHcwpiSH favKlI28VG85Tkjg0ICzFd7QMLzWlIysAVXD/SncaHxweYBuOCKvxXCqTGTz60DY6SVYLYK3zNUZ nARWH4ii4JCPOWIYdaasqtyw+b6UrjsTFlVBxk1oxAiLcRxTENW986LaBgetPScwKBjihkmjBeKz gsnHrirLzBXwOhprYHqXLeQlyO1XVkWA4UYY9aHohBIwkGc7SPSnQymUcj5hSi9B9SxEodsEfN61 Y5DjJyRVJ9QLscm2U5HNXkuQx2kcetPmB6E32tojgD5akjvjKAFGCKGgTNNbpnAGORSqpkw278KO gdTQWdRhCM0qsFnHcUkWjYN1lAo/OkicQ85oZOzNW3uRHH0zUIn2XRJXJqloNF5Ln7QpY/Lg1orI ixI3XNaCJ0uN5wpwvepFulc7VXaR1PrSlsLqa0EmE68VMt6oYKBtNRFFWNQShyFHX1ppysrAcmrW jFa5PDlYSM4NXbceUgbG6tea5k46mvCyR/OR1qVXMr5VsCpiVeyLxnx0OTVi1kZLNyz7ueKdtQ6F ixuxbjc3ORV2LUFkQjGaGiUx8Em0FwOK1IrnMIGeSafQrcsq+1gc/WrMd7sbIpGqTsWkm8yYZ4Jr 
TuLkQwkDkj0p3MxIL5vLVz071aecNLkn5fpSvqWloSLKdwK9PSrctws5CMACKpLUT0Qsl55KBQOa 1LGfycNJ+VaWRne44X32m/wg+XvWpeSKoAxkioloVFkaarCyCPYA397FJPKITtPfvREHqx0F5iIo Rx2p8cijDZyB2prQ02RYgn3ybsD2q3dXq25A/iNTe5DVyNb5XjJxtIpftxliLFulBKViompfuSGF W9PvFSJjIobPQGkiluVkn8tzlflpzTBx0q1oS730KMrKmN3Bqm9xi4OBSvqUkwSYJliaqNM0zllO APemxNEc0pCLIOTTbi6zEGPFRexS2GLdLtGfmzVK7cIhwc0m22CVtWc5JN0HQ1HLcrFIFI+atFsJ u4rYk5HBqN2L8gdKV9RdChNOA47Gq9xcNBgk5FV0FexC0m6McZzVKWcFiqnaRxWOrNI9yrcHbbBt wzmsu5vFAUDtUx3FN6FO8uTcoFUbfpWN9raFgu3PvWqehmkI0hlbd6Uya6WWMccis+pfQylkG8se Paq817v69O1UZ2bZFFceU2c81FNKzMTxis5LU2SsjPuGMuMHAFZd4+1cryaVhMgW5AQKRyaq3Eog mB/OrjsQ9zNknHnlsdaFnd2xGenWpZqtjNub0yuAfxpkzGNRtGc1OwitvMKlyevaqjBXQ56dqGLU otd+XHtxmqzz5YADC1IFK5dlkVV6HvUNzJ5QwOcdaEBm+YZ+R/Km4eJWJ5NUyUiNZyY8YANZiW5M 53fhWTRbIplPmbDx6U/CgHNXHUh6FVI5Pvg/KKgys9wd1Fh3K7RfOeOlULoBWB6mlYOpBlk5PT0q NZXfJ6VDKKEsqyNtztrOkcmQoDxRsONis0AQ53YNR7VWP5hlqAFniCW4b9K5jXZAbWKY8ASKMfjU yegrWP6nv+CcLCf4E2AYfehUg/ga/cD4Ix+T4cjU8jAoog9T2iUrJKyhelQYVgc8sK7ehK0ZTaXd lelSbj5f0oQFVpsjBGK4P4oSbvBV4i94WH6GqgveHHU/iY/arj8344ahaqMvFOcn6Yry+a62yqdu c1ljlqaWsQTYDcj8KasW8ZAwK82+guosA25Bq9A+zC5oiEj0XRX2FFXp617X4dyrKznI7VoB7do2 J2DZ24r13SXBRe+O9aIVtDvLIm5wAMY71p7yjBOpovZE21GGUJKQfypHk2ocVe6uO/K7FEx7l5HF UbpmDgDpTS0KR8qJEwYh2wKkjVlJBOR2rGI1oPUiRuDyKDGSvy/jRawNj489CM1NAwgLA96RQkgL 4HapVfy8e1RIRKu45NaFtIGTGPmq1sGzJY53jlOBmtC2fzUO7rSTBlqBWAPPA7U3fuQlRjBo2Ei+ 5BWM5yavsoSNj3Iq0NI+oP2INg+JojA5AY5x7Cv3geViiNjjHFdjXuolq2o1EJJYng1YXGcHpSRU difeFTAXNPjgiXB3ZJ7YpEsvwRnZIuflxXxt4wiU+Lb2FR2bkfSs5C6H8YP/AAVKW+/4anmjjtXk iV3G4IT3FfEr6fdRsR9kcsf9g1z8yixLUGsbmFMm2fn/AGDShLhT89u4Uf7Boc0y0iGRHxuEDn/g Bp0byK4IhdQevyGnGYNILpfswBWGRmbr+7NLGoVMmKTJ9IzRKokLl0ImQBRmOQc/88zV7zVRApic cf3DUqaE1cLZfKViqPj3SprgKIkdY33/AO4aHNBYklkWRVYq4cDpsqr9pZmzJE4Pb5DVqaaFaxej u043o4/4AamW4RXyEfB/2KXOgWrHwXKQMxO8Z9Eqf7TDA4JL8/7FNSLcSK4u4YrgEb2P+5Vv7THM nz7ge3y0+YjlFa5gjQM5bj/Zq9FqMUkYILD6ilzodiwl3GpOSc+uKigvYWctkkZ9KdxWNB9Rh4zk 
+nFQpdwx5LE7j2xTurFbDoL6JpBGxIP0qy00Kucnd6UrhYZHcJGOTn8Kvi/jlhA3ke2KakLlEDxx EfPx7VqRzxTIF3AfWruiWXliHl8uNo96ZI8fkBlcE/WpbshxiyW1vEmTarAY75rSTy1+Z5FPpyKH K6HylaS+iRtpPStW1kQ/PvUA9OaUWgsStIYud6lf96p1KYGZF596tMz1TJshTgyKcf7VXIb+KXCb gD6k0aWHuWdiuhXzl4/2hUsEqww8SJu9dwqr6DSLMdyo5Mq/99CrdnIkk5/erj/eFRcqxrGKNQW8 1T/wIVDbMiEhpFJ/3hTi0KzuWUuUVyWdcf7wqxEqlt4lUqe24UOSDldy6jeUoPmLtP8AtVYkkC7c Opz3zVXVgs0XU2FSocZ+tWIwUiALDb7mqWw2aMCBgckD6GmOPPwFfaV689aXMhcvU0YVJIKvx3Ga 0J1WZgQwB+tNNXHYlj+Vx+8GfrWzFbvGmd4LfWqYkS5+0uFB2sOtaNt8rld4P41KkNq5eaFmTIbO Ku2HlSk5YKw61rFmbRHJMvmlFbmljLQoVc4yaG0HKyyVaVlBHyite3VQRswfXFHNoTyu5akYq21T n1ArWtogU5NK5aTIWnHmFU5Hc1cilSDqck+tDZaZNLLkAg81aiZ9mD83vQhM0IZPIAVj1q4zA8MM 470r6ivYtEMtvlarKWiUPISRWq2CTujTjjLsjn7uKneUSSNz8op82tibEVpP5LkKOfpUkt28zBSK JMEizGwbPH3ehqcXH2hcN94UQ2GtxVZ3jPtRbuSCD1q3awF+EtHDuzUe5Vky53GsupRHdv5yAIdn POKWVzHAoXnFHUB0tx5iLhMUspMdsHXg1V7EPRj45ndVYnI9KZLeGBielG5UdzL/ALQa6B3DjPWp nl2oO5Hek1qWtyjJccDjrVGWdrWfYSQDVIiT1HeezA7T8tMklM0QVjyKlie5nyXSxyqgHI9qju5W SPGcipirO4Sd0Y/mmSQFl5FNniLuG6mr5rCtoMDlZsO3Iqfc86nHAFZ31KS0M2SRQQXGfTio7hs8 kcelNSIkiolz5Zwaypz5k5KnqaNgjsYt+zO2Nx49KpTKUgBJ5poVncSW5MdqGPJ9qyVY3XIJAotY ZIQbZS2cg9qqsSyccGs76hYoyqzdeoqJkYcmk2Uik67W681UuZm6J1FLdXLRRaJ5ogxJB9KpTllX aODTTCW5XjUou9utMuX81RgYNF7MVjHnZhdKpHWldWt5yM8e1JiKk6qGz61CJWwUFQ2Oxl3LMjcD PtimRSMecZ9qfQLFUgtuRutU1t3XgnpSEIys8ZK8EVj+U7ZI79aI6AyeKErHx170yZvLQDOSaJMl OxnG2L85INMWPP3jyKhXLuQTne+etPW3SWAuOvpVbEsr3LZtQF/SsdYGaMlTg96bdkCB3LRqAcHv Q0K5B6kUkx2KM8glVjjGKx1Mkozn5RUvcZlOw3sG61HCTGCSM0PYCqyGZy2eaf5fmJjO0jrSQmOJ JQY+auT8SWbfZ05/5aKSPxqWtCr30P6kP+Cad19r+AdrhdvlwqBx7Gv3F+BU5bwlGzdcCiihHsxc u+QcCnFsKSOortFbUryTqygj73eqsjsVODxVLQnbcg8wOmD1rhPicrL4Ju2HAELH9DVwXvXLWh/F D+0/clfjpq1wo+/O364ry2NiIwGH3axx2rKuVJdzSgmtB5ViiC9687lJvqUS3mMBjFattGpkGTkj 2qLWKvc7nSHDOqjivbfDuXAHYVV7Ae5eHl3YLV65o0RVOTla1Wor9D0XT2LRALwKuNG0cgIOSadg tqMlVUlAYfjULRC3kLZ3A9BV30Fa5A7nJIPHpVPBU5A5NUvhEtz5Znw2Ceopwl81CCMGudaGjI7X 
azkkYAqfzdxPljC0NitcRchcKMn1oEgYYxQkGxKvywlV61YgZQACM4qXqVEmyJHI/hFPScRnOKSB 7lghhIpz1q/v3yAAbSPahK7F1NOIOMg9Ka0ZHA6U+oWsSxnyyFxzWjNOPKJx0FbQRVz6p/YckE3x KEhHBVu3sK/dfzi0IQjGOldb2RIQNuWrKyEAhh+lIXQUTCIDvmnvIoIK/wAqSQ7aGzZyEWcxPUCv jrxPOtr4nubhgTnIxj1qJE+R8JfEn9lrwp8V/EFxqWp6Yk07vuLPHnmvLJP2GvBEM2/+zIm9vKrh qJtj5bFW9/Yb8DTjP9lRL7eVWc37BvgmSIEaZHn/AK5U4QZaRQ/4YH8DW4I/s2Ms3/TL/wCvVRP+ CfHgZvm/s+PaO3lf/Xq4wsZtj5P+CfPgi5wyafEMesf/ANeiL/gnh4HGWFjFn3jH+NQ6bky72RGv /BOvwRKw3WMP/fsf402f/gmz4Lmbd9mjHoPLFJ07EpdSGP8A4JreDrVPmt43B7GMVFcf8E2vBshC Lbouf+mYpezYMrj/AIJmeDlkH7pCR/0zFLd/8EzfCV1yqImPRBWkaegmZ8f/AATE8ISNj5cj/pmt Q/8ADsPwo9w3IAH/AEzWs+R3BaFC4/4JjeFFblhz/sLVdv8AgmB4UnbDuOOn7taqKZV9CM/8EtvC 23JlGf8AcWqV1/wSz8NyOCs//ji0cruJWKn/AA6y8NyNjzskf9M1psv/AASx8OyqB5+3H+wtLldy mio//BLDQSdouyB/uLS/8OrNARdouT/3wtaWaEtxr/8ABLDREKj7STjp8i1C3/BLDRSxJuSWJ4+R anW42tRh/wCCVukRvuNwSf8AdWs9v+CV+lCUlbpjnqNq07MSIj/wSw09XJW4OP8AdWpD/wAEstPm xif/AMdWkk0NkEf/AASx0/z8faTgf7K0jf8ABLSyLv8A6Qfb5Vq3clojb/gl/AbbYLjGPZapL/wS 7g2YFwR9AtS7lpaDF/4JfQoCBcEfgtDf8Eu4zGD9oI/Bal3Qm0hV/wCCX6quGuM/gtMX/gmNsGBP kH2WhXEndkkn/BMTbGAtwfyWqw/4Jjsz4abAH+7VpsckhP8Ah2S7E+XP09dtU5P+CaF3OxjWQfXK 0nJmdhZP+CZd5HEqpJk9+VqG3/4JoXiTbZHynrlaaky1sEv/AATQvxPhXHljocrUH/DtrU1z5ZB/ 4EKhtod7ssR/8E4NYZVJcLjtuXmrEP8AwTn1Rp+i8f7Qqk2Mgn/4Jy6tHOTwyntuFPi/4J26vDzx z2DCm7shy1Lg/wCCeustwxBUdiwph/4J/wCusdigbB/tCkm0ytyWT9gDXEAK43f7wob9g3xFJEq7 +P8AeFac7tYloIf2HPEETYHUf7Qq1N+w74gfDEDP+8KlSZXQav7EviFMEDDd8MKG/Yo1+Y4Ubcde R/hTTdyW9B4/Yq1/gbcMO+RV+f8AY/8AEVsqBc7sdiP8K15myE9RYf2PvEqqWJz6/MP8KD+x74gj O7aeemDWd3c16F9f2UPEltDjaefQ/wD1qrD9kjxFECwQ5PfP/wBatOexiJF+yj4hlwzIQ46HNaA/ ZX8RFx56llHTmlztmm6J5P2YPEkqFVUhB7//AFqbD+zH4jsYfkRsnrz/APWoc7CSuwh/Zs8S29wG Ctz15/8ArVpTfs4+JNxVEYZ68/8A1qFMpqxSj/Zv8R2xO1Wx35/+tTv+GcNfBDMhyf8APpVKQrWL y/s5eIRj5WHvn/61W4/2ffEsTY2ts+v/ANarU7E9SWX9n3xC+AUY/wCfpUq/AjxJE4RYyR9f/rVn zO4mTR/AzxJDdFGVtg7Z/wDrU67+CuvshbymKj+H/IrVTYKxWi+Euv8AkgCJ/wAv/rVVPwa8QE5C 
P/n8Kjn1KsWU+FWvWzAfZ2LeuD/hST/DDXx/y7uD9D/hTc7ghg+GHiBIwfJk+mD/AIUxfhZ4gjfd 5TgH2P8AhVqdgSLTfDXX4oxiJz+B/wAKmg+H+tqjbrZ8nvg0Or0E1qVv+ED1xiMwSHHbaaqSfD7X TL5pt5MdhtP+FCncA/4QPXISd1u5DexrSh8H6zHAIvsbH0Yg0c9gsUpfB2t79ptXBHoDz+lE/hvW Y7Yr9jYk+oNJy1E0VbXwdrcowLd1PsDVOTwDr1wzK0cgx7H/AAo9pYpLQks/CGsWqFXtGI9SDTX8 LaxFZlxasQCOMGqc7ijuNTw1rF0i7bEr+BqreeDtYRTI9mxJ9QaftAkrMyLfwtrELgm2cj0wf8Km uPDGszLu+xNGPYGpdQlaszo/D2q72/0FmIPXaaZNomryOo+wkD6GpVQpxKt5ourBwPsBH/ATWfLp WsNIClkwUdRtNaRldA0VH8J6w87Tm3k68DaalXRNZYbjaOoHYKeahvUuK0KDaVq8kuTp7ED2NUb/ AEbWZ8bLJ1A9FP8AhTUiHEzn8P6yLbmzfd67TUA8P6zDCFFk5J6ttNU5KwJWKUnh3WI8qtizn1Km qraRq5iKNp7Fv91qjnsD3MpNH1ncT/Z7cdip/wAKbDo+qt839nMD6BWpuYrEF5pmrk7V01s/7rf4 VFFomsKgzp77v9xv8Kz5rCZSubDWD/zD246/K3+FY7WOssDtsXx/ut/hSk7lJaEMWm6vPEwksWB7 Ha1QwaVqlqMGyeRj3Kmq5rIIrUguU1WBth04n/gLVjz6Vq6v5rWLY/u7T/hWfPqW0VzY6q+CdPb/ AL5aqcseqpNtOmsV/wB1v8KFPUEtCjdxaojgDT2Yn/ZbioDY6qAcWbO3upp89yGkjOZdUQndp5P/ AAFqrrFqqNkacfbIalKRdroaf7UaQk6cc/7rVU8vVoz8unk59jT5hJGfMNWD4/s4k9+Gpy/2qzbT pzKuOu1qXNYHEqTf2lGuRYN/3y1UpJ9UERK6cff5Wo5ibWIYW1NbfP2Btx/2WqCU6g686ewcf7DV HMPkKT3GqFQG08gj/ZamSfbynNi+fZDTU9QcbIpSPqPGNPbPfKtSH+0ihxYsB/utVuaISuNjTUU4 +wNj1KtVFf7SnuWH9nsFH+w1S5XZXLYpsNU+1YOntt/3WqS7j1REDLYMV/3WqXNDsZKxapM/Fg+P TY1TfYtVK4XT22/7rUc1wsjObTtQ8/LWD8f7DU2Sz1FZCU09mB/2Go5wsinLpepoA32B93+41U7i x1U7f9BfHf5TUOY7ImSx1SNwosX29vkNcd40g1e209JhpsjkTIuAjHqfpQ5+6CWtz+qD/gmPaXEX wAsjdQ+TJJCp2kHjg1+3nwUgK+GthHIxV4fVCkevPmPaacJiyk4ruRBVjiBycYNSmLcMGi43EoSJ iTjnFcX8TGz4HvgRz5DAfka1pvUeqP4of2n7bd8YdQUHa6znP6V5NeffGDhTXPjdJDWpXMpU4x+l SKrYBbnPSvPbJaJjKInC45q9aLh8kZoew0eg6QqyKuBgivY/D77GQjpUJ6jZ7robfaCoQdK9Z0ZN 2Fz061tEEeiae4TjpV5YyHYlvpWkkNMryP5icjJFV1JZwGq0lYlkU6qr7c80y4byowB1NDfQWzPk aS43YIH1FW4mXbWJaeg6Jv3hTFNRvKVivSlYtE8TkJgcA0i4L4x09aq1iXuWI8bSSOB0qxBtZOeM 1D2GtGNRNrFc9atbVghIIy1RsVItW4Zow/cVpbtwBUc1pCJnezLAclMZ5qWNTsGO1DWpW5K3707j UrjMRH8O01rB6k7M+sf2Fz/xcArjgA4P4Cv3dVjLbqSOa6n8JbQQLlz6ipV+bOe1QiSFsYxjrTY1 
LEoOKsp7GhbymGFkznPU1z114K0++Yu8YLnrkVlJXMzLk+HGmRyAGBTn2qgfhPpLXBbyU+mKy5Bu WhVuPhHpM0vECrj/AGanT4S6V91YlB+lUo2CLuhv/CnNJXLGFGb3FNb4M6VMARCqn6U+XQh7jF+C GmLnCLn6VXb4L6YGI2Lz/s0QQ2NX4J6YflCL+VNPwS02JCdg49qTj7xUSZfgxp8qhygz/u02T4Ma dtzsUP8ASqUAaEX4L6cxB2jd9BTpfgjpzr8uB68Ck7ITRAvwK07OVAU/7oqYfAiwuAf4T6gCpURb lb/hn6wVgXO8e6ipW+AWmb8jGfoKOTUb2Krfs92cz7lkI9sCrX/DPFo+CJDjv8oo5SU7DP8Ahni0 WTcspH4Coz+z7aEndIR+ApchaZFJ+zla8MJifwFS/wDDOdvwfPbI9hVuKFF6if8ADPMMkmBO2PoK cf2cY0kB+0Nn6Cp5CmxW/Z4R3x9pbj2FB/Z1iiAMcxHqcCjkFexFH+zpulJ887foKen7OZRiBOcf hScSou4yX9nPH3Z2z9BTV/ZxlOCJjj8KOUljZf2cn6ecQfwqN/2cJWg4mP6U+Qd9BkX7Ob7PmlOf wpq/s5T3DlPOIUfSp5Cdx0H7OEyzlXmJA6dKVf2cZhIcTn9KrkGtCWL9m64BLm4P04qP/hm+4YZ8 48/Sp5LITeoqfs2XAOPOIH4Ui/s33UTHy5SPyqeQd7ImH7OF8nzGcnPuKqXf7N98seUnJb6imoCT uJB+zrqTR/PKfwIp6fs4ajGm5Jfl+oqnTTHexXn/AGddTaRW807fqKR/2e9TEnyv+opclgbFf9nf VE48zk/7QpsX7PWq+YB5n/jwocSOo5/2dtXLk+Zx/vCoYv2fdYViA/X/AGhS5S4skP7O2rrERv5P +0KcP2dtV8oASc/7wo5AcivJ+zvqzLy/zD/aFRQ/s/aztyX+g3Cp5dR30Jl/Z91mJ9+7IP8AtCm3 f7Pmrtgq/wBRuFXy2I3Iz8AtZnAAO3H+0KZH8AtYgY7vm+rU7E2sVv8AhQOsNIcHGe2adH8Adaji 2k7ueu6la7LT0JW+BWsx/Jjcf96kX4I6zEuGQH/gVDjclleT4LauCpEXT3pw+B2rszsy8HpzQo2K TI4vgZrCwnjOenNN/wCFI6zAQHj3DtzUtNlLuQS/BPWN5Xy/oc01fg3rEPyNHnPfNUo2QpO+pHL8 FtXixmPIPvR/wprVGwDCMe5pqLFcP+FN6qThouB706T4S6spAFuCPrVcruLcrf8ACptYib/j3yD7 0jfCXV1wRB096EhDP+FW6tO4ZbarEfwZ1q5yBbhR9aTWoluMb4L6zDkC2B981VX4L6wiEfZx+dDj c0TGy/B7WLGIO9rkdsVHH8K9WuV4s8H8aSTGQv8ACzVoxhrTn2Bo/wCFU6s6D/Q+PoaqxGzIB8Kd T34WzP5Gkb4VaomV+xZx7GpaLXmR2/wz1CMsDp+W+hqsPhXqrPtNmSPoaS0ERn4U6mCQbHI9MGiP 4T6k0YH2E/kavfUNhi/CnUY3Iaw3e+DVaT4Sag3zNYkgHpg1LepJYi+Gl7vJOmbTjj5TWe3wu1A5 LWJLfQ0WY4iS/Ca6aAH+zt2f9k1RX4TX6xgf2f8ALnptNLUaVmSN8Kr57hdmnbMeimkv/hjeuwV9 M3D3U1VnYUlcgX4UXESgjSgSPVTUNx8Mb2Zhu0v5fTaanUIqxInwsmiJA0cc/wCy1VZPhNNKuBpA DD/ZNOzG9ys/wmnH3tIDf8ANQH4SSoQRowx/utVJha5FL8LJGlXdpAVB1+Q08/C88gaMGXt8hpMq OhE3wleUjbo4H/AGqJvhJ5DHGkAsexQ0xXIf+FTbXIk0cBf9xqrn4TESk/2Qvl/7hod7E2uNm+Ek 
bsvl6UAP9w01/hDAvzvpQ3D/AKZmpYWKL/CK1aQN/ZfXriM0q/CWzilx/ZWc/wDTM0ELcjf4R2Qk OdJ+b18s0f8ACoLVwudL2n18s0rlWKlz8F7GNSG0rcT/ANMzVdfgnYCDA0kD/tmaEyraDE+CGmiL B0gEj/pmaoH4J6U0/Gjgcf8API0PVDiQ/wDCitLmJI0YDHfyjUE/wK0x9pbSAQO3lmpjEG7DB8C9 Fxn+yFH/AGyNRyfAnRkUn+yVKn/pmalxaC+hEn7P2hK+7+yFZz0/dGo4/wBnnQWlYNo6qx9IjTjo S1crS/s8eHoOW0hW/wC2RpW/Z28OyIJP7KH08o07XKvbQz7v9njw6xXbpC5PX90aqr+zX4fE3y6U Pr5JpWdwuOf9nPwyr4Gkjd3Pkmlb9nPw4xA/slXX/rkavlE5Fb/hm/wxK5zpCgDt5JqrJ+zh4Ydt q6UoH/XE0KJLYi/s0eGIG+bSQw7fuTT2/Zq8MsoY6OuP+uJpcpcWQT/sueF1Ic6UpB/6ZGmD9mDw tB/zCFZf+uJqOXUchP8AhmHwlcSYGkKD7QmmL+y/4TjcqdKX/vyaQrWLE/7LnhRIQw0tNvp5RqCP 9l7wmYd40lBnt5RquR2FoyuP2X/CYfB0lcn/AKYmrMf7L/hOQ7P7JTH/AFyNS42DoSH9lzwjb4H9 kJn/AK5Gmx/syeE2z/xJ0Az/AM8TRyh0Ht+yv4RnYY0eMcd4jVZf2VfCKyn/AIlMYP8A1yqnDQlE 7fsq+E1BzpEZ/wC2ZqKD9k7whMh/4lEYx6xVPsxsT/hlTwegBbR4/wDv0a0Yv2RfBl3bEHSIdpIP MdP2dkNM+gfBPg+z+HWixWNhD5UCqAqonSvuT4Hs11oMzHIwR1HtWtJcugpHqZbaoyelRGTzFGPl FdaI2CScLgZ5pWZl71LWpVytKCCCOtcD8SZ/+KUvOM4ibjHsa0p7jbufxUftglrb40ajchsCS5Py j3xXlEuVRQegrLG7oaQw3G3A25z3poUrLtJz6V5rWoDQxluSAvIrVtQ6yAg5HpVEnd6U+1dwHPpX s/hhj5KnbWezKSue3eGEZX3bsCvaNFkVSAR+NdEAR3lmwlbHarc+YU5Oa2drEPRgmVweoxzSfKs2 49DUIcTOuoN8pYVTeQxEK/zHtSd7l6NHy0sQMmSOKkmUEg421k3YTVmNjVmPFSNkIRiqTuUtiS1Z pQY3GD2NTP8AKoHpTF1EjczE8YXtVy0jCglvmPaoTurA9CeZfLIOM5pFYtLwMketAr3NBCQQf4fS rUUohcgjCn2q1sLcmjXcxxwKkjVhNgHik3qUkXIo9gcHnmrCER2z5GflPaqjuRsz6r/YXl/4uGG/ hKnj8BX7uJL+6VR1xXa17pq5XViODIc89atKdm4danYhaEW3JBFSQz/vSpHP0qtw5i4rbQeKWMgZ J/CoAUOrQkEZb1pny7NoHzDvTJGxg7sYzVo2y4L/AMVQNFdIypJq1FkSgevWqexPUtSTLE5UD9Ko uw3gkVMCrDxhQcDnNSeUzkAHI702Ow9v3bYX7wpmwnlxuPen0EycRpj5RinCNCnTpUNXHcm8gqgI FKmUYAjrTSsSKu5J2B+ZO2aesYGQBmmPcb5hiQgEjNW0leG2wGO40MSQyEuFy55qOUNLgseBSQy3 GreXuUk1Oqui5Y5JpkvQdHuMmQcVM/mPIGLnIp2GISXY84qVVZl25OKQ0rjhdGJggB+uKum6YADv VNaDW4yW6KuKY940kY2kg+1SoiYqyuVGWPNXEVwgYv07VVhEgc7jIenpT0m3sWzt9qixQfaWfPy4 xUiuCAVPJqrWAWSKVZQS/HpmpVujuKY71NhaFh5CrKQc+1WvtOXA24osFhUl/ffMSRRLKHOUHAoS 
1HFEMl6wAVUzx1p0Nw8aFM7s+tMUkT+a0ihSuMelW95iYblFJoLEbXCtNkjFNe4zkKo+tHKPlFS5 YLginQhvMB7U1FEk0xcTHP3aQTEvu2gAdqTQWE+0q+SFz+FKT8owopKIPsO88rwy5FTKyOMYwafK GwOoWMbRk1VDgkblpqIydiq/dTPFVstFGW2ggnpmlyajsPWTYgYoKQSh5clBT5SWhZ2wAQgqmZDz 8g/OkkBDHKVzhBirX2kBOVGaHEoq/asHGwE1KEyoLIKloaIQQ7H5BisnVNZtdNiBldVb0rSMTN6D 9J1SDWLYyxFXA9607cowBKCiWgLUlnZFxhAaeIMfNsGDUpDaI2WND8qc9+KsRORwEGarl0AkNwW4 MYqNmXPK8+lCiDJppRPAimMcYqyypsG2MUnEtbDVRDGQYxn1pVtk2AhBTUdCXuWltY8ZWIE0ht40 BBjGalxARdPhfBMYzT5bSCKUAIOaSgNFk6PAQCEWoI7S3DlPLGR3xT5dBbivpluEPyDNQrptuxxs H5VKhqIjfTYAxxEDj2qqdNt2yTHhj7VXLYaJYNNtkhIMY/KmjS7UnaIx+VJxKZIthbRDDRAEH0q5 NodrIiMYwMj0osIoLpNoWIEY49qSXSbQj/VD8qOQWxGNJtVx+5H5U5dDtXYt5IH4U+XQQ1dJs2cj yh+VV5dGtCvEQxn0qbFoWTwxYTRHMKkfSqsfh6xt48CAY+lDiHUVdBsgM+SM/SrVv4TsLyQHyF3D 1FNIkL3wxYMdjW6tj2ql/wAItpqQHMC4+lOw0Qf8IfpjbVFuv/fNMvfB+mRoAbdcD2pcoru5mr4K 03BJt0A+lQf8IXphcYtlz64p8orEieDdMDn/AEdT+FWP+EF0yQZ8hB+FQ4lIhXwFpjAg26ZHciqj eC9MU7Ps6n3xRyiZWb4f6bHlxboT9Krx+BNMlJb7Mp/4DWiiNaFt/AWmKoAt159qoyeAdMBKm3U4 /wBmp5RMrD4d6M74NohP0qG7+HGjoQgtEznstPkuImk+GOlwqrLbID34qA/DbSWbIt03fSplDTQd yu3wz0lnObZPxFVX+GOkRZH2ZPypco0rkg+E+kGME20efpUSfDjSclBbID/u0coWKk/wq0dv+XWP 8qavwx0i3AItk/AVfLoJi/8ACrtHbcfssZJ9qyR8KNHDYNon5U1HQTLy/C/R9u02yflVpvhZoyW+ 37NH+VQ4jRnj4T6O6/8AHtHx7Up+FGjtHk20fHtT5AM3/hV2k+cRHaoD64qeb4S6M1qS1rHu9cVP sxt6Gevws0ZLcH7Mh/Co/wDhVGjIdzW6E9uK0jAi9iFvhjo8X3rVG/Com+FmjsQwtkX/AIDRKmh3 J2+GGjKuTaxk/SkPwz0l4s/Zkz9KnkAYfhbpWVYwKP8AgNB+FmjPnNsmfXFLlHYqn4XaWASYEwO2 Khb4X6U3AgTP0p8oWHxfCnSmG3yEz9Kji+FelCUqIVUd+KLCZdf4ZaXbIR5KsD7V0GkaJD4e0spa gKCRkAUlGwLUnm2hFHekm2qgGM1vETKR+bgdQasfe69RT6itYgklUD5uDXF+PmA8J3oC5LQtg/ga uPxDP4pP2v1Mnxo1CID5kuefzFeRPMZ7vaR8mOKxxjVzTZCxRgux6AUS7UkHrXnSEWkU8HgNT/PM c4C/jinEVtDudI/dhXY9e1e5+GJR5Yyfl9Khq7KTsj2zw6yvH6V6vpSk7SPugVvBE3seiaaw2H+d a7xqYQMbqetx6MpNuU7e1V5X2/Kw49qvYkheXGP7oqjd4adW7HpVSQJnyrLM6zAKMrV64+dERhz9 K5mrltpj0j8uIj0oRi659Kew0xkZeSU4GB61Y+5weaFIRIo3dKfEpCn2osJlzzCwUEcipYTslz60 
risamxVYMx/Cp49spLHp2poq1hWYlcDirMJIXDdqW8rBsi3D+9jx0NWJGFvbHjORitVoJas+rv2H ohN8QFiUbSATn6AV+50q5SMjjA596627xLcSyQgRWBxml2mGQhvu9qjqSLEmOSeKmjZYzyuSfaqQ uXQm6GnJAdhJNJslESsFf2qxFt8w5GaB9RsbbnJUcVKm6WTaD+dFikiTyCCVI6GrsWIkyy5pX6E2 KkiGWXdGOfQ0vknHzD5vSqWhY1AAxVvzqZSIlIU7jSkAwRE8nrUu4+nNJbEj41K5BHWneVs7U9x2 JcGRAM/NU7rtQZG6gl6Ea5Dgbflqwkaqx2Dj6U7CTI5Ig444xU8e0ISRmpGKh3gcU54inXkGqQrm ooWO0G0YNVBndk/NUgWI4vNBI4xUUcod2DDoaepSRMI+hqdJBgkCmth7EjXIMQUoPqKjjjUNkmgA YbjnFEQ25ytCFYlKFQMHimIvlsctnNUloJliNcuKVoTLJkNjFS9xk7LuGD0pNyxsAvFDQkW1JZdz dKiaQA5UZ96SQtiWGbA6c1bEm/GBzTaGiVUwCe5qOKL7Oj85yelREpCJOfLIxilj5TPenYRaWQJC FI59ad52JACcimJDZIlLcc0sA8mQ55FBV9DQISQZIwRUcbEHI6UImxIrq2Saa8a7gM07C6k8QXO1 V/GmGMibHalew2hyquSD0HtUMu1gQDihCI4i0QGDzSSk7/mqtguSo5VSe1Rn51HNIfMOaQMoWqyP 85x0ppXJb1JWwQcVCoBYknAqbDK+/JKr09ar3EnlgZH407XGiWNsR7sA0KGZAWbK+lHKJM8k+LXx Dm8FQW8VpFvkkI5ORjnFeO6d4xt9Y1a8GrAtMqNsAG4dPWtIwIk9bHX/ALP32uaHU53QpaGUeUG4 4x6V9Exs24KBxUSTZd0gN15TNGVzSx3hIwM4HalyjuTNOsK5xkmpUugpBAz+FMRJuaRS44qzGA0I ZvvUR1EyWJRnOKtfcFNgnrYXpyR8tOQqZAOg+lNaD3LUjeSMRnioEZiTk5pLUTuh8aE5BNBsm+8X zVAPgJRsE5FTSWLBGYHANSxobGubTBPzColHyjA5oiJkkVysTlWHNRvIsrnA59aT3KWhNGVlhxja RVdv3bjA5qbXY1qytMCyFm65q20xa1Q+gqmtRsp7tuOK0p1AVCPShoi9ymz+XID2qdJROTgYFSPY rJtV3yOaZA3nHaBVINS00Pkrk96qXDK5G0CklqD0GR9RxxViOdoroJGOMVVrMSegk+fOO44NRtGs nB7UmUnoS25EUh+XPFZzq8rnAyM0tgHXKI2ARjHpVfysDI6UXuA+G1ZcswAA96ZIiGTfjn0oSC5U vJdq5HFTWbKYSW60NWAprEYmJzwaaJCj/IMetNbEtkj3qrj5P0pSu529TSGtTNtIMyuMjimRSbZH O3JHen0Ey7blbqJ93DVjIgt3YOckHikA4yqQBgUxwC2wU7IqLGTuYoRnnBqIzb8EDHFCQm9SpI21 6tQgclutOwmx32XzMuDj2zVRQrE+oqbALLGrY28+tRsnydKLFW0IjF8uOlMlj3RbAMYoW4EUG1X8 vue9SXlt5X7sHIqmhJFQRCKLaelZsgC9RmjoT1JY9sgxt/MUsnlqpB6/SkVaxRhcAbitTx/Nzj8K CepJJIsvG3pUAhDdT0osNPWxXuGzIEqtJhW+X7woHewpYyxBwMN3qVijLgjn1xSsBZXYLIqOW+lZ R3LCEHUGiwr2KcqtLeImccc0XSFXKg8CqJjuVUQ7SV6mo4/NhBLnOKbKerIZV+0oDXM+OlP/AAit 17Qt/I1pDcEfxZ/tfsLP43aq7c77k8flXjsPygcdOhrmxu5Vxkku1jxVd4+BzwelcG7GkWZl2wKC 
CLqzZfNO4D7pyf0qnrNk3j/wbbaHZ6kft1ky+dhh/Ccn9Klq4R0Haxox+Ifh620Kx1Rku7Mr5+Co +6cn9KXxZpP/AAl3h+08P2GqE3dmy+f8y5+U5P6VHkarYq+MtEHjXw7aaFp2pE3tm6+d8yg/Kcn9 Kd4o00+OtDtdE0/Uit9ZY8/DAH5Tk/pVLQjqT69o7eLvDdrpWnanvvbNlE43KCcHJ/SqfjPQ28ba Da6dpuqbb20K+eoZc/Kcn9KaZo0rEPiezn8eeF7XStM1LN/Zsv2j5wD8pyf0qn4w0u68d+GrLTNM 1L/TbRl+0fOo6HJ/SnoSrGl4usJvFPhSx0rTNSD39qV8/DqDwcn9Kh8YaReeJ/C9lY6XqWdQtmT7 QBIAeDk/pWaHZXHeN7C78VeGbGx0vUA19blftAEgB4OT+lVvFFhceKNAs7HS9QxfWxX7RhwDwcn9 KqInYj8a6deeMPD9jZabqO69tivngOM8HJ/Sm+NtMvvEvhuystL1EG9tiv2gCRQeDk/pRYttWIfG Gn3vifw7YWWkalm/gK/aQJADwcn9KZ450zUfFHh2ysdH1EG/tyv2nEgB4OT+lD0MosXxPp+o+IfD VjZabqW7ULcr9pxIuRg5P6Va8YWl5rug2VppWoBr+Er9pxIAeDk/pSGrIg8X2epeJNEsbbStQBvL cr9pAkHY5P6VL4ws7/xToNlb6Pf4v7dl+0gSAHg5P6VXQE0Hjmz1LXtD0+20fUA19Dt+1KJADwcn 9Kg8WRajr+gWVppN6Dfwsv2kCQZGDk/pQibXZN47ttQ1bw7Y22kX4a+hK/aAsgyMHJ/Ssjxgmq69 oen22iXqm/h2/asSgHg5P6U0xs2PFltqWu+HbG30q+B1GAr9qAkHY5P6VW8bxanrfhuxt9GuUOoR FftW2QZGDk/pSuO1yv4ug1XV/DtlbaTfBtQiK/acSjIwef0p3iyLWdY8NWNrpF6r6hEV+04lGeDk /pUMpqyKvjew1PV/D9hDpl4P7QhK/agsgzwcn9Kr+N7fVPEHh6wh0e8U3kRX7SolAPByf0p3voSt xPGsuq3fh2xg0W6X+0YioulEoBHPP6VB40XWdb8NWEOjXSm/jK/asSgYwef0ptWLbuP8cpqt54Ys YtJuFe/jK/agkgyOef0qh41XWNR8L2CaPcr9vjK/agsoBHPP6U1qCIfGjaxq3h7ToNFmT7dGV+1l ZRnrz+laXjaDVLrw1YR6POh1BCv2oLIMjnn9KUtCd2UvHf8AbU3h/Tl0edWvFK/agkoyOef0qXxz /bGpeFrKLRpka/Xb9p2yjI55/SlFthJ6HuXws8i58Pw2lxIh1JF/e4YEg1s+KrdrbR7uN3LHYQPy px0kTHU/jf8A23AX/aB1KI/eSdsn8q+eWkDnK96yxj1LlGyGNhJQX5PaliZmuCDXnWFyll1Owgnk 1ctP3cAUj5h3q+g0jrdFk84hehr2zw5EibcD5u9IlrU9p0STyscc17DorYCtjrVLVjT6HpFmu2Pd mrpdmUHGPWuiLFYjkcysABxTJELtgdRQ2J6Mqu7HBPb9apvCZH3rwPSmVc//2Q== ------=_Part_72026_18516924.1181569766607-- From owner-xfs@oss.sgi.com Tue Jun 12 19:54:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 19:54:13 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from 
larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5D2s5Wt007406 for ; Tue, 12 Jun 2007 19:54:07 -0700
Received: from cxfsmac10.melbourne.sgi.com (cxfsmac10.melbourne.sgi.com [134.14.55.100]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA26301; Wed, 13 Jun 2007 12:53:58 +1000
Message-ID: <466F5C45.6040306@sgi.com>
Date: Wed, 13 Jun 2007 12:53:57 +1000
From: Donald Douwsma
User-Agent: Thunderbird 2.0.0.0 (Macintosh/20070326)
MIME-Version: 1.0
To: "Jahnke, Steffen"
CC: xfs@oss.sgi.com
Subject: Re: XFS with project quota under linux?
References: <950DD867A5E1B04ABE82A56FCDC03A5E9CE8CF@HDHS0111.euro1.voith.net>
In-Reply-To: <950DD867A5E1B04ABE82A56FCDC03A5E9CE8CF@HDHS0111.euro1.voith.net>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com
X-Virus-Status: Clean
X-archive-position: 11759
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: donaldd@sgi.com
Precedence: bulk
X-list: xfs

Jahnke, Steffen wrote:
> I recently switched from usrquota to pquota on our Altix 4700 under SLES10.
> I then found out that the project quota is not updated if files are moved within
> the same filesystem. E.g. if I move a file from one project to another, it still
> belongs to the old project. The same thing happens if I move a file which does not
> belong to any project but which is on a filesystem mounted with pquota.
>
> Some details of our system:
>
> hdhu0250:/home/t # cat /etc/*release
> LSB_VERSION="core-2.0-noarch:core-3.0-noarch:core-2.0-ia64:core-3.0-ia64"
> SGI ProPack 5SP1 for Linux, Build 501r2-0703010508
> SUSE Linux Enterprise Server 10 (ia64)
> VERSION = 10

Hi Steffen,

I've checked, and Nathan fixed this bug back in January.
The fix is in mainline and will ship in SLES10 SP1 when it releases.

> Any help would be much appreciated. Maybe there is a developer version
> which is already able to handle pquota correctly?

If you're not able to upgrade to SP1 but don't mind rebuilding the kernel, the following patch will solve your problem.

Donald

Date: Mon, Jan 15 2007 14:32:14 +1100
Subject: Fix a project quota space accounting leak on rename.

===========================================================================
Index: xfs_rename.c
===========================================================================
--- a/fs/xfs/xfs_rename.c	2007-01-15 14:32:15.000000000 +1100
+++ b/fs/xfs/xfs_rename.c	2007-01-15 14:32:15.000000000 +1100
@@ -316,6 +316,18 @@ xfs_rename(
 		}
 	}
 
+	/*
+	 * If we are using project inheritance, we only allow renames
+	 * into our tree when the project IDs are the same; else the
+	 * tree quota mechanism would be circumvented.
+	 */
+	if (unlikely((target_dp->i_d.di_flags & XFS_DIFLAG_PROJINHERIT) &&
+		     (target_dp->i_d.di_projid != src_ip->i_d.di_projid))) {
+		error = XFS_ERROR(EXDEV);
+		xfs_rename_unlock4(inodes, XFS_ILOCK_SHARED);
+		goto rele_return;
+	}
+
 	new_parent = (src_dp != target_dp);
 	src_is_directory = ((src_ip->i_d.di_mode & S_IFMT) == S_IFDIR);
===========================================================================
Index: xfs_vnodeops.c
===========================================================================
--- a/fs/xfs/xfs_vnodeops.c	2007-01-15 14:32:15.000000000 +1100
+++ b/fs/xfs/xfs_vnodeops.c	2007-01-15 14:32:15.000000000 +1100
@@ -2663,7 +2663,7 @@ xfs_link(
 	 */
 	if (unlikely((tdp->i_d.di_flags & XFS_DIFLAG_PROJINHERIT) &&
 		     (tdp->i_d.di_projid != sip->i_d.di_projid))) {
-		error = XFS_ERROR(EPERM);
+		error = XFS_ERROR(EXDEV);
 		goto error_return;
 	}

From owner-xfs@oss.sgi.com Tue Jun 12 21:16:44 2007
Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 21:16:50 -0700 (PDT)
X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on
oss.sgi.com
X-Spam-Level: **
X-Spam-Status: No, score=2.5 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_32, J_CHICKENPOX_44,J_CHICKENPOX_45,J_CHICKENPOX_46,J_CHICKENPOX_62, J_CHICKENPOX_63 autolearn=no version=3.2.0-pre1-r499012
Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5D4GcWt032566 for ; Tue, 12 Jun 2007 21:16:40 -0700
Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA28414; Wed, 13 Jun 2007 14:16:31 +1000
Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5D4GUAf121683743; Wed, 13 Jun 2007 14:16:31 +1000 (AEST)
Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5D4GTNs121084576; Wed, 13 Jun 2007 14:16:29 +1000 (AEST)
X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f
Date: Wed, 13 Jun 2007 14:16:29 +1000
From: David Chinner
To: xfs-dev
Cc: xfs-oss
Subject: Review: Multi-File Data Streams V2
Message-ID: <20070613041629.GI86004887@sgi.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.4.2.1i
X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com
X-Virus-Status: Clean
X-archive-position: 11760
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: dgc@sgi.com
Precedence: bulk
X-list: xfs

Concurrent Multi-File Data Streams

In media spaces, video is often stored in a frame-per-file format. When dealing with uncompressed realtime HD video streams in this format, it is crucial that files do not get fragmented and that multiple files are placed contiguously on disk.
When multiple streams are being ingested and played out at the same time, it is critical that the filesystem does not cross the streams and interleave them together, as this creates seek and readahead cache-miss latency and prevents both ingest and playout from meeting frame-rate targets.

This patch introduces a "stream of files" concept in the allocator to place all the data from a single stream contiguously on disk so that RAID array readahead can be used effectively. Each additional stream gets placed in a different allocation group within the filesystem, thereby ensuring that we don't cross any streams. When an AG fills up, we select a new AG for the stream that is not in use.

The core of the functionality is the stream tracking - each inode that we create in a directory needs to be associated with the directory's stream. Hence every time we create a file, we look up the directory's stream object and associate the new file with that object. Once we have a stream object for a file, we use the AG that the stream object points to for allocations. If we can't allocate in that AG (e.g. it is full), we move the entire stream to another AG. Other inodes in the same stream are moved to the new AG on their next allocation (i.e. lazy update).

Stream objects are kept in a cache and hold a reference on the inode, so the inode cannot be reclaimed while there is an outstanding stream reference. This means that on unlink we need to remove the stream association, and we also need to flush all the associations on certain events that want to reclaim all unreferenced inodes (e.g. filesystem freeze).

Credits: The original filestream allocator on Irix was written by Glen Overby; the Linux port and rewrite were done by Nathan Scott and Sam Vaughan (none of whom work at SGI any more). I just picked up the pieces and beat it repeatedly with a big stick until it passed XFSQA.
Version 2:
o fold xfs_bmap_filestream() into xfs_bmap_btalloc()
o use ktrace infrastructure for debug code in xfs_filestream.c
o wrap repeated filestream inode checks.
o rename per-AG filestream reference counting macros and convert to static inline
o remove debug from xfs_mru_cache.[ch]
o fix function call/error check formatting.
o removed unnecessary fstrm_mnt_data_t structure.
o cleaned up ASSERT checks
o cleaned up namespace-less globals in xfs_mru_cache.c
o removed unnecessary casts

---
 fs/xfs/Makefile-linux-2.6      |    2
 fs/xfs/linux-2.6/xfs_globals.c |    1
 fs/xfs/linux-2.6/xfs_linux.h   |    1
 fs/xfs/linux-2.6/xfs_sysctl.c  |   11
 fs/xfs/linux-2.6/xfs_sysctl.h  |    2
 fs/xfs/quota/xfs_qm.c          |    3
 fs/xfs/xfs.h                   |    1
 fs/xfs/xfs_ag.h                |    1
 fs/xfs/xfs_bmap.c              |   68 +++
 fs/xfs/xfs_clnt.h              |    2
 fs/xfs/xfs_dinode.h            |    4
 fs/xfs/xfs_filestream.c        |  742 +++++++++++++++++++++++++++++++++++++++++
 fs/xfs/xfs_filestream.h        |  135 +++++++
 fs/xfs/xfs_fs.h                |    1
 fs/xfs/xfs_fsops.c             |    2
 fs/xfs/xfs_inode.c             |   17
 fs/xfs/xfs_mount.h             |    4
 fs/xfs/xfs_mru_cache.c         |  494 +++++++++++++++++++++++++++
 fs/xfs/xfs_mru_cache.h         |  219 ++++++++++++
 fs/xfs/xfs_vfsops.c            |   25 +
 fs/xfs/xfs_vnodeops.c          |   22 +
 fs/xfs/xfsidbg.c               |  188 ++++++++++
 22 files changed, 1934 insertions(+), 11 deletions(-)

Index: 2.6.x-xfs-new/fs/xfs/Makefile-linux-2.6
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/Makefile-linux-2.6	2007-06-13 13:58:15.727518215 +1000
+++ 2.6.x-xfs-new/fs/xfs/Makefile-linux-2.6	2007-06-13 14:11:28.440325006 +1000
@@ -54,6 +54,7 @@ xfs-y += xfs_alloc.o \
 	xfs_dir2_sf.o \
 	xfs_error.o \
 	xfs_extfree_item.o \
+	xfs_filestream.o \
 	xfs_fsops.o \
 	xfs_ialloc.o \
 	xfs_ialloc_btree.o \
@@ -67,6 +68,7 @@ xfs-y += xfs_alloc.o \
 	xfs_log.o \
 	xfs_log_recover.o \
 	xfs_mount.o \
+	xfs_mru_cache.o \
 	xfs_rename.o \
 	xfs_trans.o \
 	xfs_trans_ail.o \
Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_globals.c
===================================================================
---
2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_globals.c 2007-06-13 13:58:15.739516660 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_globals.c 2007-06-13 14:11:28.592305170 +1000 @@ -49,6 +49,7 @@ xfs_param_t xfs_params = { .inherit_nosym = { 0, 0, 1 }, .rotorstep = { 1, 1, 255 }, .inherit_nodfrg = { 0, 1, 1 }, + .fstrm_timer = { 1, 50, 3600*100}, }; /* Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_linux.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_linux.h 2007-06-13 13:58:15.739516660 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_linux.h 2007-06-13 14:11:28.600304126 +1000 @@ -132,6 +132,7 @@ #define xfs_inherit_nosymlinks xfs_params.inherit_nosym.val #define xfs_rotorstep xfs_params.rotorstep.val #define xfs_inherit_nodefrag xfs_params.inherit_nodfrg.val +#define xfs_fstrm_centisecs xfs_params.fstrm_timer.val #define current_cpu() (raw_smp_processor_id()) #define current_pid() (current->pid) Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_sysctl.c 2007-06-13 13:58:15.739516660 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.c 2007-06-13 14:11:28.604303604 +1000 @@ -243,6 +243,17 @@ static ctl_table xfs_table[] = { .extra1 = &xfs_params.inherit_nodfrg.min, .extra2 = &xfs_params.inherit_nodfrg.max }, + { + .ctl_name = XFS_FILESTREAM_TIMER, + .procname = "filestream_centisecs", + .data = &xfs_params.fstrm_timer.val, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec_minmax, + .strategy = &sysctl_intvec, + .extra1 = &xfs_params.fstrm_timer.min, + .extra2 = &xfs_params.fstrm_timer.max, + }, /* please keep this the last entry */ #ifdef CONFIG_PROC_FS { Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_sysctl.h 2007-06-13 13:58:15.739516660 +1000 +++ 
2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.h 2007-06-13 14:11:28.612302560 +1000 @@ -50,6 +50,7 @@ typedef struct xfs_param { xfs_sysctl_val_t inherit_nosym; /* Inherit the "nosymlinks" flag. */ xfs_sysctl_val_t rotorstep; /* inode32 AG rotoring control knob */ xfs_sysctl_val_t inherit_nodfrg;/* Inherit the "nodefrag" inode flag. */ + xfs_sysctl_val_t fstrm_timer; /* Filestream dir-AG assoc'n timeout. */ } xfs_param_t; /* @@ -89,6 +90,7 @@ enum { XFS_INHERIT_NOSYM = 19, XFS_ROTORSTEP = 20, XFS_INHERIT_NODFRG = 21, + XFS_FILESTREAM_TIMER = 22, }; extern xfs_param_t xfs_params; Index: 2.6.x-xfs-new/fs/xfs/xfs_ag.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_ag.h 2007-06-13 13:58:15.751515106 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_ag.h 2007-06-13 14:11:28.616302038 +1000 @@ -196,6 +196,7 @@ typedef struct xfs_perag lock_t pagb_lock; /* lock for pagb_list */ #endif xfs_perag_busy_t *pagb_list; /* unstable blocks */ + atomic_t pagf_fstrms; /* # of filestreams active in this AG */ /* * inode allocation search lookup optimisation. Index: 2.6.x-xfs-new/fs/xfs/xfs_bmap.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_bmap.c 2007-06-13 13:58:15.751515106 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_bmap.c 2007-06-13 14:11:28.636299428 +1000 @@ -52,6 +52,7 @@ #include "xfs_quota.h" #include "xfs_trans_space.h" #include "xfs_buf_item.h" +#include "xfs_filestream.h" #ifdef DEBUG @@ -171,6 +172,14 @@ xfs_bmap_alloc( xfs_bmalloca_t *ap); /* bmap alloc argument struct */ /* + * xfs_bmap_filestreams is the underlying allocator when filestreams are + * enabled. + */ +STATIC int /* error */ +xfs_bmap_filestreams( + xfs_bmalloca_t *ap); /* bmap alloc argument struct */ + +/* * Transform a btree format file with only one leaf node, where the * extents list will fit in the inode, into an extents format file. 
* Since the file extents are already in-core, all we have to do is @@ -2724,7 +2733,12 @@ xfs_bmap_btalloc( } nullfb = ap->firstblock == NULLFSBLOCK; fb_agno = nullfb ? NULLAGNUMBER : XFS_FSB_TO_AGNO(mp, ap->firstblock); - if (nullfb) + if (nullfb && xfs_inode_is_filestream(ap->ip)) { + ag = xfs_filestream_lookup_ag(ap->ip); + ag = (ag != NULLAGNUMBER) ? ag : 0; + ap->rval = (ap->userdata) ? XFS_AGB_TO_FSB(mp, ag, 0) : + XFS_INO_TO_FSB(mp, ap->ip->i_ino); + } else if (nullfb) ap->rval = XFS_INO_TO_FSB(mp, ap->ip->i_ino); else ap->rval = ap->firstblock; @@ -2750,13 +2764,22 @@ xfs_bmap_btalloc( args.firstblock = ap->firstblock; blen = 0; if (nullfb) { - args.type = XFS_ALLOCTYPE_START_BNO; + if (xfs_inode_is_filestream(ap->ip)) + args.type = XFS_ALLOCTYPE_NEAR_BNO; + else + args.type = XFS_ALLOCTYPE_START_BNO; args.total = ap->total; + /* - * Find the longest available space. - * We're going to try for the whole allocation at once. + * Search for an allocation group with a single extent + * large enough for the request. + * + * If one isn't found, then adjust the minimum allocation + * size to the largest space found. */ startag = ag = XFS_FSB_TO_AGNO(mp, args.fsbno); + if (startag == NULLAGNUMBER) + startag = ag = 0; notinit = 0; down_read(&mp->m_peraglock); while (blen < ap->alen) { @@ -2782,6 +2805,35 @@ xfs_bmap_btalloc( blen = longest; } else notinit = 1; + + if (xfs_inode_is_filestream(ap->ip)) { + if (blen >= ap->alen) + break; + + if (ap->userdata) { + /* + * If startag is an invalid AG, we've + * come here once before and + * xfs_filestream_new_ag picked the + * best currently available. + * + * Don't continue looping, since we + * could loop forever. 
+ */ + if (startag == NULLAGNUMBER) + break; + + error = xfs_filestream_new_ag(ap, &ag); + if (error) { + up_read(&mp->m_peraglock); + return error; + } + + /* loop again to set 'blen'*/ + startag = NULLAGNUMBER; + continue; + } + } if (++ag == mp->m_sb.sb_agcount) ag = 0; if (ag == startag) @@ -2806,8 +2858,14 @@ xfs_bmap_btalloc( */ else args.minlen = ap->alen; + + if (xfs_inode_is_filestream(ap->ip)) + ap->rval = args.fsbno = XFS_AGB_TO_FSB(mp, ag, 0); } else if (ap->low) { - args.type = XFS_ALLOCTYPE_START_BNO; + if (xfs_inode_is_filestream(ap->ip)) + args.type = XFS_ALLOCTYPE_FIRST_AG; + else + args.type = XFS_ALLOCTYPE_START_BNO; args.total = args.minlen = ap->minlen; } else { args.type = XFS_ALLOCTYPE_NEAR_BNO; Index: 2.6.x-xfs-new/fs/xfs/xfs_clnt.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_clnt.h 2007-06-13 13:58:15.759514069 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_clnt.h 2007-06-13 14:11:28.640298906 +1000 @@ -99,5 +99,7 @@ struct xfs_mount_args { */ #define XFSMNT2_COMPAT_IOSIZE 0x00000001 /* don't report large preferred * I/O size in stat(2) */ +#define XFSMNT2_FILESTREAMS 0x00000002 /* enable the filestreams + * allocator */ #endif /* __XFS_CLNT_H__ */ Index: 2.6.x-xfs-new/fs/xfs/xfs_dinode.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_dinode.h 2007-06-13 13:58:15.767513033 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_dinode.h 2007-06-13 14:11:28.648297862 +1000 @@ -257,6 +257,7 @@ typedef enum xfs_dinode_fmt #define XFS_DIFLAG_EXTSIZE_BIT 11 /* inode extent size allocator hint */ #define XFS_DIFLAG_EXTSZINHERIT_BIT 12 /* inherit inode extent size */ #define XFS_DIFLAG_NODEFRAG_BIT 13 /* do not reorganize/defragment */ +#define XFS_DIFLAG_FILESTREAM_BIT 14 /* use filestream allocator */ #define XFS_DIFLAG_REALTIME (1 << XFS_DIFLAG_REALTIME_BIT) #define XFS_DIFLAG_PREALLOC (1 << XFS_DIFLAG_PREALLOC_BIT) #define XFS_DIFLAG_NEWRTBM (1 << 
XFS_DIFLAG_NEWRTBM_BIT) @@ -271,12 +272,13 @@ typedef enum xfs_dinode_fmt #define XFS_DIFLAG_EXTSIZE (1 << XFS_DIFLAG_EXTSIZE_BIT) #define XFS_DIFLAG_EXTSZINHERIT (1 << XFS_DIFLAG_EXTSZINHERIT_BIT) #define XFS_DIFLAG_NODEFRAG (1 << XFS_DIFLAG_NODEFRAG_BIT) +#define XFS_DIFLAG_FILESTREAM (1 << XFS_DIFLAG_FILESTREAM_BIT) #define XFS_DIFLAG_ANY \ (XFS_DIFLAG_REALTIME | XFS_DIFLAG_PREALLOC | XFS_DIFLAG_NEWRTBM | \ XFS_DIFLAG_IMMUTABLE | XFS_DIFLAG_APPEND | XFS_DIFLAG_SYNC | \ XFS_DIFLAG_NOATIME | XFS_DIFLAG_NODUMP | XFS_DIFLAG_RTINHERIT | \ XFS_DIFLAG_PROJINHERIT | XFS_DIFLAG_NOSYMLINKS | XFS_DIFLAG_EXTSIZE | \ - XFS_DIFLAG_EXTSZINHERIT | XFS_DIFLAG_NODEFRAG) + XFS_DIFLAG_EXTSZINHERIT | XFS_DIFLAG_NODEFRAG | XFS_DIFLAG_FILESTREAM) #endif /* __XFS_DINODE_H__ */ Index: 2.6.x-xfs-new/fs/xfs/xfs_filestream.c =================================================================== --- /dev/null 1970-01-01 00:00:00.000000000 +0000 +++ 2.6.x-xfs-new/fs/xfs/xfs_filestream.c 2007-06-13 14:11:28.676294208 +1000 @@ -0,0 +1,742 @@ +/* + * Copyright (c) 2000-2005 Silicon Graphics, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ +#include "xfs.h" +#include "xfs_bmap_btree.h" +#include "xfs_inum.h" +#include "xfs_dir2.h" +#include "xfs_dir2_sf.h" +#include "xfs_attr_sf.h" +#include "xfs_dinode.h" +#include "xfs_inode.h" +#include "xfs_ag.h" +#include "xfs_dmapi.h" +#include "xfs_log.h" +#include "xfs_trans.h" +#include "xfs_sb.h" +#include "xfs_mount.h" +#include "xfs_bmap.h" +#include "xfs_alloc.h" +#include "xfs_utils.h" +#include "xfs_mru_cache.h" +#include "xfs_filestream.h" + +#ifdef XFS_FILESTREAMS_TRACE + +ktrace_t *xfs_filestreams_trace_buf; + +STATIC void +xfs_filestreams_trace( + xfs_mount_t *mp, /* mount point */ + int type, /* type of trace */ + const char *func, /* source function */ + int line, /* source line number */ + __psunsigned_t arg0, + __psunsigned_t arg1, + __psunsigned_t arg2, + __psunsigned_t arg3, + __psunsigned_t arg4, + __psunsigned_t arg5) +{ + ktrace_enter(xfs_filestreams_trace_buf, + (void *)(__psint_t)(type | (line << 16)), + (void *)func, + (void *)(__psunsigned_t)current_pid(), + (void *)mp, + (void *)(__psunsigned_t)arg0, + (void *)(__psunsigned_t)arg1, + (void *)(__psunsigned_t)arg2, + (void *)(__psunsigned_t)arg3, + (void *)(__psunsigned_t)arg4, + (void *)(__psunsigned_t)arg5, + NULL, NULL, NULL, NULL, NULL, NULL); +} + +#define TRACE0(mp,t) TRACE6(mp,t,0,0,0,0,0,0) +#define TRACE1(mp,t,a0) TRACE6(mp,t,a0,0,0,0,0,0) +#define TRACE2(mp,t,a0,a1) TRACE6(mp,t,a0,a1,0,0,0,0) +#define TRACE3(mp,t,a0,a1,a2) TRACE6(mp,t,a0,a1,a2,0,0,0) +#define TRACE4(mp,t,a0,a1,a2,a3) TRACE6(mp,t,a0,a1,a2,a3,0,0) +#define TRACE5(mp,t,a0,a1,a2,a3,a4) TRACE6(mp,t,a0,a1,a2,a3,a4,0) +#define TRACE6(mp,t,a0,a1,a2,a3,a4,a5) \ + xfs_filestreams_trace(mp, t, __FUNCTION__, __LINE__, \ + (__psunsigned_t)a0, (__psunsigned_t)a1, \ + (__psunsigned_t)a2, (__psunsigned_t)a3, \ + 
(__psunsigned_t)a4, (__psunsigned_t)a5) + +#define TRACE_AG_SCAN(mp, ag, ag2) \ + TRACE2(mp, XFS_FSTRM_KTRACE_AGSCAN, ag, ag2); +#define TRACE_AG_PICK1(mp, max_ag, maxfree) \ + TRACE2(mp, XFS_FSTRM_KTRACE_AGPICK1, max_ag, maxfree); +#define TRACE_AG_PICK2(mp, ag, ag2, cnt, free, scan, flag) \ + TRACE6(mp, XFS_FSTRM_KTRACE_AGPICK2, ag, ag2, \ + cnt, free, scan, flag) +#define TRACE_UPDATE(mp, ip, ag, cnt, ag2, cnt2) \ + TRACE5(mp, XFS_FSTRM_KTRACE_UPDATE, ip, ag, cnt, ag2, cnt2) +#define TRACE_FREE(mp, ip, pip, ag, cnt) \ + TRACE4(mp, XFS_FSTRM_KTRACE_FREE, ip, pip, ag, cnt) +#define TRACE_LOOKUP(mp, ip, pip, ag, cnt) \ + TRACE4(mp, XFS_FSTRM_KTRACE_ITEM_LOOKUP, ip, pip, ag, cnt) +#define TRACE_ASSOCIATE(mp, ip, pip, ag, cnt) \ + TRACE4(mp, XFS_FSTRM_KTRACE_ASSOCIATE, ip, pip, ag, cnt) +#define TRACE_MOVEAG(mp, ip, pip, oag, ocnt, nag, ncnt) \ + TRACE6(mp, XFS_FSTRM_KTRACE_MOVEAG, ip, pip, oag, ocnt, nag, ncnt) +#define TRACE_ORPHAN(mp, ip, ag) \ + TRACE2(mp, XFS_FSTRM_KTRACE_ORPHAN, ip, ag); + + +#else +#define TRACE_AG_SCAN(mp, ag, ag2) +#define TRACE_AG_PICK1(mp, max_ag, maxfree) +#define TRACE_AG_PICK2(mp, ag, ag2, cnt, free, scan, flag) +#define TRACE_UPDATE(mp, ip, ag, cnt, ag2, cnt2) +#define TRACE_FREE(mp, ip, pip, ag, cnt) +#define TRACE_LOOKUP(mp, ip, pip, ag, cnt) +#define TRACE_ASSOCIATE(mp, ip, pip, ag, cnt) +#define TRACE_MOVEAG(mp, ip, pip, oag, ocnt, nag, ncnt) +#define TRACE_ORPHAN(mp, ip, ag) +#endif + +static kmem_zone_t *item_zone; + +/* + * Structure for associating a file or a directory with an allocation group. + * The parent directory pointer is only needed for files, but since there will + * generally be vastly more files than directories in the cache, using the same + * data structure simplifies the code with very little memory overhead. + */ +typedef struct fstrm_item +{ + xfs_agnumber_t ag; /* AG currently in use for the file/directory. */ + xfs_inode_t *ip; /* inode self-pointer. */ + xfs_inode_t *pip; /* Parent directory inode pointer. 
*/ +} fstrm_item_t; + + +/* + * Scan the AGs starting at startag looking for an AG that isn't in use and has + * at least minlen blocks free. + */ +static int +_xfs_filestream_pick_ag( + xfs_mount_t *mp, + xfs_agnumber_t startag, + xfs_agnumber_t *agp, + int flags, + xfs_extlen_t minlen) +{ + int err, trylock, nscan; + xfs_extlen_t delta, longest, need, free, minfree, maxfree = 0; + xfs_agnumber_t ag, max_ag = NULLAGNUMBER; + struct xfs_perag *pag; + + /* 2% of an AG's blocks must be free for it to be chosen. */ + minfree = mp->m_sb.sb_agblocks / 50; + + ag = startag; + *agp = NULLAGNUMBER; + + /* For the first pass, don't sleep trying to init the per-AG. */ + trylock = XFS_ALLOC_FLAG_TRYLOCK; + + for (nscan = 0; 1; nscan++) { + + TRACE_AG_SCAN(mp, ag, xfs_filestream_peek_ag(mp, ag)); + + pag = mp->m_perag + ag; + + if (!pag->pagf_init) { + err = xfs_alloc_pagf_init(mp, NULL, ag, trylock); + if (err && !trylock) + return err; + } + + /* Might fail sometimes during the 1st pass with trylock set. */ + if (!pag->pagf_init) + goto next_ag; + + /* Keep track of the AG with the most free blocks. */ + if (pag->pagf_freeblks > maxfree) { + maxfree = pag->pagf_freeblks; + max_ag = ag; + } + + /* + * The AG reference count does two things: it enforces mutual + * exclusion when examining the suitability of an AG in this + * loop, and it guards against two filestreams being established + * in the same AG as each other. + */ + if (xfs_filestream_get_ag(mp, ag) > 1) { + xfs_filestream_put_ag(mp, ag); + goto next_ag; + } + + need = XFS_MIN_FREELIST_PAG(pag, mp); + delta = need > pag->pagf_flcount ? need - pag->pagf_flcount : 0; + longest = (pag->pagf_longest > delta) ? 
+ (pag->pagf_longest - delta) : + (pag->pagf_flcount > 0 || pag->pagf_longest > 0); + + if (((minlen && longest >= minlen) || + (!minlen && pag->pagf_freeblks >= minfree)) && + (!pag->pagf_metadata || !(flags & XFS_PICK_USERDATA) || + (flags & XFS_PICK_LOWSPACE))) { + + /* Break out, retaining the reference on the AG. */ + free = pag->pagf_freeblks; + *agp = ag; + break; + } + + /* Drop the reference on this AG, it's not usable. */ + xfs_filestream_put_ag(mp, ag); +next_ag: + /* Move to the next AG, wrapping to AG 0 if necessary. */ + if (++ag >= mp->m_sb.sb_agcount) + ag = 0; + + /* If a full pass of the AGs hasn't been done yet, continue. */ + if (ag != startag) + continue; + + /* Allow sleeping in xfs_alloc_pagf_init() on the 2nd pass. */ + if (trylock != 0) { + trylock = 0; + continue; + } + + /* Finally, if lowspace wasn't set, set it for the 3rd pass. */ + if (!(flags & XFS_PICK_LOWSPACE)) { + flags |= XFS_PICK_LOWSPACE; + continue; + } + + /* + * Take the AG with the most free space, regardless of whether + * it's already in use by another filestream. + */ + if (max_ag != NULLAGNUMBER) { + xfs_filestream_get_ag(mp, max_ag); + TRACE_AG_PICK1(mp, max_ag, maxfree); + free = maxfree; + *agp = max_ag; + break; + } + + /* take AG 0 if none matched */ + TRACE_AG_PICK1(mp, max_ag, maxfree); + *agp = 0; + return 0; + } + + TRACE_AG_PICK2(mp, startag, *agp, xfs_filestream_peek_ag(mp, *agp), + free, nscan, flags); + + return 0; +} + +/* + * Set the allocation group number for a file or a directory, updating inode + * references and per-AG references as appropriate. Must be called with the + * m_peraglock held in read mode. + */ +static int +_xfs_filestream_update_ag( + xfs_inode_t *ip, + xfs_inode_t *pip, + xfs_agnumber_t ag) +{ + int err = 0; + xfs_mount_t *mp; + xfs_mru_cache_t *cache; + fstrm_item_t *item; + xfs_agnumber_t old_ag; + xfs_inode_t *old_pip; + + /* + * Either ip is a regular file and pip is a directory, or ip is a + * directory and pip is NULL. 
+ */ + ASSERT(ip && (((ip->i_d.di_mode & S_IFREG) && pip && + (pip->i_d.di_mode & S_IFDIR)) || + ((ip->i_d.di_mode & S_IFDIR) && !pip))); + + mp = ip->i_mount; + cache = mp->m_filestream; + + item = xfs_mru_cache_lookup(cache, ip->i_ino); + if (item) { + ASSERT(item->ip == ip); + old_ag = item->ag; + item->ag = ag; + old_pip = item->pip; + item->pip = pip; + xfs_mru_cache_done(cache); + + /* + * If the AG has changed, drop the old ref and take a new one, + * effectively transferring the reference from old to new AG. + */ + if (ag != old_ag) { + xfs_filestream_put_ag(mp, old_ag); + xfs_filestream_get_ag(mp, ag); + } + + /* + * If ip is a file and its pip has changed, drop the old ref and + * take a new one. + */ + if (pip && pip != old_pip) { + IRELE(old_pip); + IHOLD(pip); + } + + TRACE_UPDATE(mp, ip, old_ag, xfs_filestream_peek_ag(mp, old_ag), + ag, xfs_filestream_peek_ag(mp, ag)); + return 0; + } + + item = kmem_zone_zalloc(item_zone, KM_MAYFAIL); + if (!item) + return ENOMEM; + + item->ag = ag; + item->ip = ip; + item->pip = pip; + + err = xfs_mru_cache_insert(cache, ip->i_ino, item); + if (err) { + kmem_zone_free(item_zone, item); + return err; + } + + /* Take a reference on the AG. */ + xfs_filestream_get_ag(mp, ag); + + /* + * Take a reference on the inode itself regardless of whether it's a + * regular file or a directory. + */ + IHOLD(ip); + + /* + * In the case of a regular file, take a reference on the parent inode + * as well to ensure it remains in-core. + */ + if (pip) + IHOLD(pip); + + TRACE_UPDATE(mp, ip, ag, xfs_filestream_peek_ag(mp, ag), + ag, xfs_filestream_peek_ag(mp, ag)); + + return 0; +} + +/* xfs_fstrm_free_func(): callback for freeing cached stream items. */ +void +xfs_fstrm_free_func( + xfs_ino_t ino, + fstrm_item_t *item) +{ + xfs_inode_t *ip = item->ip; + int ref; + + ASSERT(ip->i_ino == ino); + + /* Drop the reference taken on the AG when the item was added. 
*/ + ref = xfs_filestream_put_ag(ip->i_mount, item->ag); + + ASSERT(ref >= 0); + + /* + * _xfs_filestream_update_ag() always takes a reference on the inode + * itself, whether it's a file or a directory. Release it here. + */ + IRELE(ip); + + /* + * In the case of a regular file, _xfs_filestream_update_ag() also takes a + * ref on the parent inode to keep it in-core. Release that too. + */ + if (item->pip) + IRELE(item->pip); + + TRACE_FREE(ip->i_mount, ip, item->pip, item->ag, + xfs_filestream_peek_ag(ip->i_mount, item->ag)); + + /* Finally, free the memory allocated for the item. */ + kmem_zone_free(item_zone, item); +} + +/* + * xfs_filestream_init() is called at xfs initialisation time to set up the + * memory zone that will be used for filestream data structure allocation. + */ +int +xfs_filestream_init(void) +{ + item_zone = kmem_zone_init(sizeof(fstrm_item_t), "fstrm_item"); +#ifdef XFS_FILESTREAMS_TRACE + xfs_filestreams_trace_buf = ktrace_alloc(XFS_FSTRM_KTRACE_SIZE, KM_SLEEP); +#endif + return item_zone ? 0 : -ENOMEM; +} + +/* + * xfs_filestream_uninit() is called at xfs termination time to destroy the + * memory zone that was used for filestream data structure allocation. + */ +void +xfs_filestream_uninit(void) +{ +#ifdef XFS_FILESTREAMS_TRACE + ktrace_free(xfs_filestreams_trace_buf); +#endif + kmem_zone_destroy(item_zone); +} + +/* + * xfs_filestream_mount() is called when a file system is mounted with the + * filestream option. It is responsible for allocating the data structures + * needed to track the new file system's file streams. + */ +int +xfs_filestream_mount( + xfs_mount_t *mp) +{ + int err; + unsigned int lifetime, grp_count; + + /* + * The filestream timer tunable is currently fixed within the range of + * one second to four minutes, with five seconds being the default. The + * group count is somewhat arbitrary, but it'd be nice to adhere to the + * timer tunable to within about 10 percent. This requires at least 10 + * groups. 
+ */ + lifetime = xfs_fstrm_centisecs * 10; + grp_count = 10; + + err = xfs_mru_cache_create(&mp->m_filestream, lifetime, grp_count, + (xfs_mru_cache_free_func_t)xfs_fstrm_free_func); + + return err; +} + +/* + * xfs_filestream_unmount() is called when a file system that was mounted with + * the filestream option is unmounted. It drains the data structures created + * to track the file system's file streams and frees all the memory that was + * allocated. + */ +void +xfs_filestream_unmount( + xfs_mount_t *mp) +{ + xfs_mru_cache_destroy(mp->m_filestream); +} + +/* + * If the mount point's m_perag array is going to be reallocated, all + * outstanding cache entries must be flushed to avoid accessing reference count + * addresses that have been freed. The call to xfs_filestream_flush() must be + * made inside the block that holds the m_peraglock in write mode to do the + * reallocation. + */ +void +xfs_filestream_flush( + xfs_mount_t *mp) +{ + /* point in time flush, so keep the reaper running */ + xfs_mru_cache_flush(mp->m_filestream, 1); +} + +/* + * Return the AG of the filestream the file or directory belongs to, or + * NULLAGNUMBER otherwise. + */ +xfs_agnumber_t +xfs_filestream_lookup_ag( + xfs_inode_t *ip) +{ + xfs_mru_cache_t *cache; + fstrm_item_t *item; + xfs_agnumber_t ag; + int ref; + + if (!(ip->i_d.di_mode & (S_IFREG | S_IFDIR))) { + ASSERT(0); + return NULLAGNUMBER; + } + + cache = ip->i_mount->m_filestream; + item = xfs_mru_cache_lookup(cache, ip->i_ino); + if (!item) { + TRACE_LOOKUP(ip->i_mount, ip, NULL, NULLAGNUMBER, 0); + return NULLAGNUMBER; + } + + ASSERT(ip == item->ip); + ag = item->ag; + ref = xfs_filestream_peek_ag(ip->i_mount, ag); + xfs_mru_cache_done(cache); + + TRACE_LOOKUP(ip->i_mount, ip, item->pip, ag, ref); + return ag; +} + +/* + * xfs_filestream_associate() should only be called to associate a regular file + * with its parent directory. 
Calling it with a child directory isn't + * appropriate because filestreams don't apply to entire directory hierarchies. + * Creating a file in a child directory of an existing filestream directory + * starts a new filestream with its own allocation group association. + */ +int +xfs_filestream_associate( + xfs_inode_t *pip, + xfs_inode_t *ip) +{ + xfs_mount_t *mp; + xfs_mru_cache_t *cache; + fstrm_item_t *item; + xfs_agnumber_t ag, rotorstep, startag; + int err = 0; + + ASSERT(pip->i_d.di_mode & S_IFDIR); + ASSERT(ip->i_d.di_mode & S_IFREG); + if (!(pip->i_d.di_mode & S_IFDIR) || !(ip->i_d.di_mode & S_IFREG)) + return EINVAL; + + mp = pip->i_mount; + cache = mp->m_filestream; + down_read(&mp->m_peraglock); + xfs_ilock(pip, XFS_IOLOCK_EXCL); + + /* If the parent directory is already in the cache, use its AG. */ + item = xfs_mru_cache_lookup(cache, pip->i_ino); + if (item) { + ASSERT(item->ip == pip); + ag = item->ag; + xfs_mru_cache_done(cache); + + TRACE_LOOKUP(mp, pip, pip, ag, xfs_filestream_peek_ag(mp, ag)); + err = _xfs_filestream_update_ag(ip, pip, ag); + + goto exit; + } + + /* + * Set the starting AG using the rotor for inode32, otherwise + * use the directory inode's AG. + */ + if (mp->m_flags & XFS_MOUNT_32BITINODES) { + rotorstep = xfs_rotorstep; + startag = (mp->m_agfrotor / rotorstep) % mp->m_sb.sb_agcount; + mp->m_agfrotor = (mp->m_agfrotor + 1) % + (mp->m_sb.sb_agcount * rotorstep); + } else + startag = XFS_INO_TO_AGNO(mp, pip->i_ino); + + /* Pick a new AG for the parent inode starting at startag. */ + err = _xfs_filestream_pick_ag(mp, startag, &ag, 0, 0); + if (err || ag == NULLAGNUMBER) + goto exit_did_pick; + + /* Associate the parent inode with the AG. */ + err = _xfs_filestream_update_ag(pip, NULL, ag); + if (err) + goto exit_did_pick; + + /* Associate the file inode with the AG. 
*/ + err = _xfs_filestream_update_ag(ip, pip, ag); + if (err) + goto exit_did_pick; + + TRACE_ASSOCIATE(mp, ip, pip, ag, xfs_filestream_peek_ag(mp, ag)); + +exit_did_pick: + /* + * If _xfs_filestream_pick_ag() returned a valid AG, remove the + * reference it took on it, since the file and directory will have taken + * their own now if they were successfully cached. + */ + if (ag != NULLAGNUMBER) + xfs_filestream_put_ag(mp, ag); + +exit: + xfs_iunlock(pip, XFS_IOLOCK_EXCL); + up_read(&mp->m_peraglock); + return err; +} + +/* + * Pick a new allocation group for the current file and its file stream. This + * function is called by xfs_bmap_filestreams() with the mount point's per-ag + * lock held. + */ +int +xfs_filestream_new_ag( + xfs_bmalloca_t *ap, + xfs_agnumber_t *agp) +{ + int flags, err; + xfs_inode_t *ip, *pip = NULL; + xfs_mount_t *mp; + xfs_mru_cache_t *cache; + xfs_extlen_t minlen; + fstrm_item_t *dir, *file; + xfs_agnumber_t ag = NULLAGNUMBER; + + ip = ap->ip; + mp = ip->i_mount; + cache = mp->m_filestream; + minlen = ap->alen; + *agp = NULLAGNUMBER; + + /* + * Look for the file in the cache, removing it if it's found. Doing + * this allows it to be held across the dir lookup that follows. + */ + file = xfs_mru_cache_remove(cache, ip->i_ino); + if (file) { + ASSERT(ip == file->ip); + + /* Save the file's parent inode and old AG number for later. */ + pip = file->pip; + ag = file->ag; + + /* Look for the file's directory in the cache. */ + dir = xfs_mru_cache_lookup(cache, pip->i_ino); + if (dir) { + ASSERT(pip == dir->ip); + + /* + * If the directory has already moved on to a new AG, + * use that AG as the new AG for the file. Don't + * forget to twiddle the AG refcounts to match the + * movement. + */ + if (dir->ag != file->ag) { + xfs_filestream_put_ag(mp, file->ag); + xfs_filestream_get_ag(mp, dir->ag); + *agp = file->ag = dir->ag; + } + + xfs_mru_cache_done(cache); + } + + /* + * Put the file back in the cache. 
If this fails, the free + * function needs to be called to tidy up in the same way as if + * the item had simply expired from the cache. + */ + err = xfs_mru_cache_insert(cache, ip->i_ino, file); + if (err) { + xfs_fstrm_free_func(ip->i_ino, file); + return err; + } + + /* + * If the file's AG was moved to the directory's new AG, there's + * nothing more to be done. + */ + if (*agp != NULLAGNUMBER) { + TRACE_MOVEAG(mp, ip, pip, + ag, xfs_filestream_peek_ag(mp, ag), + *agp, xfs_filestream_peek_ag(mp, *agp)); + return 0; + } + } + + /* + * If the file's parent directory is known, take its iolock in exclusive + * mode to prevent two sibling files from racing each other to migrate + * themselves and their parent to different AGs. + */ + if (pip) + xfs_ilock(pip, XFS_IOLOCK_EXCL); + + /* + * A new AG needs to be found for the file. If the file's parent + * directory is also known, it will be moved to the new AG as well to + * ensure that files created inside it in future use the new AG. + */ + ag = (ag == NULLAGNUMBER) ? 0 : (ag + 1) % mp->m_sb.sb_agcount; + flags = (ap->userdata ? XFS_PICK_USERDATA : 0) | + (ap->low ? XFS_PICK_LOWSPACE : 0); + + err = _xfs_filestream_pick_ag(mp, ag, agp, flags, minlen); + if (err || *agp == NULLAGNUMBER) + goto exit; + + /* + * If the file wasn't found in the file cache, then its parent directory + * inode isn't known. For this to have happened, the file must either + * be pre-existing, or it was created long enough ago that its cache + * entry has expired. This isn't the sort of usage that the filestreams + * allocator is trying to optimise, so there's no point trying to track + * its new AG somehow in the filestream data structures. + */ + if (!pip) { + TRACE_ORPHAN(mp, ip, *agp); + goto exit; + } + + /* Associate the parent inode with the AG. */ + err = _xfs_filestream_update_ag(pip, NULL, *agp); + if (err) + goto exit; + + /* Associate the file inode with the AG. 
*/ + err = _xfs_filestream_update_ag(ip, pip, *agp); + if (err) + goto exit; + + TRACE_MOVEAG(mp, ip, pip, NULLAGNUMBER, 0, + *agp, xfs_filestream_peek_ag(mp, *agp)); + +exit: + /* + * If _xfs_filestream_pick_ag() returned a valid AG, remove the + * reference it took on it, since the file and directory will have taken + * their own now if they were successfully cached. + */ + if (*agp != NULLAGNUMBER) + xfs_filestream_put_ag(mp, *agp); + else + *agp = 0; + + if (pip) + xfs_iunlock(pip, XFS_IOLOCK_EXCL); + + return err; +} + +/* + * Remove an association between an inode and a filestream object. + * Typically this is done on last close of an unlinked file. + */ +void +xfs_filestream_deassociate( + xfs_inode_t *ip) +{ + xfs_mru_cache_t *cache = ip->i_mount->m_filestream; + + xfs_mru_cache_delete(cache, ip->i_ino); +} Index: 2.6.x-xfs-new/fs/xfs/xfs_filestream.h =================================================================== --- /dev/null 1970-01-01 00:00:00.000000000 +0000 +++ 2.6.x-xfs-new/fs/xfs/xfs_filestream.h 2007-06-13 14:11:28.756283768 +1000 @@ -0,0 +1,135 @@ +/* + * Copyright (c) 2000-2002,2005 Silicon Graphics, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ +#ifndef __XFS_FILESTREAM_H__ +#define __XFS_FILESTREAM_H__ + +#ifdef __KERNEL__ + +struct xfs_mount; +struct xfs_inode; +struct xfs_perag; +struct xfs_bmalloca; + +#ifdef XFS_FILESTREAMS_TRACE +#define XFS_FSTRM_KTRACE_INFO 1 +#define XFS_FSTRM_KTRACE_AGSCAN 2 +#define XFS_FSTRM_KTRACE_AGPICK1 3 +#define XFS_FSTRM_KTRACE_AGPICK2 4 +#define XFS_FSTRM_KTRACE_UPDATE 5 +#define XFS_FSTRM_KTRACE_FREE 6 +#define XFS_FSTRM_KTRACE_ITEM_LOOKUP 7 +#define XFS_FSTRM_KTRACE_ASSOCIATE 8 +#define XFS_FSTRM_KTRACE_MOVEAG 9 +#define XFS_FSTRM_KTRACE_ORPHAN 10 + +#define XFS_FSTRM_KTRACE_SIZE 16384 +extern ktrace_t *xfs_filestreams_trace_buf; + +#endif + +/* + * Allocation group filestream associations are tracked with per-ag atomic + * counters. These counters allow _xfs_filestream_pick_ag() to tell whether a + * particular AG already has active filestreams associated with it. The mount + * point's m_peraglock is used to protect these counters from per-ag array + * re-allocation during a growfs operation. When xfs_growfs_data_private() is + * about to reallocate the array, it calls xfs_filestream_flush() with the + * m_peraglock held in write mode. + * + * Since xfs_mru_cache_flush() guarantees that all the free functions for all + * the cache elements have finished executing before it returns, it's safe for + * the free functions to use the atomic counters without m_peraglock protection. + * This allows the implementation of xfs_fstrm_free_func() to be agnostic about + * whether it was called with the m_peraglock held in read mode, write mode or + * not held at all. 
The race condition this addresses is the following: + * + * - The work queue scheduler fires and pulls a filestream directory cache + * element off the LRU end of the cache for deletion, then gets pre-empted. + * - A growfs operation grabs the m_peraglock in write mode, flushes all the + * remaining items from the cache and reallocates the mount point's per-ag + * array, resetting all the counters to zero. + * - The work queue thread resumes and calls the free function for the element + * it started cleaning up earlier. In the process it decrements the + * filestreams counter for an AG that now has no references. + * + * With a shrinkfs feature, the above scenario could panic the system. + * + * All other uses of the following macros should be protected by either the + * m_peraglock held in read mode, or the cache's internal locking exposed by the + * interval between a call to xfs_mru_cache_lookup() and a call to + * xfs_mru_cache_done(). In addition, the m_peraglock must be held in read mode + * when new elements are added to the cache. + * + * Combined, these locking rules ensure that no associations will ever exist in + * the cache that reference per-ag array elements that have since been + * reallocated. 
+ */ +STATIC_INLINE int +xfs_filestream_peek_ag( + xfs_mount_t *mp, + xfs_agnumber_t agno) +{ + return atomic_read(&mp->m_perag[agno].pagf_fstrms); +} + +STATIC_INLINE int +xfs_filestream_get_ag( + xfs_mount_t *mp, + xfs_agnumber_t agno) +{ + return atomic_inc_return(&mp->m_perag[agno].pagf_fstrms); +} + +STATIC_INLINE int +xfs_filestream_put_ag( + xfs_mount_t *mp, + xfs_agnumber_t agno) +{ + return atomic_dec_return(&mp->m_perag[agno].pagf_fstrms); +} + +/* allocation selection flags */ +typedef enum xfs_fstrm_alloc { + XFS_PICK_USERDATA = 1, + XFS_PICK_LOWSPACE = 2, +} xfs_fstrm_alloc_t; + +/* prototypes for filestream.c */ +int xfs_filestream_init(void); +void xfs_filestream_uninit(void); +int xfs_filestream_mount(struct xfs_mount *mp); +void xfs_filestream_unmount(struct xfs_mount *mp); +void xfs_filestream_flush(struct xfs_mount *mp); +xfs_agnumber_t xfs_filestream_lookup_ag(struct xfs_inode *ip); +int xfs_filestream_associate(struct xfs_inode *dip, struct xfs_inode *ip); +void xfs_filestream_deassociate(struct xfs_inode *ip); +int xfs_filestream_new_ag(struct xfs_bmalloca *ap, xfs_agnumber_t *agp); + + +/* filestreams for the inode? 
*/ +STATIC_INLINE int +xfs_inode_is_filestream( + struct xfs_inode *ip) +{ + return (ip->i_mount->m_flags & XFS_MOUNT_FILESTREAMS) || + (ip->i_d.di_flags & XFS_DIFLAG_FILESTREAM); +} + +#endif /* __KERNEL__ */ + +#endif /* __XFS_FILESTREAM_H__ */ Index: 2.6.x-xfs-new/fs/xfs/xfs_fs.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_fs.h 2007-06-13 13:58:15.767513033 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_fs.h 2007-06-13 14:11:28.760283246 +1000 @@ -66,6 +66,7 @@ struct fsxattr { #define XFS_XFLAG_EXTSIZE 0x00000800 /* extent size allocator hint */ #define XFS_XFLAG_EXTSZINHERIT 0x00001000 /* inherit inode extent size */ #define XFS_XFLAG_NODEFRAG 0x00002000 /* do not defragment */ +#define XFS_XFLAG_FILESTREAM 0x00004000 /* use filestream allocator */ #define XFS_XFLAG_HASATTR 0x80000000 /* no DIFLAG for this */ /* Index: 2.6.x-xfs-new/fs/xfs/xfs_fsops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_fsops.c 2007-06-13 13:58:15.767513033 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_fsops.c 2007-06-13 14:11:28.764282724 +1000 @@ -44,6 +44,7 @@ #include "xfs_trans_space.h" #include "xfs_rtalloc.h" #include "xfs_rw.h" +#include "xfs_filestream.h" /* * File system operations @@ -165,6 +166,7 @@ xfs_growfs_data_private( new = nb - mp->m_sb.sb_dblocks; oagcount = mp->m_sb.sb_agcount; if (nagcount > oagcount) { + xfs_filestream_flush(mp); down_write(&mp->m_peraglock); mp->m_perag = kmem_realloc(mp->m_perag, sizeof(xfs_perag_t) * nagcount, Index: 2.6.x-xfs-new/fs/xfs/xfs_inode.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_inode.c 2007-06-13 13:58:15.783510960 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_inode.c 2007-06-13 14:11:28.780280636 +1000 @@ -48,6 +48,7 @@ #include "xfs_dir2_trace.h" #include "xfs_quota.h" #include "xfs_acl.h" +#include "xfs_filestream.h" kmem_zone_t *xfs_ifork_zone; @@ -817,6 +818,8 @@ _xfs_dic2xflags( flags 
|= XFS_XFLAG_EXTSZINHERIT; if (di_flags & XFS_DIFLAG_NODEFRAG) flags |= XFS_XFLAG_NODEFRAG; + if (di_flags & XFS_DIFLAG_FILESTREAM) + flags |= XFS_XFLAG_FILESTREAM; } return flags; @@ -1099,7 +1102,7 @@ xfs_ialloc( * Call the space management code to pick * the on-disk inode to be allocated. */ - error = xfs_dialloc(tp, pip->i_ino, mode, okalloc, + error = xfs_dialloc(tp, pip ? pip->i_ino : 0, mode, okalloc, ialloc_context, call_again, &ino); if (error != 0) { return error; @@ -1153,7 +1156,7 @@ xfs_ialloc( if ( (prid != 0) && (ip->i_d.di_version == XFS_DINODE_VERSION_1)) xfs_bump_ino_vers2(tp, ip); - if (XFS_INHERIT_GID(pip, vp->v_vfsp)) { + if (pip && XFS_INHERIT_GID(pip, vp->v_vfsp)) { ip->i_d.di_gid = pip->i_d.di_gid; if ((pip->i_d.di_mode & S_ISGID) && (mode & S_IFMT) == S_IFDIR) { ip->i_d.di_mode |= S_ISGID; @@ -1195,8 +1198,14 @@ xfs_ialloc( flags |= XFS_ILOG_DEV; break; case S_IFREG: + if (unlikely(pip && xfs_inode_is_filestream(pip))) { + error = xfs_filestream_associate(pip, ip); + if (error) + return error; + } + /* fall through */ case S_IFDIR: - if (unlikely(pip->i_d.di_flags & XFS_DIFLAG_ANY)) { + if (unlikely(pip && (pip->i_d.di_flags & XFS_DIFLAG_ANY))) { uint di_flags = 0; if ((mode & S_IFMT) == S_IFDIR) { @@ -1233,6 +1242,8 @@ xfs_ialloc( if ((pip->i_d.di_flags & XFS_DIFLAG_NODEFRAG) && xfs_inherit_nodefrag) di_flags |= XFS_DIFLAG_NODEFRAG; + if (pip->i_d.di_flags & XFS_DIFLAG_FILESTREAM) + di_flags |= XFS_DIFLAG_FILESTREAM; ip->i_d.di_flags |= di_flags; } /* FALLTHROUGH */ Index: 2.6.x-xfs-new/fs/xfs/xfs_mount.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_mount.h 2007-06-13 13:58:15.783510960 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_mount.h 2007-06-13 14:11:28.788279592 +1000 @@ -66,6 +66,7 @@ struct xfs_bmbt_irec; struct xfs_bmap_free; struct xfs_extdelta; struct xfs_swapext; +struct xfs_mru_cache; extern struct bhv_vfsops xfs_vfsops; extern struct bhv_vnodeops xfs_vnodeops; @@ -436,6 +437,7 @@ 
typedef struct xfs_mount { struct notifier_block m_icsb_notifier; /* hotplug cpu notifier */ struct mutex m_icsb_mutex; /* balancer sync lock */ #endif + struct xfs_mru_cache *m_filestream; /* per-mount filestream data */ } xfs_mount_t; /* @@ -475,6 +477,8 @@ typedef struct xfs_mount { * I/O size in stat() */ #define XFS_MOUNT_NO_PERCPU_SB (1ULL << 23) /* don't use per-cpu superblock counters */ +#define XFS_MOUNT_FILESTREAMS (1ULL << 24) /* enable the filestreams + allocator */ /* Index: 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.c =================================================================== --- /dev/null 1970-01-01 00:00:00.000000000 +0000 +++ 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.c 2007-06-13 14:11:28.788279592 +1000 @@ -0,0 +1,494 @@ +/* + * Copyright (c) 2000-2002,2006 Silicon Graphics, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ +#include "xfs.h" +#include "xfs_mru_cache.h" + +/* + * An MRU Cache is a dynamic data structure that stores its elements in a way + * that allows efficient lookups, but also groups them into discrete time + * intervals based on insertion time. This allows elements to be efficiently + * and automatically reaped after a fixed period of inactivity. + * + * When a client data pointer is stored in the MRU Cache it needs to be added to + * both the data store and to one of the lists. 
It must also be possible to + * access each of these entries via the other, i.e. to: + * + * a) Walk a list, removing the corresponding data store entry for each item. + * b) Look up a data store entry, then access its list entry directly. + * + * To achieve both of these goals, each entry must contain both a list entry and + * a key, in addition to the user's data pointer. Note that it's not a good + * idea to have the client embed one of these structures at the top of their own + * data structure, because inserting the same item more than once would most + * likely result in a loop in one of the lists. That's a sure-fire recipe for + * an infinite loop in the code. + */ +typedef struct xfs_mru_cache_elem +{ + struct list_head list_node; + unsigned long key; + void *value; +} xfs_mru_cache_elem_t; + +static kmem_zone_t *xfs_mru_elem_zone; +static struct workqueue_struct *xfs_mru_reap_wq; + +/* + * When inserting, destroying or reaping, it's first necessary to update the + * lists relative to a particular time. In the case of destroying, that time + * will be well in the future to ensure that all items are moved to the reap + * list. In all other cases though, the time will be the current time. + * + * This function enters a loop, moving the contents of the LRU list to the reap + * list again and again until either a) the lists are all empty, or b) time zero + * has been advanced sufficiently to be within the immediate element lifetime. + * + * Case a) above is detected by counting how many groups are migrated and + * stopping when they've all been moved. Case b) is detected by monitoring the + * time_zero field, which is updated as each group is migrated. + * + * The return value is the earliest time that more migration could be needed, or + * zero if there's no need to schedule more work because the lists are empty. 
+ */ +STATIC unsigned long +_xfs_mru_cache_migrate( + xfs_mru_cache_t *mru, + unsigned long now) +{ + unsigned int grp; + unsigned int migrated = 0; + struct list_head *lru_list; + + /* Nothing to do if the data store is empty. */ + if (!mru->time_zero) + return 0; + + /* While time zero is older than the time spanned by all the lists. */ + while (mru->time_zero <= now - mru->grp_count * mru->grp_time) { + + /* + * If the LRU list isn't empty, migrate its elements to the tail + * of the reap list. + */ + lru_list = mru->lists + mru->lru_grp; + if (!list_empty(lru_list)) + list_splice_init(lru_list, mru->reap_list.prev); + + /* + * Advance the LRU group number, freeing the old LRU list to + * become the new MRU list; advance time zero accordingly. + */ + mru->lru_grp = (mru->lru_grp + 1) % mru->grp_count; + mru->time_zero += mru->grp_time; + + /* + * If reaping is so far behind that all the elements on all the + * lists have been migrated to the reap list, it's now empty. + */ + if (++migrated == mru->grp_count) { + mru->lru_grp = 0; + mru->time_zero = 0; + return 0; + } + } + + /* Find the first non-empty list from the LRU end. */ + for (grp = 0; grp < mru->grp_count; grp++) { + + /* Check the grp'th list from the LRU end. */ + lru_list = mru->lists + ((mru->lru_grp + grp) % mru->grp_count); + if (!list_empty(lru_list)) + return mru->time_zero + + (mru->grp_count + grp) * mru->grp_time; + } + + /* All the lists must be empty. */ + mru->lru_grp = 0; + mru->time_zero = 0; + return 0; +} + +/* + * When inserting or doing a lookup, an element needs to be inserted into the + * MRU list. The lists must be migrated first to ensure that they're + * up-to-date, otherwise the new element could be given a shorter lifetime in + * the cache than it should. 
+ */ +STATIC void +_xfs_mru_cache_list_insert( + xfs_mru_cache_t *mru, + xfs_mru_cache_elem_t *elem) +{ + unsigned int grp = 0; + unsigned long now = jiffies; + + /* + * If the data store is empty, initialise time zero, leave grp set to + * zero and start the work queue timer if necessary. Otherwise, set grp + * to the number of group times that have elapsed since time zero. + */ + if (!_xfs_mru_cache_migrate(mru, now)) { + mru->time_zero = now; + if (!mru->next_reap) + mru->next_reap = mru->grp_count * mru->grp_time; + } else { + grp = (now - mru->time_zero) / mru->grp_time; + grp = (mru->lru_grp + grp) % mru->grp_count; + } + + /* Insert the element at the tail of the corresponding list. */ + list_add_tail(&elem->list_node, mru->lists + grp); +} + +/* + * When destroying or reaping, all the elements that were migrated to the reap + * list need to be deleted. For each element this involves removing it from the + * data store, removing it from the reap list, calling the client's free + * function and deleting the element from the element zone. + */ +STATIC void +_xfs_mru_cache_clear_reap_list( + xfs_mru_cache_t *mru) +{ + xfs_mru_cache_elem_t *elem, *next; + struct list_head tmp; + + INIT_LIST_HEAD(&tmp); + list_for_each_entry_safe(elem, next, &mru->reap_list, list_node) { + + /* Remove the element from the data store. */ + radix_tree_delete(&mru->store, elem->key); + + /* + * remove to temp list so it can be freed without + * needing to hold the lock + */ + list_move(&elem->list_node, &tmp); + } + mutex_spinunlock(&mru->lock, 0); + + list_for_each_entry_safe(elem, next, &tmp, list_node) { + + /* Remove the element from the reap list. */ + list_del_init(&elem->list_node); + + /* Call the client's free function with the key and value pointer. */ + mru->free_func(elem->key, elem->value); + + /* Free the element structure. 
*/ + kmem_zone_free(xfs_mru_elem_zone, elem); + } + + mutex_spinlock(&mru->lock); +} + +/* + * We fire the reap timer every group expiry interval so + * we always have a reaper ready to run. This makes shutdown + * and flushing of the reaper easy to do. Hence we need to + * keep track of when the next reap must occur so we can determine + * at each interval whether there is anything we need to do. + */ +STATIC void +_xfs_mru_cache_reap( + struct work_struct *work) +{ + xfs_mru_cache_t *mru = container_of(work, xfs_mru_cache_t, work.work); + unsigned long now; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return; + + mutex_spinlock(&mru->lock); + now = jiffies; + if (mru->reap_all || + (mru->next_reap && time_after(now, mru->next_reap))) { + if (mru->reap_all) + now += mru->grp_count * mru->grp_time * 2; + mru->next_reap = _xfs_mru_cache_migrate(mru, now); + _xfs_mru_cache_clear_reap_list(mru); + } + + /* + * The process that triggered the reap_all is responsible + * for restarting the periodic reap if it is required.
+ */ + if (!mru->reap_all) + queue_delayed_work(xfs_mru_reap_wq, &mru->work, mru->grp_time); + mru->reap_all = 0; + mutex_spinunlock(&mru->lock, 0); +} + +int +xfs_mru_cache_init(void) +{ + xfs_mru_elem_zone = kmem_zone_init(sizeof(xfs_mru_cache_elem_t), + "xfs_mru_cache_elem"); + if (!xfs_mru_elem_zone) + return ENOMEM; + + xfs_mru_reap_wq = create_singlethread_workqueue("xfs_mru_cache"); + if (!xfs_mru_reap_wq) { + kmem_zone_destroy(xfs_mru_elem_zone); + return ENOMEM; + } + + return 0; +} + +void +xfs_mru_cache_uninit(void) +{ + destroy_workqueue(xfs_mru_reap_wq); + kmem_zone_destroy(xfs_mru_elem_zone); +} + +int +xfs_mru_cache_create( + xfs_mru_cache_t **mrup, + unsigned int lifetime_ms, + unsigned int grp_count, + xfs_mru_cache_free_func_t free_func) +{ + xfs_mru_cache_t *mru = NULL; + int err = 0, grp; + unsigned int grp_time; + + if (mrup) + *mrup = NULL; + + if (!mrup || !grp_count || !lifetime_ms || !free_func) + return EINVAL; + + if (!(grp_time = msecs_to_jiffies(lifetime_ms) / grp_count)) + return EINVAL; + + if (!(mru = kmem_zalloc(sizeof(*mru), KM_SLEEP))) + return ENOMEM; + + /* An extra list is needed to avoid reaping up to a grp_time early. */ + mru->grp_count = grp_count + 1; + mru->lists = kmem_alloc(mru->grp_count * sizeof(*mru->lists), KM_SLEEP); + + if (!mru->lists) { + err = ENOMEM; + goto exit; + } + + for (grp = 0; grp < mru->grp_count; grp++) + INIT_LIST_HEAD(mru->lists + grp); + + /* + * We use GFP_KERNEL radix tree preload and do inserts under a + * spinlock so GFP_ATOMIC is appropriate for the radix tree itself. 
+ */ + INIT_RADIX_TREE(&mru->store, GFP_ATOMIC); + INIT_LIST_HEAD(&mru->reap_list); + spinlock_init(&mru->lock, "xfs_mru_cache"); + INIT_DELAYED_WORK(&mru->work, _xfs_mru_cache_reap); + + mru->grp_time = grp_time; + mru->free_func = free_func; + + /* start up the reaper event */ + mru->next_reap = 0; + mru->reap_all = 0; + queue_delayed_work(xfs_mru_reap_wq, &mru->work, mru->grp_time); + + *mrup = mru; + +exit: + if (err && mru && mru->lists) + kmem_free(mru->lists, mru->grp_count * sizeof(*mru->lists)); + if (err && mru) + kmem_free(mru, sizeof(*mru)); + + return err; +} + +/* + * When flushing, we stop the periodic reaper from running first + * so we don't race with it. If we are flushing on unmount, we + * don't want to restart the reaper again, so the restart is conditional. + * + * Because reaping can drop the last refcount on inodes which can free + * extents, we have to push the reaping off to the workqueue thread + * because we could be called holding locks that extent freeing requires. 
+ */ +void +xfs_mru_cache_flush( + xfs_mru_cache_t *mru, + int restart) +{ + if (!mru || !mru->lists) + return; + + cancel_rearming_delayed_workqueue(xfs_mru_reap_wq, &mru->work); + + mutex_spinlock(&mru->lock); + mru->reap_all = 1; + mutex_spinunlock(&mru->lock, 0); + + queue_work(xfs_mru_reap_wq, &mru->work.work); + flush_workqueue(xfs_mru_reap_wq); + + mutex_spinlock(&mru->lock); + WARN_ON_ONCE(mru->reap_all != 0); + mru->reap_all = 0; + if (restart) + queue_delayed_work(xfs_mru_reap_wq, &mru->work, mru->grp_time); + mutex_spinunlock(&mru->lock, 0); +} + +void +xfs_mru_cache_destroy( + xfs_mru_cache_t *mru) +{ + if (!mru || !mru->lists) + return; + + /* we don't want the reaper to restart here */ + xfs_mru_cache_flush(mru, 0); + + kmem_free(mru->lists, mru->grp_count * sizeof(*mru->lists)); + kmem_free(mru, sizeof(*mru)); +} + +int +xfs_mru_cache_insert( + xfs_mru_cache_t *mru, + unsigned long key, + void *value) +{ + xfs_mru_cache_elem_t *elem; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return EINVAL; + + elem = kmem_zone_zalloc(xfs_mru_elem_zone, KM_SLEEP); + if (!elem) + return ENOMEM; + + if (radix_tree_preload(GFP_KERNEL)) { + kmem_zone_free(xfs_mru_elem_zone, elem); + return ENOMEM; + } + + INIT_LIST_HEAD(&elem->list_node); + elem->key = key; + elem->value = value; + + mutex_spinlock(&mru->lock); + + radix_tree_insert(&mru->store, key, elem); + radix_tree_preload_end(); + _xfs_mru_cache_list_insert(mru, elem); + + mutex_spinunlock(&mru->lock, 0); + + return 0; +} + +void* +xfs_mru_cache_remove( + xfs_mru_cache_t *mru, + unsigned long key) +{ + xfs_mru_cache_elem_t *elem; + void *value = NULL; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return NULL; + + mutex_spinlock(&mru->lock); + elem = radix_tree_delete(&mru->store, key); + if (elem) { + value = elem->value; + list_del(&elem->list_node); + } + + mutex_spinunlock(&mru->lock, 0); + + if (elem) + kmem_zone_free(xfs_mru_elem_zone, elem); + + return value; +} + +void 
+xfs_mru_cache_delete( + xfs_mru_cache_t *mru, + unsigned long key) +{ + void *value = xfs_mru_cache_remove(mru, key); + + if (value) + mru->free_func(key, value); +} + +void* +xfs_mru_cache_lookup( + xfs_mru_cache_t *mru, + unsigned long key) +{ + xfs_mru_cache_elem_t *elem; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return NULL; + + mutex_spinlock(&mru->lock); + elem = radix_tree_lookup(&mru->store, key); + if (elem) { + list_del(&elem->list_node); + _xfs_mru_cache_list_insert(mru, elem); + } + else + mutex_spinunlock(&mru->lock, 0); + + return elem ? elem->value : NULL; +} + +void* +xfs_mru_cache_peek( + xfs_mru_cache_t *mru, + unsigned long key) +{ + xfs_mru_cache_elem_t *elem; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return NULL; + + mutex_spinlock(&mru->lock); + elem = radix_tree_lookup(&mru->store, key); + if (!elem) + mutex_spinunlock(&mru->lock, 0); + + return elem ? elem->value : NULL; +} + +void +xfs_mru_cache_done( + xfs_mru_cache_t *mru) +{ + mutex_spinunlock(&mru->lock, 0); +} Index: 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.h =================================================================== --- /dev/null 1970-01-01 00:00:00.000000000 +0000 +++ 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.h 2007-06-13 14:11:28.792279070 +1000 @@ -0,0 +1,219 @@ +/* + * Copyright (c) 2000-2002,2006 Silicon Graphics, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ +#ifndef __XFS_MRU_CACHE_H__ +#define __XFS_MRU_CACHE_H__ + +/* + * The MRU Cache data structure consists of a data store, an array of lists and + * a lock to protect its internal state. At initialisation time, the client + * supplies an element lifetime in milliseconds and a group count, as well as a + * function pointer to call when deleting elements. A data structure for + * queueing up work in the form of timed callbacks is also included. + * + * The group count controls how many lists are created, and thereby how finely + * the elements are grouped in time. When reaping occurs, all the elements in + * all the lists whose time has expired are deleted. + * + * To give an example of how this works in practice, consider a client that + * initialises an MRU Cache with a lifetime of ten seconds and a group count of + * five. Five internal lists will be created, each representing a two second + * period in time. When the first element is added, time zero for the data + * structure is initialised to the current time. + * + * All the elements added in the first two seconds are appended to the first + * list. Elements added in the third second go into the second list, and so on. + * If an element is accessed at any point, it is removed from its list and + * inserted at the head of the current most-recently-used list. + * + * The reaper function will have nothing to do until at least twelve seconds + * have elapsed since the first element was added. The reason for this is that + * if it were called at t=11s, there could be elements in the first list that + * have only been inactive for nine seconds, so it still does nothing. If it is + * called anywhere between t=12 and t=14 seconds, it will delete all the + * elements that remain in the first list. 
It's therefore possible for elements + * to remain in the data store even after they've been inactive for up to + * (t + t/g) seconds, where t is the inactive element lifetime and g is the + * number of groups. + * + * The above example assumes that the reaper function gets called at least once + * every (t/g) seconds. If it is called less frequently, unused elements will + * accumulate in the reap list until the reaper function is eventually called. + * The current implementation uses work queue callbacks to carefully time the + * reaper function calls, so this should happen rarely, if at all. + * + * From a design perspective, the primary reason for the choice of a list array + * representing discrete time intervals is that it's only practical to reap + * expired elements in groups of some appreciable size. This automatically + * introduces a granularity to element lifetimes, so there's no point storing an + * individual timeout with each element that specifies a more precise reap time. + * The bonus is a saving of sizeof(long) bytes of memory per element stored. + * + * The elements could have been stored in just one list, but an array of + * counters or pointers would need to be maintained to allow them to be divided + * up into discrete time groups. More critically, the process of touching or + * removing an element would involve walking large portions of the entire list, + * which would have a detrimental effect on performance. The additional memory + * requirement for the array of list heads is minimal. + * + * When an element is touched or deleted, it needs to be removed from its + * current list. Doubly linked lists are used to make the list maintenance + * portion of these operations O(1). Since reaper timing can be imprecise, + * inserts and lookups can occur when there are no free lists available. When + * this happens, all the elements on the LRU list need to be migrated to the end + * of the reap list. 
To keep the list maintenance portion of these operations + * O(1) also, list tails need to be accessible without walking the entire list. + * This is the reason why doubly linked list heads are used. + */ + +/* Function pointer type for callback to free a client's data pointer. */ +typedef void (*xfs_mru_cache_free_func_t)(unsigned long, void*); + +typedef struct xfs_mru_cache +{ + struct radix_tree_root store; /* Core storage data structure. */ + struct list_head *lists; /* Array of lists, one per grp. */ + struct list_head reap_list; /* Elements overdue for reaping. */ + spinlock_t lock; /* Lock to protect this struct. */ + unsigned int grp_count; /* Number of discrete groups. */ + unsigned int grp_time; /* Time period spanned by grps. */ + unsigned int lru_grp; /* Group containing time zero. */ + unsigned long time_zero; /* Time first element was added. */ + unsigned long next_reap; /* Time that the reaper should + next do something. */ + unsigned int reap_all; /* if set, reap all lists */ + xfs_mru_cache_free_func_t free_func; /* Function pointer for freeing. */ + struct delayed_work work; /* Workqueue data for reaping. */ +} xfs_mru_cache_t; + +/* + * xfs_mru_cache_init() prepares memory zones and any other globally scoped + * resources. + */ +int +xfs_mru_cache_init(void); + +/* + * xfs_mru_cache_uninit() tears down all the globally scoped resources prepared + * in xfs_mru_cache_init(). + */ +void +xfs_mru_cache_uninit(void); + +/* + * To initialise a struct xfs_mru_cache pointer, call xfs_mru_cache_create() + * with the address of the pointer, a lifetime value in milliseconds, a group + * count and a free function to use when deleting elements. This function + * returns 0 if the initialisation was successful. 
+ */ +int +xfs_mru_cache_create(struct xfs_mru_cache **mrup, + unsigned int lifetime_ms, + unsigned int grp_count, + xfs_mru_cache_free_func_t free_func); + +/* + * Call xfs_mru_cache_flush() to flush out all cached entries, calling their + * free functions as they're deleted. When this function returns, the caller is + * guaranteed that all the free functions for all the elements have finished + * executing. + * + * While we are flushing, we stop the periodic reaper event from triggering. + * Normally, we want to restart this periodic event, but if we are shutting + * down the cache we do not want it restarted, hence the restart parameter + * where 0 = do not restart reaper and 1 = restart reaper. + */ +void +xfs_mru_cache_flush( + xfs_mru_cache_t *mru, + int restart); + +/* + * Call xfs_mru_cache_destroy() with the MRU Cache pointer when the cache is no + * longer needed. + */ +void +xfs_mru_cache_destroy(struct xfs_mru_cache *mru); + +/* + * To insert an element, call xfs_mru_cache_insert() with the data store, the + * element's key and the client data pointer. This function returns 0 on + * success or ENOMEM if memory for the data element couldn't be allocated. + */ +int +xfs_mru_cache_insert(struct xfs_mru_cache *mru, + unsigned long key, + void *value); + +/* + * To remove an element without calling the free function, call + * xfs_mru_cache_remove() with the data store and the element's key. On success + * the client data pointer for the removed element is returned, otherwise this + * function will return a NULL pointer. + */ +void* +xfs_mru_cache_remove(struct xfs_mru_cache *mru, + unsigned long key); + +/* + * To remove an element and call the free function, call xfs_mru_cache_delete() + * with the data store and the element's key. + */ +void +xfs_mru_cache_delete(struct xfs_mru_cache *mru, + unsigned long key); + +/* + * To look up an element using its key, call xfs_mru_cache_lookup() with the + * data store and the element's key. 
If found, the element will be moved to the + * head of the MRU list to indicate that it's been touched. + * + * The internal data structures are protected by a spinlock that is STILL HELD + * when this function returns. Call xfs_mru_cache_done() to release it. Note + * that it is not safe to call any function that might sleep in the interim. + * + * The implementation could have used reference counting to avoid this + * restriction, but since most clients simply want to get, set or test a member + * of the returned data structure, the extra per-element memory isn't warranted. + * + * If the element isn't found, this function returns NULL and the spinlock is + * released. xfs_mru_cache_done() should NOT be called when this occurs. + */ +void* +xfs_mru_cache_lookup(struct xfs_mru_cache *mru, + unsigned long key); + +/* + * To look up an element using its key, but leave its location in the internal + * lists alone, call xfs_mru_cache_peek(). If the element isn't found, this + * function returns NULL. + * + * See the comments above the declaration of the xfs_mru_cache_lookup() function + * for important locking information pertaining to this call. + */ +void* +xfs_mru_cache_peek(struct xfs_mru_cache *mru, + unsigned long key); +/* + * To release the internal data structure spinlock after having performed an + * xfs_mru_cache_lookup() or an xfs_mru_cache_peek(), call xfs_mru_cache_done() + * with the data store pointer. 
+ */ +void +xfs_mru_cache_done(struct xfs_mru_cache *mru); + +#endif /* __XFS_MRU_CACHE_H__ */ Index: 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vfsops.c 2007-06-13 13:58:15.787510441 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c 2007-06-13 14:11:28.880267586 +1000 @@ -51,6 +51,8 @@ #include "xfs_acl.h" #include "xfs_attr.h" #include "xfs_clnt.h" +#include "xfs_mru_cache.h" +#include "xfs_filestream.h" #include "xfs_fsops.h" STATIC int xfs_sync(bhv_desc_t *, int, cred_t *); @@ -81,6 +83,8 @@ xfs_init(void) xfs_dabuf_zone = kmem_zone_init(sizeof(xfs_dabuf_t), "xfs_dabuf"); xfs_ifork_zone = kmem_zone_init(sizeof(xfs_ifork_t), "xfs_ifork"); xfs_acl_zone_init(xfs_acl_zone, "xfs_acl"); + xfs_mru_cache_init(); + xfs_filestream_init(); /* * The size of the zone allocated buf log item is the maximum @@ -164,6 +168,8 @@ xfs_cleanup(void) xfs_cleanup_procfs(); xfs_sysctl_unregister(); xfs_refcache_destroy(); + xfs_filestream_uninit(); + xfs_mru_cache_uninit(); xfs_acl_zone_destroy(xfs_acl_zone); #ifdef XFS_DIR2_TRACE @@ -320,6 +326,9 @@ xfs_start_flags( else mp->m_flags &= ~XFS_MOUNT_BARRIER; + if (ap->flags2 & XFSMNT2_FILESTREAMS) + mp->m_flags |= XFS_MOUNT_FILESTREAMS; + return 0; } @@ -518,6 +527,9 @@ xfs_mount( if (mp->m_flags & XFS_MOUNT_BARRIER) xfs_mountfs_check_barriers(mp); + if ((error = xfs_filestream_mount(mp))) + goto error2; + error = XFS_IOINIT(vfsp, args, flags); if (error) goto error2; @@ -575,6 +587,13 @@ xfs_unmount( */ xfs_refcache_purge_mp(mp); + /* + * Blow away any referenced inode in the filestreams cache. + * This can and will cause log traffic as inodes go inactive + * here. 
+ */ + xfs_filestream_unmount(mp); + XFS_bflush(mp->m_ddev_targp); error = xfs_unmount_flush(mp, 0); if (error) @@ -706,6 +725,7 @@ xfs_mntupdate( mp->m_flags &= ~XFS_MOUNT_BARRIER; } } else if (!(vfsp->vfs_flag & VFS_RDONLY)) { /* rw -> ro */ + xfs_filestream_flush(mp); bhv_vfs_sync(vfsp, SYNC_DATA_QUIESCE, NULL); xfs_attr_quiesce(mp); vfsp->vfs_flag |= VFS_RDONLY; @@ -930,6 +950,9 @@ xfs_sync( { xfs_mount_t *mp = XFS_BHVTOM(bdp); + if (flags & SYNC_IOWAIT) + xfs_filestream_flush(mp); + return xfs_syncsub(mp, flags, NULL); } @@ -1873,6 +1896,8 @@ xfs_parseargs( } else if (!strcmp(this_char, "irixsgid")) { cmn_err(CE_WARN, "XFS: irixsgid is now a sysctl(2) variable, option is deprecated."); + } else if (!strcmp(this_char, "filestreams")) { + args->flags2 |= XFSMNT2_FILESTREAMS; } else { cmn_err(CE_WARN, "XFS: unknown mount option [%s].", this_char); Index: 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vnodeops.c 2007-06-13 13:58:15.855501631 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c 2007-06-13 14:11:28.904264454 +1000 @@ -51,6 +51,7 @@ #include "xfs_refcache.h" #include "xfs_trans_space.h" #include "xfs_log_priv.h" +#include "xfs_filestream.h" STATIC int xfs_open( @@ -94,6 +95,16 @@ xfs_close( return 0; /* + * If we are using filestreams, and we have an unlinked + * file that we are processing the last close on, then nothing + * will be able to reopen and write to this file. Purge this + * inode from the filestreams cache so that it doesn't delay + * teardown of the inode. + */ + if ((ip->i_d.di_nlink == 0) && xfs_inode_is_filestream(ip)) + xfs_filestream_deassociate(ip); + + /* * If we previously truncated this file and removed old data in * the process, we want to initiate "early" writeout on the last * close. 
This is an attempt to combat the notorious NULL files @@ -819,6 +830,8 @@ xfs_setattr( di_flags |= XFS_DIFLAG_PROJINHERIT; if (vap->va_xflags & XFS_XFLAG_NODEFRAG) di_flags |= XFS_DIFLAG_NODEFRAG; + if (vap->va_xflags & XFS_XFLAG_FILESTREAM) + di_flags |= XFS_DIFLAG_FILESTREAM; if ((ip->i_d.di_mode & S_IFMT) == S_IFDIR) { if (vap->va_xflags & XFS_XFLAG_RTINHERIT) di_flags |= XFS_DIFLAG_RTINHERIT; @@ -2563,6 +2576,15 @@ xfs_remove( */ xfs_refcache_purge_ip(ip); + /* + * If we are using filestreams, kill the stream association. + * If the file is still open it may get a new one but that + * will get killed on last close in xfs_close() so we don't + * have to worry about that. + */ + if (link_zero && xfs_inode_is_filestream(ip)) + xfs_filestream_deassociate(ip); + vn_trace_exit(XFS_ITOV(ip), __FUNCTION__, (inst_t *)__return_address); /* Index: 2.6.x-xfs-new/fs/xfs/quota/xfs_qm.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/quota/xfs_qm.c 2007-06-13 13:58:15.875499040 +1000 +++ 2.6.x-xfs-new/fs/xfs/quota/xfs_qm.c 2007-06-13 14:11:28.972255580 +1000 @@ -65,7 +65,6 @@ kmem_zone_t *qm_dqtrxzone; static struct shrinker *xfs_qm_shaker; static cred_t xfs_zerocr; -static xfs_inode_t xfs_zeroino; STATIC void xfs_qm_list_init(xfs_dqlist_t *, char *, int); STATIC void xfs_qm_list_destroy(xfs_dqlist_t *); @@ -1415,7 +1414,7 @@ xfs_qm_qino_alloc( return error; } - if ((error = xfs_dir_ialloc(&tp, &xfs_zeroino, S_IFREG, 1, 0, + if ((error = xfs_dir_ialloc(&tp, NULL, S_IFREG, 1, 0, &xfs_zerocr, 0, 1, ip, &committed))) { xfs_trans_cancel(tp, XFS_TRANS_RELEASE_LOG_RES | XFS_TRANS_ABORT); Index: 2.6.x-xfs-new/fs/xfs/xfs.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs.h 2007-06-13 13:58:15.879498521 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs.h 2007-06-13 14:11:28.972255580 +1000 @@ -38,6 +38,7 @@ #define XFS_RW_TRACE 1 #define XFS_BUF_TRACE 1 #define XFS_VNODE_TRACE 1 +#define 
XFS_FILESTREAMS_TRACE 1 #endif #include Index: 2.6.x-xfs-new/fs/xfs/xfsidbg.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfsidbg.c 2007-06-13 13:58:15.879498521 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfsidbg.c 2007-06-13 14:11:28.984254014 +1000 @@ -63,6 +63,7 @@ #include "quota/xfs_qm.h" #include "xfs_iomap.h" #include "xfs_buf.h" +#include "xfs_filestream.h" MODULE_AUTHOR("Silicon Graphics, Inc."); MODULE_DESCRIPTION("Additional kdb commands for debugging XFS"); @@ -109,6 +110,9 @@ static void xfsidbg_xlog_granttrace(xlog #ifdef XFS_DQUOT_TRACE static void xfsidbg_xqm_dqtrace(xfs_dquot_t *); #endif +#ifdef XFS_FILESTREAMS_TRACE +static void xfsidbg_filestreams_trace(int); +#endif /* @@ -197,6 +201,9 @@ static int xfs_bmbt_trace_entry(ktrace_e #ifdef XFS_DIR2_TRACE static int xfs_dir2_trace_entry(ktrace_entry_t *ktep); #endif +#ifdef XFS_FILESTREAMS_TRACE +static void xfs_filestreams_trace_entry(ktrace_entry_t *ktep); +#endif #ifdef XFS_RW_TRACE static void xfs_bunmap_trace_entry(ktrace_entry_t *ktep); static void xfs_rw_enter_trace_entry(ktrace_entry_t *ktep); @@ -761,6 +768,27 @@ static int kdbm_xfs_xalttrace( } #endif /* XFS_ALLOC_TRACE */ +#ifdef XFS_FILESTREAMS_TRACE +static int kdbm_xfs_xfstrmtrace( + int argc, + const char **argv) +{ + unsigned long addr; + int nextarg = 1; + long offset = 0; + int diag; + + if (argc != 1) + return KDB_ARGCOUNT; + diag = kdbgetaddrarg(argc, argv, &nextarg, &addr, &offset, NULL); + if (diag) + return diag; + + xfsidbg_filestreams_trace((int) addr); + return 0; +} +#endif /* XFS_FILESTREAMS_TRACE */ + static int kdbm_xfs_xattrcontext( int argc, const char **argv) @@ -2639,6 +2667,10 @@ static struct xif xfsidbg_funcs[] = { "Dump XFS bmap extents in inode"}, { "xflist", kdbm_xfs_xflist, "", "Dump XFS to-be-freed extent records"}, +#ifdef XFS_FILESTREAMS_TRACE + { "xfstrmtrc",kdbm_xfs_xfstrmtrace, "", + "Dump filestreams trace buffer"}, +#endif { "xhelp", kdbm_xfs_xhelp, "", "Print 
idbg-xfs help"}, { "xicall", kdbm_xfs_xiclogall, "", @@ -5305,6 +5337,162 @@ xfsidbg_xailock_trace(int count) } #endif +#ifdef XFS_FILESTREAMS_TRACE +static void +xfs_filestreams_trace_entry(ktrace_entry_t *ktep) +{ + xfs_inode_t *ip, *pip; + + /* function:line#[pid]: */ + kdb_printf("%s:%lu[%lu]: ", (char *)ktep->val[1], + ((unsigned long)ktep->val[0] >> 16) & 0xffff, + (unsigned long)ktep->val[2]); + switch ((unsigned long)ktep->val[0] & 0xffff) { + case XFS_FSTRM_KTRACE_INFO: + break; + case XFS_FSTRM_KTRACE_AGSCAN: + kdb_printf("scanning AG %ld[%ld]", + (long)ktep->val[4], (long)ktep->val[5]); + break; + case XFS_FSTRM_KTRACE_AGPICK1: + kdb_printf("using max_ag %ld[1] with maxfree %ld", + (long)ktep->val[4], (long)ktep->val[5]); + break; + case XFS_FSTRM_KTRACE_AGPICK2: + + kdb_printf("startag %ld newag %ld[%ld] free %ld scanned %ld" + " flags 0x%lx", + (long)ktep->val[4], (long)ktep->val[5], + (long)ktep->val[6], (long)ktep->val[7], + (long)ktep->val[8], (long)ktep->val[9]); + break; + case XFS_FSTRM_KTRACE_UPDATE: + ip = (xfs_inode_t *)ktep->val[4]; + if ((__psint_t)ktep->val[5] != (__psint_t)ktep->val[7]) + kdb_printf("found ip %p ino %llu, AG %ld[%ld] ->" + " %ld[%ld]", ip, (unsigned long long)ip->i_ino, + (long)ktep->val[7], (long)ktep->val[8], + (long)ktep->val[5], (long)ktep->val[6]); + else + kdb_printf("found ip %p ino %llu, AG %ld[%ld]", + ip, (unsigned long long)ip->i_ino, + (long)ktep->val[5], (long)ktep->val[6]); + break; + + case XFS_FSTRM_KTRACE_FREE: + ip = (xfs_inode_t *)ktep->val[4]; + pip = (xfs_inode_t *)ktep->val[5]; + if (ip->i_d.di_mode & S_IFDIR) + kdb_printf("deleting dip %p ino %llu, AG %ld[%ld]", + ip, (unsigned long long)ip->i_ino, + (long)ktep->val[6], (long)ktep->val[7]); + else + kdb_printf("deleting file %p ino %llu, pip %p ino %llu" + ", AG %ld[%ld]", + ip, (unsigned long long)ip->i_ino, + pip, (unsigned long long)(pip ? 
pip->i_ino : 0), + (long)ktep->val[6], (long)ktep->val[7]); + break; + + case XFS_FSTRM_KTRACE_ITEM_LOOKUP: + ip = (xfs_inode_t *)ktep->val[4]; + pip = (xfs_inode_t *)ktep->val[5]; + if (!pip) { + kdb_printf("lookup on %s ip %p ino %llu failed, returning %ld", + ip->i_d.di_mode & S_IFREG ? "file" : "dir", ip, + (unsigned long long)ip->i_ino, (long)ktep->val[6]); + } else if (ip->i_d.di_mode & S_IFREG) + kdb_printf("lookup on file ip %p ino %llu dir %p" + " dino %llu got AG %ld[%ld]", + ip, (unsigned long long)ip->i_ino, + pip, (unsigned long long)pip->i_ino, + (long)ktep->val[6], (long)ktep->val[7]); + else + kdb_printf("lookup on dir ip %p ino %llu got AG %ld[%ld]", + ip, (unsigned long long)ip->i_ino, + (long)ktep->val[6], (long)ktep->val[7]); + break; + + case XFS_FSTRM_KTRACE_ASSOCIATE: + ip = (xfs_inode_t *)ktep->val[4]; + pip = (xfs_inode_t *)ktep->val[5]; + kdb_printf("pip %p ino %llu and ip %p ino %llu given ag %ld[%ld]", + pip, (unsigned long long)pip->i_ino, + ip, (unsigned long long)ip->i_ino, + (long)ktep->val[6], (long)ktep->val[7]); + break; + + case XFS_FSTRM_KTRACE_MOVEAG: + ip = ktep->val[4]; + pip = ktep->val[5]; + if ((long)ktep->val[6] != NULLAGNUMBER) + kdb_printf("dir %p ino %llu to file ip %p ino %llu has" + " moved %ld[%ld] -> %ld[%ld]", + pip, (unsigned long long)pip->i_ino, + ip, (unsigned long long)ip->i_ino, + (long)ktep->val[6], (long)ktep->val[7], + (long)ktep->val[8], (long)ktep->val[9]); + else + kdb_printf("pip %p ino %llu and ip %p ino %llu moved" + " to new ag %ld[%ld]", + pip, (unsigned long long)pip->i_ino, + ip, (unsigned long long)ip->i_ino, + (long)ktep->val[8], (long)ktep->val[9]); + break; + + case XFS_FSTRM_KTRACE_ORPHAN: + ip = ktep->val[4]; + kdb_printf("gave ag %lld to orphan ip %p ino %llu", + (__psint_t)ktep->val[5], + ip, (unsigned long long)ip->i_ino); + break; + default: + kdb_printf("unknown trace type 0x%lx", + (unsigned long)ktep->val[0] & 0xffff); + } + kdb_printf("\n"); +} + +static void 
+xfsidbg_filestreams_trace(int count) +{ + ktrace_entry_t *ktep; + ktrace_snap_t kts; + int nentries; + int skip_entries; + + if (xfs_filestreams_trace_buf == NULL) { + qprintf("The xfs filestreams trace buffer is not initialized\n"); + return; + } + nentries = ktrace_nentries(xfs_filestreams_trace_buf); + if (count == -1) { + count = nentries; + } + if ((count <= 0) || (count > nentries)) { + qprintf("Invalid count. There are %d entries.\n", nentries); + return; + } + + ktep = ktrace_first(xfs_filestreams_trace_buf, &kts); + if (count != nentries) { + /* + * Skip the total minus the number to look at minus one + * for the entry returned by ktrace_first(). + */ + skip_entries = nentries - count - 1; + ktep = ktrace_skip(xfs_filestreams_trace_buf, skip_entries, &kts); + if (ktep == NULL) { + qprintf("Skipped them all\n"); + return; + } + } + while (ktep != NULL) { + xfs_filestreams_trace_entry(ktep); + ktep = ktrace_next(xfs_filestreams_trace_buf, &kts); + } +} +#endif /* * Compute & print buffer's checksum. 
*/ From owner-xfs@oss.sgi.com Tue Jun 12 21:28:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 21:28:13 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5D4S7Wt003056 for ; Tue, 12 Jun 2007 21:28:09 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA28768; Wed, 13 Jun 2007 14:28:02 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5D4S1Af120025984; Wed, 13 Jun 2007 14:28:01 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5D4Rx8f121106158; Wed, 13 Jun 2007 14:27:59 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 13 Jun 2007 14:27:58 +1000 From: David Chinner To: lkml Cc: xfs-oss Subject: [PATCH] Export radix_tree_preload() Message-ID: <20070613042758.GJ86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11761 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Upcoming XFS functionality [1] uses radix trees and uses the preload functions. XFS can be built as a module and hence we need radix_tree_preload() exported. radix_tree_preload_end() is a static inline, so it doesn't need exporting. 
[1] http://marc.info/?l=linux-xfs&m=118170839531601&w=2 Signed-Off-By: Dave Chinner --- lib/radix-tree.c | 1 + 1 file changed, 1 insertion(+) Index: 2.6.x-xfs-new/lib/radix-tree.c =================================================================== --- 2.6.x-xfs-new.orig/lib/radix-tree.c 2007-03-29 19:00:53.802804161 +1000 +++ 2.6.x-xfs-new/lib/radix-tree.c 2007-03-29 19:07:10.297495640 +1000 @@ -151,6 +151,7 @@ int radix_tree_preload(gfp_t gfp_mask) out: return ret; } +EXPORT_SYMBOL(radix_tree_preload); static inline void tag_set(struct radix_tree_node *node, unsigned int tag, int offset) From owner-xfs@oss.sgi.com Tue Jun 12 22:26:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 22:26:09 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5D5Q1Wt018400 for ; Tue, 12 Jun 2007 22:26:03 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA00193; Wed, 13 Jun 2007 15:25:58 +1000 Date: Wed, 13 Jun 2007 15:29:06 +1000 To: "xfs@oss.sgi.com" , xfs-dev Subject: REVIEW: Filestreams support for xfs_io chattr command From: "Barry Naujok" Organization: SGI Content-Type: multipart/mixed; boundary=----------b0LlfyjN74BP4pyMdM3DsL MIME-Version: 1.0 Message-ID: User-Agent: Opera Mail/9.10 (Win32) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11762 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs ------------b0LlfyjN74BP4pyMdM3DsL Content-Type: 
text/plain; format=flowed; delsp=yes; charset=iso-8859-15 Content-Transfer-Encoding: 7bit The attached patch lets you enable the filestreams allocator on a per-directory basis which can be used instead of enabling it via the mount option with the xfs_io chattr command. ------------b0LlfyjN74BP4pyMdM3DsL Content-Disposition: attachment; filename=filestream_io.patch Content-Type: application/octet-stream; name=filestream_io.patch Content-Transfer-Encoding: Base64 Cj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQp4ZnNwcm9ncy9pbmNs dWRlL3hmc19kaW5vZGUuaAo9PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT0KCi0tLSBhL3hmc3Byb2dzL2luY2x1ZGUveGZzX2Rpbm9kZS5oCTIwMDct MDYtMTMgMTU6MjA6MDguMDAwMDAwMDAwICsxMDAwCisrKyBiL3hmc3Byb2dz L2luY2x1ZGUveGZzX2Rpbm9kZS5oCTIwMDctMDYtMTMgMTU6MDk6MTIuMjk2 MjgwMzQ0ICsxMDAwCkBAIC0yNTgsNiArMjU4LDcgQEAgdHlwZWRlZiBlbnVt IHhmc19kaW5vZGVfZm10CiAjZGVmaW5lIFhGU19ESUZMQUdfRVhUU0laRV9C SVQgICAgICAxMQkvKiBpbm9kZSBleHRlbnQgc2l6ZSBhbGxvY2F0b3IgaGlu dCAqLwogI2RlZmluZSBYRlNfRElGTEFHX0VYVFNaSU5IRVJJVF9CSVQgMTIJ LyogaW5oZXJpdCBpbm9kZSBleHRlbnQgc2l6ZSAqLwogI2RlZmluZSBYRlNf RElGTEFHX05PREVGUkFHX0JJVCAgICAgMTMJLyogZG8gbm90IHJlb3JnYW5p emUvZGVmcmFnbWVudCAqLworI2RlZmluZSBYRlNfRElGTEFHX0ZJTEVTVFJF QU1fQklUICAgMTQJLyogdXNlIGZpbGVzdHJlYW0gYWxsb2NhdG9yICovCiAj ZGVmaW5lIFhGU19ESUZMQUdfUkVBTFRJTUUgICAgICAoMSA8PCBYRlNfRElG TEFHX1JFQUxUSU1FX0JJVCkKICNkZWZpbmUgWEZTX0RJRkxBR19QUkVBTExP QyAgICAgICgxIDw8IFhGU19ESUZMQUdfUFJFQUxMT0NfQklUKQogI2RlZmlu ZSBYRlNfRElGTEFHX05FV1JUQk0gICAgICAgKDEgPDwgWEZTX0RJRkxBR19O RVdSVEJNX0JJVCkKQEAgLTI3MiwxMiArMjczLDEzIEBAIHR5cGVkZWYgZW51 bSB4ZnNfZGlub2RlX2ZtdAogI2RlZmluZSBYRlNfRElGTEFHX0VYVFNJWkUg ICAgICAgKDEgPDwgWEZTX0RJRkxBR19FWFRTSVpFX0JJVCkKICNkZWZpbmUg WEZTX0RJRkxBR19FWFRTWklOSEVSSVQgICgxIDw8IFhGU19ESUZMQUdfRVhU U1pJTkhFUklUX0JJVCkKICNkZWZpbmUgWEZTX0RJRkxBR19OT0RFRlJBRyAg ICAgICgxIDw8IFhGU19ESUZMQUdfTk9ERUZSQUdfQklUKQorI2RlZmluZSBY 
RlNfRElGTEFHX0ZJTEVTVFJFQU0gICAgKDEgPDwgWEZTX0RJRkxBR19GSUxF U1RSRUFNX0JJVCkKIAogI2RlZmluZSBYRlNfRElGTEFHX0FOWSBcCiAJKFhG U19ESUZMQUdfUkVBTFRJTUUgfCBYRlNfRElGTEFHX1BSRUFMTE9DIHwgWEZT X0RJRkxBR19ORVdSVEJNIHwgXAogCSBYRlNfRElGTEFHX0lNTVVUQUJMRSB8 IFhGU19ESUZMQUdfQVBQRU5EIHwgWEZTX0RJRkxBR19TWU5DIHwgXAogCSBY RlNfRElGTEFHX05PQVRJTUUgfCBYRlNfRElGTEFHX05PRFVNUCB8IFhGU19E SUZMQUdfUlRJTkhFUklUIHwgXAogCSBYRlNfRElGTEFHX1BST0pJTkhFUklU IHwgWEZTX0RJRkxBR19OT1NZTUxJTktTIHwgWEZTX0RJRkxBR19FWFRTSVpF IHwgXAotCSBYRlNfRElGTEFHX0VYVFNaSU5IRVJJVCB8IFhGU19ESUZMQUdf Tk9ERUZSQUcpCisJIFhGU19ESUZMQUdfRVhUU1pJTkhFUklUIHwgWEZTX0RJ RkxBR19OT0RFRlJBRyB8IFhGU19ESUZMQUdfRklMRVNUUkVBTSkKIAogI2Vu ZGlmCS8qIF9fWEZTX0RJTk9ERV9IX18gKi8KCj09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PQp4ZnNwcm9ncy9pbmNsdWRlL3hmc19mcy5oCj09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PQoKLS0tIGEveGZzcHJvZ3MvaW5j bHVkZS94ZnNfZnMuaAkyMDA3LTA2LTEzIDE1OjIwOjA4LjAwMDAwMDAwMCAr MTAwMAorKysgYi94ZnNwcm9ncy9pbmNsdWRlL3hmc19mcy5oCTIwMDctMDYt MTMgMTU6MTk6MDguMzU1MTE3Njk2ICsxMDAwCkBAIC02Nyw3ICs2Nyw4IEBA IHN0cnVjdCBmc3hhdHRyIHsKICNkZWZpbmUgWEZTX1hGTEFHX05PU1lNTElO S1MJMHgwMDAwMDQwMAkvKiBkaXNhbGxvdyBzeW1saW5rIGNyZWF0aW9uICov CiAjZGVmaW5lIFhGU19YRkxBR19FWFRTSVpFCTB4MDAwMDA4MDAJLyogZXh0 ZW50IHNpemUgYWxsb2NhdG9yIGhpbnQgKi8KICNkZWZpbmUgWEZTX1hGTEFH X0VYVFNaSU5IRVJJVAkweDAwMDAxMDAwCS8qIGluaGVyaXQgaW5vZGUgZXh0 ZW50IHNpemUgKi8KLSNkZWZpbmUgWEZTX1hGTEFHX05PREVGUkFHCTB4MDAw MDIwMDAgIAkvKiBkbyBub3QgZGVmcmFnbWVudCAqLworI2RlZmluZSBYRlNf WEZMQUdfTk9ERUZSQUcJMHgwMDAwMjAwMAkvKiBkbyBub3QgZGVmcmFnbWVu dCAqLworI2RlZmluZSBYRlNfWEZMQUdfRklMRVNUUkVBTQkweDAwMDA0MDAw CS8qIHVzZSBmaWxlc3RyZWFtIGFsbG9jYXRvciAqLwogI2RlZmluZSBYRlNf WEZMQUdfSEFTQVRUUgkweDgwMDAwMDAwCS8qIG5vIERJRkxBRyBmb3IgdGhp cwkqLwogCiAvKgpAQCAtNDMxLDcgKzQzMiw3IEBAIHR5cGVkZWYgc3RydWN0 IHhmc19oYW5kbGUgewogCiAjZGVmaW5lIEZTSFNJWkUJCXNpemVvZihmc2lk 
X3QpCiAKLS8qIAorLyoKICAqIEZsYWdzIGZvciBnb2luZyBkb3duIG9wZXJh dGlvbgogICovCiAjZGVmaW5lIFhGU19GU09QX0dPSU5HX0ZMQUdTX0RFRkFV TFQJCTB4MAkvKiBnb2luZyBkb3duICovCgo9PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT0KeGZzcHJvZ3MvaW8vYXR0ci5jCj09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PQoKLS0tIGEveGZzcHJvZ3MvaW8vYXR0ci5jCTIw MDctMDYtMTMgMTU6MjA6MDguMDAwMDAwMDAwICsxMDAwCisrKyBiL3hmc3By b2dzL2lvL2F0dHIuYwkyMDA3LTA2LTEzIDE1OjA4OjM4LjA4NDcxMDUzMyAr MTAwMApAQCAtNDYsMTAgKzQ2LDExIEBAIHN0YXRpYyBzdHJ1Y3QgeGZsYWdz IHsKIAl7IFhGU19YRkxBR19OT1NZTUxJTktTLAkJIm4iLCAibm9zeW1saW5r cyIJfSwKIAl7IFhGU19YRkxBR19FWFRTSVpFLAkJImUiLCAiZXh0c2l6ZSIJ CX0sCiAJeyBYRlNfWEZMQUdfRVhUU1pJTkhFUklULAkiRSIsICJleHRzei1p bmhlcml0Igl9LAotCXsgWEZTX1hGTEFHX05PREVGUkFHLAkgICAgCSJmIiwg Im5vLWRlZnJhZyIJfSwKKwl7IFhGU19YRkxBR19OT0RFRlJBRywJCSJmIiwg Im5vLWRlZnJhZyIJfSwKKwl7IFhGU19YRkxBR19GSUxFU1RSRUFNLAkJIlMi LCAiZmlsZXN0cmVhbSIJfSwKIAl7IDAsIE5VTEwsIE5VTEwgfQogfTsKLSNk ZWZpbmUgQ0hBVFRSX1hGTEFHX0xJU1QJInIiLypwKi8iaWFzQWR0UG5lRWYi CisjZGVmaW5lIENIQVRUUl9YRkxBR19MSVNUCSJyIi8qcCovImlhc0FkdFBu ZUVmUyIKIAogc3RhdGljIHZvaWQKIGxzYXR0cl9oZWxwKHZvaWQpCkBAIC03 Miw2ICs3Myw3IEBAIGxzYXR0cl9oZWxwKHZvaWQpCiAiIGUgLS0gZm9yIG5v bi1yZWFsdGltZSBmaWxlcywgb2JzZXJ2ZSB0aGUgaW5vZGUgZXh0ZW50IHNp emUgdmFsdWVcbiIKICIgRSAtLSBjaGlsZHJlbiBjcmVhdGVkIGluIHRoaXMg ZGlyZWN0b3J5IGluaGVyaXQgdGhlIGV4dGVudCBzaXplIHZhbHVlXG4iCiAi IGYgLS0gZG8gbm90IGluY2x1ZGUgdGhpcyBmaWxlIHdoZW4gZGVmcmFnbWVu dGluZyB0aGUgZmlsZXN5c3RlbVxuIgorIiBTIC0tIGVuYWJsZSBmaWxlc3Ry ZWFtcyBhbGxvY2F0b3IgZm9yIHRoaXMgZGlyZWN0b3J5XG4iCiAiXG4iCiAi IE9wdGlvbnM6XG4iCiAiIC1SIC0tIHJlY3Vyc2l2ZWx5IGRlc2NlbmQgKHVz ZWZ1bCB3aGVuIGN1cnJlbnQgZmlsZSBpcyBhIGRpcmVjdG9yeSlcbiIKQEAg LTEwNiw2ICsxMDgsNyBAQCBjaGF0dHJfaGVscCh2b2lkKQogIiArLy1lIC0t IHNldC9jbGVhciB0aGUgZXh0ZW50LXNpemUgZmxhZ1xuIgogIiArLy1FIC0t IHNldC9jbGVhciB0aGUgZXh0ZW50LXNpemUgaW5oZXJpdGFuY2UgZmxhZ1xu 
IgogIiArLy1mIC0tIHNldC9jbGVhciB0aGUgbm8tZGVmcmFnIGZsYWdcbiIK KyIgKy8tUyAtLSBzZXQvY2xlYXIgdGhlIGZpbGVzdHJlYW1zIGFsbG9jYXRv ciBmbGFnXG4iCiAiIE5vdGUxOiB1c2VyIG11c3QgaGF2ZSBjZXJ0YWluIGNh cGFiaWxpdGllcyB0byBtb2RpZnkgaW1tdXRhYmxlL2FwcGVuZC1vbmx5Llxu IgogIiBOb3RlMjogaW1tdXRhYmxlL2FwcGVuZC1vbmx5IGZpbGVzIGNhbm5v dCBiZSBkZWxldGVkOyByZW1vdmluZyB0aGVzZSBmaWxlc1xuIgogIiAgICAg ICAgcmVxdWlyZXMgdGhlIGltbXV0YWJsZS9hcHBlbmQtb25seSBmbGFnIHRv IGJlIGNsZWFyZWQgZmlyc3QuXG4iCg== ------------b0LlfyjN74BP4pyMdM3DsL-- From owner-xfs@oss.sgi.com Tue Jun 12 23:16:20 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Jun 2007 23:16:23 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.5 required=5.0 tests=BAYES_80,SPF_HELO_PASS, WHOIS_MYPRIVREG autolearn=no version=3.2.0-pre1-r499012 Received: from kuber.nabble.com (kuber.nabble.com [216.139.236.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5D6GJWt005108 for ; Tue, 12 Jun 2007 23:16:20 -0700 Received: from isper.nabble.com ([192.168.236.156]) by kuber.nabble.com with esmtp (Exim 4.63) (envelope-from ) id 1HyM9U-00087s-5n for xfs@oss.sgi.com; Tue, 12 Jun 2007 23:16:20 -0700 Message-ID: <11093354.post@talk.nabble.com> Date: Tue, 12 Jun 2007 23:16:20 -0700 (PDT) From: Raghu Prasad To: xfs@oss.sgi.com Subject: Installation of XFS File system on Fedora3 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Nabble-From: graghu.p@gmail.com X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11763 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: graghu.p@gmail.com Precedence: bulk X-list: xfs Friends, I'm trying to install XFS on Fedora Core 3. I could not find a complete installation guide or steps. Could someone pass me the information? 
Regards, Raghu -- View this message in context: http://www.nabble.com/Installation-of-XFS-File-system-on-Fedora3-tf3912757.html#a11093354 Sent from the Xfs - General mailing list archive at Nabble.com. From owner-xfs@oss.sgi.com Wed Jun 13 01:07:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 01:07:07 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.8 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5D871Wt005710 for ; Wed, 13 Jun 2007 01:07:03 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA03493; Wed, 13 Jun 2007 18:06:57 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 1161) id 2E9F758C38C1; Wed, 13 Jun 2007 18:06:57 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 966238 - libattr Makefile replacing LTLDFLAGS instead of appending Message-Id: <20070613080657.2E9F758C38C1@chook.melbourne.sgi.com> Date: Wed, 13 Jun 2007 18:06:57 +1000 (EST) From: bnaujok@sgi.com (Barry Naujok) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11764 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs LTLDFLAGS set in environment are overwritten, but they shouldn't be. 
Date: Wed Jun 13 18:06:11 AEST 2007 Workarea: chook.melbourne.sgi.com:/home/bnaujok/isms/xfs-cmds Inspected by: Arfrever.FTA@GMail.Com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:28906a attr/libattr/Makefile - 1.16 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/attr/libattr/Makefile.diff?r1=text&tr1=1.16&r2=text&tr2=1.15&f=h - LTLDFLAGS set in environment are overwritten, but they shouldn't be. From owner-xfs@oss.sgi.com Wed Jun 13 04:48:42 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 04:48:49 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.6 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_35, J_CHICKENPOX_36,J_CHICKENPOX_61 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5DBmdWt008995 for ; Wed, 13 Jun 2007 04:48:41 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 1CC8DE6CED; Wed, 13 Jun 2007 12:16:37 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id v-wZjcWsJEuq; Wed, 13 Jun 2007 12:13:32 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id B8F0FE6CD3; Wed, 13 Jun 2007 12:16:34 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1HyQq5-0006Mc-An; Wed, 13 Jun 2007 12:16:37 +0100 Message-ID: <466FD214.9070603@dgreaves.com> Date: Wed, 13 Jun 2007 12:16:36 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: Linus Torvalds Cc: David Chinner , Tejun Heo , "Rafael J. 
Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik Subject: Re: 2.6.22-rc3 hibernate(?) fails totally - regression (xfs on raid6) References: <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> <4667DE2D.6050903@dgreaves.com> <20070607110708.GS86004887@sgi.com> <46680F5E.6070806@dgreaves.com> <20070607222813.GG85884050@sgi.com> <4669A965.20403@dgreaves.com> In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11765 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs Linus Torvalds wrote: > > On Fri, 8 Jun 2007, David Greaves wrote: >> positive: I can now get sysrq-t :) > > Ok, so color me confused, So what do you think that makes me > and maybe I have missed some of the emails or > skimmed over them too fast (there's been too many of them ;), You may have missed these 'tests' with rc4+Tejun's fix: * clean boot, unmounting the xfs fs : normal hibernate/resume * clean boot, remount ro xfs fs : normal hibernate/resume * clean boot, touch; sync; echo 1 > /proc/sys/vm/drop_caches: normal hibernate/resume * clean boot, touch; sync; echo 2 > /proc/sys/vm/drop_caches: hang hibernating * clean boot, touch; sync; echo 3 > /proc/sys/vm/drop_caches: hang hibernating Dave asked me to do them but hasn't responded yet. > but > > - I haven't actually seen any traces for this (netconsole apparently > doesn't work for you, and I'm not surprised: it never really worked > well for me over suspend/resume either, but I think I saw a mention of > serial console?) 
Well, I got netconsole working but needed to build in the skge driver and that changed the behaviour a bit... let me know if that's interesting. I've left skge as a module as originally reported. I have however configured serial and serial console and plugged in a cable so I can capture data there. Sysrq at the end. > - You apparently bisected it down to the range > > 0a3fd051c7036ef71b58863f8e5da7c3dabd9d3f <- works > 1d30c33d8d07868199560b24f10ed6280e78a89c <- breaks > > but some of the intermediates in that range didn't compile. Correct? Yes, then applying the sata_via patch confirmed 9666f4009c22f6520ac3fb8a19c9e32ab973e828 libata: reimplement suspend/resume support using sdev->manage_start_stop was the first to cause the problems. However.... > Can you try to bisect down a bit more, despite the compile error? Just do > > git bisect start > git bisect good 0a3fd051c7036ef71b58863f8e5da7c3dabd9d3f > git bisect bad 1d30c33d8d07868199560b24f10ed6280e78a89c > > and it should pick > > f4d6d004: libata: ignore EH scheduling during initialization > > for you to test. It will apparently break on the fact that "sata_via.c" > wants "ata_scsi_device_resume/suspend" for the initialization of the > resume/suspend things in the scsi_host_template, but you should just > remove those lines, and the compile hopefully completes cleanly after > that. > > IOW, it *should* be easy enough to pinpoint this from 9 changes down to > just one. ... let me reconfirm - there's been a lot of testing and I don't want my recollection causing problems... 
These tests have had the config changed to include serial+console I also configured CONFIG_DISABLE_CONSOLE_SUSPEND=y 2.6.21-gf4d6d004-dirty : bad 2.6.21-g920a4b10-dirty : bad 2.6.21-g9666f400-dirty : bad git-bisect bad 9666f4009c22f6520ac3fb8a19c9e32ab973e828 is first bad commit commit 9666f4009c22f6520ac3fb8a19c9e32ab973e828 Author: Tejun Heo Date: Fri May 4 21:27:47 2007 +0200 libata: reimplement suspend/resume support using sdev->manage_start_stop Good. > Jeff added to the Cc, since he may not have noticed that one of the most > long-running issues is apparently sata-related. > > (Jeff: David Greaves _also_ had issues with -rc4 due to the SETFXSR > change, but that should hopefully be resolved and is presumably an > independent bug. Apart from the fact that "sata_via.c" seems problematic) > > Linus So here's a sysrq-t from a failed resume. Ask if you'd like anything else... SysRq : Show State free sibling task PC stack pid father child younger older init D 00000001 0 1 0 (NOTLB) c1941ea0 00000082 00000000 00000001 00000001 00000000 466fc747 28d2d41a 466fc747 28d2d41a 9120edc4 000001b3 000943a3 00000000 c192e030 c192eb3c 00000086 00001182 9136dece 000001b3 00000000 00000000 00000000 c1941f08 Call Trace: [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] copy_to_user+0x32/0x50 [] sys_select+0x149/0x170 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= kthreadd S F6BA3F04 0 2 0 (L-TLB) c1943fd0 00000046 00000001 f6ba3f04 c1943fb0 c0117627 00000000 00000000 ffffffff c0104a04 f6ba3f04 00000000 00000003 00000296 f72e15b0 c192e63c c1943fd0 00000048 6e2deb31 0000000c 00000a74 c0431298 00000000 00000000 Call Trace: [] __wake_up_common+0x37/0x60 [] kernel_thread_helper+0x0/0x3c [] kthreadd+0x71/0xa0 [] kthreadd+0x0/0xa0 [] kernel_thread_helper+0x7/0x3c ======================= ksoftirqd/0 S 000001B3 0 3 2 (L-TLB) c1945fc0 00000046 68967778 000001b3 000000fc 00000000 f72eaad0 c192e140 00000073 c192ea30 
68967778 000001b3 c1945f90 00000000 f73a5a50 c192e13c c1945fb0 000000a5 9136e544 000001b3 c1945fc0 00000000 c011ef70 fffffffc Call Trace: [] ksoftirqd+0x0/0x90 [] ksoftirqd+0x7b/0x90 [] ksoftirqd+0x0/0x90 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= watchdog/0 S C0369725 0 4 2 (L-TLB) c1947fc0 00000046 c1943f70 c0369725 c1947fd0 00000046 c048e0e0 c1932a50 c1947f90 00000296 2dbc9200 000012ae 5716de49 00000009 c1932550 c1932b5c fffffffc 00000ab5 ab8b6f40 00000004 c1947fd0 00000000 c0140290 fffffffc Call Trace: [] schedule+0x2e5/0x580 [] watchdog+0x0/0x70 [] watchdog+0x4e/0x70 [] watchdog+0x0/0x70 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= events/0 R running 0 5 2 (L-TLB) khelper S 00000286 0 6 2 (L-TLB) c194bf80 00000046 00000060 00000286 00000000 00000000 c1914c20 c0127a80 c194bf60 00000001 f66ebb60 c0127a80 00000000 c0127aa5 f73a5a50 c193215c c1914c20 000008c7 b1f02986 0000000f c194bfd0 c1914c20 c194bfa8 c1914c28 Call Trace: [] __call_usermodehelper+0x0/0x70 [] __call_usermodehelper+0x0/0x70 [] __call_usermodehelper+0x25/0x70 [] worker_thread+0xe4/0xf0 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xf0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= kblockd/0 S C021F890 0 35 2 (L-TLB) c19a7f80 00000046 c19146e0 c021f890 00000000 c021f85b c19146e0 c0225790 f7ea2438 c021f869 5fe478a8 00000009 028a6b4f c021f89b c19b7a70 c19616dc 0000006e 0000003c 8c1ad4be 00000013 ffffff10 c19146e0 c19a7fa8 c19146e8 Call Trace: [] blk_unplug_work+0x0/0x10 [] __generic_unplug_device+0x2b/0x30 [] as_work_handler+0x0/0x20 [] generic_unplug_device+0x9/0x10 [] blk_unplug_work+0xb/0x10 [] worker_thread+0xe4/0xf0 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xf0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c 
======================= kacpid S 000012B0 0 36 2 (L-TLB) c19a9f80 00000046 8e56e600 000012b0 58472b73 00000009 c192e530 ac2395b9 c19a9f60 c0116ce7 ac239cdd 00000004 00000077 00000000 c192e530 c19611dc 00000078 00000117 ac239df7 00000004 c19a9fd0 c19145e0 c19a9fa8 c19145e8 Call Trace: [] activate_task+0x37/0xb0 [] worker_thread+0xe4/0xf0 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xf0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= kacpi_notify S 000012B0 0 37 2 (L-TLB) c19abf80 00000046 8ef1f200 000012b0 584778f9 00000009 c192e530 ac23bc7c c19abf60 c0116ce7 ac6306e3 00000004 00000154 00000000 c1932050 c1969b3c 00000078 000002f9 ac630a09 00000004 c19abfd0 c19145a0 c19abfa8 c19145a8 Call Trace: [] activate_task+0x37/0xb0 [] worker_thread+0xe4/0xf0 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xf0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= ata/0 S 0000000A 0 121 2 (L-TLB) c19ddf80 00000046 00000060 0000000a 00000000 00000009 f7f40000 f7f41e90 00000000 c02c9acb 64c028bc 00000009 00023d66 00000000 c19615d0 c19b3b5c 0000006e 00003906 64c028bc 00000009 c19ddfd0 c19989e0 c19ddfa8 c19989e8 Call Trace: [] ata_pio_task+0x5b/0xe0 [] worker_thread+0xe4/0xf0 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xf0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= ata_aux D F65F09A4 0 122 2 (L-TLB) c19dfd20 00000046 f786c800 f65f09a4 f7ea23b8 c02bcdef ce2e3b9f 4d64703f ffffcfff f7ea23b8 00000082 f7ea2418 0005eb1d 00000082 f7c9f0b0 c196913c f786c800 000003d4 605db977 00000009 f7ea23b8 c19dfe08 f65f09a4 c19dfd3c Call Trace: [] scsi_prep_fn+0x8f/0x130 [] wait_for_completion+0x64/0xa0 [] default_wake_function+0x0/0x10 [] __generic_unplug_device+0x2b/0x30 [] default_wake_function+0x0/0x10 [] 
blk_execute_rq+0xa7/0xe0 [] blk_end_sync_rq+0x0/0x30 [] buffered_rmqueue+0x9f/0x100 [] get_page_from_freelist+0x80/0xc0 [] scsi_execute+0xb8/0x110 [] scsi_execute_req+0x6b/0x90 [] sd_spinup_disk+0x76/0x440 [] sd_revalidate_disk+0x6e/0x160 [] __scsi_disk_get+0x34/0x40 [] sd_rescan+0x1d/0x40 [] scsi_rescan_device+0x40/0x50 [] ata_scsi_dev_rescan+0x5c/0x70 [] ata_scsi_dev_rescan+0x0/0x70 [] run_workqueue+0x4a/0xf0 [] worker_thread+0xcd/0xf0 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xf0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= kseriod D C0369725 0 123 2 (L-TLB) c1971f60 00000046 f6d0ff90 c0369725 c1971f70 00000046 c0443c08 c1971fa8 00000000 c0444de0 91c4e988 000001b3 00002028 00000000 c19b3050 c196963c 00000073 00000120 91c535b2 000001b3 00000000 00000000 c02da2d0 c1971fa8 Call Trace: [] schedule+0x2e5/0x580 [] serio_thread+0x0/0x100 [] refrigerator+0x3c/0x50 [] serio_thread+0xf8/0x100 [] __wake_up_common+0x37/0x60 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] serio_thread+0x0/0x100 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= pdflush D C0369725 0 145 2 (L-TLB) c19cdf70 00000046 c1971f60 c0369725 c19cdf80 00000046 ef006e00 000012ca 65778037 00000009 91c4eb5e 000001b3 00002124 00000000 c19b7a70 c19b315c 00000073 0000006a 91c539de 000001b3 00000000 00000000 c19cdfa8 fffffffc Call Trace: [] schedule+0x2e5/0x580 [] refrigerator+0x3c/0x50 [] __pdflush+0x145/0x150 [] pdflush+0x0/0x30 [] pdflush+0x0/0x30 [] pdflush+0x25/0x30 [] pdflush+0x0/0x30 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= pdflush D C0369725 0 146 2 (L-TLB) c19cff70 00000046 c19cdf70 c0369725 c19cff80 00000046 00000001 c19cff48 c19cff74 c19615d0 91c4f11f 000001b3 00002016 00000000 f7d5f550 c19b7b7c 00000076 00000067 4270ffb6 0000000a 00000000 00000000 c19cffa8 
fffffffc Call Trace: [] schedule+0x2e5/0x580 [] refrigerator+0x3c/0x50 [] __pdflush+0x145/0x150 [] pdflush+0x0/0x30 [] pdflush+0x0/0x30 [] pdflush+0x25/0x30 [] wb_kupdate+0x0/0xf0 [] pdflush+0x0/0x30 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= kswapd0 D C0369725 0 147 2 (L-TLB) c1981f50 00000046 f67a5e90 c0369725 c1981f60 00000046 c1955550 00000000 c1981f78 c018dcac 91c4dbf7 000001b3 00000c71 00000000 f7d61070 c19b767c 0000006e 000000c5 91c4f96f 000001b3 00000000 00000000 c0434f44 00000000 Call Trace: [] schedule+0x2e5/0x580 [] proc_flush_task+0x4c/0x1a0 [] refrigerator+0x3c/0x50 [] kswapd+0xe1/0x110 [] schedule+0x2e5/0x580 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] kswapd+0x0/0x110 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= aio/0 S 000012CA 0 148 2 (L-TLB) c1a71f80 00000046 f91aea00 000012ca 657c8d75 00000009 c192e530 b2be46ba c1a71f60 c0116ce7 b2c07abe 00000004 000000b0 00000000 c192e530 c19b717c 00000078 000001a2 b2c07c5f 00000004 c1a71fd0 c199be20 c1a71fa8 c199be28 Call Trace: [] activate_task+0x37/0xb0 [] worker_thread+0xe4/0xf0 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xf0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= xfslogd/0 S F74D6EE0 0 149 2 (L-TLB) c1a73f80 00000046 f74d2500 f74d6ee0 f78a64e0 c01f3421 00000000 b2c09d9a c1a73f60 00000246 f6f22840 c199b9e0 f6f22cc0 f6f22d1c c199b9e0 c19bbb9c 00000000 0000004a 91980244 000001b3 c1a73fd0 c199b9e0 c1a73fa8 c199b9e8 Call Trace: [] xlog_iodone+0x51/0xd0 [] worker_thread+0xe4/0xf0 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xf0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= xfsdatad/0 S 00000000 0 150 2 (L-TLB) c1a75f80 00000046 00000000 00000000 00000000 00000246 
00000000 c19e17f0 00000000 c199e820 c199b9a0 c020c690 00000000 c014557a c19b7a70 c19bb69c c199b9a0 00000049 91b5c447 000001b3 c1a75fd0 c199b9a0 c1a75fa8 c199b9a8 Call Trace: [] xfs_end_bio_read+0x0/0x10 [] mempool_free+0x2a/0x60 [] worker_thread+0xe4/0xf0 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xf0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= scsi_eh_0 S F7C98084 0 774 2 (L-TLB) f7fa1fc0 00000046 00000246 f7c98084 f7c98000 00000246 c1a6f044 00000001 00000003 00000000 a35b398a 00000005 00000e17 00000000 c1932550 f7cbab5c 0000006e 0019a81a a35b398a 00000005 00000000 c1a6f000 c02bb740 fffffffc Call Trace: [] scsi_error_handler+0x0/0xa0 [] scsi_error_handler+0x41/0xa0 [] scsi_error_handler+0x0/0xa0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= scsi_eh_1 S F7C64084 0 776 2 (L-TLB) c1ad3fc0 00000046 00000246 f7c64084 f7c64000 00000246 f78b3c44 00000001 00000003 00000000 00000292 fffffffc c1ad3fb0 00000246 c192ea30 f7d6167c 00000000 0019d570 c305aee5 00000005 00000000 f78b3c00 c02bb740 fffffffc Call Trace: [] scsi_error_handler+0x0/0xa0 [] scsi_error_handler+0x41/0xa0 [] scsi_error_handler+0x0/0xa0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= scsi_eh_2 S C1AD4084 0 778 2 (L-TLB) f7d5dfc0 00000046 00000246 c1ad4084 c1ad4000 00000246 f78b3844 00000001 00000003 00000000 00000292 fffffffc f7d5dfb0 00000246 c192ea30 f7efd65c 00000000 001a0997 e2bc559e 00000005 00000000 f78b3800 c02bb740 fffffffc Call Trace: [] scsi_error_handler+0x0/0xa0 [] scsi_error_handler+0x41/0xa0 [] scsi_error_handler+0x0/0xa0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= scsi_eh_3 S F7FC4084 0 780 2 (L-TLB) c1a77fc0 00000046 00000246 f7fc4084 f7fc4000 00000246 f78b3444 00000001 00000003 00000000 02d2bfed 00000006 0000067f 00000000 c1932550 
f7d61b7c 0000006e 001c7d7b 02d2bfed 00000006 00000000 f78b3400 c02bb740 fffffffc Call Trace: [] scsi_error_handler+0x0/0xa0 [] scsi_error_handler+0x41/0xa0 [] scsi_error_handler+0x0/0xa0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= scsi_eh_4 S F786C800 0 803 2 (L-TLB) f7fc3fc0 00000046 f786c800 f786c800 f7ea23b8 c02bd166 f7964044 f7ea23b8 00000292 c021f7f2 6154d436 00000009 002f304e 00000000 c1932550 f7d0915c 0000006e 000ec1d6 6154d436 00000009 00000000 f7964000 c02bb740 fffffffc Call Trace: [] scsi_request_fn+0x196/0x280 [] blk_remove_plug+0x32/0x70 [] scsi_error_handler+0x0/0xa0 [] scsi_error_handler+0x41/0xa0 [] scsi_error_handler+0x0/0xa0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= scsi_eh_5 S F786C400 0 805 2 (L-TLB) f7e3ffc0 00000046 f786c400 f786c400 f7ea2730 c02bd166 f786cc44 f7ea2730 00000292 c021f7f2 67583eed 00000009 002f0e42 00000000 c19615d0 f7c9f1bc 0000006e 000ec010 67583eed 00000009 00000000 f786cc00 c02bb740 fffffffc Call Trace: [] scsi_request_fn+0x196/0x280 [] blk_remove_plug+0x32/0x70 [] scsi_error_handler+0x0/0xa0 [] scsi_error_handler+0x41/0xa0 [] scsi_error_handler+0x0/0xa0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= kpsmoused S 000019B6 0 826 2 (L-TLB) f7e81f80 00000046 b68df400 000019b6 db5b46fa 0000000c c192e530 6dada37d f7e81f60 c0116ce7 6dadbfd3 00000006 000000a8 00000000 c1932050 f7d5fb5c 00000078 00000029 6dadc163 00000006 f7e81fd0 f7fa9c20 f7e81fa8 f7fa9c28 Call Trace: [] activate_task+0x37/0xb0 [] worker_thread+0xe4/0xf0 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xf0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= md0_raid5 D 1D3836C7 0 836 2 (L-TLB) f7cbfdf0 00000046 f7ea3888 1d3836c7 00000008 c0221036 06a47600 00011210 c191ede0 c01454ba 5324bddc 00000009 0002fb08 00000000 
f7d09050 f7e2c6bc 0000006e 0000320f 532bcd0b 00000009 f7ea3888 c1a4f000 f7cbfe18 c1a4f13c Call Trace: [] generic_make_request+0x146/0x1d0 [] mempool_alloc+0x2a/0xc0 [] md_super_wait+0x7e/0xc0 [] autoremove_wake_function+0x0/0x50 [] bio_clone+0x31/0x40 [] autoremove_wake_function+0x0/0x50 [] write_sb_page+0x50/0x80 [] write_page+0x112/0x120 [] sync_sbs+0x77/0xe0 [] bitmap_update_sb+0x69/0xa0 [] md_update_sb+0x138/0x2c0 [] schedule+0x2e5/0x580 [] md_check_recovery+0x2dd/0x360 [] raid5d+0x10/0xe0 [] md_thread+0x55/0x110 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] md_thread+0x0/0x110 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= xfsbufd D F7D6117C 0 839 2 (L-TLB) f74dffa0 00000046 00000282 f7d6117c 0005eadc 000000de 00000282 f74dff88 00000000 00000282 91c4e16c 000001b3 00000eb5 00000000 f7c51a30 f7d6117c 0000006e 00000115 91c50443 000001b3 00000000 00000000 00000000 f74ccfa0 Call Trace: [] refrigerator+0x3c/0x50 [] xfsbufd+0xf0/0x100 [] xfsbufd+0x0/0x100 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= xfssyncd D F7D5F65C 0 840 2 (L-TLB) f775ff90 00000046 00000282 f7d5f65c 00060a54 00001d76 00000282 f775ff78 00000000 00000282 91c4fd9e 000001b3 00001d19 00000000 f7348090 f7d5f65c 00000078 000002b5 91c5428a 000001b3 00000000 00000000 f775ffb8 f78a93dc Call Trace: [] refrigerator+0x3c/0x50 [] xfssyncd+0x15b/0x160 [] xfssyncd+0x0/0x160 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= udevd D 00000010 0 921 1 (NOTLB) f7043ea0 00000082 f66ec590 00000010 c1913ba0 f67ba22c 407ef268 00000010 c1913ba0 f7fa5cb4 9133dae2 000001b3 00006df1 00000000 f7348590 f7cba65c 00000075 00000567 9134df46 000001b3 00000000 00000000 00000000 f7043f08 Call Trace: [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_wait+0x1bb/0x3b0 [] shrink_dcache_parent+0xd/0x30 [] 
default_wake_function+0x0/0x10 [] sys_select+0xa9/0x170 [] sys_wait4+0x31/0x40 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= ksuspend_usbd S 00002061 0 1822 2 (L-TLB) f6c81f80 00000046 f9766400 00002061 30fcbb32 00000010 c192e530 187e5d99 f6c81f60 c0116ce7 c048e0e0 c192e530 f6c81f60 c0116be1 f7c51a30 f735867c f7358570 000003e1 19639b4d 00000008 f6c81fd0 f71611a0 f6c81fa8 f71611a8 Call Trace: [] activate_task+0x37/0xb0 [] __activate_task+0x21/0x40 [] worker_thread+0xe4/0xf0 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xf0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= khubd D C0369725 0 1825 2 (L-TLB) f75adf70 00000046 f74dffa0 c0369725 f75adf80 00000046 00000023 00000011 00000002 f882a3a5 91c4e5a2 000001b3 000010e3 00000000 f73a25d0 f7c51b3c 0000006e 000000f0 91c50da4 000001b3 00000000 00000000 f882d180 f75adfa8 Call Trace: [] schedule+0x2e5/0x580 [] usb_get_intf+0x15/0x20 [usbcore] [] hub_thread+0x0/0xf0 [usbcore] [] refrigerator+0x3c/0x50 [] hub_thread+0x55/0xf0 [usbcore] [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] hub_thread+0x0/0xf0 [usbcore] [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= ksnapd S 000025EF 0 2132 2 (L-TLB) f7237f80 00000046 6f8a7a00 000025ef f7b7c53d 00000012 c192e530 7bdbe29e f7237f60 c0116ce7 c048e0e0 c192e530 f7237f60 c0116be1 c19bb090 f730e67c f730e570 0000007c 7bea9794 00000009 f7237fd0 f71c9360 f7237fa8 f71c9368 Call Trace: [] activate_task+0x37/0xb0 [] __activate_task+0x21/0x40 [] worker_thread+0xe4/0xf0 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xf0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= kjournald D C0369725 0 2178 2 (L-TLB) f6d71f40 00000046 f75adf70 c0369725 f6d71f50 00000046 00000000 f6d0dd38 00000000 
c0117627 91c4e939 000001b3 000012d7 00000000 c19bb090 f73a26dc 0000006e 000000d2 91c515db 000001b3 00000000 00000000 f73a25d0 00000001 Call Trace: [] schedule+0x2e5/0x580 [] __wake_up_common+0x37/0x60 [] refrigerator+0x3c/0x50 [] kjournald+0xd1/0x1d0 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] kjournald+0x0/0x1d0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= xfsbufd D C19BB19C 0 2179 2 (L-TLB) f6d29fa0 00000046 00000282 c19bb19c 0005eb4b 00000085 00000282 f6d29f88 00000000 00000282 91c4ec56 000001b3 0000145d 00000000 c1961ad0 c19bb19c 0000006e 000000ab 91c51c91 000001b3 00000000 00000000 00000000 f6d44aa0 Call Trace: [] refrigerator+0x3c/0x50 [] xfsbufd+0xf0/0x100 [] xfsbufd+0x0/0x100 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= xfssyncd D C1961BDC 0 2183 2 (L-TLB) f6881f90 00000046 00000282 c1961bdc 0005f177 00000269 00000282 f6881f78 00000000 00000282 91c4efba 000001b3 000015d2 00000000 f7cb10b0 c1961bdc 0000006e 000000af 91c52369 000001b3 00000000 00000000 f6881fb8 f6d45c3c Call Trace: [] refrigerator+0x3c/0x50 [] xfssyncd+0x15b/0x160 [] xfssyncd+0x0/0x160 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= xfsbufd D F7CB11BC 0 2184 2 (L-TLB) f6a33fa0 00000046 00000282 f7cb11bc 0005eb9d 00000067 00000282 f6a33f88 00000000 00000282 91c4f278 000001b3 0000171e 00000000 f73945b0 f7cb11bc 0000006e 00000094 91c52939 000001b3 00000000 00000000 00000000 f74cc720 Call Trace: [] refrigerator+0x3c/0x50 [] xfsbufd+0xf0/0x100 [] xfsbufd+0x0/0x100 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= xfssyncd D F73946BC 0 2185 2 (L-TLB) f6d0ff90 00000046 00000282 f73946bc 0005f197 00000153 00000282 f6d0ff78 00000000 00000282 91c4e688 000001b3 00001e93 00000000 c1969530 f73946bc 00000072 00000092 91c52ef2 000001b3 00000000 00000000 f6d0ffb8 f6d4573c 
Call Trace: [] refrigerator+0x3c/0x50 [] xfssyncd+0x15b/0x160 [] xfssyncd+0x0/0x160 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= portmap D F6BA6684 0 2341 1 (NOTLB) f6d0dea0 00000086 00000282 f6ba6684 f6ba64a0 c033b7e6 00000282 f6ba64a0 f6ba64a0 c034d144 913389cf 000001b3 00002e10 00000000 f7358070 f730cb9c 00000073 000002ae 9133f6e9 000001b3 00000000 00000000 0804ff78 f6d0df08 Call Trace: [] inet_csk_clear_xmit_timers+0x36/0x50 [] tcp_v4_destroy_sock+0x14/0x150 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] invalidate_inode_buffers+0xd/0xa0 [] d_kill+0x40/0x60 [] dput+0x1c/0xe0 [] __fput+0xf4/0x160 [] mntput_no_expire+0x1c/0x70 [] filp_close+0x43/0x70 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= syslogd D F77A1F28 0 2478 1 (NOTLB) f77a1ea0 00000086 f7fbe804 f77a1f28 00000000 00000001 ffffffff f7c20940 00000000 00000000 91338d55 000001b3 00002e6f 00000000 f7358a70 f735817c 00000073 00000070 9133fb50 000001b3 00000000 00000000 00000000 f77a1f08 Call Trace: [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_readv_writev+0xaa/0x190 [] pipe_write+0x0/0x400 [] sys_select+0xa9/0x170 [] sigprocmask+0x45/0xb0 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= klogd D C04953A0 0 2484 1 (NOTLB) f6809ea0 00000086 f68d0a20 c04953a0 f6809dc8 f6809e58 00000000 00000000 f6809eb0 00000001 913380eb 000001b3 0000267d 00000000 f730ca90 f7c9e69c 00000073 000000b5 9133dc16 000001b3 00000000 00000000 00000000 f6809f08 Call Trace: [] refrigerator+0x3c/0x50 [] smp_apic_timer_interrupt+0x30/0x40 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_syslog+0xf2/0x350 [] autoremove_wake_function+0x0/0x50 [] autoremove_wake_function+0x0/0x50 [] vfs_read+0xe4/0x110 [] sys_read+0x47/0x80 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= sshd D 
F7215BF8 0 2506 1 (NOTLB) f6a2dea0 00200082 f687e360 f7215bf8 00000000 f6a2de58 00000000 00000000 f6a2deb0 00000001 91339d89 000001b3 00002dd0 00000000 f730c090 f7358b7c 00000073 00000179 91340a0c 000001b3 00000000 00000000 00000000 f6a2df08 Call Trace: [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_wp_page+0x299/0x3b0 [] invalidate_inode_buffers+0xd/0xa0 [] d_kill+0x40/0x60 [] sys_select+0xa9/0x170 [] mntput_no_expire+0x1c/0x70 [] filp_close+0x43/0x70 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= exim4 D 00000000 0 2546 1 (NOTLB) f6835ea0 00000086 f6520a30 00000000 00000000 f6520a30 f6520a30 00000000 bf896808 c018dcac f6835e88 0000000d c03f1d32 00000f42 f72e1ab0 f7e2cbbc f6835e88 00000235 9134f609 000001b3 00000000 00000000 00000000 f6835f08 Call Trace: [] proc_flush_task+0x4c/0x1a0 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_wait+0x1bb/0x3b0 [] do_setitimer+0x1f1/0x280 [] sys_select+0xa9/0x170 [] sys_wait4+0x31/0x40 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= inetd D 00000000 0 2552 1 (NOTLB) f6a29ea0 00000082 00000000 00000000 662f6984 c0437200 f6845bc0 f6a29eb8 c195f8c0 c02101b8 00000001 00000000 00000001 00000000 f7394ab0 f72e1bbc f6a29eb0 000004f5 913518bf 000001b3 00000000 00000000 00000000 f6a29f08 Call Trace: [] xfs_file_aio_read+0x78/0x90 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_no_page+0x162/0x280 [] __handle_mm_fault+0xfa/0x1e0 [] do_page_fault+0x39b/0x590 [] sys_select+0xa9/0x170 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= lpd D 00000001 0 2556 1 (NOTLB) f6a8fea0 00000086 f7ddeb40 00000001 c1913ba0 f6870114 fcd011e4 00000007 f7312005 c03121f7 00000005 bf9e2660 00000007 00000044 f73a20d0 f7394bbc f780c800 00000267 91352994 000001b3 00000000 00000000 00000000 f6a8ff08 Call Trace: [] 
sys_socketcall+0x87/0x250 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] chown_common+0xa5/0xd0 [] dput+0x1c/0xe0 [] sys_listen+0x42/0x70 [] sys_select+0xa9/0x170 [] sigprocmask+0x45/0xb0 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld_safe D F721BBF8 0 2571 1 (NOTLB) f6acdea0 00000086 00000010 f721bbf8 f722f8c0 f71fb700 00000000 00000004 c0434f00 f7240bf8 f721bbf8 bf953000 bf953000 c014d94d f72ea5d0 f73a21dc f73a20d0 00000148 9135328e 000001b3 00000000 00000000 081041c8 f6acdf08 Call Trace: [] copy_page_range+0x9d/0xd0 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_wait+0x1bb/0x3b0 [] default_wake_function+0x0/0x10 [] do_sigaction+0x116/0x150 [] default_wake_function+0x0/0x10 [] sys_rt_sigaction+0x8d/0xa0 [] sys_wait4+0x31/0x40 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D 00000000 0 2608 2571 (NOTLB) f685fea0 00000086 f77b87bc 00000000 00000001 c0434c64 c0434f04 00000044 c0434f00 c0146c50 9133fd28 000001b3 00008a1b 00000000 f72e10b0 f72ea6dc 00000076 00000287 91354440 000001b3 00000000 00000000 00000000 f685ff08 Call Trace: [] get_page_from_freelist+0x80/0xc0 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] copy_process+0x55d/0xb30 [] change_protection+0x78/0xc0 [] __activate_task+0x21/0x40 [] sys_select+0xa9/0x170 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D F7127D94 0 2610 2571 (NOTLB) f6a25ea0 00000086 08a45aa0 f7127d94 08a45aa0 c0134f9c 00000246 c17a5ca0 c0434da8 c0146a8f 00000000 00000000 f7348590 00000001 f7e2cab0 f734869c c0434f14 000000e9 9134e692 000001b3 00000000 00000000 00000000 f6a25f08 Call Trace: [] futex_wait+0x2fc/0x3c0 [] buffered_rmqueue+0x9f/0x100 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_anonymous_page+0xb5/0x130 [] 
__handle_mm_fault+0xaf/0x1e0 [] default_wake_function+0x0/0x10 [] do_futex+0x53/0x130 [] sys_futex+0x65/0xf0 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D F7127D94 0 2611 2571 (NOTLB) f6a03ea0 00000086 08a45b10 f7127d94 08a45b10 c0134f9c 00000000 f6a03ee0 f6a03e70 c0369f41 9133a6b3 000001b3 000030e1 00000000 f730c590 f730c19c 00000073 000001a4 91341a7a 000001b3 00000000 00000000 00000000 f6a03f08 Call Trace: [] futex_wait+0x2fc/0x3c0 [] io_schedule+0x11/0x20 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] pagevec_lookup_tag+0x2a/0x50 [] __pagevec_release+0x1b/0x30 [] wait_on_page_writeback_range+0x6a/0x110 [] xfs_trans_ijoin+0x2c/0x80 [] default_wake_function+0x0/0x10 [] xfs_fsync+0x1b5/0x1d0 [] do_futex+0x53/0x130 [] sys_futex+0x65/0xf0 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D F7127D94 0 2612 2571 (NOTLB) f6a09ea0 00000086 08a45b80 f7127d94 08a45b80 c0134f9c 00000246 c1760960 c0434da8 c0146a8f 00000000 00000000 f72e10b0 0000001b f7d5f050 f72e11bc c0434f14 0000011d 91354c0e 000001b3 00000000 00000000 00000000 f6a09f08 Call Trace: [] futex_wait+0x2fc/0x3c0 [] buffered_rmqueue+0x9f/0x100 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_anonymous_page+0xb5/0x130 [] __handle_mm_fault+0xaf/0x1e0 [] default_wake_function+0x0/0x10 [] do_futex+0x53/0x130 [] sys_futex+0x65/0xf0 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D F7127D94 0 2613 2571 (NOTLB) f6a2bea0 00000086 08a45bf0 f7127d94 08a45bf0 c0134f9c c048e0e0 f7cb15b0 f6a2be70 c0116be1 9133b006 000001b3 00002fcc 00000000 f7d09a50 f730c69c 00000073 000000ac 9134213a 000001b3 00000000 00000000 00000000 f6a2bf08 Call Trace: [] futex_wait+0x2fc/0x3c0 [] __activate_task+0x21/0x40 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] wake_futex+0x2e/0x50 [] 
futex_requeue+0xb7/0x1d0 [] default_wake_function+0x0/0x10 [] do_futex+0x53/0x130 [] sys_futex+0x65/0xf0 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D 00000001 0 2616 2571 (NOTLB) f6ff7ea0 00000086 f72944a0 00000001 00000000 c014557a f6ff7e68 c1965e94 f72944a0 c017eef6 c176b8e0 00000000 00000086 00000048 f7c9f5b0 f7d5f15c 00000001 000001a6 9135579c 000001b3 00000000 00000000 00000000 f6ff7f08 Call Trace: [] mempool_free+0x2a/0x60 [] bio_put+0x26/0x40 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] __ide_end_request+0x87/0xe0 [] elv_queue_empty+0x24/0x30 [] ide_do_request+0x67/0x330 [] ide_dma_intr+0x7d/0xc0 [] copy_to_user+0x32/0x50 [] sys_select+0x149/0x170 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D 00000000 0 2617 2571 (NOTLB) f6ff9ea0 00000086 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 f72ea0d0 f7c9f6bc 00000000 000001a8 91356336 000001b3 00000000 00000000 00000000 f6ff9f08 Call Trace: [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_no_page+0x162/0x280 [] __handle_mm_fault+0xfa/0x1e0 [] do_page_fault+0x39b/0x590 [] copy_to_user+0x32/0x50 [] sys_select+0x149/0x170 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D F7127D94 0 2618 2571 (NOTLB) f6ffbea0 00000086 08700fe0 f7127d94 08700fe0 c0134f9c 80b65200 00003dd4 ea405b29 0000001e 913481e3 000001b3 0000ea28 00000000 f72e15b0 f7cb16bc 0000007a 0000019b 9136ac79 000001b3 00000000 00000000 00000000 f6ffbf08 Call Trace: [] futex_wait+0x2fc/0x3c0 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] futex_requeue+0xb7/0x1d0 [] wake_futex+0x2e/0x50 [] futex_wake+0x76/0xb0 [] default_wake_function+0x0/0x10 [] do_futex+0x53/0x130 [] sys_futex+0x65/0xf0 [] do_notify_resume+0x3c/0x40 [] 
work_notifysig+0x13/0x19 ======================= mysqld D F730C590 0 2619 2571 (NOTLB) f6ffdea0 00000086 f730c740 f730c590 f7d09a50 f7127d60 f6a2bea0 c0369725 f6ffdec0 00000086 9133b619 000001b3 000030c3 00000000 f72eaad0 f7d09b5c 00000073 000000d6 91342999 000001b3 00000000 00000000 00000008 f6ffdf08 Call Trace: [] schedule+0x2e5/0x580 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] dequeue_signal+0x31/0x130 [] sys_rt_sigtimedwait+0x172/0x1d0 [] do_futex+0x72/0x130 [] sigprocmask+0x45/0xb0 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= logger D 000081A4 0 2609 2571 (NOTLB) f6aa9ea0 00000086 04003fff 000081a4 00000001 00000000 00000000 00800081 00000000 00000000 f6f50cd8 00000400 fffffe00 00000000 f7c53050 f72ea1dc f6aa9eb0 000001fd 91357122 000001b3 00000000 00000000 b7f69420 f6aa9f08 Call Trace: [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] autoremove_wake_function+0x0/0x50 [] schedule+0x2e5/0x580 [] vfs_read+0x86/0x110 [] sys_read+0x47/0x80 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= nrpe D F6673904 0 2653 1 (NOTLB) f6b25ea0 00200086 f6673904 f6673904 f6673904 f67af43c c1910e28 f6673904 f67af43c c016dfb0 9133cde8 000001b3 00002e52 00000000 f7d09550 f72eabdc 00000073 000001cd 91343b9d 000001b3 00000000 00000000 bf995dec f6b25f08 Call Trace: [] d_kill+0x40/0x60 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_no_page+0x162/0x280 [] __handle_mm_fault+0xfa/0x1e0 [] do_page_fault+0x39b/0x590 [] copy_to_user+0x32/0x50 [] sys_select+0x149/0x170 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= nagios-statd D F6B1FD44 0 2661 1 (NOTLB) f6a19ea0 00200086 f6b1fd44 f6b1fd44 f6b1fd44 f6b689ec c1910e28 f6b1fd44 f6b689ec c016dfb0 f7c20b20 f6b689ec f7c20b20 c016dfec f73940b0 f7c5315c 00000008 00000654 91359d74 000001b3 00000000 00000000 bf9c0860 
f6a19f08 Call Trace: [] d_kill+0x40/0x60 [] dput+0x1c/0xe0 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_no_page+0x162/0x280 [] __handle_mm_fault+0xfa/0x1e0 [] do_page_fault+0x39b/0x590 [] sys_socketcall+0xc1/0x250 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= netserver D F6B1FA84 0 2664 1 (NOTLB) f6b49ea0 00000086 f6b1fa84 f6b1fa84 f6b1fa84 f6b68134 c1910e28 f6b1fa84 f6b68134 c016dfb0 f7c20da0 f6b68134 f7c20da0 c016dfec f7efda50 f73941bc 00000008 000002be 9135b0ab 000001b3 00000000 00000000 bfdb1550 f6b49f08 Call Trace: [] d_kill+0x40/0x60 [] dput+0x1c/0xe0 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_no_page+0x162/0x280 [] __handle_mm_fault+0xfa/0x1e0 [] do_page_fault+0x39b/0x590 [] sys_socketcall+0xc1/0x250 [] do_notify_resume+0x3c/0x40 [] do_page_fault+0x0/0x590 [] work_notifysig+0x13/0x19 ======================= lockd D F734819C 0 2675 2 (L-TLB) f6ba3ed0 00000046 f7e2c0b0 f734819c f727c6c0 000012a8 6e33830e 0000000c f69d69a0 7fffffff 91c5013c 000001b3 00001ea8 00000000 f7e2c0b0 f734819c 00000078 00000176 91c549d9 000001b3 00000000 00000000 f6f40000 00000003 Call Trace: [] refrigerator+0x3c/0x50 [] svc_recv+0x392/0x420 [sunrpc] [] activate_task+0x37/0xb0 [] default_wake_function+0x0/0x10 [] default_wake_function+0x0/0x10 [] lockd+0x10e/0x240 [lockd] [] ret_from_fork+0x6/0x1c [] lockd+0x0/0x240 [lockd] [] lockd+0x0/0x240 [lockd] [] kernel_thread_helper+0x7/0x3c ======================= rpciod/0 S F65780A0 0 2676 2 (L-TLB) f6ba5f80 00000046 f69d63a0 f65780a0 00000000 f918574f 00000000 f657810c f65d506c f65780a4 f65780a0 f65780a4 00000286 f918ade2 f73a5a50 f7cb1bbc f918afd0 000002c5 8bb74456 000001b3 f6ba5fd0 f69f88a0 f6ba5fa8 f69f88a8 Call Trace: [] rpc_release_client+0x3f/0x70 [sunrpc] [] rpc_release_calldata+0x12/0x20 [sunrpc] [] rpc_async_schedule+0x0/0x10 [sunrpc] [] worker_thread+0xe4/0xf0 [] autoremove_wake_function+0x0/0x50 [] 
autoremove_wake_function+0x0/0x50 [] worker_thread+0x0/0xf0 [] kthread+0x6a/0x70 [] kthread+0x0/0x70 [] kernel_thread_helper+0x7/0x3c ======================= nfsd D F7E2C1BC 0 2677 2 (L-TLB) f6bc7f10 00000046 f73a5050 f7e2c1bc 000cb759 00001972 00000282 f6bc7ef8 00000000 00000282 91c503f8 000001b3 00001ff2 00000000 f73a5050 f7e2c1bc 00000078 00000128 91c54fa4 000001b3 00000000 00000000 f6b87000 00000022 Call Trace: [] refrigerator+0x3c/0x50 [] svc_recv+0x392/0x420 [sunrpc] [] filp_close+0x43/0x70 [] default_wake_function+0x0/0x10 [] default_wake_function+0x0/0x10 [] nfsd+0xae/0x240 [nfsd] [] nfsd+0x0/0x240 [nfsd] [] kernel_thread_helper+0x7/0x3c ======================= nfsd D F73A515C 0 2678 2 (L-TLB) f6be9f10 00000046 f73a5550 f73a515c 000cb759 00000a68 00000282 f6be9ef8 00000000 00000282 91c506b7 000001b3 0000211f 00000000 f73a5550 f73a515c 00000078 0000011b 91c5552c 000001b3 00000000 00000000 f6bb0000 00000022 Call Trace: [] refrigerator+0x3c/0x50 [] svc_recv+0x392/0x420 [sunrpc] [] filp_close+0x43/0x70 [] default_wake_function+0x0/0x10 [] default_wake_function+0x0/0x10 [] nfsd+0xae/0x240 [nfsd] [] nfsd+0x0/0x240 [nfsd] [] kernel_thread_helper+0x7/0x3c ======================= nfsd D F73A565C 0 2679 2 (L-TLB) f6409f10 00000046 f7c53a50 f73a565c 000cb759 00000a75 00000282 f6409ef8 00000000 00000282 91c50942 000001b3 00002251 00000000 f7c53a50 f73a565c 00000078 00000112 91c55a89 000001b3 00000000 00000000 f6bd5000 00000022 Call Trace: [] refrigerator+0x3c/0x50 [] svc_recv+0x392/0x420 [sunrpc] [] filp_close+0x43/0x70 [] default_wake_function+0x0/0x10 [] default_wake_function+0x0/0x10 [] nfsd+0xae/0x240 [nfsd] [] nfsd+0x0/0x240 [nfsd] [] kernel_thread_helper+0x7/0x3c ======================= nfsd D F7C53B5C 0 2680 2 (L-TLB) f642bf10 00000046 f730e070 f7c53b5c 000cb759 0000098a 00000282 f642bef8 00000000 00000282 91c50bda 000001b3 00002387 00000000 f730e070 f7c53b5c 00000078 00000117 91c56000 000001b3 00000000 00000000 f6bfa000 00000022 Call Trace: [] 
refrigerator+0x3c/0x50 [] svc_recv+0x392/0x420 [sunrpc] [] filp_close+0x43/0x70 [] default_wake_function+0x0/0x10 [] default_wake_function+0x0/0x10 [] nfsd+0xae/0x240 [nfsd] [] nfsd+0x0/0x240 [nfsd] [] kernel_thread_helper+0x7/0x3c ======================= nfsd D F730E17C 0 2681 2 (L-TLB) f644bf10 00000046 f730ea70 f730e17c 000cb759 00000848 00000282 f644bef8 00000000 00000282 91c50ed9 000001b3 000024a6 00000000 f730ea70 f730e17c 00000078 00000121 91c565a6 000001b3 00000000 00000000 f641f000 00000022 Call Trace: [] refrigerator+0x3c/0x50 [] svc_recv+0x392/0x420 [sunrpc] [] remove_vma+0x31/0x50 [] filp_close+0x43/0x70 [] default_wake_function+0x0/0x10 [] default_wake_function+0x0/0x10 [] nfsd+0xae/0x240 [nfsd] [] nfsd+0x0/0x240 [nfsd] [] kernel_thread_helper+0x7/0x3c ======================= nfsd D F730EB7C 0 2682 2 (L-TLB) f646df10 00000046 f7c9e090 f730eb7c 000cb759 000007b0 00000282 f646def8 00000000 00000282 91c511a3 000001b3 000025c6 00000000 f7c9e090 f730eb7c 00000078 00000117 91c56b1c 000001b3 00000000 00000000 f6444000 00000022 Call Trace: [] refrigerator+0x3c/0x50 [] svc_recv+0x392/0x420 [sunrpc] [] filp_close+0x43/0x70 [] default_wake_function+0x0/0x10 [] default_wake_function+0x0/0x10 [] nfsd+0xae/0x240 [nfsd] [] nfsd+0x0/0x240 [nfsd] [] kernel_thread_helper+0x7/0x3c ======================= nfsd D F7C9E19C 0 2683 2 (L-TLB) f64adf10 00000046 f72e15b0 f7c9e19c 0000007c 00000d7d 00000282 f64adef8 00000000 00000282 91c51404 000001b3 000026f5 00000000 f73a2ad0 f7c9e19c 00000078 00000109 91c57049 000001b3 00000000 00000000 f6469000 00000022 Call Trace: [] refrigerator+0x3c/0x50 [] svc_recv+0x392/0x420 [sunrpc] [] filp_close+0x43/0x70 [] default_wake_function+0x0/0x10 [] default_wake_function+0x0/0x10 [] nfsd+0xae/0x240 [nfsd] [] nfsd+0x0/0x240 [nfsd] [] kernel_thread_helper+0x7/0x3c ======================= nfsd D F73A2BDC 0 2684 2 (L-TLB) f64cdf10 00000046 f7c9e090 f73a2bdc 0000007c 00006c5a 00000282 f64cdef8 00000000 00000282 00000282 000cb77f 000cb77f c0369fe4 
f73a5a50 f73a2bdc 00000008 000000f6 91c57517 000001b3 00000000 00000000 f648e000 00000022 Call Trace: [] schedule_timeout+0x54/0xa0 [] refrigerator+0x3c/0x50 [] svc_recv+0x392/0x420 [sunrpc] [] svc_send+0x92/0x100 [sunrpc] [] default_wake_function+0x0/0x10 [] default_wake_function+0x0/0x10 [] nfsd+0xae/0x240 [nfsd] [] nfsd+0x0/0x240 [nfsd] [] kernel_thread_helper+0x7/0x3c ======================= rpc.mountd D C177B160 0 2688 1 (NOTLB) f6b85ea0 00000082 00000246 c177b160 c0434da8 c0146a8f f6b6352c 00000000 00000000 c0434da8 91344a31 000001b3 00010f88 00000000 c192ea30 f72e16bc 0000007d 00000af1 9136cd4c 000001b3 00000000 00000000 00000000 f6b85f08 Call Trace: [] buffered_rmqueue+0x9f/0x100 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_no_page+0x162/0x280 [] __handle_mm_fault+0xfa/0x1e0 [] do_page_fault+0x39b/0x590 [] sys_select+0xa9/0x170 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= rsync D 6C720929 0 2694 1 (NOTLB) f64d5ea0 00000086 ab800883 6c720929 b43a31c6 81773793 3b011242 3456459e 3fbc385d af7f93c4 00000000 f69f87e0 f64d5eb8 0000000a f7c51530 f7efdb5c f727c650 0000024b 9135c0b9 000001b3 00000000 00000000 00000000 f64d5f08 Call Trace: [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] tcp_v4_get_port+0x19/0x20 [] release_sock+0xe/0x60 [] inet_listen+0x34/0x80 [] sys_listen+0x42/0x70 [] sys_select+0xa9/0x170 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= smartd D 00000000 0 2703 1 (NOTLB) f6a0dea0 00000082 00000005 00000000 00000001 00000000 00000200 f65f9400 bfc7b1b0 f786c400 00004000 f6673e60 f651d960 01d501b0 f65245d0 f7c5163c 004f0001 00000187 9135cb70 000001b3 00000000 00000000 bfc9abc4 f6a0df08 Call Trace: [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] ktime_get_ts+0x19/0x50 [] copy_to_user+0x32/0x50 [] hrtimer_nanosleep+0x90/0xe0 [] hrtimer_wakeup+0x0/0x20 [] 
do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= sshd D 00000001 0 2705 2506 (NOTLB) f6b0dea0 00000082 f687e4c0 00000001 00000000 f6b0de58 00000000 00000000 f6b0deb0 00000001 9133dac5 000001b3 0000389d 00000000 f65240d0 f7d0965c 00000073 000003b9 913460db 000001b3 00000000 00000000 00000000 f6b0df08 Call Trace: [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] autoremove_wake_function+0x0/0x50 [] tty_write+0x94/0x1f0 [] sys_select+0xa9/0x170 [] sys_write+0x47/0x80 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= bash D C1733820 0 2761 2705 (NOTLB) f6b27ea0 00000086 00000246 c1733820 c0434da8 c0146a8f 00000000 00000000 00000000 c0434da8 9133e08b 000001b3 00003a41 00000000 f6524ad0 f65241dc 00000073 000000f7 91346a84 000001b3 00000000 00000000 080ffac8 f6b27f08 Call Trace: [] buffered_rmqueue+0x9f/0x100 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_wait+0x1bb/0x3b0 [] default_wake_function+0x0/0x10 [] tiocspgrp+0xc9/0xe0 [] default_wake_function+0x0/0x10 [] sys_wait4+0x31/0x40 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= rpc.statd D F6541EA8 0 2768 1 (NOTLB) f6541ea0 00000082 00000000 f6541ea8 00000000 c022f282 bffdbb44 00000010 bffdbb20 c0310235 00000000 f6541ec8 08056a80 f64d3a00 f64c6ab0 f65246dc 00000190 000002fa 9135e04b 000001b3 00000000 00000000 00000000 f6541f08 Call Trace: [] copy_to_user+0x32/0x50 [] move_addr_to_user+0x65/0x70 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] invalidate_inode_buffers+0xd/0xa0 [] d_kill+0x40/0x60 [] sys_select+0xa9/0x170 [] mntput_no_expire+0x1c/0x70 [] filp_close+0x43/0x70 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= rpc.idmapd D 00008003 0 2776 1 (NOTLB) f659fea0 00000082 f6089000 00008003 00000000 f6056c20 00000000 f659ff28 00000001 c193df40 9133e5ad 000001b3 
00003cf9 00000000 f7c53550 f6524bdc 00000073 00000128 91347618 000001b3 00000000 00000000 00001388 f659ff08 Call Trace: [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] schedule_timeout+0x54/0xa0 [] ep_events_transfer+0x69/0x80 [] process_timeout+0x0/0x10 [] default_wake_function+0x0/0x10 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= ntpd D F6511E88 0 2791 1 (NOTLB) f6511ea0 00200082 00000000 f6511e88 f7fbe7d8 f7c53550 bfeb98fc bfeb98fc f7c53760 c0108f0c 9133efd6 000001b3 000042b3 00000000 f64c65b0 f7c5365c 00000073 0000025f 91348dcf 000001b3 00000000 00000000 00000000 f6511f08 Call Trace: [] save_i387_fxsave+0x8c/0xb0 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] sys_select+0xa9/0x170 [] restore_sigcontext+0x10d/0x160 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= mdadm D BF9FDFB4 0 2803 1 (NOTLB) f6513ea0 00000086 00000000 bf9fdfb4 c1a4f000 c02fc8a5 00000000 80480911 c1a4f000 c02fc36d c1a4f000 c1a4f10c c190d8c8 c0180d82 f7c9ea90 f64c6bbc c1a4f10c 00000276 9135f186 000001b3 00000000 00000000 bf9fcc58 f6513f08 Call Trace: [] md_wakeup_thread+0x25/0x30 [] md_ioctl+0xcd/0x410 [] check_disk_change+0x32/0x80 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] md_ioctl+0x0/0x410 [] blkdev_driver_ioctl+0x36/0x60 [] nameidata_to_filp+0x28/0x40 [] blkdev_ioctl+0x8f/0x1d0 [] mddev_put+0x19/0x80 [] __blkdev_put+0x62/0x120 [] copy_to_user+0x32/0x50 [] sys_select+0x149/0x170 [] mntput_no_expire+0x1c/0x70 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= amd D 00000010 0 2815 1 (NOTLB) f65c3ea0 00000082 f65c3ec8 00000010 f65c3e88 00000001 f65c3e68 00000018 00000000 00000000 9133fbce 000001b3 000049bf 00000000 f7efd050 f64c66bc 00000073 000002dd 9134aa7a 000001b3 00000000 00000000 00000000 f65c3f08 Call Trace: [] refrigerator+0x3c/0x50 [] 
get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_wp_page+0x299/0x3b0 [] __handle_mm_fault+0x1b6/0x1e0 [] do_page_fault+0x39b/0x590 [] copy_to_user+0x32/0x50 [] sys_select+0x149/0x170 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= atd D C01D83C0 0 2844 1 (NOTLB) f6539ea0 00000082 00000000 c01d83c0 00000000 c01d83c0 00000000 00000000 f65deb00 c01e8434 f6539f08 00000008 00000000 c0209053 c19b3550 f7c9eb9c c0434f14 0000015c 9135fb0a 000001b3 00000000 00000000 bfe5d324 f6539f08 Call Trace: [] xfs_dir2_put_dirent64_direct+0x0/0xb0 [] xfs_dir2_put_dirent64_direct+0x0/0xb0 [] xfs_iunlock+0x84/0x90 [] xfs_readdir+0x53/0x70 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] ktime_get_ts+0x19/0x50 [] copy_to_user+0x32/0x50 [] hrtimer_nanosleep+0x90/0xe0 [] hrtimer_wakeup+0x0/0x20 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= cron D 000081A4 0 2847 1 (NOTLB) f661bea0 00000082 04003fff 000081a4 00000001 00000000 00000000 060243d5 00000000 00000274 91340354 000001b3 00004c36 00000000 f64c60b0 f7efd15c 00000073 00000155 9134b7d5 000001b3 00000000 00000000 bfb8e8d4 f661bf08 Call Trace: [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] ktime_get_ts+0x19/0x50 [] copy_to_user+0x32/0x50 [] hrtimer_nanosleep+0x90/0xe0 [] hrtimer_wakeup+0x0/0x20 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= miniserv.pl D F6F02CA0 0 2864 1 (NOTLB) f6631ea0 00000082 f6631eb8 f6f02ca0 f6631e70 c0437200 f667cc00 f6631eb8 f66647c0 c02101b8 00000001 f6631f00 00000001 00000000 f6520530 c19b365c f6631eb0 0000037c 91361374 000001b3 00000000 00000000 00000000 f6631f08 Call Trace: [] xfs_file_aio_read+0x78/0x90 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_wait+0x1bb/0x3b0 [] copy_to_user+0x32/0x50 [] sys_select+0x149/0x170 [] do_notify_resume+0x3c/0x40 [] 
work_notifysig+0x13/0x19 ======================= apache D C173CD20 0 2951 1 (NOTLB) f65e7ea0 00000082 00000246 c173cd20 c0434da8 c0146a8f c0434dcc 00000000 00000000 c0434da8 9133bf89 000001b3 00006e15 00000000 f7cba550 f64c61bc 00000074 0000013e 9134c442 000001b3 00000000 00000000 00000000 f65e7f08 Call Trace: [] buffered_rmqueue+0x9f/0x100 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_wp_page+0x299/0x3b0 [] do_wait+0x1bb/0x3b0 [] do_page_fault+0x39b/0x590 [] copy_to_user+0x32/0x50 [] sys_select+0x149/0x170 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= apache D C1746100 0 2952 2951 (NOTLB) f66a5ea0 00000086 00000246 c1746100 c0434da8 c0146a8f c0434dcc 00000000 000000b4 c0142b08 c0434f18 00000044 c0434f14 c014290b f7c9fab0 f652063c 00000000 00000290 91362568 000001b3 00000000 00000000 00000000 f66a5f08 Call Trace: [] buffered_rmqueue+0x9f/0x100 [] find_lock_page+0x18/0x70 [] unlock_page+0x1b/0x30 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_no_page+0x162/0x280 [] __handle_mm_fault+0xfa/0x1e0 [] do_page_fault+0x39b/0x590 [] sys_select+0xa9/0x170 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= apache D C1746480 0 2953 2951 (NOTLB) f66d5ea0 00000082 00000246 c1746480 c0434da8 c0146a8f c0434dcc 00000000 000000b4 c0142b08 c0434f18 00000044 c0434f14 c014290b f64fea50 f7c9fbbc 00000000 00000211 913633e0 000001b3 00000000 00000000 00000000 f66d5f08 Call Trace: [] buffered_rmqueue+0x9f/0x100 [] find_lock_page+0x18/0x70 [] unlock_page+0x1b/0x30 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_no_page+0x162/0x280 [] __handle_mm_fault+0xfa/0x1e0 [] do_page_fault+0x39b/0x590 [] copy_to_user+0x32/0x50 [] sys_ipc+0x49/0x280 [] do_notify_resume+0x3c/0x40 [] do_page_fault+0x0/0x590 [] work_notifysig+0x13/0x19 ======================= apache D C1746800 0 2954 2951 (NOTLB) 
f66d7ea0 00000086 00000246 c1746800 c0434da8 c0146a8f c0434dcc 00000000 000000b4 c0142b08 c0434f18 00000044 c0434f14 c014290b f6520030 f64feb5c 00000000 0000012f 91363c2b 000001b3 00000000 00000000 00000000 f66d7f08 Call Trace: [] buffered_rmqueue+0x9f/0x100 [] find_lock_page+0x18/0x70 [] unlock_page+0x1b/0x30 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_no_page+0x162/0x280 [] __handle_mm_fault+0xfa/0x1e0 [] do_page_fault+0x39b/0x590 [] copy_to_user+0x32/0x50 [] sys_ipc+0x49/0x280 [] do_notify_resume+0x3c/0x40 [] do_page_fault+0x0/0x590 [] work_notifysig+0x13/0x19 ======================= apache D C1741AC0 0 2955 2951 (NOTLB) f66d9ea0 00000086 00000246 c1741ac0 c0434da8 c0146a8f c0434dcc 00000000 000000b4 c0142b08 c0434f18 00000044 c0434f14 c014290b f64fe050 f652013c 00000000 0000013c 913644d2 000001b3 00000000 00000000 00000000 f66d9f08 Call Trace: [] buffered_rmqueue+0x9f/0x100 [] find_lock_page+0x18/0x70 [] unlock_page+0x1b/0x30 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_no_page+0x162/0x280 [] __handle_mm_fault+0xfa/0x1e0 [] do_page_fault+0x39b/0x590 [] copy_to_user+0x32/0x50 [] sys_ipc+0x49/0x280 [] do_notify_resume+0x3c/0x40 [] do_page_fault+0x0/0x590 [] work_notifysig+0x13/0x19 ======================= apache D C174FBE0 0 2960 2951 (NOTLB) f66e3ea0 00000082 00000246 c174fbe0 c0434da8 c0146a8f 00000246 00000000 000000b4 c0142b08 c0434f18 00000044 c0434f14 c014290b f64fe550 f64fe15c 00000000 0000012b 91364cff 000001b3 00000000 00000000 00000000 f66e3f08 Call Trace: [] buffered_rmqueue+0x9f/0x100 [] find_lock_page+0x18/0x70 [] unlock_page+0x1b/0x30 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_no_page+0x162/0x280 [] __handle_mm_fault+0xfa/0x1e0 [] do_page_fault+0x39b/0x590 [] copy_to_user+0x32/0x50 [] sys_ipc+0x49/0x280 [] do_notify_resume+0x3c/0x40 [] do_page_fault+0x0/0x590 [] work_notifysig+0x13/0x19 
======================= munin-node D 00000000 0 3075 1 (NOTLB) f670bea0 00000086 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 f64ff090 f64fe65c 00000000 000002b1 91365fd8 000001b3 00000000 00000000 00000000 f670bf08 Call Trace: [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] do_wp_page+0x299/0x3b0 [] __handle_mm_fault+0x1b6/0x1e0 [] do_page_fault+0x39b/0x590 [] copy_to_user+0x32/0x50 [] sys_select+0x149/0x170 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= getty D F64FF19C 0 3095 1 (NOTLB) f677dea0 00000086 f66f55b0 f64ff19c 00000000 001179b7 abc5f29a 0000000f c1a5e008 7fffffff f77ed800 f677defc bfec340b c036a025 f66f55b0 f64ff19c f77ed800 000002b0 913672ae 000001b3 00000000 00000000 0804b214 f677df08 Call Trace: [] schedule_timeout+0x95/0xa0 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] default_wake_function+0x0/0x10 [] tty_write+0x94/0x1f0 [] tty_read+0x82/0xc0 [] vfs_read+0x86/0x110 [] sys_read+0x47/0x80 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= getty D F66F56BC 0 3096 1 (NOTLB) f677fea0 00000086 f66df570 f66f56bc f66ef400 00002ba6 abc72425 0000000f f66ef400 7fffffff f66ef400 f677fefc bf98d6cb c036a025 f66df570 f66f56bc f665e008 0000017c 91367d18 000001b3 00000000 00000000 0804b214 f677ff08 Call Trace: [] schedule_timeout+0x95/0xa0 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] default_wake_function+0x0/0x10 [] tty_write+0x94/0x1f0 [] tty_read+0x82/0xc0 [] vfs_read+0x86/0x110 [] sys_read+0x47/0x80 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= getty D F66DF67C 0 3097 1 (NOTLB) f6781ea0 00000082 f66e8090 f66df67c f651f000 0000232c abc81a59 0000000f f651f000 7fffffff f651f000 f6781efc bfb5989b c036a025 f66e8090 f66df67c f651f408 00000171 91368732 000001b3 00000000 
00000000 0804b214 f6781f08 Call Trace: [] schedule_timeout+0x95/0xa0 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] default_wake_function+0x0/0x10 [] tty_write+0x94/0x1f0 [] tty_read+0x82/0xc0 [] vfs_read+0x86/0x110 [] sys_read+0x47/0x80 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= getty D F66E819C 0 3098 1 (NOTLB) f6783ea0 00000086 f66e7ad0 f66e819c f6f06c00 00002bab abc94c07 0000000f f6f06c00 7fffffff f6f06c00 f6783efc bfffad3b c036a025 f66e7ad0 f66e819c f66eac08 0000016f 9136913e 000001b3 00000000 00000000 0804b214 f6783f08 Call Trace: [] schedule_timeout+0x95/0xa0 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] default_wake_function+0x0/0x10 [] tty_write+0x94/0x1f0 [] tty_read+0x82/0xc0 [] vfs_read+0x86/0x110 [] sys_read+0x47/0x80 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= getty D F66E7BDC 0 3102 1 (NOTLB) f67a9ea0 00000082 f66f50b0 f66e7bdc f66f2000 00002c4c abca821e 0000000f f66f2000 7fffffff f66f2000 f67a9efc bf878dbb c036a025 f66f50b0 f66e7bdc f668b008 0000015c 91369ac3 000001b3 00000000 00000000 0804b214 f67a9f08 Call Trace: [] schedule_timeout+0x95/0xa0 [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] default_wake_function+0x0/0x10 [] tty_write+0x94/0x1f0 [] tty_read+0x82/0xc0 [] vfs_read+0x86/0x110 [] sys_read+0x47/0x80 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= getty D F66F51BC 0 3103 1 (NOTLB) f67abea0 00000082 00000000 f66f51bc c195b800 00002659 abcb8e91 0000000f c195b800 7fffffff 91346dc8 000001b3 0000efed 00000000 f7cb15b0 f66f51bc 00000079 0000019c 9136a60a 000001b3 00000000 00000000 0804b214 f67abf08 Call Trace: [] refrigerator+0x3c/0x50 [] get_signal_to_deliver+0x227/0x230 [] do_signal+0x65/0x140 [] default_wake_function+0x0/0x10 [] tty_write+0x94/0x1f0 [] tty_read+0x82/0xc0 [] vfs_read+0x86/0x110 [] 
sys_read+0x47/0x80 [] do_notify_resume+0x3c/0x40 [] work_notifysig+0x13/0x19 ======================= hibernate D F65F0C64 0 3907 2761 (NOTLB) f67a5cd0 00000082 f786c800 f65f0c64 f7ea23b8 c02bcdef f7ea2040 c1ad1180 f78b3400 f7ea23b8 5b7f815f 00000009 00003442 00000000 c19615d0 f73a5b5c 0000006e 0005cef4 5b7f815f 00000009 f7ea23b8 f67a5db8 f65f0c64 f67a5cec Call Trace: [] scsi_prep_fn+0x8f/0x130 [] wait_for_completion+0x64/0xa0 [] default_wake_function+0x0/0x10 [] __generic_unplug_device+0x2b/0x30 [] default_wake_function+0x0/0x10 [] blk_execute_rq+0xa7/0xe0 [] blk_end_sync_rq+0x0/0x30 [] irq_exit+0x42/0x70 [] smp_apic_timer_interrupt+0x30/0x40 [] apic_timer_interrupt+0x28/0x30 [] scsi_execute+0xb8/0x110 [] scsi_execute_req+0x6b/0x90 [] sd_start_stop_device+0x70/0x120 [] printk+0x17/0x20 [] sd_resume+0x55/0xa0 [] scsi_bus_resume+0x6f/0x80 [] resume_device+0x136/0x190 [] kobject_get+0x15/0x20 [] get_device+0x11/0x20 [] dpm_resume+0xbd/0xc0 [] device_resume+0x1b/0x40 [] hibernate+0x103/0x1a0 [] state_store+0xc5/0x100 [] state_store+0x0/0x100 [] sysfs_write_file+0x0/0x80 [] subsys_attr_store+0x3f/0x50 [] flush_write_buffer+0x2e/0x40 [] sysfs_write_file+0x65/0x80 [] vfs_write+0x89/0x110 [] sys_write+0x47/0x80 [] sys_dup2+0x9b/0xd0 [] syscall_call+0x7/0xb ======================= From owner-xfs@oss.sgi.com Wed Jun 13 11:46:26 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 11:46:29 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.7 required=5.0 tests=AWL,BAYES_95,J_CHICKENPOX_45, J_CHICKENPOX_56,RCVD_NUMERIC_HELO autolearn=no version=3.2.0-pre1-r499012 Received: from mail34.messagelabs.com (mail34.messagelabs.com [216.82.241.35]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5DIkOWt027931 for ; Wed, 13 Jun 2007 11:46:25 -0700 X-VirusChecked: Checked X-Env-Sender: Rene.Salmon@bp.com X-Msg-Ref: server-10.tower-34.messagelabs.com!1181760383!49196303!1 
X-StarScan-Version: 5.5.12.11; banners=-,-,- X-Originating-IP: [129.230.248.44] Received: (qmail 18470 invoked from network); 13 Jun 2007 18:46:24 -0000 Received: from unknown (HELO bp1xeuav001.bp1.ad.bp.com) (129.230.248.44) by server-10.tower-34.messagelabs.com with SMTP; 13 Jun 2007 18:46:24 -0000 Received: from BP1XEUEX033.bp1.ad.bp.com ([149.184.176.167]) by bp1xeuav001.bp1.ad.bp.com with InterScan Messaging Security Suite; Wed, 13 Jun 2007 19:46:21 +0100 Received: from BP1XEUEX706-C.bp1.ad.bp.com ([149.182.218.95]) by BP1XEUEX033.bp1.ad.bp.com with Microsoft SMTPSVC(6.0.3790.1830); Wed, 13 Jun 2007 19:46:21 +0100 Received: from 149.179.228.36 ([149.179.228.36]) by BP1XEUEX706-C.bp1.ad.bp.com ([149.182.218.28]) with Microsoft Exchange Server HTTP-DAV ; Wed, 13 Jun 2007 18:46:21 +0000 Received: from holwrs01 by bp1xeuex706-c.bp1.ad.bp.com; 13 Jun 2007 13:46:20 -0500 Subject: Re: sunit not working From: "Salmon, Rene" To: nscott@aconex.com, David Chinner Cc: salmr0@bp.com, xfs@oss.sgi.com In-Reply-To: <1181690478.3758.108.camel@edge.yarra.acx> References: <1181606134.7873.72.camel@holwrs01> <1181608444.3758.73.camel@edge.yarra.acx> <902286657-1181653953-cardhu_decombobulator_blackberry.rim.net-1527539029-@bxe120.bisx.prod.on.blackberry> <1181690478.3758.108.camel@edge.yarra.acx> Content-Type: text/plain Content-Transfer-Encoding: 7bit Date: Wed, 13 Jun 2007 13:46:20 -0500 Message-Id: <1181760380.8754.53.camel@holwrs01> Mime-Version: 1.0 X-Mailer: Evolution 2.8.2 X-OriginalArrivalTime: 13 Jun 2007 18:46:21.0687 (UTC) FILETIME=[22D85070:01C7ADEB] X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11766 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: Rene.Salmon@bp.com Precedence: bulk X-list: xfs Hi, More details on this: Using dd with various block sizes to measure write performance only for now. 
This is using two options to dd: oflag=direct for direct I/O and conv=fsync for buffered I/O.

Using direct:
/usr/bin/time -p dd of=/mnt/testfile if=/dev/zero oflag=direct

Using fsync:
/usr/bin/time -p dd of=/mnt/testfile if=/dev/zero conv=fsync

Using a 2Gbit/sec fiber channel card my theoretical max is 256 MBytes/sec. If we allow a bit of overhead for the card driver and such, the manufacturer claims the card should be able to max out at around 200 MBytes/sec.

The block sizes I used range from 128KBytes to 1024000KBytes, and all the writes generate a 1.0GB file.

Some of the results I got:

Buffered I/O (fsync):
--------------------
Linux seems to do a good job at buffering this. Regardless of the block size I choose, I always get write speeds of around 150MBytes/sec.

Direct I/O (direct):
-------------------
The speeds I get here are, of course, very dependent on the block size I choose and how well it aligns with the stripe size of the storage array underneath. For the appropriate block sizes I get really good performance, about 200MBytes/sec.

From your feedback it sounds like these are reasonable numbers. Most of our user apps do not use direct I/O but rather buffered I/O. Is 150MBytes/sec as good as it gets for buffered I/O, or is there something I can tune to get a bit more out of buffered I/O?

Thanks
Rene

> >
> > Thanks that helps. Now that I know I have the right sunit and swidth
> > I have a performace related question.
> >
> > If I do a dd on the raw device or to the lun directy I get speeds of
> > around 190-200 MBytes/sec.
> >
> > As soon as I add xfs on top of the lun my speeds go to around 150
> > MBytes/sec. This is for a single stream write using various block
> > sizes on a 2 Gbit/sec fiber channel card.
> >
>
> Reads or writes?
> What are your I/O sizes?
> Buffered or direct IO?
> Including fsync time in there or not? etc, etc.
>
> (Actual dd commands used and their output results would be best)
> xfs_io is pretty good for this kind of analysis, as it gives very
> fine grained control of operations performed, has integrated bmap
> command, etc - use the -F flag for the raw device comparisons).
>
> > Is this overhead more or less what you would expect from xfs? Or is
> > there some tunning I need to do?
>
> You should be able to get very close to raw device speeds esp. for a
> single stream reader/writer, with some tuning.
>
> cheers.
>
From owner-xfs@oss.sgi.com Wed Jun 13 14:04:59 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 14:05:04 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.0 required=5.0 tests=BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from smtp2.linux-foundation.org (smtp2.linux-foundation.org [207.189.120.14]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5DL4wWt000628 for ; Wed, 13 Jun 2007 14:04:59 -0700 Received: from imap1.linux-foundation.org (imap1.linux-foundation.org [207.189.120.55]) by smtp2.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with ESMTP id l5DL4s2D026366 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Wed, 13 Jun 2007 14:04:55 -0700 Received: from localhost (localhost [127.0.0.1]) by imap1.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with ESMTP id l5DL4mXd010642; Wed, 13 Jun 2007 14:04:48 -0700 Date: Wed, 13 Jun 2007 14:04:48 -0700 (PDT) From: Linus Torvalds To: David Greaves cc: David Chinner , Tejun Heo , "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik Subject: Re: 2.6.22-rc3 hibernate(?)
fails totally - regression (xfs on raid6) In-Reply-To: <466FD214.9070603@dgreaves.com> Message-ID: References: <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> <4667DE2D.6050903@dgreaves.com> <20070607110708.GS86004887@sgi.com> <46680F5E.6070806@dgreaves.com> <20070607222813.GG85884050@sgi.com> <4669A965.20403@dgreaves.com> <466FD214.9070603@dgreaves.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=us-ascii X-MIMEDefang-Filter: osdl$Revision: 1.181 $ X-Scanned-By: MIMEDefang 2.53 on 207.189.120.14 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11767 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: torvalds@linux-foundation.org Precedence: bulk X-list: xfs On Wed, 13 Jun 2007, David Greaves wrote: > > git-bisect bad > 9666f4009c22f6520ac3fb8a19c9e32ab973e828 is first bad commit > commit 9666f4009c22f6520ac3fb8a19c9e32ab973e828 > Author: Tejun Heo > Date: Fri May 4 21:27:47 2007 +0200 > > libata: reimplement suspend/resume support using sdev->manage_start_stop > > Good. Ok, good. So the bug is apparently in the generic SCSI layer start/stop handling. I'm not entirely surprised, most people would never have triggered it (I _think_ it's disabled by default for all devices, and that the libata-scsi.c change was literally the first thing to ever enable it by default for anything!) > So here's a sysrq-t from a failed resume. Ask if you'd like anything else... I'm not seeing anything really obvious. The traces would probably look better if you enabled CONFIG_FRAME_POINTER, though. That should cut down on some of the noise and make the traces a bit more readable. 
"hibernate" is definitely stuck on the new code: it's in the "sd_start_stop_device()" call-chain, but I note that ata_aux at the same time is also doing some sd_spinup_disk logic as part of rescanning. Maybe that's part of the confusion: trying to rescan the bus at the same time upper layers (who already *know* the disks that are there) are trying to spin up the devices. Tejun? Jeff? Linus --- some per-thread commentary --- There's the worrisome thing we've seen before, with the "events" thread apparently busy-looping: > events/0 R running 0 5 2 (L-TLB) but the more fundamental problem would seem to be ata_aux being in some long (infinite?) disk wait: > ata_aux D F65F09A4 0 122 2 (L-TLB) > c19dfd20 00000046 f786c800 f65f09a4 f7ea23b8 c02bcdef ce2e3b9f 4d64703f > ffffcfff f7ea23b8 00000082 f7ea2418 0005eb1d 00000082 f7c9f0b0 c196913c > f786c800 000003d4 605db977 00000009 f7ea23b8 c19dfe08 f65f09a4 c19dfd3c > Call Trace: > [] scsi_prep_fn+0x8f/0x130 > [] wait_for_completion+0x64/0xa0 > [] default_wake_function+0x0/0x10 > [] __generic_unplug_device+0x2b/0x30 > [] default_wake_function+0x0/0x10 > [] blk_execute_rq+0xa7/0xe0 > [] blk_end_sync_rq+0x0/0x30 > [] buffered_rmqueue+0x9f/0x100 > [] get_page_from_freelist+0x80/0xc0 > [] scsi_execute+0xb8/0x110 > [] scsi_execute_req+0x6b/0x90 > [] sd_spinup_disk+0x76/0x440 > [] sd_revalidate_disk+0x6e/0x160 > [] __scsi_disk_get+0x34/0x40 > [] sd_rescan+0x1d/0x40 > [] scsi_rescan_device+0x40/0x50 > [] ata_scsi_dev_rescan+0x5c/0x70 > [] ata_scsi_dev_rescan+0x0/0x70 > [] run_workqueue+0x4a/0xf0 > [] worker_thread+0xcd/0xf0 > [] autoremove_wake_function+0x0/0x50 > [] autoremove_wake_function+0x0/0x50 > [] worker_thread+0x0/0xf0 > [] kthread+0x6a/0x70 > [] kthread+0x0/0x70 > [] kernel_thread_helper+0x7/0x3c And scsi_eh_4/5 seems to be potentially doing something too: > scsi_eh_4 S F786C800 0 803 2 (L-TLB) > f7fc3fc0 00000046 f786c800 f786c800 f7ea23b8 c02bd166 f7964044 f7ea23b8 > 00000292 c021f7f2 6154d436 00000009 002f304e 00000000 
c1932550 f7d0915c > 0000006e 000ec1d6 6154d436 00000009 00000000 f7964000 c02bb740 fffffffc > Call Trace: > [] scsi_request_fn+0x196/0x280 > [] blk_remove_plug+0x32/0x70 > [] scsi_error_handler+0x0/0xa0 > [] scsi_error_handler+0x41/0xa0 > [] scsi_error_handler+0x0/0xa0 > [] kthread+0x6a/0x70 > [] kthread+0x0/0x70 > [] kernel_thread_helper+0x7/0x3c > ======================= > scsi_eh_5 S F786C400 0 805 2 (L-TLB) > f7e3ffc0 00000046 f786c400 f786c400 f7ea2730 c02bd166 f786cc44 f7ea2730 > 00000292 c021f7f2 67583eed 00000009 002f0e42 00000000 c19615d0 f7c9f1bc > 0000006e 000ec010 67583eed 00000009 00000000 f786cc00 c02bb740 fffffffc > Call Trace: > [] scsi_request_fn+0x196/0x280 > [] blk_remove_plug+0x32/0x70 > [] scsi_error_handler+0x0/0xa0 > [] scsi_error_handler+0x41/0xa0 > [] scsi_error_handler+0x0/0xa0 > [] kthread+0x6a/0x70 > [] kthread+0x0/0x70 > [] kernel_thread_helper+0x7/0x3c .. and here it's starting to get interesting: what is md_raid5 hung on? > md0_raid5 D 1D3836C7 0 836 2 (L-TLB) > f7cbfdf0 00000046 f7ea3888 1d3836c7 00000008 c0221036 06a47600 00011210 > c191ede0 c01454ba 5324bddc 00000009 0002fb08 00000000 f7d09050 f7e2c6bc > 0000006e 0000320f 532bcd0b 00000009 f7ea3888 c1a4f000 f7cbfe18 c1a4f13c > Call Trace: > [] generic_make_request+0x146/0x1d0 > [] mempool_alloc+0x2a/0xc0 > [] md_super_wait+0x7e/0xc0 > [] autoremove_wake_function+0x0/0x50 > [] bio_clone+0x31/0x40 > [] autoremove_wake_function+0x0/0x50 > [] write_sb_page+0x50/0x80 > [] write_page+0x112/0x120 > [] sync_sbs+0x77/0xe0 > [] bitmap_update_sb+0x69/0xa0 > [] md_update_sb+0x138/0x2c0 > [] schedule+0x2e5/0x580 > [] md_check_recovery+0x2dd/0x360 > [] raid5d+0x10/0xe0 > [] md_thread+0x55/0x110 > [] autoremove_wake_function+0x0/0x50 > [] autoremove_wake_function+0x0/0x50 > [] md_thread+0x0/0x110 > [] kthread+0x6a/0x70 > [] kthread+0x0/0x70 > [] kernel_thread_helper+0x7/0x3c and here's the hibernate damon itself, doing the "sd_start_stop_device()" that is supposed to get the ball rolling, but it 
seems to be another infinite wait: > hibernate D F65F0C64 0 3907 2761 (NOTLB) > f67a5cd0 00000082 f786c800 f65f0c64 f7ea23b8 c02bcdef f7ea2040 c1ad1180 > f78b3400 f7ea23b8 5b7f815f 00000009 00003442 00000000 c19615d0 f73a5b5c > 0000006e 0005cef4 5b7f815f 00000009 f7ea23b8 f67a5db8 f65f0c64 f67a5cec > Call Trace: > [] scsi_prep_fn+0x8f/0x130 > [] wait_for_completion+0x64/0xa0 > [] default_wake_function+0x0/0x10 > [] __generic_unplug_device+0x2b/0x30 > [] default_wake_function+0x0/0x10 > [] blk_execute_rq+0xa7/0xe0 > [] blk_end_sync_rq+0x0/0x30 > [] irq_exit+0x42/0x70 > [] smp_apic_timer_interrupt+0x30/0x40 > [] apic_timer_interrupt+0x28/0x30 > [] scsi_execute+0xb8/0x110 > [] scsi_execute_req+0x6b/0x90 > [] sd_start_stop_device+0x70/0x120 > [] printk+0x17/0x20 > [] sd_resume+0x55/0xa0 > [] scsi_bus_resume+0x6f/0x80 > [] resume_device+0x136/0x190 > [] kobject_get+0x15/0x20 > [] get_device+0x11/0x20 > [] dpm_resume+0xbd/0xc0 > [] device_resume+0x1b/0x40 > [] hibernate+0x103/0x1a0 > [] state_store+0xc5/0x100 > [] state_store+0x0/0x100 > [] sysfs_write_file+0x0/0x80 > [] subsys_attr_store+0x3f/0x50 > [] flush_write_buffer+0x2e/0x40 > [] sysfs_write_file+0x65/0x80 > [] vfs_write+0x89/0x110 > [] sys_write+0x47/0x80 > [] sys_dup2+0x9b/0xd0 > [] syscall_call+0x7/0xb > ======================= > From owner-xfs@oss.sgi.com Wed Jun 13 14:20:00 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 14:20:04 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.2 required=5.0 tests=BAYES_99,J_CHICKENPOX_45, J_CHICKENPOX_56,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.184]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5DLJwWt002942 for ; Wed, 13 Jun 2007 14:20:00 -0700 Received: from [212.227.126.200] (helo=mrvnet.kundenserver.de) by moutng.kundenserver.de with esmtp (Exim 3.35 #1) id 
1HyYB3-0003ls-00; Wed, 13 Jun 2007 21:06:45 +0200 Received: from [172.23.1.26] (helo=xchgsmtp.exchange.xchg) by mrvnet.kundenserver.de with smtp (Exim 3.35 #1) id 1HyYB3-0007EZ-02; Wed, 13 Jun 2007 21:06:45 +0200 Received: from mapibe17.exchange.xchg ([172.23.1.54]) by xchgsmtp.exchange.xchg with Microsoft SMTPSVC(6.0.3790.3959); Wed, 13 Jun 2007 21:06:41 +0200 X-MimeOLE: Produced By Microsoft Exchange V6.5 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Subject: RE: sunit not working Date: Wed, 13 Jun 2007 21:03:40 +0200 Message-ID: <55EF1E5D5804A542A6CA37E446DDC206F03185@mapibe17.exchange.xchg> In-Reply-To: <1181760380.8754.53.camel@holwrs01> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: sunit not working thread-index: Acet61PkpKbdWZJYSmaRG7CK+YAdIQAASR8g References: <1181606134.7873.72.camel@holwrs01> <1181608444.3758.73.camel@edge.yarra.acx> <902286657-1181653953-cardhu_decombobulator_blackberry.rim.net-1527539029-@bxe120.bisx.prod.on.blackberry> <1181690478.3758.108.camel@edge.yarra.acx> <1181760380.8754.53.camel@holwrs01> From: "Sebastian Brings" To: "Salmon, Rene" Cc: X-OriginalArrivalTime: 13 Jun 2007 19:06:41.0331 (UTC) FILETIME=[F9CF3830:01C7ADED] X-Provags-ID: kundenserver.de abuse@kundenserver.de ident:@172.23.1.26 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id l5DLK0Wt002956 X-archive-position: 11768 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sebas@silexmedia.com Precedence: bulk X-list: xfs Is 1 GB a reasonable file size in your environment? Also, most user apps don't use fsync, but maybe I missed something. Not knowing your storage vendor, the numbers look pretty good to me, but the way you tested this is close to a benchmarking environment.
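[Editorial note: the fsync-vs-plain-buffered distinction raised here can be seen directly with dd. A minimal sketch, with an illustrative temp file and sizes not taken from the thread:]

```shell
#!/bin/sh
# Sketch of the comparison: same 8 MiB write, with and without fsync.
f=$(mktemp)

# Plain buffered write: dd returns once the data is in the page cache,
# so the rate it reports can exceed what the disk actually sustains.
dd if=/dev/zero of="$f" bs=1M count=8

# conv=fsync makes dd fsync() the file before exiting, so the reported
# rate also includes the time to flush the cached data to disk.
dd if=/dev/zero of="$f" bs=1M count=8 conv=fsync

rm -f "$f"
```

On a fast array the first reported rate can be far higher than the second; the conv=fsync figure is the one comparable to the 150 MBytes/sec "buffered" numbers in the thread.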
Cheers Sebastian > -----Original Message----- > From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com] On Behalf Of Salmon, Rene > Sent: Mittwoch, 13. Juni 2007 20:46 > To: nscott@aconex.com; David Chinner > Cc: salmr0@bp.com; xfs@oss.sgi.com > Subject: Re: sunit not working > > > Hi, > > More details on this: > > Using dd with various block sizes to measure write performance only for > now. > > This is using two options to dd. The direct I/O option for direct i/o > and the fsync option for buffered i/o. > > Using direct: > /usr/bin/time -p dd of=/mnt/testfile if=/dev/zero oflag=direct > > Using fsync: > /usr/bin/time -p dd of=/mnt/testfile if=/dev/zero conv=fsync > > Using a 2Gbit/sec fiber channel card my theoretical max is 256 > MBytes/sec. If we allow a bit of overhead for the card driver and > things the manufacturer claims the card should be able to max out at > around 200 MBytes/sec. > > The block sizes I used range from 128KBytes - 1024000Kbytes and all the > writes generate a 1.0GB file. > > Some of the results I got: > > Buffered I/O(fsync): > -------------------- > Linux seems to do a good job at buffering this. Regardless of the block > size I choose I always get write speeds of around 150MBytes/sec > > Direct I/O(direct): > ------------------- > The speeds I get here of course are very dependent on the block size I > choose and how well they align with the stripe size of the storage array > underneath. For the appropriate block sizes I get really good > performance about 200MBytes/sec. > > > >From your feedback is sounds like these are reasonable numbers. > Most of our user apps do not use direct I/O but rather buffered I/O. Is > 150MBytes/sec as good as it gets for buffered I/O or is there something > I can tune to get a bit more out of buffered I/O? > > Thanks > Rene > > > > > > > > > > Thanks that helps. Now that I know I have the right sunit and swidth > > > I have a performace related question. 
> > > > > > If I do a dd on the raw device or to the lun directy I get speeds of > > > around 190-200 MBytes/sec. > > > > > > As soon as I add xfs on top of the lun my speeds go to around 150 > > > MBytes/sec. This is for a single stream write using various block > > > sizes on a 2 Gbit/sec fiber channel card. > > > > > > > Reads or writes? > > What are your I/O sizes? > > Buffered or direct IO? > > Including fsync time in there or not? etc, etc. > > > > (Actual dd commands used and their output results would be best) > > xfs_io is pretty good for this kind of analysis, as it gives very > > fine grained control of operations performed, has integrated bmap > > command, etc - use the -F flag for the raw device comparisons). > > > > > Is this overhead more or less what you would expect from xfs? Or is > > > there some tunning I need to do? > > > > You should be able to get very close to raw device speeds esp. for a > > single stream reader/writer, with some tuning. > > > > cheers. > > > From owner-xfs@oss.sgi.com Wed Jun 13 14:57:43 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 14:57:47 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.0 required=5.0 tests=BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.dvmed.net (srv5.dvmed.net [207.36.208.214]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5DLvgWt009653 for ; Wed, 13 Jun 2007 14:57:43 -0700 Received: from cpe-065-190-165-210.nc.res.rr.com ([65.190.165.210] helo=[10.10.10.10]) by mail.dvmed.net with esmtpsa (Exim 4.63 #1 (Red Hat Linux)) id 1HyaIt-0001zJ-FR; Wed, 13 Jun 2007 21:23:00 +0000 Message-ID: <4670602F.9000908@pobox.com> Date: Wed, 13 Jun 2007 17:22:55 -0400 From: Jeff Garzik User-Agent: Thunderbird 1.5.0.12 (X11/20070530) MIME-Version: 1.0 To: Linus Torvalds CC: David Greaves , David Chinner , Tejun Heo , "Rafael J. 
Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown Subject: Re: 2.6.22-rc3 hibernate(?) fails totally - regression (xfs on raid6) References: <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> <4667DE2D.6050903@dgreaves.com> <20070607110708.GS86004887@sgi.com> <46680F5E.6070806@dgreaves.com> <20070607222813.GG85884050@sgi.com> <4669A965.20403@dgreaves.com> <466FD214.9070603@dgreaves.com> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11769 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jgarzik@pobox.com Precedence: bulk X-list: xfs Linus Torvalds wrote: > Ok, good. So the bug is apparently in the generic SCSI layer start/stop > handling. I'm not entirely surprised, most people would never have > triggered it (I _think_ it's disabled by default for all devices, and that > the libata-scsi.c change was literally the first thing to ever enable it > by default for anything!) I haven't looked at this yet, but wanted to confirm your assessment here: libata was indeed the first (and still only?) user of this code path. Since some SCSI devices may not be owned by the host computer, in the power management sense, we don't want to turn that on for all SCSI devices. Otherwise you wind up powering off a device in another building :) This is basically the libata suspend/resume path, even though bits touch generic SCSI. 
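[Editorial note, not from Jeff's mail: in kernels of this era the per-disk policy was exposed by sd as a sysfs attribute, so it could be inspected or disabled while debugging. The path layout and the 0:0:0:0 device id below are assumptions; substitute the disk in question.]

```shell
# Hedged sketch: read the per-disk start/stop policy (assumed sysfs path).
# 1 means sd issues START/STOP on suspend/resume; 0 means it does not.
cat /sys/class/scsi_disk/0:0:0:0/manage_start_stop

# Possible workaround while bisecting suspend/resume hangs: turn it off.
echo 0 > /sys/class/scsi_disk/0:0:0:0/manage_start_stop
```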
Jeff From owner-xfs@oss.sgi.com Wed Jun 13 15:02:22 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 15:02:29 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.6 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_35, J_CHICKENPOX_36 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5DM2KWt010878 for ; Wed, 13 Jun 2007 15:02:21 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 2A226E6D7F; Wed, 13 Jun 2007 23:02:14 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id i37j2YIw+Rrt; Wed, 13 Jun 2007 22:59:09 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 0CC13E6D81; Wed, 13 Jun 2007 23:02:12 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1Hyauv-0007Ea-F7; Wed, 13 Jun 2007 23:02:17 +0100 Message-ID: <46706968.7000703@dgreaves.com> Date: Wed, 13 Jun 2007 23:02:16 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: Linus Torvalds Cc: David Chinner , Tejun Heo , "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik Subject: Re: 2.6.22-rc3 hibernate(?) 
fails totally - regression (xfs on raid6) References: <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> <4667DE2D.6050903@dgreaves.com> <20070607110708.GS86004887@sgi.com> <46680F5E.6070806@dgreaves.com> <20070607222813.GG85884050@sgi.com> <4669A965.20403@dgreaves.com> <466FD214.9070603@dgreaves.com> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11770 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs Linus Torvalds wrote: > > On Wed, 13 Jun 2007, David Greaves wrote: >> git-bisect bad >> 9666f4009c22f6520ac3fb8a19c9e32ab973e828 is first bad commit >> commit 9666f4009c22f6520ac3fb8a19c9e32ab973e828 >> Author: Tejun Heo >> Date: Fri May 4 21:27:47 2007 +0200 >> >> libata: reimplement suspend/resume support using sdev->manage_start_stop >> >> Good. > > Ok, good. So the bug is apparently in the generic SCSI layer start/stop > handling. I'm not entirely surprised, most people would never have > triggered it (I _think_ it's disabled by default for all devices, and that > the libata-scsi.c change was literally the first thing to ever enable it > by default for anything!) > >> So here's a sysrq-t from a failed resume. Ask if you'd like anything else... > > I'm not seeing anything really obvious. The traces would probably look > better if you enabled CONFIG_FRAME_POINTER, though. That should cut down > on some of the noise and make the traces a bit more readable. I can do that... 
> "hibernate" is definitely stuck on the new code: it's in the > "sd_start_stop_device()" call-chain, but I note that ata_aux at the same > time is also doing some sd_spinup_disk logic as part of rescanning. Maybe > that's part of the confusion: trying to rescan the bus at the same time > upper layers (who already *know* the disks that are there) are trying to > spin up the devices. > > Tejun? Jeff? SysRq : Show State free sibling task PC stack pid father child younger older init D 28D10C50 0 1 0 (NOTLB) c1941ea0 00000082 46706775 28d10c50 46706775 28d10c50 00001000 f64b4000 c1941e80 c04250e0 d5151017 00000018 00002e2e 00000000 f7cb15b0 c192eb3c 00000073 00000207 d5157d78 00000018 c1941ea0 00000000 00000000 c1941f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= kthreadd S C192E530 0 2 0 (L-TLB) c1943fd0 00000046 00000000 c192e530 c01175f0 00000000 f6967ed4 00000000 00000003 00000292 f6967eb8 00000000 c1943fc0 00000292 f7122090 c192e63c c1943fc0 00000060 d3a629f3 0000000b c1943fd0 c042d298 00000000 00000000 Call Trace: [] kthreadd+0x74/0xa0 [] kernel_thread_helper+0x7/0x4c ======================= ksoftirqd/0 S C04903C8 0 3 2 (L-TLB) c1945fb0 00000046 00000000 c04903c8 c1945f70 00000046 c1932550 c192e140 c1945f80 c011ee7c c6c1e024 00000000 c1945fa0 c011ec21 f6ba0ad0 c192e13c c1945fa0 00000db7 1ba6eda3 00000009 c1945fb0 00000000 c011ef80 fffffffc Call Trace: [] ksoftirqd+0x7b/0x90 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= watchdog/0 S C0364CA5 0 4 2 (L-TLB) c1947fb0 00000046 c1943f70 c0364ca5 c1947f70 00000292 c048b0e0 c1932a50 ac6b4e00 000011ef f7d635a7 00000008 c1947fb0 00000000 c1932550 c1932b5c c1947fa0 00000a3e 7beb1af2 00000004 c1947fb0 00000000 c0140310 fffffffc Call Trace: [] watchdog+0x4e/0x70 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= events/0 R running 0 
5 2 (L-TLB) khelper S 00000000 0 6 2 (L-TLB) c194bf60 00000046 00000000 00000000 c194bf20 00000001 f6bf8160 c0127a40 c194bf30 c0127a67 f6bf8160 c1914c20 c194bf60 c0127ebd f65890b0 c193215c ad284200 000008d8 0b3c4782 0000000f 00000246 c1914c20 c194bf88 c1914c28 Call Trace: [] worker_thread+0xec/0xf0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= kblockd/0 S F7EA2438 0 35 2 (L-TLB) c19a7f60 00000046 c19146e0 f7ea2438 c19a7f20 c021ac9c c19146e0 c021acc0 c19a7f30 c021acce 2bfa8b70 00000009 c19a7f60 c0127ebd c19b3a50 c19616dc 0000006e 00000047 e0f8a696 00000010 00000246 c19146e0 c19a7f88 c19146e8 Call Trace: [] worker_thread+0xec/0xf0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= kacpid S C192E530 0 36 2 (L-TLB) c19a9f60 00000046 c048b0e0 c192e530 c19a9f20 c0116be1 c192e530 7c841b53 c19a9f40 c0116d2b 7c842264 00000004 00000087 00000000 c192e530 c19611dc 00000078 0000013f 7c8423a7 00000004 00000246 c19145e0 c19a9f88 c19145e8 Call Trace: [] worker_thread+0xec/0xf0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= kacpi_notify S C192E530 0 37 2 (L-TLB) c19abf60 00000046 c048b0e0 c192e530 c19abf20 c0116be1 c192e530 7c844295 c19abf40 c0116d2b 7cc769c0 00000004 0000015e 00000000 c1932050 c1969b3c 00000078 00000328 7cc76cfd 00000004 00000246 c19145a0 c19abf88 c19145a8 Call Trace: [] worker_thread+0xec/0xf0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= ata/0 S C192E530 0 121 2 (L-TLB) c19f5f60 00000046 00000001 c192e530 c19f5f20 c0116be1 c192e530 7dd68bdb c19f5f40 f7839f60 31d38c75 00000009 00023cf6 00000000 c19615d0 c1a021bc 0000006e 0000391c 31d38c75 00000009 00000246 c1a1a9e0 c19f5f88 c1a1a9e8 Call Trace: [] worker_thread+0xec/0xf0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= ata_aux D F7945000 0 122 2 (L-TLB) c19f7ce0 00000046 f7ea23b8 f7945000 c19f7ca0 c0121d67 c198d800 f7ea23b8 c19f7cb0 c02261b7 f6af7964 f7ea23b8 
c19f7cc0 c0293942 f7e865d0 c1a026bc c19f7cf0 000003ae 2dae0756 00000009 c19f7cf0 c19f7dc8 f6af7964 c19f7cfc Call Trace: [] wait_for_completion+0x64/0xa0 [] blk_execute_rq+0x8d/0xb0 [] scsi_execute+0xb8/0x110 [] scsi_execute_req+0x68/0x90 [] sd_spinup_disk+0x6d/0x400 [] sd_revalidate_disk+0x6b/0x160 [] sd_rescan+0x1f/0x30 [] scsi_rescan_device+0x42/0x50 [] ata_scsi_dev_rescan+0x60/0x70 [] run_workqueue+0x4d/0xf0 [] worker_thread+0xcd/0xf0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= kseriod D C0364CA5 0 123 2 (L-TLB) c19e9f50 00000046 f6723e40 c0364ca5 c19e9f60 00000046 c043fc08 00000000 c19e9f30 c0440de0 d568e80c 00000018 00000db6 00000000 c19b3550 c1a02bbc 0000006e 000000d3 d5690886 00000018 c19e9f50 00000000 c02d57a0 c19e9f98 Call Trace: [] refrigerator+0x3f/0x50 [] serio_thread+0xe9/0x100 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= pdflush D C0364CA5 0 145 2 (L-TLB) c1a49f50 00000046 f6a95f00 c0364ca5 c1a49f60 00000046 c048b0e0 c192e530 c1a49f20 fffee524 d5691002 00000018 000021c7 00000000 f7cfda70 c196913c 0000007d 00000088 d5696002 00000018 c1a49f50 00000000 c1a49f98 fffffffc Call Trace: [] refrigerator+0x3f/0x50 [] __pdflush+0x146/0x150 [] pdflush+0x25/0x30 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= pdflush D C0364CA5 0 146 2 (L-TLB) c1a4bf50 00000046 f6811f80 c0364ca5 c1a4bf60 00000046 c048b0e0 c192e530 00002afd c19615d0 d569120c 00000018 000018e4 00000000 f7cafa90 c19b3b5c 00000073 00000129 b6997622 00000009 c1a4bf50 00000000 c1a4bf98 fffffffc Call Trace: [] refrigerator+0x3f/0x50 [] __pdflush+0x146/0x150 [] pdflush+0x25/0x30 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= kswapd0 D C0364CA5 0 147 2 (L-TLB) c1999f40 00000046 c19e9f50 c0364ca5 c1999f50 00000046 c1999f28 0000000d 7054e600 0000120c d568ec56 00000018 00000ecb 00000000 c19b7070 c19b365c 0000006e 000000af d5690f62 00000018 c1999f40 00000000 c0430f44 00000000 Call 
Trace: [] refrigerator+0x3f/0x50 [] kswapd+0xd1/0x100 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= aio/0 S C192E530 0 148 2 (L-TLB) c19d1f60 00000046 c048b0e0 c192e530 c19d1f20 c0116be1 c192e530 831e4802 c19d1f40 c0116d2b 8320a84a 00000004 000000e6 00000000 c192e530 c19b315c 00000078 0000025c 8320aa6d 00000004 00000246 c194de20 c19d1f88 c194de28 Call Trace: [] worker_thread+0xec/0xf0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= xfslogd/0 S C194D9E0 0 149 2 (L-TLB) c19d3f60 00000046 f74c8140 c194d9e0 c195f4e0 f74c8140 c194d9e0 c020a5f0 c19d3f30 c020a629 fe7b2b4b f74c819c c19d3f60 c0127ebd f6ba0ad0 c19b7b7c 00000073 00000486 d8c04420 00000018 00000246 c194d9e0 c19d3f88 c194d9e8 Call Trace: [] worker_thread+0xec/0xf0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= xfsdatad/0 S F6E1F240 0 150 2 (L-TLB) c19d5f60 00000046 c048b0e0 f6e1f240 c19d5f20 c1a31be0 c1a31968 c194d9a0 c19d5f30 c0207f8e 832a8b34 c1a31c08 c19d5f60 c0127ebd c19b3a50 c19b767c 00000078 00000077 d5327a77 00000018 00000246 c194d9a0 c19d5f88 c194d9a8 Call Trace: [] worker_thread+0xec/0xf0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= scsi_eh_0 S 00000001 0 774 2 (L-TLB) f7e1dfb0 00000046 c199d044 00000001 00000003 00000246 00000296 00000246 00000000 00000246 c199d000 fffffffc f7e1df90 c02b822e c192ea30 f7d0915c f7e1dfb0 0019aedd 7711c10e 00000005 c199d000 c199d000 c02b7440 fffffffc Call Trace: [] scsi_error_handler+0x41/0xb0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= scsi_eh_1 S 00000001 0 776 2 (L-TLB) f7f63fb0 00000046 f7ff3c44 00000001 00000003 00000246 00000296 00000246 00000000 00000246 f7ff3c00 fffffffc f7f63f90 c02b822e c192ea30 f7d76b9c f7f63fb0 0019dca3 96bbf4e7 00000005 f7ff3c00 f7ff3c00 c02b7440 fffffffc Call Trace: [] scsi_error_handler+0x41/0xb0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= scsi_eh_2 
S 00000001 0 778 2 (L-TLB) f7d7ffb0 00000046 f7ff3844 00000001 00000003 00000246 00000296 00000246 00000000 00000246 f7ff3800 fffffffc f7d7ff90 c02b822e c192ea30 f7f56b7c f7d7ffb0 001a10a7 b67304f7 00000005 f7ff3800 f7ff3800 c02b7440 fffffffc Call Trace: [] scsi_error_handler+0x41/0xb0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= scsi_eh_3 S 00000001 0 780 2 (L-TLB) c1973fb0 00000046 f7ff3444 00000001 00000003 00000246 00000296 00000246 00000000 00000246 f7ff3400 fffffffc c1973f90 c02b822e c192ea30 f7d7217c c1973fb0 001c81ca d688c25d 00000005 f7ff3400 f7ff3400 c02b7440 fffffffc Call Trace: [] scsi_error_handler+0x41/0xb0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= scsi_eh_4 S F7945000 0 803 2 (L-TLB) c19a1fb0 00000046 c198d808 f7945000 c19a1f80 c02b44b2 00000296 00000246 c198d800 c198d800 2ea53f8d 00000009 002f2c50 00000000 c19615d0 f7cb4bdc 0000006e 000ec5ac 2ea53f8d 00000009 f7945000 f7945000 c02b7440 fffffffc Call Trace: [] scsi_error_handler+0x41/0xb0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= scsi_eh_5 S C198DC00 0 805 2 (L-TLB) f7957fb0 00000046 c198d408 c198dc00 f7957f80 c02b44b2 00000296 00000246 c198d400 c198d400 34285c1c 00000009 002f23b7 00000000 c19615d0 f7e866dc 0000006e 000ec3bc 34285c1c 00000009 c198dc00 c198dc00 c02b7440 fffffffc Call Trace: [] scsi_error_handler+0x41/0xb0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= kpsmoused S C192E530 0 826 2 (L-TLB) f7f65f60 00000046 c048b0e0 c192e530 f7f65f20 c0116be1 c192e530 40af11dc f7f65f40 c0116d2b 40af2e63 00000006 000000b8 00000000 c1932050 c196963c 00000078 0000002d 40af301a 00000006 00000246 f7898c20 f7f65f88 f7898c28 Call Trace: [] worker_thread+0xec/0xf0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= md0_raid5 D F7F4D000 0 836 2 (L-TLB) f78a3da0 00000046 f7f4d000 f7f4d000 f78a3da0 c01454b6 c048b0e0 f6ba0ad0 f78a3d70 c0116be1 00010ad0 
d5658227 f78a3d90 f7853000 f70c4ae0 f7f5617c f78a3de0 00003440 1be4a921 00000009 00000246 f7853000 f78a3dc8 f785313c Call Trace: [] md_super_wait+0x77/0xc0 [] write_sb_page+0x4f/0x80 [] write_page+0x102/0x110 [] bitmap_update_sb+0x89/0x90 [] md_update_sb+0x123/0x2a0 [] md_check_recovery+0x302/0x340 [] raid5d+0x12/0xf0 [] md_thread+0x56/0x110 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= xfsbufd D 00000018 0 838 2 (L-TLB) f74dff90 00000046 be2cd3b0 00000018 00000282 00000282 ffff3153 ffff3153 f74dff90 c0365572 d568effd 00000018 00001133 00000000 f7e86ad0 c19b717c 0000006e 000000ef d56918bc 00000018 f74dff90 00000000 00000000 f78981a0 Call Trace: [] refrigerator+0x3f/0x50 [] xfsbufd+0xf3/0x100 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= xfssyncd D 00000017 0 839 2 (L-TLB) f75a5f80 00000046 6b45182c 00000017 00000282 00000282 ffff4dc1 ffff4dc1 f75a5f80 c0365572 d568f322 00000018 00001289 00000000 f7cfd570 f7e86bdc 0000006e 000000a1 d5691f0a 00000018 f75a5f80 00000000 f75a5fa8 c19c83dc Call Trace: [] refrigerator+0x3f/0x50 [] xfssyncd+0x158/0x160 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= udevd D 00000005 0 920 1 (NOTLB) f7657ea0 00000082 08dd4c82 00000005 f7029005 c017bfc0 ffffffff ffffffff f7657e70 c017223e d5159415 00000018 00007ccf 00000000 f7e860d0 f7d72b7c 00000074 0000052e d516bbb0 00000018 f7657ea0 00000000 00000000 f7657f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= ksuspend_usbd S C192E530 0 1727 2 (L-TLB) f7025f60 00000046 c048b0e0 c192e530 f7025f20 c0116be1 c192e530 5513327e f7025f40 c0116d2b 00000002 f70fc5c0 c048b0e0 00000003 f7cfd570 f7cac67c 4ccd1400 0000006f 55e6d417 00000007 00000246 f77644a0 f7025f88 f77644a8 Call Trace: [] worker_thread+0xec/0xf0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c 
======================= khubd D C0364CA5 0 1728 2 (L-TLB) f71edf60 00000046 f75a5f80 c0364ca5 f71edf70 00000046 00000000 000003e8 f71edf30 f881d3e2 d568f7b8 00000018 00001510 00000000 c1961ad0 f7cfd67c 0000006e 0000010e d569299c 00000018 f71edf60 00000000 f88201c0 f71edf98 Call Trace: [] refrigerator+0x3f/0x50 [] hub_thread+0x56/0xf0 [usbcore] [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= ksnapd S C192E530 0 2131 2 (L-TLB) f7191f60 00000046 c048b0e0 c192e530 f7191f20 c0116be1 c192e530 d38fa60e f7191f40 c0116d2b cec67144 00000008 c048b0e0 00000003 f7d76090 f7122b9c 3e9b5600 000000a6 d39e34de 00000008 00000246 f77ceda0 f7191f88 f77ceda8 Call Trace: [] worker_thread+0xec/0xf0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= kjournald D C0364CA5 0 2180 2 (L-TLB) f7391f30 00000046 f71edf60 c0364ca5 f7391f40 00000046 f7355ce8 00000000 f7391f20 c0117637 d568fb3b 00000018 00001738 00000000 f7d76090 c1961bdc 0000006e 000000dc d569323c 00000018 f7391f30 00000000 c1961ad0 00000001 Call Trace: [] refrigerator+0x3f/0x50 [] kjournald+0xcf/0x1c0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= xfsbufd D 00000018 0 2181 2 (L-TLB) f7327f90 00000046 a59ec715 00000018 00000282 00000282 ffff30ec ffff30ec f7327f90 c0365572 d568fe77 00000018 000018ed 00000000 f7caf090 f7d7619c 0000006e 000000ba d5693980 00000018 f7327f90 00000000 00000000 f735eae0 Call Trace: [] refrigerator+0x3f/0x50 [] xfsbufd+0xf3/0x100 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= xfssyncd D 00000011 0 2182 2 (L-TLB) f6853f80 00000046 756f731d 00000011 00000282 00000282 ffff34c1 ffff34c1 f6853f80 c0365572 d569014d 00000018 00001a21 00000000 f7d61070 f7caf19c 0000006e 00000091 d5693f32 00000018 f6853f80 00000000 f6853fa8 f735fc3c Call Trace: [] refrigerator+0x3f/0x50 [] xfssyncd+0x158/0x160 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= xfsbufd D 00000018 0 2183 2 
(L-TLB) f6fd1f90 00000046 b9683ccc 00000018 00000282 00000282 ffff313f ffff313f f6fd1f90 c0365572 d569044b 00000018 00001b3d 00000000 f70fc050 f7d6117c 0000006e 0000008f d56944ce 00000018 f6fd1f90 00000000 00000000 f7764da0 Call Trace: [] refrigerator+0x3f/0x50 [] xfsbufd+0xf3/0x100 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= xfssyncd D 00000011 0 2184 2 (L-TLB) f6811f80 00000046 7d105920 00000011 00000282 00000282 ffff34e1 ffff34e1 f6811f80 c0365572 d568fa72 00000018 00002175 00000000 c19b3a50 f70fc15c 00000073 0000007d d56949b2 00000018 f6811f80 00000000 f6811fa8 f735f73c Call Trace: [] refrigerator+0x3f/0x50 [] xfssyncd+0x158/0x160 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= portmap D F64600A0 0 2344 1 (NOTLB) f6f8fea0 00000086 00000000 f64600a0 f6f8fe60 c0341701 f649f960 f64600a0 f6f8fea0 c0343533 d5152861 00000018 00002e3f 00000000 f771a5d0 f7cb16bc 00000073 00000271 d51595e9 00000018 f6f8fea0 00000000 0804ff78 f6f8ff08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= syslogd D F77C12C0 0 2477 1 (NOTLB) f6e1bea0 00000086 ffffffff f77c12c0 00000000 00000000 00000000 00000000 f7cb5a30 00000000 d515b595 00000018 0000799b 00000000 f6ac8090 f7cb5b3c 00000075 00000093 d516d59a 00000018 f6e1bea0 00000000 00000000 f6e1bf08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= klogd D 00000016 0 2483 1 (NOTLB) f75a7ea0 00000086 f7022a20 00000016 e4078600 00006353 f6ba0c80 f6ba0ad0 f70b5550 f77e6700 d514f394 00000018 000031a7 00000000 c192ea30 f70b565c 00000073 000000de d515692f 00000018 f75a7ea0 00000000 00000000 f75a7f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 
[] work_notifysig+0x13/0x19 ======================= sshd D BFBEE000 0 2505 1 (NOTLB) f6849ea0 00200086 f68bd4c0 bfbee000 00000000 f6849e58 00000000 00000000 f6fab910 f70b7a40 d515f375 00000018 000103ef 00000000 f7cb5530 f7cb513c 00000077 000002e7 d5185b19 00000018 f6849ea0 00000000 00000000 f6849f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= exim4 D 0000000D 0 2545 1 (NOTLB) f68c9ea0 00000086 f68c9e78 0000000d c03edd32 000009f3 f7d79384 f7d61a70 f70edd40 f70edd40 00c65dd8 00000004 f68c9e78 c0118fd2 f7d61a70 f70fcb5c 00000000 000001f6 d516f1e6 00000018 f68c9ea0 00000000 00000000 f68c9f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= inetd D F6885EB8 0 2551 1 (NOTLB) f6885ea0 00000082 f77e0340 f6885eb8 f6885e70 c01490aa 00000001 f6885f00 f6885e70 c17c0d80 b7f30000 c17c0d80 f6885e80 c0149122 f7d09550 f7d61b7c 00000000 000004a7 d5171278 00000018 f6885ea0 00000000 00000000 f6885f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= lpd D 00000000 0 2555 1 (NOTLB) f68cbea0 00000086 00001844 00000000 f68cbe70 f68d8f14 00000001 00000000 00000004 00000004 00000000 00000000 f68cbed0 c0171008 f7095590 f7d0965c 00000000 000002b1 d5172550 00000018 f68cbea0 00000000 00000000 f68cbf08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld_safe D BFD66000 0 2570 1 (NOTLB) f68f1ea0 00000082 f702abfc bfd66000 f68f1e90 c014d81a f7723bfc f75ea224 bfd51000 bfd66000 d515fa2f 00000018 000105af 00000000 f7cb40d0 f7cb563c 0000007a 000001cf d51865f6 00000018 
f68f1ea0 00000000 081041c8 f68f1f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D 00000044 0 2607 2570 (NOTLB) f6883ea0 00000082 000200d0 00000044 00000001 c16de6c0 c0430f00 00000001 000200d0 c0430f04 000000d0 f7095590 f6883eb0 c0146ced f7d72570 f709569c 00000000 0000026e d5173654 00000018 f6883ea0 00000000 00000000 f6883f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D C17A5D40 0 2609 2570 (NOTLB) f6c99ea0 00000082 00000246 c17a5d40 f6c99e80 c0146aac 00000000 00000092 f6c99eb0 c0493030 d5022392 00000018 000969ea 00000000 c192e030 f7cb41dc 00000086 0000031a d5186f45 00000018 f6c99ea0 00000000 00000000 f6c99f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D C0142280 0 2610 2570 (NOTLB) f6c9bea0 00000082 00000000 c0142280 c18023d0 f7d72570 c17a5ea0 f6c9be94 00000001 c0492be0 00000000 00000000 f7d72570 0000000b f70b5050 f7d7267c 00000000 00000147 d5173f48 00000018 f6c9bea0 00000000 00000000 f6c9bf08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D C17DB760 0 2611 2570 (NOTLB) f6c9dea0 00000082 00000246 c17db760 f6c9de80 c0146aac 00000001 f6c9df00 00000001 c0493180 d515301d 00000018 0000313f 00000000 f771aad0 f771a6dc 00000073 0000017b d515a4c0 00000018 f6c9dea0 00000000 00000000 f6c9df08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D F7CAC070 0 2612 2570 (NOTLB) 
f6c9fea0 00000082 c048b0e0 f7cac070 f6c9fe60 c0116be1 f7cac070 deff0bf0 f6c9fe80 c0492c90 00000000 00000000 f70b5050 00000005 f7ca9550 f70b515c 00000000 000000fd d5174638 00000018 f6c9fea0 00000000 00000000 f6c9ff08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D 00000020 0 2619 2570 (NOTLB) f68f3ea0 00000082 c042cf00 00000020 f68f3e80 c0117637 00000000 f68f3ea8 f68f3e80 c01232e0 d515379d 00000018 000032b5 00000000 f7cac070 f771abdc 00000073 00000118 d515afb8 00000018 f68f3ea0 00000000 00000000 f68f3f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D 00000001 0 2620 2570 (NOTLB) f6cc9ea0 00000082 f7128e74 00000001 f6cc9e80 c0220656 00000100 00000000 00000001 f7128e74 00000001 00000000 f6cc9e80 c0229da6 f7ca9050 f7ca965c 00000000 00000183 d51750ce 00000018 f6cc9ea0 00000000 00000000 f6cc9f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D 00000005 0 2621 2570 (NOTLB) f6ca3ea0 00000082 00000034 00000005 c048b0e0 00000003 f7d72570 c048b0e0 f6ca3e80 c0492750 d5153bbe 00000018 0000345c 00000000 f7134030 f7cac17c 00000073 000000cd d515b7c2 00000018 f6ca3ea0 00000000 00000000 f6ca3f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= mysqld D C0364CA5 0 2623 2570 (NOTLB) f6cebea0 00000082 f6ca3ea0 c0364ca5 f6cebeb0 00000082 f7095590 c048b0e0 f6cebe80 00000046 d5154280 00000018 00003562 00000000 f70b60b0 f713413c 00000073 000000eb d515c0f0 00000018 f6cebea0 00000000 00000008 f6cebf08 Call Trace: [] refrigerator+0x3f/0x50 [] 
get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= logger D 2A7A7965 0 2608 2570 (NOTLB) f68ebea0 00000086 4661e75f 2a7a7965 4661e75f 2a7a7965 00000000 00000000 00000008 00000000 f6fabcd8 00000400 fffffe00 00000000 f70fc550 f7ca915c 00000000 00000237 d5176054 00000018 f68ebea0 00000000 b7f11420 f68ebf08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= sshd D F715C00C 0 2613 2505 (NOTLB) f6ca1ea0 00000082 f68bd620 f715c00c 00000000 f6ca1e58 00000000 00000000 f6ca1eb0 00000001 d5154f02 00000018 00003edf 00000000 f7ca9a50 f70b61bc 00000073 0000037f d515e3ea 00000018 f6ca1ea0 00000000 00000000 f6ca1f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= bash D CE5E96DF 0 2622 2613 (NOTLB) f6ce9ea0 00000086 f6ba0ad0 ce5e96df f6ce9e70 c0116d2b 00000000 00000000 c0430da8 c0430da8 d5155369 00000018 000040c2 00000000 f70b6ab0 f7ca9b5c 00000073 000000e3 d515eccc 00000018 f6ce9ea0 00000000 080fff48 f6ce9f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= nrpe D F64E9A00 0 2660 1 (NOTLB) f6f71ea0 00200082 00000008 f64e9a00 f6f71e80 c015f602 00000000 00000000 f6f71e90 c0353d53 d5155acb 00000018 000045dc 00000000 f7f56570 f70b6bbc 00000073 000001f2 d5160042 00000018 f6f71ea0 00000000 bfbe883c f6f71f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= nagios-statd D F6CB6460 0 2668 1 (NOTLB) f6947ea0 00200086 00000008 f6cb6460 f6947e80 c015f602 00000000 00000000 f6947e90 
c0353d53 d51572ff 00000018 00004dce 00000000 f7122090 f7f5667c 00000073 0000044d d5162b48 00000018 f6947ea0 00000000 bf800770 f6947f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= netserver D F772B780 0 2671 1 (NOTLB) f6db3ea0 00000082 00000008 f772b780 f6db3e80 c015f602 00000000 00000000 f6db3e90 c0353d53 d51575a0 00000018 000092ab 00000000 f7cb5a30 f7e861dc 00000075 0000025e d516d100 00000018 f6db3ea0 00000000 bfd06990 f6db3f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= lockd D F7CFDB7C 0 2682 2 (L-TLB) f6967eb0 00000046 f7caca70 f7cfdb7c f6967ec0 0000141f d3abfb17 0000000b f70796c0 7fffffff d5691332 00000018 000022a5 00000000 f7caca70 f7cfdb7c 0000007d 00000540 d5696542 00000018 f6967eb0 00000000 f7557000 00000003 Call Trace: [] refrigerator+0x3f/0x50 [] svc_recv+0x385/0x410 [sunrpc] [] lockd+0x111/0x240 [lockd] [] kernel_thread_helper+0x7/0x4c ======================= rpciod/0 S C192E530 0 2683 2 (L-TLB) f696df60 00000046 c048b0e0 c192e530 f696df20 f6bfc114 f6f897a0 f94e8fb0 f696df30 f94e8fbe 00000002 f7cacae0 f696df60 c0127ebd f6ba0ad0 f7cb46dc 989a3800 0000047b d07704eb 00000018 00000246 f6f897a0 f696df88 f6f897a8 Call Trace: [] worker_thread+0xec/0xf0 [] kthread+0x67/0x70 [] kernel_thread_helper+0x7/0x4c ======================= nfsd D 00000000 0 2684 2 (L-TLB) f69adf00 00000046 f69adee8 00000000 f69aded0 00000282 d3ac18fa 0000000b 00000282 00000282 d56915fd 00000018 00002413 00000000 f7122590 f7cacb7c 0000007d 0000062d d5696b6f 00000018 f69adf00 00000000 f7283000 00000022 Call Trace: [] refrigerator+0x3f/0x50 [] svc_recv+0x385/0x410 [sunrpc] [] nfsd+0xac/0x240 [nfsd] [] kernel_thread_helper+0x7/0x4c ======================= nfsd D 00000000 0 2685 2 (L-TLB) f69cff00 00000046 f69cfee8 00000000 
f69cfed0 00000282 d3ac25f1 0000000b 00000282 00000282 d569187c 00000018 00002518 00000000 f7d09a50 f712269c 0000007d 000004e9 d5697058 00000018 f69cff00 00000000 f698f000 00000022 Call Trace: [] refrigerator+0x3f/0x50 [] svc_recv+0x385/0x410 [sunrpc] [] nfsd+0xac/0x240 [nfsd] [] kernel_thread_helper+0x7/0x4c ======================= nfsd D 00000000 0 2686 2 (L-TLB) f69eff00 00000046 f69efee8 00000000 f69efed0 00000282 d3ac30e9 0000000b 00000282 00000282 d5691ae8 00000018 0000263a 00000000 f7cb1ab0 f7d09b5c 0000007d 0000051c d5697574 00000018 f69eff00 00000000 f69b4000 00000022 Call Trace: [] refrigerator+0x3f/0x50 [] svc_recv+0x385/0x410 [sunrpc] [] nfsd+0xac/0x240 [nfsd] [] kernel_thread_helper+0x7/0x4c ======================= nfsd D 00000000 0 2687 2 (L-TLB) f6a11f00 00000046 f6a11ee8 00000000 f6a11ed0 00000282 d3ac3dcb 0000000b 00000282 00000282 d5691df7 00000018 0000276f 00000000 f70b5a50 f7cb1bbc 0000007d 000005e9 d5697b5d 00000018 f6a11f00 00000000 f69d9000 00000022 Call Trace: [] refrigerator+0x3f/0x50 [] svc_recv+0x385/0x410 [sunrpc] [] nfsd+0xac/0x240 [nfsd] [] kernel_thread_helper+0x7/0x4c ======================= nfsd D 00000000 0 2688 2 (L-TLB) f6a31f00 00000046 f6a31ee8 00000000 f6a31ed0 00000282 d3ac48a4 0000000b 00000282 00000282 d569203a 00000018 0000287a 00000000 f7caf590 f70b5b5c 0000007d 000004bb d5698018 00000018 f6a31f00 00000000 f69fe000 00000022 Call Trace: [] refrigerator+0x3f/0x50 [] svc_recv+0x385/0x410 [sunrpc] [] nfsd+0xac/0x240 [nfsd] [] kernel_thread_helper+0x7/0x4c ======================= nfsd D 00000000 0 2689 2 (L-TLB) f6a53f00 00000046 f6a53ee8 00000000 f6a53ed0 00000282 d3ac53f7 0000000b 00000282 00000282 000cb599 000cb599 f6a53f10 c0365572 f6ba0ad0 f7caf69c 00000000 000004bc d56984d4 00000018 f6a53f00 00000000 f6a23000 00000022 Call Trace: [] refrigerator+0x3f/0x50 [] svc_recv+0x385/0x410 [sunrpc] [] nfsd+0xac/0x240 [nfsd] [] kernel_thread_helper+0x7/0x4c ======================= nfsd D 00000000 0 2690 2 (L-TLB) f6a73f00 00000046 
f6a73ee8 00000000 f6a73ed0 00000282 b6d79c59 00000018 00000282 00000282 d56915b7 00000018 00001ae5 00000000 f7d61570 f7cafb9c 00000073 000000d7 d569556a 00000018 f6a73f00 00000000 f6a49000 00000022 Call Trace: [] refrigerator+0x3f/0x50 [] svc_recv+0x385/0x410 [sunrpc] [] nfsd+0xac/0x240 [nfsd] [] kernel_thread_helper+0x7/0x4c ======================= nfsd D 00000000 0 2691 2 (L-TLB) f6a95f00 00000046 f6a95ee8 00000000 f6a95ed0 00000282 b6d79006 00000018 00000282 00000282 d568ff0f 00000018 000026e8 00000000 c1969030 f7d6167c 00000074 00000094 d5695b36 00000018 f6a95f00 00000000 f6a6e000 00000022 Call Trace: [] refrigerator+0x3f/0x50 [] svc_recv+0x385/0x410 [sunrpc] [] nfsd+0xac/0x240 [nfsd] [] kernel_thread_helper+0x7/0x4c ======================= rpc.mountd D F64604C0 0 2695 1 (NOTLB) f6ab5ea0 00000082 00000000 f64604c0 f6ab5e60 c0341701 f649f960 f64604c0 f6ab5ea0 c0343533 d5158432 00000018 00005425 00000000 f7095090 f712219c 00000073 00000338 d5164b7e 00000018 f6ab5ea0 00000000 00000000 f6ab5f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= rsync D 00000002 0 2701 1 (NOTLB) f6ab7ea0 00000082 00000000 00000002 00000283 0000000a 00000010 f6968e94 f6ab7eb0 c0279a20 00000000 f6ab7eb8 c0408d6f 00000246 f6ac8a90 f70fc65c 00000000 00000233 d5176fba 00000018 f6ab7ea0 00000000 00000000 f6ab7f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= smartd D 2A7A7965 0 2710 1 (NOTLB) f6badea0 00000086 4661e75f 2a7a7965 4661e75f 2a7a7965 00000000 00000000 f6a9eb84 f78f5f40 f6a9eb60 f78f5f40 f6badea0 c012f070 f6ba8a50 f6ac8b9c 00000000 00000183 d5177a53 00000018 f6badea0 00000000 bfeea9e4 f6badf08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] 
do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= rpc.statd D 00000000 0 2766 1 (NOTLB) f6bc9ea0 00000082 00000000 00000000 00000000 bfe8a9f4 00000010 bfe8a9d0 f6a9e604 c193fda0 00000002 fffffff3 f6bc9ea8 f6a9e5e0 f7d02ab0 f6ba8b5c 00000000 000002fe d5178f48 00000018 f6bc9ea0 00000000 00000000 f6bc9f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= rpc.idmapd D 00000001 0 2774 1 (NOTLB) f642dea0 00000082 0000012b 00000001 c04903b8 00000000 f642de78 c011ec21 00000000 00000010 d5158962 00000018 00005704 00000000 f6b9e5b0 f709519c 00000073 00000133 d516577c 00000018 f642dea0 00000000 00001388 f642df08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= ntpd D BFA422F4 0 2789 1 (NOTLB) f6bbfea0 00200086 00000000 bfa422f4 f6bbfe60 c0108f08 3b9aca00 f6b9e5b0 f6bbfe80 c010364b d5159373 00000018 00005c9b 00000000 f6ba05d0 f6b9e6bc 00000073 00000254 d5166ec8 00000018 f6bbfea0 00000000 00000000 f6bbff08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= mdadm D C190D860 0 2802 1 (NOTLB) f6f1dea0 00000082 f7d6aea0 c190d860 c190d8c8 c190d860 c195f7e0 f785310c f710bca0 f7d6aea0 f6f1df08 00000001 c190d86c 00000000 f7cfd070 f7d02bbc 00000000 00000291 d517a142 00000018 f6f1dea0 00000000 bfbf3f78 f6f1df08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= amd D 00000010 0 2814 1 (NOTLB) f6f39ea0 00000086 f6f39ec8 00000010 f6f39e88 00000001 f6f39e68 00000018 00000000 00000000 d515a1d0 00000018 000062c5 00000000 f7134530 f6ba06dc 00000073 
000002e5 d5168bc0 00000018 f6f39ea0 00000000 00000000 f6f39f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= atd D C01D5920 0 2843 1 (NOTLB) f6465ea0 00000082 00000000 c01d5920 f6465e60 c01e4ce8 00000008 00000000 f646f2f8 f6aa2660 f6465f08 f6470060 f6465ea0 c012f070 f7cb10b0 f7cfd17c 00000000 0000016b d517ab31 00000018 f6465ea0 00000000 bfa33f04 f6465f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= cron D 0000000D 0 2846 1 (NOTLB) f6485ea0 00000086 f6485e78 0000000d c03edd32 00000f20 f7ffbda4 f6b9fa70 f7086460 f7086460 00cafdb8 00000004 f6485ea0 f7004c80 f70fca50 f6ac819c 00000000 000001d2 d516e42a 00000018 f6485ea0 00000000 bf7ff544 f6485f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= miniserv.pl D F64EDEB8 0 2869 1 (NOTLB) f64edea0 00000086 f77ad3a0 f64edeb8 f64ede90 c020b998 00000001 f64edf00 00000001 00000000 d5156000 00000018 000087dd 00000000 f7d72a70 f713463c 00000074 00000234 d516a1ca 00000018 f64edea0 00000000 00000000 f64edf08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= apache D F6463EB8 0 2898 1 (NOTLB) f6463ea0 00000082 c1a110e0 f6463eb8 f6463e90 c020b998 00000000 00000000 c0430da8 c0430da8 c0430f18 00000044 f6463ea0 c0146c5d f6ba6a30 f7cb11bc 00000000 000001b1 d517b708 00000018 f6463ea0 00000000 00000000 f6463f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= apache D 00000000 0 2910 
2898 (NOTLB) f64f1ea0 00000082 00000000 00000000 f64f1e90 c0158d44 00000000 00000000 c0430da8 c0430da8 c0430f18 00000044 f68d8064 f68d8154 f7d76590 f6ba6b3c 00000000 00000265 d517c7cc 00000018 f64f1ea0 00000000 00000000 f64f1f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= apache D 00000000 0 2911 2898 (NOTLB) f6555ea0 00000082 00000000 00000000 f6555e90 c0158d44 00000000 00000000 c0430da8 c0430da8 c0430f18 00000044 f68d8064 f68d8154 f6ba6530 f7d7669c 00000000 0000020d d517d62c 00000018 f6555ea0 00000000 00000000 f6555f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= apache D 00000000 0 2912 2898 (NOTLB) f6559ea0 00000086 00000000 00000000 f6559e90 c0158d44 00000000 00000000 c0430da8 c0430da8 c0430f18 00000044 f68d8064 f68d8154 f7134a30 f6ba663c 00000000 0000011c d517ddf6 00000018 f6559ea0 00000000 00000000 f6559f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= apache D 00000000 0 2913 2898 (NOTLB) f64a5ea0 00000086 00000000 00000000 f64a5e90 c0158d44 00000000 00000000 c0430da8 c0430da8 c0430f18 00000044 f68d8064 f68d8154 f6b9f570 f7134b3c 00000000 00000153 d517e73e 00000018 f64a5ea0 00000000 00000000 f64a5f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= apache D 00000000 0 2914 2898 (NOTLB) f6573ea0 00000086 00000000 00000000 f6573e90 c0158d44 00000000 00000000 c0430da8 c0430da8 c0430f18 00000044 f68d8064 f68d8154 f7d025b0 f6b9f67c 00000000 0000013d d517efea 00000018 f6573ea0 00000000 00000000 f6573f08 Call Trace: [] 
refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= munin-node D 00000000 0 3040 1 (NOTLB) f6831ea0 00000086 00000000 00000000 00000000 00000000 00000000 00000000 f714a8ac f66b5800 00000000 00000000 00000000 00000000 f70b65b0 f7d026bc 00000000 000002c2 d518033a 00000018 f6831ea0 00000000 00000000 f6831f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= getty D F65F9EEC 0 3066 1 (NOTLB) f65f9ea0 00000082 f6d39400 f65f9eec f65f9e90 c03655af 00000000 c1a38800 00000001 c1846900 f6d39400 00000000 f65f9e80 00000246 f658a5d0 f70b66bc 00000000 000002b1 d5181617 00000018 f65f9ea0 00000000 0804b214 f65f9f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= getty D F65FBEEC 0 3067 1 (NOTLB) f65fbea0 00000086 f6587800 f65fbeec f65fbe90 c03655af 7f1c0300 f65fc20e 00000000 ffffffff 00000008 00000000 f6584c08 00000246 f658aad0 f658a6dc 00000000 0000018a d51820de 00000018 f65fbea0 00000000 0804b214 f65fbf08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= getty D F65D1EEC 0 3068 1 (NOTLB) f65d1ea0 00000082 f6556c00 f65d1eec f65d1e90 c03655af 7f1c0300 f664020e 00000000 ffffffff 00000008 00000000 f6587408 00000246 f6582030 f658abdc 00000000 0000015e d5182a74 00000018 f65d1ea0 00000000 0804b214 f65d1f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= getty D F65D3EEC 0 3069 1 (NOTLB) f65d3ea0 00000086 f6556400 f65d3eec f65d3e90 c03655af 7f1c0300 
f664620e 00000000 ffffffff 00000008 00000000 f6556808 00000246 f6589ab0 f658213c 00000000 00000164 d5183434 00000018 f65d3ea0 00000000 0804b214 f65d3f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= getty D F6621EEC 0 3070 1 (NOTLB) f6621ea0 00000082 f75e5c00 f6621eec f6621e90 c03655af 7f1c0300 f664c20e 00000000 ffffffff 00000008 00000000 f64de008 00000246 f6585090 f6589bbc 00000000 00000183 d5183ece 00000018 f6621ea0 00000000 0804b214 f6621f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= getty D F6623EEC 0 3071 1 (NOTLB) f6623ea0 00000086 f7843800 f6623eec f6623e90 c03655af 7f1c0300 f667220e 00000000 ffffffff d515e7c5 00000018 00010183 00000000 f7cb5030 f658519c 00000077 0000018d d51849ac 00000018 f6623ea0 00000000 0804b214 f6623f08 Call Trace: [] refrigerator+0x3f/0x50 [] get_signal_to_deliver+0x226/0x230 [] do_signal+0x5b/0x120 [] do_notify_resume+0x3d/0x40 [] work_notifysig+0x13/0x19 ======================= hibernate D F7945000 0 3874 2622 (NOTLB) f6723c60 00000086 f7ea23b8 f7945000 f6723c20 c0121d67 c198d800 f7ea23b8 f6723c30 c02261b7 28ccc170 00000009 00003a03 00000000 c19615d0 f6ba0bdc 0000006e 0006a677 28ccc170 00000009 f6723c70 f6723d48 f6af7804 f6723c7c Call Trace: [] wait_for_completion+0x64/0xa0 [] blk_execute_rq+0x8d/0xb0 [] scsi_execute+0xb8/0x110 [] scsi_execute_req+0x68/0x90 [] sd_start_stop_device+0x6f/0x120 [] sd_resume+0x6a/0xa0 [] scsi_bus_resume+0x69/0x80 [] resume_device+0x132/0x190 [] dpm_resume+0xbb/0xc0 [] device_resume+0x1e/0x40 [] hibernate+0x106/0x1a0 [] state_store+0xc3/0xf0 [] subsys_attr_store+0x3b/0x40 [] flush_write_buffer+0x2e/0x40 [] sysfs_write_file+0x61/0x70 [] vfs_write+0x88/0x110 [] sys_write+0x41/0x70 [] syscall_call+0x7/0xb ======================= From 
owner-xfs@oss.sgi.com Wed Jun 13 15:13:16 2007
Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 15:13:21 -0700 (PDT)
X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com
X-Spam-Level: ***
X-Spam-Status: No, score=3.5 required=5.0 tests=AWL,BAYES_95 autolearn=no version=3.2.0-pre1-r499012
Received: from smtp2.linux-foundation.org (smtp2.linux-foundation.org [207.189.120.14]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5DMDFWt013500 for ; Wed, 13 Jun 2007 15:13:16 -0700
Received: from imap1.linux-foundation.org (imap1.linux-foundation.org [207.189.120.55]) by smtp2.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with ESMTP id l5DMD4dL028796 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Wed, 13 Jun 2007 15:13:05 -0700
Received: from localhost (localhost [127.0.0.1]) by imap1.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with ESMTP id l5DMCwOa011772; Wed, 13 Jun 2007 15:12:59 -0700
Date: Wed, 13 Jun 2007 15:12:58 -0700 (PDT)
From: Linus Torvalds
To: David Greaves
cc: David Chinner , Tejun Heo , "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik
Subject: Re: 2.6.22-rc3 hibernate(?) fails totally - regression (xfs on raid6)
In-Reply-To: <46706968.7000703@dgreaves.com>
Message-ID: 
References: <200706020122.49989.rjw@sisk.pl> <4661EFBB.5010406@dgreaves.com> <4662D852.4000005@dgreaves.com> <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> <4667DE2D.6050903@dgreaves.com> <20070607110708.GS86004887@sgi.com> <46680F5E.6070806@dgreaves.com> <20070607222813.GG85884050@sgi.com> <4669A965.20403@dgreaves.com> <466FD214.9070603@dgreaves.com> <46706968.7000703@dgreaves.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=us-ascii
X-MIMEDefang-Filter: osdl$Revision: 1.181 $
X-Scanned-By: MIMEDefang 2.53 on 207.189.120.14
X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com
X-Virus-Status: Clean
X-archive-position: 11771
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: torvalds@linux-foundation.org
Precedence: bulk
X-list: xfs

On Wed, 13 Jun 2007, David Greaves wrote:
>
> > I'm not seeing anything really obvious. The traces would probably look
> > better if you enabled CONFIG_FRAME_POINTER, though. That should cut down on
> > some of the noise and make the traces a bit more readable.
>
> I can do that...

Thanks. That makes a big difference to the readability of the traces.

That said, I'm so used to reading even the messy ones that this didn't
actually tell me anything new (it made it clear that the SCSI error handler
noise was just noise), but for people who aren't quite as used to seeing
crap backtraces, your new trace might hopefully put them on the right track.

I threw out the parts that didn't look all that relevant, and left the
ata_aux/md0_raid5/hibernate traces here for others to look at without all
the other noise. Those _seem_ to be the primary suspects in this saga.
Linus --- > ata_aux D F7945000 0 122 2 (L-TLB) > c19f7ce0 00000046 f7ea23b8 f7945000 c19f7ca0 c0121d67 c198d800 f7ea23b8 > c19f7cb0 c02261b7 f6af7964 f7ea23b8 c19f7cc0 c0293942 f7e865d0 c1a026bc > c19f7cf0 000003ae 2dae0756 00000009 c19f7cf0 c19f7dc8 f6af7964 c19f7cfc > Call Trace: > [] wait_for_completion+0x64/0xa0 > [] blk_execute_rq+0x8d/0xb0 > [] scsi_execute+0xb8/0x110 > [] scsi_execute_req+0x68/0x90 > [] sd_spinup_disk+0x6d/0x400 > [] sd_revalidate_disk+0x6b/0x160 > [] sd_rescan+0x1f/0x30 > [] scsi_rescan_device+0x42/0x50 > [] ata_scsi_dev_rescan+0x60/0x70 > [] run_workqueue+0x4d/0xf0 > [] worker_thread+0xcd/0xf0 > [] kthread+0x67/0x70 > [] kernel_thread_helper+0x7/0x4c > md0_raid5 D F7F4D000 0 836 2 (L-TLB) > f78a3da0 00000046 f7f4d000 f7f4d000 f78a3da0 c01454b6 c048b0e0 f6ba0ad0 > f78a3d70 c0116be1 00010ad0 d5658227 f78a3d90 f7853000 f70c4ae0 f7f5617c > f78a3de0 00003440 1be4a921 00000009 00000246 f7853000 f78a3dc8 f785313c > Call Trace: > [] md_super_wait+0x77/0xc0 > [] write_sb_page+0x4f/0x80 > [] write_page+0x102/0x110 > [] bitmap_update_sb+0x89/0x90 > [] md_update_sb+0x123/0x2a0 > [] md_check_recovery+0x302/0x340 > [] raid5d+0x12/0xf0 > [] md_thread+0x56/0x110 > [] kthread+0x67/0x70 > [] kernel_thread_helper+0x7/0x4c > hibernate D F7945000 0 3874 2622 (NOTLB) > f6723c60 00000086 f7ea23b8 f7945000 f6723c20 c0121d67 c198d800 f7ea23b8 > f6723c30 c02261b7 28ccc170 00000009 00003a03 00000000 c19615d0 f6ba0bdc > 0000006e 0006a677 28ccc170 00000009 f6723c70 f6723d48 f6af7804 f6723c7c > Call Trace: > [] wait_for_completion+0x64/0xa0 > [] blk_execute_rq+0x8d/0xb0 > [] scsi_execute+0xb8/0x110 > [] scsi_execute_req+0x68/0x90 > [] sd_start_stop_device+0x6f/0x120 > [] sd_resume+0x6a/0xa0 > [] scsi_bus_resume+0x69/0x80 > [] resume_device+0x132/0x190 > [] dpm_resume+0xbb/0xc0 > [] device_resume+0x1e/0x40 > [] hibernate+0x106/0x1a0 > [] state_store+0xc3/0xf0 > [] subsys_attr_store+0x3b/0x40 > [] flush_write_buffer+0x2e/0x40 > [] sysfs_write_file+0x61/0x70 > [] 
vfs_write+0x88/0x110 > [] sys_write+0x41/0x70 > [] syscall_call+0x7/0xb > ======================= > From owner-xfs@oss.sgi.com Wed Jun 13 15:31:15 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 15:31:17 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50,J_CHICKENPOX_45, J_CHICKENPOX_56 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5DMVCWt019837 for ; Wed, 13 Jun 2007 15:31:13 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id IAA24014; Thu, 14 Jun 2007 08:31:11 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5DMV8Af122437547; Thu, 14 Jun 2007 08:31:09 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5DMV622121722671; Thu, 14 Jun 2007 08:31:06 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 14 Jun 2007 08:31:06 +1000 From: David Chinner To: "Salmon, Rene" Cc: nscott@aconex.com, David Chinner , salmr0@bp.com, xfs@oss.sgi.com Subject: Re: sunit not working Message-ID: <20070613223105.GQ86004887@sgi.com> References: <1181606134.7873.72.camel@holwrs01> <1181608444.3758.73.camel@edge.yarra.acx> <902286657-1181653953-cardhu_decombobulator_blackberry.rim.net-1527539029-@bxe120.bisx.prod.on.blackberry> <1181690478.3758.108.camel@edge.yarra.acx> <1181760380.8754.53.camel@holwrs01> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1181760380.8754.53.camel@holwrs01> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter 
version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11772 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Wed, Jun 13, 2007 at 01:46:20PM -0500, Salmon, Rene wrote: > > Hi, > > More details on this: > > Using dd with various block sizes to measure write performance only for > now. > > This is using two options to dd. The direct I/O option for direct i/o > and the fsync option for buffered i/o. > > Using direct: > /usr/bin/time -p dd of=/mnt/testfile if=/dev/zero oflag=direct > > Using fsync: > /usr/bin/time -p dd of=/mnt/testfile if=/dev/zero conv=fsync > > Using a 2Gbit/sec fiber channel card my theoretical max is 256 > MBytes/sec. If we allow a bit of overhead for the card driver and > things the manufacturer claims the card should be able to max out at > around 200 MBytes/sec. Right. > The block sizes I used range from 128KBytes - 1024000Kbytes and all the > writes generate a 1.0GB file. > > Some of the results I got: > > Buffered I/O(fsync): > -------------------- > Linux seems to do a good job at buffering this. Regardless of the block > size I choose I always get write speeds of around 150MBytes/sec Because it does single threaded writeback via pdflush. It should always get the same throughput. If you wind /proc/sys/vm/dirty_ratio down to 5, it might go a bit faster because writeback will start earlier in the write and so the fsync will have less to do and overall speed will appear faster. What you should be looking at is iostat throughput in the steady state, not inferring the throughput from timing a write operation..... > Direct I/O(direct): > ------------------- > The speeds I get here of course are very dependent on the block size I > choose and how well they align with the stripe size of the storage array > underneath. For the appropriate block sizes I get really good > performance about 200MBytes/sec. 
Also normal, because you're iop bound at small block sizes. At large block sizes, you saturate the fibre. Sounds like nothing is wrong here. > >From your feedback it sounds like these are reasonable numbers. > Most of our user apps do not use direct I/O but rather buffered I/O. Is > 150MBytes/sec as good as it gets for buffered I/O or is there something > I can tune to get a bit more out of buffered I/O? That's about it, I think. With tuning of the vm parameters, you might be able to get it higher, but it may be that writeback (when it occurs) is actually higher than this.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jun 13 16:09:19 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 16:09:24 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.9 required=5.0 tests=AWL,BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from ogre.sisk.pl (ogre.sisk.pl [217.79.144.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5DN9HWt031391 for ; Wed, 13 Jun 2007 16:09:19 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by ogre.sisk.pl (Postfix) with ESMTP id E63A943DE4; Thu, 14 Jun 2007 00:51:52 +0200 (CEST) Received: from ogre.sisk.pl ([127.0.0.1]) by localhost (ogre.sisk.pl [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 26894-06; Thu, 14 Jun 2007 00:51:52 +0200 (CEST) Received: from [192.168.100.102] (nat-be2.aster.pl [212.76.37.166]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by ogre.sisk.pl (Postfix) with ESMTP id D78B143A39; Thu, 14 Jun 2007 00:51:51 +0200 (CEST) From: "Rafael J. Wysocki" To: Linus Torvalds Subject: Re: 2.6.22-rc3 hibernate(?) 
fails totally - regression (xfs on raid6) Date: Thu, 14 Jun 2007 01:15:57 +0200 User-Agent: KMail/1.9.5 Cc: David Greaves , David Chinner , Tejun Heo , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik References: <200706020122.49989.rjw@sisk.pl> <46706968.7000703@dgreaves.com> In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200706140115.58733.rjw@sisk.pl> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: amavisd-new at ogre.sisk.pl using MkS_Vir for Linux X-Virus-Status: Clean X-archive-position: 11773 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: rjw@sisk.pl Precedence: bulk X-list: xfs On Thursday, 14 June 2007 00:12, Linus Torvalds wrote: > > On Wed, 13 Jun 2007, David Greaves wrote: > > > > > I'm not seeing anything really obvious. The traces would probably look > > > better if you enabled CONFIG_FRAME_POINTER, though. That should cut down on > > > some of the noise and make the traces a bit more readable. > > > > I can do that... > > Thanks. That makes a big difference to the readability of the traces. > > That said, I'm so used to reading even the messy ones that this didn't > actually tell me anything new (it made it clear that the SCSI error > handler noise was just noise), but for people who aren't quite as used to > seeing crap backtraces, your new trace might hopefully put them on the > right track. > > I threw out the parts that didn't look all that relevant, and left the > ata_aux/md0_raid5/hibernate traces here for others to look at without all > the other noise. Those _seem_ to be the primary suspects in this saga. Hmm, it looks like both hibernate and ata_aux are waiting for the same completion. I wonder who's supposed to complete it. 
Greetings, Rafael > --- > > ata_aux D F7945000 0 122 2 (L-TLB) > > c19f7ce0 00000046 f7ea23b8 f7945000 c19f7ca0 c0121d67 c198d800 f7ea23b8 > > c19f7cb0 c02261b7 f6af7964 f7ea23b8 c19f7cc0 c0293942 f7e865d0 c1a026bc > > c19f7cf0 000003ae 2dae0756 00000009 c19f7cf0 c19f7dc8 f6af7964 c19f7cfc > > Call Trace: > > [] wait_for_completion+0x64/0xa0 > > [] blk_execute_rq+0x8d/0xb0 > > [] scsi_execute+0xb8/0x110 > > [] scsi_execute_req+0x68/0x90 > > [] sd_spinup_disk+0x6d/0x400 > > [] sd_revalidate_disk+0x6b/0x160 > > [] sd_rescan+0x1f/0x30 > > [] scsi_rescan_device+0x42/0x50 > > [] ata_scsi_dev_rescan+0x60/0x70 > > [] run_workqueue+0x4d/0xf0 > > [] worker_thread+0xcd/0xf0 > > [] kthread+0x67/0x70 > > [] kernel_thread_helper+0x7/0x4c > > > md0_raid5 D F7F4D000 0 836 2 (L-TLB) > > f78a3da0 00000046 f7f4d000 f7f4d000 f78a3da0 c01454b6 c048b0e0 f6ba0ad0 > > f78a3d70 c0116be1 00010ad0 d5658227 f78a3d90 f7853000 f70c4ae0 f7f5617c > > f78a3de0 00003440 1be4a921 00000009 00000246 f7853000 f78a3dc8 f785313c > > Call Trace: > > [] md_super_wait+0x77/0xc0 > > [] write_sb_page+0x4f/0x80 > > [] write_page+0x102/0x110 > > [] bitmap_update_sb+0x89/0x90 > > [] md_update_sb+0x123/0x2a0 > > [] md_check_recovery+0x302/0x340 > > [] raid5d+0x12/0xf0 > > [] md_thread+0x56/0x110 > > [] kthread+0x67/0x70 > > [] kernel_thread_helper+0x7/0x4c > > > hibernate D F7945000 0 3874 2622 (NOTLB) > > f6723c60 00000086 f7ea23b8 f7945000 f6723c20 c0121d67 c198d800 f7ea23b8 > > f6723c30 c02261b7 28ccc170 00000009 00003a03 00000000 c19615d0 f6ba0bdc > > 0000006e 0006a677 28ccc170 00000009 f6723c70 f6723d48 f6af7804 f6723c7c > > Call Trace: > > [] wait_for_completion+0x64/0xa0 > > [] blk_execute_rq+0x8d/0xb0 > > [] scsi_execute+0xb8/0x110 > > [] scsi_execute_req+0x68/0x90 > > [] sd_start_stop_device+0x6f/0x120 > > [] sd_resume+0x6a/0xa0 > > [] scsi_bus_resume+0x69/0x80 > > [] resume_device+0x132/0x190 > > [] dpm_resume+0xbb/0xc0 > > [] device_resume+0x1e/0x40 > > [] hibernate+0x106/0x1a0 > > [] 
state_store+0xc3/0xf0 > > [] subsys_attr_store+0x3b/0x40 > > [] flush_write_buffer+0x2e/0x40 > > [] sysfs_write_file+0x61/0x70 > > [] vfs_write+0x88/0x110 > > [] sys_write+0x41/0x70 > > [] syscall_call+0x7/0xb > > ======================= > > > > -- "Premature optimization is the root of all evil." - Donald Knuth From owner-xfs@oss.sgi.com Wed Jun 13 16:52:50 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 16:52:55 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50,J_CHICKENPOX_63, J_CHICKENPOX_73 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5DNqkWt012915 for ; Wed, 13 Jun 2007 16:52:49 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA25866; Thu, 14 Jun 2007 09:52:30 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5DNqPAf122695735; Thu, 14 Jun 2007 09:52:26 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5DNqH5q92735562; Thu, 14 Jun 2007 09:52:17 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 14 Jun 2007 09:52:17 +1000 From: David Chinner To: "Amit K. 
Arora" Cc: David Chinner , Suparna Bhattacharya , torvalds@osdl.org, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, xfs@oss.sgi.com, cmm@us.ibm.com Subject: Re: [PATCH 1/5] fallocate() implementation in i86, x86_64 and powerpc Message-ID: <20070613235217.GS86004887@sgi.com> References: <20070424121632.GA10136@amitarora.in.ibm.com> <20070426175056.GA25321@amitarora.in.ibm.com> <20070426180332.GA7209@amitarora.in.ibm.com> <20070509160102.GA30745@amitarora.in.ibm.com> <20070510005926.GT85884050@sgi.com> <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: multipart/mixed; boundary="wzJLGUyc3ArbnUjN" Content-Disposition: inline In-Reply-To: <20070612061652.GA6320@amitarora.in.ibm.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11774 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs --wzJLGUyc3ArbnUjN Content-Type: text/plain; charset=us-ascii Content-Disposition: inline On Tue, Jun 12, 2007 at 11:46:52AM +0530, Amit K. Arora wrote: > Did you get time to write the above man page ? It will help to push > further patches in time (eg. for FA_PREALLOCATE mode). First pass is attached. `nroff -man fallocate.2 | less` to view. Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group --wzJLGUyc3ArbnUjN Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="fallocate.2"
.TH fallocate 2
.SH NAME
fallocate \- allocate or remove file space
.SH SYNOPSIS
.nf
.B #include
.PP
.BI "int syscall(int, int fd, int mode, loff_t offset, loff_t len);
.Op
.SH DESCRIPTION
The
.BR fallocate
syscall allows a user to directly manipulate the allocated disk space for
the file referred to by
.I fd
for the byte range starting at
.IR offset
and continuing for
.IR len
bytes.
The
.I mode
parameter determines the operation to be performed on the given range.
Currently there are three modes:
.TP
.B FA_ALLOCATE
allocates and initialises to zero the disk space within the given range.
After a successful call, subsequent writes are guaranteed not to fail
because of lack of disk space. If the size of the file is less than
.IR offset + len ,
then the file is increased to this size; otherwise the file size is left
unchanged.
.B FA_ALLOCATE
closely resembles
.BR posix_fallocate (3)
and is intended as a method of optimally implementing this function.
.B FA_ALLOCATE
may allocate a larger range than was specified.
.TP
.B FA_PREALLOCATE
provides the same functionality as
.B FA_ALLOCATE
except it does not ever change the file size. This allows allocation of
zero blocks beyond the end of file and is useful for optimising append
workloads.
.TP
.B FA_DEALLOCATE
removes the underlying disk space within the given range. The disk space
shall be removed regardless of its contents, so both space allocated by
.B FA_ALLOCATE
and
.B FA_PREALLOCATE
as well as by
.BR write (3)
will be removed.
.B FA_DEALLOCATE
shall never remove disk blocks outside the range specified.
.B FA_DEALLOCATE
shall never change the file size. If changing the file size is required
when deallocating blocks from an offset to end of file (or beyond end of
file),
.BR ftruncate64 (3)
should be used.
.SH "RETURN VALUE"
.BR fallocate ()
returns zero on success, or an error number on failure. Note that
.I errno
is not set.
.SH "ERRORS"
.TP
.B EBADF
.I fd
is not a valid file descriptor, or is not opened for writing.
.TP
.B EFBIG
.IR offset + len
exceeds the maximum file size.
.TP
.B EINVAL
.I offset
or
.I len
was less than 0.
.TP
.B ENODEV
.I fd
does not refer to a regular file or a directory.
.TP
.B ENOSPC
There is not enough space left on the device containing the file referred
to by
.IR fd .
.TP
.B ESPIPE
.I fd
refers to a pipe or FIFO.
.TP
.B ENOSYS
The filesystem underlying the file descriptor does not support this
operation.
.SH AVAILABILITY
The
.BR fallocate ()
system call is available since 2.6.XX
.SH "SEE ALSO"
.BR syscall (2),
.BR posix_fadvise (3),
.BR ftruncate (3)
--wzJLGUyc3ArbnUjN-- From owner-xfs@oss.sgi.com Wed Jun 13 17:28:49 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 17:28:52 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5E0SjWt018119 for ; Wed, 13 Jun 2007 17:28:48 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA26503; Thu, 14 Jun 2007 10:28:37 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5E0SVAf122507617; Thu, 14 Jun 2007 10:28:33 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5E0SN4I121703579; Thu, 14 Jun 2007 10:28:23 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 14 Jun 2007 10:28:23 +1000 From: David Chinner To: David 
Greaves Cc: Linus Torvalds , David Chinner , Tejun Heo , "Rafael J. Wysocki" , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik Subject: Re: 2.6.22-rc3 hibernate(?) fails totally - regression (xfs on raid6) Message-ID: <20070614002823.GY85884050@sgi.com> References: <46667160.80905@gmail.com> <46668EE0.2030509@dgreaves.com> <46679D56.7040001@gmail.com> <4667DE2D.6050903@dgreaves.com> <20070607110708.GS86004887@sgi.com> <46680F5E.6070806@dgreaves.com> <20070607222813.GG85884050@sgi.com> <4669A965.20403@dgreaves.com> <466FD214.9070603@dgreaves.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <466FD214.9070603@dgreaves.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11775 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Wed, Jun 13, 2007 at 12:16:36PM +0100, David Greaves wrote: > Linus Torvalds wrote: > > > >On Fri, 8 Jun 2007, David Greaves wrote: > >>positive: I can now get sysrq-t :) > > > >Ok, so color me confused, > So what do you think that makes me > > >and maybe I have missed some of the emails or > >skimmed over them too fast (there's been too many of them ;), > > You may have missed these 'tests' with rc4+Tejun's fix: > * clean boot, unmounting the xfs fs : normal hibernate/resume > * clean boot, remount ro xfs fs : normal hibernate/resume > * clean boot, touch; sync; echo 1 > /proc/sys/vm/drop_caches: normal > hibernate/resume > * clean boot, touch; sync; echo 2 > /proc/sys/vm/drop_caches: hang > hibernating > * clean boot, touch; sync; echo 3 > /proc/sys/vm/drop_caches: hang > hibernating > > Dave asked me to do them but hasn't responded yet. Sorry 'bout that. Bit busy ATM. 
What I was effectively looking for was whether it was data or metadata that was causing the problems. From the above, it would appear that dropping the page cache (echo 1 > drop caches) allows a successful hibernate/resume. Next step would have been to isolate which cache being dropped made the difference (e.g. a file or a bdev cache?). However, it is clear from the back traces that there is something unwell with md/sata code, so I don't think this needs to be tracked any further from the filesystem perspective. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jun 13 18:32:56 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 18:32:58 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.4 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5E1WrWt027569 for ; Wed, 13 Jun 2007 18:32:55 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA27886; Thu, 14 Jun 2007 11:32:48 +1000 Date: Thu, 14 Jun 2007 11:32:48 +1000 From: Timothy Shimmin To: Barry Naujok , xfs@oss.sgi.com, xfs-dev Subject: Re: REVIEW: Filestreams support for xfs_io chattr command Message-ID: <643D90500ABD7F8FF6B185F8@boing.melbourne.sgi.com> In-Reply-To: References: X-Mailer: Mulberry/4.0.8 (Mac OS X) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11776 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com 
X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Barry, Looks reasonable to me. Just looking at checkin (master-melb:xfs-cmds:24368a) in a similar area and what it modified, it also changed, db/inode.c, man3/xfsctl.3, repair/dinode.c so there might be some other files to change in the same mod? OOI, do you run xfstests/040? --Tim --On 13 June 2007 3:29:06 PM +1000 Barry Naujok wrote: > > The attached patch lets you enable the filestreams allocator on a > per-directory basis which can be used instead of enabling it via > the mount option with the xfs_io chattr command. From owner-xfs@oss.sgi.com Wed Jun 13 20:33:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 20:33:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.1 required=5.0 tests=AWL,BAYES_80,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5E3XGWt018924 for ; Wed, 13 Jun 2007 20:33:17 -0700 Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 0DB241806631F; Wed, 13 Jun 2007 22:33:17 -0500 (CDT) Message-ID: <4670B6FF.3030005@sandeen.net> Date: Wed, 13 Jun 2007 22:33:19 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.0 (Macintosh/20070326) MIME-Version: 1.0 To: Raghu Prasad CC: xfs@oss.sgi.com Subject: Re: Installation of XFS File system on Fedora3 References: <11093354.post@talk.nabble.com> In-Reply-To: <11093354.post@talk.nabble.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11777 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com 
Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Raghu Prasad wrote: > Friends, > > I'm trying to install XFS on Fedora Core3. I could not get the complete > installation guide or steps. > > Could some one pass me the information? > > Regards, > Raghu http://www.google.com/search?hl=en&q=xfs+fc3+install&btnG=Google+Search leads quickly to: http://fcp.surfsite.org/modules/smartfaq/faq.php?faqid=112 which is labeled: Can I install on ReiserFS, JFS, or XFS? which says: Yes! then says to type: linux xfs at the loader prompt. but really, why would you want to use something as ancient as FC3? :) -eric From owner-xfs@oss.sgi.com Wed Jun 13 22:02:59 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 22:03:01 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5E52uWt031046 for ; Wed, 13 Jun 2007 22:02:58 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA02884; Thu, 14 Jun 2007 15:02:56 +1000 Date: Thu, 14 Jun 2007 15:06:01 +1000 To: "xfs@oss.sgi.com" , xfs-dev Subject: Review: dmapi-devel RPM should required xfsprogs-devel RPM From: "Barry Naujok" Organization: SGI Content-Type: text/plain; format=flowed; delsp=yes; charset=iso-8859-15 MIME-Version: 1.0 Message-ID: User-Agent: Opera Mail/9.10 (Win32) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from Quoted-Printable to 8bit by oss.sgi.com id l5E52xWt031049 X-archive-position: 11778 X-ecartis-version: Ecartis v1.0.0 
Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs As per the subject (and also mentioned in the description): --- a/dmapi/build/rpm/dmapi.spec.in 2007-06-14 15:02:13.000000000 +1000 +++ b/dmapi/build/rpm/dmapi.spec.in 2007-06-14 14:47:31.290063558 +1000 @@ -23,7 +23,7 @@ %package devel Summary: Data Management API static libraries and headers. Group: Development/Libraries -Requires: @pkg_name@ >= 2.0.4 +Requires: @pkg_name@ >= 2.0.4 xfsprogs-devel %description devel dmapi-devel contains the libraries and header files needed to From owner-xfs@oss.sgi.com Wed Jun 13 23:02:07 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 2007 23:02:10 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.1 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_28, J_CHICKENPOX_34,J_CHICKENPOX_55,J_CHICKENPOX_65,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from astra.simleu.ro (astra.simleu.ro [80.97.18.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5E624Wt008138 for ; Wed, 13 Jun 2007 23:02:06 -0700 Received: from teal.hq.k1024.org (84-75-124-135.dclient.hispeed.ch [84.75.124.135]) by astra.simleu.ro (Postfix) with ESMTP id 3B59F70; Thu, 14 Jun 2007 09:02:03 +0300 (EEST) Received: by teal.hq.k1024.org (Postfix, from userid 4004) id E1CEF40A121; Thu, 14 Jun 2007 08:01:58 +0200 (CEST) Date: Thu, 14 Jun 2007 08:01:58 +0200 From: Iustin Pop To: David Chinner Cc: xfs@oss.sgi.com Subject: Re: [PATCH] Implement shrink of empty AGs Message-ID: <20070614060158.GA12951@teal.hq.k1024.org> Mail-Followup-To: David Chinner , xfs@oss.sgi.com References: <20070610164014.GA10936@teal.hq.k1024.org> <20070612024025.GM86004887@sgi.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="X1bOJ3K7DJ5YkBrT" Content-Disposition: inline In-Reply-To: <20070612024025.GM86004887@sgi.com> X-Linux: 
This message was written on Linux X-Header: /usr/include gives great headers User-Agent: Mutt/1.5.13 (2006-08-11) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11779 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: iusty@k1024.org Precedence: bulk X-list: xfs --X1bOJ3K7DJ5YkBrT Content-Type: text/plain; charset=us-ascii Content-Disposition: inline On Tue, Jun 12, 2007 at 12:40:25PM +1000, David Chinner wrote: > > diff -X ignore -urN linux-2.6-xfs.cvs-orig/fs/xfs/xfs_fsops.c linux-2.6-xfs.shrink/fs/xfs/xfs_fsops.c > > --- linux-2.6-xfs.cvs-orig/fs/xfs/xfs_fsops.c 2007-06-09 18:56:21.509308225 +0200 > > +++ linux-2.6-xfs.shrink/fs/xfs/xfs_fsops.c 2007-06-10 18:32:36.074856477 +0200 > > @@ -112,6 +112,53 @@ > > return 0; > > } > > > > +static void xfs_update_sb( > > STATIC void > xfs_growfs_update_sb( this was because xfs_growfs_private is also static and not STATIC. Should I change the def for it also? > > + xfs_mount_t *mp, /* mount point for filesystem */ > > + xfs_agnumber_t nagimax, > > + xfs_agnumber_t nagcount) /* new number of a.g. */ > > tabs, not spaces (and tabs are 8 spaces). sorry, I thought I got all of these. There are some more in the def of xfs_reserve_blocks, btw. > > +{ > > + xfs_agnumber_t agno; > > + xfs_buf_t *bp; > > + xfs_sb_t *sbp; > > + int error; > > + > > + /* New allocation groups fully initialized, so update mount struct */ > > + if (nagimax) > > + mp->m_maxagi = nagimax; > > + if (mp->m_sb.sb_imax_pct) { > > + __uint64_t icount = mp->m_sb.sb_dblocks * mp->m_sb.sb_imax_pct; > > I'd prefer to have long lines like this split. Ok, I hope I split them ok. > > + do_div(icount, 100); > > + mp->m_maxicount = icount << mp->m_sb.sb_inopblog; > > + } else > > + mp->m_maxicount = 0; > > Insert empty line. done. 
> > + for (agno = 1; agno < nagcount; agno++) { > > + error = xfs_read_buf(mp, mp->m_ddev_targp, > > + XFS_AGB_TO_DADDR(mp, agno, XFS_SB_BLOCK(mp)), > > + XFS_FSS_TO_BB(mp, 1), 0, &bp); > > + if (error) { > > + xfs_fs_cmn_err(CE_WARN, mp, > > + "error %d reading secondary superblock for ag %d", > > + error, agno); > > + break; > > + } > > + sbp = XFS_BUF_TO_SBP(bp); > > + xfs_xlatesb(sbp, &mp->m_sb, -1, XFS_SB_ALL_BITS); > > Insert empty line done. > > + /* > > + * If we get an error writing out the alternate superblocks, > > + * just issue a warning and continue. The real work is > > + * already done and committed. > > + */ > > + if (!(error = xfs_bwrite(mp, bp))) { > > + continue; > > + } else { > > + xfs_fs_cmn_err(CE_WARN, mp, > > + "write error %d updating secondary superblock for ag %d", > > + error, agno); > > + break; /* no point in continuing */ > > + } > > + } > > error = xfs_bwrite(mp, bp); > if (error) { > xfs_fs_cmn_err(...) > break; ok. (the original version was purely a move from xfs_growfs_data_private to a separate function) > } > } > > +} > > + > > static int > > xfs_growfs_data_private( > > xfs_mount_t *mp, /* mount point for filesystem */ > > @@ -135,7 +182,6 @@ > > xfs_rfsblock_t nfree; > > xfs_agnumber_t oagcount; > > int pct; > > - xfs_sb_t *sbp; > > xfs_trans_t *tp; > > > > nb = in->newblocks; > > @@ -356,44 +402,228 @@ > > if (error) { > > return error; > > } > > - /* New allocation groups fully initialized, so update mount struct */ > > - if (nagimax) > > - mp->m_maxagi = nagimax; > > - if (mp->m_sb.sb_imax_pct) { > > - __uint64_t icount = mp->m_sb.sb_dblocks * mp->m_sb.sb_imax_pct; > > - do_div(icount, 100); > > - mp->m_maxicount = icount << mp->m_sb.sb_inopblog; > > - } else > > - mp->m_maxicount = 0; > > - for (agno = 1; agno < nagcount; agno++) { > > - error = xfs_read_buf(mp, mp->m_ddev_targp, > > - XFS_AGB_TO_DADDR(mp, agno, XFS_SB_BLOCK(mp)), > > - XFS_FSS_TO_BB(mp, 1), 0, &bp); > > + xfs_update_sb(mp, nagimax, nagcount); > > + 
return 0; > > + > > + error0: > > + xfs_trans_cancel(tp, XFS_TRANS_ABORT); > > + return error; > > +} > > + > > +static int > > STATIC int done. > > +xfs_shrinkfs_data_private( > > + xfs_mount_t *mp, /* mount point for filesystem */ > > + xfs_growfs_data_t *in) /* growfs data input struct */ > > whitespace issues fixed (I think you were referring to the alignment of the two argument lines). > > +{ > > + xfs_agf_t *agf; > > + xfs_agnumber_t agno; > > + xfs_buf_t *bp; > > + int dpct; > > + int error; > > + xfs_agnumber_t nagcount; /* new AG count */ > > + xfs_agnumber_t oagcount; /* old AG count */ > > + xfs_agnumber_t nagimax = 0; > > + xfs_rfsblock_t nb, nb_mod; > > + xfs_rfsblock_t dbdelta; /* will be used as a > > + check that we > > + shrink the fs by > > + the correct number > > + of blocks */ > > + xfs_rfsblock_t fdbdelta; /* will keep track of > > + how many ag blocks > > + we need to > > + remove */ > > Long comments like this don't go on the declaration. Put them where > the variable is initialised or first used. ok. > > + int pct; > > + xfs_trans_t *tp; > > + > > + nb = in->newblocks; > > + pct = in->imaxpct; > > + if (nb >= mp->m_sb.sb_dblocks || pct < 0 || pct > 100) > > + return XFS_ERROR(EINVAL); > > + dpct = pct - mp->m_sb.sb_imax_pct; > > This next bit: > > > + error = xfs_read_buf(mp, mp->m_ddev_targp, > > + XFS_FSB_TO_BB(mp, nb) - XFS_FSS_TO_BB(mp, 1), > > + XFS_FSS_TO_BB(mp, 1), 0, &bp); > > + if (error) > > + return error; > > + ASSERT(bp); > > + /* FIXME: we release the buffer here manually because we are > > + * outside of a transaction? The other buffers read using the > > + * functions which take a tp parameter are not released in > > + * growfs > > + */ > > + xfs_buf_relse(bp); > > Should not be necessary - we don't need to check if the new last > filesystem block is beyond EOF because we are shrinking.... Ah, now I understand what this does. I just copied it from growfs without realising what it does. Removed. 
> To answer the FIXME - xfs_trans_commit() releases locked buffers and > inodes that have been joined ot the transaction unless they have > also been held. So if you are outside a transaction, you do have to > ensure you release any buffers you read. Thanks for the clarification! > > + /* Do basic checks (at the fs level) */ > > + oagcount = mp->m_sb.sb_agcount; > > + nagcount = nb; > > + nb_mod = do_div(nagcount, mp->m_sb.sb_agblocks); > > + if(nb_mod) { > > + printk("not shrinking on an AG boundary (diff=%d)\n", nb_mod); > > + return XFS_ERROR(ENOSPC); > > EINVAL, I think. fixed. > > + } > > + if(nagcount < 2) { > > + printk("refusing to shrink below 2 AGs\n"); > > + return XFS_ERROR(ENOSPC); > > EINVAL. fixed. > > + } > > + if(nagcount >= oagcount) { > > + printk("number of AGs will not decrease\n"); > > + return XFS_ERROR(EINVAL); > > + } > > + printk("Cur ag=%d, cur blocks=%llu\n", > > + mp->m_sb.sb_agcount, mp->m_sb.sb_dblocks); > > + printk("New ag=%d, new blocks=%d\n", nagcount, nb); > > + > > + printk("Will resize from %llu to %d, delta is %llu\n", > > + mp->m_sb.sb_dblocks, nb, mp->m_sb.sb_dblocks - nb); > > + /* Check to see if we trip over the log section */ > > + printk("logstart=%llu logblocks=%u\n", > > + mp->m_sb.sb_logstart, mp->m_sb.sb_logblocks); > > + if (nb < mp->m_sb.sb_logstart + mp->m_sb.sb_logblocks) > > + return XFS_ERROR(EINVAL); > > Insert empty line Done. > > + /* dbdelta starts at the diff and must become zero */ > > + dbdelta = mp->m_sb.sb_dblocks - nb; > > + tp = xfs_trans_alloc(mp, XFS_TRANS_GROWFS); > > + printk("reserving %d\n", XFS_GROWFS_SPACE_RES(mp) + dbdelta); > > + if ((error = xfs_trans_reserve(tp, XFS_GROWFS_SPACE_RES(mp) + dbdelta, > > + XFS_GROWDATA_LOG_RES(mp), 0, 0, 0))) { > > + xfs_trans_cancel(tp, 0); > > + return error; > > + } > > What's the dbdelta part of the reservation for? That's reserving dbdelta > blocks for *allocations*, so I don't think this is right.... 
Well, we'll shrink the filesystem by dbdelta, so the filesystem needs to have enough free space to do it. Whether this space is in the correct place (the AGs we want to shrink) or not is a question answered by the per-AG checking, but since we will reduce the freespace by this amount, I thought that it's safer to mark this space in use. Did I misread the intent of xfs_trans_reserve? Also, unless I'm mistaken and don't remember correctly, you have to reserve the amount of space one will pass to xfs_trans_mod_sb(tp, XFS_TRANS_SB_FDBLOCKS, ...) otherwise the transaction code complains about exceeding your reservation. > > + > > + fdbdelta = 0; > > + > > + /* Per-AG checks */ > > + /* FIXME: do we need to hold m_peraglock while doing this? */ > > Yes. > > > + /* I think that since we do read and write to the m_perag > > + * stuff, we should be holding the lock for the entire walk & > > + * modify of the fs > > + */ > > Deadlock warning! holding the m_peraglock in write mode will cause > allocation deadlocks if you are not careful as all allocation/free > operations take the m_peraglock in read mode. (And yes, growing > an active, loaded filesystem can deadlock because of this.) How can we ensure then that no one modifies the AGs while we walk them? I hoped that we can do it without the per-AG not-avail flag, by just holding the perag lock. Or you mean we should keep the read lock, and grab also the write lock when actually modifying the perag structure? (Or should that be grab read lock for checking, release read lock - hope nobody modifies AGs - grab write lock?) > > + /* Note that because we hold the lock, on any error+early > > + * return, we must either release manually and return, or > > + * jump to error0 > > + */ > > whitespace. fixed. > > + down_write(&mp->m_peraglock); > > + for(agno = oagcount - 1; agno >= nagcount; agno--) { > > + xfs_extlen_t usedblks; /* total used blocks in this a.g. */ > > + xfs_extlen_t freeblks; /* free blocks in this a.g. 
*/ > > + xfs_agblock_t aglen; /* this ag's len */ > > + struct xfs_perag *pag; /* the m_perag structure */ > > + > > + printk("doing agno=%d\n", agno); > > + > > + pag = &mp->m_perag[agno]; > > + > > + error = xfs_alloc_read_agf(mp, tp, agno, 0, &bp); > > if (error) { > > - xfs_fs_cmn_err(CE_WARN, mp, > > - "error %d reading secondary superblock for ag %d", > > - error, agno); > > - break; > > + goto error0; > > } > > - sbp = XFS_BUF_TO_SBP(bp); > > - xfs_xlatesb(sbp, &mp->m_sb, -1, XFS_SB_ALL_BITS); > > + ASSERT(bp); > > + agf = XFS_BUF_TO_AGF(bp); > > + aglen = INT_GET(agf->agf_length, ARCH_CONVERT); > > + > > + /* read the pagf/pagi if not already initialized */ > > + /* agf should be initialized because of the ablove read_agf */ > > + ASSERT(pag->pagf_init); > > + if (!pag->pagi_init) { > > + if ((error = xfs_ialloc_read_agi(mp, tp, agno, &bp))) > > + goto error0; > > I don't think you should be overwriting bp here.... > > > + ASSERT(pag->pagi_init); > > + } > > + > > Because now you have bp potentially pointing to two different buffers. Yes, indeed, sloppy programming. Fixed. > > /* > > - * If we get an error writing out the alternate superblocks, > > - * just issue a warning and continue. The real work is > > - * already done and committed. 
> > + * Check the inodes: as long as we have pagi_count == > > + * pagi_freecount == 0, then: a) we don't have to > > + * update any global inode counters, and b) there are > > + * no extra blocks in inode btrees > > */ > > - if (!(error = xfs_bwrite(mp, bp))) { > > - continue; > > - } else { > > - xfs_fs_cmn_err(CE_WARN, mp, > > - "write error %d updating secondary superblock for ag %d", > > - error, agno); > > - break; /* no point in continuing */ > > + if(pag->pagi_count > 0 || > > + pag->pagi_freecount > 0) { > > + printk("agi %d has %d inodes in total and %d free\n", > > + agno, pag->pagi_count, pag->pagi_freecount); > > + error = XFS_ERROR(ENOSPC); > > + goto error0; > > + } > > + > > + /* Check the AGF: if levels[] == 1, then there should > > + * be no extra blocks in the btrees beyond the ones > > + * at the beggining of the AG > > + */ > > + if(pag->pagf_levels[XFS_BTNUM_BNOi] > 1 || > > + pag->pagf_levels[XFS_BTNUM_CNTi] > 1) { > > + printk("agf %d has level %d bt and %d cnt\n", > > + agno, > > + pag->pagf_levels[XFS_BTNUM_BNOi], > > + pag->pagf_levels[XFS_BTNUM_CNTi]); > > + error = XFS_ERROR(ENOSPC); > > + goto error0; > > } > > ok, so we have empty ag's here. You might want to check that the > inode btree is empty and that the AGI unlinked list is empty. I thought that inode btree is empty by virtue of pag->pagi_count == 0. Is this not always so? Furthermore, also since agi_count == agi_free + actual used inodes + number of unlinked inodes, I think we don't need to check the unlinked list. 
> > + freeblks = pag->pagf_freeblks; > > + printk("Usage: %d prealloc, %d flcount\n", > > + XFS_PREALLOC_BLOCKS(mp), pag->pagf_flcount); > > + > > + /* Done gathering data, check sizes */ > > + usedblks = XFS_PREALLOC_BLOCKS(mp) + pag->pagf_flcount; > > + printk("agno=%d agf_length=%d computed used=%d" > > + " known free=%d\n", agno, aglen, usedblks, freeblks); > > + > > + if(usedblks + freeblks != aglen) { > > + printk("agno %d is not free (%d blocks allocated)\n", > > + agno, aglen-usedblks-freeblks); > > + error = XFS_ERROR(ENOSPC); > > + goto error0; > > + } > > + dbdelta -= aglen; > > + printk("will lower with %d\n", > > + aglen - XFS_PREALLOC_BLOCKS(mp)); > > + fdbdelta += aglen - XFS_PREALLOC_BLOCKS(mp); > > Ok, so why not just > > fdbdelta += mp->m_sb.sb_agblocks - XFS_PREALLOC_BLOCKS(mp); Because the last AG can be smaller than sb_agblocks. It's true that this holds only for the last one, but having two cases is uglier than just always reading this size from the AG. > > + } > > + /* > > + * Check that we removed all blocks > > + */ > > + ASSERT(!dbdelta); > > + ASSERT(nagcount < oagcount); > > Error out, not assert, because at this point we have not changed anything. Actually here, violations of !dbdelta or nagcount < oagcount are programming errors and not possible correct conditions. They could be removed, since the checks before the per-AG for loop already rule them out. But I used them just to be sure. 
> > + > > + printk("to free: %d, oagcount=%d, nagcount=%d\n", > > + fdbdelta, oagcount, nagcount); > > + > > + xfs_trans_agblocks_delta(tp, -((long)fdbdelta)); > > + xfs_trans_mod_sb(tp, XFS_TRANS_SB_AGCOUNT, nagcount - oagcount); > > + xfs_trans_mod_sb(tp, XFS_TRANS_SB_DBLOCKS, nb - mp->m_sb.sb_dblocks); > > + xfs_trans_mod_sb(tp, XFS_TRANS_SB_FDBLOCKS, -((int64_t)fdbdelta)); > > + > > + if (dpct) > > + xfs_trans_mod_sb(tp, XFS_TRANS_SB_IMAXPCT, dpct); > > + error = xfs_trans_commit(tp, 0); > > + if (error) { > > + up_write(&mp->m_peraglock); > > + return error; > > } > > + /* Free memory as the number of AG has changed */ > > + for (agno = nagcount; agno < oagcount; agno++) > > + if (mp->m_perag[agno].pagb_list) > > + kmem_free(mp->m_perag[agno].pagb_list, > > + sizeof(xfs_perag_busy_t) * > > + XFS_PAGB_NUM_SLOTS); > > + > > + mp->m_perag = kmem_realloc(mp->m_perag, > > + sizeof(xfs_perag_t) * nagcount, > > + sizeof(xfs_perag_t) * oagcount, > > + KM_SLEEP); > > This is not really safe - how do we know if all the users of the > higher AGs have gone away? I think we should simply just zero out > the unused AGs and don't worry about a realloc(). The problem that I saw is that if you do shrink+grow+shrink+grow+... you will leak a small amount of memory (or corrupt kernel mem allocation?) since the growfs code does a realloc from what it thinks is the size of m_perag, which it computes solely from the current number of AGs. Should we have a size in the mp struct for this and not assume it's the agcount? > > + /* FIXME: here we could instead just lower > > + * nagimax to nagcount; is it better this way? > > + */ > > Not really. Ok, removed comment. > > + /* FIXME: why is this flag unconditionally set in growfs? */ > > + mp->m_flags |= XFS_MOUNT_32BITINODES; > > good question. I don't think it should be there but I'll have to > do some digging.... Per Eric's mail, I removed both the comment and the flag setting. 
> > + nagimax = xfs_initialize_perag(XFS_MTOVFS(mp), mp, nagcount); > > + up_write(&mp->m_peraglock); > > + > > + xfs_update_sb(mp, nagimax, nagcount); > > return 0; > > > > error0: > > + up_write(&mp->m_peraglock); > > xfs_trans_cancel(tp, XFS_TRANS_ABORT); > > return error; > > } > > @@ -435,7 +665,10 @@ > > int error; > > if (!cpsema(&mp->m_growlock)) > > return XFS_ERROR(EWOULDBLOCK); > > - error = xfs_growfs_data_private(mp, in); > > + if(in->newblocks < mp->m_sb.sb_dblocks) > > + error = xfs_shrinkfs_data_private(mp, in); > > + else > > + error = xfs_growfs_data_private(mp, in); > > Hmmm - that's using the one ioctl to do both grow and shrink. I'd > prefer a new shrink ioctl rather than changing the behaviour of an > existing ioctl. Well, I chose this way because I see it as the ioctl that changes the data size of the filesystem. It may be lower or higher than the current size, but that is not so important :), and if another ioctl there would be the need for another tool. > Looks like a good start ;) thanks! and also thanks for taking time to look at this. updated patch (without the separation of the IOCTL and without the rework of the perag lock) is attached. iustin --X1bOJ3K7DJ5YkBrT Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename=patch-nice-5 diff -X ignore -urN linux-2.6-xfs.cvs-orig/fs/xfs/xfs_fsops.c linux-2.6-xfs.shrink/fs/xfs/xfs_fsops.c --- linux-2.6-xfs.cvs-orig/fs/xfs/xfs_fsops.c 2007-06-09 18:56:21.509308225 +0200 +++ linux-2.6-xfs.shrink/fs/xfs/xfs_fsops.c 2007-06-14 07:54:50.580420252 +0200 @@ -112,6 +112,55 @@ return 0; } +STATIC void xfs_growfs_update_sb( + xfs_mount_t *mp, /* mount point for filesystem */ + xfs_agnumber_t nagimax, + xfs_agnumber_t nagcount) /* new number of a.g. 
*/ +{ + xfs_agnumber_t agno; + xfs_buf_t *bp; + xfs_sb_t *sbp; + int error; + + /* New allocation groups fully initialized, so update mount struct */ + if (nagimax) + mp->m_maxagi = nagimax; + if (mp->m_sb.sb_imax_pct) { + __uint64_t icount = mp->m_sb.sb_dblocks * + mp->m_sb.sb_imax_pct; + do_div(icount, 100); + mp->m_maxicount = icount << mp->m_sb.sb_inopblog; + } else + mp->m_maxicount = 0; + + for (agno = 1; agno < nagcount; agno++) { + error = xfs_read_buf(mp, mp->m_ddev_targp, + XFS_AGB_TO_DADDR(mp, agno, XFS_SB_BLOCK(mp)), + XFS_FSS_TO_BB(mp, 1), 0, &bp); + if (error) { + xfs_fs_cmn_err(CE_WARN, mp, + "error %d reading secondary superblock for ag %d", + error, agno); + break; + } + sbp = XFS_BUF_TO_SBP(bp); + xfs_xlatesb(sbp, &mp->m_sb, -1, XFS_SB_ALL_BITS); + + /* + * If we get an error writing out the alternate superblocks, + * just issue a warning and continue. The real work is + * already done and committed. + */ + error = xfs_bwrite(mp, bp); + if(error) { + xfs_fs_cmn_err(CE_WARN, mp, + "write error %d updating secondary superblock for ag %d", + error, agno); + break; /* no point in continuing */ + } + } +} + static int xfs_growfs_data_private( xfs_mount_t *mp, /* mount point for filesystem */ @@ -135,7 +184,6 @@ xfs_rfsblock_t nfree; xfs_agnumber_t oagcount; int pct; - xfs_sb_t *sbp; xfs_trans_t *tp; nb = in->newblocks; @@ -356,44 +404,210 @@ if (error) { return error; } - /* New allocation groups fully initialized, so update mount struct */ - if (nagimax) - mp->m_maxagi = nagimax; - if (mp->m_sb.sb_imax_pct) { - __uint64_t icount = mp->m_sb.sb_dblocks * mp->m_sb.sb_imax_pct; - do_div(icount, 100); - mp->m_maxicount = icount << mp->m_sb.sb_inopblog; - } else - mp->m_maxicount = 0; - for (agno = 1; agno < nagcount; agno++) { - error = xfs_read_buf(mp, mp->m_ddev_targp, - XFS_AGB_TO_DADDR(mp, agno, XFS_SB_BLOCK(mp)), - XFS_FSS_TO_BB(mp, 1), 0, &bp); + xfs_growfs_update_sb(mp, nagimax, nagcount); + return 0; + + error0: + xfs_trans_cancel(tp, 
XFS_TRANS_ABORT); + return error; +} + +STATIC int +xfs_shrinkfs_data_private( + xfs_mount_t *mp, /* mount point for filesystem */ + xfs_growfs_data_t *in) /* growfs data input struct */ +{ + xfs_agf_t *agf; + xfs_agnumber_t agno; + int dpct; + int error; + xfs_agnumber_t nagcount; /* new AG count */ + xfs_agnumber_t oagcount; /* old AG count */ + xfs_agnumber_t nagimax = 0; + xfs_rfsblock_t nb, nb_mod; + xfs_rfsblock_t dbdelta; + xfs_rfsblock_t fdbdelta; + int pct; + xfs_trans_t *tp; + + nb = in->newblocks; + pct = in->imaxpct; + if (nb >= mp->m_sb.sb_dblocks || pct < 0 || pct > 100) + return XFS_ERROR(EINVAL); + dpct = pct - mp->m_sb.sb_imax_pct; + + /* Do basic checks (at the fs level) */ + oagcount = mp->m_sb.sb_agcount; + nagcount = nb; + nb_mod = do_div(nagcount, mp->m_sb.sb_agblocks); + if(nb_mod) { + printk("not shrinking on an AG boundary (diff=%d)\n", nb_mod); + return XFS_ERROR(EINVAL); + } + if(nagcount < 2) { + printk("refusing to shrink below 2 AGs\n"); + return XFS_ERROR(EINVAL); + } + if(nagcount >= oagcount) { + printk("number of AGs will not decrease\n"); + return XFS_ERROR(EINVAL); + } + printk("Cur ag=%d, cur blocks=%llu\n", + mp->m_sb.sb_agcount, mp->m_sb.sb_dblocks); + printk("New ag=%d, new blocks=%d\n", nagcount, nb); + + printk("Will resize from %llu to %d, delta is %llu\n", + mp->m_sb.sb_dblocks, nb, mp->m_sb.sb_dblocks - nb); + /* Check to see if we trip over the log section */ + printk("logstart=%llu logblocks=%u\n", + mp->m_sb.sb_logstart, mp->m_sb.sb_logblocks); + if (nb < mp->m_sb.sb_logstart + mp->m_sb.sb_logblocks) + return XFS_ERROR(EINVAL); + + /* dbdelta starts at the diff and must become zero */ + dbdelta = mp->m_sb.sb_dblocks - nb; + tp = xfs_trans_alloc(mp, XFS_TRANS_GROWFS); + printk("reserving %d\n", XFS_GROWFS_SPACE_RES(mp) + dbdelta); + if ((error = xfs_trans_reserve(tp, XFS_GROWFS_SPACE_RES(mp) + dbdelta, + XFS_GROWDATA_LOG_RES(mp), 0, 0, 0))) { + xfs_trans_cancel(tp, 0); + return error; + } + + /* fbdelta keeps track of 
how many ag blocks we need to remove + * (this is different from the number of filesystem blocks) + */ + fdbdelta = 0; + + /* Per-AG checks */ + /* FIXME: do we need to hold m_peraglock while doing this? */ + /* I think that since we do read and write to the m_perag + * stuff, we should be holding the lock for the entire walk & + * modify of the fs + */ + /* Note that because we hold the lock, on any error+early + * return, we must either release manually and return, or jump + * to error0 + */ + down_write(&mp->m_peraglock); + for(agno = oagcount - 1; agno >= nagcount; agno--) { + xfs_extlen_t usedblks; /* total used blocks in this a.g. */ + xfs_extlen_t freeblks; /* free blocks in this a.g. */ + xfs_agblock_t aglen; /* this ag's len */ + struct xfs_perag *pag; /* the m_perag structure */ + xfs_buf_t *bpf; + xfs_buf_t *bpi; + + printk("doing agno=%d\n", agno); + + pag = &mp->m_perag[agno]; + + error = xfs_alloc_read_agf(mp, tp, agno, 0, &bpf); if (error) { - xfs_fs_cmn_err(CE_WARN, mp, - "error %d reading secondary superblock for ag %d", - error, agno); - break; + goto error0; } - sbp = XFS_BUF_TO_SBP(bp); - xfs_xlatesb(sbp, &mp->m_sb, -1, XFS_SB_ALL_BITS); + ASSERT(bpf); + agf = XFS_BUF_TO_AGF(bpf); + aglen = INT_GET(agf->agf_length, ARCH_CONVERT); + + /* read the pagf/pagi if not already initialized */ + /* agf should be initialized because of the ablove read_agf */ + ASSERT(pag->pagf_init); + if (!pag->pagi_init) { + if ((error = xfs_ialloc_read_agi(mp, tp, agno, &bpi))) + goto error0; + ASSERT(pag->pagi_init); + } + /* - * If we get an error writing out the alternate superblocks, - * just issue a warning and continue. The real work is - * already done and committed. 
+ * Check the inodes: as long as we have pagi_count == + * pagi_freecount == 0, then: a) we don't have to + * update any global inode counters, and b) there are + * no extra blocks in inode btrees */ - if (!(error = xfs_bwrite(mp, bp))) { - continue; - } else { - xfs_fs_cmn_err(CE_WARN, mp, - "write error %d updating secondary superblock for ag %d", - error, agno); - break; /* no point in continuing */ + if(pag->pagi_count > 0 || + pag->pagi_freecount > 0) { + printk("agi %d has %d inodes in total and %d free\n", + agno, pag->pagi_count, pag->pagi_freecount); + error = XFS_ERROR(ENOSPC); + goto error0; } + + /* Check the AGF: if levels[] == 1, then there should + * be no extra blocks in the btrees beyond the ones + * at the beggining of the AG + */ + if(pag->pagf_levels[XFS_BTNUM_BNOi] > 1 || + pag->pagf_levels[XFS_BTNUM_CNTi] > 1) { + printk("agf %d has level %d bt and %d cnt\n", + agno, + pag->pagf_levels[XFS_BTNUM_BNOi], + pag->pagf_levels[XFS_BTNUM_CNTi]); + error = XFS_ERROR(ENOSPC); + goto error0; + } + + freeblks = pag->pagf_freeblks; + printk("Usage: %d prealloc, %d flcount\n", + XFS_PREALLOC_BLOCKS(mp), pag->pagf_flcount); + + /* Done gathering data, check sizes */ + usedblks = XFS_PREALLOC_BLOCKS(mp) + pag->pagf_flcount; + printk("agno=%d agf_length=%d computed used=%d" + " known free=%d\n", agno, aglen, usedblks, freeblks); + + if(usedblks + freeblks != aglen) { + printk("agno %d is not free (%d blocks allocated)\n", + agno, aglen-usedblks-freeblks); + error = XFS_ERROR(ENOSPC); + goto error0; + } + dbdelta -= aglen; + printk("will lower with %d\n", + aglen - XFS_PREALLOC_BLOCKS(mp)); + fdbdelta += aglen - XFS_PREALLOC_BLOCKS(mp); } + /* + * Check that we removed all blocks + */ + ASSERT(!dbdelta); + ASSERT(nagcount < oagcount); + + printk("to free: %d, oagcount=%d, nagcount=%d\n", + fdbdelta, oagcount, nagcount); + + xfs_trans_agblocks_delta(tp, -((long)fdbdelta)); + xfs_trans_mod_sb(tp, XFS_TRANS_SB_AGCOUNT, nagcount - oagcount); + xfs_trans_mod_sb(tp, 
XFS_TRANS_SB_DBLOCKS, nb - mp->m_sb.sb_dblocks); + xfs_trans_mod_sb(tp, XFS_TRANS_SB_FDBLOCKS, -((int64_t)fdbdelta)); + + if (dpct) + xfs_trans_mod_sb(tp, XFS_TRANS_SB_IMAXPCT, dpct); + error = xfs_trans_commit(tp, 0); + if (error) { + up_write(&mp->m_peraglock); + return error; + } + /* Free memory as the number of AG has changed */ + for (agno = nagcount; agno < oagcount; agno++) + if (mp->m_perag[agno].pagb_list) + kmem_free(mp->m_perag[agno].pagb_list, + sizeof(xfs_perag_busy_t) * + XFS_PAGB_NUM_SLOTS); + + mp->m_perag = kmem_realloc(mp->m_perag, + sizeof(xfs_perag_t) * nagcount, + sizeof(xfs_perag_t) * oagcount, + KM_SLEEP); + + nagimax = xfs_initialize_perag(XFS_MTOVFS(mp), mp, nagcount); + up_write(&mp->m_peraglock); + + xfs_growfs_update_sb(mp, nagimax, nagcount); return 0; error0: + up_write(&mp->m_peraglock); xfs_trans_cancel(tp, XFS_TRANS_ABORT); return error; } @@ -435,7 +649,10 @@ int error; if (!cpsema(&mp->m_growlock)) return XFS_ERROR(EWOULDBLOCK); - error = xfs_growfs_data_private(mp, in); + if(in->newblocks < mp->m_sb.sb_dblocks) + error = xfs_shrinkfs_data_private(mp, in); + else + error = xfs_growfs_data_private(mp, in); vsema(&mp->m_growlock); return error; } @@ -633,7 +850,7 @@ xfs_force_shutdown(mp, SHUTDOWN_FORCE_UMOUNT); thaw_bdev(sb->s_bdev, sb); } - + break; } case XFS_FSOP_GOING_FLAGS_LOGFLUSH: diff -X ignore -urN linux-2.6-xfs.cvs-orig/fs/xfs/xfs_trans.c linux-2.6-xfs.shrink/fs/xfs/xfs_trans.c --- linux-2.6-xfs.cvs-orig/fs/xfs/xfs_trans.c 2007-06-05 17:40:51.000000000 +0200 +++ linux-2.6-xfs.shrink/fs/xfs/xfs_trans.c 2007-06-07 23:01:03.000000000 +0200 @@ -503,11 +503,9 @@ tp->t_res_frextents_delta += delta; break; case XFS_TRANS_SB_DBLOCKS: - ASSERT(delta > 0); tp->t_dblocks_delta += delta; break; case XFS_TRANS_SB_AGCOUNT: - ASSERT(delta > 0); tp->t_agcount_delta += delta; break; case XFS_TRANS_SB_IMAXPCT: --X1bOJ3K7DJ5YkBrT-- From owner-xfs@oss.sgi.com Wed Jun 13 23:34:15 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Jun 
2007 23:34:17 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5E6YCWt011983 for ; Wed, 13 Jun 2007 23:34:14 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA05580; Thu, 14 Jun 2007 16:34:07 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5E6Y6Af122774840; Thu, 14 Jun 2007 16:34:06 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5E6Y4a0121969473; Thu, 14 Jun 2007 16:34:04 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 14 Jun 2007 16:34:04 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss , hch@infradead.org Subject: [PATCH, RFC] fix null files exposure growing via ftruncate Message-ID: <20070614063404.GW86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11780 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Christoph, Looking into the test 140 failure you reported, I realised that none of the specific null files tests were being run automatically, which is why I hadn't seen any of those failures (nor had the QA team). That's being fixed. 
I suspect that the test passes on Irix because of a coincidence (the test sleeps for 10s and that is the default writeback timeout for file data) which means when the filesystem is shut down all the data is already on disk so it's not really testing the NULL files fix. The failure is due to the ftruncate() logging the new file size before any data that had previously been written had hit the disk. IOWs, it violates the data write/inode size update rule that fixes the null files problem. The fix here checks when growing the file whether the disk inode size is different to the in memory size. If they are different, we have data that needs to be written to disk beyond the existing on disk EOF. Hence to maintain ordering we need to flush this data out before we log the changed file size. I suspect the flush could be done more optimally - I've just done a brute-force flush-the-entire-file mod. Should we only flush from the old di_size to the current i_size? There may also be better ways to fix this. Any thoughts on that? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/xfs_vnodeops.c | 21 ++++++++++++++++++--- 1 file changed, 18 insertions(+), 3 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vnodeops.c 2007-06-13 14:12:09.000000000 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c 2007-06-14 16:01:48.562882473 +1000 @@ -593,9 +593,24 @@ xfs_setattr( if ((vap->va_size > ip->i_size) && (flags & ATTR_NOSIZETOK) == 0) { code = xfs_igrow_start(ip, vap->va_size, credp); - } - xfs_iunlock(ip, XFS_ILOCK_EXCL); - vn_iowait(vp); /* wait for the completion of any pending DIOs */ + xfs_iunlock(ip, XFS_ILOCK_EXCL); + /* + * We are going to log the inode size change in + * this transaction so any previous writes that are + * beyond the on disk EOF that have not been written + * out need to be written here. 
If we do not write the + * data out, we expose ourselves to the null files + * problem on grow. + */ + if (!code && ip->i_size != ip->i_d.di_size) + code = bhv_vop_flush_pages(XFS_ITOV(ip), 0, -1, + XFS_B_ASYNC, FI_NONE); + } else + xfs_iunlock(ip, XFS_ILOCK_EXCL); + + /* wait for I/O to complete */ + vn_iowait(vp); + if (!code) code = xfs_itruncate_data(ip, vap->va_size); if (code) { From owner-xfs@oss.sgi.com Thu Jun 14 01:42:43 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 01:42:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.4 required=5.0 tests=AWL,BAYES_80,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from ug-out-1314.google.com (ug-out-1314.google.com [66.249.92.173]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5E8geWt018540 for ; Thu, 14 Jun 2007 01:42:42 -0700 Received: by ug-out-1314.google.com with SMTP id 74so712660ugb for ; Thu, 14 Jun 2007 01:42:40 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:subject:from:to:cc:in-reply-to:references:content-type:date:message-id:mime-version:x-mailer; b=N8HtVFhQVOiXbOFFocJk5u0l4/KFSFtDxjLYkLqCyTuDoElrnuUuiBC5TOOGvSSliA8OIJCgzPLUyi5hsTkWW4NtaEi/2mGjjk/NrEkRIc/uHaJIs1wWFUrqSq3gLITTbZCJQk4XsK6eEk3EQgymhjmYzVvsJ7QWm81NxevEuPc= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:subject:from:to:cc:in-reply-to:references:content-type:date:message-id:mime-version:x-mailer; b=s3ctva87Lfod4r/ug+fbckmwN5cB9AhNUS0BBuXADjVjTTPnrIs15ro+y0hNueb+I1lMk4uVZ2558OFbwiP+bl9Qo5o7DZnwTxtDPp/RORhKb8WMjHNlXk91fpPk2zpjxAaoXqgLM9accRUhEog8l0/v/aw8MpSxPEIAYflgIoY= Received: by 10.82.116.15 with SMTP id o15mr2879757buc.1181810131198; Thu, 14 Jun 2007 01:35:31 -0700 (PDT) Received: from ?192.168.1.10? 
( [84.59.115.130]) by mx.google.com with ESMTP id 6sm4202603nfv.2007.06.14.01.35.29 (version=TLSv1/SSLv3 cipher=RC4-MD5); Thu, 14 Jun 2007 01:35:30 -0700 (PDT) Subject: Re: XFS shrink functionality From: Ruben Porras To: David Chinner Cc: xfs@oss.sgi.com, cw@f00f.org, iusty@k1024.org In-Reply-To: <20070608151223.GF86004887@sgi.com> References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> <20070604084154.GA8273@teal.hq.k1024.org> <1181291033.7510.40.camel@localhost> <20070608101532.GA18788@teal.hq.k1024.org> <20070608151223.GF86004887@sgi.com> Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="=-oxU5dVOBePDayLCCRZvz" Date: Thu, 14 Jun 2007 10:35:27 +0200 Message-Id: <1181810127.6539.13.camel@localhost> Mime-Version: 1.0 X-Mailer: Evolution 2.10.2 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11781 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nahoo82@gmail.com Precedence: bulk X-list: xfs --=-oxU5dVOBePDayLCCRZvz Content-Type: text/plain Content-Transfer-Encoding: quoted-printable > > I took a look at both items since this discussion started. And honestly, > > I think 1) is harder that 4), so you're welcome to work on it :) The > > points that make it harder is that, per David's suggestion, there needs > > to be: > > - define two new transaction types >=20 > one new transaction type: >=20 > XFS_TRANS_AGF_FLAGS done > and and extension to xfs_alloc_log_agf(). Is about all that is > needed there. still to do. Will come after the ioctls. > See the patch here: >=20 > http://oss.sgi.com/archives/xfs/2007-04/msg00103.html >=20 > For an example of a very simlar transaction to what is needed > (look at xfs_log_sbcount()) and very similar addition to > the AGF (xfs_btreeblks). 
>
> > - define two new ioctls
>
> XFS_IOC_ALLOC_ALLOW_AG, parameter xfs_agnumber_t.
> XFS_IOC_ALLOC_DENY_AG, parameter xfs_agnumber_t.

almost done. How should I obtain a pointer to an xfs_agf_t from inside
the ioctls? I guess that the first step is to get a *bp with xfs_getsb
and then an *sbp, but which function/macro gives me the xfs_agf_t
pointer? Sorry, I can't find the way grepping through the code.

> > - update the ondisk-format (!), if we want persistence of these flags;
> > luckily, there are two spare fields in the AGF structure.
>
> Better to expand, I think. The AGF is a sector in length - we can
> expand the structure as we need to up to this size without fear, esp. as
> the part of the sector outside the structure is guaranteed to be
> zero. i.e. we can add a flags field to the end of the AGF
> structure - old filesystems simply read it as "no flags set" and
> old kernels never look at those bits....

done.

> > - check the list of allocation functions that allocate space from the
> > AG

still to be done. Thanks again for the help.
From owner-xfs@oss.sgi.com Thu Jun 14 02:01:04 2007
Date: Thu, 14 Jun 2007 19:00:52 +1000
From: David Chinner
To: xfs@oss.sgi.com
Cc: Iustin Pop
Subject: Re: [PATCH] Implement shrink of empty AGs
Message-ID: <20070614090052.GA86004887@sgi.com>
References: <20070610164014.GA10936@teal.hq.k1024.org> <20070612024025.GM86004887@sgi.com> <20070614060158.GA12951@teal.hq.k1024.org>
In-Reply-To: <20070614060158.GA12951@teal.hq.k1024.org>

On Thu, Jun 14, 2007 at 08:01:58AM +0200, Iustin Pop wrote:
> On Tue, Jun 12, 2007 at 12:40:25PM +1000, David Chinner wrote:
> > > diff -X ignore -urN linux-2.6-xfs.cvs-orig/fs/xfs/xfs_fsops.c linux-2.6-xfs.shrink/fs/xfs/xfs_fsops.c
> > > --- linux-2.6-xfs.cvs-orig/fs/xfs/xfs_fsops.c	2007-06-09 18:56:21.509308225 +0200
> > > +++ linux-2.6-xfs.shrink/fs/xfs/xfs_fsops.c	2007-06-10 18:32:36.074856477 +0200
> > > @@ -112,6 +112,53 @@
> > >  	return 0;
> > >  }
> > >
> > > +static void xfs_update_sb(
> >
> > STATIC void
> > xfs_growfs_update_sb(
> this was because xfs_growfs_private is also static and not STATIC.
> Should I change the def for it also?

Yes, probably should.

> > > +	xfs_mount_t	*mp,	/* mount point for filesystem */
> > > +	xfs_agnumber_t	nagimax,
> > > +	xfs_agnumber_t	nagcount)	/* new number of a.g. */
> >
> > tabs, not spaces (and tabs are 8 spaces).
> sorry, I thought I got all of these. There are some more in the def of
> xfs_reserve_blocks, btw.

Ok, I'll clean them up in the current fix-reserve-blocks-yet-again
patch I have.

> > > +xfs_shrinkfs_data_private(
> > > +	xfs_mount_t	*mp,	/* mount point for filesystem */
> > > +	xfs_growfs_data_t	*in)	/* growfs data input struct */
> >
> > whitespace issues
> fixed (I think you were referring to the alignment of the two argument
> lines).

Yes.
> > > +	/* dbdelta starts at the diff and must become zero */
> > > +	dbdelta = mp->m_sb.sb_dblocks - nb;
> > > +	tp = xfs_trans_alloc(mp, XFS_TRANS_GROWFS);
> > > +	printk("reserving %d\n", XFS_GROWFS_SPACE_RES(mp) + dbdelta);
> > > +	if ((error = xfs_trans_reserve(tp, XFS_GROWFS_SPACE_RES(mp) + dbdelta,
> > > +			XFS_GROWDATA_LOG_RES(mp), 0, 0, 0))) {
> > > +		xfs_trans_cancel(tp, 0);
> > > +		return error;
> > > +	}
> >
> > What's the dbdelta part of the reservation for? That's reserving dbdelta
> > blocks for *allocations*, so I don't think this is right....
>
> Well, we'll shrink the filesystem by dbdelta, so the filesystem needs to
> have enough free space to do it.

True, but when you look at how much free space we need to make that
modification, it turns out to be zero. i.e. we don't need to do any
allocations *anywhere* to remove an empty AG from the end of the
filesystem - just log a change to the superblock(s).

> Whether this space is in the correct
> place (the AGs we want to shrink) or not is a question answered by the
> per-AG checking, but since we will reduce the freespace by this amount,
> I thought that it's safer to mark this space in use. Did I misread the
> intent of xfs_trans_reserve?

The transaction reservation takes into account blocks that may need to
be allocated during a transaction. Normally when we are freeing an
extent, we may still have to allocate some blocks because a btree block
might need to be split (and split all the way up the tree to the root).

The XFS_GROWFS_SPACE_RES(mp) macro reserves the space needed for the
freeing of a new extent which occurs during growing a partial AG to a
full AG. We aren't doing such an operation here; we don't modify any
free space or inode or extent btrees here at all.

> Also, unless I'm mistaken and don't remember correctly, you have to
> reserve the amount of space one will pass to xfs_trans_mod_sb(tp,
> XFS_TRANS_SB_FDBLOCKS, ...)
> otherwise the transaction code complains
> about exceeding your reservation.

xfs_trans_mod_sb:

459	case XFS_TRANS_SB_FDBLOCKS:
460		/*
461		 * Track the number of blocks allocated in the
462		 * transaction. Make sure it does not exceed the
463		 * number reserved.
464		 */
465		if (delta < 0) {
466			tp->t_blk_res_used += (uint)-delta;
467			ASSERT(tp->t_blk_res_used <= tp->t_blk_res);
468		}
469		tp->t_fdblocks_delta += delta;
470		if (xfs_sb_version_haslazysbcount(&mp->m_sb))
471			flags &= ~XFS_TRANS_SB_DIRTY;
472		break;

So, delta < 0 (i.e. allocating blocks) requires a transaction
reservation, but freeing blocks (delta > 0) does not.

> > > +	/* I think that since we do read and write to the m_perag
> > > +	 * stuff, we should be holding the lock for the entire walk &
> > > +	 * modify of the fs
> > > +	 */
> >
> > Deadlock warning! holding the m_peraglock in write mode will cause
> > allocation deadlocks if you are not careful as all allocation/free
> > operations take the m_peraglock in read mode. (And yes, growing
> > an active, loaded filesystem can deadlock because of this.)
>
> How can we ensure then that no one modifies the AGs while we walk them?

It's not even modify - how do you prevent any *reference* to the perag
while we are doing this.

> I hoped that we can do it without the per-AG not-avail flag, by just
> holding the perag lock.

The per-AG not-available flag will prevent new allocations, but it
doesn't prevent access to the AGs as we still need access to the AGs
to copy the data and metadata out.

The m_peraglock in write mode will prevent *simultaneous* access to the
per-ag, but the code waiting on a read lock may still try to access the
bit of the perag structure we're about to remove. See, for example,
xfs_bmap_btalloc(). It gets an AG it is supposed to use from somewhere
external, and then before it goes to use the perag structure for that
AG, it waits on the peraglock in read mode.
If that blocks waiting on a shrink, and after the shrink the AG number
is > sb_agcount, we're in a world of pain.....

Note that I'm just using this as an example piece of existing code that
would break badly if we simply realloc the perag array. I could point
you to the filestreams code where there is a perag stream refcount that
we'll need to ensure is zero.

What I'm trying to say is that the perag lock may not give us
sufficient exclusion to be able to shrink the perag array safely.

> > ok, so we have empty ag's here. You might want to check that the
> > inode btree is empty and that the AGI unlinked list is empty.
> I thought that inode btree is empty by virtue of pag->pagi_count == 0.
> Is this not always so?

It should be. Call me paranoid, but with an operation like this I want
to make sure we check and double check that it is safe to proceed.

> Furthermore, also since agi_count == agi_free +
> actual used inodes + number of unlinked inodes, I think we don't need to
> check the unlinked list.

Call me paranoid, but....
I think at minimum there should be debug code that caters to my
paranoia ;)

> > > +
> > > +		freeblks = pag->pagf_freeblks;
> > > +		printk("Usage: %d prealloc, %d flcount\n",
> > > +			XFS_PREALLOC_BLOCKS(mp), pag->pagf_flcount);
> > > +
> > > +		/* Done gathering data, check sizes */
> > > +		usedblks = XFS_PREALLOC_BLOCKS(mp) + pag->pagf_flcount;
> > > +		printk("agno=%d agf_length=%d computed used=%d"
> > > +			" known free=%d\n", agno, aglen, usedblks, freeblks);
> > > +
> > > +		if(usedblks + freeblks != aglen) {
> > > +			printk("agno %d is not free (%d blocks allocated)\n",
> > > +				agno, aglen-usedblks-freeblks);
> > > +			error = XFS_ERROR(ENOSPC);
> > > +			goto error0;
> > > +		}
> > > +		dbdelta -= aglen;
> > > +		printk("will lower with %d\n",
> > > +			aglen - XFS_PREALLOC_BLOCKS(mp));
> > > +		fdbdelta += aglen - XFS_PREALLOC_BLOCKS(mp);
> >
> > Ok, so why not just
> >
> > fdbdelta += mp->m_sb.sb_agblocks - XFS_PREALLOC_BLOCKS(mp);
> Because the last AG can be smaller than the sb_agblocks. It's true
> that this holds only for the last one, but having two cases is uglier
> than just always reading this size from the AG.

But you -EINVAL'd earlier when the shrink size would not leave entire
AGs behind. So the partial last AG is something that won't happen
here.....

> > > +	}
> > > +	/*
> > > +	 * Check that we removed all blocks
> > > +	 */
> > > +	ASSERT(!dbdelta);
> > > +	ASSERT(nagcount < oagcount);
> >
> > Error out, not assert, because at this point we have not changed anything.
> Actually here, dbdelta != 0 or nagcount >= oagcount would be
> programming errors, not conditions that can legitimately occur. The
> asserts could be removed, since the checks before the per-AG for loop
> ensure those conditions cannot arise. But I used them just to be sure.

Ok - I misinterpreted what they were catching.
> > > +	/* Free memory as the number of AG has changed */
> > > +	for (agno = nagcount; agno < oagcount; agno++)
> > > +		if (mp->m_perag[agno].pagb_list)
> > > +			kmem_free(mp->m_perag[agno].pagb_list,
> > > +				sizeof(xfs_perag_busy_t) *
> > > +					XFS_PAGB_NUM_SLOTS);
> > > +
> > > +	mp->m_perag = kmem_realloc(mp->m_perag,
> > > +			sizeof(xfs_perag_t) * nagcount,
> > > +			sizeof(xfs_perag_t) * oagcount,
> > > +			KM_SLEEP);
> >
> > This is not really safe - how do we know if all the users of the
> > higher AGs have gone away? I think we should simply just zero out
> > the unused AGs and don't worry about a realloc().
> The problem that I saw is that if you do shrink+grow+shrink+grow+... you
> will leak a small amount of memory (or corrupt kernel mem allocation?)
> since the growfs code does a realloc from what it thinks is the size of
> m_perag, which it computes solely from the current number of AGs. Should
> we have a size in the mp struct for this and not assume it's the
> agcount?

Yup, another size variable in the mount structure is probably the only
way to fix this.

> > > +	if(in->newblocks < mp->m_sb.sb_dblocks)
> > > +		error = xfs_shrinkfs_data_private(mp, in);
> > > +	else
> > > +		error = xfs_growfs_data_private(mp, in);
> >
> > Hmmm - that's using the one ioctl to do both grow and shrink. I'd
> > prefer a new shrink ioctl rather than changing the behaviour of an
> > existing ioctl.
> Well, I chose this way because I see it as the ioctl that changes the
> data size of the filesystem. It may be lower or higher than the current
> size, but that is not so important :), and with another ioctl there
> would be the need for another tool.

True, put that way I think you're right.

> updated patch (without the separation of the IOCTL and without the
> rework of the perag lock) is attached.

I haven't looked at it yet - I'll try to get to it tomorrow...

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Thu Jun 14 02:14:52 2007
Date: Thu, 14 Jun 2007 19:14:26 +1000
From: David Chinner
To: Ruben Porras
Cc: David Chinner, xfs@oss.sgi.com, cw@f00f.org, iusty@k1024.org
Subject: Re: XFS shrink functionality
Message-ID: <20070614091426.GB86004887@sgi.com>
References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> <20070604084154.GA8273@teal.hq.k1024.org> <1181291033.7510.40.camel@localhost> <20070608101532.GA18788@teal.hq.k1024.org> <20070608151223.GF86004887@sgi.com> <1181810127.6539.13.camel@localhost>
In-Reply-To: <1181810127.6539.13.camel@localhost>

On Thu, Jun 14, 2007 at 10:35:27AM +0200, Ruben Porras wrote:
> > > I took a look at both items since this discussion started. And honestly,
> > > I think 1) is harder than 4), so you're welcome to work on it :) The
> > > points that make it harder is that, per David's suggestion, there needs
> > > to be:
> > > - define two new transaction types
> >
> > one new transaction type:
> >
> > XFS_TRANS_AGF_FLAGS
>
> done
>
> > and an extension to xfs_alloc_log_agf(). That is about all that is
> > needed there.
>
> still to do. Will come after the ioctls.
>
> > See the patch here:
> >
> > http://oss.sgi.com/archives/xfs/2007-04/msg00103.html
> >
> > For an example of a very similar transaction to what is needed
> > (look at xfs_log_sbcount()) and a very similar addition to
> > the AGF (xfs_btreeblks).
>
> > > - define two new ioctls
> >
> > XFS_IOC_ALLOC_ALLOW_AG, parameter xfs_agnumber_t.
> > XFS_IOC_ALLOC_DENY_AG, parameter xfs_agnumber_t.
>
> almost done.

FWIW, I've had second thoughts on this ioctl interface. It's horribly
specific, considering all we are doing is setting or clearing a flag
in an AG. Perhaps a better interface is:

XFS_IOC_GET_AGF_FLAGS
XFS_IOC_SET_AGF_FLAGS

with:

struct xfs_ioc_agflags {
	xfs_agnumber_t	ag;
	__u32		flags;
}

as the parameter structure, and:

#define XFS_AGF_FLAGS_ALLOC_DENY	(1<<0)

> How should I obtain a pointer to an xfs_agf_t from
> inside the ioctls?
>
> I guess that the first step is to get a *bp with xfs_getsb and then an *sbp,
> but, which function/macro gives me the xfs_agf_t pointer? Sorry, I can't
> find the way grepping through the code.

I've attached the quick hack I did when thinking this through
initially. It'll give you an idea of how to do this and a bit more.
FWIW, it was this hack that made me think the above interface is a
better way to go....

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

---
 fs/xfs/linux-2.6/xfs_ioctl.c |   27 +++++++++++
 fs/xfs/xfs_ag.h              |    7 ++
 fs/xfs/xfs_alloc.c           |  103 +++++++++++++++++++++++++++++++++++++++++++
 fs/xfs/xfs_fs.h              |    2
 fs/xfs/xfs_trans.h           |    3 -
 5 files changed, 140 insertions(+), 2 deletions(-)

Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_ioctl.c
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_ioctl.c	2007-06-08 21:34:37.000000000 +1000
+++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_ioctl.c	2007-06-08 22:22:59.305412098 +1000
@@ -899,6 +899,33 @@ xfs_ioctl(
 		return -error;
 	}

+	case XFS_IOC_ALLOC_DENY_AG: {
+		xfs_agnumber_t	in;
+
+		if (!capable(CAP_SYS_ADMIN))
+			return -EPERM;
+
+		if (copy_from_user(&in, arg, sizeof(in)))
+			return -XFS_ERROR(EFAULT);
+
+		error = xfs_alloc_deny_ag(mp, &in);
+		return -error;
+
+	}
+	case XFS_IOC_ALLOC_ALLOW_AG: {
+		xfs_agnumber_t	in;
+
+		if (!capable(CAP_SYS_ADMIN))
+			return -EPERM;
+
+		if (copy_from_user(&in, arg, sizeof(in)))
+			return -XFS_ERROR(EFAULT);
+
+		error = xfs_alloc_allow_ag(mp, &in);
+		return -error;
+
+	}
+
 	case XFS_IOC_FREEZE:
 		if (!capable(CAP_SYS_ADMIN))
 			return -EPERM;
Index: 2.6.x-xfs-new/fs/xfs/xfs_ag.h
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_ag.h	2007-06-08 21:46:28.000000000 +1000
+++ 2.6.x-xfs-new/fs/xfs/xfs_ag.h	2007-06-08 22:09:18.323606142 +1000
@@ -69,6 +69,7 @@ typedef struct xfs_agf {
 	__be32		agf_freeblks;	/* total free blocks */
 	__be32		agf_longest;	/* longest free space */
 	__be32		agf_btreeblks;	/* # of blocks held in AGF btrees */
+	__be32		agf_flags;	/* status flags */
 } xfs_agf_t;

 #define	XFS_AGF_MAGICNUM	0x00000001
@@ -83,9 +84,12 @@ typedef struct xfs_agf {
 #define	XFS_AGF_FREEBLKS	0x00000200
 #define	XFS_AGF_LONGEST		0x00000400
 #define	XFS_AGF_BTREEBLKS	0x00000800
-#define	XFS_AGF_NUM_BITS	12
+#define	XFS_AGF_FLAGS		0x00001000
+#define	XFS_AGF_NUM_BITS	13
 #define	XFS_AGF_ALL_BITS	((1 << XFS_AGF_NUM_BITS) - 1)
+
+
 /* disk block (xfs_daddr_t) in the AG */
 #define XFS_AGF_DADDR(mp)	((xfs_daddr_t)(1 << (mp)->m_sectbb_log))
 #define	XFS_AGF_BLOCK(mp)	XFS_HDR_BLOCK(mp, XFS_AGF_DADDR(mp))
@@ -189,6 +193,7 @@ typedef struct xfs_perag
 	xfs_extlen_t	pagf_freeblks;	/* total free blocks */
 	xfs_extlen_t	pagf_longest;	/* longest free space */
 	__uint32_t	pagf_btreeblks;	/* # of blocks held in AGF btrees */
+	__uint32_t	pagf_flags;	/* status flags for AG */
 	xfs_agino_t	pagi_freecount;	/* number of free inodes */
 	xfs_agino_t	pagi_count;	/* number of allocated inodes */
 	int		pagb_count;	/* pagb slots in use */
Index: 2.6.x-xfs-new/fs/xfs/xfs_alloc.c
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_alloc.c	2007-06-05 22:12:50.000000000 +1000
+++ 2.6.x-xfs-new/fs/xfs/xfs_alloc.c	2007-06-08 23:12:51.256348632 +1000
@@ -2085,6 +2085,7 @@ xfs_alloc_log_agf(
 		offsetof(xfs_agf_t, agf_freeblks),
 		offsetof(xfs_agf_t, agf_longest),
 		offsetof(xfs_agf_t, agf_btreeblks),
+		offsetof(xfs_agf_t, agf_flags),
 		sizeof(xfs_agf_t)
 	};

@@ -2112,6 +2113,107 @@ xfs_alloc_pagf_init(
 	return 0;
 }

+#define XFS_AGFLAG_ALLOC_DENY	1
+STATIC void
+xfs_alloc_set_flag_ag(
+	xfs_trans_t	*tp,
+	xfs_buf_t	*agbp,	/* buffer for a.g. freelist header */
+	xfs_perag_t	*pag,
+	int		flag)
+{
+	xfs_agf_t	*agf;	/* a.g. freespace structure */
+
+	agf = XFS_BUF_TO_AGF(agbp);
+	pag->pagf_flags |= flag;
+	agf->agf_flags = cpu_to_be32(pag->pagf_flags);
+
+	xfs_alloc_log_agf(tp, agbp, XFS_AGF_FLAGS);
+}
+
+STATIC void
+xfs_alloc_clear_flag_ag(
+	xfs_trans_t	*tp,
+	xfs_buf_t	*agbp,	/* buffer for a.g. freelist header */
+	xfs_perag_t	*pag,
+	int		flag)
+{
+	xfs_agf_t	*agf;	/* a.g. freespace structure */
+
+	agf = XFS_BUF_TO_AGF(agbp);
+	pag->pagf_flags &= ~flag;
+	agf->agf_flags = cpu_to_be32(pag->pagf_flags);
+
+	xfs_alloc_log_agf(tp, agbp, XFS_AGF_FLAGS);
+}
+
+int
+xfs_alloc_allow_ag(
+	xfs_mount_t	*mp,
+	xfs_agnumber_t	agno)
+{
+	xfs_perag_t	*pag;
+	xfs_buf_t	*bp;
+	int		error;
+	xfs_trans_t	*tp;
+
+	if (agno >= mp->m_sb.sb_agcount)
+		return -EINVAL;
+
+	tp = xfs_trans_alloc(mp, XFS_TRANS_ALLOC_FLAGS);
+	error = xfs_trans_reserve(tp, 0, mp->m_sb.sb_sectsize + 128, 0, 0,
+				  XFS_DEFAULT_LOG_COUNT);
+	if (error) {
+		xfs_trans_cancel(tp, 0);
+		return error;
+	}
+	error = xfs_alloc_read_agf(mp, tp, agno, 0, &bp);
+	if (error)
+		return error;
+
+	pag = &mp->m_perag[agno];
+	xfs_alloc_clear_flag_ag(tp, bp, pag, XFS_AGFLAG_ALLOC_DENY);
+
+	xfs_trans_set_sync(tp);
+	xfs_trans_commit(tp, 0);
+
+	return 0;
+
+}
+
+int
+xfs_alloc_deny_ag(
+	xfs_mount_t	*mp,
+	xfs_agnumber_t	agno)
+{
+	xfs_perag_t	*pag;
+	xfs_buf_t	*bp;
+	int		error;
+	xfs_trans_t	*tp;
+
+	if (agno >= mp->m_sb.sb_agcount)
+		return -EINVAL;
+
+	tp = xfs_trans_alloc(mp, XFS_TRANS_ALLOC_FLAGS);
+	error = xfs_trans_reserve(tp, 0, mp->m_sb.sb_sectsize + 128, 0, 0,
+				  XFS_DEFAULT_LOG_COUNT);
+	if (error) {
+		xfs_trans_cancel(tp, 0);
+		return error;
+	}
+	error = xfs_alloc_read_agf(mp, tp, agno, 0, &bp);
+	if (error)
+		return error;
+
+	pag = &mp->m_perag[agno];
+	xfs_alloc_set_flag_ag(tp, bp, pag, XFS_AGFLAG_ALLOC_DENY);
+
+	xfs_trans_set_sync(tp);
+	xfs_trans_commit(tp, 0);
+
+	return 0;
+
+}
+
 /*
  * Put the block on the freelist for the allocation group.
  */
@@ -2226,6 +2328,7 @@ xfs_alloc_read_agf(
 	pag->pagf_btreeblks = be32_to_cpu(agf->agf_btreeblks);
 	pag->pagf_flcount = be32_to_cpu(agf->agf_flcount);
 	pag->pagf_longest = be32_to_cpu(agf->agf_longest);
+	pag->pagf_flags = be32_to_cpu(agf->agf_flags);
 	pag->pagf_levels[XFS_BTNUM_BNOi] =
 		be32_to_cpu(agf->agf_levels[XFS_BTNUM_BNOi]);
 	pag->pagf_levels[XFS_BTNUM_CNTi] =
Index: 2.6.x-xfs-new/fs/xfs/xfs_trans.h
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_trans.h	2007-06-08 21:41:32.000000000 +1000
+++ 2.6.x-xfs-new/fs/xfs/xfs_trans.h	2007-06-08 22:50:46.449405162 +1000
@@ -95,7 +95,8 @@ typedef struct xfs_trans_header {
 #define XFS_TRANS_GROWFSRT_FREE		39
 #define XFS_TRANS_SWAPEXT		40
 #define XFS_TRANS_SB_COUNT		41
-#define XFS_TRANS_TYPE_MAX		41
+#define XFS_TRANS_ALLOC_FLAGS		42
+#define XFS_TRANS_TYPE_MAX		42

 /* new transaction types need to be reflected in xfs_logprint(8) */
Index: 2.6.x-xfs-new/fs/xfs/xfs_fs.h
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_fs.h	2007-06-08 21:46:29.000000000 +1000
+++ 2.6.x-xfs-new/fs/xfs/xfs_fs.h	2007-06-08 23:15:31.755284394 +1000
@@ -493,6 +493,8 @@ typedef struct xfs_handle {
 #define XFS_IOC_ATTRMULTI_BY_HANDLE	_IOW ('X', 123, struct xfs_fsop_attrmulti_handlereq)
 #define XFS_IOC_FSGEOMETRY	_IOR ('X', 124, struct xfs_fsop_geom)
 #define XFS_IOC_GOINGDOWN	_IOR ('X', 125, __uint32_t)
+#define XFS_IOC_ALLOC_DENY_AG	_IOR ('X', 126, __uint32_t)
+#define XFS_IOC_ALLOC_ALLOW_AG	_IOR ('X', 127, __uint32_t)
 /*	XFS_IOC_GETFSUUID ---------- deprecated 140	 */

From owner-xfs@oss.sgi.com Thu Jun 14 02:15:03 2007
Date: Thu, 14 Jun 2007 03:14:58 -0600
From: Andreas Dilger
To: David Chinner
Cc: "Amit K. Arora", Suparna Bhattacharya, torvalds@osdl.org, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, xfs@oss.sgi.com, cmm@us.ibm.com
Subject: Re: [PATCH 1/5] fallocate() implementation in i86, x86_64 and powerpc
Message-ID: <20070614091458.GH5181@schatzie.adilger.int>
References: <20070426175056.GA25321@amitarora.in.ibm.com> <20070426180332.GA7209@amitarora.in.ibm.com> <20070509160102.GA30745@amitarora.in.ibm.com> <20070510005926.GT85884050@sgi.com> <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com>
In-Reply-To: <20070613235217.GS86004887@sgi.com>

On Jun 14, 2007 09:52 +1000, David Chinner wrote:
> .B FA_PREALLOCATE
> provides the same functionality as
> .B FA_ALLOCATE
> except it does not ever change the file size. This allows allocation
> of zero blocks beyond the end of file and is useful for optimising
> append workloads.
> .TP
> .B FA_DEALLOCATE
> removes the underlying disk space with the given range. The disk space
> shall be removed regardless of its contents so both allocated space
> from
> .B FA_ALLOCATE
> and
> .B FA_PREALLOCATE
> as well as from
> .B write(3)
> will be removed.
> .B FA_DEALLOCATE
> shall never remove disk blocks outside the range specified.

So this is essentially the same as "punch". There doesn't seem to be
a mechanism to only unallocate unused FA_{PRE,}ALLOCATE space at the
end.

> .B FA_DEALLOCATE
> shall never change the file size. If changing the file size
> is required when deallocating blocks from an offset to end
> of file (or beyond end of file) is required,
> .B ftruncate64(3)
> should be used.

This also seems to be a bit of a wart, since it isn't a natural converse
of either of the above functions. How about having two modes,
similar to FA_ALLOCATE and FA_PREALLOCATE? Say, FA_PUNCH (which
would be as you describe here - deletes all data in the specified
range, changing the file size if it overlaps EOF), and FA_DEALLOCATE,
which only deallocates unused FA_{PRE,}ALLOCATE space?

We might also consider making @mode be a mask instead of an enumeration:

FA_FL_DEALLOC	0x01 (default allocate)
FA_FL_KEEP_SIZE	0x02 (default extend/shrink size)
FA_FL_DEL_DATA	0x04 (default keep written data on DEALLOC)

We might then build FA_ALLOCATE and FA_DEALLOCATE out of these flags
without making the interface sub-optimal.

I suppose it might be a bit late in the game to add a "goal"
parameter and e.g.
FA_FL_REQUIRE_GOAL, FA_FL_NEAR_GOAL, etc to make the API more suitable
for XFS? The goal could be a single __u64, or a struct with e.g. __u64
byte offset (possibly also __u32 lun like in FIEMAP). I guess the one
potential limitation here is the number of function parameters on some
architectures.

> .B ENOSPC
> There is not enough space left on the device containing the file
> referred to by
> .IR fd.

Should probably say whether space is removed on failure or not. In some
(primitive) implementations it might no longer be possible to
distinguish between unwritten extents and zero-filled blocks, though at
this point DEALLOC of zero-filled blocks might not be harmful either.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.

From owner-xfs@oss.sgi.com Thu Jun 14 05:04:50 2007
Date: Thu, 14 Jun 2007 22:04:13 +1000
From: David Chinner
To: David Chinner, "Amit K. Arora", Suparna Bhattacharya, torvalds@osdl.org, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, xfs@oss.sgi.com, cmm@us.ibm.com
Subject: Re: [PATCH 1/5] fallocate() implementation in i86, x86_64 and powerpc
Message-ID: <20070614120413.GD86004887@sgi.com>
In-Reply-To: <20070614091458.GH5181@schatzie.adilger.int>

On Thu, Jun 14, 2007 at 03:14:58AM -0600, Andreas Dilger wrote:
> On Jun 14, 2007 09:52 +1000, David Chinner wrote:
> > .B FA_PREALLOCATE
> > provides the same functionality as
> > .B FA_ALLOCATE
> > except it does not ever change the file size. This allows allocation
> > of zero blocks beyond the end of file and is useful for optimising
> > append workloads.
> > .TP
> > .B FA_DEALLOCATE
> > removes the underlying disk space with the given range. The disk space
> > shall be removed regardless of its contents so both allocated space
> > from
> > .B FA_ALLOCATE
> > and
> > .B FA_PREALLOCATE
> > as well as from
> > .B write(3)
> > will be removed.
> > .B FA_DEALLOCATE
> > shall never remove disk blocks outside the range specified.
>
> So this is essentially the same as "punch".
Depends on your definition of "punch". > There doesn't seem to be > a mechanism to only unallocate unused FA_{PRE,}ALLOCATE space at the > end. ftruncate() > > B FA_DEALLOCATE > > shall never change the file size. If changing the file size > > is required when deallocating blocks from an offset to end > > of file (or beyond end of file), > > B ftruncate64(3) > > should be used. > > This also seems to be a bit of a wart, since it isn't a natural converse > of either of the above functions. How about having two modes, > similar to FA_ALLOCATE and FA_PREALLOCATE? whatever. > Say, FA_PUNCH (which > would be as you describe here - deletes all data in the specified > range changing the file size if it overlaps EOF, Punch means different things to different people. To me (and probably most XFS aware ppl) punch implies no change to the file size. i.e. anyone currently using XFS_IOC_UNRESVSP will expect punching holes to leave the file size unchanged. This is the behaviour I described for FA_DEALLOCATE. > and FA_DEALLOCATE, > which only deallocates unused FA_{PRE,}ALLOCATE space? That's an "unwritten-to-hole" extent conversion. Is that really useful for anything? That's easily implemented with FIEMAP and FA_DEALLOCATE. Anyway, because we can't agree on a single pair of flags: FA_ALLOCATE == posix_fallocate() FA_DEALLOCATE == unwritten-to-hole ??? FA_RESV_SPACE == XFS_IOC_RESVSP64 FA_UNRESV_SPACE == XFS_IOC_UNRESVSP64 > We might also consider making @mode be a mask instead of an enumeration: > > FA_FL_DEALLOC 0x01 (default allocate) > FA_FL_KEEP_SIZE 0x02 (default extend/shrink size) > FA_FL_DEL_DATA 0x04 (default keep written data on DEALLOC) i.e: #define FA_ALLOCATE 0 #define FA_DEALLOCATE FA_FL_DEALLOC #define FA_RESV_SPACE FA_FL_KEEP_SIZE #define FA_UNRESV_SPACE FA_FL_DEALLOC | FA_FL_KEEP_SIZE | FA_FL_DEL_DATA > I suppose it might be a bit late in the game to add a "goal" > parameter and e.g.
FA_FL_REQUIRE_GOAL, FA_FL_NEAR_GOAL, etc to make > the API more suitable for XFS? It would suffice for the simpler operations, I think, but we'll rapidly run out of flags and we'll still need another interface for doing complex stuff..... > The goal could be a single __u64, or > a struct with e.g. __u64 byte offset (possibly also __u32 lun like > in FIEMAP). I guess the one potential limitation here is the > number of function parameters on some architectures. To be useful it needs to be __u64. > > B ENOSPC > > There is not enough space left on the device containing the file > > referred to by > > IR fd. > > Should probably say whether space is removed on failure or not. In Right. I'd say on error you need to FA_DEALLOCATE to ensure any space allocated was freed back up. That way the error handling in the allocate functions is much simpler (i.e. no need to undo there). Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 14 07:29:37 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 07:29:40 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=5.0 tests=AWL,BAYES_60,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5EETaWt029636 for ; Thu, 14 Jun 2007 07:29:37 -0700 Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 34E2C1806631F; Thu, 14 Jun 2007 09:29:36 -0500 (CDT) Message-ID: <467150D3.7090109@sandeen.net> Date: Thu, 14 Jun 2007 09:29:39 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.0 (Macintosh/20070326) MIME-Version: 1.0 To: Barry Naujok CC: "xfs@oss.sgi.com" , xfs-dev Subject: Re: Review: dmapi-devel RPM
should require xfsprogs-devel RPM References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-15 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11786 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Barry Naujok wrote: > As per the subject (and also mentioned in the description): > > --- a/dmapi/build/rpm/dmapi.spec.in 2007-06-14 15:02:13.000000000 +1000 > +++ b/dmapi/build/rpm/dmapi.spec.in 2007-06-14 14:47:31.290063558 +1000 > @@ -23,7 +23,7 @@ > %package devel > Summary: Data Management API static libraries and headers. > Group: Development/Libraries > -Requires: @pkg_name@ >= 2.0.4 > +Requires: @pkg_name@ >= 2.0.4 xfsprogs-devel > > %description devel > dmapi-devel contains the libraries and header files needed to > > seems sane to me. Just a header issue I guess?
-eric From owner-xfs@oss.sgi.com Thu Jun 14 08:10:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 08:10:58 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.5 required=5.0 tests=AWL,BAYES_50,RCVD_ILLEGAL_IP autolearn=no version=3.2.0-pre1-r499012 Received: from jericho.provo.novell.com (jericho.provo.novell.com [137.65.248.124]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5EFApWt006675 for ; Thu, 14 Jun 2007 08:10:52 -0700 Received: from tj-suse.dyndns.org ([164.99.192.147]) by jericho.provo.novell.com with ESMTP; Thu, 14 Jun 2007 09:10:35 -0600 Received: from htj.dyndns.org (unknown [192.168.2.1]) by tj-suse.dyndns.org (Postfix) with ESMTP id 7C2BA1E25CE; Fri, 15 Jun 2007 00:19:34 +0900 (KST) Received: from [127.0.1.1] (htj.dyndns.org [127.0.1.1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: tj@htj.dyndns.org) by htj.dyndns.org (Postfix) with ESMTP id 448BE23D4B91; Fri, 15 Jun 2007 00:10:30 +0900 (KST) Message-ID: <46715A66.8030806@suse.de> Date: Fri, 15 Jun 2007 00:10:30 +0900 From: Tejun Heo User-Agent: Icedove 1.5.0.10 (X11/20070307) MIME-Version: 1.0 To: Tejun Heo CC: "Rafael J. Wysocki" , Linus Torvalds , David Greaves , David Chinner , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik Subject: Re: 2.6.22-rc3 hibernate(?) 
fails totally - regression (xfs on raid6) References: <200706020122.49989.rjw@sisk.pl> <46706968.7000703@dgreaves.com> <200706140115.58733.rjw@sisk.pl> <46714ECF.8080203@gmail.com> In-Reply-To: <46714ECF.8080203@gmail.com> X-Enigmail-Version: 0.94.2.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11787 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: teheo@suse.de Precedence: bulk X-list: xfs Tejun Heo wrote: > David, do you store the hibernation image on the RAID-6 array? Can you > post the captured kernel log when it locks up? Never mind. Just succeeded to reproduce it here. It definitely has something to do with the raid code. ext3 on raid6 is showing the same problem. I'll report back as soon as I find out more. -- tejun From owner-xfs@oss.sgi.com Thu Jun 14 08:20:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 08:20:08 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.6 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_33, J_CHICKENPOX_62 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5EFK1Wt012426 for ; Thu, 14 Jun 2007 08:20:02 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id BD6BBE6CC4; Thu, 14 Jun 2007 16:19:58 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id 1ZPJvTXRfrqA; Thu, 14 Jun 2007 16:16:44 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 53A1BE6E89; Thu, 14 
Jun 2007 16:19:27 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1Hyr6e-0000G0-O8; Thu, 14 Jun 2007 16:19:28 +0100 Message-ID: <46715C80.7010402@dgreaves.com> Date: Thu, 14 Jun 2007 16:19:28 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: Tejun Heo Cc: "Rafael J. Wysocki" , Linus Torvalds , David Chinner , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik Subject: Re: 2.6.22-rc3 hibernate(?) fails totally - regression (xfs on raid6) References: <200706020122.49989.rjw@sisk.pl> <46706968.7000703@dgreaves.com> <200706140115.58733.rjw@sisk.pl> <46714ECF.8080203@gmail.com> In-Reply-To: <46714ECF.8080203@gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11788 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs Tejun Heo wrote: > They're waiting for the commands they issued to complete. ata_aux is > trying to revalidate the scsi device after libata EH finished waking up > the port and hibernate is trying to resume scsi disk device. ata_aux is > issuing either TEST UNIT READY or START STOP. hibernate is issuing > START STOP. > > This can be caused by one of the followings. > > 1. SCSI EH thread (ATA EH runs off it) for the SCSI device hasn't > finished yet. All commands are deferred while EH is in progress. > > 2. request_queue is stuck - somehow somebody forgot to kick the queue at > some point. > > 3. command is stuck somewhere in SCSI/ATA land. > > #1 doesn't seem to be the case as all scsi_eh threads seems idle. I'm > looking at the code but can't find anything which could cause #2 or #3. 
> Also, these code paths are traveled really frequently. > > I'm also trying to reproduce the problem here with xfs over RAID-6 array > but haven't been successful yet. > > David, do you store the hibernation image on the RAID-6 array? No, swap is on a pata disk. > Can you post the captured kernel log when it locks up? Sure... this was still on the serial terminal screen from the sysrq-t trace from this morning: [run hibernate script here] swsusp: Basic memory bitmaps created Stopping tasks ... done. Shrinking memory... done (0 pages freed) Freed 0 kbytes in 0.04 seconds (0.00 MB/s) sd 5:0:0:0: [sdf] Synchronizing SCSI cache sd 4:0:0:0: [sde] Synchronizing SCSI cache sd 3:0:0:0: [sdd] Synchronizing SCSI cache sd 2:0:0:0: [sdc] Synchronizing SCSI cache sd 1:0:0:0: [sdb] Synchronizing SCSI cache sd 0:0:0:0: [sda] Synchronizing SCSI cache pnp: Device 00:09 disabled. pnp: Device 00:08 activated. pnp: Device 00:09 activated. pnp: Failed to activate device 00:0a. pnp: Failed to activate device 00:0b. 
ATA: abnormal status 0x7F on port 0x0001a407 ATA: abnormal status 0x7F on port 0x0001a407 ATA: abnormal status 0x7F on port 0x0001b007 ATA: abnormal status 0x7F on port 0x0001b007 ata5.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata5.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata5.00: configured for UDMA/133 sd 0:0:0:0: [sda] Starting disk ata6.00: ata_hpa_resize 1: sectors = 781422768, hpa_sectors = 781422768 sd 1:0:0:0: [sdb] Starting disk sd 2:0:0:0: [sdc] Starting disk ata6.00: ata_hpa_resize 1: sectors = 781422768, hpa_sectors = 781422768 ata6.00: configured for UDMA/133 sd 3:0:0:0: [sdd] Starting disk sd 4:0:0:0: [sde] Starting disk sd 5:0:0:0: [sdf] Starting disk sd 4:0:0:0: [sde] 490234752 512-byte hardware sectors (251000 MB) sd 4:0:0:0: [sde] Write Protect is off sd 4:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 5:0:0:0: [sdf] 781422768 512-byte hardware sectors (400088 MB) sd 5:0:0:0: [sdf] Write Protect is off sd 5:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Saving image data pages (36338 pages) ... 19%<6>skge eth0: Link is up at 1000 Mbps, full duplex, flow control both done Wrote 145352 kbytes in 8.49 seconds (17.12 MB/s) S| md: stopping all md devices. md: md0 still in use. 
sd 5:0:0:0: [sdf] Synchronizing SCSI cache sd 5:0:0:0: [sdf] Stopping disk sd 4:0:0:0: [sde] Synchronizing SCSI cache sd 4:0:0:0: [sde] Stopping disk sd 3:0:0:0: [sdd] Synchronizing SCSI cache sd 3:0:0:0: [sdd] Stopping disk sd 2:0:0:0: [sdc] Synchronizing SCSI cache sd 2:0:0:0: [sdc] Stopping disk sd 1:0:0:0: [sdb] Synchronizing SCSI cache sd 1:0:0:0: [sdb] Stopping disk sd 0:0:0:0: [sda] Synchronizing SCSI cache sd 0:0:0:0: [sda] Stopping disk Shutdown: hdb Shutdown: hda ACPI: PCI interrupt for device 0000:00:09.0 disabled [power off/on] Linux version 2.6.21-g9666f400-dirty (root@cu.dgreaves.com) (gcc version 3.3.5 (Debian 1:3.3.5-13)) #23 Wed Jun 13 22:51:26 BST 2007 BIOS-provided physical RAM map: BIOS-e820: 0000000000000000 - 000000000009c400 (usable) BIOS-e820: 000000000009c400 - 00000000000a0000 (reserved) BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved) BIOS-e820: 0000000000100000 - 000000003fffc000 (usable) BIOS-e820: 000000003fffc000 - 000000003ffff000 (ACPI data) BIOS-e820: 000000003ffff000 - 0000000040000000 (ACPI NVS) BIOS-e820: 00000000fec00000 - 00000000fec01000 (reserved) BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved) BIOS-e820: 00000000ffff0000 - 0000000100000000 (reserved) 127MB HIGHMEM available. 896MB LOWMEM available. Zone PFN ranges: DMA 0 -> 4096 Normal 4096 -> 229376 HighMem 229376 -> 262140 early_node_map[1] active PFN ranges 0: 0 -> 262140 DMI 2.3 present. 
ACPI: RSDP 000F62A0, 0014 (r0 ASUS ) ACPI: RSDT 3FFFC000, 0030 (r1 ASUS A7V600 42302E31 MSFT 31313031) ACPI: FACP 3FFFC0B2, 0074 (r1 ASUS A7V600 42302E31 MSFT 31313031) ACPI: DSDT 3FFFC126, 2C4F (r1 ASUS A7V600 1000 MSFT 100000B) ACPI: FACS 3FFFF000, 0040 ACPI: BOOT 3FFFC030, 0028 (r1 ASUS A7V600 42302E31 MSFT 31313031) ACPI: APIC 3FFFC058, 005A (r1 ASUS A7V600 42302E31 MSFT 31313031) ACPI: PM-Timer IO Port: 0xe408 ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled) Processor #0 6:10 APIC version 16 ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0]) IOAPIC[0]: apic_id 2, version 3, address 0xfec00000, GSI 0-23 ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl edge) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level) Enabling APIC mode: Flat. Using 1 I/O APICs Using ACPI (MADT) for SMP configuration information Allocating PCI resources starting at 50000000 (gap: 40000000:bec00000) Built 1 zonelists. Total pages: 260093 Kernel command line: root=/dev/hda2 ro log_buf_len=128k console=tty0 console=ttyS0,115200 log_buf_len: 131072 Enabling fast FPU save and restore... done. Enabling unmasked SIMD FPU exception support... done. Initializing CPU#0 PID hash table entries: 4096 (order: 12, 16384 bytes) Detected 1999.872 MHz processor. Console: colour VGA+ 80x25 Dentry cache hash table entries: 131072 (order: 7, 524288 bytes) Inode-cache hash table entries: 65536 (order: 6, 262144 bytes) Memory: 1034872k/1048560k available (2459k kernel code, 13036k reserved, 915k data, 196k init, 131056k highmem) virtual kernel memory layout: fixmap : 0xfffaa000 - 0xfffff000 ( 340 kB) pkmap : 0xff800000 - 0xffc00000 (4096 kB) vmalloc : 0xf8800000 - 0xff7fe000 ( 111 MB) lowmem : 0xc0000000 - 0xf8000000 ( 896 MB) .init : 0xc044e000 - 0xc047f000 ( 196 kB) .data : 0xc0366dc7 - 0xc044bb90 ( 915 kB) .text : 0xc0100000 - 0xc0366dc7 (2459 kB) Checking if this processor honours the WP bit even in supervisor mode... Ok. 
Calibrating delay using timer specific routine.. 4003.08 BogoMIPS (lpj=8006174) Mount-cache hash table entries: 512 CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line) CPU: L2 Cache: 512K (64 bytes/line) Intel machine check architecture supported. Intel machine check reporting enabled on CPU#0. Compat vDSO mapped to ffffe000. CPU: AMD Athlon(TM) MP stepping 00 Checking 'hlt' instruction... OK. ACPI: Core revision 20070126 ENABLING IO-APIC IRQs ..TIMER: vector=0x31 apic1=0 pin1=2 apic2=-1 pin2=-1 NET: Registered protocol family 16 ACPI: bus type pci registered PCI: PCI BIOS revision 2.10 entry at 0xf1970, last bus=1 PCI: Using configuration type 1 Setting up standard PCI resources ACPI: Interpreter enabled ACPI: (supports S0 S1 S4 S5) ACPI: Using IOAPIC for interrupt routing ACPI: PCI Root Bridge [PCI0] (0000:00) PCI: enabled onboard AC97/MC97 devices ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 9 10 *11 12) ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 9 10 11 12) *0, disabled. ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 7 9 10 11 12) *0, disabled. ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 7 9 10 11 12) *0, disabled. ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 *5 6 7 9 10 11 12) ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 *5 6 7 9 10 11 12) ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 *6 7 9 10 11 12) ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 7 9 10 11 12) *15, disabled. Linux Plug and Play Support v0.97 (c) Adam Belay pnp: PnP ACPI init ACPI: bus type pnp registered pnp: PnP ACPI: found 14 devices ACPI: ACPI bus type pnp unregistered SCSI subsystem initialized PCI: Using ACPI for IRQ routing PCI: If a device doesn't work, try "pci=routeirq". 
If it helps, post a report pnp: 00:00: iomem range 0x0-0x9ffff could not be reserved pnp: 00:00: iomem range 0xf0000-0xfffff could not be reserved pnp: 00:00: iomem range 0x100000-0x3fffffff could not be reserved pnp: 00:00: iomem range 0xfec00000-0xfec000ff could not be reserved pnp: 00:02: ioport range 0xe400-0xe47f has been reserved pnp: 00:02: ioport range 0xe800-0xe81f has been reserved pnp: 00:02: iomem range 0xfff80000-0xffffffff could not be reserved pnp: 00:02: iomem range 0xffb80000-0xffbfffff has been reserved pnp: 00:0d: ioport range 0x290-0x297 has been reserved pnp: 00:0d: ioport range 0x370-0x375 has been reserved Time: tsc clocksource has been installed. PCI: Bridge: 0000:00:01.0 IO window: disabled. MEM window: disabled. PREFETCH window: disabled. NET: Registered protocol family 2 IP route cache hash table entries: 32768 (order: 5, 131072 bytes) TCP established hash table entries: 131072 (order: 8, 1048576 bytes) TCP bind hash table entries: 65536 (order: 6, 262144 bytes) TCP: Hash tables configured (established 131072 bind 65536) TCP reno registered Simple Boot Flag at 0x3a set to 0x1 Machine check exception polling timer started. highmem bounce pool size: 64 pages SGI XFS with ACLs, no debug enabled io scheduler noop registered io scheduler anticipatory registered (default) io scheduler deadline registered io scheduler cfq registered PCI: Bypassing VIA 8237 APIC De-Assert Message atyfb: using auxiliary register aperture atyfb: 3D RAGE II+ (Mach64 GU) [0x4755 rev 0x9a] atyfb: Mach64 BIOS is located at c0000, mapped at c00c0000. 
atyfb: BIOS frequency table: atyfb: PCLK_min_freq 926, PCLK_max_freq 22216, ref_freq 1432, ref_divider 33 atyfb: MCLK_pwd 4200, MCLK_max_freq 6000, XCLK_max_freq 6000, SCLK_freq 5000 atyfb: 4M EDO, 14.31818 MHz XTAL, 222 MHz PLL, 60 Mhz MCLK, 60 MHz XCLK Console: switching to colour frame buffer device 80x30 atyfb: fb0: ATY Mach64 frame buffer device on PCI input: Power Button (FF) as /class/input/input0 ACPI: Power Button (FF) [PWRF] input: Power Button (CM) as /class/input/input1 ACPI: Power Button (CM) [PWRB] Serial: 8250/16550 driver $Revision: 1.90 $ 4 ports, IRQ sharing disabled 00:08: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A 00:09: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A PCI: Enabling device 0000:00:09.0 (0014 -> 0017) ACPI: PCI Interrupt 0000:00:09.0[A] -> GSI 18 (level, low) -> IRQ 16 skge 1.11 addr 0xf6000000 irq 16 chip Yukon rev 1 skge eth0: addr 00:0c:6e:f6:47:ee netconsole: not configured, aborting Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2 ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx VP_IDE: IDE controller at PCI slot 0000:00:0f.1 ACPI: PCI Interrupt 0000:00:0f.1[A] -> GSI 20 (level, low) -> IRQ 17 VP_IDE: chipset revision 6 VP_IDE: not 100% native mode: will probe irqs later VP_IDE: VIA vt8237 (rev 00) IDE UDMA133 controller on pci0000:00:0f.1 ide0: BM-DMA at 0x9000-0x9007, BIOS settings: hda:DMA, hdb:DMA ide1: BM-DMA at 0x9008-0x900f, BIOS settings: hdc:pio, hdd:DMA Switched to high resolution mode on CPU 0 hda: ST320420A, ATA DISK drive hdb: Maxtor 5A300J0, ATA DISK drive ide0 at 0x1f0-0x1f7,0x3f6 on irq 14 hdd: PLEXTOR CD-R PX-W2410A, ATAPI CD/DVD-ROM drive ide1 at 0x170-0x177,0x376 on irq 15 hda: max request size: 128KiB hda: 39851760 sectors (20404 MB) w/2048KiB Cache, CHS=39535/16/63, UDMA(66) hda: cache flushes not supported hda: hda1 hda2 hda3 hdb: max request size: 512KiB hdb: 585940320 sectors (300001 MB) w/2048KiB Cache, CHS=36473/255/63, UDMA(133) hdb: cache flushes supported hdb: hdb1 hdb2 
hdd: ATAPI 40X CD-ROM CD-R/RW drive, 4096kB Cache, UDMA(33) Uniform CD-ROM driver Revision: 3.20 ACPI: PCI Interrupt 0000:00:0d.0[A] -> GSI 16 (level, low) -> IRQ 18 scsi0 : sata_promise scsi1 : sata_promise scsi2 : sata_promise scsi3 : sata_promise ata1: SATA max UDMA/133 cmd 0xf880a200 ctl 0xf880a238 bmdma 0x00000000 irq 0 ata2: SATA max UDMA/133 cmd 0xf880a280 ctl 0xf880a2b8 bmdma 0x00000000 irq 0 ata3: SATA max UDMA/133 cmd 0xf880a300 ctl 0xf880a338 bmdma 0x00000000 irq 0 ata4: SATA max UDMA/133 cmd 0xf880a380 ctl 0xf880a3b8 bmdma 0x00000000 irq 0 ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) ata1.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata1.00: ATA-7: Maxtor 6B250S0, BANC19J0, max UDMA/133 ata1.00: 490234752 sectors, multi 0: LBA48 NCQ (depth 0/32) ata1.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata1.00: configured for UDMA/133 ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) ata2.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata2.00: ATA-7: Maxtor 7Y250M0, YAR51EW0, max UDMA/133 ata2.00: 490234752 sectors, multi 0: LBA48 ata2.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata2.00: configured for UDMA/133 ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) ata3.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata3.00: ATA-7: Maxtor 7Y250M0, YAR51EW0, max UDMA/133 ata3.00: 490234752 sectors, multi 0: LBA48 ata3.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata3.00: configured for UDMA/133 ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 300) ata4.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata4.00: ATA-7: Maxtor 6B250S0, BANC1980, max UDMA/133 ata4.00: 490234752 sectors, multi 0: LBA48 NCQ (depth 0/32) ata4.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata4.00: configured for UDMA/133 scsi 0:0:0:0: Direct-Access ATA Maxtor 6B250S0 BANC PQ: 0 ANSI: 5 sd 0:0:0:0: [sda] 
490234752 512-byte hardware sectors (251000 MB) sd 0:0:0:0: [sda] Write Protect is off sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 0:0:0:0: [sda] 490234752 512-byte hardware sectors (251000 MB) sd 0:0:0:0: [sda] Write Protect is off sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sda: sda1 sd 0:0:0:0: [sda] Attached SCSI disk scsi 1:0:0:0: Direct-Access ATA Maxtor 7Y250M0 YAR5 PQ: 0 ANSI: 5 sd 1:0:0:0: [sdb] 490234752 512-byte hardware sectors (251000 MB) sd 1:0:0:0: [sdb] Write Protect is off sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 1:0:0:0: [sdb] 490234752 512-byte hardware sectors (251000 MB) sd 1:0:0:0: [sdb] Write Protect is off sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sdb: sdb1 sd 1:0:0:0: [sdb] Attached SCSI disk scsi 2:0:0:0: Direct-Access ATA Maxtor 7Y250M0 YAR5 PQ: 0 ANSI: 5 sd 2:0:0:0: [sdc] 490234752 512-byte hardware sectors (251000 MB) sd 2:0:0:0: [sdc] Write Protect is off sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 2:0:0:0: [sdc] 490234752 512-byte hardware sectors (251000 MB) sd 2:0:0:0: [sdc] Write Protect is off sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sdc: sdc1 sd 2:0:0:0: [sdc] Attached SCSI disk scsi 3:0:0:0: Direct-Access ATA Maxtor 6B250S0 BANC PQ: 0 ANSI: 5 sd 3:0:0:0: [sdd] 490234752 512-byte hardware sectors (251000 MB) sd 3:0:0:0: [sdd] Write Protect is off sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 3:0:0:0: [sdd] 490234752 512-byte hardware sectors (251000 MB) sd 3:0:0:0: [sdd] Write Protect is off sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sdd: sdd1 sd 3:0:0:0: [sdd] Attached SCSI disk ACPI: PCI Interrupt 0000:00:0f.0[B] -> GSI 20 (level, low) -> IRQ 17 sata_via 
0000:00:0f.0: routed to hard irq line 0 scsi4 : sata_via scsi5 : sata_via ata5: SATA max UDMA/133 cmd 0x0001b000 ctl 0x0001a802 bmdma 0x00019800 irq 0 ata6: SATA max UDMA/133 cmd 0x0001a400 ctl 0x0001a002 bmdma 0x00019808 irq 0 ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 300) ATA: abnormal status 0x7F on port 0x0001b007 ATA: abnormal status 0x7F on port 0x0001b007 ata5.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata5.00: ATA-7: Maxtor 7B250S0, BANC1980, max UDMA/133 ata5.00: 490234752 sectors, multi 16: LBA48 NCQ (depth 0/32) ata5.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata5.00: configured for UDMA/133 ata6: SATA link up 1.5 Gbps (SStatus 113 SControl 300) ATA: abnormal status 0x7F on port 0x0001a407 ATA: abnormal status 0x7F on port 0x0001a407 ata6.00: ata_hpa_resize 1: sectors = 781422768, hpa_sectors = 781422768 ata6.00: ATA-7: ST3400620AS, 3.AAK, max UDMA/133 ata6.00: 781422768 sectors, multi 16: LBA48 NCQ (depth 0/32) ata6.00: ata_hpa_resize 1: sectors = 781422768, hpa_sectors = 781422768 ata6.00: configured for UDMA/133 scsi 4:0:0:0: Direct-Access ATA Maxtor 7B250S0 BANC PQ: 0 ANSI: 5 sd 4:0:0:0: [sde] 490234752 512-byte hardware sectors (251000 MB) sd 4:0:0:0: [sde] Write Protect is off sd 4:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 4:0:0:0: [sde] 490234752 512-byte hardware sectors (251000 MB) sd 4:0:0:0: [sde] Write Protect is off sd 4:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sde: sde1 sd 4:0:0:0: [sde] Attached SCSI disk scsi 5:0:0:0: Direct-Access ATA ST3400620AS 3.AA PQ: 0 ANSI: 5 sd 5:0:0:0: [sdf] 781422768 512-byte hardware sectors (400088 MB) sd 5:0:0:0: [sdf] Write Protect is off sd 5:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 5:0:0:0: [sdf] 781422768 512-byte hardware sectors (400088 MB) sd 5:0:0:0: [sdf] Write Protect is off sd 5:0:0:0: [sdf] Write cache: enabled, 
read cache: enabled, doesn't support DPO or FUA sdf: sdf1 sdf2 sd 5:0:0:0: [sdf] Attached SCSI disk PNP: PS/2 Controller [PNP0303:PS2K,PNP0f03:PS2M] at 0x60,0x64 irq 1,12 serio: i8042 KBD port at 0x60,0x64 irq 1 serio: i8042 AUX port at 0x60,0x64 irq 12 mice: PS/2 mouse device common for all mice input: AT Translated Set 2 keyboard as /class/input/input2 md: linear personality registered for level -1 md: raid0 personality registered for level 0 md: raid1 personality registered for level 1 raid6: int32x1 636 MB/s raid6: int32x2 787 MB/s raid6: int32x4 627 MB/s raid6: int32x8 606 MB/s raid6: mmxx1 1583 MB/s raid6: mmxx2 2557 MB/s raid6: sse1x1 1592 MB/s raid6: sse1x2 2631 MB/s raid6: using algorithm sse1x2 (2631 MB/s) md: raid6 personality registered for level 6 md: raid5 personality registered for level 5 md: raid4 personality registered for level 4 raid5: automatically using best checksumming function: pIII_sse pIII_sse : 4289.000 MB/sec raid5: using function: pIII_sse (4289.000 MB/sec) device-mapper: ioctl: 4.11.0-ioctl (2006-10-12) initialised: dm-devel@redhat.com TCP cubic registered Using IPI Shortcut mode swsusp: Basic memory bitmaps created Stopping tasks ... <6>input: ImPS/2 Logitech Wheel Mouse as /class/input/input3 done. Loading image data pages (36338 pages) ... done Read 145352 kbytes in 8.34 seconds (17.42 MB/s) sd 5:0:0:0: [sdf] Synchronizing SCSI cache sd 4:0:0:0: [sde] Synchronizing SCSI cache sd 3:0:0:0: [sdd] Synchronizing SCSI cache sd 2:0:0:0: [sdc] Synchronizing SCSI cache sd 1:0:0:0: [sdb] Synchronizing SCSI cache sd 0:0:0:0: [sda] Synchronizing SCSI cache pnp: Device 00:09 disabled. pnp: Device 00:08 activated. pnp: Device 00:09 activated. pnp: Failed to activate device 00:0a. pnp: Failed to activate device 00:0b. 
sd 0:0:0:0: [sda] Starting disk sd 1:0:0:0: [sdb] Starting disk sd 2:0:0:0: [sdc] Starting disk sd 3:0:0:0: [sdd] Starting disk sd 4:0:0:0: [sde] Starting disk ATA: abnormal status 0x7F on port 0x0001b007 ATA: abnormal status 0x7F on port 0x0001b007 ATA: abnormal status 0x7F on port 0x0001a407 ATA: abnormal status 0x7F on port 0x0001a407 ata5.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata5.00: ata_hpa_resize 1: sectors = 490234752, hpa_sectors = 490234752 ata5.00: configured for UDMA/133 ata6.00: ata_hpa_resize 1: sectors = 781422768, hpa_sectors = 781422768 ata6.00: ata_hpa_resize 1: sectors = 781422768, hpa_sectors = 781422768 ata6.00: configured for UDMA/133 Clocksource tsc unstable (delta = 4327743507262 ns) Time: acpi_pm clocksource has been installed. skge eth0: Link is up at 1000 Mbps, full duplex, flow control both From owner-xfs@oss.sgi.com Thu Jun 14 10:49:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 10:49:57 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.5 required=5.0 tests=AWL,BAYES_50,J_CHICKENPOX_33, J_CHICKENPOX_43,J_CHICKENPOX_44,J_CHICKENPOX_62,J_CHICKENPOX_66 autolearn=no version=3.2.0-pre1-r499012 Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5EHnoWt012880 for ; Thu, 14 Jun 2007 10:49:51 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1HytAn-0003xg-CE; Thu, 14 Jun 2007 18:31:53 +0100 Date: Thu, 14 Jun 2007 18:31:53 +0100 From: Christoph Hellwig To: linux-kernel@vger.kernel.org, npiggin@suse.de, mark.fasheh@oracle.com, linux-ext4@vger.kernel.org, xfs@oss.sgi.com Subject: Re: iov_iter_fault_in_readable fix Message-ID: <20070614173153.GA14771@infradead.org> Mail-Followup-To: Christoph Hellwig , linux-kernel@vger.kernel.org, npiggin@suse.de, mark.fasheh@oracle.com, 
linux-ext4@vger.kernel.org, xfs@oss.sgi.com References: <200705292119.l4TLJtAD011726@shell0.pdx.osdl.net> <20070613134005.GA13815@localhost.sw.ru> <20070613135759.GD13815@localhost.sw.ru> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070613135759.GD13815@localhost.sw.ru> User-Agent: Mutt/1.4.2.3i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11789 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Wed, Jun 13, 2007 at 05:57:59PM +0400, Dmitriy Monakhov wrote: > The function performs its check on a single region, without respect to the > segmented nature of the iovec. For example writev no longer works :) Btw, could someone please start to collect all snippets like this in a nice simple regression test suite?
If no one wants to start a new one we should probably just put it into
xfsqa (which should be usable for other filesystems as well despite the
name).

> /* TESTCASE BEGIN */
> #include <sys/types.h>
> #include <sys/stat.h>
> #include <sys/mman.h>
> #include <sys/uio.h>
> #include <fcntl.h>
> #include <string.h>
> #define SIZE (4096 * 2)
> int main(int argc, char* argv[])
> {
> 	char* ptr[4];
> 	struct iovec iov[2];
> 	int fd, ret;
> 	ptr[0] = mmap(NULL, SIZE, PROT_READ|PROT_WRITE,
> 			MAP_PRIVATE|MAP_ANONYMOUS, 0, 0);
> 	ptr[1] = mmap(NULL, SIZE, PROT_NONE,
> 			MAP_PRIVATE|MAP_ANONYMOUS, 0, 0);
> 	ptr[2] = mmap(NULL, SIZE, PROT_READ|PROT_WRITE,
> 			MAP_PRIVATE|MAP_ANONYMOUS, 0, 0);
> 	ptr[3] = mmap(NULL, SIZE, PROT_NONE,
> 			MAP_PRIVATE|MAP_ANONYMOUS, 0, 0);
>
> 	iov[0].iov_base = ptr[0] + (SIZE - 1);
> 	iov[0].iov_len = 1;
> 	memset(ptr[0], 1, SIZE);
>
> 	iov[1].iov_base = ptr[2];
> 	iov[1].iov_len = SIZE;
> 	memset(ptr[2], 2, SIZE);
>
> 	fd = open(argv[1], O_CREAT|O_RDWR|O_TRUNC, 0666);
> 	ret = writev(fd, iov, sizeof(iov) / sizeof(struct iovec));
> 	return 0;
> }
> /* TESTCASE END */
> We will get the following result:
> writev(3, [{"\1", 1}, {"\2"..., 8192}], 2) = -1 EFAULT (Bad address)
>
> This is a hidden bug, and it was invisible because the
> XXXX_fault_in_readable return value was ignored before. Let
> iov_iter_fault_in_readable perform checks for all segments.
>
> Signed-off-by: Dmitriy Monakhov
>
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index fef19fc..7e025ea 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -433,7 +433,7 @@ size_t iov_iter_copy_from_user_atomic(struct page *page,
>  size_t iov_iter_copy_from_user(struct page *page,
>  		struct iov_iter *i, unsigned long offset, size_t bytes);
>  void iov_iter_advance(struct iov_iter *i, size_t bytes);
> -int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes);
> +int iov_iter_fault_in_readable(struct iov_iter *i, size_t *bytes);
>  size_t iov_iter_single_seg_count(struct iov_iter *i);
>
>  static inline void iov_iter_init(struct iov_iter *i,
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 8d59ed9..8600c3e 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1817,10 +1817,32 @@ void iov_iter_advance(struct iov_iter *i, size_t bytes)
>  }
>  EXPORT_SYMBOL(iov_iter_advance);
>
> -int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes)
> +int iov_iter_fault_in_readable(struct iov_iter *i, size_t* bytes)
>  {
> -	char __user *buf = i->iov->iov_base + i->iov_offset;
> -	return fault_in_pages_readable(buf, bytes);
> +	size_t len = *bytes;
> +	int ret;
> +	if (likely(i->nr_segs == 1)) {
> +		ret = fault_in_pages_readable(i->iov->iov_base, len);
> +		if (ret)
> +			*bytes = 0;
> +	} else {
> +		const struct iovec *iov = i->iov;
> +		size_t base = i->iov_offset;
> +		*bytes = 0;
> +		while (len) {
> +			int copy = min(len, iov->iov_len - base);
> +			if ((ret = fault_in_pages_readable(iov->iov_base + base, copy)))
> +				break;
> +			*bytes += copy;
> +			len -= copy;
> +			base += copy;
> +			if (iov->iov_len == base) {
> +				iov++;
> +				base = 0;
> +			}
> +		}
> +	}
> +	return ret;
>  }
>  EXPORT_SYMBOL(iov_iter_fault_in_readable);
>
> @@ -2110,7 +2132,7 @@ static ssize_t generic_perform_write_2copy(struct file *file,
>  	 * to check that the address is actually valid, when atomic
>  	 * usercopies are used, below.
>  	 */
> -	if (unlikely(iov_iter_fault_in_readable(i, bytes))) {
> +	if (unlikely(iov_iter_fault_in_readable(i, &bytes) && !bytes)) {
>  		status = -EFAULT;
>  		break;
>  	}
> @@ -2284,7 +2306,7 @@ again:
>  	 * to check that the address is actually valid, when atomic
>  	 * usercopies are used, below.
>  	 */
> -	if (unlikely(iov_iter_fault_in_readable(i, bytes))) {
> +	if (unlikely(iov_iter_fault_in_readable(i, &bytes) && !bytes)) {
>  		status = -EFAULT;
>  		break;
>  	}
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/

---end quoted text---

From owner-xfs@oss.sgi.com Thu Jun 14 11:14:47 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 11:14:49 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.8 required=5.0 tests=AWL,BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5EIEjWt018338 for ; Thu, 14 Jun 2007 11:14:47 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1HytqI-0004QW-7U; Thu, 14 Jun 2007 19:14:46 +0100 Date: Thu, 14 Jun 2007 19:14:46 +0100 From: Christoph Hellwig To: David Chinner Cc: xfs-dev , xfs-oss , hch@infradead.org Subject: Re: [PATCH, RFC] fix null files exposure growing via ftruncate Message-ID: <20070614181446.GA16955@infradead.org> References: <20070614063404.GW86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070614063404.GW86004887@sgi.com> User-Agent: Mutt/1.4.2.3i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-Virus-Scanned:
ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11790 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs

On Thu, Jun 14, 2007 at 04:34:04PM +1000, David Chinner wrote:
> Christoph,
>
> Looking into the test 140 failure you reported, I realised that none
> of the specific null files tests were being run automatically, which
> is why I hadn't seen any of those failures (nor had the QA team).
> That's being fixed.
>
> I suspect that the test passes on Irix because of a coincidence
> (the test sleeps for 10s and that is the default writeback
> timeout for file data) which means when the filesystem is shut down
> all the data is already on disk, so it's not really testing
> the NULL files fix.
>
> The failure is due to the ftruncate() logging the new file size
> before any data that had previously been written had hit the
> disk. IOWs, it violates the data write/inode size update rule
> that fixes the null files problem.
>
> The fix here checks, when growing the file, whether the disk
> inode size is different from the in-memory size. If they are
> different, we have data that needs to be written to disk beyond the
> existing on-disk EOF. Hence to maintain ordering we need to flush
> this data out before we log the changed file size.
>
> I suspect the flush could be done more optimally - I've just done a
> brute-force flush-the-entire-file mod. Should we only flush from the
> old di_size to the current i_size?
>
> There may also be better ways to fix this. Any thoughts on
> that?

Looks good enough for now, but I suspect just flushing from the old to
the new size would be quite a nice performance improvement that's
worth it.
From owner-xfs@oss.sgi.com Thu Jun 14 12:33:50 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 12:33:56 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.5 required=5.0 tests=AWL,BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.clusterfs.com (mail.clusterfs.com [206.168.112.78]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5EJXnWt005423 for ; Thu, 14 Jun 2007 12:33:50 -0700 Received: from localhost.adilger.int (S0106000bdb95b39c.cg.shawcable.net [70.72.213.136]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.clusterfs.com (Postfix) with ESMTP id D83077BA377; Thu, 14 Jun 2007 13:33:49 -0600 (MDT) Received: by localhost.adilger.int (Postfix, from userid 1000) id 840A24190; Thu, 14 Jun 2007 13:33:47 -0600 (MDT) Date: Thu, 14 Jun 2007 13:33:47 -0600 From: Andreas Dilger To: David Chinner Cc: "Amit K. Arora" , Suparna Bhattacharya , torvalds@osdl.org, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, xfs@oss.sgi.com, cmm@us.ibm.com Subject: Re: [PATCH 1/5] fallocate() implementation in i86, x86_64 and powerpc Message-ID: <20070614193347.GN5181@schatzie.adilger.int> Mail-Followup-To: David Chinner , "Amit K. 
Arora" , Suparna Bhattacharya , torvalds@osdl.org, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, xfs@oss.sgi.com, cmm@us.ibm.com References: <20070509160102.GA30745@amitarora.in.ibm.com> <20070510005926.GT85884050@sgi.com> <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070614120413.GD86004887@sgi.com> User-Agent: Mutt/1.4.1i X-GPG-Key: 1024D/0D35BED6 X-GPG-Fingerprint: 7A37 5D79 BF1B CECA D44F 8A29 A488 39F5 0D35 BED6 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11791 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: adilger@clusterfs.com Precedence: bulk X-list: xfs

On Jun 14, 2007 22:04 +1000, David Chinner wrote:
> On Thu, Jun 14, 2007 at 03:14:58AM -0600, Andreas Dilger wrote:
> > > B FA_DEALLOCATE
> > > removes the underlying disk space with the given range. The disk space
> > > shall be removed regardless of its contents, so both allocated space from
> > > B FA_ALLOCATE
> > > and
> > > B FA_PREALLOCATE
> > > as well as from
> > > B write(3)
> > > will be removed.
> > > B FA_DEALLOCATE
> > > shall never remove disk blocks outside the range specified.
> >
> > So this is essentially the same as "punch".
>
> Depends on your definition of "punch".
>
> > There doesn't seem to be
> > a mechanism to only unallocate unused FA_{PRE,}ALLOCATE space at the
> > end.
>
> ftruncate()

No, that will delete written data also.
What I'm thinking of is cases where an application does fallocate() to
reserve a lot of space, and when the application is finished it wants to
unreserve any unused space.

> > > B FA_DEALLOCATE
> > > shall never change the file size. If changing the file size
> > > is required when deallocating blocks from an offset to end
> > > of file (or beyond end of file),
> > > B ftruncate64(3)
> > > should be used.
> >
> > This also seems to be a bit of a wart, since it isn't a natural converse
> > of either of the above functions. How about having two modes,
> > similar to FA_ALLOCATE and FA_PREALLOCATE?
>
> whatever.
>
> > Say, FA_PUNCH (which
> > would be as you describe here - deletes all data in the specified
> > range, changing the file size if it overlaps EOF,
>
> Punch means different things to different people. To me (and probably
> most XFS-aware ppl) punch implies no change to the file size.

If "punch" does not change the file size, how is it possible to
determine the end of the actual written data? Say you have a file with
records in it, and these records are cancelled as they are processed
(e.g. a journal of sorts). One usage model for punch() that we had in
the past is to punch out each record after it finishes processing, so
that it will not be re-processed after a crash. If the file size doesn't
change with punch then there is no way to know when the last record is
hit and the rest of the file needs to be scanned.

> i.e. anyone currently using XFS_IOC_UNRESVSP will expect punching
> holes to leave the file size unchanged. This is the behaviour I
> described for FA_DEALLOCATE.
>
> > and FA_DEALLOCATE,
> > which only deallocates unused FA_{PRE,}ALLOCATE space?
>
> That's an "unwritten-to-hole" extent conversion. Is that really
> useful for anything? That's easily implemented with FIEMAP
> and FA_DEALLOCATE.

But why force the application to do this instead of making the
fallocate API sensible and allowing it to be done directly?
> Anyway, because we can't agree on a single pair of flags: > > FA_ALLOCATE == posix_fallocate() > FA_DEALLOCATE == unwritten-to-hole ??? I'd think this makes sense, being natural opposites of each other. FA_ALLOCATE doesn't overwrite existing data with zeros, so FA_DEALLOCATE shouldn't erase existing data. If FA_ALLOCATE extends the file size, then FA_DEALLOCATE should shrink it if there is no data at the end. > FA_RESV_SPACE == XFS_IOC_RESVSP64 > FA_UNRESV_SPACE == XFS_IOC_UNRESVSP64 > > We might also consider making @mode be a mask instead of an enumeration: > > > > FA_FL_DEALLOC 0x01 (default allocate) > > FA_FL_KEEP_SIZE 0x02 (default extend/shrink size) > > FA_FL_DEL_DATA 0x04 (default keep written data on DEALLOC) > > #define FA_ALLOCATE 0 > #define FA_DEALLOCATE FA_FL_DEALLOC > #define FA_RESV_SPACE FA_FL_KEEP_SIZE > #define FA_UNRESV_SPACE FA_FL_DEALLOC | FA_FL_KEEP_SIZE | FA_FL_DEL_DATA OK, this makes the semantics of XFS_IOC_RESVSP64 and XFS_IOC_UNRESVSP64 clear at least. The benefit is that it would also be possible (I'm not necessarily advocating this as a flag, just an example) to have semantics that are like XFS_IOC_ALLOCSP64 (zeroing written data while preallocating) with: #define FA_ZERO_SPACE FA_DEL_DATA or whatever semantics the caller actually wants, instead of restricting them to the subset of combinations given by FA_ALLOCATE and FA_DEALLOCATE (whatever it is we decide on in the end). > > > B ENOSPC > > > There is not enough space left on the device containing the file > > > referred to by > > > IR fd. > > > > Should probably say whether space is removed on failure or not. In > > Right. I'd say on error you need to FA_DEALLOCATE to ensure any space > allocated was freed back up. That way the error handling in the allocate > functions is much simpler (i.e. no need to undo there). Hmm, another flag? FA_FL_FREE_ENOSPC? 
I can imagine applications like PVRs to want to preallocate, say, an estimated 30 min of space for a show but if they only get 25 min of space returned they know some cleanup is in order (which can be done asynchronously while the show is filling the first 25 min of preallocated space). Otherwise, they have to loop in userspace trying decreasing preallocations until they fit, or starting small and incrementally preallocating space until they get an error. Cheers, Andreas -- Andreas Dilger Principal Software Engineer Cluster File Systems, Inc. From owner-xfs@oss.sgi.com Thu Jun 14 13:55:25 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 13:55:27 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=4.0 required=5.0 tests=AWL,BAYES_99,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from astra.simleu.ro (astra.simleu.ro [80.97.18.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5EKtMWt019655 for ; Thu, 14 Jun 2007 13:55:24 -0700 Received: from teal.hq.k1024.org (84-75-124-135.dclient.hispeed.ch [84.75.124.135]) by astra.simleu.ro (Postfix) with ESMTP id 42DBB7A; Thu, 14 Jun 2007 23:55:23 +0300 (EEST) Received: by teal.hq.k1024.org (Postfix, from userid 4004) id B95A840A056; Thu, 14 Jun 2007 22:55:18 +0200 (CEST) Date: Thu, 14 Jun 2007 22:55:18 +0200 From: Iustin Pop To: David Chinner Cc: xfs@oss.sgi.com Subject: Re: [PATCH] Implement shrink of empty AGs Message-ID: <20070614205518.GA4327@teal.hq.k1024.org> Mail-Followup-To: David Chinner , xfs@oss.sgi.com References: <20070610164014.GA10936@teal.hq.k1024.org> <20070612024025.GM86004887@sgi.com> <20070614060158.GA12951@teal.hq.k1024.org> <20070614090052.GA86004887@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070614090052.GA86004887@sgi.com> X-Linux: This message was written on Linux X-Header: /usr/include gives 
great headers User-Agent: Mutt/1.5.13 (2006-08-11) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11792 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: iusty@k1024.org Precedence: bulk X-list: xfs

On Thu, Jun 14, 2007 at 07:00:52PM +1000, David Chinner wrote:
> > > What's the dbdelta part of the reservation for? That's reserving dbdelta
> > > blocks for *allocations*, so I don't think this is right....
> >
> > Well, we'll shrink the filesystem by dbdelta, so the filesystem needs to
> > have enough free space to do it.
>
> True, but when you look at how much free space we need to make that
> modification, it turns out to be zero. i.e. we don't need to do any
> allocations *anywhere* to remove an empty AG from the end of the
> filesystem - just log a change to the superblock(s).
>
> > Whether this space is in the correct
> > place (the AGs we want to shrink) or not is a question answered by the
> > per-AG checking, but since we will reduce the freespace by this amount,
> > I thought that it's safer to mark this space in use. Did I misread the
> > intent of xfs_trans_reserve?
>
> The transaction reservation takes into account blocks that may need
> to be allocated during a transaction. Normally when we are freeing
> an extent, we may still have to allocate some blocks because a
> btree block might need to be split (and split all the way up the tree
> to the root). The XFS_GROWFS_SPACE_RES(mp) macro reserves the space
> needed for the freeing of a new extent which occurs during growing a
> partial AG to a full AG. We aren't doing such an operation here;
> we don't modify any free space or inode or extent btrees here at
> all.

Ah, finally I understand what the reservation is for. I probably
misunderstood the actual meaning of the transaction also... Thanks for
the clear explanation, will remove this.

> [...]
> Note that I'm just using this as an example piece of existing code that would
> break badly if we simply realloc the perag array. I could point you to the
> filestreams code where there is a perag stream refcount that we'll need to
> ensure is zero.
>
> What I'm trying to say is that the perag lock may not give us sufficient
> exclusion to be able to shrink the perag array safely.

Interesting conclusion. I'm failing right now to come up with an idea,
except auditing the whole codebase and making sure any down_read is
followed by a check on the perag size. Or something like adding a flag
to the perag structure like 'gone' that is checked (since we won't
k_realloc anymore) after the read lock.

> > > ok, so we have empty ag's here. You might want to check that the
> > > inode btree is empty and that the AGI unlinked list is empty.
> > I thought that inode btree is empty by virtue of pag->pagi_count == 0.
> > Is this not always so?
>
> It should be. Call me paranoid, but with an operation like this I want
> to make sure we check and double check that it is safe to proceed.
>
> > Furthermore, also since agi_count == agi_free +
> > actual used inodes + number of unlinked inodes, I think we don't need to
> > check the unlinked list.
>
> Call me paranoid, but....
>
> I think at minimum there should be debug code that caters to my paranoia ;)

Ok, I'll look into this then. I understand the paranoia, I just assumed
that logically it's provable that this situation doesn't/can't happen.

> > > Ok, so why not just
> > >
> > > fdbdelta += mp->m_sb.sb_agblocks - XFS_PREALLOC_BLOCKS(mp);
> > Because the last AG can be smaller than the sb_agblocks. It's true
> > that this holds only for the last one, but having two cases is uglier
> > than just always reading this size from the AG.
>
> But you -EINVAL'd earlier when the shrink size would not leave entire
> AGs behind. So the partial last AG is something that won't happen
> here.....
No, what I meant when I said 'last AG' is the "original" last, not the
"new" last. Consider this fs: AGs 1-8 = 4096 blocks, AG 9 = 3000 blocks.

We can't, in the for loop, make fbdelta += mp->m_sb.sb_agblocks ...
since that is 4096, whereas the 'last' AG (AG 9) has only 3000 blocks.

What I mean is that for a normal filesystem (non-shrink situation), the
size of the last AG is readable only from the AGF structure on-disk,
but for all the other AGs the size is the one from the superblock. So if
you touch the last AG, you have to read the AGF.

> > > This is not really safe - how do we know if all the users of the
> > > higher AGs have gone away? I think we should simply just zero out
> > > the unused AGs and don't worry about a realloc().
> > The problem that I saw is that if you do shrink+grow+shrink+grow+... you
> > will leak a small amount of memory (or corrupt kernel mem allocation?)
> > since the growfs code does a realloc from what it thinks is the size of
> > m_perag, which it computes solely from the current number of AGs. Should
> > we have a size in the mp struct for this and not assume it's the
> > agcount?
>
> Yup, another size variable in the mount structure is probably the only
> way to fix this.

Ok, noted. Also I get the impression that this ties in with the perag
locking above somehow...

> > updated patch (without the separation of the IOCTL and without the
> > rework of the perag lock) is attached.
>
> I haven't looked at it yet - i'll try to get to it tomorrow...

No hurry, just small changes. I'll think about the issues you raised.
thanks, iustin From owner-xfs@oss.sgi.com Thu Jun 14 15:16:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 15:16:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.7 required=5.0 tests=AWL,BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5EMGbWt005302 for ; Thu, 14 Jun 2007 15:16:39 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id IAA29418 for ; Fri, 15 Jun 2007 08:16:37 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5EMGaAf123473244 for ; Fri, 15 Jun 2007 08:16:37 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5EMGZnm123551665 for xfs@oss.sgi.com; Fri, 15 Jun 2007 08:16:35 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 15 Jun 2007 08:16:35 +1000 From: David Chinner To: xfs@oss.sgi.com Subject: Re: [PATCH] Implement shrink of empty AGs Message-ID: <20070614221635.GZ85884050@sgi.com> References: <20070610164014.GA10936@teal.hq.k1024.org> <20070612024025.GM86004887@sgi.com> <20070614060158.GA12951@teal.hq.k1024.org> <20070614090052.GA86004887@sgi.com> <20070614205518.GA4327@teal.hq.k1024.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070614205518.GA4327@teal.hq.k1024.org> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11793 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com 
X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

On Thu, Jun 14, 2007 at 10:55:18PM +0200, Iustin Pop wrote:
> On Thu, Jun 14, 2007 at 07:00:52PM +1000, David Chinner wrote:
> > [...]
> > Note that I'm just using this as an example piece of existing code that would
> > break badly if we simply realloc the perag array. I could point you to the
> > filestreams code where there is a perag stream refcount that we'll need to
> > ensure is zero.
> >
> > What I'm trying to say is that the perag lock may not give us sufficient
> > exclusion to be able to shrink the perag array safely.
> Interesting conclusion. I'm failing right now to come up with an idea,
> except auditing the whole codebase

Yup.

> and making sure any down_read is followed by a check on the perag size.

That doesn't help, really, because at some points we won't be able to
back out and select another AG. i.e. once we've selected an AG and
ensured the perag structure is initialised, it is assumed it will
remain that way forever.

More likely we need reference counting on the AGs as we do operations
(e.g. when we select an AG for an operation we take a reference and
then drop it when we are done) and prevent new references from being
taken on an AG at some point (i.e. once it's been emptied).

> Or something like adding a flag
> to the perag structure like 'gone' that is checked (since we won't
> k_realloc anymore), after the read lock.

Clear the pag[if]_init flags and ensure failure to initialise is
treated correctly, i.e. fall back to selecting another AG.

> > > > ok, so we have empty ag's here. You might want to check that the
> > > > inode btree is empty and that the AGI unlinked list is empty.
> > > I thought that inode btree is empty by virtue of pag->pagi_count == 0.
> > > Is this not always so?
> >
> > It should be. Call me paranoid, but with an operation like this I want
> > to make sure we check and double check that it is safe to proceed.
> > > Furthermore, also since agi_count == agi_free +
> > > actual used inodes + number of unlinked inodes, I think we don't need to
> > > check the unlinked list.
> >
> > Call me paranoid, but....
> >
> > I think at minimum there should be debug code that caters to my paranoia ;)
> Ok, I'll look into this then. I understand the paranoia, I just assumed
> that logically it's provable that this situation doesn't/can't happen.

At some point we'll have a bug that breaks that logic - we need code to
catch that....

> > > > Ok, so why not just
> > > >
> > > > fdbdelta += mp->m_sb.sb_agblocks - XFS_PREALLOC_BLOCKS(mp);
> > > Because the last AG can be smaller than the sb_agblocks. It's true
> > > that this holds only for the last one, but having two cases is uglier
> > > than just always reading this size from the AG.
> >
> > But you -EINVAL'd earlier when the shrink size would not leave entire
> > AGs behind. So the partial last AG is something that won't happen
> > here.....
> No, what I meant when I said 'last AG' is the "original" last, not the
> "new" last. Consider this fs: AGs 1-8 = 4096 blocks, AG 9 = 3000 blocks.

Ok, gotcha. Makes sense now.

> We can't, in the for loop, make fbdelta += mp->m_sb.sb_agblocks ...
> since that is 4096, whereas the 'last' AG (AG 9) has only 3000
> blocks.
>
> What I mean is that for a normal filesystem (non-shrink situation), the
> size of the last AG is readable only from the AGF structure on-disk,
> but for all the other AGs the size is the one from the superblock. So if
> you touch the last AG, you have to read the AGF.

I'd still special case it, like the growing of a partial last AG is
special cased in xfs_growfs_data_private().

> > > > This is not really safe - how do we know if all the users of the
> > > > higher AGs have gone away? I think we should simply just zero out
> > > > the unused AGs and don't worry about a realloc().
> > > The problem that I saw is that if you do shrink+grow+shrink+grow+...
you > > > will leak a small amount of memory (or corrupt kernel mem allocation?) > > > since the growfs code does a realloc from what it thinks is the size of > > > m_perag, which it computes solely from the current number of AGs. Should > > > we have a size in the mp struct for this and not assume it's the > > > agcount? > > > > Yup, another size variable in the mount structure is probably the only > > way to fix this. > Ok, noted. Also I get the impression that this ties in with the perag > locking above somehow... *nod* Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 14 15:21:25 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 15:21:26 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5EMLLWt006218 for ; Thu, 14 Jun 2007 15:21:23 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id IAA29717; Fri, 15 Jun 2007 08:21:16 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5EMLDAf123476251; Fri, 15 Jun 2007 08:21:14 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5EML9rI122818922; Fri, 15 Jun 2007 08:21:09 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 15 Jun 2007 08:21:09 +1000 From: David Chinner To: Christoph Hellwig , linux-kernel@vger.kernel.org, npiggin@suse.de, mark.fasheh@oracle.com, linux-ext4@vger.kernel.org, xfs@oss.sgi.com Subject: Re: iov_iter_fault_in_readable fix 
Message-ID: <20070614222109.GF86004887@sgi.com> References: <200705292119.l4TLJtAD011726@shell0.pdx.osdl.net> <20070613134005.GA13815@localhost.sw.ru> <20070613135759.GD13815@localhost.sw.ru> <20070614173153.GA14771@infradead.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070614173153.GA14771@infradead.org> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11794 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

On Thu, Jun 14, 2007 at 06:31:53PM +0100, Christoph Hellwig wrote:
> On Wed, Jun 13, 2007 at 05:57:59PM +0400, Dmitriy Monakhov wrote:
> > The function performs its check for a single region, without respect to
> > the segment nature of the iovec. For example writev no longer works :)
>
> Btw, could someone please start to collect all snippets like this in
> a nice simple regression test suite? If no one wants to start a new
> one we should probably just put it into xfsqa (which should be usable
> for other filesystems as well despite the name)

Yeah, it can run a subset of the tests on NFS and UDF filesystems as
well, and there are some specific UDF-only tests in it too. I think the
NFS test group is mostly generic tests that don't use or test specific
XFS features.

I'd be happy to accumulate tests like these in xfsqa...

Cheers,

Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 14 15:34:59 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 15:35:02 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5EMYwWt008939 for ; Thu, 14 Jun 2007 15:34:59 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1Hyxtw-0007Pe-5m; Thu, 14 Jun 2007 23:34:48 +0100 Date: Thu, 14 Jun 2007 23:34:48 +0100 From: Christoph Hellwig To: David Chinner Cc: Christoph Hellwig , linux-kernel@vger.kernel.org, npiggin@suse.de, mark.fasheh@oracle.com, linux-ext4@vger.kernel.org, xfs@oss.sgi.com Subject: Re: iov_iter_fault_in_readable fix Message-ID: <20070614223448.GA28420@infradead.org> Mail-Followup-To: Christoph Hellwig , David Chinner , linux-kernel@vger.kernel.org, npiggin@suse.de, mark.fasheh@oracle.com, linux-ext4@vger.kernel.org, xfs@oss.sgi.com References: <200705292119.l4TLJtAD011726@shell0.pdx.osdl.net> <20070613134005.GA13815@localhost.sw.ru> <20070613135759.GD13815@localhost.sw.ru> <20070614173153.GA14771@infradead.org> <20070614222109.GF86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070614222109.GF86004887@sgi.com> User-Agent: Mutt/1.4.2.3i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11795 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org 
Precedence: bulk X-list: xfs On Fri, Jun 15, 2007 at 08:21:09AM +1000, David Chinner wrote: > Yeah, it can run a subset of the tests on NFS and UDF filesystems as well and > there are some specific UDF-only tests in it too. I think the NFS test group > is mostly generic tests that don't use or test specific XFS features. Actually most testcases can run on any reasonable posixish filesystem, we just need some glue to tell the testsuite it's actually okay. From owner-xfs@oss.sgi.com Thu Jun 14 17:55:58 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 17:56:03 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5F0ttWt006391 for ; Thu, 14 Jun 2007 17:55:56 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA04134; Fri, 15 Jun 2007 10:55:49 +1000 To: "Eric Sandeen" Subject: Re: Review: dmapi-devel RPM should required xfsprogs-devel RPM From: "Barry Naujok" Organization: SGI Cc: "xfs@oss.sgi.com" , xfs-dev Content-Type: text/plain; format=flowed; delsp=yes; charset=iso-8859-15 MIME-Version: 1.0 References: <467150D3.7090109@sandeen.net> Date: Fri, 15 Jun 2007 10:59:38 +1000 Message-ID: In-Reply-To: <467150D3.7090109@sandeen.net> User-Agent: Opera Mail/9.10 (Win32) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from Quoted-Printable to 8bit by oss.sgi.com id l5F0twWt006395 X-archive-position: 11796 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: 
bnaujok@sgi.com Precedence: bulk X-list: xfs On Fri, 15 Jun 2007 00:29:39 +1000, Eric Sandeen wrote: > Barry Naujok wrote: >> As per the subject (and also mentioned in the description): >> >> --- a/dmapi/build/rpm/dmapi.spec.in 2007-06-14 15:02:13.000000000 >> +1000 >> +++ b/dmapi/build/rpm/dmapi.spec.in 2007-06-14 14:47:31.290063558 >> +1000 >> @@ -23,7 +23,7 @@ >> %package devel >> Summary: Data Management API static libraries and headers. >> Group: Development/Libraries >> -Requires: @pkg_name@ >= 2.0.4 >> +Requires: @pkg_name@ >= 2.0.4 xfsprogs-devel >> >> %description devel >> dmapi-devel contains the libraries and header files needed to >> >> > > seems sane to me. Just a header issue I guess? Yep, dmapi.h requires xfs headers. From owner-xfs@oss.sgi.com Thu Jun 14 17:58:19 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 17:58:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5F0wGWt006912 for ; Thu, 14 Jun 2007 17:58:18 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA04185; Fri, 15 Jun 2007 10:58:13 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5F0wCAf122350523; Fri, 15 Jun 2007 10:58:12 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5F0wBDc123555681; Fri, 15 Jun 2007 10:58:11 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 15 Jun 2007 10:58:11 +1000 From: David Chinner To: Christoph Hellwig Cc: David Chinner , xfs-dev , 
xfs-oss Subject: Re: [PATCH, RFC] fix null files exposure growing via ftruncate Message-ID: <20070615005811.GJ86004887@sgi.com> References: <20070614063404.GW86004887@sgi.com> <20070614181446.GA16955@infradead.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070614181446.GA16955@infradead.org> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11797 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Jun 14, 2007 at 07:14:46PM +0100, Christoph Hellwig wrote: > On Thu, Jun 14, 2007 at 04:34:04PM +1000, David Chinner wrote: > > Christoph, > > > > Looking into the test 140 failure you reported, I realised that none > > of the specific null files tests were being run automatically, which > > is why I hadn't seen any of those failures (nor had the QA team). > > That's being fixed. > > > > I suspect that the test passes on Irix because of a coincidence > > (the test sleeps for 10s and that is the default writeback > > timeout for file data) which means when the filesystem is shut down > > all the data is already on disk so it's not really testing > > the NULL files fix. > > > > The failure is due to the ftruncate() logging the new file size > > before any data that had previously been written had hit the > > disk. IOWs, it violates the data write/inode size update rule > > that fixes the null files problem. > > > > The fix here checks, when growing the file, whether the disk > > inode size is different from the in-memory size. If they are > > different, we have data that needs to be written to disk beyond the > > existing on-disk EOF. Hence to maintain ordering we need to flush > > this data out before we log the changed file size. 
> > > > I suspect the flush could be done more optimally - I've just done a > > brute-force flush the entire file mod. Should we only flush from the > > old di_size to the current i_size? > > > > There may also be better ways to fix this. Any thoughts on > > that? > > Looks good enough for now, but I suspect just flushing from the old > to the new size would be a quite nice performance improvement that's > worth it. Yeah, that's the way I was leaning. I'll mod the patch to do that and repost. Thanks. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 14 19:24:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 19:24:33 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.1 required=5.0 tests=BAYES_80,RDNS_NONE autolearn=no version=3.2.0-pre1-r499012 Received: from notesmail1.matahari.co.id ([202.152.28.246]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5F2OSWt019473 for ; Thu, 14 Jun 2007 19:24:30 -0700 Received: from notesmail2.matahari.co.id ([202.152.28.247]) by mth-imss with InterScan Message Security Suite; Fri, 15 Jun 2007 08:54:05 +0700 In-Reply-To: Subject: Re: Returned mail: see transcript for details[ScanMail Notification] Virusdetected! 
To: linux-xfs@oss.sgi.com X-Mailer: Lotus Notes Release 6.5.4 March 27, 2005 Message-ID: From: pebrianto.dias@matahari.co.id Date: Fri, 15 Jun 2007 08:51:27 +0700 X-MIMETrack: Serialize by Router on kwcg01/Matahari(Release 6.5.4FP2|September 12, 2005) at06/15/2007 08:53:21 AM MIME-Version: 1.0 Content-type: text/plain; charset=US-ASCII X-imss-version: 2.047 X-imss-result: Passed X-imss-scanInfo: M:P L:E SM:0 X-imss-tmaseResult: TT:0 TS:0.0000 TC:00 TRN:0 TV:3.6.1039(15238.002) X-imss-scores: Clean:75.23362 C:2 M:3 S:5 R:5 X-imss-settings: Baseline:2 C:3 M:2 S:3 R:3 (0.1500 0.1500) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11798 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pebrianto.dias@matahari.co.id Precedence: bulk X-list: xfs Who Is this linux-xfs@oss.sgi .com To 06/14/2007 07:15 pebrianto.dias@matahari.co.id PM cc Subject Returned mail: see transcript for details[ScanMail Notification] Virus detected! 
From owner-xfs@oss.sgi.com Thu Jun 14 23:23:45 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Jun 2007 23:23:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5F6NcWt005989 for ; Thu, 14 Jun 2007 23:23:43 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA11742; Fri, 15 Jun 2007 16:23:37 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5F6NaAf123690617; Fri, 15 Jun 2007 16:23:37 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5F6NZPL123284565; Fri, 15 Jun 2007 16:23:35 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 15 Jun 2007 16:23:35 +1000 From: David Chinner To: David Chinner Cc: xfs-dev , xfs-oss , asg-qa Subject: Re: Review: fix test 004 to account for reserved space Message-ID: <20070615062335.GO86004887@sgi.com> References: <20070604063328.GT85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604063328.GT85884050@sgi.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11799 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Ping? 
On Mon, Jun 04, 2007 at 04:33:28PM +1000, David Chinner wrote: > With the changes to use some space by default only in memory > as a reserved pool, df and statfs will now output a free block > count that is slightly different to what is held in the superblock. > > Update the qa test to account for this change. > > Cheers, > > Dave. > -- > Dave Chinner > Principal Engineer > SGI Australian Software Group > > --- > xfstests/004 | 35 +++++++++++++++++++++++++---------- > 1 file changed, 25 insertions(+), 10 deletions(-) > > Index: xfs-cmds/xfstests/004 > =================================================================== > --- xfs-cmds.orig/xfstests/004 2006-11-14 19:57:39.000000000 +1100 > +++ xfs-cmds/xfstests/004 2007-05-04 16:38:03.957537306 +1000 > @@ -67,21 +67,36 @@ xfs_db -r -c "freesp -s" $SCRATCH_DEV >$ > echo "xfs_db for $SCRATCH_DEV" >>$seq.full > cat $tmp.xfs_db >>$seq.full > > +eval `$XFS_IO_PROG -x -c resblks $SCRATCH_MNT 2>&1 \ > + | $AWK_PROG '/available/ { printf "resblks=%u\n", $5 }'` > +echo "resblks gave: resblks=$resblks" >>$seq.full > + > # check the 'blocks' field from freesp command is OK > # since 2.6.18, df does not report the 4 blocks per AG that cannot > # be allocated, hence we check for that exact mismatch. > +# since ~2.6.22, reserved blocks are used by default and df does > +# not report them, hence check for an exact mismatch. 
> perl -ne ' > - BEGIN { $avail ='$avail' * 512; > - $answer="(no xfs_db free blocks line?)" } > - /free blocks (\d+)$/ || next; > - $freesp = $1 * '$dbsize'; > - if ($freesp == $avail) { $answer = "yes"; } > - else { > + BEGIN { $avail ='$avail' * 512; > + $answer="(no xfs_db free blocks line?)" } > + /free blocks (\d+)$/ || next; > + $freesp = $1 * '$dbsize'; > + if ($freesp == $avail) { > + $answer = "yes"; > + } else { > $avail = $avail + (('$agcount' + 1) * '$dbsize' * 4); > - if ($freesp == $avail) { $answer = "yes"; } > - else { $answer = "no ($freesp != $avail)"; } > - } > - END { print "$answer\n" } > + if ($freesp == $avail) { > + $answer = "yes"; > + } else { > + $avail = $avail + ('$resblks' * '$dbsize'); > + if ($freesp == $avail) { > + $answer = "yes"; > + } else { > + $answer = "no ($freesp != $avail)"; > + } > + } > + } > + END { print "$answer\n" } > ' <$tmp.xfs_db >$tmp.ans > ans="`cat $tmp.ans`" > echo "Checking blocks column same as df: $ans" -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Fri Jun 15 00:20:47 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 00:20:51 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.4 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5F7KiWt028384 for ; Fri, 15 Jun 2007 00:20:46 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA13135; Fri, 15 Jun 2007 17:20:36 +1000 Message-ID: <46723DC4.1080107@sgi.com> Date: Fri, 15 Jun 2007 17:20:36 +1000 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.4 (Macintosh/20070604) MIME-Version: 1.0 To: David Chinner CC: Johan Andersson , xfs@oss.sgi.com, 
Nathan Scott Subject: Re: xfs_fsr allocation group optimization References: <1181544692.19145.44.camel@gentoo-johan.transmode.se> <20070612014452.GK86004887@sgi.com> In-Reply-To: <20070612014452.GK86004887@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11800 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs David Chinner wrote: > On Mon, Jun 11, 2007 at 08:51:32AM +0200, Johan Andersson wrote: >> Does anyone know of a good way to find one filename that points to a >> certain inode? > > We need an rmap.... > > We have some prototype linux code that does parent pointers (i.e. > each inode has a back pointer to its parent inode), but that, IIUC, > is a long way from prime-time. Tim? > > Cheers, > > Dave. I don't know about a "long way" (longer to fully supported, yes) Firstly, I need to move its hooks out of linux-2.6/xfs_iops.c which were referring to xfs inodes (instead of vnodes) probably back where they were in xfs_vnodeops.c. Nathan, did you have some other suggestion than this - unfortunately, I haven't looked at this code (until recently) for a while. Cheers, Tim. 
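[Editor's note: absent rmap or parent pointers, the generic answer to "which filename points to this inode?" is a brute-force walk of the tree comparing inode numbers, e.g. with `find -inum`. The sketch below demonstrates that on a throwaway temp directory; on a real, large filesystem this walk is exactly the slow path that parent pointers would avoid.]

```shell
#!/bin/sh
# Demo: map an inode number back to a path by brute force.
# The temp directory exists only for the demonstration.

dir=$(mktemp -d)
touch "$dir/target"

# st_ino of the file we will pretend we only know by number
inum=$(ls -i "$dir/target" | awk '{print $1}')

# find compares each directory entry's inode against -inum;
# -xdev keeps the walk on one filesystem, since inode numbers
# are only unique per filesystem.
path=$(find "$dir" -xdev -inum "$inum" -print)

echo "inode $inum -> $path"
```

Note that a file with multiple hard links would print one line per name, which is also why a simple per-inode back pointer is not a complete rmap.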
From owner-xfs@oss.sgi.com Fri Jun 15 00:25:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 00:25:31 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5F7PQWt029597 for ; Fri, 15 Jun 2007 00:25:28 -0700 Received: from edge.local (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 8CE2492C37C; Fri, 15 Jun 2007 17:25:27 +1000 (EST) Subject: Re: xfs_fsr allocation group optimization From: Nathan Scott Reply-To: nscott@aconex.com To: Timothy Shimmin Cc: David Chinner , Johan Andersson , xfs@oss.sgi.com In-Reply-To: <46723DC4.1080107@sgi.com> References: <1181544692.19145.44.camel@gentoo-johan.transmode.se> <20070612014452.GK86004887@sgi.com> <46723DC4.1080107@sgi.com> Content-Type: text/plain Organization: Aconex Date: Fri, 15 Jun 2007 17:24:39 +1000 Message-Id: <1181892279.30716.1.camel@edge.yarra.acx> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11801 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Fri, 2007-06-15 at 17:20 +1000, Timothy Shimmin wrote: > > I don't know about a "long way" (longer to fully supported, yes) > Firstly, I need to move its hooks out of linux-2.6/xfs_iops.c which > were > referring to xfs inodes (instead of vnodes) probably back where they > were in > xfs_vnodeops.c. > > Nathan, did you have some other suggestion than this - unfortunately, > I haven't looked at this code (until recently) for a while. 
Geez, that takes me back - hummm, I seem to remember a possibly-iget-related performance issue? Not much else, sorry (it _is_ beer o'clock on a Friday arvo after all...). cheers. -- Nathan From owner-xfs@oss.sgi.com Fri Jun 15 00:33:32 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 00:33:34 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.4 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5F7XSWt031313 for ; Fri, 15 Jun 2007 00:33:31 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA13453; Fri, 15 Jun 2007 17:33:24 +1000 Message-ID: <467240C4.1090700@sgi.com> Date: Fri, 15 Jun 2007 17:33:24 +1000 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.4 (Macintosh/20070604) MIME-Version: 1.0 To: David Chinner CC: xfs-dev , xfs-oss , asg-qa Subject: Re: Review: fix test 004 to account for reserved space References: <20070604063328.GT85884050@sgi.com> <20070615062335.GO86004887@sgi.com> In-Reply-To: <20070615062335.GO86004887@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11802 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs I have to go home now, but I'll look at this one soon. --Tim David Chinner wrote: > Ping? 
> > On Mon, Jun 04, 2007 at 04:33:28PM +1000, David Chinner wrote: >> With the changes to use some space by default only in memory >> as a reserved pool, df and statfs will now output a free block >> count that is slightly different to what is held in the superblock. >> >> Update the qa test to account for this change. >> >> Cheers, >> >> Dave. >> -- >> Dave Chinner >> Principal Engineer >> SGI Australian Software Group >> >> --- >> xfstests/004 | 35 +++++++++++++++++++++++++---------- >> 1 file changed, 25 insertions(+), 10 deletions(-) >> >> Index: xfs-cmds/xfstests/004 >> =================================================================== >> --- xfs-cmds.orig/xfstests/004 2006-11-14 19:57:39.000000000 +1100 >> +++ xfs-cmds/xfstests/004 2007-05-04 16:38:03.957537306 +1000 >> @@ -67,21 +67,36 @@ xfs_db -r -c "freesp -s" $SCRATCH_DEV >$ >> echo "xfs_db for $SCRATCH_DEV" >>$seq.full >> cat $tmp.xfs_db >>$seq.full >> >> +eval `$XFS_IO_PROG -x -c resblks $SCRATCH_MNT 2>&1 \ >> + | $AWK_PROG '/available/ { printf "resblks=%u\n", $5 }'` >> +echo "resblks gave: resblks=$resblks" >>$seq.full >> + >> # check the 'blocks' field from freesp command is OK >> # since 2.6.18, df does not report the 4 blocks per AG that cannot >> # be allocated, hence we check for that exact mismatch. >> +# since ~2.6.22, reserved blocks are used by default and df does >> +# not report them, hence check for an exact mismatch. 
>> perl -ne ' >> - BEGIN { $avail ='$avail' * 512; >> - $answer="(no xfs_db free blocks line?)" } >> - /free blocks (\d+)$/ || next; >> - $freesp = $1 * '$dbsize'; >> - if ($freesp == $avail) { $answer = "yes"; } >> - else { >> + BEGIN { $avail ='$avail' * 512; >> + $answer="(no xfs_db free blocks line?)" } >> + /free blocks (\d+)$/ || next; >> + $freesp = $1 * '$dbsize'; >> + if ($freesp == $avail) { >> + $answer = "yes"; >> + } else { >> $avail = $avail + (('$agcount' + 1) * '$dbsize' * 4); >> - if ($freesp == $avail) { $answer = "yes"; } >> - else { $answer = "no ($freesp != $avail)"; } >> - } >> - END { print "$answer\n" } >> + if ($freesp == $avail) { >> + $answer = "yes"; >> + } else { >> + $avail = $avail + ('$resblks' * '$dbsize'); >> + if ($freesp == $avail) { >> + $answer = "yes"; >> + } else { >> + $answer = "no ($freesp != $avail)"; >> + } >> + } >> + } >> + END { print "$answer\n" } >> ' <$tmp.xfs_db >$tmp.ans >> ans="`cat $tmp.ans`" >> echo "Checking blocks column same as df: $ans" > From owner-xfs@oss.sgi.com Fri Jun 15 00:41:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 00:41:06 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.6 required=5.0 tests=AWL,BAYES_50, FH_HOST_EQ_D_D_D_D,FH_HOST_EQ_D_D_D_DB,RDNS_DYNAMIC autolearn=no version=3.2.0-pre1-r499012 Received: from stitch.e-626.net (60.153.216.81.static.spa.siw.siwnet.net [81.216.153.60]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5F7f1Wt001176 for ; Fri, 15 Jun 2007 00:41:02 -0700 Received: from [130.100.71.110] (hidden-user@146.175.241.83.in-addr.dgcsystems.net [83.241.175.146]) (authenticated bits=0) by stitch.e-626.net (8.14.0/8.13.7) with ESMTP id l5F7eV0L003903 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NOT); Fri, 15 Jun 2007 09:40:53 +0200 Subject: Re: xfs_fsr allocation group optimization From: Johan Andersson To: Timothy Shimmin Cc: David Chinner , 
xfs@oss.sgi.com, Nathan Scott In-Reply-To: <46723DC4.1080107@sgi.com> References: <1181544692.19145.44.camel@gentoo-johan.transmode.se> <20070612014452.GK86004887@sgi.com> <46723DC4.1080107@sgi.com> Content-Type: text/plain Date: Fri, 15 Jun 2007 09:40:27 +0200 Message-Id: <1181893227.30958.24.camel@gentoo-johan.transmode.se> Mime-Version: 1.0 X-Mailer: Evolution 2.8.3 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: ClamAV 0.90.2/3424/Fri Jun 15 03:30:29 2007 on stitch.e-626.net X-Virus-Status: Clean X-archive-position: 11803 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: johan@e-626.net Precedence: bulk X-list: xfs On Fri, 2007-06-15 at 17:20 +1000, Timothy Shimmin wrote: > David Chinner wrote: > > On Mon, Jun 11, 2007 at 08:51:32AM +0200, Johan Andersson wrote: > >> Does anyone know of a good way to find one filename that points to a > >> certain inode? > > > > We need an rmap.... > > > > We have some prototype linux code that does parent pointers (i.e. > > each inode has a back pointer to its parent inode), but that, IIUC, > > is a long way from prime-time. Tim? > > > > Cheers, > > > > Dave. > > I don't know about a "long way" (longer to fully supported, yes) > Firstly, I need to move its hooks out of linux-2.6/xfs_iops.c which were > referring to xfs inodes (instead of vnodes) probably back where they were in > xfs_vnodeops.c. > > Nathan, did you have some other suggestion than this - unfortunately, > I haven't looked at this code (until recently) for a while. > > Cheers, > Tim. > I have another idea that I plan to try. The idea was to add an ioctl to "clone" an inode. By using the original inode (the one to be defragmented) as parent "directory" in the call to xfs_dir_ialloc(), the new inode should be allocated near the original inode. 
The fsr can then open the new inode with jdm_open and proceed as normal. This would also solve another problem that I see with fsr, the mtime of every directory in the fs is updated when fsr is run. I do see one problem with this: If the defrag is aborted for some reason, we can get orphaned inodes. Will fsck handle this? /Johan Andersson From owner-xfs@oss.sgi.com Fri Jun 15 02:42:53 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 02:42:58 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.7 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from nz-out-0506.google.com (nz-out-0506.google.com [64.233.162.238]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5F9gpWt028122 for ; Fri, 15 Jun 2007 02:42:53 -0700 Received: by nz-out-0506.google.com with SMTP id 4so751971nzn for ; Fri, 15 Jun 2007 02:42:51 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:received:date:from:to:cc:subject:message-id:references:mime-version:content-type:content-disposition:in-reply-to:user-agent; b=YXaSlmFq2IcxxAvDlyGd0z+TxldI7sGU6KEGZMDQPSv0344di3Kk++ttRBQOLvnppnfvH8Z7qtIIq2oAlfxodteSvfEq6HvyXVEYOOqMzDlVI/DOvMI3KOpVhxebDh3SrjlMqHbht4NrmqGp9CC7YcEr3T7aiwu+gP7ja7rl6M0= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:date:from:to:cc:subject:message-id:references:mime-version:content-type:content-disposition:in-reply-to:user-agent; b=oT6/PkG0+Tb4zLLnA6RxmOXTFlwFUE2JV/66OYM9bdq0oEu66Xa5uBON0wp9hKcnfCqrXNFxl2iVCUyWgbhHIBKRsECOU1cGqgBGdUsY3ISpJ7FBelmo9jxGYIGYTeKZFtvMqaWi4zBzik0se08v0H2mZAdpWeuNM5XW8+44w+E= Received: by 10.115.33.1 with SMTP id l1mr2870000waj.1181900571337; Fri, 15 Jun 2007 02:42:51 -0700 (PDT) Received: from htj.dyndns.org ( [221.139.199.126]) by mx.google.com with ESMTP id k9sm6404268wah.2007.06.15.02.42.48 (version=TLSv1/SSLv3 
cipher=OTHER); Fri, 15 Jun 2007 02:42:50 -0700 (PDT) Received: by htj.dyndns.org (Postfix, from userid 1000) id 9E6CB23D4BCC; Fri, 15 Jun 2007 18:42:46 +0900 (KST) Date: Fri, 15 Jun 2007 18:42:46 +0900 From: Tejun Heo To: David Greaves Cc: "Rafael J. Wysocki" , Linus Torvalds , David Chinner , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik , jens.axboe@oracle.com Subject: [PATCH] block: always requeue !fs requests at the front Message-ID: <20070615094246.GN29122@htj.dyndns.org> References: <200706020122.49989.rjw@sisk.pl> <46706968.7000703@dgreaves.com> <200706140115.58733.rjw@sisk.pl> <46714ECF.8080203@gmail.com> <46715A66.8030806@suse.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <46715A66.8030806@suse.de> User-Agent: Mutt/1.5.13 (2006-08-11) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11807 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: htejun@gmail.com Precedence: bulk X-list: xfs SCSI marks internal commands with REQ_PREEMPT and pushes them to the front of the request queue using blk_execute_rq(). When entering suspended or frozen state, SCSI devices are quiesced using scsi_device_quiesce(). In quiesced state, only REQ_PREEMPT requests are processed. This is how SCSI blocks other requests out while suspending and resuming. As all internal commands are pushed at the front of the queue, this usually works. Unfortunately, this interacts badly with ordered requeueing. To preserve request order on requeueing (due to busy device, active EH or other failures), requests are sorted according to ordered sequence on requeue if IO barrier is in progress. The following sequence deadlocks. 1. IO barrier sequence issues. 2. Suspend requested. Queue is quiesced with part or all of the IO barrier sequence at the front. 3. 
During suspending or resuming, SCSI issues internal command which gets deferred and requeued for some reason. As the command is issued after the IO barrier in #1, ordered requeueing code puts the request after IO barrier sequence. 4. The device is ready to process requests again but still is in quiesced state and the first request of the queue isn't REQ_PREEMPT, so command processing is deadlocked - suspending/resuming waits for the issued request to complete while the request can't be processed till device is put back into running state by resuming. This can be fixed by always putting !fs requests at the front when requeueing. The following thread reports this deadlock. http://thread.gmane.org/gmane.linux.kernel/537473 Signed-off-by: Tejun Heo Cc: Jens Axboe Cc: David Greaves --- Okay, it took a lot of hours of debugging but boiled down to a two-liner fix. I feel so empty. :-) RAID6 triggers this reliably because it uses BIO_BARRIER heavily to update its superblock. The recent ATA suspend/resume rewrite is hit by this because it uses SCSI internal commands to spin down and up the drives for suspending and resuming. David, please test this. Jens, does it look okay? Thanks. block/ll_rw_blk.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c index 6b5173a..a2fe2e5 100644 --- a/block/ll_rw_blk.c +++ b/block/ll_rw_blk.c @@ -340,6 +340,14 @@ unsigned blk_ordered_req_seq(struct request *rq) if (rq == &q->post_flush_rq) return QUEUE_ORDSEQ_POSTFLUSH; + /* !fs requests don't need to follow barrier ordering. Always + * put them at the front. This fixes the following deadlock. 
+ * + * http://thread.gmane.org/gmane.linux.kernel/537473 + */ + if (!blk_fs_request(rq)) + return QUEUE_ORDSEQ_DRAIN; + if ((rq->cmd_flags & REQ_ORDERED_COLOR) == (q->orig_bar_rq->cmd_flags & REQ_ORDERED_COLOR)) return QUEUE_ORDSEQ_DRAIN; From owner-xfs@oss.sgi.com Fri Jun 15 04:18:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 04:18:54 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.6 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from nz-out-0506.google.com (nz-out-0506.google.com [64.233.162.239]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5FBHXWt018262 for ; Fri, 15 Jun 2007 04:18:47 -0700 Received: by nz-out-0506.google.com with SMTP id 4so768290nzn for ; Fri, 15 Jun 2007 04:17:34 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:received:date:from:to:cc:subject:message-id:references:mime-version:content-type:content-disposition:in-reply-to:user-agent; b=FeoL1gO8fikZXLDMRTfj49kHjxAJDs+gCIyzeBO/UwJ/SYCG8FMfNRnpBUX/qDfj1u6roKDPVi33POpGReQChBbbu+hWaLqjYO0JLAvnr0CXDnoLs4JBRZEePQ8ZqTnaw3NVia+8eEvPq7rbBqb51TvaRg0GUv11KgfDS6Km2Qk= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:date:from:to:cc:subject:message-id:references:mime-version:content-type:content-disposition:in-reply-to:user-agent; b=fBmIet6DrBPApn55i9pNfI7yueE3x6rIev6atd7zM17oMR0++kRuUmSgx3DpGBttXpEvfZXRHKP31lwY/ZbHkLGbShH1yqlFWD6t5qGlnupFdR5qWbv4Ws5jW7x26MErqsUUNw1688AqT0Pja7lTQPEMu4TxQ0dx9g7IpNxZE7g= Received: by 10.115.49.16 with SMTP id b16mr2860009wak.1181906253631; Fri, 15 Jun 2007 04:17:33 -0700 (PDT) Received: from htj.dyndns.org ( [221.139.199.126]) by mx.google.com with ESMTP id n33sm4453426wag.2007.06.15.04.17.30 (version=TLSv1/SSLv3 cipher=OTHER); Fri, 15 Jun 2007 04:17:32 -0700 (PDT) Received: by htj.dyndns.org (Postfix, from 
userid 1000) id 8AC2723D4BCC; Fri, 15 Jun 2007 20:17:28 +0900 (KST) Date: Fri, 15 Jun 2007 20:17:28 +0900 From: Tejun Heo To: Jens Axboe Cc: David Greaves , "Rafael J. Wysocki" , Linus Torvalds , David Chinner , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik Subject: [PATCH] block: always requeue !fs requests at the front Message-ID: <20070615111728.GO29122@htj.dyndns.org> References: <200706020122.49989.rjw@sisk.pl> <46706968.7000703@dgreaves.com> <200706140115.58733.rjw@sisk.pl> <46714ECF.8080203@gmail.com> <46715A66.8030806@suse.de> <20070615094246.GN29122@htj.dyndns.org> <20070615110544.GR6149@kernel.dk> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070615110544.GR6149@kernel.dk> User-Agent: Mutt/1.5.13 (2006-08-11) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11808 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: htejun@gmail.com Precedence: bulk X-list: xfs SCSI marks internal commands with REQ_PREEMPT and pushes them to the front of the request queue using blk_execute_rq(). When entering suspended or frozen state, SCSI devices are quiesced using scsi_device_quiesce(). In quiesced state, only REQ_PREEMPT requests are processed. This is how SCSI blocks other requests out while suspending and resuming. As all internal commands are pushed at the front of the queue, this usually works. Unfortunately, this interacts badly with ordered requeueing. To preserve request order on requeueing (due to busy device, active EH or other failures), requests are sorted according to ordered sequence on requeue if IO barrier is in progress. The following sequence deadlocks. 1. IO barrier sequence issues. 2. Suspend requested. Queue is quiesced with part or all of the IO barrier sequence at the front. 3. 
During suspending or resuming, SCSI issues internal command which gets deferred and requeued for some reason. As the command is issued after the IO barrier in #1, ordered requeueing code puts the request after IO barrier sequence. 4. The device is ready to process requests again but still is in quiesced state and the first request of the queue isn't REQ_PREEMPT, so command processing is deadlocked - suspending/resuming waits for the issued request to complete while the request can't be processed till device is put back into running state by resuming. This can be fixed by always putting !fs requests at the front when requeueing. The following thread reports this deadlock. http://thread.gmane.org/gmane.linux.kernel/537473 Signed-off-by: Tejun Heo Cc: Jens Axboe Cc: David Greaves --- > Yep looks good, except for the bad multi-line comment style, but that's > minor stuff ;-) That's how Jeff likes it in libata and my fingers are hardcoded to it, but I do appreciate the paramount importance of each maintainer's right to his/her own comment style, so here's the respinned patch. :-) block/ll_rw_blk.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c index 6b5173a..c99b463 100644 --- a/block/ll_rw_blk.c +++ b/block/ll_rw_blk.c @@ -340,6 +340,15 @@ unsigned blk_ordered_req_seq(struct request *rq) if (rq == &q->post_flush_rq) return QUEUE_ORDSEQ_POSTFLUSH; + /* + * !fs requests don't need to follow barrier ordering. Always + * put them at the front. This fixes the following deadlock.
+ * + * http://thread.gmane.org/gmane.linux.kernel/537473 + */ + if (!blk_fs_request(rq)) + return QUEUE_ORDSEQ_DRAIN; + if ((rq->cmd_flags & REQ_ORDERED_COLOR) == (q->orig_bar_rq->cmd_flags & REQ_ORDERED_COLOR)) return QUEUE_ORDSEQ_DRAIN; From owner-xfs@oss.sgi.com Fri Jun 15 04:21:27 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 04:21:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.0 required=5.0 tests=BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from kernel.dk (brick.kernel.dk [80.160.20.94]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5FBLPWt022667 for ; Fri, 15 Jun 2007 04:21:27 -0700 Received: from nelson.home.kernel.dk (nelson.home.kernel.dk [192.168.0.33]) by kernel.dk (Postfix) with ESMTP id AEB312576C9; Fri, 15 Jun 2007 13:21:24 +0200 (CEST) Received: by nelson.home.kernel.dk (Postfix, from userid 1000) id 6D202F757; Fri, 15 Jun 2007 13:21:37 +0200 (CEST) Date: Fri, 15 Jun 2007 13:21:37 +0200 From: Jens Axboe To: Tejun Heo Cc: David Greaves , "Rafael J. 
Wysocki" , Linus Torvalds , David Chinner , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik Subject: Re: [PATCH] block: always requeue !fs requests at the front Message-ID: <20070615112137.GT6149@kernel.dk> References: <200706020122.49989.rjw@sisk.pl> <46706968.7000703@dgreaves.com> <200706140115.58733.rjw@sisk.pl> <46714ECF.8080203@gmail.com> <46715A66.8030806@suse.de> <20070615094246.GN29122@htj.dyndns.org> <20070615110544.GR6149@kernel.dk> <20070615111728.GO29122@htj.dyndns.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070615111728.GO29122@htj.dyndns.org> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11809 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jens.axboe@oracle.com Precedence: bulk X-list: xfs On Fri, Jun 15 2007, Tejun Heo wrote: > SCSI marks internal commands with REQ_PREEMPT and push it at the front > of the request queue using blk_execute_rq(). When entering suspended > or frozen state, SCSI devices are quiesced using > scsi_device_quiesce(). In quiesced state, only REQ_PREEMPT requests > are processed. This is how SCSI blocks other requests out while > suspending and resuming. As all internal commands are pushed at the > front of the queue, this usually works. > > Unfortunately, this interacts badly with ordered requeueing. To > preserve request order on requeueing (due to busy device, active EH or > other failures), requests are sorted according to ordered sequence on > requeue if IO barrier is in progress. > > The following sequence deadlocks. > > 1. IO barrier sequence issues. > > 2. Suspend requested. Queue is quiesced with part or all of IO > barrier sequence at the front. > > 3. During suspending or resuming, SCSI issues internal command which > gets deferred and requeued for some reason. 
As the command is > issued after the IO barrier in #1, ordered requeueing code puts the > request after IO barrier sequence. > > 4. The device is ready to process requests again but still is in > quiesced state and the first request of the queue isn't > REQ_PREEMPT, so command processing is deadlocked - > suspending/resuming waits for the issued request to complete while > the request can't be processed till device is put back into > running state by resuming. > > This can be fixed by always putting !fs requests at the front when > requeueing. > > The following thread reports this deadlock. > > http://thread.gmane.org/gmane.linux.kernel/537473 > > Signed-off-by: Tejun Heo > Cc: Jenn Axboe > Cc: David Greaves > --- > > Yep looks good, except for the bad multi-line comment style, but that's > > minor stuff ;-) > > That's how Jeff likes it in libata and my fingers are hardcoded to it, > but I do appreciate the paramount importance of each maintainer's > right to his/her own comment style, so here's the respinned patch. :-) Thanks a lot! I'll pass it right on for 2.6.22. -- Jens Axboe From owner-xfs@oss.sgi.com Fri Jun 15 04:23:55 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 04:23:58 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.0 required=5.0 tests=BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from kernel.dk (brick.kernel.dk [80.160.20.94]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5FBNrWt023530 for ; Fri, 15 Jun 2007 04:23:54 -0700 Received: from nelson.home.kernel.dk (nelson.home.kernel.dk [192.168.0.33]) by kernel.dk (Postfix) with ESMTP id 9A90F2576C5; Fri, 15 Jun 2007 13:05:31 +0200 (CEST) Received: by nelson.home.kernel.dk (Postfix, from userid 1000) id 667B6F757; Fri, 15 Jun 2007 13:05:44 +0200 (CEST) Date: Fri, 15 Jun 2007 13:05:44 +0200 From: Jens Axboe To: Tejun Heo Cc: David Greaves , "Rafael J. 
Wysocki" , Linus Torvalds , David Chinner , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik Subject: Re: [PATCH] block: always requeue !fs requests at the front Message-ID: <20070615110544.GR6149@kernel.dk> References: <200706020122.49989.rjw@sisk.pl> <46706968.7000703@dgreaves.com> <200706140115.58733.rjw@sisk.pl> <46714ECF.8080203@gmail.com> <46715A66.8030806@suse.de> <20070615094246.GN29122@htj.dyndns.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070615094246.GN29122@htj.dyndns.org> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11810 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jens.axboe@oracle.com Precedence: bulk X-list: xfs On Fri, Jun 15 2007, Tejun Heo wrote: > SCSI marks internal commands with REQ_PREEMPT and push it at the front > of the request queue using blk_execute_rq(). When entering suspended > or frozen state, SCSI devices are quiesced using > scsi_device_quiesce(). In quiesced state, only REQ_PREEMPT requests > are processed. This is how SCSI blocks other requests out while > suspending and resuming. As all internal commands are pushed at the > front of the queue, this usually works. > > Unfortunately, this interacts badly with ordered requeueing. To > preserve request order on requeueing (due to busy device, active EH or > other failures), requests are sorted according to ordered sequence on > requeue if IO barrier is in progress. > > The following sequence deadlocks. > > 1. IO barrier sequence issues. > > 2. Suspend requested. Queue is quiesced with part of all of IO > barrier sequence at the front. > > 3. During suspending or resuming, SCSI issues internal command which > gets deferred and requeued for some reason. 
As the command is > issued after the IO barrier in #1, ordered requeueing code puts the > request after IO barrier sequence. > > 4. The device is ready to process requests again but still is in > quiesced state and the first request of the queue isn't > REQ_PREEMPT, so command processing is deadlocked - > suspending/resuming waits for the issued request to complete while > the request can't be processed till device is put back into > running state by resuming. > > This can be fixed by always putting !fs requests at the front when > requeueing. > > The following thread reports this deadlock. > > http://thread.gmane.org/gmane.linux.kernel/537473 > > Signed-off-by: Tejun Heo > Cc: Jenn Axboe > Cc: David Greaves > --- > Okay, it took a lot of hours of debugging but boiled down to two liner > fix. I feel so empty. :-) RAID6 triggers this reliably because it > uses BIO_BARRIER heavily to update its superblock. The recent ATA > suspend/resume rewrite is hit by this because it uses SCSI internal > commands to spin down and up the drives for suspending and resuming. > > David, please test this. Jens, does it look okay? 
Yep looks good, except for the bad multi-line comment style, but that's minor stuff ;-) Acked-by: Jens Axboe -- Jens Axboe From owner-xfs@oss.sgi.com Fri Jun 15 06:59:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 06:59:15 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.6 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5FDx9Wt001341 for ; Fri, 15 Jun 2007 06:59:11 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 35432E7058; Fri, 15 Jun 2007 14:59:08 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id mClPLcMgIWFI; Fri, 15 Jun 2007 14:55:48 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 380F1E7037; Fri, 15 Jun 2007 14:59:06 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1HzCKG-0002JK-V6; Fri, 15 Jun 2007 14:58:57 +0100 Message-ID: <46729B20.309@dgreaves.com> Date: Fri, 15 Jun 2007 14:58:56 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: Tejun Heo Cc: "Rafael J. 
Wysocki" , Linus Torvalds , David Chinner , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik , jens.axboe@oracle.com Subject: Re: [PATCH] block: always requeue !fs requests at the front References: <200706020122.49989.rjw@sisk.pl> <46706968.7000703@dgreaves.com> <200706140115.58733.rjw@sisk.pl> <46714ECF.8080203@gmail.com> <46715A66.8030806@suse.de> <20070615094246.GN29122@htj.dyndns.org> In-Reply-To: <20070615094246.GN29122@htj.dyndns.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11811 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs > David, please test this. Jens, does it look okay? Phew! Works for me. I applied it to 2.6.22-rc4 (along with sata_promise_use_TF_interface_for_polling_NODATA_commands.patch) hibernate and resume worked. Thanks for digging it out Tejun (and everyone else!) 
:) David From owner-xfs@oss.sgi.com Fri Jun 15 08:57:20 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 08:57:24 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.dvmed.net (srv5.dvmed.net [207.36.208.214]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5FFvIWt031102 for ; Fri, 15 Jun 2007 08:57:20 -0700 Received: from cpe-065-190-165-210.nc.res.rr.com ([65.190.165.210] helo=[10.10.10.10]) by mail.dvmed.net with esmtpsa (Exim 4.63 #1 (Red Hat Linux)) id 1HzDPs-00028z-Te; Fri, 15 Jun 2007 15:08:49 +0000 Message-ID: <4672AB7F.7030702@garzik.org> Date: Fri, 15 Jun 2007 11:08:47 -0400 From: Jeff Garzik User-Agent: Thunderbird 1.5.0.12 (X11/20070530) MIME-Version: 1.0 To: Tejun Heo CC: Jens Axboe , David Greaves , "Rafael J. Wysocki" , Linus Torvalds , David Chinner , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown Subject: Re: [PATCH] block: always requeue !fs requests at the front References: <200706020122.49989.rjw@sisk.pl> <46706968.7000703@dgreaves.com> <200706140115.58733.rjw@sisk.pl> <46714ECF.8080203@gmail.com> <46715A66.8030806@suse.de> <20070615094246.GN29122@htj.dyndns.org> <20070615110544.GR6149@kernel.dk> <20070615111728.GO29122@htj.dyndns.org> In-Reply-To: <20070615111728.GO29122@htj.dyndns.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11812 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeff@garzik.org Precedence: bulk X-list: xfs Tejun Heo wrote: > SCSI marks internal commands with REQ_PREEMPT and push it at the front > of the request queue using blk_execute_rq(). 
When entering suspended > or frozen state, SCSI devices are quiesced using > scsi_device_quiesce(). In quiesced state, only REQ_PREEMPT requests > are processed. This is how SCSI blocks other requests out while > suspending and resuming. As all internal commands are pushed at the > front of the queue, this usually works. > > Unfortunately, this interacts badly with ordered requeueing. To > preserve request order on requeueing (due to busy device, active EH or > other failures), requests are sorted according to ordered sequence on > requeue if IO barrier is in progress. > > The following sequence deadlocks. > > 1. IO barrier sequence issues. > > 2. Suspend requested. Queue is quiesced with part or all of IO > barrier sequence at the front. > > 3. During suspending or resuming, SCSI issues internal command which > gets deferred and requeued for some reason. As the command is > issued after the IO barrier in #1, ordered requeueing code puts the > request after IO barrier sequence. > > 4. The device is ready to process requests again but still is in > quiesced state and the first request of the queue isn't > REQ_PREEMPT, so command processing is deadlocked - > suspending/resuming waits for the issued request to complete while > the request can't be processed till device is put back into > running state by resuming. > > This can be fixed by always putting !fs requests at the front when > requeueing. > > The following thread reports this deadlock. > > http://thread.gmane.org/gmane.linux.kernel/537473 > > Signed-off-by: Tejun Heo > Cc: Jenn Axboe > Cc: David Greaves Acked-by: Jeff Garzik Thanks Tejun, you kick ass as usual. 
Jeff From owner-xfs@oss.sgi.com Fri Jun 15 09:04:47 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 09:04:49 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.7 required=5.0 tests=AWL,BAYES_80, URIBL_RHS_TLD_WHOIS autolearn=no version=3.2.0-pre1-r499012 Received: from mail.lichtvoll.de (mondschein.lichtvoll.de [194.150.191.11]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5FG4jWt001110 for ; Fri, 15 Jun 2007 09:04:46 -0700 Received: from localhost (dslb-084-056-065-071.pools.arcor-ip.net [84.56.65.71]) by mail.lichtvoll.de (Postfix) with ESMTP id 5A2015AD57 for ; Fri, 15 Jun 2007 18:04:45 +0200 (CEST) From: Martin Steigerwald To: linux-xfs@oss.sgi.com Subject: xfs_fsr - problem with open files possible? Date: Fri, 15 Jun 2007 18:04:42 +0200 User-Agent: KMail/1.9.7 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200706151804.43067.Martin@lichtvoll.de> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11813 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: Martin@lichtvoll.de Precedence: bulk X-list: xfs Hi! I ran xfs_fsr on my laptops some time ago, for example on my partition which holds /home. 
This seemed to work quite well:

---------------------------------------------------------------------
shambala:~> xfs_db -c frag -f /dev/sda2
actual 777855, ideal 737681, fragmentation factor 5.16%
shambala:~> xfs_fsr /dev/sda2
/home start inode=0
shambala:~> xfs_db -c frag -f /dev/sda2
actual 744809, ideal 737681, fragmentation factor 0.96%
shambala:~> xfs_fsr /dev/sda2
/home start inode=0
shambala:~> xfs_db -c frag -f /dev/sda2
actual 744809, ideal 737681, fragmentation factor 0.96%
---------------------------------------------------------------------

As discussed here before, there is a limit to what xfs_fsr can do, and it did not come below 0.96% fragmentation even though the partition has several GB of free space. When I restarted KMail after the defragmentation, however, it told me about some broken indexes. KMail stores the index for each folder in a file. It was able to restore the indexes. I cannot pinpoint this problem to the defragmentation with xfs_fsr, but it seemed like more than a coincidence that it happened directly after I restarted KMail following the defragmentation. KMail was running while the defragmentation was taking place, and it likely had some index files opened.
At least it does now that I am writing this mail:

---------------------------------------------------------------------
shambala:~> lsof +D /home/martin/Mail
COMMAND  PID   USER  FD TYPE DEVICE   SIZE      NODE NAME
kontact 2787 martin mem  REG    8,2 280837 223046853 /home/martin/Mail/.Lichtvoll.directory/.KDE.index
kontact 2787 martin mem  REG    8,2     33 119177542 /home/martin/Mail/.templates.index
kontact 2787 martin mem  REG    8,2     33 120665290 /home/martin/Mail/.drafts.index
kontact 2787 martin 18u  REG    8,2 280837 223046853 /home/martin/Mail/.Lichtvoll.directory/.KDE.index
kontact 2787 martin 19u  REG    8,2     33 120665290 /home/martin/Mail/.drafts.index
kontact 2787 martin 20u  REG    8,2     33 119177542 /home/martin/Mail/.templates.index
---------------------------------------------------------------------

Could data loss happen when running xfs_fsr on files that are opened by an application? I did not come across any other corrupted files except a Mercurial repository. I cannot pinpoint this problem to XFS at all and have no idea how and when it got corrupted. At least in my backup from some weeks ago the repository was okay. Unfortunately I do not know anymore whether I made a commit to that repository while xfs_fsr was running or not, but I think I didn't. The filesystem itself was okay after the defragmentation; I checked it via xfs_check! I can try to reproduce the problem. It would be handy, though, to have an xfs_fsr that can be limited to operate only on a certain directory and its files and subdirectories. That way I could create a new mail folder, copy some mails into it, have it open so that KMail accesses the index file, and let xfs_fsr run only on this mail folder. I can also open a bug report, but I first wanted to ask here.
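As an aside, the fragmentation factor that xfs_db reports can be cross-checked from the actual/ideal extent counts it prints; a small standalone sketch (the formula (actual - ideal) / actual * 100 is an assumption, not taken from the xfs_db source, but it reproduces both readings in the transcript above):

```python
def frag_factor(actual, ideal):
    # Fraction of extents above the ideal count, as a percentage.
    # Formula assumed from the numbers xfs_db prints, not from its source.
    return (actual - ideal) / actual * 100.0

# Readings from the xfs_db transcript above:
print(round(frag_factor(777855, 737681), 2))  # before xfs_fsr -> 5.16
print(round(frag_factor(744809, 737681), 2))  # after xfs_fsr  -> 0.96
```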
Regards, -- Martin 'Helios' Steigerwald - http://www.Lichtvoll.de GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7 From owner-xfs@oss.sgi.com Fri Jun 15 10:29:46 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 10:29:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.8 required=5.0 tests=AWL,BAYES_99,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5FHTjWt022119 for ; Fri, 15 Jun 2007 10:29:46 -0700 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.13.1/8.13.1) with ESMTP id l5FH39D1010039; Fri, 15 Jun 2007 13:03:09 -0400 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l5FH39N3016613; Fri, 15 Jun 2007 13:03:09 -0400 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l5FH38Rk014497; Fri, 15 Jun 2007 13:03:09 -0400 Message-ID: <4672C531.9020802@sandeen.net> Date: Fri, 15 Jun 2007 11:58:25 -0500 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.12 (X11/20070530) MIME-Version: 1.0 To: Martin Steigerwald CC: linux-xfs@oss.sgi.com Subject: Re: xfs_fsr - problem with open files possible? 
References: <200706151804.43067.Martin@lichtvoll.de> In-Reply-To: <200706151804.43067.Martin@lichtvoll.de> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11814 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Martin Steigerwald wrote: > Could data loss happen when running xfs_fsr on files that are opened by an > application? It should not; fsr performs a lot of safety checks and aborts under problematic circumstances. It will skip files if: * mandatory locks are present * file is marked immutable, append-only, or nodefrag * filesystem is shut down * change/modify times have been altered since defrag started * original file is mmapped If you can clearly recreate it with xfs_fsr, it would be interesting to compare the good & bad index files to see how they differ, it might offer a clue as to what/why/how it changed. > I did not came across any other corrupted files except a Mercurial > repository. I can not pinpoint this problem to XFS at all and have no > idea how and when it got corrupted. At least in my backup from some weeks > ago the repository has been okay. Unfortunately I do not know anymore > whether I made a commit to that repository while xfs_fsr was running or > not. But I think I didn't. > > The filesystem itself was okay after fragmentation, I checked it via > xfs_check! > > I can try to reproduce the problem. Would be handy tough to have an > xfs_fsr that can be limited only to operate on a certain directory and > its files and sub directories. This way I could create a new mailfolder, > copy some mails in there, have it opened so that KMail accesses the index > file and let xfs_fsr only run on this mailfolder. From the man page: Files marked as no-defrag will be skipped. 
The xfs_io(8) chattr command with the f attribute can be used to set or clear this flag. Files and directories created in a directory with the no-defrag flag will inherit the attribute. so you can flag your mail directory or index files as no-defrag if you like. (I know this is the converse of what you wanted... but maybe helpful) -Eric > I can also open a bug report, but I first wanted to ask here. > > Regards, From owner-xfs@oss.sgi.com Fri Jun 15 11:13:54 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 11:13:57 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.5 required=5.0 tests=AWL,BAYES_95, URIBL_RHS_TLD_WHOIS autolearn=no version=3.2.0-pre1-r499012 Received: from mail.lichtvoll.de (mondschein.lichtvoll.de [194.150.191.11]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5FIDrWt029502 for ; Fri, 15 Jun 2007 11:13:54 -0700 Received: from localhost (dslb-084-056-065-071.pools.arcor-ip.net [84.56.65.71]) by mail.lichtvoll.de (Postfix) with ESMTP id 5BF525AD56 for ; Fri, 15 Jun 2007 20:13:54 +0200 (CEST) From: Martin Steigerwald To: linux-xfs@oss.sgi.com Subject: Re: xfs_fsr - problem with open files possible? 
Date: Fri, 15 Jun 2007 20:13:52 +0200 User-Agent: KMail/1.9.7 References: <200706151804.43067.Martin@lichtvoll.de> <4672C531.9020802@sandeen.net> (sfid-20070615_201058_223491_4316861A) In-Reply-To: <4672C531.9020802@sandeen.net> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200706152013.52836.Martin@lichtvoll.de> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11815 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: Martin@lichtvoll.de Precedence: bulk X-list: xfs Am Freitag 15 Juni 2007 schrieb Eric Sandeen: > > I can try to reproduce the problem. Would be handy tough to have an > > xfs_fsr that can be limited only to operate on a certain directory > > and its files and sub directories. This way I could create a new > > mailfolder, copy some mails in there, have it opened so that KMail > > accesses the index file and let xfs_fsr only run on this mailfolder. > > From the man page: > > Files marked as no-defrag will be skipped. The xfs_io(8) chattr > command with the f attribute can be used to set or clear this flag. > Files and directories created in a directory with the no-defrag flag > will inherit the attribute. > > so you can flag your mail directory or index files as no-defrag if you > like. (I know this is the converse of what you wanted... but maybe > helpful) Hello Eric! Well also from the man page as I just found out after sending my mail: "xfs_fsr can be called with one or more arguments naming filesystems (block device name), and files to reorganize. In this mode xfs_fsr does not read or write /var/tmp/.fsrlast_xfs nor does it run for a fixed time interval. It makes one pass through each specified regular file and all regular files in each specified filesystem. 
A command line name referring to a symbolic link (except to a file system device), FIFO, or UNIX domain socket generates a warning message, but is otherwise ignored. While traversing the filesystem these types of files are silently skipped." So maybe something along the lines of: xfs_fsr ~/Mail/.trash.index while it is open by KMail should be enough. I think I will give this a try soon. Regards, -- Martin 'Helios' Steigerwald - http://www.Lichtvoll.de GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7 From owner-xfs@oss.sgi.com Fri Jun 15 13:04:37 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 13:04:40 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.4 required=5.0 tests=AWL,BAYES_95, URIBL_RHS_TLD_WHOIS autolearn=no version=3.2.0-pre1-r499012 Received: from mail.lichtvoll.de (mondschein.lichtvoll.de [194.150.191.11]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5FK4ZWt021417 for ; Fri, 15 Jun 2007 13:04:36 -0700 Received: from localhost (dslb-084-056-065-071.pools.arcor-ip.net [84.56.65.71]) by mail.lichtvoll.de (Postfix) with ESMTP id 91E245AD56 for ; Fri, 15 Jun 2007 22:04:36 +0200 (CEST) From: Martin Steigerwald To: linux-xfs@oss.sgi.com Subject: Re: xfs_fsr - problem with open files possible? 
Date: Fri, 15 Jun 2007 22:04:34 +0200 User-Agent: KMail/1.9.7 References: <200706151804.43067.Martin@lichtvoll.de> <4672C531.9020802@sandeen.net> <200706152013.52836.Martin@lichtvoll.de> (sfid-20070615_203725_895576_F4CF076E) In-Reply-To: <200706152013.52836.Martin@lichtvoll.de> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200706152204.34506.Martin@lichtvoll.de> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11816 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: Martin@lichtvoll.de Precedence: bulk X-list: xfs Am Freitag 15 Juni 2007 schrieb Martin Steigerwald: > Hello Eric! > > Well also from the man page as I just found out after sending my mail: > > "xfs_fsr can be called with one or more arguments naming filesystems > (block device name), and files to reorganize. In this mode xfs_fsr > does not read or write /var/tmp/.fsrlast_xfs nor does it run for a > fixed time interval. It makes one pass through each specified regular > file and all regular files in each specified filesystem. A command > line name referring to a symbolic link (except to a file system > device), FIFO, or UNIX domain socket generates a warning message, but > is otherwise ignored. While traversing the filesystem these types of > files are silently skipped." > > So maybe something along the lines of: > > xfs_fsr ~/Mail/.trash.index > > while it is open by KMail should be enough. I think I will give this a > try soon. Hello! Hmmm, could not reproduce it with a single file. 
xfs_fsr seems to be behaving correctly:

shambala:Mail/.Lichtvoll.directory/.Linux.directory> xfs_bmap -v .Kernel.index
.Kernel.index:
 EXT: FILE-OFFSET      BLOCK-RANGE        AG AG-OFFSET          TOTAL
   0: [0..167]:        83985960..83986127 15 (1586760..1586927)   168
   1: [168..175]:      83985824..83985831 15 (1586624..1586631)     8
shambala:Mail/.Lichtvoll.directory/.Linux.directory> md5sum .Kernel.index
9d1fdd1138e297fd97548a3a4696e308  .Kernel.index

Started Kontact (KMail)...

shambala:Mail/.Lichtvoll.directory/.Linux.directory> md5sum .Kernel.index
9d1fdd1138e297fd97548a3a4696e308  .Kernel.index
shambala:Mail/.Lichtvoll.directory/.Linux.directory> lsof .Kernel.index
COMMAND   PID   USER  FD TYPE DEVICE  SIZE      NODE NAME
kontact 15985 martin mem  REG    8,2 86497 254828298 .Kernel.index
kontact 15985 martin 23u  REG    8,2 86497 254828298 .Kernel.index
shambala:Mail/.Lichtvoll.directory/.Linux.directory> cp -a .Kernel.index .Kernel.index.old
shambala:Mail/.Lichtvoll.directory/.Linux.directory> md5sum .Kernel.index.old
9d1fdd1138e297fd97548a3a4696e308  .Kernel.index.old
shambala:Mail/.Lichtvoll.directory/.Linux.directory> md5sum .Kernel.index
9d1fdd1138e297fd97548a3a4696e308  .Kernel.index

Trying to defragment while the file is open: xfs_fsr seems to leave the file alone completely, as it should.
shambala:Mail/.Lichtvoll.directory/.Linux.directory> xfs_fsr .Kernel.index
shambala:Mail/.Lichtvoll.directory/.Linux.directory> xfs_bmap -v .Kernel.index
.Kernel.index:
 EXT: FILE-OFFSET      BLOCK-RANGE        AG AG-OFFSET          TOTAL
   0: [0..167]:        83985960..83986127 15 (1586760..1586927)   168
   1: [168..175]:      83985824..83985831 15 (1586624..1586631)     8
shambala:Mail/.Lichtvoll.directory/.Linux.directory> md5sum .Kernel.index
9d1fdd1138e297fd97548a3a4696e308  .Kernel.index

Also on a second try:

shambala:Mail/.Lichtvoll.directory/.Linux.directory> xfs_fsr .Kernel.index
shambala:Mail/.Lichtvoll.directory/.Linux.directory> xfs_bmap -v .Kernel.index
.Kernel.index:
 EXT: FILE-OFFSET      BLOCK-RANGE        AG AG-OFFSET          TOTAL
   0: [0..167]:        83985960..83986127 15 (1586760..1586927)   168
   1: [168..175]:      83985824..83985831 15 (1586624..1586631)     8
shambala:Mail/.Lichtvoll.directory/.Linux.directory> md5sum .Kernel.index
9d1fdd1138e297fd97548a3a4696e308  .Kernel.index

Stopping Kontact (KMail) and trying to defragment then... well, now it defragments. The md5sum stays the same.

shambala:Mail/.Lichtvoll.directory/.Linux.directory> md5sum .Kernel.index
9d1fdd1138e297fd97548a3a4696e308  .Kernel.index
shambala:Mail/.Lichtvoll.directory/.Linux.directory> xfs_fsr .Kernel.index
shambala:Mail/.Lichtvoll.directory/.Linux.directory> xfs_bmap -v .Kernel.index
.Kernel.index:
 EXT: FILE-OFFSET      BLOCK-RANGE        AG AG-OFFSET          TOTAL
   0: [0..175]:        86612448..86612623 15 (4213248..4213423)   176
shambala:Mail/.Lichtvoll.directory/.Linux.directory> md5sum .Kernel.index
9d1fdd1138e297fd97548a3a4696e308  .Kernel.index

Hmmm, maybe there is some race condition that is not always triggered. If I find time I will let xfs_fsr run on ~/Mail or a folder with several subfolders so that more index files are touched. I can make a backup, let it run, and if Kontact / KMail complains, look at the differences.
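The before/after md5sum comparison used above can be automated; a minimal, generic sketch in Python, where the `action` callable is assumed to wrap the actual `xfs_fsr <file>` invocation (the function names here are hypothetical, not part of any XFS tool):

```python
import hashlib

def file_md5(path):
    # Hex MD5 of the file contents, read in chunks to handle large index files.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def unchanged_after(path, action):
    # Hash, run the action (e.g. a subprocess call running "xfs_fsr <path>"),
    # hash again; True if the file contents are byte-identical afterwards.
    before = file_md5(path)
    action()
    return before == file_md5(path)
```

For the experiment above, something like `unchanged_after(".Kernel.index", lambda: subprocess.run(["xfs_fsr", ".Kernel.index"]))` would flag any content change a defragmentation pass introduces.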
Regards, -- Martin 'Helios' Steigerwald - http://www.Lichtvoll.de GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7 From owner-xfs@oss.sgi.com Fri Jun 15 13:36:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 13:36:10 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.9 required=5.0 tests=AWL,BAYES_60,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5FKa7Wt026563 for ; Fri, 15 Jun 2007 13:36:08 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id C3A4BB000083; Fri, 15 Jun 2007 16:36:07 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id C054D5000185; Fri, 15 Jun 2007 16:36:07 -0400 (EDT) Date: Fri, 15 Jun 2007 16:36:07 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: xfs@oss.sgi.com cc: linux-raid@vger.kernel.org Subject: XFS Tunables for High Speed Linux SW RAID5 Systems? Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11817 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Hi, I was wondering if the XFS folks can recommend any optimizations for high speed disk arrays using RAID5? As well as if there is anything else I can do on the MD side? 
fs.xfs.restrict_chown = 1
fs.xfs.irix_sgid_inherit = 0
fs.xfs.irix_symlink_mode = 0
fs.xfs.panic_mask = 0
fs.xfs.error_level = 3
fs.xfs.xfssyncd_centisecs = 3000
fs.xfs.inherit_sync = 1
fs.xfs.inherit_nodump = 1
fs.xfs.inherit_noatime = 1
fs.xfs.xfsbufd_centisecs = 100
fs.xfs.age_buffer_centisecs = 1500
fs.xfs.inherit_nosymlinks = 0
fs.xfs.rotorstep = 1
fs.xfs.inherit_nodefrag = 1
fs.xfs.stats_clear = 0

There are also the vm dirty tunables in /proc.

I was wondering what are some things to tune for speed? I've already tuned the MD layer, but is there anything with XFS I can also tune?

echo "Setting read-ahead to 64MB for /dev/md3"
blockdev --setra 65536 /dev/md3

echo "Setting stripe_cache_size to 16MB for /dev/md3"
echo 16384 > /sys/block/md3/md/stripe_cache_size

(I also set max_sectors_kb to 128K, the chunk size, and disabled NCQ.)

Justin.

From owner-xfs@oss.sgi.com Fri Jun 15 13:38:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 13:38:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.8 required=5.0 tests=AWL,BAYES_50,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5FKchWt027338 for ; Fri, 15 Jun 2007 13:38:44 -0700 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.13.1/8.13.1) with ESMTP id l5FKcTu3004432; Fri, 15 Jun 2007 16:38:29 -0400 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l5FKcSrD030004; Fri, 15 Jun 2007 16:38:28 -0400 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l5FKcRaB028253; Fri, 15 Jun 2007 16:38:28 -0400 Message-ID: <4672F7A7.9040904@sandeen.net> Date: Fri, 15 Jun 2007 15:33:43 -0500 From: Eric Sandeen
User-Agent: Thunderbird 1.5.0.12 (X11/20070530) MIME-Version: 1.0 To: Justin Piszcz CC: xfs@oss.sgi.com, linux-raid@vger.kernel.org Subject: Re: XFS Tunables for High Speed Linux SW RAID5 Systems? References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11818 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs

Justin Piszcz wrote:
> Hi,
>
> I was wondering if the XFS folks can recommend any optimizations for high
> speed disk arrays using RAID5?
>
> As well as if there is anything else I can do on the MD side?
>
> fs.xfs.restrict_chown = 1
> fs.xfs.irix_sgid_inherit = 0
> fs.xfs.irix_symlink_mode = 0

OT but... /me wonders if some of these could go away by now... :)

-Eric

> fs.xfs.panic_mask = 0
> fs.xfs.error_level = 3
> fs.xfs.xfssyncd_centisecs = 3000
> fs.xfs.inherit_sync = 1
> fs.xfs.inherit_nodump = 1
> fs.xfs.inherit_noatime = 1
> fs.xfs.xfsbufd_centisecs = 100
> fs.xfs.age_buffer_centisecs = 1500
> fs.xfs.inherit_nosymlinks = 0
> fs.xfs.rotorstep = 1
> fs.xfs.inherit_nodefrag = 1
> fs.xfs.stats_clear = 0
>
> There is also vm/dirty tunable in /proc.
>
> I was wondering what are some things to tune for speed? I've already
> tuned the MD layer but is there anything with XFS I can also tune?
>
> echo "Setting read-ahead to 64MB for /dev/md3"
> blockdev --setra 65536 /dev/md3
>
> echo "Setting stripe_cache_size to 16MB for /dev/md3"
> echo 16384 > /sys/block/md3/md/stripe_cache_size
>
> (also set max_sectors_kb) to 128K (chunk size) and disable NCQ
>
> Justin.
> > From owner-xfs@oss.sgi.com Fri Jun 15 14:48:00 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 14:48:03 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.1 required=5.0 tests=BAYES_99,J_CHICKENPOX_42 autolearn=no version=3.2.0-pre1-r499012 Received: from df27.dot5hosting.com (df27.dot5hosting.com [72.22.92.27]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5FLlxWt012601 for ; Fri, 15 Jun 2007 14:48:00 -0700 Received: (qmail 29133 invoked by uid 3128); 15 Jun 2007 21:17:35 -0000 Received: from 127.0.0.1 by df27.dot5hosting.com (envelope-from , uid 80) with qmail-scanner-1.25st (clamdscan: 0.88/1245. spamassassin: 3.1.0. perlscan: 1.25st. Clear:RC:1(127.0.0.1):SA:0(3.4/5.0):. Processed in 2.826867 secs); 15 Jun 2007 21:17:35 -0000 Date: 15 Jun 2007 21:17:32 -0000 Message-ID: <20070615211732.29062.qmail@df27.dot5hosting.com> To: xfs@oss.sgi.com Subject: Work Opportunity From: Anglican Mission House Reply-To: missionhouse@alice.it MIME-Version: 1.0 Content-Type: text/plain Content-Transfer-Encoding: 8bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11819 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: missionhouse@alice.it Precedence: bulk X-list: xfs Anglican Mission House Work Opportunity........ ........Are you currently looking for an opportunity to work from home or direct from your computer. There is an sensational opportunity for you to work with Anglican Mission House. This is a fantastic opportunity to build an extremely rewarding career and these opportunities would suit people who need flexible working arrangements and Located anywhere in New Zealand, Canada and USA. 
We have got several opportunities available for customer service/ consultants/Accounting/management sectors with a negotiable good salary starting from $5,000.00 - $65,000.00. The successful candidate will need to have customer service experience and a computer with internet connnetion. Also you need to like working in a team environment and have good communication skills. **No prior experience required **Work from home --- No daily commute! **Spend more time with your family! *we are assuring you that at no time will you be required to make any upfront payments of your personal funds to us for whatever reason. **Computer at home with access to broadband and checking your email messages at least twice everyday! You can fill the application form below, so that you can start working with OmniPay company as our company representative/Payment coordinator. > - FULL NAME: > - ADDRESS: > - CITY: > - STATE: > - COUNTRY: > - TEL NUMBERS: > - FAX NUMBERS: > - Mobile NUMBERS: > - COMPANY NAME(if any): > - AGE: > - STATUS(MARRIED/SINGLE): > - Direct Mobile Number: > - Send your resume letter attached with all this information. PLEASE SUMMIT THE DETAILS & RESUME LETTER DIRECTLY TO THIS E-MAIL ADDRESS: missionhouse@rexian.com Anglican Mission House Advertment. 
Tel: +44-7024067187 E-mail: missionhouse@alice.it website: http://www.missionhouses.org/ From owner-xfs@oss.sgi.com Fri Jun 15 21:11:13 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Jun 2007 21:11:16 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5G4B9Wt015264 for ; Fri, 15 Jun 2007 21:11:11 -0700 Received: from [134.15.251.8] (melb-sw-corp-251-8.corp.sgi.com [134.15.251.8]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA14186; Sat, 16 Jun 2007 14:11:04 +1000 Message-ID: <467362CD.4020007@sgi.com> Date: Sat, 16 Jun 2007 14:10:53 +1000 From: Tim Shimmin User-Agent: Thunderbird 1.5.0.10 (Windows/20070221) MIME-Version: 1.0 To: David Chinner CC: xfs-dev , xfs-oss , asg-qa Subject: Re: Review: fix test 004 to account for reserved space References: <20070604063328.GT85884050@sgi.com> <20070615062335.GO86004887@sgi.com> In-Reply-To: <20070615062335.GO86004887@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11821 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Hi Dave, Yes that looks not bad. However, your last proposal, prompted from Nathan, was to adjust f_bavail differently than bfree to account for how much was available in the reserved block space.
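The f_bfree/f_bavail distinction Tim describes can be sketched with POSIX statvfs, which exposes the same pair of counters: f_bfree counts all free blocks including any reserved pool, f_bavail only what an unprivileged user may allocate, and df derives its Used column from the bfree side. This is only an illustrative sketch, not the coreutils df source:

```python
import os

def df_fields(path):
    """df-style total/used/available: 'used' comes from f_bfree
    (free-to-root, includes reserved blocks), while the 'available'
    column comes from f_bavail (excludes reserved blocks)."""
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    available_to_root = st.f_bfree * st.f_frsize
    available = st.f_bavail * st.f_frsize
    used = total - available_to_root
    return total, used, available

total, used, available = df_fields("/")
# The non-root 'available' can never exceed what is free to root:
assert available <= total - used
```

On a filesystem with a reserved pool, total - used (free to root) will exceed available by exactly the reserved amount, which is why a df-vs-superblock comparison has to account for it.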
struct statfs {
    long    f_type;      /* type of filesystem (see below) */
    long    f_bsize;     /* optimal transfer block size */
    long    f_blocks;    /* total data blocks in file system */
    long    f_bfree;     /* free blocks in fs */                  <------ includes reserved
    long    f_bavail;    /* free blocks avail to non-superuser */ <------ doesn't include reserved
    long    f_files;     /* total file nodes in file system */
    long    f_ffree;     /* free file nodes in fs */
    fsid_t  f_fsid;      /* file system id */
    long    f_namelen;   /* maximum length of filenames */
    long    f_spare[6];  /* spare for later */
};

And looking at the df code, it uses bfree in the calculation of space used... the 3rd field of df output.

df code:
---------------------------------------------------------------
    total = fsu.fsu_blocks;
    available = fsu.fsu_bavail;
    negate_available = (fsu.fsu_bavail_top_bit_set & (available != UINTMAX_MAX));
    available_to_root = fsu.fsu_bfree;
  }

  used = UINTMAX_MAX;
  negate_used = false;
  if (total != UINTMAX_MAX && available_to_root != UINTMAX_MAX)
    {
      used = total - available_to_root;
      negate_used = (total < available_to_root);
    }

  printf (" %*s %*s %*s ",
          width + col1_adjustment,
          df_readable (false, total, buf[0], input_units, output_units),
          width, df_readable (negate_used, used, buf[1], input_units, output_units),
          width, df_readable (negate_available, available, buf[2], input_units, output_units));
-----------------------------------------------------------------

example df output:
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda3             18864128   7476720  11387408  40% /
-------------------------------------------------------------------

And indeed we set "used" as a shell variable in test 004 from df but don't seem to do anything with it. I think we should check it to verify that we have set bfree correctly.

Cheers,
Tim.

David Chinner wrote:
> Ping?
>
> On Mon, Jun 04, 2007 at 04:33:28PM +1000, David Chinner wrote:
>
>> With the changes to use some space by default in only in memory
>> as a reserved pool, df and statfs will now output a free block
>> count that is slightly different to what is held in the superblock.
>>
>> Update the qa test to account for this change.
>>
>> Cheers,
>>
>> Dave.
>> --
>> Dave Chinner
>> Principal Engineer
>> SGI Australian Software Group
>>
>> ---
>>  xfstests/004 |   35 +++++++++++++++++++++++++----------
>>  1 file changed, 25 insertions(+), 10 deletions(-)
>>
>> Index: xfs-cmds/xfstests/004
>> ===================================================================
>> --- xfs-cmds.orig/xfstests/004  2006-11-14 19:57:39.000000000 +1100
>> +++ xfs-cmds/xfstests/004       2007-05-04 16:38:03.957537306 +1000
>> @@ -67,21 +67,36 @@ xfs_db -r -c "freesp -s" $SCRATCH_DEV >$
>>  echo "xfs_db for $SCRATCH_DEV" >>$seq.full
>>  cat $tmp.xfs_db >>$seq.full
>>
>> +eval `$XFS_IO_PROG -x -c resblks $SCRATCH_MNT 2>&1 \
>> +    | $AWK_PROG '/available/ { printf "resblks=%u\n", $5 }'`
>> +echo "resblks gave: resblks=$resblks" >>$seq.full
>> +
>>  # check the 'blocks' field from freesp command is OK
>>  # since 2.6.18, df does not report the 4 blocks per AG that cannot
>>  # be allocated, hence we check for that exact mismatch.
>> +# since ~2.6.22, reserved blocks are used by default and df does
>> +# not report them, hence check for an exact mismatch.
>>  perl -ne '
>> -    BEGIN { $avail ='$avail' * 512;
>> -            $answer="(no xfs_db free blocks line?)" }
>> -    /free blocks (\d+)$/ || next;
>> -    $freesp = $1 * '$dbsize';
>> -    if ($freesp == $avail) { $answer = "yes"; }
>> -    else {
>> +    BEGIN { $avail ='$avail' * 512;
>> +            $answer="(no xfs_db free blocks line?)" }
>> +    /free blocks (\d+)$/ || next;
>> +    $freesp = $1 * '$dbsize';
>> +    if ($freesp == $avail) {
>> +        $answer = "yes";
>> +    } else {
>>          $avail = $avail + (('$agcount' + 1) * '$dbsize' * 4);
>> -        if ($freesp == $avail) { $answer = "yes"; }
>> -        else { $answer = "no ($freesp != $avail)"; }
>> -    }
>> -    END { print "$answer\n" }
>> +        if ($freesp == $avail) {
>> +            $answer = "yes";
>> +        } else {
>> +            $avail = $avail + ('$resblks' * '$dbsize');
>> +            if ($freesp == $avail) {
>> +                $answer = "yes";
>> +            } else {
>> +                $answer = "no ($freesp != $avail)";
>> +            }
>> +        }
>> +    }
>> +    END { print "$answer\n" }
>>  ' <$tmp.xfs_db >$tmp.ans
>>  ans="`cat $tmp.ans`"
>>  echo "Checking blocks column same as df: $ans"
>>
>
>

From owner-xfs@oss.sgi.com Sat Jun 16 12:00:43 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 16 Jun 2007 12:00:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from relay.sw.ru (mailhub.sw.ru [195.214.233.200]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5GJ0fWt001130 for ; Sat, 16 Jun 2007 12:00:42 -0700 Received: from sw.ru ([192.168.3.106]) by relay.sw.ru (8.13.4/8.13.4) with SMTP id l5GIH5v5022325; Sat, 16 Jun 2007 22:17:06 +0400 (MSD) Received: by sw.ru (nbSMTP-1.00) for uid 1000 dmonakhov@sw.ru; Sat, 16 Jun 2007 22:17:54 +0400 (MSD) Date: Sat, 16 Jun 2007 22:17:51 +0400 From: Dmitriy Monakhov To: Christoph Hellwig , linux-kernel@vger.kernel.org, npiggin@suse.de, mark.fasheh@oracle.com, linux-ext4@vger.kernel.org, xfs@oss.sgi.com Subject: Re: iov_iter_fault_in_readable
fix Message-ID: <20070616181751.GE14349@localhost.sw.ru> Mail-Followup-To: Christoph Hellwig , linux-kernel@vger.kernel.org, npiggin@suse.de, mark.fasheh@oracle.com, linux-ext4@vger.kernel.org, xfs@oss.sgi.com References: <200705292119.l4TLJtAD011726@shell0.pdx.osdl.net> <20070613134005.GA13815@localhost.sw.ru> <20070613135759.GD13815@localhost.sw.ru> <20070614173153.GA14771@infradead.org> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <20070614173153.GA14771@infradead.org> User-Agent: Mutt/1.5.13 (2006-08-11) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11822 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dmonakhov@sw.ru Precedence: bulk X-list: xfs

On 18:31 Thu 14 Jun, Christoph Hellwig wrote:
> On Wed, Jun 13, 2007 at 05:57:59PM +0400, Dmitriy Monakhov wrote:
> > The function performs the check for a single region, without respect
> > to the segmented nature of the iovec. For example, writev no longer works :)
>
> Btw, could someone please start to collect all snippets like this in
> a nice simple regression test suite? If no one wants to start a new
> one we should probably just put it into xfsqa (which should be usable
> for other filesystems as well despite the name)

I've prepared a testcase (testcases/kernel/syscalls/writev/writev06.c) and sent it to the ltp mailing list.
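This is not the LTP testcase itself, but the case the fix has to cover can be sketched in a few lines: a single writev call carrying several iovec segments, whose full payload must land in the file (the broken fault-in check only honoured the first region):

```python
import os
import tempfile

# One writev syscall with several independent buffers (iovec segments);
# this is exactly the multi-segment case the fault-in check must walk.
segments = [b"first ", b"second ", b"third"]

fd, path = tempfile.mkstemp()
try:
    written = os.writev(fd, segments)   # single syscall, many segments
    os.lseek(fd, 0, os.SEEK_SET)
    data = os.read(fd, written + 1)     # read everything back
finally:
    os.close(fd)
    os.unlink(path)

# Every segment must have been written, not just the first one.
assert written == sum(len(s) for s in segments)
assert data == b"".join(segments)
```

A real regression test would additionally place the buffers so that later segments cross an unmapped or faulting page, which is the condition the original patch mishandled.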
From owner-xfs@oss.sgi.com Sat Jun 16 12:56:27 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 16 Jun 2007 12:56:30 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.5 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5GJuNWt015444 for ; Sat, 16 Jun 2007 12:56:26 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 7334BE7266; Sat, 16 Jun 2007 20:56:21 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id YG21w1qJ93js; Sat, 16 Jun 2007 20:52:55 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 141E1E7235; Sat, 16 Jun 2007 20:56:20 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1HzeNi-0002mt-6a; Sat, 16 Jun 2007 20:56:22 +0100 Message-ID: <46744065.6060605@dgreaves.com> Date: Sat, 16 Jun 2007 20:56:21 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, LVM general discussion and development Cc: linux-pm Subject: 2.6.22-rc4 XFS fails after hibernate/resume Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11823 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs This isn't a regression. 
I was seeing these problems on 2.6.21 (but 2.6.22 was in -rc so I waited to try it). I tried 2.6.22-rc4 (with Tejun's patches) to see if it had improved - no.

Note this is a different (desktop) machine to the one involved in my recent bugs.

The machine will work for days (continually powered up) without a problem and then exhibits a filesystem failure within minutes of a resume.

I know xfs/raid are OK with hibernate. Is lvm? The root filesystem is xfs on raid1 and that doesn't seem to have any problems.

System info:

/dev/mapper/video_vg-video_lv on /scratch type xfs (rw)

haze:~# vgdisplay
  --- Volume group ---
  VG Name               video_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               372.61 GB
  PE Size               4.00 MB
  Total PE              95389
  Alloc PE / Size       95389 / 372.61 GB
  Free  PE / Size       0 / 0
  VG UUID               I2gW2x-aHcC-kqzs-Efpd-Q7TE-dkWf-KpHSO7

haze:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               video_vg
  PV Size               372.62 GB / not usable 3.25 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              95389
  Free PE               0
  Allocated PE          95389
  PV UUID               IUig5k-460l-sMZc-23Iz-MMFl-Cfh9-XuBMiq

md1 : active raid5 sdd1[0] sda1[2] sdc1[1]
      390716672 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

00:00.0 Host bridge: VIA Technologies, Inc. VT8377 [KT400/KT600 AGP] Host Bridge (rev 80)
00:01.0 PCI bridge: VIA Technologies, Inc. VT8237 PCI Bridge
00:0a.0 Mass storage controller: Silicon Image, Inc. SiI 3112 [SATALink/SATARaid] Serial ATA Controller (rev 02)
00:0b.0 Ethernet controller: Marvell Technology Group Ltd. 88E8001 Gigabit Ethernet Controller (rev 12)
00:0f.0 RAID bus controller: VIA Technologies, Inc. VIA VT6420 SATA RAID Controller (rev 80)
00:0f.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06)
00:10.0 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
00:10.1 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
00:10.2 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
00:10.3 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
00:10.4 USB Controller: VIA Technologies, Inc. USB 2.0 (rev 86)
00:11.0 ISA bridge: VIA Technologies, Inc. VT8237 ISA bridge [KT600/K8T800/K8T890 South]
00:11.5 Multimedia audio controller: VIA Technologies, Inc. VT8233/A/8235/8237 AC97 Audio Controller (rev 60)
00:11.6 Communication controller: VIA Technologies, Inc. AC'97 Modem Controller (rev 80)
00:12.0 Ethernet controller: VIA Technologies, Inc. VT6102 [Rhine-II] (rev 78)
01:00.0 VGA compatible controller: ATI Technologies Inc RV280 [Radeon 9200 PRO] (rev 01)

tail end of info from dmesg:

 k_prepare_write+0x272/0x490
 [] xfs_iomap+0x391/0x4b0
 [] xfs_bmap+0x0/0x10
 [] xfs_map_blocks+0x47/0x90
 [] xfs_page_state_convert+0x3dc/0x7b0
 [] xfs_ilock+0x71/0xa0
 [] xfs_iunlock+0x85/0x90
 [] xfs_vm_writepage+0x60/0xf0
 [] __writepage+0x8/0x30
 [] write_cache_pages+0x1ff/0x320
 [] __writepage+0x0/0x30
 [] generic_writepages+0x20/0x30
 [] do_writepages+0x2b/0x50
 [] __filemap_fdatawrite_range+0x72/0x90
 [] xfs_file_fsync+0x0/0x80
 [] filemap_fdatawrite+0x23/0x30
 [] do_fsync+0x4e/0xb0
 [] __do_fsync+0x25/0x40
 [] syscall_call+0x7/0xb
=======================
Filesystem "dm-0": XFS internal error xfs_btree_check_sblock at line 334 of file fs/xfs/xfs_btree.c.
Caller 0xc01b27be [] xfs_btree_check_sblock+0x5b/0xd0 [] xfs_alloc_lookup+0x17e/0x390 [] xfs_alloc_lookup+0x17e/0x390 [] xfs_alloc_ag_vextent_near+0x59/0xa30 [] xfs_alloc_ag_vextent+0x8d/0x100 [] xfs_alloc_vextent+0x223/0x450 [] xfs_bmap_btalloc+0x400/0x770 [] xfs_iext_bno_to_ext+0x9d/0x1d0 [] xfs_bmapi+0x10bd/0x1490 [] xlog_grant_log_space+0x22e/0x2b0 [] xfs_log_reserve+0xc0/0xe0 [] xfs_iomap_write_allocate+0x27f/0x4f0 [] __block_prepare_write+0x421/0x490 [] __block_prepare_write+0x272/0x490 [] xfs_iomap+0x391/0x4b0 [] xfs_bmap+0x0/0x10 [] xfs_map_blocks+0x47/0x90 [] xfs_page_state_convert+0x3dc/0x7b0 [] xfs_ilock+0x71/0xa0 [] xfs_iunlock+0x85/0x90 [] xfs_vm_writepage+0x60/0xf0 [] __writepage+0x8/0x30 [] write_cache_pages+0x1ff/0x320 [] __writepage+0x0/0x30 [] generic_writepages+0x20/0x30 [] do_writepages+0x2b/0x50 [] __filemap_fdatawrite_range+0x72/0x90 [] xfs_file_fsync+0x0/0x80 [] filemap_fdatawrite+0x23/0x30 [] do_fsync+0x4e/0xb0 [] __do_fsync+0x25/0x40 [] syscall_call+0x7/0xb ======================= Filesystem "dm-0": XFS internal error xfs_btree_check_sblock at line 334 of file fs/xfs/xfs_btree.c. 
Caller 0xc01b27be [] xfs_btree_check_sblock+0x5b/0xd0 [] xfs_alloc_lookup+0x17e/0x390 [] xfs_alloc_lookup+0x17e/0x390 [] xfs_alloc_ag_vextent_near+0x59/0xa30 [] xfs_alloc_ag_vextent+0x8d/0x100 [] xfs_alloc_vextent+0x223/0x450 [] xfs_bmap_btalloc+0x400/0x770 [] xfs_iext_bno_to_ext+0x9d/0x1d0 [] xfs_bmapi+0x10bd/0x1490 [] xlog_grant_log_space+0x22e/0x2b0 [] xfs_log_reserve+0xc0/0xe0 [] xfs_iomap_write_allocate+0x27f/0x4f0 [] __block_prepare_write+0x421/0x490 [] __block_prepare_write+0x272/0x490 [] xfs_iomap+0x391/0x4b0 [] xfs_bmap+0x0/0x10 [] xfs_map_blocks+0x47/0x90 [] xfs_page_state_convert+0x3dc/0x7b0 [] xfs_ilock+0x71/0xa0 [] xfs_iunlock+0x85/0x90 [] xfs_vm_writepage+0x60/0xf0 [] __writepage+0x8/0x30 [] write_cache_pages+0x1ff/0x320 [] __writepage+0x0/0x30 [] generic_writepages+0x20/0x30 [] do_writepages+0x2b/0x50 [] __filemap_fdatawrite_range+0x72/0x90 [] xfs_file_fsync+0x0/0x80 [] filemap_fdatawrite+0x23/0x30 [] do_fsync+0x4e/0xb0 [] __do_fsync+0x25/0x40 [] syscall_call+0x7/0xb ======================= Filesystem "dm-0": XFS internal error xfs_btree_check_sblock at line 334 of file fs/xfs/xfs_btree.c. 
Caller 0xc01b27be [] xfs_btree_check_sblock+0x5b/0xd0 [] xfs_alloc_lookup+0x17e/0x390 [] xfs_alloc_lookup+0x17e/0x390 [] xfs_alloc_ag_vextent_near+0x59/0xa30 [] xfs_alloc_ag_vextent+0x8d/0x100 [] xfs_alloc_vextent+0x223/0x450 [] xfs_bmap_btalloc+0x400/0x770 [] xfs_iext_bno_to_ext+0x9d/0x1d0 [] xfs_bmapi+0x10bd/0x1490 [] xlog_grant_log_space+0x22e/0x2b0 [] xfs_log_reserve+0xc0/0xe0 [] xfs_iomap_write_allocate+0x27f/0x4f0 [] __block_prepare_write+0x421/0x490 [] __block_prepare_write+0x272/0x490 [] xfs_iomap+0x391/0x4b0 [] xfs_bmap+0x0/0x10 [] xfs_map_blocks+0x47/0x90 [] xfs_page_state_convert+0x3dc/0x7b0 [] xfs_ilock+0x71/0xa0 [] xfs_iunlock+0x85/0x90 [] xfs_vm_writepage+0x60/0xf0 [] __writepage+0x8/0x30 [] write_cache_pages+0x1ff/0x320 [] __writepage+0x0/0x30 [] generic_writepages+0x20/0x30 [] do_writepages+0x2b/0x50 [] __filemap_fdatawrite_range+0x72/0x90 [] xfs_file_fsync+0x0/0x80 [] filemap_fdatawrite+0x23/0x30 [] do_fsync+0x4e/0xb0 [] __do_fsync+0x25/0x40 [] syscall_call+0x7/0xb ======================= Filesystem "dm-0": XFS internal error xfs_btree_check_sblock at line 334 of file fs/xfs/xfs_btree.c. 
Caller 0xc01b27be [] xfs_btree_check_sblock+0x5b/0xd0 [] xfs_alloc_lookup+0x17e/0x390 [] xfs_alloc_lookup+0x17e/0x390 [] xfs_alloc_ag_vextent_near+0x59/0xa30 [] xfs_alloc_ag_vextent+0x8d/0x100 [] xfs_alloc_vextent+0x223/0x450 [] xfs_bmap_btalloc+0x400/0x770 [] xfs_iext_bno_to_ext+0x9d/0x1d0 [] xfs_bmapi+0x10bd/0x1490 [] xlog_grant_log_space+0x22e/0x2b0 [] xfs_log_reserve+0xc0/0xe0 [] xfs_iomap_write_allocate+0x27f/0x4f0 [] __block_prepare_write+0x421/0x490 [] __block_prepare_write+0x272/0x490 [] xfs_iomap+0x391/0x4b0 [] xfs_bmap+0x0/0x10 [] xfs_map_blocks+0x47/0x90 [] xfs_page_state_convert+0x3dc/0x7b0 [] xfs_ilock+0x71/0xa0 [] xfs_iunlock+0x85/0x90 [] xfs_vm_writepage+0x60/0xf0 [] __writepage+0x8/0x30 [] write_cache_pages+0x1ff/0x320 [] __writepage+0x0/0x30 [] generic_writepages+0x20/0x30 [] do_writepages+0x2b/0x50 [] __filemap_fdatawrite_range+0x72/0x90 [] xfs_file_fsync+0x0/0x80 [] filemap_fdatawrite+0x23/0x30 [] do_fsync+0x4e/0xb0 [] __do_fsync+0x25/0x40 [] syscall_call+0x7/0xb ======================= Filesystem "dm-0": XFS internal error xfs_btree_check_sblock at line 334 of file fs/xfs/xfs_btree.c. 
Caller 0xc01b27be [] xfs_btree_check_sblock+0x5b/0xd0 [] xfs_alloc_lookup+0x17e/0x390 [] xfs_alloc_lookup+0x17e/0x390 [] xfs_alloc_ag_vextent_near+0x59/0xa30 [] xfs_alloc_ag_vextent+0x8d/0x100 [] xfs_alloc_vextent+0x223/0x450 [] xfs_bmap_btalloc+0x400/0x770 [] xfs_iext_bno_to_ext+0x9d/0x1d0 [] xfs_bmapi+0x10bd/0x1490 [] xlog_grant_log_space+0x22e/0x2b0 [] xfs_log_reserve+0xc0/0xe0 [] xfs_iomap_write_allocate+0x27f/0x4f0 [] __block_prepare_write+0x421/0x490 [] __block_prepare_write+0x272/0x490 [] xfs_iomap+0x391/0x4b0 [] xfs_bmap+0x0/0x10 [] xfs_map_blocks+0x47/0x90 [] xfs_page_state_convert+0x3dc/0x7b0 [] xfs_ilock+0x71/0xa0 [] xfs_iunlock+0x85/0x90 [] xfs_vm_writepage+0x60/0xf0 [] __writepage+0x8/0x30 [] write_cache_pages+0x1ff/0x320 [] __writepage+0x0/0x30 [] generic_writepages+0x20/0x30 [] do_writepages+0x2b/0x50 [] __filemap_fdatawrite_range+0x72/0x90 [] xfs_file_fsync+0x0/0x80 [] filemap_fdatawrite+0x23/0x30 [] do_fsync+0x4e/0xb0 [] __do_fsync+0x25/0x40 [] syscall_call+0x7/0xb ======================= Filesystem "dm-0": XFS internal error xfs_btree_check_sblock at line 334 of file fs/xfs/xfs_btree.c. 
Caller 0xc01b27be
 [] xfs_btree_check_sblock+0x5b/0xd0
 [] xfs_alloc_lookup+0x17e/0x390
 [] xfs_alloc_lookup+0x17e/0x390
 [] xfs_alloc_ag_vextent_near+0x59/0xa30
 [] xfs_alloc_ag_vextent+0x8d/0x100
 [] xfs_alloc_vextent+0x223/0x450
 [] xfs_bmap_btalloc+0x400/0x770
 [] xfs_iext_bno_to_ext+0x9d/0x1d0
 [] xfs_bmapi+0x10bd/0x1490
 [] xlog_grant_log_space+0x22e/0x2b0
 [] xfs_log_reserve+0xc0/0xe0
 [] xfs_iomap_write_allocate+0x27f/0x4f0
 [] __block_prepare_write+0x421/0x490
 [] __block_prepare_write+0x272/0x490
 [] xfs_iomap+0x391/0x4b0
 [] xfs_bmap+0x0/0x10
 [] xfs_map_blocks+0x47/0x90
 [] xfs_page_state_convert+0x3dc/0x7b0
 [] xfs_ilock+0x71/0xa0
 [] xfs_iunlock+0x85/0x90
 [] xfs_vm_writepage+0x60/0xf0
 [] __writepage+0x8/0x30
 [] write_cache_pages+0x1ff/0x320
 [] __writepage+0x0/0x30
 [] generic_writepages+0x20/0x30
 [] do_writepages+0x2b/0x50
 [] __filemap_fdatawrite_range+0x72/0x90
 [] xfs_file_fsync+0x0/0x80
 [] filemap_fdatawrite+0x23/0x30
 [] do_fsync+0x4e/0xb0
 [] __do_fsync+0x25/0x40
 [] syscall_call+0x7/0xb
 =======================
Filesystem "dm-0": XFS internal error xfs_btree_check_sblock at line 334 of file fs/xfs/xfs_btree.c.
Caller 0xc01b27be
 [] xfs_btree_check_sblock+0x5b/0xd0
 [] xfs_alloc_lookup+0x17e/0x390
 [] xfs_alloc_lookup+0x17e/0x390
 [] xfs_free_ag_extent+0x2de/0x720
 [] xfs_free_extent+0xbb/0xf0
 [] xfs_bmap_finish+0x139/0x180
 [] xfs_bunmapi+0x0/0xf80
 [] xfs_itruncate_finish+0x26f/0x3f0
 [] xfs_inactive+0x48b/0x500
 [] xfs_fs_clear_inode+0x31/0x80
 [] clear_inode+0x54/0xf0
 [] truncate_inode_pages+0x17/0x20
 [] generic_delete_inode+0xd2/0x100
 [] iput+0x5c/0x70
 [] d_kill+0x35/0x60
 [] dput+0xa1/0x150
 [] sys_renameat+0x1d8/0x200
 [] dput+0x1c/0x150
 [] __fput+0x113/0x180
 [] mntput_no_expire+0x13/0x90
 [] sys_rename+0x27/0x30
 [] syscall_call+0x7/0xb
 =======================
xfs_force_shutdown(dm-0,0x8) called from line 4258 of file fs/xfs/xfs_bmap.c. Return address = 0xc02113cc
Filesystem "dm-0": Corruption of in-memory data detected. 
Shutting down filesystem: dm-0 Please umount the filesystem, and rectify the problem(s) From owner-xfs@oss.sgi.com Sat Jun 16 13:19:51 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 16 Jun 2007 13:19:55 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.3 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5GKJnWt019752 for ; Sat, 16 Jun 2007 13:19:50 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1HzeLR-0001oP-QD; Sat, 16 Jun 2007 20:54:01 +0100 Date: Sat, 16 Jun 2007 20:54:01 +0100 From: Christoph Hellwig To: Jens Axboe Cc: Tejun Heo , David Greaves , "Rafael J. Wysocki" , Linus Torvalds , David Chinner , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik Subject: Re: [PATCH] block: always requeue !fs requests at the front Message-ID: <20070616195401.GA6929@infradead.org> Mail-Followup-To: Christoph Hellwig , Jens Axboe , Tejun Heo , David Greaves , "Rafael J. 
Wysocki" , Linus Torvalds , David Chinner , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik References: <200706020122.49989.rjw@sisk.pl> <46706968.7000703@dgreaves.com> <200706140115.58733.rjw@sisk.pl> <46714ECF.8080203@gmail.com> <46715A66.8030806@suse.de> <20070615094246.GN29122@htj.dyndns.org> <20070615110544.GR6149@kernel.dk> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070615110544.GR6149@kernel.dk> User-Agent: Mutt/1.4.2.3i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11825 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Fri, Jun 15, 2007 at 01:05:44PM +0200, Jens Axboe wrote: > On Fri, Jun 15 2007, Tejun Heo wrote: > > SCSI marks internal commands with REQ_PREEMPT and push it at the front > > of the request queue using blk_execute_rq(). When entering suspended > > or frozen state, SCSI devices are quiesced using > > scsi_device_quiesce(). In quiesced state, only REQ_PREEMPT requests > > are processed. This is how SCSI blocks other requests out while > > suspending and resuming. As all internal commands are pushed at the > > front of the queue, this usually works. > > > > Unfortunately, this interacts badly with ordered requeueing. To > > preserve request order on requeueing (due to busy device, active EH or > > other failures), requests are sorted according to ordered sequence on > > requeue if IO barrier is in progress. > > > > The following sequence deadlocks. > > > > 1. IO barrier sequence issues. > > > > 2. Suspend requested. Queue is quiesced with part of all of IO > > barrier sequence at the front. > > > > 3. 
During suspending or resuming, SCSI issues internal command which > > gets deferred and requeued for some reason. As the command is > > issued after the IO barrier in #1, ordered requeueing code puts the > > request after IO barrier sequence. > > > > 4. The device is ready to process requests again but still is in > > quiesced state and the first request of the queue isn't > > REQ_PREEMPT, so command processing is deadlocked - > > suspending/resuming waits for the issued request to complete while > > the request can't be processed till device is put back into > > running state by resuming. > > > > This can be fixed by always putting !fs requests at the front when > > requeueing. > > > > The following thread reports this deadlock. > > > > http://thread.gmane.org/gmane.linux.kernel/537473 > > > > Signed-off-by: Tejun Heo > > Cc: Jens Axboe > > Cc: David Greaves > > --- > > Okay, it took a lot of hours of debugging but boiled down to a two-line > > fix. I feel so empty. :-) RAID6 triggers this reliably because it > > uses BIO_BARRIER heavily to update its superblock. The recent ATA > > suspend/resume rewrite is hit by this because it uses SCSI internal > > commands to spin down and up the drives for suspending and resuming. > > > > David, please test this. Jens, does it look okay? > > Yep looks good, except for the bad multi-line comment style, but that's > minor stuff ;-) > > Acked-by: Jens Axboe I'd much much prefer having a description of the problem in the actual comment than a hyperlink. There's just too much chance of the latter breaking over time, and it's impossible to update it when things change that should be reflected in the comment. 
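The deadlock Tejun describes can be modelled in a few lines of userspace C. This is a sketch only, not the block-layer code: the array queue, requeue() and the quiesce rule below are simplified stand-ins for the elevator requeue path and scsi_device_quiesce().

```c
/* Userspace model of why a deferred internal (!fs) command must be
 * requeued at the front of a quiesced queue, not behind a barrier. */
#include <assert.h>
#include <string.h>

#define MAXQ 16

/* preempt: 1 = internal command (REQ_PREEMPT), 0 = normal fs request */
struct req { int preempt; };

struct queue {
    struct req q[MAXQ];   /* q[0] is the head, dispatched first */
    int n;
};

/* Requeue a deferred request either at the front or at the back. */
static void requeue(struct queue *qu, struct req r, int at_front)
{
    if (at_front) {
        memmove(&qu->q[1], &qu->q[0], (size_t)qu->n * sizeof(struct req));
        qu->q[0] = r;
    } else {
        qu->q[qu->n] = r;
    }
    qu->n++;
}

/* A quiesced queue dispatches only from the head, and only REQ_PREEMPT
 * requests: a non-preempt request at the head means no progress. */
static int quiesced_can_progress(const struct queue *qu)
{
    return qu->n > 0 && qu->q[0].preempt;
}
```

With a barrier-sequence (fs) request already at the head, back-of-queue requeueing leaves the internal command stuck behind it and quiesced_can_progress() stays false forever; front-of-queue requeueing makes it the head request and the suspend/resume path can complete.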
From owner-xfs@oss.sgi.com Sat Jun 16 13:19:49 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 16 Jun 2007 13:19:51 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5GKJlWt019741 for ; Sat, 16 Jun 2007 13:19:48 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1HzeMW-0001ov-9u; Sat, 16 Jun 2007 20:55:08 +0100 Date: Sat, 16 Jun 2007 20:55:08 +0100 From: Christoph Hellwig To: David Chinner Cc: xfs-dev , xfs-oss , asg-qa Subject: Re: Review: fix test 004 to account for reserved space Message-ID: <20070616195508.GB6929@infradead.org> References: <20070604063328.GT85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070604063328.GT85884050@sgi.com> User-Agent: Mutt/1.4.2.3i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11824 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Mon, Jun 04, 2007 at 04:33:28PM +1000, David Chinner wrote: > With the changes to use some space by default only in memory > as a reserved pool, df and statfs will now output a free block > count that is slightly different to what is held in the superblock. > > Update the qa test to account for this change. I think we should rather subtract the amount of internally reserved blocks from the return value in xfs_statvfs. 
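Christoph's suggestion amounts to hiding the reserved pool at the statfs boundary rather than patching the test. A minimal sketch of that arithmetic follows; the struct and field names here are illustrative stand-ins, not the real xfs_mount layout.

```c
/* Sketch: subtract the in-memory reserved block pool from the free
 * count reported to statfs/df, clamping at zero. */
#include <assert.h>
#include <stdint.h>

struct mount_counts {
    uint64_t fdblocks;  /* free data blocks as counted on disk */
    uint64_t resblks;   /* blocks held back in the in-memory reserved pool */
};

/* Free-block count to report via statvfs: users never see the
 * reserved pool, so df and the superblock no longer disagree. */
static uint64_t statvfs_bfree(const struct mount_counts *m)
{
    return m->fdblocks > m->resblks ? m->fdblocks - m->resblks : 0;
}
```

The clamp matters: on a nearly full filesystem the reserved pool can exceed the remaining free count, and a report of zero is preferable to an unsigned underflow.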
From owner-xfs@oss.sgi.com Sat Jun 16 13:38:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 16 Jun 2007 13:38:55 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.4 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5GKcoWt023305 for ; Sat, 16 Jun 2007 13:38:52 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1Hzf2p-000236-8O; Sat, 16 Jun 2007 21:38:51 +0100 Date: Sat, 16 Jun 2007 21:38:51 +0100 From: Christoph Hellwig To: David Chinner Cc: xfs-dev , xfs-oss Subject: Re: Review: Multi-File Data Streams V2 Message-ID: <20070616203851.GA7817@infradead.org> References: <20070613041629.GI86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070613041629.GI86004887@sgi.com> User-Agent: Mutt/1.4.2.3i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11826 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs Thanks, this version looks a lot better now. The pip checks in xfs_inode.c are still in, but I'm pretty sure they're not necessary, and even if they were necessary they'd need a good comment explaining why. The patch still hooks into xfs_close despite your comment that you updated it for the removal of it. I still strongly believe the mru cache should not be inside xfs. It's a completely generic library function and should go into lib/ so it's available to all of the kernel. 
That means it'll need some codingstyle updates and proper kerneldoc comments, though. From owner-xfs@oss.sgi.com Sat Jun 16 15:41:12 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 16 Jun 2007 15:41:17 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.5 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from ogre.sisk.pl (ogre.sisk.pl [217.79.144.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5GMfAWt012044 for ; Sat, 16 Jun 2007 15:41:12 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by ogre.sisk.pl (Postfix) with ESMTP id 1454B4BE46; Sun, 17 Jun 2007 00:22:47 +0200 (CEST) Received: from ogre.sisk.pl ([127.0.0.1]) by localhost (ogre.sisk.pl [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 24543-09; Sun, 17 Jun 2007 00:22:46 +0200 (CEST) Received: from [192.168.100.102] (nat-be2.aster.pl [212.76.37.166]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by ogre.sisk.pl (Postfix) with ESMTP id 82B684B665; Sun, 17 Jun 2007 00:22:46 +0200 (CEST) From: "Rafael J. 
Wysocki" To: David Greaves Subject: Re: 2.6.22-rc4 XFS fails after hibernate/resume Date: Sun, 17 Jun 2007 00:47:19 +0200 User-Agent: KMail/1.9.5 Cc: "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, LVM general discussion and development , linux-pm References: <46744065.6060605@dgreaves.com> In-Reply-To: <46744065.6060605@dgreaves.com> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200706170047.20559.rjw@sisk.pl> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: amavisd-new at ogre.sisk.pl using MkS_Vir for Linux X-Virus-Status: Clean X-archive-position: 11827 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: rjw@sisk.pl Precedence: bulk X-list: xfs On Saturday, 16 June 2007 21:56, David Greaves wrote: > This isn't a regression. > > I was seeing these problems on 2.6.21 (but 22 was in -rc so I waited to try it). > I tried 2.6.22-rc4 (with Tejun's patches) to see if it had improved - no. > > Note this is a different (desktop) machine to that involved my recent bugs. > > The machine will work for days (continually powered up) without a problem and > then exhibits a filesystem failure within minutes of a resume. > > I know xfs/raid are OK with hibernate. Is lvm? > > The root filesystem is xfs on raid1 and that doesn't seem to have any problems. What is the partition that's showing problems? How's it set up, on how many drives etc.? Also, is the dmesg output below from right after the resume? Greetings, Rafael -- "Premature optimization is the root of all evil." 
- Donald Knuth From owner-xfs@oss.sgi.com Sat Jun 16 15:59:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 16 Jun 2007 15:59:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=3.0 required=5.0 tests=BAYES_60,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5GMxgWt015321 for ; Sat, 16 Jun 2007 15:59:43 -0700 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.13.1/8.13.1) with ESMTP id l5GMTvpB003612; Sat, 16 Jun 2007 18:29:57 -0400 Received: from pobox.brisbane.redhat.com (pobox.brisbane.redhat.com [172.16.44.10]) by int-mx1.corp.redhat.com (8.13.1/8.13.1) with ESMTP id l5GMTu8b032292; Sat, 16 Jun 2007 18:29:56 -0400 Received: from [172.16.44.152] (friday.brisbane.redhat.com [172.16.44.152]) by pobox.brisbane.redhat.com (8.13.1/8.13.1) with ESMTP id l5GMTpNq011940; Sun, 17 Jun 2007 08:29:52 +1000 Message-ID: <4674645F.5000906@gmail.com> Date: Sun, 17 Jun 2007 08:29:51 +1000 From: David Robinson User-Agent: Thunderbird 1.5.0.10 (X11/20070302) MIME-Version: 1.0 To: LVM general discussion and development CC: "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm Subject: Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume References: <46744065.6060605@dgreaves.com> In-Reply-To: <46744065.6060605@dgreaves.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11828 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: zxvdr.au@gmail.com Precedence: bulk X-list: xfs David Greaves wrote: > This isn't a regression. 
> > I was seeing these problems on 2.6.21 (but 22 was in -rc so I waited to > try it). > I tried 2.6.22-rc4 (with Tejun's patches) to see if it had improved - no. > > Note this is a different (desktop) machine to that involved in my recent bugs. > > The machine will work for days (continually powered up) without a > problem and then exhibits a filesystem failure within minutes of a resume. > > I know xfs/raid are OK with hibernate. Is lvm? I have LVM working with hibernate w/o any problems (w/ ext3). If there were a problem it wouldn't be with LVM but with device-mapper, and I doubt there's a problem with either. The stack trace shows that you're within XFS code (but it's likely hibernate). You can easily check whether it's LVM/device-mapper: 1) check "dmsetup table" - it should be the same before hibernating and after resuming. 2) read directly from the LV - ie, "dd if=/dev/mapper/video_vg-video_lv of=/dev/null bs=10M count=200". If dmsetup shows the same info and you can read directly from the LV I doubt it would be an LVM/device-mapper problem. Cheers, Dave From owner-xfs@oss.sgi.com Sun Jun 17 00:30:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 17 Jun 2007 00:30:15 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.0 required=5.0 tests=BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from kernel.dk (brick.kernel.dk [80.160.20.94]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5H7U6Wt032455 for ; Sun, 17 Jun 2007 00:30:09 -0700 Received: from nelson.home.kernel.dk (nelson.home.kernel.dk [192.168.0.33]) by kernel.dk (Postfix) with ESMTP id 2F9C725773C; Sun, 17 Jun 2007 09:30:03 +0200 (CEST) Received: by nelson.home.kernel.dk (Postfix, from userid 1000) id 7ED6D621F; Sun, 17 Jun 2007 09:29:57 +0200 (CEST) Date: Sun, 17 Jun 2007 09:29:57 +0200 From: Jens Axboe To: Christoph Hellwig , Tejun Heo , David Greaves , "Rafael J. 
Wysocki" , Linus Torvalds , David Chinner , xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , linux-pm , Neil Brown , Jeff Garzik Subject: Re: [PATCH] block: always requeue !fs requests at the front Message-ID: <20070617072957.GJ6149@kernel.dk> References: <200706020122.49989.rjw@sisk.pl> <46706968.7000703@dgreaves.com> <200706140115.58733.rjw@sisk.pl> <46714ECF.8080203@gmail.com> <46715A66.8030806@suse.de> <20070615094246.GN29122@htj.dyndns.org> <20070615110544.GR6149@kernel.dk> <20070616195401.GA6929@infradead.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070616195401.GA6929@infradead.org> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11829 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jens.axboe@oracle.com Precedence: bulk X-list: xfs On Sat, Jun 16 2007, Christoph Hellwig wrote: > On Fri, Jun 15, 2007 at 01:05:44PM +0200, Jens Axboe wrote: > > On Fri, Jun 15 2007, Tejun Heo wrote: > > > SCSI marks internal commands with REQ_PREEMPT and push it at the front > > > of the request queue using blk_execute_rq(). When entering suspended > > > or frozen state, SCSI devices are quiesced using > > > scsi_device_quiesce(). In quiesced state, only REQ_PREEMPT requests > > > are processed. This is how SCSI blocks other requests out while > > > suspending and resuming. As all internal commands are pushed at the > > > front of the queue, this usually works. > > > > > > Unfortunately, this interacts badly with ordered requeueing. To > > > preserve request order on requeueing (due to busy device, active EH or > > > other failures), requests are sorted according to ordered sequence on > > > requeue if IO barrier is in progress. > > > > > > The following sequence deadlocks. > > > > > > 1. IO barrier sequence issues. > > > > > > 2. Suspend requested. 
Queue is quiesced with part or all of IO > > > barrier sequence at the front. > > > > > > 3. During suspending or resuming, SCSI issues internal command which > > > gets deferred and requeued for some reason. As the command is > > > issued after the IO barrier in #1, ordered requeueing code puts the > > > request after IO barrier sequence. > > > > > > 4. The device is ready to process requests again but still is in > > > quiesced state and the first request of the queue isn't > > > REQ_PREEMPT, so command processing is deadlocked - > > > suspending/resuming waits for the issued request to complete while > > > the request can't be processed till device is put back into > > > running state by resuming. > > > > > > This can be fixed by always putting !fs requests at the front when > > > requeueing. > > > > > > The following thread reports this deadlock. > > > > > > http://thread.gmane.org/gmane.linux.kernel/537473 > > > > > > Signed-off-by: Tejun Heo > > > Cc: Jens Axboe > > > Cc: David Greaves > > > --- > > > Okay, it took a lot of hours of debugging but boiled down to a two-line > > > fix. I feel so empty. :-) RAID6 triggers this reliably because it > > > uses BIO_BARRIER heavily to update its superblock. The recent ATA > > > suspend/resume rewrite is hit by this because it uses SCSI internal > > > commands to spin down and up the drives for suspending and resuming. > > > > > > David, please test this. Jens, does it look okay? > > > > Yep looks good, except for the bad multi-line comment style, but that's > > minor stuff ;-) > > > > Acked-by: Jens Axboe > > I'd much much prefer having a description of the problem in the actual > comment than a hyperlink. There's just too much chance of the latter > breaking over time, and it's impossible to update it when things change > that should be reflected in the comment. The actual commit text is very good though, but I agree - I don't think the url comment is worth anything. I did consider just killing it. 
However, the comment does describe the problem, so I think it's still ok. -- Jens Axboe From owner-xfs@oss.sgi.com Mon Jun 18 12:47:15 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 12:47:41 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5IJkvdu002574 for ; Mon, 18 Jun 2007 12:47:13 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA01207; Mon, 18 Jun 2007 11:53:46 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5I1rjAf125523706; Mon, 18 Jun 2007 11:53:45 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5I1ripg126190304; Mon, 18 Jun 2007 11:53:44 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 18 Jun 2007 11:53:44 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: [PATCH 2 of 3] Multi-File Data Streams V3 - quota inode avoidance Message-ID: <20070618015344.GX86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11831 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs The quota inodes should have no parent inode. 
If we have a fake parent inode we then end up with reference counting issues due to filestreams associations to non-existent inodes. --- fs/xfs/quota/xfs_qm.c | 3 +-- fs/xfs/xfs_inode.c | 13 +++++++++---- 2 files changed, 10 insertions(+), 6 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/quota/xfs_qm.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/quota/xfs_qm.c 2007-06-18 10:43:41.000000000 +1000 +++ 2.6.x-xfs-new/fs/xfs/quota/xfs_qm.c 2007-06-18 10:44:59.388520480 +1000 @@ -65,7 +65,6 @@ kmem_zone_t *qm_dqtrxzone; static struct shrinker *xfs_qm_shaker; static cred_t xfs_zerocr; -static xfs_inode_t xfs_zeroino; STATIC void xfs_qm_list_init(xfs_dqlist_t *, char *, int); STATIC void xfs_qm_list_destroy(xfs_dqlist_t *); @@ -1415,7 +1414,7 @@ xfs_qm_qino_alloc( return error; } - if ((error = xfs_dir_ialloc(&tp, &xfs_zeroino, S_IFREG, 1, 0, + if ((error = xfs_dir_ialloc(&tp, NULL, S_IFREG, 1, 0, &xfs_zerocr, 0, 1, ip, &committed))) { xfs_trans_cancel(tp, XFS_TRANS_RELEASE_LOG_RES | XFS_TRANS_ABORT); Index: 2.6.x-xfs-new/fs/xfs/xfs_inode.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_inode.c 2007-06-18 10:40:22.000000000 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_inode.c 2007-06-18 10:58:42.521950934 +1000 @@ -1077,6 +1077,11 @@ xfs_iread_extents( * also returns the [locked] bp pointing to the head of the freelist * as ialloc_context. The caller should hold this buffer across * the commit and pass it back into this routine on the second call. + * + * If we are allocating quota inodes, we do not have a parent inode + * to attach to or associate with (i.e. pip == NULL) because they + * are not linked into the directory structure - they are attached + * directly to the superblock - and so have no parent. */ int xfs_ialloc( @@ -1102,7 +1107,7 @@ xfs_ialloc( * Call the space management code to pick * the on-disk inode to be allocated. 
*/ - error = xfs_dialloc(tp, pip->i_ino, mode, okalloc, + error = xfs_dialloc(tp, pip ? pip->i_ino : 0, mode, okalloc, ialloc_context, call_again, &ino); if (error != 0) { return error; @@ -1156,7 +1161,7 @@ xfs_ialloc( if ((prid != 0) && (ip->i_d.di_version == XFS_DINODE_VERSION_1)) xfs_bump_ino_vers2(tp, ip); - if (XFS_INHERIT_GID(pip, vp->v_vfsp)) { + if (pip && XFS_INHERIT_GID(pip, vp->v_vfsp)) { ip->i_d.di_gid = pip->i_d.di_gid; if ((pip->i_d.di_mode & S_ISGID) && (mode & S_IFMT) == S_IFDIR) { ip->i_d.di_mode |= S_ISGID; @@ -1198,7 +1203,7 @@ xfs_ialloc( flags |= XFS_ILOG_DEV; break; case S_IFREG: - if (xfs_inode_is_filestream(pip)) { + if (pip && xfs_inode_is_filestream(pip)) { error = xfs_filestream_associate(pip, ip); if (error) return error; @@ -1206,7 +1211,7 @@ xfs_ialloc( } /* fall through */ case S_IFDIR: - if (pip->i_d.di_flags & XFS_DIFLAG_ANY) { + if (pip && (pip->i_d.di_flags & XFS_DIFLAG_ANY)) { uint di_flags = 0; if ((mode & S_IFMT) == S_IFDIR) { From owner-xfs@oss.sgi.com Mon Jun 18 12:47:26 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 12:47:40 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5IJkve2002574 for ; Mon, 18 Jun 2007 12:47:24 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA29081; Mon, 18 Jun 2007 10:05:07 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5I055Af126050629; Mon, 18 Jun 2007 10:05:06 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5I052t7125239672; 
Mon, 18 Jun 2007 10:05:02 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 18 Jun 2007 10:05:02 +1000 From: David Chinner To: Justin Piszcz Cc: xfs@oss.sgi.com, linux-raid@vger.kernel.org Subject: Re: XFS Tunables for High Speed Linux SW RAID5 Systems? Message-ID: <20070618000502.GU86004887@sgi.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11831 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Jun 15, 2007 at 04:36:07PM -0400, Justin Piszcz wrote: > Hi, > > I was wondering if the XFS folks can recommend any optimizations for high > speed disk arrays using RAID5? [sysctls snipped] None of those options will make much difference to performance. mkfs parameters are the big ticket item here.... > There is also vm/dirty tunable in /proc. That changes benchmark times by starting writeback earlier, but doesn't affect actual writeback speed. > I was wondering what are some things to tune for speed? I've already > tuned the MD layer but is there anything with XFS I can also tune? > > echo "Setting read-ahead to 64MB for /dev/md3" > blockdev --setra 65536 /dev/md3 Why so large? That's likely to cause readahead thrashing problems under low memory.... > echo "Setting stripe_cache_size to 16MB for /dev/md3" > echo 16384 > /sys/block/md3/md/stripe_cache_size > > (also set max_sectors_kb) to 128K (chunk size) and disable NCQ Why do that? You want XFS to issue large I/Os and the block layer to split them across all the disks. i.e. you are preventing full stripe writes from occurring by doing that. Cheers, Dave. 
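Dave's point about max_sectors_kb is simple arithmetic: a full RAID5 stripe carries (disks - 1) chunks of data, so capping each request at one chunk guarantees no single I/O can ever cover a full stripe, forcing read-modify-write of parity. A sketch of the numbers, where the 5-disk array is a hypothetical example (the thread does not state the disk count):

```c
/* Back-of-envelope RAID5 stripe arithmetic, all sizes in KiB. */
#include <assert.h>

/* Data capacity of one full RAID5 stripe: every chunk in the stripe
 * except the one holding parity carries data. */
static int full_stripe_kb(int chunk_kb, int ndisks)
{
    return chunk_kb * (ndisks - 1);
}

/* A single I/O can only trigger a full-stripe write (no parity
 * read-modify-write) if the request-size cap reaches stripe width. */
static int can_full_stripe_write(int max_io_kb, int chunk_kb, int ndisks)
{
    return max_io_kb >= full_stripe_kb(chunk_kb, ndisks);
}
```

With a 128k chunk on five disks the stripe width is 512k of data, so a 128k max_sectors_kb cap makes every write a partial-stripe update; letting XFS issue stripe-width or larger I/Os and having the block layer split them across the disks is what enables full-stripe writes.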
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 18 12:47:02 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 12:47:41 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5IJkvdo002574 for ; Mon, 18 Jun 2007 12:46:59 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id AAA18993; Tue, 19 Jun 2007 00:50:18 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5IEoEAf125967888; Tue, 19 Jun 2007 00:50:15 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5IEo7eG125863748; Tue, 19 Jun 2007 00:50:07 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 19 Jun 2007 00:50:07 +1000 From: David Chinner To: David Greaves Cc: David Robinson , LVM general discussion and development , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid Subject: Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume Message-ID: <20070618145007.GE85884050@sgi.com> References: <46744065.6060605@dgreaves.com> <4674645F.5000906@gmail.com> <46751D37.5020608@dgreaves.com> <4676390E.6010202@dgreaves.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4676390E.6010202@dgreaves.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11831 X-ecartis-version: Ecartis 
On Mon, Jun 18, 2007 at 08:49:34AM +0100, David Greaves wrote:
> David Greaves wrote:
> > OK, that gave me an idea.
> >
> > Freeze the filesystem
> > md5sum the lvm
> > hibernate
> > resume
> > md5sum the lvm
> >
> > So the lvm and below looks OK...
> >
> > I'll see how it behaves now the filesystem has been frozen/thawed over
> > the hibernate...
>
> And it appears to behave well. (A few hours compile/clean cycling kernel
> builds on that filesystem were OK).
>
> Historically I've done:
>   sync
>   echo platform > /sys/power/disk
>   echo disk > /sys/power/state
>   # resume
>
> and had filesystem corruption (only on this machine, my other hibernating
> xfs machines don't have this problem)
>
> So doing:
>   xfs_freeze -f /scratch
>   sync
>   echo platform > /sys/power/disk
>   echo disk > /sys/power/state
>   # resume
>   xfs_freeze -u /scratch
>
> Works (for now - more usage testing tonight)

Verrry interesting.

What you were seeing was an XFS shutdown occurring because the free space
btree was corrupted. IOWs, the process of suspend/resume has resulted in
either bad data being written to disk, the correct data not being written
to disk, or the cached block being corrupted in memory.

If you run xfs_check on the filesystem after it has shut down after a
resume, can you tell us if it reports on-disk corruption? Note: do not run
xfs_repair to check this - it does not check the free space btrees;
instead it simply rebuilds them from scratch. If xfs_check reports an
error, then run xfs_repair to fix it up.

FWIW, I'm on record stating that "sync" is not sufficient to quiesce an
XFS filesystem for a suspend/resume to work safely, and have argued that
the only safe thing to do is freeze the filesystem before suspend and thaw
it after resume. This is why I originally asked you to test that with the
other problem that you reported.
Up until this point in time, there's been no evidence to prove either side
of the argument......

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Mon Jun 18 12:47:23 2007
Date: Mon, 18 Jun 2007 11:51:01 +1000
From: David Chinner
To: xfs-dev
Cc: xfs-oss
Subject: [PATCH 1 of 3] Multi-File Data Streams V3
Message-ID: <20070618015101.GW86004887@sgi.com>
Concurrent Multi-File Data Streams

In media spaces, video is often stored in a frame-per-file format. When
dealing with uncompressed realtime HD video streams in this format, it is
crucial that files do not get fragmented and that multiple files are placed
contiguously on disk.

When multiple streams are being ingested and played out at the same time, it
is critical that the filesystem does not cross the streams and interleave
them together, as this creates seek and readahead cache miss latency and
prevents both ingest and playout from meeting frame rate targets.

This patch introduces a "stream of files" concept in the allocator to place
all the data from a single stream contiguously on disk, so that RAID array
readahead can be used effectively. Each additional stream gets placed in a
different allocation group within the filesystem, thereby ensuring that we
don't cross any streams. When an AG fills up, we select a new AG for the
stream that is not in use.

The core of the functionality is the stream tracking - each inode that we
create in a directory needs to be associated with the directory's stream.
Hence every time we create a file, we look up the directory's stream object
and associate the new file with that object. Once we have a stream object
for a file, we use the AG that the stream object points to for allocations.
If we can't allocate in that AG (e.g. it is full), we move the entire stream
to another AG. Other inodes in the same stream are moved to the new AG on
their next allocation (i.e. lazy update).

Stream objects are kept in a cache and hold a reference on the inode. Hence
the inode cannot be reclaimed while there is an outstanding stream
reference. This means that on unlink we need to remove the stream
association, and we also need to flush all the associations on certain
events that want to reclaim all unreferenced inodes (e.g. filesystem
freeze).
Credits:

The original filestream allocator on Irix was written by Glen Overby, the
Linux port and rewrite by Nathan Scott and Sam Vaughan (none of whom work
at SGI any more). I just picked up the pieces and beat it repeatedly with a
big stick until it passed XFSQA.

Version 3:
o use proper define for mount args
o make filestreams inode flag mark child inodes correctly so that
  filestreams are applied to them even if they are not tagged
o split quota inode filestreams avoidance out into a separate patch
o move xfs_close() hooks for stream destruction on unlink to xfs_release()

Version 2:
o fold xfs_bmap_filestream() into xfs_bmap_btalloc()
o use ktrace infrastructure for debug code in xfs_filestream.c
o wrap repeated filestream inode checks
o rename per-AG filestream reference counting macros and convert to
  static inline
o remove debug from xfs_mru_cache.[ch]
o fix function call/error check formatting
o removed unnecessary fstrm_mnt_data_t structure
o cleaned up ASSERT checks
o cleaned up namespace-less globals in xfs_mru_cache.c
o removed unnecessary casts

---
 fs/xfs/Makefile-linux-2.6      |    2
 fs/xfs/linux-2.6/xfs_globals.c |    1
 fs/xfs/linux-2.6/xfs_linux.h   |    1
 fs/xfs/linux-2.6/xfs_sysctl.c  |   11
 fs/xfs/linux-2.6/xfs_sysctl.h  |    2
 fs/xfs/xfs.h                   |    1
 fs/xfs/xfs_ag.h                |    1
 fs/xfs/xfs_bmap.c              |   68 +++
 fs/xfs/xfs_clnt.h              |    2
 fs/xfs/xfs_dinode.h            |    4
 fs/xfs/xfs_filestream.c        |  744 +++++++++++++++++++++++++++++++++++++++++
 fs/xfs/xfs_filestream.h        |  136 +++++++
 fs/xfs/xfs_fs.h                |    1
 fs/xfs/xfs_fsops.c             |    2
 fs/xfs/xfs_inode.c             |   16
 fs/xfs/xfs_inode.h             |    1
 fs/xfs/xfs_mount.h             |    4
 fs/xfs/xfs_mru_cache.c         |  494 +++++++++++++++++++++++++++
 fs/xfs/xfs_mru_cache.h         |  219 ++++++++++++
 fs/xfs/xfs_vfsops.c            |   26 +
 fs/xfs/xfs_vnodeops.c          |   24 +
 fs/xfs/xfsidbg.c               |  188 ++++++++++
 22 files changed, 1940 insertions(+), 8 deletions(-)

Index: 2.6.x-xfs-new/fs/xfs/Makefile-linux-2.6
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/Makefile-linux-2.6	2007-06-13
13:58:15.727518215 +1000 +++ 2.6.x-xfs-new/fs/xfs/Makefile-linux-2.6 2007-06-13 14:11:28.440325006 +1000 @@ -54,6 +54,7 @@ xfs-y += xfs_alloc.o \ xfs_dir2_sf.o \ xfs_error.o \ xfs_extfree_item.o \ + xfs_filestream.o \ xfs_fsops.o \ xfs_ialloc.o \ xfs_ialloc_btree.o \ @@ -67,6 +68,7 @@ xfs-y += xfs_alloc.o \ xfs_log.o \ xfs_log_recover.o \ xfs_mount.o \ + xfs_mru_cache.o \ xfs_rename.o \ xfs_trans.o \ xfs_trans_ail.o \ Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_globals.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_globals.c 2007-06-13 13:58:15.739516660 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_globals.c 2007-06-13 14:11:28.592305170 +1000 @@ -49,6 +49,7 @@ xfs_param_t xfs_params = { .inherit_nosym = { 0, 0, 1 }, .rotorstep = { 1, 1, 255 }, .inherit_nodfrg = { 0, 1, 1 }, + .fstrm_timer = { 1, 50, 3600*100}, }; /* Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_linux.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_linux.h 2007-06-13 13:58:15.739516660 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_linux.h 2007-06-13 14:11:28.600304126 +1000 @@ -132,6 +132,7 @@ #define xfs_inherit_nosymlinks xfs_params.inherit_nosym.val #define xfs_rotorstep xfs_params.rotorstep.val #define xfs_inherit_nodefrag xfs_params.inherit_nodfrg.val +#define xfs_fstrm_centisecs xfs_params.fstrm_timer.val #define current_cpu() (raw_smp_processor_id()) #define current_pid() (current->pid) Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_sysctl.c 2007-06-13 13:58:15.739516660 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.c 2007-06-13 14:11:28.604303604 +1000 @@ -243,6 +243,17 @@ static ctl_table xfs_table[] = { .extra1 = &xfs_params.inherit_nodfrg.min, .extra2 = &xfs_params.inherit_nodfrg.max }, + { + .ctl_name = XFS_FILESTREAM_TIMER, + .procname = 
"filestream_centisecs", + .data = &xfs_params.fstrm_timer.val, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec_minmax, + .strategy = &sysctl_intvec, + .extra1 = &xfs_params.fstrm_timer.min, + .extra2 = &xfs_params.fstrm_timer.max, + }, /* please keep this the last entry */ #ifdef CONFIG_PROC_FS { Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_sysctl.h 2007-06-13 13:58:15.739516660 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.h 2007-06-13 14:11:28.612302560 +1000 @@ -50,6 +50,7 @@ typedef struct xfs_param { xfs_sysctl_val_t inherit_nosym; /* Inherit the "nosymlinks" flag. */ xfs_sysctl_val_t rotorstep; /* inode32 AG rotoring control knob */ xfs_sysctl_val_t inherit_nodfrg;/* Inherit the "nodefrag" inode flag. */ + xfs_sysctl_val_t fstrm_timer; /* Filestream dir-AG assoc'n timeout. */ } xfs_param_t; /* @@ -89,6 +90,7 @@ enum { XFS_INHERIT_NOSYM = 19, XFS_ROTORSTEP = 20, XFS_INHERIT_NODFRG = 21, + XFS_FILESTREAM_TIMER = 22, }; extern xfs_param_t xfs_params; Index: 2.6.x-xfs-new/fs/xfs/xfs_ag.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_ag.h 2007-06-13 13:58:15.751515106 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_ag.h 2007-06-13 14:11:28.616302038 +1000 @@ -196,6 +196,7 @@ typedef struct xfs_perag lock_t pagb_lock; /* lock for pagb_list */ #endif xfs_perag_busy_t *pagb_list; /* unstable blocks */ + atomic_t pagf_fstrms; /* # of filestreams active in this AG */ /* * inode allocation search lookup optimisation. 
Index: 2.6.x-xfs-new/fs/xfs/xfs_bmap.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_bmap.c 2007-06-13 13:58:15.751515106 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_bmap.c 2007-06-13 14:11:28.000000000 +1000 @@ -52,6 +52,7 @@ #include "xfs_quota.h" #include "xfs_trans_space.h" #include "xfs_buf_item.h" +#include "xfs_filestream.h" #ifdef DEBUG @@ -171,6 +172,14 @@ xfs_bmap_alloc( xfs_bmalloca_t *ap); /* bmap alloc argument struct */ /* + * xfs_bmap_filestreams is the underlying allocator when filestreams are + * enabled. + */ +STATIC int /* error */ +xfs_bmap_filestreams( + xfs_bmalloca_t *ap); /* bmap alloc argument struct */ + +/* * Transform a btree format file with only one leaf node, where the * extents list will fit in the inode, into an extents format file. * Since the file extents are already in-core, all we have to do is @@ -2724,7 +2733,12 @@ xfs_bmap_btalloc( } nullfb = ap->firstblock == NULLFSBLOCK; fb_agno = nullfb ? NULLAGNUMBER : XFS_FSB_TO_AGNO(mp, ap->firstblock); - if (nullfb) + if (nullfb && xfs_inode_is_filestream(ap->ip)) { + ag = xfs_filestream_lookup_ag(ap->ip); + ag = (ag != NULLAGNUMBER) ? ag : 0; + ap->rval = (ap->userdata) ? XFS_AGB_TO_FSB(mp, ag, 0) : + XFS_INO_TO_FSB(mp, ap->ip->i_ino); + } else if (nullfb) ap->rval = XFS_INO_TO_FSB(mp, ap->ip->i_ino); else ap->rval = ap->firstblock; @@ -2750,13 +2764,22 @@ xfs_bmap_btalloc( args.firstblock = ap->firstblock; blen = 0; if (nullfb) { - args.type = XFS_ALLOCTYPE_START_BNO; + if (xfs_inode_is_filestream(ap->ip)) + args.type = XFS_ALLOCTYPE_NEAR_BNO; + else + args.type = XFS_ALLOCTYPE_START_BNO; args.total = ap->total; + /* - * Find the longest available space. - * We're going to try for the whole allocation at once. + * Search for an allocation group with a single extent + * large enough for the request. + * + * If one isn't found, then adjust the minimum allocation + * size to the largest space found. 
*/ startag = ag = XFS_FSB_TO_AGNO(mp, args.fsbno); + if (startag == NULLAGNUMBER) + startag = ag = 0; notinit = 0; down_read(&mp->m_peraglock); while (blen < ap->alen) { @@ -2782,6 +2805,35 @@ xfs_bmap_btalloc( blen = longest; } else notinit = 1; + + if (xfs_inode_is_filestream(ap->ip)) { + if (blen >= ap->alen) + break; + + if (ap->userdata) { + /* + * If startag is an invalid AG, we've + * come here once before and + * xfs_filestream_new_ag picked the + * best currently available. + * + * Don't continue looping, since we + * could loop forever. + */ + if (startag == NULLAGNUMBER) + break; + + error = xfs_filestream_new_ag(ap, &ag); + if (error) { + up_read(&mp->m_peraglock); + return error; + } + + /* loop again to set 'blen'*/ + startag = NULLAGNUMBER; + continue; + } + } if (++ag == mp->m_sb.sb_agcount) ag = 0; if (ag == startag) @@ -2806,8 +2858,14 @@ xfs_bmap_btalloc( */ else args.minlen = ap->alen; + + if (xfs_inode_is_filestream(ap->ip)) + ap->rval = args.fsbno = XFS_AGB_TO_FSB(mp, ag, 0); } else if (ap->low) { - args.type = XFS_ALLOCTYPE_START_BNO; + if (xfs_inode_is_filestream(ap->ip)) + args.type = XFS_ALLOCTYPE_FIRST_AG; + else + args.type = XFS_ALLOCTYPE_START_BNO; args.total = args.minlen = ap->minlen; } else { args.type = XFS_ALLOCTYPE_NEAR_BNO; Index: 2.6.x-xfs-new/fs/xfs/xfs_clnt.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_clnt.h 2007-06-13 13:58:15.759514069 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_clnt.h 2007-06-13 14:11:28.640298906 +1000 @@ -99,5 +99,7 @@ struct xfs_mount_args { */ #define XFSMNT2_COMPAT_IOSIZE 0x00000001 /* don't report large preferred * I/O size in stat(2) */ +#define XFSMNT2_FILESTREAMS 0x00000002 /* enable the filestreams + * allocator */ #endif /* __XFS_CLNT_H__ */ Index: 2.6.x-xfs-new/fs/xfs/xfs_dinode.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_dinode.h 2007-06-13 13:58:15.767513033 +1000 +++ 
2.6.x-xfs-new/fs/xfs/xfs_dinode.h 2007-06-13 14:11:28.648297862 +1000 @@ -257,6 +257,7 @@ typedef enum xfs_dinode_fmt #define XFS_DIFLAG_EXTSIZE_BIT 11 /* inode extent size allocator hint */ #define XFS_DIFLAG_EXTSZINHERIT_BIT 12 /* inherit inode extent size */ #define XFS_DIFLAG_NODEFRAG_BIT 13 /* do not reorganize/defragment */ +#define XFS_DIFLAG_FILESTREAM_BIT 14 /* use filestream allocator */ #define XFS_DIFLAG_REALTIME (1 << XFS_DIFLAG_REALTIME_BIT) #define XFS_DIFLAG_PREALLOC (1 << XFS_DIFLAG_PREALLOC_BIT) #define XFS_DIFLAG_NEWRTBM (1 << XFS_DIFLAG_NEWRTBM_BIT) @@ -271,12 +272,13 @@ typedef enum xfs_dinode_fmt #define XFS_DIFLAG_EXTSIZE (1 << XFS_DIFLAG_EXTSIZE_BIT) #define XFS_DIFLAG_EXTSZINHERIT (1 << XFS_DIFLAG_EXTSZINHERIT_BIT) #define XFS_DIFLAG_NODEFRAG (1 << XFS_DIFLAG_NODEFRAG_BIT) +#define XFS_DIFLAG_FILESTREAM (1 << XFS_DIFLAG_FILESTREAM_BIT) #define XFS_DIFLAG_ANY \ (XFS_DIFLAG_REALTIME | XFS_DIFLAG_PREALLOC | XFS_DIFLAG_NEWRTBM | \ XFS_DIFLAG_IMMUTABLE | XFS_DIFLAG_APPEND | XFS_DIFLAG_SYNC | \ XFS_DIFLAG_NOATIME | XFS_DIFLAG_NODUMP | XFS_DIFLAG_RTINHERIT | \ XFS_DIFLAG_PROJINHERIT | XFS_DIFLAG_NOSYMLINKS | XFS_DIFLAG_EXTSIZE | \ - XFS_DIFLAG_EXTSZINHERIT | XFS_DIFLAG_NODEFRAG) + XFS_DIFLAG_EXTSZINHERIT | XFS_DIFLAG_NODEFRAG | XFS_DIFLAG_FILESTREAM) #endif /* __XFS_DINODE_H__ */ Index: 2.6.x-xfs-new/fs/xfs/xfs_filestream.c =================================================================== --- /dev/null 1970-01-01 00:00:00.000000000 +0000 +++ 2.6.x-xfs-new/fs/xfs/xfs_filestream.c 2007-06-18 11:39:05.523825952 +1000 @@ -0,0 +1,744 @@ +/* + * Copyright (c) 2000-2005 Silicon Graphics, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation. 
+ * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ +#include "xfs.h" +#include "xfs_bmap_btree.h" +#include "xfs_inum.h" +#include "xfs_dir2.h" +#include "xfs_dir2_sf.h" +#include "xfs_attr_sf.h" +#include "xfs_dinode.h" +#include "xfs_inode.h" +#include "xfs_ag.h" +#include "xfs_dmapi.h" +#include "xfs_log.h" +#include "xfs_trans.h" +#include "xfs_sb.h" +#include "xfs_mount.h" +#include "xfs_bmap.h" +#include "xfs_alloc.h" +#include "xfs_utils.h" +#include "xfs_mru_cache.h" +#include "xfs_filestream.h" + +#ifdef XFS_FILESTREAMS_TRACE + +ktrace_t *xfs_filestreams_trace_buf; + +STATIC void +xfs_filestreams_trace( + xfs_mount_t *mp, /* mount point */ + int type, /* type of trace */ + const char *func, /* source function */ + int line, /* source line number */ + __psunsigned_t arg0, + __psunsigned_t arg1, + __psunsigned_t arg2, + __psunsigned_t arg3, + __psunsigned_t arg4, + __psunsigned_t arg5) +{ + ktrace_enter(xfs_filestreams_trace_buf, + (void *)(__psint_t)(type | (line << 16)), + (void *)func, + (void *)(__psunsigned_t)current_pid(), + (void *)mp, + (void *)(__psunsigned_t)arg0, + (void *)(__psunsigned_t)arg1, + (void *)(__psunsigned_t)arg2, + (void *)(__psunsigned_t)arg3, + (void *)(__psunsigned_t)arg4, + (void *)(__psunsigned_t)arg5, + NULL, NULL, NULL, NULL, NULL, NULL); +} + +#define TRACE0(mp,t) TRACE6(mp,t,0,0,0,0,0,0) +#define TRACE1(mp,t,a0) TRACE6(mp,t,a0,0,0,0,0,0) +#define TRACE2(mp,t,a0,a1) TRACE6(mp,t,a0,a1,0,0,0,0) +#define TRACE3(mp,t,a0,a1,a2) TRACE6(mp,t,a0,a1,a2,0,0,0) +#define TRACE4(mp,t,a0,a1,a2,a3) TRACE6(mp,t,a0,a1,a2,a3,0,0) 
+#define TRACE5(mp,t,a0,a1,a2,a3,a4) TRACE6(mp,t,a0,a1,a2,a3,a4,0) +#define TRACE6(mp,t,a0,a1,a2,a3,a4,a5) \ + xfs_filestreams_trace(mp, t, __FUNCTION__, __LINE__, \ + (__psunsigned_t)a0, (__psunsigned_t)a1, \ + (__psunsigned_t)a2, (__psunsigned_t)a3, \ + (__psunsigned_t)a4, (__psunsigned_t)a5) + +#define TRACE_AG_SCAN(mp, ag, ag2) \ + TRACE2(mp, XFS_FSTRM_KTRACE_AGSCAN, ag, ag2); +#define TRACE_AG_PICK1(mp, max_ag, maxfree) \ + TRACE2(mp, XFS_FSTRM_KTRACE_AGPICK1, max_ag, maxfree); +#define TRACE_AG_PICK2(mp, ag, ag2, cnt, free, scan, flag) \ + TRACE6(mp, XFS_FSTRM_KTRACE_AGPICK2, ag, ag2, \ + cnt, free, scan, flag) +#define TRACE_UPDATE(mp, ip, ag, cnt, ag2, cnt2) \ + TRACE5(mp, XFS_FSTRM_KTRACE_UPDATE, ip, ag, cnt, ag2, cnt2) +#define TRACE_FREE(mp, ip, pip, ag, cnt) \ + TRACE4(mp, XFS_FSTRM_KTRACE_FREE, ip, pip, ag, cnt) +#define TRACE_LOOKUP(mp, ip, pip, ag, cnt) \ + TRACE4(mp, XFS_FSTRM_KTRACE_ITEM_LOOKUP, ip, pip, ag, cnt) +#define TRACE_ASSOCIATE(mp, ip, pip, ag, cnt) \ + TRACE4(mp, XFS_FSTRM_KTRACE_ASSOCIATE, ip, pip, ag, cnt) +#define TRACE_MOVEAG(mp, ip, pip, oag, ocnt, nag, ncnt) \ + TRACE6(mp, XFS_FSTRM_KTRACE_MOVEAG, ip, pip, oag, ocnt, nag, ncnt) +#define TRACE_ORPHAN(mp, ip, ag) \ + TRACE2(mp, XFS_FSTRM_KTRACE_ORPHAN, ip, ag); + + +#else +#define TRACE_AG_SCAN(mp, ag, ag2) +#define TRACE_AG_PICK1(mp, max_ag, maxfree) +#define TRACE_AG_PICK2(mp, ag, ag2, cnt, free, scan, flag) +#define TRACE_UPDATE(mp, ip, ag, cnt, ag2, cnt2) +#define TRACE_FREE(mp, ip, pip, ag, cnt) +#define TRACE_LOOKUP(mp, ip, pip, ag, cnt) +#define TRACE_ASSOCIATE(mp, ip, pip, ag, cnt) +#define TRACE_MOVEAG(mp, ip, pip, oag, ocnt, nag, ncnt) +#define TRACE_ORPHAN(mp, ip, ag) +#endif + +static kmem_zone_t *item_zone; + +/* + * Structure for associating a file or a directory with an allocation group. 
+ * The parent directory pointer is only needed for files, but since there will + * generally be vastly more files than directories in the cache, using the same + * data structure simplifies the code with very little memory overhead. + */ +typedef struct fstrm_item +{ + xfs_agnumber_t ag; /* AG currently in use for the file/directory. */ + xfs_inode_t *ip; /* inode self-pointer. */ + xfs_inode_t *pip; /* Parent directory inode pointer. */ +} fstrm_item_t; + + +/* + * Scan the AGs starting at startag looking for an AG that isn't in use and has + * at least minlen blocks free. + */ +static int +_xfs_filestream_pick_ag( + xfs_mount_t *mp, + xfs_agnumber_t startag, + xfs_agnumber_t *agp, + int flags, + xfs_extlen_t minlen) +{ + int err, trylock, nscan; + xfs_extlen_t delta, longest, need, free, minfree, maxfree = 0; + xfs_agnumber_t ag, max_ag = NULLAGNUMBER; + struct xfs_perag *pag; + + /* 2% of an AG's blocks must be free for it to be chosen. */ + minfree = mp->m_sb.sb_agblocks / 50; + + ag = startag; + *agp = NULLAGNUMBER; + + /* For the first pass, don't sleep trying to init the per-AG. */ + trylock = XFS_ALLOC_FLAG_TRYLOCK; + + for (nscan = 0; 1; nscan++) { + + TRACE_AG_SCAN(mp, ag, xfs_filestream_peek_ag(mp, ag)); + + pag = mp->m_perag + ag; + + if (!pag->pagf_init) { + err = xfs_alloc_pagf_init(mp, NULL, ag, trylock); + if (err && !trylock) + return err; + } + + /* Might fail sometimes during the 1st pass with trylock set. */ + if (!pag->pagf_init) + goto next_ag; + + /* Keep track of the AG with the most free blocks. */ + if (pag->pagf_freeblks > maxfree) { + maxfree = pag->pagf_freeblks; + max_ag = ag; + } + + /* + * The AG reference count does two things: it enforces mutual + * exclusion when examining the suitability of an AG in this + * loop, and it guards against two filestreams being established + * in the same AG as each other. 
+ */ + if (xfs_filestream_get_ag(mp, ag) > 1) { + xfs_filestream_put_ag(mp, ag); + goto next_ag; + } + + need = XFS_MIN_FREELIST_PAG(pag, mp); + delta = need > pag->pagf_flcount ? need - pag->pagf_flcount : 0; + longest = (pag->pagf_longest > delta) ? + (pag->pagf_longest - delta) : + (pag->pagf_flcount > 0 || pag->pagf_longest > 0); + + if (((minlen && longest >= minlen) || + (!minlen && pag->pagf_freeblks >= minfree)) && + (!pag->pagf_metadata || !(flags & XFS_PICK_USERDATA) || + (flags & XFS_PICK_LOWSPACE))) { + + /* Break out, retaining the reference on the AG. */ + free = pag->pagf_freeblks; + *agp = ag; + break; + } + + /* Drop the reference on this AG, it's not usable. */ + xfs_filestream_put_ag(mp, ag); +next_ag: + /* Move to the next AG, wrapping to AG 0 if necessary. */ + if (++ag >= mp->m_sb.sb_agcount) + ag = 0; + + /* If a full pass of the AGs hasn't been done yet, continue. */ + if (ag != startag) + continue; + + /* Allow sleeping in xfs_alloc_pagf_init() on the 2nd pass. */ + if (trylock != 0) { + trylock = 0; + continue; + } + + /* Finally, if lowspace wasn't set, set it for the 3rd pass. */ + if (!(flags & XFS_PICK_LOWSPACE)) { + flags |= XFS_PICK_LOWSPACE; + continue; + } + + /* + * Take the AG with the most free space, regardless of whether + * it's already in use by another filestream. + */ + if (max_ag != NULLAGNUMBER) { + xfs_filestream_get_ag(mp, max_ag); + TRACE_AG_PICK1(mp, max_ag, maxfree); + free = maxfree; + *agp = max_ag; + break; + } + + /* take AG 0 if none matched */ + TRACE_AG_PICK1(mp, max_ag, maxfree); + *agp = 0; + return 0; + } + + TRACE_AG_PICK2(mp, startag, *agp, xfs_filestream_peek_ag(mp, *agp), + free, nscan, flags); + + return 0; +} + +/* + * Set the allocation group number for a file or a directory, updating inode + * references and per-AG references as appropriate. Must be called with the + * m_peraglock held in read mode. 
+ */ +static int +_xfs_filestream_update_ag( + xfs_inode_t *ip, + xfs_inode_t *pip, + xfs_agnumber_t ag) +{ + int err = 0; + xfs_mount_t *mp; + xfs_mru_cache_t *cache; + fstrm_item_t *item; + xfs_agnumber_t old_ag; + xfs_inode_t *old_pip; + + /* + * Either ip is a regular file and pip is a directory, or ip is a + * directory and pip is NULL. + */ + ASSERT(ip && (((ip->i_d.di_mode & S_IFREG) && pip && + (pip->i_d.di_mode & S_IFDIR)) || + ((ip->i_d.di_mode & S_IFDIR) && !pip))); + + mp = ip->i_mount; + cache = mp->m_filestream; + + item = xfs_mru_cache_lookup(cache, ip->i_ino); + if (item) { + ASSERT(item->ip == ip); + old_ag = item->ag; + item->ag = ag; + old_pip = item->pip; + item->pip = pip; + xfs_mru_cache_done(cache); + + /* + * If the AG has changed, drop the old ref and take a new one, + * effectively transferring the reference from old to new AG. + */ + if (ag != old_ag) { + xfs_filestream_put_ag(mp, old_ag); + xfs_filestream_get_ag(mp, ag); + } + + /* + * If ip is a file and its pip has changed, drop the old ref and + * take a new one. + */ + if (pip && pip != old_pip) { + IRELE(old_pip); + IHOLD(pip); + } + + TRACE_UPDATE(mp, ip, old_ag, xfs_filestream_peek_ag(mp, old_ag), + ag, xfs_filestream_peek_ag(mp, ag)); + return 0; + } + + item = kmem_zone_zalloc(item_zone, KM_MAYFAIL); + if (!item) + return ENOMEM; + + item->ag = ag; + item->ip = ip; + item->pip = pip; + + err = xfs_mru_cache_insert(cache, ip->i_ino, item); + if (err) { + kmem_zone_free(item_zone, item); + return err; + } + + /* Take a reference on the AG. */ + xfs_filestream_get_ag(mp, ag); + + /* + * Take a reference on the inode itself regardless of whether it's a + * regular file or a directory. + */ + IHOLD(ip); + + /* + * In the case of a regular file, take a reference on the parent inode + * as well to ensure it remains in-core. 
+ */ + if (pip) + IHOLD(pip); + + TRACE_UPDATE(mp, ip, ag, xfs_filestream_peek_ag(mp, ag), + ag, xfs_filestream_peek_ag(mp, ag)); + + return 0; +} + +/* xfs_fstrm_free_func(): callback for freeing cached stream items. */ +void +xfs_fstrm_free_func( + xfs_ino_t ino, + fstrm_item_t *item) +{ + xfs_inode_t *ip = item->ip; + int ref; + + ASSERT(ip->i_ino == ino); + + xfs_iflags_clear(ip, XFS_IFILESTREAM); + + /* Drop the reference taken on the AG when the item was added. */ + ref = xfs_filestream_put_ag(ip->i_mount, item->ag); + + ASSERT(ref >= 0); + + /* + * _xfs_filestream_update_ag() always takes a reference on the inode + * itself, whether it's a file or a directory. Release it here. + */ + IRELE(ip); + + /* + * In the case of a regular file, _xfs_filestream_update_ag() also takes a + * ref on the parent inode to keep it in-core. Release that too. + */ + if (item->pip) + IRELE(item->pip); + + TRACE_FREE(ip->i_mount, ip, item->pip, item->ag, + xfs_filestream_peek_ag(ip->i_mount, item->ag)); + + /* Finally, free the memory allocated for the item. */ + kmem_zone_free(item_zone, item); +} + +/* + * xfs_filestream_init() is called at xfs initialisation time to set up the + * memory zone that will be used for filestream data structure allocation. + */ +int +xfs_filestream_init(void) +{ + item_zone = kmem_zone_init(sizeof(fstrm_item_t), "fstrm_item"); +#ifdef XFS_FILESTREAMS_TRACE + xfs_filestreams_trace_buf = ktrace_alloc(XFS_FSTRM_KTRACE_SIZE, KM_SLEEP); +#endif + return item_zone ? 0 : -ENOMEM; +} + +/* + * xfs_filestream_uninit() is called at xfs termination time to destroy the + * memory zone that was used for filestream data structure allocation. + */ +void +xfs_filestream_uninit(void) +{ +#ifdef XFS_FILESTREAMS_TRACE + ktrace_free(xfs_filestreams_trace_buf); +#endif + kmem_zone_destroy(item_zone); +} + +/* + * xfs_filestream_mount() is called when a file system is mounted with the + * filestream option. 
It is responsible for allocating the data structures + * needed to track the new file system's file streams. + */ +int +xfs_filestream_mount( + xfs_mount_t *mp) +{ + int err; + unsigned int lifetime, grp_count; + + /* + * The filestream timer tunable is currently fixed within the range of + * one second to four minutes, with five seconds being the default. The + * group count is somewhat arbitrary, but it'd be nice to adhere to the + * timer tunable to within about 10 percent. This requires at least 10 + * groups. + */ + lifetime = xfs_fstrm_centisecs * 10; + grp_count = 10; + + err = xfs_mru_cache_create(&mp->m_filestream, lifetime, grp_count, + (xfs_mru_cache_free_func_t)xfs_fstrm_free_func); + + return err; +} + +/* + * xfs_filestream_unmount() is called when a file system that was mounted with + * the filestream option is unmounted. It drains the data structures created + * to track the file system's file streams and frees all the memory that was + * allocated. + */ +void +xfs_filestream_unmount( + xfs_mount_t *mp) +{ + xfs_mru_cache_destroy(mp->m_filestream); +} + +/* + * If the mount point's m_perag array is going to be reallocated, all + * outstanding cache entries must be flushed to avoid accessing reference count + * addresses that have been freed. The call to xfs_filestream_flush() must be + * made inside the block that holds the m_peraglock in write mode to do the + * reallocation. + */ +void +xfs_filestream_flush( + xfs_mount_t *mp) +{ + /* point in time flush, so keep the reaper running */ + xfs_mru_cache_flush(mp->m_filestream, 1); +} + +/* + * Return the AG of the filestream the file or directory belongs to, or + * NULLAGNUMBER otherwise. 
+ */ +xfs_agnumber_t +xfs_filestream_lookup_ag( + xfs_inode_t *ip) +{ + xfs_mru_cache_t *cache; + fstrm_item_t *item; + xfs_agnumber_t ag; + int ref; + + if (!(ip->i_d.di_mode & (S_IFREG | S_IFDIR))) { + ASSERT(0); + return NULLAGNUMBER; + } + + cache = ip->i_mount->m_filestream; + item = xfs_mru_cache_lookup(cache, ip->i_ino); + if (!item) { + TRACE_LOOKUP(ip->i_mount, ip, NULL, NULLAGNUMBER, 0); + return NULLAGNUMBER; + } + + ASSERT(ip == item->ip); + ag = item->ag; + ref = xfs_filestream_peek_ag(ip->i_mount, ag); + xfs_mru_cache_done(cache); + + TRACE_LOOKUP(ip->i_mount, ip, item->pip, ag, ref); + return ag; +} + +/* + * xfs_filestream_associate() should only be called to associate a regular file + * with its parent directory. Calling it with a child directory isn't + * appropriate because filestreams don't apply to entire directory hierarchies. + * Creating a file in a child directory of an existing filestream directory + * starts a new filestream with its own allocation group association. + */ +int +xfs_filestream_associate( + xfs_inode_t *pip, + xfs_inode_t *ip) +{ + xfs_mount_t *mp; + xfs_mru_cache_t *cache; + fstrm_item_t *item; + xfs_agnumber_t ag, rotorstep, startag; + int err = 0; + + ASSERT(pip->i_d.di_mode & S_IFDIR); + ASSERT(ip->i_d.di_mode & S_IFREG); + if (!(pip->i_d.di_mode & S_IFDIR) || !(ip->i_d.di_mode & S_IFREG)) + return EINVAL; + + mp = pip->i_mount; + cache = mp->m_filestream; + down_read(&mp->m_peraglock); + xfs_ilock(pip, XFS_IOLOCK_EXCL); + + /* If the parent directory is already in the cache, use its AG. */ + item = xfs_mru_cache_lookup(cache, pip->i_ino); + if (item) { + ASSERT(item->ip == pip); + ag = item->ag; + xfs_mru_cache_done(cache); + + TRACE_LOOKUP(mp, pip, pip, ag, xfs_filestream_peek_ag(mp, ag)); + err = _xfs_filestream_update_ag(ip, pip, ag); + + goto exit; + } + + /* + * Set the starting AG using the rotor for inode32, otherwise + * use the directory inode's AG. 
+ */ + if (mp->m_flags & XFS_MOUNT_32BITINODES) { + rotorstep = xfs_rotorstep; + startag = (mp->m_agfrotor / rotorstep) % mp->m_sb.sb_agcount; + mp->m_agfrotor = (mp->m_agfrotor + 1) % + (mp->m_sb.sb_agcount * rotorstep); + } else + startag = XFS_INO_TO_AGNO(mp, pip->i_ino); + + /* Pick a new AG for the parent inode starting at startag. */ + err = _xfs_filestream_pick_ag(mp, startag, &ag, 0, 0); + if (err || ag == NULLAGNUMBER) + goto exit_did_pick; + + /* Associate the parent inode with the AG. */ + err = _xfs_filestream_update_ag(pip, NULL, ag); + if (err) + goto exit_did_pick; + + /* Associate the file inode with the AG. */ + err = _xfs_filestream_update_ag(ip, pip, ag); + if (err) + goto exit_did_pick; + + TRACE_ASSOCIATE(mp, ip, pip, ag, xfs_filestream_peek_ag(mp, ag)); + +exit_did_pick: + /* + * If _xfs_filestream_pick_ag() returned a valid AG, remove the + * reference it took on it, since the file and directory will have taken + * their own now if they were successfully cached. + */ + if (ag != NULLAGNUMBER) + xfs_filestream_put_ag(mp, ag); + +exit: + xfs_iunlock(pip, XFS_IOLOCK_EXCL); + up_read(&mp->m_peraglock); + return err; +} + +/* + * Pick a new allocation group for the current file and its file stream. This + * function is called by xfs_bmap_filestreams() with the mount point's per-ag + * lock held. + */ +int +xfs_filestream_new_ag( + xfs_bmalloca_t *ap, + xfs_agnumber_t *agp) +{ + int flags, err; + xfs_inode_t *ip, *pip = NULL; + xfs_mount_t *mp; + xfs_mru_cache_t *cache; + xfs_extlen_t minlen; + fstrm_item_t *dir, *file; + xfs_agnumber_t ag = NULLAGNUMBER; + + ip = ap->ip; + mp = ip->i_mount; + cache = mp->m_filestream; + minlen = ap->alen; + *agp = NULLAGNUMBER; + + /* + * Look for the file in the cache, removing it if it's found. Doing + * this allows it to be held across the dir lookup that follows. 
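As a userspace sketch of the inode32 rotor arithmetic above (the names `rotor`, `agcount` and `rotorstep` here are illustrative stand-ins, not the patch's mount fields): the starting AG only advances once every `rotorstep` picks, so consecutive new filestreams are spread across AGs gradually rather than one per file.

```c
#include <assert.h>

/* Stand-in for mp->m_agfrotor; in the patch this lives in the mount struct. */
static unsigned int rotor;

/*
 * Sketch of startag selection for XFS_MOUNT_32BITINODES: divide the rotor
 * by rotorstep so the chosen AG advances once per rotorstep calls, wrapping
 * modulo the AG count.
 */
unsigned int
pick_startag(unsigned int agcount, unsigned int rotorstep)
{
	unsigned int startag = (rotor / rotorstep) % agcount;

	rotor = (rotor + 1) % (agcount * rotorstep);
	return startag;
}
```

With `agcount` = 4 and `rotorstep` = 3, three consecutive picks start at AG 0 before the rotor moves on to AG 1.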
+ */ + file = xfs_mru_cache_remove(cache, ip->i_ino); + if (file) { + ASSERT(ip == file->ip); + + /* Save the file's parent inode and old AG number for later. */ + pip = file->pip; + ag = file->ag; + + /* Look for the file's directory in the cache. */ + dir = xfs_mru_cache_lookup(cache, pip->i_ino); + if (dir) { + ASSERT(pip == dir->ip); + + /* + * If the directory has already moved on to a new AG, + * use that AG as the new AG for the file. Don't + * forget to twiddle the AG refcounts to match the + * movement. + */ + if (dir->ag != file->ag) { + xfs_filestream_put_ag(mp, file->ag); + xfs_filestream_get_ag(mp, dir->ag); + *agp = file->ag = dir->ag; + } + + xfs_mru_cache_done(cache); + } + + /* + * Put the file back in the cache. If this fails, the free + * function needs to be called to tidy up in the same way as if + * the item had simply expired from the cache. + */ + err = xfs_mru_cache_insert(cache, ip->i_ino, file); + if (err) { + xfs_fstrm_free_func(ip->i_ino, file); + return err; + } + + /* + * If the file's AG was moved to the directory's new AG, there's + * nothing more to be done. + */ + if (*agp != NULLAGNUMBER) { + TRACE_MOVEAG(mp, ip, pip, + ag, xfs_filestream_peek_ag(mp, ag), + *agp, xfs_filestream_peek_ag(mp, *agp)); + return 0; + } + } + + /* + * If the file's parent directory is known, take its iolock in exclusive + * mode to prevent two sibling files from racing each other to migrate + * themselves and their parent to different AGs. + */ + if (pip) + xfs_ilock(pip, XFS_IOLOCK_EXCL); + + /* + * A new AG needs to be found for the file. If the file's parent + * directory is also known, it will be moved to the new AG as well to + * ensure that files created inside it in future use the new AG. + */ + ag = (ag == NULLAGNUMBER) ? 0 : (ag + 1) % mp->m_sb.sb_agcount; + flags = (ap->userdata ? XFS_PICK_USERDATA : 0) | + (ap->low ? 
XFS_PICK_LOWSPACE : 0); + + err = _xfs_filestream_pick_ag(mp, ag, agp, flags, minlen); + if (err || *agp == NULLAGNUMBER) + goto exit; + + /* + * If the file wasn't found in the file cache, then its parent directory + * inode isn't known. For this to have happened, the file must either + * be pre-existing, or it was created long enough ago that its cache + * entry has expired. This isn't the sort of usage that the filestreams + * allocator is trying to optimise, so there's no point trying to track + * its new AG somehow in the filestream data structures. + */ + if (!pip) { + TRACE_ORPHAN(mp, ip, *agp); + goto exit; + } + + /* Associate the parent inode with the AG. */ + err = _xfs_filestream_update_ag(pip, NULL, *agp); + if (err) + goto exit; + + /* Associate the file inode with the AG. */ + err = _xfs_filestream_update_ag(ip, pip, *agp); + if (err) + goto exit; + + TRACE_MOVEAG(mp, ip, pip, NULLAGNUMBER, 0, + *agp, xfs_filestream_peek_ag(mp, *agp)); + +exit: + /* + * If _xfs_filestream_pick_ag() returned a valid AG, remove the + * reference it took on it, since the file and directory will have taken + * their own now if they were successfully cached. + */ + if (*agp != NULLAGNUMBER) + xfs_filestream_put_ag(mp, *agp); + else + *agp = 0; + + if (pip) + xfs_iunlock(pip, XFS_IOLOCK_EXCL); + + return err; +} + +/* + * Remove an association between an inode and a filestream object. + * Typically this is done on last close of an unlinked file. + */ +void +xfs_filestream_deassociate( + xfs_inode_t *ip) +{ + xfs_mru_cache_t *cache = ip->i_mount->m_filestream; + + xfs_mru_cache_delete(cache, ip->i_ino); +} Index: 2.6.x-xfs-new/fs/xfs/xfs_filestream.h =================================================================== --- /dev/null 1970-01-01 00:00:00.000000000 +0000 +++ 2.6.x-xfs-new/fs/xfs/xfs_filestream.h 2007-06-18 11:39:05.523825952 +1000 @@ -0,0 +1,136 @@ +/* + * Copyright (c) 2000-2002,2005 Silicon Graphics, Inc. + * All Rights Reserved. 
+ * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ +#ifndef __XFS_FILESTREAM_H__ +#define __XFS_FILESTREAM_H__ + +#ifdef __KERNEL__ + +struct xfs_mount; +struct xfs_inode; +struct xfs_perag; +struct xfs_bmalloca; + +#ifdef XFS_FILESTREAMS_TRACE +#define XFS_FSTRM_KTRACE_INFO 1 +#define XFS_FSTRM_KTRACE_AGSCAN 2 +#define XFS_FSTRM_KTRACE_AGPICK1 3 +#define XFS_FSTRM_KTRACE_AGPICK2 4 +#define XFS_FSTRM_KTRACE_UPDATE 5 +#define XFS_FSTRM_KTRACE_FREE 6 +#define XFS_FSTRM_KTRACE_ITEM_LOOKUP 7 +#define XFS_FSTRM_KTRACE_ASSOCIATE 8 +#define XFS_FSTRM_KTRACE_MOVEAG 9 +#define XFS_FSTRM_KTRACE_ORPHAN 10 + +#define XFS_FSTRM_KTRACE_SIZE 16384 +extern ktrace_t *xfs_filestreams_trace_buf; + +#endif + +/* + * Allocation group filestream associations are tracked with per-ag atomic + * counters. These counters allow _xfs_filestream_pick_ag() to tell whether a + * particular AG already has active filestreams associated with it. The mount + * point's m_peraglock is used to protect these counters from per-ag array + * re-allocation during a growfs operation. When xfs_growfs_data_private() is + * about to reallocate the array, it calls xfs_filestream_flush() with the + * m_peraglock held in write mode. 
+ * + * Since xfs_mru_cache_flush() guarantees that all the free functions for all + * the cache elements have finished executing before it returns, it's safe for + * the free functions to use the atomic counters without m_peraglock protection. + * This allows the implementation of xfs_fstrm_free_func() to be agnostic about + * whether it was called with the m_peraglock held in read mode, write mode or + * not held at all. The race condition this addresses is the following: + * + * - The work queue scheduler fires and pulls a filestream directory cache + * element off the LRU end of the cache for deletion, then gets pre-empted. + * - A growfs operation grabs the m_peraglock in write mode, flushes all the + * remaining items from the cache and reallocates the mount point's per-ag + * array, resetting all the counters to zero. + * - The work queue thread resumes and calls the free function for the element + * it started cleaning up earlier. In the process it decrements the + * filestreams counter for an AG that now has no references. + * + * With a shrinkfs feature, the above scenario could panic the system. + * + * All other uses of the following macros should be protected by either the + * m_peraglock held in read mode, or the cache's internal locking exposed by the + * interval between a call to xfs_mru_cache_lookup() and a call to + * xfs_mru_cache_done(). In addition, the m_peraglock must be held in read mode + * when new elements are added to the cache. + * + * Combined, these locking rules ensure that no associations will ever exist in + * the cache that reference per-ag array elements that have since been + * reallocated. 
+ */ +STATIC_INLINE int +xfs_filestream_peek_ag( + xfs_mount_t *mp, + xfs_agnumber_t agno) +{ + return atomic_read(&mp->m_perag[agno].pagf_fstrms); +} + +STATIC_INLINE int +xfs_filestream_get_ag( + xfs_mount_t *mp, + xfs_agnumber_t agno) +{ + return atomic_inc_return(&mp->m_perag[agno].pagf_fstrms); +} + +STATIC_INLINE int +xfs_filestream_put_ag( + xfs_mount_t *mp, + xfs_agnumber_t agno) +{ + return atomic_dec_return(&mp->m_perag[agno].pagf_fstrms); +} + +/* allocation selection flags */ +typedef enum xfs_fstrm_alloc { + XFS_PICK_USERDATA = 1, + XFS_PICK_LOWSPACE = 2, +} xfs_fstrm_alloc_t; + +/* prototypes for filestream.c */ +int xfs_filestream_init(void); +void xfs_filestream_uninit(void); +int xfs_filestream_mount(struct xfs_mount *mp); +void xfs_filestream_unmount(struct xfs_mount *mp); +void xfs_filestream_flush(struct xfs_mount *mp); +xfs_agnumber_t xfs_filestream_lookup_ag(struct xfs_inode *ip); +int xfs_filestream_associate(struct xfs_inode *dip, struct xfs_inode *ip); +void xfs_filestream_deassociate(struct xfs_inode *ip); +int xfs_filestream_new_ag(struct xfs_bmalloca *ap, xfs_agnumber_t *agp); + + +/* filestreams for the inode? 
*/ +STATIC_INLINE int +xfs_inode_is_filestream( + struct xfs_inode *ip) +{ + return (ip->i_mount->m_flags & XFS_MOUNT_FILESTREAMS) || + xfs_iflags_test(ip, XFS_IFILESTREAM) || + (ip->i_d.di_flags & XFS_DIFLAG_FILESTREAM); +} + +#endif /* __KERNEL__ */ + +#endif /* __XFS_FILESTREAM_H__ */ Index: 2.6.x-xfs-new/fs/xfs/xfs_fs.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_fs.h 2007-06-13 13:58:15.767513033 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_fs.h 2007-06-13 14:11:28.760283246 +1000 @@ -66,6 +66,7 @@ struct fsxattr { #define XFS_XFLAG_EXTSIZE 0x00000800 /* extent size allocator hint */ #define XFS_XFLAG_EXTSZINHERIT 0x00001000 /* inherit inode extent size */ #define XFS_XFLAG_NODEFRAG 0x00002000 /* do not defragment */ +#define XFS_XFLAG_FILESTREAM 0x00004000 /* use filestream allocator */ #define XFS_XFLAG_HASATTR 0x80000000 /* no DIFLAG for this */ /* Index: 2.6.x-xfs-new/fs/xfs/xfs_fsops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_fsops.c 2007-06-13 13:58:15.767513033 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_fsops.c 2007-06-18 11:33:45.517382150 +1000 @@ -44,6 +44,7 @@ #include "xfs_trans_space.h" #include "xfs_rtalloc.h" #include "xfs_rw.h" +#include "xfs_filestream.h" /* * File system operations @@ -165,6 +166,7 @@ xfs_growfs_data_private( new = nb - mp->m_sb.sb_dblocks; oagcount = mp->m_sb.sb_agcount; if (nagcount > oagcount) { + xfs_filestream_flush(mp); down_write(&mp->m_peraglock); mp->m_perag = kmem_realloc(mp->m_perag, sizeof(xfs_perag_t) * nagcount, Index: 2.6.x-xfs-new/fs/xfs/xfs_inode.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_inode.c 2007-06-13 13:58:15.783510960 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_inode.c 2007-06-18 11:41:48.938591839 +1000 @@ -48,6 +48,7 @@ #include "xfs_dir2_trace.h" #include "xfs_quota.h" #include "xfs_acl.h" +#include "xfs_filestream.h" kmem_zone_t *xfs_ifork_zone; 
@@ -817,6 +818,8 @@ _xfs_dic2xflags( flags |= XFS_XFLAG_EXTSZINHERIT; if (di_flags & XFS_DIFLAG_NODEFRAG) flags |= XFS_XFLAG_NODEFRAG; + if (di_flags & XFS_DIFLAG_FILESTREAM) + flags |= XFS_XFLAG_FILESTREAM; } return flags; @@ -1150,7 +1153,7 @@ xfs_ialloc( /* * Project ids won't be stored on disk if we are using a version 1 inode. */ - if ( (prid != 0) && (ip->i_d.di_version == XFS_DINODE_VERSION_1)) + if ((prid != 0) && (ip->i_d.di_version == XFS_DINODE_VERSION_1)) xfs_bump_ino_vers2(tp, ip); if (XFS_INHERIT_GID(pip, vp->v_vfsp)) { @@ -1195,8 +1198,15 @@ xfs_ialloc( flags |= XFS_ILOG_DEV; break; case S_IFREG: + if (xfs_inode_is_filestream(pip)) { + error = xfs_filestream_associate(pip, ip); + if (error) + return error; + xfs_iflags_set(ip, XFS_IFILESTREAM); + } + /* fall through */ case S_IFDIR: - if (unlikely(pip->i_d.di_flags & XFS_DIFLAG_ANY)) { + if (pip->i_d.di_flags & XFS_DIFLAG_ANY) { uint di_flags = 0; if ((mode & S_IFMT) == S_IFDIR) { @@ -1233,6 +1243,8 @@ xfs_ialloc( if ((pip->i_d.di_flags & XFS_DIFLAG_NODEFRAG) && xfs_inherit_nodefrag) di_flags |= XFS_DIFLAG_NODEFRAG; + if (pip->i_d.di_flags & XFS_DIFLAG_FILESTREAM) + di_flags |= XFS_DIFLAG_FILESTREAM; ip->i_d.di_flags |= di_flags; } /* FALLTHROUGH */ Index: 2.6.x-xfs-new/fs/xfs/xfs_mount.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_mount.h 2007-06-13 13:58:15.783510960 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_mount.h 2007-06-13 14:11:28.788279592 +1000 @@ -66,6 +66,7 @@ struct xfs_bmbt_irec; struct xfs_bmap_free; struct xfs_extdelta; struct xfs_swapext; +struct xfs_mru_cache; extern struct bhv_vfsops xfs_vfsops; extern struct bhv_vnodeops xfs_vnodeops; @@ -436,6 +437,7 @@ typedef struct xfs_mount { struct notifier_block m_icsb_notifier; /* hotplug cpu notifier */ struct mutex m_icsb_mutex; /* balancer sync lock */ #endif + struct xfs_mru_cache *m_filestream; /* per-mount filestream data */ } xfs_mount_t; /* @@ -475,6 +477,8 @@ typedef struct 
xfs_mount { * I/O size in stat() */ #define XFS_MOUNT_NO_PERCPU_SB (1ULL << 23) /* don't use per-cpu superblock counters */ +#define XFS_MOUNT_FILESTREAMS (1ULL << 24) /* enable the filestreams + allocator */ /* Index: 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.c =================================================================== --- /dev/null 1970-01-01 00:00:00.000000000 +0000 +++ 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.c 2007-06-13 14:11:28.788279592 +1000 @@ -0,0 +1,494 @@ +/* + * Copyright (c) 2000-2002,2006 Silicon Graphics, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ +#include "xfs.h" +#include "xfs_mru_cache.h" + +/* + * An MRU Cache is a dynamic data structure that stores its elements in a way + * that allows efficient lookups, but also groups them into discrete time + * intervals based on insertion time. This allows elements to be efficiently + * and automatically reaped after a fixed period of inactivity. + * + * When a client data pointer is stored in the MRU Cache it needs to be added to + * both the data store and to one of the lists. It must also be possible to + * access each of these entries via the other, i.e. to: + * + * a) Walk a list, removing the corresponding data store entry for each item. + * b) Look up a data store entry, then access its list entry directly. 
+ * + * To achieve both of these goals, each entry must contain both a list entry and + * a key, in addition to the user's data pointer. Note that it's not a good + * idea to have the client embed one of these structures at the top of their own + * data structure, because inserting the same item more than once would most + * likely result in a loop in one of the lists. That's a sure-fire recipe for + * an infinite loop in the code. + */ +typedef struct xfs_mru_cache_elem +{ + struct list_head list_node; + unsigned long key; + void *value; +} xfs_mru_cache_elem_t; + +static kmem_zone_t *xfs_mru_elem_zone; +static struct workqueue_struct *xfs_mru_reap_wq; + +/* + * When inserting, destroying or reaping, it's first necessary to update the + * lists relative to a particular time. In the case of destroying, that time + * will be well in the future to ensure that all items are moved to the reap + * list. In all other cases though, the time will be the current time. + * + * This function enters a loop, moving the contents of the LRU list to the reap + * list again and again until either a) the lists are all empty, or b) time zero + * has been advanced sufficiently to be within the immediate element lifetime. + * + * Case a) above is detected by counting how many groups are migrated and + * stopping when they've all been moved. Case b) is detected by monitoring the + * time_zero field, which is updated as each group is migrated. + * + * The return value is the earliest time that more migration could be needed, or + * zero if there's no need to schedule more work because the lists are empty. + */ +STATIC unsigned long +_xfs_mru_cache_migrate( + xfs_mru_cache_t *mru, + unsigned long now) +{ + unsigned int grp; + unsigned int migrated = 0; + struct list_head *lru_list; + + /* Nothing to do if the data store is empty. */ + if (!mru->time_zero) + return 0; + + /* While time zero is older than the time spanned by all the lists. 
*/ + while (mru->time_zero <= now - mru->grp_count * mru->grp_time) { + + /* + * If the LRU list isn't empty, migrate its elements to the tail + * of the reap list. + */ + lru_list = mru->lists + mru->lru_grp; + if (!list_empty(lru_list)) + list_splice_init(lru_list, mru->reap_list.prev); + + /* + * Advance the LRU group number, freeing the old LRU list to + * become the new MRU list; advance time zero accordingly. + */ + mru->lru_grp = (mru->lru_grp + 1) % mru->grp_count; + mru->time_zero += mru->grp_time; + + /* + * If reaping is so far behind that all the elements on all the + * lists have been migrated to the reap list, it's now empty. + */ + if (++migrated == mru->grp_count) { + mru->lru_grp = 0; + mru->time_zero = 0; + return 0; + } + } + + /* Find the first non-empty list from the LRU end. */ + for (grp = 0; grp < mru->grp_count; grp++) { + + /* Check the grp'th list from the LRU end. */ + lru_list = mru->lists + ((mru->lru_grp + grp) % mru->grp_count); + if (!list_empty(lru_list)) + return mru->time_zero + + (mru->grp_count + grp) * mru->grp_time; + } + + /* All the lists must be empty. */ + mru->lru_grp = 0; + mru->time_zero = 0; + return 0; +} + +/* + * When inserting or doing a lookup, an element needs to be inserted into the + * MRU list. The lists must be migrated first to ensure that they're + * up-to-date, otherwise the new element could be given a shorter lifetime in + * the cache than it should. + */ +STATIC void +_xfs_mru_cache_list_insert( + xfs_mru_cache_t *mru, + xfs_mru_cache_elem_t *elem) +{ + unsigned int grp = 0; + unsigned long now = jiffies; + + /* + * If the data store is empty, initialise time zero, leave grp set to + * zero and start the work queue timer if necessary. Otherwise, set grp + * to the number of group times that have elapsed since time zero. 
+ */ + if (!_xfs_mru_cache_migrate(mru, now)) { + mru->time_zero = now; + if (!mru->next_reap) + mru->next_reap = mru->grp_count * mru->grp_time; + } else { + grp = (now - mru->time_zero) / mru->grp_time; + grp = (mru->lru_grp + grp) % mru->grp_count; + } + + /* Insert the element at the tail of the corresponding list. */ + list_add_tail(&elem->list_node, mru->lists + grp); +} + +/* + * When destroying or reaping, all the elements that were migrated to the reap + * list need to be deleted. For each element this involves removing it from the + * data store, removing it from the reap list, calling the client's free + * function and deleting the element from the element zone. + */ +STATIC void +_xfs_mru_cache_clear_reap_list( + xfs_mru_cache_t *mru) +{ + xfs_mru_cache_elem_t *elem, *next; + struct list_head tmp; + + INIT_LIST_HEAD(&tmp); + list_for_each_entry_safe(elem, next, &mru->reap_list, list_node) { + + /* Remove the element from the data store. */ + radix_tree_delete(&mru->store, elem->key); + + /* + * Move the element to a temporary list so it can be freed + * without needing to hold the lock. + */ + list_move(&elem->list_node, &tmp); + } + mutex_spinunlock(&mru->lock, 0); + + list_for_each_entry_safe(elem, next, &tmp, list_node) { + + /* Remove the element from the reap list. */ + list_del_init(&elem->list_node); + + /* Call the client's free function with the key and value pointer. */ + mru->free_func(elem->key, elem->value); + + /* Free the element structure. */ + kmem_zone_free(xfs_mru_elem_zone, elem); + } + + mutex_spinlock(&mru->lock); +} + +/* + * We fire the reap timer every group expiry interval so + * we always have a reaper ready to run. This makes shutdown + * and flushing of the reaper easy to do. Hence we need to + * track when the next reap must occur so we can determine + * at each interval whether there is anything we need to do. 
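As a rough userspace sketch (the names here are stand-ins for the `xfs_mru_cache` fields, not the patch's code), the group selection in `_xfs_mru_cache_list_insert()` is plain integer bucketing: the time elapsed since the cache's time zero, divided into `grp_time` slices, offset from the current LRU group modulo the group count.

```c
#include <assert.h>

/*
 * Sketch of the MRU list-index computation: an element inserted at 'now'
 * lands in bucket (lru_grp + elapsed / grp_time) mod grp_count, where
 * elapsed is measured from the cache's time_zero.
 */
unsigned int
mru_bucket(unsigned long now, unsigned long time_zero,
	   unsigned int lru_grp, unsigned int grp_time,
	   unsigned int grp_count)
{
	unsigned int grp = (now - time_zero) / grp_time;

	return (lru_grp + grp) % grp_count;
}
```

For example, with five 2000ms groups, an insert 5000ms after time zero lands two groups past the LRU group.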
+ */ +STATIC void +_xfs_mru_cache_reap( + struct work_struct *work) +{ + xfs_mru_cache_t *mru = container_of(work, xfs_mru_cache_t, work.work); + unsigned long now; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return; + + mutex_spinlock(&mru->lock); + now = jiffies; + if (mru->reap_all || + (mru->next_reap && time_after(now, mru->next_reap))) { + if (mru->reap_all) + now += mru->grp_count * mru->grp_time * 2; + mru->next_reap = _xfs_mru_cache_migrate(mru, now); + _xfs_mru_cache_clear_reap_list(mru); + } + + /* + * The process that triggered the reap_all is responsible + * for restarting the periodic reap if it is required. + */ + if (!mru->reap_all) + queue_delayed_work(xfs_mru_reap_wq, &mru->work, mru->grp_time); + mru->reap_all = 0; + mutex_spinunlock(&mru->lock, 0); +} + +int +xfs_mru_cache_init(void) +{ + xfs_mru_elem_zone = kmem_zone_init(sizeof(xfs_mru_cache_elem_t), + "xfs_mru_cache_elem"); + if (!xfs_mru_elem_zone) + return ENOMEM; + + xfs_mru_reap_wq = create_singlethread_workqueue("xfs_mru_cache"); + if (!xfs_mru_reap_wq) { + kmem_zone_destroy(xfs_mru_elem_zone); + return ENOMEM; + } + + return 0; +} + +void +xfs_mru_cache_uninit(void) +{ + destroy_workqueue(xfs_mru_reap_wq); + kmem_zone_destroy(xfs_mru_elem_zone); +} + +int +xfs_mru_cache_create( + xfs_mru_cache_t **mrup, + unsigned int lifetime_ms, + unsigned int grp_count, + xfs_mru_cache_free_func_t free_func) +{ + xfs_mru_cache_t *mru = NULL; + int err = 0, grp; + unsigned int grp_time; + + if (mrup) + *mrup = NULL; + + if (!mrup || !grp_count || !lifetime_ms || !free_func) + return EINVAL; + + if (!(grp_time = msecs_to_jiffies(lifetime_ms) / grp_count)) + return EINVAL; + + if (!(mru = kmem_zalloc(sizeof(*mru), KM_SLEEP))) + return ENOMEM; + + /* An extra list is needed to avoid reaping up to a grp_time early. 
*/ + mru->grp_count = grp_count + 1; + mru->lists = kmem_alloc(mru->grp_count * sizeof(*mru->lists), KM_SLEEP); + + if (!mru->lists) { + err = ENOMEM; + goto exit; + } + + for (grp = 0; grp < mru->grp_count; grp++) + INIT_LIST_HEAD(mru->lists + grp); + + /* + * We use GFP_KERNEL radix tree preload and do inserts under a + * spinlock so GFP_ATOMIC is appropriate for the radix tree itself. + */ + INIT_RADIX_TREE(&mru->store, GFP_ATOMIC); + INIT_LIST_HEAD(&mru->reap_list); + spinlock_init(&mru->lock, "xfs_mru_cache"); + INIT_DELAYED_WORK(&mru->work, _xfs_mru_cache_reap); + + mru->grp_time = grp_time; + mru->free_func = free_func; + + /* start up the reaper event */ + mru->next_reap = 0; + mru->reap_all = 0; + queue_delayed_work(xfs_mru_reap_wq, &mru->work, mru->grp_time); + + *mrup = mru; + +exit: + if (err && mru && mru->lists) + kmem_free(mru->lists, mru->grp_count * sizeof(*mru->lists)); + if (err && mru) + kmem_free(mru, sizeof(*mru)); + + return err; +} + +/* + * When flushing, we stop the periodic reaper from running first + * so we don't race with it. If we are flushing on unmount, we + * don't want to restart the reaper again, so the restart is conditional. + * + * Because reaping can drop the last refcount on inodes which can free + * extents, we have to push the reaping off to the workqueue thread + * because we could be called holding locks that extent freeing requires. 
+ */ +void +xfs_mru_cache_flush( + xfs_mru_cache_t *mru, + int restart) +{ + if (!mru || !mru->lists) + return; + + cancel_rearming_delayed_workqueue(xfs_mru_reap_wq, &mru->work); + + mutex_spinlock(&mru->lock); + mru->reap_all = 1; + mutex_spinunlock(&mru->lock, 0); + + queue_work(xfs_mru_reap_wq, &mru->work.work); + flush_workqueue(xfs_mru_reap_wq); + + mutex_spinlock(&mru->lock); + WARN_ON_ONCE(mru->reap_all != 0); + mru->reap_all = 0; + if (restart) + queue_delayed_work(xfs_mru_reap_wq, &mru->work, mru->grp_time); + mutex_spinunlock(&mru->lock, 0); +} + +void +xfs_mru_cache_destroy( + xfs_mru_cache_t *mru) +{ + if (!mru || !mru->lists) + return; + + /* we don't want the reaper to restart here */ + xfs_mru_cache_flush(mru, 0); + + kmem_free(mru->lists, mru->grp_count * sizeof(*mru->lists)); + kmem_free(mru, sizeof(*mru)); +} + +int +xfs_mru_cache_insert( + xfs_mru_cache_t *mru, + unsigned long key, + void *value) +{ + xfs_mru_cache_elem_t *elem; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return EINVAL; + + elem = kmem_zone_zalloc(xfs_mru_elem_zone, KM_SLEEP); + if (!elem) + return ENOMEM; + + if (radix_tree_preload(GFP_KERNEL)) { + kmem_zone_free(xfs_mru_elem_zone, elem); + return ENOMEM; + } + + INIT_LIST_HEAD(&elem->list_node); + elem->key = key; + elem->value = value; + + mutex_spinlock(&mru->lock); + + radix_tree_insert(&mru->store, key, elem); + radix_tree_preload_end(); + _xfs_mru_cache_list_insert(mru, elem); + + mutex_spinunlock(&mru->lock, 0); + + return 0; +} + +void* +xfs_mru_cache_remove( + xfs_mru_cache_t *mru, + unsigned long key) +{ + xfs_mru_cache_elem_t *elem; + void *value = NULL; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return NULL; + + mutex_spinlock(&mru->lock); + elem = radix_tree_delete(&mru->store, key); + if (elem) { + value = elem->value; + list_del(&elem->list_node); + } + + mutex_spinunlock(&mru->lock, 0); + + if (elem) + kmem_zone_free(xfs_mru_elem_zone, elem); + + return value; +} + +void 
+xfs_mru_cache_delete( + xfs_mru_cache_t *mru, + unsigned long key) +{ + void *value = xfs_mru_cache_remove(mru, key); + + if (value) + mru->free_func(key, value); +} + +void* +xfs_mru_cache_lookup( + xfs_mru_cache_t *mru, + unsigned long key) +{ + xfs_mru_cache_elem_t *elem; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return NULL; + + mutex_spinlock(&mru->lock); + elem = radix_tree_lookup(&mru->store, key); + if (elem) { + list_del(&elem->list_node); + _xfs_mru_cache_list_insert(mru, elem); + } + else + mutex_spinunlock(&mru->lock, 0); + + return elem ? elem->value : NULL; +} + +void* +xfs_mru_cache_peek( + xfs_mru_cache_t *mru, + unsigned long key) +{ + xfs_mru_cache_elem_t *elem; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return NULL; + + mutex_spinlock(&mru->lock); + elem = radix_tree_lookup(&mru->store, key); + if (!elem) + mutex_spinunlock(&mru->lock, 0); + + return elem ? elem->value : NULL; +} + +void +xfs_mru_cache_done( + xfs_mru_cache_t *mru) +{ + mutex_spinunlock(&mru->lock, 0); +} Index: 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.h =================================================================== --- /dev/null 1970-01-01 00:00:00.000000000 +0000 +++ 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.h 2007-06-13 14:11:28.792279070 +1000 @@ -0,0 +1,219 @@ +/* + * Copyright (c) 2000-2002,2006 Silicon Graphics, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ +#ifndef __XFS_MRU_CACHE_H__ +#define __XFS_MRU_CACHE_H__ + +/* + * The MRU Cache data structure consists of a data store, an array of lists and + * a lock to protect its internal state. At initialisation time, the client + * supplies an element lifetime in milliseconds and a group count, as well as a + * function pointer to call when deleting elements. A data structure for + * queueing up work in the form of timed callbacks is also included. + * + * The group count controls how many lists are created, and thereby how finely + * the elements are grouped in time. When reaping occurs, all the elements in + * all the lists whose time has expired are deleted. + * + * To give an example of how this works in practice, consider a client that + * initialises an MRU Cache with a lifetime of ten seconds and a group count of + * five. Five internal lists will be created, each representing a two second + * period in time. When the first element is added, time zero for the data + * structure is initialised to the current time. + * + * All the elements added in the first two seconds are appended to the first + * list. Elements added in the third second go into the second list, and so on. + * If an element is accessed at any point, it is removed from its list and + * inserted at the head of the current most-recently-used list. + * + * The reaper function will have nothing to do until at least twelve seconds + * have elapsed since the first element was added. The reason for this is that + * if it were called at t=11s, there could be elements in the first list that + * have only been inactive for nine seconds, so it still does nothing. If it is + * called anywhere between t=12 and t=14 seconds, it will delete all the + * elements that remain in the first list. 
It's therefore possible for elements + * to remain in the data store even after they've been inactive for up to + * (t + t/g) seconds, where t is the inactive element lifetime and g is the + * number of groups. + * + * The above example assumes that the reaper function gets called at least once + * every (t/g) seconds. If it is called less frequently, unused elements will + * accumulate in the reap list until the reaper function is eventually called. + * The current implementation uses work queue callbacks to carefully time the + * reaper function calls, so this should happen rarely, if at all. + * + * From a design perspective, the primary reason for the choice of a list array + * representing discrete time intervals is that it's only practical to reap + * expired elements in groups of some appreciable size. This automatically + * introduces a granularity to element lifetimes, so there's no point storing an + * individual timeout with each element that specifies a more precise reap time. + * The bonus is a saving of sizeof(long) bytes of memory per element stored. + * + * The elements could have been stored in just one list, but an array of + * counters or pointers would need to be maintained to allow them to be divided + * up into discrete time groups. More critically, the process of touching or + * removing an element would involve walking large portions of the entire list, + * which would have a detrimental effect on performance. The additional memory + * requirement for the array of list heads is minimal. + * + * When an element is touched or deleted, it needs to be removed from its + * current list. Doubly linked lists are used to make the list maintenance + * portion of these operations O(1). Since reaper timing can be imprecise, + * inserts and lookups can occur when there are no free lists available. When + * this happens, all the elements on the LRU list need to be migrated to the end + * of the reap list. 
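The t + t/g bound above follows directly from the extra list that xfs_mru_cache_create() allocates: with g groups each spanning t/g, an idle element can occupy up to g + 1 lists' worth of time before the reaper catches it. A quick arithmetic sketch (userspace, names assumed for illustration):

```c
#include <assert.h>

/*
 * Worst-case residency of an idle element: a lifetime t split into g groups
 * of t/g each, plus the one extra list the cache creates to avoid reaping
 * up to a group-time early, gives (g + 1) * (t / g) == t + t/g.
 */
unsigned int
max_residency_ms(unsigned int lifetime_ms, unsigned int grp_count)
{
	unsigned int grp_time = lifetime_ms / grp_count;

	return (grp_count + 1) * grp_time;
}
```

For the example in the comment (t = 10s, g = 5), an idle element can remain cached for up to 12 seconds, which matches the reap window that opens at t = 12.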
To keep the list maintenance portion of these operations + * O(1) also, list tails need to be accessible without walking the entire list. + * This is the reason why doubly linked list heads are used. + */ + +/* Function pointer type for callback to free a client's data pointer. */ +typedef void (*xfs_mru_cache_free_func_t)(unsigned long, void*); + +typedef struct xfs_mru_cache +{ + struct radix_tree_root store; /* Core storage data structure. */ + struct list_head *lists; /* Array of lists, one per grp. */ + struct list_head reap_list; /* Elements overdue for reaping. */ + spinlock_t lock; /* Lock to protect this struct. */ + unsigned int grp_count; /* Number of discrete groups. */ + unsigned int grp_time; /* Time period spanned by grps. */ + unsigned int lru_grp; /* Group containing time zero. */ + unsigned long time_zero; /* Time first element was added. */ + unsigned long next_reap; /* Time that the reaper should + next do something. */ + unsigned int reap_all; /* if set, reap all lists */ + xfs_mru_cache_free_func_t free_func; /* Function pointer for freeing. */ + struct delayed_work work; /* Workqueue data for reaping. */ +} xfs_mru_cache_t; + +/* + * xfs_mru_cache_init() prepares memory zones and any other globally scoped + * resources. + */ +int +xfs_mru_cache_init(void); + +/* + * xfs_mru_cache_uninit() tears down all the globally scoped resources prepared + * in xfs_mru_cache_init(). + */ +void +xfs_mru_cache_uninit(void); + +/* + * To initialise a struct xfs_mru_cache pointer, call xfs_mru_cache_create() + * with the address of the pointer, a lifetime value in milliseconds, a group + * count and a free function to use when deleting elements. This function + * returns 0 if the initialisation was successful. 
+ */ +int +xfs_mru_cache_create(struct xfs_mru_cache **mrup, + unsigned int lifetime_ms, + unsigned int grp_count, + xfs_mru_cache_free_func_t free_func); + +/* + * Call xfs_mru_cache_flush() to flush out all cached entries, calling their + * free functions as they're deleted. When this function returns, the caller is + * guaranteed that all the free functions for all the elements have finished + * executing. + * + * While we are flushing, we stop the periodic reaper event from triggering. + * Normally, we want to restart this periodic event, but if we are shutting + * down the cache we do not want it restarted. Hence the restart parameter, + * where 0 = do not restart reaper and 1 = restart reaper. + */ +void +xfs_mru_cache_flush( + xfs_mru_cache_t *mru, + int restart); + +/* + * Call xfs_mru_cache_destroy() with the MRU Cache pointer when the cache is no + * longer needed. + */ +void +xfs_mru_cache_destroy(struct xfs_mru_cache *mru); + +/* + * To insert an element, call xfs_mru_cache_insert() with the data store, the + * element's key and the client data pointer. This function returns 0 on + * success or ENOMEM if memory for the data element couldn't be allocated. + */ +int +xfs_mru_cache_insert(struct xfs_mru_cache *mru, + unsigned long key, + void *value); + +/* + * To remove an element without calling the free function, call + * xfs_mru_cache_remove() with the data store and the element's key. On success + * the client data pointer for the removed element is returned, otherwise this + * function will return a NULL pointer. + */ +void* +xfs_mru_cache_remove(struct xfs_mru_cache *mru, + unsigned long key); + +/* + * To remove an element and call the free function, call xfs_mru_cache_delete() + * with the data store and the element's key. + */ +void +xfs_mru_cache_delete(struct xfs_mru_cache *mru, + unsigned long key); + +/* + * To look up an element using its key, call xfs_mru_cache_lookup() with the + * data store and the element's key.
If found, the element will be moved to the + * head of the MRU list to indicate that it's been touched. + * + * The internal data structures are protected by a spinlock that is STILL HELD + * when this function returns. Call xfs_mru_cache_done() to release it. Note + * that it is not safe to call any function that might sleep in the interim. + * + * The implementation could have used reference counting to avoid this + * restriction, but since most clients simply want to get, set or test a member + * of the returned data structure, the extra per-element memory isn't warranted. + * + * If the element isn't found, this function returns NULL and the spinlock is + * released. xfs_mru_cache_done() should NOT be called when this occurs. + */ +void* +xfs_mru_cache_lookup(struct xfs_mru_cache *mru, + unsigned long key); + +/* + * To look up an element using its key, but leave its location in the internal + * lists alone, call xfs_mru_cache_peek(). If the element isn't found, this + * function returns NULL. + * + * See the comments above the declaration of the xfs_mru_cache_lookup() function + * for important locking information pertaining to this call. + */ +void* +xfs_mru_cache_peek(struct xfs_mru_cache *mru, + unsigned long key); +/* + * To release the internal data structure spinlock after having performed an + * xfs_mru_cache_lookup() or an xfs_mru_cache_peek(), call xfs_mru_cache_done() + * with the data store pointer. 
+ */ +void +xfs_mru_cache_done(struct xfs_mru_cache *mru); + +#endif /* __XFS_MRU_CACHE_H__ */ Index: 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vfsops.c 2007-06-13 13:58:15.787510441 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c 2007-06-18 11:34:29.619660310 +1000 @@ -51,6 +51,8 @@ #include "xfs_acl.h" #include "xfs_attr.h" #include "xfs_clnt.h" +#include "xfs_mru_cache.h" +#include "xfs_filestream.h" #include "xfs_fsops.h" STATIC int xfs_sync(bhv_desc_t *, int, cred_t *); @@ -81,6 +83,8 @@ xfs_init(void) xfs_dabuf_zone = kmem_zone_init(sizeof(xfs_dabuf_t), "xfs_dabuf"); xfs_ifork_zone = kmem_zone_init(sizeof(xfs_ifork_t), "xfs_ifork"); xfs_acl_zone_init(xfs_acl_zone, "xfs_acl"); + xfs_mru_cache_init(); + xfs_filestream_init(); /* * The size of the zone allocated buf log item is the maximum @@ -164,6 +168,8 @@ xfs_cleanup(void) xfs_cleanup_procfs(); xfs_sysctl_unregister(); xfs_refcache_destroy(); + xfs_filestream_uninit(); + xfs_mru_cache_uninit(); xfs_acl_zone_destroy(xfs_acl_zone); #ifdef XFS_DIR2_TRACE @@ -320,6 +326,9 @@ xfs_start_flags( else mp->m_flags &= ~XFS_MOUNT_BARRIER; + if (ap->flags2 & XFSMNT2_FILESTREAMS) + mp->m_flags |= XFS_MOUNT_FILESTREAMS; + return 0; } @@ -518,6 +527,9 @@ xfs_mount( if (mp->m_flags & XFS_MOUNT_BARRIER) xfs_mountfs_check_barriers(mp); + if ((error = xfs_filestream_mount(mp))) + goto error2; + error = XFS_IOINIT(vfsp, args, flags); if (error) goto error2; @@ -575,6 +587,13 @@ xfs_unmount( */ xfs_refcache_purge_mp(mp); + /* + * Blow away any referenced inode in the filestreams cache. + * This can and will cause log traffic as inodes go inactive + * here. 
+ */ + xfs_filestream_unmount(mp); + XFS_bflush(mp->m_ddev_targp); error = xfs_unmount_flush(mp, 0); if (error) @@ -706,6 +725,7 @@ xfs_mntupdate( mp->m_flags &= ~XFS_MOUNT_BARRIER; } } else if (!(vfsp->vfs_flag & VFS_RDONLY)) { /* rw -> ro */ + xfs_filestream_flush(mp); bhv_vfs_sync(vfsp, SYNC_DATA_QUIESCE, NULL); xfs_attr_quiesce(mp); vfsp->vfs_flag |= VFS_RDONLY; @@ -930,6 +950,9 @@ xfs_sync( { xfs_mount_t *mp = XFS_BHVTOM(bdp); + if (flags & SYNC_IOWAIT) + xfs_filestream_flush(mp); + return xfs_syncsub(mp, flags, NULL); } @@ -1680,6 +1703,7 @@ xfs_vget( * in stat(). */ #define MNTOPT_ATTR2 "attr2" /* do use attr2 attribute format */ #define MNTOPT_NOATTR2 "noattr2" /* do not use attr2 attribute format */ +#define MNTOPT_FILESTREAM "filestreams" /* use filestreams allocator */ STATIC unsigned long suffix_strtoul(char *s, char **endp, unsigned int base) @@ -1866,6 +1890,8 @@ xfs_parseargs( args->flags |= XFSMNT_ATTR2; } else if (!strcmp(this_char, MNTOPT_NOATTR2)) { args->flags &= ~XFSMNT_ATTR2; + } else if (!strcmp(this_char, MNTOPT_FILESTREAM)) { + args->flags2 |= XFSMNT2_FILESTREAMS; } else if (!strcmp(this_char, "osyncisdsync")) { /* no-op, this is now the default */ cmn_err(CE_WARN, Index: 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vnodeops.c 2007-06-13 13:58:15.855501631 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c 2007-06-18 11:43:23.838259197 +1000 @@ -51,6 +51,7 @@ #include "xfs_refcache.h" #include "xfs_trans_space.h" #include "xfs_log_priv.h" +#include "xfs_filestream.h" STATIC int xfs_open( @@ -819,6 +820,8 @@ xfs_setattr( di_flags |= XFS_DIFLAG_PROJINHERIT; if (vap->va_xflags & XFS_XFLAG_NODEFRAG) di_flags |= XFS_DIFLAG_NODEFRAG; + if (vap->va_xflags & XFS_XFLAG_FILESTREAM) + di_flags |= XFS_DIFLAG_FILESTREAM; if ((ip->i_d.di_mode & S_IFMT) == S_IFDIR) { if (vap->va_xflags & XFS_XFLAG_RTINHERIT) di_flags |= XFS_DIFLAG_RTINHERIT; @@ -1571,6 +1574,18 @@ 
xfs_release( if (vp->v_vfsp->vfs_flag & VFS_RDONLY) return 0; + if (!XFS_FORCED_SHUTDOWN(mp)) { + /* + * If we are using filestreams, and we have an unlinked + * file that we are processing the last close on, then nothing + * will be able to reopen and write to this file. Purge this + * inode from the filestreams cache so that it doesn't delay + * teardown of the inode. + */ + if ((ip->i_d.di_nlink == 0) && xfs_inode_is_filestream(ip)) + xfs_filestream_deassociate(ip); + } + #ifdef HAVE_REFCACHE /* If we are in the NFS reference cache then don't do this now */ if (ip->i_refcache) @@ -2563,6 +2578,15 @@ xfs_remove( */ xfs_refcache_purge_ip(ip); + /* + * If we are using filestreams, kill the stream association. + * If the file is still open it may get a new one but that + * will get killed on last close in xfs_close() so we don't + * have to worry about that. + */ + if (link_zero && xfs_inode_is_filestream(ip)) + xfs_filestream_deassociate(ip); + vn_trace_exit(XFS_ITOV(ip), __FUNCTION__, (inst_t *)__return_address); /* Index: 2.6.x-xfs-new/fs/xfs/xfs.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs.h 2007-06-13 13:58:15.879498521 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs.h 2007-06-13 14:11:28.972255580 +1000 @@ -38,6 +38,7 @@ #define XFS_RW_TRACE 1 #define XFS_BUF_TRACE 1 #define XFS_VNODE_TRACE 1 +#define XFS_FILESTREAMS_TRACE 1 #endif #include Index: 2.6.x-xfs-new/fs/xfs/xfsidbg.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfsidbg.c 2007-06-13 13:58:15.879498521 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfsidbg.c 2007-06-18 11:33:46.089307950 +1000 @@ -63,6 +63,7 @@ #include "quota/xfs_qm.h" #include "xfs_iomap.h" #include "xfs_buf.h" +#include "xfs_filestream.h" MODULE_AUTHOR("Silicon Graphics, Inc."); MODULE_DESCRIPTION("Additional kdb commands for debugging XFS"); @@ -109,6 +110,9 @@ static void xfsidbg_xlog_granttrace(xlog #ifdef XFS_DQUOT_TRACE static void 
xfsidbg_xqm_dqtrace(xfs_dquot_t *); #endif +#ifdef XFS_FILESTREAMS_TRACE +static void xfsidbg_filestreams_trace(int); +#endif /* @@ -197,6 +201,9 @@ static int xfs_bmbt_trace_entry(ktrace_e #ifdef XFS_DIR2_TRACE static int xfs_dir2_trace_entry(ktrace_entry_t *ktep); #endif +#ifdef XFS_FILESTREAMS_TRACE +static void xfs_filestreams_trace_entry(ktrace_entry_t *ktep); +#endif #ifdef XFS_RW_TRACE static void xfs_bunmap_trace_entry(ktrace_entry_t *ktep); static void xfs_rw_enter_trace_entry(ktrace_entry_t *ktep); @@ -761,6 +768,27 @@ static int kdbm_xfs_xalttrace( } #endif /* XFS_ALLOC_TRACE */ +#ifdef XFS_FILESTREAMS_TRACE +static int kdbm_xfs_xfstrmtrace( + int argc, + const char **argv) +{ + unsigned long addr; + int nextarg = 1; + long offset = 0; + int diag; + + if (argc != 1) + return KDB_ARGCOUNT; + diag = kdbgetaddrarg(argc, argv, &nextarg, &addr, &offset, NULL); + if (diag) + return diag; + + xfsidbg_filestreams_trace((int) addr); + return 0; +} +#endif /* XFS_FILESTREAMS_TRACE */ + static int kdbm_xfs_xattrcontext( int argc, const char **argv) @@ -2639,6 +2667,10 @@ static struct xif xfsidbg_funcs[] = { "Dump XFS bmap extents in inode"}, { "xflist", kdbm_xfs_xflist, "", "Dump XFS to-be-freed extent records"}, +#ifdef XFS_FILESTREAMS_TRACE + { "xfstrmtrc",kdbm_xfs_xfstrmtrace, "", + "Dump filestreams trace buffer"}, +#endif { "xhelp", kdbm_xfs_xhelp, "", "Print idbg-xfs help"}, { "xicall", kdbm_xfs_xiclogall, "", @@ -5305,6 +5337,162 @@ xfsidbg_xailock_trace(int count) } #endif +#ifdef XFS_FILESTREAMS_TRACE +static void +xfs_filestreams_trace_entry(ktrace_entry_t *ktep) +{ + xfs_inode_t *ip, *pip; + + /* function:line#[pid]: */ + kdb_printf("%s:%lu[%lu]: ", (char *)ktep->val[1], + ((unsigned long)ktep->val[0] >> 16) & 0xffff, + (unsigned long)ktep->val[2]); + switch ((unsigned long)ktep->val[0] & 0xffff) { + case XFS_FSTRM_KTRACE_INFO: + break; + case XFS_FSTRM_KTRACE_AGSCAN: + kdb_printf("scanning AG %ld[%ld]", + (long)ktep->val[4], (long)ktep->val[5]); + 
break; + case XFS_FSTRM_KTRACE_AGPICK1: + kdb_printf("using max_ag %ld[1] with maxfree %ld", + (long)ktep->val[4], (long)ktep->val[5]); + break; + case XFS_FSTRM_KTRACE_AGPICK2: + + kdb_printf("startag %ld newag %ld[%ld] free %ld scanned %ld" + " flags 0x%lx", + (long)ktep->val[4], (long)ktep->val[5], + (long)ktep->val[6], (long)ktep->val[7], + (long)ktep->val[8], (long)ktep->val[9]); + break; + case XFS_FSTRM_KTRACE_UPDATE: + ip = (xfs_inode_t *)ktep->val[4]; + if ((__psint_t)ktep->val[5] != (__psint_t)ktep->val[7]) + kdb_printf("found ip %p ino %llu, AG %ld[%ld] ->" + " %ld[%ld]", ip, (unsigned long long)ip->i_ino, + (long)ktep->val[7], (long)ktep->val[8], + (long)ktep->val[5], (long)ktep->val[6]); + else + kdb_printf("found ip %p ino %llu, AG %ld[%ld]", + ip, (unsigned long long)ip->i_ino, + (long)ktep->val[5], (long)ktep->val[6]); + break; + + case XFS_FSTRM_KTRACE_FREE: + ip = (xfs_inode_t *)ktep->val[4]; + pip = (xfs_inode_t *)ktep->val[5]; + if (ip->i_d.di_mode & S_IFDIR) + kdb_printf("deleting dip %p ino %llu, AG %ld[%ld]", + ip, (unsigned long long)ip->i_ino, + (long)ktep->val[6], (long)ktep->val[7]); + else + kdb_printf("deleting file %p ino %llu, pip %p ino %llu" + ", AG %ld[%ld]", + ip, (unsigned long long)ip->i_ino, + pip, (unsigned long long)(pip ? pip->i_ino : 0), + (long)ktep->val[6], (long)ktep->val[7]); + break; + + case XFS_FSTRM_KTRACE_ITEM_LOOKUP: + ip = (xfs_inode_t *)ktep->val[4]; + pip = (xfs_inode_t *)ktep->val[5]; + if (!pip) { + kdb_printf("lookup on %s ip %p ino %llu failed, returning %ld", + ip->i_d.di_mode & S_IFREG ? 
"file" : "dir", ip, + (unsigned long long)ip->i_ino, (long)ktep->val[6]); + } else if (ip->i_d.di_mode & S_IFREG) + kdb_printf("lookup on file ip %p ino %llu dir %p" + " dino %llu got AG %ld[%ld]", + ip, (unsigned long long)ip->i_ino, + pip, (unsigned long long)pip->i_ino, + (long)ktep->val[6], (long)ktep->val[7]); + else + kdb_printf("lookup on dir ip %p ino %llu got AG %ld[%ld]", + ip, (unsigned long long)ip->i_ino, + (long)ktep->val[6], (long)ktep->val[7]); + break; + + case XFS_FSTRM_KTRACE_ASSOCIATE: + ip = (xfs_inode_t *)ktep->val[4]; + pip = (xfs_inode_t *)ktep->val[5]; + kdb_printf("pip %p ino %llu and ip %p ino %llu given ag %ld[%ld]", + pip, (unsigned long long)pip->i_ino, + ip, (unsigned long long)ip->i_ino, + (long)ktep->val[6], (long)ktep->val[7]); + break; + + case XFS_FSTRM_KTRACE_MOVEAG: + ip = ktep->val[4]; + pip = ktep->val[5]; + if ((long)ktep->val[6] != NULLAGNUMBER) + kdb_printf("dir %p ino %llu to file ip %p ino %llu has" + " moved %ld[%ld] -> %ld[%ld]", + pip, (unsigned long long)pip->i_ino, + ip, (unsigned long long)ip->i_ino, + (long)ktep->val[6], (long)ktep->val[7], + (long)ktep->val[8], (long)ktep->val[9]); + else + kdb_printf("pip %p ino %llu and ip %p ino %llu moved" + " to new ag %ld[%ld]", + pip, (unsigned long long)pip->i_ino, + ip, (unsigned long long)ip->i_ino, + (long)ktep->val[8], (long)ktep->val[9]); + break; + + case XFS_FSTRM_KTRACE_ORPHAN: + ip = ktep->val[4]; + kdb_printf("gave ag %lld to orphan ip %p ino %llu", + (__psint_t)ktep->val[5], + ip, (unsigned long long)ip->i_ino); + break; + default: + kdb_printf("unknown trace type 0x%lx", + (unsigned long)ktep->val[0] & 0xffff); + } + kdb_printf("\n"); +} + +static void +xfsidbg_filestreams_trace(int count) +{ + ktrace_entry_t *ktep; + ktrace_snap_t kts; + int nentries; + int skip_entries; + + if (xfs_filestreams_trace_buf == NULL) { + qprintf("The xfs filestreams trace buffer is not initialized\n"); + return; + } + nentries = ktrace_nentries(xfs_filestreams_trace_buf); + if
(count == -1) { + count = nentries; + } + if ((count <= 0) || (count > nentries)) { + qprintf("Invalid count. There are %d entries.\n", nentries); + return; + } + + ktep = ktrace_first(xfs_filestreams_trace_buf, &kts); + if (count != nentries) { + /* + * Skip the total minus the number to look at minus one + * for the entry returned by ktrace_first(). + */ + skip_entries = nentries - count - 1; + ktep = ktrace_skip(xfs_filestreams_trace_buf, skip_entries, &kts); + if (ktep == NULL) { + qprintf("Skipped them all\n"); + return; + } + } + while (ktep != NULL) { + xfs_filestreams_trace_entry(ktep); + ktep = ktrace_next(xfs_filestreams_trace_buf, &kts); + } +} +#endif /* * Compute & print buffer's checksum. */ Index: 2.6.x-xfs-new/fs/xfs/xfs_inode.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_inode.h 2007-06-18 11:33:56.907904372 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_inode.h 2007-06-18 11:39:05.527825433 +1000 @@ -366,6 +366,7 @@ xfs_iflags_test(xfs_inode_t *ip, unsigne #define XFS_ISTALE 0x0010 /* inode has been staled */ #define XFS_IRECLAIMABLE 0x0020 /* inode can be reclaimed */ #define XFS_INEW 0x0040 +#define XFS_IFILESTREAM 0x0080 /* inode is in a filestream directory */ /* * Flags for inode locking. 
From owner-xfs@oss.sgi.com Mon Jun 18 12:47:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 12:47:41 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5IJkvds002574 for ; Mon, 18 Jun 2007 12:47:10 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA01231; Mon, 18 Jun 2007 11:55:13 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5I1tCAf126024599; Mon, 18 Jun 2007 11:55:13 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5I1tCKr126185107; Mon, 18 Jun 2007 11:55:12 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 18 Jun 2007 11:55:12 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: [PATCH 3 of 3] Multi-File Data Streams V3 - export radix tree function Message-ID: <20070618015512.GY86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11831 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs XFS can be built as a module, so the filestreams code needs this function exported. 
--- lib/radix-tree.c | 1 + 1 file changed, 1 insertion(+) Index: 2.6.x-xfs-new/lib/radix-tree.c =================================================================== --- 2.6.x-xfs-new.orig/lib/radix-tree.c 2007-03-29 19:00:53.802804161 +1000 +++ 2.6.x-xfs-new/lib/radix-tree.c 2007-03-29 19:07:10.297495640 +1000 @@ -151,6 +151,7 @@ int radix_tree_preload(gfp_t gfp_mask) out: return ret; } +EXPORT_SYMBOL(radix_tree_preload); static inline void tag_set(struct radix_tree_node *node, unsigned int tag, int offset) From owner-xfs@oss.sgi.com Mon Jun 18 12:47:19 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 12:47:41 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5IJkvdw002574 for ; Mon, 18 Jun 2007 12:47:16 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA01109; Mon, 18 Jun 2007 11:47:33 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5I1lWAf125149683; Mon, 18 Jun 2007 11:47:32 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5I1lVD3126079029; Mon, 18 Jun 2007 11:47:31 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 18 Jun 2007 11:47:31 +1000 From: David Chinner To: Christoph Hellwig Cc: David Chinner , xfs-dev , xfs-oss Subject: Re: Review: Multi-File Data Streams V2 Message-ID: <20070618014731.GV86004887@sgi.com> References: <20070613041629.GI86004887@sgi.com> <20070616203851.GA7817@infradead.org> Mime-Version: 1.0 Content-Type: 
text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070616203851.GA7817@infradead.org> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11831 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Sat, Jun 16, 2007 at 09:38:51PM +0100, Christoph Hellwig wrote: > Thanks, this version looks a lot better now. > > The pip checks in xfs_inode.c are still in, but I'm pretty sure they're > not necessary, and even if they were necessary they'd need a good comment > explaining why. Because quota inodes don't have a parent, and doing filestreams stuff on quota inodes causes all sorts of problems. I forgot to split that patch out like you asked last time. Will now be patch 2, because without this fix we get extra references on the quota inodes that never get removed and hence busy-inodes-after-unmount problems.... > The patch still hooks into xfs_close despite your comment that you > updated it for the removal of it. That's because it is moved in the xfs_close removal patch later in my series. I'll move it. > I still strongly believe the mru cache should not be inside xfs. It's > a completely generic library function and should go into lib/ so it's > available to all of the kernel. That means it'll need some coding style > updates and proper kerneldoc comments, though. And like I said last time: I don't disagree with you. However: I'm not going to hold back the filestreams code for this. Doing janitorial work like this is a complete and utter waste of my time and it does nothing to improve the code right now. I'll happily accept patches that move this code to lib/ if someone goes and does it before I find the cycles to be able to do it. Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 18 12:52:38 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 12:52:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.0 required=5.0 tests=AWL,BAYES_80 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5IJqado006872 for ; Mon, 18 Jun 2007 12:52:38 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 5A819E6C34; Sun, 17 Jun 2007 12:38:27 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id LpwvD95NjhnY; Sun, 17 Jun 2007 12:35:01 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id D75E2E6C1C; Sun, 17 Jun 2007 12:38:26 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1Hzt5V-00045W-3h; Sun, 17 Jun 2007 12:38:33 +0100 Message-ID: <46751D37.5020608@dgreaves.com> Date: Sun, 17 Jun 2007 12:38:31 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: David Robinson Cc: LVM general discussion and development , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid Subject: Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume References: <46744065.6060605@dgreaves.com> <4674645F.5000906@gmail.com> In-Reply-To: <4674645F.5000906@gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11832 X-ecartis-version: 
Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs David Robinson wrote: > David Greaves wrote: >> This isn't a regression. >> >> I was seeing these problems on 2.6.21 (but 22 was in -rc so I waited >> to try it). >> I tried 2.6.22-rc4 (with Tejun's patches) to see if it had improved - no. >> >> Note this is a different (desktop) machine to that involved my recent >> bugs. >> >> The machine will work for days (continually powered up) without a >> problem and then exhibits a filesystem failure within minutes of a >> resume. >> >> I know xfs/raid are OK with hibernate. Is lvm? > > I have LVM working with hibernate w/o any problems (w/ ext3). If there > were a problem it wouldn't be with LVM but with device-mapper, and I > doubt there's a problem with either. The stack trace shows that you're > within XFS code (but it's likely its hibernate). Thanks - that's good to know. The suspicion arises because I have xfs on raid1 as root and have *never* had a problem with that filesystem. It's *always* xfs on lvm on raid5. I also have another system (previously discussed) that reliably hibernated xfs on raid6. (Clearly raid5 is in my suspect list) > You can easily check whether its LVM/device-mapper: > > 1) check "dmsetup table" - it should be the same before hibernating and > after resuming. > > 2) read directly from the LV - ie, "dd if=/dev/mapper/video_vg-video_lv > of=/dev/null bs=10M count=200". > > If dmsetup shows the same info and you can read directly from the LV I > doubt it would be a LVM/device-mapper problem. OK, that gave me an idea. Freeze the filesystem md5sum the lvm hibernate resume md5sum the lvm so: haze:~# xfs_freeze -f /scratch/ Without this sync, the next two md5sums differed.. 
haze:~# sync haze:~# dd if=/dev/video_vg/video_lv bs=10M count=200 | md5sum 200+0 records in 200+0 records out 2097152000 bytes (2.1 GB) copied, 41.2495 seconds, 50.8 MB/s f42539366bb4269623fa4db14e8e8be2 - haze:~# dd if=/dev/video_vg/video_lv bs=10M count=200 | md5sum 200+0 records in 200+0 records out 2097152000 bytes (2.1 GB) copied, 41.8111 seconds, 50.2 MB/s f42539366bb4269623fa4db14e8e8be2 - haze:~# echo platform > /sys/power/disk haze:~# echo disk > /sys/power/state haze:~# dd if=/dev/video_vg/video_lv bs=10M count=200 | md5sum 200+0 records in 200+0 records out 2097152000 bytes (2.1 GB) copied, 42.0478 seconds, 49.9 MB/s f42539366bb4269623fa4db14e8e8be2 - haze:~# xfs_freeze -u /scratch/ So the lvm and below looks OK... I'll see how it behaves now the filesystem has been frozen/thawed over the hibernate... David From owner-xfs@oss.sgi.com Mon Jun 18 13:09:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 13:09:05 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.5 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5IK90do014094 for ; Mon, 18 Jun 2007 13:09:01 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id EB3E8E6D6F; Mon, 18 Jun 2007 20:14:17 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id 8kADBILoiUg0; Mon, 18 Jun 2007 20:10:51 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id D92CAE6BC7; Mon, 18 Jun 2007 20:14:01 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1I0Mfu-0006hs-LZ; Mon, 18 Jun 
2007 20:14:06 +0100 Message-ID: <4676D97E.4000403@dgreaves.com> Date: Mon, 18 Jun 2007 20:14:06 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: David Chinner Cc: David Robinson , LVM general discussion and development , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid Subject: Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume References: <46744065.6060605@dgreaves.com> <4674645F.5000906@gmail.com> <46751D37.5020608@dgreaves.com> <4676390E.6010202@dgreaves.com> <20070618145007.GE85884050@sgi.com> In-Reply-To: <20070618145007.GE85884050@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11833 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs OK, just a quick ack. When I resumed tonight (having done a freeze/thaw over the suspend) some libata errors turned up during the resume and there was an eventual hard hang. Maybe I spoke too soon? I'm going to have to do some more testing... David Chinner wrote: > On Mon, Jun 18, 2007 at 08:49:34AM +0100, David Greaves wrote: >> David Greaves wrote: >> So doing: >> xfs_freeze -f /scratch >> sync >> echo platform > /sys/power/disk >> echo disk > /sys/power/state >> # resume >> xfs_freeze -u /scratch >> >> Works (for now - more usage testing tonight) > > Verrry interesting. Good :) > What you were seeing was an XFS shutdown occurring because the free space > btree was corrupted. IOWs, the process of suspend/resume has resulted > in either bad data being written to disk, the correct data not being > written to disk or the cached block being corrupted in memory. That's the kind of thing I was suspecting, yes.
> If you run xfs_check on the filesystem after it has shut down after a resume, > can you tell us if it reports on-disk corruption? Note: do not run xfs_repair > to check this - it does not check the free space btrees; instead it simply > rebuilds them from scratch. If xfs_check reports an error, then run xfs_repair > to fix it up. OK, I can try this tonight... > FWIW, I'm on record stating that "sync" is not sufficient to quiesce an XFS > filesystem for a suspend/resume to work safely and have argued that the only > safe thing to do is freeze the filesystem before suspend and thaw it after > resume. This is why I originally asked you to test that with the other problem > that you reported. Up until this point in time, there's been no evidence to > prove either side of the argument...... > > Cheers, > > Dave. From owner-xfs@oss.sgi.com Mon Jun 18 13:25:59 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 13:26:04 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.9 required=5.0 tests=AWL,BAYES_95 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5IKPvdo018542 for ; Mon, 18 Jun 2007 13:25:59 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id A6202E6C11; Sun, 17 Jun 2007 12:37:20 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id 6vCDAHw984jd; Sun, 17 Jun 2007 12:33:54 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 4B155E6BB0; Sun, 17 Jun 2007 12:37:18 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1Hzt4H-00045R-83; Sun, 17 Jun 2007 
12:37:17 +0100 Message-ID: <46751CEC.5090308@dgreaves.com> Date: Sun, 17 Jun 2007 12:37:16 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: "Rafael J. Wysocki" Cc: "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, LVM general discussion and development , linux-pm Subject: Re: 2.6.22-rc4 XFS fails after hibernate/resume References: <46744065.6060605@dgreaves.com> <200706170047.20559.rjw@sisk.pl> In-Reply-To: <200706170047.20559.rjw@sisk.pl> Content-Type: text/plain; charset=iso-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11834 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs Rafael J. Wysocki wrote: > On Saturday, 16 June 2007 21:56, David Greaves wrote: >> This isn't a regression. >> >> I was seeing these problems on 2.6.21 (but 22 was in -rc so I waited to try it). >> I tried 2.6.22-rc4 (with Tejun's patches) to see if it had improved - no. >> >> Note this is a different (desktop) machine to that involved my recent bugs. >> >> The machine will work for days (continually powered up) without a problem and >> then exhibits a filesystem failure within minutes of a resume. >> >> I know xfs/raid are OK with hibernate. Is lvm? >> >> The root filesystem is xfs on raid1 and that doesn't seem to have any problems. > > What is the partition that's showing problems? How's it set up, on how many > drives etc.? I did put that in the OP :) Here's a recap... 
/dev/mapper/video_vg-video_lv on /scratch type xfs (rw)

md1 : active raid5 sdd1[0] sda1[2] sdc1[1]
      390716672 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

haze:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               video_vg
  PV Size               372.62 GB / not usable 3.25 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              95389
  Free PE               0
  Allocated PE          95389
  PV UUID               IUig5k-460l-sMZc-23Iz-MMFl-Cfh9-XuBMiq

> > Also, is the dmesg output below from right after the resume? It runs OK for a few minutes - just enough to think "hey, maybe it'll work this time". Not more than an hour of normal use. Then you notice when some app fails because the filesystem went away. The dmesg comes from that point. David From owner-xfs@oss.sgi.com Mon Jun 18 13:32:43 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 13:32:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.4 required=5.0 tests=AWL,BAYES_95,SPF_HELO_PASS, URIBL_RHS_BOGUSMX,URIBL_RHS_POST,URIBL_RHS_TLD_WHOIS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5IKWgdo020729 for ; Mon, 18 Jun 2007 13:32:43 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 718A6B0004E6; Mon, 18 Jun 2007 07:07:39 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 6E03250000BA; Mon, 18 Jun 2007 07:07:39 -0400 (EDT) Date: Mon, 18 Jun 2007 07:07:39 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: David Chinner cc: xfs@oss.sgi.com, linux-raid@vger.kernel.org Subject: Re: XFS Tunables for High Speed Linux SW RAID5 Systems? 
In-Reply-To: <20070618000502.GU86004887@sgi.com> Message-ID: References: <20070618000502.GU86004887@sgi.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11835 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Dave, Questions inline and below. On Mon, 18 Jun 2007, David Chinner wrote: > On Fri, Jun 15, 2007 at 04:36:07PM -0400, Justin Piszcz wrote: >> Hi, >> >> I was wondering if the XFS folks can recommend any optimizations for high >> speed disk arrays using RAID5? > > [sysctls snipped] > > None of those options will make much difference to performance. > mkfs parameters are the big ticket item here.... > > >> There is also vm/dirty tunable in /proc. > > That changes benchmark times by starting writeback earlier, but > doesn't affect actual writeback speed. > >> I was wondering what are some things to tune for speed? I've already >> tuned the MD layer but is there anything with XFS I can also tune? >> >> echo "Setting read-ahead to 64MB for /dev/md3" >> blockdev --setra 65536 /dev/md3 This proved to give the fastest performance, I have always used 4GB then recently 8GB of memory in the machine. http://www.rhic.bnl.gov/hepix/talks/041019pm/schoen.pdf See page 13. > > Why so large? That's likely to cause readahead thrashing problems > under low memory.... > >> echo "Setting stripe_cache_size to 16MB for /dev/md3" >> echo 16384 > /sys/block/md3/md/stripe_cache_size >> >> (also set max_sectors_kb) to 128K (chunk size) and disable NCQ > > Why do that? You want XFS to issue large I/Os and the block layer > to split them across all the disks. i.e. you are preventing full > stripe writes from occurring by doing that. I use a 128k stripe, what should I use for the max_sectors_kb? 
I read that 128kb was optimal. Can you please comment on all of the optimizations below?

#!/bin/bash
# source profile
. /etc/profile

echo "Optimizing RAID Arrays..."

# This step must come first.
# See: http://www.3ware.com/KB/article.aspx?id=11050
echo "Setting max_sectors_kb to chunk size of RAID5 arrays..."
for i in sdc sdd sde sdf sdg sdh sdi sdj sdk sdl
do
  echo "Setting /dev/$i to 128K..."
  echo 128 > /sys/block/"$i"/queue/max_sectors_kb
done

echo "Setting read-ahead to 64MB for /dev/md3"
blockdev --setra 65536 /dev/md3

echo "Setting stripe_cache_size to 16MB for /dev/md3"
echo 16384 > /sys/block/md3/md/stripe_cache_size

# if you use more than the default 64kb stripe with raid5
# this feature is broken so you need to limit it to 30MB/s
# neil has a patch, not sure when it will be merged.
echo "Setting minimum and maximum resync speed to 30MB/s..."
echo 30000 > /sys/block/md0/md/sync_speed_min
echo 30000 > /sys/block/md0/md/sync_speed_max
echo 30000 > /sys/block/md1/md/sync_speed_min
echo 30000 > /sys/block/md1/md/sync_speed_max
echo 30000 > /sys/block/md2/md/sync_speed_min
echo 30000 > /sys/block/md2/md/sync_speed_max
echo 30000 > /sys/block/md3/md/sync_speed_min
echo 30000 > /sys/block/md3/md/sync_speed_max

# Disable NCQ.
echo "Disabling NCQ..."
for i in sdc sdd sde sdf sdg sdh sdi sdj sdk sdl
do
  echo "Disabling NCQ on $i"
  echo 1 > /sys/block/"$i"/device/queue_depth
done
From owner-xfs@oss.sgi.com Mon Jun 18 13:59:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 13:59:23 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5IKxGdo030859 for ; Mon, 18 Jun 2007 13:59:18 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 8BA3AE6C81; Mon, 18 Jun 2007 08:49:25 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id TxXQJ7koWUW6; Mon, 18 Jun 2007 08:45:59 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 9BF76E6FCD; Mon, 18 Jun 2007 08:49:24 +0100 (BST) Received: from [10.0.0.90] by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1I0BzT-0005es-V0; Mon, 18 Jun 2007 08:49:36 +0100 Message-ID: <4676390E.6010202@dgreaves.com> Date: Mon, 18 Jun 2007 08:49:34 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: David Robinson Cc: LVM general discussion and development , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid Subject: Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume References: <46744065.6060605@dgreaves.com> <4674645F.5000906@gmail.com> <46751D37.5020608@dgreaves.com> In-Reply-To: <46751D37.5020608@dgreaves.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11838 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com 
Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs David Greaves wrote: > David Robinson wrote: >> David Greaves wrote: >>> This isn't a regression. >>> >>> I was seeing these problems on 2.6.21 (but 22 was in -rc so I waited >>> to try it). >>> I tried 2.6.22-rc4 (with Tejun's patches) to see if it had improved - >>> no. >>> >>> Note this is a different (desktop) machine to that involved my recent >>> bugs. >>> >>> The machine will work for days (continually powered up) without a >>> problem and then exhibits a filesystem failure within minutes of a >>> resume. > OK, that gave me an idea. > > Freeze the filesystem > md5sum the lvm > hibernate > resume > md5sum the lvm > So the lvm and below looks OK... > > I'll see how it behaves now the filesystem has been frozen/thawed over > the hibernate... And it appears to behave well. (A few hours compile/clean cycling kernel builds on that filesystem were OK). Historically I've done: sync echo platform > /sys/power/disk echo disk > /sys/power/state # resume and had filesystem corruption (only on this machine, my other hibernating xfs machines don't have this problem) So doing: xfs_freeze -f /scratch sync echo platform > /sys/power/disk echo disk > /sys/power/state # resume xfs_freeze -u /scratch Works (for now - more usage testing tonight) David From owner-xfs@oss.sgi.com Mon Jun 18 13:59:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 13:59:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.4 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5IKxFdo030852 for ; Mon, 18 Jun 2007 13:59:17 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 2BCE5E6CD1; Mon, 18 Jun 
2007 12:58:51 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id 6RpMM5+-yMUQ; Mon, 18 Jun 2007 12:55:25 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 34783E6C9D; Mon, 18 Jun 2007 12:58:50 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1I0Fsq-00065n-NU; Mon, 18 Jun 2007 12:59:00 +0100 Message-ID: <46767384.8010309@dgreaves.com> Date: Mon, 18 Jun 2007 12:59:00 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: David Chinner Cc: Justin Piszcz , xfs@oss.sgi.com, linux-raid@vger.kernel.org Subject: Re: XFS Tunables for High Speed Linux SW RAID5 Systems? References: <20070618000502.GU86004887@sgi.com> In-Reply-To: <20070618000502.GU86004887@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11837 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs David Chinner wrote: > On Fri, Jun 15, 2007 at 04:36:07PM -0400, Justin Piszcz wrote: >> Hi, >> >> I was wondering if the XFS folks can recommend any optimizations for high >> speed disk arrays using RAID5? > > [sysctls snipped] > > None of those options will make much difference to performance. > mkfs parameters are the big ticket item here.... Is there anywhere you can point to that expands on this? Is there anything raid specific that would be worth including in the Wiki? 
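As a concrete illustration of the mkfs parameters Dave calls the "big ticket item": stripe alignment can be derived from the md geometry. This is a sketch with assumed values (a 3-disk RAID5 with a 64k chunk, matching the array shown earlier in the thread); check mkfs.xfs(8) for the su/sw options before relying on it.

```shell
#!/bin/sh
# Sketch: derive mkfs.xfs stripe-unit/stripe-width from md RAID5 geometry.
# The numbers below are assumptions for illustration only.
CHUNK_KB=64          # md chunk size in KB
NDISKS=3             # total disks in the RAID5 set
SW=$((NDISKS - 1))   # RAID5 loses one disk's worth of capacity to parity

# su = chunk size, sw = number of data disks
echo "mkfs.xfs -d su=${CHUNK_KB}k,sw=${SW} /dev/md1"
```

With matching max_sectors_kb and aligned allocation, XFS can issue full-stripe writes instead of forcing md into read-modify-write cycles.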
David From owner-xfs@oss.sgi.com Mon Jun 18 15:02:54 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 15:02:57 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.6 required=5.0 tests=AWL,BAYES_60,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from av8-1-sn3.vrr.skanova.net (av8-1-sn3.vrr.skanova.net [81.228.9.183]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5IM2qdo020190 for ; Mon, 18 Jun 2007 15:02:54 -0700 Received: by av8-1-sn3.vrr.skanova.net (Postfix, from userid 502) id A6B143901D; Sun, 17 Jun 2007 14:22:09 +0200 (CEST) Received: from smtp3-2-sn3.vrr.skanova.net (smtp3-2-sn3.vrr.skanova.net [81.228.9.102]) by av8-1-sn3.vrr.skanova.net (Postfix) with ESMTP id 8DFCB38197 for ; Sun, 17 Jun 2007 14:22:09 +0200 (CEST) Received: from cobra.e-626.net (h48n2fls32o1110.telia.com [217.209.79.48]) by smtp3-2-sn3.vrr.skanova.net (Postfix) with ESMTP id 7CDA637E4A for ; Sun, 17 Jun 2007 14:22:07 +0200 (CEST) Received: from [192.168.1.201] (h48n2fls32o1110.telia.com [217.209.79.48]) (authenticated bits=0) by cobra.e-626.net (8.14.0/8.14.0) with ESMTP id l5HCLxBO023538 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT) for ; Sun, 17 Jun 2007 14:22:04 +0200 Message-ID: <46752756.9070307@e-626.net> Date: Sun, 17 Jun 2007 14:21:42 +0200 From: Johan Andersson User-Agent: Thunderbird 1.5.0.12 (Windows/20070509) MIME-Version: 1.0 To: xfs@oss.sgi.com Subject: CVS server out of space for lock files Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: ClamAV 0.90.3/3441/Sun Jun 17 10:16:04 2007 on cobra.e-626.net X-Virus-Status: Clean X-archive-position: 11839 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: 
johan@e-626.net Precedence: bulk X-list: xfs Hi! Just noticed that the cvs server at oss.sgi.com seems to be out of disk space. A "cvs -d :pserver:cvs@oss.sgi.com:/cvs co xfs-cmds" gives: ... cvs checkout: failed to create lock directory for `/cvs/xfs-cmds/attr2' (/var/lock/cvs/xfs-cmds/attr2/#cvs.lock): No space left on device cvs checkout: failed to obtain dir lock in repository `/cvs/xfs-cmds/attr2' cvs [checkout aborted]: read lock failed - giving up /Johan Andersson From owner-xfs@oss.sgi.com Mon Jun 18 15:41:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 15:41:14 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.0 required=5.0 tests=BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from wa-out-1112.google.com (wa-out-1112.google.com [209.85.146.180]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5IMf8do029504 for ; Mon, 18 Jun 2007 15:41:09 -0700 Received: by wa-out-1112.google.com with SMTP id k22so2334747waf for ; Mon, 18 Jun 2007 15:41:09 -0700 (PDT) Received: by 10.115.14.1 with SMTP id r1mr6438473wai.1182187790372; Mon, 18 Jun 2007 10:29:50 -0700 (PDT) Received: by 10.115.94.4 with HTTP; Mon, 18 Jun 2007 10:29:50 -0700 (PDT) Message-ID: <9c21eeae0706181029i19fc80a2qce004c4329f0c6b2@mail.gmail.com> Date: Mon, 18 Jun 2007 10:29:50 -0700 From: "David Brown" To: xfs@oss.sgi.com Subject: linux dmapi support? MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11840 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dmlb2000@gmail.com Precedence: bulk X-list: xfs I was wondering what was going on with the dmapi support for linux? 
I see that the mount option got removed. Does this mean that it's on by default now? Or is the other option used to trigger when dmapi is enabled? I found some of the examples in the cvs tree for xfs-cmds under the tests, but they just don't seem to work, and for some reason it can't initialize the dmapi. I hope this doesn't mean that support is no longer available... *shrug* Any help would be appreciated. Thanks, - David Brown From owner-xfs@oss.sgi.com Mon Jun 18 16:32:24 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 16:32:26 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5INWLdo022656 for ; Mon, 18 Jun 2007 16:32:23 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA04930; Tue, 19 Jun 2007 09:32:18 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5INWHAf127012290; Tue, 19 Jun 2007 09:32:18 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5INWFG7126449826; Tue, 19 Jun 2007 09:32:15 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 19 Jun 2007 09:32:15 +1000 From: David Chinner To: David Brown Cc: xfs@oss.sgi.com Subject: Re: linux dmapi support? 
Message-ID: <20070618233215.GG85884050@sgi.com> References: <9c21eeae0706181029i19fc80a2qce004c4329f0c6b2@mail.gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <9c21eeae0706181029i19fc80a2qce004c4329f0c6b2@mail.gmail.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11841 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

On Mon, Jun 18, 2007 at 10:29:50AM -0700, David Brown wrote:
> I was wondering what was going on with the dmapi support for linux? I
> see that the mount option got removed does this mean that its on by
> default now? or is the other option used to trigger when dmapi is
> enabled?

Mount option got removed?

#define MNTOPT_DMAPI "dmapi" /* DMI enabled (DMAPI / XDSM) */
#define MNTOPT_XDSM  "xdsm"  /* DMI enabled (DMAPI / XDSM) */
#define MNTOPT_DMI   "dmi"   /* DMI enabled (DMAPI / XDSM) */

Still there, and it still works....

> I found some of the examples in the cvs tree for xfs-cmds under the
> tests but they just don't seem to work and for some reason it can't
> initialize the dmapi.

dmapi is not in kernel.org trees, so you need to use the cvs tree. Then you need both the CONFIG_DMAPI and CONFIG_XFS_DMAPI build options set to build as modules. Then you need to load the modules, and the QA tests will run (the qa tests check for the existence of a loaded dmapi module and don't run if it is not present).

Cheers, Dave. 
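Dave's checklist (build CONFIG_DMAPI and CONFIG_XFS_DMAPI as modules, load them, then run the QA tests) can be sanity-checked from a script. A sketch: only the `dmapi` module and mount-option names come from the mail above; the helper name and the example mount line are illustrative.

```shell
#!/bin/sh
# Sketch: verify the dmapi module is loaded before mounting with -o dmapi.
# The helper takes the modules listing as an argument so it can be
# exercised against any /proc/modules-style file.
dmapi_loaded() {
    grep -q '^dmapi ' "$1"
}

if [ -r /proc/modules ] && dmapi_loaded /proc/modules; then
    echo "dmapi loaded; e.g.: mount -t xfs -o dmapi /dev/sdb1 /mnt"
else
    echo "dmapi not loaded; try: modprobe dmapi && modprobe xfs_dmapi"
fi
```

This mirrors what the QA tests do: they simply refuse to run when no loaded dmapi module is found.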
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 18 16:54:37 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 16:54:41 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5INsXdo027702 for ; Mon, 18 Jun 2007 16:54:36 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA05493; Tue, 19 Jun 2007 09:54:30 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5INsTAf126902741; Tue, 19 Jun 2007 09:54:29 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5INsSbA126747671; Tue, 19 Jun 2007 09:54:28 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 19 Jun 2007 09:54:28 +1000 From: David Chinner To: Christoph Hellwig Cc: David Chinner , xfs-dev , xfs-oss , asg-qa Subject: Re: Review: fix test 004 to account for reserved space Message-ID: <20070618235428.GD86004887@sgi.com> References: <20070604063328.GT85884050@sgi.com> <20070616195508.GB6929@infradead.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070616195508.GB6929@infradead.org> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11842 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On 
Sat, Jun 16, 2007 at 08:55:08PM +0100, Christoph Hellwig wrote:
> On Mon, Jun 04, 2007 at 04:33:28PM +1000, David Chinner wrote:
> > With the changes to use some space by default only in memory
> > as a reserved pool, df and statfs will now output a free block
> > count that is slightly different to what is held in the superblock.
> >
> > Update the qa test to account for this change.
>
> I think we should rather subtract the amount of internally reserved blocks
> from the return value in xfs_statvfs.

Which return value? With this patch:

 fs/xfs/xfs_vfsops.c | 1 +
 1 file changed, 1 insertion(+)

Index: 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_vfsops.c	2007-06-08 21:46:29.000000000 +1000
+++ 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c	2007-06-12 13:08:49.933837815 +1000
@@ -876,6 +876,7 @@ xfs_statvfs(
 	statp->f_blocks = sbp->sb_dblocks - lsize;
 	statp->f_bfree = statp->f_bavail =
 		sbp->sb_fdblocks - XFS_ALLOC_SET_ASIDE(mp);
+	statp->f_bfree += mp->m_resblks_avail;
 	fakeinos = statp->f_bfree << sbp->sb_inopblog;
 #if XFS_BIG_INUMS
 	fakeinos += mp->m_inoadd;

An strace of df --block-size=4k gives:

statfs("/mnt/test", {f_type=0x58465342, f_bsize=4096, f_blocks=1048616, f_bfree=874158, f_bavail=873134, f_files=4204672, f_ffree=4191008, f_fsid={2072, 0}, f_namelen=255, f_frsize=4096}) = 0
write(1, "/dev/sdb8 1048616 "..., 66/dev/sdb8 1048616 174458 873134 17% /mnt/test
) = 66
statfs("/mnt/scratch", {f_type=0x58465342, f_bsize=4096, f_blocks=1248496, f_bfree=1248392, f_bavail=1247368, f_files=5004224, f_ffree=5004220, f_fsid={2073, 0}, f_namelen=255, f_frsize=4096}) = 0
write(1, "/dev/sdb9 1248496 "..., 69/dev/sdb9 1248496 104 1247368 1% /mnt/scratch

Is this what you were thinking of? Note that this still requires the fix to the qa test because the value in the on-disk superblock matches f_bfree, not f_bavail, and df appears to output f_bavail....

Cheers, Dave. 
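The f_bfree/f_bavail distinction the patch touches can also be observed from userspace without strace. A sketch using GNU coreutils' `stat --file-system` format specifiers; the filesystem path is illustrative.

```shell
#!/bin/sh
# Sketch: print the two statfs fields df chooses between.
# %f = f_bfree (free blocks); %a = f_bavail (free blocks available to
# unprivileged users). df's "Available" column comes from f_bavail.
fs=${1:-/}
bfree=$(stat -f -c '%f' "$fs")
bavail=$(stat -f -c '%a' "$fs")
echo "f_bfree=$bfree f_bavail=$bavail"
```

On most filesystems f_bfree >= f_bavail, since f_bavail excludes reserved blocks, which is exactly the gap the qa test has to account for.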
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 18 19:39:13 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 19:39:17 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=3.0 required=5.0 tests=BAYES_95,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from smtpout.eastlink.ca (smtpout.eastlink.ca [24.222.0.30]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5J2dCdo024134 for ; Mon, 18 Jun 2007 19:39:13 -0700 Received: from ip03.eastlink.ca ([24.222.10.15]) by mta01.eastlink.ca (Sun Java System Messaging Server 6.2-4.03 (built Sep 22 2005)) with ESMTP id <0JJR008QRYU20AT0@mta01.eastlink.ca> for xfs@oss.sgi.com; Sun, 17 Jun 2007 07:08:26 -0300 (ADT) Received: from blk-89-214-20.eastlink.ca (HELO llama.cordes.ca) ([24.89.214.20]) by ip03.eastlink.ca with ESMTP; Sun, 17 Jun 2007 07:07:51 -0300 Received: from peter by llama.cordes.ca with local (Exim 3.36 #1 (Debian)) id 1HzrgF-00067w-00 for ; Sun, 17 Jun 2007 07:08:23 -0300 Date: Sun, 17 Jun 2007 07:08:23 -0300 From: Peter Cordes Subject: XFS_IOC_RESVSP64 for swap files To: xfs@oss.sgi.com Message-id: <20070617100822.GA4586@cordes.ca> MIME-version: 1.0 Content-type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary=1yeeQ81UyVL57Vl7 Content-disposition: inline X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: Ah4FAFOkdEYYWdYU/2dsb2JhbACBTg X-IronPort-AV: E=Sophos;i="4.16,431,1175482800"; d="asc'?scan'208";a="16360239" User-Agent: Mutt/1.5.9i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11843 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: peter@cordes.ca Precedence: bulk X-list: xfs --1yeeQ81UyVL57Vl7 Content-Type: text/plain; 
charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable

Hi XFS list. I'm not subscribed, please CC me.

Programs such as swapspace and swapd create new swap files when vmem runs low. They would benefit hugely from being able to create a swapfile without any significant disk I/O. (If a process grabs a lot of memory quickly, the system will be swapping hard while swapspace(8) is writing a swapfile.)

Unfortunately,

touch foo
xfs_io -c 'truncate 1000000000' -c "resvsp 0 1000000000" foo
mkswap foo
sudo swapon foo

doesn't work. The kernel complains:

swapon: swapfile has holes

foo is a ~1GB file with disk space allocated for it, though. But reading it doesn't create any disk I/O and reads all zero, so it's treated like a sparse file. Is this because my filesystem flags unwritten extents? And if my FS was created with that option off, would RESVSP make the file contain the previous contents of that disk space? That would be an obvious security hole, but it would still be useful for making swap files even if only root could do it.

So, any ideas on how to make swap files without writing the whole file? (swapd and swapspace both avoid deleting swap files right away, IIRC, so don't suggest workarounds unless you have something really clever...)

Could swapon(2) in the kernel be made to work on XFS files with reserved space? i.e. call something that would give XFS a chance to mark all the extents as written, even though they're not. Memory content is at least as sensitive as anything in the filesystem, and if this file is going to be trusted with that, it hardly matters if it also has parts of deleted files.

I'm on GNU/Linux: Ubuntu Feisty AMD64, Linux 2.6.20-16-generic. 
xfs_io version 2.8.18

peter@tesla:/var/tmp/peter$ xfs_info /var/tmp
meta-data=/dev/evms/temp     isize=256    agcount=16, agsize=800767 blks
         =                   sectsz=512   attr=0
data     =                   bsize=4096   blocks=12812272, imaxpct=25
         =                   sunit=0      swidth=0 blks, unwritten=1
naming   =version 2          bsize=4096
log      =internal           bsize=4096   blocks=3328, version=1
         =                   sectsz=512   sunit=0 blks
realtime =none               extsz=65536  blocks=0, rtextents=0

BTW, I think xfs_allocsp has its args reversed, or something.

touch bar
xfs_io -c "allocsp 0 1000000" bar; ll -h
-rw-rw-r-- 1 peter peter 0 2007-06-17 06:45 bar
xfs_io -c "allocsp 1000000 0" bar; ll -h
-rw-rw-r-- 1 peter peter 977K 2007-06-17 06:45 bar

-- 
#define X(x,y) x##y
Peter Cordes ; e-mail: X(peter@cor , des.ca)

"The gods confound the man who first found out how to distinguish the hours! Confound him, too, who in this place set up a sundial, to cut and hack my day so wretchedly into small pieces!" -- Plautus, 200 BC

From owner-xfs@oss.sgi.com Mon Jun 18 21:33:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 21:33:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5J4Xedo024641 
for ; Mon, 18 Jun 2007 21:33:43 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA11290; Tue, 19 Jun 2007 14:33:36 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5J4XZAf127022676; Tue, 19 Jun 2007 14:33:35 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5J4XXmD125633301; Tue, 19 Jun 2007 14:33:33 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 19 Jun 2007 14:33:33 +1000 From: David Chinner To: Peter Cordes Cc: xfs@oss.sgi.com Subject: Re: XFS_IOC_RESVSP64 for swap files Message-ID: <20070619043333.GJ86004887@sgi.com> References: <20070617100822.GA4586@cordes.ca> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070617100822.GA4586@cordes.ca> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11844 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Sun, Jun 17, 2007 at 07:08:23AM -0300, Peter Cordes wrote: > Hi XFS list. I'm not subscribed, please CC me. > > Programs such as swapspace and swapd create new swap files when vmem runs > low. They would benefit hugely from being able to create a swapfile without > any significant disk I/O. (If a process grabs a lot of memory quickly, the > system will be swapping hard while swapspace(8) is writing a swapfile.) > > unfortunately, > touch foo > xfs_io -c 'truncate 1000000000' -c "resvsp 0 1000000000" foo > mkswap foo > sudo swapon foo > doesn't work. 
> The kernel complains:
>   swapon: swapfile has holes
>
> foo is a ~1GB file with disk space allocated for it, though. But reading
> it doesn't create any disk I/O and reads all zero, so it's treated like a
> sparse file. Is this because my filesystem flags unwritten extents?

Yes.

> And if my FS was created with that option off, would RESVSP make the file
> contain the previous contents of that disk space?

Yes.

> That would be an obvious security hole,

Yes. That's why unwritten extents were introduced 10 or so years ago.

> but it would still be useful for making swap files even if only root
> could do it.

Still a potential security hole.

> So, any ideas on how to make swap files without writing the whole file?

You can't. You need to use allocsp to allocate zero'd space. i.e.

  # xfs_io -f -c 'allocsp 1000000000 0' foo

> Could swapon(2) in the kernel be made to work on XFS files with reserved
> space?

Basically, the swapon syscall calls bmap() for the block mapping of the file and XFS returns "holes" for unwritten extents because this is the interface needed for reads to zero fill the pages. Something would need to be changed in XFS to make it return anything different, and that would break other things. So I doubt anything will change here.

> i.e. call something that would give XFS a chance to mark all the
> extents as written, even though they're not.

You mean like XFS_IOC_EXPOSE_MY_STALE_DATA_TO_EVERYONE? ;)

That's not going to happen. In fact, I plan to make unwritten extents non-optional soon (i.e. I've already got preliminary patches to do this) so that filesystems that have it turned off will get them turned on automatically. The reasons?

a) there is no good reason for unwritten=0 from a performance perspective
b) there is good reason for unwritten=1 from a security perspective
c) we need to use unwritten extents in place of written extents during delayed allocation to prevent stale data exposure on crash and when using extent size hints.
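Dave's point about swapon(2) and bmap() can be illustrated with a small userspace sketch (illustrative only, not the kernel's actual code): swapon walks the file block by block through the filesystem's bmap() interface, and any block that maps to 0 is treated as a hole, so a file whose extents are reported as holes (as XFS does for unwritten extents, so that reads zero-fill) is rejected with "swapfile has holes".

```c
#include <stddef.h>

/* Illustrative sketch only -- not the kernel's actual code. swapon(2)
 * asks the filesystem for the on-disk block of each file block via
 * bmap(); a result of 0 means "hole". Because XFS reports unwritten
 * extents as holes through this interface, a resvsp'd file fails this
 * scan even though disk space is fully reserved for it. */
static int swapfile_has_holes(const unsigned long *bmap_results,
                              size_t nblocks)
{
    for (size_t i = 0; i < nblocks; i++)
        if (bmap_results[i] == 0)  /* no backing block -> hole */
            return 1;
    return 0;
}
```

A fully written file maps every block to a real disk block, so the scan passes; a file with even one unwritten or reserved block fails it.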
So soon unwritten=0 is likely to go the way of the dodo.....

> BTW, I think xfs_allocsp has its args reversed, or something.
>   touch bar
>   xfs_io -c "allocsp 0 1000000" bar; ll -h
>   -rw-rw-r-- 1 peter peter 0 2007-06-17 06:45 bar
>   xfs_io -c "allocsp 1000000 0" bar; ll -h
>   -rw-rw-r-- 1 peter peter 977K 2007-06-17 06:45 bar

Nope, acting as [badly] documented. In the xfsctl man page:

"If the section specified is beyond the current end of file, the file is grown and filled with zeroes. The l_len field is currently ignored, and should be set to zero."

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Mon Jun 18 23:01:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 23:01:08 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.4 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE, WEIRD_PORT autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5J612do018447 for ; Mon, 18 Jun 2007 23:01:04 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA13153; Tue, 19 Jun 2007 16:00:58 +1000 Message-ID: <4677711A.4000109@sgi.com> Date: Tue, 19 Jun 2007 16:00:58 +1000 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.4 (Macintosh/20070604) MIME-Version: 1.0 To: torvalds@linux-foundation.org CC: xfs@oss.sgi.com, Andrew Morton Subject: [GIT PULL] XFS cleanup fix and maintainers file update Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11845 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to:
xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Hi Linus, Please pull from the for-linus branch: git pull git://oss.sgi.com:8090/xfs/xfs-2.6.git for-linus Will stop the annoying warning for memclear_highpage_flush in xfs. This will update the following files: MAINTAINERS | 2 +- fs/xfs/linux-2.6/xfs_lrw.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) through these commits: commit e99f056b2777f3fc6871ff6347c98c0321ad2f8f Author: Tim Shimmin Date: Tue Jun 19 15:26:35 2007 +1000 [XFS] Update the MAINTAINERS file entry for XFS - change git repo name. Make the git repository bare and so give it the conventional .git suffix. Signed-off-by: Tim Shimmin commit 700716c8468d95ec6d03566a4e4fb576c3223cbc Author: Christoph Hellwig Date: Thu May 24 15:27:17 2007 +1000 [XFS] s/memclear_highpage_flush/zero_user_page/ SGI-PV: 957103 SGI-Modid: xfs-linux-melb:xfs-kern:28678a Signed-off-by: Christoph Hellwig Signed-off-by: Tim Shimmin From owner-xfs@oss.sgi.com Mon Jun 18 23:37:23 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Jun 2007 23:37:25 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5J6bIdo025784 for ; Mon, 18 Jun 2007 23:37:22 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA14029; Tue, 19 Jun 2007 16:37:16 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5J6bFAf127016510; Tue, 19 Jun 2007 16:37:15 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5J6bE7F126940333; Tue, 19 
Jun 2007 16:37:14 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 19 Jun 2007 16:37:14 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: Review: fix null files exposure growing via truncate V2 Message-ID: <20070619063714.GP86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11846 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

Test 140 fails due to ftruncate() logging the new file size before any data that had previously been written had hit the disk. IOWs, it violates the data write/inode size update ordering rule that fixes the null files problem.

The fix here checks, when growing the file, whether the on-disk inode size is different to the in-memory size. If they are different, we have data that needs to be written to disk beyond the existing on-disk EOF. Hence, to maintain ordering, we need to flush this data out before we log the changed file size.

Version 2:

o Only flush the range between the old on disk size and the current in memory size.

Cheers,

Dave.
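The ordering rule described above can be sketched in userspace C. This is an illustrative sketch with invented names, not the actual XFS code: when the in-memory size has grown past the on-disk size, the range between the two, and only that range, must be written out before the new size is logged.

```c
#include <stddef.h>

/* Illustrative sketch of the V2 flush decision; the struct and
 * function names here are invented for the example and are not the
 * real XFS ones. */
struct inode_sizes {
    long long di_size; /* file size as recorded in the on-disk inode */
    long long i_size;  /* current in-memory file size */
};

/* Returns 1 and fills [*start, *end) with the range that must be
 * flushed before the size change is logged; returns 0 when no dirty
 * data can lie beyond the on-disk EOF. */
static int flush_range_for_grow(const struct inode_sizes *ip,
                                long long *start, long long *end)
{
    if (ip->i_size == ip->di_size)
        return 0;          /* nothing beyond the on-disk EOF */
    *start = ip->di_size;  /* old on-disk EOF */
    *end   = ip->i_size;   /* new in-memory EOF */
    return 1;
}
```

Flushing only this range avoids waiting on unrelated dirty data elsewhere in the file, which is the point of the V2 revision.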
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

---
 fs/xfs/xfs_vnodeops.c | 28 +++++++++++++++++++++++++---
 1 file changed, 25 insertions(+), 3 deletions(-)

Index: 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_vnodeops.c	2007-06-18 11:44:53.986543106 +1000
+++ 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c	2007-06-18 15:11:43.258810667 +1000
@@ -593,9 +593,31 @@ xfs_setattr(
 	if ((vap->va_size > ip->i_size) &&
 	    (flags & ATTR_NOSIZETOK) == 0) {
 		code = xfs_igrow_start(ip, vap->va_size, credp);
-	}
-	xfs_iunlock(ip, XFS_ILOCK_EXCL);
-	vn_iowait(vp); /* wait for the completion of any pending DIOs */
+		xfs_iunlock(ip, XFS_ILOCK_EXCL);
+		/*
+		 * We are going to log the inode size change in
+		 * this transaction so any previous writes that are
+		 * beyond the on disk EOF that have not been written
+		 * out need to be written here. If we do not write the
+		 * data out, we expose ourselves to the null files
+		 * problem on grow.
+		 *
+		 * Only flush from the on disk size to the in memory
+		 * file size as that's the range we really care about
+		 * here and prevents waiting for other data not within
+		 * the range we care about here.
+		 */
+		if (!code && ip->i_size != ip->i_d.di_size) {
+			code = bhv_vop_flush_pages(XFS_ITOV(ip),
+					ip->i_d.di_size, ip->i_size,
+					XFS_B_ASYNC, FI_NONE);
+		}
+	} else
+		xfs_iunlock(ip, XFS_ILOCK_EXCL);
+
+	/* wait for I/O to complete */
+	vn_iowait(vp);
+
 	if (!code)
 		code = xfs_itruncate_data(ip, vap->va_size);
 	if (code) {

From owner-xfs@oss.sgi.com Tue Jun 19 01:05:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 01:05:06 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5J850do016098 for ; Tue, 19 Jun 2007 01:05:02 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA16492; Tue, 19 Jun 2007 18:04:57 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5J84uAf126953123; Tue, 19 Jun 2007 18:04:57 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5J84tod126934755; Tue, 19 Jun 2007 18:04:55 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 19 Jun 2007 18:04:54 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: min/max cleanup Message-ID: <20070619080454.GQ86004887@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11847 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

To finally put this to rest, clean up the open coded min/max macro implementations. Previous similar patch from Christoph here:

http://oss.sgi.com/archives/xfs/2007-04/msg00076.html

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

---
 fs/xfs/xfs_btree.h | 32 ++++++++------------------------
 1 file changed, 8 insertions(+), 24 deletions(-)

Index: 2.6.x-xfs-new/fs/xfs/xfs_btree.h
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_btree.h	2007-02-07 13:24:33.000000000 +1100
+++ 2.6.x-xfs-new/fs/xfs/xfs_btree.h	2007-04-23 09:29:25.152573864 +1000
@@ -444,30 +444,14 @@ xfs_btree_setbuf(
 /*
  * Min and max functions for extlen, agblock, fileoff, and filblks types.
  */
-#define XFS_EXTLEN_MIN(a,b) \
-	((xfs_extlen_t)(a) < (xfs_extlen_t)(b) ? \
-	 (xfs_extlen_t)(a) : (xfs_extlen_t)(b))
-#define XFS_EXTLEN_MAX(a,b) \
-	((xfs_extlen_t)(a) > (xfs_extlen_t)(b) ? \
-	 (xfs_extlen_t)(a) : (xfs_extlen_t)(b))
-#define XFS_AGBLOCK_MIN(a,b) \
-	((xfs_agblock_t)(a) < (xfs_agblock_t)(b) ? \
-	 (xfs_agblock_t)(a) : (xfs_agblock_t)(b))
-#define XFS_AGBLOCK_MAX(a,b) \
-	((xfs_agblock_t)(a) > (xfs_agblock_t)(b) ? \
-	 (xfs_agblock_t)(a) : (xfs_agblock_t)(b))
-#define XFS_FILEOFF_MIN(a,b) \
-	((xfs_fileoff_t)(a) < (xfs_fileoff_t)(b) ? \
-	 (xfs_fileoff_t)(a) : (xfs_fileoff_t)(b))
-#define XFS_FILEOFF_MAX(a,b) \
-	((xfs_fileoff_t)(a) > (xfs_fileoff_t)(b) ? \
-	 (xfs_fileoff_t)(a) : (xfs_fileoff_t)(b))
-#define XFS_FILBLKS_MIN(a,b) \
-	((xfs_filblks_t)(a) < (xfs_filblks_t)(b) ? \
-	 (xfs_filblks_t)(a) : (xfs_filblks_t)(b))
-#define XFS_FILBLKS_MAX(a,b) \
-	((xfs_filblks_t)(a) > (xfs_filblks_t)(b) ? \
-	 (xfs_filblks_t)(a) : (xfs_filblks_t)(b))
+#define XFS_EXTLEN_MIN(a,b)	min_t(xfs_extlen_t, (a), (b))
+#define XFS_EXTLEN_MAX(a,b)	max_t(xfs_extlen_t, (a), (b))
+#define XFS_AGBLOCK_MIN(a,b)	min_t(xfs_agblock_t, (a), (b))
+#define XFS_AGBLOCK_MAX(a,b)	max_t(xfs_agblock_t, (a), (b))
+#define XFS_FILEOFF_MIN(a,b)	min_t(xfs_fileoff_t, (a), (b))
+#define XFS_FILEOFF_MAX(a,b)	max_t(xfs_fileoff_t, (a), (b))
+#define XFS_FILBLKS_MIN(a,b)	min_t(xfs_filblks_t, (a), (b))
+#define XFS_FILBLKS_MAX(a,b)	max_t(xfs_filblks_t, (a), (b))
 
 #define XFS_FSB_SANITY_CHECK(mp,fsb)	\
 	(XFS_FSB_TO_AGNO(mp, fsb) < mp->m_sb.sb_agcount && \

From owner-xfs@oss.sgi.com Tue Jun 19 01:10:15 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 01:10:18 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5J8ADdo017677 for ; Tue, 19 Jun 2007 01:10:14 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 4792EE6DB2; Tue, 19 Jun 2007 09:09:54 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id C4EtFIDUzfgb; Tue, 19 Jun 2007 09:10:09 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 4E3E2E6D9C; Tue, 19 Jun 2007 09:09:51 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1I0Ymv-0007pN-2b; Tue, 19 Jun 2007 09:10:09 +0100 Message-ID: <46778F60.5090107@dgreaves.com> Date: Tue, 19 Jun 2007 09:10:08 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To:
xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , David Chinner Subject: xfs freeze/umount problem Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11848 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs

David Chinner wrote:
> FWIW, I'm on record stating that "sync" is not sufficient to quiesce an XFS
> filesystem for a suspend/resume to work safely and have argued that the only
> safe thing to do is freeze the filesystem before suspend and thaw it after
> resume.

Whilst testing a potential bug in another thread I accidentally found that unmounting a filesystem that I'd just frozen would hang. As the saying goes: "Well, duh!!"

I could eventually run an unfreeze but the mount was still hung. This led to an unclean shutdown.

OK, it may not be bright, but it seems like this shouldn't happen; umount should either unfreeze and work, or fail ("Attempt to umount a frozen filesystem.") if the fs is frozen.

Is this a kernel bug/misfeature or a (u)mount one? Suggestions as to the best place to report it if not in the cc's?
David From owner-xfs@oss.sgi.com Tue Jun 19 02:24:27 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 02:24:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.3 required=5.0 tests=AWL,BAYES_50,J_CHICKENPOX_42 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5J9OPdo005686 for ; Tue, 19 Jun 2007 02:24:27 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id A87A2E6D9C; Tue, 19 Jun 2007 10:24:07 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id UrKXPS1HpS7S; Tue, 19 Jun 2007 10:24:25 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 10999E6BCD; Tue, 19 Jun 2007 10:24:06 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1I0Zwm-0007uy-3B; Tue, 19 Jun 2007 10:24:24 +0100 Message-ID: <4677A0C7.4000306@dgreaves.com> Date: Tue, 19 Jun 2007 10:24:23 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: David Chinner , Tejun Heo Cc: David Robinson , LVM general discussion and development , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid , "Rafael J. 
Wysocki" Subject: Re: [linux-lvm] 2.6.22-rc5 XFS fails after hibernate/resume References: <46744065.6060605@dgreaves.com> <4674645F.5000906@gmail.com> <46751D37.5020608@dgreaves.com> <4676390E.6010202@dgreaves.com> <20070618145007.GE85884050@sgi.com> <4676D97E.4000403@dgreaves.com> In-Reply-To: <4676D97E.4000403@dgreaves.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11849 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs David Greaves wrote: > I'm going to have to do some more testing... done > David Chinner wrote: >> On Mon, Jun 18, 2007 at 08:49:34AM +0100, David Greaves wrote: >>> David Greaves wrote: >>> So doing: >>> xfs_freeze -f /scratch >>> sync >>> echo platform > /sys/power/disk >>> echo disk > /sys/power/state >>> # resume >>> xfs_freeze -u /scratch >>> >>> Works (for now - more usage testing tonight) >> >> Verrry interesting. > Good :) Now, not so good :) >> What you were seeing was an XFS shutdown occurring because the free space >> btree was corrupted. IOWs, the process of suspend/resume has resulted >> in either bad data being written to disk, the correct data not being >> written to disk or the cached block being corrupted in memory. > That's the kind of thing I was suspecting, yes. > >> If you run xfs_check on the filesystem after it has shut down after a >> resume, >> can you tell us if it reports on-disk corruption? Note: do not run >> xfs_repair >> to check this - it does not check the free space btrees; instead it >> simply >> rebuilds them from scratch. If xfs_check reports an error, then run >> xfs_repair >> to fix it up. > OK, I can try this tonight... This is on 2.6.22-rc5 So I hibernated last night and resumed this morning. 
Before hibernating I froze and sync'ed. After resume I thawed it. (Sorry Dave.)

Here are some photos of the screen during resume. This is not 100% reproducible - it seems to occur only if the system is shut down for 30 mins or so. Tejun, I wonder if error handling during resume is problematic? I got the same errors in 2.6.21. I have never seen these (or any other libata) errors other than during resume.

http://www.dgreaves.com/pub/2.6.22-rc5-resume-failure.jpg
(hard to read, here's one from 2.6.21:
http://www.dgreaves.com/pub/2.6.21-resume-failure.jpg)

I _think_ I've only seen the xfs problem when a resume shows these errors.

Ok, to try and cause a problem I ran a make and got this back at once:

make: stat: Makefile: Input/output error
make: stat: clean: Input/output error
make: *** No rule to make target `clean'. Stop.
make: stat: GNUmakefile: Input/output error
make: stat: makefile: Input/output error

I caught the first dmesg this time:

Filesystem "dm-0": XFS internal error xfs_btree_check_sblock at line 334 of file fs/xfs/xfs_btree.c. Caller 0xc01b58e1
 [] show_trace_log_lvl+0x1a/0x30
 [] show_trace+0x12/0x20
 [] dump_stack+0x15/0x20
 [] xfs_error_report+0x4f/0x60
 [] xfs_btree_check_sblock+0x56/0xd0
 [] xfs_alloc_lookup+0x181/0x390
 [] xfs_alloc_lookup_le+0x16/0x20
 [] xfs_free_ag_extent+0x51/0x690
 [] xfs_free_extent+0xa4/0xc0
 [] xfs_bmap_finish+0x119/0x170
 [] xfs_itruncate_finish+0x23a/0x3a0
 [] xfs_inactive+0x482/0x500
 [] xfs_fs_clear_inode+0x34/0xa0
 [] clear_inode+0x57/0xe0
 [] generic_delete_inode+0xe5/0x110
 [] generic_drop_inode+0x167/0x1b0
 [] iput+0x5f/0x70
 [] do_unlinkat+0xdf/0x140
 [] sys_unlink+0x10/0x20
 [] syscall_call+0x7/0xb
 =======================
xfs_force_shutdown(dm-0,0x8) called from line 4258 of file fs/xfs/xfs_bmap.c. Return address = 0xc021101e
Filesystem "dm-0": Corruption of in-memory data detected. Shutting down filesystem: dm-0
Please umount the filesystem, and rectify the problem(s)

so I cd'ed out of /scratch and umounted. I then tried the xfs_check.
haze:~# xfs_check /dev/video_vg/video_lv ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_check. If you are unable to mount the filesystem, then use the xfs_repair -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this. haze:~# mount /scratch/ haze:~# umount /scratch/ haze:~# xfs_check /dev/video_vg/video_lv Message from syslogd@haze at Tue Jun 19 08:47:30 2007 ... haze kernel: Bad page state in process 'xfs_db' Message from syslogd@haze at Tue Jun 19 08:47:30 2007 ... haze kernel: page:c1767bc0 flags:0x80010008 mapping:00000000 mapcount:-64 count:0 Message from syslogd@haze at Tue Jun 19 08:47:30 2007 ... haze kernel: Trying to fix it up, but a reboot is needed Message from syslogd@haze at Tue Jun 19 08:47:30 2007 ... haze kernel: Backtrace: Message from syslogd@haze at Tue Jun 19 08:47:30 2007 ... haze kernel: Bad page state in process 'syslogd' Message from syslogd@haze at Tue Jun 19 08:47:30 2007 ... haze kernel: page:c1767cc0 flags:0x80010008 mapping:00000000 mapcount:-64 count:0 Message from syslogd@haze at Tue Jun 19 08:47:30 2007 ... haze kernel: Trying to fix it up, but a reboot is needed Message from syslogd@haze at Tue Jun 19 08:47:30 2007 ... haze kernel: Backtrace: ugh. Try again haze:~# xfs_check /dev/video_vg/video_lv haze:~# whilst running a top reported this as roughly the peak memory usage: 8759 root 18 0 479m 474m 876 R 2.0 46.9 0:02.49 xfs_db so it looks like it didn't run out of memory (machine has 1Gb). Dave, I ran xfs_check -v... but I got bored when it reached 122M of bz2 compressed output with no sign of stopping... still got it if it's any use... 
lots of: setting block 0/0 to sb setting block 0/1 to freelist setting block 0/2 to freelist setting block 0/3 to freelist setting block 0/4 to freelist setting block 0/75 to btbno setting block 0/346901 to free1 setting block 0/346903 to free1 setting block 0/346904 to free1 setting block 0/346905 to free1 and stuff like this inode 128 mode 040777 fmt extents afmt extents nex 1 anex 0 nblk 1 sz 4096 inode 128 nlink 39 is dir inode 128 extent [0,7,1,0] I then rebooted and ran a repair which didn't show any damage. David From owner-xfs@oss.sgi.com Tue Jun 19 03:36:46 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 03:36:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.7 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_33, J_CHICKENPOX_45,J_CHICKENPOX_62 autolearn=no version=3.2.0-pre1-r499012 Received: from bay0-omc1-s35.bay0.hotmail.com (bay0-omc1-s35.bay0.hotmail.com [65.54.246.107]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5JAaido023816 for ; Tue, 19 Jun 2007 03:36:45 -0700 Received: from hotmail.com ([65.54.174.87]) by bay0-omc1-s35.bay0.hotmail.com with Microsoft SMTPSVC(6.0.3790.2668); Tue, 19 Jun 2007 03:36:45 -0700 Received: from mail pickup service by hotmail.com with Microsoft SMTPSVC; Tue, 19 Jun 2007 03:36:45 -0700 Message-ID: Received: from 85.36.106.214 by BAY103-DAV15.phx.gbl with DAV; Tue, 19 Jun 2007 10:36:40 +0000 X-Originating-IP: [85.36.106.214] X-Originating-Email: [pupilla@hotmail.com] X-Sender: pupilla@hotmail.com From: "Marco Berizzi" To: "David Chinner" Cc: "David Chinner" , , , "Marco Berizzi" References: <20070316012520.GN5743@melbourne.sgi.com> <20070316195951.GB5743@melbourne.sgi.com> <20070320064632.GO32602149@melbourne.sgi.com> <20070607130505.GE85884050@sgi.com> <20070612061440.GQ86004887@sgi.com> Subject: Re: XFS internal error xfs_da_do_buf(2) at line 2087 of file fs/xfs/xfs_da_btree.c. 
Caller 0xc01b00bd Date: Tue, 19 Jun 2007 12:36:25 +0200 X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2800.1123 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1123 X-OriginalArrivalTime: 19 Jun 2007 10:36:45.0530 (UTC) FILETIME=[BBC4FBA0:01C7B25D] X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11850 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pupilla@hotmail.com Precedence: bulk X-list: xfs Marco Berizzi wrote: > David Chinner wrote: > > > On Fri, Jun 08, 2007 at 03:59:39PM +0200, Marco Berizzi wrote: > > > David Chinner wrote: > > > > Where we saw signs of on disk directory corruption. Have you run > > > > xfs_repair successfully on the filesystem since you reported > > > > this? > > > > > > yes. > > > > > > > If you did clean up the error, does xfs_repair report the same > sort > > > > of error again? > > > > > > I have run xfs_repair this morning. > > > Here is the report: > > > > > > > > > > Have you run a 2.6.16-rcX or 2.6.17.[0-6] kernel since you last > > > > reported this problem? > > > > > > No. I have run only 2.6.19.x and 2.6.21.x > > > > > > After the xfs_repair I have remounted the file system. > > > After few hours linux has crashed with this message: > > > BUG: at arch/i386/kernel/smp.c:546 smp_call_function() > > > I have also the monitor bitmap. > > > > This is sounding like memory corruption is no corruption is being > > found on disk by xfs_repair. Have you run memtest86 on that box to > > see if it's got bad memory? > > Yes. I have run memtest for one week: > no errors. > I have also changed the mother board, > scsi controller and ram. Only the cpu > and the 2 hot swap scsi disks were > not replaced. IMHO this isn't an > hardware problem, because the kernel > with debugging options enabled didn't > crash for a long time (>1 month). 
Just > for record, at this moment this box is > running 2.6.22-rc4 with no debug > options enabled. I will keep you > informed. > Thanks everybody for the support. Hi David, on another system which is doing the same task (openswan + squid), this morning I have found the following errors (2.6.21.5 after 4 days uptime). The tricky thing is that always the squid file cache filesystem is corrupted. The same box with 2.6.20.x and 2.6.21.x with 'Debug slab memory allocations' enabled, never show any errors for 1 month. # dmesg Linux version 2.6.21.5 (root@Gemini) (gcc version 3.3.6) #1 Thu Jun 14 13:18:08 CEST 2007 BIOS-provided physical RAM map: sanitize start sanitize end copy_e820_map() start: 0000000000000000 size: 000000000009f800 end: 000000000009f800 type: 1 copy_e820_map() type is E820_RAM copy_e820_map() start: 000000000009f800 size: 0000000000000800 end: 00000000000a0000 type: 2 copy_e820_map() start: 00000000000f0000 size: 0000000000010000 end: 0000000000100000 type: 2 copy_e820_map() start: 0000000000100000 size: 0000000009f00000 end: 000000000a000000 type: 1 copy_e820_map() type is E820_RAM copy_e820_map() start: 00000000ffff0000 size: 0000000000010000 end: 0000000100000000 type: 2 BIOS-e820: 0000000000000000 - 000000000009f800 (usable) BIOS-e820: 000000000009f800 - 00000000000a0000 (reserved) BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved) BIOS-e820: 0000000000100000 - 000000000a000000 (usable) BIOS-e820: 00000000ffff0000 - 0000000100000000 (reserved) 160MB LOWMEM available. Entering add_active_range(0, 0, 40960) 0 entries of 256 used Zone PFN ranges: DMA 0 - 4096 Normal 4096 - 40960 early_node_map[1] active PFN ranges 0: 0 - 40960 On node 0 totalpages: 40960 DMA zone: 32 pages used for memmap DMA zone: 0 pages reserved DMA zone: 4064 pages, LIFO batch:0 Normal zone: 288 pages used for memmap Normal zone: 36576 pages, LIFO batch:7 DMI 2.1 present. Allocating PCI resources starting at 10000000 (gap: 0a000000:f5ff0000) Built 1 zonelists. 
Total pages: 40640 Kernel command line: auto BOOT_IMAGE=Linux ro root=301 Local APIC disabled by BIOS -- you can enable it with "lapic" mapped APIC to ffffd000 (01141000) Enabling fast FPU save and restore... done. Initializing CPU#0 PID hash table entries: 1024 (order: 10, 4096 bytes) Detected 267.302 MHz processor. Console: colour VGA+ 80x25 Dentry cache hash table entries: 32768 (order: 5, 131072 bytes) Inode-cache hash table entries: 16384 (order: 4, 65536 bytes) Memory: 159020k/163840k available (1945k kernel code, 4392k reserved, 609k data, 156k init, 0k highmem) virtual kernel memory layout: fixmap : 0xfffb7000 - 0xfffff000 ( 288 kB) vmalloc : 0xca800000 - 0xfffb5000 ( 855 MB) lowmem : 0xc0000000 - 0xca000000 ( 160 MB) .init : 0xc0382000 - 0xc03a9000 ( 156 kB) .data : 0xc02e667c - 0xc037eb94 ( 609 kB) .text : 0xc0100000 - 0xc02e667c (1945 kB) Checking if this processor honours the WP bit even in supervisor mode... Ok. Calibrating delay using timer specific routine.. 535.23 BogoMIPS (lpj=1070464) Mount-cache hash table entries: 512 CPU: After generic identify, caps: 0183f9ff 00000000 00000000 00000000 00000000 00000000 00000000 CPU: L1 I cache: 16K, L1 D cache: 16K CPU: After all inits, caps: 0183f9ff 00000000 00000000 00000040 00000000 00000000 00000000 CPU: Intel Celeron (Covington) stepping 00 Checking 'hlt' instruction... OK. ACPI: Core revision 20070126 ACPI Exception (tbxface-0618): AE_NO_ACPI_TABLES, While loading namespace from ACPI tables [20070126] ACPI: Unable to load the System Description Tables NET: Registered protocol family 16 PCI: PCI BIOS revision 2.10 entry at 0xfda61, last bus=1 PCI: Using configuration type 1 Setting up standard PCI resources ACPI: Interpreter disabled. Linux Plug and Play Support v0.97 (c) Adam Belay pnp: PnP ACPI: disabled PCI: Probing PCI hardware PCI: Probing PCI hardware (bus 00) * Found PM-Timer Bug on the chipset. Due to workarounds for a bug, * this clock source is slow. 
Consider trying other clock sources PCI quirk: region 6100-613f claimed by PIIX4 ACPI PCI quirk: region 5f00-5f0f claimed by PIIX4 SMB Boot video device is 0000:01:00.0 PCI: Using IRQ router PIIX/ICH [8086/7110] at 0000:00:07.0 PCI: setting IRQ 11 as level-triggered PCI: Found IRQ 11 for device 0000:00:07.2 PCI: Sharing IRQ 11 with 0000:00:0b.0 Time: tsc clocksource has been installed. PCI: Bridge: 0000:00:01.0 IO window: b000-bfff MEM window: efe00000-efefffff PREFETCH window: e5c00000-e7cfffff NET: Registered protocol family 2 IP route cache hash table entries: 2048 (order: 1, 8192 bytes) TCP established hash table entries: 8192 (order: 4, 65536 bytes) TCP bind hash table entries: 8192 (order: 3, 32768 bytes) TCP: Hash tables configured (established 8192 bind 8192) TCP reno registered SGI XFS with no debug enabled io scheduler noop registered io scheduler deadline registered (default) Limiting direct PCI/PCI transfers. Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2 ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx PIIX4: IDE controller at PCI slot 0000:00:07.1 PIIX4: chipset revision 1 PIIX4: not 100% native mode: will probe irqs later ide0: BM-DMA at 0xffa0-0xffa7, BIOS settings: hda:DMA, hdb:pio ide1: BM-DMA at 0xffa8-0xffaf, BIOS settings: hdc:pio, hdd:pio Probing IDE interface ide0... hda: QUANTUM FIREBALL EX3.2A, ATA DISK drive ide0 at 0x1f0-0x1f7,0x3f6 on irq 14 Probing IDE interface ide1... hda: max request size: 128KiB hda: 6306048 sectors (3228 MB) w/418KiB Cache, CHS=6256/16/63, UDMA(33) hda: cache flushes not supported hda: hda1 hda2 < hda5 hda6 hda7 hda8 hda9 > PNP: No PS/2 controller found. Probing ports directly. 
serio: i8042 KBD port at 0x60,0x64 irq 1 serio: i8042 AUX port at 0x60,0x64 irq 12 mice: PS/2 mouse device common for all mice nf_conntrack version 0.5.0 (1280 buckets, 10240 max) ip_tables: (C) 2000-2006 Netfilter Core Team TCP cubic registered Initializing XFRM netlink socket NET: Registered protocol family 1 NET: Registered protocol family 17 NET: Registered protocol family 15 Using IPI Shortcut mode Filesystem "hda1": Disabling barriers, not supported by the underlying device XFS mounting filesystem hda1 Ending clean XFS mount for filesystem: hda1 VFS: Mounted root (xfs filesystem) readonly. Freeing unused kernel memory: 156k freed input: AT Translated Set 2 keyboard as /class/input/input0 Adding 209624k swap on /dev/hda9. Priority:-1 extents:1 across:209624k Filesystem "hda1": Disabling barriers, not supported by the underlying device Filesystem "hda1": Disabling barriers, not supported by the underlying device PCI: setting IRQ 9 as level-triggered PCI: Found IRQ 9 for device 0000:00:09.0 3c59x: Donald Becker and others. www.scyld.com/network/vortex.html 0000:00:09.0: 3Com PCI 3c905 Boomerang 100baseTx at 0001de00. PCI: setting IRQ 10 as level-triggered PCI: Found IRQ 10 for device 0000:00:0a.0 0000:00:0a.0: 3Com PCI 3c905 Boomerang 100baseTx at 0001dc00. PCI: Found IRQ 11 for device 0000:00:0b.0 PCI: Sharing IRQ 11 with 0000:00:07.2 0000:00:0b.0: 3Com PCI 3c905 Boomerang 100baseTx at 0001da00. 
Filesystem "hda5": Disabling barriers, not supported by the underlying device XFS mounting filesystem hda5 Ending clean XFS mount for filesystem: hda5 Filesystem "hda6": Disabling barriers, not supported by the underlying device XFS mounting filesystem hda6 Ending clean XFS mount for filesystem: hda6 Filesystem "hda7": Disabling barriers, not supported by the underlying device XFS mounting filesystem hda7 Ending clean XFS mount for filesystem: hda7 Filesystem "hda8": Disabling barriers, not supported by the underlying device XFS mounting filesystem hda8 Ending clean XFS mount for filesystem: hda8 eth0: setting full-duplex. eth1: setting full-duplex. eth2: setting full-duplex. 0x0: 59 fe cf 04 98 58 bc e2 42 3a 05 ee b2 12 b7 25 Filesystem "hda8": XFS internal error xfs_da_do_buf(2) at line 2086 of file fs/xfs/xfs_da_btree.c. Caller 0xc01a7aa8 [] xfs_da_do_buf+0x37b/0x7c0 [] xfs_da_read_buf+0x48/0x60 [] xfs_da_read_buf+0x48/0x60 [] profile_tick+0x3e/0x70 [] tick_handle_periodic+0xf/0x60 [] xfs_da_read_buf+0x48/0x60 [] xfs_dir2_leaf_lookup_int+0x16d/0x2b0 [] xfs_dir2_leaf_lookup_int+0x16d/0x2b0 [] xfs_dir2_leaf_lookup+0x2b/0xd0 [] xfs_dir2_isleaf+0x20/0x70 [] xfs_dir_lookup+0xf1/0x110 [] ip_route_output_flow+0x22/0x90 [] inet_csk_route_req+0xa5/0x140 [] xfs_dir_lookup_int+0x34/0x100 [] sk_alloc+0x2b/0xd0 [] xfs_lookup+0x4e/0x80 [] xfs_vn_lookup+0x52/0x90 [] real_lookup+0xc7/0xf0 [] do_lookup+0x90/0xc0 [] __link_path_walk+0x5ab/0xa70 [] sk_stop_timer+0x17/0x20 [] link_path_walk+0x45/0xd0 [] process_backlog+0x77/0xf0 [] get_unused_fd+0x54/0xa0 [] do_path_lookup+0xdd/0x1a0 [] __path_lookup_intent_open+0x50/0x90 [] path_lookup_open+0x21/0x30 [] open_namei+0x68/0x580 [] ip_rcv+0x212/0x460 [] ip_rcv_finish+0x0/0x240 [] do_filp_open+0x2e/0x50 [] process_backlog+0x77/0xf0 [] get_unused_fd+0x54/0xa0 [] do_sys_open+0x42/0xd0 [] sys_open+0x1c/0x20 [] syscall_call+0x7/0xb [] pfkey_xfrm_state2msg+0x4e0/0xb70 ======================= 0x0: 59 fe cf 04 98 58 bc e2 42 3a 05 ee b2 12 
b7 25 Filesystem "hda8": XFS internal error xfs_da_do_buf(2) at line 2086 of file fs/xfs/xfs_da_btree.c. Caller 0xc01a7aa8 [] xfs_da_do_buf+0x37b/0x7c0 [] xfs_da_read_buf+0x48/0x60 [] xfs_da_read_buf+0x48/0x60 [] xfs_trans_unreserve_and_mod_sb+0x20a/0x210 [] xlog_assign_tail_lsn+0xc/0x20 [] xfs_da_read_buf+0x48/0x60 [] xfs_dir2_leaf_lookup_int+0x16d/0x2b0 [] xfs_dir2_leaf_lookup_int+0x16d/0x2b0 [] xfs_dir2_leaf_lookup+0x2b/0xd0 [] xfs_dir2_isleaf+0x20/0x70 [] xfs_dir_lookup+0xf1/0x110 [] pipe_read+0x20a/0x2c0 [] xfs_dir_lookup_int+0x34/0x100 [] link_path_walk+0x69/0xd0 [] xfs_lookup+0x4e/0x80 [] xfs_vn_lookup+0x52/0x90 [] __lookup_hash+0x89/0xb0 [] do_unlinkat+0x61/0x110 [] vfs_read+0xe8/0x110 [] sys_read+0x47/0x80 [] syscall_call+0x7/0xb ======================= 0x0: 59 fe cf 04 98 58 bc e2 42 3a 05 ee b2 12 b7 25 Filesystem "hda8": XFS internal error xfs_da_do_buf(2) at line 2086 of file fs/xfs/xfs_da_btree.c. Caller 0xc01a7aa8 [] xfs_da_do_buf+0x37b/0x7c0 [] xfs_da_read_buf+0x48/0x60 [] xfs_da_read_buf+0x48/0x60 [] issue_and_wait+0x27/0xb0 [3c59x] [] xfs_da_read_buf+0x48/0x60 [] xfs_dir2_leaf_lookup_int+0x16d/0x2b0 [] xfs_dir2_leaf_lookup_int+0x16d/0x2b0 [] xfs_dir2_leaf_lookup+0x2b/0xd0 [] xfs_dir2_isleaf+0x20/0x70 [] xfs_dir_lookup+0xf1/0x110 [] dev_queue_xmit+0x165/0x220 [] ip_output+0x158/0x270 [] xfs_dir_lookup_int+0x34/0x100 [] xfs_lookup+0x4e/0x80 [] xfs_vn_lookup+0x52/0x90 [] real_lookup+0xc7/0xf0 [] do_lookup+0x90/0xc0 [] __link_path_walk+0x5ab/0xa70 [] link_path_walk+0x45/0xd0 [] get_unused_fd+0x54/0xa0 [] do_path_lookup+0xdd/0x1a0 [] handle_IRQ_event+0x27/0x60 [] __path_lookup_intent_open+0x50/0x90 [] path_lookup_open+0x21/0x30 [] open_namei+0x68/0x580 [] do_wp_page+0x2a7/0x3a0 [] do_filp_open+0x2e/0x50 [] get_unused_fd+0x54/0xa0 [] do_sys_open+0x42/0xd0 [] sys_open+0x1c/0x20 [] syscall_call+0x7/0xb ======================= 0x0: 59 fe cf 04 98 58 bc e2 42 3a 05 ee b2 12 b7 25 Filesystem "hda8": XFS internal error xfs_da_do_buf(2) at line 2086 of file 
fs/xfs/xfs_da_btree.c. Caller 0xc01a7aa8 [] xfs_da_do_buf+0x37b/0x7c0 [] xfs_da_read_buf+0x48/0x60 [] xfs_da_read_buf+0x48/0x60 [] xfs_trans_unreserve_and_mod_sb+0x20a/0x210 [] xlog_assign_tail_lsn+0xc/0x20 [] xfs_da_read_buf+0x48/0x60 [] xfs_dir2_leaf_lookup_int+0x16d/0x2b0 [] xfs_dir2_leaf_lookup_int+0x16d/0x2b0 [] xfs_dir2_leaf_lookup+0x2b/0xd0 [] xfs_dir2_isleaf+0x20/0x70 [] xfs_dir_lookup+0xf1/0x110 [] pipe_read+0x20a/0x2c0 [] xfs_dir_lookup_int+0x34/0x100 [] link_path_walk+0x69/0xd0 [] xfs_lookup+0x4e/0x80 [] xfs_vn_lookup+0x52/0x90 [] __lookup_hash+0x89/0xb0 [] do_unlinkat+0x61/0x110 [] vfs_read+0xe8/0x110 [] sys_read+0x47/0x80 [] syscall_call+0x7/0xb ======================= 0x0: 59 fe cf 04 98 58 bc e2 42 3a 05 ee b2 12 b7 25 Filesystem "hda8": XFS internal error xfs_da_do_buf(2) at line 2086 of file fs/xfs/xfs_da_btree.c. Caller 0xc01a7aa8 [] xfs_da_do_buf+0x37b/0x7c0 [] xfs_da_read_buf+0x48/0x60 [] xfs_da_read_buf+0x48/0x60 [] xfs_da_read_buf+0x48/0x60 [] xfs_dir2_leaf_getdents+0x35f/0xb40 [] xfs_dir2_leaf_getdents+0x35f/0xb40 [] get_page_from_freelist+0x80/0xc0 [] xfs_dir_getdents+0xd2/0x120 [] xfs_dir2_put_dirent64_direct+0x0/0x90 [] xfs_dir2_put_dirent64_direct+0x0/0x90 [] xfs_readdir+0x48/0x70 [] xfs_file_readdir+0x100/0x220 [] filldir+0x0/0x100 [] sys_fstat64+0x2b/0x30 [] filldir+0x0/0x100 [] vfs_readdir+0x81/0xa0 [] sys_getdents+0x5e/0xa0 [] syscall_call+0x7/0xb ======================= Filesystem "hda8": Disabling barriers, not supported by the underlying device XFS mounting filesystem hda8 Ending clean XFS mount for filesystem: hda8 xfs_repair output: Phase 1 - find and verify superblock... Phase 2 - using internal log - zero log... - scan filesystem freespace and inode maps... - found root inode chunk Phase 3 - for each AG... - scan and clear agi unlinked lists... - process known inodes and perform inode discovery... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - process newly discovered inodes...
Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - clear lost+found (if it exists) ... - check for inodes claiming duplicate blocks... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 Phase 5 - rebuild AG headers and trees... - reset superblock... Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - ensuring existence of lost+found directory - traversing filesystem starting at / ... - traversal finished ... - traversing all unattached subtrees ... - traversals finished ... - moving disconnected inodes to lost+found ... Phase 7 - verify and correct link counts... done From owner-xfs@oss.sgi.com Tue Jun 19 04:14:23 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 04:14:27 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from ogre.sisk.pl (ogre.sisk.pl [217.79.144.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5JBEKdo003433 for ; Tue, 19 Jun 2007 04:14:22 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by ogre.sisk.pl (Postfix) with ESMTP id 934B9523FE; Tue, 19 Jun 2007 12:55:07 +0200 (CEST) Received: from ogre.sisk.pl ([127.0.0.1]) by localhost (ogre.sisk.pl [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 23744-06; Tue, 19 Jun 2007 12:55:07 +0200 (CEST) Received: from [192.168.144.102] (iftwlan0.fuw.edu.pl [193.0.83.32]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by ogre.sisk.pl (Postfix) with ESMTP id F19884510F; Tue, 19 Jun 2007 12:55:06 +0200 (CEST) From: "Rafael J. 
Wysocki" To: David Greaves Subject: Re: [linux-lvm] 2.6.22-rc5 XFS fails after hibernate/resume Date: Tue, 19 Jun 2007 13:21:06 +0200 User-Agent: KMail/1.9.5 Cc: David Chinner , Tejun Heo , David Robinson , LVM general discussion and development , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid References: <46744065.6060605@dgreaves.com> <4676D97E.4000403@dgreaves.com> <4677A0C7.4000306@dgreaves.com> In-Reply-To: <4677A0C7.4000306@dgreaves.com> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200706191321.07278.rjw@sisk.pl> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: amavisd-new at ogre.sisk.pl using MkS_Vir for Linux X-Virus-Status: Clean X-archive-position: 11851 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: rjw@sisk.pl Precedence: bulk X-list: xfs

On Tuesday, 19 June 2007 11:24, David Greaves wrote:
> David Greaves wrote:
> > I'm going to have to do some more testing...
> done
> >
> > David Chinner wrote:
> >> On Mon, Jun 18, 2007 at 08:49:34AM +0100, David Greaves wrote:
> >>> David Greaves wrote:
> >>> So doing:
> >>> xfs_freeze -f /scratch
> >>> sync
> >>> echo platform > /sys/power/disk
> >>> echo disk > /sys/power/state
> >>> # resume
> >>> xfs_freeze -u /scratch
> >>>
> >>> Works (for now - more usage testing tonight)
> >>
> >> Verrry interesting.
> > Good :)
> Now, not so good :)
> >
> >> What you were seeing was an XFS shutdown occurring because the free space
> >> btree was corrupted. IOWs, the process of suspend/resume has resulted
> >> in either bad data being written to disk, the correct data not being
> >> written to disk or the cached block being corrupted in memory.
> > That's the kind of thing I was suspecting, yes.
> >
> >> If you run xfs_check on the filesystem after it has shut down after a resume,
> >> can you tell us if it reports on-disk corruption? Note: do not run xfs_repair
> >> to check this - it does not check the free space btrees; instead it simply
> >> rebuilds them from scratch. If xfs_check reports an error, then run xfs_repair
> >> to fix it up.
> > OK, I can try this tonight...
> >
> This is on 2.6.22-rc5

Is Tejun's patch

http://www.sisk.pl/kernel/hibernation_and_suspend/2.6.22-rc5/patches/30-block-always-requeue-nonfs-requests-at-the-front.patch

applied on top of that?

Rafael
--
"Premature optimization is the root of all evil." - Donald Knuth

From owner-xfs@oss.sgi.com Tue Jun 19 06:27:30 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 06:27:35 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.suse.cz (styx.suse.cz [82.119.242.94]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5JDRSdo015127 for ; Tue, 19 Jun 2007 06:27:30 -0700 Received: from discovery.suse.cz (discovery.suse.cz [10.20.1.116]) by mail.suse.cz (Postfix) with ESMTP id AA3DC146C035; Tue, 19 Jun 2007 15:27:26 +0200 (CEST) Received: by discovery.suse.cz (Postfix, from userid 10020) id 95E9982DD6; Tue, 19 Jun 2007 15:27:26 +0200 (CEST) Message-Id: <20070619132726.360453113@suse.cz> References: <20070619132549.266927601@suse.cz> User-Agent: quilt/0.46-42 Date: Tue, 19 Jun 2007 15:25:50 +0200 From: mmarek@suse.cz To: xfs@oss.sgi.com Cc: linux-kernel@vger.kernel.org Subject: [patch 1/3] Fix XFS_IOC_FSGEOMETRY_V1 in compat mode Content-Disposition: inline; filename=xfs-compat-ioctl-fsgeometry.patch X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11854 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mmarek@suse.cz Precedence: bulk X-list: xfs i386 struct xfs_fsop_geom_v1 has no padding after the last member, so the size is different.
Signed-off-by: Michal Marek --- fs/xfs/linux-2.6/xfs_ioctl32.c | 42 ++++++++++++++++++++++++++++++++++++++++- 1 file changed, 41 insertions(+), 1 deletion(-) --- linux-2.6.orig/fs/xfs/linux-2.6/xfs_ioctl32.c +++ linux-2.6/fs/xfs/linux-2.6/xfs_ioctl32.c @@ -75,6 +75,42 @@ xfs_ioctl32_flock( return (unsigned long)p; } +typedef struct compat_xfs_fsop_geom_v1 { + __u32 blocksize; /* filesystem (data) block size */ + __u32 rtextsize; /* realtime extent size */ + __u32 agblocks; /* fsblocks in an AG */ + __u32 agcount; /* number of allocation groups */ + __u32 logblocks; /* fsblocks in the log */ + __u32 sectsize; /* (data) sector size, bytes */ + __u32 inodesize; /* inode size in bytes */ + __u32 imaxpct; /* max allowed inode space(%) */ + __u64 datablocks; /* fsblocks in data subvolume */ + __u64 rtblocks; /* fsblocks in realtime subvol */ + __u64 rtextents; /* rt extents in realtime subvol*/ + __u64 logstart; /* starting fsblock of the log */ + unsigned char uuid[16]; /* unique id of the filesystem */ + __u32 sunit; /* stripe unit, fsblocks */ + __u32 swidth; /* stripe width, fsblocks */ + __s32 version; /* structure version */ + __u32 flags; /* superblock version flags */ + __u32 logsectsize; /* log sector size, bytes */ + __u32 rtsectsize; /* realtime sector size, bytes */ + __u32 dirblocksize; /* directory block size, bytes */ +} __attribute__((packed)) compat_xfs_fsop_geom_v1_t; + +#define XFS_IOC_FSGEOMETRY_V1_32 \ + _IOR ('X', 100, struct compat_xfs_fsop_geom_v1) + +STATIC unsigned long xfs_ioctl32_geom_v1(unsigned long arg) +{ + compat_xfs_fsop_geom_v1_t __user *p32 = (void __user *)arg; + xfs_fsop_geom_v1_t __user *p = compat_alloc_user_space(sizeof(*p)); + + if (copy_in_user(p, p32, sizeof(*p32))) + return -EFAULT; + return (unsigned long)p; +} + #else typedef struct xfs_fsop_bulkreq32 { @@ -118,7 +154,6 @@ xfs_compat_ioctl( switch (cmd) { case XFS_IOC_DIOINFO: - case XFS_IOC_FSGEOMETRY_V1: case XFS_IOC_FSGEOMETRY: case XFS_IOC_GETVERSION: case 
XFS_IOC_GETXFLAGS: @@ -166,6 +201,10 @@ xfs_compat_ioctl( arg = xfs_ioctl32_flock(arg); cmd = _NATIVE_IOC(cmd, struct xfs_flock64); break; + case XFS_IOC_FSGEOMETRY_V1_32: + arg = xfs_ioctl32_geom_v1(arg); + cmd = _NATIVE_IOC(cmd, struct xfs_fsop_geom_v1); + break; #else /* These are handled fine if no alignment issues */ case XFS_IOC_ALLOCSP: @@ -176,6 +215,7 @@ xfs_compat_ioctl( case XFS_IOC_FREESP64: case XFS_IOC_RESVSP64: case XFS_IOC_UNRESVSP64: + case XFS_IOC_FSGEOMETRY_V1: break; /* xfs_bstat_t still has wrong u32 vs u64 alignment */ -- From owner-xfs@oss.sgi.com Tue Jun 19 06:27:30 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 06:27:35 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.suse.cz (styx.suse.cz [82.119.242.94]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5JDRSdo015126 for ; Tue, 19 Jun 2007 06:27:30 -0700 Received: from discovery.suse.cz (discovery.suse.cz [10.20.1.116]) by mail.suse.cz (Postfix) with ESMTP id EC329146C036; Tue, 19 Jun 2007 15:27:26 +0200 (CEST) Received: by discovery.suse.cz (Postfix, from userid 10020) id D6C9682DB8; Tue, 19 Jun 2007 15:27:26 +0200 (CEST) Message-Id: <20070619132726.627137347@suse.cz> References: <20070619132549.266927601@suse.cz> User-Agent: quilt/0.46-42 Date: Tue, 19 Jun 2007 15:25:51 +0200 From: mmarek@suse.cz To: xfs@oss.sgi.com Cc: linux-kernel@vger.kernel.org Subject: [patch 2/3] Fix XFS_IOC_*_TO_HANDLE and XFS_IOC_{OPEN,READLINK}_BY_HANDLE in compat mode Content-Disposition: inline; filename=xfs-compat-ioctl-fshandle.patch X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11855 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mmarek@suse.cz 
Precedence: bulk X-list: xfs 32bit struct xfs_fsop_handlereq has different size and offsets (due to pointers). TODO: case XFS_IOC_{FSSETDM,ATTRLIST,ATTRMULTI}_BY_HANDLE still not handled. Signed-off-by: Michal Marek --- fs/xfs/linux-2.6/xfs_ioctl32.c | 57 +++++++++++++++++++++++++++++++++++++---- 1 file changed, 52 insertions(+), 5 deletions(-) --- linux-2.6.orig/fs/xfs/linux-2.6/xfs_ioctl32.c +++ linux-2.6/fs/xfs/linux-2.6/xfs_ioctl32.c @@ -141,6 +141,50 @@ xfs_ioctl32_bulkstat( } #endif +typedef struct compat_xfs_fsop_handlereq { + __u32 fd; /* fd for FD_TO_HANDLE */ + compat_uptr_t path; /* user pathname */ + __u32 oflags; /* open flags */ + compat_uptr_t ihandle; /* user supplied handle */ + __u32 ihandlen; /* user supplied length */ + compat_uptr_t ohandle; /* user buffer for handle */ + compat_uptr_t ohandlen; /* user buffer length */ +} compat_xfs_fsop_handlereq_t; + +#define XFS_IOC_PATH_TO_FSHANDLE_32 \ + _IOWR('X', 104, struct compat_xfs_fsop_handlereq) +#define XFS_IOC_PATH_TO_HANDLE_32 \ + _IOWR('X', 105, struct compat_xfs_fsop_handlereq) +#define XFS_IOC_FD_TO_HANDLE_32 \ + _IOWR('X', 106, struct compat_xfs_fsop_handlereq) +#define XFS_IOC_OPEN_BY_HANDLE_32 \ + _IOWR('X', 107, struct compat_xfs_fsop_handlereq) +#define XFS_IOC_READLINK_BY_HANDLE_32 \ + _IOWR('X', 108, struct compat_xfs_fsop_handlereq) + +STATIC unsigned long xfs_ioctl32_fshandle(unsigned long arg) +{ + compat_xfs_fsop_handlereq_t __user *p32 = (void __user *)arg; + xfs_fsop_handlereq_t __user *p = compat_alloc_user_space(sizeof(*p)); + u32 addr; + + if (copy_in_user(&p->fd, &p32->fd, sizeof(__u32)) || + get_user(addr, &p32->path) || + put_user(compat_ptr(addr), &p->path) || + copy_in_user(&p->oflags, &p32->oflags, sizeof(__u32)) || + get_user(addr, &p32->ihandle) || + put_user(compat_ptr(addr), &p->ihandle) || + copy_in_user(&p->ihandlen, &p32->ihandlen, sizeof(__u32)) || + get_user(addr, &p32->ohandle) || + put_user(compat_ptr(addr), &p->ohandle) || + get_user(addr, &p32->ohandlen) || 
+ put_user(compat_ptr(addr), &p->ohandlen)) + return -EFAULT; + + return (unsigned long)p; +} + + STATIC long xfs_compat_ioctl( int mode, @@ -166,12 +210,7 @@ xfs_compat_ioctl( case XFS_IOC_GETBMAPA: case XFS_IOC_GETBMAPX: /* not handled - case XFS_IOC_FD_TO_HANDLE: - case XFS_IOC_PATH_TO_HANDLE: - case XFS_IOC_PATH_TO_FSHANDLE: - case XFS_IOC_OPEN_BY_HANDLE: case XFS_IOC_FSSETDM_BY_HANDLE: - case XFS_IOC_READLINK_BY_HANDLE: case XFS_IOC_ATTRLIST_BY_HANDLE: case XFS_IOC_ATTRMULTI_BY_HANDLE: */ @@ -228,6 +267,14 @@ xfs_compat_ioctl( arg = xfs_ioctl32_bulkstat(arg); break; #endif + case XFS_IOC_FD_TO_HANDLE_32: + case XFS_IOC_PATH_TO_HANDLE_32: + case XFS_IOC_PATH_TO_FSHANDLE_32: + case XFS_IOC_OPEN_BY_HANDLE_32: + case XFS_IOC_READLINK_BY_HANDLE_32: + arg = xfs_ioctl32_fshandle(arg); + cmd = _NATIVE_IOC(cmd, struct xfs_fsop_handlereq); + break; default: return -ENOIOCTLCMD; } -- From owner-xfs@oss.sgi.com Tue Jun 19 06:27:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 06:27:38 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.suse.cz (styx.suse.cz [82.119.242.94]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5JDRSdo015128 for ; Tue, 19 Jun 2007 06:27:30 -0700 Received: from discovery.suse.cz (discovery.suse.cz [10.20.1.116]) by mail.suse.cz (Postfix) with ESMTP id 3A0B6146C032; Tue, 19 Jun 2007 15:27:27 +0200 (CEST) Received: by discovery.suse.cz (Postfix, from userid 10020) id 298DD82DB8; Tue, 19 Jun 2007 15:27:27 +0200 (CEST) Message-Id: <20070619132726.893544847@suse.cz> References: <20070619132549.266927601@suse.cz> User-Agent: quilt/0.46-42 Date: Tue, 19 Jun 2007 15:25:52 +0200 From: mmarek@suse.cz To: xfs@oss.sgi.com Cc: linux-kernel@vger.kernel.org Subject: [patch 3/3] Fix XFS_IOC_FSBULKSTAT{,_SINGLE} and XFS_IOC_FSINUMBERS in compat mode 
Content-Disposition: inline; filename=xfs-compat-ioctl-bulkstat.patch X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11856 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mmarek@suse.cz Precedence: bulk X-list: xfs * 32bit struct xfs_fsop_bulkreq has different size and layout of members, no matter the alignment. Move the code out of the #else branch (why was it there in the first place?). Define _32 variants of the ioctl constants. * 32bit struct xfs_bstat is different because of time_t and on i386 because of different padding. Create a new formatter xfs_bulkstat_one_compat() that takes care of this. To do this, we need to make xfs_bulkstat_one_iget() and xfs_bulkstat_one_dinode() non-static. * i386 struct xfs_inogrp has different padding. Introduce a similar "formatter" mechanism for xfs_inumbers: the native formatter is just a copy_to_user, the compat formatter takes care of the different layout. Signed-off-by: Michal Marek --- fs/xfs/linux-2.6/xfs_ioctl.c | 2 fs/xfs/linux-2.6/xfs_ioctl32.c | 259 +++++++++++++++++++++++++++++++++++++---- fs/xfs/xfs_itable.c | 30 +++- fs/xfs/xfs_itable.h | 31 ++++ 4 files changed, 290 insertions(+), 32 deletions(-) --- linux-2.6.21.orig/fs/xfs/linux-2.6/xfs_ioctl32.c +++ linux-2.6.21/fs/xfs/linux-2.6/xfs_ioctl32.c @@ -28,12 +28,27 @@ #include "xfs_vfs.h" #include "xfs_vnode.h" #include "xfs_dfrag.h" +#include "xfs_sb.h" +#include "xfs_log.h" +#include "xfs_trans.h" +#include "xfs_dmapi.h" +#include "xfs_mount.h" +#include "xfs_inum.h" +#include "xfs_bmap_btree.h" +#include "xfs_dir2.h" +#include "xfs_dir2_sf.h" +#include "xfs_attr_sf.h" +#include "xfs_dinode.h" +#include "xfs_itable.h" +#include "xfs_error.h" +#include "xfs_inode.h" #define _NATIVE_IOC(cmd, type) \ _IOC(_IOC_DIR(cmd), _IOC_TYPE(cmd), _IOC_NR(cmd), sizeof(type)) #if defined(CONFIG_IA64) || defined(CONFIG_X86_64)
#define BROKEN_X86_ALIGNMENT +#define _PACKED __attribute__((packed)) /* on ia32 l_start is on a 32-bit boundary */ typedef struct xfs_flock64_32 { __s16 l_type; @@ -111,35 +126,234 @@ STATIC unsigned long xfs_ioctl32_geom_v1 return (unsigned long)p; } +typedef struct compat_xfs_inogrp { + __u64 xi_startino; /* starting inode number */ + __s32 xi_alloccount; /* # bits set in allocmask */ + __u64 xi_allocmask; /* mask of allocated inodes */ +} __attribute__((packed)) compat_xfs_inogrp_t; + +STATIC int xfs_inumbers_fmt_compat( + void __user *ubuffer, + const xfs_inogrp_t *buffer, + long count, + long *written) +{ + compat_xfs_inogrp_t *p32 = ubuffer; + long i; + + for (i = 0; i < count; i++) { + if (put_user(buffer[i].xi_startino, &p32[i].xi_startino) || + put_user(buffer[i].xi_alloccount, &p32[i].xi_alloccount) || + put_user(buffer[i].xi_allocmask, &p32[i].xi_allocmask)) + return -EFAULT; + } + *written = count * sizeof(*p32); + return 0; +} + #else -typedef struct xfs_fsop_bulkreq32 { +#define xfs_inumbers_fmt_compat(a, b, c, d) xfs_inumbers_fmt(a, b, c, d) +#define _PACKED + +#endif + +/* XFS_IOC_FSBULKSTAT and friends */ + +typedef struct compat_xfs_bstime { + __s32 tv_sec; /* seconds */ + __s32 tv_nsec; /* and nanoseconds */ +} compat_xfs_bstime_t; + +static int xfs_bstime_store_compat( + compat_xfs_bstime_t __user *p32, + xfs_bstime_t *p) +{ + __s32 sec32; + + sec32 = p->tv_sec; + if (put_user(sec32, &p32->tv_sec) || + put_user(p->tv_nsec, &p32->tv_nsec)) + return -EFAULT; + return 0; +} + +typedef struct compat_xfs_bstat { + __u64 bs_ino; /* inode number */ + __u16 bs_mode; /* type and mode */ + __u16 bs_nlink; /* number of links */ + __u32 bs_uid; /* user id */ + __u32 bs_gid; /* group id */ + __u32 bs_rdev; /* device value */ + __s32 bs_blksize; /* block size */ + __s64 bs_size; /* file size */ + compat_xfs_bstime_t bs_atime; /* access time */ + compat_xfs_bstime_t bs_mtime; /* modify time */ + compat_xfs_bstime_t bs_ctime; /* inode change time */ + int64_t 
bs_blocks; /* number of blocks */ + __u32 bs_xflags; /* extended flags */ + __s32 bs_extsize; /* extent size */ + __s32 bs_extents; /* number of extents */ + __u32 bs_gen; /* generation count */ + __u16 bs_projid; /* project id */ + unsigned char bs_pad[14]; /* pad space, unused */ + __u32 bs_dmevmask; /* DMIG event mask */ + __u16 bs_dmstate; /* DMIG state info */ + __u16 bs_aextents; /* attribute number of extents */ +} _PACKED compat_xfs_bstat_t; + +static int xfs_bulkstat_one_compat( + xfs_mount_t *mp, /* mount point for filesystem */ + xfs_ino_t ino, /* inode number to get data for */ + void __user *buffer, /* buffer to place output in */ + int ubsize, /* size of buffer */ + void *private_data, /* my private data */ + xfs_daddr_t bno, /* starting bno of inode cluster */ + int *ubused, /* bytes used by me */ + void *dibuff, /* on-disk inode buffer */ + int *stat) /* BULKSTAT_RV_... */ +{ + xfs_bstat_t *buf; /* return buffer */ + int error = 0; /* error value */ + xfs_dinode_t *dip; /* dinode inode pointer */ + compat_xfs_bstat_t __user *p32 = buffer; + + dip = (xfs_dinode_t *)dibuff; + *stat = BULKSTAT_RV_NOTHING; + + if (!buffer || xfs_internal_inum(mp, ino)) + return XFS_ERROR(EINVAL); + if (ubsize < sizeof(*buf)) + return XFS_ERROR(ENOMEM); + + buf = kmem_alloc(sizeof(*buf), KM_SLEEP); + + if (dip == NULL) { + /* We're not being passed a pointer to a dinode. This happens + * if BULKSTAT_FG_IGET is selected. Do the iget. 
+	 */
+	error = xfs_bulkstat_one_iget(mp, ino, bno, buf, stat);
+	if (error)
+		goto out_free;
+	} else {
+		xfs_bulkstat_one_dinode(mp, ino, dip, buf);
+	}
+
+	if (put_user(buf->bs_ino, &p32->bs_ino) ||
+	    put_user(buf->bs_mode, &p32->bs_mode) ||
+	    put_user(buf->bs_nlink, &p32->bs_nlink) ||
+	    put_user(buf->bs_uid, &p32->bs_uid) ||
+	    put_user(buf->bs_gid, &p32->bs_gid) ||
+	    put_user(buf->bs_rdev, &p32->bs_rdev) ||
+	    put_user(buf->bs_blksize, &p32->bs_blksize) ||
+	    put_user(buf->bs_size, &p32->bs_size) ||
+	    xfs_bstime_store_compat(&p32->bs_atime, &buf->bs_atime) ||
+	    xfs_bstime_store_compat(&p32->bs_mtime, &buf->bs_mtime) ||
+	    xfs_bstime_store_compat(&p32->bs_ctime, &buf->bs_ctime) ||
+	    put_user(buf->bs_blocks, &p32->bs_blocks) ||
+	    put_user(buf->bs_xflags, &p32->bs_xflags) ||
+	    put_user(buf->bs_extsize, &p32->bs_extsize) ||
+	    put_user(buf->bs_extents, &p32->bs_extents) ||
+	    put_user(buf->bs_gen, &p32->bs_gen) ||
+	    put_user(buf->bs_projid, &p32->bs_projid) ||
+	    put_user(buf->bs_dmevmask, &p32->bs_dmevmask) ||
+	    put_user(buf->bs_dmstate, &p32->bs_dmstate) ||
+	    put_user(buf->bs_aextents, &p32->bs_aextents)) {
+		error = EFAULT;
+		goto out_free;
+	}
+
+	*stat = BULKSTAT_RV_DIDONE;
+	if (ubused)
+		*ubused = sizeof(compat_xfs_bstat_t);
+
+ out_free:
+	kmem_free(buf, sizeof(*buf));
+	return error;
+}
+
+
+
+typedef struct compat_xfs_fsop_bulkreq {
 	compat_uptr_t	lastip;		/* last inode # pointer */
 	__s32		icount;		/* count of entries in buffer */
 	compat_uptr_t	ubuffer;	/* user buffer for inode desc. */
-	__s32		ocount;		/* output count pointer */
-} xfs_fsop_bulkreq32_t;
+	compat_uptr_t	ocount;		/* output count pointer */
+} compat_xfs_fsop_bulkreq_t;

-STATIC unsigned long
-xfs_ioctl32_bulkstat(
-	unsigned long		arg)
+#define XFS_IOC_FSBULKSTAT_32 \
+	_IOWR('X', 101, struct compat_xfs_fsop_bulkreq)
+#define XFS_IOC_FSBULKSTAT_SINGLE_32 \
+	_IOWR('X', 102, struct compat_xfs_fsop_bulkreq)
+#define XFS_IOC_FSINUMBERS_32 \
+	_IOWR('X', 103, struct compat_xfs_fsop_bulkreq)
+
+/* copied from xfs_ioctl.c */
+STATIC int
+xfs_ioc_bulkstat_compat(
+	xfs_mount_t	*mp,
+	unsigned int	cmd,
+	void		__user *arg)
 {
-	xfs_fsop_bulkreq32_t	__user *p32 = (void __user *)arg;
-	xfs_fsop_bulkreq_t	__user *p = compat_alloc_user_space(sizeof(*p));
+	compat_xfs_fsop_bulkreq_t __user *p32 = (void __user *)arg;
 	u32			addr;
+	xfs_fsop_bulkreq_t	bulkreq;
+	int			count;	/* # of records returned */
+	xfs_ino_t		inlast;	/* last inode number */
+	int			done;
+	int			error;
+
+	/* done = 1 if there are more stats to get and if bulkstat */
+	/* should be called again (unused here, but used in dmapi) */
+
+	if (!capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	if (XFS_FORCED_SHUTDOWN(mp))
+		return -XFS_ERROR(EIO);

-	if (get_user(addr, &p32->lastip) ||
-	    put_user(compat_ptr(addr), &p->lastip) ||
-	    copy_in_user(&p->icount, &p32->icount, sizeof(s32)) ||
-	    get_user(addr, &p32->ubuffer) ||
-	    put_user(compat_ptr(addr), &p->ubuffer) ||
-	    get_user(addr, &p32->ocount) ||
-	    put_user(compat_ptr(addr), &p->ocount))
+	if (get_user(addr, &p32->lastip))
+		return -EFAULT;
+	bulkreq.lastip = compat_ptr(addr);
+	if (get_user(bulkreq.icount, &p32->icount) ||
+	    get_user(addr, &p32->ubuffer))
+		return -EFAULT;
+	bulkreq.ubuffer = compat_ptr(addr);
+	if (get_user(addr, &p32->ocount))
 		return -EFAULT;
+	bulkreq.ocount = compat_ptr(addr);

-	return (unsigned long)p;
+	if (copy_from_user(&inlast, bulkreq.lastip, sizeof(__s64)))
+		return -XFS_ERROR(EFAULT);
+
+	if ((count = bulkreq.icount) <= 0)
+		return -XFS_ERROR(EINVAL);
+
+	if (cmd == XFS_IOC_FSINUMBERS)
+		error = xfs_inumbers(mp, &inlast, &count,
+				bulkreq.ubuffer, xfs_inumbers_fmt_compat);
+	else
+		error = xfs_bulkstat(mp, &inlast, &count,
+			xfs_bulkstat_one_compat, NULL,
+			sizeof(compat_xfs_bstat_t), bulkreq.ubuffer,
+			BULKSTAT_FG_QUICK, &done);
+
+	if (error)
+		return -error;
+
+	if (bulkreq.ocount != NULL) {
+		if (copy_to_user(bulkreq.lastip, &inlast,
+				sizeof(xfs_ino_t)))
+			return -XFS_ERROR(EFAULT);
+
+		if (copy_to_user(bulkreq.ocount, &count, sizeof(count)))
+			return -XFS_ERROR(EFAULT);
+	}
+
+	return 0;
 }
-#endif
+
+

 typedef struct compat_xfs_fsop_handlereq {
 	__u32		fd;		/* fd for FD_TO_HANDLE	*/
@@ -261,12 +475,13 @@ xfs_compat_ioctl(
 	case XFS_IOC_SWAPEXT:
 		break;

-	case XFS_IOC_FSBULKSTAT_SINGLE:
-	case XFS_IOC_FSBULKSTAT:
-	case XFS_IOC_FSINUMBERS:
-		arg = xfs_ioctl32_bulkstat(arg);
-		break;
 #endif
+	case XFS_IOC_FSBULKSTAT_32:
+	case XFS_IOC_FSBULKSTAT_SINGLE_32:
+	case XFS_IOC_FSINUMBERS_32:
+		cmd = _NATIVE_IOC(cmd, struct xfs_fsop_bulkreq);
+		return xfs_ioc_bulkstat_compat(XFS_BHVTOI(VNHEAD(vp))->i_mount,
+				cmd, (void*)arg);
 	case XFS_IOC_FD_TO_HANDLE_32:
 	case XFS_IOC_PATH_TO_HANDLE_32:
 	case XFS_IOC_PATH_TO_FSHANDLE_32:
--- linux-2.6.21.orig/fs/xfs/xfs_itable.h
+++ linux-2.6.21/fs/xfs/xfs_itable.h
@@ -70,6 +70,21 @@ xfs_bulkstat_single(
 	int		*done);

 int
+xfs_bulkstat_one_iget(
+	xfs_mount_t	*mp,		/* mount point for filesystem */
+	xfs_ino_t	ino,		/* inode number to get data for */
+	xfs_daddr_t	bno,		/* starting bno of inode cluster */
+	xfs_bstat_t	*buf,		/* return buffer */
+	int		*stat);		/* BULKSTAT_RV_... */
+
+int
+xfs_bulkstat_one_dinode(
+	xfs_mount_t	*mp,		/* mount point for filesystem */
+	xfs_ino_t	ino,		/* inode number to get data for */
+	xfs_dinode_t	*dip,		/* dinode inode pointer */
+	xfs_bstat_t	*buf);		/* return buffer */
+
+int
 xfs_bulkstat_one(
 	xfs_mount_t	*mp,
 	xfs_ino_t	ino,
@@ -86,11 +101,25 @@ xfs_internal_inum(
 	xfs_mount_t	*mp,
 	xfs_ino_t	ino);

+typedef int (*inumbers_fmt_pf)(
+	void		__user *ubuffer, /* buffer to write to */
+	const xfs_inogrp_t *buffer,	/* buffer to read from */
+	long		count,		/* # of elements to read */
+	long		*written);	/* # of bytes written */
+
+int
+xfs_inumbers_fmt(
+	void		__user *ubuffer, /* buffer to write to */
+	const xfs_inogrp_t *buffer,	/* buffer to read from */
+	long		count,		/* # of elements to read */
+	long		*written);	/* # of bytes written */
+
 int					/* error status */
 xfs_inumbers(
 	xfs_mount_t	*mp,		/* mount point for filesystem */
 	xfs_ino_t	*last,		/* last inode returned */
 	int		*count,		/* size of buffer/count returned */
-	xfs_inogrp_t	__user *buffer);/* buffer with inode info */
+	void		__user *buffer,	/* buffer with inode info */
+	inumbers_fmt_pf	formatter);

 #endif	/* __XFS_ITABLE_H__ */
--- linux-2.6.21.orig/fs/xfs/xfs_itable.c
+++ linux-2.6.21/fs/xfs/xfs_itable.c
@@ -49,7 +49,7 @@ xfs_internal_inum(
 		(ino == mp->m_sb.sb_uquotino || ino == mp->m_sb.sb_gquotino)));
 }

-STATIC int
+int
 xfs_bulkstat_one_iget(
 	xfs_mount_t	*mp,		/* mount point for filesystem */
 	xfs_ino_t	ino,		/* inode number to get data for */
@@ -129,7 +129,7 @@ xfs_bulkstat_one_iget(
 	return error;
 }

-STATIC int
+int
 xfs_bulkstat_one_dinode(
 	xfs_mount_t	*mp,		/* mount point for filesystem */
 	xfs_ino_t	ino,		/* inode number to get data for */
@@ -748,6 +748,19 @@ xfs_bulkstat_single(
 	return 0;
 }

+int
+xfs_inumbers_fmt(
+	void		__user *ubuffer, /* buffer to write to */
+	const xfs_inogrp_t *buffer,	/* buffer to read from */
+	long		count,		/* # of elements to read */
+	long		*written)	/* # of bytes written */
+{
+	if (copy_to_user(ubuffer, buffer, count * sizeof(*buffer)))
+		return -EFAULT;
+	*written = count * sizeof(*buffer);
+	return 0;
+}
+
 /*
  * Return inode number table for the filesystem.
  */
@@ -756,7 +769,8 @@ xfs_inumbers(
 	xfs_mount_t	*mp,		/* mount point for filesystem */
 	xfs_ino_t	*lastino,	/* last inode returned */
 	int		*count,		/* size of buffer/count returned */
-	xfs_inogrp_t	__user *ubuffer)/* buffer with inode descriptions */
+	void		__user *ubuffer,/* buffer with inode descriptions */
+	inumbers_fmt_pf	formatter)
 {
 	xfs_buf_t	*agbp;
 	xfs_agino_t	agino;
@@ -835,12 +849,12 @@ xfs_inumbers(
 				bufidx++;
 				left--;
 				if (bufidx == bcount) {
-					if (copy_to_user(ubuffer, buffer,
-							bufidx * sizeof(*buffer))) {
+					long written;
+					if (formatter(ubuffer, buffer, bufidx, &written)) {
 						error = XFS_ERROR(EFAULT);
 						break;
 					}
-					ubuffer += bufidx;
+					ubuffer += written;
 					*count += bufidx;
 					bufidx = 0;
 				}
@@ -862,8 +876,8 @@ xfs_inumbers(
 	}
 	if (!error) {
 		if (bufidx) {
-			if (copy_to_user(ubuffer, buffer,
-					bufidx * sizeof(*buffer)))
+			long written;
+			if (formatter(ubuffer, buffer, bufidx, &written))
 				error = XFS_ERROR(EFAULT);
 			else
 				*count += bufidx;
--- linux-2.6.21.orig/fs/xfs/linux-2.6/xfs_ioctl.c
+++ linux-2.6.21/fs/xfs/linux-2.6/xfs_ioctl.c
@@ -1019,7 +1019,7 @@ xfs_ioc_bulkstat(

 	if (cmd == XFS_IOC_FSINUMBERS)
 		error = xfs_inumbers(mp, &inlast, &count,
-					bulkreq.ubuffer);
+					bulkreq.ubuffer, xfs_inumbers_fmt);
 	else if (cmd == XFS_IOC_FSBULKSTAT_SINGLE)
 		error = xfs_bulkstat_single(mp, &inlast,
 					bulkreq.ubuffer, &done);
-- 
From owner-xfs@oss.sgi.com Tue Jun 19 06:27:30 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 06:27:34 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.suse.cz (styx.suse.cz [82.119.242.94]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5JDRSdo015125 for ; Tue, 19 Jun 2007 06:27:30 -0700 Received: from
discovery.suse.cz (discovery.suse.cz [10.20.1.116]) by mail.suse.cz (Postfix) with ESMTP id 917B8146C034; Tue, 19 Jun 2007 15:27:26 +0200 (CEST) Received: by discovery.suse.cz (Postfix, from userid 10020) id 50BD082DB8; Tue, 19 Jun 2007 15:27:26 +0200 (CEST) Message-Id: <20070619132549.266927601@suse.cz> User-Agent: quilt/0.46-42 Date: Tue, 19 Jun 2007 15:25:49 +0200 From: mmarek@suse.cz To: xfs@oss.sgi.com Cc: linux-kernel@vger.kernel.org Subject: [patch 0/3] Fix for XFS compat ioctls (try2) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11853 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mmarek@suse.cz Precedence: bulk X-list: xfs Hi, here is my second attempt at fixing (some of) the XFS ioctls in compat mode. The main difference from the first version is the bulkstat patch, which I modified to do less copies (no unnecessary copy_in_user() anymore). 
-- have a nice day, Michal Marek From owner-xfs@oss.sgi.com Tue Jun 19 07:13:53 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 07:13:58 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.2 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_42 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5JEDpdo028017 for ; Tue, 19 Jun 2007 07:13:53 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id AFB3BE6DB7; Tue, 19 Jun 2007 15:13:31 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id OXjESb39O3Gl; Tue, 19 Jun 2007 15:13:49 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 96CA5E6CDC; Tue, 19 Jun 2007 15:13:30 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1I0eSl-0008I9-5I; Tue, 19 Jun 2007 15:13:43 +0100 Message-ID: <4677E496.3080506@dgreaves.com> Date: Tue, 19 Jun 2007 15:13:42 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: Tejun Heo Cc: David Chinner , David Robinson , LVM general discussion and development , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid , "Rafael J. 
Wysocki" Subject: Re: [linux-lvm] 2.6.22-rc5 XFS fails after hibernate/resume References: <46744065.6060605@dgreaves.com> <4674645F.5000906@gmail.com> <46751D37.5020608@dgreaves.com> <4676390E.6010202@dgreaves.com> <20070618145007.GE85884050@sgi.com> <4676D97E.4000403@dgreaves.com> <4677A0C7.4000306@dgreaves.com> <4677A596.7090404@gmail.com> In-Reply-To: <4677A596.7090404@gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11857 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs Tejun Heo wrote: > Hello, again... > David Greaves wrote: >>> Good :) >> Now, not so good :) > > Oh, crap. :-) >> So I hibernated last night and resumed this morning. >> Before hibernating I froze and sync'ed. After resume I thawed it. (Sorry >> Dave) >> >> Here are some photos of the screen during resume. This is not 100% >> reproducable - it seems to occur only if the system is shutdown for >> 30mins or so. >> >> Tejun, I wonder if error handling during resume is problematic? I got >> the same errors in 2.6.21. I have never seen these (or any other libata) >> errors other than during resume. >> >> http://www.dgreaves.com/pub/2.6.22-rc5-resume-failure.jpg >> (hard to read, here's one from 2.6.21 >> http://www.dgreaves.com/pub/2.6.21-resume-failure.jpg > > Your controller is repeatedly reporting PHY readiness changed exception. > Are you reading the system image from the device attached to the first > SATA port? Yes if you mean 1st as in the one after the zero-th ... resume=/dev/sdb4 haze:~# swapon -s Filename Type Size Used Priority /dev/sdb4 partition 1004020 0 -1 dmesg snippet below... sda is part of the /scratch xfs array though. 
SMART doesn't show any problems and of course all is well other than during a resume. sda/b are on sata_sil (a cheap plugin pci card) > >> I _think_ I've only seen the xfs problem when a resume shows these errors. > > The error handling itself tries very hard to ensure that there is no > data corruption in case of errors. All commands which experience > exceptions are retried but if the drive itself is doing something > stupid, there's only so much the driver can do. > > How reproducible is the problem? Does the problem go away or occur more > often if you change the drive you write the memory image to? I don't think there should be activity on the sda drive during resume itself. [I broke my / md mirror and am using some of that for swap/resume for now] I did change the swap/resume device to sdd2 (different controller, onboard sata_via) and there was no EH during resume. The system seemed OK, wrote a few Gb of video and did a kernel compile. I repeated this test, no EH during resume, no problems. I even ran xfs_fsr, the defragment utility, to stress the fs. I retain this configuration and try again tonight but it looks like there _may_ be a link between EH during resume and my problems... Of course, I don't understand why it *should* EH during resume, it doesn't during boot or normal operation... Any more tests you'd like me to try? David dmesg snippet... 
sata_sil 0000:00:0a.0: version 2.2
ACPI: PCI Interrupt 0000:00:0a.0[A] -> GSI 16 (level, low) -> IRQ 18
scsi0 : sata_sil
PM: Adding info for No Bus:host0
scsi1 : sata_sil
PM: Adding info for No Bus:host1
ata1: SATA max UDMA/100 cmd 0xf881e080 ctl 0xf881e08a bmdma 0xf881e000 irq 0
ata2: SATA max UDMA/100 cmd 0xf881e0c0 ctl 0xf881e0ca bmdma 0xf881e008 irq 0
ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
ata1.00: ATA-7: Maxtor 6B200M0, BANC1980, max UDMA/100
ata1.00: 390721968 sectors, multi 0: LBA48
ata1.00: configured for UDMA/100
ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
ata2.00: ata_hpa_resize 1: sectors = 312581808, hpa_sectors = 312581808
ata2.00: ATA-6: ST3160023AS, 3.18, max UDMA/133
ata2.00: 312581808 sectors, multi 0: LBA48
ata2.00: ata_hpa_resize 1: sectors = 312581808, hpa_sectors = 312581808
ata2.00: configured for UDMA/100
PM: Adding info for No Bus:target0:0:0
scsi 0:0:0:0: Direct-Access ATA Maxtor 6B200M0 BANC PQ: 0 ANSI: 5
PM: Adding info for scsi:0:0:0:0
sd 0:0:0:0: [sda] 390721968 512-byte hardware sectors (200050 MB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 0:0:0:0: [sda] 390721968 512-byte hardware sectors (200050 MB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sda: sda1
sd 0:0:0:0: [sda] Attached SCSI disk
sd 0:0:0:0: Attached scsi generic sg0 type 0
PM: Adding info for No Bus:target1:0:0
scsi 1:0:0:0: Direct-Access ATA ST3160023AS 3.18 PQ: 0 ANSI: 5
PM: Adding info for scsi:1:0:0:0
sd 1:0:0:0: [sdb] 312581808 512-byte hardware sectors (160042 MB)
sd 1:0:0:0: [sdb] Write Protect is off
sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 1:0:0:0: [sdb] 312581808 512-byte hardware sectors (160042 MB)
sd 1:0:0:0: [sdb] Write Protect is off
sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdb: sdb1 sdb2 sdb3 sdb4
sd 1:0:0:0: [sdb] Attached SCSI disk
sd 1:0:0:0: Attached scsi generic sg1 type 0
From owner-xfs@oss.sgi.com Tue Jun 19 07:34:40 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 07:34:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.9 required=5.0 tests=AWL,BAYES_60,RCVD_ILLEGAL_IP autolearn=no version=3.2.0-pre1-r499012 Received: from mailout1.imos.net (mailout1.imos.net [212.87.132.33]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5JEYcdo001248 for ; Tue, 19 Jun 2007 16:34:38 +0200 Received: from homer.imos.net (homer.imos.net [212.87.132.35]) by mailout1.imos.net (8.14.1/8.14.1) with ESMTP id l5JEYc6m019273 for ; Tue, 19 Jun 2007 16:34:38 +0200 Received: from lstyd.imos.net (lstyd.imos.net [212.87.130.122]) by homer.imos.net (8.14.1/8.14.1) with ESMTP id l5JEYcdr006782 for ; Tue, 19 Jun 2007 16:34:38 +0200 Received: from [5.15.153.128] ([5.15.153.128]) by lstyd.imos.net (8.14.0/8.13.7) with ESMTP id l5JEciET003851 for ; Tue, 19 Jun 2007 16:38:44 +0200 Message-ID: <4677E97E.5070802@theendofthetunnel.de> Date: Tue, 19 Jun 2007 16:34:38 +0200 From: Hannes Dorbath User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.4) Gecko/20070604 Thunderbird/2.0.0.4 Mnenhy/0.7.5.0 MIME-Version: 1.0 To: xfs@oss.sgi.com Subject: BUG: soft lockup detected on CPU#0!
Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11858 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: light@theendofthetunnel.de Precedence: bulk X-list: xfs

I got this on a server box today. Kernel 2.6.21, x86_64, LVM2:

Jun 19 10:58:16 phoenix Ending XFS recovery on filesystem: dm-5 (logdev: internal)
Jun 19 10:58:16 phoenix BUG: soft lockup detected on CPU#0!
Jun 19 10:58:16 phoenix
Jun 19 10:58:16 phoenix Call Trace:
Jun 19 10:58:16 phoenix  [] wake_up_process+0x10/0x20
Jun 19 10:58:16 phoenix  [] softlockup_tick+0xe9/0x110
Jun 19 10:58:16 phoenix  [] run_local_timers+0x13/0x20
Jun 19 10:58:16 phoenix  [] update_process_times+0x57/0x90
Jun 19 10:58:16 phoenix  [] smp_local_timer_interrupt+0x34/0x60
Jun 19 10:58:16 phoenix  [] smp_apic_timer_interrupt+0x4e/0x70
Jun 19 10:58:16 phoenix  [] apic_timer_interrupt+0x66/0x70
Jun 19 10:58:16 phoenix  [] _spin_unlock_irqrestore+0xc/0x10
Jun 19 10:58:16 phoenix  [] __up_read+0x9b/0xb0
Jun 19 10:58:16 phoenix  [] up_read+0x9/0x10
Jun 19 10:58:16 phoenix  [] xfs_iunlock+0x3d/0xa0
Jun 19 10:58:16 phoenix  [] xfs_rwunlock+0x3a/0x50
Jun 19 10:58:16 phoenix  [] xfs_vm_bmap+0x66/0x90
Jun 19 10:58:16 phoenix  [] bmap+0x1c/0x20
Jun 19 10:58:16 phoenix  [] sys_swapon+0x6ae/0xae0
Jun 19 10:58:16 phoenix  [] system_call+0x7e/0x83

What does it tell me?
-- Regards, Hannes Dorbath
From owner-xfs@oss.sgi.com Tue Jun 19 08:31:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 08:31:52 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5JFVkdo017894 for ; Tue, 19 Jun 2007 08:31:48 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 3BE43E6D00; Tue, 19 Jun 2007 16:31:27 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id lMwV76+3I6yN; Tue, 19 Jun 2007 16:31:45 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 55285E6CA5; Tue, 19 Jun 2007 16:31:26 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1I0fgH-0008PP-PU; Tue, 19 Jun 2007 16:31:45 +0100 Message-ID: <4677F6E1.50108@dgreaves.com> Date: Tue, 19 Jun 2007 16:31:45 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: "Rafael J.
Wysocki" Cc: David Chinner , Tejun Heo , David Robinson , LVM general discussion and development , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid Subject: Re: [linux-lvm] 2.6.22-rc5 XFS fails after hibernate/resume References: <46744065.6060605@dgreaves.com> <4676D97E.4000403@dgreaves.com> <4677A0C7.4000306@dgreaves.com> <200706191321.07278.rjw@sisk.pl> In-Reply-To: <200706191321.07278.rjw@sisk.pl> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11860 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs Rafael J. Wysocki wrote: >> This is on 2.6.22-rc5 > > Is the Tejun's patch > > http://www.sisk.pl/kernel/hibernation_and_suspend/2.6.22-rc5/patches/30-block-always-requeue-nonfs-requests-at-the-front.patch > > applied on top of that? 2.6.22-rc5 includes it. 
(but, when I was testing rc4, I did apply this patch) David
From owner-xfs@oss.sgi.com Tue Jun 19 15:49:35 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 15:49:38 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.5 required=5.0 tests=AWL,BAYES_50,RCVD_IN_PSBL autolearn=no version=3.2.0-pre1-r499012 Received: from wx-out-0506.google.com (wx-out-0506.google.com [66.249.82.229]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5JMnXdo029764 for ; Tue, 19 Jun 2007 15:49:35 -0700 Received: by wx-out-0506.google.com with SMTP id s17so2190816wxc for ; Tue, 19 Jun 2007 15:49:34 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:subject:from:to:cc:in-reply-to:references:content-type:date:message-id:mime-version:x-mailer:content-transfer-encoding; b=i3n5v1pR7VedpQtOZd1mG4Kvbhxnl/p/8GxFHMsYgiy8oHvaBhyiFmwEftmnYDVkTGjZhwoNdWFTLn6Yv/pWNuPnctwZmKyWOw8KE4WRhmTf75tSBcN2sn8e7R77yreSwqaBcvQUBP4ZgyUxPNT9CKD1yV+kV2nUyc8ehP6fyLY= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:subject:from:to:cc:in-reply-to:references:content-type:date:message-id:mime-version:x-mailer:content-transfer-encoding;
b=WDj8qwFe0qL0VKmqrhpVfVKh0GbhfKstp8QdM64IJj3q5RR57N0xLT/1CQM7wcYpOLDIO+Tp979fO/B5zRCAlCFPCJE6kXhGiQaggKTfzW2KsjvOvxF3ZsBNtpHfie5m1o1vpMHf2CtsQsSnFX7QsQHt1299DC5AjVtEB2vaMko= Received: by 10.90.101.19 with SMTP id y19mr2421778agb.1182291755221; Tue, 19 Jun 2007 15:22:35 -0700 (PDT) Received: from ?192.168.1.10? ( [84.59.100.98]) by mx.google.com with ESMTP id i57sm832586uga.2007.06.19.15.22.33 (version=TLSv1/SSLv3 cipher=RC4-MD5); Tue, 19 Jun 2007 15:22:33 -0700 (PDT) Subject: Re: XFS shrink (step 0) From: Ruben Porras To: David Chinner Cc: xfs@oss.sgi.com, iusty@k1024.org In-Reply-To: <20070604001632.GA86004887@sgi.com> References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> Content-Type: text/plain Date: Wed, 20 Jun 2007 00:22:31 +0200 Message-Id: <1182291751.5289.9.camel@localhost> Mime-Version: 1.0 X-Mailer: Evolution 2.10.2 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11862 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nahoo82@gmail.com Precedence: bulk X-list: xfs Am Montag, den 04.06.2007, 10:16 +1000 schrieb David Chinner: > Here's the "simple" bits that will allow you to shrink > the filesystem down to the end of the internal log: > > 0. Check space is available for shrink Now that I'm almost* finish with the point 1), is there any place in the xfs_code where a similar task is done? This way I would have a basis to start off. Cheers. * I need only to fix the indentation, and change the ioctl interface as David suggested in another mail in this thread, so that the implementation is not so specific. 
From owner-xfs@oss.sgi.com Tue Jun 19 15:58:50 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 15:58:51 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5JMwndo000839 for ; Tue, 19 Jun 2007 15:58:49 -0700 Received: from edge.local (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 1CCD192C382; Wed, 20 Jun 2007 08:58:50 +1000 (EST) Subject: Re: min/max cleanup From: Nathan Scott Reply-To: nscott@aconex.com To: David Chinner Cc: xfs-dev , xfs-oss In-Reply-To: <20070619080454.GQ86004887@sgi.com> References: <20070619080454.GQ86004887@sgi.com> Content-Type: text/plain Organization: Aconex Date: Wed, 20 Jun 2007 08:57:27 +1000 Message-Id: <1182293847.4249.26.camel@edge.yarra.acx> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11863 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Tue, 2007-06-19 at 18:04 +1000, David Chinner wrote: > To finally put this to rest, clean up the open coded macro > min/max macro implementations. Looks fine to me. cheers. 
-- Nathan From owner-xfs@oss.sgi.com Tue Jun 19 16:27:49 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 16:27:52 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.5 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5JNRldo011148 for ; Tue, 19 Jun 2007 16:27:48 -0700 Received: from edge.local (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 36EC492C382; Wed, 20 Jun 2007 09:07:05 +1000 (EST) Subject: Re: [xfs-masters] Re: Problems reading >1M files from same directory with nfs From: Nathan Scott Reply-To: nscott@aconex.com To: Mark Seger Cc: David Chinner , linux-xfs@oss.sgi.com, Hank Jakiela , Nick Dokos In-Reply-To: <4677AD9D.3010200@hp.com> References: <4676CFF9.8090805@hp.com> <20070618232559.GF85884050@sgi.com> <467724E1.6050309@hp.com> <20070619005251.GI85884050@sgi.com> <4677AD9D.3010200@hp.com> Content-Type: text/plain Organization: Aconex Date: Wed, 20 Jun 2007 09:05:42 +1000 Message-Id: <1182294342.4249.31.camel@edge.yarra.acx> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11864 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Tue, 2007-06-19 at 06:19 -0400, Mark Seger wrote: > I'm including some more data for you to look at. This was produced by > a > tool I just got the go ahead to open source and if you want to pull a > copy to try out it's at sourceforge.net/projects/collectl [shameless > plug]. You'd probably be interested in http://oss.sgi.com/projects/pcp (esp. 
as all the XFS people know how to analyse PCP data already). cheers. -- Nathan From owner-xfs@oss.sgi.com Tue Jun 19 16:43:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 16:43:07 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5JNh2do015102 for ; Tue, 19 Jun 2007 16:43:03 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA12382; Wed, 20 Jun 2007 09:42:54 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5JNgqAf126537558; Wed, 20 Jun 2007 09:42:53 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5JNgmgD127561122; Wed, 20 Jun 2007 09:42:48 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 20 Jun 2007 09:42:48 +1000 From: David Chinner To: Ruben Porras Cc: David Chinner , xfs@oss.sgi.com, iusty@k1024.org Subject: Re: XFS shrink (step 0) Message-ID: <20070619234248.GT86004887@sgi.com> References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> <1182291751.5289.9.camel@localhost> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1182291751.5289.9.camel@localhost> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11865 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: 
bulk X-list: xfs On Wed, Jun 20, 2007 at 12:22:31AM +0200, Ruben Porras wrote: > On Monday, 04.06.2007 at 10:16 +1000, David Chinner wrote: > > > Here's the "simple" bits that will allow you to shrink > > the filesystem down to the end of the internal log: > > > > 0. Check space is available for shrink > > Now that I'm almost* finished with point 1), Cool ;) > is there any place in the > xfs_code where a similar task is done? This way I would have a basis to > start off. No, there isn't anything currently in existence to do this. It's not difficult, though. What you need to do is count the number of used blocks in the AGs that will be truncated off, and check whether there is enough free space in the remaining AGs to hold all the blocks that we are going to move. I think this could be done with a single loop across the perag array or with a simple xfs_db wrapper and some shell/awk/perl magic. e.g.: Here's the basis: budgie:~ # for i in `seq 0 1 7`; do > xfs_db -r -c "agf $i" -c "p freeblks" -c "p btreeblks" /dev/sdb8 > done freeblks = 32779 btreeblks = 0 freeblks = 63003 btreeblks = 0 freeblks = 124423 btreeblks = 0 freeblks = 114516 btreeblks = 0 freeblks = 126602 btreeblks = 0 freeblks = 125905 btreeblks = 0 freeblks = 127886 btreeblks = 0 freeblks = 125445 btreeblks = 0 Now all you need to extract is the size of each ag from the superblock, determine which AGs are going to be freed, and do some math ;) Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jun 19 17:01:25 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 17:01:26 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5K01Ldo020260 for ; Tue, 19 Jun 2007 17:01:23 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA12916; Wed, 20 Jun 2007 10:01:14 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5K01CAf127402530; Wed, 20 Jun 2007 10:01:13 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5K019kk127489642; Wed, 20 Jun 2007 10:01:09 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 20 Jun 2007 10:01:09 +1000 From: David Chinner To: David Greaves Cc: xfs@oss.sgi.com, "'linux-kernel@vger.kernel.org'" , David Chinner Subject: Re: xfs freeze/umount problem Message-ID: <20070620000109.GU86004887@sgi.com> References: <46778F60.5090107@dgreaves.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <46778F60.5090107@dgreaves.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11866 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 19, 2007 at 09:10:08AM +0100, David Greaves 
wrote: > David Chinner wrote: > > FWIW, I'm on record stating that "sync" is not sufficient to quiesce an > XFS > > filesystem for a suspend/resume to work safely and have argued that the > only > > safe thing to do is freeze the filesystem before suspend and thaw it after > > resume. > > Whilst testing a potential bug in another thread I accidentally found that > unmounting a filesystem that I'd just frozen would hang. > > As the saying goes: "Well, duh!!" It's the s_umount semaphore that is the problem here - freeze_bdev() does a get_super() call which does a down_read(&sb->s_umount) and the corresponding up_read() call does not occur until the filesystem is thawed. So yes, your umount will hang until you thaw the filesystem. As I just tried this, I can't unfreeze the filesystem because it's been removed from /proc/mounts already and so xfs_freeze -u aborts: budgie:~ # xfs_freeze -u /dev/sdb8 xfs_freeze: specified file ["/dev/sdb8"] is not on an XFS filesystem budgie:~ # > I could eventually run an unfreeze but the mount was still hung. This led > to an unclean shutdown. So I couldn't reproduce the unclean shutdown. > OK, it may not be bright but it seems like this shouldn't happen; umount > should either unfreeze and work or fail ("Attempt to umount a frozen > filesystem.") if the fs is frozen. IMO, an unmount of a frozen filesystem should simply return EBUSY. > Is this a kernel bug/misfeature or a (u)mount one? Kernel bug. umount should know nothing about frozen filesystems... Maybe something like the patch below needs to be done to prevent the hang.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group Don't try to unmount a frozen filesystem as the unmount will hang waiting on the s_umount semaphore held by the freeze and you may not be able to unfreeze the filesystem to allow the umount to proceed. 
Signed-Off-By: Dave Chinner --- fs/namespace.c | 7 +++++++ 1 file changed, 7 insertions(+) Index: 2.6.x-xfs-new/fs/namespace.c =================================================================== --- 2.6.x-xfs-new.orig/fs/namespace.c 2007-05-29 16:17:59.000000000 +1000 +++ 2.6.x-xfs-new/fs/namespace.c 2007-06-20 09:57:21.310048007 +1000 @@ -545,6 +545,13 @@ static int do_umount(struct vfsmount *mn int retval; LIST_HEAD(umount_list); + /* + * don't try to unmount frozen filesystems as we'll + * hang on the s_umount held by the freeze a bit later. + */ + if (sb->s_frozen != SB_UNFROZEN) + return -EBUSY; + retval = security_sb_umount(mnt, flags); if (retval) return retval; From owner-xfs@oss.sgi.com Tue Jun 19 17:19:16 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 17:19:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5K0JDdo024680 for ; Tue, 19 Jun 2007 17:19:15 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA13262; Wed, 20 Jun 2007 10:19:02 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5K0IvAf123876507; Wed, 20 Jun 2007 10:18:58 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5K0IoqN127847705; Wed, 20 Jun 2007 10:18:50 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 20 Jun 2007 10:18:50 +1000 From: David Chinner To: David Greaves Cc: David Chinner , Tejun Heo , David Robinson , LVM general discussion and development , 
"'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid , "Rafael J. Wysocki" Subject: Re: [linux-lvm] 2.6.22-rc5 XFS fails after hibernate/resume Message-ID: <20070620001850.GV86004887@sgi.com> References: <46744065.6060605@dgreaves.com> <4674645F.5000906@gmail.com> <46751D37.5020608@dgreaves.com> <4676390E.6010202@dgreaves.com> <20070618145007.GE85884050@sgi.com> <4676D97E.4000403@dgreaves.com> <4677A0C7.4000306@dgreaves.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4677A0C7.4000306@dgreaves.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11867 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 19, 2007 at 10:24:23AM +0100, David Greaves wrote: > David Greaves wrote: > so I cd'ed out of /scratch and umounted. > > I then tried the xfs_check. > > haze:~# xfs_check /dev/video_vg/video_lv > ERROR: The filesystem has valuable metadata changes in a log which needs to > be replayed. Mount the filesystem to replay the log, and unmount it before > re-running xfs_check. If you are unable to mount the filesystem, then use > the xfs_repair -L option to destroy the log and attempt a repair. > Note that destroying the log may cause corruption -- please attempt a mount > of the filesystem before doing this. > haze:~# mount /scratch/ > haze:~# umount /scratch/ > haze:~# xfs_check /dev/video_vg/video_lv > > Message from syslogd@haze at Tue Jun 19 08:47:30 2007 ... > haze kernel: Bad page state in process 'xfs_db' I think we can safely say that your system is hosed at this point ;) > ugh. Try again > haze:~# xfs_check /dev/video_vg/video_lv > haze:~# zero output means no on-disk corruption was found. 
Everything is consistent on disk, so that seems to indicate something in memory has been crispy fried by the suspend/resume.... > Dave, I ran xfs_check -v... but I got bored when it reached 122M of bz2 > compressed output with no sign of stopping... still got it if it's any > use... No, not useful. It's a log of every operation it does and so is really only useful for debugging xfs-check problems ;) > I then rebooted and ran a repair which didn't show any damage. Not surprising as your first check showed no damage. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jun 19 20:30:07 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 20:30:10 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5K3U4do024086 for ; Tue, 19 Jun 2007 20:30:05 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA17280; Wed, 20 Jun 2007 13:29:58 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5K3TvAf128042521; Wed, 20 Jun 2007 13:29:58 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5K3TtAF127665806; Wed, 20 Jun 2007 13:29:55 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 20 Jun 2007 13:29:54 +1000 From: David Chinner To: Hannes Dorbath Cc: xfs@oss.sgi.com Subject: Re: BUG: soft lockup detected on CPU#0! 
Message-ID: <20070620032954.GW86004887@sgi.com> References: <4677E97E.5070802@theendofthetunnel.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4677E97E.5070802@theendofthetunnel.de> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11868 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 19, 2007 at 04:34:38PM +0200, Hannes Dorbath wrote: > I got this on a server box today. Kernel 2.6.21, x86_64, LVM2: > > Jun 19 10:58:16 phoenix Ending XFS recovery on filesystem: dm-5 (logdev: > internal) > Jun 19 10:58:16 phoenix BUG: soft lockup detected on CPU#0! > Jun 19 10:58:16 phoenix > Jun 19 10:58:16 phoenix Call Trace: > Jun 19 10:58:16 phoenix [] > wake_up_process+0x10/0x20 > Jun 19 10:58:16 phoenix [] softlockup_tick+0xe9/0x110 > Jun 19 10:58:16 phoenix [] run_local_timers+0x13/0x20 > Jun 19 10:58:16 phoenix [] update_process_times+0x57/0x90 > Jun 19 10:58:16 phoenix [] > smp_local_timer_interrupt+0x34/0x60 > Jun 19 10:58:16 phoenix [] > smp_apic_timer_interrupt+0x4e/0x70 > Jun 19 10:58:16 phoenix [] apic_timer_interrupt+0x66/0x70 > Jun 19 10:58:16 phoenix [] > _spin_unlock_irqrestore+0xc/0x10 > Jun 19 10:58:16 phoenix [] __up_read+0x9b/0xb0 > Jun 19 10:58:16 phoenix [] up_read+0x9/0x10 > Jun 19 10:58:16 phoenix [] xfs_iunlock+0x3d/0xa0 > Jun 19 10:58:16 phoenix [] xfs_rwunlock+0x3a/0x50 > Jun 19 10:58:16 phoenix [] xfs_vm_bmap+0x66/0x90 > Jun 19 10:58:16 phoenix [] bmap+0x1c/0x20 > Jun 19 10:58:16 phoenix [] sys_swapon+0x6ae/0xae0 > Jun 19 10:58:16 phoenix [] system_call+0x7e/0x83 > > What does it tell me? That you've got a fragmented swap file and that sys_swapon() does not yield the CPU in its main loop that maps the extents in the swap file. Harmless, AFAICT. Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jun 19 20:42:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 20:42:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5K3gQdo028184 for ; Tue, 19 Jun 2007 20:42:28 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA17638; Wed, 20 Jun 2007 13:42:23 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 0E87658C38F1; Wed, 20 Jun 2007 13:42:22 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: PARTIAL TAKE 966569 - Fix a bunch of XFSQA tests Message-Id: <20070620034223.0E87658C38F1@chook.melbourne.sgi.com> Date: Wed, 20 Jun 2007 13:42:22 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11869 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs add null files tests to auto group Date: Wed Jun 20 13:41:41 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/xfs-cmds Inspected by: tes@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:28940a xfstests/group - 1.107 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/group.diff?r1=text&tr1=1.107&r2=text&tr2=1.106&f=h - add null files tests to auto group. 
xfstests/140 - 1.2 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/140.diff?r1=text&tr1=1.2&r2=text&tr2=1.1&f=h - leave bad files around for post mortem on failure. From owner-xfs@oss.sgi.com Tue Jun 19 20:45:25 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 20:45:27 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5K3jLdo029502 for ; Tue, 19 Jun 2007 20:45:23 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA17802; Wed, 20 Jun 2007 13:45:18 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 43C7258C38F1; Wed, 20 Jun 2007 13:45:18 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: PARTIAL TAKE 966569 - Fix a bunch of XFSQA tests Message-Id: <20070620034518.43C7258C38F1@chook.melbourne.sgi.com> Date: Wed, 20 Jun 2007 13:45:18 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11870 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Clean up whitespace problems with 166. 
Date: Wed Jun 20 13:44:53 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/xfs-cmds Inspected by: tes@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:28941a xfstests/166 - 1.2 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/166.diff?r1=text&tr1=1.2&r2=text&tr2=1.1&f=h - Don't leave trailing whitespace at EOL when filtering output. xfstests/166.out - 1.2 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/166.out.diff?r1=text&tr1=1.2&r2=text&tr2=1.1&f=h - Correct test number in golden output. From owner-xfs@oss.sgi.com Tue Jun 19 20:48:12 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 20:48:14 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5K3m6do030797 for ; Tue, 19 Jun 2007 20:48:09 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA17889; Wed, 20 Jun 2007 13:48:03 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 9278158C38F1; Wed, 20 Jun 2007 13:48:02 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 966569 - Fix a bunch of XFSQA tests Message-Id: <20070620034803.9278158C38F1@chook.melbourne.sgi.com> Date: Wed, 20 Jun 2007 13:48:02 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11871 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com 
Precedence: bulk X-list: xfs Make sure 167 completes and unmounts scratch correctly Date: Wed Jun 20 13:47:37 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/xfs-cmds Inspected by: tes@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:28942a xfstests/167 - 1.2 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/167.diff?r1=text&tr1=1.2&r2=text&tr2=1.1&f=h - run a sync after killing all the fsstress processes to ensure they have all completed before trying to unmount scratch. From owner-xfs@oss.sgi.com Tue Jun 19 21:08:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 21:08:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5K48bdo004862 for ; Tue, 19 Jun 2007 21:08:39 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA18473; Wed, 20 Jun 2007 14:08:34 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 5D8B858C38F1; Wed, 20 Jun 2007 14:08:34 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 966502 - transaction leak on error in xfs_inactive Message-Id: <20070620040834.5D8B858C38F1@chook.melbourne.sgi.com> Date: Wed, 20 Jun 2007 14:08:34 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11872 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk 
X-list: xfs Cancel transactions on xfs_itruncate_start error. Signed-Off-By: Jesper Juhl Date: Wed Jun 20 14:08:02 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: jesper.juhl@gmail.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28943a fs/xfs/xfs_vnodeops.c - 1.699 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_vnodeops.c.diff?r1=text&tr1=1.699&r2=text&tr2=1.698&f=h - Prevent transaction leak if we get an error from xfs_itruncate_start() by cancelling it correctly. From owner-xfs@oss.sgi.com Tue Jun 19 21:17:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 21:17:05 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5K4H0do006604 for ; Tue, 19 Jun 2007 21:17:02 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA18670; Wed, 20 Jun 2007 14:16:55 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id A726258C38F1; Wed, 20 Jun 2007 14:16:55 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 966503 - Kill off xfs_count_bits Message-Id: <20070620041655.A726258C38F1@chook.melbourne.sgi.com> Date: Wed, 20 Jun 2007 14:16:55 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11873 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: 
xfs Kill off xfs_count_bits xfs_count_bits is only called once, and is then compared to 0. IOW, what it really wants to know is, is the bitmap empty. This can be done more simply, certainly. Signed-off-by: Eric Sandeen Date: Wed Jun 20 14:16:30 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: sandeen@sandeen.net The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28944a fs/xfs/xfs_buf_item.c - 1.162 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_buf_item.c.diff?r1=text&tr1=1.162&r2=text&tr2=1.161&f=h - Use xfs_bitmap_empty instead of xfs_count_bits to determine if the buf item is clean. fs/xfs/xfs_bit.h - 1.20 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_bit.h.diff?r1=text&tr1=1.20&r2=text&tr2=1.19&f=h - xfs_count_bits is not really used to count bits, just to determine if the bitmap is empty or not. Replace it with a function that does just that. fs/xfs/xfs_bit.c - 1.31 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_bit.c.diff?r1=text&tr1=1.31&r2=text&tr2=1.30&f=h - xfs_count_bits is not really used to count bits, just to determine if the bitmap is empty or not. Replace it with a function that does just that. 
From owner-xfs@oss.sgi.com Tue Jun 19 21:21:46 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 21:21:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5K4Lhdo007997 for ; Tue, 19 Jun 2007 21:21:45 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA18927; Wed, 20 Jun 2007 14:21:39 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 0205858C38F1; Wed, 20 Jun 2007 14:21:38 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 964547 - simplify XFS min/max macros Message-Id: <20070620042139.0205858C38F1@chook.melbourne.sgi.com> Date: Wed, 20 Jun 2007 14:21:38 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11874 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Simplify XFS min/max macros. Date: Wed Jun 20 14:21:16 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: nscott@aconex.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28945a fs/xfs/xfs_btree.h - 1.66 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_btree.h.diff?r1=text&tr1=1.66&r2=text&tr2=1.65&f=h - Use min_t/max_t instead of open coding min/max comparisons. 
From owner-xfs@oss.sgi.com Tue Jun 19 22:02:22 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 22:02:24 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5K52Ido020148 for ; Tue, 19 Jun 2007 22:02:21 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA20060; Wed, 20 Jun 2007 15:02:15 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id B291958C38F1; Wed, 20 Jun 2007 15:02:15 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: PARTIAL TAKE 966505 - cleanup dir2 macro shouting Message-Id: <20070620050215.B291958C38F1@chook.melbourne.sgi.com> Date: Wed, 20 Jun 2007 15:02:15 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11875 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Reduce shouting by removing unnecessary macros from dir2 code. 
Signed-Off-By: Christoph Hellwig Date: Wed Jun 20 15:01:44 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: hch@lst.de The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28947a fs/xfs/xfs_dir2_block.c - 1.55 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_dir2_block.c.diff?r1=text&tr1=1.55&r2=text&tr2=1.54&f=h - Call inline functions directly rather than the macros that wrap them and remove the unneeded macros. fs/xfs/xfs_dir2_block.h - 1.18 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_dir2_block.h.diff?r1=text&tr1=1.18&r2=text&tr2=1.17&f=h - Call inline functions directly rather than the macros that wrap them and remove the unneeded macros. fs/xfs/xfs_dir2_sf.h - 1.23 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_dir2_sf.h.diff?r1=text&tr1=1.23&r2=text&tr2=1.22&f=h - Call inline functions directly rather than the macros that wrap them and remove the unneeded macros. fs/xfs/xfs_dir2_sf.c - 1.47 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_dir2_sf.c.diff?r1=text&tr1=1.47&r2=text&tr2=1.46&f=h - Call inline functions directly rather than the macros that wrap them and remove the unneeded macros. fs/xfs/xfs_dir2_data.c - 1.38 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_dir2_data.c.diff?r1=text&tr1=1.38&r2=text&tr2=1.37&f=h - Call inline functions directly rather than the macros that wrap them and remove the unneeded macros. fs/xfs/xfs_dir2_data.h - 1.21 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_dir2_data.h.diff?r1=text&tr1=1.21&r2=text&tr2=1.20&f=h - Call inline functions directly rather than the macros that wrap them and remove the unneeded macros. 
fs/xfs/xfs_dir2_leaf.c - 1.58 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_dir2_leaf.c.diff?r1=text&tr1=1.58&r2=text&tr2=1.57&f=h - Call inline functions directly rather than the macros that wrap them and remove the unneeded macros. fs/xfs/xfs_dir2_leaf.h - 1.24 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_dir2_leaf.h.diff?r1=text&tr1=1.24&r2=text&tr2=1.23&f=h - Call inline functions directly rather than the macros that wrap them and remove the unneeded macros. fs/xfs/xfs_dir2_node.c - 1.59 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_dir2_node.c.diff?r1=text&tr1=1.59&r2=text&tr2=1.58&f=h - Call inline functions directly rather than the macros that wrap them and remove the unneeded macros. fs/xfs/xfs_dir2_node.h - 1.14 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_dir2_node.h.diff?r1=text&tr1=1.14&r2=text&tr2=1.13&f=h - Call inline functions directly rather than the macros that wrap them and remove the unneeded macros. fs/xfs/xfs_dir2.c - 1.56 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_dir2.c.diff?r1=text&tr1=1.56&r2=text&tr2=1.55&f=h - Call inline functions directly rather than the macros that wrap them and remove the unneeded macros. 
From owner-xfs@oss.sgi.com Tue Jun 19 22:06:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 22:06:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5K56cdo022273 for ; Tue, 19 Jun 2007 22:06:40 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA20177; Wed, 20 Jun 2007 15:06:35 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id AA89358C38F1; Wed, 20 Jun 2007 15:06:35 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 966505 - cleanup dir2 macro shouting Message-Id: <20070620050635.AA89358C38F1@chook.melbourne.sgi.com> Date: Wed, 20 Jun 2007 15:06:35 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11876 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Fixup kdb module code after removing dir2 shouting. Date: Wed Jun 20 15:05:52 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: dgc@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28948a fs/xfs/xfsidbg.c - 1.315 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfsidbg.c.diff?r1=text&tr1=1.315&r2=text&tr2=1.314&f=h - Fixup kdb module code after removing dir2 shouting. 
From owner-xfs@oss.sgi.com Tue Jun 19 22:22:59 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 22:23:02 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5K5Mudo028223 for ; Tue, 19 Jun 2007 22:22:58 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA20585; Wed, 20 Jun 2007 15:22:52 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id BAC9058C38F1; Wed, 20 Jun 2007 15:22:52 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 966576 - use is_power_of_2() instead of open coding checks Message-Id: <20070620052252.BAC9058C38F1@chook.melbourne.sgi.com> Date: Wed, 20 Jun 2007 15:22:52 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11877 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Use is_power_of_2 instead of open coding checks Signed-off-by: vignesh babu Date: Wed Jun 20 15:22:20 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: vignesh.babu@wipro.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28950a fs/xfs/xfs_inode.c - 1.464 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_inode.c.diff?r1=text&tr1=1.464&r2=text&tr2=1.463&f=h - Use is_power_of_2 instead of open coding checks From owner-xfs@oss.sgi.com Tue Jun 19 
22:49:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Jun 2007 22:49:07 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5K5n1do003256 for ; Tue, 19 Jun 2007 22:49:04 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA21422; Wed, 20 Jun 2007 15:48:58 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 99AD658C38F1; Wed, 20 Jun 2007 15:48:58 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 966562 - XFS should not be looking at filp reference counts Message-Id: <20070620054858.99AD658C38F1@chook.melbourne.sgi.com> Date: Wed, 20 Jun 2007 15:48:58 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11878 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs XFS should not be looking at filp reference counts A check for file_count is always a bad idea. Linux has the ->release method to deal with cleanups on last close and ->flush is only for the very rare case where we want to perform an operation on every drop of a reference to a file struct. This patch gets rid of vop_close and surrounding code in favour of simply doing the page flushing from ->release. 
Signed-off-by: Christoph Hellwig Date: Wed Jun 20 15:48:18 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: hch@lst.de The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:28952a fs/xfs/xfs_vnodeops.c - 1.700 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_vnodeops.c.diff?r1=text&tr1=1.700&r2=text&tr2=1.699&f=h - Move the functionality in xfs_close to xfs_release and remove xfs_close. fs/xfs/linux-2.6/xfs_file.c - 1.149 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_file.c.diff?r1=text&tr1=1.149&r2=text&tr2=1.148&f=h - Kill xfs_file_flush as xfs_file_release provides us with the required last-close callout. fs/xfs/linux-2.6/xfs_vnode.h - 1.128 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_vnode.h.diff?r1=text&tr1=1.128&r2=text&tr2=1.127&f=h - Remove the last close vop call as it is no longer needed. From owner-xfs@oss.sgi.com Wed Jun 20 13:59:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 20 Jun 2007 13:59:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.4 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_43, URIBL_RHS_TLD_WHOIS autolearn=no version=3.2.0-pre1-r499012 Received: from smtpgateway.bnl.gov (smtpgw.bnl.gov [130.199.3.132]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5KKxDdo023241 for ; Wed, 20 Jun 2007 13:59:17 -0700 Received: from nico.rhic.bnl.gov ([130.199.80.5]) by smtpgw.bnl.gov with esmtp (bnl.gov SMTP on gw3) serial 1I17Gg-0006jE-AE; Wed, 20 Jun 2007 16:59:10 -0400 Message-ID: <4679951E.8050601@bnl.gov> Date: Wed, 20 Jun 2007 16:59:10 -0400 From: Robert Petkus User-Agent: Thunderbird 1.5.0.10 (X11/20070302) MIME-Version: 1.0 To: xfs@oss.sgi.com CC: Petkus Robert Subject: Poor performance -- poor config? 
Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BNL-MailScanner-Information: Please contact the ITD Service Desk for more information X-BNL-MailScanner: Found to be clean X-BNL-MailScanner-SpamCheck: X-BNL-MailScanner-From: rpetkus@bnl.gov X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11879 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: rpetkus@bnl.gov Precedence: bulk X-list: xfs Folks, I'm trying to configure a system (server + DS4700 disk array) that can offer the highest performance for our application. We will be reading and writing multiple threads of 1-2GB files with 1MB block sizes. DS4700 config: (16) 500 GB SATA disks (3) 4+1 RAID 5 arrays and (1) hot spare == (3) 2TB LUNs. (2) RAID arrays are on controller A, (1) RAID array is on controller B. 512k segment size Server Config: IBM x3550, 9GB RAM, RHEL 5 x86_64 (2.6.18) The (3) LUNs are sdb, sdc {both controller A}, sdd {controller B} My original goal was to use XFS and create a highly optimized config. Here is what I came up with: Create separate partitions for XFS log files: sdd1, sdd2, sdd3 each 150M -- 128MB is the maximum allowable XFS log size. 
The XFS "stripe unit" (su) = 512k to match the DS4700 segment size The "stripe width" ( (n-1)*sunit )= swidth=2048k = sw=4 (a multiple of su) 4k is the max block size allowable on x86_64 since 4k is the max kernel page size [root@~]# mkfs.xfs -l logdev=/dev/sdd1,size=128m -d su=512k -d sw=4 -f /dev/sdb [root@~]# mount -t xfs -o context=system_u:object_r:unconfined_t,noatime,nodiratime,logbufs=8,logdev=/dev/sdd1 /dev/sdb /data0 And the write performance is lousy compared to ext3 built like so: [root@~]# mke2fs -j -m 1 -b4096 -E stride=128 /dev/sdc [root@~]# mount -t ext3 -o noatime,nodiratime,context="system_u:object_r:unconfined_t:s0",reservation /dev/sdc /data1 What am I missing? Thanks! -- Robert Petkus RHIC/USATLAS Computing Facility Brookhaven National Laboratory Physics Dept. - Bldg. 510A Upton, New York 11973 http://www.bnl.gov/RHIC http://www.acf.bnl.gov From owner-xfs@oss.sgi.com Wed Jun 20 14:04:58 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 20 Jun 2007 14:05:01 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=4.0 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_43, SPF_HELO_PASS,URIBL_RHS_TLD_WHOIS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5KL4udo025929 for ; Wed, 20 Jun 2007 14:04:57 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 40964B0004E6; Wed, 20 Jun 2007 17:04:57 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 3D03A50000BA; Wed, 20 Jun 2007 17:04:57 -0400 (EDT) Date: Wed, 20 Jun 2007 17:04:57 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Robert Petkus cc: xfs@oss.sgi.com Subject: Re: Poor performance -- poor config? 
In-Reply-To: <4679951E.8050601@bnl.gov> Message-ID: References: <4679951E.8050601@bnl.gov> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11880 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs On Wed, 20 Jun 2007, Robert Petkus wrote: > Folks, > I'm trying to configure a system (server + DS4700 disk array) that can offer > the highest performance for our application. We will be reading and writing > multiple threads of 1-2GB files with 1MB block sizes. > DS4700 config: > (16) 500 GB SATA disks > (3) 4+1 RAID 5 arrays and (1) hot spare == (3) 2TB LUNs. > (2) RAID arrays are on controller A, (1) RAID array is on controller B. > 512k segment size > > Server Config: > IBM x3550, 9GB RAM, RHEL 5 x86_64 (2.6.18) > The (3) LUNs are sdb, sdc {both controller A}, sdd {controller B} > > My original goal was to use XFS and create a highly optimized config. Here > is what I came up with: > Create separate partitions for XFS log files: sdd1, sdd2, sdd3 each 150M -- > 128MB is the maximum allowable XFS log size. 
> The XFS "stripe unit" (su) = 512k to match the DS4700 segment size > The "stripe width" ( (n-1)*sunit )= swidth=2048k = sw=4 (a multiple of su) > 4k is the max block size allowable on x86_64 since 4k is the max kernel page > size > > [root@~]# mkfs.xfs -l logdev=/dev/sdd1,size=128m -d su=512k -d sw=4 -f > /dev/sdb > [root@~]# mount -t xfs -o > context=system_u:object_r:unconfined_t,noatime,nodiratime,logbufs=8,logdev=/dev/sdd1 > /dev/sdb /data0 > > And the write performance is lousy compared to ext3 built like so: > [root@~]# mke2fs -j -m 1 -b4096 -E stride=128 /dev/sdc > [root@~]# mount -t ext3 -o > noatime,nodiratime,context="system_u:object_r:unconfined_t:s0",reservation > /dev/sdc /data1 > > What am I missing? > > Thanks! > > -- > Robert Petkus > RHIC/USATLAS Computing Facility > Brookhaven National Laboratory > Physics Dept. - Bldg. 510A > Upton, New York 11973 > > http://www.bnl.gov/RHIC > http://www.acf.bnl.gov > > What speeds are you getting? Have you tried a SW RAID with the 16 drives, if you do that, XFS will auto-optimize per the physical characteristics of the md array. Also, most of those mount options besides the logdev/noatime don't do much with XFS from my personal benchmarks, you're better off with the defaults+noatime. What speed are you getting reads/writes, what do you expect? How are the drives attached/what type of controller? PCI? 
From owner-xfs@oss.sgi.com Wed Jun 20 14:16:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 20 Jun 2007 14:16:47 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.6 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_43, URIBL_RHS_TLD_WHOIS autolearn=no version=3.2.0-pre1-r499012 Received: from smtpgateway.bnl.gov (smtpgw.bnl.gov [130.199.3.132]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5KLGgdo032021 for ; Wed, 20 Jun 2007 14:16:44 -0700 Received: from nico.rhic.bnl.gov ([130.199.80.5]) by smtpgw.bnl.gov with esmtp (bnl.gov SMTP on gw1) serial 1I17Xd-00073M-6L; Wed, 20 Jun 2007 17:16:42 -0400 Message-ID: <46799939.2080503@bnl.gov> Date: Wed, 20 Jun 2007 17:16:41 -0400 From: Robert Petkus User-Agent: Thunderbird 1.5.0.10 (X11/20070302) MIME-Version: 1.0 To: Justin Piszcz CC: xfs@oss.sgi.com Subject: Re: Poor performance -- poor config? References: <4679951E.8050601@bnl.gov> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BNL-MailScanner-Information: Please contact the ITD Service Desk for more information X-BNL-MailScanner: Found to be clean X-BNL-MailScanner-SpamCheck: not spam (whitelisted), SpamAssassin (not cached, score=-3.85, required 5, autolearn=not spam, BAYES_00 -3.60, FROMDOTGOV -0.25) X-BNL-MailScanner-From: rpetkus@bnl.gov X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11881 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: rpetkus@bnl.gov Precedence: bulk X-list: xfs Justin Piszcz wrote: > > > On Wed, 20 Jun 2007, Robert Petkus wrote: > >> Folks, >> I'm trying to configure a system (server + DS4700 disk array) that >> can offer the highest performance for our application. 
We will be >> reading and writing multiple threads of 1-2GB files with 1MB block >> sizes. >> DS4700 config: >> (16) 500 GB SATA disks >> (3) 4+1 RAID 5 arrays and (1) hot spare == (3) 2TB LUNs. >> (2) RAID arrays are on controller A, (1) RAID array is on controller B. >> 512k segment size >> >> Server Config: >> IBM x3550, 9GB RAM, RHEL 5 x86_64 (2.6.18) >> The (3) LUNs are sdb, sdc {both controller A}, sdd {controller B} >> >> My original goal was to use XFS and create a highly optimized >> config. Here is what I came up with: >> Create separate partitions for XFS log files: sdd1, sdd2, sdd3 each >> 150M -- 128MB is the maximum allowable XFS log size. >> The XFS "stripe unit" (su) = 512k to match the DS4700 segment size >> The "stripe width" ( (n-1)*sunit )= swidth=2048k = sw=4 (a multiple >> of su) >> 4k is the max block size allowable on x86_64 since 4k is the max >> kernel page size >> >> [root@~]# mkfs.xfs -l logdev=/dev/sdd1,size=128m -d su=512k -d sw=4 >> -f /dev/sdb >> [root@~]# mount -t xfs -o >> context=system_u:object_r:unconfined_t,noatime,nodiratime,logbufs=8,logdev=/dev/sdd1 >> /dev/sdb /data0 >> >> And the write performance is lousy compared to ext3 built like so: >> [root@~]# mke2fs -j -m 1 -b4096 -E stride=128 /dev/sdc >> [root@~]# mount -t ext3 -o >> noatime,nodiratime,context="system_u:object_r:unconfined_t:s0",reservation >> /dev/sdc /data1 >> >> What am I missing? >> >> Thanks! >> >> -- >> Robert Petkus >> RHIC/USATLAS Computing Facility >> Brookhaven National Laboratory >> Physics Dept. - Bldg. 510A >> Upton, New York 11973 >> >> http://www.bnl.gov/RHIC >> http://www.acf.bnl.gov >> >> > > What speeds are you getting? 
dd if=/dev/zero of=/data0/bigfile bs=1024k count=5000 5242880000 bytes (5.2 GB) copied, 149.296 seconds, 35.1 MB/s dd if=/data0/bigfile of=/dev/null bs=1024k count=5000 5242880000 bytes (5.2 GB) copied, 26.3148 seconds, 199 MB/s iozone.linux -w -r 1m -s 1g -i0 -t 4 -e -w -f /data0/test1 Children see throughput for 4 initial writers = 28528.59 KB/sec Parent sees throughput for 4 initial writers = 25212.79 KB/sec Min throughput per process = 6259.05 KB/sec Max throughput per process = 7548.29 KB/sec Avg throughput per process = 7132.15 KB/sec iozone.linux -w -r 1m -s 1g -i1 -t 4 -e -w -f /data0/test1 Children see throughput for 4 readers = 3059690.19 KB/sec Parent sees throughput for 4 readers = 3055307.71 KB/sec Min throughput per process = 757151.81 KB/sec Max throughput per process = 776032.62 KB/sec Avg throughput per process = 764922.55 KB/sec > > Have you tried a SW RAID with the 16 drives, if you do that, XFS will > auto-optimize per the physical characteristics of the md array. No because this would waste an expensive disk array. I've done this with various JBODs, even a SUN Thumper, with OK results... > > Also, most of those mount options besides the logdev/noatime don't do > much with XFS from my personal benchmarks, you're better off with the > defaults+noatime. The security context stuff is in there since I run a strict SELinux policy. Otherwise, I need logdev since it's on a different disk. BTW, the same filesystem w/out a separate log disk made no difference in performance. > > What speed are you getting reads/writes, what do you expect? How are > the drives attached/what type of controller? PCI? I can get ~3x write performance with ext3. I have a dual-port FC-4 PCIe HBA connected to (2) IBM DS4700 FC-4 controllers. There is lots of headroom. -- Robert Petkus RHIC/USATLAS Computing Facility Brookhaven National Laboratory Physics Dept. - Bldg. 
510A Upton, New York 11973 http://www.bnl.gov/RHIC http://www.acf.bnl.gov From owner-xfs@oss.sgi.com Wed Jun 20 14:23:56 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 20 Jun 2007 14:23:59 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.1 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_43, J_CHICKENPOX_65,SPF_HELO_PASS,URIBL_RHS_TLD_WHOIS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5KLNsdo003665 for ; Wed, 20 Jun 2007 14:23:56 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id C0629B0004E6; Wed, 20 Jun 2007 17:23:55 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id BBA6750000B9; Wed, 20 Jun 2007 17:23:55 -0400 (EDT) Date: Wed, 20 Jun 2007 17:23:55 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Robert Petkus cc: xfs@oss.sgi.com Subject: Re: Poor performance -- poor config? In-Reply-To: <46799939.2080503@bnl.gov> Message-ID: References: <4679951E.8050601@bnl.gov> <46799939.2080503@bnl.gov> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11882 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs On Wed, 20 Jun 2007, Robert Petkus wrote: > Justin Piszcz wrote: >> >> >> On Wed, 20 Jun 2007, Robert Petkus wrote: >> >>> Folks, >>> I'm trying to configure a system (server + DS4700 disk array) that can >>> offer the highest performance for our application. We will be reading and >>> writing multiple threads of 1-2GB files with 1MB block sizes. 
>>> DS4700 config: >>> (16) 500 GB SATA disks >>> (3) 4+1 RAID 5 arrays and (1) hot spare == (3) 2TB LUNs. >>> (2) RAID arrays are on controller A, (1) RAID array is on controller B. >>> 512k segment size >>> >>> Server Config: >>> IBM x3550, 9GB RAM, RHEL 5 x86_64 (2.6.18) >>> The (3) LUNs are sdb, sdc {both controller A}, sdd {controller B} >>> >>> My original goal was to use XFS and create a highly optimized config. >>> Here is what I came up with: >>> Create separate partitions for XFS log files: sdd1, sdd2, sdd3 each 150M >>> -- 128MB is the maximum allowable XFS log size. >>> The XFS "stripe unit" (su) = 512k to match the DS4700 segment size >>> The "stripe width" ( (n-1)*sunit )= swidth=2048k = sw=4 (a multiple of >>> su) >>> 4k is the max block size allowable on x86_64 since 4k is the max kernel >>> page size >>> >>> [root@~]# mkfs.xfs -l logdev=/dev/sdd1,size=128m -d su=512k -d sw=4 -f >>> /dev/sdb >>> [root@~]# mount -t xfs -o >>> context=system_u:object_r:unconfined_t,noatime,nodiratime,logbufs=8,logdev=/dev/sdd1 >>> /dev/sdb /data0 >>> >>> And the write performance is lousy compared to ext3 built like so: >>> [root@~]# mke2fs -j -m 1 -b4096 -E stride=128 /dev/sdc >>> [root@~]# mount -t ext3 -o >>> noatime,nodiratime,context="system_u:object_r:unconfined_t:s0",reservation >>> /dev/sdc /data1 >>> >>> What am I missing? >>> >>> Thanks! >>> >>> -- >>> Robert Petkus >>> RHIC/USATLAS Computing Facility >>> Brookhaven National Laboratory >>> Physics Dept. - Bldg. 510A >>> Upton, New York 11973 >>> >>> http://www.bnl.gov/RHIC >>> http://www.acf.bnl.gov >>> >>> >> >> What speeds are you getting? 
> dd if=/dev/zero of=/data0/bigfile bs=1024k count=5000 > 5242880000 bytes (5.2 GB) copied, 149.296 seconds, 35.1 MB/s > > dd if=/data0/bigfile of=/dev/null bs=1024k count=5000 > 5242880000 bytes (5.2 GB) copied, 26.3148 seconds, 199 MB/s > > iozone.linux -w -r 1m -s 1g -i0 -t 4 -e -w -f /data0/test1 > Children see throughput for 4 initial writers = 28528.59 KB/sec > Parent sees throughput for 4 initial writers = 25212.79 KB/sec > Min throughput per process = 6259.05 KB/sec > Max throughput per process = 7548.29 KB/sec > Avg throughput per process = 7132.15 KB/sec > > iozone.linux -w -r 1m -s 1g -i1 -t 4 -e -w -f /data0/test1 > Children see throughput for 4 readers = 3059690.19 KB/sec > Parent sees throughput for 4 readers = 3055307.71 KB/sec > Min throughput per process = 757151.81 KB/sec > Max throughput per process = 776032.62 KB/sec > Avg throughput per process = 764922.55 KB/sec > >> >> Have you tried a SW RAID with the 16 drives, if you do that, XFS will >> auto-optimize per the physical characteristics of the md array. > No because this would waste an expensive disk array. I've done this with > various JBODs, even a SUN Thumper, with OK results... >> >> Also, most of those mount options besides the logdev/noatime don't do much >> with XFS from my personal benchmarks, you're better off with the >> defaults+noatime. > The security context stuff is in there since I run a strict SELinux policy. > Otherwise, I need logdev since it's on a different disk. BTW, the same > filesystem w/out a separate log disk made no difference in performance. >> >> What speed are you getting reads/writes, what do you expect? How are the >> drives attached/what type of controller? PCI? > I can get ~3x write performance with ext3. I have a dual-port FC-4 PCIe HBA > connected to (2) IBM DS4700 FC-4 controllers. There is lots of headroom. > > -- > Robert Petkus > RHIC/USATLAS Computing Facility > Brookhaven National Laboratory > Physics Dept. - Bldg. 
510A > Upton, New York 11973 > > http://www.bnl.gov/RHIC > http://www.acf.bnl.gov > > EXT3 up to 3x fast? Hrm.. Have you tried default mkfs.xfs options [internal journal]? What write speed do you get using the defaults? What kernel version? Justin. From owner-xfs@oss.sgi.com Wed Jun 20 23:14:58 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 20 Jun 2007 23:15:03 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.5 required=5.0 tests=AWL,BAYES_99,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from smtpout.eastlink.ca (smtpout.eastlink.ca [24.222.0.30]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5L6Esdo009363 for ; Wed, 20 Jun 2007 23:14:58 -0700 Received: from ip03.eastlink.ca ([24.222.10.15]) by mta01.eastlink.ca (Sun Java System Messaging Server 6.2-4.03 (built Sep 22 2005)) with ESMTP id <0JJZ00MGV2OVAZT0@mta01.eastlink.ca> for xfs@oss.sgi.com; Thu, 21 Jun 2007 03:14:55 -0300 (ADT) Received: from blk-89-214-20.eastlink.ca (HELO llama.cordes.ca) ([24.89.214.20]) by ip03.eastlink.ca with ESMTP; Thu, 21 Jun 2007 03:14:15 -0300 Received: from peter by llama.cordes.ca with local (Exim 3.36 #1 (Debian)) id 1I1FwP-0000X2-00 for ; Thu, 21 Jun 2007 03:14:49 -0300 Date: Thu, 21 Jun 2007 03:14:49 -0300 From: Peter Cordes Subject: Re: XFS_IOC_RESVSP64 for swap files In-reply-to: <20070619043333.GJ86004887@sgi.com> To: xfs@oss.sgi.com Message-id: <20070621061449.GB11200@cordes.ca> MIME-version: 1.0 Content-type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary=vkogqOf2sHV7VnPd Content-disposition: inline X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: Ao8CAEK0eUYYWdYU/2dsb2JhbAA X-IronPort-AV: E=Sophos;i="4.16,445,1175482800"; d="asc'?scan'208";a="20977665" References: <20070617100822.GA4586@cordes.ca> <20070619043333.GJ86004887@sgi.com> User-Agent: Mutt/1.5.9i X-Virus-Scanned: ClamAV version 
0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11883 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: peter@cordes.ca Precedence: bulk X-list: xfs --vkogqOf2sHV7VnPd Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Tue, Jun 19, 2007 at 02:33:33PM +1000, David Chinner wrote: > On Sun, Jun 17, 2007 at 07:08:23AM -0300, Peter Cordes wrote: > > Hi XFS list. I'm not subscribed, please CC me. > > > > Programs such as swapspace and swapd create new swap files when vmem runs > > low. They would benefit hugely from being able to create a swapfile without > > any significant disk I/O. (If a process grabs a lot of memory quickly, the > > system will be swapping hard while swapspace(8) is writing a swapfile.) > > but it [exposing stale data] would still be useful for making swap files > > even if only root could do it. > Still a potential security hole. Root can read the device file, so how is letting root expose stale data any worse? If a program run by root makes a file with mode 0600, and then calls XFS_IOC_EXPOSE_MY_STALE_DATA_TO_EVERYONE, where's the security problem? > > Could swapon(2) in the kernel be made to work on XFS files with reserved > > space? > Basically, the swapon syscall calls bmap() for the block mapping of the > file and XFS returns "holes" [...] Yeah, bad idea to put special case stuff in the kernel. > > i.e. call something that would give XFS a chance to mark all the > > extents as written, even though they're not. > You mean like XFS_IOC_EXPOSE_MY_STALE_DATA_TO_EVERYONE? ;) > That's not going to happen. > In fact, I plan to make unwritten extents non-optional soon (i.e. I've already > got preliminary patches to do this) so that filesystems that have it turned > off will get them turned on automatically. > The reasons? 
> a) there is no good reason for unwritten=0 from a performance > perspective > b) there is good reason for unwritten=1 from a security perspective > c) we need to use unwritten extents in place of written extents > during delayed allocation to prevent stale data exposure on > crash and when using extent size hints. > So soon unwritten=0 is likely to go the way of the dodo..... Ok. I didn't really want to recreate my /var/tmp filesystem with unwritten=0, but I really wish I had XFS_IOC_EXPOSE_MY_STALE_DATA_TO_EVERYONE on my desktop machine. I think dynamic swap file creation is a cool idea, and that ioctl would make it work perfectly. This ioctl is only useful for making swap files. Nothing else cares if the file has "holes" or not. But for that one application, it's great. There are lots of ways root can shoot himself in the foot, and I don't think adding one more is enough reason to not add an ioctl. Is it just that you don't want to take time to implement such a feature, or would you reject a patch that added it? (Not that I'm volunteering, necessarily.) BTW, thanks for taking the time to respond. -- #define X(x,y) x##y Peter Cordes ; e-mail: X(peter@cor , des.ca) "The gods confound the man who first found out how to distinguish the hours! Confound him, too, who in this place set up a sundial, to cut and hack my day so wretchedly into small pieces!" 
-- Plautus, 200 BC --vkogqOf2sHV7VnPd Content-Type: application/pgp-signature; name="signature.asc" Content-Description: Digital signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.1 (GNU/Linux) iQC1AwUBRnoXWQWkmhLkWuRTAQIWtQT/Sn2Yn/q8+zPexe/V51wd9ZqQsIX1j3jT 4YFg0OMVI3uCKtFVnuFmvQVH2FPg6JrIU+uxfgC7HnzL8AaBXw29zuvKNYDMCOQd f4IBSBnS2BlDmvz0McD5Bnhqm7gAvFsAJRpYcQPp1TyXMbPOBR9qYE5XlH97QvNK +Dv2knnRHEmHexMc7r9Y/L4oHp8UFh+D+etZG8NQDTxYPJYHwYEeEw== =Dqdd -----END PGP SIGNATURE----- --vkogqOf2sHV7VnPd-- From owner-xfs@oss.sgi.com Wed Jun 20 23:38:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 20 Jun 2007 23:38:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.7 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_43, J_CHICKENPOX_65,SPF_HELO_PASS,URIBL_RHS_TLD_WHOIS autolearn=no version=3.2.0-pre1-r499012 Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.176]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5L6cFdo015986 for ; Wed, 20 Jun 2007 23:38:16 -0700 Received: from [212.227.126.203] (helo=mrvnet.kundenserver.de) by moutng.kundenserver.de with esmtp (Exim 3.35 #1) id 1I1GJ0-000364-00; Thu, 21 Jun 2007 08:38:10 +0200 Received: from [172.23.1.26] (helo=xchgsmtp.exchange.xchg) by mrvnet.kundenserver.de with smtp (Exim 3.35 #1) id 1I1GI9-0000ya-07; Thu, 21 Jun 2007 08:37:17 +0200 Received: from mapibe17.exchange.xchg ([172.23.1.54]) by xchgsmtp.exchange.xchg with Microsoft SMTPSVC(6.0.3790.3959); Thu, 21 Jun 2007 08:37:12 +0200 X-MimeOLE: Produced By Microsoft Exchange V6.5 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Subject: RE: Poor performance -- poor config? 
Date: Thu, 21 Jun 2007 08:37:36 +0200 Message-ID: <55EF1E5D5804A542A6CA37E446DDC206F5C5AA@mapibe17.exchange.xchg> In-Reply-To: X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: Poor performance -- poor config? Thread-Index: AcezgWe9BMY2wM3JQUCygSYXi7otiAATNIlg References: <4679951E.8050601@bnl.gov> <46799939.2080503@bnl.gov> From: "Sebastian Brings" To: "Justin Piszcz" , "Robert Petkus" Cc: X-OriginalArrivalTime: 21 Jun 2007 06:37:12.0975 (UTC) FILETIME=[99E2A1F0:01C7B3CE] X-Provags-ID: kundenserver.de abuse@kundenserver.de ident:@172.23.1.26 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id l5L6cHdo015996 X-archive-position: 11884 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sebas@silexmedia.com Precedence: bulk X-list: xfs > -----Original Message----- > From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com] On Behalf Of Justin Piszcz > Sent: Mittwoch, 20. Juni 2007 23:24 > To: Robert Petkus > Cc: xfs@oss.sgi.com > Subject: Re: Poor performance -- poor config? > > > > On Wed, 20 Jun 2007, Robert Petkus wrote: > > > Justin Piszcz wrote: > >> > >> > >> On Wed, 20 Jun 2007, Robert Petkus wrote: > >> > >>> Folks, > >>> I'm trying to configure a system (server + DS4700 disk array) that can > >>> offer the highest performance for our application. We will be reading and > >>> writing multiple threads of 1-2GB files with 1MB block sizes. > >>> DS4700 config: > >>> (16) 500 GB SATA disks > >>> (3) 4+1 RAID 5 arrays and (1) hot spare == (3) 2TB LUNs. > >>> (2) RAID arrays are on controller A, (1) RAID array is on controller B. 
> >>> 512k segment size > >>> > >>> Server Config: > >>> IBM x3550, 9GB RAM, RHEL 5 x86_64 (2.6.18) > >>> The (3) LUNs are sdb, sdc {both controller A}, sdd {controller B} > >>> > >>> My original goal was to use XFS and create a highly optimized config. > >>> Here is what I came up with: > >>> Create separate partitions for XFS log files: sdd1, sdd2, sdd3 each 150M > >>> -- 128MB is the maximum allowable XFS log size. > >>> The XFS "stripe unit" (su) = 512k to match the DS4700 segment size > >>> The "stripe width" ( (n-1)*sunit )= swidth=2048k = sw=4 (a multiple of > >>> su) > >>> 4k is the max block size allowable on x86_64 since 4k is the max kernel > >>> page size > >>> > >>> [root@~]# mkfs.xfs -l logdev=/dev/sdd1,size=128m -d su=512k -d sw=4 -f > >>> /dev/sdb > >>> [root@~]# mount -t xfs -o > >>> context=system_u:object_r:unconfined_t,noatime,nodiratime,logbufs=8,logd ev=/dev/sdd1 > >>> /dev/sdb /data0 > >>> > >>> And the write performance is lousy compared to ext3 built like so: > >>> [root@~]# mke2fs -j -m 1 -b4096 -E stride=128 /dev/sdc > >>> [root@~]# mount -t ext3 -o > >>> noatime,nodiratime,context="system_u:object_r:unconfined_t:s0",reservati on > >>> /dev/sdc /data1 > >>> > >>> What am I missing? > >>> > >>> Thanks! > >>> > >>> -- > >>> Robert Petkus > >>> RHIC/USATLAS Computing Facility > >>> Brookhaven National Laboratory > >>> Physics Dept. - Bldg. 510A > >>> Upton, New York 11973 > >>> > >>> http://www.bnl.gov/RHIC > >>> http://www.acf.bnl.gov > >>> > >>> > >> > >> What speeds are you getting? 
> > dd if=/dev/zero of=/data0/bigfile bs=1024k count=5000 > > 5242880000 bytes (5.2 GB) copied, 149.296 seconds, 35.1 MB/s > > > > dd if=/data0/bigfile of=/dev/null bs=1024k count=5000 > > 5242880000 bytes (5.2 GB) copied, 26.3148 seconds, 199 MB/s > > > > iozone.linux -w -r 1m -s 1g -i0 -t 4 -e -w -f /data0/test1 > > Children see throughput for 4 initial writers = 28528.59 KB/sec > > Parent sees throughput for 4 initial writers = 25212.79 KB/sec > > Min throughput per process = 6259.05 KB/sec > > Max throughput per process = 7548.29 KB/sec > > Avg throughput per process = 7132.15 KB/sec > > > > iozone.linux -w -r 1m -s 1g -i1 -t 4 -e -w -f /data0/test1 > > Children see throughput for 4 readers = 3059690.19 KB/sec > > Parent sees throughput for 4 readers = 3055307.71 KB/sec > > Min throughput per process = 757151.81 KB/sec > > Max throughput per process = 776032.62 KB/sec > > Avg throughput per process = 764922.55 KB/sec > > > >> > >> Have you tried a SW RAID with the 16 drives, if you do that, XFS will > >> auto-optimize per the physical characteristics of the md array. > > No because this would waste an expensive disk array. I've done this with > > various JBODs, even a SUN Thumper, with OK results... > >> > >> Also, most of those mount options besides the logdev/noatime don't do much > >> with XFS from my personal benchmarks, you're better off with the > >> defaults+noatime. > > The security context stuff is in there since I run a strict SELinux policy. > > Otherwise, I need logdev since it's on a different disk. BTW, the same > > filesystem w/out a separate log disk made no difference in performance. > >> > >> What speed are you getting reads/writes, what do you expect? How are the > >> drives attached/what type of controller? PCI? > > I can get ~3x write performance with ext3. I have a dual-port FC-4 PCIe HBA > > connected to (2) IBM DS4700 FC-4 controllers. There is lots of headroom. 
> > > > -- > > Robert Petkus > > RHIC/USATLAS Computing Facility > > Brookhaven National Laboratory > > Physics Dept. - Bldg. 510A > > Upton, New York 11973 > > > > http://www.bnl.gov/RHIC > > http://www.acf.bnl.gov > > > > > > EXT3 up to 3x fast? Hrm.. Have you tried default mkfs.xfs options > [internal journal]? What write speed do you get using the defaults? > > What kernel version? > > Justin. > Not sure if it makes much sense to set stripe unit and width for a Raid which appears as a single device. As you state, the "width" of your DS lun is 4 x 512K == 2MB. In case you don't have write cache enabled each of your 1MB writes will cause the DS to write to two out of four disks only, causing heavy overhead to create parity. Write cache mirroring on the DS also causes limitation in write performance. And finally there is an option in the DS to change the cache segment size from 16k default to 4k IIRC. Make sure it is set to 16k. But still, 35MB/s for a single sequential write is really poor. Almost looks like you get single spindle performance only. 
Sebastian From owner-xfs@oss.sgi.com Sun Jun 24 22:36:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 22:36:13 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P5a6do013279 for ; Sun, 24 Jun 2007 22:36:07 -0700 Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 79A5318011EB8 for ; Mon, 25 Jun 2007 00:36:07 -0500 (CDT) Message-ID: <467F5447.5080109@sandeen.net> Date: Mon, 25 Jun 2007 00:36:07 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.0 (Macintosh/20070326) MIME-Version: 1.0 To: xfs-oss Subject: is this thing on... Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11885 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs testing... noticed no email on list for 5 days. Hope this gets through. 
From owner-xfs@oss.sgi.com Sun Jun 24 22:45:30 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 22:45:33 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P5jTdo017156 for ; Sun, 24 Jun 2007 22:45:30 -0700 Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 922C518011EA0 for ; Mon, 25 Jun 2007 00:19:15 -0500 (CDT) Message-ID: <467F5053.4040108@sandeen.net> Date: Mon, 25 Jun 2007 00:19:15 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.0 (Macintosh/20070326) MIME-Version: 1.0 To: xfs-oss Subject: [PATCH] simplify vnode tracing calls Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11887 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Don't think I've sent this one yet... 
:) linux-2.4/xfs_aops.c | 2 +- linux-2.4/xfs_ioctl.c | 2 +- linux-2.4/xfs_super.c | 6 +++--- linux-2.4/xfs_vnode.c | 4 ++-- linux-2.4/xfs_vnode.h | 11 ++++++----- linux-2.6/xfs_aops.c | 2 +- linux-2.6/xfs_ioctl.c | 2 +- linux-2.6/xfs_super.c | 6 +++--- linux-2.6/xfs_vnode.c | 4 ++-- linux-2.6/xfs_vnode.h | 11 ++++++----- xfs_iget.c | 4 ++-- xfs_rename.c | 4 ++-- xfs_utils.c | 2 +- xfs_vnodeops.c | 48 ++++++++++++++++++++++-------------------------- 14 files changed, 53 insertions(+), 55 deletions(-) Simplify vnode tracing calls by embedding function name & return addr in the calling macro. Signed-off-by: Eric Sandeen Index: linux/fs/xfs/linux-2.4/xfs_aops.c =================================================================== --- linux.orig/fs/xfs/linux-2.4/xfs_aops.c +++ linux/fs/xfs/linux-2.4/xfs_aops.c @@ -964,7 +964,7 @@ xfs_vm_bmap( struct inode *inode = (struct inode *)mapping->host; bhv_vnode_t *vp = vn_from_inode(inode); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); bhv_vop_rwlock(vp, VRWLOCK_READ); bhv_vop_flush_pages(vp, (xfs_off_t)0, -1, 0, FI_REMAPF); Index: linux/fs/xfs/linux-2.4/xfs_ioctl.c =================================================================== --- linux.orig/fs/xfs/linux-2.4/xfs_ioctl.c +++ linux/fs/xfs/linux-2.4/xfs_ioctl.c @@ -702,7 +702,7 @@ xfs_ioctl( vp = vn_from_inode(inode); - vn_trace_entry(vp, "xfs_ioctl", (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); ip = XFS_BHVTOI(bdp); mp = ip->i_mount; Index: linux/fs/xfs/linux-2.4/xfs_super.c =================================================================== --- linux.orig/fs/xfs/linux-2.4/xfs_super.c +++ linux/fs/xfs/linux-2.4/xfs_super.c @@ -374,7 +374,7 @@ xfs_fs_write_inode( int error, flags = FLUSH_INODE; if (vp) { - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); if (sync) flags |= FLUSH_SYNC; error = bhv_vop_iflush(vp, flags); @@ -389,7 +389,7 @@ xfs_fs_clear_inode( { bhv_vnode_t *vp = 
vn_from_inode(inode); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); XFS_STATS_INC(vn_rele); XFS_STATS_INC(vn_remove); @@ -948,7 +948,7 @@ xfs_fs_read_super( goto fail_vnrele; if (xfs_fs_start_syncd(vfsp)) goto fail_vnrele; - vn_trace_exit(rootvp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_EXIT(rootvp); kmem_free(args, sizeof(*args)); return sb; Index: linux/fs/xfs/linux-2.4/xfs_vnode.c =================================================================== --- linux.orig/fs/xfs/linux-2.4/xfs_vnode.c +++ linux/fs/xfs/linux-2.4/xfs_vnode.c @@ -65,7 +65,7 @@ vn_initialize( vp->v_trace = ktrace_alloc(VNODE_TRACE_SIZE, KM_SLEEP); #endif /* XFS_VNODE_TRACE */ - vn_trace_exit(vp, "vn_initialize", (inst_t *)__return_address); + VN_TRACE_EXIT(vp); return vp; } @@ -118,7 +118,7 @@ vn_revalidate( bhv_vattr_t va; int error; - vn_trace_entry(vp, "vn_revalidate", (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); ASSERT(VNHEAD(vp) != NULL); va.va_mask = XFS_AT_STAT|XFS_AT_XFLAGS; Index: linux/fs/xfs/linux-2.4/xfs_vnode.h =================================================================== --- linux.orig/fs/xfs/linux-2.4/xfs_vnode.h +++ linux/fs/xfs/linux-2.4/xfs_vnode.h @@ -572,15 +572,16 @@ extern void vn_trace_hold(struct bhv_vno extern void vn_trace_ref(struct bhv_vnode *, char *, int, inst_t *); extern void vn_trace_rele(struct bhv_vnode *, char *, int, inst_t *); -#define VN_TRACE(vp) \ - vn_trace_ref(vp, __FILE__, __LINE__, (inst_t *)__return_address) +#define VN_TRACE_ENTRY(vp) \ + vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address) +#define VN_TRACE_EXIT(vp) \ + vn_trace_exit(vp, __FUNCTION__, (inst_t *)__return_address) #else -#define vn_trace_entry(a,b,c) -#define vn_trace_exit(a,b,c) +#define VN_TRACE_ENTRY(a) +#define VN_TRACE_EXIT(a) #define vn_trace_hold(a,b,c,d) #define vn_trace_ref(a,b,c,d) #define vn_trace_rele(a,b,c,d) -#define VN_TRACE(vp) #endif #endif /* __XFS_VNODE_H__ */ Index: 
linux/fs/xfs/linux-2.6/xfs_aops.c =================================================================== --- linux.orig/fs/xfs/linux-2.6/xfs_aops.c +++ linux/fs/xfs/linux-2.6/xfs_aops.c @@ -1529,7 +1529,7 @@ xfs_vm_bmap( struct inode *inode = (struct inode *)mapping->host; bhv_vnode_t *vp = vn_from_inode(inode); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); bhv_vop_rwlock(vp, VRWLOCK_READ); bhv_vop_flush_pages(vp, (xfs_off_t)0, -1, 0, FI_REMAPF); bhv_vop_rwunlock(vp, VRWLOCK_READ); Index: linux/fs/xfs/linux-2.6/xfs_ioctl.c =================================================================== --- linux.orig/fs/xfs/linux-2.6/xfs_ioctl.c +++ linux/fs/xfs/linux-2.6/xfs_ioctl.c @@ -708,7 +708,7 @@ xfs_ioctl( vp = vn_from_inode(inode); - vn_trace_entry(vp, "xfs_ioctl", (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); ip = XFS_BHVTOI(bdp); mp = ip->i_mount; Index: linux/fs/xfs/linux-2.6/xfs_super.c =================================================================== --- linux.orig/fs/xfs/linux-2.6/xfs_super.c +++ linux/fs/xfs/linux-2.6/xfs_super.c @@ -415,7 +415,7 @@ xfs_fs_write_inode( int error = 0, flags = FLUSH_INODE; if (vp) { - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); if (sync) flags |= FLUSH_SYNC; error = bhv_vop_iflush(vp, flags); @@ -431,7 +431,7 @@ xfs_fs_clear_inode( { bhv_vnode_t *vp = vn_from_inode(inode); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); XFS_STATS_INC(vn_rele); XFS_STATS_INC(vn_remove); @@ -844,7 +844,7 @@ xfs_fs_fill_super( } if ((error = xfs_fs_start_syncd(vfsp))) goto fail_vnrele; - vn_trace_exit(rootvp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_EXIT(rootvp); kmem_free(args, sizeof(*args)); return 0; Index: linux/fs/xfs/linux-2.6/xfs_vnode.c =================================================================== --- linux.orig/fs/xfs/linux-2.6/xfs_vnode.c +++ linux/fs/xfs/linux-2.6/xfs_vnode.c @@ -99,7 +99,7 @@ 
vn_initialize( vp->v_trace = ktrace_alloc(VNODE_TRACE_SIZE, KM_SLEEP); #endif /* XFS_VNODE_TRACE */ - vn_trace_exit(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_EXIT(vp); return vp; } @@ -150,7 +150,7 @@ __vn_revalidate( { int error; - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); vattr->va_mask = XFS_AT_STAT | XFS_AT_XFLAGS; error = bhv_vop_getattr(vp, vattr, 0, NULL); if (likely(!error)) { Index: linux/fs/xfs/linux-2.6/xfs_vnode.h =================================================================== --- linux.orig/fs/xfs/linux-2.6/xfs_vnode.h +++ linux/fs/xfs/linux-2.6/xfs_vnode.h @@ -587,15 +587,16 @@ extern void vn_trace_hold(struct bhv_vno extern void vn_trace_ref(struct bhv_vnode *, char *, int, inst_t *); extern void vn_trace_rele(struct bhv_vnode *, char *, int, inst_t *); -#define VN_TRACE(vp) \ - vn_trace_ref(vp, __FILE__, __LINE__, (inst_t *)__return_address) +#define VN_TRACE_ENTRY(vp) \ + vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address) +#define VN_TRACE_EXIT(vp) \ + vn_trace_exit(vp, __FUNCTION__, (inst_t *)__return_address) #else -#define vn_trace_entry(a,b,c) -#define vn_trace_exit(a,b,c) +#define VN_TRACE_ENTRY(a) +#define VN_TRACE_EXIT(a) #define vn_trace_hold(a,b,c,d) #define vn_trace_ref(a,b,c,d) #define vn_trace_rele(a,b,c,d) -#define VN_TRACE(vp) #endif #endif /* __XFS_VNODE_H__ */ Index: linux/fs/xfs/xfs_iget.c =================================================================== --- linux.orig/fs/xfs/xfs_iget.c +++ linux/fs/xfs/xfs_iget.c @@ -629,7 +629,7 @@ xfs_iput(xfs_inode_t *ip, { bhv_vnode_t *vp = XFS_ITOV(ip); - vn_trace_entry(vp, "xfs_iput", (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); xfs_iunlock(ip, lock_flags); VN_RELE(vp); } @@ -644,7 +644,7 @@ xfs_iput_new(xfs_inode_t *ip, bhv_vnode_t *vp = XFS_ITOV(ip); struct inode *inode = vn_to_inode(vp); - vn_trace_entry(vp, "xfs_iput_new", (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); if ((ip->i_d.di_mode == 0)) { 
ASSERT(!xfs_iflags_test(ip, XFS_IRECLAIMABLE)); Index: linux/fs/xfs/xfs_rename.c =================================================================== --- linux.orig/fs/xfs/xfs_rename.c +++ linux/fs/xfs/xfs_rename.c @@ -249,8 +249,8 @@ xfs_rename( int target_namelen = VNAMELEN(target_vname); src_dir_vp = BHV_TO_VNODE(src_dir_bdp); - vn_trace_entry(src_dir_vp, "xfs_rename", (inst_t *)__return_address); - vn_trace_entry(target_dir_vp, "xfs_rename", (inst_t *)__return_address); + VN_TRACE_ENTRY(src_dir_vp); + VN_TRACE_ENTRY(target_dir_vp); /* * Find the XFS behavior descriptor for the target directory Index: linux/fs/xfs/xfs_utils.c =================================================================== --- linux.orig/fs/xfs/xfs_utils.c +++ linux/fs/xfs/xfs_utils.c @@ -76,7 +76,7 @@ xfs_dir_lookup_int( int error; dir_vp = BHV_TO_VNODE(dir_bdp); - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(dir_vp); dp = XFS_BHVTOI(dir_bdp); Index: linux/fs/xfs/xfs_vnodeops.c =================================================================== --- linux.orig/fs/xfs/xfs_vnodeops.c +++ linux/fs/xfs/xfs_vnodeops.c @@ -92,7 +92,7 @@ xfs_getattr( bhv_vnode_t *vp; vp = BHV_TO_VNODE(bdp); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); ip = XFS_BHVTOI(bdp); mp = ip->i_mount; @@ -237,7 +237,7 @@ xfs_setattr( int need_iolock = 1; vp = BHV_TO_VNODE(bdp); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); if (vp->v_vfsp->vfs_flag & VFS_RDONLY) return XFS_ERROR(EROFS); @@ -907,8 +907,7 @@ xfs_access( xfs_inode_t *ip; int error; - vn_trace_entry(BHV_TO_VNODE(bdp), __FUNCTION__, - (inst_t *)__return_address); + VN_TRACE_ENTRY(BHV_TO_VNODE(bdp)); ip = XFS_BHVTOI(bdp); xfs_ilock(ip, XFS_ILOCK_SHARED); @@ -951,7 +950,7 @@ xfs_readlink( xfs_buf_t *bp; vp = BHV_TO_VNODE(bdp); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); ip = XFS_BHVTOI(bdp); mp = ip->i_mount; 
@@ -1046,8 +1045,7 @@ xfs_fsync( int error; int log_flushed = 0, changed = 1; - vn_trace_entry(BHV_TO_VNODE(bdp), - __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(BHV_TO_VNODE(bdp)); ip = XFS_BHVTOI(bdp); @@ -1601,7 +1599,7 @@ xfs_inactive( int truncate; vp = BHV_TO_VNODE(bdp); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); ip = XFS_BHVTOI(bdp); @@ -1825,7 +1823,7 @@ xfs_lookup( bhv_vnode_t *dir_vp; dir_vp = BHV_TO_VNODE(dir_bdp); - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(dir_vp); dp = XFS_BHVTOI(dir_bdp); @@ -1876,7 +1874,7 @@ xfs_create( ASSERT(!*vpp); dir_vp = BHV_TO_VNODE(dir_bdp); - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(dir_vp); dp = XFS_BHVTOI(dir_bdp); mp = dp->i_mount; @@ -2370,7 +2368,7 @@ xfs_remove( int namelen; dir_vp = BHV_TO_VNODE(dir_bdp); - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(dir_vp); dp = XFS_BHVTOI(dir_bdp); mp = dp->i_mount; @@ -2416,7 +2414,7 @@ xfs_remove( dm_di_mode = ip->i_d.di_mode; - vn_trace_entry(XFS_ITOV(ip), __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(XFS_ITOV(ip)); ITRACE(ip); @@ -2541,7 +2539,7 @@ xfs_remove( */ xfs_refcache_purge_ip(ip); - vn_trace_exit(XFS_ITOV(ip), __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_EXIT(XFS_ITOV(ip)); /* * Let interposed file systems know about removed links. @@ -2618,8 +2616,8 @@ xfs_link( int target_namelen; target_dir_vp = BHV_TO_VNODE(target_dir_bdp); - vn_trace_entry(target_dir_vp, __FUNCTION__, (inst_t *)__return_address); - vn_trace_entry(src_vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(target_dir_vp); + VN_TRACE_ENTRY(src_vp); target_namelen = VNAMELEN(dentry); ASSERT(!VN_ISDIR(src_vp)); @@ -2818,7 +2816,7 @@ xfs_mkdir( /* Return through std_return after this point. 
*/ - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(dir_vp); mp = dp->i_mount; udqp = gdqp = NULL; @@ -3023,7 +3021,7 @@ xfs_rmdir( dp = XFS_BHVTOI(dir_bdp); mp = dp->i_mount; - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(dir_vp); if (XFS_FORCED_SHUTDOWN(XFS_BHVTOI(dir_bdp)->i_mount)) return XFS_ERROR(EIO); @@ -3259,8 +3257,7 @@ xfs_readdir( int error = 0; uint lock_mode; - vn_trace_entry(BHV_TO_VNODE(dir_bdp), __FUNCTION__, - (inst_t *)__return_address); + VN_TRACE_ENTRY(BHV_TO_VNODE(dir_bdp)); dp = XFS_BHVTOI(dir_bdp); if (XFS_FORCED_SHUTDOWN(dp->i_mount)) @@ -3317,7 +3314,7 @@ xfs_symlink( ip = NULL; tp = NULL; - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(dir_vp); mp = dp->i_mount; @@ -3608,8 +3605,7 @@ xfs_fid2( xfs_inode_t *ip; xfs_fid2_t *xfid; - vn_trace_entry(BHV_TO_VNODE(bdp), __FUNCTION__, - (inst_t *)__return_address); + VN_TRACE_ENTRY(BHV_TO_VNODE(bdp)); ASSERT(sizeof(fid_t) >= sizeof(xfs_fid2_t)); xfid = (xfs_fid2_t *)fidp; @@ -3821,7 +3817,7 @@ xfs_reclaim( vp = BHV_TO_VNODE(bdp); ip = XFS_BHVTOI(bdp); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); ASSERT(!VN_MAPPED(vp)); @@ -4037,7 +4033,7 @@ xfs_alloc_file_space( int committed; int error; - vn_trace_entry(XFS_ITOV(ip), __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(XFS_ITOV(ip)); if (XFS_FORCED_SHUTDOWN(mp)) return XFS_ERROR(EIO); @@ -4308,7 +4304,7 @@ xfs_free_file_space( vp = XFS_ITOV(ip); mp = ip->i_mount; - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); if ((error = XFS_QM_DQATTACH(mp, ip, 0))) return error; @@ -4514,7 +4510,7 @@ xfs_change_file_space( bhv_vnode_t *vp; vp = BHV_TO_VNODE(bdp); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + VN_TRACE_ENTRY(vp); ip = XFS_BHVTOI(bdp); mp = ip->i_mount; From owner-xfs@oss.sgi.com Sun Jun 24 22:45:30 2007 Received: with ECARTIS 
(v1.0.0; list xfs); Sun, 24 Jun 2007 22:45:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=5.0 tests=AWL,BAYES_80,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P5jTdo017155 for ; Sun, 24 Jun 2007 22:45:30 -0700 Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 9D2B018011E91 for ; Sun, 24 Jun 2007 23:53:03 -0500 (CDT) Message-ID: <467F4A2F.2060101@sandeen.net> Date: Sun, 24 Jun 2007 23:53:03 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.0 (Macintosh/20070326) MIME-Version: 1.0 To: xfs-oss Subject: Re: [PATCH] remove hard-coded fnames from tracing functions References: <467F48B9.7060504@sandeen.net> In-Reply-To: <467F48B9.7060504@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11886 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Eric Sandeen wrote: > This has been on my stack a while, still compiles clean but feel free > to double-check. :-) > > xfs_alloc.c | 53 ++-------- > xfs_bmap.c | 266 +++++++++++++++++++++++-------------------------------- > xfs_bmap.h | 6 - > xfs_bmap_btree.c | 88 +++--------------- > xfs_inode.c | 8 - > 5 files changed, 149 insertions(+), 272 deletions(-) > > --------------- > > Remove the hardcoded "fnames" for tracing, and just embed > them in tracing macros via __FUNCTION__. Kills a lot of #ifdefs > too. Hm... 
guess I did send this one already, but don't see it in cvs... Dave, is it in your stack yet? -Eric From owner-xfs@oss.sgi.com Sun Jun 24 22:45:45 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 22:45:47 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5P5jZds017242 for ; Sun, 24 Jun 2007 22:45:44 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA23631; Mon, 25 Jun 2007 09:22:51 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5ONMnAf128547976; Mon, 25 Jun 2007 09:22:50 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5ONMkct133397263; Mon, 25 Jun 2007 09:22:46 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 25 Jun 2007 09:22:46 +1000 From: David Chinner To: Michal Piotrowski Cc: Oliver Pinter , linux-kernel , xfs@oss.sgi.com Subject: Re: [REGRESSION 2.6-git] possible circular locking dependency detected with XFS Message-ID: <20070624232246.GF86004887@sgi.com> References: <6101e8c40706221340k65f15957k39a04193cb6e7c01@mail.gmail.com> <6bffcb0e0706221553s3a74ef58hcadc69bfa252283@mail.gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <6bffcb0e0706221553s3a74ef58hcadc69bfa252283@mail.gmail.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11891 X-ecartis-version: Ecartis 
v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Sat, Jun 23, 2007 at 12:53:11AM +0200, Michal Piotrowski wrote: > Hi Oliver, > > On 22/06/07, Oliver Pinter wrote: > >Hi all! > > > >I found this info: > > > >======================================================= [ INFO: possible > >circular locking dependency detected ] 2.6.22-rc5-wifi1 #2 > >------------------------------------------------------- mount/2209 is > >trying to acquire lock: (&(&ip->i_lock)->mr_lock/1){--..}, at: [] > >xfs_ilock+0x66/0x90 > > > >but task is already holding lock: (&(&ip->i_lock)->mr_lock){----}, at: > >[] xfs_ilock+0x66/0x90 > > > > AFAIR it is not a regression. It is a known bug (harmless). FWIW, it's not even a bug. The bug (if any) is due to the fact we can't properly express the XFS locking rules with lockdep. We recently added a bunch of notations that fixed the common false positives we were seeing, but as a result, it appears we now have a whole new set of false positive reports coming in that are even harder to fix. As Christoph Hellwig has previously noted, the correct way to fix this in XFS is to completely change the locking within XFS directory operations to do strict parent/child locking like the VFS does. Unfortunately, that's not as simple as it sounds, because inode flushing and log tail pushing rely on inodes being locked in ascending inode order to prevent deadlocks within XFS. That means when we lock multiple inodes in link, rename, etc, we have to lock them in ascending order. The exception to this is create, mkdir, mknod because the newly created inode will not be locked by definition so it is always safe to lock it. Hence if the new inode's number is less than the parent inode's number we can get lockdep warning about circular locking dependencies which don't actually exist. That is where this warning is coming from.... Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Jun 24 22:45:39 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 22:45:42 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50,J_CHICKENPOX_45 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5P5jZdo017242 for ; Sun, 24 Jun 2007 22:45:38 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA29416; Mon, 25 Jun 2007 13:53:17 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5P3rGAf133550363; Mon, 25 Jun 2007 13:53:16 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5P3rFZk133127056; Mon, 25 Jun 2007 13:53:15 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 25 Jun 2007 13:53:15 +1000 From: David Chinner To: Timothy Shimmin Cc: David Chinner , xfs-dev , xfs-oss Subject: Re: Review: Multi-File Data Streams V2 Message-ID: <20070625035315.GL86004887@sgi.com> References: <20070613041629.GI86004887@sgi.com> <467B8BFA.2050107@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <467B8BFA.2050107@sgi.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11889 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Jun 22, 2007 at 06:44:42PM 
+1000, Timothy Shimmin wrote:
> Hi Dave,
>
> For the xfs_bmap.c/xfs_bmap_btalloc()
>
> *
> Might be clearer something like this:
> ------------------
> if (nullfb) {
>         if (ap->userdata && xfs_inode_is_filestream(ap->ip)) {
>                 ag = xfs_filestream_lookup_ag(ap->ip);
>                 ag = (ag != NULLAGNUMBER) ? ag : 0;
>                 ap->rval = XFS_AGB_TO_FSB(mp, ag, 0);
>         } else {
>                 ap->rval = XFS_INO_TO_FSB(mp, ap->ip->i_ino);
>         }
> } else
>         ap->rval = ap->firstblock;
> -------------------
> Unless we need "ag" set for the non-userdata && filestream case.
> I think Barry was questioning this today.

ag gets overwritten from args.fsbno, which is set up from ap->rval, which comes from XFS_AGB_TO_FSB(mp, ag, 0), which means that ag is not needed outside this check. So yes, it would probably be cleaner.

> *
> It is interesting that at the start we set up the fsb
> for (userdata & filestreams) and then in a bunch of other places
> it tests just for filestreams - although, there is one spot further
> down which also tests for userdata.

Yes, because the userdata case you refer to is the need to select a new stream ag - failing to allocate metadata, which may be in a different AG, shouldn't cause the data stream to move. In other cases, we check for filestreams to set up the fallbacks on failure correctly. Those are the same for data and metadata for filestreams.

> I find this a bit confusing (as usual:) - I thought we were only interested
> in changing the allocation of userdata for the filestream.

Mostly, yes, but metadata and data tend to be allocated with the same locality principles, which filestreams does not follow. Hence there are small differences in the way we treat them.

> *
> As we talked about before, this code seems to come up in a few places:
>
>         need = XFS_MIN_FREELIST_PAG(pag, mp);
>         delta = need > pag->pagf_flcount ?
>                 need - pag->pagf_flcount : 0;
>         longest = (pag->pagf_longest > delta) ?
>                 (pag->pagf_longest - delta) :
>                 (pag->pagf_flcount > 0 ||
>                  pag->pagf_longest > 0);
>
> Perhaps we could macroize/inline-function it?

Sure. I'll do that as a separate patch, though.

> It confused me in _xfs_filestream_pick_ag() when I was trying
> to understand it and so could do with a comment for it too.
> As I said then, I don't like the way it uses a boolean as
> the number of blocks, in the case when the longest extent
> is smaller than the excess over the freelist which
> the freespace-btree-splits-overhead needs.

Actually, the logic statement is correct. If we have a delta greater than the longest extent, we cannot find out what the next longest extent is without searching the btree. Hence we assume that the longest extent is a single block, which means that if we have free extents in the tree (pag->pagf_longest > 0) or blocks in the freelist (pag->pagf_flcount > 0), we are guaranteed to be able to allocate a single block if there is space available. So the logic is:

        longest = 0;
        if pag->pagf_longest > delta
                longest = pag->pagf_longest - delta;
        else if pag->pagf_flcount > 0
                longest = 1;
        else if pag->pagf_longest > 0
                longest = 1;

And the above is simply more compact.

> Also, the variables "need" and "delta" look pretty local to it.

*nod*

Cheers,

Dave.
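[Editor's note: the inline-function factoring Dave agrees to do "as a separate patch" could look roughly like the sketch below. This is only an illustration with stand-in types, not the actual patch: the hypothetical `struct pagf` and `longest_free_extent()` names are invented here, and the real helper would take the xfs_perag/xfs_mount structures and derive `need` via XFS_MIN_FREELIST_PAG() itself.]

```c
/*
 * Sketch of an inlined helper for the duplicated "longest usable free
 * extent" computation. Stand-in types; names are hypothetical.
 */
typedef unsigned int xfs_extlen_t;

struct pagf {
	xfs_extlen_t	pagf_flcount;	/* blocks on the AG freelist */
	xfs_extlen_t	pagf_longest;	/* longest free extent in the AG */
};

static inline xfs_extlen_t
longest_free_extent(struct pagf *pag, xfs_extlen_t need)
{
	xfs_extlen_t	delta = 0;

	/* blocks the freelist still needs to reach its minimum length */
	if (need > pag->pagf_flcount)
		delta = need - pag->pagf_flcount;

	/* common case: longest extent minus what the freelist may consume */
	if (pag->pagf_longest > delta)
		return pag->pagf_longest - delta;

	/*
	 * Otherwise the next-longest extent is unknown without a btree
	 * search; if any free space exists at all (in the by-size btree or
	 * on the freelist), a single block can still be allocated.
	 */
	return pag->pagf_flcount > 0 || pag->pagf_longest > 0;
}
```

This keeps the boolean-as-block-count trick Tim objected to, but makes it explicit in a comment, which is what the discussion above settles on.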
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Jun 24 22:45:42 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 22:45:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5P5jZdq017242 for ; Sun, 24 Jun 2007 22:45:41 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA28141; Mon, 25 Jun 2007 12:47:50 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5P2lnAf133450949; Mon, 25 Jun 2007 12:47:49 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5P2lkoX132324366; Mon, 25 Jun 2007 12:47:46 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 25 Jun 2007 12:47:46 +1000 From: David Chinner To: Michael Nishimoto Cc: xfs@oss.sgi.com Subject: Re: Reducing memory requirements for high extent xfs files Message-ID: <20070625024746.GJ86004887@sgi.com> References: <200705301649.l4UGnckA027406@oss.sgi.com> <20070530225516.GB85884050@sgi.com> <4665E276.9020406@agami.com> <20070606013601.GR86004887@sgi.com> <4666EC56.9000606@agami.com> <20070606234723.GC86004887@sgi.com> <467C620E.4050005@agami.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <467C620E.4050005@agami.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11890 X-ecartis-version: Ecartis 
v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

On Fri, Jun 22, 2007 at 04:58:06PM -0700, Michael Nishimoto wrote:
> > > Also, should we consider a file with 1MB extents as
> > > fragmented? A 100GB file with 1MB extents has 100k extents.
> >
> > Yes, that's fragmented - it has 4 orders of magnitude more extents
> > than optimal - and the extents are too small to allow reads or
> > writes to achieve full bandwidth on high end raid configs....
>
> Fair enough, so multiply those numbers by 100 -- a 10TB file with 100MB
> extents.

If you've got 10TB of free space, the allocator should be doing a better job than that ;)

> It seems to me that we can look at the negative effects of
> fragmentation in two ways here. First, (regardless of size) if a file
> has a large number of extents, then it is too fragmented. Second, if
> a file's extents are so small that we can't get full bandwidth, then it
> is too fragmented.

Yes, that is a fair observation. The first case is really only a concern when the maximum extent size (8GB on a 4k fsb) becomes the limiting factor. That's at file sizes in the hundreds of TB, so we are not really in trouble there yet.

> If the second case were of primary concern, then it would be reasonable
> to have 1000s of extents as long as each of the extents were big enough
> to amortize disk latencies across a large amount of data.

*nod*

> We've been assuming that a good write is one which can send
> 2MB of data to a single drive; so with an 8+1 raid device, we need
> 16MB of write data to achieve high disk utilization.

Sure, and if you want really good write performance, you don't want any seek between two lots of 16MB in the one file, which means that the extent size really needs to be much larger than 16MB....

> In particular,
> there are flexibility advantages if high extent count files can
> still achieve good performance.

Sure.
But there are many, many different options here that will have an impact:

- larger extent btree block size - reduces seeks to read the tree
- btree defragmentation to reduce seek distance
- smarter readahead to reduce I/O latency
- special casing extent zero and the extents in that first block to allow
  it to be brought in without the rest of the tree
- critical block first retrieval
- demand paging

Of all of these options, demand paging is the most complex and intrusive of the solutions. We should explore the simpler options first to determine if they will solve your immediate problem.

FWIW, before we go changing any btree code, we really should be unifying the various btree implementations in XFS.....

Cheers,

Dave.
-- 
Dave Chinner Principal Engineer SGI Australian Software Group

From owner-xfs@oss.sgi.com Sun Jun 24 22:45:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 22:45:35 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.4 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_45, J_CHICKENPOX_61,J_CHICKENPOX_62,J_CHICKENPOX_63,J_CHICKENPOX_65,SPF_HELO_PASS, URIBL_RHS_TLD_WHOIS autolearn=no version=3.2.0-pre1-r499012 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P5jTdo017157 for ; Sun, 24 Jun 2007 22:45:30 -0700 Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 4B24018011E86 for ; Sun, 24 Jun 2007 23:46:50 -0500 (CDT) Message-ID: <467F48B9.7060504@sandeen.net> Date: Sun, 24 Jun 2007 23:46:49 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.0 (Macintosh/20070326) MIME-Version: 1.0 To: xfs-oss Subject: [PATCH] remove hard-coded fnames from tracing functions Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit
X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11888 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs This has been on my stack a while, still compiles clean but feel free to double-check. :-) xfs_alloc.c | 53 ++-------- xfs_bmap.c | 266 +++++++++++++++++++++++-------------------------------- xfs_bmap.h | 6 - xfs_bmap_btree.c | 88 +++--------------- xfs_inode.c | 8 - 5 files changed, 149 insertions(+), 272 deletions(-) --------------- Remove the hardcoded "fnames" for tracing, and just embed them in tracing macros via __FUNCTION__. Kills a lot of #ifdefs too. Signed-off-by: Eric Sandeen Index: linux/fs/xfs/xfs_bmap_btree.c =================================================================== --- linux.orig/fs/xfs/xfs_bmap_btree.c +++ linux/fs/xfs/xfs_bmap_btree.c @@ -76,7 +76,7 @@ static char EXIT[] = "exit"; */ STATIC void xfs_bmbt_trace_enter( - char *func, + const char *func, xfs_btree_cur_t *cur, char *s, int type, @@ -117,7 +117,7 @@ xfs_bmbt_trace_enter( */ STATIC void xfs_bmbt_trace_argbi( - char *func, + const char *func, xfs_btree_cur_t *cur, xfs_buf_t *b, int i, @@ -134,7 +134,7 @@ xfs_bmbt_trace_argbi( */ STATIC void xfs_bmbt_trace_argbii( - char *func, + const char *func, xfs_btree_cur_t *cur, xfs_buf_t *b, int i0, @@ -153,7 +153,7 @@ xfs_bmbt_trace_argbii( */ STATIC void xfs_bmbt_trace_argfffi( - char *func, + const char *func, xfs_btree_cur_t *cur, xfs_dfiloff_t o, xfs_dfsbno_t b, @@ -172,7 +172,7 @@ xfs_bmbt_trace_argfffi( */ STATIC void xfs_bmbt_trace_argi( - char *func, + const char *func, xfs_btree_cur_t *cur, int i, int line) @@ -188,7 +188,7 @@ xfs_bmbt_trace_argi( */ STATIC void xfs_bmbt_trace_argifk( - char *func, + const char *func, xfs_btree_cur_t *cur, int i, xfs_fsblock_t f, @@ -206,7 +206,7 @@ xfs_bmbt_trace_argifk( */ STATIC void 
xfs_bmbt_trace_argifr( - char *func, + const char *func, xfs_btree_cur_t *cur, int i, xfs_fsblock_t f, @@ -235,7 +235,7 @@ xfs_bmbt_trace_argifr( */ STATIC void xfs_bmbt_trace_argik( - char *func, + const char *func, xfs_btree_cur_t *cur, int i, xfs_bmbt_key_t *k, @@ -255,7 +255,7 @@ xfs_bmbt_trace_argik( */ STATIC void xfs_bmbt_trace_cursor( - char *func, + const char *func, xfs_btree_cur_t *cur, char *s, int line) @@ -274,21 +274,21 @@ xfs_bmbt_trace_cursor( } #define XFS_BMBT_TRACE_ARGBI(c,b,i) \ - xfs_bmbt_trace_argbi(fname, c, b, i, __LINE__) + xfs_bmbt_trace_argbi(__FUNCTION__, c, b, i, __LINE__) #define XFS_BMBT_TRACE_ARGBII(c,b,i,j) \ - xfs_bmbt_trace_argbii(fname, c, b, i, j, __LINE__) + xfs_bmbt_trace_argbii(__FUNCTION__, c, b, i, j, __LINE__) #define XFS_BMBT_TRACE_ARGFFFI(c,o,b,i,j) \ - xfs_bmbt_trace_argfffi(fname, c, o, b, i, j, __LINE__) + xfs_bmbt_trace_argfffi(__FUNCTION__, c, o, b, i, j, __LINE__) #define XFS_BMBT_TRACE_ARGI(c,i) \ - xfs_bmbt_trace_argi(fname, c, i, __LINE__) + xfs_bmbt_trace_argi(__FUNCTION__, c, i, __LINE__) #define XFS_BMBT_TRACE_ARGIFK(c,i,f,s) \ - xfs_bmbt_trace_argifk(fname, c, i, f, s, __LINE__) + xfs_bmbt_trace_argifk(__FUNCTION__, c, i, f, s, __LINE__) #define XFS_BMBT_TRACE_ARGIFR(c,i,f,r) \ - xfs_bmbt_trace_argifr(fname, c, i, f, r, __LINE__) + xfs_bmbt_trace_argifr(__FUNCTION__, c, i, f, r, __LINE__) #define XFS_BMBT_TRACE_ARGIK(c,i,k) \ - xfs_bmbt_trace_argik(fname, c, i, k, __LINE__) + xfs_bmbt_trace_argik(__FUNCTION__, c, i, k, __LINE__) #define XFS_BMBT_TRACE_CURSOR(c,s) \ - xfs_bmbt_trace_cursor(fname, c, s, __LINE__) + xfs_bmbt_trace_cursor(__FUNCTION__, c, s, __LINE__) #else #define XFS_BMBT_TRACE_ARGBI(c,b,i) #define XFS_BMBT_TRACE_ARGBII(c,b,i,j) @@ -318,9 +318,6 @@ xfs_bmbt_delrec( xfs_fsblock_t bno; /* fs-relative block number */ xfs_buf_t *bp; /* buffer for block */ int error; /* error return value */ -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_delrec"; -#endif int i; /* loop counter */ int j; 
/* temp state */ xfs_bmbt_key_t key; /* bmap btree key */ @@ -694,9 +691,6 @@ xfs_bmbt_insrec( xfs_bmbt_block_t *block; /* bmap btree block */ xfs_buf_t *bp; /* buffer for block */ int error; /* error return value */ -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_insrec"; -#endif int i; /* loop index */ xfs_bmbt_key_t key; /* bmap btree key */ xfs_bmbt_key_t *kp=NULL; /* pointer to bmap btree key */ @@ -881,9 +875,6 @@ xfs_bmbt_killroot( #ifdef DEBUG int error; #endif -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_killroot"; -#endif int i; xfs_bmbt_key_t *kp; xfs_inode_t *ip; @@ -973,9 +964,6 @@ xfs_bmbt_log_keys( int kfirst, int klast) { -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_log_keys"; -#endif xfs_trans_t *tp; XFS_BMBT_TRACE_CURSOR(cur, ENTRY); @@ -1012,9 +1000,6 @@ xfs_bmbt_log_ptrs( int pfirst, int plast) { -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_log_ptrs"; -#endif xfs_trans_t *tp; XFS_BMBT_TRACE_CURSOR(cur, ENTRY); @@ -1055,9 +1040,6 @@ xfs_bmbt_lookup( xfs_daddr_t d; xfs_sfiloff_t diff; int error; /* error return value */ -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_lookup"; -#endif xfs_fsblock_t fsbno=0; int high; int i; @@ -1195,9 +1177,6 @@ xfs_bmbt_lshift( int *stat) /* success/failure */ { int error; /* error return value */ -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_lshift"; -#endif #ifdef DEBUG int i; /* loop counter */ #endif @@ -1331,9 +1310,6 @@ xfs_bmbt_rshift( int *stat) /* success/failure */ { int error; /* error return value */ -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_rshift"; -#endif int i; /* loop counter */ xfs_bmbt_key_t key; /* bmap btree key */ xfs_buf_t *lbp; /* left buffer pointer */ @@ -1492,9 +1468,6 @@ xfs_bmbt_split( { xfs_alloc_arg_t args; /* block allocation args */ int error; /* error return value */ -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_split"; -#endif int i; /* loop counter */ xfs_fsblock_t lbno; /* left sibling 
block number */ xfs_buf_t *lbp; /* left buffer pointer */ @@ -1641,9 +1614,6 @@ xfs_bmbt_updkey( #ifdef DEBUG int error; #endif -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_updkey"; -#endif xfs_bmbt_key_t *kp; int ptr; @@ -1712,9 +1682,6 @@ xfs_bmbt_decrement( xfs_bmbt_block_t *block; xfs_buf_t *bp; int error; /* error return value */ -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_decrement"; -#endif xfs_fsblock_t fsbno; int lev; xfs_mount_t *mp; @@ -1785,9 +1752,6 @@ xfs_bmbt_delete( int *stat) /* success/failure */ { int error; /* error return value */ -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_delete"; -#endif int i; int level; @@ -2000,9 +1964,6 @@ xfs_bmbt_increment( xfs_bmbt_block_t *block; xfs_buf_t *bp; int error; /* error return value */ -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_increment"; -#endif xfs_fsblock_t fsbno; int lev; xfs_mount_t *mp; @@ -2080,9 +2041,6 @@ xfs_bmbt_insert( int *stat) /* success/failure */ { int error; /* error return value */ -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_insert"; -#endif int i; int level; xfs_fsblock_t nbno; @@ -2142,9 +2100,6 @@ xfs_bmbt_log_block( int fields) { int first; -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_log_block"; -#endif int last; xfs_trans_t *tp; static const short offsets[] = { @@ -2181,9 +2136,6 @@ xfs_bmbt_log_recs( { xfs_bmbt_block_t *block; int first; -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_log_recs"; -#endif int last; xfs_bmbt_rec_t *rp; xfs_trans_t *tp; @@ -2245,9 +2197,6 @@ xfs_bmbt_newroot( xfs_bmbt_key_t *ckp; /* child key pointer */ xfs_bmbt_ptr_t *cpp; /* child ptr pointer */ int error; /* error return code */ -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_newroot"; -#endif #ifdef DEBUG int i; /* loop counter */ #endif @@ -2630,9 +2579,6 @@ xfs_bmbt_update( xfs_bmbt_block_t *block; xfs_buf_t *bp; int error; -#ifdef XFS_BMBT_TRACE - static char fname[] = "xfs_bmbt_update"; -#endif 
xfs_bmbt_key_t key; int ptr; xfs_bmbt_rec_t *rp; Index: linux/fs/xfs/xfs_alloc.c =================================================================== --- linux.orig/fs/xfs/xfs_alloc.c +++ linux/fs/xfs/xfs_alloc.c @@ -55,17 +55,17 @@ xfs_alloc_search_busy(xfs_trans_t *tp, ktrace_t *xfs_alloc_trace_buf; #define TRACE_ALLOC(s,a) \ - xfs_alloc_trace_alloc(fname, s, a, __LINE__) + xfs_alloc_trace_alloc(__FUNCTION__, s, a, __LINE__) #define TRACE_FREE(s,a,b,x,f) \ - xfs_alloc_trace_free(fname, s, mp, a, b, x, f, __LINE__) + xfs_alloc_trace_free(__FUNCTION__, s, mp, a, b, x, f, __LINE__) #define TRACE_MODAGF(s,a,f) \ - xfs_alloc_trace_modagf(fname, s, mp, a, f, __LINE__) -#define TRACE_BUSY(fname,s,ag,agb,l,sl,tp) \ - xfs_alloc_trace_busy(fname, s, mp, ag, agb, l, sl, tp, XFS_ALLOC_KTRACE_BUSY, __LINE__) -#define TRACE_UNBUSY(fname,s,ag,sl,tp) \ - xfs_alloc_trace_busy(fname, s, mp, ag, -1, -1, sl, tp, XFS_ALLOC_KTRACE_UNBUSY, __LINE__) -#define TRACE_BUSYSEARCH(fname,s,ag,agb,l,sl,tp) \ - xfs_alloc_trace_busy(fname, s, mp, ag, agb, l, sl, tp, XFS_ALLOC_KTRACE_BUSYSEARCH, __LINE__) + xfs_alloc_trace_modagf(__FUNCTION__, s, mp, a, f, __LINE__) +#define TRACE_BUSY(__FUNCTION__,s,ag,agb,l,sl,tp) \ + xfs_alloc_trace_busy(__FUNCTION__, s, mp, ag, agb, l, sl, tp, XFS_ALLOC_KTRACE_BUSY, __LINE__) +#define TRACE_UNBUSY(__FUNCTION__,s,ag,sl,tp) \ + xfs_alloc_trace_busy(__FUNCTION__, s, mp, ag, -1, -1, sl, tp, XFS_ALLOC_KTRACE_UNBUSY, __LINE__) +#define TRACE_BUSYSEARCH(__FUNCTION__,s,ag,agb,l,sl,tp) \ + xfs_alloc_trace_busy(__FUNCTION__, s, mp, ag, agb, l, sl, tp, XFS_ALLOC_KTRACE_BUSYSEARCH, __LINE__) #else #define TRACE_ALLOC(s,a) #define TRACE_FREE(s,a,b,x,f) @@ -420,7 +420,7 @@ xfs_alloc_read_agfl( */ STATIC void xfs_alloc_trace_alloc( - char *name, /* function tag string */ + const char *name, /* function tag string */ char *str, /* additional string */ xfs_alloc_arg_t *args, /* allocation argument structure */ int line) /* source line number */ @@ -453,7 +453,7 @@ 
xfs_alloc_trace_alloc( */ STATIC void xfs_alloc_trace_free( - char *name, /* function tag string */ + const char *name, /* function tag string */ char *str, /* additional string */ xfs_mount_t *mp, /* file system mount point */ xfs_agnumber_t agno, /* allocation group number */ @@ -479,7 +479,7 @@ xfs_alloc_trace_free( */ STATIC void xfs_alloc_trace_modagf( - char *name, /* function tag string */ + const char *name, /* function tag string */ char *str, /* additional string */ xfs_mount_t *mp, /* file system mount point */ xfs_agf_t *agf, /* new agf value */ @@ -507,7 +507,7 @@ xfs_alloc_trace_modagf( STATIC void xfs_alloc_trace_busy( - char *name, /* function tag string */ + const char *name, /* function tag string */ char *str, /* additional string */ xfs_mount_t *mp, /* file system mount point */ xfs_agnumber_t agno, /* allocation group number */ @@ -549,9 +549,6 @@ xfs_alloc_ag_vextent( xfs_alloc_arg_t *args) /* argument structure for allocation */ { int error=0; -#ifdef XFS_ALLOC_TRACE - static char fname[] = "xfs_alloc_ag_vextent"; -#endif ASSERT(args->minlen > 0); ASSERT(args->maxlen > 0); @@ -635,9 +632,6 @@ xfs_alloc_ag_vextent_exact( xfs_agblock_t fbno; /* start block of found extent */ xfs_agblock_t fend; /* end block of found extent */ xfs_extlen_t flen; /* length of found extent */ -#ifdef XFS_ALLOC_TRACE - static char fname[] = "xfs_alloc_ag_vextent_exact"; -#endif int i; /* success/failure of operation */ xfs_agblock_t maxend; /* end of maximal extent */ xfs_agblock_t minend; /* end of minimal extent */ @@ -737,9 +731,6 @@ xfs_alloc_ag_vextent_near( xfs_btree_cur_t *bno_cur_gt; /* cursor for bno btree, right side */ xfs_btree_cur_t *bno_cur_lt; /* cursor for bno btree, left side */ xfs_btree_cur_t *cnt_cur; /* cursor for count btree */ -#ifdef XFS_ALLOC_TRACE - static char fname[] = "xfs_alloc_ag_vextent_near"; -#endif xfs_agblock_t gtbno; /* start bno of right side entry */ xfs_agblock_t gtbnoa; /* aligned ... 
*/ xfs_extlen_t gtdiff; /* difference to right side entry */ @@ -1270,9 +1261,6 @@ xfs_alloc_ag_vextent_size( int error; /* error result */ xfs_agblock_t fbno; /* start of found freespace */ xfs_extlen_t flen; /* length of found freespace */ -#ifdef XFS_ALLOC_TRACE - static char fname[] = "xfs_alloc_ag_vextent_size"; -#endif int i; /* temp status variable */ xfs_agblock_t rbno; /* returned block number */ xfs_extlen_t rlen; /* length of returned extent */ @@ -1427,9 +1415,6 @@ xfs_alloc_ag_vextent_small( int error; xfs_agblock_t fbno; xfs_extlen_t flen; -#ifdef XFS_ALLOC_TRACE - static char fname[] = "xfs_alloc_ag_vextent_small"; -#endif int i; if ((error = xfs_alloc_decrement(ccur, 0, &i))) @@ -1516,9 +1501,6 @@ xfs_free_ag_extent( xfs_btree_cur_t *bno_cur; /* cursor for by-block btree */ xfs_btree_cur_t *cnt_cur; /* cursor for by-size btree */ int error; /* error return value */ -#ifdef XFS_ALLOC_TRACE - static char fname[] = "xfs_free_ag_extent"; -#endif xfs_agblock_t gtbno; /* start of right neighbor block */ xfs_extlen_t gtlen; /* length of right neighbor block */ int haveleft; /* have a left neighbor block */ @@ -2003,9 +1985,6 @@ xfs_alloc_get_freelist( xfs_agblock_t bno; /* block number returned */ int error; int logflags; -#ifdef XFS_ALLOC_TRACE - static char fname[] = "xfs_alloc_get_freelist"; -#endif xfs_mount_t *mp; /* mount structure */ xfs_perag_t *pag; /* per allocation group data */ @@ -2128,9 +2107,6 @@ xfs_alloc_put_freelist( __be32 *blockp;/* pointer to array entry */ int error; int logflags; -#ifdef XFS_ALLOC_TRACE - static char fname[] = "xfs_alloc_put_freelist"; -#endif xfs_mount_t *mp; /* mount structure */ xfs_perag_t *pag; /* per allocation group data */ @@ -2263,9 +2239,6 @@ xfs_alloc_vextent( xfs_agblock_t agsize; /* allocation group size */ int error; int flags; /* XFS_ALLOC_FLAG_... 
locking flags */ -#ifdef XFS_ALLOC_TRACE - static char fname[] = "xfs_alloc_vextent"; -#endif xfs_extlen_t minleft;/* minimum left value, temp copy */ xfs_mount_t *mp; /* mount structure pointer */ xfs_agnumber_t sagno; /* starting allocation group number */ Index: linux/fs/xfs/xfs_bmap.c =================================================================== --- linux.orig/fs/xfs/xfs_bmap.c +++ linux/fs/xfs/xfs_bmap.c @@ -277,7 +277,7 @@ xfs_bmap_isaeof( STATIC void xfs_bmap_trace_addentry( int opcode, /* operation */ - char *fname, /* function name */ + const char *fname, /* function name */ char *desc, /* operation description */ xfs_inode_t *ip, /* incore inode pointer */ xfs_extnum_t idx, /* index of entry(ies) */ @@ -291,7 +291,7 @@ xfs_bmap_trace_addentry( */ STATIC void xfs_bmap_trace_delete( - char *fname, /* function name */ + const char *fname, /* function name */ char *desc, /* operation description */ xfs_inode_t *ip, /* incore inode pointer */ xfs_extnum_t idx, /* index of entry(entries) deleted */ @@ -304,7 +304,7 @@ xfs_bmap_trace_delete( */ STATIC void xfs_bmap_trace_insert( - char *fname, /* function name */ + const char *fname, /* function name */ char *desc, /* operation description */ xfs_inode_t *ip, /* incore inode pointer */ xfs_extnum_t idx, /* index of entry(entries) inserted */ @@ -318,7 +318,7 @@ xfs_bmap_trace_insert( */ STATIC void xfs_bmap_trace_post_update( - char *fname, /* function name */ + const char *fname, /* function name */ char *desc, /* operation description */ xfs_inode_t *ip, /* incore inode pointer */ xfs_extnum_t idx, /* index of entry updated */ @@ -329,17 +329,25 @@ xfs_bmap_trace_post_update( */ STATIC void xfs_bmap_trace_pre_update( - char *fname, /* function name */ + const char *fname, /* function name */ char *desc, /* operation description */ xfs_inode_t *ip, /* incore inode pointer */ xfs_extnum_t idx, /* index of entry to be updated */ int whichfork); /* data or attr fork */ +#define 
XFS_BMAP_TRACE_DELETE(d,ip,i,c,w) \ + xfs_bmap_trace_delete(__FUNCTION__,d,ip,i,c,w) +#define XFS_BMAP_TRACE_INSERT(d,ip,i,c,r1,r2,w) \ + xfs_bmap_trace_insert(__FUNCTION__,d,ip,i,c,r1,r2,w) +#define XFS_BMAP_TRACE_POST_UPDATE(d,ip,i,w) \ + xfs_bmap_trace_post_update(__FUNCTION__,d,ip,i,w) +#define XFS_BMAP_TRACE_PRE_UPDATE(d,ip,i,w) \ + xfs_bmap_trace_pre_update(__FUNCTION__,d,ip,i,w) #else -#define xfs_bmap_trace_delete(f,d,ip,i,c,w) -#define xfs_bmap_trace_insert(f,d,ip,i,c,r1,r2,w) -#define xfs_bmap_trace_post_update(f,d,ip,i,w) -#define xfs_bmap_trace_pre_update(f,d,ip,i,w) +#define XFS_BMAP_TRACE_DELETE(d,ip,i,c,w) +#define XFS_BMAP_TRACE_INSERT(d,ip,i,c,r1,r2,w) +#define XFS_BMAP_TRACE_POST_UPDATE(d,ip,i,w) +#define XFS_BMAP_TRACE_PRE_UPDATE(d,ip,i,w) #endif /* XFS_BMAP_TRACE */ /* @@ -531,9 +539,6 @@ xfs_bmap_add_extent( xfs_filblks_t da_new; /* new count del alloc blocks used */ xfs_filblks_t da_old; /* old count del alloc blocks used */ int error; /* error return value */ -#ifdef XFS_BMAP_TRACE - static char fname[] = "xfs_bmap_add_extent"; -#endif xfs_ifork_t *ifp; /* inode fork ptr */ int logflags; /* returned value */ xfs_extnum_t nextents; /* number of extents in file now */ @@ -551,8 +556,8 @@ xfs_bmap_add_extent( * already extents in the list. 
*/ if (nextents == 0) { - xfs_bmap_trace_insert(fname, "insert empty", ip, 0, 1, new, - NULL, whichfork); + XFS_BMAP_TRACE_INSERT("insert empty", ip, 0, 1, new, NULL, + whichfork); xfs_iext_insert(ifp, 0, 1, new); ASSERT(cur == NULL); ifp->if_lastex = 0; @@ -710,9 +715,6 @@ xfs_bmap_add_extent_delay_real( int diff; /* temp value */ xfs_bmbt_rec_t *ep; /* extent entry for idx */ int error; /* error return value */ -#ifdef XFS_BMAP_TRACE - static char fname[] = "xfs_bmap_add_extent_delay_real"; -#endif int i; /* temp state */ xfs_ifork_t *ifp; /* inode fork pointer */ xfs_fileoff_t new_endoff; /* end offset of new entry */ @@ -808,15 +810,14 @@ xfs_bmap_add_extent_delay_real( * Filling in all of a previously delayed allocation extent. * The left and right neighbors are both contiguous with new. */ - xfs_bmap_trace_pre_update(fname, "LF|RF|LC|RC", ip, idx - 1, + XFS_BMAP_TRACE_PRE_UPDATE("LF|RF|LC|RC", ip, idx - 1, XFS_DATA_FORK); xfs_bmbt_set_blockcount(xfs_iext_get_ext(ifp, idx - 1), LEFT.br_blockcount + PREV.br_blockcount + RIGHT.br_blockcount); - xfs_bmap_trace_post_update(fname, "LF|RF|LC|RC", ip, idx - 1, - XFS_DATA_FORK); - xfs_bmap_trace_delete(fname, "LF|RF|LC|RC", ip, idx, 2, + XFS_BMAP_TRACE_POST_UPDATE("LF|RF|LC|RC", ip, idx - 1, XFS_DATA_FORK); + XFS_BMAP_TRACE_DELETE("LF|RF|LC|RC", ip, idx, 2, XFS_DATA_FORK); xfs_iext_remove(ifp, idx, 2); ip->i_df.if_lastex = idx - 1; ip->i_d.di_nextents--; @@ -855,15 +856,14 @@ xfs_bmap_add_extent_delay_real( * Filling in all of a previously delayed allocation extent. * The left neighbor is contiguous, the right is not. 
*/ - xfs_bmap_trace_pre_update(fname, "LF|RF|LC", ip, idx - 1, + XFS_BMAP_TRACE_PRE_UPDATE("LF|RF|LC", ip, idx - 1, XFS_DATA_FORK); xfs_bmbt_set_blockcount(xfs_iext_get_ext(ifp, idx - 1), LEFT.br_blockcount + PREV.br_blockcount); - xfs_bmap_trace_post_update(fname, "LF|RF|LC", ip, idx - 1, + XFS_BMAP_TRACE_POST_UPDATE("LF|RF|LC", ip, idx - 1, XFS_DATA_FORK); ip->i_df.if_lastex = idx - 1; - xfs_bmap_trace_delete(fname, "LF|RF|LC", ip, idx, 1, - XFS_DATA_FORK); + XFS_BMAP_TRACE_DELETE("LF|RF|LC", ip, idx, 1, XFS_DATA_FORK); xfs_iext_remove(ifp, idx, 1); if (cur == NULL) rval = XFS_ILOG_DEXT; @@ -892,16 +892,13 @@ xfs_bmap_add_extent_delay_real( * Filling in all of a previously delayed allocation extent. * The right neighbor is contiguous, the left is not. */ - xfs_bmap_trace_pre_update(fname, "LF|RF|RC", ip, idx, - XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("LF|RF|RC", ip, idx, XFS_DATA_FORK); xfs_bmbt_set_startblock(ep, new->br_startblock); xfs_bmbt_set_blockcount(ep, PREV.br_blockcount + RIGHT.br_blockcount); - xfs_bmap_trace_post_update(fname, "LF|RF|RC", ip, idx, - XFS_DATA_FORK); + XFS_BMAP_TRACE_POST_UPDATE("LF|RF|RC", ip, idx, XFS_DATA_FORK); ip->i_df.if_lastex = idx; - xfs_bmap_trace_delete(fname, "LF|RF|RC", ip, idx + 1, 1, - XFS_DATA_FORK); + XFS_BMAP_TRACE_DELETE("LF|RF|RC", ip, idx + 1, 1, XFS_DATA_FORK); xfs_iext_remove(ifp, idx + 1, 1); if (cur == NULL) rval = XFS_ILOG_DEXT; @@ -931,11 +928,9 @@ xfs_bmap_add_extent_delay_real( * Neither the left nor right neighbors are contiguous with * the new one. 
*/ - xfs_bmap_trace_pre_update(fname, "LF|RF", ip, idx, - XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("LF|RF", ip, idx, XFS_DATA_FORK); xfs_bmbt_set_startblock(ep, new->br_startblock); - xfs_bmap_trace_post_update(fname, "LF|RF", ip, idx, - XFS_DATA_FORK); + XFS_BMAP_TRACE_POST_UPDATE("LF|RF", ip, idx, XFS_DATA_FORK); ip->i_df.if_lastex = idx; ip->i_d.di_nextents++; if (cur == NULL) @@ -963,17 +958,14 @@ xfs_bmap_add_extent_delay_real( * Filling in the first part of a previous delayed allocation. * The left neighbor is contiguous. */ - xfs_bmap_trace_pre_update(fname, "LF|LC", ip, idx - 1, - XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("LF|LC", ip, idx - 1, XFS_DATA_FORK); xfs_bmbt_set_blockcount(xfs_iext_get_ext(ifp, idx - 1), LEFT.br_blockcount + new->br_blockcount); xfs_bmbt_set_startoff(ep, PREV.br_startoff + new->br_blockcount); - xfs_bmap_trace_post_update(fname, "LF|LC", ip, idx - 1, - XFS_DATA_FORK); + XFS_BMAP_TRACE_POST_UPDATE("LF|LC", ip, idx - 1, XFS_DATA_FORK); temp = PREV.br_blockcount - new->br_blockcount; - xfs_bmap_trace_pre_update(fname, "LF|LC", ip, idx, - XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("LF|LC", ip, idx, XFS_DATA_FORK); xfs_bmbt_set_blockcount(ep, temp); ip->i_df.if_lastex = idx - 1; if (cur == NULL) @@ -995,8 +987,7 @@ xfs_bmap_add_extent_delay_real( temp = XFS_FILBLKS_MIN(xfs_bmap_worst_indlen(ip, temp), STARTBLOCKVAL(PREV.br_startblock)); xfs_bmbt_set_startblock(ep, NULLSTARTBLOCK((int)temp)); - xfs_bmap_trace_post_update(fname, "LF|LC", ip, idx, - XFS_DATA_FORK); + XFS_BMAP_TRACE_POST_UPDATE("LF|LC", ip, idx, XFS_DATA_FORK); *dnew = temp; /* DELTA: The boundary between two in-core extents moved. */ temp = LEFT.br_startoff; @@ -1009,11 +1000,11 @@ xfs_bmap_add_extent_delay_real( * Filling in the first part of a previous delayed allocation. * The left neighbor is not contiguous. 
*/ - xfs_bmap_trace_pre_update(fname, "LF", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("LF", ip, idx, XFS_DATA_FORK); xfs_bmbt_set_startoff(ep, new_endoff); temp = PREV.br_blockcount - new->br_blockcount; xfs_bmbt_set_blockcount(ep, temp); - xfs_bmap_trace_insert(fname, "LF", ip, idx, 1, new, NULL, + XFS_BMAP_TRACE_INSERT("LF", ip, idx, 1, new, NULL, XFS_DATA_FORK); xfs_iext_insert(ifp, idx, 1, new); ip->i_df.if_lastex = idx; @@ -1046,8 +1037,7 @@ xfs_bmap_add_extent_delay_real( (cur ? cur->bc_private.b.allocated : 0)); ep = xfs_iext_get_ext(ifp, idx + 1); xfs_bmbt_set_startblock(ep, NULLSTARTBLOCK((int)temp)); - xfs_bmap_trace_post_update(fname, "LF", ip, idx + 1, - XFS_DATA_FORK); + XFS_BMAP_TRACE_POST_UPDATE("LF", ip, idx + 1, XFS_DATA_FORK); *dnew = temp; /* DELTA: One in-core extent is split in two. */ temp = PREV.br_startoff; @@ -1060,17 +1050,14 @@ xfs_bmap_add_extent_delay_real( * The right neighbor is contiguous with the new allocation. */ temp = PREV.br_blockcount - new->br_blockcount; - xfs_bmap_trace_pre_update(fname, "RF|RC", ip, idx, - XFS_DATA_FORK); - xfs_bmap_trace_pre_update(fname, "RF|RC", ip, idx + 1, - XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("RF|RC", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("RF|RC", ip, idx + 1, XFS_DATA_FORK); xfs_bmbt_set_blockcount(ep, temp); xfs_bmbt_set_allf(xfs_iext_get_ext(ifp, idx + 1), new->br_startoff, new->br_startblock, new->br_blockcount + RIGHT.br_blockcount, RIGHT.br_state); - xfs_bmap_trace_post_update(fname, "RF|RC", ip, idx + 1, - XFS_DATA_FORK); + XFS_BMAP_TRACE_POST_UPDATE("RF|RC", ip, idx + 1, XFS_DATA_FORK); ip->i_df.if_lastex = idx + 1; if (cur == NULL) rval = XFS_ILOG_DEXT; @@ -1091,8 +1078,7 @@ xfs_bmap_add_extent_delay_real( temp = XFS_FILBLKS_MIN(xfs_bmap_worst_indlen(ip, temp), STARTBLOCKVAL(PREV.br_startblock)); xfs_bmbt_set_startblock(ep, NULLSTARTBLOCK((int)temp)); - xfs_bmap_trace_post_update(fname, "RF|RC", ip, idx, - XFS_DATA_FORK); + 
XFS_BMAP_TRACE_POST_UPDATE("RF|RC", ip, idx, XFS_DATA_FORK); *dnew = temp; /* DELTA: The boundary between two in-core extents moved. */ temp = PREV.br_startoff; @@ -1106,10 +1092,10 @@ xfs_bmap_add_extent_delay_real( * The right neighbor is not contiguous. */ temp = PREV.br_blockcount - new->br_blockcount; - xfs_bmap_trace_pre_update(fname, "RF", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("RF", ip, idx, XFS_DATA_FORK); xfs_bmbt_set_blockcount(ep, temp); - xfs_bmap_trace_insert(fname, "RF", ip, idx + 1, 1, - new, NULL, XFS_DATA_FORK); + XFS_BMAP_TRACE_INSERT("RF", ip, idx + 1, 1, new, NULL, + XFS_DATA_FORK); xfs_iext_insert(ifp, idx + 1, 1, new); ip->i_df.if_lastex = idx + 1; ip->i_d.di_nextents++; @@ -1141,7 +1127,7 @@ xfs_bmap_add_extent_delay_real( (cur ? cur->bc_private.b.allocated : 0)); ep = xfs_iext_get_ext(ifp, idx); xfs_bmbt_set_startblock(ep, NULLSTARTBLOCK((int)temp)); - xfs_bmap_trace_post_update(fname, "RF", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_POST_UPDATE("RF", ip, idx, XFS_DATA_FORK); *dnew = temp; /* DELTA: One in-core extent is split in two. */ temp = PREV.br_startoff; @@ -1155,7 +1141,7 @@ xfs_bmap_add_extent_delay_real( * This case is avoided almost all the time. 
*/ temp = new->br_startoff - PREV.br_startoff; - xfs_bmap_trace_pre_update(fname, "0", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("0", ip, idx, XFS_DATA_FORK); xfs_bmbt_set_blockcount(ep, temp); r[0] = *new; r[1].br_state = PREV.br_state; @@ -1163,7 +1149,7 @@ xfs_bmap_add_extent_delay_real( r[1].br_startoff = new_endoff; temp2 = PREV.br_startoff + PREV.br_blockcount - new_endoff; r[1].br_blockcount = temp2; - xfs_bmap_trace_insert(fname, "0", ip, idx + 1, 2, &r[0], &r[1], + XFS_BMAP_TRACE_INSERT("0", ip, idx + 1, 2, &r[0], &r[1], XFS_DATA_FORK); xfs_iext_insert(ifp, idx + 1, 2, &r[0]); ip->i_df.if_lastex = idx + 1; @@ -1222,13 +1208,11 @@ xfs_bmap_add_extent_delay_real( } ep = xfs_iext_get_ext(ifp, idx); xfs_bmbt_set_startblock(ep, NULLSTARTBLOCK((int)temp)); - xfs_bmap_trace_post_update(fname, "0", ip, idx, XFS_DATA_FORK); - xfs_bmap_trace_pre_update(fname, "0", ip, idx + 2, - XFS_DATA_FORK); + XFS_BMAP_TRACE_POST_UPDATE("0", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("0", ip, idx + 2, XFS_DATA_FORK); xfs_bmbt_set_startblock(xfs_iext_get_ext(ifp, idx + 2), NULLSTARTBLOCK((int)temp2)); - xfs_bmap_trace_post_update(fname, "0", ip, idx + 2, - XFS_DATA_FORK); + XFS_BMAP_TRACE_POST_UPDATE("0", ip, idx + 2, XFS_DATA_FORK); *dnew = temp + temp2; /* DELTA: One in-core extent is split in three. */ temp = PREV.br_startoff; @@ -1287,9 +1271,6 @@ xfs_bmap_add_extent_unwritten_real( xfs_btree_cur_t *cur; /* btree cursor */ xfs_bmbt_rec_t *ep; /* extent entry for idx */ int error; /* error return value */ -#ifdef XFS_BMAP_TRACE - static char fname[] = "xfs_bmap_add_extent_unwritten_real"; -#endif int i; /* temp state */ xfs_ifork_t *ifp; /* inode fork pointer */ xfs_fileoff_t new_endoff; /* end offset of new entry */ @@ -1390,15 +1371,14 @@ xfs_bmap_add_extent_unwritten_real( * Setting all of a previous oldext extent to newext. * The left and right neighbors are both contiguous with new. 
*/ - xfs_bmap_trace_pre_update(fname, "LF|RF|LC|RC", ip, idx - 1, + XFS_BMAP_TRACE_PRE_UPDATE("LF|RF|LC|RC", ip, idx - 1, XFS_DATA_FORK); xfs_bmbt_set_blockcount(xfs_iext_get_ext(ifp, idx - 1), LEFT.br_blockcount + PREV.br_blockcount + RIGHT.br_blockcount); - xfs_bmap_trace_post_update(fname, "LF|RF|LC|RC", ip, idx - 1, - XFS_DATA_FORK); - xfs_bmap_trace_delete(fname, "LF|RF|LC|RC", ip, idx, 2, + XFS_BMAP_TRACE_POST_UPDATE("LF|RF|LC|RC", ip, idx - 1, XFS_DATA_FORK); + XFS_BMAP_TRACE_DELETE("LF|RF|LC|RC", ip, idx, 2, XFS_DATA_FORK); xfs_iext_remove(ifp, idx, 2); ip->i_df.if_lastex = idx - 1; ip->i_d.di_nextents -= 2; @@ -1441,15 +1421,14 @@ xfs_bmap_add_extent_unwritten_real( * Setting all of a previous oldext extent to newext. * The left neighbor is contiguous, the right is not. */ - xfs_bmap_trace_pre_update(fname, "LF|RF|LC", ip, idx - 1, + XFS_BMAP_TRACE_PRE_UPDATE("LF|RF|LC", ip, idx - 1, XFS_DATA_FORK); xfs_bmbt_set_blockcount(xfs_iext_get_ext(ifp, idx - 1), LEFT.br_blockcount + PREV.br_blockcount); - xfs_bmap_trace_post_update(fname, "LF|RF|LC", ip, idx - 1, + XFS_BMAP_TRACE_POST_UPDATE("LF|RF|LC", ip, idx - 1, XFS_DATA_FORK); ip->i_df.if_lastex = idx - 1; - xfs_bmap_trace_delete(fname, "LF|RF|LC", ip, idx, 1, - XFS_DATA_FORK); + XFS_BMAP_TRACE_DELETE("LF|RF|LC", ip, idx, 1, XFS_DATA_FORK); xfs_iext_remove(ifp, idx, 1); ip->i_d.di_nextents--; if (cur == NULL) @@ -1484,16 +1463,15 @@ xfs_bmap_add_extent_unwritten_real( * Setting all of a previous oldext extent to newext. * The right neighbor is contiguous, the left is not. 
*/ - xfs_bmap_trace_pre_update(fname, "LF|RF|RC", ip, idx, + XFS_BMAP_TRACE_PRE_UPDATE("LF|RF|RC", ip, idx, XFS_DATA_FORK); xfs_bmbt_set_blockcount(ep, PREV.br_blockcount + RIGHT.br_blockcount); xfs_bmbt_set_state(ep, newext); - xfs_bmap_trace_post_update(fname, "LF|RF|RC", ip, idx, + XFS_BMAP_TRACE_POST_UPDATE("LF|RF|RC", ip, idx, XFS_DATA_FORK); ip->i_df.if_lastex = idx; - xfs_bmap_trace_delete(fname, "LF|RF|RC", ip, idx + 1, 1, - XFS_DATA_FORK); + XFS_BMAP_TRACE_DELETE("LF|RF|RC", ip, idx + 1, 1, XFS_DATA_FORK); xfs_iext_remove(ifp, idx + 1, 1); ip->i_d.di_nextents--; if (cur == NULL) @@ -1529,10 +1507,10 @@ xfs_bmap_add_extent_unwritten_real( * Neither the left nor right neighbors are contiguous with * the new one. */ - xfs_bmap_trace_pre_update(fname, "LF|RF", ip, idx, + XFS_BMAP_TRACE_PRE_UPDATE("LF|RF", ip, idx, XFS_DATA_FORK); xfs_bmbt_set_state(ep, newext); - xfs_bmap_trace_post_update(fname, "LF|RF", ip, idx, + XFS_BMAP_TRACE_POST_UPDATE("LF|RF", ip, idx, XFS_DATA_FORK); ip->i_df.if_lastex = idx; if (cur == NULL) @@ -1559,21 +1537,21 @@ xfs_bmap_add_extent_unwritten_real( * Setting the first part of a previous oldext extent to newext. * The left neighbor is contiguous. 
*/ - xfs_bmap_trace_pre_update(fname, "LF|LC", ip, idx - 1, + XFS_BMAP_TRACE_PRE_UPDATE("LF|LC", ip, idx - 1, XFS_DATA_FORK); xfs_bmbt_set_blockcount(xfs_iext_get_ext(ifp, idx - 1), LEFT.br_blockcount + new->br_blockcount); xfs_bmbt_set_startoff(ep, PREV.br_startoff + new->br_blockcount); - xfs_bmap_trace_post_update(fname, "LF|LC", ip, idx - 1, + XFS_BMAP_TRACE_POST_UPDATE("LF|LC", ip, idx - 1, XFS_DATA_FORK); - xfs_bmap_trace_pre_update(fname, "LF|LC", ip, idx, + XFS_BMAP_TRACE_PRE_UPDATE("LF|LC", ip, idx, XFS_DATA_FORK); xfs_bmbt_set_startblock(ep, new->br_startblock + new->br_blockcount); xfs_bmbt_set_blockcount(ep, PREV.br_blockcount - new->br_blockcount); - xfs_bmap_trace_post_update(fname, "LF|LC", ip, idx, + XFS_BMAP_TRACE_POST_UPDATE("LF|LC", ip, idx, XFS_DATA_FORK); ip->i_df.if_lastex = idx - 1; if (cur == NULL) @@ -1610,15 +1588,15 @@ xfs_bmap_add_extent_unwritten_real( * Setting the first part of a previous oldext extent to newext. * The left neighbor is not contiguous. */ - xfs_bmap_trace_pre_update(fname, "LF", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("LF", ip, idx, XFS_DATA_FORK); ASSERT(ep && xfs_bmbt_get_state(ep) == oldext); xfs_bmbt_set_startoff(ep, new_endoff); xfs_bmbt_set_blockcount(ep, PREV.br_blockcount - new->br_blockcount); xfs_bmbt_set_startblock(ep, new->br_startblock + new->br_blockcount); - xfs_bmap_trace_post_update(fname, "LF", ip, idx, XFS_DATA_FORK); - xfs_bmap_trace_insert(fname, "LF", ip, idx, 1, new, NULL, + XFS_BMAP_TRACE_POST_UPDATE("LF", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_INSERT("LF", ip, idx, 1, new, NULL, XFS_DATA_FORK); xfs_iext_insert(ifp, idx, 1, new); ip->i_df.if_lastex = idx; @@ -1653,18 +1631,18 @@ xfs_bmap_add_extent_unwritten_real( * Setting the last part of a previous oldext extent to newext. * The right neighbor is contiguous with the new allocation. 
*/ - xfs_bmap_trace_pre_update(fname, "RF|RC", ip, idx, + XFS_BMAP_TRACE_PRE_UPDATE("RF|RC", ip, idx, XFS_DATA_FORK); - xfs_bmap_trace_pre_update(fname, "RF|RC", ip, idx + 1, + XFS_BMAP_TRACE_PRE_UPDATE("RF|RC", ip, idx + 1, XFS_DATA_FORK); xfs_bmbt_set_blockcount(ep, PREV.br_blockcount - new->br_blockcount); - xfs_bmap_trace_post_update(fname, "RF|RC", ip, idx, + XFS_BMAP_TRACE_POST_UPDATE("RF|RC", ip, idx, XFS_DATA_FORK); xfs_bmbt_set_allf(xfs_iext_get_ext(ifp, idx + 1), new->br_startoff, new->br_startblock, new->br_blockcount + RIGHT.br_blockcount, newext); - xfs_bmap_trace_post_update(fname, "RF|RC", ip, idx + 1, + XFS_BMAP_TRACE_POST_UPDATE("RF|RC", ip, idx + 1, XFS_DATA_FORK); ip->i_df.if_lastex = idx + 1; if (cur == NULL) @@ -1700,12 +1678,12 @@ xfs_bmap_add_extent_unwritten_real( * Setting the last part of a previous oldext extent to newext. * The right neighbor is not contiguous. */ - xfs_bmap_trace_pre_update(fname, "RF", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("RF", ip, idx, XFS_DATA_FORK); xfs_bmbt_set_blockcount(ep, PREV.br_blockcount - new->br_blockcount); - xfs_bmap_trace_post_update(fname, "RF", ip, idx, XFS_DATA_FORK); - xfs_bmap_trace_insert(fname, "RF", ip, idx + 1, 1, - new, NULL, XFS_DATA_FORK); + XFS_BMAP_TRACE_POST_UPDATE("RF", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_INSERT("RF", ip, idx + 1, 1, new, NULL, + XFS_DATA_FORK); xfs_iext_insert(ifp, idx + 1, 1, new); ip->i_df.if_lastex = idx + 1; ip->i_d.di_nextents++; @@ -1744,17 +1722,17 @@ xfs_bmap_add_extent_unwritten_real( * newext. Contiguity is impossible here. * One extent becomes three extents. 
*/ - xfs_bmap_trace_pre_update(fname, "0", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("0", ip, idx, XFS_DATA_FORK); xfs_bmbt_set_blockcount(ep, new->br_startoff - PREV.br_startoff); - xfs_bmap_trace_post_update(fname, "0", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_POST_UPDATE("0", ip, idx, XFS_DATA_FORK); r[0] = *new; r[1].br_startoff = new_endoff; r[1].br_blockcount = PREV.br_startoff + PREV.br_blockcount - new_endoff; r[1].br_startblock = new->br_startblock + new->br_blockcount; r[1].br_state = oldext; - xfs_bmap_trace_insert(fname, "0", ip, idx + 1, 2, &r[0], &r[1], + XFS_BMAP_TRACE_INSERT("0", ip, idx + 1, 2, &r[0], &r[1], XFS_DATA_FORK); xfs_iext_insert(ifp, idx + 1, 2, &r[0]); ip->i_df.if_lastex = idx + 1; @@ -1845,9 +1823,6 @@ xfs_bmap_add_extent_hole_delay( int rsvd) /* OK to allocate reserved blocks */ { xfs_bmbt_rec_t *ep; /* extent record for idx */ -#ifdef XFS_BMAP_TRACE - static char fname[] = "xfs_bmap_add_extent_hole_delay"; -#endif xfs_ifork_t *ifp; /* inode fork pointer */ xfs_bmbt_irec_t left; /* left neighbor extent entry */ xfs_filblks_t newlen=0; /* new indirect size */ @@ -1919,7 +1894,7 @@ xfs_bmap_add_extent_hole_delay( */ temp = left.br_blockcount + new->br_blockcount + right.br_blockcount; - xfs_bmap_trace_pre_update(fname, "LC|RC", ip, idx - 1, + XFS_BMAP_TRACE_PRE_UPDATE("LC|RC", ip, idx - 1, XFS_DATA_FORK); xfs_bmbt_set_blockcount(xfs_iext_get_ext(ifp, idx - 1), temp); oldlen = STARTBLOCKVAL(left.br_startblock) + @@ -1928,10 +1903,9 @@ xfs_bmap_add_extent_hole_delay( newlen = xfs_bmap_worst_indlen(ip, temp); xfs_bmbt_set_startblock(xfs_iext_get_ext(ifp, idx - 1), NULLSTARTBLOCK((int)newlen)); - xfs_bmap_trace_post_update(fname, "LC|RC", ip, idx - 1, - XFS_DATA_FORK); - xfs_bmap_trace_delete(fname, "LC|RC", ip, idx, 1, + XFS_BMAP_TRACE_POST_UPDATE("LC|RC", ip, idx - 1, XFS_DATA_FORK); + XFS_BMAP_TRACE_DELETE("LC|RC", ip, idx, 1, XFS_DATA_FORK); xfs_iext_remove(ifp, idx, 1); ip->i_df.if_lastex = idx - 1; /* DELTA: Two in-core 
extents were replaced by one. */ @@ -1946,7 +1920,7 @@ xfs_bmap_add_extent_hole_delay( * Merge the new allocation with the left neighbor. */ temp = left.br_blockcount + new->br_blockcount; - xfs_bmap_trace_pre_update(fname, "LC", ip, idx - 1, + XFS_BMAP_TRACE_PRE_UPDATE("LC", ip, idx - 1, XFS_DATA_FORK); xfs_bmbt_set_blockcount(xfs_iext_get_ext(ifp, idx - 1), temp); oldlen = STARTBLOCKVAL(left.br_startblock) + @@ -1954,7 +1928,7 @@ xfs_bmap_add_extent_hole_delay( newlen = xfs_bmap_worst_indlen(ip, temp); xfs_bmbt_set_startblock(xfs_iext_get_ext(ifp, idx - 1), NULLSTARTBLOCK((int)newlen)); - xfs_bmap_trace_post_update(fname, "LC", ip, idx - 1, + XFS_BMAP_TRACE_POST_UPDATE("LC", ip, idx - 1, XFS_DATA_FORK); ip->i_df.if_lastex = idx - 1; /* DELTA: One in-core extent grew into a hole. */ @@ -1968,14 +1942,14 @@ xfs_bmap_add_extent_hole_delay( * on the right. * Merge the new allocation with the right neighbor. */ - xfs_bmap_trace_pre_update(fname, "RC", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_PRE_UPDATE("RC", ip, idx, XFS_DATA_FORK); temp = new->br_blockcount + right.br_blockcount; oldlen = STARTBLOCKVAL(new->br_startblock) + STARTBLOCKVAL(right.br_startblock); newlen = xfs_bmap_worst_indlen(ip, temp); xfs_bmbt_set_allf(ep, new->br_startoff, NULLSTARTBLOCK((int)newlen), temp, right.br_state); - xfs_bmap_trace_post_update(fname, "RC", ip, idx, XFS_DATA_FORK); + XFS_BMAP_TRACE_POST_UPDATE("RC", ip, idx, XFS_DATA_FORK); ip->i_df.if_lastex = idx; /* DELTA: One in-core extent grew into a hole. */ temp2 = temp; @@ -1989,7 +1963,7 @@ xfs_bmap_add_extent_hole_delay( * Insert a new entry. */ oldlen = newlen = 0; - xfs_bmap_trace_insert(fname, "0", ip, idx, 1, new, NULL, + XFS_BMAP_TRACE_INSERT("0", ip, idx, 1, new, NULL, XFS_DATA_FORK); xfs_iext_insert(ifp, idx, 1, new); ip->i_df.if_lastex = idx; @@ -2039,9 +2013,6 @@ xfs_bmap_add_extent_hole_real( { xfs_bmbt_rec_t *ep; /* pointer to extent entry ins. 
point */ int error; /* error return value */ -#ifdef XFS_BMAP_TRACE - static char fname[] = "xfs_bmap_add_extent_hole_real"; -#endif int i; /* temp state */ xfs_ifork_t *ifp; /* inode fork pointer */ xfs_bmbt_irec_t left; /* left neighbor extent entry */ @@ -2118,15 +2089,14 @@ xfs_bmap_add_extent_hole_real( * left and on the right. * Merge all three into a single extent record. */ - xfs_bmap_trace_pre_update(fname, "LC|RC", ip, idx - 1, + XFS_BMAP_TRACE_PRE_UPDATE("LC|RC", ip, idx - 1, whichfork); xfs_bmbt_set_blockcount(xfs_iext_get_ext(ifp, idx - 1), left.br_blockcount + new->br_blockcount + right.br_blockcount); - xfs_bmap_trace_post_update(fname, "LC|RC", ip, idx - 1, + XFS_BMAP_TRACE_POST_UPDATE("LC|RC", ip, idx - 1, whichfork); - xfs_bmap_trace_delete(fname, "LC|RC", ip, - idx, 1, whichfork); + XFS_BMAP_TRACE_DELETE("LC|RC", ip, idx, 1, whichfork); xfs_iext_remove(ifp, idx, 1); ifp->if_lastex = idx - 1; XFS_IFORK_NEXT_SET(ip, whichfork, @@ -2168,10 +2138,10 @@ xfs_bmap_add_extent_hole_real( * on the left. * Merge the new allocation with the left neighbor. */ - xfs_bmap_trace_pre_update(fname, "LC", ip, idx - 1, whichfork); + XFS_BMAP_TRACE_PRE_UPDATE("LC", ip, idx - 1, whichfork); xfs_bmbt_set_blockcount(xfs_iext_get_ext(ifp, idx - 1), left.br_blockcount + new->br_blockcount); - xfs_bmap_trace_post_update(fname, "LC", ip, idx - 1, whichfork); + XFS_BMAP_TRACE_POST_UPDATE("LC", ip, idx - 1, whichfork); ifp->if_lastex = idx - 1; if (cur == NULL) { rval = XFS_ILOG_FEXT(whichfork); @@ -2202,11 +2172,11 @@ xfs_bmap_add_extent_hole_real( * on the right. * Merge the new allocation with the right neighbor. 
*/ - xfs_bmap_trace_pre_update(fname, "RC", ip, idx, whichfork); + XFS_BMAP_TRACE_PRE_UPDATE("RC", ip, idx, whichfork); xfs_bmbt_set_allf(ep, new->br_startoff, new->br_startblock, new->br_blockcount + right.br_blockcount, right.br_state); - xfs_bmap_trace_post_update(fname, "RC", ip, idx, whichfork); + XFS_BMAP_TRACE_POST_UPDATE("RC", ip, idx, whichfork); ifp->if_lastex = idx; if (cur == NULL) { rval = XFS_ILOG_FEXT(whichfork); @@ -2237,8 +2207,7 @@ xfs_bmap_add_extent_hole_real( * real allocation. * Insert a new entry. */ - xfs_bmap_trace_insert(fname, "0", ip, idx, 1, new, NULL, - whichfork); + XFS_BMAP_TRACE_INSERT("0", ip, idx, 1, new, NULL, whichfork); xfs_iext_insert(ifp, idx, 1, new); ifp->if_lastex = idx; XFS_IFORK_NEXT_SET(ip, whichfork, @@ -3048,9 +3017,6 @@ xfs_bmap_del_extent( xfs_bmbt_rec_t *ep; /* current extent entry pointer */ int error; /* error return value */ int flags; /* inode logging flags */ -#ifdef XFS_BMAP_TRACE - static char fname[] = "xfs_bmap_del_extent"; -#endif xfs_bmbt_irec_t got; /* current extent entry */ xfs_fileoff_t got_endoff; /* first offset past got */ int i; /* temp state */ @@ -3144,7 +3110,7 @@ xfs_bmap_del_extent( /* * Matches the whole extent. Delete the entry. */ - xfs_bmap_trace_delete(fname, "3", ip, idx, 1, whichfork); + XFS_BMAP_TRACE_DELETE("3", ip, idx, 1, whichfork); xfs_iext_remove(ifp, idx, 1); ifp->if_lastex = idx; if (delay) @@ -3165,7 +3131,7 @@ xfs_bmap_del_extent( /* * Deleting the first part of the extent. 
*/ - xfs_bmap_trace_pre_update(fname, "2", ip, idx, whichfork); + XFS_BMAP_TRACE_PRE_UPDATE("2", ip, idx, whichfork); xfs_bmbt_set_startoff(ep, del_endoff); temp = got.br_blockcount - del->br_blockcount; xfs_bmbt_set_blockcount(ep, temp); @@ -3174,13 +3140,13 @@ xfs_bmap_del_extent( temp = XFS_FILBLKS_MIN(xfs_bmap_worst_indlen(ip, temp), da_old); xfs_bmbt_set_startblock(ep, NULLSTARTBLOCK((int)temp)); - xfs_bmap_trace_post_update(fname, "2", ip, idx, + XFS_BMAP_TRACE_POST_UPDATE("2", ip, idx, whichfork); da_new = temp; break; } xfs_bmbt_set_startblock(ep, del_endblock); - xfs_bmap_trace_post_update(fname, "2", ip, idx, whichfork); + XFS_BMAP_TRACE_POST_UPDATE("2", ip, idx, whichfork); if (!cur) { flags |= XFS_ILOG_FEXT(whichfork); break; @@ -3196,19 +3162,19 @@ xfs_bmap_del_extent( * Deleting the last part of the extent. */ temp = got.br_blockcount - del->br_blockcount; - xfs_bmap_trace_pre_update(fname, "1", ip, idx, whichfork); + XFS_BMAP_TRACE_PRE_UPDATE("1", ip, idx, whichfork); xfs_bmbt_set_blockcount(ep, temp); ifp->if_lastex = idx; if (delay) { temp = XFS_FILBLKS_MIN(xfs_bmap_worst_indlen(ip, temp), da_old); xfs_bmbt_set_startblock(ep, NULLSTARTBLOCK((int)temp)); - xfs_bmap_trace_post_update(fname, "1", ip, idx, + XFS_BMAP_TRACE_POST_UPDATE("1", ip, idx, whichfork); da_new = temp; break; } - xfs_bmap_trace_post_update(fname, "1", ip, idx, whichfork); + XFS_BMAP_TRACE_POST_UPDATE("1", ip, idx, whichfork); if (!cur) { flags |= XFS_ILOG_FEXT(whichfork); break; @@ -3225,7 +3191,7 @@ xfs_bmap_del_extent( * Deleting the middle of the extent. 
*/ temp = del->br_startoff - got.br_startoff; - xfs_bmap_trace_pre_update(fname, "0", ip, idx, whichfork); + XFS_BMAP_TRACE_PRE_UPDATE("0", ip, idx, whichfork); xfs_bmbt_set_blockcount(ep, temp); new.br_startoff = del_endoff; temp2 = got_endoff - del_endoff; @@ -3312,8 +3278,8 @@ xfs_bmap_del_extent( } } } - xfs_bmap_trace_post_update(fname, "0", ip, idx, whichfork); - xfs_bmap_trace_insert(fname, "0", ip, idx + 1, 1, &new, NULL, + XFS_BMAP_TRACE_POST_UPDATE("0", ip, idx, whichfork); + XFS_BMAP_TRACE_INSERT("0", ip, idx + 1, 1, &new, NULL, whichfork); xfs_iext_insert(ifp, idx + 1, 1, &new); ifp->if_lastex = idx + 1; @@ -3553,9 +3519,6 @@ xfs_bmap_local_to_extents( { int error; /* error return value */ int flags; /* logging flags returned */ -#ifdef XFS_BMAP_TRACE - static char fname[] = "xfs_bmap_local_to_extents"; -#endif xfs_ifork_t *ifp; /* inode fork pointer */ /* @@ -3610,7 +3573,7 @@ xfs_bmap_local_to_extents( xfs_iext_add(ifp, 0, 1); ep = xfs_iext_get_ext(ifp, 0); xfs_bmbt_set_allf(ep, 0, args.fsbno, 1, XFS_EXT_NORM); - xfs_bmap_trace_post_update(fname, "new", ip, 0, whichfork); + XFS_BMAP_TRACE_POST_UPDATE("new", ip, 0, whichfork); XFS_IFORK_NEXT_SET(ip, whichfork, 1); ip->i_d.di_nblocks = 1; XFS_TRANS_MOD_DQUOT_BYINO(args.mp, tp, ip, @@ -3733,7 +3696,7 @@ ktrace_t *xfs_bmap_trace_buf; STATIC void xfs_bmap_trace_addentry( int opcode, /* operation */ - char *fname, /* function name */ + const char *fname, /* function name */ char *desc, /* operation description */ xfs_inode_t *ip, /* incore inode pointer */ xfs_extnum_t idx, /* index of entry(ies) */ @@ -3792,7 +3755,7 @@ xfs_bmap_trace_addentry( */ STATIC void xfs_bmap_trace_delete( - char *fname, /* function name */ + const char *fname, /* function name */ char *desc, /* operation description */ xfs_inode_t *ip, /* incore inode pointer */ xfs_extnum_t idx, /* index of entry(entries) deleted */ @@ -3814,7 +3777,7 @@ xfs_bmap_trace_delete( */ STATIC void xfs_bmap_trace_insert( - char *fname, /* function name 
*/ + const char *fname, /* function name */ char *desc, /* operation description */ xfs_inode_t *ip, /* incore inode pointer */ xfs_extnum_t idx, /* index of entry(entries) inserted */ @@ -3843,7 +3806,7 @@ xfs_bmap_trace_insert( */ STATIC void xfs_bmap_trace_post_update( - char *fname, /* function name */ + const char *fname, /* function name */ char *desc, /* operation description */ xfs_inode_t *ip, /* incore inode pointer */ xfs_extnum_t idx, /* index of entry updated */ @@ -3861,7 +3824,7 @@ xfs_bmap_trace_post_update( */ STATIC void xfs_bmap_trace_pre_update( - char *fname, /* function name */ + const char *fname, /* function name */ char *desc, /* operation description */ xfs_inode_t *ip, /* incore inode pointer */ xfs_extnum_t idx, /* index of entry to be updated */ @@ -4478,9 +4441,6 @@ xfs_bmap_read_extents( xfs_buf_t *bp; /* buffer for "block" */ int error; /* error return value */ xfs_exntfmt_t exntf; /* XFS_EXTFMT_NOSTATE, if checking */ -#ifdef XFS_BMAP_TRACE - static char fname[] = "xfs_bmap_read_extents"; -#endif xfs_extnum_t i, j; /* index into the extents list */ xfs_ifork_t *ifp; /* fork structure */ int level; /* btree level, for checking */ @@ -4597,7 +4557,7 @@ xfs_bmap_read_extents( } ASSERT(i == (ifp->if_bytes / (uint)sizeof(xfs_bmbt_rec_t))); ASSERT(i == XFS_IFORK_NEXTENTS(ip, whichfork)); - xfs_bmap_trace_exlist(fname, ip, i, whichfork); + XFS_BMAP_TRACE_EXLIST(ip, i, whichfork); return 0; error0: xfs_trans_brelse(tp, bp); @@ -4625,7 +4585,7 @@ xfs_bmap_trace_exlist( for (idx = 0; idx < cnt; idx++) { ep = xfs_iext_get_ext(ifp, idx); xfs_bmbt_get_all(ep, &s); - xfs_bmap_trace_insert(fname, "exlist", ip, idx, 1, &s, NULL, + XFS_BMAP_TRACE_INSERT("exlist", ip, idx, 1, &s, NULL, whichfork); } } Index: linux/fs/xfs/xfs_bmap.h =================================================================== --- linux.orig/fs/xfs/xfs_bmap.h +++ linux/fs/xfs/xfs_bmap.h @@ -144,12 +144,14 @@ extern ktrace_t *xfs_bmap_trace_buf; */ void xfs_bmap_trace_exlist( - 
char *fname, /* function name */ + const char *fname, /* function name */ struct xfs_inode *ip, /* incore inode pointer */ xfs_extnum_t cnt, /* count of entries in list */ int whichfork); /* data or attr fork */ +#define XFS_BMAP_TRACE_EXLIST(ip,c,w) \ + xfs_bmap_trace_exlist(__FUNCTION__,ip,c,w) #else -#define xfs_bmap_trace_exlist(f,ip,c,w) +#define XFS_BMAP_TRACE_EXLIST(ip,c,w) #endif /* Index: linux/fs/xfs/xfs_inode.c =================================================================== --- linux.orig/fs/xfs/xfs_inode.c +++ linux/fs/xfs/xfs_inode.c @@ -644,8 +644,7 @@ xfs_iformat_extents( ep->l1 = INT_GET(get_unaligned((__uint64_t*)&dp->l1), ARCH_CONVERT); } - xfs_bmap_trace_exlist("xfs_iformat_extents", ip, nex, - whichfork); + XFS_BMAP_TRACE_EXLIST(ip, nex, whichfork); if (whichfork != XFS_DATA_FORK || XFS_EXTFMT_INODE(ip) == XFS_EXTFMT_NOSTATE) if (unlikely(xfs_check_nostate_extents( @@ -2876,9 +2875,6 @@ xfs_iextents_copy( int copied; xfs_bmbt_rec_t *dest_ep; xfs_bmbt_rec_t *ep; -#ifdef XFS_BMAP_TRACE - static char fname[] = "xfs_iextents_copy"; -#endif int i; xfs_ifork_t *ifp; int nrecs; @@ -2889,7 +2885,7 @@ xfs_iextents_copy( ASSERT(ifp->if_bytes > 0); nrecs = ifp->if_bytes / (uint)sizeof(xfs_bmbt_rec_t); - xfs_bmap_trace_exlist(fname, ip, nrecs, whichfork); + XFS_BMAP_TRACE_EXLIST(ip, nrecs, whichfork); ASSERT(nrecs > 0); /* From owner-xfs@oss.sgi.com Sun Jun 24 22:50:22 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 22:50:23 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5P5oCdq020911 for ; Sun, 24 Jun 2007 22:50:20 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com 
(950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA27366; Fri, 22 Jun 2007 09:52:26 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5LNqOAf125633736; Fri, 22 Jun 2007 09:52:25 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5LNqM0a129965607; Fri, 22 Jun 2007 09:52:22 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 22 Jun 2007 09:52:22 +1000 From: David Chinner To: Peter Cordes Cc: xfs@oss.sgi.com Subject: Re: XFS_IOC_RESVSP64 for swap files Message-ID: <20070621235222.GY85884050@sgi.com> References: <20070617100822.GA4586@cordes.ca> <20070619043333.GJ86004887@sgi.com> <20070621061449.GB11200@cordes.ca> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070621061449.GB11200@cordes.ca> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11892 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Jun 21, 2007 at 03:14:49AM -0300, Peter Cordes wrote: > On Tue, Jun 19, 2007 at 02:33:33PM +1000, David Chinner wrote: > > On Sun, Jun 17, 2007 at 07:08:23AM -0300, Peter Cordes wrote: > > > Hi XFS list. I'm not subscribed, please CC me. > > > > > > Programs such as swapspace and swapd create new swap files when vmem runs > > > low. They would benefit hugely from being able to create a swapfile without > > > any significant disk I/O. (If a process grabs a lot of memory quickly, the > > > system will be swapping hard while swapspace(8) is writing a swapfile.) > > > > > but it [exposing stale data] would still be useful for making swap files > > > even if only root could do it. 
> >
> > Still a potential security hole.
>
> Root can read the device file, so how is letting root expose stale data any
> worse? If a program run by root makes a file with mode 0600, and then calls
> XFS_IOC_EXPOSE_MY_STALE_DATA_TO_EVERYONE, where's the security problem?

If a file is not 0600 or is not owned by root, then you've got a problem.
Even if you only allow root to use the ioctl, there's still plenty of ways
that you can screw up and expose data to normal users with something that
causes persistent exposure.....

> Ok. I didn't really want to recreate my /var/tmp filesystem with
> unwritten=0, but I really wish I had
> XFS_IOC_EXPOSE_MY_STALE_DATA_TO_EVERYONE on my desktop machine. I think
> dynamic swap file creation is a cool idea, and that ioctl would make it work
> perfectly.

I don't think XFS-specific hacks are the way to achieve this. Perhaps you
want to look at ->fallocate and introduce a new mode there for
preallocating uninitialised swapfile extents.

> This ioctl is only useful for making swap files. Nothing else cares if the
> file has "holes" or not. But for that one application, it's great. There
> are lots of ways root can shoot himself in the foot, and I don't think
> adding one more is enough reason to not add an ioctl.
>
> Is it just that you don't want to take time to implement such a feature, or
> would you reject a patch that added it? (Not that I'm volunteering,
> necessarily.)

I think XFS is the wrong place to do this. If you want pre-allocated swap
files then a generic solution needs to be implemented.

Cheers,

Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Jun 24 22:50:26 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 22:50:30 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5P5oCds020911 for ; Sun, 24 Jun 2007 22:50:23 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA27646; Fri, 22 Jun 2007 10:00:22 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5M00JAf128666857; Fri, 22 Jun 2007 10:00:20 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5LNxWOa129937941; Fri, 22 Jun 2007 09:59:32 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 22 Jun 2007 09:59:32 +1000 From: David Chinner To: Sebastian Brings Cc: Justin Piszcz , Robert Petkus , xfs@oss.sgi.com Subject: Re: Poor performance -- poor config? 
Message-ID: <20070621235932.GZ85884050@sgi.com>
References: <4679951E.8050601@bnl.gov> <46799939.2080503@bnl.gov> <55EF1E5D5804A542A6CA37E446DDC206F5C5AA@mapibe17.exchange.xchg>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <55EF1E5D5804A542A6CA37E446DDC206F5C5AA@mapibe17.exchange.xchg>
User-Agent: Mutt/1.4.2.1i
X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com
X-Virus-Status: Clean
X-archive-position: 11894
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: dgc@sgi.com
Precedence: bulk
X-list: xfs

On Thu, Jun 21, 2007 at 08:37:36AM +0200, Sebastian Brings wrote:
> Not sure if it makes much sense to set stripe unit and width for a Raid
> which appears as a single device.

Certainly it does. That way you get stripe-aligned allocation and
therefore you are much more likely to get full-stripe-width writes
instead of unaligned writes that force RMW cycles on the RAID controller
for parity calculations.

> As you state, the "width" of your DS lun is 4 x 512K == 2MB. In case you
> don't have write cache enabled each of your 1MB writes will cause the DS
> to write to two out of four disks only, causing heavy overhead to create
> parity.

You're assuming stripe-aligned I/O there. That 1MB could hit 3 of the 4
data disks - if you don't have a stripe unit set then that will be the
common case. i.e. it's worse than you think :/

Cheers,

Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Jun 24 22:50:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 22:50:24 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.7 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_32, J_CHICKENPOX_44,J_CHICKENPOX_45,J_CHICKENPOX_46,J_CHICKENPOX_62, J_CHICKENPOX_63 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5P5oCdo020911 for ; Sun, 24 Jun 2007 22:50:14 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA10089; Fri, 22 Jun 2007 18:44:42 +1000 Message-ID: <467B8BFA.2050107@sgi.com> Date: Fri, 22 Jun 2007 18:44:42 +1000 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.4 (Macintosh/20070604) MIME-Version: 1.0 To: David Chinner CC: xfs-dev , xfs-oss Subject: Re: Review: Multi-File Data Streams V2 References: <20070613041629.GI86004887@sgi.com> In-Reply-To: <20070613041629.GI86004887@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11893 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Hi Dave, For the xfs_bmap.c/xfs_bmap_btalloc() * Might be clearer something like this: ------------------ if (nullfb) { if (ap->userdata && xfs_inode_is_filestream(ap->ip)) { ag = xfs_filestream_lookup_ag(ap->ip); ag = (ag != NULLAGNUMBER) ? 
ag : 0;
			ap->rval = XFS_AGB_TO_FSB(mp, ag, 0);
		} else {
			ap->rval = XFS_INO_TO_FSB(mp, ap->ip->i_ino);
		}
	} else
		ap->rval = ap->firstblock;
-------------------
Unless we need "ag" set for the non-userdata && filestream case. I think
Barry was questioning this today.

* It is interesting that at the start we set up the fsb for
(userdata & filestreams) and then in a bunch of other places it tests
just for filestreams - although, there is one spot further down which
also tests for userdata. I find this a bit confusing (as usual:) - I
thought we were only interested in changing the allocation of userdata
for the filestream.

* As we talked about before, this code seems to come up in a few places:

	need = XFS_MIN_FREELIST_PAG(pag, mp);
	delta = need > pag->pagf_flcount ? need - pag->pagf_flcount : 0;
	longest = (pag->pagf_longest > delta) ?
		(pag->pagf_longest - delta) :
		(pag->pagf_flcount > 0 || pag->pagf_longest > 0);

Perhaps we could macroize/inline-function it? It confused me in
_xfs_filestream_pick_ag() when I was trying to understand it and so
could do with a comment for it too. As I said then, I don't like the way
it uses a boolean as the number of blocks, in the case when the longest
extent is smaller than the excess over the freelist which the
freespace-btree-splits overhead needs. Also, the variables "need" and
"delta" look pretty local to it.

* I want to still look at this a bit more but I have to go home to
dinner....:)

--Tim

David Chinner wrote:
> Concurrent Multi-File Data Streams
>
> In media spaces, video is often stored in a frame-per-file format.
> When dealing with uncompressed realtime HD video streams in this format,
> it is crucial that files do not get fragmented and that multiple files
> are placed contiguously on disk.
>
> When multiple streams are being ingested and played out at the same
> time, it is critical that the filesystem does not cross the streams
> and interleave them together, as this creates seek and readahead
> cache miss latency and prevents both ingest and playout from meeting
> frame rate targets.
>
> This patch introduces a "stream of files" concept in the allocator
> to place all the data from a single stream contiguously on disk so
> that RAID array readahead can be used effectively. Each additional
> stream gets placed in a different allocation group within the
> filesystem, thereby ensuring that we don't cross any streams. When
> an AG fills up, we select a new AG for the stream that is not in
> use.
>
> The core of the functionality is the stream tracking - each inode
> that we create in a directory needs to be associated with the
> directory's stream. Hence every time we create a file, we look up
> the directory's stream object and associate the new file with that
> object.
>
> Once we have a stream object for a file, we use the AG that the
> stream object points to for allocations. If we can't allocate in
> that AG (e.g. it is full), we move the entire stream to another AG.
> Other inodes in the same stream are moved to the new AG on their
> next allocation (i.e. lazy update).
>
> Stream objects are kept in a cache and hold a reference on the
> inode. Hence the inode cannot be reclaimed while there is an
> outstanding stream reference. This means that on unlink we need to
> remove the stream association, and we also need to flush all the
> associations on certain events that want to reclaim all unreferenced
> inodes (e.g. filesystem freeze).
>
> Credits: the original filestream allocator on Irix was written by
> Glen Overby; the Linux port and rewrite are by Nathan Scott and Sam
> Vaughan (none of whom work at SGI any more). I just picked up the
> pieces and beat it repeatedly with a big stick until it passed XFSQA.
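[As a standalone sketch of the inline helper suggested in the review above for the repeated freelist-adjusted longest-extent computation. The types are simplified stand-ins and the helper name is an assumption for illustration, not something that appears in the patch:]

```c
/* Simplified stand-ins for the kernel types; illustration only. */
typedef unsigned int xfs_extlen_t;

struct pag_free_info {
	xfs_extlen_t	pagf_flcount;	/* blocks on the AG free list */
	xfs_extlen_t	pagf_longest;	/* longest free extent in the AG */
};

/*
 * Longest extent available for allocation once the free list has been
 * topped up.  "need" is the free-list target (XFS_MIN_FREELIST_PAG in
 * the patch); "delta" is the shortfall that refilling the free list
 * would take out of the longest extent.  If that shortfall consumes
 * the longest extent entirely, fall back to 1 when any free space
 * remains at all, 0 otherwise - the boolean-as-block-count behaviour
 * the review objects to.
 */
static inline xfs_extlen_t
xfs_longest_free_extent(
	struct pag_free_info	*pag,
	xfs_extlen_t		need)
{
	xfs_extlen_t	delta;

	delta = need > pag->pagf_flcount ? need - pag->pagf_flcount : 0;
	if (pag->pagf_longest > delta)
		return pag->pagf_longest - delta;
	return (pag->pagf_flcount > 0 || pag->pagf_longest > 0) ? 1 : 0;
}
```

Wrapping it this way would also confine "need" and "delta" to the helper, addressing the observation that they look local to the computation.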
> > Version 2: > > o fold xfs_bmap_filestream() into xfs_bmap_btalloc() > o use ktrace infrastructure for debug code in xfs_filestream.c > o wrap repeated filestream inode checks. > o rename per-AG filestream reference counting macros and convert > to static inline > o remove debug from xfs_mru_cache.[ch] > o fix function call/error check formatting. > o removed unnecessary fstrm_mnt_data_t structure. > o cleaned up ASSERT checks > o cleaned up namespace-less globals in xfs_mru_cache.c > o removed unnecessary casts > > --- > fs/xfs/Makefile-linux-2.6 | 2 > fs/xfs/linux-2.6/xfs_globals.c | 1 > fs/xfs/linux-2.6/xfs_linux.h | 1 > fs/xfs/linux-2.6/xfs_sysctl.c | 11 > fs/xfs/linux-2.6/xfs_sysctl.h | 2 > fs/xfs/quota/xfs_qm.c | 3 > fs/xfs/xfs.h | 1 > fs/xfs/xfs_ag.h | 1 > fs/xfs/xfs_bmap.c | 68 +++ > fs/xfs/xfs_clnt.h | 2 > fs/xfs/xfs_dinode.h | 4 > fs/xfs/xfs_filestream.c | 742 +++++++++++++++++++++++++++++++++++++++++ > fs/xfs/xfs_filestream.h | 135 +++++++ > fs/xfs/xfs_fs.h | 1 > fs/xfs/xfs_fsops.c | 2 > fs/xfs/xfs_inode.c | 17 > fs/xfs/xfs_mount.h | 4 > fs/xfs/xfs_mru_cache.c | 494 +++++++++++++++++++++++++++ > fs/xfs/xfs_mru_cache.h | 219 ++++++++++++ > fs/xfs/xfs_vfsops.c | 25 + > fs/xfs/xfs_vnodeops.c | 22 + > fs/xfs/xfsidbg.c | 188 ++++++++++ > 22 files changed, 1934 insertions(+), 11 deletions(-) > > Index: 2.6.x-xfs-new/fs/xfs/Makefile-linux-2.6 > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/Makefile-linux-2.6 2007-06-13 13:58:15.727518215 +1000 > +++ 2.6.x-xfs-new/fs/xfs/Makefile-linux-2.6 2007-06-13 14:11:28.440325006 +1000 > @@ -54,6 +54,7 @@ xfs-y += xfs_alloc.o \ > xfs_dir2_sf.o \ > xfs_error.o \ > xfs_extfree_item.o \ > + xfs_filestream.o \ > xfs_fsops.o \ > xfs_ialloc.o \ > xfs_ialloc_btree.o \ > @@ -67,6 +68,7 @@ xfs-y += xfs_alloc.o \ > xfs_log.o \ > xfs_log_recover.o \ > xfs_mount.o \ > + xfs_mru_cache.o \ > xfs_rename.o \ > xfs_trans.o \ > xfs_trans_ail.o \ > Index: 
2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_globals.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_globals.c 2007-06-13 13:58:15.739516660 +1000 > +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_globals.c 2007-06-13 14:11:28.592305170 +1000 > @@ -49,6 +49,7 @@ xfs_param_t xfs_params = { > .inherit_nosym = { 0, 0, 1 }, > .rotorstep = { 1, 1, 255 }, > .inherit_nodfrg = { 0, 1, 1 }, > + .fstrm_timer = { 1, 50, 3600*100}, > }; > > /* > Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_linux.h > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_linux.h 2007-06-13 13:58:15.739516660 +1000 > +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_linux.h 2007-06-13 14:11:28.600304126 +1000 > @@ -132,6 +132,7 @@ > #define xfs_inherit_nosymlinks xfs_params.inherit_nosym.val > #define xfs_rotorstep xfs_params.rotorstep.val > #define xfs_inherit_nodefrag xfs_params.inherit_nodfrg.val > +#define xfs_fstrm_centisecs xfs_params.fstrm_timer.val > > #define current_cpu() (raw_smp_processor_id()) > #define current_pid() (current->pid) > Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_sysctl.c 2007-06-13 13:58:15.739516660 +1000 > +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.c 2007-06-13 14:11:28.604303604 +1000 > @@ -243,6 +243,17 @@ static ctl_table xfs_table[] = { > .extra1 = &xfs_params.inherit_nodfrg.min, > .extra2 = &xfs_params.inherit_nodfrg.max > }, > + { > + .ctl_name = XFS_FILESTREAM_TIMER, > + .procname = "filestream_centisecs", > + .data = &xfs_params.fstrm_timer.val, > + .maxlen = sizeof(int), > + .mode = 0644, > + .proc_handler = &proc_dointvec_minmax, > + .strategy = &sysctl_intvec, > + .extra1 = &xfs_params.fstrm_timer.min, > + .extra2 = &xfs_params.fstrm_timer.max, > + }, > /* please keep this the last entry */ > #ifdef CONFIG_PROC_FS > { > Index: 
2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.h > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_sysctl.h 2007-06-13 13:58:15.739516660 +1000 > +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.h 2007-06-13 14:11:28.612302560 +1000 > @@ -50,6 +50,7 @@ typedef struct xfs_param { > xfs_sysctl_val_t inherit_nosym; /* Inherit the "nosymlinks" flag. */ > xfs_sysctl_val_t rotorstep; /* inode32 AG rotoring control knob */ > xfs_sysctl_val_t inherit_nodfrg;/* Inherit the "nodefrag" inode flag. */ > + xfs_sysctl_val_t fstrm_timer; /* Filestream dir-AG assoc'n timeout. */ > } xfs_param_t; > > /* > @@ -89,6 +90,7 @@ enum { > XFS_INHERIT_NOSYM = 19, > XFS_ROTORSTEP = 20, > XFS_INHERIT_NODFRG = 21, > + XFS_FILESTREAM_TIMER = 22, > }; > > extern xfs_param_t xfs_params; > Index: 2.6.x-xfs-new/fs/xfs/xfs_ag.h > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_ag.h 2007-06-13 13:58:15.751515106 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_ag.h 2007-06-13 14:11:28.616302038 +1000 > @@ -196,6 +196,7 @@ typedef struct xfs_perag > lock_t pagb_lock; /* lock for pagb_list */ > #endif > xfs_perag_busy_t *pagb_list; /* unstable blocks */ > + atomic_t pagf_fstrms; /* # of filestreams active in this AG */ > > /* > * inode allocation search lookup optimisation. > Index: 2.6.x-xfs-new/fs/xfs/xfs_bmap.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_bmap.c 2007-06-13 13:58:15.751515106 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_bmap.c 2007-06-13 14:11:28.636299428 +1000 > @@ -52,6 +52,7 @@ > #include "xfs_quota.h" > #include "xfs_trans_space.h" > #include "xfs_buf_item.h" > +#include "xfs_filestream.h" > > > #ifdef DEBUG > @@ -171,6 +172,14 @@ xfs_bmap_alloc( > xfs_bmalloca_t *ap); /* bmap alloc argument struct */ > > /* > + * xfs_bmap_filestreams is the underlying allocator when filestreams are > + * enabled. 
> + */ > +STATIC int /* error */ > +xfs_bmap_filestreams( > + xfs_bmalloca_t *ap); /* bmap alloc argument struct */ > + > +/* > * Transform a btree format file with only one leaf node, where the > * extents list will fit in the inode, into an extents format file. > * Since the file extents are already in-core, all we have to do is > @@ -2724,7 +2733,12 @@ xfs_bmap_btalloc( > } > nullfb = ap->firstblock == NULLFSBLOCK; > fb_agno = nullfb ? NULLAGNUMBER : XFS_FSB_TO_AGNO(mp, ap->firstblock); > - if (nullfb) > + if (nullfb && xfs_inode_is_filestream(ap->ip)) { > + ag = xfs_filestream_lookup_ag(ap->ip); > + ag = (ag != NULLAGNUMBER) ? ag : 0; > + ap->rval = (ap->userdata) ? XFS_AGB_TO_FSB(mp, ag, 0) : > + XFS_INO_TO_FSB(mp, ap->ip->i_ino); > + } else if (nullfb) > ap->rval = XFS_INO_TO_FSB(mp, ap->ip->i_ino); > else > ap->rval = ap->firstblock; > @@ -2750,13 +2764,22 @@ xfs_bmap_btalloc( > args.firstblock = ap->firstblock; > blen = 0; > if (nullfb) { > - args.type = XFS_ALLOCTYPE_START_BNO; > + if (xfs_inode_is_filestream(ap->ip)) > + args.type = XFS_ALLOCTYPE_NEAR_BNO; > + else > + args.type = XFS_ALLOCTYPE_START_BNO; > args.total = ap->total; > + > /* > - * Find the longest available space. > - * We're going to try for the whole allocation at once. > + * Search for an allocation group with a single extent > + * large enough for the request. > + * > + * If one isn't found, then adjust the minimum allocation > + * size to the largest space found. > */ > startag = ag = XFS_FSB_TO_AGNO(mp, args.fsbno); > + if (startag == NULLAGNUMBER) > + startag = ag = 0; > notinit = 0; > down_read(&mp->m_peraglock); > while (blen < ap->alen) { > @@ -2782,6 +2805,35 @@ xfs_bmap_btalloc( > blen = longest; > } else > notinit = 1; > + > + if (xfs_inode_is_filestream(ap->ip)) { > + if (blen >= ap->alen) > + break; > + > + if (ap->userdata) { > + /* > + * If startag is an invalid AG, we've > + * come here once before and > + * xfs_filestream_new_ag picked the > + * best currently available. 
> + * > + * Don't continue looping, since we > + * could loop forever. > + */ > + if (startag == NULLAGNUMBER) > + break; > + > + error = xfs_filestream_new_ag(ap, &ag); > + if (error) { > + up_read(&mp->m_peraglock); > + return error; > + } > + > + /* loop again to set 'blen'*/ > + startag = NULLAGNUMBER; > + continue; > + } > + } > if (++ag == mp->m_sb.sb_agcount) > ag = 0; > if (ag == startag) > @@ -2806,8 +2858,14 @@ xfs_bmap_btalloc( > */ > else > args.minlen = ap->alen; > + > + if (xfs_inode_is_filestream(ap->ip)) > + ap->rval = args.fsbno = XFS_AGB_TO_FSB(mp, ag, 0); > } else if (ap->low) { > - args.type = XFS_ALLOCTYPE_START_BNO; > + if (xfs_inode_is_filestream(ap->ip)) > + args.type = XFS_ALLOCTYPE_FIRST_AG; > + else > + args.type = XFS_ALLOCTYPE_START_BNO; > args.total = args.minlen = ap->minlen; > } else { > args.type = XFS_ALLOCTYPE_NEAR_BNO; > Index: 2.6.x-xfs-new/fs/xfs/xfs_clnt.h > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_clnt.h 2007-06-13 13:58:15.759514069 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_clnt.h 2007-06-13 14:11:28.640298906 +1000 > @@ -99,5 +99,7 @@ struct xfs_mount_args { > */ > #define XFSMNT2_COMPAT_IOSIZE 0x00000001 /* don't report large preferred > * I/O size in stat(2) */ > +#define XFSMNT2_FILESTREAMS 0x00000002 /* enable the filestreams > + * allocator */ > > #endif /* __XFS_CLNT_H__ */ > Index: 2.6.x-xfs-new/fs/xfs/xfs_dinode.h > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_dinode.h 2007-06-13 13:58:15.767513033 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_dinode.h 2007-06-13 14:11:28.648297862 +1000 > @@ -257,6 +257,7 @@ typedef enum xfs_dinode_fmt > #define XFS_DIFLAG_EXTSIZE_BIT 11 /* inode extent size allocator hint */ > #define XFS_DIFLAG_EXTSZINHERIT_BIT 12 /* inherit inode extent size */ > #define XFS_DIFLAG_NODEFRAG_BIT 13 /* do not reorganize/defragment */ > +#define XFS_DIFLAG_FILESTREAM_BIT 14 /* use filestream 
allocator */ > #define XFS_DIFLAG_REALTIME (1 << XFS_DIFLAG_REALTIME_BIT) > #define XFS_DIFLAG_PREALLOC (1 << XFS_DIFLAG_PREALLOC_BIT) > #define XFS_DIFLAG_NEWRTBM (1 << XFS_DIFLAG_NEWRTBM_BIT) > @@ -271,12 +272,13 @@ typedef enum xfs_dinode_fmt > #define XFS_DIFLAG_EXTSIZE (1 << XFS_DIFLAG_EXTSIZE_BIT) > #define XFS_DIFLAG_EXTSZINHERIT (1 << XFS_DIFLAG_EXTSZINHERIT_BIT) > #define XFS_DIFLAG_NODEFRAG (1 << XFS_DIFLAG_NODEFRAG_BIT) > +#define XFS_DIFLAG_FILESTREAM (1 << XFS_DIFLAG_FILESTREAM_BIT) > > #define XFS_DIFLAG_ANY \ > (XFS_DIFLAG_REALTIME | XFS_DIFLAG_PREALLOC | XFS_DIFLAG_NEWRTBM | \ > XFS_DIFLAG_IMMUTABLE | XFS_DIFLAG_APPEND | XFS_DIFLAG_SYNC | \ > XFS_DIFLAG_NOATIME | XFS_DIFLAG_NODUMP | XFS_DIFLAG_RTINHERIT | \ > XFS_DIFLAG_PROJINHERIT | XFS_DIFLAG_NOSYMLINKS | XFS_DIFLAG_EXTSIZE | \ > - XFS_DIFLAG_EXTSZINHERIT | XFS_DIFLAG_NODEFRAG) > + XFS_DIFLAG_EXTSZINHERIT | XFS_DIFLAG_NODEFRAG | XFS_DIFLAG_FILESTREAM) > > #endif /* __XFS_DINODE_H__ */ > Index: 2.6.x-xfs-new/fs/xfs/xfs_filestream.c > =================================================================== > --- /dev/null 1970-01-01 00:00:00.000000000 +0000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_filestream.c 2007-06-13 14:11:28.676294208 +1000 > @@ -0,0 +1,742 @@ > +/* > + * Copyright (c) 2000-2005 Silicon Graphics, Inc. > + * All Rights Reserved. > + * > + * This program is free software; you can redistribute it and/or > + * modify it under the terms of the GNU General Public License as > + * published by the Free Software Foundation. > + * > + * This program is distributed in the hope that it would be useful, > + * but WITHOUT ANY WARRANTY; without even the implied warranty of > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the > + * GNU General Public License for more details. 
> + * > + * You should have received a copy of the GNU General Public License > + * along with this program; if not, write the Free Software Foundation, > + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA > + */ > +#include "xfs.h" > +#include "xfs_bmap_btree.h" > +#include "xfs_inum.h" > +#include "xfs_dir2.h" > +#include "xfs_dir2_sf.h" > +#include "xfs_attr_sf.h" > +#include "xfs_dinode.h" > +#include "xfs_inode.h" > +#include "xfs_ag.h" > +#include "xfs_dmapi.h" > +#include "xfs_log.h" > +#include "xfs_trans.h" > +#include "xfs_sb.h" > +#include "xfs_mount.h" > +#include "xfs_bmap.h" > +#include "xfs_alloc.h" > +#include "xfs_utils.h" > +#include "xfs_mru_cache.h" > +#include "xfs_filestream.h" > + > +#ifdef XFS_FILESTREAMS_TRACE > + > +ktrace_t *xfs_filestreams_trace_buf; > + > +STATIC void > +xfs_filestreams_trace( > + xfs_mount_t *mp, /* mount point */ > + int type, /* type of trace */ > + const char *func, /* source function */ > + int line, /* source line number */ > + __psunsigned_t arg0, > + __psunsigned_t arg1, > + __psunsigned_t arg2, > + __psunsigned_t arg3, > + __psunsigned_t arg4, > + __psunsigned_t arg5) > +{ > + ktrace_enter(xfs_filestreams_trace_buf, > + (void *)(__psint_t)(type | (line << 16)), > + (void *)func, > + (void *)(__psunsigned_t)current_pid(), > + (void *)mp, > + (void *)(__psunsigned_t)arg0, > + (void *)(__psunsigned_t)arg1, > + (void *)(__psunsigned_t)arg2, > + (void *)(__psunsigned_t)arg3, > + (void *)(__psunsigned_t)arg4, > + (void *)(__psunsigned_t)arg5, > + NULL, NULL, NULL, NULL, NULL, NULL); > +} > + > +#define TRACE0(mp,t) TRACE6(mp,t,0,0,0,0,0,0) > +#define TRACE1(mp,t,a0) TRACE6(mp,t,a0,0,0,0,0,0) > +#define TRACE2(mp,t,a0,a1) TRACE6(mp,t,a0,a1,0,0,0,0) > +#define TRACE3(mp,t,a0,a1,a2) TRACE6(mp,t,a0,a1,a2,0,0,0) > +#define TRACE4(mp,t,a0,a1,a2,a3) TRACE6(mp,t,a0,a1,a2,a3,0,0) > +#define TRACE5(mp,t,a0,a1,a2,a3,a4) TRACE6(mp,t,a0,a1,a2,a3,a4,0) > +#define TRACE6(mp,t,a0,a1,a2,a3,a4,a5) \ > + 
xfs_filestreams_trace(mp, t, __FUNCTION__, __LINE__, \ > + (__psunsigned_t)a0, (__psunsigned_t)a1, \ > + (__psunsigned_t)a2, (__psunsigned_t)a3, \ > + (__psunsigned_t)a4, (__psunsigned_t)a5) > + > +#define TRACE_AG_SCAN(mp, ag, ag2) \ > + TRACE2(mp, XFS_FSTRM_KTRACE_AGSCAN, ag, ag2); > +#define TRACE_AG_PICK1(mp, max_ag, maxfree) \ > + TRACE2(mp, XFS_FSTRM_KTRACE_AGPICK1, max_ag, maxfree); > +#define TRACE_AG_PICK2(mp, ag, ag2, cnt, free, scan, flag) \ > + TRACE6(mp, XFS_FSTRM_KTRACE_AGPICK2, ag, ag2, \ > + cnt, free, scan, flag) > +#define TRACE_UPDATE(mp, ip, ag, cnt, ag2, cnt2) \ > + TRACE5(mp, XFS_FSTRM_KTRACE_UPDATE, ip, ag, cnt, ag2, cnt2) > +#define TRACE_FREE(mp, ip, pip, ag, cnt) \ > + TRACE4(mp, XFS_FSTRM_KTRACE_FREE, ip, pip, ag, cnt) > +#define TRACE_LOOKUP(mp, ip, pip, ag, cnt) \ > + TRACE4(mp, XFS_FSTRM_KTRACE_ITEM_LOOKUP, ip, pip, ag, cnt) > +#define TRACE_ASSOCIATE(mp, ip, pip, ag, cnt) \ > + TRACE4(mp, XFS_FSTRM_KTRACE_ASSOCIATE, ip, pip, ag, cnt) > +#define TRACE_MOVEAG(mp, ip, pip, oag, ocnt, nag, ncnt) \ > + TRACE6(mp, XFS_FSTRM_KTRACE_MOVEAG, ip, pip, oag, ocnt, nag, ncnt) > +#define TRACE_ORPHAN(mp, ip, ag) \ > + TRACE2(mp, XFS_FSTRM_KTRACE_ORPHAN, ip, ag); > + > + > +#else > +#define TRACE_AG_SCAN(mp, ag, ag2) > +#define TRACE_AG_PICK1(mp, max_ag, maxfree) > +#define TRACE_AG_PICK2(mp, ag, ag2, cnt, free, scan, flag) > +#define TRACE_UPDATE(mp, ip, ag, cnt, ag2, cnt2) > +#define TRACE_FREE(mp, ip, pip, ag, cnt) > +#define TRACE_LOOKUP(mp, ip, pip, ag, cnt) > +#define TRACE_ASSOCIATE(mp, ip, pip, ag, cnt) > +#define TRACE_MOVEAG(mp, ip, pip, oag, ocnt, nag, ncnt) > +#define TRACE_ORPHAN(mp, ip, ag) > +#endif > + > +static kmem_zone_t *item_zone; > + > +/* > + * Structure for associating a file or a directory with an allocation group. 
> + * The parent directory pointer is only needed for files, but since there will > + * generally be vastly more files than directories in the cache, using the same > + * data structure simplifies the code with very little memory overhead. > + */ > +typedef struct fstrm_item > +{ > + xfs_agnumber_t ag; /* AG currently in use for the file/directory. */ > + xfs_inode_t *ip; /* inode self-pointer. */ > + xfs_inode_t *pip; /* Parent directory inode pointer. */ > +} fstrm_item_t; > + > + > +/* > + * Scan the AGs starting at startag looking for an AG that isn't in use and has > + * at least minlen blocks free. > + */ > +static int > +_xfs_filestream_pick_ag( > + xfs_mount_t *mp, > + xfs_agnumber_t startag, > + xfs_agnumber_t *agp, > + int flags, > + xfs_extlen_t minlen) > +{ > + int err, trylock, nscan; > + xfs_extlen_t delta, longest, need, free, minfree, maxfree = 0; > + xfs_agnumber_t ag, max_ag = NULLAGNUMBER; > + struct xfs_perag *pag; > + > + /* 2% of an AG's blocks must be free for it to be chosen. */ > + minfree = mp->m_sb.sb_agblocks / 50; > + > + ag = startag; > + *agp = NULLAGNUMBER; > + > + /* For the first pass, don't sleep trying to init the per-AG. */ > + trylock = XFS_ALLOC_FLAG_TRYLOCK; > + > + for (nscan = 0; 1; nscan++) { > + > + TRACE_AG_SCAN(mp, ag, xfs_filestream_peek_ag(mp, ag)); > + > + pag = mp->m_perag + ag; > + > + if (!pag->pagf_init) { > + err = xfs_alloc_pagf_init(mp, NULL, ag, trylock); > + if (err && !trylock) > + return err; > + } > + > + /* Might fail sometimes during the 1st pass with trylock set. */ > + if (!pag->pagf_init) > + goto next_ag; > + > + /* Keep track of the AG with the most free blocks. 
*/ > + if (pag->pagf_freeblks > maxfree) { > + maxfree = pag->pagf_freeblks; > + max_ag = ag; > + } > + > + /* > + * The AG reference count does two things: it enforces mutual > + * exclusion when examining the suitability of an AG in this > + * loop, and it guards against two filestreams being established > + * in the same AG as each other. > + */ > + if (xfs_filestream_get_ag(mp, ag) > 1) { > + xfs_filestream_put_ag(mp, ag); > + goto next_ag; > + } > + > + need = XFS_MIN_FREELIST_PAG(pag, mp); > + delta = need > pag->pagf_flcount ? need - pag->pagf_flcount : 0; > + longest = (pag->pagf_longest > delta) ? > + (pag->pagf_longest - delta) : > + (pag->pagf_flcount > 0 || pag->pagf_longest > 0); > + > + if (((minlen && longest >= minlen) || > + (!minlen && pag->pagf_freeblks >= minfree)) && > + (!pag->pagf_metadata || !(flags & XFS_PICK_USERDATA) || > + (flags & XFS_PICK_LOWSPACE))) { > + > + /* Break out, retaining the reference on the AG. */ > + free = pag->pagf_freeblks; > + *agp = ag; > + break; > + } > + > + /* Drop the reference on this AG, it's not usable. */ > + xfs_filestream_put_ag(mp, ag); > +next_ag: > + /* Move to the next AG, wrapping to AG 0 if necessary. */ > + if (++ag >= mp->m_sb.sb_agcount) > + ag = 0; > + > + /* If a full pass of the AGs hasn't been done yet, continue. */ > + if (ag != startag) > + continue; > + > + /* Allow sleeping in xfs_alloc_pagf_init() on the 2nd pass. */ > + if (trylock != 0) { > + trylock = 0; > + continue; > + } > + > + /* Finally, if lowspace wasn't set, set it for the 3rd pass. */ > + if (!(flags & XFS_PICK_LOWSPACE)) { > + flags |= XFS_PICK_LOWSPACE; > + continue; > + } > + > + /* > + * Take the AG with the most free space, regardless of whether > + * it's already in use by another filestream. 
> + */ > + if (max_ag != NULLAGNUMBER) { > + xfs_filestream_get_ag(mp, max_ag); > + TRACE_AG_PICK1(mp, max_ag, maxfree); > + free = maxfree; > + *agp = max_ag; > + break; > + } > + > + /* take AG 0 if none matched */ > + TRACE_AG_PICK1(mp, max_ag, maxfree); > + *agp = 0; > + return 0; > + } > + > + TRACE_AG_PICK2(mp, startag, *agp, xfs_filestream_peek_ag(mp, *agp), > + free, nscan, flags); > + > + return 0; > +} > + > +/* > + * Set the allocation group number for a file or a directory, updating inode > + * references and per-AG references as appropriate. Must be called with the > + * m_peraglock held in read mode. > + */ > +static int > +_xfs_filestream_update_ag( > + xfs_inode_t *ip, > + xfs_inode_t *pip, > + xfs_agnumber_t ag) > +{ > + int err = 0; > + xfs_mount_t *mp; > + xfs_mru_cache_t *cache; > + fstrm_item_t *item; > + xfs_agnumber_t old_ag; > + xfs_inode_t *old_pip; > + > + /* > + * Either ip is a regular file and pip is a directory, or ip is a > + * directory and pip is NULL. > + */ > + ASSERT(ip && (((ip->i_d.di_mode & S_IFREG) && pip && > + (pip->i_d.di_mode & S_IFDIR)) || > + ((ip->i_d.di_mode & S_IFDIR) && !pip))); > + > + mp = ip->i_mount; > + cache = mp->m_filestream; > + > + item = xfs_mru_cache_lookup(cache, ip->i_ino); > + if (item) { > + ASSERT(item->ip == ip); > + old_ag = item->ag; > + item->ag = ag; > + old_pip = item->pip; > + item->pip = pip; > + xfs_mru_cache_done(cache); > + > + /* > + * If the AG has changed, drop the old ref and take a new one, > + * effectively transferring the reference from old to new AG. > + */ > + if (ag != old_ag) { > + xfs_filestream_put_ag(mp, old_ag); > + xfs_filestream_get_ag(mp, ag); > + } > + > + /* > + * If ip is a file and its pip has changed, drop the old ref and > + * take a new one. 
> + */ > + if (pip && pip != old_pip) { > + IRELE(old_pip); > + IHOLD(pip); > + } > + > + TRACE_UPDATE(mp, ip, old_ag, xfs_filestream_peek_ag(mp, old_ag), > + ag, xfs_filestream_peek_ag(mp, ag)); > + return 0; > + } > + > + item = kmem_zone_zalloc(item_zone, KM_MAYFAIL); > + if (!item) > + return ENOMEM; > + > + item->ag = ag; > + item->ip = ip; > + item->pip = pip; > + > + err = xfs_mru_cache_insert(cache, ip->i_ino, item); > + if (err) { > + kmem_zone_free(item_zone, item); > + return err; > + } > + > + /* Take a reference on the AG. */ > + xfs_filestream_get_ag(mp, ag); > + > + /* > + * Take a reference on the inode itself regardless of whether it's a > + * regular file or a directory. > + */ > + IHOLD(ip); > + > + /* > + * In the case of a regular file, take a reference on the parent inode > + * as well to ensure it remains in-core. > + */ > + if (pip) > + IHOLD(pip); > + > + TRACE_UPDATE(mp, ip, ag, xfs_filestream_peek_ag(mp, ag), > + ag, xfs_filestream_peek_ag(mp, ag)); > + > + return 0; > +} > + > +/* xfs_fstrm_free_func(): callback for freeing cached stream items. */ > +void > +xfs_fstrm_free_func( > + xfs_ino_t ino, > + fstrm_item_t *item) > +{ > + xfs_inode_t *ip = item->ip; > + int ref; > + > + ASSERT(ip->i_ino == ino); > + > + /* Drop the reference taken on the AG when the item was added. */ > + ref = xfs_filestream_put_ag(ip->i_mount, item->ag); > + > + ASSERT(ref >= 0); > + > + /* > + * _xfs_filestream_update_ag() always takes a reference on the inode > + * itself, whether it's a file or a directory. Release it here. > + */ > + IRELE(ip); > + > + /* > + * In the case of a regular file, _xfs_filestream_update_ag() also takes a > + * ref on the parent inode to keep it in-core. Release that too. > + */ > + if (item->pip) > + IRELE(item->pip); > + > + TRACE_FREE(ip->i_mount, ip, item->pip, item->ag, > + xfs_filestream_peek_ag(ip->i_mount, item->ag)); > + > + /* Finally, free the memory allocated for the item. 
*/ > + kmem_zone_free(item_zone, item); > +} > + > +/* > + * xfs_filestream_init() is called at xfs initialisation time to set up the > + * memory zone that will be used for filestream data structure allocation. > + */ > +int > +xfs_filestream_init(void) > +{ > + item_zone = kmem_zone_init(sizeof(fstrm_item_t), "fstrm_item"); > +#ifdef XFS_FILESTREAMS_TRACE > + xfs_filestreams_trace_buf = ktrace_alloc(XFS_FSTRM_KTRACE_SIZE, KM_SLEEP); > +#endif > + return item_zone ? 0 : -ENOMEM; > +} > + > +/* > + * xfs_filestream_uninit() is called at xfs termination time to destroy the > + * memory zone that was used for filestream data structure allocation. > + */ > +void > +xfs_filestream_uninit(void) > +{ > +#ifdef XFS_FILESTREAMS_TRACE > + ktrace_free(xfs_filestreams_trace_buf); > +#endif > + kmem_zone_destroy(item_zone); > +} > + > +/* > + * xfs_filestream_mount() is called when a file system is mounted with the > + * filestream option. It is responsible for allocating the data structures > + * needed to track the new file system's file streams. > + */ > +int > +xfs_filestream_mount( > + xfs_mount_t *mp) > +{ > + int err; > + unsigned int lifetime, grp_count; > + > + /* > + * The filestream timer tunable is currently fixed within the range of > + * one second to four minutes, with five seconds being the default. The > + * group count is somewhat arbitrary, but it'd be nice to adhere to the > + * timer tunable to within about 10 percent. This requires at least 10 > + * groups. > + */ > + lifetime = xfs_fstrm_centisecs * 10; > + grp_count = 10; > + > + err = xfs_mru_cache_create(&mp->m_filestream, lifetime, grp_count, > + (xfs_mru_cache_free_func_t)xfs_fstrm_free_func); > + > + return err; > +} > + > +/* > + * xfs_filestream_unmount() is called when a file system that was mounted with > + * the filestream option is unmounted. It drains the data structures created > + * to track the file system's file streams and frees all the memory that was > + * allocated. 
> + */ > +void > +xfs_filestream_unmount( > + xfs_mount_t *mp) > +{ > + xfs_mru_cache_destroy(mp->m_filestream); > +} > + > +/* > + * If the mount point's m_perag array is going to be reallocated, all > + * outstanding cache entries must be flushed to avoid accessing reference count > + * addresses that have been freed. The call to xfs_filestream_flush() must be > + * made inside the block that holds the m_peraglock in write mode to do the > + * reallocation. > + */ > +void > +xfs_filestream_flush( > + xfs_mount_t *mp) > +{ > + /* point in time flush, so keep the reaper running */ > + xfs_mru_cache_flush(mp->m_filestream, 1); > +} > + > +/* > + * Return the AG of the filestream the file or directory belongs to, or > + * NULLAGNUMBER otherwise. > + */ > +xfs_agnumber_t > +xfs_filestream_lookup_ag( > + xfs_inode_t *ip) > +{ > + xfs_mru_cache_t *cache; > + fstrm_item_t *item; > + xfs_agnumber_t ag; > + int ref; > + > + if (!(ip->i_d.di_mode & (S_IFREG | S_IFDIR))) { > + ASSERT(0); > + return NULLAGNUMBER; > + } > + > + cache = ip->i_mount->m_filestream; > + item = xfs_mru_cache_lookup(cache, ip->i_ino); > + if (!item) { > + TRACE_LOOKUP(ip->i_mount, ip, NULL, NULLAGNUMBER, 0); > + return NULLAGNUMBER; > + } > + > + ASSERT(ip == item->ip); > + ag = item->ag; > + ref = xfs_filestream_peek_ag(ip->i_mount, ag); > + xfs_mru_cache_done(cache); > + > + TRACE_LOOKUP(ip->i_mount, ip, item->pip, ag, ref); > + return ag; > +} > + > +/* > + * xfs_filestream_associate() should only be called to associate a regular file > + * with its parent directory. Calling it with a child directory isn't > + * appropriate because filestreams don't apply to entire directory hierarchies. > + * Creating a file in a child directory of an existing filestream directory > + * starts a new filestream with its own allocation group association. 
> + */ > +int > +xfs_filestream_associate( > + xfs_inode_t *pip, > + xfs_inode_t *ip) > +{ > + xfs_mount_t *mp; > + xfs_mru_cache_t *cache; > + fstrm_item_t *item; > + xfs_agnumber_t ag, rotorstep, startag; > + int err = 0; > + > + ASSERT(pip->i_d.di_mode & S_IFDIR); > + ASSERT(ip->i_d.di_mode & S_IFREG); > + if (!(pip->i_d.di_mode & S_IFDIR) || !(ip->i_d.di_mode & S_IFREG)) > + return EINVAL; > + > + mp = pip->i_mount; > + cache = mp->m_filestream; > + down_read(&mp->m_peraglock); > + xfs_ilock(pip, XFS_IOLOCK_EXCL); > + > + /* If the parent directory is already in the cache, use its AG. */ > + item = xfs_mru_cache_lookup(cache, pip->i_ino); > + if (item) { > + ASSERT(item->ip == pip); > + ag = item->ag; > + xfs_mru_cache_done(cache); > + > + TRACE_LOOKUP(mp, pip, pip, ag, xfs_filestream_peek_ag(mp, ag)); > + err = _xfs_filestream_update_ag(ip, pip, ag); > + > + goto exit; > + } > + > + /* > + * Set the starting AG using the rotor for inode32, otherwise > + * use the directory inode's AG. > + */ > + if (mp->m_flags & XFS_MOUNT_32BITINODES) { > + rotorstep = xfs_rotorstep; > + startag = (mp->m_agfrotor / rotorstep) % mp->m_sb.sb_agcount; > + mp->m_agfrotor = (mp->m_agfrotor + 1) % > + (mp->m_sb.sb_agcount * rotorstep); > + } else > + startag = XFS_INO_TO_AGNO(mp, pip->i_ino); > + > + /* Pick a new AG for the parent inode starting at startag. */ > + err = _xfs_filestream_pick_ag(mp, startag, &ag, 0, 0); > + if (err || ag == NULLAGNUMBER) > + goto exit_did_pick; > + > + /* Associate the parent inode with the AG. */ > + err = _xfs_filestream_update_ag(pip, NULL, ag); > + if (err) > + goto exit_did_pick; > + > + /* Associate the file inode with the AG. 
*/
> +	err = _xfs_filestream_update_ag(ip, pip, ag);
> +	if (err)
> +		goto exit_did_pick;
> +
> +	TRACE_ASSOCIATE(mp, ip, pip, ag, xfs_filestream_peek_ag(mp, ag));
> +
> +exit_did_pick:
> +	/*
> +	 * If _xfs_filestream_pick_ag() returned a valid AG, remove the
> +	 * reference it took on it, since the file and directory will have taken
> +	 * their own now if they were successfully cached.
> +	 */
> +	if (ag != NULLAGNUMBER)
> +		xfs_filestream_put_ag(mp, ag);
> +
> +exit:
> +	xfs_iunlock(pip, XFS_IOLOCK_EXCL);
> +	up_read(&mp->m_peraglock);
> +	return err;
> +}
> +
> +/*
> + * Pick a new allocation group for the current file and its file stream.  This
> + * function is called by xfs_bmap_filestreams() with the mount point's per-ag
> + * lock held.
> + */
> +int
> +xfs_filestream_new_ag(
> +	xfs_bmalloca_t	*ap,
> +	xfs_agnumber_t	*agp)
> +{
> +	int		flags, err;
> +	xfs_inode_t	*ip, *pip = NULL;
> +	xfs_mount_t	*mp;
> +	xfs_mru_cache_t	*cache;
> +	xfs_extlen_t	minlen;
> +	fstrm_item_t	*dir, *file;
> +	xfs_agnumber_t	ag = NULLAGNUMBER;
> +
> +	ip = ap->ip;
> +	mp = ip->i_mount;
> +	cache = mp->m_filestream;
> +	minlen = ap->alen;
> +	*agp = NULLAGNUMBER;
> +
> +	/*
> +	 * Look for the file in the cache, removing it if it's found.  Doing
> +	 * this allows it to be held across the dir lookup that follows.
> +	 */
> +	file = xfs_mru_cache_remove(cache, ip->i_ino);
> +	if (file) {
> +		ASSERT(ip == file->ip);
> +
> +		/* Save the file's parent inode and old AG number for later. */
> +		pip = file->pip;
> +		ag = file->ag;
> +
> +		/* Look for the file's directory in the cache. */
> +		dir = xfs_mru_cache_lookup(cache, pip->i_ino);
> +		if (dir) {
> +			ASSERT(pip == dir->ip);
> +
> +			/*
> +			 * If the directory has already moved on to a new AG,
> +			 * use that AG as the new AG for the file.  Don't
> +			 * forget to twiddle the AG refcounts to match the
> +			 * movement.
> +			 */
> +			if (dir->ag != file->ag) {
> +				xfs_filestream_put_ag(mp, file->ag);
> +				xfs_filestream_get_ag(mp, dir->ag);
> +				*agp = file->ag = dir->ag;
> +			}
> +
> +			xfs_mru_cache_done(cache);
> +		}
> +
> +		/*
> +		 * Put the file back in the cache.  If this fails, the free
> +		 * function needs to be called to tidy up in the same way as if
> +		 * the item had simply expired from the cache.
> +		 */
> +		err = xfs_mru_cache_insert(cache, ip->i_ino, file);
> +		if (err) {
> +			xfs_fstrm_free_func(ip->i_ino, file);
> +			return err;
> +		}
> +
> +		/*
> +		 * If the file's AG was moved to the directory's new AG, there's
> +		 * nothing more to be done.
> +		 */
> +		if (*agp != NULLAGNUMBER) {
> +			TRACE_MOVEAG(mp, ip, pip,
> +					ag, xfs_filestream_peek_ag(mp, ag),
> +					*agp, xfs_filestream_peek_ag(mp, *agp));
> +			return 0;
> +		}
> +	}
> +
> +	/*
> +	 * If the file's parent directory is known, take its iolock in exclusive
> +	 * mode to prevent two sibling files from racing each other to migrate
> +	 * themselves and their parent to different AGs.
> +	 */
> +	if (pip)
> +		xfs_ilock(pip, XFS_IOLOCK_EXCL);
> +
> +	/*
> +	 * A new AG needs to be found for the file.  If the file's parent
> +	 * directory is also known, it will be moved to the new AG as well to
> +	 * ensure that files created inside it in future use the new AG.
> +	 */
> +	ag = (ag == NULLAGNUMBER) ? 0 : (ag + 1) % mp->m_sb.sb_agcount;
> +	flags = (ap->userdata ? XFS_PICK_USERDATA : 0) |
> +		(ap->low ? XFS_PICK_LOWSPACE : 0);
> +
> +	err = _xfs_filestream_pick_ag(mp, ag, agp, flags, minlen);
> +	if (err || *agp == NULLAGNUMBER)
> +		goto exit;
> +
> +	/*
> +	 * If the file wasn't found in the file cache, then its parent directory
> +	 * inode isn't known.  For this to have happened, the file must either
> +	 * be pre-existing, or it was created long enough ago that its cache
> +	 * entry has expired.  This isn't the sort of usage that the filestreams
> +	 * allocator is trying to optimise, so there's no point trying to track
> +	 * its new AG somehow in the filestream data structures.
> +	 */
> +	if (!pip) {
> +		TRACE_ORPHAN(mp, ip, *agp);
> +		goto exit;
> +	}
> +
> +	/* Associate the parent inode with the AG. */
> +	err = _xfs_filestream_update_ag(pip, NULL, *agp);
> +	if (err)
> +		goto exit;
> +
> +	/* Associate the file inode with the AG. */
> +	err = _xfs_filestream_update_ag(ip, pip, *agp);
> +	if (err)
> +		goto exit;
> +
> +	TRACE_MOVEAG(mp, ip, pip, NULLAGNUMBER, 0,
> +			*agp, xfs_filestream_peek_ag(mp, *agp));
> +
> +exit:
> +	/*
> +	 * If _xfs_filestream_pick_ag() returned a valid AG, remove the
> +	 * reference it took on it, since the file and directory will have taken
> +	 * their own now if they were successfully cached.
> +	 */
> +	if (*agp != NULLAGNUMBER)
> +		xfs_filestream_put_ag(mp, *agp);
> +	else
> +		*agp = 0;
> +
> +	if (pip)
> +		xfs_iunlock(pip, XFS_IOLOCK_EXCL);
> +
> +	return err;
> +}
> +
> +/*
> + * Remove an association between an inode and a filestream object.
> + * Typically this is done on last close of an unlinked file.
> + */
> +void
> +xfs_filestream_deassociate(
> +	xfs_inode_t	*ip)
> +{
> +	xfs_mru_cache_t	*cache = ip->i_mount->m_filestream;
> +
> +	xfs_mru_cache_delete(cache, ip->i_ino);
> +}
> Index: 2.6.x-xfs-new/fs/xfs/xfs_filestream.h
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ 2.6.x-xfs-new/fs/xfs/xfs_filestream.h	2007-06-13 14:11:28.756283768 +1000
> @@ -0,0 +1,135 @@
> +/*
> + * Copyright (c) 2000-2002,2005 Silicon Graphics, Inc.
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it would be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write the Free Software Foundation,
> + * Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
> + */
> +#ifndef __XFS_FILESTREAM_H__
> +#define __XFS_FILESTREAM_H__
> +
> +#ifdef __KERNEL__
> +
> +struct xfs_mount;
> +struct xfs_inode;
> +struct xfs_perag;
> +struct xfs_bmalloca;
> +
> +#ifdef XFS_FILESTREAMS_TRACE
> +#define XFS_FSTRM_KTRACE_INFO		1
> +#define XFS_FSTRM_KTRACE_AGSCAN		2
> +#define XFS_FSTRM_KTRACE_AGPICK1	3
> +#define XFS_FSTRM_KTRACE_AGPICK2	4
> +#define XFS_FSTRM_KTRACE_UPDATE		5
> +#define XFS_FSTRM_KTRACE_FREE		6
> +#define XFS_FSTRM_KTRACE_ITEM_LOOKUP	7
> +#define XFS_FSTRM_KTRACE_ASSOCIATE	8
> +#define XFS_FSTRM_KTRACE_MOVEAG		9
> +#define XFS_FSTRM_KTRACE_ORPHAN		10
> +
> +#define XFS_FSTRM_KTRACE_SIZE	16384
> +extern ktrace_t *xfs_filestreams_trace_buf;
> +
> +#endif
> +
> +/*
> + * Allocation group filestream associations are tracked with per-ag atomic
> + * counters.  These counters allow _xfs_filestream_pick_ag() to tell whether a
> + * particular AG already has active filestreams associated with it.  The mount
> + * point's m_peraglock is used to protect these counters from per-ag array
> + * re-allocation during a growfs operation.  When xfs_growfs_data_private() is
> + * about to reallocate the array, it calls xfs_filestream_flush() with the
> + * m_peraglock held in write mode.
> + *
> + * Since xfs_mru_cache_flush() guarantees that all the free functions for all
> + * the cache elements have finished executing before it returns, it's safe for
> + * the free functions to use the atomic counters without m_peraglock protection.
> + * This allows the implementation of xfs_fstrm_free_func() to be agnostic about
> + * whether it was called with the m_peraglock held in read mode, write mode or
> + * not held at all.  The race condition this addresses is the following:
> + *
> + *  - The work queue scheduler fires and pulls a filestream directory cache
> + *    element off the LRU end of the cache for deletion, then gets pre-empted.
> + *  - A growfs operation grabs the m_peraglock in write mode, flushes all the
> + *    remaining items from the cache and reallocates the mount point's per-ag
> + *    array, resetting all the counters to zero.
> + *  - The work queue thread resumes and calls the free function for the element
> + *    it started cleaning up earlier.  In the process it decrements the
> + *    filestreams counter for an AG that now has no references.
> + *
> + * With a shrinkfs feature, the above scenario could panic the system.
> + *
> + * All other uses of the following macros should be protected by either the
> + * m_peraglock held in read mode, or the cache's internal locking exposed by the
> + * interval between a call to xfs_mru_cache_lookup() and a call to
> + * xfs_mru_cache_done().  In addition, the m_peraglock must be held in read mode
> + * when new elements are added to the cache.
> + *
> + * Combined, these locking rules ensure that no associations will ever exist in
> + * the cache that reference per-ag array elements that have since been
> + * reallocated.
> + */
> +STATIC_INLINE int
> +xfs_filestream_peek_ag(
> +	xfs_mount_t	*mp,
> +	xfs_agnumber_t	agno)
> +{
> +	return atomic_read(&mp->m_perag[agno].pagf_fstrms);
> +}
> +
> +STATIC_INLINE int
> +xfs_filestream_get_ag(
> +	xfs_mount_t	*mp,
> +	xfs_agnumber_t	agno)
> +{
> +	return atomic_inc_return(&mp->m_perag[agno].pagf_fstrms);
> +}
> +
> +STATIC_INLINE int
> +xfs_filestream_put_ag(
> +	xfs_mount_t	*mp,
> +	xfs_agnumber_t	agno)
> +{
> +	return atomic_dec_return(&mp->m_perag[agno].pagf_fstrms);
> +}
> +
> +/* allocation selection flags */
> +typedef enum xfs_fstrm_alloc {
> +	XFS_PICK_USERDATA = 1,
> +	XFS_PICK_LOWSPACE = 2,
> +} xfs_fstrm_alloc_t;
> +
> +/* prototypes for filestream.c */
> +int xfs_filestream_init(void);
> +void xfs_filestream_uninit(void);
> +int xfs_filestream_mount(struct xfs_mount *mp);
> +void xfs_filestream_unmount(struct xfs_mount *mp);
> +void xfs_filestream_flush(struct xfs_mount *mp);
> +xfs_agnumber_t xfs_filestream_lookup_ag(struct xfs_inode *ip);
> +int xfs_filestream_associate(struct xfs_inode *dip, struct xfs_inode *ip);
> +void xfs_filestream_deassociate(struct xfs_inode *ip);
> +int xfs_filestream_new_ag(struct xfs_bmalloca *ap, xfs_agnumber_t *agp);
> +
> +
> +/* filestreams for the inode? */
> +STATIC_INLINE int
> +xfs_inode_is_filestream(
> +	struct xfs_inode	*ip)
> +{
> +	return (ip->i_mount->m_flags & XFS_MOUNT_FILESTREAMS) ||
> +		(ip->i_d.di_flags & XFS_DIFLAG_FILESTREAM);
> +}
> +
> +#endif /* __KERNEL__ */
> +
> +#endif /* __XFS_FILESTREAM_H__ */
> Index: 2.6.x-xfs-new/fs/xfs/xfs_fs.h
> ===================================================================
> --- 2.6.x-xfs-new.orig/fs/xfs/xfs_fs.h	2007-06-13 13:58:15.767513033 +1000
> +++ 2.6.x-xfs-new/fs/xfs/xfs_fs.h	2007-06-13 14:11:28.760283246 +1000
> @@ -66,6 +66,7 @@ struct fsxattr {
> #define XFS_XFLAG_EXTSIZE	0x00000800	/* extent size allocator hint */
> #define XFS_XFLAG_EXTSZINHERIT	0x00001000	/* inherit inode extent size */
> #define XFS_XFLAG_NODEFRAG	0x00002000	/* do not defragment */
> +#define XFS_XFLAG_FILESTREAM	0x00004000	/* use filestream allocator */
> #define XFS_XFLAG_HASATTR	0x80000000	/* no DIFLAG for this */
> 
> /*
> Index: 2.6.x-xfs-new/fs/xfs/xfs_fsops.c
> ===================================================================
> --- 2.6.x-xfs-new.orig/fs/xfs/xfs_fsops.c	2007-06-13 13:58:15.767513033 +1000
> +++ 2.6.x-xfs-new/fs/xfs/xfs_fsops.c	2007-06-13 14:11:28.764282724 +1000
> @@ -44,6 +44,7 @@
> #include "xfs_trans_space.h"
> #include "xfs_rtalloc.h"
> #include "xfs_rw.h"
> +#include "xfs_filestream.h"
> 
> /*
>  * File system operations
> @@ -165,6 +166,7 @@ xfs_growfs_data_private(
> 	new = nb - mp->m_sb.sb_dblocks;
> 	oagcount = mp->m_sb.sb_agcount;
> 	if (nagcount > oagcount) {
> +		xfs_filestream_flush(mp);
> 		down_write(&mp->m_peraglock);
> 		mp->m_perag = kmem_realloc(mp->m_perag,
> 			sizeof(xfs_perag_t) * nagcount,
> Index: 2.6.x-xfs-new/fs/xfs/xfs_inode.c
> ===================================================================
> --- 2.6.x-xfs-new.orig/fs/xfs/xfs_inode.c	2007-06-13 13:58:15.783510960 +1000
> +++ 2.6.x-xfs-new/fs/xfs/xfs_inode.c	2007-06-13 14:11:28.780280636 +1000
> @@ -48,6 +48,7 @@
> #include "xfs_dir2_trace.h"
> #include "xfs_quota.h"
> #include "xfs_acl.h"
> +#include "xfs_filestream.h"
> 
> 
> kmem_zone_t *xfs_ifork_zone;
> @@ -817,6 +818,8 @@ _xfs_dic2xflags(
> 			flags |= XFS_XFLAG_EXTSZINHERIT;
> 		if (di_flags & XFS_DIFLAG_NODEFRAG)
> 			flags |= XFS_XFLAG_NODEFRAG;
> +		if (di_flags & XFS_DIFLAG_FILESTREAM)
> +			flags |= XFS_XFLAG_FILESTREAM;
> 	}
> 
> 	return flags;
> @@ -1099,7 +1102,7 @@ xfs_ialloc(
> 	 * Call the space management code to pick
> 	 * the on-disk inode to be allocated.
> 	 */
> -	error = xfs_dialloc(tp, pip->i_ino, mode, okalloc,
> +	error = xfs_dialloc(tp, pip ? pip->i_ino : 0, mode, okalloc,
> 		ialloc_context, call_again, &ino);
> 	if (error != 0) {
> 		return error;
> @@ -1153,7 +1156,7 @@ xfs_ialloc(
> 	if ( (prid != 0) && (ip->i_d.di_version == XFS_DINODE_VERSION_1))
> 		xfs_bump_ino_vers2(tp, ip);
> 
> -	if (XFS_INHERIT_GID(pip, vp->v_vfsp)) {
> +	if (pip && XFS_INHERIT_GID(pip, vp->v_vfsp)) {
> 		ip->i_d.di_gid = pip->i_d.di_gid;
> 		if ((pip->i_d.di_mode & S_ISGID) && (mode & S_IFMT) == S_IFDIR) {
> 			ip->i_d.di_mode |= S_ISGID;
> @@ -1195,8 +1198,14 @@ xfs_ialloc(
> 		flags |= XFS_ILOG_DEV;
> 		break;
> 	case S_IFREG:
> +		if (unlikely(pip && xfs_inode_is_filestream(pip))) {
> +			error = xfs_filestream_associate(pip, ip);
> +			if (error)
> +				return error;
> +		}
> +		/* fall through */
> 	case S_IFDIR:
> -		if (unlikely(pip->i_d.di_flags & XFS_DIFLAG_ANY)) {
> +		if (unlikely(pip && (pip->i_d.di_flags & XFS_DIFLAG_ANY))) {
> 			uint di_flags = 0;
> 
> 			if ((mode & S_IFMT) == S_IFDIR) {
> @@ -1233,6 +1242,8 @@ xfs_ialloc(
> 			if ((pip->i_d.di_flags & XFS_DIFLAG_NODEFRAG) &&
> 			    xfs_inherit_nodefrag)
> 				di_flags |= XFS_DIFLAG_NODEFRAG;
> +			if (pip->i_d.di_flags & XFS_DIFLAG_FILESTREAM)
> +				di_flags |= XFS_DIFLAG_FILESTREAM;
> 			ip->i_d.di_flags |= di_flags;
> 		}
> 		/* FALLTHROUGH */
> Index: 2.6.x-xfs-new/fs/xfs/xfs_mount.h
> ===================================================================
> --- 2.6.x-xfs-new.orig/fs/xfs/xfs_mount.h	2007-06-13 13:58:15.783510960 +1000
> +++ 2.6.x-xfs-new/fs/xfs/xfs_mount.h	2007-06-13 14:11:28.788279592 +1000
> @@ -66,6 +66,7 @@ struct xfs_bmbt_irec;
> struct xfs_bmap_free;
> struct xfs_extdelta;
> struct xfs_swapext;
> +struct xfs_mru_cache;
> 
> extern struct bhv_vfsops xfs_vfsops;
> extern struct bhv_vnodeops xfs_vnodeops;
> @@ -436,6 +437,7 @@ typedef struct xfs_mount {
> 	struct notifier_block	m_icsb_notifier; /* hotplug cpu notifier */
> 	struct mutex		m_icsb_mutex;	/* balancer sync lock */
> #endif
> +	struct xfs_mru_cache	*m_filestream;  /* per-mount filestream data */
> } xfs_mount_t;
> 
> /*
> @@ -475,6 +477,8 @@ typedef struct xfs_mount {
> 						 * I/O size in stat() */
> #define XFS_MOUNT_NO_PERCPU_SB	(1ULL << 23)	/* don't use per-cpu superblock
> 						   counters */
> +#define XFS_MOUNT_FILESTREAMS	(1ULL << 24)	/* enable the filestreams
> +						   allocator */
> 
> 
> /*
> Index: 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.c
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.c	2007-06-13 14:11:28.788279592 +1000
> @@ -0,0 +1,494 @@
> +/*
> + * Copyright (c) 2000-2002,2006 Silicon Graphics, Inc.
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it would be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write the Free Software Foundation,
> + * Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
> + */
> +#include "xfs.h"
> +#include "xfs_mru_cache.h"
> +
> +/*
> + * An MRU Cache is a dynamic data structure that stores its elements in a way
> + * that allows efficient lookups, but also groups them into discrete time
> + * intervals based on insertion time.  This allows elements to be efficiently
> + * and automatically reaped after a fixed period of inactivity.
> + *
> + * When a client data pointer is stored in the MRU Cache it needs to be added to
> + * both the data store and to one of the lists.  It must also be possible to
> + * access each of these entries via the other, i.e. to:
> + *
> + *    a) Walk a list, removing the corresponding data store entry for each item.
> + *    b) Look up a data store entry, then access its list entry directly.
> + *
> + * To achieve both of these goals, each entry must contain both a list entry and
> + * a key, in addition to the user's data pointer.  Note that it's not a good
> + * idea to have the client embed one of these structures at the top of their own
> + * data structure, because inserting the same item more than once would most
> + * likely result in a loop in one of the lists.  That's a sure-fire recipe for
> + * an infinite loop in the code.
> + */
> +typedef struct xfs_mru_cache_elem
> +{
> +	struct list_head list_node;
> +	unsigned long	key;
> +	void		*value;
> +} xfs_mru_cache_elem_t;
> +
> +static kmem_zone_t		*xfs_mru_elem_zone;
> +static struct workqueue_struct	*xfs_mru_reap_wq;
> +
> +/*
> + * When inserting, destroying or reaping, it's first necessary to update the
> + * lists relative to a particular time.  In the case of destroying, that time
> + * will be well in the future to ensure that all items are moved to the reap
> + * list.  In all other cases though, the time will be the current time.
> + *
> + * This function enters a loop, moving the contents of the LRU list to the reap
> + * list again and again until either a) the lists are all empty, or b) time zero
> + * has been advanced sufficiently to be within the immediate element lifetime.
> + *
> + * Case a) above is detected by counting how many groups are migrated and
> + * stopping when they've all been moved.  Case b) is detected by monitoring the
> + * time_zero field, which is updated as each group is migrated.
> + *
> + * The return value is the earliest time that more migration could be needed, or
> + * zero if there's no need to schedule more work because the lists are empty.
> + */
> +STATIC unsigned long
> +_xfs_mru_cache_migrate(
> +	xfs_mru_cache_t	*mru,
> +	unsigned long	now)
> +{
> +	unsigned int	grp;
> +	unsigned int	migrated = 0;
> +	struct list_head *lru_list;
> +
> +	/* Nothing to do if the data store is empty. */
> +	if (!mru->time_zero)
> +		return 0;
> +
> +	/* While time zero is older than the time spanned by all the lists. */
> +	while (mru->time_zero <= now - mru->grp_count * mru->grp_time) {
> +
> +		/*
> +		 * If the LRU list isn't empty, migrate its elements to the tail
> +		 * of the reap list.
> +		 */
> +		lru_list = mru->lists + mru->lru_grp;
> +		if (!list_empty(lru_list))
> +			list_splice_init(lru_list, mru->reap_list.prev);
> +
> +		/*
> +		 * Advance the LRU group number, freeing the old LRU list to
> +		 * become the new MRU list; advance time zero accordingly.
> +		 */
> +		mru->lru_grp = (mru->lru_grp + 1) % mru->grp_count;
> +		mru->time_zero += mru->grp_time;
> +
> +		/*
> +		 * If reaping is so far behind that all the elements on all the
> +		 * lists have been migrated to the reap list, it's now empty.
> +		 */
> +		if (++migrated == mru->grp_count) {
> +			mru->lru_grp = 0;
> +			mru->time_zero = 0;
> +			return 0;
> +		}
> +	}
> +
> +	/* Find the first non-empty list from the LRU end. */
> +	for (grp = 0; grp < mru->grp_count; grp++) {
> +
> +		/* Check the grp'th list from the LRU end. */
> +		lru_list = mru->lists + ((mru->lru_grp + grp) % mru->grp_count);
> +		if (!list_empty(lru_list))
> +			return mru->time_zero +
> +				(mru->grp_count + grp) * mru->grp_time;
> +	}
> +
> +	/* All the lists must be empty. */
> +	mru->lru_grp = 0;
> +	mru->time_zero = 0;
> +	return 0;
> +}
> +
> +/*
> + * When inserting or doing a lookup, an element needs to be inserted into the
> + * MRU list.  The lists must be migrated first to ensure that they're
> + * up-to-date, otherwise the new element could be given a shorter lifetime in
> + * the cache than it should.
> + */
> +STATIC void
> +_xfs_mru_cache_list_insert(
> +	xfs_mru_cache_t		*mru,
> +	xfs_mru_cache_elem_t	*elem)
> +{
> +	unsigned int	grp = 0;
> +	unsigned long	now = jiffies;
> +
> +	/*
> +	 * If the data store is empty, initialise time zero, leave grp set to
> +	 * zero and start the work queue timer if necessary.  Otherwise, set grp
> +	 * to the number of group times that have elapsed since time zero.
> +	 */
> +	if (!_xfs_mru_cache_migrate(mru, now)) {
> +		mru->time_zero = now;
> +		if (!mru->next_reap)
> +			mru->next_reap = mru->grp_count * mru->grp_time;
> +	} else {
> +		grp = (now - mru->time_zero) / mru->grp_time;
> +		grp = (mru->lru_grp + grp) % mru->grp_count;
> +	}
> +
> +	/* Insert the element at the tail of the corresponding list. */
> +	list_add_tail(&elem->list_node, mru->lists + grp);
> +}
> +
> +/*
> + * When destroying or reaping, all the elements that were migrated to the reap
> + * list need to be deleted.  For each element this involves removing it from the
> + * data store, removing it from the reap list, calling the client's free
> + * function and deleting the element from the element zone.
> + */ > +STATIC void > +_xfs_mru_cache_clear_reap_list( > + xfs_mru_cache_t *mru) > +{ > + xfs_mru_cache_elem_t *elem, *next; > + struct list_head tmp; > + > + INIT_LIST_HEAD(&tmp); > + list_for_each_entry_safe(elem, next, &mru->reap_list, list_node) { > + > + /* Remove the element from the data store. */ > + radix_tree_delete(&mru->store, elem->key); > + > + /* > + * remove to temp list so it can be freed without > + * needing to hold the lock > + */ > + list_move(&elem->list_node, &tmp); > + } > + mutex_spinunlock(&mru->lock, 0); > + > + list_for_each_entry_safe(elem, next, &tmp, list_node) { > + > + /* Remove the element from the reap list. */ > + list_del_init(&elem->list_node); > + > + /* Call the client's free function with the key and value pointer. */ > + mru->free_func(elem->key, elem->value); > + > + /* Free the element structure. */ > + kmem_zone_free(xfs_mru_elem_zone, elem); > + } > + > + mutex_spinlock(&mru->lock); > +} > + > +/* > + * We fire the reap timer every group expiry interval so > + * we always have a reaper ready to run. This makes shutdown > + * and flushing of the reaper easy to do. Hence we need to > + * keep when the next reap must occur so we can determine > + * at each interval whether there is anything we need to do. > + */ > +STATIC void > +_xfs_mru_cache_reap( > + struct work_struct *work) > +{ > + xfs_mru_cache_t *mru = container_of(work, xfs_mru_cache_t, work.work); > + unsigned long now; > + > + ASSERT(mru && mru->lists); > + if (!mru || !mru->lists) > + return; > + > + mutex_spinlock(&mru->lock); > + now = jiffies; > + if (mru->reap_all || > + (mru->next_reap && time_after(now, mru->next_reap))) { > + if (mru->reap_all) > + now += mru->grp_count * mru->grp_time * 2; > + mru->next_reap = _xfs_mru_cache_migrate(mru, now); > + _xfs_mru_cache_clear_reap_list(mru); > + } > + > + /* > + * the process that triggered the reap_all is responsible > + * for restating the periodic reap if it is required. 
> + */ > + if (!mru->reap_all) > + queue_delayed_work(xfs_mru_reap_wq, &mru->work, mru->grp_time); > + mru->reap_all = 0; > + mutex_spinunlock(&mru->lock, 0); > +} > + > +int > +xfs_mru_cache_init(void) > +{ > + xfs_mru_elem_zone = kmem_zone_init(sizeof(xfs_mru_cache_elem_t), > + "xfs_mru_cache_elem"); > + if (!xfs_mru_elem_zone) > + return ENOMEM; > + > + xfs_mru_reap_wq = create_singlethread_workqueue("xfs_mru_cache"); > + if (!xfs_mru_reap_wq) { > + kmem_zone_destroy(xfs_mru_elem_zone); > + return ENOMEM; > + } > + > + return 0; > +} > + > +void > +xfs_mru_cache_uninit(void) > +{ > + destroy_workqueue(xfs_mru_reap_wq); > + kmem_zone_destroy(xfs_mru_elem_zone); > +} > + > +int > +xfs_mru_cache_create( > + xfs_mru_cache_t **mrup, > + unsigned int lifetime_ms, > + unsigned int grp_count, > + xfs_mru_cache_free_func_t free_func) > +{ > + xfs_mru_cache_t *mru = NULL; > + int err = 0, grp; > + unsigned int grp_time; > + > + if (mrup) > + *mrup = NULL; > + > + if (!mrup || !grp_count || !lifetime_ms || !free_func) > + return EINVAL; > + > + if (!(grp_time = msecs_to_jiffies(lifetime_ms) / grp_count)) > + return EINVAL; > + > + if (!(mru = kmem_zalloc(sizeof(*mru), KM_SLEEP))) > + return ENOMEM; > + > + /* An extra list is needed to avoid reaping up to a grp_time early. */ > + mru->grp_count = grp_count + 1; > + mru->lists = kmem_alloc(mru->grp_count * sizeof(*mru->lists), KM_SLEEP); > + > + if (!mru->lists) { > + err = ENOMEM; > + goto exit; > + } > + > + for (grp = 0; grp < mru->grp_count; grp++) > + INIT_LIST_HEAD(mru->lists + grp); > + > + /* > + * We use GFP_KERNEL radix tree preload and do inserts under a > + * spinlock so GFP_ATOMIC is appropriate for the radix tree itself. 
> + */ > + INIT_RADIX_TREE(&mru->store, GFP_ATOMIC); > + INIT_LIST_HEAD(&mru->reap_list); > + spinlock_init(&mru->lock, "xfs_mru_cache"); > + INIT_DELAYED_WORK(&mru->work, _xfs_mru_cache_reap); > + > + mru->grp_time = grp_time; > + mru->free_func = free_func; > + > + /* start up the reaper event */ > + mru->next_reap = 0; > + mru->reap_all = 0; > + queue_delayed_work(xfs_mru_reap_wq, &mru->work, mru->grp_time); > + > + *mrup = mru; > + > +exit: > + if (err && mru && mru->lists) > + kmem_free(mru->lists, mru->grp_count * sizeof(*mru->lists)); > + if (err && mru) > + kmem_free(mru, sizeof(*mru)); > + > + return err; > +} > + > +/* > + * When flushing, we stop the periodic reaper from running first > + * so we don't race with it. If we are flushing on unmount, we > + * don't want to restart the reaper again, so the restart is conditional. > + * > + * Because reaping can drop the last refcount on inodes which can free > + * extents, we have to push the reaping off to the workqueue thread > + * because we could be called holding locks that extent freeing requires. 
> + */ > +void > +xfs_mru_cache_flush( > + xfs_mru_cache_t *mru, > + int restart) > +{ > + if (!mru || !mru->lists) > + return; > + > + cancel_rearming_delayed_workqueue(xfs_mru_reap_wq, &mru->work); > + > + mutex_spinlock(&mru->lock); > + mru->reap_all = 1; > + mutex_spinunlock(&mru->lock, 0); > + > + queue_work(xfs_mru_reap_wq, &mru->work.work); > + flush_workqueue(xfs_mru_reap_wq); > + > + mutex_spinlock(&mru->lock); > + WARN_ON_ONCE(mru->reap_all != 0); > + mru->reap_all = 0; > + if (restart) > + queue_delayed_work(xfs_mru_reap_wq, &mru->work, mru->grp_time); > + mutex_spinunlock(&mru->lock, 0); > +} > + > +void > +xfs_mru_cache_destroy( > + xfs_mru_cache_t *mru) > +{ > + if (!mru || !mru->lists) > + return; > + > + /* we don't want the reaper to restart here */ > + xfs_mru_cache_flush(mru, 0); > + > + kmem_free(mru->lists, mru->grp_count * sizeof(*mru->lists)); > + kmem_free(mru, sizeof(*mru)); > +} > + > +int > +xfs_mru_cache_insert( > + xfs_mru_cache_t *mru, > + unsigned long key, > + void *value) > +{ > + xfs_mru_cache_elem_t *elem; > + > + ASSERT(mru && mru->lists); > + if (!mru || !mru->lists) > + return EINVAL; > + > + elem = kmem_zone_zalloc(xfs_mru_elem_zone, KM_SLEEP); > + if (!elem) > + return ENOMEM; > + > + if (radix_tree_preload(GFP_KERNEL)) { > + kmem_zone_free(xfs_mru_elem_zone, elem); > + return ENOMEM; > + } > + > + INIT_LIST_HEAD(&elem->list_node); > + elem->key = key; > + elem->value = value; > + > + mutex_spinlock(&mru->lock); > + > + radix_tree_insert(&mru->store, key, elem); > + radix_tree_preload_end(); > + _xfs_mru_cache_list_insert(mru, elem); > + > + mutex_spinunlock(&mru->lock, 0); > + > + return 0; > +} > + > +void* > +xfs_mru_cache_remove( > + xfs_mru_cache_t *mru, > + unsigned long key) > +{ > + xfs_mru_cache_elem_t *elem; > + void *value = NULL; > + > + ASSERT(mru && mru->lists); > + if (!mru || !mru->lists) > + return NULL; > + > + mutex_spinlock(&mru->lock); > + elem = radix_tree_delete(&mru->store, key); > + if (elem) { > + 
value = elem->value; > + list_del(&elem->list_node); > + } > + > + mutex_spinunlock(&mru->lock, 0); > + > + if (elem) > + kmem_zone_free(xfs_mru_elem_zone, elem); > + > + return value; > +} > + > +void > +xfs_mru_cache_delete( > + xfs_mru_cache_t *mru, > + unsigned long key) > +{ > + void *value = xfs_mru_cache_remove(mru, key); > + > + if (value) > + mru->free_func(key, value); > +} > + > +void* > +xfs_mru_cache_lookup( > + xfs_mru_cache_t *mru, > + unsigned long key) > +{ > + xfs_mru_cache_elem_t *elem; > + > + ASSERT(mru && mru->lists); > + if (!mru || !mru->lists) > + return NULL; > + > + mutex_spinlock(&mru->lock); > + elem = radix_tree_lookup(&mru->store, key); > + if (elem) { > + list_del(&elem->list_node); > + _xfs_mru_cache_list_insert(mru, elem); > + } > + else > + mutex_spinunlock(&mru->lock, 0); > + > + return elem ? elem->value : NULL; > +} > + > +void* > +xfs_mru_cache_peek( > + xfs_mru_cache_t *mru, > + unsigned long key) > +{ > + xfs_mru_cache_elem_t *elem; > + > + ASSERT(mru && mru->lists); > + if (!mru || !mru->lists) > + return NULL; > + > + mutex_spinlock(&mru->lock); > + elem = radix_tree_lookup(&mru->store, key); > + if (!elem) > + mutex_spinunlock(&mru->lock, 0); > + > + return elem ? elem->value : NULL; > +} > + > +void > +xfs_mru_cache_done( > + xfs_mru_cache_t *mru) > +{ > + mutex_spinunlock(&mru->lock, 0); > +} > Index: 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.h > =================================================================== > --- /dev/null 1970-01-01 00:00:00.000000000 +0000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.h 2007-06-13 14:11:28.792279070 +1000 > @@ -0,0 +1,219 @@ > +/* > + * Copyright (c) 2000-2002,2006 Silicon Graphics, Inc. > + * All Rights Reserved. > + * > + * This program is free software; you can redistribute it and/or > + * modify it under the terms of the GNU General Public License as > + * published by the Free Software Foundation. 
> + * > + * This program is distributed in the hope that it would be useful, > + * but WITHOUT ANY WARRANTY; without even the implied warranty of > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the > + * GNU General Public License for more details. > + * > + * You should have received a copy of the GNU General Public License > + * along with this program; if not, write the Free Software Foundation, > + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA > + */ > +#ifndef __XFS_MRU_CACHE_H__ > +#define __XFS_MRU_CACHE_H__ > + > +/* > + * The MRU Cache data structure consists of a data store, an array of lists and > + * a lock to protect its internal state. At initialisation time, the client > + * supplies an element lifetime in milliseconds and a group count, as well as a > + * function pointer to call when deleting elements. A data structure for > + * queueing up work in the form of timed callbacks is also included. > + * > + * The group count controls how many lists are created, and thereby how finely > + * the elements are grouped in time. When reaping occurs, all the elements in > + * all the lists whose time has expired are deleted. > + * > + * To give an example of how this works in practice, consider a client that > + * initialises an MRU Cache with a lifetime of ten seconds and a group count of > + * five. Five internal lists will be created, each representing a two second > + * period in time. When the first element is added, time zero for the data > + * structure is initialised to the current time. > + * > + * All the elements added in the first two seconds are appended to the first > + * list. Elements added in the third second go into the second list, and so on. > + * If an element is accessed at any point, it is removed from its list and > + * inserted at the head of the current most-recently-used list. 
> + * > + * The reaper function will have nothing to do until at least twelve seconds > + * have elapsed since the first element was added. The reason for this is that > + * if it were called at t=11s, there could be elements in the first list that > + * have only been inactive for nine seconds, so it still does nothing. If it is > + * called anywhere between t=12 and t=14 seconds, it will delete all the > + * elements that remain in the first list. It's therefore possible for elements > + * to remain in the data store even after they've been inactive for up to > + * (t + t/g) seconds, where t is the inactive element lifetime and g is the > + * number of groups. > + * > + * The above example assumes that the reaper function gets called at least once > + * every (t/g) seconds. If it is called less frequently, unused elements will > + * accumulate in the reap list until the reaper function is eventually called. > + * The current implementation uses work queue callbacks to carefully time the > + * reaper function calls, so this should happen rarely, if at all. > + * > + * From a design perspective, the primary reason for the choice of a list array > + * representing discrete time intervals is that it's only practical to reap > + * expired elements in groups of some appreciable size. This automatically > + * introduces a granularity to element lifetimes, so there's no point storing an > + * individual timeout with each element that specifies a more precise reap time. > + * The bonus is a saving of sizeof(long) bytes of memory per element stored. > + * > + * The elements could have been stored in just one list, but an array of > + * counters or pointers would need to be maintained to allow them to be divided > + * up into discrete time groups. More critically, the process of touching or > + * removing an element would involve walking large portions of the entire list, > + * which would have a detrimental effect on performance. 
The additional memory > + * requirement for the array of list heads is minimal. > + * > + * When an element is touched or deleted, it needs to be removed from its > + * current list. Doubly linked lists are used to make the list maintenance > + * portion of these operations O(1). Since reaper timing can be imprecise, > + * inserts and lookups can occur when there are no free lists available. When > + * this happens, all the elements on the LRU list need to be migrated to the end > + * of the reap list. To keep the list maintenance portion of these operations > + * O(1) also, list tails need to be accessible without walking the entire list. > + * This is the reason why doubly linked list heads are used. > + */ > + > +/* Function pointer type for callback to free a client's data pointer. */ > +typedef void (*xfs_mru_cache_free_func_t)(unsigned long, void*); > + > +typedef struct xfs_mru_cache > +{ > + struct radix_tree_root store; /* Core storage data structure. */ > + struct list_head *lists; /* Array of lists, one per grp. */ > + struct list_head reap_list; /* Elements overdue for reaping. */ > + spinlock_t lock; /* Lock to protect this struct. */ > + unsigned int grp_count; /* Number of discrete groups. */ > + unsigned int grp_time; /* Time period spanned by grps. */ > + unsigned int lru_grp; /* Group containing time zero. */ > + unsigned long time_zero; /* Time first element was added. */ > + unsigned long next_reap; /* Time that the reaper should > + next do something. */ > + unsigned int reap_all; /* if set, reap all lists */ > + xfs_mru_cache_free_func_t free_func; /* Function pointer for freeing. */ > + struct delayed_work work; /* Workqueue data for reaping. */ > +} xfs_mru_cache_t; > + > +/* > + * xfs_mru_cache_init() prepares memory zones and any other globally scoped > + * resources. > + */ > +int > +xfs_mru_cache_init(void); > + > +/* > + * xfs_mru_cache_uninit() tears down all the globally scoped resources prepared > + * in xfs_mru_cache_init(). 
> + */ > +void > +xfs_mru_cache_uninit(void); > + > +/* > + * To initialise a struct xfs_mru_cache pointer, call xfs_mru_cache_create() > + * with the address of the pointer, a lifetime value in milliseconds, a group > + * count and a free function to use when deleting elements. This function > + * returns 0 if the initialisation was successful. > + */ > +int > +xfs_mru_cache_create(struct xfs_mru_cache **mrup, > + unsigned int lifetime_ms, > + unsigned int grp_count, > + xfs_mru_cache_free_func_t free_func); > + > +/* > + * Call xfs_mru_cache_flush() to flush out all cached entries, calling their > + * free functions as they're deleted. When this function returns, the caller is > + * guaranteed that all the free functions for all the elements have finished > + * executing. > + * > + * While we are flushing, we stop the periodic reaper event from triggering. > + * Normally, we want to restart this periodic event, but if we are shutting > + * down the cache we do not want it restarted. Hence the restart parameter, > + * where 0 = do not restart reaper and 1 = restart reaper. > + */ > +void > +xfs_mru_cache_flush( > + xfs_mru_cache_t *mru, > + int restart); > + > +/* > + * Call xfs_mru_cache_destroy() with the MRU Cache pointer when the cache is no > + * longer needed. > + */ > +void > +xfs_mru_cache_destroy(struct xfs_mru_cache *mru); > + > +/* > + * To insert an element, call xfs_mru_cache_insert() with the data store, the > + * element's key and the client data pointer. This function returns 0 on > + * success or ENOMEM if memory for the data element couldn't be allocated. > + */ > +int > +xfs_mru_cache_insert(struct xfs_mru_cache *mru, > + unsigned long key, > + void *value); > + > +/* > + * To remove an element without calling the free function, call > + * xfs_mru_cache_remove() with the data store and the element's key. On success > + * the client data pointer for the removed element is returned, otherwise this > + * function will return a NULL pointer. 
> + */ > +void* > +xfs_mru_cache_remove(struct xfs_mru_cache *mru, > + unsigned long key); > + > +/* > + * To remove an element and call the free function, call xfs_mru_cache_delete() > + * with the data store and the element's key. > + */ > +void > +xfs_mru_cache_delete(struct xfs_mru_cache *mru, > + unsigned long key); > + > +/* > + * To look up an element using its key, call xfs_mru_cache_lookup() with the > + * data store and the element's key. If found, the element will be moved to the > + * head of the MRU list to indicate that it's been touched. > + * > + * The internal data structures are protected by a spinlock that is STILL HELD > + * when this function returns. Call xfs_mru_cache_done() to release it. Note > + * that it is not safe to call any function that might sleep in the interim. > + * > + * The implementation could have used reference counting to avoid this > + * restriction, but since most clients simply want to get, set or test a member > + * of the returned data structure, the extra per-element memory isn't warranted. > + * > + * If the element isn't found, this function returns NULL and the spinlock is > + * released. xfs_mru_cache_done() should NOT be called when this occurs. > + */ > +void* > +xfs_mru_cache_lookup(struct xfs_mru_cache *mru, > + unsigned long key); > + > +/* > + * To look up an element using its key, but leave its location in the internal > + * lists alone, call xfs_mru_cache_peek(). If the element isn't found, this > + * function returns NULL. > + * > + * See the comments above the declaration of the xfs_mru_cache_lookup() function > + * for important locking information pertaining to this call. > + */ > +void* > +xfs_mru_cache_peek(struct xfs_mru_cache *mru, > + unsigned long key); > + > +/* > + * To release the internal data structure spinlock after having performed an > + * xfs_mru_cache_lookup() or an xfs_mru_cache_peek(), call xfs_mru_cache_done() > + * with the data store pointer. 
> + */ > +void > +xfs_mru_cache_done(struct xfs_mru_cache *mru); > + > +#endif /* __XFS_MRU_CACHE_H__ */ > Index: 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vfsops.c 2007-06-13 13:58:15.787510441 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c 2007-06-13 14:11:28.880267586 +1000 > @@ -51,6 +51,8 @@ > #include "xfs_acl.h" > #include "xfs_attr.h" > #include "xfs_clnt.h" > +#include "xfs_mru_cache.h" > +#include "xfs_filestream.h" > #include "xfs_fsops.h" > > STATIC int xfs_sync(bhv_desc_t *, int, cred_t *); > @@ -81,6 +83,8 @@ xfs_init(void) > xfs_dabuf_zone = kmem_zone_init(sizeof(xfs_dabuf_t), "xfs_dabuf"); > xfs_ifork_zone = kmem_zone_init(sizeof(xfs_ifork_t), "xfs_ifork"); > xfs_acl_zone_init(xfs_acl_zone, "xfs_acl"); > + xfs_mru_cache_init(); > + xfs_filestream_init(); > > /* > * The size of the zone allocated buf log item is the maximum > @@ -164,6 +168,8 @@ xfs_cleanup(void) > xfs_cleanup_procfs(); > xfs_sysctl_unregister(); > xfs_refcache_destroy(); > + xfs_filestream_uninit(); > + xfs_mru_cache_uninit(); > xfs_acl_zone_destroy(xfs_acl_zone); > > #ifdef XFS_DIR2_TRACE > @@ -320,6 +326,9 @@ xfs_start_flags( > else > mp->m_flags &= ~XFS_MOUNT_BARRIER; > > + if (ap->flags2 & XFSMNT2_FILESTREAMS) > + mp->m_flags |= XFS_MOUNT_FILESTREAMS; > + > return 0; > } > > @@ -518,6 +527,9 @@ xfs_mount( > if (mp->m_flags & XFS_MOUNT_BARRIER) > xfs_mountfs_check_barriers(mp); > > + if ((error = xfs_filestream_mount(mp))) > + goto error2; > + > error = XFS_IOINIT(vfsp, args, flags); > if (error) > goto error2; > @@ -575,6 +587,13 @@ xfs_unmount( > */ > xfs_refcache_purge_mp(mp); > > + /* > + * Blow away any referenced inode in the filestreams cache. > + * This can and will cause log traffic as inodes go inactive > + * here. 
> + */ > + xfs_filestream_unmount(mp); > + > XFS_bflush(mp->m_ddev_targp); > error = xfs_unmount_flush(mp, 0); > if (error) > @@ -706,6 +725,7 @@ xfs_mntupdate( > mp->m_flags &= ~XFS_MOUNT_BARRIER; > } > } else if (!(vfsp->vfs_flag & VFS_RDONLY)) { /* rw -> ro */ > + xfs_filestream_flush(mp); > bhv_vfs_sync(vfsp, SYNC_DATA_QUIESCE, NULL); > xfs_attr_quiesce(mp); > vfsp->vfs_flag |= VFS_RDONLY; > @@ -930,6 +950,9 @@ xfs_sync( > { > xfs_mount_t *mp = XFS_BHVTOM(bdp); > > + if (flags & SYNC_IOWAIT) > + xfs_filestream_flush(mp); > + > return xfs_syncsub(mp, flags, NULL); > } > > @@ -1873,6 +1896,8 @@ xfs_parseargs( > } else if (!strcmp(this_char, "irixsgid")) { > cmn_err(CE_WARN, > "XFS: irixsgid is now a sysctl(2) variable, option is deprecated."); > + } else if (!strcmp(this_char, "filestreams")) { > + args->flags2 |= XFSMNT2_FILESTREAMS; > } else { > cmn_err(CE_WARN, > "XFS: unknown mount option [%s].", this_char); > Index: 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vnodeops.c 2007-06-13 13:58:15.855501631 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c 2007-06-13 14:11:28.904264454 +1000 > @@ -51,6 +51,7 @@ > #include "xfs_refcache.h" > #include "xfs_trans_space.h" > #include "xfs_log_priv.h" > +#include "xfs_filestream.h" > > STATIC int > xfs_open( > @@ -94,6 +95,16 @@ xfs_close( > return 0; > > /* > + * If we are using filestreams, and we have an unlinked > + * file that we are processing the last close on, then nothing > + * will be able to reopen and write to this file. Purge this > + * inode from the filestreams cache so that it doesn't delay > + * teardown of the inode. > + */ > + if ((ip->i_d.di_nlink == 0) && xfs_inode_is_filestream(ip)) > + xfs_filestream_deassociate(ip); > + > + /* > * If we previously truncated this file and removed old data in > * the process, we want to initiate "early" writeout on the last > * close. 
This is an attempt to combat the notorious NULL files > @@ -819,6 +830,8 @@ xfs_setattr( > di_flags |= XFS_DIFLAG_PROJINHERIT; > if (vap->va_xflags & XFS_XFLAG_NODEFRAG) > di_flags |= XFS_DIFLAG_NODEFRAG; > + if (vap->va_xflags & XFS_XFLAG_FILESTREAM) > + di_flags |= XFS_DIFLAG_FILESTREAM; > if ((ip->i_d.di_mode & S_IFMT) == S_IFDIR) { > if (vap->va_xflags & XFS_XFLAG_RTINHERIT) > di_flags |= XFS_DIFLAG_RTINHERIT; > @@ -2563,6 +2576,15 @@ xfs_remove( > */ > xfs_refcache_purge_ip(ip); > > + /* > + * If we are using filestreams, kill the stream association. > + * If the file is still open it may get a new one but that > + * will get killed on last close in xfs_close() so we don't > + * have to worry about that. > + */ > + if (link_zero && xfs_inode_is_filestream(ip)) > + xfs_filestream_deassociate(ip); > + > vn_trace_exit(XFS_ITOV(ip), __FUNCTION__, (inst_t *)__return_address); > > /* > Index: 2.6.x-xfs-new/fs/xfs/quota/xfs_qm.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/quota/xfs_qm.c 2007-06-13 13:58:15.875499040 +1000 > +++ 2.6.x-xfs-new/fs/xfs/quota/xfs_qm.c 2007-06-13 14:11:28.972255580 +1000 > @@ -65,7 +65,6 @@ kmem_zone_t *qm_dqtrxzone; > static struct shrinker *xfs_qm_shaker; > > static cred_t xfs_zerocr; > -static xfs_inode_t xfs_zeroino; > > STATIC void xfs_qm_list_init(xfs_dqlist_t *, char *, int); > STATIC void xfs_qm_list_destroy(xfs_dqlist_t *); > @@ -1415,7 +1414,7 @@ xfs_qm_qino_alloc( > return error; > } > > - if ((error = xfs_dir_ialloc(&tp, &xfs_zeroino, S_IFREG, 1, 0, > + if ((error = xfs_dir_ialloc(&tp, NULL, S_IFREG, 1, 0, > &xfs_zerocr, 0, 1, ip, &committed))) { > xfs_trans_cancel(tp, XFS_TRANS_RELEASE_LOG_RES | > XFS_TRANS_ABORT); > Index: 2.6.x-xfs-new/fs/xfs/xfs.h > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfs.h 2007-06-13 13:58:15.879498521 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfs.h 2007-06-13 14:11:28.972255580 +1000 > @@ -38,6 
+38,7 @@ > #define XFS_RW_TRACE 1 > #define XFS_BUF_TRACE 1 > #define XFS_VNODE_TRACE 1 > +#define XFS_FILESTREAMS_TRACE 1 > #endif > > #include > Index: 2.6.x-xfs-new/fs/xfs/xfsidbg.c > =================================================================== > --- 2.6.x-xfs-new.orig/fs/xfs/xfsidbg.c 2007-06-13 13:58:15.879498521 +1000 > +++ 2.6.x-xfs-new/fs/xfs/xfsidbg.c 2007-06-13 14:11:28.984254014 +1000 > @@ -63,6 +63,7 @@ > #include "quota/xfs_qm.h" > #include "xfs_iomap.h" > #include "xfs_buf.h" > +#include "xfs_filestream.h" > > MODULE_AUTHOR("Silicon Graphics, Inc."); > MODULE_DESCRIPTION("Additional kdb commands for debugging XFS"); > @@ -109,6 +110,9 @@ static void xfsidbg_xlog_granttrace(xlog > #ifdef XFS_DQUOT_TRACE > static void xfsidbg_xqm_dqtrace(xfs_dquot_t *); > #endif > +#ifdef XFS_FILESTREAMS_TRACE > +static void xfsidbg_filestreams_trace(int); > +#endif > > > /* > @@ -197,6 +201,9 @@ static int xfs_bmbt_trace_entry(ktrace_e > #ifdef XFS_DIR2_TRACE > static int xfs_dir2_trace_entry(ktrace_entry_t *ktep); > #endif > +#ifdef XFS_FILESTREAMS_TRACE > +static void xfs_filestreams_trace_entry(ktrace_entry_t *ktep); > +#endif > #ifdef XFS_RW_TRACE > static void xfs_bunmap_trace_entry(ktrace_entry_t *ktep); > static void xfs_rw_enter_trace_entry(ktrace_entry_t *ktep); > @@ -761,6 +768,27 @@ static int kdbm_xfs_xalttrace( > } > #endif /* XFS_ALLOC_TRACE */ > > +#ifdef XFS_FILESTREAMS_TRACE > +static int kdbm_xfs_xfstrmtrace( > + int argc, > + const char **argv) > +{ > + unsigned long addr; > + int nextarg = 1; > + long offset = 0; > + int diag; > + > + if (argc != 1) > + return KDB_ARGCOUNT; > + diag = kdbgetaddrarg(argc, argv, &nextarg, &addr, &offset, NULL); > + if (diag) > + return diag; > + > + xfsidbg_filestreams_trace((int) addr); > + return 0; > +} > +#endif /* XFS_FILESTREAMS_TRACE */ > + > static int kdbm_xfs_xattrcontext( > int argc, > const char **argv) > @@ -2639,6 +2667,10 @@ static struct xif xfsidbg_funcs[] = { > "Dump XFS bmap extents in 
inode"}, > { "xflist", kdbm_xfs_xflist, "", > "Dump XFS to-be-freed extent records"}, > +#ifdef XFS_FILESTREAMS_TRACE > + { "xfstrmtrc",kdbm_xfs_xfstrmtrace, "", > + "Dump filestreams trace buffer"}, > +#endif > { "xhelp", kdbm_xfs_xhelp, "", > "Print idbg-xfs help"}, > { "xicall", kdbm_xfs_xiclogall, "", > @@ -5305,6 +5337,162 @@ xfsidbg_xailock_trace(int count) > } > #endif > > +#ifdef XFS_FILESTREAMS_TRACE > +static void > +xfs_filestreams_trace_entry(ktrace_entry_t *ktep) > +{ > + xfs_inode_t *ip, *pip; > + > + /* function:line#[pid]: */ > + kdb_printf("%s:%lu[%lu]: ", (char *)ktep->val[1], > + ((unsigned long)ktep->val[0] >> 16) & 0xffff, > + (unsigned long)ktep->val[2]); > + switch ((unsigned long)ktep->val[0] & 0xffff) { > + case XFS_FSTRM_KTRACE_INFO: > + break; > + case XFS_FSTRM_KTRACE_AGSCAN: > + kdb_printf("scanning AG %ld[%ld]", > + (long)ktep->val[4], (long)ktep->val[5]); > + break; > + case XFS_FSTRM_KTRACE_AGPICK1: > + kdb_printf("using max_ag %ld[1] with maxfree %ld", > + (long)ktep->val[4], (long)ktep->val[5]); > + break; > + case XFS_FSTRM_KTRACE_AGPICK2: > + > + kdb_printf("startag %ld newag %ld[%ld] free %ld scanned %ld" > + " flags 0x%lx", > + (long)ktep->val[4], (long)ktep->val[5], > + (long)ktep->val[6], (long)ktep->val[7], > + (long)ktep->val[8], (long)ktep->val[9]); > + break; > + case XFS_FSTRM_KTRACE_UPDATE: > + ip = (xfs_inode_t *)ktep->val[4]; > + if ((__psint_t)ktep->val[5] != (__psint_t)ktep->val[7]) > + kdb_printf("found ip %p ino %llu, AG %ld[%ld] ->" > + " %ld[%ld]", ip, (unsigned long long)ip->i_ino, > + (long)ktep->val[7], (long)ktep->val[8], > + (long)ktep->val[5], (long)ktep->val[6]); > + else > + kdb_printf("found ip %p ino %llu, AG %ld[%ld]", > + ip, (unsigned long long)ip->i_ino, > + (long)ktep->val[5], (long)ktep->val[6]); > + break; > + > + case XFS_FSTRM_KTRACE_FREE: > + ip = (xfs_inode_t *)ktep->val[4]; > + pip = (xfs_inode_t *)ktep->val[5]; > + if (ip->i_d.di_mode & S_IFDIR) > + kdb_printf("deleting dip %p ino %llu, AG 
%ld[%ld]", > + ip, (unsigned long long)ip->i_ino, > + (long)ktep->val[6], (long)ktep->val[7]); > + else > + kdb_printf("deleting file %p ino %llu, pip %p ino %llu" > + ", AG %ld[%ld]", > + ip, (unsigned long long)ip->i_ino, > + pip, (unsigned long long)(pip ? pip->i_ino : 0), > + (long)ktep->val[6], (long)ktep->val[7]); > + break; > + > + case XFS_FSTRM_KTRACE_ITEM_LOOKUP: > + ip = (xfs_inode_t *)ktep->val[4]; > + pip = (xfs_inode_t *)ktep->val[5]; > + if (!pip) { > + kdb_printf("lookup on %s ip %p ino %llu failed, returning %ld", > + ip->i_d.di_mode & S_IFREG ? "file" : "dir", ip, > + (unsigned long long)ip->i_ino, (long)ktep->val[6]); > + } else if (ip->i_d.di_mode & S_IFREG) > + kdb_printf("lookup on file ip %p ino %llu dir %p" > + " dino %llu got AG %ld[%ld]", > + ip, (unsigned long long)ip->i_ino, > + pip, (unsigned long long)pip->i_ino, > + (long)ktep->val[6], (long)ktep->val[7]); > + else > + kdb_printf("lookup on dir ip %p ino %llu got AG %ld[%ld]", > + ip, (unsigned long long)ip->i_ino, > + (long)ktep->val[6], (long)ktep->val[7]); > + break; > + > + case XFS_FSTRM_KTRACE_ASSOCIATE: > + ip = (xfs_inode_t *)ktep->val[4]; > + pip = (xfs_inode_t *)ktep->val[5]; > + kdb_printf("pip %p ino %llu and ip %p ino %llu given ag %ld[%ld]", > + pip, (unsigned long long)pip->i_ino, > + ip, (unsigned long long)ip->i_ino, > + (long)ktep->val[6], (long)ktep->val[7]); > + break; > + > + case XFS_FSTRM_KTRACE_MOVEAG: > + ip = ktep->val[4]; > + pip = ktep->val[5]; > + if ((long)ktep->val[6] != NULLAGNUMBER) > + kdb_printf("dir %p ino %llu to file ip %p ino %llu has" > + " moved %ld[%ld] -> %ld[%ld]", > + pip, (unsigned long long)pip->i_ino, > + ip, (unsigned long long)ip->i_ino, > + (long)ktep->val[6], (long)ktep->val[7], > + (long)ktep->val[8], (long)ktep->val[9]); > + else > + kdb_printf("pip %p ino %llu and ip %p ino %llu moved" > + " to new ag %ld[%ld]", > + pip, (unsigned long long)pip->i_ino, > + ip, (unsigned long long)ip->i_ino, > + (long)ktep->val[8], 
(long)ktep->val[9]); > + break; > + > + case XFS_FSTRM_KTRACE_ORPHAN: > + ip = ktep->val[4]; > + kdb_printf("gave ag %lld to orphan ip %p ino %llu", > + (__psint_t)ktep->val[5], > + ip, (unsigned long long)ip->i_ino); > + break; > + default: > + kdb_printf("unknown trace type 0x%lx", > + (unsigned long)ktep->val[0] & 0xffff); > + } > + kdb_printf("\n"); > +} > + > +static void > +xfsidbg_filestreams_trace(int count) > +{ > + ktrace_entry_t *ktep; > + ktrace_snap_t kts; > + int nentries; > + int skip_entries; > + > + if (xfs_filestreams_trace_buf == NULL) { > + qprintf("The xfs filestreams trace buffer is not initialized\n"); > + return; > + } > + nentries = ktrace_nentries(xfs_filestreams_trace_buf); > + if (count == -1) { > + count = nentries; > + } > + if ((count <= 0) || (count > nentries)) { > + qprintf("Invalid count. There are %d entries.\n", nentries); > + return; > + } > + > + ktep = ktrace_first(xfs_filestreams_trace_buf, &kts); > + if (count != nentries) { > + /* > + * Skip the total minus the number to look at minus one > + * for the entry returned by ktrace_first(). > + */ > + skip_entries = nentries - count - 1; > + ktep = ktrace_skip(xfs_filestreams_trace_buf, skip_entries, &kts); > + if (ktep == NULL) { > + qprintf("Skipped them all\n"); > + return; > + } > + } > + while (ktep != NULL) { > + xfs_filestreams_trace_entry(ktep); > + ktep = ktrace_next(xfs_filestreams_trace_buf, &kts); > + } > +} > +#endif > /* > * Compute & print buffer's checksum. 
> */ From owner-xfs@oss.sgi.com Sun Jun 24 23:01:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 23:01:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.3 required=5.0 tests=AWL,BAYES_60,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P61hdo025363 for ; Sun, 24 Jun 2007 23:01:44 -0700 Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id D848218011E86 for ; Mon, 25 Jun 2007 01:01:44 -0500 (CDT) Message-ID: <467F5A48.1040502@sandeen.net> Date: Mon, 25 Jun 2007 01:01:44 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.0 (Macintosh/20070326) MIME-Version: 1.0 To: xfs-oss Subject: Re: is this thing on... References: <467F5447.5080109@sandeen.net> In-Reply-To: <467F5447.5080109@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11895 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Eric Sandeen wrote: > testing... noticed no email on list for 5 days. Hope this gets through. > > Ohhhkay. Looks like somebody broke email. If you sent anything important since Jun 20 you might wait a while to see if it shows up now (hopefully mail servers are still retrying), otherwise you might send it again. 
-Eric From owner-xfs@oss.sgi.com Sun Jun 24 23:02:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 23:02:07 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.2 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P61xdo025506 for ; Sun, 24 Jun 2007 23:02:02 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 4701EE7115; Thu, 21 Jun 2007 19:06:29 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id TASPjrJ7BwFt; Thu, 21 Jun 2007 19:06:16 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 71D19E70FE; Thu, 21 Jun 2007 19:06:28 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1I1R38-0004QW-CT; Thu, 21 Jun 2007 19:06:30 +0100 Message-ID: <467ABE25.7060303@dgreaves.com> Date: Thu, 21 Jun 2007 19:06:29 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.0 (X11/20070601) MIME-Version: 1.0 To: Tejun Heo Cc: David Chinner , David Robinson , LVM general discussion and development , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid , "Rafael J. 
Wysocki" Subject: Re: [linux-lvm] 2.6.22-rc5 XFS fails after hibernate/resume References: <46744065.6060605@dgreaves.com> <4674645F.5000906@gmail.com> <46751D37.5020608@dgreaves.com> <4676390E.6010202@dgreaves.com> <20070618145007.GE85884050@sgi.com> <4676D97E.4000403@dgreaves.com> <4677A0C7.4000306@dgreaves.com> <4677A596.7090404@gmail.com> <4677E496.3080506@dgreaves.com> <4678DF56.1020903@gmail.com> In-Reply-To: <4678DF56.1020903@gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11896 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs been away, back now... Tejun Heo wrote: > David Greaves wrote: >> Tejun Heo wrote: >>> How reproducible is the problem? Does the problem go away or occur more >>> often if you change the drive you write the memory image to? >> I don't think there should be activity on the sda drive during resume >> itself. >> >> [I broke my / md mirror and am using some of that for swap/resume for now] >> >> I did change the swap/resume device to sdd2 (different controller, >> onboard sata_via) and there was no EH during resume. The system seemed >> OK, wrote a few Gb of video and did a kernel compile. >> I repeated this test, no EH during resume, no problems. >> I even ran xfs_fsr, the defragment utility, to stress the fs. >> >> I retain this configuration and try again tonight but it looks like >> there _may_ be a link between EH during resume and my problems... Having retained this new configuration for a couple of days now I haven't had any problems. This is good but not really ideal since / isn't mirrored anymore :( >> Of course, I don't understand why it *should* EH during resume, it >> doesn't during boot or normal operation... 
> > EH occurs during boot, suspend and resume all the time. It just runs in > quiet mode to avoid disturbing the users too much. In your case, EH is > kicking in due to actual exception conditions so it's being verbose to > give clue about what's going on. I was trying to say that I don't actually see any errors being handled in normal operation. I'm not sure if you are saying that these PHY RDY events are normally handled quietly (which would explain it). > It's really weird tho. The PHY RDY status changed events are coming > from the device which is NOT used while resuming yes - but the erroring device which is not being used is on the same controller as the device with the in-use resume partition. > and it's before any > actual PM events are triggered. Your kernel just boots, swsusp realizes > it's resuming and tries to read memory image from the swap device. yes > While reading, the disk controller raises consecutive PHY readiness > changed interrupts. EH recovers them alright but the end result seems > to indicate that the loaded image is corrupt. Yes, that's consistent with what I'm seeing. When I move the swap/resume partition to a different controller (ie when I broke the / mirror and used the freed space) the problem seems to go away. 
I am seeing messages in dmesg though: ATA: abnormal status 0xD0 on port 0xf881e0c7 ATA: abnormal status 0xD0 on port 0xf881e0c7 ATA: abnormal status 0xD0 on port 0xf881e0c7 ATA: abnormal status 0xD0 on port 0xf881e0c7 ATA: abnormal status 0xD0 on port 0xf881e0c7 ata1.00: configured for UDMA/100 ata2.00: revalidation failed (errno=-2) ata2: failed to recover some devices, retrying in 5 secs sd 0:0:0:0: [sda] 390721968 512-byte hardware sectors (200050 MB) sd 0:0:0:0: resuming sd 0:0:0:0: [sda] Starting disk ATA: abnormal status 0x7F on port 0x00019807 ATA: abnormal status 0x7F on port 0x00019007 ATA: abnormal status 0x7F on port 0x00019007 ATA: abnormal status 0x7F on port 0x00019807 ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) ATA: abnormal status 0xD0 on port 0xf881e0c7 ATA: abnormal status 0xD0 on port 0xf881e0c7 ATA: abnormal status 0xD0 on port 0xf881e0c7 ATA: abnormal status 0xD0 on port 0xf881e0c7 ATA: abnormal status 0xD0 on port 0xf881e0c7 ata1.00: configured for UDMA/100 ata2.00: revalidation failed (errno=-2) ata2: failed to recover some devices, retrying in 5 secs > So, there's no device suspend/resume code involved at all. The kernel > just booted and is trying to read data from the drive. Please try with > only the first drive attached and see what happens. That's kinda hard; swap and root are on different drives... Does it help that although the errors above appear, the system seems OK when I just use the other controller? I have to be cautious what I do with this machine as it's the wife's active desktop box . 
David From owner-xfs@oss.sgi.com Sun Jun 24 23:02:46 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 23:02:49 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.5 required=5.0 tests=AWL,BAYES_95 autolearn=no version=3.2.0-pre1-r499012 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P62ido026053 for ; Sun, 24 Jun 2007 23:02:45 -0700 Received: from edge.yarra.acx (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 7106892C39B; Mon, 25 Jun 2007 10:09:01 +1000 (EST) Subject: Re: XFS_IOC_RESVSP64 for swap files From: Nathan Scott Reply-To: nscott@aconex.com To: Peter Cordes Cc: xfs@oss.sgi.com In-Reply-To: <20070621061449.GB11200@cordes.ca> References: <20070617100822.GA4586@cordes.ca> <20070619043333.GJ86004887@sgi.com> <20070621061449.GB11200@cordes.ca> Content-Type: text/plain Organization: Aconex Date: Mon, 25 Jun 2007 10:07:51 +1000 Message-Id: <1182730071.15488.36.camel@edge.yarra.acx> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11897 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Thu, 2007-06-21 at 03:14 -0300, Peter Cordes wrote: > > > > Could swapon(2) in the kernel be made to work on XFS files with > reserved > > > space? > > > > Basically, the swapon syscall calls bmap() for the block mapping of > the > > file and XFS returns "holes" [...] > > Yeah, bad idea to put special case stuff in the kernel. > > > > i.e. call something that would give XFS a chance to mark all the > > > extents as written, even though they're not. 
> > > > You mean like XFS_IOC_EXPOSE_MY_STALE_DATA_TO_EVERYONE? ;) > > > > That's not going to happen.

I wouldn't dismiss it out of hand. Maybe if it were approached as follows:

- forget about "preallocation", as that's clouding the discussion with talk of unwritten extents, etc. This is a real allocation, and filesize would need to reflect that (it's XFS_IOC_ALLOCSP, just without the zeroing).

- instead, if you added a flag to the shiny new fallocate syscall, F_MKSWAP or some such, and then implemented it as an allocate with no zeroing.

- change mkswap to be able to optionally use this, and those dynamic methods for adding swap that you mentioned.

- produce objective data demonstrating the improvements this makes, as this will be needed when arguing the case for the flag in the kernel (the fallocate flag, I mean).

I can't see any reason this shouldn't be acceptable - there really is no security issue here, provided the F_MKSWAP syscall has tight restrictions (the arbitrary contents of memory that will get swapped have potentially more sensitive info, like decrypted passwords, than would usually be stored in a filesystem). And there is very little added complexity/code introduced, almost everything is already in place, so if it's demonstrably useful... *shrug*.

> > In fact, I plan to make unwritten extents non-optional soon (i.e. > I've already

Unwritten extents aren't really relevant here - you would definitely not want to have these extents marked unwritten, as that would cause additional transactions, memory allocations, writes, etc during swap.

> Ok. I didn't really want to recreate my /var/tmp filesystem with > unwritten=0, but I really wish I had > XFS_IOC_EXPOSE_MY_STALE_DATA_TO_EVERYONE on my desktop machine. I > think > dynamic swap file creation is a cool idea, and that ioctl would make > it work > perfectly. 
I think if the data was compelling enough, there's no obvious reason this couldn't be merged, IMO (you may want to make an XFS ioctl also, just to test it - use xfs_io on the frontend & write some xfstests - and you'll need to tweak the call down into xfs_alloc_file_space - the 4th "alloc_type" argument will need to become a flags parameter instead of effectively the boolean it is now, and theres a few code changes to be done related to that). cheers. -- Nathan From owner-xfs@oss.sgi.com Sun Jun 24 23:18:07 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 23:18:09 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.5 required=5.0 tests=AWL,BAYES_99, FH_HOST_EQ_D_D_D_D,FH_HOST_EQ_D_D_D_DB,RDNS_DYNAMIC autolearn=no version=3.2.0-pre1-r499012 Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P6I5do031238 for ; Sun, 24 Jun 2007 23:18:06 -0700 Received: from agami.com (mail [192.168.168.5]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id l5MNvLYL008345 for ; Fri, 22 Jun 2007 16:57:21 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id l5MNvh5N029526 for ; Fri, 22 Jun 2007 16:57:43 -0700 Received: from [10.123.4.142] ([10.123.4.142]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Fri, 22 Jun 2007 16:58:06 -0700 Message-ID: <467C620E.4050005@agami.com> Date: Fri, 22 Jun 2007 16:58:06 -0700 From: Michael Nishimoto User-Agent: Mail/News 1.5.0.4 (X11/20060629) MIME-Version: 1.0 To: David Chinner CC: xfs@oss.sgi.com Subject: Re: Reducing memory requirements for high extent xfs files References: <200705301649.l4UGnckA027406@oss.sgi.com> <20070530225516.GB85884050@sgi.com> <4665E276.9020406@agami.com> <20070606013601.GR86004887@sgi.com> <4666EC56.9000606@agami.com> <20070606234723.GC86004887@sgi.com> In-Reply-To: 
<20070606234723.GC86004887@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 22 Jun 2007 23:58:06.0318 (UTC) FILETIME=[2D6440E0:01C7B529] X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11898 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: miken@agami.com Precedence: bulk X-list: xfs > > > Also, should we consider a file with 1MB extents as > > fragmented? A 100GB file with 1MB extents has 100k extents. > > Yes, that's fragmented - it has 4 orders of magnitude more extents > than optimal - and the extents are too small to allow reads or > writes to acheive full bandwidth on high end raid configs.... Fair enough, so multiply those numbers by 100 -- a 10TB file with 100MB extents. It seems to me that we can look at the negative effects of fragmentation in two ways here. First, (regardless of size) if a file has a large number of extents, then it is too fragmented. Second, if a file's extents are so small that we can't get full bandwidth, then it is too fragmented. If the second case were of primary concern, then it would be reasonable to have 1000s of extents as long as each of the extents were big enough to amortize disk latencies across a large amount of data. We've been assuming that a good write is one which can send 2MB of data to a single drive; so with an 8+1 raid device, we need 16MB of write data to achieve high disk utilization. In particular, there are flexibility advantages if high extent count files can still achieve good performance. 
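The figures traded back and forth in this thread are easy to sanity-check. A minimal sketch (the 100 GB/1 MB and 10 TB/100 MB file sizes and the 2 MB-per-spindle, 8+1 RAID geometry are the thread's own hypothetical numbers, not measurements):

```python
# Sanity check of the fragmentation arithmetic in this thread.
# All sizes are the thread's hypothetical examples, not measurements.
MB = 1024 ** 2
GB = 1024 ** 3
TB = 1024 ** 4

# "A 100GB file with 1MB extents has 100k extents."
extents_100gb = (100 * GB) // (1 * MB)       # 102400 extents

# "multiply those numbers by 100 -- a 10TB file with 100MB extents"
extents_10tb = (10 * TB) // (100 * MB)       # still ~100k extents

# "a good write is one which can send 2MB of data to a single drive;
#  so with an 8+1 raid device, we need 16MB of write data"
data_spindles = 8                            # 8+1 RAID: 8 data + 1 parity
full_stripe_write = data_spindles * 2 * MB   # 16 MiB per full-stripe write

print(extents_100gb, extents_10tb, full_stripe_write // MB)
```

In other words, scaling both file size and extent size by 100 leaves the extent count unchanged; what changes is that each extent is now large enough to amortise seek latency across a full-stripe write.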
Michael From owner-xfs@oss.sgi.com Sun Jun 24 23:29:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 23:29:06 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.5 required=5.0 tests=AWL,BAYES_50,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P6T2do001266 for ; Sun, 24 Jun 2007 23:29:03 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 759FBB0004E6; Thu, 21 Jun 2007 12:42:03 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 7438B5000092 for ; Thu, 21 Jun 2007 12:42:03 -0400 (EDT) Date: Thu, 21 Jun 2007 12:42:03 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: xfs@oss.sgi.com Subject: SLUB Allocator? Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11899 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Is there any XFS filesystem performance benefit using the new SLUB allocator? Justin. 
From owner-xfs@oss.sgi.com Sun Jun 24 23:31:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 23:31:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.8 required=5.0 tests=AWL,BAYES_99,SPF_HELO_PASS, WHOIS_MYPRIVREG autolearn=no version=3.2.0-pre1-r499012 Received: from kuber.nabble.com (kuber.nabble.com [216.139.236.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P6Vddo002382 for ; Sun, 24 Jun 2007 23:31:40 -0700 Received: from isper.nabble.com ([192.168.236.156]) by kuber.nabble.com with esmtp (Exim 4.63) (envelope-from ) id 1I1cYR-0001SC-5R for linux-xfs@oss.sgi.com; Thu, 21 Jun 2007 23:23:35 -0700 Message-ID: <11246839.post@talk.nabble.com> Date: Thu, 21 Jun 2007 23:23:35 -0700 (PDT) From: Sandy1 To: linux-xfs@oss.sgi.com Subject: Wrong Data Pointer-XFS File system MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Nabble-From: sundeep.saini@rediffmail.com X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11900 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sundeep.saini@rediffmail.com Precedence: bulk X-list: xfs Hi, I am using SuSE 10.0 with the XFS file system and am working on a file-system-based project. During my initial research into the on-disk layout I found a data pointer problem: I was not able to reach the data location pointed to by the "absolute block no." in the "xfs_bmbt_rec" extent pointers. When I create a file in the 0th (zeroth) AG, I can reach the proper location using the "absolute block no." pointer. But when I create a file in the 1st or 2nd AG (and so on), I never find the file data at the location the "absolute block no." points to. I always find the file data before the pointed address.
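The behaviour described here matches how XFS encodes the "absolute block no.": it is not a linear device offset but a packed pair of (AG number, block within AG), split at sb_agblklog bits, the superblock field being looked for. Since sb_agblocks is normally smaller than 2^sb_agblklog, reading the packed value as a linear offset overshoots by (2^agblklog - agblocks) per AG, i.e. by a multiple of the AG number. A sketch with made-up geometry values:

```shell
# Made-up geometry for illustration; real values come from the superblock.
agblocks=228894       # sb_agblocks: blocks per allocation group
agblklog=18           # sb_agblklog: bit width of the AG-relative part

fsbno=$(( (2 << agblklog) | 1000 ))          # a block in AG 2
agno=$((  fsbno >> agblklog ))               # recover the AG number
agbno=$(( fsbno & ((1 << agblklog) - 1) ))   # recover the block within the AG

echo "value read as a linear offset: $fsbno"                         # 525288
echo "actual linear block:           $(( agno * agblocks + agbno ))" # 458788
echo "overshoot per AG:              $(( (1 << agblklog) - agblocks ))" # 33250
```

In the kernel sources this split is what the XFS_FSB_TO_AGNO and XFS_FSB_TO_AGBNO macros perform.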
I cannot find any value in the superblock that tells me the difference between the pointer location and the actual data; the difference always turns out to be a multiple of the AG number. Please help me get out of this problem. Is there some other calculation for finding the data locations? I am currently consulting the document issued by SGI called "XFS Filesystem Structure", 2nd Edition. Regards Sandy -- View this message in context: http://www.nabble.com/Wrong-Data-Pointer-XFS-File-system-tf3963002.html#a11246839 Sent from the linux-xfs mailing list archive at Nabble.com. From owner-xfs@oss.sgi.com Sun Jun 24 23:34:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 23:34:54 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P6Ypdo003403 for ; Sun, 24 Jun 2007 23:34:52 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1I2hvd-0000Sd-2e; Mon, 25 Jun 2007 07:20:01 +0100 Date: Mon, 25 Jun 2007 07:20:01 +0100 From: Christoph Hellwig To: Eric Sandeen Cc: xfs-oss Subject: Re: [PATCH] remove hard-coded fnames from tracing functions Message-ID: <20070625062001.GA1307@infradead.org> References: <467F48B9.7060504@sandeen.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <467F48B9.7060504@sandeen.net> User-Agent: Mutt/1.4.2.3i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11901 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Sun, Jun 24, 2007 at 11:46:49PM -0500, Eric Sandeen wrote: > This has been on my stack a while, still compiles clean but feel free > to double-check. :-) Looks very nice to me. From owner-xfs@oss.sgi.com Sun Jun 24 23:34:54 2007 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Jun 2007 23:34:56 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P6Yrdo003420 for ; Sun, 24 Jun 2007 23:34:54 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1I2hwN-0000T4-B6; Mon, 25 Jun 2007 07:20:47 +0100 Date: Mon, 25 Jun 2007 07:20:47 +0100 From: Christoph Hellwig To: Eric Sandeen Cc: xfs-oss Subject: Re: [PATCH] simplify vnode tracing calls Message-ID: <20070625062047.GB1307@infradead.org> References: <467F5053.4040108@sandeen.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <467F5053.4040108@sandeen.net> User-Agent: Mutt/1.4.2.3i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11902 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Mon, Jun 25, 2007 at 12:19:15AM -0500, Eric Sandeen wrote: > Don't think I've sent this one yet... :) Any chance we can keep the name lower-case despite the simplified prototype? Also it might make sense to merge the previous patch into this one.
From owner-xfs@oss.sgi.com Mon Jun 25 00:53:20 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 00:53:24 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.9 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_12, J_CHICKENPOX_21,J_CHICKENPOX_41,URIBL_RHS_TLD_WHOIS autolearn=no version=3.2.0-pre1-r499012 Received: from mail.lichtvoll.de (mondschein.lichtvoll.de [194.150.191.11]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P7rHdo028717 for ; Mon, 25 Jun 2007 00:53:19 -0700 Received: from localhost (dslb-084-056-080-234.pools.arcor-ip.net [84.56.80.234]) by mail.lichtvoll.de (Postfix) with ESMTP id 29C8E5AD5A for ; Sat, 23 Jun 2007 17:23:16 +0200 (CEST) From: Martin Steigerwald To: linux-xfs@oss.sgi.com Subject: xfs_fsr and null byte areas in files (was: Re: xfs_fsr - problem with open files possible?) User-Agent: KMail/1.9.7 References: <200706151804.43067.Martin@lichtvoll.de> <4672C531.9020802@sandeen.net> (sfid-20070615_201058_223491_4316861A) In-Reply-To: <4672C531.9020802@sandeen.net> MIME-Version: 1.0 Content-Disposition: inline Date: Sat, 23 Jun 2007 17:23:14 +0200 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Message-Id: <200706231723.14767.Martin@lichtvoll.de> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11903 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: Martin@lichtvoll.de Precedence: bulk X-list: xfs Am Freitag 15 Juni 2007 schrieb Eric Sandeen: > Martin Steigerwald wrote: > > Could data loss happen when running xfs_fsr on files that are opened > > by an application? > > It should not; fsr performs a lot of safety checks and aborts under > problematic circumstances. 
It will skip files if:
>
> * mandatory locks are present
> * file is marked immutable, append-only, or nodefrag
> * filesystem is shut down
> * change/modify times have been altered since defrag started
> * original file is mmapped
>
> If you can clearly recreate it with xfs_fsr, it would be interesting to
> compare the good & bad index files to see how they differ; it might
> offer a clue as to what/why/how it changed.
>
> > I did not come across any other corrupted files except a Mercurial
> > repository. I cannot pinpoint this problem to XFS at all and have no
> > idea how and when it got corrupted. At least in my backup from some
> > weeks ago the repository was okay. Unfortunately I no longer know
> > whether I made a commit to that repository while xfs_fsr was
> > running or not. But I think I didn't.

Hello Eric!

Well, I have now found lots of corrupted Bazaar repositories as well. For all except one I had a working backup. Those were definitely not in use while xfs_fsr ran. I can't prove that the corruption came from xfs_fsr, but except for another XFS failure - which I doubt happened - I can't think of much else. Granted, I used some beta versions of Bazaar, but not on all of those corrupted repositories. I tried to reproduce this with my good repositories from the backup, but couldn't. I bet they were not fragmented; xfs_fsr **/* (which recursively matches all files on Z-Shell) in their directories returned rather quickly. It also happened on a second notebook: I have a T42 and a T23, and it happened on the T23 too. I had a closer look at the kind of corruption. Only the file content seemed to have changed, not the date of the file or its size. Thus I was in luck, because rsync simply didn't copy the corrupted repositories to my backup disk. I found out about this as I tried to rsync the corrected repositories from my T42 to the T23: rsync would not copy them. But when I used rsync -c it copied some files with differing checksums.
One example: --------------------------------------------------------------------- martin@shambala:~/.crm114> rsync -acnv .bzr/ .bzr-broken/ building file list ... done ./ branch-lock/ branch/ branch/lock/ checkout/ checkout/lock/ repository/ repository/knits/ repository/knits/22/ repository/knits/36/ repository/knits/36/mailtrainer.crm-20070329201046-91b62hipeixaywh9-5.knit repository/knits/3d/ repository/knits/4a/ repository/knits/4a/mailfilter.cf-20070329201046-91b62hipeixaywh9-1.knit repository/knits/76/ repository/knits/76/maillib.crm-20070329201046-91b62hipeixaywh9-3.knit repository/knits/8c/ repository/knits/8c/shuffle.crm-20070329201046-91b62hipeixaywh9-6.knit repository/knits/9c/ repository/knits/a9/ repository/knits/c5/ repository/knits/c5/mailreaver.crm-20070329201046-91b62hipeixaywh9-4.knit repository/lock/ repository/revision-store/ sent 2335 bytes received 368 bytes 5406.00 bytes/sec total size is 86288 speedup is 31.92 martin@shambala:~/.crm114> --------------------------------------------------------------------- Those files are differing - one example: --------------------------------------------------------------------- martin@shambala:~/.crm114> LANG=C cmp .bzr/repository/knits/76/maillib.crm-20070329201046-91b62hipeixaywh9-3.knit .bzr-broken/repository/knits/76/maillib.crm-20070329201046-91b62hipeixaywh9-3.knit .bzr/repository/knits/76/maillib.crm-20070329201046-91b62hipeixaywh9-3.knit .bzr-broken/repository/knits/76/maillib.crm-20070329201046-91b62hipeixaywh9-3.knit differ: char 8193, line 22 --------------------------------------------------------------------- The good one: --------------------------------------------------------------------- martin@shambala:~/.crm114> head -c9183 .bzr/repository/knits/76/maillib.crm-20070329201046-91b62hipeixaywh9-3.knit | tail -c1000 | od -h 0000000 d46d 2b8b 9b50 5ce8 e3f7 964c 6b3f 728b 0000020 8541 e12e f328 92a5 7061 2981 6232 7463 0000040 6583 0421 b568 481f f5db ce4c 6924 8b55 [... 
lots of data that looks like the first three lines ...] 0001700 a28d 2b64 d792 6dcc 96cb 15c7 f960 7e26 0001720 dc22 1d2d 6a0e 6ce0 3648 a050 c7ec 371c 0001740 743c 2502 e5e4 e1d1 0001750 --------------------------------------------------------------------- The broken one - this really looks like the null files problem!: --------------------------------------------------------------------- martin@shambala:~/.crm114> head -c9183 .bzr-broken/repository/knits/76/maillib.crm-20070329201046-91b62hipeixaywh9-3.knit | tail -c1000 | od -h 0000000 d46d 2b8b 9b50 5ce8 00f7 0000 0000 0000 0000020 0000 0000 0000 0000 0000 0000 0000 0000 * 0001740 0000 0000 0000 0000 0001750 --------------------------------------------------------------------- I checked at least one of those files with xfs_bmap and it was no sparse file, these are real zeros. I am quite a bit confused by that since I cannot remember that I had any abnormal write interruption while the xfs_fsr command was running or after it has finished. As I found out about corrupted files I just rebooted cleanly to my SUSE installation and ran xfs_check on the partitions I xfs_fsr'ed which turned out to be okay. Also certainly not all of these broken repositories had been in use while any abnormal write termination happened. I am doing checks for other broken files now: --------------------------------------------------------------------- find -type f -and -iname "*" -print0 | xargs -0 -I{} cmp {} /mnt/backup/home/martin/{} --------------------------------------------------------------------- Better with that one: --------------------------------------------------------------------- rsync -acnv /mnt/backup/home/martin/ /home/martin/ --------------------------------------------------------------------- Oh well, there are other files differing which should certainly be the same - since I didn't touch them in the meanwhile. And they contain zeros where there should be data as well as far as I looked. 
So whatever I faced here, it's a real corruption issue! And I know I will continue to use rsync without -c in my backup script, because that prevented the corrupted files from being copied to my backup! So maybe this is a bad interaction between xfs_fsr and the null files problem, or a problem in xfs_fsr... I for my part will be very cautious with xfs_fsr in the future. If I manage to take the time, I will likely create an XFS partition in the one GB that is left on my hard disk, copy certain directories into it simultaneously so that it hopefully creates some fragmentation, and then run xfs_fsr... I hope I will be able to track this down, because it's really bad behavior.

Is there an XFS QA test which tests xfs_fsr? I think there should be one, especially as this tool does not seem to be used that often out in the wild. Preferably one that copies lots of data to a partition in a way that gets it fragmented, md5sums it, runs xfs_fsr, and then compares the md5sums of the files. If anyone else has time, I suggest tests on that topic. It seems that something is going really wrong here, even if I cannot prove it right now. But maybe it was just a rare race condition. We'll see. I hope I will be able to reproduce it.

I ran xfs_fsr
---------------------------------------------------------------------
shambala:~> LANG=C apt-cache policy xfsdump
xfsdump:
  Installed: 2.2.38-1
  Candidate: 2.2.38-1
  Version table:
     2.2.45-1 0
        500 http://debian.n-ix.net lenny/main Packages
        500 http://debian.n-ix.net sid/main Packages
 *** 2.2.38-1 0
        990 http://debian.n-ix.net etch/main Packages
        100 /var/lib/dpkg/status
---------------------------------------------------------------------
on Linux kernel 2.6.21.3 on my XFS /home partition back then. Since images, music, mail files, movies and such are not automatically checked for corruption, and since not many files seem to be affected, I think I simply didn't notice other affected files.
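The QA test proposed above can be sketched as a checksum round-trip: record checksums, run the tool, verify nothing changed. A minimal shell shape, where the directory and the command under test are parameters (the xfs_fsr invocation in the comment is a hypothetical example on a scratch partition):

```shell
# checksum_roundtrip DIR CMD [ARGS...]: md5sum every file under DIR,
# run CMD, then verify no file content changed.  Returns non-zero on
# any mismatch.  Hypothetical example:
#   checksum_roundtrip /mnt/scratch xfs_fsr -v /mnt/scratch
checksum_roundtrip() {
    dir=$1; shift
    sums=$(mktemp) || return 1
    find "$dir" -type f -print0 | xargs -0 -r md5sum > "$sums"
    "$@"                          # run the tool under test
    md5sum -c --quiet "$sums"     # fails if any checksum differs
    rc=$?
    rm -f "$sums"
    return $rc
}
```

Wrapped in a setup that deliberately fragments a small scratch filesystem (e.g. interleaved writers filling it), this is roughly the check an xfstests-style case would perform.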
Will likely take quite some time, as I changed too many files since my last backup...

OK, now on to recovery:
- xfs_check the xfs_fsr-involved partitions - they are fine
- xfs_check the backup partitions - they are fine, though

The most automatic way to get things recovered as well as possible is to rsync my current data to the backup partition *without* determining changed files by their checksums, then rsync the updated backup back to the workplace *with* determining changed files by their checksums:
---------------------------------------------------------------------
shambala:~ # rsync -axv --acls --sparse --delete /mnt/debian/ /mnt/backup/debian/
shambala:~ # rsync -axv --acls --sparse --delete /mnt/debian/ /mnt/backup/debian/
building file list ... done
sent 10306228 bytes received 20 bytes 38818.26 bytes/sec
total size is 8376658632 speedup is 812.77
shambala:~ # rsync -acxv --acls --sparse --del /mnt/backup/debian/ /mnt/debian/ | tee /home/martin/XFS-Probleme/2007-07-23/rsync-acxv-debian.txt
---------------------------------------------------------------------
Now look what is in "rsync-acxv-debian.txt":
---------------------------------------------------------------------
building file list ... done
.kde/share/apps/kconf_update/log/update.log
[ ... above one shouldn't be here at / at all of course, however I managed it to be there ;-) ...
]
boot/grub/default
lost+found/8937921
lost+found/8937923
usr/bin/nslookup
usr/games/gnubg
usr/lib/libGLU.a
usr/lib/libclucene.so.0.0.0
usr/lib/libdns.so.22.1.0
usr/lib/libkeximain.so.2.0.0
usr/lib/libkoproperty.so.2.0.0
usr/lib/libkwordprivate.so.4.0.0
usr/lib/libpoppler.so.0.0.0
usr/lib/flashplugin-nonfree/libflashplayer.so
usr/lib/gcc/i486-linux-gnu/4.1.2/cc1
usr/lib/jvm/java-1.5.0-sun-1.5.0.10/jre/lib/charsets.jar
usr/lib/jvm/java-1.5.0-sun-1.5.0.10/jre/lib/i386/client/libjvm.so
usr/lib/jvm/java-1.5.0-sun-1.5.0.10/jre/lib/i386/motif21/libmawt.so
usr/lib/mono/gac/System.Data/1.0.5000.0__b77a5c561934e089/System.Data.dll.mdb
usr/lib/python2.3/site-packages/PyQt4/QtGui.so
usr/lib/python2.4/site-packages/PyQt4/QtGui.so
usr/share/X11/doc/hardcopy/XKB/XKBlib.ps.gz
usr/share/doc/aircrack-ng/injection-patches/linux-wlan-0.2.3.packet.injection.patch.gz
usr/share/doc/dahb-html/html/bilder/allgemein/debian-devel-earth.png
usr/share/doc/dahb-html/html/bilder/allgemein/openlogo-nd.png
usr/share/doc/dahb-html/html/bilder/sarge/sarge1.png
usr/share/doc/dahb-html/html/bilder/sarge/sargebasec17.png
usr/share/doc/dahb-html/html/bilder/sarge/sargebasesecurity.png
usr/share/doc/dahb-html/html/bilder/sarge/sargedomainname.png
usr/share/doc/dahb-html/html/bilder/sarge/sargehostname.png
usr/share/doc/dahb-html/html/bilder/sarge/sargepart1.png
[... it goes on like this ...]
---------------------------------------------------------------------
This is nasty. Some of those would have been very difficult to find. Heck, that flash player even worked OK yesterday on YouTube. Luckily not that many files seem to have been affected (aside from those broken repositories and the KMail index files I already fixed):

martin@shambala:Shambala/XFS-Probleme/2007-07-23> wc -l rsync-acxv-debian.txt
117 rsync-acxv-debian.txt
martin@shambala:Shambala/XFS-Probleme/2007-07-23> wc -l rsync-acxv-debian-home.txt
232 rsync-acxv-debian-home.txt

Well, at least on my T42 I recovered almost all important data.
One Bazaar repository got lost, but I can get the version history from someone else. And there is a slight risk that some of the files I changed in the meantime have zero-byte areas as well that haven't come to my attention yet. Well, that shows me that a checksumming filesystem would really be good! It could have told me during a check that file contents had been corrupted. And I learned quite a lot:

- If there is any slight chance of corruption (those corrupted KMail index files), check it out before doing anything else. Do not assume that only those files have been affected; check with rsync -c.
- If there are broken files, restore the backup before doing anything else. The exception: I have spare storage where I could store the filesystems with the broken files first for further analysis. That would have helped me to have next to zero data loss, and it would have been much quicker to restore everything to normal.

And another one:

- Do not try an online defragmentation on valuable data before testing it first on test data.

I still can't prove it, but I am quite sure it was xfs_fsr, maybe in combination with some XFS problem. It was the only tool that could have accessed all those files. I know for sure I didn't do write accesses on many of them. Some more information about my T42...
Kernel version and Debian installation on my T23 are pretty similar, only drivers differ a bit:

martin@shambala:~> cat /proc/version
Linux version 2.6.21.3-tp42-cfs-v15-sws2-2.2.10 (martin@shambala) (gcc version 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)) #1 PREEMPT Sun Jun 3 21:43:50 CEST 2007

shambala:~> xfs_info /
meta-data=/dev/root      isize=256    agcount=16, agsize=228894 blks
         =               sectsz=512   attr=1
data     =               bsize=4096   blocks=3662304, imaxpct=25
         =               sunit=0      swidth=0 blks, unwritten=1
naming   =version 2      bsize=4096
log      =internal       bsize=4096   blocks=2560, version=1
         =               sectsz=512   sunit=0 blks, lazy-count=0
realtime =none           extsz=65536  blocks=0, rtextents=0

shambala:~> xfs_info /home
meta-data=/dev/sda2      isize=256    agcount=19, agsize=686660 blks
         =               sectsz=512   attr=0
data     =               bsize=4096   blocks=12695508, imaxpct=25
         =               sunit=0      swidth=0 blks, unwritten=1
naming   =version 2      bsize=4096
log      =internal       bsize=4096   blocks=5364, version=1
         =               sectsz=512   sunit=0 blks, lazy-count=0
realtime =none           extsz=65536  blocks=0, rtextents=0

Ok, I am quite exhausted now. And there is still the T23 to do... (in progress at the moment).
Regards, -- Martin 'Helios' Steigerwald - http://www.Lichtvoll.de GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7 From owner-xfs@oss.sgi.com Mon Jun 25 01:06:39 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 01:06:45 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.1 required=5.0 tests=BAYES_60,J_CHICKENPOX_43 autolearn=no version=3.2.0-pre1-r499012 Received: from smtp.sauce.co.nz (smtp.sauce.co.nz [210.48.49.72]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5P86bdo032264 for ; Mon, 25 Jun 2007 01:06:38 -0700 Received: (qmail 15913 invoked from network); 25 Jun 2007 00:59:57 -0000 Received: from unknown (HELO smtp22.sauce.co.nz) (192.168.1.4) by smtp.sauce.co.nz with SMTP; 25 Jun 2007 00:59:57 -0000 Received: by smtp22.sauce.co.nz (Postfix, from userid 100) id 441AE28825F; Mon, 25 Jun 2007 12:59:57 +1200 (NZST) Received: from [192.168.4.111] (unknown [192.168.4.111]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by smtp22.sauce.co.nz (Postfix) with ESMTP id 2506028825B for ; Mon, 25 Jun 2007 12:59:57 +1200 (NZST) Message-ID: <467F144F.3020804@sauce.co.nz> Date: Mon, 25 Jun 2007 13:03:11 +1200 From: Richard Scobie User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.2) Gecko/20040805 Netscape/7.2 X-Accept-Language: en-us, en MIME-Version: 1.0 To: xfs@oss.sgi.com Subject: sunit-swidth parameters Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-DSPAM-Result: Innocent X-DSPAM-Processed: Mon Jun 25 12:59:57 2007 X-DSPAM-Confidence: 1.0000 X-DSPAM-Improbability: 1 in 98689407 chance of being spam X-DSPAM-Probability: 0.0023 X-DSPAM-Signature: 467f138d206864475714001 X-DSPAM-Factors: 27, mount+the, 0.40000, User-Agent*1.7.2)+Gecko/20040805, 0.40000, the+mailing, 0.40000, Date*1200, 0.40000, =+sectsz=512, 0.40000, have+come, 0.40000, attr=0, 
0.40000, disk+RAID5, 0.40000, mkfs+xfs, 0.40000, mkfs+xfs, 0.40000, or, 0.40000, an, 0.40000, swidth=128+blks, 0.40000, blocks=10000, 0.40000, Subject*swidth, 0.40000, Date*2007+13, 0.40000, from, 0.40000, these+parameters, 0.40000, sunit=32, 0.40000, of, 0.40000, as+I, 0.40000, f+l, 0.40000, User-Agent*Mozilla/5.0, 0.40000, then+I, 0.40000, am+I, 0.40000, Received*ESMTP, 0.40000, am+about, 0.40000 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11904 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: richard@sauce.co.nz Precedence: bulk X-list: xfs I am under the impression that mkfs.xfs is able to obtain details from the Linux md subsystem, such that it will automatically create the optimal sunit and swidth parameters, to suit an md RAID array on which the filesystem is being created. Please correct me if I am wrong. If this is the case, then I am a little confused by the following: mkfs.xfs -f -l logdev=/dev/md1,size=10000b /dev/md5 meta-data=/dev/md5 isize=256 agcount=32, agsize=7630656 blks = sectsz=4096 attr=0 data = bsize=4096 blocks=244179840, imaxpct=25 = sunit=64 swidth=128 blks, unwritten=1 naming =version 2 bsize=4096 log =/dev/md1 bsize=4096 blocks=10000, version=1 = sectsz=512 sunit=0 blks realtime =none extsz=524288 blocks=0, rtextents=0 This is the output where /dev/md5 is a 3 disk RAID5, with a chunk size of 128kB. After reading the man page and looking at some examples in the mailing list archives, I would have thought that the best sizes would have been sunit=32 and swidth=64, or am I wrong? I ask, as I am about to resize this array by adding another drive and am trying to work out the new values for these parameters, to pass to mount - the values I have come up with are sunit=256 and swidth=768. 
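Part of the confusion here is units: mkfs.xfs prints sunit/swidth in filesystem blocks (bsize = 4096 above), while the sunit/swidth mount options are specified in 512-byte sectors. A quick sanity calculation for a 128 KiB chunk, a sketch of the arithmetic only, not of any probing mkfs does:

```shell
# sunit/swidth for a 128 KiB chunk RAID5, in both unit systems.
chunk_kb=128
bsize=4096

for data_disks in 2 3; do   # 3-disk RAID5 = 2 data disks; 4-disk = 3
    sunit_blks=$((  chunk_kb * 1024 / bsize ))
    swidth_blks=$(( sunit_blks * data_disks ))
    sunit_sect=$((  chunk_kb * 1024 / 512 ))
    swidth_sect=$(( sunit_sect * data_disks ))
    echo "$(( data_disks + 1 ))-disk RAID5: mkfs blks sunit=$sunit_blks swidth=$swidth_blks; mount sectors sunit=$sunit_sect swidth=$swidth_sect"
done
```

This gives sunit=32, swidth=64 blocks for the 3-disk array, in line with the expectation in the question (the reported 64/128 would instead correspond to a 256 KiB chunk), and sunit=256, swidth=768 sectors as the mount-time values for the grown 4-disk array, matching the figures at the end of the question.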
Thanks, Richard From owner-xfs@oss.sgi.com Mon Jun 25 03:31:26 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 03:31:27 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5PAVLdo005904 for ; Mon, 25 Jun 2007 03:31:25 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id UAA09153; Mon, 25 Jun 2007 20:31:18 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5PAVHeW032906; Mon, 25 Jun 2007 20:31:18 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5PAVG4X032904; Mon, 25 Jun 2007 20:31:16 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 25 Jun 2007 20:31:16 +1000 From: David Chinner To: Justin Piszcz Cc: xfs@oss.sgi.com Subject: Re: SLUB Allocator? Message-ID: <20070625103116.GA31489@sgi.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11905 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Jun 21, 2007 at 12:42:03PM -0400, Justin Piszcz wrote: > Is there any XFS filesystem performance benefit using the new SLUB > allocator? /me shrugs When it grows memory defrag of caches, yes. 
Otherwise I can't see there being any noticeable difference. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 25 03:49:26 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 03:49:28 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5PAnMdo010255 for ; Mon, 25 Jun 2007 03:49:24 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id UAA09519; Mon, 25 Jun 2007 20:49:18 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5PAnGeW038319; Mon, 25 Jun 2007 20:49:17 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5PAnEjd038313; Mon, 25 Jun 2007 20:49:14 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Mon, 25 Jun 2007 20:49:14 +1000 From: David Chinner To: Eric Sandeen Cc: xfs-oss Subject: Re: [PATCH] remove hard-coded fnames from tracing functions Message-ID: <20070625104914.GB31489@sgi.com> References: <467F48B9.7060504@sandeen.net> <467F4A2F.2060101@sandeen.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <467F4A2F.2060101@sandeen.net> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11906 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk
X-list: xfs On Sun, Jun 24, 2007 at 11:53:03PM -0500, Eric Sandeen wrote: > Eric Sandeen wrote: > > This has been on my stack a while, still compiles clean but feel free > > to double-check. :-) > > > > xfs_alloc.c | 53 ++-------- > > xfs_bmap.c | 266 +++++++++++++++++++++++-------------------------------- > > xfs_bmap.h | 6 - > > xfs_bmap_btree.c | 88 +++--------------- > > xfs_inode.c | 8 - > > 5 files changed, 149 insertions(+), 272 deletions(-) > > > > --------------- > > > > Remove the hardcoded "fnames" for tracing, and just embed > > them in tracing macros via __FUNCTION__. Kills a lot of #ifdefs > > too. > > Hm... guess I did send this one already, but don't see it in cvs... > Dave, is it in your stack yet? It got as far as my incoming directory. I even remember reviewing it, and then I forgot to import the patch. My bad. I'll add it now. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 25 06:09:00 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 06:09:02 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.5 required=5.0 tests=AWL,BAYES_99,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5PD8wdo027823 for ; Mon, 25 Jun 2007 06:09:00 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id D87001C076EC1; Mon, 25 Jun 2007 09:08:59 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id C984D4019510; Mon, 25 Jun 2007 09:08:59 -0400 (EDT) Date: Mon, 25 Jun 2007 09:08:59 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: David Chinner cc: xfs@oss.sgi.com Subject: 128k proved to be optimal for max_sectors_kb Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; 
format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11907 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Dave, You stated earlier why I used 128k for max_sectors_kb? It seems to offer the best performance overall (in terms of overall-bonnie runtime as well) p34-4096,15696M,77157,98.6667,443058,87.6667,176913,32,79369.7,99,472359,34,594.1,0,16:100000:16/64,1131.67,9.66667,4966,14,3184,19.3333,1222.67,11,4871.33,14.3333,2470,17.3333 p34-1024,15696M,77509.7,99,427413,85.3333,165722,30.3333,80305,99,443508,32,624.533,0.333333,16:100000:16/64,1103,9.33333,4853.67,13.3333,3180,19,1239,11,4301.33,13,2627,19 p34-512,15696M,76935.3,99,448343,87.6667,175319,32,78003,99,465689,33.6667,606.367,0.333333,16:100000:16/64,1148.33,9.33333,3240.33,9.33333,3014.33,18,1203.67,10.6667,5193.33,14.6667,2521.33,16.6667 p34-128,15696M,76202.3,99,443103,85,189716,34.6667,79552,99,507271,39.6667,607.067,0,16:100000:16/64,1153,10,13434,36,2769.67,16.3333,1201.67,10.6667,3951.33,12,2665.67,19 max_sectors_kb = 4 $ dd if=/dev/zero of=file.out1 bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 124.779 seconds, 86.1 MB/s max_sectors_kb = 8 $ dd if=/dev/zero of=file.out2 bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 55.4848 seconds, 194 MB/s max_sectors_kb = 16 $ dd if=/dev/zero of=file.out3 bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 37.6886 seconds, 285 MB/s max_sectors_kb = 32 $ dd if=/dev/zero of=file.out4 bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 26.2875 seconds, 408 MB/s max_sectors_kb = 64 $ dd if=/dev/zero of=file.out5 bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) 
copied, 24.8301 seconds, 432 MB/s max_sectors_kb = 128 $ dd if=/dev/zero of=file.out6 bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 22.6298 seconds, 474 MB/s 10737418240 bytes (11 GB) copied, 22.6298 seconds, 460 MB/s [again] 10737418240 bytes (11 GB) copied, 22.9520 seconds, 468 MB/s [again] max_sectors_kb = 256 $ dd if=/dev/zero of=file.out7 bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 22.5042 seconds, 477 MB/s 10737418240 bytes (11 GB) copied, 23.3488 seconds, 460 MB/s [again] 10737418240 bytes (11 GB) copied, 23.3488 seconds, 431 MB/s [again] 10737418240 bytes (11 GB) copied, 23.3488 seconds, 434 MB/s [again] max_sectors_kb = 512 $ dd if=/dev/zero of=file.out8 bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 23.6269 seconds, 454 MB/s max_sectors_kb = 1024 $ dd if=/dev/zero of=file.out9 bs=1M count=10240 10737418240 bytes (11 GB) copied, 22.3177 seconds, 481 MB/s 10737418240 bytes (11 GB) copied, 22.8935 seconds, 469 MB/s [again] 10737418240 bytes (11 GB) copied, 23.4506 seconds, 458 MB/s [again] 10737418240 bytes (11 GB) copied, 23.5932 seconds, 455 MB/s [again] Justin. 
From owner-xfs@oss.sgi.com Mon Jun 25 06:28:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 06:28:15 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=2.0 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from e33.co.us.ibm.com (e33.co.us.ibm.com [32.97.110.151]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5PDS6do000680 for ; Mon, 25 Jun 2007 06:28:08 -0700 Received: from d03relay04.boulder.ibm.com (d03relay04.boulder.ibm.com [9.17.195.106]) by e33.co.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5PDS5Vc011107 for ; Mon, 25 Jun 2007 09:28:05 -0400 Received: from d03av04.boulder.ibm.com (d03av04.boulder.ibm.com [9.17.195.170]) by d03relay04.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5PDS3Hi208590 for ; Mon, 25 Jun 2007 07:28:03 -0600 Received: from d03av04.boulder.ibm.com (loopback [127.0.0.1]) by d03av04.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5PDS2bZ021395 for ; Mon, 25 Jun 2007 07:28:03 -0600 Received: from amitarora.in.ibm.com (amitarora.in.ibm.com [9.124.31.198]) by d03av04.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5PDS1s2021255; Mon, 25 Jun 2007 07:28:02 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id 0513118B996; Mon, 25 Jun 2007 18:58:12 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5PDSAYW004462; Mon, 25 Jun 2007 18:58:10 +0530 Date: Mon, 25 Jun 2007 18:58:10 +0530 From: "Amit K. 
Arora" To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org Cc: David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: [PATCH 0/6][TAKE5] fallocate system call Message-ID: <20070625132810.GA1951@amitarora.in.ibm.com> References: <20070510005926.GT85884050@sgi.com> <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070614193347.GN5181@schatzie.adilger.int> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11908 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs N O T E: ------- 1) Only Patches 4/7 and 7/7 are NEW. The rest of them are _already_ part of the ext4 patch queue git tree hosted by Ted. 2) The above new patches (4/7 and 7/7) are based on the discussion between Andreas Dilger and David Chinner on the mode argument, when the latter posted a man page on fallocate. 3) All of these patches are based on the 2.6.22-rc4 kernel and apply to 2.6.22-rc5 too (with some successful hunks, though - since the ext4 patch queue git tree has some other patches as well before the fallocate patches in the patch series). Changelog: --------- Changes from Take4 to Take5: 1) New Patch 4/7 implements new flags and values for the mode argument of the fallocate system call. 2) New Patch 7/7 implements 2 (out of 4) modes in ext4. Implementation of the rest of the (two) modes is yet to be done.
3) Updated the interface description below to mention the new modes being supported. 4) Removed the "extent overlap check" bugfix (patch 4/6 in TAKE4), since it is now part of mainline. 5) Corrected the format of a couple of multi-line comments, which got missed in the earlier take. Changes from Take2 to Take3: 1) The return type is now described in the interface description above. 2) Patches rebased to the 2.6.22-rc1 kernel. ** Each post will have an individual changelog for a particular patch. Description: ----------- fallocate() is a new system call being proposed here which will allow applications to preallocate space for any file(s) in a file system. Each file system implementation that wants to use this feature will need to support an inode operation called fallocate. Applications can use this feature to avoid fragmentation to a certain level and thus get faster access speeds. With preallocation, applications also get a guarantee of space for particular file(s) - even if the system later becomes full. Currently, glibc provides an interface called posix_fallocate() which can be used for a similar purpose. Though this has the advantage of working on all file systems, it is quite slow (since it writes zeroes to each block that has to be preallocated). Without a doubt, file systems can do this more efficiently within the kernel, by implementing the proposed fallocate() system call. It is expected that posix_fallocate() will be modified to call this new system call first and, in case the kernel/filesystem does not implement it, fall back to the current implementation of writing zeroes to the new blocks. Interface: --------- The system call's layout is: asmlinkage long sys_fallocate(int fd, int mode, loff_t offset, loff_t len); fd: The descriptor of the open file. mode*: This specifies the behavior of the system call. Currently the system call supports four modes - FA_ALLOCATE, FA_DEALLOCATE, FA_RESV_SPACE and FA_UNRESV_SPACE.
FA_ALLOCATE: Applications can use this mode to preallocate blocks for a given file (specified by fd). This mode changes the file size if the preallocation is done beyond EOF. It also updates the ctime in the inode of the corresponding file, marking a successful allocation. FA_RESV_SPACE: This mode is much the same as FA_ALLOCATE, the only difference being that the file size will not be changed. FA_DEALLOCATE: This mode can be used by applications to deallocate previously preallocated blocks. It may also change the file size and the ctime/mtime. This is the reverse of the FA_ALLOCATE mode. FA_UNRESV_SPACE: This mode is much the same as FA_DEALLOCATE, the difference being that the file size is not changed and the data is also deleted. * New modes might get added in the future. offset: This is the offset in bytes from where the preallocation should start. len: This is the number of bytes requested for preallocation (from offset). RETURN VALUE: The system call returns 0 on success and an error on failure. This is done to keep the semantics the same as those of posix_fallocate(). sys_fallocate() on s390: ----------------------- There is a problem with the s390 ABI in implementing sys_fallocate() with the proposed order of arguments. Martin Schwidefsky has suggested a patch to solve this problem which makes use of a wrapper in the kernel. This will require special handling of this system call on s390 in glibc as well. But this seems to be the best solution so far. Known Problem: ------------- mmapped writes into uninitialized extents are a known problem with the current ext4 patches. Like XFS, ext4 may need to implement ->page_mkwrite() to solve this. See: Since there is talk of ->fault() replacing ->page_mkwrite(), and with a generic block_page_mkwrite() implementation already posted, we can implement this later some time. See: ToDos: ----- 1> Implementation on other architectures (other than i386, x86_64, ia64, ppc64 and s390(x)).
2> A generic file system operation to handle fallocate (generic_fallocate), for filesystems that do _not_ have the fallocate inode operation implemented. 3> Changes to glibc, a) to support fallocate() system call b) to make posix_fallocate() and posix_fallocate64() call fallocate() Following patches follow: Patch 1/6 : fallocate() implementation on i386, x86_64 and powerpc Patch 2/7 : fallocate() on s390(x) Patch 3/7 : fallocate() on ia64 Patch 4/7 : support new modes in fallocate Patch 5/7 : ext4: fallocate support in ext4 Patch 6/7 : ext4: write support for preallocated blocks Patch 7/7 : ext4: support new modes From owner-xfs@oss.sgi.com Mon Jun 25 06:42:46 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 06:42:49 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.7 required=5.0 tests=AWL,BAYES_80,J_CHICKENPOX_13, J_CHICKENPOX_16,J_CHICKENPOX_23,J_CHICKENPOX_24 autolearn=no version=3.2.0-pre1-r499012 Received: from e32.co.us.ibm.com (e32.co.us.ibm.com [32.97.110.150]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5PDgjdo006046 for ; Mon, 25 Jun 2007 06:42:46 -0700 Received: from d03relay04.boulder.ibm.com (d03relay04.boulder.ibm.com [9.17.195.106]) by e32.co.us.ibm.com (8.12.11.20060308/8.13.8) with ESMTP id l5PDboiw001955 for ; Mon, 25 Jun 2007 09:37:50 -0400 Received: from d03av03.boulder.ibm.com (d03av03.boulder.ibm.com [9.17.195.169]) by d03relay04.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5PDgkbl215560 for ; Mon, 25 Jun 2007 07:42:46 -0600 Received: from d03av03.boulder.ibm.com (loopback [127.0.0.1]) by d03av03.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5PDgjAb018279 for ; Mon, 25 Jun 2007 07:42:46 -0600 Received: from amitarora.in.ibm.com (amitarora.in.ibm.com [9.124.31.198]) by d03av03.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5PDgiOV018178; Mon, 25 Jun 2007 07:42:45 -0600 Received: from 
amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id A104918B996; Mon, 25 Jun 2007 19:12:55 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5PDgtu8010528; Mon, 25 Jun 2007 19:12:55 +0530 Date: Mon, 25 Jun 2007 19:12:55 +0530 From: "Amit K. Arora" To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org Cc: David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: [PATCH 2/7][TAKE5] fallocate() on s390(x) Message-ID: <20070625134255.GC1951@amitarora.in.ibm.com> References: <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625132810.GA1951@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11909 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs This is the patch suggested by Martin Schwidefsky to support sys_fallocate() on s390(x) platform. He also suggested a wrapper in glibc to handle this system call on s390. Posting it here so that we get feedback for this too. 
.globl __fallocate ENTRY(__fallocate) stm %r6,%r7,28(%r15) /* save %r6/%r7 on stack */ cfi_offset (%r7, -68) cfi_offset (%r6, -72) lm %r6,%r7,96(%r15) /* load loff_t len from stack */ svc SYS_ify(fallocate) lm %r6,%r7,28(%r15) /* restore %r6/%r7 from stack */ br %r14 PSEUDO_END(__fallocate) Here are the comments and the patch to linux kernel from him. ------------- From: Martin Schwidefsky This patch implements support of fallocate system call on s390(x) platform. A wrapper is added to address the issue which s390 ABI has with the arguments of this system call. Signed-off-by: Martin Schwidefsky Index: linux-2.6.22-rc4/arch/s390/kernel/compat_wrapper.S =================================================================== --- linux-2.6.22-rc4.orig/arch/s390/kernel/compat_wrapper.S 2007-06-11 16:16:01.000000000 -0700 +++ linux-2.6.22-rc4/arch/s390/kernel/compat_wrapper.S 2007-06-11 16:27:29.000000000 -0700 @@ -1683,6 +1683,16 @@ llgtr %r3,%r3 # struct compat_timeval * jg compat_sys_utimes + .globl sys_fallocate_wrapper +sys_fallocate_wrapper: + lgfr %r2,%r2 # int + lgfr %r3,%r3 # int + sllg %r4,%r4,32 # get high word of 64bit loff_t + lr %r4,%r5 # get low word of 64bit loff_t + sllg %r5,%r6,32 # get high word of 64bit loff_t + l %r5,164(%r15) # get low word of 64bit loff_t + jg sys_fallocate + .globl compat_sys_utimensat_wrapper compat_sys_utimensat_wrapper: llgfr %r2,%r2 # unsigned int Index: linux-2.6.22-rc4/arch/s390/kernel/sys_s390.c =================================================================== --- linux-2.6.22-rc4.orig/arch/s390/kernel/sys_s390.c 2007-06-11 16:16:01.000000000 -0700 +++ linux-2.6.22-rc4/arch/s390/kernel/sys_s390.c 2007-06-11 16:27:29.000000000 -0700 @@ -265,3 +265,32 @@ return -EFAULT; return sys_fadvise64_64(a.fd, a.offset, a.len, a.advice); } + +#ifndef CONFIG_64BIT +/* + * This is a wrapper to call sys_fallocate(). For 31 bit s390 the last + * 64 bit argument "len" is split into the upper and lower 32 bits. 
The + * system call wrapper in the user space loads the value to %r6/%r7. + * The code in entry.S keeps the values in %r2 - %r6 where they are and + * stores %r7 to 96(%r15). But the standard C linkage requires that + * the whole 64 bit value for len is stored on the stack and doesn't + * use %r6 at all. So s390_fallocate has to convert the arguments from + * %r2: fd, %r3: mode, %r4/%r5: offset, %r6/96(%r15)-99(%r15): len + * to + * %r2: fd, %r3: mode, %r4/%r5: offset, 96(%r15)-103(%r15): len + */ +asmlinkage long s390_fallocate(int fd, int mode, loff_t offset, + u32 len_high, u32 len_low) +{ + union { + u64 len; + struct { + u32 high; + u32 low; + }; + } cv; + cv.high = len_high; + cv.low = len_low; + return sys_fallocate(fd, mode, offset, cv.len); +} +#endif Index: linux-2.6.22-rc4/arch/s390/kernel/syscalls.S =================================================================== --- linux-2.6.22-rc4.orig/arch/s390/kernel/syscalls.S 2007-06-11 16:16:01.000000000 -0700 +++ linux-2.6.22-rc4/arch/s390/kernel/syscalls.S 2007-06-11 16:27:29.000000000 -0700 @@ -322,6 +322,7 @@ SYSCALL(sys_getcpu,sys_getcpu,sys_getcpu_wrapper) SYSCALL(sys_epoll_pwait,sys_epoll_pwait,compat_sys_epoll_pwait_wrapper) SYSCALL(sys_utimes,sys_utimes,compat_sys_utimes_wrapper) +SYSCALL(s390_fallocate,sys_fallocate,sys_fallocate_wrapper) NI_SYSCALL /* 314 sys_fallocate */ SYSCALL(sys_utimensat,sys_utimensat,compat_sys_utimensat_wrapper) /* 315 */ SYSCALL(sys_signalfd,sys_signalfd,compat_sys_signalfd_wrapper) Index: linux-2.6.22-rc4/include/asm-s390/unistd.h =================================================================== --- linux-2.6.22-rc4.orig/include/asm-s390/unistd.h 2007-06-11 16:16:01.000000000 -0700 +++ linux-2.6.22-rc4/include/asm-s390/unistd.h 2007-06-11 16:27:29.000000000 -0700 @@ -256,7 +256,8 @@ #define __NR_signalfd 316 #define __NR_timerfd 317 #define __NR_eventfd 318 -#define NR_syscalls 319 +#define __NR_fallocate 319 +#define NR_syscalls 320 /* * There are some system calls 
that are not present on 64 bit, some From owner-xfs@oss.sgi.com Mon Jun 25 06:43:44 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 06:43:50 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.1 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from e31.co.us.ibm.com (e31.co.us.ibm.com [32.97.110.149]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5PDhgdo006558 for ; Mon, 25 Jun 2007 06:43:43 -0700 Received: from d03relay02.boulder.ibm.com (d03relay02.boulder.ibm.com [9.17.195.227]) by e31.co.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5PDhhfP031509 for ; Mon, 25 Jun 2007 09:43:43 -0400 Received: from d03av04.boulder.ibm.com (d03av04.boulder.ibm.com [9.17.195.170]) by d03relay02.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5PDhhSv231252 for ; Mon, 25 Jun 2007 07:43:43 -0600 Received: from d03av04.boulder.ibm.com (loopback [127.0.0.1]) by d03av04.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5PDhgx3020366 for ; Mon, 25 Jun 2007 07:43:43 -0600 Received: from amitarora.in.ibm.com (amitarora.in.ibm.com [9.124.31.198]) by d03av04.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5PDhfYo020296; Mon, 25 Jun 2007 07:43:42 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id 9154718B996; Mon, 25 Jun 2007 19:13:52 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5PDhq7Q010948; Mon, 25 Jun 2007 19:13:52 +0530 Date: Mon, 25 Jun 2007 19:13:52 +0530 From: "Amit K. 
Arora" To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org Cc: David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: [PATCH 3/7][TAKE5] fallocate() on ia64 Message-ID: <20070625134352.GD1951@amitarora.in.ibm.com> References: <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625132810.GA1951@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11910 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs fallocate() on ia64 ia64 fallocate syscall support. 
Signed-off-by: Dave Chinner Index: linux-2.6.22-rc4/arch/ia64/kernel/entry.S =================================================================== --- linux-2.6.22-rc4.orig/arch/ia64/kernel/entry.S 2007-06-11 17:22:15.000000000 -0700 +++ linux-2.6.22-rc4/arch/ia64/kernel/entry.S 2007-06-11 17:30:37.000000000 -0700 @@ -1588,5 +1588,6 @@ data8 sys_signalfd data8 sys_timerfd data8 sys_eventfd + data8 sys_fallocate // 1310 .org sys_call_table + 8*NR_syscalls // guard against failures to increase NR_syscalls Index: linux-2.6.22-rc4/include/asm-ia64/unistd.h =================================================================== --- linux-2.6.22-rc4.orig/include/asm-ia64/unistd.h 2007-06-11 17:22:15.000000000 -0700 +++ linux-2.6.22-rc4/include/asm-ia64/unistd.h 2007-06-11 17:30:37.000000000 -0700 @@ -299,11 +299,12 @@ #define __NR_signalfd 1307 #define __NR_timerfd 1308 #define __NR_eventfd 1309 +#define __NR_fallocate 1310 #ifdef __KERNEL__ -#define NR_syscalls 286 /* length of syscall table */ +#define NR_syscalls 287 /* length of syscall table */ /* * The following defines stop scripts/checksyscalls.sh from complaining about From owner-xfs@oss.sgi.com Mon Jun 25 06:45:00 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 06:45:03 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.6 required=5.0 tests=AWL,BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from e34.co.us.ibm.com (e34.co.us.ibm.com [32.97.110.152]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5PDivdo007256 for ; Mon, 25 Jun 2007 06:44:59 -0700 Received: from d03relay02.boulder.ibm.com (d03relay02.boulder.ibm.com [9.17.195.227]) by e34.co.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5PDivqv024261 for ; Mon, 25 Jun 2007 09:44:57 -0400 Received: from d03av03.boulder.ibm.com (d03av03.boulder.ibm.com [9.17.195.169]) by d03relay02.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id 
l5PDiuLe238074 for ; Mon, 25 Jun 2007 07:44:56 -0600 Received: from d03av03.boulder.ibm.com (loopback [127.0.0.1]) by d03av03.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5PDiuHP025824 for ; Mon, 25 Jun 2007 07:44:56 -0600 Received: from amitarora.in.ibm.com (amitarora.in.ibm.com [9.124.31.198]) by d03av03.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5PDisTS025686; Mon, 25 Jun 2007 07:44:55 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id ED0F118B996; Mon, 25 Jun 2007 19:15:00 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5PDj01O011451; Mon, 25 Jun 2007 19:15:00 +0530 Date: Mon, 25 Jun 2007 19:15:00 +0530 From: "Amit K. Arora" To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org Cc: David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070625134500.GE1951@amitarora.in.ibm.com> References: <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625132810.GA1951@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11911 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs Implement new 
flags and values for mode argument. This patch implements the new flags and values for the "mode" argument of the fallocate system call. It is based on the discussion between Andreas Dilger and David Chinner on the man page proposed (by the later) on fallocate. Signed-off-by: Amit Arora Index: linux-2.6.22-rc4/include/linux/fs.h =================================================================== --- linux-2.6.22-rc4.orig/include/linux/fs.h +++ linux-2.6.22-rc4/include/linux/fs.h @@ -267,15 +267,16 @@ extern int dir_notify_enable; #define SYNC_FILE_RANGE_WAIT_AFTER 4 /* - * sys_fallocate modes - * Currently sys_fallocate supports two modes: - * FA_ALLOCATE : This is the preallocate mode, using which an application/user - * may request (pre)allocation of blocks. - * FA_DEALLOCATE: This is the deallocate mode, which can be used to free - * the preallocated blocks. + * sys_fallocate mode flags and values */ -#define FA_ALLOCATE 0x1 -#define FA_DEALLOCATE 0x2 +#define FA_FL_DEALLOC 0x01 /* default is allocate */ +#define FA_FL_KEEP_SIZE 0x02 /* default is extend/shrink size */ +#define FA_FL_DEL_DATA 0x04 /* default is keep written data on DEALLOC */ + +#define FA_ALLOCATE 0 +#define FA_DEALLOCATE FA_FL_DEALLOC +#define FA_RESV_SPACE FA_FL_KEEP_SIZE +#define FA_UNRESV_SPACE (FA_FL_DEALLOC | FA_FL_KEEP_SIZE | FA_FL_DEL_DATA) #ifdef __KERNEL__ Index: linux-2.6.22-rc4/fs/open.c =================================================================== --- linux-2.6.22-rc4.orig/fs/open.c +++ linux-2.6.22-rc4/fs/open.c @@ -356,23 +356,26 @@ asmlinkage long sys_ftruncate64(unsigned * sys_fallocate - preallocate blocks or free preallocated blocks * @fd: the file descriptor * @mode: mode specifies if fallocate should preallocate blocks OR free - * (unallocate) preallocated blocks. Currently only FA_ALLOCATE and - * FA_DEALLOCATE modes are supported. + * (unallocate) preallocated blocks. * @offset: The offset within file, from where (un)allocation is being * requested. 
It should not have a negative value. * @len: The amount (in bytes) of space to be (un)allocated, from the offset. * * This system call, depending on the mode, preallocates or unallocates blocks * for a file. The range of blocks depends on the value of offset and len - * arguments provided by the user/application. For FA_ALLOCATE mode, if this + * arguments provided by the user/application. For FA_ALLOCATE and + * FA_RESV_SPACE modes, if the sys_fallocate() * system call succeeds, subsequent writes to the file in the given range * (specified by offset & len) should not fail - even if the file system * later becomes full. Hence the preallocation done is persistent (valid - * even after reopen of the file and remount/reboot). + * even after reopen of the file and remount/reboot). If FA_RESV_SPACE mode + * is passed, the file size will not be changed even if the preallocation + * is beyond EOF. * * It is expected that the ->fallocate() inode operation implemented by the * individual file systems will update the file size and/or ctime/mtime - * depending on the mode and also on the success of the operation. + * depending on the mode (change is visible to user or not - say file size) + * and obviously, on the success of the operation. * * Note: Incase the file system does not support preallocation, * posix_fallocate() should fall back to the library implementation (i.e. 
@@ -398,7 +401,8 @@ asmlinkage long sys_fallocate(int fd, in /* Return error if mode is not supported */ ret = -EOPNOTSUPP; - if (mode != FA_ALLOCATE && mode != FA_DEALLOCATE) + if (!(mode == FA_ALLOCATE || mode == FA_DEALLOCATE || + mode == FA_RESV_SPACE || mode == FA_UNRESV_SPACE)) goto out; ret = -EBADF; From owner-xfs@oss.sgi.com Mon Jun 25 06:48:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 06:48:06 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_66 autolearn=no version=3.2.0-pre1-r499012 Received: from e31.co.us.ibm.com (e31.co.us.ibm.com [32.97.110.149]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5PDm1do009004 for ; Mon, 25 Jun 2007 06:48:03 -0700 Received: from d03relay04.boulder.ibm.com (d03relay04.boulder.ibm.com [9.17.195.106]) by e31.co.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5PDm32T002393 for ; Mon, 25 Jun 2007 09:48:03 -0400 Received: from d03av04.boulder.ibm.com (d03av04.boulder.ibm.com [9.17.195.170]) by d03relay04.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5PDm3nG139978 for ; Mon, 25 Jun 2007 07:48:03 -0600 Received: from d03av04.boulder.ibm.com (loopback [127.0.0.1]) by d03av04.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5PDm2nG005762 for ; Mon, 25 Jun 2007 07:48:02 -0600 Received: from amitarora.in.ibm.com (amitarora.in.ibm.com [9.124.31.198]) by d03av04.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5PDm0BB005623; Mon, 25 Jun 2007 07:48:01 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id 128E618B996; Mon, 25 Jun 2007 19:18:12 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5PDmBY5012768; Mon, 25 Jun 2007 19:18:11 +0530 Date: Mon, 25 Jun 2007 19:18:11 +0530 From: "Amit K. 
Arora" To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org Cc: David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: [PATCH 5/7][TAKE5] ext4: fallocate support in ext4 Message-ID: <20070625134811.GF1951@amitarora.in.ibm.com> References: <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625132810.GA1951@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11912 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs This patch implements the ->fallocate() inode operation in ext4. With this patch, users of ext4 file systems will be able to use the fallocate() system call for persistent preallocation. The current implementation only supports preallocation for regular files with extent maps (directories are not supported to date). This patch does not currently support block-mapped files. Only the FA_ALLOCATE mode is supported as of now; supporting the FA_DEALLOCATE mode is a ToDo item. Changelog: --------- Changes from Take3 to Take4: 1) Changed the ext4_fallocate() declaration and definition to return a "long" and not an "int", to match the ->fallocate() inode op. 2) Update ctime if new blocks get allocated. Changes from Take2 to Take3: 1) Patch rebased to the 2.6.22-rc1 kernel version.
2) Removed unnecessary "EXPORT_SYMBOL(ext4_fallocate);". Changes from Take1 to Take2: 1) Added more description for ext4_fallocate(). 2) Now returning EOPNOTSUPP when files are block-mapped (non-extent). 3) Moved journal_start & journal_stop inside the while loop. 4) Replaced BUG_ON with WARN_ON & ext4_error. 5) Make EXT4_BLOCK_ALIGN use ALIGN macro internally. 6) Added variable names in the function declaration of ext4_fallocate() 7) Converted macros that handle uninitialized extents into inline functions. Signed-off-by: Amit Arora Index: linux-2.6.22-rc4/fs/ext4/extents.c =================================================================== --- linux-2.6.22-rc4.orig/fs/ext4/extents.c +++ linux-2.6.22-rc4/fs/ext4/extents.c @@ -316,7 +316,7 @@ static void ext4_ext_show_path(struct in } else if (path->p_ext) { ext_debug(" %d:%d:%llu ", le32_to_cpu(path->p_ext->ee_block), - le16_to_cpu(path->p_ext->ee_len), + ext4_ext_get_actual_len(path->p_ext), ext_pblock(path->p_ext)); } else ext_debug(" []"); @@ -339,7 +339,7 @@ static void ext4_ext_show_leaf(struct in for (i = 0; i < le16_to_cpu(eh->eh_entries); i++, ex++) { ext_debug("%d:%d:%llu ", le32_to_cpu(ex->ee_block), - le16_to_cpu(ex->ee_len), ext_pblock(ex)); + ext4_ext_get_actual_len(ex), ext_pblock(ex)); } ext_debug("\n"); } @@ -455,7 +455,7 @@ ext4_ext_binsearch(struct inode *inode, ext_debug(" -> %d:%llu:%d ", le32_to_cpu(path->p_ext->ee_block), ext_pblock(path->p_ext), - le16_to_cpu(path->p_ext->ee_len)); + ext4_ext_get_actual_len(path->p_ext)); #ifdef CHECK_BINSEARCH { @@ -713,7 +713,7 @@ static int ext4_ext_split(handle_t *hand ext_debug("move %d:%llu:%d in new leaf %llu\n", le32_to_cpu(path[depth].p_ext->ee_block), ext_pblock(path[depth].p_ext), - le16_to_cpu(path[depth].p_ext->ee_len), + ext4_ext_get_actual_len(path[depth].p_ext), newblock); /*memmove(ex++, path[depth].p_ext++, sizeof(struct ext4_extent)); @@ -1133,7 +1133,19 @@ static int ext4_can_extents_be_merged(struct inode *inode, struct ext4_extent *ex1, 
struct ext4_extent *ex2) { - if (le32_to_cpu(ex1->ee_block) + le16_to_cpu(ex1->ee_len) != + unsigned short ext1_ee_len, ext2_ee_len; + + /* + * Make sure that either both extents are uninitialized, or + * both are _not_. + */ + if (ext4_ext_is_uninitialized(ex1) ^ ext4_ext_is_uninitialized(ex2)) + return 0; + + ext1_ee_len = ext4_ext_get_actual_len(ex1); + ext2_ee_len = ext4_ext_get_actual_len(ex2); + + if (le32_to_cpu(ex1->ee_block) + ext1_ee_len != le32_to_cpu(ex2->ee_block)) return 0; @@ -1142,14 +1154,14 @@ ext4_can_extents_be_merged(struct inode * as an RO_COMPAT feature, refuse to merge to extents if * this can result in the top bit of ee_len being set. */ - if (le16_to_cpu(ex1->ee_len) + le16_to_cpu(ex2->ee_len) > EXT_MAX_LEN) + if (ext1_ee_len + ext2_ee_len > EXT_MAX_LEN) return 0; #ifdef AGGRESSIVE_TEST if (le16_to_cpu(ex1->ee_len) >= 4) return 0; #endif - if (ext_pblock(ex1) + le16_to_cpu(ex1->ee_len) == ext_pblock(ex2)) + if (ext_pblock(ex1) + ext1_ee_len == ext_pblock(ex2)) return 1; return 0; } @@ -1171,7 +1183,7 @@ unsigned int ext4_ext_check_overlap(stru unsigned int ret = 0; b1 = le32_to_cpu(newext->ee_block); - len1 = le16_to_cpu(newext->ee_len); + len1 = ext4_ext_get_actual_len(newext); depth = ext_depth(inode); if (!path[depth].p_ext) goto out; @@ -1218,8 +1230,9 @@ int ext4_ext_insert_extent(handle_t *han struct ext4_extent *nearex; /* nearest extent */ struct ext4_ext_path *npath = NULL; int depth, len, err, next; + unsigned uninitialized = 0; - BUG_ON(newext->ee_len == 0); + BUG_ON(ext4_ext_get_actual_len(newext) == 0); depth = ext_depth(inode); ex = path[depth].p_ext; BUG_ON(path[depth].p_hdr == NULL); @@ -1227,14 +1240,24 @@ int ext4_ext_insert_extent(handle_t *han /* try to insert block into found extent and return */ if (ex && ext4_can_extents_be_merged(inode, ex, newext)) { ext_debug("append %d block to %d:%d (from %llu)\n", - le16_to_cpu(newext->ee_len), + ext4_ext_get_actual_len(newext), le32_to_cpu(ex->ee_block), - 
le16_to_cpu(ex->ee_len), ext_pblock(ex)); + ext4_ext_get_actual_len(ex), ext_pblock(ex)); err = ext4_ext_get_access(handle, inode, path + depth); if (err) return err; - ex->ee_len = cpu_to_le16(le16_to_cpu(ex->ee_len) - + le16_to_cpu(newext->ee_len)); + + /* + * ext4_can_extents_be_merged should have checked that either + * both extents are uninitialized, or both aren't. Thus we + * need to check only one of them here. + */ + if (ext4_ext_is_uninitialized(ex)) + uninitialized = 1; + ex->ee_len = cpu_to_le16(ext4_ext_get_actual_len(ex) + + ext4_ext_get_actual_len(newext)); + if (uninitialized) + ext4_ext_mark_uninitialized(ex); eh = path[depth].p_hdr; nearex = ex; goto merge; @@ -1290,7 +1313,7 @@ has_space: ext_debug("first extent in the leaf: %d:%llu:%d\n", le32_to_cpu(newext->ee_block), ext_pblock(newext), - le16_to_cpu(newext->ee_len)); + ext4_ext_get_actual_len(newext)); path[depth].p_ext = EXT_FIRST_EXTENT(eh); } else if (le32_to_cpu(newext->ee_block) > le32_to_cpu(nearex->ee_block)) { @@ -1303,7 +1326,7 @@ has_space: "move %d from 0x%p to 0x%p\n", le32_to_cpu(newext->ee_block), ext_pblock(newext), - le16_to_cpu(newext->ee_len), + ext4_ext_get_actual_len(newext), nearex, len, nearex + 1, nearex + 2); memmove(nearex + 2, nearex + 1, len); } @@ -1316,7 +1339,7 @@ has_space: "move %d from 0x%p to 0x%p\n", le32_to_cpu(newext->ee_block), ext_pblock(newext), - le16_to_cpu(newext->ee_len), + ext4_ext_get_actual_len(newext), nearex, len, nearex + 1, nearex + 2); memmove(nearex + 1, nearex, len); path[depth].p_ext = nearex; @@ -1335,8 +1358,13 @@ merge: if (!ext4_can_extents_be_merged(inode, nearex, nearex + 1)) break; /* merge with next extent! 
*/ - nearex->ee_len = cpu_to_le16(le16_to_cpu(nearex->ee_len) - + le16_to_cpu(nearex[1].ee_len)); + if (ext4_ext_is_uninitialized(nearex)) + uninitialized = 1; + nearex->ee_len = cpu_to_le16(ext4_ext_get_actual_len(nearex) + + ext4_ext_get_actual_len(nearex + 1)); + if (uninitialized) + ext4_ext_mark_uninitialized(nearex); + if (nearex + 1 < EXT_LAST_EXTENT(eh)) { len = (EXT_LAST_EXTENT(eh) - nearex - 1) * sizeof(struct ext4_extent); @@ -1406,8 +1434,8 @@ int ext4_ext_walk_space(struct inode *in end = le32_to_cpu(ex->ee_block); if (block + num < end) end = block + num; - } else if (block >= - le32_to_cpu(ex->ee_block) + le16_to_cpu(ex->ee_len)) { + } else if (block >= le32_to_cpu(ex->ee_block) + + ext4_ext_get_actual_len(ex)) { /* need to allocate space after found extent */ start = block; end = block + num; @@ -1419,7 +1447,8 @@ int ext4_ext_walk_space(struct inode *in * by found extent */ start = block; - end = le32_to_cpu(ex->ee_block) + le16_to_cpu(ex->ee_len); + end = le32_to_cpu(ex->ee_block) + + ext4_ext_get_actual_len(ex); if (block + num < end) end = block + num; exists = 1; @@ -1435,7 +1464,7 @@ int ext4_ext_walk_space(struct inode *in cbex.ec_type = EXT4_EXT_CACHE_GAP; } else { cbex.ec_block = le32_to_cpu(ex->ee_block); - cbex.ec_len = le16_to_cpu(ex->ee_len); + cbex.ec_len = ext4_ext_get_actual_len(ex); cbex.ec_start = ext_pblock(ex); cbex.ec_type = EXT4_EXT_CACHE_EXTENT; } @@ -1508,15 +1537,15 @@ ext4_ext_put_gap_in_cache(struct inode * ext_debug("cache gap(before): %lu [%lu:%lu]", (unsigned long) block, (unsigned long) le32_to_cpu(ex->ee_block), - (unsigned long) le16_to_cpu(ex->ee_len)); + (unsigned long) ext4_ext_get_actual_len(ex)); } else if (block >= le32_to_cpu(ex->ee_block) - + le16_to_cpu(ex->ee_len)) { + + ext4_ext_get_actual_len(ex)) { lblock = le32_to_cpu(ex->ee_block) - + le16_to_cpu(ex->ee_len); + + ext4_ext_get_actual_len(ex); len = ext4_ext_next_allocated_block(path); ext_debug("cache gap(after): [%lu:%lu] %lu", (unsigned long) 
le32_to_cpu(ex->ee_block), - (unsigned long) le16_to_cpu(ex->ee_len), + (unsigned long) ext4_ext_get_actual_len(ex), (unsigned long) block); BUG_ON(len == lblock); len = len - lblock; @@ -1646,12 +1675,12 @@ static int ext4_remove_blocks(handle_t * unsigned long from, unsigned long to) { struct buffer_head *bh; + unsigned short ee_len = ext4_ext_get_actual_len(ex); int i; #ifdef EXTENTS_STATS { struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb); - unsigned short ee_len = le16_to_cpu(ex->ee_len); spin_lock(&sbi->s_ext_stats_lock); sbi->s_ext_blocks += ee_len; sbi->s_ext_extents++; @@ -1665,12 +1694,12 @@ static int ext4_remove_blocks(handle_t * } #endif if (from >= le32_to_cpu(ex->ee_block) - && to == le32_to_cpu(ex->ee_block) + le16_to_cpu(ex->ee_len) - 1) { + && to == le32_to_cpu(ex->ee_block) + ee_len - 1) { /* tail removal */ unsigned long num; ext4_fsblk_t start; - num = le32_to_cpu(ex->ee_block) + le16_to_cpu(ex->ee_len) - from; - start = ext_pblock(ex) + le16_to_cpu(ex->ee_len) - num; + num = le32_to_cpu(ex->ee_block) + ee_len - from; + start = ext_pblock(ex) + ee_len - num; ext_debug("free last %lu blocks starting %llu\n", num, start); for (i = 0; i < num; i++) { bh = sb_find_get_block(inode->i_sb, start + i); @@ -1678,12 +1707,12 @@ static int ext4_remove_blocks(handle_t * } ext4_free_blocks(handle, inode, start, num); } else if (from == le32_to_cpu(ex->ee_block) - && to <= le32_to_cpu(ex->ee_block) + le16_to_cpu(ex->ee_len) - 1) { + && to <= le32_to_cpu(ex->ee_block) + ee_len - 1) { printk("strange request: removal %lu-%lu from %u:%u\n", - from, to, le32_to_cpu(ex->ee_block), le16_to_cpu(ex->ee_len)); + from, to, le32_to_cpu(ex->ee_block), ee_len); } else { printk("strange request: removal(2) %lu-%lu from %u:%u\n", - from, to, le32_to_cpu(ex->ee_block), le16_to_cpu(ex->ee_len)); + from, to, le32_to_cpu(ex->ee_block), ee_len); } return 0; } @@ -1698,6 +1727,7 @@ ext4_ext_rm_leaf(handle_t *handle, struc unsigned a, b, block, num; unsigned long ex_ee_block; 
unsigned short ex_ee_len; + unsigned uninitialized = 0; struct ext4_extent *ex; /* the header must be checked already in ext4_ext_remove_space() */ @@ -1711,7 +1741,9 @@ ext4_ext_rm_leaf(handle_t *handle, struc ex = EXT_LAST_EXTENT(eh); ex_ee_block = le32_to_cpu(ex->ee_block); - ex_ee_len = le16_to_cpu(ex->ee_len); + if (ext4_ext_is_uninitialized(ex)) + uninitialized = 1; + ex_ee_len = ext4_ext_get_actual_len(ex); while (ex >= EXT_FIRST_EXTENT(eh) && ex_ee_block + ex_ee_len > start) { @@ -1779,6 +1811,8 @@ ext4_ext_rm_leaf(handle_t *handle, struc ex->ee_block = cpu_to_le32(block); ex->ee_len = cpu_to_le16(num); + if (uninitialized) + ext4_ext_mark_uninitialized(ex); err = ext4_ext_dirty(handle, inode, path + depth); if (err) @@ -1788,7 +1822,7 @@ ext4_ext_rm_leaf(handle_t *handle, struc ext_pblock(ex)); ex--; ex_ee_block = le32_to_cpu(ex->ee_block); - ex_ee_len = le16_to_cpu(ex->ee_len); + ex_ee_len = ext4_ext_get_actual_len(ex); } if (correct_index && eh->eh_entries) @@ -2062,7 +2096,7 @@ int ext4_ext_get_blocks(handle_t *handle if (ex) { unsigned long ee_block = le32_to_cpu(ex->ee_block); ext4_fsblk_t ee_start = ext_pblock(ex); - unsigned short ee_len = le16_to_cpu(ex->ee_len); + unsigned short ee_len; /* * Allow future support for preallocated extents to be added @@ -2070,8 +2104,9 @@ int ext4_ext_get_blocks(handle_t *handle * Uninitialized extents are treated as holes, except that * we avoid (fail) allocating new blocks during a write. 
*/ - if (ee_len > EXT_MAX_LEN) + if (le16_to_cpu(ex->ee_len) > EXT_MAX_LEN) goto out2; + ee_len = ext4_ext_get_actual_len(ex); /* if found extent covers block, simply return it */ if (iblock >= ee_block && iblock < ee_block + ee_len) { newblock = iblock - ee_block + ee_start; @@ -2079,8 +2114,11 @@ int ext4_ext_get_blocks(handle_t *handle allocated = ee_len - (iblock - ee_block); ext_debug("%d fit into %lu:%d -> %llu\n", (int) iblock, ee_block, ee_len, newblock); - ext4_ext_put_in_cache(inode, ee_block, ee_len, - ee_start, EXT4_EXT_CACHE_EXTENT); + /* Do not put uninitialized extent in the cache */ + if (!ext4_ext_is_uninitialized(ex)) + ext4_ext_put_in_cache(inode, ee_block, + ee_len, ee_start, + EXT4_EXT_CACHE_EXTENT); goto out; } } @@ -2122,6 +2160,8 @@ int ext4_ext_get_blocks(handle_t *handle /* try to insert new extent into found leaf and return */ ext4_ext_store_pblock(&newex, newblock); newex.ee_len = cpu_to_le16(allocated); + if (create == EXT4_CREATE_UNINITIALIZED_EXT) /* Mark uninitialized */ + ext4_ext_mark_uninitialized(&newex); err = ext4_ext_insert_extent(handle, inode, path, &newex); if (err) { /* free data blocks we just allocated */ @@ -2137,8 +2177,10 @@ int ext4_ext_get_blocks(handle_t *handle newblock = ext_pblock(&newex); __set_bit(BH_New, &bh_result->b_state); - ext4_ext_put_in_cache(inode, iblock, allocated, newblock, - EXT4_EXT_CACHE_EXTENT); + /* Cache only when it is _not_ an uninitialized extent */ + if (create != EXT4_CREATE_UNINITIALIZED_EXT) + ext4_ext_put_in_cache(inode, iblock, allocated, newblock, + EXT4_EXT_CACHE_EXTENT); out: if (allocated > max_blocks) allocated = max_blocks; @@ -2241,3 +2283,129 @@ int ext4_ext_writepage_trans_blocks(stru return needed; } + +/* + * preallocate space for a file. This implements ext4's fallocate inode + * operation, which gets called from sys_fallocate system call. + * Currently only FA_ALLOCATE mode is supported on extent based files. 
+ * We may have more modes supported in future - like FA_DEALLOCATE, which + * tells fallocate to unallocate previously (pre)allocated blocks. + * For block-mapped files, posix_fallocate should fall back to the method + * of writing zeroes to the required new blocks (the same behavior which is + * expected for file systems which do not support fallocate() system call). + */ +long ext4_fallocate(struct inode *inode, int mode, loff_t offset, loff_t len) +{ + handle_t *handle; + ext4_fsblk_t block, max_blocks; + ext4_fsblk_t nblocks = 0; + int ret = 0; + int ret2 = 0; + int retries = 0; + struct buffer_head map_bh; + unsigned int credits, blkbits = inode->i_blkbits; + + /* + * currently supporting (pre)allocate mode for extent-based + * files _only_ + */ + if (mode != FA_ALLOCATE || !(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)) + return -EOPNOTSUPP; + + /* preallocation to directories is currently not supported */ + if (S_ISDIR(inode->i_mode)) + return -ENODEV; + + block = offset >> blkbits; + max_blocks = (EXT4_BLOCK_ALIGN(len + offset, blkbits) >> blkbits) + - block; + + /* + * credits to insert 1 extent into extent tree + buffers to be able to + * modify 1 super block, 1 block bitmap and 1 group descriptor. + */ + credits = EXT4_DATA_TRANS_BLOCKS(inode->i_sb) + 3; +retry: + while (ret >= 0 && ret < max_blocks) { + block = block + ret; + max_blocks = max_blocks - ret; + handle = ext4_journal_start(inode, credits); + if (IS_ERR(handle)) { + ret = PTR_ERR(handle); + break; + } + + ret = ext4_ext_get_blocks(handle, inode, block, + max_blocks, &map_bh, + EXT4_CREATE_UNINITIALIZED_EXT, 0); + WARN_ON(!ret); + if (!ret) { + ext4_error(inode->i_sb, "ext4_fallocate", + "ext4_ext_get_blocks returned 0! 
inode#%lu" + ", block=%llu, max_blocks=%llu", + inode->i_ino, block, max_blocks); + ret = -EIO; + ext4_mark_inode_dirty(handle, inode); + ret2 = ext4_journal_stop(handle); + break; + } + if (ret > 0) { + /* check wrap through sign-bit/zero here */ + if ((block + ret) < 0 || (block + ret) < block) { + ret = -EIO; + ext4_mark_inode_dirty(handle, inode); + ret2 = ext4_journal_stop(handle); + break; + } + if (buffer_new(&map_bh) && ((block + ret) > + (EXT4_BLOCK_ALIGN(i_size_read(inode), blkbits) + >> blkbits))) + nblocks = nblocks + ret; + } + + /* Update ctime if new blocks get allocated */ + if (nblocks) { + struct timespec now; + now = current_fs_time(inode->i_sb); + if (!timespec_equal(&inode->i_ctime, &now)) + inode->i_ctime = now; + } + + ext4_mark_inode_dirty(handle, inode); + ret2 = ext4_journal_stop(handle); + if (ret2) + break; + } + + if (ret == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries)) + goto retry; + + /* + * Time to update the file size. + * Update only when preallocation was requested beyond the file size. + */ + if ((offset + len) > i_size_read(inode)) { + if (ret > 0) { + /* + * if no error, we assume preallocation succeeded + * completely + */ + mutex_lock(&inode->i_mutex); + i_size_write(inode, offset + len); + EXT4_I(inode)->i_disksize = i_size_read(inode); + mutex_unlock(&inode->i_mutex); + } else if (ret < 0 && nblocks) { + /* Handle partial allocation scenario */ + loff_t newsize; + + mutex_lock(&inode->i_mutex); + newsize = (nblocks << blkbits) + i_size_read(inode); + i_size_write(inode, EXT4_BLOCK_ALIGN(newsize, blkbits)); + EXT4_I(inode)->i_disksize = i_size_read(inode); + mutex_unlock(&inode->i_mutex); + } + } + + return ret > 0 ? 
ret2 : ret; +} + Index: linux-2.6.22-rc4/fs/ext4/file.c =================================================================== --- linux-2.6.22-rc4.orig/fs/ext4/file.c +++ linux-2.6.22-rc4/fs/ext4/file.c @@ -135,5 +135,6 @@ const struct inode_operations ext4_file_ .removexattr = generic_removexattr, #endif .permission = ext4_permission, + .fallocate = ext4_fallocate, }; Index: linux-2.6.22-rc4/include/linux/ext4_fs.h =================================================================== --- linux-2.6.22-rc4.orig/include/linux/ext4_fs.h +++ linux-2.6.22-rc4/include/linux/ext4_fs.h @@ -102,6 +102,7 @@ EXT4_GOOD_OLD_FIRST_INO : \ (s)->s_first_ino) #endif +#define EXT4_BLOCK_ALIGN(size, blkbits) ALIGN((size), (1 << (blkbits))) /* * Macro-instructions used to manage fragments @@ -225,6 +226,11 @@ struct ext4_new_group_data { __u32 free_blocks_count; }; +/* + * Following is used by preallocation code to tell get_blocks() that we + * want uninitialzed extents. + */ +#define EXT4_CREATE_UNINITIALIZED_EXT 2 /* * ioctl commands @@ -984,6 +990,8 @@ extern int ext4_ext_get_blocks(handle_t extern void ext4_ext_truncate(struct inode *, struct page *); extern void ext4_ext_init(struct super_block *); extern void ext4_ext_release(struct super_block *); +extern long ext4_fallocate(struct inode *inode, int mode, loff_t offset, + loff_t len); static inline int ext4_get_blocks_wrap(handle_t *handle, struct inode *inode, sector_t block, unsigned long max_blocks, struct buffer_head *bh, Index: linux-2.6.22-rc4/include/linux/ext4_fs_extents.h =================================================================== --- linux-2.6.22-rc4.orig/include/linux/ext4_fs_extents.h +++ linux-2.6.22-rc4/include/linux/ext4_fs_extents.h @@ -188,6 +188,21 @@ ext4_ext_invalidate_cache(struct inode * EXT4_I(inode)->i_cached_extent.ec_type = EXT4_EXT_CACHE_NO; } +static inline void ext4_ext_mark_uninitialized(struct ext4_extent *ext) +{ + ext->ee_len |= cpu_to_le16(0x8000); +} + +static inline int 
ext4_ext_is_uninitialized(struct ext4_extent *ext) +{ + return (int)(le16_to_cpu((ext)->ee_len) & 0x8000); +} + +static inline int ext4_ext_get_actual_len(struct ext4_extent *ext) +{ + return (int)(le16_to_cpu((ext)->ee_len) & 0x7FFF); +} + extern int ext4_extent_tree_init(handle_t *, struct inode *); extern int ext4_ext_calc_credits_for_insert(struct inode *, struct ext4_ext_path *); extern unsigned int ext4_ext_check_overlap(struct inode *, struct ext4_extent *, struct ext4_ext_path *); From owner-xfs@oss.sgi.com Mon Jun 25 06:49:24 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 06:49:28 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.3 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_66, J_CHICKENPOX_72 autolearn=no version=3.2.0-pre1-r499012 Received: from e32.co.us.ibm.com (e32.co.us.ibm.com [32.97.110.150]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5PDnNdo009687 for ; Mon, 25 Jun 2007 06:49:24 -0700 Received: from d03relay02.boulder.ibm.com (d03relay02.boulder.ibm.com [9.17.195.227]) by e32.co.us.ibm.com (8.12.11.20060308/8.13.8) with ESMTP id l5PDiRDm008740 for ; Mon, 25 Jun 2007 09:44:27 -0400 Received: from d03av04.boulder.ibm.com (d03av04.boulder.ibm.com [9.17.195.170]) by d03relay02.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5PDnKS9231086 for ; Mon, 25 Jun 2007 07:49:21 -0600 Received: from d03av04.boulder.ibm.com (loopback [127.0.0.1]) by d03av04.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5PDnKS9011347 for ; Mon, 25 Jun 2007 07:49:20 -0600 Received: from amitarora.in.ibm.com (amitarora.in.ibm.com [9.124.31.198]) by d03av04.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5PDnJaJ011255; Mon, 25 Jun 2007 07:49:19 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id 1EE2B18B996; Mon, 25 Jun 2007 19:19:30 +0530 (IST) 
Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5PDnTkk013319; Mon, 25 Jun 2007 19:19:29 +0530 Date: Mon, 25 Jun 2007 19:19:29 +0530 From: "Amit K. Arora" To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org Cc: David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: [PATCH 6/7][TAKE5] ext4: write support for preallocated blocks Message-ID: <20070625134929.GG1951@amitarora.in.ibm.com> References: <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625132810.GA1951@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11913 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs This patch adds write support to the uninitialized extents that get created when a preallocation is done using fallocate(). It takes care of splitting such an extent into multiple extents (up to three) and merging the new split extents with neighbouring ones, if possible. Changelog: --------- Changes from Take3 to Take4: - no change - Changes from Take2 to Take3: 1) Patch now rebased to the 2.6.22-rc1 kernel. Changes from Take1 to Take2: 1) Replaced BUG_ON with WARN_ON & ext4_error. 2) Added variable names to the function declaration of ext4_ext_try_to_merge(). 
3) Updated variable declarations to use multiple-definitions-per-line. 4) "if((a=foo())).." was broken into "a=foo(); if(a).." 5) Removed extra spaces. Signed-off-by: Amit Arora Index: linux-2.6.22-rc4/fs/ext4/extents.c =================================================================== --- linux-2.6.22-rc4.orig/fs/ext4/extents.c +++ linux-2.6.22-rc4/fs/ext4/extents.c @@ -1167,6 +1167,53 @@ ext4_can_extents_be_merged(struct inode } /* + * This function tries to merge the "ex" extent to the next extent in the tree. + * It always tries to merge towards right. If you want to merge towards + * left, pass "ex - 1" as argument instead of "ex". + * Returns 0 if the extents (ex and ex+1) were _not_ merged and returns + * 1 if they got merged. + */ +int ext4_ext_try_to_merge(struct inode *inode, + struct ext4_ext_path *path, + struct ext4_extent *ex) +{ + struct ext4_extent_header *eh; + unsigned int depth, len; + int merge_done = 0; + int uninitialized = 0; + + depth = ext_depth(inode); + BUG_ON(path[depth].p_hdr == NULL); + eh = path[depth].p_hdr; + + while (ex < EXT_LAST_EXTENT(eh)) { + if (!ext4_can_extents_be_merged(inode, ex, ex + 1)) + break; + /* merge with next extent! */ + if (ext4_ext_is_uninitialized(ex)) + uninitialized = 1; + ex->ee_len = cpu_to_le16(ext4_ext_get_actual_len(ex) + + ext4_ext_get_actual_len(ex + 1)); + if (uninitialized) + ext4_ext_mark_uninitialized(ex); + + if (ex + 1 < EXT_LAST_EXTENT(eh)) { + len = (EXT_LAST_EXTENT(eh) - ex - 1) + * sizeof(struct ext4_extent); + memmove(ex + 1, ex + 2, len); + } + eh->eh_entries = cpu_to_le16(le16_to_cpu(eh->eh_entries) - 1); + merge_done = 1; + WARN_ON(eh->eh_entries == 0); + if (!eh->eh_entries) + ext4_error(inode->i_sb, "ext4_ext_try_to_merge", + "inode#%lu, eh->eh_entries = 0!", inode->i_ino); + } + + return merge_done; +} + +/* * check if a portion of the "newext" extent overlaps with an * existing extent. 
* @@ -1354,25 +1401,7 @@ has_space: merge: /* try to merge extents to the right */ - while (nearex < EXT_LAST_EXTENT(eh)) { - if (!ext4_can_extents_be_merged(inode, nearex, nearex + 1)) - break; - /* merge with next extent! */ - if (ext4_ext_is_uninitialized(nearex)) - uninitialized = 1; - nearex->ee_len = cpu_to_le16(ext4_ext_get_actual_len(nearex) - + ext4_ext_get_actual_len(nearex + 1)); - if (uninitialized) - ext4_ext_mark_uninitialized(nearex); - - if (nearex + 1 < EXT_LAST_EXTENT(eh)) { - len = (EXT_LAST_EXTENT(eh) - nearex - 1) - * sizeof(struct ext4_extent); - memmove(nearex + 1, nearex + 2, len); - } - eh->eh_entries = cpu_to_le16(le16_to_cpu(eh->eh_entries)-1); - BUG_ON(eh->eh_entries == 0); - } + ext4_ext_try_to_merge(inode, path, nearex); /* try to merge extents to the left */ @@ -2035,15 +2064,158 @@ void ext4_ext_release(struct super_block #endif } +/* + * This function is called by ext4_ext_get_blocks() if someone tries to write + * to an uninitialized extent. It may result in splitting the uninitialized + * extent into multiple extents (upto three - one initialized and two + * uninitialized). 
+ * There are three possibilities: + * a> There is no split required: Entire extent should be initialized + * b> Splits in two extents: Write is happening at either end of the extent + * c> Splits in three extents: Somone is writing in middle of the extent + */ +int ext4_ext_convert_to_initialized(handle_t *handle, struct inode *inode, + struct ext4_ext_path *path, + ext4_fsblk_t iblock, + unsigned long max_blocks) +{ + struct ext4_extent *ex, newex; + struct ext4_extent *ex1 = NULL; + struct ext4_extent *ex2 = NULL; + struct ext4_extent *ex3 = NULL; + struct ext4_extent_header *eh; + unsigned int allocated, ee_block, ee_len, depth; + ext4_fsblk_t newblock; + int err = 0; + int ret = 0; + + depth = ext_depth(inode); + eh = path[depth].p_hdr; + ex = path[depth].p_ext; + ee_block = le32_to_cpu(ex->ee_block); + ee_len = ext4_ext_get_actual_len(ex); + allocated = ee_len - (iblock - ee_block); + newblock = iblock - ee_block + ext_pblock(ex); + ex2 = ex; + + /* ex1: ee_block to iblock - 1 : uninitialized */ + if (iblock > ee_block) { + ex1 = ex; + ex1->ee_len = cpu_to_le16(iblock - ee_block); + ext4_ext_mark_uninitialized(ex1); + ex2 = &newex; + } + /* + * for sanity, update the length of the ex2 extent before + * we insert ex3, if ex1 is NULL. This is to avoid temporary + * overlap of blocks. + */ + if (!ex1 && allocated > max_blocks) + ex2->ee_len = cpu_to_le16(max_blocks); + /* ex3: to ee_block + ee_len : uninitialised */ + if (allocated > max_blocks) { + unsigned int newdepth; + ex3 = &newex; + ex3->ee_block = cpu_to_le32(iblock + max_blocks); + ext4_ext_store_pblock(ex3, newblock + max_blocks); + ex3->ee_len = cpu_to_le16(allocated - max_blocks); + ext4_ext_mark_uninitialized(ex3); + err = ext4_ext_insert_extent(handle, inode, path, ex3); + if (err) + goto out; + /* + * The depth, and hence eh & ex might change + * as part of the insert above. 
+ */ + newdepth = ext_depth(inode); + if (newdepth != depth) { + depth = newdepth; + path = ext4_ext_find_extent(inode, iblock, NULL); + if (IS_ERR(path)) { + err = PTR_ERR(path); + path = NULL; + goto out; + } + eh = path[depth].p_hdr; + ex = path[depth].p_ext; + if (ex2 != &newex) + ex2 = ex; + } + allocated = max_blocks; + } + /* + * If there was a change of depth as part of the + * insertion of ex3 above, we need to update the length + * of the ex1 extent again here + */ + if (ex1 && ex1 != ex) { + ex1 = ex; + ex1->ee_len = cpu_to_le16(iblock - ee_block); + ext4_ext_mark_uninitialized(ex1); + ex2 = &newex; + } + /* ex2: iblock to iblock + maxblocks-1 : initialised */ + ex2->ee_block = cpu_to_le32(iblock); + ex2->ee_start = cpu_to_le32(newblock); + ext4_ext_store_pblock(ex2, newblock); + ex2->ee_len = cpu_to_le16(allocated); + if (ex2 != ex) + goto insert; + err = ext4_ext_get_access(handle, inode, path + depth); + if (err) + goto out; + /* + * New (initialized) extent starts from the first block + * in the current extent. i.e., ex2 == ex + * We have to see if it can be merged with the extent + * on the left. + */ + if (ex2 > EXT_FIRST_EXTENT(eh)) { + /* + * To merge left, pass "ex2 - 1" to try_to_merge(), + * since it merges towards right _only_. + */ + ret = ext4_ext_try_to_merge(inode, path, ex2 - 1); + if (ret) { + err = ext4_ext_correct_indexes(handle, inode, path); + if (err) + goto out; + depth = ext_depth(inode); + ex2--; + } + } + /* + * Try to Merge towards right. This might be required + * only when the whole extent is being written to. + * i.e. ex2 == ex and ex3 == NULL. + */ + if (!ex3) { + ret = ext4_ext_try_to_merge(inode, path, ex2); + if (ret) { + err = ext4_ext_correct_indexes(handle, inode, path); + if (err) + goto out; + } + } + /* Mark modified extent as dirty */ + err = ext4_ext_dirty(handle, inode, path + depth); + goto out; +insert: + err = ext4_ext_insert_extent(handle, inode, path, &newex); +out: + return err ? 
err : allocated; +} + int ext4_ext_get_blocks(handle_t *handle, struct inode *inode, ext4_fsblk_t iblock, unsigned long max_blocks, struct buffer_head *bh_result, int create, int extend_disksize) { struct ext4_ext_path *path = NULL; + struct ext4_extent_header *eh; struct ext4_extent newex, *ex; ext4_fsblk_t goal, newblock; - int err = 0, depth; + int err = 0, depth, ret; unsigned long allocated = 0; __clear_bit(BH_New, &bh_result->b_state); @@ -2056,8 +2228,10 @@ int ext4_ext_get_blocks(handle_t *handle if (goal) { if (goal == EXT4_EXT_CACHE_GAP) { if (!create) { - /* block isn't allocated yet and - * user doesn't want to allocate it */ + /* + * block isn't allocated yet and + * user doesn't want to allocate it + */ goto out2; } /* we should allocate requested block */ @@ -2091,6 +2265,7 @@ int ext4_ext_get_blocks(handle_t *handle * this is why assert can't be put in ext4_ext_find_extent() */ BUG_ON(path[depth].p_ext == NULL && depth != 0); + eh = path[depth].p_hdr; ex = path[depth].p_ext; if (ex) { @@ -2099,13 +2274,9 @@ int ext4_ext_get_blocks(handle_t *handle unsigned short ee_len; /* - * Allow future support for preallocated extents to be added - * as an RO_COMPAT feature: * Uninitialized extents are treated as holes, except that - * we avoid (fail) allocating new blocks during a write. + * we split out initialized portions during a write. 
*/ - if (le16_to_cpu(ex->ee_len) > EXT_MAX_LEN) - goto out2; ee_len = ext4_ext_get_actual_len(ex); /* if found extent covers block, simply return it */ if (iblock >= ee_block && iblock < ee_block + ee_len) { @@ -2114,12 +2285,27 @@ int ext4_ext_get_blocks(handle_t *handle allocated = ee_len - (iblock - ee_block); ext_debug("%d fit into %lu:%d -> %llu\n", (int) iblock, ee_block, ee_len, newblock); + /* Do not put uninitialized extent in the cache */ - if (!ext4_ext_is_uninitialized(ex)) + if (!ext4_ext_is_uninitialized(ex)) { ext4_ext_put_in_cache(inode, ee_block, ee_len, ee_start, EXT4_EXT_CACHE_EXTENT); - goto out; + goto out; + } + if (create == EXT4_CREATE_UNINITIALIZED_EXT) + goto out; + if (!create) + goto out2; + + ret = ext4_ext_convert_to_initialized(handle, inode, + path, iblock, + max_blocks); + if (ret <= 0) + goto out2; + else + allocated = ret; + goto outnew; } } @@ -2128,8 +2314,10 @@ int ext4_ext_get_blocks(handle_t *handle * we couldn't try to create block if create flag is zero */ if (!create) { - /* put just found gap into cache to speed up - * subsequent requests */ + /* + * put just found gap into cache to speed up + * subsequent requests + */ ext4_ext_put_gap_in_cache(inode, path, iblock); goto out2; } @@ -2175,6 +2363,7 @@ int ext4_ext_get_blocks(handle_t *handle /* previous routine could use block we allocated */ newblock = ext_pblock(&newex); +outnew: __set_bit(BH_New, &bh_result->b_state); /* Cache only when it is _not_ an uninitialized extent */ @@ -2244,7 +2433,8 @@ void ext4_ext_truncate(struct inode * in err = ext4_ext_remove_space(inode, last_block); /* In a multi-transaction truncate, we only make the final - * transaction synchronous. */ + * transaction synchronous. 
+ */ if (IS_SYNC(inode)) handle->h_sync = 1; Index: linux-2.6.22-rc4/include/linux/ext4_fs_extents.h =================================================================== --- linux-2.6.22-rc4.orig/include/linux/ext4_fs_extents.h +++ linux-2.6.22-rc4/include/linux/ext4_fs_extents.h @@ -205,6 +205,9 @@ static inline int ext4_ext_get_actual_le extern int ext4_extent_tree_init(handle_t *, struct inode *); extern int ext4_ext_calc_credits_for_insert(struct inode *, struct ext4_ext_path *); +extern int ext4_ext_try_to_merge(struct inode *inode, + struct ext4_ext_path *path, + struct ext4_extent *); extern unsigned int ext4_ext_check_overlap(struct inode *, struct ext4_extent *, struct ext4_ext_path *); extern int ext4_ext_insert_extent(handle_t *, struct inode *, struct ext4_ext_path *, struct ext4_extent *); extern int ext4_ext_walk_space(struct inode *, unsigned long, unsigned long, ext_prepare_callback, void *); From owner-xfs@oss.sgi.com Mon Jun 25 06:50:43 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 06:50:47 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from e36.co.us.ibm.com (e36.co.us.ibm.com [32.97.110.154]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5PDogdo010636 for ; Mon, 25 Jun 2007 06:50:43 -0700 Received: from d03relay02.boulder.ibm.com (d03relay02.boulder.ibm.com [9.17.195.227]) by e36.co.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5PDohsb001898 for ; Mon, 25 Jun 2007 09:50:43 -0400 Received: from d03av02.boulder.ibm.com (d03av02.boulder.ibm.com [9.17.195.168]) by d03relay02.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5PDohm4255808 for ; Mon, 25 Jun 2007 07:50:43 -0600 Received: from d03av02.boulder.ibm.com (loopback [127.0.0.1]) by d03av02.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5PDogcw019100 for ; Mon, 25 Jun 2007 
07:50:43 -0600 Received: from amitarora.in.ibm.com (amitarora.in.ibm.com [9.124.31.198]) by d03av02.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5PDofNG018994; Mon, 25 Jun 2007 07:50:42 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id E374C18B996; Mon, 25 Jun 2007 19:20:51 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5PDopjd013878; Mon, 25 Jun 2007 19:20:51 +0530 Date: Mon, 25 Jun 2007 19:20:51 +0530 From: "Amit K. Arora" To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org Cc: David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: [PATCH 7/7][TAKE5] ext4: support new modes Message-ID: <20070625135051.GH1951@amitarora.in.ibm.com> References: <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625132810.GA1951@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11914 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs Support new values of mode in ext4. This patch supports new mode values/flags in ext4. With this patch ext4 will be able to support FA_ALLOCATE and FA_RESV_SPACE modes. 
Supporting the FA_DEALLOCATE and FA_UNRESV_SPACE fallocate modes in ext4 is left as future work. Signed-off-by: Amit Arora Index: linux-2.6.22-rc4/fs/ext4/extents.c =================================================================== --- linux-2.6.22-rc4.orig/fs/ext4/extents.c +++ linux-2.6.22-rc4/fs/ext4/extents.c @@ -2477,7 +2477,8 @@ int ext4_ext_writepage_trans_blocks(stru /* * preallocate space for a file. This implements ext4's fallocate inode * operation, which gets called from sys_fallocate system call. - * Currently only FA_ALLOCATE mode is supported on extent based files. + * Currently only FA_ALLOCATE and FA_RESV_SPACE modes are supported on + * extent based files. * We may have more modes supported in future - like FA_DEALLOCATE, which * tells fallocate to unallocate previously (pre)allocated blocks. * For block-mapped files, posix_fallocate should fall back to the method @@ -2499,7 +2500,8 @@ long ext4_fallocate(struct inode *inode, * currently supporting (pre)allocate mode for extent-based * files _only_ */ - if (mode != FA_ALLOCATE || !(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)) + if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL) || + !(mode == FA_ALLOCATE || mode == FA_RESV_SPACE)) return -EOPNOTSUPP; /* preallocation to directories is currently not supported */ @@ -2572,9 +2574,10 @@ retry: /* * Time to update the file size. - * Update only when preallocation was requested beyond the file size. + * Update only when preallocation was requested beyond the file size + * and when FA_FL_KEEP_SIZE mode is not specified!
*/ - if ((offset + len) > i_size_read(inode)) { + if (!(mode & FA_FL_KEEP_SIZE) && (offset + len) > i_size_read(inode)) { if (ret > 0) { /* * if no error, we assume preallocation succeeded From owner-xfs@oss.sgi.com Mon Jun 25 07:00:04 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 07:00:06 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.3 required=5.0 tests=AWL,BAYES_60,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5PE03do014232 for ; Mon, 25 Jun 2007 07:00:04 -0700 Received: from Liberator.local (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 7677418011E86; Mon, 25 Jun 2007 09:00:03 -0500 (CDT) Message-ID: <467FCA62.8080800@sandeen.net> Date: Mon, 25 Jun 2007 09:00:02 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.0 (Macintosh/20070326) MIME-Version: 1.0 To: Christoph Hellwig CC: xfs-oss Subject: Re: [PATCH] simplify vnode tracing calls References: <467F5053.4040108@sandeen.net> <20070625062047.GB1307@infradead.org> In-Reply-To: <20070625062047.GB1307@infradead.org> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11915 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Christoph Hellwig wrote: > On Mon, Jun 25, 2007 at 12:19:15AM -0500, Eric Sandeen wrote: >> Don't think I've sent this one yet... :) > > Any chance we can keep the name lower-cases despite the simplified > prototype? Yeah, it is a bit of a mishmash this way isn't it. 
SGI guys, do you have a preference? Christoph, how would you do it, vn_trace_enter(vp) macro calling __vn_trace_enter() function? > Also it might make sense to merge the previous patch into > this one. *shrug* either way. -Eric From owner-xfs@oss.sgi.com Mon Jun 25 07:11:36 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 07:11:40 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from over.ny.us.ibm.com (over.ny.us.ibm.com [32.97.182.150]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5PEBXdo018404 for ; Mon, 25 Jun 2007 07:11:36 -0700 Received: from e6.ny.us.ibm.com ([192.168.1.106]) by pokfb.esmtp.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5PDe6M0017825 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK) for ; Mon, 25 Jun 2007 09:40:06 -0400 Received: from d01relay04.pok.ibm.com (d01relay04.pok.ibm.com [9.56.227.236]) by e6.ny.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5PDfDYm028915 for ; Mon, 25 Jun 2007 09:41:13 -0400 Received: from d01av04.pok.ibm.com (d01av04.pok.ibm.com [9.56.224.64]) by d01relay04.pok.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5PDe4cH549482 for ; Mon, 25 Jun 2007 09:40:04 -0400 Received: from d01av04.pok.ibm.com (loopback [127.0.0.1]) by d01av04.pok.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5PDe36v025131 for ; Mon, 25 Jun 2007 09:40:04 -0400 Received: from amitarora.in.ibm.com (amitarora.in.ibm.com [9.124.31.198]) by d01av04.pok.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5PDe2Bb025035; Mon, 25 Jun 2007 09:40:02 -0400 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id AFBC818B996; Mon, 25 Jun 2007 19:10:12 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5PDeCca009396; Mon, 25 Jun 2007 
19:10:12 +0530 Date: Mon, 25 Jun 2007 19:10:12 +0530 From: "Amit K. Arora" To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org Cc: David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: [PATCH 1/7][TAKE5] fallocate() implementation on i386, x86_64 and powerpc Message-ID: <20070625134012.GB1951@amitarora.in.ibm.com> References: <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625132810.GA1951@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11916 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs This patch implements sys_fallocate() and adds support on i386, x86_64 and powerpc platforms. Changelog: --------- Changes from Take3 to Take4: 1) Do not update c/mtime. Let each filesystem update ctime (update of mtime will not be required for allocation since we touch only metadata/inode and not blocks), if required. Changes from Take2 to Take3: 1) Patches now based on 2.6.22-rc1 kernel. Changes from Take1(initial post on 26th April, 2007) to Take2: 1) Added description before sys_fallocate() definition. 2) Return EINVAL for len<=0 (With new draft that Ulrich pointed to, posix_fallocate should return EINVAL for len <= 0. 
3) Return EOPNOTSUPP if mode is not one of FA_ALLOCATE or FA_DEALLOCATE 4) Do not return ENODEV for dirs (let individual file systems decide if they want to support preallocation to directories or not. 5) Check for wrap through zero. 6) Update c/mtime if fallocate() succeeds. 7) Added mode descriptions in fs.h 8) Added variable names to function definition (fallocate inode op) Signed-off-by: Amit Arora Index: linux-2.6.22-rc4/arch/i386/kernel/syscall_table.S =================================================================== --- linux-2.6.22-rc4.orig/arch/i386/kernel/syscall_table.S +++ linux-2.6.22-rc4/arch/i386/kernel/syscall_table.S @@ -323,3 +323,4 @@ ENTRY(sys_call_table) .long sys_signalfd .long sys_timerfd .long sys_eventfd + .long sys_fallocate Index: linux-2.6.22-rc4/arch/powerpc/kernel/sys_ppc32.c =================================================================== --- linux-2.6.22-rc4.orig/arch/powerpc/kernel/sys_ppc32.c +++ linux-2.6.22-rc4/arch/powerpc/kernel/sys_ppc32.c @@ -773,6 +773,13 @@ asmlinkage int compat_sys_truncate64(con return sys_truncate(path, (high << 32) | low); } +asmlinkage long compat_sys_fallocate(int fd, int mode, u32 offhi, u32 offlo, + u32 lenhi, u32 lenlo) +{ + return sys_fallocate(fd, mode, ((loff_t)offhi << 32) | offlo, + ((loff_t)lenhi << 32) | lenlo); +} + asmlinkage int compat_sys_ftruncate64(unsigned int fd, u32 reg4, unsigned long high, unsigned long low) { Index: linux-2.6.22-rc4/arch/x86_64/ia32/ia32entry.S =================================================================== --- linux-2.6.22-rc4.orig/arch/x86_64/ia32/ia32entry.S +++ linux-2.6.22-rc4/arch/x86_64/ia32/ia32entry.S @@ -719,4 +719,5 @@ ia32_sys_call_table: .quad compat_sys_signalfd .quad compat_sys_timerfd .quad sys_eventfd + .quad sys_fallocate ia32_syscall_end: Index: linux-2.6.22-rc4/fs/open.c =================================================================== --- linux-2.6.22-rc4.orig/fs/open.c +++ linux-2.6.22-rc4/fs/open.c @@ -353,6 +353,92 @@ asmlinkage 
long sys_ftruncate64(unsigned #endif /* + * sys_fallocate - preallocate blocks or free preallocated blocks + * @fd: the file descriptor + * @mode: mode specifies if fallocate should preallocate blocks OR free + * (unallocate) preallocated blocks. Currently only FA_ALLOCATE and + * FA_DEALLOCATE modes are supported. + * @offset: The offset within file, from where (un)allocation is being + * requested. It should not have a negative value. + * @len: The amount (in bytes) of space to be (un)allocated, from the offset. + * + * This system call, depending on the mode, preallocates or unallocates blocks + * for a file. The range of blocks depends on the value of offset and len + * arguments provided by the user/application. For FA_ALLOCATE mode, if this + * system call succeeds, subsequent writes to the file in the given range + * (specified by offset & len) should not fail - even if the file system + * later becomes full. Hence the preallocation done is persistent (valid + * even after reopen of the file and remount/reboot). + * + * It is expected that the ->fallocate() inode operation implemented by the + * individual file systems will update the file size and/or ctime/mtime + * depending on the mode and also on the success of the operation. + * + * Note: In case the file system does not support preallocation, + * posix_fallocate() should fall back to the library implementation (i.e. + * allocating zero-filled new blocks to the file). + * + * Return Values + * 0 : On SUCCESS a value of zero is returned. + * error : On Failure, an error code will be returned. + * An error code of -ENOSYS or -EOPNOTSUPP should make posix_fallocate() + * fall back on the library implementation of fallocate. + * + * A generic fallocate is to be added for file systems that do not + * support fallocate.
+ */ +asmlinkage long sys_fallocate(int fd, int mode, loff_t offset, loff_t len) +{ + struct file *file; + struct inode *inode; + long ret = -EINVAL; + + if (offset < 0 || len <= 0) + goto out; + + /* Return error if mode is not supported */ + ret = -EOPNOTSUPP; + if (mode != FA_ALLOCATE && mode != FA_DEALLOCATE) + goto out; + + ret = -EBADF; + file = fget(fd); + if (!file) + goto out; + if (!(file->f_mode & FMODE_WRITE)) + goto out_fput; + + inode = file->f_path.dentry->d_inode; + + ret = -ESPIPE; + if (S_ISFIFO(inode->i_mode)) + goto out_fput; + + ret = -ENODEV; + /* + * Let individual file system decide if it supports preallocation + * for directories or not. + */ + if (!S_ISREG(inode->i_mode) && !S_ISDIR(inode->i_mode)) + goto out_fput; + + ret = -EFBIG; + /* Check for wrap through zero too */ + if (((offset + len) > inode->i_sb->s_maxbytes) || ((offset + len) < 0)) + goto out_fput; + + if (inode->i_op && inode->i_op->fallocate) + ret = inode->i_op->fallocate(inode, mode, offset, len); + else + ret = -ENOSYS; + +out_fput: + fput(file); +out: + return ret; +} + +/* * access() needs to use the real uid/gid, not the effective uid/gid. * We do this by temporarily clearing all FS-related capabilities and * switching the fsuid/fsgid around to the real ones. 
Index: linux-2.6.22-rc4/include/asm-i386/unistd.h =================================================================== --- linux-2.6.22-rc4.orig/include/asm-i386/unistd.h +++ linux-2.6.22-rc4/include/asm-i386/unistd.h @@ -329,10 +329,11 @@ #define __NR_signalfd 321 #define __NR_timerfd 322 #define __NR_eventfd 323 +#define __NR_fallocate 324 #ifdef __KERNEL__ -#define NR_syscalls 324 +#define NR_syscalls 325 #define __ARCH_WANT_IPC_PARSE_VERSION #define __ARCH_WANT_OLD_READDIR Index: linux-2.6.22-rc4/include/asm-powerpc/systbl.h =================================================================== --- linux-2.6.22-rc4.orig/include/asm-powerpc/systbl.h +++ linux-2.6.22-rc4/include/asm-powerpc/systbl.h @@ -308,6 +308,7 @@ COMPAT_SYS_SPU(move_pages) SYSCALL_SPU(getcpu) COMPAT_SYS(epoll_pwait) COMPAT_SYS_SPU(utimensat) +COMPAT_SYS(fallocate) COMPAT_SYS_SPU(signalfd) COMPAT_SYS_SPU(timerfd) SYSCALL_SPU(eventfd) Index: linux-2.6.22-rc4/include/asm-powerpc/unistd.h =================================================================== --- linux-2.6.22-rc4.orig/include/asm-powerpc/unistd.h +++ linux-2.6.22-rc4/include/asm-powerpc/unistd.h @@ -330,10 +330,11 @@ #define __NR_signalfd 305 #define __NR_timerfd 306 #define __NR_eventfd 307 +#define __NR_fallocate 308 #ifdef __KERNEL__ -#define __NR_syscalls 308 +#define __NR_syscalls 309 #define __NR__exit __NR_exit #define NR_syscalls __NR_syscalls Index: linux-2.6.22-rc4/include/asm-x86_64/unistd.h =================================================================== --- linux-2.6.22-rc4.orig/include/asm-x86_64/unistd.h +++ linux-2.6.22-rc4/include/asm-x86_64/unistd.h @@ -630,6 +630,8 @@ __SYSCALL(__NR_signalfd, sys_signalfd) __SYSCALL(__NR_timerfd, sys_timerfd) #define __NR_eventfd 283 __SYSCALL(__NR_eventfd, sys_eventfd) +#define __NR_fallocate 284 +__SYSCALL(__NR_fallocate, sys_fallocate) #ifndef __NO_STUBS #define __ARCH_WANT_OLD_READDIR Index: linux-2.6.22-rc4/include/linux/fs.h 
=================================================================== --- linux-2.6.22-rc4.orig/include/linux/fs.h +++ linux-2.6.22-rc4/include/linux/fs.h @@ -266,6 +266,17 @@ extern int dir_notify_enable; #define SYNC_FILE_RANGE_WRITE 2 #define SYNC_FILE_RANGE_WAIT_AFTER 4 +/* + * sys_fallocate modes + * Currently sys_fallocate supports two modes: + * FA_ALLOCATE : This is the preallocate mode, using which an application/user + * may request (pre)allocation of blocks. + * FA_DEALLOCATE: This is the deallocate mode, which can be used to free + * the preallocated blocks. + */ +#define FA_ALLOCATE 0x1 +#define FA_DEALLOCATE 0x2 + #ifdef __KERNEL__ #include @@ -1138,6 +1149,8 @@ struct inode_operations { ssize_t (*listxattr) (struct dentry *, char *, size_t); int (*removexattr) (struct dentry *, const char *); void (*truncate_range)(struct inode *, loff_t, loff_t); + long (*fallocate)(struct inode *inode, int mode, loff_t offset, + loff_t len); }; struct seq_file; Index: linux-2.6.22-rc4/include/linux/syscalls.h =================================================================== --- linux-2.6.22-rc4.orig/include/linux/syscalls.h +++ linux-2.6.22-rc4/include/linux/syscalls.h @@ -608,6 +608,7 @@ asmlinkage long sys_signalfd(int ufd, si asmlinkage long sys_timerfd(int ufd, int clockid, int flags, const struct itimerspec __user *utmr); asmlinkage long sys_eventfd(unsigned int count); +asmlinkage long sys_fallocate(int fd, int mode, loff_t offset, loff_t len); int kernel_execve(const char *filename, char *const argv[], char *const envp[]); From owner-xfs@oss.sgi.com Mon Jun 25 08:33:14 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 08:33:18 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.7 required=5.0 tests=AWL,BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from over.ny.us.ibm.com (over.ny.us.ibm.com [32.97.182.150]) by oss.sgi.com (8.12.10/8.12.10/SuSE 
Linux 0.7) with ESMTP id l5PFXAdO015747 for ; Mon, 25 Jun 2007 08:33:13 -0700 Received: from e34.co.us.ibm.com (e34.co.us.ibm.com [32.97.110.152]) by pokfb.esmtp.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5PF3NDW006243 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK) for ; Mon, 25 Jun 2007 11:03:23 -0400 Received: from d03relay04.boulder.ibm.com (d03relay04.boulder.ibm.com [9.17.195.106]) by e34.co.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5PF3MMh031305 for ; Mon, 25 Jun 2007 11:03:22 -0400 Received: from d03av03.boulder.ibm.com (d03av03.boulder.ibm.com [9.17.195.169]) by d03relay04.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5PF3Hx1173796 for ; Mon, 25 Jun 2007 09:03:17 -0600 Received: from d03av03.boulder.ibm.com (loopback [127.0.0.1]) by d03av03.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5PF3GIS010030 for ; Mon, 25 Jun 2007 09:03:17 -0600 Received: from amitarora.in.ibm.com ([9.124.35.152]) by d03av03.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5PF3Fwq009812; Mon, 25 Jun 2007 09:03:16 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id 1DA5718B996; Mon, 25 Jun 2007 20:33:21 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5PF3K10011270; Mon, 25 Jun 2007 20:33:20 +0530 Date: Mon, 25 Jun 2007 20:33:20 +0530 From: "Amit K. 
Arora" To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org Cc: David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070625150320.GA8686@amitarora.in.ibm.com> References: <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625134500.GE1951@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11917 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs I have not implemented the FA_FL_FREE_ENOSPC and FA_ZERO_SPACE flags yet, as *suggested* by Andreas in the http://lkml.org/lkml/2007/6/14/323 post. If it is decided that these flags are also needed, I will update this patch. Thanks! On Mon, Jun 25, 2007 at 07:15:00PM +0530, Amit K. Arora wrote: > Implement new flags and values for mode argument. > > This patch implements the new flags and values for the "mode" argument > of the fallocate system call. It is based on the discussion between > Andreas Dilger and David Chinner on the man page proposed (by the latter) > on fallocate.
> > Signed-off-by: Amit Arora > > Index: linux-2.6.22-rc4/include/linux/fs.h > =================================================================== > --- linux-2.6.22-rc4.orig/include/linux/fs.h > +++ linux-2.6.22-rc4/include/linux/fs.h > @@ -267,15 +267,16 @@ extern int dir_notify_enable; > #define SYNC_FILE_RANGE_WAIT_AFTER 4 > > /* > - * sys_fallocate modes > - * Currently sys_fallocate supports two modes: > - * FA_ALLOCATE : This is the preallocate mode, using which an application/user > - * may request (pre)allocation of blocks. > - * FA_DEALLOCATE: This is the deallocate mode, which can be used to free > - * the preallocated blocks. > + * sys_fallocate mode flags and values > */ > -#define FA_ALLOCATE 0x1 > -#define FA_DEALLOCATE 0x2 > +#define FA_FL_DEALLOC 0x01 /* default is allocate */ > +#define FA_FL_KEEP_SIZE 0x02 /* default is extend/shrink size */ > +#define FA_FL_DEL_DATA 0x04 /* default is keep written data on DEALLOC */ > + > +#define FA_ALLOCATE 0 > +#define FA_DEALLOCATE FA_FL_DEALLOC > +#define FA_RESV_SPACE FA_FL_KEEP_SIZE > +#define FA_UNRESV_SPACE (FA_FL_DEALLOC | FA_FL_KEEP_SIZE | FA_FL_DEL_DATA) > > #ifdef __KERNEL__ > > Index: linux-2.6.22-rc4/fs/open.c > =================================================================== > --- linux-2.6.22-rc4.orig/fs/open.c > +++ linux-2.6.22-rc4/fs/open.c > @@ -356,23 +356,26 @@ asmlinkage long sys_ftruncate64(unsigned > * sys_fallocate - preallocate blocks or free preallocated blocks > * @fd: the file descriptor > * @mode: mode specifies if fallocate should preallocate blocks OR free > - * (unallocate) preallocated blocks. Currently only FA_ALLOCATE and > - * FA_DEALLOCATE modes are supported. > + * (unallocate) preallocated blocks. > * @offset: The offset within file, from where (un)allocation is being > * requested. It should not have a negative value. > * @len: The amount (in bytes) of space to be (un)allocated, from the offset. 
> * > * This system call, depending on the mode, preallocates or unallocates blocks > * for a file. The range of blocks depends on the value of offset and len > - * arguments provided by the user/application. For FA_ALLOCATE mode, if this > + * arguments provided by the user/application. For FA_ALLOCATE and > + * FA_RESV_SPACE modes, if the sys_fallocate() > * system call succeeds, subsequent writes to the file in the given range > * (specified by offset & len) should not fail - even if the file system > * later becomes full. Hence the preallocation done is persistent (valid > - * even after reopen of the file and remount/reboot). > + * even after reopen of the file and remount/reboot). If FA_RESV_SPACE mode > + * is passed, the file size will not be changed even if the preallocation > + * is beyond EOF. > * > * It is expected that the ->fallocate() inode operation implemented by the > * individual file systems will update the file size and/or ctime/mtime > - * depending on the mode and also on the success of the operation. > + * depending on the mode (change is visible to user or not - say file size) > + * and obviously, on the success of the operation. > * > * Note: Incase the file system does not support preallocation, > * posix_fallocate() should fall back to the library implementation (i.e. 
> @@ -398,7 +401,8 @@ asmlinkage long sys_fallocate(int fd, in > > /* Return error if mode is not supported */ > ret = -EOPNOTSUPP; > - if (mode != FA_ALLOCATE && mode != FA_DEALLOCATE) > + if (!(mode == FA_ALLOCATE || mode == FA_DEALLOCATE || > + mode == FA_RESV_SPACE || mode == FA_UNRESV_SPACE)) > goto out; > > ret = -EBADF; > - > To unsubscribe from this list: send the line "unsubscribe linux-ext4" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html From owner-xfs@oss.sgi.com Mon Jun 25 14:46:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 14:46:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.clusterfs.com (mail.clusterfs.com [206.168.112.78]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5PLkRtL007176 for ; Mon, 25 Jun 2007 14:46:29 -0700 Received: from localhost.adilger.int (S0106000bdb95b39c.cg.shawcable.net [70.72.213.136]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.clusterfs.com (Postfix) with ESMTP id BB9E27BA36C; Mon, 25 Jun 2007 15:46:27 -0600 (MDT) Received: by localhost.adilger.int (Postfix, from userid 1000) id 219C141B9; Mon, 25 Jun 2007 15:46:26 -0600 (MDT) Date: Mon, 25 Jun 2007 15:46:26 -0600 From: Andreas Dilger To: "Amit K. Arora" Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070625214626.GJ5181@schatzie.adilger.int> Mail-Followup-To: "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com References: <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625150320.GA8686@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625150320.GA8686@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-GPG-Key: 1024D/0D35BED6 X-GPG-Fingerprint: 7A37 5D79 BF1B CECA D44F 8A29 A488 39F5 0D35 BED6 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11918 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: adilger@clusterfs.com Precedence: bulk X-list: xfs On Jun 25, 2007 20:33 +0530, Amit K. Arora wrote: > I have not implemented FA_FL_FREE_ENOSPC and FA_ZERO_SPACE flags yet, as > *suggested* by Andreas in http://lkml.org/lkml/2007/6/14/323 post. > If it is decided that these flags are also needed, I will update this > patch. Thanks! Can you clarify - what is the current behaviour when ENOSPC (or some other error) is hit? Does it keep the current fallocate() or does it free it? For FA_ZERO_SPACE - I'd think this would (IMHO) be the default - we don't want to expose uninitialized disk blocks to userspace. I'm not sure if this makes sense at all. > On Mon, Jun 25, 2007 at 07:15:00PM +0530, Amit K. Arora wrote: > > Implement new flags and values for mode argument. > > > > This patch implements the new flags and values for the "mode" argument > > of the fallocate system call. 
It is based on the discussion between > > Andreas Dilger and David Chinner on the man page proposed (by the later) > > on fallocate. > > > > Signed-off-by: Amit Arora > > > > Index: linux-2.6.22-rc4/include/linux/fs.h > > =================================================================== > > --- linux-2.6.22-rc4.orig/include/linux/fs.h > > +++ linux-2.6.22-rc4/include/linux/fs.h > > @@ -267,15 +267,16 @@ extern int dir_notify_enable; > > #define SYNC_FILE_RANGE_WAIT_AFTER 4 > > > > /* > > - * sys_fallocate modes > > - * Currently sys_fallocate supports two modes: > > - * FA_ALLOCATE : This is the preallocate mode, using which an application/user > > - * may request (pre)allocation of blocks. > > - * FA_DEALLOCATE: This is the deallocate mode, which can be used to free > > - * the preallocated blocks. > > + * sys_fallocate mode flags and values > > */ > > -#define FA_ALLOCATE 0x1 > > -#define FA_DEALLOCATE 0x2 > > +#define FA_FL_DEALLOC 0x01 /* default is allocate */ > > +#define FA_FL_KEEP_SIZE 0x02 /* default is extend/shrink size */ > > +#define FA_FL_DEL_DATA 0x04 /* default is keep written data on DEALLOC */ > > + > > +#define FA_ALLOCATE 0 > > +#define FA_DEALLOCATE FA_FL_DEALLOC > > +#define FA_RESV_SPACE FA_FL_KEEP_SIZE > > +#define FA_UNRESV_SPACE (FA_FL_DEALLOC | FA_FL_KEEP_SIZE | FA_FL_DEL_DATA) > > > > #ifdef __KERNEL__ > > > > Index: linux-2.6.22-rc4/fs/open.c > > =================================================================== > > --- linux-2.6.22-rc4.orig/fs/open.c > > +++ linux-2.6.22-rc4/fs/open.c > > @@ -356,23 +356,26 @@ asmlinkage long sys_ftruncate64(unsigned > > * sys_fallocate - preallocate blocks or free preallocated blocks > > * @fd: the file descriptor > > * @mode: mode specifies if fallocate should preallocate blocks OR free > > - * (unallocate) preallocated blocks. Currently only FA_ALLOCATE and > > - * FA_DEALLOCATE modes are supported. > > + * (unallocate) preallocated blocks. 
> > * @offset: The offset within file, from where (un)allocation is being > > * requested. It should not have a negative value. > > * @len: The amount (in bytes) of space to be (un)allocated, from the offset. > > * > > * This system call, depending on the mode, preallocates or unallocates blocks > > * for a file. The range of blocks depends on the value of offset and len > > - * arguments provided by the user/application. For FA_ALLOCATE mode, if this > > + * arguments provided by the user/application. For FA_ALLOCATE and > > + * FA_RESV_SPACE modes, if the sys_fallocate() > > * system call succeeds, subsequent writes to the file in the given range > > * (specified by offset & len) should not fail - even if the file system > > * later becomes full. Hence the preallocation done is persistent (valid > > - * even after reopen of the file and remount/reboot). > > + * even after reopen of the file and remount/reboot). If FA_RESV_SPACE mode > > + * is passed, the file size will not be changed even if the preallocation > > + * is beyond EOF. > > * > > * It is expected that the ->fallocate() inode operation implemented by the > > * individual file systems will update the file size and/or ctime/mtime > > - * depending on the mode and also on the success of the operation. > > + * depending on the mode (change is visible to user or not - say file size) > > + * and obviously, on the success of the operation. > > * > > * Note: Incase the file system does not support preallocation, > > * posix_fallocate() should fall back to the library implementation (i.e. 
> > @@ -398,7 +401,8 @@ asmlinkage long sys_fallocate(int fd, in > > > > /* Return error if mode is not supported */ > > ret = -EOPNOTSUPP; > > - if (mode != FA_ALLOCATE && mode != FA_DEALLOCATE) > > + if (!(mode == FA_ALLOCATE || mode == FA_DEALLOCATE || > > + mode == FA_RESV_SPACE || mode == FA_UNRESV_SPACE)) > > goto out; > > > > ret = -EBADF; > > - > > To unsubscribe from this list: send the line "unsubscribe linux-ext4" in > > the body of a message to majordomo@vger.kernel.org > > More majordomo info at http://vger.kernel.org/majordomo-info.html Cheers, Andreas -- Andreas Dilger Principal Software Engineer Cluster File Systems, Inc. From owner-xfs@oss.sgi.com Mon Jun 25 14:52:40 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 14:52:44 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.clusterfs.com (mail.clusterfs.com [206.168.112.78]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5PLqdtL008645 for ; Mon, 25 Jun 2007 14:52:40 -0700 Received: from localhost.adilger.int (S0106000bdb95b39c.cg.shawcable.net [70.72.213.136]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.clusterfs.com (Postfix) with ESMTP id 017447BA36B; Mon, 25 Jun 2007 15:52:41 -0600 (MDT) Received: by localhost.adilger.int (Postfix, from userid 1000) id 547C341B9; Mon, 25 Jun 2007 15:52:39 -0600 (MDT) Date: Mon, 25 Jun 2007 15:52:39 -0600 From: Andreas Dilger To: "Amit K. Arora" Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070625215239.GK5181@schatzie.adilger.int> Mail-Followup-To: "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com References: <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625134500.GE1951@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-GPG-Key: 1024D/0D35BED6 X-GPG-Fingerprint: 7A37 5D79 BF1B CECA D44F 8A29 A488 39F5 0D35 BED6 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11919 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: adilger@clusterfs.com Precedence: bulk X-list: xfs On Jun 25, 2007 19:15 +0530, Amit K. Arora wrote: > +#define FA_FL_DEALLOC 0x01 /* default is allocate */ > +#define FA_FL_KEEP_SIZE 0x02 /* default is extend/shrink size */ > +#define FA_FL_DEL_DATA 0x04 /* default is keep written data on DEALLOC */ In XFS one of the (many) ALLOC modes is to zero existing data on allocate. For ext4 all this would mean is calling ext4_ext_mark_uninitialized() on each extent. For some workloads this would be much faster than truncate and reallocate of all the blocks in a file. In that light, please change the comment to /* default is keep existing data */ so that it doesn't imply this is only for DEALLOC. Cheers, Andreas -- Andreas Dilger Principal Software Engineer Cluster File Systems, Inc. 
From owner-xfs@oss.sgi.com Mon Jun 25 14:56:26 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 14:56:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.clusterfs.com (mail.clusterfs.com [206.168.112.78]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5PLuPtL009830 for ; Mon, 25 Jun 2007 14:56:26 -0700 Received: from localhost.adilger.int (S0106000bdb95b39c.cg.shawcable.net [70.72.213.136]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.clusterfs.com (Postfix) with ESMTP id AC6BE7BA36B; Mon, 25 Jun 2007 15:56:26 -0600 (MDT) Received: by localhost.adilger.int (Postfix, from userid 1000) id 14A1E41B9; Mon, 25 Jun 2007 15:56:25 -0600 (MDT) Date: Mon, 25 Jun 2007 15:56:25 -0600 From: Andreas Dilger To: "Amit K. Arora" Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 7/7][TAKE5] ext4: support new modes Message-ID: <20070625215625.GL5181@schatzie.adilger.int> Mail-Followup-To: "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com References: <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625135051.GH1951@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625135051.GH1951@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-GPG-Key: 1024D/0D35BED6 X-GPG-Fingerprint: 7A37 5D79 BF1B CECA D44F 8A29 A488 39F5 0D35 BED6 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11920 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: adilger@clusterfs.com Precedence: bulk X-list: xfs On Jun 25, 2007 19:20 +0530, Amit K. Arora wrote: > @@ -2499,7 +2500,8 @@ long ext4_fallocate(struct inode *inode, > * currently supporting (pre)allocate mode for extent-based > * files _only_ > */ > - if (mode != FA_ALLOCATE || !(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)) > + if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL) || > + !(mode == FA_ALLOCATE || mode == FA_RESV_SPACE)) > return -EOPNOTSUPP; This should probably just check for the individual flags it can support (e.g. no FA_FL_DEALLOC, no FA_FL_DEL_DATA). I also thought another proposed flag was to determine whether mtime (and maybe ctime) is changed when doing prealloc/dealloc space? Default should probably be to change mtime/ctime, and have FA_FL_NO_MTIME. 
Someone else should decide if we want to allow changing the file w/o changing ctime, if that is required even though the file is not visibly changing. Maybe the ctime update should be implicit if the size or mtime are changing? Cheers, Andreas -- Andreas Dilger Principal Software Engineer Cluster File Systems, Inc. From owner-xfs@oss.sgi.com Mon Jun 25 16:28:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 16:28:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5PNSPtL004343 for ; Mon, 25 Jun 2007 16:28:27 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA00661; Tue, 26 Jun 2007 09:28:20 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5PNSJeW976190; Tue, 26 Jun 2007 09:28:20 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5PNSH4e975845; Tue, 26 Jun 2007 09:28:17 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 26 Jun 2007 09:28:17 +1000 From: David Chinner To: Sandy1 Cc: linux-xfs@oss.sgi.com Subject: Re: Wrong Data Pointer-XFS File system Message-ID: <20070625232817.GC31489@sgi.com> References: <11246839.post@talk.nabble.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <11246839.post@talk.nabble.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11921 
X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Jun 21, 2007 at 11:23:35PM -0700, Sandy1 wrote: > > Hi, > I am Using SuSe 10.0 with Xfs file system. I am working on a File system > based project. During the initial research in On disk layout of File system > , i found data pointer problem. I was not able to get the data location as > per pointed by "absolute block no." in "xfs_bmbt_rec" extent pointers. The xfs_bmbt_rec holds the address in FSB format, not DADDR format. > When i create any file in 0th (zero`th) AG in that case i am able to reach > on proper location by using "absolute block no." pointer. > > But when i create any file in 1st or in 2nd AG and so on. I never got the > file data at the location pointed by "absolute block no.". I always found > the file data before the pointed address. FSB notation is a sparse representation. > I am not getting any value in Superblock that tells me about difference in > pointer location with actual data. > > This value always becomes multiple of the AG number. > > Please help to get out from this problem. > > Is there any other calculation for finding the data locations. man xfs_db. Search for "convert". Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 25 16:54:51 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 16:54:55 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5PNsmtL015330 for ; Mon, 25 Jun 2007 16:54:50 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA01333; Tue, 26 Jun 2007 09:54:40 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5PNsceW986528; Tue, 26 Jun 2007 09:54:39 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5PNsarx986123; Tue, 26 Jun 2007 09:54:36 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Tue, 26 Jun 2007 09:54:36 +1000 From: David Chinner To: Eric Sandeen Cc: Christoph Hellwig , xfs-oss Subject: Re: [PATCH] simplify vnode tracing calls Message-ID: <20070625235436.GD31489@sgi.com> References: <467F5053.4040108@sandeen.net> <20070625062047.GB1307@infradead.org> <467FCA62.8080800@sandeen.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <467FCA62.8080800@sandeen.net> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11922 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Mon, Jun 25, 2007 at 
09:00:02AM -0500, Eric Sandeen wrote: > Christoph Hellwig wrote: > > On Mon, Jun 25, 2007 at 12:19:15AM -0500, Eric Sandeen wrote: > >> Don't think I've sent this one yet... :) > > > > Any chance we can keep the name lower-cases despite the simplified > > prototype? > > Yeah, it is a bit of a mishmash this way isn't it. SGI guys, do you > have a preference? Christoph, how would you do it, vn_trace_enter(vp) > macro calling __vn_trace_enter() function? Yeah, keep the lower case names if possible. I can't think of a simpler way of doing it, and it's not terribly ugly. I'm open to better solutions, though..... > > Also it might make sense to merge the previous patch into > > this one. > > *shrug* either way. Keep 'em separate - that way I don't have to go and review this first one again ;) Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Jun 25 18:27:26 2007 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Jun 2007 18:27:33 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.6 required=5.0 tests=AWL,BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5Q1RNtL007540 for ; Mon, 25 Jun 2007 18:27:25 -0700 Received: from edge.yarra.acx (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 6946092C5F3; Tue, 26 Jun 2007 11:27:22 +1000 (EST) Subject: Re: Reducing memory requirements for high extent xfs files From: Nathan Scott Reply-To: nscott@aconex.com To: Michael Nishimoto Cc: David Chinner , xfs@oss.sgi.com In-Reply-To: <467C620E.4050005@agami.com> References: <200705301649.l4UGnckA027406@oss.sgi.com> <20070530225516.GB85884050@sgi.com> <4665E276.9020406@agami.com> <20070606013601.GR86004887@sgi.com> <4666EC56.9000606@agami.com> <20070606234723.GC86004887@sgi.com> 
<467C620E.4050005@agami.com> Content-Type: text/plain Organization: Aconex Date: Tue, 26 Jun 2007 11:26:15 +1000 Message-Id: <1182821175.15488.64.camel@edge.yarra.acx> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11923 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs Hi Mike, On Fri, 2007-06-22 at 16:58 -0700, Michael Nishimoto wrote: > > > Also, should we consider a file with 1MB extents as > > > fragmented? A 100GB file with 1MB extents has 100k extents. > > > > Yes, that's fragmented - it has 4 orders of magnitude more extents > > than optimal - and the extents are too small to allow reads or > > writes to achieve full bandwidth on high end raid configs.... > > Fair enough, so multiply those numbers by 100 -- a 10TB file ... This seems a flawed way to look at this to me - in practice, almost no one would have files that large. While filesystem sizes increase and can be expected to continue to increase, I'd expect individual file sizes do not tend to increase anywhere near as much - file sizes tend to be an application property, and apps want to work for all filesystems. So, people want to store _more_ files in their larger filesystems, not _larger_ files, AFAICT. So, IMO, this isn't a good place to invest effort - there are a lot of bigger bang-for-buck places that XFS could do with change to make it generally much better. The biggest probably being the amount of log traffic that XFS generates ... that really needs to be tackled. cheers.
-- Nathan From owner-xfs@oss.sgi.com Tue Jun 26 03:32:43 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 03:32:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from e1.ny.us.ibm.com (e1.ny.us.ibm.com [32.97.182.141]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QAWftL011981 for ; Tue, 26 Jun 2007 03:32:42 -0700 Received: from d01relay02.pok.ibm.com (d01relay02.pok.ibm.com [9.56.227.234]) by e1.ny.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5QAWdsi006655 for ; Tue, 26 Jun 2007 06:32:39 -0400 Received: from d01av04.pok.ibm.com (d01av04.pok.ibm.com [9.56.224.64]) by d01relay02.pok.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5QAWdJW481796 for ; Tue, 26 Jun 2007 06:32:39 -0400 Received: from d01av04.pok.ibm.com (loopback [127.0.0.1]) by d01av04.pok.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5QAWdiu016574 for ; Tue, 26 Jun 2007 06:32:39 -0400 Received: from amitarora.in.ibm.com ([9.124.31.198]) by d01av04.pok.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5QAWcMP016541; Tue, 26 Jun 2007 06:32:38 -0400 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id 0D04729EB5D; Tue, 26 Jun 2007 16:02:49 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5QAWlJS028362; Tue, 26 Jun 2007 16:02:47 +0530 Date: Tue, 26 Jun 2007 16:02:47 +0530 From: "Amit K. 
Arora" To: Andreas Dilger Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070626103247.GA19870@amitarora.in.ibm.com> References: <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625150320.GA8686@amitarora.in.ibm.com> <20070625214626.GJ5181@schatzie.adilger.int> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625214626.GJ5181@schatzie.adilger.int> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11924 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs On Mon, Jun 25, 2007 at 03:46:26PM -0600, Andreas Dilger wrote: > On Jun 25, 2007 20:33 +0530, Amit K. Arora wrote: > > I have not implemented FA_FL_FREE_ENOSPC and FA_ZERO_SPACE flags yet, as > > *suggested* by Andreas in http://lkml.org/lkml/2007/6/14/323 post. > > If it is decided that these flags are also needed, I will update this > > patch. Thanks! > > Can you clarify - what is the current behaviour when ENOSPC (or some other > error) is hit? Does it keep the current fallocate() or does it free it? Currently it is left on the file system implementation. In ext4, we do not undo preallocation if some error (say, ENOSPC) is hit. Hence it may end up with partial (pre)allocation. 
This is inline with dd and posix_fallocate, which also do not free the partially allocated space. > For FA_ZERO_SPACE - I'd think this would (IMHO) be the default - we > don't want to expose uninitialized disk blocks to userspace. I'm not > sure if this makes sense at all. I don't think we need to make it default - atleast for filesystems which have a mechanism to distinguish preallocated blocks from "regular" ones. In ext4, for example, we will have a way to mark uninitialized extents. All the preallocated blocks will be part of these uninitialized extents. And any read on these extents will treat them as a hole, returning zeroes to user land. Thus any existing data on uninitialized blocks will not be exposed to the userspace. -- Regards, Amit Arora From owner-xfs@oss.sgi.com Tue Jun 26 03:45:36 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 03:45:41 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from e32.co.us.ibm.com (e32.co.us.ibm.com [32.97.110.150]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QAjZtL018634 for ; Tue, 26 Jun 2007 03:45:36 -0700 Received: from d03relay04.boulder.ibm.com (d03relay04.boulder.ibm.com [9.17.195.106]) by e32.co.us.ibm.com (8.12.11.20060308/8.13.8) with ESMTP id l5QAecIG026107 for ; Tue, 26 Jun 2007 06:40:38 -0400 Received: from d03av01.boulder.ibm.com (d03av01.boulder.ibm.com [9.17.195.167]) by d03relay04.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5QAja6l215516 for ; Tue, 26 Jun 2007 04:45:36 -0600 Received: from d03av01.boulder.ibm.com (loopback [127.0.0.1]) by d03av01.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5QAjaUK021802 for ; Tue, 26 Jun 2007 04:45:36 -0600 Received: from amitarora.in.ibm.com ([9.124.31.198]) by d03av01.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5QAjZ8V021752; 
Tue, 26 Jun 2007 04:45:35 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id DCB2F29EB5D; Tue, 26 Jun 2007 16:15:46 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5QAjkXk001292; Tue, 26 Jun 2007 16:15:46 +0530 Date: Tue, 26 Jun 2007 16:15:46 +0530 From: "Amit K. Arora" To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070626104546.GB19870@amitarora.in.ibm.com> References: <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625215239.GK5181@schatzie.adilger.int> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625215239.GK5181@schatzie.adilger.int> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11925 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs On Mon, Jun 25, 2007 at 03:52:39PM -0600, Andreas Dilger wrote: > On Jun 25, 2007 19:15 +0530, Amit K. Arora wrote: > > +#define FA_FL_DEALLOC 0x01 /* default is allocate */ > > +#define FA_FL_KEEP_SIZE 0x02 /* default is extend/shrink size */ > > +#define FA_FL_DEL_DATA 0x04 /* default is keep written data on DEALLOC */ > > In XFS one of the (many) ALLOC modes is to zero existing data on allocate. 
> For ext4 all this would mean is calling ext4_ext_mark_uninitialized() on > each extent. For some workloads this would be much faster than truncate > and reallocate of all the blocks in a file. In ext4, we already mark each extent having preallocated blocks as uninitialized. This is done as part of following code (which is part of patch 5/7) in ext4_ext_get_blocks() : @@ -2122,6 +2160,8 @@ int ext4_ext_get_blocks(handle_t *handle /* try to insert new extent into found leaf and return */ ext4_ext_store_pblock(&newex, newblock); newex.ee_len = cpu_to_le16(allocated); + if (create == EXT4_CREATE_UNINITIALIZED_EXT) /* Mark uninitialized */ + ext4_ext_mark_uninitialized(&newex); err = ext4_ext_insert_extent(handle, inode, path, &newex); if (err) { /* free data blocks we just allocated */ > In that light, please change the comment to /* default is keep existing data */ > so that it doesn't imply this is only for DEALLOC. Ok. Will update the comment. Thanks! -- Regards, Amit Arora From owner-xfs@oss.sgi.com Tue Jun 26 05:07:47 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 05:07:54 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from e6.ny.us.ibm.com (e6.ny.us.ibm.com [32.97.182.146]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QC7jtL005651 for ; Tue, 26 Jun 2007 05:07:46 -0700 Received: from d01relay04.pok.ibm.com (d01relay04.pok.ibm.com [9.56.227.236]) by e6.ny.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5QC8tqG008650 for ; Tue, 26 Jun 2007 08:08:55 -0400 Received: from d01av03.pok.ibm.com (d01av03.pok.ibm.com [9.56.224.217]) by d01relay04.pok.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5QC7kds481730 for ; Tue, 26 Jun 2007 08:07:46 -0400 Received: from d01av03.pok.ibm.com (loopback [127.0.0.1]) by d01av03.pok.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id 
l5QC7kWC015907 for ; Tue, 26 Jun 2007 08:07:46 -0400 Received: from amitarora.in.ibm.com ([9.124.31.198]) by d01av03.pok.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5QC7iRe015764; Tue, 26 Jun 2007 08:07:45 -0400 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id 0B4BF18B996; Tue, 26 Jun 2007 17:37:56 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5QC7qHi002559; Tue, 26 Jun 2007 17:37:52 +0530 Date: Tue, 26 Jun 2007 17:37:52 +0530 From: "Amit K. Arora" To: Andreas Dilger Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 7/7][TAKE5] ext4: support new modes Message-ID: <20070626120751.GC19870@amitarora.in.ibm.com> References: <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625135051.GH1951@amitarora.in.ibm.com> <20070625215625.GL5181@schatzie.adilger.int> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625215625.GL5181@schatzie.adilger.int> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11926 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs On Mon, Jun 25, 2007 at 03:56:25PM -0600, Andreas Dilger wrote: > On Jun 25, 2007 19:20 +0530, Amit K. 
Arora wrote: > > @@ -2499,7 +2500,8 @@ long ext4_fallocate(struct inode *inode, > > * currently supporting (pre)allocate mode for extent-based > > * files _only_ > > */ > > - if (mode != FA_ALLOCATE || !(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)) > > + if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL) || > > + !(mode == FA_ALLOCATE || mode == FA_RESV_SPACE)) > > return -EOPNOTSUPP; > > This should probably just check for the individual flags it can support > (e.g. no FA_FL_DEALLOC, no FA_FL_DEL_DATA). Hmm.. I am thinking of a scenario when the file system supports some individual flags, but does not support a particular combination of them. Just for example sake, assume we have FA_ZERO_SPACE mode also. Now, if a file system supports FA_ZERO_SPACE, FA_ALLOCATE, FA_DEALLOCATE and FA_RESV_SPACE; and no other mode (i.e. FA_UNRESV_SPACE is not supported for some reason). This means that although we support FA_FL_DEALLOC, FA_FL_KEEP_SIZE and FA_FL_DEL_DATA flags, but we do not support the combination of all these flags (which is nothing but FA_UNRESV_SPACE). > I also thought another proposed flag was to determine whether mtime (and > maybe ctime) is changed when doing prealloc/dealloc space? Default should > probably be to change mtime/ctime, and have FA_FL_NO_MTIME. Someone else > should decide if we want to allow changing the file w/o changing ctime, if > that is required even though the file is not visibly changing. Maybe the > ctime update should be implicit if the size or mtime are changing? Is it really required ? I mean, why should we allow users not to update ctime/mtime even if the file metadata/data gets updated ? It sounds a bit "unnatural" to me. Is there any application scenario in your mind, when you suggest of giving this flexibility to userspace ? I think, modifying ctime/mtime should be dependent on the other flags. E.g., if we do not zero out data blocks on allocation/deallocation, update only ctime. Otherwise, update ctime and mtime both. 
-- Regards, Amit Arora From owner-xfs@oss.sgi.com Tue Jun 26 05:16:32 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 05:16:35 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5QCGTtL007849 for ; Tue, 26 Jun 2007 05:16:31 -0700 Received: from [134.15.251.4] (melb-sw-corp-251-4.corp.sgi.com [134.15.251.4]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id WAA18823; Tue, 26 Jun 2007 22:16:25 +1000 Message-ID: <46810398.7040004@sgi.com> Date: Tue, 26 Jun 2007 22:16:24 +1000 From: Tim Shimmin User-Agent: Thunderbird 1.5.0.12 (Windows/20070509) MIME-Version: 1.0 To: David Chinner CC: xfs-dev , xfs-oss Subject: Re: Review: Multi-File Data Streams V2 References: <20070613041629.GI86004887@sgi.com> <467B8BFA.2050107@sgi.com> <20070625035315.GL86004887@sgi.com> In-Reply-To: <20070625035315.GL86004887@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11927 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs David Chinner wrote: > On Fri, Jun 22, 2007 at 06:44:42PM +1000, Timothy Shimmin wrote: >> * >> As we talked about before, this code seems to come up in a few places: >> >> need = XFS_MIN_FREELIST_PAG(pag, mp); >> delta = need > pag->pagf_flcount ? >> need - pag->pagf_flcount : 0; >> longest = (pag->pagf_longest > delta) ? 
>> (pag->pagf_longest - delta) : >> (pag->pagf_flcount > 0 || >> pag->pagf_longest > 0); >> >> Perhaps we could macroize/inline-function it? > > Sure. I'll do that as a separate patch, though. Cool. > >> It confused me in _xfs_filestream_pick_ag() when I was trying >> to understand it and so could do with a comment for it too. >> As I said then, I don't like the way it uses a boolean as >> the number of blocks, in the case when the longest extent >> is smaller than the excess over the freelist which >> the freespace-btree-splits overhead needs. > > Actually, the logic statement is correct. If we have a delta greater > than the longest extent, we cannot find out what the next longest > extent is without searching the btree. Hence we assume that the > longest extent is a single block, which means that if we have free > extents in the tree (pag->pagf_longest > 0) or we have blocks in the > freelist (pag->pagf_flcount > 0) we are guaranteed to be able to > allocate a single block if there is space available. So the logic is: > > longest = 0; > if pag->pagf_longest > delta > longest = pag->pagf_longest - delta; > else if pag->pagf_flcount > 0 > longest = 1; > else if pag->pagf_longest > 0 > longest = 1; > > And the above is simply more compact. > I didn't say it didn't give a reasonable answer - it's just that I don't like it using a boolean result for the number of blocks being 1. It's a style thing. I think it is hacky. I.e. I'd prefer (pag->pagf_flcount > 0 || pag->pagf_longest > 0) ? 1 : 0, but then that gets too long-winded with the existing ternary, and your above code is clearer, grouping the last 2 conditions into one "or". The existing code "hides" how we want 1 block in that case, IMHO. Hmmm.... though if all we have is freelist (pagf_flcount) space, then I didn't know we could dip into it here. And I'm not really sure what happens if delta is too big for our longest and it actually needs this space for splits - what happens then - do we die further on? 
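[For reference, the factored-out helper Dave agrees to do as a separate patch could look something like this; the name and types are illustrative only, not the eventual XFS code, and it spells out the boolean-as-block-count case Tim objects to.]

```c
/* Estimate the longest extent still allocatable in an AG, given the
 * AG's longest recorded free extent (pagf_longest), its freelist
 * length (pagf_flcount), and the minimum freelist length (need).
 * When delta exceeds pagf_longest we cannot know the next-longest
 * extent without a btree walk, so assume one block is allocatable
 * if any free space exists at all. */
static unsigned long xfs_ag_longest_free(unsigned long pagf_longest,
					 unsigned long pagf_flcount,
					 unsigned long need)
{
	/* Blocks the freelist still needs to take from free space. */
	unsigned long delta = need > pagf_flcount ? need - pagf_flcount : 0;

	if (pagf_longest > delta)
		return pagf_longest - delta;
	/* Explicit "? 1 : 0" rather than a bare boolean result. */
	return (pagf_flcount > 0 || pagf_longest > 0) ? 1 : 0;
}
```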
--Tim From owner-xfs@oss.sgi.com Tue Jun 26 07:10:15 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 07:10:19 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.0 required=5.0 tests=AWL,BAYES_99,SPF_HELO_PASS, WHOIS_MYPRIVREG autolearn=no version=3.2.0-pre1-r499012 Received: from kuber.nabble.com (kuber.nabble.com [216.139.236.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QEADtL032028 for ; Tue, 26 Jun 2007 07:10:15 -0700 Received: from isper.nabble.com ([192.168.236.156]) by kuber.nabble.com with esmtp (Exim 4.63) (envelope-from ) id 1I3BkE-0008Lt-VW for linux-xfs@oss.sgi.com; Tue, 26 Jun 2007 07:10:14 -0700 Message-ID: <11306454.post@talk.nabble.com> Date: Tue, 26 Jun 2007 07:10:14 -0700 (PDT) From: Sandy1 To: linux-xfs@oss.sgi.com Subject: Re: Wrong Data Pointer-XFS File system In-Reply-To: <20070625232817.GC31489@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Nabble-From: sundeep.saini@rediffmail.com References: <11246839.post@talk.nabble.com> <20070625232817.GC31489@sgi.com> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11928 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sundeep.saini@rediffmail.com Precedence: bulk X-list: xfs Hi David, I am already using the FSB format; I am using it to parse the absolute block number. As we know, an extent record is 128 bits in size and uses the following packed layout: LSB... bits 0-20 give # blocks, bits 21 to 72 (52) give the absolute block number, bits 73 to 126 (54) give the logical file block offset, MSB (last bit) is a flag. The value I am getting as the absolute no. is in FSB, and after multiplying by the no. of sectors in an FS block I am trying to jump to that value (pointer location). 
But only in the 0th AG am I able to get the data. Afterwards I am not able to reach the data in the next AGs. I also read in the documentation that "sb_agblklog" --> this value is used to generate inode numbers and absolute block numbers defined in extent maps. How can I use this value in interpreting the absolute block number? Is the above parsing method right, or is there some other calculation? Please guide me. Regards Sandy David Chinner wrote: > > On Thu, Jun 21, 2007 at 11:23:35PM -0700, Sandy1 wrote: >> >> Hi, >> I am Using SuSe 10.0 with Xfs file system. I am working on a File system >> based project. During the initial research in On disk layout of File >> system >> , i found data pointer problem. I was not able to get the data location >> as >> per pointed by "absolute block no." in "xfs_bmbt_rec" extent pointers. > > The xfs_bmbt_rec holds the address in FSB format, not DADDR format. > >> When i create any file in 0th (zero`th) AG in that case i am able to >> reach >> on proper location by using "absolute block no." pointer. >> >> But when i create any file in 1st or in 2nd AG and so on. I never got the >> file data at the location pointed by "absolute block no.". I always found >> the file data before the the pointed address. > > FSB notation is sparse representation. > >> I am not getting any value in Superblock that tells me about difference >> in >> pointer location with actual data. >> >> This value always becomes multiple of the AG number. >> >> Please help to get out from this problem. >> >> Is there any other calculation for finding the data locations. > > man xfs_db. Search for "convert". > > Cheers, > > Dave. > -- > Dave Chinner > Principal Engineer > SGI Australian Software Group > > > > -- View this message in context: http://www.nabble.com/Wrong-Data-Pointer-XFS-File-system-tf3963002.html#a11306454 Sent from the linux-xfs mailing list archive at Nabble.com. 
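[Dave's "FSB notation is sparse representation" is the key point in this exchange: an FSB packs the AG number into the bits above sb_agblklog, while AGs are only sb_agblocks apart on disk, so a straight multiplication overshoots by an amount that grows with the AG number. A sketch of the conversion, with field names following the on-disk documentation; treat the arithmetic as illustrative and cross-check it with xfs_db's "convert" command.]

```c
#include <stdint.h>

/* Split a sparse FSB number into AG number and block-within-AG. */
static uint64_t fsb_to_agno(uint64_t fsb, int agblklog)
{
	return fsb >> agblklog;
}

static uint64_t fsb_to_agbno(uint64_t fsb, int agblklog)
{
	return fsb & ((1ULL << agblklog) - 1);
}

/* Linear filesystem block: AGs are sb_agblocks apart on disk, which
 * is at most (and usually less than) 1 << sb_agblklog.  That gap is
 * why the data appears "before" the naively computed address, by an
 * amount proportional to the AG number. */
static uint64_t fsb_to_linear_block(uint64_t fsb, int agblklog,
				    uint64_t agblocks)
{
	return fsb_to_agno(fsb, agblklog) * agblocks +
	       fsb_to_agbno(fsb, agblklog);
}
```

[The linear block number then scales to a sector address by the sectors-per-block factor, as Sandy is already doing.]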
From owner-xfs@oss.sgi.com Tue Jun 26 07:44:12 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 07:44:19 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=3.0 required=5.0 tests=AWL,BAYES_60,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QEi9tL006163 for ; Tue, 26 Jun 2007 07:44:12 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 1248B1C076ECD; Tue, 26 Jun 2007 10:44:10 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 0E675401992E; Tue, 26 Jun 2007 10:44:10 -0400 (EDT) Date: Tue, 26 Jun 2007 10:44:10 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: linux-kernel@vger.kernel.org cc: xfs@oss.sgi.com Subject: Does anyone have a benchmark of chunk sizes with SW RAID5? Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11929 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs I am running one with 64,128,256,512,1024,2048,4096,8192,16384,32768,65536 if it accepts up that high, with XFS and _ALL_ default mount and filesystem settings, to target this specific tunable. p34:~# /usr/bin/time ./benchraid.sh Tue Jun 26 10:25:21 EDT 2007: Creating RAID5 array with 64 chunk size... Tue Jun 26 10:25:21 EDT 2007: Increasing RAID rebuild speed... Tue Jun 26 10:25:21 EDT 2007: RAID is still building... Tue Jun 26 10:35:21 EDT 2007: RAID is still building... 
From owner-xfs@oss.sgi.com Tue Jun 26 08:35:05 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 08:35:10 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mtagate5.uk.ibm.com (mtagate5.uk.ibm.com [195.212.29.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QFZ2tL019957 for ; Tue, 26 Jun 2007 08:35:04 -0700 Received: from d06nrmr1407.portsmouth.uk.ibm.com (d06nrmr1407.portsmouth.uk.ibm.com [9.149.38.185]) by mtagate5.uk.ibm.com (8.13.8/8.13.8) with ESMTP id l5QFF6HO082038 for ; Tue, 26 Jun 2007 15:15:06 GMT Received: from d06av02.portsmouth.uk.ibm.com (d06av02.portsmouth.uk.ibm.com [9.149.37.228]) by d06nrmr1407.portsmouth.uk.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5QFF6Kj2953434 for ; Tue, 26 Jun 2007 16:15:06 +0100 Received: from d06av02.portsmouth.uk.ibm.com (loopback [127.0.0.1]) by d06av02.portsmouth.uk.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5QFF5hp026055 for ; Tue, 26 Jun 2007 16:15:06 +0100 Received: from localhost (dyn-9-152-198-51.boeblingen.de.ibm.com [9.152.198.51]) by d06av02.portsmouth.uk.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5QFF5Qs026047; Tue, 26 Jun 2007 16:15:05 +0100 Date: Tue, 26 Jun 2007 17:15:05 +0200 From: Heiko Carstens To: "Amit K. 
Arora" Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 2/7][TAKE5] fallocate() on s390(x) Message-ID: <20070626151505.GA15160@osiris.boeblingen.de.ibm.com> References: <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134255.GC1951@amitarora.in.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625134255.GC1951@amitarora.in.ibm.com> User-Agent: mutt-ng/devel-r804 (Linux) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11931 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: heiko.carstens@de.ibm.com Precedence: bulk X-list: xfs > Index: linux-2.6.22-rc4/arch/s390/kernel/syscalls.S > =================================================================== > --- linux-2.6.22-rc4.orig/arch/s390/kernel/syscalls.S 2007-06-11 16:16:01.000000000 -0700 > +++ linux-2.6.22-rc4/arch/s390/kernel/syscalls.S 2007-06-11 16:27:29.000000000 -0700 > @@ -322,6 +322,7 @@ > SYSCALL(sys_getcpu,sys_getcpu,sys_getcpu_wrapper) > SYSCALL(sys_epoll_pwait,sys_epoll_pwait,compat_sys_epoll_pwait_wrapper) > SYSCALL(sys_utimes,sys_utimes,compat_sys_utimes_wrapper) > +SYSCALL(s390_fallocate,sys_fallocate,sys_fallocate_wrapper) > NI_SYSCALL /* 314 sys_fallocate */ You need to remove the NI_SYSCALL line. Otherwise all following entries will be wrong. 
> SYSCALL(sys_utimensat,sys_utimensat,compat_sys_utimensat_wrapper) /* 315 */ > SYSCALL(sys_signalfd,sys_signalfd,compat_sys_signalfd_wrapper) > Index: linux-2.6.22-rc4/include/asm-s390/unistd.h > =================================================================== > --- linux-2.6.22-rc4.orig/include/asm-s390/unistd.h 2007-06-11 16:16:01.000000000 -0700 > +++ linux-2.6.22-rc4/include/asm-s390/unistd.h 2007-06-11 16:27:29.000000000 -0700 > @@ -256,7 +256,8 @@ > #define __NR_signalfd 316 > #define __NR_timerfd 317 > #define __NR_eventfd 318 > -#define NR_syscalls 319 > +#define __NR_fallocate 319 > +#define NR_syscalls 320 Erm... no. You use slot 314 in the syscall table but assign number 319. That won't work. Please use 314 for both. I assume this got broken when updating to newer kernel versions. From owner-xfs@oss.sgi.com Tue Jun 26 08:34:16 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 08:34:21 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.clusterfs.com (mail.clusterfs.com [206.168.112.78]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QFYEtL019761 for ; Tue, 26 Jun 2007 08:34:15 -0700 Received: from localhost.adilger.int (CPE0080c816aec8-CM0011ae013d40.cpe.net.cable.rogers.com [74.122.210.125]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.clusterfs.com (Postfix) with ESMTP id 74ACA7BA327; Tue, 26 Jun 2007 09:34:15 -0600 (MDT) Received: by localhost.adilger.int (Postfix, from userid 1000) id 57B4F3FB4; Tue, 26 Jun 2007 11:34:13 -0400 (EDT) Date: Tue, 26 Jun 2007 11:34:13 -0400 From: Andreas Dilger To: "Amit K. 
Arora" Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070626153413.GC6652@schatzie.adilger.int> Mail-Followup-To: "Amit K. Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com References: <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625150320.GA8686@amitarora.in.ibm.com> <20070625214626.GJ5181@schatzie.adilger.int> <20070626103247.GA19870@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070626103247.GA19870@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-GPG-Key: 1024D/0D35BED6 X-GPG-Fingerprint: 7A37 5D79 BF1B CECA D44F 8A29 A488 39F5 0D35 BED6 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11930 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: adilger@clusterfs.com Precedence: bulk X-list: xfs On Jun 26, 2007 16:02 +0530, Amit K. Arora wrote: > On Mon, Jun 25, 2007 at 03:46:26PM -0600, Andreas Dilger wrote: > > Can you clarify - what is the current behaviour when ENOSPC (or some other > > error) is hit? Does it keep the current fallocate() or does it free it? > > Currently it is left on the file system implementation. In ext4, we do > not undo preallocation if some error (say, ENOSPC) is hit. Hence it may > end up with partial (pre)allocation. 
This is in line with dd and posix_fallocate, which also do not free the partially allocated space. Since I believe the XFS allocation ioctls do it the opposite way (free preallocated space on error), this should be encoded into the flags. Having it "filesystem dependent" just means that nobody will be happy. > > For FA_ZERO_SPACE - I'd think this would (IMHO) be the default - we > > don't want to expose uninitialized disk blocks to userspace. I'm not > > sure if this makes sense at all. > > I don't think we need to make it default - at least for filesystems which > have a mechanism to distinguish preallocated blocks from "regular" ones. What I mean is that any data read from the file should have the "appearance" of being zeroed (whether zeroes are actually written to disk or not). What I _think_ David is proposing is to allow fallocate() to return without marking the blocks even "uninitialized", so subsequent reads would return the old data from the disk. Cheers, Andreas -- Andreas Dilger Principal Software Engineer Cluster File Systems, Inc. 
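[The partial-allocation semantics under discussion are visible from userspace. A small sketch using posix_fallocate(3), the pre-existing interface mentioned above; whether a failing preallocation is rolled back is, as the thread notes, filesystem-dependent, so the sketch only reports what it finds.]

```c
#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Preallocate len bytes in a fresh temp file and return the resulting
 * file size (or a negative errno value).  On success, posix_fallocate
 * guarantees that writes into [0, len) will not fail with ENOSPC. */
static long long prealloc_size(off_t len)
{
	char path[] = "/tmp/prealloc-XXXXXX";
	int fd = mkstemp(path);
	if (fd < 0)
		return -1;

	int err = posix_fallocate(fd, 0, len);  /* returns 0 or an errno */

	struct stat st;
	fstat(fd, &st);
	close(fd);
	unlink(path);

	/* On error the file may still hold a partial allocation. */
	return err ? -(long long)err : (long long)st.st_size;
}
```

[st_blocks, rather than st_size, would show how much of the range is actually backed by storage.]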
From owner-xfs@oss.sgi.com Tue Jun 26 08:42:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 08:42:57 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.0 required=5.0 tests=AWL,BAYES_50,TRACKER_ID autolearn=no version=3.2.0-pre1-r499012 Received: from mail.clusterfs.com (mail.clusterfs.com [206.168.112.78]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QFgotL022535 for ; Tue, 26 Jun 2007 08:42:52 -0700 Received: from localhost.adilger.int (CPE0080c816aec8-CM0011ae013d40.cpe.net.cable.rogers.com [74.122.210.125]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.clusterfs.com (Postfix) with ESMTP id F26B17BA36C; Tue, 26 Jun 2007 09:42:51 -0600 (MDT) Received: by localhost.adilger.int (Postfix, from userid 1000) id 716063FB4; Tue, 26 Jun 2007 11:42:50 -0400 (EDT) Date: Tue, 26 Jun 2007 11:42:50 -0400 From: Andreas Dilger To: "Amit K. Arora" Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070626154250.GD6652@schatzie.adilger.int> Mail-Followup-To: "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com References: <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625215239.GK5181@schatzie.adilger.int> <20070626104546.GB19870@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070626104546.GB19870@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-GPG-Key: 1024D/0D35BED6 X-GPG-Fingerprint: 7A37 5D79 BF1B CECA D44F 8A29 A488 39F5 0D35 BED6 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11932 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: adilger@clusterfs.com Precedence: bulk X-list: xfs On Jun 26, 2007 16:15 +0530, Amit K. Arora wrote: > On Mon, Jun 25, 2007 at 03:52:39PM -0600, Andreas Dilger wrote: > > In XFS one of the (many) ALLOC modes is to zero existing data on allocate. > > For ext4 all this would mean is calling ext4_ext_mark_uninitialized() on > > each extent. For some workloads this would be much faster than truncate > > and reallocate of all the blocks in a file. > > In ext4, we already mark each extent having preallocated blocks as > uninitialized. This is done as part of following code (which is part of > patch 5/7) in ext4_ext_get_blocks() : What I meant is that with XFS_IOC_ALLOCSP the previously-written data is ZEROED OUT, unlike with fallocate() which leaves previously-written data alone and only allocates in holes. 
So, if you had a sparse file with some data in it: AAAAA BBBBBB fallocate() would allocate the holes: 00000AAAAA000000000BBBBBB00000000 XFS_IOC_ALLOCSP would overwrite everything: 000000000000000000000000000000000 In order to specify this for allocation, FA_FL_DEL_DATA would need to make sense for allocations (as well as for deallocation). This is fairly easy to do - just mark all of the existing extents as unallocated, and their data disappears. Cheers, Andreas -- Andreas Dilger Principal Software Engineer Cluster File Systems, Inc. From owner-xfs@oss.sgi.com Tue Jun 26 09:14:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 09:14:10 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.5 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.clusterfs.com (mail.clusterfs.com [206.168.112.78]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QGE1tL029882 for ; Tue, 26 Jun 2007 09:14:03 -0700 Received: from localhost.adilger.int (CPE0080c816aec8-CM0011ae013d40.cpe.net.cable.rogers.com [74.122.210.125]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.clusterfs.com (Postfix) with ESMTP id DEC587BA373; Tue, 26 Jun 2007 10:14:02 -0600 (MDT) Received: by localhost.adilger.int (Postfix, from userid 1000) id DD4043FB4; Tue, 26 Jun 2007 12:14:00 -0400 (EDT) Date: Tue, 26 Jun 2007 12:14:00 -0400 From: Andreas Dilger To: "Amit K. Arora" Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 7/7][TAKE5] ext4: support new modes Message-ID: <20070626161400.GE6652@schatzie.adilger.int> Mail-Followup-To: "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com References: <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625135051.GH1951@amitarora.in.ibm.com> <20070625215625.GL5181@schatzie.adilger.int> <20070626120751.GC19870@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070626120751.GC19870@amitarora.in.ibm.com> User-Agent: Mutt/1.4.1i X-GPG-Key: 1024D/0D35BED6 X-GPG-Fingerprint: 7A37 5D79 BF1B CECA D44F 8A29 A488 39F5 0D35 BED6 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11933 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: adilger@clusterfs.com Precedence: bulk X-list: xfs On Jun 26, 2007 17:37 +0530, Amit K. Arora wrote: > Hmm.. I am thinking of a scenario when the file system supports some > individual flags, but does not support a particular combination of them. > Just for example sake, assume we have FA_ZERO_SPACE mode also. Now, if a > file system supports FA_ZERO_SPACE, FA_ALLOCATE, FA_DEALLOCATE and > FA_RESV_SPACE; and no other mode (i.e. FA_UNRESV_SPACE is not supported > for some reason). This means that although we support FA_FL_DEALLOC, > FA_FL_KEEP_SIZE and FA_FL_DEL_DATA flags, but we do not support the > combination of all these flags (which is nothing but FA_UNRESV_SPACE). That is up to the filesystem to determine then. I just thought it should be clear to return an error for flags (or as you say combinations thereof) that the filesystem doesn't understand. 
That said, I'd think in most cases the flags are orthogonal, so if you support some combination of the flags (e.g. FA_FL_DEL_DATA, FA_FL_DEALLOC) then you will also support other combinations of those flags just from the way it is coded. > > I also thought another proposed flag was to determine whether mtime (and > > maybe ctime) is changed when doing prealloc/dealloc space? Default should > > probably be to change mtime/ctime, and have FA_FL_NO_MTIME. Someone else > > should decide if we want to allow changing the file w/o changing ctime, if > > that is required even though the file is not visibly changing. Maybe the > > ctime update should be implicit if the size or mtime are changing? > > Is it really required ? I mean, why should we allow users not to update > ctime/mtime even if the file metadata/data gets updated ? It sounds > a bit "unnatural" to me. > Is there any application scenario in your mind, when you suggest of > giving this flexibility to userspace ? One reason is that XFS does NOT update the mtime/ctime when doing the XFS_IOC_* allocation ioctls. > I think, modifying ctime/mtime should be dependent on the other flags. > E.g., if we do not zero out data blocks on allocation/deallocation, > update only ctime. Otherwise, update ctime and mtime both. I'm only being the advocate for requirements David Chinner has put forward due to existing behaviour in XFS. This is one of the reasons why I think the "flags" mechanism we now have - we can encode the various different behaviours in any way we want and leave it to the caller. Cheers, Andreas -- Andreas Dilger Principal Software Engineer Cluster File Systems, Inc. 
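[Amit's unsupported-combination concern fits the flags design discussed here: a filesystem can advertise a mask of supported flags and still deny specific combinations. A sketch with hypothetical bit values - the FA_FL_* names come from the thread, but the numbers and the deny-list approach are illustrative only.]

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical bit values for the flags discussed in the thread. */
#define FA_FL_DEALLOC   0x01u
#define FA_FL_KEEP_SIZE 0x02u
#define FA_FL_DEL_DATA  0x04u

static const unsigned int fa_supported =
	FA_FL_DEALLOC | FA_FL_KEEP_SIZE | FA_FL_DEL_DATA;

/* Combinations this (hypothetical) filesystem rejects even though each
 * flag is individually supported -- Amit's FA_UNRESV_SPACE example. */
static const unsigned int fa_denied[] = {
	FA_FL_DEALLOC | FA_FL_KEEP_SIZE | FA_FL_DEL_DATA,
};

static int check_fa_flags(unsigned int flags)
{
	/* Reject any flag outside the supported mask... */
	if (flags & ~fa_supported)
		return -EOPNOTSUPP;
	/* ...then any explicitly denied combination. */
	for (size_t i = 0; i < sizeof(fa_denied) / sizeof(fa_denied[0]); i++)
		if (flags == fa_denied[i])
			return -EOPNOTSUPP;
	return 0;
}
```

[Because the flags are otherwise orthogonal, the deny-list stays short, matching Andreas's expectation that most combinations fall out of the code for free.]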
From owner-xfs@oss.sgi.com Tue Jun 26 10:00:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 10:00:35 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.3 required=5.0 tests=AWL,BAYES_60,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QH0QtL005608 for ; Tue, 26 Jun 2007 10:00:30 -0700 Received: from liberator.sandeen.net (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 4780218011E86; Tue, 26 Jun 2007 12:00:25 -0500 (CDT) Message-ID: <46814628.5010907@sandeen.net> Date: Tue, 26 Jun 2007 12:00:24 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.0 (Macintosh/20070326) MIME-Version: 1.0 To: David Chinner CC: Christoph Hellwig , xfs-oss Subject: Re: [PATCH] simplify vnode tracing calls References: <467F5053.4040108@sandeen.net> <20070625062047.GB1307@infradead.org> <467FCA62.8080800@sandeen.net> <20070625235436.GD31489@sgi.com> In-Reply-To: <20070625235436.GD31489@sgi.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11934 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs David Chinner wrote: > > Yeah, keep the lower case names if possible. I can't think of a simpler > way of doing it, and it's not terribly ugly. I'm open to better > solutions, though..... > Howziss.... note a couple of __vn* callers due to "special" function names wanted... 
--------------- Simplify vnode tracing calls by embedding function name & return addr in the calling macro. Signed-off-by: Eric Sandeen Index: xfs-linux-clean/linux-2.4/xfs_aops.c =================================================================== --- xfs-linux-clean.orig/linux-2.4/xfs_aops.c +++ xfs-linux-clean/linux-2.4/xfs_aops.c @@ -964,7 +964,7 @@ xfs_vm_bmap( struct inode *inode = (struct inode *)mapping->host; bhv_vnode_t *vp = vn_from_inode(inode); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); bhv_vop_rwlock(vp, VRWLOCK_READ); bhv_vop_flush_pages(vp, (xfs_off_t)0, -1, 0, FI_REMAPF); Index: xfs-linux-clean/linux-2.4/xfs_ioctl.c =================================================================== --- xfs-linux-clean.orig/linux-2.4/xfs_ioctl.c +++ xfs-linux-clean/linux-2.4/xfs_ioctl.c @@ -702,7 +702,7 @@ xfs_ioctl( vp = vn_from_inode(inode); - vn_trace_entry(vp, "xfs_ioctl", (inst_t *)__return_address); + vn_trace_entry(vp); ip = XFS_BHVTOI(bdp); mp = ip->i_mount; Index: xfs-linux-clean/linux-2.4/xfs_super.c =================================================================== --- xfs-linux-clean.orig/linux-2.4/xfs_super.c +++ xfs-linux-clean/linux-2.4/xfs_super.c @@ -374,7 +374,7 @@ xfs_fs_write_inode( int error, flags = FLUSH_INODE; if (vp) { - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); if (sync) flags |= FLUSH_SYNC; error = bhv_vop_iflush(vp, flags); @@ -389,7 +389,7 @@ xfs_fs_clear_inode( { bhv_vnode_t *vp = vn_from_inode(inode); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); XFS_STATS_INC(vn_rele); XFS_STATS_INC(vn_remove); @@ -948,7 +948,7 @@ xfs_fs_read_super( goto fail_vnrele; if (xfs_fs_start_syncd(vfsp)) goto fail_vnrele; - vn_trace_exit(rootvp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_exit(rootvp); kmem_free(args, sizeof(*args)); return sb; Index: xfs-linux-clean/linux-2.4/xfs_vnode.c 
=================================================================== --- xfs-linux-clean.orig/linux-2.4/xfs_vnode.c +++ xfs-linux-clean/linux-2.4/xfs_vnode.c @@ -65,7 +65,7 @@ vn_initialize( vp->v_trace = ktrace_alloc(VNODE_TRACE_SIZE, KM_SLEEP); #endif /* XFS_VNODE_TRACE */ - vn_trace_exit(vp, "vn_initialize", (inst_t *)__return_address); + vn_trace_exit(vp); return vp; } @@ -118,7 +118,7 @@ vn_revalidate( bhv_vattr_t va; int error; - vn_trace_entry(vp, "vn_revalidate", (inst_t *)__return_address); + vn_trace_entry(vp); ASSERT(VNHEAD(vp) != NULL); va.va_mask = XFS_AT_STAT|XFS_AT_XFLAGS; @@ -168,13 +168,13 @@ vn_hold( * Vnode tracing code. */ void -vn_trace_entry(bhv_vnode_t *vp, char *func, inst_t *ra) +__vn_trace_entry(bhv_vnode_t *vp, char *func, inst_t *ra) { KTRACE_ENTER(vp, VNODE_KTRACE_ENTRY, func, 0, ra); } void -vn_trace_exit(bhv_vnode_t *vp, char *func, inst_t *ra) +__vn_trace_exit(bhv_vnode_t *vp, char *func, inst_t *ra) { KTRACE_ENTER(vp, VNODE_KTRACE_EXIT, func, 0, ra); } Index: xfs-linux-clean/linux-2.4/xfs_vnode.h =================================================================== --- xfs-linux-clean.orig/linux-2.4/xfs_vnode.h +++ xfs-linux-clean/linux-2.4/xfs_vnode.h @@ -566,21 +566,22 @@ static inline void vn_atime_to_time_t(st #define VNODE_KTRACE_REF 4 #define VNODE_KTRACE_RELE 5 -extern void vn_trace_entry(struct bhv_vnode *, char *, inst_t *); -extern void vn_trace_exit(struct bhv_vnode *, char *, inst_t *); +extern void __vn_trace_entry(struct bhv_vnode *, char *, inst_t *); +extern void __vn_trace_exit(struct bhv_vnode *, char *, inst_t *); extern void vn_trace_hold(struct bhv_vnode *, char *, int, inst_t *); extern void vn_trace_ref(struct bhv_vnode *, char *, int, inst_t *); extern void vn_trace_rele(struct bhv_vnode *, char *, int, inst_t *); -#define VN_TRACE(vp) \ - vn_trace_ref(vp, __FILE__, __LINE__, (inst_t *)__return_address) +#define vn_trace_entry(vp) \ + __vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address) +#define 
vn_trace_exit(vp) \ + __vn_trace_exit(vp, __FUNCTION__, (inst_t *)__return_address) #else -#define vn_trace_entry(a,b,c) -#define vn_trace_exit(a,b,c) +#define vn_trace_entry(a) +#define vn_trace_exit(a) #define vn_trace_hold(a,b,c,d) #define vn_trace_ref(a,b,c,d) #define vn_trace_rele(a,b,c,d) -#define VN_TRACE(vp) #endif #endif /* __XFS_VNODE_H__ */ Index: xfs-linux-clean/linux-2.6/xfs_aops.c =================================================================== --- xfs-linux-clean.orig/linux-2.6/xfs_aops.c +++ xfs-linux-clean/linux-2.6/xfs_aops.c @@ -1529,7 +1529,7 @@ xfs_vm_bmap( struct inode *inode = (struct inode *)mapping->host; bhv_vnode_t *vp = vn_from_inode(inode); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); bhv_vop_rwlock(vp, VRWLOCK_READ); bhv_vop_flush_pages(vp, (xfs_off_t)0, -1, 0, FI_REMAPF); bhv_vop_rwunlock(vp, VRWLOCK_READ); Index: xfs-linux-clean/linux-2.6/xfs_ioctl.c =================================================================== --- xfs-linux-clean.orig/linux-2.6/xfs_ioctl.c +++ xfs-linux-clean/linux-2.6/xfs_ioctl.c @@ -708,7 +708,7 @@ xfs_ioctl( vp = vn_from_inode(inode); - vn_trace_entry(vp, "xfs_ioctl", (inst_t *)__return_address); + vn_trace_entry(vp); ip = XFS_BHVTOI(bdp); mp = ip->i_mount; Index: xfs-linux-clean/linux-2.6/xfs_super.c =================================================================== --- xfs-linux-clean.orig/linux-2.6/xfs_super.c +++ xfs-linux-clean/linux-2.6/xfs_super.c @@ -415,7 +415,7 @@ xfs_fs_write_inode( int error = 0, flags = FLUSH_INODE; if (vp) { - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); if (sync) flags |= FLUSH_SYNC; error = bhv_vop_iflush(vp, flags); @@ -431,7 +431,7 @@ xfs_fs_clear_inode( { bhv_vnode_t *vp = vn_from_inode(inode); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); XFS_STATS_INC(vn_rele); XFS_STATS_INC(vn_remove); @@ -844,7 +844,7 @@ xfs_fs_fill_super( } if ((error = 
xfs_fs_start_syncd(vfsp))) goto fail_vnrele; - vn_trace_exit(rootvp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_exit(rootvp); kmem_free(args, sizeof(*args)); return 0; Index: xfs-linux-clean/linux-2.6/xfs_vnode.c =================================================================== --- xfs-linux-clean.orig/linux-2.6/xfs_vnode.c +++ xfs-linux-clean/linux-2.6/xfs_vnode.c @@ -99,7 +99,7 @@ vn_initialize( vp->v_trace = ktrace_alloc(VNODE_TRACE_SIZE, KM_SLEEP); #endif /* XFS_VNODE_TRACE */ - vn_trace_exit(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_exit(vp); return vp; } @@ -150,7 +150,7 @@ __vn_revalidate( { int error; - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); vattr->va_mask = XFS_AT_STAT | XFS_AT_XFLAGS; error = bhv_vop_getattr(vp, vattr, 0, NULL); if (likely(!error)) { @@ -207,13 +207,13 @@ vn_hold( * Vnode tracing code. */ void -vn_trace_entry(bhv_vnode_t *vp, const char *func, inst_t *ra) +__vn_trace_entry(bhv_vnode_t *vp, const char *func, inst_t *ra) { KTRACE_ENTER(vp, VNODE_KTRACE_ENTRY, func, 0, ra); } void -vn_trace_exit(bhv_vnode_t *vp, const char *func, inst_t *ra) +__vn_trace_exit(bhv_vnode_t *vp, const char *func, inst_t *ra) { KTRACE_ENTER(vp, VNODE_KTRACE_EXIT, func, 0, ra); } Index: xfs-linux-clean/linux-2.6/xfs_vnode.h =================================================================== --- xfs-linux-clean.orig/linux-2.6/xfs_vnode.h +++ xfs-linux-clean/linux-2.6/xfs_vnode.h @@ -581,21 +581,22 @@ static inline void vn_atime_to_time_t(bh #define VNODE_KTRACE_REF 4 #define VNODE_KTRACE_RELE 5 -extern void vn_trace_entry(struct bhv_vnode *, const char *, inst_t *); -extern void vn_trace_exit(struct bhv_vnode *, const char *, inst_t *); +extern void __vn_trace_entry(struct bhv_vnode *, const char *, inst_t *); +extern void __vn_trace_exit(struct bhv_vnode *, const char *, inst_t *); extern void vn_trace_hold(struct bhv_vnode *, char *, int, inst_t *); extern void vn_trace_ref(struct bhv_vnode 
*, char *, int, inst_t *); extern void vn_trace_rele(struct bhv_vnode *, char *, int, inst_t *); -#define VN_TRACE(vp) \ - vn_trace_ref(vp, __FILE__, __LINE__, (inst_t *)__return_address) +#define vn_trace_entry(vp) \ + __vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address) +#define vn_trace_exit(vp) \ + __vn_trace_exit(vp, __FUNCTION__, (inst_t *)__return_address) #else -#define vn_trace_entry(a,b,c) -#define vn_trace_exit(a,b,c) +#define vn_trace_entry(a) +#define vn_trace_exit(a) #define vn_trace_hold(a,b,c,d) #define vn_trace_ref(a,b,c,d) #define vn_trace_rele(a,b,c,d) -#define VN_TRACE(vp) #endif #endif /* __XFS_VNODE_H__ */ Index: xfs-linux-clean/xfs_iget.c =================================================================== --- xfs-linux-clean.orig/xfs_iget.c +++ xfs-linux-clean/xfs_iget.c @@ -268,7 +268,7 @@ again: goto again; } - vn_trace_exit(vp, "xfs_iget.alloc", + __vn_trace_exit(vp, "xfs_iget.alloc", (inst_t *)__return_address); XFS_STATS_INC(xs_ig_found); @@ -328,7 +328,7 @@ finish_inode: xfs_ilock(ip, lock_flags); xfs_iflags_clear(ip, XFS_ISTALE); - vn_trace_exit(vp, "xfs_iget.found", + __vn_trace_exit(vp, "xfs_iget.found", (inst_t *)__return_address); goto return_ip; } @@ -353,7 +353,7 @@ finish_inode: if (error) return error; - vn_trace_exit(vp, "xfs_iget.alloc", (inst_t *)__return_address); + __vn_trace_exit(vp, "xfs_iget.alloc", (inst_t *)__return_address); xfs_inode_lock_init(ip, vp); xfs_iocore_inode_init(ip); @@ -629,7 +629,7 @@ xfs_iput(xfs_inode_t *ip, { bhv_vnode_t *vp = XFS_ITOV(ip); - vn_trace_entry(vp, "xfs_iput", (inst_t *)__return_address); + vn_trace_entry(vp); xfs_iunlock(ip, lock_flags); VN_RELE(vp); } @@ -644,7 +644,7 @@ xfs_iput_new(xfs_inode_t *ip, bhv_vnode_t *vp = XFS_ITOV(ip); struct inode *inode = vn_to_inode(vp); - vn_trace_entry(vp, "xfs_iput_new", (inst_t *)__return_address); + vn_trace_entry(vp); if ((ip->i_d.di_mode == 0)) { ASSERT(!xfs_iflags_test(ip, XFS_IRECLAIMABLE)); Index: xfs-linux-clean/xfs_rename.c 
=================================================================== --- xfs-linux-clean.orig/xfs_rename.c +++ xfs-linux-clean/xfs_rename.c @@ -249,8 +249,8 @@ xfs_rename( int target_namelen = VNAMELEN(target_vname); src_dir_vp = BHV_TO_VNODE(src_dir_bdp); - vn_trace_entry(src_dir_vp, "xfs_rename", (inst_t *)__return_address); - vn_trace_entry(target_dir_vp, "xfs_rename", (inst_t *)__return_address); + vn_trace_entry(src_dir_vp); + vn_trace_entry(target_dir_vp); /* * Find the XFS behavior descriptor for the target directory Index: xfs-linux-clean/xfs_utils.c =================================================================== --- xfs-linux-clean.orig/xfs_utils.c +++ xfs-linux-clean/xfs_utils.c @@ -76,7 +76,7 @@ xfs_dir_lookup_int( int error; dir_vp = BHV_TO_VNODE(dir_bdp); - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(dir_vp); dp = XFS_BHVTOI(dir_bdp); Index: xfs-linux-clean/xfs_vnodeops.c =================================================================== --- xfs-linux-clean.orig/xfs_vnodeops.c +++ xfs-linux-clean/xfs_vnodeops.c @@ -92,7 +92,7 @@ xfs_getattr( bhv_vnode_t *vp; vp = BHV_TO_VNODE(bdp); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); ip = XFS_BHVTOI(bdp); mp = ip->i_mount; @@ -237,7 +237,7 @@ xfs_setattr( int need_iolock = 1; vp = BHV_TO_VNODE(bdp); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); if (vp->v_vfsp->vfs_flag & VFS_RDONLY) return XFS_ERROR(EROFS); @@ -907,8 +907,7 @@ xfs_access( xfs_inode_t *ip; int error; - vn_trace_entry(BHV_TO_VNODE(bdp), __FUNCTION__, - (inst_t *)__return_address); + vn_trace_entry(BHV_TO_VNODE(bdp)); ip = XFS_BHVTOI(bdp); xfs_ilock(ip, XFS_ILOCK_SHARED); @@ -951,7 +950,7 @@ xfs_readlink( xfs_buf_t *bp; vp = BHV_TO_VNODE(bdp); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); ip = XFS_BHVTOI(bdp); mp = ip->i_mount; @@ -1046,8 +1045,7 @@ xfs_fsync( int error; int 
log_flushed = 0, changed = 1; - vn_trace_entry(BHV_TO_VNODE(bdp), - __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(BHV_TO_VNODE(bdp)); ip = XFS_BHVTOI(bdp); @@ -1601,7 +1599,7 @@ xfs_inactive( int truncate; vp = BHV_TO_VNODE(bdp); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); ip = XFS_BHVTOI(bdp); @@ -1825,7 +1823,7 @@ xfs_lookup( bhv_vnode_t *dir_vp; dir_vp = BHV_TO_VNODE(dir_bdp); - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(dir_vp); dp = XFS_BHVTOI(dir_bdp); @@ -1876,7 +1874,7 @@ xfs_create( ASSERT(!*vpp); dir_vp = BHV_TO_VNODE(dir_bdp); - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(dir_vp); dp = XFS_BHVTOI(dir_bdp); mp = dp->i_mount; @@ -2370,7 +2368,7 @@ xfs_remove( int namelen; dir_vp = BHV_TO_VNODE(dir_bdp); - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(dir_vp); dp = XFS_BHVTOI(dir_bdp); mp = dp->i_mount; @@ -2416,7 +2414,7 @@ xfs_remove( dm_di_mode = ip->i_d.di_mode; - vn_trace_entry(XFS_ITOV(ip), __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(XFS_ITOV(ip)); ITRACE(ip); @@ -2541,7 +2539,7 @@ xfs_remove( */ xfs_refcache_purge_ip(ip); - vn_trace_exit(XFS_ITOV(ip), __FUNCTION__, (inst_t *)__return_address); + vn_trace_exit(XFS_ITOV(ip)); /* * Let interposed file systems know about removed links. @@ -2618,8 +2616,8 @@ xfs_link( int target_namelen; target_dir_vp = BHV_TO_VNODE(target_dir_bdp); - vn_trace_entry(target_dir_vp, __FUNCTION__, (inst_t *)__return_address); - vn_trace_entry(src_vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(target_dir_vp); + vn_trace_entry(src_vp); target_namelen = VNAMELEN(dentry); ASSERT(!VN_ISDIR(src_vp)); @@ -2818,7 +2816,7 @@ xfs_mkdir( /* Return through std_return after this point. 
*/ - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(dir_vp); mp = dp->i_mount; udqp = gdqp = NULL; @@ -3023,7 +3021,7 @@ xfs_rmdir( dp = XFS_BHVTOI(dir_bdp); mp = dp->i_mount; - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(dir_vp); if (XFS_FORCED_SHUTDOWN(XFS_BHVTOI(dir_bdp)->i_mount)) return XFS_ERROR(EIO); @@ -3259,8 +3257,7 @@ xfs_readdir( int error = 0; uint lock_mode; - vn_trace_entry(BHV_TO_VNODE(dir_bdp), __FUNCTION__, - (inst_t *)__return_address); + vn_trace_entry(BHV_TO_VNODE(dir_bdp)); dp = XFS_BHVTOI(dir_bdp); if (XFS_FORCED_SHUTDOWN(dp->i_mount)) @@ -3317,7 +3314,7 @@ xfs_symlink( ip = NULL; tp = NULL; - vn_trace_entry(dir_vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(dir_vp); mp = dp->i_mount; @@ -3608,8 +3605,7 @@ xfs_fid2( xfs_inode_t *ip; xfs_fid2_t *xfid; - vn_trace_entry(BHV_TO_VNODE(bdp), __FUNCTION__, - (inst_t *)__return_address); + vn_trace_entry(BHV_TO_VNODE(bdp)); ASSERT(sizeof(fid_t) >= sizeof(xfs_fid2_t)); xfid = (xfs_fid2_t *)fidp; @@ -3821,7 +3817,7 @@ xfs_reclaim( vp = BHV_TO_VNODE(bdp); ip = XFS_BHVTOI(bdp); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); ASSERT(!VN_MAPPED(vp)); @@ -4037,7 +4033,7 @@ xfs_alloc_file_space( int committed; int error; - vn_trace_entry(XFS_ITOV(ip), __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(XFS_ITOV(ip)); if (XFS_FORCED_SHUTDOWN(mp)) return XFS_ERROR(EIO); @@ -4308,7 +4304,7 @@ xfs_free_file_space( vp = XFS_ITOV(ip); mp = ip->i_mount; - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); if ((error = XFS_QM_DQATTACH(mp, ip, 0))) return error; @@ -4514,7 +4510,7 @@ xfs_change_file_space( bhv_vnode_t *vp; vp = BHV_TO_VNODE(bdp); - vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address); + vn_trace_entry(vp); ip = XFS_BHVTOI(bdp); mp = ip->i_mount; Index: xfs-linux-clean/linux-2.4/xfs_ksyms.c 
=================================================================== --- xfs-linux-clean.orig/linux-2.4/xfs_ksyms.c +++ xfs-linux-clean/linux-2.4/xfs_ksyms.c @@ -82,8 +82,8 @@ EXPORT_SYMBOL(ktrace_next); #ifdef XFS_VNODE_TRACE EXPORT_SYMBOL(vn_trace_ref); -EXPORT_SYMBOL(vn_trace_entry); -EXPORT_SYMBOL(vn_trace_exit); +EXPORT_SYMBOL(__vn_trace_entry); +EXPORT_SYMBOL(__vn_trace_exit); EXPORT_SYMBOL(vn_trace_hold); EXPORT_SYMBOL(vn_trace_rele); #endif Index: xfs-linux-clean/linux-2.6/xfs_ksyms.c =================================================================== --- xfs-linux-clean.orig/linux-2.6/xfs_ksyms.c +++ xfs-linux-clean/linux-2.6/xfs_ksyms.c @@ -82,8 +82,8 @@ EXPORT_SYMBOL(ktrace_next); #ifdef XFS_VNODE_TRACE EXPORT_SYMBOL(vn_trace_ref); -EXPORT_SYMBOL(vn_trace_entry); -EXPORT_SYMBOL(vn_trace_exit); +EXPORT_SYMBOL(__vn_trace_entry); +EXPORT_SYMBOL(__vn_trace_exit); EXPORT_SYMBOL(vn_trace_hold); EXPORT_SYMBOL(vn_trace_rele); #endif From owner-xfs@oss.sgi.com Tue Jun 26 10:38:24 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 10:38:29 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.0 required=5.0 tests=BAYES_80 autolearn=no version=3.2.0-pre1-r499012 Received: from stlx01.stz-softwaretechnik.com (stz-softwaretechnik.de [217.160.223.211]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QHcLtL013607 for ; Tue, 26 Jun 2007 10:38:24 -0700 Received: from rg by stlx01.stz-softwaretechnik.com with local (Exim 3.36 #1 (Debian)) id 1I3EfR-0004HN-00 for ; Tue, 26 Jun 2007 19:17:29 +0200 Date: Tue, 26 Jun 2007 19:17:19 +0200 From: Ralf Gross To: xfs-oss Subject: reasonable xfs fs size for 30-100 TB? 
Message-ID: <20070626171719.GD32546@p15145560.pureserver.info> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.9i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11935 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: Ralf-Lists@ralfgross.de Precedence: bulk X-list: xfs Hi, we are about to buy some large (at least for us) RAID arrays (Overland/Xyratex 48x S-SATA). We'll start with ~30 TB and end with ~100 TB. The data will mainly be video data, each file with a size of 2 GB + some meta data. The RAID will be attached by Fibre Channel to a Debian Etch server with kernel 2.6.18-amd64 and 6 GB RAM (xfsprogs 2.8.11-1). My main concern is the amount of RAM I need for a fsck of the xfs fs. Last time I searched for the xfs requirements, I found the rule of thumb: 2 GB RAM for 1 TB of disk storage + some RAM per x inodes. Last year I deployed one server with a 4 TB xfs fs which has been running absolutely stable since then. I don't want to create _one_ big xfs fs, but I also don't want to end up with 20-30 fs each with 4 TB of size. The question is, what is a reasonable fs size with 4-6 GB of RAM? Will it be sufficient to add 15 GB of swap for a 10 TB fs? This will certainly slow things down, but will it work? Only a few clients will access the data on this server. There will be 3 streams of data with ~17MB/s which the server should be able to provide over GbE. Performance is not the primary goal. There will also be a backup to tape (ok, _many_ tapes...). Any thoughts or hints? 
Ralf From owner-xfs@oss.sgi.com Tue Jun 26 12:09:37 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 12:09:41 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.6 required=5.0 tests=AWL,BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from e35.co.us.ibm.com (e35.co.us.ibm.com [32.97.110.153]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QJ9atL003504 for ; Tue, 26 Jun 2007 12:09:37 -0700 Received: from d03relay04.boulder.ibm.com (d03relay04.boulder.ibm.com [9.17.195.106]) by e35.co.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5QJ9bDo032428 for ; Tue, 26 Jun 2007 15:09:37 -0400 Received: from d03av02.boulder.ibm.com (d03av02.boulder.ibm.com [9.17.195.168]) by d03relay04.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5QJ9bPH064428 for ; Tue, 26 Jun 2007 13:09:37 -0600 Received: from d03av02.boulder.ibm.com (loopback [127.0.0.1]) by d03av02.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5QJ9bBN003734 for ; Tue, 26 Jun 2007 13:09:37 -0600 Received: from amitarora.in.ibm.com ([9.126.238.78]) by d03av02.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5QJ9Zsh003576; Tue, 26 Jun 2007 13:09:36 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id EF98918B996; Wed, 27 Jun 2007 00:39:45 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5QJ9irD020440; Wed, 27 Jun 2007 00:39:44 +0530 Date: Wed, 27 Jun 2007 00:39:44 +0530 From: "Amit K. 
Arora" To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070626190944.GA13324@amitarora.in.ibm.com> References: <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625150320.GA8686@amitarora.in.ibm.com> <20070625214626.GJ5181@schatzie.adilger.int> <20070626103247.GA19870@amitarora.in.ibm.com> <20070626153413.GC6652@schatzie.adilger.int> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070626153413.GC6652@schatzie.adilger.int> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11936 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs On Tue, Jun 26, 2007 at 11:34:13AM -0400, Andreas Dilger wrote: > On Jun 26, 2007 16:02 +0530, Amit K. Arora wrote: > > On Mon, Jun 25, 2007 at 03:46:26PM -0600, Andreas Dilger wrote: > > > Can you clarify - what is the current behaviour when ENOSPC (or some other > > > error) is hit? Does it keep the current fallocate() or does it free it? > > > > Currently it is left on the file system implementation. In ext4, we do > > not undo preallocation if some error (say, ENOSPC) is hit. Hence it may > > end up with partial (pre)allocation. This is inline with dd and > > posix_fallocate, which also do not free the partially allocated space. > > Since I believe the XFS allocation ioctls do it the opposite way (free > preallocated space on error) this should be encoded into the flags. 
> Having it "filesystem dependent" just means that nobody will be happy. Ok, got your point. Maybe we can have a flag for this, as you suggested. But, default behavior IMHO should be _not_ to undo partial allocation (thus the file system will have the option of supporting this flag or not and it will be in line with posix_fallocate; XFS will obviously like to support this flag, in line with its existing behavior). > > > For FA_ZERO_SPACE - I'd think this would (IMHO) be the default - we > > > don't want to expose uninitialized disk blocks to userspace. I'm not > > > sure if this makes sense at all. > > > > I don't think we need to make it default - at least for filesystems which > > have a mechanism to distinguish preallocated blocks from "regular" ones. > > What I mean is that any data read from the file should have the "appearance" > of being zeroed (whether zeroes are actually written to disk or not). What > I _think_ David is proposing is to allow fallocate() to return without > marking the blocks even "uninitialized" and subsequent reads would return > the old data from the disk. I can't think of a good reason for this (i.e. returning stale data from preallocated blocks). It is in fact a security issue to me. Anyhow, this may still be beneficial for file systems which have noticeable overhead in marking the blocks "uninitialized/preallocated". Can you or David please throw some light on how this option might really be helpful? Thanks! 
-- Regards, Amit Arora From owner-xfs@oss.sgi.com Tue Jun 26 12:12:07 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 12:12:12 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from e36.co.us.ibm.com (e36.co.us.ibm.com [32.97.110.154]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QJC6tL004465 for ; Tue, 26 Jun 2007 12:12:07 -0700 Received: from d03relay02.boulder.ibm.com (d03relay02.boulder.ibm.com [9.17.195.227]) by e36.co.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5QJC7Cq018725 for ; Tue, 26 Jun 2007 15:12:07 -0400 Received: from d03av02.boulder.ibm.com (d03av02.boulder.ibm.com [9.17.195.168]) by d03relay02.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5QJC7QR260012 for ; Tue, 26 Jun 2007 13:12:07 -0600 Received: from d03av02.boulder.ibm.com (loopback [127.0.0.1]) by d03av02.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5QJC7Tv016875 for ; Tue, 26 Jun 2007 13:12:07 -0600 Received: from amitarora.in.ibm.com ([9.126.238.78]) by d03av02.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5QJC5dn016807; Tue, 26 Jun 2007 13:12:06 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id 3A9DD18B996; Wed, 27 Jun 2007 00:42:16 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5QJCFUt021786; Wed, 27 Jun 2007 00:42:15 +0530 Date: Wed, 27 Jun 2007 00:42:15 +0530 From: "Amit K. 
Arora" To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070626191215.GB13324@amitarora.in.ibm.com> References: <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625215239.GK5181@schatzie.adilger.int> <20070626104546.GB19870@amitarora.in.ibm.com> <20070626154250.GD6652@schatzie.adilger.int> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070626154250.GD6652@schatzie.adilger.int> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11937 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs On Tue, Jun 26, 2007 at 11:42:50AM -0400, Andreas Dilger wrote: > On Jun 26, 2007 16:15 +0530, Amit K. Arora wrote: > > On Mon, Jun 25, 2007 at 03:52:39PM -0600, Andreas Dilger wrote: > > > In XFS one of the (many) ALLOC modes is to zero existing data on allocate. > > > For ext4 all this would mean is calling ext4_ext_mark_uninitialized() on > > > each extent. For some workloads this would be much faster than truncate > > > and reallocate of all the blocks in a file. > > > > In ext4, we already mark each extent having preallocated blocks as > > uninitialized. 
This is done as part of the following code (which is part of > > patch 5/7) in ext4_ext_get_blocks(): > > What I meant is that with XFS_IOC_ALLOCSP the previously-written data > is ZEROED OUT, unlike with fallocate() which leaves previously-written > data alone and only allocates in holes. > > In order to specify this for allocation, FA_FL_DEL_DATA would need to make > sense for allocations (as well as the deallocation). This is fairly easy > to do - just mark all of the existing extents as unallocated, and their > data disappears. Ok, agreed. Will add the FA_ZERO_SPACE mode too. Thanks! -- Regards, Amit Arora From owner-xfs@oss.sgi.com Tue Jun 26 12:29:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 12:29:15 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from e32.co.us.ibm.com (e32.co.us.ibm.com [32.97.110.150]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QJT9tL012886 for ; Tue, 26 Jun 2007 12:29:11 -0700 Received: from d03relay02.boulder.ibm.com (d03relay02.boulder.ibm.com [9.17.195.227]) by e32.co.us.ibm.com (8.12.11.20060308/8.13.8) with ESMTP id l5QJO9Lq001093 for ; Tue, 26 Jun 2007 15:24:09 -0400 Received: from d03av04.boulder.ibm.com (d03av04.boulder.ibm.com [9.17.195.170]) by d03relay02.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5QJT2Kr243168 for ; Tue, 26 Jun 2007 13:29:05 -0600 Received: from d03av04.boulder.ibm.com (loopback [127.0.0.1]) by d03av04.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5QJT1Gk023909 for ; Tue, 26 Jun 2007 13:29:01 -0600 Received: from amitarora.in.ibm.com ([9.126.238.78]) by d03av04.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5QJSxcl023760; Tue, 26 Jun 2007 13:29:00 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with 
ESMTP id EC6C018B996; Wed, 27 Jun 2007 00:59:09 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5QJT8pL030686; Wed, 27 Jun 2007 00:59:08 +0530 Date: Wed, 27 Jun 2007 00:59:08 +0530 From: "Amit K. Arora" To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 7/7][TAKE5] ext4: support new modes Message-ID: <20070626192908.GC13324@amitarora.in.ibm.com> References: <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625135051.GH1951@amitarora.in.ibm.com> <20070625215625.GL5181@schatzie.adilger.int> <20070626120751.GC19870@amitarora.in.ibm.com> <20070626161400.GE6652@schatzie.adilger.int> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070626161400.GE6652@schatzie.adilger.int> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11938 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs On Tue, Jun 26, 2007 at 12:14:00PM -0400, Andreas Dilger wrote: > On Jun 26, 2007 17:37 +0530, Amit K. Arora wrote: > > Hmm.. I am thinking of a scenario when the file system supports some > > individual flags, but does not support a particular combination of them. > > Just for example sake, assume we have FA_ZERO_SPACE mode also. Now, if a > > file system supports FA_ZERO_SPACE, FA_ALLOCATE, FA_DEALLOCATE and > > FA_RESV_SPACE; and no other mode (i.e. FA_UNRESV_SPACE is not supported > > for some reason). 
This means that although we support FA_FL_DEALLOC, > > FA_FL_KEEP_SIZE and FA_FL_DEL_DATA flags, we do not support the > > combination of all these flags (which is nothing but FA_UNRESV_SPACE). > > That is up to the filesystem to determine then. I just thought it should > be clear to return an error for flags (or as you say combinations thereof) > that the filesystem doesn't understand. > > That said, I'd think in most cases the flags are orthogonal, so if you > support some combination of the flags (e.g. FA_FL_DEL_DATA, FA_FL_DEALLOC) > then you will also support other combinations of those flags just from > the way it is coded. Ok. > > > I also thought another proposed flag was to determine whether mtime (and > > > maybe ctime) is changed when doing prealloc/dealloc space? Default should > > > probably be to change mtime/ctime, and have FA_FL_NO_MTIME. Someone else > > > should decide if we want to allow changing the file w/o changing ctime, if > > > that is required even though the file is not visibly changing. Maybe the > > > ctime update should be implicit if the size or mtime are changing? > > > > Is it really required? I mean, why should we allow users not to update > > ctime/mtime even if the file metadata/data gets updated? It sounds > > a bit "unnatural" to me. > > Is there any application scenario in your mind, when you suggest > > giving this flexibility to userspace? > > One reason is that XFS does NOT update the mtime/ctime when doing the > XFS_IOC_* allocation ioctls. Hmm.. I personally will call it a bug in XFS code then. :) > > I think, modifying ctime/mtime should be dependent on the other flags. > > E.g., if we do not zero out data blocks on allocation/deallocation, > > update only ctime. Otherwise, update ctime and mtime both. > > I'm only being the advocate for requirements David Chinner has put > forward due to existing behaviour in XFS. 
This is one of the reasons > why I like the "flags" mechanism we now have - we can encode the > various different behaviours in any way we want and leave it to the > caller. I understand. Maybe we can confirm once more with David Chinner if this is really required. Will it really be a compatibility issue if new XFS preallocations (i.e. via fallocate) update mtime/ctime? Will old applications really get affected? If yes, then it might be worth implementing - even though I personally don't like it. David, can you please confirm? Thanks! -- Regards, Amit Arora From owner-xfs@oss.sgi.com Tue Jun 26 12:39:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 12:39:17 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mtagate7.de.ibm.com (mtagate7.de.ibm.com [195.212.29.156]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5QJd7tL015401 for ; Tue, 26 Jun 2007 12:39:08 -0700 Received: from d12nrmr1607.megacenter.de.ibm.com (d12nrmr1607.megacenter.de.ibm.com [9.149.167.49]) by mtagate7.de.ibm.com (8.13.8/8.13.8) with ESMTP id l5QJd8t9091626 for ; Tue, 26 Jun 2007 19:39:08 GMT Received: from d12av04.megacenter.de.ibm.com (d12av04.megacenter.de.ibm.com [9.149.165.229]) by d12nrmr1607.megacenter.de.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5QJd79q1827034 for ; Tue, 26 Jun 2007 21:39:07 +0200 Received: from d12av04.megacenter.de.ibm.com (loopback [127.0.0.1]) by d12av04.megacenter.de.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5QJd7aJ022308 for ; Tue, 26 Jun 2007 21:39:07 +0200 Received: from localhost (ICON-9-164-137-3.megacenter.de.ibm.com [9.164.137.3]) by d12av04.megacenter.de.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5QJd7s1022305; Tue, 26 Jun 2007 21:39:07 +0200 Date: Tue, 26 Jun 2007 21:38:55 +0200 From: Heiko Carstens To: "Amit K. 
Arora" Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 1/7][TAKE5] fallocate() implementation on i386, x86_64 and powerpc Message-ID: <20070626193855.GA13727@osiris.ibm.com> References: <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134012.GB1951@amitarora.in.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625134012.GB1951@amitarora.in.ibm.com> User-Agent: mutt-ng/devel-r804 (Linux) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11939 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: heiko.carstens@de.ibm.com Precedence: bulk X-list: xfs > Index: linux-2.6.22-rc4/arch/powerpc/kernel/sys_ppc32.c > =================================================================== > --- linux-2.6.22-rc4.orig/arch/powerpc/kernel/sys_ppc32.c > +++ linux-2.6.22-rc4/arch/powerpc/kernel/sys_ppc32.c > @@ -773,6 +773,13 @@ asmlinkage int compat_sys_truncate64(con > return sys_truncate(path, (high << 32) | low); > } > > +asmlinkage long compat_sys_fallocate(int fd, int mode, u32 offhi, u32 offlo, > + u32 lenhi, u32 lenlo) > +{ > + return sys_fallocate(fd, mode, ((loff_t)offhi << 32) | offlo, > + ((loff_t)lenhi << 32) | lenlo); > +} > + > asmlinkage int compat_sys_ftruncate64(unsigned int fd, u32 reg4, unsigned long high, > unsigned long low) > { > Index: linux-2.6.22-rc4/arch/x86_64/ia32/ia32entry.S > 
=================================================================== > --- linux-2.6.22-rc4.orig/arch/x86_64/ia32/ia32entry.S > +++ linux-2.6.22-rc4/arch/x86_64/ia32/ia32entry.S > @@ -719,4 +719,5 @@ ia32_sys_call_table: > .quad compat_sys_signalfd > .quad compat_sys_timerfd > .quad sys_eventfd > + .quad sys_fallocate > ia32_syscall_end: Btw. this is also (still?) broken. x86_64 needs a compat syscall here. From owner-xfs@oss.sgi.com Tue Jun 26 16:15:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 16:15:07 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5QNEwtL001594 for ; Tue, 26 Jun 2007 16:15:00 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA08919; Wed, 27 Jun 2007 09:14:42 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5QNEaeW2161036; Wed, 27 Jun 2007 09:14:37 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5QNEVmC2160866; Wed, 27 Jun 2007 09:14:31 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 27 Jun 2007 09:14:31 +1000 From: David Chinner To: "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070626231431.GO31489@sgi.com> References: <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625150320.GA8686@amitarora.in.ibm.com> <20070625214626.GJ5181@schatzie.adilger.int> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625214626.GJ5181@schatzie.adilger.int> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11940 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

On Mon, Jun 25, 2007 at 03:46:26PM -0600, Andreas Dilger wrote:
> On Jun 25, 2007 20:33 +0530, Amit K. Arora wrote:
> > I have not implemented FA_FL_FREE_ENOSPC and FA_ZERO_SPACE flags yet, as
> > *suggested* by Andreas in http://lkml.org/lkml/2007/6/14/323 post.
> > If it is decided that these flags are also needed, I will update this
> > patch. Thanks!
>
> Can you clarify - what is the current behaviour when ENOSPC (or some other
> error) is hit? Does it keep the current fallocate() or does it free it?
>
> For FA_ZERO_SPACE - I'd think this would (IMHO) be the default - we
> don't want to expose uninitialized disk blocks to userspace. I'm not
> sure if this makes sense at all.

Someone on the XFS list had an interesting request - preallocated swap files.
You can't use unwritten extents for this because of sys_swapon()'s use of bmap() (XFS returns holes for reading unwritten extents), so we need a method of preallocating that does not zero the extent or mark it unwritten. i.e. FA_MKSWAP. I think this would be:

#define FA_FL_NO_ZERO_SPACE 0x08 /* default is to zero space */
#define FA_MKSWAP (FA_ALLOCATE | FA_FL_NO_ZERO_SPACE)

That way we can allocate large swap files that don't need zeroing in a single, fast operation, and hence potentially bring new swap space online without needing very much memory at all (i.e. should succeed in most near-OOM conditions).

Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group

From owner-xfs@oss.sgi.com Tue Jun 26 16:16:20 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 16:16:26 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5QNGGtL002191 for ; Tue, 26 Jun 2007 16:16:19 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA09015; Wed, 27 Jun 2007 09:16:07 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5QNG1eW2162416; Wed, 27 Jun 2007 09:16:02 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5QNFtZF2162354; Wed, 27 Jun 2007 09:15:55 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 27 Jun 2007 09:15:55 +1000 From: David Chinner To: "Amit K.
Arora" Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 0/6][TAKE5] fallocate system call Message-ID: <20070626231555.GP31489@sgi.com> References: <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625132810.GA1951@amitarora.in.ibm.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11941 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

On Mon, Jun 25, 2007 at 06:58:10PM +0530, Amit K. Arora wrote:
> 2) The above new patches (4/7 and 7/7) are based on the discussion
> between Andreas Dilger and David Chinner on the mode argument,
> when the latter posted a man page on fallocate.

Can you include the man page in this patch set, please? That way it can be kept up to date with the rest of the patch set.

Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jun 26 16:18:28 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 16:18:33 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5QNIOtL003026 for ; Tue, 26 Jun 2007 16:18:26 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA09345; Wed, 27 Jun 2007 09:18:13 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5QNI9eW2163288; Wed, 27 Jun 2007 09:18:09 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5QNI4pW2162777; Wed, 27 Jun 2007 09:18:04 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 27 Jun 2007 09:18:04 +1000 From: David Chinner To: "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070626231803.GQ31489@sgi.com> References: <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625150320.GA8686@amitarora.in.ibm.com> <20070625214626.GJ5181@schatzie.adilger.int> <20070626103247.GA19870@amitarora.in.ibm.com> <20070626153413.GC6652@schatzie.adilger.int> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070626153413.GC6652@schatzie.adilger.int> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11942 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 26, 2007 at 11:34:13AM -0400, Andreas Dilger wrote: > On Jun 26, 2007 16:02 +0530, Amit K. Arora wrote: > > On Mon, Jun 25, 2007 at 03:46:26PM -0600, Andreas Dilger wrote: > > > Can you clarify - what is the current behaviour when ENOSPC (or some other > > > error) is hit? Does it keep the current fallocate() or does it free it? > > > > Currently it is left on the file system implementation. In ext4, we do > > not undo preallocation if some error (say, ENOSPC) is hit. Hence it may > > end up with partial (pre)allocation. This is inline with dd and > > posix_fallocate, which also do not free the partially allocated space. > > Since I believe the XFS allocation ioctls do it the opposite way (free > preallocated space on error) this should be encoded into the flags. 
> Having it "filesystem dependent" just means that nobody will be happy.

No, XFS does not free preallocated space on error. It is up to the application to clean up.

> What I mean is that any data read from the file should have the "appearance"
> of being zeroed (whether zeroes are actually written to disk or not). What
> I _think_ David is proposing is to allow fallocate() to return without
> marking the blocks even "uninitialized" and subsequent reads would return
> the old data from the disk.

Correct, but for swap files that's not an issue - no user should be able to read them, and FA_MKSWAP would really need root privileges to execute.

Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group

From owner-xfs@oss.sgi.com Tue Jun 26 16:27:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 16:27:23 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5QNRFtL008703 for ; Tue, 26 Jun 2007 16:27:16 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA09904; Wed, 27 Jun 2007 09:27:00 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5QNQseW2166555; Wed, 27 Jun 2007 09:26:55 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5QNQnjq2157901; Wed, 27 Jun 2007 09:26:49 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 27 Jun 2007 09:26:49 +1000 From: David Chinner To: "Amit K.
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070626232649.GR31489@sgi.com> References: <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625215239.GK5181@schatzie.adilger.int> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070625215239.GK5181@schatzie.adilger.int> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11943 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

On Mon, Jun 25, 2007 at 03:52:39PM -0600, Andreas Dilger wrote:
> On Jun 25, 2007 19:15 +0530, Amit K. Arora wrote:
> > +#define FA_FL_DEALLOC 0x01 /* default is allocate */
> > +#define FA_FL_KEEP_SIZE 0x02 /* default is extend/shrink size */
> > +#define FA_FL_DEL_DATA 0x04 /* default is keep written data on DEALLOC */
> In XFS one of the (many) ALLOC modes is to zero existing data on allocate.

No, none of the XFS allocation modes do that.

XFS_IOC_ALLOCSP, which does write zeros to disk, only allocates and writes zeros in the range between the old file size and the new file size.

XFS_IOC_RESVSP, which allocates unwritten extents, only allocates where extents do not currently exist. It does not zero existing extents.

IOWs, you can't overwrite existing data with XFS preallocation.

Cheers, Dave.
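The flag scheme under debate reduces to simple bit tests. Below is a minimal userspace sketch using the FA_FL_* values quoted in this mail plus the FA_FL_NO_ZERO_SPACE/FA_MKSWAP proposal from earlier in the thread; FA_ALLOCATE's value of 0 is an assumption, and none of these names reached mainline (which later settled on FALLOC_FL_* flags):

```c
/* Hypothetical values following the proposal in this thread; only the
 * FA_FL_* bit values appear explicitly in the mails, FA_ALLOCATE == 0
 * is assumed. */
#define FA_ALLOCATE          0x00 /* assumed: allocate is the base mode */
#define FA_FL_DEALLOC        0x01 /* default is allocate */
#define FA_FL_KEEP_SIZE      0x02 /* default is extend/shrink size */
#define FA_FL_DEL_DATA       0x04 /* default is keep written data on DEALLOC */
#define FA_FL_NO_ZERO_SPACE  0x08 /* default is to zero space */

#define FA_MKSWAP (FA_ALLOCATE | FA_FL_NO_ZERO_SPACE)

/* A filesystem inspecting the mode would test bits, not equality: */
static int fa_wants_zeroing(int mode)
{
	return !(mode & FA_FL_NO_ZERO_SPACE);
}
```

With this encoding, FA_MKSWAP is just "allocate, but skip the zeroing guarantee", which is exactly the swap-file use case: the blocks only ever need to be written, never read back.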
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jun 26 16:33:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 16:33:22 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5QNXEtL010115 for ; Tue, 26 Jun 2007 16:33:16 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA10348; Wed, 27 Jun 2007 09:33:02 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5QNWveW2170680; Wed, 27 Jun 2007 09:32:58 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5QNWrVF2170293; Wed, 27 Jun 2007 09:32:53 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 27 Jun 2007 09:32:53 +1000 From: David Chinner To: "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070626233253.GS31489@sgi.com> References: <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625215239.GK5181@schatzie.adilger.int> <20070626104546.GB19870@amitarora.in.ibm.com> <20070626154250.GD6652@schatzie.adilger.int> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070626154250.GD6652@schatzie.adilger.int> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11944 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 26, 2007 at 11:42:50AM -0400, Andreas Dilger wrote: > On Jun 26, 2007 16:15 +0530, Amit K. Arora wrote: > > On Mon, Jun 25, 2007 at 03:52:39PM -0600, Andreas Dilger wrote: > > > In XFS one of the (many) ALLOC modes is to zero existing data on allocate. > > > For ext4 all this would mean is calling ext4_ext_mark_uninitialized() on > > > each extent. For some workloads this would be much faster than truncate > > > and reallocate of all the blocks in a file. > > > > In ext4, we already mark each extent having preallocated blocks as > > uninitialized. 
> > This is done as part of the following code (which is part of
> > patch 5/7) in ext4_ext_get_blocks() :
>
> What I meant is that with XFS_IOC_ALLOCSP the previously-written data
> is ZEROED OUT, unlike with fallocate() which leaves previously-written
> data alone and only allocates in holes.
>
> So, if you had a sparse file with some data in it:
>
> AAAAA BBBBBB
>
> fallocate() would allocate the holes:
>
> 00000AAAAA000000000BBBBBB00000000
>
> XFS_IOC_ALLOCSP would overwrite everything:
>
> 000000000000000000000000000000000

No, it wouldn't. XFS_IOC_ALLOCSP would give you:

AAAAA BBBBBB00000000

because it only allocates the space between the old EOF and the new EOF.

Graphic demonstration - write 4k @ 4k, 4k @ 16k, allocsp out to 32k:

budgie:~ # xfs_io -f \
> -c "pwrite 4096 4096" \
> -c "pwrite 16384 4096" \
> -c "bmap -vvp" \
> -c "allocsp 32768 0" \
> -c "bmap -vvp" \
> /mnt/test/alfred
wrote 4096/4096 bytes at offset 4096
4 KiB, 1 ops; 0.0000 sec (108.507 MiB/sec and 27777.7778 ops/sec)
wrote 4096/4096 bytes at offset 16384
4 KiB, 1 ops; 0.0000 sec (260.417 MiB/sec and 66666.6667 ops/sec)
/mnt/test/alfred:
 EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL
 0: [0..7]: hole 8
 1: [8..15]: 5226864..5226871 4 (1022160..1022167) 8
 2: [16..31]: hole 16
 3: [32..39]: 5226888..5226895 4 (1022184..1022191) 8
/mnt/test/alfred:
 EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL
 0: [0..7]: hole 8
 1: [8..15]: 5226864..5226871 4 (1022160..1022167) 8
 2: [16..31]: hole 16
 3: [32..63]: 5226888..5226919 4 (1022184..1022215) 32
budgie:~ #

Cheers, Dave.
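The fill-the-holes behaviour demonstrated with xfs_io above can also be observed from plain C with today's posix_fallocate(), which is broadly analogous for this purpose: it allocates only where blocks don't exist and leaves written data alone. A self-contained sketch (the file path is illustrative) that writes two 4k chunks at offsets 4k and 16k, preallocates out to 32k, and reports how many 512-byte blocks the preallocation added:

```c
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create a sparse file with 4k of data at offset 4k and at 16k (holes
 * elsewhere, mirroring the xfs_io session above), preallocate the full
 * 32k, and return how many 512-byte blocks the preallocation added.
 * Returns -1 on error or if the original data was disturbed. */
static long fill_holes_demo(const char *path)
{
	char buf[4096];
	struct stat before, after;
	int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);

	if (fd < 0)
		return -1;
	memset(buf, 'A', sizeof(buf));
	if (pwrite(fd, buf, sizeof(buf), 4096) != (ssize_t)sizeof(buf))
		goto fail;
	memset(buf, 'B', sizeof(buf));
	if (pwrite(fd, buf, sizeof(buf), 16384) != (ssize_t)sizeof(buf))
		goto fail;
	if (fsync(fd) != 0 || fstat(fd, &before) != 0)
		goto fail;

	/* Allocate blocks for the holes; existing extents are untouched. */
	if (posix_fallocate(fd, 0, 32768) != 0)
		goto fail;
	if (fstat(fd, &after) != 0)
		goto fail;

	/* The 'A's written before preallocation must still be there. */
	if (pread(fd, buf, sizeof(buf), 4096) != (ssize_t)sizeof(buf) ||
	    buf[0] != 'A' || buf[4095] != 'A')
		goto fail;
	close(fd);
	return (long)(after.st_blocks - before.st_blocks);
fail:
	close(fd);
	return -1;
}
```

On any filesystem with preallocation support, st_blocks grows by roughly the size of the former holes while the previously written data reads back unchanged.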
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jun 26 17:05:26 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 17:05:31 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5R05MtL015200 for ; Tue, 26 Jun 2007 17:05:24 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA11324; Wed, 27 Jun 2007 10:05:07 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5R051eW2181185; Wed, 27 Jun 2007 10:05:03 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5R04ugg2184430; Wed, 27 Jun 2007 10:04:56 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 27 Jun 2007 10:04:56 +1000 From: David Chinner To: "Amit K. 
Arora" Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 7/7][TAKE5] ext4: support new modes Message-ID: <20070627000456.GT31489@sgi.com> References: <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625135051.GH1951@amitarora.in.ibm.com> <20070625215625.GL5181@schatzie.adilger.int> <20070626120751.GC19870@amitarora.in.ibm.com> <20070626161400.GE6652@schatzie.adilger.int> <20070626192908.GC13324@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070626192908.GC13324@amitarora.in.ibm.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11945 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Wed, Jun 27, 2007 at 12:59:08AM +0530, Amit K. Arora wrote: > On Tue, Jun 26, 2007 at 12:14:00PM -0400, Andreas Dilger wrote: > > On Jun 26, 2007 17:37 +0530, Amit K. Arora wrote: > > > > I also thought another proposed flag was to determine whether mtime (and > > > > maybe ctime) is changed when doing prealloc/dealloc space? Default should > > > > probably be to change mtime/ctime, and have FA_FL_NO_MTIME. Someone else > > > > should decide if we want to allow changing the file w/o changing ctime, if > > > > that is required even though the file is not visibly changing. Maybe the > > > > ctime update should be implicit if the size or mtime are changing? > > > > > > Is it really required ? I mean, why should we allow users not to update > > > ctime/mtime even if the file metadata/data gets updated ? 
> > > It sounds
> > > a bit "unnatural" to me.
> > > Is there any application scenario in your mind, when you suggest of
> > > giving this flexibility to userspace ?
> >
> > One reason is that XFS does NOT update the mtime/ctime when doing the
> > XFS_IOC_* allocation ioctls.

Not totally correct.

XFS_IOC_ALLOCSP/FREESP change timestamps if they change the file size (via the truncate call made to change the file size). If they don't change the file size, then they are a no-op and should not change the timestamps.

XFS_IOC_RESVSP/UNRESVSP don't change timestamps just like they don't change file size. That is by design AFAICT so these calls can be used by HSM-type applications that don't want to change timestamps when punching out data blocks or preallocating new ones.

> Hmm.. I personally will call it a bug in XFS code then. :)

No, I'd call it useful. :)

> > > I think, modifying ctime/mtime should be dependent on the other flags.
> > > E.g., if we do not zero out data blocks on allocation/deallocation,
> > > update only ctime. Otherwise, update ctime and mtime both.
> >
> > I'm only being the advocate for requirements David Chinner has put
> > forward due to existing behaviour in XFS. This is one of the reasons
> > why I think the "flags" mechanism we now have - we can encode the
> > various different behaviours in any way we want and leave it to the
> > caller.
>
> I understand. Maybe we can confirm once more with David Chinner if this
> is really required. Will it really be a compatibility issue if new XFS
> preallocations (i.e. via fallocate) update mtime/ctime?

It should be left up to the filesystem to decide. Only the filesystem knows whether something changed and the timestamp should or should not be updated.

Cheers, Dave.
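The timestamp rules Dave describes form a small decision table, sketched below with illustrative names and types (this is not kernel code): size-changing ioctls touch timestamps only because the size changes, and RESVSP-style reservation never does.

```c
#include <sys/types.h>

/* Illustrative encoding of the XFS ioctl timestamp semantics described
 * above: ALLOCSP/FREESP update timestamps only via the size change made
 * by truncate; RESVSP/UNRESVSP never touch them (HSM use case). */
enum prealloc_style { STYLE_ALLOCSP, STYLE_RESVSP };

static int updates_timestamps(enum prealloc_style style,
                              off_t oldsize, off_t newsize)
{
	if (style == STYLE_RESVSP)
		return 0;              /* never, by design */
	return oldsize != newsize;     /* only when the size changes */
}
```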
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Jun 26 20:49:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Jun 2007 20:49:23 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.clusterfs.com (mail.clusterfs.com [206.168.112.78]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5R3nGtL029177 for ; Tue, 26 Jun 2007 20:49:17 -0700 Received: from localhost.adilger.int (CPE0080c816aec8-CM0011ae013d40.cpe.net.cable.rogers.com [74.122.210.125]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.clusterfs.com (Postfix) with ESMTP id 26E474E45A1; Tue, 26 Jun 2007 21:49:17 -0600 (MDT) Received: by localhost.adilger.int (Postfix, from userid 1000) id DB1AC4016; Tue, 26 Jun 2007 23:49:15 -0400 (EDT) Date: Tue, 26 Jun 2007 23:49:15 -0400 From: Andreas Dilger To: David Chinner Cc: "Amit K. Arora" , linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org, suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070627034915.GR6652@schatzie.adilger.int> Mail-Followup-To: David Chinner , "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org, suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com References: <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625150320.GA8686@amitarora.in.ibm.com> <20070625214626.GJ5181@schatzie.adilger.int> <20070626231431.GO31489@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070626231431.GO31489@sgi.com> User-Agent: Mutt/1.4.1i X-GPG-Key: 1024D/0D35BED6 X-GPG-Fingerprint: 7A37 5D79 BF1B CECA D44F 8A29 A488 39F5 0D35 BED6 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11946 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: adilger@clusterfs.com Precedence: bulk X-list: xfs On Jun 27, 2007 09:14 +1000, David Chinner wrote: > Someone on the XFs list had an interesting request - preallocated > swap files. You can't use unwritten extents for this because > of sys_swapon()s use of bmap() (XFS returns holes for reading > unwritten extents), so we need a method of preallocating that does > not zero or mark the extent unread. i.e. FA_MKSWAP. Is there a reason why unwritten extents return 0 to bmap()? This would seem to be the only impediment from using fallocated files for swap files. Maybe if FIEMAP was used by mkswap to get an "UNWRITTEN" flag back instead of "HOLE" it wouldn't be a problem. > That way we can allocate large swap files that don't need zeroing > in a single, fast operation, and hence potentially bring new > swap space online without needed very much memory at all (i.e. > should succeed in most near-OOM conditions). 
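The mkswap-side distinction Andreas suggests became expressible once FIEMAP was merged (which happened after this thread): unwritten preallocations come back as extents carrying FIEMAP_EXTENT_UNWRITTEN, while true holes simply don't appear in the mapping at all. A minimal sketch of the classification logic, operating on an extent record rather than issuing the FS_IOC_FIEMAP ioctl:

```c
#include <linux/fiemap.h>

/* How a swap-file setup tool could classify an extent returned by
 * FS_IOC_FIEMAP: an unwritten extent has real blocks on disk (fine for
 * swap, since swap blocks are written before they are read), whereas a
 * hole never shows up as an extent at all. */
enum ext_kind { EXT_WRITTEN, EXT_UNWRITTEN };

static enum ext_kind classify_extent(const struct fiemap_extent *fe)
{
	return (fe->fe_flags & FIEMAP_EXTENT_UNWRITTEN) ? EXT_UNWRITTEN
	                                                : EXT_WRITTEN;
}
```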
Cheers, Andreas -- Andreas Dilger Principal Software Engineer Cluster File Systems, Inc. From owner-xfs@oss.sgi.com Wed Jun 27 03:15:32 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Jun 2007 03:15:36 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.9 required=5.0 tests=AWL,BAYES_95 autolearn=no version=3.2.0-pre1-r499012 Received: from ug-out-1314.google.com (ug-out-1314.google.com [66.249.92.171]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5RAFTtL021569 for ; Wed, 27 Jun 2007 03:15:31 -0700 Received: by ug-out-1314.google.com with SMTP id z36so330750uge for ; Wed, 27 Jun 2007 03:15:29 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:subject:from:to:content-type:date:message-id:mime-version:x-mailer; b=nuS/O1rqMuWLGUvZnwV+HTbHqeZJ40MFPAb2Hkd12lDgpkf4NvasECY7FffGDBA4h5Vt51xLJP1RCoZdD0sKhL8egHXepCeEqd6zc0n5PP8ERjtPzfEj7ca+80MVjI813SmXAB4rCRFmqGTUCHa4ZnzDE3K3HII1xsAY4dTvHuA= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:subject:from:to:content-type:date:message-id:mime-version:x-mailer; b=Iatyxw5Uvp94WFO+zHA2wRmrSfyiwj2lPkiEKX5uUJegUdG6vh08f6khYRnnMugeO5VpnVHQWkH53SoUih2++CPS+7x20BDGKSFjj81O23UTbo/gkBTT8dmkbC4qlR+DGIvzmaf4F7a7N874sYgXlUJpAi7vvu/2S5qP3PuoXkc= Received: by 10.78.201.15 with SMTP id y15mr156997huf.1182939329534; Wed, 27 Jun 2007 03:15:29 -0700 (PDT) Received: from ?192.168.1.10? 
( [84.59.100.151]) by mx.google.com with ESMTP id k10sm4643417nfh.2007.06.27.03.15.27 (version=TLSv1/SSLv3 cipher=RC4-MD5); Wed, 27 Jun 2007 03:15:28 -0700 (PDT) Subject: [PATCH] Implement ioctl to mark AGs as "don't use/use" From: Ruben Porras To: xfs@oss.sgi.com Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="=-4q4KvNaYwVtWs6R7erN9" Date: Wed, 27 Jun 2007 12:15:25 +0200 Message-Id: <1182939325.5313.12.camel@localhost> Mime-Version: 1.0 X-Mailer: Evolution 2.10.2 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11947 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nahoo82@gmail.com Precedence: bulk X-list: xfs --=-4q4KvNaYwVtWs6R7erN9 Content-Type: multipart/mixed; boundary="=-mxxJ5eF2pIjoqGZ+jADI" --=-mxxJ5eF2pIjoqGZ+jADI Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable

The patch has the following parts:

- Necessary changes to xfs_ag.h
- two new ioctls
- Changes to the allocation functions to avoid using marked AGs
- Extension to xfs_alloc_log_agf

This should implement the second step on the requirement list to shrink an xfs filesystem. Comments are welcome.
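Decoded, the core of the attached patch's allocator change is a per-AG flag test at the top of xfs_alloc_ag_vextent(): if the AG is marked deny, the allocation returns NULLAGBLOCK so the allocator moves on to another AG. A simplified userspace sketch of that check (types reduced for illustration; the real patch adds agf_flags to the on-disk AGF and pagf_flags to the in-core per-AG structure):

```c
#include <stdint.h>

/* Flag value as defined in the attached patch. */
#define XFS_AGF_FLAGS_ALLOC_DENY (1 << 0)

/* Reduced stand-in for the in-core xfs_perag_t. */
struct perag {
	uint32_t pagf_flags;    /* whether the AG is allocatable */
};

/* Returns 1 if allocation in this AG must be skipped (the AG is being
 * emptied ahead of a shrink), 0 otherwise. */
static int ag_alloc_denied(const struct perag *pag)
{
	return (pag->pagf_flags & XFS_AGF_FLAGS_ALLOC_DENY) != 0;
}
```

Marking an AG this way only stops new allocations; existing extents in the AG still have to be migrated out before the filesystem can actually be shrunk past it.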
--=20 Rub=C3=A9n Porras LinWorks GmbH --=-mxxJ5eF2pIjoqGZ+jADI Content-Disposition: attachment; filename=patch_markags.diff Content-Transfer-Encoding: base64 Content-Type: text/x-patch; name=patch_markags.diff; charset=utf-8 SW5kZXg6IHhmcy94ZnNfYWcuaA0KPT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQ0K UkNTIGZpbGU6IC9jdnMvbGludXgtMi42LXhmcy9mcy94ZnMveGZzX2FnLmgs dg0KcmV0cmlldmluZyByZXZpc2lvbiAxLjU5DQpkaWZmIC11IC1yMS41OSB4 ZnNfYWcuaA0KLS0tIHhmcy94ZnNfYWcuaAkyMiBNYXkgMjAwNyAxNTo1MDo0 OCAtMDAwMAkxLjU5DQorKysgeGZzL3hmc19hZy5oCTI3IEp1biAyMDA3IDA5 OjA2OjM5IC0wMDAwDQpAQCAtNjksNiArNjksNyBAQA0KIAlfX2JlMzIJCWFn Zl9mcmVlYmxrczsJLyogdG90YWwgZnJlZSBibG9ja3MgKi8NCiAJX19iZTMy CQlhZ2ZfbG9uZ2VzdDsJLyogbG9uZ2VzdCBmcmVlIHNwYWNlICovDQogCV9f YmUzMgkJYWdmX2J0cmVlYmxrczsJLyogIyBvZiBibG9ja3MgaGVsZCBpbiBB R0YgYnRyZWVzICovDQorCV9fYmUzMgkJYWdmX2ZsYWdzOyAgICAgIC8qIHRo ZSBBR0YgaXMgYWxsb2NhdGFibGUgKi8NCiB9IHhmc19hZ2ZfdDsNCiANCiAj ZGVmaW5lCVhGU19BR0ZfTUFHSUNOVU0JMHgwMDAwMDAwMQ0KQEAgLTE5Niw4 ICsxOTcsMTcgQEANCiAJbG9ja190CQlwYWdiX2xvY2s7CS8qIGxvY2sgZm9y IHBhZ2JfbGlzdCAqLw0KICNlbmRpZg0KIAl4ZnNfcGVyYWdfYnVzeV90ICpw YWdiX2xpc3Q7CS8qIHVuc3RhYmxlIGJsb2NrcyAqLw0KKwlfX3UzMgkJIHBh Z2ZfZmxhZ3M7CS8qIHRoZSBBR0YgaXMgYWxsb2NhdGFibGUgKi8NCiB9IHhm c19wZXJhZ190Ow0KIA0KK3R5cGVkZWYgc3RydWN0IHhmc19pb2NfYWdmbGFn cw0KK3sNCisJeGZzX2FnbnVtYmVyX3QJYWc7DQorCV9fdTMyCQlmbGFnczsN Cit9IHhmc19pb2NfYWdmbGFnc190Ow0KKw0KKyNkZWZpbmUgWEZTX0FHRl9G TEFHU19BTExPQ19ERU5ZCSgxPDwwKQ0KKw0KICNkZWZpbmUJWEZTX0FHX01B WExFVkVMUyhtcCkJCSgobXApLT5tX2FnX21heGxldmVscykNCiAjZGVmaW5l CVhGU19NSU5fRlJFRUxJU1RfUkFXKGJsLGNsLG1wKQlcDQogCShNSU4oYmwg KyAxLCBYRlNfQUdfTUFYTEVWRUxTKG1wKSkgKyBNSU4oY2wgKyAxLCBYRlNf QUdfTUFYTEVWRUxTKG1wKSkpDQpJbmRleDogeGZzL3hmc19hbGxvYy5jDQo9 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09DQpSQ1MgZmlsZTogL2N2cy9saW51eC0y LjYteGZzL2ZzL3hmcy94ZnNfYWxsb2MuYyx2DQpyZXRyaWV2aW5nIHJldmlz aW9uIDEuMTg2DQpkaWZmIC11IC1yMS4xODYgeGZzX2FsbG9jLmMNCi0tLSB4 
ZnMveGZzX2FsbG9jLmMJMjIgTWF5IDIwMDcgMTU6NTA6NDggLTAwMDAJMS4x ODYNCisrKyB4ZnMveGZzX2FsbG9jLmMJMjcgSnVuIDIwMDcgMDk6MDY6NDAg LTAwMDANCkBAIC01NDksNiArNTQ5LDcgQEANCiAJeGZzX2FsbG9jX2FyZ190 CSphcmdzKQkvKiBhcmd1bWVudCBzdHJ1Y3R1cmUgZm9yIGFsbG9jYXRpb24g Ki8NCiB7DQogCWludAkJZXJyb3I9MDsNCisJeGZzX3BlcmFnX3QJKnBhZzsN CiAjaWZkZWYgWEZTX0FMTE9DX1RSQUNFDQogCXN0YXRpYyBjaGFyCWZuYW1l W10gPSAieGZzX2FsbG9jX2FnX3ZleHRlbnQiOw0KICNlbmRpZg0KQEAgLTU1 OSw2ICs1NjAsMTUgQEANCiAJQVNTRVJUKGFyZ3MtPm1vZCA8IGFyZ3MtPnBy b2QpOw0KIAlBU1NFUlQoYXJncy0+YWxpZ25tZW50ID4gMCk7DQogCS8qDQor CSAqIFJldHVybiBhbiBlcnJvciBpZiB0aGUgYS5nLiBzaG91bGQgbm90IGJl IGFsbG9jYXRlZC4NCisJICogVGhpcyBoYXBwZW5zIG5vcm1hbGx5IGR1cmlu ZyBhIHNocmluayBvcGVyYXRpb24uDQorCSAqLw0KKyAgICAgICAgcGFnID0g KGFyZ3MtPnBhZyk7DQorICAgICAgICBpZiAodW5saWtlbHkocGFnLT5wYWdm X2ZsYWdzICYgWEZTX0FHRl9GTEFHU19BTExPQ19ERU5ZKSkgew0KKwkJYXJn cy0+YWdibm8gPSBOVUxMQUdCTE9DSzsNCisJCXJldHVybiAwOw0KKwl9DQor CS8qDQogCSAqIEJyYW5jaCB0byBjb3JyZWN0IHJvdXRpbmUgYmFzZWQgb24g dGhlIHR5cGUuDQogCSAqLw0KIAlhcmdzLT53YXNmcm9tZmwgPSAwOw0KQEAg LTIwODUsNiArMjA5NSw3IEBADQogCQlvZmZzZXRvZih4ZnNfYWdmX3QsIGFn Zl9mcmVlYmxrcyksDQogCQlvZmZzZXRvZih4ZnNfYWdmX3QsIGFnZl9sb25n ZXN0KSwNCiAJCW9mZnNldG9mKHhmc19hZ2ZfdCwgYWdmX2J0cmVlYmxrcyks DQorCQlvZmZzZXRvZih4ZnNfYWdmX3QsIGFnZl9mbGFncyksDQogCQlzaXpl b2YoeGZzX2FnZl90KQ0KIAl9Ow0KIA0KSW5kZXg6IHhmcy94ZnNfZnMuaA0K PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PQ0KUkNTIGZpbGU6IC9jdnMvbGludXgt Mi42LXhmcy9mcy94ZnMveGZzX2ZzLmgsdg0KcmV0cmlldmluZyByZXZpc2lv biAxLjMzDQpkaWZmIC11IC1yMS4zMyB4ZnNfZnMuaA0KLS0tIHhmcy94ZnNf ZnMuaAkyMiBNYXkgMjAwNyAxNTo1MDo0OCAtMDAwMAkxLjMzDQorKysgeGZz L3hmc19mcy5oCTI3IEp1biAyMDA3IDA5OjA2OjQwIC0wMDAwDQpAQCAtNDc2 LDIyICs0NzYsMjQgQEANCiAjZGVmaW5lIFhGU19JT0NfT1BFTl9CWV9IQU5E TEUJICAgICBfSU9XUignWCcsIDEwNywgc3RydWN0IHhmc19mc29wX2hhbmRs ZXJlcSkNCiAjZGVmaW5lIFhGU19JT0NfUkVBRExJTktfQllfSEFORExFICAg X0lPV1IoJ1gnLCAxMDgsIHN0cnVjdCB4ZnNfZnNvcF9oYW5kbGVyZXEpDQog 
I2RlZmluZSBYRlNfSU9DX1NXQVBFWFQJCSAgICAgX0lPV1IoJ1gnLCAxMDks IHN0cnVjdCB4ZnNfc3dhcGV4dCkNCi0jZGVmaW5lIFhGU19JT0NfRlNHUk9X RlNEQVRBCSAgICAgX0lPVyAoJ1gnLCAxMTAsIHN0cnVjdCB4ZnNfZ3Jvd2Zz X2RhdGEpDQotI2RlZmluZSBYRlNfSU9DX0ZTR1JPV0ZTTE9HCSAgICAgX0lP VyAoJ1gnLCAxMTEsIHN0cnVjdCB4ZnNfZ3Jvd2ZzX2xvZykNCi0jZGVmaW5l IFhGU19JT0NfRlNHUk9XRlNSVAkgICAgIF9JT1cgKCdYJywgMTEyLCBzdHJ1 Y3QgeGZzX2dyb3dmc19ydCkNCi0jZGVmaW5lIFhGU19JT0NfRlNDT1VOVFMJ ICAgICBfSU9SICgnWCcsIDExMywgc3RydWN0IHhmc19mc29wX2NvdW50cykN Ci0jZGVmaW5lIFhGU19JT0NfU0VUX1JFU0JMS1MJICAgICBfSU9XUignWCcs IDExNCwgc3RydWN0IHhmc19mc29wX3Jlc2Jsa3MpDQotI2RlZmluZSBYRlNf SU9DX0dFVF9SRVNCTEtTCSAgICAgX0lPUiAoJ1gnLCAxMTUsIHN0cnVjdCB4 ZnNfZnNvcF9yZXNibGtzKQ0KLSNkZWZpbmUgWEZTX0lPQ19FUlJPUl9JTkpF Q1RJT04JICAgICBfSU9XICgnWCcsIDExNiwgc3RydWN0IHhmc19lcnJvcl9p bmplY3Rpb24pDQotI2RlZmluZSBYRlNfSU9DX0VSUk9SX0NMRUFSQUxMCSAg ICAgX0lPVyAoJ1gnLCAxMTcsIHN0cnVjdCB4ZnNfZXJyb3JfaW5qZWN0aW9u KQ0KLS8qCVhGU19JT0NfQVRUUkNUTF9CWV9IQU5ETEUgLS0gZGVwcmVjYXRl ZCAxMTgJICovDQotI2RlZmluZSBYRlNfSU9DX0ZSRUVaRQkJICAgICBfSU9X UignWCcsIDExOSwgaW50KQ0KLSNkZWZpbmUgWEZTX0lPQ19USEFXCQkgICAg IF9JT1dSKCdYJywgMTIwLCBpbnQpDQotI2RlZmluZSBYRlNfSU9DX0ZTU0VU RE1fQllfSEFORExFICAgIF9JT1cgKCdYJywgMTIxLCBzdHJ1Y3QgeGZzX2Zz b3Bfc2V0ZG1faGFuZGxlcmVxKQ0KLSNkZWZpbmUgWEZTX0lPQ19BVFRSTElT VF9CWV9IQU5ETEUgICBfSU9XICgnWCcsIDEyMiwgc3RydWN0IHhmc19mc29w X2F0dHJsaXN0X2hhbmRsZXJlcSkNCi0jZGVmaW5lIFhGU19JT0NfQVRUUk1V TFRJX0JZX0hBTkRMRSAgX0lPVyAoJ1gnLCAxMjMsIHN0cnVjdCB4ZnNfZnNv cF9hdHRybXVsdGlfaGFuZGxlcmVxKQ0KLSNkZWZpbmUgWEZTX0lPQ19GU0dF T01FVFJZCSAgICAgX0lPUiAoJ1gnLCAxMjQsIHN0cnVjdCB4ZnNfZnNvcF9n ZW9tKQ0KLSNkZWZpbmUgWEZTX0lPQ19HT0lOR0RPV04JICAgICBfSU9SICgn WCcsIDEyNSwgX191aW50MzJfdCkNCisjZGVmaW5lIFhGU19JT0NfR0VUX0FH Rl9GTEFHUyAgICAgICBfSU9XICgnWCcsIDExMCwgc3RydWN0IHhmc19pb2Nf YWdmbGFncykNCisjZGVmaW5lIFhGU19JT0NfU0VUX0FHRl9GTEFHUyAgICAg ICBfSU9XICgnWCcsIDExMSwgc3RydWN0IHhmc19pb2NfYWdmbGFncykNCisj ZGVmaW5lIFhGU19JT0NfRlNHUk9XRlNEQVRBCSAgICAgX0lPVyAoJ1gnLCAx 
MTEsIHN0cnVjdCB4ZnNfZ3Jvd2ZzX2RhdGEpDQorI2RlZmluZSBYRlNfSU9D X0ZTR1JPV0ZTTE9HCSAgICAgX0lPVyAoJ1gnLCAxMTIsIHN0cnVjdCB4ZnNf Z3Jvd2ZzX2xvZykNCisjZGVmaW5lIFhGU19JT0NfRlNHUk9XRlNSVAkgICAg IF9JT1cgKCdYJywgMTEzLCBzdHJ1Y3QgeGZzX2dyb3dmc19ydCkNCisjZGVm aW5lIFhGU19JT0NfRlNDT1VOVFMJICAgICBfSU9SICgnWCcsIDExNCwgc3Ry dWN0IHhmc19mc29wX2NvdW50cykNCisjZGVmaW5lIFhGU19JT0NfU0VUX1JF U0JMS1MJICAgICBfSU9XUignWCcsIDExNSwgc3RydWN0IHhmc19mc29wX3Jl c2Jsa3MpDQorI2RlZmluZSBYRlNfSU9DX0dFVF9SRVNCTEtTCSAgICAgX0lP UiAoJ1gnLCAxMTYsIHN0cnVjdCB4ZnNfZnNvcF9yZXNibGtzKQ0KKyNkZWZp bmUgWEZTX0lPQ19FUlJPUl9JTkpFQ1RJT04JICAgICBfSU9XICgnWCcsIDEx Nywgc3RydWN0IHhmc19lcnJvcl9pbmplY3Rpb24pDQorI2RlZmluZSBYRlNf SU9DX0VSUk9SX0NMRUFSQUxMCSAgICAgX0lPVyAoJ1gnLCAxMTgsIHN0cnVj dCB4ZnNfZXJyb3JfaW5qZWN0aW9uKQ0KKy8qCVhGU19JT0NfQVRUUkNUTF9C WV9IQU5ETEUgLS0gZGVwcmVjYXRlZCAxMTkJICovDQorI2RlZmluZSBYRlNf SU9DX0ZSRUVaRQkJICAgICBfSU9XUignWCcsIDEyMCwgaW50KQ0KKyNkZWZp bmUgWEZTX0lPQ19USEFXCQkgICAgIF9JT1dSKCdYJywgMTIxLCBpbnQpDQor I2RlZmluZSBYRlNfSU9DX0ZTU0VURE1fQllfSEFORExFICAgIF9JT1cgKCdY JywgMTIyLCBzdHJ1Y3QgeGZzX2Zzb3Bfc2V0ZG1faGFuZGxlcmVxKQ0KKyNk ZWZpbmUgWEZTX0lPQ19BVFRSTElTVF9CWV9IQU5ETEUgICBfSU9XICgnWCcs IDEyMywgc3RydWN0IHhmc19mc29wX2F0dHJsaXN0X2hhbmRsZXJlcSkNCisj ZGVmaW5lIFhGU19JT0NfQVRUUk1VTFRJX0JZX0hBTkRMRSAgX0lPVyAoJ1gn LCAxMjQsIHN0cnVjdCB4ZnNfZnNvcF9hdHRybXVsdGlfaGFuZGxlcmVxKQ0K KyNkZWZpbmUgWEZTX0lPQ19GU0dFT01FVFJZCSAgICAgX0lPUiAoJ1gnLCAx MjUsIHN0cnVjdCB4ZnNfZnNvcF9nZW9tKQ0KKyNkZWZpbmUgWEZTX0lPQ19H T0lOR0RPV04JICAgICBfSU9SICgnWCcsIDEyNiwgX191aW50MzJfdCkNCiAv KglYRlNfSU9DX0dFVEZTVVVJRCAtLS0tLS0tLS0tIGRlcHJlY2F0ZWQgMTQw CSAqLw0KIA0KIA0KSW5kZXg6IHhmcy94ZnNfZnNvcHMuYw0KPT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PQ0KUkNTIGZpbGU6IC9jdnMvbGludXgtMi42LXhmcy9m cy94ZnMveGZzX2Zzb3BzLmMsdg0KcmV0cmlldmluZyByZXZpc2lvbiAxLjEy Ng0KZGlmZiAtdSAtcjEuMTI2IHhmc19mc29wcy5jDQotLS0geGZzL3hmc19m c29wcy5jCTggSnVuIDIwMDcgMTY6MDM6NTkgLTAwMDAJMS4xMjYNCisrKyB4 
ZnMveGZzX2Zzb3BzLmMJMjcgSnVuIDIwMDcgMDk6MDY6NDAgLTAwMDANCkBA IC02NDksMyArNjQ5LDgwIEBADQogDQogCXJldHVybiAwOw0KIH0NCisNCitT VEFUSUMgdm9pZA0KK3hmc19hZ19zZXRfZmxhZ3NfcHJpdmF0ZSgNCisJeGZz X3RyYW5zX3QJKnRwLA0KKwl4ZnNfYnVmX3QJKmFnYnAsCS8qIGJ1ZmZlciBm b3IgYS5nLiBmcmVlbGlzdCBoZWFkZXIgKi8NCisJeGZzX3BlcmFnX3QJKnBh ZywNCisJX191MzIJCWZsYWdzKQ0KK3sNCisJeGZzX2FnZl90CSphZ2Y7CS8q IGEuZy4gZnJlZXNwYWNlIHN0cnVjdHVyZSAqLw0KKw0KKwlhZ2YgPSBYRlNf QlVGX1RPX0FHRihhZ2JwKTsNCisJcGFnLT5wYWdmX2ZsYWdzIHw9IGZsYWdz Ow0KKwlhZ2YtPmFnZl9mbGFncyA9IGNwdV90b19iZTMyKHBhZy0+cGFnZl9m bGFncyk7DQorDQorCXhmc19hbGxvY19sb2dfYWdmKHRwLCBhZ2JwLCBYRlNf VFJBTlNfQUdGX0ZMQUdTKTsNCit9DQorDQorX191MzINCit4ZnNfYWdfZ2V0 X2ZsYWdzX3ByaXZhdGUoDQorCXhmc19wZXJhZ190CSpwYWcpDQorew0KKwly ZXR1cm4gcGFnLT5wYWdmX2ZsYWdzOw0KK30NCisNCitpbnQNCit4ZnNfYWdf c2V0X2ZsYWdzKA0KKwl4ZnNfbW91bnRfdAkJKm1wLA0KKwl4ZnNfaW9jX2Fn ZmxhZ3NfdCAJKmlvY19mbGFncykNCit7DQorCXhmc19hZ251bWJlcl90ICBh Z25vOw0KKwl4ZnNfcGVyYWdfdAkqcGFnOw0KKwl4ZnNfYnVmX3QJKmJwOw0K KwlpbnQJCWVycm9yOw0KKwl4ZnNfdHJhbnNfdAkqdHA7DQorDQorCWFnbm8g PSBpb2NfZmxhZ3MtPmFnOw0KKwlpZiAoYWdubyA+PSBtcC0+bV9zYi5zYl9h Z2NvdW50KQ0KKwkJcmV0dXJuIC1FSU5WQUw7DQorDQorCXRwID0geGZzX3Ry YW5zX2FsbG9jKG1wLCBYRlNfVFJBTlNfQUdGX0ZMQUdTKTsNCisJZXJyb3Ig PSB4ZnNfdHJhbnNfcmVzZXJ2ZSh0cCwgMCwgbXAtPm1fc2Iuc2Jfc2VjdHNp emUgKyAxMjgsIDAsIDAsDQorCQkJCQlYRlNfREVGQVVMVF9MT0dfQ09VTlQp Ow0KKwlpZiAoZXJyb3IpIHsNCisJCXhmc190cmFuc19jYW5jZWwodHAsIDAp Ow0KKwkJcmV0dXJuIGVycm9yOw0KKwl9DQorCWVycm9yID0geGZzX2FsbG9j X3JlYWRfYWdmKG1wLCB0cCwgYWdubywgMCwgJmJwKTsNCisJaWYgKGVycm9y KQ0KKwkJcmV0dXJuIGVycm9yOw0KKw0KKwlwYWcgPSAmbXAtPm1fcGVyYWdb YWdub107DQorCXhmc19hZ19zZXRfZmxhZ3NfcHJpdmF0ZSh0cCwgYnAsIHBh ZywgaW9jX2ZsYWdzLT5mbGFncyk7DQorDQorCXhmc190cmFuc19zZXRfc3lu Yyh0cCk7DQorCXhmc190cmFuc19jb21taXQodHAsIDApOw0KKw0KKwlyZXR1 cm4gMDsNCisNCit9DQorDQoraW50DQoreGZzX2FnX2dldF9mbGFncygNCisJ eGZzX21vdW50X3QJCSptcCwNCisJeGZzX2lvY19hZ2ZsYWdzX3QgCSppb2Nf ZmxhZ3MsDQorCV9fdTMyCQkJKmZsYWdzKQ0KK3sNCisJeGZzX2FnbnVtYmVy 
X3QJYWdubzsNCisJeGZzX3BlcmFnX3QJKnBhZzsNCisNCisJYWdubyA9IGlv Y19mbGFncy0+YWc7DQorCWlmIChhZ25vID49IG1wLT5tX3NiLnNiX2FnY291 bnQpDQorCQlyZXR1cm4gLUVJTlZBTDsNCisNCisJcGFnID0gJm1wLT5tX3Bl cmFnW2Fnbm9dOw0KKwkqZmxhZ3MgPSB4ZnNfYWdfZ2V0X2ZsYWdzX3ByaXZh dGUocGFnKTsNCisJcmV0dXJuIDA7DQorfQ0KSW5kZXg6IHhmcy94ZnNfZnNv cHMuaA0KPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PQ0KUkNTIGZpbGU6IC9jdnMv bGludXgtMi42LXhmcy9mcy94ZnMveGZzX2Zzb3BzLmgsdg0KcmV0cmlldmlu ZyByZXZpc2lvbiAxLjI5DQpkaWZmIC11IC1yMS4yOSB4ZnNfZnNvcHMuaA0K LS0tIHhmcy94ZnNfZnNvcHMuaAkyMSBOb3YgMjAwNSAxNDo0MjozNiAtMDAw MAkxLjI5DQorKysgeGZzL3hmc19mc29wcy5oCTI3IEp1biAyMDA3IDA5OjA2 OjQwIC0wMDAwDQpAQCAtMjcsNCArMjcsNyBAQA0KIGV4dGVybiBpbnQgeGZz X2ZzX2dvaW5nZG93bih4ZnNfbW91bnRfdCAqbXAsIF9fdWludDMyX3QgaW5m bGFncyk7DQogZXh0ZXJuIHZvaWQgeGZzX2ZzX2xvZ19kdW1teSh4ZnNfbW91 bnRfdCAqbXApOw0KIA0KK2V4dGVybiBpbnQgeGZzX2FnX3NldF9mbGFncyh4 ZnNfbW91bnRfdCAqbXAsIHhmc19pb2NfYWdmbGFnc190ICppb2NfZmxhZ3Mp Ow0KK2V4dGVybiBpbnQgeGZzX2FnX2dldF9mbGFncyh4ZnNfbW91bnRfdCAq bXAsIHhmc19pb2NfYWdmbGFnc190ICppb2NfZmxhZ3MsIF9fdTMyICpmbGFn cyk7DQorDQogI2VuZGlmCS8qIF9fWEZTX0ZTT1BTX0hfXyAqLw0KSW5kZXg6 IHhmcy94ZnNfdHJhbnMuaA0KPT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQ0KUkNT IGZpbGU6IC9jdnMvbGludXgtMi42LXhmcy9mcy94ZnMveGZzX3RyYW5zLmgs dg0KcmV0cmlldmluZyByZXZpc2lvbiAxLjE0NQ0KZGlmZiAtdSAtcjEuMTQ1 IHhmc190cmFucy5oDQotLS0geGZzL3hmc190cmFucy5oCTIyIE1heSAyMDA3 IDE1OjUwOjQ4IC0wMDAwCTEuMTQ1DQorKysgeGZzL3hmc190cmFucy5oCTI3 IEp1biAyMDA3IDA5OjA2OjQxIC0wMDAwDQpAQCAtNDE4LDYgKzQxOCwxMCBA QA0KICNkZWZpbmUJWEZTX1RSQU5TX1NCX1JFWFRFTlRTCQkweDAwMDAxMDAw DQogI2RlZmluZQlYRlNfVFJBTlNfU0JfUkVYVFNMT0cJCTB4MDAwMDIwMDAN CiANCisvKg0KKyAqIFZhbHVlIGZvciB4ZnNfdHJhbnNfbW9kX2FnZg0KKyAq Lw0KKyNkZWZpbmUgWEZTX1RSQU5TX0FHRl9GTEFHUyAgICAgICAgICAgICAw eDAwMDA0MDAwDQogDQogLyoNCiAgKiBWYXJpb3VzIGxvZyByZXNlcnZhdGlv biB2YWx1ZXMuDQpJbmRleDogeGZzL2xpbnV4LTIuNi94ZnNfaW9jdGwuYw0K 
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PQ0KUkNTIGZpbGU6IC9jdnMvbGludXgt Mi42LXhmcy9mcy94ZnMvbGludXgtMi42L3hmc19pb2N0bC5jLHYNCnJldHJp ZXZpbmcgcmV2aXNpb24gMS4xNDQNCmRpZmYgLXUgLXIxLjE0NCB4ZnNfaW9j dGwuYw0KLS0tIHhmcy9saW51eC0yLjYveGZzX2lvY3RsLmMJNyBGZWIgMjAw NyAwMjo1MDoxMyAtMDAwMAkxLjE0NA0KKysrIHhmcy9saW51eC0yLjYveGZz X2lvY3RsLmMJMjcgSnVuIDIwMDcgMDk6MDY6NDEgLTAwMDANCkBAIC04NjAs NiArODYwLDM4IEBADQogCQlyZXR1cm4gMDsNCiAJfQ0KIA0KKwljYXNlIFhG U19JT0NfR0VUX0FHRl9GTEFHUzogew0KKwkJeGZzX2lvY19hZ2ZsYWdzX3Qg aW47DQorICAgICAgICAgICAgICAgX191MzIgb3V0Ow0KKw0KKwkJaWYgKCFj YXBhYmxlKENBUF9TWVNfQURNSU4pKQ0KKwkJCXJldHVybiAtRVBFUk07DQor DQorCQlpZiAoY29weV9mcm9tX3VzZXIoJmluLCBhcmcsIHNpemVvZihpbikp KQ0KKwkJCXJldHVybiAtWEZTX0VSUk9SKEVGQVVMVCk7DQorDQorCQllcnJv ciA9IHhmc19hZ19nZXRfZmxhZ3MobXAsICZpbiwgJm91dCk7DQorCQlpZiAo ZXJyb3IpDQorCQkJcmV0dXJuIC1lcnJvcjsNCisNCisJCWlmIChjb3B5X3Rv X3VzZXIoYXJnLCAmb3V0LCBzaXplb2Yob3V0KSkpDQorCQkJcmV0dXJuIC1Y RlNfRVJST1IoRUZBVUxUKTsNCisJCXJldHVybiAwOw0KKwl9DQorDQorCWNh c2UgWEZTX0lPQ19TRVRfQUdGX0ZMQUdTOiB7DQorCQl4ZnNfaW9jX2FnZmxh Z3NfdCBpbjsNCisNCisJCWlmICghY2FwYWJsZShDQVBfU1lTX0FETUlOKSkN CisJCQlyZXR1cm4gLUVQRVJNOw0KKw0KKwkJaWYgKGNvcHlfZnJvbV91c2Vy KCZpbiwgYXJnLCBzaXplb2YoaW4pKSkNCisJCQlyZXR1cm4gLVhGU19FUlJP UihFRkFVTFQpOw0KKw0KKwkJZXJyb3IgPSB4ZnNfYWdfc2V0X2ZsYWdzKG1w LCAmaW4pOw0KKwkJcmV0dXJuIC1lcnJvcjsNCisJfQ0KKw0KIAljYXNlIFhG U19JT0NfRlNHUk9XRlNEQVRBOiB7DQogCQl4ZnNfZ3Jvd2ZzX2RhdGFfdCBp bjsNCiANCg== --=-mxxJ5eF2pIjoqGZ+jADI-- --=-4q4KvNaYwVtWs6R7erN9 Content-Type: application/pgp-signature; name=signature.asc Content-Description: Dies ist ein digital signierter Nachrichtenteil -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) iD8DBQBGgji7YubrKblAx+oRArmaAJ9qzq12QmJrkbV6oqQLJL1gAlMOvACdExD0 ov5JPuHWb9UA30K+ATCa1WM= =f/Fq -----END PGP SIGNATURE----- --=-4q4KvNaYwVtWs6R7erN9-- From owner-xfs@oss.sgi.com Wed Jun 27 03:16:22 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Jun 2007 03:16:26 -0700 
(PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from waldorf.loreland.org (ip186.digipost.co.nz [203.110.30.186] (may be forged)) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5RAGKtL021835 for ; Wed, 27 Jun 2007 03:16:21 -0700 Received: from localhost (localhost [127.0.0.1]) by waldorf.loreland.org (Postfix) with ESMTP id 996FF419BE3; Wed, 27 Jun 2007 21:57:31 +1200 (NZST) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: Debian amavisd-new at waldorf Received: from waldorf.loreland.org ([127.0.0.1]) by localhost (waldorf.loreland.org [127.0.0.1]) (amavisd-new, port 10024) with LMTP id BxfNmJsvAmby; Wed, 27 Jun 2007 21:57:28 +1200 (NZST) Received: from [192.168.0.14] (84-43-126-21.ppp.onetel.net.uk [84.43.126.21]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by waldorf.loreland.org (Postfix) with ESMTP id 1FEA6419BDA; Wed, 27 Jun 2007 21:57:26 +1200 (NZST) Message-ID: <46823480.1000305@loreland.org> Date: Wed, 27 Jun 2007 10:57:20 +0100 From: James Braid User-Agent: Thunderbird 1.5.0.12 (Windows/20070509) MIME-Version: 1.0 To: Ralf Gross CC: xfs-oss Subject: Re: reasonable xfs fs size for 30-100 TB? References: <20070626171719.GD32546@p15145560.pureserver.info> In-Reply-To: <20070626171719.GD32546@p15145560.pureserver.info> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Status: Clean X-archive-position: 11948 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jamesb@loreland.org Precedence: bulk X-list: xfs Ralf Gross wrote: > My main concern is the amount of RAM I need for a fsck of the xfs fs. 
> Last time I search for the xfs requirements, I found the rule of > thumb: 2 GB RAM for 1 TB of disk storage + some RAM per x inodes. A real world example: we have a ~70TB filesystem, with ~70M inodes and xfs_repair uses about 13-15GB of memory IIRC (haven't run a repair in a while) using a recentish 2.8.x version. Hope that helps. From owner-xfs@oss.sgi.com Wed Jun 27 06:37:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Jun 2007 06:37:35 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5RDbRtL004756 for ; Wed, 27 Jun 2007 06:37:30 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id XAA00819; Wed, 27 Jun 2007 23:37:10 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5RDb4eW2647348; Wed, 27 Jun 2007 23:37:05 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5RDavZY2646517; Wed, 27 Jun 2007 23:36:57 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Wed, 27 Jun 2007 23:36:57 +1000 From: David Chinner To: xfs-oss Cc: "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org, suparna@in.ibm.com, cmm@us.ibm.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070627133657.GQ989688@sgi.com> References: <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625150320.GA8686@amitarora.in.ibm.com> <20070625214626.GJ5181@schatzie.adilger.int> <20070626231431.GO31489@sgi.com> <20070627034915.GR6652@schatzie.adilger.int> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070627034915.GR6652@schatzie.adilger.int> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11949 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Tue, Jun 26, 2007 at 11:49:15PM -0400, Andreas Dilger wrote: > On Jun 27, 2007 09:14 +1000, David Chinner wrote: > > Someone on the XFs list had an interesting request - preallocated > > swap files. You can't use unwritten extents for this because > > of sys_swapon()s use of bmap() (XFS returns holes for reading > > unwritten extents), so we need a method of preallocating that does > > not zero or mark the extent unread. i.e. FA_MKSWAP. > > Is there a reason why unwritten extents return 0 to bmap()? It's a fallout of xfs_get_blocks not mapping unwritten extents on read because we want do_mpage_readpage() to treat them as a hole. i.e. zero fill them instead of doing I/O. This is the way XFS was shoehorned into the generic read path :/ > This > would seem to be the only impediment from using fallocated files > for swap files. 
> Maybe if FIEMAP was used by mkswap to get an "UNWRITTEN" flag back
> instead of "HOLE" it wouldn't be a problem.

Probably. If we taught do_mpage_readpage() about unwritten mappings, then we could map them on read, and sys_swapon could remain blissfully unaware of unwritten extents. I think this is pretty much all I need to do to achieve that (untested):

---

Teach do_mpage_readpage() about unwritten extents so we can always map them in get_blocks rather than treating them as holes on read. Allows setup_swap_extents() to use preallocated files on XFS filesystems for swap files without ever needing to convert them.

Signed-Off-By: Dave Chinner

---
 fs/mpage.c                  |    5 +++--
 fs/xfs/linux-2.6/xfs_aops.c |   13 +++----------
 2 files changed, 6 insertions(+), 12 deletions(-)

Index: 2.6.x-xfs-new/fs/mpage.c
===================================================================
--- 2.6.x-xfs-new.orig/fs/mpage.c	2007-05-29 16:17:59.000000000 +1000
+++ 2.6.x-xfs-new/fs/mpage.c	2007-06-27 22:39:35.568852270 +1000
@@ -207,7 +207,8 @@ do_mpage_readpage(struct bio *bio, struc
 	 * Map blocks using the result from the previous get_blocks call first.
 	 */
 	nblocks = map_bh->b_size >> blkbits;
-	if (buffer_mapped(map_bh) && block_in_file > *first_logical_block &&
+	if (buffer_mapped(map_bh) && !buffer_unwritten(map_bh) &&
+	    block_in_file > *first_logical_block &&
 	    block_in_file < (*first_logical_block + nblocks)) {
 		unsigned map_offset = block_in_file - *first_logical_block;
 		unsigned last = nblocks - map_offset;
@@ -242,7 +243,7 @@ do_mpage_readpage(struct bio *bio, struc
 		*first_logical_block = block_in_file;
 	}
 
-	if (!buffer_mapped(map_bh)) {
+	if (!buffer_mapped(map_bh) || buffer_unwritten(map_bh)) {
 		fully_mapped = 0;
 		if (first_hole == blocks_per_page)
 			first_hole = page_block;
Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_aops.c
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_aops.c	2007-06-05 22:14:39.000000000 +1000
+++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_aops.c	2007-06-27 22:39:29.545636749 +1000
@@ -1340,16 +1340,9 @@ __xfs_get_blocks(
 		return 0;
 
 	if (iomap.iomap_bn != IOMAP_DADDR_NULL) {
-		/*
-		 * For unwritten extents do not report a disk address on
-		 * the read case (treat as if we're reading into a hole).
-		 */
-		if (create || !(iomap.iomap_flags & IOMAP_UNWRITTEN)) {
-			xfs_map_buffer(bh_result, &iomap, offset,
-					inode->i_blkbits);
-		}
-		if (create && (iomap.iomap_flags & IOMAP_UNWRITTEN)) {
-			if (direct)
+		xfs_map_buffer(bh_result, &iomap, offset, inode->i_blkbits);
+		if (iomap.iomap_flags & IOMAP_UNWRITTEN) {
+			if (create && direct)
 				bh_result->b_private = inode;
 			set_buffer_unwritten(bh_result);
 		}

Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jun 27 07:46:40 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Jun 2007 07:46:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5REkctL025651 for ; Wed, 27 Jun 2007 07:46:39 -0500 (CDT) Message-ID: <4682784C.1070607@sandeen.net> Date: Wed, 27 Jun 2007 09:46:36 -0500 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.4 (Macintosh/20070604) MIME-Version: 1.0 To: xfs-oss Subject: [PATCH] vn_hold return value is unused Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11950 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs

Came across this while looking at the last patch. *shrug* it's here if you want it :)

-------------

Make vn_hold a void function; return is never used.

Signed-off-by: Eric Sandeen

Index: xfs-linux-clean/linux-2.4/xfs_vnode.c
===================================================================
--- xfs-linux-clean.orig/linux-2.4/xfs_vnode.c
+++ xfs-linux-clean/linux-2.4/xfs_vnode.c
@@ -133,7 +133,7 @@ vn_revalidate(
 /*
  * Add a reference to a referenced vnode.
  */
-bhv_vnode_t *
+void
 vn_hold(
 	bhv_vnode_t	*vp)
 {
@@ -145,8 +145,6 @@ vn_hold(
 	inode = igrab(vn_to_inode(vp));
 	ASSERT(inode);
 	VN_UNLOCK(vp, 0);
-
-	return vp;
 }
 
 #ifdef XFS_VNODE_TRACE
Index: xfs-linux-clean/linux-2.4/xfs_vnode.h
===================================================================
--- xfs-linux-clean.orig/linux-2.4/xfs_vnode.h
+++ xfs-linux-clean/linux-2.4/xfs_vnode.h
@@ -435,17 +435,17 @@ static inline int vn_count(struct bhv_vn
 /*
  * Vnode reference counting functions (and macros for compatibility).
  */
-extern bhv_vnode_t *vn_hold(struct bhv_vnode *);
+extern void vn_hold(struct bhv_vnode *);
 
 #if defined(XFS_VNODE_TRACE)
 #define VN_HOLD(vp) \
-	((void)vn_hold(vp), \
+	(vn_hold(vp), \
 	  vn_trace_hold(vp, __FILE__, __LINE__, (inst_t *)__return_address))
 #define VN_RELE(vp) \
 	  (vn_trace_rele(vp, __FILE__, __LINE__, (inst_t *)__return_address), \
 	  iput(vn_to_inode(vp)))
 #else
-#define VN_HOLD(vp)	((void)vn_hold(vp))
+#define VN_HOLD(vp)	(vn_hold(vp))
 #define VN_RELE(vp)	(iput(vn_to_inode(vp)))
 #endif
Index: xfs-linux-clean/linux-2.6/xfs_vnode.c
===================================================================
--- xfs-linux-clean.orig/linux-2.6/xfs_vnode.c
+++ xfs-linux-clean/linux-2.6/xfs_vnode.c
@@ -172,7 +172,7 @@ vn_revalidate(
 /*
  * Add a reference to a referenced vnode.
  */
-bhv_vnode_t *
+void
 vn_hold(
 	bhv_vnode_t	*vp)
 {
@@ -184,8 +184,6 @@ vn_hold(
 	inode = igrab(vn_to_inode(vp));
 	ASSERT(inode);
 	VN_UNLOCK(vp, 0);
-
-	return vp;
 }
 
 #ifdef XFS_VNODE_TRACE
Index: xfs-linux-clean/linux-2.6/xfs_vnode.h
===================================================================
--- xfs-linux-clean.orig/linux-2.6/xfs_vnode.h
+++ xfs-linux-clean/linux-2.6/xfs_vnode.h
@@ -454,17 +454,17 @@ static inline int vn_count(struct bhv_vn
 /*
  * Vnode reference counting functions (and macros for compatibility).
  */
-extern bhv_vnode_t *vn_hold(struct bhv_vnode *);
+extern void vn_hold(struct bhv_vnode *);
 
 #if defined(XFS_VNODE_TRACE)
 #define VN_HOLD(vp) \
-	((void)vn_hold(vp), \
+	(vn_hold(vp), \
 	  vn_trace_hold(vp, __FILE__, __LINE__, (inst_t *)__return_address))
 #define VN_RELE(vp) \
 	  (vn_trace_rele(vp, __FILE__, __LINE__, (inst_t *)__return_address), \
 	  iput(vn_to_inode(vp)))
 #else
-#define VN_HOLD(vp)	((void)vn_hold(vp))
+#define VN_HOLD(vp)	(vn_hold(vp))
 #define VN_RELE(vp)	(iput(vn_to_inode(vp)))
 #endif

From owner-xfs@oss.sgi.com Wed Jun 27 11:14:07 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Jun 2007 11:14:12 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.1 required=5.0 tests=BAYES_80,J_CHICKENPOX_27 autolearn=no version=3.2.0-pre1-r499012 Received: from isls-mx20.wmin.ac.uk (isls-mx20.wmin.ac.uk [161.74.14.113]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5RIE5tL000719 for ; Wed, 27 Jun 2007 11:14:07 -0700 Received: from groucho.wmin.ac.uk ([161.74.160.74]) by isls-mx20.wmin.ac.uk with esmtp (Exim 4.60) (envelope-from ) id 1I3bmi-0004es-7b for xfs@oss.sgi.com; Wed, 27 Jun 2007 18:58:32 +0100 Received: from sunset.cpc.wmin.ac.uk (5ac093cf.bb.sky.com [90.192.147.207]) (using TLSv1 with cipher RC4-SHA (128/128 bits)) (No client certificate requested) by groucho.wmin.ac.uk (Postfix) with ESMTP id 00F78321B3B for ; Wed, 27 Jun 2007 18:58:29 +0100 (BST) Date: Wed, 27 Jun 2007 18:58:29 +0100 To: xfs@oss.sgi.com Subject: After reboot fs with barrier faster deletes then fs with nobarrier From: "Szabolcs Illes" Content-Type: text/plain; format=flowed; delsp=yes; charset=us-ascii MIME-Version: 1.0 Message-ID: User-Agent: Opera Mail/9.21 (Linux) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from Quoted-Printable to 8bit by oss.sgi.com id
l5RIE7tL000735 X-archive-position: 11951 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: S.Illes@westminster.ac.uk Precedence: bulk X-list: xfs

Hi,

I am using XFS on my laptop, and I have noticed that the nobarrier mount option sometimes slows down deleting a large number of small files, like the kernel source tree. I made four tests, deleting the kernel source right after unpack and after reboot, with both the barrier and nobarrier options:

mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2

illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync && reboot

After reboot:
illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/

real    0m28.127s
user    0m0.044s
sys     0m2.924s

illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync
illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/

real    0m14.872s
user    0m0.044s
sys     0m2.872s

-------------------------------------------------------------------

mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2,nobarrier

illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync && reboot

After reboot:
illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/

real    1m12.738s
user    0m0.032s
sys     0m2.548s

illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync
illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/

real    0m7.884s
user    0m0.028s
sys     0m2.008s

It looks like deleting files after reboot is faster with barrier (28 sec vs 72 sec !!!). I thought it was supposed to be faster with the nobarrier mount option, but it is not; very strange. However, deleting right after the unpack (so everything is in the memory cache) is faster with nobarrier (15 sec vs 9 sec), not too much surprise there. I repeated this test several times, same results. I made sure nothing was running while I was doing the tests: the cpu was idle, the hdd led was not on, etc. I have found nothing in the logs suggesting that barrier is not working. Can anyone explain this? Is it normal?
Is there any point using barrier (except that it's sometimes faster :) ) on a laptop?

Cheers,
Szabolcs

Some info:

Test system: opensuse10.2, xfsprogs-2.8.11_1-11

sunset:~ # uname -a
Linux sunset 2.6.18.8-0.3-default #1 SMP Tue Apr 17 08:42:35 UTC 2007 i686 i686 i386 GNU/Linux

sunset:~ # xfs_info:
meta-data=/dev/hda3          isize=256    agcount=8, agsize=946078 blks
         =                   sectsz=512   attr=2
data     =                   bsize=4096   blocks=7568623, imaxpct=25
         =                   sunit=0      swidth=0 blks, unwritten=1
naming   =version 2          bsize=4096
log      =internal           bsize=4096   blocks=3695, version=2
         =                   sectsz=512   sunit=0 blks
realtime =none               extsz=65536  blocks=0, rtextents=0

sunset:~ # hdparm /dev/hda

/dev/hda:
 multcount    = 16 (on)
 IO_support   =  1 (32-bit)
 unmaskirq    =  0 (off)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    = 256 (on)
 geometry     = 16383/255/63, sectors = 78140160, start = 0

From owner-xfs@oss.sgi.com Wed Jun 27 14:45:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Jun 2007 14:45:12 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=5.0 tests=AWL,BAYES_95 autolearn=no version=3.2.0-pre1-r499012 Received: from smtp102.sbc.mail.re2.yahoo.com (smtp102.sbc.mail.re2.yahoo.com [68.142.229.103]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5RLj7tL011852 for ; Wed, 27 Jun 2007 14:45:08 -0700 Received: (qmail 51341 invoked from network); 27 Jun 2007 21:45:07 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@24.5.75.45 with login) by smtp102.sbc.mail.re2.yahoo.com with SMTP; 27 Jun 2007 21:45:07 -0000 X-YMail-OSG: g9TnissVM1l0nWBLjeJjnld0ZEdf3hZDV2ti0pTMhqBOb8Gkdqzs.hNshTuBcP9j8TfLpxi7qQ-- Received: by tuatara.stupidest.org (Postfix, from userid 10000) id 9101C1827285; Wed, 27 Jun 2007 14:45:06 -0700 (PDT) Date: Wed, 27 Jun 2007 14:45:06 -0700 From: Chris Wedgwood To: Szabolcs Illes Cc: xfs@oss.sgi.com Subject: Re: After reboot fs with barrier faster deletes then fs
with nobarrier Message-ID: <20070627214506.GA1352@tuatara.stupidest.org> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11952 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs

On Wed, Jun 27, 2007 at 06:58:29PM +0100, Szabolcs Illes wrote:

> I repeated this test several times, same results. I made sure
> nothing was running while I was doing the tests, cpu was idle, hdd
> led was not on, etc.

instead of doing a reboot can you try something like:

    echo 1 > /proc/sys/vm/drop_caches

or

    echo 3 > /proc/sys/vm/drop_caches

(the value is a bit mask, bit 0 will drop the page cache, bit 1 will drop the slab)

does that give you more or less the same results as rebooting?

another thing to try, before the delete, also try:

    find path/to/whatever -noleaf >/dev/null

and see if that helps (i expect it should greatly)

From owner-xfs@oss.sgi.com Wed Jun 27 15:18:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Jun 2007 15:18:57 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.1 required=5.0 tests=BAYES_80,J_CHICKENPOX_27 autolearn=no version=3.2.0-pre1-r499012 Received: from isls-mx10.wmin.ac.uk (isls-mx10.wmin.ac.uk [161.74.14.112]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5RMIotL019638 for ; Wed, 27 Jun 2007 15:18:52 -0700 Received: from groucho.wmin.ac.uk ([161.74.160.74]) by isls-mx10.wmin.ac.uk with esmtp (Exim 4.60) (envelope-from ) id 1I3fqd-00051t-DS; Wed, 27 Jun 2007 23:18:51 +0100 Received: from sunset.cpc.wmin.ac.uk (5ac093cf.bb.sky.com [90.192.147.207]) (using TLSv1 with cipher RC4-SHA (128/128 bits)) (No client certificate requested) by
groucho.wmin.ac.uk (Postfix) with ESMTP id 29712321B84; Wed, 27 Jun 2007 23:18:51 +0100 (BST) Date: Wed, 27 Jun 2007 23:18:50 +0100 To: "Chris Wedgwood" Subject: Re: After reboot fs with barrier faster deletes then fs with nobarrier From: "Szabolcs Illes" Cc: xfs@oss.sgi.com Content-Type: text/plain; format=flowed; delsp=yes; charset=us-ascii MIME-Version: 1.0 References: <20070627214506.GA1352@tuatara.stupidest.org> Message-ID: In-Reply-To: <20070627214506.GA1352@tuatara.stupidest.org> User-Agent: Opera Mail/9.21 (Linux) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from Quoted-Printable to 8bit by oss.sgi.com id l5RMIqtL019648 X-archive-position: 11953 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: S.Illes@westminster.ac.uk Precedence: bulk X-list: xfs On Wed, 27 Jun 2007 22:45:06 +0100, Chris Wedgwood wrote: > On Wed, Jun 27, 2007 at 06:58:29PM +0100, Szabolcs Illes wrote: > >> I repeated this test several times, same results. I made sure >> nothing was running while I was doing the tests, cpu was idle, hdd >> led was not on, etc. > > instead of doing a reboot can you try something like: > > echo 1 > /proc/sys/vm/drop_caches > > or > echo 3 > /proc/sys/vm/drop_caches > > (the value is a bit mask, bit 0 will drop the page cache, bit 1 will > drop the slab) > > does that give you more or less the same results as rebooting? yes it does. 
> another thing to try, before the delete, also try:
>
>     find path/to/whatever -noleaf >/dev/null
>
> and see if that helps (i expect it should greatly)

It doesn't help too much, see the updated tests:

mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2

illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync && reboot

After reboot:
illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/

real    0m28.127s
user    0m0.044s
sys     0m2.924s

illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync
illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/

real    0m14.872s
user    0m0.044s
sys     0m2.872s

illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync
sunset:~ # echo 1 > /proc/sys/vm/drop_caches
illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/

real    0m28.257s
user    0m0.036s
sys     0m2.732s

illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync
sunset:~ # echo 3 > /proc/sys/vm/drop_caches
illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/

real    0m28.155s
user    0m0.048s
sys     0m2.772s

illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync
sunset:~ # echo 3 > /proc/sys/vm/drop_caches
illes@sunset:~/tmp> find linux-2.6.21.5/ -noleaf >/dev/null
illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/

real    0m25.702s
user    0m0.064s
sys     0m2.664s

-------------------------------------------------------------------

mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2,nobarrier

illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync && reboot

After reboot:
illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/

real    1m12.738s
user    0m0.032s
sys     0m2.548s

illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync
illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/

real    0m7.884s
user    0m0.028s
sys     0m2.008s

illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync
sunset:~ # echo 1 > /proc/sys/vm/drop_caches
illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/

real    1m15.367s
user    0m0.048s
sys     0m2.264s

illes@sunset:~/tmp> tar xjf
~/Download/linux-2.6.21.5.tar.bz2 && sync sunset:~ # echo 3 > /proc/sys/vm/drop_caches illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/ real 1m16.043s user 0m0.060s sys 0m2.448s illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync sunset:~ # echo 3 > /proc/sys/vm/drop_caches illes@sunset:~/tmp> find linux-2.6.21.5/ -noleaf >/dev/null illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/ real 1m7.856s user 0m0.044s sys 0m2.020s Cheers, Szabolcs From owner-xfs@oss.sgi.com Wed Jun 27 15:20:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Jun 2007 15:20:53 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50,J_CHICKENPOX_27 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5RMKjtL020259 for ; Wed, 27 Jun 2007 15:20:47 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id IAA16648; Thu, 28 Jun 2007 08:20:42 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5RMKfeW3392918; Thu, 28 Jun 2007 08:20:42 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5RMKeiL3391840; Thu, 28 Jun 2007 08:20:40 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 28 Jun 2007 08:20:40 +1000 From: David Chinner To: Szabolcs Illes Cc: xfs@oss.sgi.com Subject: Re: After reboot fs with barrier faster deletes then fs with nobarrier Message-ID: <20070627222040.GR989688@sgi.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: 
ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11954 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Wed, Jun 27, 2007 at 06:58:29PM +0100, Szabolcs Illes wrote: > Hi, > > I am using XFS on my laptop, I have realized that nobarrier mount options > sometimes slows down deleting large number of small files, like the kernel > source tree. I made four tests, deleting the kernel source right after > unpack and after reboot, with both barrier and nobarrier options: > > mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2 > > illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync && > reboot > > After reboot: > illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/ > > real 0m28.127s > user 0m0.044s > sys 0m2.924s > > illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync > illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/ > > real 0m14.872s > user 0m0.044s > sys 0m2.872s Of course the second run will be faster here - the inodes are already in cache and so there's no reading from disk needed to find the files to delete.... That's because run time after reboot is determined by how fast you can traverse the directory structure (i.e. how many seeks are involved). Barriers will have little impact on the uncached rm -rf results, but as the cached rm time will be log-I/O bound, nobarrier will be far faster (as you've found out). Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jun 27 16:20:08 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Jun 2007 16:20:13 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=3.0 required=5.0 tests=AWL,BAYES_60,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5RNK5tL008586 for ; Wed, 27 Jun 2007 16:20:08 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 668A81C076ECD; Wed, 27 Jun 2007 19:20:06 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 4E2B6401992C; Wed, 27 Jun 2007 19:20:06 -0400 (EDT) Date: Wed, 27 Jun 2007 19:20:06 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: linux-raid@vger.kernel.org, xfs@oss.sgi.com cc: Alan Piszcz Subject: Fastest Chunk Size w/XFS For MD Software RAID = 1024k Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11955 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs The results speak for themselves: http://home.comcast.net/~jpiszcz/chunk/index.html From owner-xfs@oss.sgi.com Wed Jun 27 16:20:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Jun 2007 16:20:45 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=3.0 required=5.0 tests=AWL,BAYES_60,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com 
(8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5RNKetL008798 for ; Wed, 27 Jun 2007 16:20:41 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 6E2AA1C000275; Wed, 27 Jun 2007 19:20:42 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 6C4ED401992C; Wed, 27 Jun 2007 19:20:42 -0400 (EDT) Date: Wed, 27 Jun 2007 19:20:42 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: linux-raid@vger.kernel.org, xfs@oss.sgi.com cc: Alan Piszcz Subject: Re: Fastest Chunk Size w/XFS For MD Software RAID = 1024k In-Reply-To: Message-ID: References: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11956 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs For drives with 16MB of cache (in this case, raptors). Justin. 
On Wed, 27 Jun 2007, Justin Piszcz wrote:

> The results speak for themselves:
>
> http://home.comcast.net/~jpiszcz/chunk/index.html

From owner-xfs@oss.sgi.com Wed Jun 27 16:24:38 2007
Date: Wed, 27 Jun 2007 19:24:38 -0400 (EDT)
From: Justin Piszcz <jpiszcz@lucidpixels.com>
To: linux-raid@vger.kernel.org, xfs@oss.sgi.com
Cc: Alan Piszcz
Subject: Re: Fastest Chunk Size w/XFS For MD Software RAID = 1024k

For the e-mail archives (one CSV line per chunk size):

p34-128k-chunk,15696M,77236.3,99,445653,86.3333,192267,34.3333,78773.7,99,524463,41,594.9,0,16:100000:16/64,1298.67,10.6667,5964.33,17.3333,3035.67,18.3333,1512,13.6667,5334.33,16,2634.67,19
p34-512k-chunk,15696M,78383,99,436842,86,162969,27,79624,99,486892,38,583.0,0,16:100000:16/64,2019,17,9715,29,4272,23,2250,22,17095,45,3691,30
p34-1024k-chunk,15696M,77672.3,99,455267,87.3333,183772,29.6667,79601.3,99,578225,43.3333,595.933,0,16:100000:16/64,2085.67,18,12953,39,3908.33,23.3333,2375.33,23.3333,18492,51.6667,3388.33,27
p34-4096k-chunk,15696M,33791.1,43.5556,176630,37.3333,72235.1,11.5556,34424.9,44,247925,18.2222,271.644,0,16:100000:16/64,560,4.88889,2928,8.88889,1039.56,5.77778,571.556,5.33333,1729.78,5.33333,1289.33,9.33333

On Wed, 27 Jun 2007, Justin Piszcz wrote:

> For drives with 16MB of cache (in this case, raptors).
>
> Justin.
>
> On Wed, 27 Jun 2007, Justin Piszcz wrote:
>
>> The results speak for themselves:
>>
>> http://home.comcast.net/~jpiszcz/chunk/index.html

From owner-xfs@oss.sgi.com Wed Jun 27 16:29:39 2007
Date: Thu, 28 Jun 2007 09:28:36 +1000
From: Nathan Scott <nscott@aconex.com>
To: David Chinner, Andreas Dilger
Cc: xfs-oss, "Amit K. Arora", linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org, suparna@in.ibm.com, cmm@us.ibm.com
Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate
Message-Id: <1182986916.15488.88.camel@edge.yarra.acx>

On Wed, 2007-06-27 at 23:36 +1000, David Chinner wrote:

> .... Allows setup_swap_extents() to use preallocated files on XFS
> filesystems for swap files without ever needing to convert them.

Using unwritten extents (as opposed to the MKSWAP flag mentioned
earlier) has the unfortunate down side of requiring transactions,
possibly additional IO, and memory allocation during swap. (But this
patch should probably go in regardless, as teaching generic code
about unwritten extents is not a bad idea.)

cheers.
-- 
Nathan

From owner-xfs@oss.sgi.com Wed Jun 27 17:39:58 2007
Date: Thu, 28 Jun 2007 10:39:21 +1000
From: David Chinner <dgc@sgi.com>
To: Nathan Scott
Cc: Andreas Dilger, xfs-oss, "Amit K. Arora", linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org, suparna@in.ibm.com, cmm@us.ibm.com
Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate
Message-ID: <20070628003921.GW989688@sgi.com>

On Thu, Jun 28, 2007 at 09:28:36AM +1000, Nathan Scott wrote:

> On Wed, 2007-06-27 at 23:36 +1000, David Chinner wrote:
>
>> .... Allows setup_swap_extents() to use preallocated files on XFS
>> filesystems for swap files without ever needing to convert them.
>
> Using unwritten extents (as opposed to the MKSWAP flag mentioned
> earlier) has the unfortunate down side of requiring transactions,
> possibly additional IO, and memory allocation during swap. (But
> this patch should probably go in regardless, as teaching generic
> code about unwritten extents is not a bad idea.)

I don't think it does - swapfile I/O looks like it goes direct to bio
without passing through the filesystem. When the swapfile is mapped,
the kernel scans and records the extent map of the entire swapfile in
a separate structure, and AFAICT the swap code uses that built map
without touching the filesystem at all.

If that is true, then the written/unwritten state of the extents is
irrelevant; all we need is allocated disk space for the file, and
swapping should work. And it's not like anyone should be reading the
contents of that swapfile through the filesystem, either. ;)

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Wed Jun 27 17:54:11 2007
Date: Thu, 28 Jun 2007 10:53:08 +1000
From: Nathan Scott <nscott@aconex.com>
To: David Chinner
Cc: Andreas Dilger, xfs-oss, "Amit K. Arora", linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org, suparna@in.ibm.com, cmm@us.ibm.com
Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate
Message-Id: <1182991988.15488.95.camel@edge.yarra.acx>

On Thu, 2007-06-28 at 10:39 +1000, David Chinner wrote:

> I don't think it does - swapfile I/O looks like it goes direct to
> bio without passing through the filesystem. When the swapfile is
> mapped, it scans and records the extent map of the entire swapfile
> in a separate structure and AFAICT the swap code uses that built
> map without touching the filesystem at all.
>
> If that is true then the written/unwritten state of the extents is
> irrelevant; all we need is allocated disk space for the file and
> swapping should work. And it's not like anyone should be reading
> the contents of that swapfile through the filesystem, either. ;)

Ah, yes, good point - that's true. Unwritten extents are ideal for
this, then, as attempts to read swap via the regular interfaces will
return zeros instead of random swapped-out memory contents.

cheers.
From owner-xfs@oss.sgi.com Wed Jun 27 20:06:16 2007
Date: Thu, 28 Jun 2007 13:06:04 +1000
From: David Chinner <dgc@sgi.com>
To: mmarek@suse.cz
Cc: xfs@oss.sgi.com, linux-kernel@vger.kernel.org
Subject: Re: [patch 1/3] Fix XFS_IOC_FSGEOMETRY_V1 in compat mode
Message-ID: <20070628030603.GC989688@sgi.com>

On Tue, Jun 19, 2007 at 03:25:50PM +0200, mmarek@suse.cz wrote:

> The i386 struct xfs_fsop_geom_v1 has no padding after the last
> member, so the size is different.

Looks good. Added to my QA tree...

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Wed Jun 27 20:07:11 2007
Date: Thu, 28 Jun 2007 13:07:00 +1000
From: David Chinner <dgc@sgi.com>
To: mmarek@suse.cz
Cc: xfs@oss.sgi.com, linux-kernel@vger.kernel.org
Subject: Re: [patch 2/3] Fix XFS_IOC_*_TO_HANDLE and XFS_IOC_{OPEN,READLINK}_BY_HANDLE in compat mode
Message-ID: <20070628030700.GD989688@sgi.com>

On Tue, Jun 19, 2007 at 03:25:51PM +0200, mmarek@suse.cz wrote:

> The 32-bit struct xfs_fsop_handlereq has a different size and
> different offsets (due to pointers). TODO: the
> XFS_IOC_{FSSETDM,ATTRLIST,ATTRMULTI}_BY_HANDLE cases are still not
> handled.

Looks good. Added to my QA tree...

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Wed Jun 27 20:50:07 2007
Date: Thu, 28 Jun 2007 13:49:57 +1000
From: David Chinner <dgc@sgi.com>
To: mmarek@suse.cz
Cc: xfs@oss.sgi.com, linux-kernel@vger.kernel.org
Subject: Re: [patch 3/3] Fix XFS_IOC_FSBULKSTAT{,_SINGLE} and XFS_IOC_FSINUMBERS in compat mode
Message-ID: <20070628034957.GE989688@sgi.com>

On Tue, Jun 19, 2007 at 03:25:52PM +0200, mmarek@suse.cz wrote:

> * The 32-bit struct xfs_fsop_bulkreq has a different size and layout
>   of members, no matter the alignment. Move the code out of the
>   #else branch (why was it there in the first place?). Define _32
>   variants of the ioctl constants.
> * The 32-bit struct xfs_bstat is different because of time_t, and on
>   i386 because of different padding. Create a new formatter,
>   xfs_bulkstat_one_compat(), that takes care of this. To do this, we
>   need to make xfs_bulkstat_one_iget() and xfs_bulkstat_one_dinode()
>   non-static.
> * The i386 struct xfs_inogrp has different padding. Introduce a
>   similar "formatter" mechanism for xfs_inumbers: the native
>   formatter is just a copy_to_user(), while the compat formatter
>   takes care of the different layout.

Oh, wow, that is so much nicer than the first version, Michal. ;)
Still, I think there's possibly one further revision:

> +static int xfs_bulkstat_one_compat(
> +	xfs_mount_t	*mp,		/* mount point for filesystem */
> +	xfs_ino_t	ino,		/* inode number to get data for */
> +	void __user	*buffer,	/* buffer to place output in */
> +	int		ubsize,		/* size of buffer */
> +	void		*private_data,	/* my private data */
> +	xfs_daddr_t	bno,		/* starting bno of inode cluster */
> +	int		*ubused,	/* bytes used by me */
> +	void		*dibuff,	/* on-disk inode buffer */
> +	int		*stat)		/* BULKSTAT_RV_... */

Hmmm - this is almost all duplicated code. It's pretty much what I
described, but maybe not /quite/ what I had in mind here.

It's a *big* improvement on the first version, but it seems now that
the only real difference between xfs_bulkstat_one() and
xfs_bulkstat_one_compat() is copy_to_user() vs the discrete
put_user() calls. I think we can remove xfs_bulkstat_one_compat()
completely by using the same method you used with the xfs_inumber_fmt
functions. That would mean the only duplicated code is the initial
ioctl handling (which we can't really avoid). It would also mean that
there is no need to make xfs_bulkstat_one_iget() and
xfs_bulkstat_one_dinode() non-static.

Your thoughts? Other than that possible improvement, this looks
really good.

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Wed Jun 27 21:29:01 2007
Date: Thu, 28 Jun 2007 14:28:53 +1000 (EST)
From: tes@sgi.com (Tim Shimmin)
To: xfs-bugs-internal@sgi.com, xfs@oss.sgi.com, akpm@linux-foundation.org
Subject: TAKE fix rtx warning msg
Message-Id: <20070628042853.2F78D58C38F1@chook.melbourne.sgi.com>

Appease gcc in regards to "warning: 'rtx' is used uninitialized in
this function". Use the uninitialized_var() macro in xfs_bmap_rtalloc
for this.

Signed-off-by: Andrew Morton

Patch provided by Andrew Morton.

--Tim

Date: Thu Jun 28 14:25:55 AEST 2007
Workarea: chook.melbourne.sgi.com:/build/tes/2.6.x-xfs
Inspected by: akpm@linux-foundation.org

The following file(s) were checked into:
longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb

Modid: xfs-linux-melb:xfs-kern:29007a
fs/xfs/xfs_bmap.c - 1.370 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_bmap.c.diff?r1=text&tr1=1.370&r2=text&tr2=1.369&f=h
- Appease gcc in regards to "warning: 'rtx' is used uninitialized in
  this function". Use the uninitialized_var() macro in
  xfs_bmap_rtalloc for this.
  Signed-off-by: Andrew Morton

From owner-xfs@oss.sgi.com Wed Jun 27 21:51:03 2007
Date: Thu, 28 Jun 2007 14:50:49 +1000
From: David Chinner <dgc@sgi.com>
To: Ruben Porras
Cc: xfs@oss.sgi.com
Subject: Re: [PATCH] Implement ioctl to mark AGs as "don't use/use"
Message-ID: <20070628045049.GF989688@sgi.com>

On Wed, Jun 27, 2007 at 12:15:25PM +0200, Ruben Porras wrote:

> The patch has the following parts:
>
> - Necessary changes to xfs_ag.h
> - two new ioctls
> - changes to the allocation functions to avoid using marked AGs
> - an extension to xfs_alloc_log_agf
>
> This should implement the second step on the requirement list to
> shrink an XFS filesystem. Comments are welcome.

It's a good first cut - comments are inline in the patch below.
> Index: xfs/xfs_ag.h
> ===================================================================
> RCS file: /cvs/linux-2.6-xfs/fs/xfs/xfs_ag.h,v
> retrieving revision 1.59
> diff -u -r1.59 xfs_ag.h
> --- xfs/xfs_ag.h	22 May 2007 15:50:48 -0000	1.59
> +++ xfs/xfs_ag.h	27 Jun 2007 09:06:39 -0000
> @@ -69,6 +69,7 @@
>  	__be32	agf_freeblks;	/* total free blocks */
>  	__be32	agf_longest;	/* longest free space */
>  	__be32	agf_btreeblks;	/* # of blocks held in AGF btrees */
> +	__be32	agf_flags;	/* the AGF is allocatable */

The comment should say "persistent AG state flags" or something
similar - it's not just for allocation ;)

>  } xfs_agf_t;
>
>  #define XFS_AGF_MAGICNUM	0x00000001
> @@ -196,8 +197,17 @@
>  	lock_t	pagb_lock;	/* lock for pagb_list */
>  #endif
>  	xfs_perag_busy_t *pagb_list;	/* unstable blocks */
> +	__u32	pagf_flags;	/* the AGF is allocatable */

Ditto.

> --- xfs/xfs_alloc.c	22 May 2007 15:50:48 -0000	1.186
> +++ xfs/xfs_alloc.c	27 Jun 2007 09:06:40 -0000
> @@ -549,6 +549,7 @@
>  	xfs_alloc_arg_t	*args)	/* argument structure for allocation */
>  {
>  	int		error=0;
> +	xfs_perag_t	*pag;
>  #ifdef XFS_ALLOC_TRACE
>  	static char	fname[] = "xfs_alloc_ag_vextent";
>  #endif
> @@ -559,6 +560,15 @@
>  	ASSERT(args->mod < args->prod);
>  	ASSERT(args->alignment > 0);
>  	/*
> +	 * Return an error if the a.g. should not be allocated.
> +	 * This happens normally during a shrink operation.
> +	 */
> +	pag = (args->pag);
> +	if (unlikely(pag->pagf_flags & XFS_AGF_FLAGS_ALLOC_DENY)) {
> +		args->agbno = NULLAGBLOCK;
> +		return 0;
> +	}
> +	/*
>  	 * Branch to correct routine based on the type.
>  	 */
>  	args->wasfromfl = 0;

Looks like some whitespace problems there (mixing spaces and tabs).
Also, can you include empty lines on either side of a unique hunk of
code like this?

I wonder how many other places we are going to have to put this
check? I haven't looked myself, but this is a good place to start ;)

> Index: xfs/xfs_fs.h
> ===================================================================
> RCS file: /cvs/linux-2.6-xfs/fs/xfs/xfs_fs.h,v
> retrieving revision 1.33
> diff -u -r1.33 xfs_fs.h
> --- xfs/xfs_fs.h	22 May 2007 15:50:48 -0000	1.33
> +++ xfs/xfs_fs.h	27 Jun 2007 09:06:40 -0000
> @@ -476,22 +476,24 @@
>  #define XFS_IOC_OPEN_BY_HANDLE	     _IOWR('X', 107, struct xfs_fsop_handlereq)
>  #define XFS_IOC_READLINK_BY_HANDLE   _IOWR('X', 108, struct xfs_fsop_handlereq)
>  #define XFS_IOC_SWAPEXT	     _IOWR('X', 109, struct xfs_swapext)
> -#define XFS_IOC_FSGROWFSDATA	     _IOW ('X', 110, struct xfs_growfs_data)
> -#define XFS_IOC_FSGROWFSLOG	     _IOW ('X', 111, struct xfs_growfs_log)
> -#define XFS_IOC_FSGROWFSRT	     _IOW ('X', 112, struct xfs_growfs_rt)
> -#define XFS_IOC_FSCOUNTS	     _IOR ('X', 113, struct xfs_fsop_counts)
> -#define XFS_IOC_SET_RESBLKS	     _IOWR('X', 114, struct xfs_fsop_resblks)
> -#define XFS_IOC_GET_RESBLKS	     _IOR ('X', 115, struct xfs_fsop_resblks)
> -#define XFS_IOC_ERROR_INJECTION     _IOW ('X', 116, struct xfs_error_injection)
> -#define XFS_IOC_ERROR_CLEARALL	     _IOW ('X', 117, struct xfs_error_injection)
> -/* XFS_IOC_ATTRCTL_BY_HANDLE -- deprecated 118 */
> -#define XFS_IOC_FREEZE		     _IOWR('X', 119, int)
> -#define XFS_IOC_THAW		     _IOWR('X', 120, int)
> -#define XFS_IOC_FSSETDM_BY_HANDLE    _IOW ('X', 121, struct xfs_fsop_setdm_handlereq)
> -#define XFS_IOC_ATTRLIST_BY_HANDLE   _IOW ('X', 122, struct xfs_fsop_attrlist_handlereq)
> -#define XFS_IOC_ATTRMULTI_BY_HANDLE  _IOW ('X', 123, struct xfs_fsop_attrmulti_handlereq)
> -#define XFS_IOC_FSGEOMETRY	     _IOR ('X', 124, struct xfs_fsop_geom)
> -#define XFS_IOC_GOINGDOWN	     _IOR ('X', 125, __uint32_t)
> +#define XFS_IOC_GET_AGF_FLAGS	     _IOW ('X', 110, struct xfs_ioc_agflags)
> +#define XFS_IOC_SET_AGF_FLAGS	     _IOW ('X', 111, struct xfs_ioc_agflags)
> +#define XFS_IOC_FSGROWFSDATA	     _IOW ('X', 111, struct xfs_growfs_data)
> +#define XFS_IOC_FSGROWFSLOG	     _IOW ('X', 112, struct xfs_growfs_log)
> +#define XFS_IOC_FSGROWFSRT _IOW ('X', 113, struct xfs_growfs_rt) > +#define XFS_IOC_FSCOUNTS _IOR ('X', 114, struct xfs_fsop_counts) > +#define XFS_IOC_SET_RESBLKS _IOWR('X', 115, struct xfs_fsop_resblks) > +#define XFS_IOC_GET_RESBLKS _IOR ('X', 116, struct xfs_fsop_resblks) > +#define XFS_IOC_ERROR_INJECTION _IOW ('X', 117, struct xfs_error_injection) > +#define XFS_IOC_ERROR_CLEARALL _IOW ('X', 118, struct xfs_error_injection) > +/* XFS_IOC_ATTRCTL_BY_HANDLE -- deprecated 119 */ > +#define XFS_IOC_FREEZE _IOWR('X', 120, int) > +#define XFS_IOC_THAW _IOWR('X', 121, int) > +#define XFS_IOC_FSSETDM_BY_HANDLE _IOW ('X', 122, struct xfs_fsop_setdm_handlereq) > +#define XFS_IOC_ATTRLIST_BY_HANDLE _IOW ('X', 123, struct xfs_fsop_attrlist_handlereq) > +#define XFS_IOC_ATTRMULTI_BY_HANDLE _IOW ('X', 124, struct xfs_fsop_attrmulti_handlereq) > +#define XFS_IOC_FSGEOMETRY _IOR ('X', 125, struct xfs_fsop_geom) > +#define XFS_IOC_GOINGDOWN _IOR ('X', 126, __uint32_t) You shouldn't renumber the existing ioctls - that changes the interfaces to userspace and so will break lots of stuff :( Just put them at the end as 126/127. (Oh, you've got two "111" ioctls in there, anyway ;) Also, XFS_IOC_GET_AGF_FLAGS needs to be _IOWR as it has input and output parameters that need to be copied in and out. We only need to copy in for XFS_IOC_SET_AGF_FLAGS, so _IOW is right for that. (I think I got that the right way around....) > +STATIC void > +xfs_ag_set_flags_private( > + xfs_trans_t *tp, > + xfs_buf_t *agbp, /* buffer for a.g. freelist header */ > + xfs_perag_t *pag, > + __u32 flags) > +{ > + xfs_agf_t *agf; /* a.g. freespace structure */ > + > + agf = XFS_BUF_TO_AGF(agbp); > + pag->pagf_flags |= flags; > + agf->agf_flags = cpu_to_be32(pag->pagf_flags); > + > + xfs_alloc_log_agf(tp, agbp, XFS_TRANS_AGF_FLAGS); XFS_TRANS_AGF_FLAGS doesn't match with the other AGF log flags. They are defined in fs/xfs/xfs_ag.h. 
Search for XFS_AGF_BTREEBLKS (which is a flag passed to xfs_alloc_log_agf()). You'll also need to increment XFS_AGF_NUM_BITS.... > +int > +xfs_ag_set_flags( > + xfs_mount_t *mp, > + xfs_ioc_agflags_t *ioc_flags) > +{ > + xfs_agnumber_t agno; > + xfs_perag_t *pag; > + xfs_buf_t *bp; > + int error; > + xfs_trans_t *tp; > + > + agno = ioc_flags->ag; > + if (agno >= mp->m_sb.sb_agcount) > + return -EINVAL; > + > + tp = xfs_trans_alloc(mp, XFS_TRANS_AGF_FLAGS); Ah, I see the confusion now... Ok, so the transaction type definition is different to the field in a structure that is being logged. The flag passed into xfs_trans_alloc() is placed in the transaction header in the log to describe the type of transaction that needs to be recovered. Some transaction types require special handling and so we need to be able to tell what type of transaction it is in the log. These are defined in fs/xfs/xfs_trans.h (search for XFS_TRANS_SB_COUNT). The flag passed to xfs_alloc_log_agf() tells the transaction which bits in the AGF are being modified as only the modified bits of the AGF are logged in the transaction. It is only used within the xfs_alloc_log_agf() function and is used to look up the offset into the AGF of the modified field(s). IOWs, the flag passed to xfs_trans_alloc() defines the type of the transaction and the flag passed to xfs_alloc_log_agf() defines what has been modified within the transaction. 
> Index: xfs/xfs_trans.h > =================================================================== > RCS file: /cvs/linux-2.6-xfs/fs/xfs/xfs_trans.h,v > retrieving revision 1.145 > diff -u -r1.145 xfs_trans.h > --- xfs/xfs_trans.h 22 May 2007 15:50:48 -0000 1.145 > +++ xfs/xfs_trans.h 27 Jun 2007 09:06:41 -0000 > @@ -418,6 +418,10 @@ > #define XFS_TRANS_SB_REXTENTS 0x00001000 > #define XFS_TRANS_SB_REXTSLOG 0x00002000 > > +/* > + * Value for xfs_trans_mod_agf > + */ > +#define XFS_TRANS_AGF_FLAGS 0x00004000 This is the transaction type, and so needs to be defined as the next transaction after XFS_TRANS_SB_COUNT. > @@ -860,6 +860,38 @@ > return 0; > } > > + case XFS_IOC_GET_AGF_FLAGS: { > + xfs_ioc_agflags_t in; > + __u32 out; > + > + if (!capable(CAP_SYS_ADMIN)) > + return -EPERM; > + > + if (copy_from_user(&in, arg, sizeof(in))) > + return -XFS_ERROR(EFAULT); > + > + error = xfs_ag_get_flags(mp, &in, &out); > + if (error) > + return -error; > + > + if (copy_to_user(arg, &out, sizeof(out))) > + return -XFS_ERROR(EFAULT); I don't think that is correct - the flags need to be placed into the structure that was passed in, not completely overwritten. That is: + case XFS_IOC_GET_AGF_FLAGS: { + xfs_ioc_agflags_t inout; + __u32 out; + + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + + if (copy_from_user(&inout, arg, sizeof(inout))) + return -XFS_ERROR(EFAULT); + + error = xfs_ag_get_flags(mp, &inout, &out); + if (error) + return -error; + + inout.flags = out; + if (copy_to_user(arg, &inout, sizeof(inout))) + return -XFS_ERROR(EFAULT); Basically, our input structure is also the output structure, and we need to put the flags: +typedef struct xfs_ioc_agflags +{ + xfs_agnumber_t ag; + __u32 flags; <<<<<<<< here +} xfs_ioc_agflags_t; Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Jun 27 22:00:59 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Jun 2007 22:01:03 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.6 required=5.0 tests=AWL,BAYES_50,J_CHICKENPOX_27 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5S50utL000949 for ; Wed, 27 Jun 2007 22:00:57 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA26985; Thu, 28 Jun 2007 15:00:47 +1000 Message-ID: <4683407E.9080707@sgi.com> Date: Thu, 28 Jun 2007 15:00:46 +1000 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.4 (Macintosh/20070604) MIME-Version: 1.0 To: David Chinner CC: Szabolcs Illes , xfs@oss.sgi.com Subject: Re: After reboot fs with barrier faster deletes then fs with nobarrier References: <20070627222040.GR989688@sgi.com> In-Reply-To: <20070627222040.GR989688@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11966 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs David Chinner wrote: > On Wed, Jun 27, 2007 at 06:58:29PM +0100, Szabolcs Illes wrote: >> Hi, >> >> I am using XFS on my laptop, I have realized that nobarrier mount options >> sometimes slows down deleting large number of small files, like the kernel >> source tree. 
I made four tests, deleting the kernel source right after >> unpack and after reboot, with both barrier and nobarrier options: >> >> mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2 >> illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync && reboot >> After reboot: >> illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/ >> real 0m28.127s >> user 0m0.044s >> sys 0m2.924s >> mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2,nobarrier >> illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync && reboot >> After reboot: >> illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/ >> real 1m12.738s >> user 0m0.032s >> sys 0m2.548s >> It looks like with barrier it's faster deleting files after reboot. >> ( 28 sec vs 72 sec !!! ). > > Of course the second run will be faster here - the inodes are already in > cache and so there's no reading from disk needed to find the files > to delete.... > > That's because run time after reboot is determined by how fast you > can traverse the directory structure (i.e. how many seeks are > involved). > Barriers will have little impact on the uncached rm -rf > results, But it looks like barriers _are_ having impact on the uncached rm -rf results. 
--Tim From owner-xfs@oss.sgi.com Wed Jun 27 22:08:53 2007 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Jun 2007 22:08:58 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5S58otL004645 for ; Wed, 27 Jun 2007 22:08:52 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA27070; Thu, 28 Jun 2007 15:08:44 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5S58feW3573966; Thu, 28 Jun 2007 15:08:42 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5S58bl33574741; Thu, 28 Jun 2007 15:08:37 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Thu, 28 Jun 2007 15:08:37 +1000 From: David Chinner To: Justin Piszcz Cc: linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz Subject: Re: Fastest Chunk Size w/XFS For MD Software RAID = 1024k Message-ID: <20070628050837.GG989688@sgi.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11967 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Wed, Jun 27, 2007 at 07:20:42PM -0400, Justin Piszcz wrote: > For drives with 16MB of cache (in this case, raptors). That's four (4) drives, right? 
If so, how do you get a block read rate of 578MB/s from 4 drives? That's 145MB/s per drive.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 28 00:43:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 00:43:53 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5S7hitL008494 for ; Thu, 28 Jun 2007 00:43:47 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA00506; Thu, 28 Jun 2007 17:43:39 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id A6EEA58C38F1; Thu, 28 Jun 2007 17:43:39 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: PARTIAL TAKE 964469 - filestreams qa tests Message-Id: <20070628074339.A6EEA58C38F1@chook.melbourne.sgi.com> Date: Thu, 28 Jun 2007 17:43:39 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11968 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs QA tests for filestreams Date: Thu Jun 28 17:42:52 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/xfs-cmds Inspected by: tes@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:29011a xfstests/170 - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/170 xfstests/170.out - 1.1 - new 
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/170.out xfstests/171 - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/171 xfstests/171.out - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/171.out xfstests/172 - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/172 xfstests/172.out - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/172.out xfstests/173 - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/173 xfstests/173.out - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/173.out xfstests/174 - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/174 xfstests/174.out - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/174.out xfstests/common.filestreams - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/common.filestreams xfstests/group - 1.108 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/group.diff?r1=text&tr1=1.108&r2=text&tr2=1.107&f=h - QA tests for filestreams From owner-xfs@oss.sgi.com Thu Jun 28 00:53:58 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 00:54:03 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5S7rutL012944 for ; Thu, 28 Jun 2007 00:53:58 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 395FEE72B0; Thu, 28 Jun 2007 08:53:33 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id Ic4b02Ms0pgl; Thu, 28 Jun 2007 08:53:01 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by 
mail.ukfsn.org (Postfix) with ESMTP id 8FCB3E72FF; Thu, 28 Jun 2007 08:53:31 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1I3op9-0004Vl-CU; Thu, 28 Jun 2007 08:53:55 +0100 Message-ID: <46836912.4000508@dgreaves.com> Date: Thu, 28 Jun 2007 08:53:54 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.4 (X11/20070618) MIME-Version: 1.0 To: David Chinner Cc: Justin Piszcz , linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz Subject: Re: Fastest Chunk Size w/XFS For MD Software RAID = 1024k References: <20070628050837.GG989688@sgi.com> In-Reply-To: <20070628050837.GG989688@sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11969 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs David Chinner wrote: > On Wed, Jun 27, 2007 at 07:20:42PM -0400, Justin Piszcz wrote: >> For drives with 16MB of cache (in this case, raptors). > > That's four (4) drives, right? I'm pretty sure he's using 10 - email a few days back... >>>>>> Justin Piszcz wrote: >>>>> Running test with 10 RAPTOR 150 hard drives, expect it to take >>>>> awhile until I get the results, avg them etc. :) > If so, how do you get a block read rate of 578MB/s from > 4 drives? That's 145MB/s per drive.... Which gives a far more reasonable 60MB/s per drive... 
David From owner-xfs@oss.sgi.com Thu Jun 28 00:59:50 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 00:59:54 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5S7xktL015898 for ; Thu, 28 Jun 2007 00:59:48 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA00958; Thu, 28 Jun 2007 17:59:43 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 526DD58C38F1; Thu, 28 Jun 2007 17:59:43 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: PARTIAL TAKE 964468 - fix qa test 004 - XFS can ENOSPC in bad places Message-Id: <20070628075943.526DD58C38F1@chook.melbourne.sgi.com> Date: Thu, 28 Jun 2007 17:59:43 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11970 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Make sure we take into account newly reserved blocks as introduced in 964468. 
Date: Thu Jun 28 17:56:22 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/xfs-cmds Inspected by: ddiss@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:29013a xfstests/004 - 1.15 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/004.diff?r1=text&tr1=1.15&r2=text&tr2=1.14&f=h From owner-xfs@oss.sgi.com Thu Jun 28 00:59:58 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 01:00:02 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.8 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5S7xstL015990 for ; Thu, 28 Jun 2007 00:59:56 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA00962; Thu, 28 Jun 2007 17:59:51 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 1161) id 8BE6358C38F1; Thu, 28 Jun 2007 17:59:51 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 961297 - dmapi-devel RPM should Require: xfsprogs-devel Message-Id: <20070628075951.8BE6358C38F1@chook.melbourne.sgi.com> Date: Thu, 28 Jun 2007 17:59:51 +1000 (EST) From: bnaujok@sgi.com (Barry Naujok) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11971 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs Update dmapi specfile for dmapi-devel requiring xfsprogs-devel Date: Thu Jun 28 17:59:03 AEST 2007 Workarea: chook.melbourne.sgi.com:/home/bnaujok/isms/xfs-cmds Inspected by: 
sandeen@sandeen.net The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:29012a dmapi/build/rpm/dmapi.spec.in - 1.17 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/dmapi/build/rpm/dmapi.spec.in.diff?r1=text&tr1=1.17&r2=text&tr2=1.16&f=h - dmapi-devel RPM should Require: xfsprogs-devel From owner-xfs@oss.sgi.com Thu Jun 28 01:07:46 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 01:07:52 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.5 required=5.0 tests=AWL,BAYES_50,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5S87itL020901 for ; Thu, 28 Jun 2007 01:07:45 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 063EB1C076ECD; Thu, 28 Jun 2007 04:07:46 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 039C3401992C; Thu, 28 Jun 2007 04:07:46 -0400 (EDT) Date: Thu, 28 Jun 2007 04:07:45 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: David Chinner cc: linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz Subject: Re: Fastest Chunk Size w/XFS For MD Software RAID = 1024k In-Reply-To: <20070628050837.GG989688@sgi.com> Message-ID: References: <20070628050837.GG989688@sgi.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11973 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs 10 disks total. Justin. 
On Thu, 28 Jun 2007, David Chinner wrote: > On Wed, Jun 27, 2007 at 07:20:42PM -0400, Justin Piszcz wrote: >> For drives with 16MB of cache (in this case, raptors). > > That's four (4) drives, right? > > If so, how do you get a block read rate of 578MB/s from > 4 drives? That's 145MB/s per drive.... > > Cheers, > > Dave. > -- > Dave Chinner > Principal Engineer > SGI Australian Software Group > From owner-xfs@oss.sgi.com Thu Jun 28 01:07:36 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 01:07:43 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=3.0 required=5.0 tests=AWL,BAYES_60,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5S87ZtL020709 for ; Thu, 28 Jun 2007 01:07:36 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 26FC31C000275; Thu, 28 Jun 2007 04:07:36 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 23D30401992C; Thu, 28 Jun 2007 04:07:36 -0400 (EDT) Date: Thu, 28 Jun 2007 04:07:36 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Peter Rabbitson cc: linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz Subject: Re: Fastest Chunk Size w/XFS For MD Software RAID = 1024k In-Reply-To: <46832E60.9000006@rabbit.us> Message-ID: References: <46832E60.9000006@rabbit.us> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11972 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs mdadm --create \ --verbose /dev/md3 \ --level=5 \ --raid-devices=10 \ --chunk=1024 \ 
--force \ --run /dev/sd[cdefghijkl]1 Justin. On Thu, 28 Jun 2007, Peter Rabbitson wrote: > Justin Piszcz wrote: >> The results speak for themselves: >> >> http://home.comcast.net/~jpiszcz/chunk/index.html >> > > > What is the array layout (-l ? -n ? -p ?) > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > From owner-xfs@oss.sgi.com Thu Jun 28 01:21:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 01:21:35 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.8 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5S8LRtL030572 for ; Thu, 28 Jun 2007 01:21:29 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA01613; Thu, 28 Jun 2007 18:21:24 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 1161) id BD38D58C38F1; Thu, 28 Jun 2007 18:21:24 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: PARTIAL TAKE 966972 - make sure the library link respects LDFLAGS Message-Id: <20070628082124.BD38D58C38F1@chook.melbourne.sgi.com> Date: Thu, 28 Jun 2007 18:21:24 +1000 (EST) From: bnaujok@sgi.com (Barry Naujok) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11974 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs Date: Thu Jun 28 18:20:51 AEST 2007 Workarea: chook.melbourne.sgi.com:/home/bnaujok/isms/xfs-cmds 
Inspected by: SpanKY The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:29014a xfsprogs/include/buildmacros - 1.18 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/include/buildmacros.diff?r1=text&tr1=1.18&r2=text&tr2=1.17&f=h - make sure the library link respects LDFLAGS From owner-xfs@oss.sgi.com Thu Jun 28 01:27:15 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 01:27:19 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.9 required=5.0 tests=AWL,BAYES_60,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5S8REtL001466 for ; Thu, 28 Jun 2007 01:27:15 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id F086E1C000275; Thu, 28 Jun 2007 04:27:15 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id E1718401992C; Thu, 28 Jun 2007 04:27:15 -0400 (EDT) Date: Thu, 28 Jun 2007 04:27:15 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Peter Rabbitson cc: linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz Subject: Re: Fastest Chunk Size w/XFS For MD Software RAID = 1024k In-Reply-To: <46837056.4050306@rabbit.us> Message-ID: References: <46832E60.9000006@rabbit.us> <46837056.4050306@rabbit.us> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11975 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs On Thu, 28 Jun 2007, Peter Rabbitson wrote: > Justin Piszcz wrote: >> mdadm --create \ >> 
--verbose /dev/md3 \ >> --level=5 \ >> --raid-devices=10 \ >> --chunk=1024 \ >> --force \ >> --run >> /dev/sd[cdefghijkl]1 >> >> Justin. > > Interesting, I came up with the same results (1M chunk being superior) with a > completely different raid set with XFS on top: > > mdadm --create \ > --level=10 \ > --chunk=1024 \ > --raid-devices=4 \ > --layout=f3 \ > ... > > Could it be attributed to XFS itself? > > Peter > Good question, by the way how much cache do the drives have that you are testing with? Justin. From owner-xfs@oss.sgi.com Thu Jun 28 01:30:02 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 01:30:06 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.8 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5S8TwtL003001 for ; Thu, 28 Jun 2007 01:30:00 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA01738; Thu, 28 Jun 2007 18:29:55 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 1161) id 64C7A58C38F1; Thu, 28 Jun 2007 18:29:55 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 965902 - Add filestreams support to xfs_io Message-Id: <20070628082955.64C7A58C38F1@chook.melbourne.sgi.com> Date: Thu, 28 Jun 2007 18:29:55 +1000 (EST) From: bnaujok@sgi.com (Barry Naujok) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11976 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs Date: Thu Jun 28 18:29:21 AEST 2007 Workarea: 
chook.melbourne.sgi.com:/home/bnaujok/isms/xfs-cmds Inspected by: tes@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:29015a xfsprogs/db/inode.c - 1.19 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/db/inode.c.diff?r1=text&tr1=1.19&r2=text&tr2=1.18&f=h - Add filestreams support to xfs_db inode output xfsprogs/VERSION - 1.173 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/VERSION.diff?r1=text&tr1=1.173&r2=text&tr2=1.172&f=h xfsprogs/doc/CHANGES - 1.242 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/doc/CHANGES.diff?r1=text&tr1=1.242&r2=text&tr2=1.241&f=h - Update to 2.9.1 xfsprogs/include/xfs_fs.h - 1.40 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/include/xfs_fs.h.diff?r1=text&tr1=1.40&r2=text&tr2=1.39&f=h xfsprogs/include/xfs_dinode.h - 1.21 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/include/xfs_dinode.h.diff?r1=text&tr1=1.21&r2=text&tr2=1.20&f=h - Add filestreams support xfsprogs/man/man3/xfsctl.3 - 1.9 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/man/man3/xfsctl.3.diff?r1=text&tr1=1.9&r2=text&tr2=1.8&f=h - Document filestream inode attr xfsprogs/io/attr.c - 1.9 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/io/attr.c.diff?r1=text&tr1=1.9&r2=text&tr2=1.8&f=h - Add filestreams support to xfs_io From owner-xfs@oss.sgi.com Thu Jun 28 01:39:14 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 01:39:18 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.8 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5S8dBtL007311 for ; Thu, 28 Jun 2007 01:39:13 -0700 Received: from 
chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA02014; Thu, 28 Jun 2007 18:39:08 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 1161) id 6C03658C38F1; Thu, 28 Jun 2007 18:39:08 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 966972 - make sure the library link respects LDFLAGS Message-Id: <20070628083908.6C03658C38F1@chook.melbourne.sgi.com> Date: Thu, 28 Jun 2007 18:39:08 +1000 (EST) From: bnaujok@sgi.com (Barry Naujok) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11977 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs Date: Thu Jun 28 18:38:39 AEST 2007 Workarea: chook.melbourne.sgi.com:/home/bnaujok/isms/xfs-cmds Inspected by: SpanKY The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:29016a dmapi/include/buildmacros - 1.16 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/dmapi/include/buildmacros.diff?r1=text&tr1=1.16&r2=text&tr2=1.15&f=h acl/include/buildmacros - 1.18 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/acl/include/buildmacros.diff?r1=text&tr1=1.18&r2=text&tr2=1.17&f=h attr/include/buildmacros - 1.16 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/attr/include/buildmacros.diff?r1=text&tr1=1.16&r2=text&tr2=1.15&f=h xfsdump/include/buildmacros - 1.17 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsdump/include/buildmacros.diff?r1=text&tr1=1.17&r2=text&tr2=1.16&f=h - make sure the library link respects LDFLAGS From owner-xfs@oss.sgi.com Thu Jun 28 02:26:39 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 02:26:44 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) 
on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.1 required=5.0 tests=BAYES_99,J_CHICKENPOX_43 autolearn=no version=3.2.0-pre1-r499012 Received: from z2.cat.iki.fi (z2.cat.iki.fi [212.16.98.133]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5S9QbtL027922 for ; Thu, 28 Jun 2007 02:26:38 -0700 Received: (mea@mea-ext) by mail.zmailer.org id S3541649AbXF1JFv (ORCPT ); Thu, 28 Jun 2007 12:05:51 +0300 Date: Thu, 28 Jun 2007 12:05:51 +0300 From: Matti Aarnio To: Peter Rabbitson Cc: Justin Piszcz , linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz Subject: Re: Fastest Chunk Size w/XFS For MD Software RAID = 1024k Message-ID: <20070628090551.GG4504@mea-ext.zmailer.org> References: <46832E60.9000006@rabbit.us> <46837056.4050306@rabbit.us> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <46837056.4050306@rabbit.us> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11978 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: matti.aarnio@zmailer.org Precedence: bulk X-list: xfs On Thu, Jun 28, 2007 at 10:24:54AM +0200, Peter Rabbitson wrote: > Interesting, I came up with the same results (1M chunk being superior) > with a completely different raid set with XFS on top: > > mdadm --create \ > --level=10 \ > --chunk=1024 \ > --raid-devices=4 \ > --layout=f3 \ > ... > > Could it be attributed to XFS itself? Sort of.. /dev/md4: Version : 00.90.03 Raid Level : raid5 Raid Devices : 4 Total Devices : 4 Preferred Minor : 4 Active Devices : 4 Working Devices : 4 Layout : left-symmetric Chunk Size : 256K This means there are 3x 256k for the user data.. 
Now I had to carefully tune the XFS bsize/sunit/swidth to match that:

meta-data=/dev/DataDisk/lvol0  isize=256    agcount=32, agsize=7325824 blks
         =                     sectsz=512   attr=1
data     =                     bsize=4096   blocks=234426368, imaxpct=25
         =                     sunit=64     swidth=192 blks, unwritten=1
...

That is, 4k * 64 = 256k, and 64 * 3 = 192. With that, bulk writing on the filesystem runs without needing to read blocks back from disk to calculate RAID5 parity data, which is what happens when the filesystem's idea of a block does not align with the RAID5 stripe. I do have LVM in between the MD-RAID5 and XFS, so I also aligned the LVM to that 3 * 256k. This alignment boosted write performance by nearly a factor of 2 over mkfs.xfs with default parameters. With a very wide RAID5, like the original question... I would find it very surprising if aligning the upper layers to the MD-RAID layer were not important there as well. Very small contiguous writes do not make good use of the disk mechanics (seek time, rotational delay), so something on the order of 128k-1024k will speed things up -- presuming that when you are writing, you are doing many MB at a time. Database transactions are a lot smaller, and are indeed harmed by such large megachunk-IO oriented layouts. RAID levels 0 and 1 (and 10) never need to read back parts of the stripe when an incoming write alters only a subset of it. Some DB application on top of the filesystem would benefit if we had a way for it to ask about these alignment boundaries, so it could read a whole alignment block even though it writes out only a subset of it. (The theory being that those same blocks would then be in the memory cache and thus be available for write-back parity calculation.)
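The arithmetic spelled out above (4k * 64 = 256k, 64 * 3 = 192) generalizes; as a small sketch, with the chunk size and data-disk count taken from the 4-disk RAID5 example in this thread rather than being defaults:

```shell
# Sketch: derive the sunit/swidth values (in 4k filesystem blocks) that
# match an md chunk size. Example: 256k chunk on a 4-device RAID5,
# which has 3 data-bearing disks per stripe.
chunk_kb=256        # md chunk size per disk
data_disks=3        # RAID5 with 4 devices: 3 data disks per stripe
block_kb=4          # XFS bsize=4096

sunit=$((chunk_kb / block_kb))   # 256k / 4k = 64 blocks
swidth=$((sunit * data_disks))   # 64 * 3   = 192 blocks

echo "sunit=$sunit swidth=$swidth"
# prints: sunit=64 swidth=192
```

Note that the mkfs.xfs command-line options -d sunit=/swidth= take 512-byte sectors (the su=/sw= variants take bytes and a stripe-unit multiplier), while the values mkfs prints in its summary, as quoted above, are in filesystem blocks.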
> Peter /Matti Aarnio From owner-xfs@oss.sgi.com Thu Jun 28 03:07:38 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 03:07:44 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.0 required=5.0 tests=BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from smtp2.linux-foundation.org (smtp2.linux-foundation.org [207.189.120.14]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SA7atL006435 for ; Thu, 28 Jun 2007 03:07:37 -0700 Received: from imap1.linux-foundation.org (imap1.linux-foundation.org [207.189.120.55]) by smtp2.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with ESMTP id l5S9tntc007505 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 28 Jun 2007 02:55:50 -0700 Received: from box (localhost [127.0.0.1]) by imap1.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with SMTP id l5S9th4u022685; Thu, 28 Jun 2007 02:55:43 -0700 Date: Thu, 28 Jun 2007 02:55:43 -0700 From: Andrew Morton To: "Amit K. 
Arora" Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 0/6][TAKE5] fallocate system call Message-Id: <20070628025543.9467216f.akpm@linux-foundation.org> In-Reply-To: <20070625132810.GA1951@amitarora.in.ibm.com> References: <20070510005926.GT85884050@sgi.com> <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> X-Mailer: Sylpheed 2.4.1 (GTK+ 2.8.17; x86_64-unknown-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.181 $ X-Scanned-By: MIMEDefang 2.53 on 207.189.120.14 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11979 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@linux-foundation.org Precedence: bulk X-list: xfs On Mon, 25 Jun 2007 18:58:10 +0530 "Amit K. Arora" wrote: > N O T E: > ------- > 1) Only Patches 4/7 and 7/7 are NEW. Rest of them are _already_ part > of ext4 patch queue git tree hosted by Ted. Why the heck are replacements for these things being sent out again when they're already in -mm and they're already in Ted's queue (from which I need to diligently drop them each time I remerge)? Are we all supposed to re-review the entire patchset (or at least #4 and #7) again? The core kernel changes are not appropriate to the ext4 tree. 
For a start, the syscall numbers in Ted's queue are wrong (other new syscalls are pending). Patches which add syscalls are an utter PITA to carry due to all the patch conflicts and to the relatively frequent syscall renumbering (they don't get numbered in time-of-arrival order due to differing rates at which patches mature). Please drop the non-ext4 patches from the ext4 tree and send incremental patches against the (non-ext4) fallocate patches in -mm. And try to get the code finished? Time is pressing. Thanks. From owner-xfs@oss.sgi.com Thu Jun 28 03:38:53 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 03:38:58 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.9 required=5.0 tests=AWL,BAYES_50,MISSING_HEADERS autolearn=no version=3.2.0-pre1-r499012 Received: from mail.pawisda.de (mail.pawisda.de [213.157.4.156]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SAcotL016642 for ; Thu, 28 Jun 2007 03:38:53 -0700 Received: from localhost (localhost.intra.frontsite.de [127.0.0.1]) by mail.pawisda.de (Postfix) with ESMTP id 3D0A4D550; Thu, 28 Jun 2007 12:38:51 +0200 (CEST) Received: from mail.pawisda.de ([127.0.0.1]) by localhost (ndb [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 26023-02; Thu, 28 Jun 2007 12:38:44 +0200 (CEST) Received: from [192.168.51.2] (lw-pc002.intra.frontsite.de [192.168.51.2]) by mail.pawisda.de (Postfix) with ESMTP id 8F78AD534; Thu, 28 Jun 2007 12:38:44 +0200 (CEST) Message-ID: <46838FB4.1040906@linworks.de> Date: Thu, 28 Jun 2007 12:38:44 +0200 From: Ruben Porras User-Agent: Mozilla-Thunderbird 2.0.0.4 (X11/20070618) MIME-Version: 1.0 Cc: xfs@oss.sgi.com, iusty@k1024.org Subject: Re: XFS shrink (step 0) References: <1180715974.10796.46.camel@localhost> <20070604001632.GA86004887@sgi.com> <1182291751.5289.9.camel@localhost> <20070619234248.GT86004887@sgi.com> In-Reply-To: <20070619234248.GT86004887@sgi.com> 
Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 8bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: by amavisd-new at pawisda.de X-Virus-Status: Clean X-archive-position: 11980 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: ruben.porras@linworks.de Precedence: bulk X-list: xfs David Chinner wrote: > No, there isn't anything currently in existence to do this. > > It's not difficult, though. What you need to do is count the number of > used blocks in the AGs that will be truncated off, and check whether > there is enough free space in the remaining AGs to hold all the > blocks that we are going to move. > > I think this could be done with a single loop across the perag > array or with a simple xfs_db wrapper and some shell/awk/perl > magic. > Is it really ok to depend on shell/awk/perl? I'll do it in C, looping through the perag array.
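As a hedged sketch of the check Dave describes, with the xfs_db/perag plumbing replaced by made-up per-AG numbers (a real version would read the freeblks count from each AGF, or pagf_freeblks from the perag array):

```shell
# Sketch of the shrink feasibility check: sum the used blocks in the AGs
# that would be truncated off, and compare against the free space in the
# AGs that remain. All numbers below are illustrative sample data.
used_per_ag="1000 800 600 400 200"   # used blocks, AGs 0..4
free_per_ag="200 400 600 800 1000"   # free blocks, AGs 0..4
new_agcount=3                        # shrink would cut off AGs 3 and 4

ag=0; used_in_cut=0
for u in $used_per_ag; do
    # blocks in truncated AGs must be moved somewhere
    [ "$ag" -ge "$new_agcount" ] && used_in_cut=$((used_in_cut + u))
    ag=$((ag + 1))
done

ag=0; free_in_kept=0
for f in $free_per_ag; do
    # free space in surviving AGs is where they can go
    [ "$ag" -lt "$new_agcount" ] && free_in_kept=$((free_in_kept + f))
    ag=$((ag + 1))
done

if [ "$used_in_cut" -le "$free_in_kept" ]; then
    verdict="shrink fits"
else
    verdict="not enough free space"
fi
echo "$used_in_cut used blocks to move, $free_in_kept free: $verdict"
```

With these sample numbers, 600 used blocks must move and 1200 free blocks remain, so the shrink would fit.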
-- Rubén Porras LinWorks GmbH From owner-xfs@oss.sgi.com Thu Jun 28 03:52:25 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 03:52:30 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.3 required=5.0 tests=AWL,BAYES_99,MISSING_HEADERS autolearn=no version=3.2.0-pre1-r499012 Received: from mail.pawisda.de (mail.pawisda.de [213.157.4.156]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SAqOtL021390 for ; Thu, 28 Jun 2007 03:52:25 -0700 Received: from localhost (localhost.intra.frontsite.de [127.0.0.1]) by mail.pawisda.de (Postfix) with ESMTP id 937DBD57B for ; Thu, 28 Jun 2007 12:26:02 +0200 (CEST) Received: from mail.pawisda.de ([127.0.0.1]) by localhost (ndb [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 25532-08 for ; Thu, 28 Jun 2007 12:25:50 +0200 (CEST) Received: from [192.168.51.2] (lw-pc002.intra.frontsite.de [192.168.51.2]) by mail.pawisda.de (Postfix) with ESMTP id 9A8F3D532 for ; Thu, 28 Jun 2007 12:25:50 +0200 (CEST) Message-ID: <46838CAE.9030808@linworks.de> Date: Thu, 28 Jun 2007 12:25:50 +0200 From: Ruben Porras User-Agent: Mozilla-Thunderbird 2.0.0.4 (X11/20070618) MIME-Version: 1.0 Cc: xfs@oss.sgi.com Subject: Re: [PATCH] Implement ioctl to mark AGs as "don't use/use" References: <1182939325.5313.12.camel@localhost> <20070628045049.GF989688@sgi.com> In-Reply-To: <20070628045049.GF989688@sgi.com> Content-Type: multipart/mixed; boundary="------------090407080102030706060601" X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: by amavisd-new at pawisda.de X-Virus-Status: Clean X-archive-position: 11981 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: ruben.porras@linworks.de Precedence: bulk X-list: xfs This is a multi-part message in MIME format. 
--------------090407080102030706060601 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 8bit David Chinner wrote: > It's a good first cut - comments are inline in the patch below. > > > > Looks like some whitespace problems there (mixing spaces and tabs). > Also, can you include empty lines either side of a unique hunk of code > like this? > > I wonder how many other places we are going to have to put this check? > I haven't looked myself, but this is a good place to start ;) > > I'm still wondering how I should educate emacs to always insert tabs ;) > You shouldn't renumber the existing ioctls - that changes the > interfaces to userspace and so will break lots of stuff :( Just put > them at the end as 126/127. (Oh, you've got two "111" ioctls in > there, anyway ;) > > IOWs, the flag passed to xfs_trans_alloc() defines the type of the > transaction and the flag passed to xfs_alloc_log_agf() defines what > has been modified within the transaction. > > Ok, thank you for the explanation, I think that now I got it right. Attached is a new patch. There is one question that I would like to ask: when you sketched the xfs_alloc_set_flag_ag function, you put the call to the function xfs_alloc_log_agf inside it (see next code snippet). STATIC void xfs_alloc_set_flag_ag( xfs_trans_t *tp, xfs_buf_t *agbp, /* buffer for a.g. freelist header */ xfs_perag_t *pag, int flag) { xfs_agf_t *agf; /* a.g. freespace structure */ agf = XFS_BUF_TO_AGF(agbp); pag->pagf_flags |= flag; agf->agf_flags = cpu_to_be32(pag->pagf_flags); xfs_alloc_log_agf(tp, agbp, XFS_AGF_FLAGS); <-- ***** FROM HERE } Is it required to do the transaction log right after the change, or can it be done in the caller right after calling xfs_alloc_set_flag_ag? For example caller(...)
{ xfs_alloc_set_flag_ag(tp, bp, pag, XFS_AGFLAG_ALLOC_DENY); <-- **** TO HERE xfs_trans_set_sync(tp); xfs_trans_commit(tp, 0); } Thanks -- Rubén Porras LinWorks GmbH --------------090407080102030706060601 Content-Type: text/x-patch; name="patch_markags1.diff" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="patch_markags1.diff" Index: fs/xfs/xfs_ag.h =================================================================== RCS file: /cvs/linux-2.6-xfs/fs/xfs/xfs_ag.h,v retrieving revision 1.59 diff -u -r1.59 xfs_ag.h --- fs/xfs/xfs_ag.h 22 May 2007 15:50:48 -0000 1.59 +++ fs/xfs/xfs_ag.h 28 Jun 2007 09:47:41 -0000 @@ -69,6 +69,7 @@ __be32 agf_freeblks; /* total free blocks */ __be32 agf_longest; /* longest free space */ __be32 agf_btreeblks; /* # of blocks held in AGF btrees */ + __be32 agf_flags; /* persistent AG state flags */ } xfs_agf_t; #define XFS_AGF_MAGICNUM 0x00000001 @@ -83,7 +84,8 @@ #define XFS_AGF_FREEBLKS 0x00000200 #define XFS_AGF_LONGEST 0x00000400 #define XFS_AGF_BTREEBLKS 0x00000800 -#define XFS_AGF_NUM_BITS 12 +#define XFS_AGF_FLAGS 0x00001000 +#define XFS_AGF_NUM_BITS 13 #define XFS_AGF_ALL_BITS ((1 << XFS_AGF_NUM_BITS) - 1) /* disk block (xfs_daddr_t) in the AG */ @@ -196,8 +198,17 @@ lock_t pagb_lock; /* lock for pagb_list */ #endif xfs_perag_busy_t *pagb_list; /* unstable blocks */ + __u32 pagf_flags; /* persistent AG state flags */ } xfs_perag_t; +typedef struct xfs_ioc_agflags +{ + xfs_agnumber_t ag; + __u32 flags; +} xfs_ioc_agflags_t; + +#define XFS_AGF_FLAGS_ALLOC_DENY (1<<0) + #define XFS_AG_MAXLEVELS(mp) ((mp)->m_ag_maxlevels) #define XFS_MIN_FREELIST_RAW(bl,cl,mp) \ (MIN(bl + 1, XFS_AG_MAXLEVELS(mp)) + MIN(cl + 1, XFS_AG_MAXLEVELS(mp))) Index: fs/xfs/xfs_alloc.c =================================================================== RCS file: /cvs/linux-2.6-xfs/fs/xfs/xfs_alloc.c,v retrieving revision 1.186 diff -u -r1.186 xfs_alloc.c --- fs/xfs/xfs_alloc.c 22 May 2007 15:50:48 -0000 1.186 +++ fs/xfs/xfs_alloc.c 28 Jun 
2007 09:47:44 -0000 @@ -549,6 +549,7 @@ xfs_alloc_arg_t *args) /* argument structure for allocation */ { int error=0; + xfs_perag_t *pag; #ifdef XFS_ALLOC_TRACE static char fname[] = "xfs_alloc_ag_vextent"; #endif @@ -558,6 +559,17 @@ ASSERT(args->minlen <= args->maxlen); ASSERT(args->mod < args->prod); ASSERT(args->alignment > 0); + + /* + * Return an error if the a.g. should not be allocated. + * This happens normally during a shrink operation. + */ + pag = (args->pag); + if (unlikely(pag->pagf_flags & XFS_AGF_FLAGS_ALLOC_DENY)) { + args->agbno = NULLAGBLOCK; + return 0; + } + /* * Branch to correct routine based on the type. */ @@ -2085,6 +2097,7 @@ offsetof(xfs_agf_t, agf_freeblks), offsetof(xfs_agf_t, agf_longest), offsetof(xfs_agf_t, agf_btreeblks), + offsetof(xfs_agf_t, agf_flags), sizeof(xfs_agf_t) }; Index: fs/xfs/xfs_fs.h =================================================================== RCS file: /cvs/linux-2.6-xfs/fs/xfs/xfs_fs.h,v retrieving revision 1.33 diff -u -r1.33 xfs_fs.h --- fs/xfs/xfs_fs.h 22 May 2007 15:50:48 -0000 1.33 +++ fs/xfs/xfs_fs.h 28 Jun 2007 09:47:44 -0000 @@ -492,6 +492,8 @@ #define XFS_IOC_ATTRMULTI_BY_HANDLE _IOW ('X', 123, struct xfs_fsop_attrmulti_handlereq) #define XFS_IOC_FSGEOMETRY _IOR ('X', 124, struct xfs_fsop_geom) #define XFS_IOC_GOINGDOWN _IOR ('X', 125, __uint32_t) +#define XFS_IOC_GET_AGF_FLAGS _IOWR('X', 126, struct xfs_ioc_agflags) +#define XFS_IOC_SET_AGF_FLAGS _IOW ('X', 127, struct xfs_ioc_agflags) /* XFS_IOC_GETFSUUID ---------- deprecated 140 */ Index: fs/xfs/xfs_fsops.c =================================================================== RCS file: /cvs/linux-2.6-xfs/fs/xfs/xfs_fsops.c,v retrieving revision 1.126 diff -u -r1.126 xfs_fsops.c --- fs/xfs/xfs_fsops.c 8 Jun 2007 16:03:59 -0000 1.126 +++ fs/xfs/xfs_fsops.c 28 Jun 2007 09:47:45 -0000 @@ -649,3 +649,79 @@ return 0; } + +STATIC void +xfs_ag_set_flags_private( + xfs_trans_t *tp, + xfs_buf_t *agbp, /* buffer for a.g. 
freelist header */ + xfs_perag_t *pag, + __u32 flags) +{ + xfs_agf_t *agf; /* a.g. freespace structure */ + + agf = XFS_BUF_TO_AGF(agbp); + pag->pagf_flags |= flags; + agf->agf_flags = cpu_to_be32(pag->pagf_flags); + + xfs_alloc_log_agf(tp, agbp, XFS_AGF_FLAGS); +} + +__u32 +xfs_ag_get_flags_private( + xfs_perag_t *pag) +{ + return pag->pagf_flags; +} + +int +xfs_ag_set_flags( + xfs_mount_t *mp, + xfs_ioc_agflags_t *ioc_flags) +{ + xfs_agnumber_t agno; + xfs_perag_t *pag; + xfs_buf_t *bp; + int error; + xfs_trans_t *tp; + + agno = ioc_flags->ag; + if (agno >= mp->m_sb.sb_agcount) + return -EINVAL; + + tp = xfs_trans_alloc(mp, XFS_TRANS_AGF_FLAGS); + error = xfs_trans_reserve(tp, 0, mp->m_sb.sb_sectsize + 128, 0, 0, + XFS_DEFAULT_LOG_COUNT); + if (error) { + xfs_trans_cancel(tp, 0); + return error; + } + error = xfs_alloc_read_agf(mp, tp, agno, 0, &bp); + if (error) + return error; + + pag = &mp->m_perag[agno]; + xfs_ag_set_flags_private(tp, bp, pag, ioc_flags->flags); + + xfs_trans_set_sync(tp); + xfs_trans_commit(tp, 0); + + return 0; + +} + +int +xfs_ag_get_flags( + xfs_mount_t *mp, + xfs_ioc_agflags_t *ioc_flags) +{ + xfs_agnumber_t agno; + xfs_perag_t *pag; + + agno = ioc_flags->ag; + if (agno >= mp->m_sb.sb_agcount) + return -EINVAL; + + pag = &mp->m_perag[agno]; + ioc_flags->flags = xfs_ag_get_flags_private(pag); + return 0; +} Index: fs/xfs/xfs_fsops.h =================================================================== RCS file: /cvs/linux-2.6-xfs/fs/xfs/xfs_fsops.h,v retrieving revision 1.29 diff -u -r1.29 xfs_fsops.h --- fs/xfs/xfs_fsops.h 21 Nov 2005 14:42:36 -0000 1.29 +++ fs/xfs/xfs_fsops.h 28 Jun 2007 09:47:45 -0000 @@ -27,4 +27,7 @@ extern int xfs_fs_goingdown(xfs_mount_t *mp, __uint32_t inflags); extern void xfs_fs_log_dummy(xfs_mount_t *mp); +extern int xfs_ag_set_flags(xfs_mount_t *mp, xfs_ioc_agflags_t *ioc_flags); +extern int xfs_ag_get_flags(xfs_mount_t *mp, xfs_ioc_agflags_t *ioc_flags); + #endif /* __XFS_FSOPS_H__ */ Index: fs/xfs/xfs_trans.h 
=================================================================== RCS file: /cvs/linux-2.6-xfs/fs/xfs/xfs_trans.h,v retrieving revision 1.145 diff -u -r1.145 xfs_trans.h --- fs/xfs/xfs_trans.h 22 May 2007 15:50:48 -0000 1.145 +++ fs/xfs/xfs_trans.h 28 Jun 2007 09:47:46 -0000 @@ -95,7 +95,8 @@ #define XFS_TRANS_GROWFSRT_FREE 39 #define XFS_TRANS_SWAPEXT 40 #define XFS_TRANS_SB_COUNT 41 -#define XFS_TRANS_TYPE_MAX 41 +#define XFS_TRANS_AGF_FLAGS 42 +#define XFS_TRANS_TYPE_MAX 42 /* new transaction types need to be reflected in xfs_logprint(8) */ Index: fs/xfs/linux-2.6/xfs_ioctl.c =================================================================== RCS file: /cvs/linux-2.6-xfs/fs/xfs/linux-2.6/xfs_ioctl.c,v retrieving revision 1.144 diff -u -r1.144 xfs_ioctl.c --- fs/xfs/linux-2.6/xfs_ioctl.c 7 Feb 2007 02:50:13 -0000 1.144 +++ fs/xfs/linux-2.6/xfs_ioctl.c 28 Jun 2007 09:47:46 -0000 @@ -860,6 +860,37 @@ return 0; } + case XFS_IOC_GET_AGF_FLAGS: { + xfs_ioc_agflags_t inout; + + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + + if (copy_from_user(&inout, arg, sizeof(inout))) + return -XFS_ERROR(EFAULT); + + error = xfs_ag_get_flags(mp, &inout); + if (error) + return -error; + + if (copy_to_user(arg, &inout, sizeof(inout))) + return -XFS_ERROR(EFAULT); + return 0; + } + + case XFS_IOC_SET_AGF_FLAGS: { + xfs_ioc_agflags_t in; + + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + + if (copy_from_user(&in, arg, sizeof(in))) + return -XFS_ERROR(EFAULT); + + error = xfs_ag_set_flags(mp, &in); + return -error; + } + case XFS_IOC_FSGROWFSDATA: { xfs_growfs_data_t in; --------------090407080102030706060601-- From owner-xfs@oss.sgi.com Thu Jun 28 06:25:30 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 06:25:37 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.1 required=5.0 tests=BAYES_80,J_CHICKENPOX_64 autolearn=no version=3.2.0-pre1-r499012 Received: from 
zebday.corky.net (corky.net [212.150.53.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SDPStL022169 for ; Thu, 28 Jun 2007 06:25:29 -0700 Received: from [127.0.0.1] (zebday [127.0.0.1]) by zebday.corky.net (Postfix) with ESMTP id 26C1FEE2C1 for ; Thu, 28 Jun 2007 15:54:57 +0300 (IDT) Message-ID: <4683ADEB.3010106@corky.net> Date: Thu, 28 Jun 2007 13:47:39 +0100 From: Just Marc User-Agent: Mozilla-Thunderbird 2.0.0.4 (X11/20070622) MIME-Version: 1.0 To: xfs@oss.sgi.com Subject: xfs_fsr, performance related tweaks Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11984 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: marc@corky.net Precedence: bulk X-list: xfs Hi, I'm using fsr extensively. I noticed two things: 1. in xfs_fsr.c: if (fsx.fsx_xflags & XFS_XFLAG_NODEFRAG) { if (vflag) fsrprintf(_("%s: marked as don't defrag, ignoring\n"), fname); return(0); } This check should be moved above the code that performs a stat on the file, to become the first check; this will help reduce redundant stat calls for heavily defragged filesystems. A simple patch is attached for that. 2. Files for which 'No improvement will be made' should also be marked as no-defrag; this will avoid a ton of extra work in the future. In my particular use case I have many hundreds of thousands of files on each volume (xfs_fsr runs in a never-ending loop as new files are constantly being added) and once a file is written to disk it is never changed again until deletion. Optionally do this only when a special parameter is passed to fsr on the command line? (that is, if you think this is not appropriate for all scenarios). I tried to accomplish this but it proved more difficult than I thought.
While digging around in the fsr code, I didn't find out how fsr marks the file as no-defrag; I'm guessing this is done in kernel code via the ioctl that swaps the extents (xfs_swapext) ... Is that correct? I looked at how the 'io' utility marks files as no-defrag; it calls xfsctl(path, fd, XFS_IOC_FSSETXATTR, &attr); -- this requires a path, which is not available to me when fsr is run in its default mode, which traverses all xfs filesystems rather than being run on a single file. Is there a way to extract the path in this case? Or possibly use another way to mark the inode as no-defrag without needing the path -- just the fd? Of course this can be done by an external script which parses the output of xfs_fsr for these inodes, looks them up and marks them as such, but that's pretty messy and very inefficient. I'd really like to do this as cleanly and efficiently as possible. I would appreciate any feedback you have on this. Please CC me as I'm not on this list. Thanks! From owner-xfs@oss.sgi.com Thu Jun 28 06:23:59 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 06:24:07 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.1 required=5.0 tests=BAYES_80,J_CHICKENPOX_64 autolearn=no version=3.2.0-pre1-r499012 Received: from zebday.corky.net (corky.net [212.150.53.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SDNutL021549 for ; Thu, 28 Jun 2007 06:23:59 -0700 Received: from [127.0.0.1] (zebday [127.0.0.1]) by zebday.corky.net (Postfix) with ESMTP id F116992268 for ; Thu, 28 Jun 2007 15:55:08 +0300 (IDT) Message-ID: <4683ADF5.9050901@corky.net> Date: Thu, 28 Jun 2007 13:47:49 +0100 From: Just Marc User-Agent: Mozilla-Thunderbird 2.0.0.4 (X11/20070622) MIME-Version: 1.0 To: xfs@oss.sgi.com Subject: xfs_fsr, performance related tweaks Content-Type: multipart/mixed; boundary="------------010502060302030207090406" X-Virus-Scanned: ClamAV
version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11983 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: marc@corky.net Precedence: bulk X-list: xfs This is a multi-part message in MIME format. --------------010502060302030207090406 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Hi, I'm using fsr extensively. I noticed two things: 1. in xfs_fsr.c: if (fsx.fsx_xflags & XFS_XFLAG_NODEFRAG) { if (vflag) fsrprintf(_("%s: marked as don't defrag, ignoring\n"), fname); return(0); } This check should be moved above the code that performs a stat on the file, to become the first check; this will help reduce redundant stat calls for heavily defragged filesystems. A simple patch is attached for that. 2. Files for which 'No improvement will be made' should also be marked as no-defrag; this will avoid a ton of extra work in the future. In my particular use case I have many hundreds of thousands of files on each volume (xfs_fsr runs in a never-ending loop as new files are constantly being added) and once a file is written to disk it is never changed again until deletion. Optionally do this only when a special parameter is passed to fsr on the command line? (that is, if you think this is not appropriate for all scenarios). I tried to accomplish this but it proved more difficult than I thought. While digging around in the fsr code, I didn't find out how fsr marks the file as no-defrag; I'm guessing this is done in kernel code via the ioctl that swaps the extents (xfs_swapext) ... Is that correct? I looked at how the 'io' utility marks files as no-defrag; it calls xfsctl(path, fd, XFS_IOC_FSSETXATTR, &attr); -- this requires a path, which is not available to me when fsr is run in its default mode, which traverses all xfs filesystems rather than being run on a single file.
Is there a way to extract the path in this case? Or possibly use another way to mark the inode as no-defrag without having to need the path -- just fd? Of course this can be done by an external script which parses the output of xfs_fsr for these inodes, looks them up and marks them as such, but that's pretty messy and very inefficient. I'd really like to do this as cleanly and efficiently as possible. I would appreciate any feedback you have on this. Please CC me as I'm not on this list. Thanks! --------------010502060302030207090406 Content-Type: text/plain; name="xfs_fsr.diff" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="xfs_fsr.diff" --- xfs_fsr.c 2007-06-28 13:23:58.745778214 +0100 +++ xfs_fsr.c.orig 2007-06-28 07:40:42.572069164 +0100 @@ -904,6 +904,20 @@ } } + /* Check if there is room to copy the file */ + if ( statvfs64( (fsname == NULL ? fname : fsname), &vfss) < 0) { + fsrprintf(_("unable to get fs stat on %s: %s\n"), + fname, strerror(errno)); + return (-1); + } + bsize = vfss.f_frsize ? vfss.f_frsize : vfss.f_bsize; + + if (statp->bs_size > ((vfss.f_bfree * bsize) - minimumfree)) { + fsrprintf(_("insufficient freespace for: %s: " + "size=%lld: ignoring\n"), fname, statp->bs_size); + return 1; + } + if ((ioctl(fd, XFS_IOC_FSGETXATTR, &fsx)) < 0) { fsrprintf(_("failed to get inode attrs: %s\n"), fname); return(-1); @@ -937,20 +951,6 @@ return -1; } - /* Check if there is room to copy the file */ - if ( statvfs64( (fsname == NULL ? fname : fsname), &vfss) < 0) { - fsrprintf(_("unable to get fs stat on %s: %s\n"), - fname, strerror(errno)); - return (-1); - } - bsize = vfss.f_frsize ? 
vfss.f_frsize : vfss.f_bsize; - - if (statp->bs_size > ((vfss.f_bfree * bsize) - minimumfree)) { - fsrprintf(_("insufficient freespace for: %s: " - "size=%lld: ignoring\n"), fname, statp->bs_size); - return 1; - } - /* * Previously the code forked here, & the child changed it's uid to * that of the file's owner and then called packfile(), to keep --------------010502060302030207090406-- From owner-xfs@oss.sgi.com Thu Jun 28 07:22:19 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 07:22:27 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.1 required=5.0 tests=BAYES_50,J_CHICKENPOX_27 autolearn=no version=3.2.0-pre1-r499012 Received: from isls-mx10.wmin.ac.uk (isls-mx10.wmin.ac.uk [161.74.14.112]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SEMHtL014467 for ; Thu, 28 Jun 2007 07:22:19 -0700 Received: from groucho.wmin.ac.uk ([161.74.160.74]) by isls-mx10.wmin.ac.uk with esmtp (Exim 4.60) (envelope-from ) id 1I3usz-000592-Hl; Thu, 28 Jun 2007 15:22:17 +0100 Received: from project1.cpc.wmin.ac.uk (project1.cpc.wmin.ac.uk [161.74.69.87]) by groucho.wmin.ac.uk (Postfix) with ESMTP id 7B10732299D; Thu, 28 Jun 2007 15:22:17 +0100 (BST) Date: Thu, 28 Jun 2007 15:22:17 +0100 To: "Timothy Shimmin" , "David Chinner" Subject: Re: After reboot fs with barrier faster deletes then fs with nobarrier From: "Szabolcs Illes" Organization: UoW Cc: xfs@oss.sgi.com Content-Type: text/plain; format=flowed; delsp=yes; charset=us-ascii MIME-Version: 1.0 References: <20070627222040.GR989688@sgi.com> <4683407E.9080707@sgi.com> Message-ID: In-Reply-To: <4683407E.9080707@sgi.com> User-Agent: Opera Mail/9.20 (Linux) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from Quoted-Printable to 8bit by oss.sgi.com id l5SEMJtL014505 X-archive-position: 11985 
X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: S.Illes@westminster.ac.uk Precedence: bulk X-list: xfs On Thu, 28 Jun 2007 06:00:46 +0100, Timothy Shimmin wrote: > David Chinner wrote: >> On Wed, Jun 27, 2007 at 06:58:29PM +0100, Szabolcs Illes wrote: >>> Hi, >>> >>> I am using XFS on my laptop, I have realized that nobarrier mount >>> options sometimes slows down deleting large number of small files, >>> like the kernel source tree. I made four tests, deleting the kernel >>> source right after unpack and after reboot, with both barrier and >>> nobarrier options: >>> > >> mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2 > >> illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && > sync && reboot > >> After reboot: > >> illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/ > >> real 0m28.127s > >> user 0m0.044s > >> sys 0m2.924s > >> mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2,nobarrier > >> illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && > sync && reboot > >> After reboot: > >> illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/ > >> real 1m12.738s > >> user 0m0.032s > >> sys 0m2.548s > >> It looks like with barrier it's faster deleting files after reboot. > >> ( 28 sec vs 72 sec !!! ). >> Of course the second run will be faster here - the inodes are already >> in >> cache and so there's no reading from disk needed to find the files >> to delete.... >> That's because run time after reboot is determined by how fast you >> can traverse the directory structure (i.e. how many seeks are >> involved). Barriers will have little impact on the uncached rm -rf >> results, > > But it looks like barriers _are_ having impact on the uncached rm -rf > results. and the same happens on my desktop pc, which is much faster: 64 bit dual core P4, sata drive, etc.
nobarrier: + echo 3 > /proc/sys/vm/drop_caches + rm -rf /root/fs/linux-2.6.21.5 /root/fs/x.tar real 0m12.526s user 0m0.016s sys 0m1.244s barrier: + echo 3 > /proc/sys/vm/drop_caches + rm -rf /root/fs/linux-2.6.21.5 /root/fs/x.tar real 0m6.784s user 0m0.032s sys 0m1.244s Szabolcs > > --Tim From owner-xfs@oss.sgi.com Thu Jun 28 07:48:40 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 07:48:46 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.3 required=5.0 tests=AWL,BAYES_50, DATE_IN_PAST_12_24 autolearn=no version=3.2.0-pre1-r499012 Received: from spitz.ucw.cz (gprs189-60.eurotel.cz [160.218.189.60]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SEmXtL026428 for ; Thu, 28 Jun 2007 07:48:37 -0700 Received: by spitz.ucw.cz (Postfix, from userid 0) id 8AC79279F2; Wed, 27 Jun 2007 20:49:24 +0000 (UTC) Date: Wed, 27 Jun 2007 20:49:24 +0000 From: Pavel Machek To: David Chinner Cc: David Greaves , David Robinson , LVM general discussion and development , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid Subject: Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume Message-ID: <20070627204924.GA4777@ucw.cz> References: <46744065.6060605@dgreaves.com> <4674645F.5000906@gmail.com> <46751D37.5020608@dgreaves.com> <4676390E.6010202@dgreaves.com> <20070618145007.GE85884050@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070618145007.GE85884050@sgi.com> User-Agent: Mutt/1.5.9i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11986 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pavel@ucw.cz Precedence: bulk X-list: xfs Hi! 
> FWIW, I'm on record stating that "sync" is not sufficient to quiesce an XFS > filesystem for a suspend/resume to work safely and have argued that the only Hmm, so XFS writes to disk even when its threads are frozen? > safe thing to do is freeze the filesystem before suspend and thaw it after > resume. This is why I originally asked you to test that with the other problem Could you add that to the XFS threads if it is really required? They do know that they are being frozen for suspend. Pavel -- (english) http://www.livejournal.com/~pavelmachek (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html From owner-xfs@oss.sgi.com Thu Jun 28 08:05:49 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 08:06:15 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from over.ny.us.ibm.com (over.ny.us.ibm.com [32.97.182.150]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SF5ltL031183 for ; Thu, 28 Jun 2007 08:05:49 -0700 Received: from e3.ny.us.ibm.com ([192.168.1.103]) by pokfb.esmtp.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5SEb36F029553 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK) for ; Thu, 28 Jun 2007 10:37:06 -0400 Received: from d01relay04.pok.ibm.com (d01relay04.pok.ibm.com [9.56.227.236]) by e3.ny.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5SDXZJJ013202 for ; Thu, 28 Jun 2007 09:33:35 -0400 Received: from d01av02.pok.ibm.com (d01av02.pok.ibm.com [9.56.224.216]) by d01relay04.pok.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5SEarFS525944 for ; Thu, 28 Jun 2007 10:36:53 -0400 Received: from d01av02.pok.ibm.com (loopback [127.0.0.1]) by d01av02.pok.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5SEaqnt019292 for ; Thu, 28 Jun 2007 10:36:53 -0400 Received: from [9.67.80.118] (wecm-9-67-80-118.wecm.ibm.com 
[9.67.80.118]) by d01av02.pok.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5SEaoT7019172; Thu, 28 Jun 2007 10:36:51 -0400 Subject: Re: [PATCH 0/6][TAKE5] fallocate system call From: Mingming Cao Reply-To: cmm@us.ibm.com To: Andrew Morton Cc: "Amit K. Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , Andreas Dilger , suparna@in.ibm.com, xfs@oss.sgi.com In-Reply-To: <20070628025543.9467216f.akpm@linux-foundation.org> References: <20070510005926.GT85884050@sgi.com> <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070628025543.9467216f.akpm@linux-foundation.org> Content-Type: text/plain Organization: IBM Linux Technology Center Date: Thu, 28 Jun 2007 10:36:47 -0700 Message-Id: <1183052207.4420.17.camel@localhost.localdomain> Mime-Version: 1.0 X-Mailer: Evolution 2.8.0 (2.8.0-33.el5) Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11987 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cmm@us.ibm.com Precedence: bulk X-list: xfs On Thu, 2007-06-28 at 02:55 -0700, Andrew Morton wrote: > Please drop the non-ext4 patches from the ext4 tree and send incremental > patches against the (non-ext4) fallocate patches in -mm. > The ext4 fallocate() patches are dependent on the core fallocate() patches, so ext4 patch-queue and git tree won't compile (it's not based on mm tree) without the core changes. 
We can send the ext4 fallocate patches (incremental patches against the mm tree) and drop the full fallocate patches (ext4 and non-ext4 parts) from the ext4 patch queue if you prefer this way. > And try to get the code finished? Time is pressing. > I looked at the mm tree; there are other ext4 features/changes currently in ext4-patch-queue (not the ext4 git tree) that are not part of the ext4 series yet. Ted, can you merge those patches into your git tree? Thanks! Thanks for your patience. Mingming. From owner-xfs@oss.sgi.com Thu Jun 28 08:21:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 08:21:47 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from ogre.sisk.pl (ogre.sisk.pl [217.79.144.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SFLdtL007574 for ; Thu, 28 Jun 2007 08:21:41 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by ogre.sisk.pl (Postfix) with ESMTP id 9F36744CF6; Thu, 28 Jun 2007 16:59:27 +0200 (CEST) Received: from ogre.sisk.pl ([127.0.0.1]) by localhost (ogre.sisk.pl [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 02127-07; Thu, 28 Jun 2007 16:59:27 +0200 (CEST) Received: from [192.168.144.102] (iftwlan0.fuw.edu.pl [193.0.83.32]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by ogre.sisk.pl (Postfix) with ESMTP id 0B9F040F76; Thu, 28 Jun 2007 16:59:27 +0200 (CEST) From: "Rafael J.
Wysocki" To: Pavel Machek Subject: Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume Date: Thu, 28 Jun 2007 17:27:34 +0200 User-Agent: KMail/1.9.5 Cc: David Chinner , David Greaves , David Robinson , LVM general discussion and development , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid References: <46744065.6060605@dgreaves.com> <20070618145007.GE85884050@sgi.com> <20070627204924.GA4777@ucw.cz> In-Reply-To: <20070627204924.GA4777@ucw.cz> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200706281727.35430.rjw@sisk.pl> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: amavisd-new at ogre.sisk.pl using MkS_Vir for Linux X-Virus-Status: Clean X-archive-position: 11988 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: rjw@sisk.pl Precedence: bulk X-list: xfs On Wednesday, 27 June 2007 22:49, Pavel Machek wrote: > Hi! > > > FWIW, I'm on record stating that "sync" is not sufficient to quiesce an XFS > > filesystem for a suspend/resume to work safely and have argued that the only > > Hmm, so XFS writes to disk even when its threads are frozen? > > > safe thing to do is freeze the filesystem before suspend and thaw it after > > resume. This is why I originally asked you to test that with the other problem > > Could you add that to the XFS threads if it is really required? They > do know that they are being frozen for suspend. Well, do you remember the workqueues? They are still nonfreezable. Greetings, Rafael -- "Premature optimization is the root of all evil." 
- Donald Knuth From owner-xfs@oss.sgi.com Thu Jun 28 08:23:34 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 08:23:40 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: **** X-Spam-Status: No, score=4.0 required=5.0 tests=BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from mail12.bluewin.ch (mail12.bluewin.ch [195.186.19.61]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SFNWtL008345 for ; Thu, 28 Jun 2007 08:23:33 -0700 Received: from asgard (85.0.116.217) by mail12.bluewin.ch (Bluewin 7.3.121) id 467A67CC0018CD57 for xfs@oss.sgi.com; Thu, 28 Jun 2007 15:02:15 +0000 Received: from [192.168.0.141] (helo=[127.0.0.1]) by asgard with esmtp (Exim 4.63) (envelope-from ) id 1I3vVc-0003Gr-Cv for xfs@oss.sgi.com; Thu, 28 Jun 2007 17:02:12 +0200 Message-ID: <4683CD56.9000102@fintan.ch> Date: Thu, 28 Jun 2007 17:01:42 +0200 From: IT Fintan User-Agent: Thunderbird 2.0.0.4 (Windows/20070604) MIME-Version: 1.0 To: xfs@oss.sgi.com Subject: project quotas fstab Content-Type: text/plain; charset=ISO-8859-15; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11989 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: it@fintan.ch Precedence: bulk X-list: xfs hi all, question: how do I enable project quotas directly in fstab? There isn't any documentation for it, just for group or user quotas.
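[For the archive: the question above is answered by XFS's `prjquota` mount option (synonym `pquota`), which works in fstab the same way `usrquota` and `grpquota` do. The device and mount point below are placeholders, not from this thread:]

```
# /etc/fstab - enable XFS project quota accounting at mount time
/dev/sdb1  /srv/export  xfs  rw,noatime,prjquota  0 0
```

[Note that on XFS of this vintage, project quota and group quota are mutually exclusive on the same filesystem; the projects themselves are then defined via /etc/projects and /etc/projid and administered with xfs_quota.]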
thanks harald -- __________________________________________________ Sativa Rheinau Klosterplatz 8462 Rheinau Tel.: ++41 52 304 91 40 E-Mail: it@fintan.ch http://www.sativa-rheinau.ch __________________________________________________ From owner-xfs@oss.sgi.com Thu Jun 28 09:31:22 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 09:31:27 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.4 required=5.0 tests=AWL,BAYES_50,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from lucidpixels.com (lucidpixels.com [75.144.35.66]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SGVLtL007083 for ; Thu, 28 Jun 2007 09:31:22 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id A0AF11C000275; Thu, 28 Jun 2007 12:31:22 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 9BB62401992C; Thu, 28 Jun 2007 12:31:22 -0400 (EDT) Date: Thu, 28 Jun 2007 12:31:22 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: xfs@oss.sgi.com, linux-raid@vger.kernel.org Subject: XFS mount option performance on Linux Software RAID 5 Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11990 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Still reviewing but it appears 8 + 256k looks good. 
p34-noatime-logbufs=2-lbsize=256k,15696M,78172.3,99,450320,86.6667,178683,29,79808,99,565741,42.3333,610.067,0,16:100000:16/64,2362,19.6667,15751.7,46,3993.33,22,2545.67,24.3333,13976,41,3781.33,28.6667 p34-noatime-logbufs=8-lbsize=256k,15696M,78238,99,455532,86.6667,182382,30,79741.7,99,571631,43,597.633,0,16:100000:16/64,3421,29,12130,38.3333,5943.33,33,3671.33,35.6667,13521.3,41.3333,5162.33,38.3333 p34-noatime-logbufs=8-lbsize=default,15696M,77872,98.6667,438661,86.6667,179848,29.3333,79368,99,555999,42,632.733,0.333333,16:100000:16/64,2090,17.6667,11183,33,3922.67,23,2271.33,22.3333,11709,35,3391.33,26.3333 p34-noatime-only,15696M,77473,99,449689,86.6667,176960,29.3333,80186.3,99,568503,42.6667,592.633,0,16:100000:16/64,2102,18,15935.3,44.6667,3825.67,22.3333,2353,23.6667,9727.33,29.3333,3265,25.6667 http://home.comcast.net/~jpiszcz/chunk/logbufs.html From owner-xfs@oss.sgi.com Thu Jun 28 10:57:53 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 10:58:01 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from e5.ny.us.ibm.com (e5.ny.us.ibm.com [32.97.182.145]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SHvotL013859 for ; Thu, 28 Jun 2007 10:57:52 -0700 Received: from d01relay04.pok.ibm.com (d01relay04.pok.ibm.com [9.56.227.236]) by e5.ny.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5SHvp3r014116 for ; Thu, 28 Jun 2007 13:57:51 -0400 Received: from d01av03.pok.ibm.com (d01av03.pok.ibm.com [9.56.224.217]) by d01relay04.pok.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5SHvp0g551106 for ; Thu, 28 Jun 2007 13:57:51 -0400 Received: from d01av03.pok.ibm.com (loopback [127.0.0.1]) by d01av03.pok.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5SHvoXx026453 for ; Thu, 28 Jun 2007 13:57:51 -0400 Received: from amitarora.in.ibm.com ([9.126.238.125]) by 
d01av03.pok.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5SHvm15026302; Thu, 28 Jun 2007 13:57:49 -0400 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id B467C18B99D; Thu, 28 Jun 2007 23:27:59 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5SHvvWC011395; Thu, 28 Jun 2007 23:27:57 +0530 Date: Thu, 28 Jun 2007 23:27:57 +0530 From: "Amit K. Arora" To: Andrew Morton Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 0/6][TAKE5] fallocate system call Message-ID: <20070628175757.GA1674@amitarora.in.ibm.com> References: <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070628025543.9467216f.akpm@linux-foundation.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070628025543.9467216f.akpm@linux-foundation.org> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11991 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs On Thu, Jun 28, 2007 at 02:55:43AM -0700, Andrew Morton wrote: > On Mon, 25 Jun 2007 18:58:10 +0530 "Amit K. Arora" wrote: > > > N O T E: > > ------- > > 1) Only Patches 4/7 and 7/7 are NEW. Rest of them are _already_ part > > of ext4 patch queue git tree hosted by Ted. 
> > Why the heck are replacements for these things being sent out again when > they're already in -mm and they're already in Ted's queue (from which I > need to diligently drop them each time I remerge)? > > Are we all supposed to re-review the entire patchset (or at least #4 and > #7) again? As I mentioned in the note above, only patches #4 and #7 were new and thus these needed to be reviewed. Other patches are _not_ replacements of any of the patches which are already part of -mm and/or in Ted's patch queue. They were posted again as just "placeholders" so that the two new patches (#4 & #7) could be reviewed. Sorry for any confusion. > Please drop the non-ext4 patches from the ext4 tree and send incremental > patches against the (non-ext4) fallocate patches in -mm. Please let us know what you think of Mingming's suggestion of posting all the fallocate patches including the ext4 ones as incremental ones against the -mm. -- Regards, Amit Arora From owner-xfs@oss.sgi.com Thu Jun 28 11:07:47 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 11:07:54 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from e36.co.us.ibm.com (e36.co.us.ibm.com [32.97.110.154]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SI7jtL017595 for ; Thu, 28 Jun 2007 11:07:46 -0700 Received: from d03relay02.boulder.ibm.com (d03relay02.boulder.ibm.com [9.17.195.227]) by e36.co.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5SI7kVk013065 for ; Thu, 28 Jun 2007 14:07:46 -0400 Received: from d03av02.boulder.ibm.com (d03av02.boulder.ibm.com [9.17.195.168]) by d03relay02.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5SI7kLZ252624 for ; Thu, 28 Jun 2007 12:07:46 -0600 Received: from d03av02.boulder.ibm.com (loopback [127.0.0.1]) by d03av02.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id 
l5SI7j94006699 for ; Thu, 28 Jun 2007 12:07:46 -0600 Received: from amitarora.in.ibm.com ([9.126.238.125]) by d03av02.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5SI7iBq006523; Thu, 28 Jun 2007 12:07:45 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id AA9E218B99D; Thu, 28 Jun 2007 23:37:56 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5SI7onV016842; Thu, 28 Jun 2007 23:37:50 +0530 Date: Thu, 28 Jun 2007 23:37:50 +0530 From: "Amit K. Arora" To: David Chinner Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 7/7][TAKE5] ext4: support new modes Message-ID: <20070628180750.GB1674@amitarora.in.ibm.com> References: <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625135051.GH1951@amitarora.in.ibm.com> <20070625215625.GL5181@schatzie.adilger.int> <20070626120751.GC19870@amitarora.in.ibm.com> <20070626161400.GE6652@schatzie.adilger.int> <20070626192908.GC13324@amitarora.in.ibm.com> <20070627000456.GT31489@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070627000456.GT31489@sgi.com> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11992 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs On Wed, Jun 27, 2007 at 10:04:56AM +1000, David Chinner wrote: > On Wed, Jun 27, 2007 at 12:59:08AM +0530, Amit K. 
Arora wrote: > > On Tue, Jun 26, 2007 at 12:14:00PM -0400, Andreas Dilger wrote: > > > On Jun 26, 2007 17:37 +0530, Amit K. Arora wrote: > > > > I think, modifying ctime/mtime should be dependent on the other flags. > > > > E.g., if we do not zero out data blocks on allocation/deallocation, > > > > update only ctime. Otherwise, update ctime and mtime both. > > > > > > I'm only being the advocate for requirements David Chinner has put > > > forward due to existing behaviour in XFS. This is one of the reasons > > > why I think the "flags" mechanism we now have - we can encode the > > > various different behaviours in any way we want and leave it to the > > > caller. > > > > I understand. May be we can confirm once more with David Chinner if this > > is really required. Will it really be a compatibility issue if new XFS > > preallocations (ie. via fallocate) update mtime/ctime? > > It should be left up to the filesystem to decide. Only the > filesystem knows whether something changed and the timestamp should > or should not be updated. Since Andreas had suggested the FA_FL_NO_MTIME flag thinking it was a requirement from XFS (whereas XFS does not need this flag), I don't think we need to add this new flag. Please let us know if someone still feels the FA_FL_NO_MTIME flag can be useful.
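[For readers following the thread: the timestamp behaviour being debated can be observed with posix_fallocate(), which the proposed fallocate() syscall generalizes. Comparing st_mtime before and after a preallocation shows whether a given filesystem treats it as a data modification. This is a minimal sketch, and the helper name is made up for illustration, not from the patches under discussion:]

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Preallocate `len` bytes at offset 0 of `path` and return the
 * resulting file size, or -1 on error.  Fetching st_mtime before
 * and after the posix_fallocate() call is how one would observe
 * the filesystem-dependent timestamp update discussed above. */
long long prealloc_size(const char *path, off_t len)
{
    struct stat st;
    int fd = open(path, O_RDWR | O_CREAT, 0600);

    if (fd < 0)
        return -1;
    /* posix_fallocate() returns 0 on success, an errno value on failure;
     * on success the file is extended to at least `len` bytes. */
    if (posix_fallocate(fd, 0, len) != 0) {
        close(fd);
        return -1;
    }
    if (fstat(fd, &st) < 0) {
        close(fd);
        return -1;
    }
    close(fd);
    return (long long)st.st_size;
}
```

[Whether such a call moves mtime/ctime is exactly the point left "up to the filesystem" in the exchange above.]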
-- Regards, Amit Arora From owner-xfs@oss.sgi.com Thu Jun 28 11:15:37 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 11:15:41 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.5 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from smtp2.linux-foundation.org (smtp2.linux-foundation.org [207.189.120.14]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SIFatL020623 for ; Thu, 28 Jun 2007 11:15:36 -0700 Received: from imap1.linux-foundation.org (imap1.linux-foundation.org [207.189.120.55]) by smtp2.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with ESMTP id l5SIFanm026363 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 28 Jun 2007 11:15:37 -0700 Received: from box (localhost [127.0.0.1]) by imap1.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with SMTP id l5SIFVsJ031037; Thu, 28 Jun 2007 11:15:31 -0700 Date: Thu, 28 Jun 2007 11:15:30 -0700 From: Andrew Morton To: mmarek@suse.cz Cc: xfs@oss.sgi.com, linux-kernel@vger.kernel.org Subject: Re: [patch 3/3] Fix XFS_IOC_FSBULKSTAT{,_SINGLE} and XFS_IOC_FSINUMBERS in compat mode Message-Id: <20070628111530.829e7a06.akpm@linux-foundation.org> In-Reply-To: <20070619132726.893544847@suse.cz> References: <20070619132549.266927601@suse.cz> <20070619132726.893544847@suse.cz> X-Mailer: Sylpheed 2.4.1 (GTK+ 2.8.17; x86_64-unknown-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-MIMEDefang-Filter: osdl$Revision: 1.181 $ X-Scanned-By: MIMEDefang 2.53 on 207.189.120.14 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11993 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@linux-foundation.org Precedence: bulk X-list: xfs 
On Tue, 19 Jun 2007 15:25:52 +0200 mmarek@suse.cz wrote: > * 32bit struct xfs_fsop_bulkreq has different size and layout of > members, no matter the alignment. Move the code out of the #else > branch (why was it there in the first place?). Define _32 variants of > the ioctl constants. > * 32bit struct xfs_bstat is different because of time_t and on > i386 becaus of different padding. Create a new formatter > xfs_bulkstat_one_compat() that takes care of this. To do this, we need > to make xfs_bulkstat_one_iget() and xfs_bulkstat_one_dinode() > non-static. > * i386 struct xfs_inogrp has different padding. Introduce a similar > "formatter" mechanism for xfs_inumbers: the native formatter is just a > copy_to_user, the compat formatter takes care of the different layout test.kernel.org build failed: CC fs/xfs/linux-2.6/xfs_ioctl32.o fs/xfs/linux-2.6/xfs_ioctl32.c: In function ‘xfs_ioc_bulkstat_compat’: fs/xfs/linux-2.6/xfs_ioctl32.c:334: error: ‘xfs_inumbers_fmt_compat’ undeclared (first use in this function) fs/xfs/linux-2.6/xfs_ioctl32.c:334: error: (Each undeclared identifier is reported only once fs/xfs/linux-2.6/xfs_ioctl32.c:334: error: for each function it appears in.) 
http://test.kernel.org/abat/96972/debug/test.log.0 http://test.kernel.org/abat/96972/build/dotconfig From owner-xfs@oss.sgi.com Thu Jun 28 11:19:15 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 11:19:20 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from e34.co.us.ibm.com (e34.co.us.ibm.com [32.97.110.152]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SIJEtL022006 for ; Thu, 28 Jun 2007 11:19:15 -0700 Received: from d03relay02.boulder.ibm.com (d03relay02.boulder.ibm.com [9.17.195.227]) by e34.co.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5SIJDwl006604 for ; Thu, 28 Jun 2007 14:19:13 -0400 Received: from d03av02.boulder.ibm.com (d03av02.boulder.ibm.com [9.17.195.168]) by d03relay02.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5SIJ5bI262748 for ; Thu, 28 Jun 2007 12:19:11 -0600 Received: from d03av02.boulder.ibm.com (loopback [127.0.0.1]) by d03av02.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5SIJ4LM004125 for ; Thu, 28 Jun 2007 12:19:04 -0600 Received: from amitarora.in.ibm.com ([9.126.238.125]) by d03av02.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5SIJ2l0003949; Thu, 28 Jun 2007 12:19:03 -0600 Received: from amitarora.in.ibm.com (localhost.localdomain [127.0.0.1]) by amitarora.in.ibm.com (Postfix) with ESMTP id 9A87518B99D; Thu, 28 Jun 2007 23:49:14 +0530 (IST) Received: (from amit@localhost) by amitarora.in.ibm.com (8.13.1/8.13.1/Submit) id l5SIJDB0023109; Thu, 28 Jun 2007 23:49:13 +0530 Date: Thu, 28 Jun 2007 23:49:13 +0530 From: "Amit K. 
Arora" To: David Chinner Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070628181913.GC1674@amitarora.in.ibm.com> References: <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625150320.GA8686@amitarora.in.ibm.com> <20070625214626.GJ5181@schatzie.adilger.int> <20070626103247.GA19870@amitarora.in.ibm.com> <20070626153413.GC6652@schatzie.adilger.int> <20070626231803.GQ31489@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070626231803.GQ31489@sgi.com> User-Agent: Mutt/1.4.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 11994 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: aarora@linux.vnet.ibm.com Precedence: bulk X-list: xfs On Wed, Jun 27, 2007 at 09:18:04AM +1000, David Chinner wrote: > On Tue, Jun 26, 2007 at 11:34:13AM -0400, Andreas Dilger wrote: > > On Jun 26, 2007 16:02 +0530, Amit K. Arora wrote: > > > On Mon, Jun 25, 2007 at 03:46:26PM -0600, Andreas Dilger wrote: > > > > Can you clarify - what is the current behaviour when ENOSPC (or some other > > > > error) is hit? Does it keep the current fallocate() or does it free it? > > > > > > Currently it is left on the file system implementation. In ext4, we do > > > not undo preallocation if some error (say, ENOSPC) is hit. Hence it may > > > end up with partial (pre)allocation. This is inline with dd and > > > posix_fallocate, which also do not free the partially allocated space. 
> > Since I believe the XFS allocation ioctls do it the opposite way (free > > preallocated space on error) this should be encoded into the flags. > > Having it "filesystem dependent" just means that nobody will be happy. > > No, XFs does not free preallocated space on error. it is up to the > application to clean up. Since XFS also does not free preallocated space on error, and this behavior is in line with dd, posix_fallocate() and the current ext4 implementation, do we still need the FA_FL_FREE_ENOSPC flag? > > What I mean is that any data read from the file should have the "appearance" > > of being zeroed (whether zeroes are actually written to disk or not). What > > I _think_ David is proposing is to allow fallocate() to return without > > marking the blocks even "uninitialized" and subsequent reads would return > > the old data from the disk. > > Correct, but for swap files that's not an issue - no user should be able > too read them, and FA_MKSWAP would really need root privileges to execute. Will the FA_MKSWAP mode still be required once your suggested change of teaching do_mpage_readpage() about unwritten extents is in place? Or would you still like to keep the FA_MKSWAP mode?
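[Background on why an FA_MKSWAP mode was floated at all: swapon(2) requires every block of a swap file to be physically allocated, with no holes and no unwritten extents, which is why the traditional recipe writes the whole file out with dd before running mkswap. A minimal C equivalent of that zero-writing step is sketched below; the function name is an arbitrary example, and an FA_MKSWAP-style preallocation would provide the same hole-free guarantee without doing the writes:]

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Create `path` and physically write `len` bytes of zeroes in 64k
 * chunks, so the file ends up with no holes or unwritten extents,
 * which is the property swapon(2) needs.  Returns the number of
 * bytes written, or -1 on error. */
long long write_zero_file(const char *path, long long len)
{
    char buf[65536];
    long long done = 0;
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);

    if (fd < 0)
        return -1;
    memset(buf, 0, sizeof(buf));
    while (done < len) {
        size_t chunk = sizeof(buf);
        if (len - done < (long long)chunk)
            chunk = (size_t)(len - done);
        ssize_t n = write(fd, buf, chunk);
        if (n <= 0) {
            close(fd);
            return -1;
        }
        done += n;
    }
    /* Make sure the blocks actually hit the disk before mkswap. */
    if (fsync(fd) < 0 || close(fd) < 0)
        return -1;
    return done;
}
```

[After this, mkswap and swapon would be run on the file as usual; a file produced only by preallocating unwritten extents would instead need the do_mpage_readpage() change or an FA_MKSWAP-style mode discussed above.]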
--
Regards,
Amit Arora

From owner-xfs@oss.sgi.com Thu Jun 28 11:33:54 2007
From: Andrew Morton <akpm@linux-foundation.org>
To: "Amit K. Arora"
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner, Andreas Dilger, suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com
Subject: Re: [PATCH 0/6][TAKE5] fallocate system call
Date: Thu, 28 Jun 2007 11:33:42 -0700
Message-Id: <20070628113342.c9c0f49c.akpm@linux-foundation.org>

On Thu, 28 Jun 2007 23:27:57 +0530 "Amit K. Arora" wrote:

> > Please drop the non-ext4 patches from the ext4 tree and send incremental
> > patches against the (non-ext4) fallocate patches in -mm.
>
> Please let us know what you think of Mingming's suggestion of posting
> all the fallocate patches, including the ext4 ones, as incremental ones
> against -mm.

I think Mingming was asking that Ted move the current quilt tree into git, presumably because she's working off git.

I'm not sure what to do, really. The core kernel patches need to be in Ted's tree for testing, but that'll create a mess for me.

ug.
Options might be:

a) I drop the fallocate patches from -mm and from the ext4 tree, hack up
   any needed build fixes, then just wait for it all to mature and then
   think about it again

b) We do what we normally don't do and reserve the syscall slots in mainline.

From owner-xfs@oss.sgi.com Thu Jun 28 11:57:55 2007
From: Jeff Garzik <jeff@garzik.org>
To: Andrew Morton
Cc: "Amit K. Arora", linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner, Andreas Dilger, suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com
Subject: Re: [PATCH 0/6][TAKE5] fallocate system call
Date: Thu, 28 Jun 2007 14:57:36 -0400
Message-ID: <468404A0.503@garzik.org>

Andrew Morton wrote:
> b) We do what we normally don't do and reserve the syscall slots in mainline.

If everyone agrees it's going to happen... why not?
	Jeff

From owner-xfs@oss.sgi.com Thu Jun 28 12:10:16 2007
From: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
To: Andrew Morton
Cc: "Amit K. Arora", linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner, Andreas Dilger, suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com
Subject: Re: [PATCH 0/6][TAKE5] fallocate system call
Date: Thu, 28 Jun 2007 13:45:02 -0500
Message-Id: <1183056302.9880.28.camel@kleikamp.austin.ibm.com>

On Thu, 2007-06-28 at 11:33 -0700, Andrew Morton wrote:
> On Thu, 28 Jun 2007 23:27:57 +0530 "Amit K. Arora" wrote:
>
> > > Please drop the non-ext4 patches from the ext4 tree and send incremental
> > > patches against the (non-ext4) fallocate patches in -mm.
> >
> > Please let us know what you think of Mingming's suggestion of posting
> > all the fallocate patches, including the ext4 ones, as incremental ones
> > against -mm.
>
> I think Mingming was asking that Ted move the current quilt tree into git,
> presumably because she's working off git.

I moved the fallocate patches to the very end of the series in the quilt tree.
This way the patches will be in the quilt tree for testing, but Ted can easily leave them out of the git tree, so you and Linus won't pull them with the ext4 patches. Fortunately, the ext4-specific fallocate patches don't conflict with the other patches in the queue, so they can (at least for now) be handled independently in the -mm tree.

> I'm not sure what to do, really. The core kernel patches need to be in
> Ted's tree for testing but that'll create a mess for me.
>
> ug.
>
> Options might be
>
> a) I drop the fallocate patches from -mm and from the ext4 tree, hack up
>    any needed build fixes, then just wait for it all to mature and then
>    think about it again
>
> b) We do what we normally don't do and reserve the syscall slots in mainline.

--
David Kleikamp
IBM Linux Technology Center

From owner-xfs@oss.sgi.com Thu Jun 28 13:34:27 2007
From: Andreas Dilger <adilger@clusterfs.com>
To: "Amit K. Arora"
Cc: Andrew Morton, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner, suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com
Subject: Re: [PATCH 0/6][TAKE5] fallocate system call
Date: Thu, 28 Jun 2007 16:34:25 -0400
Message-ID: <20070628203425.GB5789@schatzie.adilger.int>

On Jun 28, 2007 23:27 +0530, Amit K. Arora wrote:
> On Thu, Jun 28, 2007 at 02:55:43AM -0700, Andrew Morton wrote:
> > Are we all supposed to re-review the entire patchset (or at least #4 and
> > #7) again?
>
> As I mentioned in the note above, only patches #4 and #7 were new and
> thus these needed to be reviewed. Other patches are _not_ replacements
> of any of the patches which are already part of -mm and/or in Ted's
> patch queue. They were posted again just as "placeholders" so that the
> two new patches (#4 & #7) could be reviewed. Sorry for any confusion.

The new patches are definitely a big improvement over the previous API, and need to go in before fallocate() goes into mainline. This last set of changes allows the behaviour of these syscalls to accommodate the various different semantics desired by XFS in a sensible manner, instead of tying all of the individual behaviours (time update, size update, alloc/free, etc.) into monolithic modes that will never make everyone happy.

My understanding is that you only need to grab #4 and #7 to get fallocate in your tree in sync with the ext4 patch queue (i.e. they are incremental over the previous set).

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.

From owner-xfs@oss.sgi.com Thu Jun 28 13:38:58 2007
From: Eric Sandeen <sandeen@sandeen.net>
To: Just Marc
Cc: xfs@oss.sgi.com
Subject: Re: xfs_fsr, performance related tweaks
Date: Thu, 28 Jun 2007 16:38:56 -0400
Message-ID: <46841C60.5030207@sandeen.net>
Just Marc wrote:
> 2. Files for which 'No improvement will be made' should also be marked
> as no-defrag, this will avoid a ton of extra work in the future.

But... that file could drastically change in the future, no? Just because it can't be improved now doesn't necessarily mean that it should never be revisited on subsequent runs, does it?

-Eric

From owner-xfs@oss.sgi.com Thu Jun 28 15:00:53 2007
From: Pavel Machek <pavel@ucw.cz>
To: "Rafael J. Wysocki"
Cc: David Chinner, linux-pm, linux-kernel@vger.kernel.org, xfs@oss.sgi.com, LinuxRaid, LVM general discussion and development, David Robinson, David Greaves
Subject: Re: [linux-pm] Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume
Date: Fri, 29 Jun 2007 00:00:45 +0200
Message-ID: <20070628220045.GA4521@elf.ucw.cz>

On Thu 2007-06-28 17:27:34, Rafael J. Wysocki wrote:
> On Wednesday, 27 June 2007 22:49, Pavel Machek wrote:
> > Hi!
> >
> > > FWIW, I'm on record stating that "sync" is not sufficient to quiesce an XFS
> > > filesystem for a suspend/resume to work safely, and have argued that the only
> >
> > Hmm, so XFS writes to disk even when its threads are frozen?
> >
> > > safe thing to do is freeze the filesystem before suspend and thaw it after
> > > resume. This is why I originally asked you to test that with the other problem
> >
> > Could you add that to the XFS threads if it is really required? They
> > do know that they are being frozen for suspend.
>
> Well, do you remember the workqueues? They are still nonfreezable.

Oops, that would explain it :-(. Can we make XFS stop using them?
	Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

From owner-xfs@oss.sgi.com Thu Jun 28 15:02:32 2007
From: David Chinner <dgc@sgi.com>
To: Timothy Shimmin
Cc: David Chinner, Szabolcs Illes, xfs@oss.sgi.com
Subject: Re: After reboot fs with barrier faster deletes then fs with nobarrier
Date: Fri, 29 Jun 2007 08:02:26 +1000
Message-ID: <20070628220225.GB31489@sgi.com>

On Thu, Jun 28, 2007 at 03:00:46PM +1000, Timothy Shimmin wrote:
> David Chinner wrote:
> > On Wed, Jun 27, 2007 at 06:58:29PM +0100, Szabolcs Illes wrote:
> > > Hi,
> > >
> > > I am using XFS on my laptop. I have realized that the nobarrier mount
> > > option sometimes slows down deleting large numbers of small files, like
> > > the kernel source tree. I made four tests, deleting the kernel source
> > > right after unpack and after reboot, with both barrier and nobarrier
> > > options:
> > >
> > > mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2
> > > illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync && reboot
> > > After reboot:
> > > illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/
> > > real 0m28.127s
> > > user 0m0.044s
> > > sys  0m2.924s
> > >
> > > mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2,nobarrier
> > > illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync && reboot
> > > After reboot:
> > > illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/
> > > real 1m12.738s
> > > user 0m0.032s
> > > sys  0m2.548s
> > >
> > > It looks like with barrier it's faster deleting files after reboot
> > > (28 sec vs 72 sec!!!).
> >
> > Of course the second run will be faster here - the inodes are already in
> > cache and so there's no reading from disk needed to find the files
> > to delete....
> >
> > That's because run time after reboot is determined by how fast you
> > can traverse the directory structure (i.e. how many seeks are
> > involved). Barriers will have little impact on the uncached rm -rf
> > results,
>
> But it looks like barriers _are_ having an impact on the uncached rm -rf
> results.

Tim, please be careful with what you quote - you've quoted a different set of results from the ones I commented on, and that takes my comments way out of context. In hindsight, I should have phrased it as "barriers _should_ have little impact on uncached rm -rf results."
We've seen little impact in the past, and it's always been a decrease in performance, so what we need to find out is how they are having an impact here. I suspect it's to do with the drive's cache control algorithms: barriers substantially reduce the amount of dirty data being cached, and hence read caching works efficiently as a side effect. I guess the only way to confirm this is blktrace output, to see which I/Os are taking longer to execute when barriers are disabled.

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Thu Jun 28 15:05:55 2007
From: David Chinner <dgc@sgi.com>
To: Justin Piszcz
Cc: Peter Rabbitson, linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz
Subject: Re: Fastest Chunk Size w/XFS For MD Software RAID = 1024k
Date: Fri, 29 Jun 2007 08:05:37 +1000
Message-ID: <20070628220537.GC31489@sgi.com>

On Thu, Jun 28, 2007 at 04:27:15AM -0400, Justin Piszcz wrote:
> On Thu, 28 Jun 2007, Peter Rabbitson wrote:
> > Justin Piszcz wrote:
> > > mdadm --create \
> > >   --verbose /dev/md3 \
> > >   --level=5 \
> > >   --raid-devices=10 \
> > >   --chunk=1024 \
> > >   --force \
> > >   --run \
> > >   /dev/sd[cdefghijkl]1
> > >
> > > Justin.
> >
> > Interesting, I came up with the same results (1M chunk being superior)
> > with a completely different raid set with XFS on top:
> >
> > mdadm --create \
> >   --level=10 \
> >   --chunk=1024 \
> >   --raid-devices=4 \
> >   --layout=f3 \
> >   ...
> >
> > Could it be attributed to XFS itself?

More likely it's related to the I/O size being sent to the disks. The larger the chunk size, the larger the I/O hitting each disk. I think the maximum I/O size is 512k at the moment on x86(_64), so a chunk of 1MB will guarantee that maximally sized I/Os are being sent to the disk....

Cheers,

Dave.
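Dave's reasoning is simple arithmetic and can be made concrete. The 512 KiB per-disk cap and the RAID5 geometry (10 devices, 1024 KiB chunk) come from the thread; the helper names are illustrative:

```python
def per_disk_io_kib(chunk_kib, max_io_kib=512):
    """Largest single request a member disk can receive for one chunk,
    given the platform's maximum I/O size."""
    return min(chunk_kib, max_io_kib)

def full_stripe_kib(chunk_kib, raid_devices, parity_disks=1):
    """Data carried by one full RAID5 stripe (parity excluded)."""
    return chunk_kib * (raid_devices - parity_disks)

# Justin's array: RAID5, 10 devices, 1024 KiB chunk
print(per_disk_io_kib(1024))      # 512  -> every disk sees maximally sized I/Os
print(full_stripe_kib(1024, 10))  # 9216 -> 9 MiB of data per full stripe
```

Any chunk at or above the 512 KiB cap keeps each member disk receiving maximally sized requests, which is the effect being credited for the 1 MiB chunk's performance.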
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Thu Jun 28 15:09:27 2007
From: "Rafael J. Wysocki" <rjw@sisk.pl>
To: Pavel Machek
Cc: David Chinner, linux-pm, linux-kernel@vger.kernel.org, xfs@oss.sgi.com, LinuxRaid, LVM general discussion and development, David Robinson, David Greaves, Oleg Nesterov
Subject: Re: [linux-pm] Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume
Date: Fri, 29 Jun 2007 00:16:44 +0200
Message-Id: <200706290016.45719.rjw@sisk.pl>

On Friday, 29 June 2007 00:00, Pavel Machek wrote:
> On Thu 2007-06-28 17:27:34, Rafael J. Wysocki wrote:
> > On Wednesday, 27 June 2007 22:49, Pavel Machek wrote:
> > > Hi!
> > >
> > > > FWIW, I'm on record stating that "sync" is not sufficient to quiesce an XFS
> > > > filesystem for a suspend/resume to work safely, and have argued that the only
> > >
> > > Hmm, so XFS writes to disk even when its threads are frozen?
> > >
> > > > safe thing to do is freeze the filesystem before suspend and thaw it after
> > > > resume. This is why I originally asked you to test that with the other problem
> > >
> > > Could you add that to the XFS threads if it is really required? They
> > > do know that they are being frozen for suspend.
> >
> > Well, do you remember the workqueues? They are still nonfreezable.
>
> Oops, that would explain it :-(. Can we make XFS stop using them?

I'm afraid that we can't. There are two solutions possible, IMO.
One would be to make these workqueues freezable, which is possible but hacky, and Oleg didn't like that very much. The second would be to freeze XFS from within the hibernation code path, using freeze_bdev().

Greetings,
Rafael
--
"Premature optimization is the root of all evil." - Donald Knuth

From owner-xfs@oss.sgi.com Thu Jun 28 16:35:43 2007
From: Nathan Scott <nscott@aconex.com>
To: IT Fintan
Cc: xfs@oss.sgi.com
Subject: Re: project quotas fstab
Date: Fri, 29 Jun 2007 09:34:41 +1000
Message-Id: <1183073681.15488.131.camel@edge.yarra.acx>

On Thu, 2007-06-28 at 17:01 +0200, IT Fintan wrote:
> hi all,
>
> question: how to load the project quotas directly in fstab?
> there isn't any documentation! just for group or user quotas.

The mount options are described in Documentation/filesystems/xfs.txt (in the kernel source); the mount(8) man page is probably out of date in this area.

cheers.
--
Nathan

From owner-xfs@oss.sgi.com Thu Jun 28 16:40:40 2007
From: Nathan Scott <nscott@aconex.com>
To: "Amit K. Arora"
Cc: David Chinner, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com
Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate
Date: Fri, 29 Jun 2007 09:39:40 +1000
Message-Id: <1183073980.15488.134.camel@edge.yarra.acx>
X-Virus-Status: Clean X-archive-position: 12005 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Thu, 2007-06-28 at 23:49 +0530, Amit K. Arora wrote: > > > Correct, but for swap files that's not an issue - no user should be > able > > too read them, and FA_MKSWAP would really need root privileges to > execute. > > Will the FA_MKSWAP mode still be required with your suggested change > of > teaching do_mpage_readpage() about unwritten extents being in place ? > Or, will you still like to have FA_MKSWAP mode ? There's no need for a MKSWAP flag. cheers. -- Nathan From owner-xfs@oss.sgi.com Thu Jun 28 16:57:29 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 16:57:35 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.5 required=5.0 tests=AWL,BAYES_80 autolearn=no version=3.2.0-pre1-r499012 Received: from mx2.suse.de (mx2.suse.de [195.135.220.15]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5SNvStL022705 for ; Thu, 28 Jun 2007 16:57:29 -0700 Received: from Relay2.suse.de (mail2.suse.de [195.135.221.8]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx2.suse.de (Postfix) with ESMTP id EFE0921662; Fri, 29 Jun 2007 01:57:22 +0200 (CEST) To: Eric Sandeen Cc: Just Marc , xfs@oss.sgi.com Subject: Re: xfs_fsr, performance related tweaks References: <4683ADEB.3010106@corky.net> <46841C60.5030207@sandeen.net> From: Andi Kleen Date: 29 Jun 2007 02:52:57 +0200 In-Reply-To: <46841C60.5030207@sandeen.net> Message-ID: Lines: 24 User-Agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.3 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12006 X-ecartis-version: Ecartis 
v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: andi@firstfloor.org Precedence: bulk X-list: xfs Eric Sandeen writes: > Just Marc wrote: > > > 2. Files for which 'No improvement will be made' should also be marked > > as no-defrag, this will avoid a ton of extra work in the future. > > But... that file could drastically change in the future, no? Just > because it can't be improved now doesn't necessarily mean that it should > never be revisited on subsequent runs, does it? I guess one could define an additional dont-defrag (or perhaps rather already-defragmented) flag that is always cleared when the file changes. That could be safely set here. But then I'm not sure it would be worth the effort. Why would you run fsr so often that it matters? Also, I would expect that in many cases one can easily detect an already defragmented file just by looking at the number of extents in the inode, which would make it equivalent to the flag. The cases where that doesn't hold are probably rare too.
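The extent-count heuristic suggested above could be sketched as follows (a sketch, not actual xfs_fsr code — on a live system the extent count would come from the fsx_nextents field returned by the XFS_IOC_FSGETXATTR ioctl; the helper name is made up for illustration):

```c
/*
 * Sketch of the heuristic: treat a file as already defragmented when
 * its data fits in a single extent.  In real fsr code "nextents" would
 * be fsx_nextents as returned by ioctl(fd, XFS_IOC_FSGETXATTR, &fsx);
 * this helper only shows the decision itself.
 */
int already_defragmented(unsigned int nextents)
{
	/* 0 extents == empty file, 1 extent == fully contiguous */
	return nextents <= 1;
}
```

A file reported with a single extent can simply be skipped on subsequent runs, giving roughly the same effect as a persistent already-defragmented flag without having to store one on disk.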
-Andi From owner-xfs@oss.sgi.com Thu Jun 28 17:13:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 17:13:13 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50,J_CHICKENPOX_64 autolearn=no version=3.2.0-pre1-r499012 Received: from postoffice.aconex.com (mail.app.aconex.com [203.89.192.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5T0D7tL028948 for ; Thu, 28 Jun 2007 17:13:08 -0700 Received: from edge.yarra.acx (unknown [203.89.192.141]) by postoffice.aconex.com (Postfix) with ESMTP id 2698292C46D; Fri, 29 Jun 2007 10:13:09 +1000 (EST) Subject: Re: xfs_fsr, performance related tweaks From: Nathan Scott Reply-To: nscott@aconex.com To: Just Marc Cc: xfs@oss.sgi.com In-Reply-To: <4683ADF5.9050901@corky.net> References: <4683ADF5.9050901@corky.net> Content-Type: text/plain Organization: Aconex Date: Fri, 29 Jun 2007 10:12:09 +1000 Message-Id: <1183075929.15488.148.camel@edge.yarra.acx> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12007 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nscott@aconex.com Precedence: bulk X-list: xfs On Thu, 2007-06-28 at 13:47 +0100, Just Marc wrote: > ... > I looked at how the 'io' utility marks files as no-defrag; it calls > xfsctl(path, fd, XFS_IOC_FSSETXATTR, &attr); -- this requires a path, > which is not available to me when fsr is run in its default mode > which > traverses all xfs filesystems rather than gets to run on a single > file. Just call the ioctl directly - fsr is already doing this in a bunch of places (even has a call to XFS_IOC_FSSETXATTR already, elsewhere). 
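For illustration, the fd-only approach described here could look like the sketch below (not the actual fsr patch; the struct and function names are made up, though the ioctl numbers and the XFS_XFLAG_NODEFRAG value follow the Linux xfs_fs.h definitions of this era):

```c
/*
 * Sketch: set the no-defrag flag with only an open file descriptor by
 * issuing the XFS ioctls directly, rather than going through the
 * path-based xfsctl() wrapper.  Constants and struct layout follow the
 * Linux xfs_fs.h of this era; names here are illustrative, not fsr's.
 */
#include <sys/ioctl.h>

#define XFS_XFLAG_NODEFRAG	0x00002000	/* do not defragment */

struct sketch_fsxattr {			/* abbreviated struct fsxattr */
	unsigned int	fsx_xflags;	/* xflags field value */
	unsigned int	fsx_extsize;	/* extsize field value */
	unsigned int	fsx_nextents;	/* nextents field value */
	unsigned char	fsx_pad[16];
};

#define SKETCH_FSGETXATTR	_IOR('X', 31, struct sketch_fsxattr)
#define SKETCH_FSSETXATTR	_IOW('X', 32, struct sketch_fsxattr)

/* Pure helper so the flag update is visible (and testable) in one place. */
unsigned int xflags_with_nodefrag(unsigned int xflags)
{
	return xflags | XFS_XFLAG_NODEFRAG;
}

/* Read-modify-write the extended flags through the fd alone. */
int mark_nodefrag(int fd)
{
	struct sketch_fsxattr fsx;

	if (ioctl(fd, SKETCH_FSGETXATTR, &fsx) < 0)
		return -1;
	fsx.fsx_xflags = xflags_with_nodefrag(fsx.fsx_xflags);
	return ioctl(fd, SKETCH_FSSETXATTR, &fsx);
}
```

On Linux the file descriptor is all the ioctl needs, so this works from fsr's filesystem traversal where no pathname is handy.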
The xfsctl wrapper is just to give some tools platform independence - on IRIX (shares xfs_io code) some of the syscalls take paths, but on Linux only file descriptors are used. cheers. -- Nathan From owner-xfs@oss.sgi.com Thu Jun 28 17:17:00 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 17:17:04 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50,J_CHICKENPOX_27 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5T0GutL030526 for ; Thu, 28 Jun 2007 17:16:58 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA26093; Fri, 29 Jun 2007 10:16:52 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5T0GoeW4668825; Fri, 29 Jun 2007 10:16:51 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5T0Gmwo4669971; Fri, 29 Jun 2007 10:16:48 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 29 Jun 2007 10:16:48 +1000 From: David Chinner To: Szabolcs Illes Cc: xfs@oss.sgi.com Subject: Re: After reboot fs with barrier faster deletes then fs with nobarrier Message-ID: <20070629001648.GD31489@sgi.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12008 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk 
X-list: xfs On Wed, Jun 27, 2007 at 06:58:29PM +0100, Szabolcs Illes wrote: > Hi, > > I am using XFS on my laptop, I have realized that nobarrier mount options > sometimes slows down deleting large number of small files, like the kernel > source tree. I made four tests, deleting the kernel source right after > unpack and after reboot, with both barrier and nobarrier options: > > mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2 FWIW, I bet these mount options have something to do with the issue. Here's the disk I'm testing against - 36GB 10krpm u160 SCSI:

<5>[ 25.427907] sd 0:0:2:0: [sdb] 71687372 512-byte hardware sectors (36704 MB)
<5>[ 25.440393] sd 0:0:2:0: [sdb] Write Protect is off
<7>[ 25.441276] sd 0:0:2:0: [sdb] Mode Sense: ab 00 10 08
<5>[ 25.442662] sd 0:0:2:0: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
<6>[ 25.446992] sdb: sdb1 sdb2 sdb3 sdb4 sdb5 sdb6 sdb7 sdb8 sdb9

Note - read cache is enabled, write cache is disabled, so barriers cause only a FUA; i.e. the only bubble in the I/O pipeline that barriers cause is in the elevator and the SCSI command queue. The disk is capable of about 30MB/s on the inner edge. Mount options are default (so logbsize=32k,logbufs=8), mkfs options are default, 4GB partition on the inner (slow) edge of the disk. Kernel is 2.6.22-rc4 with all debug and tracing options turned on, on ia64. For this config, I see:

             barrier   nobarrier
hot cache      22s       14s
cold cache     21s       20s

In this case, barriers have little impact on cold cache behaviour, and the difference in hot cache behaviour is probably because FUA is used on barrier writes (i.e. no combining of sequential log I/Os in the elevator). The difference in I/O behaviour between hot cache and cold cache during the rm -rf is that there are zero read I/Os with a hot cache, and 50-100 read I/Os per second with a cold cache, which is easily within the capability of this drive.
After turning on the write cache with:

# sdparm -s WCE -S /dev/sdb
# reboot

[ 25.717942] sd 0:0:2:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA

I get:

                                      barrier   nobarrier
logbsize=32k,logbufs=8:  hot cache      24s       11s
logbsize=32k,logbufs=8:  cold cache     33s       16s
logbsize=256k,logbufs=8: hot cache      10s       10s
logbsize=256k,logbufs=8: cold cache     16s       16s
logbsize=256k,logbufs=2: hot cache      11s        9s
logbsize=256k,logbufs=2: cold cache     17s       13s

Out of the box, barriers are 50% slower with WCE=1 than with WCE=0 on the cold cache test, but are almost as fast with a larger log buffer size (i.e. fewer barrier writes being issued). Worth noting is that at 10-11s runtime the disk is bandwidth bound (i.e. we're doing 30MB/s), so that's the fastest time rm -rf will achieve on this filesystem. So, clearly we have differing performance depending on mount options, and at best barriers give equal performance. I just ran the same tests on an x86_64 box with 7.2krpm 500GB SATA disks with WCE (2.6.18 kernel) using a 30GB partition on the outer edge:

                                      barrier   nobarrier
logbsize=32k,logbufs=8:  hot cache      29s       29s
logbsize=32k,logbufs=8:  cold cache     33s       30s
logbsize=256k,logbufs=8: hot cache       8s        8s
logbsize=256k,logbufs=8: cold cache     11s       11s
logbsize=256k,logbufs=2: hot cache       8s        8s
logbsize=256k,logbufs=2: cold cache     11s       11s

Barriers make little to zero difference here. > Can anyone explain this? Right now I'm unable to reproduce your results even on 2.6.18, so I suspect a drive-level issue here. Can I suggest that you try the same tests with write caching turned off on the drive(s)? (hdparm -W 0, IIRC). Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 28 17:35:54 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 17:35:59 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5T0ZptL002745 for ; Thu, 28 Jun 2007 17:35:53 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA26650; Fri, 29 Jun 2007 10:35:48 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5T0ZleW4678042; Fri, 29 Jun 2007 10:35:47 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5T0ZjbD4675635; Fri, 29 Jun 2007 10:35:45 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 29 Jun 2007 10:35:45 +1000 From: David Chinner To: Ruben Porras Cc: xfs@oss.sgi.com Subject: Re: [PATCH] Implement ioctl to mark AGs as "don't use/use" Message-ID: <20070629003545.GE31489@sgi.com> References: <1182939325.5313.12.camel@localhost> <20070628045049.GF989688@sgi.com> <46838CAE.9030808@linworks.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <46838CAE.9030808@linworks.de> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12009 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Jun 28, 
2007 at 12:25:50PM +0200, Ruben Porras wrote: > David Chinner wrote: > Ok, thank you for the explanation, I think that now I got it right. > Attached is a new patch. Ok, I'll have a look in a little while ;) > There is one question that I would like to ask: when you sketched the > xfs_alloc_set_flag_ag function, you put inside it the call to the > function xfs_alloc_log_agf (see next code snippet).
>
> STATIC void
> xfs_alloc_set_flag_ag(
> 	xfs_trans_t	*tp,
> 	xfs_buf_t	*agbp,	/* buffer for a.g. freelist header */
> 	xfs_perag_t	*pag,
> 	int		flag)
> {
> 	xfs_agf_t	*agf;	/* a.g. freespace structure */
>
> 	agf = XFS_BUF_TO_AGF(agbp);
> 	pag->pagf_flags |= flag;
> 	agf->agf_flags = cpu_to_be32(pag->pagf_flags);
>
> 	xfs_alloc_log_agf(tp, agbp, XFS_AGF_FLAGS);	<-- ***** FROM HERE
> }
>
> is it required to do the transaction log right after the change or can it be > done in the caller function right after calling xfs_alloc_set_flag_ag?
>
> For example
>
> caller(...)
> {
> 	xfs_alloc_set_flag_ag(tp, bp, pag, XFS_AGFLAG_ALLOC_DENY);
>
> 	<-- **** TO HERE
>
> 	xfs_trans_set_sync(tp);
> 	xfs_trans_commit(tp, 0);
> }

Yes, you could do that, but I don't think it makes sense. We log the dirty buffer in the context in which it got dirtied, and in this case that is xfs_alloc_set_flag_ag(). It also saves having to remember that you have to call xfs_alloc_log_agf() after every call to xfs_alloc_set_flag_ag(). If you need to change multiple flags, batch them up and do a single xfs_alloc_set_flag_ag() call... Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 28 17:54:25 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 17:54:30 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5T0sMtL008407 for ; Thu, 28 Jun 2007 17:54:24 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA27031; Fri, 29 Jun 2007 10:54:19 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5T0sIeW4687774; Fri, 29 Jun 2007 10:54:19 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5T0sHbH4686956; Fri, 29 Jun 2007 10:54:17 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 29 Jun 2007 10:54:17 +1000 From: David Chinner To: Ruben Porras Cc: xfs@oss.sgi.com Subject: Re: [PATCH] Implement ioctl to mark AGs as "don't use/use" Message-ID: <20070629005417.GF31489@sgi.com> References: <1182939325.5313.12.camel@localhost> <20070628045049.GF989688@sgi.com> <46838CAE.9030808@linworks.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <46838CAE.9030808@linworks.de> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12010 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Jun 28, 
2007 at 12:25:50PM +0200, Ruben Porras wrote: > Ok, thank you for the explanation, I think that now I got it right. > Attached is a new patch. Only one minor nit: > @@ -558,6 +559,17 @@ > ASSERT(args->minlen <= args->maxlen); > ASSERT(args->mod < args->prod); > ASSERT(args->alignment > 0); > + > + /* > + * Return an error if the a.g. should not be allocated. > + * This happens normally during a shrink operation. > + */ > + pag = (args->pag); Kill the () here. Otherwise, looks good. OOC, do you have any test code for this? xfs_io would be the tool to teach this ioctl to.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 28 18:03:59 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 18:04:03 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5T13utL012146 for ; Thu, 28 Jun 2007 18:03:58 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA27301; Fri, 29 Jun 2007 11:03:38 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5T13XeW4691785; Fri, 29 Jun 2007 11:03:34 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5T13PZS4690632; Fri, 29 Jun 2007 11:03:25 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 29 Jun 2007 11:03:25 +1000 From: David Chinner To: "Amit K. 
Arora" Cc: David Chinner , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070629010325.GG31489@sgi.com> References: <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625150320.GA8686@amitarora.in.ibm.com> <20070625214626.GJ5181@schatzie.adilger.int> <20070626103247.GA19870@amitarora.in.ibm.com> <20070626153413.GC6652@schatzie.adilger.int> <20070626231803.GQ31489@sgi.com> <20070628181913.GC1674@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070628181913.GC1674@amitarora.in.ibm.com> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12011 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Jun 28, 2007 at 11:49:13PM +0530, Amit K. Arora wrote: > On Wed, Jun 27, 2007 at 09:18:04AM +1000, David Chinner wrote: > > On Tue, Jun 26, 2007 at 11:34:13AM -0400, Andreas Dilger wrote: > > > On Jun 26, 2007 16:02 +0530, Amit K. Arora wrote: > > > > On Mon, Jun 25, 2007 at 03:46:26PM -0600, Andreas Dilger wrote: > > > > > Can you clarify - what is the current behaviour when ENOSPC (or some other > > > > > error) is hit? Does it keep the current fallocate() or does it free it? > > > > > > > > Currently it is left on the file system implementation. In ext4, we do > > > > not undo preallocation if some error (say, ENOSPC) is hit. Hence it may > > > > end up with partial (pre)allocation. This is inline with dd and > > > > posix_fallocate, which also do not free the partially allocated space. 
> > > Since I believe the XFS allocation ioctls do it the opposite way (free > > > preallocated space on error) this should be encoded into the flags. > > > Having it "filesystem dependent" just means that nobody will be happy. > > > > No, XFS does not free preallocated space on error. It is up to the > > application to clean up. > > Since XFS also does not free preallocated space on error and this > behavior is in line with dd, posix_fallocate() and the current ext4 > implementation, do we still need the FA_FL_FREE_ENOSPC flag? Not at the moment. > > > What I mean is that any data read from the file should have the "appearance" > > > of being zeroed (whether zeroes are actually written to disk or not). What > > > I _think_ David is proposing is to allow fallocate() to return without > > > marking the blocks even "uninitialized" and subsequent reads would return > > > the old data from the disk. > > > > Correct, but for swap files that's not an issue - no user should be able > > to read them, and FA_MKSWAP would really need root privileges to execute. > > Will the FA_MKSWAP mode still be required with your suggested change of > teaching do_mpage_readpage() about unwritten extents being in place? > Or, would you still like to have the FA_MKSWAP mode?
budgie:/mnt/test # xfs_io -f -c "resvsp 0 1048576" -c "truncate 1048576" swap_file
budgie:/mnt/test # mkswap swap_file
Setting up swapspace version 1, size = 1032 kB
budgie:/mnt/test # swapon -v swap_file
swapon on swap_file
budgie:/mnt/test # swapon -s
Filename                        Type            Size    Used    Priority
/dev/sda2                       partition       9437152 0       -1
/mnt/test/swap_file             file            992     0       -2
budgie:/mnt/test # xfs_bmap -vvp swap_file
swap_file:
 EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL FLAGS
   0: [0..31]:         96..127           0 (96..127)           32
   1: [32..2047]:      128..2143         0 (128..2143)       2016 10000
 FLAG Values:
    010000 Unwritten preallocated extent
    001000 Doesn't begin on stripe unit
    000100 Doesn't end on stripe unit
    000010 Doesn't begin on stripe width
    000001 Doesn't end on stripe width

Looks like the changes work, so FA_MKSWAP is not necessary for XFS. We can drop that for the moment unless anyone else sees a need for it. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 28 18:48:31 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 18:48:36 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.5 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_32, J_CHICKENPOX_44,J_CHICKENPOX_45,J_CHICKENPOX_46,J_CHICKENPOX_62, J_CHICKENPOX_63 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5T1mPtL027484 for ; Thu, 28 Jun 2007 18:48:27 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA28469; Fri, 29 Jun 2007 11:48:20 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5T1mJeW4712004; Fri, 29 Jun 2007 11:48:20 +1000 (AEST) Received: (from dgc@localhost) by
snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5T1mIPs4711972; Fri, 29 Jun 2007 11:48:18 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 29 Jun 2007 11:48:18 +1000 From: David Chinner To: xfs-dev Cc: xfs-oss Subject: Review: Concurrent Filestreams V4 Message-ID: <20070629014818.GI31489@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12012 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Concurrent Multi-File Data Streams In media spaces, video is often stored in a frame-per-file format. When dealing with uncompressed realtime HD video streams in this format, it is crucial that files do not get fragmented and that multiple files are placed contiguously on disk. When multiple streams are being ingested and played out at the same time, it is critical that the filesystem does not cross the streams and interleave them together, as this creates seek and readahead cache miss latency and prevents both ingest and playout from meeting frame rate targets. This patch introduces a "stream of files" concept in the allocator to place all the data from a single stream contiguously on disk so that RAID array readahead can be used effectively. Each additional stream gets placed in a different allocation group within the filesystem, thereby ensuring that we don't cross any streams. When an AG fills up, we select a new AG for the stream that is not in use. The core of the functionality is the stream tracking - each inode that we create in a directory needs to be associated with the directory's stream.
Hence every time we create a file, we look up the directory's stream object and associate the new file with that object. Once we have a stream object for a file, we use the AG that the stream object points to for allocations. If we can't allocate in that AG (e.g. it is full) we move the entire stream to another AG. Other inodes in the same stream are moved to the new AG on their next allocation (i.e. lazy update). Stream objects are kept in a cache and hold a reference on the inode. Hence the inode cannot be reclaimed while there is an outstanding stream reference. This means that on unlink we need to remove the stream association, and we also need to flush all the associations on certain events that want to reclaim all unreferenced inodes (e.g. filesystem freeze). Credits: The original filestream allocator on Irix was written by Glen Overby, the Linux port and rewrite by Nathan Scott and Sam Vaughan (none of whom work at SGI any more). I just picked up the pieces and beat it repeatedly with a big stick until it passed XFSQA. Version 4: o cleanup code in xfs_bmap_btalloc o add comments to xfs_bmap_btalloc o moved comments from xfs_mru_cache.h to xfs_mru_cache.c so functions are documented rather than their prototypes o fixed use-after-free in tracing code o fixed xfs_release merge screwup o fixed ABBA deadlock on the directory inode in xfs_filestream_associate during xfs_freeze Version 3: o use proper define for mount args o make filestreams inode flag mark child inodes correctly so that filestreams are applied to them even if they are not tagged o split quota inode filestreams avoidance out into a separate patch. o move xfs_close() hooks for stream destruction on unlink to xfs_release(). Version 2: o fold xfs_bmap_filestream() into xfs_bmap_btalloc() o use ktrace infrastructure for debug code in xfs_filestream.c o wrap repeated filestream inode checks.
o rename per-AG filestream reference counting macros and convert to static inline o remove debug from xfs_mru_cache.[ch] o fix function call/error check formatting. o removed unnecessary fstrm_mnt_data_t structure. o cleaned up ASSERT checks o cleaned up namespace-less globals in xfs_mru_cache.c o removed unnecessary casts --- fs/xfs/Makefile-linux-2.6 | 2 fs/xfs/linux-2.6/xfs_globals.c | 1 fs/xfs/linux-2.6/xfs_linux.h | 1 fs/xfs/linux-2.6/xfs_sysctl.c | 11 fs/xfs/linux-2.6/xfs_sysctl.h | 2 fs/xfs/xfs.h | 1 fs/xfs/xfs_ag.h | 1 fs/xfs/xfs_bmap.c | 69 +++ fs/xfs/xfs_clnt.h | 2 fs/xfs/xfs_dinode.h | 4 fs/xfs/xfs_filestream.c | 771 +++++++++++++++++++++++++++++++++++++++++ fs/xfs/xfs_filestream.h | 136 +++++++ fs/xfs/xfs_fs.h | 1 fs/xfs/xfs_fsops.c | 2 fs/xfs/xfs_inode.c | 17 fs/xfs/xfs_inode.h | 1 fs/xfs/xfs_mount.h | 4 fs/xfs/xfs_mru_cache.c | 608 ++++++++++++++++++++++++++++++++ fs/xfs/xfs_mru_cache.h | 57 +++ fs/xfs/xfs_vfsops.c | 26 + fs/xfs/xfs_vnodeops.c | 25 + fs/xfs/xfsidbg.c | 188 +++++++++ 22 files changed, 1918 insertions(+), 12 deletions(-) Index: 2.6.x-xfs-new/fs/xfs/Makefile-linux-2.6 =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/Makefile-linux-2.6 2007-06-20 16:35:45.172356726 +1000 +++ 2.6.x-xfs-new/fs/xfs/Makefile-linux-2.6 2007-06-20 17:59:34.794802221 +1000 @@ -54,6 +54,7 @@ xfs-y += xfs_alloc.o \ xfs_dir2_sf.o \ xfs_error.o \ xfs_extfree_item.o \ + xfs_filestream.o \ xfs_fsops.o \ xfs_ialloc.o \ xfs_ialloc_btree.o \ @@ -67,6 +68,7 @@ xfs-y += xfs_alloc.o \ xfs_log.o \ xfs_log_recover.o \ xfs_mount.o \ + xfs_mru_cache.o \ xfs_rename.o \ xfs_trans.o \ xfs_trans_ail.o \ Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_globals.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_globals.c 2007-06-20 16:35:45.192354104 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_globals.c 2007-06-20 17:59:34.902788196 +1000 @@ -49,6 +49,7 @@ xfs_param_t xfs_params 
= { .inherit_nosym = { 0, 0, 1 }, .rotorstep = { 1, 1, 255 }, .inherit_nodfrg = { 0, 1, 1 }, + .fstrm_timer = { 1, 50, 3600*100}, }; /* Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_linux.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_linux.h 2007-06-20 16:35:45.196353580 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_linux.h 2007-06-28 11:04:29.751600456 +1000 @@ -132,6 +132,7 @@ #define xfs_inherit_nosymlinks xfs_params.inherit_nosym.val #define xfs_rotorstep xfs_params.rotorstep.val #define xfs_inherit_nodefrag xfs_params.inherit_nodfrg.val +#define xfs_fstrm_centisecs xfs_params.fstrm_timer.val #define current_cpu() (raw_smp_processor_id()) #define current_pid() (current->pid) Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_sysctl.c 2007-06-20 16:35:45.200353055 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.c 2007-06-20 17:59:34.914786638 +1000 @@ -243,6 +243,17 @@ static ctl_table xfs_table[] = { .extra1 = &xfs_params.inherit_nodfrg.min, .extra2 = &xfs_params.inherit_nodfrg.max }, + { + .ctl_name = XFS_FILESTREAM_TIMER, + .procname = "filestream_centisecs", + .data = &xfs_params.fstrm_timer.val, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec_minmax, + .strategy = &sysctl_intvec, + .extra1 = &xfs_params.fstrm_timer.min, + .extra2 = &xfs_params.fstrm_timer.max, + }, /* please keep this the last entry */ #ifdef CONFIG_PROC_FS { Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_sysctl.h 2007-06-20 16:35:45.212351482 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_sysctl.h 2007-06-20 17:59:34.918786119 +1000 @@ -50,6 +50,7 @@ typedef struct xfs_param { xfs_sysctl_val_t inherit_nosym; /* Inherit the "nosymlinks" flag. 
*/ xfs_sysctl_val_t rotorstep; /* inode32 AG rotoring control knob */ xfs_sysctl_val_t inherit_nodfrg;/* Inherit the "nodefrag" inode flag. */ + xfs_sysctl_val_t fstrm_timer; /* Filestream dir-AG assoc'n timeout. */ } xfs_param_t; /* @@ -89,6 +90,7 @@ enum { XFS_INHERIT_NOSYM = 19, XFS_ROTORSTEP = 20, XFS_INHERIT_NODFRG = 21, + XFS_FILESTREAM_TIMER = 22, }; extern xfs_param_t xfs_params; Index: 2.6.x-xfs-new/fs/xfs/xfs_ag.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_ag.h 2007-06-20 17:59:24.992075301 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_ag.h 2007-06-28 11:04:33.895058244 +1000 @@ -196,6 +196,7 @@ typedef struct xfs_perag lock_t pagb_lock; /* lock for pagb_list */ #endif xfs_perag_busy_t *pagb_list; /* unstable blocks */ + atomic_t pagf_fstrms; /* # of filestreams active in this AG */ int pag_ici_init; /* incore inode cache initialised */ rwlock_t pag_ici_lock; /* incore inode lock */ Index: 2.6.x-xfs-new/fs/xfs/xfs_bmap.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_bmap.c 2007-06-20 16:35:45.220350433 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_bmap.c 2007-06-29 11:38:01.970328178 +1000 @@ -52,6 +52,7 @@ #include "xfs_quota.h" #include "xfs_trans_space.h" #include "xfs_buf_item.h" +#include "xfs_filestream.h" #ifdef DEBUG @@ -2724,9 +2725,15 @@ xfs_bmap_btalloc( } nullfb = ap->firstblock == NULLFSBLOCK; fb_agno = nullfb ? NULLAGNUMBER : XFS_FSB_TO_AGNO(mp, ap->firstblock); - if (nullfb) - ap->rval = XFS_INO_TO_FSB(mp, ap->ip->i_ino); - else + if (nullfb) { + if (ap->userdata && xfs_inode_is_filestream(ap->ip)) { + ag = xfs_filestream_lookup_ag(ap->ip); + ag = (ag != NULLAGNUMBER) ? 
ag : 0; + ap->rval = XFS_AGB_TO_FSB(mp, ag, 0); + } else { + ap->rval = XFS_INO_TO_FSB(mp, ap->ip->i_ino); + } + } else ap->rval = ap->firstblock; xfs_bmap_adjacent(ap); @@ -2750,13 +2757,22 @@ xfs_bmap_btalloc( args.firstblock = ap->firstblock; blen = 0; if (nullfb) { - args.type = XFS_ALLOCTYPE_START_BNO; + if (ap->userdata && xfs_inode_is_filestream(ap->ip)) + args.type = XFS_ALLOCTYPE_NEAR_BNO; + else + args.type = XFS_ALLOCTYPE_START_BNO; args.total = ap->total; + /* - * Find the longest available space. - * We're going to try for the whole allocation at once. + * Search for an allocation group with a single extent + * large enough for the request. + * + * If one isn't found, then adjust the minimum allocation + * size to the largest space found. */ startag = ag = XFS_FSB_TO_AGNO(mp, args.fsbno); + if (startag == NULLAGNUMBER) + startag = ag = 0; notinit = 0; down_read(&mp->m_peraglock); while (blen < ap->alen) { @@ -2782,6 +2798,35 @@ xfs_bmap_btalloc( blen = longest; } else notinit = 1; + + if (xfs_inode_is_filestream(ap->ip)) { + if (blen >= ap->alen) + break; + + if (ap->userdata) { + /* + * If startag is an invalid AG, we've + * come here once before and + * xfs_filestream_new_ag picked the + * best currently available. + * + * Don't continue looping, since we + * could loop forever. + */ + if (startag == NULLAGNUMBER) + break; + + error = xfs_filestream_new_ag(ap, &ag); + if (error) { + up_read(&mp->m_peraglock); + return error; + } + + /* loop again to set 'blen'*/ + startag = NULLAGNUMBER; + continue; + } + } if (++ag == mp->m_sb.sb_agcount) ag = 0; if (ag == startag) @@ -2806,8 +2851,18 @@ xfs_bmap_btalloc( */ else args.minlen = ap->alen; + + /* + * set the failure fallback case to look in the selected + * AG as the stream may have moved. 
+ */ + if (xfs_inode_is_filestream(ap->ip)) + ap->rval = args.fsbno = XFS_AGB_TO_FSB(mp, ag, 0); } else if (ap->low) { - args.type = XFS_ALLOCTYPE_START_BNO; + if (xfs_inode_is_filestream(ap->ip)) + args.type = XFS_ALLOCTYPE_FIRST_AG; + else + args.type = XFS_ALLOCTYPE_START_BNO; args.total = args.minlen = ap->minlen; } else { args.type = XFS_ALLOCTYPE_NEAR_BNO; Index: 2.6.x-xfs-new/fs/xfs/xfs_clnt.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_clnt.h 2007-06-20 17:53:27.670502869 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_clnt.h 2007-06-29 11:36:43.236506638 +1000 @@ -98,5 +98,7 @@ struct xfs_mount_args { */ #define XFSMNT2_COMPAT_IOSIZE 0x00000001 /* don't report large preferred * I/O size in stat(2) */ +#define XFSMNT2_FILESTREAMS 0x00000002 /* enable the filestreams + * allocator */ #endif /* __XFS_CLNT_H__ */ Index: 2.6.x-xfs-new/fs/xfs/xfs_dinode.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_dinode.h 2007-06-20 16:35:45.236348336 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_dinode.h 2007-06-20 17:59:34.950781963 +1000 @@ -257,6 +257,7 @@ typedef enum xfs_dinode_fmt #define XFS_DIFLAG_EXTSIZE_BIT 11 /* inode extent size allocator hint */ #define XFS_DIFLAG_EXTSZINHERIT_BIT 12 /* inherit inode extent size */ #define XFS_DIFLAG_NODEFRAG_BIT 13 /* do not reorganize/defragment */ +#define XFS_DIFLAG_FILESTREAM_BIT 14 /* use filestream allocator */ #define XFS_DIFLAG_REALTIME (1 << XFS_DIFLAG_REALTIME_BIT) #define XFS_DIFLAG_PREALLOC (1 << XFS_DIFLAG_PREALLOC_BIT) #define XFS_DIFLAG_NEWRTBM (1 << XFS_DIFLAG_NEWRTBM_BIT) @@ -271,12 +272,13 @@ typedef enum xfs_dinode_fmt #define XFS_DIFLAG_EXTSIZE (1 << XFS_DIFLAG_EXTSIZE_BIT) #define XFS_DIFLAG_EXTSZINHERIT (1 << XFS_DIFLAG_EXTSZINHERIT_BIT) #define XFS_DIFLAG_NODEFRAG (1 << XFS_DIFLAG_NODEFRAG_BIT) +#define XFS_DIFLAG_FILESTREAM (1 << XFS_DIFLAG_FILESTREAM_BIT) #define XFS_DIFLAG_ANY \ (XFS_DIFLAG_REALTIME | 
XFS_DIFLAG_PREALLOC | XFS_DIFLAG_NEWRTBM | \ XFS_DIFLAG_IMMUTABLE | XFS_DIFLAG_APPEND | XFS_DIFLAG_SYNC | \ XFS_DIFLAG_NOATIME | XFS_DIFLAG_NODUMP | XFS_DIFLAG_RTINHERIT | \ XFS_DIFLAG_PROJINHERIT | XFS_DIFLAG_NOSYMLINKS | XFS_DIFLAG_EXTSIZE | \ - XFS_DIFLAG_EXTSZINHERIT | XFS_DIFLAG_NODEFRAG) + XFS_DIFLAG_EXTSZINHERIT | XFS_DIFLAG_NODEFRAG | XFS_DIFLAG_FILESTREAM) #endif /* __XFS_DINODE_H__ */ Index: 2.6.x-xfs-new/fs/xfs/xfs_filestream.c =================================================================== --- /dev/null 1970-01-01 00:00:00.000000000 +0000 +++ 2.6.x-xfs-new/fs/xfs/xfs_filestream.c 2007-06-29 11:42:55.336391604 +1000 @@ -0,0 +1,771 @@ +/* + * Copyright (c) 2006-2007 Silicon Graphics, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ +#include "xfs.h" +#include "xfs_bmap_btree.h" +#include "xfs_inum.h" +#include "xfs_dir2.h" +#include "xfs_dir2_sf.h" +#include "xfs_attr_sf.h" +#include "xfs_dinode.h" +#include "xfs_inode.h" +#include "xfs_ag.h" +#include "xfs_dmapi.h" +#include "xfs_log.h" +#include "xfs_trans.h" +#include "xfs_sb.h" +#include "xfs_mount.h" +#include "xfs_bmap.h" +#include "xfs_alloc.h" +#include "xfs_utils.h" +#include "xfs_mru_cache.h" +#include "xfs_filestream.h" + +#ifdef XFS_FILESTREAMS_TRACE + +ktrace_t *xfs_filestreams_trace_buf; + +STATIC void +xfs_filestreams_trace( + xfs_mount_t *mp, /* mount point */ + int type, /* type of trace */ + const char *func, /* source function */ + int line, /* source line number */ + __psunsigned_t arg0, + __psunsigned_t arg1, + __psunsigned_t arg2, + __psunsigned_t arg3, + __psunsigned_t arg4, + __psunsigned_t arg5) +{ + ktrace_enter(xfs_filestreams_trace_buf, + (void *)(__psint_t)(type | (line << 16)), + (void *)func, + (void *)(__psunsigned_t)current_pid(), + (void *)mp, + (void *)(__psunsigned_t)arg0, + (void *)(__psunsigned_t)arg1, + (void *)(__psunsigned_t)arg2, + (void *)(__psunsigned_t)arg3, + (void *)(__psunsigned_t)arg4, + (void *)(__psunsigned_t)arg5, + NULL, NULL, NULL, NULL, NULL, NULL); +} + +#define TRACE0(mp,t) TRACE6(mp,t,0,0,0,0,0,0) +#define TRACE1(mp,t,a0) TRACE6(mp,t,a0,0,0,0,0,0) +#define TRACE2(mp,t,a0,a1) TRACE6(mp,t,a0,a1,0,0,0,0) +#define TRACE3(mp,t,a0,a1,a2) TRACE6(mp,t,a0,a1,a2,0,0,0) +#define TRACE4(mp,t,a0,a1,a2,a3) TRACE6(mp,t,a0,a1,a2,a3,0,0) +#define TRACE5(mp,t,a0,a1,a2,a3,a4) TRACE6(mp,t,a0,a1,a2,a3,a4,0) +#define TRACE6(mp,t,a0,a1,a2,a3,a4,a5) \ + xfs_filestreams_trace(mp, t, __FUNCTION__, __LINE__, \ + (__psunsigned_t)a0, (__psunsigned_t)a1, \ + (__psunsigned_t)a2, (__psunsigned_t)a3, \ + 
(__psunsigned_t)a4, (__psunsigned_t)a5) + +#define TRACE_AG_SCAN(mp, ag, ag2) \ + TRACE2(mp, XFS_FSTRM_KTRACE_AGSCAN, ag, ag2); +#define TRACE_AG_PICK1(mp, max_ag, maxfree) \ + TRACE2(mp, XFS_FSTRM_KTRACE_AGPICK1, max_ag, maxfree); +#define TRACE_AG_PICK2(mp, ag, ag2, cnt, free, scan, flag) \ + TRACE6(mp, XFS_FSTRM_KTRACE_AGPICK2, ag, ag2, \ + cnt, free, scan, flag) +#define TRACE_UPDATE(mp, ip, ag, cnt, ag2, cnt2) \ + TRACE5(mp, XFS_FSTRM_KTRACE_UPDATE, ip, ag, cnt, ag2, cnt2) +#define TRACE_FREE(mp, ip, pip, ag, cnt) \ + TRACE4(mp, XFS_FSTRM_KTRACE_FREE, ip, pip, ag, cnt) +#define TRACE_LOOKUP(mp, ip, pip, ag, cnt) \ + TRACE4(mp, XFS_FSTRM_KTRACE_ITEM_LOOKUP, ip, pip, ag, cnt) +#define TRACE_ASSOCIATE(mp, ip, pip, ag, cnt) \ + TRACE4(mp, XFS_FSTRM_KTRACE_ASSOCIATE, ip, pip, ag, cnt) +#define TRACE_MOVEAG(mp, ip, pip, oag, ocnt, nag, ncnt) \ + TRACE6(mp, XFS_FSTRM_KTRACE_MOVEAG, ip, pip, oag, ocnt, nag, ncnt) +#define TRACE_ORPHAN(mp, ip, ag) \ + TRACE2(mp, XFS_FSTRM_KTRACE_ORPHAN, ip, ag); + + +#else +#define TRACE_AG_SCAN(mp, ag, ag2) +#define TRACE_AG_PICK1(mp, max_ag, maxfree) +#define TRACE_AG_PICK2(mp, ag, ag2, cnt, free, scan, flag) +#define TRACE_UPDATE(mp, ip, ag, cnt, ag2, cnt2) +#define TRACE_FREE(mp, ip, pip, ag, cnt) +#define TRACE_LOOKUP(mp, ip, pip, ag, cnt) +#define TRACE_ASSOCIATE(mp, ip, pip, ag, cnt) +#define TRACE_MOVEAG(mp, ip, pip, oag, ocnt, nag, ncnt) +#define TRACE_ORPHAN(mp, ip, ag) +#endif + +static kmem_zone_t *item_zone; + +/* + * Structure for associating a file or a directory with an allocation group. + * The parent directory pointer is only needed for files, but since there will + * generally be vastly more files than directories in the cache, using the same + * data structure simplifies the code with very little memory overhead. + */ +typedef struct fstrm_item +{ + xfs_agnumber_t ag; /* AG currently in use for the file/directory. */ + xfs_inode_t *ip; /* inode self-pointer. */ + xfs_inode_t *pip; /* Parent directory inode pointer. 
*/ +} fstrm_item_t; + + +/* + * Scan the AGs starting at startag looking for an AG that isn't in use and has + * at least minlen blocks free. + */ +static int +_xfs_filestream_pick_ag( + xfs_mount_t *mp, + xfs_agnumber_t startag, + xfs_agnumber_t *agp, + int flags, + xfs_extlen_t minlen) +{ + int err, trylock, nscan; + xfs_extlen_t delta, longest, need, free, minfree, maxfree = 0; + xfs_agnumber_t ag, max_ag = NULLAGNUMBER; + struct xfs_perag *pag; + + /* 2% of an AG's blocks must be free for it to be chosen. */ + minfree = mp->m_sb.sb_agblocks / 50; + + ag = startag; + *agp = NULLAGNUMBER; + + /* For the first pass, don't sleep trying to init the per-AG. */ + trylock = XFS_ALLOC_FLAG_TRYLOCK; + + for (nscan = 0; 1; nscan++) { + + TRACE_AG_SCAN(mp, ag, xfs_filestream_peek_ag(mp, ag)); + + pag = mp->m_perag + ag; + + if (!pag->pagf_init) { + err = xfs_alloc_pagf_init(mp, NULL, ag, trylock); + if (err && !trylock) + return err; + } + + /* Might fail sometimes during the 1st pass with trylock set. */ + if (!pag->pagf_init) + goto next_ag; + + /* Keep track of the AG with the most free blocks. */ + if (pag->pagf_freeblks > maxfree) { + maxfree = pag->pagf_freeblks; + max_ag = ag; + } + + /* + * The AG reference count does two things: it enforces mutual + * exclusion when examining the suitability of an AG in this + * loop, and it guards against two filestreams being established + * in the same AG as each other. + */ + if (xfs_filestream_get_ag(mp, ag) > 1) { + xfs_filestream_put_ag(mp, ag); + goto next_ag; + } + + need = XFS_MIN_FREELIST_PAG(pag, mp); + delta = need > pag->pagf_flcount ? need - pag->pagf_flcount : 0; + longest = (pag->pagf_longest > delta) ? 
+ (pag->pagf_longest - delta) : + (pag->pagf_flcount > 0 || pag->pagf_longest > 0); + + if (((minlen && longest >= minlen) || + (!minlen && pag->pagf_freeblks >= minfree)) && + (!pag->pagf_metadata || !(flags & XFS_PICK_USERDATA) || + (flags & XFS_PICK_LOWSPACE))) { + + /* Break out, retaining the reference on the AG. */ + free = pag->pagf_freeblks; + *agp = ag; + break; + } + + /* Drop the reference on this AG, it's not usable. */ + xfs_filestream_put_ag(mp, ag); +next_ag: + /* Move to the next AG, wrapping to AG 0 if necessary. */ + if (++ag >= mp->m_sb.sb_agcount) + ag = 0; + + /* If a full pass of the AGs hasn't been done yet, continue. */ + if (ag != startag) + continue; + + /* Allow sleeping in xfs_alloc_pagf_init() on the 2nd pass. */ + if (trylock != 0) { + trylock = 0; + continue; + } + + /* Finally, if lowspace wasn't set, set it for the 3rd pass. */ + if (!(flags & XFS_PICK_LOWSPACE)) { + flags |= XFS_PICK_LOWSPACE; + continue; + } + + /* + * Take the AG with the most free space, regardless of whether + * it's already in use by another filestream. + */ + if (max_ag != NULLAGNUMBER) { + xfs_filestream_get_ag(mp, max_ag); + TRACE_AG_PICK1(mp, max_ag, maxfree); + free = maxfree; + *agp = max_ag; + break; + } + + /* take AG 0 if none matched */ + TRACE_AG_PICK1(mp, max_ag, maxfree); + *agp = 0; + return 0; + } + + TRACE_AG_PICK2(mp, startag, *agp, xfs_filestream_peek_ag(mp, *agp), + free, nscan, flags); + + return 0; +} + +/* + * Set the allocation group number for a file or a directory, updating inode + * references and per-AG references as appropriate. Must be called with the + * m_peraglock held in read mode. + */ +static int +_xfs_filestream_update_ag( + xfs_inode_t *ip, + xfs_inode_t *pip, + xfs_agnumber_t ag) +{ + int err = 0; + xfs_mount_t *mp; + xfs_mru_cache_t *cache; + fstrm_item_t *item; + xfs_agnumber_t old_ag; + xfs_inode_t *old_pip; + + /* + * Either ip is a regular file and pip is a directory, or ip is a + * directory and pip is NULL. 
+ */ + ASSERT(ip && (((ip->i_d.di_mode & S_IFREG) && pip && + (pip->i_d.di_mode & S_IFDIR)) || + ((ip->i_d.di_mode & S_IFDIR) && !pip))); + + mp = ip->i_mount; + cache = mp->m_filestream; + + item = xfs_mru_cache_lookup(cache, ip->i_ino); + if (item) { + ASSERT(item->ip == ip); + old_ag = item->ag; + item->ag = ag; + old_pip = item->pip; + item->pip = pip; + xfs_mru_cache_done(cache); + + /* + * If the AG has changed, drop the old ref and take a new one, + * effectively transferring the reference from old to new AG. + */ + if (ag != old_ag) { + xfs_filestream_put_ag(mp, old_ag); + xfs_filestream_get_ag(mp, ag); + } + + /* + * If ip is a file and its pip has changed, drop the old ref and + * take a new one. + */ + if (pip && pip != old_pip) { + IRELE(old_pip); + IHOLD(pip); + } + + TRACE_UPDATE(mp, ip, old_ag, xfs_filestream_peek_ag(mp, old_ag), + ag, xfs_filestream_peek_ag(mp, ag)); + return 0; + } + + item = kmem_zone_zalloc(item_zone, KM_MAYFAIL); + if (!item) + return ENOMEM; + + item->ag = ag; + item->ip = ip; + item->pip = pip; + + err = xfs_mru_cache_insert(cache, ip->i_ino, item); + if (err) { + kmem_zone_free(item_zone, item); + return err; + } + + /* Take a reference on the AG. */ + xfs_filestream_get_ag(mp, ag); + + /* + * Take a reference on the inode itself regardless of whether it's a + * regular file or a directory. + */ + IHOLD(ip); + + /* + * In the case of a regular file, take a reference on the parent inode + * as well to ensure it remains in-core. + */ + if (pip) + IHOLD(pip); + + TRACE_UPDATE(mp, ip, ag, xfs_filestream_peek_ag(mp, ag), + ag, xfs_filestream_peek_ag(mp, ag)); + + return 0; +} + +/* xfs_fstrm_free_func(): callback for freeing cached stream items. */ +void +xfs_fstrm_free_func( + xfs_ino_t ino, + fstrm_item_t *item) +{ + xfs_inode_t *ip = item->ip; + int ref; + + ASSERT(ip->i_ino == ino); + + xfs_iflags_clear(ip, XFS_IFILESTREAM); + + /* Drop the reference taken on the AG when the item was added. 
*/ + ref = xfs_filestream_put_ag(ip->i_mount, item->ag); + + ASSERT(ref >= 0); + TRACE_FREE(ip->i_mount, ip, item->pip, item->ag, + xfs_filestream_peek_ag(ip->i_mount, item->ag)); + + /* + * _xfs_filestream_update_ag() always takes a reference on the inode + * itself, whether it's a file or a directory. Release it here. + * This can result in the inode being freed and so we must + * not hold any inode locks when freeing filestreams objects, + * otherwise we can deadlock here. + */ + IRELE(ip); + + /* + * In the case of a regular file, _xfs_filestream_update_ag() also + * takes a ref on the parent inode to keep it in-core. Release that + * too. + */ + if (item->pip) + IRELE(item->pip); + + /* Finally, free the memory allocated for the item. */ + kmem_zone_free(item_zone, item); +} + +/* + * xfs_filestream_init() is called at xfs initialisation time to set up the + * memory zone that will be used for filestream data structure allocation. + */ +int +xfs_filestream_init(void) +{ + item_zone = kmem_zone_init(sizeof(fstrm_item_t), "fstrm_item"); +#ifdef XFS_FILESTREAMS_TRACE + xfs_filestreams_trace_buf = ktrace_alloc(XFS_FSTRM_KTRACE_SIZE, KM_SLEEP); +#endif + return item_zone ? 0 : -ENOMEM; +} + +/* + * xfs_filestream_uninit() is called at xfs termination time to destroy the + * memory zone that was used for filestream data structure allocation. + */ +void +xfs_filestream_uninit(void) +{ +#ifdef XFS_FILESTREAMS_TRACE + ktrace_free(xfs_filestreams_trace_buf); +#endif + kmem_zone_destroy(item_zone); +} + +/* + * xfs_filestream_mount() is called when a file system is mounted with the + * filestream option. It is responsible for allocating the data structures + * needed to track the new file system's file streams. + */ +int +xfs_filestream_mount( + xfs_mount_t *mp) +{ + int err; + unsigned int lifetime, grp_count; + + /* + * The filestream timer tunable is currently fixed within the range of + * one second to four minutes, with five seconds being the default.
The + * group count is somewhat arbitrary, but it'd be nice to adhere to the + * timer tunable to within about 10 percent. This requires at least 10 + * groups. + */ + lifetime = xfs_fstrm_centisecs * 10; + grp_count = 10; + + err = xfs_mru_cache_create(&mp->m_filestream, lifetime, grp_count, + (xfs_mru_cache_free_func_t)xfs_fstrm_free_func); + + return err; +} + +/* + * xfs_filestream_unmount() is called when a file system that was mounted with + * the filestream option is unmounted. It drains the data structures created + * to track the file system's file streams and frees all the memory that was + * allocated. + */ +void +xfs_filestream_unmount( + xfs_mount_t *mp) +{ + xfs_mru_cache_destroy(mp->m_filestream); +} + +/* + * If the mount point's m_perag array is going to be reallocated, all + * outstanding cache entries must be flushed to avoid accessing reference count + * addresses that have been freed. The call to xfs_filestream_flush() must be + * made inside the block that holds the m_peraglock in write mode to do the + * reallocation. + */ +void +xfs_filestream_flush( + xfs_mount_t *mp) +{ + /* point in time flush, so keep the reaper running */ + xfs_mru_cache_flush(mp->m_filestream, 1); +} + +/* + * Return the AG of the filestream the file or directory belongs to, or + * NULLAGNUMBER otherwise. 
+ */ +xfs_agnumber_t +xfs_filestream_lookup_ag( + xfs_inode_t *ip) +{ + xfs_mru_cache_t *cache; + fstrm_item_t *item; + xfs_agnumber_t ag; + int ref; + + if (!(ip->i_d.di_mode & (S_IFREG | S_IFDIR))) { + ASSERT(0); + return NULLAGNUMBER; + } + + cache = ip->i_mount->m_filestream; + item = xfs_mru_cache_lookup(cache, ip->i_ino); + if (!item) { + TRACE_LOOKUP(ip->i_mount, ip, NULL, NULLAGNUMBER, 0); + return NULLAGNUMBER; + } + + ASSERT(ip == item->ip); + ag = item->ag; + ref = xfs_filestream_peek_ag(ip->i_mount, ag); + xfs_mru_cache_done(cache); + + TRACE_LOOKUP(ip->i_mount, ip, item->pip, ag, ref); + return ag; +} + +/* + * xfs_filestream_associate() should only be called to associate a regular file + * with its parent directory. Calling it with a child directory isn't + * appropriate because filestreams don't apply to entire directory hierarchies. + * Creating a file in a child directory of an existing filestream directory + * starts a new filestream with its own allocation group association. + * + * Returns < 0 on error, 0 if successful association occurred, > 0 if + * we failed to get an association because of locking issues. + */ +int +xfs_filestream_associate( + xfs_inode_t *pip, + xfs_inode_t *ip) +{ + xfs_mount_t *mp; + xfs_mru_cache_t *cache; + fstrm_item_t *item; + xfs_agnumber_t ag, rotorstep, startag; + int err = 0; + + ASSERT(pip->i_d.di_mode & S_IFDIR); + ASSERT(ip->i_d.di_mode & S_IFREG); + if (!(pip->i_d.di_mode & S_IFDIR) || !(ip->i_d.di_mode & S_IFREG)) + return -EINVAL; + + mp = pip->i_mount; + cache = mp->m_filestream; + down_read(&mp->m_peraglock); + + /* + * We have a problem, Houston. + * + * Taking the iolock here violates inode locking order - we already + * hold the ilock. Hence if we block getting this lock we may never + * wake. 
Unfortunately, that means if we can't get the lock, we're + * screwed in terms of getting a stream association - we can't spin + * waiting for the lock because someone else is waiting on the lock we + * hold and we cannot drop that as we are in a transaction here. + * + * Lucky for us, this inversion is rarely a problem because it's a + * directory inode that we are trying to lock here and that means the + * only place that matters is xfs_sync_inodes() and SYNC_DELWRI is + * used. i.e. freeze, remount-ro, quotasync or unmount. + * + * So, if we can't get the iolock without sleeping then just give up + */ + if (!xfs_ilock_nowait(pip, XFS_IOLOCK_EXCL)) { + up_read(&mp->m_peraglock); + return 1; + } + + /* If the parent directory is already in the cache, use its AG. */ + item = xfs_mru_cache_lookup(cache, pip->i_ino); + if (item) { + ASSERT(item->ip == pip); + ag = item->ag; + xfs_mru_cache_done(cache); + + TRACE_LOOKUP(mp, pip, pip, ag, xfs_filestream_peek_ag(mp, ag)); + err = _xfs_filestream_update_ag(ip, pip, ag); + + goto exit; + } + + /* + * Set the starting AG using the rotor for inode32, otherwise + * use the directory inode's AG. + */ + if (mp->m_flags & XFS_MOUNT_32BITINODES) { + rotorstep = xfs_rotorstep; + startag = (mp->m_agfrotor / rotorstep) % mp->m_sb.sb_agcount; + mp->m_agfrotor = (mp->m_agfrotor + 1) % + (mp->m_sb.sb_agcount * rotorstep); + } else + startag = XFS_INO_TO_AGNO(mp, pip->i_ino); + + /* Pick a new AG for the parent inode starting at startag. */ + err = _xfs_filestream_pick_ag(mp, startag, &ag, 0, 0); + if (err || ag == NULLAGNUMBER) + goto exit_did_pick; + + /* Associate the parent inode with the AG. */ + err = _xfs_filestream_update_ag(pip, NULL, ag); + if (err) + goto exit_did_pick; + + /* Associate the file inode with the AG. 
*/ + err = _xfs_filestream_update_ag(ip, pip, ag); + if (err) + goto exit_did_pick; + + TRACE_ASSOCIATE(mp, ip, pip, ag, xfs_filestream_peek_ag(mp, ag)); + +exit_did_pick: + /* + * If _xfs_filestream_pick_ag() returned a valid AG, remove the + * reference it took on it, since the file and directory will have taken + * their own now if they were successfully cached. + */ + if (ag != NULLAGNUMBER) + xfs_filestream_put_ag(mp, ag); + +exit: + xfs_iunlock(pip, XFS_IOLOCK_EXCL); + up_read(&mp->m_peraglock); + return -err; +} + +/* + * Pick a new allocation group for the current file and its file stream. This + * function is called by xfs_bmap_filestreams() with the mount point's per-ag + * lock held. + */ +int +xfs_filestream_new_ag( + xfs_bmalloca_t *ap, + xfs_agnumber_t *agp) +{ + int flags, err; + xfs_inode_t *ip, *pip = NULL; + xfs_mount_t *mp; + xfs_mru_cache_t *cache; + xfs_extlen_t minlen; + fstrm_item_t *dir, *file; + xfs_agnumber_t ag = NULLAGNUMBER; + + ip = ap->ip; + mp = ip->i_mount; + cache = mp->m_filestream; + minlen = ap->alen; + *agp = NULLAGNUMBER; + + /* + * Look for the file in the cache, removing it if it's found. Doing + * this allows it to be held across the dir lookup that follows. + */ + file = xfs_mru_cache_remove(cache, ip->i_ino); + if (file) { + ASSERT(ip == file->ip); + + /* Save the file's parent inode and old AG number for later. */ + pip = file->pip; + ag = file->ag; + + /* Look for the file's directory in the cache. */ + dir = xfs_mru_cache_lookup(cache, pip->i_ino); + if (dir) { + ASSERT(pip == dir->ip); + + /* + * If the directory has already moved on to a new AG, + * use that AG as the new AG for the file. Don't + * forget to twiddle the AG refcounts to match the + * movement. + */ + if (dir->ag != file->ag) { + xfs_filestream_put_ag(mp, file->ag); + xfs_filestream_get_ag(mp, dir->ag); + *agp = file->ag = dir->ag; + } + + xfs_mru_cache_done(cache); + } + + /* + * Put the file back in the cache. 
If this fails, the free + * function needs to be called to tidy up in the same way as if + * the item had simply expired from the cache. + */ + err = xfs_mru_cache_insert(cache, ip->i_ino, file); + if (err) { + xfs_fstrm_free_func(ip->i_ino, file); + return err; + } + + /* + * If the file's AG was moved to the directory's new AG, there's + * nothing more to be done. + */ + if (*agp != NULLAGNUMBER) { + TRACE_MOVEAG(mp, ip, pip, + ag, xfs_filestream_peek_ag(mp, ag), + *agp, xfs_filestream_peek_ag(mp, *agp)); + return 0; + } + } + + /* + * If the file's parent directory is known, take its iolock in exclusive + * mode to prevent two sibling files from racing each other to migrate + * themselves and their parent to different AGs. + */ + if (pip) + xfs_ilock(pip, XFS_IOLOCK_EXCL); + + /* + * A new AG needs to be found for the file. If the file's parent + * directory is also known, it will be moved to the new AG as well to + * ensure that files created inside it in future use the new AG. + */ + ag = (ag == NULLAGNUMBER) ? 0 : (ag + 1) % mp->m_sb.sb_agcount; + flags = (ap->userdata ? XFS_PICK_USERDATA : 0) | + (ap->low ? XFS_PICK_LOWSPACE : 0); + + err = _xfs_filestream_pick_ag(mp, ag, agp, flags, minlen); + if (err || *agp == NULLAGNUMBER) + goto exit; + + /* + * If the file wasn't found in the file cache, then its parent directory + * inode isn't known. For this to have happened, the file must either + * be pre-existing, or it was created long enough ago that its cache + * entry has expired. This isn't the sort of usage that the filestreams + * allocator is trying to optimise, so there's no point trying to track + * its new AG somehow in the filestream data structures. + */ + if (!pip) { + TRACE_ORPHAN(mp, ip, *agp); + goto exit; + } + + /* Associate the parent inode with the AG. */ + err = _xfs_filestream_update_ag(pip, NULL, *agp); + if (err) + goto exit; + + /* Associate the file inode with the AG. 
*/ + err = _xfs_filestream_update_ag(ip, pip, *agp); + if (err) + goto exit; + + TRACE_MOVEAG(mp, ip, pip, NULLAGNUMBER, 0, + *agp, xfs_filestream_peek_ag(mp, *agp)); + +exit: + /* + * If _xfs_filestream_pick_ag() returned a valid AG, remove the + * reference it took on it, since the file and directory will have taken + * their own now if they were successfully cached. + */ + if (*agp != NULLAGNUMBER) + xfs_filestream_put_ag(mp, *agp); + else + *agp = 0; + + if (pip) + xfs_iunlock(pip, XFS_IOLOCK_EXCL); + + return err; +} + +/* + * Remove an association between an inode and a filestream object. + * Typically this is done on last close of an unlinked file. + */ +void +xfs_filestream_deassociate( + xfs_inode_t *ip) +{ + xfs_mru_cache_t *cache = ip->i_mount->m_filestream; + + xfs_mru_cache_delete(cache, ip->i_ino); +} Index: 2.6.x-xfs-new/fs/xfs/xfs_filestream.h =================================================================== --- /dev/null 1970-01-01 00:00:00.000000000 +0000 +++ 2.6.x-xfs-new/fs/xfs/xfs_filestream.h 2007-06-29 11:38:01.966328695 +1000 @@ -0,0 +1,136 @@ +/* + * Copyright (c) 2006-2007 Silicon Graphics, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ +#ifndef __XFS_FILESTREAM_H__ +#define __XFS_FILESTREAM_H__ + +#ifdef __KERNEL__ + +struct xfs_mount; +struct xfs_inode; +struct xfs_perag; +struct xfs_bmalloca; + +#ifdef XFS_FILESTREAMS_TRACE +#define XFS_FSTRM_KTRACE_INFO 1 +#define XFS_FSTRM_KTRACE_AGSCAN 2 +#define XFS_FSTRM_KTRACE_AGPICK1 3 +#define XFS_FSTRM_KTRACE_AGPICK2 4 +#define XFS_FSTRM_KTRACE_UPDATE 5 +#define XFS_FSTRM_KTRACE_FREE 6 +#define XFS_FSTRM_KTRACE_ITEM_LOOKUP 7 +#define XFS_FSTRM_KTRACE_ASSOCIATE 8 +#define XFS_FSTRM_KTRACE_MOVEAG 9 +#define XFS_FSTRM_KTRACE_ORPHAN 10 + +#define XFS_FSTRM_KTRACE_SIZE 16384 +extern ktrace_t *xfs_filestreams_trace_buf; + +#endif + +/* + * Allocation group filestream associations are tracked with per-ag atomic + * counters. These counters allow _xfs_filestream_pick_ag() to tell whether a + * particular AG already has active filestreams associated with it. The mount + * point's m_peraglock is used to protect these counters from per-ag array + * re-allocation during a growfs operation. When xfs_growfs_data_private() is + * about to reallocate the array, it calls xfs_filestream_flush() with the + * m_peraglock held in write mode. + * + * Since xfs_mru_cache_flush() guarantees that all the free functions for all + * the cache elements have finished executing before it returns, it's safe for + * the free functions to use the atomic counters without m_peraglock protection. + * This allows the implementation of xfs_fstrm_free_func() to be agnostic about + * whether it was called with the m_peraglock held in read mode, write mode or + * not held at all. 
The race condition this addresses is the following: + * + * - The work queue scheduler fires and pulls a filestream directory cache + * element off the LRU end of the cache for deletion, then gets pre-empted. + * - A growfs operation grabs the m_peraglock in write mode, flushes all the + * remaining items from the cache and reallocates the mount point's per-ag + * array, resetting all the counters to zero. + * - The work queue thread resumes and calls the free function for the element + * it started cleaning up earlier. In the process it decrements the + * filestreams counter for an AG that now has no references. + * + * With a shrinkfs feature, the above scenario could panic the system. + * + * All other uses of the following macros should be protected by either the + * m_peraglock held in read mode, or the cache's internal locking exposed by the + * interval between a call to xfs_mru_cache_lookup() and a call to + * xfs_mru_cache_done(). In addition, the m_peraglock must be held in read mode + * when new elements are added to the cache. + * + * Combined, these locking rules ensure that no associations will ever exist in + * the cache that reference per-ag array elements that have since been + * reallocated. 
+ */ +STATIC_INLINE int +xfs_filestream_peek_ag( + xfs_mount_t *mp, + xfs_agnumber_t agno) +{ + return atomic_read(&mp->m_perag[agno].pagf_fstrms); +} + +STATIC_INLINE int +xfs_filestream_get_ag( + xfs_mount_t *mp, + xfs_agnumber_t agno) +{ + return atomic_inc_return(&mp->m_perag[agno].pagf_fstrms); +} + +STATIC_INLINE int +xfs_filestream_put_ag( + xfs_mount_t *mp, + xfs_agnumber_t agno) +{ + return atomic_dec_return(&mp->m_perag[agno].pagf_fstrms); +} + +/* allocation selection flags */ +typedef enum xfs_fstrm_alloc { + XFS_PICK_USERDATA = 1, + XFS_PICK_LOWSPACE = 2, +} xfs_fstrm_alloc_t; + +/* prototypes for filestream.c */ +int xfs_filestream_init(void); +void xfs_filestream_uninit(void); +int xfs_filestream_mount(struct xfs_mount *mp); +void xfs_filestream_unmount(struct xfs_mount *mp); +void xfs_filestream_flush(struct xfs_mount *mp); +xfs_agnumber_t xfs_filestream_lookup_ag(struct xfs_inode *ip); +int xfs_filestream_associate(struct xfs_inode *dip, struct xfs_inode *ip); +void xfs_filestream_deassociate(struct xfs_inode *ip); +int xfs_filestream_new_ag(struct xfs_bmalloca *ap, xfs_agnumber_t *agp); + + +/* filestreams for the inode? 
*/ +STATIC_INLINE int +xfs_inode_is_filestream( + struct xfs_inode *ip) +{ + return (ip->i_mount->m_flags & XFS_MOUNT_FILESTREAMS) || + xfs_iflags_test(ip, XFS_IFILESTREAM) || + (ip->i_d.di_flags & XFS_DIFLAG_FILESTREAM); +} + +#endif /* __KERNEL__ */ + +#endif /* __XFS_FILESTREAM_H__ */ Index: 2.6.x-xfs-new/fs/xfs/xfs_fs.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_fs.h 2007-06-20 16:35:45.256345714 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_fs.h 2007-06-20 17:59:35.006774692 +1000 @@ -66,6 +66,7 @@ struct fsxattr { #define XFS_XFLAG_EXTSIZE 0x00000800 /* extent size allocator hint */ #define XFS_XFLAG_EXTSZINHERIT 0x00001000 /* inherit inode extent size */ #define XFS_XFLAG_NODEFRAG 0x00002000 /* do not defragment */ +#define XFS_XFLAG_FILESTREAM 0x00004000 /* use filestream allocator */ #define XFS_XFLAG_HASATTR 0x80000000 /* no DIFLAG for this */ /* Index: 2.6.x-xfs-new/fs/xfs/xfs_fsops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_fsops.c 2007-06-20 16:35:45.256345714 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_fsops.c 2007-06-29 11:36:43.804433222 +1000 @@ -44,6 +44,7 @@ #include "xfs_trans_space.h" #include "xfs_rtalloc.h" #include "xfs_rw.h" +#include "xfs_filestream.h" /* * File system operations @@ -165,6 +166,7 @@ xfs_growfs_data_private( new = nb - mp->m_sb.sb_dblocks; oagcount = mp->m_sb.sb_agcount; if (nagcount > oagcount) { + xfs_filestream_flush(mp); down_write(&mp->m_peraglock); mp->m_perag = kmem_realloc(mp->m_perag, sizeof(xfs_perag_t) * nagcount, Index: 2.6.x-xfs-new/fs/xfs/xfs_inode.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_inode.c 2007-06-20 17:53:27.610510667 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_inode.c 2007-06-29 11:42:55.360388500 +1000 @@ -48,6 +48,7 @@ #include "xfs_dir2_trace.h" #include "xfs_quota.h" #include "xfs_acl.h" +#include "xfs_filestream.h" #include @@ -818,6 +819,8 @@ 
_xfs_dic2xflags( flags |= XFS_XFLAG_EXTSZINHERIT; if (di_flags & XFS_DIFLAG_NODEFRAG) flags |= XFS_XFLAG_NODEFRAG; + if (di_flags & XFS_DIFLAG_FILESTREAM) + flags |= XFS_XFLAG_FILESTREAM; } return flags; @@ -1151,7 +1154,7 @@ xfs_ialloc( /* * Project ids won't be stored on disk if we are using a version 1 inode. */ - if ( (prid != 0) && (ip->i_d.di_version == XFS_DINODE_VERSION_1)) + if ((prid != 0) && (ip->i_d.di_version == XFS_DINODE_VERSION_1)) xfs_bump_ino_vers2(tp, ip); if (XFS_INHERIT_GID(pip, vp->v_vfsp)) { @@ -1196,8 +1199,16 @@ xfs_ialloc( flags |= XFS_ILOG_DEV; break; case S_IFREG: + if (xfs_inode_is_filestream(pip)) { + error = xfs_filestream_associate(pip, ip); + if (error < 0) + return -error; + if (!error) + xfs_iflags_set(ip, XFS_IFILESTREAM); + } + /* fall through */ case S_IFDIR: - if (unlikely(pip->i_d.di_flags & XFS_DIFLAG_ANY)) { + if (pip->i_d.di_flags & XFS_DIFLAG_ANY) { uint di_flags = 0; if ((mode & S_IFMT) == S_IFDIR) { @@ -1234,6 +1245,8 @@ xfs_ialloc( if ((pip->i_d.di_flags & XFS_DIFLAG_NODEFRAG) && xfs_inherit_nodefrag) di_flags |= XFS_DIFLAG_NODEFRAG; + if (pip->i_d.di_flags & XFS_DIFLAG_FILESTREAM) + di_flags |= XFS_DIFLAG_FILESTREAM; ip->i_d.di_flags |= di_flags; } /* FALLTHROUGH */ Index: 2.6.x-xfs-new/fs/xfs/xfs_mount.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_mount.h 2007-06-20 17:53:35.609470968 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_mount.h 2007-06-29 11:36:43.236506638 +1000 @@ -63,6 +63,7 @@ struct xfs_bmbt_irec; struct xfs_bmap_free; struct xfs_extdelta; struct xfs_swapext; +struct xfs_mru_cache; extern struct bhv_vfsops xfs_vfsops; extern struct bhv_vnodeops xfs_vnodeops; @@ -431,6 +432,7 @@ typedef struct xfs_mount { struct notifier_block m_icsb_notifier; /* hotplug cpu notifier */ struct mutex m_icsb_mutex; /* balancer sync lock */ #endif + struct xfs_mru_cache *m_filestream; /* per-mount filestream data */ } xfs_mount_t; /* @@ -470,6 +472,8 @@ typedef struct 
xfs_mount { * I/O size in stat() */ #define XFS_MOUNT_NO_PERCPU_SB (1ULL << 23) /* don't use per-cpu superblock counters */ +#define XFS_MOUNT_FILESTREAMS (1ULL << 24) /* enable the filestreams + allocator */ /* Index: 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.c =================================================================== --- /dev/null 1970-01-01 00:00:00.000000000 +0000 +++ 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.c 2007-06-29 11:38:01.962329212 +1000 @@ -0,0 +1,608 @@ +/* + * Copyright (c) 2006-2007 Silicon Graphics, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ +#include "xfs.h" +#include "xfs_mru_cache.h" + +/* + * The MRU Cache data structure consists of a data store, an array of lists and + * a lock to protect its internal state. At initialisation time, the client + * supplies an element lifetime in milliseconds and a group count, as well as a + * function pointer to call when deleting elements. A data structure for + * queueing up work in the form of timed callbacks is also included. + * + * The group count controls how many lists are created, and thereby how finely + * the elements are grouped in time. When reaping occurs, all the elements in + * all the lists whose time has expired are deleted. 
+ * + * To give an example of how this works in practice, consider a client that + * initialises an MRU Cache with a lifetime of ten seconds and a group count of + * five. Five internal lists will be created, each representing a two second + * period in time. When the first element is added, time zero for the data + * structure is initialised to the current time. + * + * All the elements added in the first two seconds are appended to the first + * list. Elements added in the third second go into the second list, and so on. + * If an element is accessed at any point, it is removed from its list and + * inserted at the head of the current most-recently-used list. + * + * The reaper function will have nothing to do until at least twelve seconds + * have elapsed since the first element was added. The reason for this is that + * if it were called at t=11s, there could be elements in the first list that + * have only been inactive for nine seconds, so it still does nothing. If it is + * called anywhere between t=12 and t=14 seconds, it will delete all the + * elements that remain in the first list. It's therefore possible for elements + * to remain in the data store even after they've been inactive for up to + * (t + t/g) seconds, where t is the inactive element lifetime and g is the + * number of groups. + * + * The above example assumes that the reaper function gets called at least once + * every (t/g) seconds. If it is called less frequently, unused elements will + * accumulate in the reap list until the reaper function is eventually called. + * The current implementation uses work queue callbacks to carefully time the + * reaper function calls, so this should happen rarely, if at all. + * + * From a design perspective, the primary reason for the choice of a list array + * representing discrete time intervals is that it's only practical to reap + * expired elements in groups of some appreciable size. 
This automatically + * introduces a granularity to element lifetimes, so there's no point storing an + * individual timeout with each element that specifies a more precise reap time. + * The bonus is a saving of sizeof(long) bytes of memory per element stored. + * + * The elements could have been stored in just one list, but an array of + * counters or pointers would need to be maintained to allow them to be divided + * up into discrete time groups. More critically, the process of touching or + * removing an element would involve walking large portions of the entire list, + * which would have a detrimental effect on performance. The additional memory + * requirement for the array of list heads is minimal. + * + * When an element is touched or deleted, it needs to be removed from its + * current list. Doubly linked lists are used to make the list maintenance + * portion of these operations O(1). Since reaper timing can be imprecise, + * inserts and lookups can occur when there are no free lists available. When + * this happens, all the elements on the LRU list need to be migrated to the end + * of the reap list. To keep the list maintenance portion of these operations + * O(1) also, list tails need to be accessible without walking the entire list. + * This is the reason why doubly linked list heads are used. + */ + +/* + * An MRU Cache is a dynamic data structure that stores its elements in a way + * that allows efficient lookups, but also groups them into discrete time + * intervals based on insertion time. This allows elements to be efficiently + * and automatically reaped after a fixed period of inactivity. + * + * When a client data pointer is stored in the MRU Cache it needs to be added to + * both the data store and to one of the lists. It must also be possible to + * access each of these entries via the other, i.e. to: + * + * a) Walk a list, removing the corresponding data store entry for each item. 
+ * b) Look up a data store entry, then access its list entry directly. + * + * To achieve both of these goals, each entry must contain both a list entry and + * a key, in addition to the user's data pointer. Note that it's not a good + * idea to have the client embed one of these structures at the top of their own + * data structure, because inserting the same item more than once would most + * likely result in a loop in one of the lists. That's a sure-fire recipe for + * an infinite loop in the code. + */ +typedef struct xfs_mru_cache_elem +{ + struct list_head list_node; + unsigned long key; + void *value; +} xfs_mru_cache_elem_t; + +static kmem_zone_t *xfs_mru_elem_zone; +static struct workqueue_struct *xfs_mru_reap_wq; + +/* + * When inserting, destroying or reaping, it's first necessary to update the + * lists relative to a particular time. In the case of destroying, that time + * will be well in the future to ensure that all items are moved to the reap + * list. In all other cases though, the time will be the current time. + * + * This function enters a loop, moving the contents of the LRU list to the reap + * list again and again until either a) the lists are all empty, or b) time zero + * has been advanced sufficiently to be within the immediate element lifetime. + * + * Case a) above is detected by counting how many groups are migrated and + * stopping when they've all been moved. Case b) is detected by monitoring the + * time_zero field, which is updated as each group is migrated. + * + * The return value is the earliest time that more migration could be needed, or + * zero if there's no need to schedule more work because the lists are empty. + */ +STATIC unsigned long +_xfs_mru_cache_migrate( + xfs_mru_cache_t *mru, + unsigned long now) +{ + unsigned int grp; + unsigned int migrated = 0; + struct list_head *lru_list; + + /* Nothing to do if the data store is empty. 
*/ + if (!mru->time_zero) + return 0; + + /* While time zero is older than the time spanned by all the lists. */ + while (mru->time_zero <= now - mru->grp_count * mru->grp_time) { + + /* + * If the LRU list isn't empty, migrate its elements to the tail + * of the reap list. + */ + lru_list = mru->lists + mru->lru_grp; + if (!list_empty(lru_list)) + list_splice_init(lru_list, mru->reap_list.prev); + + /* + * Advance the LRU group number, freeing the old LRU list to + * become the new MRU list; advance time zero accordingly. + */ + mru->lru_grp = (mru->lru_grp + 1) % mru->grp_count; + mru->time_zero += mru->grp_time; + + /* + * If reaping is so far behind that all the elements on all the + * lists have been migrated to the reap list, it's now empty. + */ + if (++migrated == mru->grp_count) { + mru->lru_grp = 0; + mru->time_zero = 0; + return 0; + } + } + + /* Find the first non-empty list from the LRU end. */ + for (grp = 0; grp < mru->grp_count; grp++) { + + /* Check the grp'th list from the LRU end. */ + lru_list = mru->lists + ((mru->lru_grp + grp) % mru->grp_count); + if (!list_empty(lru_list)) + return mru->time_zero + + (mru->grp_count + grp) * mru->grp_time; + } + + /* All the lists must be empty. */ + mru->lru_grp = 0; + mru->time_zero = 0; + return 0; +} + +/* + * When inserting or doing a lookup, an element needs to be inserted into the + * MRU list. The lists must be migrated first to ensure that they're + * up-to-date, otherwise the new element could be given a shorter lifetime in + * the cache than it should. + */ +STATIC void +_xfs_mru_cache_list_insert( + xfs_mru_cache_t *mru, + xfs_mru_cache_elem_t *elem) +{ + unsigned int grp = 0; + unsigned long now = jiffies; + + /* + * If the data store is empty, initialise time zero, leave grp set to + * zero and start the work queue timer if necessary. Otherwise, set grp + * to the number of group times that have elapsed since time zero. 
+ */ + if (!_xfs_mru_cache_migrate(mru, now)) { + mru->time_zero = now; + if (!mru->next_reap) + mru->next_reap = mru->grp_count * mru->grp_time; + } else { + grp = (now - mru->time_zero) / mru->grp_time; + grp = (mru->lru_grp + grp) % mru->grp_count; + } + + /* Insert the element at the tail of the corresponding list. */ + list_add_tail(&elem->list_node, mru->lists + grp); +} + +/* + * When destroying or reaping, all the elements that were migrated to the reap + * list need to be deleted. For each element this involves removing it from the + * data store, removing it from the reap list, calling the client's free + * function and deleting the element from the element zone. + */ +STATIC void +_xfs_mru_cache_clear_reap_list( + xfs_mru_cache_t *mru) +{ + xfs_mru_cache_elem_t *elem, *next; + struct list_head tmp; + + INIT_LIST_HEAD(&tmp); + list_for_each_entry_safe(elem, next, &mru->reap_list, list_node) { + + /* Remove the element from the data store. */ + radix_tree_delete(&mru->store, elem->key); + + /* + * remove to temp list so it can be freed without + * needing to hold the lock + */ + list_move(&elem->list_node, &tmp); + } + mutex_spinunlock(&mru->lock, 0); + + list_for_each_entry_safe(elem, next, &tmp, list_node) { + + /* Remove the element from the reap list. */ + list_del_init(&elem->list_node); + + /* Call the client's free function with the key and value pointer. */ + mru->free_func(elem->key, elem->value); + + /* Free the element structure. */ + kmem_zone_free(xfs_mru_elem_zone, elem); + } + + mutex_spinlock(&mru->lock); +} + +/* + * We fire the reap timer every group expiry interval so + * we always have a reaper ready to run. This makes shutdown + * and flushing of the reaper easy to do. Hence we need to + * keep when the next reap must occur so we can determine + * at each interval whether there is anything we need to do. 
+ */ +STATIC void +_xfs_mru_cache_reap( + struct work_struct *work) +{ + xfs_mru_cache_t *mru = container_of(work, xfs_mru_cache_t, work.work); + unsigned long now; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return; + + mutex_spinlock(&mru->lock); + now = jiffies; + if (mru->reap_all || + (mru->next_reap && time_after(now, mru->next_reap))) { + if (mru->reap_all) + now += mru->grp_count * mru->grp_time * 2; + mru->next_reap = _xfs_mru_cache_migrate(mru, now); + _xfs_mru_cache_clear_reap_list(mru); + } + + /* + * The process that triggered the reap_all is responsible + * for restarting the periodic reap if it is required. + */ + if (!mru->reap_all) + queue_delayed_work(xfs_mru_reap_wq, &mru->work, mru->grp_time); + mru->reap_all = 0; + mutex_spinunlock(&mru->lock, 0); +} + +int +xfs_mru_cache_init(void) +{ + xfs_mru_elem_zone = kmem_zone_init(sizeof(xfs_mru_cache_elem_t), + "xfs_mru_cache_elem"); + if (!xfs_mru_elem_zone) + return ENOMEM; + + xfs_mru_reap_wq = create_singlethread_workqueue("xfs_mru_cache"); + if (!xfs_mru_reap_wq) { + kmem_zone_destroy(xfs_mru_elem_zone); + return ENOMEM; + } + + return 0; +} + +void +xfs_mru_cache_uninit(void) +{ + destroy_workqueue(xfs_mru_reap_wq); + kmem_zone_destroy(xfs_mru_elem_zone); +} + +/* + * To initialise a struct xfs_mru_cache pointer, call xfs_mru_cache_create() + * with the address of the pointer, a lifetime value in milliseconds, a group + * count and a free function to use when deleting elements. This function + * returns 0 if the initialisation was successful.
+ */ +int +xfs_mru_cache_create( + xfs_mru_cache_t **mrup, + unsigned int lifetime_ms, + unsigned int grp_count, + xfs_mru_cache_free_func_t free_func) +{ + xfs_mru_cache_t *mru = NULL; + int err = 0, grp; + unsigned int grp_time; + + if (mrup) + *mrup = NULL; + + if (!mrup || !grp_count || !lifetime_ms || !free_func) + return EINVAL; + + if (!(grp_time = msecs_to_jiffies(lifetime_ms) / grp_count)) + return EINVAL; + + if (!(mru = kmem_zalloc(sizeof(*mru), KM_SLEEP))) + return ENOMEM; + + /* An extra list is needed to avoid reaping up to a grp_time early. */ + mru->grp_count = grp_count + 1; + mru->lists = kmem_alloc(mru->grp_count * sizeof(*mru->lists), KM_SLEEP); + + if (!mru->lists) { + err = ENOMEM; + goto exit; + } + + for (grp = 0; grp < mru->grp_count; grp++) + INIT_LIST_HEAD(mru->lists + grp); + + /* + * We use GFP_KERNEL radix tree preload and do inserts under a + * spinlock so GFP_ATOMIC is appropriate for the radix tree itself. + */ + INIT_RADIX_TREE(&mru->store, GFP_ATOMIC); + INIT_LIST_HEAD(&mru->reap_list); + spinlock_init(&mru->lock, "xfs_mru_cache"); + INIT_DELAYED_WORK(&mru->work, _xfs_mru_cache_reap); + + mru->grp_time = grp_time; + mru->free_func = free_func; + + /* start up the reaper event */ + mru->next_reap = 0; + mru->reap_all = 0; + queue_delayed_work(xfs_mru_reap_wq, &mru->work, mru->grp_time); + + *mrup = mru; + +exit: + if (err && mru && mru->lists) + kmem_free(mru->lists, mru->grp_count * sizeof(*mru->lists)); + if (err && mru) + kmem_free(mru, sizeof(*mru)); + + return err; +} + +/* + * Call xfs_mru_cache_flush() to flush out all cached entries, calling their + * free functions as they're deleted. When this function returns, the caller is + * guaranteed that all the free functions for all the elements have finished + * executing. + * + * While we are flushing, we stop the periodic reaper event from triggering. + * Normally, we want to restart this periodic event, but if we are shutting + * down the cache we do not want it restarted. 
Hence the restart parameter + * where 0 = do not restart reaper and 1 = restart reaper. + */ +void +xfs_mru_cache_flush( + xfs_mru_cache_t *mru, + int restart) +{ + if (!mru || !mru->lists) + return; + + cancel_rearming_delayed_workqueue(xfs_mru_reap_wq, &mru->work); + + mutex_spinlock(&mru->lock); + mru->reap_all = 1; + mutex_spinunlock(&mru->lock, 0); + + queue_work(xfs_mru_reap_wq, &mru->work.work); + flush_workqueue(xfs_mru_reap_wq); + + mutex_spinlock(&mru->lock); + WARN_ON_ONCE(mru->reap_all != 0); + mru->reap_all = 0; + if (restart) + queue_delayed_work(xfs_mru_reap_wq, &mru->work, mru->grp_time); + mutex_spinunlock(&mru->lock, 0); +} + +void +xfs_mru_cache_destroy( + xfs_mru_cache_t *mru) +{ + if (!mru || !mru->lists) + return; + + /* we don't want the reaper to restart here */ + xfs_mru_cache_flush(mru, 0); + + kmem_free(mru->lists, mru->grp_count * sizeof(*mru->lists)); + kmem_free(mru, sizeof(*mru)); +} + +/* + * To insert an element, call xfs_mru_cache_insert() with the data store, the + * element's key and the client data pointer. This function returns 0 on + * success or ENOMEM if memory for the data element couldn't be allocated. + */ +int +xfs_mru_cache_insert( + xfs_mru_cache_t *mru, + unsigned long key, + void *value) +{ + xfs_mru_cache_elem_t *elem; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return EINVAL; + + elem = kmem_zone_zalloc(xfs_mru_elem_zone, KM_SLEEP); + if (!elem) + return ENOMEM; + + if (radix_tree_preload(GFP_KERNEL)) { + kmem_zone_free(xfs_mru_elem_zone, elem); + return ENOMEM; + } + + INIT_LIST_HEAD(&elem->list_node); + elem->key = key; + elem->value = value; + + mutex_spinlock(&mru->lock); + + radix_tree_insert(&mru->store, key, elem); + radix_tree_preload_end(); + _xfs_mru_cache_list_insert(mru, elem); + + mutex_spinunlock(&mru->lock, 0); + + return 0; +} + +/* + * To remove an element without calling the free function, call + * xfs_mru_cache_remove() with the data store and the element's key.
On success + * the client data pointer for the removed element is returned, otherwise this + * function will return a NULL pointer. + */ +void * +xfs_mru_cache_remove( + xfs_mru_cache_t *mru, + unsigned long key) +{ + xfs_mru_cache_elem_t *elem; + void *value = NULL; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return NULL; + + mutex_spinlock(&mru->lock); + elem = radix_tree_delete(&mru->store, key); + if (elem) { + value = elem->value; + list_del(&elem->list_node); + } + + mutex_spinunlock(&mru->lock, 0); + + if (elem) + kmem_zone_free(xfs_mru_elem_zone, elem); + + return value; +} + +/* + * To remove and element and call the free function, call xfs_mru_cache_delete() + * with the data store and the element's key. + */ +void +xfs_mru_cache_delete( + xfs_mru_cache_t *mru, + unsigned long key) +{ + void *value = xfs_mru_cache_remove(mru, key); + + if (value) + mru->free_func(key, value); +} + +/* + * To look up an element using its key, call xfs_mru_cache_lookup() with the + * data store and the element's key. If found, the element will be moved to the + * head of the MRU list to indicate that it's been touched. + * + * The internal data structures are protected by a spinlock that is STILL HELD + * when this function returns. Call xfs_mru_cache_done() to release it. Note + * that it is not safe to call any function that might sleep in the interim. + * + * The implementation could have used reference counting to avoid this + * restriction, but since most clients simply want to get, set or test a member + * of the returned data structure, the extra per-element memory isn't warranted. + * + * If the element isn't found, this function returns NULL and the spinlock is + * released. xfs_mru_cache_done() should NOT be called when this occurs. 
+ */ +void * +xfs_mru_cache_lookup( + xfs_mru_cache_t *mru, + unsigned long key) +{ + xfs_mru_cache_elem_t *elem; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return NULL; + + mutex_spinlock(&mru->lock); + elem = radix_tree_lookup(&mru->store, key); + if (elem) { + list_del(&elem->list_node); + _xfs_mru_cache_list_insert(mru, elem); + } + else + mutex_spinunlock(&mru->lock, 0); + + return elem ? elem->value : NULL; +} + +/* + * To look up an element using its key, but leave its location in the internal + * lists alone, call xfs_mru_cache_peek(). If the element isn't found, this + * function returns NULL. + * + * See the comments above the declaration of the xfs_mru_cache_lookup() function + * for important locking information pertaining to this call. + */ +void * +xfs_mru_cache_peek( + xfs_mru_cache_t *mru, + unsigned long key) +{ + xfs_mru_cache_elem_t *elem; + + ASSERT(mru && mru->lists); + if (!mru || !mru->lists) + return NULL; + + mutex_spinlock(&mru->lock); + elem = radix_tree_lookup(&mru->store, key); + if (!elem) + mutex_spinunlock(&mru->lock, 0); + + return elem ? elem->value : NULL; +} + +/* + * To release the internal data structure spinlock after having performed an + * xfs_mru_cache_lookup() or an xfs_mru_cache_peek(), call xfs_mru_cache_done() + * with the data store pointer. + */ +void +xfs_mru_cache_done( + xfs_mru_cache_t *mru) +{ + mutex_spinunlock(&mru->lock, 0); +} Index: 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.h =================================================================== --- /dev/null 1970-01-01 00:00:00.000000000 +0000 +++ 2.6.x-xfs-new/fs/xfs/xfs_mru_cache.h 2007-06-29 11:38:01.966328695 +1000 @@ -0,0 +1,57 @@ +/* + * Copyright (c) 2006-2007 Silicon Graphics, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation. 
+ * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ +#ifndef __XFS_MRU_CACHE_H__ +#define __XFS_MRU_CACHE_H__ + + +/* Function pointer type for callback to free a client's data pointer. */ +typedef void (*xfs_mru_cache_free_func_t)(unsigned long, void*); + +typedef struct xfs_mru_cache +{ + struct radix_tree_root store; /* Core storage data structure. */ + struct list_head *lists; /* Array of lists, one per grp. */ + struct list_head reap_list; /* Elements overdue for reaping. */ + spinlock_t lock; /* Lock to protect this struct. */ + unsigned int grp_count; /* Number of discrete groups. */ + unsigned int grp_time; /* Time period spanned by grps. */ + unsigned int lru_grp; /* Group containing time zero. */ + unsigned long time_zero; /* Time first element was added. */ + unsigned long next_reap; /* Time that the reaper should + next do something. */ + unsigned int reap_all; /* if set, reap all lists */ + xfs_mru_cache_free_func_t free_func; /* Function pointer for freeing. */ + struct delayed_work work; /* Workqueue data for reaping. 
*/ +} xfs_mru_cache_t; + +int xfs_mru_cache_init(void); +void xfs_mru_cache_uninit(void); +int xfs_mru_cache_create(struct xfs_mru_cache **mrup, unsigned int lifetime_ms, + unsigned int grp_count, + xfs_mru_cache_free_func_t free_func); +void xfs_mru_cache_flush(xfs_mru_cache_t *mru, int restart); +void xfs_mru_cache_destroy(struct xfs_mru_cache *mru); +int xfs_mru_cache_insert(struct xfs_mru_cache *mru, unsigned long key, + void *value); +void * xfs_mru_cache_remove(struct xfs_mru_cache *mru, unsigned long key); +void xfs_mru_cache_delete(struct xfs_mru_cache *mru, unsigned long key); +void *xfs_mru_cache_lookup(struct xfs_mru_cache *mru, unsigned long key); +void *xfs_mru_cache_peek(struct xfs_mru_cache *mru, unsigned long key); +void xfs_mru_cache_done(struct xfs_mru_cache *mru); + +#endif /* __XFS_MRU_CACHE_H__ */ Index: 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vfsops.c 2007-06-20 17:53:27.630508068 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_vfsops.c 2007-06-29 11:36:43.660451835 +1000 @@ -51,6 +51,8 @@ #include "xfs_acl.h" #include "xfs_attr.h" #include "xfs_clnt.h" +#include "xfs_mru_cache.h" +#include "xfs_filestream.h" #include "xfs_fsops.h" STATIC int xfs_sync(bhv_desc_t *, int, cred_t *); @@ -81,6 +83,8 @@ xfs_init(void) xfs_dabuf_zone = kmem_zone_init(sizeof(xfs_dabuf_t), "xfs_dabuf"); xfs_ifork_zone = kmem_zone_init(sizeof(xfs_ifork_t), "xfs_ifork"); xfs_acl_zone_init(xfs_acl_zone, "xfs_acl"); + xfs_mru_cache_init(); + xfs_filestream_init(); /* * The size of the zone allocated buf log item is the maximum @@ -164,6 +168,8 @@ xfs_cleanup(void) xfs_cleanup_procfs(); xfs_sysctl_unregister(); xfs_refcache_destroy(); + xfs_filestream_uninit(); + xfs_mru_cache_uninit(); xfs_acl_zone_destroy(xfs_acl_zone); #ifdef XFS_DIR2_TRACE @@ -317,6 +323,9 @@ xfs_start_flags( else mp->m_flags &= ~XFS_MOUNT_BARRIER; + if (ap->flags2 & XFSMNT2_FILESTREAMS) + mp->m_flags |= 
XFS_MOUNT_FILESTREAMS; + return 0; } @@ -515,6 +524,9 @@ xfs_mount( if (mp->m_flags & XFS_MOUNT_BARRIER) xfs_mountfs_check_barriers(mp); + if ((error = xfs_filestream_mount(mp))) + goto error2; + error = XFS_IOINIT(vfsp, args, flags); if (error) goto error2; @@ -572,6 +584,13 @@ xfs_unmount( */ xfs_refcache_purge_mp(mp); + /* + * Blow away any referenced inode in the filestreams cache. + * This can and will cause log traffic as inodes go inactive + * here. + */ + xfs_filestream_unmount(mp); + XFS_bflush(mp->m_ddev_targp); error = xfs_unmount_flush(mp, 0); if (error) @@ -703,6 +722,7 @@ xfs_mntupdate( mp->m_flags &= ~XFS_MOUNT_BARRIER; } } else if (!(vfsp->vfs_flag & VFS_RDONLY)) { /* rw -> ro */ + xfs_filestream_flush(mp); bhv_vfs_sync(vfsp, SYNC_DATA_QUIESCE, NULL); xfs_attr_quiesce(mp); vfsp->vfs_flag |= VFS_RDONLY; @@ -927,6 +947,9 @@ xfs_sync( { xfs_mount_t *mp = XFS_BHVTOM(bdp); + if (flags & SYNC_IOWAIT) + xfs_filestream_flush(mp); + return xfs_syncsub(mp, flags, NULL); } @@ -1676,6 +1699,7 @@ xfs_vget( * in stat(). 
*/ #define MNTOPT_ATTR2 "attr2" /* do use attr2 attribute format */ #define MNTOPT_NOATTR2 "noattr2" /* do not use attr2 attribute format */ +#define MNTOPT_FILESTREAM "filestreams" /* use filestreams allocator */ STATIC unsigned long suffix_strtoul(char *s, char **endp, unsigned int base) @@ -1853,6 +1877,8 @@ xfs_parseargs( args->flags |= XFSMNT_ATTR2; } else if (!strcmp(this_char, MNTOPT_NOATTR2)) { args->flags &= ~XFSMNT_ATTR2; + } else if (!strcmp(this_char, MNTOPT_FILESTREAM)) { + args->flags2 |= XFSMNT2_FILESTREAMS; } else if (!strcmp(this_char, "osyncisdsync")) { /* no-op, this is now the default */ cmn_err(CE_WARN, Index: 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vnodeops.c 2007-06-20 17:53:36.657334767 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c 2007-06-29 11:38:01.966328695 +1000 @@ -51,6 +51,7 @@ #include "xfs_refcache.h" #include "xfs_trans_space.h" #include "xfs_log_priv.h" +#include "xfs_filestream.h" STATIC int xfs_open( @@ -789,6 +790,8 @@ xfs_setattr( di_flags |= XFS_DIFLAG_PROJINHERIT; if (vap->va_xflags & XFS_XFLAG_NODEFRAG) di_flags |= XFS_DIFLAG_NODEFRAG; + if (vap->va_xflags & XFS_XFLAG_FILESTREAM) + di_flags |= XFS_DIFLAG_FILESTREAM; if ((ip->i_d.di_mode & S_IFMT) == S_IFDIR) { if (vap->va_xflags & XFS_XFLAG_RTINHERIT) di_flags |= XFS_DIFLAG_RTINHERIT; @@ -1542,7 +1545,17 @@ xfs_release( if (vp->v_vfsp->vfs_flag & VFS_RDONLY) return 0; - if (!XFS_FORCED_SHUTDOWN(ip->i_mount)) { + if (!XFS_FORCED_SHUTDOWN(mp)) { + /* + * If we are using filestreams, and we have an unlinked + * file that we are processing the last close on, then nothing + * will be able to reopen and write to this file. Purge this + * inode from the filestreams cache so that it doesn't delay + * teardown of the inode. 
+ */ + if ((ip->i_d.di_nlink == 0) && xfs_inode_is_filestream(ip)) + xfs_filestream_deassociate(ip); + /* * If we previously truncated this file and removed old data * in the process, we want to initiate "early" writeout on @@ -1557,7 +1570,6 @@ xfs_release( bhv_vop_flush_pages(vp, 0, -1, XFS_B_ASYNC, FI_NONE); } - #ifdef HAVE_REFCACHE /* If we are in the NFS reference cache then don't do this now */ if (ip->i_refcache) @@ -2551,6 +2563,15 @@ xfs_remove( */ xfs_refcache_purge_ip(ip); + /* + * If we are using filestreams, kill the stream association. + * If the file is still open it may get a new one but that + * will get killed on last close in xfs_close() so we don't + * have to worry about that. + */ + if (link_zero && xfs_inode_is_filestream(ip)) + xfs_filestream_deassociate(ip); + vn_trace_exit(XFS_ITOV(ip), __FUNCTION__, (inst_t *)__return_address); /* Index: 2.6.x-xfs-new/fs/xfs/xfs.h =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs.h 2007-06-20 16:35:45.276343092 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs.h 2007-06-20 17:59:35.054768459 +1000 @@ -38,6 +38,7 @@ #define XFS_RW_TRACE 1 #define XFS_BUF_TRACE 1 #define XFS_VNODE_TRACE 1 +#define XFS_FILESTREAMS_TRACE 1 #endif #include Index: 2.6.x-xfs-new/fs/xfs/xfsidbg.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfsidbg.c 2007-06-20 17:53:35.661464210 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfsidbg.c 2007-06-20 17:59:35.058767939 +1000 @@ -63,6 +63,7 @@ #include "quota/xfs_qm.h" #include "xfs_iomap.h" #include "xfs_buf.h" +#include "xfs_filestream.h" MODULE_AUTHOR("Silicon Graphics, Inc."); MODULE_DESCRIPTION("Additional kdb commands for debugging XFS"); @@ -109,6 +110,9 @@ static void xfsidbg_xlog_granttrace(xlog #ifdef XFS_DQUOT_TRACE static void xfsidbg_xqm_dqtrace(xfs_dquot_t *); #endif +#ifdef XFS_FILESTREAMS_TRACE +static void xfsidbg_filestreams_trace(int); +#endif /* @@ -196,6 +200,9 @@ static int 
xfs_bmbt_trace_entry(ktrace_e
 #ifdef XFS_DIR2_TRACE
 static int xfs_dir2_trace_entry(ktrace_entry_t *ktep);
 #endif
+#ifdef XFS_FILESTREAMS_TRACE
+static void xfs_filestreams_trace_entry(ktrace_entry_t *ktep);
+#endif
 #ifdef XFS_RW_TRACE
 static void xfs_bunmap_trace_entry(ktrace_entry_t *ktep);
 static void xfs_rw_enter_trace_entry(ktrace_entry_t *ktep);
@@ -760,6 +767,27 @@ static int kdbm_xfs_xalttrace(
 }
 #endif /* XFS_ALLOC_TRACE */
 
+#ifdef XFS_FILESTREAMS_TRACE
+static int kdbm_xfs_xfstrmtrace(
+	int		argc,
+	const char	**argv)
+{
+	unsigned long	addr;
+	int		nextarg = 1;
+	long		offset = 0;
+	int		diag;
+
+	if (argc != 1)
+		return KDB_ARGCOUNT;
+	diag = kdbgetaddrarg(argc, argv, &nextarg, &addr, &offset, NULL);
+	if (diag)
+		return diag;
+
+	xfsidbg_filestreams_trace((int) addr);
+	return 0;
+}
+#endif /* XFS_FILESTREAMS_TRACE */
+
 static int kdbm_xfs_xattrcontext(
 	int	argc,
 	const char	**argv)
@@ -2615,6 +2643,10 @@ static struct xif xfsidbg_funcs[] = {
 		"Dump XFS bmap extents in inode"},
 	{  "xflist",	kdbm_xfs_xflist, "",
 		"Dump XFS to-be-freed extent records"},
+#ifdef XFS_FILESTREAMS_TRACE
+	{  "xfstrmtrc",kdbm_xfs_xfstrmtrace, "",
+		"Dump filestreams trace buffer"},
+#endif
 	{  "xhelp",	kdbm_xfs_xhelp, "",
 		"Print idbg-xfs help"},
 	{  "xicall",	kdbm_xfs_xiclogall, "",
@@ -5279,6 +5311,162 @@ xfsidbg_xailock_trace(int count)
 }
 #endif
 
+#ifdef XFS_FILESTREAMS_TRACE
+static void
+xfs_filestreams_trace_entry(ktrace_entry_t *ktep)
+{
+	xfs_inode_t	*ip, *pip;
+
+	/* function:line#[pid]: */
+	kdb_printf("%s:%lu[%lu]: ", (char *)ktep->val[1],
+		((unsigned long)ktep->val[0] >> 16) & 0xffff,
+		(unsigned long)ktep->val[2]);
+	switch ((unsigned long)ktep->val[0] & 0xffff) {
+	case XFS_FSTRM_KTRACE_INFO:
+		break;
+	case XFS_FSTRM_KTRACE_AGSCAN:
+		kdb_printf("scanning AG %ld[%ld]",
+			(long)ktep->val[4], (long)ktep->val[5]);
+		break;
+	case XFS_FSTRM_KTRACE_AGPICK1:
+		kdb_printf("using max_ag %ld[1] with maxfree %ld",
+			(long)ktep->val[4], (long)ktep->val[5]);
+		break;
+	case XFS_FSTRM_KTRACE_AGPICK2:
+		kdb_printf("startag %ld newag %ld[%ld] free %ld scanned %ld"
+			" flags 0x%lx",
+			(long)ktep->val[4], (long)ktep->val[5],
+			(long)ktep->val[6], (long)ktep->val[7],
+			(long)ktep->val[8], (long)ktep->val[9]);
+		break;
+	case XFS_FSTRM_KTRACE_UPDATE:
+		ip = (xfs_inode_t *)ktep->val[4];
+		if ((__psint_t)ktep->val[5] != (__psint_t)ktep->val[7])
+			kdb_printf("found ip %p ino %llu, AG %ld[%ld] ->"
+				" %ld[%ld]", ip, (unsigned long long)ip->i_ino,
+				(long)ktep->val[7], (long)ktep->val[8],
+				(long)ktep->val[5], (long)ktep->val[6]);
+		else
+			kdb_printf("found ip %p ino %llu, AG %ld[%ld]",
+				ip, (unsigned long long)ip->i_ino,
+				(long)ktep->val[5], (long)ktep->val[6]);
+		break;
+
+	case XFS_FSTRM_KTRACE_FREE:
+		ip = (xfs_inode_t *)ktep->val[4];
+		pip = (xfs_inode_t *)ktep->val[5];
+		if (ip->i_d.di_mode & S_IFDIR)
+			kdb_printf("deleting dip %p ino %llu, AG %ld[%ld]",
+				ip, (unsigned long long)ip->i_ino,
+				(long)ktep->val[6], (long)ktep->val[7]);
+		else
+			kdb_printf("deleting file %p ino %llu, pip %p ino %llu"
+				", AG %ld[%ld]",
+				ip, (unsigned long long)ip->i_ino,
+				pip, (unsigned long long)(pip ? pip->i_ino : 0),
+				(long)ktep->val[6], (long)ktep->val[7]);
+		break;
+
+	case XFS_FSTRM_KTRACE_ITEM_LOOKUP:
+		ip = (xfs_inode_t *)ktep->val[4];
+		pip = (xfs_inode_t *)ktep->val[5];
+		if (!pip) {
+			kdb_printf("lookup on %s ip %p ino %llu failed, returning %ld",
+				ip->i_d.di_mode & S_IFREG ? "file" : "dir", ip,
+				(unsigned long long)ip->i_ino, (long)ktep->val[6]);
+		} else if (ip->i_d.di_mode & S_IFREG)
+			kdb_printf("lookup on file ip %p ino %llu dir %p"
+				" dino %llu got AG %ld[%ld]",
+				ip, (unsigned long long)ip->i_ino,
+				pip, (unsigned long long)pip->i_ino,
+				(long)ktep->val[6], (long)ktep->val[7]);
+		else
+			kdb_printf("lookup on dir ip %p ino %llu got AG %ld[%ld]",
+				ip, (unsigned long long)ip->i_ino,
+				(long)ktep->val[6], (long)ktep->val[7]);
+		break;
+
+	case XFS_FSTRM_KTRACE_ASSOCIATE:
+		ip = (xfs_inode_t *)ktep->val[4];
+		pip = (xfs_inode_t *)ktep->val[5];
+		kdb_printf("pip %p ino %llu and ip %p ino %llu given ag %ld[%ld]",
+			pip, (unsigned long long)pip->i_ino,
+			ip, (unsigned long long)ip->i_ino,
+			(long)ktep->val[6], (long)ktep->val[7]);
+		break;
+
+	case XFS_FSTRM_KTRACE_MOVEAG:
+		ip = ktep->val[4];
+		pip = ktep->val[5];
+		if ((long)ktep->val[6] != NULLAGNUMBER)
+			kdb_printf("dir %p ino %llu to file ip %p ino %llu has"
+				" moved %ld[%ld] -> %ld[%ld]",
+				pip, (unsigned long long)pip->i_ino,
+				ip, (unsigned long long)ip->i_ino,
+				(long)ktep->val[6], (long)ktep->val[7],
+				(long)ktep->val[8], (long)ktep->val[9]);
+		else
+			kdb_printf("pip %p ino %llu and ip %p ino %llu moved"
+				" to new ag %ld[%ld]",
+				pip, (unsigned long long)pip->i_ino,
+				ip, (unsigned long long)ip->i_ino,
+				(long)ktep->val[8], (long)ktep->val[9]);
+		break;
+
+	case XFS_FSTRM_KTRACE_ORPHAN:
+		ip = ktep->val[4];
+		kdb_printf("gave ag %lld to orphan ip %p ino %llu",
+			(__psint_t)ktep->val[5],
+			ip, (unsigned long long)ip->i_ino);
+		break;
+	default:
+		kdb_printf("unknown trace type 0x%lx",
+			(unsigned long)ktep->val[0] & 0xffff);
+	}
+	kdb_printf("\n");
+}
+
+static void
+xfsidbg_filestreams_trace(int count)
+{
+	ktrace_entry_t	*ktep;
+	ktrace_snap_t	kts;
+	int		nentries;
+	int		skip_entries;
+
+	if (xfs_filestreams_trace_buf == NULL) {
+		qprintf("The xfs inode lock trace buffer is not initialized\n");
+		return;
+	}
+	nentries = ktrace_nentries(xfs_filestreams_trace_buf);
+	if (count == -1) {
+		count = nentries;
+	}
+	if ((count <= 0) || (count > nentries)) {
+		qprintf("Invalid count. There are %d entries.\n", nentries);
+		return;
+	}
+
+	ktep = ktrace_first(xfs_filestreams_trace_buf, &kts);
+	if (count != nentries) {
+		/*
+		 * Skip the total minus the number to look at minus one
+		 * for the entry returned by ktrace_first().
+		 */
+		skip_entries = nentries - count - 1;
+		ktep = ktrace_skip(xfs_filestreams_trace_buf, skip_entries, &kts);
+		if (ktep == NULL) {
+			qprintf("Skipped them all\n");
+			return;
+		}
+	}
+	while (ktep != NULL) {
+		xfs_filestreams_trace_entry(ktep);
+		ktep = ktrace_next(xfs_filestreams_trace_buf, &kts);
+	}
+}
+#endif
 
 /*
  * Compute & print buffer's checksum.
  */
Index: 2.6.x-xfs-new/fs/xfs/xfs_inode.h
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_inode.h	2007-06-20 17:53:27.342545497 +1000
+++ 2.6.x-xfs-new/fs/xfs/xfs_inode.h	2007-06-29 11:36:40.812819943 +1000
@@ -350,6 +350,7 @@ xfs_iflags_test(xfs_inode_t *ip, unsigne
 #define XFS_ISTALE	0x0010	/* inode has been staled */
 #define XFS_IRECLAIMABLE	0x0020	/* inode can be reclaimed */
 #define XFS_INEW	0x0040
+#define XFS_IFILESTREAM	0x0080	/* inode is in a filestream directory */
 
 /*
  * Flags for inode locking.
From owner-xfs@oss.sgi.com Thu Jun 28 21:56:36 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 21:56:41 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5T4uXtL005685 for ; Thu, 28 Jun 2007 21:56:35 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA02395; Fri, 29 Jun 2007 14:56:06 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5T4u1eW4779794; Fri, 29 Jun 2007 14:56:02 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5T4trFW4779809; Fri, 29 Jun 2007 14:55:53 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 29 Jun 2007 14:55:53 +1000 From: David Chinner To: Pavel Machek Cc: David Chinner , David Greaves , David Robinson , LVM general discussion and development , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid Subject: Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume Message-ID: <20070629045553.GN31489@sgi.com> References: <46744065.6060605@dgreaves.com> <4674645F.5000906@gmail.com> <46751D37.5020608@dgreaves.com> <4676390E.6010202@dgreaves.com> <20070618145007.GE85884050@sgi.com> <20070627204924.GA4777@ucw.cz> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070627204924.GA4777@ucw.cz> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12013 
X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs

On Wed, Jun 27, 2007 at 08:49:24PM +0000, Pavel Machek wrote:
> Hi!
> > > FWIW, I'm on record stating that "sync" is not sufficient to quiesce an XFS
> > filesystem for a suspend/resume to work safely and have argued that the only
>
> Hmm, so XFS writes to disk even when its threads are frozen?

They issue async I/O before they sleep and expect processing to be done on I/O completion via workqueues.

> > safe thing to do is freeze the filesystem before suspend and thaw it after
> > resume. This is why I originally asked you to test that with the other problem
>
> Could you add that to the XFS threads if it is really required? They
> do know that they are being frozen for suspend.

We don't suspend the threads on a filesystem freeze - they continue to run. A filesystem freeze guarantees the filesystem is clean and that the in-memory state matches what is on disk. It is not possible for the filesystem to issue I/O or have outstanding I/O when it is in the frozen state, so the state of the threads and/or workqueues does not matter because they will be idle.

Cheers,

Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 28 22:01:01 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 22:01:07 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5T50wtL007165 for ; Thu, 28 Jun 2007 22:01:00 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA02586; Fri, 29 Jun 2007 15:00:40 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5T50VeW4781749; Fri, 29 Jun 2007 15:00:32 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5T50O4H4781636; Fri, 29 Jun 2007 15:00:24 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 29 Jun 2007 15:00:24 +1000 From: David Chinner To: "Rafael J. 
Wysocki" Cc: Pavel Machek , David Chinner , linux-pm , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, LinuxRaid , LVM general discussion and development , David Robinson , David Greaves , Oleg Nesterov Subject: Re: [linux-pm] Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume Message-ID: <20070629050024.GO31489@sgi.com> References: <46744065.6060605@dgreaves.com> <200706281727.35430.rjw@sisk.pl> <20070628220045.GA4521@elf.ucw.cz> <200706290016.45719.rjw@sisk.pl> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200706290016.45719.rjw@sisk.pl> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12014 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Jun 29, 2007 at 12:16:44AM +0200, Rafael J. Wysocki wrote: > There are two solutions possible, IMO. One would be to make these workqueues > freezable, which is possible, but hacky and Oleg didn't like that very much. > The second would be to freeze XFS from within the hibernation code path, > using freeze_bdev(). The second is much more likely to work reliably. If freezing the filesystem leaves something in an inconsistent state, then it's something I can reproduce and debug without needing to suspend/resume. FWIW, don't forget you need to thaw the filesystem on resume. Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 28 22:11:20 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 22:11:25 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.7 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5T5BHtL010587 for ; Thu, 28 Jun 2007 22:11:19 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA02814; Fri, 29 Jun 2007 15:11:05 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 9BA0758C38F1; Fri, 29 Jun 2007 15:11:05 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 967035 - lockdep inumorder annotations are busted Message-Id: <20070629051105.9BA0758C38F1@chook.melbourne.sgi.com> Date: Fri, 29 Jun 2007 15:11:05 +1000 (EST) From: dgc@sgi.com (David Chinner) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12015 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Fix lockdep annotations for xfs_lock_inodes Date: Fri Jun 29 15:10:33 AEST 2007 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: tes@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:29026a fs/xfs/xfs_vnodeops.c - 1.701 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_vnodeops.c.diff?r1=text&tr1=1.701&r2=text&tr2=1.700&f=h - Don't double shift the inumorder subclass out of the lock_mode 
variable. fs/xfs/xfs_inode.h - 1.221 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_inode.h.diff?r1=text&tr1=1.221&r2=text&tr2=1.220&f=h - Don't double shift the inumorder subclass out of the lock_mode variable. From owner-xfs@oss.sgi.com Thu Jun 28 22:37:56 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 22:38:01 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from outbound4.mail.tds.net (outbound4.mail.tds.net [216.170.230.94]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5T5bttL015778 for ; Thu, 28 Jun 2007 22:37:56 -0700 Received: from outaamta02.mail.tds.net (outaamta02.mail.tds.net [216.170.230.32]) by outbound4.mail.tds.net (8.13.6/8.13.4) with ESMTP id l5SDRvAm014053; Thu, 28 Jun 2007 08:27:58 -0500 Received: from turnip.jamponi.pvt ([216.165.151.198]) by outaamta02.mail.tds.net with ESMTP id <20070628132757.TRVA21812.outaamta02.mail.tds.net@turnip.jamponi.pvt>; Thu, 28 Jun 2007 08:27:57 -0500 Received: by turnip.jamponi.pvt (Postfix, from userid 1000) id 42CB618078; Thu, 28 Jun 2007 08:27:55 -0500 (CDT) Received: from localhost (localhost [127.0.0.1]) by turnip.jamponi.pvt (Postfix) with ESMTP id 27E6018077; Thu, 28 Jun 2007 08:27:55 -0500 (CDT) Date: Thu, 28 Jun 2007 08:27:54 -0500 (CDT) From: Jon Nelson To: Matti Aarnio cc: Peter Rabbitson , Justin Piszcz , linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz Subject: Re: Fastest Chunk Size w/XFS For MD Software RAID = 1024k In-Reply-To: <20070628090551.GG4504@mea-ext.zmailer.org> Message-ID: References: <46832E60.9000006@rabbit.us> <46837056.4050306@rabbit.us> <20070628090551.GG4504@mea-ext.zmailer.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean 
X-archive-position: 12016 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jnelson-linux-raid@jamponi.net Precedence: bulk X-list: xfs On Thu, 28 Jun 2007, Matti Aarnio wrote: > I do have LVM in between the MD-RAID5 and XFS, so I did also align > the LVM to that 3 * 256k. How did you align the LVM ? -- Jon Nelson From owner-xfs@oss.sgi.com Thu Jun 28 23:13:11 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 23:13:16 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5T6D5tL023401 for ; Thu, 28 Jun 2007 23:13:07 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA04110; Fri, 29 Jun 2007 16:12:57 +1000 Date: Fri, 29 Jun 2007 16:16:53 +1000 To: xfs-dev , "xfs@oss.sgi.com" Subject: REVIEW: xfs_bmap '-v' flag has no effect on a realtime file system From: "Barry Naujok" Organization: SGI Content-Type: multipart/mixed; boundary=----------OzfYWMDWik0amWcHwYPaQK MIME-Version: 1.0 Message-ID: User-Agent: Opera Mail/9.10 (Win32) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12017 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: bulk X-list: xfs ------------OzfYWMDWik0amWcHwYPaQK Content-Type: text/plain; format=flowed; delsp=yes; charset=iso-8859-15 Content-Transfer-Encoding: 7bit It's rather confusing when you run xfs_bmap -v on a realtime file and it doesn't show any verbose information at 
all. The attached patch presents verbose information but without the AG information for realtime files. Eg: # xfs_bmap -v foo_rt foo_rt: EXT: FILE-OFFSET RT-BLOCK-RANGE TOTAL 0: [0..2047]: 0..2047 2048 ------------OzfYWMDWik0amWcHwYPaQK Content-Disposition: attachment; filename=xfs_bmap_verbose_rt.patch Content-Type: application/octet-stream; name=xfs_bmap_verbose_rt.patch Content-Transfer-Encoding: Base64
[Base64-encoded patch body omitted. Decoded, it modifies xfsprogs/io/bmap.c to add an is_rt flag: instead of clearing vflag for realtime files, verbose output is retained, bbperag/sunit/swidth are zeroed, the AG and AG-OFFSET columns are suppressed, and the BLOCK-RANGE header is relabelled RT-BLOCK-RANGE.]
------------OzfYWMDWik0amWcHwYPaQK-- From owner-xfs@oss.sgi.com Thu Jun 28 23:16:22 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 23:16:27 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5T6GItL024418 for ; Thu, 28 Jun 2007 23:16:21 -0700 Received: from snort.melbourne.sgi.com
(snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA04193; Fri, 29 Jun 2007 16:16:15 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id l5T6GCeW4807750; Fri, 29 Jun 2007 16:16:15 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id l5T6GBhP4808074; Fri, 29 Jun 2007 16:16:11 +1000 (AEST) X-Authentication-Warning: snort.melbourne.sgi.com: dgc set sender to dgc@sgi.com using -f Date: Fri, 29 Jun 2007 16:16:11 +1000 From: David Chinner To: Barry Naujok Cc: xfs-dev , "xfs@oss.sgi.com" Subject: Re: REVIEW: xfs_bmap '-v' flag has no effect on a realtime file system Message-ID: <20070629061611.GP31489@sgi.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12018 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Fri, Jun 29, 2007 at 04:16:53PM +1000, Barry Naujok wrote: > It's rather confusing when you run xfs_bmap -v on a realtime file and > it doesn't show any verbose information at all. > > The attached patch presents verbose information but without the AG > information for realtime files. Looks ok to me. Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Jun 28 23:29:16 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 23:29:19 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.5 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from zebday.corky.net (corky.net [212.150.53.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5T6TEtL027521 for ; Thu, 28 Jun 2007 23:29:16 -0700 Received: from [127.0.0.1] (zebday [127.0.0.1]) by zebday.corky.net (Postfix) with ESMTP id 5673EEE4D1; Fri, 29 Jun 2007 09:29:15 +0300 (IDT) Message-ID: <4684A506.4030705@corky.net> Date: Fri, 29 Jun 2007 07:21:58 +0100 From: Just Marc User-Agent: Mozilla-Thunderbird 2.0.0.4 (X11/20070622) MIME-Version: 1.0 To: Eric Sandeen CC: xfs@oss.sgi.com Subject: Re: xfs_fsr, performance related tweaks References: <4683ADEB.3010106@corky.net> <46841C60.5030207@sandeen.net> In-Reply-To: <46841C60.5030207@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12019 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: marc@corky.net Precedence: bulk X-list: xfs Hi Eric, In my particular case, and I'm sure for many other people, files that are stored never change again until they deleted. I hinted that there could be a command line switch to turn this functionality on, as it may not be perfect for everyone's use cases. If nobody likes this still, I would appreciate some hints on how to mark files as no-defrag from within fsr itself given that I only have the file descriptor ... 
A hack like looking up the descriptor in /proc/self/fd should work, but is linux specific and is too hackish in my opinion. I'd like to at least have a nice simple patch for my own uses. Marc Eric Sandeen wrote: > Just Marc wrote: > > >> 2. Files for which 'No improvement will be made' should also be marked >> as no-defrag, this will avoid a ton of extra work in the future. >> > > But... that file could drastically change in the future, no? Just > because it can't be improved now doesn't necessarily mean that it should > never be revisited on subsequent runs, does it? > > -Eric > > > From owner-xfs@oss.sgi.com Thu Jun 28 23:34:03 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 23:34:07 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.8 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5T6XvtL029136 for ; Thu, 28 Jun 2007 23:34:01 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA04761; Fri, 29 Jun 2007 16:33:53 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 1161) id 1B88E58C38F1; Fri, 29 Jun 2007 16:33:53 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 958214 - xfs_bmap '-v' flag has no effect on a realtime file system Message-Id: <20070629063353.1B88E58C38F1@chook.melbourne.sgi.com> Date: Fri, 29 Jun 2007 16:33:53 +1000 (EST) From: bnaujok@sgi.com (Barry Naujok) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12020 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: 
bnaujok@sgi.com Precedence: bulk X-list: xfs Date: Fri Jun 29 16:33:35 AEST 2007 Workarea: chook.melbourne.sgi.com:/home/bnaujok/isms/xfs-cmds Inspected by: dgc@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:29029a xfsprogs/io/bmap.c - 1.14 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/io/bmap.c.diff?r1=text&tr1=1.14&r2=text&tr2=1.13&f=h - xfs_bmap '-v' flag has no effect on a realtime file system From owner-xfs@oss.sgi.com Thu Jun 28 23:38:28 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 23:38:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5T6cPtL030460 for ; Thu, 28 Jun 2007 23:38:26 -0700 Received: from pc-bnaujok.melbourne.sgi.com (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA04838; Fri, 29 Jun 2007 16:38:21 +1000 Date: Fri, 29 Jun 2007 16:41:22 +1000 To: "Just Marc" Subject: Re: xfs_fsr, performance related tweaks From: "Barry Naujok" Organization: SGI Cc: xfs@oss.sgi.com Content-Type: text/plain; format=flowed; delsp=yes; charset=iso-8859-15 MIME-Version: 1.0 References: <4683ADEB.3010106@corky.net> <46841C60.5030207@sandeen.net> <4684A506.4030705@corky.net> Content-Transfer-Encoding: 7bit Message-ID: In-Reply-To: <4684A506.4030705@corky.net> User-Agent: Opera Mail/9.10 (Win32) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12021 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@sgi.com Precedence: 
bulk X-list: xfs On Fri, 29 Jun 2007 16:21:58 +1000, Just Marc wrote: > Hi Eric, > > In my particular case, and I'm sure for many other people, files that > are stored never change again until they deleted. I hinted that there > could be a command line switch to turn this functionality on, as it may > not be perfect for everyone's use cases. > > If nobody likes this still, I would appreciate some hints on how to mark > files as no-defrag from within fsr itself given that I only have the > file descriptor ... A hack like looking up the descriptor in > /proc/self/fd should work, but is linux specific and is too hackish in > my opinion. I'd like to at least have a nice simple patch for my own > uses. Eric, You can use the xfs_io chattr command to mark known files as nodefrag. Using the chattr -R option can be used to recurse directories. Regards, Barry. From owner-xfs@oss.sgi.com Thu Jun 28 23:39:27 2007 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Jun 2007 23:39:31 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.7 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from zebday.corky.net (corky.net [212.150.53.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5T6dPtL030878 for ; Thu, 28 Jun 2007 23:39:26 -0700 Received: from [127.0.0.1] (zebday [127.0.0.1]) by zebday.corky.net (Postfix) with ESMTP id 80ED5EE501; Fri, 29 Jun 2007 09:38:21 +0300 (IDT) Message-ID: <4684A728.1050405@corky.net> Date: Fri, 29 Jun 2007 07:31:04 +0100 From: Just Marc User-Agent: Mozilla-Thunderbird 2.0.0.4 (X11/20070622) MIME-Version: 1.0 To: nscott@aconex.com CC: xfs@oss.sgi.com, andi@firstfloor.org Subject: Re: xfs_fsr, performance related tweaks References: <4683ADF5.9050901@corky.net> <1183075929.15488.148.camel@edge.yarra.acx> In-Reply-To: <1183075929.15488.148.camel@edge.yarra.acx> Content-Type: text/plain; charset=ISO-8859-1; format=flowed 
Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12022 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: marc@corky.net Precedence: bulk X-list: xfs

Hi Nathan, Andy,

I tried calling the ioctl, of course; it does accept a path (also in the example in which it is used), but it returned EINVAL. I'll try again.

> I guess one could define an additional dont-defrag (or perhaps
> rather already-defrag) flag that is always
> cleared when the file changes. That could be safely set here.

I had this in mind but thought not to bring it up as it's too low level, although I prefer this solution myself as it caters for all cases automatically.

> But then I'm not sure it would be worth the effort. Why would you run fsr that often that it matters?

I run fsr all the time because in my case there are hundreds of gigs of new data added to each file system every day, and some of it badly needs to be defragged, as the files added are actively being served, not just stored.

> Also I would expect that one can easily detect in many cases a defragmented file by looking at the number of extents in the inode only and that would
> make it equivalent to the flag. The cases where this is not the case are probably rare too.

Well, this case would change from file to file (depending on its size), and a high number of extents may be acceptable for very large files, so either come up with a formula that says "I can accept X extents per gig and not continue defragging", or clear the no-defrag bit on file modification, which is a cleaner solution.

Marc

Nathan Scott wrote:
> Just call the ioctl directly - fsr is already doing this in a bunch
> of places (even has a call to XFS_IOC_FSSETXATTR already, elsewhere).
> The xfsctl wrapper is just to give some tools platform independence -
> on IRIX (which shares the xfs_io code) some of the syscalls take
> paths, but on Linux only file descriptors are used.
>
> cheers.
>
> --
> Nathan

From owner-xfs@oss.sgi.com Thu Jun 28 23:48:34 2007
From: Just Marc
Date: Fri, 29 Jun 2007 07:41:15 +0100
To: Barry Naujok
Cc: xfs@oss.sgi.com
Subject: Re: xfs_fsr, performance related tweaks

Barry Naujok wrote:
> On Fri, 29 Jun 2007 16:21:58 +1000, Just Marc wrote:
>
>> Hi Eric,
>>
>> In my particular case, and I'm sure for many other people, files that
>> are stored never change again until they are deleted. I hinted that
>> there could be a command line switch to turn this functionality on,
>> as it may not be perfect for everyone's use cases.
>> If nobody likes this still, I would appreciate some hints on how to
>> mark files as no-defrag from within fsr itself given that I only
>> have the file descriptor ... A hack like looking up the descriptor
>> in /proc/self/fd should work, but is Linux specific and too hackish
>> in my opinion. I'd like to at least have a nice simple patch for my
>> own uses.
>
> Eric,
>
> You can use the xfs_io chattr command to mark known files as
> nodefrag. The chattr -R option can be used to recurse directories.

Barry,

That's right, but I can't do this on a filesystem that was defragged,
say, a minute ago, when in the meantime 20 new files got added (and I
don't know what these files are...). The whole filesystem has to be
scanned again, and here comes the issue: fsr tries to reduce the
extent count of many files for which it can't, incurring lots of work
and load before it even reaches the new files to defrag them.

Marc
From owner-xfs@oss.sgi.com Thu Jun 28 23:55:44 2007
From: David Chinner
Date: Fri, 29 Jun 2007 16:55:25 +1000
To: Ruben Porras
Cc: xfs@oss.sgi.com, iusty@k1024.org
Subject: Re: XFS shrink (step 0)

On Thu, Jun 28, 2007 at 12:38:44PM +0200, Ruben Porras wrote:
> David Chinner wrote:
> > No, there isn't anything currently in existence to do this.
> >
> > It's not difficult, though. What you need to do is count the number
> > of used blocks in the AGs that will be truncated off, and check
> > whether there is enough free space in the remaining AGs to hold all
> > the blocks that we are going to move.
> >
> > I think this could be done with a single loop across the perag
> > array or with a simple xfs_db wrapper and some shell/awk/perl
> > magic.
>
> Do you think it is ok to depend on shell/awk/perl?

Sure. We have a few programs that are just shell wrappers around other
xfs programs, e.g.:

	xfs_bmap:  shell script that calls xfs_io
	xfs_check: shell script that calls xfs_db
	xfs_info:  shell script that calls xfs_growfs

> I'll do it in C looping through the perag array.

For something like this it's probably easier to do with
shell/perl/awk, e.g.
in shell, the number of AGs in the filesystem, and a loop over all AGs:

	numags=`xfs_db -r -c "sb 0" -c "p agcount" /dev/sdb8 | sed -e 's/.* = //'`
	lastag=`expr $numags - 1`
	for ags in `seq 0 1 $lastag`; do
		....
	done

Free space in AG 0:

	xfs_db -r -c "freesp -s -a 0" /dev/sdb8 | awk '/total free blocks/ {print $4}'

And so on. You can peek into pretty much any structure on disk with
xfs_db, and you can do it online, so it's pretty much perfect for this
sort of checking.

I'd start with something like this, and if it gets too complex then we
need to look at integrating it into xfs_db (i.e. writing it in C)....

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
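The per-AG probes above can be strung together into the check Dave describes: total the blocks still in use in the AGs that would be cut off, and compare against the free space in the surviving AGs. A sketch only, under assumptions: the device path and number of surviving AGs are hypothetical, xfs_db must be installed, and the shorter last AG and free-space metadata overhead are ignored.

```shell
#!/bin/sh
# Rough pre-shrink check: would the blocks in use in the AGs being cut
# off fit into the free space of the AGs that remain?
parse_field() { sed -e 's/.* = //'; }   # "agcount = 16" -> "16"

ag_free() {
    # free blocks in AG $2 of device $1, from the freesp summary line
    xfs_db -r -c "freesp -s -a $2" "$1" | awk '/total free blocks/ {print $4}'
}

shrink_check() {
    dev=$1 keep=$2    # keep = number of AGs that would survive
    numags=`xfs_db -r -c "sb 0" -c "p agcount" "$dev" | parse_field`
    agblocks=`xfs_db -r -c "sb 0" -c "p agblocks" "$dev" | parse_field`
    need=0 avail=0 ag=0
    while [ "$ag" -lt "$numags" ]; do
        free=`ag_free "$dev" "$ag"`
        if [ "$ag" -lt "$keep" ]; then
            avail=`expr $avail + $free`             # room in surviving AGs
        else
            need=`expr $need + $agblocks - $free`   # used blocks in doomed AGs
        fi
        ag=`expr $ag + 1`
    done
    echo "need $need avail $avail"
    [ "$need" -le "$avail" ]
}

# e.g. shrink_check /dev/sdb8 4
```

This deliberately overestimates "used" blocks in the doomed AGs (metadata counts too), so a pass is a conservative yes.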
From owner-xfs@oss.sgi.com Fri Jun 29 00:03:31 2007
From: Timothy Shimmin
Date: Fri, 29 Jun 2007 17:03:10 +1000
To: David Chinner
Cc: Szabolcs Illes, xfs@oss.sgi.com
Subject: Re: After reboot fs with barrier faster deletes then fs with nobarrier

Hi Dave,

David Chinner wrote:
> On Thu, Jun 28, 2007 at 03:00:46PM +1000, Timothy Shimmin wrote:
>> David Chinner wrote:
>>> On Wed, Jun 27, 2007 at 06:58:29PM +0100, Szabolcs Illes wrote:
>>>> Hi,
>>>>
>>>> I am using XFS on my laptop, and I have realized that the nobarrier
>>>> mount option sometimes slows down deleting a large number of small
>>>> files, like the kernel source tree. I made four tests, deleting the
>>>> kernel source right after unpack and after reboot, with both the
>>>> barrier and nobarrier options:
>>>>
>>>> mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2
>>>> illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync && reboot
>>>> After reboot:
>>>> illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/
>>>> real    0m28.127s
>>>> user    0m0.044s
>>>> sys     0m2.924s
>>>>
>>>> mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2,nobarrier
>>>> illes@sunset:~/tmp> tar xjf ~/Download/linux-2.6.21.5.tar.bz2 && sync && reboot
>>>> After reboot:
>>>> illes@sunset:~/tmp> time rm -rf linux-2.6.21.5/
>>>> real    1m12.738s
>>>> user    0m0.032s
>>>> sys     0m2.548s
>>>>
>>>> It looks like with barrier it's faster deleting files after reboot
>>>> (28 sec vs 72 sec!!!).
>>> Of course the second run will be faster here - the inodes are
>>> already in cache and so there's no reading from disk needed to find
>>> the files to delete....
>>>
>>> That's because run time after reboot is determined by how fast you
>>> can traverse the directory structure (i.e. how many seeks are
>>> involved). Barriers will have little impact on the uncached
>>> rm -rf results,
>> But it looks like barriers _are_ having an impact on the uncached
>> rm -rf results.
>
> Tim, please be careful with what you quote - you've quoted a
> different set of results to what I did and commented on, and that
> takes my comments way out of context.

Sorry for rearranging the quote (I haven't touched it this time ;-).
My aim was just to highlight the uncached results, which I thought
were a bit surprising (the other results not being surprising), and I
was wondering what your take on that was.

> In hindsight, I should have phrased it as "barriers _should_ have
> little impact on uncached rm -rf results."
>
> We've seen little impact in the past, and it's always been a
> decrease in performance, so what we need to find out is how they are
> having an impact. I suspect that it's to do with drive cache control
> algorithms, and barriers substantially reducing the amount of dirty
> data being cached, and hence read caching working efficiently as a
> side effect.
>
> I guess the only way to confirm this is blktrace output to see what
> I/Os are taking longer to execute when barriers are disabled.

Yep.
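The blktrace comparison suggested above could be sketched like this: capture a trace per mount configuration, then compute issue-to-completion latency per request. This is an assumption-laden sketch: the device name is hypothetical, and the awk relies on blkparse's default column layout (dev cpu seq time pid action rwbs sector + blocks), which should be verified against blkparse(1).

```shell
# Capture (needs root; device hypothetical):
#   blktrace -d /dev/sda -o - | blkparse -i - > barrier.txt
# Latency from issue (action D) to completion (action C), keyed by sector:
d2c_latency() {
    awk '$6 == "D" { issue[$8] = $4 }
         $6 == "C" && ($8 in issue) {
             printf "%s %.6f\n", $8, $4 - issue[$8]; delete issue[$8]
         }'
}

# e.g. d2c_latency < barrier.txt | sort -k2 -n | tail
```

Running this over a barrier and a nobarrier trace and comparing the tails would show which I/Os actually got slower.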
--Tim

From owner-xfs@oss.sgi.com Fri Jun 29 00:08:28 2007
From: David Chinner
Date: Fri, 29 Jun 2007 17:08:14 +1000
To: Just Marc
Cc: Barry Naujok, xfs@oss.sgi.com
Subject: Re: xfs_fsr, performance related tweaks

On Fri, Jun 29, 2007 at 07:41:15AM +0100, Just Marc wrote:
> Barry Naujok wrote:
> > You can use the xfs_io chattr command to mark known files as
> > nodefrag. The chattr -R option can be used to recurse directories.
>
> That's right but I can't do this on a filesystem that's just been
> defragged say a minute ago, and in the meantime 20 new files got
> added (I don't know what these files are...).

So walk the filesystem with a script that queries the number of
extents in each file, and if a file has a single extent then run
xfs_io on it.

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
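The walk Dave suggests might look like the sketch below. Assumptions are flagged inline: the path is hypothetical, xfsprogs must be installed, the extent count relies on xfs_bmap printing the filename followed by one line per extent (holes included), and 'f' is taken to be xfs_io chattr's no-defrag flag letter per current xfs_io(8) - check your version.

```shell
# Count a file's extents from xfs_bmap output: skip the filename header
# line and any hole lines.
count_extents() {
    xfs_bmap "$1" | awk 'NR > 1 && $0 !~ /hole/ { n++ } END { print n + 0 }'
}

# Mark every already-contiguous file no-defrag so the next fsr pass
# skips it instead of re-working it.
mark_contiguous_nodefrag() {
    find "$1" -type f | while read -r f; do
        n=`count_extents "$f"`
        [ "$n" -le 1 ] && xfs_io -c "chattr +f" "$f"
    done
}

# e.g. mark_contiguous_nodefrag /data
```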
From owner-xfs@oss.sgi.com Fri Jun 29 00:23:46 2007
From: Just Marc
Date: Fri, 29 Jun 2007 08:16:28 +0100
To: David Chinner
Cc: Barry Naujok, xfs@oss.sgi.com
Subject: Re: xfs_fsr, performance related tweaks

David,

In my first post I already said something like that can be done, but
it's just an ugly hack. Don't you think it would best be handled
cleanly and correctly by fsr itself?

David Chinner wrote:
> On Fri, Jun 29, 2007 at 07:41:15AM +0100, Just Marc wrote:
>
>> Barry Naujok wrote:
>>
>>> You can use the xfs_io chattr command to mark known files as
>>> nodefrag. The chattr -R option can be used to recurse directories.
>>>
>> That's right but I can't do this on a filesystem that's just been
>> defragged say a minute ago, and in the meantime 20 new files got
>> added (I don't know what these files are...).
>
> So walk the filesystem with a script that queries the number of
> extents in each file, and if they have a single extent then
> run the xfs_io on them.
>
> Cheers,
>
> Dave.

From owner-xfs@oss.sgi.com Fri Jun 29 00:34:54 2007
From: Nathan Scott
Date: Fri, 29 Jun 2007 17:33:55 +1000
To: Just Marc
Cc: David Chinner, Barry Naujok, xfs@oss.sgi.com
Subject: Re: xfs_fsr, performance related tweaks

On Fri, 2007-06-29 at 08:16 +0100, Just Marc wrote:
> In my first post I already said something like that can be done, but
> it's just an ugly hack. Don't you think it would best be handled
> cleanly and correctly by fsr itself?

As I said earlier, fsr already issues the ioctl you're concerned about
using - I'm not sure what the issue is there. If you need to do a
setxattr, Just Do It.

cheers.
--
Nathan
From owner-xfs@oss.sgi.com Fri Jun 29 00:40:03 2007
From: David Greaves
Date: Fri, 29 Jun 2007 08:40:00 +0100
To: David Chinner
Cc: "Rafael J. Wysocki", Pavel Machek, linux-pm, linux-kernel, xfs@oss.sgi.com, LinuxRaid, LVM general discussion and development, David Robinson, Oleg Nesterov
Subject: Re: [linux-pm] Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume

David Chinner wrote:
> On Fri, Jun 29, 2007 at 12:16:44AM +0200, Rafael J. Wysocki wrote:
>> There are two solutions possible, IMO. One would be to make these
>> workqueues freezable, which is possible, but hacky, and Oleg didn't
>> like that very much. The second would be to freeze XFS from within
>> the hibernation code path, using freeze_bdev().
>
> The second is much more likely to work reliably. If freezing the
> filesystem leaves something in an inconsistent state, then it's
> something I can reproduce and debug without needing to
> suspend/resume.
>
> FWIW, don't forget you need to thaw the filesystem on resume.

I've been a little distracted recently - sorry. I'll re-read the
thread and see if there are any test actions I need to complete.
I do know that the corruption problems I've been having:
 a) only happen after hibernate/resume
 b) only ever happen on one of 2 XFS filesystems
 c) happen even when the script does xfs_freeze;sync;hibernate;xfs_thaw

What happens if a filesystem is frozen and I hibernate? Will it be
thawed when I resume?

David
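For reference, the manual ordering being discussed could be sketched as below. The mountpoint is hypothetical, root is required for the real thing, and the function only prints the commands by default (set RUN to the empty string to execute them), since freezing and hibernating a live box is not something to run by accident.

```shell
# Freeze an XFS filesystem around a hibernate, and thaw it afterwards -
# per Dave's point, a manual freeze needs a manual thaw after resume.
hibernate_with_frozen_fs() {
    run=${RUN-echo}    # dry run by default; RUN= (empty) executes for real
    $run xfs_freeze -f "$1"                      # flush and block new writes
    $run sync
    $run sh -c 'echo disk > /sys/power/state'    # hibernate; returns on resume
    $run xfs_freeze -u "$1"                      # you froze it, you thaw it
}

# e.g. hibernate_with_frozen_fs /scratch
```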
From owner-xfs@oss.sgi.com Fri Jun 29 00:41:22 2007
From: David Chinner
Date: Fri, 29 Jun 2007 17:41:14 +1000
To: Just Marc
Cc: Barry Naujok, xfs@oss.sgi.com
Subject: Re: xfs_fsr, performance related tweaks

On Fri, Jun 29, 2007 at 08:16:28AM +0100, Just Marc wrote:
> David,
>
> In my first post I already said something like that can be done, but
> it's just an ugly hack. Don't you think it would best be handled
> cleanly and correctly by fsr itself?

No, I don't - if you want files not to be defragmented, then you have
to set the flags yourself in some way. You have a specific need that
can be solved by some scripting to describe your defrag/no-defrag
policy. xfs_fsr has no place in setting defrag policy; its function is
simply to find and defrag files.

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
From owner-xfs@oss.sgi.com Fri Jun 29 00:43:49 2007
From: David Chinner
Date: Fri, 29 Jun 2007 17:43:22 +1000
To: David Greaves
Cc: "Rafael J. Wysocki", Pavel Machek, linux-pm, linux-kernel, xfs@oss.sgi.com, LinuxRaid, LVM general discussion and development, David Robinson, Oleg Nesterov
Subject: Re: [linux-pm] Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume

On Fri, Jun 29, 2007 at 08:40:00AM +0100, David Greaves wrote:
> What happens if a filesystem is frozen and I hibernate?
> Will it be thawed when I resume?

If you froze it yourself, then you'll have to thaw it yourself.

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

From owner-xfs@oss.sgi.com Fri Jun 29 00:46:20 2007
From: Just Marc
Date: Fri, 29 Jun 2007 08:39:02 +0100
To: David Chinner
Cc: Barry Naujok, xfs@oss.sgi.com
Subject: Re: xfs_fsr, performance related tweaks

I agree with you. But what about files it works on, heavily, and then
decides can't be defragged further? It then tries to defrag them again
and again and again. And that's the end of my story.

David Chinner wrote:
> No, I don't - if you want files not to be defragmented, then you
> have to set the flags yourself in some way.
> You have a specific need
> that can be solved by some scripting to describe your defrag/no
> defrag policy. xfs_fsr has no place in setting defrag policy; its
> function is simply to find and defrag files.
>
> Cheers,
>
> Dave.

From owner-xfs@oss.sgi.com Fri Jun 29 00:49:54 2007
From: Christoph Hellwig
Date: Fri, 29 Jun 2007 08:20:17 +0100
To: Andrew Morton
Cc: "Amit K. Arora", linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner, Andreas Dilger, suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com
Subject: Re: [PATCH 0/6][TAKE5] fallocate system call

On Thu, Jun 28, 2007 at 11:33:42AM -0700, Andrew Morton wrote:
> I think Mingming was asking that Ted move the current quilt tree into
> git, presumably because she's working off git.
>
> I'm not sure what to do, really. The core kernel patches need to be
> in Ted's tree for testing, but that'll create a mess for me.

Could we please stop this stupid ext4-centrism? XFS is ready, so we
can put in the syscalls backed by XFS. We already did this with the
xattr syscalls in 2.4, btw.

Then again, I don't think we should put it in quite yet, because this
thread has degraded into creeping featurism. Please give me some more
time to prepare a semi-coherent rant about this..
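For context, the preallocation that fallocate() would expose generically is already reachable on XFS through xfs_io. A hedged sketch: the file path is hypothetical, resvsp's syntax is per xfs_io(8) and should be checked against your version, and the function prints the command by default (set RUN to the empty string to execute it).

```shell
# Reserve space for a file up front, so later writes land in few extents -
# the XFS-side behaviour the proposed fallocate() syscall would expose.
prealloc_file() {
    # $1 = file, $2 = length in bytes; -f creates the file if needed
    ${RUN-echo} xfs_io -f -c "resvsp 0 $2" "$1"
}

# e.g. prealloc_file /srv/big.dat 10485760
```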
From owner-xfs@oss.sgi.com Fri Jun 29 00:54:15 2007
From: David Greaves
Date: Fri, 29 Jun 2007 08:54:12 +0100
To: David Chinner
Cc: "Rafael J. Wysocki", Pavel Machek, linux-pm, linux-kernel, xfs@oss.sgi.com, LinuxRaid, LVM general discussion and development, David Robinson, Oleg Nesterov
Subject: Re: [linux-pm] Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume

Wysocki" , Pavel Machek , linux-pm , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, LinuxRaid , LVM general discussion and development , David Robinson , Oleg Nesterov Subject: Re: [linux-pm] Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume References: <46744065.6060605@dgreaves.com> <200706281727.35430.rjw@sisk.pl> <20070628220045.GA4521@elf.ucw.cz> <200706290016.45719.rjw@sisk.pl> <20070629050024.GO31489@sgi.com> <4684B750.5030001@dgreaves.com> <20070629074322.GT31489@sgi.com> In-Reply-To: <20070629074322.GT31489@sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12034 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs David Chinner wrote: > On Fri, Jun 29, 2007 at 08:40:00AM +0100, David Greaves wrote: >> What happens if a filesystem is frozen and I hibernate? >> Will it be thawed when I resume? > > If you froze it yourself, then you'll have to thaw it yourself. So hibernate will not attempt to re-freeze a frozen fs and, during resume, it will only thaw filesystems that were frozen by the suspend? 
David From owner-xfs@oss.sgi.com Fri Jun 29 01:13:59 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 01:14:04 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from one.firstfloor.org (one.firstfloor.org [213.235.205.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5T8DutL027964 for ; Fri, 29 Jun 2007 01:13:58 -0700 Received: by one.firstfloor.org (Postfix, from userid 503) id 7E32218902A4; Fri, 29 Jun 2007 10:13:57 +0200 (CEST) Date: Fri, 29 Jun 2007 10:13:57 +0200 From: Andi Kleen To: Just Marc Cc: nscott@aconex.com, xfs@oss.sgi.com, andi@firstfloor.org Subject: Re: xfs_fsr, performance related tweaks Message-ID: <20070629081357.GC14519@one.firstfloor.org> References: <4683ADF5.9050901@corky.net> <1183075929.15488.148.camel@edge.yarra.acx> <4684A728.1050405@corky.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4684A728.1050405@corky.net> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12035 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: andi@firstfloor.org Precedence: bulk X-list: xfs > I run fsr all the time because in my case there is hundreds of gigs of > new data added to each file system every day, some of it does badly need > to be defragged as the files added are actively being served, not just It might be better to investigate why XFS does such a poor job for your workload in the first place. Unless the file systems are always nearly full or you have a lot of holes, it shouldn't fragment that badly.
-Andi From owner-xfs@oss.sgi.com Fri Jun 29 01:20:48 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 01:20:54 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.6 required=5.0 tests=AWL,BAYES_60 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5T8KjtL029801 for ; Fri, 29 Jun 2007 01:20:47 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 629DFE6BA2; Fri, 29 Jun 2007 09:20:16 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id rcwaSZ5AURQ3; Fri, 29 Jun 2007 09:19:45 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id CE9E1E6AC0; Fri, 29 Jun 2007 09:20:15 +0100 (BST) Received: from [10.0.0.90] by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1I4Big-0006oB-4V; Fri, 29 Jun 2007 09:20:46 +0100 Message-ID: <4684C0DD.4080702@dgreaves.com> Date: Fri, 29 Jun 2007 09:20:45 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.4 (X11/20070618) MIME-Version: 1.0 To: Tejun Heo , David Chinner Cc: David Robinson , LVM general discussion and development , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, linux-pm , LinuxRaid , "Rafael J. 
Wysocki" Subject: Re: [linux-lvm] 2.6.22-rc5 XFS fails after hibernate/resume References: <46744065.6060605@dgreaves.com> <4674645F.5000906@gmail.com> <46751D37.5020608@dgreaves.com> <4676390E.6010202@dgreaves.com> <20070618145007.GE85884050@sgi.com> <4676D97E.4000403@dgreaves.com> <4677A0C7.4000306@dgreaves.com> <4677A596.7090404@gmail.com> <4677E496.3080506@dgreaves.com> <4678DF56.1020903@gmail.com> <467ABE25.7060303@dgreaves.com> In-Reply-To: <467ABE25.7060303@dgreaves.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12036 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs David Greaves wrote: > been away, back now... again... David Greaves wrote: > When I move the swap/resume partition to a different controller (ie when > I broke the / mirror and used the freed space) the problem seems to go > away. No, it's not gone away - but it's taking longer to show up. I can try and put together a test loop that does work, hibernates, resumes and repeats but since I know it crashes at some point there doesn't seem much point unless I'm looking for something. There's not much in the logs - is there any other instrumentation that people could suggest? DaveC, given this is happening without (obvious) libata errors do you think it may be something in the XFS/md/hibernate area? If there's anything to be tried then I'll also move to 2.6.22-rc6. > Tejun Heo wrote: >> It's really weird tho. The PHY RDY status changed events are coming >> from the device which is NOT used while resuming There is an obvious problem there though Tejun (the errors even when sda isn't involved in the OS boot) - can I start another thread about that issue/bug later? 
I need to reshuffle partitions so I'd rather get the hibernate working first and then go back to it if that's OK? David From owner-xfs@oss.sgi.com Fri Jun 29 01:30:23 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 01:30:29 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.5 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_43, J_CHICKENPOX_64 autolearn=no version=3.2.0-pre1-r499012 Received: from zebday.corky.net (corky.net [212.150.53.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5T8ULtL000562 for ; Fri, 29 Jun 2007 01:30:23 -0700 Received: from [127.0.0.1] (zebday [127.0.0.1]) by zebday.corky.net (Postfix) with ESMTP id A347BEE505; Fri, 29 Jun 2007 11:30:21 +0300 (IDT) Message-ID: <4684C168.2050605@corky.net> Date: Fri, 29 Jun 2007 09:23:04 +0100 From: Just Marc User-Agent: Mozilla-Thunderbird 2.0.0.4 (X11/20070622) MIME-Version: 1.0 To: Andi Kleen CC: nscott@aconex.com, xfs@oss.sgi.com Subject: Re: xfs_fsr, performance related tweaks References: <4683ADF5.9050901@corky.net> <1183075929.15488.148.camel@edge.yarra.acx> <4684A728.1050405@corky.net> <20070629081357.GC14519@one.firstfloor.org> In-Reply-To: <20070629081357.GC14519@one.firstfloor.org> Content-Type: multipart/mixed; boundary="------------080102000005080906000206" X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12037 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: marc@corky.net Precedence: bulk X-list: xfs This is a multi-part message in MIME format. --------------080102000005080906000206 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Many files are being added concurrently, 24/7. 
You might have hit the nail on the head, some of the files it was not able to improve are on filesystems that are almost full. As the patch is done anyway here it is. It does three things: 1. Actually make the usage printing become visible using -v, the default case of getopt was never reached. 2. Reduces the number of 'stat' calls when scanning a filesystem for work to do, it now first checks if the file is marked as no-defrag before stat'ing it 3. Optionally, the -X parameter will tell fsr not to defrag files which it had decided it can't improve: 'No improvement will be made' ... Marc Andi Kleen wrote: >> I run fsr all the time because in my case there is hundreds of gigs of >> new data added to each file system every day, some of it does badly need >> to be defragged as the files added are actively being served, not just >> > > It might be better to investigate why XFS does such a poor job > for your workload in the first case. Unless the file systems > are always nearly full or you have a lot of holes it shouldn't fragment > that badly in the first place. > > -Andi > > --------------080102000005080906000206 Content-Type: text/plain; name="xfs_fsr.diff" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="xfs_fsr.diff" --- xfs_fsr.c.orig 2007-06-28 07:40:42.572069164 +0100 +++ xfs_fsr.c 2007-06-29 09:18:44.330906899 +0100 @@ -68,6 +68,7 @@ int vflag; int gflag; +int xflag; static int Mflag; /* static int nflag; */ int dflag = 0; @@ -218,7 +219,7 @@ gflag = ! 
isatty(0); - while ((c = getopt(argc, argv, "C:p:e:MgsdnvTt:f:m:b:N:FV")) != -1 ) + while ((c = getopt(argc, argv, "C:p:e:MgsdnvTt:f:m:b:N:FVXh")) != -1 ) switch (c) { case 'M': Mflag = 1; @@ -267,7 +268,10 @@ case 'V': printf(_("%s version %s\n"), progname, VERSION); exit(0); - default: + case 'X': + xflag = 1; /* no eXtra work */ + break; + case 'h': usage(1); } if (vflag) @@ -371,6 +375,8 @@ " -f leftoff Use this instead of /etc/fsrlast.\n" " -m mtab Use something other than /etc/mtab.\n" " -d Debug, print even more.\n" +" -h Show usage.\n" +" -X Mark as no-defrag files which can't be defragged further.\n" " -v Verbose, more -v's more verbose.\n" ), progname, progname); exit(ret); @@ -904,20 +910,6 @@ } } - /* Check if there is room to copy the file */ - if ( statvfs64( (fsname == NULL ? fname : fsname), &vfss) < 0) { - fsrprintf(_("unable to get fs stat on %s: %s\n"), - fname, strerror(errno)); - return (-1); - } - bsize = vfss.f_frsize ? vfss.f_frsize : vfss.f_bsize; - - if (statp->bs_size > ((vfss.f_bfree * bsize) - minimumfree)) { - fsrprintf(_("insufficient freespace for: %s: " - "size=%lld: ignoring\n"), fname, statp->bs_size); - return 1; - } - if ((ioctl(fd, XFS_IOC_FSGETXATTR, &fsx)) < 0) { fsrprintf(_("failed to get inode attrs: %s\n"), fname); return(-1); @@ -951,6 +943,20 @@ return -1; } + /* Check if there is room to copy the file */ + if ( statvfs64( (fsname == NULL ? fname : fsname), &vfss) < 0) { + fsrprintf(_("unable to get fs stat on %s: %s\n"), + fname, strerror(errno)); + return (-1); + } + bsize = vfss.f_frsize ? 
vfss.f_frsize : vfss.f_bsize; + + if (statp->bs_size > ((vfss.f_bfree * bsize) - minimumfree)) { + fsrprintf(_("insufficient freespace for: %s: " + "size=%lld: ignoring\n"), fname, statp->bs_size); + return 1; + } + /* * Previously the code forked here, & the child changed it's uid to * that of the file's owner and then called packfile(), to keep @@ -1128,11 +1134,32 @@ if (dflag) fsrprintf(_("Temporary file has %d extents (%d in original)\n"), new_nextents, cur_nextents); if (cur_nextents <= new_nextents) { + struct fsxattr fsx_tmp; + if (vflag) fsrprintf(_("No improvement will be made (skipping): %s\n"), fname); free(fbuf); + + if (xflag) { + /* Get the current inode flags */ + if ((ioctl(fd, XFS_IOC_FSGETXATTR, &fsx_tmp)) < 0) { + fsrprintf(_("failed to get inode attrs: %s\n"), fname); + return -1; + } + + /* Add no-defrag */ + fsx_tmp.fsx_xflags |= XFS_XFLAG_NODEFRAG; + + /* Mark it! */ + if (ioctl(fd, XFS_IOC_FSSETXATTR, &fsx_tmp) < 0) { + fsrprintf(_("could not set inode attrs on: %s\n"), fname); + close(tfd); + return -1; + } + } + close(tfd); - return 1; /* no change/no error */ + return 0; /* We're done with this file, forever. */ } /* Loop through block map copying the file. 
*/ --------------080102000005080906000206-- From owner-xfs@oss.sgi.com Fri Jun 29 01:58:47 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 01:58:53 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from one.firstfloor.org (one.firstfloor.org [213.235.205.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5T8wjtL008178 for ; Fri, 29 Jun 2007 01:58:47 -0700 Received: by one.firstfloor.org (Postfix, from userid 503) id 68FAB18902A4; Fri, 29 Jun 2007 10:58:46 +0200 (CEST) Date: Fri, 29 Jun 2007 10:58:46 +0200 From: Andi Kleen To: Just Marc Cc: Andi Kleen , nscott@aconex.com, xfs@oss.sgi.com Subject: Re: xfs_fsr, performance related tweaks Message-ID: <20070629085846.GD14519@one.firstfloor.org> References: <4683ADF5.9050901@corky.net> <1183075929.15488.148.camel@edge.yarra.acx> <4684A728.1050405@corky.net> <20070629081357.GC14519@one.firstfloor.org> <4684C168.2050605@corky.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4684C168.2050605@corky.net> User-Agent: Mutt/1.4.2.1i X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12038 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: andi@firstfloor.org Precedence: bulk X-list: xfs On Fri, Jun 29, 2007 at 09:23:04AM +0100, Just Marc wrote: > You might have hit the > nail on the head, some of the files it was not able to improve are on > filesystems that are almost full. Don't do that then. It's probably the reason you get the bad fragmentation in the first place. Most file systems perform very poorly when they are nearly full. 
-Andi From owner-xfs@oss.sgi.com Fri Jun 29 02:25:12 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 02:25:16 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.5 required=5.0 tests=AWL,BAYES_50,URIBL_RHS_ABUSE, WEIRD_PORT autolearn=no version=3.2.0-pre1-r499012 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id l5T9P9tL015530 for ; Fri, 29 Jun 2007 02:25:11 -0700 Received: from boing.melbourne.sgi.com (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id TAA08889; Fri, 29 Jun 2007 19:25:05 +1000 Message-ID: <4684CFF1.70700@sgi.com> Date: Fri, 29 Jun 2007 19:25:05 +1000 From: Timothy Shimmin User-Agent: Thunderbird 2.0.0.4 (Macintosh/20070604) MIME-Version: 1.0 To: torvalds@linux-foundation.org CC: Andrew Morton , xfs-oss Subject: [GIT PULL] XFS lockdep fix for 2.6.22 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12039 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Hi Linus, We have a patch to help for lockdep annotations which were busted in XFS. We still have more work to go - on to next lockdep warning. 
Please pull from the for-linus branch: git pull git://oss.sgi.com:8090/xfs/xfs-2.6.git for-linus This will update the following files: fs/xfs/xfs_inode.h | 15 +++++++++------ fs/xfs/xfs_vnodeops.c | 4 ++-- 2 files changed, 11 insertions(+), 8 deletions(-) through these commits: commit 09ff7bd79164131cc041777e291a951a7adb8ab4 Author: David Chinner Date: Fri Jun 29 17:26:09 2007 +1000 [XFS] Fix lockdep annotations for xfs_lock_inodes SGI-PV: 967035 SGI-Modid: xfs-linux-melb:xfs-kern:29026a Signed-off-by: David Chinner Signed-off-by: Tim Shimmin --Tim From owner-xfs@oss.sgi.com Fri Jun 29 04:02:52 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 04:02:57 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.2 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.suse.cz (styx.suse.cz [82.119.242.94]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5TB2otL005803 for ; Fri, 29 Jun 2007 04:02:51 -0700 Received: from discovery.suse.cz (discovery.suse.cz [10.20.1.116]) by mail.suse.cz (Postfix) with ESMTP id E938A6280BD; Fri, 29 Jun 2007 13:02:50 +0200 (CEST) Received: by discovery.suse.cz (Postfix, from userid 10020) id B22AC82E8B; Fri, 29 Jun 2007 13:02:50 +0200 (CEST) Date: Fri, 29 Jun 2007 13:02:50 +0200 From: Michal Marek To: Andrew Morton Cc: xfs@oss.sgi.com, linux-kernel@vger.kernel.org Subject: Re: [patch 3/3] Fix XFS_IOC_FSBULKSTAT{,_SINGLE} and XFS_IOC_FSINUMBERS in compat mode Message-ID: <20070629110250.GA8011@discovery.suse.cz> References: <20070619132549.266927601@suse.cz> <20070619132726.893544847@suse.cz> <20070628111530.829e7a06.akpm@linux-foundation.org> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <20070628111530.829e7a06.akpm@linux-foundation.org> User-Agent: Mutt/1.5.16 (2007-06-09) X-Virus-Scanned: ClamAV version 
0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12040 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mmarek@suse.cz Precedence: bulk X-list: xfs On Thu, Jun 28, 2007 at 11:15:30AM -0700, Andrew Morton wrote: > CC fs/xfs/linux-2.6/xfs_ioctl32.o > fs/xfs/linux-2.6/xfs_ioctl32.c: In function 'xfs_ioc_bulkstat_compat': > fs/xfs/linux-2.6/xfs_ioctl32.c:334: error: 'xfs_inumbers_fmt_compat' undeclared (first use in this function) > fs/xfs/linux-2.6/xfs_ioctl32.c:334: error: (Each undeclared identifier is reported only once > fs/xfs/linux-2.6/xfs_ioctl32.c:334: error: for each function it appears in.) Sorry, the #define was wrong. This one should work better (at least build-tested on ppc64 this time): * 32bit struct xfs_fsop_bulkreq has different size and layout of members, no matter the alignment. Move the code out of the #else branch (why was it there in the first place?). Define _32 variants of the ioctl constants. * 32bit struct xfs_bstat is different because of time_t and on i386 because of different padding. Create a new formatter xfs_bulkstat_one_compat() that takes care of this. To do this, we need to make xfs_bulkstat_one_iget() and xfs_bulkstat_one_dinode() non-static. * i386 struct xfs_inogrp has different padding.
Introduce a similar "formatter" mechanism for xfs_inumbers: the native formatter is just a copy_to_user, the compat formatter takes care of the different layout Signed-off-by: Michal Marek --- fs/xfs/linux-2.6/xfs_ioctl.c | 2 fs/xfs/linux-2.6/xfs_ioctl32.c | 259 +++++++++++++++++++++++++++++++++++++---- fs/xfs/xfs_itable.c | 30 +++- fs/xfs/xfs_itable.h | 31 ++++ 4 files changed, 290 insertions(+), 32 deletions(-) --- linux-2.6.orig/fs/xfs/linux-2.6/xfs_ioctl32.c +++ linux-2.6/fs/xfs/linux-2.6/xfs_ioctl32.c @@ -28,12 +28,27 @@ #include "xfs_vfs.h" #include "xfs_vnode.h" #include "xfs_dfrag.h" +#include "xfs_sb.h" +#include "xfs_log.h" +#include "xfs_trans.h" +#include "xfs_dmapi.h" +#include "xfs_mount.h" +#include "xfs_inum.h" +#include "xfs_bmap_btree.h" +#include "xfs_dir2.h" +#include "xfs_dir2_sf.h" +#include "xfs_attr_sf.h" +#include "xfs_dinode.h" +#include "xfs_itable.h" +#include "xfs_error.h" +#include "xfs_inode.h" #define _NATIVE_IOC(cmd, type) \ _IOC(_IOC_DIR(cmd), _IOC_TYPE(cmd), _IOC_NR(cmd), sizeof(type)) #if defined(CONFIG_IA64) || defined(CONFIG_X86_64) #define BROKEN_X86_ALIGNMENT +#define _PACKED __attribute__((packed)) /* on ia32 l_start is on a 32-bit boundary */ typedef struct xfs_flock64_32 { __s16 l_type; @@ -111,35 +126,234 @@ STATIC unsigned long xfs_ioctl32_geom_v1 return (unsigned long)p; } +typedef struct compat_xfs_inogrp { + __u64 xi_startino; /* starting inode number */ + __s32 xi_alloccount; /* # bits set in allocmask */ + __u64 xi_allocmask; /* mask of allocated inodes */ +} __attribute__((packed)) compat_xfs_inogrp_t; + +STATIC int xfs_inumbers_fmt_compat( + void __user *ubuffer, + const xfs_inogrp_t *buffer, + long count, + long *written) +{ + compat_xfs_inogrp_t *p32 = ubuffer; + long i; + + for (i = 0; i < count; i++) { + if (put_user(buffer[i].xi_startino, &p32[i].xi_startino) || + put_user(buffer[i].xi_alloccount, &p32[i].xi_alloccount) || + put_user(buffer[i].xi_allocmask, &p32[i].xi_allocmask)) + return -EFAULT; + } + 
*written = count * sizeof(*p32); + return 0; +} + #else -typedef struct xfs_fsop_bulkreq32 { +#define xfs_inumbers_fmt_compat xfs_inumbers_fmt +#define _PACKED + +#endif + +/* XFS_IOC_FSBULKSTAT and friends */ + +typedef struct compat_xfs_bstime { + __s32 tv_sec; /* seconds */ + __s32 tv_nsec; /* and nanoseconds */ +} compat_xfs_bstime_t; + +static int xfs_bstime_store_compat( + compat_xfs_bstime_t __user *p32, + xfs_bstime_t *p) +{ + __s32 sec32; + + sec32 = p->tv_sec; + if (put_user(sec32, &p32->tv_sec) || + put_user(p->tv_nsec, &p32->tv_nsec)) + return -EFAULT; + return 0; +} + +typedef struct compat_xfs_bstat { + __u64 bs_ino; /* inode number */ + __u16 bs_mode; /* type and mode */ + __u16 bs_nlink; /* number of links */ + __u32 bs_uid; /* user id */ + __u32 bs_gid; /* group id */ + __u32 bs_rdev; /* device value */ + __s32 bs_blksize; /* block size */ + __s64 bs_size; /* file size */ + compat_xfs_bstime_t bs_atime; /* access time */ + compat_xfs_bstime_t bs_mtime; /* modify time */ + compat_xfs_bstime_t bs_ctime; /* inode change time */ + int64_t bs_blocks; /* number of blocks */ + __u32 bs_xflags; /* extended flags */ + __s32 bs_extsize; /* extent size */ + __s32 bs_extents; /* number of extents */ + __u32 bs_gen; /* generation count */ + __u16 bs_projid; /* project id */ + unsigned char bs_pad[14]; /* pad space, unused */ + __u32 bs_dmevmask; /* DMIG event mask */ + __u16 bs_dmstate; /* DMIG state info */ + __u16 bs_aextents; /* attribute number of extents */ +} _PACKED compat_xfs_bstat_t; + +static int xfs_bulkstat_one_compat( + xfs_mount_t *mp, /* mount point for filesystem */ + xfs_ino_t ino, /* inode number to get data for */ + void __user *buffer, /* buffer to place output in */ + int ubsize, /* size of buffer */ + void *private_data, /* my private data */ + xfs_daddr_t bno, /* starting bno of inode cluster */ + int *ubused, /* bytes used by me */ + void *dibuff, /* on-disk inode buffer */ + int *stat) /* BULKSTAT_RV_... 
*/ +{ + xfs_bstat_t *buf; /* return buffer */ + int error = 0; /* error value */ + xfs_dinode_t *dip; /* dinode inode pointer */ + compat_xfs_bstat_t __user *p32 = buffer; + + dip = (xfs_dinode_t *)dibuff; + *stat = BULKSTAT_RV_NOTHING; + + if (!buffer || xfs_internal_inum(mp, ino)) + return XFS_ERROR(EINVAL); + if (ubsize < sizeof(*buf)) + return XFS_ERROR(ENOMEM); + + buf = kmem_alloc(sizeof(*buf), KM_SLEEP); + + if (dip == NULL) { + /* We're not being passed a pointer to a dinode. This happens + * if BULKSTAT_FG_IGET is selected. Do the iget. + */ + error = xfs_bulkstat_one_iget(mp, ino, bno, buf, stat); + if (error) + goto out_free; + } else { + xfs_bulkstat_one_dinode(mp, ino, dip, buf); + } + + if (put_user(buf->bs_ino, &p32->bs_ino) || + put_user(buf->bs_mode, &p32->bs_mode) || + put_user(buf->bs_nlink, &p32->bs_nlink) || + put_user(buf->bs_uid, &p32->bs_uid) || + put_user(buf->bs_gid, &p32->bs_gid) || + put_user(buf->bs_rdev, &p32->bs_rdev) || + put_user(buf->bs_blksize, &p32->bs_blksize) || + put_user(buf->bs_size, &p32->bs_size) || + xfs_bstime_store_compat(&p32->bs_atime, &buf->bs_atime) || + xfs_bstime_store_compat(&p32->bs_mtime, &buf->bs_mtime) || + xfs_bstime_store_compat(&p32->bs_ctime, &buf->bs_ctime) || + put_user(buf->bs_blocks, &p32->bs_blocks) || + put_user(buf->bs_xflags, &p32->bs_xflags) || + put_user(buf->bs_extsize, &p32->bs_extsize) || + put_user(buf->bs_extents, &p32->bs_extents) || + put_user(buf->bs_gen, &p32->bs_gen) || + put_user(buf->bs_projid, &p32->bs_projid) || + put_user(buf->bs_dmevmask, &p32->bs_dmevmask) || + put_user(buf->bs_dmstate, &p32->bs_dmstate) || + put_user(buf->bs_aextents, &p32->bs_aextents)) { + error = EFAULT; + goto out_free; + } + + *stat = BULKSTAT_RV_DIDONE; + if (ubused) + *ubused = sizeof(compat_xfs_bstat_t); + + out_free: + kmem_free(buf, sizeof(*buf)); + return error; +} + + + +typedef struct compat_xfs_fsop_bulkreq { compat_uptr_t lastip; /* last inode # pointer */ __s32 icount; /* count of entries in 
buffer */ compat_uptr_t ubuffer; /* user buffer for inode desc. */ - __s32 ocount; /* output count pointer */ -} xfs_fsop_bulkreq32_t; + compat_uptr_t ocount; /* output count pointer */ +} compat_xfs_fsop_bulkreq_t; -STATIC unsigned long -xfs_ioctl32_bulkstat( - unsigned long arg) +#define XFS_IOC_FSBULKSTAT_32 \ + _IOWR('X', 101, struct compat_xfs_fsop_bulkreq) +#define XFS_IOC_FSBULKSTAT_SINGLE_32 \ + _IOWR('X', 102, struct compat_xfs_fsop_bulkreq) +#define XFS_IOC_FSINUMBERS_32 \ + _IOWR('X', 103, struct compat_xfs_fsop_bulkreq) + +/* copied from xfs_ioctl.c */ +STATIC int +xfs_ioc_bulkstat_compat( + xfs_mount_t *mp, + unsigned int cmd, + void __user *arg) { - xfs_fsop_bulkreq32_t __user *p32 = (void __user *)arg; - xfs_fsop_bulkreq_t __user *p = compat_alloc_user_space(sizeof(*p)); + compat_xfs_fsop_bulkreq_t __user *p32 = (void __user *)arg; u32 addr; + xfs_fsop_bulkreq_t bulkreq; + int count; /* # of records returned */ + xfs_ino_t inlast; /* last inode number */ + int done; + int error; + + /* done = 1 if there are more stats to get and if bulkstat */ + /* should be called again (unused here, but used in dmapi) */ + + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + + if (XFS_FORCED_SHUTDOWN(mp)) + return -XFS_ERROR(EIO); - if (get_user(addr, &p32->lastip) || - put_user(compat_ptr(addr), &p->lastip) || - copy_in_user(&p->icount, &p32->icount, sizeof(s32)) || - get_user(addr, &p32->ubuffer) || - put_user(compat_ptr(addr), &p->ubuffer) || - get_user(addr, &p32->ocount) || - put_user(compat_ptr(addr), &p->ocount)) + if (get_user(addr, &p32->lastip)) + return -EFAULT; + bulkreq.lastip = compat_ptr(addr); + if (get_user(bulkreq.icount, &p32->icount) || + get_user(addr, &p32->ubuffer)) + return -EFAULT; + bulkreq.ubuffer = compat_ptr(addr); + if (get_user(addr, &p32->ocount)) return -EFAULT; + bulkreq.ocount = compat_ptr(addr); - return (unsigned long)p; + if (copy_from_user(&inlast, bulkreq.lastip, sizeof(__s64))) + return -XFS_ERROR(EFAULT); + + if ((count = 
bulkreq.icount) <= 0) + return -XFS_ERROR(EINVAL); + + if (cmd == XFS_IOC_FSINUMBERS) + error = xfs_inumbers(mp, &inlast, &count, + bulkreq.ubuffer, xfs_inumbers_fmt_compat); + else + error = xfs_bulkstat(mp, &inlast, &count, + xfs_bulkstat_one_compat, NULL, + sizeof(compat_xfs_bstat_t), bulkreq.ubuffer, + BULKSTAT_FG_QUICK, &done); + + if (error) + return -error; + + if (bulkreq.ocount != NULL) { + if (copy_to_user(bulkreq.lastip, &inlast, + sizeof(xfs_ino_t))) + return -XFS_ERROR(EFAULT); + + if (copy_to_user(bulkreq.ocount, &count, sizeof(count))) + return -XFS_ERROR(EFAULT); + } + + return 0; } -#endif + + typedef struct compat_xfs_fsop_handlereq { __u32 fd; /* fd for FD_TO_HANDLE */ @@ -261,12 +475,13 @@ xfs_compat_ioctl( case XFS_IOC_SWAPEXT: break; - case XFS_IOC_FSBULKSTAT_SINGLE: - case XFS_IOC_FSBULKSTAT: - case XFS_IOC_FSINUMBERS: - arg = xfs_ioctl32_bulkstat(arg); - break; #endif + case XFS_IOC_FSBULKSTAT_32: + case XFS_IOC_FSBULKSTAT_SINGLE_32: + case XFS_IOC_FSINUMBERS_32: + cmd = _NATIVE_IOC(cmd, struct xfs_fsop_bulkreq); + return xfs_ioc_bulkstat_compat(XFS_BHVTOI(VNHEAD(vp))->i_mount, + cmd, (void*)arg); case XFS_IOC_FD_TO_HANDLE_32: case XFS_IOC_PATH_TO_HANDLE_32: case XFS_IOC_PATH_TO_FSHANDLE_32: --- linux-2.6.orig/fs/xfs/xfs_itable.h +++ linux-2.6/fs/xfs/xfs_itable.h @@ -70,6 +70,21 @@ xfs_bulkstat_single( int *done); int +xfs_bulkstat_one_iget( + xfs_mount_t *mp, /* mount point for filesystem */ + xfs_ino_t ino, /* inode number to get data for */ + xfs_daddr_t bno, /* starting bno of inode cluster */ + xfs_bstat_t *buf, /* return buffer */ + int *stat); /* BULKSTAT_RV_... 
*/ + +int +xfs_bulkstat_one_dinode( + xfs_mount_t *mp, /* mount point for filesystem */ + xfs_ino_t ino, /* inode number to get data for */ + xfs_dinode_t *dip, /* dinode inode pointer */ + xfs_bstat_t *buf); /* return buffer */ + +int xfs_bulkstat_one( xfs_mount_t *mp, xfs_ino_t ino, @@ -86,11 +101,25 @@ xfs_internal_inum( xfs_mount_t *mp, xfs_ino_t ino); +typedef int (*inumbers_fmt_pf)( + void __user *ubuffer, /* buffer to write to */ + const xfs_inogrp_t *buffer, /* buffer to read from */ + long count, /* # of elements to read */ + long *written); /* # of bytes written */ + +int +xfs_inumbers_fmt( + void __user *ubuffer, /* buffer to write to */ + const xfs_inogrp_t *buffer, /* buffer to read from */ + long count, /* # of elements to read */ + long *written); /* # of bytes written */ + int /* error status */ xfs_inumbers( xfs_mount_t *mp, /* mount point for filesystem */ xfs_ino_t *last, /* last inode returned */ int *count, /* size of buffer/count returned */ - xfs_inogrp_t __user *buffer);/* buffer with inode info */ + void __user *buffer, /* buffer with inode info */ + inumbers_fmt_pf formatter); #endif /* __XFS_ITABLE_H__ */ --- linux-2.6.orig/fs/xfs/xfs_itable.c +++ linux-2.6/fs/xfs/xfs_itable.c @@ -49,7 +49,7 @@ xfs_internal_inum( (ino == mp->m_sb.sb_uquotino || ino == mp->m_sb.sb_gquotino))); } -STATIC int +int xfs_bulkstat_one_iget( xfs_mount_t *mp, /* mount point for filesystem */ xfs_ino_t ino, /* inode number to get data for */ @@ -129,7 +129,7 @@ xfs_bulkstat_one_iget( return error; } -STATIC int +int xfs_bulkstat_one_dinode( xfs_mount_t *mp, /* mount point for filesystem */ xfs_ino_t ino, /* inode number to get data for */ @@ -748,6 +748,19 @@ xfs_bulkstat_single( return 0; } +int +xfs_inumbers_fmt( + void __user *ubuffer, /* buffer to write to */ + const xfs_inogrp_t *buffer, /* buffer to read from */ + long count, /* # of elements to read */ + long *written) /* # of bytes written */ +{ + if (copy_to_user(ubuffer, buffer, count * sizeof(*buffer))) 
+ return -EFAULT; + *written = count * sizeof(*buffer); + return 0; +} + /* * Return inode number table for the filesystem. */ @@ -756,7 +769,8 @@ xfs_inumbers( xfs_mount_t *mp, /* mount point for filesystem */ xfs_ino_t *lastino, /* last inode returned */ int *count, /* size of buffer/count returned */ - xfs_inogrp_t __user *ubuffer)/* buffer with inode descriptions */ + void __user *ubuffer,/* buffer with inode descriptions */ + inumbers_fmt_pf formatter) { xfs_buf_t *agbp; xfs_agino_t agino; @@ -835,12 +849,12 @@ xfs_inumbers( bufidx++; left--; if (bufidx == bcount) { - if (copy_to_user(ubuffer, buffer, - bufidx * sizeof(*buffer))) { + long written; + if (formatter(ubuffer, buffer, bufidx, &written)) { error = XFS_ERROR(EFAULT); break; } - ubuffer += bufidx; + ubuffer += written; *count += bufidx; bufidx = 0; } @@ -862,8 +876,8 @@ xfs_inumbers( } if (!error) { if (bufidx) { - if (copy_to_user(ubuffer, buffer, - bufidx * sizeof(*buffer))) + long written; + if (formatter(ubuffer, buffer, bufidx, &written)) error = XFS_ERROR(EFAULT); else *count += bufidx; --- linux-2.6.orig/fs/xfs/linux-2.6/xfs_ioctl.c +++ linux-2.6/fs/xfs/linux-2.6/xfs_ioctl.c @@ -1019,7 +1019,7 @@ xfs_ioc_bulkstat( if (cmd == XFS_IOC_FSINUMBERS) error = xfs_inumbers(mp, &inlast, &count, - bulkreq.ubuffer); + bulkreq.ubuffer, xfs_inumbers_fmt); else if (cmd == XFS_IOC_FSBULKSTAT_SINGLE) error = xfs_bulkstat_single(mp, &inlast, bulkreq.ubuffer, &done); From owner-xfs@oss.sgi.com Fri Jun 29 05:01:38 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 05:01:41 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.1 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_27 autolearn=no version=3.2.0-pre1-r499012 Received: from isls-mx10.wmin.ac.uk (isls-mx10.wmin.ac.uk [161.74.14.112]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5TC1atL022981 for ; Fri, 29 Jun 2007 05:01:37 -0700 Received: from 
groucho.wmin.ac.uk ([161.74.160.74]) by isls-mx10.wmin.ac.uk with esmtp (Exim 4.60) (envelope-from ) id 1I4FAO-0006Uj-Ro; Fri, 29 Jun 2007 13:01:36 +0100 Received: from project1.cpc.wmin.ac.uk (project1.cpc.wmin.ac.uk [161.74.69.87]) by groucho.wmin.ac.uk (Postfix) with ESMTP id B6A01326AA2; Fri, 29 Jun 2007 13:01:36 +0100 (BST) Date: Fri, 29 Jun 2007 13:01:36 +0100 To: "David Chinner" Subject: Re: After reboot fs with barrier faster deletes then fs with nobarrier From: "Szabolcs Illes" Organization: UoW Cc: xfs@oss.sgi.com Content-Type: text/plain; format=flowed; delsp=yes; charset=us-ascii MIME-Version: 1.0 References: <20070629001648.GD31489@sgi.com> Message-ID: In-Reply-To: <20070629001648.GD31489@sgi.com> User-Agent: Opera Mail/9.20 (Linux) X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from Quoted-Printable to 8bit by oss.sgi.com id l5TC1ctL023009 X-archive-position: 12041 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: S.Illes@westminster.ac.uk Precedence: bulk X-list: xfs On Fri, 29 Jun 2007 01:16:48 +0100, David Chinner wrote: > On Wed, Jun 27, 2007 at 06:58:29PM +0100, Szabolcs Illes wrote: >> Hi, >> >> I am using XFS on my laptop, I have realized that nobarrier mount >> options >> sometimes slows down deleting large number of small files, like the >> kernel >> source tree. I made four tests, deleting the kernel source right after >> unpack and after reboot, with both barrier and nobarrier options: >> >> mount opts: rw,noatime,nodiratime,logbsize=256k,logbufs=2 > > FWIW, I bet these mount options have something to do with the > issue. 
>
> Here's the disk I'm testing against - 36GB 10krpm u160 SCSI:
>
> <5>[ 25.427907] sd 0:0:2:0: [sdb] 71687372 512-byte hardware sectors (36704 MB)
> <5>[ 25.440393] sd 0:0:2:0: [sdb] Write Protect is off
> <7>[ 25.441276] sd 0:0:2:0: [sdb] Mode Sense: ab 00 10 08
> <5>[ 25.442662] sd 0:0:2:0: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
> <6>[ 25.446992] sdb: sdb1 sdb2 sdb3 sdb4 sdb5 sdb6 sdb7 sdb8 sdb9
>
> Note - read cache is enabled, write cache is disabled, so barriers
> cause a FUA only. i.e. the only bubble in the I/O pipeline that
> barriers cause are in the elevator and the scsi command queue.
>
> The disk is capable of about 30MB/s on the inner edge.
>
> Mount options are default (so logbsize=32k,logbufs=8), mkfs
> options are default, 4GB partition on inner (slow) edge of disk.
> Kernel is 2.6.22-rc4 with all debug and tracing options turned on
> on ia64.
>
> For this config, I see:
>
>              barrier  nobarrier
> hot cache      22s      14s
> cold cache     21s      20s
>
> In this case, barriers have little impact on cold cache behaviour,
> and the difference on the hot cache behaviour will probably be
> because of FUA being used on barrier writes (i.e. no combining
> of sequential log I/Os in the elevator).
>
> The difference in I/O behaviour b/t hot cache and cold cache during
> the rm -rf is that there are zero read I/Os on a hot cache and
> 50-100 read I/Os per second on a cold cache which is easily
> within the capability of this drive.
>
> After turning on the write cache with:
>
> # sdparm -s WCE -S /dev/sdb
> # reboot
>
> [ 25.717942] sd 0:0:2:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
>
> I get:
>                                        barrier  nobarrier
> logbsize=32k,logbufs=8:  hot cache       24s      11s
> logbsize=32k,logbufs=8:  cold cache      33s      16s
> logbsize=256k,logbufs=8: hot cache       10s      10s
> logbsize=256k,logbufs=8: cold cache      16s      16s
> logbsize=256k,logbufs=2: hot cache       11s       9s
> logbsize=256k,logbufs=2: cold cache      17s      13s
>
> Out of the box, barriers are 50% slower with WCE=1 than with WCE=0
> on the cold cache test, but are almost as fast with larger
> log buffer size (i.e. less barrier writes being issued).
>
> Worth noting is that at 10-11s runtime, the disk is bandwidth
> bound (i.e. we're doing 30MB/s), so that's the fastest time
> rm -rf will do on this filesystem.
>
> So, clearly we have differing performance depending on
> mount options and at best barriers give equal performance.
>
> I just ran the same tests on an x86_64 box with 7.2krpm 500GB SATA
> disks with WCE (2.6.18 kernel) using a 30GB partition on the outer
> edge:
>
>                                        barrier  nobarrier
> logbsize=32k,logbufs=8:  hot cache       29s      29s
> logbsize=32k,logbufs=8:  cold cache      33s      30s
> logbsize=256k,logbufs=8: hot cache        8s       8s
> logbsize=256k,logbufs=8: cold cache      11s      11s
> logbsize=256k,logbufs=2: hot cache        8s       8s
> logbsize=256k,logbufs=2: cold cache      11s      11s
>
> Barriers make little to zero difference here.
>
>> Can anyone explain this?
>
> Right now I'm unable to reproduce your results even on 2.6.18 so I
> suspect a drive level issue here.
>
> Can I suggest that you try the same tests with write caching turned
> off on the drive(s)? (hdparm -W 0, IIRC).
On my laptop I could not set -W 0:

sunset:~ # hdparm -W0 /dev/hda

/dev/hda:
 setting drive write-caching to 0 (off)
 HDIO_SET_WCACHE(wcache) failed: Success

On my desktop pc:

WCE=1
                                        barrier           nobarrier
logbsize=256k,logbufs=4: hot cache      6.3s/6.3s/6.5s    10.8s/1.9s/2s
logbsize=256k,logbufs=4: cold cache     11.1s/10.9s/10.7  4.8s/5.8s/7.3s
logbsize=256k,logbufs=4: after reboot   11.9s/10.3s       52.2s/47.2s

WCE=0
logbsize=256k,logbufs=4: hot cache      5.7s/5.6s/5.6s    8.3s/5.6s/5.6s
logbsize=256k,logbufs=4: cold cache     9.5s/9/9s/9.9s    9.5s/9.9s/9.8s
logbsize=256k,logbufs=4: after reboot   9.9s              48.0s

For cold cache I used: echo 3 > /proc/sys/vm/drop_caches

It looks like this machine is only affected after reboot; maybe the hdd
has more cache than the hdd in my 3-year-old laptop. On my laptop it was
enough to clear the kernel cache.

How did you do your "cold" tests? reboot or drop_caches?

Cheers,
Szabolcs

>
> Cheers,
>
> Dave.

From owner-xfs@oss.sgi.com Fri Jun 29 06:10:57 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 06:11:05 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.0 required=5.0 tests=BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from ogre.sisk.pl (ogre.sisk.pl [217.79.144.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5TDAttL008003 for ; Fri, 29 Jun 2007 06:10:56 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by ogre.sisk.pl (Postfix) with ESMTP id D5DBE4A3DC; Fri, 29 Jun 2007 14:48:42 +0200 (CEST) Received: from ogre.sisk.pl ([127.0.0.1]) by localhost (ogre.sisk.pl [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 11738-08; Fri, 29 Jun 2007 14:48:42 +0200 (CEST) Received: from [192.168.144.102] (iftwlan0.fuw.edu.pl [193.0.83.32]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by ogre.sisk.pl (Postfix) with ESMTP id 52BF144FFA; Fri, 29 Jun 2007 14:48:42 +0200 (CEST) From:
"Rafael J. Wysocki" To: David Greaves Subject: Re: [linux-pm] Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume Date: Fri, 29 Jun 2007 15:18:16 +0200 User-Agent: KMail/1.9.5 Cc: David Chinner , Pavel Machek , linux-pm , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, LinuxRaid , LVM general discussion and development , David Robinson , Oleg Nesterov References: <46744065.6060605@dgreaves.com> <20070629074322.GT31489@sgi.com> <4684BAA4.4010905@dgreaves.com> In-Reply-To: <4684BAA4.4010905@dgreaves.com> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200706291518.18072.rjw@sisk.pl> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Scanned: amavisd-new at ogre.sisk.pl using MkS_Vir for Linux X-Virus-Status: Clean X-archive-position: 12042 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: rjw@sisk.pl Precedence: bulk X-list: xfs On Friday, 29 June 2007 09:54, David Greaves wrote: > David Chinner wrote: > > On Fri, Jun 29, 2007 at 08:40:00AM +0100, David Greaves wrote: > >> What happens if a filesystem is frozen and I hibernate? > >> Will it be thawed when I resume? > > > > If you froze it yourself, then you'll have to thaw it yourself. > > So hibernate will not attempt to re-freeze a frozen fs and, during resume, it > will only thaw filesystems that were frozen by the suspend? Right now it doesn't freeze (or thaw) any filesystems. It just sync()s them before creating the hibernation image. However, the fact that you've seen corruption with the XFS filesystems frozen before the hibernation indicates that the problem occurs on a lower level. Greetings, Rafael -- "Premature optimization is the root of all evil." 
- Donald Knuth From owner-xfs@oss.sgi.com Fri Jun 29 06:30:26 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 06:30:31 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.1 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.ukfsn.org (s2.ukfsn.org [217.158.120.143]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5TDUOtL014657 for ; Fri, 29 Jun 2007 06:30:26 -0700 Received: from localhost (mailman.ukfsn.org [80.168.53.75]) by mail.ukfsn.org (Postfix) with ESMTP id 5FCE3E6C00; Fri, 29 Jun 2007 14:30:22 +0100 (BST) Received: from mail.ukfsn.org ([80.168.53.20]) by localhost (smtp-filter.ukfsn.org [80.168.53.75]) (amavisd-new, port 10024) with ESMTP id fk-rPDof3JJl; Fri, 29 Jun 2007 14:29:18 +0100 (BST) Received: from elm.dgreaves.com (i-83-67-36-194.freedom2surf.net [83.67.36.194]) by mail.ukfsn.org (Postfix) with ESMTP id 33742E7504; Fri, 29 Jun 2007 14:30:20 +0100 (BST) Received: from ash.dgreaves.com ([10.0.0.90]) by elm.dgreaves.com with esmtp (Exim 4.62) (envelope-from ) id 1I4GYG-0007DP-3p; Fri, 29 Jun 2007 14:30:20 +0100 Message-ID: <4685096B.9000400@dgreaves.com> Date: Fri, 29 Jun 2007 14:30:19 +0100 From: David Greaves User-Agent: Mozilla-Thunderbird 2.0.0.4 (X11/20070618) MIME-Version: 1.0 To: "Rafael J. 
Wysocki" Cc: David Chinner , Pavel Machek , linux-pm , "'linux-kernel@vger.kernel.org'" , xfs@oss.sgi.com, LinuxRaid , LVM general discussion and development , David Robinson , Oleg Nesterov Subject: Re: [linux-pm] Re: [linux-lvm] 2.6.22-rc4 XFS fails after hibernate/resume References: <46744065.6060605@dgreaves.com> <20070629074322.GT31489@sgi.com> <4684BAA4.4010905@dgreaves.com> <200706291518.18072.rjw@sisk.pl> In-Reply-To: <200706291518.18072.rjw@sisk.pl> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12043 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: david@dgreaves.com Precedence: bulk X-list: xfs Rafael J. Wysocki wrote: > On Friday, 29 June 2007 09:54, David Greaves wrote: >> David Chinner wrote: >>> On Fri, Jun 29, 2007 at 08:40:00AM +0100, David Greaves wrote: >>>> What happens if a filesystem is frozen and I hibernate? >>>> Will it be thawed when I resume? >>> If you froze it yourself, then you'll have to thaw it yourself. >> So hibernate will not attempt to re-freeze a frozen fs and, during resume, it >> will only thaw filesystems that were frozen by the suspend? > > Right now it doesn't freeze (or thaw) any filesystems. It just sync()s them > before creating the hibernation image. Thanks. Yes I realise that :) I wasn't clear, I should have said: So hibernate should not attempt to re-freeze a frozen fs and, during resume, it should only thaw filesystems that were frozen by the suspend. > However, the fact that you've seen corruption with the XFS filesystems frozen > before the hibernation indicates that the problem occurs on a lower level. And that was why I chimed in - I don't think freezing fixes the problem (though it may make sense for other reasons). 
David From owner-xfs@oss.sgi.com Fri Jun 29 07:30:43 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 07:30:48 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.5 required=5.0 tests=AWL,BAYES_60,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from thunker.thunk.org (thunk.org [69.25.196.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5TEUgtL004487 for ; Fri, 29 Jun 2007 07:30:43 -0700 Received: from root (helo=candygram.thunk.org) by thunker.thunk.org with local-esmtps (tls_cipher TLS-1.0:RSA_AES_256_CBC_SHA:32) (Exim 4.50 #1 (Debian)) id 1I4H57-0005iA-Ej; Fri, 29 Jun 2007 10:04:19 -0400 Received: from tytso by candygram.thunk.org with local (Exim 4.63) (envelope-from ) id 1I4GxI-0001I7-Lh; Fri, 29 Jun 2007 09:56:12 -0400 Date: Fri, 29 Jun 2007 09:56:12 -0400 From: Theodore Tso To: Andrew Morton Cc: "Amit K. Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 0/6][TAKE5] fallocate system call Message-ID: <20070629135612.GH29279@thunk.org> Mail-Followup-To: Theodore Tso , Andrew Morton , "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com References: <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070628025543.9467216f.akpm@linux-foundation.org> <20070628175757.GA1674@amitarora.in.ibm.com> <20070628113342.c9c0f49c.akpm@linux-foundation.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070628113342.c9c0f49c.akpm@linux-foundation.org> User-Agent: Mutt/1.5.13 (2006-08-11) X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: tytso@thunk.org X-SA-Exim-Scanned: No (on thunker.thunk.org); SAEximRunCond expanded to false X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12045 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tytso@mit.edu Precedence: bulk X-list: xfs On Thu, Jun 28, 2007 at 11:33:42AM -0700, Andrew Morton wrote: > > Please let us know what you think of Mingming's suggestion of posting > > all the fallocate patches including the ext4 ones as incremental ones > > against the -mm. > > I think Mingming was asking that Ted move the current quilt tree into git, > presumably because she's working off git. No, mingming and I both work off of the patch queue (which is also stored in git). So what mingming was asking for exactly was just posting the incremental patches and tagging them appropriately to avoid confusion. 
I tried building the patch queue earlier in the week and there were
multiple oops/panics as I ran things through various regression tests,
but that may have been fixed since (the tree was broken over the weekend
and I may have grabbed a broken patch series) or it may have been a
screw up on my part feeding them into our testing grid. I haven't had
time to try again this week, but I'll try to put together a new tested
ext4 patchset over the weekend.

> I'm not sure what to do, really. The core kernel patches need to be in
> Ted's tree for testing but that'll create a mess for me.

I don't think we have a problem here. What we have now is fine, and it
was just people kvetching that Amit reposted patches that were already
in -mm and ext4.

In any case, the plan is to push all of the core bits into Linus' tree
for 2.6.22 once it opens up, which should be Real Soon Now, it looks
like.

- Ted

From owner-xfs@oss.sgi.com Fri Jun 29 07:29:40 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 07:29:45 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.4 required=5.0 tests=AWL,BAYES_80 autolearn=no version=3.2.0-pre1-r499012 Received: from mail.dvmed.net (srv5.dvmed.net [207.36.208.214]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5TETdtL003920 for ; Fri, 29 Jun 2007 07:29:40 -0700 Received: from cpe-065-190-165-210.nc.res.rr.com ([65.190.165.210] helo=[10.10.10.10]) by mail.dvmed.net with esmtpsa (Exim 4.63 #1 (Red Hat Linux)) id 1I4HTO-0006vf-Sa; Fri, 29 Jun 2007 14:29:23 +0000 Message-ID: <46851741.3030707@garzik.org> Date: Fri, 29 Jun 2007 10:29:21 -0400 From: Jeff Garzik User-Agent: Thunderbird 1.5.0.12 (X11/20070530) MIME-Version: 1.0 To: Theodore Tso , Andrew Morton , "Amit K.
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 0/6][TAKE5] fallocate system call References: <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070628025543.9467216f.akpm@linux-foundation.org> <20070628175757.GA1674@amitarora.in.ibm.com> <20070628113342.c9c0f49c.akpm@linux-foundation.org> <20070629135612.GH29279@thunk.org> In-Reply-To: <20070629135612.GH29279@thunk.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12044 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeff@garzik.org Precedence: bulk X-list: xfs Theodore Tso wrote: > I don't think we have a problem here. What we have now is fine, and It's fine for ext4, but not the wider world. This is a common problem created by parallel development when code dependencies exist. > In any case, the plan is to push all of the core bits into Linus tree > for 2.6.22 once it opens up, which should be Real Soon Now, it looks > like. Presumably you mean 2.6.23. 
Jeff From owner-xfs@oss.sgi.com Fri Jun 29 09:21:09 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 09:21:17 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=5.0 tests=AWL,BAYES_99,J_CHICKENPOX_53 autolearn=no version=3.2.0-pre1-r499012 Received: from over.ny.us.ibm.com (over.ny.us.ibm.com [32.97.182.150]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5TGL7tL023671 for ; Fri, 29 Jun 2007 09:21:09 -0700 Received: from e35.co.us.ibm.com (e35.co.us.ibm.com [32.97.110.153]) by pokfb.esmtp.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5TFoJ4x013463 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK) for ; Fri, 29 Jun 2007 11:50:19 -0400 Received: from d03relay02.boulder.ibm.com (d03relay02.boulder.ibm.com [9.17.195.227]) by e35.co.us.ibm.com (8.13.8/8.13.8) with ESMTP id l5TFoFO9011933 for ; Fri, 29 Jun 2007 11:50:15 -0400 Received: from d03av01.boulder.ibm.com (d03av01.boulder.ibm.com [9.17.195.167]) by d03relay02.boulder.ibm.com (8.13.8/8.13.8/NCO v8.3) with ESMTP id l5TFo9q9225876 for ; Fri, 29 Jun 2007 09:50:11 -0600 Received: from d03av01.boulder.ibm.com (loopback [127.0.0.1]) by d03av01.boulder.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id l5TFo8tY021841 for ; Fri, 29 Jun 2007 09:50:08 -0600 Received: from [127.0.0.1] (wecm-9-67-58-105.wecm.ibm.com [9.67.58.105]) by d03av01.boulder.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id l5TFo4ch021464; Fri, 29 Jun 2007 09:50:06 -0600 Message-ID: <46852A2C.9040407@us.ibm.com> Date: Fri, 29 Jun 2007 11:50:04 -0400 From: Mingming Caoc User-Agent: Thunderbird 2.0.0.4 (Windows/20070604) MIME-Version: 1.0 To: Theodore Tso , Andrew Morton , "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 0/6][TAKE5] fallocate system call References: <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070628025543.9467216f.akpm@linux-foundation.org> <20070628175757.GA1674@amitarora.in.ibm.com> <20070628113342.c9c0f49c.akpm@linux-foundation.org> <20070629135612.GH29279@thunk.org> In-Reply-To: <20070629135612.GH29279@thunk.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12046 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cmm@us.ibm.com Precedence: bulk X-list: xfs Theodore Tso wrote: > On Thu, Jun 28, 2007 at 11:33:42AM -0700, Andrew Morton wrote: > >>> Please let us know what you think of Mingming's suggestion of posting >>> all the fallocate patches including the ext4 ones as incremental ones >>> against the -mm. >>> >> I think Mingming was asking that Ted move the current quilt tree into git, >> presumably because she's working off git. >> > > No, mingming and I both work off of the patch queue (which is also > stored in git). So what mingming was asking for exactly was just > posting the incremental patches and tagging them appropriately to > avoid confusion. 
>
> I tried building the patch queue earlier in the week and there were
> multiple oops/panics as I ran things through various regression tests,
> but that may have been fixed since (the tree was broken over the
> weekend and I may have grabbed a broken patch series) or it may have
> been a screw up on my part feeding them into our testing grid. I
> haven't had time to try again this week, but I'll try to put together
> a new tested ext4 patchset over the weekend.
>

I think the ext4 patch queue is in good shape now. Shaggy has tested it
on dbench, fsx, and tiobench; the tests run fine, and the BULL team has
benchmarked the latest ext4 patch queue with iozone and FFSB.

Regards,
Mingming

>> I'm not sure what to do, really. The core kernel patches need to be in
>> Ted's tree for testing but that'll create a mess for me.
>>
>
> I don't think we have a problem here. What we have now is fine, and
> it was just people kvetching that Amit reposted patches that were
> already in -mm and ext4.
>
> In any case, the plan is to push all of the core bits into Linus tree
> for 2.6.22 once it opens up, which should be Real Soon Now, it looks
> like.
> > - Ted > - > To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > From owner-xfs@oss.sgi.com Fri Jun 29 10:43:17 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 10:43:23 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.4 required=5.0 tests=AWL,BAYES_50,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from thunker.thunk.org (thunk.org [69.25.196.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5THhFtL018977 for ; Fri, 29 Jun 2007 10:43:17 -0700 Received: from root (helo=candygram.thunk.org) by thunker.thunk.org with local-esmtps (tls_cipher TLS-1.0:RSA_AES_256_CBC_SHA:32) (Exim 4.50 #1 (Debian)) id 1I4KcQ-0006oo-5S; Fri, 29 Jun 2007 13:50:54 -0400 Received: from tytso by candygram.thunk.org with local (Exim 4.63) (envelope-from ) id 1I4KUa-0001vp-Vq; Fri, 29 Jun 2007 13:42:48 -0400 Date: Fri, 29 Jun 2007 13:42:48 -0400 From: Theodore Tso To: Jeff Garzik Cc: Andrew Morton , "Amit K. Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 0/6][TAKE5] fallocate system call Message-ID: <20070629174248.GD16268@thunk.org> Mail-Followup-To: Theodore Tso , Jeff Garzik , Andrew Morton , "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , Andreas Dilger , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com References: <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070628025543.9467216f.akpm@linux-foundation.org> <20070628175757.GA1674@amitarora.in.ibm.com> <20070628113342.c9c0f49c.akpm@linux-foundation.org> <20070629135612.GH29279@thunk.org> <46851741.3030707@garzik.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <46851741.3030707@garzik.org> User-Agent: Mutt/1.5.13 (2006-08-11) X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: tytso@thunk.org X-SA-Exim-Scanned: No (on thunker.thunk.org); SAEximRunCond expanded to false X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12047 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tytso@mit.edu Precedence: bulk X-list: xfs On Fri, Jun 29, 2007 at 10:29:21AM -0400, Jeff Garzik wrote: > >In any case, the plan is to push all of the core bits into Linus tree > >for 2.6.22 once it opens up, which should be Real Soon Now, it looks > >like. > > Presumably you mean 2.6.23. Yes, sorry. I meant once Linus releases 2.6.22, and we would be aiming to merge before the 2.6.23-rc1 window. 
- Ted From owner-xfs@oss.sgi.com Fri Jun 29 13:48:34 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 13:48:42 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.5 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from smtp2.linux-foundation.org (smtp2.linux-foundation.org [207.189.120.14]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5TKmXtL010777 for ; Fri, 29 Jun 2007 13:48:34 -0700 Received: from imap1.linux-foundation.org (imap1.linux-foundation.org [207.189.120.55]) by smtp2.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with ESMTP id l5TKltXv009516 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 29 Jun 2007 13:47:56 -0700 Received: from akpm.corp.google.com (localhost [127.0.0.1]) by imap1.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with SMTP id l5TKlmY3022909; Fri, 29 Jun 2007 13:47:49 -0700 Date: Fri, 29 Jun 2007 13:47:48 -0700 From: Andrew Morton To: Mariusz Kozlowski , Jason Wessel , Michal Marek Cc: paulus@samba.org, xfs-masters@oss.sgi.com, linux-kernel@vger.kernel.org, linuxppc-dev@ozlabs.org, xfs@oss.sgi.com Subject: Re: 2.6.22-rc6-mm1 Message-Id: <20070629134748.71adba1e.akpm@linux-foundation.org> In-Reply-To: <200706291432.10128.m.kozlowski@tuxland.pl> References: <20070628034321.38c9f12b.akpm@linux-foundation.org> <200706291432.10128.m.kozlowski@tuxland.pl> X-Mailer: Sylpheed version 2.2.7 (GTK+ 2.8.6; i686-pc-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.181 $ X-Scanned-By: MIMEDefang 2.53 on 207.189.120.14 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12048 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com 
X-original-sender: akpm@linux-foundation.org Precedence: bulk X-list: xfs

On Fri, 29 Jun 2007 14:32:09 +0200 Mariusz Kozlowski wrote:

> Hello,
>
> allmodconfig on powerpc (iMac g3) fails due to git-kgdb.patch.
> allmodconfig defaults should be changed?
>
>   CC      arch/powerpc/kernel/kgdb.o
> arch/powerpc/kernel/kgdb.c:485:2: error: #error Both XMON and KGDB selected in .config. Unselect one of them.
> make[1]: *** [arch/powerpc/kernel/kgdb.o] Blad 1
> make: *** [arch/powerpc/kernel] Blad 2

Jason cc'ed

> anyway after unselecting XMON we can see:
>
>   CC [M]  fs/xfs/linux-2.6/xfs_ioctl32.o
> fs/xfs/linux-2.6/xfs_ioctl32.c: In function 'xfs_ioc_bulkstat_compat':
> fs/xfs/linux-2.6/xfs_ioctl32.c:334: error: 'xfs_inumbers_fmt_compat' undeclared (first use in this function)
> fs/xfs/linux-2.6/xfs_ioctl32.c:334: error: (Each undeclared identifier is reported only once
> fs/xfs/linux-2.6/xfs_ioctl32.c:334: error: for each function it appears in.)
> make[2]: *** [fs/xfs/linux-2.6/xfs_ioctl32.o] Blad 1
> make[1]: *** [fs/xfs] Blad 2
>
> This is just allmodconfig - not a .config that's used daily by users but I'm
> used to compiling the kernel using it anyway 8)

Michal cc'ed. I think this is the one which was already reported but I haven't seen a fix yet?
From owner-xfs@oss.sgi.com Fri Jun 29 13:57:25 2007 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Jun 2007 13:57:32 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.4 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from smtp2.linux-foundation.org (smtp2.linux-foundation.org [207.189.120.14]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5TKvOtL014397 for ; Fri, 29 Jun 2007 13:57:25 -0700 Received: from imap1.linux-foundation.org (imap1.linux-foundation.org [207.189.120.55]) by smtp2.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with ESMTP id l5TKvFXZ009849 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 29 Jun 2007 13:57:16 -0700 Received: from akpm.corp.google.com (localhost [127.0.0.1]) by imap1.linux-foundation.org (8.13.5.20060308/8.13.5/Debian-3ubuntu1.1) with SMTP id l5TKv9EF023355; Fri, 29 Jun 2007 13:57:09 -0700 Date: Fri, 29 Jun 2007 13:57:09 -0700 From: Andrew Morton To: Mingming Caoc Cc: Theodore Tso , "Amit K. 
Arora" , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , Andreas Dilger , suparna@in.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 0/6][TAKE5] fallocate system call Message-Id: <20070629135709.6cf0264d.akpm@linux-foundation.org> In-Reply-To: <46852A2C.9040407@us.ibm.com> References: <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070628025543.9467216f.akpm@linux-foundation.org> <20070628175757.GA1674@amitarora.in.ibm.com> <20070628113342.c9c0f49c.akpm@linux-foundation.org> <20070629135612.GH29279@thunk.org> <46852A2C.9040407@us.ibm.com> X-Mailer: Sylpheed version 2.2.7 (GTK+ 2.8.6; i686-pc-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.181 $ X-Scanned-By: MIMEDefang 2.53 on 207.189.120.14 X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12049 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@linux-foundation.org Precedence: bulk X-list: xfs On Fri, 29 Jun 2007 11:50:04 -0400 Mingming Caoc wrote: > I think the ext4 patch queue is in good shape now. Which ext4 patches are you intending to merge into 2.6.23? Please send all those out to lkml for review? 
From owner-xfs@oss.sgi.com Sat Jun 30 02:49:56 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 30 Jun 2007 02:49:58 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5U9nstL025773 for ; Sat, 30 Jun 2007 02:49:55 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1I4ZLJ-0005ti-GJ; Sat, 30 Jun 2007 10:34:13 +0100 Date: Sat, 30 Jun 2007 10:34:13 +0100 From: Christoph Hellwig To: Mark Fasheh Cc: Andrew Morton , Amit Arora , linux-fsdevel@vger.kernel.org, ocfs2-devel@oss.oracle.com, xfs@oss.sgi.com Subject: Re: [-mm PATCH] ocfs2: ->fallocate() support Message-ID: <20070630093413.GE22354@infradead.org> References: <20070621190143.GC17713@ca-server1.us.oracle.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070621190143.GC17713@ca-server1.us.oracle.com> User-Agent: Mutt/1.4.2.3i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12050 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Thu, Jun 21, 2007 at 12:01:43PM -0700, Mark Fasheh wrote: > Plug ocfs2 into the ->fallocate() callback. We only support FA_ALLOCATE for > now - FA_DEALLOCATE will come later. Btw, it seems like ocfs2 implements the xfs preallocation ioctls. What would people think about moving those up to work over ->fallocate so they can be used on all filesystems that support preallocation?
While the ABI is quite ugly it has a huge userbase because it's the only existing preallocation mechanism on Linux. From owner-xfs@oss.sgi.com Sat Jun 30 03:14:58 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 30 Jun 2007 03:15:02 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5UAEutL030831 for ; Sat, 30 Jun 2007 03:14:57 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1I4ZyU-0006Ai-7P; Sat, 30 Jun 2007 11:14:42 +0100 Date: Sat, 30 Jun 2007 11:14:42 +0100 From: Christoph Hellwig To: David Chinner , "Amit K. Arora" , Suparna Bhattacharya , torvalds@osdl.org, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, xfs@oss.sgi.com, cmm@us.ibm.com Subject: Re: [PATCH 1/5] fallocate() implementation in i86, x86_64 and powerpc Message-ID: <20070630101442.GA23568@infradead.org> Mail-Followup-To: Christoph Hellwig , David Chinner , "Amit K. 
Arora" , Suparna Bhattacharya , torvalds@osdl.org, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, xfs@oss.sgi.com, cmm@us.ibm.com References: <20070426180332.GA7209@amitarora.in.ibm.com> <20070509160102.GA30745@amitarora.in.ibm.com> <20070510005926.GT85884050@sgi.com> <20070510115620.GB21400@amitarora.in.ibm.com> <20070510223950.GD86004887@sgi.com> <20070511110301.GB28425@in.ibm.com> <20070512080157.GF85884050@sgi.com> <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070614091458.GH5181@schatzie.adilger.int> User-Agent: Mutt/1.4.2.3i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12051 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Thu, Jun 14, 2007 at 03:14:58AM -0600, Andreas Dilger wrote: > I suppose it might be a bit late in the game to add a "goal" > parameter and e.g. FA_FL_REQUIRE_GOAL, FA_FL_NEAR_GOAL, etc to make > the API more suitable for XFS? The goal could be a single __u64, or > a struct with e.g. __u64 byte offset (possibly also __u32 lun like > in FIEMAP). I guess the one potential limitation here is the > number of function parameters on some architectures. This isn't really about "more suitable for XFS" but about being more suitable for sophisticated layout decisions. But I'm still not confident this should be shoehorned into this syscall. In fact I'm already rather unhappy about the feature churn in the current patch series.
The more I think about it the more I'd prefer we would just put a simple syscall in that implements nothing but the posix_fallocate(3) semantics as defined in SuS, and then go on to brainstorm about advanced preallocation / layout hint semantics. From owner-xfs@oss.sgi.com Sat Jun 30 03:21:18 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 30 Jun 2007 03:21:22 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5UALGtL032514 for ; Sat, 30 Jun 2007 03:21:18 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1I4a4l-0006Ck-E5; Sat, 30 Jun 2007 11:21:11 +0100 Date: Sat, 30 Jun 2007 11:21:11 +0100 From: Christoph Hellwig To: "Amit K. Arora" Cc: Andreas Dilger , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070630102111.GB23568@infradead.org> Mail-Followup-To: Christoph Hellwig , "Amit K. 
Arora" , Andreas Dilger , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, David Chinner , suparna@in.ibm.com, cmm@us.ibm.com, xfs@oss.sgi.com References: <20070612061652.GA6320@amitarora.in.ibm.com> <20070613235217.GS86004887@sgi.com> <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625150320.GA8686@amitarora.in.ibm.com> <20070625214626.GJ5181@schatzie.adilger.int> <20070626103247.GA19870@amitarora.in.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070626103247.GA19870@amitarora.in.ibm.com> User-Agent: Mutt/1.4.2.3i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12052 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Tue, Jun 26, 2007 at 04:02:47PM +0530, Amit K. Arora wrote: > > Can you clarify - what is the current behaviour when ENOSPC (or some other > > error) is hit? Does it keep the current fallocate() or does it free it? > > Currently it is left to the file system implementation. In ext4, we do > not undo preallocation if some error (say, ENOSPC) is hit. Hence it may > end up with partial (pre)allocation. This is in line with dd and > posix_fallocate, which also do not free the partially allocated space. I can't find anything in the specification of posix_fallocate (http://www.opengroup.org/onlinepubs/009695399/functions/posix_fallocate.html) that tells what should happen to allocated blocks on error.
But common sense would be to not leak disk space on failure of this syscall, and this definitely should not be left up to the filesystem: either we always leak it or always free it, and I'd strongly favour the latter variant. > > For FA_ZERO_SPACE - I'd think this would (IMHO) be the default - we > > don't want to expose uninitialized disk blocks to userspace. I'm not > sure if this makes sense at all. > > I don't think we need to make it default - at least for filesystems which > have a mechanism to distinguish preallocated blocks from "regular" ones. > In ext4, for example, we will have a way to mark uninitialized extents. > All the preallocated blocks will be part of these uninitialized extents. > And any read on these extents will treat them as a hole, returning > zeroes to user land. Thus any existing data on uninitialized blocks will > not be exposed to the userspace. This is the xfs unwritten extent behaviour. But anyway, the important bit is that uninitialized blocks should never ever leak to userspace, so there is no need for the flag. From owner-xfs@oss.sgi.com Sat Jun 30 03:26:41 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 30 Jun 2007 03:26:44 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: X-Spam-Status: No, score=0.3 required=5.0 tests=AWL,BAYES_50 autolearn=no version=3.2.0-pre1-r499012 Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5UAQetL001939 for ; Sat, 30 Jun 2007 03:26:41 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.63 #1 (Red Hat Linux)) id 1I4aA3-0006FI-1h; Sat, 30 Jun 2007 11:26:39 +0100 Date: Sat, 30 Jun 2007 11:26:38 +0100 From: Christoph Hellwig To: David Chinner Cc: xfs-oss , "Amit K.
Arora" , linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org, suparna@in.ibm.com, cmm@us.ibm.com Subject: Re: [PATCH 4/7][TAKE5] support new modes in fallocate Message-ID: <20070630102638.GC23568@infradead.org> References: <20070614091458.GH5181@schatzie.adilger.int> <20070614120413.GD86004887@sgi.com> <20070614193347.GN5181@schatzie.adilger.int> <20070625132810.GA1951@amitarora.in.ibm.com> <20070625134500.GE1951@amitarora.in.ibm.com> <20070625150320.GA8686@amitarora.in.ibm.com> <20070625214626.GJ5181@schatzie.adilger.int> <20070626231431.GO31489@sgi.com> <20070627034915.GR6652@schatzie.adilger.int> <20070627133657.GQ989688@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20070627133657.GQ989688@sgi.com> User-Agent: Mutt/1.4.2.3i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12053 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: xfs On Wed, Jun 27, 2007 at 11:36:57PM +1000, David Chinner wrote: > > This > > would seem to be the only impediment from using fallocated files > > for swap files. Maybe if FIEMAP was used by mkswap to get an > > "UNWRITTEN" flag back instead of "HOLE" it wouldn't be a problem. > > Probably. If we taught do_mpage_readpage() about unwritten mappings, > then we could map them on read and then sys_swapon can remain > blissfully unaware of unwritten extents. Except for reading the swap header in the first page sys_swapon will never end up in do_mpage_readpage. It rather uses ->bmap to build its own extent list and issues bios directly.
Now this is anything but nice, and we should rather refactor the direct I/O code to work on kernel pages without looking at their fields so this can be done properly. Alternatively, ->bmap could grow a BMAP_SWAP flag so the filesystem could do the right thing. But despite not being useful for swap, the patch below looks very nice to me. Doing things correctly in core code is always better than hacking around it in the filesystem, especially as XFS won't stay the only filesystem using unwritten extents. From owner-xfs@oss.sgi.com Sat Jun 30 10:17:34 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 30 Jun 2007 10:17:37 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: * X-Spam-Status: No, score=1.5 required=5.0 tests=AWL,BAYES_50,SPF_HELO_PASS autolearn=no version=3.2.0-pre1-r499012 Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5UHHXtL015366 for ; Sat, 30 Jun 2007 10:17:34 -0700 Received: from Liberator.local (dsldyn51.travel-net.com [205.150.76.51]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 9E74D18011E86; Sat, 30 Jun 2007 12:17:31 -0500 (CDT) Message-ID: <46869029.3040704@sandeen.net> Date: Sat, 30 Jun 2007 13:17:29 -0400 From: Eric Sandeen User-Agent: Thunderbird 2.0.0.4 (Macintosh/20070604) MIME-Version: 1.0 To: David Chinner CC: Just Marc , Barry Naujok , xfs@oss.sgi.com Subject: Re: xfs_fsr, performance related tweaks References: <4683ADEB.3010106@corky.net> <46841C60.5030207@sandeen.net> <4684A506.4030705@corky.net> <4684A98B.1030000@corky.net> <20070629070814.GR31489@sgi.com> <4684B1CC.60004@corky.net> <20070629074114.GS31489@sgi.com> In-Reply-To: <20070629074114.GS31489@sgi.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on
oss.sgi.com X-Virus-Status: Clean X-archive-position: 12054 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs David Chinner wrote: > On Fri, Jun 29, 2007 at 08:16:28AM +0100, Just Marc wrote: >> David, >> >> In my first post I already said something like that can be done but it's >> just an ugly hack. Don't you think it would best be handled cleanly >> and correctly by fsr itself? > > No, I don't - if you want files not to be defragmented, then you > have to set the flags yourself in some way. You have a specific need > that can be solved by some scripting to describe your defrag/no > defrag policy. xfs_fsr has no place in setting defrag policy; its > function is simply to find and defrag files. I wouldn't mind seeing a way to tell fsr to not worry about defragging some files based on current layout; say if the avg extent in the file is > 100MB, or > 1G, don't bother... if today you have a 4.7G DVD iso image in 3 extents (not bad) fsr will try to "fix" it for you, right? -eric > Cheers, > > Dave.
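[Editor's note: Eric's cutoff is easy to state as a rule — compare a file's average extent size against a threshold and skip defragmentation when the extents are already large. A sketch of that policy check; the function name and thresholds are illustrative, not fsr code:]

```python
def worth_defragmenting(size_bytes, extent_count, min_avg_extent=1 << 30):
    """Skip files whose average extent already exceeds min_avg_extent
    (1 GiB here, purely as an example cutoff)."""
    if extent_count <= 1:
        return False  # already contiguous, nothing to do
    return size_bytes / extent_count < min_avg_extent

# The DVD-image case from the mail: ~4.7 GB in 3 extents averages
# ~1.6 GB per extent, so there is nothing worth "fixing".
assert not worth_defragmenting(int(4.7e9), 3)
# A 100 MB file in 50 extents averages 2 MB per extent: a real candidate.
assert worth_defragmenting(100 * (1 << 20), 50)
```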
From owner-xfs@oss.sgi.com Sat Jun 30 16:42:26 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 30 Jun 2007 16:42:30 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=5.0 tests=AWL,BAYES_80 autolearn=no version=3.2.0-pre1-r499012 Received: from ug-out-1314.google.com (ug-out-1314.google.com [66.249.92.174]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5UNgPtL030322 for ; Sat, 30 Jun 2007 16:42:26 -0700 Received: by ug-out-1314.google.com with SMTP id z36so1202639uge for ; Sat, 30 Jun 2007 16:42:26 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:from:to:subject:date:user-agent:cc:mime-version:content-type:content-transfer-encoding:content-disposition:message-id; b=kbeHdSYEd7ghLcc8kCD9qlkkrHOJB4Wku1tjfztcnHl0Ex8uKEScvF6WVQs41nNNk/DaWvdP0vB1vAz/fhOWoAAhRivar2xA2seoSPWyCq24kz11mBDYu5AyP/K8CqPu53IOXe5VK4ESZMGdO5XserlPSOdSv6A+2e9rGn1C5Wg= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:from:to:subject:date:user-agent:cc:mime-version:content-type:content-transfer-encoding:content-disposition:message-id; b=AFYB9hSc2Sjhzd64uh7BJQ8yfoQWcWGhjRjMLqqN2FXuSWbv1iji4toaMfeiNOGxzrF+Iqtuytyu4JOkp4LGnR2Q/6odDuN5sYOF/5jyKaNn7QuT5yjTrYBDciWfdy0Aekw/Oqeryacv1806bf+VjDgkzJJ15/utm4e7VHiXXaA= Received: by 10.66.233.14 with SMTP id f14mr3806787ugh.1183245482681; Sat, 30 Jun 2007 16:18:02 -0700 (PDT) Received: from ?192.168.1.34? 
( [90.184.90.115]) by mx.google.com with ESMTP id 61sm2475024ugz.2007.06.30.16.18.01 (version=TLSv1/SSLv3 cipher=OTHER); Sat, 30 Jun 2007 16:18:01 -0700 (PDT) From: Jesper Juhl To: Linux Kernel Mailing List Subject: [PATCH][XFS][resend] fix memory leak in xfs_inactive() Date: Sun, 1 Jul 2007 01:16:51 +0200 User-Agent: KMail/1.9.7 Cc: David Chinner , xfs-masters@oss.sgi.com, xfs@oss.sgi.com, Andrew Morton , Jesper Juhl MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200707010116.52012.jesper.juhl@gmail.com> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12055 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jesper.juhl@gmail.com Precedence: bulk X-list: xfs (this is back from May 16 2007, resending since it doesn't look like the patch ever made it in anywhere) The Coverity checker found a memory leak in xfs_inactive(). The offending code is this bit : 1671 tp = xfs_trans_alloc(mp, XFS_TRANS_INACTIVE); At conditional (1): "truncate != 0" taking true path 1672 if (truncate) { 1673 /* 1674 * Do the xfs_itruncate_start() call before 1675 * reserving any log space because itruncate_start 1676 * will call into the buffer cache and we can't 1677 * do that within a transaction. 
1678 */ 1679 xfs_ilock(ip, XFS_IOLOCK_EXCL); 1680 1681 error = xfs_itruncate_start(ip, XFS_ITRUNC_DEFINITE, 0); At conditional (2): "error != 0" taking true path 1682 if (error) { 1683 xfs_iunlock(ip, XFS_IOLOCK_EXCL); Event leaked_storage: Returned without freeing storage "tp" Also see events: [alloc_fn][var_assign] 1684 return VN_INACTIVE_CACHE; 1685 } So, the code allocates a transaction, but in the case where 'truncate' is !=0 and xfs_itruncate_start(ip, XFS_ITRUNC_DEFINITE, 0); happens to return an error, we'll just return from the function without dealing with the memory allocated by xfs_trans_alloc() and assigned to 'tp', thus it'll be orphaned/leaked - not good. The bug was introduced by this commit: http://git2.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=d3cf209476b72c83907a412b6708c5e498410aa7 The patch below is From: Dave Chinner Signed-off-by: Jesper Juhl --- fs/xfs/xfs_vnodeops.c | 1 + 1 file changed, 1 insertion(+) Index: 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/xfs_vnodeops.c 2007-05-11 16:04:03.000000000 +1000 +++ 2.6.x-xfs-new/fs/xfs/xfs_vnodeops.c 2007-05-17 12:37:25.671399078 +1000 @@ -1710,6 +1710,7 @@ xfs_inactive( error = xfs_itruncate_start(ip, XFS_ITRUNC_DEFINITE, 0); if (error) { + xfs_trans_cancel(tp, 0); xfs_iunlock(ip, XFS_IOLOCK_EXCL); return VN_INACTIVE_CACHE; } From owner-xfs@oss.sgi.com Sat Jun 30 16:46:04 2007 Received: with ECARTIS (v1.0.0; list xfs); Sat, 30 Jun 2007 16:46:07 -0700 (PDT) X-Spam-Checker-Version: SpamAssassin 3.2.0-pre1-r499012 (2007-01-23) on oss.sgi.com X-Spam-Level: *** X-Spam-Status: No, score=3.0 required=5.0 tests=AWL,BAYES_99 autolearn=no version=3.2.0-pre1-r499012 Received: from ug-out-1314.google.com (ug-out-1314.google.com [66.249.92.172]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l5UNk0tL031451 for ; Sat, 30 Jun 2007 16:46:03 -0700 Received: by ug-out-1314.google.com with
SMTP id z36so1202865uge for ; Sat, 30 Jun 2007 16:46:02 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:from:to:subject:date:user-agent:cc:mime-version:content-type:content-transfer-encoding:content-disposition:message-id; b=nUvBprw7jkreXNtBphsVddcPhv5TaHF1vvmC7eYAPb/A0Vhm1abXlEbwuXdAJ8sseRGFeIHfrlqUmx3Iv8Rhzt3ihfeDsB34absTNhHxpjNTgsq+lcoF1Sm3nZa3xh3ko5Tspt+7LSlJyxXzZHYpjSbFvdF5Ax8YcSa8ANe33wM= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:from:to:subject:date:user-agent:cc:mime-version:content-type:content-transfer-encoding:content-disposition:message-id; b=I4vjL3UrTV7ojORIi3wpSahMchvw3nuWo4Ju1Tp1UKeO0kRnk6Kg2QY9V2RBG4eXFh0NtZrSCeF43Fn/U8M+vsWx2fmOWyKhlZO+G5J1F8JQuEV57nSmK8p9AGEv7Mlpl6gyEbapMb+z12jSY1adzdz8G7rDohmGMf89nFCWnMU= Received: by 10.67.88.20 with SMTP id q20mr1488894ugl.1183245479355; Sat, 30 Jun 2007 16:17:59 -0700 (PDT) Received: from ?192.168.1.34? ( [90.184.90.115]) by mx.google.com with ESMTP id 61sm2475024ugz.2007.06.30.16.17.58 (version=TLSv1/SSLv3 cipher=OTHER); Sat, 30 Jun 2007 16:17:58 -0700 (PDT) From: Jesper Juhl To: Linux Kernel Mailing List Subject: [PATCH][XFS][resend] memory leak; allocated transaction not freed in xfs_inactive_free_eofblocks() in failure case. 
Date: Sun, 1 Jul 2007 01:16:54 +0200 User-Agent: KMail/1.9.7 Cc: David Chinner , xfs-masters@oss.sgi.com, xfs@oss.sgi.com, Andrew Morton , Jesper Juhl MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200707010116.54884.jesper.juhl@gmail.com> X-Virus-Scanned: ClamAV version 0.90, clamav-milter version devel-120207 on oss.sgi.com X-Virus-Status: Clean X-archive-position: 12056 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jesper.juhl@gmail.com Precedence: bulk X-list: xfs (this is back from May 16 2007, resending since it doesn't look like the patch ever made it in anywhere) Fix XFS memory leak; allocated transaction not freed in xfs_inactive_free_eofblocks() in failure case. The code allocates a transaction, but in the case where xfs_itruncate_start(ip, XFS_ITRUNC_DEFINITE, ip->i_size); happens to return an error, we'll just return from the function without dealing with the memory allocated by xfs_trans_alloc() and assigned to 'tp', thus it'll be orphaned/leaked - not good. Signed-off-by: Jesper Juhl --- fs/xfs/xfs_vnodeops.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/fs/xfs/xfs_vnodeops.c b/fs/xfs/xfs_vnodeops.c index de17aed..32519cf 100644 --- a/fs/xfs/xfs_vnodeops.c +++ b/fs/xfs/xfs_vnodeops.c @@ -1260,6 +1260,7 @@ xfs_inactive_free_eofblocks( error = xfs_itruncate_start(ip, XFS_ITRUNC_DEFINITE, ip->i_size); if (error) { + xfs_trans_cancel(tp, 0); xfs_iunlock(ip, XFS_IOLOCK_EXCL); return error; }
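[Editor's note: both resends fix the same class of bug — an early error return that skips releasing a resource acquired earlier in the function. The kernel fix cancels the transaction with xfs_trans_cancel() before returning; the control flow can be sketched like this, with hypothetical stand-ins for the XFS transaction API (this is not kernel code):]

```python
class Transaction:
    """Hypothetical stand-in for the xfs_trans_alloc()/xfs_trans_cancel()
    pair, counting live transactions so a leak would be observable."""
    live = 0

    def __init__(self):
        Transaction.live += 1

    def cancel(self):
        Transaction.live -= 1

def inactive(truncate_start_fails):
    tp = Transaction()            # like xfs_trans_alloc(mp, ...)
    if truncate_start_fails:      # like xfs_itruncate_start() returning an error
        tp.cancel()               # the added xfs_trans_cancel(tp, 0)
        return "VN_INACTIVE_CACHE"  # early return no longer leaks tp
    tp.cancel()                   # normal path releases the transaction too
    return "done"

assert inactive(True) == "VN_INACTIVE_CACHE"
assert inactive(False) == "done"
assert Transaction.live == 0      # neither path leaks the transaction
```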