From owner-xfs@oss.sgi.com Thu Aug 31 23:55:53 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 31 Aug 2006 23:56:21 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k816tcDW022684 for ; Thu, 31 Aug 2006 23:55:50 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA14969; Fri, 1 Sep 2006 16:54:54 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k816sqgw3207623; Fri, 1 Sep 2006 16:54:53 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k816spc93245563; Fri, 1 Sep 2006 16:54:51 +1000 (EST) Date: Fri, 1 Sep 2006 16:54:50 +1000 From: Nathan Scott To: dgc@melbourne.sgi.com Cc: xfs@oss.sgi.com Subject: review: add a splice command to xfs_io Message-ID: <20060901165450.T3186664@wobbly.melbourne.sgi.com> Mime-Version: 1.0 Content-Type: multipart/mixed; boundary="1LKvkjL3sHcu1TtY" Content-Disposition: inline User-Agent: Mutt/1.2.5i X-archive-position: 8859 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: xfs --1LKvkjL3sHcu1TtY Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Yo Dave, Here's some code which should help exercise the splice functionality from a QA test. It needs a /usr/include/sys/splice.h (attached also, but glibc will get this at some point, I guess). No QA test yet though - I will hack something up there on Monday (it's beer o'clock). cheers.
-- Nathan Index: xfsprogs/io/init.c =================================================================== --- xfsprogs.orig/io/init.c 2006-09-01 09:34:40.679409500 +1000 +++ xfsprogs/io/init.c 2006-09-01 09:36:18.193313250 +1000 @@ -74,6 +74,7 @@ init_commands(void) resblks_init(); sendfile_init(); shutdown_init(); + splice_init(); truncate_init(); } Index: xfsprogs/io/io.h =================================================================== --- xfsprogs.orig/io/io.h 2006-09-01 09:34:40.595404250 +1000 +++ xfsprogs/io/io.h 2006-09-01 09:36:04.013341500 +1000 @@ -131,6 +131,12 @@ extern void sendfile_init(void); #define sendfile_init() do { } while (0) #endif +#ifdef HAVE_SPLICE +extern void splice_init(void); +#else +#define splice_init() do { } while (0) +#endif + #ifdef HAVE_MADVISE extern void madvise_init(void); #else Index: xfsprogs/io/splice.c =================================================================== --- /dev/null 1970-01-01 00:00:00.000000000 +0000 +++ xfsprogs/io/splice.c 2006-09-01 16:46:41.929606500 +1000 @@ -0,0 +1,255 @@ +/* + * Copyright (c) 2006 Silicon Graphics, Inc. + * All Rights Reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include +#include +#include +#include +#include "init.h" +#include "io.h" + +static cmdinfo_t splice_cmd; + +static void +splice_help(void) +{ + printf(_( +"\n" +" splice part or all of the current open file to a second file\n" +"\n" +" Example:\n" +" 'splice 100m 400m 4k out' - splices 4 kilobytes from offset 100m in\n" +" the open file, to offset 400m in \"out\"\n" +"\n" +" Copies data between one file descriptor and another. Because this copying\n" +" is done within the kernel, splice does not need to transfer data to and\n" +" from user space.\n" +" -i -- input file, instead of current file descriptor\n" +" -m -- move pages instead of copying\n" +" -n -- operate in non-blocking I/O mode\n" +" Offsets in both the source and target files are optional, as is the size,\n" +" as follows: If one of these arguments is given it's the transfer size. If\n" +" two are given they are offsets.
If three are given - offset, offset, size.\n" +" Finally, if none of these arguments are given the entire input file will be\n" +" spliced to the output file.\n" +"\n")); +} + +static int +splice_buffer( + int in_fd, + off64_t in_offset, + int out_fd, + off64_t out_offset, + size_t bsize, + int flags, + long long *total) +{ + long long bytes_remaining = *total; + ssize_t bytes, ibytes, obytes; + int pipefds[2]; + int ops = 0; + + *total = 0; + if (pipe(pipefds) < 0) { + perror("pipe"); + return -1; + } + while (bytes_remaining > 0) { + ibytes = min(bytes_remaining, bsize); + bytes = splice(in_fd, &in_offset, + pipefds[1], NULL, ibytes, flags); + if (bytes == 0) + break; + if (bytes < 0) { + perror("splice(in)"); + return -1; + } + ops++; + ibytes = obytes = bytes; + while (obytes > 0) { + bytes = splice(pipefds[0], NULL, + out_fd, &out_offset, obytes, flags); + if (bytes < 0) { + perror("splice(out)"); + return -1; + } + ops++; + obytes -= bytes; + } + *total += ibytes; + if (ibytes >= bytes_remaining) + break; + bytes_remaining -= ibytes; + } + return ops; +} + +static int +splice_f( + int argc, + char **argv) +{ + off64_t inoff, outoff; + long long count, total; + size_t blocksize, sectsize; + struct stat64 instat; + struct timeval t1, t2; + char s1[64], s2[64], ts[64]; + char *infile = NULL; + int Cflag, qflag, flags; + int c, infd = -1, outfd = -1; + + Cflag = qflag = flags = 0; + init_cvtnum(&blocksize, §size); + while ((c = getopt(argc, argv, "i:mnqC")) != EOF) { + switch (c) { + case 'C': + Cflag = 1; + break; + case 'q': + qflag = 1; + break; + case 'm': + flags |= SPLICE_F_MOVE; + break; + case 'n': + flags |= SPLICE_F_NONBLOCK; + break; + case 'i': + infile = optarg; + break; + default: + return command_usage(&splice_cmd); + } + } + + /* + * If no more arguments are given, splice the whole input file + * If one of these arguments is given it's the transfer size + * If two are given they are file offsets + * If three are given - offset, offset, size + */ + 
+ if (optind >= argc) + return command_usage(&splice_cmd); + + if (optind == argc - 1) { + inoff = outoff = 0; + count = -1; + } else if (optind == argc - 2) { + inoff = outoff = 0; + count = cvtnum(blocksize, sectsize, argv[optind]); + if (count < 0) { + printf(_("non-numeric length argument -- %s\n"), + argv[optind]); + return 0; + } + optind++; + } else if (optind == argc - 3 || optind == argc - 4) { + inoff = cvtnum(blocksize, sectsize, argv[optind]); + if (inoff < 0) { + printf(_("non-numeric offset argument -- %s\n"), + argv[optind]); + return 0; + } + optind++; + outoff = cvtnum(blocksize, sectsize, argv[optind]); + if (outoff < 0) { + printf(_("non-numeric offset argument -- %s\n"), + argv[optind]); + return 0; + } + optind++; + if (optind == argc - 1) { + count = -1; + } else { + count = cvtnum(blocksize, sectsize, argv[optind]); + if (count < 0) { + printf(_("non-numeric length argument -- %s\n"), + argv[optind]); + return 0; + } + optind++; + } + } else + return command_usage(&splice_cmd); + + if (((outfd = openfile(argv[optind], NULL, IO_CREAT, 0644)) < 0)) + return 0; + + if (!infile) + infd = file->fd; + else if ((infd = openfile(infile, NULL, IO_READONLY, 0)) < 0) + goto done; + + if (fstat64(infd, &instat) < 0) { + perror("fstat64"); + goto done; + } + if (count == -1) + count = instat.st_size; + total = count; + blocksize = instat.st_blksize; + + gettimeofday(&t1, NULL); + c = splice_buffer(infd, inoff, outfd, outoff, blocksize, flags, &total); + if (c < 0) + goto done; + if (qflag) + goto done; + gettimeofday(&t2, NULL); + t2 = tsub(t2, t1); + + /* Finally, report back -- -C gives a parsable format */ + timestr(&t2, ts, sizeof(ts), Cflag ? 
VERBOSE_FIXED_TIME : 0); + if (!Cflag) { + cvtstr((double)total, s1, sizeof(s1)); + cvtstr(tdiv((double)total, t2), s2, sizeof(s2)); + printf(_("spliced %lld/%lld bytes from offset %lld to offset %lld\n"), + total, count, (long long)inoff, (long long)outoff); + printf(_("%s, %d ops; %s (%s/sec and %.4f ops/sec)\n"), + s1, c, ts, s2, tdiv((double)c, t2)); + } else {/* bytes,ops,time,bytes/sec,ops/sec */ + printf("%lld,%d,%s,%.3f,%.3f\n", + total, c, ts, + tdiv((double)total, t2), tdiv((double)c, t2)); + } +done: + if (infile) + close(infd); + close(outfd); + return 0; +} + +void +splice_init(void) +{ + splice_cmd.name = _("splice"); + splice_cmd.cfunc = splice_f; + splice_cmd.argmin = 1; + splice_cmd.argmax = -1; + splice_cmd.flags = CMD_NOMAP_OK | CMD_FOREIGN_OK; + splice_cmd.args = + _("[-i infile] [inoff [outoff [len]]] outfile"); + splice_cmd.oneline = + _("Splice copy data between two file descriptors via a pipe"); + splice_cmd.help = splice_help; + + add_command(&splice_cmd); +} Index: xfsprogs/io/Makefile =================================================================== --- xfsprogs.orig/io/Makefile 2006-09-01 09:32:31.319325000 +1000 +++ xfsprogs/io/Makefile 2006-09-01 12:03:24.598944250 +1000 @@ -44,6 +44,13 @@ else LSRCFILES += sendfile.c endif +ifeq ($(HAVE_SPLICE),yes) +CFILES += splice.c +LCFLAGS += -DHAVE_SPLICE +else +LSRCFILES += splice.c +endif + ifeq ($(PKG_PLATFORM),irix) LSRCFILES += inject.c resblks.c else Index: xfsprogs/aclocal.m4 =================================================================== --- xfsprogs.orig/aclocal.m4 2006-09-01 12:04:54.696575000 +1000 +++ xfsprogs/aclocal.m4 2006-09-01 12:06:48.043658750 +1000 @@ -157,9 +157,9 @@ AC_DEFUN([AC_PACKAGE_GLOBALS], AC_SUBST(pkg_platform) ]) -# +# # Check if we have a working fadvise system call -# +# AC_DEFUN([AC_HAVE_FADVISE], [ AC_MSG_CHECKING([for fadvise ]) AC_TRY_COMPILE([ @@ -174,9 +174,9 @@ AC_DEFUN([AC_HAVE_FADVISE], AC_SUBST(have_fadvise) ]) -# +# # Check if we have a working 
madvise system call -# +# AC_DEFUN([AC_HAVE_MADVISE], [ AC_MSG_CHECKING([for madvise ]) AC_TRY_COMPILE([ @@ -191,9 +191,9 @@ AC_DEFUN([AC_HAVE_MADVISE], AC_SUBST(have_madvise) ]) -# +# # Check if we have a working mincore system call -# +# AC_DEFUN([AC_HAVE_MINCORE], [ AC_MSG_CHECKING([for mincore ]) AC_TRY_COMPILE([ @@ -208,9 +208,9 @@ AC_DEFUN([AC_HAVE_MINCORE], AC_SUBST(have_mincore) ]) -# +# # Check if we have a working sendfile system call -# +# AC_DEFUN([AC_HAVE_SENDFILE], [ AC_MSG_CHECKING([for sendfile ]) AC_TRY_COMPILE([ @@ -226,6 +226,23 @@ AC_DEFUN([AC_HAVE_SENDFILE], ]) # +# Check if we have a working splice system call +# +AC_DEFUN([AC_HAVE_SPLICE], + [ AC_MSG_CHECKING([for splice ]) + AC_TRY_COMPILE([ +#define _GNU_SOURCE +#define _FILE_OFFSET_BITS 64 +#include + ], [ + splice(0, 0, 0, 0, 0, 0); + ], have_splice=yes + AC_MSG_RESULT(yes), + AC_MSG_RESULT(no)) + AC_SUBST(have_splice) + ]) + +# # Check if we have a getmntent libc call (IRIX, Linux) # AC_DEFUN([AC_HAVE_GETMNTENT], Index: xfsprogs/configure.in =================================================================== --- xfsprogs.orig/configure.in 2006-09-01 12:04:54.612569750 +1000 +++ xfsprogs/configure.in 2006-09-01 12:06:39.123101250 +1000 @@ -52,6 +52,7 @@ AC_HAVE_FADVISE AC_HAVE_MADVISE AC_HAVE_MINCORE AC_HAVE_SENDFILE +AC_HAVE_SPLICE AC_HAVE_GETMNTENT AC_HAVE_GETMNTINFO Index: xfsprogs/m4/package_libcdev.m4 =================================================================== --- xfsprogs.orig/m4/package_libcdev.m4 2006-09-01 12:05:03.605131750 +1000 +++ xfsprogs/m4/package_libcdev.m4 2006-09-01 12:06:29.186480250 +1000 @@ -1,6 +1,6 @@ -# +# # Check if we have a working fadvise system call -# +# AC_DEFUN([AC_HAVE_FADVISE], [ AC_MSG_CHECKING([for fadvise ]) AC_TRY_COMPILE([ @@ -15,9 +15,9 @@ AC_DEFUN([AC_HAVE_FADVISE], AC_SUBST(have_fadvise) ]) -# +# # Check if we have a working madvise system call -# +# AC_DEFUN([AC_HAVE_MADVISE], [ AC_MSG_CHECKING([for madvise ]) AC_TRY_COMPILE([ @@ -32,9 
+32,9 @@ AC_DEFUN([AC_HAVE_MADVISE], AC_SUBST(have_madvise) ]) -# +# # Check if we have a working mincore system call -# +# AC_DEFUN([AC_HAVE_MINCORE], [ AC_MSG_CHECKING([for mincore ]) AC_TRY_COMPILE([ @@ -49,9 +49,9 @@ AC_DEFUN([AC_HAVE_MINCORE], AC_SUBST(have_mincore) ]) -# +# # Check if we have a working sendfile system call -# +# AC_DEFUN([AC_HAVE_SENDFILE], [ AC_MSG_CHECKING([for sendfile ]) AC_TRY_COMPILE([ @@ -67,6 +67,23 @@ AC_DEFUN([AC_HAVE_SENDFILE], ]) # +# Check if we have a working splice system call +# +AC_DEFUN([AC_HAVE_SPLICE], + [ AC_MSG_CHECKING([for splice ]) + AC_TRY_COMPILE([ +#define _GNU_SOURCE +#define _FILE_OFFSET_BITS 64 +#include + ], [ + splice(0, 0, 0, 0, 0, 0); + ], have_splice=yes + AC_MSG_RESULT(yes), + AC_MSG_RESULT(no)) + AC_SUBST(have_splice) + ]) + +# # Check if we have a getmntent libc call (IRIX, Linux) # AC_DEFUN([AC_HAVE_GETMNTENT], Index: xfsprogs/include/builddefs.in =================================================================== --- xfsprogs.orig/include/builddefs.in 2006-09-01 14:58:42.038205250 +1000 +++ xfsprogs/include/builddefs.in 2006-09-01 14:59:00.475357500 +1000 @@ -90,6 +90,7 @@ HAVE_FADVISE = @have_fadvise@ HAVE_MADVISE = @have_madvise@ HAVE_MINCORE = @have_mincore@ HAVE_SENDFILE = @have_sendfile@ +HAVE_SPLICE = @have_splice@ HAVE_GETMNTENT = @have_getmntent@ HAVE_GETMNTINFO = @have_getmntinfo@ --1LKvkjL3sHcu1TtY Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="splice.h" #ifndef SPLICE_H #define SPLICE_H #include #include #include #include #if defined(__i386__) #define __NR_sys_splice 313 #define __NR_sys_tee 315 #define __NR_sys_vmsplice 316 #elif defined(__x86_64__) #define __NR_sys_splice 275 #define __NR_sys_tee 276 #define __NR_sys_vmsplice 278 #elif defined(__powerpc__) || defined(__powerpc64__) #define __NR_sys_splice 283 #define __NR_sys_tee 284 #define __NR_sys_vmsplice 285 #elif defined(__ia64__) #define __NR_sys_splice 1297 #define __NR_sys_tee 1301 #define 
__NR_sys_vmsplice 1302 #else #error unsupported arch #endif #define SPLICE_F_MOVE (0x01) /* move pages instead of copying */ #define SPLICE_F_NONBLOCK (0x02) /* don't block on the pipe splicing (but */ /* we may still block on the fd we splice */ /* from/to, of course */ #define SPLICE_F_MORE (0x04) /* expect more data */ #define SPLICE_F_GIFT (0x08) /* pages passed in are a gift */ _syscall6(int, sys_splice, int, fdin, loff_t *, off_in, int, fdout, loff_t *, off_out, size_t, len, unsigned int, flags); _syscall4(int, sys_vmsplice, int, fd, const struct iovec *, iov, unsigned long, nr_segs, unsigned int, flags); _syscall4(int, sys_tee, int, fdin, int, fdout, size_t, len, unsigned int, flags); static inline int splice(int fdin, loff_t *off_in, int fdout, loff_t *off_out, size_t len, unsigned long flags) { return sys_splice(fdin, off_in, fdout, off_out, len, flags); } static inline int tee(int fdin, int fdout, size_t len, unsigned int flags) { return sys_tee(fdin, fdout, len, flags); } static inline int vmsplice(int fd, const struct iovec *iov, unsigned long nr_segs, unsigned int flags) { return sys_vmsplice(fd, iov, nr_segs, flags); } #endif --1LKvkjL3sHcu1TtY-- From owner-xfs@oss.sgi.com Fri Sep 1 00:38:02 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 01 Sep 2006 00:38:10 -0700 (PDT) Received: from deliver.uni-koblenz.de (deliver.uni-koblenz.de [141.26.64.15]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k817c1DW002417 for ; Fri, 1 Sep 2006 00:38:02 -0700 Received: from localhost (localhost [127.0.0.1]) by deliver.uni-koblenz.de (Postfix) with ESMTP id 686E5B62227; Fri, 1 Sep 2006 08:36:39 +0200 (CEST) Received: from deliver.uni-koblenz.de ([127.0.0.1]) by localhost (deliver [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 28984-01; Fri, 1 Sep 2006 08:36:37 +0200 (CEST) Received: from bliss.uni-koblenz.de (bliss.uni-koblenz.de [141.26.64.65]) (using SSLv3 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by 
deliver.uni-koblenz.de (Postfix) with ESMTP id 98766B55F3E; Fri, 1 Sep 2006 08:36:37 +0200 (CEST) From: Rainer Krienke To: Chris Hane , xfs@oss.sgi.com Subject: Re: XFS and 3.2TB Partition Date: Fri, 1 Sep 2006 08:36:28 +0200 User-Agent: KMail/1.9.4 References: <44F714F2.7050502@gmail.com> In-Reply-To: <44F714F2.7050502@gmail.com> MIME-Version: 1.0 Content-Type: multipart/signed; boundary="nextPart1420533.3OVRndO7eY"; protocol="application/pgp-signature"; micalg=pgp-sha1 Content-Transfer-Encoding: 7bit Message-Id: <200609010836.32331.krienke@uni-koblenz.de> X-archive-position: 8860 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: krienke@uni-koblenz.de Precedence: bulk X-list: xfs --nextPart1420533.3OVRndO7eY Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable Content-Disposition: inline On Thursday, 31 August 2006 18:57, you wrote: > I am trying to create a 3.2TB partition on my Raid 5. Is there a > document that could help? > > I have a 3ware 9500 controller and 8 *500GB sata drives configured into > a single RAID 5 array. > I have a Raid with about 5TB and no problems creating an xfs filesystem on it. The system is Novell SLES10 with a 2.6.16.21 kernel. At first there was a problem with the raid. The firmware of the raid device needed an upgrade. Before the upgrade I had a maximum of 2TB. In dmesg (or /var/log/boot.msg on SLES10) you should see something like this message if the device (sdc here) is handled correctly: <5>sdc : very big device. try to use READ CAPACITY(16). <5>SCSI device sdc: 10156243968 512-byte hdwr sectors (5199997 MB) <5>sdc: Write Protect is off <7>sdc: Mode Sense: cb 00 00 08 <5>SCSI device sdc: drive cache: write back <5>sdc : very big device. try to use READ CAPACITY(16).
<5>SCSI device sdc: 10156243968 512-byte hdwr sectors (5199997 MB) Before the firmware update there was an error when trying to read the capacity via READ CAPACITY(16). I created the partitions using parted. fdisk did not work. Have a nice day Rainer -- --------------------------------------------------------------------------- Rainer Krienke, Universitaet Koblenz, Rechenzentrum, Raum A022 Universitaetsstrasse 1, 56070 Koblenz, Tel: +49 261287 -1312, Fax: -1001312 Mail: krienke@uni-koblenz.de, Web: http://www.uni-koblenz.de/~krienke Get my public PGP key: http://www.uni-koblenz.de/~krienke/mypgp.html --------------------------------------------------------------------------- --nextPart1420533.3OVRndO7eY Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2 (GNU/Linux) iD8DBQBE99Twaldtjc/KDEoRAoshAKCx63Wqwb/s188EdqmGXyLJE79mRgCfbEpN EoR1IWQ8ogyx+D6zmgoccag= =0RXB -----END PGP SIGNATURE----- --nextPart1420533.3OVRndO7eY-- From owner-xfs@oss.sgi.com Fri Sep 1 03:41:32 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 01 Sep 2006 03:41:41 -0700 (PDT) Received: from jabber.dneg.com (mail.dneg.com [193.203.82.196]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k81AfVDW030184 for ; Fri, 1 Sep 2006 03:41:32 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by jabber.dneg.com (Postfix) with ESMTP id AB4F1B7000; Fri, 1 Sep 2006 09:41:07 +0100 (BST) Received: from jabber.dneg.com ([127.0.0.1]) by localhost (jabber [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 13808-06; Fri, 1 Sep 2006 09:40:58 +0100 (BST) Received: from [172.16.11.100] (bath.dneg.com [172.16.11.100]) by jabber.dneg.com (Postfix) with ESMTP id 71F3AB6FD4; Fri, 1 Sep 2006 09:40:57 +0100 (BST) Message-ID: <44F7F219.40904@dneg.com> Date: Fri, 01 Sep 2006 09:40:57 +0100 From: Evan Fraser User-Agent: Thunderbird 1.5.0.5 (X11/20060719) MIME-Version: 1.0 To: Chris Hane Cc: xfs@oss.sgi.com Subject: Re: XFS and 3.2TB
Partition References: <44F714F2.7050502@gmail.com> In-Reply-To: <44F714F2.7050502@gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8861 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: evan@dneg.com Precedence: bulk X-list: xfs I had that problem when I was using an adaptec aic79xx adapter, using a new LSI-Logic one fixed the problem for me. Could it be a limit with your controller/controller driver? Chris Hane wrote: > > I am trying to create a 3.2TB partition on my Raid 5. Is there a > document that could help? > > I have a 3ware 9500 controller and 8 *500GB sata drives configured > into a single RAID 5 array. > > I am running linux 2.6.16 with the 3ware drivers compiled into the > kernel. > > I've tried a couple of different means to create the partition and > format the file system with xfs without success (or confidence that I > haven't done something wrong). > > 1. FDISK > > I've tried fdisk on the array to create the partition; but it forces > me to enter the number of cylinders before letting me create the > partition. I enter the largest number of cylinders since I'm not sure > how to calculate the correct cylinder number across an 8 disk RAID 5 > array. > > I then create the partition starting at 0 (or whatever the default > was) and ending at 3500GB. > > Once the partition is created this way, I can mkfs.xfs; but I'm a > little hesitant to use this since I input an arbitrary cylinder number. > > Thoughts on what to use for the correct cylinder count with fdisk? > > 2. PARTED > > I've tried to use parted without any success. Here is what I've tried > and the errors I get. > > > parted > parted> mklabel gpt > parted> mkpart primary 0 3500GB > parted> quit > > ok - the partition now exists. If I use ext2 everything works ok.
> > however, when I run > > > mkfs.xfs /dev/sda1 > > the file system is formatted but is truncated to 2TB. > > > Any advice/pointers on how to partition and format a 3.2TB raid 5 > array would be much appreciated. > > Thanks, > Chris.... > > > -- evan@dneg.com Linux Systems Administrator Double Negative tel: +44 (0)20 7534 4400 fax: +44 (0)20 7534 4452 77 shaftesbury avenue, w1d 5du, London From owner-xfs@oss.sgi.com Fri Sep 1 06:20:31 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 01 Sep 2006 06:20:49 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k81DKDDW024104 for ; Fri, 1 Sep 2006 06:20:27 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id XAA22324; Fri, 1 Sep 2006 23:19:22 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k81DJIeQ7692301; Fri, 1 Sep 2006 23:19:19 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k81DJD988721962; Fri, 1 Sep 2006 23:19:13 +1000 (AEST) Date: Fri, 1 Sep 2006 23:19:13 +1000 From: David Chinner To: Jens Axboe Cc: "Jeffrey E.
Hundstad" , xfs@oss.sgi.com, nathans@sgi.com Subject: Re: vmsplice can't work well Message-ID: <20060901131913.GG5737019@melbourne.sgi.com> References: <44F4440F.1090300@gmail.com> <20060829140542.GN12257@kernel.dk> <44F5CC08.8010205@mnsu.edu> <20060830174815.GF7331@kernel.dk> <44F5D3C6.1010108@mnsu.edu> <20060831092440.GC5528@kernel.dk> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20060831092440.GC5528@kernel.dk> User-Agent: Mutt/1.4.2.1i X-archive-position: 8862 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs On Thu, Aug 31, 2006 at 11:24:41AM +0200, Jens Axboe wrote: > XFS list, > > On Wed, Aug 30 2006, Jeffrey E. Hundstad wrote: > > Jens Axboe wrote: > > >On Wed, Aug 30 2006, Jeffrey E. Hundstad wrote: > > > > > >>I tried your splie-git...tar.gz file and tried the splice-cp. It > > >>produced files that are the right length... but the files only contain > > >>nulls. Here's the straces: > > >> > > > > > >Works for me as well. Could be an fs issue, how large was the README and > > >what filesystem did you use? > > > > > > > > The file was 1130 bytes (it was the README in that directory.) The > > filesystem is XFS. > > > > I can reproduce this quite easily, doing: > > nelson:~ # splice-cp sda.blktrace.0 foo > > nelson:~ # md5sum sda.blktrace.0 foo > 4754070ae77091468c830ea23b125d68 sda.blktrace.0 > efdc7b9d00692fdfe91a691277209267 foo Busted write side - splice-in works fine, splice-out is an alias for /dev/zero. The reason it's full of NULLs: death:/mnt# xfs_bmap -vv foo foo: no extents death:/mnt# It's a hole. Nothing has been flushed out to disk. Interesting - the inode is leaving pipe_to_file() dirty, the page is dirty, the buffer head is dirty, delay, mapped and uptodate. The page is the only page in the radix tree and the radix tree is marked dirty. But it never gets flushed out. 
Even when I use dd to seek past the first disk block and write further into the file, I still end up with a hole in the range where the original splice write should be, which means it was no longer in the page cache. Copying a large file I can see dirty memory increase to tens of megabytes. Nothing is going to disk, writeback is not going above zero. Interestingly, when the write completes, the size of the page cache drops by almost exactly the size of the file being written - almost like a truncate_inode_pages() is occurring on file close. Oh, look - we _are_ tossing away all the pages on close. xfs_splice_write() hasn't updated the xfs inode size when extending the file. The linux inode has the correct value, but xfs thinks that it's only got a speculative allocation EOF (i.e. 0) so we invalidate it before it gets to disk. The patch below just copies some code out of xfs_write() where it updates the xfs inode size and drops it in xfs_splice_write(). It's almost certainly not the right fix, but the bucket under the pipe will now catch most of the bits.... Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group --- fs/xfs/linux-2.6/xfs_lrw.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) Index: 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_lrw.c =================================================================== --- 2.6.x-xfs-new.orig/fs/xfs/linux-2.6/xfs_lrw.c 2006-08-31 16:17:47.000000000 +1000 +++ 2.6.x-xfs-new/fs/xfs/linux-2.6/xfs_lrw.c 2006-09-01 22:48:56.463190730 +1000 @@ -390,6 +390,8 @@ xfs_splice_write( xfs_inode_t *ip = XFS_BHVTOI(bdp); xfs_mount_t *mp = ip->i_mount; ssize_t ret; + struct inode *inode = outfilp->f_mapping->host; + xfs_fsize_t isize; XFS_STATS_INC(xs_write_calls); if (XFS_FORCED_SHUTDOWN(ip->i_mount)) @@ -416,6 +418,20 @@ xfs_splice_write( if (ret > 0) XFS_STATS_ADD(xs_write_bytes, ret); + isize = i_size_read(inode); + if (unlikely(ret < 0 && ret != -EFAULT && *ppos > isize)) + *ppos = isize; + + if (*ppos > ip->i_d.di_size) { + xfs_ilock(ip, XFS_ILOCK_EXCL); + if (*ppos > ip->i_d.di_size) { + ip->i_d.di_size = *ppos; + i_size_write(inode, *ppos); + ip->i_update_core = 1; + ip->i_update_size = 1; + } + xfs_iunlock(ip, XFS_ILOCK_EXCL); + } xfs_iunlock(ip, XFS_IOLOCK_EXCL); return ret; } From owner-xfs@oss.sgi.com Fri Sep 1 06:42:52 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 01 Sep 2006 06:43:12 -0700 (PDT) Received: from kernel.dk (brick.kernel.dk [62.242.22.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k81DgqDW029130 for ; Fri, 1 Sep 2006 06:42:52 -0700 Received: from nelson.home.kernel.dk (nelson.home.kernel.dk [192.168.0.33]) by kernel.dk (Postfix) with ESMTP id A22EE63CE1; Fri, 1 Sep 2006 15:42:13 +0200 (CEST) Received: by nelson.home.kernel.dk (Postfix, from userid 1000) id AF1741192E; Fri, 1 Sep 2006 15:45:12 +0200 (CEST) Date: Fri, 1 Sep 2006 15:45:12 +0200 From: Jens Axboe To: David Chinner Cc: "Jeffrey E. 
Hundstad" , xfs@oss.sgi.com, nathans@sgi.com Subject: Re: vmsplice can't work well Message-ID: <20060901134512.GD25434@kernel.dk> References: <44F4440F.1090300@gmail.com> <20060829140542.GN12257@kernel.dk> <44F5CC08.8010205@mnsu.edu> <20060830174815.GF7331@kernel.dk> <44F5D3C6.1010108@mnsu.edu> <20060831092440.GC5528@kernel.dk> <20060901131913.GG5737019@melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20060901131913.GG5737019@melbourne.sgi.com> X-archive-position: 8863 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: axboe@kernel.dk Precedence: bulk X-list: xfs On Fri, Sep 01 2006, David Chinner wrote: > On Thu, Aug 31, 2006 at 11:24:41AM +0200, Jens Axboe wrote: > > XFS list, > > > > On Wed, Aug 30 2006, Jeffrey E. Hundstad wrote: > > > Jens Axboe wrote: > > > >On Wed, Aug 30 2006, Jeffrey E. Hundstad wrote: > > > > > > > >>I tried your splie-git...tar.gz file and tried the splice-cp. It > > > >>produced files that are the right length... but the files only contain > > > >>nulls. Here's the straces: > > > >> > > > > > > > >Works for me as well. Could be an fs issue, how large was the README and > > > >what filesystem did you use? > > > > > > > > > > > The file was 1130 bytes (it was the README in that directory.) The > > > filesystem is XFS. > > > > > > > I can reproduce this quite easily, doing: > > > > nelson:~ # splice-cp sda.blktrace.0 foo > > > > nelson:~ # md5sum sda.blktrace.0 foo > > 4754070ae77091468c830ea23b125d68 sda.blktrace.0 > > efdc7b9d00692fdfe91a691277209267 foo > > Busted write side - splice-in works fine, splice-out is an alias > for /dev/zero. The reason it's full of NULLs: > > death:/mnt# xfs_bmap -vv foo > foo: no extents > death:/mnt# > > It's a hole. Nothing has been flushed out to disk. 
> > Interesting - the inode is leaving pipe_to_file() dirty, the page is > dirty, the buffer head is dirty, delay, mapped and uptodate. The > page is the only page in the radix tree and the radix tree is marked > dirty. > > But it never gets flushed out. Even when I use dd to seek past the > first disk block and write further into the file, I still end up > with a hole in the range where the original splice write should > be which means it was no longer in the page cache. > > Copying a large file I can see dirty memory increase to tens of > megabytes. Nothing is going to disk, writeback is not going above > zero. Interestingly, when the write completes, the size of the page > cache drops by almost exactly the size of the file being written - > almost like a truncate_inode_pages() is occuring on file close. > > Oh, look - we _are_ tossing away all the pages on close. > > xfs_splice_write() hasn't updated the xfs inode size when extending the > file. The linux inode has the correct value, but xfs thinks that it's > only got a speculative allocation EOF (i.e. 0) so we invalidate it > before it gets to disk. > > The patch below just copies some code out of xfs_write() where it > updates the xfs inode size and drops it in xfs_splice_write(). It's > almost certainly not the right fix, but the bucket under the pipe will > now catch most of the bits.... Good analysis and fix, Dave! I don't have time to test it right now, perhaps Jeffrey can give it a shot? Will you make sure this gets into 2.6.18? 
-- Jens Axboe From owner-xfs@oss.sgi.com Fri Sep 1 09:13:01 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 01 Sep 2006 09:13:10 -0700 (PDT) Received: from mail.itsolut.com (mail.itsolut.com [64.182.153.89]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k81GD0DW021514 for ; Fri, 1 Sep 2006 09:13:00 -0700 Received: by mail.itsolut.com (Postfix, from userid 5004) id 7E87B43E35; Fri, 1 Sep 2006 11:12:27 -0500 (EST) Received: from [192.168.1.3] (adsl-68-251-149-159.dsl.bltnin.ameritech.net [68.251.149.159]) by mail.itsolut.com (Postfix) with ESMTP id 4E9D043223 for ; Fri, 1 Sep 2006 11:12:24 -0500 (EST) Message-ID: <44F85BE7.2010001@gmail.com> Date: Fri, 01 Sep 2006 12:12:23 -0400 From: Chris Hane User-Agent: Thunderbird 1.5.0.5 (Windows/20060719) MIME-Version: 1.0 To: xfs@oss.sgi.com Subject: Re: XFS and 3.2TB Partition References: <44F714F2.7050502@gmail.com> <200609010836.32331.krienke@uni-koblenz.de> In-Reply-To: <200609010836.32331.krienke@uni-koblenz.de> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8866 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: chrishane@gmail.com Precedence: bulk X-list: xfs Content-Length: 3029 Lines: 79 Thanks for the input. I appreciate everyone's help! I believe I am going to end up not partitioning the raid array and using it directly (as described in an email which I copied below). When we get our next large storage machine in (we're going to need a couple over the next year to store CD & DVD ISO Images), I'm going to experiment some more with the suggestions everyone has given me here. As an FYI: I'm using the latest versions of everything (parted 1.7.1, kernel 2.6.16, 3ware raid controller brand new) Thanks for the help, Chris....
Peter Grandi wrote: >>>> On Thu, 31 Aug 2006 12:57:22 -0400, Chris Hane >>>> said: > > chrishane> I am trying to create a 3.2TB partition on my Raid 5. > chrishane> Is there a document that could help? > > The 9500 is fairly recent, so it should not have a lot of 2TB > problems. But there are 2TB limits in several places. For example > old versions of the Linux kernel don't support more than 2TB per > _filesystem_. > > But I suspect that you are trying to create partitions in the > sense of the MS-DOS/MS-Windows partitioning scheme. Check > carefully whether that partitioning scheme allows partitions > larger than 2TB :-). > > Anyhow, usually for very large filesystems you don't need > partitions at all. Just use '/dev/sda'. Or check the other > partitioning schemes supported by Linux, some may have higher > limits. > > chrishane> I have a 3ware 9500 controller and 8 *500GB sata > chrishane> drives configured into a single RAID 5 array. > > Using RAID5 with 8 drives is a great crime. Nothing to do with > your partitioning problems, but since you mentioned it... > Consider reading carefully > Rainer Krienke wrote: > On Thursday, 31 August 2006 18:57, you wrote: >> I am trying to create a 3.2TB partition on my Raid 5. Is there a >> document that could help? >> >> I have a 3ware 9500 controller and 8 *500GB sata drives configured into >> a single RAID 5 array. >> > > I have a Raid with about 5TB and no problems creating an xfs filesystem on it. > The system is Novell SLES10 with a 2.6.16.21 kernel. > > At first there was a problem with the raid. The firmware of the raid device > needed an upgrade. Before the upgrade I had a maximum of 2TB. > > In dmesg (or /var/log/boot.msg on SLES10) you should see something like this > message if the device (sdc here) is handled correctly: > > <5>sdc : very big device. try to use READ CAPACITY(16).
> <5>SCSI device sdc: 10156243968 512-byte hdwr sectors (5199997 MB) > <5>sdc: Write Protect is off > <7>sdc: Mode Sense: cb 00 00 08 > <5>SCSI device sdc: drive cache: write back > <5>sdc : very big device. try to use READ CAPACITY(16). > <5>SCSI device sdc: 10156243968 512-byte hdwr sectors (5199997 MB) > > Before the firmware update there was an error when trying to read the capacity > via READ CAPACITY(16). > > I created the partitions using parted. fdisk did not work. > > Have a nice day > Rainer From owner-xfs@oss.sgi.com Fri Sep 1 20:33:13 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 01 Sep 2006 20:33:29 -0700 (PDT) Received: from avalanche.hickorytech.net (smtp.hickorytech.net [216.114.192.16]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k823XCDW012434 for ; Fri, 1 Sep 2006 20:33:13 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by avalanche.hickorytech.net (Postfix) with ESMTP id EF132204FC9; Fri, 1 Sep 2006 21:25:14 -0500 (CDT) Received: from avalanche.hickorytech.net ([216.114.192.16]) by localhost (avalanche.hickorytech.net [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id NsKYhpW9HJC2; Fri, 1 Sep 2006 21:25:14 -0500 (CDT) Received: from [10.0.0.1] (mn-10k-dhcp2-220.dsl.hickorytech.net [216.114.240.220]) by avalanche.hickorytech.net (Postfix) with ESMTP id B8F1E204FC1; Fri, 1 Sep 2006 21:25:14 -0500 (CDT) Message-ID: <44F8ECE7.2090102@mnsu.edu> Date: Fri, 01 Sep 2006 21:31:03 -0500 From: "Jeffrey E.
Hundstad" User-Agent: Thunderbird 1.5.0.5 (X11/20060812) MIME-Version: 1.0 To: David Chinner Cc: Jens Axboe , xfs@oss.sgi.com, nathans@sgi.com Subject: Re: vmsplice can't work well References: <44F4440F.1090300@gmail.com> <20060829140542.GN12257@kernel.dk> <44F5CC08.8010205@mnsu.edu> <20060830174815.GF7331@kernel.dk> <44F5D3C6.1010108@mnsu.edu> <20060831092440.GC5528@kernel.dk> <20060901131913.GG5737019@melbourne.sgi.com> In-Reply-To: <20060901131913.GG5737019@melbourne.sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8868 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jeffrey.hundstad@mnsu.edu Precedence: bulk X-list: xfs Content-Length: 2707 Lines: 84 David Chinner wrote: > On Thu, Aug 31, 2006 at 11:24:41AM +0200, Jens Axboe wrote: > >> XFS list, >> >> On Wed, Aug 30 2006, Jeffrey E. Hundstad wrote: >> >>> Jens Axboe wrote: >>> >>>> On Wed, Aug 30 2006, Jeffrey E. Hundstad wrote: >>>> >>>> >>>>> I tried your splie-git...tar.gz file and tried the splice-cp. It >>>>> produced files that are the right length... but the files only contain >>>>> nulls. Here's the straces: >>>>> >>>>> >>>> Works for me as well. Could be an fs issue, how large was the README and >>>> what filesystem did you use? >>>> >>>> >>>> >>> The file was 1130 bytes (it was the README in that directory.) The >>> filesystem is XFS. >>> >>> >> I can reproduce this quite easily, doing: >> >> nelson:~ # splice-cp sda.blktrace.0 foo >> >> nelson:~ # md5sum sda.blktrace.0 foo >> 4754070ae77091468c830ea23b125d68 sda.blktrace.0 >> efdc7b9d00692fdfe91a691277209267 foo >> > > Busted write side - splice-in works fine, splice-out is an alias > for /dev/zero. The reason it's full of NULLs: > > death:/mnt# xfs_bmap -vv foo > foo: no extents > death:/mnt# > > It's a hole. Nothing has been flushed out to disk. 
> > Interesting - the inode is leaving pipe_to_file() dirty, the page is > dirty, the buffer head is dirty, delay, mapped and uptodate. The > page is the only page in the radix tree and the radix tree is marked > dirty. > > But it never gets flushed out. Even when I use dd to seek past the > first disk block and write further into the file, I still end up > with a hole in the range where the original splice write should > be which means it was no longer in the page cache. > > Copying a large file I can see dirty memory increase to tens of > megabytes. Nothing is going to disk, writeback is not going above > zero. Interestingly, when the write completes, the size of the page > cache drops by almost exactly the size of the file being written - > almost like a truncate_inode_pages() is occurring on file close. > > Oh, look - we _are_ tossing away all the pages on close. > > xfs_splice_write() hasn't updated the xfs inode size when extending the > file. The linux inode has the correct value, but xfs thinks that it's > only got a speculative allocation EOF (i.e. 0) so we invalidate it > before it gets to disk. > > The patch below just copies some code out of xfs_write() where it updates > the xfs inode size and drops it in xfs_splice_write(). It's almost certainly not > the right fix, but the bucket under the pipe will now catch most of the > bits.... > > Cheers, > > Dave. > I can confirm that this patch allows splice-cp to work as expected! Thanks all!
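For readers following this thread outside the kernel tree: the splice-cp program being tested stages data through a pipe, because splice(2) requires one end of every transfer to be a pipe. The sketch below is an illustration of that pattern only, not the actual splice-cp source from Jens's tarball; the function name and buffer size are arbitrary.

```c
/* Minimal splice(2)-based file copy (Linux-only), in the spirit of the
 * splice-cp test program discussed above.  Data moves file -> pipe ->
 * file; neither side is read into userspace buffers. */
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Copy src to dst using only splice(); returns 0 on success, -1 on error. */
int splice_copy(const char *src, const char *dst)
{
    int ret = -1;
    int pfd[2] = { -1, -1 };
    int in = open(src, O_RDONLY);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (in < 0 || out < 0 || pipe(pfd) < 0)
        goto out_close;

    for (;;) {
        /* Pull the next chunk of the source file into the pipe. */
        ssize_t n = splice(in, NULL, pfd[1], NULL, 65536, SPLICE_F_MOVE);
        if (n < 0)
            goto out_close;
        if (n == 0)
            break;                      /* EOF */
        /* Drain the pipe into the destination; may take several calls. */
        while (n > 0) {
            ssize_t m = splice(pfd[0], NULL, out, NULL, (size_t)n,
                               SPLICE_F_MOVE);
            if (m <= 0)
                goto out_close;
            n -= m;
        }
    }
    ret = 0;
out_close:
    if (pfd[0] >= 0) { close(pfd[0]); close(pfd[1]); }
    if (in >= 0)
        close(in);
    if (out >= 0)
        close(out);
    return ret;
}
```

A copy made this way is exactly the workload that exposed the bug: on an affected kernel the destination ends up the right length but reads back as zeros, which `xfs_bmap -vv` reports as a file with no extents.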
-- Jeffrey Hundstad From owner-xfs@oss.sgi.com Sun Sep 3 17:18:16 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Sep 2006 17:18:36 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k840I3DW002413 for ; Sun, 3 Sep 2006 17:18:15 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA17481; Mon, 4 Sep 2006 10:17:15 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k840HDgw3331177; Mon, 4 Sep 2006 10:17:13 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k840HB413272533; Mon, 4 Sep 2006 10:17:11 +1000 (EST) Date: Mon, 4 Sep 2006 10:17:11 +1000 From: Nathan Scott To: lachlan@sgi.com Cc: xfs@oss.sgi.com Subject: review: minor cleanup in xfs_read locking Message-ID: <20060904101711.A3331169@wobbly.melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i X-archive-position: 8871 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: xfs Content-Length: 985 Lines: 35 Hi Lachlan, Could you check this for me - it just folds the second direct I/O conditional added in your recent deadlock fix back into the prior branch, which is also direct I/O specific... thanks. 
-- Nathan Index: xfs-linux/linux-2.6/xfs_lrw.c =================================================================== --- xfs-linux.orig/linux-2.6/xfs_lrw.c 2006-09-04 09:59:10.955973000 +1000 +++ xfs-linux/linux-2.6/xfs_lrw.c 2006-09-04 09:59:42.205926000 +1000 @@ -270,12 +270,12 @@ xfs_read( } } - if (unlikely((ioflags & IO_ISDIRECT) && VN_CACHED(vp))) - bhv_vop_flushinval_pages(vp, ctooff(offtoct(*offset)), - -1, FI_REMAPF_LOCKED); - - if (unlikely(ioflags & IO_ISDIRECT)) + if (unlikely((ioflags & IO_ISDIRECT))) { + if (VN_CACHED(vp)) + bhv_vop_flushinval_pages(vp, ctooff(offtoct(*offset)), + -1, FI_REMAPF_LOCKED); mutex_unlock(&inode->i_mutex); + } xfs_rw_enter_trace(XFS_READ_ENTER, &ip->i_iocore, (void *)iovp, segs, *offset, ioflags); From owner-xfs@oss.sgi.com Sun Sep 3 18:10:49 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Sep 2006 18:11:08 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k841AaDW008320 for ; Sun, 3 Sep 2006 18:10:48 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA18407; Mon, 4 Sep 2006 11:09:49 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k8419lgw3328556; Mon, 4 Sep 2006 11:09:48 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k8419jrC3301042; Mon, 4 Sep 2006 11:09:45 +1000 (EST) Date: Mon, 4 Sep 2006 11:09:45 +1000 From: Nathan Scott To: Lachlan McIlroy Cc: xfs@oss.sgi.com Subject: Re: review: minor cleanup in xfs_read locking Message-ID: <20060904110945.A3329063@wobbly.melbourne.sgi.com> References: <20060904101711.A3331169@wobbly.melbourne.sgi.com> <44FB75CB.8050809@sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline 
User-Agent: Mutt/1.2.5i In-Reply-To: <44FB75CB.8050809@sgi.com>; from lachlan@sgi.com on Mon, Sep 04, 2006 at 01:39:39AM +0100 X-archive-position: 8872 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: xfs Content-Length: 273 Lines: 12 On Mon, Sep 04, 2006 at 01:39:39AM +0100, Lachlan McIlroy wrote: > Looking a little closer... you could probably do away with the extra > pair of parentheses in the call to unlikely(). > Done, thanks - I'll push in most of my pending stuff shortly. cheers. -- Nathan From owner-xfs@oss.sgi.com Sun Sep 3 18:30:45 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Sep 2006 18:30:54 -0700 (PDT) Received: from omx1.americas.sgi.com (omx1.americas.sgi.com [198.149.16.13]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k841UYDW010939 for ; Sun, 3 Sep 2006 18:30:45 -0700 Received: from internal-mail-relay1.corp.sgi.com (internal-mail-relay1.corp.sgi.com [198.149.32.52]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with ESMTP id k840UTnx017092 for ; Sun, 3 Sep 2006 19:30:29 -0500 Received: from [134.15.160.1] (vpn-emea-sw-emea-160-1.emea.sgi.com [134.15.160.1]) by internal-mail-relay1.corp.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id k840U78s36683850; Sun, 3 Sep 2006 17:30:08 -0700 (PDT) Message-ID: <44FB73FA.6010400@sgi.com> Date: Mon, 04 Sep 2006 01:31:54 +0100 From: Lachlan McIlroy Reply-To: lachlan@sgi.com Organization: SGI User-Agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.7.12) Gecko/20050920 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Nathan Scott CC: xfs@oss.sgi.com Subject: Re: review: minor cleanup in xfs_read locking References: <20060904101711.A3331169@wobbly.melbourne.sgi.com> In-Reply-To: <20060904101711.A3331169@wobbly.melbourne.sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit 
X-archive-position: 8873 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs Content-Length: 436 Lines: 16 Looks good Nathan. I've made changes to check return codes from bhv_vop_flushinval_pages() and friends so it's now dependent on this change. I'll post a review as soon as your change has gone in. Nathan Scott wrote: > Hi Lachlan, > > Could you check this for me - it just folds the second direct I/O > conditional added in your recent deadlock fix back into the prior > branch, which is also direct I/O specific... > > thanks. > From owner-xfs@oss.sgi.com Sun Sep 3 18:32:59 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Sep 2006 18:33:15 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k841WkDW011352 for ; Sun, 3 Sep 2006 18:32:57 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA18914; Mon, 4 Sep 2006 11:31:57 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16302) id 3ED5158CF851; Mon, 4 Sep 2006 11:31:56 +1000 (EST) To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 955302 - fix warnings Message-Id: <20060904013157.3ED5158CF851@chook.melbourne.sgi.com> Date: Mon, 4 Sep 2006 11:31:56 +1000 (EST) From: nathans@sgi.com (Nathan Scott) X-archive-position: 8875 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: xfs Content-Length: 1173 Lines: 24 Fix kmem_zalloc_greedy warnings on 64 bit platforms. 
Date: Mon Sep 4 11:31:03 AEST 2006 Workarea: chook.melbourne.sgi.com:/build/nathans/xfs-linux Inspected by: lachlan,vapo The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-kern/xfs-linux-melb Modid: xfs-linux-melb:xfs-kern:26907a xfs_itable.c - 1.148 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_itable.c.diff?r1=text&tr1=1.148&r2=text&tr2=1.147&f=h xfs_vfsops.c - 1.511 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_vfsops.c.diff?r1=text&tr1=1.511&r2=text&tr2=1.510&f=h xfs_mount.h - 1.227 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_mount.h.diff?r1=text&tr1=1.227&r2=text&tr2=1.226&f=h quota/xfs_qm.c - 1.44 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/quota/xfs_qm.c.diff?r1=text&tr1=1.44&r2=text&tr2=1.43&f=h linux-2.6/xfs_ksyms.c - 1.51 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_ksyms.c.diff?r1=text&tr1=1.51&r2=text&tr2=1.50&f=h linux-2.4/xfs_ksyms.c - 1.46 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.4/xfs_ksyms.c.diff?r1=text&tr1=1.46&r2=text&tr2=1.45&f=h From owner-xfs@oss.sgi.com Sun Sep 3 18:31:32 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Sep 2006 18:31:36 -0700 (PDT) Received: from omx1.americas.sgi.com (omx1.americas.sgi.com [198.149.16.13]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k841VLDW011078 for ; Sun, 3 Sep 2006 18:31:31 -0700 Received: from internal-mail-relay1.corp.sgi.com (internal-mail-relay1.corp.sgi.com [198.149.32.52]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with ESMTP id k840cDnx017901 for ; Sun, 3 Sep 2006 19:38:13 -0500 Received: from [134.15.160.1] (vpn-emea-sw-emea-160-1.emea.sgi.com [134.15.160.1]) by internal-mail-relay1.corp.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id k840bq8s36682405; Sun, 3 Sep 2006 17:37:52 -0700 (PDT) Message-ID: <44FB75CB.8050809@sgi.com> Date: Mon, 04 Sep 2006 01:39:39 +0100 From: Lachlan McIlroy 
Reply-To: lachlan@sgi.com Organization: SGI User-Agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.7.12) Gecko/20050920 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Nathan Scott CC: xfs@oss.sgi.com Subject: Re: review: minor cleanup in xfs_read locking References: <20060904101711.A3331169@wobbly.melbourne.sgi.com> In-Reply-To: <20060904101711.A3331169@wobbly.melbourne.sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8874 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs Content-Length: 354 Lines: 13 Looking a little closer... you could probably do away with the extra pair of parentheses in the call to unlikely(). Nathan Scott wrote: > Hi Lachlan, > > Could you check this for me - it just folds the second direct I/O > conditional added in your recent deadlock fix back into the prior > branch, which is also direct I/O specific... > > thanks. 
> From owner-xfs@oss.sgi.com Sun Sep 3 18:37:37 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 03 Sep 2006 18:37:44 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k841bODW012301 for ; Sun, 3 Sep 2006 18:37:36 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA19011 for ; Mon, 4 Sep 2006 11:36:41 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16302) id AB70258CF851; Mon, 4 Sep 2006 11:36:40 +1000 (EST) To: linux-xfs@oss.sgi.com Subject: TAKE 955696 - cleanup, xfs_read Message-Id: <20060904013640.AB70258CF851@chook.melbourne.sgi.com> Date: Mon, 4 Sep 2006 11:36:40 +1000 (EST) From: nathans@sgi.com (Nathan Scott) X-archive-position: 8876 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: xfs Content-Length: 480 Lines: 14 Minor cleanup from dio locking fix, remove an extra conditional. 
Date: Mon Sep 4 11:36:19 AEST 2006 Workarea: chook.melbourne.sgi.com:/build/nathans/xfs-linux Inspected by: lachlan The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-kern/xfs-linux-melb Modid: xfs-linux-melb:xfs-kern:26908a linux-2.6/xfs_lrw.c - 1.250 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_lrw.c.diff?r1=text&tr1=1.250&r2=text&tr2=1.249&f=h From owner-xfs@oss.sgi.com Mon Sep 4 04:24:08 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Sep 2006 04:24:19 -0700 (PDT) Received: from imr2.americas.sgi.com (imr2.americas.sgi.com [198.149.16.18]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k84BNvDW004644 for ; Mon, 4 Sep 2006 04:24:08 -0700 Received: from [134.15.160.13] (vpn-emea-sw-emea-160-13.emea.sgi.com [134.15.160.13]) by imr2.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id k84BNqDu51075438 for ; Mon, 4 Sep 2006 04:23:53 -0700 (PDT) Message-ID: <44FC0D0F.60403@sgi.com> Date: Mon, 04 Sep 2006 12:25:03 +0100 From: Lachlan McIlroy Reply-To: lachlan@sgi.com Organization: SGI User-Agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.7.12) Gecko/20050920 X-Accept-Language: en-us, en MIME-Version: 1.0 To: xfs@oss.sgi.com Subject: review: propogate return codes from flush routines Content-Type: multipart/mixed; boundary="------------070105020404090905000704" X-archive-position: 8880 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lachlan@sgi.com Precedence: bulk X-list: xfs Content-Length: 10601 Lines: 319 This is a multi-part message in MIME format. --------------070105020404090905000704 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit Here's a patch to handle error return values in fs_flush_pages and fs_flushinval_pages. It changes the prototype of fs_flushinval_pages so we can propogate the errors and handle them at higher layers. 
I also modified xfs_itruncate_start so that it could propogate the error further. I've changed the necessary prototypes on 2.4 to keep the build happy but haven't bothered to fix the error handling in fs_flush_pages or fs_flushinval_pages for 2.4. The motivation behind this change was the recent BUG reported due to a direct I/O read trying to write to delayed alloc extents. While the exact cause of this problem is not known it is possible that fs_flushinval_pages ignored an error while flushing, truncated the pages on the file anyway, and failed to convert all delayed alloc extents. Lachlan --------------070105020404090905000704 Content-Type: text/plain; name="flush.patch" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="flush.patch" --- fs/xfs/linux-2.4/xfs_fs_subr.c_1.48 2006-09-04 11:55:28.000000000 +0100 +++ fs/xfs/linux-2.4/xfs_fs_subr.c 2006-09-04 11:54:38.000000000 +0100 @@ -35,7 +35,7 @@ truncate_inode_pages(ip->i_mapping, first); } -void +int fs_flushinval_pages( bhv_desc_t *bdp, xfs_off_t first, @@ -53,6 +53,7 @@ filemap_fdatawait(ip->i_mapping); truncate_inode_pages(ip->i_mapping, first); } + return 0; } int --- fs/xfs/linux-2.4/xfs_fs_subr.h_1.17 2006-09-04 11:55:52.000000000 +0100 +++ fs/xfs/linux-2.4/xfs_fs_subr.h 2006-09-04 11:55:24.000000000 +0100 @@ -23,7 +23,7 @@ extern int fs_nosys(void); extern void fs_noval(void); extern void fs_tosspages(bhv_desc_t *, xfs_off_t, xfs_off_t, int); -extern void fs_flushinval_pages(bhv_desc_t *, xfs_off_t, xfs_off_t, int); +extern int fs_flushinval_pages(bhv_desc_t *, xfs_off_t, xfs_off_t, int); extern int fs_flush_pages(bhv_desc_t *, xfs_off_t, xfs_off_t, uint64_t, int); #endif /* __XFS_FS_SUBR_H__ */ --- fs/xfs/linux-2.4/xfs_vnode.h_1.113 2006-09-04 11:56:17.000000000 +0100 +++ fs/xfs/linux-2.4/xfs_vnode.h 2006-09-04 11:56:51.000000000 +0100 @@ -183,7 +183,7 @@ typedef void (*vop_link_removed_t)(bhv_desc_t *, bhv_vnode_t *, int); typedef void (*vop_vnode_change_t)(bhv_desc_t *, 
bhv_vchange_t, __psint_t); typedef void (*vop_ptossvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t, int); -typedef void (*vop_pflushinvalvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t, int); +typedef int (*vop_pflushinvalvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t, int); typedef int (*vop_pflushvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t, uint64_t, int); typedef int (*vop_iflush_t)(bhv_desc_t *, int); --- fs/xfs/linux-2.6/xfs_fs_subr.c_1.47 2006-09-01 16:34:01.000000000 +0100 +++ fs/xfs/linux-2.6/xfs_fs_subr.c 2006-09-01 16:36:00.000000000 +0100 @@ -35,7 +35,7 @@ truncate_inode_pages(ip->i_mapping, first); } -void +int fs_flushinval_pages( bhv_desc_t *bdp, xfs_off_t first, @@ -44,13 +44,16 @@ { bhv_vnode_t *vp = BHV_TO_VNODE(bdp); struct inode *ip = vn_to_inode(vp); + int ret = 0; if (VN_CACHED(vp)) { if (VN_TRUNC(vp)) VUNTRUNCATE(vp); - filemap_write_and_wait(ip->i_mapping); - truncate_inode_pages(ip->i_mapping, first); + ret = filemap_write_and_wait(ip->i_mapping); + if (!ret) + truncate_inode_pages(ip->i_mapping, first); } + return ret; } int @@ -63,14 +66,14 @@ { bhv_vnode_t *vp = BHV_TO_VNODE(bdp); struct inode *ip = vn_to_inode(vp); + int ret = 0; if (VN_DIRTY(vp)) { if (VN_TRUNC(vp)) VUNTRUNCATE(vp); - filemap_fdatawrite(ip->i_mapping); - if (flags & XFS_B_ASYNC) - return 0; - filemap_fdatawait(ip->i_mapping); + ret = filemap_fdatawrite(ip->i_mapping); + if (!ret && !(flags & XFS_B_ASYNC)) + ret = filemap_fdatawait(ip->i_mapping); } - return 0; + return ret; } --- fs/xfs/linux-2.6/xfs_fs_subr.h_1.13 2006-09-01 18:24:35.000000000 +0100 +++ fs/xfs/linux-2.6/xfs_fs_subr.h 2006-09-01 17:08:16.000000000 +0100 @@ -23,7 +23,7 @@ extern int fs_nosys(void); extern void fs_noval(void); extern void fs_tosspages(bhv_desc_t *, xfs_off_t, xfs_off_t, int); -extern void fs_flushinval_pages(bhv_desc_t *, xfs_off_t, xfs_off_t, int); +extern int fs_flushinval_pages(bhv_desc_t *, xfs_off_t, xfs_off_t, int); extern int fs_flush_pages(bhv_desc_t *, xfs_off_t, xfs_off_t, uint64_t, int); #endif /* 
__XFS_FS_SUBR_H__ */ --- fs/xfs/linux-2.6/xfs_lrw.c_1.250 2006-09-04 11:03:51.000000000 +0100 +++ fs/xfs/linux-2.6/xfs_lrw.c 2006-09-04 11:05:20.000000000 +0100 @@ -200,7 +200,7 @@ struct file *file = iocb->ki_filp; struct inode *inode = file->f_mapping->host; size_t size = 0; - ssize_t ret; + ssize_t ret = 0; xfs_fsize_t n; xfs_inode_t *ip; xfs_mount_t *mp; @@ -272,9 +272,13 @@ if (unlikely(ioflags & IO_ISDIRECT)) { if (VN_CACHED(vp)) - bhv_vop_flushinval_pages(vp, ctooff(offtoct(*offset)), + ret = bhv_vop_flushinval_pages(vp, ctooff(offtoct(*offset)), -1, FI_REMAPF_LOCKED); mutex_unlock(&inode->i_mutex); + if (ret) { + xfs_iunlock(ip, XFS_IOLOCK_SHARED); + return ret; + } } xfs_rw_enter_trace(XFS_READ_ENTER, &ip->i_iocore, @@ -802,8 +806,10 @@ if (need_flush) { xfs_inval_cached_trace(io, pos, -1, ctooff(offtoct(pos)), -1); - bhv_vop_flushinval_pages(vp, ctooff(offtoct(pos)), + error = bhv_vop_flushinval_pages(vp, ctooff(offtoct(pos)), -1, FI_REMAPF_LOCKED); + if (error) + goto out_unlock_internal; } if (need_i_mutex) { --- fs/xfs/linux-2.6/xfs_vnode.h_1.125 2006-09-01 18:11:19.000000000 +0100 +++ fs/xfs/linux-2.6/xfs_vnode.h 2006-09-01 18:12:32.000000000 +0100 @@ -196,7 +196,7 @@ typedef void (*vop_link_removed_t)(bhv_desc_t *, bhv_vnode_t *, int); typedef void (*vop_vnode_change_t)(bhv_desc_t *, bhv_vchange_t, __psint_t); typedef void (*vop_ptossvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t, int); -typedef void (*vop_pflushinvalvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t, int); +typedef int (*vop_pflushinvalvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t, int); typedef int (*vop_pflushvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t, uint64_t, int); typedef int (*vop_iflush_t)(bhv_desc_t *, int); --- fs/xfs/xfs_dfrag.c_1.55 2006-09-01 18:25:24.000000000 +0100 +++ fs/xfs/xfs_dfrag.c 2006-09-01 16:46:00.000000000 +0100 @@ -200,7 +200,9 @@ if (VN_CACHED(tvp) != 0) { xfs_inval_cached_trace(&tip->i_iocore, 0, -1, 0, -1); - bhv_vop_flushinval_pages(tvp, 0, -1, FI_REMAPF_LOCKED); + error = 
bhv_vop_flushinval_pages(tvp, 0, -1, FI_REMAPF_LOCKED); + if (error) + goto error0; } /* Verify O_DIRECT for ftmp */ --- fs/xfs/xfs_inode.c_1.451 2006-09-01 18:25:49.000000000 +0100 +++ fs/xfs/xfs_inode.c 2006-09-01 16:52:40.000000000 +0100 @@ -1421,7 +1421,7 @@ * must be called again with all the same restrictions as the initial * call. */ -void +int xfs_itruncate_start( xfs_inode_t *ip, uint flags, @@ -1431,6 +1431,7 @@ xfs_off_t toss_start; xfs_mount_t *mp; bhv_vnode_t *vp; + int error = 0; ASSERT(ismrlocked(&ip->i_iolock, MR_UPDATE) != 0); ASSERT((new_size == 0) || (new_size <= ip->i_d.di_size)); @@ -1468,7 +1469,7 @@ * file size, so there is no way that the data extended * out there. */ - return; + return 0; } last_byte = xfs_file_last_byte(ip); xfs_itrunc_trace(XFS_ITRUNC_START, ip, flags, new_size, toss_start, @@ -1477,7 +1478,7 @@ if (flags & XFS_ITRUNC_DEFINITE) { bhv_vop_toss_pages(vp, toss_start, -1, FI_REMAPF_LOCKED); } else { - bhv_vop_flushinval_pages(vp, toss_start, -1, FI_REMAPF_LOCKED); + error = bhv_vop_flushinval_pages(vp, toss_start, -1, FI_REMAPF_LOCKED); } } @@ -1486,6 +1487,7 @@ ASSERT(VN_CACHED(vp) == 0); } #endif + return error; } /* --- fs/xfs/xfs_inode.h_1.215 2006-09-01 18:26:15.000000000 +0100 +++ fs/xfs/xfs_inode.h 2006-09-01 16:53:11.000000000 +0100 @@ -439,7 +439,7 @@ uint xfs_dic2xflags(struct xfs_dinode_core *); int xfs_ifree(struct xfs_trans *, xfs_inode_t *, struct xfs_bmap_free *); -void xfs_itruncate_start(xfs_inode_t *, uint, xfs_fsize_t); +int xfs_itruncate_start(xfs_inode_t *, uint, xfs_fsize_t); int xfs_itruncate_finish(struct xfs_trans **, xfs_inode_t *, xfs_fsize_t, int, int); int xfs_iunlink(struct xfs_trans *, xfs_inode_t *); --- fs/xfs/xfs_utils.c_1.72 2006-09-01 18:26:39.000000000 +0100 +++ fs/xfs/xfs_utils.c 2006-09-01 16:55:21.000000000 +0100 @@ -420,7 +420,11 @@ * in a transaction. 
*/ xfs_ilock(ip, XFS_IOLOCK_EXCL); - xfs_itruncate_start(ip, XFS_ITRUNC_DEFINITE, (xfs_fsize_t)0); + error = xfs_itruncate_start(ip, XFS_ITRUNC_DEFINITE, (xfs_fsize_t)0); + if (error) { + xfs_iunlock(ip, XFS_IOLOCK_EXCL); + return error; + } tp = xfs_trans_alloc(mp, XFS_TRANS_TRUNCATE_FILE); if ((error = xfs_trans_reserve(tp, 0, XFS_ITRUNCATE_LOG_RES(mp), 0, --- fs/xfs/xfs_vfsops.c_1.511 2006-09-04 11:06:00.000000000 +0100 +++ fs/xfs/xfs_vfsops.c 2006-09-04 11:00:50.000000000 +0100 @@ -1150,7 +1150,7 @@ if (XFS_FORCED_SHUTDOWN(mp)) { bhv_vop_toss_pages(vp, 0, -1, FI_REMAPF); } else { - bhv_vop_flushinval_pages(vp, 0, -1, FI_REMAPF); + error = bhv_vop_flushinval_pages(vp, 0, -1, FI_REMAPF); } xfs_ilock(ip, XFS_ILOCK_SHARED); --- fs/xfs/xfs_vnodeops.c_1.682 2006-09-01 18:27:04.000000000 +0100 +++ fs/xfs/xfs_vnodeops.c 2006-09-04 01:11:37.000000000 +0100 @@ -1258,8 +1258,12 @@ * do that within a transaction. */ xfs_ilock(ip, XFS_IOLOCK_EXCL); - xfs_itruncate_start(ip, XFS_ITRUNC_DEFINITE, + error = xfs_itruncate_start(ip, XFS_ITRUNC_DEFINITE, ip->i_d.di_size); + if (error) { + xfs_iunlock(ip, XFS_IOLOCK_EXCL); + return error; + } error = xfs_trans_reserve(tp, 0, XFS_ITRUNCATE_LOG_RES(mp), @@ -1676,7 +1680,11 @@ */ xfs_ilock(ip, XFS_IOLOCK_EXCL); - xfs_itruncate_start(ip, XFS_ITRUNC_DEFINITE, 0); + error = xfs_itruncate_start(ip, XFS_ITRUNC_DEFINITE, 0); + if (error) { + xfs_iunlock(ip, XFS_IOLOCK_EXCL); + return VN_INACTIVE_CACHE; + } error = xfs_trans_reserve(tp, 0, XFS_ITRUNCATE_LOG_RES(mp), @@ -4332,8 +4340,10 @@ if (VN_CACHED(vp) != 0) { xfs_inval_cached_trace(&ip->i_iocore, ioffset, -1, ctooff(offtoct(ioffset)), -1); - bhv_vop_flushinval_pages(vp, ctooff(offtoct(ioffset)), + error = bhv_vop_flushinval_pages(vp, ctooff(offtoct(ioffset)), -1, FI_REMAPF_LOCKED); + if (error) + goto out_unlock_iolock; } /* --------------070105020404090905000704-- From owner-xfs@oss.sgi.com Mon Sep 4 19:00:52 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 04 Sep 2006 19:01:15 
-0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8520dDW005141 for ; Mon, 4 Sep 2006 19:00:50 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA19420; Tue, 5 Sep 2006 11:59:49 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 334BF58CF851; Tue, 5 Sep 2006 11:59:49 +1000 (EST) To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 955939 - writing by splice() doesn't work in 2.6.17+ Message-Id: <20060905015949.334BF58CF851@chook.melbourne.sgi.com> Date: Tue, 5 Sep 2006 11:59:49 +1000 (EST) From: dgc@sgi.com (David Chinner) X-archive-position: 8882 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 811 Lines: 22 Fix xfs_splice_write() so appended data gets to disk. xfs_splice_write() failed to update the on disk inode size when extending the file, so when the file was closed the range extended by splice was truncated off. Hence any region of a file written to by splice would end up as a hole full of zeros. Date: Tue Sep 5 11:58:45 AEST 2006 Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs Inspected by: lachlan The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:26920a fs/xfs/linux-2.6/xfs_lrw.c - 1.251 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_lrw.c.diff?r1=text&tr1=1.251&r2=text&tr2=1.250&f=h - Update xfs inode size if xfs_splice_write is writing beyond the end of the current file.
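The TAKE above compresses a subtle invariant: XFS tracks its own on-disk file size separately from the size the generic Linux inode reports, and space beyond the on-disk size can be treated as speculative preallocation and trimmed when the file is closed. The sketch below illustrates only that general two-sizes rule; the struct and field names are hypothetical, not the real XFS inode fields, and real locking and transaction handling are omitted.

```c
#include <assert.h>

/* Hypothetical in-core inode state.  'vfs_size' stands in for the size
 * the generic VFS layer tracks; 'disk_size' for the size the filesystem
 * will write back to the on-disk inode. */
struct demo_inode {
    long long vfs_size;
    long long disk_size;
};

/* After a write ending at 'new_eof', both sizes must cover the new data.
 * If disk_size is left behind -- the xfs_splice_write() bug -- close-time
 * cleanup can treat the tail as unused preallocation and remove it,
 * leaving a hole full of zeros. */
void demo_update_size(struct demo_inode *ip, long long new_eof)
{
    if (new_eof > ip->vfs_size)
        ip->vfs_size = new_eof;
    if (new_eof > ip->disk_size)
        ip->disk_size = new_eof;
}
```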
From owner-xfs@oss.sgi.com Tue Sep 5 00:54:51 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Sep 2006 00:55:13 -0700 (PDT) Received: from smtp3.adl2.internode.on.net (smtp3.adl2.internode.on.net [203.16.214.203]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k857snDW006978 for ; Tue, 5 Sep 2006 00:54:50 -0700 Received: from saturn.flamingspork.com (ppp163-199.static.internode.on.net [150.101.163.199]) by smtp3.adl2.internode.on.net (8.13.6/8.13.5) with ESMTP id k857s9T9049644; Tue, 5 Sep 2006 17:24:10 +0930 (CST) (envelope-from stewart@flamingspork.com) Received: from localhost.localdomain (saturn.flamingspork.com [127.0.0.1]) by saturn.flamingspork.com (Postfix) with ESMTP id CFAAFC4055A; Tue, 5 Sep 2006 17:54:09 +1000 (EST) Received: by localhost.localdomain (Postfix, from userid 1000) id ACE8E147A386; Tue, 5 Sep 2006 17:54:09 +1000 (EST) Subject: Re: review: propogate return codes from flush routines From: Stewart Smith To: lachlan@sgi.com Cc: xfs@oss.sgi.com In-Reply-To: <44FC0D0F.60403@sgi.com> References: <44FC0D0F.60403@sgi.com> Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="=-4A1jMXZoNsX4cSbdR8hr" Date: Tue, 05 Sep 2006 17:54:08 +1000 Message-Id: <1157442848.5844.38.camel@localhost.localdomain> Mime-Version: 1.0 X-Mailer: Evolution 2.6.1 X-archive-position: 8884 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stewart@flamingspork.com Precedence: bulk X-list: xfs Content-Length: 1630 Lines: 45 --=-4A1jMXZoNsX4cSbdR8hr Content-Type: text/plain Content-Transfer-Encoding: quoted-printable On Mon, 2006-09-04 at 12:25 +0100, Lachlan McIlroy wrote: > Here's a patch to handle error return values in fs_flush_pages and > fs_flushinval_pages. It changes the prototype of fs_flushinval_pages > so we can propogate the errors and handle them at higher layers. 
I also > modified xfs_itruncate_start so that it could propogate the error further. IMHO this is always a good idea. Although I guess the only concern can be getting the right error back (and a useful one). > The motivation behind this change was the recent BUG reported due to a > direct I/O read trying to write to delayed alloc extents. While the exact > cause of this problem is not known it is possible that fs_flushinval_pages > ignored an error while flushing, truncated the pages on the file anyway, > and failed to convert all delayed alloc extents. From a quick look the patch seems to do as advertised. I probably just haven't looked hard enough - but I'm assuming the layers higher up deal with the error and: report to user, write log message or something if there's a really catastrophic error? -- Stewart Smith (stewart@flamingspork.com) http://www.flamingspork.com/ --=-4A1jMXZoNsX4cSbdR8hr Content-Type: application/pgp-signature; name=signature.asc Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2.2 (GNU/Linux) iD8DBQBE/S0gKglWCUL+FDoRAmYOAKDWkSrawcugWkypcl3U+uhCnm9YtACeIlKY VxE8pVDXIERtN4mRxSKH/1o= =AvC+ -----END PGP SIGNATURE----- --=-4A1jMXZoNsX4cSbdR8hr-- From owner-xfs@oss.sgi.com Tue Sep 5 15:31:37 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Sep 2006 15:31:57 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k85MVODW021811 for ; Tue, 5 Sep 2006 15:31:35 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id IAA16176; Wed, 6 Sep 2006 08:30:33 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k85MUVgw3383132; Wed, 6 Sep 2006 08:30:31 +1000 (EST) Received: (from nathans@localhost)
by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k85MUSfP3385425; Wed, 6 Sep 2006 08:30:28 +1000 (EST) Date: Wed, 6 Sep 2006 08:30:28 +1000 From: Nathan Scott To: Chris Seufert Cc: xfs@oss.sgi.com Subject: Re: Kernel Ooops Message-ID: <20060906083028.I3365803@wobbly.melbourne.sgi.com> References: <2260b150609050427p3123cb85q5af484d8b907e6ac@mail.gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: <2260b150609050427p3123cb85q5af484d8b907e6ac@mail.gmail.com>; from seufert@gmail.com on Tue, Sep 05, 2006 at 09:27:04PM +1000 X-archive-position: 8889 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: xfs Content-Length: 412 Lines: 16 On Tue, Sep 05, 2006 at 09:27:04PM +1000, Chris Seufert wrote: > I have had this one before, and i had assumed it had been fixed. > > System seems very stable running ext3, so i dont 'think' its hardware > related, but i am begining to wonder. This is fixed, what kernel version are you using? (it was fixed in -rc5/6 IIRC). > RIP [] xfs_btree_init_cursor+0x48/0x1bd cheers. 
-- Nathan From owner-xfs@oss.sgi.com Tue Sep 5 15:35:59 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Sep 2006 15:36:15 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k85MZkDW022405 for ; Tue, 5 Sep 2006 15:35:58 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id IAA16223; Wed, 6 Sep 2006 08:34:54 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k85MYogw3384341; Wed, 6 Sep 2006 08:34:51 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k85MYmtt3382131; Wed, 6 Sep 2006 08:34:48 +1000 (EST) Date: Wed, 6 Sep 2006 08:34:48 +1000 From: Nathan Scott To: Roger Willcocks Cc: xfs@oss.sgi.com Subject: race in xfs_rename? (fwd) Message-ID: <20060906083448.J3365803@wobbly.melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i X-archive-position: 8890 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nathans@xfs.org Precedence: bulk X-list: xfs Content-Length: 1100 Lines: 44 Hi Roger, I'm gonna be rude and fwd your mail to the list - in the hope someone there will be able to help you. I'm running out of time @sgi and have a bunch of stuff still to get done before I skip outta here - having to look at the xfs_rename locking right now might just be enough to make my head explode. ;) cheers. ----- Forwarded message from Roger Willcocks ----- Date: 05 Sep 2006 14:30:30 +0100 To: nathans@sgi.com X-Mailer: Ximian Evolution 1.2.2 (1.2.2-4) From: Roger Willcocks Subject: race in xfs_rename? 
Hi Nathan, I think I must be missing something here: xfs_rename calls xfs_lock_for_rename, which i-locks the source file and directory, target directory, and (if it already exists) the target file. It returns a two-to-four entry list of participating inodes. xfs_rename unlocks them all, creates a transaction, and then locks them all again. Surely while they're unlocked, another processor could jump in and fiddle with the underlying files and directories? -- Roger ----- End forwarded message ----- -- Nathan From owner-xfs@oss.sgi.com Tue Sep 5 16:15:27 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Sep 2006 16:15:47 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k85NFEDW028040 for ; Tue, 5 Sep 2006 16:15:25 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA16936; Wed, 6 Sep 2006 09:14:18 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k85NEDgw3385030; Wed, 6 Sep 2006 09:14:14 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k85NE8rc3385132; Wed, 6 Sep 2006 09:14:08 +1000 (EST) Date: Wed, 6 Sep 2006 09:14:08 +1000 From: Nathan Scott To: Richard Knutsson Cc: akpm@osdl.org, xfs@oss.sgi.com, linux-kernel@vger.kernel.org Subject: Re: [xfs-masters] Re: [PATCH 2.6.18-rc4-mm3 2/2] fs/xfs: Converting into generic boolean Message-ID: <20060906091407.M3365803@wobbly.melbourne.sgi.com> References: <44F833C9.1000208@student.ltu.se> <20060904150241.I3335706@wobbly.melbourne.sgi.com> <44FBFEE9.4010201@student.ltu.se> <20060905130557.A3334712@wobbly.melbourne.sgi.com> <44FD71C6.20006@student.ltu.se> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i 
In-Reply-To: <44FD71C6.20006@student.ltu.se>; from ricknu-0@student.ltu.se on Tue, Sep 05, 2006 at 02:47:02PM +0200 X-archive-position: 8891 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: xfs Content-Length: 1049 Lines: 36 On Tue, Sep 05, 2006 at 02:47:02PM +0200, Richard Knutsson wrote: > Just the notion: "your" guys was the ones to make those to boolean(_t), Sort of, we actually inherited that type from IRIX where it is defined in . > and now you seem to want to patch them away because I tried to make them > more general. Nah, I just don't see the value either way, and see it as another code churn exercise. > So, is the: > B_FALSE -> false > B_TRUE -> true > ok by you? Personally, no. Thats code churn with no value IMO. > >"int needflush;" is just as readable (some would argue moreso) as > >"bool needflush;" and thats pretty much the level of use in XFS - > > > How are you sure "needflush" is, for example, not a counter? Well, that would be named "flushcount" or some such thing. And you would be able to tell that it was a counter by the way its used in the surrounding code. This discussion really isn't going anywhere useful; I think you need to accept that not everyone sees value in a boolean type. :) cheers. 
-- Nathan From owner-xfs@oss.sgi.com Tue Sep 5 16:31:02 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Sep 2006 16:31:17 -0700 (PDT) Received: from wx-out-0506.google.com (wx-out-0506.google.com [66.249.82.229]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k85NV1DW002138 for ; Tue, 5 Sep 2006 16:31:02 -0700 Received: by wx-out-0506.google.com with SMTP id h29so2348870wxd for ; Tue, 05 Sep 2006 16:30:26 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=T3mdH0vnK4Ux55XJTY2Bw2bGJ1Ab5iexjJSzvGsLOJiw1dtRjhT3rDIRPpB0sCyya2DtkIa1yCELhrL0V6TogKbGASKduJ734+C9+eOdUInSCNcE0u6oFnDUmixF+/927aqxngvVQ0zBTNMcaHZ/971Sy3+pzAEkjdZWEF6xYs8= Received: by 10.70.74.1 with SMTP id w1mr10995316wxa; Tue, 05 Sep 2006 16:30:25 -0700 (PDT) Received: by 10.70.20.10 with HTTP; Tue, 5 Sep 2006 16:30:25 -0700 (PDT) Message-ID: <2260b150609051630w311dcedfgca19fb3e1cd41f95@mail.gmail.com> Date: Wed, 6 Sep 2006 09:30:25 +1000 From: "Chris Seufert" To: "Nathan Scott" Subject: Re: Kernel Ooops Cc: xfs@oss.sgi.com In-Reply-To: <20060906083028.I3365803@wobbly.melbourne.sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline References: <2260b150609050427p3123cb85q5af484d8b907e6ac@mail.gmail.com> <20060906083028.I3365803@wobbly.melbourne.sgi.com> X-archive-position: 8892 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: seufert@gmail.com Precedence: bulk X-list: xfs Content-Length: 582 Lines: 22 Just installed -rc5, all seems well. Should i be running a fsck after these types of errors? On 9/6/06, Nathan Scott wrote: > On Tue, Sep 05, 2006 at 09:27:04PM +1000, Chris Seufert wrote: > > I have had this one before, and i had assumed it had been fixed. 
> > > > System seems very stable running ext3, so i dont 'think' its hardware > > related, but i am begining to wonder. > > This is fixed, what kernel version are you using? (it was fixed > in -rc5/6 IIRC). > > > RIP [] xfs_btree_init_cursor+0x48/0x1bd > > cheers. > > -- > Nathan > From owner-xfs@oss.sgi.com Tue Sep 5 16:32:41 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Sep 2006 16:32:57 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k85NWRDW002341 for ; Tue, 5 Sep 2006 16:32:39 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA17367; Wed, 6 Sep 2006 09:31:38 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k85NVZgw3364836; Wed, 6 Sep 2006 09:31:35 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k85NVWWP3388392; Wed, 6 Sep 2006 09:31:32 +1000 (EST) Date: Wed, 6 Sep 2006 09:31:32 +1000 From: Nathan Scott To: Chris Seufert Cc: xfs@oss.sgi.com Subject: Re: Kernel Ooops Message-ID: <20060906093132.A3385910@wobbly.melbourne.sgi.com> References: <2260b150609050427p3123cb85q5af484d8b907e6ac@mail.gmail.com> <20060906083028.I3365803@wobbly.melbourne.sgi.com> <2260b150609051630w311dcedfgca19fb3e1cd41f95@mail.gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: <2260b150609051630w311dcedfgca19fb3e1cd41f95@mail.gmail.com>; from seufert@gmail.com on Wed, Sep 06, 2006 at 09:30:25AM +1000 X-archive-position: 8893 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: xfs Content-Length: 212 Lines: 14 On Wed, Sep 06, 2006 at 
09:30:25AM +1000, Chris Seufert wrote: > Just installed -rc5, all seems well. Great. > Should i be running a fsck after these types of errors? Its not needed, no. cheers. -- Nathan From owner-xfs@oss.sgi.com Tue Sep 5 16:41:16 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Sep 2006 16:41:25 -0700 (PDT) Received: from wx-out-0506.google.com (wx-out-0506.google.com [66.249.82.236]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k85NfFDW004087 for ; Tue, 5 Sep 2006 16:41:15 -0700 Received: by wx-out-0506.google.com with SMTP id h29so2351472wxd for ; Tue, 05 Sep 2006 16:40:41 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=FAT7voeByA4avheKvdU2pG6/KUPHIWwVW7uk8qbZMhN097c1H0VvZJk9RiyJTxCf0N34nZUAKZ1QsUSzal9Jw8Zw3KlF4IXuO7Vi3lfOGFIVxi6ACb7/KJnR7wB9WZG05ONWBKaB5dplSipZcRuF3ZN097EihEkOab9Vr/HPIbw= Received: by 10.70.38.19 with SMTP id l19mr10859288wxl; Tue, 05 Sep 2006 16:40:41 -0700 (PDT) Received: by 10.70.20.10 with HTTP; Tue, 5 Sep 2006 16:40:41 -0700 (PDT) Message-ID: <2260b150609051640y288629cbtcbc133d05b2b40dd@mail.gmail.com> Date: Wed, 6 Sep 2006 09:40:41 +1000 From: "Chris Seufert" To: xfs@oss.sgi.com Subject: XFS Journal on md device MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-archive-position: 8894 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: seufert@gmail.com Precedence: bulk X-list: xfs Content-Length: 555 Lines: 16 Hey, I currently have a 2.1Tb xfs partition, sitting on a hardware SATA raid card. However i also have 2 hdd's (for OS etc) in software raid1, with a md device for the xfs log file. 
It's a 100MB RAID1 (under /dev/md4); now when I halt/reboot the box, even after the xfs partition is unmounted (as part of the shutdown sequence as normal on Debian etch) the /dev/md4 device can't be cleanly stopped. Is having the log on a redundant partition a good idea, or is it better to leave it as an internal log, or is there another way round this problem? -Chris From owner-xfs@oss.sgi.com Tue Sep 5 17:16:59 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Sep 2006 17:17:16 -0700 (PDT) Received: from gepetto.dc.ltu.se (gepetto.dc.ltu.se [130.240.42.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k860GtDW008712 for ; Tue, 5 Sep 2006 17:16:59 -0700 Received: from [130.240.205.31] (thinktank.campus.luth.se [130.240.205.31]) by gepetto.dc.ltu.se (8.12.5/8.12.5) with ESMTP id k860GBp9024412; Wed, 6 Sep 2006 02:16:11 +0200 (MEST) Message-ID: <44FE14ED.3020605@student.ltu.se> Date: Wed, 06 Sep 2006 02:23:09 +0200 From: Richard Knutsson User-Agent: Mozilla Thunderbird 1.0.8-1.1.fc4 (X11/20060501) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Nathan Scott CC: akpm@osdl.org, xfs@oss.sgi.com, linux-kernel@vger.kernel.org Subject: Re: [xfs-masters] Re: [PATCH 2.6.18-rc4-mm3 2/2] fs/xfs: Converting into generic boolean References: <44F833C9.1000208@student.ltu.se> <20060904150241.I3335706@wobbly.melbourne.sgi.com> <44FBFEE9.4010201@student.ltu.se> <20060905130557.A3334712@wobbly.melbourne.sgi.com> <44FD71C6.20006@student.ltu.se> <20060906091407.M3365803@wobbly.melbourne.sgi.com> In-Reply-To: <20060906091407.M3365803@wobbly.melbourne.sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8895 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: ricknu-0@student.ltu.se Precedence: bulk X-list: xfs Content-Length: 1148 Lines: 45 Nathan Scott wrote: >On Tue, Sep 05, 2006 at 02:47:02PM +0200, Richard Knutsson wrote: > >
>>Just the notion: "your" guys was the ones to make those to boolean(_t), >> >> > >Sort of, we actually inherited that type from IRIX where it is >defined in . > > Oh, ok >>>"int needflush;" is just as readable (some would argue moreso) as >>>"bool needflush;" and thats pretty much the level of use in XFS - >>> >>> >>> >>How are you sure "needflush" is, for example, not a counter? >> >> > >Well, that would be named "flushcount" or some such thing. And you >would be able to tell that it was a counter by the way its used in >the surrounding code. > > True, thinking more of when you have a quick look at the headers, but "flushcount" would be a more logical name in such a case. >This discussion really isn't going anywhere useful; I think you need >to accept that not everyone sees value in a boolean type. :) > > Well, can you blame me for trying? ;) But the more important thing is to clean up the boolean-type and FALSE/TRUE mess in the kernel. >cheers. > > Thank you for your time and happy coding :) From owner-xfs@oss.sgi.com Tue Sep 5 19:32:35 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Sep 2006 19:32:56 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k862WMDW026638 for ; Tue, 5 Sep 2006 19:32:34 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA20848; Wed, 6 Sep 2006 12:31:32 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k862VTeQ12699673; Wed, 6 Sep 2006 12:31:30 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k862VSQT12697813; Wed, 6 Sep 2006 12:31:28 +1000 (AEST) Date: Wed, 6 Sep 2006 12:31:28 +1000 From: David Chinner To: Nathan Scott Cc: Roger Willcocks , xfs@oss.sgi.com Subject: Re: race in 
xfs_rename? (fwd) Message-ID: <20060906023128.GN10950339@melbourne.sgi.com> References: <20060906083448.J3365803@wobbly.melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20060906083448.J3365803@wobbly.melbourne.sgi.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 8896 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1763 Lines: 52 On Wed, Sep 06, 2006 at 08:34:48AM +1000, Nathan Scott wrote: > Hi Roger, > > I'm gonna be rude and fwd your mail to the list - in the hope > someone there will be able to help you. I'm running out of time > @sgi and have a bunch of stuff still to get done before I skip > outta here - having to look at the xfs_rename locking right now > might just be enough to make my head explode. ;) > > cheers. > > ----- Forwarded message from Roger Willcocks ----- > > Date: 05 Sep 2006 14:30:30 +0100 > To: nathans@sgi.com > X-Mailer: Ximian Evolution 1.2.2 (1.2.2-4) > From: Roger Willcocks > Subject: race in xfs_rename? > > Hi Nathan, > > I think I must be missing something here: > > xfs_rename calls xfs_lock_for_rename, which i-locks the source file and > directory, target directory, and (if it already exists) the target file. > > It returns a two-to-four entry list of participating inodes. > > xfs_rename unlocks them all, creates a transaction, and then locks them > all again. > > Surely while they're unlocked, another processor could jump in and > fiddle with the underlying files and directories? I don't think that can happen due to i_mutex locking at the vfs layer i.e. in do_rename() via lock_rename() and in vfs_rename_{dir,other}(). Hence I think it is safe for XFS to do what it does. 
FWIW, in Irix where there is no higher layer locking, XFS has extra checks and locks (ancestor lock, inode generation count checks, etc) to ensure nothing changed when the locks were dropped and regained. AFAICT, the Linux XFS code doesn't need to do of this because the VFS guarantees us that things won't change..... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Tue Sep 5 20:54:09 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Sep 2006 20:54:15 -0700 (PDT) Received: from wx-out-0506.google.com (wx-out-0506.google.com [66.249.82.225]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k863s8DW011764 for ; Tue, 5 Sep 2006 20:54:09 -0700 Received: by wx-out-0506.google.com with SMTP id h29so2415237wxd for ; Tue, 05 Sep 2006 20:53:31 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=DDjzkCJ/z0xetBUkGjnK+BHb2XkT9qVKNT0kTbHTLNRq/WHSd3+JJ9+wZQKRAhXLPRc/yS48kaXW68K+V1EZybT1qT7RkvBak2icRQJ+k4SQN1NrCQ6ysDCrF2rXSWZrB3QZnh8oABee/ZvByUspbp4u3pp8SOR2LBAJHSnWJNA= Received: by 10.70.99.11 with SMTP id w11mr11199434wxb; Tue, 05 Sep 2006 20:53:31 -0700 (PDT) Received: by 10.70.20.10 with HTTP; Tue, 5 Sep 2006 20:53:31 -0700 (PDT) Message-ID: <2260b150609052053h31731a0eycababfab603749c9@mail.gmail.com> Date: Wed, 6 Sep 2006 13:53:31 +1000 From: "Chris Seufert" To: "Chris Seufert" , "linux-xfs@oss.sgi.com" Subject: Re: XFS Journal on md device In-Reply-To: <20060906034027.GA7393@piper.madduck.net> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline References: <2260b150609051640y288629cbtcbc133d05b2b40dd@mail.gmail.com> <44FE28A8.5000803@oss.sgi.com> <2260b150609051853w1286eda7ve59a5df2c7e0ae1c@mail.gmail.com> <20060906034027.GA7393@piper.madduck.net> 
X-archive-position: 8900 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: seufert@gmail.com Precedence: bulk X-list: xfs Content-Length: 1416 Lines: 45 I'm running Debian Testing (is that Etch), updated about a week ago, on AMD64, with Kernel 2.6.18-rc5-mm1 + md hotfix(1) with md built-in, no initrd, so raid autodetect works. My root (/) volume is ext3 running on /dev/md0 (RAID 1), the problem is with my /data volume thats xfs, running on /dev/sda, with log on /dev/md4. 1: The patch is required becase mm1 killed the KConfig for md devices. On 9/6/06, martin f krafft wrote: > also sprach Chris Seufert [2006.09.06.0353 +0200]: > > However on reboot xfs does a journal rebuild/repair. and the md > > does a re-sync of the md device. > > Which distro? > > I am the Debian maintainer for mdadm and have run into the problem > that the array used for / cannot be stopped until after / is > unmounted, at which point nothing stops the array for there is no > shutdownramfs. > > However, we (Debian) remount / read-only and I never see > a filesystem check on reboot. > > -- > martin; (greetings from the heart of the sun.) > \____ echo mailto: !#^."<*>"|tr "<*> mailto:" net@madduck > > spamtraps: madduck.bogus@madduck.net > > the micro$oft hoover: finally, a product that's supposed to suck! 
> > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.5 (GNU/Linux) > > iD8DBQFE/kMrIgvIgzMMSnURAoSgAJ4mQ8a1RH6sYd7VRn4yZsRNKxbeSACdEGFv > HeWvLK1N+R1nvxMfeqlDZk8= > =LyfV > -----END PGP SIGNATURE----- > > > From owner-xfs@oss.sgi.com Tue Sep 5 22:12:09 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Sep 2006 22:12:20 -0700 (PDT) Received: from albatross.madduck.net (armagnac.ifi.unizh.ch [130.60.75.72]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k865C7DW022292 for ; Tue, 5 Sep 2006 22:12:08 -0700 Received: from localhost (albatross.madduck.net [127.0.0.1]) by albatross.madduck.net (postfix) with ESMTP id 72800895D7C for ; Wed, 6 Sep 2006 06:12:39 +0200 (CEST) Received: from albatross.madduck.net ([127.0.0.1]) by localhost (albatross.madduck.net [127.0.0.1]) (amavisd-new, port 10024) with LMTP id 16384-01 for ; Wed, 6 Sep 2006 06:12:39 +0200 (CEST) Received: from wall.oerlikon.madduck.net (84-72-21-226.dclient.hispeed.ch [84.72.21.226]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client CN "wall.oerlikon.madduck.net", Issuer "CAcert Class 3 Root" (verified OK)) by albatross.madduck.net (postfix) with ESMTP id 2EF29895D79 for ; Wed, 6 Sep 2006 06:12:39 +0200 (CEST) Received: from piper.oerlikon.madduck.net (piper.oerlikon.madduck.net [192.168.14.3]) by wall.oerlikon.madduck.net (Postfix) with ESMTP id 911761804BBE for ; Wed, 6 Sep 2006 06:12:45 +0200 (CEST) Received: by piper.oerlikon.madduck.net (Postfix, from userid 1000) id 281D21043E50; Wed, 6 Sep 2006 06:12:45 +0200 (CEST) Date: Wed, 6 Sep 2006 06:12:45 +0200 From: martin f krafft To: "linux-xfs@oss.sgi.com" Subject: Re: XFS Journal on md device Message-ID: <20060906041245.GA10066@piper.madduck.net> Mail-Followup-To: "linux-xfs@oss.sgi.com" References: <2260b150609051640y288629cbtcbc133d05b2b40dd@mail.gmail.com> <44FE28A8.5000803@oss.sgi.com> <2260b150609051853w1286eda7ve59a5df2c7e0ae1c@mail.gmail.com> <20060906034027.GA7393@piper.madduck.net> 
<2260b150609052053h31731a0eycababfab603749c9@mail.gmail.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="ew6BAiZeqk4r7MaW" Content-Disposition: inline In-Reply-To: <2260b150609052053h31731a0eycababfab603749c9@mail.gmail.com> X-OS: Debian GNU/Linux testing/unstable kernel 2.6.17-2-amd64 x86_64 X-Motto: Keep the good times rollin' X-Subliminal-Message: debian/rules! X-Spamtrap: madduck.bogus@madduck.net User-Agent: Mutt/1.5.13 (2006-08-11) X-archive-position: 8901 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: madduck@madduck.net Precedence: bulk X-list: xfs Content-Length: 1595 Lines: 56 --ew6BAiZeqk4r7MaW Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable also sprach Chris Seufert [2006.09.06.0553 +0200]: > My root (/) volume is ext3 running on /dev/md0 (RAID 1), the > problem is with my /data volume thats xfs, running on /dev/sda, > with log on /dev/md4. Can you tell when /data gets umounted during the shutdown sequence? Correct me if I'm wrong, but once that happened, /dev/md4 should become free as far as XFS is concerned. However, since mdadm or the kernel fails to stop the device during shutdown, I am guessing that the partition is simply not being umounted. Does it have an entry in /etc/fstab? Try changing the=20 #! /bin/sh in line 1 of /etc/rc6.d/S40umountfs to #! /bin/sh -x exec 2> /root/umountfs.out.2 then reboot and paste that file somewhere (http://rafb.net/paste). --=20 martin; (greetings from the heart of the sun.) \____ echo mailto: !#^."<*>"|tr "<*> mailto:" net@madduck =20 spamtraps: madduck.bogus@madduck.net =20 "you don't sew with a fork, so I see no reason to eat with knitting needles." 
-- miss piggy, on eating chinese food --ew6BAiZeqk4r7MaW Content-Type: application/pgp-signature; name="signature.asc" Content-Description: Digital signature (GPG/PGP) Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.5 (GNU/Linux) iD8DBQFE/kq9IgvIgzMMSnURAiPcAJ99QMFVoCCU3qG+cTDtAP7wtvm3dQCffFeW 1nM/lomxGvDkQSmGbdgBKRI= =fBYK -----END PGP SIGNATURE----- --ew6BAiZeqk4r7MaW-- From owner-xfs@oss.sgi.com Tue Sep 5 22:45:33 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 05 Sep 2006 22:45:37 -0700 (PDT) Received: from albatross.madduck.net (armagnac.ifi.unizh.ch [130.60.75.72]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k865jRDW027334 for ; Tue, 5 Sep 2006 22:45:32 -0700 Received: from localhost (albatross.madduck.net [127.0.0.1]) by albatross.madduck.net (postfix) with ESMTP id 28DE0895D7A; Wed, 6 Sep 2006 05:40:22 +0200 (CEST) Received: from albatross.madduck.net ([127.0.0.1]) by localhost (albatross.madduck.net [127.0.0.1]) (amavisd-new, port 10024) with LMTP id 08528-02; Wed, 6 Sep 2006 05:40:21 +0200 (CEST) Received: from wall.oerlikon.madduck.net (84-72-21-226.dclient.hispeed.ch [84.72.21.226]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client CN "wall.oerlikon.madduck.net", Issuer "CAcert Class 3 Root" (verified OK)) by albatross.madduck.net (postfix) with ESMTP id D8A71895D79; Wed, 6 Sep 2006 05:40:21 +0200 (CEST) Received: from piper.oerlikon.madduck.net (piper.oerlikon.madduck.net [192.168.14.3]) by wall.oerlikon.madduck.net (Postfix) with ESMTP id 336F61804B98; Wed, 6 Sep 2006 05:40:28 +0200 (CEST) Received: by piper.oerlikon.madduck.net (Postfix, from userid 1000) id CC0BC1043E50; Wed, 6 Sep 2006 05:40:27 +0200 (CEST) Date: Wed, 6 Sep 2006 05:40:27 +0200 From: martin f krafft To: Chris Seufert Cc: "linux-xfs@oss.sgi.com" Subject: Re: XFS Journal on md device Message-ID: <20060906034027.GA7393@piper.madduck.net> Mail-Followup-To: Chris Seufert , "linux-xfs@oss.sgi.com" References: 
<2260b150609051640y288629cbtcbc133d05b2b40dd@mail.gmail.com> <44FE28A8.5000803@oss.sgi.com> <2260b150609051853w1286eda7ve59a5df2c7e0ae1c@mail.gmail.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="rwEMma7ioTxnRzrJ" Content-Disposition: inline In-Reply-To: <2260b150609051853w1286eda7ve59a5df2c7e0ae1c@mail.gmail.com> X-OS: Debian GNU/Linux testing/unstable kernel 2.6.17-2-amd64 x86_64 X-Motto: Keep the good times rollin' X-Subliminal-Message: debian/rules! X-Spamtrap: madduck.bogus@madduck.net User-Agent: Mutt/1.5.13 (2006-08-11) X-archive-position: 8902 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: madduck@madduck.net Precedence: bulk X-list: xfs Content-Length: 1230 Lines: 42 --rwEMma7ioTxnRzrJ Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable also sprach Chris Seufert [2006.09.06.0353 +0200]: > However on reboot xfs does a journal rebuild/repair. and the md > does a re-sync of the md device. Which distro? I am the Debian maintainer for mdadm and have run into the problem that the array used for / cannot be stopped until after / is unmounted, at which point nothing stops the array for there is no shutdownramfs. However, we (Debian) remount / read-only and I never see a filesystem check on reboot. --=20 martin; (greetings from the heart of the sun.) \____ echo mailto: !#^."<*>"|tr "<*> mailto:" net@madduck =20 spamtraps: madduck.bogus@madduck.net =20 the micro$oft hoover: finally, a product that's supposed to suck! 
--rwEMma7ioTxnRzrJ Content-Type: application/pgp-signature; name="signature.asc" Content-Description: Digital signature (GPG/PGP) Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.5 (GNU/Linux) iD8DBQFE/kMrIgvIgzMMSnURAoSgAJ4mQ8a1RH6sYd7VRn4yZsRNKxbeSACdEGFv HeWvLK1N+R1nvxMfeqlDZk8= =LyfV -----END PGP SIGNATURE----- --rwEMma7ioTxnRzrJ-- From owner-xfs@oss.sgi.com Wed Sep 6 05:58:11 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 06 Sep 2006 05:58:21 -0700 (PDT) Received: from ug-out-1314.google.com (ug-out-1314.google.com [66.249.92.168]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k86CwADW002687 for ; Wed, 6 Sep 2006 05:58:11 -0700 Received: by ug-out-1314.google.com with SMTP id j3so2316337ugf for ; Wed, 06 Sep 2006 05:57:36 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=JiCwpuJ2Hifh+CsDCWsT5QpZ72EsH9uRCgasR1a/t1XMnpEt/mvX5RC3wUu3j+zCflFTmlVqhDTpCByNruQ8pI69agToeVy3NCELmOIW48MJLg+oEUJqGfG70zK5v1iclg7IwwyMFQp9L/M30qCaOfdIH1vpnaxBICAT4+VQIWU= Received: by 10.66.216.20 with SMTP id o20mr4320806ugg; Wed, 06 Sep 2006 04:59:24 -0700 (PDT) Received: by 10.67.23.8 with HTTP; Wed, 6 Sep 2006 04:59:24 -0700 (PDT) Message-ID: <60fdb1ad0609060459k6132f8b8s40e4f20f51a746ed@mail.gmail.com> Date: Wed, 6 Sep 2006 12:59:24 +0100 From: "Vijay Gill" To: xfs@oss.sgi.com Subject: Bad block on partition, how to deal with it? MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline X-archive-position: 8905 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vijay.s.gill@gmail.com Precedence: bulk X-list: xfs Content-Length: 515 Lines: 18 Hi, Got this bad sector in a seagate 40G hard disk. 
Is there any tool under linux to scan the surface of the disk and mark the sectors bad in the filesystem (or at an even lower level, like SeaTools does)? Running Linux Fedora Core 5. In the meantime I am doing a dd on that partition to copy the data and try to recover it from there. Also I have run badblocks to get the number of the block which is bad, but how do I get it marked now so that the OS does not try to allocate it for data in future. Thanks Vijay From owner-xfs@oss.sgi.com Wed Sep 6 07:23:30 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 06 Sep 2006 07:23:40 -0700 (PDT) Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k86ENSDW015430 for ; Wed, 6 Sep 2006 07:23:30 -0700 Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id k86EMjM2004086; Wed, 6 Sep 2006 10:22:45 -0400 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id k86EMiPA027491; Wed, 6 Sep 2006 10:22:44 -0400 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id k86EMi7r020600; Wed, 6 Sep 2006 10:22:44 -0400 Message-ID: <44FED9B3.5080308@sandeen.net> Date: Wed, 06 Sep 2006 09:22:43 -0500 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.5 (X11/20060808) MIME-Version: 1.0 To: Vijay Gill CC: xfs@oss.sgi.com Subject: Re: Bad block on partition, how to deal with it?
References: <60fdb1ad0609060459k6132f8b8s40e4f20f51a746ed@mail.gmail.com> In-Reply-To: <60fdb1ad0609060459k6132f8b8s40e4f20f51a746ed@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8907 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 954 Lines: 29 Vijay Gill wrote: > Hi, > > Got this bad sector in a seagate 40G hard disk. You should buy a new disk for $20 or so :) > Is there any tool > under linux to scan the surface of the disk and mark the sectors bad > in file system (or at even lower level like seatools does)? xfs has no badblocks support. If you can convince the drive to remap the block with vendor tools then maybe it's ok. But modern drives remap on their own; if you have a block that can't be remapped then your drive is probably not long for this world. Don't try to keep using it. > Running Linux Fedora Core 5. > > In the mean while I am doing a dd on that partition to copy the data > and try to recover it from there. > > Also I have run badblocks to get the number of the block which is bad, > but how do I get it marked now so that the OS does not try to allocate > it for data in future. With xfs, you don't. It's not worth it IMHO, just get a new disk. 
-Eric From owner-xfs@oss.sgi.com Wed Sep 6 10:23:51 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 06 Sep 2006 10:24:05 -0700 (PDT) Received: from smtp102.sbc.mail.mud.yahoo.com (smtp102.sbc.mail.mud.yahoo.com [68.142.198.201]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k86HNiDW013417 for ; Wed, 6 Sep 2006 10:23:51 -0700 Received: (qmail 70801 invoked from network); 6 Sep 2006 17:23:08 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@71.202.63.228 with login) by smtp102.sbc.mail.mud.yahoo.com with SMTP; 6 Sep 2006 17:23:08 -0000 Received: by tuatara.stupidest.org (Postfix, from userid 10000) id CB8531814338; Wed, 6 Sep 2006 10:23:06 -0700 (PDT) Date: Wed, 6 Sep 2006 10:23:06 -0700 From: Chris Wedgwood To: Vijay Gill Cc: xfs@oss.sgi.com Subject: Re: Bad block on partition, how to deal with it? Message-ID: <20060906172306.GA19108@tuatara.stupidest.org> References: <60fdb1ad0609060459k6132f8b8s40e4f20f51a746ed@mail.gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <60fdb1ad0609060459k6132f8b8s40e4f20f51a746ed@mail.gmail.com> X-archive-position: 8908 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs Content-Length: 732 Lines: 20 On Wed, Sep 06, 2006 at 12:59:24PM +0100, Vijay Gill wrote: > Got this bad sector in a seagate 40G hard disk. record where it is, dd over it and hopefully the drive will remap it (if there are many sectors the drive is probably toast) if you know which block is/was bad you can use xfs_bmap to figure out which file it was in > Also I have run badblocks to get the number of the block which is > bad, but how do I get it marked now so that the OS does not try to > allocate it for data in future.
modern drives (pretty much anything less than 10 years old) will remap bad sectors on writes; if they fail to do this, get a new drive smartctl will usually let you get a count of how many times the drive has done this From owner-xfs@oss.sgi.com Wed Sep 6 16:04:14 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 06 Sep 2006 16:04:40 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k86N3xDW026696 for ; Wed, 6 Sep 2006 16:04:12 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA16212; Thu, 7 Sep 2006 09:03:07 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k86N2feQ13557459; Thu, 7 Sep 2006 09:02:42 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k86N2cTC13528081; Thu, 7 Sep 2006 09:02:38 +1000 (AEST) Date: Thu, 7 Sep 2006 09:02:38 +1000 From: David Chinner To: Jesper Juhl Cc: Linux Kernel Mailing List , xfs@oss.sgi.com Subject: Re: Wrong free space reported for XFS filesystem Message-ID: <20060906230238.GJ5737019@melbourne.sgi.com> References: <9a8748490609060154ye8730b0n16e23524010a35e4@mail.gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <9a8748490609060154ye8730b0n16e23524010a35e4@mail.gmail.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 8910 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1209 Lines: 40 On Wed, Sep 06, 2006 at 10:54:34AM +0200, Jesper Juhl wrote: > For your information; > > I've been running a bunch of benchmarks on a 250GB XFS filesystem.
> After the benchmarks had run for a few hours and almost filled up the > fs, I removed all the files and did a "df -h" with interesting > results : > > /dev/mapper/Data1-test > 250G -64Z 251G 101% /mnt/test > > "df -k" reported this : > > /dev/mapper/Data1-test > 262144000 -73786976294838202960 262147504 101% /mnt/test .... > The filesystem is mounted like this : > > /dev/mapper/Data1-test on /mnt/test type xfs > (rw,noatime,ihashsize=64433,logdev=/dev/Log1/test_log,usrquota) So the in-core accounting has underflowed by a small amount but the on disk accounting is correct. We've had a few reports of this that I know of over the past couple of years, but we've never managed to find a reproducible test case for it. Can you describe what benchmark you were running, what kernel you were using and whether any of the tests hit an ENOSPC condition? Also, in future can you cc xfs@oss.sgi.com on XFS bug reports? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Wed Sep 6 21:10:15 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 06 Sep 2006 21:10:31 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8749xDW013700 for ; Wed, 6 Sep 2006 21:10:12 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA22746; Thu, 7 Sep 2006 14:09:08 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16302) id 2DDBF58CF851; Thu, 7 Sep 2006 14:09:08 +1000 (EST) To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 955993 - quota oops fix Message-Id: <20060907040908.2DDBF58CF851@chook.melbourne.sgi.com> Date: Thu, 7 Sep 2006 14:09:08 +1000 (EST) From: nathans@sgi.com (Nathan Scott) X-archive-position: 8913 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to:
xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: xfs Content-Length: 467 Lines: 14 Fix a bad pointer dereference in the quota statvfs handling. Date: Thu Sep 7 14:08:44 AEST 2006 Workarea: chook.melbourne.sgi.com:/build/nathans/xfs-linux Inspected by: dgc The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-kern/xfs-linux-melb Modid: xfs-linux-melb:xfs-kern:26934a quota/xfs_qm_bhv.c - 1.23 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/quota/xfs_qm_bhv.c.diff?r1=text&tr1=1.23&r2=text&tr2=1.22&f=h From owner-xfs@oss.sgi.com Wed Sep 6 23:52:26 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 06 Sep 2006 23:52:40 -0700 (PDT) Received: from omx1.americas.sgi.com (omx1.americas.sgi.com [198.149.16.13]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k876qKDW006399 for ; Wed, 6 Sep 2006 23:52:25 -0700 Received: from internal-mail-relay1.corp.sgi.com (internal-mail-relay1.corp.sgi.com [198.149.32.52]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with ESMTP id k875rUnx001062 for ; Thu, 7 Sep 2006 00:53:30 -0500 Received: from omx2.sgi.com ([198.149.32.25]) by internal-mail-relay1.corp.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id k875r98s37777792 for ; Wed, 6 Sep 2006 22:53:09 -0700 (PDT) Received: from outhouse.melbourne.sgi.com (outhouse.melbourne.sgi.com [134.14.52.145]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with ESMTP id k878QdFu017978; Thu, 7 Sep 2006 01:26:40 -0700 Received: from [134.14.55.232] (chatz.melbourne.sgi.com [134.14.55.232]) by outhouse.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k875t22G15375408; Thu, 7 Sep 2006 15:55:03 +1000 (AEST) Message-ID: <44FFB39C.3000700@melbourne.sgi.com> Date: Thu, 07 Sep 2006 15:52:28 +1000 From: David Chatterton Reply-To: chatz@melbourne.sgi.com Organization: SGI User-Agent: Thunderbird 1.5.0.5 (Windows/20060719) MIME-Version: 1.0 To: torvalds@osdl.org CC: akpm@osdl.org, 
xfs@oss.sgi.com, axboe@kernel.dk Subject: XFS update for 2.6.18-rc6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 8914 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: chatz@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 3500 Lines: 100 Hi Linus, Please pull from: git://oss.sgi.com:8090/xfs/xfs-2.6 This will update the following files: fs/xfs/linux-2.6/xfs_aops.c | 18 +++++++++++++----- fs/xfs/linux-2.6/xfs_lrw.c | 27 ++++++++++++++++++++++----- fs/xfs/quota/xfs_qm_bhv.c | 2 +- fs/xfs/xfs_alloc.h | 20 ++++++++++++++++++++ fs/xfs/xfs_fsops.c | 16 ++++++++++------ fs/xfs/xfs_mount.c | 32 ++++++++------------------------ fs/xfs/xfs_vfsops.c | 3 ++- 7 files changed, 76 insertions(+), 42 deletions(-) through these commits: commit 4be536debe3f7b0c62283e77fd6bd8bdb9f83c6f Author: David Chinner Date: Thu Sep 7 14:26:50 2006 +1000 [XFS] Prevent free space oversubscription and xfssyncd looping. The fix for recent ENOSPC deadlocks introduced certain limitations on allocations. The fix could cause xfssyncd to loop endlessly if we did not leave some space free for the allocator to work correctly. Basically, we needed to ensure that we had at least 4 blocks free for an AG free list and a block for the inode bmap btree at all times. However, this did not take into account the fact that each AG has a free list that needs 4 blocks. Hence any filesystem with more than one AG could cause oversubscription of free space and make xfssyncd spin forever trying to allocate space needed for AG freelists that was not available in the AG. The following patch reserves space for the free lists in all AGs plus the inode bmap btree which prevents oversubscription. It also prevents those blocks from being reported as free space (as they can never be used) and makes the SMP in-core superblock accounting code and the reserved block ioctl respect this requirement. 
SGI-PV: 955674 SGI-Modid: xfs-linux-melb:xfs-kern:26894a Signed-off-by: David Chinner Signed-off-by: David Chatterton commit 721259bce2851893155c6cb88a3f8ecb106b348c Author: Lachlan McIlroy Date: Thu Sep 7 14:27:05 2006 +1000 [XFS] Fix ABBA deadlock between i_mutex and iolock. Avoid calling __blockdev_direct_IO for the DIO_OWN_LOCKING case for direct I/O reads since it drops and reacquires the i_mutex while holding the iolock and this violates the locking order. SGI-PV: 955696 SGI-Modid: xfs-linux-melb:xfs-kern:26898a Signed-off-by: Lachlan McIlroy Signed-off-by: David Chatterton commit 0a8d17d090a4939643a52194b7d4a4001b9b2d93 Author: David Chinner Date: Thu Sep 7 14:27:15 2006 +1000 [XFS] Fix xfs_splice_write() so appended data gets to disk. xfs_splice_write() failed to update the on disk inode size when extending the file, so when the file was closed the range extended by splice was truncated off. Hence any region of a file written to by splice would end up as a hole full of zeros. SGI-PV: 955939 SGI-Modid: xfs-linux-melb:xfs-kern:26920a Signed-off-by: David Chinner Signed-off-by: David Chatterton commit 0edc7d0f3709e8c3bb7e69c4df614218a753361e Author: Nathan Scott Date: Thu Sep 7 14:27:23 2006 +1000 [XFS] Fix a bad pointer dereference in the quota statvfs handling.
SGI-PV: 955993 SGI-Modid: xfs-linux-melb:xfs-kern:26934a Signed-off-by: Nathan Scott Signed-off-by: David Chatterton Thanks, David From owner-xfs@oss.sgi.com Thu Sep 7 19:34:49 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 07 Sep 2006 19:35:09 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k882YaDW006722 for ; Thu, 7 Sep 2006 19:34:47 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA19450; Fri, 8 Sep 2006 12:33:44 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k882XfeQ14414568; Fri, 8 Sep 2006 12:33:42 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k882XdAu14403235; Fri, 8 Sep 2006 12:33:39 +1000 (AEST) Date: Fri, 8 Sep 2006 12:33:39 +1000 From: David Chinner To: Jesper Juhl Cc: Linux Kernel Mailing List , xfs@oss.sgi.com Subject: Re: Wrong free space reported for XFS filesystem Message-ID: <20060908023339.GF10950339@melbourne.sgi.com> References: <9a8748490609060154ye8730b0n16e23524010a35e4@mail.gmail.com> <20060906230238.GJ5737019@melbourne.sgi.com> <9a8748490609070717q6ed9111ckdc3de025dc44938b@mail.gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <9a8748490609070717q6ed9111ckdc3de025dc44938b@mail.gmail.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 8922 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1367 Lines: 46 On Thu, Sep 07, 2006 at 04:17:53PM +0200, Jesper Juhl wrote: > On 07/09/06, David Chinner wrote: > >On Wed, Sep 06, 2006 at 10:54:34AM +0200, Jesper Juhl wrote: > >> For your information; > >> > 
>> I've been running a bunch of benchmarks on a 250GB XFS filesystem. > >> After the benchmarks had run for a few hours and almost filled up the > >> fs, I removed all the files and did a "df -h" with interesting > >> results : ..... > >So the in-core accounting has underflowed by a small amount but the > >on disk accounting is correct. > > > >We've had a few reports of this that I know of over the past > >couple of years, but we've never managed to find a reproducible > >test case for it. > >Can you describe what benchmark you were running, what kernel you were > >using > > The kernel is 2.6.18-rc6 SMP Ok, so it's a current problem.... > >and whether any of the tests hit an ENOSPC condition? > > > That I don't know. > > The script I was running is this one : That doesn't really narrow down the scope at all. All that script tells me is that the problem is somewhere inside XFS.... :/ Can you try to isolate which of the loads is causing the problem? That being said, this looks like a good stress load - I'll pass it onto our QA folks... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Fri Sep 8 08:05:24 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Sep 2006 08:05:45 -0700 (PDT) Received: from a.mx.filmlight.ltd.uk (host217-40-27-25.in-addr.btopenworld.com [217.40.27.25]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k88F5LDW005483 for ; Fri, 8 Sep 2006 08:05:24 -0700 Received: (qmail 31358 invoked from network); 8 Sep 2006 14:04:43 -0000 Received: from orthia.filmlight.ltd.uk (10.44.0.109) by a.mx.filmlight.ltd.uk with SMTP; 8 Sep 2006 14:04:43 -0000 Subject: Re: race in xfs_rename?
(fwd) From: Roger Willcocks To: David Chinner Cc: Nathan Scott , xfs@oss.sgi.com In-Reply-To: <20060906023128.GN10950339@melbourne.sgi.com> References: <20060906083448.J3365803@wobbly.melbourne.sgi.com> <20060906023128.GN10950339@melbourne.sgi.com> Content-Type: text/plain Organization: Message-Id: <1157724365.873.71.camel@orthia.filmlight.ltd.uk> Mime-Version: 1.0 X-Mailer: Ximian Evolution 1.2.2 (1.2.2-4) Date: 08 Sep 2006 15:06:05 +0100 Content-Transfer-Encoding: 7bit X-archive-position: 8923 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: roger@filmlight.ltd.uk Precedence: bulk X-list: xfs Content-Length: 2483 Lines: 68 On Wed, 2006-09-06 at 03:31, David Chinner wrote: > On Wed, Sep 06, 2006 at 08:34:48AM +1000, Nathan Scott wrote: > > Hi Roger, > > > > I'm gonna be rude and fwd your mail to the list - in the hope > > someone there will be able to help you. I'm running out of time > > @sgi and have a bunch of stuff still to get done before I skip > > outta here - having to look at the xfs_rename locking right now > > might just be enough to make my head explode. ;) > > > > cheers. > > > > ----- Forwarded message from Roger Willcocks ----- > > > > Date: 05 Sep 2006 14:30:30 +0100 > > To: nathans@sgi.com > > X-Mailer: Ximian Evolution 1.2.2 (1.2.2-4) > > From: Roger Willcocks > > Subject: race in xfs_rename? > > > > Hi Nathan, > > > > I think I must be missing something here: > > > > xfs_rename calls xfs_lock_for_rename, which i-locks the source file and > > directory, target directory, and (if it already exists) the target file. > > > > It returns a two-to-four entry list of participating inodes. > > > > xfs_rename unlocks them all, creates a transaction, and then locks them > > all again. > > > > Surely while they're unlocked, another processor could jump in and > > fiddle with the underlying files and directories? 
> > I don't think that can happen due to i_mutex locking at the vfs layer > i.e. in do_rename() via lock_rename() and in vfs_rename_{dir,other}(). > Hence I think it is safe for XFS to do what it does. > > FWIW, in Irix where there is no higher layer locking, XFS has extra > checks and locks (ancestor lock, inode generation count checks, etc) > to ensure nothing changed when the locks were dropped and regained. > AFAICT, the Linux XFS code doesn't need to do any of this because the VFS > guarantees us that things won't change..... > > Cheers, > > Dave. Hi Dave & Nathan, yes that makes sense. I'm currently chasing a couple of xfs shutdowns on customer clusters, and 'rename' seems to be a factor, although it could just as well be a dodgy network driver, or whatever. I'll let you know if I find a reproducible test case. I've also been looking into a couple of 'LEAFN node level is X' warnings from xfs_repair, and it seems to me that leaf nodes don't actually have a /level/ member, although internal nodes do (compare xfs_dir2_leaf_hdr_t and xfs_da_intnode_t). The value being tested by xfs_repair is actually leaf->hdr.stale, so the warning is bogus. Or so it seems to me...
-- Roger From owner-xfs@oss.sgi.com Fri Sep 8 11:30:42 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Sep 2006 11:30:54 -0700 (PDT) Received: from amsfep11-int.chello.nl (amsfep17-int.chello.nl [213.46.243.15]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k88IUfDW003607 for ; Fri, 8 Sep 2006 11:30:42 -0700 Received: from cable-213-132-154-40.upc.chello.be ([213.132.154.40]) by amsfep11-int.chello.nl (InterMail vM.6.01.05.04 201-2131-123-105-20051025) with ESMTP id <20060908172214.XVNB14551.amsfep11-int.chello.nl@cable-213-132-154-40.upc.chello.be> for ; Fri, 8 Sep 2006 19:22:14 +0200 From: Grozdan Nikolov To: xfs@oss.sgi.com Subject: XFS questions Date: Fri, 8 Sep 2006 19:23:07 +0200 User-Agent: KMail/1.9.4 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200609081923.08215.microchip@chello.be> X-archive-position: 8927 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: microchip@chello.be Precedence: bulk X-list: xfs Content-Length: 837 Lines: 21 Hi, I have a few questions regarding data integrity on XFS filesystems. I have 4 servers here all running on XFS partitions and I'm a bit concerned about the data integrity of an XFS filesystem. After reading a lot of benchmarks/user experiences I came to the conclusion that XFS is really very fast, as I experience it here on my servers too, but when it comes to data integrity it is wise not to use XFS for partitions containing important files, as XFS may not be able to recover them after, let's say, a power outage. I'm also worried about the 'zeroing' thing in XFS. I have 3 questions...
1) How reliable is XFS at data integrity?
2) Will the 'zeroing' thing be removed/fixed in the near future?
3) Will XFS ever support ordered or journalled mode like ReiserFS or Ext3?
Thanks in advance and best regards, Grozdan From owner-xfs@oss.sgi.com Fri Sep 8 12:20:24 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 08 Sep 2006 12:20:37 -0700 (PDT) Received: from smtp105.sbc.mail.mud.yahoo.com (smtp105.sbc.mail.mud.yahoo.com [68.142.198.204]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k88JKKDW015784 for ; Fri, 8 Sep 2006 12:20:24 -0700 Received: (qmail 75026 invoked from network); 8 Sep 2006 19:19:42 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@71.202.63.228 with login) by smtp105.sbc.mail.mud.yahoo.com with SMTP; 8 Sep 2006 19:19:41 -0000 Received: by tuatara.stupidest.org (Postfix, from userid 10000) id 50807180B3F6; Fri, 8 Sep 2006 12:19:40 -0700 (PDT) Date: Fri, 8 Sep 2006 12:19:40 -0700 From: Chris Wedgwood To: Grozdan Nikolov Cc: xfs@oss.sgi.com Subject: Re: XFS questions Message-ID: <20060908191940.GC30358@tuatara.stupidest.org> References: <200609081923.08215.microchip@chello.be> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200609081923.08215.microchip@chello.be> X-archive-position: 8928 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs Content-Length: 1725 Lines: 49 On Fri, Sep 08, 2006 at 07:23:07PM +0200, Grozdan Nikolov wrote: > I'm also worried about the 'zeroing' thing in XFS. Most of what people claim is a bit vague and often incorrect. > 1) How reliable is XFS at data-integrity? Fine, if your applications are sane. MTAs like postfix for example *never* had any problems with XFS. > 2) Will the 'zeroing' thing be removed/fixed in the near future? What usually happens is that if you truncate over a file and write data, *then* lose power, some of the data might not have been written to disk yet, so when you read it back XFS returns zeroes. This is normal/expected for journalling filesystems.
Now, by default ext3 doesn't have this 'problem' so I think people have a misconception as to how it should work (or how they would like it to). In recent kernels when XFS truncates over a file the data is flushed after close() so the chances of losing data are much less. Why have people seen this? Because a lot of applications do something like:

open file ~/.bookmarks
read
close
[...]
open file ~/.bookmarks, truncating the existing file
write
close

The open + truncate is journalled, so that will survive a power failure, but the 'write' isn't --- so you might end up with a file that looks like it has zeroes. I'll claim this in general is a bad practise and I'm also going to claim applications that do this ideally should be fixed to open/creat tmp, write, fsync, close, rename tmp to original. Not only does that make things more reliable for XFS, but also pretty much every other fs out there on any unix-like OS. It also is more reliable against something like a crash/power failure in the application during write out (which I've seen). From owner-xfs@oss.sgi.com Sat Sep 9 16:41:33 2006 Received: with ECARTIS (v1.0.0; list xfs); Sat, 09 Sep 2006 16:41:39 -0700 (PDT) Received: from smtp106.biz.mail.mud.yahoo.com (smtp106.biz.mail.mud.yahoo.com [68.142.200.254]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k89NfWDW025958 for ; Sat, 9 Sep 2006 16:41:32 -0700 Received: (qmail 24288 invoked from network); 9 Sep 2006 22:40:56 -0000 Received: from unknown (HELO ?192.168.0.6?)
(mikem@stwo-corp.com@70.34.34.158 with plain) by smtp106.biz.mail.mud.yahoo.com with SMTP; 9 Sep 2006 22:40:55 -0000 Message-ID: <450342F9.8090301@stwo-corp.com> Date: Sat, 09 Sep 2006 15:40:57 -0700 From: Michael Morrison Organization: S.Two Corp User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.5) Gecko/20060719 Thunderbird/1.5.0.5 Mnenhy/0.7.4.0 MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: oops mounting unassembled md device Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 8934 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mikem@stwo-corp.com Precedence: bulk X-list: xfs Content-Length: 2317 Lines: 68 Got an oops when trying to mount an XFS filesystem onto an unassembled md raid0 device. What else can I provide to help with this? Linux version: 2.6.18-rc4 Drives are hanging off an LSI fibre channel controller which is driven by the mpt driver in drivers/message/fusion. Steps leading up to this:
1. assemble 8 drive raid0 on /dev/md0
2. mount filesystem: /bin/mount -t xfs /dev/md0 /mnt/testfs -o noatime,nodiratime
3. umount /mnt/testfs
4. mdadm --stop /dev/md0
5.
/bin/mount -t xfs /dev/md0 /mnt/testfs -o noatime,nodiratime BUG: unable to handle kernel NULL pointer dereference at virtual address 00000004 printing eip: c03f18f6 *pde = 00000000 Oops: 0000 [#1] SMP Modules linked in: mptctl ppdev XENA2 tc ch sr_mod w83781d hwmon_vid i2c_isa mptfc mptscsih mptbase scsi_transport_fc i2c_i801 i2c_core rtc cdc_subset usbnet e1000 CPU: 1 EIP: 0060:[] Not tainted VLI EFLAGS: 00010282 (2.6.18-rc4 #5) EIP is at raid0_unplug+0x1a/0x59 eax: 00000000 ebx: 00000000 ecx: c03f18dc edx: 00000000 esi: f7f9a400 edi: e73f3d64 ebp: df5221f8 esp: e73f3d40 ds: 007b es: 007b ss: 0068 Process mount (pid: 11481, ti=e73f2000 task=f7835030 task.ti=e73f2000) Stack: 00000001 e73f3d3c e73f3d3c e73f3d64 c02d7be2 f7d357a4 00000000 df5221f0 00000000 e73f3d64 e73f3d64 f724c800 00000005 f724cb68 f66b57d0 c02c8c6b df5221c0 00000001 f724c800 00000000 f7f21c80 f66b57c0 00000000 00000000 Call Trace: [] xfs_flush_buftarg+0x1b9/0x1bb [] xfs_mount+0x380/0x4d6 [] xfs_fs_fill_super+0xaf/0x20f [] snprintf+0x27/0x2b [] disk_name+0xa9/0xbf [] sb_set_blocksize+0x1f/0x45 [] get_sb_bdev+0x13a/0x17d [] xfs_fs_get_sb+0x37/0x3b [] xfs_fs_fill_super+0x0/0x20f [] vfs_kern_mount+0x55/0xa6 [] do_kern_mount+0x42/0x5a [] do_new_mount+0x83/0xdb [] do_mount+0x1dd/0x20f [] copy_mount_options+0x60/0xb7 [] sys_mount+0x9f/0xe0 [] sysenter_past_esp+0x56/0x79 Code: f0 ff c6 05 5c 3e 53 c0 01 83 c4 10 5b c3 90 90 90 57 56 53 31 db 83 ec 04 8b 44 24 14 8b b0 e8 00 00 00 8b 06 8b 96 98 00 00 00 <8b> 40 04 83 fa 00 8b 78 1c 7e 1a 8b 04 9f 8b 40 18 8b 40 58 8b EIP: [] raid0_unplug+0x1a/0x59 SS:ESP 0068:e73f3d40 From owner-xfs@oss.sgi.com Sun Sep 10 07:38:43 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 10 Sep 2006 07:38:50 -0700 (PDT) Received: from rapidforum.com (www.rapidforum.com [80.237.244.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8AEcgDW020904 for ; Sun, 10 Sep 2006 07:38:43 -0700 Received: (qmail 1440 invoked by uid 1004); 10 Sep 2006 13:38:00 -0000 Received: 
from pd95b5a1f.dip0.t-ipconnect.de (HELO ?217.91.90.31?) (217.91.90.31) by www.rapidforum.com with SMTP; 10 Sep 2006 13:38:00 -0000 Message-ID: <4504151F.6050704@rapidforum.com> Date: Sun, 10 Sep 2006 15:37:35 +0200 From: Christian Schmid User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050920 X-Accept-Language: de, en MIME-Version: 1.0 To: xfs@oss.sgi.com Subject: Critical xfs bug in 2.6.17.11? Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8937 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: webmaster@rapidforum.com Precedence: bulk X-list: xfs Content-Length: 5469 Lines: 102 Hello. Instead of a tmpfs, I use a raid 10 softraid. Unfortunately it crashed after 10 hours of extreme activities (read/block-writes with up to 250 streams/deletes) 12 gb memory-test successful. 2 cpu xeon smp system. Tell me if this helps you: Sep 9 18:08:49 inode430 kernel: [87433.143498] 0x0: 58 41 47 46 00 00 00 01 00 00 00 00 00 04 34 a0 Sep 9 18:08:49 inode430 kernel: [87433.143672] Filesystem "md5": XFS internal error xfs_alloc_read_agf at line 2176 of file fs/xfs/xfs_alloc.c. 
Caller 0xfffffff f80314069 Sep 9 18:08:49 inode430 kernel: [87433.143904] Sep 9 18:08:49 inode430 kernel: [87433.143905] Call Trace: {xfs_corruption_error+244} Sep 9 18:08:49 inode430 kernel: [87433.143995] {xfs_iext_insert+65} {xfs_trans_read_buf+203} Sep 9 18:08:49 inode430 kernel: [87433.144353] {xfs_alloc_read_agf+281} {xfs_alloc_fix_freelist+356} Sep 9 18:08:49 inode430 kernel: [87433.144628] {xfs_alloc_fix_freelist+356} {__down_read+18} Sep 9 18:08:49 inode430 kernel: [87433.144855] {xfs_alloc_vextent+289} {xfs_bmapi+4061} Sep 9 18:08:49 inode430 kernel: [87433.145091] {xfs_bmap_search_multi_extents+175} Sep 9 18:08:49 inode430 kernel: [87433.145226] {xfs_iomap_write_allocate+675} {xfs_iomap+701} Sep 9 18:08:49 inode430 kernel: [87433.145473] {generic_make_request+515} {xfs_map_blocks+67} Sep 9 18:08:49 inode430 kernel: [87433.145846] {xfs_page_state_convert+722} {xfs_vm_writepage+179} Sep 9 18:08:49 inode430 kernel: [87433.146079] {mpage_writepages+459} {xfs_vm_writepage+0} Sep 9 18:08:49 inode430 kernel: [87433.146330] {do_writepages+41} {__writeback_single_inode+559} Sep 9 18:08:49 inode430 kernel: [87433.146583] {default_wake_function+0} {default_wake_function+0} Sep 9 18:08:49 inode430 kernel: [87433.146847] {xfs_trans_first_ail+28} {sync_sb_inodes+501} Sep 9 18:08:49 inode430 kernel: [87433.147230] {keventd_create_kthread+0} {writeback_inodes+144} Sep 9 18:08:49 inode430 kernel: [87433.147463] {wb_kupdate+148} {pdflush+313} Sep 9 18:08:49 inode430 kernel: [87433.147825] {wb_kupdate+0} {pdflush+0} Sep 9 18:08:49 inode430 kernel: [87433.148142] {kthread+218} {child_rip+8} Sep 9 18:08:49 inode430 kernel: [87433.148420] {keventd_create_kthread+0} {kthread+0} Sep 9 18:08:49 inode430 kernel: [87433.148775] {child_rip+0} Sep 9 18:08:49 inode430 kernel: [87433.149105] Filesystem "md5": XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c. 
Caller 0xffffffff80349bf8 Sep 9 18:08:49 inode430 kernel: [87433.149262] Sep 9 18:08:49 inode430 kernel: [87433.149263] Call Trace: {xfs_trans_cancel+111} Sep 9 18:08:49 inode430 kernel: [87433.149348] {xfs_iomap_write_allocate+1006} {xfs_iomap+701} Sep 9 18:08:49 inode430 kernel: [87433.149568] {generic_make_request+515} {xfs_map_blocks+67} Sep 9 18:08:49 inode430 kernel: [87433.149847] {xfs_page_state_convert+722} {xfs_vm_writepage+179} Sep 9 18:08:49 inode430 kernel: [87433.150169] {mpage_writepages+459} {xfs_vm_writepage+0} Sep 9 18:08:49 inode430 kernel: [87433.150435] {do_writepages+41} {__writeback_single_inode+559} Sep 9 18:08:49 inode430 kernel: [87433.150593] {default_wake_function+0} {default_wake_function+0} Sep 9 18:08:49 inode430 kernel: [87433.150807] {xfs_trans_first_ail+28} {sync_sb_inodes+501} Sep 9 18:08:49 inode430 kernel: [87433.151042] {keventd_create_kthread+0} {writeback_inodes+144} Sep 9 18:08:49 inode430 kernel: [87433.151271] {wb_kupdate+148} {pdflush+313} Sep 9 18:08:49 inode430 kernel: [87433.151439] {wb_kupdate+0} {pdflush+0} Sep 9 18:08:49 inode430 kernel: [87433.151680] {kthread+218} {child_rip+8} Sep 9 18:08:49 inode430 kernel: [87433.151922] {keventd_create_kthread+0} {kthread+0} Sep 9 18:08:49 inode430 kernel: [87433.152086] {child_rip+0} Sep 9 18:08:49 inode430 kernel: [87433.152489] xfs_force_shutdown(md5,0x8) called from line 1151 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff80357507 Sep 9 18:08:49 inode430 kernel: [87433.168623] Filesystem "md5": Corruption of in-memory data detected. 
Shutting down filesystem: md5 Sep 9 18:08:49 inode430 kernel: [87433.168903] Please umount the filesystem, and rectify the problem(s) From owner-xfs@oss.sgi.com Sun Sep 10 14:32:35 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 10 Sep 2006 14:32:49 -0700 (PDT) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8ALWYDW001281 for ; Sun, 10 Sep 2006 14:32:34 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 03EE06091D24; Sun, 10 Sep 2006 17:31:56 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id F2D941615F126; Sun, 10 Sep 2006 17:31:56 -0400 (EDT) Date: Sun, 10 Sep 2006 17:31:56 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Christian Schmid cc: xfs@oss.sgi.com Subject: Re: Critical xfs bug in 2.6.17.11? In-Reply-To: <4504151F.6050704@rapidforum.com> Message-ID: References: <4504151F.6050704@rapidforum.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-archive-position: 8939 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 5999 Lines: 133 I hope this is not a repeat of 2.6.17 -> 2.6.17.6.. $ grep -i xfs ChangeLog-2.6.17.* ChangeLog-2.6.17.7: XFS: corruption fix ChangeLog-2.6.17.7: check in xfs_dir2_leafn_remove() fails every time and xfs_dir2_shrink_inode() It appears the only changes to the XFS code though went into 2.6.17.7 so I am not sure what you are seeing there, had you fixed your filesystem from the 2.6.17 -> .17.6 bug? Justin. On Sun, 10 Sep 2006, Christian Schmid wrote: > Hello. > > Instead of a tmpfs, I use a raid 10 softraid. Unfortunately it crashed after > 10 hours of extreme activities (read/block-writes with up to 250 > streams/deletes) > > 12 gb memory-test successful. 2 cpu xeon smp system. 
> > Tell me if this helps you: > > [kernel log and call trace snipped; quoted in full in the first message above] > Sep 9 18:08:49 inode430 kernel: [87433.168623] Filesystem "md5": Corruption > of in-memory data detected. 
Shutting down filesystem: md5 > Sep 9 18:08:49 inode430 kernel: [87433.168903] Please umount the filesystem, > and rectify the problem(s) > > > From owner-xfs@oss.sgi.com Sun Sep 10 15:14:56 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 10 Sep 2006 15:15:02 -0700 (PDT) Received: from rapidforum.com (www.rapidforum.com [80.237.244.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8AMEsDW005615 for ; Sun, 10 Sep 2006 15:14:55 -0700 Received: (qmail 26451 invoked by uid 1004); 10 Sep 2006 22:14:15 -0000 Received: from pd95b5a1f.dip0.t-ipconnect.de (HELO ?217.91.90.31?) (217.91.90.31) by www.rapidforum.com with SMTP; 10 Sep 2006 22:14:15 -0000 Message-ID: <45048E1E.6040002@rapidforum.com> Date: Mon, 11 Sep 2006 00:13:50 +0200 From: Christian Schmid User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050920 X-Accept-Language: de, en MIME-Version: 1.0 To: Justin Piszcz CC: xfs@oss.sgi.com Subject: Re: Critical xfs bug in 2.6.17.11? References: <4504151F.6050704@rapidforum.com> In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8940 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: webmaster@rapidforum.com Precedence: bulk X-list: xfs Content-Length: 6211 Lines: 131 This file-system was created 2 days before with the same kernel. Justin Piszcz wrote: > I hope this is not a repeat of 2.6.17 -> 2.6.17.6.. > > $ grep -i xfs ChangeLog-2.6.17.* > ChangeLog-2.6.17.7: XFS: corruption fix > ChangeLog-2.6.17.7: check in xfs_dir2_leafn_remove() fails every time > and xfs_dir2_shrink_inode() > > It appears the only changes to the XFS code though went into 2.6.17.7 so > I am not sure what you are seeing there, had you fixed your filesystem > from the 2.6.17 -> .17.6 bug? > > Justin. > > On Sun, 10 Sep 2006, Christian Schmid wrote: > >> Hello. >> >> Instead of a tmpfs, I use a raid 10 softraid. 
Unfortunately it crashed >> after 10 hours of extreme activities (read/block-writes with up to 250 >> streams/deletes) >> >> 12 gb memory-test successful. 2 cpu xeon smp system. >> >> Tell me if this helps you: >> >> [kernel log and call trace snipped; quoted in full in the first message above] >> Sep 9 18:08:49 inode430 kernel: >> [87433.152489] 
xfs_force_shutdown(md5,0x8) called from line 1151 of >> file fs/xfs/xfs_trans.c. Return address = 0xffffffff80357507 >> Sep 9 18:08:49 inode430 kernel: [87433.168623] Filesystem "md5": >> Corruption of in-memory data detected. Shutting down filesystem: md5 >> Sep 9 18:08:49 inode430 kernel: [87433.168903] Please umount the >> filesystem, and rectify the problem(s) >> >> >> > > From owner-xfs@oss.sgi.com Sun Sep 10 15:37:42 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 10 Sep 2006 15:37:52 -0700 (PDT) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8AMbeDW008214 for ; Sun, 10 Sep 2006 15:37:41 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 6FBE66091D24; Sun, 10 Sep 2006 18:37:05 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 604A81615F126; Sun, 10 Sep 2006 18:37:05 -0400 (EDT) Date: Sun, 10 Sep 2006 18:37:05 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34.internal.lan To: Christian Schmid cc: xfs@oss.sgi.com Subject: Re: Critical xfs bug in 2.6.17.11? In-Reply-To: <45048E1E.6040002@rapidforum.com> Message-ID: References: <4504151F.6050704@rapidforum.com> <45048E1E.6040002@rapidforum.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-archive-position: 8941 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: xfs Content-Length: 6817 Lines: 143 ACK, scary, will wait for Nathan Scott's/other SGI members reply on this one... I have not had that happen to me yet, what were you doing that caused the problem? Is it repeatable? Have you checked the XFS FAQ for the FS fix for 2.6.17-17.6? just to check if there is indeed any problems (basically an xfs check on your FS), if you do it, dont use knoppix 5.0.2 (contains the 2.6.17 XFS corruption bug), use 4.0.2. 
Justin. On Mon, 11 Sep 2006, Christian Schmid wrote: > This file-system was created 2 days before with the same kernel. > > Justin Piszcz wrote: >> I hope this is not a repeat of 2.6.17 -> 2.6.17.6.. >> >> $ grep -i xfs ChangeLog-2.6.17.* >> ChangeLog-2.6.17.7: XFS: corruption fix >> ChangeLog-2.6.17.7: check in xfs_dir2_leafn_remove() fails every time >> and xfs_dir2_shrink_inode() >> >> It appears the only changes to the XFS code though went into 2.6.17.7 so I >> am not sure what you are seeing there, had you fixed your filesystem from >> the 2.6.17 -> .17.6 bug? >> >> Justin. >> >> On Sun, 10 Sep 2006, Christian Schmid wrote: >> >>> Hello. >>> >>> Instead of a tmpfs, I use a raid 10 softraid. Unfortunately it crashed >>> after 10 hours of extreme activities (read/block-writes with up to 250 >>> streams/deletes) >>> >>> 12 gb memory-test successful. 2 cpu xeon smp system. >>> >>> Tell me if this helps you: >>> >>> Sep 9 18:08:49 inode430 kernel: [87433.143498] 0x0: 58 41 47 46 00 00 00 >>> 01 00 00 00 00 00 04 34 a0 Sep 9 18:08:49 inode430 kernel: [87433.143672] >>> Filesystem "md5": XFS internal error xfs_alloc_read_agf at line 2176 of >>> file fs/xfs/xfs_alloc.c. 
Caller 0xffffffff80314069 >>> [kernel log and call trace snipped; quoted in full in the first message above] >>> Sep 9 18:08:49 inode430 kernel: [87433.168623] Filesystem "md5": >>> Corruption of in-memory data detected. 
Shutting down filesystem: md5 >>> Sep 9 18:08:49 inode430 kernel: [87433.168903] Please umount the >>> filesystem, and rectify the problem(s) >>> >>> >>> >> >> > > From owner-xfs@oss.sgi.com Sun Sep 10 16:36:10 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 10 Sep 2006 16:36:16 -0700 (PDT) Received: from rapidforum.com (www.rapidforum.com [80.237.244.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8ANa8DW019605 for ; Sun, 10 Sep 2006 16:36:09 -0700 Received: (qmail 7295 invoked by uid 1004); 10 Sep 2006 23:35:33 -0000 Received: from pd95b5a1f.dip0.t-ipconnect.de (HELO ?217.91.90.31?) (217.91.90.31) by www.rapidforum.com with SMTP; 10 Sep 2006 23:35:33 -0000 Message-ID: <4504A12C.9090608@rapidforum.com> Date: Mon, 11 Sep 2006 01:35:08 +0200 From: Christian Schmid User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050920 X-Accept-Language: de, en MIME-Version: 1.0 To: Justin Piszcz CC: xfs@oss.sgi.com Subject: Re: Critical xfs bug in 2.6.17.11? References: <4504151F.6050704@rapidforum.com> <45048E1E.6040002@rapidforum.com> In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8942 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: webmaster@rapidforum.com Precedence: bulk X-list: xfs Content-Length: 7545 Lines: 158 I am using www.linuxfromscratch.org. Kernel is vanilla 2.6.17.11 from kernel.org, xfsprogs is 2.7.3, libc is 2.3.3. I was not doing anything special. In fact its a heavy-duty tmpfs with up to 300 write-streams at once, reads and deletes. So basically a heavy stress-test on a SMP. Maybe a race-condition? Pure speculations from my side. But memory is ok. Memory-test with ECC disabled ran through 12 hours without any errors. ECC is on now of course, so the possibility of a simple hardware problem is eliminated from my side. 
Justin Piszcz wrote: > ACK, scary, will wait for Nathan Scott's/other SGI members reply on this > one... > > I have not had that happen to me yet, what were you doing that caused > the problem? Is it repeatable? Have you checked the XFS FAQ for the FS > fix for 2.6.17-17.6? just to check if there is indeed any problems > (basically an xfs check on your FS), if you do it, dont use knoppix > 5.0.2 (contains the 2.6.17 XFS corruption bug), use 4.0.2. > > Justin. > > On Mon, 11 Sep 2006, Christian Schmid wrote: > >> This file-system was created 2 days before with the same kernel. >> >> Justin Piszcz wrote: >> >>> I hope this is not a repeat of 2.6.17 -> 2.6.17.6.. >>> >>> $ grep -i xfs ChangeLog-2.6.17.* >>> ChangeLog-2.6.17.7: XFS: corruption fix >>> ChangeLog-2.6.17.7: check in xfs_dir2_leafn_remove() fails every >>> time and xfs_dir2_shrink_inode() >>> >>> It appears the only changes to the XFS code though went into 2.6.17.7 >>> so I am not sure what you are seeing there, had you fixed your >>> filesystem from the 2.6.17 -> .17.6 bug? >>> >>> Justin. >>> >>> On Sun, 10 Sep 2006, Christian Schmid wrote: >>> >>>> Hello. >>>> >>>> Instead of a tmpfs, I use a raid 10 softraid. Unfortunately it >>>> crashed after 10 hours of extreme activities (read/block-writes with >>>> up to 250 streams/deletes) >>>> >>>> 12 gb memory-test successful. 2 cpu xeon smp system. >>>> >>>> Tell me if this helps you: >>>> >>>> Sep 9 18:08:49 inode430 kernel: [87433.143498] 0x0: 58 41 47 46 00 >>>> 00 00 01 00 00 00 00 00 04 34 a0 Sep 9 18:08:49 inode430 kernel: >>>> [87433.143672] Filesystem "md5": XFS internal error >>>> xfs_alloc_read_agf at line 2176 of file fs/xfs/xfs_alloc.c. 
Caller >>>> 0xffffffff80314069 >>>> [kernel log and call trace snipped; quoted in full in the first message above] >>>> Sep 9 18:08:49 inode430 kernel: [87433.168623] Filesystem "md5": >>>> Corruption of in-memory data detected. 
Shutting down filesystem: md5 >>>> Sep 9 18:08:49 inode430 kernel: [87433.168903] Please umount the >>>> filesystem, and rectify the problem(s) >>>> >>>> >>>> >>> >>> >> >> > > From owner-xfs@oss.sgi.com Sun Sep 10 17:47:52 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 10 Sep 2006 17:48:18 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8B0ldDW028276 for ; Sun, 10 Sep 2006 17:47:50 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA27606; Mon, 11 Sep 2006 10:46:47 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k8B0kjeQ16891906; Mon, 11 Sep 2006 10:46:45 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k8B0khjR16867313; Mon, 11 Sep 2006 10:46:43 +1000 (AEST) Date: Mon, 11 Sep 2006 10:46:43 +1000 From: David Chinner To: Michael Morrison Cc: linux-xfs@oss.sgi.com Subject: Re: oops mounting unassembled md device Message-ID: <20060911004643.GJ10950339@melbourne.sgi.com> References: <450342F9.8090301@stwo-corp.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <450342F9.8090301@stwo-corp.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 8944 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1063 Lines: 34 On Sat, Sep 09, 2006 at 03:40:57PM -0700, Michael Morrison wrote: > Got on oops when trying to mount xfs filesystem onto unassembled md > raid0 device. > What else can I provide to help with this? 
> > Linux version: 2.6.18-rc4 > Drives are hanging off an LSI fibre channel controller which is driven by > the mpt driver in drivers/message/fusion. > > Steps leading up to this: > > 1. assemble 8 drive raid0 on /dev/md0 > 2. mount filesystem: /bin/mount -t xfs /dev/md0 /mnt/testfs -o > noatime,nodiratime > 3. umount /mnt/testfs > 4. mdadm --stop /dev/md0 > 5. /bin/mount -t xfs /dev/md0 /mnt/testfs -o noatime,nodiratime So why did MD allow /dev/md0 to be used if the device had been stopped? According to the mdadm man page a --stop will release all resources associated with the md device so after stopping /dev/md0 it should not be possible to use it. This is not an XFS problem - this is either an MD bug (leaving /dev/md0 around after being stopped) and/or user error. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Sep 10 17:44:36 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 10 Sep 2006 17:44:45 -0700 (PDT) Received: from nf-out-0910.google.com (nf-out-0910.google.com [64.233.182.190]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8B0iZDW027973 for ; Sun, 10 Sep 2006 17:44:35 -0700 Received: by nf-out-0910.google.com with SMTP id a25so1062790nfc for ; Sun, 10 Sep 2006 17:44:00 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:user-agent:mime-version:to:subject:x-enigmail-version:content-type:content-transfer-encoding; b=cAzJehQtGncaBGc+AwjqDncFZoo0dQJ4TNZvKTA1NxnwCQlYqXBURZ9nyqi1blX8X3nP38olE5zyGdHr72YqI31SYao1iu6rWm1LTnVMWfDZOg9lRPJwT1U6t/T6x81hDOgn2vcrpysNv2vKFle11vf+SGrRAvLh20KdhvSDwds= Received: by 10.49.94.20 with SMTP id w20mr7332481nfl; Sun, 10 Sep 2006 16:39:16 -0700 (PDT) Received: from ?192.168.1.2? 
( [151.38.73.163]) by mx.gmail.com with ESMTP id c1sm11425005nfe.2006.09.10.16.39.15; Sun, 10 Sep 2006 16:39:15 -0700 (PDT) Message-ID: <4504A3A9.4080704@gmail.com> Date: Mon, 11 Sep 2006 01:45:45 +0200 From: Enrico Maria Crisostomo User-Agent: Thunderbird 1.5.0.5 (X11/20060719) MIME-Version: 1.0 To: xfs@oss.sgi.com Subject: xfs_repair -d doesn't work X-Enigmail-Version: 0.94.0.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 8943 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: enrico.m.crisostomo@gmail.com Precedence: bulk X-list: xfs Content-Length: 1177 Lines: 33 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi. I'm running a slackware-current with xfsprogs-2.8.10_1. On some machine I ran many 2.6.17 kernel and one machine is showing the "directory problem": I cannot get rid of many "Filesystem "hda3": XFS internal error xfs_da_do_buf(2) at line 2212 of file fs/xfs/xfs_da_btree.c. Caller 0xe0ac57d9". As I'm now running 2.6.17.13 I think I'm at safe from this bug and the last thing to do is... repairing my root filesystem. Unfortunately xfs_repair -d and xfs_repair -n does not work, and it returns saying it could not initialize XFS library because the filesystem is mounted AND writable. What I did: - - remounted the file system read only: did not work, not even xfs_repair -n, which I expect to succeed always - - hacked /etc/mtab to change rw to ro: did not work, error is the same as before. Can you suggest me some way to repair my file system and confirm me that this xfs_repair behavior is a bug? Thank you, Enrico M. 
Crisostomo -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.5 (GNU/Linux) iD8DBQFFBKOpW8+x8v0iKa8RAkMPAKDbwuF/pgkX8epIvkKEZOjQKP2cbgCghvJB IkbMDVHlvNJgRnwZ61YTgwk= =ht3Y -----END PGP SIGNATURE----- From owner-xfs@oss.sgi.com Sun Sep 10 17:55:37 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 10 Sep 2006 17:55:52 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8B0tODW029447 for ; Sun, 10 Sep 2006 17:55:35 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA27712; Mon, 11 Sep 2006 10:54:32 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k8B0sVeQ16871507; Mon, 11 Sep 2006 10:54:31 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k8B0sTOR16905225; Mon, 11 Sep 2006 10:54:29 +1000 (AEST) Date: Mon, 11 Sep 2006 10:54:29 +1000 From: David Chinner To: Christian Schmid Cc: xfs@oss.sgi.com Subject: Re: Critical xfs bug in 2.6.17.11? Message-ID: <20060911005429.GK10950339@melbourne.sgi.com> References: <4504151F.6050704@rapidforum.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4504151F.6050704@rapidforum.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 8945 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 930 Lines: 31 On Sun, Sep 10, 2006 at 03:37:35PM +0200, Christian Schmid wrote: > Hello. > > Instead of a tmpfs, I use a raid 10 softraid. Unfortunately it crashed > after 10 hours of extreme activities (read/block-writes with up to 250 > streams/deletes) > > 12 gb memory-test successful. 2 cpu xeon smp system. 
> > Tell me if this helps you: > > Sep 9 18:08:49 inode430 kernel: [87433.143498] 0x0: 58 41 47 46 00 00 00 > 01 00 00 00 00 00 04 34 a0 > Sep 9 18:08:49 inode430 kernel: [87433.143672] Filesystem "md5": XFS > internal error xfs_alloc_read_agf at line 2176 of file fs/xfs/xfs_alloc.c. > Caller 0xffffffff80314069 Hmm - bad read of the AGF during allocation, which causes a shutdown due to cancelling a dirty transaction. If you run "xfs_check -n /dev/md5" and "xfs_repair -n /dev/md5" do they warn about any AGF related corruption? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Sep 10 18:01:46 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 10 Sep 2006 18:02:02 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8B11XDW030407 for ; Sun, 10 Sep 2006 18:01:44 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA27939; Mon, 11 Sep 2006 11:00:40 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k8B10ceQ16780051; Mon, 11 Sep 2006 11:00:38 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k8B10ZaQ16906667; Mon, 11 Sep 2006 11:00:35 +1000 (AEST) Date: Mon, 11 Sep 2006 11:00:35 +1000 From: David Chinner To: Christian Schmid Cc: Justin Piszcz , xfs@oss.sgi.com Subject: Re: Critical xfs bug in 2.6.17.11? 
Message-ID: <20060911010035.GL10950339@melbourne.sgi.com> References: <4504151F.6050704@rapidforum.com> <45048E1E.6040002@rapidforum.com> <4504A12C.9090608@rapidforum.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4504A12C.9090608@rapidforum.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 8946 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 621 Lines: 19 On Mon, Sep 11, 2006 at 01:35:08AM +0200, Christian Schmid wrote: > Memory-test with ECC disabled ran through 12 hours without any errors. ECC > is on now of course, so the possibility of a simple hardware problem is > eliminated from my side. A _memory error_ can be ruled out, but what about a bad disk, bad disk controller, bad PCI bus interface, a bad driver, etc. Memory is just one piece of hardware that can result in bad data being read from or written to disk. Is there any indication of disk or driver errors in your syslog? Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Sun Sep 10 18:10:24 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 10 Sep 2006 18:10:38 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8B1ABDW031571 for ; Sun, 10 Sep 2006 18:10:22 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA28085; Mon, 11 Sep 2006 11:09:19 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k8B19IeQ16034862; Mon, 11 Sep 2006 11:09:18 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k8B19Gsu16906848; Mon, 11 Sep 2006 11:09:16 +1000 (AEST) Date: Mon, 11 Sep 2006 11:09:16 +1000 From: David Chinner To: Enrico Maria Crisostomo Cc: xfs@oss.sgi.com Subject: Re: xfs_repair -d doesn't work Message-ID: <20060911010916.GM10950339@melbourne.sgi.com> References: <4504A3A9.4080704@gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4504A3A9.4080704@gmail.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 8947 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1541 Lines: 42 On Mon, Sep 11, 2006 at 01:45:45AM +0200, Enrico Maria Crisostomo wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Hi. > > I'm running a slackware-current with xfsprogs-2.8.10_1. On some > machine I ran many 2.6.17 kernel and one machine is showing the > "directory problem": I cannot get rid of many "Filesystem "hda3": XFS > internal error xfs_da_do_buf(2) at line 2212 of file > fs/xfs/xfs_da_btree.c. Caller 0xe0ac57d9". 
As I'm now running > 2.6.17.13 I think I'm at safe from this bug and the last thing to do > is... repairing my root filesystem. Unfortunately xfs_repair -d and > xfs_repair -n does not work, and it returns saying it could not > initialize XFS library because the filesystem is mounted AND writable. > What I did: > - - remounted the file system read only: did not work, not even > xfs_repair -n, which I expect to succeed always > - - hacked /etc/mtab to change rw to ro: did not work, error is the same > as before. IIRC, we recently updated libxfs to look at /proc/mounts rather than /etc/mtab to fix these sorts of problems. As it is, to fix the "directory problem" you need a more recent xfsprogs (2.8.11 IIRC), so I'd suggest the first thing to do is upgrade your xfsprogs and try again. > Can you suggest me some way to repair my file system You should be able to boot to single user mode, remount the root filesystem readonly and then "xfs_repair -d " to fix it. You need to reboot after doing this.... Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Sep 11 05:32:22 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Sep 2006 05:32:40 -0700 (PDT) Received: from rapidforum.com (www.rapidforum.com [80.237.244.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8BCWKDW005843 for ; Mon, 11 Sep 2006 05:32:21 -0700 Received: (qmail 21164 invoked by uid 1004); 11 Sep 2006 12:31:46 -0000 Received: from pd95b5a1f.dip0.t-ipconnect.de (HELO ?217.91.90.31?) (217.91.90.31) by www.rapidforum.com with SMTP; 11 Sep 2006 12:31:46 -0000 Message-ID: <45055717.4090800@rapidforum.com> Date: Mon, 11 Sep 2006 14:31:19 +0200 From: Christian Schmid User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050920 X-Accept-Language: de, en MIME-Version: 1.0 To: David Chinner CC: Justin Piszcz , xfs@oss.sgi.com Subject: Re: Critical xfs bug in 2.6.17.11? 
References: <4504151F.6050704@rapidforum.com> <45048E1E.6040002@rapidforum.com> <4504A12C.9090608@rapidforum.com> <20060911010035.GL10950339@melbourne.sgi.com> In-Reply-To: <20060911010035.GL10950339@melbourne.sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8953 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: webmaster@rapidforum.com Precedence: bulk X-list: xfs Content-Length: 877 Lines: 22 I am not sure. It's a Linux software RAID, and as far as I know there is a CRC check in the drives: if they can't read the data, they return an error, the RAID gets the data from the other drive, and that drive is marked as broken. As far as the log says, the drives are OK. David Chinner wrote: > On Mon, Sep 11, 2006 at 01:35:08AM +0200, Christian Schmid wrote: > >>Memory-test with ECC disabled ran through 12 hours without any errors. ECC >>is on now of course, so the possibility of a simple hardware problem is >>eliminated from my side. > > > A _memory error_ can be ruled out, but what about a bad disk, bad > disk controller, bad PCI bus interface, a bad driver, etc. Memory is > just one piece of hardware that can result in bad data being read > from or written to disk. Is there any indication of disk or driver > errors in your syslog? > > Cheers, > > Dave. From owner-xfs@oss.sgi.com Mon Sep 11 05:30:45 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Sep 2006 05:31:01 -0700 (PDT) Received: from rapidforum.com (www.rapidforum.com [80.237.244.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8BCUfDW005382 for ; Mon, 11 Sep 2006 05:30:45 -0700 Received: (qmail 21098 invoked by uid 1004); 11 Sep 2006 12:30:02 -0000 Received: from pd95b5a1f.dip0.t-ipconnect.de (HELO ?217.91.90.31?)
(217.91.90.31) by www.rapidforum.com with SMTP; 11 Sep 2006 12:30:02 -0000 Message-ID: <450556AF.6030201@rapidforum.com> Date: Mon, 11 Sep 2006 14:29:35 +0200 From: Christian Schmid User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050920 X-Accept-Language: de, en MIME-Version: 1.0 To: David Chinner CC: xfs@oss.sgi.com Subject: Re: Critical xfs bug in 2.6.17.11? References: <4504151F.6050704@rapidforum.com> <20060911005429.GK10950339@melbourne.sgi.com> In-Reply-To: <20060911005429.GK10950339@melbourne.sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8952 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: webmaster@rapidforum.com Precedence: bulk X-list: xfs Content-Length: 1006 Lines: 32 Unfortunately after this error I did a mkfs.xfs on this device, because its just a /tmp fs. David Chinner wrote: > On Sun, Sep 10, 2006 at 03:37:35PM +0200, Christian Schmid wrote: > >>Hello. >> >>Instead of a tmpfs, I use a raid 10 softraid. Unfortunately it crashed >>after 10 hours of extreme activities (read/block-writes with up to 250 >>streams/deletes) >> >>12 gb memory-test successful. 2 cpu xeon smp system. >> >>Tell me if this helps you: >> >>Sep 9 18:08:49 inode430 kernel: [87433.143498] 0x0: 58 41 47 46 00 00 00 >>01 00 00 00 00 00 04 34 a0 >>Sep 9 18:08:49 inode430 kernel: [87433.143672] Filesystem "md5": XFS >>internal error xfs_alloc_read_agf at line 2176 of file fs/xfs/xfs_alloc.c. >>Caller 0xffffffff80314069 > > > Hmm - bad read of the AGF during allocation, which causes a shutdown > due to cancelling a dirty transaction. > > If you run "xfs_check -n /dev/md5" and "xfs_repair -n /dev/md5" do > they warn about any AGF related corruption? > > Cheers, > > Dave. 
From owner-xfs@oss.sgi.com Mon Sep 11 05:27:15 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Sep 2006 05:27:25 -0700 (PDT) Received: from mail.crc.dk (mail.crc.dk [130.226.184.8]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8BCRCDW004537 for ; Mon, 11 Sep 2006 05:27:14 -0700 Received: from localhost (localhost [127.0.0.1]) by mail.crc.dk (8.12.11.20060308/8.12.11) with ESMTP id k8BAWHZC017278; Mon, 11 Sep 2006 12:32:17 +0200 Received: from mail.crc.dk ([127.0.0.1]) by localhost (mail.crc.dk [127.0.0.1]) (amavisd-new, port 10024) with LMTP id 13462-10; Mon, 11 Sep 2006 12:32:14 +0200 (CEST) Received: from [130.226.183.100] (mkx.crc.dk [130.226.183.100]) by mail.crc.dk (8.12.11.20060308/8.12.11) with ESMTP id k8BAWDRZ017268; Mon, 11 Sep 2006 12:32:13 +0200 Message-ID: <45053B2D.50203@crc.dk> Date: Mon, 11 Sep 2006 12:32:13 +0200 From: Mogens Kjaer Organization: Carlsberg Laboratory User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.13) Gecko/20060501 Fedora/1.7.13-1.1.fc5 X-Accept-Language: en, da MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Mounting IRIX disk on Linux Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-archive-position: 8951 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mk@crc.dk Precedence: bulk X-list: xfs Content-Length: 1632 Lines: 55 When I try to mount a disk from our SGI Origin 200 machine under Linux (Fedora Core 5), I get the following error message: # mount /dev/sda1 /mnt/sgi mount: Function not implemented The partitions are found during boot: ... SCSI device sda: 143374738 512-byte hdwr sectors (73408 MB) sda: Write Protect is off sda: Mode Sense: 9f 00 10 08 SCSI device sda: drive cache: write back w/ FUA SCSI device sda: 143374738 512-byte hdwr sectors (73408 MB) sda: Write Protect is off sda: Mode Sense: 9f 00 10 08 SCSI device sda: drive cache: write back w/ FUA sda: sda1 sda2 sda9 sda11 ... 
# fdisk -l /dev/sda

Disk /dev/sda (SGI disk label): 255 heads, 63 sectors, 8924 cylinders
Units = cylinders of 16065 * 512 bytes

----- partitions -----
Pt#  Device     Info  Start   End   Sectors    Id  System
 1:  /dev/sda1  boot     17  8924  143108498   a   SGI xfs
 2:  /dev/sda2  swap      1    16     262144   3   SGI raw
 9:  /dev/sda3            0     0       4096   0   SGI volhdr
11:  /dev/sda4            0  8924  143374738   6   SGI volume
----- Bootinfo -----
Bootfile: /unix
----- Directory Entries -----

The only thing that gets logged during the mount is:

SGI XFS with ACLs, security attributes, large block numbers, no debug enabled
SGI XFS Quota Management subsystem

How do I determine which of the requirements listed in http://oss.sgi.com/projects/xfs/faq.html#useirixxfs has not been fulfilled?

Mogens
--
Mogens Kjaer, Carlsberg A/S, Computer Department Gamle Carlsberg Vej 10, DK-2500 Valby, Denmark Phone: +45 33 27 53 25, Fax: +45 33 27 47 08 Email: mk@crc.dk Homepage: http://www.crc.dk

From owner-xfs@oss.sgi.com Mon Sep 11 11:30:02 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Sep 2006 11:30:12 -0700 (PDT) Received: from slurp.thebarn.com (cattelan-host202.dsl.visi.com [208.42.117.202]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8BIU0if006161 for ; Mon, 11 Sep 2006 11:30:00 -0700 Received: from [127.0.0.1] (lupo.thebarn.com [10.0.0.10]) (authenticated bits=0) by slurp.thebarn.com (8.13.5/8.13.5) with ESMTP id k8BGLtL4021271; Mon, 11 Sep 2006 11:21:56 -0500 (CDT) (envelope-from cattelan@thebarn.com) Subject: Re: Mounting IRIX disk on Linux From: Russell Cattelan To: Mogens Kjaer Cc: linux-xfs@oss.sgi.com In-Reply-To: <45053B2D.50203@crc.dk> References: <45053B2D.50203@crc.dk> Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="=-yYy194lQdmz6sZbIXzy2" Date: Mon, 11 Sep 2006 11:21:55 -0500 Message-Id: <1157991715.3651.14.camel@xenon.msp.redhat.com> Mime-Version: 1.0 X-Mailer: Evolution 2.7.92-1mdv2007.0 X-archive-position: 8959 X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cattelan@thebarn.com Precedence: bulk X-list: xfs Content-Length: 2466 Lines: 81 --=-yYy194lQdmz6sZbIXzy2 Content-Type: text/plain Content-Transfer-Encoding: quoted-printable Was this disk cleanly unmounted from the SGI box? If the log is dirty, Linux won't be able to mount it, since the log will be in the wrong endian format (assuming you are using a little-endian Linux box). If you can, cleanly mount and unmount it on the SGI box; if that is not possible, run xfs_repair -L to clear the log. But note that this can potentially lose important file system info. On Mon, 2006-09-11 at 12:32 +0200, Mogens Kjaer wrote: > When I try to mount a disk from our SGI Origin 200 > machine under Linux (Fedora Core 5), I get the following error message: > > # mount /dev/sda1 /mnt/sgi > mount: Function not implemented > > The partitions are found during boot: > > ... > SCSI device sda: 143374738 512-byte hdwr sectors (73408 MB) > sda: Write Protect is off > sda: Mode Sense: 9f 00 10 08 > SCSI device sda: drive cache: write back w/ FUA > SCSI device sda: 143374738 512-byte hdwr sectors (73408 MB) > sda: Write Protect is off > sda: Mode Sense: 9f 00 10 08 > SCSI device sda: drive cache: write back w/ FUA > sda: sda1 sda2 sda9 sda11 > ...
> # fdisk -l /dev/sda > > Disk /dev/sda (SGI disk label): 255 heads, 63 sectors, 8924 cylinders > Units = cylinders of 16065 * 512 bytes > > ----- partitions ----- > Pt# Device Info Start End Sectors Id System > 1: /dev/sda1 boot 17 8924 143108498 a SGI xfs > 2: /dev/sda2 swap 1 16 262144 3 SGI raw > 9: /dev/sda3 0 0 4096 0 SGI volhdr > 11: /dev/sda4 0 8924 143374738 6 SGI volume > ----- Bootinfo ----- > Bootfile: /unix > ----- Directory Entries ----- > > The only thing that gets logged during the mount is: > > SGI XFS with ACLs, security attributes, large block numbers, no debug > enabled > SGI XFS Quota Management subsystem > > > How do I determine which of the requirements > listed in: > > http://oss.sgi.com/projects/xfs/faq.html#useirixxfs > > that hasn't been fulfilled? > > Mogens --=-yYy194lQdmz6sZbIXzy2 Content-Type: application/pgp-signature; name=signature.asc Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.5 (GNU/Linux) iD8DBQBFBY0jNRmM+OaGhBgRAqyeAJ4un3ZC02TxepQCRCOGkbFx6gEuCgCfTaUK 2SfdjsniqCi5eXriSONVcYQ= =0x0c -----END PGP SIGNATURE----- --=-yYy194lQdmz6sZbIXzy2-- From owner-xfs@oss.sgi.com Mon Sep 11 17:02:41 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Sep 2006 17:03:07 -0700 (PDT) Received: from smtp.osdl.org (smtp.osdl.org [65.172.181.4]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8C02dij010106 for ; Mon, 11 Sep 2006 17:02:41 -0700 Received: from shell0.pdx.osdl.net (fw.osdl.org [65.172.181.6]) by smtp.osdl.org (8.12.8/8.12.8) with ESMTP id k8BMrCnW001025 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Mon, 11 Sep 2006 15:53:13 -0700 Received: from akpm.corp.google.com (shell0.pdx.osdl.net [10.9.0.31]) by shell0.pdx.osdl.net (8.13.1/8.11.6) with SMTP id k8BMrB80030982; Mon, 11 Sep 2006 15:53:11 -0700 Date: Mon, 11 Sep 2006 15:53:11 -0700 From: Andrew Morton To: Judith Lebzelter Cc: tglx@linutronix.de,
linux-kernel@vger.kernel.org, Greg KH , linux-xfs@oss.sgi.com Subject: Re: 2.6.18-rc6-mm1 'uio_read' redefined, breaks allyesconfig on i386 Message-Id: <20060911155311.270a8fbb.akpm@osdl.org> In-Reply-To: <20060911224520.GJ9335@shell0.pdx.osdl.net> References: <20060911224520.GJ9335@shell0.pdx.osdl.net> X-Mailer: Sylpheed version 2.2.7 (GTK+ 2.8.6; i686-pc-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.148 $ X-Scanned-By: MIMEDefang 2.36 X-archive-position: 8961 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@osdl.org Precedence: bulk X-list: xfs Content-Length: 1228 Lines: 31 On Mon, 11 Sep 2006 15:45:20 -0700 Judith Lebzelter wrote: > Hello, > > I noticed that the 'allyesconfig' build for i386 is not working for 2.6.18-rc6-mm1. > The function 'uio_read' in gregkh-driver-uio.patch has the same name as a > function in fs/xfs/support/move.c. Here is the error message: > > LD drivers/w1/built-in.o > LD drivers/built-in.o > GEN .version > CHK include/linux/compile.h > UPD include/linux/compile.h > CC init/version.o > LD init/built-in.o > LD .tmp_vmlinux1 > drivers/built-in.o(.text+0x6eb597): In function `uio_read': > drivers/uio/uio_dev.c:59: multiple definition of `uio_read' > fs/built-in.o(.text+0x2f4ee8):fs/xfs/support/move.c:26: first defined here > i686-unknown-linux-gnu-ld: Warning: size of symbol `uio_read' changed from 123 in fs/built-in.o to 397 in drivers/built-in.o > make: [.tmp_vmlinux1] Error 1 (ignored) > KSYM .tmp_kallsyms1.S > i686-unknown-linux-gnu-nm: '.tmp_vmlinux1': No such file > No valid symbol. > make: [.tmp_kallsyms1.S] Error 1 (ignored) > Thanks. I'd suggest that XFS is being poorly behaved here. "uio_read" isn't an appropriately named symbol for a filesystem to be exposing.
From owner-xfs@oss.sgi.com Mon Sep 11 17:32:40 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Sep 2006 17:32:51 -0700 (PDT) Received: from smtp.osdl.org (smtp.osdl.org [65.172.181.4]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8C0Wdih014790 for ; Mon, 11 Sep 2006 17:32:40 -0700 Received: from shell0.pdx.osdl.net (fw.osdl.org [65.172.181.6]) by smtp.osdl.org (8.12.8/8.12.8) with ESMTP id k8BNSxnW003159 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Mon, 11 Sep 2006 16:29:00 -0700 Received: from shell0.pdx.osdl.net (localhost [127.0.0.1]) by shell0.pdx.osdl.net (8.13.1/8.11.6) with ESMTP id k8BNSxFk032484; Mon, 11 Sep 2006 16:28:59 -0700 Received: (from judith@localhost) by shell0.pdx.osdl.net (8.13.1/8.13.1/Submit) id k8BNSxlc032483; Mon, 11 Sep 2006 16:28:59 -0700 X-Authentication-Warning: shell0.pdx.osdl.net: judith set sender to judith@osdl.org using -f Date: Mon, 11 Sep 2006 16:28:59 -0700 From: Judith Lebzelter To: Andrew Morton Cc: Judith Lebzelter , tglx@linutronix.de, linux-kernel@vger.kernel.org, Greg KH , linux-xfs@oss.sgi.com Subject: Re: 2.6.18-rc6-mm1 'uio_read' redefined, breaks allyesconfig on i386 Message-ID: <20060911232859.GK9335@shell0.pdx.osdl.net> References: <20060911224520.GJ9335@shell0.pdx.osdl.net> <20060911155311.270a8fbb.akpm@osdl.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20060911155311.270a8fbb.akpm@osdl.org> User-Agent: Mutt/1.5.6i X-MIMEDefang-Filter: osdl$Revision: 1.148 $ X-Scanned-By: MIMEDefang 2.36 X-archive-position: 8962 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: judith@osdl.org Precedence: bulk X-list: xfs Content-Length: 1281 Lines: 28 On Mon, Sep 11, 2006 at 03:53:11PM -0700, Andrew Morton wrote: > On Mon, 11 Sep 2006 15:45:20 -0700 > Judith Lebzelter wrote: > > I noticed in the 'allyesconfig' build for i386 is not working for 
2.6.18-rc6-mm1. > > The function 'uio_read' in gregkh-driver-uio.patch has the same name as a > > function in fs/xfs/support/move.c. Here is the error message: > > > > LD init/built-in.o > > LD .tmp_vmlinux1 > > drivers/built-in.o(.text+0x6eb597): In function `uio_read': > > drivers/uio/uio_dev.c:59: multiple definition of `uio_read' > > fs/built-in.o(.text+0x2f4ee8):fs/xfs/support/move.c:26: first defined here > > i686-unknown-linux-gnu-ld: Warning: size of symbol `uio_read' changed from 123 in fs/built-in.o to 397 in drivers/built-in.o > > make: [.tmp_vmlinux1] Error 1 (ignored) > > KSYM .tmp_kallsyms1.S > > i686-unknown-linux-gnu-nm: '.tmp_vmlinux1': No such file > > No valid symbol. > > make: [.tmp_kallsyms1.S] Error 1 (ignored) > > > > Thanks. I'd suggest that XFS is being poorly behaved here. "uio_read" isn't > an appropriately named symbol for a filesystem to be exposing. Great. This is showing up on other platforms as well in PLM (OSDL's cross-compile build farm), so it will be good to see it fixed.:~) Judith From owner-xfs@oss.sgi.com Mon Sep 11 23:21:55 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Sep 2006 23:22:03 -0700 (PDT) Received: from mail.crc.dk (mail.crc.dk [130.226.184.8]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8C6Loif023087 for ; Mon, 11 Sep 2006 23:21:54 -0700 Received: from localhost (localhost [127.0.0.1]) by mail.crc.dk (8.12.11.20060308/8.12.11) with ESMTP id k8C6LFdb007897 for ; Tue, 12 Sep 2006 08:21:15 +0200 Received: from mail.crc.dk ([127.0.0.1]) by localhost (mail.crc.dk [127.0.0.1]) (amavisd-new, port 10024) with LMTP id 16765-10 for ; Tue, 12 Sep 2006 08:21:13 +0200 (CEST) Received: from [130.226.183.100] (mkx.crc.dk [130.226.183.100]) by mail.crc.dk (8.12.11.20060308/8.12.11) with ESMTP id k8C6L9QX007892 for ; Tue, 12 Sep 2006 08:21:09 +0200 Message-ID: <450651D5.2010706@crc.dk> Date: Tue, 12 Sep 2006 08:21:09 +0200 From: Mogens Kjaer Organization: Carlsberg Laboratory User-Agent: Mozilla/5.0 
(X11; U; Linux i686; en-US; rv:1.7.13) Gecko/20060501 Fedora/1.7.13-1.1.fc5 X-Accept-Language: en, da MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Re: Mounting IRIX disk on Linux References: <45053B2D.50203@crc.dk> <1157991715.3651.14.camel@xenon.msp.redhat.com> In-Reply-To: <1157991715.3651.14.camel@xenon.msp.redhat.com> Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-archive-position: 8963 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mk@crc.dk Precedence: bulk X-list: xfs Content-Length: 951 Lines: 28 Russell Cattelan wrote: > Was this disk cleanly unmounted from the SGI box? > > If the log is dirty linux won't be able to mount it since it will be > in the wrong endian format (well assuming you are using a > little endian linux box) > > If you can cleanly mount and unmount on the sgi box > if that is not possible run xfs_repair -L to clear > the log. But not that will potentially loose important > file system info. Some of the disks are dirty, some are clean (the Origin boots, but halts during the boot phase because of multiple fan failures - a little strange, as all fans are running). I could mount one of the disks after the xfs_repair -L trick, but it gave massive file system corruption. Not a big deal, we have backups of everything.
Mogens -- Mogens Kjaer, Carlsberg A/S, Computer Department Gamle Carlsberg Vej 10, DK-2500 Valby, Denmark Phone: +45 33 27 53 25, Fax: +45 33 27 47 08 Email: mk@crc.dk Homepage: http://www.crc.dk From owner-xfs@oss.sgi.com Mon Sep 11 23:25:11 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 11 Sep 2006 23:25:17 -0700 (PDT) Received: from mail.crc.dk (mail.crc.dk [130.226.184.8]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8C6PAif023756 for ; Mon, 11 Sep 2006 23:25:11 -0700 Received: from localhost (localhost [127.0.0.1]) by mail.crc.dk (8.12.11.20060308/8.12.11) with ESMTP id k8C6OZKc008648 for ; Tue, 12 Sep 2006 08:24:35 +0200 Received: from mail.crc.dk ([127.0.0.1]) by localhost (mail.crc.dk [127.0.0.1]) (amavisd-new, port 10024) with LMTP id 16800-09 for ; Tue, 12 Sep 2006 08:24:33 +0200 (CEST) Received: from [130.226.183.100] (mkx.crc.dk [130.226.183.100]) by mail.crc.dk (8.12.11.20060308/8.12.11) with ESMTP id k8C6OVA0008643 for ; Tue, 12 Sep 2006 08:24:31 +0200 Message-ID: <4506529F.9030109@crc.dk> Date: Tue, 12 Sep 2006 08:24:31 +0200 From: Mogens Kjaer Organization: Carlsberg Laboratory User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.13) Gecko/20060501 Fedora/1.7.13-1.1.fc5 X-Accept-Language: en, da MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Re: Mounting IRIX disk on Linux References: <45053B2D.50203@crc.dk> <4505712E.5070801@oss.sgi.com> In-Reply-To: <4505712E.5070801@oss.sgi.com> Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-archive-position: 8964 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mk@crc.dk Precedence: bulk X-list: xfs Content-Length: 1339 Lines: 75 linux-xfs@oss.sgi.com wrote: > Mogens Kjaer wrote: > >> How do I determine which of the requirements >> listed in: >> >> http://oss.sgi.com/projects/xfs/faq.html#useirixxfs >> >> that hasn't been fulfilled? 
>
> # xfs_db /dev/
> xfs_db> sb 0
> xfs_db> p

This gives me:

magicnum = 0x58465342
blocksize = 4096
dblocks = 17888562
rblocks = 0
rextents = 0
uuid = 06ea04fb-1f3f-1024-8ab0-080069057580
logstart = 8912900
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 16
agblocks = 259255
agcount = 69
rbmblocks = 0
logblocks = 1168
versionnum = 0x1084
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 18
rextslog = 0
inprogress = 0
imax_pct = 25
icount = 113856
ifree = 113202
fdblocks = 16727530
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 0
features2 = 0

How do I interpret this? My guess is that this fs does not have directory format 2; it might have been made before IRIX 6.5.5.

Mogens
--
Mogens Kjaer, Carlsberg A/S, Computer Department Gamle Carlsberg Vej 10, DK-2500 Valby, Denmark Phone: +45 33 27 53 25, Fax: +45 33 27 47 08 Email: mk@crc.dk Homepage: http://www.crc.dk

From owner-xfs@oss.sgi.com Tue Sep 12 13:13:28 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Sep 2006 13:13:29 -0700 (PDT) Received: from incmta05.incnets.com (incmta05.incnets.com [203.80.96.34]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8CKDFVw004778 for ; Tue, 12 Sep 2006 13:13:21 -0700 Received: from 192.168.123.162 (203186142154.ctinets.com [203.186.142.154]) by incmta05.incnets.com (8.12.11/8.12.11) with SMTP id k8CBQLPv002787; Tue, 12 Sep 2006 19:26:22 +0800 (HKT) Message-Id: <200609121126.k8CBQLPv002787@incmta05.incnets.com> From: "JHL [ Hong Kong ]" To: Subject: AD Direct Import your items from Mainland China Mime-Version: 1.0 Content-Type: multipart/related; boundary="= Multipart Boundary 0912061926" Date: Tue, 12 Sep 2006 19:26:26 +0800 Reply-To: "JHL [ Hong Kong ]" X-archive-position: 8970 X-ecartis-version: Ecartis v1.0.0 Sender:
xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: direct_china_export@yahoo.com.hk Precedence: bulk X-list: xfs Content-Length: 259240 Lines: 4433 This is a multipart MIME message. --= Multipart Boundary 0912061926 Content-Type: multipart/alternative; boundary="= Multipart Boundary _EXTRA_0912061926" --= Multipart Boundary _EXTRA_0912061926 Content-Type: text/plain; charset="ISO-8859-1" Content-Transfer-Encoding: 8bit SEP-2006 Trust you enjoyed the summer holidays. The best regards from Hong Kong. # To remove your list, please reply an email with subject REMOVE. We are so sorry to send this email in error to your mailbox. Not only the above , we can also source most others, such as Non-Woven Bags, Portfolios, AC Adaptor, USB Sticks and Acrylic Items, stationery, nylon bags, PVC figurines, Mug, other Electronic and Electrical items and many other items...Whatever you need, most likely we can supply. Dear importers, Over 20 years in the export industry dealing with all kinds of consumer products, most of them we can supply from China, or even we explore your OEM items to be produced in China. You may rely on us for sourcing, sampling, communication and inspection of goods before shipping. Quality is promising. If you find we can help or have any items that you are looking for, please feel free to discuss. Looking forward to doing business with you. Many Thank & Best Regards Mr Johnny Chow JAY HANG LTD, Hong Kong. direct_china_export@yahoo.com.hk Fax 852-2724-3279, Fax 852-2368-9495 Tel 852-2368-1193 http://home.pacific.net.hk/~JHL # To remove your list, please reply an email with subject REMOVE. We are so sorry to send this email in error to your mailbox. *** AD *** --= Multipart Boundary _EXTRA_0912061926 Content-Type: text/html; charset="ISO-8859-1" Content-Transfer-Encoding: 8bit
8oU/CXz4SW8z0qDlGKBDzd25IL3+Hyy261+IO12p6DadrtFRZ5duN+kjUU8E ccqrVmiOPUpIPxIHEYBH5dPOyZ1arFF5KO246SDzcB4pY5z5eZlj9XGMjopM vQn0qQMvhDx3MdZJDKRyjYqSM4PsCMjOCOvb741dEKjlQKFx0DH3/lfbTUvN m9Ckv1H3/wAdLFGHilEpxLwHyPbSfLl/4pf2/wCWjAJpmBphgkknHXp1/ZqA xPPBkyVHUcv2YOpsaG0IWMCNgseMgdgPn0/bp0ElsIcjuDn+7SCZQspYY9Rx 06jH+uurqlkeKm9gc9Rn/XtoCSZuvBYsDOMqPSc6izSen8y575HuPt30B1Ah c/kx0x9/9HXVX61QxsoZMrjl1J9/bppkqzyWkKxnJZuTYxkkds/YE6k0dyo7 FU1MtfEstSlF5kKBv927kBSf+YA569sjSTQLeRUX+8SfG1EkkGecMfUvTyZH WN/YZGcHPXT1u2zFLXpWSvNJUqnlyv0X4hflIuMNrWTWYgtm16WjtaUtHTmO FTz4cyQD9M9h9NTf0BKpUkAe2QP36vo3VRbIUolZSwlzls9jpNVbfibYpVcP wyh6fLt9tScUpcFSpUjDYIIwR9NdMiqCEIzkgBTjA9jqVdGxMqk+oknvg466 7+IX5t/W/wAtAxaNIjBlQhgTnJ/fnUZ8eSMLgMRhs9T/AOmpqyXLAcioAAAz yx7/AL9LUgTFSygHuP8ADUhZW6nnnnIjgJjVOT57KPmfYDUOq3fS09zNOtJJ MYzglJAoz26E6MK2Q5/DWgbCvbatHPTkJkfP6sDTn8J7LKjc6qWHIz/HR/2k dtPC+0W1slhmj+Jo6qOZGwAVcH7jH9+mqsEyvyII5dicf6OhWq241y2y2NO5 VipAQFeQc/b30AU18qLpvuWCqlLzVrNyOe78s9tERWl2fZVZJUBZImVXGVJ9 wdGtDsdqdBJJBgdh01rvScWbbZ8iP+Lj4qx6nj2H01HeyIiYKDJ6Zx1H+uuq nmGqbxaIhaQcMWRsg4HHj9u+c6HJnmKsnlrxTJyOnz0gHrkpiu4kOFEx8t29 g3sf7tMNHglQ2CCB09tQDRhMx5YBwPlk/r+Wuvg2/mf9H+ehR9qkTALK4cju CuP3a6UpzaNnK9Cw49c/XOhRSo8iCMN3OQeXc/PrqTb6Gor7jDSQep5CFXA/ N9cakJ29b/RbcsQ2tZXEk5GamRcY5ff5+2s6ikkwSUTOexfvojPle0yNn/MY zxz1IOdLlQSU5Cdj3+uqR7EGlhqrdcRU01XJAe4CHGf7tFFo3bPM6Q3MZIHE SImT+v56LDlxQeKt6ijpaCOj8sLlmMkbEcmHXt7HH9mhCgn/ANujusMmHkbz UIPXkO+pnpWvbH4ba63eJ3hxNZmiSS+W2MyxIT6pIx+ZVHzH5h9jrUZ9j+XG p8hu3IEklSCfb5a0i54q6vbTR28hoSvywMEE/wCsaGK+0Rxw8pOIyp6EZGqA OvSoKKQOjdBjHfQJeHWNykMfAN74yD9dKgL1cnxUElPIQGIyrk9c+x/bqNE0 ctuDIvUj1L8iO4+2dTQS0URcmZCR7YONdeTSf0Tf1tI9NouWQLgdMsM4APzz p5V5qx5MCWHIrg+3fSqzhCn8z+lcEliPy9x21Lqq+WxbZM8chWtqlKgjAZFP sPlnSLcmgpopJZGd2BLnPXuf167ioPmW6+xOqY3vtNgoWim5An7ddX9HRUtT R8auLy2AwJI+jD7jsdVhRAvFinpKRp/99Tr0MkfUDP8AOHtqspqZvK5HiBgY +ug6ZvNlt12srx3IYSIc/M/lxn5jQhbLdVWe+RbfvKCAV450ruwPlPnCFvkG 7H7g+2ps/San4J79vPhR+Ia3bgpCYZaSoXzEfscHBU/vB/z19U5ods7q8NLb 
v/bCq9ovsIqBGp6079C8XT3Vv3Y0NZcZduykippWlV24gj1A5JHsMay/cTRI rovHDde/TOtCZxeJIwzBDyJzj5n7fLQPeEJkyUxgdOXTQAlXhY3yp7de3fUK mZC9RGHCYIlVSSeQPcDH1H79TRPXJeDBTK4HyLEgn9mm8U/9Mn9Y/wCGpWd8 pUPqHXHcnv8A6OlyROicJAclh1HTv7/t0jWG3aa23PxBorTU19PD5zkqjyBD NIBlIwT0BY4UZ7kge+ntyW7asUbU9xvNfR3l6p43oK2mKCCFRgF26ZcsfyjO APmdCKqItj3CsUTW+rp6tD24S9dcn25eaBhHV2+ZMjILIRn9eqRh2CmZYeo6 +zN76lUvpPcZ99VErWmiw4MSnmfr0I+RHbSqjbtJcI2ko0FPVHqYSMRy/Qfz T9Ox0fqlPFaKN6nyqqEllPVGyMEH31SX7bMlZytDiBqu4lmaZE6RxKcBVz7/ AJQR8lJ99FLD601DUWuWerUG62cpT1zA4M8R6RVAH1/I3ydR/O166/BV4+26 hjrPCfddei2q7tmmklbC01V7HPsrdAdZqjX/ABEjEFnniRfKnRmRyWAx1JyP 2dtYXfa51oShJYoeuPUMe2D2Otd1VmM5u9aTIeL59gf16ErpPK6FDIR19vfQ QZr6gP6C3TGOnfOq+jjf+EaPlipVkfAyAPr8hkDU8gteUSeku6fd+Of1++ue bB/xLf8A3z/hqVG+LGJWZ1JOCxI5cj199chi7qe4HT757fTQpl9xrZKvddb8 XgvDUOhU9sg9P1Y1ai6XbdNWancF6qqpkRYoHqWMrkAYGWbqQAMe/t10pWGr Wx266QTyVNFcJSIAxUREqZD3wvz6dT/6aONk+Ie/Gu5tkyx1tPGvORayLKhe 2Scfq1WqnY5lr/Dy61b0m6LPW7cqiSrVNKhkg5DuSD7Dp1Hz1YJ4F3O82/43 YF7t25YMcvLpZwJx942wSftnVTlpWB2bam4LFcTR3m0VNDOhwVmRlP7CNTqS ilERC8c59QznGmSNeNstcFWohIjnjHRvY/8AKf8AHVBVeZBZ2WoRklGR1HUH 56VNlu5txPYt8UG5BEZoeTUldAWwKiB/zp9z3B9mAOkyXGv2fviCstVe0lPI ErKGpXIE8R6o30PQgj2II1E9D3PsXxhh8Wfw1089RVKt3tyLHVkj1Tp2BI9y OgOgTdNerXGQRLheRJA9OT2zjWit0A3KoLyEdenyGPftoarqgGVxjBB75zoA cq5gZTyGfbppmilVbhGRTvKQ4z5b8WJ/m49x89TRF7LADMSVdsnuGx+46R8O P6KT+uNSsiMMKeNldl49By/x+379PoMqxPI9M8uPv1xnpoMKXGxWtPEmovMq iSJIkaaAD0tNj0qfbqMch8h8zqrZp6q4vX1jkyzNyLdh/kBpMb6sKGpmecUi IR5hCcT0YDvx+5PU/YD21sEFzt1k8DP4PUyI09WTNWsccmcZwAe4Cjp9ST89 WJcZl+nbul7koXrWkWqRoQ0hLAq+Aevt2B/Vo22FtLcVy8OLpvral8ipjY7h SW+dHrPhpSahuMcgz6QnLAJYjGcnUnO2uReO3jJsE0tj8X9lm+2yTMaQX+gK SMAFJ8uUgZwrKcqSMMp9xoyslR+G3xXZFtF9qdgXqUAfCXPM1EW+QlHqXr88 6cVkQ/EXwT3p4cWFLpcqOKttE5/2e4UUoqKWUe2HX+w41543rUK5YxOeQBXr 0yPkfrqve0WYw7e8ss9iZPLZvIbm5TrxUHufoM6Z2/V/pXaDbcqCWmpeVRbW 5dcdTJF+v8w+vL56yvpQbeD2/q/ZW/InVz8PMeEqcsAg99b5errTVcQrKVg0 NQOSn5f4auVUA90uC8CF9RyffQvW1QHNuWOuBj56sUP3CvRXPq6fM6ZsE4uW 
/KSgi5c2lzzVypRR1b6de3XUCVoRgmllYLJEpU9eQySfuO+ufBVP9PT/ANU6 TVECejirHlnIwSeo+fvj+/30xf7tT2XY9TXny45IIiyNK35jjA7d+p/dooZF T3u4m4SFq9y7ksyt1LN7+nWw+G/hLfd7eCe6PEe81sdm25tukVxXNAZRVVTs BFSxoCCS5yc9lCknpqNrKTVVt2xtXUtReKJXq4aRuEjorHyyf5b9PSMnAJwC dTrhVeTOFV8iNcBx8/l89bTMTZijipJlqpbpI0a0tLw8tCctJIx6Iv2wSfoN at4G1V1m8JfFGxWqhE8tVZIquJscjFLDMXTChW5nK/l7dDk/OVcWk7m2ltPe t+2sLqXpqeqW21LLQM/nVCXGmdj5fOQxxKlRGAIo0VTyY9Og0JW/au2PFfcU D7Vmh2xPClNbam2PDLURGZYiGqBL04Rl1CsW5MGbkeh01I21vFfxR2DS1+zb ZdKqroKkSU8trqQ0lPMQSpwhB7Eewz06aHN92+xSbKtsNmuklbdBTSG6QSTB 5aeceokkKoERB6L1ZcHPsdKeFYz7w2tv6X8T6qerhRqdaKSKRZBlX5YVgR8i uc6H94+HNRsXeKz0Jf8ARtRLyoZ1yTBJ38lj88DKn3HTvqKckxEqFhq5VuEA WMzZMka9AjjGRj2HXI+h+h1ptsq77ZI6GxXtqOlEx82eWetT/ZYeOSzAEgEg qQp6nsBnVRKduSChS0w19tGKIBYRLLMC9RIQSXVcZCkY+fHsSTrPrnc1Riob t2we+nRTvifN4eLc7NT+HFbcaqIWanN2nrQV515BM/lqfyxjsB9M566uPDjb Ztlqku1fB/H1ceFDDrHH3+XQn+wjSXk3oUtGfLCpC+V6Hifb26+/vpPlS/0M /wC0aFIoWVvQ0YWNhlvXxBwf8tOTW+huVsqaSriWSOWMqV8sMFyD1APfHtn3 0Bn23dn7kqvEr+CFxoIqwVmIY6p4+SyKWwGR/wCS+ehz+XqfbWveIe7aGyNa vB+0XJaO3W6meRKlTiM1bDpK4/8Ap4qW7DrjUeohW67zffBr8KyeGFprGfcm 8BFfd4LTKryU0Iw9LRSN35dfPkX2LID1B1j9Jf8A9L0HkeWZqqVgkfldncnH 3H11XHouU/Eq9NWpcYqUQyR0dEpSKRlKrM5PrkGR16jiMey62f8ACXuTb+3/ AMTlTJui5C32+7WOrt8lTz4mKR1HDIyAVJGDkgdT1B0xI0fY9wraPwv2veEa kkW3Wy2FDJCZlb4C8vCMSAegASrnuDnvnA0R/ho2zvWh214gXemsET2UXWY1 L1ESTU/GPlyIVupUBx6gQGXOD6WIZsg8XBJb5NkVNvujK9JttHgqYHKlTHX1 SqUI6p2yOp7jqdAF63VuO57Vu01XcmmkqUWSqkdE8yXLdQzheR64J69cdc+y 8IQ+EfiV4V0XhYuzt6WyxxXWe6NUTXGndkroRyUFOOQhAiU8S2FUkk8u2tYH hLtnxA2LLR0e5qWqp6xI1kSduUPrkkQFZUU4CuqBXYKXaROK9iRU8ecfEnwO 3n4TbkjS6SU1wttbCKmKop50dkQkhDIoPR8Bs4+XXB7VlJBSXfaElujo6UNT ASny6RmkrWZsgPJ2AA9ugI+Z0JJrr5ebdZpkvcLVVdVqYxNJAop6WLsscKqM dsdSBx/kj31Q0W3r1uOqdbZb6mrMaGabylJEaZ6sx7KvUdToL0Z7U8N6Wnpo bteUjqZnCvBTI3JBk5DHp6zjsO330bS2/KuUQEE+rnkLkHv9+v6tDSTCFo5G HFIS2OoDLnAPv1+udK+An/4Vf6g1SulUIopXCIiK5BCrH0U/UHJ6fr66fWke KjZDDxGcZKM3Tv0P7/tqSObR3btC3Xm6XmK8CuvlrjFHQW7zIkXzHIVpFyeT 
sc46LhVyxI7aOPA/w22/Zdwbm8ft83Cmui7bUTUluroVMVVd5MmKH/mSMAMw 74AHvnWfKiR5731Xbwu3iDcd27jpLjFXXiqlqZ6qWEryd3LNyYZUFupwehB7 6d8NKBtw+IkcFNFDJFAhNRUooJhX/wCcDBJ64HU56+2qniPa9D/ou11Vpjop aaNoEwArDkoA6e/Tpgjt7aHrh4U2GtkWppIzRE+r/Z5OBxg5PHtj7DTl7aWK iXaG+duSCKx36d048QhlaLI5CTHQ46Oitj+coPcaJtl+Nvi94Xb0mvs8LXE1 UMsNTDdaf9I0k8MrmSVCAcqpcliARglu2Tmk2YzjdO77tuS5VN0qp0ldRI8N MkoSGnjaRnMUanIjjDOxC/XSrXbpb1BUWGS1XCnqJPLWbzqVokjj6Mx8w4Un sAFJyT8tSkenw92jc7LJDWbSo54f5R8lQ4IXHIsMEsMf36H4PCKO0XFK3ZG7 r7t2RGE0bRVJlVCAByGSGyCBg5JHtoVZot2vffG/b91te2KXxkNXbrvcBBPQ 19jhrQ8kh9TJHIjcmY8R7ZLD56VuHw4pLfvu60KXanhkhrpYZJrUixUpdW4s yx9UUcs/l9JxkYzp5hQJ3bwwp6qRBVbjv7QzgMBE8EeepBHSM9Bjvq28Poaf wo3al2sNNNNSyIae5wzzPMaiFj6ic4yRj6Dp069ydKnEUbu27QwSLfbG8VRZ bqfiKSRBgKW9RTkOgI6E46KW4ZyNCXwoaqHpjMKj1B8469cZ+pGmFfKjYCGY Bl7jGcZ6/wB+keW/9MP6n+egEUqwPRIDT8S4yWQEkDPQ4x99S1jnhtTPT1bp D6UwzDkTjoQM5PQDqB07aSulDbNnbTsm7Xu9DT0lBcbrMtMlRWMzRxO/QsF6 np1YgA9AdaDNTPA8Oy2rZjbbJPI5WpbyTJMxBMjqR3PpY5+aDsuos7wuMwmS 0VtJOySFvKfqvB+SHrjOB37YP10/bqGmpCxjVV4AEpjiCT36Dsc/btqqa8pY gKdmYO0asMsY8DK9c9iepJ7afmBSFz6OKHnyzgqCcnp7f46U9NIpKamI4ylU QKVD5PFiB7dPlpVTYaSqqZIo0MrdEUsCDnHfPTHXrjVF6o63bFGsiTCmikkQ ZjaWMMcjPXJGR+35aRJToyxl/wCKV17knAwOv1/doSeXLr+ZnJPJ+gKjI45J x2+3110kRSByVCsPynGfUQOgA7fPQoOXqrs+3fF6y7iv26K/bUcMMlNRbggh aopLbVFgVNSqAuI3UsvmICYzggHqNHlDZtwWLb8D3a1W+qtd2VVoLva5lqqG r4glTBUISpJOMoSH+ajTmWo8qkrqdBcG5xsBG5PHBXyyfYj2wfbp9tUzTynH BpFAzyJ6hh3AAPft0xpeLNUlzrLVRVkFFS0Fxt9UFklt1ShdFdJBIHjwQVyV 6qCOQ98gYcuFZtS525b1tSukipZphHWUUkwknoJyCfKY4y8bYPlyY9QBVgrq Qaif1SywwyIHTGeRyGU4HbTfwy/KP9h0sPDUULU9KGbjMuR0+Z6keoZ6fU64 HKRBj0XHRieQGMH+3SBF1s1FfrYaW7UCz0qyrIeLMhVl/KykYIbPuMHWr+CN ho67e9bUT7Ta90tgtPxZowVklmzJFCGXzWCuQZsnm3Yk9WwNKkJrzQ13hzTR 7forVbqqWsv12tlZBPTLUpNHTTxRRwI0g5IuJGUlCrciDy6DV7bov4Gfh5ro 4LdTU19jC1RespIaiajWe4/DeQwmRuqrSseRHUTZGMktP5gOVcVPYPDiTedA lHBda6ktMryLSxyU0T1MdW8xiidTGORpkOOJ45fjjPpAr2WXcEk9Tbv0WJ0j qhSryEQWSNJAyhj2cOHAz0DjHQafHs40bZG1Tc/w6Xq6yWOCoeqFYYawEf7F 
8FTrO4bJDDPI44K2c+rpjU+HY1gO1KuwRyXGrrIL7bYq2QxRcuD0VROTExI4 R9fUG/L5XMlh6QtwjNT4b7ThQTy3+slt9wNrio1pJYp+TVklREQ0xQBwhp+X NUw4yABkSa6u+yNqxbRswqIKsfAUEdDW/CCGOaqqJrnVQLMHKHCgwscNybiU UMOJJWkEd8bHtm1rPQwx1dXUXOonrVlHkKkCRwVc1NyXBLHl5Stgn09fzZ9O eX2urLZZvPoKNa6qchKOnDESTyfzUB6u/HLcFHI4wAdVbnHWvCfayB3bu+LH dq+47e39bJ7dPJCwhweEMkvUfDVUciMYlzxYPjpg8gA3JSLaXhJ4lbI3Ldq7 8P26Z7Owp4Kyu2peIQ9HeIniMjsKOXkk0QKMBIpYkYI45GMf9bc6O/HP6vq7 8Qe0bz5W3PF7wqp9lXNIY2jqnlnjpmTBLlGKtJT9OPFW8yPJOQB2r6zY18q/ DSDd9qjpLhQ1jyCnEEhk81VYAtHJjjNj+UIySGyCq41PD5ufK2Y2vxceOXf4 B2qp1pZeEBLRMR0JViVHbqPzZB/sPvoK3xW0Notn8OIqY0lTapqeZvLU8KpV dWaOQAYHUfmPuNaT5beUmes+XxfXje/P+L6zbote4oZq+GKG1o780p5JCyKr EkBTjJwDjr8hqx+Jt/8A5pQ/1j/hroc8vSFI5KA8eDBeowVA7ZOPnnSkhd64 ZjXlz4sAwKg9fY6VNOo4UdhEXLMFLj3Hb2/Z31fUdTWRbSqLR8WPgKyaKeog MeVLRK4iJJBYACR+g6HOTnAIQElDvzddJVyLHcg5ljijjleCJmiEUSxJJEzI THII0VfMTDnAyxPXU20b13FQTTKstPUiajgt7rV26nqopYaYAQKVkRlPHAAb 83TqTpYMPrvG9w3NritzikqqsYqo6qljngZRgqphkRoyq4XiAuFwOOMDVLU1 dwuN1luFZWPVS1EpLzSyZZ2Jz6iepPb7DTw13bNw3SK42uaKtj8+xKfgHMaY hBcv0BzyJeRj1BznicjAF1H4jbrktdPSxXiYxxTx1EfGGNQ0yRNGrOePJ2Eb GM8+XJMK2QBhZAs6LxM3QNs11PVV6mtnaihgn+Hh8qGOlM7JF5RThgPMpUgA pwXGOK4hUO+d1pb4aeGrjnFHGIBFLSQyqVMjzZcyKeTLI7upbLKWbiRyOV9Y MU90u1Xdo40uFYs60aSrCXlV2YySySsxb3JkkZiTnqe+hHxD3Fdq6201ttux KHdtT8R5tTRm3SP8OvPzVk5U/B4j5g9DIylR6R6SQakguSCCHxB3z4jbH/Q+ 6fw0ruNKSMRCd5KqauiX2AmPKcY+rHGtU8FxfaPblSaHZFRDt6njKzbauiVt VUU7Z9UkPNRKmffguOmeWruZjLjJpN53bsnxH26bPXWq5w2ullanqbXuq0G7 QREdxDVMRLBj+bIyg/zhrPNt+Dli2lvCok8GPHCPZMNa4ersN+pJJbHXH25x TF4j74YSKw6YYHrpfXZ0rzxo26fBGu3bsFqm8QW61bkZPLiqbXVGrpaskelo WY4mXP8A4UkhkHZZJOi68SeKVk8V9pzVe2d6bfmuFrrCaVbhQUHmUknXiQH9 Jice6yBWU9x00di7gu234BbpqbPR020GgrKhrXTV1dT1NRHC1KJTIIhlmw/J Yy2R+vVz/wCzl4w/+T2v/wDJ0/8A/WtJ9UdqGmjRoJSy54sAM/Y6XUsYrSal DiQSj1d//DB1i3WlKAbY1SesjQsC36m0qKWRLUZlchuJP0/KPbtoC/oaeKWr nEik8QxB5HPYdz79zp+nVRuGniA9E0pDjPQ9ToC3uUaC+zQhcII1OAcdpeP/ AOvTTaFikeXY8ymSWJPXOgHbUTV1sq1JLhZ41GT2BL5H7hqzFPDBNUxxrhQo 
cAsSAxRmJGe3UD9mgydwAWuA/ADysYXp1yCqk9/qTp+UlqedWJIWV4wCenFY lYDH3Y/t0CqyrqJQaiIFQqIkagKPy8l6dvroKvm7d12mx79e0bpvFvNkpXqK D4Ovlg8mT4mNefoYZPEkZOdOJvjv8P34oPHq+/iQt+1NweJFfd7XK6RvBcoY aslS2Mc5EZv36+pO5vDrZybJS4RWgxTtTiTlFUyoA3EdQAwA/UNLkm/jzZbE St/GNSWKqjSSlYsxZlHn5A/pv95/1azj8Q9NTWGvvlbaKeOmmpKuCOJ0UflY nkGz+bP/ADZ1fFV/BD4fWCzp4h7XigoI6aO8UhmrkpiYVnYSsuSEx7Adu+Pn qB4u7cs1/wD+yy3T4hXWj87cVLdZKFLgJGSRoUqwiI4UgSAL09YJ1X6XLx59 8P8AYezt17febcO3aOtkpkSOJ3TDBct0JGCf16K/+5fwt/8Agug/6v8AHVZE P//Z --= Multipart Boundary 0912061926 Content-Type: application/octet-stream; name="DD.JPG" Content-Transfer-Encoding: base64 Content-ID: <13600197> Content-Disposition: attachment; filename="DD.JPG" /9j/4QDmRXhpZgAASUkqAAgAAAAFABIBAwABAAAAAQAAADEBAgAcAAAASgAA ADIBAgAUAAAAZgAAABMCAwABAAAAAQAAAGmHBAABAAAAegAAAAAAAABBQ0Qg U3lzdGVtcyBEaWdpdGFsIEltYWdpbmcAMjAwNjowNTowMSAxMDoxMTowNAAF AACQBwAEAAAAMDIyMJCSAgAEAAAANzc4AAKgBAABAAAApQAAAAOgBAABAAAA kgAAAAWgBAABAAAAvAAAAAAAAAACAAEAAgAEAAAAUjk4AAIABwAEAAAAMDEw MAAAAAAgIAAg/8AAEQgAkgClAwEhAAIRAQMRAf/bAIQAAwICAgIBAwICAgMD AwMEBwQEBAQECQYGBQcKCQsLCgkKCgwNEQ4MDBAMCgoPFA8QERITExMLDhUW FRIWERITEgEEBQUGBQYNBwcNGxIPEhsbGxsbGxsbGxsbGxsbGxsbGxsbGxsb GxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsb/8QAqwAAAQUBAQEBAAAAAAAA AAAABwMEBQYIAgkAARAAAQMCBQIEAwYCCAUEAwAAAQIDBAURAAYHEiExQQgT IlEyYXEUQlKBkbEVoQkjYnKCksHwFiQzQ9ElU6LxssLhAQAABwEBAAAAAAAA AAAAAAAAAQIDBAUGBwgRAAEEAQIDBQYGAgMBAAAAAAEAAgMRBBIhBTFBBhMi UWEUMoGRocEVcbHR4fAj8SQzQlL/2gAMAwEAAhEDEQA/ANoR0ErJPFh+uHzK N7qUqIsSO/bDKB5IZ0DUHPUqm5fqlQfy2uLmOouUpiOw24mUw4S4ltwgqIUk FA3Djg3xNQdSp8zTzKctpmK1VKxNXFqTakFSIqYwWqYoC9/SEWTc/eHXFa3J eRRA35V60qdmXLRD6uhVeZr9LHwTnKuYtTq3Jo2aVUajOZbrqtwhMlSZsCOs EtvLcUdq+ACpIHG7jD7P2dK7l/UqiUGk1nLdKRU4kqQ9MriVeUkslASgEKTY neevth3vZRHqNc9r8j+yd7+YQl5oEnYnlR8/yTyHn9FFlSF5uzPQnYsDL7VV lPU6M7tVveUgOIJJBbV6UpQCVbrnoRh7A1ayNPaeVHnTG1syokN5p+C606y9 JJDLakqAIJIN/bgnC25DG7OO/onG5cbdnu39BslqvqtkegxXnZ9Rkj7PU3qO pLUJx1ZlNoC1NpSBdXBFrcHoMM42q9Dp1Jkycw1NKy7WZdMgtwac+txRaSlf 
klFrqdCTzbgnp0wDkxg8/oUH5sLXc9hz2P8AeaXl6wZDhZQpdcVU5T8SrMLl R/s0F15xLSDZxxxCRuQhB4UpXQjDhzVTJI1FZy0ioyXZDzjTAfahuLiB11AW 02p4DYlSkkEA/iGD9pi5f3fdL9sgugd9uh6j+35K1mwQqw9QFgDhJagmxSQD ++JKmJNzcpPHN+vGG3l7VEBdyBzx2wEYSbwJQeeR3GG6iEoVt9ha+AjTZy5d Sq/t+WGa9pRwbcdL4CCaOKTv+Ik9+Mc70/7GAgnEdO9RPAH1w7jJstN7Wvzc 8WwdJNKm6daZ0fKeWo0qdQ6YcwNKe82e0jeolbiyLLIv8KgDYDvjqhaauQ9f K7X5chlykVBl0Qoqb7mXZISJSrW43BtNre56YhDGGloG1c/kq4YYYxjWgCtz XU1+4C/cp5Z1UoztDyxKqFKYy/QXADNiuqVMqMdAIbZW2U7UcFIUoH7vGLFX smJr+uNAr1Qh0+bTabT5kd5mUgOEuOlsoIQUlJtsPJ5GAIpHN0yV0+QRsgld Holo0R8h8OaquqWSFNR6/mREmLT6XGoEKNG8phTgYejTPOSVMoTfyugJTyBf jjEBS42aNTsxZwzBR5WX5s5qo0ScwqFJcNPdcihS1MJfUm6jY8qtYE27YjPj d3uhtb7/AKqFLE8Td22tzdfB3081cKFpxm9rO1OrVYXSkus5yl5ikojPrKUt Oxi2Ai6blQV724F+vGHlH02zDA1HpdYedhliHm+p11yzx3fZ5DGxsDj4weo7 DucPDHkIFn+2E83Fl0i6/pB+yqD2iOcImWKIuM1TalLi0+ZS5cZdWkQmgl2S t5t0ONWLgG+ymyLHt74lGtJM20vW2k1LLqaXS4MUwftE2DPkNKcZZbSh1lyM dyXiq1krJBCbXuRhr2V92K+qZ9ilBsVzu7N+v969Vfg5qH5bPmw8vAqTNDoS 84QCD/ye3jkEW8z2+7iVp38QOWIaqsiOid5KftSY5JaDlvVsJ5Kb3tftiwbr vxK4Z3l+OvglVlRaFrkkdADc4SdQ6hRHku34JJQfbp0w4n0g9uuoA7VBNrHt hulJDO3gm9r4MIJoRxe5BsMN3wkcq9Q6i2CQUe8r1ggqN+OBhPef7X6YCCfs p2JN7Dd747jzoaquacmXHMtLYfWwHB5iWybBRT12lQIv0uMKRJ6EkoNun88O G77ztT8r4JGqvn7VzT3SuimZnTMLMLy2TIDCUl15SAbXCBybngdL24vY4r+l /im0T1bzQaHlTNRbqjlwxDqUcxXJFhc+XuJSo27Xv8sHRq0SMMcKesUoLgPU J9X7X4wo0yy1EDcdtDaAfgQgIA/IcYKgirqlLeqwHTCiRZsX28fLBpS5G1Q+ Zx9s/wD5g6QXDh9QAFziLzBXKVlnIk7MdcmtxafTIzkqU+vo22gXUf5cDuSB hJQq9l58541d8TviFq66rpuKvSsnPS1xIkOlyUsL8tCreY8rcFrWR89oPAGK 9XtEvEVlysTjR6pnmUESFCA824+oSUgpCXFFLig0SCs2UQQE9yRhLdDrs7rQ x40bWCwnelHi31J0kzlBoWos2bW8uynFi05W59lCVlKlIUfUCFA8K4PQgXvj 0GpFYpWYcnwa7RJ7UyBOYRJjSGVbkONqHBH++LW7YUBRoqqyoRG+xyXTuxLf pR3w1e2fElAB45vgdVDTNSF7vQPmTfH55b3sP1waCUSsgbQoce+ArrHmA6U+ KzJerUlKhR6hFey3WC2gKUAD57R/yl02/se+Db71JKNlWzHl7L1Pbm12vU+n xZFvKfkyktIcBFwUkkbhYg8X4w0qWqGm9E0+mZtrGeqLFpFPQFypSpiSlpJt a4F1Em/AAuewOC3rkjC8pdddWE6zeL2t5mp326dT50sMQN6FIK2myUMhLQuR 
6LEA83Uonrghaf6JaiNvMpqNVy/kBqtBDTDdQQHqnISrhJSgJUtu56KJR++J DpBGylLx8d07vRK656KZo0Lyo3mBvV2e7Os4+0w3LejuueWUhW0pVdJG8Wv1 7Y9B/Do7qBE8EGVpuslcU/mObHVIkvTlpbdShxRUy0sm3rS3tvfnnnnEUO1C ylZMAifpaiNDrEGqQkO0dw1Bt34VxwSn6lRsB+eJWPR6pJO+S83Fb6hKPWs/ UngfkMSAzqVFA81FodSxmswm6gJbS2ztWPuuJNykn+6QcPFEk2F8IeKKIpM3 IuAflY4EvipypNzr/R/5zy/TpAalfYRMaKnNiVeQ4l0oUewIQcNlKYacEJvD i7KTTI9HFcy3VqKw0YzLlLQGlw3kBJUwtKVKB4Ve/B7kG+LlXs3ZnRqk7QaV nXLUWfLW6qi0VyEd81tpN3EreUrduuD6kcJt0VziC0W521rTkEsBtYl8TLQn 6u54rU2kJiOU+oRmPLTYKQ48gbUm3xkJQ4bjjqTckW1V4AqpPm/0fqIcp1a0 U6ty2GLm+1tSW3LD2G5Sv1OJvQKDn7sI8itEubS0k9b3w2VYsg3Iv7YUqRM3 Ei4v+2ObJ/2MBBDnWmpa+U/KkZGhOV8uVWWtLipbtVnBssWttS02qyVk3JuV cWtbm+PPzW7xL68ZyyvI0x1hokCmro9ZYfkxmab9mklxtJ9K1BVxwu908EHr a13Y2tcbJSUW9DJFN1s8TlLoWTswPwaWqkrkyGnmVzX6ZGZQ2DFQXVFCW1uK IT16Hgi2C74lfCll6teH6s1DLmY6s5mhtsvRk1CQ2tmWEgXZIDYCCQLIULbT bscPyy6fCOSPat1nyh+H2oaUeHfKGvFMlJqjsOWJVRcbaXtgxypI80JtuJbW hQV3IKuOBjcuW1RM6U6hZ8yRVaMimOrS/OkvRN70qHtVsQhfBQQr8XA54GK6 Uhx3Wox2iKIUnOYMvZZq+q2X8z16ExXKe06H0s/Z0SWlPJ5YfFrhRSrp93oe oxasvx8uPUZysZ9kpkyXbK3TSXSWuOE8WFiLkAXw1BIwO8R2TORC9xL2izQr zVlouo9Cn5iGXsowZDi22FSAtyKphgJSQCNyhY8kdMSr8mt1CN5MxcJlJ5sg rcJ+t7A/zxYNmY8W3dUksb4XU/mkkwVGc3JlTXZCmQoNJIShDd+pCQP3vhSd Lh02mfbKhLYiMD/vSHUtNj/EogX+WEONmymSbTCVXqX/AML/AMVgzo0qOpJ8 t2O+HG1WNjZSbg8g/njHviT8TUqj0eVlRpxBj1ZpcCUB18lz0Of/ABJw040E bfeCsOi2pHh8ys9U6VS341KDDSChxEcoYfUpPqKSBZS/QAe/pFsEbJ+cNG9W q6xWqI7TapWaC24iO8tvZJjbuCtsK9QChYG3tiB4tRK1QY7QD0WOPGhl2kx/ EB/Eokp6pVOqstIfpbcZS9jvlrQ05cfEo2Nk27Akm9sFX+jvzLJRlDN+m1Wj vx5lMlMVhpLyFIUW3UBpYsQDw42P1xMbs0KJnDXCStbuI9KRc8XPXvhqtOyE ByL3Jw4s8mq0naBYnr8Jtjjb/ZX/AJsGgo6rvuRUb/MKR2HGMU+MfJbOevFh kKkUWK6/Ws0MOQXUMpBWUtOICHVcdG0uLuTxtHXgYVGadukgWm9RodJ8GP8A SLsScnOSJ2W3aYKsll1zzZD1PKgxOYU4QNykLCH27/JPfGh9fs+Qo2n0qRHq jCkrYLja0q+IEXB/O+DkpwDkqrCoOgNB1gpHhUVKNSoeYco5rC5DlMmJcC4j b10r8t0XB3d0K4BuR3wBZKNT9Gm6xpbNzPJpuWJ4UpsvFSm3GAr1NtL2kNur SbWPpVs+eGpI+6e1rzsVocF2uD1HRFVrxS53q+cIGmeWsr0OSgz2IDNRgea2 
lbAUAbJF0j0A3IJ4uRjZlQcoUbLjSMr1iClLz7cduG69tCd6gON3qBF72Nzh p2LraS3zRT5Ihkbvt1V8omQGKbINXqUkOS0sFlKkEhLKe5+fa9+wxBpzvR0Z zm5eddtNgOll9sG9lWBH8iD+eHWs7pgCp8mYTyl4U3IMqTQHhSnEIlrZUmMt 3lKXSk7Coe27bfHjTqFrDqLnzVCsr1AmS6jVGHXICGZcoqbhuIUULCEI2puF JNrC1uxxIiou3UdaH8I+uGV8h+CKrae59k1WPUINUemwI7NNefKozoQLiwIS PNBFyQDu474AOteoWUsz6uN1FhUtmKlzeDLCUqXY2H9WklX7HBGIuN8gjFg7 KWpla0zo0emz4tYqdVU+2xJfR5rqJbTxHNwCEpUDe3JsLdcXLMUjJlVriavT c11GLJWlLiUlvfU2UWAKS+kt7nE88EHt15xFjjLidR2XRsrIxPZYWY7KdXi/ 3/CjI03+NeJGjNV5+Y5Gpq1VwVqSVGozYqT5aEOC9lKSoHamySPUOhxv7STK 2l0GiScy6avRKjIrDTSJ9RSvdJfDdwhCx1QBc+mw69zyFah3lLOZ0GS3GEob 4Cavpfkrw7xtHS4Nh3w2cFo9iLkYWs0DaaPqshF1gXGEt4/91OAjQT1M1rre XIimU5KlPAi6loHAxmupal5w1C1Yjii7aVKjJU0l8oBcQhZBUgK6gGwuB1sM NElJVg1R0lzTA0WY1irlZfr5yvIafmxHRuSYLp8qRYdeAoK/K+I3SPSJjVZT VJqOYKhLZpEtUKY35iiHGQAplYPYLbKf54kwxd67Sprt8e+oK2Jl/KVOyd/D 8mQWFpoHkqbjx2kFSWXO7YA7K6g+98UzUKrUOJnGVkuVQY9UbbJaS5LCVgOg AloKtc7b23AWJ47Xxa5mJHMADtX6KjyOOHgkfeabDjVclbsg6bZATS42YsqZ dQ0/5NknahLrDg+JKxYAHtdP+uL5MypCdhoXJZZeeTtcS+GgVocHIWk9dyTy PpbEiKNkcYY0bKWMp+YGzE8wD8wh/mbxKalNZhnZLiUaiwKkwotCe2pbodAH DjaF2SLixF72+eKR4emKxmXPFc86Y9IkMPJlPOvOFS3A4pQKie53J/b2xTTR 6bCyOFxqXL4oICKbuPiN1pKtSV07S6otodtKTCd8mzm1RWEGwB97jGftL/Dn ovVsyDUlzKFKmVCoqE1Tkv1NodI5WEH0g3vfjr9cV0r3MIpdIwmsLXE9Ep4l 9JNN88yqC5nCryqLCpyl+fPprQj745FjFBt6W1K2qKrG23gXOMX6hZKypPy+ /T8hZVptIQly8UrSXX3ADwXHFEqKyO/bDJyXNaGk2tnwbgDuJufM7wtaNvUn p8OvwQ5olPqeXs3RJMyhTUBhY3LUwVgkK5Uk9L9LfQYKbFNrVcqk+bQ6RUZ0 uTIU+ZdQjCNGS3fehzaSSlSF833G9yk3BxKZMwNUj8Bzu9DHModSVZKLktym zZlXqElyo1WpOKfmT3QQXVqUVGwPRNyeww6ZiyKNVBNplQkwXz8Sozymyflw ecVry4OLiuqQYWOMQYtW2uS0J4Vc2Vevu5kg1rOj1TVHW2Y0KS+XHG0873E7 vu9EmxPztg/uNhCDuPxG/wBMWEDi5lrz72ixY8PicsUbdLQR+ij5ik70hKvf jDe5/F/LD6zygs5UCJVKQW3mkrUofh63xnmTomila3IrUEKabWdzgCeAcJq0 VIlVGa5J8OOZcqopQqr1Tpb9OZh79v2hbiFJAvb536dsZZ8LOpEzJGtWU585 fkU+sIOVqwpy4R9oZNmXFX4SoBaPyGJeNbHB3RTIBrY5q33CU6c1tl66Gy4E q+gPI/lgHZqn5UqVVo9ZzK3IlwWaw43MRFO11YcaK+3PxbucW+SdgufdoiwR 
sa/le6JmlFVhNahTspLfcVDqJVIgvOXSobSB+pbU2SD7H2OCZUGZ9EfREc8t 559YbZaC7rVc8cDt8zgmOAppVzwmUTYbTfLb5HZBvxD5JYh5eg5wjkpfi/8A KytqSLAlVunQAm3PyxCeGWNUWc/1jMkGnTGKPPpyYMaVIsEypKHC4stgD4AB bnvfnjELIaXWR5Khhwe74654Br3vQbUVR/FNnDOFIpqlQaw4yhLnISoi/wAs Tvh+1BfoWmDFTXVYlSo8pId3JdQh2E595CkqIPB7jFDONrXVuFQumkcxm9rn XjVjL2YsgikRtsuTKSl5Kw5fyhu++T942+EcAW74zpLlRTqC9TZYQwtaQ7GG 7alaD2F+4PFsQJC3Ufgu8dn8KXCwg2TqSa8uSvuV6FS356YCKmyXJDC3X3Eu bG0JSkq8kKIsVqIsT0FrC5xUKXJdj1sU510Bazujbz6XE36A/iHTDjhoaC1W 0UznySCQVQH9/P0VvrdJzpTIiHa1RZcWMoWbcWkeV0B+IcdxireaqqrcjwXC oJ4dfsQB8k/+cFNrcaIT2NJBIzXC6wlS3V8sVKJW8rVFcKdCUFsyWSQpCu4+ nYjvjVuiWt9P1ToRpFUbap+Z4bYVJi32tyUj/ute6fdPVPXkc4kQOLDpK5x2 34WciAZ0Q8Tdnfl/CJi2SqyggKuPbH55Cv8A2U4sFxely2hgqCVJCrcC44wy rVKYfpK9rTSV7TyOoPbCUFRcoRIjWb6nIdUmMqmtl6S+s3DTQAO0Ad1E/nxg I5t8MsbMqq7mnTOHUo6ZVSFbaguvpciqlJvuWEkbkBVyCASBwLEDDs+U2FrG UrbAgJDpLRCZ1YzvHVMTLo0F58RdzitxYVFJNlek3DiegBTY89BgXrq0+BME yGptx2PMbkJLwui/KSVD/EcWzpRK0ELh3aDPllyhE9mksJu/z/j6ohU+trak sVBNa+z1uAv7QxIaSCku7SHEEHgpUCU/pjV2REUZ5mFW2QHTOiplfaHVb3F7 hcAk/XtxgiToIC0fZqfU18ZPKj8/9KGzdTWapS6zlx1KVjlbfmC+5Cub/krE FQnlS8q0oNsttKRJcR5bQCUN7W1ApAHQDDrv+g/ktkGgOvqsm+LilTV05TpQ 5sQsk2vjLdCqxplTbHmqSvftVt43A/TuMZmVthabs/nfh+eyQ8uR+KK9GktV p5lf2hBQ0N7gUv7w7c8//WGObP4TUsxQGNjMnyFLUtZAV3AxWkVuvSTXteAR yVgoinGW/JQgOstJWoNlRACQD3Htx+mJml0CPWKCtqQlpRWoC+xRWRb7vZPN uevb3w5jtMhASMhwitwPkmeZ8rfwOliNIluLS6ncGzIKhb5p7YrmXsvVauZ9 i0qiT48GW856ZEx1QjW2khKykHaOAL2sO9hc4KSMtl7tNPy44cJ2W4bAE/JT SnMwRM3Tcu1ihCFVKbIMWSh55OxtwdRcE7uosRhWoUWp0CVHzVEqzVImQvMk Qp8Z3ehD6ElQSSOyrWIPY9MLAdR9FHmlx8nGAHibI3bbYgjz81tjJGZHc2aO 0LNS2Q0usUyPOW2OQ2paASkfQ3xOeY5+H+WLVpBaF5lkYInuYQdiR8ikGiFe ncCLe+OKpPiwKS49IWAEi5N8JUdZfznrflvKueq3S11Jst5kQxS1MGwU28HU LS5e4snbuSq/HQ37Erw9TaBByG2pNTgQYjKAlanZSGwFAe9/hHZQ+uK7OBcW 6fJanhTAcck+aztrH4g8nvZtMOiyabVFrbWpcxDykJSoWIS2q1iu6e/BuB15 EhMjIdQ+GwRHqEHz2tx6gp3C/wBCDi84dfs+l3Ncb7eYMcWezIZ/7FH8xtf1 CdxlpVSICmIb0dtZSW23fiIIuTf2v0xqXQ+rvOaAU+LJT/W0xb1PWlXWza/T 
/wDEpxYxizSqOyz/APmPZ5t+4/dWHM9R+z5wo9V3XRIBiPW736X/ADAxHZaU pjVmVReC02+5Ka+W9o3/AJ/vhyUVCfyXTRzQq8TmU5dTyFN8mOFXBPS+POmu QHYFUcjPJKdqin2xnHp9qYyq7UnIIjOTXFITxe3r+m7r/M4sencxvyaiJDli UCxWeve38jiLI0aTS3vZniWS/iUMUsh0jkL25Iv0eT5sEvJbufKN1oXa4Ise fp2xaIjsan5fjuhiaFOuneQ6jb5Y/CAb3v78WtbpiNA7SbK7bkAmg0jevNRV XkO1PNanG23FNobPx8kDpzbCtI0ez1U6gtaMt2jPN70pqbCvIcKbKAIuLm4F gfSe/HGA7U5+sKDxKTDjwHQZL6DhW3P4BdrotaoGdU1LNEaSVmYHZbpaG4q4 JFunQfD0t0x+NUSTq94mYmUqc4URqpKUXVoFhHiJO5xdu1kcD5kYdPu6OpIT cs0OJiuyYT/jjYQPhyW44sGJTqQxTaawiPFhtIjsMpHpbbQkJSkfQDCu1fuM WjQAKXmV8ji4lI3CEBVu+KxqG8UZSdQVEejn9MNlEVhGnae07Vjx8v5CrGYZ 1JjVmI+lbsVtDil+QPOS3ZXS5B5HPGDBE8COSoVLdlwdQ647JjMqejtuwmVN laElQCkng8jGT4nxqXh+U2NrARQO60mJbscUutHcjZMb/o/383M5egrr+YqH U3JlRejhx4Of16AlG6+xI2psE27k3xB5dzK/VNNMuzHkocH8PZUg7BwFIG5J 49ycXHZnIfPkZJkN7j77LnPb8aIoHnlZH0v7K7aWQoEvMESdNqCH2qekJUqU va20oA2Bv2AFsGnSKpxJkjMKac8l+MJ7akLb5TdTISbfL0Y2UXRZXs6A3Jb5 m/lX8KzZybW9pi+/GBLkV8OghPIsL47y4pL2ujFQQR5cyilwf2iCn/RWHJ/+ py6Q3mrZX6HBr9DciTGwoLBA9GMgay+Dau1quvVPKDTCi6SS0s7QcZ5wtOrP eavC1qRleK6/UqE3tQCVbHb8DAheyznhukuTcqlbLsGSQ8xuFnhYdiLGxHf8 RwydLT4uSvOD4+Xk5QGIaeNx8E+gZo1ypJ3nKzS2lCxSmD1H+FX+mCBlbWCr xGAc3ab1nalNlKhNWBN/dfAGHIoMU7hy6O/jnaPHbU+NfqAftanqBrRNe1fo 7GR9NcwLmPTGwVSn0G4vwbBAHHXlVuPe2PQtmDUZ9AEuc8ymSWgtUUXKUg24 F+p+uGZ2xsOljlSZvEcjOaybKi0O5D1HnvvYKFmsFPolYyWY9TPly2yG0PA2 UDa4Sfe3z+eInwu6cLytSapnCrICqhVl/ZYlx/0oqFcf51DcfklIxGgaXPBP RDN4sYeCvw+riPlzKOilkSFC9j1NsfeYr8R/TFmucUmaV2aSeOvOIHOCG5OV XQ44AQnm/vhBRLFwDeWv6UnJFRZdAberLbCyk2sHUKbP/wCQxtGjr/8AWmW3 Dwh0Nq9yLgHHOe0gAymO9FosE/4KQc0ep23wY1LK7fqeo9UrlHcQeNqvNWpI /RY/XFJ0IouXK/4VKJMltPqkwfNgvJQ+RtU2sgAj6Wxq+x7WPyshr/T7qTPw PB47pjzASG7ijW/LoibTMsZKpM6M6ctsy0pcAUxOcW4lXzIvb/ffBNyMYzOd KkiEwzHZdRHcDTSbBO0KTxbHSjGxjfCEnM4HhcKwnDFjDQa/XzO/1VqloMhc qmcFEllRKefVxiAyDIJj0Z14ELhuPUx0nrYg7f2TiJOLY4eiyLdkSi9Zu4Tb Ca1FarBXGM8nUJNbnGmsjy3X3NjaWVKUb9EgEnGDltJXEjspSUOTHi+6B+G9 7ftiFkHZdP7B4+qaaY9AAPj/AK+qno6A2yEAc37cYvtIbZV4b6nOclv+ezUG 
WWmQr+qWlSFFSlD3Fhb64i458R/Jddz/AAsYQP8A00fWlfKDTYFO0Ep+oUBx mfLpc5C50R5ASlpvYUtrva/JUrkXF7X+Vzi+JTTxqI03mLMLMF/ZuR9pBbQt RPFzbakD5nEnIbRaehC5tn4WTxFz5WCyxxaR6dPoquuvw9WNZG51DcMmixRd t7aQmW4eroB+72F+vXB2o0FEGkNpAtb54fiFNpcpzHOdO4O5j7JV53/mld+3 XHHm/L+eH1DtNG3ApF/M3WPQ4rudmi9QHQl0i6DhJRLBerD/APw54qKLX3pJ Q1DqMaWpwkkIS28hSjxzYAHGsHvE/wCH9jMLsmNqdAeQt8rb8mM+s8k9tmMZ x7h2RmPZ3DdVWr7h7h3RsoN0rxTZTyZnvPqMu5eqOY0ZlzC7V6I0zZlv1I2r 82/qTuKQoAAkg9jh74W80xcwab5gXIDUepv1QzpUVpoNttIcSEo2AdR6CDfu Ocans5w1+FkvllPvgCvUBaXhrgZR62izPcLeZ4qXHLNNMly97AgKtghabPoX nBxBUT50EOEnqkJcI/1GN5Jtal8eH/AeVeKrKTDrzUpsgBCSSq9ha1jf2xQY dfpsGv1qCiqxlKUtE2OUrB/rAQSn63Tz8jiK4WDfkuVhF1uSZEZt5I4dQHLe 1xe388fLe2NqVbi5vY+2MwTRT4WaPFbntiFpDNp6XktqkhLZUVW9G4bgPyvj K1OmMVatKmR3AttI2I+nXEGeyuwdgZI+4ey/EXA16V+6m3SAkFJ64tcaTIj+ HpyMF+mVO8wpvzcJt/riNAdOo+hXUMtutrAf/oKy5fenUgs06al9uLVYJYfb dBKXA42SDbtztOAzmV3y30pBKkpXY3PW/BxJyAe7aCoWEG97I9vIgb+u4Wif CDRqg5o/PqE6O4mMmqOIhrUmwWjancQe6d9+el7+2NHOKKI9gena/OJEV92L Xnnj+j8VnLDtqP2TJ10B439++OPOT7jD6oVHoUoXKene/TDHMSfNy84lIN9v f3w2jWCvE9T1sZkZmtpI2rOH/h009oGrWmNQh1CqzolSpayhK2kpKSOFA8jn i9+e2JOFRmooWQ3ZGuu+Gai5NyJLzkmopqtSjJStt1KPLS02SPUB78kE9uMB XINTY0z8eKoLlm6VmJAaI6AJf9SD/hdSR/ixoJAIiwjof4W04XM0wRPHMGj/ AH5LRuaXkwqKx5igpZukG5uU7knFmoNcfpTQq9Nk+TJbjqbCVteY24CQSFDg 8WuCMTHjxUVqs7HblQGF3IqMpjVW1S1nYgZrzNUZ1IhpMt6AwUxId+iUkNgK UCT0Uo9MEzKdKyS5nB+Pl6kwI8aGQ0sRmQnz1pPO5ZuSgHt0JFzfFdmO7mPb mVzfi2LBiZAhh6Df4lELzbuBJXf8ucMqzNMHL7rqz0SeqbYzqqSvO3xW5mXm PUyBlVMsttS5AQ4oXOxN+T8zxii0TK+ZqNXFs5fqkd2GbJQzISV7QBYC45xD kIGy6d2IxHufJlMdRGyaZr1PkZGzIYmaqE6mMP8Apyol1JWeLix46374sh17 yE5lKHTxUJSFrUnYwqEvzXCRf2/lfvhkR6QS3kuhzcbx4chuLk2H3tXIogVH WGu1miUuZEy+4v8AhTCIzbi3UBxwAWSSCewsPywvoTScn6geIBdLz1TXy4pv zoUN1y7UhabqWFlI7AXtex5vh5zhK5ocVB4k7I4bwyV2I3xVzPQXdgel7LbT MWLBpzUSMw0w0yAlttsBKUAdgkcAY5kODySBYE8c4mVWy8+udrOom7Ua+8Co Xufa3TCXmp9jg0lN2lkIta9+ccVFG+mucA7gbYR0SlkXxM5Nq1Qy8XYdNdd2 q3XQm+Afo7rBVdDc+zftNFVNiy0hMiN5nlOJUAQFAkEd+nfBxSd1IHohurln 
PxvZ9rFKTSstZcpNIgfZ/s6/tO6U+8CmytyrpASewA+pOBPJz7UM0UamvTlN oqVFJaTJbulSmioKQSP7Kh1GLB2a/IcWnYKdiSnG8I5FEup62Z6zRTl1xmoM sIpbjSHoRaug3shRJHPJN/pggQNbq0jKCXPsUIvJAAu+4Qu/4eOPocdM/Coc uCOdjqtovb5po9s8vFyZYp4w6nGt6ofdD+i5u1lzl4hpUHKGZqlSI6HTGWin yFtM7R1KrfEeTyfyxuvRjJlRyxktgT5V3No3KUblR745RPI98hBNgEpM87sm Qyu5lFJLosbLuPdP/wBYYVhhifl91p1wgW/FzhtR159eKrKtIpec4tZTUQH/ ADwjYDyQeP2OAvRtUc2fbHac9VQ8zGa8mOqQ0ham0+wURfjnCWtaea0nBczI w3OdC6vNUzN9RrFfmk1ueuY2tfDayCm59k9uO+GVKrz+X85x60ISZDYXscQq 3ANttienTDZYD4ArCTiUjeJRZT/EQb38vL9kb6bqpQI1AfXNiSVSVshDDaGG 1JPS3q7W+QOCl4R58zNHi+NTcYSlqFTpC20DnaTtSST/AIv54JmOGn1C03He 03tWC+KBtB1DdbfU+kq+EAAdd3fDdyagxySpIHT4hiQ21yWqTJx5gO2DiDYe /THPnsfiR+uD0u8kLUdHmIRYK32A/CcLGagtJBbWdqriyO2F9xJ5IWEwrojV LLq4n2Bbildi1x+uMi6oeFXUTOeqTlTokamxojg5L0jar/KBgeyynoiDgFW5 XgczyxRHZb2ZKQyGkFxxb7hbbbSOqlLPAAtyTwMZ2zNSnMt5xk06iZipFYMV pxbkuItf2ZQA5SlSgN59iBY9r4WzGcx4Lk53m2yteR6tGnBzzGUri1ymqQ5u /wC06ix3/lbn6YWp+o0GfTlw6DAlzRTmg+p7elltQB+fNvy5x0Xh/FocXhbG Sbu3FfdUHEsGTI4gXM2Bo2tU+DKp6b5yy+87Q3mmMytOLfm0qU7Z4gm/mt8e tvmxtyk9exxsWMZaWtpbbA7WJsMYAY2vxE81dXpCclyVx/0+Pris59zjlzIW nEmvZyr0amU9A2laz61q/AhPVSvkMKOK1oslG0lxoLzp111Ky9qdqLvy1R5z ERskIdlyAp12/wB7YBZHHbk884pD1Oy5S9PWElLv8WedUXjuSplLYA2gW5Cu t7/LEHS0EkLT4sPcxgeahcmKo8rxFZcRmSktz6OuqMImxjcB1hSwlSbg36G/ 5Y014h/CtRMj5eqGbcr0amtUFhhQfZRuDjPBsqyibi4HN8PxRiRhNbhQeIW2 ZpCy/keAzXsvOsSaxDgKjxy6HJLhQHLH4U2Buo+2CJ4d9UW9OvFjS5lYkIRQ nlLptQWL+hp0gB0/3FhCvoDhDTpIPqp8rNcJafJemDVPYtfYkgi4N7g/MHuP njv7Gyjo0n35GLoNaOiydLgx0JPwjH3kI/CP0wdBClGMtpv0GF0oG29h9BhF IL7Ykm+3H4EpA+EdcOBBYs/pGM1Zobh5RyNSJbrNOqCXqhMZbdUkSlIUlLaV AfElJubdLm9umMg0ZutUSRSqvDbecrTjylIjOw1EpUD6ClJB33HPTi3fEN16 0avlN0h1Ym5ZU3H09zC2ZrolRwaW6205uPrR8NglXPGBc1RszwtQv4PLgPwZ T7pYbQ6nyQlW4pIVe20g8EHphIDxseX7qVM5rtJHlurqim5k0o1cpM2LW4Zn U6QzNTJpE/zPKsoK27gB6rcEdDexvfHsdl6sMZhyhBr0Rh9hipRm5jbb7Zbc QlxIUEqSRdJF7EH2xIYKJCjFSqSNvYH39sea3inzFm7NXj9rFEzZKcp9Ppy0 xKay7dbMZkJBStIA++TuKgOb/IDDOT7lKdggd6bVJzJp/TMsZZhVVjUKkVRc 
498apNcfw9NzUcSx2XrXaJwF/LcrC8Bb29YpyPX/AFdWMfLE/VGd0mFoNlNG 4+BfrvbLnBXW2t2XeDTSLNG8N5lpWOM+0sOPc/xal5arr/aoUW8+Ha71ojQK 8tkv9DXAkAdwDIh7+vp764H7ZfZ7I+03hvDg1zAfQ2t7pfUG4MjnHcGvXa/w +qhSm8MHUDrn47aqfcuxd27M2hW3A3S61dxp1pZmhCAfDREFgZJG7clJ4rlv UAavfubZtNN4X7p0/wBuUccEDWCaz2+m5kJGvw5iiTkcnA+UZP8APXS9KxH4 mIyN/vAAfILImfcrn+bifmbC533Hpd1wW/Wyh3H0+3TTUNlh+FglitEsg8sN 2/8AZ8snI7nPpxxkY1MPTC+7ts0Yprlb7pYpzK8VbVTW2oVZabu4YEqDGyYA BVu/YEHGqPu/3aM6RsLNefw/fkusz+owdRnjDRQa2rJ3s7n6p50nU6W/bjq9 nJvmGntO4qWahkrN308KQzxCI4R/l5ZLOcM+FH3Jxqo192LbbU9BtSw2O5Xi 4V63HldaxY6GhrvhlIkanZizNBEscx5kqzsVPoAp1+lzU18ROknc9+3ZYkzC SJOR2+fdY6A7e3Z1z/EM2jXUNkLUG3pqCtryjsYaGjp3BBZnycsygKpJLMxx 6EjrAMkFjnLHOiBCzGaIWcN2QOXKZZNR5W8T+TUcyvNSCrLn1B9RrL1AKBWp 2l7AFxO0fLAwOQHbOPfXLdW6M/PmEsZA2o2i8HOZjsLHgrQ/lHfUO9buqFz6 c2W410lA8i+bb6KzIlTJEtVNOJ3qPM4kE+WkGR3A+cZBzrpyC4ho7lYznCNj nkcAn6Ks996yXK4PSXS5bXoLhJLH5jPUXKsmcHJHoZsYGMA4GO+k+o60VkHx a0vTnbqzxItXHO6SEsTnKxK3NWft3zgdx379tCTH8OMEEn0XK4uaMrJLH6Wi ibcT+/onPsDrr1Ql602GkobzRUVq/aVNHW0sQiiSaJpFUpgLk5BOPfV6fLaC qM1S3zKThT3Zj9ToDKjjhdpYN3Le6Rkz5UZdM6w0/sIDuXLMe7dzpn766gNt e8WzblisU24Nz3xn/Z1qhmEIESY82onlIIhp05KGcgkllVQzHGlwFotGt26N WC7bqbf81i3JR2wlbbDXrUW4yeUjtI6NC3mdz2VWVsDI5dhgaN3bZ9ru96a4 VTziVlVTxZSPlV1H5lP8LsPpnB/MAdOCk6mn2UBQbGsNsssFHTNUhKZFSNnn 5MoVnYdyO2DI5+np20oWDaNBt+inFopJkhqCrkE8lAVcAL29MfqfvgABnPaz 3jSk1r5LoWlIBVYjjxI1nAxqQN7qmiOUi3q6Xm33CGK12dqmN05PKA7BTzUY wvfspLffjgd9E7duLeFXa/NrNpy0soeReDVBwQHjCsPcZVnPcfwds50qsKYA rlaW56rct3q6XdGyKWOkjjBgmq4Fk87LsOJVwcHiFJHf19dYuHSzppdII46/ YG3pxGX8sG3RgoXXi3HCjjyUkHGMjUS1oN905cW8FRhsnfvS3YtJNYOmO3ds UNjRgoa23NPNypI+eOXgcLk4HPsMgad0vWO3wJ5kctNUgsqARwPnJ+6uwP3I 7aiS9p3CFGQySzf6fPb8kq0HU62Vl8pbSKq0y3Ku5fC0MFcTPUFV5FUUp3YK CcEgHHY6e33GmY/VsiNO12PwWD+TUIeJfa1x3JddtfD01luKWe33G6Q2augm knuUwESuITE6kGOAlj9cgDudJ7g1zbNbpeG6SJ4aAdu6r1Hb4/kRegpndTIB Hm6NGMMD8q+YMZ5ehPsdLdloN3BXFF4dbLGM/u3msUmVP1zOzZ7fX7a0JNFe 3MfoPyC5GHxmyf0sQfIn8zSWrJauu8V9o4qW0WmxwCrhMjQQUFIVjLrzGI15 
dlz298/ri4DFRIzAduR/x0KTCXf0ja3sNmY1hOUAL4AAH0Cibd3iY6Z7W3pd 9rpUVt1vNkfyKuloo14xS8AwRpGYKvqAT3AOR6jGotuviq6c2bY193JaHof7 f3O1+T5tTXwGKOVVYwwglziBHYniMciWY5J1PwXkXWy1GtIOkAk9wPJO3wo9 Wdrb52XVbdgqVl3RTwpc7vM12S4TVzvhGnd07JhwFEf8K8cds6lnqRumLZXR W67nmkmjSihBMkEKzSR8nVA4jYgPxLZ4kjOMZ1WQQaHKaYhri42Pzr0UL7D6 v75Xf9NWXfe21d0bSll8ysk+LpqKogjY4z5chSRCuclGBHbsT2OntuHqSenm 46/bVyqKSvlgcTUk9y3v5JlhccoyYljLqpU4xg5x2z66DlhDpabYseV/JQgz HRY5dIQaPOrTYPBPx+Ci+s613mt6tJuG89VLDRW2Jl5Wu1y1dQscQADCNvJ7 vgZy2VJ9Vxqctr7weo643zp3V3X9p1Vmoqe4/GLQimEkFR3iBw5DPj1wqjRO h0ZawChXdBwZH3nXI54Jvtvzxv8AgnnI6RwtJIwVVUsxJwAB6nTXXqn00MvB 950SH1yIJ2X+oTGnJI90WiS6NvvuAS1a7/YL5C0livdDcUQDmaeT5k+mVIDD 9SNHpGWOEyM2FQFiT7AdydIO1C1IgdjY81WjavhM3JU7HhuNl3rtyvgruVT5 0STFX5ksPmAIOM4/l6DQtZ4S+oIkZqWu2/Lg4AM8ikj+ceqz1Fhcdigh9npQ z2XA/RM2wdFtybK8aVsrLxfbJYntVXQPNUJWhnkmkYtHBHGAHZnUEZxjuc6u ZWqq1zSIoUOc49gdM/IbLKCLqkViYMmLA9jyLu6HZAH8mD76Z3UerMW69pzU lqWoray6taxUO2BR0slOz1Unb1Jip+IH1bUMhlttHYzwxyYE8sjosbVUpUyf LykPZCATjv8Ap7aBSpll4SNgNxJHfPqfQ9vXtoYhXnlGk8xqkL5jhEIc+2T2 7emprJBhz9s6Ix+6pyOyod1c2Ds27+Lbdkt225Qzu1wly5hHL29/f9TpkVO1 NsUsTxW3b1BTJ7caaMsPvkrreawFotcdP1HKa7ww80Ce5UueBekoaPxA72io aaKEPbICxUdzif31Y/r3TCq8H264iMj4Dl2+0iH/AJazyAH7LYicX4+p3kqL bXsUd/6yRWCoeRaK4r8PUPCAWVCRggHt+cL/ACJ0b35vW5dSeoc+7b7FDT1b 0UEJhpwyRrEicQACScjH17+utJrQZAfIfn/C45znMx3MPDnfl/KQJDbo5zHL HTuXDEc2P5uII9/fI1brwt1tVfqu9bnvbNNdqiOOkaqkUiSeCPgEBz6qjZAI 7DJH1xRlNbpB7rS6Q4+MWDirU4bjnan6f3SoRVZ4qCd1VvRiI2OD9u2uf12v 1HbrNDZajp7Y/jo5GeWKjmrKeBiOxYCOoAyOPcYx9NDQN8RxaDX7+K0uqyjH ax5aDz539N1L/hKqblU9dq2ak2/ZbVbquytPJJSSyzy1BjqAgUvJI5AVnzg4 /wDC1lwZUsVS7HAWB2P2AQ6g9ul5F2jMN5fjB1V6Ki227nXR7Htc9urKim82 mSRnpp2iZSRkk8SPU/36Va7eG9xH5P8AbjcCxEZKpdJ15Z+uG7D7acxRuPtA FADLyGN9h5A9VHvTF47h+ItQJJK7yncFGgklkaR3KFSxZmySSWPqddHcckAL HHr30pWA0PJE9Okfpeb5K2x2zpC3Habtcq+2TW6op0io55paiKRTylDU8kac W/hIaTkfqBjVTm6mkLVaQHAlR7Jtm/K6U09mq2kwQfLTkCOODgjtraPZO6JZ Pks7RE4JMsyKMgY+udA+G87UjtbBvaUV6d7iqIODz2+AEehkd+/sewH9NSPE 
siUSrK4Z1QBmAwC2O5xomGMsu0PLI19AKm/V+n+H8We5wP4qvmPvyRT/AM9R vX8VhkzjGDnvraYaYFweQ253eqdXgNuVRUeMjekbEiKWxK6g/wCrUqB/idW/ 6r0xq/DPuqBR3az1BH6qhb/lrMB9oldXE3+gB8FQKw3CWjlr5KWpaGp+Daen qIWIaNo/mXH074II+mkufqJuJkeaKg27FMylmmSw0nMtj1OUIzn7DWiY9e9k ehpclHkGI6dIO/cX+KkXxBbiuuzN87epdvXo2Wkq9sUFbMlKsVOpmkVi5yqj ucD39tOzwU7mrbv4l9wx3K8VFynq9vo5mnqWnZvKqV7ciT2HmHsPTJ1nsa0x Bx5XSl0zcssAIZfbjhWj6nMYvDrueVJJUaOz1Thom4sMRMe39/8ALXPu57zs tTuGGsuW283CWmWYyU92NOsjsuWIjeNx3JJIB/Qdsavxmmy4GkF1Z4OiNzdX JocqcfBZdhdes+5oKS1/AUFtsVPFFF5zTHk9U7szOQMsfsAAFAx76tZfRnY9 xHEtminGAMk/u27DVcgIebNrQwyH4wLRQpUmsu2tzUHT21RTbau6KlFCM/BS f6A/1e2mruKTc1u3W6QWOaaFYUmiikDRNIQWEiAkYBxxI5djpjJW7aP4oRmL qdpmJaK5on04SJ0nYJ+IdYqilEk3m7oQjgvLChxnsMnOBk/QDXSwdkAJ1KTd yn00ERu9VkemvemqlrJAvG3Lncr29VTX+amjZVAiHPC4Vh24uB6sH9M8kGcr 20XptqXuDbcdG+8q+SoihWMVDJ8xI8z5iM9z+8X17/u1+2HvZT1DySjt61XS 00M0V1vsl0aSQOjyRBDGOIHEYJ7ds/8AmdKxHJTpu6iaJVVOu22r3Q+Iu73a Ohk+Er4I6mOoKM0ZwiowyAfmDL6fcfXUM1Vm3Fcqd/2dt+8VJc4TybZO4x9e yfXP9NGNc0MFlc1NjyPyHaQpr8G3TC/bN6h7y3Jf9r3O2C5U9JT0c1bAYeah 5GkVVYBvXgScY9NWcuduo7vYKm1XGDzqWshenniJIDowIZcjBGQSOx0HsCVu wtIjDXKqHUPwm7jsW4XqOlkZu1urKeSD4auro4pKJmGMtI2OcePQ/mBHfPrp mW7wZ9UrqwNduTZ9v5r8wFwepdQVznCJg9u/rooT02liP6W50pIO1qYuofhY k6g9W7bums6mi0LaLZT2ulWjogZUEcfBm5tIBluX07ZH106Ojvhl2n0b6k1e 67RuS+XOvraV6SX41oghDurs2FUHkWXOSfc6Baxrd63810LvEeac46QbA7Wp N3TS0Vw2BcLTX18VJFc6WWh82QgBTIjJ7kZPfOPtqm9b4ONx3B5UtfUjalWg iSjkmLSLyjIPFHUE4yASMffVwI06XCwh5IpvEbNC/S4fD9f0Ux+GrpVL0Usl /Xc289tV1TdngMYoHKrBFEjYDO5BORIDjAAHfJzqev19R/dpnO1EkqcMXgxB hSNuCr3BTeV+xrWlYHRzIXHMggrgY5L6gsc98lQvblkJdPU7tq6KoFbtKhRk lmEHmMMSoFzGxXkSpLevf9PTJjTT2RYO3KL7Q23TW7f1XdIumlj2+80Uimtp IolqJz5x7MUUHiycWz7nPtjT1XuoH20lWQAdlsNexnSTL3Ea9jSSWOx1j20k ki3ez3auvSVFFfJaSFVQGBWkUEhiW7oy/mBAJ9RxGDgsCmLtfdw2rDQv1Brm qIwQ1TxbLZfkD+buQOwz+h7dtS2Uw4DslDa9hutlim/au5qu7vKsYBn9EKg8 se/fI/p76XRg+mmO6iSCdkWuFBT3KyVFvqlLQVUbRSANxJUjBwfbSEen23Wv MdwlWqkmi87iTNxH70Yk7KBnOT29O/pjtpWnDiFtVbB29V3BamSKpV0AA8uo 
KjsUIOMev7tO/r2+uMOT+In6nOkkSSilytNvu9MkNfCZEjYsuHKkZUqwyCOx VmUj3BOiFLs3blJcpayKgJqJmRpJZJnd2KDC5JPsDpWQlqKLydPNnzMfOs6y IRxaJ5XaNhgDuhOPb6eufqdOCNFjgEa5woCjJJOB9/fStIku5W2Pvr2AB2Gm UV7HfW2MaSS//9k= --= Multipart Boundary 0912061926 Content-Type: application/octet-stream; name="22.JPG" Content-Transfer-Encoding: base64 Content-ID: <13600194> Content-Disposition: attachment; filename="22.JPG" /9j/4QDmRXhpZgAASUkqAAgAAAAFABIBAwABAAAAAQAAADEBAgAcAAAASgAA ADIBAgAUAAAAZgAAABMCAwABAAAAAQAAAGmHBAABAAAAegAAAAAAAABBQ0Qg U3lzdGVtcyBEaWdpdGFsIEltYWdpbmcAMjAwNjowNDoyOCAxOToyMDozMwAF AACQBwAEAAAAMDIyMJCSAgAEAAAANzE3AAKgBAABAAAAowAAAAOgBAABAAAA yAAAAAWgBAABAAAAvAAAAAAAAAACAAEAAgAEAAAAUjk4AAIABwAEAAAAMDEw MAAAAAAAAAAA/8AAEQgAyACjAwEhAAIRAQMRAf/bAIQAAwICAgIBAwICAgMD AwMEBwQEBAQECQYGBQcKCQsLCgkKCgwNEQ4MDBAMCgoPFA8QERITExMLDhUW FRIWERITEgEEBQUGBQYNBwcNGxIPEhsbGxsbGxsbGxsbGxsbGxsbGxsbGxsb GxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsb/8QAvgAAAQUBAQEBAAAAAAAA AAAABgMEBQcIAgEACRAAAQMCBQIEAwUEBgYGCwAAAQIDBAURAAYHEiETMQgi QVEUYXEVIzKBkUJSYqEXM1OSscEJFiSC4fAYQ2Nyk9ElNEVUVnN0lbPD0gEA AgMBAQEBAAAAAAAAAAAABAYDBQcCAQAIEQABAwIEAgkCAwYFBQEAAAABAAID BBEFEiExQVEGEyJhcYGhscGR8BQy0SMkQlLh8RVyorLCJTNTYpKC/9oADAMB AAIRAxEAPwDUerubqxlWTl1mFXI+XaXU5rzNQrj8QSW4SUMlaBtV5RvULXPt YcnEtl/Mkyp6fUypuVVqWuTCYeS6iC5GExSvxKS0vzIBtwk/4WwqY5X1NE0m J1tRwvwuP/o9lXzaOE0McuXU3ub8bkWt3DXz8EEZX1d+0oOa6NMzDDezHSq/ OjRYXT2uNwWnUpS6UAeYJSVEq9bY5zrn/O9LbpcTK8v7RMusuwhKiwmlrltJ iqdIbSshG4KTbvz272xa19RPTYm2BujS0G1tdQdb9xAHml+cWweSpj1eDa/m Ba3fc/RFum2Zq/m3Tqg1efOp0hybTWpUlUJFh1CpQV5VWKQLAEAXCgR2w31i ztXsmeFfMubqC8wmpUmlrlNKeZDqEOBaQNyT34J4xbhxy3uvKMCamEjzrYH0 RXkzOOXs8ZMbrOV69Aq8YEMvPwng42h4JSVoJHYjcOPS4wRXUU2sR9MSbrjZ eo/rfwc47UOeQT8h3x6d7LkvFrr4EpHIP5jH24FPBv8Alj5ejUXXCiqwsbH6 45AcsfPb1GPV9ZQWbM4UvJ+XjPq7r7yikliJEZL0mSRYEIQO9iRdRskXFyMU 3U/EjnVLgcpui7jMcjzLq2YY7Cx8tjaXOcByVAY7KNSuZJaemaHVDiL7WFz4 8FDS/FJnCME9TTCjqub/AHWZF3t+cexwgrxjwojQNY0oriT6mBUo8oWt3sdh /liH8Y8bt9UHHiuDzSCMSubfi5unmQSR9FFVXx/6T01lluTlTNiZLrmwsOxG 
W1N/mV8n5DF95Jz7lXPuU2a1lKuRJ8V1tLi0IcSXGtwvtdRe6FehB+eC4apk xtax71f1OHOhj62N4e3m3UajTXbX4RMGWiL70i/pyMfdFr+1T+pwbdUtlIPN JWjZsBT22+n6Y5QwncFPLHf1vgWwdui81hugSOM2KixzKazGD/7SUxHj70ub F2Ebi5a3bLk342/x2Ref1LYy+HvhJqqk2Qp1plhpUQM7EctevWK9wIv+8O20 4sA2Em7rff37oUveBZL1B7PUzM9WXRo1UiMuuNCmpdihptDZLPUKvIbEXevc kj2PGHdJezk3VIbeYYcp2P8Afx5KERkupK7ktrJAG5ABSkLsL2JKQeB8WxBt uK87T9Rso12fqY3FafpNGcSmLBvKjLhtNGTJDCgvpngXKygpV+FRTbsScSUS bnwzIqFolqlKmIRLakxkNwUM7zyhxI3m6LE23WN77eAfS2ENsuQyQm4KZKrG psOtUd9+lzZKPgWnJsVqKAlx273UTu2WBFmv2k8HgKJOOm6hqXHp7DNTps9U yPZF4TLa25jhdBO5WwhKA0oAW2glKrqBAx7lgtovD1jnWKscNKK+L97A46CQ B7k/liv8EYXZRqknVIQm7rbiPY2uDgYzlnql5XTBp0Rr7Sr9ZdMek0xK9ipL gF1LWr9hlA8y3DwkcC5IGOXuyC67hb17wzbn4cT9FH1mkVCj6N1WfEEbMWaF xw86okMJlKSoEMoJBDbQ5CU9uAVXUScUVPzxU0OrRmrw+5rZWgqClswG5aRb 8RCkEcDGadJaOp/Ftmpp+rdlAtrY6nVM+G0dHisLmzltwdA7lbgUH1nPek8W odCp5EzTTpCklbSHKU80opCbEgBd9oB7gWAOIJOdPD85BtJh1httR4vEl29u CLi3J7HC82bpKG2bK0/T5CMf0Awt5BDB5P0XcjNPhmlwDFZyxUKqsq2hv7Hl yFKuPQHFgeHPL+mNI8Xcx3IlElUWpu0eRHq9LkbkKibFxVt/dFSggK6hI55K VYYujP8Ai5rS6vkBFtgRzHIKDEcDp8EogyFuXMeBvstUJYSWwb24/dx78On3 P93GnXCTbJ0R3JV8sJqRucsonngc98R3svXR3N18IgUgqClJSe3bCyI6QOSp znjiw749uowwc165tSfOdpPJHrjgpQobgDzb6Y+JUgjBFlTufdTdRJGZF0nT PKheixlqS/VJZSht0p/EltJIJSD3We5BtxyQehZy8RWbcniuxIsCRAmNPMJY Ufhy+hV0dZKr3HqUqBHvbtjOqvpI11S8RSENbpYDS/6rRqDBMKipGy1LyXaX twvw79lI5Jnas6fMyWo+mr1QEvYXBKzKuSUFAIBQVklN788nsMSic+ZiprLr lT0Irx3N9NfwlYLg23vxx7jviOm6VRQgRSEG3O4UlTglBWzGWOoDSebbBR3/ AEoct0fL7lDiUSp5bmBW5C6tHcloQbi+4BQWbgcc4LMo+IejVahvOzq3Q5nQ N3HKcpTam0WFypl07+9+xOLmPpB+1Gdn7M8Qb7qpr+i89LTGRvaPMbW90Uan ak0nTXReoZwrpK2YqQllhCiFSXlcIaT8yf0AJ9MDOjWRM0BuRqbqO0lWb8yN JK2inyUmJ+JuI0P2bfiV6lXBvbDYSHzBo2Gv6JZjb1FA6Y7vOUeA1PwPqq+z fqfnaR4xaBlqY6IlCam9VDFJjPyFSCtLjaBNdCShsJ4WEggHeCewxZS6rMam iO24rftsbe574w7p7WZ6tojdsLaeKvMNpHtiHWDQ6jwtx+9kjU47hWmY8sLl llTAeNi4hB7pB7gdjbtcYiaS23lzLMSlQVFpiGkIYR3CQBt4/LGaxVtSwdl5 B8VetILMnD+/6pZNbbi0qQ5Mfbjx4zCdzyiG0to6qSSo/wCeK10s8TuXYOeJ 
dHzhAcjGqVV1DdVbKei1HBDccLBsvbsSFFVzZTijYDGydDZXkdbJqSLev9Ev Y1PFTwtEh3On0WqW3AGRZX88ddUfv/zxrehSxlTOryZcelh2C7EbKDdxcpRC Ept7+hvbABU9V4VNiNtyMxUV18+V8Q233z3/AGNoseLdz+eIHyMj/Mo5p2RD tGyHKv4jstU59Tcem1F0jt1ShgEfS5V/LAtK8WVRU8fszKjS0DsOs6tVvqAB islxVrXZY23KPp8Nq54fxLwIov5nnLfwG5U/lXxXZfekph5qyhW4KlH/ANbj MiS0kfxJvvH5A4tup5wpLmnSq/RJbU+GlIc6sdwK3JHmUkfx2SRY2sTzguCq 65pDhZ3JQN/DueBBIHgbkX+QPqskQpWoWb9FGM4xdXWqHT8yxBPfpU5s9CP8 T5lstuAKUEec2v2HriXotd8RcGHHpmXs3ZIrUVodBlpiVHttQmwAG1J22GM4 fR00peTHYEm5F972T8yuwkAU8xLHWvodNt7FECtTPFFToqfj9KaZUmgA/vje a4PH7Dp449uMJueIDVuEgoregUw7Sd/SS+myiO34FfW2AX4dTTnV29vvhyVk zD6ORt4KgDxFlCz/ABIMVGN8LmHQ+ubU3uC2XR+i2h/z74Bc8uI1GyO9LyFp E9l401tc6TWKhGDAQhCVK6aEgecqI9P5d8c0uEPo5xIyXsjgOKIYJ6O15QWd xvfysr6rDFW1I8b2l2UM504ssUDLSc21KAqxQqcQEpCrcHasJ+X4hi6c7Vyf S8nT3KLKZYmsR1KbecYMiz6h9y2GgRvUpVuL8Dk9xjWYS4h7uJPsEg4k2Npg iH5Q2/8A9En9FnDSihVnL2UoDFfnLk1h5a5dRd6hWC8om9j6gAJA9OOOMWXG UIqDNKyt5V9hJx+ZMceJcQlkPElNj/yNa1dLkTF09yRJO4udhxiJKZDwUQkE e5NgML4DRqu4mXKgM35ep1d00qFOzDTGJcd1TKTHfTvbX5lEXHr2/ljKuo2W omVdS3aFToKIdOcitSYrCL7EJUChaU37ALTe1/28aj0Zqnskjgv2S0m3fcqr xyATYVMDu2xHlb4X6CaH1dvOnhHypmKVuS/IprbTtx3W190o/mUE/ng5+zYn 7x/TG2tlOUJPEL3aqjvFDCzG5pDBq2Xq6xTVUyWp5bby7IkkpslHY8/iIBBH 07jMlW1McZmQKVmivIjIeSS4uKwRdA/aITzyeAOO/a18K9bO41b2bAW9uHyn zBcKpauiZUGIOkYTa50PK/gVDys+6ftLSihsP1R3ckhTjBUXO4UFJNgE8jtz cYeR63nqvR+lljTqprQQEoX8K4Qe4IvZI9ff0vivmr4KRu4aO9Au6I1GJTGo xqo8GtOw7r6DyCk4ulettdeBkQ4FCZIsTIcQF/Wydyvni9MrZKzZpb4N66qm VhurV2oTQBJdV0mIqlI6Ta7m9kpWtJKiLc3NgMBYRj0NdiIghOY2JvsERV4Z guHUoioYrXIBcdSRfa5VC1aRV8p+EbKzlUprzMqiH7OqEB9GxxDiUlsgp9Fp U3bn0J98B/xzH2gVToMuNuJP3rF/X94cWxf0kvVtc0jYlJGN9Fji8vW0szWy HZrja9v5T8J/BqcVLgTHzC1DA7qD7iLD2snk++CeHqNFpLaHGNRq+VW37GKk 4hO4cXG5RP0vgsmll3AKWI+jfSugPajfbuN1YehOoOZ83aqVaLPzROl06JCU 8y08/wBYoUpVr7/VRHf8sHWpJca8PlaVu3qTRt4JV6eY8/zwuztjZU2jGn9G p+wgVTYWNrL5wdb778Vdk/JNDrmcKXnNSpUKtQIq4rMyK6EqLDlippaSClaL 2IBHB5FsMpFPo9OpyYkWVJlP+YPSpD291RN7m9gLk8kgC/HsLaPHDqXDxVNN 
VOc0MNtNAeNrnRVMIzkOttNoVcpCLDkHsP8AzwQphuv0uPILSkfElCUoTY+Z QJ7G3HBx+YZ6B9dVStYdW/rZaBLKGta7mmbqluo7OhKm1KSS0rhKT5jxjpAQ hq5L6tuy4DKhbd+Hk8c4rH4RVggBvn5X9lJFM1rSorMUhRotQYUfKic2i4/h Chx+eBDNuiULVSRRXk1lymyY0pUbroYDu9pYSVJsbC4IBF/XDNhjTS10LBrl BH+ohEufG6mdnFwdxzBAWoMm5ZpuRdLKXlDL4W3T6VHTGYDhClqA7qUbcqJu T8ycTXVe/tB/dH/lj9BBgAss7c9ziT8rAesmtc7UvN0uZTGZAo1JZdfhw9wu EIF1OrHbeR372FgPW57kGn0l/KdAqTtOilUqjNdVPSBSpawFqNiD6nGM9K6t 8jA5htcn2WxSUww2jZTjcDXx/uj2lQadEcLkSmwmb+rTCUf4AYlytXT5X5Rw B6YyiR73m7iT4pbeSXXJTWUAVfi/FgzySxFk5Nnwqi006w64W3GnUhSFoUix BSeCCPfDt0G1xtng72VZihtRk94WafFOpmHmL7Di7hHkT2mA4p1ThADAURc3 PF0jk8C2KprzzrcBoturaCEhIseQbW/PG4QRhhf3krHukFQ99S1t/wAoHqg+ S7J3WecKkr47i3/Pz/LCLRaEpC3m21drhSBbk/T9BiUQsJ2UMeN4jDHZk7gP Eq7vDotljX6tQ44SlBghCdvAslSb2t8zi4tQ0tuaC1Np1xDTb0BCHVq7JBWU m/ysThaqmhtZ3X+GrQOj00k1GyR5u4m9976lXlInuJo6wlsp56aLG/Pv+mIA xnisbRuv+EdzxjT47MBVC/M92yDs9Lo2WVxKzWKyxBZlSkxfvr7EuFJI5A4F k/4YmKdX8rz5cdun1uny226g2lCm5KFEthoAHg+/88Y5WYK7DsQmkaOy8gj1 JB8CmmHEYqiNsZPaaNQvFNtLy624lAIFOkr4HqV4aTHWmJrqCUgb4Q5PYWGK Z0dmjwH+wq0a7X75odrtlUSpqWCE/aQNx8yu1sFunbFLFGgTwobUOOPNrSrc hZNkg3+QSfzPyxN0coHVeLXI7IaT/ruvsUqhBSgA6k/CscViORfYr88ffa8f 9w/qcbR1LuaUOtCzPL8HVAoOimYqbJzI5Jn1B9tUCoBjpqiIT2Ck38265CgD bb2xWGVcxamaToRlXUvTur1an0lsx4FXojHxCHWkmyd44tbte4PHIvzjL8cw uGqi/CBwa8agnjuCCnSpxqoqKl1UWktdYZeVhuPVFx16cfZS3lvSLO9SfNgA 7C6CAT2ufN/hj1rMviWzI8E0HSmmUZDilISuqTNygUi6rpunkfTChT9E6WEX rZrnk3xtv/RAyYnPKf3eM+JSjWlniQzCAcx6s06jMvBpRbpccqUEuKsLFITa 3/excGleSv6PsgyaW5mOp1h6VJMmXNnvXcWuwSLDnakBI457nnD50diw+CtE dHEBodTueCratlS6IyTvv3cE4zzpPkjUehOKqdKjoqLziQzU2kAPtKSmySbf jTYWIV8uR6VLP8IUyUouf0ipUPRCqZYcd72cPGNCkpWucSNEm1mHtqXiS+qr eP4c6lmCuzaZkrUjJeYJcAH4hiPUNjrfNvMCCLfO9sQlW8OOsNJcUpzKL0xK T+OC82+k/Taq/wDLAYgJGZhuO5VdThFVTOyPbr6qz9FNI9R6DqF/rvNy283C mxCx8Mt9DcoEFN1KbWRYXSbc3Pti2Kshuj6fKrOcKDITToDbYXEkqaCpbvW+ 5aASpQsVFJJPoPXnFVPhshqRO78o1P8Ap97JwwV5pqNkB0d7b+yG5fiHq0+u SFJy9DfhM33Jjz1tup59dzZG21u1jiFq+sGmFdpoazRp7WQpQN3WFIWsexS6 
FpXfF2Jmvbllal9vSOGmlzwXA56G/jwshWq6sZRp2WPsSc/Vc6ZdcfQtyi1+ nOfFxrHhbEtF+Ugnhf64sDKeQvDZqmw63lViet5hrqyY6XnWXo6TwAoLB5JH oTiBhY5/VSa32J9lc1MOHY/F+PpQA4fnaDYjvA3sT9Cuq94VtO6TRxKYzjnm jplL2p+CnlzbYXFxtBH64gv6C6JGpUNTWsuqLxlkEKVuCTfgXSUqt+Z4x27C qaU9pungq+OOop/+3O4eJv7pzUfCtS6rETFqup+dpMQuBTsdyYgpWR78d+Tz b1xcuVcr0vLGVIOXKNGEWnwGksR29xOxA7C55Pvf3OD6eigogRELXXDI5HSZ 5Hlx70RqZW2rYl1Vh88ebHf7RX944nzdysbWXOeXL5Ncta4cSbd8ArqVGFLV uN/hnTdKrXs7fGW9IG2qD/lTTRkGPzS8kHfNdLqj97GWEk/TnDpmwrQ8221R e5HflHOKTTrPP/m5Ffw68vgJOHKWEsAjd9xGAN/+1OHtKWiVAcakn8QSpr0G 72PyuSMX3Rttq1pH8pQdcc0JHenREqmFxpEfZvTZQV2sR9MRMpIznmuflCqL X9g0t1LVVaYWUrluqSFiMVDkNBJSpdiCrclNwNwOmTAObl5qmg/ZuMw/g189 h6pvqbkOnStJVzdP4USl16iNGTQ3oMZDTjS0C5a4TyhaQUFJuDce2IrRXVSP qrpR9pPNNsVqnKDFQjpNgFEXDiQeyVAHj0IUPTA7XdTUCPgR6hHujNbhrqhx u+N1j4O/Q+6s1hDnwilBCglQ5Vbt8sVnr3TKlXNNaVRqTDelSXqu28iO2obn ktNOuLHNhwlN7fIYlqiOpcqWFvbFza+n10WdVvooxrTlUCosqYlakMPoUyu5 7CygPXEM6gJyxucPxL6G0pSGzvSCTa3Hfsf0xRNnjfsUg4jgOI0Lf28Ryi+o 1H1F13RoHwGWZ1QdZC5wY3Nt2/CDYDj3JIwc6DMS6BrDlTMMOa8kZgzBKosp La7svMCK4QFDsSHmtyT6H64CrKkwzQNafzPA9Cr3onTWjqJDwZbzLgf+K1/L IkQ/hQVJK/Y+2Is0aUohBVvIO1BTwBfufphwjcGiyu3tzG4XD0ZaHVpUgOIS dvCrX5xKQaSwHEr3uFHfYRz9MeSyZQvYYw4+CrXN2p32RqPOpcFpTjMVwM7h zdQACvT96+Ij+l+X/wC7L/T/AIYvosPc6Np7glubFAyVzRwJVpZg3zcrPsNt KcXwdoFyrnnj1wGJS8YTzrRCkLiyOU9rXQf88Y50kgcJBJwtZaLh8jSzLxX0 +SsonWHmCItj+mOisprPUcBQBUXyVE2Fg3zhaaO3rz/5uVne4++QUYxmagxw yw9V4YcS1FBQH0qUPvCT5Rc9sEFMJfpTS0bwm5AK0KQSLnkAgH5jDR0bgmbU iTKcuUgnzVdXysEeW+t1F58z5FyTlmM/UJTLT8yS3AhuvLCW7qPmUsnshCbq J9hb1xWWmPiCy/lrLuYI87Lk6ompZgmVJiUy4hsLYeWCncF8hVkg8e4w5VMw E4A4fKq56iPDcMMswP7Q2HkiaR4ncpRmnlIy/Vz8KoqWErZ3JsL8ndiidEc8 TqZ43XswUSmKNFr8l1ubFBBUww84VIXsHJDaikkpBsL+hxXVtYxr43HSx3V3 0UczEqCrc3Rtra89wtxNCTJkGMoWVv8Awg+UWwPZpjvR9X8vtuKSW2KfVJlr /tBtpof/AJTiwxCRrKZ/gl+NrnEHvTioQabUZXwNQiMyGlPxoxQ6gKSQEbrW PpgMn6R6U1dAflZNp6VLakyCthJYXwbJ8yCDx6Yz0uaHHz/3D9UyQVdTTf8A bdp/TkkougOl1O6kpynSXg01G+7fqT7jalLUCSUFVjx6G4wIabUFl7THJrtI 
Qlt8Vn7Va2oG1HUefPAt6Jc7fK2K6qqHCppgf/I33cF7JK+pY/MAOydgBrpy V30Wg5rBdXUsypUSlKWf9hSCiwN/Xm9x+mJlilPN0OLFqdUD7rVy44030A4L 8WSO1uBjWi859EtMY0R9pMs0SKXlvJEmtSkKTFgtl+QttJWpKBypVhybDk/I HFf0vVeut0vNdbnx4LdJpclcKmoCVpkF1NkgOX8qgo+cFJ4CgPng6nh/EDtc 7IeeUU7C9vAKknag49LW88+VuOKK1qPdSibk/rjj4wf2n8saE0ZQAsrdKS4l aocuh1Yv/EB8j7Yg64xTWKZLqcipCmNpZX8TJNigIIG5SgeL8Dkc9u+MrraW KrgdHLt96rWKdz2yDJumEGJQKpLhOTq/NbFaSkxGyx8IJPTF0hJIKt1vNtuC RyBYHE0rIeVFyVJk5fYmB66w9KUqQrce995I598VlHhtJCczRc8z43+VYVD5 wMrjYd30+Em5FjUaU2ijQ2YZaTtcQ0ylsKBPHCQMdurSimOz5KkMsshS3Vk+ VIAuT+nOGQlrWZlVAX0WVfEucy5i1jy9S3HSGnoS3osFxI2x0uE23C3K1JAJ vwDZI7XINSY6nH4bi1hEVsbmWrW3kIHJ9+/fCVSTmomfI7cn2NlY9Nourwah YO/1TKK1JmRJFOQofGSXSwhR9S4vb/Iqvgo0KyI9mnLuY3UPPsP0xpiQ05HF nUlPUUXWza4cSE7hzZQ3IVcK4ExQdYcg5D1Ks+gTzTdH6iYfz+1lqzRXUCoV zKcuh5k6X27QlBiV0vwPJKQpt5H8C0kKHtcj0w/zdUi9rBTS81tP2BPSCDwb vxf8hgtsvX4O553At9P6LiqjbBX9W3Ym48Dqn6nQvMQtcD7RCifo1hozudoR ISOKU5/NzCfMbBx/zfCOa2xt4LyuOmNQaq6Vf1ZYPH8DRP8AlgM0K8unWTm1 kFQgxSQBccoBNv1xSYhI78RAR/5B/uciIWNu+/8AKfhX/wBJIa3XFgO/riMm wGFz0TH5CvLYbCfKPXG3xkg6JSksVVGseYazXqxF0uyupbUishCp01Lim1xo u87ltFJBUodM7h228XuoYrfVWu06mt0/TrLy/wD0XQGwHAFE7nyOfU8i5J+a j7YasMgHWMby1PsEt4zUZaZw56KvTISVEkeuPuuj54b8qQMvctiym3Ax+Hge h9MUTm2dmjMvi6GTZaWVZVo64syYySEmQemVAdtywFkHYOPLz2xjOLVrKOmD 3HS+vhxW84HTsfO97/4Wkjx4IylOZkqmc3YFXpjM3KT7IBS2gty2XRYoeSr0 UlY4IItwRyOTbINenVmjSoNYaeM6lSDFcfdZ6fxabAofA7Dcn8QHZYUO1sVG H45TV8xaLhxvYEEXHDu5qWppmspwI/4bHzO/kdCiSXT2ZDfmbsUiwI7jAfnB p1NJRQ3GQE1V5Edak8BTSAXXB9Shsp+hxf1j/wB0eL8FSRt/ahNMxZIypXMz HMNVy7Dk1OMl8MynAd6NiRtA57AqJ/PGfM0aDZ2pOazFy5AVU4bD5TGcbdQl wjYFFK0qI5COLg2NvTCXTTCnkJ4H51VriVIcbpBTSvy5dWk7C3D19ERaX6B1 OHXmsxZ1Yaa+HWiRGgodDhPWVa7hHAsm9gCeTcngDFw5WyZk3IrkleVqDHpf X+K63SKj1AgEIB3E9rn9cQT1IlkL/D0dZG0lN/h9EKKF12Dc8zvf9O4ITy9T WqB4naehgqSl0u0R+37bRR8RFUf+6la0fnhPxEZuXpjUssV/7Edq3XTOpvTb f6agFpaXcXB7FscexxYYa1z8Pnpxvchc1Jg/EQyzuytsLnlYlVsjxg0Bqqpc qWR65ET8R1lbFoc42bbc2xK0zxWaWSWOhKqVSp7nwfw/+0QVbd269rpvxbFP 
NSSta6417Xr/AGTOzCDMA+lkbI3TYooq+tul9dyNW3afn2kLU42paG1P9Nag GVDhKgCeeOMA+k+q9Hd1r0y0+ytVG5bzsZhVVfbstuOhEYks3/fKkgH90A+u K78A+qroRbRr8x+pProhn0s1K2R8zbAMOvfoAPMrXjj0kxdjSxZQ447DARqT naFkbI/Xkvoeqs1K2qXCAKnJbwA4SAObbkkgckdrnGw07Q54bzWeytOW91XU 3NVc0+8PESoZsqH2hmSU2pEQyY7aH2lLCSUr2kjyHuUnaSEj0xQDlRckSluP PKccWorUtRuVE8kn5k3w+YVCA10o2J08FnuOVBMrYeQ91z8Sr0JIx98Sv54u CNUtrbOYqi3TshP1JKihSUEC6NxQo9hb9rm1ve4wOZEycijUr4yeTKqkuWhy bLdF1vqUDe/8IPAHYADH5Wx6ofUYy2lP5WC58d/0X6QorRUTncXH0CnkxwKT tuvcIq0j/ccvh65FSmodULU08iS70135upsKAPuCR2x41oc1pG4y291CDY6q Up85mpUduUlBSVJupJ/ZPqP1wL57UEZqy8XEkpUqdcAEm4iqt/icN9bP1uGd YOIB+tkFDCBPbxTh1SVxpG0G2yURf5qQMPlttJqzlx+GS+R+TAGEtryf9KuO rA9Ui3HQpLbf/wBCj/PCMlEZukOLVtVaPLc5Huu2CGhobc8vZy8e47D70QBJ U234qz+ylM2lrA9L9LZ/hxgb8ZbAl5WyU3vLaftV877diGkn/LF/gxuKj/N8 BLuPi1A0f+vysv5ra3PNJQUlQ8yj78g2wBzFpUsgNApQkpuR+I++LvK1zdVk 0NbVUsuaCQt8Cm8WLHmVJht2E25z+DkA/pi6vCnlp6r+PCNLp8ZTMKiIkS3V JTZKB01NoBI9SpY/nj6KNjXgtCZocaxTEZmQVM7nNuNCdFrfUvUFWRmIbDKJ E6c46h4RWElZMZK0h0k/skhVkk/tfTEZXKZAoedka15rqs1qGxASGqZUG7OR HlJQUBoXuHNwXdJ7FR9rhqiZYNy7uuFeyv6trnPOjdfoszZ8z/Us85/erc0h pBuiNGQbpYbvwkfP1J9T+Vh1qXzff9T7Y0iCAQRCMcFj1TUuqJnSu4m6WTLX 0xtWbfXHXxbn9of72OyzVQB3etnZzeSr7IpzSyWXqml1SVE32oSpyx/3kJwT MOlERtJt5Ew1/r3x+QcQd/12oPIN/wBp/Rfp2IfuMfn7pMuqQ4tABPllI/nf HYdSuoNuPr2lUhi3+80Rg5txYeHshd17QpCGY6RcFPlVwr3SD/jfEBnudIY1 Qyu2Vbmn/tJYAHmTti2Fj7HccMxaDgoJ/l9ihWPIrMvC6fvvpXGfUEkFSJHB +biMO3XwmU+VOD+tlq/PpgYSmHUnwV+R8pNEpQmtgrsPiIgP5N3wwkuOPZbO x9CLQXSSoXvuewQDcEePuuS2333Kls06gR6V/pA4GTDCXJeq9SplpQWEttpH fjuTxwMaXfZiz4JYmRmZDaudjrQWm/0IIw84NS9THI8m+d1/hLFRVCpPVW0Z p48UKVnRnS3MDRXU8h0Zbh7uNsdFV/qjbgArvg40fq4cXFZq1Jcc/aizStKT 77XAr/HFw6NpVLNhdLMb5bHuVfz/AAKiJV2peX9RwUtL3BudTuSPbchX+WLG 01yZlXRFhOUI9YYlZmzG45JU+topEhxIJQgAX2oSL2STc2Wfp1T05L9EJDhj aGbrc1+SQgQ3Mttq1Q1oqiHalEdDrDCHt8eGoAhtLSAPM75nLWJHnPzIp7UL NmcdZH5FfQlqLSKahxyDTTI+8DSLBx3bbzq5AUfyHAOHLD2xRyfipPyN7LfP j5cVVYxLLPH+Di/Mbk+A4eaqhZtc78eNKHUsD3w9rMrKHlahZeg1FyIuQ64p 
pW1Smk7kk+tjfCX9JmXP35P/AIf/ABxCXC6vGYTVOaDYfVbWrmq+S69rmxlu mz0Kdp8pTJd6gLLjpaVuShXYhNwnvyTx2xYC5ijGDTY5+FiLKvovH40q2yOx mpkOxAt5BwX6SfH1EDIj+Yb917G3qnPxqE1VbayLfEzEcnuNl8fNT2lLYWCk 3MFRH1BGDmydrKTx/wCKCLNL/e6j6bXKM1XxQVTmBUlQ0yhGC/vFNJUpBWB6 gGwPtxiLzSl1/UWiSGl8RYtRcSSbcKbbT/nh3af+h+XyqYi1cLp5ImqQ3ICj uVZ8e3/WJx1IlOLdfKjtJck+vugYQntsSB3Jnjcbar5MgmUhXUueuwR/4eEk ygKOoL2kGGU8/wDzb46DiL//AK91ObH0Wfs1041f/TUUJlu5CJESTwPwpbaW s8f7uNdNqeaSk+ZVhbtbnGo4aR+HAKQCC2eW3P4SnxKujyOb2sPTEBmTOMPL 7gTInR2CiI9UFh/hPRaKeoSeyQAsXJ9xguVrw39nqVPE4Od2tlXtc1Nr+ec3 /wCp+mMZhxKpcmnzKhII6Ab6W5LzakkG1tyvUkWsDe4jcxvacaG9PMGZqtJr Wb3pCqh00v733XFoUgpG4HpMWUO/fYg8kWwwQU7i5tNF+Z2/cFT1VQynjdUS HsjbvKzlnTUPNmrGorL9SeB3uhmDCa4Yj7jYAD1Puo8n5DjBcidGofizpOVI YQqJDhN0VY9FqcQVKJ+rigfzwx10DWtFJHsGPPnoL+pSPQVLnyOrZNy9g8rk kegVYZhjfY2a50FbgQ1EeWm6jYBF+L/lbD/TbSzOeudZXHy8pyk5WYWUVCvL bP3lu7Ucftr/AJD1I7G3nq2QUomdxA+pQmH4caitLSOy0la5y5oRpfl7JEOi wcn0hxiI3sS5KjIfeXzcqWtSbqJJJP14sOMSP9EOnX/wVl7/AO3Nf/zjP3V0 7nE3WmCFlllLXLLuWNPq7R8s5XnIfEODulFwgqffcVfqlQ8t9hTa3AAAHrh/ H8UepDNBbocDLNDW5EitRHXFPOEnp9lWvwTbtzjMKuiMhLIj2dvSxTlLi+Cm rMmIF0cul7C4PEHuNiB5JL/pE6vzZqymJQmldR1SglharKULK7q9iMcf0261 h5CW59IQobVIAhD9j8J5PpfFeMH1zE+vdZeu6U9GYxZoe7ysi3TlGbNWZ8rN dSq8RGbcnyGE0lyO2Gm1tq3qdYeAN1IcuUk28psR2IxoWpNOP0WLVadFVJUm OtsRitKVFLgTeyjxuSUdiQDzzhvioi7DjTtOpHr92VJLiMFXM2phYWsB0B3t /e6jZVQqjrS3BkGstby+AVSIqLbyCnu93uD+WG0yvSm5xVKynmBpC1qXuajt yBtU3bjprUTz8ucLZwSrB1by4qxbiEIGhP0TI50oEd5v42ROguJWyVJl0yS1 ylJCuS3awwirPOVZEDoIr8TqdAo27yCTvuBYjvivfh9RHuw7H1KNZWQSDR44 JXJOl8ys+M2VrVUJBahMRRBpkZxlSHH1BoIU+bgWR5lBPFyRft3uwJcUQkWB JsOcaDSRmOFrTulywD3O5m6p2k6w1+v6kS4VJys5VI7r0b4VmKemuNFW662t 95w3TvIbCwiyQkFIKt17K54yXRU51m5w1MzT06JDUr4Nhp1SUNsWTuDvl3EF SU+VN+VXvewF69op3BrNXHbxQQILS55sAqi1E8RzuWHJuSNLaKKK3FkuNvTn kguJXYIV0W/woHlHJufkMZ8m1KXUas7PqEt6TJkLK3XnnCtbij6lR5JxoOFU DaSLO7V7tyspxrE3105YNGN0A+UY6UUhVQ1dpK1N/dLeWsqPYNtJCnT/ADSn /eOIGt5mlS9cHKnTWpE6qy6kZEOJEbLr7yw5dASkc+gxES2TEJC7ZrAPqT+i 
lhik/AxNZu55P0A/VabqXhrypqFrujNWbX5rFN+Fafk0BA6QekEkqLqwb7Rc JKRzcdwMXjRqXBpFEZp1JgNQokRPQYjx2um20hPZKUgWAwgSV0lVE1hOjdPp pdak2jipnOLR+Y3+uqk0D7oX2/mnnHVh7o/u4Cupg0L8/tedMmdPtcY2X40u Q/S6o2JEFx8edk3sUE9iAQRx7g2B71nChsN5ueZlpdjvB5Sm5CDZQIPZY9cK 8MbonPi3ylB9Oo2SVMNa0W65gJtzGh9kWxY7yMzfFIUl0PEBYbBIWbWBA98T whzZTpREhSXHE8EJjrKgfpbBGQ8As7jY83AF0RabaS6w06mwdUchQ5TOYKZV lxn6XUf9maqkFSUqVcL23G8W7juCDdPOjdN6VmfKXhyYGo7rMCcmZJlSC5IS tDCHXipIKwdtrqPrxfFnSRytmby1+pWrUr6YYI2M6S6fQX/Vdwc65PzDTapK EhiVTaK03KdlEEhQU2V7kJ/FYJsL2FzcDscQ8rUnT1dHSjL+S6rVPg0Nrjux aWejcqVYAjkWKb8j9q4vY2Yfwji7XS3NVAlGXRE8fNWeJmmwnQcqGHVPjzFM R+SVJbbCyC7fybgQPS1t1+QDdi4xrjmSlJi1h3L8CnveWSzHlOJfWjrAhIcG 4pPTG1Vu4JIIUbJ8Ip4z2jcgqZhleOzsiTKkfOtGmP1XPWc48xh6IgLY2pbZ jvhRK1IVsT5CLAbiT6elzB5u8SWk+Ug40/mhqpSUi3w9MT8Su/tuHkH5qx9D RS18uWnbooqivhoIs07tfVZ8rnimrS8x1EaY5aj0R2sLbS5KlESHwUp2p2IA DbYupSuyvMtSibnE3n6RLquruYqBUZbkyYjTqM0p11d1rd2peWf+8VKBOGGq w9uHxh17vtcnwLUqw4pJiby06MvYDxDlUerdPUx4gpiYjS1iotxZrKUglS+s w2rgDuSScMI+l+pM6OpyDkSuPbUdSwiKSSPlut64Y4ayCGkjfK4DQJUloZ6i tkZC0mzj7q3NL9Nas1qrmTK9RmmmLiUsUpl9hHXdjbkJckO7R5Qd6rJBNyfk MXtpbpfpvpfCVGydl2QJi2x8VWJ4CpL4t2LiuQP4UgJHrjOa3FDPLIYfyvIF /Ae1yVqWG4c2lhY2T8zb+p/sp2ozGUVhAZguOqkKW0tCHNqtikdu1+/b6YFW 3EU+tzqarJVdejiM3JfnCerobySOmmx4UAq5/wCGKKJ2Ru25+iup3Zj5Kwcq TEO6fQ3GYzkZBSqzT6+otPmPdR7++Jf4g+6P7uCdFCoadBoc2rMT59LhypMW 4YdeZStbIPfaVC47DtiEqmVGKrVhLYlyIvmSS2yEBB2nsbpPe3P1OJQxrTcr iWUzANOttu5NJVLoWVJa6jXM1qYQmKp1LEpbYQEN2K1pskKJHHa/fEa7q+1U q2xCyLR5dTC94ceSy6VCxasttAH3iSl3fuJSClJtfBUcBlGY6AcULnEZs3dL Uqraof0g06fnGBS6BRh8Q9IbU8HbNgJ6YW7+ELuocBXNl8WAJr7NOYNEKZX5 8nMuqVXr7rry3QxDkrcSyFlW5A6Y22soC5VcW424sKaCSR/7q3MLblA1VXDT svUuseQ3+/u6CnPEBl5Oc3aLpvleNRmJpTHm1iqqBeK1HhakXVvIClLIUQT2 HPYvczJmhedaXkTLuYX4lAbbDJrNNqbaUsoSOB0ttkquACCk/iJGPp4nRvLJ Dc/PJFUbo6iHrWiwOyQGY9RotMqsOm5grq6owypTQmJZdSFIIC3Qjoje2D22 uBSuD5Rzgf1AzhqnQtKWqvG1Vq3WlJBTT1R24Ui57klIUEpHcnd7D2v5E6Jr 7uYHC/FST0z3stHIWnuVC5hrderksOVuuVKoqWhLn+2yVuFJI5BBJFwbj8sQ 
oBCAAm1j6Y0ujc19O1zW2BF7LHq1r2VL43OuQSLlXX4bMguZmzs/XJeWo1Xg RlCHtlOFLSHF2utNuVLSndYduTc4Nqnl3M9Y/wBJHWa/S6DKl0NhxUCpSwEh mM10AgJJJ5J2pO0XNjhNxOrY6pnY46BhA8dPcp1wuhc2kgkYNS8OPgL/AAju peGym5u1Ooub5WapUNNHp8RhqJFbSOs6xyha3DchP4QUpANgeRg7y5U3FZwn 0ORNc60Z9K+stsIAQsp2hN7hQUoKAtzYg972S63EnSxRscbAC10401BHBK9z BcuN1H1WbWYNdrrEQNQYqbqaeQptJRYputRUTe48trC98N5NSzi5GYSHokZz 4lMiSplYAea2/wBUPKSlJ7k3J9rDAkPbOQSeQHeipQYhmIUtOrgo2S4lWqcv 7Ohw0rfm9FX/AFaeEi5SVWva5A7X7DAhmfM0aExUMxLjMz24DSnH0Sk9FphO 38QUk8puOQoG45vgGaE1F7u0ve3d5IyKZsRAA1ta/erDyx8SdOaYtTqlqciN uKUUgbipIJPHAFzwBxiT/wBo/eP90Yu2EBoF0A7VxNvVMZ0xEKgSZ64rkgxW HHgy3+NzYkq2p+ZtYfXFaaa5z1KzJXUJVEhyqe3UVKqkt4dNKG3IzbiWYyRY 7UKUQCoKKgAVFO7g9kcbo3OfwQBe4OAaoHM0/TPLWpD9YzpmNzNk5oPlimxw SGS64VKSoBRQEgWsCoWNzYcAB+ZfEznB2MqBk+nQ8vQkAIR0kB53aBYC5G0W AA4Tx74YqPC3VVn1GjRs39Up4njYprxU2ruJ4DwVRZgzXWsyVQycy16ZPdPN 5ckr9fQHgfkMD81pL1TjOx5TKegtRW0pX47psLfMH/HDUWsijyRgC3BJbHvk l6ya7r7nj4rjKVGmV3VlVDr77VFY2OugfGBEhSV8JKuFJ+8KQEgD1HJIwe5l y/kCtZqkv1aj12HLqMhmDSZXx6k/ailKQN3LRSgIS4NxuRfgX7jPJS+V7nHe 5077raaSNrIGsG1h7Lyfpvpe7mOoUWgamZ2VOgp3t0uMsrkSR+0tsWAXwCdp KVWF+O2KtzhLh0WEzMyZm5Ffp1RfVC6pStEltxCQpTTiV+YE35BsRtxA7svy OU5bpcKysiZXp+qOVcp5ZouyJV6s8+HpLzC+tFW0bSEvIuUkJSAoeZKh5bbg s20HljwP5KZ6buac1VmtH9pmOUwWj8vLuWf7wxZR47NSQdSwajYlUdTgFLU1 X4l+x3HMq0aZQqLlV9im02G7Hi05ktMx4zvCko3JTcK7qAB8179yTiCg1Bul VSsInx2YyNhkRnGUbG3GlEJ3rKrJW6Ceb9/Lb2wl1TnRvfM4k3v9hMMZa5jY WgANsp+gVClyBSV0yqMy3EKTDkWdSFBe253AXsrt5fS4w5gzUSvg5El6M+w+ lKmC3YNpcQTyjncb/eEX4sL4EIL42j+H3/sjWlrHHmoqr0aPWM0MyXHltstS PjJPVAU0ptAN0ngWSAVG/JBN+bYYU6rZdEJ6FT50haIsZL0SS+gLbWFglITa ylJFiblI72vjwQR004kD7udf1/rso3SumjLHN0FvRNMz5dg1jJlNy7NmTn4U ZK3pTqShKpSW0FXTeVe6ErV+Ipv2tgDl5npmYclGmwVJnTGhvqCGoqW0qU4i 6EIHZbdipAN+TcYHhe9s+V17WPnb7KlliibHmbvceSvfJilTdHqFKXBXGU5T IxUw5cKZPSTdBvzdPbn2xM9D/s/54vWnQIF25VY5+1Hy3kiLtqMpb81Y3Nwm Td1Q91fuj5n8r4zrmvVDNFeEuNAV9j0+Y+5JfjQ1lJeWv8SnF/iUTYX7Djtx h5wrDw5vXSjTgPlZ5jWJFp6iE68T8Kv0tuh9ZSslCzYBJIFvp6n1x62390po 
[base64-encoded binary attachment data omitted: two JPEG image attachments, "88.JPG" and "AA.JPG", delimited by multipart boundary "0912061926"]
TX1EWYdKSIpDSIhET41FV5A7TYQ/UIVlea9SebVMbKRkhcFPeLlLeLhTcqJG XIVKWoVOXoZSWXRES2ZDS19LUWRSVWVIQE5RRlNEOUZIP01FPEpHP09DPVBJ RVdGQVZFOlFGOk9LPU1KPExEOks3M0W9u89NQ11MQWVMQGlNQWpQRmREO1NE PE1BOUlIQ1tJRWdFQHFLSX9MTINKUYZJVIVTXo8/Tn47SX8+TopMWqBJZahS crJSbalTbqRYb6JUZpVba5ZcaKNaYppgY5dfYpRdZJlPXZU8VZQ4XJc1UYlQ a6FNbqhWe7E+UnxQX4NJW4ZNXoNMT3NITm9CSWRESWBLTWJDO0xHPEtHQVRA PlJDQVU2NEg+PFBAQFM6Ok1AO1BBO05DO0lHP00/O0pBRFRKTGFDQFxHQmdJ RG1LRm9IRGM+O1M9PVA5OlNBQmVHSXNBQ3pISoQ5PHhQWIhNVYI/R3VKU4VI UIdETYdGT4tDUYdDVIZHVYNGUn1CTXI8Q2Y/RWZBV3hHVnpLVnpHS3lLU4pV ZKJXbbBMZp9NYpJDV4M3U4NHY5M6SW9ETnBIWIBIV31MVIJGTntETXNFTW1N UW5DPlZCOVFKR2FERWA/QFm+v9hBQ1lDQlg+PVM7PlA9PUxAPUk6OUU5Okc+ RFNESV5CRGRFRm1FRXBGRm8/QV9ARFlHTWBAS2VFT29SW4FOVIFESHY9QW9I Tm0zOVpDSHFSVItUVo9KTX9DR3NKS25ISGdFRmM9PllAP1hAP1ZEQ1o2RVs3 RV82QmNBSXZCS39FT4ZGUolOW4xfaJBgbI1XapFGWoRKVHZTVXVHUnlJVHlZ ZKJPXJdMWo9HVYNJU31BQmk+O19KRmVMRmFUT2dNSF9IQlVBO0xAOktAP0tA PUVBO0BGQEU/PUI9QUlARVJEQF9IQ2hIQ2pJRGlBPlZPTl1ITldCUlhBT1hJ U11MUFpNTVRPTU9AQ0NDR1NFQmZZUo9NRoJIQm9FQ1VEOktEOUpBOUpBO0xC PlBAQFM/QlRMQWNMR3BQT4BISHtMS3RNRmFOQ1pLSG5HQV49PFM1QF1FUnVH S2pJRWVIS3FSWX9NXppVaKNEXJU8Vo9JYZpMX5o+T4tJSndOSXBPS21OSGc/ OFE/OU4+OEtHPURCPUg/O0pBOkFEPEI/QE1JSVxLPllMQWNORWlNSWs9P1RN Ul5HT1U/TlBDT1JHU1ZHUlVNVVdJUlJIVVVIVFdBSFFLS1xEP1Q+Okw3NkVA OVFDPFdCPFtMSGpMS3NCQm9HSHhUT4FTUX9MS3FHSV9DQ1ZIRFZDPU5CPk45 NzxCQ05BRWRKUHtETGo7R147T3tHW4VHW5NMYppNaaFTcKlOa6ZFXp1NY6VH T4ZFSXdMTnhMTXRDP18+O1dEP1dHPlQ6O1Y6PVk8N05COktFRFtLTmhIPVdI QGBEPV5HRWNARVRGTlJFTU9ET1JDTlFDTlFET1JDTlFBTE8+TU9AT09CTU1B QkU0MTk6N0M+Pk96d5NsaIhBQWQ7OmNNTXpISXlHR3pKSHhHRm5BP1s5Okc7 OkZDP088N0xEPk8+O0M/QlRDRW9QV45EUHdCUnBHWYRIW4RBV49FXZRHY5lF Y5pDYJlMY55MYZ9QYqFNXptUYppHVIlASHU3PGVKT3ZDS4JGU45GVIpNUX1V UHVNTnU/RmxBRWQ7PmI6PV88QFxBSFNDTExIUU1FTVFDS09DS09DS09JUVVF TVFCUVNFUVRGTlAyNDo1NT5BQE9JSF9JT3JKUXdJTnRJTHJNUHRub5J5e5ti YH5IRV08OkxMSVNMR1RMRFVIP1VLQVtLRl1FR2dITX5WX5tEVo1FV4ZRWoJG T3dKYplHYphIZptHZZpIZJpNY5tPY5tIX55DWplVaqhRY5xDUYZQXY5ZZJNP 
YJxEW5pLX5dQWoZHR2pLTnBTXoVKVYJYY5JGUHxAR2w/R1pDS1FFTE1DSU5E Sk9GTFFGTFFHTVJFS1BBTVBET1JCSE01Nz84N0VDQVVFRmFDTnNDT3hHUndI T3RHTW5KTmtKTWlEQ1pPT2Jvbn2ioa96doZMR1xJQlpKQlNMR1xHSmZPWX1W ZJBIXY1CVoRMV4RFUXxGYZVIZZhJZ5pGZ5lHZJVEWIxGWI1MYpxJX5pSaKJS Zp5IXJJLXZREVotFWY9LZ58/WIlFUnVHSmJOVmx1gaJTYZc7R35XX5ZFTXpG TWhJUFxCSlBGTlJOVlpLU1dLU1dGTlJDS09FT1VGTlJITVI9PEg3NUc6OVA/ QV9BTXQ9TXU8S3FDUHNCTG5IUG5ASGZBRWE9QltCR15BQltQUWw+PFhFQlpM SE5OS1dBRlU9R1c9S11OYH9EV4BBUoRNXpBRb6JPbp5Sc6NScaBRbZtCV4U9 T35RW4VHUXs6SHRugK9PZplSap9GYZc+THpVaJtIWoU2PVk2NklCRlhDTmhP WYM3QW1ETXc4P2SMlqiEkJVmc3NHVlhCUVNIV1lCUVNAT1FAT1FDTVNFS1JP U10+PUxHRVtERWJXWn5aZ4pebpBdbpNcbJRRYYtWZpFXaZZRZZNHWYhBU4BD U35IVoFHVn5JVHE8Pjw/PkpBQk07QT48RD1JUFxOVnRLW4VTY41Map1FZpZK bJpJaJVIZJBPY49EU4FITm1BRWRXXoNufaVQZZVMZZdDX5VDSHlFUoVRXIlF SGo9O1E/Q1dBSmJUXXU+RmJSWnZZYngtOT42RD47TENrfn6BlJRyhYVIW1tA U1M+UVFFT1VHTVRUVWBEQlRGQ1tJSWhHTHJATGtIVnRIV31FVX9OYpBZbaFQ ZJxPaaNcdK1fd7BEWI5OY5VFWIlYaIo9PUIxL0E4N0Y9Pzs+QTVEQ09OT2xM VHRASGhEW4A+V35EYo07WodAXIxYZptJU4dRPmNfTXRaVX5KUXZEVXhOXY1L WJVNYqRTZaRJVotBR3BAPFxFNVBGPVVEUGdTWn1BRmxGR240Nkw3OEM/REk5 Q002PkA7Qz9WX1V2g3h6jH9ndW1HUE5ARktERFdGRV5AQ19LUXJJVXZVYoNk dpdne51KXoI1RmtFVHpSXodkc5lUZYpKXoJJWn9RYIhGUHA6ODo6OD02NDg3 NjU4NzQ5NzlBQUpGRldDQ1RRVXFLU29GVnhSY4ZSYYdUVn5OSnN+a5BSRHBJ R3NBS29EVHJHUX1CSIFaaJ16i72PpNSJoc5yiLNKXIlOYI08T3hcbJZXY45A S3JJUHNOVHNRV3ZKUHFCRWE7PVIvNEEzPUcwQEgrOj4xO0NCSFc9PltNTXBE R2tNVHlHUXtIVH1UZIxMYIRAVHY8TG5OWX1UYIFgboxhco1JV3E5Rl88Q1w5 O1A4NzY3NjU6OTg9PDs4NzY8Ozo2ODw/SVxCTF9MSWVKSGZCSmpJU3NQWHhV U3JQSGZPRGhIQm9ITHpIU3dSYHxCS3FscaRYXolFUXxSa5pEZZVDaJ1HbadL bKZLXZJKXohPY4dQYIJKV3tIUn5JUX9UWYhWXIlZXoVRWHtLV3ZAUG44RVw0 PVNCSWVVVX5SUn1GSnZOV4FKU4NCUH5HWYRHXIRFWoBVZIpRXYRKV3hEVHJJ WnVfa4A/RVgvLz41MDs5ODc4NzY7Ojk6OTg2NTQ4NzY2OD5CTWdFUGpHR3BB Q21MWIMzRm9DU3tOVXpIS29CRGxNUoFIUH5GT3VJVG6KkLNKS3pASIBGUIdV Z55TZ51AVo5KXZhVZZ9VX5VjbpOmss+kr8mfpsuepNGVms2Tns+XpdOMm8mI msVOYItHWoNNXIJNV4FFTXtDRnpNUIRHT39HU35JV4VLWYdOYo47UXxJXYlQ 
YItMWoY7T3E9T3A9TW1qc4s+QVFQSVIzKiw/Pj09PDs9PDs5ODc6OTg4NzZA PkJEQlRCQFJLT31VX4xJXI1QaJc+VoVRYYxASnZLUXpJTHxFSHhHTHM9RF9K TXFOTX1XXaBASIhOW5ZHU4xTXZM/QXhMUYRZYpJLVn07R2hBTW5HUHhETHlP WIhBUXxGWoZEXIk+VoVYcJ1TaZRHV4JHVYNHUIJSVoxPVolJU4B6ia9CUXlE VHxJXoZCWIFAVoFCUYFKV4pOYYhMX4ZEVHxRW3s2Ok88OUWXkJFbWlkqKShE Q0I4NzY5ODczMjE8Ojw1LTs6MkA/QWk7QWxPYY5NYpJLYJBbappfaptSV35I SHVGRXVJSXJISmhOT3ZMS35GSIJJT4hDT4ZOXJJNW5FYYplGU4g1R3ZRYpZW aJ1PYZZOYZRIW4xDV4VMYIxIYI9OaZ1OaqBOaZ9JYJNDVYRDUoRFT4NUXY9W YI1ZZYygstHY6v+lt9ZMY4RUbZRYbplIWY1GVIxDVYBGWIVKWYk/S3Q/RWQ+ QVM7OT47OjlVVFOHhoVoZ2Y4NzY5ODc7OT1MSFhJRVVbW3pbXX1WYYZUY4ta aZFja5hDSXRDRmxKSnVOTHpIR3BCQ2hCRG5JTIBtbpFrcpeImsdLZJZFYZpG ZaRDYJ1ad6pTcK1BXaBUcrdTdLJXea9Yd6dMYZFRbKBFY5pPbKlXcrBVbKlO ZJxKXJFCUIVDTnthbZaSosKDmLK1x9qzx9nc9f/S7P/N5P+OnMhaZJFVZ5RE VYdCU4dXYpNKU31FSGoyNEk7O0I6OkE1Njk6OTgzNzI7OT1LSlZISWRHSGVK Xn9KXH1QYoNUZIZVYoVZY4c6RGhTXYdaZpFVYYxHU35kbppweqZQWoZqd5F0 hKRgdKBlfLFacKtKXJtIWZZtfrNzh7Vsgq1wibFvibNwjLhzj7tvirBtiK5y jbNxi7d0jMFSaKJGZKtRbLJIX5pAVH6+0vaEmrqlv9+8zNzC1eLi+/9ifpNi eIz2/f/r7O+80fdfcpk4RG1SWINnaJhCQnVFRnVBRWJAQV4yOUU9NzovQjhK P1BLRF1HRFxORmdUbJlQaJVWa5lWaphCVoRJW4pZaJg7RnUuOWY8R3R6hrFb ZZFocpxpc51ZZZA9THpdcqRLYpRMY5VCVoRIWodVY49NXoNvg6RuhaBxh6Vr g6JlfJ1aco9WbotWbotXbpFsgqs8UYFJYqNIYaJacKi3zPRec5dxiKulwOSa ss3K4fjI4fZddYhsgJDe7ffu+f+zxt3e8P/t/P98h6JGTmxKUnJPVXZSVXdG S3FCSmhSSFxMTlRvSmRlTGVHTVxIS2NNY51PZZ9XaqVbbqlTZKBZaqZYaaVh baRaZJhSXY5xf61YZ49JWH5BUHZHVIlQXpRRZphYcJ9Va5Q2Smw9TW1UX3nK 1und6vnR4OzU4/HZ6fne7v7U5/fT5vbO4vLO4fjb8P9EWHpXaJ1LW5NoeqlZ bpJTaot1kLRhf6iOst+72/+iuM+kssTe7fvX8P7D4fJYZGu/zdbh8Pzi9f/P 4u/N4O3E0uNsc49GUnlBT3peRWt2VmqjVGqXSF6YR19/R1tIUohBS4FNVYw+ Rn1FT4VTXZNSXJJdZZxGUIRMVopcaZpJWYRleKFVaI9AUXZHWoNfd6RRaZhm e6thcJ5EU3tZZn3E0eLR3u3U4e7U4e7d6vvg7v/Z5/nZ6frS5fXX6f/a7P9M YIJPXJFJVonL3f9Uaop3kbBjfZ92kLpzlsV7nsdNYnzP3e/t/f+71+uPssmP lqLk7v7l9f/M4vZxiJl+kqK+zdvj7f+jtdJKV3hwTm18TFmoT1qTQ0mXR0+Q TFBNTn1NTn1KTnxbX41hZ5RPVYJQVoNNVohHUII8R3hse6tidKNXa5dfdaBd 
co5qgKBcdZ1SZ5dTZplndKlDTn9KWX1odpRte5VvfpZpdo9odJFlcY5hbYpj cY1ic45ecJFneqE/U39SYJVSX5CbrtW/1/SVsMxfdpdYbJaIpc57mL59lK/U 5/7c8P94lKmBoLna4PHd6fzj9P9wiKVgd5Dj9v/l8//G0+Bmfo9ATF+GWW10 OkCJPD1/Pj2TQ0ueUFQ/R3Q9RXJKVIB2gKyRnciistyDk71hbZhGUHxIU4Bn dqZjeKpUa55je7Bkd6hYbZ1RaZhKYItDVn1YZoRLWn5SaaRHXpNHXpBIXY1W Z5s+TIJNW5FRW49WY5ZPXpBRY5pLYZxRZ6lNXZdgcaN1i7OHoryet89ziKS7 xuudqsNleI+CnbmLqcifu9plepSKnLPi7fje7v6nv9SForvP6P/L3fK+yNvy //82T2VdXWyZW22aWmKPXmpvWGN9TmV7PU+Uos2MmsVjc55QYItMXolCVoBE WIJTX4hBS3VLV4J2hbNGW41geatferBZb6lPZ6BJYJNIXolUaIlBT2A7TGVU ba9GYJxBWZJOZp1LXplJWpmgr++otutcap9AUYVHW5NSZ6dUaK5MXpVZa5ae s9d0i6ZkeZNtfZt7hquUoLd9jKKWrsuNq8qDn75HWnLT4Pfn9v/Z7Pt8lKel wdbX7/+fssKkscCWpL5BWHFfXXGDS2GNUmGDXHRubYNQWWluVmVJVXxGUntT X4hZZZBcaJNbZpNaZZRMVIRMUn9IUXtCTXRIXYVKZ5BDZo9KbptCYZFLYpdQ YpFYZYlIUmQ2QVtNXJxXa6E5UoNFXo1IXJJOX5xZa6RjfqJkfp6fttewxe1q e61LWZFPYINHWHOGkrF+hahibZFJWH5mfKRKXYapueFKX3mds8fI3/CZrcF9 kKe70eeCmK9sgpnQ5vzU6v6/1uXX6/WIl5s7T2NPU3JaQ1yDWmttR2VjR2dq TF2RaXNIVH1DT3hFUXxGUn1AS3hIU4JGUYBSW49PVolGTHlJT3o1QGdpeaFe c5tQZ4xOYoxca5uDjr1ncJg3Plo8Q188RHFTYJNOXJI+ToZJVolXYY5aZJFh cqeRo9KAk7x0iKpvhaZpf6CHms2fsOVGU4hLVYJDTXdtep2fsc6QnMNeaZBf bYtuf5hYaYKswtlQZX/I3fe4zeeCl7GInrSgtsx5j6Oitc9NVoZGVIpOV4dR S2hxX2xrX2JhVVxYPlBvV2hJVYBGUn1FUXxBTHlFUH9JVIVNWIlfaaBTWpFO U4RUVYJGS3JGT3Vkb5Zhbo+BjrJSXIhcZJJDSXZJTnQ/RWg6QGFRWYlVXZdI UY1IUIBeY4lka45VYJFjbptNW4Zjd5uFmr6Em75me6t4ir9FVotNWo9OW4xJ V4NEU3lQWYGepM20vuBhb41TY4FOY39GXXiQpcOJnrpDWHTN4/qrwdhpfphQ Yn9KWHZKXoI/U31hbpJITGtPUG1SXXpMcpI6VnVJVYBKVYJIU4BLVoVKVYRM VopKVIhXY5xcY5pMT4FHSHVISnI/RGtgZo9CSmo+SGxLVH5KUoA+RnZ9hraI kblBRWJOU4JXX5lKUY5FSnlscpVTWnZrdZV7hqplcZhSYo1ecJ9QY5RZbpRx hKtCVoJCU4hqeLBOXJRJVotTXIZNVn5AS3CMmb3E1PZ5ja5RZoRWaoxXa4x7 kKw5TmibsMxidpc7TG89TW1GWnw+WIKcuOa1zfqHkrdCTXFHXYZJXoRPWodJ VIFMV4ZIU4RIU4RKVIhKVIhASoBUW45PVYJOT3xMUXpgZpFTW4hIUXdhbJNy gKxIWoljeKpTa6JRZZlfZ5RBToFBT4VGVo5OW45WYI1cZYtVYHtDT3BfaZNa Y5NRWoxzfKxpdZRebIpHV39VapqIn9KGm8uartiVqs5sgaVGW4NPYYw/T3lK 
W4BUZYi3yvOnu99gdZNNYn57j7FBU35QYJpUYrJfb7RcfrRrm8puoMDZ9PTP 2M7e3+KUm6dGUYBIU4JET35GUYJNV4tLVYtMVoxPWotiaphNU4BNUX0/SHJQ W4iBj710f6xebJhecqBZcqNYdahQcqk9WpN7iMNPYZZRZphFXI5PYZZPXZVG UolMVoJVX4xtdqZZYoxJUHViaYVxdpxCSHVFU4Fvh7SDoMaUs8yXtsGSssij w+OTrdWLocxKXopQYIhJWHw9T3w+UXiZrc60yedleZ09T3xFVpNCVaBNXp13 nM92rd13r9eCnKK9ycLb8fWXsL5OU4RPVINJUX5KVH5OWIJHVYFOXIpSZY5k d6A1RHJMW4tIV4eOnc1reqpngK9edqNZbZdleKFXbJRSa5NYcaBcc7Jke64x RnRJX4pKYZRVbKdpgsFjerVnf7hddKlRZpRLX4lBVHtMVoNdZphLWoiit92i udKPo63I1M+Uo6e+0d6HqMSQtduBpdBKY4tLW4NDVJBVaZ9OZZhKYZNOY5U/ UYZTY5tRZqROaqN0oNRlmMhllr9ykaqEm7KnxOo8TntTWYZaYItWYYZaZ4pw faFhcJhKWoUyR29GXIdQZJJKXZB2h7l+jb1VZZBrh7NieqVGVoFEUn1aaZFY bZNMYo1GXpdkeas/U39IXIhOZZpJY59JZKJlgLJZdKZheqmKos9Sa5NCWIA1 QGdBSnBTYITa6P/O2u+boaq6uLTDxMXAx9Kct9GFqM1rj7xLY5JUY5M+T4xG WZROZJ5CWpE+VYpRY5g9ToNMY5hHYpZ3n8xxm8Z2n8ZpjLF+oL2lxtRFWHBp dZxdao5KWnw7TWw/UHNKXYZcbptLYI5BVohLXpFSZZhzhbRRYYtwgKhGXIU+ UHtda5lHVoRTYpJDWIY8VINbcqVfc6EzRXBGWoZKYJhOZ6ZTbqxOZ5Y0THlm f6eIn8R5jrKTp8k5QGNHS2pKUW3s9P/R1uPPz9TY09Cwra+2uMC6y+SAmb5g eahCUopLV5BLW5WClM1of7RRapxRaJtAU4ZTZZQ4UXhgd5yrxeeXtNp3lcKN suh9oc6w0NROYXA/UoNFWYc+Un5NYY1MX5BabqZVaKRMX5pRZZ1KXJVIWpFh c6BNXYVIWntrd6BHUn9PXJFNXpNJW5BGXZBEW404SnVVZZBZaZREVYdKYptK ZaNGYaFQZJxPY5lBU4hKW41LXYyJmMZygak0QWVJVXRgZ4BSWGuCi5vf6PjX 1dni4eDk6/ehrcRRXoFIU4RETINMWYpicaFQY5Q8VIOuw/N8kL47T31UbJmG mcCapcKdqcZziaqFq9mEqthgdJZNWYRNYJtRZJ9PY5lJXZFDV49NXp1NXaFI W5dAU49VZqJKXJMqPGl7jLFUZoeNlr5PWYZIVotOXpZKXJNDVodFV4RPX4lH VYBqeKZOXY9FXZRNaKRHYqBMXqFUZqdYaahCUY9LW5dKV5JjfK4/V4YtQGld aYpTYIFRY4RGXYJweYWnrK/i6fK8xdVQWXFHUHhFTn5GUHxaZZJvgaxSaJNG XIc/U3+twPFUZqVQXpSdpsyIlLOXrMZ7oLvA4v5KWHZAS3BPZaBJX5lGWpBK XZBMXZFKWpJEUYxEVZFDVpE8TYlXaaJQYpFAUHpuf6STnsVEUn5OX5NEVYpP YJRLWYREUHlVYYxncp9ET4BQXpNMZJlTcKNHZJ07UZZRZatNYqROYJ9GV5RM XJhbd7BDXpI7TXxQW4JLV4BNYY1KaJ9LYYJCUGplcYZsdIpJUGtFTnZFT3xd Y4xkbpiCkrphd59whq9NYYttgLFRY6JSY59IWYtUZ46pv92WsMKowc5GU2I8 SF1HXIxLX41OY4lNYoBTY4FMWnZMWnhBVIVLX5dRY5w6TIVDVIhDUoJQYIto 
d51GVoFNX5REVYlicJxpcJNMUnNOV4dOV4tNV4tIVYqSq9qEos9YeqhMaKFJ ZZ1KZZtLZJZNZJZacqE5UYZUZZlveKJ5dJlJSHBHUIBJYJ9HZq1NaaxPYJVA SXlNUoFKUIlMVZNiaItNV3tDUnZ/lLhOZYpnfaVziK5QZYk+WIREZJxGY5xP aJlNWXhNU3JJSnlQVYZLXnhHWXhDU3VLV3RPW3JRV3ZMTnhITntEVo1NZKNR ZqZCTIBIUH5KVIF2h6BLWYSKgaVVbIM6VoJRVJBQWodMUYBCTIM+SH5VXY1T Zo2Cpcp9psuVstlMZZZLY5xLYZxQZJxbbp8+THhFU4hWaaRDUoJ8g6avs9Jb YI9KYJpMaKhNaKZMXpVFTX1SV4hOVo5NVHdDTXdCVH9NZIVDWnXC0+5JUnhP W4ZGYYdLYZlBWnJbX3Rcb5g+SXA7TXpMX5L/2wBDAAIBAQEBAQIBAQECAgIC AgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsK DAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoK CgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCADBAJYDASIAAhEB AxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgED AwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAk M2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZn aGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5 usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEA AwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQA AQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl 8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3 eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbH yMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD5 1s7iWe//AHlrvrVj/wCeUe/zv+WUccXzyVm/2XFp9tDr2oSyPbXcW+2js/4/ +uj/APPT/pn/AKypv7Zuo4vstvstk/5ax2/8f/A6/rD2vtj4SrSNWOO7sz5V xG6J9zzPKp/2i1kh/d+XsH/LT+5UOh3Esm+1jutkN9+5lEn3P9Z/rKZHby28 z/8ATP8A8coMib/Wfcl2PH/yzp+n6ZqmqOf7PsPO8v8A1vlmrn9n21nD5slq lzN5X/Hvb3MexP8Aro//AC0/7Z/9/KrXFxrNzcpayRTJDHF+6jj8vYn/AFzo 9qzMrCP5/Nk+enxyQ7/9V/20rSuLP+3IZrqSLyby3i33Mf8ABdR/8tJP+un/ AKM/66f6zHkuPMhf7PKj/SnSqusBN/aF/o8NtLo+q3lm8nmP/odzIj/6z/pn W3p/xk+Jej2n2C48Rw39tn/V6xptpep/5MRyVzGoyfaPJiEmz/Rv3skfz1Db 4uETy/neT5P3lZVaNGq/3hqvbHZ3nxIsLjyZNU+GnhWYyRRvLJZ6bJZO/wC7 /wCneSOP/wAh1DZ678G7xPK1DwRr1g/3PM0vxTHOn/XTZcW8n/oyuY1C8/0x /wB7sqDzJZNnl7/3n/POL79V9Vo2D2rO0vNP+D9xC8un+PNbs/8AnlHqnhuO 
f/x+3uP/AGnVaXw3owf/AET4jaDN+6/5eJbu1/8ARlv5f/kSuYt/K8iTy/8A yHUJiikfv+7+SX+5SdKsv+Xgjqv+EL8W3iY0e1s7x5Pki/s/V7Sd/wDyHJ5l UNQ0vVNDv3sNYsJraaP5/LvInR/LrHt7iK3hSWSKMPH3rV1C4k/tWGwk+RII rdJfM+f/AJZx/wD2yj98Bc1Xwn4j0tI7q88OX8KZ3+ZJYybP++/LrH8yWRP3 mzf/AM845asx+LL/AE+5+32d/c2cknmOJI7mRP3n/bOtH/hZvii9dItU1RL9 I/8AoKW0F1/6Mj8yj98Z2uYsf+kTPFvSLYelSfY/+n5K2bPxd4WlQw3Pwy0K V06vavdW4P4R3HNTf8JL4O/6JXpn/g0v/wD5Io9tW/59mlmcP4L1C11yySLS 7p0TUoo4fs/m/wDLT/lh5n/bSi4/dp+8idHk/wBb/BsrgNH1C68P22m6zp91 /rI5IfLj/g8uT/lpXf8AjjVItT16HXrO6Qw+ILGPUoo/7kkn+vj/AO/6T/8A jlfLZPxFRxX7uoexisD7J/uwS4i+zfZZPk9ZY63vFlxayTWfiPykRNVsftP7 v93skj/dz/8AkRJP+/lcBeaoLe5SKSV/9V/z1rpNL1S11DwfNpcn/HzY332m KSP/AJ95I/Lkj/7+eR/5Er6P63QrfwzmeFfsS5cSRXk/myxI+P8AplVaWS1j v/K+Rx/0zqGT/SVSW31D5/Kj/dyf9c6zZJJbe88q8i/25K09qjh9kzY0uTy7 n93/AKuSKSodPvPkeWOT/YqGC4ljmSHzd7yfP/f2VWs5/M2Red/rJY0opVbI STbNTUNY0bT7O5v9QtfJSOL/AJZy/wDouuc0P4maXLeJdXlrND5HmP5fl7// AGpWb40i1nxpf3OjeF7VJoYJf3v+kx/PJ/20/wBZ/wBs6xLjwX430KzmutY8 JanbJ9yKSSxk2f8AfdfDZzntZYz2eHqH0GEwNB0f3h21v488L3DvLcaon+t/ 5aeYn/tOrlnrlhqH/HnqkL/88v3sdeReZ9/y5d7/APLXyx89Mkl+T7LJ+7/6 af8ATSuGlxZmVF/vDV5NQPbDcfZ08qSWbZHF/wAtIqBcfP8Au5U314/HqGs6 en+j39yn/XOWStKz8eeLY/L8rWf9X/z8RRvXr0uMqL/iUziq5PWTPVNPP9oT fZY/+W8saVfuLy1uLy8v/N+T955Xl/8AfuP/ANDrzTw98RNe+3pdSQw/u/33 7uLZ/q/3n/slVtL+IHii3025uri6S58y5jT95F9/93XUuMctM/7Kro9I+2fI hk59PMpkkgP7ryu++uGt/iZvT/icaWj/APTSOXZWxb+PNBuOZLp4Xk6/aIq9 ClxHk9b/AJeGVXAV6Rtzbdw3+Y7dz5tMwn/PKT/v7TLbxLo00jmyv4dgP/Pz U/8Ab1l/z/w/+BNel9coPqc/sa5gf8Iv4N+FfjDUvBHiCW21V9Olu7b+1LeW TyL238yTy5I/+un7uSrMmsRax8PZtLj137M+h3P2nzNPtv8Al0n8uOSP/v8A pB/38krjNU0e18L6lYWF5C8zwGTzY/8AtpWx4LGjSeKrb7RFImm6lFJZ6l+9 +5byR+X5n/bP93J/2zr8UpVcZRwnsz6P2dF1/aHv/wCzNq3gn4ZeAPDfhnx/ 4gsBa/HPxXN4eutX8RQxW66DocdjNp6atIJeLmy/tPVUvRloohc+EQBJ5ieZ a8l8LfAfw++Heq6Bpn7ZvxD8R/Dm98W3c9rp1jYeAzfz21nFfT6ZcXWowvdW 72aLd20yiJFnuS2n3BeCNfs7XHcfHv4t/tPfAxPD/wANv2fv2i/HPhjwV4W8 JWNjY6X4U8VX+nQTXigHVLxIEdJ3W41GW9uEa5USCOaNCsSosMdX9p/xx+zh 
+2LfeBvibrXxytfCGraD4Hij8Z6ffeH9Qu77V9Tmv73UdTu7NbeE232qe/vL +VbWSa2tUimsitxGZZ7ew82lWzjDz9tQm1zbtatdrqzs7WXXY7fZYesvQm+G X7O3wLvvBmk+J/2lf2otM8Cwaj4y1zwx51n4Zn13ybzSobCRrhJLK4EU9s0l 0I/Midjva3aNZYpJZbfW/Yl+FXjjwr+2FoZ+LFrL4el+H/xn8L+HprAlL1bj XptdjiGniWKQhMQWmp3HnjfF/oHl7g08W7g/2oPFXwu8e/s/+ALP4T+GJdBX VvH/AIx16HwYt1Nd/wBk212ui2kMSXciD7SQ9lON33z951XeufdL7x/4f8O+ Ov2dfidquuX+gi98faB47+Isrwl31CCzGk6RcXuoyw72nuF1HSPEl2ARI5Gq vI5jmubiNfUnxJnU8LKFWTfOmrWWlr7WV9Vdat9Gra34ZYHDUZJ22t/XyPKL X4OWep/Cfwd8YPin8VtYsvBJ8LRuRb6UNQutLludZ1yzttMs4JLmJZPMfS7+ 8Zmkt4kVp8lpTElx558avDHhfwR4qsdP+E3jubxbomsafZ3Wk6o2nx2lz5bp iaC5tUnmNrNFOs0RRnO9Y1mQtFNE7fQOt33w01r9mfwP+zt4p8e2ens2hafc 3fim/wBPu/sNjqdjrHiicWF3HbRSSqDBrsbiWKKbEy26MrLOZYfl341fEDwb 4eu9F8H/AAd1CS6i0bSfK1TxDaieO38QX0lxPK95FBN88EISWK3jVgjyR2qT yRQyTSQx+/g8/r/VKvtqrteVlZWtuneyd76bu97uP2lyRwkKs17JdvX0/r7+ h9ZfFf4MeG/iT8JL3QdB0a8n+Il/dfBbQfCdvJoMGy4e78FmNLYahLco0CTO jmT5NivZ2obeJWe28L1H4QeDIvCeq+KfgD8Y9T8QXHhWwiuPGKy6JJpVqtu9 xDaDUNPkE7vd2wuZ4Yz50drcYuYHFuR9o+z+p6J+358IvCMTeO9L8T2Goat4 T1n4P65Y+HTbX9o+uPofh+Sz1S0imjtpI4miupVXdNtjaNHaMy4RXyPjB+1/ P4j+Ft74Y8Tftz+PPifp/iPyTp/hWe81FYLGNJkn36ul9CUaZdqKtvaT3MYl 3yfagsEYuvFwWJk5qML293df3YqS+F6Lpqtb6vZdNelJq/Jrr+bt1/zF+In7 MXwu8PeNviH8LtR/amm1i5+Et3dTeN7zV/hn51k2nw6lDp5FizztNc35murW P7PLFb23mNN/pvkxrcS8vP8AsrfBWW7Pji81idPh6Ph//wAJh/ben+F2tda+ yf25/YPl/wBnf2j9k8/+0f4ftO37N+93+Z/o9dJ8QPj1+zr43+Lv7TXiTS/G MUlp8RRfr4WuYra7ibUBJ4r03UUyJEYwk29tJJ86oBs24DELU/hfxD8L9Th+ GP8Awhf7S2r+BPEng/4X3mmaV4u02eeOLR9Vk8R6ndul1JaQG5aKTTbyZAbV G/e3EKuQgnxpGhXqUVKUW3dX0W3LdvSPfS9nbs2U8RDmtP8ArX/I8H+MvwZ+ Bfhe60uf4V/E/W9S0XV9J+3adNd21muoWeLie3eC9tYnkS2m327SKgmk3QTW 8hKmQonHSfDu1kc/2f4ttpv4/wDTLZ4H/wDale3ftUaR4X1rxBo994F1nw/q OoW/h9YfF+t+GPDc+iaVqmoi7uWSa0tBaW0UCLZNZQOBFBumt5n2uXM0vl2s +CtS0+/gttN1S01uN7S3ma40YtKiPJCkjwkNsbfEzNE5AKF42KNIm122hgsN OmnK9/68l+S9FsXHFKctzn7P4f8Aiizhv5be1S5Atv3UdnfRvv8AM8v+D/rn WVcebb6VZ/uv9ZFv6/f8ySu78RWS+B/Ds13Dr1pfvMtuiXltbzKiTyWxkmhI 
mjRt8QZonIBQvGCjSIVdnap4T8D2Edto9/rsd9ciwtHa40tJVhQSRJI8JEux t8bM0TkAoXjYozoVduGrRftvZ00WsUr/ALw8/EcUaP5f3/8ApnT/AOP/AEg+ X5fz11sngfwvcJ5Wn+LUhf8A553ltJUMnw71mP8Afaf9muU/5ZSW9zvrCqnR f7wv2qZy8ZuJOsT7ew82nbJf+eL/APf2tj/hD9at5WtL2zmtwg+VpIvvUv8A wiVz/wA/D/8AfusvbyDnos73wv4D0H4sf8TqPxQltfxxb5bOSL53j/5Z+R/z 0k/6Z1Q/srS9P32Hzw7JdksflSeYkn/POuJ8caf8WvhP4kh0HxZpWpeHrqOK P7Tp+oRbHgk/+Ik/d12Gj/HyLXHTTPiZau80cWyLWLeL9/8A9tP+ekdejleZ 0XjLYk87FYF+x/dHSeLPstvYaJ4o0/7TB9u03ybmT7Ts2XEH7uT/AL+R+XJ/ 28VzGsaPo3iD97JF9jmk/wBbJbxb9/8A10j/APjfl16j4ft7/wCIHhi88I+C N9/5FjHqtjeaf86J5ckkckcn/XSN4/8Ann/q46898SeIL/4d3XlfFC6trN/K 3x6XZW3n6hP/AMA/1dvH/wBd/wDv3JXt4nHZNSo+/UOal9cT/dnqHwo8NeJf DXw607XpoYVuNBV7nRZLizhu7dn84CUGCciKZCjW7tHIrI4jKspBINL4kaH4 z+I/iPWb4ww6jDfeQt49q/kgRxootoVijmWOCGONEijtY1WOKONI0VUVVHj2 qftSePLjTX8O6X9m03R5LH7NLp8ckj+fH/rP3jyf6z/rn+7j/wCmdU4vjBf3 Ft9lt7C2Tn91J5VeRldTKq06j6nTisLmVv3Z6/8AD/7fofiSaLWIrlEklsHi 8zz3gupI/Mjk8x5P3ccnlv8A6yR44/Lr5njvJbyCGWWJ/M+zfuv7iR11viDW LrxToKHVLW2up7G+3/vLaPY8flyfu9n/AC0jqbR3+GmuOkWseB7nR3+zf8fH he++T/tpa3HmR/8AfuSOvOx9L2WZfu/4R24BOjR/eHJSapfyWz6f5Xk+X/y0 ji2VWt5IrfpdO7yS8+X/AB13OofCO01x/N8D+N9N1j/nlp95L/Z96/8A2wuP 3cn/AGznkrldc8L674XuPsPiDw5eabcyf6q3vLaSB3/7+Vz61ax3FO31CL55 vK+eppNYlk/e2906P/y1kqEWcu/95N/rPn8uSt6z+CfxQ1DQf+Et8P8Aw+1L U9Nki3/aNHtvtuyP/polv5kkf/bSOjmr0nuW0ihpfjTXtPd5bPWbmHn/AJZy 10ng/wCMFhp9y8viTSoXuZPklvIh88//ADzrg7eSK4eaL7fv8v5Jc/fT/pnR JHayPNFJ/wBsv3tXhszxeFre5UOOpgcHivjPVP8Ahaui3j/6Zo1tN5fmJH5k VaUfxY8OXH7qS1mfy/8AlmZfPT/yJXjMcn7nzRNseP8A5aGu2+Hfwj8R/EDR LzWvD91ZokFz5JjvPMTfXtYXOMxxVb2dOl7Q87H4HLcLR9pUq+zOw/4ST4aa p+9vLaGF/wDr2kT/ANF0+PQ/BusJ/wAS/WX3yRb4vLuY3f8A74k8v/0ZXB6h 8O/iD4bd/wDhINBvLaH/AJayeVvT/vuP93TLezluLnyo/MfzK3p4itV/i0iK WHoVqPtMNVO41qTxP4XtI4vDh1G83yHEcMMiLGmOO71mf8Jh8U/+gDqv5yf/ ABusd9e1TSgLLRtRcBD87eZsyab/AMJf4w/6Cr/+BVRWpZa6jNI0sZY+r/8A gtv8KLrw5+0fBpdnYHZP4b8m2k/g8yxu54//AEQ8FfIWl+F7rxBYeVZ6W81z HF/yzr9Qf+DgfRr/AEPVfD3i3S4khmj1uSHzPK+d47u0jk/9GWUlfnv8N/jJ 
4o8Lak8tnql5pVzH5f8Apmj3Mlq7/wDfuvKytYOtV/elUqtalROe8P2/xV+H +/VPBes6lo88ltJDLcWdz5H7uSPy5I65v+1NLs0m8G+PP31tJLviuI5f39rJ /wA9I69+t/2gPiDcP5v/AAsvxI/ny/8AL5q87/8Aoyn658bNf1TUppdc8Zed cyRbP+JpYwXTp/zz/wBZHXr1smoU17TDmazT2v8AEpnzf4g8H3/h8ebHdJc2 0nz22oW8vyTR/wDPSsq3k1TT7zIidEj+f/Vffr6B/wCFqfEvR/3Rh8Palpvm 7/Lj8JaS+z/v5Z1x/jj4sWFv4o/4Rzxpo2mTQyW0bmPT9Njtdn+s/gjj8vzK 8SrllKlWvU/dndSx9v4f7w4zwPql1cXOpRSRGZLixkmij/65x+ZXQ+F7eK8S a/t4tnl2Mn7yodP0Kw8P+J4des7/AGaPPFIkV5HFvT95/wAs5Kv6p4D8UaPD 9k0PZeQyS/6zT/8A2on+srTC0q1P+IZVatGrWOauJLqO58qPfvrqPDfxQ8Y6 fYDQbi/nm03/AKB95FHdWr/9sJP3dYlvp91qE32WSUQ3Pm/6qT5Kuf8ACH3W l+TDJau80nzyyVm8LXqK8zVVaL6mP401SKPxNef2fYWFsnmxp9m0+KRIE/d/ 8s0k8ytX+0NU0ia2v9Llura5tLa38q4s/MgdP3cf8cf7yOuV8QSS6h4qv4vn /eXMif8AkSu5vbf7RfzS2dt53l3Oz91/37rnyyMsW7MvFP2JpW/x48W65mL4 iWGj+Lbbyv8AmaNIjup/+2d1+7u4/wDv5VyPR/2b/FFs8useF/E/hK5/5ZXG h6lHqlr/AN+Lzy5//JujS/AdreWz6prF08Dx/wCt8uL5Kms/A9hrGsJa6XK7 w+b/AMtD9+veq8OOked/alC55drnhex0/wAT3mgwaol5bQXMiRXkdtJAk/l/ 8tNn/LOvpb4N+F7Xwv8ABm2l8qH7Z5v2z95L88/mf5jrzfT/AIZ3Oqa95t5F N/pd1I/7uL7kfmV6p4wvLXQ/DE0sd0+yO2/dHyvuV7eR5O8kw9XG1D5zibMK GYypYKmYNn8cNGuLy50bWIrnSofN/wBJxbb0f/gdTf8ACJfD/wAWQvdSaXbf vItn2nS/kf8A7aeXXjlndy3l5Jdeb5byf8s66TwtZ/2hNDFb77a5e5jSK4t5 dlPK+IamLp+yxFP2hlieHqWEj7XD1fZmnqH7Or3Nyz6D4iUx55W6tt5H/Au9 Qf8ADNniL/oP2X/gJXZ/EDXLXwZY2d1bNIZbpF+STsMHP6bK5b/hblz/AHK9 itg8gpVHF7nLRzLP3TVqh+sP/BbPwl4S8cfBa51TzrO/mgtrS8ij+/s8i78t 5N/+rj/d3slfjJpely/ab21+RHtLnZ+8/wCmcnl16X8UPFnxz+KFt5vjj4g6 34hzF+6jvNbknRP+uaeZ5f8A37rN+G/hu70tEtbzQXS8nikT7PJfR2r/AOs/ vyf+i6+LoZXi8JR9nUPraVW/7w5WPVL6zf8A1uyaP/ln5tEccQuU/wBFSZ/K /e+ZXp0fhi+uIf7Ut/DuseT9zzI9Xgf/AMc8urMngPxTInkyeA/EO+T/AJ52 ME7v/wCQ69RJ2B1UeaPcaX9meK8sED/8sv8AlpXMeNNLtdY1Kw1TUPk8yxjT zMV7TcfDu6js0lk8JeIUSP8A1kknhaP/ANp1zGs+B4o9SubX+y5nSD5PLuLH yHT/AJafc/5Z0nhHmD9mzOliqGF/eHPeG/B+g2mlQxafqlml5HL532e8+TfH /wA9I3/1f/kStKSPVNHuUsNQ0uZEj/1UdxF9+n6X4fMaTfaYnh/55R/3KuW9 nr2jv5Vpfv5P/LW3ki3p/wB8Sfu6HlVel/DMni6NasVtP8UWEkzxeILBLlPN 
/wCPe4i3+X/7Urb0fw34N8YXiS291Nps32mN/wB2d6fu/wDyJWJcXel3E3la p4XdHk/1txpcv/tCT93/AORI6LyPRtHsDqlvrKXP8EVnHFIk/wD20/55x/8A XOShzdKh7Ooh/V9f3bOV/wCJp4b8bP4S8WeF7O68zUtn+qj/AOWkn7uTf/y0 jkre1j7B4cvzr2j2Fy4n/wBV9oikREjk/wBusTw/qlrrnxFfWfFGvR200cv+ jSSWMk6Pcf8ALOORI/8Aln/2zk/1deu6HZy3Fn5tnpdnqT/fubjwnqXz/wDb SD95/wCi4683LKln7Ox3Y9rlOM/4WBaeT5WoaDN5cg2fu5fketXwn4k8ORuk tnvtoY/Mm8s/9M4/MrVj8L+DdYd7WzutNSb/AJax6hF9inT/AIHH5kf/AH8q trHwjl0TR9S8RmWazhj02Tyri4/fwfvP3f8Ar7f93/HXvVcfWpUeeoeJ9Uwl Vk3wvk/ty8S60+SZ4YPk8uT/AJ6UfHjVPs9gmg291DI8nzy+X/B/0zo+A+hy W3hKbVLiJEfUr7/Rv7/l/wCfMpnxE8B/FXR73+3pNCTUrCT99F9nl37I/Mrv zJ1q2Q06a/5eHh0qVF59U/6dmDZahpXhvwE+l3Gl237u2keUXEXzvcSf6v8A 65x/9+60vhHo91sh17UYx5Mfzxf7dY97rHhLWLOG21T7fDfiWNJbe4tvk8v/ AH/M8z/yHXqngPUIvD/hJ9Ut4odk8X7q4kl+SCPy/wC5/wBc3rx8jwFKeYOr T/5dnrZxiqsMH7KoeVfGvxFc6n4lh0Ow3udOtgioZfujv+myuN/4n/8Az6v/ AN/au6jBf6l441K91BHle5kkn3xn+/Jmp/7IH/PpP+dfP43GYzFYqdTuz6DL cJGlgoRfY9Dk+MniOzuftX9sw+dH/wAs/tMaf9s/3deY/Ez4qeKNY8ealdW+ sXKQ30vnSnzfk/1fmVf1zwhf29y5uIkRI/8Alp5tcp4w0P7Pcp+63yfZo0lM kUlTjsLmlI0wNPBe2JNH+KHiPS4fKs5Umhk6RSRf/HK3rf42X9vbebcaDYXn l/8ATts/9F1j/DOz8L6f45sNZ8eeEv7e0e1lkub7Q/t0ll9tjj/5d/Pj/eR/ 9dI657VJLCO/mit7V4YXl/dW8kv3P3n+rrlpZnmWE/5eHfVwGDrf8uztrj48 ayJv+JXa21t/1721bHg/4v8AiPxpr1t4S0PwvDeanqVzHZ6RZ2/yefcSSeXH H/zz8yvH5P8ARx+8+SrPhu4kt9V83ykd/wB5/qxSXE2Ze2/iGVXLMt9j/DPV 9U+JHi3w9qUnhzXNBhhv7GWS21K3uPvpcRyeXJH/AJ/551DrHxsms9N82PRo Refcik/gribOP/XSx7/3lFvp8Wt6r9lkl2QwRSPL5f8ABXt1c9zKlgv4hH9m Zal/DNjS/jZ4yuH8ryof3f8ArZPs1b2n/ETxbqj/AGWz2fvOv7r/AJaVzej6 foOj3nm+U9yknz/Z5JdlP1C4v7C/hv8AQ7V0SO53+X5m/wD/AHlZYHPI06P+ 04gxWGwlT+HTNjT9DsLPxhrHxB1j99Z/af8ARrP/AJ73H/xuun0PUPC/jCaO /t4v7Nv4/wDP364PQ9cv9U1hP7Y2W0Mf/HtZn5EgjqbXPEGj2d/5umXSIn/P TH369fA4/A0sJ7T/AJdnNVwFes9D12O58eW8KWtzfw6xbRxfuo9Yto59n/XN /wDWR/8AbOSub8cW/ijVLNNL0/wl9mtpJfOuY7e5kn8+SP8A1f8ArP3n/oyv Lrj4uaz5P2XQ7+Z3z/q45fuVDH408b3lz5kmqXm+P/lnHcyVwYrOMorP93SJ pZXmNFfvKh6Lodt488Nu/wDZ8V/CknzmKOT5P++K7bwv8bPEfhN0OueG5vJ8 
3fLJZ/I7/wDAJP3deRaX4o+KGof6zxHdRp/y1kklrV0v4mapocPlXmspfpIP +Xz569TDZrRpUdf3dMzq5WqtX2ns/wB4ezSeJPhN8SLZ7XVNBs3/AOfby/ku k/7+f8s6oePI4vDfglLC32I995dtbf8AXP8Ad/8AtOvLrj4seA/s3leING2X P/TnLUOs+JItY0Szl+3381hbyyfZrO8l/wBR/wA9K0qZ7h1RqU6R5TyOu69P 2pq3cq6FaJcWtn583mGIPn+Aciqv/CXap/0Bv1puly3Wrw+dJcFcfwPVn+zZ v+fpPzryXUu7o9X2dWOjZvWdxLcWyRXlgkyR/wDLO4qbWLO11SzS1vbBPsw8 xPs+Kv28egxo8Mmqf6usTxp51vZ22qafdb7aO5k82OT/ALZ19J9ZjT3PBouV WuYNv8O5be7ubnT7pJoY4pP3f8f7yuMk+G+qax4jeH+y5t8fzyySfIif9dK9 I8H+JP8AhKPFsOl6f5Nhbf8AL9rFx9yCP/npT9Q0u/s7N5ZPFF080kv/AB8R xRpvryauFyzMP4Z76x2Nwn8Q89k+CfiO53/8e0L/APPOS+j+SqEnw/v9D1jy vtVtczRxcW9vLvf/AMh16Lo+p6NceG5rXVLDyLmD5JbeSXY7/wDTTfXDfDf4 k6z4H+IUfjfT7CzhntJZEijtzJ/q5P8Ab/8AIleLmmEy3COn7OmduExeMrKp zmbc+HvFun2z39vpc32b/n5p+nySxmaWT5Hnlj82uw1C916TxnDf2d1bfZtS ttkX2j/UP/0zkqGz/wCFVa5cvYeIPiXNo9/aS+T5cnh+S6sv+2c8cnmf+Q68 zOKWEpYP93UNVia9X+Ic9HceXJ+76/8ATOppbiWRD+9cfvf9XXWx/B/7Ynm+ CPGXhvxCg/5Z6fqUfn/98XHlyVg+IPC/iPwvM8Xijwvc2D/9PFtJBXyjaI9p SRj6peS6fpVzf+bvmS1k8r97XJeE9D/tyaE6xdO7+bsirs9QktZNEubWSJH8 /wCTy/N+/wD8tP8A2SmeB9H8Lx6rD9o0a8mfzfO8u3ufn/74r0MvwzrVvI7K V6VA6SPT/hVJMn2fwbf2Hl/8vGn6lv8A/HLiOT/0ZUx8P+Etnm6P43hSb/nn rGmyR7P+Bx/aK1Y/DfgizmSLUL+/s/3XneXcW0iVN/wq+w1iZJf7UmSH/lln 59//ALUr7SlRqpf7Oea8e7/vDmNQ8D/EvXIf+Kfis9YQf8s9D1KCd/8Avj/W f+Q65XxL4L8Y6HePpfijRtS0q5+/5eqWMkD/APfEkcdel6h8F7qSZJdH1SG5 8s/6uS5/9kkqn4k8F+I9Y+x2Gsa9cu+mxfuo7iWd/I8yT/pp/q687FZRmNWr 7SodVLNMG2eaWfh/zLxLqKX5/uf6qvRbjwvFHoNna3EX/LLfLXldx4wlt7mb /Tvsvl/6ry4vv/8AA66T4f8AjTy9+l3l+k3n/wDLQCvOyzNMHhJeyq0/4heP pVq38M7K51C30ONLWC0ZyBysEXAqH/hLG/6BVz/35q1a+KvD2iRCyj12W2YH mSCPeHqT/hYeif8AQ7Xv/gLX1bq4e/8AFPF9nb/l2ZNl4oit7z/SLV/s0h/e 5rV8Sa5F/YKRWfyW08vXFQ+F7OWS5+weIPBuzzPk+0W8X7h/+ulbHiTwPo2n +Hnl8QXT21tBL+68v77/AOs/d1FOlWdD93VOlOjRxulM5LUNQi1zU7m68L2C abYSfP8AZ47n5Ere8P6ffapNDrF5fpbabPF/p1xcfc/d/fj/AOulcxcah/yz 0v8A49o/+mVTaX4k1640H/hDbfXn+x/bvtP9n/wfaPL8vzKzpt0a1P2Z6GKV XF1faVD2PTtU+BlxDDpf2XxJrA8r975nkWUH/txJVaOP4LSXn2/Q/gPoMM0c 
v73+1Jbu9f8A748yOP8A8crlfD/g/wAb2+m3N3H4cmfy7bfLJJLsdI4/9ZJs /wBZJH/y0kpl5o91Jcpf2+qTQ3kf/LSP/Vv/ANdK972NDFL95+8PBqqthq37 up7MueKNc0bVEufC3ijS9K01/N/4lEel6bHawf8A2uvOtQ8B+HJHf+0fs2+S Xf8AabO52PXsfhfxx8RvCdtD/wAIxdf2PeSRf6Tqmn20f2qfzP8AV/v5I/uf 9M4/+eklb1x8X/2lpE82P4l69c+X/rY5PL+f/wAh15tbK6tV/wAP92a0sdRp f8vD571D4N6XqH+n6X4tkhT/AJ6SS76m0uz+Pvg9PK8GfEu5eH/nnHqUjwf9 8SeZHXutn8fPjcXeK8+I2tonm/8ALSKP5JP+/dU/HGseLfGGj+br91bXlzBL G9teXNjGk/8A1z8+OP8A1f8A0zkriq8M4PFL2nsxVMzbfszxDULz4jeMLnzf HFrpsNzaeZ/pFvYwQeZ/00k8uP8AeVc0fw3/AGfeJLHM7yfZuJJP+elauoRx R6kkXiC1fZX0t+zH+x34I8eeCbn4ofGT/hJP7N1WLyfCWn+G5Y4Lqf8AeeXJ f/vP9ZHHJ+7jj/5aSeZ/zzqKWV4PLqR2/Wr0D5+8P3Gu6ej/APE01K2/55fY 7n5KhvLfy7x7/VPs0z5/5eLHyJ/++46+zvFv/BGT4oatYPrPwR+JVn4qgjij eXR9UtpNJ1CGOT7nmJJ+4k/77r5t+LH7MHxt+DF9c6Z428J6xo7xy7JI9csX SP8A7+f6v/yJXXSWGxX8Nnne0brHJaHqljJYf2NcRXmz78f/ABMt/wD45JXB +PfHni3wv4nv7WzieeGCLZL/ABon7uOP7kldPcWdhHeINQ0HyZo/n8uP5N9c f40u5fC+pf2p4k0GHWP7SlkmvtPvJZPI/ef89PL8uSP/AL+V4+aUsdhaHtKZ 6WAVGrW/eHmOPMfzZP8Alp88VPt7jy3/ANHi+fzf3VdneXHwR1xE/deJfDFz 5X/PWPVLXzP/ACXnj/7+SVQvPh/YWyJqmh/ELR9Vs45dkv2eWSC6/wC/Ekfm f9+/Mr8897m1Pfdjp7bwXqvlR+fc2yz+WPM/4mUdS/8ACFap/wA/1t/4M465 Y+Dbpfk/4SG9bH8cg60n/CHXP/QwXP5V9QpO38I4/YX/AOXh7/Z6X4SuH8r7 Uk38fmXEsexI6NQ+Gdh9j/tXQ7+ws4buWRPtGny/Okn/AKMj/wCWf/XT95Rp f7Pfxf8AOS61TwlZ6On3/wDiqNStLJPL/wCmiSSeZJ/37rb/AOFR+CNPhT/h Mf2gvCum+XF+9s/Ddjd3r/8AtOOT/v5X398H/wA+z5/2Xsv+Xhw2h+H/ABl/ wk9hpfiz7BqVtPfRw3MVx5cE/l+Z+8+eOuz1iP4feA5Hv7PS3sIZJf3WoaXY 7Nn/AEzknk/eeZH/AKuT95XPJb6D/wALds9L8N6zealps99bpbXGoW3kTz/8 s/3ieZJ5f7z/AKaV3nh+30+4T+xtYld3MX73y/8AWeZ5f7v/ALaf+0/+udS6 Sq/wwq1nRMfwp48uruGaXw/a2FzDHFH5clnbbL3/AK6b6x/iBZXWl6b/AG9p el+dZySbLn7GdiQf/a5P/tf/ADzrp5LzRvC8M1rpdhYWfn/PLcRxeZI//ouu Y8WeN7WTwreaNrm+GH7VaJFJJL/y0kk/5Z/8s/L+Sq9q8LR/59nLf29Y+0/2 L/2G7/8AaM+BHhv4jXHxQsLCzu7G4his/wDhG/PdPIkkj/1/2iP+5/zzr2iz /wCCVF1cw/8AEv8AjTYJ/wBdPC3yf+lFfnR4X/aA+PHwf0rTfC/hP4g6xZ6V BbSJY/2X4pvrVH8ySST/AFEcnlx/frufDf7aH7X2uaxZ6N4X+KvjO5vL65+z 
BRNRBiIyQWHwI3GBobEUkcHhYtHx/9oADAMBAAIRAxEAPwDCyGyCCLd7/wD5 h8yn6sEH4DBAY6aaFgAb33B88OEtgrUFEA32sbHCEuRdLVm9+h8u2FWhvcpG +/w/nhCHLd9SrgW8vPyxW1eU5mjj/EoAuYsFQ5iL3Fx4lH5WGIsdFle6Nriq ZUgKSQUlBFwbjcYr7KCo2W/aVqWXYjBRHmI+rGr7BSkLH5bnDsZFo7BQSm6S em/7/wCWK7qrT+cOKfuDClGHC8CiO34vzJ2+AwSC5yIKst5BZzNwmzvnt+uw 6HQcsQhS4z8pClCfMUQr3dATvfSNrA7qFxa5HeWIoi5Bp7diByEm3kTvb9cP LGBYwiSXblgK79hv+uEXQFKIPmLW64CIaqaFrH4i/Q4buJ8YIFu22EIarQlJ 1EAkG1jhq60Ci5uPCe2EIZuRyq2m23njj3VfmMIQolohFtt+ow+p0B+bVY8K G0XXZLiWmmx1UsmwA/O2J4F2W3S+HGUsrPKd4gSXRy20EFtX1S3FblttKfGv T3VsL7C43wnC4h0aDnNmlRMtU9rKnvHLeYcYCnnWSbKWo+ffuRbrhshuI8C/ GDh1EyjmKNU6GjTRaoPqEhRUGXEgFSLncgg6hfzI7Yr5tlJuoggjbzwgc1hn bhQxEK3FFCUjUpRNrDqf3YAuFURVSzNVs0PhV3XS02f8R1K/TSMMxiyrfUkJ Sbm5tfFRmatr2sp0+MGXBFBSoLWUi2gJVYjvviSWXgbpZCmp5xmVDLhaosB5 h9aCXVqOoNpHkQOtu56Ylck0Y0nh6h4gGVJBeUrY2J+yD2O1sHlBwjyRTT6I SqVupPeyllvLjz5EdElxxDKfCnmLcUpxw26rUbXV5AAbDB01HTGhoYBulKUp B+AtgTxgkecu7qQSRb9PTCa2wfrLbnA2IRcSBupO5Hb1w1WlKepJ6fkcLAho 42NZCdrb7jDR9ANwNzbqCcMIaL0lZBWQRjnSj+0OHwI7QCFDw2F9wdt8SNLk rh5iiTGjZbMht1J8ilYOJjrsuf2gqYuPnGlVVsn3eXHWzt0ulWrb4hQ+WKks SoJuLDzAscRQ8uzRWQGW+LnsezclrKV1qjoSIpV1KkXLCvgRds/ljHXFF+pT c+wMqUyNMXJBF4zIUXlvLOlLYSNyrY7euIvom+cMDPpauQOdSpNRnRwCpl+O 44oEWuFJKT0PUHEpl/O1Ty5ShBpTrAjlZc5b7IVcnvfY+WI7hmiZ/rbzA9Hd ZECClSkHS6nXdJt1AvbAVDkcqsuzXbu825VdVySTc/HE4zxyNgIIubgzS3IK VqR72QhSSjYHpt2G2D/hRlvM2fs+xch5BShzMlSTJdjoflBlhxLDDj6k6lnS lRQ0oC9gSRcgG4vK2Mo5l0V3Bp8HlSy/VlcG8sZhntRmoM6rP0+HoUovl1lL C1hTenw/9qbAG5JuLbC50w27OJREZekFBUFcppR0qTfUkgC4I0quDYjSq/2T YFsYxJp7iSpeVqpXMqVCr05DbjUCRDjLbDllrXLWtDISLWN1Nm5JFrg+dn2Y +F+Z8syq6zVkQkqy4xDkSuVIK9bcpwNtKQbb+M2N7Wt3wBskCjkF9DFnY7yV BCHDqbUPAs+BW46G4sehuLXwj9F1B9kvR4Ul5tKC9rajrWkNg2K7gfZB2Kug PfEkm+hPCGUmnzGI63n4clDSCApxbKkpGokAFRFrkpVbffSfI4jXehHQeeGa aENFJSpV9ienljzQPwD54YR62gg3B67bbjDkIulSdVgoWHbExGo69Tk8Sv6P uLX4yOZKpcRMtSgm5C2RodB730gm3oMZzDPi3T4b264ZBJl6+ztkbP8AD4kM ZnbhLp1HfZUxJdlAo94bO45aOqiFWINtPXffF7UThJw2yvxwqmfqXl6Ma/WH 
S7JnukuOtqIFw3fZsG1zoAvvvhmkFhH3eTHXt0cGIdF4jji5lHkrgVp4M1tl hYV7pNI8LhSPsh0D9tJ/EMZMPW99j03vgTIPs7aWtI069j5YWQytxJCDujph iJ1EYcVXWUvJ3CtW/fFrcHq1TMu8YX6pVJ5gNnL9ajNSLG6X3qXLZYAIBsou uNgHoCQTYC4t0rMGQfZcD3FrIdZ4d5enmt/RebqzIzJJqThYWlmgVWWxTmmK i24AVct33Z9xSk6nGlvPHxaQSb8KuKcGocZImW8vV9ciW7VsrJzHLivusN1x mD9IKnzVOaNS2gXYiSV6VvBKbghZGC7G+Ggb45L+yG5kLLWfKyhVFiimzg65 yGmAEP8ALkJfQm1vtKDZQgn7JcHQXxVOY67TY0XKOXK1TZuaKq1mKLNrTYSH X6tAjOPqQHCbBSVtykNpCjZJiL1W8BMr6ctbQdc8LEidp9KoFEgtMcVc9xaw JjlUh1GqOLXy0QpbSA0wdY1KU282XtISUsqdISCU7P4/tG5AgSqhTcpNU2O3 CkxkxZ9YTLZakU6MyGkstNRlpLjpKSvkL0NuKdUfCb4G4+Wh872CUniU1nX2 Sapl2fWGxDj5TmMpipaIEec5WnHmmwgXSFCPyyCCdIJAVub5fltFElbaiRpJ STfbqRbEJx91P1JxfOBiUJve/XfHmhH4sCJnSE/VgqTa+974ex2ua8EIBUpZ sAL3JOwAHr+eCCNa+zXlDOtM4d1KiZophh0irKD0Vp9Q511DS4OX91Kk2+13 B2wd5c4McMOGqlzI1LYekRr3nVFfPWm25KSRpT/yi48ziD46LUUsckXWeJlV qtClVPJNDRMpsVShInyZqYzKQgeOy1dRa+/T12xlbM/tp1djLUlun1ptyorn LT7vGbCmWoyQLKDqtlqUq4KQLW3vfbEWx28AhSc+cL6NHqVZzlXswVZ/OqHG atSougw24zpTYq1C4dQoFxGj7KkJHfFaZy4Q5uydxNmZdXT3Z6G20yocuMnU ibEWtKWZCANyletFu9yR2OBSko8shsc/hA96M5HcLb7Lja0qKSFpIII6jp2x wl10DwqOk+WGTT6INOPZI0hKnakpxd/q0ee2+2LCyZkuqZyzizlyjvQGJL0e RJL1QliNHaaYYcfdW44bhKUttLVf0sNzjRoS8ttgpdnea8iVPKMCFV5VVolX ptRU6zHqFFqSZsUutBBda1gCziQ60opt0dQbm+2o/Zx4ZIyVwo+n6syEVmvJ S+7q6sMWu218bEqPqR5YtQafICzK4LbciLDfMSki33r2viOzPHnweHUqVltL IqSWVra1I13AOpSR62uQDcX7YPnCyVlzwZkqU+ZVat79Upj8l5fiLjyipQv5 A9B6DDRqM9NqSIkRp5595WlpttF1rJ7Ad8ZTzJl3MYrgL5FIfyHkSTGqUxr6 RqzYWYjatQYaRfxKPclRSnbbfa+KzeQS5a5JPY98Fv4SiQr5y/UZLa+tUNVr E9cc8n++PlioGHFPgyqjU2YEKOpx+SoNtNjqonoBi3uEfD3NNF4rxsz1WAiF Epai4v3lsOLeugghCRvf+92NrXxbqqdj46ISmocs0XTazWma66ZMlqQyuy0N IWCNJ8vIgdb4o32y+MkzLeSYlAplVdjP1IJU4ylheuQg31AOW0pSkAAjdRKu 1r4nqatq3RXBKm35SKk4e0L2jfamyOqiy8wx6NkNvlwlPvx0ojjlG4QyhI1O LvbUbgbWJ7HQeSvZO4a5VoUmkMVar1aTyi7IeTFhBopt1sppXgBFtGr+OKcI N8hZT2rcwEz57Brebpyp3DuptUl2VIC3lVJmzKEgbhstJASL6fDov0Pc4kab wpqGZv6KKkZ3nJnzszZLFQZ94hkqfmwWpTgUhNyNSUkKUkHe2pItcYjdXDrA 
9E5Lkz5BzzlSuuu/SOYGYelNmvfgeWtChugJLahpt4T4t7326gPz5U6At1VM pcWlPlTiJKalE0grQUk8o6R9oE777AAHffFJUxg8xbLcr8xaksv8iGy/DLzj q0qQgGwJA7Yt3ghVqXRfaajSKnLojDC6PWIqV1x4NU9x16lymmm31lSQltbi 0IN1J+11HXG1CP4P5mc/iNEcE6K1Vs15emrn5JpQy2qsSH28mz2lU9tchmEm OpqUXHUx5iyw8FO6yUNtNAoHNBXqGJ7hG4gtVbJVXpcGPMzNFMlaNXu8pnkx i9Ha1IuvS8p1Za0AnmJJTZHgo2XuEml0QlJJ8gJKhiRPmzWZTZaW488gc9JU pPOUnT6q726lPi6b4ZyIxicyMOSpxO6FtrDiNfUEEdfLbG1B70sFR+68FI5q 4SVubxSfTl+O0zTZaTJ5rqrNxze629tyQbkAdsWFlXhLIy3lcO0SkSZsp9H1 sspQlbm17AqICUem1+98AVcam5yC5dnuxKEznmZus1JyPEpoho5hL5WrW+8t JIspXkLbAeWAl5ALhOgXJ3GKdk98sh4x2rAxcSEqvqTck9Tjm/8AfR88DJhT w0LCeLEZp1WhxxJSws/2lwenmQDb5YsDN3EDO2UqqzRm3IhbW0HTJXFuJSiT 8ALWHhHnjTrslCnKK01umky1uAOa4vETLE+FmAJ+l6a4lZDP1YUyoeEhO+6S CD+WFuINO4f5uqkjJGY4DFdgkKccuOZyHEdQpSd2jvsq4ubi+2GjqfMeyfRO VCjHdHslpNZy7lHh3GoU0M0alU/ksx24sYhhlHTRpQNjc3G1io2PqaxmI8pB kvNp1OBJulotAC2w09vUH022GJxq2zwugcp747mdZikZmd4YzKPlV+MzOmhM ZuZIUR7ohR0rdAt4lJSSQnubYMcmZeo2UuHdPypRIiWIFNjpYjt9QUgd/Mnc k9yTipqalCeVwmXNPPfHD7PmN7X/AAK/qX9qJ+RRoakZWzKV1ClFIIQwq/1s c+WhRuB+BSfI4oxMkK20A/HGdyFfDwGNHiKg5MalqaKy8guHTvYE7X+WHeXK PVs45+g5eoMMvz6g8mOwgdNRPfyAFyT5A41q5KUUkAthKt5kb8yxw2i8NMg0 mn5fJWmEzpmPhF1SXFFJcUruSpQ2FrAbYmpTOZaVSJVMpVWmMQ5jTjU5EZZQ Hm1KUUki9ibLI+KSO+KGo0s3a18n9/8AZmyhKcW0/t/aJKlRHIOXm2XXCVnx KF9k+g9OmO3mW1gnQST1J3xuUx21pEJN7jiI4zCr0eRNQhxhRDSgo33/AJj9 QMFb1fZXWC2ypSStvZdh29PhijrMtr0NPSNJPHZi7ifliTT+Plbp8CFJebMx T7KUsqV4HPGLWH97AdNoNajp1SKLPZsNR1RVjbsdxiiEkuSDI/Dbr2sb48sr /QGGIn5hbrbiH2HFJcQQttSTYoI6KHwO+NG5Dm0Xi3wuco9ZjsqnRkhMlk7W V2dT3AV6dDt5Y0dM08wfzAWr5oHKfBmcEPaCg1kyHHKQ+tUSWro4lleywrzI 2UFDrpxnbjBTM/y/bRqvD+ImTGaYfbhx/o9lWmRFV4mX3VNC7usLC9Sr/aPl indW4TwixXLdDJtD2eckz8lZFcpOas8rzXUKcsBl1cdSG4jZGyEKX43NwfEo bC1sWoVOqkFaQpLI+xfY/n+7GxDMa0n2UZRW/K6FnpqIdNU+8FuOaFFDLQ1O vFKSSltP312H2Rv+mHmR850iu5bj1KlVREumvEgOIFlMqBspCkndKknYoNiN 8TthGyvaxqpOue5FZe3lM4Yuew5No+ba7EZzA4publyMlXMkOSEqAJSkbhso KwpRsLHuRbHy/KG02Ta5Pa+PNyWJcmrJp9Bw/XUVKgNU8wYpcaQhAkNXaV4b 
ABaR4VH1sD8ca99kXg4cuZRRxJrkRQqNVQpqnpcTuxGJ3XY9FOEfsgeeNnTx T95FK+csYfJpJ5CwrWCbnbbriOlx2pDiRIbvyzqTfoD/AK7YszgrFiRUhZOt 5ixCQ+QshpRJvbbDRSpRWQm49BucEw32QbI+qVCl0yEV1OoNMKUOi3AVKtuN I6nceWJbKdYj1ZuNVWTzWkrKTqGlek7bg9MVtXXJU7/kW9LLFmGM+JvFSJw+ rsNdSokuRFqDakB2K8hJStuwKSFWv4SDse2BFnjXwuqrrgFUnUiQ+jlqMptY QR5kjUk4x8ZNRzWcEI9kCgZmeM+mVTKVUbO5deipCxfsdCiD+dscf1Ow/wCw yX/7dX8cPwLCMwtGwBBO9htieyrX6nlbNUetUZ8syI53HVLiT1Qod0nuP87Y JGTjLKKbSaL4qMykcaMgRo1NRyZSgUrbKtS4rqU6khXmkkbK7g+eArMdcmZW q0Ck0yXGlVSmU9mmVSosoJEvlLWpuPqvcttaynVsSRbokYs6uahFWL9DZ8A8 OXiGqdMuksv/AAXNwvzxlOuUJEGOn3KptDW7BdV41EX8aVffT8OnkOuD1yTz UhS7hAN7eeLtNitipoyPE9HZ4dqZUWdozrxPlTc7Zqor1YjClZihtP8AuFON SIhz08wkGJObOluYAkEA9QdBuMEfs/f7TSqhOzVWKvOcXXVpWr3iO037001d CHXNG3PKiUKJAUQ2D3w0X+IVGvwz97SHswxOLeXV5wy2pqHm+KyEp1r0t1Ft P2WnD91Y6IV07HbcYJl5ZqNFzOuDWokiNIiPKaktOI0raUk7pUOoP5fvxnay rZPd8mWNPLcsegu5TWtSXI0hSgfX9Mbb9k3iqrN3BoZHq8lSqxlltLbalLJV IiXs2r4oPgPppPniGkltnj1JamGY5NBMz5KyGyyj/ESd/hj1yIt36yS6kJBt ZOwxt493LMnt4IDMOZ6Nl2UYj7D70lKQotpGkAHpdR9LYGJ2barU6E4aPIaj TUP2biNsF0vo6dbXB3v2GLkKVtTfQ+HnHzAup5src6TrlyGW3EjRqajpSoC+ w1Wv/PB9kjNU2oRVPu0lAiNtcuXKSoptYHSoC1judwDt1tY4FrdNDycxf+yx prpKwS9omjoq3s6oqaEBTlOlNSNQHRCxy1fvTjJ76bEkm4BKb36nHmOTRt+I YLb1KuoC/mlOOeSn8KvlhYA4RFsVJtMcB9FiNioHY7YK8rZdqmbcys0eiMB6 Q54lKvZDafxLV2A+fYb4el+c8Ls2PEvD5+Hz/wCL6f38y6qu3SOC/CsUfL5D +Zqsj6yd9lxDd7KUOukXuEp89zcjFRKmhLykAkKB3v1GB62fvKHodA9ktEqN G7pdz/hHqJamJqHozjjbrawtDiFFKknzBHQ4t/KnHiKnJMujcRKa/UEe7LbT IioBXJSUkFtxO1ioGwWLdd/PA9JqXTPD6Zf9ofBV4np98Pjj19foBFBay9Jy xN4V5QkUN6h5oS5PbNTZcRVKQ40EqUlxtNwtaUDwKsErH3r3GNFZUoFOpFBZ bpsePHQ54+UwkoQhu3hSAem25HmT8cbkPeeTitqcPdl2WBTmXC0lTiNQtZKS P1xhv2zM1ZCzlx3p9JyPSRPr8FRiVOowzqRKUbBEcBN+atB+8AbX0i/ZayUV Thg9NFuwqOg8MqkuEqpZkmt0anpJSXnRqSgham1aym4QULA1JVYlNyCLYT4c 51n8M+O0HNEF1t4wHlNvpac1NyWTdLiQe4I3HrY4xIS2yTNOSTi0z6j5So8P N/Cam5wpdQCIdThNzoyVtEuctYBAIvYHTcHyPnhhmykMxa/SpEJBKFB+OoqC ilBLepJV2AunqemNG7Uu3iJWp0yr5fLBPi/SmH6XBzdCW25GeSlt4i3iBHgX 
5q6n5DAxk7NholYcQ9IaZalJSlYdClN60jwq238QsNr9MbNMVqNGuynZ+Ff2 Qeby9Gr7tQfyzGjGa4pxt9bbhbWruUJXsL9bEd9sGeQZ+YajQ40Oq0oiNru0 8poI5ie5PlawAPf8sPqo1vTJuWRqJSVzUVwWHUcsQM3cLKlR6k8W2JTa2Fho 7gdUm52uLA4zy9lb2X4D3KqGdZMxwHxASXVD/wCDdv1x5hmvKMXyzhET2UUN hInQnANtTi5JJ+dsde7eyl/4mm/tSMPlkMV+pmZ6mNSG06btu28RTiWyPmPM +Ss2B+gzVMcwJDzdtTbwB2C0+XXcbjtgclKqfm19nsPBdXVrq/6DV8p9ffAR 13MtVq2Y5NXqzqXZkw9bWCR2SkeQGw/jiGbTZ/VoWodDsd8Upz3SydMophp6 o1QXCWB22lbzwunSlIvYb/PD1tFnfv8ApbA0XYpMKMq5tqWUpiXlRkSoSlp5 0NwgcxN76Qu10gm23TbGmMjZsy/nqH73SX0+8A/WQnEgPtf4k9x5EXBxvaC9 SeyZyn2u8FlXnXULh/Ev8lU8Y+L2Y86ypnDbgs4TFQlxqs5kTr5CNNg4wytC ST1stabmwVawBOM8yaHkHJsZaWBInVD3x2NFMKU3NRWYamgklPLN2EqUSUKF 1pUm41b2HqrfMsfoeKoq2QwRVeyFnysUhcvMFQQzLZTpTDKkp0uLQHQZCwQl CnUoUrWditB1FBJxVhjus1BbailSm1FBU2dSDbyI2IPpikGaPqN7HWYE1L+j jyo9Je8VPZfpTqlHoWnVAD9kpwb1So05+CmMZbXNcA0oK7KKbdQPzB9MWIRc lwJtR7YpRo0SqZAix57TTgUxynG1psdrpPbY+owK1ng5QJalJhtSm2ySUBpR s2eoIuO2LlGrs0/XXoBt08beZEhTsmTI1AjwZyZFRMVQW25KQlRFtttttu+J MUSeUg8hAK+pU5a+K1ljm8sLGO1D+m0mUyFJckNAHcJF9iP5Ywjxaoi8se0F X6EgqQmLOc0AG3gWdabfkrEU8kbVwmAzq1axZSrWvsrHGtfmv9rDlYQQSpBS TYrNrDrfEvEiqiKK3dPiFlE9UW7YDfLEcHtPZbR+bqXe+o/z/wCHPOTz1LbV a/cnoPTCqX0lJVrsL367Yz2dVQSRBCodHbNSpjE2ZOAc93eUR7uydwq6d0uq 6g/dTvY6rYeGlJeoyqjQ3ly4bXifZWE+8xf8YGykduYnw9L6TtifAOMpQe59 NjV1pUl1DbJBSrqo/d/hiQLaGoCmG33WuY0tlS2nS2spULKGoWIBBIw6zF5y XLK4XQcJLKYG5szBVWoceZk6gQ4MyBCcpgmQStp1LKlBStDSSEaiBubE3Kj3 2G8pVvh/lzhu9MkLqcis1WK7EV7m4iNJprySNRSv8DiVJKV6fCUKSetsHjPc cb8Z8Ln4bfhfA+n/AIIrNudarmuoOuOPPNRnE8sRuZcFvY6VqAHMusFZuLa1 qIAucCCFlLp8BNrk3O+HMFm9f6P+bJzX7OWbMjNEhum1NMtJ3OhLyAb/AB1N G3xONGu5By3SM6Krs6XFY0RTHbRJfbbSlf4gSd+23zxcptdWcfNYBWVqzGSb i52yXTstNLqeY6HEATYqdnMpI/W+BqtceuDNP1Idz1AfUk7piNOvm/8Aypt+ uBBHKCAWpe09wvjSCiAa5OvcgNwkt/8A2V/lgPrHtaUppxaaPkuavvqkz0IH yQknDqKBO6KAus+1NnYuFVMoVFhrJtd1Lr6h8yB+mKhzvnGt53zca/mB2O5L U2lnUxGSykpT0BCep9Tc/LEugMrN4JPpd1BQITcb798J/X/2gwiAtTmzrQ4q 4snUL7FPr8cLSX1uuhsKsgXB36nyxRslubOz+C6T+j0cYtYb5Ykm9xrV0GCC 
iwORF+nZMX3kN6/c4xSVc9aN1LUP7JvYnsVaU/isDGTelLbE5jMT6zmFCUqk Spk17TsNbjzijYCw6qJPT5DytOm8PzTMjt11n/aGJOao66wisNhBpqHUEhcR StOpLgsWySrddhoKTfEoRfLYLUW+VGMF8/4++vUGp6UzMmpr7sVNOkyJHLLL KbNTLDxuJT/3ek2BA8KiqybEEYg5clTr6WGTuoWO/QYjLJe08m1j0IarpLEP 3ZvdbyzZKT52/lgOOWZufeKSKDk2jOT5zLTl/dUXW+ppCnHldfFpSlX5JOHq Tcng8z7V3Qjo1Br3m+Pp6jGoUCRQpSoM6I/HkoALjclstrTcAi6TunYg2O9j hKTkeqJSmqVVTVGgpOhT0xWhazexDbf2lEf5dcW2jkw3NUk5eq0yNk/NVWZh PaRzUOriKfsOq0IV2JNrnp5XwU8J25VTqFSrlSkPylshMRCn3FuG58SgCokj bTh49g7HiJYTzoNy00ElNhYJF8fgpYYSFK1b2UT3v2xZZSa9RBZWhxIvuRYd tsIvFX3Unf12FsMOMX16mbhW2re3TDF9QR9oC97+nTCFgYPbOWUk38yeuE7p /CPnhDnipBZYsFqurbxdR8cesuoUlLhF7dvXGW3zg79HDZJ0qnszpS5Mx5TE KKnmy3kjxIRfZKfNaj4UjzN+gOLU4SnXUahxCq0VuNTKfDVFiN9UNtI3WlN+ qUgab/eUtRO4OJRXJU1k/wANr9P7/wCh/wAGcq1SrZ4TmaLTYkmaoLfpkQ1B DSw7c+B1okOBt1BWhLiQdKtKumGMkUesSWcv5dolVpdIgO65oflGTLfcUQhD IASlOu6dCE6b3K1qJsbEawsIErfMtbTWIpcNd46w/wA+39foS3EJnLFG4WQo 7rKX8wzHUvNqYd/3eDFSmyWW7fbSQeqtyoLV3xWoHuig84kBSgLknoMDnwzX 8P3yr3S/T8gIzVXLVlTLCzzHDpaSi5VfzAAuT2HrfywV8MHMz8MMzLzXPqFN yyExlMN+/R0P1BIXYa2I+6kuDqCqwHe42wWlYWTnPtPq3frfLXUF+/zEsx8X FP5terdFiypVWdIU7Xq4USagpQBA0AANtAC2wB3ue+K7qwqWcMxmTNemT6k8 oqK1kuOKv3JPT9Bgz54PINpIm6HwubU2l+uO+G//AAW1X+av4YOo8eLSKMI9 OioaZa3SlIsN+/x9cSa8uDZTb8ye08clEMukJstOwSk9dr3/AFxwJSXFFJAG kWTa6gT/AJYEruVx6hXRFZ5EPfSV7ti23mU+f8sJOPucvWUBGo9ATYfpiEL2 8rDCy08El73bX7jNbx1Dayrkg3uCPhb44aPlK0aipBJFyAbi97dcFhbKbWV2 CnTGCbT6GbqkpUBqO4vtvhPmI/Gr9nFgrEUqSt14dfEfLBBTaVJnzmIMNAU4 5uLnSEgC5UonYJABJJ6AE4y0ss7zGW1ZLHpnD5NXy+y7Nkmn5XiH3lb6yGXa gbeJ43+w3bZJV0SdgSVHBImdBzXTWoNNk0ukZRpklmKHamh9mNU3hdSYwLaS pDelJJJItspXYYMlgyLLXbPKXEf3f30Os51mqPZ/ZoaXXpslcoTmIVYQl2TQ ntQVy2ZSTpVHUgBQKfDywCQki+H6pkKGh3O9dZKkyZDlRghv/d1SZS1nVJCd QUAo30bKCG0KuCVkYm3n9Aqr2wjFdv8Atz9P3K5kTJVcrj1ZqK+YtZvfSEg2 FhYAAAAAAAAWAAwIZprLbHMbL5QlAusp6gfx8hivPn9T0k5x09Dm+kgQRnjR VOflekR6U8klJnEl6asWsfGdkjYGyQLG+5vhu1HqNYrTrjZkzpjqrrUbuOEn utR6fmcW4rjBwvU3O62VsvmwopGRlrcQ7WZGhIG7LB8XwK7fuGCiHDh06GGI 
LCI7dgrSgWJ9T5n44PGKSMuybm+B3pKkJvtvbZOwxxqSXLWIv96+HBYwJlZC ySs9bg9L2xw4To30pJ6gdbeuHEkIrWkv+FAvtv8Alhs6vQ549Vk+Yv8Alhck hi6sqK/rLW2I/wBemGbpTZN9rgYXPeRsIj31hbukOBISLd7nCf8A64+Zww4w iJCq00g9PK+Lc4cwospMVuQwhxMyvQqe+k9HGFNuuKbP90rQgnz02O22M6HZ 225tUtof8d6jO+lKRSve3BEdhOS1shVkrcDlgo+dh07Dr1xd1FyZlmjZv4UZ Wp1KQ3SM9ZWkysww1OLW1OebjFxDqgonS4le4Wmyh52wZfGZln4dEVDjt/ql wUlk+Oy/T5IebC+fNpsJy/dl513mo9EqDaAR3At0JwQcen3P+kVLpqSlMWBZ iM0lISltAWUgAAeSUj1sPLDP4WbUEnror6P+EDDvgoyyjaxtt8MUrnKQ8Ut3 cP1jrhV66U7fLAn8SF463Hw6bRHZYYaekwGXE3Q/NS04LkaknqL4uhEOLTmE RIDCGGUqICGxYbHF2HRxGx8DhIAWf/LB/O5wgga91b/n64MBFVXSpoJJsVDa /rhOwKEg9yo/IbYcQio/9Wg97kY8HjUkq3uSDfyxFiGayUytKTYFIJw3k+F4 hJ2Nr7/HCQhmvdsoPTQDbDN7/gfnbDiItxSg7sewxzrV54iI/9k= --= Multipart Boundary 0912061926 Content-Type: application/octet-stream; name="GG.JPG" Content-Transfer-Encoding: base64 Content-ID: <13600187> Content-Disposition: attachment; filename="GG.JPG" /9j/4QDmRXhpZgAASUkqAAgAAAAFABIBAwABAAAAAQAAADEBAgAcAAAASgAA ADIBAgAUAAAAZgAAABMCAwABAAAAAQAAAGmHBAABAAAAegAAAAAAAABBQ0Qg U3lzdGVtcyBEaWdpdGFsIEltYWdpbmcAMjAwNjowNTowMSAxMDoxMTowNQAF AACQBwAEAAAAMDIyMJCSAgAEAAAAMzA5AAKgBAABAAAApQAAAAOgBAABAAAA ZgAAAAWgBAABAAAAvAAAAAAAAAACAAEAAgAEAAAAUjk4AAIABwAEAAAAMDEw MAAAAAAgIAAg/8AAEQgAZgClAwEhAAIRAQMRAf/bAIQAAwICAgIBAwICAgMD AwMEBwQEBAQECQYGBQcKCQsLCgkKCgwNEQ4MDBAMCgoPFA8QERITExMLDhUW FRIWERITEgEEBQUGBQYNBwcNGxIPEhsbGxsbGxsbGxsbGxsbGxsbGxsbGxsb GxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsb/8QArwAAAAYDAQAAAAAAAAAA AAAAAAMEBQYJAQIHCBAAAQIEBAIFAgsQChMBAAAAAQIDAAQFEQYHCCESMQkT IkFRFGEVFyMkMnGBkaGx0RYYGTM0QmJjcnOSosPS09Q1Uldlk5WWs8HwJSgp Q0RGU2RmdIKEhpSjssTh5PEBAQEBAQEAAAAAAAAAAAAAAAACAQMEEQEBAAIC AwEAAwAAAAAAAAAAAQIRITEDEkFREyIj/9oADAMBAAIRAxEAPwC1CBACNN4C EZr0OsYiy2l6bQmlLmTWJB1ZClBKGkzKC8pfCtBKQ3x3SFXIuBziAs0/P2hJ +ZWmVKYnZCRdl5NuqzEqw867L8csC+CtV1OcKpsKSu9ghBFz7LvhfHcfXJ58 55Jl7Ym2nymcqcyqlV5pvEM1MSimZJp56VZU2mWVPTRdXLJuG3FeTmV3UCSP OnZVIy2e1PqSp52jLEzNzXXT62OrmA0lYp6XjLJWsi3ZmilJBF0379+n+P1x 
15vz6fcQnOp7DmHJinyMs1WFUgoqj0s224iXmVPywVwBa+E+pdebEKF07HYX 6NhF6tuZa0teKWmm6wZRoz6G7cIe4Rx2tsBfwjhlMNcdvRhc989HkOJOwMBw 8vObRzdnmmR175R1atVCSo2HMWzwptQfprrrcoyhKnGXChZSFug8N0m1xvDk nWtlupPq2EcZti17intLHwOmA2Trdyh2S5R8Ztdx4qIo295UKW9aOTi/ZS+K 0WG/FQXtvegDPnz8kQkKcm8Qt91lUJ/5ILXrayCaVZ2t1tHmNCmdvxYApWuP Tuk9vE1XTfvNCmvzIl+G9R2TmK8MMVei4yl1szBISh5JZeBBtYtqsoG45W3j j5fLj4pvJWONtdDptSk6tRGajIuFbD6eJCim1xy5HluDCniH9QI6YZe+Myn1 mqMUrsw1O4qw/Lzq5eYq0qy62bKQ64EEe/aMyymPZJtCpzUvp/p1amKdP5y4 Pl5qUdUw+05VW0qbWk2KTvzBgr557Tt+7bgn+Omvljpq1NqM5i6jsq5/KOfa y9z9wJJ4hAS5T1zFaZDK1pUCULvfsqHECQL77RAk580x7M9ck/qOwo1QET8k UzTWJpLr3mGw6HyQW7p6z1AlAFx2gD3nth6zH+05efyTO5bxvBBSs81SNBaa TqAwE35PKMjqGcQyKWlkBm6EANgIXxJfur2JCwBY2s+UDUDTm8I11mrZ/wCA 3KhMUJhNMemK9LKRL1EMgODhbTZTZc7XGre5I4SLReX8d6iccfN9qN4bzdYp jFQkpbP/AA3IsTiahOyxYr9OmVJmHn3l8UwVgXWeNpSOrASOBQVbmZk/Vs13 cW4cq0pWHV0mpTzM7OTWH6hKzDKmky8qhXG4QnrUqWHh2iLJKlA3QgGpfF7I uPmmLpuVE7iWlYHflsx8Y0+pT/XpU08Z9hagktI403QlA4esCykWvYi5idej 1EUpIRV5EniHKab8fbjy5at4ezHfrypYYm3aZndmQzKeotnGlUKQgWHD5Qu1 vN7UOSaxNLaCevWL+eM03YxFRmbgCYVc93FBqatO8RSqad8B2+cNG4LXUZjh BVNG97+zEJXp6ZBKuuVvysqNhsifn5kDhEyv8KG41R2RnWagEhbku+1MDaxV wLSqwNtrhNvdhcZY2Xna47JLFVMx5pVw3i+iF0yNWk/KmOtTwrCVLVsR47RN +rP9f/yOXix9PHMb8X7Nl7J7/ARU5rexzQsb9IFVprDkw6/K0mVaoswpTZQD MMrc60JB5gKVa/eRtFesyvKN2dPPEyw1cIQ0kJtsLDaEplU8NylKdvAR1c2z LLabqUkEeFoVNsNb2YTw3/aj5IBYiXYAumXb8fYjb4I3TLtKXbqG0+0kcoHB XLtNocsE2BBuB7UF4Jw9UJjG8hQKLKpmX5tSmJSWcd4ELdcICEEnYAqI37rk wHYZjSJrITNEIyEoCzyJTiyWt5+ZBgtzSfrQQr1LT7Rl922LJaw/GipMP1XI lelDWmUcXzvVMKvNiuV+Pighek7WmE3Tp1p57+ziyUv/AN8NYfrBQ0na07ni 04Slu6+J5Tf8eNxpS1plI/tbpIFPI/NRKfnxvrh+hJUtNmp7CdDexJj7IyTp GHqenrp+dFelpgsN3tfq0qJVuQLAHnEVaQGqY20EhASkDhAsBvEXU6CWYVwq 9+GauKHoQ7z3Qe/zQatv0WqKuiyy2UqwPoKkbeZxcdr934Y5jZf1u/1w+MRS jmivy3UdiybI3er0+se7MLjY2og9KXQbjuhKttKXDc+/FpoAJ6ojYnlB7Vhz sdt4MLWkJKCOfmg4M9gG9rDxjdA+XRd8jmQkm/uQ+5Xy/Bqbwmvmn0YkyB/v DcYLXXb+XOb/AF6u8+JjFvP8Mc3QLef4Yxw7f+4DAT4xnh8NvdgOaalb/Q+M 
xO0q/oGeRIP05rvitGVWpul8Kidlq9kd+cVE/Wkw4opJv498M1ZWVUZ0C4PC fdihblojV1nRT5bq3/Ykj3nnBHcLeYxLGyxy+6EUn4vW3M5iVebUokv1Gacv v3vLMZFVGph4JslKT4bw2vBRXcg78vNFprRCHCbm9/ag5ttfAbA842MK2wsK G5g8FZIG/nt3xoPlSnrOEcR7JiUZdKDGonCauEkmrylgPv7cTRau79WOffFf GYxHN0CBACBAc21J8I6P3MMrOwoZJ/hmorFcfStaykKAK1c/bMVE3sQ+5dN9 99+cM9VUVSatwBaKFuWhdwO9E9l4o9q0g8nfzTDoju9h+0T75iWBUHQxSHni bdW2pd/aSTFUWV2Zem+gYGm5PN3KqcxNXJipvzKZxh1FkMKtwI3dQbghR5d/ fGRaYrzZ0JOglWn2tXHddP6zAbzU0CAerZBVwHzD/wCqK5ZoqazV6PVIuvIG vKP3J/WYUozZ6O82KsgK8k/cqP8A5UbycQZ6bPR1A9rIvEKT4Btz9ajPptdH UocJyOxFb7hz9ahqs3HDs+q7kjXM46fNZF4fmaNQV0xtMxKvocSryoOLKyeJ avrODcG0RnADraNSGFnCAL1WU2A/zhuDFsj31a6PtivjMCOawgQAgQHNdShC ej8zDUbbUNXP781FXU3MIE652tuMn4YqJvZG7MIseE784bJ149SoXHKKFueg hzrOiXwDxOewYmU8/CZdj0D2f8r8MS2dEWKXuoy2qz97dXIvrv4WbUYpFWz1 ss29ZIukHzn2zCMotSUlNlW58oJcbQq2wATFMEA9Yu5FiTGy0BLZI3t33imC lrFt++CgobcIgzoWXlNTDar29WRy85tEpwPM21GYUBN/7MSlrDl64bia1bs9 9Xuj7Yr44xHN0CBACBAcz1NK4ej0zFP7xK/nmoqlqs2WaqoJ3Fyd4qJpvdqB KPYjfnYwifnSskAgXHcYpm1vnR7vl3olsF7nsLnUePKacj0ZdX7b4IlUMeZL xltPWKJoKt1VEnF8/BhZikXykehrCWwT6kne/M8IhCtRMrFgEi5jYHiaso/D FxDG45C9vAxq66Qk8Sd40JHVLCr39wGCjMKTe5sQYMJHp4KnGkqI3cQfeMSb A8zxaiMKWO4q8pe3+stRNauKfHr93f69XxxiOboECAECA5hqfIHR1Zjk8vQI /wA81FTFef4aoe1cWvvFRNMbk7wquTax5QkVMgvC6r38IpK4ro6lX6J3C6Cr 6XOTydt/8IV8selbDxV+DErnSL50veT6QMbvg+ww7PH/AKC4pRQlTtOl2ELT coQkJSNzsAB7cMemU+Yry+xtl9WhT8cYVqtBeuAkVCVU0HLjYhR7J5jkYY3H w2gN2894uJYS4VnkLX235wRMzDSUbnfneNCF6ZQBxcdz5tobZmcKirtC3Pcw DeZ71+3a3ZUCN/PveJVgKYWjUNhcr9kKxJE+O8y1E0XRuj1+5v8A3xXxxiOb oECAECA5bqmXwdG9mSq9rUE8z9vaioTEswpVVQoEDY8t77xURUeXMDlxd3jG jTyQq5VzIG0Uxc10dI4eiyoaBdITU6gLW+3GPS/+0fwTErnSEai5gS2gzMKY ST6nhqeO17/SVeEU7ZR4kwTQtTOF8RY88peoFIn2pyclpNjrXXg120oCVFIN 1hIO/K8MemVYRPdInpxrtDcp9XwrieflnQQuXm6Uw62q/wBiVkePdHnfMrHW gfGq3naRgbHOE59y566iMNIa4j3lhbhbt5k8MJLs3HmTFnzNSWISnCNcn56n Oewdn5ASjwvsAUpWtJ8bgiGBxMwicPWzTVvslG/xR0STvP8AChSUuJO/MXvD a6HVrPCtO/cQYyhEnszqkuPJC08h47252iT4BfHp7YYutSya3IjiUu6j65a7 
4wXeP29EXvvivjMaxzdAgQAgQHJ9V4/uZ+ZvP9gD/PtRTviJ5KXm9+GwVb34 qJqPrebTxbnY+N4LYeUZ9FipV3B3+eKSuu6PBKmujIpjTh7SKxUAf4W/9Mel rp8fgiFxzLVfPpp/RwZjzV+WH5lvbvKgE/0xS5PIZSTwJCT3mKx6ZSJL5b3Q oKUTYb2gl+eWFAFV7WG+5itpIpiebU7ZalE3vw329yLFdEGUmVGONByK/jXL egVufdrs8z5VOygcd6tBbCU8XOw39+Jyqo7c/pb05TKi4jJ3CwWeafJSn+mG Wc0qacDMlt7JrDzar/WtLTb3lCI9qvUqqXOWjU7DGsnHOH6FLIlaXT67NS8n LNk8DLSHLJSL32Ahvy/m7Z9YYupVzW5D2J2+qmovfDn9XpzAHok953FfGY1i FhAgBAgOR6uHOq6MLM9fhQDy+/tRTTiNx5biSlGwKhtv398VEXszBE0pBQmW eIT3cB+SJHgzDk1PYvp81PUubVTvK2xMqbaV7DiF+7baGWUk2Sbul3Ojik02 k6G5OVpcuWWFVWedCesKjdTpNyT7kdu4B9l+EY80tsddINqCplErOjjFlLxF IJnafMU1wPy5UpPWAJKki6dweIJO3hFIdSmXm1pShhwdnZRaVzPiSBvHXDK3 OxOU4Msw87xXWl255FXKEDj7nWkJQvv3IMdnMlU451gulQB5kjaPc+kfWRkr k7o0lMB45n6yxVGanOTbhlaWuZa4HVgo7STzsOVoyzbZw7S90hGlrqUn5uKo hR3saBNXH4saJ6Q7Su80lqcx5PKbuLKNDmgsW8OxEetX7RWfmpXKPjbVljLE uE59dQp1arE1Pyii0ppZbW6VJuk7g27oYcCPuSudOH52d9by0tXJFbrizwoa QmYbUpSj3AAEk+AinOrrHdQWQq5x1ac8cvrF1X+Mcv4/dRj54DIe1/Tvy/sf 9Ipf5Yle4z6f2RB5Z34AP/EUv8sZ9PzIv923AP8AKFj5YG4wc/ciRzzuwB/K KX+WMHP/ACIHPO/AH8oWPlgbjluqbObKDEPRx5i0OgZq4Qq1RqFG8nlpORrL T7zy+uaNkoSbk2B96OA9Hdp5yez0l8eN5pYUTWXKM9JqkVmbdZLSHEr4x2FA HdIO/hGs7r2gno+dJyUEDK61/wB9Zr8+JFh/R3p+wvQ/Q2iYKcl5frFOlAqc wbqVzJ7fmEc/JhPJNZLxvr06ZhHBGG8C4MTh/C8j5HIIdW8GutU52lm6jdRJ 3MPPUo8TGzGSahumXG2BcN5h5bzeEsWSTk3S53h69luZcYK+FQUO02QobjuO 8ccOgvS2qfS/MZbuzISbluYr0+42v7pBesfd8I2SS7Z80OXoS0nqFjkzTBvv admh+VhO5oB0ku3vk/KAfY1KbH5WK3WaJHejy0jOJIOUzYCuYFWmx+UhKvo4 dIKlA+lYpNh9bWpsflIbppqvo3NISzvli/8Ax5OfpIL+hp6Pu7LF+x5j0cm/ z4bpqDWOje0iMKSWssnUkHiBFamgb+3xxlfRwaP3HitWVR4lklR9GJvtX8e3 DZqNm+jh0fsjs5St7+NVmvz4PR0d2kJACfShliPPU5o/lIbNQcjo99IKeWTc iR56hNfpIyej50fk3OS1P9yfmv0kNmoA6PnSBwkHJanWPP1/NfpIyOj50fp5 ZL0//n5r9JDZqM/Q+9Id7pyakUn7GozQ+JyOhZTae8ocjXKmcrMHM0I1jq/L ermnnet6u/B9MUq1uI8rRhp0ZPKM2g0LQLQH/9k= --= Multipart Boundary 0912061926-- From owner-xfs@oss.sgi.com Tue Sep 12 19:07:32 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 12 Sep 2006 19:07:49 -0700 (PDT) Received: from 
omx1.americas.sgi.com (omx1.americas.sgi.com [198.149.16.13]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8D27LVw025631 for ; Tue, 12 Sep 2006 19:07:32 -0700
Received: from omx2.sgi.com ([198.149.32.25]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with ESMTP id k8D26anx006434 for ; Tue, 12 Sep 2006 21:06:36 -0500
Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with SMTP id k8D4f984014465 for ; Tue, 12 Sep 2006 21:41:11 -0700
Received: from [134.14.55.141] (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA22794; Wed, 13 Sep 2006 11:03:08 +1000
Message-ID: <450758E2.30408@sgi.com>
Date: Wed, 13 Sep 2006 11:03:30 +1000
From: Timothy Shimmin
User-Agent: Thunderbird 1.5.0.5 (Macintosh/20060719)
MIME-Version: 1.0
To: Eric Sandeen
CC: Mogens Kjaer , linux-xfs@oss.sgi.com
Subject: Re: Mounting IRIX disk on Linux
References: <45053B2D.50203@crc.dk> <4505712E.5070801@oss.sgi.com> <4506529F.9030109@crc.dk> <4506831A.3000001@sgi.com> <4506D66D.8060206@sandeen.net>
In-Reply-To: <4506D66D.8060206@sandeen.net>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
X-archive-position: 8971
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: tes@sgi.com
Precedence: bulk
X-list: xfs
Content-Length: 785
Lines: 35

Eric Sandeen wrote:
> Timothy Shimmin wrote:
>
>>> This gives me:
>>>
>>
>>> versionnum = 0x1084
>>
>> #define XFS_SB_VERSION_DIRV2BIT 0x2000
>>
>> So, yes it doesn't look like you have v2 directories
>> and there is no support for it in Linux.
>
> Hm, the kernel should probably print something more helpful in that case :)
>
> -Eric

Good point. And indeed it should have:

xfs_mount.c/xfs_mount_validate_sb()
...
	/*
	 * Version 1 directory format has never worked on Linux.
	 */
	if (unlikely(!XFS_SB_VERSION_HASDIRV2(sbp))) {
		xfs_fs_mount_cmn_err(flags,
			"file system using version 1 directory format");
		return XFS_ERROR(ENOSYS);
	}

Did this not happen, Mogens?

--Tim

From owner-xfs@oss.sgi.com Wed Sep 13 03:46:07 2006
Received: with ECARTIS (v1.0.0; list xfs); Wed, 13 Sep 2006 03:46:17 -0700 (PDT)
Received: from tac.ki.iif.hu (tac.ki.iif.hu [193.6.222.43]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8DAk6Vw007496 for ; Wed, 13 Sep 2006 03:46:06 -0700
Received: from wferi by tac.ki.iif.hu with local (Exim 4.50) id 1GNRJo-0004LY-K6; Wed, 13 Sep 2006 11:46:08 +0200
To: Nathan Scott
Cc: 387057@bugs.debian.org, xfs@oss.sgi.com
Subject: Re: Bug#387057: xfsprogs: repeated xfs_repair does not fix the filesystem
References: <20060911223008.5160.98142.reportbug@szonett.ki.iif.hu> <20060912085025.A3552962@wobbly.melbourne.sgi.com>
From: Ferenc Wagner
Date: Wed, 13 Sep 2006 11:46:08 +0200
In-Reply-To: <20060912085025.A3552962@wobbly.melbourne.sgi.com> (Nathan Scott's message of "Tue, 12 Sep 2006 08:50:25 +1000")
Message-ID: <87zmd4jdzz.fsf@tac.ki.iif.hu>
User-Agent: Gnus/5.1007 (Gnus v5.10.7) Emacs/21.4 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
X-archive-position: 8974
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: wferi@niif.hu
Precedence: bulk
X-list: xfs
Content-Length: 2783
Lines: 76

Nathan Scott writes:

> On Tue, Sep 12, 2006 at 12:30:08AM +0200, Ferenc Wagner wrote:
>> Package: xfsprogs
>> Version: 2.8.11-1
>> Severity: normal
>>
>> I guess my problem is rooted in the 'well known' 2.6.17 error, or maybe
>> not. Anyway, my experience under a current Sid system is that
>> xfs_repair does not fix my filesystem. It does something, as the first
>> two runs produced slightly different outputs, but the further runs did not.
I've got similar problems on two filesystems:
>
> Try moving aside the contents of lost+found after the first run,
> and see if the problems persist.

After renaming lost+found to l+f, xfs_repair didn't report any errors:

=> Phase 1 - find and verify superblock...
=> Phase 2 - using internal log
=> - zero log...
=> - scan filesystem freespace and inode maps...
=> - found root inode chunk
=> Phase 3 - for each AG...
=> - scan and clear agi unlinked lists...
=> - process known inodes and perform inode discovery...
=> - agno = 0
=> - agno = 1
=> - agno = 2
=> - agno = 3
=> - agno = 4
=> - agno = 5
=> - agno = 6
=> - agno = 7
=> - process newly discovered inodes...
=> Phase 4 - check for duplicate blocks...
=> - setting up duplicate extent list...
=> - clear lost+found (if it exists) ...
=> - check for inodes claiming duplicate blocks...
=> - agno = 0
=> - agno = 1
=> - agno = 2
=> - agno = 3
=> - agno = 4
=> - agno = 5
=> - agno = 6
=> - agno = 7
=> Phase 5 - rebuild AG headers and trees...
=> - reset superblock...
=> Phase 6 - check inode connectivity...
=> - resetting contents of realtime bitmap and summary inodes
=> - ensuring existence of lost+found directory
=> - traversing filesystem starting at / ...
=> - traversal finished ...
=> - traversing all unattached subtrees ...
=> - traversals finished ...
=> - moving disconnected inodes to lost+found ...
=> Phase 7 - verify and correct link counts...
=> done

Still, xfs_check reported:

=> link count mismatch for inode 400254 (name ?), nlink 0, counted 2
=> link count mismatch for inode 4239409 (name ?), nlink 0, counted 2
=> link count mismatch for inode 8388736 (name ?), nlink 39, counted 38

Further runs of xfs_repair didn't bring any change. On the root filesystem the results are much the same, but xfs_check reports:

=> sb_ifree 3042, counted 3041

I read that xfs_check is being obsoleted in the future, but not sure which program to trust. Are my filesystems healthy or not?
--
Thanks,
Feri.
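[Editorial note: the check xfs_check is performing above is simple to state: the nlink value stored in each inode core must equal the number of references actually counted on disk. A toy illustration follows — plain Python, not xfs code; the inode numbers and counts are copied from the report above, everything else is invented for the sketch.]

```python
# Toy model of the consistency check behind xfs_check's
# "link count mismatch" report: the nlink stored in the inode
# core should equal the number of references found by scanning.
stored_nlink = {400254: 0, 4239409: 0, 8388736: 39}  # values in the inode cores
counted_refs = {400254: 2, 4239409: 2, 8388736: 38}  # references actually counted

for ino in sorted(stored_nlink):
    nlink, counted = stored_nlink[ino], counted_refs[ino]
    if nlink != counted:
        print(f"link count mismatch for inode {ino} (name ?), "
              f"nlink {nlink}, counted {counted}")
```

An nlink of 0 with a nonzero counted value (the first two inodes above) is the suspicious direction: directory entries still point at an inode that claims to be unlinked.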
(Please Cc: me, I'm not subscribed to the xfs mailing list.)

From owner-xfs@oss.sgi.com Thu Sep 14 19:05:21 2006
Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Sep 2006 19:05:26 -0700 (PDT)
Received: from omx1.americas.sgi.com (omx1.americas.sgi.com [198.149.16.13]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8F259Zd016517 for ; Thu, 14 Sep 2006 19:05:19 -0700
Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with SMTP id k8E1vFnx024243 for ; Wed, 13 Sep 2006 20:57:17 -0500
Received: from pcbnaujok (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA22186; Thu, 14 Sep 2006 11:57:04 +1000
Message-Id: <200609140157.LAA22186@larry.melbourne.sgi.com>
From: "Barry Naujok"
To: "'Ferenc Wagner'"
Cc:
Subject: RE: Bug#387057: xfsprogs: repeated xfs_repair does not fix the filesystem
Date: Thu, 14 Sep 2006 12:03:28 +1000
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Office Outlook, Build 11.0.6353
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.2962
Thread-Index: AcbXIhNXEILEpsB4TQq/G8AHj/wvfAAf6/iw
In-Reply-To: <87zmd4jdzz.fsf@tac.ki.iif.hu>
X-archive-position: 8979
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: bnaujok@melbourne.sgi.com
Precedence: bulk
X-list: xfs
Content-Length: 1104
Lines: 31

> -----Original Message-----
> From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com]
> On Behalf Of Ferenc Wagner
> Sent: Wednesday, 13 September 2006 7:46 PM
> To: Nathan Scott
> Cc: 387057@bugs.debian.org; xfs@oss.sgi.com
> Subject: Re: Bug#387057: xfsprogs: repeated xfs_repair does
> not fix the filesystem
>
>
> Still, xfs_check reported:
> => link count mismatch for inode 400254 (name ?), nlink 0, counted 2
> => link count mismatch for inode 4239409 (name ?), nlink 0, counted 2
> => link count mismatch for inode 8388736 (name ?), nlink 39,
> counted 38
>
> Further runs of xfs_repair didn't bring any change. On the root
> filesystem the results are much the same, but xfs_check reports:
> => sb_ifree 3042, counted 3041
>
> I read that xfs_check is being obsoleted in the future, but not sure
> which program to trust. Are my filesystems healthy or not?

This has been reported before, can you try running an older xfsprogs before 2.8.10 and see how that goes? I think with the dir2 fixes, the nlink stuff might be a tad broken.

I'll also look into it from my end.

Barry.

From owner-xfs@oss.sgi.com Thu Sep 14 19:06:02 2006
Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Sep 2006 19:06:23 -0700 (PDT)
Received: from omx1.americas.sgi.com (omx1.americas.sgi.com [198.149.16.13]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8F25pZd016628 for ; Thu, 14 Sep 2006 19:06:01 -0700
Received: from internal-mail-relay1.corp.sgi.com (internal-mail-relay1.corp.sgi.com [198.149.32.52]) by omx1.americas.sgi.com (8.12.10/8.12.9/linux-outbound_gateway-1.1) with ESMTP id k8E1dGnx021464 for ; Wed, 13 Sep 2006 20:39:16 -0500
Received: from omx2.sgi.com ([198.149.32.25]) by internal-mail-relay1.corp.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id k8E1dG8s39912864 for ; Wed, 13 Sep 2006 18:39:16 -0700 (PDT)
Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with SMTP id k8E4EBti011249 for ; Wed, 13 Sep 2006 21:14:12 -0700
Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA21840; Thu, 14 Sep 2006 11:39:03 +1000
Received: by chook.melbourne.sgi.com (Postfix, from userid 16346) id 2896E58CF851; Thu, 14 Sep 2006 11:39:03 +1000 (EST)
To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com
Subject: TAKE 952967 - BUG() in generic_delete_inode()
Message-Id: <20060914013903.2896E58CF851@chook.melbourne.sgi.com>
Date: Thu, 14 Sep 2006 11:39:03 +1000 (EST)
From: dgc@SGI.com (David Chinner)
X-archive-position: 8980
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: dgc@SGI.com
Precedence: bulk
X-list: xfs
Content-Length: 1883
Lines: 43

Really fix use after free in xfs_iunpin.

The previous attempts to fix the linux inode use-after-free in xfs_iunpin simply made the problem harder to hit. We actually need complete exclusion between xfs_reclaim and xfs_iunpin, as well as ensuring that the i_flags are consistent during both of these functions. Introduce a new spinlock for exclusion and the i_flags, and fix up xfs_iunpin to use igrab before marking the inode dirty.

Date: Thu Sep 14 11:37:19 AEST 2006
Workarea: chook.melbourne.sgi.com:/build/dgc/isms/2.6.x-xfs-new
Inspected by: m-saito,masano,nathans

The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb

Modid: xfs-linux-melb:xfs-kern:26964a

fs/xfs/xfs_vnodeops.c - 1.683 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_vnodeops.c.diff?r1=text&tr1=1.683&r2=text&tr2=1.682&f=h
- Use new i_flags_lock to protect i_flags.

fs/xfs/xfs_itable.c - 1.149 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_itable.c.diff?r1=text&tr1=1.149&r2=text&tr2=1.148&f=h
- Use new i_flags_lock to protect i_flags.

fs/xfs/xfs_iget.c - 1.221 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_iget.c.diff?r1=text&tr1=1.221&r2=text&tr2=1.220&f=h
- Use new i_flags_lock to protect i_flags.

fs/xfs/xfs_inode.c - 1.452 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_inode.c.diff?r1=text&tr1=1.452&r2=text&tr2=1.451&f=h
- Fix xfs_iunpin to prevent use-after-free of the linux inode.
fs/xfs/xfs_inode.h - 1.216 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_inode.h.diff?r1=text&tr1=1.216&r2=text&tr2=1.215&f=h - Use new i_flags_lock to protect i_flags. fs/xfs/linux-2.6/xfs_super.c - 1.370 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_super.c.diff?r1=text&tr1=1.370&r2=text&tr2=1.369&f=h - Use new i_flags_lock to protect i_flags. From owner-xfs@oss.sgi.com Thu Sep 14 20:17:28 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Sep 2006 20:17:39 -0700 (PDT) Received: from swip.net (mailfe03.tele2.dk [212.247.154.67]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8F3HQZd014490 for ; Thu, 14 Sep 2006 20:17:27 -0700 X-T2-Posting-ID: 3Pczlc0f568823tLSzyYPw== X-Cloudmark-Score: 0.000000 [] Received: from [83.73.5.27] (HELO mogens1.lemo.dk) by mailfe03.swip.net (CommuniGate Pro SMTP 5.0.8) with ESMTPS id 286094762 for linux-xfs@oss.sgi.com; Thu, 14 Sep 2006 17:16:40 +0200 Received: from [192.168.1.89] (dhcp89.lemo.dk [192.168.1.89]) by mogens1.lemo.dk (8.13.7/8.13.6) with ESMTP id k8EFGcrt009815 for ; Thu, 14 Sep 2006 17:16:38 +0200 Message-ID: <45097256.4060601@crc.dk> Date: Thu, 14 Sep 2006 17:16:38 +0200 From: Mogens Kjaer User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.13) Gecko/20060501 Fedora/1.7.13-1.1.fc5 X-Accept-Language: en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: Re: Mounting IRIX disk on Linux References: <45053B2D.50203@crc.dk> <4505712E.5070801@oss.sgi.com> <4506529F.9030109@crc.dk> <4506831A.3000001@sgi.com> <4506D66D.8060206@sandeen.net> <450758E2.30408@sgi.com> In-Reply-To: <450758E2.30408@sgi.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8982 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mk@crc.dk Precedence: bulk X-list: xfs Content-Length: 451 Lines: 17 Timothy Shimmin wrote: .. 
> "file system using version 1 directory format"); > Did this not happen, Mogens? No. Nothing in /var/log/messages, and the only thing I get on the screen when I mount is "Function not implemented". Mogens -- Mogens Kjaer, Carlsberg A/S, Computer Department Gamle Carlsberg Vej 10, DK-2500 Valby, Denmark Phone: +45 33 27 53 25, Fax: +45 33 27 47 08 Email: mk@crc.dk Homepage: http://www.crc.dk From owner-xfs@oss.sgi.com Thu Sep 14 22:09:19 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Sep 2006 22:09:28 -0700 (PDT) Received: from asteria.debian.or.at (asteria.debian.or.at [86.59.21.34]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8F59IZd009014 for ; Thu, 14 Sep 2006 22:09:19 -0700 Received: by asteria.debian.or.at (Postfix, from userid 1002) id 4618670CD07; Fri, 15 Sep 2006 05:10:21 +0200 (CEST) Date: Fri, 15 Sep 2006 05:10:21 +0200 From: Peter Palfrader To: xfs@oss.sgi.com Cc: 387057@bugs.debian.org Subject: Re: Bug#387057: xfsprogs: repeated xfs_repair does not fix the filesystem Message-ID: <20060915031021.GT5221@asteria.noreply.org> References: <87zmd4jdzz.fsf@tac.ki.iif.hu> <200609140157.LAA22186@larry.melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=iso-8859-15 Content-Disposition: inline In-Reply-To: <200609140157.LAA22186@larry.melbourne.sgi.com> X-PGP: 1024D/94C09C7F 5B00 C96D 5D54 AEE1 206B AF84 DE7A AF6E 94C0 9C7F X-Request-PGP: http://www.palfrader.org/keys/94C09C7F.asc X-Accept-Language: de, en User-Agent: Mutt/1.5.9i X-archive-position: 8983 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: peter@palfrader.org Precedence: bulk X-list: xfs Content-Length: 2505 Lines: 59 On Thu, 14 Sep 2006, Barry Naujok wrote: > > Still, xfs_check reported: > > => link count mismatch for inode 400254 (name ?), nlink 0, counted 2 > > => link count mismatch for inode 4239409 (name ?), nlink 0, counted 2 > > => link count mismatch for inode 8388736 (name ?), 
nlink 39, > > counted 38 > > > > Further runs of xfs_repair didn't bring any change. On the root > filesystem the results are much the same, but xfs_check reports: > > => sb_ifree 3042, counted 3041 > > > > I read that xfs_check is being obsoleted in the future, but not sure > which program to trust. Are my filesystems healthy or not? > > This has been reported before, can you try running an older xfsprogs before > 2.8.10 and see how that goes? I think with the dir2 fixes, the nlink stuff > might be a tad broken. I'll also look into it from my end. I have just had a similar problem. xfs_repair 2.8.11 did its stuff, moving a few items into lost+found in the process. After mounting, when I tried to rmdir the directories under lost+found, the filesystem was immediately shut down again:

| [ 434.590246] Ending clean XFS mount for filesystem: md0
| [ 447.677811] xfs_inotobp: xfs_imap() returned an error 22 on md0. Returning error.
| [ 447.678090] xfs_iunlink_remove: xfs_inotobp() returned an error 22 on md0. Returning error.
| [ 447.678383] xfs_inactive: xfs_ifree() returned an error = 22 on md0
| [ 447.678605] xfs_force_shutdown(md0,0x1) called from line 1763 of file fs/xfs/xfs_vnodeops.c. Return address = 0xffffffff803ca50a
| [ 447.678986] Filesystem "md0": I/O Error Detected. Shutting down filesystem: md0

Downgrading xfsprogs to 2.6.20 solved the issue:

} [...]
} Phase 7 - verify and correct link counts...
} resetting inode 789921 nlinks from 0 to 2
} resetting inode 4290022 nlinks from 0 to 2
} resetting inode 4290023 nlinks from 0 to 2
} resetting inode 4290024 nlinks from 0 to 2
} resetting inode 4290025 nlinks from 0 to 2
} [..]
} resetting inode 59501189 nlinks from 0 to 2
} resetting inode 63698112 nlinks from 0 to 2
} resetting inode 63698118 nlinks from 0 to 2
} done

Now my filesystem appears functional again. If you need any more info, I still have an image of the filesystem prior to the treatment with 2.6.20. Cheers, Peter -- | .''`.
** Debian GNU/Linux ** Peter Palfrader | : :' : The universal http://www.palfrader.org/ | `. `' Operating System | `- http://www.debian.org/ From owner-xfs@oss.sgi.com Thu Sep 14 23:56:34 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 14 Sep 2006 23:56:54 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8F6uLZd027652 for ; Thu, 14 Sep 2006 23:56:33 -0700 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA26443; Fri, 15 Sep 2006 16:55:30 +1000 Message-ID: <450A4E87.4030005@sgi.com> Date: Fri, 15 Sep 2006 16:56:07 +1000 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.5 (X11/20060719) MIME-Version: 1.0 To: sgi.bugs.xfs@engr.sgi.com, linux-xfs@oss.sgi.com Subject: TAKE 956240: fixup greedy allocator References: <44CE9F23.7000605@sgi.com> <44EE9DF7.1080904@sgi.com> <44F66CA0.4080008@sgi.com> In-Reply-To: <44F66CA0.4080008@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8984 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs Content-Length: 737 Lines: 19 pv 956240, author: nathans, rv: vapo - Minor fixes in kmem_zalloc_greedy() Date: Fri Sep 15 16:53:00 AEST 2006 Workarea: soarer.melbourne.sgi.com:/home/vapo/isms/linux-xfs-nathan Inspected by: vapo Author: vapo The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:26983a fs/xfs/linux-2.4/kmem.c - 1.38 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.4/kmem.c.diff?r1=text&tr1=1.38&r2=text&tr2=1.37&f=h fs/xfs/linux-2.6/kmem.c - 1.9 - changed 
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/kmem.c.diff?r1=text&tr1=1.9&r2=text&tr2=1.8&f=h - pv 956240, author: nathans, rv: vapo - Minor fixes in kmem_zalloc_greedy() From owner-xfs@oss.sgi.com Fri Sep 15 00:11:03 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Sep 2006 00:11:20 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8F7AoZd032710 for ; Fri, 15 Sep 2006 00:11:01 -0700 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA26746; Fri, 15 Sep 2006 17:09:58 +1000 Message-ID: <450A51EB.90004@sgi.com> Date: Fri, 15 Sep 2006 17:10:35 +1000 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.5 (X11/20060719) MIME-Version: 1.0 To: sgi.bugs.xfs@engr.sgi.com, linux-xfs@oss.sgi.com Subject: TAKE 956241: make ino validation checks consistent in bulkstat References: <44CE9F23.7000605@sgi.com> <44EE9DF7.1080904@sgi.com> <44F66CA0.4080008@sgi.com> <450A4E87.4030005@sgi.com> In-Reply-To: <450A4E87.4030005@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8985 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs Content-Length: 1219 Lines: 26 pv 956241, author: nathans, rv: vapo - make ino validation checks consistent in bulkstat Date: Fri Sep 15 17:08:03 AEST 2006 Workarea: soarer.melbourne.sgi.com:/home/vapo/isms/linux-xfs-nathan Inspected by: vapo Author: vapo The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:26984a fs/xfs/xfs_itable.c - 1.150 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_itable.c.diff?r1=text&tr1=1.150&r2=text&tr2=1.149&f=h fs/xfs/xfs_itable.h - 1.50 - changed 
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_itable.h.diff?r1=text&tr1=1.50&r2=text&tr2=1.49&f=h fs/xfs/linux-2.6/xfs_ksyms.c - 1.52 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_ksyms.c.diff?r1=text&tr1=1.52&r2=text&tr2=1.51&f=h fs/xfs/linux-2.4/xfs_ksyms.c - 1.47 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.4/xfs_ksyms.c.diff?r1=text&tr1=1.47&r2=text&tr2=1.46&f=h fs/xfs/dmapi/xfs_dm.c - 1.23 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/dmapi/xfs_dm.c.diff?r1=text&tr1=1.23&r2=text&tr2=1.22&f=h - pv 956241, author: nathans, rv: vapo - make ino validation checks consistent in bulkstat From owner-xfs@oss.sgi.com Fri Sep 15 04:42:20 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Sep 2006 04:42:38 -0700 (PDT) Received: from tac.ki.iif.hu (tac.ki.iif.hu [193.6.222.43]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8FBgEZd030284 for ; Fri, 15 Sep 2006 04:42:19 -0700 Received: from wferi by tac.ki.iif.hu with local (Exim 4.50) id 1GOC4S-0007pY-0K; Fri, 15 Sep 2006 13:41:24 +0200 To: "Barry Naujok" Cc: Subject: Re: Bug#387057: xfsprogs: repeated xfs_repair does not fix the filesystem References: <200609140157.LAA22186@larry.melbourne.sgi.com> From: Ferenc Wagner Date: Fri, 15 Sep 2006 13:41:23 +0200 In-Reply-To: <200609140157.LAA22186@larry.melbourne.sgi.com> (Barry Naujok's message of "Thu, 14 Sep 2006 12:03:28 +1000") Message-ID: <87mz91mk64.fsf@tac.ki.iif.hu> User-Agent: Gnus/5.1007 (Gnus v5.10.7) Emacs/21.4 (gnu/linux) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-archive-position: 8987 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: wferi@niif.hu Precedence: bulk X-list: xfs Content-Length: 4576 Lines: 138 "Barry Naujok" writes: >> -----Original Message----- >> From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com] >> On Behalf Of Ferenc Wagner >> Sent: Wednesday, 13 September 2006 
7:46 PM >> To: Nathan Scott >> Cc: 387057@bugs.debian.org; xfs@oss.sgi.com >> Subject: Re: Bug#387057: xfsprogs: repeated xfs_repair does >> not fix the filesystem >> >> >> Still, xfs_check reported: >> => link count mismatch for inode 400254 (name ?), nlink 0, counted 2 >> => link count mismatch for inode 4239409 (name ?), nlink 0, counted 2 >> => link count mismatch for inode 8388736 (name ?), nlink 39, >> counted 38 >> >> Further runs of xfs_repair didn't bring any change. On the root >> filesystem the results are much the same, but xfs_check reports: >> => sb_ifree 3042, counted 3041 > > This has been reported before, can you try running an older xfsprogs before > 2.8.10 and see how that goes? I think with the dir2 fixes, the nlink stuff > might be a tad broken. I'll also look into it from my end. Thanks. I tried xfsprogs 2.8.4 with the following result: # xfs_repair /dev/main/usr Phase 1 - find and verify superblock... Phase 2 - using internal log - zero log... - scan filesystem freespace and inode maps... - found root inode chunk Phase 3 - for each AG... - scan and clear agi unlinked lists... - process known inodes and perform inode discovery... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - process newly discovered inodes... Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - clear lost+found (if it exists) ... - clearing existing "lost+found" inode - marking entry "lost+found" to be deleted - check for inodes claiming duplicate blocks... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 Phase 5 - rebuild AG headers and trees... - reset superblock... Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - ensuring existence of lost+found directory - traversing filesystem starting at / ... rebuilding directory inode 128 - traversal finished ... - traversing all unattached subtrees ... - traversals finished ... 
- moving disconnected inodes to lost+found ... Phase 7 - verify and correct link counts... resetting inode 400254 nlinks from 0 to 2 resetting inode 4239409 nlinks from 0 to 2 resetting inode 8388736 nlinks from 39 to 38 done # xfs_check /dev/main/usr # That is, my /usr partition looks fixed. On the other hand: # xfs_repair -d /dev/main/root Phase 1 - find and verify superblock... Phase 2 - using internal log - zero log... - scan filesystem freespace and inode maps... - found root inode chunk Phase 3 - for each AG... - scan and clear agi unlinked lists... - process known inodes and perform inode discovery... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - process newly discovered inodes... Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - clear lost+found (if it exists) ... - clearing existing "lost+found" inode - marking entry "lost+found" to be deleted - check for inodes claiming duplicate blocks... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 Phase 5 - rebuild AG headers and trees... - reset superblock... Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - ensuring existence of lost+found directory - traversing filesystem starting at / ... rebuilding directory inode 128 - traversal finished ... - traversing all unattached subtrees ... - traversals finished ... - moving disconnected inodes to lost+found ... Phase 7 - verify and correct link counts... done # xfs_check /dev/main/root sb_ifree 3044, counted 3043 # That is, almost like before, but with incremented numbers (there was filesystem activity since then). So, there is some progress, but part of the problem remained. -- Thanks, Feri. 
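The "resetting inode ... nlinks" lines in phase 7 above reflect the same bookkeeping xfs_check complains about: the nlink stored in each inode versus the number of directory entries that actually reference it. A toy model of that check (Python, for illustration only; this is not xfs_repair's implementation, and the sample entries are made up around inode numbers from the report):

```python
# Toy model of the phase-7 nlink verification: every directory entry
# that points at an inode -- including "." and ".." -- contributes one
# link, and the total is compared against the inode's stored nlink.
from collections import Counter

def verify_link_counts(stored_nlink, dirents):
    """stored_nlink: {inode: nlink recorded on disk}
    dirents: iterable of (name, target_inode) over all directories.
    Returns [(inode, stored, counted)] for every mismatch."""
    counted = Counter(target for _name, target in dirents)
    return [(ino, n, counted[ino])
            for ino, n in sorted(stored_nlink.items())
            if n != counted[ino]]

# A directory such as 400254 reachable from root inode 128 is referenced
# once by its parent's entry and once by its own ".", so its correct
# nlink is 2 -- which is why disconnected directories get reset to 2.
dirents = [(".", 128), ("..", 128), ("lost+found", 400254),
           (".", 400254), ("..", 128)]
stored = {128: 3, 400254: 0}     # 400254 wrongly recorded as nlink 0
assert verify_link_counts(stored, dirents) == [(400254, 0, 2)]
```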
From owner-xfs@oss.sgi.com Fri Sep 15 09:51:58 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Sep 2006 09:52:09 -0700 (PDT) Received: from slurp.thebarn.com (cattelan-host202.dsl.visi.com [208.42.117.202]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8FGpvZd020208 for ; Fri, 15 Sep 2006 09:51:58 -0700 Received: from [127.0.0.1] (lupo.thebarn.com [10.0.0.10]) (authenticated bits=0) by slurp.thebarn.com (8.13.8/8.13.5) with ESMTP id k8FGon5l005329; Fri, 15 Sep 2006 11:51:07 -0500 (CDT) (envelope-from cattelan@thebarn.com) Subject: Re: Mounting IRIX disk on Linux From: Russell Cattelan To: Mogens Kjaer Cc: linux-xfs@oss.sgi.com In-Reply-To: <45097256.4060601@crc.dk> References: <45053B2D.50203@crc.dk> <4505712E.5070801@oss.sgi.com> <4506529F.9030109@crc.dk> <4506831A.3000001@sgi.com> <4506D66D.8060206@sandeen.net> <450758E2.30408@sgi.com> <45097256.4060601@crc.dk> Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="=-3V+vRC0rbAZqcNDsUe5K" Date: Fri, 15 Sep 2006 11:50:49 -0500 Message-Id: <1158339049.25151.3.camel@xenon.msp.redhat.com> Mime-Version: 1.0 X-Mailer: Evolution 2.8.0-1mdv2007.0 X-archive-position: 8989 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cattelan@thebarn.com Precedence: bulk X-list: xfs Content-Length: 1033 Lines: 37 --=-3V+vRC0rbAZqcNDsUe5K Content-Type: text/plain Content-Transfer-Encoding: quoted-printable On Thu, 2006-09-14 at 17:16 +0200, Mogens Kjaer wrote: > Timothy Shimmin wrote: > .. > > "file system using version 1 directory format"); > > > Did this not happen, Mogens? > > No. Nothing in /var/log/messages, and the only thing I > get on the screen when I mount is "Function not implemented". BTW, older versions of XFS did have limited support for v1 directories. It didn't work in all cases (NFS was a big problem) but it should work well enough to get your data off.
I would grab an older kernel off the oss ftp site. >=20 > Mogens >=20 --=-3V+vRC0rbAZqcNDsUe5K Content-Type: application/pgp-signature; name=signature.asc Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.5 (GNU/Linux) iD8DBQBFCtnpNRmM+OaGhBgRApw7AJ9BIhSCEOs22TqqwhcWLmWYiyrMKACdE5oK pr9637dc6abb1Pj3te1PtwI= =mkdu -----END PGP SIGNATURE----- --=-3V+vRC0rbAZqcNDsUe5K-- From owner-xfs@oss.sgi.com Fri Sep 15 10:51:51 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Sep 2006 10:52:00 -0700 (PDT) Received: from mail.max-t.com (h216-18-124-229.gtcust.grouptelecom.net [216.18.124.229]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8FHppZd029110 for ; Fri, 15 Sep 2006 10:51:51 -0700 Received: from madrid.max-t.internal ([192.168.1.189] ident=[U2FsdGVkX1+BVs4G8Z6T84sX/L7PGDrmRJUI5Iu1N04=]) by mail.max-t.com with esmtp (Exim 4.43) id 1GOGpM-0007sH-4y for xfs@oss.sgi.com; Fri, 15 Sep 2006 12:46:09 -0400 Date: Fri, 15 Sep 2006 12:44:16 -0400 (EDT) From: Stephane Doyon X-X-Sender: sdoyon@madrid.max-t.internal To: xfs@oss.sgi.com Message-ID: MIME-Version: 1.0 X-SA-Exim-Connect-IP: 192.168.1.189 X-SA-Exim-Mail-From: sdoyon@max-t.com Subject: File system block reservation mechanism is broken Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-SA-Exim-Version: 4.1 (built Thu, 08 Sep 2005 14:17:48 -0500) X-SA-Exim-Scanned: Yes (on mail.max-t.com) X-archive-position: 8990 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sdoyon@max-t.com Precedence: bulk X-list: xfs Content-Length: 6391 Lines: 157 [Resending. Seems my previous post did not make it somehow...] The mechanism allowing to reserve file system blocks, xfs_reserve_blocks() / XFS_IOC_SET_RESBLKS, appears to have been broken by the patch that introduced per-cpu superblock counters. 
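The breakage described here can be sketched with a toy model (Python, purely illustrative; this is not the XFS data structure): once the free-block count is sharded across per-cpu counters, a superblock-style global field no longer reflects reality, so code that reads or modifies only that field, as xfs_reserve_blocks() does, operates on a stale value. The xfs_icsb_disable_counter() call in the proposed patch corresponds to folding the shards back before consulting the field.

```python
# Toy illustration (not the XFS code) of a free-block counter sharded
# per CPU.  The global field stays at its pre-sharding residue, so
# reading or writing it alone sees a stale count.
class ShardedCounter:
    def __init__(self, ncpus, initial):
        self.global_field = 0            # stands in for mp->m_sb.sb_fdblocks
        per_cpu, rem = divmod(initial, ncpus)
        self.shards = [per_cpu] * ncpus  # count spread across CPUs
        self.shards[0] += rem

    def modify(self, cpu, delta):
        self.shards[cpu] += delta        # fast path touches one shard only

    def read_accurate(self):
        # An accurate read must gather the shards, which is what
        # disabling the per-cpu counter achieves in the patch below.
        return self.global_field + sum(self.shards)

c = ShardedCounter(ncpus=4, initial=1000)
c.modify(cpu=2, delta=-100)
assert c.global_field == 0       # the global field alone is useless
assert c.read_accurate() == 900  # summing the shards is correct
```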
The code in xfs_reserve_blocks() just locks the superblock and then consults and modifies mp->m_sb.sb_fdblocks. However, if the per-cpu counter is active, the count is spread across the per-cpu counters, and the superblock field does not contain an accurate count, nor does modifying it have any effect. The observed behavior is that xfs_io -xc "resblks " has no effect: the resblks does get set and can be retrieved, but the free blocks count does not decrease. The XFS_SET_ASIDE_BLOCKS, introduced in the recent full file system deadlock fix, probably also needs to be taken into account. I'm not particularly familiar with the code, but AFAICT something along the lines of the following patch should fix it. Signed-off-by: Stephane Doyon Index: linux/fs/xfs/xfs_fsops.c =================================================================== --- linux.orig/fs/xfs/xfs_fsops.c 2006-09-13 11:31:36.000000000 -0400 +++ linux/fs/xfs/xfs_fsops.c 2006-09-13 11:32:06.782591491 -0400 @@ -505,6 +505,7 @@ request = *inval; s = XFS_SB_LOCK(mp); + xfs_icsb_disable_counter(mp, XFS_SBS_FDBLOCKS); /* * If our previous reservation was larger than the current value, @@ -520,14 +521,14 @@ mp->m_resblks = request; } else { delta = request - mp->m_resblks; - lcounter = mp->m_sb.sb_fdblocks - delta; + lcounter = mp->m_sb.sb_fdblocks - XFS_SET_ASIDE_BLOCKS(mp) - delta; if (lcounter < 0) { /* We can't satisfy the request, just get what we can */ - mp->m_resblks += mp->m_sb.sb_fdblocks; - mp->m_resblks_avail += mp->m_sb.sb_fdblocks; - mp->m_sb.sb_fdblocks = 0; + mp->m_resblks += mp->m_sb.sb_fdblocks - XFS_SET_ASIDE_BLOCKS(mp); + mp->m_resblks_avail += mp->m_sb.sb_fdblocks - XFS_SET_ASIDE_BLOCKS(mp); + mp->m_sb.sb_fdblocks = XFS_SET_ASIDE_BLOCKS(mp); } else { - mp->m_sb.sb_fdblocks = lcounter; + mp->m_sb.sb_fdblocks = lcounter + XFS_SET_ASIDE_BLOCKS(mp); mp->m_resblks = request; mp->m_resblks_avail += delta; } Index: linux/fs/xfs/xfs_mount.c 
=================================================================== --- linux.orig/fs/xfs/xfs_mount.c 2006-09-13 11:31:36.000000000 -0400 +++ linux/fs/xfs/xfs_mount.c 2006-09-13 11:32:06.784591724 -0400 @@ -58,7 +58,6 @@ int, int); STATIC int xfs_icsb_modify_counters_locked(xfs_mount_t *, xfs_sb_field_t, int, int); -STATIC int xfs_icsb_disable_counter(xfs_mount_t *, xfs_sb_field_t); #else @@ -1254,26 +1253,6 @@ } /* - * In order to avoid ENOSPC-related deadlock caused by - * out-of-order locking of AGF buffer (PV 947395), we place - * constraints on the relationship among actual allocations for - * data blocks, freelist blocks, and potential file data bmap - * btree blocks. However, these restrictions may result in no - * actual space allocated for a delayed extent, for example, a data - * block in a certain AG is allocated but there is no additional - * block for the additional bmap btree block due to a split of the - * bmap btree of the file. The result of this may lead to an - * infinite loop in xfssyncd when the file gets flushed to disk and - * all delayed extents need to be actually allocated. To get around - * this, we explicitly set aside a few blocks which will not be - * reserved in delayed allocation. Considering the minimum number of - * needed freelist blocks is 4 fsbs _per AG_, a potential split of file's bmap - * btree requires 1 fsb, so we set the number of set-aside blocks - * to 4 + 4*agcount. - */ -#define XFS_SET_ASIDE_BLOCKS(mp) (4 + ((mp)->m_sb.sb_agcount * 4)) - -/* * xfs_mod_incore_sb_unlocked() is a utility routine common used to apply * a delta to a specified field in the in-core superblock. Simply * switch on the field indicated and apply the delta to that field. 
@@ -1906,7 +1885,7 @@ return test_bit(field, &mp->m_icsb_counters); } -STATIC int +int xfs_icsb_disable_counter( xfs_mount_t *mp, xfs_sb_field_t field) Index: linux/fs/xfs/xfs_mount.h =================================================================== --- linux.orig/fs/xfs/xfs_mount.h 2006-09-13 11:31:36.000000000 -0400 +++ linux/fs/xfs/xfs_mount.h 2006-09-13 11:33:24.441557999 -0400 @@ -307,10 +307,14 @@ extern int xfs_icsb_init_counters(struct xfs_mount *); extern void xfs_icsb_sync_counters_lazy(struct xfs_mount *); +/* Can't forward declare typedefs... */ +struct xfs_mount; +extern int xfs_icsb_disable_counter(struct xfs_mount *, xfs_sb_field_t); # else # define xfs_icsb_init_counters(mp) (0) # define xfs_icsb_sync_counters_lazy(mp) do { } while (0) +#define xfs_icsb_disable_counters(mp, field) do { } while (0) #endif typedef struct xfs_mount { @@ -574,6 +578,27 @@ # define XFS_SB_LOCK(mp) mutex_spinlock(&(mp)->m_sb_lock) # define XFS_SB_UNLOCK(mp,s) mutex_spinunlock(&(mp)->m_sb_lock,(s)) + +/* + * In order to avoid ENOSPC-related deadlock caused by + * out-of-order locking of AGF buffer (PV 947395), we place + * constraints on the relationship among actual allocations for + * data blocks, freelist blocks, and potential file data bmap + * btree blocks. However, these restrictions may result in no + * actual space allocated for a delayed extent, for example, a data + * block in a certain AG is allocated but there is no additional + * block for the additional bmap btree block due to a split of the + * bmap btree of the file. The result of this may lead to an + * infinite loop in xfssyncd when the file gets flushed to disk and + * all delayed extents need to be actually allocated. To get around + * this, we explicitly set aside a few blocks which will not be + * reserved in delayed allocation. 
Considering the minimum number of + * needed freelist blocks is 4 fsbs _per AG_, a potential split of file's bmap + * btree requires 1 fsb, so we set the number of set-aside blocks + * to 4 + 4*agcount. + */ +#define XFS_SET_ASIDE_BLOCKS(mp) (4 + ((mp)->m_sb.sb_agcount * 4)) + extern xfs_mount_t *xfs_mount_init(void); extern void xfs_mod_sb(xfs_trans_t *, __int64_t); extern void xfs_mount_free(xfs_mount_t *mp, int remove_bhv); From owner-xfs@oss.sgi.com Fri Sep 15 15:07:35 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Sep 2006 15:07:46 -0700 (PDT) Received: from limpet.umeoce.maine.edu (limpet.umeoce.maine.edu [130.111.192.115]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8FM7YZd007926 for ; Fri, 15 Sep 2006 15:07:35 -0700 Received: from localhost (cousins@localhost) by limpet.umeoce.maine.edu (8.9.3/8.9.3) with ESMTP id RAA31806 for ; Fri, 15 Sep 2006 17:07:07 -0400 Date: Fri, 15 Sep 2006 17:07:07 -0400 (EDT) From: Steve Cousins Reply-To: cousins@umit.maine.edu To: xfs@oss.sgi.com Subject: swidth with mdadm and RAID6 Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 8991 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cousins@limpet.umeoce.maine.edu Precedence: bulk X-list: xfs Content-Length: 1511 Lines: 33 I have a RAID6 array of 11 500 GB drives using mdadm. There is one hot-spare so the number of data drives is 8. I used mkfs.xfs with defaults to create the file system and it seemed to pick up the chunk size I used correctly (64K) but I think it got the swidth wrong. 
Here is what xfs_info says:

===========================================================================
meta-data=/dev/md0       isize=256    agcount=32, agsize=30524160 blks
         =               sectsz=4096  attr=0
data     =               bsize=4096   blocks=976772992, imaxpct=25
         =               sunit=16     swidth=144 blks, unwritten=1
naming   =version 2      bsize=4096
log      =internal       bsize=4096   blocks=32768, version=2
         =               sectsz=4096  sunit=1 blks
realtime =none           extsz=589824 blocks=0, rtextents=0
===========================================================================

So, sunit*bsize=64K, but swidth=144 and swidth/sunit=9, so it looks like it thought there were 9 data drives instead of 8. Am I diagnosing this correctly? Should I recreate the array and explicitly set sunit=16 and swidth=128? Thanks for your help. Steve ______________________________________________________________________ Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu Univ. of Maine, Orono, ME 04469 Phone: (207) 581-4302 From owner-xfs@oss.sgi.com Fri Sep 15 16:50:32 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 15 Sep 2006 16:50:39 -0700 (PDT) Received: from ty.sabi.co.UK (82-69-39-138.dsl.in-addr.zen.co.uk [82.69.39.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8FNoTZd026063 for ; Fri, 15 Sep 2006 16:50:32 -0700 Received: from from [127.0.0.1] (helo=base.ty.sabi.co.UK) by ty.sabi.co.UK with esmtp(Exim 4.62 #1) id 1GONR8-0002lz-Ue for ; Sat, 16 Sep 2006 00:49:34 +0100 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <17675.15374.698393.232722@base.ty.sabi.co.UK> Date: Sat, 16 Sep 2006 00:49:34 +0100 X-Face: SMJE]JPYVBO-9UR%/8d'mG.F!@.,l@c[f'[%S8'BZIcbQc3/">GrXDwb#;fTRGNmHr^JFb SAptvwWc,0+z+~p~"Gdr4H$(|N(yF(wwCM2bW0~U?HPEE^fkPGx^u[*[yV.gyB!hDOli}EF[\cW*S H&spRGFL}{`bj1TaD^l/"[ msn( /TH#THs{Hpj>)]f> Subject: Re: swidth with mdadm and RAID6 In-Reply-To: References: X-Mailer: VM 7.17 under 21.4 (patch 19)
XEmacs Lucid From: pg_xfs@xfs.for.sabi.co.UK (Peter Grandi) X-Disclaimer: This message contains only personal opinions X-archive-position: 8992 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: pg_xfs@xfs.for.sabi.co.UK Precedence: bulk X-list: xfs Content-Length: 574 Lines: 13 >>> On Fri, 15 Sep 2006 17:07:07 -0400 (EDT), Steve Cousins >>> said: cousins> I have a RAID6 array of 11 500 GB drives using mdadm. cousins> There is one hot-spare so the number of data drives is cousins> 8. I used mkfs.xfs with defaults to create the file cousins> system and it seemed to pick up the chunk size I used cousins> correctly (64K) but I think it got the swidth wrong. Worrying about the impact on performance of a relatively small thing like 'swidth' for something like an 8+2 RAID6 is quite funny. http://WWW.BAARF.com/ From owner-xfs@oss.sgi.com Sun Sep 17 16:47:09 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 17 Sep 2006 16:47:22 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8HNkuZd004677 for ; Sun, 17 Sep 2006 16:47:08 -0700 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA27842; Mon, 18 Sep 2006 09:46:04 +1000 Message-ID: <450DDE60.50300@sgi.com> Date: Mon, 18 Sep 2006 09:46:40 +1000 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.5 (X11/20060719) MIME-Version: 1.0 To: sgi.bugs.xfs@engr.sgi.com, linux-xfs@oss.sgi.com Subject: TAKE 955947: Infinite loop in xfs_bulkstat() on formatter() error References: <44CE9F23.7000605@sgi.com> <44EE9DF7.1080904@sgi.com> <44F66CA0.4080008@sgi.com> <450A4E87.4030005@sgi.com> <450A531E.2070205@sgi.com> In-Reply-To: <450A531E.2070205@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8996 
X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs Content-Length: 580 Lines: 17 955947: Infinite loop in xfs_bulkstat() on formatter() error Date: Mon Sep 18 09:43:49 AEST 2006 Workarea: soarer.melbourne.sgi.com:/home/vapo/isms/linux-xfs Inspected by: nathans Author: vapo The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: xfs-linux-melb:xfs-kern:26986a fs/xfs/xfs_itable.c - 1.151 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_itable.c.diff?r1=text&tr1=1.151&r2=text&tr2=1.150&f=h - pv 955947, rv: nathans - update lastino on non critical errors returned by formatter() From owner-xfs@oss.sgi.com Mon Sep 18 07:25:39 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 07:25:48 -0700 (PDT) Received: from garda.mci.edu (intern.mci4me.at [193.171.232.20]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8IEPbZd006725 for ; Mon, 18 Sep 2006 07:25:38 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by garda.mci.edu (Postfix) with ESMTP id 7C4501046D1 for ; Mon, 18 Sep 2006 15:32:30 +0200 (CEST) Received: from garda.mci.edu ([127.0.0.1]) by localhost (rekim [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 26513-07 for ; Mon, 18 Sep 2006 13:32:21 +0000 (UTC) Received: from crados (pc61vw.mci.edu [193.171.235.61]) by garda.mci.edu (Postfix) with ESMTP id A91FB10465A for ; Mon, 18 Sep 2006 15:32:21 +0200 (CEST) From: christian gattermair To: xfs@oss.sgi.com Subject: xfs_check - out of memory | xfs_repair - superblock error reading Date: Mon, 18 Sep 2006 15:19:18 +0200 User-Agent: KMail/1.9.4 MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200609181519.18448.christian.gattermair@mci.edu> X-archive-position: 8998 X-ecartis-version: Ecartis v1.0.0 Sender: 
xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: christian.gattermair@mci.edu Precedence: bulk X-list: xfs Content-Length: 1728 Lines: 58 hi! after a reboot of our box (debian sarge, 3ware controller, raid 5 - 3tb xfs) we cannot mount it any more. from syslog: Sep 18 12:51:36 localhost kernel: SGI XFS with ACLs, security attributes, realtime, large block numbers, no debug enabled Sep 18 12:51:36 localhost kernel: SGI XFS Quota Management subsystem Sep 18 12:51:53 localhost kernel: attempt to access beyond end of device Sep 18 12:51:53 localhost kernel: sdb1: rw=0, want=6445069056, limit=2150101796 Sep 18 12:51:53 localhost kernel: I/O error in filesystem ("sdb1") meta-data dev sdb1 block 0x18027f2ff ("xfs_read_buf") error 5 buf count 512 Sep 18 12:51:53 localhost kernel: XFS: size check 2 failed xfs_check fails with: xfs_check /dev/sdb1 XFS: totally zeroed log xfs_check: out of memory there is plenty of free memory (i also tried adding more swap) Mem: 1011 1006 5 0 750 111 -/+ buffers/cache: 144 867 Swap: 57812 0 57812 does xfs_check only look at RAM, or does it also use swap? is there any way to make it use the swap? second question: xfs_repair runs but cannot find a valid superblock. any hints? xfs_repair /dev/sdb1 Phase 1 - find and verify superblock... error reading superblock 11 -- seek to offset 1134332153856 failed couldn't verify primary superblock - bad magic number !!! attempting to find secondary superblock... ... ... ..............found candidate secondary superblock... error reading superblock 11 -- seek to offset 1134332153856 failed unable to verify superblock, continuing... the whole system ran for a year without any errors; the only shutdown was today, for changing the UPS .... thanks for any hint! 
with friendly greetings, christian gattermair From owner-xfs@oss.sgi.com Mon Sep 18 07:42:19 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 07:42:26 -0700 (PDT) Received: from sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8IEgHZd009800 for ; Mon, 18 Sep 2006 07:42:19 -0700 Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 75136180173DC; Mon, 18 Sep 2006 09:41:38 -0500 (CDT) Message-ID: <450EB025.5020007@oss.sgi.com> Date: Mon, 18 Sep 2006 09:41:41 -0500 From: linux-xfs@oss.sgi.com User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: christian gattermair CC: xfs@oss.sgi.com Subject: Re: xfs_check - out of memory | xfs_repair - superblock error reading References: <200609181519.18448.christian.gattermair@mci.edu> In-Reply-To: <200609181519.18448.christian.gattermair@mci.edu> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 8999 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: linux-xfs@oss.sgi.com Precedence: bulk X-list: xfs Content-Length: 725 Lines: 21 christian gattermair wrote: > xfs_repair works but can not find any superblock. any hints? > > xfs_repair /dev/sdb1 > Phase 1 - find and verify superblock... > error reading superblock 11 -- seek to offset 1134332153856 failed > couldn't verify primary superblock - bad magic number !!! > > attempting to find secondary superblock... > ... > ... > ..............found candidate secondary superblock... > error reading superblock 11 -- seek to offset 1134332153856 failed That's about a terabyte into your 3t fs, but you can't seek to it? Any kernel messages when this happens? What does /proc/partitions and/or parted say about the size of /dev/sdb1? 
Seems like maybe your device itself is not as expected. -Eric From owner-xfs@oss.sgi.com Mon Sep 18 07:51:08 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 07:51:16 -0700 (PDT) Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8IEp3Zd011412 for ; Mon, 18 Sep 2006 07:51:08 -0700 Received: from agami.com ([192.168.168.132]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8IEoP2c007361 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 18 Sep 2006 07:50:25 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8IEoKbF013904 for ; Mon, 18 Sep 2006 07:50:20 -0700 Received: from [10.12.12.141] ([10.12.12.141]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 18 Sep 2006 07:53:47 -0700 Message-ID: <450EB248.3000108@agami.com> Date: Mon, 18 Sep 2006 20:20:48 +0530 From: Shailendra Tripathi User-Agent: Mozilla Thunderbird 0.9 (X11/20041127) X-Accept-Language: en-us, en MIME-Version: 1.0 To: cousins@umit.maine.edu CC: xfs@oss.sgi.com Subject: Re: swidth with mdadm and RAID6 References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 18 Sep 2006 14:53:47.0609 (UTC) FILETIME=[3EDBFC90:01C6DB32] X-Scanned-By: MIMEDefang 2.36 X-archive-position: 9000 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 1792 Lines: 45 Can you list the output of 1. cat /proc/mdstat 2. the command to create 8+2 RAID6 with one spare ? 3. and output of following: xfs_db -r /dev/md* xfs_db> sb xfs_db> p -shailendra Steve Cousins wrote: > I have a RAID6 array of 11 500 GB drives using mdadm. There is one > hot-spare so the number of data drives is 8. 
I used mkfs.xfs with > defaults to create the file system and it seemed to pick up the chunk size > I used correctly (64K) but I think it got the swidth wrong. Here is what > xfs_info says: > > =========================================================================== > meta-data=/dev/md0 isize=256 agcount=32, agsize=30524160 > blks > = sectsz=4096 attr=0 > data = bsize=4096 blocks=976772992, imaxpct=25 > = sunit=16 swidth=144 blks, unwritten=1 > naming =version 2 bsize=4096 > log =internal bsize=4096 blocks=32768, version=2 > = sectsz=4096 sunit=1 blks > realtime =none extsz=589824 blocks=0, rtextents=0 > =========================================================================== > > So, sunit*bsize=64K, but swidth=144 and swidth/sunit=9 so it looks like it > thought there were 9 data drives instead of 8. > > Am I diagnosing this correctly? Should I recreate the array and > explicitly set sunit=16 and swidth=128? > > Thanks for your help. > > Steve > ______________________________________________________________________ > Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu > Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu > Univ. 
of Maine, Orono, ME 04469 Phone: (207) 581-4302 > > From owner-xfs@oss.sgi.com Mon Sep 18 08:02:47 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 08:02:54 -0700 (PDT) Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8IF2gZd014417 for ; Mon, 18 Sep 2006 08:02:46 -0700 Received: from agami.com ([192.168.168.132]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8IF212c007503 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 18 Sep 2006 08:02:06 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8IF1uUr014151 for ; Mon, 18 Sep 2006 08:01:56 -0700 Received: from [10.12.12.141] ([10.12.12.141]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 18 Sep 2006 08:05:23 -0700 Message-ID: <450EB500.3070000@agami.com> Date: Mon, 18 Sep 2006 20:32:24 +0530 From: Shailendra Tripathi User-Agent: Mozilla Thunderbird 0.9 (X11/20041127) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Stephane Doyon CC: xfs@oss.sgi.com Subject: Re: File system block reservation mechanism is broken References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 18 Sep 2006 15:05:23.0390 (UTC) FILETIME=[DD93BDE0:01C6DB33] X-Scanned-By: MIMEDefang 2.36 X-archive-position: 9001 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 7481 Lines: 176 Hi Stephane, > The code in xfs_reserve_blocks() just locks the superblock and then > consults and modifies mp->m_sb.sb_fdblocks. However, if the per-cpu > counter is active, the count is spread across the per-cpu counters, and > the superblock field does not contain an accurate count, nor does > modifying it have any effect. This is the fast path. 
However, there is a slow path where it actually falls back to the earlier mechanism, where the global lock (spin-lock) is held and then counters are guaranteed to be consistent. The goal is not to take the global lock except in extreme cases (when performance might go down anyway due to other reasons). I am really not sure about your observations on xfs_io. Can you please clarify a little more as to why it fails? I can't see the problem. Regards, Shailendra Stephane Doyon wrote: > [Resending. Seems my previous post did not make it somehow...] > > The mechanism allowing to reserve file system blocks, > xfs_reserve_blocks() / XFS_IOC_SET_RESBLKS, appears to have been broken > by the patch that introduced per-cpu superblock counters. > > > The observed behavior is that xfs_io -xc "resblks " has no > effect: the resblks does get set and can be retrieved, but the free > blocks count does not decrease. > > The XFS_SET_ASIDE_BLOCKS, introduced in the recent full file system > deadlock fix, probably also needs to be taken into account. > > I'm not particularly familiar with the code, but AFAICT something along > the lines of the following patch should fix it. 
> > Signed-off-by: Stephane Doyon > > Index: linux/fs/xfs/xfs_fsops.c > =================================================================== > --- linux.orig/fs/xfs/xfs_fsops.c 2006-09-13 11:31:36.000000000 -0400 > +++ linux/fs/xfs/xfs_fsops.c 2006-09-13 11:32:06.782591491 -0400 > @@ -505,6 +505,7 @@ > > request = *inval; > s = XFS_SB_LOCK(mp); > + xfs_icsb_disable_counter(mp, XFS_SBS_FDBLOCKS); > > /* > * If our previous reservation was larger than the current value, > @@ -520,14 +521,14 @@ > mp->m_resblks = request; > } else { > delta = request - mp->m_resblks; > - lcounter = mp->m_sb.sb_fdblocks - delta; > + lcounter = mp->m_sb.sb_fdblocks - XFS_SET_ASIDE_BLOCKS(mp) - > delta; > if (lcounter < 0) { > /* We can't satisfy the request, just get what we can */ > - mp->m_resblks += mp->m_sb.sb_fdblocks; > - mp->m_resblks_avail += mp->m_sb.sb_fdblocks; > - mp->m_sb.sb_fdblocks = 0; > + mp->m_resblks += mp->m_sb.sb_fdblocks - > XFS_SET_ASIDE_BLOCKS(mp); > + mp->m_resblks_avail += mp->m_sb.sb_fdblocks - > XFS_SET_ASIDE_BLOCKS(mp); > + mp->m_sb.sb_fdblocks = XFS_SET_ASIDE_BLOCKS(mp); > } else { > - mp->m_sb.sb_fdblocks = lcounter; > + mp->m_sb.sb_fdblocks = lcounter + XFS_SET_ASIDE_BLOCKS(mp); > mp->m_resblks = request; > mp->m_resblks_avail += delta; > } > Index: linux/fs/xfs/xfs_mount.c > =================================================================== > --- linux.orig/fs/xfs/xfs_mount.c 2006-09-13 11:31:36.000000000 -0400 > +++ linux/fs/xfs/xfs_mount.c 2006-09-13 11:32:06.784591724 -0400 > @@ -58,7 +58,6 @@ > int, int); > STATIC int xfs_icsb_modify_counters_locked(xfs_mount_t *, > xfs_sb_field_t, > int, int); > -STATIC int xfs_icsb_disable_counter(xfs_mount_t *, xfs_sb_field_t); > > #else > > @@ -1254,26 +1253,6 @@ > } > > /* > - * In order to avoid ENOSPC-related deadlock caused by > - * out-of-order locking of AGF buffer (PV 947395), we place > - * constraints on the relationship among actual allocations for > - * data blocks, freelist blocks, and potential file 
data bmap > - * btree blocks. However, these restrictions may result in no > - * actual space allocated for a delayed extent, for example, a data > - * block in a certain AG is allocated but there is no additional > - * block for the additional bmap btree block due to a split of the > - * bmap btree of the file. The result of this may lead to an > - * infinite loop in xfssyncd when the file gets flushed to disk and > - * all delayed extents need to be actually allocated. To get around > - * this, we explicitly set aside a few blocks which will not be > - * reserved in delayed allocation. Considering the minimum number of > - * needed freelist blocks is 4 fsbs _per AG_, a potential split of > file's bmap > - * btree requires 1 fsb, so we set the number of set-aside blocks > - * to 4 + 4*agcount. > - */ > -#define XFS_SET_ASIDE_BLOCKS(mp) (4 + ((mp)->m_sb.sb_agcount * 4)) > - > -/* > * xfs_mod_incore_sb_unlocked() is a utility routine common used to apply > * a delta to a specified field in the in-core superblock. Simply > * switch on the field indicated and apply the delta to that field. > @@ -1906,7 +1885,7 @@ > return test_bit(field, &mp->m_icsb_counters); > } > > -STATIC int > +int > xfs_icsb_disable_counter( > xfs_mount_t *mp, > xfs_sb_field_t field) > Index: linux/fs/xfs/xfs_mount.h > =================================================================== > --- linux.orig/fs/xfs/xfs_mount.h 2006-09-13 11:31:36.000000000 -0400 > +++ linux/fs/xfs/xfs_mount.h 2006-09-13 11:33:24.441557999 -0400 > @@ -307,10 +307,14 @@ > > extern int xfs_icsb_init_counters(struct xfs_mount *); > extern void xfs_icsb_sync_counters_lazy(struct xfs_mount *); > +/* Can't forward declare typedefs... 
*/ > +struct xfs_mount; > +extern int xfs_icsb_disable_counter(struct xfs_mount *, xfs_sb_field_t); > > # else > # define xfs_icsb_init_counters(mp) (0) > # define xfs_icsb_sync_counters_lazy(mp) do { } while (0) > +#define xfs_icsb_disable_counters(mp, field) do { } while (0) > #endif > > typedef struct xfs_mount { > @@ -574,6 +578,27 @@ > # define XFS_SB_LOCK(mp) mutex_spinlock(&(mp)->m_sb_lock) > # define XFS_SB_UNLOCK(mp,s) mutex_spinunlock(&(mp)->m_sb_lock,(s)) > > + > +/* > + * In order to avoid ENOSPC-related deadlock caused by > + * out-of-order locking of AGF buffer (PV 947395), we place > + * constraints on the relationship among actual allocations for > + * data blocks, freelist blocks, and potential file data bmap > + * btree blocks. However, these restrictions may result in no > + * actual space allocated for a delayed extent, for example, a data > + * block in a certain AG is allocated but there is no additional > + * block for the additional bmap btree block due to a split of the > + * bmap btree of the file. The result of this may lead to an > + * infinite loop in xfssyncd when the file gets flushed to disk and > + * all delayed extents need to be actually allocated. To get around > + * this, we explicitly set aside a few blocks which will not be > + * reserved in delayed allocation. Considering the minimum number of > + * needed freelist blocks is 4 fsbs _per AG_, a potential split of > file's bmap > + * btree requires 1 fsb, so we set the number of set-aside blocks > + * to 4 + 4*agcount. 
> + */ > +#define XFS_SET_ASIDE_BLOCKS(mp) (4 + ((mp)->m_sb.sb_agcount * 4)) > + > extern xfs_mount_t *xfs_mount_init(void); > extern void xfs_mod_sb(xfs_trans_t *, __int64_t); > extern void xfs_mount_free(xfs_mount_t *mp, int remove_bhv); > > > From owner-xfs@oss.sgi.com Mon Sep 18 08:27:42 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 08:27:58 -0700 (PDT) Received: from mail.max-t.com (h216-18-124-229.gtcust.grouptelecom.net [216.18.124.229]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8IFRgZd026222 for ; Mon, 18 Sep 2006 08:27:42 -0700 Received: from madrid.max-t.internal ([192.168.1.189] ident=[U2FsdGVkX19rOjnqRu34vY5U3NP1b3QVy1gFvKWbDc8=]) by mail.max-t.com with esmtp (Exim 4.43) id 1GPL1C-0002K6-6J; Mon, 18 Sep 2006 11:26:49 -0400 Date: Mon, 18 Sep 2006 11:24:44 -0400 (EDT) From: Stephane Doyon X-X-Sender: sdoyon@madrid.max-t.internal To: Shailendra Tripathi cc: xfs@oss.sgi.com In-Reply-To: <450EB500.3070000@agami.com> Message-ID: References: <450EB500.3070000@agami.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 192.168.1.189 X-SA-Exim-Mail-From: sdoyon@max-t.com Subject: Re: File system block reservation mechanism is broken Content-Type: MULTIPART/MIXED; BOUNDARY="-1463763711-1607048818-1158593084=:22939" X-SA-Exim-Version: 4.1 (built Thu, 08 Sep 2005 14:17:48 -0500) X-SA-Exim-Scanned: Yes (on mail.max-t.com) X-archive-position: 9002 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sdoyon@max-t.com Precedence: bulk X-list: xfs Content-Length: 9466 Lines: 237 This message is in MIME format. The first part should be readable text, while the remaining parts are likely unreadable without MIME-aware tools. 
---1463763711-1607048818-1158593084=:22939 Content-Type: TEXT/PLAIN; charset=UTF-8; format=flowed Content-Transfer-Encoding: QUOTED-PRINTABLE On Mon, 18 Sep 2006, Shailendra Tripathi wrote: > Hi Stephane, > >> The code in xfs_reserve_blocks() just locks the superblock and then >> consults and modifies mp->m_sb.sb_fdblocks. However, if the per-cpu >> counter is active, the count is spread across the per-cpu counters, and >> the superblock field does not contain an accurate count, nor does >> modifying it have any effect. > > This is the fast path. However, there is a slow path where it actually > falls back to the earlier mechanism where the global lock (spin-lock) is held and > then counters are guaranteed to be consistent. The goal is not to take the AFAICT, xfs_reserve_blocks() is only aware of the old / slow path way. > please clarify a little more as to why it fails. I can't see the problem. int xfs_reserve_blocks(...) { ... s = XFS_SB_LOCK(mp); ... lcounter = mp->m_sb.sb_fdblocks - delta; ... mp->m_sb.sb_fdblocks = lcounter; ... } It assumes the superblock counters are current and accurate, and that they are authoritative... it hasn't been converted to use the new fast path, it always uses the slow path. Most of the time (except when some CPU's counter has drained), the fdblocks count will be in the per-cpu mp->m_sb_cnts array of counters. m_sb.sb_fdblocks only contains the value left in it last time we totaled the counters. Modifying m_sb.sb_fdblocks does nothing because that value is known to be inaccurate and will be recalculated and overwritten next time the slow path to that counter is used. Does that make it clearer? > Stephane Doyon wrote: >> [Resending. Seems my previous post did not make it somehow...] >> >> The mechanism allowing to reserve file system blocks, xfs_reserve_blocks() >> / XFS_IOC_SET_RESBLKS, appears to have been broken by the patch that >> introduced per-cpu superblock counters. 
>> >> >> The observed behavior is that xfs_io -xc "resblks " has no >> effect: the resblks does get set and can be retrieved, but the free blocks >> count does not decrease. >> >> The XFS_SET_ASIDE_BLOCKS, introduced in the recent full file system >> deadlock fix, probably also needs to be taken into account. >> >> I'm not particularly familiar with the code, but AFAICT something along >> the lines of the following patch should fix it. >> >> Signed-off-by: Stephane Doyon >> >> Index: linux/fs/xfs/xfs_fsops.c >> =================================================================== >> --- linux.orig/fs/xfs/xfs_fsops.c 2006-09-13 11:31:36.000000000 -0400 >> +++ linux/fs/xfs/xfs_fsops.c 2006-09-13 11:32:06.782591491 -0400 >> @@ -505,6 +505,7 @@ >> >> request = *inval; >> s = XFS_SB_LOCK(mp); >> + xfs_icsb_disable_counter(mp, XFS_SBS_FDBLOCKS); >> >> /* >> * If our previous reservation was larger than the current value, >> @@ -520,14 +521,14 @@ >> mp->m_resblks = request; >> } else { >> delta = request - mp->m_resblks; >> - lcounter = mp->m_sb.sb_fdblocks - delta; >> + lcounter = mp->m_sb.sb_fdblocks - XFS_SET_ASIDE_BLOCKS(mp) - >> delta; >> if (lcounter < 0) { >> /* We can't satisfy the request, just get what we can */ >> - mp->m_resblks += mp->m_sb.sb_fdblocks; >> - mp->m_resblks_avail += mp->m_sb.sb_fdblocks; >> - mp->m_sb.sb_fdblocks = 0; >> + mp->m_resblks += mp->m_sb.sb_fdblocks - >> XFS_SET_ASIDE_BLOCKS(mp); >> + mp->m_resblks_avail += mp->m_sb.sb_fdblocks - >> XFS_SET_ASIDE_BLOCKS(mp); >> + mp->m_sb.sb_fdblocks = XFS_SET_ASIDE_BLOCKS(mp); >> } else { >> - mp->m_sb.sb_fdblocks = lcounter + XFS_SET_ASIDE_BLOCKS(mp); >> mp->m_resblks = request; >> mp->m_resblks_avail += delta; >> } >> Index: linux/fs/xfs/xfs_mount.c >> 
=================================================================== >> --- linux.orig/fs/xfs/xfs_mount.c 2006-09-13 11:31:36.000000000 -0400 >> +++ linux/fs/xfs/xfs_mount.c 2006-09-13 11:32:06.784591724 -0400 >> @@ -58,7 +58,6 @@ >> int, int); >> STATIC int xfs_icsb_modify_counters_locked(xfs_mount_t *, >> xfs_sb_field_t, >> int, int); >> -STATIC int xfs_icsb_disable_counter(xfs_mount_t *, xfs_sb_field_t); >> >> #else >> >> @@ -1254,26 +1253,6 @@ >> } >> >> /* >> - * In order to avoid ENOSPC-related deadlock caused by >> - * out-of-order locking of AGF buffer (PV 947395), we place >> - * constraints on the relationship among actual allocations for >> - * data blocks, freelist blocks, and potential file data bmap >> - * btree blocks. However, these restrictions may result in no >> - * actual space allocated for a delayed extent, for example, a data >> - * block in a certain AG is allocated but there is no additional >> - * block for the additional bmap btree block due to a split of the >> - * bmap btree of the file. The result of this may lead to an >> - * infinite loop in xfssyncd when the file gets flushed to disk and >> - * all delayed extents need to be actually allocated. To get around >> - * this, we explicitly set aside a few blocks which will not be >> - * reserved in delayed allocation. Considering the minimum number of >> - * needed freelist blocks is 4 fsbs _per AG_, a potential split of file's >> bmap >> - * btree requires 1 fsb, so we set the number of set-aside blocks >> - * to 4 + 4*agcount. >> - */ >> -#define XFS_SET_ASIDE_BLOCKS(mp) (4 + ((mp)->m_sb.sb_agcount * 4)) >> - >> -/* >> * xfs_mod_incore_sb_unlocked() is a utility routine common used to >> apply >> * a delta to a specified field in the in-core superblock. Simply >> * switch on the field indicated and apply the delta to that field. 
>> @@ -1906,7 +1885,7 @@ >> return test_bit(field, &mp->m_icsb_counters); >> } >> >> -STATIC int >> +int >> xfs_icsb_disable_counter( >> xfs_mount_t *mp, >> xfs_sb_field_t field) >> Index: linux/fs/xfs/xfs_mount.h >> =================================================================== >> --- linux.orig/fs/xfs/xfs_mount.h 2006-09-13 11:31:36.000000000 -0400 >> +++ linux/fs/xfs/xfs_mount.h 2006-09-13 11:33:24.441557999 -0400 >> @@ -307,10 +307,14 @@ >> >> extern int xfs_icsb_init_counters(struct xfs_mount *); >> extern void xfs_icsb_sync_counters_lazy(struct xfs_mount *); >> +/* Can't forward declare typedefs... */ >> +struct xfs_mount; >> +extern int xfs_icsb_disable_counter(struct xfs_mount *, xfs_sb_field_t); >> >> # else >> # define xfs_icsb_init_counters(mp) (0) >> # define xfs_icsb_sync_counters_lazy(mp) do { } while (0) >> +#define xfs_icsb_disable_counters(mp, field) do { } while (0) >> #endif >> >> typedef struct xfs_mount { >> @@ -574,6 +578,27 @@ >> # define XFS_SB_LOCK(mp) mutex_spinlock(&(mp)->m_sb_lock) >> # define XFS_SB_UNLOCK(mp,s) mutex_spinunlock(&(mp)->m_sb_lock,(s)) >> >> + >> +/* >> + * In order to avoid ENOSPC-related deadlock caused by >> + * out-of-order locking of AGF buffer (PV 947395), we place >> + * constraints on the relationship among actual allocations for >> + * data blocks, freelist blocks, and potential file data bmap >> + * btree blocks. However, these restrictions may result in no >> + * actual space allocated for a delayed extent, for example, a data >> + * block in a certain AG is allocated but there is no additional >> + * block for the additional bmap btree block due to a split of the >> + * bmap btree of the file. The result of this may lead to an >> + * infinite loop in xfssyncd when the file gets flushed to disk and >> + * all delayed extents need to be actually allocated. 
To get around >> + * this, we explicitly set aside a few blocks which will not be >> + * reserved in delayed allocation. Considering the minimum number of >> + * needed freelist blocks is 4 fsbs _per AG_, a potential split of file's >> bmap >> + * btree requires 1 fsb, so we set the number of set-aside blocks >> + * to 4 + 4*agcount. >> + */ >> +#define XFS_SET_ASIDE_BLOCKS(mp) (4 + ((mp)->m_sb.sb_agcount * 4)) >> + >> extern xfs_mount_t *xfs_mount_init(void); >> extern void xfs_mod_sb(xfs_trans_t *, __int64_t); >> extern void xfs_mount_free(xfs_mount_t *mp, int remove_bhv); >> >> >> > > > -- Stéphane Doyon Software Developer Maximum Throughput Inc. http://www.max-t.com sdoyon@max-t.com Phone: (514) 938-7297 ---1463763711-1607048818-1158593084=:22939-- From owner-xfs@oss.sgi.com Mon Sep 18 08:34:18 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 08:34:30 -0700 (PDT) Received: from limpet.umeoce.maine.edu (limpet.umeoce.maine.edu [130.111.192.115]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8IFYHZd028047 for ; Mon, 18 Sep 2006 08:34:18 -0700 Received: from localhost (cousins@localhost) by limpet.umeoce.maine.edu (8.9.3/8.9.3) with ESMTP id LAA01511; Mon, 18 Sep 2006 11:33:34 -0400 Date: Mon, 18 Sep 2006 11:33:34 -0400 (EDT) From: Steve Cousins Reply-To: cousins@umit.maine.edu To: Shailendra Tripathi cc: "xfs@oss.sgi.com" Subject: Re: swidth with mdadm and RAID6 In-Reply-To: Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 9003 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cousins@limpet.umeoce.maine.edu Precedence: bulk X-list: xfs Content-Length: 3496 Lines: 125 Hi Shailendra, Here is the info: 1. 
[root@juno ~]# cat /proc/mdstat Personalities : [raid6] md0 : active raid6 sdb[0] sdl[10](S) sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1] 3907091968 blocks level 6, 64k chunk, algorithm 2 [10/10] [UUUUUUUUUU] unused devices: 2. mdadm --create /dev/md0 --chunk=64 --level=6 --raid-devices=10 --spare-devices=1 /dev/sd[bcdefghijkl] 3. [root@juno ~]# xfs_db -r /dev/md* xfs_db> sb xfs_db> p magicnum = 0x58465342 blocksize = 4096 dblocks = 976772992 rblocks = 0 rextents = 0 uuid = 04b32cce-ed38-496f-811f-2ccd51450bf4 logstart = 536870919 rootino = 256 rbmino = 257 rsumino = 258 rextsize = 144 agblocks = 30524160 agcount = 32 rbmblocks = 0 logblocks = 32768 versionnum = 0x3d84 sectsize = 4096 inodesize = 256 inopblock = 16 fname = "\000\000\000\000\000\000\000\000\000\000\000\000" blocklog = 12 sectlog = 12 inodelog = 8 inopblog = 4 agblklog = 25 rextslog = 0 inprogress = 0 imax_pct = 25 icount = 36864 ifree = 362 fdblocks = 669630878 frextents = 0 uquotino = 0 gquotino = 0 qflags = 0 flags = 0 shared_vn = 0 inoalignmt = 2 unit = 16 width = 144 dirblklog = 0 logsectlog = 12 logsectsize = 4096 logsunit = 4096 features2 = 0 xfs_db> Thanks for the help. Steve ______________________________________________________________________ Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu Univ. of Maine, Orono, ME 04469 Phone: (207) 581-4302 On Mon, 18 Sep 2006, Shailendra Tripathi wrote: > Can you list the output of > 1. cat /proc/mdstat > 2. the command to create 8+2 RAID6 with one spare ? > 3. and output of following: > xfs_db -r /dev/md* > xfs_db> sb > xfs_db> p > > -shailendra > > Steve Cousins wrote: > >> I have a RAID6 array of 11 500 GB drives using mdadm. There is one > >> hot-spare so the number of data drives is 8. I used mkfs.xfs with > >> defaults to create the file system and it seemed to pick up the chunk size > >> I used correctly (64K) but I think it got the swidth wrong. 
Here is what > >> xfs_info says: > >> > >> =========================================================================== > >> meta-data=/dev/md0 isize=256 agcount=32, agsize=30524160 > >> blks > >> = sectsz=4096 attr=0 > >> data = bsize=4096 blocks=976772992, imaxpct=25 > >> = sunit=16 swidth=144 blks, unwritten=1 > >> naming =version 2 bsize=4096 > >> log =internal bsize=4096 blocks=32768, version=2 > >> = sectsz=4096 sunit=1 blks > >> realtime =none extsz=589824 blocks=0, rtextents=0 > >> =========================================================================== > >> > >> So, sunit*bsize=64K, but swidth=144 and swidth/sunit=9 so it looks like it > >> thought there were 9 data drives instead of 8. > >> > >> Am I diagnosing this correctly? Should I recreate the array and > >> explicitly set sunit=16 and swidth=128? > >> > >> Thanks for your help. > >> > >> Steve > >> ______________________________________________________________________ > >> Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu > >> Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu > >> Univ. 
of Maine, Orono, ME 04469 Phone: (207) 581-4302 > >> > >> > > > From owner-xfs@oss.sgi.com Mon Sep 18 11:11:54 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 11:12:00 -0700 (PDT) Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8IIBjZd032547 for ; Mon, 18 Sep 2006 11:11:51 -0700 Received: from agami.com ([192.168.168.132]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8IIB52c009832 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 18 Sep 2006 11:11:07 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8IIB07f017419 for ; Mon, 18 Sep 2006 11:11:00 -0700 Received: from [10.125.200.197] ([10.125.200.197]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 18 Sep 2006 11:14:25 -0700 Message-ID: <450EE12A.4020403@agami.com> Date: Mon, 18 Sep 2006 23:40:50 +0530 From: Shailendra Tripathi User-Agent: Mozilla Thunderbird 1.0.6 (Windows/20050716) X-Accept-Language: en-us, en MIME-Version: 1.0 To: cousins@umit.maine.edu CC: "xfs@oss.sgi.com" Subject: Re: swidth with mdadm and RAID6 References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 18 Sep 2006 18:14:26.0343 (UTC) FILETIME=[46810370:01C6DB4E] X-Scanned-By: MIMEDefang 2.36 X-archive-position: 9004 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 4494 Lines: 162 Hi Steve, I checked the code and it appears that XFS is not *aware* of RAID6. Basically, for all md devices, it gets the volume info by making an ioctl call. I can see that XFS only takes care of level 4 and level 5. It does not account for level 6. 
Only one extra line needs to be added here, as below: if (md.level == 6) md.nr_disks -= 2; /* RAID 6 has 2 parity disks */ You can try with this change if you can. Do let me know if it solves your problem. This code is in function: md_get_subvol_stripe in /libdisk/md.c /* Deduct a disk from stripe width on RAID4/5 */ if (md.level == 4 || md.level == 5) md.nr_disks--; /* Update sizes */ *sunit = md.chunk_size >> 9; *swidth = *sunit * md.nr_disks; return 1; } Regards, Shailendra Steve Cousins wrote: >Hi Shailendra, > >Here is the info: > >1. [root@juno ~]# cat /proc/mdstat >Personalities : [raid6] >md0 : active raid6 sdb[0] sdl[10](S) sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] >sdf[4] sde[3] sdd[2] sdc[1] > 3907091968 blocks level 6, 64k chunk, algorithm 2 [10/10] >[UUUUUUUUUU] > >unused devices: > >2. mdadm --create /dev/md0 --chunk=64 --level=6 --raid-devices=10 >--spare-devices=1 /dev/sd[bcdefghijkl] > >3. [root@juno ~]# xfs_db -r /dev/md* >xfs_db> sb >xfs_db> p >magicnum = 0x58465342 >blocksize = 4096 >dblocks = 976772992 >rblocks = 0 >rextents = 0 >uuid = 04b32cce-ed38-496f-811f-2ccd51450bf4 >logstart = 536870919 >rootino = 256 >rbmino = 257 >rsumino = 258 >rextsize = 144 >agblocks = 30524160 >agcount = 32 >rbmblocks = 0 >logblocks = 32768 >versionnum = 0x3d84 >sectsize = 4096 >inodesize = 256 >inopblock = 16 >fname = "\000\000\000\000\000\000\000\000\000\000\000\000" >blocklog = 12 >sectlog = 12 >inodelog = 8 >inopblog = 4 >agblklog = 25 >rextslog = 0 >inprogress = 0 >imax_pct = 25 >icount = 36864 >ifree = 362 >fdblocks = 669630878 >frextents = 0 >uquotino = 0 >gquotino = 0 >qflags = 0 >flags = 0 >shared_vn = 0 >inoalignmt = 2 >unit = 16 >width = 144 >dirblklog = 0 >logsectlog = 12 >logsectsize = 4096 >logsunit = 4096 >features2 = 0 >xfs_db> > > >Thanks for the help.
> >Steve > >______________________________________________________________________ > Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu > Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu > Univ. of Maine, Orono, ME 04469 Phone: (207) 581-4302 > >On Mon, 18 Sep 2006, Shailendra Tripathi wrote: > > > >>Can you list the output of >>1. cat /proc/mdstat >>2. the command to create 8+2 RAID6 with one spare ? >>3. and output of following: >> xfs_db -r /dev/md* >> xfs_db> sb >> xfs_db> p >> >>-shailendra >> >>Steve Cousins wrote: >> >> >>>>I have a RAID6 array of 11 500 GB drives using mdadm. There is one >>>>hot-spare so the number of data drives is 8. I used mkfs.xfs with >>>>defaults to create the file system and it seemed to pick up the chunk size >>>>I used correctly (64K) but I think it got the swidth wrong. Here is what >>>>xfs_info says: >>>> >>>>=========================================================================== >>>>meta-data=/dev/md0 isize=256 agcount=32, agsize=30524160 >>>>blks >>>> = sectsz=4096 attr=0 >>>>data = bsize=4096 blocks=976772992, imaxpct=25 >>>> = sunit=16 swidth=144 blks, unwritten=1 >>>>naming =version 2 bsize=4096 >>>>log =internal bsize=4096 blocks=32768, version=2 >>>> = sectsz=4096 sunit=1 blks >>>>realtime =none extsz=589824 blocks=0, rtextents=0 >>>>=========================================================================== >>>> >>>>So, sunit*bsize=64K, but swidth=144 and swidth/sunit=9 so it looks like it >>>>thought there were 9 data drives instead of 8. >>>> >>>>Am I diagnosing this correctly? Should I recreate the array and >>>>explicitly set sunit=16 and swidth=128? >>>> >>>>Thanks for your help. >>>> >>>>Steve >>>>______________________________________________________________________ >>>> Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu >>>> Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu >>>> Univ. 
of Maine, Orono, ME 04469 Phone: (207) 581-4302 >>>> >>>> >>>> >>>> > > > From owner-xfs@oss.sgi.com Mon Sep 18 11:20:21 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 11:20:26 -0700 (PDT) Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8IIKIZd002145 for ; Mon, 18 Sep 2006 11:20:19 -0700 Received: from agami.com ([192.168.168.132]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8IIJa2c009931 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 18 Sep 2006 11:19:39 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8IIJUEv017570 for ; Mon, 18 Sep 2006 11:19:30 -0700 Received: from [10.125.200.197] ([10.125.200.197]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 18 Sep 2006 11:22:56 -0700 Message-ID: <450EE325.1010706@agami.com> Date: Mon, 18 Sep 2006 23:49:17 +0530 From: Shailendra Tripathi User-Agent: Mozilla Thunderbird 1.0.6 (Windows/20050716) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Shailendra Tripathi CC: cousins@umit.maine.edu, "xfs@oss.sgi.com" Subject: Re: swidth with mdadm and RAID6 References: <450EE12A.4020403@agami.com> In-Reply-To: <450EE12A.4020403@agami.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 18 Sep 2006 18:22:57.0359 (UTC) FILETIME=[7717DDF0:01C6DB4F] X-Scanned-By: MIMEDefang 2.36 X-archive-position: 9005 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 5133 Lines: 194 Hi Steve, Both of us are using old xfsprogs. It is handled in new xfsprogs. 
*/ switch (md.level) { case 6: md.nr_disks--; /* fallthrough */ case 5: case 4: md.nr_disks--; /* fallthrough */ case 1: case 0: case 10: break; default: return 0; Regards, Shailendra Tripathi wrote: > Hi Steve, > I checked the code and it appears that XFS is not *aware* > of RAID6. Basically, for all md devices, it gets the volume info by > making a an ioctl call. I can see that XFS only take care of level 4 > and level 5. It does not account for level 6. > Only extra line need to be added here as below: > > if (md.level == 6) > md.nr_disks -= 2; /* RAID 6 has 2 parity disks */ > You can try with this change if you can. Do let mew know if it solves > your problem. > > This code is in function: md_get_subvol_stripe in /libdisk/md.c > > > /* Deduct a disk from stripe width on RAID4/5 */ > if (md.level == 4 || md.level == 5) > md.nr_disks--; > > /* Update sizes */ > *sunit = md.chunk_size >> 9; > *swidth = *sunit * md.nr_disks; > > return 1; > } > > Regards, > Shailendra > Steve Cousins wrote: > >> Hi Shailendra, >> >> Here is the info: >> >> 1. [root@juno ~]# cat /proc/mdstat Personalities : [raid6] md0 : >> active raid6 sdb[0] sdl[10](S) sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] >> sdf[4] sde[3] sdd[2] sdc[1] >> 3907091968 blocks level 6, 64k chunk, algorithm 2 [10/10] >> [UUUUUUUUUU] >> unused devices: >> >> 2. mdadm --create /dev/md0 --chunk=64 --level=6 --raid-devices=10 >> --spare-devices=1 /dev/sd[bcdefghijkl] >> >> 3. 
[root@juno ~]# xfs_db -r /dev/md* >> xfs_db> sb >> xfs_db> p >> magicnum = 0x58465342 >> blocksize = 4096 >> dblocks = 976772992 >> rblocks = 0 >> rextents = 0 >> uuid = 04b32cce-ed38-496f-811f-2ccd51450bf4 >> logstart = 536870919 >> rootino = 256 >> rbmino = 257 >> rsumino = 258 >> rextsize = 144 >> agblocks = 30524160 >> agcount = 32 >> rbmblocks = 0 >> logblocks = 32768 >> versionnum = 0x3d84 >> sectsize = 4096 >> inodesize = 256 >> inopblock = 16 >> fname = "\000\000\000\000\000\000\000\000\000\000\000\000" >> blocklog = 12 >> sectlog = 12 >> inodelog = 8 >> inopblog = 4 >> agblklog = 25 >> rextslog = 0 >> inprogress = 0 >> imax_pct = 25 >> icount = 36864 >> ifree = 362 >> fdblocks = 669630878 >> frextents = 0 >> uquotino = 0 >> gquotino = 0 >> qflags = 0 >> flags = 0 >> shared_vn = 0 >> inoalignmt = 2 >> unit = 16 >> width = 144 >> dirblklog = 0 >> logsectlog = 12 >> logsectsize = 4096 >> logsunit = 4096 >> features2 = 0 >> xfs_db> >> >> Thanks for the help. >> >> Steve >> >> ______________________________________________________________________ >> Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu >> Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu >> Univ. of Maine, Orono, ME 04469 Phone: (207) 581-4302 >> >> On Mon, 18 Sep 2006, Shailendra Tripathi wrote: >> >> >> >>> Can you list the output of >>> 1. cat /proc/mdstat >>> 2. the command to create 8+2 RAID6 with one spare ? >>> 3. and output of following: >>> xfs_db -r /dev/md* >>> xfs_db> sb >>> xfs_db> p >>> >>> -shailendra >>> >>> Steve Cousins wrote: >>> >>> >>>>> I have a RAID6 array of 11 500 GB drives using mdadm. There is one >>>>> hot-spare so the number of data drives is 8. I used mkfs.xfs with >>>>> defaults to create the file system and it seemed to pick up the >>>>> chunk size >>>>> I used correctly (64K) but I think it got the swidth wrong. 
Here >>>>> is what >>>>> xfs_info says: >>>>> >>>>> =========================================================================== >>>>> >>>>> meta-data=/dev/md0 isize=256 agcount=32, >>>>> agsize=30524160 >>>>> blks >>>>> = sectsz=4096 attr=0 >>>>> data = bsize=4096 blocks=976772992, >>>>> imaxpct=25 >>>>> = sunit=16 swidth=144 blks, >>>>> unwritten=1 >>>>> naming =version 2 bsize=4096 >>>>> log =internal bsize=4096 blocks=32768, version=2 >>>>> = sectsz=4096 sunit=1 blks >>>>> realtime =none extsz=589824 blocks=0, rtextents=0 >>>>> =========================================================================== >>>>> >>>>> >>>>> So, sunit*bsize=64K, but swidth=144 and swidth/sunit=9 so it looks >>>>> like it >>>>> thought there were 9 data drives instead of 8. >>>>> Am I diagnosing this correctly? Should I recreate the array and >>>>> explicitly set sunit=16 and swidth=128? >>>>> >>>>> Thanks for your help. >>>>> >>>>> Steve >>>>> ______________________________________________________________________ >>>>> >>>>> Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu >>>>> Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu >>>>> Univ. 
of Maine, Orono, ME 04469 Phone: (207) 581-4302 >>>>> >>>>> >>>>> >>>> >> >> >> > > From owner-xfs@oss.sgi.com Mon Sep 18 13:28:46 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 13:28:55 -0700 (PDT) Received: from limpet.umeoce.maine.edu (limpet.umeoce.maine.edu [130.111.192.115]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8IKSgZd003338 for ; Mon, 18 Sep 2006 13:28:46 -0700 Received: from localhost (cousins@localhost) by limpet.umeoce.maine.edu (8.9.3/8.9.3) with ESMTP id QAA01748; Mon, 18 Sep 2006 16:28:05 -0400 Date: Mon, 18 Sep 2006 16:28:05 -0400 (EDT) From: Steve Cousins Reply-To: cousins@umit.maine.edu To: Shailendra Tripathi cc: "\"xfs@oss.sgi.com\" " Subject: Re: swidth with mdadm and RAID6 In-Reply-To: Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 9006 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cousins@limpet.umeoce.maine.edu Precedence: bulk X-list: xfs Content-Length: 6080 Lines: 209 Thanks very much Shailendra. I'll give it a try. Steve ______________________________________________________________________ Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu Univ. of Maine, Orono, ME 04469 Phone: (207) 581-4302 On Mon, 18 Sep 2006, Shailendra Tripathi wrote: > Hi Steve, > Both of us are using old xfsprogs. It is handled in new > xfsprogs. > > */ > switch (md.level) { > case 6: > md.nr_disks--; > /* fallthrough */ > case 5: > case 4: > md.nr_disks--; > /* fallthrough */ > case 1: > case 0: > case 10: > break; > default: > return 0; > > > Regards, > > Shailendra Tripathi wrote: > > >> Hi Steve, > >> I checked the code and it appears that XFS is not *aware* > >> of RAID6. Basically, for all md devices, it gets the volume info by > >> making a an ioctl call. I can see that XFS only take care of level 4 > >> and level 5. 
It does not account for level 6. > >> Only extra line need to be added here as below: > >> > >> if (md.level == 6) > >> md.nr_disks -= 2; /* RAID 6 has 2 parity disks */ > >> You can try with this change if you can. Do let mew know if it solves > >> your problem. > >> > >> This code is in function: md_get_subvol_stripe in /libdisk/md.c > >> > >> > >> /* Deduct a disk from stripe width on RAID4/5 */ > >> if (md.level == 4 || md.level == 5) > >> md.nr_disks--; > >> > >> /* Update sizes */ > >> *sunit = md.chunk_size >> 9; > >> *swidth = *sunit * md.nr_disks; > >> > >> return 1; > >> } > >> > >> Regards, > >> Shailendra > >> Steve Cousins wrote: > >> > >>> Hi Shailendra, > >>> > >>> Here is the info: > >>> > >>> 1. [root@juno ~]# cat /proc/mdstat Personalities : [raid6] md0 : > >>> active raid6 sdb[0] sdl[10](S) sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] > >>> sdf[4] sde[3] sdd[2] sdc[1] > >>> 3907091968 blocks level 6, 64k chunk, algorithm 2 [10/10] > >>> [UUUUUUUUUU] > >>> unused devices: > >>> > >>> 2. mdadm --create /dev/md0 --chunk=64 --level=6 --raid-devices=10 > >>> --spare-devices=1 /dev/sd[bcdefghijkl] > >>> > >>> 3. 
[root@juno ~]# xfs_db -r /dev/md* > >>> xfs_db> sb > >>> xfs_db> p > >>> magicnum = 0x58465342 > >>> blocksize = 4096 > >>> dblocks = 976772992 > >>> rblocks = 0 > >>> rextents = 0 > >>> uuid = 04b32cce-ed38-496f-811f-2ccd51450bf4 > >>> logstart = 536870919 > >>> rootino = 256 > >>> rbmino = 257 > >>> rsumino = 258 > >>> rextsize = 144 > >>> agblocks = 30524160 > >>> agcount = 32 > >>> rbmblocks = 0 > >>> logblocks = 32768 > >>> versionnum = 0x3d84 > >>> sectsize = 4096 > >>> inodesize = 256 > >>> inopblock = 16 > >>> fname = "\000\000\000\000\000\000\000\000\000\000\000\000" > >>> blocklog = 12 > >>> sectlog = 12 > >>> inodelog = 8 > >>> inopblog = 4 > >>> agblklog = 25 > >>> rextslog = 0 > >>> inprogress = 0 > >>> imax_pct = 25 > >>> icount = 36864 > >>> ifree = 362 > >>> fdblocks = 669630878 > >>> frextents = 0 > >>> uquotino = 0 > >>> gquotino = 0 > >>> qflags = 0 > >>> flags = 0 > >>> shared_vn = 0 > >>> inoalignmt = 2 > >>> unit = 16 > >>> width = 144 > >>> dirblklog = 0 > >>> logsectlog = 12 > >>> logsectsize = 4096 > >>> logsunit = 4096 > >>> features2 = 0 > >>> xfs_db> > >>> > >>> Thanks for the help. > >>> > >>> Steve > >>> > >>> ______________________________________________________________________ > >>> Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu > >>> Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu > >>> Univ. of Maine, Orono, ME 04469 Phone: (207) 581-4302 > >>> > >>> On Mon, 18 Sep 2006, Shailendra Tripathi wrote: > >>> > >>> > >>> > >>>> Can you list the output of > >>>> 1. cat /proc/mdstat > >>>> 2. the command to create 8+2 RAID6 with one spare ? > >>>> 3. and output of following: > >>>> xfs_db -r /dev/md* > >>>> xfs_db> sb > >>>> xfs_db> p > >>>> > >>>> -shailendra > >>>> > >>>> Steve Cousins wrote: > >>>> > >>>> > >>>>>> I have a RAID6 array of 11 500 GB drives using mdadm. There is one > >>>>>> hot-spare so the number of data drives is 8. 
I used mkfs.xfs with > >>>>>> defaults to create the file system and it seemed to pick up the > >>>>>> chunk size > >>>>>> I used correctly (64K) but I think it got the swidth wrong. Here > >>>>>> is what > >>>>>> xfs_info says: > >>>>>> > >>>>>> =========================================================================== > >>>>>> > >>>>>> meta-data=/dev/md0 isize=256 agcount=32, > >>>>>> agsize=30524160 > >>>>>> blks > >>>>>> = sectsz=4096 attr=0 > >>>>>> data = bsize=4096 blocks=976772992, > >>>>>> imaxpct=25 > >>>>>> = sunit=16 swidth=144 blks, > >>>>>> unwritten=1 > >>>>>> naming =version 2 bsize=4096 > >>>>>> log =internal bsize=4096 blocks=32768, version=2 > >>>>>> = sectsz=4096 sunit=1 blks > >>>>>> realtime =none extsz=589824 blocks=0, rtextents=0 > >>>>>> =========================================================================== > >>>>>> > >>>>>> > >>>>>> So, sunit*bsize=64K, but swidth=144 and swidth/sunit=9 so it looks > >>>>>> like it > >>>>>> thought there were 9 data drives instead of 8. > >>>>>> Am I diagnosing this correctly? Should I recreate the array and > >>>>>> explicitly set sunit=16 and swidth=128? > >>>>>> > >>>>>> Thanks for your help. > >>>>>> > >>>>>> Steve > >>>>>> ______________________________________________________________________ > >>>>>> > >>>>>> Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu > >>>>>> Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu > >>>>>> Univ. 
of Maine, Orono, ME 04469 Phone: (207) 581-4302 > >>>>>> > >>>>>> > >>>>>> > >>>>> > >>> > >>> > >>> > >> > >> > > > > > From owner-xfs@oss.sgi.com Mon Sep 18 13:45:38 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 13:45:46 -0700 (PDT) Received: from limpet.umeoce.maine.edu (limpet.umeoce.maine.edu [130.111.192.115]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8IKjbZd006847 for ; Mon, 18 Sep 2006 13:45:38 -0700 Received: from localhost (cousins@localhost) by limpet.umeoce.maine.edu (8.9.3/8.9.3) with ESMTP id QAA01760; Mon, 18 Sep 2006 16:44:59 -0400 Date: Mon, 18 Sep 2006 16:44:59 -0400 (EDT) From: Steve Cousins Reply-To: cousins@umit.maine.edu To: Shailendra Tripathi cc: "\"xfs@oss.sgi.com\" " Subject: Re: swidth with mdadm and RAID6 In-Reply-To: Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 9007 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cousins@limpet.umeoce.maine.edu Precedence: bulk X-list: xfs Content-Length: 6788 Lines: 228 Hi again, Still no luck with 2.8.11: [root@juno xfsprogs-2.8.11]# cd mkfs [root@juno mkfs]# ./mkfs.xfs -f /dev/md0 meta-data=/dev/md0 isize=256 agcount=32, agsize=30524160 blks = sectsz=4096 attr=0 data = bsize=4096 blocks=976772992, imaxpct=25 = sunit=16 swidth=144 blks, unwritten=1 naming =version 2 bsize=4096 log =internal log bsize=4096 blocks=32768, version=2 = sectsz=4096 sunit=1 blks realtime =none extsz=589824 blocks=0, rtextents=0 Since I have a spare in there do you think it is starting with md.nr_disks = 11 and then subtracting two? Thanks, Steve ______________________________________________________________________ Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu Univ. 
of Maine, Orono, ME 04469 Phone: (207) 581-4302 On Mon, 18 Sep 2006, Shailendra Tripathi wrote: > Hi Steve, > Both of us are using old xfsprogs. It is handled in new > xfsprogs. > > */ > switch (md.level) { > case 6: > md.nr_disks--; > /* fallthrough */ > case 5: > case 4: > md.nr_disks--; > /* fallthrough */ > case 1: > case 0: > case 10: > break; > default: > return 0; > > > Regards, > > Shailendra Tripathi wrote: > > >> Hi Steve, > >> I checked the code and it appears that XFS is not *aware* > >> of RAID6. Basically, for all md devices, it gets the volume info by > >> making a an ioctl call. I can see that XFS only take care of level 4 > >> and level 5. It does not account for level 6. > >> Only extra line need to be added here as below: > >> > >> if (md.level == 6) > >> md.nr_disks -= 2; /* RAID 6 has 2 parity disks */ > >> You can try with this change if you can. Do let mew know if it solves > >> your problem. > >> > >> This code is in function: md_get_subvol_stripe in /libdisk/md.c > >> > >> > >> /* Deduct a disk from stripe width on RAID4/5 */ > >> if (md.level == 4 || md.level == 5) > >> md.nr_disks--; > >> > >> /* Update sizes */ > >> *sunit = md.chunk_size >> 9; > >> *swidth = *sunit * md.nr_disks; > >> > >> return 1; > >> } > >> > >> Regards, > >> Shailendra > >> Steve Cousins wrote: > >> > >>> Hi Shailendra, > >>> > >>> Here is the info: > >>> > >>> 1. [root@juno ~]# cat /proc/mdstat Personalities : [raid6] md0 : > >>> active raid6 sdb[0] sdl[10](S) sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] > >>> sdf[4] sde[3] sdd[2] sdc[1] > >>> 3907091968 blocks level 6, 64k chunk, algorithm 2 [10/10] > >>> [UUUUUUUUUU] > >>> unused devices: > >>> > >>> 2. mdadm --create /dev/md0 --chunk=64 --level=6 --raid-devices=10 > >>> --spare-devices=1 /dev/sd[bcdefghijkl] > >>> > >>> 3. 
[root@juno ~]# xfs_db -r /dev/md* > >>> xfs_db> sb > >>> xfs_db> p > >>> magicnum = 0x58465342 > >>> blocksize = 4096 > >>> dblocks = 976772992 > >>> rblocks = 0 > >>> rextents = 0 > >>> uuid = 04b32cce-ed38-496f-811f-2ccd51450bf4 > >>> logstart = 536870919 > >>> rootino = 256 > >>> rbmino = 257 > >>> rsumino = 258 > >>> rextsize = 144 > >>> agblocks = 30524160 > >>> agcount = 32 > >>> rbmblocks = 0 > >>> logblocks = 32768 > >>> versionnum = 0x3d84 > >>> sectsize = 4096 > >>> inodesize = 256 > >>> inopblock = 16 > >>> fname = "\000\000\000\000\000\000\000\000\000\000\000\000" > >>> blocklog = 12 > >>> sectlog = 12 > >>> inodelog = 8 > >>> inopblog = 4 > >>> agblklog = 25 > >>> rextslog = 0 > >>> inprogress = 0 > >>> imax_pct = 25 > >>> icount = 36864 > >>> ifree = 362 > >>> fdblocks = 669630878 > >>> frextents = 0 > >>> uquotino = 0 > >>> gquotino = 0 > >>> qflags = 0 > >>> flags = 0 > >>> shared_vn = 0 > >>> inoalignmt = 2 > >>> unit = 16 > >>> width = 144 > >>> dirblklog = 0 > >>> logsectlog = 12 > >>> logsectsize = 4096 > >>> logsunit = 4096 > >>> features2 = 0 > >>> xfs_db> > >>> > >>> Thanks for the help. > >>> > >>> Steve > >>> > >>> ______________________________________________________________________ > >>> Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu > >>> Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu > >>> Univ. of Maine, Orono, ME 04469 Phone: (207) 581-4302 > >>> > >>> On Mon, 18 Sep 2006, Shailendra Tripathi wrote: > >>> > >>> > >>> > >>>> Can you list the output of > >>>> 1. cat /proc/mdstat > >>>> 2. the command to create 8+2 RAID6 with one spare ? > >>>> 3. and output of following: > >>>> xfs_db -r /dev/md* > >>>> xfs_db> sb > >>>> xfs_db> p > >>>> > >>>> -shailendra > >>>> > >>>> Steve Cousins wrote: > >>>> > >>>> > >>>>>> I have a RAID6 array of 11 500 GB drives using mdadm. There is one > >>>>>> hot-spare so the number of data drives is 8. 
I used mkfs.xfs with > >>>>>> defaults to create the file system and it seemed to pick up the > >>>>>> chunk size > >>>>>> I used correctly (64K) but I think it got the swidth wrong. Here > >>>>>> is what > >>>>>> xfs_info says: > >>>>>> > >>>>>> =========================================================================== > >>>>>> > >>>>>> meta-data=/dev/md0 isize=256 agcount=32, > >>>>>> agsize=30524160 > >>>>>> blks > >>>>>> = sectsz=4096 attr=0 > >>>>>> data = bsize=4096 blocks=976772992, > >>>>>> imaxpct=25 > >>>>>> = sunit=16 swidth=144 blks, > >>>>>> unwritten=1 > >>>>>> naming =version 2 bsize=4096 > >>>>>> log =internal bsize=4096 blocks=32768, version=2 > >>>>>> = sectsz=4096 sunit=1 blks > >>>>>> realtime =none extsz=589824 blocks=0, rtextents=0 > >>>>>> =========================================================================== > >>>>>> > >>>>>> > >>>>>> So, sunit*bsize=64K, but swidth=144 and swidth/sunit=9 so it looks > >>>>>> like it > >>>>>> thought there were 9 data drives instead of 8. > >>>>>> Am I diagnosing this correctly? Should I recreate the array and > >>>>>> explicitly set sunit=16 and swidth=128? > >>>>>> > >>>>>> Thanks for your help. > >>>>>> > >>>>>> Steve > >>>>>> ______________________________________________________________________ > >>>>>> > >>>>>> Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu > >>>>>> Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu > >>>>>> Univ. 
of Maine, Orono, ME 04469 Phone: (207) 581-4302 > >>>>>> > >>>>>> > >>>>>> > >>>>> > >>> > >>> > >>> > >> > >> > > > > > From owner-xfs@oss.sgi.com Mon Sep 18 13:49:12 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 13:49:23 -0700 (PDT) Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8IKn1Zd007830 for ; Mon, 18 Sep 2006 13:49:05 -0700 Received: from agami.com ([192.168.168.132]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8IKmB2c011602 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 18 Sep 2006 13:48:21 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8IKm5QE020120 for ; Mon, 18 Sep 2006 13:48:05 -0700 Received: from [10.125.200.197] ([10.125.200.197]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 18 Sep 2006 13:51:29 -0700 Message-ID: <450F05F4.1080306@agami.com> Date: Tue, 19 Sep 2006 02:17:48 +0530 From: Shailendra Tripathi User-Agent: Mozilla Thunderbird 1.0.6 (Windows/20050716) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Stephane Doyon CC: xfs@oss.sgi.com Subject: Re: File system block reservation mechanism is broken References: <450EB500.3070000@agami.com> In-Reply-To: Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 18 Sep 2006 20:51:30.0359 (UTC) FILETIME=[37A7BC70:01C6DB64] X-Scanned-By: MIMEDefang 2.36 X-archive-position: 9008 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 8691 Lines: 204 Stephane Doyon wrote: > It assumes the superblock counters are current and accurate, and that > they are authoritative... it hasn't been converted to use the new fast > path, it always uses the slow path. 
> > Most of the time (except when some CPU's counter has drained), the > fdblocks count will be in the per-cpu mp->m_sb_cnts array of counters. > m_sb.sb_fdblocks only contains the value left in it last time we > totaled the counters. Modifying m_sb.sb_fdblocks does nothing because > that value is known to be inaccurate and will be recalculated and > overwritten next time the slow path to that counter is used. > > Does that make it clearer? > Yes. I missed that on the next resync the fdblocks will remain summed up as the old fdblocks value; however, resblks will be upped. This might make the calculations based upon rsvd blocks go wrong. However, it can alternatively be handled in the resync path itself (during balance). After draining the counters, sum up the fdblocks. At this point, compare mp->sb_fdblocks and the summed-up blocks. If there is a difference, adjust for it. In the current code, after xfs_icsb_count, it just overwrites the fields. However, there is a possibility that some rsvd blocks might end up being used without going through the regular path. For example, let's say resblks is increased but not updated immediately. Now, total usage would be governed by the previously available fdblocks. So, by the time the next update takes place, the block might be in the rsvd range already. >> Stephane Doyon wrote: >> >>> [Resending. Seems my previous post did not make it somehow...] >>> >>> The mechanism allowing to reserve file system blocks, >>> xfs_reserve_blocks() >>> / XFS_IOC_SET_RESBLKS, appears to have been broken by the patch that >>> introduced per-cpu superblock counters. >>> >>> >>> The observed behavior is that xfs_io -xc "resblks " has no >>> effect: the resblks does get set and can be retrieved, but the free >>> blocks >>> count does not decrease. >>> >>> The XFS_SET_ASIDE_BLOCKS, introduced in the recent full file system >>> deadlock fix, probably also needs to be taken into account.
>>> >>> I'm not particularly familiar with the code, but AFAICT something >>> along >>> the lines of the following patch should fix it. >>> >>> Signed-off-by: Stephane Doyon >>> >>> Index: linux/fs/xfs/xfs_fsops.c >>> =================================================================== >>> --- linux.orig/fs/xfs/xfs_fsops.c 2006-09-13 11:31:36.000000000 >>> -0400 >>> +++ linux/fs/xfs/xfs_fsops.c 2006-09-13 11:32:06.782591491 -0400 >>> @@ -505,6 +505,7 @@ >>> >>> request = *inval; >>> s = XFS_SB_LOCK(mp); >>> + xfs_icsb_disable_counter(mp, XFS_SBS_FDBLOCKS); >>> >>> /* >>> * If our previous reservation was larger than the current value, >>> @@ -520,14 +521,14 @@ >>> mp->m_resblks = request; >>> } else { >>> delta = request - mp->m_resblks; >>> - lcounter = mp->m_sb.sb_fdblocks - delta; >>> + lcounter = mp->m_sb.sb_fdblocks - XFS_SET_ASIDE_BLOCKS(mp) - >>> delta; >>> if (lcounter < 0) { >>> /* We can't satisfy the request, just get what we can */ >>> - mp->m_resblks += mp->m_sb.sb_fdblocks; >>> - mp->m_resblks_avail += mp->m_sb.sb_fdblocks; >>> - mp->m_sb.sb_fdblocks = 0; >>> + mp->m_resblks += mp->m_sb.sb_fdblocks - >>> XFS_SET_ASIDE_BLOCKS(mp); >>> + mp->m_resblks_avail += mp->m_sb.sb_fdblocks - >>> XFS_SET_ASIDE_BLOCKS(mp); >>> + mp->m_sb.sb_fdblocks = XFS_SET_ASIDE_BLOCKS(mp); >>> } else { >>> - mp->m_sb.sb_fdblocks = lcounter; >>> + mp->m_sb.sb_fdblocks = lcounter + >>> XFS_SET_ASIDE_BLOCKS(mp); >>> mp->m_resblks = request; >>> mp->m_resblks_avail += delta; >>> } >>> Index: linux/fs/xfs/xfs_mount.c >>> =================================================================== >>> --- linux.orig/fs/xfs/xfs_mount.c 2006-09-13 11:31:36.000000000 >>> -0400 >>> +++ linux/fs/xfs/xfs_mount.c 2006-09-13 11:32:06.784591724 -0400 >>> @@ -58,7 +58,6 @@ >>> int, int); >>> STATIC int xfs_icsb_modify_counters_locked(xfs_mount_t *, >>> xfs_sb_field_t, >>> int, int); >>> -STATIC int xfs_icsb_disable_counter(xfs_mount_t *, >>> xfs_sb_field_t); >>> >>> #else >>> >>> @@ -1254,26 +1253,6 @@ 
>>> } >>> >>> /* >>> - * In order to avoid ENOSPC-related deadlock caused by >>> - * out-of-order locking of AGF buffer (PV 947395), we place >>> - * constraints on the relationship among actual allocations for >>> - * data blocks, freelist blocks, and potential file data bmap >>> - * btree blocks. However, these restrictions may result in no >>> - * actual space allocated for a delayed extent, for example, a data >>> - * block in a certain AG is allocated but there is no additional >>> - * block for the additional bmap btree block due to a split of the >>> - * bmap btree of the file. The result of this may lead to an >>> - * infinite loop in xfssyncd when the file gets flushed to disk and >>> - * all delayed extents need to be actually allocated. To get around >>> - * this, we explicitly set aside a few blocks which will not be >>> - * reserved in delayed allocation. Considering the minimum number of >>> - * needed freelist blocks is 4 fsbs _per AG_, a potential split of >>> file's >>> bmap >>> - * btree requires 1 fsb, so we set the number of set-aside blocks >>> - * to 4 + 4*agcount. >>> - */ >>> -#define XFS_SET_ASIDE_BLOCKS(mp) (4 + ((mp)->m_sb.sb_agcount * 4)) >>> - >>> -/* >>> * xfs_mod_incore_sb_unlocked() is a utility routine common used to >>> apply >>> * a delta to a specified field in the in-core superblock. Simply >>> * switch on the field indicated and apply the delta to that field. 
>>> @@ -1906,7 +1885,7 @@ >>> return test_bit(field, &mp->m_icsb_counters); >>> } >>> >>> -STATIC int >>> +int >>> xfs_icsb_disable_counter( >>> xfs_mount_t *mp, >>> xfs_sb_field_t field) >>> Index: linux/fs/xfs/xfs_mount.h >>> =================================================================== >>> --- linux.orig/fs/xfs/xfs_mount.h 2006-09-13 11:31:36.000000000 >>> -0400 >>> +++ linux/fs/xfs/xfs_mount.h 2006-09-13 11:33:24.441557999 -0400 >>> @@ -307,10 +307,14 @@ >>> >>> extern int xfs_icsb_init_counters(struct xfs_mount *); >>> extern void xfs_icsb_sync_counters_lazy(struct xfs_mount *); >>> +/* Can't forward declare typedefs... */ >>> +struct xfs_mount; >>> +extern int xfs_icsb_disable_counter(struct xfs_mount *, >>> xfs_sb_field_t); >>> >>> # else >>> # define xfs_icsb_init_counters(mp) (0) >>> # define xfs_icsb_sync_counters_lazy(mp) do { } while (0) >>> +#define xfs_icsb_disable_counters(mp, field) do { } while (0) >>> #endif >>> >>> typedef struct xfs_mount { >>> @@ -574,6 +578,27 @@ >>> # define XFS_SB_LOCK(mp) mutex_spinlock(&(mp)->m_sb_lock) >>> # define XFS_SB_UNLOCK(mp,s) >>> mutex_spinunlock(&(mp)->m_sb_lock,(s)) >>> >>> + >>> +/* >>> + * In order to avoid ENOSPC-related deadlock caused by >>> + * out-of-order locking of AGF buffer (PV 947395), we place >>> + * constraints on the relationship among actual allocations for >>> + * data blocks, freelist blocks, and potential file data bmap >>> + * btree blocks. However, these restrictions may result in no >>> + * actual space allocated for a delayed extent, for example, a data >>> + * block in a certain AG is allocated but there is no additional >>> + * block for the additional bmap btree block due to a split of the >>> + * bmap btree of the file. The result of this may lead to an >>> + * infinite loop in xfssyncd when the file gets flushed to disk and >>> + * all delayed extents need to be actually allocated. 
To get around >>> + * this, we explicitly set aside a few blocks which will not be >>> + * reserved in delayed allocation. Considering the minimum number of >>> + * needed freelist blocks is 4 fsbs _per AG_, a potential split of >>> file's >>> bmap >>> + * btree requires 1 fsb, so we set the number of set-aside blocks >>> + * to 4 + 4*agcount. >>> + */ >>> +#define XFS_SET_ASIDE_BLOCKS(mp) (4 + ((mp)->m_sb.sb_agcount * 4)) >>> + >>> extern xfs_mount_t *xfs_mount_init(void); >>> extern void xfs_mod_sb(xfs_trans_t *, __int64_t); >>> extern void xfs_mount_free(xfs_mount_t *mp, int remove_bhv); >>> >>> >>> >> >> >> > From owner-xfs@oss.sgi.com Mon Sep 18 14:07:25 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 14:07:31 -0700 (PDT) Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8IL7OZd012221 for ; Mon, 18 Sep 2006 14:07:25 -0700 Received: from agami.com ([192.168.168.132]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8IL6m2c011892 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 18 Sep 2006 14:06:48 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8IL6hu0020382 for ; Mon, 18 Sep 2006 14:06:43 -0700 Received: from [10.125.200.197] ([10.125.200.197]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 18 Sep 2006 14:10:09 -0700 Message-ID: <450F0A5E.4000208@agami.com> Date: Tue, 19 Sep 2006 02:36:38 +0530 From: Shailendra Tripathi User-Agent: Mozilla Thunderbird 1.0.6 (Windows/20050716) X-Accept-Language: en-us, en MIME-Version: 1.0 To: cousins@umit.maine.edu CC: "\"xfs@oss.sgi.com\" " Subject: Re: swidth with mdadm and RAID6 References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 18 Sep 2006 21:10:09.0859 (UTC) FILETIME=[D2EDE130:01C6DB66] X-Scanned-By: MIMEDefang 2.36 
X-archive-position: 9009 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 974 Lines: 47

-shailendra

>Since I have a spare in there do you think it is starting with md.nr_disks
>= 11 and then subtracting two?
>
You can verify that very quickly by removing the spare_disks option and seeing whether it gives proper results.

>Thanks,
>
>Steve
>______________________________________________________________________
> Steve Cousins, Ocean Modeling Group    Email: cousins@umit.maine.edu
> Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
> Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302
>
>On Mon, 18 Sep 2006, Shailendra Tripathi wrote:
>
>>Hi Steve,
>>          Both of us are using old xfsprogs. It is handled in new
>>xfsprogs.
>>
>>	*/
>>	switch (md.level) {
>>	case 6:
>>		md.nr_disks--;
>>		/* fallthrough */
>>	case 5:
>>	case 4:
>>		md.nr_disks--;
>>		/* fallthrough */
>>	case 1:
>>	case 0:
>>	case 10:
>>		break;
>>	default:
>>		return 0;
>>
>>Regards,
>>
>>Shailendra Tripathi wrote:
>>

From owner-xfs@oss.sgi.com Mon Sep 18 15:25:03 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 15:25:25 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8IMOnZd025648 for ; Mon, 18 Sep 2006 15:25:01 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id IAA24896; Tue, 19 Sep 2006 08:23:53 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k8IMNppj5638910; Tue, 19 Sep 2006 08:23:51 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k8IMNmbO5646398; Tue, 19 Sep 2006 08:23:48 +1000 (AEST) Date: Tue, 19
Sep 2006 08:23:48 +1000 From: David Chinner To: christian gattermair Cc: xfs@oss.sgi.com Subject: Re: xfs_check - out of memory | xfs_repair - superblock error reading Message-ID: <20060918222348.GY3034@melbourne.sgi.com> References: <200609181519.18448.christian.gattermair@mci.edu> Mime-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <200609181519.18448.christian.gattermair@mci.edu> User-Agent: Mutt/1.4.2.1i X-archive-position: 9010 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 2368 Lines: 71 On Mon, Sep 18, 2006 at 03:19:18PM +0200, christian gattermair wrote: > hi! > > after a reboot of our box (debian sarge, 3ware controller, raid 5 - 3tb xfs) > we can not mount it any more. > > from syslog: > > Sep 18 12:51:36 localhost kernel: SGI XFS with ACLs, security attributes, > realtime, large block numbers, no debug enabled > Sep 18 12:51:36 localhost kernel: SGI XFS Quota Management subsystem > Sep 18 12:51:53 localhost kernel: attempt to access beyond end of device > Sep 18 12:51:53 localhost kernel: sdb1: rw=0, want=6445069056, > limit=2150101796 > Sep 18 12:51:53 localhost kernel: I/O error in filesystem ("sdb1") meta-data > dev sdb1 block 0x18027f2ff ("xfs_read_buf") error 5 buf count 512 > Sep 18 12:51:53 localhost kernel: XFS: size check 2 failed I/O error - something is not right with your raid controller i think. Are there any other errors in dmesg? What does /proc/partitions tell you about the size of the device? > xfs_check fails with: > > xfs_check /dev/sdb1 > XFS: totally zeroed log > xfs_check: out of memory 3TB filesystem - you won't be able to xfs_check that on a 32 bit system, and you'll need >6GiB RAM to check it on a 64bit system. 
> there is a lot of space (i tryed more swap)
>
>              Mem: 1011 1006 5 0 750 111
> -/+ buffers/cache: 144 867
> Swap: 57812 0 57812
>
> does xfs_check only looks at the mem or also an swap??? is there any hint to
> use the swap?

Sounds like a 32 bit system where a process can't use more than 2-3GB
of RAM. No amount of swap will help if the process requires more than
the maximum that can be addressed per process.

> second question:
>
> xfs_repair works but can not find any superblock. any hints?
>
> xfs_repair /dev/sdb1
> Phase 1 - find and verify superblock...
> error reading superblock 11 -- seek to offset 1134332153856 failed
> couldn't verify primary superblock - bad magic number !!!

As already commented, that's about 1TB into a 3TB volume. I'd suggest
raid controller problems.... Did you boot the same kernel you'd been
running previously?

> the whole system runs one year without any errors. only today one shutdown for
> chaning the usv ....

What's a usv? Did you change anything else?

Cheers,

Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Sep 18 15:40:27 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 15:40:43 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8IMeEZd028072 for ; Mon, 18 Sep 2006 15:40:25 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id IAA25183; Tue, 19 Sep 2006 08:39:21 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k8IMdJpj5651203; Tue, 19 Sep 2006 08:39:19 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k8IMdI8P5650685; Tue, 19 Sep 2006 08:39:18 +1000 (AEST) Date: Tue, 19 Sep 2006 08:39:18 +1000 From: David Chinner To: Stephane Doyon Cc: xfs@oss.sgi.com Subject: Re: File system block reservation mechanism is broken Message-ID: <20060918223918.GZ3034@melbourne.sgi.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.1i X-archive-position: 9011 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1152 Lines: 31 On Fri, Sep 15, 2006 at 12:44:16PM -0400, Stephane Doyon wrote: > [Resending. Seems my previous post did not make it somehow...] > > The mechanism allowing to reserve file system blocks, xfs_reserve_blocks() > / XFS_IOC_SET_RESBLKS, appears to have been broken by the patch that > introduced per-cpu superblock counters. Thanks for finding this, Stephane. 
It turns out our xfsqa test that is supposed to test this feature only tests whether the ioctl succeeds or fails - it doesn't check whether values have been set properly, whether the reservation really is reserved, etc. Hence we've always got false successes from this test and hence it's never been noticed as broken. Your patch is based on a tree that is a little out of date - the allocation set aside code has already been pushed into xfs_reserve_blocks(). Unfortunately I didn't notice that this code didn't work with SMP counters at the same time I realised it needed to obey the set aside restrictions.... I'll have a fix for the problem soon and get the QA test updated to test the ioctl properly. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Mon Sep 18 16:24:35 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 16:24:41 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8INOVZd007935 for ; Mon, 18 Sep 2006 16:24:35 -0700 X-ASG-Debug-ID: 1158617662-12729-298-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by cuda.sgi.com (Spam Firewall) with ESMTP id 04462D17674E for ; Mon, 18 Sep 2006 15:14:22 -0700 (PDT) Received: from agami.com ([192.168.168.132]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8IME52c012758 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 18 Sep 2006 15:14:05 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8IME04g021516 for ; Mon, 18 Sep 2006 15:14:00 -0700 Received: from [10.125.200.197] ([10.125.200.197]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 18 Sep 2006 15:17:25 -0700 Message-ID: <450F1A1F.1020204@agami.com> Date: Tue, 19 Sep 2006 03:43:51 +0530 From: Shailendra Tripathi User-Agent: 
Mozilla Thunderbird 1.0.6 (Windows/20050716) X-Accept-Language: en-us, en MIME-Version: 1.0 To: cousins@umit.maine.edu CC: "\"xfs@oss.sgi.com\" " X-ASG-Orig-Subj: Re: swidth with mdadm and RAID6 Subject: Re: swidth with mdadm and RAID6 References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 18 Sep 2006 22:17:26.0062 (UTC) FILETIME=[38B194E0:01C6DB70] X-Scanned-By: MIMEDefang 2.36 X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21490 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9012 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 3387 Lines: 113

Hi Steve,
         Your guess appears to be correct. md_ioctl returns nr, which is
the total number of disks in the array including the spare disks. However,
the XFS function md_get_vol_stripe does not take spare disks into account.
It needs to subtract spare_disks as well.
         However, md.spare_disks returned by the call returns spare + parity
(both). So, one way could be to subtract spare_disks directly. Otherwise,
XFS should rely on md.raid_disks. This does not include spare_disks, and
nr_disks should be changed for that.

When I run my program md_info on a raid5 array with 5 devices and 2
spares, I get:

[root@ga09 root]# ./a.out /dev/md11
Level 5, disks=7 spare_disks=3 raid_disks=5

Steve, can you please compile the pasted program and run it on your system
with md prepared. It takes /dev/md as input.
In your case, you should get above line as:
Level 6, disks=11 spare_disks=3 raid_disks=10

	nr=working=active=failed=spare=0;
	ITERATE_RDEV(mddev,rdev,tmp) {
		nr++;
		if (rdev->faulty)
			failed++;
		else {
			working++;
			if (rdev->in_sync)
				active++;
			else
				spare++;
		}
	}

	info.level = mddev->level;
	info.size = mddev->size;
	info.nr_disks = nr;
	....
	info.active_disks = active;
	info.working_disks = working;
	info.failed_disks = failed;
	info.spare_disks = spare;

-shailendra

The program is pasted below: md_info.c. Takes /dev/md as name. For example, /dev/md11.

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/types.h>

#ifndef MD_MAJOR
#define MD_MAJOR 9
#endif

#define GET_ARRAY_INFO	_IOR (MD_MAJOR, 0x11, struct md_array_info)

struct md_array_info {
	__uint32_t major_version;
	__uint32_t minor_version;
	__uint32_t patch_version;
	__uint32_t ctime;
	__uint32_t level;
	__uint32_t size;
	__uint32_t nr_disks;
	__uint32_t raid_disks;
	__uint32_t md_minor;
	__uint32_t not_persistent;

	/*
	 * Generic state information
	 */
	__uint32_t utime;		/* 0 Superblock update time */
	__uint32_t state;		/* 1 State bits (clean, ...) */
	__uint32_t active_disks;	/* 2 Number of currently active disks */
	__uint32_t working_disks;	/* 3 Number of working disks */
	__uint32_t failed_disks;	/* 4 Number of failed disks */
	__uint32_t spare_disks;		/* 5 Number of spare disks */

	/*
	 * Personality information
	 */
	__uint32_t layout;		/* 0 the array's physical layout */
	__uint32_t chunk_size;		/* 1 chunk size in bytes */
};

int
main(int argc, char *argv[])
{
	struct md_array_info	md;
	int			fd;

	/* Open device */
	fd = open(argv[1], O_RDONLY);
	if (fd == -1) {
		printf("Could not open %s\n", argv[1]);
		exit(1);
	}
	if (ioctl(fd, GET_ARRAY_INFO, &md)) {
		printf("Error getting MD array info from %s\n", argv[1]);
		exit(1);
	}
	close(fd);
	printf("Level %d, disks=%d spare_disks=%d raid_disks=%d\n",
		md.level, md.nr_disks, md.spare_disks, md.raid_disks);
	return 0;
}

From owner-xfs@oss.sgi.com Mon Sep 18 17:08:34 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 17:08:45 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8J08YZd013824 for ; Mon, 18 Sep 2006 17:08:34 -0700 X-ASG-Debug-ID: 1158620193-13321-899-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from uwast.astro.wisc.edu (uwast.astro.wisc.edu [144.92.110.130]) by cuda.sgi.com (Spam Firewall) with ESMTP id 721153E0DF1 for ; Mon, 18 Sep 2006 15:56:33 -0700 (PDT) Received: from [192.168.1.101] (ppp-70-226-162-117.dsl.mdsnwi.ameritech.net [70.226.162.117]) (authenticated bits=0) by uwast.astro.wisc.edu (8.13.4/8.13.4/SuSE Linux 0.7) with ESMTP id k8IMuBVL029290 for ; Mon, 18 Sep 2006 17:56:13 -0500 Mime-Version: 1.0 (Apple Message framework v752.2) In-Reply-To: <20060918222348.GY3034@melbourne.sgi.com> References: <200609181519.18448.christian.gattermair@mci.edu> <20060918222348.GY3034@melbourne.sgi.com> Content-Type: multipart/signed; micalg=sha1; boundary=Apple-Mail-12--75655696; protocol="application/pkcs7-signature" Message-Id:
<8B10E861-C427-4F9F-B0C4-7A87DB77236B@astro.wisc.edu> From: Stephan Jansen X-ASG-Orig-Subj: Re: xfs_check - out of memory | xfs_repair - superblock error reading Subject: Re: xfs_check - out of memory | xfs_repair - superblock error reading Date: Mon, 18 Sep 2006 17:56:10 -0500 To: xfs@oss.sgi.com X-Mailer: Apple Mail (2.752.2) X-UW-Astronomy-MailScanner-Information: Please contact the ISP for more information X-UW-Astronomy-MailScanner: Found to be clean X-UW-Astronomy-MailScanner-From: jansen@astro.wisc.edu X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21492 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9013 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: jansen@astro.wisc.edu Precedence: bulk X-list: xfs Content-Length: 5191 Lines: 135 --Apple-Mail-12--75655696 Content-Transfer-Encoding: 7bit Content-Type: text/plain; charset=US-ASCII; delsp=yes; format=flowed Hi, On Sep 18, 2006, at 5:23 PM, David Chinner wrote: > On Mon, Sep 18, 2006 at 03:19:18PM +0200, christian gattermair wrote: >> hi! >> >> after a reboot of our box (debian sarge, 3ware controller, raid 5 >> - 3tb xfs) >> we can not mount it any more. 
>> >> from syslog: >> >> Sep 18 12:51:36 localhost kernel: SGI XFS with ACLs, security >> attributes, >> realtime, large block numbers, no debug enabled >> Sep 18 12:51:36 localhost kernel: SGI XFS Quota Management subsystem >> Sep 18 12:51:53 localhost kernel: attempt to access beyond end of >> device >> Sep 18 12:51:53 localhost kernel: sdb1: rw=0, want=6445069056, >> limit=2150101796 >> Sep 18 12:51:53 localhost kernel: I/O error in filesystem ("sdb1") >> meta-data >> dev sdb1 block 0x18027f2ff ("xfs_read_buf") error 5 buf >> count 512 >> Sep 18 12:51:53 localhost kernel: XFS: size check 2 failed > > I/O error - something is not right with your raid controller i think. > Are there any other errors in dmesg? What does /proc/partitions tell > you about the size of the device? > >> xfs_check fails with: >> >> xfs_check /dev/sdb1 >> XFS: totally zeroed log >> xfs_check: out of memory > > 3TB filesystem - you won't be able to xfs_check that on a 32 bit > system, > and you'll need >6GiB RAM to check it on a 64bit system. > > I was just going to create a 3TB filesystem on a 32 bit system. So xfs_check will not work? How about xfs_repair? I assume that will work but would like to know beforehand. [stuff deleted] > Cheers, > > Dave. 
> --
> Dave Chinner
> Principal Engineer
> SGI Australian Software Group

--
-----
Stephan

From owner-xfs@oss.sgi.com Mon Sep 18 17:28:12 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 17:28:21 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8J0S9Zd017422 for ; Mon, 18 Sep 2006 17:28:11 -0700 Received: from pcbnaujok (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com
(950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA27712; Tue, 19 Sep 2006 10:27:26 +1000 Message-Id: <200609190027.KAA27712@larry.melbourne.sgi.com> From: "Barry Naujok" To: "'Stephan Jansen'" , Subject: RE: xfs_check - out of memory | xfs_repair - superblock error reading Date: Tue, 19 Sep 2006 10:33:31 +1000 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Office Outlook, Build 11.0.6353 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.2962 Thread-Index: AcbbgACooF8xqKdKTiiTLNBjMQnu2gAAt1FA In-Reply-To: <8B10E861-C427-4F9F-B0C4-7A87DB77236B@astro.wisc.edu> X-archive-position: 9014 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 801 Lines: 22

> -----Original Message-----
> From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com]
> On Behalf Of Stephan Jansen
> Sent: Tuesday, 19 September 2006 8:56 AM
> To: xfs@oss.sgi.com
> Subject: Re: xfs_check - out of memory | xfs_repair -
> superblock error reading
>
> I was just going to create a 3TB filesystem on a 32 bit system. So
> xfs_check will not work? How about xfs_repair? I assume that will
> work but would like to know beforehand.

Just to let you know, I'm currently working on memory optimisations for
xfs_repair and when it's released, it should work on your system. Memory
usage will grow with inode count and free space fragmentation, not with
filesystem size as it currently does. The first set of changes has been
done and is currently being tested.
From owner-xfs@oss.sgi.com Mon Sep 18 22:12:02 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 22:12:09 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8J5BwaG031273 for ; Mon, 18 Sep 2006 22:12:00 -0700 Received: from [134.14.55.141] (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA03651; Tue, 19 Sep 2006 15:11:12 +1000 Message-ID: <450F7C1E.5020300@sgi.com> Date: Tue, 19 Sep 2006 15:11:58 +1000 From: Timothy Shimmin User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: Shailendra Tripathi CC: cousins@umit.maine.edu, "\"xfs@oss.sgi.com\" " Subject: Re: swidth with mdadm and RAID6 References: <450F1A1F.1020204@agami.com> In-Reply-To: <450F1A1F.1020204@agami.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 9015 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Content-Length: 2272 Lines: 67 Hi Shailendra and Steve, Shailendra Tripathi wrote: > Hi Steve, > Your guess appears to be correct. md_ioctl returns nr which > is total number of disk in the array including the spare disks. However, > XFS function md_get_vol_stripe does not take spare disk into account. It > needs to subtract spare_disks as well. > However, md.spare_disks returned by the call returns spare + parity > (both). So, one way could be substract spare_disks directly. Otherwise, > the xfs should rely on md.raid_disks. This does not include spare_disks > and nr.disks should be changed for that. 
> When I run my program md_info on raid5 array with 5 devices and 2 > spares, I get > [root@ga09 root]# ./a.out /dev/md11 > Level 5, disks=7 spare_disks=3 raid_disks=5 > > Steve can you please compile the pasted program and run on your system > with md prepared. It takes /dev/md as input. > In your case, you should get above line as: > Level 6, disks=11 spare disks=3 raid_disks=10 > > nr=working=active=failed=spare=0; > ITERATE_RDEV(mddev,rdev,tmp) { > nr++; > if (rdev->faulty) > failed++; > else { > working++; > if (rdev->in_sync) > active++; > else > spare++; > } > } > > info.level = mddev->level; > info.size = mddev->size; > info.nr_disks = nr; > .... > info.active_disks = active; > info.working_disks = working; > info.failed_disks = failed; > info.spare_disks = spare; > > -shailendra I'm not that au fait with RAID and md, but looking at what you wrote, Shailendra, and the md code, instead of your suggestions (what I think are your suggestions:) of: (1) subtracting parity from md.raid_disk (instead of md.nr_disks) where we work out parity by switching on md.level or (2) using directly: (md.nr_disks - md.spares); that instead we could use: (3) using directly: md.active_disks i.e. *swidth = *sunit * md.active_disks; I presume that active is the working non spares and non-parity. Does that make sense? 
--Tim From owner-xfs@oss.sgi.com Mon Sep 18 23:44:55 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 18 Sep 2006 23:45:04 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8J6isaG014364 for ; Mon, 18 Sep 2006 23:44:55 -0700 X-ASG-Debug-ID: 1158648254-14598-142-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by cuda.sgi.com (Spam Firewall) with ESMTP id 90006D177233 for ; Mon, 18 Sep 2006 23:44:14 -0700 (PDT) Received: from agami.com ([192.168.168.132]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8J6iE2c017899 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 18 Sep 2006 23:44:14 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8J6i9Ao027803 for ; Mon, 18 Sep 2006 23:44:09 -0700 Received: from [10.12.12.141] ([10.12.12.141]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 18 Sep 2006 23:47:33 -0700 Message-ID: <450F91D4.1030606@agami.com> Date: Tue, 19 Sep 2006 12:14:36 +0530 From: Shailendra Tripathi User-Agent: Mozilla Thunderbird 0.9 (X11/20041127) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Timothy Shimmin CC: cousins@umit.maine.edu, "\"xfs@oss.sgi.com\" " X-ASG-Orig-Subj: Re: swidth with mdadm and RAID6 Subject: Re: swidth with mdadm and RAID6 References: <450F1A1F.1020204@agami.com> <450F7C1E.5020300@sgi.com> In-Reply-To: <450F7C1E.5020300@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 19 Sep 2006 06:47:34.0093 (UTC) FILETIME=[7C8077D0:01C6DBB7] X-Scanned-By: MIMEDefang 2.36 X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21514 
Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9016 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 1082 Lines: 29

Hi Tim,

> I'm not that au fait with RAID and md, but looking at what you wrote,
> Shailendra, and the md code, instead of your suggestions
> (what I think are your suggestions:) of:
>
> (1) subtracting parity from md.raid_disk (instead of md.nr_disks)
> where we work out parity by switching on md.level
> or
> (2) using directly: (md.nr_disks - md.spares);
>
> that instead we could use:
> (3) using directly: md.active_disks
>
> i.e.
> *swidth = *sunit * md.active_disks;
> I presume that active is the working non spares and non-parity.
>
> Does that make sense?

I agree with you that for an operational raid, since there would not
be any faulty disks, active_disks should be the number of disks. However,
I am just concerned that active_disks tracks live disks (not failed
disks). If we ever used these commands when the system has a faulty drive,
the information returned wouldn't be correct. Though, from the XFS
perspective, I can't think of where it can happen.
I would still say let's rely on raid_disks to be more conservative,
just my choice.
From owner-xfs@oss.sgi.com Tue Sep 19 00:02:52 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Sep 2006 00:03:02 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8J72maG019244 for ; Tue, 19 Sep 2006 00:02:50 -0700 Received: from [134.14.55.141] (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA05797; Tue, 19 Sep 2006 17:02:01 +1000 Message-ID: <450F9617.2020603@sgi.com> Date: Tue, 19 Sep 2006 17:02:47 +1000 From: Timothy Shimmin User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: Shailendra Tripathi CC: cousins@umit.maine.edu, "\"xfs@oss.sgi.com\" " Subject: Re: swidth with mdadm and RAID6 References: <450F1A1F.1020204@agami.com> <450F7C1E.5020300@sgi.com> <450F91D4.1030606@agami.com> In-Reply-To: <450F91D4.1030606@agami.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 9017 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Content-Length: 1354 Lines: 37 Shailendra Tripathi wrote: > > Hi Tim, > >> I'm not that au fait with RAID and md, but looking at what you wrote, >> Shailendra, and the md code, instead of your suggestions >> (what I think are your suggestions:) of: >> >> (1) subtracting parity from md.raid_disk (instead of md.nr_disks) >> where we work out parity by switching on md.level >> or >> (2) using directly: (md.nr_disks - md.spares); >> >> that instead we could use: >> (3) using directly: md.active_disks >> >> i.e. >> *swidth = *sunit * md.active_disks; >> I presume that active is the working non spares and non-parity. >> >> Does that make sense? > I agree with you that for operational raid since there would not > be any faulty disks, active disks should the number of disks. 
However, I
> am just concerned that active disks tracks live disks (not failed
> disks). If we ever used these commands when the system has a faulty drive,
> the information returned wouldn't be correct. Though, from an XFS
> perspective, I can't think of where it can happen.
> I would still say let's rely more on raid_disks, to be more
> conservative; just my choice.

I see your point. I can just change md_get_subvol_stripe():
s/nr_disks/raid_disks/
I just liked the idea of removing the switch statement, which could potentially get out of date in the future. Too bad :)

--Tim

From owner-xfs@oss.sgi.com Tue Sep 19 02:08:06 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Sep 2006 02:08:19 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8J980aG011970 for ; Tue, 19 Sep 2006 02:08:06 -0700 X-ASG-Debug-ID: 1158652499-6002-242-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.vcc.de (mail.vcc.de [217.111.2.122]) by cuda.sgi.com (Spam Firewall) with ESMTP id 18ABA452DDF for ; Tue, 19 Sep 2006 00:54:59 -0700 (PDT) Received: from localhost (localhost [127.0.0.1]) by mail.vcc.de (Postfix) with ESMTP id 395741C81EE; Tue, 19 Sep 2006 09:54:57 +0200 (CEST) Received: from mail.vcc.de ([127.0.0.1]) by localhost (mail.vcc.de [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 32328-06; Tue, 19 Sep 2006 09:54:55 +0200 (CEST) Received: from [192.9.209.64] (wolverine.vcc.de [217.111.2.200]) by mail.vcc.de (Postfix) with ESMTP id 6DEFE1C81ED; Tue, 19 Sep 2006 09:54:55 +0200 (CEST) Message-ID: <450FA2A2.10909@opticalart.de> Date: Tue, 19 Sep 2006 09:56:18 +0200 From: Frank Hellmann User-Agent: Thunderbird 1.5.0.5 (X11/20060728) MIME-Version: 1.0 To: Stephan Jansen CC: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: xfs_check - out of memory | xfs_repair - superblock error reading Subject: Re: xfs_check - out of memory | xfs_repair - superblock error reading References:
<200609181519.18448.christian.gattermair@mci.edu> <20060918222348.GY3034@melbourne.sgi.com> <8B10E861-C427-4F9F-B0C4-7A87DB77236B@astro.wisc.edu> In-Reply-To: <8B10E861-C427-4F9F-B0C4-7A87DB77236B@astro.wisc.edu> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21519 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9018 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: frank@opticalart.de Precedence: bulk X-list: xfs Content-Length: 1118 Lines: 35

Hi,

Stephan Jansen wrote:
>
> Hi,
>
> I was just going to create a 3TB filesystem on a 32 bit system. So
> xfs_check will not work? How about xfs_repair? I assume that will
> work but would like to know beforehand.
>
> [stuff deleted]
>
> ----- Stephan
>

I have a couple of 3.1TB FC arrays here and I had no real problems with xfs on them so far. We have to reset the machines from time to time, because nvidia drivers and data moving lock up the complete machine, but even then no data losses occurred. The latest kernel we use (2.6.16) checks the filesystems/logs without any hiccups during the mount phase. Command-line xfs_check won't work (out of memory error), but checking with xfs_repair works fine here. It will need a lot of memory though, so have at least 3GB RAM installed.

Cheers,
Frank...
-- -------------------------------------------------------------------------- Frank Hellmann Optical Art GmbH Waterloohain 7a DI Supervisor http://www.opticalart.de 22769 Hamburg frank@opticalart.de Tel: ++49 40 5111051 Fax: ++49 40 43169199 From owner-xfs@oss.sgi.com Tue Sep 19 04:24:19 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Sep 2006 04:24:31 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8JBOHaG004608 for ; Tue, 19 Sep 2006 04:24:18 -0700 X-ASG-Debug-ID: 1158660961-12344-640-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from garda.mci.edu (lxoffice.mci.edu [193.171.232.20]) by cuda.sgi.com (Spam Firewall) with ESMTP id CCEF1D1783CD for ; Tue, 19 Sep 2006 03:16:01 -0700 (PDT) Received: from localhost (localhost.localdomain [127.0.0.1]) by garda.mci.edu (Postfix) with ESMTP id 56053104711 for ; Tue, 19 Sep 2006 12:29:06 +0200 (CEST) Received: from garda.mci.edu ([127.0.0.1]) by localhost (rekim [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 18164-03 for ; Tue, 19 Sep 2006 10:28:57 +0000 (UTC) Received: from crados (pc61vw.mci.edu [193.171.235.61]) by garda.mci.edu (Postfix) with ESMTP id 3FB621143D3 for ; Tue, 19 Sep 2006 12:16:36 +0200 (CEST) From: christian gattermair To: linux-xfs@oss.sgi.com X-ASG-Orig-Subj: Re: xfs_check - out of memory | xfs_repair - superblock error reading Subject: Re: xfs_check - out of memory | xfs_repair - superblock error reading Date: Tue, 19 Sep 2006 12:03:21 +0200 User-Agent: KMail/1.9.4 References: <200609181519.18448.christian.gattermair@mci.edu> <450EB025.5020007@oss.sgi.com> In-Reply-To: <450EB025.5020007@oss.sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200609191203.22083.christian.gattermair@mci.edu> X-Barracuda-Spam-Score: 0.50 X-Barracuda-Spam-Status: No, SCORE=0.50 using per-user scores of 
TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=BSF_RULE7568M X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21526 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.50 BSF_RULE7568M BODY: Custom Rule 7568M X-archive-position: 9019 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: christian.gattermair@mci.edu Precedence: bulk X-list: xfs Content-Length: 3148 Lines: 83

hi!

thanks for all your answers

parted:

Disk geometry for /dev/sda: 0.000-3147012,000 megabytes
Disk label type: msdos
Minor    Start       End     Type      Filesystem  Flags
1          0,031 1049854,423  primary   xfs

cat /proc/partitions

   8     0 3222540288 sda
   8     1 1075050898 sda1

tw_cli (3ware raid tool)

Unit  UnitType  Status  %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    OK      -      64K     3073.25   ON     OFF      OFF

Port  Status  Unit  Size       Blocks     Serial
---------------------------------------------------------------
p0    OK      u0    279.46 GB  586072368  3NF04F4D
p1    OK      u0    279.46 GB  586072368  3NF09YMR
p2    OK      u0    279.46 GB  586072368  3NF0AGN2
p3    OK      u0    279.46 GB  586072368  3NF0D6ZM
p4    OK      u0    279.46 GB  586072368  3NF0BG47
p5    OK      u0    279.46 GB  586072368  3NF09YQT
p6    OK      u0    279.46 GB  586072368  3NF02SMD
p7    OK      u0    279.46 GB  586072368  3NF01YL5
p8    OK      u0    279.46 GB  586072368  3NF02J46
p9    OK      u0    279.46 GB  586072368  3NF04EYE
p10   OK      u0    279.46 GB  586072368  3NF0ECRG
p11   OK      u0    279.46 GB  586072368  3NF071X5

Name  OnlineState  BBUReady  Status  Volt  Temp  Hours  LastCapTest
---------------------------------------------------------------------------
bbu   On           Yes       OK      OK    OK    255    23-Aug-2005

dmesg:

3ware 9000 Storage Controller device driver for Linux v2.26.02.001.
ACPI: PCI interrupt 0000:04:02.0[A] -> GSI 52 (level, low) -> IRQ 185
scsi_proc_hostdir_add: proc_mkdir failed for 3w-9xxx:
scsi0: AEN: INFO (0x04:0x0055): :.
3w-9xxx: scsi0: AEN: INFO (0x04:0x0053): :.
scsi0 : 3ware 9000 Storage Controller
3w-9xxx: scsi0: Found a 3ware 9000 Storage Controller at 0xfeaf0000, IRQ: 185.
3w-9xxx: scsi0: Firmware FE9X 2.06.00.009, BIOS BE9X 2.03.01.051, Ports: 12.
  Vendor: AMCC     Model: 9500S-12 DISK    Rev: 2.06
  Type:   Direct-Access                    ANSI SCSI revision: 03
sda : very big device. try to use READ CAPACITY(16).
SCSI device sda: 6445080576 512-byte hdwr sectors (3299881 MB)
SCSI device sda: drive cache: write back, no read (daft)
 /dev/scsi/host0/bus0/target0/lun0: p1
Attached scsi disk sda at scsi0, channel 0, id 0, lun 0

What is a USV? Sorry, a German abbreviation; I mean a UPS.

No, I have not changed anything on the system. A normal shutdown for changing the UPS, and then boot. And now I can not mount the drive ....

Yes, it is the same kernel (debian 2.6.8-2-686-smp).

I have now updated xfsprogs to 2.8.11 and I get a new error message with xfs_check:

xfs_check /dev/sda1
XFS: Log inconsistent (didn't find previous header)
XFS: failed to find log head
ERROR: cannot find log head/tail, run xfs_repair

I will try ...
and hope with friendly greetings, christian gattermair From owner-xfs@oss.sgi.com Tue Sep 19 06:20:21 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Sep 2006 06:20:29 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8JDKKaG028858 for ; Tue, 19 Sep 2006 06:20:21 -0700 X-ASG-Debug-ID: 1158671982-26441-68-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from garda.mci.edu (rekim.mci.edu [193.171.232.20]) by cuda.sgi.com (Spam Firewall) with ESMTP id 566154580EF for ; Tue, 19 Sep 2006 06:19:42 -0700 (PDT) Received: from localhost (localhost.localdomain [127.0.0.1]) by garda.mci.edu (Postfix) with ESMTP id 973871046E8 for ; Tue, 19 Sep 2006 15:32:29 +0200 (CEST) Received: from garda.mci.edu ([127.0.0.1]) by localhost (rekim [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 23817-06 for ; Tue, 19 Sep 2006 13:32:29 +0000 (UTC) Received: from crados (pc61vw.mci.edu [193.171.235.61]) by garda.mci.edu (Postfix) with ESMTP id 7F8A81046DA for ; Tue, 19 Sep 2006 15:32:29 +0200 (CEST) From: christian gattermair To: linux-xfs@oss.sgi.com X-ASG-Orig-Subj: Re: xfs_check - out of memory | xfs_repair - superblock error reading Subject: Re: xfs_check - out of memory | xfs_repair - superblock error reading Date: Tue, 19 Sep 2006 15:19:13 +0200 User-Agent: KMail/1.9.4 MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200609191519.13467.christian.gattermair@mci.edu> X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21537 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9021 X-ecartis-version: Ecartis v1.0.0 Sender: 
xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: christian.gattermair@mci.edu Precedence: bulk X-list: xfs Content-Length: 244 Lines: 16

hi!

xfs_repair

.....................................Sorry, could not find valid secondary superblock
Exiting now.

Data loss, or is there any option to recover anything?

thanks for any tip or hint!

with friendly greetings,
christian gattermair

From owner-xfs@oss.sgi.com Tue Sep 19 09:07:35 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Sep 2006 09:07:44 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8JG7ZaG028247 for ; Tue, 19 Sep 2006 09:07:35 -0700 X-ASG-Debug-ID: 1158678176-29178-311-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by cuda.sgi.com (Spam Firewall) with ESMTP id EA641D1783BB for ; Tue, 19 Sep 2006 08:02:57 -0700 (PDT) Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id k8JF2sqD030693; Tue, 19 Sep 2006 11:02:54 -0400 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id k8JF2smc008890; Tue, 19 Sep 2006 11:02:54 -0400 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id k8JF2oJX017988; Tue, 19 Sep 2006 11:02:53 -0400 Message-ID: <4510069A.6050508@sandeen.net> Date: Tue, 19 Sep 2006 10:02:50 -0500 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.5 (X11/20060808) MIME-Version: 1.0 To: christian gattermair CC: linux-xfs@oss.sgi.com X-ASG-Orig-Subj: Re: xfs_check - out of memory | xfs_repair - superblock error reading Subject: Re: xfs_check - out of memory | xfs_repair - superblock error reading References: <200609181519.18448.christian.gattermair@mci.edu> <450EB025.5020007@oss.sgi.com>
<200609191203.22083.christian.gattermair@mci.edu> In-Reply-To: <200609191203.22083.christian.gattermair@mci.edu> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21541 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9023 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 1161 Lines: 41 christian gattermair wrote: > hi! > > thanks for all your answers > > > parted: > > Disk geometry for /dev/sda: 0.000-3147012,000 megabytes > Disk label type: msdos > Minor Start End Type Filesystem Flags > 1 0,031 1049854,423 primary xfs > > cat /proc/partitions > > 8 0 3222540288 sda > 8 1 1075050898 sda1 sda1 is only 1 terabyte. sda itself appears to be about 3 terabytes. I think you need to sort out where your filesystem is living... Did someone repartition sda? > Vendor: AMCC Model: 9500S-12 DISK Rev: 2.06 > Type: Direct-Access ANSI SCSI revision: 03 > sda : very big device. try to use READ CAPACITY(16). Or perhaps this is the 2T lun problem... although you say it was working before. At any rate this isn't looking like an xfs problem at this stage - your kernel thinks that your storage is smaller than you think it is. 
-Eric > SCSI device sda: 6445080576 512-byte hdwr sectors (3299881 MB) > SCSI device sda: drive cache: write back, no read (daft) > /dev/scsi/host0/bus0/target0/lun0: p1 > Attached scsi disk sda at scsi0, channel 0, id 0, lun 0 From owner-xfs@oss.sgi.com Tue Sep 19 09:59:01 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Sep 2006 09:59:09 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8JGx0aG007093 for ; Tue, 19 Sep 2006 09:59:01 -0700 X-ASG-Debug-ID: 1158685102-334-770-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by cuda.sgi.com (Spam Firewall) with ESMTP id 76E44457987 for ; Tue, 19 Sep 2006 09:58:22 -0700 (PDT) Received: from agami.com ([192.168.168.132]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8JGwH2c023378 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Tue, 19 Sep 2006 09:58:21 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8JGwC9v002380 for ; Tue, 19 Sep 2006 09:58:12 -0700 Received: from [10.12.12.141] ([10.12.12.141]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Tue, 19 Sep 2006 10:01:35 -0700 Message-ID: <451021C0.8040701@agami.com> Date: Tue, 19 Sep 2006 22:28:40 +0530 From: Shailendra Tripathi User-Agent: Mozilla Thunderbird 0.9 (X11/20041127) X-Accept-Language: en-us, en MIME-Version: 1.0 To: cousins@umit.maine.edu CC: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: swidth with mdadm and RAID6 Subject: Re: swidth with mdadm and RAID6 References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 19 Sep 2006 17:01:35.0953 (UTC) FILETIME=[43F62C10:01C6DC0D] X-Scanned-By: MIMEDefang 2.36 X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 
QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21546 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9024 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 509 Lines: 17

> Hi Shailendra,
>
> I ran the program and it reports:
>
> Level 6, disks=11 spare_disks=1 raid_disks=10
>
> which looks good. I don't understand why you got:
>
> Level 5, disks=7 spare_disks=3 raid_disks=5
>
> Why would it have 3 spare_disks?

Perhaps you are running a more recent kernel than mine, and spare_disks now reports only actual spares. It did seem a little weird that it reported spare_disks as 3. get_array_info was changed in recent kernels, and that should explain the difference.

From owner-xfs@oss.sgi.com Tue Sep 19 10:51:14 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Sep 2006 10:51:20 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8JHpEaG015639 for ; Tue, 19 Sep 2006 10:51:14 -0700 X-ASG-Debug-ID: 1158683789-32514-431-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from limpet.umeoce.maine.edu (limpet.umeoce.maine.edu [130.111.192.115]) by cuda.sgi.com (Spam Firewall) with ESMTP id B0DE5455D3F for ; Tue, 19 Sep 2006 09:36:29 -0700 (PDT) Received: from localhost (cousins@localhost) by limpet.umeoce.maine.edu (8.9.3/8.9.3) with ESMTP id MAA02470; Tue, 19 Sep 2006 12:36:20 -0400 Date: Tue, 19 Sep 2006 12:36:20 -0400 (EDT) From: Steve Cousins Reply-To: cousins@umit.maine.edu To: Shailendra Tripathi cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: swidth with mdadm and RAID6 Subject: Re: swidth with mdadm and RAID6 In-Reply-To: Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN;
charset=US-ASCII X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21546 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9026 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cousins@limpet.umeoce.maine.edu Precedence: bulk X-list: xfs Content-Length: 4189 Lines: 139

Hi Shailendra,

I ran the program and it reports:

Level 6, disks=11 spare_disks=1 raid_disks=10

which looks good. I don't understand why you got:

Level 5, disks=7 spare_disks=3 raid_disks=5

Why would it have 3 spare_disks?

Thanks,

Steve

______________________________________________________________________
 Steve Cousins, Ocean Modeling Group    Email: cousins@umit.maine.edu
 Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
 Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302

On Mon, 18 Sep 2006, Shailendra Tripathi wrote:

> Hi Steve,
> Your guess appears to be correct. md_ioctl returns nr, which
> is the total number of disks in the array, including the spare disks. However,
> the XFS function md_get_subvol_stripe does not take spare disks into account. It
> needs to subtract spare_disks as well.
> However, md.spare_disks returned by the call returns spare + parity
> (both). So, one way could be to subtract spare_disks directly. Otherwise,
> XFS should rely on md.raid_disks. This does not include spare_disks,
> and nr_disks should be changed for that.
>
> When I run my program md_info on a raid5 array with 5 devices and 2
> spares, I get
> [root@ga09 root]# ./a.out /dev/md11
> Level 5, disks=7 spare_disks=3 raid_disks=5
>
> Steve, can you please compile the pasted program and run it on your system
> with md prepared. It takes /dev/md as input.
> In your case, you should get the above line as:
> Level 6, disks=11 spare_disks=3 raid_disks=10
>
> nr=working=active=failed=spare=0;
> ITERATE_RDEV(mddev,rdev,tmp) {
> 	nr++;
> 	if (rdev->faulty)
> 		failed++;
> 	else {
> 		working++;
> 		if (rdev->in_sync)
> 			active++;
> 		else
> 			spare++;
> 	}
> }
>
> info.level = mddev->level;
> info.size = mddev->size;
> info.nr_disks = nr;
> ....
> info.active_disks = active;
> info.working_disks = working;
> info.failed_disks = failed;
> info.spare_disks = spare;
>
> -shailendra
> The program is pasted below:
> md_info.c. Takes /dev/md as name. For example, /dev/md11.
>
> #include <stdio.h>
> #include <fcntl.h>
> #include <sys/ioctl.h>
> #ifndef MD_MAJOR
> #define MD_MAJOR 9
> #endif
>
> #define GET_ARRAY_INFO _IOR (MD_MAJOR, 0x11, struct md_array_info)
>
> struct md_array_info {
> 	__uint32_t major_version;
> 	__uint32_t minor_version;
> 	__uint32_t patch_version;
> 	__uint32_t ctime;
> 	__uint32_t level;
> 	__uint32_t size;
> 	__uint32_t nr_disks;
> 	__uint32_t raid_disks;
> 	__uint32_t md_minor;
> 	__uint32_t not_persistent;
> 	/*
> 	 * Generic state information
> 	 */
> 	__uint32_t utime;         /* 0 Superblock update time */
> 	__uint32_t state;         /* 1 State bits (clean, ...) */
> 	__uint32_t active_disks;  /* 2 Number of currently active disks */
> 	__uint32_t working_disks; /* 3 Number of working disks */
> 	__uint32_t failed_disks;  /* 4 Number of failed disks */
> 	__uint32_t spare_disks;   /* 5 Number of spare disks */
> 	/*
> 	 * Personality information
> 	 */
> 	__uint32_t layout;        /* 0 the array's physical layout */
> 	__uint32_t chunk_size;    /* 1 chunk size in bytes */
> };
>
> int main(int argc, char *argv[])
> {
> 	struct md_array_info md;
> 	int fd;
>
> 	/* Open device */
> 	fd = open(argv[1], O_RDONLY);
> 	if (fd == -1) {
> 		printf("Could not open %s\n", argv[1]);
> 		exit(1);
> 	}
> 	if (ioctl(fd, GET_ARRAY_INFO, &md)) {
> 		printf("Error getting MD array info from %s\n", argv[1]);
> 		exit(1);
> 	}
> 	close(fd);
> 	printf("Level %d, disks=%d spare_disks=%d raid_disks=%d\n",
> 		md.level, md.nr_disks,
> 		md.spare_disks, md.raid_disks);
> 	return 0;
> }

From owner-xfs@oss.sgi.com Tue Sep 19 10:51:14 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Sep 2006 10:51:17 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8JHpDaG015623 for ; Tue, 19 Sep 2006 10:51:14 -0700 X-ASG-Debug-ID: 1158685986-21846-132-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from limpet.umeoce.maine.edu (limpet.umeoce.maine.edu [130.111.192.115]) by cuda.sgi.com (Spam Firewall) with ESMTP id 287D245452E for ; Tue, 19 Sep 2006 10:13:06 -0700 (PDT) Received: from localhost (cousins@localhost) by limpet.umeoce.maine.edu (8.9.3/8.9.3) with ESMTP id NAA02545; Tue, 19 Sep 2006 13:13:02 -0400 Date: Tue, 19 Sep 2006 13:13:02 -0400 (EDT) From: Steve Cousins Reply-To: cousins@umit.maine.edu To: Shailendra Tripathi cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: swidth with mdadm and RAID6 Subject: Re: swidth with mdadm and RAID6 In-Reply-To: Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No,
SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21546 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9025 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cousins@limpet.umeoce.maine.edu Precedence: bulk X-list: xfs Content-Length: 4792 Lines: 158 On Tue, 19 Sep 2006, Steve Cousins wrote: > > Hi Shailendra, > > I ran the program and it reports: > > Level 6, disks=11 spare_disks=1 raid_disks=10 > > which looks good. I don't understand why you got: To me this looks correct but I was re-reading your original message and you said: > > In your case, you should get above line as: > > Level 6, disks=11 spare disks=3 raid_disks=10 I don't understand why we should expect parity disks to be included as spare disks. Steve > Level 5, disks=7 spare_disks=3 raid_disks=5 > > Why would it have 3 spare_disks? > > Thanks, > > Steve > > ______________________________________________________________________ > Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu > Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu > Univ. of Maine, Orono, ME 04469 Phone: (207) 581-4302 > > On Mon, 18 Sep 2006, Shailendra Tripathi wrote: > > > Hi Steve, > > Your guess appears to be correct. md_ioctl returns nr which > > is total number of disk in the array including the spare disks. However, > > XFS function md_get_vol_stripe does not take spare disk into account. It > > needs to subtract spare_disks as well. > > However, md.spare_disks returned by the call returns spare + parity > > (both). So, one way could be substract spare_disks directly. Otherwise, > > the xfs should rely on md.raid_disks. This does not include spare_disks > > and nr.disks should be changed for that. 
> > > > When I run my program md_info on raid5 array with 5 devices and 2 > > spares, I get > > [root@ga09 root]# ./a.out /dev/md11 > > Level 5, disks=7 spare_disks=3 raid_disks=5 > > > > Steve can you please compile the pasted program and run on your system > > with md prepared. It takes /dev/md as input. > > In your case, you should get above line as: > > Level 6, disks=11 spare disks=3 raid_disks=10 > > > > nr=working=active=failed=spare=0; > > ITERATE_RDEV(mddev,rdev,tmp) { > > nr++; > > if (rdev->faulty) > > failed++; > > else { > > working++; > > if (rdev->in_sync) > > active++; > > else > > spare++; > > } > > } > > > > info.level = mddev->level; > > info.size = mddev->size; > > info.nr_disks = nr; > > .... > > info.active_disks = active; > > info.working_disks = working; > > info.failed_disks = failed; > > info.spare_disks = spare; > > > > -shailendra > > The program is pasted below: > > md_info.c. Takes /dev/md as name. For example, /dev/md11. > > > > #include > > #include > > #include > > #ifndef MD_MAJOR > > #define MD_MAJOR 9 > > #endif > > > > #define GET_ARRAY_INFO _IOR (MD_MAJOR, 0x11, struct md_array_info) > > > > > > struct md_array_info { > > __uint32_t major_version; > > __uint32_t minor_version; > > __uint32_t patch_version; > > __uint32_t ctime; > > __uint32_t level; > > __uint32_t size; > > __uint32_t nr_disks; > > __uint32_t raid_disks; > > __uint32_t md_minor; > > __uint32_t not_persistent; > > /* > > * Generic state information > > */ > > __uint32_t utime; /* 0 Superblock update time */ > > __uint32_t state; /* 1 State bits (clean, ...) 
*/ > > __uint32_t active_disks; /* 2 Number of currently active disks */ > > __uint32_t working_disks; /* 3 Number of working disks */ > > __uint32_t failed_disks; /* 4 Number of failed disks */ > > __uint32_t spare_disks; /* 5 Number of spare disks */ > > /* > > * Personality information > > */ > > __uint32_t layout; /* 0 the array's physical layout */ > > __uint32_t chunk_size; /* 1 chunk size in bytes */ > > > > }; > > > > int main(int argc, char *argv[]) > > { > > struct md_array_info md; > > int fd; > > > > > > /* Open device */ > > fd = open(argv[1], O_RDONLY); > > if (fd == -1) { > > printf("Could not open %s\n", argv[1]); > > exit(1); > > } > > if (ioctl(fd, GET_ARRAY_INFO, &md)) { > > printf("Error getting MD array info from %s\n", argv[1]); > > exit(1); > > } > > close(fd); > > printf("Level %d, disks=%d spare_disks=%d raid_disks=%d\n", > > md.level, md.nr_disks, > > md.spare_disks, md.raid_disks); > > return 0; > > } > > > > > > > > > > > > > > > > From owner-xfs@oss.sgi.com Tue Sep 19 12:23:03 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Sep 2006 12:23:13 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8JJN2aG032733 for ; Tue, 19 Sep 2006 12:23:03 -0700 X-ASG-Debug-ID: 1158693743-19056-941-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from limpet.umeoce.maine.edu (limpet.umeoce.maine.edu [130.111.192.115]) by cuda.sgi.com (Spam Firewall) with ESMTP id F3448D176759 for ; Tue, 19 Sep 2006 12:22:23 -0700 (PDT) Received: from localhost (cousins@localhost) by limpet.umeoce.maine.edu (8.9.3/8.9.3) with ESMTP id PAA02665; Tue, 19 Sep 2006 15:22:16 -0400 Date: Tue, 19 Sep 2006 15:22:16 -0400 (EDT) From: Steve Cousins Reply-To: cousins@umit.maine.edu To: Shailendra Tripathi cc: "xfs@oss.sgi.com" X-ASG-Orig-Subj: Re: swidth with mdadm and RAID6 Subject: Re: swidth with mdadm and RAID6 In-Reply-To: Message-ID: MIME-Version: 1.0 Content-Type: 
TEXT/PLAIN; charset=US-ASCII X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21553 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9027 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cousins@limpet.umeoce.maine.edu Precedence: bulk X-list: xfs Content-Length: 576 Lines: 25

On Tue, 19 Sep 2006, Steve Cousins wrote:

> This is a 2.6.17 kernel. So, with this in mind, is there a change that I
> should try in libdisk/md.c? Tim had suggested:
>
>   s/nr_disks/raid_disks/
>
> Would this be sufficient? Or should nr_disks be initialized as raid_disks
> and then go into the switch clause?

I ended up just adding:

  md.nr_disks = md.raid_disks;

right before the switch statement and it worked fine in my situation. Not sure how this would work with other kernels etc. but I'll let you figure that out.

Thanks very much for your help.
Steve From owner-xfs@oss.sgi.com Tue Sep 19 13:07:48 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Sep 2006 13:07:51 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8JK7laG005828 for ; Tue, 19 Sep 2006 13:07:48 -0700 X-ASG-Debug-ID: 1158696428-15743-266-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from limpet.umeoce.maine.edu (limpet.umeoce.maine.edu [130.111.192.115]) by cuda.sgi.com (Spam Firewall) with ESMTP id 4EC2E456BC7 for ; Tue, 19 Sep 2006 13:07:09 -0700 (PDT) Received: from localhost (cousins@localhost) by limpet.umeoce.maine.edu (8.9.3/8.9.3) with ESMTP id NAA02570; Tue, 19 Sep 2006 13:52:45 -0400 Date: Tue, 19 Sep 2006 13:52:45 -0400 (EDT) From: Steve Cousins Reply-To: cousins@umit.maine.edu To: Shailendra Tripathi cc: "xfs@oss.sgi.com" X-ASG-Orig-Subj: Re: swidth with mdadm and RAID6 Subject: Re: swidth with mdadm and RAID6 In-Reply-To: Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21555 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9028 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cousins@limpet.umeoce.maine.edu Precedence: bulk X-list: xfs Content-Length: 872 Lines: 33 On Tue, 19 Sep 2006, Shailendra Tripathi wrote: > >> Hi Shailendra, > >> > >> I ran the program and it reports: > >> > >> Level 6, disks=11 spare_disks=1 raid_disks=10 > >> > >> which looks good. I don't understand why you got: > >> > >> Level 5, disks=7 spare_disks=3 raid_disks=5 > >> > >> Why would it have 3 spare_disks? 
> > > Perhaps you are running a more recent kernel than mine, and spare_disks
> now reports only actual spares. It did appear a little weird that it
> reported spare_disks as 3. get_array_info is changed in recent kernels
> and that should explain this difference.

This is a 2.6.17 kernel. So, with this in mind, is there a change that I should try in libdisk/md.c? Tim had suggested:

	s/nr_disks/raid_disks/

Would this be sufficient? Or should nr_disks be initialized as raid_disks and then go into the switch clause?

Steve

From owner-xfs@oss.sgi.com Tue Sep 19 13:20:53 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Sep 2006 13:20:55 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8JKKpaG008310 for ; Tue, 19 Sep 2006 13:20:53 -0700 X-ASG-Debug-ID: 1158697209-28809-915-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by cuda.sgi.com (Spam Firewall) with ESMTP id 22947D17675C for ; Tue, 19 Sep 2006 13:20:10 -0700 (PDT) Received: from agami.com ([192.168.168.132]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8JKK62c026002 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Tue, 19 Sep 2006 13:20:08 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8JKK1Zb005927 for ; Tue, 19 Sep 2006 13:20:01 -0700 Received: from [10.125.200.204] ([10.125.200.204]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Tue, 19 Sep 2006 13:23:23 -0700 Message-ID: <451050E7.40806@agami.com> Date: Wed, 20 Sep 2006 01:49:51 +0530 From: Shailendra Tripathi User-Agent: Mozilla Thunderbird 1.0.6 (Windows/20050716) X-Accept-Language: en-us, en MIME-Version: 1.0 To: cousins@umit.maine.edu CC: "xfs@oss.sgi.com" X-ASG-Orig-Subj: Re: swidth with mdadm and RAID6 Subject: Re: swidth with mdadm and RAID6 References: In-Reply-To:
Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 19 Sep 2006 20:23:24.0765 (UTC) FILETIME=[756040D0:01C6DC29] X-Scanned-By: MIMEDefang 2.36 X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21556 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9030 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 1685 Lines: 88 Steve Cousins wrote: >>a 2.6.17 kernel. So, with this in mind, is there a change that I >>should try in libdisk/md.c? Tim had suggested: >> >> s/nr_disks/raid_disks/ >> >>Would this be sufficient? Or should nr_disks be initialized as raid_disks >>and then go into the switch clause? >> >> > >I ended up just adding: > > md.nr_disks = md.raid_disks; > >right be fore the switch statement and it worked fine in my situation. >Not sure how this would work with other kernels etc. but I'll let you >figure that out. > >Thanks very much for your help. > >Steve > > > Hi Steve, Technically speaking, you are doing the same thing. However, just write the function below to avoid any confusion. int md_get_subvol_stripe( char *dfile, sv_type_t type, int *sunit, int *swidth, int *sectalign, struct stat64 *sb) { if (mnt_is_md_subvol(sb->st_rdev)) { struct md_array_info md; int fd; /* Open device */ fd = open(dfile, O_RDONLY); if (fd == -1) return 0; /* Is this thing on... */ if (ioctl(fd, GET_ARRAY_INFO, &md)) { fprintf(stderr, _("Error getting MD array info from %s\n"), dfile); exit(1); } close(fd); /* * Ignore levels we don't want aligned (e.g. 
linear)
	 * and deduct disk(s) from stripe width on RAID4/5/6
	 */
	switch (md.level) {
	case 6:
		md.raid_disks--;
		/* fallthrough */
	case 5:
	case 4:
		md.raid_disks--;
		/* fallthrough */
	case 1:
	case 0:
	case 10:
		break;
	default:
		return 0;
	}

	/* Update sizes */
	*sunit = md.chunk_size >> 9;
	*swidth = *sunit * md.raid_disks;
	*sectalign = (md.level == 4 || md.level == 5 || md.level == 6);

		return 1;
	}
	return 0;
}

From owner-xfs@oss.sgi.com Tue Sep 19 22:24:12 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 19 Sep 2006 22:25:07 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8K5OBaG018562 for ; Tue, 19 Sep 2006 22:24:12 -0700 X-ASG-Debug-ID: 1158725654-16402-583-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com (Spam Firewall) with ESMTP id 868FFD1783A6 for ; Tue, 19 Sep 2006 21:14:14 -0700 (PDT) Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 46830180173DC; Tue, 19 Sep 2006 23:14:13 -0500 (CDT) Message-ID: <4510C017.4040200@oss.sgi.com> Date: Tue, 19 Sep 2006 23:14:15 -0500 From: linux-xfs@oss.sgi.com User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: Eric Sandeen CC: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: [PATCH] kill leftover WANT_FUNCS macro indirection Subject: Re: [PATCH] kill leftover WANT_FUNCS macro indirection References: <44CAE247.6020608@sandeen.net> In-Reply-To: <44CAE247.6020608@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.55 X-Barracuda-Spam-Status: No, SCORE=0.55 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=NO_REAL_NAME X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21580 Rule breakdown below
pts rule name description ---- ---------------------- -------------------------------------------------- 0.55 NO_REAL_NAME From: does not include a real name X-archive-position: 9031 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: linux-xfs@oss.sgi.com Precedence: bulk X-list: xfs Content-Length: 227 Lines: 10 Eric Sandeen wrote: > This gets rid of some pointless macro defines... I had a version that > lower-cased it all too but Nathan liked this better, and he's the man! :) > > -Eric Hm, what was the verdict on this one? -Eric From owner-xfs@oss.sgi.com Wed Sep 20 05:56:44 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 20 Sep 2006 05:56:50 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8KCueaG025811 for ; Wed, 20 Sep 2006 05:56:43 -0700 Received: from [127.0.0.1] (sshgate.corp.sgi.com [198.149.36.12]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id VAA10040; Wed, 20 Sep 2006 21:41:15 +1000 Message-ID: <451128D8.2030704@melbourne.sgi.com> Date: Wed, 20 Sep 2006 21:41:12 +1000 From: David Chatterton Reply-To: chatz@melbourne.sgi.com Organization: SGI User-Agent: Thunderbird 1.5.0.7 (Windows/20060909) MIME-Version: 1.0 To: Eric Sandeen CC: xfs@oss.sgi.com Subject: Re: [PATCH] kill leftover WANT_FUNCS macro indirection References: <44CAE247.6020608@sandeen.net> <4510C017.4040200@oss.sgi.com> In-Reply-To: <4510C017.4040200@oss.sgi.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 9035 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: chatz@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 560 Lines: 29 Eric, We already have a large number of changes queued up for 2.6.19, I'd prefer that we don't add any more right now unless 
they are bug fixes. I've raised a pv with the patch so that it doesn't get dropped. Thanks, David linux-xfs@oss.sgi.com wrote: > Eric Sandeen wrote: >> This gets rid of some pointless macro defines... I had a version that >> lower-cased it all too but Nathan liked this better, and he's the man! :) >> >> -Eric > > Hm, what was the verdict on this one? > > -Eric > -- David Chatterton XFS Engineering Manager SGI Australia From owner-xfs@oss.sgi.com Wed Sep 20 07:34:32 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 20 Sep 2006 07:34:37 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8KEYWaG004943 for ; Wed, 20 Sep 2006 07:34:32 -0700 X-ASG-Debug-ID: 1158759237-12268-192-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com (Spam Firewall) with ESMTP id 23CF545697F for ; Wed, 20 Sep 2006 06:33:58 -0700 (PDT) Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id CFB48185FC82B; Wed, 20 Sep 2006 08:33:56 -0500 (CDT) Message-ID: <45114346.8020905@sandeen.net> Date: Wed, 20 Sep 2006 08:33:58 -0500 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: chatz@melbourne.sgi.com CC: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: [PATCH] kill leftover WANT_FUNCS macro indirection Subject: Re: [PATCH] kill leftover WANT_FUNCS macro indirection References: <44CAE247.6020608@sandeen.net> <4510C017.4040200@oss.sgi.com> <451128D8.2030704@melbourne.sgi.com> In-Reply-To: <451128D8.2030704@melbourne.sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= 
X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21609 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9036 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 719 Lines: 32 David Chatterton wrote: > Eric, > > We already have a large number of changes queued up for 2.6.19, I'd prefer > that we don't add any more right now unless they are bug fixes. > > I've raised a pv with the patch so that it doesn't get dropped. No problem, and I'm not hung up about the change, just wondered if I should keep it in my patch stack or if it had been dismissed as undesirable. Thanks, -Eric > Thanks, > > David > > > linux-xfs@oss.sgi.com wrote: >> Eric Sandeen wrote: >>> This gets rid of some pointless macro defines... I had a version that >>> lower-cased it all too but Nathan liked this better, and he's the man! :) >>> >>> -Eric >> Hm, what was the verdict on this one? 
>> >> -Eric >> > From owner-xfs@oss.sgi.com Wed Sep 20 08:36:27 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 20 Sep 2006 08:36:32 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8KFaPaG016554 for ; Wed, 20 Sep 2006 08:36:27 -0700 X-ASG-Debug-ID: 1158766546-20338-480-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from garda.mci.edu (intern.mci4me.at [193.171.232.20]) by cuda.sgi.com (Spam Firewall) with ESMTP id 7109DD110A5F for ; Wed, 20 Sep 2006 08:35:46 -0700 (PDT) Received: from localhost (localhost.localdomain [127.0.0.1]) by garda.mci.edu (Postfix) with ESMTP id 5F9061046A8; Wed, 20 Sep 2006 17:48:41 +0200 (CEST) Received: from garda.mci.edu ([127.0.0.1]) by localhost (rekim [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 21487-09; Wed, 20 Sep 2006 15:48:41 +0000 (UTC) Received: from crados (pc61vw.mci.edu [193.171.235.61]) by garda.mci.edu (Postfix) with ESMTP id 4E5E11045D1; Wed, 20 Sep 2006 17:48:41 +0200 (CEST) From: christian gattermair To: Eric Sandeen , linux-xfs@oss.sgi.com X-ASG-Orig-Subj: Re: xfs_check - out of memory | xfs_repair - superblock error reading Subject: Re: xfs_check - out of memory | xfs_repair - superblock error reading Date: Wed, 20 Sep 2006 17:35:17 +0200 User-Agent: KMail/1.9.4 References: <200609181519.18448.christian.gattermair@mci.edu> <200609191203.22083.christian.gattermair@mci.edu> <4510069A.6050508@sandeen.net> In-Reply-To: <4510069A.6050508@sandeen.net> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200609201735.17479.christian.gattermair@mci.edu> X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21613 Rule breakdown below pts rule name description ---- 
---------------------- -------------------------------------------------- X-archive-position: 9037 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: christian.gattermair@mci.edu Precedence: bulk X-list: xfs Content-Length: 1432 Lines: 52 hi! thanks for all your answers > > parted: > > > > Disk geometry for /dev/sda: 0.000-3147012,000 megabytes > > Disk label type: msdos > > Minor Start End Type Filesystem Flags > > 1 0,031 1049854,423 primary xfs > > > > cat /proc/partitions > > > > 8 0 3222540288 sda > > 8 1 1075050898 sda1 > > sda1 is only 1 terabyte. sda itself appears to be about 3 terabytes. > > I think you need to sort out where your filesystem is living... > > Did someone repartition sda? no repartition of sda or other partitions .... sda1 was the whole space of sda (3t) > > Vendor: AMCC Model: 9500S-12 DISK Rev: 2.06 > > Type: Direct-Access ANSI SCSI revision: 03 > > sda : very big device. try to use READ CAPACITY(16). > > Or perhaps this is the 2T lun problem... although you say it was working > before. At any rate this isn't looking like an xfs problem at this > stage - your kernel thinks that your storage is smaller than you think > it is. how can i bring back the right size? today i have compiled a 2.6.18 kernel. same error .... any ideas? 
thanks for help with friendly greetings, christian gattermair > -Eric > > > SCSI device sda: 6445080576 512-byte hdwr sectors (3299881 MB) > > SCSI device sda: drive cache: write back, no read (daft) > > /dev/scsi/host0/bus0/target0/lun0: p1 > > Attached scsi disk sda at scsi0, channel 0, id 0, lun 0 From owner-xfs@oss.sgi.com Wed Sep 20 23:46:51 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 20 Sep 2006 23:46:58 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8L6kjaG022228 for ; Wed, 20 Sep 2006 23:46:49 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA04058; Thu, 21 Sep 2006 16:46:02 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 1116) id BCDE358CF853; Thu, 21 Sep 2006 16:46:02 +1000 (EST) To: sgi.bugs.xfs@engr.sgi.com, xfs@oss.sgi.com Subject: TAKE 956363 956371 - md.raid_disks, __u32 from asm/types.h Message-Id: <20060921064602.BCDE358CF853@chook.melbourne.sgi.com> Date: Thu, 21 Sep 2006 16:46:02 +1000 (EST) From: tes@sgi.com (Tim Shimmin) X-archive-position: 9040 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Content-Length: 1542 Lines: 36 Update xfsprogs so that the libdisk/md.c code sets stripe width using number of raid-disks and doesn't include spare disks. Make xfsprogs get its __u32 & friends types from <asm/types.h> if it exists.
Date: Thu Sep 21 16:44:55 AEST 2006 Workarea: chook.melbourne.sgi.com:/build/tes/xfs-cmds Inspected by: stripathi@agami.com,bnaujok@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:27016a xfsprogs/configure.in - 1.37 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/configure.in.diff?r1=text&tr1=1.37&r2=text&tr2=1.36&f=h - Add an AC_CHECK_TYPES for __u32 in <asm/types.h>. xfsprogs/VERSION - 1.163 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/VERSION.diff?r1=text&tr1=1.163&r2=text&tr2=1.162&f=h - Bump to 2.8.13. xfsprogs/doc/CHANGES - 1.220 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/doc/CHANGES.diff?r1=text&tr1=1.220&r2=text&tr2=1.219&f=h - Describe changes for 2.8.13. xfsprogs/libdisk/md.c - 1.19 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/libdisk/md.c.diff?r1=text&tr1=1.19&r2=text&tr2=1.18&f=h - Use raid_disks in lieu of nr_disks so that we don't include spares. Thanks to Shailendra. xfsprogs/include/platform_defs.h.in - 1.34 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/include/platform_defs.h.in.diff?r1=text&tr1=1.34&r2=text&tr2=1.33&f=h - Don't define the __u32 family types if we can get them from <asm/types.h>.
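The platform_defs.h.in change described in this TAKE amounts to guarding the fallback typedefs behind the new configure check. A minimal sketch of the idea, assuming the `HAVE___U32` symbol that `AC_CHECK_TYPES([__u32], ...)` conventionally generates (the real xfsprogs header differs in detail):

```c
#include <assert.h>

/*
 * Sketch of the platform_defs.h.in idea: prefer the kernel-provided
 * fixed-width types, and only define them ourselves when configure
 * could not find them.  HAVE___U32 is the conventional AC_CHECK_TYPES
 * output symbol for the type __u32; treat the exact spelling as an
 * assumption about the actual xfsprogs configure output.
 */
#ifdef HAVE___U32
#include <asm/types.h>			/* __u8, __u16, __u32, __u64 */
#else
typedef unsigned char		__u8;
typedef unsigned short		__u16;
typedef unsigned int		__u32;
typedef unsigned long long	__u64;
#endif
```

Either way the code after this guard can rely on the usual 8/16/32/64-bit widths.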
From owner-xfs@oss.sgi.com Thu Sep 21 00:35:19 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 00:35:26 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8L7ZFaG001152 for ; Thu, 21 Sep 2006 00:35:17 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA03647; Thu, 21 Sep 2006 16:29:09 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k8L6T7pj7773222; Thu, 21 Sep 2006 16:29:08 +1000 (AEST) Received: (from bnaujok@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k8L6T5Dp7773509; Thu, 21 Sep 2006 16:29:05 +1000 (AEST) Date: Thu, 21 Sep 2006 16:29:05 +1000 (AEST) From: Barry Naujok Message-Id: <200609210629.k8L6T5Dp7773509@snort.melbourne.sgi.com> To: sgi.bugs.xfs@engr.sgi.com Cc: xfs@oss.sgi.com Subject: TAKE 956445 - X-archive-position: 9041 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@snort.melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 685 Lines: 19 Fixed three issues in v2 directory checks in phase 6. Date: Thu Sep 21 16:28:39 AEST 2006 Workarea: snort.melbourne.sgi.com:/home/bnaujok/isms/repair Inspected by: tes@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:27015a xfsprogs/doc/CHANGES - 1.219 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/doc/CHANGES.diff?r1=text&tr1=1.219&r2=text&tr2=1.218&f=h xfsprogs/repair/phase6.c - 1.33 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/repair/phase6.c.diff?r1=text&tr1=1.33&r2=text&tr2=1.32&f=h - Fixed three issues in v2 directory checks in phase 6. 
From owner-xfs@oss.sgi.com Thu Sep 21 01:35:51 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 01:36:01 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8L8ZoaG008559 for ; Thu, 21 Sep 2006 01:35:51 -0700 X-ASG-Debug-ID: 1158823830-16031-383-1 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from hao.com (unknown [59.107.12.155]) by cuda.sgi.com (Spam Firewall) with ESMTP id 6D956D10E747 for ; Thu, 21 Sep 2006 00:30:32 -0700 (PDT) From: owir@hao.com X-ASG-Orig-Subj: Invitation Letter for 100th China Export Commodities fair Subject: Invitation Letter for 100th China Export Commodities fair To: linux-xfs@oss.sgi.com Content-Type: text/html;charset="GB2312" Date: Thu, 21 Sep 2006 15:29:26 +0800 X-Priority: 4 X-Mailer: Foxmail 4.2 [cn] Message-Id: <20060921073032.6D956D10E747@cuda.sgi.com> X-Barracuda-Spam-Score: 1.79 X-Barracuda-Spam-Status: No, SCORE=1.79 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=HTML_FONT_BIG, MAILTO_TO_SPAM_ADDR, MIME_HEADER_CTYPE_ONLY, MIME_HTML_ONLY, MSGID_FROM_MTA_ID, NO_REAL_NAME X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21621 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.55 NO_REAL_NAME From: does not include a real name 0.70 MSGID_FROM_MTA_ID Message-Id for external message added locally 0.28 MAILTO_TO_SPAM_ADDR URI: Includes a link to a likely spammer email 0.26 HTML_FONT_BIG BODY: HTML tag for a big font size 0.00 MIME_HTML_ONLY BODY: Message only has text/html MIME parts 0.00 MIME_HEADER_CTYPE_ONLY 'Content-Type' found without required MIME headers X-archive-position: 9042 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: owir@hao.com Precedence: bulk X-list: xfs Content-Length: 21936 Lines: 278 Invitation Letter

Invitation Letter

for 100th China Export Commodities Fair

 

The Chinese Export Commodities Fair, also called the Canton Fair, has been held twice a year, in spring and autumn, since it was inaugurated in the spring of 1957. It is China's largest trade fair and the one of the highest level (No. 3 in the world), with the most complete range of products and the largest attendance and business turnover. Preserving its traditions, the Fair is a comprehensive and multifunctional event of international importance.

The 100th Canton Fair will be held in the Pazhou Complex and the Liuhua Complex at the same time. The details are as follows.

Phase I (Oct. 15th-20th):

  Pazhou Complex (Industrial Products):
    Household Electrical Appliances, Electronics and IT Products, Lamps &
    Lighting Fixtures, Tools, Machinery & Equipment, Small vehicles & spare
    parts, Hardware, Building Materials, Chemical Products & Mineral
    Products, Vehicles & Construction Machinery

  Liuhua Complex (Textiles & Garments, Medicines & Health Products):
    Garments, Household Textiles, Carpets & Tapestries, Textile Raw
    Materials & Fabrics, Artex, Furs, Leather, Down & related products,
    Footwear & Headgear, Medicines, Health Products & Hospital Equipment

Phase II (Oct. 25th-30th):

  Pazhou Complex (Consumer Goods):
    Articles of Daily Use, Native Produce & Animal By-products, Furniture,
    Ceramics, House ware, Kitchenware & Tableware, Cases and Bags,
    Foodstuffs & Tea, Stone & Iron Products

  Liuhua Complex (Gifts):
    Gifts, Decorations, Toys, Wickerwork Articles, Horticultural Products,
    Clocks, Watches & Optical Instruments, Office Supplies, Sporting Goods,
    Tour Equipment & Casual Goods, Jewelry, Bone Carvings & Jade Carvings

 

 

Service for Canton Fair

1. Supply advertisement service for magazines of the Canton Fair

2. Supply Canton Fair provisional visiting card service

3. Supply hotel reservation for group visitors during the Canton Fair

 

Contact Person: Ms. Alice

E-mail: alicesnow2006@163.com 

 

 

Visit Intention List

Company Name:
Address:
E-mail:
Contact Person:
Mobile:
Office Telephone:
Fax:
Products Purchased:
Whether the invitation letter is needed (Yes or No):

                   Contact Person: Ms. Alice

                    Fax: +86-20-85648232


From owner-xfs@oss.sgi.com Thu Sep 21 12:40:57 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 12:41:04 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8LJeuaG023182 for ; Thu, 21 Sep 2006 12:40:57 -0700 X-ASG-Debug-ID: 1158863930-26759-876-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from agminet01.oracle.com (agminet01.oracle.com [141.146.126.228]) by cuda.sgi.com (Spam Firewall) with ESMTP id 25FD2D10D3B6 for ; Thu, 21 Sep 2006 11:38:50 -0700 (PDT) Received: from rgmgw2.us.oracle.com (rgmgw2.us.oracle.com [138.1.186.111]) by agminet01.oracle.com (Switch-3.1.7/Switch-3.1.7) with ESMTP id k8LIcGKu015203; Thu, 21 Sep 2006 13:38:18 -0500 Received: from [141.144.83.5] (dhcp-amer-whq-csvpn-gw3-141-144-83-5.vpn.oracle.com [141.144.83.5]) by rgmgw2.us.oracle.com (Switch-3.1.7/Switch-3.1.7) with ESMTP id k8LIcDFt028925; Thu, 21 Sep 2006 12:38:14 -0600 Message-ID: <4512DC15.8050101@oracle.com> Date: Thu, 21 Sep 2006 11:38:13 -0700 From: Zach Brown User-Agent: Thunderbird 1.5.0.5 (X11/20060808) MIME-Version: 1.0 To: Veerendra Chandrappa CC: linux-fsdevel@vger.kernel.org, linux-aio@kvack.org, linux-kernel@vger.kernel.org, suparna@in.ibm.com, xfs@oss.sgi.com X-ASG-Orig-Subj: Re: [RFC 0/5] dio: clean up completion phase of direct_io_worker() Subject: Re: [RFC 0/5] dio: clean up completion phase of direct_io_worker() References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Brightmail-Tracker: AAAAAQAAAAI= X-Brightmail-Tracker: AAAAAQAAAAI= X-Whitelist: TRUE X-Whitelist: TRUE X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21623 Rule breakdown below pts rule name description ---- ---------------------- 
-------------------------------------------------- X-archive-position: 9047 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: zach.brown@oracle.com Precedence: bulk X-list: xfs Content-Length: 1898 Lines: 54

> on EXT2, EXT3 and XFS filesystems. For the EXT2 and EXT3 filesystems the
> tests went okay. But I got a stack trace on the XFS filesystem and the machine
> went down.

Fantastic, thanks for running these tests.

> kernel BUG at kernel/workqueue.c:113!
> EIP is at queue_work+0x86/0x90

We were able to set the pending bit but then found that list_empty() failed on the work queue's entry list_head. Let's call this memory corruption of some kind.

> [] xfs_finish_ioend+0x20/0x22
> [] xfs_end_io_direct+0x3c/0x68
> [] dio_complete+0xe3/0xfe
> [] dio_bio_end_aio+0x98/0xb1
> [] bio_endio+0x4e/0x78
> [] __end_that_request_first+0xcd/0x416

It was completing an AIO request.

	ret = blockdev_direct_IO_own_locking(rw, iocb, inode,
			iomap.iomap_target->bt_bdev, iov,
			offset, nr_segs,
			xfs_get_blocks_direct,
			xfs_end_io_direct);
	if (unlikely(ret <= 0 && iocb->private))
		xfs_destroy_ioend(iocb->private);

It looks like xfs_vm_direct_io() is destroying the ioend in the case where direct IO is returning -EIOCBQUEUED. Later the AIO will complete and try to call queue_work on the freed ioend. This wasn't a problem before when blkdev_direct_IO_*() would just return the number of bytes in the op that was in flight.

That test should be

	if (unlikely(ret != -EIOCBQUEUED && iocb->private))

I'll update the patch set and send it out.

This makes me worry that XFS might have other paths that need to know about the magical -EIOCBQUEUED case which actually means that an AIO DIO is in flight. Could I coerce some XFS guys into investigating whether we might have other problems with trying to bubble -EIOCBQUEUED up from blockdev_direct_IO_own_locking() through to xfs_file_aio_write()'s caller before calling xfs_end_io_direct()?
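The corrected condition reduces to a small predicate. Here is a hedged, standalone sketch of it, assuming (as the thread implies) that iocb->private is non-NULL exactly while an ioend still needs tearing down; the EIOCBQUEUED value below matched the kernel-internal one at the time, but treat it as illustrative:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Kernel-internal "async iocb queued" marker; value illustrative here. */
#define EIOCBQUEUED	529

/*
 * Sketch of the corrected test: destroy the ioend only when the direct
 * I/O path is really done with it.  -EIOCBQUEUED means an AIO DIO is
 * still in flight and its completion will consume the ioend itself, so
 * tearing it down here would free memory that queue_work() later
 * touches -- the crash in the trace quoted in this thread.
 */
static int
should_destroy_ioend(long ret, const void *iocb_private)
{
	return ret != -EIOCBQUEUED && iocb_private != NULL;
}
```

With the original `ret <= 0` form, the `-EIOCBQUEUED` return (which is negative) wrongly fell into the destroy path.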
- z From owner-xfs@oss.sgi.com Thu Sep 21 15:34:25 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 15:34:33 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8LMYOaG016984 for ; Thu, 21 Sep 2006 15:34:25 -0700 X-ASG-Debug-ID: 1158878024-9422-364-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by cuda.sgi.com (Spam Firewall) with ESMTP id 357B745BDEC for ; Thu, 21 Sep 2006 15:33:45 -0700 (PDT) Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id k8LMXSa8002239; Thu, 21 Sep 2006 18:33:28 -0400 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id k8LMXSZv022665; Thu, 21 Sep 2006 18:33:28 -0400 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id k8LMXPdi006626; Thu, 21 Sep 2006 18:33:26 -0400 Message-ID: <45131334.6050803@sandeen.net> Date: Thu, 21 Sep 2006 17:33:24 -0500 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.7 (X11/20060913) MIME-Version: 1.0 To: Linux Kernel Mailing List , xfs mailing list X-ASG-Orig-Subj: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch Subject: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.60 X-Barracuda-Spam-Status: No, SCORE=0.60 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=MARKETING_SUBJECT X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21637 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.60 MARKETING_SUBJECT Subject contains popular 
marketing words X-archive-position: 9050 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 3131 Lines: 90

The inode diet patch in -mm unhooked xfs_preferred_iosize from the stat call:

--- a/fs/xfs/linux-2.6/xfs_vnode.c
+++ b/fs/xfs/linux-2.6/xfs_vnode.c
@@ -122,7 +122,6 @@ vn_revalidate_core(
 	inode->i_blocks = vap->va_nblocks;
 	inode->i_mtime = vap->va_mtime;
 	inode->i_ctime = vap->va_ctime;
-	inode->i_blksize = vap->va_blocksize;
 	if (vap->va_xflags & XFS_XFLAG_IMMUTABLE)

This in turn breaks the largeio mount option for xfs:

	largeio/nolargeio
		If "nolargeio" is specified, the optimal I/O reported in
		st_blksize by stat(2) will be as small as possible to allow
		user applications to avoid inefficient read/modify/write
		I/O.  If "largeio" is specified, a filesystem that has a
		"swidth" specified will return the "swidth" value (in
		bytes) in st_blksize.  If the filesystem does not have a
		"swidth" specified but does specify an "allocsize" then
		"allocsize" (in bytes) will be returned instead.  If
		neither of these two options are specified, then the
		filesystem will behave as if "nolargeio" was specified.

and the (undocumented?) allocsize mount option as well.
For a filesystem like this with sunit/swidth specified,

meta-data=/dev/sda1              isize=512    agcount=32, agsize=7625840 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=244026880, imaxpct=25
         =                       sunit=16     swidth=16 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

stat on a stock FC6 kernel w/ the largeio mount option returns only the page size:

[root@link-07]# mount -o largeio /dev/sda1 /mnt/test/
[root@link-07]# stat -c %o /mnt/test/foo
4096

with the following patch, it does what it should:

[root@link-07]# mount -o largeio /dev/sda1 /mnt/test/
[root@link-07]# stat -c %o /mnt/test/foo
65536

same goes for filesystems w/o sunit,swidth but with the allocsize mount option. stock:

[root@link-07]# mount -o largeio,allocsize=32768 /dev/sda1 /mnt/test/
[root@link-07]# stat -c %o /mnt/test/foo
4096

w/ patch:

[root@link-07]# mount -o largeio,allocsize=32768 /dev/sda1 /mnt/test/
[root@link-07]# stat -c %o /mnt/test/foo
32768

Signed-off-by: Eric Sandeen

XFS guys, does this look ok?
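The st_blksize preference order the largeio documentation describes can be sketched as a tiny helper. Names and the flag-style interface here are hypothetical (the real xfs_preferred_iosize() reads the same information out of the xfs_mount), but the selection logic matches the transcripts above.

```c
#include <assert.h>

/*
 * Sketch of the largeio/allocsize preference order quoted above.
 * Hypothetical standalone helper, not the real xfs_preferred_iosize().
 * All sizes are in bytes; 0 means "not configured".
 */
static unsigned int
preferred_iosize(
	int		largeio,	/* mounted with -o largeio? */
	unsigned int	swidth_bytes,	/* stripe width, 0 if none */
	unsigned int	allocsize,	/* allocsize mount option, 0 if none */
	unsigned int	pagesize)
{
	if (!largeio)
		return pagesize;	/* nolargeio: as small as possible */
	if (swidth_bytes)
		return swidth_bytes;	/* swidth wins when present */
	if (allocsize)
		return allocsize;	/* else fall back to allocsize */
	return pagesize;		/* neither: behave as nolargeio */
}
```

With swidth of 16 blocks of 4096 bytes this reproduces the transcripts: 65536 with largeio, 4096 without, and 32768 when only allocsize=32768 is set.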
Index: linux-2.6.18/fs/xfs/linux-2.6/xfs_iops.c
===================================================================
--- linux-2.6.18.orig/fs/xfs/linux-2.6/xfs_iops.c
+++ linux-2.6.18/fs/xfs/linux-2.6/xfs_iops.c
@@ -623,12 +623,16 @@ xfs_vn_getattr(
 {
 	struct inode	*inode = dentry->d_inode;
 	bhv_vnode_t	*vp = vn_from_inode(inode);
+	xfs_inode_t	*ip;
 	int		error = 0;

 	if (unlikely(vp->v_flag & VMODIFIED))
 		error = vn_revalidate(vp);
-	if (!error)
+	if (!error) {
 		generic_fillattr(inode, stat);
+		ip = xfs_vtoi(vp);
+		stat->blksize = xfs_preferred_iosize(ip->i_mount);
+	}
 	return -error;
 }

From owner-xfs@oss.sgi.com Thu Sep 21 17:07:48 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 17:07:57 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8M07laG029790 for ; Thu, 21 Sep 2006 17:07:48 -0700 X-ASG-Debug-ID: 1158879142-18033-79-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from excu-mxob-1.symantec.com (excu-mxob-1.symantec.com [198.6.49.12]) by cuda.sgi.com (Spam Firewall) with ESMTP id 1ED5BD10E76D for ; Thu, 21 Sep 2006 15:52:22 -0700 (PDT) Received: from tus1opsmtapin01.ges.symantec.com (tus1opsmtapin01.ges.symantec.com [192.168.214.43]) by excu-mxob-1.symantec.com (8.13.7/8.13.7) with ESMTP id k8LMqJoB016263 for ; Thu, 21 Sep 2006 15:52:20 -0700 (PDT) Received: from [10.137.18.172] (helo=SVLXCHCON2.enterprise.veritas.com) by tus1opsmtapin01.ges.symantec.com with esmtp (Exim 4.52) id 1GQXOz-000820-36 for xfs@oss.sgi.com; Thu, 21 Sep 2006 15:52:17 -0700 Received: from SVL1XCHEVSPIN01.enterprise.veritas.com ([166.98.169.10]) by SVLXCHCON2.enterprise.veritas.com with Microsoft SMTPSVC(6.0.3790.211); Thu, 21 Sep 2006 15:52:19 -0700 X-MimeOLE: Produced By Microsoft Exchange V6.5.7226.0 MIME-Version: 1.0 X-ASG-Orig-Subj: xfs_read_buf error 5. Subject: xfs_read_buf error 5.
Date: Thu, 21 Sep 2006 15:52:18 -0700 Message-ID: <43FB1967D03EC7449A77FA91322E364802A9AA07@SVL1XCHCLUPIN01.enterprise.veritas.com> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: xfs_read_buf error 5. Thread-Index: Acbd0JdfGEmEr6cURB6QJ+DPhOLE4g== From: "Nikhil Kulkarni" To: X-OriginalArrivalTime: 21 Sep 2006 22:52:19.0541 (UTC) FILETIME=[97BE6050:01C6DDD0] X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21638 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- Content-Type: text/plain Content-Disposition: inline Content-Transfer-Encoding: 7bit X-archive-position: 9051 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nikhil_kulkarni@symantec.com Precedence: bulk X-list: xfs Content-Length: 984 Lines: 54

Hi,

We are running the latest stable kernel 2.6.18 on RedHat Enterprise 4 Update 4. When I create an xfs partition and assign it a size > 2GB, I get the following errors during bootup:

Sep 21 15:44:06 ssimppi7 kernel: sda: sda1
Sep 21 15:44:06 ssimppi7 kernel: sda1: rw=0, want=7167993856, limit=2873026796
Sep 21 15:44:06 ssimppi7 kernel: I/O error in filesystem ("sda1") meta-data dev sda1 block 0x1ab3ee7ff ("xfs_read_buf") error 5 buf count 512
Sep 21 15:43:51 ssimppi7 mount: mount: /dev/sda1: can't read superblock

If the partition size is < 2GB then everything works smoothly. The filesystem mounts successfully. I searched the archives but could not find anything that would help me in debugging this issue. I was wondering if anyone else has reported the same issue, or if I am missing a step, or if this is a defect in the code? Your response is greatly appreciated.
Thanks, Nikhil [[HTML alternate version deleted]] From owner-xfs@oss.sgi.com Thu Sep 21 17:17:11 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 17:17:17 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8M0H7aG031291 for ; Thu, 21 Sep 2006 17:17:09 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA25700; Fri, 22 Sep 2006 10:16:22 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k8M0GKpj8619530; Fri, 22 Sep 2006 10:16:21 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k8M0GJQZ8630963; Fri, 22 Sep 2006 10:16:19 +1000 (AEST) Date: Fri, 22 Sep 2006 10:16:19 +1000 From: David Chinner To: Nikhil Kulkarni Cc: xfs@oss.sgi.com Subject: Re: xfs_read_buf error 5. Message-ID: <20060922001619.GW3034@melbourne.sgi.com> References: <43FB1967D03EC7449A77FA91322E364802A9AA07@SVL1XCHCLUPIN01.enterprise.veritas.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <43FB1967D03EC7449A77FA91322E364802A9AA07@SVL1XCHCLUPIN01.enterprise.veritas.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 9053 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 600 Lines: 26 On Thu, Sep 21, 2006 at 03:52:18PM -0700, Nikhil Kulkarni wrote: > Hi, > > > > We are running the latest stable kernel 2.6.18 on RedHat Enterprise 4 > Update 4. > > When I create an xfs partition and assign it a size > 2GB, I get the > following errors during bootup: .... > If the partition size is < 2GB then everything works smoothly. The > filesystem mounts successfully. 
Sounds like a partition problem, not an XFS problem... What is the size of the partitions in both cases according to /proc/partitions? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Sep 21 17:47:31 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 17:47:41 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8M0lUaG001624 for ; Thu, 21 Sep 2006 17:47:31 -0700 X-ASG-Debug-ID: 1158886011-26122-779-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from excu-mxob-1.symantec.com (excu-mxob-1.symantec.com [198.6.49.12]) by cuda.sgi.com (Spam Firewall) with ESMTP id 1986D4592F8; Thu, 21 Sep 2006 17:46:51 -0700 (PDT) Received: from tus1opsmtapin02.ges.symantec.com (tus1opsmtapin02.ges.symantec.com [192.168.214.44]) by excu-mxob-1.symantec.com (8.13.7/8.13.7) with ESMTP id k8M0jWpS029090; Thu, 21 Sep 2006 17:45:33 -0700 (PDT) Received: from [10.137.18.172] (helo=SVLXCHCON2.enterprise.veritas.com) by tus1opsmtapin02.ges.symantec.com with esmtp (Exim 4.52) id 1GQZAa-0004wY-QX; Thu, 21 Sep 2006 17:45:32 -0700 Received: from SVL1XCHEVSPIN01.enterprise.veritas.com ([166.98.169.10]) by SVLXCHCON2.enterprise.veritas.com with Microsoft SMTPSVC(6.0.3790.211); Thu, 21 Sep 2006 17:45:32 -0700 X-MimeOLE: Produced By Microsoft Exchange V6.5.7226.0 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" X-ASG-Orig-Subj: RE: xfs_read_buf error 5. Subject: RE: xfs_read_buf error 5. Date: Thu, 21 Sep 2006 17:45:32 -0700 Message-ID: <43FB1967D03EC7449A77FA91322E364802A9AB91@SVL1XCHCLUPIN01.enterprise.veritas.com> In-Reply-To: <20060922001619.GW3034@melbourne.sgi.com> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: xfs_read_buf error 5. 
Thread-Index: Acbd3FtQ+RWJ9rcTREqesed1mVSVugAAxZ/w From: "Nikhil Kulkarni" To: "David Chinner" Cc: X-OriginalArrivalTime: 22 Sep 2006 00:45:32.0620 (UTC) FILETIME=[68BBF8C0:01C6DDE0] X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21643 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id k8M0lVaG001627 X-archive-position: 9054 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nikhil_kulkarni@symantec.com Precedence: bulk X-list: xfs Content-Length: 2780 Lines: 108 Hi David, Thanks for your prompt response, I really appreciate it! I made a major typing mistake in describing the initial size of the partitions. We are having issues when we assign a size > 2TB and not 2GB. I'm sorry about the typo.
Here are the 2 /proc/partition files:

This is the one where the partition size is 3.5T:

[nikhil@nkulkarni tmp]$ more xfs-log-3.5T
[root@ssimppi7 log]# cat /proc/partitions
major minor  #blocks  name

   8     0 4093902848 sda
   8     1 1436513398 sda1
   8    16  143257600 sdb
   8    17   10482381 sdb1
   8    18   10482412 sdb2
   8    19    4192965 sdb3
   8    20          1 sdb4
   8    21  118093783 sdb5
   8    32  878837760 sdc
   8    33   12811806 sdc1
   8    34   64010992 sdc2
   8    35       8032 sdc3
   8    36          1 sdc4
   8    37   19197643 sdc5
   8    38  100582933 sdc6
   8    39  682224291 sdc7
 253     0  136445952 dm-0
 253     1  545775616 dm-1

This is the one where the partition size is 2TB:

[nikhil@nkulkarni tmp]$ more xfs-log-2T
[root@ssimppi7 log]# cat /proc/partitions
major minor  #blocks  name

   8     0 4093902848 sda
   8     1 2047998298 sda1
   8    16  143257600 sdb
   8    17   10482381 sdb1
   8    18   10482412 sdb2
   8    19    4192965 sdb3
   8    20          1 sdb4
   8    21  118093783 sdb5
   8    32  878837760 sdc
   8    33   12811806 sdc1
   8    34   64010992 sdc2
   8    35       8032 sdc3
   8    36          1 sdc4
   8    37   19197643 sdc5
   8    38  100582933 sdc6
   8    39  682224291 sdc7
 253     0  136445952 dm-0
 253     1  545775616 dm-1
[nikhil@nkulkarni tmp]$

I think you are right. The partitions are not set up correctly.

Do you know, on a 2.5 kernel on a 32 bit operating system, which tool can be used to set up partitions for sizes up to 4TB or 8TB? fdisk on a 32 bit os does not work correctly. I thought parted worked but apparently it does not either. Any suggestions how I can utilize the entire 4TB file system using xfs?

Thanks,
Nikhil

-----Original Message----- From: David Chinner [mailto:dgc@sgi.com] Sent: Thursday, September 21, 2006 5:16 PM To: Nikhil Kulkarni Cc: xfs@oss.sgi.com Subject: Re: xfs_read_buf error 5. On Thu, Sep 21, 2006 at 03:52:18PM -0700, Nikhil Kulkarni wrote: > Hi, > > > > We are running the latest stable kernel 2.6.18 on RedHat Enterprise 4 > Update 4. > > When I create an xfs partition and assign it a size > 2GB , I get the > following errors during bootup: .... > If the partition size is < 2GB then everything works smoothly. The > filesystem mounts successfully.
Sounds like a partition problem, not an XFS problem... What is the size of the partitions in both cases according to /proc/partitions? Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Sep 21 18:04:09 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 18:04:15 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8M145aG003281 for ; Thu, 21 Sep 2006 18:04:07 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA26750; Fri, 22 Sep 2006 11:03:20 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k8M13Ipj8641304; Fri, 22 Sep 2006 11:03:18 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k8M13Gkn8574426; Fri, 22 Sep 2006 11:03:16 +1000 (AEST) Date: Fri, 22 Sep 2006 11:03:16 +1000 From: David Chinner To: Eric Sandeen Cc: Linux Kernel Mailing List , xfs mailing list Subject: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch Message-ID: <20060922010316.GY3034@melbourne.sgi.com> References: <45131334.6050803@sandeen.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45131334.6050803@sandeen.net> User-Agent: Mutt/1.4.2.1i X-archive-position: 9055 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1004 Lines: 39 On Thu, Sep 21, 2006 at 05:33:24PM -0500, Eric Sandeen wrote: > The inode diet patch in -mm unhooked xfs_preferred_iosize from the stat call: .... > Signed-off-by: Eric Sandeen > > XFS guys, does this look ok? 
> > Index: linux-2.6.18/fs/xfs/linux-2.6/xfs_iops.c > =================================================================== > --- linux-2.6.18.orig/fs/xfs/linux-2.6/xfs_iops.c > +++ linux-2.6.18/fs/xfs/linux-2.6/xfs_iops.c > @@ -623,12 +623,16 @@ xfs_vn_getattr( > { > struct inode *inode = dentry->d_inode; > bhv_vnode_t *vp = vn_from_inode(inode); > + xfs_inode_t *ip; > int error = 0; > > if (unlikely(vp->v_flag & VMODIFIED)) > error = vn_revalidate(vp); > - if (!error) > + if (!error) { > generic_fillattr(inode, stat); > + ip = xfs_vtoi(vp); > + stat->blksize = xfs_preferred_iosize(ip->i_mount); > + } > return -error; > } ACK. Looks good, Eric. Good catch. Cheers, Dave. -- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Sep 21 19:03:26 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 19:03:32 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8M23LaG009009 for ; Thu, 21 Sep 2006 19:03:23 -0700 Received: from [134.14.55.141] (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA27859; Fri, 22 Sep 2006 12:02:32 +1000 Message-ID: <45134472.7080002@sgi.com> Date: Fri, 22 Sep 2006 12:03:30 +1000 From: Timothy Shimmin User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: Eric Sandeen CC: Linux Kernel Mailing List , xfs mailing list Subject: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch References: <45131334.6050803@sandeen.net> In-Reply-To: <45131334.6050803@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 9056 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Content-Length: 4009 Lines: 112 Hi Eric, Eric 
Sandeen wrote: > The inode diet patch in -mm unhooked xfs_preferred_iosize from the stat call: > > --- a/fs/xfs/linux-2.6/xfs_vnode.c > +++ b/fs/xfs/linux-2.6/xfs_vnode.c > @@ -122,7 +122,6 @@ vn_revalidate_core( > inode->i_blocks = vap->va_nblocks; > inode->i_mtime = vap->va_mtime; > inode->i_ctime = vap->va_ctime; > - inode->i_blksize = vap->va_blocksize; > if (vap->va_xflags & XFS_XFLAG_IMMUTABLE) > > This in turn breaks the largeio mount option for xfs: > > largeio/nolargeio > If "nolargeio" is specified, the optimal I/O reported in > st_blksize by stat(2) will be as small as possible to allow user > applications to avoid inefficient read/modify/write I/O. > If "largeio" specified, a filesystem that has a "swidth" specified > will return the "swidth" value (in bytes) in st_blksize. If the > filesystem does not have a "swidth" specified but does specify > an "allocsize" then "allocsize" (in bytes) will be returned > instead. > If neither of these two options are specified, then filesystem > will behave as if "nolargeio" was specified. > > and the (undocumented?) allocsize mount option as well. > > For a filesystem like this with sunit/swidth specified, > > meta-data=/dev/sda1 isize=512 agcount=32, agsize=7625840 blks > = sectsz=512 attr=0 > data = bsize=4096 blocks=244026880, imaxpct=25 > = sunit=16 swidth=16 blks, unwritten=1 > naming =version 2 bsize=4096 > log =internal bsize=4096 blocks=32768, version=1 > = sectsz=512 sunit=0 blks > realtime =none extsz=65536 blocks=0, rtextents=0 > > stat on a stock FC6 kernel w/ the largeio mount option returns only the page size: > > [root@link-07]# mount -o largeio /dev/sda1 /mnt/test/ > [root@link-07]# stat -c %o /mnt/test/foo > 4096 > > with the following patch, it does what it should: > > [root@link-07]# mount -o largeio /dev/sda1 /mnt/test/ > [root@link-07]# stat -c %o /mnt/test/foo > 65536 > > same goes for filesystems w/o sunit,swidth but with the allocsize mount option. 
> > stock: > [root@link-07]# mount -o largeio,allocsize=32768 /dev/sda1 /mnt/test/ > [root@link-07]# stat -c %o /mnt/test/foo > 4096 > > w/ patch: > [root@link-07# mount -o largeio,allocsize=32768 /dev/sda1 /mnt/test/ > [root@link-07]# stat -c %o /mnt/test/foo > 32768 > > Signed-off-by: Eric Sandeen > > XFS guys, does this look ok? > > Index: linux-2.6.18/fs/xfs/linux-2.6/xfs_iops.c > =================================================================== > --- linux-2.6.18.orig/fs/xfs/linux-2.6/xfs_iops.c > +++ linux-2.6.18/fs/xfs/linux-2.6/xfs_iops.c > @@ -623,12 +623,16 @@ xfs_vn_getattr( > { > struct inode *inode = dentry->d_inode; > bhv_vnode_t *vp = vn_from_inode(inode); > + xfs_inode_t *ip; > int error = 0; > > if (unlikely(vp->v_flag & VMODIFIED)) > error = vn_revalidate(vp); > - if (!error) > + if (!error) { > generic_fillattr(inode, stat); > + ip = xfs_vtoi(vp); > + stat->blksize = xfs_preferred_iosize(ip->i_mount); > + } > return -error; > } > Looked at your patch and then at our xfs code in the tree and the existing code is different than what yours is based on. I then noticed in the logs Nathan has actually made changes for this: ---------------------------- revision 1.254 date: 2006/07/17 10:46:05; author: nathans; state: Exp; lines: +20 -5 modid: xfs-linux-melb:xfs-kern:26565a Update XFS for i_blksize removal from generic inode structure ---------------------------- I even reviewed the change (and I don't remember it - getting old). I looked at the mods scheduled for 2.6.19 and this is one of them. So the fix for this is coming soon (and the fix is different from the one above). Cheers, Tim. 
From owner-xfs@oss.sgi.com Thu Sep 21 19:14:56 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 19:15:05 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8M2EqaG010369 for ; Thu, 21 Sep 2006 19:14:54 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA27998; Fri, 22 Sep 2006 12:14:05 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k8M2E3pj8653690; Fri, 22 Sep 2006 12:14:04 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k8M2E13M8588399; Fri, 22 Sep 2006 12:14:01 +1000 (AEST) Date: Fri, 22 Sep 2006 12:14:01 +1000 From: David Chinner To: Nikhil Kulkarni Cc: David Chinner , xfs@oss.sgi.com Subject: Re: xfs_read_buf error 5. Message-ID: <20060922021401.GA3034@melbourne.sgi.com> References: <20060922001619.GW3034@melbourne.sgi.com> <43FB1967D03EC7449A77FA91322E364802A9AB91@SVL1XCHCLUPIN01.enterprise.veritas.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <43FB1967D03EC7449A77FA91322E364802A9AB91@SVL1XCHCLUPIN01.enterprise.veritas.com> User-Agent: Mutt/1.4.2.1i X-archive-position: 9057 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1445 Lines: 54 On Thu, Sep 21, 2006 at 05:45:32PM -0700, Nikhil Kulkarni wrote: > Hi David, > > Hi David, > > Thanks for your prompt response, I really appreciate it!!! I made a > major tying mistake in describing the initial size of the partitions. We > are having issues when we assign a size > 2TB and not 2GB. I'm sorry > about the typo. 
I was wondering about that ;) > Here are the 2 /proc/partition files: > > This is the one where the partition size is 3.5T > > [nikhil@nkulkarni tmp]$ more xfs-log-3.5T > [root@ssimppi7 log]# cat /proc/partitions > major minor #blocks name > > 8 0 4093902848 sda > 8 1 1436513398 sda1 That says it's only 1.4TB... > This is the one where the partition size is 2TB: > > [nikhil@nkulkarni tmp]$ more xfs-log-2T > root@ssimppi7 log]# cat /proc/partitions > major minor #blocks name > > 8 0 4093902848 sda > 8 1 2047998298 sda1 And that is 2TB. > I think you are right. The partitions are not set up correctly. > > Do you know on a 2.5 kernel on a 32 bit operating system, which tool can > be used to setup partitions for sizes up to 4TB or 8TB? > fdisk on a 32 bit os does not work correctly. I thought parted worked > but apparently it does not either. Did you build your kernel with CONFIG_LBD=y? (Block layer option) This is the option that allows >2TB block devices on 32 bit kernels... Cheers, Dave. 
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Thu Sep 21 19:24:49 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 19:24:56 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8M2OnaG011981 for ; Thu, 21 Sep 2006 19:24:49 -0700 X-ASG-Debug-ID: 1158891850-348-565-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com (Spam Firewall) with ESMTP id F023FD10E75E for ; Thu, 21 Sep 2006 19:24:10 -0700 (PDT) Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 3D97118DF3806; Thu, 21 Sep 2006 21:23:54 -0500 (CDT) Message-ID: <4513493F.8090005@sandeen.net> Date: Thu, 21 Sep 2006 21:23:59 -0500 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: Timothy Shimmin CC: Linux Kernel Mailing List , xfs mailing list X-ASG-Orig-Subj: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch Subject: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch References: <45131334.6050803@sandeen.net> <45134472.7080002@sgi.com> In-Reply-To: <45134472.7080002@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.60 X-Barracuda-Spam-Status: No, SCORE=0.60 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=MARKETING_SUBJECT X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21647 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.60 MARKETING_SUBJECT Subject contains popular marketing words X-archive-position: 9058 X-ecartis-version: Ecartis v1.0.0 Sender: 
xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 779 Lines: 24 Timothy Shimmin wrote: > Looked at your patch and then at our xfs code in the tree and > the existing code is different than what yours is based on. > I then noticed in the logs Nathan has actually made changes for this: > > ---------------------------- > revision 1.254 > date: 2006/07/17 10:46:05; author: nathans; state: Exp; lines: +20 -5 > modid: xfs-linux-melb:xfs-kern:26565a > Update XFS for i_blksize removal from generic inode structure > ---------------------------- > I even reviewed the change (and I don't remember it - getting old). > > I looked at the mods scheduled for 2.6.19 and this is one of them. > > So the fix for this is coming soon (and the fix is different from the > one above). Ah, ok, thanks guys. Should have checked CVS I guess. -Eric From owner-xfs@oss.sgi.com Thu Sep 21 19:44:08 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 19:44:14 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8M2i7aG014576 for ; Thu, 21 Sep 2006 19:44:08 -0700 X-ASG-Debug-ID: 1158893008-12739-789-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com (Spam Firewall) with ESMTP id F37FC45A5B8 for ; Thu, 21 Sep 2006 19:43:28 -0700 (PDT) Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 132F818DF3806; Thu, 21 Sep 2006 21:43:12 -0500 (CDT) Message-ID: <45134DC5.4070607@sandeen.net> Date: Thu, 21 Sep 2006 21:43:17 -0500 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: Eric Sandeen CC: Timothy Shimmin , xfs mailing list X-ASG-Orig-Subj: Re: [PATCH -mm] 
rescue large xfs preferred iosize from the inode diet patch Subject: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch References: <45131334.6050803@sandeen.net> <45134472.7080002@sgi.com> <4513493F.8090005@sandeen.net> In-Reply-To: <4513493F.8090005@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.60 X-Barracuda-Spam-Status: No, SCORE=0.60 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=MARKETING_SUBJECT X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21649 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.60 MARKETING_SUBJECT Subject contains popular marketing words X-archive-position: 9059 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 1107 Lines: 38 Eric Sandeen wrote: > Timothy Shimmin wrote: > >> Looked at your patch and then at our xfs code in the tree and >> the existing code is different than what yours is based on. >> I then noticed in the logs Nathan has actually made changes for this: >> >> ---------------------------- >> revision 1.254 >> date: 2006/07/17 10:46:05; author: nathans; state: Exp; lines: +20 -5 >> modid: xfs-linux-melb:xfs-kern:26565a >> Update XFS for i_blksize removal from generic inode structure >> ---------------------------- >> I even reviewed the change (and I don't remember it - getting old). >> >> I looked at the mods scheduled for 2.6.19 and this is one of them. >> >> So the fix for this is coming soon (and the fix is different from the >> one above). > > > Ah, ok, thanks guys. Should have checked CVS I guess. > > -Eric > cc -= lkml; actually the patch nathan put in seems like a lot of replicated code. 
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_iops.c.diff?r1=text&tr1=1.254&r2=text&tr2=1.253&f=h But maybe he's solving some problem I didn't think of. Any idea what? -Eric From owner-xfs@oss.sgi.com Thu Sep 21 22:40:54 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 22:41:01 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8M5eqaG004077 for ; Thu, 21 Sep 2006 22:40:54 -0700 X-ASG-Debug-ID: 1158899530-12215-980-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from cfa.harvard.edu (cfa.harvard.edu [131.142.10.1]) by cuda.sgi.com (Spam Firewall) with ESMTP id EAAD0D10E762 for ; Thu, 21 Sep 2006 21:32:10 -0700 (PDT) Received: from titan (titan [131.142.24.40]) by cfa.harvard.edu (8.13.7/8.13.7/cfunix Mast-Sol 1.0) with ESMTP id k8M4W9ci006928 for ; Fri, 22 Sep 2006 00:32:09 -0400 (EDT) Date: Fri, 22 Sep 2006 00:32:09 -0400 (EDT) From: Gaspar Bakos Reply-To: gbakos@cfa.harvard.edu To: linux-xfs@oss.sgi.com X-ASG-Orig-Subj: mkfs.xfs on hw RAID6 Subject: mkfs.xfs on hw RAID6 In-Reply-To: Message-ID: References: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21653 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9060 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: gbakos@cfa.harvard.edu Precedence: bulk X-list: xfs Content-Length: 933 Lines: 37 Hi, I am trying to create an XFS filesystem on a hardware RAID-6 partition that consists of 12 x 500Gb disks, i.e. the total disk space is 5Tb. All this is one big partition, created by parted. 
Typical filesize will range from 100Kb to 30Mb, but very small files are not typical. I wonder about the usual magic; with a raid setup the su,sw and log su,size parameters can be optimized. The RAID stripe size is 64K in my case. In other words: mkfs.xfs -b size=$BS -d su=$SU,sw=$SW -i size=$ISIZE -l su=$LSU,size=$LS -L BIG /dev/sdc1 BS=? SU=? SW=? ISIZE=? LSU=? LS=? I would say: mkfs.xfs -b size=4k -d su=64k,sw=10 -f -i size=512 -l su=64k,size=32m -L BIG /dev/sdc1 I read somewhere some time ago that recommended sw = (num_of_disks - 1) with RAID-5 . The current sw = 10 is intuitive for RAID-6. I can also imagine that with hardware RAID all this is not that important. Thanks in advance for suggestions! Cheers, Gaspar From owner-xfs@oss.sgi.com Thu Sep 21 23:20:44 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 21 Sep 2006 23:20:51 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8M6KeaG007814 for ; Thu, 21 Sep 2006 23:20:43 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA02940; Fri, 22 Sep 2006 16:19:53 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id k8M6Jqpj8687346; Fri, 22 Sep 2006 16:19:52 +1000 (AEST) Received: (from dgc@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id k8M6Jo3g8706232; Fri, 22 Sep 2006 16:19:50 +1000 (AEST) Date: Fri, 22 Sep 2006 16:19:50 +1000 From: David Chinner To: Eric Sandeen Cc: Timothy Shimmin , xfs mailing list Subject: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch Message-ID: <20060922061950.GE3034@melbourne.sgi.com> References: <45131334.6050803@sandeen.net> <45134472.7080002@sgi.com> <4513493F.8090005@sandeen.net> <45134DC5.4070607@sandeen.net> Mime-Version: 1.0 Content-Type: 
text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45134DC5.4070607@sandeen.net> User-Agent: Mutt/1.4.2.1i X-archive-position: 9061 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: xfs Content-Length: 1168 Lines: 36 On Thu, Sep 21, 2006 at 09:43:17PM -0500, Eric Sandeen wrote: > Eric Sandeen wrote: > > > >Ah, ok, thanks guys. Should have checked CVS I guess. > > > cc -= lkml; > > actually the patch nathan put in seems like a lot of replicated code. Yeah, that's what caught me - I looked at the tree which had nathan's patch in it, and assumed that the -mm tree had cleaned it up to use the generic_fillattr() code. > But maybe he's solving some problem I didn't think of. The difference is the old code updated the fields in the linux inode with all the info from disk and then filled in the stat data from the linux inode. The new code gets the data from "disk" and puts it straight into the stat buffer without updating the linux inode. > Any idea what? I would have thought that we want what we report to userspace to be consistent in the linux inode as well. I suppose that by duplicating the code we removed a copy of the data, but I see little advantage from doing that considering the extra code to do it and the fact that the linux inode may not be up to date now.... Cheers, Dave.
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Fri Sep 22 00:49:56 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 22 Sep 2006 00:50:05 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8M7nqaG022272 for ; Fri, 22 Sep 2006 00:49:54 -0700 Received: from [134.14.55.141] (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA04472; Fri, 22 Sep 2006 17:49:04 +1000 Message-ID: <451395AB.4060400@sgi.com> Date: Fri, 22 Sep 2006 17:50:03 +1000 From: Timothy Shimmin User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: David Chinner CC: Eric Sandeen , xfs mailing list Subject: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch References: <45131334.6050803@sandeen.net> <45134472.7080002@sgi.com> <4513493F.8090005@sandeen.net> <45134DC5.4070607@sandeen.net> <20060922061950.GE3034@melbourne.sgi.com> In-Reply-To: <20060922061950.GE3034@melbourne.sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 9062 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Content-Length: 3279 Lines: 95 David Chinner wrote: > On Thu, Sep 21, 2006 at 09:43:17PM -0500, Eric Sandeen wrote: >> Eric Sandeen wrote: >>> Ah, ok, thanks guys. Should have checked CVS I guess. >>> >> cc -= lkml; >> >> actually the patch nathan put in seems like a lot of replicated code. > > Yeah, that's what caught me - I looked at the tree which had nathan's > patch in it, and assumed that the stuff the -mm tree had cleaned it up > to use the generic_fillattr() code. > >> But maybe he's solving some problem I didn't think of. 
> > The difference is the old code updated the fields in the linux inode > with all the info from disk and then filled in the stat data from > the linux inode. The new code gets the data from "disk" and puts it > straight into the stat buffer without updating the linux inode. > >> Any idea what? > > I would have thought that we want what we report to userspace to be > consistent in the linux inode as well. I suppose that by duplicating > the code we removed a copy of the data but I see little advantage > from doing that considering the extra code to do it and the fact that > the linux inode may not be up to date now.... > Well we just sync it (the linux inode) up at points when we need to, don't we? (Hmmm, doesn't look like we call vn_revalidate much anymore.) I agree Eric's fix is simpler but I'd like to wait for Nathan's comments. Perhaps he is trying to future-proof us against this happening again when we rely on the linux inode? :) Review at the time: ------------------------------- Re: review: rework stat/getattr for i_blksize removal To: Timothy Shimmin Subject: Re: review: rework stat/getattr for i_blksize removal From: Nathan Scott Date: Thu, 6 Jul 2006 15:45:14 +1000 Cc: xfs-dev On Thu, Jul 06, 2006 at 03:37:02PM +1000, Timothy Shimmin wrote: > Hi Nathan, > > Looks reasonable. > Just a few questions of interest below :) Thanks & no worries... > So we don't need to call vn_revalidate because we no longer need to update > the inode at this point with the data from the vnode because > we are no longer looking at the linux inode? > We are now just looking at the vnode and its fields (which we > get a lot from the xfs inode in xfs_getattr). Yep. > + error = bhv_vop_getattr(vp, &vattr, ATTR_LAZY, NULL); > + if (likely(!error)) { > + stat->size = i_size_read(inode); > > Q: OOI, why can't we use vattr.va_size? > Is inode->i_size up to date at this point? Slightly more up to date at times, but we probably could use it too. 
There are times where that is updated in advance of the XFS inode (e.g. during write, it's updated per-page, whereas we only update the XFS inode right at the end). I was just looking to reduce any risk here, but maybe we should go for the xfs_inode size, for consistency ... lemme ponder it a bit. > + stat->rdev = (vattr.va_rdev == 0) ? 0 : > + MKDEV(sysv_major(vattr.va_rdev) & 0x1ff, > + sysv_minor(vattr.va_rdev)); > > Q: Is it really worth special-casing 0 with a conditional? > Result will be the same, won't it? Heh - good point. It is the same in the end, will fix. cheers. -- Nathan ------------------------------- --Tim From owner-xfs@oss.sgi.com Fri Sep 22 04:51:14 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 22 Sep 2006 04:51:23 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8MBpDaG019667 for ; Fri, 22 Sep 2006 04:51:14 -0700 X-ASG-Debug-ID: 1158921878-26955-896-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.moloch.sk (chrocht.moloch.sk [62.176.169.44]) by cuda.sgi.com (Spam Firewall) with ESMTP id 003C645BF10 for ; Fri, 22 Sep 2006 03:44:38 -0700 (PDT) Received: from dezo.moloch.sk (dezo.moloch.sk [62.176.172.4]) by mail.moloch.sk (Postfix) with ESMTP id 979B6340F14D for ; Fri, 22 Sep 2006 12:44:18 +0200 (CEST) Received: by dezo.moloch.sk (Postfix, from userid 1000) id F1C7C6EF; Fri, 22 Sep 2006 12:44:17 +0200 (CEST) Date: Fri, 22 Sep 2006 12:44:17 +0200 From: Martin Lucina To: xfs@oss.sgi.com X-ASG-Orig-Subj: Write barrier support with LVM2/md Subject: Write barrier support with LVM2/md Message-ID: <20060922104417.GA4161@dezo.moloch.sk> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.13 (2006-08-11) X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= 
X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21673 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9063 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: mato@kotelna.sk Precedence: bulk X-list: xfs Content-Length: 554 Lines: 18 Hello, Having recently upgraded several systems to Linux 2.6.16/2.6.18 I learnt about the issues with modern drives and write caching. Thanks for the excellent FAQ entry on this. Unfortunately it seems that write barriers do not work when using XFS on top of LVM2 (device-mapper). Trying an explicit mount with -o barrier gives: Filesystem "dm-4": Disabling barriers, not supported by the underlying device Is anyone working on a fix for this? It might be a good idea to add to the FAQ that write barriers are not supported on dm devices. -mato From owner-xfs@oss.sgi.com Fri Sep 22 10:05:08 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 22 Sep 2006 10:05:17 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8MH57aG022810 for ; Fri, 22 Sep 2006 10:05:08 -0700 X-ASG-Debug-ID: 1158944667-27510-403-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from excu-mxob-1.symantec.com (excu-mxob-1.symantec.com [198.6.49.12]) by cuda.sgi.com (Spam Firewall) with ESMTP id EC1DED112406; Fri, 22 Sep 2006 10:04:27 -0700 (PDT) Received: from tus1opsmtapin01.ges.symantec.com (tus1opsmtapin01.ges.symantec.com [192.168.214.43]) by excu-mxob-1.symantec.com (8.13.7/8.13.7) with ESMTP id k8MH2piK028534; Fri, 22 Sep 2006 10:03:06 -0700 (PDT) Received: from [10.137.18.172] (helo=SVLXCHCON2.enterprise.veritas.com) by tus1opsmtapin01.ges.symantec.com with esmtp (Exim 4.52) id 1GQoQK-00082M-54; Fri, 22 Sep 2006 10:02:48 -0700 Received: from SVL1XCHEVSPIN01.enterprise.veritas.com 
([166.98.169.10]) by SVLXCHCON2.enterprise.veritas.com with Microsoft SMTPSVC(6.0.3790.211); Fri, 22 Sep 2006 10:02:35 -0700 X-MimeOLE: Produced By Microsoft Exchange V6.5.7226.0 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" X-ASG-Orig-Subj: RE: xfs_read_buf error 5. Subject: RE: xfs_read_buf error 5. Date: Fri, 22 Sep 2006 10:02:33 -0700 Message-ID: <43FB1967D03EC7449A77FA91322E364802AD571F@SVL1XCHCLUPIN01.enterprise.veritas.com> In-Reply-To: <20060922021401.GA3034@melbourne.sgi.com> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: xfs_read_buf error 5. Thread-Index: Acbd7M3jxbcmbG0uTLCXLK1GGayXJgAe+dYg From: "Nikhil Kulkarni" To: "David Chinner" Cc: X-OriginalArrivalTime: 22 Sep 2006 17:02:35.0925 (UTC) FILETIME=[E6F26C50:01C6DE68] X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21692 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id k8MH58aG022814 X-archive-position: 9064 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nikhil_kulkarni@symantec.com Precedence: bulk X-list: xfs Content-Length: 1895 Lines: 73 Hi David, I figured it out. I used the msdos label in parted while creating the partitions, and that is why they got all screwed up. I then switched to "gpt" and now everything works like a charm!!!! Thanks very much for all your help. Nikhil -----Original Message----- From: David Chinner [mailto:dgc@sgi.com] Sent: Thursday, September 21, 2006 7:14 PM To: Nikhil Kulkarni Cc: David Chinner; xfs@oss.sgi.com Subject: Re: xfs_read_buf error 5. 
On Thu, Sep 21, 2006 at 05:45:32PM -0700, Nikhil Kulkarni wrote: > Hi David, > > Thanks for your prompt response, I really appreciate it!!! I made a > major typing mistake in describing the initial size of the partitions. We > are having issues when we assign a size > 2TB and not 2GB. I'm sorry > about the typo. I was wondering about that ;) > Here are the 2 /proc/partition files: > > This is the one where the partition size is 3.5T > > [nikhil@nkulkarni tmp]$ more xfs-log-3.5T > [root@ssimppi7 log]# cat /proc/partitions > major minor #blocks name > > 8 0 4093902848 sda > 8 1 1436513398 sda1 That says it's only 1.4TB... > This is the one where the partition size is 2TB: > > [nikhil@nkulkarni tmp]$ more xfs-log-2T > root@ssimppi7 log]# cat /proc/partitions > major minor #blocks name > > 8 0 4093902848 sda > 8 1 2047998298 sda1 And that is 2TB. > I think you are right. The partitions are not set up correctly. > > Do you know on a 2.5 kernel on a 32 bit operating system, which tool can > be used to setup partitions for sizes up to 4TB or 8TB? > fdisk on a 32 bit os does not work correctly. I thought parted worked > but apparently it does not either. Did you build your kernel with CONFIG_LBD=y? (Block layer option) This is the option that allows >2TB block devices on 32 bit kernels... Cheers, Dave. 
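As a sanity check on the numbers above: the #blocks column in /proc/partitions is in 1 KiB units, and the CONFIG_LBD limit Dave mentions comes from a 32-bit sector_t counting 512-byte sectors. A minimal sketch of the arithmetic (helper names are ours, not kernel code):

```c
#include <stdint.h>

/* /proc/partitions reports device sizes in 1 KiB blocks. */
double kib_to_tib(uint64_t kib)
{
    return (double)kib / (1024.0 * 1024.0 * 1024.0);
}

/* Without CONFIG_LBD, sector_t is 32 bits, so a block device tops out
 * at 2^32 sectors of 512 bytes each: the 2 TiB ceiling on 32-bit
 * kernels. */
uint64_t lbd_off_limit_bytes(void)
{
    return (uint64_t)1 << 32 << 9;
}

/* Plugging in the values from the mail:
 *   kib_to_tib(4093902848)  ~ 3.81 TiB  (sda, the whole disk)
 *   kib_to_tib(1436513398)  ~ 1.34 TiB  (the "3.5T" sda1 attempt)
 *   kib_to_tib(2047998298)  ~ 1.91 TiB  (the "2TB" sda1, just under
 *                                        the no-CONFIG_LBD limit)   */
```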
-- Dave Chinner Principal Engineer SGI Australian Software Group From owner-xfs@oss.sgi.com Fri Sep 22 15:40:55 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 22 Sep 2006 15:41:04 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8MMesaG029862 for ; Fri, 22 Sep 2006 15:40:55 -0700 X-ASG-Debug-ID: 1158960677-23169-893-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from smtp113.sbc.mail.mud.yahoo.com (smtp113.sbc.mail.mud.yahoo.com [68.142.198.212]) by cuda.sgi.com (Spam Firewall) with SMTP id 93FC4D112657 for ; Fri, 22 Sep 2006 14:31:17 -0700 (PDT) Received: (qmail 73350 invoked from network); 22 Sep 2006 21:24:37 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@71.202.63.228 with login) by smtp113.sbc.mail.mud.yahoo.com with SMTP; 22 Sep 2006 21:24:36 -0000 Received: by tuatara.stupidest.org (Postfix, from userid 10000) id C6E211824789; Fri, 22 Sep 2006 14:24:35 -0700 (PDT) Date: Fri, 22 Sep 2006 14:24:35 -0700 From: Chris Wedgwood To: Martin Lucina Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: Write barrier support with LVM2/md Subject: Re: Write barrier support with LVM2/md Message-ID: <20060922212435.GA20811@tuatara.stupidest.org> References: <20060922104417.GA4161@dezo.moloch.sk> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20060922104417.GA4161@dezo.moloch.sk> X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21704 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9065 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk 
X-list: xfs Content-Length: 370 Lines: 9 On Fri, Sep 22, 2006 at 12:44:17PM +0200, Martin Lucina wrote: > Is anyone working on a fix for this? It might be a good idea to add > to the FAQ that write barriers are not supported on dm devices. it's not really an XFS issue, it's that the md/dm/lvm layers don't support write barriers (and it would in many cases be hard to support without killing performance). From owner-xfs@oss.sgi.com Fri Sep 22 16:19:51 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 22 Sep 2006 16:20:01 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8MNJpaG007055 for ; Fri, 22 Sep 2006 16:19:51 -0700 X-ASG-Debug-ID: 1158967152-28517-567-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com (Spam Firewall) with ESMTP id E1EEAD110A22 for ; Fri, 22 Sep 2006 16:19:12 -0700 (PDT) Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id C0D7318DF3806; Fri, 22 Sep 2006 18:19:11 -0500 (CDT) Message-ID: <45146F76.3010301@sandeen.net> Date: Fri, 22 Sep 2006 18:19:18 -0500 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: Andrew Morton CC: Timothy Shimmin , Linux Kernel Mailing List , xfs mailing list X-ASG-Orig-Subj: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch Subject: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch References: <45131334.6050803@sandeen.net> <45134472.7080002@sgi.com> <20060922161040.609286fa.akpm@osdl.org> In-Reply-To: <20060922161040.609286fa.akpm@osdl.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.60 X-Barracuda-Spam-Status: No, SCORE=0.60 using per-user scores 
of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=MARKETING_SUBJECT X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21710 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.60 MARKETING_SUBJECT Subject contains popular marketing words X-archive-position: 9066 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 836 Lines: 27 Andrew Morton wrote: >> So the fix for this is coming soon (and the fix is different from the >> one above). >> > > eh? Eric's patch is based on -mm, which includes the XFS git tree. If I > go and merge the inode-diet patches from -mm, XFS gets broken until you > guys merge the above mystery patch. (I prefer to merge the -mm patches > after all the git trees have gone, but sometimes maintainers dawdle and I > get bored of waiting). > > Is git://oss.sgi.com:8090/nathans/xfs-2.6 obsolete, or are you hiding stuff > from me? ;) > > well it's in cvs: http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_iops.c.diff?r1=text&tr1=1.254&r2=text&tr2=1.253&f=h but I'm too lazy to check git on a friday evening. :) Well, sgi-guys, I'll let you sort out which patch you want. Sorry for not checking cvs first! 
-Eric From owner-xfs@oss.sgi.com Fri Sep 22 17:34:31 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 22 Sep 2006 17:34:37 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8N0YVaG015826 for ; Fri, 22 Sep 2006 17:34:31 -0700 X-ASG-Debug-ID: 1158966669-13399-602-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from smtp.osdl.org (smtp.osdl.org [65.172.181.4]) by cuda.sgi.com (Spam Firewall) with ESMTP id 7118D45A7EA for ; Fri, 22 Sep 2006 16:11:10 -0700 (PDT) Received: from shell0.pdx.osdl.net (fw.osdl.org [65.172.181.6]) by smtp.osdl.org (8.12.8/8.12.8) with ESMTP id k8MNAgnW010795 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Fri, 22 Sep 2006 16:10:44 -0700 Received: from box (shell0.pdx.osdl.net [10.9.0.31]) by shell0.pdx.osdl.net (8.13.1/8.11.6) with SMTP id k8MNAe3e030028; Fri, 22 Sep 2006 16:10:41 -0700 Date: Fri, 22 Sep 2006 16:10:40 -0700 From: Andrew Morton To: Timothy Shimmin Cc: Eric Sandeen , Linux Kernel Mailing List , xfs mailing list X-ASG-Orig-Subj: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch Subject: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch Message-Id: <20060922161040.609286fa.akpm@osdl.org> In-Reply-To: <45134472.7080002@sgi.com> References: <45131334.6050803@sandeen.net> <45134472.7080002@sgi.com> X-Mailer: Sylpheed version 2.2.7 (GTK+ 2.8.17; x86_64-unknown-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.152 $ X-Scanned-By: MIMEDefang 2.36 X-Barracuda-Spam-Score: 0.60 X-Barracuda-Spam-Status: No, SCORE=0.60 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=MARKETING_SUBJECT X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21709 Rule breakdown below pts rule name description ---- ---------------------- 
-------------------------------------------------- 0.60 MARKETING_SUBJECT Subject contains popular marketing words X-archive-position: 9067 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@osdl.org Precedence: bulk X-list: xfs Content-Length: 4694 Lines: 123 On Fri, 22 Sep 2006 12:03:30 +1000 Timothy Shimmin wrote: > Hi Eric, > > Eric Sandeen wrote: > > The inode diet patch in -mm unhooked xfs_preferred_iosize from the stat call: > > > > --- a/fs/xfs/linux-2.6/xfs_vnode.c > > +++ b/fs/xfs/linux-2.6/xfs_vnode.c > > @@ -122,7 +122,6 @@ vn_revalidate_core( > > inode->i_blocks = vap->va_nblocks; > > inode->i_mtime = vap->va_mtime; > > inode->i_ctime = vap->va_ctime; > > - inode->i_blksize = vap->va_blocksize; > > if (vap->va_xflags & XFS_XFLAG_IMMUTABLE) > > > > This in turn breaks the largeio mount option for xfs: > > > > largeio/nolargeio > > If "nolargeio" is specified, the optimal I/O reported in > > st_blksize by stat(2) will be as small as possible to allow user > > applications to avoid inefficient read/modify/write I/O. > > If "largeio" specified, a filesystem that has a "swidth" specified > > will return the "swidth" value (in bytes) in st_blksize. If the > > filesystem does not have a "swidth" specified but does specify > > an "allocsize" then "allocsize" (in bytes) will be returned > > instead. > > If neither of these two options are specified, then filesystem > > will behave as if "nolargeio" was specified. > > > > and the (undocumented?) allocsize mount option as well. 
> > > > For a filesystem like this with sunit/swidth specified, > > > > meta-data=/dev/sda1 isize=512 agcount=32, agsize=7625840 blks > > = sectsz=512 attr=0 > > data = bsize=4096 blocks=244026880, imaxpct=25 > > = sunit=16 swidth=16 blks, unwritten=1 > > naming =version 2 bsize=4096 > > log =internal bsize=4096 blocks=32768, version=1 > > = sectsz=512 sunit=0 blks > > realtime =none extsz=65536 blocks=0, rtextents=0 > > > > stat on a stock FC6 kernel w/ the largeio mount option returns only the page size: > > > > [root@link-07]# mount -o largeio /dev/sda1 /mnt/test/ > > [root@link-07]# stat -c %o /mnt/test/foo > > 4096 > > > > with the following patch, it does what it should: > > > > [root@link-07]# mount -o largeio /dev/sda1 /mnt/test/ > > [root@link-07]# stat -c %o /mnt/test/foo > > 65536 > > > > same goes for filesystems w/o sunit,swidth but with the allocsize mount option. > > > > stock: > > [root@link-07]# mount -o largeio,allocsize=32768 /dev/sda1 /mnt/test/ > > [root@link-07]# stat -c %o /mnt/test/foo > > 4096 > > > > w/ patch: > > [root@link-07# mount -o largeio,allocsize=32768 /dev/sda1 /mnt/test/ > > [root@link-07]# stat -c %o /mnt/test/foo > > 32768 > > > > Signed-off-by: Eric Sandeen > > > > XFS guys, does this look ok? 
> > > > Index: linux-2.6.18/fs/xfs/linux-2.6/xfs_iops.c > > =================================================================== > > --- linux-2.6.18.orig/fs/xfs/linux-2.6/xfs_iops.c > > +++ linux-2.6.18/fs/xfs/linux-2.6/xfs_iops.c > > @@ -623,12 +623,16 @@ xfs_vn_getattr( > > { > > struct inode *inode = dentry->d_inode; > > bhv_vnode_t *vp = vn_from_inode(inode); > > + xfs_inode_t *ip; > > int error = 0; > > > > if (unlikely(vp->v_flag & VMODIFIED)) > > error = vn_revalidate(vp); > > - if (!error) > > + if (!error) { > > generic_fillattr(inode, stat); > > + ip = xfs_vtoi(vp); > > + stat->blksize = xfs_preferred_iosize(ip->i_mount); > > + } > > return -error; > > } > > > > Looked at your patch and then at our xfs code in the tree and > the existing code is different than what yours is based on. > I then noticed in the logs Nathan has actually made changes for this: > > ---------------------------- > revision 1.254 > date: 2006/07/17 10:46:05; author: nathans; state: Exp; lines: +20 -5 > modid: xfs-linux-melb:xfs-kern:26565a > Update XFS for i_blksize removal from generic inode structure > ---------------------------- > I even reviewed the change (and I don't remember it - getting old). > > I looked at the mods scheduled for 2.6.19 and this is one of them. > > So the fix for this is coming soon (and the fix is different from the > one above). > eh? Eric's patch is based on -mm, which includes the XFS git tree. If I go and merge the inode-diet patches from -mm, XFS gets broken until you guys merge the above mystery patch. (I prefer to merge the -mm patches after all the git trees have gone, but sometimes maintainers dawdle and I get bored of waiting). Is git://oss.sgi.com:8090/nathans/xfs-2.6 obsolete, or are you hiding stuff from me? 
;) From owner-xfs@oss.sgi.com Fri Sep 22 17:34:32 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 22 Sep 2006 17:34:42 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8N0YVaG015840 for ; Fri, 22 Sep 2006 17:34:32 -0700 X-ASG-Debug-ID: 1158968061-24024-429-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from smtp.osdl.org (smtp.osdl.org [65.172.181.4]) by cuda.sgi.com (Spam Firewall) with ESMTP id A2B4D45BB82 for ; Fri, 22 Sep 2006 16:34:22 -0700 (PDT) Received: from shell0.pdx.osdl.net (fw.osdl.org [65.172.181.6]) by smtp.osdl.org (8.12.8/8.12.8) with ESMTP id k8MNYGnW012077 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Fri, 22 Sep 2006 16:34:17 -0700 Received: from box (shell0.pdx.osdl.net [10.9.0.31]) by shell0.pdx.osdl.net (8.13.1/8.11.6) with SMTP id k8MNYFIN030835; Fri, 22 Sep 2006 16:34:15 -0700 Date: Fri, 22 Sep 2006 16:34:15 -0700 From: Andrew Morton To: Eric Sandeen Cc: Timothy Shimmin , Linux Kernel Mailing List , xfs mailing list X-ASG-Orig-Subj: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch Subject: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch Message-Id: <20060922163415.4e137374.akpm@osdl.org> In-Reply-To: <45146F76.3010301@sandeen.net> References: <45131334.6050803@sandeen.net> <45134472.7080002@sgi.com> <20060922161040.609286fa.akpm@osdl.org> <45146F76.3010301@sandeen.net> X-Mailer: Sylpheed version 2.2.7 (GTK+ 2.8.17; x86_64-unknown-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-MIMEDefang-Filter: osdl$Revision: 1.152 $ X-Scanned-By: MIMEDefang 2.36 X-Barracuda-Spam-Score: 0.60 X-Barracuda-Spam-Status: No, SCORE=0.60 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=MARKETING_SUBJECT X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21712 Rule 
breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.60 MARKETING_SUBJECT Subject contains popular marketing words X-archive-position: 9068 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: akpm@osdl.org Precedence: bulk X-list: xfs Content-Length: 876 Lines: 28 On Fri, 22 Sep 2006 18:19:18 -0500 Eric Sandeen wrote: > Andrew Morton wrote: > > >> So the fix for this is coming soon (and the fix is different from the > >> one above). > >> > > > > eh? Eric's patch is based on -mm, which includes the XFS git tree. If I > > go and merge the inode-diet patches from -mm, XFS gets broken until you > > guys merge the above mystery patch. (I prefer to merge the -mm patches > > after all the git trees have gone, but sometimes maintainers dawdle and I > > get bored of waiting). > > > > Is git://oss.sgi.com:8090/nathans/xfs-2.6 obsolete, or are you hiding stuff > > from me? ;) > > > > > well it's in cvs: That's nearly four months old! 
> http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_iops.c.diff?r1=text&tr1=1.254&r2=text&tr2=1.253&f=h From owner-xfs@oss.sgi.com Sat Sep 23 07:40:55 2006 Received: with ECARTIS (v1.0.0; list xfs); Sat, 23 Sep 2006 07:41:01 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8NEesaG030199 for ; Sat, 23 Sep 2006 07:40:55 -0700 X-ASG-Debug-ID: 1159018095-9458-887-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.190]) by cuda.sgi.com (Spam Firewall) with ESMTP id E4392D111DF6 for ; Sat, 23 Sep 2006 06:28:15 -0700 (PDT) Received: from [212.227.126.203] (helo=mrvnet.kundenserver.de) by moutng.kundenserver.de with esmtp (Exim 3.35 #1) id 1GR7Xx-0001do-00 for xfs@oss.sgi.com; Sat, 23 Sep 2006 15:27:57 +0200 Received: from [172.23.1.26] (helo=xchgsmtp.exchange.xchg) by mrvnet.kundenserver.de with smtp (Exim 3.35 #1) id 1GR7Xx-0001Rn-00 for xfs@oss.sgi.com; Sat, 23 Sep 2006 15:27:57 +0200 Received: from mapibe17.exchange.xchg ([172.23.1.54]) by xchgsmtp.exchange.xchg with Microsoft SMTPSVC(6.0.3790.1830); Sat, 23 Sep 2006 15:27:57 +0200 X-MIMEOLE: Produced By Microsoft Exchange V6.5 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" X-ASG-Orig-Subj: largeio mount and performance impact Subject: largeio mount and performance impact Date: Sat, 23 Sep 2006 15:27:45 +0200 Message-ID: <55EF1E5D5804A542A6CA37E446DDC206655888@mapibe17.exchange.xchg> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: largeio mount and performance impact Thread-Index: AcbfFA3oo3ZCqR+aSz+IjJWOINSLtA== From: "Sebastian Brings" To: X-OriginalArrivalTime: 23 Sep 2006 13:27:57.0472 (UTC) FILETIME=[15343200:01C6DF14] X-Provags-ID: kundenserver.de abuse@kundenserver.de ident:@172.23.1.26 X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user 
scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21752 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id k8NEetaG030218 X-archive-position: 9070 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sebas@silexmedia.com Precedence: bulk X-list: xfs Content-Length: 1011 Lines: 22 In a different thread I read about the largeio mount option for XFS. Now I wonder if the problems I recently ran into have been caused by this. After a system upgrade from sles9 sp2 to sp3 one app started misbehaving. Before the upgrade it used 15% CPU, after the upgrade it was 90+% and the performance dropped by about 50%. The app is writing a wave audio file, and for every 3840 bytes of audio samples it appends, it updates the RIFF header of the file. All of this was done using the buffered fopen/fwrite/... C library functions. An strace showed that seeking to the beginning of the file also triggered a 12MiB read(2) call, and seeking to the end, for example to 13MiB, translated to a seek(2) to offset 1MiB and a read(2) of 12 MiB. Initially I assumed something very strange had happened to the C lib defaults. OTOH 12MiB is the swidth of the filesystem, so I assume that the C lib got the 12 MiB optimal IO size from XFS and therefore behaved as described above. Could that be? 
Regards Sebastian From owner-xfs@oss.sgi.com Sat Sep 23 09:35:16 2006 Received: with ECARTIS (v1.0.0; list xfs); Sat, 23 Sep 2006 09:35:26 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8NGZDaG015916 for ; Sat, 23 Sep 2006 09:35:16 -0700 X-ASG-Debug-ID: 1159029275-3637-415-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com (Spam Firewall) with ESMTP id 6B835D10E74F for ; Sat, 23 Sep 2006 09:34:35 -0700 (PDT) Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id 8B4231809E7D0; Sat, 23 Sep 2006 11:34:18 -0500 (CDT) Message-ID: <4515620F.5010607@sandeen.net> Date: Sat, 23 Sep 2006 11:34:23 -0500 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: Sebastian Brings CC: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: largeio mount and performance impact Subject: Re: largeio mount and performance impact References: <55EF1E5D5804A542A6CA37E446DDC206655888@mapibe17.exchange.xchg> In-Reply-To: <55EF1E5D5804A542A6CA37E446DDC206655888@mapibe17.exchange.xchg> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21761 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9071 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 1225 Lines: 27 Sebastian Brings wrote: > In a 
different thread I read about the largeio mount option for XFS. Now > I wonder if the problems I recently ran into have been caused by this. > > After a system upgrade from sles9 sp2 to sp3 one app started > misbehaving. Before the upgrade it used 15% CPU, after the upgrade it > was 90+% and the performance dropped by about 50%. The app is writing a > wave audio file, and for every 3840 bytes of audio samples it appends, > it updates the RIFF header of the file. All of this was done using the > buffered fopen/fwrite/... C library functions. > An strace showed that seeking to the beginning of the file also > triggered a 12MiB read(2) call, and seeking to the end, for example to > 13MiB, translated to a seek(2) to offset 1MiB and a read(2) of 12 MiB. > > Initially I assumed something very strange had happened to the C lib > defaults. OTOH 12MiB is the swidth of the filesystem, so I assume that > the C lib got the 12 MiB optimal IO size from XFS and therefore behaved > as described above. Could that be? Unless you specify the largeio mount option, I don't -think- any of this is exposed. What does stat -c %o say? Did this strace behavior change from sp2 to sp3? 
-Eric From owner-xfs@oss.sgi.com Sun Sep 24 06:42:21 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Sep 2006 06:42:29 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8ODgKaG004349 for ; Sun, 24 Sep 2006 06:42:21 -0700 X-ASG-Debug-ID: 1159105298-22916-131-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from moutng.kundenserver.de (unknown [212.227.126.190]) by cuda.sgi.com (Spam Firewall) with ESMTP id 5A5D8D111616 for ; Sun, 24 Sep 2006 06:41:39 -0700 (PDT) Received: from [212.227.126.203] (helo=mrvnet.kundenserver.de) by moutng.kundenserver.de with esmtp (Exim 3.35 #1) id 1GRUEi-0006HL-00; Sun, 24 Sep 2006 15:41:36 +0200 Received: from [172.23.1.26] (helo=xchgsmtp.exchange.xchg) by mrvnet.kundenserver.de with smtp (Exim 3.35 #1) id 1GRUEi-0003Sw-00; Sun, 24 Sep 2006 15:41:36 +0200 Received: from mapibe17.exchange.xchg ([172.23.1.54]) by xchgsmtp.exchange.xchg with Microsoft SMTPSVC(6.0.3790.1830); Sun, 24 Sep 2006 15:41:36 +0200 X-MIMEOLE: Produced By Microsoft Exchange V6.5 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" X-ASG-Orig-Subj: RE: largeio mount and performance impact Subject: RE: largeio mount and performance impact Date: Sun, 24 Sep 2006 15:41:34 +0200 Message-ID: <55EF1E5D5804A542A6CA37E446DDC2066558B4@mapibe17.exchange.xchg> In-Reply-To: <4515620F.5010607@sandeen.net> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: largeio mount and performance impact Thread-Index: AcbfLh+tNQre0GkrS+eGxOXRKxyUIQAsEOrA From: "Sebastian Brings" To: "Eric Sandeen" Cc: X-OriginalArrivalTime: 24 Sep 2006 13:41:36.0348 (UTC) FILETIME=[27B481C0:01C6DFDF] X-Provags-ID: kundenserver.de abuse@kundenserver.de ident:@172.23.1.26 X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= 
X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21824 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id k8ODgMaG004354 X-archive-position: 9073 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sebas@silexmedia.com Precedence: bulk X-list: xfs Content-Length: 1989 Lines: 68 > -----Original Message----- > From: Eric Sandeen [mailto:sandeen@sandeen.net] > Sent: Samstag, 23. September 2006 18:34 > To: Sebastian Brings > Cc: xfs@oss.sgi.com > Subject: Re: largeio mount and performance impact > > Sebastian Brings wrote: > > In a different thread I read about the largeio mount option for XFS. Now > > I wonder if the problems I recently ran in have been caused by this. > > > > After a system uprgade from sles9 sp2 to sp3 one app started > > misbehaving. Before the upgrade it used 15% CPU, after the upgrade it > > was 90+% and the performance dropped by about 50%. The app is writing a > > wave audio file, and for every 3840 bytes of audio samples it appends, > > it updates the RIFF header of the file. All of this was done using the > > buffered fopen/fwrite/... C library functions. > > An strace showed that seeking to the beginning of the file also > > triggered a 12MiB read(2) call, and seeking to the end, for example to > > 13MiB, translated to a seek(2) to offset 1mib and a read(2) of 12 MiB. > > > > Initiall I assumed something very strange had happened to the C lib > > defaults. Otoh 12MiB is the swidth of the filesystem, so I assume that > > the C lib got the 12 MiB optimal IO size from XFS and therefore behaved > > as described above. Could that be? > > Unless you specify the largeio mount option, I don't -think- any of this > is exposed. > > What does stat -c %o say? 
> > Did this strace behavior change from sp2 to sp3? > > -Eric Stat seems to report 12MB: # stat -c %o sebastian.tgz 12582912 Mount does not have largeio set explicitly. But this is a CXFS, so this may change things: /dev/cxvm/fs1 on /shares/fs1 type xfs (rw,noatime,client_timeout=1s,retry=0,server_list=(meta01, meta02),filestreams,quota,mtpt=/shares/fs1) I don't have strace data from the app running under sp2. But after changing the default library buffer size to a more reasonable size after the fopen(), the app behaved normally again. Sebastian From owner-xfs@oss.sgi.com Sun Sep 24 06:48:45 2006 Received: with ECARTIS (v1.0.0; list xfs); Sun, 24 Sep 2006 06:48:50 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8ODmhaG005502 for ; Sun, 24 Sep 2006 06:48:45 -0700 X-ASG-Debug-ID: 1159105684-20787-443-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.184]) by cuda.sgi.com (Spam Firewall) with ESMTP id 8AFACD110A4F for ; Sun, 24 Sep 2006 06:48:04 -0700 (PDT) Received: from [212.227.126.203] (helo=mrvnet.kundenserver.de) by moutng.kundenserver.de with esmtp (Exim 3.35 #1) id 1GRUKc-0002nU-00; Sun, 24 Sep 2006 15:47:42 +0200 Received: from [172.23.1.26] (helo=xchgsmtp.exchange.xchg) by mrvnet.kundenserver.de with smtp (Exim 3.35 #1) id 1GRUKb-0002fA-00; Sun, 24 Sep 2006 15:47:41 +0200 Received: from mapibe17.exchange.xchg ([172.23.1.54]) by xchgsmtp.exchange.xchg with Microsoft SMTPSVC(6.0.3790.1830); Sun, 24 Sep 2006 15:47:41 +0200 X-MIMEOLE: Produced By Microsoft Exchange V6.5 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" X-ASG-Orig-Subj: RE: largeio mount and performance impact Subject: RE: largeio mount and performance impact Date: Sun, 24 Sep 2006 15:47:40 +0200 Message-ID: <55EF1E5D5804A542A6CA37E446DDC2066558B6@mapibe17.exchange.xchg>
In-Reply-To: <55EF1E5D5804A542A6CA37E446DDC2066558B4@mapibe17.exchange.xchg> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: largeio mount and performance impact Thread-Index: AcbfLh+tNQre0GkrS+eGxOXRKxyUIQAsEOrAAABUpHA= From: "Sebastian Brings" To: "Eric Sandeen" Cc: X-OriginalArrivalTime: 24 Sep 2006 13:47:41.0771 (UTC) FILETIME=[0183A1B0:01C6DFE0] X-Provags-ID: kundenserver.de abuse@kundenserver.de ident:@172.23.1.26 X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21824 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id k8ODmjaG005506 X-archive-position: 9074 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sebas@silexmedia.com Precedence: bulk X-list: xfs Content-Length: 2576 Lines: 89 > -----Original Message----- > From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com] On Behalf Of > Sebastian Brings > Sent: Sonntag, 24. September 2006 15:42 > To: Eric Sandeen > Cc: xfs@oss.sgi.com > Subject: RE: largeio mount and performance impact > > > -----Original Message----- > > From: Eric Sandeen [mailto:sandeen@sandeen.net] > > Sent: Samstag, 23. September 2006 18:34 > > To: Sebastian Brings > > Cc: xfs@oss.sgi.com > > Subject: Re: largeio mount and performance impact > > > > Sebastian Brings wrote: > > > In a different thread I read about the largeio mount option for XFS. > Now > > > I wonder if the problems I recently ran in have been caused by this. > > > > > > After a system uprgade from sles9 sp2 to sp3 one app started > > > misbehaving. 
Before the upgrade it used 15% CPU, after the upgrade > it > > > was 90+% and the performance dropped by about 50%. The app is > writing a > > > wave audio file, and for every 3840 bytes of audio samples it > appends, > > > it updates the RIFF header of the file. All of this was done using > the > > > buffered fopen/fwrite/... C library functions. > > > An strace showed that seeking to the beginning of the file also > > > triggered a 12MiB read(2) call, and seeking to the end, for example > to > > > 13MiB, translated to a seek(2) to offset 1mib and a read(2) of 12 > MiB. > > > > > > Initiall I assumed something very strange had happened to the C lib > > > defaults. Otoh 12MiB is the swidth of the filesystem, so I assume > that > > > the C lib got the 12 MiB optimal IO size from XFS and therefore > behaved > > > as described above. Could that be? > > > > Unless you specify the largeio mount option, I don't -think- any of > this > > is exposed. > > > > What does stat -c %o say? > > > > Did this strace behavior change from sp2 to sp3? > > > > -Eric > > > Stat seems to report 12MB: > > # stat -c %o sebastian.tgz > 12582912 > > Mount does not have largeio set explicitly. But this is a CXFS, so this > may change tings: > > /dev/cxvm/fs1 on /shares/fs1 type xfs > (rw,noatime,client_timeout=1s,retry=0,server_list=(meta01, > meta02),filestreams,quota,mtpt=/shares/fs1) > > > I don't have strace data from the app running under sp2. But changing > the default library buffer size to a more reasonable size after the > fopen(), the app behaved normal again. > > > Sebastian > Now I remember. Filestreams was added during the update. Maybe that makes the difference. 
Anyway, I just wanted to point out that the largeio mount option may have unexpected side effects. Sebastian From owner-xfs@oss.sgi.com Mon Sep 25 01:02:14 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Sep 2006 01:02:27 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8P82AaG011099 for ; Mon, 25 Sep 2006 01:02:12 -0700 Received: from [134.14.55.141] (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id SAA13986; Mon, 25 Sep 2006 18:01:23 +1000 Message-ID: <45178D19.5000803@sgi.com> Date: Mon, 25 Sep 2006 18:02:33 +1000 From: Timothy Shimmin User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: Andrew Morton CC: Eric Sandeen , Linux Kernel Mailing List , xfs mailing list Subject: Re: [PATCH -mm] rescue large xfs preferred iosize from the inode diet patch References: <45131334.6050803@sandeen.net> <45134472.7080002@sgi.com> <20060922161040.609286fa.akpm@osdl.org> In-Reply-To: <20060922161040.609286fa.akpm@osdl.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 9075 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Content-Length: 1932 Lines: 52 Andrew Morton wrote: > On Fri, 22 Sep 2006 12:03:30 +1000 >> Looked at your patch and then at our xfs code in the tree and >> the existing code is different than what yours is based on.
>> I then noticed in the logs Nathan has actually made changes for this: >> >> ---------------------------- >> revision 1.254 >> date: 2006/07/17 10:46:05; author: nathans; state: Exp; lines: +20 -5 >> modid: xfs-linux-melb:xfs-kern:26565a >> Update XFS for i_blksize removal from generic inode structure >> ---------------------------- >> I even reviewed the change (and I don't remember it - getting old). >> >> I looked at the mods scheduled for 2.6.19 and this is one of them. >> >> So the fix for this is coming soon (and the fix is different from the >> one above). >> > > eh? Eric's patch is based on -mm, which includes the XFS git tree. If I > go and merge the inode-diet patches from -mm, XFS gets broken until you > guys merge the above mystery patch. (I prefer to merge the -mm patches > after all the git trees have gone, but sometimes maintainers dawdle and I > get bored of waiting). > > Is git://oss.sgi.com:8090/nathans/xfs-2.6 obsolete, or are you hiding stuff > from me? ;) > :) We're still getting our act together since Nathan is no longer here. Going forward the new git tree is at: git://oss.sgi.com:8090/xfs/xfs-2.6 This has some more recent changes than the "nathans" one but is far from up to date with the internal sgi tree and the external cvs tree (as you noticed with the nathans one:). I will get the "xfs" one updated in the next day or so. (Aside: for some strange reason, the "nathans" one has 3 extra mods (commits) and as expected (to me:) the "xfs" one has 10 extra mods (commits), and there are about 46 mods (including the missing 3) pending for the "xfs" tree. If we end up moving from our internal SCM to git at some point, this could make the updates less of a hassle:).
--Tim From owner-xfs@oss.sgi.com Mon Sep 25 01:41:59 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Sep 2006 01:42:07 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8P8fwaG015907 for ; Mon, 25 Sep 2006 01:41:59 -0700 X-ASG-Debug-ID: 1159173678-16238-611-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by cuda.sgi.com (Spam Firewall) with ESMTP id CFB0F45CB2D for ; Mon, 25 Sep 2006 01:41:18 -0700 (PDT) Received: from agami.com ([192.168.168.146]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8P8fHQ2005885 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 25 Sep 2006 01:41:17 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8P8fCKA003759 for ; Mon, 25 Sep 2006 01:41:12 -0700 Received: from [10.12.12.141] ([10.12.12.141]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 25 Sep 2006 01:45:18 -0700 Message-ID: <45179573.3020007@agami.com> Date: Mon, 25 Sep 2006 14:08:11 +0530 From: Shailendra Tripathi User-Agent: Mozilla Thunderbird 0.9 (X11/20041127) X-Accept-Language: en-us, en MIME-Version: 1.0 To: David Chinner CC: xfs@oss.sgi.com, Timothy Shimmin X-ASG-Orig-Subj: Data type overflow in xfs_trans_unreserve_and_mod_sb Subject: Data type overflow in xfs_trans_unreserve_and_mod_sb References: <55EF1E5D5804A542A6CA37E446DDC206655888@mapibe17.exchange.xchg> In-Reply-To: <55EF1E5D5804A542A6CA37E446DDC206655888@mapibe17.exchange.xchg> Content-Type: multipart/mixed; boundary="------------040408080302030208020808" X-OriginalArrivalTime: 25 Sep 2006 08:45:18.0390 (UTC) FILETIME=[EDA12560:01C6E07E] X-Scanned-By: MIMEDefang 2.36 X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= 
X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21883 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9076 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 5870 Lines: 164 This is a multi-part message in MIME format. --------------040408080302030208020808 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Hi David, As part of fixing the xfs_reserve_blocks issue, you might want to fix an issue in xfs_trans_unreserve_and_mod_sb as well. Since I am on a much older version, my patch is not applicable to newer trees. However, the patch is attached for your reference. The problem is as follows: Superblock modifications required during a transaction are stored in delta fields in the transaction. These fields are applied to the superblock when the transaction commits. The in-core superblock changes are done in xfs_trans_unreserve_and_mod_sb. It calls the xfs_mod_incore_sb_batch function to apply the changes. This function tries to apply the deltas and, if it fails for any reason, backs out all the changes. One typical modification looks like this: case XFS_SBS_DBLOCKS: lcounter = (long long)mp->m_sb.sb_dblocks; lcounter += delta; if (lcounter < 0) { ASSERT(0); return (XFS_ERROR(EINVAL)); } mp->m_sb.sb_dblocks = lcounter; return (0); So, when it returns EINVAL, the second part of the code backs out the changes made to the superblock. However, the worst part is that xfs_trans_unreserve_and_mod_sb does not return any error value. The transaction appears to be committed peacefully without returning the error. You don't notice this unless you do I/O on the filesystem. Later, it hits some sort of in-memory corruption or other errors.
We hit this issue in our testing when we tried to grow the filesystem from 100GB to 10000GB. The resulting delta is beyond the integer (31-bit) limit and, hence, for dblocks and fdblocks, the xfs_mod_sb struct does not pass in correct data. --------------040408080302030208020808 Content-Type: text/x-patch; name="xfs-sb-mod-growfs.patch" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="xfs-sb-mod-growfs.patch" diff -urNp -X kernel-2.6.7-dontdiff kernel-2.6.7-orig/fs/xfs/xfs_mount.c kernel-2.6.7-modif/fs/xfs/xfs_mount.c --- kernel-2.6.7-orig/fs/xfs/xfs_mount.c +++ kernel-2.6.7-modif/fs/xfs/xfs_mount.c @@ -1286,7 +1286,7 @@ xfs_mod_sb(xfs_trans_t *tp, __int64_t fi */ STATIC int xfs_mod_incore_sb_unlocked(xfs_mount_t *mp, xfs_sb_field_t field, - int delta, int rsvd) + long delta, int rsvd) { int scounter; /* short counter for 32 bit fields */ long long lcounter; /* long counter for 64 bit fields */ diff -urNp -X kernel-2.6.7-dontdiff kernel-2.6.7-orig/fs/xfs/xfs_mount.h kernel-2.6.7-modif/fs/xfs/xfs_mount.h --- kernel-2.6.7-orig/fs/xfs/xfs_mount.h +++ kernel-2.6.7-modif/fs/xfs/xfs_mount.h @@ -551,10 +551,11 @@ static inline xfs_agblock_t XFS_DADDR_TO /* * This structure is for use by the xfs_mod_incore_sb_batch() routine.
+ * xfs_growfs can specify a few fields which are more than int limit */ typedef struct xfs_mod_sb { xfs_sb_field_t msb_field; /* Field to modify, see below */ - int msb_delta; /* Change to make to specified field */ + long msb_delta; /* Change to make to specified field */ } xfs_mod_sb_t; #define XFS_MOUNT_ILOCK(mp) mutex_lock(&((mp)->m_ilock), PINOD) diff -urNp -X kernel-2.6.7-dontdiff kernel-2.6.7-orig/fs/xfs/xfs_trans.c kernel-2.6.7-modif/fs/xfs/xfs_trans.c --- kernel-2.6.7-orig/fs/xfs/xfs_trans.c +++ kernel-2.6.7-modif/fs/xfs/xfs_trans.c @@ -590,62 +590,62 @@ xfs_trans_unreserve_and_mod_sb( if (tp->t_flags & XFS_TRANS_SB_DIRTY) { if (tp->t_icount_delta != 0) { msbp->msb_field = XFS_SBS_ICOUNT; - msbp->msb_delta = (int)tp->t_icount_delta; + msbp->msb_delta = tp->t_icount_delta; msbp++; } if (tp->t_ifree_delta != 0) { msbp->msb_field = XFS_SBS_IFREE; - msbp->msb_delta = (int)tp->t_ifree_delta; + msbp->msb_delta = tp->t_ifree_delta; msbp++; } if (tp->t_fdblocks_delta != 0) { msbp->msb_field = XFS_SBS_FDBLOCKS; - msbp->msb_delta = (int)tp->t_fdblocks_delta; + msbp->msb_delta = tp->t_fdblocks_delta; msbp++; } if (tp->t_frextents_delta != 0) { msbp->msb_field = XFS_SBS_FREXTENTS; - msbp->msb_delta = (int)tp->t_frextents_delta; + msbp->msb_delta = tp->t_frextents_delta; msbp++; } if (tp->t_dblocks_delta != 0) { msbp->msb_field = XFS_SBS_DBLOCKS; - msbp->msb_delta = (int)tp->t_dblocks_delta; + msbp->msb_delta = tp->t_dblocks_delta; msbp++; } if (tp->t_agcount_delta != 0) { msbp->msb_field = XFS_SBS_AGCOUNT; - msbp->msb_delta = (int)tp->t_agcount_delta; + msbp->msb_delta = tp->t_agcount_delta; msbp++; } if (tp->t_imaxpct_delta != 0) { msbp->msb_field = XFS_SBS_IMAX_PCT; - msbp->msb_delta = (int)tp->t_imaxpct_delta; + msbp->msb_delta = tp->t_imaxpct_delta; msbp++; } if (tp->t_rextsize_delta != 0) { msbp->msb_field = XFS_SBS_REXTSIZE; - msbp->msb_delta = (int)tp->t_rextsize_delta; + msbp->msb_delta = tp->t_rextsize_delta; msbp++; } if (tp->t_rbmblocks_delta != 0) { 
msbp->msb_field = XFS_SBS_RBMBLOCKS; - msbp->msb_delta = (int)tp->t_rbmblocks_delta; + msbp->msb_delta = tp->t_rbmblocks_delta; msbp++; } if (tp->t_rblocks_delta != 0) { msbp->msb_field = XFS_SBS_RBLOCKS; - msbp->msb_delta = (int)tp->t_rblocks_delta; + msbp->msb_delta = tp->t_rblocks_delta; msbp++; } if (tp->t_rextents_delta != 0) { msbp->msb_field = XFS_SBS_REXTENTS; - msbp->msb_delta = (int)tp->t_rextents_delta; + msbp->msb_delta = tp->t_rextents_delta; msbp++; } if (tp->t_rextslog_delta != 0) { msbp->msb_field = XFS_SBS_REXTSLOG; - msbp->msb_delta = (int)tp->t_rextslog_delta; + msbp->msb_delta = tp->t_rextslog_delta; msbp++; } } --------------040408080302030208020808-- From owner-xfs@oss.sgi.com Mon Sep 25 07:33:24 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Sep 2006 07:33:35 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8PEXNaG026464 for ; Mon, 25 Sep 2006 07:33:24 -0700 X-ASG-Debug-ID: 1159194763-20021-379-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com (Spam Firewall) with ESMTP id 9DC4B45E459 for ; Mon, 25 Sep 2006 07:32:44 -0700 (PDT) Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by sandeen.net (Postfix) with ESMTP id DB0C518C732B7; Mon, 25 Sep 2006 09:32:42 -0500 (CDT) Message-ID: <4517E88E.4020809@sandeen.net> Date: Mon, 25 Sep 2006 09:32:46 -0500 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: Shailendra Tripathi CC: David Chinner , xfs@oss.sgi.com, Timothy Shimmin X-ASG-Orig-Subj: Re: Data type overflow in xfs_trans_unreserve_and_mod_sb Subject: Re: Data type overflow in xfs_trans_unreserve_and_mod_sb References: <55EF1E5D5804A542A6CA37E446DDC206655888@mapibe17.exchange.xchg> <45179573.3020007@agami.com> In-Reply-To: 
<45179573.3020007@agami.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21901 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9079 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 2148 Lines: 56 Shailendra Tripathi wrote: > Hi David, > As part of fixing xfs_reserve_blocks issue, you might want to > fix an issue in xfs_trans_unreserve_and_mod_sb as well. Since, I am on > much older version, my patch is not applicable on newer trees. However, > the patch is attached for your reference. > > The problem is as below: > > Superblock modifications required during transaction are stored in delta > fields in transaction. These fields are applied to the superblock when > transaction commits. > > The in-core superblock changes are done in > xfs_trans_unreserve_and_mod_sb. It calls xfs_mod_incore_sb_batch > function to apply the changes. This function tries to apply the deltas > and if it fails for any reason, it backs out all the changes. One > typical modification done is like that: > > case XFS_SBS_DBLOCKS: > lcounter = (long long)mp->m_sb.sb_dblocks; > lcounter += delta; > if (lcounter < 0) { > ASSERT(0); > return (XFS_ERROR(EINVAL)); > } > mp->m_sb.sb_dblocks = lcounter; > return (0); > > So, when it returns EINVAL, the second part of the code backs out the > changes made to superblock. However, the worst part is that > xfs_trans_unreserve_and_mod_sb does not return any error value. 
Hm, yep, just ASSERT(error == 0); I suppose this is the trickiness of canceling a transaction at some points... > The > transaction appears to be committed peacefully without returning the > error. You don't notice this unless you do I/O on the filesystem. Later, > it hits some sort of in-memory corruption or other errors. > > We hit this issue in our testing we tried to grow the filesystem from > from 100GB to 10000GB. This is beyond the interger (31 bits) limit and, > hence, for dblocks and fdblocks, xfs_mod_sb struct does not pass in > correct data. > > First thoughts, "long" won't help on 32 bit machines, perhaps this should be an explicitly-sized 64-bit type? -Eric p.s. good to see agami's recently active participation on the list! From owner-xfs@oss.sgi.com Mon Sep 25 07:56:37 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Sep 2006 07:56:46 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8PEuaaG029014 for ; Mon, 25 Sep 2006 07:56:37 -0700 X-ASG-Debug-ID: 1159196156-21487-760-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by cuda.sgi.com (Spam Firewall) with ESMTP id 2453B45CA3F for ; Mon, 25 Sep 2006 07:55:56 -0700 (PDT) Received: from agami.com ([192.168.168.146]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8PEttQ2009116 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Mon, 25 Sep 2006 07:55:55 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8PEto3a008522 for ; Mon, 25 Sep 2006 07:55:50 -0700 Received: from [10.12.12.141] ([10.12.12.141]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 25 Sep 2006 07:59:55 -0700 Message-ID: <4517ED41.8090305@agami.com> Date: Mon, 25 Sep 2006 20:22:49 +0530 From: Shailendra Tripathi User-Agent: Mozilla Thunderbird 0.9 (X11/20041127) 
X-Accept-Language: en-us, en MIME-Version: 1.0 To: Eric Sandeen CC: David Chinner , xfs@oss.sgi.com, Timothy Shimmin X-ASG-Orig-Subj: Re: Data type overflow in xfs_trans_unreserve_and_mod_sb Subject: Re: Data type overflow in xfs_trans_unreserve_and_mod_sb References: <55EF1E5D5804A542A6CA37E446DDC206655888@mapibe17.exchange.xchg> <45179573.3020007@agami.com> <4517E88E.4020809@sandeen.net> In-Reply-To: <4517E88E.4020809@sandeen.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 25 Sep 2006 14:59:55.0562 (UTC) FILETIME=[431168A0:01C6E0B3] X-Scanned-By: MIMEDefang 2.36 X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21901 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9080 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 669 Lines: 22 Eric Sandeen wrote: > Hm, yep, just ASSERT(error == 0); > > I suppose this is the trickiness of canceling a transaction at some > points... Yes, you are right. At the point where it is being applied, all the things can't be reverted back as XFS does not store the "before-image". However, typically in most of such cases, XFS goes ahead with shutting down the filesystem assuming these to be catastrophic or incorrigible errors requiring manual intervention. > First thoughts, "long" won't help on 32 bit machines, perhaps this > should be an explicitly-sized 64-bit type > -Eric > > p.s. good to see agami's recently active participation on the list! 
> From owner-xfs@oss.sgi.com Mon Sep 25 11:24:17 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Sep 2006 11:24:26 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8PIOGaG020825 for ; Mon, 25 Sep 2006 11:24:17 -0700 X-ASG-Debug-ID: 1159204457-24976-401-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.vodamail.co.za (mx1.vodamail.co.za [196.11.146.148]) by cuda.sgi.com (Spam Firewall) with ESMTP id 3A4C5D118FEE for ; Mon, 25 Sep 2006 10:14:18 -0700 (PDT) Received: from [10.48.96.8] (unknown [10.48.96.8]) by mx1.vodamail.co.za (Postfix) with ESMTP id 16BC5D40F5 for ; Mon, 25 Sep 2006 19:13:37 +0200 (SAST) Message-ID: <45180845.1050401@up.ac.za> Date: Mon, 25 Sep 2006 18:48:05 +0200 From: Paul Schutte User-Agent: Debian Thunderbird 1.0.2 (X11/20060804) X-Accept-Language: en-us, en MIME-Version: 1.0 To: xfs@oss.sgi.com X-ASG-Orig-Subj: DMAPI problem on the cvs tree of (2.6.17) SGI-XFS CVS-2006-08-26 Subject: DMAPI problem on the cvs tree of (2.6.17) SGI-XFS CVS-2006-08-26 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.50 X-Barracuda-Spam-Status: No, SCORE=0.50 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=BSF_RULE7568M X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21908 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.50 BSF_RULE7568M BODY: Custom Rule 7568M X-archive-position: 9081 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: paul@up.ac.za Precedence: bulk X-list: xfs Content-Length: 10617 Lines: 234 Hi, I am trying to get the DMAPI stuff going on xfs again. I last played with it back in the 2.4.25 kernel and it worked great. 
I now want to resume the work that I started back then, but am unable to get the dmapi to work on any recent kernel. I tried both 2.4.33 from the cvs and 2.6.17 from the cvs. (Both checked out on 2006-08-26). 2.4.33 was unable to compile. make[3]: Entering directory `/mnt/linux-2.4-xfs/fs/dmapi' gcc -D__KERNEL__ -I/mnt/linux-2.4-xfs/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fno-strict-aliasing -fno-common -fomit-frame-pointer -pipe -mpreferred-stack-boundary=2 -march=athlon -fno-unit-at-a-time -fno-optimize-sibling-calls -DDEBUG -g -nostdinc -iwithprefix include -DKBUILD_BASENAME=dmapi_sysent -DEXPORT_SYMTAB -c dmapi_sysent.c dmapi_sysent.c:54: error: conflicting types for 'dm_fsreg_cachep' dmapi_private.h:44: error: previous declaration of 'dm_fsreg_cachep' was here The following fixed that: ---------------------------------------------------------------------- --- /usr/src/linux-2.4-xfs/fs/dmapi/dmapi_private.h 2005-12-05 22:35:19.000000000 +0200 +++ linux-2.4-xfs/fs/dmapi/dmapi_private.h 2006-09-25 18:17:56.047674000 +0200 @@ -41,11 +41,11 @@ #define DMAPI_DBG_PROCFS "fs/dmapi_d" /* DMAPI debugging dir */ #endif -extern struct kmem_cache *dm_fsreg_cachep; -extern struct kmem_cache *dm_tokdata_cachep; -extern struct kmem_cache *dm_session_cachep; -extern struct kmem_cache *dm_fsys_map_cachep; -extern struct kmem_cache *dm_fsys_vptr_cachep; +extern kmem_cache_t *dm_fsreg_cachep; +extern kmem_cache_t *dm_tokdata_cachep; +extern kmem_cache_t *dm_session_cachep; +extern kmem_cache_t *dm_fsys_map_cachep; +extern kmem_cache_t *dm_fsys_vptr_cachep; typedef struct dm_tokdata { struct dm_tokdata *td_next; -------------------------------------------------------------------- Then I hit: make[4]: Entering directory `/mnt/linux-2.4-xfs/fs/xfs/dmapi' gcc -D__KERNEL__ -I/mnt/linux-2.4-xfs/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fno-strict-aliasing -fno-common -fomit-frame-pointer -pipe -mpreferred-stack-boundary=2 -march=athlon -fno-unit-at-a-time
-fno-optimize-sibling-calls -I /mnt/linux-2.4-xfs/fs/xfs -I /mnt/linux-2.4-xfs/fs/xfs/linux-2.4 -I /mnt/linux-2.4-xfs/fs/dmapi -nostdinc -iwithprefix include -DKBUILD_BASENAME=xfs_dm -c -o xfs_dm.o xfs_dm.c In file included from /mnt/linux-2.4-xfs/fs/xfs/xfs.h:25, from xfs_dm.c:18: /mnt/linux-2.4-xfs/fs/xfs/linux-2.4/xfs_linux.h:35:18: warning: extra tokens at end of #undef directive xfs_dm.c: In function `xfs_dm_set_fileattr': xfs_dm.c:2913: error: request for member `tv_sec' in something not a structure or union make[4]: *** [xfs_dm.o] Error 1 make[4]: Leaving directory `/mnt/linux-2.4-xfs/fs/xfs/dmapi' make[3]: *** [first_rule] Error 2 make[3]: Leaving directory `/mnt/linux-2.4-xfs/fs/xfs/dmapi' make[2]: *** [_subdir_dmapi] Error 2 make[2]: Leaving directory `/mnt/linux-2.4-xfs/fs/xfs' make[1]: *** [_subdir_xfs] Error 2 make[1]: Leaving directory `/mnt/linux-2.4-xfs/fs' make: *** [_dir_fs] Error 2 I then did: (which I know is not a proper fix, but I was desperate to get it to compile and wasn't too concerned about atime problems) ---------------------------------------------------------------------- -------------------- /usr/src/linux-2.4-xfs/fs/xfs/dmapi/xfs_dm.c 2006-08-26 16:01:18.000000000 +0200 +++ linux-2.4-xfs/fs/xfs/dmapi/xfs_dm.c 2006-09-25 18:29:49.836283000 +0200 @@ -2910,7 +2910,7 @@ vat.va_mask |= XFS_AT_ATIME; vat.va_atime.tv_sec = stat.fa_atime; vat.va_atime.tv_nsec = 0; - inode->i_atime.tv_sec = stat.fa_atime; +// inode->i_atime.tv_sec = stat.fa_atime; } if (mask & DM_AT_MTIME) { vat.va_mask |= XFS_AT_MTIME; ---------------------------------------------------------------------- It did compile then, but could not mount a filesystem with dmapi. Unfortunately I didn't save the kernel output. The 2.6.17 compiled cleanly, but also could not mount. I got the following dump: [17179626.228000] kobject xfs: registering.
parent: , set: module [17179626.228000] kobject_uevent [17179626.228000] fill_kobj_path: path = '/module/xfs' [17179626.244000] SGI-XFS CVS-2006-08-26_07:00_UTC with ACLs, security attributes, realtime, large block numbers, tracing, debug enabled [17179626.248000] xfs_dmapi: module license 'unspecified' taints kernel. [17179626.252000] kobject xfs_dmapi: registering. parent: , set: module [17179626.252000] kobject_uevent [17179626.252000] fill_kobj_path: path = '/module/xfs_dmapi' [17179626.264000] SGI XFS Data Management API subsystem [17179626.264000] ftype_list/348: Current ftype_list [17179626.264000] ftype_list/351: FS 0xdc3733c0, ftype 0xe15bc080 xfs [17179626.264000] ftype_list/353: Done ftype_list [17179649.888000] Large kmem_alloc attempt, size=6144 [17179649.888000] show_trace+0x20/0x30 dump_stack+0x1e/0x20 [17179649.888000] kmem_alloc+0x134/0x140 [xfs] kmem_zalloc+0x1e/0x50 [xfs] [17179649.888000] xfs_alloc_bufhash+0x48/0xd0 [xfs] xfs_alloc_buftarg+0x63/0x90 [xfs] [17179649.888000] xfs_mount+0x24c/0x730 [xfs] vfs_mount+0x9b/0xb0 [xfs] [17179649.892000] xfs_dm_mount+0x74/0x130 [xfs_dmapi] vfs_mount+0x9b/0xb0 [xfs] [17179649.892000] xfs_fs_fill_super+0x9a/0x230 [xfs] get_sb_bdev+0x100/0x170 [17179649.892000] xfs_fs_get_sb+0x2e/0x30 [xfs] do_kern_mount+0x56/0xd0 [17179649.892000] do_new_mount+0x58/0xb0 do_mount+0x19f/0x1d0 [17179649.892000] sys_mount+0x97/0xe0 sysenter_past_esp+0x54/0x75 [17179649.892000] Filesystem "md1": Disabling barriers, not supported by the underlying device [17179649.912000] XFS mounting filesystem md1 [17179650.028000] Ending clean XFS mount for filesystem: md1 [17179650.028000] ftype_list/348: Current ftype_list [17179650.028000] ftype_list/351: FS 0xdc3733c0, ftype 0xe15bc080 xfs [17179650.028000] ftype_list/353: Done ftype_list [17179650.028000] sb_list/330: Current sb_list [17179650.028000] sb_list/335: Done sb_list [17179650.028000] ftype_list/348: Current ftype_list [17179650.028000] ftype_list/351: FS 0xdc3733c0, ftype 
0xe15bc080 xfs [17179650.028000] ftype_list/353: Done ftype_list [17179650.028000] DMAPI assertion failed: fstype, file: fs/dmapi/dmapi_mountinfo.c, line: 280 [17179650.028000] ------------[ cut here ]------------ [17179650.028000] kernel BUG at fs/dmapi/dmapi_port.h:72! [17179650.028000] invalid opcode: 0000 [#1] [17179650.028000] PREEMPT [17179650.028000] Modules linked in: xfs_dmapi xfs tuner saa7134 video_buf compat_ioctl32 v4l2_common v4l1_compat ir_kbd_i2c ir_common videodev ohci1394 ieee 1394 pdc202xx_new ide_cd cdrom [17179650.028000] CPU: 0 [17179650.028000] EIP: 0060:[] Tainted: P VLI [17179650.028000] EFLAGS: 00010296 (2.6.17-dmapi #1) [17179650.028000] EIP is at dm_fsys_map_by_fstype+0x64/0x70 [17179650.028000] eax: 00000061 ebx: 00000000 ecx: 00000000 edx: 00000001 [17179650.028000] esi: 00000000 edi: dc368d2c ebp: dc2ffc9c esp: dc2ffc84 [17179650.028000] ds: 007b es: 007b ss: 0068 [17179650.028000] Process mount (pid: 2836, threadinfo=dc2fe000 task=df97c0b0) [17179650.028000] Stack: c0554fb8 c054f4ac c054f488 00000118 dc368d2c 00000000 dc2ffcc8 c019f846 [17179650.028000] 00000000 c019f9ac c054f501 c051e534 00000161 dc3733c0 e15bc080 dc368d2c [17179650.028000] 00000000 dc2ffcf4 c019faa0 e1574680 dfeef4c4 dc2ffd08 c015ceca c14b0960 [17179650.028000] Call Trace: [17179650.028000] show_stack_log_lvl+0x90/0xc0 show_registers+0x1a3/0x220 [17179650.028000] die+0x118/0x240 do_trap+0x87/0xd0 [17179650.028000] do_invalid_op+0xb5/0xc0 error_code+0x4f/0x54 [17179650.028000] sb_list+0x16/0xf0 dm_fsys_ops+0x30/0x1e0 [17179650.028000] dm_ip_to_handle+0x20/0x100 dm_ip_data+0xa9/0x110 [17179650.028000] dm_send_mount_event+0x72/0x430 xfs_dm_mount+0x12c/0x130 [xfs_dmapi] [17179650.028000] vfs_mount+0x9b/0xb0 [xfs] xfs_fs_fill_super+0x9a/0x230 [xfs] [17179650.028000] get_sb_bdev+0x100/0x170 xfs_fs_get_sb+0x2e/0x30 [xfs] [17179650.028000] do_kern_mount+0x56/0xd0 do_new_mount+0x58/0xb0 [17179650.028000] do_mount+0x19f/0x1d0 sys_mount+0x97/0xe0 [17179650.028000] 
sysenter_past_esp+0x54/0x75 [17179650.028000] Code: c6 5b 89 f0 5e c9 c3 c7 44 24 0c 18 01 00 00 c7 44 24 08 88 f4 54 c0 c7 44 24 04 ac f4 54 c0 c7 04 24 b8 4f 55 c0 e8 8c da f7 ff <0f> 0b 48 00 72 f4 54 c0 eb a3 89 f6 55 89 e5 56 31 f6 53 83 ec [17179650.028000] EIP: [] dm_fsys_map_by_fstype+0x64/0x70 SS:ESP 0068:dc2ffc84 [17179650.028000] <6>note: mount[2836] exited with preempt_count 1 [17179652.952000] BUG: sleeping function called from invalid context at include/linux/rwsem.h:43 [17179652.952000] in_atomic():1, irqs_disabled():0 [17179652.952000] show_trace+0x20/0x30 dump_stack+0x1e/0x20 [17179652.952000] __might_sleep+0xa1/0xc0 exit_mm+0x3c/0x140 [17179652.952000] do_exit+0xda/0x460 die+0x23a/0x240 [17179652.952000] do_trap+0x87/0xd0 do_invalid_op+0xb5/0xc0 [17179652.952000] error_code+0x4f/0x54 sb_list+0x16/0xf0 [17179652.952000] dm_fsys_ops+0x30/0x1e0 dm_ip_to_handle+0x20/0x100 [17179652.952000] dm_ip_data+0xa9/0x110 dm_send_mount_event+0x72/0x430 [17179652.952000] xfs_dm_mount+0x12c/0x130 [xfs_dmapi] vfs_mount+0x9b/0xb0 [xfs] [17179652.952000] xfs_fs_fill_super+0x9a/0x230 [xfs] get_sb_bdev+0x100/0x170 [17179652.952000] xfs_fs_get_sb+0x2e/0x30 [xfs] do_kern_mount+0x56/0xd0 [17179652.956000] do_new_mount+0x58/0xb0 do_mount+0x19f/0x1d0 [17179652.956000] sys_mount+0x97/0xe0 sysenter_past_esp+0x54/0x75 I used "mount /dev/md1 -o dmapi,mtpt=/mnt /mnt" to try and mount the filesystem in all the cases (which worked great back in 2.4.25). I would appreciate any help. 
Regards Paul Schutte From owner-xfs@oss.sgi.com Mon Sep 25 11:30:12 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Sep 2006 11:30:18 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8PIUBaG021651 for ; Mon, 25 Sep 2006 11:30:12 -0700 X-ASG-Debug-ID: 1159208970-11527-335-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from kendy.up.ac.za (kendy.up.ac.za [137.215.101.101]) by cuda.sgi.com (Spam Firewall) with ESMTP id 24AEE3E99B9 for ; Mon, 25 Sep 2006 11:29:31 -0700 (PDT) Received: from postino.up.ac.za ([137.215.6.15] helo=mx1.up.ac.za) by kendy.up.ac.za with esmtp (Exim 4.50) id 1GRvCV-0000zl-Qd for xfs@oss.sgi.com; Mon, 25 Sep 2006 20:29:07 +0200 Received: from cleopatra.up.ac.za ([137.215.124.210]) by mx1.up.ac.za with esmtpsa (TLS-1.0:DHE_RSA_AES_256_CBC_SHA:32) (Exim 4.50) id 1GRvCU-0002dB-O6 for xfs@oss.sgi.com; Mon, 25 Sep 2006 20:29:07 +0200 Message-ID: <45181FF0.7030105@up.ac.za> Date: Mon, 25 Sep 2006 20:29:04 +0200 From: Paul Schutte User-Agent: Thunderbird 1.5.0.5 (X11/20060812) MIME-Version: 1.0 To: xfs@oss.sgi.com X-ASG-Orig-Subj: DMAPI problem on the cvs tree of (2.6.17) SGI-XFS CVS-2006-08-26 Subject: DMAPI problem on the cvs tree of (2.6.17) SGI-XFS CVS-2006-08-26 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Scan-Signature: ce0d2b4861e238ec3b8993fcb82bb86f X-Barracuda-Spam-Score: 0.50 X-Barracuda-Spam-Status: No, SCORE=0.50 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=BSF_RULE7568M X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21913 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.50 BSF_RULE7568M BODY: Custom Rule 7568M X-archive-position: 9082 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com 
X-original-sender: paul@up.ac.za Precedence: bulk X-list: xfs Content-Length: 10551 Lines: 234

Hi,

I am trying to get the DMAPI stuff going on XFS again. I last played with it back in the 2.4.25 kernel and it worked great. I now want to resume the work that I started back then, but am unable to get DMAPI to work on any recent kernel. I tried both 2.4.33 from the CVS and 2.6.17 from the CVS. (Both checked out on 2006-08-26.)

2.4.33 was unable to compile:

make[3]: Entering directory `/mnt/linux-2.4-xfs/fs/dmapi'
gcc -D__KERNEL__ -I/mnt/linux-2.4-xfs/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fno-strict-aliasing -fno-common -fomit-frame-pointer -pipe -mpreferred-stack-boundary=2 -march=athlon -fno-unit-at-a-time -fno-optimize-sibling-calls -DDEBUG -g -nostdinc -iwithprefix include -DKBUILD_BASENAME=dmapi_sysent -DEXPORT_SYMTAB -c dmapi_sysent.c
dmapi_sysent.c:54: error: conflicting types for 'dm_fsreg_cachep'
dmapi_private.h:44: error: previous declaration of 'dm_fsreg_cachep' was here

The following fixed that:
----------------------------------------------------------------------
--- /usr/src/linux-2.4-xfs/fs/dmapi/dmapi_private.h	2005-12-05 22:35:19.000000000 +0200
+++ linux-2.4-xfs/fs/dmapi/dmapi_private.h	2006-09-25 18:17:56.047674000 +0200
@@ -41,11 +41,11 @@
 #define DMAPI_DBG_PROCFS "fs/dmapi_d"	/* DMAPI debugging dir */
 #endif
 
-extern struct kmem_cache *dm_fsreg_cachep;
-extern struct kmem_cache *dm_tokdata_cachep;
-extern struct kmem_cache *dm_session_cachep;
-extern struct kmem_cache *dm_fsys_map_cachep;
-extern struct kmem_cache *dm_fsys_vptr_cachep;
+extern kmem_cache_t *dm_fsreg_cachep;
+extern kmem_cache_t *dm_tokdata_cachep;
+extern kmem_cache_t *dm_session_cachep;
+extern kmem_cache_t *dm_fsys_map_cachep;
+extern kmem_cache_t *dm_fsys_vptr_cachep;
 
 typedef struct dm_tokdata {
 	struct dm_tokdata *td_next;
--------------------------------------------------------------------

Then I hit:

make[4]: Entering directory
`/mnt/linux-2.4-xfs/fs/xfs/dmapi'
gcc -D__KERNEL__ -I/mnt/linux-2.4-xfs/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fno-strict-aliasing -fno-common -fomit-frame-pointer -pipe -mpreferred-stack-boundary=2 -march=athlon -fno-unit-at-a-time -fno-optimize-sibling-calls -I /mnt/linux-2.4-xfs/fs/xfs -I /mnt/linux-2.4-xfs/fs/xfs/linux-2.4 -I /mnt/linux-2.4-xfs/fs/dmapi -nostdinc -iwithprefix include -DKBUILD_BASENAME=xfs_dm -c -o xfs_dm.o xfs_dm.c
In file included from /mnt/linux-2.4-xfs/fs/xfs/xfs.h:25,
                 from xfs_dm.c:18:
/mnt/linux-2.4-xfs/fs/xfs/linux-2.4/xfs_linux.h:35:18: warning: extra tokens at end of #undef directive
xfs_dm.c: In function `xfs_dm_set_fileattr':
xfs_dm.c:2913: error: request for member `tv_sec' in something not a structure or union
make[4]: *** [xfs_dm.o] Error 1
make[4]: Leaving directory `/mnt/linux-2.4-xfs/fs/xfs/dmapi'
make[3]: *** [first_rule] Error 2
make[3]: Leaving directory `/mnt/linux-2.4-xfs/fs/xfs/dmapi'
make[2]: *** [_subdir_dmapi] Error 2
make[2]: Leaving directory `/mnt/linux-2.4-xfs/fs/xfs'
make[1]: *** [_subdir_xfs] Error 2
make[1]: Leaving directory `/mnt/linux-2.4-xfs/fs'
make: *** [_dir_fs] Error 2

I then did the following (which I know is not a proper fix, but I was desperate to get it to compile and wasn't too concerned about atime problems):

----------------------------------------------------------------------
--- /usr/src/linux-2.4-xfs/fs/xfs/dmapi/xfs_dm.c	2006-08-26 16:01:18.000000000 +0200
+++ linux-2.4-xfs/fs/xfs/dmapi/xfs_dm.c	2006-09-25 18:29:49.836283000 +0200
@@ -2910,7 +2910,7 @@
 		vat.va_mask |= XFS_AT_ATIME;
 		vat.va_atime.tv_sec = stat.fa_atime;
 		vat.va_atime.tv_nsec = 0;
-		inode->i_atime.tv_sec = stat.fa_atime;
+//		inode->i_atime.tv_sec = stat.fa_atime;
 	}
 	if (mask & DM_AT_MTIME) {
 		vat.va_mask |= XFS_AT_MTIME;
----------------------------------------------------------------------

It did compile then, but could not mount a filesystem with DMAPI. Unfortunately I didn't save the kernel output.
The 2.6.17 compiled cleanly, but also could not mount. I got the following dump:

[17179626.228000] kobject xfs: registering. parent: , set: module
[17179626.228000] kobject_uevent
[17179626.228000] fill_kobj_path: path = '/module/xfs'
[17179626.244000] SGI-XFS CVS-2006-08-26_07:00_UTC with ACLs, security attributes, realtime, large block numbers, tracing, debug enabled
[17179626.248000] xfs_dmapi: module license 'unspecified' taints kernel.
[17179626.252000] kobject xfs_dmapi: registering. parent: , set: module
[17179626.252000] kobject_uevent
[17179626.252000] fill_kobj_path: path = '/module/xfs_dmapi'
[17179626.264000] SGI XFS Data Management API subsystem
[17179626.264000] ftype_list/348: Current ftype_list
[17179626.264000] ftype_list/351: FS 0xdc3733c0, ftype 0xe15bc080 xfs
[17179626.264000] ftype_list/353: Done ftype_list
[17179649.888000] Large kmem_alloc attempt, size=6144
[17179649.888000] show_trace+0x20/0x30 dump_stack+0x1e/0x20
[17179649.888000] kmem_alloc+0x134/0x140 [xfs] kmem_zalloc+0x1e/0x50 [xfs]
[17179649.888000] xfs_alloc_bufhash+0x48/0xd0 [xfs] xfs_alloc_buftarg+0x63/0x90 [xfs]
[17179649.888000] xfs_mount+0x24c/0x730 [xfs] vfs_mount+0x9b/0xb0 [xfs]
[17179649.892000] xfs_dm_mount+0x74/0x130 [xfs_dmapi] vfs_mount+0x9b/0xb0 [xfs]
[17179649.892000] xfs_fs_fill_super+0x9a/0x230 [xfs] get_sb_bdev+0x100/0x170
[17179649.892000] xfs_fs_get_sb+0x2e/0x30 [xfs] do_kern_mount+0x56/0xd0
[17179649.892000] do_new_mount+0x58/0xb0 do_mount+0x19f/0x1d0
[17179649.892000] sys_mount+0x97/0xe0 sysenter_past_esp+0x54/0x75
[17179649.892000] Filesystem "md1": Disabling barriers, not supported by the underlying device
[17179649.912000] XFS mounting filesystem md1
[17179650.028000] Ending clean XFS mount for filesystem: md1
[17179650.028000] ftype_list/348: Current ftype_list
[17179650.028000] ftype_list/351: FS 0xdc3733c0, ftype 0xe15bc080 xfs
[17179650.028000] ftype_list/353: Done ftype_list
[17179650.028000] sb_list/330: Current sb_list
[17179650.028000] sb_list/335: Done sb_list
[17179650.028000] ftype_list/348: Current ftype_list
[17179650.028000] ftype_list/351: FS 0xdc3733c0, ftype 0xe15bc080 xfs
[17179650.028000] ftype_list/353: Done ftype_list
[17179650.028000] DMAPI assertion failed: fstype, file: fs/dmapi/dmapi_mountinfo.c, line: 280
[17179650.028000] ------------[ cut here ]------------
[17179650.028000] kernel BUG at fs/dmapi/dmapi_port.h:72!
[17179650.028000] invalid opcode: 0000 [#1]
[17179650.028000] PREEMPT
[17179650.028000] Modules linked in: xfs_dmapi xfs tuner saa7134 video_buf compat_ioctl32 v4l2_common v4l1_compat ir_kbd_i2c ir_common videodev ohci1394 ieee1394 pdc202xx_new ide_cd cdrom
[17179650.028000] CPU: 0
[17179650.028000] EIP: 0060:[] Tainted: P VLI
[17179650.028000] EFLAGS: 00010296 (2.6.17-dmapi #1)
[17179650.028000] EIP is at dm_fsys_map_by_fstype+0x64/0x70
[17179650.028000] eax: 00000061 ebx: 00000000 ecx: 00000000 edx: 00000001
[17179650.028000] esi: 00000000 edi: dc368d2c ebp: dc2ffc9c esp: dc2ffc84
[17179650.028000] ds: 007b es: 007b ss: 0068
[17179650.028000] Process mount (pid: 2836, threadinfo=dc2fe000 task=df97c0b0)
[17179650.028000] Stack: c0554fb8 c054f4ac c054f488 00000118 dc368d2c 00000000 dc2ffcc8 c019f846
[17179650.028000]        00000000 c019f9ac c054f501 c051e534 00000161 dc3733c0 e15bc080 dc368d2c
[17179650.028000]        00000000 dc2ffcf4 c019faa0 e1574680 dfeef4c4 dc2ffd08 c015ceca c14b0960
[17179650.028000] Call Trace:
[17179650.028000] show_stack_log_lvl+0x90/0xc0 show_registers+0x1a3/0x220
[17179650.028000] die+0x118/0x240 do_trap+0x87/0xd0
[17179650.028000] do_invalid_op+0xb5/0xc0 error_code+0x4f/0x54
[17179650.028000] sb_list+0x16/0xf0 dm_fsys_ops+0x30/0x1e0
[17179650.028000] dm_ip_to_handle+0x20/0x100 dm_ip_data+0xa9/0x110
[17179650.028000] dm_send_mount_event+0x72/0x430 xfs_dm_mount+0x12c/0x130 [xfs_dmapi]
[17179650.028000] vfs_mount+0x9b/0xb0 [xfs] xfs_fs_fill_super+0x9a/0x230 [xfs]
[17179650.028000] get_sb_bdev+0x100/0x170 xfs_fs_get_sb+0x2e/0x30 [xfs]
[17179650.028000] do_kern_mount+0x56/0xd0 do_new_mount+0x58/0xb0
[17179650.028000] do_mount+0x19f/0x1d0 sys_mount+0x97/0xe0
[17179650.028000] sysenter_past_esp+0x54/0x75
[17179650.028000] Code: c6 5b 89 f0 5e c9 c3 c7 44 24 0c 18 01 00 00 c7 44 24 08 88 f4 54 c0 c7 44 24 04 ac f4 54 c0 c7 04 24 b8 4f 55 c0 e8 8c da f7 ff <0f> 0b 48 00 72 f4 54 c0 eb a3 89 f6 55 89 e5 56 31 f6 53 83 ec
[17179650.028000] EIP: [] dm_fsys_map_by_fstype+0x64/0x70 SS:ESP 0068:dc2ffc84
[17179650.028000] <6>note: mount[2836] exited with preempt_count 1
[17179652.952000] BUG: sleeping function called from invalid context at include/linux/rwsem.h:43
[17179652.952000] in_atomic():1, irqs_disabled():0
[17179652.952000] show_trace+0x20/0x30 dump_stack+0x1e/0x20
[17179652.952000] __might_sleep+0xa1/0xc0 exit_mm+0x3c/0x140
[17179652.952000] do_exit+0xda/0x460 die+0x23a/0x240
[17179652.952000] do_trap+0x87/0xd0 do_invalid_op+0xb5/0xc0
[17179652.952000] error_code+0x4f/0x54 sb_list+0x16/0xf0
[17179652.952000] dm_fsys_ops+0x30/0x1e0 dm_ip_to_handle+0x20/0x100
[17179652.952000] dm_ip_data+0xa9/0x110 dm_send_mount_event+0x72/0x430
[17179652.952000] xfs_dm_mount+0x12c/0x130 [xfs_dmapi] vfs_mount+0x9b/0xb0 [xfs]
[17179652.952000] xfs_fs_fill_super+0x9a/0x230 [xfs] get_sb_bdev+0x100/0x170
[17179652.952000] xfs_fs_get_sb+0x2e/0x30 [xfs] do_kern_mount+0x56/0xd0
[17179652.956000] do_new_mount+0x58/0xb0 do_mount+0x19f/0x1d0
[17179652.956000] sys_mount+0x97/0xe0 sysenter_past_esp+0x54/0x75

I used "mount /dev/md1 -o dmapi,mtpt=/mnt /mnt" to try and mount the filesystem in all the cases (which worked great back in 2.4.25).

I would appreciate any help.
Regards Paul Schutte From owner-xfs@oss.sgi.com Mon Sep 25 16:34:37 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Sep 2006 16:34:45 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8PNYZaG017329 for ; Mon, 25 Sep 2006 16:34:36 -0700 X-ASG-Debug-ID: 1159222327-28410-399-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from tricca.tcs.tulane.edu (tricca.tcs.tulane.edu [129.81.224.27]) by cuda.sgi.com (Spam Firewall) with ESMTP id 6878045F68E for ; Mon, 25 Sep 2006 15:12:07 -0700 (PDT) Received: from tricca.tcs.tulane.edu (localhost.localdomain [127.0.0.1]) by tricca.tcs.tulane.edu (8.13.6/8.13.6) with ESMTP id k8PMBnNu011368; Mon, 25 Sep 2006 17:11:49 -0500 Received: from olympus.tcs.tulane.edu (olympus.tcs.tulane.edu [129.81.224.6] (may be forged)) by tricca.tcs.tulane.edu (8.13.6/8.12.8) with ESMTP id k8PMBmbW011363; Mon, 25 Sep 2006 17:11:48 -0500 Received: from [129.81.86.224] (localhost [127.0.0.1]) (authenticated bits=0) by olympus.tcs.tulane.edu (8.13.6/8.13.6) with ESMTP id k8PMBmWM005400; Mon, 25 Sep 2006 17:11:48 -0500 (CDT) Message-ID: <45185424.2030707@tulane.edu> Date: Mon, 25 Sep 2006 17:11:48 -0500 From: Rene Salmon User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: xfs@oss.sgi.com CC: Rene Salmon X-ASG-Orig-Subj: LVM and XFS cannot set blocksize on block device Subject: LVM and XFS cannot set blocksize on block device Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21922 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9085 X-ecartis-version: Ecartis v1.0.0 
Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: rsalmon@tulane.edu Precedence: bulk X-list: xfs Content-Length: 1248 Lines: 47

Hi,

I am trying to create an XFS file system on an LVM logical volume. The actual physical drive has blocks or sectors of 4096. When I try to make the XFS file system on top of LVM I get this message:

helix-priv:~ # mkfs.xfs -f /dev/vg_u00/lv_u00
mkfs.xfs: warning - cannot set blocksize on block device /dev/vg_u00/lv_u00: Invalid argument
Warning: the data subvolume sector size 512 is less than the sector size reported by the device (4096).
meta-data=/dev/vg_u00/lv_u00   isize=256    agcount=32, agsize=11489280 blks
         =                     sectsz=512   attr=0
data     =                     bsize=4096   blocks=367656960, imaxpct=25
         =                     sunit=0      swidth=0 blks, unwritten=1
naming   =version 2            bsize=4096
log      =internal log         bsize=4096   blocks=32768, version=1
         =                     sectsz=512   sunit=0 blks
realtime =none                 extsz=65536  blocks=0, rtextents=0
helix-priv:~ #

Any ideas?

Thanks
Rene

-- 
Rene Salmon
Tulane University Center for Computational Science
http://www.ccs.tulane.edu
rsalmon@tulane.edu
Tel 504-862-8393 Tel 504-988-8552 Fax 504-862-8392

From owner-xfs@oss.sgi.com Mon Sep 25 17:13:43 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Sep 2006 17:13:49 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8Q0DcaG020379 for ; Mon, 25 Sep 2006 17:13:40 -0700 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA03043; Tue, 26 Sep 2006 10:12:49 +1000 Message-ID: <451870A2.6060406@sgi.com> Date: Tue, 26 Sep 2006 10:13:22 +1000 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.5 (X11/20060719) MIME-Version: 1.0 To: Paul Schutte CC: xfs@oss.sgi.com Subject: Re: DMAPI problem on the cvs tree of (2.6.17) SGI-XFS CVS-2006-08-26 References: <45181FF0.7030105@up.ac.za> In-Reply-To:
<45181FF0.7030105@up.ac.za> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 9086 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs Content-Length: 11203 Lines: 249 Hi Paul, To get the 2.6.17 going please turn off CONFIG_XFS_TRACE in .config and rebuild the kernel and the modules. This problem is on my todo list for investigation. Regards, Vlad Paul Schutte wrote: > Hi, > > I am trying to get the DMAPI stuff going on xfs again. I last played > with it back in the 2.4.25 kernel and it worked great. > I now want to resume the work that I started back then, but are unable > to get the dmapi to work on any recent kernel. I tried both 2.4.33 from > the cvs and 2.6.17 from the cvs. (Both checked out on 2006-08-26). > > 2.4.33 was unable to compile. > > make[3]: Entering directory `/mnt/linux-2.4-xfs/fs/dmapi' > gcc -D__KERNEL__ -I/mnt/linux-2.4-xfs/include -Wall -Wstrict-prototypes > -Wno-trigraphs -O2 -fno-strict-aliasing -fno-common -fomit-frame-pointer > -pipe -mpreferred-stack-boundary=2 -march=athlon -fno-unit-at-a-time > -fno-optimize-sibling-calls -DDEBUG -g -nostdinc -iwithprefix include > -DKBUILD_BASENAME=dmapi_sysent -DEXPORT_SYMTAB -c dmapi_sysent.c > dmapi_sysent.c:54: error: conflicting types for 'dm_fsreg_cachep' > dmapi_private.h:44: error: previous declaration of 'dm_fsreg_cachep' was > here > > The following fixed that: > ---------------------------------------------------------------------- > --- /usr/src/linux-2.4-xfs/fs/dmapi/dmapi_private.h 2005-12-05 > 22:35:19.000000000 +0200 > +++ linux-2.4-xfs/fs/dmapi/dmapi_private.h 2006-09-25 > 18:17:56.047674000 +0200 > @@ -41,11 +41,11 @@ > #define DMAPI_DBG_PROCFS "fs/dmapi_d" /* DMAPI debugging dir */ > #endif > > -extern struct kmem_cache *dm_fsreg_cachep; > -extern struct kmem_cache *dm_tokdata_cachep; > -extern struct kmem_cache 
*dm_session_cachep; > -extern struct kmem_cache *dm_fsys_map_cachep; > -extern struct kmem_cache *dm_fsys_vptr_cachep; > +extern kmem_cache_t *dm_fsreg_cachep; > +extern kmem_cache_t *dm_tokdata_cachep; > +extern kmem_cache_t *dm_session_cachep; > +extern kmem_cache_t *dm_fsys_map_cachep; > +extern kmem_cache_t *dm_fsys_vptr_cachep; > > typedef struct dm_tokdata { > struct dm_tokdata *td_next; > -------------------------------------------------------------------- > > Then I had hit: > > make[4]: Entering directory `/mnt/linux-2.4-xfs/fs/xfs/dmapi' > gcc -D__KERNEL__ -I/mnt/linux-2.4-xfs/include -Wall -Wstrict-prototypes > -Wno-trigraphs -O2 -fno-strict-aliasing -fno-common -fomit-frame-pointer > -pipe -mpreferred-stack-boundary=2 -march=athlon -fno-unit-at-a-time > -fno-optimize-sibling-calls -I /mnt/linux-2.4-xfs/fs/xfs -I > /mnt/linux-2.4-xfs/fs/xfs/linux-2.4 -I /mnt/linux-2.4-xfs/fs/dmapi > -nostdinc -iwithprefix include -DKBUILD_BASENAME=xfs_dm -c -o xfs_dm.o > xfs_dm.c > In file included from /mnt/linux-2.4-xfs/fs/xfs/xfs.h:25, > from xfs_dm.c:18: > /mnt/linux-2.4-xfs/fs/xfs/linux-2.4/xfs_linux.h:35:18: warning: extra > tokens at end of #undef directive > xfs_dm.c: In function `xfs_dm_set_fileattr': > xfs_dm.c:2913: error: request for member `tv_sec' in something not a > structure or union > make[4]: *** [xfs_dm.o] Error 1 > make[4]: Leaving directory `/mnt/linux-2.4-xfs/fs/xfs/dmapi' > make[3]: *** [first_rule] Error 2 > make[3]: Leaving directory `/mnt/linux-2.4-xfs/fs/xfs/dmapi' > make[2]: *** [_subdir_dmapi] Error 2 > make[2]: Leaving directory `/mnt/linux-2.4-xfs/fs/xfs' > make[1]: *** [_subdir_xfs] Error 2 > make[1]: Leaving directory `/mnt/linux-2.4-xfs/fs' > make: *** [_dir_fs] Error 2 > > I then did: (which I know is not a proper fix, but I was desperate to > get it to compile and was'nt too concerned about atime problems) > > ---------------------------------------------------------------------- > -------------------- 
/usr/src/linux-2.4-xfs/fs/xfs/dmapi/xfs_dm.c > 2006-08-26 16:01:18.000000000 +0200 > +++ linux-2.4-xfs/fs/xfs/dmapi/xfs_dm.c 2006-09-25 18:29:49.836283000 > +0200 > @@ -2910,7 +2910,7 @@ > vat.va_mask |= XFS_AT_ATIME; > vat.va_atime.tv_sec = stat.fa_atime; > vat.va_atime.tv_nsec = 0; > - inode->i_atime.tv_sec = stat.fa_atime; > +// inode->i_atime.tv_sec = stat.fa_atime; > } > if (mask & DM_AT_MTIME) { > vat.va_mask |= XFS_AT_MTIME; > ---------------------------------------------------------------------- > > It did compile then, but could not mount a filesystem with dmapi. > Unfortunatedly I did'nt save the kernel output. > > The 2.6.17 compiled cleanly, but also could not mount. > I got the following dump: > > > [17179626.228000] kobject xfs: registering. parent: , set: module > [17179626.228000] kobject_uevent > [17179626.228000] fill_kobj_path: path = '/module/xfs' > [17179626.244000] SGI-XFS CVS-2006-08-26_07:00_UTC with ACLs, security > attributes, realtime, large block numbers, tracing, debug enabled > [17179626.248000] xfs_dmapi: module license 'unspecified' taints kernel. > [17179626.252000] kobject xfs_dmapi: registering. 
parent: , set: > module > [17179626.252000] kobject_uevent > [17179626.252000] fill_kobj_path: path = '/module/xfs_dmapi' > [17179626.264000] SGI XFS Data Management API subsystem > [17179626.264000] ftype_list/348: Current ftype_list > [17179626.264000] ftype_list/351: FS 0xdc3733c0, ftype 0xe15bc080 xfs > [17179626.264000] ftype_list/353: Done ftype_list > [17179649.888000] Large kmem_alloc attempt, size=6144 > [17179649.888000] show_trace+0x20/0x30 > dump_stack+0x1e/0x20 > [17179649.888000] kmem_alloc+0x134/0x140 [xfs] > kmem_zalloc+0x1e/0x50 [xfs] > [17179649.888000] xfs_alloc_bufhash+0x48/0xd0 [xfs] > xfs_alloc_buftarg+0x63/0x90 [xfs] > [17179649.888000] xfs_mount+0x24c/0x730 [xfs] > vfs_mount+0x9b/0xb0 [xfs] > [17179649.892000] xfs_dm_mount+0x74/0x130 [xfs_dmapi] > vfs_mount+0x9b/0xb0 [xfs] > [17179649.892000] xfs_fs_fill_super+0x9a/0x230 [xfs] > get_sb_bdev+0x100/0x170 > [17179649.892000] xfs_fs_get_sb+0x2e/0x30 [xfs] > do_kern_mount+0x56/0xd0 > [17179649.892000] do_new_mount+0x58/0xb0 > do_mount+0x19f/0x1d0 > [17179649.892000] sys_mount+0x97/0xe0 > sysenter_past_esp+0x54/0x75 > [17179649.892000] Filesystem "md1": Disabling barriers, not supported by > the underlying device > [17179649.912000] XFS mounting filesystem md1 > [17179650.028000] Ending clean XFS mount for filesystem: md1 > [17179650.028000] ftype_list/348: Current ftype_list > [17179650.028000] ftype_list/351: FS 0xdc3733c0, ftype 0xe15bc080 xfs > [17179650.028000] ftype_list/353: Done ftype_list > [17179650.028000] sb_list/330: Current sb_list > [17179650.028000] sb_list/335: Done sb_list > [17179650.028000] ftype_list/348: Current ftype_list > [17179650.028000] ftype_list/351: FS 0xdc3733c0, ftype 0xe15bc080 xfs > [17179650.028000] ftype_list/353: Done ftype_list > [17179650.028000] DMAPI assertion failed: fstype, file: > fs/dmapi/dmapi_mountinfo.c, line: 280 > [17179650.028000] ------------[ cut here ]------------ > [17179650.028000] kernel BUG at fs/dmapi/dmapi_port.h:72! 
> [17179650.028000] invalid opcode: 0000 [#1] > [17179650.028000] PREEMPT > [17179650.028000] Modules linked in: xfs_dmapi xfs tuner saa7134 > video_buf compat_ioctl32 v4l2_common v4l1_compat ir_kbd_i2c ir_common > videodev ohci1394 ieee > 1394 pdc202xx_new ide_cd cdrom > [17179650.028000] CPU: 0 > [17179650.028000] EIP: 0060:[] Tainted: P VLI > [17179650.028000] EFLAGS: 00010296 (2.6.17-dmapi #1) > [17179650.028000] EIP is at dm_fsys_map_by_fstype+0x64/0x70 > [17179650.028000] eax: 00000061 ebx: 00000000 ecx: 00000000 edx: > 00000001 > [17179650.028000] esi: 00000000 edi: dc368d2c ebp: dc2ffc9c esp: > dc2ffc84 > [17179650.028000] ds: 007b es: 007b ss: 0068 > [17179650.028000] Process mount (pid: 2836, threadinfo=dc2fe000 > task=df97c0b0) > [17179650.028000] Stack: c0554fb8 c054f4ac c054f488 00000118 dc368d2c > 00000000 dc2ffcc8 c019f846 > [17179650.028000] 00000000 c019f9ac c054f501 c051e534 00000161 > dc3733c0 e15bc080 dc368d2c > [17179650.028000] 00000000 dc2ffcf4 c019faa0 e1574680 dfeef4c4 > dc2ffd08 c015ceca c14b0960 > [17179650.028000] Call Trace: > [17179650.028000] show_stack_log_lvl+0x90/0xc0 > show_registers+0x1a3/0x220 > [17179650.028000] die+0x118/0x240 > do_trap+0x87/0xd0 > [17179650.028000] do_invalid_op+0xb5/0xc0 > error_code+0x4f/0x54 > [17179650.028000] sb_list+0x16/0xf0 > dm_fsys_ops+0x30/0x1e0 > [17179650.028000] dm_ip_to_handle+0x20/0x100 > dm_ip_data+0xa9/0x110 > [17179650.028000] dm_send_mount_event+0x72/0x430 > xfs_dm_mount+0x12c/0x130 [xfs_dmapi] > [17179650.028000] vfs_mount+0x9b/0xb0 [xfs] > xfs_fs_fill_super+0x9a/0x230 [xfs] > [17179650.028000] get_sb_bdev+0x100/0x170 > xfs_fs_get_sb+0x2e/0x30 [xfs] > [17179650.028000] do_kern_mount+0x56/0xd0 > do_new_mount+0x58/0xb0 > [17179650.028000] do_mount+0x19f/0x1d0 > sys_mount+0x97/0xe0 > [17179650.028000] sysenter_past_esp+0x54/0x75 > [17179650.028000] Code: c6 5b 89 f0 5e c9 c3 c7 44 24 0c 18 01 00 00 c7 > 44 24 08 88 f4 54 c0 c7 44 24 04 ac f4 54 c0 c7 04 24 b8 4f 55 c0 e8 8c > da f7 ff <0f> > 
0b 48 00 72 f4 54 c0 eb a3 89 f6 55 89 e5 56 31 f6 53 83 ec > [17179650.028000] EIP: [] dm_fsys_map_by_fstype+0x64/0x70 > SS:ESP 0068:dc2ffc84 > [17179650.028000] <6>note: mount[2836] exited with preempt_count 1 > [17179652.952000] BUG: sleeping function called from invalid context at > include/linux/rwsem.h:43 > [17179652.952000] in_atomic():1, irqs_disabled():0 > [17179652.952000] show_trace+0x20/0x30 > dump_stack+0x1e/0x20 > [17179652.952000] __might_sleep+0xa1/0xc0 > exit_mm+0x3c/0x140 > [17179652.952000] do_exit+0xda/0x460 > die+0x23a/0x240 > [17179652.952000] do_trap+0x87/0xd0 > do_invalid_op+0xb5/0xc0 > [17179652.952000] error_code+0x4f/0x54 > sb_list+0x16/0xf0 > [17179652.952000] dm_fsys_ops+0x30/0x1e0 > dm_ip_to_handle+0x20/0x100 > [17179652.952000] dm_ip_data+0xa9/0x110 > dm_send_mount_event+0x72/0x430 > [17179652.952000] xfs_dm_mount+0x12c/0x130 [xfs_dmapi] > vfs_mount+0x9b/0xb0 [xfs] > [17179652.952000] xfs_fs_fill_super+0x9a/0x230 [xfs] > get_sb_bdev+0x100/0x170 > [17179652.952000] xfs_fs_get_sb+0x2e/0x30 [xfs] > do_kern_mount+0x56/0xd0 > [17179652.956000] do_new_mount+0x58/0xb0 > do_mount+0x19f/0x1d0 > [17179652.956000] sys_mount+0x97/0xe0 > sysenter_past_esp+0x54/0x75 > > > I used "mount /dev/md1 -o dmapi,mtpt=/mnt /mnt" to try and mount the > filesystem in all the cases (which worked great back in 2.4.25). > > > I would appreciate any help. 
> > Regards > Paul Schutte > From owner-xfs@oss.sgi.com Mon Sep 25 17:18:20 2006 Received: with ECARTIS (v1.0.0; list xfs); Mon, 25 Sep 2006 17:18:29 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8Q0IJaG021198 for ; Mon, 25 Sep 2006 17:18:20 -0700 X-ASG-Debug-ID: 1159229859-31150-32-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from smtp101.sbc.mail.mud.yahoo.com (smtp101.sbc.mail.mud.yahoo.com [68.142.198.200]) by cuda.sgi.com (Spam Firewall) with SMTP id 3488045F730 for ; Mon, 25 Sep 2006 17:17:39 -0700 (PDT) Received: (qmail 5047 invoked from network); 26 Sep 2006 00:17:39 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@71.202.63.228 with login) by smtp101.sbc.mail.mud.yahoo.com with SMTP; 26 Sep 2006 00:17:39 -0000 Received: by tuatara.stupidest.org (Postfix, from userid 10000) id 03FD118079B1; Mon, 25 Sep 2006 17:17:37 -0700 (PDT) Date: Mon, 25 Sep 2006 17:17:37 -0700 From: Chris Wedgwood To: Rene Salmon Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: LVM and XFS cannot set blocksize on block device Subject: Re: LVM and XFS cannot set blocksize on block device Message-ID: <20060926001737.GA10224@tuatara.stupidest.org> References: <45185424.2030707@tulane.edu> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45185424.2030707@tulane.edu> X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21931 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9087 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs Content-Length: 198 
Lines: 7 On Mon, Sep 25, 2006 at 05:11:48PM -0500, Rene Salmon wrote: > Warning: the data subvolume sector size 512 is less than the sector size > reported by the device (4096). does "-s size=4096" help? From owner-xfs@oss.sgi.com Tue Sep 26 06:59:56 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Sep 2006 07:00:02 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8QDxtaG017547 for ; Tue, 26 Sep 2006 06:59:56 -0700 X-ASG-Debug-ID: 1159279155-28564-946-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from tricca.tcs.tulane.edu (tricca.tcs.tulane.edu [129.81.224.27]) by cuda.sgi.com (Spam Firewall) with ESMTP id CCDB645EC53 for ; Tue, 26 Sep 2006 06:59:15 -0700 (PDT) Received: from tricca.tcs.tulane.edu (localhost.localdomain [127.0.0.1]) by tricca.tcs.tulane.edu (8.13.6/8.13.6) with ESMTP id k8QDwbMf011609; Tue, 26 Sep 2006 08:58:37 -0500 Received: from olympus.tcs.tulane.edu (olympus.tcs.tulane.edu [129.81.224.6] (may be forged)) by tricca.tcs.tulane.edu (8.13.6/8.12.8) with ESMTP id k8QDwbYE011604; Tue, 26 Sep 2006 08:58:37 -0500 Received: from [129.81.113.244] (localhost [127.0.0.1]) (authenticated bits=0) by olympus.tcs.tulane.edu (8.13.6/8.13.6) with ESMTP id k8QDwSnH002491; Tue, 26 Sep 2006 08:58:33 -0500 (CDT) Message-ID: <45193204.3030500@tulane.edu> Date: Tue, 26 Sep 2006 08:58:28 -0500 From: Rene Salmon User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: Chris Wedgwood CC: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: LVM and XFS cannot set blocksize on block device Subject: Re: LVM and XFS cannot set blocksize on block device References: <45185424.2030707@tulane.edu> <20060926001737.GA10224@tuatara.stupidest.org> In-Reply-To: <20060926001737.GA10224@tuatara.stupidest.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 
using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=
X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21970 Rule breakdown below pts rule name description ---- ---------------------- --------------------------------------------------
X-archive-position: 9089
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: rsalmon@tulane.edu
Precedence: bulk
X-list: xfs
Content-Length: 1536
Lines: 52

Hi,

Thanks for the reply. The "-s size=4096" helped; I was able to create the file system, then mount and use it. I did, however, still get a warning about "cannot set blocksize on block device". Everything seems to be working, but I am a bit worried about the warning message, which follows. Any idea whether it is safe to ignore, or how to get rid of it?

helix-priv:~ # mkfs.xfs -s size=4096 -f /dev/vg_u00/lv_u00
mkfs.xfs: warning - cannot set blocksize on block device /dev/vg_u00/lv_u00: Invalid argument
meta-data=/dev/vg_u00/lv_u00   isize=256    agcount=32, agsize=11489280 blks
         =                     sectsz=4096  attr=0
data     =                     bsize=4096   blocks=367656960, imaxpct=25
         =                     sunit=0      swidth=0 blks, unwritten=1
naming   =version 2            bsize=4096
log      =internal log         bsize=4096   blocks=32768, version=2
         =                     sectsz=4096  sunit=1 blks
realtime =none                 extsz=65536  blocks=0, rtextents=0

Thanks
Rene

Chris Wedgwood wrote:
> On Mon, Sep 25, 2006 at 05:11:48PM -0500, Rene Salmon wrote:
>
>> Warning: the data subvolume sector size 512 is less than the sector size
>> reported by the device (4096).
>
> does "-s size=4096" help?
-- - -- Rene Salmon Tulane University Center for Computational Science http://www.ccs.tulane.edu rsalmon@tulane.edu Tel 504-862-8393 Tel 504-988-8552 Fax 504-862-8392 From owner-xfs@oss.sgi.com Tue Sep 26 13:07:37 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Sep 2006 13:07:48 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8QK7aaG002239 for ; Tue, 26 Sep 2006 13:07:37 -0700 X-ASG-Debug-ID: 1159296862-30952-145-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.max-t.com (h216-18-124-229.gtcust.grouptelecom.net [216.18.124.229]) by cuda.sgi.com (Spam Firewall) with ESMTP id 4042AD118FE5 for ; Tue, 26 Sep 2006 11:54:22 -0700 (PDT) Received: from madrid.max-t.internal ([192.168.1.189] ident=[U2FsdGVkX19dVT9CZrn2XCLfH1i8jpfuYCsnGLnAtB8=]) by mail.max-t.com with esmtp (Exim 4.43) id 1GSI4N-0004Zc-Lc; Tue, 26 Sep 2006 14:54:16 -0400 Date: Tue, 26 Sep 2006 14:51:45 -0400 (EDT) From: Stephane Doyon X-X-Sender: sdoyon@madrid.max-t.internal To: xfs@oss.sgi.com, nfs@lists.sourceforge.net Message-ID: MIME-Version: 1.0 X-SA-Exim-Connect-IP: 192.168.1.189 X-SA-Exim-Mail-From: sdoyon@max-t.com X-ASG-Orig-Subj: Long sleep with i_mutex in xfs_flush_device(), affects NFS service Subject: Long sleep with i_mutex in xfs_flush_device(), affects NFS service Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-SA-Exim-Version: 4.1 (built Thu, 08 Sep 2005 14:17:48 -0500) X-SA-Exim-Scanned: Yes (on mail.max-t.com) X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21986 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9092 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: 
xfs-bounce@oss.sgi.com
X-original-sender: sdoyon@max-t.com
Precedence: bulk
X-list: xfs
Content-Length: 2458
Lines: 57

Hi,

I'm seeing an unpleasant behavior when an XFS file system becomes full, particularly when accessed over NFS. Both XFS and the Linux NFS client appear to be contributing to the problem.

When the file system becomes nearly full, we eventually call down to xfs_flush_device(), which sleeps for 0.5 seconds, waiting for xfssyncd to do some work.

xfs_flush_space() does
	xfs_iunlock(ip, XFS_ILOCK_EXCL);
before calling xfs_flush_device(), but i_mutex is still held, at least when we're being called from under xfs_write(). That seems like a fairly long time to hold a mutex. And I wonder whether it's really necessary to keep going through that again and again for every new request after we've hit ENOSPC.

In particular this can cause a pileup when several threads are writing concurrently to the same file. Some specialized apps might do that, and nfsd threads do it all the time.

To reproduce locally, on a full file system:

#!/bin/sh
for i in `seq 30`; do
	dd if=/dev/zero of=f bs=1 count=1 &
done
wait

Time that: it takes almost exactly 15 seconds (30 writers serialized behind a 0.5-second sleep each).

The Linux NFS client typically sends batches of 16 requests, so if the client is writing a single file, some NFS requests are therefore delayed by up to 8 seconds, which is kind of long for NFS.

What's worse, when my Linux NFS client writes out a file's pages, it does not react immediately on receiving an ENOSPC error. It will remember the error and report it later on close(), but it still issues write requests for each page of the file. So even if there isn't a pileup on the i_mutex on the server, the NFS client still waits 0.5 seconds for each 32 KB (typically) request. So on an NFS client on a gigabit network, on an already-full filesystem, if I open and write a 10 MB file and close() it, it takes 2m40.083s to issue all the requests, get an ENOSPC for each, and finally have my close() call return ENOSPC.
That can stretch to several hours for gigabyte-sized files, which is how I noticed the problem.

I'm not too familiar with the NFS client code, but would it not be possible for it to give up when it encounters ENOSPC? Or is there some reason why this wouldn't be desirable?

The rough workaround I have come up with for the problem is to have xfs_flush_space() skip calling xfs_flush_device() if we are within 2 seconds of having returned ENOSPC. I have verified that this workaround is effective, but I imagine there might be a cleaner solution.

Thanks

From owner-xfs@oss.sgi.com Tue Sep 26 13:07:38 2006
Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Sep 2006 13:07:48 -0700 (PDT)
Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8QK7baG002256 for ; Tue, 26 Sep 2006 13:07:38 -0700
X-ASG-Debug-ID: 1159297591-30950-435-0
X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi
Received: from pat.uio.no (pat.uio.no [129.240.10.4]) by cuda.sgi.com (Spam Firewall) with ESMTP id 28657D118FEA for ; Tue, 26 Sep 2006 12:06:32 -0700 (PDT)
Received: from mail-mx7.uio.no ([129.240.10.52]) by pat.uio.no with esmtp (Exim 4.43) id 1GSIGE-0004Nf-0F; Tue, 26 Sep 2006 21:06:30 +0200
Received: from dh141.citi.umich.edu ([141.211.133.141]) by mail-mx7.uio.no with esmtpsa (SSLv3:RC4-MD5:128) (Exim 4.43) id 1GSIGA-0000HO-2Z; Tue, 26 Sep 2006 21:06:26 +0200
X-ASG-Orig-Subj: Re: [NFS] Long sleep with i_mutex in xfs_flush_device(), affects NFS service
Subject: Re: [NFS] Long sleep with i_mutex in xfs_flush_device(), affects NFS service
From: Trond Myklebust
To: Stephane Doyon
Cc: xfs@oss.sgi.com, nfs@lists.sourceforge.net
In-Reply-To:
References:
Content-Type: text/plain
Date: Tue, 26 Sep 2006 15:06:19 -0400
Message-Id: <1159297579.5492.21.camel@lade.trondhjem.org>
Mime-Version: 1.0
X-Mailer: Evolution 2.8.0
Content-Transfer-Encoding: 7bit
X-UiO-Spam-info: not spam, SpamAssassin (score=-3.553, required 12, autolearn=disabled,
AWL 1.45, UIO_MAIL_IS_INTERNAL -5.00) X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21986 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9091 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: trond.myklebust@fys.uio.no Precedence: bulk X-list: xfs Content-Length: 2605 Lines: 60 On Tue, 2006-09-26 at 14:51 -0400, Stephane Doyon wrote: > Hi, > > I'm seeing an unpleasant behavior when an XFS file system becomes full, > particularly when accessed over NFS. Both XFS and the linux NFS client > appear to be contributing to the problem. > > When the file system becomes nearly full, we eventually call down to > xfs_flush_device(), which sleeps for 0.5seconds, waiting for xfssyncd to > do some work. > > xfs_flush_space()does > xfs_iunlock(ip, XFS_ILOCK_EXCL); > before calling xfs_flush_device(), but i_mutex is still held, at least > when we're being called from under xfs_write(). It seems like a fairly > long time to hold a mutex. And I wonder whether it's really necessary to > keep going through that again and again for every new request after we've > hit NOSPC. > > In particular this can cause a pileup when several threads are writing > concurrently to the same file. Some specialized apps might do that, and > nfsd threads do it all the time. > > To reproduce locally, on a full file system: > #!/bin/sh > for i in `seq 30`; do > dd if=/dev/zero of=f bs=1 count=1 & > done > wait > time that, it takes nearly exactly 15s. > > The linux NFS client typically sends bunches of 16 requests, and so if the > client is writing a single file, some NFS requests are therefore delayed > by up to 8seconds, which is kind of long for NFS. Why? 
The file is still open, and so the standard close-to-open rules state that you are not guaranteed that the cache will be flushed unless the VM happens to want to reclaim memory. > What's worse, when my linux NFS client writes out a file's pages, it does > not react immediately on receiving a NOSPC error. It will remember and > report the error later on close(), but it still tries and issues write > requests for each page of the file. So even if there isn't a pileup on the > i_mutex on the server, the NFS client still waits 0.5s for each 32K > (typically) request. So on an NFS client on a gigabit network, on an > already full filesystem, if I open and write a 10M file and close() it, it > takes 2m40.083s for it to issue all the requests, get an NOSPC for each, > and finally have my close() call return ENOSPC. That can stretch to > several hours for gigabyte-sized files, which is how I noticed the > problem. > > I'm not too familiar with the NFS client code, but would it not be > possible for it to give up when it encounters NOSPC? Or is there some > reason why this wouldn't be desirable? How would it then detect that you have fixed the problem on the server? 
Cheers, Trond From owner-xfs@oss.sgi.com Tue Sep 26 13:08:57 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Sep 2006 13:09:03 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8QK8vaG002507 for ; Tue, 26 Sep 2006 13:08:57 -0700 X-ASG-Debug-ID: 1159301294-633-577-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mail.max-t.com (h216-18-124-229.gtcust.grouptelecom.net [216.18.124.229]) by cuda.sgi.com (Spam Firewall) with ESMTP id CE42845F4CD for ; Tue, 26 Sep 2006 13:08:14 -0700 (PDT) Received: from madrid.max-t.internal ([192.168.1.189] ident=[U2FsdGVkX1/Gm/UTPXR1vswuJ8T5jnFEfB0MTMi9L4w=]) by mail.max-t.com with esmtp (Exim 4.43) id 1GSJDv-0005pP-NW; Tue, 26 Sep 2006 16:08:12 -0400 Date: Tue, 26 Sep 2006 16:05:41 -0400 (EDT) From: Stephane Doyon X-X-Sender: sdoyon@madrid.max-t.internal To: Trond Myklebust cc: xfs@oss.sgi.com, nfs@lists.sourceforge.net In-Reply-To: <1159297579.5492.21.camel@lade.trondhjem.org> Message-ID: References: <1159297579.5492.21.camel@lade.trondhjem.org> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 192.168.1.189 X-SA-Exim-Mail-From: sdoyon@max-t.com X-ASG-Orig-Subj: Re: [NFS] Long sleep with i_mutex in xfs_flush_device(), affects NFS service Subject: Re: [NFS] Long sleep with i_mutex in xfs_flush_device(), affects NFS service Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-SA-Exim-Version: 4.1 (built Thu, 08 Sep 2005 14:17:48 -0500) X-SA-Exim-Scanned: Yes (on mail.max-t.com) X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21988 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9093 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: 
xfs-bounce@oss.sgi.com X-original-sender: sdoyon@max-t.com Precedence: bulk X-list: xfs Content-Length: 3518 Lines: 69 On Tue, 26 Sep 2006, Trond Myklebust wrote: [...] >> When the file system becomes nearly full, we eventually call down to >> xfs_flush_device(), which sleeps for 0.5seconds, waiting for xfssyncd to >> do some work. >> >> xfs_flush_space()does >> xfs_iunlock(ip, XFS_ILOCK_EXCL); >> before calling xfs_flush_device(), but i_mutex is still held, at least >> when we're being called from under xfs_write(). It seems like a fairly >> long time to hold a mutex. And I wonder whether it's really necessary to >> keep going through that again and again for every new request after we've >> hit NOSPC. >> >> In particular this can cause a pileup when several threads are writing >> concurrently to the same file. Some specialized apps might do that, and >> nfsd threads do it all the time. [...] >> The linux NFS client typically sends bunches of 16 requests, and so if the >> client is writing a single file, some NFS requests are therefore delayed >> by up to 8seconds, which is kind of long for NFS. > > Why? The file is still open, and so the standard close-to-open rules > state that you are not guaranteed that the cache will be flushed unless > the VM happens to want to reclaim memory. I mean there will be a delay on the server, in responding to the requests. Sorry for the confusion. When the NFS client does flush its cache, each request will take an extra 0.5s to execute on the server, and the i_mutex will prevent their parallel execution on the server. >> What's worse, when my linux NFS client writes out a file's pages, it does >> not react immediately on receiving a NOSPC error. It will remember and >> report the error later on close(), but it still tries and issues write >> requests for each page of the file. So even if there isn't a pileup on the >> i_mutex on the server, the NFS client still waits 0.5s for each 32K >> (typically) request. 
So on an NFS client on a gigabit network, on an
>> already full filesystem, if I open and write a 10M file and close() it, it
>> takes 2m40.083s for it to issue all the requests, get an NOSPC for each,
>> and finally have my close() call return ENOSPC. That can stretch to
>> several hours for gigabyte-sized files, which is how I noticed the
>> problem.
>>
>> I'm not too familiar with the NFS client code, but would it not be
>> possible for it to give up when it encounters NOSPC? Or is there some
>> reason why this wouldn't be desirable?
>
> How would it then detect that you have fixed the problem on the server?

I suppose it has to try again at some point. Yet when flushing a file, if even one write request gets an error response like ENOSPC, we know some part of the data has not been written on the server, and close() will return the appropriate error to the program on the client. If a single write error is enough to cause close() to return an error, why bother sending all the other write requests for that file? If we get an error while flushing, couldn't that one flushing operation bail out early?

As I said, I'm not too familiar with the code, but AFAICT nfs_wb_all() will keep flushing everything, and afterwards nfs_file_flush() will check ctx->error. Perhaps ctx->error could be checked at some lower level, maybe in nfs_sync_inode_wait...

I suppose it's not technically wrong to try to flush all the pages of the file, but if the server file system is full then that is exactly when it will be at its worst. Also, if you happen to be on a slower link and have a big cache to flush, you're waiting around for very little gain.
From owner-xfs@oss.sgi.com Tue Sep 26 13:15:14 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Sep 2006 13:15:23 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8QKFBaG004407 for ; Tue, 26 Sep 2006 13:15:14 -0700 X-ASG-Debug-ID: 1159301670-30151-998-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.vodamail.co.za (mx1.vodamail.co.za [196.11.146.148]) by cuda.sgi.com (Spam Firewall) with ESMTP id CE1D345F41A for ; Tue, 26 Sep 2006 13:14:30 -0700 (PDT) Received: from [10.50.194.162] (unknown [10.50.194.162]) by mx1.vodamail.co.za (Postfix) with ESMTP id 61DCAD3C2A; Tue, 26 Sep 2006 22:14:01 +0200 (SAST) Message-ID: <4519840F.5070206@up.ac.za> Date: Tue, 26 Sep 2006 21:48:31 +0200 From: Paul Schutte User-Agent: Debian Thunderbird 1.0.2 (X11/20060804) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Vlad Apostolov CC: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: DMAPI problem on the cvs tree of (2.6.17) SGI-XFS CVS-2006-08-26 Subject: Re: DMAPI problem on the cvs tree of (2.6.17) SGI-XFS CVS-2006-08-26 References: <45181FF0.7030105@up.ac.za> <451870A2.6060406@sgi.com> In-Reply-To: <451870A2.6060406@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.50 X-Barracuda-Spam-Status: No, SCORE=0.50 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=BSF_RULE7568M X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21988 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.50 BSF_RULE7568M BODY: Custom Rule 7568M X-archive-position: 9094 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: paul@up.ac.za Precedence: bulk X-list: xfs Content-Length: 11818 Lines: 264 Thanks a lot. It worked. 
I just had to figure out why the dmapi would'nt initialize. The source provided as always... It turned out that I did'nt have a /dev/dmapi. I was using a static /dev and now I HAVE to run the udevd ( wonder how long before my /dev/null and /dev/random dissapear again ;-) Thanks for the help Paul Vlad Apostolov wrote: > Hi Paul, > > To get the 2.6.17 going please turn off CONFIG_XFS_TRACE in .config and > rebuild > the kernel and the modules. This problem is on my todo list for > investigation. > > Regards, > Vlad > > Paul Schutte wrote: > >> Hi, >> >> I am trying to get the DMAPI stuff going on xfs again. I last played >> with it back in the 2.4.25 kernel and it worked great. >> I now want to resume the work that I started back then, but are unable >> to get the dmapi to work on any recent kernel. I tried both 2.4.33 from >> the cvs and 2.6.17 from the cvs. (Both checked out on 2006-08-26). >> >> 2.4.33 was unable to compile. >> >> make[3]: Entering directory `/mnt/linux-2.4-xfs/fs/dmapi' >> gcc -D__KERNEL__ -I/mnt/linux-2.4-xfs/include -Wall -Wstrict-prototypes >> -Wno-trigraphs -O2 -fno-strict-aliasing -fno-common -fomit-frame-pointer >> -pipe -mpreferred-stack-boundary=2 -march=athlon -fno-unit-at-a-time >> -fno-optimize-sibling-calls -DDEBUG -g -nostdinc -iwithprefix include >> -DKBUILD_BASENAME=dmapi_sysent -DEXPORT_SYMTAB -c dmapi_sysent.c >> dmapi_sysent.c:54: error: conflicting types for 'dm_fsreg_cachep' >> dmapi_private.h:44: error: previous declaration of 'dm_fsreg_cachep' was >> here >> >> The following fixed that: >> ---------------------------------------------------------------------- >> --- /usr/src/linux-2.4-xfs/fs/dmapi/dmapi_private.h 2005-12-05 >> 22:35:19.000000000 +0200 >> +++ linux-2.4-xfs/fs/dmapi/dmapi_private.h 2006-09-25 >> 18:17:56.047674000 +0200 >> @@ -41,11 +41,11 @@ >> #define DMAPI_DBG_PROCFS "fs/dmapi_d" /* DMAPI debugging dir */ >> #endif >> >> -extern struct kmem_cache *dm_fsreg_cachep; >> -extern struct kmem_cache 
*dm_tokdata_cachep; >> -extern struct kmem_cache *dm_session_cachep; >> -extern struct kmem_cache *dm_fsys_map_cachep; >> -extern struct kmem_cache *dm_fsys_vptr_cachep; >> +extern kmem_cache_t *dm_fsreg_cachep; >> +extern kmem_cache_t *dm_tokdata_cachep; >> +extern kmem_cache_t *dm_session_cachep; >> +extern kmem_cache_t *dm_fsys_map_cachep; >> +extern kmem_cache_t *dm_fsys_vptr_cachep; >> >> typedef struct dm_tokdata { >> struct dm_tokdata *td_next; >> -------------------------------------------------------------------- >> >> Then I had hit: >> >> make[4]: Entering directory `/mnt/linux-2.4-xfs/fs/xfs/dmapi' >> gcc -D__KERNEL__ -I/mnt/linux-2.4-xfs/include -Wall -Wstrict-prototypes >> -Wno-trigraphs -O2 -fno-strict-aliasing -fno-common -fomit-frame-pointer >> -pipe -mpreferred-stack-boundary=2 -march=athlon -fno-unit-at-a-time >> -fno-optimize-sibling-calls -I /mnt/linux-2.4-xfs/fs/xfs -I >> /mnt/linux-2.4-xfs/fs/xfs/linux-2.4 -I /mnt/linux-2.4-xfs/fs/dmapi >> -nostdinc -iwithprefix include -DKBUILD_BASENAME=xfs_dm -c -o xfs_dm.o >> xfs_dm.c >> In file included from /mnt/linux-2.4-xfs/fs/xfs/xfs.h:25, >> from xfs_dm.c:18: >> /mnt/linux-2.4-xfs/fs/xfs/linux-2.4/xfs_linux.h:35:18: warning: extra >> tokens at end of #undef directive >> xfs_dm.c: In function `xfs_dm_set_fileattr': >> xfs_dm.c:2913: error: request for member `tv_sec' in something not a >> structure or union >> make[4]: *** [xfs_dm.o] Error 1 >> make[4]: Leaving directory `/mnt/linux-2.4-xfs/fs/xfs/dmapi' >> make[3]: *** [first_rule] Error 2 >> make[3]: Leaving directory `/mnt/linux-2.4-xfs/fs/xfs/dmapi' >> make[2]: *** [_subdir_dmapi] Error 2 >> make[2]: Leaving directory `/mnt/linux-2.4-xfs/fs/xfs' >> make[1]: *** [_subdir_xfs] Error 2 >> make[1]: Leaving directory `/mnt/linux-2.4-xfs/fs' >> make: *** [_dir_fs] Error 2 >> >> I then did: (which I know is not a proper fix, but I was desperate to >> get it to compile and was'nt too concerned about atime problems) >> >> 
---------------------------------------------------------------------- >> -------------------- /usr/src/linux-2.4-xfs/fs/xfs/dmapi/xfs_dm.c >> 2006-08-26 16:01:18.000000000 +0200 >> +++ linux-2.4-xfs/fs/xfs/dmapi/xfs_dm.c 2006-09-25 18:29:49.836283000 >> +0200 >> @@ -2910,7 +2910,7 @@ >> vat.va_mask |= XFS_AT_ATIME; >> vat.va_atime.tv_sec = stat.fa_atime; >> vat.va_atime.tv_nsec = 0; >> - inode->i_atime.tv_sec = stat.fa_atime; >> +// inode->i_atime.tv_sec = stat.fa_atime; >> } >> if (mask & DM_AT_MTIME) { >> vat.va_mask |= XFS_AT_MTIME; >> ---------------------------------------------------------------------- >> >> It did compile then, but could not mount a filesystem with dmapi. >> Unfortunatedly I did'nt save the kernel output. >> >> The 2.6.17 compiled cleanly, but also could not mount. >> I got the following dump: >> >> >> [17179626.228000] kobject xfs: registering. parent: , set: module >> [17179626.228000] kobject_uevent >> [17179626.228000] fill_kobj_path: path = '/module/xfs' >> [17179626.244000] SGI-XFS CVS-2006-08-26_07:00_UTC with ACLs, security >> attributes, realtime, large block numbers, tracing, debug enabled >> [17179626.248000] xfs_dmapi: module license 'unspecified' taints kernel. >> [17179626.252000] kobject xfs_dmapi: registering. 
parent: , set: >> module >> [17179626.252000] kobject_uevent >> [17179626.252000] fill_kobj_path: path = '/module/xfs_dmapi' >> [17179626.264000] SGI XFS Data Management API subsystem >> [17179626.264000] ftype_list/348: Current ftype_list >> [17179626.264000] ftype_list/351: FS 0xdc3733c0, ftype 0xe15bc080 xfs >> [17179626.264000] ftype_list/353: Done ftype_list >> [17179649.888000] Large kmem_alloc attempt, size=6144 >> [17179649.888000] show_trace+0x20/0x30 >> dump_stack+0x1e/0x20 >> [17179649.888000] kmem_alloc+0x134/0x140 [xfs] >> kmem_zalloc+0x1e/0x50 [xfs] >> [17179649.888000] xfs_alloc_bufhash+0x48/0xd0 [xfs] >> xfs_alloc_buftarg+0x63/0x90 [xfs] >> [17179649.888000] xfs_mount+0x24c/0x730 [xfs] >> vfs_mount+0x9b/0xb0 [xfs] >> [17179649.892000] xfs_dm_mount+0x74/0x130 [xfs_dmapi] >> vfs_mount+0x9b/0xb0 [xfs] >> [17179649.892000] xfs_fs_fill_super+0x9a/0x230 [xfs] >> get_sb_bdev+0x100/0x170 >> [17179649.892000] xfs_fs_get_sb+0x2e/0x30 [xfs] >> do_kern_mount+0x56/0xd0 >> [17179649.892000] do_new_mount+0x58/0xb0 >> do_mount+0x19f/0x1d0 >> [17179649.892000] sys_mount+0x97/0xe0 >> sysenter_past_esp+0x54/0x75 >> [17179649.892000] Filesystem "md1": Disabling barriers, not supported by >> the underlying device >> [17179649.912000] XFS mounting filesystem md1 >> [17179650.028000] Ending clean XFS mount for filesystem: md1 >> [17179650.028000] ftype_list/348: Current ftype_list >> [17179650.028000] ftype_list/351: FS 0xdc3733c0, ftype 0xe15bc080 xfs >> [17179650.028000] ftype_list/353: Done ftype_list >> [17179650.028000] sb_list/330: Current sb_list >> [17179650.028000] sb_list/335: Done sb_list >> [17179650.028000] ftype_list/348: Current ftype_list >> [17179650.028000] ftype_list/351: FS 0xdc3733c0, ftype 0xe15bc080 xfs >> [17179650.028000] ftype_list/353: Done ftype_list >> [17179650.028000] DMAPI assertion failed: fstype, file: >> fs/dmapi/dmapi_mountinfo.c, line: 280 >> [17179650.028000] ------------[ cut here ]------------ >> [17179650.028000] kernel BUG at 
fs/dmapi/dmapi_port.h:72! >> [17179650.028000] invalid opcode: 0000 [#1] >> [17179650.028000] PREEMPT >> [17179650.028000] Modules linked in: xfs_dmapi xfs tuner saa7134 >> video_buf compat_ioctl32 v4l2_common v4l1_compat ir_kbd_i2c ir_common >> videodev ohci1394 ieee >> 1394 pdc202xx_new ide_cd cdrom >> [17179650.028000] CPU: 0 >> [17179650.028000] EIP: 0060:[] Tainted: P VLI >> [17179650.028000] EFLAGS: 00010296 (2.6.17-dmapi #1) >> [17179650.028000] EIP is at dm_fsys_map_by_fstype+0x64/0x70 >> [17179650.028000] eax: 00000061 ebx: 00000000 ecx: 00000000 edx: >> 00000001 >> [17179650.028000] esi: 00000000 edi: dc368d2c ebp: dc2ffc9c esp: >> dc2ffc84 >> [17179650.028000] ds: 007b es: 007b ss: 0068 >> [17179650.028000] Process mount (pid: 2836, threadinfo=dc2fe000 >> task=df97c0b0) >> [17179650.028000] Stack: c0554fb8 c054f4ac c054f488 00000118 dc368d2c >> 00000000 dc2ffcc8 c019f846 >> [17179650.028000] 00000000 c019f9ac c054f501 c051e534 00000161 >> dc3733c0 e15bc080 dc368d2c >> [17179650.028000] 00000000 dc2ffcf4 c019faa0 e1574680 dfeef4c4 >> dc2ffd08 c015ceca c14b0960 >> [17179650.028000] Call Trace: >> [17179650.028000] show_stack_log_lvl+0x90/0xc0 >> show_registers+0x1a3/0x220 >> [17179650.028000] die+0x118/0x240 >> do_trap+0x87/0xd0 >> [17179650.028000] do_invalid_op+0xb5/0xc0 >> error_code+0x4f/0x54 >> [17179650.028000] sb_list+0x16/0xf0 >> dm_fsys_ops+0x30/0x1e0 >> [17179650.028000] dm_ip_to_handle+0x20/0x100 >> dm_ip_data+0xa9/0x110 >> [17179650.028000] dm_send_mount_event+0x72/0x430 >> xfs_dm_mount+0x12c/0x130 [xfs_dmapi] >> [17179650.028000] vfs_mount+0x9b/0xb0 [xfs] >> xfs_fs_fill_super+0x9a/0x230 [xfs] >> [17179650.028000] get_sb_bdev+0x100/0x170 >> xfs_fs_get_sb+0x2e/0x30 [xfs] >> [17179650.028000] do_kern_mount+0x56/0xd0 >> do_new_mount+0x58/0xb0 >> [17179650.028000] do_mount+0x19f/0x1d0 >> sys_mount+0x97/0xe0 >> [17179650.028000] sysenter_past_esp+0x54/0x75 >> [17179650.028000] Code: c6 5b 89 f0 5e c9 c3 c7 44 24 0c 18 01 00 00 c7 >> 44 24 08 88 f4 
54 c0 c7 44 24 04 ac f4 54 c0 c7 04 24 b8 4f 55 c0 e8 8c >> da f7 ff <0f> >> 0b 48 00 72 f4 54 c0 eb a3 89 f6 55 89 e5 56 31 f6 53 83 ec >> [17179650.028000] EIP: [] dm_fsys_map_by_fstype+0x64/0x70 >> SS:ESP 0068:dc2ffc84 >> [17179650.028000] <6>note: mount[2836] exited with preempt_count 1 >> [17179652.952000] BUG: sleeping function called from invalid context at >> include/linux/rwsem.h:43 >> [17179652.952000] in_atomic():1, irqs_disabled():0 >> [17179652.952000] show_trace+0x20/0x30 >> dump_stack+0x1e/0x20 >> [17179652.952000] __might_sleep+0xa1/0xc0 >> exit_mm+0x3c/0x140 >> [17179652.952000] do_exit+0xda/0x460 >> die+0x23a/0x240 >> [17179652.952000] do_trap+0x87/0xd0 >> do_invalid_op+0xb5/0xc0 >> [17179652.952000] error_code+0x4f/0x54 >> sb_list+0x16/0xf0 >> [17179652.952000] dm_fsys_ops+0x30/0x1e0 >> dm_ip_to_handle+0x20/0x100 >> [17179652.952000] dm_ip_data+0xa9/0x110 >> dm_send_mount_event+0x72/0x430 >> [17179652.952000] xfs_dm_mount+0x12c/0x130 [xfs_dmapi] >> vfs_mount+0x9b/0xb0 [xfs] >> [17179652.952000] xfs_fs_fill_super+0x9a/0x230 [xfs] >> get_sb_bdev+0x100/0x170 >> [17179652.952000] xfs_fs_get_sb+0x2e/0x30 [xfs] >> do_kern_mount+0x56/0xd0 >> [17179652.956000] do_new_mount+0x58/0xb0 >> do_mount+0x19f/0x1d0 >> [17179652.956000] sys_mount+0x97/0xe0 >> sysenter_past_esp+0x54/0x75 >> >> >> I used "mount /dev/md1 -o dmapi,mtpt=/mnt /mnt" to try and mount the >> filesystem in all the cases (which worked great back in 2.4.25). >> >> >> I would appreciate any help. 
>> >> Regards >> Paul Schutte >> > From owner-xfs@oss.sgi.com Tue Sep 26 13:30:44 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Sep 2006 13:30:50 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8QKUhaG006549 for ; Tue, 26 Sep 2006 13:30:44 -0700 X-ASG-Debug-ID: 1159302604-24008-972-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from pat.uio.no (pat.uio.no [129.240.10.4]) by cuda.sgi.com (Spam Firewall) with ESMTP id D3851D118233 for ; Tue, 26 Sep 2006 13:30:05 -0700 (PDT) Received: from mail-mx6.uio.no ([129.240.10.47]) by pat.uio.no with esmtp (Exim 4.43) id 1GSJZ3-00077s-So; Tue, 26 Sep 2006 22:30:02 +0200 Received: from dh141.citi.umich.edu ([141.211.133.141]) by mail-mx6.uio.no with esmtpsa (SSLv3:RC4-MD5:128) (Exim 4.43) id 1GSJZ0-00063I-PV; Tue, 26 Sep 2006 22:29:59 +0200 X-ASG-Orig-Subj: Re: [NFS] Long sleep with i_mutex in xfs_flush_device(), affects NFS service Subject: Re: [NFS] Long sleep with i_mutex in xfs_flush_device(), affects NFS service From: Trond Myklebust To: Stephane Doyon Cc: xfs@oss.sgi.com, nfs@lists.sourceforge.net In-Reply-To: References: <1159297579.5492.21.camel@lade.trondhjem.org> Content-Type: text/plain Date: Tue, 26 Sep 2006 16:29:56 -0400 Message-Id: <1159302596.5492.57.camel@lade.trondhjem.org> Mime-Version: 1.0 X-Mailer: Evolution 2.8.0 Content-Transfer-Encoding: 7bit X-UiO-Spam-info: not spam, SpamAssassin (score=-3.909, required 12, autolearn=disabled, AWL 1.09, UIO_MAIL_IS_INTERNAL -5.00) X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21989 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9095 X-ecartis-version: Ecartis v1.0.0 Sender: 
xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: trond.myklebust@fys.uio.no Precedence: bulk X-list: xfs Content-Length: 658 Lines: 16 On Tue, 2006-09-26 at 16:05 -0400, Stephane Doyon wrote: > I suppose it's not technically wrong to try to flush all the pages of the > file, but if the server file system is full then it will be at its worse. > Also if you happened to be on a slower link and have a big cache to flush, > you're waiting around for very little gain. That all assumes that nobody fixes the problem on the server. If somebody notices, and actually removes an unused file, then you may be happy that the kernel preserved the last 80% of the apache log file that was being written out. ENOSPC is a transient error: that is why the current behaviour exists. Cheers, Trond From owner-xfs@oss.sgi.com Tue Sep 26 15:48:20 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Sep 2006 15:48:32 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8QMmJaG016192 for ; Tue, 26 Sep 2006 15:48:20 -0700 X-ASG-Debug-ID: 1159310860-11055-953-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from smtp114.sbc.mail.mud.yahoo.com (smtp114.sbc.mail.mud.yahoo.com [68.142.198.213]) by cuda.sgi.com (Spam Firewall) with SMTP id C1AAAD118FE7 for ; Tue, 26 Sep 2006 15:47:40 -0700 (PDT) Received: (qmail 6416 invoked from network); 26 Sep 2006 22:40:57 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@70.231.251.10 with login) by smtp114.sbc.mail.mud.yahoo.com with SMTP; 26 Sep 2006 22:40:56 -0000 Received: by tuatara.stupidest.org (Postfix, from userid 10000) id 9817018079AE; Tue, 26 Sep 2006 15:40:53 -0700 (PDT) Date: Tue, 26 Sep 2006 15:40:53 -0700 From: Chris Wedgwood To: Rene Salmon Cc: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: LVM and XFS cannot set blocksize on block device Subject: Re: LVM and XFS cannot set blocksize on block device 
Message-ID: <20060926224053.GA31542@tuatara.stupidest.org> References: <45185424.2030707@tulane.edu> <20060926001737.GA10224@tuatara.stupidest.org> <45193204.3030500@tulane.edu> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <45193204.3030500@tulane.edu> X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.21995 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9096 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs Content-Length: 847 Lines: 25 On Tue, Sep 26, 2006 at 08:58:28AM -0500, Rene Salmon wrote: > Thanks for the reply. The "-s size=4096" helped I was able to create > the file system, then mount it and use it. I did however get a > warning still about "cannot set blocksize on block device". I don't know much about the LVM code; my guess is that ioctl(... ,BLKBSZSET, ...) is failing, and strace would confirm this. > Everything seems to be working but I am a bit worried about the > warning message. Following is the message. Any ideas if it is safe > to ignore this or any way to get rid of it? What does: blockdev --getbsz /dev/vg_u00/lv_u00 say? If mkfs.xfs is trying to set a blocksize that already matches the underlying device, it wouldn't be hard to silence the warning by doing a check before unconditionally setting it, though I don't know that it's worth it.
From owner-xfs@oss.sgi.com Tue Sep 26 21:31:11 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Sep 2006 21:31:23 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8R4V8aG021494 for ; Tue, 26 Sep 2006 21:31:10 -0700 Received: from [134.14.55.89] (soarer.melbourne.sgi.com [134.14.55.89]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA06030; Wed, 27 Sep 2006 14:30:23 +1000 Message-ID: <4519FE80.4000609@sgi.com> Date: Wed, 27 Sep 2006 14:30:56 +1000 From: Vlad Apostolov User-Agent: Thunderbird 1.5.0.5 (X11/20060719) MIME-Version: 1.0 To: sgi.bugs.xfs@engr.sgi.com, linux-xfs@oss.sgi.com Subject: PARTIAL TAKE 955274: DMAPI qa test fixes References: <44CE9F23.7000605@sgi.com> In-Reply-To: <44CE9F23.7000605@sgi.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 9097 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: vapo@sgi.com Precedence: bulk X-list: xfs Content-Length: 690 Lines: 18 pv 955274 - Limit the random generated file size attribute to 1 TB (if size is too big the kernel panics) Date: Wed Sep 27 14:28:03 AEST 2006 Workarea: soarer.melbourne.sgi.com:/home/vapo/isms/xfs-cmds Inspected by: none Author: vapo The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:27060a xfstests/dmapi/src/suite2/src/test_fileattr.c - 1.9 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/dmapi/src/suite2/src/test_fileattr.c.diff?r1=text&tr1=1.9&r2=text&tr2=1.8&f=h - pv 955274 - Limit the random generated file size attribute to 1 TB (if size is too big the kernel panics) From owner-xfs@oss.sgi.com Tue Sep 26 22:51:14 2006 Received: with ECARTIS (v1.0.0; list xfs); Tue, 26 Sep 2006 22:51:20 -0700 (PDT) Received: from 
cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8R5pDaG031105 for ; Tue, 26 Sep 2006 22:51:14 -0700 X-ASG-Debug-ID: 1159332341-19730-261-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from tyo202.gate.nec.co.jp (TYO202.gate.nec.co.jp [202.32.8.206]) by cuda.sgi.com (Spam Firewall) with ESMTP id D9C5245F711 for ; Tue, 26 Sep 2006 21:45:41 -0700 (PDT) Received: from mailgate3.nec.co.jp (mailgate53.nec.co.jp [10.7.69.160] (may be forged)) by tyo202.gate.nec.co.jp (8.13.8/8.13.4) with ESMTP id k8R4j4L8011718 for ; Wed, 27 Sep 2006 13:45:04 +0900 (JST) Received: (from root@localhost) by mailgate3.nec.co.jp (8.11.7/3.7W-MAILGATE-NEC) id k8R4j4B20423 for xfs@oss.sgi.com; Wed, 27 Sep 2006 13:45:04 +0900 (JST) Received: from secsv3.tnes.nec.co.jp (tnesvc2.tnes.nec.co.jp [10.1.101.15]) by mailsv4.nec.co.jp (8.11.7/3.7W-MAILSV4-NEC) with ESMTP id k8R4j3Q18713 for ; Wed, 27 Sep 2006 13:45:03 +0900 (JST) Received: from tnesvc2.tnes.nec.co.jp ([10.1.101.15]) by secsv3.tnes.nec.co.jp (ExpressMail 5.10) with SMTP id 20060927.134856.01502552 for ; Wed, 27 Sep 2006 13:48:56 +0900 Received: FROM tnessv1.tnes.nec.co.jp BY tnesvc2.tnes.nec.co.jp ; Wed Sep 27 13:48:55 2006 +0900 Received: from rifu.bsd.tnes.nec.co.jp (rifu.bsd.tnes.nec.co.jp [10.1.104.1]) by tnessv1.tnes.nec.co.jp (Postfix) with ESMTP id 4A96AAE4B3 for ; Wed, 27 Sep 2006 13:45:03 +0900 (JST) Received: from TNESG9305.tnes.nec.co.jp (TNESG9305.bsd.tnes.nec.co.jp [10.1.104.199]) by rifu.bsd.tnes.nec.co.jp (8.12.11/3.7W/BSD-TNES-MX01) with SMTP id k8R4j3MQ001527 for ; Wed, 27 Sep 2006 13:45:03 +0900 Message-Id: <200609270444.AA04551@TNESG9305.tnes.nec.co.jp> Date: Wed, 27 Sep 2006 13:44:58 +0900 To: xfs@oss.sgi.com X-ASG-Orig-Subj: [PATCH] xfs_db ring command Subject: [PATCH] xfs_db ring command From: Utako Kusaka MIME-Version: 1.0 X-Mailer: AL-Mail32 Version 1.13 Content-Type: text/plain; charset=us-ascii X-Barracuda-Spam-Score: 0.00 
X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.22015 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9098 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: utako@tnes.nec.co.jp Precedence: bulk X-list: xfs Content-Length: 499 Lines: 17 This patch fixes the issue that the xfs_db ring [index] command doesn't move to the specified index. Signed-off-by: Utako Kusaka ---

--- xfsprogs-2.8.11-orgn/db/io.c	2006-06-26 14:01:14.000000000 +0900
+++ xfsprogs-2.8.11/db/io.c	2006-09-26 16:59:40.000000000 +0900
@@ -352,7 +352,7 @@ ring_f(
 		return 0;
 	}
 
-	index = (int)strtoul(argv[0], NULL, 0);
+	index = (int)strtoul(argv[1], NULL, 0);
 	if (index < 0 || index >= RING_ENTRIES)
 		dbprintf("invalid entry: %d\n", index);

From owner-xfs@oss.sgi.com Wed Sep 27 04:37:50 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Sep 2006 04:37:59 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8RBbmaG029225 for ; Wed, 27 Sep 2006 04:37:50 -0700 X-ASG-Debug-ID: 1159357028-27240-71-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by cuda.sgi.com (Spam Firewall) with ESMTP id 9461CD118FE1 for ; Wed, 27 Sep 2006 04:37:08 -0700 (PDT) Received: from agami.com ([192.168.168.146]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8RBaaRC004312 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Wed, 27 Sep 2006 04:36:36 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8RBaVqq015842 for ; Wed, 27 Sep 2006 04:36:31 -0700 Received: from [10.12.12.141]
([10.12.12.141]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Wed, 27 Sep 2006 04:40:30 -0700 Message-ID: <451A618B.5080901@agami.com> Date: Wed, 27 Sep 2006 17:03:31 +0530 From: Shailendra Tripathi User-Agent: Mozilla Thunderbird 0.9 (X11/20041127) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Stephane Doyon CC: xfs@oss.sgi.com, nfs@lists.sourceforge.net X-ASG-Orig-Subj: Re: Long sleep with i_mutex in xfs_flush_device(), affects NFS service Subject: Re: Long sleep with i_mutex in xfs_flush_device(), affects NFS service References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 27 Sep 2006 11:40:31.0234 (UTC) FILETIME=[BC995620:01C6E229] X-Scanned-By: MIMEDefang 2.36 X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.22034 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9099 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 4756 Lines: 111 Hi Stephane, > When the file system becomes nearly full, we eventually call down to > xfs_flush_device(), which sleeps for 0.5seconds, waiting for xfssyncd to > do some work. > xfs_flush_space()does > xfs_iunlock(ip, XFS_ILOCK_EXCL); > before calling xfs_flush_device(), but i_mutex is still held, at least > when we're being called from under xfs_write(). 1. I agree that the delay of 500 ms is not a deterministic wait. 2. xfs_flush_device is a big operation: it has to flush all the dirty pages possibly in the cache on the device, and depending upon the device that might take a significant amount of time. Keeping that in view, 500 ms isn't that unreasonable.
Also, perhaps you would never want more than one request to be queued for a device flush. 3. The hope is that after one big flush operation, it would be able to free up resources which are in a transient state (over-reservation of blocks, delalloc, pending removes, ...). The whole operation is intended to make sure that ENOSPC is not returned unless really required. 4. This wait could be made deterministic by waiting for the syncer thread to complete when a device flush is triggered. > It seems like a fairly long time to hold a mutex. And I wonder whether it's really It might not be that good even if it didn't retry: that can return a premature ENOSPC, or it can queue many xfs_flush_device requests (which can make your system dead(-slow) anyway). > necessary to keep going through that again and again for every new request after > we've hit NOSPC. > > In particular this can cause a pileup when several threads are writing > concurrently to the same file. Some specialized apps might do that, and > nfsd threads do it all the time. > > To reproduce locally, on a full file system:
> #!/bin/sh
> for i in `seq 30`; do
>   dd if=/dev/zero of=f bs=1 count=1 &
> done
> wait
> time that, it takes nearly exactly 15s. > > The linux NFS client typically sends bunches of 16 requests, and so if > the client is writing a single file, some NFS requests are therefore > delayed by up to 8seconds, which is kind of long for NFS. > > What's worse, when my linux NFS client writes out a file's pages, it > does not react immediately on receiving a NOSPC error. It will remember > and report the error later on close(), but it still tries and issues > write requests for each page of the file. So even if there isn't a > pileup on the i_mutex on the server, the NFS client still waits 0.5s for > each 32K (typically) request.
So on an NFS client on a gigabit network, > on an already full filesystem, if I open and write a 10M file and > close() it, it takes 2m40.083s for it to issue all the requests, get an > NOSPC for each, and finally have my close() call return ENOSPC. That can > stretch to several hours for gigabyte-sized files, which is how I > noticed the problem. > > I'm not too familiar with the NFS client code, but would it not be > possible for it to give up when it encounters NOSPC? Or is there some > reason why this wouldn't be desirable? > > The rough workaround I have come up with for the problem is to have > xfs_flush_space() skip calling xfs_flush_device() if we are within 2secs > of having returned ENOSPC. I have verified that this workaround is > effective, but I imagine there might be a cleaner solution. The fix would not be a good idea for standalone use of XFS.

	if (nimaps == 0) {
		if (xfs_flush_space(ip, &fsynced, &ioflag))
			return XFS_ERROR(ENOSPC);
		error = 0;
		goto retry;
	}

xfs_flush_space:

	case 2:
		xfs_iunlock(ip, XFS_ILOCK_EXCL);
		xfs_flush_device(ip);
		xfs_ilock(ip, XFS_ILOCK_EXCL);
		*fsynced = 3;
		return 0;
	}
	return 1;

Let's say you don't enqueue it for another 2 secs. Then, on the next retry, it would return 1 and hence the outer if condition would return ENOSPC. Please note that for standalone XFS, the application or client mostly doesn't retry, and hence it might return a premature ENOSPC. You didn't notice this because, as you said, the NFS client will retry in case of ENOSPC. Assuming you return *fsynced = 2 instead of *fsynced = 3, the code path will loop (because of the retry) and the CPU would be kept busy doing no useful work. You might experiment by adding a deterministic wait: when you enqueue a flush, set some flag; all others who come in between just get enqueued; once the device flush is over, wake them all up. If the flush could free enough resources, the threads will proceed ahead and return. Otherwise, another flush would be enqueued to flush what might have come in since the last flush.
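The flag/enqueue/wake-all scheme suggested in the last paragraph can be prototyped with a mutex and condition variable. This is only an illustrative sketch under that suggestion — request_flush(), run_requesters() and do_device_flush() are made-up names, and do_device_flush() is a stub standing in for the real xfs_flush_device():

```c
#include <pthread.h>

static pthread_mutex_t flush_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  flush_done = PTHREAD_COND_INITIALIZER;
static int flush_in_progress;
static int flush_count;			/* how many real flushes ran */

static void do_device_flush(void)	/* stub for xfs_flush_device() */
{
	flush_count++;			/* serialized via flush_in_progress */
}

/*
 * First caller becomes the flusher; everyone arriving while a flush is
 * in progress just waits and is woken when it completes, so concurrent
 * requests coalesce into a single device flush and nobody polls on a
 * fixed 500ms sleep.
 */
void request_flush(void)
{
	pthread_mutex_lock(&flush_lock);
	if (flush_in_progress) {
		while (flush_in_progress)
			pthread_cond_wait(&flush_done, &flush_lock);
		pthread_mutex_unlock(&flush_lock);
		return;
	}
	flush_in_progress = 1;
	pthread_mutex_unlock(&flush_lock);

	do_device_flush();		/* the expensive part, lock dropped */

	pthread_mutex_lock(&flush_lock);
	flush_in_progress = 0;
	pthread_cond_broadcast(&flush_done);	/* wake all coalesced waiters */
	pthread_mutex_unlock(&flush_lock);
}

static void *requester(void *arg)
{
	(void)arg;
	request_flush();
	return NULL;
}

/* demo: run n concurrent requesters, return how many flushes ran */
int run_requesters(int n)
{
	pthread_t t[64];
	int i;

	if (n > 64)
		n = 64;
	flush_count = 0;
	for (i = 0; i < n; i++)
		pthread_create(&t[i], NULL, requester, NULL);
	for (i = 0; i < n; i++)
		pthread_join(t[i], NULL);
	return flush_count;
}
```

With several concurrent requesters, anywhere from one flush up to one per requester runs depending on timing, but never more than one at a time — which is the "don't queue more than one device flush" property argued for above.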
> Thanks > > From owner-xfs@oss.sgi.com Wed Sep 27 04:58:58 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Sep 2006 04:59:08 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8RBwvaG032423 for ; Wed, 27 Sep 2006 04:58:58 -0700 X-ASG-Debug-ID: 1159358295-19242-980-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from ext.agami.com (64.221.212.177.ptr.us.xo.net [64.221.212.177]) by cuda.sgi.com (Spam Firewall) with ESMTP id 14863460948 for ; Wed, 27 Sep 2006 04:58:15 -0700 (PDT) Received: from agami.com ([192.168.168.146]) by ext.agami.com (8.12.5/8.12.5) with ESMTP id k8RBwFRC004470 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO) for ; Wed, 27 Sep 2006 04:58:15 -0700 Received: from mx1.agami.com (mx1.agami.com [10.123.10.30]) by agami.com (8.12.11/8.12.11) with ESMTP id k8RBw91T015990 for ; Wed, 27 Sep 2006 04:58:10 -0700 Received: from [10.12.12.141] ([10.12.12.141]) by mx1.agami.com with Microsoft SMTPSVC(6.0.3790.1830); Wed, 27 Sep 2006 05:02:08 -0700 Message-ID: <451A669D.9020503@agami.com> Date: Wed, 27 Sep 2006 17:25:09 +0530 From: Shailendra Tripathi User-Agent: Mozilla Thunderbird 0.9 (X11/20041127) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Chris Wedgwood CC: Rene Salmon , xfs@oss.sgi.com X-ASG-Orig-Subj: Re: LVM and XFS cannot set blocksize on block device Subject: Re: LVM and XFS cannot set blocksize on block device References: <45185424.2030707@tulane.edu> <20060926001737.GA10224@tuatara.stupidest.org> <45193204.3030500@tulane.edu> <20060926224053.GA31542@tuatara.stupidest.org> In-Reply-To: <20060926224053.GA31542@tuatara.stupidest.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 27 Sep 2006 12:02:08.0953 (UTC) FILETIME=[C2198A90:01C6E22C] X-Scanned-By: MIMEDefang 2.36 X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using 
per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.22036 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9100 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: stripathi@agami.com Precedence: bulk X-list: xfs Content-Length: 1416 Lines: 44 >>Thanks for the reply. The "-s size=4096" helped I was able to create >>the file system, then mount it and use it. I did however get a >>warning still about "cannot set blocksize on block device". > > I don't know much about the LVM code, my guess is that > ioctl(... ,BLKBSZSET, ...) is failing, strace would confirm this. libxfs_device_open() seems to work on the preconceived notion that block devices only ever have 512-byte sectors:

	if (!readonly && setblksize && (statb.st_mode & S_IFMT) == S_IFBLK)
		platform_set_blocksize(fd, path, statb.st_rdev, 512);

This eventually calls down to set the block size to 512. Since your volume does not support less than 4k, it returns EINVAL. I think libxfs_init should be modified to pass the -s size option on to this call so that this does not happen. However, I don't see any problem despite this failure; everything else should work fine. >>Everything seems to be working but I am a bit worried about the >>warning message. Following is the message. Any ideas if it is safe >>to ignore this or any way to get rid of it? > > > What does: > > blockdev --getbsz /dev/vg_u00/lv_u00 > > say? > > > If mkfs.xfs is trying to set a blocksize that already matches the > underlying device, it woudn't be hard to silence the warning by doing > a check before unconditionally setting it, though I don't know that > it's worth it.
> > From owner-xfs@oss.sgi.com Wed Sep 27 06:17:43 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Sep 2006 06:17:53 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8RDHfaG012530 for ; Wed, 27 Sep 2006 06:17:43 -0700 X-ASG-Debug-ID: 1159363021-31580-528-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from tricca.tcs.tulane.edu (tricca.tcs.tulane.edu [129.81.224.27]) by cuda.sgi.com (Spam Firewall) with ESMTP id 39C6445FED3 for ; Wed, 27 Sep 2006 06:17:01 -0700 (PDT) Received: from tricca.tcs.tulane.edu (localhost.localdomain [127.0.0.1]) by tricca.tcs.tulane.edu (8.13.6/8.13.6) with ESMTP id k8RDGqmL004306; Wed, 27 Sep 2006 08:16:52 -0500 Received: from olympus.tcs.tulane.edu (olympus.tcs.tulane.edu [129.81.224.6] (may be forged)) by tricca.tcs.tulane.edu (8.13.6/8.12.8) with ESMTP id k8RDGpoe004301; Wed, 27 Sep 2006 08:16:51 -0500 Received: from [129.81.113.244] (localhost [127.0.0.1]) (authenticated bits=0) by olympus.tcs.tulane.edu (8.13.6/8.13.6) with ESMTP id k8RDGi66002297; Wed, 27 Sep 2006 08:16:44 -0500 (CDT) Message-ID: <451A79BC.1010302@tulane.edu> Date: Wed, 27 Sep 2006 08:16:44 -0500 From: Rene Salmon User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: Shailendra Tripathi , Chris Wedgwood CC: xfs@oss.sgi.com X-ASG-Orig-Subj: Re: LVM and XFS cannot set blocksize on block device Subject: Re: LVM and XFS cannot set blocksize on block device References: <45185424.2030707@tulane.edu> <20060926001737.GA10224@tuatara.stupidest.org> <45193204.3030500@tulane.edu> <20060926224053.GA31542@tuatara.stupidest.org> <451A669D.9020503@agami.com> In-Reply-To: <451A669D.9020503@agami.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= 
X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.22039 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9103 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: rsalmon@tulane.edu Precedence: bulk X-list: xfs Content-Length: 2176 Lines: 77 Hi, Thanks for the replies. The block size of my LV is indeed 4096 helix-priv:~ # blockdev --getbsz /dev/vg_u00/lv_u00 4096 I can mount and use the xfs file system no problems. I have even tested extending the LV and doing an xfs_grow and that seemed to work no problems. So I take it I can safely ignore the warning. Should I report this as a bug? If so Can someone point me to the bugzilla page or something of the sorts? Thanks Rene Shailendra Tripathi wrote: >>> Thanks for the reply. The "-s size=4096" helped I was able to create >>> the file system, then mount it and use it. I did however get a >>> warning still about "cannot set blocksize on block device". > >> >> I don't know much about the LVM code, my guess is that >> ioctl(... ,BLKBSZSET, ...) is failing, strace would confirm this. > > > libxfs_device_open () seems to be working with the pre-conceived notion > of assuming block devices of only 512 bytes in size. > > if (!readonly && setblksize && (statb.st_mode & S_IFMT) == S_IFBLK) > platform_set_blocksize(fd, path, statb.st_rdev, 512); > > This eventually calls to set the blk sz to 512. Since, your volume does > not support less than 4k, it returns EINVAL. I think, libxfs_init should > be modified to take pass on the -s size option to this call so that it > does not happen. > However, I don't see any problem despite this failure. Everything > else should work fine. > > > > >>> Everything seems to be working but I am a bit worried about the >>> warning message. Following is the message. 
Any ideas if it is safe >>> to ignore this or any way to get rid of it? >> >> >> What does: >> >> blockdev --getbsz /dev/vg_u00/lv_u00 >> >> say? >> >> >> If mkfs.xfs is trying to set a blocksize that already matches the >> underlying device, it woudn't be hard to silence the warning by doing >> a check before unconditionally setting it, though I don't know that >> it's worth it. >> >> -- - -- Rene Salmon Tulane University Center for Computational Science http://www.ccs.tulane.edu rsalmon@tulane.edu Tel 504-862-8393 Tel 504-988-8552 Fax 504-862-8392 From owner-xfs@oss.sgi.com Wed Sep 27 08:50:01 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Sep 2006 08:50:07 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8RFnwaG008464 for ; Wed, 27 Sep 2006 08:50:01 -0700 X-ASG-Debug-ID: 1159372158-4393-804-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by cuda.sgi.com (Spam Firewall) with ESMTP id A3798D118765 for ; Wed, 27 Sep 2006 08:49:18 -0700 (PDT) Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id k8RFmeES011049; Wed, 27 Sep 2006 11:48:40 -0400 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id k8RFmdlw008867; Wed, 27 Sep 2006 11:48:39 -0400 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id k8RFmcZG017079; Wed, 27 Sep 2006 11:48:39 -0400 Message-ID: <451A9D55.30607@sandeen.net> Date: Wed, 27 Sep 2006 10:48:37 -0500 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.7 (X11/20060913) MIME-Version: 1.0 To: Shailendra Tripathi CC: Chris Wedgwood , Rene Salmon , xfs@oss.sgi.com X-ASG-Orig-Subj: Re: LVM and XFS cannot set blocksize on block device Subject: Re: 
LVM and XFS cannot set blocksize on block device References: <45185424.2030707@tulane.edu> <20060926001737.GA10224@tuatara.stupidest.org> <45193204.3030500@tulane.edu> <20060926224053.GA31542@tuatara.stupidest.org> <451A669D.9020503@agami.com> In-Reply-To: <451A669D.9020503@agami.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.22046 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9104 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 880 Lines: 23 Shailendra Tripathi wrote: > libxfs_device_open () seems to be working with the pre-conceived notion > of assuming block devices of only 512 bytes in size. > > if (!readonly && setblksize && (statb.st_mode & S_IFMT) == S_IFBLK) > platform_set_blocksize(fd, path, statb.st_rdev, 512); > > This eventually calls to set the blk sz to 512. Since, your volume does > not support less than 4k, it returns EINVAL. I think, libxfs_init should > be modified to take pass on the -s size option to this call so that it > does not happen. > However, I don't see any problem despite this failure. Everything > else should work fine. > Yep, this looks to me like an oversight when the larger-sector-size support was added. Seems like if the device can't be set to a smaller sector size than X, then the tools should run as if a sector size of X had been specified? 
-Eric From owner-xfs@oss.sgi.com Wed Sep 27 17:40:08 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Sep 2006 17:40:11 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8S0e5aG020813 for ; Wed, 27 Sep 2006 17:40:07 -0700 Received: from [134.14.55.141] (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA01037; Thu, 28 Sep 2006 10:39:17 +1000 Message-ID: <451B1A06.4000507@sgi.com> Date: Thu, 28 Sep 2006 10:40:38 +1000 From: Timothy Shimmin User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909) MIME-Version: 1.0 To: Utako Kusaka CC: xfs@oss.sgi.com Subject: Re: [PATCH] xfs_db ring command References: <200609270444.AA04551@TNESG9305.tnes.nec.co.jp> In-Reply-To: <200609270444.AA04551@TNESG9305.tnes.nec.co.jp> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 9106 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Content-Length: 2102 Lines: 83 Utako Kusaka wrote: > This patch fixes the issue that > xfs_db ring [index] command don't move to specified index. > > Signed-off-by: Utako Kusaka > --- > > --- xfsprogs-2.8.11-orgn/db/io.c 2006-06-26 14:01:14.000000000 +0900 > +++ xfsprogs-2.8.11/db/io.c 2006-09-26 16:59:40.000000000 +0900 > @@ -352,7 +352,7 @@ ring_f( > return 0; > } > > - index = (int)strtoul(argv[0], NULL, 0); > + index = (int)strtoul(argv[1], NULL, 0); > if (index < 0 || index >= RING_ENTRIES) > dbprintf("invalid entry: %d\n", index); > Hi there, Thanks for that. I'll check it in shortly. Aside: I think I can see how this happened. It must have been due to porting from IRIX to Linux. 
On IRIX the db commands removed the command name, and just passed in the arguments to the specific command function; they did: ON IRIX: ------------ command( int argc, char **argv) { char *cmd; const cmdinfo_t *ct; cmd = argv[0]; ct = find_command(cmd); if (ct == NULL) { dbprintf("command %s not found\n", cmd); return 0; } --> argc--; --> argv++; if (argc < ct->argmin || (ct->argmax != -1 && argc > ct->argmax)) { ... return ct->cfunc(argc, argv); ------------- But on Linux we pass through the command name as well: -------------- int command( int argc, char **argv) { char *cmd; const cmdinfo_t *ct; cmd = argv[0]; ct = find_command(cmd); if (ct == NULL) { dbprintf("command %s not found\n", cmd); return 0; } if (argc-1 < ct->argmin || (ct->argmax != -1 && argc-1 > ct->argmax)) { ... return ct->cfunc(argc, argv); -------------- So I hope no more of these mistakes have happened because I'd imagine many of the commands would have had to change. I noticed that someone fixed up part of ring_f command but not the part that you found/fixed. Cheers, Tim. 
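The pitfall Tim describes comes down to where a command handler finds its first argument. A minimal stand-alone illustration of the two conventions — parse_ring_index() is a made-up stand-in for the argument parsing in ring_f(), not actual xfs_db code:

```c
#include <stdlib.h>

#define RING_ENTRIES 20		/* stand-in for the bound checked in ring_f() */

/*
 * Linux-style dispatch passes argv through unchanged, so argv[0] is the
 * command name ("ring") and the index argument lives in argv[1].  Under
 * the old IRIX-style dispatch (argc--, argv++ before the call) the same
 * handler would have read argv[0] -- the off-by-one the patch fixes.
 * Returns the parsed index, or -1 if missing or out of range.
 */
int parse_ring_index(int argc, char **argv)
{
	int index;

	if (argc < 2)
		return -1;			/* no index supplied */
	index = (int)strtoul(argv[1], NULL, 0);	/* argv[1], not argv[0] */
	if (index < 0 || index >= RING_ENTRIES)
		return -1;			/* invalid entry */
	return index;
}

/* convenience wrapper: parse a single index argument string */
int parse_ring_index_str(const char *arg)
{
	char *argv[2];

	argv[0] = "ring";
	argv[1] = (char *)arg;
	return parse_ring_index(2, argv);
}
```

Had the handler kept reading argv[0], strtoul("ring", ...) would parse to 0, so the ring command would always land on entry 0 whatever index was given — consistent with the misbehaviour Utako reported.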
From owner-xfs@oss.sgi.com Wed Sep 27 17:49:27 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Sep 2006 17:49:30 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8S0nNaG022428 for ; Wed, 27 Sep 2006 17:49:25 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA01292 for ; Thu, 28 Sep 2006 10:48:44 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 1116) id 4D83758CF853; Thu, 28 Sep 2006 10:48:44 +1000 (EST) To: xfs@oss.sgi.com Subject: TAKE xfs_db - ring command fix Message-Id: <20060928004844.4D83758CF853@chook.melbourne.sgi.com> Date: Thu, 28 Sep 2006 10:48:44 +1000 (EST) From: tes@sgi.com (Tim Shimmin) X-archive-position: 9107 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Content-Length: 788 Lines: 24 Fix up ring command in xfs_db for its argument handling of the index. Date: Thu Sep 28 10:47:51 AEST 2006 Workarea: chook.melbourne.sgi.com:/build/tes/xfs-cmds Inspected by: utako@tnes.nec.co.jp The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:27071a xfsprogs/db/io.c - 1.14 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/db/io.c.diff?r1=text&tr1=1.14&r2=text&tr2=1.13&f=h - Fix up ring command in xfs_db for its argument handling of the index. Fix from utako@tnes.nec.co.jp (Utako Kusaka). 
xfsprogs/doc/CHANGES - 1.221 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/doc/CHANGES.diff?r1=text&tr1=1.221&r2=text&tr2=1.220&f=h - Add xfs_db ring command fix From owner-xfs@oss.sgi.com Wed Sep 27 19:07:39 2006 Received: with ECARTIS (v1.0.0; list xfs); Wed, 27 Sep 2006 19:07:47 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8S27caG001649 for ; Wed, 27 Sep 2006 19:07:39 -0700 X-ASG-Debug-ID: 1159404938-27986-63-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from prod.aconex.com (mail.app.aconex.com [203.89.192.138]) by cuda.sgi.com (Spam Firewall) with ESMTP id 7DA05D112418 for ; Wed, 27 Sep 2006 17:55:38 -0700 (PDT) Received: from page.mel.office.aconex.com (unknown [192.168.0.210]) by prod.aconex.com (Postfix) with ESMTP id C14AF289B5 for ; Thu, 28 Sep 2006 10:55:33 +1000 (EST) Received: from localhost (page.mel.aconex.com [127.0.0.1]) by page.mel.office.aconex.com (Postfix) with ESMTP id A2DA353403A for ; Thu, 28 Sep 2006 10:55:33 +1000 (EST) Received: from page.mel.office.aconex.com ([127.0.0.1]) by localhost (mail.aconex.com [127.0.0.1]) (amavisd-new, port 10024) with LMTP id 30205-01-25 for ; Thu, 28 Sep 2006 10:55:32 +1000 (EST) Received: from edge (unknown [192.168.0.246]) by page.mel.office.aconex.com (Postfix) with ESMTP id AD9AF534039 for ; Thu, 28 Sep 2006 10:55:31 +1000 (EST) X-ASG-Orig-Subj: [PATCH] fix xfs_admin command PATH Subject: [PATCH] fix xfs_admin command PATH From: Nathan Scott To: xfs@oss.sgi.com Content-Type: multipart/mixed; boundary="=-gmm4poRh7ZPYp3P0vKCl" Date: Thu, 28 Sep 2006 10:53:36 +1000 Message-Id: <1159404816.3497.15.camel@edge> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.22076 Rule 
breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9108 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nathans@xfs.org Precedence: bulk X-list: xfs Content-Length: 844 Lines: 35 --=-gmm4poRh7ZPYp3P0vKCl Content-Type: text/plain Content-Transfer-Encoding: 7bit Hi, I noticed an error labelling a filesystem on Fedora the other day, this should fix it. Possibly other scripts are affected, dunno - the problem occurs when /usr/sbin is not in the PATH (it typically happens after a su to root). cheers. -- Nathan --=-gmm4poRh7ZPYp3P0vKCl Content-Disposition: attachment; filename=admin.patch Content-Type: text/x-patch; name=admin.patch; charset=UTF-8 Content-Transfer-Encoding: 7bit --- /fedora/usr/sbin/xfs_admin.orig 2006-09-26 16:01:10.000000000 +1000 +++ /fedora/usr/sbin/xfs_admin 2006-09-26 16:01:47.000000000 +1000 @@ -33,6 +33,7 @@ OPTS="" USAGE="Usage: xfs_admin [-efluV] [-L label] [-U uuid] special" +export PATH="/sbin:/usr/sbin:$PATH" while getopts "efluL:U:V" c do --=-gmm4poRh7ZPYp3P0vKCl-- From owner-xfs@oss.sgi.com Thu Sep 28 01:07:42 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Sep 2006 01:07:45 -0700 (PDT) Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8S87faG032342 for ; Thu, 28 Sep 2006 01:07:42 -0700 X-ASG-Debug-ID: 1159426674-4305-209-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from web8511.mail.in.yahoo.com (web8511.mail.in.yahoo.com [202.43.219.104]) by cuda.sgi.com (Spam Firewall) with SMTP id 18081D1123E8 for ; Wed, 27 Sep 2006 23:57:55 -0700 (PDT) Received: (qmail 70798 invoked by uid 60001); 28 Sep 2006 06:51:13 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.co.in; h=Message-ID:Received:Date:From:Subject:To:MIME-Version:Content-Type:Content-Transfer-Encoding; 
b=Ndcwx0e/28d3cIqxROi6j0v1q6rVXg1cYjbYpoFnfUy6etKQIHBaQe+vJBRK9QUBYrNiM98af4jnPBiLxOimKXaDzzPmkfAVdzPgoDt/qcu9rCdhwDmGU0/Xtwk2pfMe4o5l/uLp8moRVAEByzsXdvAmXrDeXbQ2NFMJmCh1ZxI= ; Message-ID: <20060928065113.70796.qmail@web8511.mail.in.yahoo.com> Received: from [198.62.10.100] by web8511.mail.in.yahoo.com via HTTP; Thu, 28 Sep 2006 07:51:13 BST Date: Thu, 28 Sep 2006 07:51:13 +0100 (BST) From: nagesh kollu X-ASG-Orig-Subj: about performace Subject: about performace To: linux-xfs@oss.sgi.com MIME-Version: 1.0 X-Barracuda-Spam-Score: 0.22 X-Barracuda-Spam-Status: No, SCORE=0.22 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=FROM_HAS_ULINE_NUMS X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.22094 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.22 FROM_HAS_ULINE_NUMS From: contains an underline and numbers/letters Content-Type: text/plain Content-Disposition: inline Content-Transfer-Encoding: 7bit X-archive-position: 9110 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: nagesh_20k@yahoo.co.in Precedence: bulk X-list: xfs Content-Length: 647 Lines: 15 hi, I'm new to XFS and would like to know about its performance: how long do reads and writes take, and has anyone measured this on an ARM processor? The block size in XFS can only be a power of two, but for video streaming the incoming packet is 188 bytes and the sector size is 512, so the I/O buffer size is 188*512. How can we do this in XFS? Could anyone send me XFS performance details? thanx! nagesh.k --------------------------------- Find out what India is talking about on - Yahoo! Answers India Send FREE SMS to your friend's mobile from Yahoo! Messenger Version 8.
Get it NOW [[HTML alternate version deleted]] From owner-xfs@oss.sgi.com Thu Sep 28 03:24:42 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Sep 2006 03:24:53 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8SAObaG020269 for ; Thu, 28 Sep 2006 03:24:40 -0700 Received: from [134.14.52.207] (pmmelb207.melbourne.sgi.com [134.14.52.207]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id UAA12001; Thu, 28 Sep 2006 20:23:44 +1000 Message-ID: <451BA2AF.9090703@sgi.com> Date: Thu, 28 Sep 2006 20:23:43 +1000 From: Tim Shimmin Reply-To: tes@sgi.com Organization: SGI User-Agent: Thunderbird 1.5.0.4 (Macintosh/20060530) MIME-Version: 1.0 To: Shailendra Tripathi CC: Chris Wedgwood , Rene Salmon , xfs@oss.sgi.com Subject: Re: LVM and XFS cannot set blocksize on block device References: <45185424.2030707@tulane.edu> <20060926001737.GA10224@tuatara.stupidest.org> <45193204.3030500@tulane.edu> <20060926224053.GA31542@tuatara.stupidest.org> <451A669D.9020503@agami.com> In-Reply-To: <451A669D.9020503@agami.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 9111 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: xfs Content-Length: 1127 Lines: 30 Shailendra Tripathi wrote: >>> Thanks for the reply. The "-s size=4096" helped I was able to create >>> the file system, then mount it and use it. I did however get a >>> warning still about "cannot set blocksize on block device". > >> >> I don't know much about the LVM code, my guess is that >> ioctl(... ,BLKBSZSET, ...) is failing, strace would confirm this. > > > libxfs_device_open () seems to be working with the pre-conceived notion > of assuming block devices of only 512 bytes in size. 
> > if (!readonly && setblksize && (statb.st_mode & S_IFMT) == S_IFBLK) > platform_set_blocksize(fd, path, statb.st_rdev, 512); > > This eventually calls down to set the block size to 512. Since your volume does > not support sector sizes below 4k, it returns EINVAL. I think libxfs_init should > be modified to pass the -s size option on to this call so that this > does not happen. > However, I don't see any problem despite this failure. Everything > else should work fine. > > Sounds reasonable. I'll have a look soon at passing the mkfs.xfs -s option thru to libxfs, which is consistent with the existing code. --Tim From owner-xfs@oss.sgi.com Thu Sep 28 03:34:41 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Sep 2006 03:34:44 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8SAYbaG022037 for ; Thu, 28 Sep 2006 03:34:39 -0700 Received: from [134.14.52.207] (pmmelb207.melbourne.sgi.com [134.14.52.207]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id UAA12223; Thu, 28 Sep 2006 20:33:48 +1000 Message-ID: <451BA50B.4090300@sgi.com> Date: Thu, 28 Sep 2006 20:33:47 +1000 From: Tim Shimmin Reply-To: tes@sgi.com Organization: SGI User-Agent: Thunderbird 1.5.0.4 (Macintosh/20060530) MIME-Version: 1.0 To: Rene Salmon CC: Shailendra Tripathi , Chris Wedgwood , xfs@oss.sgi.com Subject: Re: LVM and XFS cannot set blocksize on block device References: <45185424.2030707@tulane.edu> <20060926001737.GA10224@tuatara.stupidest.org> <45193204.3030500@tulane.edu> <20060926224053.GA31542@tuatara.stupidest.org> <451A669D.9020503@agami.com> <451A79BC.1010302@tulane.edu> In-Reply-To: <451A79BC.1010302@tulane.edu> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 9112 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com
Precedence: bulk X-list: xfs Content-Length: 275 Lines: 19 Hi Rene, Rene Salmon wrote: > > Should I report this as a bug? Well I know it now :) but it is good to have a record of it. > If so Can someone point me to the > bugzilla page or something of the sorts? > http://oss.sgi.com/bugzilla/ Product: Linux XFS. Thanks, --Tim From owner-xfs@oss.sgi.com Thu Sep 28 08:17:14 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Sep 2006 08:17:16 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8SFHDaG001619 for ; Thu, 28 Sep 2006 08:17:14 -0700 X-ASG-Debug-ID: 1159456593-6233-880-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by cuda.sgi.com (Spam Firewall) with ESMTP id 2850A462FC9 for ; Thu, 28 Sep 2006 08:16:33 -0700 (PDT) Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id k8SFGXG9028357; Thu, 28 Sep 2006 11:16:33 -0400 Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15]) by int-mx1.corp.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id k8SFGWbX015334; Thu, 28 Sep 2006 11:16:32 -0400 Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10]) by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id k8SFGVvZ013704; Thu, 28 Sep 2006 11:16:32 -0400 Message-ID: <451BE74F.3030205@sandeen.net> Date: Thu, 28 Sep 2006 10:16:31 -0500 From: Eric Sandeen User-Agent: Thunderbird 1.5.0.7 (X11/20060913) MIME-Version: 1.0 To: nagesh kollu CC: linux-xfs@oss.sgi.com X-ASG-Orig-Subj: Re: about performace Subject: Re: about performace References: <20060928065113.70796.qmail@web8511.mail.in.yahoo.com> In-Reply-To: <20060928065113.70796.qmail@web8511.mail.in.yahoo.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, 
SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.22114 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9115 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 654 Lines: 13 nagesh kollu wrote: > hi,, > i'm new to xfs ..i want to know the performance details of xfs.. > how much time it will take to write /read ...can anyone done this on ARM processor...the block size is 2^n only in xfs ..but for video streaming the incoming packet is 188 bytes .the sector size is 512 ...so the i/o buffer size is 188*512 ...how can we do this in xfs....cany anyone send the xfs performance details.. It sounds like you have a very specific environment that would be best tested and evaluated by... you :) Yes, the blocksizes are powers of two, but of course you can do any arbitrary sized write, if that's what you're asking. 
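Eric's point is easy to demonstrate with a short sketch (a hypothetical helper, not xfsprogs code): buffered writes need no alignment with the filesystem block size, so a 188-byte transport-stream packet can be appended as-is, and the block size only governs on-disk allocation.

```c
#include <stdio.h>
#include <string.h>

#define TS_PACKET_SIZE 188

/*
 * Append one 188-byte MPEG transport-stream packet to fp and return
 * the resulting file offset, or -1 on a short write.  No padding to
 * any power-of-two block size is needed for buffered I/O.
 */
long write_ts_packet(FILE *fp)
{
	unsigned char packet[TS_PACKET_SIZE];

	memset(packet, 0x47, sizeof(packet));	/* 0x47 is the TS sync byte */
	if (fwrite(packet, 1, sizeof(packet), fp) != sizeof(packet))
		return -1;
	fflush(fp);
	return ftell(fp);
}
```

(Direct I/O is the exception: there the buffer, offset, and length do have alignment requirements, which is why David's advice below about aligned direct I/O matters for streaming workloads.)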
-Eric From owner-xfs@oss.sgi.com Thu Sep 28 08:23:00 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Sep 2006 08:23:01 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8SFMvaG006953 for ; Thu, 28 Sep 2006 08:22:58 -0700 Received: from [127.0.0.1] (sshgate.corp.sgi.com [198.149.36.12]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id AAA16842; Fri, 29 Sep 2006 00:12:22 +1000 Message-ID: <451BD844.4000509@melbourne.sgi.com> Date: Fri, 29 Sep 2006 00:12:20 +1000 From: David Chatterton Reply-To: chatz@melbourne.sgi.com Organization: SGI User-Agent: Thunderbird 1.5.0.7 (Windows/20060909) MIME-Version: 1.0 To: nagesh kollu CC: linux-xfs@oss.sgi.com Subject: Re: about performace References: <20060928065113.70796.qmail@web8511.mail.in.yahoo.com> In-Reply-To: <20060928065113.70796.qmail@web8511.mail.in.yahoo.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 9116 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: chatz@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 1188 Lines: 34 In general (no experience or suggestions for ARM processors), for video streaming: - preallocate space for the files when writing, see xfsctl(3) and XFS_IOC_RESVSP - use direct I/O, the larger the better. ie a MB or more at a time, but at least ensure the direct I/O requests are aligned on page or filesystem block boundaries, whichever is larger - use asynchronous I/O or multiple threads to keep the disks busy David nagesh kollu wrote: > hi,, > i'm new to xfs ..i want to know the performance details of xfs.. 
> how much time it will take to write /read ...can anyone done this on ARM processor...the block size is 2^n only in xfs ..but for video streaming the incoming packet is 188 bytes .the sector size is 512 ...so the i/o buffer size is 188*512 ...how can we do this in xfs....cany anyone send the xfs performance details.. > > > thanx! > nagesh.k > > > --------------------------------- > Find out what India is talking about on - Yahoo! Answers India > Send FREE SMS to your friend's mobile from Yahoo! Messenger Version 8. Get it NOW > > [[HTML alternate version deleted]] > -- David Chatterton XFS Engineering Manager SGI Australia From owner-xfs@oss.sgi.com Thu Sep 28 08:59:43 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Sep 2006 08:59:47 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8SFxgaG012355 for ; Thu, 28 Sep 2006 08:59:43 -0700 X-ASG-Debug-ID: 1159459144-23892-716-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from smtp113.sbc.mail.mud.yahoo.com (smtp113.sbc.mail.mud.yahoo.com [68.142.198.212]) by cuda.sgi.com (Spam Firewall) with SMTP id 8B53B463390 for ; Thu, 28 Sep 2006 08:59:04 -0700 (PDT) Received: (qmail 28787 invoked from network); 28 Sep 2006 15:32:20 -0000 Received: from unknown (HELO stupidest.org) (cwedgwood@sbcglobal.net@70.132.27.77 with login) by smtp113.sbc.mail.mud.yahoo.com with SMTP; 28 Sep 2006 15:32:20 -0000 Received: by tuatara.stupidest.org (Postfix, from userid 10000) id 7859E18079B1; Thu, 28 Sep 2006 08:32:18 -0700 (PDT) Date: Thu, 28 Sep 2006 08:32:18 -0700 From: Chris Wedgwood To: Tim Shimmin Cc: Shailendra Tripathi , Rene Salmon , xfs@oss.sgi.com, Eric Sandeen X-ASG-Orig-Subj: Re: LVM and XFS cannot set blocksize on block device Subject: Re: LVM and XFS cannot set blocksize on block device Message-ID: <20060928153218.GA26366@tuatara.stupidest.org> References: <45185424.2030707@tulane.edu> 
<20060926001737.GA10224@tuatara.stupidest.org> <45193204.3030500@tulane.edu> <20060926224053.GA31542@tuatara.stupidest.org> <451A669D.9020503@agami.com> <451BA2AF.9090703@sgi.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <451BA2AF.9090703@sgi.com> X-Barracuda-Spam-Score: 0.00 X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests= X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.22114 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- X-archive-position: 9117 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: cw@f00f.org Precedence: bulk X-list: xfs Content-Length: 545 Lines: 14 On Thu, Sep 28, 2006 at 08:23:43PM +1000, Tim Shimmin wrote: > I'll have a look soon at passing the mkfs.xfs -s option thru to > libxfs which is consistent with the existing code. (following up on something mentioned off the list) When you do this change please consider *not* making the code fallback to a different blocksize if the ioctl fails when "-s size=" is given. The logic here is that if someone clearly wants a specific value and if that cannot be met it should error out with a suitable message, not silently do something else. 
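The policy Chris describes can be sketched in C (names are illustrative, not the actual libxfs code; set_dev_blksize() stands in for the BLKBSZSET ioctl, which fails when asked for less than the device's minimum):

```c
#include <errno.h>
#include <stdio.h>

/* Stand-in for the BLKBSZSET ioctl: the device rejects sizes below
 * its minimum (e.g. a volume that cannot do less than 4k sectors). */
static int set_dev_blksize(int dev_min, int size)
{
	return size >= dev_min ? 0 : -EINVAL;
}

/*
 * Apply a sector size.  If the user explicitly forced "-s size=" and
 * it cannot be honoured, error out with a message; only an unforced
 * default may silently fall back to the device minimum.
 */
int apply_sector_size(int dev_min, int size, int forced)
{
	if (set_dev_blksize(dev_min, size) == 0)
		return 0;
	if (forced) {
		fprintf(stderr, "cannot set blocksize %d on block device\n",
			size);
		return -EINVAL;		/* no silent fallback */
	}
	return set_dev_blksize(dev_min, dev_min);	/* fallback is fine */
}
```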
From owner-xfs@oss.sgi.com Thu Sep 28 10:07:54 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Sep 2006 10:07:57 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8SH7raG022359 for ; Thu, 28 Sep 2006 10:07:54 -0700 X-ASG-Debug-ID: 1159458969-24342-583-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from tabit.netstar.se (tabit.netstar.se [195.178.179.33]) by cuda.sgi.com (Spam Firewall) with SMTP id 6A8BD461DC9 for ; Thu, 28 Sep 2006 08:56:09 -0700 (PDT) Received: (qmail 26459 invoked by uid 1262); 28 Sep 2006 15:49:25 -0000 Received: from lindqvist@netstar.se by tabit.netstar.se by uid 1255 with qmail-scanner-1.21 (fprot(2004-03-11)/avp(2004-03-11)/orion(2004-03-11). Clear:RC:1(192.168.0.78):. Processed in 0.043489 secs); 28 Sep 2006 15:49:25 -0000 Received: from unknown (HELO client29.intranet.netstar.se) (192.168.0.78) by 0 with SMTP; 28 Sep 2006 15:49:25 -0000 X-ASG-Orig-Subj: xfs_repair said "cache_purge: shake on cache 0x80f5288 left 1 inodes!?" Subject: xfs_repair said "cache_purge: shake on cache 0x80f5288 left 1 inodes!?" 
From: =?ISO-8859-1?Q?H=E5kan?= Lindqvist To: xfs@oss.sgi.com Content-Type: multipart/signed; micalg=sha1; protocol="application/x-pkcs7-signature"; boundary="=-kNkP7lcRRzSX8uDp4v28" Date: Thu, 28 Sep 2006 17:49:25 +0200 Message-Id: <1159458565.28220.12.camel@lasse> Mime-Version: 1.0 X-Mailer: Evolution 2.6.1 X-Barracuda-Spam-Score: 0.33 X-Barracuda-Spam-Status: No, SCORE=0.33 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=PLING_QUERY X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.22114 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.33 PLING_QUERY Subject has exclamation mark and question mark X-archive-position: 9118 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: lindqvist@netstar.se Precedence: bulk X-list: xfs Content-Length: 5281 Lines: 106 --=-kNkP7lcRRzSX8uDp4v28 Content-Type: text/plain; charset=ISO-8859-15 Content-Transfer-Encoding: quoted-printable Hi! When I ran xfs_repair (2.8.11) to fix a filesystem that had been bitten by the Linux 2.6.17.x (x < 7) bug, the following happened: All seemed to go well: xfs_repair found the problematic directory inode and rebuilt it. At the end (right before printing "done"), it did however say "cache_purge: shake on cache 0x80f5288 left 1 inodes!?" thrice. The filesystem appears to be fine so far (and the directory that previously caused the filesystem to shut down is ok again), but that error/warning message sounded a little scary. Is this problem harmless or should I be worried? Regards, Håkan Lindqvist (Please CC me on any replies as I'm not subscribed to the list.)
--=-kNkP7lcRRzSX8uDp4v28 Content-Type: application/x-pkcs7-signature; name=smime.p7s Content-Disposition: attachment; filename=smime.p7s Content-Transfer-Encoding: base64 [base64-encoded S/MIME signature attachment omitted] --=-kNkP7lcRRzSX8uDp4v28-- From owner-xfs@oss.sgi.com Thu Sep 28 17:48:19 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Sep 2006 17:48:22 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8T0mGaG015533 for ; Thu, 28 Sep 2006 17:48:18 -0700 Received: from pcbnaujok (pc-bnaujok.melbourne.sgi.com [134.14.55.58]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA01609; Fri, 29 Sep 2006 10:47:31 +1000 Message-Id: <200609290047.KAA01609@larry.melbourne.sgi.com> From: "Barry Naujok" To: "=?iso-8859-1?Q?'H=E5kan_Lindqvist'?=" , Subject: RE: xfs_repair said "cache_purge: shake on cache 0x80f5288 left 1 inodes!?"
Date: Fri, 29 Sep 2006 10:54:24 +1000 MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" X-Mailer: Microsoft Office Outlook, Build 11.0.6353 Thread-Index: AcbjIK0l+VsNXr65TtW50EjcCbwHkAAQL21w X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.2962 In-Reply-To: <1159458565.28220.12.camel@lasse> Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id k8T0mKaG015540 X-archive-position: 9120 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: bnaujok@melbourne.sgi.com Precedence: bulk X-list: xfs Content-Length: 1085 Lines: 36 Hi, > -----Original Message----- > From: xfs-bounce@oss.sgi.com [mailto:xfs-bounce@oss.sgi.com] > On Behalf Of Håkan Lindqvist > Sent: Friday, 29 September 2006 1:49 AM > To: xfs@oss.sgi.com > Subject: xfs_repair said "cache_purge: shake on cache > 0x80f5288 left 1 inodes!?" > > Hi! > > When I ran xfs_repair (2.8.11) to fix a filesystem that had > been bitten > by the Linux 2.6.17.<7 bug the following happened: > > All seemed to go well, xfs_repair found the problematic > directory inode > and rebuilded it. > > At the end (right before printing "done"), it did however say > "cache_purge: shake on cache 0x80f5288 left 1 inodes!?" thrice. > > The filesystem appears to be fine so far (and the directory that > previously caused the filesystem to shut down is ok again), but that > error/warning message sounded a little scary. > > Is this problem harmless or should I be worried? This cache_purge message is harmless. It's informing us that some buffers still had outstanding reference counts at purge time - references taken when they were read but never released. All dirty buffers do get written back, so nothing is lost. Barry.
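A toy model of the purge-time check Barry describes (hypothetical types, not the actual libxfs cache code): entries still referenced at purge time are merely counted and warned about rather than freed.

```c
/*
 * Hypothetical sketch: cache_node and cache_purge_leftover() are
 * illustrative stand-ins, not the real libxfs cache structures.
 * A nonzero refcount means some reader took a reference and never
 * released it; purge reports such entries instead of freeing them.
 */
struct cache_node {
	int refcount;		/* >0: still referenced at purge time */
};

/* Return how many nodes a purge would have to leave behind. */
int cache_purge_leftover(const struct cache_node *nodes, int count)
{
	int left = 0, i;

	for (i = 0; i < count; i++)
		if (nodes[i].refcount > 0)
			left++;		/* warn, don't free */
	return left;
}
```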
From owner-xfs@oss.sgi.com Thu Sep 28 20:29:39 2006 Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Sep 2006 20:29:43 -0700 (PDT) Received: from cuda.sgi.com (cuda1.sgi.com [192.48.168.28]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8T3TcaG005313 for ; Thu, 28 Sep 2006 20:29:39 -0700 X-ASG-Debug-ID: 1159500537-11630-424-0 X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com (Spam Firewall) with ESMTP id 59B1B456F15 for ; Thu, 28 Sep 2006 20:28:57 -0700 (PDT) Received: by sandeen.net (Postfix, from userid 500) id 8DA9C18001A5E; Thu, 28 Sep 2006 22:28:56 -0500 (CDT) To: xfs@oss.sgi.com X-ASG-Orig-Subj: [PATCH 1/2] Make stuff static Subject: [PATCH 1/2] Make stuff static Message-Id: <20060929032856.8DA9C18001A5E@sandeen.net> Date: Thu, 28 Sep 2006 22:28:56 -0500 (CDT) From: sandeen@sandeen.net X-Barracuda-Spam-Score: 0.55 X-Barracuda-Spam-Status: No, SCORE=0.55 using per-user scores of TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=NO_REAL_NAME X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.22155 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.55 NO_REAL_NAME From: does not include a real name X-archive-position: 9121 X-ecartis-version: Ecartis v1.0.0 Sender: xfs-bounce@oss.sgi.com Errors-to: xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: xfs Content-Length: 27636 Lines: 816 Make things static which can be. 
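For readers wondering what the patch buys, a minimal illustration of file-scope static (not taken from the patch itself): internal linkage hides a symbol from other translation units, prevents name clashes across files, and lets the compiler inline or discard it.

```c
/*
 * Minimal illustration of internal linkage (hypothetical example,
 * not from the patch).  'static' makes counter and bump() invisible
 * to other translation units, so another file may define its own
 * 'counter' without a clash, and the compiler may inline bump().
 */
static int counter;		/* private to this file */

static int bump(void)		/* likewise not exported */
{
	return ++counter;
}

int bump_twice(void)		/* the one symbol with external linkage */
{
	bump();
	return bump();
}
```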
linux-2.4/xfs_vfs.c | 2 - linux-2.4/xfs_vfs.h | 1 linux-2.4/xfs_vnode.c | 2 - linux-2.6/xfs_vfs.c | 2 - linux-2.6/xfs_vfs.h | 1 linux-2.6/xfs_vnode.c | 2 - quota/xfs_dquot.c | 2 - quota/xfs_qm_bhv.c | 2 - xfs_attr.c | 3 +- xfs_attr.h | 4 --- xfs_bmap.c | 5 +--- xfs_bmap_btree.c | 52 ++++++++++++++++++++++++++------------------------ xfs_bmap_btree.h | 10 --------- xfs_btree.c | 4 ++- xfs_btree.h | 12 +---------- xfs_dir2.h | 4 --- xfs_dir2_data.h | 2 - xfs_dir2_node.h | 2 - xfs_inode.c | 49 ++++++++++++++++++++++++++++++----------------- xfs_inode.h | 29 +-------------------------- xfs_log_priv.h | 7 ------ xfs_log_recover.c | 10 ++++----- xfs_mount.h | 2 - xfs_quota.h | 2 - xfs_trans_buf.c | 2 - 25 files changed, 84 insertions(+), 129 deletions(-) Signed-off-by: Eric Sandeen Index: xfs-linux/linux-2.4/xfs_vnode.c =================================================================== --- xfs-linux.orig/linux-2.4/xfs_vnode.c +++ xfs-linux/linux-2.4/xfs_vnode.c @@ -17,7 +17,7 @@ #include "xfs.h" uint64_t vn_generation; /* vnode generation number */ -spinlock_t vnumber_lock = SPIN_LOCK_UNLOCKED; +static spinlock_t vnumber_lock = SPIN_LOCK_UNLOCKED; /* * Dedicated vnode inactive/reclaim sync semaphores. Index: xfs-linux/linux-2.6/xfs_vnode.c =================================================================== --- xfs-linux.orig/linux-2.6/xfs_vnode.c +++ xfs-linux/linux-2.6/xfs_vnode.c @@ -18,7 +18,7 @@ #include "xfs.h" uint64_t vn_generation; /* vnode generation number */ -DEFINE_SPINLOCK(vnumber_lock); +static DEFINE_SPINLOCK(vnumber_lock); /* * Dedicated vnode inactive/reclaim sync semaphores. 
Index: xfs-linux/xfs_inode.c =================================================================== --- xfs-linux.orig/xfs_inode.c +++ xfs-linux/xfs_inode.c @@ -65,7 +65,22 @@ STATIC int xfs_iflush_int(xfs_inode_t *, STATIC int xfs_iformat_local(xfs_inode_t *, xfs_dinode_t *, int, int); STATIC int xfs_iformat_extents(xfs_inode_t *, xfs_dinode_t *, int); STATIC int xfs_iformat_btree(xfs_inode_t *, xfs_dinode_t *, int); - +STATIC void xfs_iext_add_indirect_multi(xfs_ifork_t *, int, xfs_extnum_t, int); +STATIC void xfs_iext_remove_inline(xfs_ifork_t *, xfs_extnum_t, int); +STATIC void xfs_iext_remove_direct(xfs_ifork_t *, xfs_extnum_t, int); +STATIC void xfs_iext_remove_indirect(xfs_ifork_t *, xfs_extnum_t, int); +STATIC void xfs_iext_inline_to_direct(xfs_ifork_t *, int); +STATIC void xfs_iext_realloc_direct(xfs_ifork_t *, int); +STATIC void xfs_iext_realloc_indirect(xfs_ifork_t *, int); +STATIC void xfs_iext_direct_to_inline(xfs_ifork_t *, xfs_extnum_t); +STATIC void xfs_iext_irec_init(xfs_ifork_t *); +STATIC void xfs_iext_irec_remove(xfs_ifork_t *, int); +STATIC void xfs_iext_irec_compact(xfs_ifork_t *); +STATIC void xfs_iext_irec_compact_pages(xfs_ifork_t *); +STATIC void xfs_iext_irec_compact_full(xfs_ifork_t *); +STATIC void xfs_iext_irec_update_extoffs(xfs_ifork_t *, int, int); +STATIC xfs_ext_irec_t *xfs_iext_bno_to_irec(xfs_ifork_t *, xfs_fileoff_t, int *); +STATIC xfs_ext_irec_t *xfs_iext_irec_new(xfs_ifork_t *, int); #ifdef DEBUG /* @@ -105,7 +120,7 @@ xfs_validate_extents( * unlinked field of 0. */ #if defined(DEBUG) -void +STATIC void xfs_inobp_check( xfs_mount_t *mp, xfs_buf_t *bp) @@ -1268,7 +1283,7 @@ xfs_ialloc( * at least do it for regular files. 
*/ #ifdef DEBUG -void +STATIC void xfs_isize_check( xfs_mount_t *mp, xfs_inode_t *ip, @@ -3621,7 +3636,7 @@ xfs_iaccess( /* * xfs_iroundup: round up argument to next power of two */ -uint +STATIC uint xfs_iroundup( uint v) { @@ -3834,7 +3849,7 @@ xfs_iext_add( * | count | | nex2 | nex2 - number of extents after idx + count * |-------| |-------| */ -void +STATIC void xfs_iext_add_indirect_multi( xfs_ifork_t *ifp, /* inode fork pointer */ int erp_idx, /* target extent irec index */ @@ -3971,7 +3986,7 @@ xfs_iext_remove( * This removes ext_diff extents from the inline buffer, beginning * at extent index idx. */ -void +STATIC void xfs_iext_remove_inline( xfs_ifork_t *ifp, /* inode fork pointer */ xfs_extnum_t idx, /* index to begin removing exts */ @@ -4008,7 +4023,7 @@ xfs_iext_remove_inline( * at idx + ext_diff up in the list to overwrite the records being * removed, then remove the extra space via kmem_realloc. */ -void +STATIC void xfs_iext_remove_direct( xfs_ifork_t *ifp, /* inode fork pointer */ xfs_extnum_t idx, /* index to begin removing exts */ @@ -4060,7 +4075,7 @@ xfs_iext_remove_direct( * | | | nex2 | nex2 - number of extents after idx + count * |-------| |-------| */ -void +STATIC void xfs_iext_remove_indirect( xfs_ifork_t *ifp, /* inode fork pointer */ xfs_extnum_t idx, /* index to begin removing extents */ @@ -4126,7 +4141,7 @@ xfs_iext_remove_indirect( /* * Create, destroy, or resize a linear (direct) block of extents. */ -void +STATIC void xfs_iext_realloc_direct( xfs_ifork_t *ifp, /* inode fork pointer */ int new_size) /* new size of extents */ @@ -4187,7 +4202,7 @@ xfs_iext_realloc_direct( /* * Switch from linear (direct) extent records to inline buffer. */ -void +STATIC void xfs_iext_direct_to_inline( xfs_ifork_t *ifp, /* inode fork pointer */ xfs_extnum_t nextents) /* number of extents in file */ @@ -4214,7 +4229,7 @@ xfs_iext_direct_to_inline( * if_bytes here. It is the caller's responsibility to update * if_bytes upon return. 
*/ -void +STATIC void xfs_iext_inline_to_direct( xfs_ifork_t *ifp, /* inode fork pointer */ int new_size) /* number of extents in file */ @@ -4234,7 +4249,7 @@ xfs_iext_inline_to_direct( /* * Resize an extent indirection array to new_size bytes. */ -void +STATIC void xfs_iext_realloc_indirect( xfs_ifork_t *ifp, /* inode fork pointer */ int new_size) /* new indirection array size */ @@ -4259,7 +4274,7 @@ xfs_iext_realloc_indirect( /* * Switch from indirection array to linear (direct) extent allocations. */ -void +STATIC void xfs_iext_indirect_to_direct( xfs_ifork_t *ifp) /* inode fork pointer */ { @@ -4386,7 +4401,7 @@ xfs_iext_bno_to_ext( * extent record for filesystem block bno. Store the index of the * target irec in *erp_idxp. */ -xfs_ext_irec_t * /* pointer to found extent record */ +STATIC xfs_ext_irec_t * /* pointer to found extent record */ xfs_iext_bno_to_irec( xfs_ifork_t *ifp, /* inode fork pointer */ xfs_fileoff_t bno, /* block number to search for */ @@ -4611,7 +4626,7 @@ xfs_iext_irec_remove( * Partial Compaction: Extents occupy > 10% and < 50% of allocated space * No Compaction: Extents occupy at least 50% of allocated space */ -void +STATIC void xfs_iext_irec_compact( xfs_ifork_t *ifp) /* inode fork pointer */ { @@ -4639,7 +4654,7 @@ xfs_iext_irec_compact( /* * Combine extents from neighboring extent pages. */ -void +STATIC void xfs_iext_irec_compact_pages( xfs_ifork_t *ifp) /* inode fork pointer */ { @@ -4676,7 +4691,7 @@ xfs_iext_irec_compact_pages( /* * Fully compact the extent records managed by the indirection array. 
*/ -void +STATIC void xfs_iext_irec_compact_full( xfs_ifork_t *ifp) /* inode fork pointer */ { Index: xfs-linux/xfs_inode.h =================================================================== --- xfs-linux.orig/xfs_inode.h +++ xfs-linux/xfs_inode.h @@ -460,7 +460,6 @@ int xfs_iextents_copy(xfs_inode_t *, xf int xfs_iflush(xfs_inode_t *, uint); void xfs_iflush_all(struct xfs_mount *); int xfs_iaccess(xfs_inode_t *, mode_t, cred_t *); -uint xfs_iroundup(uint); void xfs_ichgtime(xfs_inode_t *, int); xfs_fsize_t xfs_file_last_byte(xfs_inode_t *); void xfs_lock_inodes(xfs_inode_t **, int, int, uint); @@ -473,41 +472,17 @@ xfs_bmbt_rec_t *xfs_iext_get_ext(xfs_ifo void xfs_iext_insert(xfs_ifork_t *, xfs_extnum_t, xfs_extnum_t, xfs_bmbt_irec_t *); void xfs_iext_add(xfs_ifork_t *, xfs_extnum_t, int); -void xfs_iext_add_indirect_multi(xfs_ifork_t *, int, xfs_extnum_t, int); void xfs_iext_remove(xfs_ifork_t *, xfs_extnum_t, int); -void xfs_iext_remove_inline(xfs_ifork_t *, xfs_extnum_t, int); -void xfs_iext_remove_direct(xfs_ifork_t *, xfs_extnum_t, int); -void xfs_iext_remove_indirect(xfs_ifork_t *, xfs_extnum_t, int); -void xfs_iext_realloc_direct(xfs_ifork_t *, int); -void xfs_iext_realloc_indirect(xfs_ifork_t *, int); -void xfs_iext_indirect_to_direct(xfs_ifork_t *); -void xfs_iext_direct_to_inline(xfs_ifork_t *, xfs_extnum_t); -void xfs_iext_inline_to_direct(xfs_ifork_t *, int); void xfs_iext_destroy(xfs_ifork_t *); xfs_bmbt_rec_t *xfs_iext_bno_to_ext(xfs_ifork_t *, xfs_fileoff_t, int *); -xfs_ext_irec_t *xfs_iext_bno_to_irec(xfs_ifork_t *, xfs_fileoff_t, int *); xfs_ext_irec_t *xfs_iext_idx_to_irec(xfs_ifork_t *, xfs_extnum_t *, int *, int); -void xfs_iext_irec_init(xfs_ifork_t *); -xfs_ext_irec_t *xfs_iext_irec_new(xfs_ifork_t *, int); -void xfs_iext_irec_remove(xfs_ifork_t *, int); -void xfs_iext_irec_compact(xfs_ifork_t *); -void xfs_iext_irec_compact_pages(xfs_ifork_t *); -void xfs_iext_irec_compact_full(xfs_ifork_t *); -void xfs_iext_irec_update_extoffs(xfs_ifork_t 
*, int, int); #define xfs_ipincount(ip) ((unsigned int) atomic_read(&ip->i_pincount)) -#ifdef DEBUG -void xfs_isize_check(struct xfs_mount *, xfs_inode_t *, xfs_fsize_t); -#else /* DEBUG */ +#ifndef DEBUG #define xfs_isize_check(mp, ip, isize) -#endif /* DEBUG */ - -#if defined(DEBUG) -void xfs_inobp_check(struct xfs_mount *, struct xfs_buf *); -#else #define xfs_inobp_check(mp, bp) -#endif /* DEBUG */ +#endif /* !DEBUG */ extern struct kmem_zone *xfs_chashlist_zone; extern struct kmem_zone *xfs_ifork_zone; Index: xfs-linux/xfs_attr.h =================================================================== --- xfs-linux.orig/xfs_attr.h +++ xfs-linux/xfs_attr.h @@ -59,7 +59,6 @@ typedef struct attrnames { #define ATTR_NAMECOUNT 4 extern struct attrnames attr_user; extern struct attrnames attr_secure; -extern struct attrnames attr_system; extern struct attrnames attr_trusted; extern struct attrnames *attr_namespaces[ATTR_NAMECOUNT]; @@ -161,11 +160,8 @@ struct xfs_da_args; */ int xfs_attr_get(bhv_desc_t *, const char *, char *, int *, int, struct cred *); int xfs_attr_set(bhv_desc_t *, const char *, char *, int, int, struct cred *); -int xfs_attr_set_int(struct xfs_inode *, const char *, int, char *, int, int); int xfs_attr_remove(bhv_desc_t *, const char *, int, struct cred *); -int xfs_attr_remove_int(struct xfs_inode *, const char *, int, int); int xfs_attr_list(bhv_desc_t *, char *, int, int, struct attrlist_cursor_kern *, struct cred *); -int xfs_attr_list_int(struct xfs_attr_list_context *); int xfs_attr_inactive(struct xfs_inode *dp); int xfs_attr_shortform_getvalue(struct xfs_da_args *); Index: xfs-linux/xfs_bmap_btree.c =================================================================== --- xfs-linux.orig/xfs_bmap_btree.c +++ xfs-linux/xfs_bmap_btree.c @@ -60,7 +60,17 @@ STATIC int xfs_bmbt_rshift(xfs_btree_cur STATIC int xfs_bmbt_split(xfs_btree_cur_t *, int, xfs_fsblock_t *, __uint64_t *, xfs_btree_cur_t **, int *); STATIC int xfs_bmbt_updkey(xfs_btree_cur_t *, 
xfs_bmbt_key_t *, int); - +STATIC xfs_bmbt_block_t *xfs_bmbt_get_block(struct xfs_btree_cur *cur, + int, struct xfs_buf **bpp); +#ifndef XFS_NATIVE_HOST +STATIC void xfs_bmbt_disk_set_all(xfs_bmbt_rec_t *r, xfs_bmbt_irec_t *s); +STATIC void xfs_bmbt_disk_set_allf(xfs_bmbt_rec_t *r, xfs_fileoff_t o, + xfs_fsblock_t b, xfs_filblks_t c, xfs_exntst_t v); +#ifdef DEBUG +STATIC xfs_fsblock_t xfs_bmbt_disk_get_startblock(xfs_bmbt_rec_t *r); +STATIC xfs_exntst_t xfs_bmbt_disk_get_state(xfs_bmbt_rec_t *r); +#endif +#endif #if defined(XFS_BMBT_TRACE) @@ -693,18 +703,14 @@ xfs_bmbt_get_rec( { xfs_bmbt_block_t *block; xfs_buf_t *bp; -#ifdef DEBUG int error; -#endif int ptr; xfs_bmbt_rec_t *rp; block = xfs_bmbt_get_block(cur, 0, &bp); ptr = cur->bc_ptrs[0]; -#ifdef DEBUG if ((error = xfs_btree_check_lblock(cur, block, 0, bp))) return error; -#endif if (ptr > be16_to_cpu(block->bb_numrecs) || ptr <= 0) { *stat = 0; return 0; @@ -1913,7 +1919,7 @@ xfs_bmbt_get_all( * Get the block pointer for the given level of the cursor. * Fill in the buffer pointer, if applicable. */ -xfs_bmbt_block_t * +STATIC xfs_bmbt_block_t * xfs_bmbt_get_block( xfs_btree_cur_t *cur, int level, @@ -2015,10 +2021,11 @@ xfs_bmbt_disk_get_blockcount( return (xfs_filblks_t)(INT_GET(r->l1, ARCH_CONVERT) & XFS_MASK64LO(21)); } +#ifdef DEBUG /* * Extract the startblock field from an on disk bmap extent record. 
*/ -xfs_fsblock_t +STATIC xfs_fsblock_t xfs_bmbt_disk_get_startblock( xfs_bmbt_rec_t *r) { @@ -2026,19 +2033,27 @@ xfs_bmbt_disk_get_startblock( return (((xfs_fsblock_t)INT_GET(r->l0, ARCH_CONVERT) & XFS_MASK64LO(9)) << 43) | (((xfs_fsblock_t)INT_GET(r->l1, ARCH_CONVERT)) >> 21); #else -#ifdef DEBUG xfs_dfsbno_t b; b = (((xfs_dfsbno_t)INT_GET(r->l0, ARCH_CONVERT) & XFS_MASK64LO(9)) << 43) | (((xfs_dfsbno_t)INT_GET(r->l1, ARCH_CONVERT)) >> 21); ASSERT((b >> 32) == 0 || ISNULLDSTARTBLOCK(b)); return (xfs_fsblock_t)b; -#else /* !DEBUG */ - return (xfs_fsblock_t)(((xfs_dfsbno_t)INT_GET(r->l1, ARCH_CONVERT)) >> 21); -#endif /* DEBUG */ #endif /* XFS_BIG_BLKNOS */ } +STATIC xfs_exntst_t +xfs_bmbt_disk_get_state( + xfs_bmbt_rec_t *r) +{ + int ext_flag; + + ext_flag = (int)((INT_GET(r->l0, ARCH_CONVERT)) >> (64 - BMBT_EXNTFLAG_BITLEN)); + return xfs_extent_state(xfs_bmbt_disk_get_blockcount(r), + ext_flag); +} +#endif /* DEBUG */ + /* * Extract the startoff field from a disk format bmap extent record. */ @@ -2049,17 +2064,6 @@ xfs_bmbt_disk_get_startoff( return ((xfs_fileoff_t)INT_GET(r->l0, ARCH_CONVERT) & XFS_MASK64LO(64 - BMBT_EXNTFLAG_BITLEN)) >> 9; } - -xfs_exntst_t -xfs_bmbt_disk_get_state( - xfs_bmbt_rec_t *r) -{ - int ext_flag; - - ext_flag = (int)((INT_GET(r->l0, ARCH_CONVERT)) >> (64 - BMBT_EXNTFLAG_BITLEN)); - return xfs_extent_state(xfs_bmbt_disk_get_blockcount(r), - ext_flag); -} #endif /* XFS_NATIVE_HOST */ @@ -2506,7 +2510,7 @@ xfs_bmbt_set_allf( /* * Set all the fields in a bmap extent record from the uncompressed form. */ -void +STATIC void xfs_bmbt_disk_set_all( xfs_bmbt_rec_t *r, xfs_bmbt_irec_t *s) @@ -2548,7 +2552,7 @@ xfs_bmbt_disk_set_all( /* * Set all the fields in a disk format bmap extent record from the arguments. 
*/ -void +STATIC void xfs_bmbt_disk_set_allf( xfs_bmbt_rec_t *r, xfs_fileoff_t o, Index: xfs-linux/xfs_bmap_btree.h =================================================================== --- xfs-linux.orig/xfs_bmap_btree.h +++ xfs-linux/xfs_bmap_btree.h @@ -306,8 +306,6 @@ extern void xfs_bmdr_to_bmbt(xfs_bmdr_bl extern int xfs_bmbt_decrement(struct xfs_btree_cur *, int, int *); extern int xfs_bmbt_delete(struct xfs_btree_cur *, int *); extern void xfs_bmbt_get_all(xfs_bmbt_rec_t *r, xfs_bmbt_irec_t *s); -extern xfs_bmbt_block_t *xfs_bmbt_get_block(struct xfs_btree_cur *cur, - int, struct xfs_buf **bpp); extern xfs_filblks_t xfs_bmbt_get_blockcount(xfs_bmbt_rec_t *r); extern xfs_fsblock_t xfs_bmbt_get_startblock(xfs_bmbt_rec_t *r); extern xfs_fileoff_t xfs_bmbt_get_startoff(xfs_bmbt_rec_t *r); @@ -315,9 +313,7 @@ extern xfs_exntst_t xfs_bmbt_get_state(x #ifndef XFS_NATIVE_HOST extern void xfs_bmbt_disk_get_all(xfs_bmbt_rec_t *r, xfs_bmbt_irec_t *s); -extern xfs_exntst_t xfs_bmbt_disk_get_state(xfs_bmbt_rec_t *r); extern xfs_filblks_t xfs_bmbt_disk_get_blockcount(xfs_bmbt_rec_t *r); -extern xfs_fsblock_t xfs_bmbt_disk_get_startblock(xfs_bmbt_rec_t *r); extern xfs_fileoff_t xfs_bmbt_disk_get_startoff(xfs_bmbt_rec_t *r); #else #define xfs_bmbt_disk_get_all(r, s) xfs_bmbt_get_all(r, s) @@ -351,11 +347,7 @@ extern void xfs_bmbt_set_startblock(xfs_ extern void xfs_bmbt_set_startoff(xfs_bmbt_rec_t *r, xfs_fileoff_t v); extern void xfs_bmbt_set_state(xfs_bmbt_rec_t *r, xfs_exntst_t v); -#ifndef XFS_NATIVE_HOST -extern void xfs_bmbt_disk_set_all(xfs_bmbt_rec_t *r, xfs_bmbt_irec_t *s); -extern void xfs_bmbt_disk_set_allf(xfs_bmbt_rec_t *r, xfs_fileoff_t o, - xfs_fsblock_t b, xfs_filblks_t c, xfs_exntst_t v); -#else +#ifdef XFS_NATIVE_HOST #define xfs_bmbt_disk_set_all(r, s) xfs_bmbt_set_all(r, s) #define xfs_bmbt_disk_set_allf(r, o, b, c, v) xfs_bmbt_set_allf(r, o, b, c, v) #endif /* XFS_NATIVE_HOST */ Index: xfs-linux/xfs_dir2.h 
=================================================================== --- xfs-linux.orig/xfs_dir2.h +++ xfs-linux/xfs_dir2.h @@ -107,10 +107,6 @@ extern int xfs_dir_ino_validate(struct x */ extern int xfs_dir2_grow_inode(struct xfs_da_args *args, int space, xfs_dir2_db_t *dbp); -extern int xfs_dir2_isblock(struct xfs_trans *tp, struct xfs_inode *dp, - int *vp); -extern int xfs_dir2_isleaf(struct xfs_trans *tp, struct xfs_inode *dp, - int *vp); extern int xfs_dir2_shrink_inode(struct xfs_da_args *args, xfs_dir2_db_t db, struct xfs_dabuf *bp); Index: xfs-linux/xfs_dir2_data.h =================================================================== --- xfs-linux.orig/xfs_dir2_data.h +++ xfs-linux/xfs_dir2_data.h @@ -161,8 +161,6 @@ extern void xfs_dir2_data_check(struct x #else #define xfs_dir2_data_check(dp,bp) #endif -extern xfs_dir2_data_free_t *xfs_dir2_data_freefind(xfs_dir2_data_t *d, - xfs_dir2_data_unused_t *dup); extern xfs_dir2_data_free_t *xfs_dir2_data_freeinsert(xfs_dir2_data_t *d, xfs_dir2_data_unused_t *dup, int *loghead); extern void xfs_dir2_data_freescan(struct xfs_mount *mp, xfs_dir2_data_t *d, Index: xfs-linux/xfs_dir2_node.h =================================================================== --- xfs-linux.orig/xfs_dir2_node.h +++ xfs-linux/xfs_dir2_node.h @@ -77,8 +77,6 @@ xfs_dir2_db_to_fdindex(struct xfs_mount return ((db) % XFS_DIR2_MAX_FREE_BESTS(mp)); } -extern void xfs_dir2_free_log_bests(struct xfs_trans *tp, struct xfs_dabuf *bp, - int first, int last); extern int xfs_dir2_leaf_to_node(struct xfs_da_args *args, struct xfs_dabuf *lbp); extern xfs_dahash_t xfs_dir2_leafn_lasthash(struct xfs_dabuf *bp, int *count); Index: xfs-linux/xfs_log_priv.h =================================================================== --- xfs-linux.orig/xfs_log_priv.h +++ xfs-linux/xfs_log_priv.h @@ -484,18 +484,11 @@ typedef struct log { /* common routines */ extern xfs_lsn_t xlog_assign_tail_lsn(struct xfs_mount *mp); -extern int xlog_find_tail(xlog_t *log, - 
xfs_daddr_t *head_blk, - xfs_daddr_t *tail_blk); extern int xlog_recover(xlog_t *log); extern int xlog_recover_finish(xlog_t *log, int mfsi_flags); extern void xlog_pack_data(xlog_t *log, xlog_in_core_t *iclog, int); extern void xlog_recover_process_iunlinks(xlog_t *log); -extern struct xfs_buf *xlog_get_bp(xlog_t *, int); -extern void xlog_put_bp(struct xfs_buf *); -extern int xlog_bread(xlog_t *, xfs_daddr_t, int, struct xfs_buf *); - /* iclog tracing */ #define XLOG_TRACE_GRAB_FLUSH 1 #define XLOG_TRACE_REL_FLUSH 2 Index: xfs-linux/xfs_log_recover.c =================================================================== --- xfs-linux.orig/xfs_log_recover.c +++ xfs-linux/xfs_log_recover.c @@ -69,7 +69,7 @@ STATIC void xlog_recover_check_ail(xfs_m ((bbs + (log)->l_sectbb_mask + 1) & ~(log)->l_sectbb_mask) : (bbs) ) #define XLOG_SECTOR_ROUNDDOWN_BLKNO(log, bno) ((bno) & ~(log)->l_sectbb_mask) -xfs_buf_t * +STATIC xfs_buf_t * xlog_get_bp( xlog_t *log, int num_bblks) @@ -84,7 +84,7 @@ xlog_get_bp( return xfs_buf_get_noaddr(BBTOB(num_bblks), log->l_mp->m_logdev_targp); } -void +STATIC void xlog_put_bp( xfs_buf_t *bp) { @@ -95,7 +95,7 @@ xlog_put_bp( /* * nbblks should be uint, but oh well. Just want to catch that 32-bit length. */ -int +STATIC int xlog_bread( xlog_t *log, xfs_daddr_t blk_no, @@ -293,7 +293,7 @@ xlog_recover_iodone( * Note that the algorithm can not be perfect because the disk will not * necessarily be perfect. */ -int +STATIC int xlog_find_cycle_start( xlog_t *log, xfs_buf_t *bp, @@ -777,7 +777,7 @@ xlog_find_head( * We could speed up search by using current head_blk buffer, but it is not * available. 
*/ -int +STATIC int xlog_find_tail( xlog_t *log, xfs_daddr_t *head_blk, Index: xfs-linux/xfs_mount.h =================================================================== --- xfs-linux.orig/xfs_mount.h +++ xfs-linux/xfs_mount.h @@ -586,8 +586,6 @@ extern void xfs_unmountfs_close(xfs_moun extern int xfs_unmountfs_writesb(xfs_mount_t *); extern int xfs_unmount_flush(xfs_mount_t *, int); extern int xfs_mod_incore_sb(xfs_mount_t *, xfs_sb_field_t, int, int); -extern int xfs_mod_incore_sb_unlocked(xfs_mount_t *, xfs_sb_field_t, - int, int); extern int xfs_mod_incore_sb_batch(xfs_mount_t *, xfs_mod_sb_t *, uint, int); extern struct xfs_buf *xfs_getsb(xfs_mount_t *, int); Index: xfs-linux/quota/xfs_qm_bhv.c =================================================================== --- xfs-linux.orig/quota/xfs_qm_bhv.c +++ xfs-linux/quota/xfs_qm_bhv.c @@ -401,7 +401,7 @@ STATIC struct xfs_qmops xfs_qmcore_xfs = .xfs_dqtrxops = &xfs_trans_dquot_ops, }; -struct bhv_module_vfsops xfs_qmops = { { +static struct bhv_module_vfsops xfs_qmops = { { BHV_IDENTITY_INIT(VFS_BHV_QM, VFS_POSITION_QM), .vfs_parseargs = xfs_qm_parseargs, .vfs_showargs = xfs_qm_showargs, Index: xfs-linux/xfs_attr.c =================================================================== --- xfs-linux.orig/xfs_attr.c +++ xfs-linux/xfs_attr.c @@ -59,6 +59,7 @@ #define ATTR_SYSCOUNT 2 STATIC struct attrnames posix_acl_access; STATIC struct attrnames posix_acl_default; +STATIC struct attrnames attr_system; STATIC struct attrnames *attr_system_names[ATTR_SYSCOUNT]; /*======================================================================== @@ -2690,7 +2691,7 @@ attr_system_remove( return namesp->attr_remove(vp, name, xflags); } -struct attrnames attr_system = { +STATIC struct attrnames attr_system = { .attr_name = "system.", .attr_namelen = sizeof("system.") - 1, .attr_flag = ATTR_SYSTEM, Index: xfs-linux/xfs_quota.h =================================================================== --- xfs-linux.orig/xfs_quota.h +++ 
xfs-linux/xfs_quota.h @@ -363,8 +363,6 @@ typedef struct xfs_dqtrxops { extern int xfs_qm_dqcheck(xfs_disk_dquot_t *, xfs_dqid_t, uint, uint, char *); extern int xfs_mount_reset_sbqflags(struct xfs_mount *); -extern struct bhv_module_vfsops xfs_qmops; - #endif /* __KERNEL__ */ #endif /* __XFS_QUOTA_H__ */ Index: xfs-linux/linux-2.4/xfs_vfs.c =================================================================== --- xfs-linux.orig/linux-2.4/xfs_vfs.c +++ xfs-linux/linux-2.4/xfs_vfs.c @@ -275,7 +275,7 @@ vfs_deallocate( kmem_free(vfsp, sizeof(bhv_vfs_t)); } -void +static void vfs_insertbhv( struct bhv_vfs *vfsp, struct bhv_desc *bdp, Index: xfs-linux/linux-2.4/xfs_vfs.h =================================================================== --- xfs-linux.orig/linux-2.4/xfs_vfs.h +++ xfs-linux/linux-2.4/xfs_vfs.h @@ -230,7 +230,6 @@ typedef struct bhv_module { extern bhv_vfs_t *vfs_allocate(struct super_block *); extern bhv_vfs_t *vfs_from_sb(struct super_block *); extern void vfs_deallocate(bhv_vfs_t *); -extern void vfs_insertbhv(bhv_vfs_t *, bhv_desc_t *, bhv_vfsops_t *, void *); #define bhv_lookup_module(n,m) ( (m) ? 
\ inter_module_get_request(n, m) : \ Index: xfs-linux/linux-2.6/xfs_vfs.c =================================================================== --- xfs-linux.orig/linux-2.6/xfs_vfs.c +++ xfs-linux/linux-2.6/xfs_vfs.c @@ -274,7 +274,7 @@ vfs_deallocate( kmem_free(vfsp, sizeof(bhv_vfs_t)); } -void +static void vfs_insertbhv( struct bhv_vfs *vfsp, struct bhv_desc *bdp, Index: xfs-linux/linux-2.6/xfs_vfs.h =================================================================== --- xfs-linux.orig/linux-2.6/xfs_vfs.h +++ xfs-linux/linux-2.6/xfs_vfs.h @@ -224,7 +224,6 @@ typedef struct bhv_module { extern bhv_vfs_t *vfs_allocate(struct super_block *); extern bhv_vfs_t *vfs_from_sb(struct super_block *); extern void vfs_deallocate(bhv_vfs_t *); -extern void vfs_insertbhv(bhv_vfs_t *, bhv_desc_t *, bhv_vfsops_t *, void *); extern void bhv_module_init(const char *, struct module *, const void *); extern void bhv_module_exit(const char *); Index: xfs-linux/quota/xfs_dquot.c =================================================================== --- xfs-linux.orig/quota/xfs_dquot.c +++ xfs-linux/quota/xfs_dquot.c @@ -72,7 +72,7 @@ STATIC void xfs_qm_dqflush_done(xfs_buf xfs_buftarg_t *xfs_dqerror_target; int xfs_do_dqerror; int xfs_dqreq_num; -int xfs_dqerror_mod = 33; +STATIC int xfs_dqerror_mod = 33; #endif /* Index: xfs-linux/xfs_bmap.c =================================================================== --- xfs-linux.orig/xfs_bmap.c +++ xfs-linux/xfs_bmap.c @@ -6072,8 +6072,7 @@ xfs_bmap_check_extents( } } -STATIC -xfs_buf_t * +STATIC xfs_buf_t * xfs_bmap_get_bp( xfs_btree_cur_t *cur, xfs_fsblock_t bno) @@ -6134,7 +6133,7 @@ xfs_bmap_get_bp( return(bp); } -void +STATIC void xfs_check_block( xfs_bmbt_block_t *block, xfs_mount_t *mp, Index: xfs-linux/xfs_btree.c =================================================================== --- xfs-linux.orig/xfs_btree.c +++ xfs-linux/xfs_btree.c @@ -110,7 +110,7 @@ xfs_btree_maxrecs( /* * Debug routine: check that block header is ok. 
*/ -void +STATIC void xfs_btree_check_block( xfs_btree_cur_t *cur, /* btree cursor */ xfs_btree_block_t *block, /* generic btree block pointer */ @@ -337,6 +337,7 @@ xfs_btree_check_sblock( return 0; } +#ifdef DEBUG /* * Checking routine: check that (short) pointer is ok. */ @@ -357,6 +358,7 @@ xfs_btree_check_sptr( ptr < be32_to_cpu(agf->agf_length)); return 0; } +#endif /* * Delete the btree cursor. Index: xfs-linux/xfs_btree.h =================================================================== --- xfs-linux.orig/xfs_btree.h +++ xfs-linux/xfs_btree.h @@ -192,16 +192,6 @@ typedef struct xfs_btree_cur #ifdef DEBUG /* - * Debug routine: check that block header is ok. - */ -void -xfs_btree_check_block( - xfs_btree_cur_t *cur, /* btree cursor */ - xfs_btree_block_t *block, /* generic btree block pointer */ - int level, /* level of the btree block */ - struct xfs_buf *bp); /* buffer containing block, if any */ - -/* * Debug routine: check that keys are in the right order. */ void @@ -256,6 +246,7 @@ xfs_btree_check_sblock( int level, /* level of the btree block */ struct xfs_buf *bp); /* buffer containing block */ +#ifdef DEBUG /* * Checking routine: check that (short) pointer is ok. */ @@ -264,6 +255,7 @@ xfs_btree_check_sptr( xfs_btree_cur_t *cur, /* btree cursor */ xfs_agblock_t ptr, /* btree block disk address */ int level); /* btree block level */ +#endif /* * Delete the btree cursor. 
Index: xfs-linux/xfs_trans_buf.c
===================================================================
--- xfs-linux.orig/xfs_trans_buf.c
+++ xfs-linux/xfs_trans_buf.c
@@ -260,7 +260,7 @@ xfs_trans_getsb(xfs_trans_t	*tp,
 xfs_buftarg_t	*xfs_error_target;
 int		xfs_do_error;
 int		xfs_req_num;
-int		xfs_error_mod = 33;
+STATIC int	xfs_error_mod = 33;
 #endif

 /*

From owner-xfs@oss.sgi.com Thu Sep 28 20:56:12 2006
Received: with ECARTIS (v1.0.0; list xfs); Thu, 28 Sep 2006 20:56:16 -0700 (PDT)
Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com
	(8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8T3uBaG009143 for ;
	Thu, 28 Sep 2006 20:56:12 -0700
X-ASG-Debug-ID: 1159502130-25384-177-0
X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi
Received: from sandeen.net (sandeen.net [209.173.210.139]) by cuda.sgi.com
	(Spam Firewall) with ESMTP id 56CCCD11A33B for ;
	Thu, 28 Sep 2006 20:55:30 -0700 (PDT)
Received: by sandeen.net (Postfix, from userid 500) id AB75B18001A5F;
	Thu, 28 Sep 2006 22:29:16 -0500 (CDT)
To: xfs@oss.sgi.com
X-ASG-Orig-Subj: [PATCH 2/2] Remove unused stuff
Subject: [PATCH 2/2] Remove unused stuff
Message-Id: <20060929032916.AB75B18001A5F@sandeen.net>
Date: Thu, 28 Sep 2006 22:29:16 -0500 (CDT)
From: sandeen@sandeen.net
X-Barracuda-Spam-Score: 0.55
X-Barracuda-Spam-Status: No, SCORE=0.55 using per-user scores of
	TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=NO_REAL_NAME
X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.22156
	Rule breakdown below
	 pts rule name              description
	---- ---------------------- --------------------------------------------------
	0.55 NO_REAL_NAME           From: does not include a real name
X-archive-position: 9122
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: sandeen@sandeen.net
Precedence: bulk
X-list: xfs
Content-Length: 12733
Lines: 438

And now that things are static, remove things that are not used.
 xfs_bmap.c       |   36 ------------------
 xfs_bmap_btree.c |   74 -------------------------------------
 xfs_bmap_btree.h |   11 -----
 xfs_da_btree.c   |   15 -------
 xfs_da_btree.h   |    1 
 xfs_error.c      |   26 -------------
 xfs_error.h      |    1 
 xfs_rtalloc.c    |  108 -------------------------------------------------------
 xfs_rtalloc.h    |   18 ---------
 9 files changed, 290 deletions(-)

Signed-off-by: Eric Sandeen

Index: xfs-linux/xfs_da_btree.c
===================================================================
--- xfs-linux.orig/xfs_da_btree.c
+++ xfs-linux/xfs_da_btree.c
@@ -2166,21 +2166,6 @@ xfs_da_reada_buf(
 	return rval;
 }

-/*
- * Calculate the number of bits needed to hold i different values.
- */
-uint
-xfs_da_log2_roundup(uint i)
-{
-	uint rval;
-
-	for (rval = 0; rval < NBBY * sizeof(i); rval++) {
-		if ((1 << rval) >= i)
-			break;
-	}
-	return(rval);
-}
-
 kmem_zone_t *xfs_da_state_zone;	/* anchor for state struct zone */
 kmem_zone_t *xfs_dabuf_zone;	/* dabuf zone */

Index: xfs-linux/xfs_da_btree.h
===================================================================
--- xfs-linux.orig/xfs_da_btree.h
+++ xfs-linux/xfs_da_btree.h
@@ -249,7 +249,6 @@ int	xfs_da_shrink_inode(xfs_da_args_t *a
 				 xfs_dabuf_t *dead_buf);
 uint xfs_da_hashname(const uchar_t *name_string, int name_length);
-uint xfs_da_log2_roundup(uint i);
 xfs_da_state_t *xfs_da_state_alloc(void);
 void xfs_da_state_free(xfs_da_state_t *state);

Index: xfs-linux/xfs_bmap.c
===================================================================
--- xfs-linux.orig/xfs_bmap.c
+++ xfs-linux/xfs_bmap.c
@@ -185,16 +185,6 @@ xfs_bmap_btree_to_extents(
 	int		*logflagsp,	/* inode logging flags */
 	int		whichfork);	/* data or attr fork */

-#ifdef DEBUG
-/*
- * Check that the extents list for the inode ip is in the right order.
- */ -STATIC void -xfs_bmap_check_extents( - xfs_inode_t *ip, /* incore inode pointer */ - int whichfork); /* data or attr fork */ -#endif - /* * Called by xfs_bmapi to update file extent records and the btree * after removing space (or undoing a delayed allocation). @@ -6046,32 +6036,6 @@ xfs_bmap_eof( } #ifdef DEBUG -/* - * Check that the extents list for the inode ip is in the right order. - */ -STATIC void -xfs_bmap_check_extents( - xfs_inode_t *ip, /* incore inode pointer */ - int whichfork) /* data or attr fork */ -{ - xfs_bmbt_rec_t *ep; /* current extent entry */ - xfs_extnum_t idx; /* extent record index */ - xfs_ifork_t *ifp; /* inode fork pointer */ - xfs_extnum_t nextents; /* number of extents in list */ - xfs_bmbt_rec_t *nextp; /* next extent entry */ - - ifp = XFS_IFORK_PTR(ip, whichfork); - ASSERT(ifp->if_flags & XFS_IFEXTENTS); - nextents = ifp->if_bytes / (uint)sizeof(xfs_bmbt_rec_t); - ep = xfs_iext_get_ext(ifp, 0); - for (idx = 0; idx < nextents - 1; idx++) { - nextp = xfs_iext_get_ext(ifp, idx + 1); - xfs_btree_check_rec(XFS_BTNUM_BMAP, (void *)ep, - (void *)(nextp)); - ep = nextp; - } -} - STATIC xfs_buf_t * xfs_bmap_get_bp( xfs_btree_cur_t *cur, Index: xfs-linux/xfs_bmap_btree.c =================================================================== --- xfs-linux.orig/xfs_bmap_btree.c +++ xfs-linux/xfs_bmap_btree.c @@ -66,10 +66,6 @@ STATIC xfs_bmbt_block_t *xfs_bmbt_get_bl STATIC void xfs_bmbt_disk_set_all(xfs_bmbt_rec_t *r, xfs_bmbt_irec_t *s); STATIC void xfs_bmbt_disk_set_allf(xfs_bmbt_rec_t *r, xfs_fileoff_t o, xfs_fsblock_t b, xfs_filblks_t c, xfs_exntst_t v); -#ifdef DEBUG -STATIC xfs_fsblock_t xfs_bmbt_disk_get_startblock(xfs_bmbt_rec_t *r); -STATIC xfs_exntst_t xfs_bmbt_disk_get_state(xfs_bmbt_rec_t *r); -#endif #endif #if defined(XFS_BMBT_TRACE) @@ -688,43 +684,6 @@ error0: return error; } -#ifdef DEBUG -/* - * Get the data from the pointed-to record. 
- */ -int -xfs_bmbt_get_rec( - xfs_btree_cur_t *cur, - xfs_fileoff_t *off, - xfs_fsblock_t *bno, - xfs_filblks_t *len, - xfs_exntst_t *state, - int *stat) -{ - xfs_bmbt_block_t *block; - xfs_buf_t *bp; - int error; - int ptr; - xfs_bmbt_rec_t *rp; - - block = xfs_bmbt_get_block(cur, 0, &bp); - ptr = cur->bc_ptrs[0]; - if ((error = xfs_btree_check_lblock(cur, block, 0, bp))) - return error; - if (ptr > be16_to_cpu(block->bb_numrecs) || ptr <= 0) { - *stat = 0; - return 0; - } - rp = XFS_BMAP_REC_IADDR(block, ptr, cur); - *off = xfs_bmbt_disk_get_startoff(rp); - *bno = xfs_bmbt_disk_get_startblock(rp); - *len = xfs_bmbt_disk_get_blockcount(rp); - *state = xfs_bmbt_disk_get_state(rp); - *stat = 1; - return 0; -} -#endif - /* * Insert one record/level. Return information to the caller * allowing the next level up to proceed if necessary. @@ -2021,39 +1980,6 @@ xfs_bmbt_disk_get_blockcount( return (xfs_filblks_t)(INT_GET(r->l1, ARCH_CONVERT) & XFS_MASK64LO(21)); } -#ifdef DEBUG -/* - * Extract the startblock field from an on disk bmap extent record. - */ -STATIC xfs_fsblock_t -xfs_bmbt_disk_get_startblock( - xfs_bmbt_rec_t *r) -{ -#if XFS_BIG_BLKNOS - return (((xfs_fsblock_t)INT_GET(r->l0, ARCH_CONVERT) & XFS_MASK64LO(9)) << 43) | - (((xfs_fsblock_t)INT_GET(r->l1, ARCH_CONVERT)) >> 21); -#else - xfs_dfsbno_t b; - - b = (((xfs_dfsbno_t)INT_GET(r->l0, ARCH_CONVERT) & XFS_MASK64LO(9)) << 43) | - (((xfs_dfsbno_t)INT_GET(r->l1, ARCH_CONVERT)) >> 21); - ASSERT((b >> 32) == 0 || ISNULLDSTARTBLOCK(b)); - return (xfs_fsblock_t)b; -#endif /* XFS_BIG_BLKNOS */ -} - -STATIC xfs_exntst_t -xfs_bmbt_disk_get_state( - xfs_bmbt_rec_t *r) -{ - int ext_flag; - - ext_flag = (int)((INT_GET(r->l0, ARCH_CONVERT)) >> (64 - BMBT_EXNTFLAG_BITLEN)); - return xfs_extent_state(xfs_bmbt_disk_get_blockcount(r), - ext_flag); -} -#endif /* DEBUG */ - /* * Extract the startoff field from a disk format bmap extent record. 
*/ Index: xfs-linux/xfs_bmap_btree.h =================================================================== --- xfs-linux.orig/xfs_bmap_btree.h +++ xfs-linux/xfs_bmap_btree.h @@ -317,9 +317,7 @@ extern xfs_filblks_t xfs_bmbt_disk_get_b extern xfs_fileoff_t xfs_bmbt_disk_get_startoff(xfs_bmbt_rec_t *r); #else #define xfs_bmbt_disk_get_all(r, s) xfs_bmbt_get_all(r, s) -#define xfs_bmbt_disk_get_state(r) xfs_bmbt_get_state(r) #define xfs_bmbt_disk_get_blockcount(r) xfs_bmbt_get_blockcount(r) -#define xfs_bmbt_disk_get_startblock(r) xfs_bmbt_get_blockcount(r) #define xfs_bmbt_disk_get_startoff(r) xfs_bmbt_get_startoff(r) #endif /* XFS_NATIVE_HOST */ @@ -356,15 +354,6 @@ extern void xfs_bmbt_to_bmdr(xfs_bmbt_bl extern int xfs_bmbt_update(struct xfs_btree_cur *, xfs_fileoff_t, xfs_fsblock_t, xfs_filblks_t, xfs_exntst_t); -#ifdef DEBUG -/* - * Get the data from the pointed-to record. - */ -extern int xfs_bmbt_get_rec(struct xfs_btree_cur *, xfs_fileoff_t *, - xfs_fsblock_t *, xfs_filblks_t *, - xfs_exntst_t *, int *); -#endif - #endif /* __KERNEL__ */ #endif /* __XFS_BMAP_BTREE_H__ */ Index: xfs-linux/xfs_rtalloc.c =================================================================== --- xfs-linux.orig/xfs_rtalloc.c +++ xfs-linux/xfs_rtalloc.c @@ -913,57 +913,6 @@ xfs_rtcheck_alloc_range( } #endif -#ifdef DEBUG -/* - * Check whether the given block in the bitmap has the given value. 
- */ -STATIC int /* 1 for matches, 0 for not */ -xfs_rtcheck_bit( - xfs_mount_t *mp, /* file system mount structure */ - xfs_trans_t *tp, /* transaction pointer */ - xfs_rtblock_t start, /* bit (block) to check */ - int val) /* 1 for free, 0 for allocated */ -{ - int bit; /* bit number in the word */ - xfs_rtblock_t block; /* bitmap block number */ - xfs_buf_t *bp; /* buf for the block */ - xfs_rtword_t *bufp; /* pointer into the buffer */ - /* REFERENCED */ - int error; /* error value */ - xfs_rtword_t wdiff; /* difference between bit & expected */ - int word; /* word number in the buffer */ - xfs_rtword_t wval; /* word value from buffer */ - - block = XFS_BITTOBLOCK(mp, start); - error = xfs_rtbuf_get(mp, tp, block, 0, &bp); - bufp = (xfs_rtword_t *)XFS_BUF_PTR(bp); - word = XFS_BITTOWORD(mp, start); - bit = (int)(start & (XFS_NBWORD - 1)); - wval = bufp[word]; - xfs_trans_brelse(tp, bp); - wdiff = (wval ^ -val) & ((xfs_rtword_t)1 << bit); - return !wdiff; -} -#endif /* DEBUG */ - -#if 0 -/* - * Check that the given extent (block range) is free already. - */ -STATIC int /* error */ -xfs_rtcheck_free_range( - xfs_mount_t *mp, /* file system mount point */ - xfs_trans_t *tp, /* transaction pointer */ - xfs_rtblock_t bno, /* starting block number of extent */ - xfs_extlen_t len, /* length of extent */ - int *stat) /* out: 1 for free, 0 for not */ -{ - xfs_rtblock_t new; /* dummy for xfs_rtcheck_range */ - - return xfs_rtcheck_range(mp, tp, bno, len, 1, &new, stat); -} -#endif - /* * Check that the given range is either all allocated (val = 0) or * all free (val = 1). @@ -2382,60 +2331,3 @@ xfs_rtpick_extent( *pick = b; return 0; } - -#ifdef DEBUG -/* - * Debug code: print out the value of a range in the bitmap. 
- */ -void -xfs_rtprint_range( - xfs_mount_t *mp, /* file system mount structure */ - xfs_trans_t *tp, /* transaction pointer */ - xfs_rtblock_t start, /* starting block to print */ - xfs_extlen_t len) /* length to print */ -{ - xfs_extlen_t i; /* block number in the extent */ - - cmn_err(CE_DEBUG, "%Ld: ", (long long)start); - for (i = 0; i < len; i++) - cmn_err(CE_DEBUG, "%d", xfs_rtcheck_bit(mp, tp, start + i, 1)); - cmn_err(CE_DEBUG, "\n"); -} - -/* - * Debug code: print the summary file. - */ -void -xfs_rtprint_summary( - xfs_mount_t *mp, /* file system mount structure */ - xfs_trans_t *tp) /* transaction pointer */ -{ - xfs_suminfo_t c; /* summary data */ - xfs_rtblock_t i; /* bitmap block number */ - int l; /* summary information level */ - int p; /* flag for printed anything */ - xfs_fsblock_t sb; /* summary block number */ - xfs_buf_t *sumbp; /* summary block buffer */ - - sumbp = NULL; - for (l = 0; l < mp->m_rsumlevels; l++) { - for (p = 0, i = 0; i < mp->m_sb.sb_rbmblocks; i++) { - (void)xfs_rtget_summary(mp, tp, l, i, &sumbp, &sb, &c); - if (c) { - if (!p) { - cmn_err(CE_DEBUG, "%Ld-%Ld:", 1LL << l, - XFS_RTMIN((1LL << l) + - ((1LL << l) - 1LL), - mp->m_sb.sb_rextents)); - p = 1; - } - cmn_err(CE_DEBUG, " %Ld:%d", (long long)i, c); - } - } - if (p) - cmn_err(CE_DEBUG, "\n"); - } - if (sumbp) - xfs_trans_brelse(tp, sumbp); -} -#endif /* DEBUG */ Index: xfs-linux/xfs_rtalloc.h =================================================================== --- xfs-linux.orig/xfs_rtalloc.h +++ xfs-linux/xfs_rtalloc.h @@ -134,24 +134,6 @@ xfs_rtpick_extent( xfs_rtblock_t *pick); /* result rt extent */ /* - * Debug code: print out the value of a range in the bitmap. - */ -void -xfs_rtprint_range( - struct xfs_mount *mp, /* file system mount structure */ - struct xfs_trans *tp, /* transaction pointer */ - xfs_rtblock_t start, /* starting block to print */ - xfs_extlen_t len); /* length to print */ - -/* - * Debug code: print the summary file. 
- */ -void -xfs_rtprint_summary( - struct xfs_mount *mp, /* file system mount structure */ - struct xfs_trans *tp); /* transaction pointer */ - -/* * Grow the realtime area of the filesystem. */ int Index: xfs-linux/xfs_error.c =================================================================== --- xfs-linux.orig/xfs_error.c +++ xfs-linux/xfs_error.c @@ -132,32 +132,6 @@ xfs_errortag_add(int error_tag, xfs_moun } int -xfs_errortag_clear(int error_tag, xfs_mount_t *mp) -{ - int i; - int64_t fsid; - - memcpy(&fsid, mp->m_fixedfsid, sizeof(xfs_fsid_t)); - - for (i = 0; i < XFS_NUM_INJECT_ERROR; i++) { - if (xfs_etest_fsid[i] == fsid && xfs_etest[i] == error_tag) { - xfs_etest[i] = 0; - xfs_etest_fsid[i] = 0LL; - kmem_free(xfs_etest_fsname[i], - strlen(xfs_etest_fsname[i]) + 1); - xfs_etest_fsname[i] = NULL; - cmn_err(CE_WARN, "Cleared XFS error tag #%d", - error_tag); - return 0; - } - } - - cmn_err(CE_WARN, "XFS error tag %d not on", error_tag); - - return 1; -} - -int xfs_errortag_clearall_umount(int64_t fsid, char *fsname, int loud) { int i; Index: xfs-linux/xfs_error.h =================================================================== --- xfs-linux.orig/xfs_error.h +++ xfs-linux/xfs_error.h @@ -144,7 +144,6 @@ extern void xfs_error_test_init(void); #endif /* __ANSI_CPP__ */ extern int xfs_errortag_add(int error_tag, xfs_mount_t *mp); -extern int xfs_errortag_clear(int error_tag, xfs_mount_t *mp); extern int xfs_errortag_clearall(xfs_mount_t *mp); extern int xfs_errortag_clearall_umount(int64_t fsid, char *fsname, int loud); #else From owner-xfs@oss.sgi.com Fri Sep 29 00:56:56 2006 Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Sep 2006 00:57:02 -0700 (PDT) Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id k8T7uoaG014070 for ; Fri, 29 Sep 2006 00:56:53 -0700 Received: from [134.14.55.141] (boing.melbourne.sgi.com [134.14.55.141]) by larry.melbourne.sgi.com 
	(950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id RAA10519;
	Fri, 29 Sep 2006 17:56:04 +1000
Message-ID: <451CD1EA.8020704@sgi.com>
Date: Fri, 29 Sep 2006 17:57:30 +1000
From: Timothy Shimmin
User-Agent: Thunderbird 1.5.0.7 (Macintosh/20060909)
MIME-Version: 1.0
To: torvalds@osdl.org
CC: akpm@osdl.org, xfs@oss.sgi.com
Subject: XFS update for 2.6.19
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
X-archive-position: 9123
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: tes@sgi.com
Precedence: bulk
X-list: xfs
Content-Length: 24067
Lines: 778

Hi Linus,

Please pull from:

	git://oss.sgi.com:8090/xfs/xfs-2.6

This will update the following files:

 fs/xfs/Makefile-linux-2.6      |    1
 fs/xfs/linux-2.6/kmem.c        |   29 +++
 fs/xfs/linux-2.6/kmem.h        |    6 -
 fs/xfs/linux-2.6/sema.h        |    2
 fs/xfs/linux-2.6/sv.h          |    2
 fs/xfs/linux-2.6/xfs_aops.c    |    9 -
 fs/xfs/linux-2.6/xfs_buf.c     |   51 +++---
 fs/xfs/linux-2.6/xfs_buf.h     |    7 -
 fs/xfs/linux-2.6/xfs_globals.c |    2
 fs/xfs/linux-2.6/xfs_ioctl.c   |   19 +-
 fs/xfs/linux-2.6/xfs_iops.c    |   25 ++-
 fs/xfs/linux-2.6/xfs_linux.h   |   14 +-
 fs/xfs/linux-2.6/xfs_lrw.c     |   10 +
 fs/xfs/linux-2.6/xfs_super.c   |    2
 fs/xfs/linux-2.6/xfs_vfs.h     |    2
 fs/xfs/linux-2.6/xfs_vnode.h   |    2
 fs/xfs/quota/xfs_dquot_item.c  |   26 ---
 fs/xfs/quota/xfs_qm.c          |   14 +-
 fs/xfs/quota/xfs_qm.h          |    6 -
 fs/xfs/quota/xfs_quota_priv.h  |    2
 fs/xfs/support/ktrace.c        |    2
 fs/xfs/xfs_ag.h                |    2
 fs/xfs/xfs_alloc.c             |   10 +
 fs/xfs/xfs_alloc_btree.c       |  132 ++++++-------
 fs/xfs/xfs_attr.c              |  181 ++++++++++++-----
 fs/xfs/xfs_attr.h              |    8 +
 fs/xfs/xfs_attr_leaf.c         |  351 ++++++++++++++++++----------------
 fs/xfs/xfs_attr_leaf.h         |   41 ++++-
 fs/xfs/xfs_behavior.c          |   20 --
 fs/xfs/xfs_behavior.h          |    2
 fs/xfs/xfs_bmap.c              |   90 +++-----
 fs/xfs/xfs_bmap_btree.c        |  113 ++++------
 fs/xfs/xfs_bmap_btree.h        |   11 +
 fs/xfs/xfs_btree.c             |    8 -
 fs/xfs/xfs_btree.h             |    5 -
 fs/xfs/xfs_buf_item.c          |   22 --
 fs/xfs/xfs_da_btree.c          |   33 ++--
 fs/xfs/xfs_error.h             |    9 -
 fs/xfs/xfs_extfree_item.c      |   69 --------
 fs/xfs/xfs_extfree_item.h      |   50 +++---
 fs/xfs/xfs_fs.h                |    8 -
 fs/xfs/xfs_ialloc.c            |   11 +
 fs/xfs/xfs_ialloc_btree.c      |   62 ++++---
 fs/xfs/xfs_ialloc_btree.h      |   19 +-
 fs/xfs/xfs_iget.c              |   44 +++--
 fs/xfs/xfs_inode.c             |   30 +++
 fs/xfs/xfs_inode.h             |   12 +
 fs/xfs/xfs_inode_item.c        |   16 --
 fs/xfs/xfs_inode_item.h        |   66 ++++----
 fs/xfs/xfs_iomap.c             |   89 ++++------
 fs/xfs/xfs_itable.c            |  184 +++++++++++++--------
 fs/xfs/xfs_itable.h            |   16 +-
 fs/xfs/xfs_log.c               |   19 ++
 fs/xfs/xfs_log.h               |    8 -
 fs/xfs/xfs_log_priv.h          |   10 +
 fs/xfs/xfs_mount.h             |    5 -
 fs/xfs/xfs_quota.h             |    2
 fs/xfs/xfs_rtalloc.c           |   38 ++--
 fs/xfs/xfs_sb.h                |   22 --
 fs/xfs/xfs_trans.h             |    2
 fs/xfs/xfs_trans_ail.c         |    4
 fs/xfs/xfs_trans_priv.h        |   12 +
 fs/xfs/xfs_vfsops.c            |    2
 fs/xfs/xfs_vnodeops.c          |   26 ++-
 64 files changed, 1060 insertions(+), 1037 deletions(-)

through these commits:

commit f37ea14969bf85633d3bd29ddf008171a5618855
Author: Alexey Dobriyan
Date:   Thu Sep 28 10:52:04 2006 +1000

    [XFS] pass inode to xfs_ioc_space(), simplify some code.

    There is trivial "inode => vnode => inode" conversion, but only flags
    and mode of final inode are looked at. Pass original inode instead.

    SGI-PV: 904196
    SGI-Modid: xfs-linux-melb:xfs-kern:26395a

    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit f07c225036358038bf8a64f75351f10cdca2fb22
Author: Nathan Scott
Date:   Thu Sep 28 10:52:15 2006 +1000

    [XFS] Improve xfsbufd delayed write submission patterns, after blktrace
    analysis.

    Under a sequential create+allocate workload, blktrace reported backward
    writes being issued by xfsbufd, and frequent inappropriate queue
    unplugs. We now insert at the tail when moving from the delwri lists
    to the temp lists, which maintains correct ordering, and we avoid
    unplugging queues deep in the submit paths when we'd shortly do it at
    a higher level anyway. blktrace now reports much healthier write
    patterns from xfsbufd for this workload (and likely many others).
    SGI-PV: 954310
    SGI-Modid: xfs-linux-melb:xfs-kern:26396a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 128dabc5e9aa13dfebcad84e6b4ab31078555131
Author: Tim Shimmin
Date:   Thu Sep 28 10:55:43 2006 +1000

    [XFS] cleanup the field types of some item format structures

    SGI-PV: 954365
    SGI-Modid: xfs-linux-melb:xfs-kern:26406a

    Signed-off-by: Tim Shimmin

commit 87395deb0b3d174ffcc7f66569764f0715ac5174
Author: Alexey Dobriyan
Date:   Thu Sep 28 10:56:01 2006 +1000

    [XFS] move XFS_IOC_GETVERSION to main multiplexer

    Avoids doing an unnecessary inode to vnode conversion and avoids a
    memory allocation.

    SGI-PV: 904196
    SGI-Modid: xfs-linux-melb:xfs-kern:26492a

    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 673cdf5c72ff9551df08a71f2ac1a8fe02888e8d
Author: Nathan Scott
Date:   Thu Sep 28 10:56:26 2006 +1000

    [XFS] Fix rounding bug in xfs_free_file_space found by sparse checking.

    SGI-PV: 954580
    SGI-Modid: xfs-linux-melb:xfs-kern:26551a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit ed9d88f7b7e6feba457b87ff30249e6c1e139005
Author: Nathan Scott
Date:   Thu Sep 28 10:56:43 2006 +1000

    [XFS] Fix sparse warning found when page tracing enabled, due to
    overloaded gfp_t param.

    SGI-PV: 954580
    SGI-Modid: xfs-linux-melb:xfs-kern:26552a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit e21010053a0f11122db728f82ae115f2808752d6
Author: Christoph Hellwig
Date:   Thu Sep 28 10:56:51 2006 +1000

    [XFS] endianess annotation for xfs_agfl_t. Trivial, xfs_agfl_t is
    always used for ondisk values.

    SGI-PV: 954580
    SGI-Modid: xfs-linux-melb:xfs-kern:26553a

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 61a258486795ff710cf4518b5a1729c965c32aa0
Author: Christoph Hellwig
Date:   Thu Sep 28 10:57:04 2006 +1000

    [XFS] endianess annotations for xfs_inobt_rec_t / xfs_inobt_key_t

    SGI-PV: 954580
    SGI-Modid: xfs-linux-melb:xfs-kern:26556a

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit c38e5e84dbbeda9a92ea878ec9f6255b519a69e7
Author: Christoph Hellwig
Date:   Thu Sep 28 10:57:17 2006 +1000

    [XFS] remove left over INT_ comments in *alloc*.c

    We can verify endianess handling with sparse now, no need for comments.

    SGI-PV: 954580
    SGI-Modid: xfs-linux-melb:xfs-kern:26557a

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit b113bcb83efb411f3cc6c7692fbf933ed01b67d8
Author: Christoph Hellwig
Date:   Thu Sep 28 10:57:42 2006 +1000

    [XFS] add xfs_btree_check_lptr_disk variant which handles endian
    conversion

    SGI-PV: 954580
    SGI-Modid: xfs-linux-melb:xfs-kern:26558a

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 397b5208d5609e2f01b171a34ab690f325253492
Author: Christoph Hellwig
Date:   Thu Sep 28 10:57:52 2006 +1000

    [XFS] endianess annotations for xfs_bmbt_ptr_t/xfs_bmdr_ptr_t

    SGI-PV: 954580
    SGI-Modid: xfs-linux-melb:xfs-kern:26559a

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 576039cf3c668d5f8d97dff8a0a5817e8b3a761b
Author: Christoph Hellwig
Date:   Thu Sep 28 10:58:06 2006 +1000

    [XFS] endianess annotate XFS_BMAP_BROOT_PTR_ADDR

    Make sure it returns a __be64 and let the callers use the proper
    macros.
    SGI-PV: 954580
    SGI-Modid: xfs-linux-melb:xfs-kern:26560a

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 8801bb99e4425b9a778b355153ab3254bb431d92
Author: Christoph Hellwig
Date:   Thu Sep 28 10:58:17 2006 +1000

    [XFS] endianess annotations for xfs_bmbt_key

    Trivial as there are no incore users.

    SGI-PV: 954580
    SGI-Modid: xfs-linux-melb:xfs-kern:26561a

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 1121b219bf3fe6d1bd1d1f7618cc5e0c409fabb4
Author: Nathan Scott
Date:   Thu Sep 28 10:58:40 2006 +1000

    [XFS] use NULL for pointer initialisation instead of zero-cast-to-ptr

    SGI-PV: 954580
    SGI-Modid: xfs-linux-melb:xfs-kern:26562a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit fe48cae9ed979d2ac14080c837d793c4f6bfaa82
Author: Christoph Hellwig
Date:   Thu Sep 28 10:58:52 2006 +1000

    [XFS] remove bhv_lookup, _range version works aswell and has more
    useful semantics.

    SGI-PV: 954580
    SGI-Modid: xfs-linux-melb:xfs-kern:26563a

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 29b6d22b011d83dac8ca5b7d26f766ae598abbbd
Author: Nathan Scott
Date:   Thu Sep 28 10:59:06 2006 +1000

    [XFS] remove accidentally reintroduced vfs unmount flag, unneeded in
    current kernels

    SGI-PV: 954580
    SGI-Modid: xfs-linux-melb:xfs-kern:26564a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 69e23b9a5e7430ced667d8b699330e370c202f28
Author: Nathan Scott
Date:   Thu Sep 28 11:01:22 2006 +1000

    [XFS] Update XFS for i_blksize removal from generic inode structure

    SGI-PV: 954366
    SGI-Modid: xfs-linux-melb:xfs-kern:26565a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 726801ba067410a1d38518823f2c253a087f6c6f
Author: Tim Shimmin
Date:   Thu Sep 28 11:01:37 2006 +1000

    [XFS] Add EA list callbacks for xfs kernel use. Cleanup some namespace
    code.

    SGI-PV: 954372
    SGI-Modid: xfs-linux-melb:xfs-kern:26583a

    Signed-off-by: Tim Shimmin

commit 8b56f083c2a6bd0a88271225f0bcf1d81db20d3c
Author: Nathan Scott
Date:   Thu Sep 28 11:01:46 2006 +1000

    [XFS] Rework DMAPI bulkstat calls in such a way that we can directly
    extract inline attributes out of the bulkstat buffer (for that case),
    rather than using an (extremely expensive for large icount filesystems)
    iget for fetching attrs.

    SGI-PV: 944409
    SGI-Modid: xfs-linux-melb:xfs-kern:26602a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 51bdd70681e247184b81c2de61dbc26154511155
Author: Nathan Scott
Date:   Thu Sep 28 11:01:57 2006 +1000

    [XFS] When issuing metadata readahead, submit bio with READA not READ.

    SGI-PV: 944409
    SGI-Modid: xfs-linux-melb:xfs-kern:26603a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 2627509330323efc88b5818065cba737e000de5c
Author: Nathan Scott
Date:   Thu Sep 28 11:02:03 2006 +1000

    [XFS] Drop unneeded endian conversion in bulkstat and start readahead
    for batches of inode cluster buffers at once, before any blocking reads
    are issued.

    SGI-PV: 944409
    SGI-Modid: xfs-linux-melb:xfs-kern:26606a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit bb3c7d2936b6db6f5ded9abf4d215abe97af8372
Author: Nathan Scott
Date:   Thu Sep 28 11:02:09 2006 +1000

    [XFS] Increase the size of the buffer holding the local inode cluster
    list, to increase our potential readahead window and in turn improve
    bulkstat performance.

    SGI-PV: 944409
    SGI-Modid: xfs-linux-melb:xfs-kern:26607a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit a3c6685eaa1b6c5cf05b084b3bc91895e253ad2c
Author: Nathan Scott
Date:   Thu Sep 28 11:02:14 2006 +1000

    [XFS] Ensure xlog_state_do_callback does not report spurious warnings
    on ramdisks.

    SGI-PV: 954802
    SGI-Modid: xfs-linux-melb:xfs-kern:26627a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 745b1f47fc0c68dbb1ff440eec8889f61e57194b
Author: Nathan Scott
Date:   Thu Sep 28 11:02:23 2006 +1000

    [XFS] Remove last bulkstat false-positives with debug kernels.

    SGI-PV: 953819
    SGI-Modid: xfs-linux-melb:xfs-kern:26628a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 17370097dace78c93d6fa32110983e74b981d192
Author: Vlad Apostolov
Date:   Thu Sep 28 11:02:30 2006 +1000

    [XFS] pass file mode on DMAPI remove events

    SGI-PV: 953687
    SGI-Modid: xfs-linux-melb:xfs-kern:26639a

    Signed-off-by: Vlad Apostolov
    Signed-off-by: Tim Shimmin

commit 43129c16e85119355d352e10ff4b30a08053228c
Author: Eric Sandeen
Date:   Thu Sep 28 11:02:37 2006 +1000

    [XFS] Remove a couple of unused BUF macros

    SGI-PV: 955302
    SGI-Modid: xfs-linux-melb:xfs-kern:26746a

    Signed-off-by: Eric Sandeen
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 065d312e15902976d256ddaf396a7950ec0350a8
Author: Eric Sandeen
Date:   Thu Sep 28 11:02:44 2006 +1000

    [XFS] Remove unused iop_abort log item operation

    SGI-PV: 955302
    SGI-Modid: xfs-linux-melb:xfs-kern:26747a

    Signed-off-by: Eric Sandeen
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 3f89243c5b987dd55f8eec6fd54be05887d69bc6
Author: Eric Sandeen
Date:   Thu Sep 28 11:02:57 2006 +1000

    [XFS] Remove several macros that are no longer used anywhere

    SGI-PV: 955302
    SGI-Modid: xfs-linux-melb:xfs-kern:26749a

    Signed-off-by: Eric Sandeen
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit efb8ad7e9431a430a75d44288614cf6047ff4baa
Author: Nathan Scott
Date:   Thu Sep 28 11:03:05 2006 +1000

    [XFS] Add a debug flag for allocations which are known to be larger
    than one page.

    SGI-PV: 955302
    SGI-Modid: xfs-linux-melb:xfs-kern:26800a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 948ecdb4c118293d2f3e267eec642c30c5d3a056
Author: Nathan Scott
Date:   Thu Sep 28 11:03:13 2006 +1000

    [XFS] Be more defensive with page flags (error/private) for metadata
    buffers.

    SGI-PV: 955302
    SGI-Modid: xfs-linux-melb:xfs-kern:26801a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 572d95f49f3652fffe8242c4498b85f4083e52ab
Author: Nathan Scott
Date:   Thu Sep 28 11:03:20 2006 +1000

    [XFS] Improve error handling for the zero-fsblock extent detection
    code.

    SGI-PV: 955302
    SGI-Modid: xfs-linux-melb:xfs-kern:26802a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 77e4635ae191774526ed695482a151ac986f3806
Author: Nathan Scott
Date:   Thu Sep 28 11:03:27 2006 +1000

    [XFS] Add a greedy allocation interface, allocating within a min/max
    size range.

    SGI-PV: 955302
    SGI-Modid: xfs-linux-melb:xfs-kern:26803a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit b627259c602f3f1b995d09aad2b57bed889430b9
Author: Nathan Scott
Date:   Thu Sep 28 11:03:33 2006 +1000

    [XFS] Remove a no-longer-correct debug assert from dio completion
    handling.

    SGI-PV: 955302
    SGI-Modid: xfs-linux-melb:xfs-kern:26804a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit d432c80e68e3c283fc9a85021f5b65e0aabf254e
Author: Nathan Scott
Date:   Thu Sep 28 11:03:44 2006 +1000

    [XFS] Minor code rearranging and cleanup to prevent some coverity
    false positives.

    SGI-PV: 955502
    SGI-Modid: xfs-linux-melb:xfs-kern:26805a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 68c3271515f11f6665dc8732e53aaab3d3fdd7d3
Author: Nathan Scott
Date:   Thu Sep 28 11:03:53 2006 +1000

    [XFS] Fix a porting botch on the realtime subvol growfs code path.
    SGI-PV: 955515
    SGI-Modid: xfs-linux-melb:xfs-kern:26806a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 22d91f65d57a7f1a1c5fc81f47b47b0cc54ad6f7
Author: Josh Triplett
Date:   Thu Sep 28 11:04:07 2006 +1000

    [XFS] Add lock annotations to xfs_trans_update_ail and
    xfs_trans_delete_ail

    xfs_trans_update_ail and xfs_trans_delete_ail get called with the AIL
    lock held, and release it. Add lock annotations to these two functions
    so that sparse can check callers for lock pairing, and so that sparse
    will not complain about these functions since they intentionally use
    locks in this manner.

    SGI-PV: 954580
    SGI-Modid: xfs-linux-melb:xfs-kern:26807a

    Signed-off-by: Josh Triplett
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 955e47ad28b5b255ddcd7eb9cb814a269dc6e991
Author: Tim Shimmin
Date:   Thu Sep 28 11:04:16 2006 +1000

    [XFS] Fixes the leak in reservation space because we weren't ungranting
    space for the unmount record - which becomes a problem in the
    freeze/thaw scenario.

    SGI-PV: 942533
    SGI-Modid: xfs-linux-melb:xfs-kern:26815a

    Signed-off-by: Tim Shimmin

commit 22de606a0b9623bf15752808f123848a65a6cc28
Author: Vlad Apostolov
Date:   Thu Sep 28 11:04:24 2006 +1000

    [XFS] pv 955157, rv bnaujok - break the loop on formatter() error

    SGI-PV: 955157
    SGI-Modid: xfs-linux-melb:xfs-kern:26866a

    Signed-off-by: Vlad Apostolov
    Signed-off-by: Tim Shimmin

commit e132f54ce8660bbf34723cc12cb11e6f61d6fbac
Author: Vlad Apostolov
Date:   Thu Sep 28 11:04:31 2006 +1000

    [XFS] pv 955157, rv bnaujok - break the loop on EFAULT formatter()
    error

    SGI-PV: 955157
    SGI-Modid: xfs-linux-melb:xfs-kern:26869a

    Signed-off-by: Vlad Apostolov
    Signed-off-by: Tim Shimmin

commit 215101c36012399cf2eaee849de54eeefc9f618c
Author: Nathan Scott
Date:   Thu Sep 28 11:04:43 2006 +1000

    [XFS] Fix kmem_zalloc_greedy warnings on 64 bit platforms.
    SGI-PV: 955302
    SGI-Modid: xfs-linux-melb:xfs-kern:26907a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit edcd4bce5e58987c8c039bdf7705a22cd229fe96
Author: Nathan Scott
Date:   Thu Sep 28 11:05:33 2006 +1000

    [XFS] Minor cleanup from dio locking fix, remove an extra conditional.

    SGI-PV: 955696
    SGI-Modid: xfs-linux-melb:xfs-kern:26908a

    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 91d87232044c1ceb8371625c27479e982984a848
Author: Eric Sandeen
Date:   Thu Sep 28 11:05:40 2006 +1000

    [XFS] Reduce endian flipping in alloc_btree, same as was done for
    ialloc_btree.

    SGI-PV: 955302
    SGI-Modid: xfs-linux-melb:xfs-kern:26910a

    Signed-off-by: Eric Sandeen
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 7ae67d78e7518fba89e5f3a74bdcb68e48ae8858
Author: Eric Sandeen
Date:   Thu Sep 28 11:05:46 2006 +1000

    [XFS] standardize on one sema init macro

    One sema to rule them all, one sema to find them...

    SGI-PV: 907752
    SGI-Modid: xfs-linux-melb:xfs-kern:26911a

    Signed-off-by: Eric Sandeen
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit 01106eae97b70399ce5a273a3cceb5246e8d9cc8
Author: Eric Sandeen
Date:   Thu Sep 28 11:05:52 2006 +1000

    [XFS] Collapse sv_init and init_sv into just the one interface.

    SGI-PV: 907752
    SGI-Modid: xfs-linux-melb:xfs-kern:26925a

    Signed-off-by: Eric Sandeen
    Signed-off-by: Nathan Scott
    Signed-off-by: Tim Shimmin

commit f273ab848b7cbc0088b0ac7457b3769e6566074e
Author: David Chinner
Date:   Thu Sep 28 11:06:03 2006 +1000

    [XFS] Really fix use after free in xfs_iunpin.

    The previous attempts to fix the linux inode use-after-free in
    xfs_iunpin simply made the problem harder to hit. We actually need
    complete exclusion between xfs_reclaim and xfs_iunpin, as well as
    ensuring that the i_flags are consistent during both of these
    functions. Introduce a new spinlock for exclusion and the i_flags,
    and fix up xfs_iunpin to use igrab before marking the inode dirty.

    SGI-PV: 952967
    SGI-Modid: xfs-linux-melb:xfs-kern:26964a

    Signed-off-by: David Chinner
    Signed-off-by: Tim Shimmin

commit 6216ff18839bf302805f67c93e8bc344387c513b
Author: Vlad Apostolov
Date:   Thu Sep 28 11:06:10 2006 +1000

    [XFS] pv 956240, author: nathans, rv: vapo - Minor fixes in
    kmem_zalloc_greedy()

    SGI-PV: 956240
    SGI-Modid: xfs-linux-melb:xfs-kern:26983a

    Signed-off-by: Vlad Apostolov
    Signed-off-by: Tim Shimmin

commit 6f1f21684078884b62cfff2ea80a1a6c07f79824
Author: Vlad Apostolov
Date:   Thu Sep 28 11:06:15 2006 +1000

    [XFS] pv 956241, author: nathans, rv: vapo - make ino validation checks
    consistent in bulkstat

    SGI-PV: 956241
    SGI-Modid: xfs-linux-melb:xfs-kern:26984a

    Signed-off-by: Vlad Apostolov
    Signed-off-by: Tim Shimmin

commit 6e73b418887675da18602550ca296211caeb3897
Author: Vlad Apostolov
Date:   Thu Sep 28 11:06:21 2006 +1000

    [XFS] 955947: Infinite loop in xfs_bulkstat() on formatter() error

    SGI-PV: 955947
    SGI-Modid: xfs-linux-melb:xfs-kern:26986a

    Signed-off-by: Vlad Apostolov
    Signed-off-by: Tim Shimmin

commit 65e8697a12e356cd7a6ecafa1149f5c5c6a71593
Author: Tim Shimmin
Date:   Fri Sep 29 15:23:02 2006 +1000

    [XFS] Remove v1 dir trace macro - missed in a past commit.
    Signed-off-by: Tim Shimmin

From owner-xfs@oss.sgi.com Fri Sep 29 10:07:21 2006
Received: with ECARTIS (v1.0.0; list xfs); Fri, 29 Sep 2006 10:07:23 -0700 (PDT)
Received: from cuda.sgi.com (cuda2.sgi.com [192.48.168.29]) by oss.sgi.com
	(8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id k8TH7KaG007480
	for ; Fri, 29 Sep 2006 10:07:21 -0700
X-ASG-Debug-ID: 1159549599-19632-183-0
X-Barracuda-URL: http://cuda.sgi.com:80/cgi-bin/mark.cgi
Received: from mx1.redhat.com (mx1.redhat.com [66.187.233.31]) by cuda.sgi.com
	(Spam Firewall) with ESMTP id 21C44D11900E
	for ; Fri, 29 Sep 2006 10:06:39 -0700 (PDT)
Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254])
	by mx1.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id k8TH6567030939;
	Fri, 29 Sep 2006 13:06:05 -0400
Received: from pobox-2.corp.redhat.com (pobox-2.corp.redhat.com [10.11.255.15])
	by int-mx1.corp.redhat.com (8.12.11.20060308/8.12.11) with ESMTP id
	k8TH659U009921; Fri, 29 Sep 2006 13:06:05 -0400
Received: from [10.15.80.10] (neon.msp.redhat.com [10.15.80.10])
	by pobox-2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id k8TH636i023778;
	Fri, 29 Sep 2006 13:06:04 -0400
Message-ID: <451D527A.6080904@sandeen.net>
Date: Fri, 29 Sep 2006 12:06:02 -0500
From: Eric Sandeen
User-Agent: Thunderbird 1.5.0.7 (X11/20060913)
MIME-Version: 1.0
To: sandeen@sandeen.net
CC: xfs@oss.sgi.com
X-ASG-Orig-Subj: Re: [PATCH 2/2] Remove unused stuff
Subject: Re: [PATCH 2/2] Remove unused stuff
References: <20060929032916.AB75B18001A5F@sandeen.net>
In-Reply-To: <20060929032916.AB75B18001A5F@sandeen.net>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
X-Barracuda-Spam-Score: 0.00
X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of
	TAG_LEVEL=3.5 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=9.0 tests=
X-Barracuda-Spam-Report: Code version 3.02, rules version 3.0.22195
	Rule breakdown below
	 pts rule name              description
	---- ---------------------- --------------------------------------------------
X-archive-position: 9127
X-ecartis-version: Ecartis v1.0.0
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
X-original-sender: sandeen@sandeen.net
Precedence: bulk
X-list: xfs
Content-Length: 941
Lines: 27

sandeen@sandeen.net wrote:
> And now that things are static, remove things that are not used.
>
>  xfs_bmap.c       |   36 ------------------
>  xfs_bmap_btree.c |   74 -------------------------------------
>  xfs_bmap_btree.h |   11 -----
>  xfs_da_btree.c   |   15 -------
>  xfs_da_btree.h   |    1
>  xfs_error.c      |   26 -------------
>  xfs_error.h      |    1
>  xfs_rtalloc.c    |  108 -------------------------------------------------------
>  xfs_rtalloc.h    |   18 ---------
>  9 files changed, 290 deletions(-)
>
> Signed-off-by: Eric Sandeen

I suppose that if this code is deemed useful, maybe it could be moved to
some xfs_*_debug.[ch] files so it's not lost....

Hm, or it looks like some of it is used in userspace (xfs_da_log2_roundup),
so it could be moved there if it's never used in the kernel (although from
a quick look, I'm not sure it's doing anything useful there any more either).

-Eric