From: David Masover <ninja@slaphack.com>
Date: Fri, 01 Jul 2005 03:17:24 -0500
Subject: Re: XFS corruption during power-blackout

Chris Wedgwood wrote:
> On Wed, Jun 29, 2005 at 07:53:09AM +0300, Al Boldi wrote:
>
>> What I found were 4 things in the dest dir:
>> 1. Missing Dirs, Files. That's OK.
>> 2. Files of size 0. That's acceptable.
>> 3. Corrupted Files. That's unacceptable.
>> 4. Corrupted Files with original fingerprint. That's ABSOLUTELY
>>    unacceptable.
>
> disks usually default to caching these days and can lose data as a
> result, disable that

Not always possible.  Some disks lie and leave caching on anyway.
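For reference, "disable that" on an IDE disk boils down to an ATA SET FEATURES
command - roughly what "hdparm -W0 /dev/hda" does.  A minimal sketch only,
assuming root access and the HDIO_DRIVE_CMD ioctl plus the constants from
<linux/hdreg.h>; as noted above, drives that lie about caching may ignore or
quietly re-enable the setting:

/* Sketch: turn off the drive's write-back cache on an IDE disk,
 * roughly what "hdparm -W0 /dev/hda" does.  Needs root; IDE-only. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>

int disable_write_cache(const char *dev)        /* e.g. "/dev/hda" */
{
    /* args[0] = ATA command, args[2] = feature register:
     * SETFEATURES_DIS_WCACHE (0x82) disables, 0x02 would re-enable. */
    unsigned char args[4] = { WIN_SETFEATURES, 0, SETFEATURES_DIS_WCACHE, 0 };
    int fd = open(dev, O_RDONLY);

    if (fd < 0) {
        perror(dev);
        return -1;
    }
    if (ioctl(fd, HDIO_DRIVE_CMD, args) != 0) {
        perror("HDIO_DRIVE_CMD(SET FEATURES)");
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}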
From: Jens Axboe <axboe@suse.de>
Date: Fri, 1 Jul 2005 11:24:14 +0200
Subject: Re: XFS corruption during power-blackout

On Fri, Jul 01 2005, David Masover wrote:
> Chris Wedgwood wrote:
>> disks usually default to caching these days and can lose data as a
>> result, disable that
>
> Not always possible.  Some disks lie and leave caching on anyway.

And the same (and other) disks will not honor a flush anyway.  Moral of
that story - avoid bad hardware.

-- 
Jens Axboe
From: Ric Wheeler <ric@emc.com>
Date: Fri, 01 Jul 2005 08:36:44 -0400
Subject: Re: XFS corruption during power-blackout

Chris Wedgwood wrote:
> On Thu, Jun 30, 2005 at 09:44:37PM +0200, Jörn Engel wrote:
>
>> Or do you rather mean that a single sync() should block until all data
>> currently present is hardened?
>
> Logically sync() should return only after all dirty buffers that
> existed before sync() was called are flushed.
>
> Anything more than this (i.e. waiting on newly (since sync was called
> but before it returns) dirtied buffers) could live-lock (actually, this
> used to happen sometimes, I don't know if that's still the case).

I think we need one more stage in sync() behavior to make sure that the
data is safely on the platter.  For file systems with write barrier
support, the last IO should be wrapped with a barrier to flush the disk
cache.

It doesn't seem that sync() does that in today's code.
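A rough userspace approximation of the missing step Ric describes: sync() to
push the dirty buffers out, followed by an explicit ATA FLUSH CACHE so the
drive empties its write-back cache.  A sketch only, assuming an IDE disk, root
access, and HDIO_DRIVE_CMD / WIN_FLUSH_CACHE from <linux/hdreg.h>; SCSI
devices need a different path, and drives that ignore flushes defeat it
entirely:

/* Sketch: flush kernel buffers, then ask the drive to hit the platter. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>

int sync_and_flush(const char *dev)             /* e.g. "/dev/hda" */
{
    unsigned char args[4] = { WIN_FLUSH_CACHE, 0, 0, 0 };
    int fd = open(dev, O_RDONLY);

    if (fd < 0) {
        perror(dev);
        return -1;
    }
    sync();                         /* push the kernel's dirty buffers to the drive */
    if (ioctl(fd, HDIO_DRIVE_CMD, args) != 0)   /* ATA FLUSH CACHE (0xE7) */
        perror("HDIO_DRIVE_CMD(FLUSH CACHE)");
    close(fd);
    return 0;
}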
From: Ric Wheeler <ric@emc.com>
Date: Fri, 01 Jul 2005 08:53:40 -0400
Subject: Re: XFS corruption during power-blackout

Bryan Henderson wrote:
> It's because of the words before that: "everything that was buffered
> when sync() started is hardened before the next sync() returns."  The
> point is that the second sync() is the one that waits (it actually
> waits for the previous one to finish before it starts).  By the way,
> I'm not talking about Linux at this point.  I'm talking about so-called
> POSIX systems in general.
>
> But it does sound like Linux has a pretty firm philosophy of
> synchronous sync (I see it documented in an old man page), so I guess
> it's OK to rely on it.
>
> There are scenarios where you'd rather not have a process tied up while
> syncing takes place.  Stepping back, I would guess the primary original
> purpose of sync() was to allow you to make a sync daemon.  Early Unix
> systems did not have in-kernel safety clean timers.  A user space
> process did that.

We have been playing around with various sync techniques that allow you
to get good data safety for a large batch of files (think of a restore
of a file system, or a migration of lots of files from one server to
another).  You can always restart a restore if the box goes down in the
middle, but once you are done, you want a hard promise that all files
are safely on the disk platter.

Using system-level sync() has all of the disadvantages that you mention,
along with the lack of a per-file-system barrier flush.

You can try to hack in a flush by issuing an fsync() call on one file
per file system after the sync() completes, but whether or not the file
system issues a barrier operation is file system dependent.

Doing an fsync() per file is slow but safe.  Writing the files without
syncing and then reopening and fsync()'ing each one in reasonable batch
sizes is much faster, but still kludgey.

An attractive, but as far as I can see missing, feature would be the
ability to do a file-system-specific sync() command.  Another option
would be a batched, AIO-like fsync() with a bit vector of descriptors to
sync.

Not surprisingly, the best performance is reached when you let the
writing phase work asynchronously, let the underlying file system do its
thing, and wrap it up with a group cache-to-disk sync and a single disk
write cache invalidate (barrier) at the end.
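An illustrative sketch of the "write everything, then reopen and fsync() in
reasonable batches" approach described above.  fsync_paths() is a hypothetical
helper, not an existing API; error handling is minimal, and whether the
drive's own cache gets flushed is still file system and hardware dependent:

/* Sketch: reopen each already-written file and fsync() it, one batch at
 * a time, after the asynchronous copy phase has finished. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int fsync_paths(const char *const paths[], int n)
{
    int i, err = 0;

    for (i = 0; i < n; i++) {
        int fd = open(paths[i], O_RDONLY);

        if (fd < 0 || fsync(fd) != 0) {
            perror(paths[i]);
            err = -1;
        }
        if (fd >= 0)
            close(fd);
    }
    return err;     /* 0 if every file in the batch was pushed to the device */
}

Called once per batch after the copy phase, this sits between the slow but
safe fsync()-per-file and a bare sync().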
From: Jens Axboe <axboe@suse.de>
Date: Fri, 1 Jul 2005 14:56:58 +0200
Subject: Re: XFS corruption during power-blackout

On Fri, Jul 01 2005, Ric Wheeler wrote:
> I think we need one more stage in sync() behavior to make sure that the
> data is safely on the platter.  For file systems with write barrier
> support, the last IO should be wrapped with a barrier to flush the disk
> cache.
>
> It doesn't seem that sync() does that in today's code.

That is true, sync() really only guarantees that the io has been issued
and the drive has signalled completion; with write-back caching on, it
might not be on the platter yet.

-- 
Jens Axboe
From: Al Boldi <a1426z@gawab.com>
Date: Fri, 1 Jul 2005 17:05:11 +0300
Subject: RE: XFS corruption during power-blackout

Jens Axboe wrote:
> On Fri, Jul 01 2005, David Masover wrote:
>> Chris Wedgwood wrote:
>>> disks usually default to caching these days and can lose data as a
>>> result, disable that
>>
>> Not always possible.  Some disks lie and leave caching on anyway.
>
> And the same (and other) disks will not honor a flush anyway.  Moral of
> that story - avoid bad hardware.

1. Sync is not the issue.  The issue is whether a journaled FS can
   detect corrupted files and flag them after a power-blackout!
2. Moral of the story is: what is ext3 doing that the others aren't?
From: Alistair John Strachan <s0348365@sms.ed.ac.uk>
Date: Fri, 1 Jul 2005 17:35:30 +0100
Subject: Re: XFS corruption during power-blackout

On Friday 01 Jul 2005 15:05, Al Boldi wrote:
[...]
> 1. Sync is not the issue.  The issue is whether a journaled FS can
>    detect corrupted files and flag them after a power-blackout!
> 2. Moral of the story is: what is ext3 doing that the others aren't?

I agree.  I've used XFS on Linux for about three years now, and whilst I
love the performance and self-repair attributes of the filesystem, I do
think it leaves a lot to be desired when it comes to file corruption.

In my experience, using a standard XFS log/volume setup on the same
physical, cheap IDE HD, any files open at the time of a power-down or
hardware lockup end up being filled either with zeros or with garbage.

However, I'd far rather lose a few files once in a blue moon than have
to sit through 10-minute fsck's every time the kernel crashes or I kick
out the plugs.

-- 
Cheers,
Alistair.
From: Bryan Henderson <hbryan@us.ibm.com>
Date: Fri, 1 Jul 2005 11:24:20 -0700
Subject: Re: XFS corruption during power-blackout

Ric Wheeler wrote:
[...]
> Doing an fsync() per file is slow but safe.  Writing the files without
> syncing and then reopening and fsync()'ing each one in reasonable batch
> sizes is much faster, but still kludgey.
>
> An attractive, but as far as I can see missing, feature would be the
> ability to do a file-system-specific sync() command.  Another option
> would be a batched, AIO-like fsync() with a bit vector of descriptors
> to sync.  Not surprisingly, the best performance is reached when you
> let the writing phase work asynchronously, let the underlying file
> system do its thing, and wrap it up with a group cache-to-disk sync and
> a single disk write cache invalidate (barrier) at the end.

Hear, hear to all of that.  sync() has gotten to be really
old-fashioned.

You can sync an individual filesystem image, if the filesystem is on a
block device or a suitable simulation of one, by opening a block device
special file for the device and doing fsync().

What you'd really like is to fsync a multi-file unit of work
(transaction) -- and not just among open files.  You'd like to open,
write, and close 1000 files in a single transaction and then commit that
transaction, with no syncing due to timers in the meantime.  If you're
really greedy, you'd also ask for complete rollback if the system fails
before the commit.

I've always found it awkward that any user can do a sync(), when it's a
system-wide control operation.  In the Storage Tank Linux filesystem
driver I designed, you could turn off safety cleaning with a mount
option (and could mount the filesystem multiple times in order to work
with multiple options).  You could also turn it off for a particular
file with a "temporary file" attribute, and a file which was not linked
to a directory was also understood to be temporary.  Safety cleaning is
what sync() and the internal timers do.

Safety cleaning doesn't make much sense unless it goes down inside the
storage device as well.

-- 
Bryan Henderson                          IBM Almaden Research Center
San Jose CA                              Filesystems
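A sketch of the block-device trick Bryan mentions: open the block device
special file and fsync() it to sync that one filesystem image.  Whether the
data also leaves the drive's write cache still depends on the barrier support
discussed earlier in the thread:

/* Sketch: sync a single filesystem image via its block device node. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int sync_one_device(const char *dev)            /* e.g. "/dev/sda1" */
{
    int fd = open(dev, O_RDONLY);

    if (fd < 0) {
        perror(dev);
        return -1;
    }
    if (fsync(fd) != 0) {
        perror("fsync");
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}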
From: David Masover <ninja@slaphack.com>
Date: Fri, 01 Jul 2005 14:58:39 -0500
Subject: Re: XFS corruption during power-blackout

Bryan Henderson wrote:
[...]
> What you'd really like is to fsync a multi-file unit of work
> (transaction) -- and not just among open files.  You'd like to open,
> write, and close 1000 files in a single transaction and then commit
> that transaction, with no syncing due to timers in the meantime.  If
> you're really greedy, you'd also ask for complete rollback if the
> system fails before the commit.

Both of these are planned for Reiser4.  Or is it 4.1?

I would like said interface to be able to not necessarily flush to disk
right away, though.  It should certainly be an option (I'm sure MySQL
would use that option), but sometimes you want the performance,
especially if there are dozens of these transactions firing all at once
-- better to let RAM fill up and then flush them all.
From: Jörn Engel <joern@wohnheim.fh-wedel.de>
Date: Fri, 1 Jul 2005 23:10:06 +0200
Subject: Re: XFS corruption during power-blackout

On Fri, 1 July 2005 14:58:39 -0500, David Masover wrote:
> Bryan Henderson wrote:
> [...]
>> What you'd really like is to fsync a multi-file unit of work
>> (transaction) -- and not just among open files. [...]
>
> Both of these are planned for Reiser4.  Or is it 4.1?

Both are pretty trivial to implement for a tree-based fs like reiserfs.
Non-trivial is the user interface.  Not sure if sys_reiser is the answer
to that.

Jörn

-- 
When people work hard for you for a pat on the back, you've got to give
them that pat.
-- Robert Heinlein
From: David Masover <ninja@slaphack.com>
Date: Fri, 01 Jul 2005 16:39:25 -0500
Subject: Re: XFS corruption during power-blackout

Jörn Engel wrote:
> On Fri, 1 July 2005 14:58:39 -0500, David Masover wrote:
>> Both of these are planned for Reiser4.  Or is it 4.1?
>
> Both are pretty trivial to implement for a tree-based fs like reiserfs.
> Non-trivial is the user interface.  Not sure if sys_reiser is the
> answer to that.

It is intended to be, I think.  But sys_reiser has been pushed off to
4.1, last I checked.

From the general attitude here, I'm guessing that it should *not* be
called sys_reiser.  We're already doing the meta-files interface for
doing anything we want to do with reiser, which means sys_reiser
currently only does two things: it allows simultaneous access to lots of
small files efficiently (versus open()-ing each of them), and it
provides transactions.  While the two may or may not belong in the same
system call, I don't believe they should be Reiser-specific.
From: Net Llama! <netllama@linux-sxs.org>
Date: Sun, 03 Jul 2005 20:08:48 -0700
Subject: grub disaster with FC4 & XFS

Not sure if anyone is aware of this mess of a bug:
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=160444

Basically grub pukes all over itself when trying to interact with XFS
filesystems, resulting in an unbootable, or barely bootable, FC4
installation.

Any gurus here know of a solution?

Yes, I know, this is really a FC/grub issue, but since it's specific to
XFS, I thought someone might have run into it and found a workaround
and/or fix.

-- 
L. Friedman                                    netllama@linux-sxs.org
LlamaLand                        http://netllama.linux-sxs.org
From: Fabrice Ferrero <fabriceferrero@free.fr>
Date: Mon, 04 Jul 2005 07:18:40 +0200
Subject: Re: grub disaster with FC4 & XFS

Net Llama! wrote:
> Not sure if anyone is aware of this mess of a bug:
> https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=160444
>
> Basically grub pukes all over itself when trying to interact with XFS
> filesystems, resulting in an unbootable, or barely bootable, FC4
> installation.
>
> Any gurus here know of a solution?

I don't know if it will help you, but I hit this problem too, and I have
since installed 3 workstations using this workaround:

- When the grub splash screen appears, move to the line for the selected
  kernel and edit the last options: replace "root=LABEL=/" with
  "root=/dev/hda1" (use the slice corresponding to your / partition).
- Try booting like this.
- If all is OK, I suggest you modify /etc/grub.conf and change this line
  to make the fix permanent.

And, for me, I hate the entries in /etc/fstab that use a label instead
of the physical slice, so I replace those as well.

Good luck.

FF
From: Lonni J Friedman <netllama@gmail.com>
Date: Sun, 3 Jul 2005 22:21:55 -0700
Subject: Re: grub disaster with FC4 & XFS

On 7/3/05, Fabrice Ferrero wrote:
[...]
> - When the grub splash screen appears, move to the line for the
>   selected kernel and edit the last options: replace "root=LABEL=/"
>   with "root=/dev/hda1" (use the slice corresponding to your /
>   partition).

I don't get the grub splash screen, or anything at all.  All I get is a
grub prompt with nothing else.  It looks like the stage2 bootloader
isn't getting called, and I can't figure out why.

-- 
L. Friedman                                    netllama@gmail.com
LlamaLand                        http://netllama.linux-sxs.org
From: James Pearson <james-p@moving-picture.com>
Date: Mon, 04 Jul 2005 13:54:40 +0100
Subject: Re: grub disaster with FC4 & XFS

> Not sure if anyone is aware of this mess of a bug:
> https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=160444
>
> Basically grub pukes all over itself when trying to interact with XFS
> filesystems, resulting in an unbootable, or barely bootable, FC4
> installation.
>
> Any gurus here know of a solution?

Not sure if this is connected, but the FC installer (anaconda) uses the
code in 'booty' to install the bootloader - this has a workaround for
grub on XFS which uses xfs_freeze - see the thread that starts with:

http://marc.theaimsgroup.com/?l=linux-xfs&m=108009684613605&w=2

However, anaconda doesn't include /usr/sbin/xfs_freeze (or
/usr/sbin/xfs_io) on the stage2 installer images.

As I said above, I don't know if it's connected, as I haven't installed
FC4.  Unfortunately, it's not straightforward to get xfs_freeze and
xfs_io included unless you patch the anaconda src RPM and rebuild the
install media ...

James Pearson
From: Ethan Benson <erbenson@alaska.net>
Date: Mon, 4 Jul 2005 12:22:55 -0800
Subject: Re: grub disaster with FC4 & XFS

On Mon, Jul 04, 2005 at 01:54:40PM +0100, James Pearson wrote:
> Not sure if this is connected, but the FC installer (anaconda) uses the
> code in 'booty' to install the bootloader - this has a workaround for
> grub on XFS which uses xfs_freeze - see the thread that starts with:
>
> http://marc.theaimsgroup.com/?l=linux-xfs&m=108009684613605&w=2

Which is a kludge, and an unnecessary one at that.

If you install grub as follows, it does not modify XFS filesystems via
raw devices, but instead through the standard unix interfaces:

embed /boot/grub/xfs_stage1_5 (hd0)

 xx sectors embedded.

install --stage2=/boot/grub/stage2 /boot/grub/stage1 (hd0) (hd0)1+xx p (hd0,1)/boot/grub/stage2 /etc/grub.conf

Note that the xx in the first message is the value you need to use in
the (hd0)1+xx part above.  The last argument is the path to the config
file; you can have that wherever you like.

What should be done is to rewrite the setup command in grub to perform
the above, instead of the broken method it currently uses.

-- 
Ethan Benson
http://www.alaska.net/~erbenson/
From: Net Llama! <netllama@gmail.com>
Date: Mon, 04 Jul 2005 13:32:21 -0700
Subject: Re: grub disaster with FC4 & XFS

On 07/04/2005 01:22 PM, Ethan Benson wrote:
[...]
> If you install grub as follows, it does not modify XFS filesystems via
> raw devices, but instead through the standard unix interfaces:
>
> embed /boot/grub/xfs_stage1_5 (hd0)
>
>  xx sectors embedded.
>
> install --stage2=/boot/grub/stage2 /boot/grub/stage1 (hd0) (hd0)1+xx p (hd0,1)/boot/grub/stage2 /etc/grub.conf
>
> What should be done is to rewrite the setup command in grub to perform
> the above, instead of the broken method it currently uses.

Actually, the command you noted above is what the FC4 installer was
trying to do when grub went south.

I did finally manage to fix this mess by booting with knoppix, purging
everything in /boot/grub, repopulating it with the templates that ship
with grub, and running 'setup (hd0)' again.  I have no clue why all of
that was necessary.

-- 
L. Friedman                                    netllama@linux-sxs.org
LlamaLand                        http://netllama.linux-sxs.org
From: Ethan Benson <erbenson@alaska.net>
Date: Mon, 4 Jul 2005 19:25:39 -0800
Subject: Re: grub disaster with FC4 & XFS

On Mon, Jul 04, 2005 at 01:32:21PM -0700, Net Llama! wrote:
> I did finally manage to fix this mess by booting with knoppix, purging
> everything in /boot/grub, repopulating it with the templates that ship
> with grub, and running 'setup (hd0)' again.  I have no clue why all of
> that was necessary.

That's one thing I really dislike about grub: there is no reason it
should be this much of a PITA to install a bootloader.  lilo got this
right - you set up a lilo.conf and run /sbin/lilo, done, simple.  I
managed to make the PowerPC bootloader yaboot just as simple on the
common machines, and there is a whole lot more nonsense involved on ppc
than there is on x86.

Why can't grub just use a normal config file (in /etc where it belongs)
and just install with a single invocation of a *noninteractive* command?

There must be a reason; why else would so many installer writers put up
with all this hell for so long, just because grub is so revered for
being better than lilo...  I suppose it's akin to debian and its
oh-so-derided installer: `who cares, you only have to install it once'.

-- 
Ethan Benson
http://www.alaska.net/~erbenson/
From: Net Llama! <netllama@gmail.com>
Date: Mon, 04 Jul 2005 21:09:37 -0700
Subject: Re: grub disaster with FC4 & XFS

On 07/04/2005 08:25 PM, Ethan Benson wrote:
[...]
> Why can't grub just use a normal config file (in /etc where it belongs)
> and just install with a single invocation of a *noninteractive*
> command?
>
> There must be a reason; why else would so many installer writers put up
> with all this hell for so long, just because grub is so revered for
> being better than lilo...

Except that for grub, you don't just install it once.  Every time I
upgrade Fedora, it needs to get installed again, and I go through this
fiasco again.  Last night, I was about this -> <- close to just
installing LILO and moving on with my life.

-- 
L. Friedman                                    netllama@linux-sxs.org
LlamaLand                        http://netllama.linux-sxs.org
From: Ethan Benson <erbenson@alaska.net>
Date: Mon, 4 Jul 2005 20:24:39 -0800
Subject: Re: grub disaster with FC4 & XFS

On Mon, Jul 04, 2005 at 09:09:37PM -0700, Net Llama! wrote:
> Except that for grub, you don't just install it once.  Every time I
> upgrade Fedora, it needs to get installed again, and I go through this
> fiasco again.  Last night, I was about this -> <- close to just
> installing LILO and moving on with my life.

You could use debian stable like me; then you only upgrade the OS every
3 years or so, and grub can pretty much be installed once and forgotten
about for at least that long (you don't even strictly have to upgrade
grub on disk when the package or OS is upgraded either).

Heh, now that I have probably offended just about everyone I'll shut up
now ;-p

-- 
Ethan Benson
http://www.alaska.net/~erbenson/
heh now that I have probably offended just about everyone ill shut up now ;-p --=20 Ethan Benson http://www.alaska.net/~erbenson/ --v+yx4Fq+rcVvnsQ8 Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iEYEARECAAYFAkLKC4cACgkQJKx7GixEevz/ogCgkR3VTqAEFwLH3sWDrt4BjIPN 7aIAn0G0W6NLAug/shBuHOadHcQkew64 =kz30 -----END PGP SIGNATURE----- --v+yx4Fq+rcVvnsQ8-- From owner-linux-xfs@oss.sgi.com Mon Jul 4 21:59:29 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 04 Jul 2005 21:59:33 -0700 (PDT) Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.198]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j654xSH9013218 for ; Mon, 4 Jul 2005 21:59:28 -0700 Received: by wproxy.gmail.com with SMTP id i20so857155wra for ; Mon, 04 Jul 2005 21:57:54 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:organization:user-agent:x-accept-language:mime-version:to:cc:subject:references:in-reply-to:content-type:content-transfer-encoding:from; b=UG2KsbZSJncmQQK8++Zfy9KY7/RqEajVtHzkUiYCggyTyzeHElo/PbGDEcbVAwPDKKq3HCxqqNLFCfWECUf0DZ/N2OA1sdFOecZQl+wjPpMZW26US5IJL+S/Wjgt4Mq9YMHUuIE4Zds+GcKHjd2CQmbaIvyd9gzI0Ooa4Gj3Yk8= Received: by 10.54.73.15 with SMTP id v15mr4285806wra; Mon, 04 Jul 2005 21:57:54 -0700 (PDT) Received: from ?10.0.0.1? ([67.121.168.179]) by mx.gmail.com with ESMTP id 26sm766986wrl.2005.07.04.21.57.53; Mon, 04 Jul 2005 21:57:54 -0700 (PDT) Message-ID: <42CA1355.9040700@linux-sxs.org> Date: Mon, 04 Jul 2005 21:57:57 -0700 Organization: HAL V User-Agent: Mozilla Thunderbird 1.0.2 (X11/20050317) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Ethan Benson CC: linux-xfs@oss.sgi.com Subject: Re: grub disaster with FC4 & XFS References: <42C93190.8070804@moving-picture.com> <20050704202255.GF25980@plato.local.lan> <42C99CD5.7010409@linux-sxs.org> <20050705032539.GG25980@plato.local.lan> <42CA0801.3000900@linux-sxs.org> <20050705042439.GH25980@plato.local.lan> In-Reply-To: <20050705042439.GH25980@plato.local.lan> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit From: Net Llama! X-archive-position: 5562 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: netllama@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 1236 Lines: 30 On 07/04/2005 09:24 PM, Ethan Benson wrote: > On Mon, Jul 04, 2005 at 09:09:37PM -0700, Net Llama! wrote: >> >> Except that for grub, you don't just install it once. Everytime I >> upgrade Fedora, it needs to get installed again, and I go through this >> fiasco again. Last night, I was about this -> <- close to just >> installing LILO and moving on with my life. > > you could use debian stable like me, then you only upgrade the OS > every 3 years or so, then grub can pretty much be installed once and > forgotten about for at least that long (you don't even strictly have > to upgrade grub on disk when the package or OS is upgraded either). > > heh now that I have probably offended just about everyone ill shut up > now ;-p > heh. debian stable is about as much of a polar opposite to fedora as you can get, other than perhaps some harcore gentoo stuff. thanks, but i'll stick with fedora. at least i know that i'll be fighting with grub every 6 months ;) -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ L. 
Friedman netllama@linux-sxs.org LlamaLand http://netllama.linux-sxs.org 21:55:01 up 84 days, 8:14, 1 user, load average: 0.21, 0.42, 0.41 From owner-linux-xfs@oss.sgi.com Tue Jul 5 08:57:07 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 05 Jul 2005 08:57:15 -0700 (PDT) Received: from kevlar.burdell.org (66-23-228-155.clients.speedfactory.net [66.23.228.155]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j65Fv6H9005861 for ; Tue, 5 Jul 2005 08:57:06 -0700 Received: by kevlar.burdell.org (Postfix, from userid 1189) id D521F66C9B; Tue, 5 Jul 2005 11:53:22 -0400 (EDT) Date: Tue, 5 Jul 2005 11:53:22 -0400 From: Sonny Rao To: Chris Wedgwood Cc: Bryan Henderson , Al Boldi , linux-fsdevel@vger.kernel.org, linux-xfs@oss.sgi.com, Steve Lord , "'Nathan Scott'" , reiserfs-list@namesys.com Subject: Re: XFS corruption during power-blackout Message-ID: <20050705155322.GB13262@kevlar.burdell.org> Mail-Followup-To: Sonny Rao , Chris Wedgwood , Bryan Henderson , Al Boldi , linux-fsdevel@vger.kernel.org, linux-xfs@oss.sgi.com, Steve Lord , 'Nathan Scott' , reiserfs-list@namesys.com References: <254889.27725ab660aa106eb6acc07307d71ef1fbd5b6fd366aebef9e2f611750fbcb467e46e8a4.IBX@taniwha.stupidest.org> <054069.b93858d6b97c07747dc32be2dd8981b254d981528781006053dce7be58de88865a43b162.IBX@taniwha.stupidest.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <054069.b93858d6b97c07747dc32be2dd8981b254d981528781006053dce7be58de88865a43b162.IBX@taniwha.stupidest.org> User-Agent: Mutt/1.4.2.1i X-archive-position: 5564 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: sonny@burdell.org Precedence: bulk X-list: linux-xfs Content-Length: 1031 Lines: 26 On Thu, Jun 30, 2005 at 11:46:27AM -0700, Chris Wedgwood wrote: > On Thu, Jun 30, 2005 at 12:30:20PM -0400, Bryan Henderson wrote: > > > For another point of reference - were these ATA (personal class) or > > SCSI (commercial class) drives or both? > > IDE were Maxtor some old Maxtor 60GB disks and some not-so-old 200GB > WD drives. Maxtor has 2MB cache. WD 8MB. > > The SCSI disks where 10K RPM SCA somethings. I think they were Segate > (they've since been taken or else I would check). I have no idea what > the cache is on those. > > > Is write caching the default on typical SCSI devices? > > I'm not sure. It seemed to be off by default for the SCSI disks and > on by default for IDE when I checked. I can't rule out the > bios/controller doing something there though. On all the SCSI drives shipped w/ servers write-caching is turned off for this very reason. This is true of all the IBM equipment I've seen, not sure about the smaller mom & pop outfits or drives sold through retail channels though. 
Sonny From owner-linux-xfs@oss.sgi.com Tue Jul 5 08:53:06 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 05 Jul 2005 08:53:17 -0700 (PDT) Received: from kevlar.burdell.org (66-23-228-155.clients.speedfactory.net [66.23.228.155]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j65Fr5H9005518 for ; Tue, 5 Jul 2005 08:53:06 -0700 Received: by kevlar.burdell.org (Postfix, from userid 1189) id C3F1D66C9B; Tue, 5 Jul 2005 11:49:19 -0400 (EDT) Date: Tue, 5 Jul 2005 11:49:19 -0400 From: Sonny Rao To: Al Boldi Cc: "'Jens Axboe'" , "'David Masover'" , "'Chris Wedgwood'" , "'Nathan Scott'" , linux-xfs@oss.sgi.com, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, reiserfs-list@namesys.com Subject: Re: XFS corruption during power-blackout Message-ID: <20050705154919.GA13262@kevlar.burdell.org> Mail-Followup-To: Sonny Rao , Al Boldi , 'Jens Axboe' , 'David Masover' , 'Chris Wedgwood' , 'Nathan Scott' , linux-xfs@oss.sgi.com, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, reiserfs-list@namesys.com References: <20050701092412.GD2243@suse.de> <200507011405.RAA27425@raad.intranet> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200507011405.RAA27425@raad.intranet> User-Agent: Mutt/1.4.2.1i X-archive-position: 5563 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: sonny@burdell.org Precedence: bulk X-list: linux-xfs Content-Length: 1511 Lines: 42 On Fri, Jul 01, 2005 at 05:05:11PM +0300, Al Boldi wrote: > Jens Axboe wrote: { > On Fri, Jul 01 2005, David Masover wrote: > > Chris Wedgwood wrote: > > >On Wed, Jun 29, 2005 at 07:53:09AM +0300, Al Boldi wrote: > > > > > > > > >>What I found were 4 things in the dest dir: > > >>1. Missing Dirs,Files. That's OK. > > >>2. Files of size 0. That's acceptable. > > >>3. Corrupted Files. That's unacceptable. > > >>4. Corrupted Files with original fingerprint. That's ABSOLUTELY > > >>unacceptable. > > > > > > > > >disk usually default to caching these days and can lose data as a > > >result, disable that > > > > Not always possible. Some disks lie and leave caching on anyway. > > And the same (and others) disks will not honor a flush anyways. > Moral of that story - avoid bad hardware. > } > > 1. Sync is not the issue. The issue is whether a journaled FS can detect > corrupted files and flag them after a power-blackout! Journaling implies filesystem consistency, not data integrity, AFAIK. > 2. Moral of the story is: What's ext3 doing the others aren't? Ext3 has stronger guaranties than basic filesystem consistency. I.e. in ordered mode, file data is always written before metadata, so the worst that could happen is a growing file's new data is written but the metadata isn't updated before a power failure... so the new writes wouldn't be seen afterwards. You should try the same test w/ ext3 in "writeback" mode and see if it fares better or worse in terms of file corruption. 
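For applications that cannot rely on data journaling at all, the usual belt-and-braces pattern is to order the writes themselves: write to a temporary name, flush the data, rename into place, then flush the directory. A minimal sketch in C (not anything proposed in this thread; the paths, buffer sizes and error handling are illustrative only):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write buf to dir/name so that after a crash the file is either the
 * old complete version or the new complete version, never a torn one. */
static int write_durably(const char *dir, const char *name,
                         const void *buf, size_t len)
{
    char tmp[4096], final[4096];
    snprintf(tmp, sizeof(tmp), "%s/.%s.tmp", dir, name);
    snprintf(final, sizeof(final), "%s/%s", dir, name);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len || fdatasync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);

    if (rename(tmp, final) != 0)    /* atomic replacement of the old copy */
        return -1;

    int dfd = open(dir, O_RDONLY);  /* make the directory entry durable too */
    if (dfd < 0)
        return -1;
    int ret = fsync(dfd);
    close(dfd);
    return ret;
}

int main(void)
{
    const char payload[] = "known fingerprint\n";
    return write_durably("/tmp", "testfile", payload, sizeof(payload) - 1) ? 1 : 0;
}

Whether this survives a power cut still depends on the drive honoring the flush, which is the write-cache problem discussed earlier in this thread.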
Sonny From owner-linux-xfs@oss.sgi.com Tue Jul 5 10:29:26 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 05 Jul 2005 10:29:36 -0700 (PDT) Received: from raad.intranet ([212.76.86.91]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j65HTNH9012988 for ; Tue, 5 Jul 2005 10:29:24 -0700 Received: from i810 (rescueCli [10.254.254.253]) by raad.intranet (8.8.7/8.8.7) with ESMTP id UAA07417; Tue, 5 Jul 2005 20:25:09 +0300 Message-Id: <200507051725.UAA07417@raad.intranet> From: "Al Boldi" To: "'Sonny Rao'" Cc: "'Jens Axboe'" , "'David Masover'" , "'Chris Wedgwood'" , "'Nathan Scott'" , , , , Subject: RE: XFS corruption during power-blackout Date: Tue, 5 Jul 2005 20:25:11 +0300 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Office Outlook, Build 11.0.5510 In-Reply-To: <20050705154919.GA13262@kevlar.burdell.org> X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000 Thread-Index: AcWBeOboEysnS0HNSRenkfASCK84rQADFsog X-archive-position: 5565 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: a1426z@gawab.com Precedence: bulk X-list: linux-xfs Content-Length: 830 Lines: 23 Sonny Rao wrote: { > > >On Wed, Jun 29, 2005 at 07:53:09AM +0300, Al Boldi wrote: > > >>What I found were 4 things in the dest dir: > > >>1. Missing Dirs,Files. That's OK. > > >>2. Files of size 0. That's acceptable. > > >>3. Corrupted Files. That's unacceptable. > > >>4. Corrupted Files with original fingerprint. That's ABSOLUTELY > > >>unacceptable. > > > > 2. Moral of the story is: What's ext3 doing the others aren't? Ext3 has stronger guaranties than basic filesystem consistency. I.e. in ordered mode, file data is always written before metadata, so the worst that could happen is a growing file's new data is written but the metadata isn't updated before a power failure... so the new writes wouldn't be seen afterwards. } Sonny, Thanks for you input! Is there an option in XFS,ReiserFS,JFS to enable ordered mode? 
From owner-linux-xfs@oss.sgi.com Tue Jul 5 11:14:43 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 05 Jul 2005 11:14:46 -0700 (PDT) Received: from kevlar.burdell.org (66-23-228-155.clients.speedfactory.net [66.23.228.155]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j65IEgH9016535 for ; Tue, 5 Jul 2005 11:14:42 -0700 Received: by kevlar.burdell.org (Postfix, from userid 1189) id CC8C666C81; Tue, 5 Jul 2005 14:10:57 -0400 (EDT) Date: Tue, 5 Jul 2005 14:10:57 -0400 From: Sonny Rao To: Al Boldi Cc: "'Jens Axboe'" , "'David Masover'" , "'Chris Wedgwood'" , "'Nathan Scott'" , linux-xfs@oss.sgi.com, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, reiserfs-list@namesys.com Subject: Re: XFS corruption during power-blackout Message-ID: <20050705181057.GA16422@kevlar.burdell.org> Mail-Followup-To: Sonny Rao , Al Boldi , 'Jens Axboe' , 'David Masover' , 'Chris Wedgwood' , 'Nathan Scott' , linux-xfs@oss.sgi.com, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, reiserfs-list@namesys.com References: <20050705154919.GA13262@kevlar.burdell.org> <200507051725.UAA07417@raad.intranet> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200507051725.UAA07417@raad.intranet> User-Agent: Mutt/1.4.2.1i X-archive-position: 5566 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: sonny@burdell.org Precedence: bulk X-list: linux-xfs Content-Length: 1270 Lines: 32 On Tue, Jul 05, 2005 at 08:25:11PM +0300, Al Boldi wrote: > Sonny Rao wrote: { > > > >On Wed, Jun 29, 2005 at 07:53:09AM +0300, Al Boldi wrote: > > > >>What I found were 4 things in the dest dir: > > > >>1. Missing Dirs,Files. That's OK. > > > >>2. Files of size 0. That's acceptable. > > > >>3. Corrupted Files. That's unacceptable. > > > >>4. Corrupted Files with original fingerprint. That's ABSOLUTELY > > > >>unacceptable. > > > > > > 2. Moral of the story is: What's ext3 doing the others aren't? > > Ext3 has stronger guaranties than basic filesystem consistency. > I.e. in ordered mode, file data is always written before metadata, so the > worst that could happen is a growing file's new data is written but the > metadata isn't updated before a power failure... so the new writes wouldn't > be seen afterwards. > > } > > Sonny, > Thanks for you input! > Is there an option in XFS,ReiserFS,JFS to enable ordered mode? I beleive in newer 2.6 kernels that Reiser has ordered mode (IIRC, courtesy of Chris Mason), but XFS and JFS do not support it. I seem to remember Shaggy (JFS maintainer) saying in older 2.4 kernels he tried to write file data before metadata but had to change that behavior in 2.6, not really sure why or anything beyond that. 
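If you want to try that on a kernel whose reiserfs carries the data-logging patches, the mode is selected per mount. A minimal sketch using mount(2); the device and mount point below are placeholders:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* equivalent to: mount -t reiserfs -o data=ordered /dev/sda2 /mnt */
    if (mount("/dev/sda2", "/mnt", "reiserfs", 0, "data=ordered") != 0) {
        perror("mount");
        return 1;
    }
    return 0;
}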
Sonny From owner-linux-xfs@oss.sgi.com Tue Jul 5 12:26:37 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 05 Jul 2005 12:26:47 -0700 (PDT) Received: from lee.int-rz.hamburg.de (frontend-1.hamburg.de [212.1.41.126]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j65JQaH9024484 for ; Tue, 5 Jul 2005 12:26:37 -0700 Received: from [213.39.214.19] (helo=c214019.adsl.hansenet.de) by lee.int-rz.hamburg.de with esmtps (TLSv1:RC4-MD5:128) (Exim 4.51) id 1Dpt2E-0003hU-G3; Tue, 05 Jul 2005 21:24:47 +0200 From: Dieter =?iso-8859-1?q?N=FCtzel?= Organization: DN To: reiserfs-list@namesys.com Subject: Re: XFS corruption during power-blackout Date: Tue, 5 Jul 2005 21:24:48 +0200 User-Agent: KMail/1.8.1 Cc: Sonny Rao , Al Boldi , "'Jens Axboe'" , "'David Masover'" , "'Chris Wedgwood'" , "'Nathan Scott'" , linux-xfs@oss.sgi.com, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org References: <20050705154919.GA13262@kevlar.burdell.org> <200507051725.UAA07417@raad.intranet> <20050705181057.GA16422@kevlar.burdell.org> In-Reply-To: <20050705181057.GA16422@kevlar.burdell.org> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Disposition: inline Message-Id: <200507052124.49199.Dieter.Nuetzel@hamburg.de> Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j65JQbH9024486 X-archive-position: 5567 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: Dieter.Nuetzel@hamburg.de Precedence: bulk X-list: linux-xfs Content-Length: 1549 Lines: 47 Am Dienstag, 5. Juli 2005 20:10 schrieb Sonny Rao: > On Tue, Jul 05, 2005 at 08:25:11PM +0300, Al Boldi wrote: > > Sonny Rao wrote: { > > > > > > >On Wed, Jun 29, 2005 at 07:53:09AM +0300, Al Boldi wrote: > > > > >>What I found were 4 things in the dest dir: > > > > >>1. Missing Dirs,Files. That's OK. > > > > >>2. Files of size 0. That's acceptable. > > > > >>3. Corrupted Files. That's unacceptable. > > > > >>4. Corrupted Files with original fingerprint. That's ABSOLUTELY > > > > >>unacceptable. > > > > > > 2. Moral of the story is: What's ext3 doing the others aren't? > > > > Ext3 has stronger guaranties than basic filesystem consistency. > > I.e. in ordered mode, file data is always written before metadata, so the > > worst that could happen is a growing file's new data is written but the > > metadata isn't updated before a power failure... so the new writes > > wouldn't be seen afterwards. > > > > } > > > > Sonny, > > Thanks for you input! > > Is there an option in XFS,ReiserFS,JFS to enable ordered mode? > > I beleive in newer 2.6 kernels that Reiser has ordered mode (IIRC, courtesy > of Chris Mason), And SuSE, ack. ftp://ftp.suse.com/pub/people/mason/patches/data-logging They are around some time ;-) > but XFS and JFS do not support it. I seem to remember > Shaggy (JFS maintainer) saying in older 2.4 kernels he tried to write > file data before metadata but had to change that behavior in 2.6, not > really sure why or anything beyond that. 
Greetings, Dieter -- Dieter Nützel @home: From owner-linux-xfs@oss.sgi.com Tue Jul 5 16:11:01 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 05 Jul 2005 16:11:06 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j65NAxH9012414 for ; Tue, 5 Jul 2005 16:11:00 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA08503; Wed, 6 Jul 2005 09:09:20 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j65N9Pkt2766544; Wed, 6 Jul 2005 09:09:25 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id j65N2cm8000902; Wed, 6 Jul 2005 09:02:38 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id j65N2aE3000900; Wed, 6 Jul 2005 09:02:36 +1000 Date: Wed, 6 Jul 2005 09:02:36 +1000 From: Nathan Scott To: Andy Cc: linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: Online resizing devices Message-ID: <20050705230236.GA812@frodo> References: <20050705160815.GA14324@thumper2> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20050705160815.GA14324@thumper2> User-Agent: Mutt/1.5.3i Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j65NB1H9012416 X-archive-position: 5568 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 1519 Lines: 38 On Tue, Jul 05, 2005 at 11:08:15AM -0500, Andy wrote: > I'd like to do an online resize of and XFS filesystem on a non-partitioned > device. But, I always have to reboot to do so. > > Say I have a sdc with 16777216 blocks and expand it on the SAN > to have 17825792 blocks, and rescan the device. > > The xfs_growfs does not see the size, nor does blockdev --getsz /dev/sdc, > however, the I know the rescan worked because /sys/block/sdc/size now is > 17825792 instead of 16777216. > I've wondered why this is so too occasionally. AFAICT, we are doing everything correctly from the filesystem point of view, we are just not being told of the larger device size when we query it. So, it was interesting to hear that sysfs reports the correct size.. From a quick look through the code - it seems sysfs reports the value from the struct genhd ->capacity field (get_capacity and set_capacity from ). Whereas the other block device interfaces are looking at the struct block_device bd_inode ->i_size field. So, it kinda looks like a coherency issue between those two beasts - someone more familiar with the block layer may be able to suggest a fix (Christoph/Jens/Al/...). > Is there some reason for this? Is there a fix for it? I'm not using any > volume management stuff yet but would like to be able to grow our > filesystems without having to reboot to do so. I'm not aware of a fix, and the last time I looked using a volume manager doesn't resolve this issue either. cheers. 
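A quick way to see the two answers disagree from userspace, keeping Andy's /dev/sdc as the example device: compare the sysfs figure (the genhd capacity, in 512-byte sectors) with what the block device node reports through BLKGETSIZE64 (bytes, backed by bd_inode->i_size). A sketch:

#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    unsigned long long sysfs_sectors = 0, bdev_bytes = 0;

    FILE *f = fopen("/sys/block/sdc/size", "r");
    if (!f || fscanf(f, "%llu", &sysfs_sectors) != 1) {
        perror("sysfs size");
        return 1;
    }
    fclose(f);

    int fd = open("/dev/sdc", O_RDONLY);
    if (fd < 0 || ioctl(fd, BLKGETSIZE64, &bdev_bytes) != 0) {
        perror("BLKGETSIZE64");
        return 1;
    }
    close(fd);

    printf("sysfs: %llu sectors (%llu bytes)\n",
           sysfs_sectors, sysfs_sectors * 512ULL);
    printf("bdev:  %llu bytes\n", bdev_bytes);
    printf("%s\n", sysfs_sectors * 512ULL == bdev_bytes
                   ? "consistent" : "MISMATCH (block device size looks stale)");
    return 0;
}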
-- Nathan From owner-linux-xfs@oss.sgi.com Tue Jul 5 21:27:06 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 05 Jul 2005 21:27:33 -0700 (PDT) Received: from raad.intranet ([213.184.187.7]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j664R3H9011580 for ; Tue, 5 Jul 2005 21:27:04 -0700 Received: from i810 (rescueCli [10.254.254.253]) by raad.intranet (8.8.7/8.8.7) with ESMTP id HAA27591; Wed, 6 Jul 2005 07:24:04 +0300 Message-Id: <200507060424.HAA27591@raad.intranet> From: "Al Boldi" To: "'Sonny Rao'" Cc: "'Jens Axboe'" , "'David Masover'" , "'Chris Wedgwood'" , "'Nathan Scott'" , , , , Subject: RE: XFS corruption during power-blackout Date: Wed, 6 Jul 2005 07:24:03 +0300 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Office Outlook, Build 11.0.5510 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000 In-Reply-To: <20050705181057.GA16422@kevlar.burdell.org> Thread-Index: AcWBjLAcL2Ef45tFSJOcSXMRBNyXrwAVUTmQ X-archive-position: 5569 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: a1426z@gawab.com Precedence: bulk X-list: linux-xfs Content-Length: 976 Lines: 24 Sonny Rao wrote: { > > > >On Wed, Jun 29, 2005 at 07:53:09AM +0300, Al Boldi wrote: > > > >>What I found were 4 things in the dest dir: > > > >>1. Missing Dirs,Files. That's OK. > > > >>2. Files of size 0. That's acceptable. > > > >>3. Corrupted Files. That's unacceptable. > > > >>4. Corrupted Files with original fingerprint. That's ABSOLUTELY > > > >>unacceptable. > > > > > > 2. Moral of the story is: What's ext3 doing the others aren't? > > Ext3 has stronger guaranties than basic filesystem consistency. > I.e. in ordered mode, file data is always written before metadata, so > the worst that could happen is a growing file's new data is written > but the metadata isn't updated before a power failure... so the new > writes wouldn't be seen afterwards. > I believe in newer 2.6 kernels that Reiser has ordered mode (IIRC, courtesy of Chris Mason), but XFS and JFS do not support it. } Was ordered mode disabled/removed when XFS was add to the vanilla-kernel? 
From owner-linux-xfs@oss.sgi.com Tue Jul 5 21:55:11 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 05 Jul 2005 21:55:15 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j664t8H9013779 for ; Tue, 5 Jul 2005 21:55:11 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA16432; Wed, 6 Jul 2005 14:53:20 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j664rMkt2776879; Wed, 6 Jul 2005 14:53:23 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id j664kVm8001883; Wed, 6 Jul 2005 14:46:32 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id j664kQEe001881; Wed, 6 Jul 2005 14:46:26 +1000 Date: Wed, 6 Jul 2005 14:46:26 +1000 From: Nathan Scott To: Al Boldi Cc: "'Sonny Rao'" , "'Jens Axboe'" , "'David Masover'" , "'Chris Wedgwood'" , linux-xfs@oss.sgi.com, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, reiserfs-list@namesys.com Subject: Re: XFS corruption during power-blackout Message-ID: <20050706044626.GA1773@frodo> References: <20050705181057.GA16422@kevlar.burdell.org> <200507060424.HAA27591@raad.intranet> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200507060424.HAA27591@raad.intranet> User-Agent: Mutt/1.5.3i X-archive-position: 5570 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 198 Lines: 10 On Wed, Jul 06, 2005 at 07:24:03AM +0300, Al Boldi wrote: > Was ordered mode disabled/removed when XFS was add to the vanilla-kernel? No, XFS has never supported such a mode. cheers. 
-- Nathan From owner-linux-xfs@oss.sgi.com Wed Jul 6 04:29:02 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 06 Jul 2005 04:29:08 -0700 (PDT) Received: from smtp.nildram.co.uk (smtp.nildram.co.uk [195.112.4.54]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j66BSxH9018909 for ; Wed, 6 Jul 2005 04:29:02 -0700 Received: from doufu (unknown [82.133.120.56]) by smtp.nildram.co.uk (Postfix) with ESMTP id 604F524D010 for ; Wed, 6 Jul 2005 12:27:21 +0100 (BST) Received: from xiao.siksai.co.uk ([82.133.8.12] ident=rhowe) by doufu with smtp (Exim 4.50) id 1Dq83e-000127-7g for linux-xfs@oss.sgi.com; Wed, 06 Jul 2005 12:27:14 +0100 Received: by xiao.siksai.co.uk (sSMTP sendmail emulation); Wed, 6 Jul 2005 12:27:20 +0100 From: "Russell Howe" Date: Wed, 6 Jul 2005 12:27:20 +0100 To: linux-xfs@oss.sgi.com Subject: Re: XFS corruption during power-blackout Message-ID: <20050706112719.GA18969@xiao.rsnet> Mail-Followup-To: linux-xfs@oss.sgi.com References: <20050705181057.GA16422@kevlar.burdell.org> <200507060424.HAA27591@raad.intranet> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200507060424.HAA27591@raad.intranet> User-Agent: Mutt/1.5.9i X-archive-position: 5572 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: rhowe@siksai.co.uk Precedence: bulk X-list: linux-xfs Content-Length: 1664 Lines: 38 On Wed, Jul 06, 2005 at 07:24:03AM +0300, Al Boldi wrote: > Was ordered mode disabled/removed when XFS was add to the vanilla-kernel? See the FAQ: http://oss.sgi.com/projects/xfs/faq.html#nulls XFS only journals metadata, not data. So, you are supposed to get a consistent filesystem structure, but your data consistency isn't guaranteed. That's not to say XFS is especially cavalier about your data, but just that the journalling functionality in XFS isn't journalling data writes. I think quite a lot of work was done a year or two ago to make it less likely that you would lose data after a crash or power loss, but XFS makes no guarantees (although, I think if you read back through previous postings, if you use fsync or fdatasync, then you should be able to guarantee that your data was written out). I think you can also mount with -o sync to make all writes synchronous (although obviously, performance suffers), and you can also (thanks to the hard work of a contributor whose name escapes me) use chattr to set the 'sync' attribute on files and directories to specify that I/O to those files is always synchronous (ignore the man page for chattr that says it only works on ext[23]. XFS now supports the ioctls too). There are probably other things I'm missing here, and I know nothing about XFS internals and so on, but there are others on this list who can probably fill in those blanks if there's anything specific you need to know (and who can point out all the errors and omissions in the above too, no doubt :) -- Russell Howe | Why be just another cog in the machine, rhowe@siksai.co.uk | when you can be the spanner in the works? 
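For reference, the per-file sync attribute mentioned above is the same inode-flag interface that chattr drives. A sketch of setting it directly, assuming current headers where the ioctls are named FS_IOC_GETFLAGS/FS_IOC_SETFLAGS and the bit FS_SYNC_FL in <linux/fs.h> (2.6-era headers spelled them EXT2_IOC_GETFLAGS/EXT2_IOC_SETFLAGS and EXT2_SYNC_FL); the path is just an example:

#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    /* equivalent to: chattr +S /data/journal.dat */
    int fd = open("/data/journal.dat", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    int flags = 0;
    if (ioctl(fd, FS_IOC_GETFLAGS, &flags) != 0) {
        perror("FS_IOC_GETFLAGS");
        close(fd);
        return 1;
    }
    flags |= FS_SYNC_FL;               /* all writes to this file become synchronous */
    if (ioctl(fd, FS_IOC_SETFLAGS, &flags) != 0) {
        perror("FS_IOC_SETFLAGS");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}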
From owner-linux-xfs@oss.sgi.com Wed Jul 6 19:59:24 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 06 Jul 2005 19:59:29 -0700 (PDT) Received: from mercury.acsalaska.net (mercury.acsalaska.net [209.112.173.226]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j672xNH9027025 for ; Wed, 6 Jul 2005 19:59:23 -0700 Received: from erbenson.alaska.net (66-230-90-226-dial-as4.nwc.acsalaska.net [66.230.90.226]) by mercury.acsalaska.net (8.13.4/8.13.4) with ESMTP id j672vhfv077029 for ; Wed, 6 Jul 2005 18:57:46 -0800 (AKDT) (envelope-from erbenson@alaska.net) Received: from plato.local.lan (plato.local.lan [192.168.0.4]) by erbenson.alaska.net (Postfix) with ESMTP id 120E8395A for ; Wed, 6 Jul 2005 18:56:07 -0800 (AKDT) Received: by plato.local.lan (Postfix, from userid 1000) id 6A2F040FF35; Wed, 6 Jul 2005 18:56:07 -0800 (AKDT) Date: Wed, 6 Jul 2005 18:56:07 -0800 From: Ethan Benson To: linux-xfs@oss.sgi.com Subject: Re: XFS corruption during power-blackout Message-ID: <20050707025607.GJ25980@plato.local.lan> Mail-Followup-To: linux-xfs@oss.sgi.com References: <20050705181057.GA16422@kevlar.burdell.org> <200507060424.HAA27591@raad.intranet> <20050706112719.GA18969@xiao.rsnet> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="oVY0WsfUWmN/0W08" Content-Disposition: inline In-Reply-To: <20050706112719.GA18969@xiao.rsnet> User-Agent: Mutt/1.3.28i X-OS: Debian GNU X-gpg-fingerprint: E3E4 D0BC 31BC F7BB C1DD C3D6 24AC 7B1A 2C44 7AFC X-gpg-key: http://www.alaska.net/~erbenson/gpg/key.asc Mail-Copies-To: nobody X-No-CC: I subscribe to this list; do not CC me on replies. X-ACS-Spam-Status: no X-ACS-Scanned-By: MD 2.51; SA 3.0.3; spamdefang 1.112 X-archive-position: 5574 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: erbenson@alaska.net Precedence: bulk X-list: linux-xfs Content-Length: 1566 Lines: 45 --oVY0WsfUWmN/0W08 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Wed, Jul 06, 2005 at 12:27:20PM +0100, Russell Howe wrote: >=20 > I think you can also mount with -o sync to make all writes synchronous > (although obviously, performance suffers), and you can also (thanks to > the hard work of a contributor whose name escapes me) use chattr to set that was me. > the 'sync' attribute on files and directories to specify that I/O to > those files is always synchronous (ignore the man page for chattr that > says it only works on ext[23]. XFS now supports the ioctls too). note that +S on directories does not make everything in that directory synchronous automatically, you need to apply it recursively. what +S on the directory will do is ensure any new files created under that directory inherit the +S flag, and thus get written synchronously. I believe this is the same behavior as ext2, newer versions of ext2 also had a different sync flag specifically for directories to ensure directory updates are synchronous, this one is not yet supported by XFS (at least that I am aware). I think this flag is 2.6 only as well. 
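In other words, to cover files that already exist you have to walk the tree, exactly as chattr -R +S would. A sketch of that recursive application using the same chattr-style ioctls as in the previous example; the starting path is made up:

#define _XOPEN_SOURCE 500
#include <fcntl.h>
#include <ftw.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <unistd.h>

/* nftw callback: set the sync flag on every regular file and directory */
static int set_sync_flag(const char *path, const struct stat *sb,
                         int type, struct FTW *ftwbuf)
{
    (void)sb; (void)ftwbuf;
    if (type != FTW_F && type != FTW_D)
        return 0;

    int fd = open(path, O_RDONLY | O_NONBLOCK);
    if (fd < 0)
        return 0;                      /* skip anything we cannot open */

    int flags = 0;
    if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0) {
        flags |= FS_SYNC_FL;
        if (ioctl(fd, FS_IOC_SETFLAGS, &flags) != 0)
            perror(path);
    }
    close(fd);
    return 0;
}

int main(void)
{
    /* equivalent to: chattr -R +S /data/important */
    return nftw("/data/important", set_sync_flag, 16, FTW_PHYS) ? 1 : 0;
}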
--=20 Ethan Benson http://www.alaska.net/~erbenson/ --oVY0WsfUWmN/0W08 Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iEYEARECAAYFAkLMmccACgkQJKx7GixEevy+nQCfRazaOQe9iOlbTzC11ar6pdXC V20An2j9EePnL+Fktyfi195q7ULqFBMg =9SkP -----END PGP SIGNATURE----- --oVY0WsfUWmN/0W08-- From owner-linux-xfs@oss.sgi.com Wed Jul 6 20:52:42 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 06 Jul 2005 20:52:47 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j673qeH9001985 for ; Wed, 6 Jul 2005 20:52:41 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA15625 for ; Thu, 7 Jul 2005 13:51:02 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j673p8kt2804722 for ; Thu, 7 Jul 2005 13:51:08 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id j673iIuA001576 for ; Thu, 7 Jul 2005 13:44:19 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id j673iIMX001574 for linux-xfs@oss.sgi.com; Thu, 7 Jul 2005 13:44:18 +1000 Date: Thu, 7 Jul 2005 13:44:18 +1000 From: Nathan Scott To: linux-xfs@oss.sgi.com Subject: Re: XFS corruption during power-blackout Message-ID: <20050707034417.GC1070@frodo> References: <20050705181057.GA16422@kevlar.burdell.org> <200507060424.HAA27591@raad.intranet> <20050706112719.GA18969@xiao.rsnet> <20050707025607.GJ25980@plato.local.lan> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="jI8keyz6grp/JLjh" Content-Disposition: inline In-Reply-To: <20050707025607.GJ25980@plato.local.lan> User-Agent: Mutt/1.5.3i X-archive-position: 5575 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 947 Lines: 35 --jI8keyz6grp/JLjh Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable Hi Ethan, On Wed, Jul 06, 2005 at 06:56:07PM -0800, Ethan Benson wrote: > I believe this is the same behavior as ext2, newer versions of ext2 > also had a different sync flag specifically for directories to ensure > directory updates are synchronous, this one is not yet supported by XFS > (at least that I am aware). I think this flag is 2.6 only as well. I added support for the mount option (dirsync), but I never got around to making it an inode flag too.. (got a patch for me? :) cheers. 
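From a program's point of view the mount-wide form looks like this; a sketch, assuming the libc headers expose MS_DIRSYNC (value 128 in the 2.6 kernel), with the device and mount point as placeholders:

#include <stdio.h>
#include <sys/mount.h>

#ifndef MS_DIRSYNC
#define MS_DIRSYNC 128   /* older libc headers may lack the definition */
#endif

int main(void)
{
    /* equivalent to: mount -t xfs -o dirsync /dev/sdb1 /export */
    if (mount("/dev/sdb1", "/export", "xfs", MS_DIRSYNC, NULL) != 0) {
        perror("mount");
        return 1;
    }
    return 0;
}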
--=20 Nathan --jI8keyz6grp/JLjh Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.5 (GNU/Linux) iD8DBQFCzKURm8fl3HSIa2MRAoudAJ9VEh67kRghH1HysJsMJR/u8A2ZtgCfeCUp fVg9WgUR5pGLFXvlglxQ11g= =Ama7 -----END PGP SIGNATURE----- --jI8keyz6grp/JLjh-- From owner-linux-xfs@oss.sgi.com Wed Jul 6 21:28:20 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 06 Jul 2005 21:28:25 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j674SJH9004923 for ; Wed, 6 Jul 2005 21:28:19 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA16252 for ; Thu, 7 Jul 2005 14:26:41 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j674Qe1883104079 for ; Thu, 7 Jul 2005 14:26:40 +1000 (EST) Received: (from tes@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id j674QdFS82835963 for linux-xfs@oss.sgi.com; Thu, 7 Jul 2005 14:26:39 +1000 (EST) Date: Thu, 7 Jul 2005 14:26:39 +1000 (EST) From: Timothy Shimmin Message-Id: <200507070426.j674QdFS82835963@snort.melbourne.sgi.com> To: linux-xfs@oss.sgi.com Subject: TAKE 931457 - xlog_ticket_get reservation amount recalculation X-archive-position: 5576 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: tes@snort.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 843 Lines: 22 Fix up the calculation of the reservation overhead to hopefully include all the components which make up the transaction in the ondisk log. Having this incomplete has shown up as problems on IRIX when some v2 log changes went in. The symptom was the msg of "xfs_log_write: reservation ran out. Need to up reservation" and was seen on synchronous writes on files with lots of holes (and therefore lots of extents). Put this into Linux too. 
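As a rough intuition for the failure mode (a toy model only; the constants and names below are hypothetical and not taken from xfs_log.c): every extent in a sparse file adds another region to the transaction, each region is framed by an op header in the on-disk log, so a reservation that counts only the payload falls further behind as the extent count grows.

#include <stdio.h>

#define OP_HEADER_BYTES      12   /* hypothetical per-region framing cost */
#define RECORD_HEADER_BYTES 512   /* hypothetical per-log-record header   */

static unsigned int toy_reservation(unsigned int nregions,
                                    unsigned int payload_bytes)
{
    unsigned int naive = payload_bytes;   /* the kind of estimate that "runs out" */
    unsigned int fixed = payload_bytes
                       + nregions * OP_HEADER_BYTES
                       + RECORD_HEADER_BYTES;

    printf("%4u regions: naive %u bytes, with overhead %u bytes\n",
           nregions, naive, fixed);
    return fixed;
}

int main(void)
{
    toy_reservation(4, 4096);      /* small transaction: overhead is noise */
    toy_reservation(4000, 4096);   /* many extents: overhead dominates     */
    return 0;
}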
Date: Thu Jul 7 14:23:52 AEST 2005 Workarea: snort.melbourne.sgi.com:/home/tes/isms/xfs-linux Inspected by: overby@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-kern/xfs-linux-melb Modid: xfs-linux-melb:xfs-kern:23095a xfs_log.c - 1.305 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_log.c.diff?r1=text&tr1=1.305&r2=text&tr2=1.304&f=h From owner-linux-xfs@oss.sgi.com Wed Jul 6 21:38:13 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 06 Jul 2005 21:38:18 -0700 (PDT) Received: from mercury.acsalaska.net (mercury.acsalaska.net [209.112.173.226]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j674cDH9005859 for ; Wed, 6 Jul 2005 21:38:13 -0700 Received: from erbenson.alaska.net (66-230-90-226-dial-as4.nwc.acsalaska.net [66.230.90.226]) by mercury.acsalaska.net (8.13.4/8.13.4) with ESMTP id j674aa37031403 for ; Wed, 6 Jul 2005 20:36:36 -0800 (AKDT) (envelope-from erbenson@alaska.net) Received: from plato.local.lan (plato.local.lan [192.168.0.4]) by erbenson.alaska.net (Postfix) with ESMTP id D8699395A for ; Wed, 6 Jul 2005 20:36:34 -0800 (AKDT) Received: by plato.local.lan (Postfix, from userid 1000) id 2778C40FF35; Wed, 6 Jul 2005 20:36:35 -0800 (AKDT) Date: Wed, 6 Jul 2005 20:36:35 -0800 From: Ethan Benson To: linux-xfs@oss.sgi.com Subject: Re: XFS corruption during power-blackout Message-ID: <20050707043634.GK25980@plato.local.lan> Mail-Followup-To: linux-xfs@oss.sgi.com References: <20050705181057.GA16422@kevlar.burdell.org> <200507060424.HAA27591@raad.intranet> <20050706112719.GA18969@xiao.rsnet> <20050707025607.GJ25980@plato.local.lan> <20050707034417.GC1070@frodo> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="cQYPpkLLLIn+TToK" Content-Disposition: inline In-Reply-To: <20050707034417.GC1070@frodo> User-Agent: Mutt/1.3.28i X-OS: Debian GNU X-gpg-fingerprint: E3E4 D0BC 31BC F7BB C1DD C3D6 24AC 7B1A 2C44 7AFC X-gpg-key: http://www.alaska.net/~erbenson/gpg/key.asc Mail-Copies-To: nobody X-No-CC: I subscribe to this list; do not CC me on replies. X-ACS-Spam-Status: no X-ACS-Scanned-By: MD 2.51; SA 3.0.3; spamdefang 1.112 X-archive-position: 5577 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: erbenson@alaska.net Precedence: bulk X-list: linux-xfs Content-Length: 1478 Lines: 45 --cQYPpkLLLIn+TToK Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Jul 07, 2005 at 01:44:18PM +1000, Nathan Scott wrote: > Hi Ethan, >=20 > On Wed, Jul 06, 2005 at 06:56:07PM -0800, Ethan Benson wrote: > > I believe this is the same behavior as ext2, newer versions of ext2 > > also had a different sync flag specifically for directories to ensure > > directory updates are synchronous, this one is not yet supported by XFS > > (at least that I am aware). I think this flag is 2.6 only as well. >=20 > I added support for the mount option (dirsync), but I never got > around to making it an inode flag too.. (got a patch for me? :) hmm, not at the moment. I didn't know we got the dirsync stuff, but I haven't been paying too close attention lately. the attr should't be that difficult to add, mainly depends how much granularity is currently allowed in your dirsync code. +S is easy since it basically just forces O_SYNC flags on all open() calls, can this work the same? also is dirsync 2.6 only? or am I thinking of something else. 
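For comparison, this is what that amounts to when the application asks for it explicitly rather than via the inode flag; a short sketch with a made-up path:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/data/journal.dat", O_WRONLY | O_APPEND | O_SYNC);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    const char rec[] = "record\n";
    /* with O_SYNC set, write() does not return until the data has been
       handed off to the device, much like an implicit flush per write */
    if (write(fd, rec, strlen(rec)) != (ssize_t)strlen(rec)) {
        perror("write");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}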
--=20 Ethan Benson http://www.alaska.net/~erbenson/ --cQYPpkLLLIn+TToK Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iEYEARECAAYFAkLMsVIACgkQJKx7GixEevxwtQCgiOFstUIR2a6gvOIuiQ2pWiQr Ht8An1YVamZp3Khfum+eHm8ESGfjPj0M =aPb2 -----END PGP SIGNATURE----- --cQYPpkLLLIn+TToK-- From owner-linux-xfs@oss.sgi.com Wed Jul 6 21:45:07 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 06 Jul 2005 21:45:12 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j674j5H9006605 for ; Wed, 6 Jul 2005 21:45:06 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA16615 for ; Thu, 7 Jul 2005 14:43:28 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j674hXkt2778525 for ; Thu, 7 Jul 2005 14:43:33 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id j674aiuA001713 for ; Thu, 7 Jul 2005 14:36:44 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id j674ah6J001711 for linux-xfs@oss.sgi.com; Thu, 7 Jul 2005 14:36:43 +1000 Date: Thu, 7 Jul 2005 14:36:43 +1000 From: Nathan Scott To: linux-xfs@oss.sgi.com Subject: Re: XFS corruption during power-blackout Message-ID: <20050707043643.GD1070@frodo> References: <20050705181057.GA16422@kevlar.burdell.org> <200507060424.HAA27591@raad.intranet> <20050706112719.GA18969@xiao.rsnet> <20050707025607.GJ25980@plato.local.lan> <20050707034417.GC1070@frodo> <20050707043634.GK25980@plato.local.lan> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="ZoaI/ZTpAVc4A5k6" Content-Disposition: inline In-Reply-To: <20050707043634.GK25980@plato.local.lan> User-Agent: Mutt/1.5.3i X-archive-position: 5578 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 963 Lines: 37 --ZoaI/ZTpAVc4A5k6 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Wed, Jul 06, 2005 at 08:36:35PM -0800, Ethan Benson wrote: > On Thu, Jul 07, 2005 at 01:44:18PM +1000, Nathan Scott wrote: > the attr should't be that difficult to add, mainly depends how much > granularity is currently allowed in your dirsync code. +S is easy > since it basically just forces O_SYNC flags on all open() calls, can > this work the same? I'd not expect these things to be difficult to add. > also is dirsync 2.6 only? or am I thinking of something else. Thats right, its 2.6 only. cheers. 
--=20 Nathan --ZoaI/ZTpAVc4A5k6 Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.5 (GNU/Linux) iD8DBQFCzLFam8fl3HSIa2MRArkSAKCp4ag3WN71kOJFaYRk6lSNeHjRhgCglb06 /jbnCOnbZZ3/Ibd4VjH7k7Y= =SHaT -----END PGP SIGNATURE----- --ZoaI/ZTpAVc4A5k6-- From owner-linux-xfs@oss.sgi.com Thu Jul 7 08:12:58 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 07 Jul 2005 08:13:01 -0700 (PDT) Received: from chaos.egr.duke.edu (chaos.egr.duke.edu [152.3.195.82]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j67FCvH9014563 for ; Thu, 7 Jul 2005 08:12:58 -0700 Received: from chaos.egr.duke.edu (localhost.localdomain [127.0.0.1]) by chaos.egr.duke.edu (8.12.11/8.12.11) with ESMTP id j67FBKL8030477 for ; Thu, 7 Jul 2005 11:11:20 -0400 Received: from localhost (jlb@localhost) by chaos.egr.duke.edu (8.12.11/8.12.11/Submit) with ESMTP id j67FBK8e030473 for ; Thu, 7 Jul 2005 11:11:20 -0400 X-Authentication-Warning: chaos.egr.duke.edu: jlb owned process doing -bs Date: Thu, 7 Jul 2005 11:11:20 -0400 (EDT) From: Joshua Baker-LePain X-X-Sender: jlb@chaos.egr.duke.edu To: Linux xfs mailing list Subject: XFS, 4K stacks, and Red Hat Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 5580 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: jlb17@duke.edu Precedence: bulk X-list: linux-xfs Content-Length: 669 Lines: 19 Can anyone summarize the current status of XFS and 4K stacks? There was recently a thread[1] on the nahant (RHEL4) mailing list where it was stated[2] that one reason for the exclusion of XFS in RHEL4 is the stack size issue. I'd love to see XFS in Red Hat, although of course I have no idea if they'd turn it on even if the stack size issues went away tomorrow. I'm just wondering what the view of this is from the SGI side. Thanks. [1] https://www.redhat.com/archives/nahant-list/2005-June/msg00280.html [2] https://www.redhat.com/archives/nahant-list/2005-June/msg00304.html -- Joshua Baker-LePain Department of Biomedical Engineering Duke University From owner-linux-xfs@oss.sgi.com Thu Jul 7 08:43:23 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 07 Jul 2005 08:43:29 -0700 (PDT) Received: from mail00hq.adic.com ([63.81.117.10]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j67FhNH9021488 for ; Thu, 7 Jul 2005 08:43:23 -0700 Received: from [172.16.82.67] ([172.16.82.67]) by mail00hq.adic.com with Microsoft SMTPSVC(5.0.2195.6713); Thu, 7 Jul 2005 08:41:46 -0700 Message-ID: <42CD4D38.1090703@xfs.org> Date: Thu, 07 Jul 2005 10:41:44 -0500 From: Steve Lord User-Agent: Mozilla Thunderbird 1.0.2-1.3.3 (X11/20050513) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Joshua Baker-LePain CC: Linux xfs mailing list Subject: Re: XFS, 4K stacks, and Red Hat References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 07 Jul 2005 15:41:46.0749 (UTC) FILETIME=[620762D0:01C5830A] X-archive-position: 5581 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: lord@xfs.org Precedence: bulk X-list: linux-xfs Content-Length: 1615 Lines: 36 Joshua Baker-LePain wrote: > Can anyone summarize the current status of XFS and 4K stacks? 
There was > recently a thread[1] on the nahant (RHEL4) mailing list where it was > stated[2] that one reason for the exclusion of XFS in RHEL4 is the stack > size issue. I'd love to see XFS in Red Hat, although of course I have no > idea if they'd turn it on even if the stack size issues went away > tomorrow. I'm just wondering what the view of this is from the SGI side. > > Thanks. > > > [1] https://www.redhat.com/archives/nahant-list/2005-June/msg00280.html > > [2] https://www.redhat.com/archives/nahant-list/2005-June/msg00304.html > I have my suspicions that they could find another reason if this one was not present - code is too complex, they have no expertise for support.... Ask them if they support NFS V4 on top of ext3 on top of multipathing on top of network block device with a 4K stack under low memory conditions..... they have all the component parts in their kernel. Sorry about the attitude, but the whole fixed size stack and the continual addition of layers of code is a little silly. Wait until iscsi initiators make it into the picture, and throw in a crypto layer for good measure. As for XFS and a 4K stack, I think it still boils down to a few edge cases, I have not seen one in years, I am doing all my builds via nfs v3 with tcp/ip to an XFS filesystem. The only stack overflow I have seen recently has been attempting to get device mapper multipath to work, I can make that overflow the stack just trying to configure it. Steve (who has an attitude problem this morning) From owner-linux-xfs@oss.sgi.com Thu Jul 7 09:45:04 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 07 Jul 2005 09:45:10 -0700 (PDT) Received: from chaos.egr.duke.edu (chaos.egr.duke.edu [152.3.195.82]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j67GitH9027195 for ; Thu, 7 Jul 2005 09:44:55 -0700 Received: from chaos.egr.duke.edu (localhost.localdomain [127.0.0.1]) by chaos.egr.duke.edu (8.12.11/8.12.11) with ESMTP id j67GhFWP030652; Thu, 7 Jul 2005 12:43:15 -0400 Received: from localhost (jlb@localhost) by chaos.egr.duke.edu (8.12.11/8.12.11/Submit) with ESMTP id j67GhFsB030648; Thu, 7 Jul 2005 12:43:15 -0400 X-Authentication-Warning: chaos.egr.duke.edu: jlb owned process doing -bs Date: Thu, 7 Jul 2005 12:43:15 -0400 (EDT) From: Joshua Baker-LePain X-X-Sender: jlb@chaos.egr.duke.edu To: Steve Lord cc: Linux xfs mailing list Subject: Re: XFS, 4K stacks, and Red Hat In-Reply-To: <42CD4D38.1090703@xfs.org> Message-ID: References: <42CD4D38.1090703@xfs.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 5582 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: jlb17@duke.edu Precedence: bulk X-list: linux-xfs Content-Length: 1955 Lines: 48 On Thu, 7 Jul 2005 at 10:41am, Steve Lord wrote > Joshua Baker-LePain wrote: > > Can anyone summarize the current status of XFS and 4K stacks? There was > > recently a thread[1] on the nahant (RHEL4) mailing list where it was > > stated[2] that one reason for the exclusion of XFS in RHEL4 is the stack > > size issue. I'd love to see XFS in Red Hat, although of course I have no > > idea if they'd turn it on even if the stack size issues went away > > tomorrow. I'm just wondering what the view of this is from the SGI side. > > > > Thanks. 
> > > > > > [1] https://www.redhat.com/archives/nahant-list/2005-June/msg00280.html > > > > [2] https://www.redhat.com/archives/nahant-list/2005-June/msg00304.html > > > I have my suspicions that they could find another reason if this one was > not present - code is too complex, they have no expertise for support.... Oh, I share those same suspicions. The only reason I pointed at that post is that it's the first time I've heard anything from them other than "XFS doesn't provide anything not provided by ext3". Note that I was ignored in the same thread after proving that RHEL4's dump for ext3 ignores EAs/ACLs. > As for XFS and a 4K stack, I think it still boils down to a few edge cases, > I have not seen one in years, I am doing all my builds via nfs v3 with > tcp/ip to an XFS filesystem. > > The only stack overflow I have seen recently has been attempting to get > device mapper multipath to work, I can make that overflow the stack just > trying to configure it. Hrm. I was easily able to trigger stack overflows on a pretty simple (albeit old) setup -- RHEL4 kernel with XFS turned on, dual PIII 450, 384MB RAM, XFS on a single SCSI disk on aic7xxx. > Steve (who has an attitude problem this morning) Rather understandable given the subject -- sorry to poke you with this particular stick. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University From owner-linux-xfs@oss.sgi.com Thu Jul 7 10:06:15 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 07 Jul 2005 10:06:18 -0700 (PDT) Received: from mail.linux-sxs.org (mail.linux-sxs.org [64.116.183.6]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j67H6EH9029334 for ; Thu, 7 Jul 2005 10:06:14 -0700 Received: from mail.linux-sxs.org (localhost [127.0.0.1]) by mail.linux-sxs.org (8.13.4/8.13.4/Debian-3) with ESMTP id j67G0CmO000709 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT); Thu, 7 Jul 2005 11:00:13 -0500 Received: from localhost (netllama@localhost) by mail.linux-sxs.org (8.13.4/8.13.4/Submit) with ESMTP id j67G0B0B000706; Thu, 7 Jul 2005 11:00:12 -0500 X-Authentication-Warning: mail.linux-sxs.org: netllama owned process doing -bs Date: Thu, 7 Jul 2005 11:00:11 -0500 (EST) From: Net Llama! To: Joshua Baker-LePain cc: Steve Lord , Linux xfs mailing list Subject: Re: XFS, 4K stacks, and Red Hat In-Reply-To: Message-ID: References: <42CD4D38.1090703@xfs.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Scanned-By: milter-sender/0.62.837 (localhost [127.0.0.1]); Thu, 07 Jul 2005 11:00:13 -0500 Received-SPF: pass (mail.linux-sxs.org: domain of netllama@linux-sxs.org designates 127.0.0.1 as permitted sender) receiver=mail.linux-sxs.org; client-ip=127.0.0.1; helo=mail.linux-sxs.org; envelope-from=netllama@linux-sxs.org; x-software=spfmilter 0.95 http://www.acme.com/software/spfmilter/ with libspf2; X-archive-position: 5583 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: netllama@linux-sxs.org Precedence: bulk X-list: linux-xfs Content-Length: 2568 Lines: 56 On Thu, 7 Jul 2005, Joshua Baker-LePain wrote: > On Thu, 7 Jul 2005 at 10:41am, Steve Lord wrote > > > Joshua Baker-LePain wrote: > > > Can anyone summarize the current status of XFS and 4K stacks? There was > > > recently a thread[1] on the nahant (RHEL4) mailing list where it was > > > stated[2] that one reason for the exclusion of XFS in RHEL4 is the stack > > > size issue. 
I'd love to see XFS in Red Hat, although of course I have no > > > idea if they'd turn it on even if the stack size issues went away > > > tomorrow. I'm just wondering what the view of this is from the SGI side. > > > > > > Thanks. > > > > > > > > > [1] https://www.redhat.com/archives/nahant-list/2005-June/msg00280.html > > > > > > [2] https://www.redhat.com/archives/nahant-list/2005-June/msg00304.html > > > > > I have my suspicions that they could find another reason if this one was > > not present - code is too complex, they have no expertise for support.... > > Oh, I share those same suspicions. The only reason I pointed at that post > is that it's the first time I've heard anything from them other than "XFS > doesn't provide anything not provided by ext3". Note that I was ignored > in the same thread after proving that RHEL4's dump for ext3 ignores > EAs/ACLs. > > > As for XFS and a 4K stack, I think it still boils down to a few edge cases, > > I have not seen one in years, I am doing all my builds via nfs v3 with > > tcp/ip to an XFS filesystem. > > > > The only stack overflow I have seen recently has been attempting to get > > device mapper multipath to work, I can make that overflow the stack just > > trying to configure it. > > Hrm. I was easily able to trigger stack overflows on a pretty simple > (albeit old) setup -- RHEL4 kernel with XFS turned on, dual PIII 450, > 384MB RAM, XFS on a single SCSI disk on aic7xxx. > > > Steve (who has an attitude problem this morning) > > Rather understandable given the subject -- sorry to poke you with this > particular stick. Its worse than this too. Fedora Core ships with native XFS support. Unfortunately, Redhat pretends like its not there. When I submitted a bug a few days ago against grub failing to work during OS installation it was closed as WNF because 'XFS is unsupported'. I questioned why they were shipping a kernel with XFS and even the xfs-progs RPM if XFS is unsupported, but I doubt I'll get a response. 
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lonni J Friedman netllama@linux-sxs.org LlamaLand http://netllama.linux-sxs.org From owner-linux-xfs@oss.sgi.com Thu Jul 7 21:30:26 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 07 Jul 2005 21:30:32 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j684UOH9024808 for ; Thu, 7 Jul 2005 21:30:25 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA16538; Fri, 8 Jul 2005 14:28:38 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j684Sgkt2807544; Fri, 8 Jul 2005 14:28:42 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id j684LnMV001789; Fri, 8 Jul 2005 14:21:50 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id j684LkD0001787; Fri, 8 Jul 2005 14:21:46 +1000 Date: Fri, 8 Jul 2005 14:21:46 +1000 From: Nathan Scott To: Yura Pakhuchiy Cc: linux-xfs@oss.sgi.com, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tibor@altlinux.ru Subject: Re: XFS corruption on move from xscale to i686 Message-ID: <20050708042146.GA1679@frodo> References: <1120756552.5298.10.camel@pc299.sam-solutions.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1120756552.5298.10.camel@pc299.sam-solutions.net> User-Agent: Mutt/1.5.3i X-archive-position: 5584 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 503 Lines: 17 On Thu, Jul 07, 2005 at 08:15:52PM +0300, Yura Pakhuchiy wrote: > Hi, > > I'm creadted XFS volume on 2.6.10 linux xscale/iq31244 box, then I > copyied files on it and moved this hard drive to i686 machine. When I > mounted it on i686, I found no files on it. I runned xfs_check, here is > output: Someone else was doing this awhile back, and also had issues. Their trouble seemed to be related to xscale gcc miscompiling parts of XFS - search the linux-xfs archives for details. cheers. 
-- Nathan From owner-linux-xfs@oss.sgi.com Thu Jul 7 21:46:12 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 07 Jul 2005 21:46:16 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j684kAH9025797 for ; Thu, 7 Jul 2005 21:46:11 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA16877; Fri, 8 Jul 2005 14:44:30 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j684iZkt2834549; Fri, 8 Jul 2005 14:44:35 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id j684bhMV001828; Fri, 8 Jul 2005 14:37:43 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id j684beB8001826; Fri, 8 Jul 2005 14:37:40 +1000 Date: Fri, 8 Jul 2005 14:37:40 +1000 From: Nathan Scott To: Joshua Baker-LePain Cc: Steve Lord , Linux xfs mailing list Subject: Re: XFS, 4K stacks, and Red Hat Message-ID: <20050708043740.GB1679@frodo> References: <42CD4D38.1090703@xfs.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.3i X-archive-position: 5585 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 926 Lines: 25 On Thu, Jul 07, 2005 at 12:43:15PM -0400, Joshua Baker-LePain wrote: > > As for XFS and a 4K stack, I think it still boils down to a few edge cases, > > I have not seen one in years, I am doing all my builds via nfs v3 with > > tcp/ip to an XFS filesystem. > ... > Hrm. I was easily able to trigger stack overflows on a pretty simple > (albeit old) setup -- RHEL4 kernel with XFS turned on, dual PIII 450, > 384MB RAM, XFS on a single SCSI disk on aic7xxx. I put in a bit of time awhile back to get the largest of these issues sorted out - perhaps (almost certainly) RHEL4 is an older 2.6 kernel than the one containing those changes. As other cases pop up (with a reproducible test case please, and no stacking drivers in the way too :), we slowly iron them out.. its not exactly top of the priority list though. Choose SLES9 over RHEL4 if you want an "Enterprise" kernel with decent XFS support. cheers. 
-- Nathan From owner-linux-xfs@oss.sgi.com Thu Jul 7 23:14:57 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 07 Jul 2005 23:15:04 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j686EtH9030604 for ; Thu, 7 Jul 2005 23:14:56 -0700 Received: from kao2.melbourne.sgi.com (kao2.melbourne.sgi.com [134.14.55.180]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA18578; Fri, 8 Jul 2005 16:13:17 +1000 Received: by kao2.melbourne.sgi.com (Postfix, from userid 16331) id 53FA1104; Fri, 8 Jul 2005 16:13:17 +1000 (EST) Received: from kao2.melbourne.sgi.com (localhost [127.0.0.1]) by kao2.melbourne.sgi.com (Postfix) with ESMTP id 505741000F8; Fri, 8 Jul 2005 16:13:17 +1000 (EST) X-Mailer: exmh version 2.6.3_20040314 03/14/2004 with nmh-1.0.4 From: Keith Owens To: Nathan Scott cc: Joshua Baker-LePain , Steve Lord , Linux xfs mailing list Subject: Re: XFS, 4K stacks, and Red Hat In-reply-to: Your message of "Fri, 08 Jul 2005 14:37:40 +1000." <20050708043740.GB1679@frodo> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Date: Fri, 08 Jul 2005 16:13:17 +1000 Message-ID: <9807.1120803197@kao2.melbourne.sgi.com> X-archive-position: 5586 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: kaos@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 230 Lines: 7 On Fri, 8 Jul 2005 14:37:40 +1000, Nathan Scott wrote: >Choose SLES9 over RHEL4 if you want an "Enterprise" kernel with >decent XFS support. Plus the fact that SLES9 comes with a kernel debugger, unlike RHEL. From owner-linux-xfs@oss.sgi.com Fri Jul 8 09:04:43 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 08 Jul 2005 09:04:47 -0700 (PDT) Received: from mail.linux-sxs.org (mail.linux-sxs.org [64.116.183.6]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j68G4dH9026889 for ; Fri, 8 Jul 2005 09:04:42 -0700 Received: from mail.linux-sxs.org (localhost [127.0.0.1]) by mail.linux-sxs.org (8.13.4/8.13.4/Debian-3) with ESMTP id j68Ewpnw021274 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT); Fri, 8 Jul 2005 09:58:53 -0500 Received: from localhost (netllama@localhost) by mail.linux-sxs.org (8.13.4/8.13.4/Submit) with ESMTP id j68EwmPK021271; Fri, 8 Jul 2005 09:58:48 -0500 X-Authentication-Warning: mail.linux-sxs.org: netllama owned process doing -bs Date: Fri, 8 Jul 2005 09:58:48 -0500 (EST) From: Net Llama! 
To: Keith Owens cc: Nathan Scott , Joshua Baker-LePain , Steve Lord , Linux xfs mailing list Subject: Re: XFS, 4K stacks, and Red Hat In-Reply-To: <9807.1120803197@kao2.melbourne.sgi.com> Message-ID: References: <9807.1120803197@kao2.melbourne.sgi.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Scanned-By: milter-sender/0.62.837 (localhost [127.0.0.1]); Fri, 08 Jul 2005 09:58:53 -0500 Received-SPF: pass (mail.linux-sxs.org: domain of netllama@linux-sxs.org designates 127.0.0.1 as permitted sender) receiver=mail.linux-sxs.org; client-ip=127.0.0.1; helo=mail.linux-sxs.org; envelope-from=netllama@linux-sxs.org; x-software=spfmilter 0.95 http://www.acme.com/software/spfmilter/ with libspf2; X-archive-position: 5587 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: netllama@linux-sxs.org Precedence: bulk X-list: linux-xfs Content-Length: 676 Lines: 20 On Fri, 8 Jul 2005, Keith Owens wrote: > On Fri, 8 Jul 2005 14:37:40 +1000, > Nathan Scott wrote: > >Choose SLES9 over RHEL4 if you want an "Enterprise" kernel with > >decent XFS support. > > Plus the fact that SLES9 comes with a kernel debugger, unlike RHEL. And a wonderful collection of bugs that need the debugger to be debugged ;) No offense to any SuSE fans here, but SLES9 is one of the most unstable, buggy distributions I've ever used, at least with x86_64 CPUs. -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lonni J Friedman netllama@linux-sxs.org LlamaLand http://netllama.linux-sxs.org From owner-linux-xfs@oss.sgi.com Sat Jul 9 02:13:38 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 09 Jul 2005 02:13:42 -0700 (PDT) Received: from albatross.madduck.net (armagnac.ifi.unizh.ch [130.60.75.72]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j699DYH9012157 for ; Sat, 9 Jul 2005 02:13:37 -0700 Received: from localhost (albatross.madduck.net [127.0.0.1]) by albatross.madduck.net (postfix) with ESMTP id C63688DCCAC for ; Sat, 9 Jul 2005 11:11:50 +0200 (CEST) Received: from albatross.madduck.net ([127.0.0.1]) by localhost (albatross.madduck.net [127.0.0.1]) (amavisd-new, port 10024) with LMTP id 09761-01 for ; Sat, 9 Jul 2005 11:11:50 +0200 (CEST) Received: from cirrus.madduck.net (cirrus.madduck.net [192.168.14.1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client CN "cirrus.madduck.net", Issuer "madduck.net CA" (verified OK)) by albatross.madduck.net (postfix) with ESMTP id 826FF8DC978 for ; Sat, 9 Jul 2005 11:11:46 +0200 (CEST) Received: by cirrus.madduck.net (Postfix, from userid 1000) id 25E8020041D; Sat, 9 Jul 2005 11:11:45 +0200 (CEST) Date: Sat, 9 Jul 2005 11:11:45 +0200 From: martin f krafft To: linux xfs mailing list Subject: how to flush an XFS filesystem Message-ID: <20050709091145.GA13108@cirrus.madduck.net> Mail-Followup-To: linux xfs mailing list Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="IS0zKkzwUGydFO0o" Content-Disposition: inline X-OS: Debian GNU/Linux 3.1 kernel 2.6.11-cirrus i686 X-Motto: Keep the good times rollin' X-Subliminal-Message: debian/rules! 
X-Spamtrap: madduck.bogus@madduck.net User-Agent: Mutt/1.5.9i X-archive-position: 5590 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: madduck@madduck.net Precedence: bulk X-list: linux-xfs Content-Length: 1474 Lines: 44 --IS0zKkzwUGydFO0o Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable This issues has been discussed before and I cannot find a solution. We have the challenge to create the Grub menu.lst file and immediately reboot afterwards. XFS will probably not flush the file in time, so when Grub accesses the device directly, the file does not yet exist. Only mounting the device causes the log to be played, but that's not possible before Grub. The problem is that xfs_freeze -f says it would flush everything to the disk, but it does not. Not even waiting for 20 seconds after calling xfs_freeze works. If xfs_freeze does not do the trick and sync does not work for XFS, how can I actually flush all buffers to the disk and commit all open transactions from the log? Thanks, --=20 martin; (greetings from the heart of the sun.) \____ echo mailto: !#^."<*>"|tr "<*> mailto:" net@madduck =20 invalid/expired pgp subkeys? use subkeys.pgp.net as keyserver! spamtraps: madduck.bogus@madduck.net =20 i'd give my right arm to be ambidextrous. --IS0zKkzwUGydFO0o Content-Type: application/pgp-signature; name="signature.asc" Content-Description: Digital signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.1 (GNU/Linux) iD8DBQFCz5TRIgvIgzMMSnURApFTAKCBMW47FRiiRIplmCFw2neCz/kqvwCguTEt q8AHqoMYUHT/uYfy3Izgd6c= =MfzF -----END PGP SIGNATURE----- --IS0zKkzwUGydFO0o-- From owner-linux-xfs@oss.sgi.com Sat Jul 9 03:22:37 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 09 Jul 2005 03:22:41 -0700 (PDT) Received: from lucidpixels.com (lucidpixels.com [66.45.37.187]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j69AMZH9017082 for ; Sat, 9 Jul 2005 03:22:36 -0700 Received: by lucidpixels.com (Postfix, from userid 1001) id 44A1C200015F; Sat, 9 Jul 2005 06:20:54 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by lucidpixels.com (Postfix) with ESMTP id 00788A00319E; Sat, 9 Jul 2005 06:20:53 -0400 (EDT) Date: Sat, 9 Jul 2005 06:20:53 -0400 (EDT) From: Justin Piszcz X-X-Sender: jpiszcz@p34 To: linux-kernel@vger.kernel.org cc: linux-xfs@oss.sgi.com Subject: XFS Oops Under 2.6.12.2 Message-ID: MIME-Version: 1.0 Content-Type: MULTIPART/MIXED; BOUNDARY="-1463747160-1976317989-1120904453=:21767" X-archive-position: 5591 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: jpiszcz@lucidpixels.com Precedence: bulk X-list: linux-xfs Content-Length: 9344 Lines: 177 This message is in MIME format. The first part should be readable text, while the remaining parts are likely unreadable without MIME-aware tools. ---1463747160-1976317989-1120904453=:21767 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed After a couple hours of use, I get this error on a linear RAID under 2.6.12.2 using loop-AES w/AES-256 encrypted filesystem. Anyone know what is wrong? Filesystem "loop1": XFS internal error xfs_da_do_buf(2) at line 2271 of file fs/xfs/xfs_da_btree.c. 
Caller 0xc025e807
 [] xfs_da_do_buf+0x500/0x860
 [] xfs_da_read_buf+0x57/0x60
 [] xfs_da_read_buf+0x57/0x60
 [] __tcp_data_snd_check+0xcb/0xe0
 [] tcp_new_space+0x8d/0xa0
 [] tcp_v4_rcv+0x585/0x810
 [] xfs_da_read_buf+0x57/0x60
 [] xfs_dir2_block_getdents+0xa4/0x330
 [] xfs_dir2_block_getdents+0xa4/0x330
 [] ip_local_deliver_finish+0x0/0x150
 [] ip_rcv+0x391/0x510
 [] xfs_bmap_last_offset+0xc2/0x120
 [] ip_rcv_finish+0x0/0x290
 [] xfs_dir2_put_dirent64_direct+0x0/0xc0
 [] xfs_dir2_isblock+0x32/0x90
 [] xfs_dir2_put_dirent64_direct+0x0/0xc0
 [] xfs_dir2_getdents+0xa1/0x150
 [] xfs_dir2_put_dirent64_direct+0x0/0xc0
 [] xfs_readdir+0x75/0xc0
 [] linvfs_readdir+0x10e/0x270
 [] net_rx_action+0x6a/0xf0
 [] vfs_readdir+0x77/0x90
 [] filldir64+0x0/0x110
 [] sys_getdents64+0x6f/0xb2
 [] filldir64+0x0/0x110
 [] syscall_call+0x7/0xb

# lsmod
Module Size Used by

I am not using any special modules. Configuration file attached.

---1463747160-1976317989-1120904453=:21767
Content-Type: APPLICATION/octet-stream; name=config-2.6.12.2.txt.bz2
Content-Transfer-Encoding: BASE64
Content-ID:
Content-Description:
Content-Disposition: attachment; filename=config-2.6.12.2.txt.bz2
R2hVT36BwY5dYEcMjIhBnLXVD2qBaj2IGghgXhqraWQSSQTY2xsJl4FFoEHM sruWmmlgpNReNpIBERGtYbMgT6sMHY9uJFnShAuGXW2l5ldoNXdxwR5d84V2 dMYFfLp02nfq5gK1nkzUeTEcShWTTe7duZImhpX7ayLTgl0sRrtRFvfaTFtB npCuyTAFpnAGSSSvCBWYk2bUtLB7bYTgH6TPF89WiuphqLMubEbsud11BUQQ IoF5ZthH2jjAkLmThsCFz0IHwNJfBoR6BkeWfrkHfuti5TIeiCcD53pDNdKK QBh36XeA7Y5AgIJBRDMWpu0GAAIhPN2SrTYIFECi8afiFK1s0AZrIgYxNpEY EV853YTIL2a6qlyIZAAM2rOYhfSmbLLHcdngBJbspEcDMIl6LleukcOQRNRE TCgeVEbiW8hnWV9MG8eINmTyiInWaUazkCDmMw0mLSmaVDw8/H31fbn4Bp/5 8Az+kfAM/iwzbHk+f+jxA9dTPhYJCGvUe8fAvSv2aZxmiL+x5Vg0PkNdvX1P +yS5ogT2CHJdJDnj0rjEuP3VjxQWhIpBLRKBFBIPtQAixM2PJyK1p8YcNsyX d+ev6x/KZ5V61WsEMkpmjR5/Hb+Kjx9HOMdPHGFvm07cD6FCc2+450nN8XyK /IKqq4ZOd2P2a7qwYu3ENde0IxZo7POAzABCouHH0Nw1HNPvHRA2J3xXdYCa 8zllOGMJvAYe/sqY0nUSxDU6eOd6hjoHCat24ehbU1JAbw+bv60R9CSQR6HB mZ7+o/kiibgLZtA3FURVVNt26e6ZFfEAAVCrrHdIf8e/dxCQA0Q7Vj0viB30 PAoJ+od3q892LK1omsKALVWEGT79MQjEWrWD/2AsKYV3FLwZSEeKByxEChgg dAlCM5PcsMcYs8QiNqfR4AIUQ8DYUDKkTHHK9m/4dExh139aoMi3wh3KkBeV TvYtBwrIqjqqKkXJY6JuNEG+AFEwAIVea/dqWC1PRrqGYk2r04JWxvFCXtk/ zNGJwWxu6MTHZxl0lHv8vqnoiiV2z6ngFDu70gkryIUsR6Nhi4NCSblLs+vT jpIEM5cRC1enaCZUrJwSc5km/++L6AAhPLOJauUH8WACF3fFdbZYxmeEnlgA u0oYIC6QmkAgq0klGEN6/ly6gAhVKWU5WlfvTI0S7d8Ga3N6zNVSocC5DeTg zCm54FzBEf1YL+e18j4Fgue9JkrLuGiaZwwDMDDIZkWEOIpX1F/bIq4vv7Mh r1rigdk5IHfQVPszI7fDutj6wSSQNccenOUnwVircsa1DYv/F3JFOFCQYp/n eA== ---1463747160-1976317989-1120904453=:21767-- From owner-linux-xfs@oss.sgi.com Sat Jul 9 16:21:36 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 09 Jul 2005 16:21:41 -0700 (PDT) Received: from malik.acsalaska.net (malik.acsalaska.net [209.112.173.227]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j69NLXH9012637 for ; Sat, 9 Jul 2005 16:21:36 -0700 Received: from erbenson.alaska.net (66-230-89-254-dial-as3.nwc.acsalaska.net [66.230.89.254]) by malik.acsalaska.net (8.13.4/8.13.4) with ESMTP id j69NJr29073050 for ; Sat, 9 Jul 2005 15:19:53 -0800 (AKDT) (envelope-from erbenson@alaska.net) Received: from plato.local.lan (plato.local.lan [192.168.0.4]) by erbenson.alaska.net (Postfix) with ESMTP id 2378C3933 for ; Sat, 9 Jul 2005 15:19:52 -0800 (AKDT) Received: by plato.local.lan (Postfix, from userid 1000) id B1BA440FF35; Sat, 9 Jul 2005 15:19:51 -0800 (AKDT) Date: Sat, 9 Jul 2005 15:19:51 -0800 From: Ethan Benson To: linux xfs mailing list Subject: Re: how to flush an XFS filesystem Message-ID: <20050709231951.GN25980@plato.local.lan> Mail-Followup-To: linux xfs mailing list References: <20050709091145.GA13108@cirrus.madduck.net> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="FVoU9VXBP3PcghKQ" Content-Disposition: inline In-Reply-To: <20050709091145.GA13108@cirrus.madduck.net> User-Agent: Mutt/1.3.28i X-OS: Debian GNU X-gpg-fingerprint: E3E4 D0BC 31BC F7BB C1DD C3D6 24AC 7B1A 2C44 7AFC X-gpg-key: http://www.alaska.net/~erbenson/gpg/key.asc Mail-Copies-To: nobody X-No-CC: I subscribe to this list; do not CC me on replies. 
X-ACS-Spam-Status: no X-ACS-Scanned-By: MD 2.51; SA 3.0.3; spamdefang 1.112 X-archive-position: 5592 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: erbenson@alaska.net Precedence: bulk X-list: linux-xfs Content-Length: 1099 Lines: 35 --FVoU9VXBP3PcghKQ Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Sat, Jul 09, 2005 at 11:11:45AM +0200, martin f krafft wrote: > This issues has been discussed before and I cannot find a solution. > We have the challenge to create the Grub menu.lst file and > immediately reboot afterwards. XFS will probably not flush the file > in time, so when Grub accesses the device directly, the file does > not yet exist. Only mounting the device causes the log to be played, > but that's not possible before Grub. the solution is to fix grub not to access the device directly during installation. it already has code to do its installation without direct device access. --=20 Ethan Benson http://www.alaska.net/~erbenson/ --FVoU9VXBP3PcghKQ Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iEYEARECAAYFAkLQW5cACgkQJKx7GixEevwykwCfd50rSM9FLs6DjR72pWABL1o0 CmEAoJbK+ZY6lf6YlsSLwExwwTFManJr =XGdk -----END PGP SIGNATURE----- --FVoU9VXBP3PcghKQ-- From owner-linux-xfs@oss.sgi.com Sat Jul 9 21:14:35 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 09 Jul 2005 21:14:37 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6A4EXH9031238 for ; Sat, 9 Jul 2005 21:14:34 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA26265 for ; Sun, 10 Jul 2005 14:12:50 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6A4Cukt2903329 for ; Sun, 10 Jul 2005 14:12:56 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id j6A4CspU2903815 for linux-xfs@oss.sgi.com; Sun, 10 Jul 2005 14:12:54 +1000 (EST) Date: Sun, 10 Jul 2005 14:12:54 +1000 From: Nathan Scott To: linux xfs mailing list Subject: Re: how to flush an XFS filesystem Message-ID: <20050710141254.A2904172@wobbly.melbourne.sgi.com> References: <20050709091145.GA13108@cirrus.madduck.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: <20050709091145.GA13108@cirrus.madduck.net>; from madduck@madduck.net on Sat, Jul 09, 2005 at 11:11:45AM +0200 X-archive-position: 5593 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 1553 Lines: 44 Hi there, On Sat, Jul 09, 2005 at 11:11:45AM +0200, martin f krafft wrote: > This issues has been discussed before and I cannot find a solution. > We have the challenge to create the Grub menu.lst file and > immediately reboot afterwards. OK. Can you remount-readonly before you reboot? That is what is done for the root filesystem before a clean shutdown... and that flushes everything with no log recovery being required at startup. 
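(As an illustrative aside on the remount-read-only approach described above: a minimal sketch in C using the mount(2) flags MS_REMOUNT | MS_RDONLY, which is the same operation as the "mount -oremount,ro ..." command mentioned later in this thread. The device name /dev/hda1 and the /boot mount point are assumptions for the example only.)

/* Sketch only: remount /boot read-only so all dirty data and metadata are
 * written back before the machine reboots and Grub reads the device
 * directly.  Names are illustrative; error handling is minimal. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    if (mount("/dev/hda1", "/boot", NULL, MS_REMOUNT | MS_RDONLY, NULL) != 0) {
        perror("remount,ro /boot");
        return 1;
    }
    return 0;
}

(After a successful remount-read-only, no log recovery is needed at the next mount, which is the property Grub implicitly relies on.)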
(I assume by "immediately reboot" above, you are not doing a clean system shutdown for some reason? why not, out of curiousity?) > so when Grub accesses the device directly, the file does > not yet exist. Right (invalid assumption on the part of Grub there). > The problem is that xfs_freeze -f says it would flush everything to > the disk, but it does not. Not even waiting for 20 seconds after > calling xfs_freeze works. If xfs_freeze does not do the trick and > sync does not work for XFS, I'm curious in what way xfs_freeze did not work here? And to clarify your statement above ("sync does not work for XFS"), sync works just fine on XFS, it just doesn't do what Grub incorrectly assumes it will do. > how can I actually flush all buffers to > the disk and commit all open transactions from the log? "mount -oremount,ro ..." is guaranteed to do that, and is a filesystem independent way of doing things, so seems like a better solution. xfs_freeze should also do so, so I'm a bit surprised by your assertion there ... what was your test case where something was not flushed? cheers. -- Nathan From owner-linux-xfs@oss.sgi.com Sat Jul 9 21:22:07 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 09 Jul 2005 21:22:10 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6A4M5H9032077 for ; Sat, 9 Jul 2005 21:22:06 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA26450; Sun, 10 Jul 2005 14:20:20 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6A4KPkt2874285; Sun, 10 Jul 2005 14:20:25 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id j6A4KMu32904194; Sun, 10 Jul 2005 14:20:22 +1000 (EST) Date: Sun, 10 Jul 2005 14:20:22 +1000 From: Nathan Scott To: Justin Piszcz Cc: linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: XFS Oops Under 2.6.12.2 Message-ID: <20050710142021.B2904172@wobbly.melbourne.sgi.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: ; from jpiszcz@lucidpixels.com on Sat, Jul 09, 2005 at 06:20:53AM -0400 X-archive-position: 5594 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 814 Lines: 25 Hi there, On Sat, Jul 09, 2005 at 06:20:53AM -0400, Justin Piszcz wrote: > After a couple hours of use, I get this error on a linear RAID under > 2.6.12.2 using loop-AES w/AES-256 encrypted filesystem. > > Anyone know what is wrong? This is not an Oops as your subject line states ... its a forced filesystem shutdown due to (what looks like) corruption in a btree block in a directory inode. > Filesystem "loop1": XFS internal error xfs_da_do_buf(2) at line 2271 of > file fs/xfs/xfs_da_btree.c. Caller 0xc025e807 Is this reproducible? In particular, is it reproducible if you take some of the MD/loop/encryption complexities out of the picture (just to try to narrow down the source of the failure). And if so, could you send me a recipe describing how to reproduce it... thanks! cheers. 
-- Nathan From owner-linux-xfs@oss.sgi.com Sun Jul 10 01:52:15 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 10 Jul 2005 01:52:26 -0700 (PDT) Received: from smtp-3.hut.fi (smtp-3.hut.fi [130.233.228.93]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6A8qEH9018379 for ; Sun, 10 Jul 2005 01:52:15 -0700 Received: from localhost (katosiko.hut.fi [130.233.228.115]) by smtp-3.hut.fi (8.12.10/8.12.10) with ESMTP id j6A8oTwu014133 for ; Sun, 10 Jul 2005 11:50:29 +0300 Received: from smtp-3.hut.fi ([130.233.228.93]) by localhost (katosiko.hut.fi [130.233.228.115]) (amavisd-new, port 10024) with LMTP id 12653-48-4 for ; Sun, 10 Jul 2005 11:50:29 +0300 (EEST) Received: from wing.madduck.net (aaninen-47.hut.fi [130.233.238.47]) by smtp-3.hut.fi (8.12.10/8.12.10) with ESMTP id j6A8hgfY013473 for ; Sun, 10 Jul 2005 11:43:42 +0300 Received: by wing.madduck.net (Postfix, from userid 1000) id 806F680A9A7; Sun, 10 Jul 2005 10:43:45 +0200 (CEST) Date: Sun, 10 Jul 2005 10:43:45 +0200 From: martin f krafft To: linux xfs mailing list Subject: Re: how to flush an XFS filesystem Message-ID: <20050710084345.GA11413@localhost.localdomain> Mail-Followup-To: linux xfs mailing list References: <20050709091145.GA13108@cirrus.madduck.net> <20050710141254.A2904172@wobbly.melbourne.sgi.com> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="h31gzZEtNLTqOjlF" Content-Disposition: inline In-Reply-To: <20050710141254.A2904172@wobbly.melbourne.sgi.com> X-OS: Debian GNU/Linux 3.1 kernel 2.6.12-wing i686 X-Motto: Keep the good times rollin' X-Subliminal-Message: debian/rules! X-Spamtrap: madduck.bogus@madduck.net User-Agent: Mutt/1.5.9i X-TKK-Virus-Scanned: by amavisd-new-2.1.2-hutcc at katosiko.hut.fi X-archive-position: 5595 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: madduck@madduck.net Precedence: bulk X-list: linux-xfs Content-Length: 2353 Lines: 75 --h31gzZEtNLTqOjlF Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable also sprach Nathan Scott [2005.07.10.0612 +0200]: > OK. Can you remount-readonly before you reboot? That is what > is done for the root filesystem before a clean shutdown... and > that flushes everything with no log recovery being required at > startup. I have considered this and will have to look into it. > (I assume by "immediately reboot" above, you are not doing a clean > system shutdown for some reason? why not, out of curiousity?) Software suspend. > > The problem is that xfs_freeze -f says it would flush everything to > > the disk, but it does not. Not even waiting for 20 seconds after > > calling xfs_freeze works. If xfs_freeze does not do the trick and > > sync does not work for XFS, >=20 > I'm curious in what way xfs_freeze did not work here? >=20 > And to clarify your statement above ("sync does not work for XFS"), > sync works just fine on XFS, it just doesn't do what Grub incorrectly > assumes it will do. Right. So for me 'sync' means to flush to disk after which even direct hardware access would find the data. > "mount -oremount,ro ..." is guaranteed to do that, and is > a filesystem independent way of doing things, so seems like > a better solution. xfs_freeze should also do so, so I'm a bit > surprised by your assertion there ... what was your test case > where something was not flushed? 
Here's the rundown: Grub menu file is changed kernel freezer is activated filesystems are left untouched system is shut down then: grub starts and /boot has not been flushed. --=20 martin; (greetings from the heart of the sun.) \____ echo mailto: !#^."<*>"|tr "<*> mailto:" net@madduck =20 invalid/expired pgp subkeys? use subkeys.pgp.net as keyserver! spamtraps: madduck.bogus@madduck.net =20 windoze nt crashed. i am the blue screen of death. no one hears your screams. --h31gzZEtNLTqOjlF Content-Type: application/pgp-signature; name="signature.asc" Content-Description: Digital signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.1 (GNU/Linux) iD8DBQFC0N/BIgvIgzMMSnURAvnIAJ9cUhHX3vpzpvElkfwUnvLPwH1NuACfaHHF WuFY/vvNd/NGpGmorYL6mro= =vR1O -----END PGP SIGNATURE----- --h31gzZEtNLTqOjlF-- From owner-linux-xfs@oss.sgi.com Sun Jul 10 06:21:25 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 10 Jul 2005 06:21:30 -0700 (PDT) Received: from sccrmhc12.comcast.net (sccrmhc12.comcast.net [204.127.202.56]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6ADLMH9005362 for ; Sun, 10 Jul 2005 06:21:25 -0700 Received: from c-66-30-114-143.hsd1.ma.comcast.net ([66.30.114.143]) by comcast.net (sccrmhc12) with ESMTP id <20050710131942012009llrce>; Sun, 10 Jul 2005 13:19:42 +0000 Received: from c-66-30-114-143.hsd1.ma.comcast.net (localhost.127.in-addr.arpa [127.0.0.1]) by c-66-30-114-143.hsd1.ma.comcast.net (8.13.4/8.13.1) with ESMTP id j6ADJfsb005273 for ; Sun, 10 Jul 2005 09:19:42 -0400 (EDT) (envelope-from rodrigc@c-66-30-114-143.hsd1.ma.comcast.net) Received: (from rodrigc@localhost) by c-66-30-114-143.hsd1.ma.comcast.net (8.13.4/8.13.1/Submit) id j6ADJfDn005272 for linux-xfs@oss.sgi.com; Sun, 10 Jul 2005 09:19:41 -0400 (EDT) (envelope-from rodrigc) Date: Sun, 10 Jul 2005 09:19:41 -0400 From: Craig Rodrigues To: linux-xfs@oss.sgi.com Subject: cvsup of XFS code? Message-ID: <20050710131941.GA5256@crodrigues.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.9i X-archive-position: 5596 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: rodrigc@crodrigues.org Precedence: bulk X-list: linux-xfs Content-Length: 487 Lines: 22 Hi, I tried to follow the instructions for obtaining the XFS code via CVSup at: http://oss.sgi.com/projects/xfs/source.html but it is not working: % cvsup -L 2 -g cvsupfile Parsing supfile "cvsupfile" Connecting to xfs.org Connected to xfs.org Server software version: SNAP_16_1h Negotiating file attribute support Exchanging collection information Server message: Unknown collection "linux-2.6-xfs" This used to work before.... 
-- Craig Rodrigues rodrigc@crodrigues.org From owner-linux-xfs@oss.sgi.com Sun Jul 10 12:37:59 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 10 Jul 2005 12:38:04 -0700 (PDT) Received: from mx2.suse.de (ns2.suse.de [195.135.220.15]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6AJbwH9001546 for ; Sun, 10 Jul 2005 12:37:59 -0700 Received: from Relay1.suse.de (mail2.suse.de [195.135.221.8]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx2.suse.de (Postfix) with ESMTP id 427FB1D85D; Sun, 10 Jul 2005 21:36:18 +0200 (CEST) Date: Sun, 10 Jul 2005 19:36:18 +0000 From: Olaf Hering To: Andrew Morton , linux-kernel@vger.kernel.org Cc: linux-xfs@oss.sgi.com Subject: [PATCH 70/82] remove linux/version.h from fs/xfs/ Message-ID: <20050710193618.70.qNxGNj4127.2247.olh@nectarine.suse.de> Mime-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline X-DOS: I got your 640K Real Mode Right Here Buddy! X-Homeland-Security: You are not supposed to read this line! You are a terrorist! User-Agent: Mutt und vi sind doch schneller als Notes (und GroupWise) In-Reply-To: <20050710193508.0.PmFpst2252.2247.olh@nectarine.suse.de> X-archive-position: 5597 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: olh@suse.de Precedence: bulk X-list: linux-xfs Content-Length: 1738 Lines: 49 changing CONFIG_LOCALVERSION rebuilds too much, for no appearent reason. remove code for obsolete kernels Signed-off-by: Olaf Hering fs/xfs/linux-2.6/xfs_linux.h | 1 - fs/xfs/xfs_dmapi.h | 16 ---------------- 2 files changed, 17 deletions(-) Index: linux-2.6.13-rc2-mm1/fs/xfs/linux-2.6/xfs_linux.h =================================================================== --- linux-2.6.13-rc2-mm1.orig/fs/xfs/linux-2.6/xfs_linux.h +++ linux-2.6.13-rc2-mm1/fs/xfs/linux-2.6/xfs_linux.h @@ -87,7 +87,6 @@ #include #include #include -#include #include #include Index: linux-2.6.13-rc2-mm1/fs/xfs/xfs_dmapi.h =================================================================== --- linux-2.6.13-rc2-mm1.orig/fs/xfs/xfs_dmapi.h +++ linux-2.6.13-rc2-mm1/fs/xfs/xfs_dmapi.h @@ -172,25 +172,9 @@ typedef enum { /* * Based on IO_ISDIRECT, decide which i_ flag is set. */ -#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,0) #define DM_SEM_FLAG_RD(ioflags) (((ioflags) & IO_ISDIRECT) ? DM_FLAGS_ISEM : 0) #define DM_SEM_FLAG_WR (DM_FLAGS_IALLOCSEM_WR | DM_FLAGS_ISEM) -#endif - -#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,0)) && - (LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,22)) -#define DM_SEM_FLAG_RD(ioflags) (((ioflags) & IO_ISDIRECT) ? - DM_FLAGS_IALLOCSEM_RD : DM_FLAGS_ISEM) -#define DM_SEM_FLAG_WR (DM_FLAGS_IALLOCSEM_WR | DM_FLAGS_ISEM) -#endif - -#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,4,21) -#define DM_SEM_FLAG_RD(ioflags) (((ioflags) & IO_ISDIRECT) ? 
- 0 : DM_FLAGS_ISEM) -#define DM_SEM_FLAG_WR (DM_FLAGS_ISEM) -#endif - /* * Macros to turn caller specified delay/block flags into From owner-linux-xfs@oss.sgi.com Sun Jul 10 15:17:52 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 10 Jul 2005 15:17:55 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6AMHoH9012383 for ; Sun, 10 Jul 2005 15:17:51 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id IAA11427 for ; Mon, 11 Jul 2005 08:16:09 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6AMGFkt2902168 for ; Mon, 11 Jul 2005 08:16:15 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id j6AMGEgi2918794 for linux-xfs@oss.sgi.com; Mon, 11 Jul 2005 08:16:14 +1000 (EST) Date: Mon, 11 Jul 2005 08:16:13 +1000 From: Nathan Scott To: linux xfs mailing list Subject: Re: how to flush an XFS filesystem Message-ID: <20050711081613.A2828633@wobbly.melbourne.sgi.com> References: <20050709091145.GA13108@cirrus.madduck.net> <20050710141254.A2904172@wobbly.melbourne.sgi.com> <20050710084345.GA11413@localhost.localdomain> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: <20050710084345.GA11413@localhost.localdomain>; from madduck@madduck.net on Sun, Jul 10, 2005 at 10:43:45AM +0200 X-archive-position: 5598 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 1278 Lines: 40 On Sun, Jul 10, 2005 at 10:43:45AM +0200, martin f krafft wrote: > > (I assume by "immediately reboot" above, you are not doing a clean > > system shutdown for some reason? why not, out of curiousity?) > > Software suspend. Er, oh. > > "mount -oremount,ro ..." is guaranteed to do that, and is > > a filesystem independent way of doing things, so seems like > > a better solution. xfs_freeze should also do so, so I'm a bit > > surprised by your assertion there ... what was your test case > > where something was not flushed? > > Here's the rundown: > > Grub menu file is changed > kernel freezer is activated > filesystems are left untouched > system is shut down There's no xfs_freeze(8) in that test case...? I'm confused. > grub starts and /boot has not been flushed. Hmm, AFAICT you didn't really freeze the filesystem. The software suspend "freezer" is putting the system into a state such that it stops writing, such that kernel daemons go to "sleep" (and don't wakeup on their usual timer-driven way), etc. The assumption there is the system will be woken up from this state at some point not switched off and cold booted. At least, thats my understanding from the guys who sent us the XFS patches to implement that stuff.. cheers. 
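(For contrast with the software-suspend process "freezer" described above, here is a rough sketch of what xfs_freeze -f and -u themselves do: an ioctl against the mounted filesystem, which flushes data and quiesces the log. This is only a sketch; the XFS_IOC_FREEZE/XFS_IOC_THAW definitions are taken from xfs_fs.h as shipped with xfsprogs, and /boot is an illustrative mount point. Whether a frozen filesystem really leaves everything findable by a raw-device reader is exactly what is being debated in this thread.)

/* Sketch only: freeze and thaw an XFS filesystem the way xfs_freeze(8) does. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

#ifndef XFS_IOC_FREEZE                      /* normally from <xfs/xfs_fs.h> */
#define XFS_IOC_FREEZE  _IOWR('X', 119, int)
#define XFS_IOC_THAW    _IOWR('X', 120, int)
#endif

int main(void)
{
    int level = 1;                          /* freeze level used by xfs_freeze */
    int fd = open("/boot", O_RDONLY);       /* any open fd on the filesystem */

    if (fd < 0) {
        perror("open /boot");
        return 1;
    }
    if (ioctl(fd, XFS_IOC_FREEZE, &level) < 0)   /* like xfs_freeze -f */
        perror("XFS_IOC_FREEZE");

    /* ... the underlying block device could be read directly here ... */

    if (ioctl(fd, XFS_IOC_THAW, &level) < 0)     /* like xfs_freeze -u */
        perror("XFS_IOC_THAW");
    close(fd);
    return 0;
}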
-- Nathan From owner-linux-xfs@oss.sgi.com Sun Jul 10 15:52:44 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 10 Jul 2005 15:52:49 -0700 (PDT) Received: from smtp-2.hut.fi (smtp-2.hut.fi [130.233.228.92]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6AMqhH9014277 for ; Sun, 10 Jul 2005 15:52:44 -0700 Received: from localhost (katosiko.hut.fi [130.233.228.115]) by smtp-2.hut.fi (8.12.10/8.12.10) with ESMTP id j6AMp08Y013123 for ; Mon, 11 Jul 2005 01:51:00 +0300 Received: from smtp-2.hut.fi ([130.233.228.92]) by localhost (katosiko.hut.fi [130.233.228.115]) (amavisd-new, port 10024) with LMTP id 26890-29-4 for ; Mon, 11 Jul 2005 01:51:00 +0300 (EEST) Received: from wing.madduck.net (a130-233-4-144.debconf5.hut.fi [130.233.4.144]) by smtp-2.hut.fi (8.12.10/8.12.10) with ESMTP id j6AMkPte012734 for ; Mon, 11 Jul 2005 01:46:26 +0300 Received: by wing.madduck.net (Postfix, from userid 1000) id C4EF480E807; Mon, 11 Jul 2005 01:46:35 +0300 (EEST) Date: Mon, 11 Jul 2005 01:46:35 +0300 From: martin f krafft To: linux xfs mailing list Subject: Re: how to flush an XFS filesystem Message-ID: <20050710224635.GA12333@localhost.localdomain> Mail-Followup-To: linux xfs mailing list References: <20050709091145.GA13108@cirrus.madduck.net> <20050710141254.A2904172@wobbly.melbourne.sgi.com> <20050710084345.GA11413@localhost.localdomain> <20050711081613.A2828633@wobbly.melbourne.sgi.com> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="uAKRQypu60I7Lcqm" Content-Disposition: inline In-Reply-To: <20050711081613.A2828633@wobbly.melbourne.sgi.com> X-OS: Debian GNU/Linux 3.1 kernel 2.6.11-wing i686 X-Motto: Keep the good times rollin' X-Subliminal-Message: debian/rules! X-Spamtrap: madduck.bogus@madduck.net User-Agent: Mutt/1.5.9i X-TKK-Virus-Scanned: by amavisd-new-2.1.2-hutcc at katosiko.hut.fi X-archive-position: 5599 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: madduck@madduck.net Precedence: bulk X-list: linux-xfs Content-Length: 1613 Lines: 49 --uAKRQypu60I7Lcqm Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable also sprach Nathan Scott [2005.07.11.0116 +0300]: > > Grub menu file is changed > > kernel freezer is activated > > filesystems are left untouched > > system is shut down >=20 > There's no xfs_freeze(8) in that test case...? I'm confused. >=20 > > grub starts and /boot has not been flushed. >=20 > Hmm, AFAICT you didn't really freeze the filesystem. The software > suspend "freezer" is putting the system into a state such that it > stops writing, such that kernel daemons go to "sleep" (and don't > wakeup on their usual timer-driven way), etc. The assumption > there is the system will be woken up from this state at some point > not switched off and cold booted. Sorry for leaving out this vital info, I freeze (and unfreeze) right after changing the grub menu file. --=20 martin; (greetings from the heart of the sun.) \____ echo mailto: !#^."<*>"|tr "<*> mailto:" net@madduck =20 invalid/expired pgp subkeys? use subkeys.pgp.net as keyserver! 
spamtraps: madduck.bogus@madduck.net =20 perl -e 'print $i=3Dpack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);' --uAKRQypu60I7Lcqm Content-Type: application/pgp-signature; name="signature.asc" Content-Description: Digital signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.1 (GNU/Linux) iD8DBQFC0aVLIgvIgzMMSnURAoHiAKCdr+rld1SvyN1Wn1MKpU5yy6//kACg10HI Jy3NhrKJw/FE4l3ZVhWBUd4= =NnX+ -----END PGP SIGNATURE----- --uAKRQypu60I7Lcqm-- From owner-linux-xfs@oss.sgi.com Sun Jul 10 18:27:54 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 10 Jul 2005 18:28:01 -0700 (PDT) Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.199]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6B1RrH9026938 for ; Sun, 10 Jul 2005 18:27:54 -0700 Received: by wproxy.gmail.com with SMTP id 71so750459wra for ; Sun, 10 Jul 2005 18:26:13 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=BLw8AXVCQV1BvVSywZdZhP86DU0PdUdFLXOnqW6QvMNNVUU+QGCWUxPnAApIIm8dvU4uMHL7wA39qfo69zfL337KOYuiOYplfBN53o/5aCnNS3lEWlvhNbXyFq5tJTkDGcWDp4Ax8b17gVK5d7Q+LSXNSmwEVtVJD9HpwIjulOc= Received: by 10.54.36.75 with SMTP id j75mr3519140wrj; Sun, 10 Jul 2005 18:26:13 -0700 (PDT) Received: by 10.54.110.14 with HTTP; Sun, 10 Jul 2005 18:26:13 -0700 (PDT) Message-ID: <359782e70507101826ac15e8e@mail.gmail.com> Date: Mon, 11 Jul 2005 09:26:13 +0800 From: Qin Mikore Li Reply-To: Qin Mikore Li To: linux-xfs@oss.sgi.com Subject: Request xfsprog patches for cross-compiling for Xscale Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j6B1RsH9026940 X-archive-position: 5600 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: oldmoonster@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 642 Lines: 22 Hi, XFS folks and experts, With following two patches, I got xfs-1.2.0 run on redhat9 with linux-2.4.19 kernel, and the xfsprogs I used is the latest version from 1.3.1 release. linux-2.4.19-core-xfs-1.2.0.patch linux-2.4.19-xfs-1.2.0.patch Now, I am trying to porting above kernel from x86 to arm based Xscale platform, and would like to know whether or not there is a patch for cross-compiling xfsprog(app) for Xscale that of the host is x86/redhat 9. It seems the porting of xfsprog is more difficult than kernel porting, but I don't think I am so lucky to be the first one in the world to do such a work. Could you help? 
Thanks QL From owner-linux-xfs@oss.sgi.com Sun Jul 10 18:57:00 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 10 Jul 2005 18:57:04 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6B1uwH9028863 for ; Sun, 10 Jul 2005 18:56:59 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA15511 for ; Mon, 11 Jul 2005 11:55:17 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6B1tNkt2899550 for ; Mon, 11 Jul 2005 11:55:23 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id j6B1mSua001345 for ; Mon, 11 Jul 2005 11:48:28 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id j6B1mRTj001343 for linux-xfs@oss.sgi.com; Mon, 11 Jul 2005 11:48:27 +1000 Date: Mon, 11 Jul 2005 11:48:27 +1000 From: Nathan Scott To: linux xfs mailing list Subject: Re: how to flush an XFS filesystem Message-ID: <20050711014827.GB829@frodo> References: <20050709091145.GA13108@cirrus.madduck.net> <20050710141254.A2904172@wobbly.melbourne.sgi.com> <20050710084345.GA11413@localhost.localdomain> <20050711081613.A2828633@wobbly.melbourne.sgi.com> <20050710224635.GA12333@localhost.localdomain> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20050710224635.GA12333@localhost.localdomain> User-Agent: Mutt/1.5.3i X-archive-position: 5601 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 1173 Lines: 31 On Mon, Jul 11, 2005 at 01:46:35AM +0300, martin f krafft wrote: > also sprach Nathan Scott [2005.07.11.0116 +0300]: > > > Grub menu file is changed > > > kernel freezer is activated > > > filesystems are left untouched > > > system is shut down > > > > There's no xfs_freeze(8) in that test case...? I'm confused. > > > > > grub starts and /boot has not been flushed. > > > > Hmm, AFAICT you didn't really freeze the filesystem. The software > > suspend "freezer" is putting the system into a state such that it > > stops writing, such that kernel daemons go to "sleep" (and don't > > wakeup on their usual timer-driven way), etc. The assumption > > there is the system will be woken up from this state at some point > > not switched off and cold booted. > > Sorry for leaving out this vital info, I freeze (and unfreeze) right > after changing the grub menu file. Ah, OK thats more interesting then - can you describe the way in which the Grub menu file is changed? e.g. ... is a new inode created or is an existing one overwritten? is it written via write(2) or mmap? Is it using buffered or direct IO? etc. thanks. 
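(As a point of reference for the write(2)/buffered-IO question above, a minimal sketch of how the menu file could be written and pushed to disk: ordinary write(2) followed by fsync of the file and of its directory. The path and contents are assumptions for the example. Note the caveat running through this thread: fsync makes the data durable, but a raw-device reader such as Grub will not replay metadata that still lives only in the XFS log, which is why freeze or remount-read-only keeps coming up.)

/* Sketch only: create menu.lst with buffered IO and fsync the file and its
 * parent directory.  Paths and contents are illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *text = "default 0\ntimeout 5\n";    /* stand-in contents */
    int fd = open("/boot/grub/menu.lst", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    int dirfd;

    if (fd < 0) {
        perror("open menu.lst");
        return 1;
    }
    if (write(fd, text, strlen(text)) < 0 || fsync(fd) < 0)
        perror("write/fsync menu.lst");
    close(fd);

    dirfd = open("/boot/grub", O_RDONLY);           /* flush the directory entry too */
    if (dirfd >= 0) {
        fsync(dirfd);
        close(dirfd);
    }
    return 0;
}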
-- Nathan From owner-linux-xfs@oss.sgi.com Sun Jul 10 19:01:21 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 10 Jul 2005 19:01:24 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6B21KH9029513 for ; Sun, 10 Jul 2005 19:01:20 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA15595; Mon, 11 Jul 2005 11:59:34 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6B1xekt2901779; Mon, 11 Jul 2005 11:59:41 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id j6B1qkua001403; Mon, 11 Jul 2005 11:52:46 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id j6B1qj8s001401; Mon, 11 Jul 2005 11:52:45 +1000 Date: Mon, 11 Jul 2005 11:52:45 +1000 From: Nathan Scott To: Qin Mikore Li Cc: linux-xfs@oss.sgi.com Subject: Re: Request xfsprog patches for cross-compiling for Xscale Message-ID: <20050711015245.GC829@frodo> References: <359782e70507101826ac15e8e@mail.gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <359782e70507101826ac15e8e@mail.gmail.com> User-Agent: Mutt/1.5.3i X-archive-position: 5602 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 1019 Lines: 31 On Mon, Jul 11, 2005 at 09:26:13AM +0800, Qin Mikore Li wrote: > With following two patches, I got xfs-1.2.0 run on redhat9 with Which two patches? > linux-2.4.19 kernel, and the xfsprogs I used is the latest version > from 1.3.1 release. > > linux-2.4.19-core-xfs-1.2.0.patch > linux-2.4.19-xfs-1.2.0.patch > > Now, I am trying to porting above kernel from x86 to arm based Xscale > platform, and would like to know whether or not there is a patch for > cross-compiling xfsprog(app) for Xscale that of the host is x86/redhat > 9. It seems the porting of xfsprog is more difficult than kernel > porting, but I don't think I am so lucky to be the first one in the > world to do such a work. > > Could you help? Use a current version of xfsprogs from CVS on oss.sgi.com and then let us know what the actual problems are (there were no patches in your mail, so I'm kinda guessing here...). There was some changes in xfsprogs since that release you're using to make cross-compiles much easier. cheers. 
-- Nathan From owner-linux-xfs@oss.sgi.com Sun Jul 10 19:14:54 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 10 Jul 2005 19:14:58 -0700 (PDT) Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.205]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6B2ErH9030756 for ; Sun, 10 Jul 2005 19:14:53 -0700 Received: by wproxy.gmail.com with SMTP id 71so754698wra for ; Sun, 10 Jul 2005 19:13:13 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:references; b=HQ/AcvtrlaPfSKvmX0p0hbdaHq+PtQ5xsPoxb3v/P5pt1itERIx4DGkN5L39xpNS0PuoLkDufnEJq7lwBPubxN9IrEzC2IiE3nRsYffHSl8oRtLNZBRBFNV2BmmQTick3OZ5RT9rvyEUCLL+Ept+75zAq9ftEns02b9X+B94d+E= Received: by 10.54.26.56 with SMTP id 56mr3593190wrz; Sun, 10 Jul 2005 19:13:13 -0700 (PDT) Received: by 10.54.110.14 with HTTP; Sun, 10 Jul 2005 19:13:13 -0700 (PDT) Message-ID: <359782e705071019137b138ce@mail.gmail.com> Date: Mon, 11 Jul 2005 10:13:13 +0800 From: "Q.L" Reply-To: "Q.L" To: Nathan Scott Subject: Re: Request xfsprog patches for cross-compiling for Xscale Cc: linux-xfs@oss.sgi.com In-Reply-To: <20050711015245.GC829@frodo> Mime-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_7163_12804647.1121047993426" References: <359782e70507101826ac15e8e@mail.gmail.com> <20050711015245.GC829@frodo> X-archive-position: 5603 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: oldmoonster@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 29134 Lines: 504 ------=_Part_7163_12804647.1121047993426 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline Hi, Nathan and all, I attached the patch in the mail. On 7/11/05, Nathan Scott wrote: > On Mon, Jul 11, 2005 at 09:26:13AM +0800, Qin Mikore Li wrote: > > With following two patches, I got xfs-1.2.0 run on redhat9 with >=20 > Which two patches? >=20 > > linux-2.4.19 kernel, and the xfsprogs I used is the latest version > > from 1.3.1 release. > > > > linux-2.4.19-core-xfs-1.2.0.patch > > linux-2.4.19-xfs-1.2.0.patch > > > > Now, I am trying to porting above kernel from x86 to arm based Xscale > > platform, and would like to know whether or not there is a patch for > > cross-compiling xfsprog(app) for Xscale that of the host is x86/redhat > > 9. It seems the porting of xfsprog is more difficult than kernel > > porting, but I don't think I am so lucky to be the first one in the > > world to do such a work. > > > > Could you help? >=20 > Use a current version of xfsprogs from CVS on oss.sgi.com and then > let us know what the actual problems are (there were no patches in > your mail, so I'm kinda guessing here...). There was some changes > in xfsprogs since that release you're using to make cross-compiles > much easier. easier? In fact, this is what I want to know, as I have little experience with xfsprogs so far. 
Thanks QL
------=_Part_7163_12804647.1121047993426 Content-Type: application/x-gzip; name="linux-2.4.19-core-xfs-1.2.0.patch.gz" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="linux-2.4.19-core-xfs-1.2.0.patch.gz"
[base64 attachment body omitted: it decodes to a PlanetMirror "Download In Progress" HTML redirect page, not a gzipped patch]
------=_Part_7163_12804647.1121047993426 Content-Type: application/x-gzip; name="linux-2.4.19-xfs-1.2.0.patch.gz" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="linux-2.4.19-xfs-1.2.0.patch.gz"
[base64 attachment body omitted: again a PlanetMirror "Download In Progress" HTML redirect page rather than the patch]
------=_Part_7163_12804647.1121047993426--
From owner-linux-xfs@oss.sgi.com Sun Jul 10 22:58:43 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 10 Jul 2005 22:58:49 -0700 (PDT) Received: from nproxy.gmail.com (nproxy.gmail.com [64.233.182.207]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6B5wgH9018591 for ; Sun, 10 Jul 2005 22:58:42 -0700 Received: by nproxy.gmail.com with SMTP id o25so186250nfa for ; Sun, 10 Jul 2005 22:56:59 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com;
h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=NJ1XxHP+1Xjz+fvQ9AtJyLupewbD5kVhFE/vjZrjDhkoJW2P7X4Lt7LbKPaIUYE7dlbF3ODLEpltsooCIyxi1J/xrWI3HELCxl/FCGY0IeoeSygNAjzMU/X9cNoyRtrOShMXlclS6dgseI8IlU2Xl3XLHv5jDddDrjtBdr8l9BQ= Received: by 10.48.3.19 with SMTP id 19mr126192nfc; Sun, 10 Jul 2005 22:56:59 -0700 (PDT) Received: by 10.48.49.16 with HTTP; Sun, 10 Jul 2005 22:56:59 -0700 (PDT) Message-ID: <2cd57c900507102256115bd4ff@mail.gmail.com> Date: Mon, 11 Jul 2005 13:56:59 +0800 From: Coywolf Qi Hunt Reply-To: coywolf@lovecn.org To: Craig Rodrigues Subject: Re: cvsup of XFS code? Cc: linux-xfs@oss.sgi.com In-Reply-To: <20050710131941.GA5256@crodrigues.org> Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline References: <20050710131941.GA5256@crodrigues.org> Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j6B5whH9018596 X-archive-position: 5604 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: coywolf@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 630 Lines: 26 On 7/10/05, Craig Rodrigues wrote: > Hi, > > I tried to follow the instructions for obtaining the XFS > code via CVSup at: http://oss.sgi.com/projects/xfs/source.html > but it is not working: > > % cvsup -L 2 -g cvsupfile > Parsing supfile "cvsupfile" > Connecting to xfs.org > Connected to xfs.org > Server software version: SNAP_16_1h > Negotiating file attribute support > Exchanging collection information > Server message: Unknown collection "linux-2.6-xfs" > > > This used to work before.... > It seems to me that xfs.org isn't relevant, is it? -- Coywolf Qi Hunt http://ahbl.org/~coywolf/ From owner-linux-xfs@oss.sgi.com Mon Jul 11 00:01:03 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 11 Jul 2005 00:01:06 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6B711H9024239 for ; Mon, 11 Jul 2005 00:01:02 -0700 Received: from bruce.melbourne.sgi.com (bruce.melbourne.sgi.com [134.14.54.176]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id QAA21815 for ; Mon, 11 Jul 2005 16:59:20 +1000 Received: from bruce.melbourne.sgi.com (localhost.localdomain [127.0.0.1]) by bruce.melbourne.sgi.com (8.12.8/8.12.8) with ESMTP id j6B6ZCtX005717 for ; Mon, 11 Jul 2005 16:35:13 +1000 Received: (from fsgqa@localhost) by bruce.melbourne.sgi.com (8.12.8/8.12.8/Submit) id j6B6ZCEJ005716 for linux-xfs@oss.sgi.com; Mon, 11 Jul 2005 16:35:12 +1000 Date: Mon, 11 Jul 2005 16:35:12 +1000 From: FSG QA Message-Id: <200507110635.j6B6ZCEJ005716@bruce.melbourne.sgi.com> To: linux-xfs@oss.sgi.com Subject: TAKE 907752 - xfstests X-archive-position: 5605 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: fsgqa@bruce.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 1209 Lines: 33 Fix fsstress builds when setting the project identifier (fsx interface now). 
Date: Tue Jul 5 12:17:32 AEST 2005 Workarea: bruce.melbourne.sgi.com:/home/fsgqa/qa/xfs-cmds Inspected by: nathans The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:23061a xfstests/ltp/fsstress.c - 1.6 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/ltp/fsstress.c.diff?r1=text&tr1=1.6&r2=text&tr2=1.5&f=h Fix up test 096 to work for both internal and external logs. Date: Mon Jul 11 16:58:55 AEST 2005 Workarea: bruce.melbourne.sgi.com:/home/fsgqa/qa/xfs-cmds Inspected by: nathans,tes The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:23131a xfstests/096.external - 1.1 - new http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/096.external xfstests/096 - 1.5 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/096.diff?r1=text&tr1=1.5&r2=text&tr2=1.4&f=h xfstests/096.out - 1.3 - renamed to xfstests/096.internal 1.1 http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfstests/096.out.diff?r1=text&tr1=1.3&r2=text&tr2=1.2&f=h From owner-linux-xfs@oss.sgi.com Mon Jul 11 00:34:27 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 11 Jul 2005 00:34:32 -0700 (PDT) Received: from smtp-4.hut.fi (smtp-4.hut.fi [130.233.228.94]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6B7YQH9030127 for ; Mon, 11 Jul 2005 00:34:27 -0700 Received: from localhost (katosiko.hut.fi [130.233.228.115]) by smtp-4.hut.fi (8.12.10/8.12.10) with ESMTP id j6B7WjQF020178 for ; Mon, 11 Jul 2005 10:32:45 +0300 Received: from smtp-4.hut.fi ([130.233.228.94]) by localhost (katosiko.hut.fi [130.233.228.115]) (amavisd-new, port 10024) with LMTP id 25496-19-5 for ; Mon, 11 Jul 2005 10:32:44 +0300 (EEST) Received: from wing.madduck.net (aaninen-47.hut.fi [130.233.238.47]) by smtp-4.hut.fi (8.12.10/8.12.10) with ESMTP id j6B7Rp2L019678 for ; Mon, 11 Jul 2005 10:27:51 +0300 Received: by wing.madduck.net (Postfix, from userid 1000) id 7DAA380E838; Mon, 11 Jul 2005 10:28:07 +0300 (EEST) Date: Mon, 11 Jul 2005 10:28:07 +0300 From: martin f krafft To: linux xfs mailing list Subject: Re: how to flush an XFS filesystem Message-ID: <20050711072807.GA16354@localhost.localdomain> Mail-Followup-To: linux xfs mailing list References: <20050709091145.GA13108@cirrus.madduck.net> <20050710141254.A2904172@wobbly.melbourne.sgi.com> <20050710084345.GA11413@localhost.localdomain> <20050711081613.A2828633@wobbly.melbourne.sgi.com> <20050710224635.GA12333@localhost.localdomain> <20050711014827.GB829@frodo> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="jI8keyz6grp/JLjh" Content-Disposition: inline In-Reply-To: <20050711014827.GB829@frodo> X-OS: Debian GNU/Linux 3.1 kernel 2.6.11-wing i686 X-Motto: Keep the good times rollin' X-Subliminal-Message: debian/rules! 
X-Spamtrap: madduck.bogus@madduck.net User-Agent: Mutt/1.5.9i X-TKK-Virus-Scanned: by amavisd-new-2.1.2-hutcc at katosiko.hut.fi X-archive-position: 5606 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: madduck@madduck.net Precedence: bulk X-list: linux-xfs Content-Length: 1294 Lines: 39 --jI8keyz6grp/JLjh Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable also sprach Nathan Scott [2005.07.11.0448 +0300]: > Ah, OK thats more interesting then - can you describe the way in > which the Grub menu file is changed? e.g. ... is a new inode > created or is an existing one overwritten? is it written via > write(2) or mmap? Is it using buffered or direct IO? etc. I am using sed to edit inplace and from what I know about sed, it actually creates a new inode. --=20 martin; (greetings from the heart of the sun.) \____ echo mailto: !#^."<*>"|tr "<*> mailto:" net@madduck =20 invalid/expired pgp subkeys? use subkeys.pgp.net as keyserver! spamtraps: madduck.bogus@madduck.net =20 "you know you're a hopeless geek when you misspell 'date' as 'data'" -- branden robinson --jI8keyz6grp/JLjh Content-Type: application/pgp-signature; name="signature.asc" Content-Description: Digital signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.1 (GNU/Linux) iD8DBQFC0h+HIgvIgzMMSnURAjDgAJ0frdjRhYuQ4goESgmFGwAdWtyI2QCgxAWE +NSH1OGUDpwm5Nb8DbH+x5s= =IVZZ -----END PGP SIGNATURE----- --jI8keyz6grp/JLjh-- From owner-linux-xfs@oss.sgi.com Mon Jul 11 14:04:57 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 11 Jul 2005 14:05:03 -0700 (PDT) Received: from omx2.sgi.com (omx2-ext.sgi.com [192.48.171.19]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6BL4vH9010211 for ; Mon, 11 Jul 2005 14:04:57 -0700 Received: from flecktone.americas.sgi.com (flecktone.americas.sgi.com [198.149.16.15]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with ESMTP id j6BMttIp030359 for ; Mon, 11 Jul 2005 15:55:55 -0700 Received: from daisy-e236.americas.sgi.com (daisy-e236.americas.sgi.com [128.162.236.214]) by flecktone.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id j6BL3FDN11946717; Mon, 11 Jul 2005 16:03:15 -0500 (CDT) Received: from naboo.americas.sgi.com (naboo.americas.sgi.com [128.162.233.73]) by daisy-e236.americas.sgi.com (8.12.9/SGI-server-1.8) with ESMTP id j6BL3Ev0369868; Mon, 11 Jul 2005 16:03:15 -0500 (CDT) Subject: Re: cvsup of XFS code? 
From: Russell Cattelan To: Craig Rodrigues Cc: linux-xfs@oss.sgi.com In-Reply-To: <20050710131941.GA5256@crodrigues.org> References: <20050710131941.GA5256@crodrigues.org> Content-Type: text/plain Date: Mon, 11 Jul 2005 16:03:14 -0500 Message-Id: <1121115794.25840.7.camel@naboo.americas.sgi.com> Mime-Version: 1.0 X-Mailer: Evolution 2.0.3-4mdk Content-Transfer-Encoding: 7bit X-archive-position: 5607 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: cattelan@thebarn.com Precedence: bulk X-list: linux-xfs Content-Length: 579 Lines: 24 On Sun, 2005-07-10 at 09:19 -0400, Craig Rodrigues wrote: > Hi, > > I tried to follow the instructions for obtaining the XFS > code via CVSup at: http://oss.sgi.com/projects/xfs/source.html > but it is not working: > > % cvsup -L 2 -g cvsupfile > Parsing supfile "cvsupfile" > Connecting to xfs.org > Connected to xfs.org > Server software version: SNAP_16_1h > Negotiating file attribute support > Exchanging collection information > Server message: Unknown collection "linux-2.6-xfs" > > > This used to work before.... > Hmm sorry about that, I'll fix it up. -Russell From owner-linux-xfs@oss.sgi.com Tue Jul 12 09:50:09 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 12 Jul 2005 09:50:16 -0700 (PDT) Received: from strike.wu-wien.ac.at (strike.wu-wien.ac.at [137.208.8.200]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6CGo6H9006569 for ; Tue, 12 Jul 2005 09:50:08 -0700 Received: from localhost (localhost.localdomain [127.0.0.1]) by strike.wu-wien.ac.at (Postfix) with ESMTP id 391452000F2; Tue, 12 Jul 2005 18:48:19 +0200 (CEST) Received: from strike.wu-wien.ac.at ([127.0.0.1]) by localhost (strike.wu-wien.ac.at [127.0.0.1]) (amavisd-new, port 10024) with LMTP id 24729-01-5; Tue, 12 Jul 2005 18:48:12 +0200 (CEST) Received: from [137.208.89.100] (ariel.wu-wien.ac.at [137.208.89.100]) by strike.wu-wien.ac.at (Postfix) with ESMTP id 011252000F1; Tue, 12 Jul 2005 18:48:12 +0200 (CEST) Message-ID: <42D3F44B.308@strike.wu-wien.ac.at> Date: Tue, 12 Jul 2005 18:48:11 +0200 From: Alexander Bergolth User-Agent: Mozilla Thunderbird 1.0.2-1.3.3 (X11/20050513) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Nathan Scott Cc: Joshua Baker-LePain , Steve Lord , Linux xfs mailing list Subject: Re: XFS, 4K stacks, and Red Hat References: <42CD4D38.1090703@xfs.org> <20050708043740.GB1679@frodo> In-Reply-To: <20050708043740.GB1679@frodo> X-Enigmail-Version: 0.92.0.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-archive-position: 5608 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: leo@strike.wu-wien.ac.at Precedence: bulk X-list: linux-xfs Content-Length: 10635 Lines: 255 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/08/2005 06:37 AM, Nathan Scott wrote: > On Thu, Jul 07, 2005 at 12:43:15PM -0400, Joshua Baker-LePain wrote: > >>> As for XFS and a 4K stack, I think it still boils down to a few >>> edge cases, I have not seen one in years, I am doing all my >>> builds via nfs v3 with tcp/ip to an XFS filesystem. >>... >>Hrm. I was easily able to trigger stack overflows on a pretty simple >>(albeit old) setup -- RHEL4 kernel with XFS turned on, dual PIII 450, >>384MB RAM, XFS on a single SCSI disk on aic7xxx. 
> > I put in a bit of time awhile back to get the largest of these > issues sorted out - perhaps (almost certainly) RHEL4 is an older > 2.6 kernel than the one containing those changes. > > As other cases pop up (with a reproducible test case please, and > no stacking drivers in the way too :), we slowly iron them out.. > its not exactly top of the priority list though. I'm getting frequent stack overflows on one system, using xfs, lvm2, sw-raid and libata but I don't know, if they are XFS-related. I've attached the stack-trace of my last crash, using FC4 kernel-2.6.11-1.1286_FC4. I'd appreciate if someone could take a look at it. Thanks, - --leo do_IRQ: stack overflow: 476 [] do_IRQ+0x80/0x82 [] common_interrupt+0x1a/0x20 [] kmem_cache_alloc+0xd/0x49 [] mempool_alloc+0x6a/0x26c [] ide_dma_exec_cmd+0x1f/0x23 [] ide_dma_start+0x21/0x2d [] __ide_do_rw_disk+0x344/0x501 [] autoremove_wake_function+0x0/0x37 [] cfq_set_request+0x12c/0x58a [] mempool_alloc+0x6a/0x26c [] cfq_set_request+0x0/0x58a [] elv_set_request+0x14/0x23 [] get_request+0x1aa/0x5d2 [] elv_next_request+0x12/0x157 [] __make_request+0x120/0x662 [] cfq_add_crq_rb+0x86/0x93 [] mempool_alloc+0x6a/0x26c [] cfq_get_io_context+0x26/0x681 [] generic_make_request+0x94/0x23a [] schedule+0x31d/0x7b3 [] cfq_set_request+0x12c/0x58a [] autoremove_wake_function+0x0/0x37 [] submit_bio+0x46/0xcc [] autoremove_wake_function+0x0/0x37 [] bio_add_page+0x29/0x2f [] sync_page_io+0xa9/0xc6 [] write_disk_sb+0x6f/0xab [] sync_sbs+0x24/0x39 [] md_update_sb+0x78/0xf6 [] md_write_start+0x75/0x77 [] make_request+0x157/0x50a [raid1] [] mempool_alloc+0x6a/0x26c [] generic_make_request+0x94/0x23a [] bio_alloc_bioset+0xf6/0x1a7 [] bio_clone+0xa0/0xb1 [] autoremove_wake_function+0x0/0x37 [] __clone_and_map+0xb3/0x328 [dm_mod] [] mempool_alloc+0x6a/0x26c [] __delay+0x9/0xa [] autoremove_wake_function+0x0/0x37 [] __split_bio+0xcf/0x111 [dm_mod] [] dm_request+0x79/0x8e [dm_mod] [] generic_make_request+0x94/0x23a [] ide_build_sglist+0x24/0x9c [] autoremove_wake_function+0x0/0x37 [] submit_bio+0x46/0xcc [] autoremove_wake_function+0x0/0x37 [] bio_add_page+0x29/0x2f [] _pagebuf_ioapply+0x180/0x2ee [xfs] [] pagebuf_iorequest+0x30/0x132 [xfs] [] default_wake_function+0x0/0xc [] xlog_bdstrat_cb+0x41/0x45 [xfs] [] xlog_sync+0x282/0x622 [xfs] [] xfs_trans_log_buf+0x51/0x78 [xfs] [] xlog_state_release_iclog+0x13/0x21c [xfs] [] xfs_alloc_update+0x41/0xe8 [xfs] [] xlog_state_sync+0x273/0x754 [xfs] [] xfs_btree_del_cursor+0x21/0x4d [xfs] [] xfs_alloc_search_busy+0x19b/0x2e2 [xfs] [] xfs_trans_log_buf+0x51/0x78 [xfs] [] xfs_alloc_ag_vextent+0xcc/0xe5 [xfs] [] xfs_alloc_vextent+0x3e9/0x571 [xfs] [] xfs_bmap_alloc+0x1154/0x18f7 [xfs] [] xfs_bmap_add_extent_hole_delay+0x11f/0x4f8 [xfs] [] cfq_set_request+0x12c/0x58a [] xfs_bmbt_get_state+0x13/0x1c [xfs] [] xfs_bmapi+0x6e9/0x1601 [xfs] [] xfs_btree_check_lblock+0x75/0x1a2 [xfs] [] xfs_btree_read_bufl+0xac/0xc6 [xfs] [] xfs_bmbt_get_state+0x13/0x1c [xfs] [] xfs_dir2_grow_inode+0x100/0x42f [xfs] [] xfs_da_brelse+0xa2/0xad [xfs] [] xfs_dir2_node_addname_int+0x525/0x9c3 [xfs] [] xfs_dir2_node_addname+0x6d/0xc2 [xfs] [] xfs_dir2_createname+0xed/0x122 [xfs] [] xfs_dir2_createname+0x0/0x122 [xfs] [] xfs_create+0x465/0x6df [xfs] [] linvfs_mknod+0x279/0x45a [xfs] [] xfs_da_brelse+0xa2/0xad [xfs] [] avc_has_perm_noaudit+0x26/0xd1 [] avc_has_perm+0x4e/0x58 [] avc_has_perm+0x4e/0x58 [] vfs_create+0xd9/0x125 [] open_namei+0x565/0x619 [] selinux_file_permission+0xe0/0x152 [] filp_open+0x27/0x46 [] get_unused_fd+0x79/0x1d2 [] getname+0x87/0xc5 [] 
sys_open+0x31/0x5b [] syscall_call+0x7/0xb ======================= Unable to handle kernel paging request at virtual address fffff034 printing eip: c0104086 *pde = 00002067 Oops: 0000 [#1] Modules linked in: r128 drm nfsd lockd sunrpc md5 ipv6 parport_pc lp parport autofs4 smsc47m1 eeprom adm1025 adm1031 i2c_sensor i2c_isa i2c_i80 1 i2c_core uhci_hcd ohci_hcd ehci_hcd 3c59x mii floppy xfs exportfs raid5 xor raid1 dm_mod sata_promise libata sd_mod scsi_mod CPU: 0 EIP: 0060:[] Not tainted VLI EFLAGS: 00010002 (2.6.12-1.1390_FC4) EIP is at show_trace+0x5a/0x78 eax: fffffffd ebx: ffffffff ecx: 000055ae edx: 000055ae esi: fffff000 edi: 00000000 ebp: 00000220 esp: dbdb01f4 ds: 007b es: 007b ss: 0068 Process vm86.c (pid: 1734962273, threadinfo=dbdb0000 task=c0382498) Stack: c03823ea c0103a51 dbdb0000 dbdb022c 00000000 c010417f dbdb0218 c0105b53 c03825af 000001dc 00011220 00011220 ef9a2a80 c0103c0e 00011220 ef9a2a80 00011220 00011220 ef9a2a80 00000220 ef9a2a80 0000007b c049007b ffffff00 Call Trace: [] syscall_call+0x7/0xb [] dump_stack+0x13/0x17 [] do_IRQ+0x80/0x82 [] common_interrupt+0x1a/0x20 [] kmem_cache_alloc+0xd/0x49 [] mempool_alloc+0x6a/0x26c [] ide_dma_exec_cmd+0x1f/0x23 [] ide_dma_start+0x21/0x2d [] __ide_do_rw_disk+0x344/0x501 [] autoremove_wake_function+0x0/0x37 [] cfq_set_request+0x12c/0x58a [] mempool_alloc+0x6a/0x26c [] cfq_set_request+0x0/0x58a [] elv_set_request+0x14/0x23 [] get_request+0x1aa/0x5d2 [] elv_next_request+0x12/0x157 [] __make_request+0x120/0x662 [] cfq_add_crq_rb+0x86/0x93 [] mempool_alloc+0x6a/0x26c [] cfq_get_io_context+0x26/0x681 [] generic_make_request+0x94/0x23a [] schedule+0x31d/0x7b3 [] cfq_set_request+0x12c/0x58a [] autoremove_wake_function+0x0/0x37 [] submit_bio+0x46/0xcc [] autoremove_wake_function+0x0/0x37 [] bio_add_page+0x29/0x2f [] sync_page_io+0xa9/0xc6 [] write_disk_sb+0x6f/0xab [] sync_sbs+0x24/0x39 [] md_update_sb+0x78/0xf6 [] md_write_start+0x75/0x77 [] make_request+0x157/0x50a [raid1] [] mempool_alloc+0x6a/0x26c [] generic_make_request+0x94/0x23a [] bio_alloc_bioset+0xf6/0x1a7 [] bio_clone+0xa0/0xb1 [] autoremove_wake_function+0x0/0x37 [] __clone_and_map+0xb3/0x328 [dm_mod] [] mempool_alloc+0x6a/0x26c [] __delay+0x9/0xa [] autoremove_wake_function+0x0/0x37 [] __split_bio+0xcf/0x111 [dm_mod] [] dm_request+0x79/0x8e [dm_mod] [] generic_make_request+0x94/0x23a [] ide_build_sglist+0x24/0x9c [] autoremove_wake_function+0x0/0x37 [] submit_bio+0x46/0xcc [] autoremove_wake_function+0x0/0x37 [] bio_add_page+0x29/0x2f [] _pagebuf_ioapply+0x180/0x2ee [xfs] [] pagebuf_iorequest+0x30/0x132 [xfs] [] default_wake_function+0x0/0xc [] xlog_bdstrat_cb+0x41/0x45 [xfs] [] xlog_sync+0x282/0x622 [xfs] [] xfs_trans_log_buf+0x51/0x78 [xfs] [] xlog_state_release_iclog+0x13/0x21c [xfs] [] xfs_alloc_update+0x41/0xe8 [xfs] [] xlog_state_sync+0x273/0x754 [xfs] [] xfs_btree_del_cursor+0x21/0x4d [xfs] [] xfs_alloc_search_busy+0x19b/0x2e2 [xfs] [] xfs_trans_log_buf+0x51/0x78 [xfs] [] xfs_alloc_ag_vextent+0xcc/0xe5 [xfs] [] xfs_alloc_vextent+0x3e9/0x571 [xfs] [] xfs_bmap_alloc+0x1154/0x18f7 [xfs] [] xfs_bmap_add_extent_hole_delay+0x11f/0x4f8 [xfs] [] cfq_set_request+0x12c/0x58a [] xfs_bmbt_get_state+0x13/0x1c [xfs] [] xfs_bmapi+0x6e9/0x1601 [xfs] [] xfs_btree_check_lblock+0x75/0x1a2 [xfs] [] xfs_btree_read_bufl+0xac/0xc6 [xfs] [] xfs_bmbt_get_state+0x13/0x1c [xfs] [] xfs_dir2_grow_inode+0x100/0x42f [xfs] [] xfs_da_brelse+0xa2/0xad [xfs] [] xfs_dir2_node_addname_int+0x525/0x9c3 [xfs] [] xfs_dir2_node_addname+0x6d/0xc2 [xfs] [] xfs_dir2_createname+0xed/0x122 [xfs] [] 
xfs_dir2_createname+0x0/0x122 [xfs] [] xfs_create+0x465/0x6df [xfs] [] linvfs_mknod+0x279/0x45a [xfs] [] xfs_da_brelse+0xa2/0xad [xfs] [] avc_has_perm_noaudit+0x26/0xd1 [] avc_has_perm+0x4e/0x58 [] avc_has_perm+0x4e/0x58 [] vfs_create+0xd9/0x125 [] open_namei+0x565/0x619 [] selinux_file_permission+0xe0/0x152 [] filp_open+0x27/0x46 [] get_unused_fd+0x79/0x1d2 [] getname+0x87/0xc5 [] sys_open+0x31/0x5b [] syscall_call+0x7/0xb ======================= - -- - ----------------------------------------------------------------------- Alexander.Bergolth@wu-wien.ac.at Fax: +43-1-31336-906050 Zentrum fuer Informatikdienste - Wirtschaftsuniversitaet Wien - Austria -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.6 (GNU/Linux) Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org iD8DBQFC0/RLsYaksEkoAQMRAgysAJ9zxLcK9ISPq3bd1Fre9D8VqejK9QCeNsyT ko/q/+s93VWv1rkm1kVOa/A= =yp8b -----END PGP SIGNATURE----- From owner-linux-xfs@oss.sgi.com Tue Jul 12 18:09:35 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 12 Jul 2005 18:09:39 -0700 (PDT) Received: from localhost.localdomain (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6D19XH9021472 for ; Tue, 12 Jul 2005 18:09:34 -0700 Received: from localhost.localdomain (snap [127.0.0.1]) by localhost.localdomain (8.12.8/8.12.8) with ESMTP id j6D0dMFj002946; Wed, 13 Jul 2005 10:39:22 +1000 Received: (from fsgqa@localhost) by localhost.localdomain (8.12.8/8.12.8/Submit) id j6D0dLk1002945; Wed, 13 Jul 2005 10:39:21 +1000 Date: Wed, 13 Jul 2005 10:39:21 +1000 From: FSG QA Message-Id: <200507130039.j6D0dLk1002945@localhost.localdomain> To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 931456 929956 - add log debugging and tracing info X-archive-position: 5609 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: fsgqa@localhost.localdomain.sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 1763 Lines: 33 Add log debugging and tracing info. Did it on IRIX a while ago and now putting back to Linux. 
Date: Wed Jul 13 11:05:30 AEST 2005 Workarea: snap.melbourne.sgi.com:/home/fsgqa/qa/xfs-linux Inspected by: overby@sgi.com (irix changes) The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-kern/xfs-linux-melb Modid: xfs-linux-melb:xfs-kern:23155a xfsidbg.c - 1.277 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfsidbg.c.diff?r1=text&tr1=1.277&r2=text&tr2=1.276&f=h xfs_log.h - 1.73 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_log.h.diff?r1=text&tr1=1.73&r2=text&tr2=1.72&f=h xfs_log.c - 1.306 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_log.c.diff?r1=text&tr1=1.306&r2=text&tr2=1.305&f=h xfs_extfree_item.c - 1.60 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_extfree_item.c.diff?r1=text&tr1=1.60&r2=text&tr2=1.59&f=h xfs_buf_item.c - 1.152 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_buf_item.c.diff?r1=text&tr1=1.152&r2=text&tr2=1.151&f=h xfs_log_priv.h - 1.106 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_log_priv.h.diff?r1=text&tr1=1.106&r2=text&tr2=1.105&f=h xfs_inode_item.c - 1.119 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_inode_item.c.diff?r1=text&tr1=1.119&r2=text&tr2=1.118&f=h xfs_trans.c - 1.163 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_trans.c.diff?r1=text&tr1=1.163&r2=text&tr2=1.162&f=h xfs_trans.h - 1.130 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_trans.h.diff?r1=text&tr1=1.130&r2=text&tr2=1.129&f=h quota/xfs_dquot_item.c - 1.6 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/quota/xfs_dquot_item.c.diff?r1=text&tr1=1.6&r2=text&tr2=1.5&f=h From owner-linux-xfs@oss.sgi.com Tue Jul 12 18:49:06 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 12 Jul 2005 18:49:11 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6D1n5H9023471 for ; Tue, 12 Jul 2005 18:49:05 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA19233 for ; Wed, 13 Jul 2005 11:47:21 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6D1lOkt2986237 for ; Wed, 13 Jul 2005 11:47:25 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id j6D1eToe001275 for ; Wed, 13 Jul 2005 11:40:29 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id j6D1eTMR001273 for linux-xfs@oss.sgi.com; Wed, 13 Jul 2005 11:40:29 +1000 Date: Wed, 13 Jul 2005 11:40:28 +1000 From: Nathan Scott To: linux xfs mailing list Subject: Re: how to flush an XFS filesystem Message-ID: <20050713014028.GC980@frodo> References: <20050709091145.GA13108@cirrus.madduck.net> <20050710141254.A2904172@wobbly.melbourne.sgi.com> <20050710084345.GA11413@localhost.localdomain> <20050711081613.A2828633@wobbly.melbourne.sgi.com> <20050710224635.GA12333@localhost.localdomain> <20050711014827.GB829@frodo> <20050711072807.GA16354@localhost.localdomain> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20050711072807.GA16354@localhost.localdomain> User-Agent: Mutt/1.5.3i X-archive-position: 5610 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: 
linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 1395 Lines: 42 Hi Martin, On Mon, Jul 11, 2005 at 10:28:07AM +0300, martin f krafft wrote: > also sprach Nathan Scott [2005.07.11.0448 +0300]: > > Ah, OK thats more interesting then - can you describe the way in > > which the Grub menu file is changed? e.g. ... is a new inode > > created or is an existing one overwritten? is it written via > > write(2) or mmap? Is it using buffered or direct IO? etc. > > I am using sed to edit inplace and from what I know about sed, it > actually creates a new inode. I don't seem to be able to reproduce this - does the following recipe fail for you on your machine? Maybe your kernels a bit out of date? (what version was that again?) > mount | tail -1 /dev/sdb5 on /mnt/xfs0 type xfs (rw,rtdev=/dev/sdc1,logdev=/dev/sda11,uquota) > su [root@bruce xfstests]# [root@bruce xfstests]# xfs_freeze Usage: xfs_freeze -f | -u [root@bruce xfstests]# echo writeme > /mnt/xfs0/foo [root@bruce xfstests]# xfs_freeze -f /mnt/xfs0 [root@bruce xfstests]# xfs_freeze -u /mnt/xfs0 [root@bruce xfstests]# reboot -f Read from remote host bruce: Connection reset by peer Connection to bruce closed. $ ssh bruce -l fsgqa fsgqa has logged on pts/0 from sheila. > su [root@bruce fsgqa]# mount -o rw,rtdev=/dev/sdc1,logdev=/dev/sda11,uquota /dev/sdb5 /mnt/xfs0 [root@bruce fsgqa]# cat /mnt/xfs0/foo writeme [root@bruce fsgqa]# cheers. -- Nathan From owner-linux-xfs@oss.sgi.com Tue Jul 12 19:05:10 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 12 Jul 2005 19:05:17 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6D258H9024811 for ; Tue, 12 Jul 2005 19:05:09 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA19769; Wed, 13 Jul 2005 12:03:23 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6D23Pkt2985732; Wed, 13 Jul 2005 12:03:26 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id j6D1uToe001314; Wed, 13 Jul 2005 11:56:29 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id j6D1uQj9001312; Wed, 13 Jul 2005 11:56:26 +1000 Date: Wed, 13 Jul 2005 11:56:26 +1000 From: Nathan Scott To: Alexander Bergolth Cc: Joshua Baker-LePain , Steve Lord , Linux xfs mailing list Subject: Re: XFS, 4K stacks, and Red Hat Message-ID: <20050713015626.GD980@frodo> References: <42CD4D38.1090703@xfs.org> <20050708043740.GB1679@frodo> <42D3F44B.308@strike.wu-wien.ac.at> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <42D3F44B.308@strike.wu-wien.ac.at> User-Agent: Mutt/1.5.3i X-archive-position: 5611 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 1569 Lines: 43 On Tue, Jul 12, 2005 at 06:48:11PM +0200, Alexander Bergolth wrote: > On 07/08/2005 06:37 AM, Nathan Scott wrote: > >... > > As other cases pop up (with a reproducible test case please, and > > no stacking drivers in the way too :), we slowly iron them out.. 
^^^^^^^^^^^^^^^^^^^^^^^^^^ *cough* > I'm getting frequent stack overflows on one system, using xfs, lvm2, > sw-raid and libata but I don't know, if they are XFS-related. Hmmm - xfs on lvm on md on ide ...? Looks like its death by a thousand cuts.. thats the sort of case Steve keeps talking about. You will be able to crash using any filesystem doing this, eventually - and we haven't even got NFS in the picture here yet. ( Maybe you can do away with one of device mapper / MD here? ) > [] __ide_do_rw_disk+0x344/0x501 > [] __make_request+0x120/0x662 > [] cfq_set_request+0x12c/0x58a > [] md_update_sb+0x78/0xf6 > [] md_write_start+0x75/0x77 > [] make_request+0x157/0x50a [raid1] > [] bio_clone+0xa0/0xb1 > [] dm_request+0x79/0x8e [dm_mod] > [] generic_make_request+0x94/0x23a > [] _pagebuf_ioapply+0x180/0x2ee [xfs] > [] pagebuf_iorequest+0x30/0x132 [xfs] > [] xlog_sync+0x282/0x622 [xfs] > [] xfs_dir2_node_addname_int+0x525/0x9c3 [xfs] > [] xfs_dir2_node_addname+0x6d/0xc2 [xfs] > [] xfs_dir2_createname+0x0/0x122 [xfs] > [] xfs_create+0x465/0x6df [xfs] > [] linvfs_mknod+0x279/0x45a [xfs] cheers. -- Nathan From owner-linux-xfs@oss.sgi.com Tue Jul 12 19:14:02 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 12 Jul 2005 19:14:04 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6D2E0H9025725 for ; Tue, 12 Jul 2005 19:14:01 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id MAA19942 for ; Wed, 13 Jul 2005 12:12:17 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6D2CKkt2984756 for ; Wed, 13 Jul 2005 12:12:20 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id j6D25Poe001362 for ; Wed, 13 Jul 2005 12:05:25 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id j6D25PKs001360 for linux-xfs@oss.sgi.com; Wed, 13 Jul 2005 12:05:25 +1000 Date: Wed, 13 Jul 2005 12:05:25 +1000 From: Nathan Scott To: linux xfs mailing list Subject: Re: how to flush an XFS filesystem Message-ID: <20050713020524.GE980@frodo> References: <20050709091145.GA13108@cirrus.madduck.net> <20050710141254.A2904172@wobbly.melbourne.sgi.com> <20050710084345.GA11413@localhost.localdomain> <20050711081613.A2828633@wobbly.melbourne.sgi.com> <20050710224635.GA12333@localhost.localdomain> <20050711014827.GB829@frodo> <20050711072807.GA16354@localhost.localdomain> <20050713014028.GC980@frodo> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20050713014028.GC980@frodo> User-Agent: Mutt/1.5.3i X-archive-position: 5613 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 1211 Lines: 38 On Wed, Jul 13, 2005 at 11:40:28AM +1000, Nathan Scott wrote: > I don't seem to be able to reproduce this - does the following > recipe fail for you on your machine? Maybe your kernels a bit > out of date? (what version was that again?) > .. Here's another recipe - this time without any possibility for a log recovery happening before the read... 
# echo writemetoo > /mnt/xfs0/foo2 && ls -li /mnt/xfs0/foo2 && xfs_freeze -f /mnt/xfs0 && xfs_freeze -u /mnt/xfs0 && reboot -f 133 -rw-r--r-- 1 root root 11 Jul 13 11:40 /mnt/xfs0/foo2 ead from remote host bruce: Connection reset by peer Connection to bruce closed. [root@bruce fsgqa]# xfs_db -x /dev/sdb5 xfs_db> inode 133 xfs_db> p core.magic = 0x494e core.mode = 0100644 ... core.gen = 0 next_unlinked = null u.bmx[0] = [startoff,startblock,blockcount,extentflag] 0:[0,16,1,0] xfs_db> addr u.bmx[0].startblock xfs_db> type text xfs_db> p 000: 77 72 69 74 65 6d 65 74 6f 6f 0a 00 00 00 00 00 writemetoo...... 010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ So, its definately all there on disk after a freeze... cheers. -- Nathan From owner-linux-xfs@oss.sgi.com Tue Jul 12 19:13:46 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 12 Jul 2005 19:13:50 -0700 (PDT) Received: from mx1.suse.de (mx1.suse.de [195.135.220.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6D2DjH9025652 for ; Tue, 12 Jul 2005 19:13:46 -0700 Received: from Relay2.suse.de (mail2.suse.de [195.135.221.8]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.suse.de (Postfix) with ESMTP id AB56EF14E; Wed, 13 Jul 2005 04:12:00 +0200 (CEST) To: Nathan Scott Cc: linux-xfs@oss.sgi.com, axboe@suse.de Subject: Re: XFS, 4K stacks, and Red Hat References: <42CD4D38.1090703@xfs.org> <20050708043740.GB1679@frodo> <42D3F44B.308@strike.wu-wien.ac.at> <20050713015626.GD980@frodo> From: Andi Kleen Date: 13 Jul 2005 04:12:00 +0200 In-Reply-To: <20050713015626.GD980@frodo> Message-ID: User-Agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-archive-position: 5612 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ak@suse.de Precedence: bulk X-list: linux-xfs Content-Length: 1086 Lines: 31 Nathan Scott writes: > On Tue, Jul 12, 2005 at 06:48:11PM +0200, Alexander Bergolth wrote: > > On 07/08/2005 06:37 AM, Nathan Scott wrote: > > >... > > > As other cases pop up (with a reproducible test case please, and > > > no stacking drivers in the way too :), we slowly iron them out.. > ^^^^^^^^^^^^^^^^^^^^^^^^^^ > > *cough* > > > I'm getting frequent stack overflows on one system, using xfs, lvm2, > > sw-raid and libata but I don't know, if they are XFS-related. > > Hmmm - xfs on lvm on md on ide ...? Looks like its death by > a thousand cuts.. thats the sort of case Steve keeps talking > about. You will be able to crash using any filesystem doing > this, eventually - and we haven't even got NFS in the picture > here yet. Eventually even 8k stack systems might run into problems. A generic way to solve this would be to let the block layer who calls into the various stacking layers check how much stack is left first and when it is too low push the work to another thread using a workqueue. Jens, do you think that would be feasible? 
-Andi From owner-linux-xfs@oss.sgi.com Tue Jul 12 20:18:16 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 12 Jul 2005 20:18:19 -0700 (PDT) Received: from relay02.roc.ny.frontiernet.net ([66.133.182.165]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6D3IFH9029106 for ; Tue, 12 Jul 2005 20:18:15 -0700 Received: from filter02.roc.ny.frontiernet.net (filter02.roc.ny.frontiernet.net [66.133.183.69]) by relay02.roc.ny.frontiernet.net (Postfix) with ESMTP id 4FB763715DC; Wed, 13 Jul 2005 03:16:23 +0000 (UTC) Received: from relay02.roc.ny.frontiernet.net ([66.133.182.165]) by filter02.roc.ny.frontiernet.net (filter02.roc.ny.frontiernet.net [66.133.183.69]) (amavisd-new, port 10024) with LMTP id 17608-09-75; Wed, 13 Jul 2005 03:16:23 +0000 (UTC) Received: from [192.168.1.100] (67-137-96-87.dsl2.brv.mn.frontiernet.net [67.137.96.87]) by relay02.roc.ny.frontiernet.net (Postfix) with ESMTP id 95F663715CD; Wed, 13 Jul 2005 03:16:17 +0000 (UTC) Message-ID: <42D48780.2030500@xfs.org> Date: Tue, 12 Jul 2005 22:16:16 -0500 From: Steve Lord User-Agent: Mozilla Thunderbird 1.0.2-1.3.3 (X11/20050513) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Andi Kleen Cc: Nathan Scott , linux-xfs@oss.sgi.com, axboe@suse.de Subject: Re: XFS, 4K stacks, and Red Hat References: <42CD4D38.1090703@xfs.org> <20050708043740.GB1679@frodo> <42D3F44B.308@strike.wu-wien.ac.at> <20050713015626.GD980@frodo> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 5614 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: lord@xfs.org Precedence: bulk X-list: linux-xfs Content-Length: 877 Lines: 27 Andi Kleen wrote: > > Eventually even 8k stack systems might run into problems. > > A generic way to solve this would be to let the block layer > who calls into the various stacking layers check how much stack is left > first and when it is too low push the work to another thread using > a workqueue. > > Jens, do you think that would be feasible? > > -Andi > Quick, before Adrian Bunk gets his patch to completely kill 8K stacks into Linus's tree! In a previous life I actually had to resort to allocating a chunk of memory, linking it into the stack, then carrying on down the call chain (not on linux). The memory was freed on the way up the stack again. I am not saying that would be a viable solution, but there needs to be something done about stack overflow and nested subsystems, before someone tries iscsi over IPV6 or something other bizzare combo. 
Steve From owner-linux-xfs@oss.sgi.com Tue Jul 12 21:12:26 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 12 Jul 2005 21:12:29 -0700 (PDT) Received: from mx2.suse.de (cantor2.suse.de [195.135.220.15]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6D4CPH9003421 for ; Tue, 12 Jul 2005 21:12:26 -0700 Received: from Relay2.suse.de (mail2.suse.de [195.135.221.8]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx2.suse.de (Postfix) with ESMTP id 0C6781DA9C; Wed, 13 Jul 2005 06:10:42 +0200 (CEST) Date: Wed, 13 Jul 2005 06:10:41 +0200 From: Andi Kleen To: Steve Lord Cc: Andi Kleen , Nathan Scott , linux-xfs@oss.sgi.com, axboe@suse.de Subject: Re: XFS, 4K stacks, and Red Hat Message-ID: <20050713041041.GV23737@wotan.suse.de> References: <42CD4D38.1090703@xfs.org> <20050708043740.GB1679@frodo> <42D3F44B.308@strike.wu-wien.ac.at> <20050713015626.GD980@frodo> <42D48780.2030500@xfs.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <42D48780.2030500@xfs.org> X-archive-position: 5615 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ak@suse.de Precedence: bulk X-list: linux-xfs Content-Length: 921 Lines: 19 > In a previous life I actually had to resort to allocating a chunk of > memory, linking it into the stack, then carrying on down the call > chain (not on linux). The memory was freed on the way up the stack > again. I am not saying that would be a viable solution, but there needs > to be something done about stack overflow and nested subsystems, before > someone tries iscsi over IPV6 or something other bizzare combo. ISCSI over something would be difficult again because that layering is invisible to the block layer. Maybe the iscsi block driver would need to declare how much stack it needs or do similar checks by itself. At least for the network driver interface the technique doesn't really work because blocking is not allowed at this point, so it would need to be higher level. BTW I doubt IPv6 uses much more stack than IPv4. But e.g. Infiniband is probably pretty bad when you run it below it. -Andi From owner-linux-xfs@oss.sgi.com Tue Jul 12 21:20:20 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 12 Jul 2005 21:20:23 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6D4KIH9004483 for ; Tue, 12 Jul 2005 21:20:19 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA22865; Wed, 13 Jul 2005 14:18:33 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16302) id B54D449B4E72; Wed, 13 Jul 2005 14:27:19 +1000 (EST) To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: PARTIAL TAKE 939444 - document grpid Message-Id: <20050713042719.B54D449B4E72@chook.melbourne.sgi.com> Date: Wed, 13 Jul 2005 14:27:19 +1000 (EST) From: nathans@sgi.com (Nathan Scott) X-archive-position: 5616 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 665 Lines: 16 Document the grpid mount option, add it to docs-update patch too. 
Date: Wed Jul 13 14:11:53 AEST 2005 Workarea: chook.melbourne.sgi.com:/build/nathans/2.6.x-xfs Inspected by: nathans The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb Modid: 2.6.x-xfs-melb:linux:23160a Documentation/filesystems/xfs.txt - 1.12 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/Documentation/filesystems/xfs.txt.diff?r1=text&tr1=1.12&r2=text&tr2=1.11&f=h split-patches/docs-update - 1.7 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/split-patches/docs-update.diff?r1=text&tr1=1.7&r2=text&tr2=1.6&f=h From owner-linux-xfs@oss.sgi.com Tue Jul 12 21:26:57 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 12 Jul 2005 21:27:00 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6D4QtH9005110 for ; Tue, 12 Jul 2005 21:26:56 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA22966 for ; Wed, 13 Jul 2005 14:25:12 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16302) id 67B9B49B4E72; Wed, 13 Jul 2005 14:33:59 +1000 (EST) To: linux-xfs@oss.sgi.com Subject: TAKE 907752 - fix iomap kdb command Message-Id: <20050713043359.67B9B49B4E72@chook.melbourne.sgi.com> Date: Wed, 13 Jul 2005 14:33:59 +1000 (EST) From: nathans@sgi.com (Nathan Scott) X-archive-position: 5617 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 414 Lines: 14 Fix iomap kdb command. Date: Wed Jul 13 14:24:55 AEST 2005 Workarea: chook.melbourne.sgi.com:/build/nathans/xfs-linux Inspected by: hch The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-kern/xfs-linux-melb Modid: xfs-linux-melb:xfs-kern:23161a xfsidbg.c - 1.278 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfsidbg.c.diff?r1=text&tr1=1.278&r2=text&tr2=1.277&f=h From owner-linux-xfs@oss.sgi.com Tue Jul 12 21:28:27 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 12 Jul 2005 21:28:32 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6D4SPH9005538 for ; Tue, 12 Jul 2005 21:28:26 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA23026; Wed, 13 Jul 2005 14:26:41 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16302) id 2915849B4E72; Wed, 13 Jul 2005 14:35:27 +1000 (EST) To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 939444 - grpid/nogrpid Message-Id: <20050713043527.2915849B4E72@chook.melbourne.sgi.com> Date: Wed, 13 Jul 2005 14:35:27 +1000 (EST) From: nathans@sgi.com (Nathan Scott) X-archive-position: 5618 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 471 Lines: 14 Add in grpid/nogrpid mount option parsing, actual code was always there.. 
Date: Wed Jul 13 14:26:27 AEST 2005 Workarea: chook.melbourne.sgi.com:/build/nathans/xfs-linux Inspected by: hch The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-kern/xfs-linux-melb Modid: xfs-linux-melb:xfs-kern:23162a xfs_vfsops.c - 1.470 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_vfsops.c.diff?r1=text&tr1=1.470&r2=text&tr2=1.469&f=h From owner-linux-xfs@oss.sgi.com Tue Jul 12 21:29:40 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 12 Jul 2005 21:29:42 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6D4TcH9005957 for ; Tue, 12 Jul 2005 21:29:39 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA23077 for ; Wed, 13 Jul 2005 14:27:55 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16302) id 77BCD49B4E74; Wed, 13 Jul 2005 14:36:42 +1000 (EST) To: linux-xfs@oss.sgi.com Subject: TAKE 907752 - remove repeated debug msgs Message-Id: <20050713043642.77BCD49B4E74@chook.melbourne.sgi.com> Date: Wed, 13 Jul 2005 14:36:42 +1000 (EST) From: nathans@sgi.com (Nathan Scott) X-archive-position: 5619 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 440 Lines: 14 Remove extraneous quotacheck diagnostics. Date: Wed Jul 13 14:27:33 AEST 2005 Workarea: chook.melbourne.sgi.com:/build/nathans/xfs-linux Inspected by: hch The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-kern/xfs-linux-melb Modid: xfs-linux-melb:xfs-kern:23163a quota/xfs_qm.c - 1.24 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/quota/xfs_qm.c.diff?r1=text&tr1=1.24&r2=text&tr2=1.23&f=h From owner-linux-xfs@oss.sgi.com Wed Jul 13 06:43:09 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 13 Jul 2005 06:43:18 -0700 (PDT) Received: from smtp-2.hut.fi (smtp-2.hut.fi [130.233.228.92]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6DDh1H9025132 for ; Wed, 13 Jul 2005 06:43:08 -0700 Received: from localhost (katosiko.hut.fi [130.233.228.115]) by smtp-2.hut.fi (8.12.10/8.12.10) with ESMTP id j6DDfG8Y025185 for ; Wed, 13 Jul 2005 16:41:16 +0300 Received: from smtp-2.hut.fi ([130.233.228.92]) by localhost (katosiko.hut.fi [130.233.228.115]) (amavisd-new, port 10024) with LMTP id 10302-01 for ; Wed, 13 Jul 2005 16:41:15 +0300 (EEST) Received: from wing.madduck.net (a130-233-4-144.debconf5.hut.fi [130.233.4.144]) by smtp-2.hut.fi (8.12.10/8.12.10) with ESMTP id j6DDdxte024951 for ; Wed, 13 Jul 2005 16:40:00 +0300 Received: by wing.madduck.net (Postfix, from userid 1000) id 55D449DC1BC; Wed, 13 Jul 2005 16:40:34 +0300 (EEST) Date: Wed, 13 Jul 2005 16:40:34 +0300 From: martin f krafft To: linux xfs mailing list Subject: Re: how to flush an XFS filesystem Message-ID: <20050713134034.GA6743@localhost.localdomain> Mail-Followup-To: linux xfs mailing list References: <20050713014028.GC980@frodo> <20050713020524.GE980@frodo> <20050709091145.GA13108@cirrus.madduck.net> <20050710141254.A2904172@wobbly.melbourne.sgi.com> <20050710084345.GA11413@localhost.localdomain> <20050711081613.A2828633@wobbly.melbourne.sgi.com> <20050710224635.GA12333@localhost.localdomain> <20050711014827.GB829@frodo> <20050711072807.GA16354@localhost.localdomain> <20050713014028.GC980@frodo> 
Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="ZPt4rx8FFjLCG7dd" Content-Disposition: inline In-Reply-To: <20050713020524.GE980@frodo> <20050713014028.GC980@frodo> X-OS: Debian GNU/Linux 3.1 kernel 2.6.11-wing i686 X-Motto: Keep the good times rollin' X-Subliminal-Message: debian/rules! X-Spamtrap: madduck.bogus@madduck.net User-Agent: Mutt/1.5.9i X-TKK-Virus-Scanned: by amavisd-new-2.1.2-hutcc at katosiko.hut.fi X-archive-position: 5620 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: madduck@madduck.net Precedence: bulk X-list: linux-xfs Content-Length: 1622 Lines: 51 --ZPt4rx8FFjLCG7dd Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable > [root@bruce fsgqa]# mount -o rw,rtdev=3D/dev/sdc1,logdev=3D/dev/sda11,uqu= ota /dev/sdb5 /mnt/xfs0 [...] > [root@bruce fsgqa]# xfs_db -x /dev/sdb5 Both of these are different from the way grub accesses it. In order for me to reproduce the problem, I had to get access to the partition before mounting it. It seems as if (a) the file gets written to extents on the disk, and (b) that the inode is written to the log, but the log is never flushed such that the metadata never make it to the filesystem. Mounting the filesystem on next reboot causes the log to be replayed and consistency to be restored. I assume xfs_db does the same. Grub, on the other hand, tries to get the file, but it does not exist in the directory (and grub cannot replay the log), so it fails to find it. --=20 martin; (greetings from the heart of the sun.) \____ echo mailto: !#^."<*>"|tr "<*> mailto:" net@madduck =20 invalid/expired pgp subkeys? use subkeys.pgp.net as keyserver! spamtraps: madduck.bogus@madduck.net =20 "doesn't he know who i think i am?" 
-- phil collins --ZPt4rx8FFjLCG7dd Content-Type: application/pgp-signature; name="signature.asc" Content-Description: Digital signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.1 (GNU/Linux) iD8DBQFC1RnSIgvIgzMMSnURAl3RAJ9eeP2NSFuI1lXEA2KqHD1f4KyGpQCfdztP KNXq5bSBGYuflAAGB0Vk4is= =tWEJ -----END PGP SIGNATURE----- --ZPt4rx8FFjLCG7dd-- From owner-linux-xfs@oss.sgi.com Wed Jul 13 07:20:25 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 13 Jul 2005 07:21:51 -0700 (PDT) Received: from mail.gmx.net (mail.gmx.net [213.165.64.20]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6DEKLH9028100 for ; Wed, 13 Jul 2005 07:20:24 -0700 Received: (qmail 18640 invoked by uid 0); 13 Jul 2005 14:18:37 -0000 Received: from 81.5.248.222 by www72.gmx.net with HTTP; Wed, 13 Jul 2005 16:18:37 +0200 (MEST) Date: Wed, 13 Jul 2005 16:18:37 +0200 (MEST) From: =?ISO-8859-1?Q?=22J=FCrgen_Moritz=22?= To: lawrencc@debian.org, lch@multimania.com, LeBlanc@mcc.ac.uk, lederer@francium.informatik.uni-bonn.de.sgi.com, lehors@mirsa.inria.fr, leisner@sdsp.mc.xerox.com, lemke@sun.COM, lendecke@math.uni-goettingen.de, Lepied@debian.org, lermen@elserv.ffm.fgan.de, ley@rz.uni-karlsruhe.de, lilo@openprojects.net, links-list@linuxfromscratch.org, linuxdev@karagee.com, Linux-LVM@Sistina.com, linux-ntfs-dev@lists.sourceforge.net, linux-ntfs-dev@lists.sourceforge.net, linux-xfs@oss.sgi.com, liw@iki.fi, ljlane@debian.org, L.McLoughlin@doc.ic.ac.uk, lmontel@mandrakesoft.com, lmoore@debian.org, lndshark@speakeasy.net, lord@cray.com, lowe@debian.org, loyer@ensta.fr, lrains@netcom.com, luferbu@fluidsignal.com, luisgh@debian.org, lukas@debian.org, lukas@kde.org, luke@research.canon.com.au, lukka@iki.fi, luther@debian.org, lvirden@cas.org, lynx-dev@sig.net, m0c87zK-007zXpC@futatsu.uts.amdahl.com, mac@melware.de, madler@alumni.caltech.edu, maggi@athena.polito.it, mah@everybody.org, mailinglist-request@some.where.sgi.com, mail-server@PENGUIN-LUST.mit.edu Cc: majordomo@linux.kernel.org, majordomo@mailman.xmission.com, majordomo@sig.net, majordomo@warbase.selwerd.nl, makar@phoenix.kharkov.ua, mancini@elecsrv.enst.fr.sgi.com, manome@itlb.te.noda.sut.ac.jp, man-pages@qa.debian.org, marc@CAM.ORG, marcel@mesa.nl, marciot@users.sourceforge.net, marc@PostImage.COM, marc@redhat.com, Marc@Synergytics.Com, Marcus.Brinkmann@ruhr-uni-bochum.de, marekm@i17linuxb.ists.pwr.wroc.pl.sgi.com, marek@saftsack.fs.uni.sgi.com, marillat@debian.org, markn@greenwoodsoftware.com, markn@ieee.org, markus@oberhumer.com, markus@openbsd.org, martin@cs.unc.edu, martin.quinson@ens-lyon.fr, Martin.Schulze@Linux.DE, martin@trcsun3.eas.asu.edu.sgi.com, martweb@gmx.net, massifr@tiscalinet.it, mat42b@spi.power.uni-essen.de, mat@colton.de, matloff@cs.ucdavis.edu, mattm@access.digex.net.sgi.com, matt@mafr.de, mawa@iname.com, max@thekompany.com, mbayer@zedat.fu-berlin.de, mbm@linux.com, mbp@humbug.org.au, mbp@sourcefrog.net, mc-devel@gnome.org, mci@owl.openwall.com, mckinstry@computer.org, mckinstry@debian.org, mdb@go2net.com MIME-Version: 1.0 Subject: =?ISO-8859-1?Q?HELP_with_a_picture_of_criminals_in_their_car.____bitte_um?= =?ISO-8859-1?Q?_hilfe_bei_bilderkennung,_BITTE_WEITERLEITEN_---PLEASE_FOR?= =?ISO-8859-1?Q?WARD?= X-Priority: 1 (Highest) X-Authenticated: #27237219 Message-ID: <1704.1121264317@www72.gmx.net> X-Mailer: WWW-Mail 1.6 (Global Message Exchange) X-Flags: 0001 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 8bit X-archive-position: 5621 X-ecartis-version: Ecartis v1.0.0 Sender: 
linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: juergen.moritz@gmx.at Precedence: bulk X-list: linux-xfs Status: RO Content-Length: 2225 Lines: 60 at http://www.8ung.at/juergen3/yvonne/ are 4 pictures taken by my person with the linkname "ALink" at one of those pictures is a carsign of a trailer maybe W... (vienna city/austria), not well readable entirely please help me identifying this carsign. maybe a picture manipulation software can make the picture sharper.. the photos (ALink) are taken april/mai 2005 in Vienna, district Ottakring, at the subwaystation "U3 Ottakring" In the car, a blue chrysler voyager maybe, were about 4-6 persons with darker color than austrian people, i assume they are from ex-yugoslavia or a country nearby greece. The car had a carsign on the rearside beginning with GF (GF stands for Gänserndorf - a district of the province Niederösterreich which border is with the east of province Vienna and to the country slovakia) the car and persons in it are criminals almost 100% and that is the reason i send this email to persons unknown to me. regards, jürgen PS: The former mail of me, where i warned of the persons with surname Haselmaier and Leitgeb is nonsense. Neither family Haselmaier nor Leitgeb have to do with the criminals I track as far as I know. auf http://www.8ung.at/juergen3/yvonne/ sind 4 bilder von mir unter dem linknamen ALink auf einem bild sieht man die nummerntafel eines anhängers wahrscheinlich W... (wien), verwackelt/unscharf bitte um hilfe dieses kennzeichen zu erkennen. vielleicht mit einem bildbearbeitungsprogramm korrektur möglich? ich habe die fotos april/mai 2005 in wien/ottakring gemacht bei der UBahnstation U3 ottakring und im auto habe ich ca. 4-6 menschen dunkler hautfarbe gesehen. wahrscheinlich serben oder einem land unfern griechenland. das auto, welches den anhänger zieht, ich glaube es ist ein chrysler voyager, hatte ein Autokennzeichen beginnend mit GF. vielleicht handelt es sich um verbrecher, deshalb zur besseren hilfe diese email an mir fremde. mfg jürgen -- GMX DSL = Maximale Leistung zum minimalen Preis! 
2000 MB nur 2,99, Flatrate ab 4,99 Euro/Monat: http://www.gmx.net/de/go/dsl From owner-linux-xfs@oss.sgi.com Wed Jul 13 07:45:01 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 13 Jul 2005 07:45:05 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6DEj0H9029578 for ; Wed, 13 Jul 2005 07:45:01 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.52 #1 (Red Hat Linux)) id 1DsiS9-0006pF-W7; Wed, 13 Jul 2005 15:43:14 +0100 Date: Wed, 13 Jul 2005 15:43:13 +0100 From: Christoph Hellwig To: Andi Kleen Cc: Steve Lord , Nathan Scott , linux-xfs@oss.sgi.com, axboe@suse.de Subject: Re: XFS, 4K stacks, and Red Hat Message-ID: <20050713144313.GD26025@infradead.org> References: <42CD4D38.1090703@xfs.org> <20050708043740.GB1679@frodo> <42D3F44B.308@strike.wu-wien.ac.at> <20050713015626.GD980@frodo> <42D48780.2030500@xfs.org> <20050713041041.GV23737@wotan.suse.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20050713041041.GV23737@wotan.suse.de> User-Agent: Mutt/1.4.2.1i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 5622 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: linux-xfs Content-Length: 809 Lines: 16 On Wed, Jul 13, 2005 at 06:10:41AM +0200, Andi Kleen wrote: > > In a previous life I actually had to resort to allocating a chunk of > > memory, linking it into the stack, then carrying on down the call > > chain (not on linux). The memory was freed on the way up the stack > > again. I am not saying that would be a viable solution, but there needs > > to be something done about stack overflow and nested subsystems, before > > someone tries iscsi over IPV6 or something other bizzare combo. > > ISCSI over something would be difficult again because that layering > is invisible to the block layer. Maybe the iscsi block driver would > need to declare how much stack it needs or do similar checks > by itself. That iscsi driver needs very little stack because it hands off all work to a helper thread. 
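The check-and-handoff Andi proposes (and which Christoph notes the iSCSI driver effectively does already by punting to a helper thread) might look roughly like the sketch below. Everything here is illustrative: stack_left(), submit_checked() and STACK_RESERVE are invented names, the free-stack estimate assumes the classic layout with thread_info at the bottom of the task's kernel stack, the workqueue calls use the current two-argument INIT_WORK() rather than the three-argument 2.6.12-era form, and generic_make_request() is the 2.6-era submission entry point (newer kernels spell it submit_bio_noacct()).

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/thread_info.h>
#include <linux/workqueue.h>

#define STACK_RESERVE	1536	/* arbitrary threshold for this sketch */

/*
 * Rough estimate of unused stack, assuming stacks grow down and
 * thread_info sits at the bottom of the task's kernel stack.
 */
static unsigned long stack_left(void)
{
	unsigned long sp = (unsigned long)&sp;

	return sp - (unsigned long)(current_thread_info() + 1);
}

struct deferred_bio {
	struct work_struct	work;
	struct bio		*bio;
};

/* Runs in the worker thread, i.e. on a fresh, nearly empty stack. */
static void deferred_submit(struct work_struct *work)
{
	struct deferred_bio *d = container_of(work, struct deferred_bio, work);

	generic_make_request(d->bio);
	kfree(d);
}

/* What a stacking driver would call instead of generic_make_request(). */
static void submit_checked(struct bio *bio)
{
	struct deferred_bio *d;

	if (stack_left() > STACK_RESERVE) {
		generic_make_request(bio);
		return;
	}

	d = kmalloc(sizeof(*d), GFP_NOIO);
	if (!d) {
		/* no memory: recurse anyway rather than lose the bio */
		generic_make_request(bio);
		return;
	}
	d->bio = bio;
	INIT_WORK(&d->work, deferred_submit);
	schedule_work(&d->work);
}

The check itself costs almost nothing; the open question in the thread is whether the extra context switch on the low-stack path is an acceptable price for dm-on-md-on-ide style stacks.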
From owner-linux-xfs@oss.sgi.com Wed Jul 13 08:03:01 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 13 Jul 2005 08:03:05 -0700 (PDT) Received: from mail00hq.adic.com ([63.81.117.10]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6DF31H9030764 for ; Wed, 13 Jul 2005 08:03:01 -0700 Received: from [172.16.82.67] ([172.16.82.67]) by mail00hq.adic.com with Microsoft SMTPSVC(5.0.2195.6713); Wed, 13 Jul 2005 08:01:18 -0700 Message-ID: <42D52CBD.1030404@xfs.org> Date: Wed, 13 Jul 2005 10:01:17 -0500 From: Steve Lord User-Agent: Mozilla Thunderbird 1.0.2-1.3.3 (X11/20050513) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Christoph Hellwig CC: Andi Kleen , Nathan Scott , linux-xfs@oss.sgi.com, axboe@suse.de Subject: Re: XFS, 4K stacks, and Red Hat References: <42CD4D38.1090703@xfs.org> <20050708043740.GB1679@frodo> <42D3F44B.308@strike.wu-wien.ac.at> <20050713015626.GD980@frodo> <42D48780.2030500@xfs.org> <20050713041041.GV23737@wotan.suse.de> <20050713144313.GD26025@infradead.org> In-Reply-To: <20050713144313.GD26025@infradead.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 13 Jul 2005 15:01:18.0710 (UTC) FILETIME=[B9488960:01C587BB] X-archive-position: 5623 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: lord@xfs.org Precedence: bulk X-list: linux-xfs Content-Length: 905 Lines: 25 Christoph Hellwig wrote: > On Wed, Jul 13, 2005 at 06:10:41AM +0200, Andi Kleen wrote: > >>>In a previous life I actually had to resort to allocating a chunk of >>>memory, linking it into the stack, then carrying on down the call >>>chain (not on linux). The memory was freed on the way up the stack >>>again. I am not saying that would be a viable solution, but there needs >>>to be something done about stack overflow and nested subsystems, before >>>someone tries iscsi over IPV6 or something other bizzare combo. >> >>ISCSI over something would be difficult again because that layering >>is invisible to the block layer. Maybe the iscsi block driver would >>need to declare how much stack it needs or do similar checks >>by itself. > > > That iscsi driver needs very little stack because it hands off all work > to a helper thread. > Because it was running out of stack otherwise? 
;-) Steve From owner-linux-xfs@oss.sgi.com Wed Jul 13 08:24:52 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 13 Jul 2005 08:25:01 -0700 (PDT) Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.199]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6DFOpH9003569 for ; Wed, 13 Jul 2005 08:24:51 -0700 Received: by wproxy.gmail.com with SMTP id i21so180562wra for ; Wed, 13 Jul 2005 08:23:09 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=NlP1nWAttp+GJ65gnARsps9ON7IOM8mGwmmnTv9DRyqdAss7hRCHuYhUzvGykIzmconi7pMUDR9fwUdinJyOep3P0Lwm0aCaV/ZlD17m22VR6qN8fUau1n3EzrEccjqhTPitgEbxk1+cpD7004I7wpg6SbLPQg8VmZHxJTSnlHg= Received: by 10.54.115.3 with SMTP id n3mr344843wrc; Wed, 13 Jul 2005 08:22:28 -0700 (PDT) Received: by 10.54.110.20 with HTTP; Wed, 13 Jul 2005 08:22:28 -0700 (PDT) Message-ID: <60868aed0507130822c2e9e97@mail.gmail.com> Date: Wed, 13 Jul 2005 18:22:28 +0300 From: Yura Pakhuchiy Reply-To: Yura Pakhuchiy To: Nathan Scott Subject: Re: XFS corruption on move from xscale to i686 Cc: linux-xfs@oss.sgi.com, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tibor@altlinux.ru, pakhuchiy@gmail.com In-Reply-To: <20050708042146.GA1679@frodo> Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline References: <1120756552.5298.10.camel@pc299.sam-solutions.net> <20050708042146.GA1679@frodo> Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j6DFOqH9003571 X-archive-position: 5624 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: pakhuchiy@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 766 Lines: 20 2005/7/8, Nathan Scott : > On Thu, Jul 07, 2005 at 08:15:52PM +0300, Yura Pakhuchiy wrote: > > Hi, > > > > I'm creadted XFS volume on 2.6.10 linux xscale/iq31244 box, then I > > copyied files on it and moved this hard drive to i686 machine. When I > > mounted it on i686, I found no files on it. I runned xfs_check, here is > > output: > > Someone else was doing this awhile back, and also had issues. > Their trouble seemed to be related to xscale gcc miscompiling > parts of XFS - search the linux-xfs archives for details. I found patch by Greg Ungreger to fix this problem, but why it's still not in mainline? Or it's a gcc problem and should be fixed by gcc folks? BTW, my kernel on xscale is compiled using gcc 3.4.3. 
Thanks, Yura From owner-linux-xfs@oss.sgi.com Wed Jul 13 13:51:19 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 13 Jul 2005 13:51:25 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6DKpIH9028416 for ; Wed, 13 Jul 2005 13:51:19 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id GAA13540 for ; Thu, 14 Jul 2005 06:49:33 +1000 Received: from wobbly.melbourne.sgi.com (localhost [127.0.0.1]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6DKnbkt3005945 for ; Thu, 14 Jul 2005 06:49:37 +1000 (EST) Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id j6DKnaBD3009384 for linux-xfs@oss.sgi.com; Thu, 14 Jul 2005 06:49:36 +1000 (EST) Date: Thu, 14 Jul 2005 06:49:36 +1000 From: Nathan Scott To: linux xfs mailing list Subject: Re: how to flush an XFS filesystem Message-ID: <20050714064936.A3008365@wobbly.melbourne.sgi.com> References: <20050709091145.GA13108@cirrus.madduck.net> <20050710141254.A2904172@wobbly.melbourne.sgi.com> <20050710084345.GA11413@localhost.localdomain> <20050711081613.A2828633@wobbly.melbourne.sgi.com> <20050710224635.GA12333@localhost.localdomain> <20050711014827.GB829@frodo> <20050711072807.GA16354@localhost.localdomain> <20050713014028.GC980@frodo> <20050713020524.GE980@frodo> <20050713134034.GA6743@localhost.localdomain> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: <20050713134034.GA6743@localhost.localdomain>; from madduck@madduck.net on Wed, Jul 13, 2005 at 04:40:34PM +0300 X-archive-position: 5626 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 1101 Lines: 36 On Wed, Jul 13, 2005 at 04:40:34PM +0300, martin f krafft wrote: > > [root@bruce fsgqa]# mount -o rw,rtdev=/dev/sdc1,logdev=/dev/sda11,uquota /dev/sdb5 /mnt/xfs0 > > [...] > > > [root@bruce fsgqa]# xfs_db -x /dev/sdb5 > > Both of these are different from the way grub accesses it. In order > for me to reproduce the problem, I had to get access to the > partition before mounting it. The second case (xfs_db, above) is doing exactly that. > It seems as if (a) the file gets > written to extents on the disk, and (b) that the inode is written to > the log, but the log is never flushed such that the metadata never > make it to the filesystem. It doesn't seem like that to me. > Mounting the filesystem on next reboot causes the log to be replayed > and consistency to be restored. I assume xfs_db does the same. No, it doesn't. Log replay only happens during mount. > Grub, on the other hand, tries to get the file, but it does not > exist in the directory (and grub cannot replay the log), so it fails > to find it. xfs_db would have the same problem, but doesn't... cheers. 
-- Nathan From owner-linux-xfs@oss.sgi.com Wed Jul 13 17:31:38 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 13 Jul 2005 17:31:42 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6E0VaH9009735 for ; Wed, 13 Jul 2005 17:31:37 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA18668; Thu, 14 Jul 2005 10:29:47 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6E0Tnkt3015053; Thu, 14 Jul 2005 10:29:49 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id j6E0Mo9e001140; Thu, 14 Jul 2005 10:22:50 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id j6E0MkZM001138; Thu, 14 Jul 2005 10:22:46 +1000 Date: Thu, 14 Jul 2005 10:22:46 +1000 From: Nathan Scott To: Daniel Walker , Ingo Molnar , Steve Lord Cc: linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: RT and XFS Message-ID: <20050714002246.GA937@frodo> References: <1121209293.26644.8.camel@dhcp153.mvista.com> <20050713002556.GA980@frodo> <20050713064739.GD12661@elte.hu> <1121273158.13259.9.camel@c-67-188-6-232.hsd1.ca.comcast.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1121273158.13259.9.camel@c-67-188-6-232.hsd1.ca.comcast.net> User-Agent: Mutt/1.5.3i X-archive-position: 5627 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 2200 Lines: 48 Hi there, On Wed, Jul 13, 2005 at 09:45:58AM -0700, Daniel Walker wrote: > On Wed, 2005-07-13 at 08:47 +0200, Ingo Molnar wrote: > > > > downgrade_write() wasnt the main problem - the main problem was that for > > PREEMPT_RT i implemented 'strict' semaphores, which are not identical to > > vanilla kernel semaphores. The thing that seemed to impact XFS the most > > is the 'acquirer thread has to release the lock' rule of strict > > semaphores. Both the XFS logging code and the XFS IO completion code > > seems to release locks in a different context from where the acquire > > happened. It's of course valid upstream behavior, but without these > > extra rules it's hard to do sane priority inheritance. (who do you boost > > if you dont really know who 'owns' the lock?) It might make sense to > > introduce some sort of sem_pass_to(new_owner) interface? For now i > > introduced a compat type, which lets those semaphores fall back to the > > vanilla implementation. Hmm, I'm not aware of anywhere in XFS where we do that. From talking to some colleagues here, they're claiming that we can't be doing that since it'd trip an assert in the IRIX mrlock code. > There's a lot of code like this in there .. I've seen some that down() > in process contex, and up() in interrupt contex which is weird .. But > those aren't major features, just little drivers. XFS is pretty major > feature. > > Nathan, does XFS need this property or could we convert it to > synchronize the locking (with ease?)? I'm not yet sure in what situations we are doing this, so can't really say. 
It'd be interesting to see an implementation of the downgrade_write functionality and then a specific case where the above locking behaviour happens ... and I'd then be able to say how tricky that would be to resolve. Steve, are you aware of situations where we unlock in a different thread to where we acquired the lock? It'd surprise me, as we're holding these things for as short a time as possible - afaict the transactions always ilock, copy delta to iclog, pin, and unlock, no? (all in the same thread). I can't see the iolock being used in this way anywhere either... you? cheers. -- Nathan From owner-linux-xfs@oss.sgi.com Wed Jul 13 18:22:23 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 13 Jul 2005 18:22:28 -0700 (PDT) Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.207]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6E1MNH9012444 for ; Wed, 13 Jul 2005 18:22:23 -0700 Received: by wproxy.gmail.com with SMTP id i1so315103wra for ; Wed, 13 Jul 2005 18:20:38 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:mime-version:content-type:content-transfer-encoding:content-disposition; b=GxAv0FbRd06mrqpLQZKNMQtnWyhtyfhs8JH1/jaRqWLQ74IZdMEjl+QIO6ZPVuc1ZmGbsl0SXoC59s7CZgkka5zrnq5gJbKWjxexrpzG2WnVQarwf4BEfB09/qLCMvZq7zkurFkvrKFSzUrLNryxldv6v0leEUoMRZ1klNLJIEM= Received: by 10.54.34.20 with SMTP id h20mr522050wrh; Wed, 13 Jul 2005 18:19:35 -0700 (PDT) Received: by 10.54.79.1 with HTTP; Wed, 13 Jul 2005 18:19:35 -0700 (PDT) Message-ID: Date: Thu, 14 Jul 2005 09:19:35 +0800 From: Mikore Li Reply-To: Mikore Li To: linux-xfs@oss.sgi.com Subject: Search for mkfs.xfs that run in Xscale. Cc: nathans@sgi.com, pakhuchiy@gmail.com Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j6E1MNH9012446 X-archive-position: 5628 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: mikore.li@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 346 Lines: 14 Has anyone ported follow applications to cross-compiling environment for arm based Xscale? ftp://oss.sgi.com/projects/xfs/download/Release-1.3.1/cmd_tars? If you have, could you share with me? I have a Xscale board and the kernel support xfs-1.2.0. But I can't find the tool like mkfs.xfs.. for my Xscale board. Thanks & Best Regards, Q.L . 
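Back on the "how to flush an XFS filesystem" thread: the freeze/thaw cycle Nathan scripts with xfs_freeze(8) can also be driven from a program. Below is a minimal, hypothetical helper (not part of xfsprogs, root required); it uses the generic FIFREEZE/FITHAW ioctls from <linux/fs.h> and glibc's syncfs(), both of which post-date this thread, while the 2.6.12-era spelling was the XFS-specific XFS_IOC_FREEZE/XFS_IOC_THAW ioctls and a plain sync(). Whether a freeze guarantees that every directory update is then visible without log replay is exactly the point still in dispute between Nathan and Martin above.

#define _GNU_SOURCE		/* for syncfs() */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>		/* FIFREEZE, FITHAW */

int main(int argc, char **argv)
{
	int fd, level = 1;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	syncfs(fd);			/* push dirty pages first (sync() on older systems) */

	if (ioctl(fd, FIFREEZE, &level) < 0) {	/* quiesce: data and log forced to disk */
		perror("FIFREEZE");
		return 1;
	}
	if (ioctl(fd, FITHAW, &level) < 0) {	/* immediately resume normal operation */
		perror("FITHAW");
		return 1;
	}

	close(fd);
	return 0;
}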
From owner-linux-xfs@oss.sgi.com Wed Jul 13 18:29:38 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 13 Jul 2005 18:29:41 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6E1TbH9013033 for ; Wed, 13 Jul 2005 18:29:38 -0700 Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA19981; Thu, 14 Jul 2005 11:27:47 +1000 Received: from frodo.melbourne.sgi.com (root@frodo.melbourne.sgi.com [134.14.55.153]) by wobbly.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6E1Rokt3009993; Thu, 14 Jul 2005 11:27:50 +1000 (EST) Received: from frodo.melbourne.sgi.com (nathans@localhost [127.0.0.1]) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) with ESMTP id j6E1Kp9e001295; Thu, 14 Jul 2005 11:20:52 +1000 Received: (from nathans@localhost) by frodo.melbourne.sgi.com (8.12.9/8.12.9/Debian-3) id j6E1Km0A001293; Thu, 14 Jul 2005 11:20:48 +1000 Date: Thu, 14 Jul 2005 11:20:48 +1000 From: Nathan Scott To: Yura Pakhuchiy Cc: linux-xfs@oss.sgi.com, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tibor@altlinux.ru Subject: Re: XFS corruption on move from xscale to i686 Message-ID: <20050714012048.GB937@frodo> References: <1120756552.5298.10.camel@pc299.sam-solutions.net> <20050708042146.GA1679@frodo> <60868aed0507130822c2e9e97@mail.gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <60868aed0507130822c2e9e97@mail.gmail.com> User-Agent: Mutt/1.5.3i X-archive-position: 5629 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 393 Lines: 13 On Wed, Jul 13, 2005 at 06:22:28PM +0300, Yura Pakhuchiy wrote: > I found patch by Greg Ungreger to fix this problem, but why it's still > not in mainline? Or it's a gcc problem and should be fixed by gcc folks? Yes, IIRC the patch was incorrect for other platforms, and it sure looked like an arm-specific gcc problem (this was ages back, so perhaps its fixed by now). cheers. 
-- Nathan From owner-linux-xfs@oss.sgi.com Wed Jul 13 20:52:20 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 13 Jul 2005 20:52:23 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6E3qIH9023558 for ; Wed, 13 Jul 2005 20:52:19 -0700 Received: from mumble.melbourne.sgi.com (mumble.melbourne.sgi.com [134.14.55.227]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id NAA22621; Thu, 14 Jul 2005 13:50:31 +1000 Received: from mumble.melbourne.sgi.com (localhost [127.0.0.1]) by mumble.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6E3oSXf247039; Thu, 14 Jul 2005 13:50:29 +1000 (EST) Received: (from dgc@localhost) by mumble.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id j6E3oOft247180; Thu, 14 Jul 2005 13:50:24 +1000 (EST) Date: Thu, 14 Jul 2005 13:50:23 +1000 From: Dave Chinner To: Nathan Scott Cc: Daniel Walker , Ingo Molnar , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: RT and XFS Message-ID: <20050714135023.E241419@melbourne.sgi.com> References: <1121209293.26644.8.camel@dhcp153.mvista.com> <20050713002556.GA980@frodo> <20050713064739.GD12661@elte.hu> <1121273158.13259.9.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050714002246.GA937@frodo> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5.1i In-Reply-To: <20050714002246.GA937@frodo>; from nathans@sgi.com on Thu, Jul 14, 2005 at 10:22:46AM +1000 X-archive-position: 5630 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: dgc@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 2650 Lines: 55 On Thu, Jul 14, 2005 at 10:22:46AM +1000, Nathan Scott wrote: > Hi there, > > On Wed, Jul 13, 2005 at 09:45:58AM -0700, Daniel Walker wrote: > > On Wed, 2005-07-13 at 08:47 +0200, Ingo Molnar wrote: > > > > > > downgrade_write() wasnt the main problem - the main problem was that for > > > PREEMPT_RT i implemented 'strict' semaphores, which are not identical to > > > vanilla kernel semaphores. The thing that seemed to impact XFS the most > > > is the 'acquirer thread has to release the lock' rule of strict > > > semaphores. Both the XFS logging code and the XFS IO completion code > > > seems to release locks in a different context from where the acquire > > > happened. It's of course valid upstream behavior, but without these > > > extra rules it's hard to do sane priority inheritance. (who do you boost > > > if you dont really know who 'owns' the lock?) It might make sense to > > > introduce some sort of sem_pass_to(new_owner) interface? For now i > > > introduced a compat type, which lets those semaphores fall back to the > > > vanilla implementation. > > Hmm, I'm not aware of anywhere in XFS where we do that. From talking > to some colleagues here, they're claiming that we can't be doing that > since it'd trip an assert in the IRIX mrlock code. Now that I've read the thread, I see it's not mrlocks that is the issue with unlocking in a different context - it's semaphores. All the pagebuf synchronisation is done with a semaphore because it's held across the I/O and it's _most definitely_ released in a different context when doing async I/O. Just about all metadata I/O is async because once the transaction has been logged to disk we don't need to write these buffers out synchronously. 
Not to mention the log I/O completion unlocks the buffers in a transaction in a different context as well. The whole point of using a semaphore in the pagebuf is because there is no tracking of who "owns" the lock so we can actually release it in a different context. Semaphores were invented for this purpose, and we use them in the way they were intended. ;) Realistically, I seriously doubt the need for any sort of rt changes to these semaphores. They can be held for indeterminant periods of time potentially across multiple disk I/Os (e.g. when held locked in a transaction that requires more metadata to be read in from disk to make progress). Hence there is no really no point in making them RT aware because if you end up waiting on one of them you can forget about pretty much any RT guarantee that you've ever given.... Cheers, Dave. -- Dave Chinner R&D Software Engineer SGI Australian Software Group From owner-linux-xfs@oss.sgi.com Wed Jul 13 21:12:21 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 13 Jul 2005 21:12:26 -0700 (PDT) Received: from av.mvista.com (gateway-1237.mvista.com [12.44.186.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6E4CKH9024721 for ; Wed, 13 Jul 2005 21:12:21 -0700 Received: from localhost.localdomain (av [127.0.0.1]) by av.mvista.com (8.9.3/8.9.3) with ESMTP id VAA05739; Wed, 13 Jul 2005 21:10:27 -0700 Subject: Re: RT and XFS From: Daniel Walker To: Dave Chinner Cc: Nathan Scott , Ingo Molnar , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com In-Reply-To: <20050714135023.E241419@melbourne.sgi.com> References: <1121209293.26644.8.camel@dhcp153.mvista.com> <20050713002556.GA980@frodo> <20050713064739.GD12661@elte.hu> <1121273158.13259.9.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050714002246.GA937@frodo> <20050714135023.E241419@melbourne.sgi.com> Content-Type: text/plain Date: Wed, 13 Jul 2005 21:10:26 -0700 Message-Id: <1121314226.14816.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> Mime-Version: 1.0 X-Mailer: Evolution 2.0.4 (2.0.4-4) Content-Transfer-Encoding: 7bit X-archive-position: 5631 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: dwalker@mvista.com Precedence: bulk X-list: linux-xfs Content-Length: 1703 Lines: 35 On Thu, 2005-07-14 at 13:50 +1000, Dave Chinner wrote: > Now that I've read the thread, I see it's not mrlocks that is the > issue with unlocking in a different context - it's semaphores. > > All the pagebuf synchronisation is done with a semaphore because > it's held across the I/O and it's _most definitely_ released in a > different context when doing async I/O. Just about all metadata I/O > is async because once the transaction has been logged to disk we > don't need to write these buffers out synchronously. Not to mention > the log I/O completion unlocks the buffers in a transaction in a > different context as well. > > The whole point of using a semaphore in the pagebuf is because there > is no tracking of who "owns" the lock so we can actually release it > in a different context. Semaphores were invented for this purpose, > and we use them in the way they were intended. ;) Where is the that semaphore spec, is that posix ? There is a new construct called "complete" that is good for this type of stuff too. No owner needed , just something running, and something waiting till it completes. > Realistically, I seriously doubt the need for any sort of rt changes > to these semaphores. 
They can be held for indeterminant periods of > time potentially across multiple disk I/Os (e.g. when held locked in > a transaction that requires more metadata to be read in from disk to > make progress). Hence there is no really no point in making them RT > aware because if you end up waiting on one of them you can forget > about pretty much any RT guarantee that you've ever given.... PI is always good, cause it allows the tracking of what is high priority , and what is not . Daniel From owner-linux-xfs@oss.sgi.com Wed Jul 13 22:25:34 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 13 Jul 2005 22:25:38 -0700 (PDT) Received: from mx2.elte.hu (mx2.elte.hu [157.181.151.9]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6E5PXH9028798 for ; Wed, 13 Jul 2005 22:25:34 -0700 Received: from chiara.elte.hu (chiara.elte.hu [157.181.150.200]) by mx2.elte.hu (Postfix) with ESMTP id 370C5327E6A; Thu, 14 Jul 2005 07:22:56 +0200 (CEST) Received: by chiara.elte.hu (Postfix, from userid 17806) id EC8711FC2; Thu, 14 Jul 2005 07:23:42 +0200 (CEST) Date: Thu, 14 Jul 2005 07:23:48 +0200 From: Ingo Molnar To: Daniel Walker Cc: Dave Chinner , Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: RT and XFS Message-ID: <20050714052347.GA18813@elte.hu> References: <1121209293.26644.8.camel@dhcp153.mvista.com> <20050713002556.GA980@frodo> <20050713064739.GD12661@elte.hu> <1121273158.13259.9.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050714002246.GA937@frodo> <20050714135023.E241419@melbourne.sgi.com> <1121314226.14816.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1121314226.14816.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> User-Agent: Mutt/1.4.2.1i X-ELTE-SpamVersion: MailScanner 4.31.6-itk1 (ELTE 1.2) SpamAssassin 2.63 ClamAV 0.73 X-ELTE-VirusStatus: clean X-ELTE-SpamCheck: no X-ELTE-SpamCheck-Details: score=-4.672, required 5.9, autolearn=not spam, BAYES_00 -4.90, OPT_IN 0.23 X-ELTE-SpamLevel: X-ELTE-SpamScore: -4 X-archive-position: 5632 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: mingo@elte.hu Precedence: bulk X-list: linux-xfs Content-Length: 1559 Lines: 34 * Daniel Walker wrote: > > The whole point of using a semaphore in the pagebuf is because there > > is no tracking of who "owns" the lock so we can actually release it > > in a different context. Semaphores were invented for this purpose, > > and we use them in the way they were intended. ;) > > Where is the that semaphore spec, is that posix ? There is a new > construct called "complete" that is good for this type of stuff too. > No owner needed , just something running, and something waiting till > it completes. wrt. posix, we dont really care about that for kernel-internal primitives like struct semaphore. So whether it's posix or not has no relevance. wrt. 'struct completion' - completions should indeed be slightly faster for that particular purpose (IO completion, log transaction completion, etc.). [ And it's in no way a 'must have' change - these are problems introduced by PREEMPT_RT, and are solved within that patch. If upstream code decides to convert certain types of semaphore uses to completions, that will help -RT, but it's an opt-in process. 
] it's easy to test the semaphore usage that -RT doesnt like: just revert one of the 'struct compat_semaphore' declarations to 'struct semaphore', enable RT_DEADLOCK_DETECT, and create & mount an XFS partition and do some simple file ops on it. That was enough for me to trigger the warnings which prompted the compat_semaphore changes. You'll get a verbose lock trace whenever something outside of the -RT kernel's expecations happens. Ingo From owner-linux-xfs@oss.sgi.com Thu Jul 14 01:57:39 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 14 Jul 2005 01:57:44 -0700 (PDT) Received: from smtp-1.hut.fi (smtp-1.hut.fi [130.233.228.91]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6E8vcH9014722 for ; Thu, 14 Jul 2005 01:57:39 -0700 Received: from localhost (katosiko.hut.fi [130.233.228.115]) by smtp-1.hut.fi (8.12.10/8.12.10) with ESMTP id j6E8tpik023213 for ; Thu, 14 Jul 2005 11:55:52 +0300 Received: from smtp-1.hut.fi ([130.233.228.91]) by localhost (katosiko.hut.fi [130.233.228.115]) (amavisd-new, port 10024) with LMTP id 08862-48-2 for ; Thu, 14 Jul 2005 11:55:51 +0300 (EEST) Received: from wing.madduck.net (aaninen-76.hut.fi [130.233.238.76]) by smtp-1.hut.fi (8.12.10/8.12.10) with ESMTP id j6E8tUkU023146 for ; Thu, 14 Jul 2005 11:55:30 +0300 Received: by wing.madduck.net (Postfix, from userid 1000) id 7A33E80E861; Thu, 14 Jul 2005 11:56:07 +0300 (EEST) Date: Thu, 14 Jul 2005 11:56:07 +0300 From: martin f krafft To: linux xfs mailing list Subject: Re: how to flush an XFS filesystem Message-ID: <20050714085607.GA24806@localhost.localdomain> Mail-Followup-To: linux xfs mailing list References: <20050710141254.A2904172@wobbly.melbourne.sgi.com> <20050710084345.GA11413@localhost.localdomain> <20050711081613.A2828633@wobbly.melbourne.sgi.com> <20050710224635.GA12333@localhost.localdomain> <20050711014827.GB829@frodo> <20050711072807.GA16354@localhost.localdomain> <20050713014028.GC980@frodo> <20050713020524.GE980@frodo> <20050713134034.GA6743@localhost.localdomain> <20050714064936.A3008365@wobbly.melbourne.sgi.com> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="UugvWAfsgieZRqgk" Content-Disposition: inline In-Reply-To: <20050714064936.A3008365@wobbly.melbourne.sgi.com> X-OS: Debian GNU/Linux 3.1 kernel 2.6.11-wing i686 X-Motto: Keep the good times rollin' X-Subliminal-Message: debian/rules! X-Spamtrap: madduck.bogus@madduck.net User-Agent: Mutt/1.5.9i X-TKK-Virus-Scanned: by amavisd-new-2.1.2-hutcc at katosiko.hut.fi X-archive-position: 5633 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: madduck@madduck.net Precedence: bulk X-list: linux-xfs Content-Length: 1211 Lines: 39 --UugvWAfsgieZRqgk Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable also sprach Nathan Scott [2005.07.13.2349 +0300]: > > Grub, on the other hand, tries to get the file, but it does not > > exist in the directory (and grub cannot replay the log), so it fails > > to find it. >=20 > xfs_db would have the same problem, but doesn't... I will try to reproduce this problem. It might take some days. --=20 martin; (greetings from the heart of the sun.) \____ echo mailto: !#^."<*>"|tr "<*> mailto:" net@madduck =20 invalid/expired pgp subkeys? use subkeys.pgp.net as keyserver! spamtraps: madduck.bogus@madduck.net =20 "convictions are more dangerous enemies of truth than lies." 
- friedrich nietzsche --UugvWAfsgieZRqgk Content-Type: application/pgp-signature; name="signature.asc" Content-Description: Digital signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.1 (GNU/Linux) iD8DBQFC1iinIgvIgzMMSnURAijWAKC1oPdHDH9S6rc84E4FspYuoGjJGQCgjWXV SlhdeK5hyP+RniLFFeS0d/0= =9T46 -----END PGP SIGNATURE----- --UugvWAfsgieZRqgk-- From owner-linux-xfs@oss.sgi.com Thu Jul 14 02:10:51 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 14 Jul 2005 02:10:53 -0700 (PDT) Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.199]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6E9AoH9016078 for ; Thu, 14 Jul 2005 02:10:51 -0700 Received: by wproxy.gmail.com with SMTP id i2so371585wra for ; Thu, 14 Jul 2005 02:09:07 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=bJ2dEvya3mfmtB2kHIWMfWeuKFHKEVkGa79TApUDKvvADQHl5cz86pPYPFtIdnMq7dV+9sajFsu/OCGRvLewJxvx3DfyoFWE+BmYh0vhSyAMPkWd9TD6eo/TsiZ809CCVhzAIFDh2QlfLh/QAI9LH8ABafqL0sJitcrYyQmXNyk= Received: by 10.54.47.67 with SMTP id u67mr652380wru; Thu, 14 Jul 2005 02:08:13 -0700 (PDT) Received: by 10.54.79.1 with HTTP; Thu, 14 Jul 2005 02:08:13 -0700 (PDT) Message-ID: Date: Thu, 14 Jul 2005 17:08:13 +0800 From: Mikore Li Reply-To: Mikore Li To: linux-xfs@oss.sgi.com Subject: Re: Search for mkfs.xfs that run in Xscale. Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j6E9ApH9016080 X-archive-position: 5634 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: mikore.li@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 555 Lines: 27 Hi, folks, Is there anyone experienced running mkfs.xfs to create xfs partition in a Xscale box? Is here the right alias to ask such a question? Thanks Q,L Mikore Li wrote: > Has anyone ported follow applications to cross-compiling environment > for arm based Xscale? > ftp://oss.sgi.com/projects/xfs/download/Release-1.3.1/cmd_tars? If you > have, could you share with me? > > I have a Xscale board and the kernel support xfs-1.2.0. But I can't > find the tool like mkfs.xfs.. for my Xscale board. > > Thanks & Best Regards, > > Q.L > > . > > From owner-linux-xfs@oss.sgi.com Thu Jul 14 06:17:47 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 14 Jul 2005 06:17:55 -0700 (PDT) Received: from mail.gmx.net (mail.gmx.de [213.165.64.20]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6EDHjH9008276 for ; Thu, 14 Jul 2005 06:17:46 -0700 Received: (qmail invoked by alias); 14 Jul 2005 13:16:01 -0000 Received: from G0283.g.pppool.de (EHLO [192.168.10.11]) [80.185.2.131] by mail.gmx.net (mp009) with SMTP; 14 Jul 2005 15:16:01 +0200 X-Authenticated: #2986359 Message-ID: <42D66591.5040209@gmx.net> Date: Thu, 14 Jul 2005 15:16:01 +0200 From: evilninja User-Agent: Mozilla Thunderbird 1.0.2 (X11/20050404) X-Accept-Language: de-DE, de, en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com CC: Mikore Li Subject: Re: Search for mkfs.xfs that run in Xscale. 
References: In-Reply-To: X-Enigmail-Version: 0.90.2.0 X-Enigmail-Supports: pgp-inline, pgp-mime Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-Y-GMX-Trusted: 0 X-archive-position: 5636 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: evilninja@gmx.net Precedence: bulk X-list: linux-xfs Content-Length: 774 Lines: 27 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Mikore Li schrieb: >>Has anyone ported follow applications to cross-compiling environment >>for arm based Xscale? >>ftp://oss.sgi.com/projects/xfs/download/Release-1.3.1/cmd_tars? If you i don't know about Xscale, but arm-based distributions are out there (with xfsprogs) and my cross-compile-environment of choice [1] also build for arm. Christian. [1] http://www.kegel.com/crosstool/ - -- BOFH excuse #141: disks spinning backwards - toggle the hemisphere jumper. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.5 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFC1mWRC/PVm5+NVoYRAjK2AKDBeRokiyntkwucm63hIOG0TULDCACgnoQJ ggUw9tFwPCWBst3zs7+477g= =iycQ -----END PGP SIGNATURE----- From owner-linux-xfs@oss.sgi.com Thu Jul 14 06:52:29 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 14 Jul 2005 06:52:35 -0700 (PDT) Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.194]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6EDqSH9010559 for ; Thu, 14 Jul 2005 06:52:29 -0700 Received: by wproxy.gmail.com with SMTP id i20so426106wra for ; Thu, 14 Jul 2005 06:50:44 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=kNJ2AwyGlG338hbwHhQmymMYQqriCfC5aq1muszLzz6aJdTNDMPb592M/mJeQC8vv2A0r912c3th9j7jJDVlnMDwDeG2u3Pzg1lhHyEERyXRYQBIjdRUeuC2A5qzJs0Kel1IMucEgG6p6wA7OyeyPgm0BGOTmTYbOEa0rmwq1H8= Received: by 10.54.33.7 with SMTP id g7mr743920wrg; Thu, 14 Jul 2005 06:50:01 -0700 (PDT) Received: by 10.54.110.20 with HTTP; Thu, 14 Jul 2005 06:50:01 -0700 (PDT) Message-ID: <60868aed050714065047e3aaec@mail.gmail.com> Date: Thu, 14 Jul 2005 16:50:01 +0300 From: Yura Pakhuchiy Reply-To: Yura Pakhuchiy To: Nathan Scott Subject: Re: XFS corruption on move from xscale to i686 Cc: linux-xfs@oss.sgi.com, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tibor@altlinux.ru, pakhuchiy@iptel.by In-Reply-To: <20050714012048.GB937@frodo> Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline References: <1120756552.5298.10.camel@pc299.sam-solutions.net> <20050708042146.GA1679@frodo> <60868aed0507130822c2e9e97@mail.gmail.com> <20050714012048.GB937@frodo> Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j6EDqTH9010571 X-archive-position: 5637 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: pakhuchiy@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 722 Lines: 22 2005/7/14, Nathan Scott : > On Wed, Jul 13, 2005 at 06:22:28PM +0300, Yura Pakhuchiy wrote: > > I found patch by Greg Ungreger to fix this problem, but why it's still > > not in mainline? Or it's a gcc problem and should be fixed by gcc folks? 
> > Yes, IIRC the patch was incorrect for other platforms, and it sure > looked like an arm-specific gcc problem (this was ages back, so > perhaps its fixed by now). AFAIR gcc-3.4.3 was released after this conversation take place at linux-xfs, maybe add something like this: #ifdef XSCALE /* We need this because some gcc versions for xscale are broken. */ [patched version here] #else [original version here] #endif Best regards, Yura From owner-linux-xfs@oss.sgi.com Thu Jul 14 07:40:19 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 14 Jul 2005 07:40:23 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6EEeIH9014808 for ; Thu, 14 Jul 2005 07:40:19 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.52 #1 (Red Hat Linux)) id 1Dt4r8-0004eA-67; Thu, 14 Jul 2005 15:38:30 +0100 Date: Thu, 14 Jul 2005 15:38:30 +0100 From: Christoph Hellwig To: Yura Pakhuchiy Cc: Nathan Scott , linux-xfs@oss.sgi.com, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tibor@altlinux.ru, pakhuchiy@iptel.by Subject: Re: XFS corruption on move from xscale to i686 Message-ID: <20050714143830.GA17842@infradead.org> Mail-Followup-To: Christoph Hellwig , Yura Pakhuchiy , Nathan Scott , linux-xfs@oss.sgi.com, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tibor@altlinux.ru, pakhuchiy@iptel.by References: <1120756552.5298.10.camel@pc299.sam-solutions.net> <20050708042146.GA1679@frodo> <60868aed0507130822c2e9e97@mail.gmail.com> <20050714012048.GB937@frodo> <60868aed050714065047e3aaec@mail.gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <60868aed050714065047e3aaec@mail.gmail.com> User-Agent: Mutt/1.4.2.1i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 5638 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: linux-xfs Content-Length: 1000 Lines: 24 On Thu, Jul 14, 2005 at 04:50:01PM +0300, Yura Pakhuchiy wrote: > 2005/7/14, Nathan Scott : > > On Wed, Jul 13, 2005 at 06:22:28PM +0300, Yura Pakhuchiy wrote: > > > I found patch by Greg Ungreger to fix this problem, but why it's still > > > not in mainline? Or it's a gcc problem and should be fixed by gcc folks? > > > > Yes, IIRC the patch was incorrect for other platforms, and it sure > > looked like an arm-specific gcc problem (this was ages back, so > > perhaps its fixed by now). > > AFAIR gcc-3.4.3 was released after this conversation take place at linux-xfs, > maybe add something like this: > > #ifdef XSCALE > /* We need this because some gcc versions for xscale are broken. */ > [patched version here] > #else > [original version here] > #endif no, just fix your compiler or let the gcc folks do it. Did anyone of the arm folks ever open a PR at the gcc bugzilla with a reproduced testcase? You're never get your compiler fixed with that attitude. 
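As plain C, the conditional build Yura sketches would look roughly like the following. This is an illustration only, not code from the XFS tree: CONFIG_XSCALE_GCC_BUG, struct ondisk_rec and rec_count() are hypothetical names, and the byte-wise copy is just one plausible shape for a "patched version". Christoph's point stands either way -- such a guard works around the miscompile without getting it fixed.

8<------8<------ xscale-ifdef-sketch.c
/*
 * Illustrative sketch only -- not code from the XFS tree.  All names
 * (CONFIG_XSCALE_GCC_BUG, ondisk_rec, rec_count) are hypothetical.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct ondisk_rec {                /* stand-in for an on-disk structure */
	uint32_t magic;
	uint16_t count;
	uint8_t  namelen;
} __attribute__((packed));         /* layout must not depend on the ABI */

static uint16_t rec_count(const void *buf)
{
#ifdef CONFIG_XSCALE_GCC_BUG
	/* "patched version": byte-wise copy sidesteps the miscompiled access */
	uint16_t v;
	memcpy(&v, (const uint8_t *)buf + offsetof(struct ondisk_rec, count),
	       sizeof(v));
	return v;
#else
	/* "original version": direct member access */
	return ((const struct ondisk_rec *)buf)->count;
#endif
}

int main(void)
{
	struct ondisk_rec rec = { 0x12345678, 7, 3 };   /* arbitrary test data */
	printf("count = %u\n", rec_count(&rec));        /* prints 7 either way */
	return 0;
}
8<------8<------ xscale-ifdef-sketch.c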
From owner-linux-xfs@oss.sgi.com Thu Jul 14 07:47:49 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 14 Jul 2005 07:47:53 -0700 (PDT) Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.195]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6EElmH9015571 for ; Thu, 14 Jul 2005 07:47:49 -0700 Received: by wproxy.gmail.com with SMTP id i31so437745wra for ; Thu, 14 Jul 2005 07:46:05 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=r3FzwxfGcaZ8dL6qbrzI1AtldyXEdZeiUCmBywzeXqOD/ZHG61Gn7rSe8hq2h4dfnydpRrekMbPAZ1VFDZ7p+mo7rcyKAeVxhsiXUIQIDmSqAwRqtK1eJkolZpzzRZg+ondvjm595vnv+7PO64Q/vFC1fHy2tc1TbnBqTduXNXg= Received: by 10.54.113.13 with SMTP id l13mr764458wrc; Thu, 14 Jul 2005 07:45:15 -0700 (PDT) Received: by 10.54.110.20 with HTTP; Thu, 14 Jul 2005 07:45:15 -0700 (PDT) Message-ID: <60868aed050714074550e0adcf@mail.gmail.com> Date: Thu, 14 Jul 2005 17:45:15 +0300 From: Yura Pakhuchiy Reply-To: Yura Pakhuchiy To: Christoph Hellwig Subject: Re: XFS corruption on move from xscale to i686 Cc: Nathan Scott , linux-xfs@oss.sgi.com, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tibor@altlinux.ru In-Reply-To: <20050714143830.GA17842@infradead.org> Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline References: <1120756552.5298.10.camel@pc299.sam-solutions.net> <20050708042146.GA1679@frodo> <60868aed0507130822c2e9e97@mail.gmail.com> <20050714012048.GB937@frodo> <60868aed050714065047e3aaec@mail.gmail.com> <20050714143830.GA17842@infradead.org> Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j6EElnH9015575 X-archive-position: 5639 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: pakhuchiy@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 1254 Lines: 32 2005/7/14, Christoph Hellwig : > On Thu, Jul 14, 2005 at 04:50:01PM +0300, Yura Pakhuchiy wrote: > > 2005/7/14, Nathan Scott : > > > On Wed, Jul 13, 2005 at 06:22:28PM +0300, Yura Pakhuchiy wrote: > > > > I found patch by Greg Ungreger to fix this problem, but why it's still > > > > not in mainline? Or it's a gcc problem and should be fixed by gcc folks? > > > > > > Yes, IIRC the patch was incorrect for other platforms, and it sure > > > looked like an arm-specific gcc problem (this was ages back, so > > > perhaps its fixed by now). > > > > AFAIR gcc-3.4.3 was released after this conversation take place at linux-xfs, > > maybe add something like this: > > > > #ifdef XSCALE > > /* We need this because some gcc versions for xscale are broken. */ > > [patched version here] > > #else > > [original version here] > > #endif > > no, just fix your compiler or let the gcc folks do it. Did anyone of > the arm folks ever open a PR at the gcc bugzilla with a reproduced > testcase? You're never get your compiler fixed with that attitude. Yes, but a lof of people use older versions of compilers and suffer from this bug. I personally was very unhappy when lost my data. 
Best regards, Yura From owner-linux-xfs@oss.sgi.com Thu Jul 14 07:51:05 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 14 Jul 2005 07:51:08 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6EEp4H9016133 for ; Thu, 14 Jul 2005 07:51:05 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.52 #1 (Red Hat Linux)) id 1Dt51b-0004ho-OQ; Thu, 14 Jul 2005 15:49:19 +0100 Date: Thu, 14 Jul 2005 15:49:19 +0100 From: Christoph Hellwig To: Yura Pakhuchiy Cc: Christoph Hellwig , Nathan Scott , linux-xfs@oss.sgi.com, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tibor@altlinux.ru Subject: Re: XFS corruption on move from xscale to i686 Message-ID: <20050714144919.GB17842@infradead.org> Mail-Followup-To: Christoph Hellwig , Yura Pakhuchiy , Nathan Scott , linux-xfs@oss.sgi.com, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tibor@altlinux.ru References: <1120756552.5298.10.camel@pc299.sam-solutions.net> <20050708042146.GA1679@frodo> <60868aed0507130822c2e9e97@mail.gmail.com> <20050714012048.GB937@frodo> <60868aed050714065047e3aaec@mail.gmail.com> <20050714143830.GA17842@infradead.org> <60868aed050714074550e0adcf@mail.gmail.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <60868aed050714074550e0adcf@mail.gmail.com> User-Agent: Mutt/1.4.2.1i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 5640 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: linux-xfs Content-Length: 261 Lines: 7 On Thu, Jul 14, 2005 at 05:45:15PM +0300, Yura Pakhuchiy wrote: > Yes, but a lof of people use older versions of compilers and suffer > from this bug. > I personally was very unhappy when lost my data. then host the patch somewhere and make sure to apply it. 
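Independent of where such a patch is hosted, one cheap guard against this class of bug is a compile-time layout check, so a compiler or ABI quirk fails the build instead of writing an incompatible on-disk format. The sketch below is a minimal userspace illustration, not taken from XFS; the structure, the expected size of 10 bytes, and the LAYOUT_CHECK macro are all hypothetical.

8<------8<------ layout-check-sketch.c
/*
 * Minimal sketch, not taken from XFS: a compile-time check that the
 * compiler produced the on-disk layout the code expects, so an ABI or
 * compiler quirk (like the arm padding issue behind this thread) breaks
 * the build instead of the filesystem.  Names and the expected size are
 * hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

struct ondisk_sf_hdr {             /* stand-in for an on-disk header */
	uint8_t count;
	uint8_t i8count;
	uint8_t parent[8];
};

/* C89-compatible static assertion: a negative array size on mismatch */
#define LAYOUT_CHECK(name, expr) \
	typedef char layout_check_##name[(expr) ? 1 : -1]

LAYOUT_CHECK(sf_hdr_size, sizeof(struct ondisk_sf_hdr) == 10);

int main(void)
{
	/* Nothing to do at run time; the check above is purely compile time. */
	printf("header is %zu bytes, as the build requires\n",
	       sizeof(struct ondisk_sf_hdr));
	return 0;
}
8<------8<------ layout-check-sketch.c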
From owner-linux-xfs@oss.sgi.com Thu Jul 14 08:58:51 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 14 Jul 2005 08:58:58 -0700 (PDT) Received: from av.mvista.com (gateway-1237.mvista.com [12.44.186.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6EFwpH9024121 for ; Thu, 14 Jul 2005 08:58:51 -0700 Received: from localhost.localdomain (av [127.0.0.1]) by av.mvista.com (8.9.3/8.9.3) with ESMTP id IAA09130; Thu, 14 Jul 2005 08:56:59 -0700 Subject: Re: RT and XFS From: Daniel Walker To: Ingo Molnar Cc: Dave Chinner , greg@kroah.com, Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com In-Reply-To: <20050714052347.GA18813@elte.hu> References: <1121209293.26644.8.camel@dhcp153.mvista.com> <20050713002556.GA980@frodo> <20050713064739.GD12661@elte.hu> <1121273158.13259.9.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050714002246.GA937@frodo> <20050714135023.E241419@melbourne.sgi.com> <1121314226.14816.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050714052347.GA18813@elte.hu> Content-Type: text/plain Date: Thu, 14 Jul 2005 08:56:58 -0700 Message-Id: <1121356618.14816.45.camel@c-67-188-6-232.hsd1.ca.comcast.net> Mime-Version: 1.0 X-Mailer: Evolution 2.0.4 (2.0.4-4) Content-Transfer-Encoding: 7bit X-archive-position: 5641 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: dwalker@mvista.com Precedence: bulk X-list: linux-xfs Content-Length: 1225 Lines: 28 On Thu, 2005-07-14 at 07:23 +0200, Ingo Molnar wrote: > * Daniel Walker wrote: > > > > The whole point of using a semaphore in the pagebuf is because there > > > is no tracking of who "owns" the lock so we can actually release it > > > in a different context. Semaphores were invented for this purpose, > > > and we use them in the way they were intended. ;) > > > > Where is the that semaphore spec, is that posix ? There is a new > > construct called "complete" that is good for this type of stuff too. > > No owner needed , just something running, and something waiting till > > it completes. > > wrt. posix, we dont really care about that for kernel-internal > primitives like struct semaphore. So whether it's posix or not has no > relevance. This reminds me of Documentation/stable_api_nonsense.txt . That no one should really be dependent on a particular kernel API doing a particular thing. The kernel is play dough for the kernel hacker (as it should be), including kernel semaphores. So we can change whatever we want, and make no excuses, as long as we fix the rest of the kernel to work with our change. That seems pretty sensible , because Linux should be an evolution. 
Daniel From owner-linux-xfs@oss.sgi.com Thu Jul 14 09:10:32 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 14 Jul 2005 09:10:39 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6EGAWH9025241 for ; Thu, 14 Jul 2005 09:10:32 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.52 #1 (Red Hat Linux)) id 1Dt6GM-00050y-8V; Thu, 14 Jul 2005 17:08:38 +0100 Date: Thu, 14 Jul 2005 17:08:38 +0100 From: Christoph Hellwig To: Daniel Walker Cc: Ingo Molnar , Dave Chinner , greg@kroah.com, Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: RT and XFS Message-ID: <20050714160838.GB19229@infradead.org> Mail-Followup-To: Christoph Hellwig , Daniel Walker , Ingo Molnar , Dave Chinner , greg@kroah.com, Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com References: <1121209293.26644.8.camel@dhcp153.mvista.com> <20050713002556.GA980@frodo> <20050713064739.GD12661@elte.hu> <1121273158.13259.9.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050714002246.GA937@frodo> <20050714135023.E241419@melbourne.sgi.com> <1121314226.14816.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050714052347.GA18813@elte.hu> <1121356618.14816.45.camel@c-67-188-6-232.hsd1.ca.comcast.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1121356618.14816.45.camel@c-67-188-6-232.hsd1.ca.comcast.net> User-Agent: Mutt/1.4.2.1i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 5643 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: linux-xfs Content-Length: 1621 Lines: 36 On Thu, Jul 14, 2005 at 08:56:58AM -0700, Daniel Walker wrote: > On Thu, 2005-07-14 at 07:23 +0200, Ingo Molnar wrote: > > * Daniel Walker wrote: > > > > > > The whole point of using a semaphore in the pagebuf is because there > > > > is no tracking of who "owns" the lock so we can actually release it > > > > in a different context. Semaphores were invented for this purpose, > > > > and we use them in the way they were intended. ;) > > > > > > Where is the that semaphore spec, is that posix ? There is a new > > > construct called "complete" that is good for this type of stuff too. > > > No owner needed , just something running, and something waiting till > > > it completes. > > > > wrt. posix, we dont really care about that for kernel-internal > > primitives like struct semaphore. So whether it's posix or not has no > > relevance. > > This reminds me of Documentation/stable_api_nonsense.txt . That no one > should really be dependent on a particular kernel API doing a particular > thing. The kernel is play dough for the kernel hacker (as it should be), > including kernel semaphores. > > So we can change whatever we want, and make no excuses, as long as we > fix the rest of the kernel to work with our change. That seems pretty > sensible , because Linux should be an evolution. 
> > Daniel > > - > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Please read the FAQ at http://www.tux.org/lkml/ ---end quoted text--- From owner-linux-xfs@oss.sgi.com Thu Jul 14 09:10:33 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 14 Jul 2005 09:10:37 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6EGAXH9025245 for ; Thu, 14 Jul 2005 09:10:33 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.52 #1 (Red Hat Linux)) id 1Dt6GJ-00050r-FL; Thu, 14 Jul 2005 17:08:35 +0100 Date: Thu, 14 Jul 2005 17:08:35 +0100 From: Christoph Hellwig To: Daniel Walker Cc: Ingo Molnar , Dave Chinner , greg@kroah.com, Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: RT and XFS Message-ID: <20050714160835.GA19229@infradead.org> Mail-Followup-To: Christoph Hellwig , Daniel Walker , Ingo Molnar , Dave Chinner , greg@kroah.com, Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com References: <1121209293.26644.8.camel@dhcp153.mvista.com> <20050713002556.GA980@frodo> <20050713064739.GD12661@elte.hu> <1121273158.13259.9.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050714002246.GA937@frodo> <20050714135023.E241419@melbourne.sgi.com> <1121314226.14816.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050714052347.GA18813@elte.hu> <1121356618.14816.45.camel@c-67-188-6-232.hsd1.ca.comcast.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1121356618.14816.45.camel@c-67-188-6-232.hsd1.ca.comcast.net> User-Agent: Mutt/1.4.2.1i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 5642 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: linux-xfs Content-Length: 726 Lines: 14 On Thu, Jul 14, 2005 at 08:56:58AM -0700, Daniel Walker wrote: > This reminds me of Documentation/stable_api_nonsense.txt . That no one > should really be dependent on a particular kernel API doing a particular > thing. The kernel is play dough for the kernel hacker (as it should be), > including kernel semaphores. > > So we can change whatever we want, and make no excuses, as long as we > fix the rest of the kernel to work with our change. That seems pretty > sensible , because Linux should be an evolution. Daniel, get a fucking clue. Read some CS 101 literature on what a semaphore is defined to be. If you want PI singing dancing blinking christmas tree locking primites call them a mutex, but not a semaphore. 
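Name-calling aside, the two primitives being argued about really do differ in semantics. The sketch below is a kernel-style illustration only -- hypothetical names, current header paths, not pagebuf code: a semaphore records no owner, so it may legally be released from a different context (the property the XFS side relies on), while a completion covers the plain "something waiting until something else signals" pattern Daniel mentions; an owner-tracked, PI-capable primitive is what Christoph says should be called a mutex.

8<------8<------ sema-vs-completion-sketch.c
/*
 * Kernel-style illustration with hypothetical names -- not pagebuf code.
 */
#include <linux/semaphore.h>
#include <linux/completion.h>

static struct semaphore demo_buf_lock;    /* hypothetical buffer "lock"  */
static struct completion demo_io_done;    /* hypothetical I/O completion */

static void demo_init(void)
{
	sema_init(&demo_buf_lock, 1);     /* binary semaphore */
	init_completion(&demo_io_done);
}

/* Pattern 1: pagebuf-style use -- lock at submit time, released by
 * whoever finishes the I/O, possibly a different context entirely. */
static void submit_buffer(void)
{
	down(&demo_buf_lock);             /* no owner is recorded */
	/* ... queue the I/O and return; do not unlock here ... */
}

static void buffer_io_done(void)          /* runs in another context */
{
	up(&demo_buf_lock);               /* legal for a semaphore; an
					   * owner-tracked, PI-capable mutex
					   * must be unlocked by its owner */
}

/* Pattern 2: the "complete" construct Daniel mentions -- one side waits,
 * the other signals, with no notion of ownership or locking at all. */
static void wait_side(void)
{
	wait_for_completion(&demo_io_done);
}

static void signal_side(void)
{
	complete(&demo_io_done);
}
8<------8<------ sema-vs-completion-sketch.c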
From owner-linux-xfs@oss.sgi.com Thu Jul 14 10:09:51 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 14 Jul 2005 10:09:58 -0700 (PDT) Received: from omx2.sgi.com (omx2-ext.sgi.com [192.48.171.19]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6EH9pH9029631 for ; Thu, 14 Jul 2005 10:09:51 -0700 Received: from ledzep.americas.sgi.com (ledzep.americas.sgi.com [198.149.16.14]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with ESMTP id j6EJ1CIY019122 for ; Thu, 14 Jul 2005 12:01:12 -0700 Received: from maine.americas.sgi.com (maine.americas.sgi.com [128.162.232.87]) by ledzep.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id j6EH86sL18711842; Thu, 14 Jul 2005 12:08:06 -0500 (CDT) Received: from hch by maine.americas.sgi.com with local (Exim 3.36 #1 (Debian)) id 1Dt7Bu-0005K9-00; Thu, 14 Jul 2005 12:08:06 -0500 To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@fido.engr.sgi.com Subject: TAKE 936236 - remove struct vnode::v_type Message-Id: From: Christoph Hellwig Date: Thu, 14 Jul 2005 12:08:06 -0500 X-archive-position: 5644 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: hch@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 2889 Lines: 45 Date: Thu Jul 14 10:07:55 PDT 2005 Workarea: maine.americas.sgi.com:/home/daisy40/hch/ptools/xfs-2.6.x Inspected by: nathans The following file(s) were checked into: bonnie.engr.sgi.com:/isms/linux/2.6.x-xfs Modid: xfs-linux:xfs-kern:195878a fs/xfs/xfsidbg.c - 1.279 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfsidbg.c.diff?r1=text&tr1=1.279&r2=text&tr2=1.278&f=h fs/xfs/xfs_vnodeops.c - 1.648 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_vnodeops.c.diff?r1=text&tr1=1.648&r2=text&tr2=1.647&f=h fs/xfs/xfs_dmapi.c - 1.131 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_dmapi.c.diff?r1=text&tr1=1.131&r2=text&tr2=1.130&f=h fs/xfs/xfs_acl.c - 1.53 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_acl.c.diff?r1=text&tr1=1.53&r2=text&tr2=1.52&f=h fs/xfs/xfs_inode.c - 1.416 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_inode.c.diff?r1=text&tr1=1.416&r2=text&tr2=1.415&f=h fs/xfs/linux-2.6/xfs_ioctl.c - 1.123 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_ioctl.c.diff?r1=text&tr1=1.123&r2=text&tr2=1.122&f=h fs/xfs/linux-2.6/xfs_vnode.c - 1.129 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_vnode.c.diff?r1=text&tr1=1.129&r2=text&tr2=1.128&f=h fs/xfs/linux-2.6/xfs_vnode.h - 1.106 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_vnode.h.diff?r1=text&tr1=1.106&r2=text&tr2=1.105&f=h fs/xfs/linux-2.6/xfs_super.c - 1.336 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_super.c.diff?r1=text&tr1=1.336&r2=text&tr2=1.335&f=h fs/xfs/linux-2.6/xfs_iops.c - 1.225 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_iops.c.diff?r1=text&tr1=1.225&r2=text&tr2=1.224&f=h fs/xfs/linux-2.4/xfs_ioctl.c - 1.119 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.4/xfs_ioctl.c.diff?r1=text&tr1=1.119&r2=text&tr2=1.118&f=h fs/xfs/linux-2.4/xfs_vnode.c - 1.130 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.4/xfs_vnode.c.diff?r1=text&tr1=1.130&r2=text&tr2=1.129&f=h fs/xfs/linux-2.4/xfs_vnode.h - 1.99 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.4/xfs_vnode.h.diff?r1=text&tr1=1.99&r2=text&tr2=1.98&f=h 
fs/xfs/linux-2.4/xfs_super.c - 1.308 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.4/xfs_super.c.diff?r1=text&tr1=1.308&r2=text&tr2=1.307&f=h fs/xfs/linux-2.4/xfs_iops.c - 1.210 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.4/xfs_iops.c.diff?r1=text&tr1=1.210&r2=text&tr2=1.209&f=h fs/xfs/linux-2.6/xfs_ksyms.c - 1.23 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_ksyms.c.diff?r1=text&tr1=1.23&r2=text&tr2=1.22&f=h fs/xfs/linux-2.4/xfs_ksyms.c - 1.19 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.4/xfs_ksyms.c.diff?r1=text&tr1=1.19&r2=text&tr2=1.18&f=h From owner-linux-xfs@oss.sgi.com Thu Jul 14 23:09:12 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 14 Jul 2005 23:09:16 -0700 (PDT) Received: from tyo202.gate.nec.co.jp (TYO202.gate.nec.co.jp [210.143.35.52]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6F699H9019259 for ; Thu, 14 Jul 2005 23:09:11 -0700 Received: from mailgate3.nec.co.jp (mailgate54.nec.co.jp [10.7.69.197]) by tyo202.gate.nec.co.jp (8.11.7/3.7W01080315) with ESMTP id j6F67Im27405 for ; Fri, 15 Jul 2005 15:07:18 +0900 (JST) Received: (from root@localhost) by mailgate3.nec.co.jp (8.11.7/3.7W-MAILGATE-NEC) id j6F67IS26128 for linux-xfs@oss.sgi.com; Fri, 15 Jul 2005 15:07:18 +0900 (JST) Received: from secsv2.tnes.nec.co.jp (tnesvc1.tnes.nec.co.jp [10.1.101.14]) by mailsv.nec.co.jp (8.11.7/3.7W-MAILSV-NEC) with ESMTP id j6F67HC12057 for ; Fri, 15 Jul 2005 15:07:17 +0900 (JST) Received: from TNESVC1.tnes.nec.co.jp ([10.1.101.14]) by secsv2.tnes.nec.co.jp (ExpressMail 5.10) with SMTP id 20050715.151015.10302160 for ; Fri, 15 Jul 2005 15:10:15 +0900 Received: FROM noshiro.bsd.tnes.nec.co.jp BY TNESVC1.tnes.nec.co.jp ; Fri Jul 15 15:10:14 2005 +0900 Received: from localhost (localhost.localdomain [127.0.0.1]) by noshiro.bsd.tnes.nec.co.jp (Postfix) with ESMTP id 1D1BB749FA for ; Fri, 15 Jul 2005 15:07:17 +0900 (JST) Date: Fri, 15 Jul 2005 15:07:17 +0900 (JST) Message-Id: <20050715.150717.28782011.masano@tnes.nec.co.jp> To: linux-xfs@oss.sgi.com Subject: deadlocks on ENOSPC From: ASANO Masahiro X-Mailer: Mew version 3.3 on XEmacs 21.4.11 (Native Windows TTY Support) Mime-Version: 1.0 Content-Type: Text/Plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-archive-position: 5646 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: masano@tnes.nec.co.jp Precedence: bulk X-list: linux-xfs Content-Length: 5365 Lines: 112 Hi, I've been investigating a deadlock problem on a ENOSPC device. The phenomenon is repeatable with the following method: 1. Make some files to fill a XFS filesystem leaving 80MB. 2. Execute dd.sh, which spawn 10 `dd's. Each dd writes 16MB, so total is 160MB against 80MB free. 8<------8<------ dd.sh #!/bin/sh for i in `seq 10` do ( while :; do dd if=/dev/zero of=F$i bs=1024 count=16384 > /dev/null 2>&1; done ) & done 8<------8<------ dd.sh 3. Wait a minutes, then two (or more) processes will be deadlocked with `D' state. Its WCHAN is `text.l'. I tested on HT Pentium4 box with Linux-2.6.13-rc[123] + TAKE 938502. But I guess older version also have the same flaw. Here is kernel back trace. 
ADDR S PID SESS UID EUID MM NAME FLAGS df112530 U 1376 0 0 0 0 xfssyncd forknoexec fstrans randomize ded6b588 c03ee853 schedule+6f3 () [ded6b5fc] c03edf65 __down+75 (decef93c,decef93c,ded6b654) [ded6b634] c03ee0f2 __down_failed+a () [ded6b644] c02884de [.text.lock.xfs_buf+1f] [ded6b644] c0287034 pagebuf_lock+34 (d3215abc,14005,de2e11fc,0) [ded6b658] c0286811 _pagebuf_find+161 (df6a0280,4841ad1,0,200) [ded6b690] c02868ff xfs_buf_get_flags+6f (df6a0280,4841ad1,0,1) [ded6b6c4] c0286a22 xfs_buf_read_flags+32 (df6a0280,4841ad1,0,1) [ded6b6e8] c0277e31 xfs_trans_read_buf+211 (dedde400,c9d74730,df6a0280,4841ad1) [ded6b718] c0223e03 xfs_alloc_read_agf+a3 (dedde400,c9d74730,a,0) [ded6b75c] c0223a39 xfs_alloc_fix_freelist+449 (ded6b97c,0,0,0) [ded6b804] c0224285 xfs_alloc_vextent+345 (ded6b97c,ded6b8f0,0,ae71d5) [ded6b868] c02346ba xfs_bmap_alloc+15ca (ded6bb34,ded6baf4,0,0) [ded6b9dc] c02389ef xfs_bmapi+d1f (c9d74730,d0d64d20,7f1,0) [ded6bb84] c0264d54 xfs_iomap_write_allocate+2b4 (d0d64d20,7f1000,0,1000) [ded6bc74] c02639f0 xfs_iomap+460 (d0d64dfc,7f1000,0,1000) [ded6bd00] c028d9d1 xfs_bmap+41 (d0d64d40,7f1000,0,1000) [ded6bd24] c02843af xfs_map_blocks+4f (d3c2204c,7f1000,0,1000) [ded6bd58] c0285580 xfs_page_state_convert+510 (d3c2204c,c111d3e0,ded6bf44,1) [ded6be24] c0285d2f linvfs_writepage+6f (c111d3e0,ded6bf44,ded6be94,0) [ded6be58] c018e94e mpage_writepages+24e (d3c220f8,ded6bf44,0,ded6bf80) [ded6bef4] c014cc92 do_writepages+42 (d3c220f8,ded6bf44,0,0,0,fe6,0,0,0,0,0,0,ded6bf88,ffffffff,0,0,0,fe6,0,0,0,0,0,0,ded6bf88,28852) [ded6bf08] c01459ef __filemap_fdatawrite_range+9f () ADDR S PID SESS UID EUID MM NAME FLAGS dd3e3530 U 13387 0 524 524 cf073800 dd fstrans randomize cf511950 c03ee853 schedule+6f3 () [cf5119c4] c03edf65 __down+75 (decefa2c,decefa2c,cf511a1c) [cf5119fc] c03ee0f2 __down_failed+a () [cf511a0c] c02884de [.text.lock.xfs_buf+1f] [cf511a0c] c0287034 pagebuf_lock+34 (d321557c,c16e2800,cf510000,0) [cf511a20] c0286811 _pagebuf_find+161 (df6a0280,6c62839,0,200) [cf511a58] c02868ff xfs_buf_get_flags+6f (df6a0280,6c62839,0,1) [cf511a8c] c0286a22 xfs_buf_read_flags+32 (df6a0280,6c62839,0,1) [cf511ab0] c0277e31 xfs_trans_read_buf+211 (dedde400,ce19dad0,df6a0280,6c62839) [cf511ae0] c0223e03 xfs_alloc_read_agf+a3 (dedde400,ce19dad0,f,0) [cf511b24] c0223a39 xfs_alloc_fix_freelist+449 (cf511bf0,0,a,e730a) [cf511bcc] c0224549 xfs_free_extent+99 (ce19dad0,fd3ac4,0,60) [cf511c50] c0237225 xfs_bmap_finish+185 (cf511d84,cf511cf0,ffffffff,ffffffff) [cf511c8c] c025ffdf xfs_itruncate_finish+29f (cf511d84,d0d64bb0,0,0) [cf511d10] c027d53b xfs_setattr+f5b (d0d64bd0,cf511dbc,0,0) [cf511da0] c028be8d linvfs_setattr+fd (ce66c5c8,cf511e7c,dedde418,cf511e68) [cf511e3c] c0184a1c notify_change+3cc (ce66c5c8,cf511e7c,48,0) [cf511e70] c0164e62 do_truncate+42 (ce66c5c8,0,0,ce66c5c8) [cf511ec4] c0177b0f may_open+24f (cf511f44,2,8242,c0167e6a) [cf511ee8] c01780a6 open_namei+526 (d26a2000,8242,1b6,cf511f44) [cf511f30] c0165f7a filp_open+3a (d26a2000,8241,1b6,d25bc880) [cf511f8c] c0166389 sys_open+59 (bff419a6,8241,1b6,8241) # xfs_info /opt meta-data=/opt isize=256 agcount=16, agsize=947081 blks = sectsz=512 data = bsize=4096 blocks=15153296, imaxpct=25 = sunit=0 swidth=0 blks, unwritten=1 naming =version 2 bsize=4096 log =internal bsize=4096 blocks=7399, version=1 = sectsz=512 sunit=0 blks realtime =none extsz=65536 blocks=0, rtextents=0 # df /opt Filesystem 1K-blocks Used Available Use% Mounted on /dev/hda6 60583588 60583584 4 100% /opt After some investigation, I've found in this case: xfssyncd: allocating 
extents; locking AG#15 AGF, waiting AG#10 AGF. Because XFS could not allocate all of the delayed blocks in a single AG. dd: freeing extents; locking AG#10 AGF, waiting AGF15 AGF. Because the file is made from multiple AGs and XFS defines XFS_ITRUNC_MAX_EXTENTS as 2. Both processes are in a transaction region (PF_FSTRANS) and operating 2 AGs. It looks like AB-BA deadlock. So, I have a question. Is multiple AGs in a single transaction safe? IMHO, multiple AGs in a single transaction is easy to be deadlocked, because XFS must keep the xfs_buf busy(semaphore down) until it is committed to in-core log. -- masano From owner-linux-xfs@oss.sgi.com Fri Jul 15 03:25:43 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 15 Jul 2005 03:25:52 -0700 (PDT) Received: from mx1.elte.hu (mx1.elte.hu [157.181.1.137]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6FAPfH9006085 for ; Fri, 15 Jul 2005 03:25:42 -0700 Received: from chiara.elte.hu (chiara.elte.hu [157.181.150.200]) by mx1.elte.hu (Postfix) with ESMTP id AF0C132CE43; Fri, 15 Jul 2005 12:22:32 +0200 (CEST) Received: by chiara.elte.hu (Postfix, from userid 17806) id CD27A1FC2; Fri, 15 Jul 2005 12:23:07 +0200 (CEST) Date: Fri, 15 Jul 2005 12:23:11 +0200 From: Ingo Molnar To: Daniel Walker Cc: Dave Chinner , Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com, Christoph Hellwig Subject: Re: RT and XFS Message-ID: <20050715102311.GA5302@elte.hu> References: <1121209293.26644.8.camel@dhcp153.mvista.com> <20050713002556.GA980@frodo> <20050713064739.GD12661@elte.hu> <1121273158.13259.9.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050714002246.GA937@frodo> <20050714135023.E241419@melbourne.sgi.com> <1121314226.14816.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1121314226.14816.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> User-Agent: Mutt/1.4.2.1i X-ELTE-SpamVersion: MailScanner 4.31.6-itk1 (ELTE 1.2) SpamAssassin 2.63 ClamAV 0.73 X-ELTE-VirusStatus: clean X-ELTE-SpamCheck: no X-ELTE-SpamCheck-Details: score=-4.9, required 5.9, autolearn=not spam, BAYES_00 -4.90 X-ELTE-SpamLevel: X-ELTE-SpamScore: -4 X-archive-position: 5647 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: mingo@elte.hu Precedence: bulk X-list: linux-xfs Content-Length: 411 Lines: 13 * Daniel Walker wrote: > PI is always good, cause it allows the tracking of what is high > priority , and what is not . that's just plain wrong. PI might be good if one cares about priorities and worst-case latencies, but most of the time the kernel is plain good enough and we dont care. PI can also be pretty expensive. So in no way, shape or form can PI be "always good". 
Ingo From owner-linux-xfs@oss.sgi.com Fri Jul 15 05:23:17 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 15 Jul 2005 05:23:33 -0700 (PDT) Received: from relay02.roc.ny.frontiernet.net (relay02.roc.ny.frontiernet.net [66.133.182.165]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6FCNHH9020252 for ; Fri, 15 Jul 2005 05:23:17 -0700 Received: from filter06.roc.ny.frontiernet.net (filter06.roc.ny.frontiernet.net [66.133.183.73]) by relay02.roc.ny.frontiernet.net (Postfix) with ESMTP id 7B14D37075C; Fri, 15 Jul 2005 12:21:32 +0000 (UTC) Received: from relay02.roc.ny.frontiernet.net ([66.133.182.165]) by filter06.roc.ny.frontiernet.net (filter06.roc.ny.frontiernet.net [66.133.183.73]) (amavisd-new, port 10024) with LMTP id 18689-06-49; Fri, 15 Jul 2005 12:21:32 +0000 (UTC) Received: from [192.168.1.100] (67-137-96-87.dsl2.brv.mn.frontiernet.net [67.137.96.87]) by relay02.roc.ny.frontiernet.net (Postfix) with ESMTP id E4A7F3707AD; Fri, 15 Jul 2005 12:21:25 +0000 (UTC) Message-ID: <42D7AA45.2040608@xfs.org> Date: Fri, 15 Jul 2005 07:21:25 -0500 From: Steve Lord User-Agent: Mozilla Thunderbird 1.0.2-1.3.3 (X11/20050513) X-Accept-Language: en-us, en MIME-Version: 1.0 To: ASANO Masahiro Cc: linux-xfs@oss.sgi.com Subject: Re: deadlocks on ENOSPC References: <20050715.150717.28782011.masano@tnes.nec.co.jp> In-Reply-To: <20050715.150717.28782011.masano@tnes.nec.co.jp> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 5648 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: lord@xfs.org Precedence: bulk X-list: linux-xfs Content-Length: 1172 Lines: 40 ASANO Masahiro wrote: > > After some investigation, I've found in this case: > > xfssyncd: allocating extents; locking AG#15 AGF, waiting AG#10 AGF. > Because XFS could not allocate all of the delayed blocks > in a single AG. > > dd: freeing extents; locking AG#10 AGF, waiting AGF15 AGF. > Because the file is made from multiple AGs and XFS defines > XFS_ITRUNC_MAX_EXTENTS as 2. > > Both processes are in a transaction region (PF_FSTRANS) and operating > 2 AGs. It looks like AB-BA deadlock. > > So, I have a question. Is multiple AGs in a single transaction safe? > > IMHO, multiple AGs in a single transaction is easy to be deadlocked, > because XFS must keep the xfs_buf busy(semaphore down) until it is > committed to in-core log. > > -- > masano > Hi Masano, That is definitely a bug, the extent logic is not supposed to lock allocation groups out of order. Multiple allocation groups are OK, but wrapping past the last allocation group back to the first again is not. Try changing the definition of XFS_STRAT_WRITE_IMAPS from 2 to 1 in xfs_iomap.c as a workaround for now. 
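Steve's rule is the classic cure for an AB-BA deadlock: every path takes the per-AG locks in one global (ascending) order and never wraps past the last AG back to a lower one. The following is a userspace analogy only, not XFS code -- agf_lock[] and lock_ag_pair() are invented purely to illustrate the ordering discipline.

8<------8<------ ag-lock-order-sketch.c
/*
 * Userspace analogy, not XFS code: take "per-AG" locks in ascending
 * order regardless of the order requested, so two paths that need the
 * same pair of AGs can never hold them in opposite orders (the AB-BA
 * pattern in the backtraces: AG#15 held while waiting for AG#10, and
 * AG#10 held while waiting for AG#15).
 */
#include <pthread.h>
#include <stdio.h>

#define AG_COUNT 16

static pthread_mutex_t agf_lock[AG_COUNT];   /* stand-ins for AGF buffers */

static void lock_ag_pair(int a, int b)
{
	int lo = a < b ? a : b;
	int hi = a < b ? b : a;

	pthread_mutex_lock(&agf_lock[lo]);       /* always lowest AG first */
	if (hi != lo)
		pthread_mutex_lock(&agf_lock[hi]);
}

static void unlock_ag_pair(int a, int b)
{
	int lo = a < b ? a : b;
	int hi = a < b ? b : a;

	if (hi != lo)
		pthread_mutex_unlock(&agf_lock[hi]);
	pthread_mutex_unlock(&agf_lock[lo]);
}

int main(void)
{
	int i;

	for (i = 0; i < AG_COUNT; i++)
		pthread_mutex_init(&agf_lock[i], NULL);

	/* Both "paths" collapse to the same global order, so no deadlock. */
	lock_ag_pair(15, 10);      /* the allocator's request  */
	unlock_ag_pair(15, 10);
	lock_ag_pair(10, 15);      /* the truncate path's request */
	unlock_ag_pair(10, 15);

	printf("both lock orders serialized without deadlock\n");
	return 0;
}
8<------8<------ ag-lock-order-sketch.c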
Steve Steve From owner-linux-xfs@oss.sgi.com Fri Jul 15 06:07:59 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 15 Jul 2005 06:08:04 -0700 (PDT) Received: from topalm2.dionis.local (82.211.131.6.planetsky.com [82.211.131.6] (may be forged)) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6FD7vH9022974 for ; Fri, 15 Jul 2005 06:07:58 -0700 Received: from [10.0.0.99] (helo=[10.0.0.99]) by topalm2.dionis.local with esmtp (Exim 3.36 #1 (Debian)) id 1DtPtG-0005F6-00 for ; Fri, 15 Jul 2005 17:06:06 +0400 Message-ID: <42D7B4DA.1060707@yandex.ru> Date: Fri, 15 Jul 2005 17:06:34 +0400 From: Ed User-Agent: Debian Thunderbird 1.0.2 (X11/20050331) X-Accept-Language: en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: postgresql perfomance on xfs Content-Type: text/plain; charset=KOI8-R; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 5649 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: spied@yandex.ru Precedence: bulk X-list: linux-xfs Content-Length: 71 Lines: 2 http://archives.postgresql.org/pgsql-performance/2005-07/msg00208.php From owner-linux-xfs@oss.sgi.com Fri Jul 15 09:18:50 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Fri, 15 Jul 2005 09:19:00 -0700 (PDT) Received: from av.mvista.com (gateway-1237.mvista.com [12.44.186.158]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6FGIoH9005368 for ; Fri, 15 Jul 2005 09:18:50 -0700 Received: from localhost.localdomain (av [127.0.0.1]) by av.mvista.com (8.9.3/8.9.3) with ESMTP id JAA21707; Fri, 15 Jul 2005 09:16:56 -0700 Subject: Re: RT and XFS From: Daniel Walker To: Ingo Molnar Cc: Dave Chinner , Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com, Christoph Hellwig In-Reply-To: <20050715102311.GA5302@elte.hu> References: <1121209293.26644.8.camel@dhcp153.mvista.com> <20050713002556.GA980@frodo> <20050713064739.GD12661@elte.hu> <1121273158.13259.9.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050714002246.GA937@frodo> <20050714135023.E241419@melbourne.sgi.com> <1121314226.14816.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050715102311.GA5302@elte.hu> Content-Type: text/plain Date: Fri, 15 Jul 2005 09:16:55 -0700 Message-Id: <1121444215.19554.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> Mime-Version: 1.0 X-Mailer: Evolution 2.0.4 (2.0.4-4) Content-Transfer-Encoding: 7bit X-archive-position: 5650 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: dwalker@mvista.com Precedence: bulk X-list: linux-xfs Content-Length: 686 Lines: 17 On Fri, 2005-07-15 at 12:23 +0200, Ingo Molnar wrote: > * Daniel Walker wrote: > > > PI is always good, cause it allows the tracking of what is high > > priority , and what is not . > > that's just plain wrong. PI might be good if one cares about priorities > and worst-case latencies, but most of the time the kernel is plain good > enough and we dont care. PI can also be pretty expensive. So in no way, > shape or form can PI be "always good". I don't agree with that. But of course I'm always speaking from a real time perspective . PI is expensive , but it won't always be. However, no one is forcing PI on anyone, even if I think it's good .. 
Daniel From owner-linux-xfs@oss.sgi.com Sat Jul 16 00:06:12 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 16 Jul 2005 00:06:15 -0700 (PDT) Received: from raad.intranet ([213.184.187.212]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6G761H9014783 for ; Sat, 16 Jul 2005 00:06:05 -0700 Received: from i810 (rescueCli [10.254.254.253]) by raad.intranet (8.8.7/8.8.7) with ESMTP id KAA18276; Sat, 16 Jul 2005 10:03:11 +0300 Message-Id: <200507160703.KAA18276@raad.intranet> From: "Al Boldi" To: Cc: , , , "'Nathan Scott'" Subject: Re: XFS corruption during power-blackout Date: Sat, 16 Jul 2005 10:02:41 +0300 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Office Outlook, Build 11.0.5510 Thread-Index: AcWJ1FuXffNb7HvTTDeY4O4AIvjIuQ== X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000 X-archive-position: 5652 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: a1426z@gawab.com Precedence: bulk X-list: linux-xfs Content-Length: 591 Lines: 25 Russell Howe wrote: { XFS only journals metadata, not data. So, you are supposed to get a consistent filesystem structure, but your data consistency isn't guaranteed. } What did XFS do to detect filedata-corruption before it was added to the vanilla-kernel? Maybe it did not update the metadata before the fs was sync'd? Really, it should wait for fs sync and then update metadata! This would imply 2 syncs in succession to ensure updated filedata/metadata consistency, which is OK. Is it possible to instruct XFS to delay metadata update until after a filedata sync? Thanks! Al From owner-linux-xfs@oss.sgi.com Sat Jul 16 10:39:40 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 16 Jul 2005 10:39:46 -0700 (PDT) Received: from 218-228-172-11.eonet.ne.jp (218-228-172-11.eonet.ne.jp [218.228.172.11]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6GHdaH9029034; Sat, 16 Jul 2005 10:39:37 -0700 Subject: Targeted - Cable TV Users Message-ID: <015K4E23PS2W4A80RG97L2Z1@botchery%620.hotpop.com> From: "Spencer H. Mcbride, VI" To: "Spencer H. Mcbride, VI" Cc: linux-xfs@oss.sgi.com, rios@oss.sgi.com, root@oss.sgi.com, ryan@oss.sgi.com, schmidt@oss.sgi.com, sullivan@oss.sgi.com, terry@oss.sgi.com, todd@oss.sgi.com Date: Sat, 16 Jul 2005 15:30:57 -0300 MIME-Version: 1.0 Content-Type: text/plain Content-Disposition: inline Content-Transfer-Encoding: 7bit X-archive-position: 5653 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: HolcombbenightmareO@graffiti.net Precedence: bulk X-list: linux-xfs Content-Length: 172 Lines: 13 Hows it been going, linux-xfs@oss.sgi.com? Many bartenders like driving every other day. 
About life Best Regards, Stephanie Lake [[HTML alternate version deleted]] From owner-linux-xfs@oss.sgi.com Sat Jul 16 12:12:21 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 16 Jul 2005 12:12:26 -0700 (PDT) Received: from virtualhost.dk (ns.virtualhost.dk [195.184.98.160]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6GJCJH9001467 for ; Sat, 16 Jul 2005 12:12:21 -0700 Received: from [62.242.22.158] (helo=router.home.kernel.dk) by virtualhost.dk with esmtp (Exim 3.36 #1) id 1Dts3U-0005Aq-00; Sat, 16 Jul 2005 21:10:32 +0200 Received: from nelson.home.kernel.dk ([192.168.0.33] helo=kernel.dk) by router.home.kernel.dk with esmtp (Exim 4.22) id 1Dts3Q-0001cr-OZ; Sat, 16 Jul 2005 21:10:28 +0200 Received: by kernel.dk (Postfix, from userid 1000) id C6EDD1E23C; Sat, 16 Jul 2005 21:12:17 +0200 (CEST) Date: Sat, 16 Jul 2005 21:12:17 +0200 From: Jens Axboe To: Andi Kleen Cc: Nathan Scott , linux-xfs@oss.sgi.com Subject: Re: XFS, 4K stacks, and Red Hat Message-ID: <20050716191217.GE1568@suse.de> References: <42CD4D38.1090703@xfs.org> <20050708043740.GB1679@frodo> <42D3F44B.308@strike.wu-wien.ac.at> <20050713015626.GD980@frodo> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-archive-position: 5654 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: axboe@suse.de Precedence: bulk X-list: linux-xfs Content-Length: 1520 Lines: 41 On Wed, Jul 13 2005, Andi Kleen wrote: > Nathan Scott writes: > > > On Tue, Jul 12, 2005 at 06:48:11PM +0200, Alexander Bergolth wrote: > > > On 07/08/2005 06:37 AM, Nathan Scott wrote: > > > >... > > > > As other cases pop up (with a reproducible test case please, and > > > > no stacking drivers in the way too :), we slowly iron them out.. > > ^^^^^^^^^^^^^^^^^^^^^^^^^^ > > > > *cough* > > > > > I'm getting frequent stack overflows on one system, using xfs, lvm2, > > > sw-raid and libata but I don't know, if they are XFS-related. > > > > Hmmm - xfs on lvm on md on ide ...? Looks like its death by > > a thousand cuts.. thats the sort of case Steve keeps talking > > about. You will be able to crash using any filesystem doing > > this, eventually - and we haven't even got NFS in the picture > > here yet. > > Eventually even 8k stack systems might run into problems. > > A generic way to solve this would be to let the block layer > who calls into the various stacking layers check how much stack is left > first and when it is too low push the work to another thread using > a workqueue. > > Jens, do you think that would be feasible? (sorry for the late reply, vacation) Sounds like a possible solution for the problem. 4kb stack is never going to be completely enough for some block layer stacking setups. Would need some careful work, I don't want to see each and every io pushed to a worker for processing and potentially incurring 2 context switches per io. 
-- Jens Axboe From owner-linux-xfs@oss.sgi.com Sat Jul 16 18:58:54 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 16 Jul 2005 18:58:57 -0700 (PDT) Received: from mx2.suse.de (ns2.suse.de [195.135.220.15]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6H1wrH9027878 for ; Sat, 16 Jul 2005 18:58:54 -0700 Received: from Relay1.suse.de (mail2.suse.de [195.135.221.8]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx2.suse.de (Postfix) with ESMTP id C80A91D29B; Sun, 17 Jul 2005 03:57:00 +0200 (CEST) Date: Sun, 17 Jul 2005 03:57:00 +0200 From: Andi Kleen To: Jens Axboe Cc: Andi Kleen , Nathan Scott , linux-xfs@oss.sgi.com Subject: Re: XFS, 4K stacks, and Red Hat Message-ID: <20050717015700.GC8459@wotan.suse.de> References: <42CD4D38.1090703@xfs.org> <20050708043740.GB1679@frodo> <42D3F44B.308@strike.wu-wien.ac.at> <20050713015626.GD980@frodo> <20050716191217.GE1568@suse.de> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20050716191217.GE1568@suse.de> X-archive-position: 5655 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ak@suse.de Precedence: bulk X-list: linux-xfs Content-Length: 444 Lines: 11 > Sounds like a possible solution for the problem. 4kb stack is never > going to be completely enough for some block layer stacking setups. > Would need some careful work, I don't want to see each and every io > pushed to a worker for processing and potentially incurring 2 context > switches per io. Well, it's better than crashing. Also I think it could be a problem even with 8k stacks when the stacking setups become more complex. -Andi From owner-linux-xfs@oss.sgi.com Sun Jul 17 00:57:54 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 17 Jul 2005 00:57:59 -0700 (PDT) Received: from tyo201.gate.nec.co.jp (TYO201.gate.nec.co.jp [202.32.8.214]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6H7vrH9022735 for ; Sun, 17 Jul 2005 00:57:53 -0700 Received: from mailgate3.nec.co.jp (mailgate54.nec.co.jp [10.7.69.195]) by tyo201.gate.nec.co.jp (8.11.7/3.7W01080315) with ESMTP id j6H7u6E22896 for ; Sun, 17 Jul 2005 16:56:06 +0900 (JST) Received: (from root@localhost) by mailgate3.nec.co.jp (8.11.7/3.7W-MAILGATE-NEC) id j6H7u6F24904 for linux-xfs@oss.sgi.com; Sun, 17 Jul 2005 16:56:06 +0900 (JST) Received: from secsv2.tnes.nec.co.jp (tnesvc1.tnes.nec.co.jp [10.1.101.14]) by mailsv3.nec.co.jp (8.11.7/3.7W-MAILSV4-NEC) with ESMTP id j6H7u5L09945 for ; Sun, 17 Jul 2005 16:56:05 +0900 (JST) Received: from TNESVC1.tnes.nec.co.jp ([10.1.101.14]) by secsv2.tnes.nec.co.jp (ExpressMail 5.10) with SMTP id 20050717.165852.25803816 for ; Sun, 17 Jul 2005 16:58:52 +0900 Received: FROM noshiro.bsd.tnes.nec.co.jp BY TNESVC1.tnes.nec.co.jp ; Sun Jul 17 16:58:51 2005 +0900 Received: from localhost (localhost.localdomain [127.0.0.1]) by noshiro.bsd.tnes.nec.co.jp (Postfix) with ESMTP id 6DFF9749F6; Sun, 17 Jul 2005 16:55:49 +0900 (JST) Date: Sun, 17 Jul 2005 16:55:49 +0900 (JST) Message-Id: <20050717.165549.730551511.masano@tnes.nec.co.jp> To: lord@xfs.org Cc: linux-xfs@oss.sgi.com Subject: Re: deadlocks on ENOSPC From: ASANO Masahiro In-Reply-To: <42D7AA45.2040608@xfs.org> References: <20050715.150717.28782011.masano@tnes.nec.co.jp> <42D7AA45.2040608@xfs.org> X-Mailer: Mew version 3.3 on XEmacs 21.4.11 (Native Windows TTY Support) Mime-Version: 1.0 Content-Type: Text/Plain; charset=us-ascii 
Content-Transfer-Encoding: 7bit X-archive-position: 5656 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: masano@tnes.nec.co.jp Precedence: bulk X-list: linux-xfs Content-Length: 1734 Lines: 57 Hi Steve, From: Steve Lord Subject: Re: deadlocks on ENOSPC Date: Fri, 15 Jul 2005 07:21:25 -0500 > Hi Masano, > > That is definitely a bug, the extent logic is not supposed to lock > allocation groups out of order. Multiple allocation groups are OK, > but wrapping past the last allocation group back to the first again > is not. I agree. XFS needs order (ascending priorities) for both allocating and freeing extents in multiple AGs. Freeing is OK, because extents are sorted in xfs_bmap_add_free(). But allocating is not... :-p > Try changing the definition of XFS_STRAT_WRITE_IMAPS from 2 to 1 in > xfs_iomap.c as a workaround for now. I've tried it, but I could not see any differences. It seems that xfssyncd is stuck in the second xfs_bmap_alloc() -> xfs_alloc_vextent() path. FYI: Here is the transaction table of xfssyncd at that time. (taken by a tool, http://sourceforge.jp/projects/mcrash/ ) The transaction includes NO XFS_LID_DIRTY'ed item. > xfs_trans d462d628 magic: 5452414e "TRAN" logcb.next: 0 logcb.func: 0 ( 0 ) forw/back: 0 0 type: f strat_write log_res: 1a6b8 log_count: 2 ticket: de8b7b88 (xlog_ticket) lsn: 0 commit_lsn: 0 mountp: c172f400 (xfs_mount) callback: 0 flags: 4 perm_log_res items_free: c items.next: 0 items.free/unused: 7ff8 5 ITEM TYPE SIZ IDX FLAGS items.descs 0: d524b050 (INODE ) 0 0 - items.descs 1: ccd8b45c (BUF ) 0 1 - //NOTE: AG#15 AGF items.descs 2: ccd8ba24 (BUF ) 0 2 - busy_free: 1f busy.next: 0 busy.free/unused 7fffffff 0 -- masano From owner-linux-xfs@oss.sgi.com Sun Jul 17 08:56:59 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 17 Jul 2005 08:57:06 -0700 (PDT) Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.206]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6HFuwH9029330 for ; Sun, 17 Jul 2005 08:56:59 -0700 Received: by wproxy.gmail.com with SMTP id i13so881187wra for ; Sun, 17 Jul 2005 08:55:10 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=CkT8dqn0bQdPknIP+5Xb978eS/JAAkJl0r9VunrtUxOhNVNp/+yg5KV3ZKNwn/RDUzE3HTeeL+v6HMrfF9VT3srbQKgAhp0VX4l7fK90Wxl/KlrcuCKS0Z2X6hkHDkpL5UIRlarF6OdwNASBZ5DR1iYVEfL/hqxTspBP2nh1RWI= Received: by 10.54.11.6 with SMTP id 6mr117592wrk; Sun, 17 Jul 2005 08:55:10 -0700 (PDT) Received: by 10.54.118.2 with HTTP; Sun, 17 Jul 2005 08:55:10 -0700 (PDT) Message-ID: <1c9ad59d05071708557128581@mail.gmail.com> Date: Sun, 17 Jul 2005 21:25:10 +0530 From: Hemant Thakur Reply-To: Hemant Thakur To: linux-xfs@oss.sgi.com Subject: xfs info Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j6HFuxH9029333 X-archive-position: 5657 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: hemant.t.thakur@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 403 Lines: 13 hi, i need information about xfs as pertaining to following: - its architecture - its storage specifications - patch structure - physical level (implementation specifications) i am a student 
interested in understanding this excititng FS i will be grateful if you satisfy my query at the earnest reply urgently at the address hemant.t.thakur@gmail.com From owner-linux-xfs@oss.sgi.com Sun Jul 17 21:24:25 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 17 Jul 2005 21:24:29 -0700 (PDT) Received: from mail.sandeen.net (sandeen.net [209.173.210.139]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6I4OPH9014936 for ; Sun, 17 Jul 2005 21:24:25 -0700 Received: from [10.0.0.4] (liberator.sandeen.net [10.0.0.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.sandeen.net (Postfix) with ESMTP id B2AB92800F2; Sun, 17 Jul 2005 23:22:37 -0500 (CDT) Message-ID: <42DB2E8A.5000004@sandeen.net> Date: Sun, 17 Jul 2005 23:22:34 -0500 From: Eric Sandeen User-Agent: Mozilla Thunderbird 1.0.5 (Macintosh/20050711) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Hemant Thakur Cc: linux-xfs@oss.sgi.com Subject: Re: xfs info References: <1c9ad59d05071708557128581@mail.gmail.com> In-Reply-To: <1c9ad59d05071708557128581@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 5659 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: sandeen@sandeen.net Precedence: bulk X-list: linux-xfs Content-Length: 661 Lines: 25 Hemant Thakur wrote: > hi, > > i need information about xfs as pertaining to following: > - its architecture > - its storage specifications > - patch structure > - physical level (implementation specifications) > > i am a student interested in understanding this excititng FS > i will be grateful if you satisfy my query at the earnest I have found one site which is able to answer all these questions and more: http://www.google.com For more detailed information, http://oss.sgi.com/projects/xfs and http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.6-xfs/fs/xfs/ most likely has everything you need. 
-Eric From owner-linux-xfs@oss.sgi.com Mon Jul 18 04:35:25 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 18 Jul 2005 04:35:35 -0700 (PDT) Received: from lirs02.phys.au.dk (lirs02.phys.au.dk [130.225.28.43]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6IBZMH9018336 for ; Mon, 18 Jul 2005 04:35:25 -0700 Received: from da410.phys.au.dk (da410 [10.12.1.21]) by lirs02.phys.au.dk (8.12.6/8.12.6) with SMTP id j6IBWxfe026934; Mon, 18 Jul 2005 13:32:59 +0200 Received: from localhost by da410.phys.au.dk (5.65v4.0/1.1.19.2/02Feb99-1132AM) id AA05062; Mon, 18 Jul 2005 13:33:08 +0200 Date: Mon, 18 Jul 2005 13:33:07 +0200 (METDST) From: Esben Nielsen To: Daniel Walker Cc: Ingo Molnar , Dave Chinner , Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com, Christoph Hellwig Subject: Re: RT and XFS In-Reply-To: <1121444215.19554.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> Message-Id: Mime-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Scanned-By: MIMEDefang 2.52 on 10.12.1.54 X-archive-position: 5660 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: simlo@phys.au.dk Precedence: bulk X-list: linux-xfs Content-Length: 3459 Lines: 79 On Fri, 15 Jul 2005, Daniel Walker wrote: > On Fri, 2005-07-15 at 12:23 +0200, Ingo Molnar wrote: > > * Daniel Walker wrote: > > > > > PI is always good, cause it allows the tracking of what is high > > > priority , and what is not . > > > > that's just plain wrong. PI might be good if one cares about priorities > > and worst-case latencies, but most of the time the kernel is plain good > > enough and we dont care. PI can also be pretty expensive. So in no way, > > shape or form can PI be "always good". > > I don't agree with that. But of course I'm always speaking from a real > time perspective . PI is expensive , but it won't always be. However, no > one is forcing PI on anyone, even if I think it's good .. > Is PI needed? If you use a mutex to protect a critical area you are destroying the strict meaning of priorities if the mutex doesn't have PI: Priority inversion can effectively make the high priority task low priority in that situation and postpone it's execution indefinitely. For RT applications that is clearly unacceptable. One can argue that for non-RT tasks priorities aren't supposed to be that rigid as for RT tasks, anyway. Therefore it doesn't matter so much. But as I read the comments in sched.c a nice -20 task have to preempt any nice 0 task no matter how much a cpu-hog it is. If it happens to share a critical section with a nice +19 task, priority inversion will occationally destroy that property. If we disregard the costs of PI, PI is thus a good thing. But how expensive is PI? Ofcourse there is an overhead in doing the calculations. Ingo's implementation can be optimized quite a bit once things are settled but it will always be many times more expensive than a raw spin-lock. But is it much more expensive than a plain binary semaphore? If the is no congestion on a mutex the PI code will not be called at all. On UP, the only occation where congestion can occur is when a low priority task is preempted by a higher priority task while it has the mutex. 
So let us look at the expensive part where the high priority task tries to grab the mutex: With PI: The owner have to be boosted, an immediate task switch have to take place, the owner runs to the unlock operation and it set down in priority, whereafter there is a task-switch again to the highpriority task. Without PI: The owner waits and there is a task switch to some thread which might not be the owner but often is. When the owner eventually unlocks the mutex it will be follow by a task-switch - because congestion can only occur when the task trying to get the task preempts and thus have higher priority than the owner. The number of task switches are thus the same with and without PI! And then there is the cache issue: When other tasks gets scheduled in the priority inversion case the data being protected can be flushed from the cache while they are running. With PI the CPU continues to work with the same data - and most often in the same code module. I.e. there is a higher chance that the instruction and data cache contains the right data. Thus in the end it all depends on how cheaply the PI calculations can be made. Esben > Daniel > > - > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Please read the FAQ at http://www.tux.org/lkml/ > From owner-linux-xfs@oss.sgi.com Mon Jul 18 05:12:46 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 18 Jul 2005 05:12:50 -0700 (PDT) Received: from lirs02.phys.au.dk (lirs02.phys.au.dk [130.225.28.43]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6ICCjH9020781 for ; Mon, 18 Jul 2005 05:12:46 -0700 Received: from da410.phys.au.dk (da410 [10.12.1.21]) by lirs02.phys.au.dk (8.12.6/8.12.6) with SMTP id j6ICAOfe028946; Mon, 18 Jul 2005 14:10:25 +0200 Received: from localhost by da410.phys.au.dk (5.65v4.0/1.1.19.2/02Feb99-1132AM) id AA11574; Mon, 18 Jul 2005 14:10:31 +0200 Date: Mon, 18 Jul 2005 14:10:31 +0200 (METDST) From: Esben Nielsen To: Christoph Hellwig Cc: Daniel Walker , Ingo Molnar , Dave Chinner , greg@kroah.com, Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: RT and XFS In-Reply-To: <20050714160835.GA19229@infradead.org> Message-Id: Mime-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Scanned-By: MIMEDefang 2.52 on 10.12.1.54 X-archive-position: 5661 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: simlo@phys.au.dk Precedence: bulk X-list: linux-xfs Content-Length: 2404 Lines: 47 On Thu, 14 Jul 2005, Christoph Hellwig wrote: > On Thu, Jul 14, 2005 at 08:56:58AM -0700, Daniel Walker wrote: > > This reminds me of Documentation/stable_api_nonsense.txt . That no one > > should really be dependent on a particular kernel API doing a particular > > thing. The kernel is play dough for the kernel hacker (as it should be), > > including kernel semaphores. > > > > So we can change whatever we want, and make no excuses, as long as we > > fix the rest of the kernel to work with our change. That seems pretty > > sensible , because Linux should be an evolution. > > Daniel, get a fucking clue. Read some CS 101 literature on what a semaphore > is defined to be. If you want PI singing dancing blinking christmas tree > locking primites call them a mutex, but not a semaphore. 
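For what it is worth, the property Esben is weighing is already visible from userspace: POSIX allows a mutex to be created with the priority-inheritance protocol, which is roughly the facility the -rt patchset adds for in-kernel locks. A minimal sketch follows; whether PTHREAD_PRIO_INHERIT is actually available depends on the libc and kernel, and error handling is trimmed to the essentials.

8<------8<------ pi-mutex-sketch.c
/*
 * Userspace sketch of the property under discussion, not the -rt kernel
 * implementation: a POSIX mutex can request priority inheritance, so a
 * lower-priority owner is boosted while a higher-priority task waits.
 */
#define _XOPEN_SOURCE 700
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t shared_lock;

static int init_pi_mutex(void)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	/* Without this, a SCHED_FIFO waiter can be stalled indefinitely
	 * behind a preempted lower-priority owner -- the inversion case
	 * described above. */
	if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT))
		return -1;
	return pthread_mutex_init(&shared_lock, &attr);
}

int main(void)
{
	if (init_pi_mutex()) {
		fprintf(stderr, "PTHREAD_PRIO_INHERIT not supported here\n");
		return 1;
	}
	pthread_mutex_lock(&shared_lock);
	/* ... critical section: the owner is boosted if a higher-priority
	 * thread blocks on shared_lock ... */
	pthread_mutex_unlock(&shared_lock);
	return 0;
}
8<------8<------ pi-mutex-sketch.c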
> As a matter of fact I just finished what corresponds to your "CS 101" (I study CS in my spare time while having a full time job coding RT stuff): At the one lecture I attended they talked about semaphores. They taught students to use binary semaphores for locking. Based on real-life experience (and the Pathfinder story), I complained and told them they ought to teach the students to use a mutex instead. They had no clue: "It is the same thing," they said. Yes, a mutex can be implemented just as a binary semaphore, but the semantics of it are different. In RT the difference is very important, and even without RT it is a good idea to maintain the difference for readability and deadlock detection. If you later on want to optimize the semaphore for what it is used for, it is also good to have maintained that information. It is a bit like discarding the type information from your programs. You want to keep the type information even though the compiler ends up producing the same code. The kernel developers clearly have followed the same lectures and used plain binary semaphores, sometimes calling it a mutex, sometimes a semaphore. I believe that the semaphore type ought to be removed. Either use a mutex or a completion. By far most code is using a semaphore as either signalling - i.e. as a completion - or critical sections - i.e. as a mutex. If code mixes the usage it is most likely very hard to read.... Unfortunately, one of the goals of the preempt-rt branch is to avoid altering too much code. Therefore the type semaphore can't be removed there. Therefore the name still lingers ... :-( Esben From owner-linux-xfs@oss.sgi.com Mon Jul 18 05:21:34 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 18 Jul 2005 05:21:40 -0700 (PDT) Received: from chaos.analogic.com (alog0388.analogic.com [208.224.222.164]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6ICLXH9021915 for ; Mon, 18 Jul 2005 05:21:34 -0700 Received: from chaos.analogic.com (localhost.localdomain [127.0.0.1]) by chaos.analogic.com (8.12.11/8.12.11) with ESMTP id j6ICHK31023680; Mon, 18 Jul 2005 08:17:20 -0400 Received: (from linux-os@localhost) by chaos.analogic.com (8.12.11/8.12.11/Submit) id j6ICHJg1023679; Mon, 18 Jul 2005 08:17:19 -0400 Date: Mon, 18 Jul 2005 08:17:19 -0400 (EDT) From: "Richard B. Johnson" Reply-To: linux-os@analogic.com To: Esben Nielsen cc: Christoph Hellwig , Daniel Walker , Ingo Molnar , Dave Chinner , greg@kroah.com, Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: RT and XFS In-Reply-To: Message-ID: References: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-archive-position: 5662 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: linux-os@analogic.com Precedence: bulk X-list: linux-xfs Content-Length: 2803 Lines: 58 On Mon, 18 Jul 2005, Esben Nielsen wrote: > On Thu, 14 Jul 2005, Christoph Hellwig wrote: > >> On Thu, Jul 14, 2005 at 08:56:58AM -0700, Daniel Walker wrote: >>> This reminds me of Documentation/stable_api_nonsense.txt . That no one >>> should really be dependent on a particular kernel API doing a particular >>> thing. The kernel is play dough for the kernel hacker (as it should be), >>> including kernel semaphores. >>> >>> So we can change whatever we want, and make no excuses, as long as we >>> fix the rest of the kernel to work with our change. That seems pretty >>> sensible , because Linux should be an evolution.
>> >> Daniel, get a fucking clue. Read some CS 101 literature on what a semaphore >> is defined to be. If you want PI singing dancing blinking christmas tree >> locking primites call them a mutex, but not a semaphore. >> > > As a matter of fact I just finished what corresponds to your "CS 101" (I > study CS in spare time while having a full time job coding RT stuff): > To the one lecture I attended they talked about sempahores. They tought > students to use binary semphores for locking. Based on real-life > experience (and the Pathfinder story), I complained and told > them they ought to teach the students to use a mutex instead. They had no > clue "It is the same thing they said". Yes, a mutex can be implemented > just as a binary semaphore but the semantics of it is different. In RT the > difference is very important and even without-RT it is a good idea to > maintain the difference for readability and deadlock detection. If you > later on want to optimize the semaphore for what it is used for it is also > good to have maintained that information. It is a bit like discarding > the type information from you programs. You want to keep the type information > even though the compilere end up producing the same code. > > The kernel developer clearly have followed the same lectures and used > plain binary semaphores, sometimes calling the mutex sometimes semaphore. > I believe that the semaphore ought to be removed. Either use a mutex or > a completion. Far the most code is using a sempahore as either signalling > - i.e. as a completion - or critical sections - i.e. as a mutex. If code > mixes the usage it is must likely very hard to read.... > > Unfortunately, one of the goals of the preempt-rt branch is to avoid > altering too much code. Therefore the type semaphore can't be removed > there. Therefore the name still lingers ... :-( > > Esben > A MUTEX is a procedure. A semaphore is an object, often used in such a procedure. Cheers, Dick Johnson Penguin : Linux version 2.6.12 on an i686 machine (5537.79 BogoMips). Notice : All mail here is now cached for review by Dictator Bush. 98.36% of all statistics are fiction. 
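The distinction this subthread keeps circling can be shown in a few lines of plain userspace C (illustrative only, not kernel code): a mutex has an owner, which must be the one to unlock it, while a semaphore is just a counter that any task may post - which is exactly why a PI implementation needs to know it is dealing with a mutex.

#include <pthread.h>
#include <semaphore.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static sem_t data_ready;

static void *producer(void *arg)
{
        (void)arg;
        /* Signalling use: the poster never "owned" data_ready, so there
         * is no owner a PI scheme could boost.  In kernel terms this is
         * completion-style use. */
        sem_post(&data_ready);
        return NULL;
}

static void *consumer(void *arg)
{
        (void)arg;
        sem_wait(&data_ready);          /* wait for the event */

        pthread_mutex_lock(&m);         /* mutual exclusion ... */
        /* ... critical section; only the locker may unlock, so it is
         * always known whom to boost if a higher-priority waiter
         * shows up. */
        pthread_mutex_unlock(&m);
        return NULL;
}

int main(void)
{
        pthread_t p, c;

        sem_init(&data_ready, 0, 0);    /* counter starts at 0 */
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        sem_destroy(&data_ready);
        return 0;
}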
From owner-linux-xfs@oss.sgi.com Mon Jul 18 07:47:52 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 18 Jul 2005 07:47:56 -0700 (PDT) Received: from anchor-post-31.mail.demon.net (anchor-post-31.mail.demon.net [194.217.242.89]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6IElpH9014299 for ; Mon, 18 Jul 2005 07:47:51 -0700 Message-Id: <200507181447.j6IElpH9014299@oss.sgi.com> Received: from pr-webmail-2.demon.net ([194.159.244.50]) by anchor-post-31.mail.demon.net with esmtp (Exim 4.42) id 1DuWo3-000FeG-4J for linux-xfs@oss.sgi.com; Mon, 18 Jul 2005 14:41:23 +0000 Received: from localhost ([127.0.0.1] helo=pr-webmail-2.demon.net) by pr-webmail-2.demon.net with smtp (Exim 4.42) id 1DuWsS-000JBx-BI for linux-xfs@oss.sgi.com; Mon, 18 Jul 2005 15:45:52 +0100 Received: from minter.demon.co.uk ([212.44.43.80]) by web.mail.demon.net with http; Mon, 18 Jul 2005 15:45:52 +0100 From: jim@minter.demon.co.uk To: linux-xfs@oss.sgi.com Subject: Deadlock on xfs_do_force_shutdown Date: Mon, 18 Jul 2005 15:45:52 +0100 User-Agent: Demon-WebMail/2.0 MIME-Version: 1.0 Content-Type: text/plain Content-Transfer-Encoding: 7bit X-archive-position: 5663 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: jim@minter.demon.co.uk Precedence: bulk X-list: linux-xfs Content-Length: 3115 Lines: 50 Hi, I got a deadlock on 2.6.10 where (due to some fs corruption somewhere -- not the point of this e-mail) xfs_trans_delete_ail called xfs_do_force_shutdown holding the AIL_LOCK. Later on, xfs_trans_tail_ail was called, which went for AIL_LOCK again... The code path in question (though perhaps there are other possible ones) looks like: xfs_trans_delete_ail holds AIL_LOCK -> calls xfs_do_force_shutdown -> calls xfs_log_force_umount -> calls xlog_state_sync_all -> calls xlog_state_release_iclog -> calls xlog_assign_tail_lsn -> calls xfs_trans_tail_ail -> tries to take AIL_LOCK A sample backtrace I got (seen due to memory shortages as it happens, but this too is a separate problem) was: Call Trace: {__alloc_pages+816} {__get_free_pages+14} {cache_grow+273} {cache_alloc_refill+440} {kmem_cache_alloc+54} {alloc_skb+44} {:e1000:e1000_alloc_rx_buffers+110} {:e1000:e1000_clean+1869} {net_rx_action+132} {__do_softirq+113} {do_softirq+53} {do_IRQ+63} {ret_from_intr+0} {printk+141} {flat_send_IPI_mask+0} {.text.lock.spinlock+0} {xfs_trans_tail_ail+33} {xlog_assign_tail_lsn+30} {xlog_state_release_iclog+57} {xlog_state_sync_all+209} {xfs_cmn_err+214} {xfs_log_force_umount+322} {pagebuf_iodone_work+0} {xfs_do_force_shutdown+132} {xfs_trans_delete_ail+219} {xfs_trans_delete_ail+219} {__up_wakeup+53} {xfs_buf_iodone+44} {xfs_buf_do_callbacks+42} {xfs_buf_iodone_callbacks+322} {__wake_up+67} {pagebuf_iodone_work+0} {worker_thread+496} {default_wake_function+0} {default_wake_function+0} {keventd_create_kthread+0} {worker_thread+0} {keventd_create_kthread+0} {kthread+217} {child_rip+8} {keventd_create_kthread+0} {kthread+0} {child_rip+0} The dmesg said: Filesystem "sdf1": xfs_trans_delete_ail: attempting to delete a log item that is not in the AIL xfs_force_shutdown(sdf1,0x8) called from line 382 of file fs/xfs/xfs_trans_ail.c. Return address = 0xffffffff8021538b Soon after the first CPU deadlocked, each other CPU on my system locked up going for the same AIL_LOCK. It'd be great this particular deadlock case could be fixed so that fs problems like this don't bring entire systems down. 
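Schematically (made-up function names, not the actual XFS code), the recursion above has the following shape, and the obvious fix is to drop the lock before entering the shutdown path:

#include <pthread.h>

pthread_mutex_t ail_lock = PTHREAD_MUTEX_INITIALIZER;

void get_tail_lsn(void)
{
        pthread_mutex_lock(&ail_lock); /* second acquisition on the same
                                          call chain: with a plain
                                          non-recursive lock this never
                                          returns */
        /* ... look at the AIL ... */
        pthread_mutex_unlock(&ail_lock);
}

void force_shutdown(void)
{
        /* stands in for xfs_do_force_shutdown -> ... -> xfs_trans_tail_ail */
        get_tail_lsn();
}

void delete_ail_buggy(void)
{
        pthread_mutex_lock(&ail_lock);
        force_shutdown();               /* called with ail_lock still held */
        pthread_mutex_unlock(&ail_lock);
}

void delete_ail_fixed(void)
{
        pthread_mutex_lock(&ail_lock);
        /* ... notice the corrupt log item ... */
        pthread_mutex_unlock(&ail_lock);/* drop the lock first ...           */
        force_shutdown();               /* ... then shut the filesystem down */
}

int main(void)
{
        delete_ail_fixed();             /* the _buggy variant would hang */
        return 0;
}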
Cheers, Jim Minter From owner-linux-xfs@oss.sgi.com Mon Jul 18 11:19:47 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 18 Jul 2005 11:19:51 -0700 (PDT) Received: from web50708.mail.yahoo.com (web50708.mail.yahoo.com [206.190.38.106]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6IIJjH9030365 for ; Mon, 18 Jul 2005 11:19:46 -0700 Received: (qmail 7469 invoked by uid 60001); 18 Jul 2005 18:17:58 -0000 Message-ID: <20050718181758.7467.qmail@web50708.mail.yahoo.com> Received: from [69.140.143.163] by web50708.mail.yahoo.com via HTTP; Mon, 18 Jul 2005 11:17:58 PDT X-RocketYMMF: emelamud Date: Mon, 18 Jul 2005 11:17:58 -0700 (PDT) From: Eugene Melamud Reply-To: melamud@umbi.umd.edu Subject: xfsdump slow on large filesystem. To: linux-xfs@oss.sgi.com MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: 8bit X-archive-position: 5664 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: melamud@umbi.umd.edu Precedence: bulk X-list: linux-xfs Content-Length: 1234 Lines: 25 Greetings, I am getting really miserable results from xfsdump when mirroring large filesystem over network. The command I run on the source computer is this. xfsdump -A -J - /dev/sda1 | ssh node22 xfsrestore -J -A - /backup Two computers are connected via 1Gbit network. Connection is good, when I test transfers with rsync, I get transfer between two computers at 35Mb/sec easy on large files. The maximum read speed on /dev/sda1 measured with dd is about 45 Mb/sec. The maximum write speed on /backup raid disk is about 100Mb/sec. Given that the most file on the filesystem are small, I can not expect very high throughput. I was hopping for at least 10Mb/sec. What I get is 35G transferred in the last 16hrs, that's less than 1Mb/sec. There is an initial slow down when xfsdump computes tree and inode attributes for transfer, but after that I would think it should really kick in. May be it has something to do with operating system, ( RHEL3 on source machine, CentOS4 on destination, xfsdump version 2.2.25-1). I know I know,I should have gone with SUSE. I am new to xfsdump, so may be I am doing something wrong. If I can't figure this out, I'll just go back to using "cp -au" over nfs. :) Any advice is appreciated. From owner-linux-xfs@oss.sgi.com Mon Jul 18 11:33:28 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 18 Jul 2005 11:33:32 -0700 (PDT) Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.197]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6IIXRH9031500 for ; Mon, 18 Jul 2005 11:33:28 -0700 Received: by wproxy.gmail.com with SMTP id 37so1069646wra for ; Mon, 18 Jul 2005 11:31:39 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=VNE13WRsj2jcZBTsjFj+jFZTc1kLaowPQ5MwqF74A1jGPLWcX/yBT0YHyS0cpc9RCFq14JdtSYP4tmfSVQpeE3PjXKYWQ2JeeLO4rprv7tD0sEuxJseyGuqZI5X33J3Cu/Mktpz+z2OKCrLipBGU8kAk/EvgIqvZJ21h7cbdtOo= Received: by 10.54.44.45 with SMTP id r45mr467580wrr; Mon, 18 Jul 2005 11:30:55 -0700 (PDT) Received: by 10.54.2.76 with HTTP; Mon, 18 Jul 2005 11:30:55 -0700 (PDT) Message-ID: <87f94c3705071811305dc8116d@mail.gmail.com> Date: Mon, 18 Jul 2005 14:30:55 -0400 From: Greg Freemyer Reply-To: Greg Freemyer To: melamud@umbi.umd.edu Subject: Re: xfsdump slow on large filesystem. 
Cc: linux-xfs@oss.sgi.com In-Reply-To: <20050718181758.7467.qmail@web50708.mail.yahoo.com> Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline References: <20050718181758.7467.qmail@web50708.mail.yahoo.com> Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j6IIXSH9031504 X-archive-position: 5665 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: greg.freemyer@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 1092 Lines: 30 On 7/18/05, Eugene Melamud wrote: > Greetings, > > I am getting really miserable results from xfsdump when mirroring large filesystem over network. > The command I run on the source computer is this. > > xfsdump -A -J - /dev/sda1 | ssh node22 xfsrestore -J -A - /backup > > Two computers are connected via 1Gbit network. Connection is good, when I test transfers with > rsync, I get transfer between two computers at 35Mb/sec easy on large files. The maximum read > speed on /dev/sda1 measured with dd is about 45 Mb/sec. The maximum write speed on /backup raid > disk is about 100Mb/sec. > > Given that the most file on the filesystem are small, I can not expect very high throughput. I was > hopping for at least 10Mb/sec. What I get is 35G transferred in the last 16hrs, that's less than > 1Mb/sec. > Still not fast, but by my math 35GB in 16 hours is 5.3 Mb/sec. (Were you calculating MB/sec. ?) Or were all of the above Mb/sec. figures meant to be MB/Sec. They do all seem too small for Mb/Sec. values. Greg -- Greg Freemyer The Norcross Group Forensics for the 21st Century From owner-linux-xfs@oss.sgi.com Mon Jul 18 13:10:26 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 18 Jul 2005 13:10:31 -0700 (PDT) Received: from web50707.mail.yahoo.com (web50707.mail.yahoo.com [206.190.38.105]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6IKANH9011718 for ; Mon, 18 Jul 2005 13:10:26 -0700 Received: (qmail 49304 invoked by uid 60001); 18 Jul 2005 20:08:36 -0000 Message-ID: <20050718200836.49302.qmail@web50707.mail.yahoo.com> Received: from [69.140.143.163] by web50707.mail.yahoo.com via HTTP; Mon, 18 Jul 2005 13:08:35 PDT X-RocketYMMF: emelamud Date: Mon, 18 Jul 2005 13:08:35 -0700 (PDT) From: Eugene Melamud Reply-To: melamud@umbi.umd.edu Subject: Re: xfsdump slow on large filesystem. To: Greg Freemyer , melamud@umbi.umd.edu Cc: linux-xfs@oss.sgi.com In-Reply-To: <87f94c3705071811305dc8116d@mail.gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: 8bit X-archive-position: 5666 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: melamud@umbi.umd.edu Precedence: bulk X-list: linux-xfs Content-Length: 1456 Lines: 46 I see the confusion, I should have been more exact with units. The units are in bytes not bits. So if my math is correct, 35 gigabytes in 16 hrs is 35,840 megabytes in 16*60*60 sec, that gives me 0.62 megabytes per sec. Still too slow.. --- Greg Freemyer wrote: > On 7/18/05, Eugene Melamud wrote: > > Greetings, > > > > I am getting really miserable results from xfsdump when mirroring large filesystem over > network. > > The command I run on the source computer is this. > > > > xfsdump -A -J - /dev/sda1 | ssh node22 xfsrestore -J -A - /backup > > > > Two computers are connected via 1Gbit network. 
Connection is good, when I test transfers with > > rsync, I get transfer between two computers at 35Mb/sec easy on large files. The maximum read > > speed on /dev/sda1 measured with dd is about 45 Mb/sec. The maximum write speed on /backup > raid > > disk is about 100Mb/sec. > > > > Given that the most file on the filesystem are small, I can not expect very high throughput. I > was > > hopping for at least 10Mb/sec. What I get is 35G transferred in the last 16hrs, that's less > than > > 1Mb/sec. > > > > Still not fast, but by my math 35GB in 16 hours is 5.3 Mb/sec. (Were > you calculating MB/sec. ?) > > Or were all of the above Mb/sec. figures meant to be MB/Sec. They do > all seem too small for Mb/Sec. values. > > Greg > -- > Greg Freemyer > The Norcross Group > Forensics for the 21st Century > From owner-linux-xfs@oss.sgi.com Mon Jul 18 14:34:40 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 18 Jul 2005 14:34:45 -0700 (PDT) Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.194]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6ILYbH9017430 for ; Mon, 18 Jul 2005 14:34:40 -0700 Received: by wproxy.gmail.com with SMTP id i1so1099673wra for ; Mon, 18 Jul 2005 14:32:49 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=fiTJjq9NJaO7gEe+M1MG5Wfe3k3fXPDpZH4XoC8DEXncuYPqPDyiYB1alODTAoQCB073gyKGAxL4kvUs2IjkC22Ck5YMtx2nlxLUqc7Ij83iH3U+/dSTjJFGORqZVqawer4LVLO5jxEQxpLlhp0kMx+UhNu79MMOBsqZudG6r50= Received: by 10.54.11.12 with SMTP id 12mr520050wrk; Mon, 18 Jul 2005 14:31:44 -0700 (PDT) Received: by 10.54.47.12 with HTTP; Mon, 18 Jul 2005 14:31:44 -0700 (PDT) Message-ID: Date: Mon, 18 Jul 2005 15:31:44 -0600 From: Nick I Reply-To: Nick I To: linux-xfs@oss.sgi.com Subject: XFS links Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j6ILYeH9017432 X-archive-position: 5667 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: clusterbuilder@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 875 Lines: 20 Hi, I work on a Web site at www.clusterbuilder.org. The site highlights a broad spectrum of high performance computing related technologies. It has links to reviews, comparisons, pricing and other information related to the many HPC solutions available. The site also allows for you to complete one request-for-quote form and have multiple hardware vendors of your choice provide pricing quotes to you. We need knowledgeable cluster users to submit sites that have proved a valuable resources to them. One particular area that we want to make sure is complete are file systems, which XFS is listed under. Please take a minute and submit additional links associated with XFS, File systems or other areas that will benefit the HPC community. (To submit content for the site, click on the Submit Content section on www.clusterbuilder.org). Thank you for your help. 
Nick From owner-linux-xfs@oss.sgi.com Mon Jul 18 20:24:08 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 18 Jul 2005 20:24:11 -0700 (PDT) Received: from smtp.lnxw.com (smtp.lnxw.com [207.21.185.24]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6J3O8H9009661 for ; Mon, 18 Jul 2005 20:24:08 -0700 Received: from pop.lnxw.com (pop.lnxw.com [207.21.185.6]) by smtp.lnxw.com (8.13.1/8.13.1) with ESMTP id j6J3M1pN006132; Mon, 18 Jul 2005 20:22:01 -0700 Received: from nietzsche (nietzsche.lynx.com [172.17.1.73]) by pop.lnxw.com (8.12.8/8.12.8) with ESMTP id j6J3Lwme001827; Mon, 18 Jul 2005 20:21:58 -0700 Received: from bhuey by nietzsche with local (Exim 4.52) id 1DuipQ-0005kJ-LZ; Mon, 18 Jul 2005 20:31:32 -0700 Date: Mon, 18 Jul 2005 20:31:32 -0700 To: Daniel Walker Cc: Ingo Molnar , Dave Chinner , Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com, Christoph Hellwig Subject: Re: RT and XFS Message-ID: <20050719033132.GB22060@nietzsche.lynx.com> References: <1121209293.26644.8.camel@dhcp153.mvista.com> <20050713002556.GA980@frodo> <20050713064739.GD12661@elte.hu> <1121273158.13259.9.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050714002246.GA937@frodo> <20050714135023.E241419@melbourne.sgi.com> <1121314226.14816.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> <20050715102311.GA5302@elte.hu> <1121444215.19554.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1121444215.19554.18.camel@c-67-188-6-232.hsd1.ca.comcast.net> User-Agent: Mutt/1.5.9i From: Bill Huey (hui) X-archive-position: 5669 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bhuey@lnxw.com Precedence: bulk X-list: linux-xfs Content-Length: 737 Lines: 16 On Fri, Jul 15, 2005 at 09:16:55AM -0700, Daniel Walker wrote: > I don't agree with that. But of course I'm always speaking from a real > time perspective . PI is expensive , but it won't always be. However, no > one is forcing PI on anyone, even if I think it's good .. It depends on what kind of PI under specific circumstances. In the general kernel, it's really to be avoided at all costs since it's masking a general contention problem at those places. In a formally provable worst case system using priority ceiling emulation and stuff, PI really valuable. How a system like the Linux kernel fits into that is a totally different story. General purpose kernels using general purpose facilities don't. That's how I see it. 
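For reference, the "priority ceiling emulation" alternative mentioned above also exists as a POSIX mutex protocol; the sketch below only shows the shape of the API (availability depends on the libc/kernel, and this is not -RT code):

#include <pthread.h>
#include <sched.h>

int make_ceiling_mutex(pthread_mutex_t *m, int ceiling)
{
        pthread_mutexattr_t attr;
        int err;

        pthread_mutexattr_init(&attr);
        /* PRIO_PROTECT: every locker runs at `ceiling' while holding the
         * mutex, so the worst case can be analysed statically.  With
         * PRIO_INHERIT the boost happens only when contention occurs. */
        err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
        if (!err)
                err = pthread_mutexattr_setprioceiling(&attr, ceiling);
        if (!err)
                err = pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);
        return err;
}

int main(void)
{
        pthread_mutex_t m;

        return make_ceiling_mutex(&m, sched_get_priority_max(SCHED_FIFO));
}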
bill From owner-linux-xfs@oss.sgi.com Mon Jul 18 20:19:47 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 18 Jul 2005 20:19:54 -0700 (PDT) Received: from smtp.lnxw.com ([207.21.185.24]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6J3JkH9009303 for ; Mon, 18 Jul 2005 20:19:47 -0700 Received: from pop.lnxw.com (pop.lnxw.com [207.21.185.6]) by smtp.lnxw.com (8.13.1/8.13.1) with ESMTP id j6J3Gr0P005997; Mon, 18 Jul 2005 20:16:53 -0700 Received: from nietzsche (nietzsche.lynx.com [172.17.1.73]) by pop.lnxw.com (8.12.8/8.12.8) with ESMTP id j6J3Gome001560; Mon, 18 Jul 2005 20:16:50 -0700 Received: from bhuey by nietzsche with local (Exim 4.52) id 1DuikS-0005jw-VE; Mon, 18 Jul 2005 20:26:24 -0700 Date: Mon, 18 Jul 2005 20:26:24 -0700 To: Esben Nielsen Cc: Christoph Hellwig , Daniel Walker , Ingo Molnar , Dave Chinner , greg@kroah.com, Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: RT and XFS Message-ID: <20050719032624.GA22060@nietzsche.lynx.com> References: <20050714160835.GA19229@infradead.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.9i From: Bill Huey (hui) X-archive-position: 5668 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: bhuey@lnxw.com Precedence: bulk X-list: linux-xfs Content-Length: 466 Lines: 11 On Mon, Jul 18, 2005 at 02:10:31PM +0200, Esben Nielsen wrote: > Unfortunately, one of the goals of the preempt-rt branch is to avoid > altering too much code. Therefore the type semaphore can't be removed > there. Therefore the name still lingers ... :-( This is where you failed. You assumed that that person making the comment, Christopher, in the first place didn't have his head up his ass in the first place and was open to your end of the discussion. 
bill From owner-linux-xfs@oss.sgi.com Tue Jul 19 06:30:11 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 19 Jul 2005 06:30:18 -0700 (PDT) Received: from pentafluge.infradead.org (pentafluge.infradead.org [213.146.154.40]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6JDU8H9022774 for ; Tue, 19 Jul 2005 06:30:10 -0700 Received: from hch by pentafluge.infradead.org with local (Exim 4.52 #1 (Red Hat Linux)) id 1Dus8U-0005MP-Ok; Tue, 19 Jul 2005 14:27:50 +0100 Date: Tue, 19 Jul 2005 14:27:50 +0100 From: Christoph Hellwig To: Ingo Molnar Cc: Bill Huey , Esben Nielsen , Christoph Hellwig , Daniel Walker , Dave Chinner , greg@kroah.com, Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: RT and XFS Message-ID: <20050719132750.GA20595@infradead.org> Mail-Followup-To: Christoph Hellwig , Ingo Molnar , Bill Huey , Esben Nielsen , Daniel Walker , Dave Chinner , greg@kroah.com, Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com References: <20050714160835.GA19229@infradead.org> <20050719032624.GA22060@nietzsche.lynx.com> <20050719123457.GC12368@elte.hu> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20050719123457.GC12368@elte.hu> User-Agent: Mutt/1.4.2.1i X-SRS-Rewrite: SMTP reverse-path rewritten from by pentafluge.infradead.org See http://www.infradead.org/rpr.html X-archive-position: 5672 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: hch@infradead.org Precedence: bulk X-list: linux-xfs Content-Length: 452 Lines: 9 On Tue, Jul 19, 2005 at 02:34:57PM +0200, Ingo Molnar wrote: > (I do disagree with Christoph on another point: i do think we eventually > want to change the standard semaphore type in a similar fashion upstream > as well - but that probably has to come with a s/struct semaphore/struct > mutex/ change as well.) Actually having a mutex_t in mainline would be a good idea even without preempt rt, to document better what kind of locking we expect. 
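A userspace illustration of the point (hypothetical names, not a proposal for the actual interface): wrapping the primitive in a distinct type costs nothing at runtime, but it records the intent - "this is a mutex" - so the implementation underneath can later be swapped, e.g. for a priority-inheriting one, without touching callers.

#include <semaphore.h>

struct my_mutex {
        sem_t sem;                      /* always initialised to 1 */
};

static inline int my_mutex_init(struct my_mutex *m)
{
        return sem_init(&m->sem, 0, 1);
}

static inline void my_mutex_lock(struct my_mutex *m)
{
        sem_wait(&m->sem);
}

static inline void my_mutex_unlock(struct my_mutex *m)
{
        /* By convention only the locker calls this; that convention is
         * exactly what the bare semaphore type fails to document. */
        sem_post(&m->sem);
}

int main(void)
{
        struct my_mutex m;

        my_mutex_init(&m);
        my_mutex_lock(&m);
        /* ... critical section ... */
        my_mutex_unlock(&m);
        sem_destroy(&m.sem);
        return 0;
}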
From owner-linux-xfs@oss.sgi.com Tue Jul 19 06:53:40 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Tue, 19 Jul 2005 06:53:46 -0700 (PDT) Received: from mx2.elte.hu (mx2.elte.hu [157.181.151.9]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6JDrdH9024156 for ; Tue, 19 Jul 2005 06:53:40 -0700 Received: from chiara.elte.hu (chiara.elte.hu [157.181.150.200]) by mx2.elte.hu (Postfix) with ESMTP id BB3C73283FC; Tue, 19 Jul 2005 15:49:59 +0200 (CEST) Received: by chiara.elte.hu (Postfix, from userid 17806) id 713811FC2; Tue, 19 Jul 2005 15:50:57 +0200 (CEST) Date: Tue, 19 Jul 2005 15:50:56 +0200 From: Ingo Molnar To: Christoph Hellwig , Bill Huey , Esben Nielsen , Daniel Walker , Dave Chinner , greg@kroah.com, Nathan Scott , Steve Lord , linux-kernel@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: RT and XFS Message-ID: <20050719135056.GA19552@elte.hu> References: <20050714160835.GA19229@infradead.org> <20050719032624.GA22060@nietzsche.lynx.com> <20050719123457.GC12368@elte.hu> <20050719132750.GA20595@infradead.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20050719132750.GA20595@infradead.org> User-Agent: Mutt/1.4.2.1i X-ELTE-SpamVersion: MailScanner 4.31.6-itk1 (ELTE 1.2) SpamAssassin 2.63 ClamAV 0.73 X-ELTE-VirusStatus: clean X-ELTE-SpamCheck: no X-ELTE-SpamCheck-Details: score=-4.9, required 5.9, autolearn=not spam, BAYES_00 -4.90 X-ELTE-SpamLevel: X-ELTE-SpamScore: -4 X-archive-position: 5673 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: mingo@elte.hu Precedence: bulk X-list: linux-xfs Content-Length: 1344 Lines: 30 * Christoph Hellwig wrote: > On Tue, Jul 19, 2005 at 02:34:57PM +0200, Ingo Molnar wrote: > > (I do disagree with Christoph on another point: i do think we eventually > > want to change the standard semaphore type in a similar fashion upstream > > as well - but that probably has to come with a s/struct semaphore/struct > > mutex/ change as well.) > > Actually having a mutex_t in mainline would be a good idea even > without preempt rt, to document better what kind of locking we expect. cool! I'll cook up a patch for that. Right now these are the numbers: there are 526 uses of struct semaphore in 2.6.12. In the -RT tree i had to change 23 of them to be compat_semaphore - i.e. 23 uses were definitely non-mutex. (We sure have missed some cases - but it would be fair to say that the expected number of cases is less than 50, and that we've mapped the most common ones already. That makes it a 90%/10% splitup: more than 90% of all struct semaphore use is pure mutex.) Of the remaining <10% cases, the majority is of the type of completions, and there are a handful of (<10) cases of 'counted semaphore' uses: semaphores with a count larger than 1. (e.g. ACPI uses it to count resources, some audio code too - but it's very rare) Btw., that's the only 'true' (in terms of CS) semaphore use. 
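Spelled out with the 2.6.12 primitives, those three usage classes look roughly like the schematic module below (kernel-style sketch, not taken from any real driver, and only meant to be read or built inside a kernel tree):

#include <linux/module.h>
#include <linux/init.h>
#include <linux/completion.h>
#include <asm/semaphore.h>

/* 1. the >90% case: a semaphore that is really a mutex -- count of one,
 *    and the task that takes it is the task that releases it. */
static DECLARE_MUTEX(state_lock);

/* 2. completion-style signalling: waiter and waker are different tasks,
 *    so there is no meaningful "owner" to priority-boost. */
static DECLARE_COMPLETION(work_done);

/* 3. the rare 'true' counted semaphore: N interchangeable resources. */
static struct semaphore slots;

static int __init usage_demo_init(void)
{
        sema_init(&slots, 4);           /* four identical resources */

        down(&state_lock);              /* mutex-style critical section */
        /* ... */
        up(&state_lock);

        down(&slots);                   /* grab one of the four slots */
        /* ... */
        up(&slots);

        complete(&work_done);           /* normally done from another task */
        wait_for_completion(&work_done);
        return 0;
}

static void __exit usage_demo_exit(void)
{
}

module_init(usage_demo_init);
module_exit(usage_demo_exit);
MODULE_LICENSE("GPL");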
Ingo From owner-linux-xfs@oss.sgi.com Wed Jul 20 00:32:09 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 20 Jul 2005 00:32:14 -0700 (PDT) Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.199]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6K7W8H9022766 for ; Wed, 20 Jul 2005 00:32:08 -0700 Received: by wproxy.gmail.com with SMTP id 69so500338wra for ; Wed, 20 Jul 2005 00:30:19 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=oR/ZIFA7O5rLqmWIhHsYxLfRvlz9bwMQHkkZ5Csle+PWkRs7bgv2zAIaRBSVitDh09R1x3GYlzhc5+0BmV/UwBJQvTj7pI6hsCOMQUIixS6GAV6nZOApMB8aI3ZE5Df9kftewAhwRlaBE4a9Bp/ar7EDUNc70o5ZPR7TMSroQbE= Received: by 10.54.38.54 with SMTP id l54mr1029480wrl; Wed, 20 Jul 2005 00:29:17 -0700 (PDT) Received: by 10.54.15.56 with HTTP; Wed, 20 Jul 2005 00:29:17 -0700 (PDT) Message-ID: Date: Wed, 20 Jul 2005 04:29:17 -0300 From: Alvaro R Reply-To: Alvaro R To: linux-xfs@oss.sgi.com Subject: 3.000.000 files on single dir = freeze Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j6K7W9H9022768 X-archive-position: 5674 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: askxfs@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 872 Lines: 26 Hello, I have some directories that are on XFS under SLES9, on top of a raid0 device. On those directories, I have about 3.000.000 files on one and 200.000 on another. If I try to access one file by name via Apache, no problem, I get the file right away, but when I try to tar those directories, the tar command goes ok for a while, then freezes for 5 minutes, then resumes for some 30 seconds and keeps cycling like that... it's about 350 gigs worth of data, and after 12 hours I have only 18 gigs backed up... I tried to do an ls.. and that takes forever and never returns the command prompt. du -hs takes some 15 minutes and then gives the dir size. rsync can count all the files, and it also freezes while copying. any hints ?
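Plain ls is a bad test here: it sorts all the entries (and with common options stat()s each of them), which hurts badly with millions of names in one directory. A bare readdir() loop just streams the names out; a rough sketch (not any particular tool from this thread):

#include <dirent.h>
#include <stdio.h>

int main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : ".";
        DIR *d = opendir(path);
        struct dirent *de;

        if (!d) {
                perror(path);
                return 1;
        }
        while ((de = readdir(d)) != NULL)
                puts(de->d_name);       /* no stat(), no sorting */
        closedir(d);
        return 0;
}

Built and used as, for example: cc -O2 listdir.c -o listdir; ./listdir /path/to/dir > filelist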
Alvaro PS - I am backing up the directories to another disk with: tar lcf - imagelistener imagefiles | (cd /dataentry_bkp/; tar xvpf - ) From owner-linux-xfs@oss.sgi.com Wed Jul 20 05:03:39 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 20 Jul 2005 05:03:49 -0700 (PDT) Received: from postfix3-1.free.fr (postfix3-1.free.fr [213.228.0.44]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6KC3XH9025753 for ; Wed, 20 Jul 2005 05:03:38 -0700 Received: from Gargamel.ravioli (lns-vlq-29-82-254-9-245.adsl.proxad.net [82.254.9.245]) by postfix3-1.free.fr (Postfix) with ESMTP id 7195517350D for ; Wed, 20 Jul 2005 14:01:43 +0200 (CEST) Subject: xfs userspace tools packaging From: Flavien Bridault Reply-To: disk@sourcemage.org To: linux-xfs@oss.sgi.com Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="=-yHXCUfX8xsgalnrledWx" Organization: Source Mage Date: Wed, 20 Jul 2005 14:02:59 +0200 Message-Id: <1121860979.6776.9.camel@Gargamel.ravioli> Mime-Version: 1.0 X-Mailer: Evolution 2.2.3 X-archive-position: 5675 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: f.bridault@fra.net Precedence: bulk X-list: linux-xfs Content-Length: 1384 Lines: 47 --=-yHXCUfX8xsgalnrledWx Content-Type: text/plain Content-Transfer-Encoding: quoted-printable Hello people, I'm the maintainer of the disk section of Source Mage GNU/Linux. I recently had trouble with compiling xfsprogs and xfsdump with gcc >=3D 4.0. I finally found that it were already corrected in CVS HEAD [0], and indeed that worked. :-) But this is not what I'm writing here. In the same time, I noticed that Christoph Hellwig mentioned in this same bug that packagers should ALWAYS use latest CVS HEAD. As our distribution is source-based, I wonder if it is really a good idea to use a cvs checkout for such a critical software. At this time my Project Leader asked me to remove the CVS checkout option and only propose the tarballs (respectively 2.6.25 and 2.2.25). Could you clarify this situation please ?? Thanks a lot. 
[0] http://oss.sgi.com/bugzilla/show_bug.cgi?id=409 -- -- -- -- -- -- -- -- Flavien Bridault Source Mage GNU/Linux - Disk Section Guru irc: vlaaad jabber: vlaaad@amessage.be --=-yHXCUfX8xsgalnrledWx Content-Type: application/pgp-signature; name=signature.asc Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.1 (GNU/Linux) iD8DBQBC3j1z2em/EE6e1kQRArHoAJ0d2OnoveCkj520AP00C4JGcTMq1wCdGdjz 7H9VcTuMuQUzGCAkzS9icbU= =AKO0 -----END PGP SIGNATURE----- --=-yHXCUfX8xsgalnrledWx-- From owner-linux-xfs@oss.sgi.com Wed Jul 20 12:44:59 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 20 Jul 2005 12:45:02 -0700 (PDT) Received: from omx2.sgi.com (omx2-ext.sgi.com [192.48.171.19]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6KJixH9032288 for ; Wed, 20 Jul 2005 12:44:59 -0700 Received: from flecktone.americas.sgi.com (flecktone.americas.sgi.com [198.149.16.15]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with ESMTP id j6KLb8N5010682 for ; Wed, 20 Jul 2005 14:37:08 -0700 Received: from maine.americas.sgi.com (maine.americas.sgi.com [128.162.232.87]) by flecktone.americas.sgi.com (8.12.9/8.12.10/SGI_generic_relay-1.2) with ESMTP id j6KJh8DN12562192; Wed, 20 Jul 2005 14:43:08 -0500 (CDT) Received: from hch by maine.americas.sgi.com with local (Exim 3.36 #1 (Debian)) id 1DvKTE-0002Xp-00; Wed, 20 Jul 2005 14:43:08 -0500 To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@fido.engr.sgi.com Subject: PARTIAL TAKE 936584 - Message-Id: From: Christoph Hellwig Date: Wed, 20 Jul 2005 14:43:08 -0500 X-archive-position: 5676 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: hch@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 522 Lines: 16 delay I/O completion for unwritten extents until after we issue the transaction to convert to a written extent Date: Wed Jul 20 12:42:41 PDT 2005 Workarea: maine.americas.sgi.com:/home/daisy40/hch/ptools/xfs-2.4.x Inspected by: felixb The following file(s) were checked into: bonnie.engr.sgi.com:/isms/linux/2.4.x-xfs Modid: xfs-linux:xfs-kern:196144a fs/xfs/linux-2.4/xfs_aops.c - 1.90 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.4/xfs_aops.c.diff?r1=text&tr1=1.90&r2=text&tr2=1.89&f=h
charset=ISO-8859-1 Content-Disposition: inline References: <20050720122414.GC8649@chihiro.cern.ch> Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j6LCqsH9024340 X-archive-position: 5677 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: askxfs@gmail.com Precedence: bulk X-list: linux-xfs Content-Length: 512 Lines: 25 That worked fine, alvaro@blade01:/dataentry/imagefiles> time ~alvaro/listdir | wc -l 3305187 real 0m4.054s user 0m1.902s sys 0m1.568s anyway to copy the files using this method ? As a matter of fact, listing the files is not my concern, backup is what worries me, I have tar running for 36hours now, and only got 117gigs so far... Thanks On 7/20/05, KELEMEN Peter wrote: > Try the attached program like: > > cd /my/dir/with/3m/files; /path/to/listdir > > Peter > From owner-linux-xfs@oss.sgi.com Thu Jul 21 16:49:05 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 21 Jul 2005 16:49:08 -0700 (PDT) Received: from mail.rmic.com (dero.rmic.com [207.235.76.34]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6LNn1H9015629 for ; Thu, 21 Jul 2005 16:49:05 -0700 X-WSS-ID: 6EFEEC6F1AC1988673-01-02 Date: Thu, 21 Jul 2005 19:46:46 -0400 From: "Email Firewall Notifier" To: linux-xfs@oss.sgi.com Message-ID: <6EFEEC6C1AC1988674-01@EMF_rmic.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="_-==6EFEEC6C1AC31063==-_" Subject: EMF Notification X-archive-position: 5678 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: emailfirewall-notifier@rmic.com Precedence: bulk X-list: linux-xfs Content-Length: 730 Lines: 21 --_-==6EFEEC6C1AC31063==-_ Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Sender Note - Inbound Virus Found Attention: A virus was detected in a message you recently sent to our location. The infected message was quarantined and will not be delivered to the recipients at this organization. Please run an antivirus program immediately to scan your desktop for known viruses. After you have ensured that your desktop is virus-free, you can resend the message. If you need assistance, please contact your mail administrator. 
Virus information on this infected message follows: Virus Scanner found the W32/Mydoom.o@MM!zip virus in the attached file: rmic.com.zip --_-==6EFEEC6C1AC31063==-_-- From owner-linux-xfs@oss.sgi.com Thu Jul 21 18:34:04 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 21 Jul 2005 18:34:09 -0700 (PDT) Received: from jokes.com (71-32-106-194.albq.qwest.net [71.32.106.194]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6M1XcH9022700; Thu, 21 Jul 2005 18:33:50 -0700 From: "Lang Michael" To: "Bogdanoff Dimitriy" Subject: Re[3]: discussion about his pills Date: Fri, 22 Jul 2005 20:55:21 +1100 Message-ID: <2ef701c58f3a$34e34a08$54007977@jokes.com> MIME-Version: 1.0 X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.2527 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.2527 Content-Type: text/plain Content-Disposition: inline Content-Transfer-Encoding: 7bit X-archive-position: 5679 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: Mey@ski.com.au Precedence: bulk X-list: linux-xfs Content-Length: 2080 Lines: 84 arm Inc e Yo xual Des Spe ume by % reas ur Se ire and rm vol 500 100 ural and de Eff - in con t to wel wn bra % Nat No Si ects tras l-kno nds. Expe ce thr es lon gas rien ee tim ger or ms Wor de shi g wit hou ld Wi ppin hin 24 rs SP -M UR The we and Saf Wa Ph acy is Ne st The est y of arm Inc e Yo xual Des Spe ume by % reas ur Se ire and rm vol 500 100 ural and de Eff - in con t to wel wn bra % Nat No Si ects tras l-kno nds. Expe ce thr es lon gas rien ee tim ger or ms Wor de shi g wit hou ld Wi ppin hin 24 rs SP -M UR The we and Saf Wa Ph acy is Ne st The est y of arm Inc e Yo xual Des Spe ume by % reas ur Se ire and rm vol 500 100 ural and de Eff - in con t to wel wn bra % Nat No Si ects tras l-kno nds. Expe ce thr es lon gas rien ee tim ger or ms Wor de shi g wit hou ld Wi ppin hin 24 rs SP -M UR The we and Saf Wa Ph acy is Ne st The est y of arm Inc e Yo xual Des Spe ume by % reas ur Se ire and rm vol 500 100 ural and de Eff - in con t to wel wn bra % Nat No Si ects tras l-kno nds. Expe ce thr es lon gas rien ee tim ger or ms Wor de shi g wit hou ld Wi ppin hin 24 rs SP -M UR The we and Saf Wa Ph acy is Ne st The est y of arm Inc e Yo xual Des Spe ume by % reas ur Se ire and rm vol 500 100 ural and de Eff - in con t to wel wn bra % Nat No Si ects tras l-kno nds. Expe ce thr es lon gas rien ee tim ger or ms Wor de shi g wit hou ld Wi ppin hin 24 rs SP -M UR The we and Saf Wa Ph acy is Ne st The est y of arm Inc e Yo xual Des Spe ume by % reas ur Se ire and rm vol 500 100 ural and de Eff - in con t to wel wn bra % Nat No Si ects tras l-kno nds. Expe ce thr es lon gas rien ee tim ger or ms Wor de shi g wit hou ld Wi ppin hin 24 rs SP -M UR The we and Saf Wa Ph acy is Ne st The est y of arm Inc e Yo xual Des Spe ume by % reas ur Se ire and rm vol 500 100 ural and de Eff - in con t to wel wn bra % Nat No Si ects tras l-kno nds. 
Expe ce thr es lon gas rien ee tim ger or ms Wor de shi g wit hou ld Wi ppin hin 24 rs SP -M UR The we and Saf Wa Ph acy is Ne st The est y of arm Inc e Yo xual Des Spe ume by % reas [[HTML alternate version deleted]] From owner-linux-xfs@oss.sgi.com Sat Jul 23 04:35:55 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 23 Jul 2005 04:36:01 -0700 (PDT) Received: from mxfep01.bredband.com (mxfep01.bredband.com [195.54.107.70]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6NBZrH9020334 for ; Sat, 23 Jul 2005 04:35:55 -0700 Received: from [192.168.1.50] ([83.227.232.164] [83.227.232.164]) by mxfep01.bredband.com with ESMTP id <20050723113401.GBHA11632.mxfep01.bredband.com@[192.168.1.50]> for ; Sat, 23 Jul 2005 13:34:01 +0200 Message-ID: <42E22B2E.5080206@bredband.net> Date: Sat, 23 Jul 2005 13:34:06 +0200 From: Jonathan Selander User-Agent: Mozilla Thunderbird 1.0.2 (Windows/20050317) X-Accept-Language: en-us, en MIME-Version: 1.0 To: linux-xfs@oss.sgi.com Subject: XFS and LVM problem Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 5686 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: jonathan.selander@bredband.net Precedence: bulk X-list: linux-xfs Content-Length: 2476 Lines: 57 Hi, I have a quite large (~1.2TB) LVM volume on which I use an XFS filesystem. However, I noticed that after a couple of days of usage, the filesystem dies and has to be remounted to be used again. I don't get any hardware related errors in dmesg, all I get is this: ----------------------------------------------------------------------------------------- xfs_force_shutdown(dm-0,0x8) called from line 4073 of file fs/xfs/xfs_bmap.c. Return address = 0xe0bbe94b Filesystem "dm-0": Corruption of in-memory data detected. Shutting down filesystem: dm-0 Please umount the filesystem, and rectify the problem(s) xfs_force_shutdown(dm-0,0x1) called from line 353 of file fs/xfs/xfs_rw.c. Return address = 0xe0bbe94b XFS mounting filesystem dm-0 Starting XFS recovery on filesystem: dm-0 (dev: dm-0) XFS internal error XFS_WANT_CORRUPTED_GOTO at line 1610 of file fs/xfs/xfs_alloc.c. Caller 0xe0b51603 [] xfs_free_ag_extent+0x451/0x770 [xfs] [] xfs_free_extent+0xe3/0x110 [xfs] [] xfs_free_extent+0xe3/0x110 [xfs] [] kmem_zone_alloc+0x4c/0xc0 [xfs] [] xfs_efd_init+0x86/0x90 [xfs] [] xfs_trans_get_efd+0x38/0x50 [xfs] [] xlog_recover_process_efi+0x1fd/0x280 [xfs] [] xlog_recover_process_efis+0xaf/0xd0 [xfs] [] xlog_recover_finish+0x29/0xe0 [xfs] [] xfs_rtmount_inodes+0xbe/0xf0 [xfs] [] xfs_log_mount_finish+0x2c/0x30 [xfs] [] xfs_mountfs+0x81d/0xed0 [xfs] [] _atomic_dec_and_lock+0x32/0x70 [] xfs_readsb+0x198/0x200 [xfs] [] xfs_ioinit+0x1f/0x40 [xfs] [] xfs_mount+0x2ae/0x4c0 [xfs] [] vfs_mount+0x43/0x50 [xfs] [] linvfs_fill_super+0x9e/0x200 [xfs] [] snprintf+0x27/0x30 [] disk_name+0xb4/0xc0 [] sb_set_blocksize+0x2e/0x60 [] get_sb_bdev+0x100/0x150 [] linvfs_get_sb+0x30/0x40 [xfs] [] linvfs_fill_super+0x0/0x200 [xfs] [] do_kern_mount+0xa0/0x170 [] do_new_mount+0x77/0xc0 [] do_mount+0x174/0x1c0 [] copy_mount_options+0x63/0xc0 [] sys_mount+0x9f/0xe0 [] syscall_call+0x7/0xb Ending XFS recovery on filesystem: dm-0 (dev: dm-0) ----------------------------------------------------------------------------------------- Any idea what could be wrong? This is driving me quite mad. I run Debian 3.1 with a vanilla 2.6.12.2 kernel. 
From owner-linux-xfs@oss.sgi.com Sat Jul 23 14:37:39 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 23 Jul 2005 14:37:41 -0700 (PDT) Received: from flyingAngel.upjs.sk (rudnanet.customer.vol.cz [195.122.192.2]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6NLbbH9008876 for ; Sat, 23 Jul 2005 14:37:38 -0700 Received: by flyingAngel.upjs.sk (Postfix, from userid 500) id A91BF100159; Sat, 23 Jul 2005 23:35:41 +0200 (CEST) Received: from localhost (localhost [127.0.0.1]) by flyingAngel.upjs.sk (Postfix) with ESMTP id A6407180122; Sat, 23 Jul 2005 23:35:41 +0200 (CEST) Date: Sat, 23 Jul 2005 23:35:41 +0200 (CEST) From: Jan Derfinak X-X-Sender: ja@alienAngel.home.sk To: Eugene Melamud Cc: linux-xfs@oss.sgi.com Subject: Re: xfsdump slow on large filesystem. In-Reply-To: <20050718181758.7467.qmail@web50708.mail.yahoo.com> Message-ID: References: <20050718181758.7467.qmail@web50708.mail.yahoo.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 5687 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: ja@mail.upjs.sk Precedence: bulk X-list: linux-xfs Content-Length: 760 Lines: 25 On Mon, 18 Jul 2005, Eugene Melamud wrote: > Greetings, > > I am getting really miserable results from xfsdump when mirroring large filesystem over network. > The command I run on the source computer is this. > > xfsdump -A -J - /dev/sda1 | ssh node22 xfsrestore -J -A - /backup > http://oss.sgi.com/archives/linux-xfs/2005-05/msg00166.html ... > There is an initial slow down when xfsdump computes tree and inode attributes for transfer, but > after that I would think it should really kick in. May be it has something to do with operating > system, ( RHEL3 on source machine, CentOS4 on destination, xfsdump version 2.2.25-1). I know I > know,I should have gone with SUSE. If RHEL3 doesn't support ihashsize, you should try newer kernel. jan -- From owner-linux-xfs@oss.sgi.com Sat Jul 23 19:26:19 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sat, 23 Jul 2005 19:26:28 -0700 (PDT) Received: from omx2.sgi.com (omx2-ext.sgi.com [192.48.171.19]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6O2QIH9016738 for ; Sat, 23 Jul 2005 19:26:18 -0700 Received: from spindle.corp.sgi.com (spindle.corp.sgi.com [198.29.75.13]) by omx2.sgi.com (8.12.11/8.12.9/linux-outbound_gateway-1.1) with ESMTP id j6O4IrKH005534 for ; Sat, 23 Jul 2005 21:18:53 -0700 Received: from [127.0.0.1] (sshgate.corp.sgi.com [198.149.36.12]) by spindle.corp.sgi.com (SGI-8.12.5/8.12.9/generic_config-1.2) with ESMTP id j6O2NOJb78653701; Sat, 23 Jul 2005 19:23:25 -0700 (PDT) Message-ID: <42E2FB9B.9050008@sgi.com> Date: Sat, 23 Jul 2005 21:23:23 -0500 From: Eric Sandeen User-Agent: Mozilla Thunderbird 1.0.5 (Macintosh/20050711) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Jonathan Selander CC: linux-xfs@oss.sgi.com Subject: Re: XFS and LVM problem References: <42E22B2E.5080206@bredband.net> In-Reply-To: <42E22B2E.5080206@bredband.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-archive-position: 5688 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: sandeen@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 342 Lines: 12 Jonathan Selander wrote: > Any idea what could be wrong? This is driving me quite mad. I run Debian > 3.1 with a vanilla 2.6.12.2 kernel. 
Do you have CONFIG_LBD turned on? I have no idea how stable LVM is > 1T, frankly... anything > 1T starts bumping up against 32 bits & signs have you seen this problem only on the larger fs? -Eric From owner-linux-xfs@oss.sgi.com Sun Jul 24 21:36:52 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 24 Jul 2005 21:36:58 -0700 (PDT) Received: from boing.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6P4aoH9023065 for ; Sun, 24 Jul 2005 21:36:51 -0700 Received: from boing.melbourne.sgi.com (localhost [127.0.0.1]) by boing.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6P4YrfE3487085; Mon, 25 Jul 2005 14:34:54 +1000 (AEST) Received: (from tes@localhost) by boing.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id j6P4YpS63488316; Mon, 25 Jul 2005 14:34:51 +1000 (AEST) Date: Mon, 25 Jul 2005 14:34:51 +1000 From: Tim Shimmin To: jim@minter.demon.co.uk Cc: linux-xfs@oss.sgi.com Subject: Re: Deadlock on xfs_do_force_shutdown Message-ID: <20050725143451.N2249146@boing.melbourne.sgi.com> References: <200507181447.j6IElpH9014299@oss.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5.1i In-Reply-To: <200507181447.j6IElpH9014299@oss.sgi.com>; from jim@minter.demon.co.uk on Mon, Jul 18, 2005 at 03:45:52PM +0100 X-archive-position: 5690 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: tes@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 980 Lines: 32 Hi Jim, Thanks for the info. On Mon, Jul 18, 2005 at 03:45:52PM +0100, jim@minter.demon.co.uk wrote: > Hi, > > I got a deadlock on 2.6.10 where (due to some fs corruption somewhere > -- not the point of this e-mail) xfs_trans_delete_ail called > xfs_do_force_shutdown holding the AIL_LOCK. Later on, > xfs_trans_tail_ail was called, which went for AIL_LOCK again... > > The code path in question (though perhaps there are other possible ones) looks like: > xfs_trans_delete_ail holds AIL_LOCK > -> calls xfs_do_force_shutdown > -> calls xfs_log_force_umount > -> calls xlog_state_sync_all > -> calls xlog_state_release_iclog > -> calls xlog_assign_tail_lsn > -> calls xfs_trans_tail_ail > -> tries to take AIL_LOCK > Yes, we should be dropping the AIL_LOCK before calling xfs_force_shutdown() (instead of afterwards). I'll check in the fix shortly. (I believe others are currently looking into a scenario in which the item to be deleted is missing from the AIL.) 
--Tim From owner-linux-xfs@oss.sgi.com Sun Jul 24 22:18:03 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Sun, 24 Jul 2005 22:18:05 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6P5HvH9028379 for ; Sun, 24 Jul 2005 22:18:02 -0700 Received: from snort.melbourne.sgi.com (snort.melbourne.sgi.com [134.14.54.149]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id PAA03331; Mon, 25 Jul 2005 15:15:58 +1000 Received: from snort.melbourne.sgi.com (localhost [127.0.0.1]) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5) with ESMTP id j6P5FVol13521985; Mon, 25 Jul 2005 15:15:31 +1000 (EST) Received: (from tes@localhost) by snort.melbourne.sgi.com (SGI-8.12.5/8.12.5/Submit) id j6P5FTPp13528182; Mon, 25 Jul 2005 15:15:29 +1000 (EST) Date: Mon, 25 Jul 2005 15:15:29 +1000 (EST) From: Timothy Shimmin Message-Id: <200507250515.j6P5FTPp13528182@snort.melbourne.sgi.com> To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 940076 - xfs_trans_delete_ail should unlock AIL before calling xfs_force_shutdown X-archive-position: 5691 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: tes@snort.melbourne.sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 579 Lines: 16 Date: Mon Jul 25 15:14:14 AEST 2005 Workarea: snort.melbourne.sgi.com:/home/tes/isms/xfs-linux Inspected by: nathans@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-kern/xfs-linux-melb Modid: xfs-linux-melb:xfs-kern:23260a xfs_trans_ail.c - 1.74 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_trans_ail.c.diff?r1=text&tr1=1.74&r2=text&tr2=1.73&f=h - Need to unlock the AIL before calling xfs_force_shutdown() because when it goes to force out the log, and get the tail lsn, it will want to get the AIL lock. From owner-linux-xfs@oss.sgi.com Mon Jul 25 18:13:19 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 25 Jul 2005 18:13:28 -0700 (PDT) Received: from cfa.harvard.edu (cfa.harvard.edu [131.142.10.1]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6Q1DIH9026492 for ; Mon, 25 Jul 2005 18:13:18 -0700 Received: from titan (titan [131.142.24.40]) by cfa.harvard.edu (8.12.9-20030924/8.12.9/cfunix Mast-Sol 1.0) with ESMTP id j6Q1BJGL007033; Mon, 25 Jul 2005 21:11:19 -0400 (EDT) Date: Mon, 25 Jul 2005 21:11:19 -0400 (EDT) From: Gaspar Bakos Reply-To: gbakos@cfa.harvard.edu To: linux-raid@vger.kernel.org, linux-xfs@oss.sgi.com Subject: 3ware + RAID5 + xfs performance Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 5692 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: gbakos@cfa.harvard.edu Precedence: bulk X-list: linux-xfs Content-Length: 38716 Lines: 970 Dear all, The purpose of this email is twofold: - to share the results of the many tests I performed with a 3ware RAID card + RAID-5 + XFS, pushing for better file I/O, - and to initiate some brainstorming on what parameters can be tuned for getting a good performance out of this hardware under 2.6.* kernels. I started all these tests because the performance was quite poor, meaning that the write speed was slow, the read speed was barely acceptable, and the system load went very high (10.0) during bonnie++ tests. My questions are marked below with "Q". 1. 
There are many useful links related to the 3ware card and related anomalies. The bugzilla page: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=121434 contains some 260 comments. It is mostly 2.4 kernel and RHEL specific. 2. A newer description of the problem can be found in the thread: http://lkml.org/lkml/2005/4/20/110 http://openlab-debugging.web.cern.ch/openlab-debugging/raid/ by Andreas Hirstius. There was a nasty fls() bug, which was eliminated recently, and improved performance and stability. 3. There are recommendations by 3ware, which can be summarized in one line: "blockdev --setra 16384". http://www.3ware.com/reference/techlibrary.asp "Maximum Performance for Linux Kernel 2.6 Combined with XFS File System", which actually leads to a PDF that has a different title: "Benchmarking the 9000 controller with linux 2.6". Q: Any other useful links? Briefly, the hardware setup I use ================================= - Tyan S2882 Thunder K8S Pro motherboard - Dual AMD opteron CPUs - 4Gb RAM - 3ware 9500-8S 8 port serial ATA controller - 8 x 300GB ST3300831AS SATA Seagate disks in hardware RAID-5 More details at the end of this email. OS/setup ======= - Redhat FC3, first with 2.6.9-1.667smp kernel, then with all the upgrades, and finally a self-compiled 2.6.12.3 x86_64 kernel - XFS filesystem - Raid strip size = 64k, write-cache enabled Kernel config attached. ========================================================================== Tuneable parameters ==================== 1. Kernel itself. I tried 2.6.9-1.667smp, 2.6.11-1.14_FC3smp, and 2.6.12.3 (self-compiled) 1.a Kernel config (NUMA system, etc.) 2. Raid setup on the card. - Write-cache enabled? (I use "YES") - Raid strip size - firmware, bios, etc. on the card - staggered spinup (I use "YES", but the drives may not support it. I always "warm up" the unit before the tests, ) 3. 3ware driver version - 3w-9xxx_2.26.02.002 the older version in the kernels - 3w-9xxx_2.26.03.015fw from the 3ware website, containing the firmware as well. 4. Run-time kernel parameters (my device is /dev/sde): 4.a /sys/class/scsi_host/host6/ cmd_per_lun can_queue 4.b /sys/block/sde/queue/, e.g. iosched max_sectors_kb read_ahead_kb max_hw_sectors_kb nr_requests scheduler 4.c /sys/block/sde/device/ e.g. queue_depth 4.d Other params from the 2.4 kernel, if they have an alternative in 2.6: /proc/sys/vm/max-readahead Q: Anything else? 5. blockdev --setra This is possibly belongs to those points mentioned under 4.) 6. For not raw IO (dd), the XFS filesystem parameters. 7. Q: Anything crucial parameter i am missing? ========================================================================== Tests ===== I changed the following during the tests. It is not an orthogonal set of parameters, and I did not try everything with every combination. 
==========================================================================
Tests
=====

I changed the following during the tests. It is not an orthogonal set of
parameters, and I did not try everything with every combination.

- kernel
- raid strip size: 64K and 256K
- 3ware driver and firmware
- /sys/block/sde/queue/nr_requests
- blockdev --setra xxx /dev/sde
- XFS filesystem parameters

I used 5 bonnie++ commands to test not only simple IO but also combined
filesystem performance:

MOUNT=/mnt/3w1/un0
SIZE=20480
echo "Bonnie test for IO performance"
sync; time bonnie++ -m cfhat5 -n 0 -u 0 -r 4092 -s $SIZE -f -b -d $MOUNT
echo "Testing with zero size files"
sync; time bonnie++ -m cfhat5 -n 50:0:0:50 -u 0 -r 4092 -s 0 -b -d $MOUNT
echo "Testing with tiny files"
sync; time bonnie++ -m cfhat5 -n 20:10:1:20 -u 0 -r 4092 -s 0 -b -d $MOUNT
echo "Testing with 100Kb to 1Mb files"
sync; time bonnie++ -m cfhat5 -n 10:1000000:100000:10 -u 0 -r 4092 -s 0 -b -d $MOUNT
echo "Testing with 16Mb size files"
sync; time bonnie++ -m cfhat5 -n 1:17000000:17000000:10 -u 0 -r 4092 -s 0 -b -d $MOUNT

==========================================================================
System information during the tests
===================================

This is just to make sure the system is behaving OK, and to catch some
errors. Done only outside the recorded tests, so as not to affect the
results.

1. top, or cat /proc/loadavg, to see the load
2. iostat, iostat -x
3. vmstat
4. ps -eaf, if the system behaves strangely, as if locked

Q: Anything else recommended that is useful for checking healthy system
behaviour?

==========================================================================
Other testing tools?
====================

1. iozone (the mention of an Excel table in the man page made me uncertain
   whether to try it...)
2. dd for raw IO.

Q: What else?

==========================================================================
Conclusions in a nutshell
=========================

1. With any of the kernels below 2.6.12.3, on the ___ x86_64 ___
   architecture, the performance is poor. The load becomes huge, the system
   unresponsive, with kswapd0 and kswapd1 running at the top of "top".

2. "blockdev --setra 16384" does little more than increase the read speed
   from the disks, while also consuming much more CPU time. The write and
   re-write speeds do not change considerably. It is not really a solution
   when a system runs hw raid on an expensive card precisely to save CPU
   cycles for other tasks. (Otherwise we could use sw RAID-5 on JBOD, which
   is just much faster, at the cost of more CPU usage.)

3. The best I got during normal operation (no kswapd anomaly, no
   unresponsive system) was about 80Mb/s write, 40Mb/s rewrite and 350Mb/s
   read. However, this was with "blockdev --setra 4092" and 43% CPU usage.
   I would rather quote a more conservative 180Mb/s at setra 256 and 20%
   CPU.

4. I also made tests during the strip-size migration. Migration from 64kb
   to 256kb stripe size on a 2Tb array would take forever. The performance
   during this migration is really bad, regardless of what IO priority is
   set in the 3ware interface: 50Mb/s write, 8Mb/s rewrite (!) and 12Mb/s
   read. As I had no data yet to lose, it was much faster to reboot, delete
   the unit, create one with 256Kb stripe size, and initialize it.

5. The performance of the 3ware card seemed worse with the 256k strip size.
   Write: 68, rewrite: 21, read: 60Mb/s.

6. Changing /sys/block/sde/queue/nr_requests from 128 to 512 gives a
   moderate improvement. Going to higher numbers, such as 1024, does not
   make it any better.

==========================================================================
QUESTIONS:
=========

Q: Where is useful information on how to tune the various /sys/*
   parameters? What are recommended values for a 2Tb array running on a
   3ware card?
What are the relation between these parameters? Notably: nr_requests, can_queue, command_per_lun, max-readahead, etc. Q: Are there any benchmarks showing better (re)write performance on an eight disk SATA RAID-5 with similar capacity (2Tb)? Q: (mostly to 3ware/amcc inc.) Why is the 256K strip size so inefficient compared to the 64k? ========================================================================== TEST RESULTS ============ --------------------------------------------------------------------------- TEST2.1 ------- raid strip size = 64k blockdev --setra 256 /dev/sde /sys/block/sde/queue/nr_requests = 128 mkfs.xfs -f -b size=4k -d su=64k,sw=7 -i size=1k -l version=2 xfs_info /mnt/3w1/un0/ meta-data=/mnt/3w1/un0 isize=1024 agcount=32, agsize=16021136 blks = sectsz=512 data = bsize=4096 blocks=512676288, imaxpct=25 = sunit=16 swidth=112 blks, unwritten=1 naming =version 2 bsize=4096 log =internal bsize=4096 blocks=32768, version=2 = sectsz=512 sunit=16 blks realtime =none extsz=65536 blocks=0, rtextents=0 Testing with zero size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 100/100 577 5 +++++ +++ 914 5 763 6 +++++ +++ 97 0 real 24m32.187s user 0m0.365s sys 0m32.705s Testing with tiny files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 100:10:0/100 125 2 103182 100 824 7 127 2 84106 99 82 1 real 49m47.104s user 0m0.494s sys 1m5.833s Testing with 100Kb to 1Mb files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 10:1000000:100000/10 42 5 75 5 685 11 41 5 24 1 212 4 real 18m29.176s user 0m0.240s sys 0m45.138s 16Mb files: Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 1:17000000:17000000 4 14 7 14 461 39 4 15 5 10 562 43 Testing with 16Mb size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 1:17000000:17000000 3 14 7 14 522 40 4 14 6 11 493 39 real 13m43.331s user 0m0.455s sys 1m53.656s ----------------------------------------------------------------------------- TEST 2.2 -------- -> change inode size Strip size 64Kb blockdev --setra 256 /dev/sde /sys/block/sde/queue/nr_requests = 128 mkfs.xfs -f -b size=4k -d su=64k,sw=7 -i size=2k -l version=2 /dev/sde1 meta-data=/dev/sde1 isize=2048 agcount=32, agsize=16021136 blks = sectsz=512 data = bsize=4096 blocks=512676288, imaxpct=25 = sunit=16 swidth=112 blks, unwritten=1 naming =version 2 bsize=4096 log =internal log bsize=4096 blocks=32768, version=2 = sectsz=512 sunit=16 blks realtime =none extsz=65536 blocks=0, rtextents=0 Disk IO Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP cfhat5 20G 57019 97 75887 16 47033 10 35907 61 192411 22 311.6 0 Testing with zero size files Version 1.03 
------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 50/50 655 6 +++++ +++ 944 5 717 6 +++++ +++ 112 0 real 10m58.033s user 0m0.182s sys 0m16.954s Testing with tiny files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 20:10:1/20 111 2 +++++ +++ 805 7 107 2 +++++ +++ 126 1 real 9m23.056s user 0m0.105s sys 0m12.835s Testing with 100Kb to 1Mb files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 10:1000000:100000/10 44 5 221 13 504 7 43 5 22 1 164 2 real 17m25.308s user 0m0.207s sys 0m42.914s ==> Seq. read speed increased to 3x, seq. delete decreased Testing with 16Mb size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 1:17000000:17000000/10 4 14 10 20 450 34 4 14 5 9 419 34 real 13m24.856s user 0m0.483s sys 1m53.478s ==> Delete speed decreased. Seq. read speed somewhat increased. ==> No significant difference compared to smaller inode size. ----------------------------------------------------------------------------- TEST2.3 -------- Tests done while migrating from Stripe 64kB to Stripe 256kB. /sys/block/sde/queue/nr_requests = 128 blockdev --setra 256 /dev/sde Extremely slow. Bonnie test for IO performance Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP cfhat5 20G 53072 11 8848 1 12039 1 139.3 0 Testing with zero size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 50/50 289 3 +++++ +++ 603 3 444 4 +++++ +++ 77 0 real 17m19.235s user 0m0.186s sys 0m17.566s Testing with tiny files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 20:10:1/20 86 1 +++++ +++ 564 5 86 1 +++++ +++ 90 0 real 12m16.227s user 0m0.099s sys 0m12.125s Testing with 100Kb to 1Mb files Delete files in random order...done. 
Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 10:1000000:100000/10 29 3 13 0 466 6 25 3 11 0 125 2 real 41m4.151s user 0m0.255s sys 0m42.095s Testing with 16Mb size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 1:17000000:17000000/10 2 9 2 5 273 20 2 8 1 3 258 19 real 29m20.672s user 0m0.469s sys 1m49.345s ===> Disk IO becomes extreme slow when array is migrating strip size ----------------------------------------------------------------------------- TEST 2.4 -------- Tests done with 256Kb RAID array size blockdev --setra 256 /dev/sde /sys/block/sde/queue/nr_requests = 128 mkfs.xfs -f -b size=4k -d su=256k,sw=7 -i size=1k -l version=2 -L cfhat5_1_un0 /dev/sde1 meta-data=/dev/sde1 isize=1024 agcount=32, agsize=16021184 blks = sectsz=512 data = bsize=4096 blocks=512676288, imaxpct=25 = sunit=64 swidth=448 blks, unwritten=1 naming =version 2 bsize=4096 log =internal log bsize=4096 blocks=32768, version=2 = sectsz=512 sunit=64 blks realtime =none extsz=65536 blocks=0, rtextents=0 top - 11:54:04 up 11:31, 2 users, load average: 8.52, 7.56, 5.07 Tasks: 104 total, 1 running, 102 sleeping, 1 stopped, 0 zombie Cpu(s): 0.3% us, 4.0% sy, 0.0% ni, 0.7% id, 94.5% wa, 0.0% hi, 0.5% si Mem: 4010956k total, 3988284k used, 22672k free, 0k buffers Swap: 7823576k total, 224k used, 7823352k free, 3789640k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 30821 root 18 0 8312 916 776 D 5.3 0.0 1:21.60 bonnie++ 175 root 15 0 0 0 0 D 1.3 0.0 0:16.35 kswapd1 176 root 15 0 0 0 0 S 1.0 0.0 0:18.38 kswapd0 Bonnie test for IO performance Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP cfhat5 20G 68990 14 21157 5 60837 7 250.2 0 real 27m58.805s user 0m1.118s sys 1m58.749s Testing with zero size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 50/50 255 3 +++++ +++ 247 2 252 3 +++++ +++ 61 0 real 23m59.997s user 0m0.186s sys 0m26.721s ==> Much slower than 64kb size with setra=256 Testing with tiny files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 20:10:1/20 110 3 +++++ +++ 243 3 112 3 +++++ +++ 77 1 real 11m57.399s user 0m0.100s sys 0m17.356s ==> Much slower than 64kb size with setra=256 Testing with 100Kb to 1Mb files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 10:1000000:100000/10 36 5 77 5 232 4 40 5 35 2 92 2 real 18m25.701s user 0m0.238s sys 0m45.724s ==> Somewhat slower than 64kb size with setra=256 Testing with 16Mb size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 
1:17000000:17000000/10 4 15 3 6 227 18 3 14 2 4 155 13 real 20m11.168s user 0m0.508s sys 1m55.892s ==> Somewhat slower than 64kb size with setra=256 ==> Definitely inferior to the 64kb raid strip size ------------------------------------------------------------------------------ TEST2.5 ------- raid strip size = 256K Change su to 64k blockdev --setra 256 /dev/sde /sys/block/sde/queue/nr_requests = 128 mkfs.xfs -f -b size=4k -d su=64k,sw=7 -i size=1k -l version=2 -L cfhat5_1_un0 /dev/sde1 Bonnie test for IO performance Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP cfhat5 20G 72627 15 23325 5 63101 7 272.0 0 real 25m56.324s user 0m1.097s sys 1m57.267s ===> General IO was slightly faster with su=64k than su=256k Testing with zero size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 50/50 788 7 +++++ +++ 989 6 781 7 +++++ +++ 93 0 real 12m8.633s user 0m0.158s sys 0m16.578s ===> Filesystem is much faster with su=64k Testing with tiny files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 20:10:1/20 135 2 +++++ +++ 818 7 133 2 +++++ +++ 145 1 real 7m51.365s user 0m0.091s sys 0m12.182s ===> Filesystem is somewhat faster with su=64k Testing with 100Kb to 1Mb files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 10:1000000:100000/10 41 5 91 5 787 12 41 5 24 1 224 4 real 18m6.138s user 0m0.243s sys 0m42.042s ===> For larger files, it becomes almost indifferent if we use su=64k or su=256k Testing with 16Mb size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 1:17000000:17000000/10 4 14 3 6 476 34 3 11 2 5 546 40 real 19m37.665s user 0m0.548s sys 1m49.408s ===> For larger files, it becomes almost indifferent if we use su=64k or su=256k ------------------------------------------------------------------------------ TEST 2.6 --------- Tests done with 256Kb RAID array size blockdev --setra 1024 /dev/sde /sys/block/sde/queue/nr_requests = 128 blockdev --setra 1024 /dev/sde mkfs.xfs -f -b size=4k -d su=256k,sw=7 -i size=1k -l version=2 -L cfhat5_1_un0 /dev/sde1 meta-data=/dev/sde1 isize=1024 agcount=32, agsize=16021184 blks = sectsz=512 data = bsize=4096 blocks=512676288, imaxpct=25 = sunit=64 swidth=448 blks, unwritten=1 naming =version 2 bsize=4096 log =internal log bsize=4096 blocks=32768, version=2 = sectsz=512 sunit=64 blks realtime =none extsz=65536 blocks=0, rtextents=0 Bonnie test for IO performance Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP cfhat5 20G 68794 14 26139 6 118452 14 255.5 0 real 22m2.101s user 0m1.268s sys 1m58.232s => Speed increased compared to TEST 2.4 (setra 256). CPU % didn't increase. 
Testing with zero size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 50/50 253 3 +++++ +++ 247 2 251 3 +++++ +++ 60 0 real 24m14.398s user 0m0.178s sys 0m27.186s => No change compared to 2.4 Testing with tiny files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 20:10:1/20 112 3 +++++ +++ 241 3 109 3 +++++ +++ 71 1 real 12m21.663s user 0m0.089s sys 0m17.502s => No change. Testing with 100Kb to 1Mb files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 10:1000000:100000/10 39 5 90 5 237 4 37 5 32 1 82 1 real 18m47.223s user 0m0.260s sys 0m45.430s => No change. Testing with 16Mb size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 1:17000000:17000000/10 4 13 6 12 215 16 4 14 5 9 171 13 real 14m21.865s user 0m0.474s sys 1m49.301s ==> Improved. ------------------------------------------------------------------------------ TEST 2.6 -------- Back to raid-strip = 64k /sys/block/sde/queue/nr_requests = 128 mkfs.xfs -f -b size=4k -d su=64k,sw=7 -i size=1k -l version=2 -L cfhat5_1_un0 /dev/sde1 blockdev --setra 256 /dev/sde top - 10:51:03 up 8:06, 3 users, load average: 9.69, 4.18, 1.63 Tasks: 128 total, 1 running, 127 sleeping, 0 stopped, 0 zombie Cpu(s): 0.2% us, 5.0% sy, 0.0% ni, 5.2% id, 88.5% wa, 0.0% hi, 1.2% si Mem: 4010956k total, 3987456k used, 23500k free, 52k buffers Swap: 7823576k total, 224k used, 7823352k free, 3677224k cached System stays responsive despite the giant load. 
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 5757 root 18 0 8308 916 776 D 6.3 0.0 0:35.69 bonnie++ 176 root 15 0 0 0 0 D 1.3 0.0 0:05.27 kswapd0 175 root 15 0 0 0 0 S 1.0 0.0 0:05.64 kswapd1 Bonnie test for IO performance Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP cfhat5 20G 65322 14 46177 10 183637 21 293.2 0 real 15m23.264s user 0m1.118s sys 1m58.544s Testing with zero size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 50/50 701 6 +++++ +++ 983 5 733 6 +++++ +++ 111 0 real 10m56.735s user 0m0.171s sys 0m15.877s Testing with tiny files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 20:10:1/20 109 2 +++++ +++ 824 7 108 2 +++++ +++ 147 1 real 8m58.359s user 0m0.107s sys 0m12.546s Testing with 100Kb to 1Mb files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 10:1000000:100000/10 45 5 214 13 642 9 45 5 22 1 211 3 real 16m59.573s user 0m0.230s sys 0m42.618s Testing with 16Mb size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 1:17000000:17000000/10 4 13 11 20 467 32 4 13 5 9 416 30 real 13m15.243s user 0m0.534s sys 1m47.777s ------------------------------------------------------------------------------ TEST 2.7 --------- Change setra: blockdev --setra 4092 /dev/sde raid-strip = 64k /sys/block/sde/queue/nr_requests = 128 mkfs.xfs -f -b size=4k -d su=64k,sw=7 -i size=1k -l version=2 -L cfhat5_1_un0 /dev/sde1 [root@cfhat5 diskio]# iostat -x /dev/sde Linux 2.6.12.3-GB2 (cfhat5) 07/25/2005 avg-cpu: %user %nice %sys %iowait %idle 0.29 0.04 1.00 4.88 93.80 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sde 0.04 903.28 19.74 44.03 4757.48 8632.40 2378.74 4316.20 209.94 7.73 121.17 1.96 12.51 Bonnie test for IO performance Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP cfhat5 20G 66303 13 41254 9 345730 41 274.7 0 real 15m21.055s user 0m1.114s sys 1m57.199s ==> Write does not change. Rewrite decreases. Read increases. 
Testing with zero size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 50/50 624 6 +++++ +++ 904 5 727 6 +++++ +++ 113 0 real 10m59.528s user 0m0.189s sys 0m16.520s Testing with tiny files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 20:10:1/20 111 2 +++++ +++ 798 7 102 2 +++++ +++ 143 1 real 9m12.536s user 0m0.120s sys 0m12.467s Testing with 100Kb to 1Mb files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 10:1000000:100000/10 46 6 323 20 686 10 43 5 30 1 207 3 real 14m42.960s user 0m0.262s sys 0m42.090s Testing with 16Mb size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 1:17000000:17000000/10 4 14 20 40 524 38 4 13 11 21 492 35 real 10m42.784s user 0m0.453s sys 1m51.078s ------------------------------------------------------------------------------ TEST 2.8 --------- echo 512 > /sys/block/sde/queue/nr_requests raid-strip = 64k mkfs.xfs -f -b size=4k -d su=64k,sw=7 -i size=1k -l version=2 -L cfhat5_1_un0 /dev/sde1 blockdev --setra 4092 /dev/sde Bonnie test for IO performance Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP cfhat5 20G 78573 16 42444 9 353894 42 284.6 0 real 14m14.938s user 0m1.213s sys 1m55.382s Testing with zero size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 50/50 623 6 +++++ +++ 894 5 739 6 +++++ +++ 123 0 real 10m25.379s user 0m0.186s sys 0m16.846s Testing with tiny files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 20:10:1/20 107 2 +++++ +++ 835 7 100 1 +++++ +++ 159 1 real 9m7.268s user 0m0.104s sys 0m12.589s Testing with 100Kb to 1Mb files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 10:1000000:100000/10 47 6 324 19 697 10 44 5 35 2 232 4 real 13m41.706s user 0m0.234s sys 0m42.614s Testing with 16Mb size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 1:17000000:17000000/10 4 14 19 38 448 32 4 13 11 21 506 36 real 10m40.404s user 0m0.469s sys 1m51.098s ------------------------------------------------------------------------------ TEST 2.9 --------- echo 1024 > /sys/block/sde/queue/nr_requests raid-strip = 64k mkfs.xfs -f -b size=4k -d su=64k,sw=7 -i size=1k -l version=2 -L cfhat5_1_un0 /dev/sde1 blockdev --setra 4092 /dev/sde Bonnie 
test for IO performance Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP cfhat5 20G 79546 16 41227 9 351637 43 285.0 0 real 14m26.609s user 0m1.136s sys 1m57.398s ==> No improvement Testing with zero size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 50/50 616 5 +++++ +++ 880 5 748 6 +++++ +++ 123 0 real 10m25.469s user 0m0.186s sys 0m16.723s Testing with tiny files cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 20:10:1/20 99 2 +++++ +++ 779 7 104 2 +++++ +++ 165 1 real 9m12.385s user 0m0.111s sys 0m12.947s Testing with 100Kb to 1Mb files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 10:1000000:100000/10 47 6 316 20 616 9 47 6 36 2 248 4 real 13m22.360s user 0m0.231s sys 0m43.679s Testing with 16Mb size files Version 1.03 ------Sequential Create------ --------Random Create-------- cfhat5 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 1:17000000:17000000/10 3 13 16 31 386 27 4 13 11 22 558 40 real 11m1.018s user 0m0.464s sys 1m49.534s ============================================================================ Hardware info ============= [root@cfhat5 diskio]# cat /proc/cpuinfo processor : 0 vendor_id : AuthenticAMD cpu family : 15 model : 5 model name : AMD Opteron(tm) Processor 246 stepping : 10 cpu MHz : 1991.008 cache size : 1024 KB fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext lm 3dnowext 3dnow bogomips : 3915.77 TLB size : 1024 4K pages clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: ts fid vid ttp processor : 1 vendor_id : AuthenticAMD cpu family : 15 model : 5 model name : AMD Opteron(tm) Processor 246 stepping : 10 cpu MHz : 1991.008 cache size : 1024 KB fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext lm 3dnowext 3dnow bogomips : 3973.12 TLB size : 1024 4K pages clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: ts fid vid ttp ----------------------------------------------------------- [root@cfhat5 diskio]# cat /sys/class/scsi_host/host6/stats 3w-9xxx Driver version: 2.26.03.015fw Current commands posted: 0 Max commands posted: 79 Current pending commands: 0 Max pending commands: 1 Last sgl length: 2 Max sgl length: 32 Last sector count: 0 Max sector count: 256 SCSI Host Resets: 0 AEN's: 0 -------------------------- 3ware card info Model 9500S-8 Serial # L19403A5100293 Firmware FE9X 2.06.00.009 Driver 2.26.03.015fw BIOS BE9X 2.03.01.051 Boot Loader BL9X 2.02.00.001 Memory Installed 112 MB # of Ports 8 # of Units 1 # of Drives 8 Write cache enabled Auto-spin up enabled, 2 sec between spin-up Drives, however, probably do not support spinup. 
------------------------------- Disks: Drive Information (Controller ID 6) Port Model Capacity Serial # Firmware Unit Status 0 ST3300831AS 279.46 GB 3NF0BZYJ 3.02 0 OK 1 ST3300831AS 279.46 GB 3NF0AC04 3.01 0 OK 2 ST3300831AS 279.46 GB 3NF0A7JE 3.01 0 OK 3 ST3300831AS 279.46 GB 3NF0ABT1 3.01 0 OK 4 ST3300831AS 279.46 GB 3NF0A63J 3.01 0 OK 5 ST3300831AS 279.46 GB 3NF0ACC5 3.01 0 OK 6 ST3300831AS 279.46 GB 3NF09FLP 3.01 0 OK 7 ST3300831AS 279.46 GB 3NF046WY 3.01 0 OK ---------------------------------- [root@cfhat5 diskio]# vmstat procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 380 3781540 0 58004 0 0 2712 3781 243 216 0 2 91 7 [root@cfhat5 diskio]# free total used free shared buffers cached Mem: 4010956 229532 3781424 0 0 58004 -/+ buffers/cache: 171528 3839428 Swap: 7823576 380 7823196 ============================================================================ Kernel config See http://www.cfa.harvard.edu/~gbakos/diskio/ From owner-linux-xfs@oss.sgi.com Mon Jul 25 21:36:27 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Mon, 25 Jul 2005 21:36:33 -0700 (PDT) Received: from smtp2.es.uci.edu (smtp2.es.uci.edu [128.200.80.5]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6Q4aMH9010095 for ; Mon, 25 Jul 2005 21:36:27 -0700 Received: from [42.47.251.58] (wireless-am3.ucsd.edu [128.54.48.7]) (authenticated bits=0) by smtp2.es.uci.edu (8.12.8/8.12.8) with ESMTP id j6Q4YKdO030601 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NOT); Mon, 25 Jul 2005 21:34:20 -0700 X-UCInetID: hmangala From: Harry Mangalam Organization: tacg Informatics To: gbakos@cfa.harvard.edu Subject: Re: 3ware + RAID5 + xfs performance Date: Mon, 25 Jul 2005 21:34:13 -0700 User-Agent: KMail/1.7.2 Cc: linux-raid@vger.kernel.org, linux-xfs@oss.sgi.com References: In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Disposition: inline Message-Id: <200507252134.13869.hjm@tacgi.com> X-NACS_ES-MailScanner: No viruses found X-MailScanner-From: hjm@tacgi.com Content-Transfer-Encoding: 8bit X-MIME-Autoconverted: from quoted-printable to 8bit by oss.sgi.com id j6Q4aRH9010101 X-archive-position: 5693 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: hjm@tacgi.com Precedence: bulk X-list: linux-xfs Content-Length: 2625 Lines: 65 I went thru the same hardware config gymnastics (confounded by bad disks, a bad controller, bad hotswap cages, and some SW config issues - see: http://maillists.uci.edu/mailman/public/uci-linux/2005-June/001067.html). The 3ware info is partially reported here: http://maillists.uci.edu/mailman/public/uci-linux/2005-June/001134.html and I believe I also sent a similar report to this list, but google doesn't show it. Here's the results of my bonnie tests on a similar system: 8x250GB WD SD series disks on a 3ware 9500S-8, on an IWILL 2x Opteron mobo, 4GB RAM, with XFS. The XFS parameters are with a 64K stripe to match the RAID card; other params more or less vanilla. 
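As an illustrative sketch only (not Harry's actual command line; the device
name is a placeholder), matching XFS to a 64K hardware stripe on an 8-drive
RAID-5 unit is normally done with the mkfs.xfs stripe options, where su is
the card's stripe size and sw the number of data-carrying disks (7 of the 8
in RAID-5), as in Gaspar's mkfs lines above:

    mkfs.xfs -f -d su=64k,sw=7 /dev/sda1    # su = RAID stripe size, sw = data disks
    mount -o noatime /dev/sda1 /raid        # noatime as used elsewhere in this thread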
Below are some bonnie timing results with differing filesystems (1 run with ext3, 3 with XFS with differnt file sizes - unwrap in an editor to compare in columns: XFS sand, 7000M,52315,99,104402,22,32242,8,32018,60,127998,20,435.5,0,80,1724,15,+++++, +++, 3248, 16,1718,15,+++++,+++,750, 4 ext3 sand, 7000M,40682,91, 47732,23,25432,9,38027,72,179352,27,311.1,0,80, 416,99,+++++,+++,53250,100, 423,99,+++++,+++,560,58 XFS sand, 7000M,50040,98,106324,24,33046,8,31269,59,112240,17,416.0,0,80,1930,17,+++++, +++, 3753, 17,1913,17,+++++,+++,449, 2 XFS sand, 15000M,51065,99,101659,24,26884,7,35344,69,141223,23,263.0,0,80,1666,14, +++++,+++, 4565, 22,1700,16,+++++,+++,793, 4 From my reading (see URL above for resourcelist), XFS is quite bad for tiny files - we use it for very large files (>GB size); using XFS for this would generally be a bad thing. We do NOT get tremendous performance out of it; but the performance is much better than with ext3 and the CPU usage is lower, sometimes dramtically so. Real life experience with some benchmarks confirms this - we get approximately ~ the same real life thruput as we do on a IBM SP2 8way module with a direct attach disk. We are now considering adding a local PVFS2 system to a small cluster for very fast IO under MPI On Monday 25 July 2005 18:11, Gaspar Bakos wrote: > Dear all, > > The purpose of this email is twofold: > - to share the results of the many tests I performed with >   a 3ware RAID card + RAID-5 + XFS, pushing for better file I/O, > - and to initiate some brainstorming on what parameters can be tuned for >   getting a good performance out of this hardware under 2.6.* kernels. > > I started all these tests because the performance was quite poor, meaning > that the write speed was slow, the read speed was barely acceptable, and > the system load went very high (10.0) during bonnie++ tests. > My questions are marked below with "Q". From owner-linux-xfs@oss.sgi.com Wed Jul 27 02:23:15 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 27 Jul 2005 02:23:20 -0700 (PDT) Received: from sommereik.ii.uib.no (sommereik.ii.uib.no [129.177.16.236]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6R9NEH9027772 for ; Wed, 27 Jul 2005 02:23:14 -0700 Received: from loko.ii.uib.no ([129.177.20.21]:47044) by sommereik.ii.uib.no with esmtps (TLSv1:AES256-SHA:256) (Exim 4.43) id 1Dxi67-000269-IO; Wed, 27 Jul 2005 11:21:07 +0200 Received: (from janfrode@localhost) by loko.ii.uib.no (8.12.11/8.12.11/Submit) id j6R9L4M1007010; Wed, 27 Jul 2005 11:21:04 +0200 Date: Wed, 27 Jul 2005 11:21:04 +0200 From: Jan-Frode Myklebust To: Harry Mangalam Cc: gbakos@cfa.harvard.edu, linux-raid@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: 3ware + RAID5 + xfs performance Message-ID: <20050727092104.GA6215@ii.uib.no> References: <200507252134.13869.hjm@tacgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200507252134.13869.hjm@tacgi.com> X-archive-position: 5694 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: Jan-Frode.Myklebust@bccs.uib.no Precedence: bulk X-list: linux-xfs Content-Length: 1401 Lines: 28 On Mon, Jul 25, 2005 at 09:34:13PM -0700, Harry Mangalam wrote: > I went thru the same hardware config gymnastics me too, but for RAID0, so here are my numbers for show-off, since I was quite impressed with them. HW is 3ware 8506-8 with 8 Maxtor 7Y250M0 250 GB drives, on a dual 2.4 GHz Xeon with 2 GB memory. 
Running RHEL3 (probably around update 1). 3ware CLI> maint createunit c0 rraid0 k64k p0:1:2:3:4:5:6:7 # mkfs.xfs -d sunit=128,swidth=1024 -l logdev=/dev/hdb1,version=2,size=18065b -f /dev/sda1 # mount -o noatime,logbufs=8,logdev=/dev/hdb1 /dev/sda1 /mnt/sda1 % bonnie++ -f -x 5 hydra.ii.uib.no,4G,,,185270,64,73385,28,,,167514,31,506.5,1,16,6076,34,+++++,+++,5274,28,6079,25,+++++,+++,4519,31 hydra.ii.uib.no,4G,,,211922,68,72763,28,,,170114,30,496.9,0,16,6118,31,+++++,+++,5307,45,6112,39,+++++,+++,4546,29 hydra.ii.uib.no,4G,,,207263,75,73464,28,,,177586,33,478.6,2,16,6072,32,+++++,+++,5291,31,6135,43,+++++,+++,4524,37 hydra.ii.uib.no,4G,,,213538,77,72984,28,,,176985,33,488.6,1,16,6147,43,+++++,+++,5277,23,6161,41,+++++,+++,4505,28 hydra.ii.uib.no,4G,,,202534,62,72048,27,,,164839,30,522.8,1,16,6130,46,+++++,+++,5274,33,6158,52,+++++,+++,4516,28 > We are now considering adding a local PVFS2 system to a small cluster for very > fast IO under MPI We use IBM's GPFS for our cluster (85 dual opterons + 3 storage nodes), and have only positive experience with it.. -jf From owner-linux-xfs@oss.sgi.com Wed Jul 27 09:17:50 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 27 Jul 2005 09:17:58 -0700 (PDT) Received: from smtp2.es.uci.edu (smtp2.es.uci.edu [128.200.80.5]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6RGHnH9008007 for ; Wed, 27 Jul 2005 09:17:50 -0700 Received: from [42.26.178.58] (wireless-am3.ucsd.edu [128.54.48.7]) (authenticated bits=0) by smtp2.es.uci.edu (8.12.8/8.12.8) with ESMTP id j6RGFZNa014460 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NOT); Wed, 27 Jul 2005 09:15:41 -0700 X-UCInetID: hmangala From: Harry Mangalam Organization: tacg Informatics To: Jan-Frode Myklebust Subject: Re: 3ware + RAID5 + xfs performance Date: Wed, 27 Jul 2005 09:15:31 -0700 User-Agent: KMail/1.7.2 Cc: linux-raid@vger.kernel.org, linux-xfs@oss.sgi.com References: <200507252134.13869.hjm@tacgi.com> <20050727092104.GA6215@ii.uib.no> In-Reply-To: <20050727092104.GA6215@ii.uib.no> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200507270915.32011.hjm@tacgi.com> X-NACS_ES-MailScanner: No viruses found X-MailScanner-From: hjm@tacgi.com X-archive-position: 5695 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: hjm@tacgi.com Precedence: bulk X-list: linux-xfs Content-Length: 2531 Lines: 58 That's very impressive! How many config iterations did you have to do to get this perf? You can't see it from his log of commands but one thing about XFS for tuning is that unlike ext3 or even reiserfs, it takes only a sec or so to create even a very large filesystem so you can try one set of params, run your tests then rewrite the filesystem with different parameters to try another. I should mention that we need redundant space more than speed and that our journal is on the RAID system - it looks like Jan has his on an external device which is rec for extra speed (journal writes don't compete for i/o bandwidth with data). I'd certainly consider GPFS but I was under the impression that it was only available for IBM-branded Linux boxes w/ customized kernels. Can you buy it a la carte? 
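To make the quick re-mkfs iteration mentioned a couple of paragraphs up
concrete, a rough sketch (device, mount point and su values are
placeholders, not anyone's actual test script) might look like:

    for su in 32k 64k 128k; do
        umount /mnt/test 2>/dev/null
        mkfs.xfs -f -d su=$su,sw=7 /dev/sda1      # re-mkfs takes only seconds on XFS
        mount /dev/sda1 /mnt/test
        bonnie++ -u 0 -s 8192 -f -b -d /mnt/test > bonnie-su-$su.out
    done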
hjm On Wednesday 27 July 2005 02:21, Jan-Frode Myklebust wrote: > On Mon, Jul 25, 2005 at 09:34:13PM -0700, Harry Mangalam wrote: > > I went thru the same hardware config gymnastics > > me too, but for RAID0, so here are my numbers for show-off, since I was > quite impressed with them. HW is 3ware 8506-8 with 8 Maxtor 7Y250M0 250 GB > drives, on a dual 2.4 GHz Xeon with 2 GB memory. Running RHEL3 (probably > around update 1). > > 3ware CLI> maint createunit c0 rraid0 k64k p0:1:2:3:4:5:6:7 > # mkfs.xfs -d sunit=128,swidth=1024 -l > logdev=/dev/hdb1,version=2,size=18065b -f /dev/sda1 # mount -o > noatime,logbufs=8,logdev=/dev/hdb1 /dev/sda1 /mnt/sda1 > % bonnie++ -f -x 5 > hydra.ii.uib.no,4G,,,185270,64,73385,28,,,167514,31,506.5,1,16,6076,34,++++ >+,+++,5274,28,6079,25,+++++,+++,4519,31 > hydra.ii.uib.no,4G,,,211922,68,72763,28,,,170114,30,496.9,0,16,6118,31,++++ >+,+++,5307,45,6112,39,+++++,+++,4546,29 > hydra.ii.uib.no,4G,,,207263,75,73464,28,,,177586,33,478.6,2,16,6072,32,++++ >+,+++,5291,31,6135,43,+++++,+++,4524,37 > hydra.ii.uib.no,4G,,,213538,77,72984,28,,,176985,33,488.6,1,16,6147,43,++++ >+,+++,5277,23,6161,41,+++++,+++,4505,28 > hydra.ii.uib.no,4G,,,202534,62,72048,27,,,164839,30,522.8,1,16,6130,46,++++ >+,+++,5274,33,6158,52,+++++,+++,4516,28 > > > We are now considering adding a local PVFS2 system to a small cluster for > > very fast IO under MPI > > We use IBM's GPFS for our cluster (85 dual opterons + 3 storage > nodes), and have only positive experience with it.. > > > -jf > - > To unsubscribe from this list: send the line "unsubscribe linux-raid" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html From owner-linux-xfs@oss.sgi.com Wed Jul 27 10:38:30 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 27 Jul 2005 10:38:35 -0700 (PDT) Received: from sommereik.ii.uib.no (sommereik.ii.uib.no [129.177.16.236]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6RHcTH9011878 for ; Wed, 27 Jul 2005 10:38:30 -0700 Received: from loko.ii.uib.no ([129.177.20.21]:47247) by sommereik.ii.uib.no with esmtps (TLSv1:AES256-SHA:256) (Exim 4.43) id 1DxppY-0000ZP-Oq; Wed, 27 Jul 2005 19:36:32 +0200 Received: (from janfrode@localhost) by loko.ii.uib.no (8.12.11/8.12.11/Submit) id j6RHaTq9015138; Wed, 27 Jul 2005 19:36:29 +0200 Date: Wed, 27 Jul 2005 19:36:29 +0200 From: Jan-Frode Myklebust To: Harry Mangalam Cc: linux-raid@vger.kernel.org, linux-xfs@oss.sgi.com Subject: Re: 3ware + RAID5 + xfs performance Message-ID: <20050727173629.GA14879@ii.uib.no> References: <200507252134.13869.hjm@tacgi.com> <20050727092104.GA6215@ii.uib.no> <200507270915.32011.hjm@tacgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200507270915.32011.hjm@tacgi.com> X-archive-position: 5696 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: Jan-Frode.Myklebust@bccs.uib.no Precedence: bulk X-list: linux-xfs Content-Length: 841 Lines: 22 On Wed, Jul 27, 2005 at 09:15:31AM -0700, Harry Mangalam wrote: > That's very impressive! How many config iterations did you have to do to get > this perf? Not too many I think.. only tried a few tunings of stripe-unit in the 3ware CLI, and scratched my heads a few times to try to understand the mkfs.xfs sunit/swidth-options.. 
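To spell out the head-scratching part (a sketch for illustration only):
sunit and swidth are given in 512-byte sectors, so for the 64k stripe over
8 RAID0 data disks above the numbers fall out as:

    #   sunit  = stripe unit / sector size  = 65536 / 512 = 128
    #   swidth = sunit * data disks         = 128 * 8     = 1024  (RAID0: all 8 drives carry data)
    mkfs.xfs -d sunit=128,swidth=1024 -l logdev=/dev/hdb1,version=2,size=18065b -f /dev/sda1

For the RAID-5 arrays discussed earlier in the thread only 7 of the 8 disks
carry data, which is why the su=64k,sw=7 form shows up in Gaspar's commands.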
> > I'd certainly consider GPFS but I was under the impression that it was only > available for IBM-branded Linux boxes w/ customized kernels. Can you buy it > a la carte? Yes, and maybe even for free from the IBM Scholars programme if you qualify. But yes, it does require a couple of "GPFS" kernel modules, which will typically only build for vendor kernels (i.e. not keeping up with the latest kernel.org releases). We're running a RHEL-3 clone, so that's not been a problem for us so far. -jf From owner-linux-xfs@oss.sgi.com Wed Jul 27 18:14:50 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 27 Jul 2005 18:14:56 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6S1EmH9008531 for ; Wed, 27 Jul 2005 18:14:49 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA24651; Thu, 28 Jul 2005 11:12:46 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16302) id A836249BB22B; Thu, 28 Jul 2005 11:24:31 +1000 (EST) To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 918919 - xfsdump Message-Id: <20050728012431.A836249BB22B@chook.melbourne.sgi.com> Date: Thu, 28 Jul 2005 11:24:31 +1000 (EST) From: nathans@sgi.com (Nathan Scott) X-archive-position: 5697 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 919 Lines: 20 Fix xfsrq use of setquota. Fix verbose mode in xfsrq also. Date: Thu Jul 28 11:12:33 AEST 2005 Workarea: chook.melbourne.sgi.com:/build/nathans/xfs-cmds Inspected by: wkendall@sgi.com The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:23283a xfsdump/VERSION - 1.71 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsdump/VERSION.diff?r1=text&tr1=1.71&r2=text&tr2=1.70&f=h xfsdump/doc/CHANGES - 1.78 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsdump/doc/CHANGES.diff?r1=text&tr1=1.78&r2=text&tr2=1.77&f=h xfsdump/quota/xfsrq.sh - 1.7 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsdump/quota/xfsrq.sh.diff?r1=text&tr1=1.7&r2=text&tr2=1.6&f=h xfsdump/debian/changelog - 1.54 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsdump/debian/changelog.diff?r1=text&tr1=1.54&r2=text&tr2=1.53&f=h From owner-linux-xfs@oss.sgi.com Wed Jul 27 21:37:43 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 27 Jul 2005 21:37:47 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6S4bfH9021639 for ; Wed, 27 Jul 2005 21:37:43 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA29202; Thu, 28 Jul 2005 14:35:40 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16302) id 7BE5749BB229; Thu, 28 Jul 2005 14:47:26 +1000 (EST) To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 940256 - fix xfs_db segv Message-Id: <20050728044726.7BE5749BB229@chook.melbourne.sgi.com> Date: Thu, 28 Jul 2005 14:47:26 +1000 (EST) From: nathans@sgi.com (Nathan Scott) X-archive-position: 5698 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: 
linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 468 Lines: 14 Do not dereference a null xfs_mount pointer if we fail to initialise. Date: Thu Jul 28 14:35:13 AEST 2005 Workarea: chook.melbourne.sgi.com:/build/nathans/xfs-cmds Inspected by: tes The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:23294a xfsprogs/db/init.c - 1.13 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/db/init.c.diff?r1=text&tr1=1.13&r2=text&tr2=1.12&f=h From owner-linux-xfs@oss.sgi.com Wed Jul 27 21:48:06 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Wed, 27 Jul 2005 21:48:08 -0700 (PDT) Received: from larry.melbourne.sgi.com (mverd138.asia.info.net [61.14.31.138]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with SMTP id j6S4m4H9022391 for ; Wed, 27 Jul 2005 21:48:05 -0700 Received: from chook.melbourne.sgi.com (chook.melbourne.sgi.com [134.14.54.237]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id OAA29448; Thu, 28 Jul 2005 14:46:01 +1000 Received: by chook.melbourne.sgi.com (Postfix, from userid 16302) id 8353A49BB229; Thu, 28 Jul 2005 14:57:47 +1000 (EST) To: linux-xfs@oss.sgi.com, sgi.bugs.xfs@engr.sgi.com Subject: TAKE 940257 - mkfs alignment checks Message-Id: <20050728045747.8353A49BB229@chook.melbourne.sgi.com> Date: Thu, 28 Jul 2005 14:57:47 +1000 (EST) From: nathans@sgi.com (Nathan Scott) X-archive-position: 5699 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: nathans@sgi.com Precedence: bulk X-list: linux-xfs Content-Length: 484 Lines: 14 Perform log stripe alignment checks on external as well as internal logs. Date: Thu Jul 28 14:45:47 AEST 2005 Workarea: chook.melbourne.sgi.com:/build/nathans/xfs-cmds Inspected by: tes The following file(s) were checked into: longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb Modid: master-melb:xfs-cmds:23296a xfsprogs/mkfs/xfs_mkfs.c - 1.66 - changed http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsprogs/mkfs/xfs_mkfs.c.diff?r1=text&tr1=1.66&r2=text&tr2=1.65&f=h From owner-linux-xfs@oss.sgi.com Thu Jul 28 15:10:48 2005 Received: with ECARTIS (v1.0.0; list linux-xfs); Thu, 28 Jul 2005 15:10:53 -0700 (PDT) Received: from mail.planetmirror.com (silk.planetmirror.com [203.16.234.18]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id j6SMAjH9029606 for ; Thu, 28 Jul 2005 15:10:48 -0700 Received: by mail.planetmirror.com (Postfix, from userid 90) id 65536A8501; Fri, 29 Jul 2005 08:08:45 +1000 (EST) Received: from silk.planetmirror.com (silk.planetmirror.com [203.16.234.18]) by mail.planetmirror.com (Postfix) with ESMTP id 7BA70A8500; Fri, 29 Jul 2005 08:08:44 +1000 (EST) Date: Fri, 29 Jul 2005 08:08:44 +1000 (EST) From: Dan Goodes X-X-Sender: dang@silk.planetmirror.com To: linux-xfs@oss.sgi.com Cc: PlanetMirror Support Subject: Trying to contact XFS Mirror Maintainers Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-archive-position: 5700 X-ecartis-version: Ecartis v1.0.0 Sender: linux-xfs-bounce@oss.sgi.com Errors-to: linux-xfs-bounce@oss.sgi.com X-original-sender: support@planetmirror.com Precedence: bulk X-list: linux-xfs Content-Length: 444 Lines: 18 Hi Folks, We're trying to get in touch with the folks who maintain oss.sgi.com::xfsftp/ - or the main rsync server at oss.sgi.com. 
The archive has recently changed, and we're trying to work out how to migrate our local copy. Thanks! :-) Regards, Dan Goodes : Systems Programmer : dang@planetmirror.com Help support PlanetMirror - Australia's largest Internet archive by signing up for PlanetMirror Premium : http://planetmirror.com
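As a rough sketch only (the local path is a placeholder, and the options are
not PlanetMirror's actual configuration), re-syncing a local copy against
the reorganised module could be tested with a dry run first:

    rsync -avn --delete oss.sgi.com::xfsftp/ /data/mirror/xfs/    # dry run, shows what would change
    rsync -av  --delete oss.sgi.com::xfsftp/ /data/mirror/xfs/    # real transfer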