
Results:

References: [ +subject:/^(?:^\s*(re|sv|fwd|fw)[\[\]\d]*[:>-]+\s*)*xfsdump\s+stuck\s+in\s+io_schedule\s*$/: 62 ]

Total 62 documents matching your query.

1. tdown in xfs_trans_cancel (score: 1)
Author: Zlatko Calusic <zlatko.calusic@xxxxxxxx>
Date: Fri, 15 Nov 2002 11:15:29 +0100
This is my first post to linux-xfs, so please be gentle. First of all, I would like to thank you guys for bringing us such a high performance file system. And thanks SGI for making it GPL. Back to t…
/archives/xfs/2002-11/msg00257.html (8,685 bytes)

2. xfsdump stuck in io_schedule (score: 1)
Author: Chris Wedgwood <cw@xxxxxxxx>
Date: Fri, 15 Nov 2002 03:21:34 -0800
I too had (have?) this problem. I tried to reproduce it with CVS of late (last three days) and so far haven't been able to. I'm now running kdb kernels though in the hope to get something useful when…
/archives/xfs/2002-11/msg00259.html (9,288 bytes)

3. with fsstress in scsi, the process is locked. (score: 1)
Author: Zlatko Calusic <zlatko.calusic@xxxxxxxx>
Date: Fri, 15 Nov 2002 13:07:57 +0100
What kernel do you use? Hm, before answering that, I'll need a little help to compile a kernel with xfs from cvs. How do I proceed: get everything from cvs and put it in place of the in-kernel tree? I thought…
/archives/xfs/2002-11/msg00260.html (9,767 bytes)

4. is locked. (score: 1)
Author: Andi Kleen <ak@xxxxxxx>
Date: Fri, 15 Nov 2002 13:52:33 +0100
It somehow manages to run out of memory and then blocks trying to free some. You can do cat /proc/slabinfo and see if there are any suspicious leaks of objects (compare before and after the dump). You ca…
/archives/xfs/2002-11/msg00261.html (9,350 bytes)
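Andi's before/after comparison of /proc/slabinfo is easy to script. The sketch below is my illustration, not code from the thread: it parses two snapshots (the usual "name active_objs num_objs ..." column layout) and reports which caches grew, largest first.

```python
# Diff two /proc/slabinfo snapshots and report caches whose total
# object count grew, largest growth first. Column positions follow
# the usual "name active_objs num_objs ..." slabinfo layout.

def parse_slabinfo(text):
    """Map cache name -> num_objs, skipping the header lines."""
    counts = {}
    for line in text.splitlines():
        if not line or line.startswith(("slabinfo", "#")):
            continue
        fields = line.split()
        counts[fields[0]] = int(fields[2])  # num_objs column
    return counts

def slab_growth(before, after, min_growth=1):
    """(name, delta) pairs for caches that grew, sorted by delta."""
    b = parse_slabinfo(before)
    a = parse_slabinfo(after)
    deltas = [(name, n - b.get(name, 0)) for name, n in a.items()]
    return sorted((d for d in deltas if d[1] >= min_growth),
                  key=lambda d: -d[1])
```

Run it on snapshots saved before and after an xfsdump; a cache such as xfs_inode climbing by tens of thousands of objects between the two would be the kind of suspicious growth Andi suggests looking for.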

5. io_schedule (score: 1)
Author: Zlatko Calusic <zlatko.calusic@xxxxxxxx>
Date: Fri, 15 Nov 2002 14:01:51 +0100
Yes, I'll try to see if slabinfo can discover any leak, although I'm not quite sure we have a leak problem here, it's more like some part of xfs code tries to allocate memory too fast, and kernel can…
/archives/xfs/2002-11/msg00262.html (10,526 bytes)

6. io_schedule (score: 1)
Author: Andi Kleen <ak@xxxxxxx>
Date: Fri, 15 Nov 2002 14:06:35 +0100
I guess xfsdump pins both a lot of pages and a lot of inodes/dentries (similar to a program that opens a few thousand files and mlocks significant parts of its address space). May not be the best te…
/archives/xfs/2002-11/msg00263.html (10,008 bytes)

7. io_schedule (score: 1)
Author: Zlatko Calusic <zlatko.calusic@xxxxxxxx>
Date: Fri, 15 Nov 2002 16:28:38 +0100
Oh, yes, you're completely right, of course. It's the pinned part of the page cache that puts big pressure on memory. A whole lot of inactive page cache pages (~700MB in my case) is not really good in…
/archives/xfs/2002-11/msg00264.html (11,303 bytes)

8. io_schedule (score: 1)
Author: Andi Kleen <ak@xxxxxxx>
Date: Fri, 15 Nov 2002 16:40:12 +0100
It may be possible to just hack the user space program to limit the data currently in flight, but that would likely impact performance somewhat. Better than doing no backups though. -Andi
/archives/xfs/2002-11/msg00265.html (10,130 bytes)
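Andi's idea of having the user-space program limit the data in flight can be sketched in a few lines. This is a hypothetical illustration, not xfsdump's actual code: a copy loop that, every `window` bytes, fsyncs and drops the written range from the page cache so dirty/pinned memory stays bounded. The chunk and window sizes are arbitrary.

```python
import os

# Copy src to dst while bounding the data "in flight": after every
# `window` bytes, fsync and drop the just-written pages from the page
# cache so dirty memory stays roughly bounded. Sizes are illustrative;
# a real backup tool would tune them (and lose some throughput, as
# Andi notes).

def copy_bounded(src_path, dst_path, chunk=64 * 1024, window=4 * 1024 * 1024):
    written_since_sync = 0
    total = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            buf = src.read(chunk)
            if not buf:
                break
            dst.write(buf)
            total += len(buf)
            written_since_sync += len(buf)
            if written_since_sync >= window:
                dst.flush()
                os.fsync(dst.fileno())  # force dirty pages out now
                if hasattr(os, "posix_fadvise"):  # Linux and friends
                    os.posix_fadvise(dst.fileno(), 0, 0,
                                     os.POSIX_FADV_DONTNEED)
                written_since_sync = 0
        dst.flush()
        os.fsync(dst.fileno())
    return total
```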

9. ext (e.g. HSM) (score: 1)
Author: Chris Wedgwood <cw@xxxxxxxx>
Date: Fri, 15 Nov 2002 12:33:50 -0800
Presently I'm running a kernel compiled from CVS as of yesterday, but the problem I found went away two days ago or so (using whatever was current then). Something like: cvs -d :pserver:cvs@xxxxxxxxx…
/archives/xfs/2002-11/msg00270.html (9,994 bytes)

10. butes: process vs. kernel context (e.g. HSM) (score: 1)
Author: Steve Lord <lord@xxxxxxx>
Date: 15 Nov 2002 14:40:10 -0600
It varies, but that is about right. Steve -- Steve Lord voice: +1-651-683-3511 Principal Engineer, Filesystem Software email: lord@xxxxxxx
/archives/xfs/2002-11/msg00271.html (9,375 bytes)

11. (e.g. HSM) (score: 1)
Author: Chris Wedgwood <cw@xxxxxxxx>
Date: Fri, 15 Nov 2002 12:45:01 -0800
It's something else (or was) I think. Under memory pressure (xfsdump to another filesystem is a good way to show this) various allocations fail but don't seem to be harmful. To get these messages typ…
/archives/xfs/2002-11/msg00272.html (9,989 bytes)

12. xfs performance on scsi in 2.4.20-rc1 (score: 1)
Author: Andrew Morton <akpm@xxxxxxxxx>
Date: Fri, 15 Nov 2002 14:32:34 -0800
Does xfs_dump actually pin 700 megs of memory?? If someone could provide a detailed description of what xfs_dump is actually doing internally, that may help me shed some light. xfs_dump is actually u…
/archives/xfs/2002-11/msg00275.html (10,334 bytes)

13. to 2.4.20-rc2 (score: 1)
Author: Steve Lord <lord@xxxxxxx>
Date: 15 Nov 2002 16:42:35 -0600
Hmm, a detailed description of xfsdump would take a long time. However, it is reading the filesystem via system calls. It uses an ioctl to read inodes from disk in chunks, it does not do a directory…
/archives/xfs/2002-11/msg00278.html (11,633 bytes)
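The chunked, cursor-driven inode iteration Steve describes is the essence of XFS's bulkstat interface (in C, an ioctl on the filesystem). As a hedge: the sketch below only models the shape of the loop against a fake in-memory inode table; it is not the real kernel interface, which also returns full stat data per inode.

```python
# Model of the bulkstat loop: repeatedly ask for "up to `count` inodes
# numbered above `last_ino`", advancing the cursor each time. The fake
# inode table (a set of inode numbers) is illustrative only.

def bulkstat_chunk(inode_table, last_ino, count):
    """Next batch of inode numbers after the cursor, plus new cursor."""
    batch = sorted(ino for ino in inode_table if ino > last_ino)[:count]
    return batch, (batch[-1] if batch else last_ino)

def iterate_all_inodes(inode_table, chunk_size=4):
    """Yield every inode number without any directory traversal."""
    last_ino = 0
    while True:
        batch, last_ino = bulkstat_chunk(inode_table, last_ino, chunk_size)
        if not batch:
            return
        yield from batch
```

Each call hands back the next batch after the cursor, which is why a dumper built on it never needs to walk directories to find every inode.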

14. butes: process vs. kernel context (e.g. HSM) (score: 1)
Author: Andrew Morton <akpm@xxxxxxxxx>
Date: Fri, 15 Nov 2002 14:48:18 -0800
Oh, so there isn't any special-purpose in-kernel bulk disk access stuff to support xfs_dump? Hm. It should all be easily reclaimable then. Bill Irwin wrote a couple of neat scripts for monitoring sla…
/archives/xfs/2002-11/msg00279.html (11,196 bytes)

15. (e.g. HSM) (score: 1)
Author: Andi Kleen <ak@xxxxxxx>
Date: Fri, 15 Nov 2002 23:48:45 +0100
There seems to be a somewhat detailed document in the CVS tree at cmd/xfsdump/doc/xfsdump.html (served with the wrong MIME type): http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.4-xfs/cmd/xfsdump/doc/xfsdump.h…
/archives/xfs/2002-11/msg00280.html (10,447 bytes)

16. PostgreSQL and file system level backup (score: 1)
Author: Chris Wedgwood <cw@xxxxxxxx>
Date: Sat, 16 Nov 2002 01:38:12 -0800
No. It uses bulkstat; I'm not convinced that alone is the problem (I have other applications that use it and they never caused these kinds of lockups). --cw
/archives/xfs/2002-11/msg00286.html (9,955 bytes)

17. 0-rc1 (score: 1)
Author: Zlatko Calusic <zlatko.calusic@xxxxxxxx>
Date: Sat, 16 Nov 2002 11:40:30 +0100
Before that, I think we should try to fix it properly. Yes, xfsdump is blazingly fast and probably that is the reason we have some problems with it now, but let's try not to surrender before a fight.
/archives/xfs/2002-11/msg00288.html (10,908 bytes)

18. xfsdump stuck in io_schedule (score: 1)
Author: Zlatko Calusic <zlatko.calusic@xxxxxxxx>
Date: Sat, 16 Nov 2002 13:19:11 +0100
Hey, thanks! It's being pulled right now; a minor obstruction is that the whole kernel tree is going to take some time to download through this slow link of mine. :( I'll report my observations ASAP.
/archives/xfs/2002-11/msg00290.html (9,995 bytes)

19. er pinning (score: 1)
Author: Zlatko Calusic <zlatko.calusic@xxxxxxxx>
Date: Sat, 16 Nov 2002 18:12:55 +0100
I don't know, it's just a possibility. No, LOWMEM only; I have attached the relevant files at the end of this message. Chris, unfortunately here I have exactly the same problems with the code from CVS pu…
/archives/xfs/2002-11/msg00292.html (21,014 bytes)

20. xfsdump stuck in io_schedule (score: 1)
Author: Chris Wedgwood <cw@xxxxxxxx>
Date: Sat, 16 Nov 2002 12:59:42 -0800
I'll try an older kernel then and see if I can reproduce it; for whatever reason, right now I'm having no luck making this happen again. Oddly, I *always* got the hang at the start of a dump, which f…
/archives/xfs/2002-11/msg00293.html (11,846 bytes)


This search system is powered by Namazu