
Re: TAKE - delalloc buffer and page handling cleanup

To: "D. Stimits" <stimits@xxxxxxxxxx>
Subject: Re: TAKE - delalloc buffer and page handling cleanup
From: Chris Pascoe <c.pascoe@xxxxxxxxxxxxxx>
Date: Sun, 10 Jun 2001 18:30:00 +1000 (EST)
In-reply-to: <3B232396.5344500B@idcomm.com>
Sender: owner-linux-xfs@xxxxxxxxxxx
Hi,

Did you mean to CC this to the list as well?  I'm not an XFS developer,
just testing!

The kernel I used for my tests was 2.4.6-pre2, with the delalloc
changes.  Check your CVS/Entries files to make sure you got the versions
affected by this TAKE (it took several days before I got them!).

042 always worked for me before these changes... just this time, it
failed after several complete runs through all of the tests.  Test 017
was quite dramatic for me: just start it and it would crash within a
second...

Chris

On Sun, 10 Jun 2001, D. Stimits wrote:

> Chris Pascoe wrote:
> >
> > > I get a repeatable crash immediately on 017 with these changes.  Same
> > > machine as before (1GB/Dual P3-Xeon), just a slightly different partition
> > > layout.  Was I supposed to avoid test 017 too?
> >
> > Avoiding tests 013, 017, 049 (and all the tape ones!), the stress tests
> > have completed 3 passes now...  Hmm, it must have seen me typing - 042
> > just failed (running SMP/highmem again).
>
> My SMP machine passed 042 without fail; this was with 2.4.6-pre1.  Are
> you using this version too?
>
> I did have 049 fail with "!!! failed to loop mount xfs", and 053 gave a
> lot of failures due to unimplemented functions. I am still wondering
> about the post I made for my failed tests, but I can add something of
> an experiment I did. Using encrypted filesystem partitions requires the
> loop device. Should something happen to the encrypted system, such as a
> power failure or power-off, any filesystem recovery has to be done
> through the loop device. That in turn requires mounting the partition
> and then running the repair against /dev/loop0 (or maybe loop1 if it
> was the second encrypted system to be mounted). But xfs_repair refuses
> to run on a mounted filesystem. So I might suggest that xfs_repair be
> allowed on a mounted filesystem, provided certain conditions are met.
> One, which I am not sure is possible, is that it could be allowed to
> run if the target is detected to be a loopback device; if that
> detection isn't possible, there might need to be a flag to force the
> repair (maybe the encrypted partition could be mounted read-only?).
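>
> As a rough sketch of the loopback-detection idea (nothing that exists
> in xfs_repair today, as far as I know): loop devices are block devices
> with major number 7, so a check could be as simple as looking at
> ls -l /dev/loop0
> and verifying that the device major (the number printed before the
> comma) is 7 before allowing the repair to proceed.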
>
> Anyway, the other limitation is that during the creation of a loopback
> encrypted partition, mkfs.xfs prints an error about the block size
> parameter not being valid. This didn't seem to have any big impact: I
> was able to create an XOR loopback layer over a partition and still
> run mkfs.xfs. I then copied a large amount of data over and
> intentionally shut the power off without unmounting it (it wasn't
> writing at the time, though). When I brought it back up, mounting it
> with the encryption password worked fine, and under superficial
> inspection it appeared to be ok. I did not check it closely enough to
> know if it really was ok, so it might not be as good as it seems.
>
> If anyone is interested in seeing what happens, you can create an
> encrypted partition via something like this (I'll assume /dev/hda4;
> normally you would also fill the partition with random bytes first,
> but I won't here):
> losetup -e XOR /dev/loop0 /dev/hda4
> [enter some easy to remember pass]
> mkfs.xfs /dev/loop0
> losetup -d /dev/loop0
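>
> (If mkfs.xfs complains about the block size here, as noted above, it
> might be worth trying an explicit size, for example
> mkfs.xfs -b size=4096 /dev/loop0
> though 4096 is only a guess and I have not tried whether it silences
> the complaint.)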
>
> Now to mount it:
> mount /dev/hda4 /mnt/somewhere -t xfs -oloop,encryption=xor
> [enter pass]
>
> You should be able to use the partition normally at /mnt/somewhere. Now
> if you wanted to run repair on it and it was ext2, you'd mount it this
> way and run:
> fsck.ext2 /dev/loop0
>
> Running instead
> fsck.xfs /dev/loop0
> will complete without saying anything... does this mean it didn't do
> anything, or just that the filesystem was clean?
>
> If instead I want to run xfs_repair, it refuses... I suppose it may
> not even be appropriate; should xfs_repair be usable against loopback
> devices?
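>
> (One untested thought: since losetup can attach the encrypted
> partition without mounting it, something like
> losetup -e XOR /dev/loop0 /dev/hda4
> xfs_repair /dev/loop0
> losetup -d /dev/loop0
> might let xfs_repair run against the unmounted loop device; I have
> not tried whether xfs_repair behaves sensibly in that case.)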
>
> D. Stimits, stimits@xxxxxxxxxx
>
> >
> > Active process was:
> > 0xef9e6000 00005670 00005669  1  001  run   0xef9e6350*xfs_fsr
> >
> > 042 90s ...Start mounting filesystem: sd(8,19)
> > Ending clean XFS mount for filesystem: sd(8,19)
> > Start mounting filesystem: sd(8,19)
> > Ending clean XFS mount for filesystem: sd(8,19)
> > Unable to handle kernel NULL pointer dereference at virtual address
> > 00000000
> >  printing eip:
> > c0161ea4
> > *pde = 00000000
> >
> > Entering kdb (current=0xef9e6000, pid 5670) on processor 1 Oops: Oops
> > due to oops @ 0xc0161ea4
> > eax = 0x00000000 ebx = 0xc4b09d60 ecx = 0x00000009 edx = 0x00000000
> > esi = 0x4014e200 edi = 0x00000000 esp = 0xef9e7d7c eip = 0xc0161ea4
> > ebp = 0xef9e7db8 xss = 0x00000018 xcs = 0x00000010 eflags = 0x00010246
> > xds = 0x00000018 xes = 0x00000018 origeax = 0xffffffff &regs = 0xef9e7d48
> > [1]kdb> bt
> >     EBP       EIP         Function(args)
> > 0xef9e7db8 0xc0161ea4 _pb_direct_io+0x90 (0xf3594240, 0x0, 0x0, 0x100000, 
> > 0xef9e7e1c)
> >                                kernel .text 0xc0100000 0xc0161e14 0xc0161fd4
> > 0xef9e7e3c 0xc0163285 _pagebuf_file_write+0x175 (0xd8a99320, 0x4014e200, 
> > 0x100000, 0xef9e7ea8, 0xc01cea90)
> >                                kernel .text 0xc0100000 0xc0163110 0xc016330c
> > 0xef9e7eb0 0xc016340d pagebuf_generic_file_write+0x101 (0xd8a99320, 
> > 0x4014e200, 0x100000, 0xef9e7f84, 0xc01cea90)
> >                                kernel .text 0xc0100000 0xc016330c 0xc016374c
> > 0xef9e7f34 0xc01cfd98 xfs_write+0x348 (0xece267a8, 0xef9e7f78, 0x0, 0x0, 
> > 0x0)
> >                                kernel .text 0xc0100000 0xc01cfa50 0xc01d0054
> > 0xef9e7f98 0xc01cb765 linvfs_write+0x10d (0xd8a99320, 0x4014e200, 0x100000, 
> > 0xd8a99340)
> >                                kernel .text 0xc0100000 0xc01cb658 0xc01cb7a0
> > 0xef9e7fbc 0xc0136175 sys_write+0x95 (0x7, 0x4014e200, 0x100000, 0x100000, 
> > 0x100000)
> >                                kernel .text 0xc0100000 0xc01360e0 0xc01361b0
> >            0xc0106fcb system_call+0x33
> >                                kernel .text 0xc0100000 0xc0106f98 0xc0106fd0
> >
> > Chris
>


