To: Stephen Lord <lord@xxxxxxx>
Subject: Re: Linux 2.4.17-xfs vs previous XFS versions and certain non-us characters in filenames
From: Håkan Lindqvist <lindqvist@xxxxxxxxxx>
Date: 27 Jan 2002 13:30:12 +0100
Cc: Linux XFS Mailing List <linux-xfs@xxxxxxxxxxx>
In-reply-to: <3C536F44.1020301@sgi.com>
References: <1012101803.1045.28.camel@steelnest> <1012102374.1045.35.camel@steelnest> <3C536F44.1020301@sgi.com>
Sender: owner-linux-xfs@xxxxxxxxxxx

Thanks for your prompt reply!

Okay, I will first of all recompile my 2.4.16-xfs kernel now to make
sure that this is not caused by the compiler (or rather a change of
compiler versions).

If my new 2.4.16-xfs kernel (which will certainly be compiled using the
same tools as my 2.4.17-xfs kernel was) still works together with the
earlier 2.4.16-xfs kernel, then it must have to do with some change in
the kernel source, right? (Not necessarily in the XFS code, but that
would have made sense, given that it doesn't affect all filesystems.)
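
For reference, /proc/version records which gcc built the running
kernel, so I can compare the two boots directly:

$ cat /proc/version
Linux version 2.4.16-xfs (root@host) (gcc version ...) #1 ...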

I will also do what you said, of course.


/Håkan


On Sun, 2002-01-27 at 04:08, Stephen Lord wrote:
> Håkan Lindqvist wrote:
> 
> >I just noticed that this seems to be the same problem as mentioned in
> >the "Problems with yesterday CVS and international characters" thread,
> >which I failed to find before sending "my problem" to the list.
> >
> >Are you sure this can't have anything to do with XFS? (What baffles me
> >is how this can avoid affecting ext2, yet clearly depend on whether I
> >use 2.4.16-xfs or 2.4.17-xfs (I change absolutely nothing else between
> >my tests), if it is not XFS-related.)
> >
> >Best regards,
> >Håkan Lindqvist
> >
> >
> >On Sun, 2002-01-27 at 04:23, Håkan Lindqvist wrote:
> >
> >>The current (as of today) CVS version of linux-2.4-xfs (2.4.17) does not
> >>seem to be able to handle files created under earlier versions of XFS
> >>which have filenames containing certain (latin1) characters (the Swedish
> >>characters å, ä and ö (a with ring on top, a with dots on top and o with
> >>dots on top), for example).
> >>
> >>The kind of error I get is that if I run ls so that it finds these
> >>files, it spits out "ls: <filename>: No such file or directory" (the
> >>filename can be matched by a wildcard or by tab completion - so it
> >>can't be a case of bad typing).
> >>Going back to my previous kernel (2.4.16-xfs) makes things work again.
> >>However, if I create a new file with an å (for example) in the filename
> >>under 2.4.17-xfs, that file causes the same kind of problem under
> >>2.4.16-xfs.
> >>
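> >>For example, with an old file named (say) jannå in the directory:
> >>
> >>$ ls jann*
> >>ls: jannå: No such file or directory
> >>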
> >>It seems that stat()ing the file fails even though the file exists and
> >>can be found. (This output from "strace ls -l janneååå" seems to point
> >>in that direction too: 'lstat64("janneååå", 0x80548bc) = -1 ENOENT
> >>(No such file or directory)')
> >>
> >>This does not seem to affect other filesystems (at least not ext2),
> >>so I assume the issue is with some new XFS code.
> >>
> There really is no new XFS code in 2.4.17, just some I/O related bug
> fixes, nothing at all to do with directories. Did you switch compilers
> between building these kernel versions? The fact that files created on
> one kernel cause problems for the other is strange; it suggests the
> hash calculations are coming out differently between the kernels.
> 
> >>
> >>
> >>Best regards,
> >>Håkan Lindqvist
> >>
> >
> >
> OK, do this. First find the inode number of the containing directory
> using "ls -lid <pathname of dir>":
> [root@burst xfs]# ls -lid .
>     128 drwxr-xr-x    5 root     root           70 Jan 26 14:26 .
> 
> 
> It would work best if you could find a small directory with this problem,
> one whose size in ls shows up as less than 4K.
> 
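> (The size is the number just before the date in the ls output - 70 in
> the example above, so that directory would qualify.)
> 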
> Then as root run
> 
> xfs_db -r /dev/xxx
> 
> on the device - preferably when it is not mounted.
> 
> Enter
> 
> inode xxxx
> p
> 
> where xxxx is the number in the first column of ls output (128 above).
> 
> This will dump out the contents of the inode, so for my example:
> [root@burst xfs]# xfs_db -r /dev/hda3
> xfs_db: inode 128
> xfs_db: p
> core.magic = 0x494e
> core.mode = 040755
> core.version = 1
> core.format = 1 (local)
> core.nlinkv1 = 5
> core.uid = 0
> core.gid = 0
> core.atime.sec = Sat Jan 26 13:35:15 2002
> core.atime.nsec = 021650000
> core.mtime.sec = Sat Jan 26 14:26:02 2002
> core.mtime.nsec = 451650000
> core.ctime.sec = Sat Jan 26 14:26:02 2002
> core.ctime.nsec = 451650000
> core.size = 70
> core.nblocks = 0
> core.extsize = 0
> core.nextents = 0
> core.naextents = 0
> core.forkoff = 0
> core.aformat = 2 (extents)
> core.dmevmask = 0
> core.dmstate = 0
> core.newrtbm = 0
> core.prealloc = 0
> core.realtime = 0
> core.gen = 0
> next_unlinked = null
> u.sfdir2.hdr.count = 5
> u.sfdir2.hdr.i8count = 0
> u.sfdir2.hdr.parent.i4 = 128
> u.sfdir2.list[0].namelen = 3
> u.sfdir2.list[0].offset = 0x30
> u.sfdir2.list[0].name = "tmp"
> u.sfdir2.list[0].inumber.i4 = 131
> u.sfdir2.list[1].namelen = 10
> u.sfdir2.list[1].offset = 0x50
> u.sfdir2.list[1].name = "client.txt"
> u.sfdir2.list[1].inumber.i4 = 133
> u.sfdir2.list[2].namelen = 8
> u.sfdir2.list[2].offset = 0x80
> u.sfdir2.list[2].name = "NBSIMULD"
> u.sfdir2.list[2].inumber.i4 = 6292640
> u.sfdir2.list[3].namelen = 4
> u.sfdir2.list[3].offset = 0xd8
> u.sfdir2.list[3].name = "doio"
> u.sfdir2.list[3].inumber.i4 = 136
> u.sfdir2.list[4].namelen = 4
> u.sfdir2.list[4].offset = 0xe8
> u.sfdir2.list[4].name = "lord"
> u.sfdir2.list[4].inumber.i4 = 132
> 
> If your directory is 4K in length then you would get output like this:
> 
> xfs_db: inode 6292640
> xfs_db: p
> core.magic = 0x494e
> core.mode = 040700
> core.version = 1
> core.format = 2 (extents)
> core.nlinkv1 = 3
> core.uid = 0
> core.gid = 0
> core.atime.sec = Sat Jan 26 04:02:53 2002
> core.atime.nsec = 588413000
> core.mtime.sec = Tue Jan 15 05:19:26 2002
> core.mtime.nsec = 427555000
> core.ctime.sec = Tue Jan 15 05:19:26 2002
> core.ctime.nsec = 427555000
> core.size = 4096
> core.nblocks = 1
> core.extsize = 0
> core.nextents = 1
> core.naextents = 0
> core.forkoff = 0
> core.aformat = 2 (extents)
> core.dmevmask = 0
> core.dmstate = 0
> core.newrtbm = 0
> core.prealloc = 0
> core.realtime = 0
> core.gen = 8
> next_unlinked = null
> u.bmx[0] = [startoff,startblock,blockcount,extentflag] 0:[0,796416,1,0]
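> (that u.bmx line is the directory's single extent: file offset 0 maps
> to filesystem block 796416, for a length of 1 block)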
> 
> Use the commands
> 
> dblock 0
> p
> 
> and they will dump the directory contents:
> xfs_db: p
> bhdr.magic = 0x58443242
> bhdr.bestfree[0].offset = 0x108
> bhdr.bestfree[0].length = 0xe98
> bhdr.bestfree[1].offset = 0
> bhdr.bestfree[1].length = 0
> bhdr.bestfree[2].offset = 0
> bhdr.bestfree[2].length = 0
> bu[0].inumber = 6292640
> bu[0].namelen = 1
> bu[0].name = "."
> bu[0].tag = 0x10
> bu[1].inumber = 128
> bu[1].namelen = 2
> bu[1].name = ".."
> bu[1].tag = 0x20
> bu[2].inumber = 8651008
> bu[2].namelen = 6
> bu[2].name = "CLIENT"
> bu[2].tag = 0x30
> .....
> 
> For the name you cannot find, run
> 
> hash xxxx
> 
> where xxxx is the name.
> 
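> For example, if the name you cannot find were jannå:
> 
> xfs_db: hash jannå
> 
> (xfs_db prints the hash value it computes for that name)
> 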
> Finally, from within xfs_db (the filesystem must be unmounted for this)
> run
> 
> blockget -n
> 
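> i.e. with the filesystem unmounted, something like (same example
> device as before):
> 
> # umount /dev/hda3
> # xfs_db -r /dev/hda3
> xfs_db: blockget -n
> 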
> Send all the output to me and I will see if anything looks odd.
> 
> Steve