Re: xfsdump max file size
On Wed, Jan 15, 2003 at 11:21:49AM -0600, Jason Joines wrote:
> Mandy Kirkconnell wrote:
> >Jason Joines wrote:
> > >When running a dump with "xfsdump -F -e -f
> > >/local/backup/weekly/sdb3.dmp -l 0 /dev/sdb3" I get the message,
> > >"xfsdump: WARNING: could not open regular file ino 4185158 mode
> > >0x000081b0: File too large: not dumped". The file in question is 5.0
> > >GB.
> >
> >Perhaps you could also use xfs_db to look at the extents of the file:
> >
> ># xfs_db -r /dev/sdb3
> >xfs_db: inode 4185158 p
>
> # xfs_db -r /dev/sdb3
> xfs_db: inode 4185158
> xfs_db: p
> core.magic = 0xfeff
> core.mode = 0
> core.version = 4
> core.format = 0 (dev)
> core.uid = 0
> core.gid = 0
> core.atime.sec = Wed Dec 6 15:33:04 1916
> core.atime.nsec = -1818818560
> core.mtime.sec = Mon Dec 14 15:16:30 1992
> core.mtime.nsec = 1140850688
> core.ctime.sec = Tue Feb 6 19:52:21 1973
> core.ctime.nsec = -1674700016
Those times look really wrong. Perhaps you should run xfs_check on
your file system.
> core.size = -7811766231833445970
> core.nblocks = 5764888998212337664
That doesn't look right to me either. Run xfs_check and xfs_repair -n
on the file system. I'd wager you'll get some interesting output.
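For example (just a sketch, assuming the filesystem on /dev/sdb3 is
unmounted, or at least idle, while you check it):

  # xfs_check /dev/sdb3
  # xfs_repair -n /dev/sdb3

xfs_repair -n runs in no-modify mode, so it only reports what it would
fix without actually writing anything to the device.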
--
Nate Straz nstraz@sgi.com
sgi, inc http://www.sgi.com/
Linux Test Project http://ltp.sf.net/