Jason Joines wrote:
Mandy Kirkconnell wrote:
Jason Joines wrote:
What's the maximum file size for a file to be dumped by xfsdump?
xfsdump doesn't (really) have a maximum file size limitation. There
is a maximum file size defined in xfsdump/dump/content.c, but it is
set to the largest theoretical file size, about 18 million terabytes. The
limit is defined in bytes:
/* max "unsigned long long int"
*/
#define ULONGLONG_MAX 18446744073709551615LLU
Obviously this limit is impossible to hit, which is why I say
xfsdump doesn't have a maximum file size limit. You should be able to
dump the largest file you can create.
There is, however, a command line option (-z) that sets a maximum file
size for the dump, specified in kilobytes. Files over this size will be
excluded from the dump.
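For example, a level 0 dump that skips any file larger than 1 GB (1048576
KB) might look something like this (the dump file path here is only an
illustration):

# xfsdump -z 1048576 -l 0 -f /backup/sdb3.dmp /dev/sdb3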
When running a dump with "xfsdump -F -e -f
/local/backup/weekly/sdb3.dmp -l 0 /dev/sdb3" I get the message,
"xfsdump: WARNING: could not open regular file ino 4185158 mode
0x000081b0: File too large: not dumped". The file in question is 5.0 GB.
Jason
===========
xfsdump does not set EFBIG (errno 27) anywhere. It looks like the error
is coming from the filesystem on the first attempt to open the file.
What version of xfs are you running? Are you using the released
version of xfsdump, or have you built your own copy?
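One quick way to check whether open(2) itself is rejecting the file,
rather than anything xfsdump does, would be a small test program along
these lines (the path argument is just whatever file the warning named):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sketch: try a plain open(2) on the file and report errno on failure. */
int main(int argc, char *argv[])
{
        const char *path = (argc > 1) ? argv[1] : "/path/to/large/file";
        int fd = open(path, O_RDONLY);

        if (fd < 0) {
                printf("open failed: errno %d (%s)\n", errno, strerror(errno));
                return 1;
        }
        printf("open succeeded\n");
        close(fd);
        return 0;
}

Note that the result can depend on how the program is built: on a 32-bit
system, a binary compiled without large-file support may fail with EFBIG
when opening a file over 2 GB, which is one possible source of this kind
of error.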
Perhaps you could also use xfs_db to look at the extents of the file:
# xfs_db -r /dev/sdb3
xfs_db: inode 4185158
xfs_db: p
We are able to dump a file of 4.5 GB without hitting the error. Perhaps
we can figure out what's different between our environments and go from
there.
--
Mandy Kirkconnell
SGI, Storage Software Engineer
alkirkco@xxxxxxx
651-683-3422