On Mon, 18 Jun 2001, Christoph Lukas wrote:
> we would like to use xfsdump to backup one of our xfs filesystems onto
> another hard disk. Therefore I tried:
>
> xfsdump -l0 -F -f /mnt/backupfile /dev/sda3
>
> Although we are running kernel 2.4.3 and glibc _with_ >2GB support (tried
> with 'dd if=/dev/null of=/mnt/testfile' ) xfsdump exits after dumping 2GB
> with:
I would think that 'dd if=/dev/null' will give you a 0-byte file, since /dev/null reads as empty. What happens with 'dd if=/dev/zero of=/mnt/testfile bs=1024 count=3145728' (i.e. 3 GiB of actual data)?
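If writing 3 GiB of zeros is too slow for a quick check, seeking past the 2 GiB mark creates a sparse file almost instantly and exercises the same large-file code path. A sketch, assuming GNU dd and stat; the ./testfile path is only an example and should sit on the filesystem you want to test:

```shell
# Seek 3 GiB into the file, then write a single 1 KiB block.
# The result is sparse, so it costs almost no disk space or time.
dd if=/dev/zero of=./testfile bs=1024 seek=3145728 count=1

# Apparent size in bytes; should be well past 2147483648 (2 GiB).
stat -c %s ./testfile
```

If dd dies here with "File size limit exceeded", the problem is the shell/libc large-file support (or an ulimit), not xfsdump.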
> xfsdump: creating dump session media file 0 (media 0, file 0)
> xfsdump: dumping ino map
> xfsdump: dumping directories
> xfsdump: dumping non-directory files
> File size limit exceeded
This message is not produced by xfsdump itself: the shell prints "File size limit exceeded" when the process is killed by SIGXFSZ, which the kernel raises when a write runs past the file size limit (the write fails with errno == EFBIG).
> is there any possibility to exceed this limit (compile time option?) or any
> easy alternative to work with multiple 2GB files (like the ordinary dump
> does)?
By default xfsdump is compiled with -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE
which should be enough to make it work properly.
If you still have problems, could you run xfsdump with '-v5' to get lots
of verbose output so we can get an idea of where exactly xfsdump fails.
Ivan
--
Ivan Rayner
ivanr@xxxxxxxxxxxxxxxxx