On 23/02/11 18:04, Linda A. Walsh wrote:
> I tried using the 'iflag=fullblock' as you recommend and it made the
> output 'consistent' with that of 'mbuffer', i.e. it transferred less data
> and the truncation was consistent with a 512M divisor, indicating it was
> 'cat' default record output size that was causing the difference.
Right. That's expected as with 'fullblock', both mbuffer and dd
will read/write 512M at a time. Both will fail in the same
way when they try to write the odd sized chunk at the end.
This was only changed for dd in coreutils 7.5,
where it falls back to a standard (non-O_DIRECT) write for the last chunk.
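To make the fullblock distinction concrete, here is a sketch (file
names are illustrative) using a pipe, where short reads are routine:

```shell
# From a pipe, read() can return fewer than bs bytes; without
# iflag=fullblock, dd still counts such a short read as one
# block, so the copy can come up short.
yes | dd bs=1M count=10 of=maybe_short 2>/dev/null

# With iflag=fullblock, dd keeps reading until each 1M block is
# complete, so exactly 10 MiB (10485760 bytes) are copied.
yes | dd bs=1M count=10 iflag=fullblock of=tenmeg 2>/dev/null
```

The same accounting applies when dd reads 512M blocks: without
fullblock the two tools can disagree on how much was transferred.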
> I've tried significantly shorter files and NOT had this problem
> (record size=64k, and 2 files, one at 64+57k). Both copied.
> Something to do with large file buffers.
Small blocks cause an issue on ext at least.
I modified dd here to behave like yours and got:
$ truncate -s513 small
$ dd oflag=direct if=small of=small.out
./dd: writing `small.out': Invalid argument
> Of *SIGNIFICANT* note. In trying to create an empty file of the size
> used, from scratch, using 'xfs_mkfile', I got an error:
>> xfs_mkfile 5776419696 testfile
> pwrite64: Invalid argument
Looks like that uses the same O_DIRECT write
method with the same issues?
You could try fallocate(1), which is newly available
in util-linux and might be supported by your xfs.
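As a sketch, assuming util-linux fallocate(1) and a filesystem that
supports it (xfs and ext4 do); the 64 KiB size here is only for
illustration, your case would pass the full 5776419696 bytes:

```shell
# Preallocate space without writing any data, which sidesteps
# xfs_mkfile's O_DIRECT pwrite path entirely.
fallocate -l 65536 testfile
```

Unlike xfs_mkfile, this allocates extents without issuing aligned
writes, so odd sizes are not a problem.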
p.s. If dd were written today, it would default to fullblock behavior.
For backwards and POSIX compatibility, though, we must keep
the current default.
p.p.s. There are situations where fullblock is required,
and I'll patch dd soon to auto-apply that option when appropriate.
[io]flag=direct is one of those cases, I think.
p.p.p.s. coreutils 8.11 should have the oflag=nocache option,
which will write to disk without using up your page cache,
while also avoiding O_DIRECT's alignment constraints.
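Reusing the 513-byte example from above, a sketch of the difference
(assuming a dd with nocache support):

```shell
# oflag=nocache does a normal buffered write and then advises the
# kernel to drop the written pages from the page cache, so odd
# sizes that O_DIRECT rejects with EINVAL work fine.
truncate -s513 small
dd if=small of=small.out oflag=nocache 2>/dev/null
```

The copy succeeds where the oflag=direct version above failed,
because no alignment is imposed on the writes.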