On Wed, Oct 24, 2012 at 04:19:28PM -0500, Wayne Walker wrote:
> On 10/12/2012 07:14 PM, Dave Chinner wrote:
> >And SB/AGF 3 and 4 are ok, too. So, the filesystem headers just
> >beyond the 2TB offset are zero. That tends to point to a block
> >device problem, as an offset of 2TB is where a 32 bit sector count
> >will overflow (i.e. 2^32). Next step is to run blktrace/blkparse
> >on the cp workload that generates the error to see if anything
> >actually writes to the 2TB offset region, and if so, where it
> >comes from. Probably best to compress the resultant blkparse
> >output file - it might be quite large but the text will compress
> >well. Cheers, Dave.
> Thank you for your help.
> 10 MB .gz file at http://rx-7.bybent.com/blktrace.sde1.out.gz
> What I can see seems to show that most of the writes are around 2^31.
Which is 2^31 sectors, i.e. just above 1TB. That is the data being
written into AG #1. The writes stop a short way into AG #1. The
filesystem does not issue any writes to the AG #2 headers, only a
single read. IOWs, the filesystem is not overwriting its own
metadata during the workload, so that implies a problem at a lower
storage layer....
IOWs, I can't see the filesystem doing anything wrong here.
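For reference, both boundaries fall straight out of 512-byte sector
arithmetic - a 32-bit sector count wraps at exactly 2TiB, and the
observed writes cluster around half that:

```sh
# 2^32 512-byte sectors is where a 32-bit sector counter overflows:
echo $(( (1 << 32) * 512 ))   # 2199023255552 bytes = 2TiB

# 2^31 sectors, where the traced writes cluster, is half that:
echo $(( (1 << 31) * 512 ))   # 1099511627776 bytes = 1TiB
```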
FWIW, can you pattern the block device around the 2TB offset and
re-run the test, and see if the sb/agf 2 are zeroed via xfs_db after
the failure occurs? i.e. do something like:
# xfs_io -F -f -c "pwrite 2047g 2g" -c fsync /dev/sde1
then mkfs, run xfs_db to dump sb 2 and agf 2, then run the test and
dump sb 2/agf 2 again after the test? Use the same xfs_db scripts as
the previous time (i.e. including the drop caches commands) if you
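A minimal sketch of that before/after check, assuming the device name
/dev/sde1 from this thread and running as root (file names here are
just illustrative):

```sh
# Drop caches so xfs_db reads the headers from disk, not stale buffers.
echo 3 > /proc/sys/vm/drop_caches

# Dump superblock 2 and AGF 2 before the workload (-r = read-only).
xfs_db -r -c "sb 2" -c "p" /dev/sde1 > sb2.before
xfs_db -r -c "agf 2" -c "p" /dev/sde1 > agf2.before

# ... run the cp workload that triggers the failure ...

echo 3 > /proc/sys/vm/drop_caches
xfs_db -r -c "sb 2" -c "p" /dev/sde1 > sb2.after
xfs_db -r -c "agf 2" -c "p" /dev/sde1 > agf2.after

# If the headers beyond 2TB were zeroed, these diffs will show it.
diff sb2.before sb2.after
diff agf2.before agf2.after
```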