I have seen a related problem when space is tight.
When you untar a lot of small files onto a partition that is tight on space,
you get out of space errors.
After the delalloc blocks are committed to disk, some space becomes available
and files get written to disk again.
All the space gets delalloced again, resulting in more out of space errors.
This cycle continues until all the space is really gone or the tar exits.
You end up with not all of the files in the tar archive extracted to disk,
even though everything would have fit onto the partition. If you write the
files synchronously, they all fit onto the partition.
I am not sure if this fix will address this issue. If you flush before
returning an "out of space" error, it should fix it.
(I tried fixing this myself, but ended up with a kernel that deadlocks.)
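For illustration, here is a minimal userspace sketch of the "flush, then
retry before reporting out of space" idea. This is not the kernel patch;
the file names, file size, and the use of sync() to push the delalloc
blocks out are assumptions made only to show the concept:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write one small file; on ENOSPC, flush dirty data and retry once. */
static int write_small_file(const char *path, const char *buf, size_t len)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	ssize_t n;

	if (fd < 0)
		return -1;

	n = write(fd, buf, len);
	if (n < 0 && errno == ENOSPC) {
		sync();			/* push delalloc blocks to disk */
		n = write(fd, buf, len);
	}
	close(fd);
	return (n == (ssize_t)len) ? 0 : -1;
}

int main(void)
{
	char buf[4096];
	int i;

	memset(buf, 'x', sizeof(buf));

	/* Create many small files, the same pattern as the tar extract. */
	for (i = 0; i < 10000; i++) {
		char path[32];

		snprintf(path, sizeof(path), "file-%d", i);
		if (write_small_file(path, buf, sizeof(buf)) < 0) {
			fprintf(stderr, "stopped at %s: %s\n",
				path, strerror(errno));
			return 1;
		}
	}
	return 0;
}

The point of the retry is the same as suggested above: only report out of
space after the delayed allocations have actually been flushed, so that
space freed up by the commit can still be used.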
I also observed similar behaviour within the allocation groups.
When all the space in an ag (#1) is delalloced, xfs switches to the next
ag (#2).
This would not be bad if the inode and data blocks both switched to #2, but I
observed that the inode blocks stayed in #1 while the data blocks switched to
#2 (#1,#2 situation).
This results in a very bad inode vs data block layout.
Xfs switches back to the previous ag (#1,#1 situation) once the delalloced
blocks are committed.
Paul
> linux/fs/xfs/linux/xfs_lrw.c - 1.148
> - when out of space, flush the log as well as the delalloc buffers
> Description :
> We came across an end case on linux where a filesystem had a few blocks
> free and was continually spinning in the strategy path attempting to
> allocate space for a delalloc block.