This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "XFS development tree".
The branch, master has been updated
ad1858d xfs: rework remote attr CRCs
d4c712b xfs: fully initialise temp leaf in xfs_attr3_leaf_compact
8517de2 xfs: fully initialise temp leaf in xfs_attr3_leaf_unbalance
6863ef8 xfs: correctly map remote attr buffers during removal
4af3644 xfs: remote attribute tail zeroing does too much
913e96b xfs: remote attribute read too short
90253cf xfs: remote attribute allocation may be contiguous
f648167 xfs: avoid nesting transactions in xfs_qm_scall_setqlim()
from e461fcb194172b3f709e0b478d2ac1bdac7ab9a3 (commit)
Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.
- Log -----------------------------------------------------------------
commit ad1858d77771172e08016890f0eb2faedec3ecee
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date: Tue May 21 18:02:08 2013 +1000
xfs: rework remote attr CRCs
Note: this changes the on-disk remote attribute format. I assert
that this is OK to do as CRCs are marked experimental and the first
kernel it is included in has not yet been released. Further,
the userspace utilities are still evolving and so anyone using this
stuff right now is a developer or tester using volatile filesystems
for testing this feature. Hence changing the format right now to
save longer term pain is the right thing to do.
The fundamental change is to move from a header per extent in the
attribute to a header per filesystem block in the attribute. This
means there are more header blocks and the parsing of the attribute
data is slightly more complex, but it has the advantage that we
always know the size of the attribute on disk based on the length of
the data it contains.
This is where the header-per-extent method has problems. We don't
know the size of the attribute on disk without first knowing how
many extents are used to hold it. And we can't tell from a
mapping lookup, either, because remote attributes can be allocated
contiguously with other attribute blocks and so there is no obvious
way of determining the actual size of the attribute on disk short of
walking and mapping buffers.
The problem with this approach is that if we map a buffer
incorrectly (e.g. we make the last buffer for the attribute data too
long), we then get buffer cache lookup failure when we map it
correctly, i.e. we get a size mismatch on lookup. This is not
necessarily fatal, but it's a cache coherency problem that can lead
to returning the wrong data to userspace or writing the wrong data
to disk. And debug kernels will assert fail if this occurs.
I found lots of niggly little problems trying to fix this issue on a
4k block size filesystem, finally getting it to pass with lots of
fixes. The thing is, 1024 byte block size filesystems still failed, and it was
getting really complex handling all the corner cases that were
showing up. And there were clearly more that I hadn't found yet.
It is complex, fragile code, and if we don't fix it now, it will be
complex, fragile code forever more.
Hence the simple fix is to add a header to each filesystem block.
This gives us the same relationship between the attribute data
length and the number of blocks on disk as we have without CRCs -
it's a linear mapping and doesn't require us to guess anything. It
is simple to implement, too - the remote block count calculated at
lookup time can be used by the remote attribute set/get/remove code
without modification for both CRC and non-CRC filesystems. The world
becomes sane again.
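As a rough sketch of that linear mapping (the block size, header size
and helper names below are assumptions for illustration, not the
kernel code), the on-disk block count can be derived directly from the
value length:

#include <stdio.h>

#define BLOCK_SIZE	4096u	/* assumed filesystem block size */
#define RMT_HDR_SIZE	56u	/* assumed per-block remote attribute header size */

/* usable attribute data bytes in each remote block */
static unsigned int rmt_data_space(void)
{
	return BLOCK_SIZE - RMT_HDR_SIZE;
}

/* filesystem blocks needed to hold valuelen bytes of attribute data */
static unsigned int rmt_blocks(unsigned int valuelen)
{
	return (valuelen + rmt_data_space() - 1) / rmt_data_space();
}

int main(void)
{
	printf("64k value -> %u remote blocks\n", rmt_blocks(65536));
	return 0;
}

With one header per block, the same calculation serves the set, get and
remove paths without first walking the extents that hold the attribute.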
Because the copy-in and copy-out now need to iterate over each
filesystem block, I moved them into helper functions so we separate
the block mapping and buffer manipulations from the attribute data
and CRC header manipulations. The code becomes much clearer as a
result, and it is a lot easier to understand and debug. It also
appears to be much more robust - once it worked on 4k block size
filesystems, it has worked without failure on 1k block size
filesystems, too.
Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
Reviewed-by: Ben Myers <bpm@xxxxxxx>
Signed-off-by: Ben Myers <bpm@xxxxxxx>
commit d4c712bcf26a25c2b67c90e44e0b74c7993b5334
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date: Tue May 21 18:02:06 2013 +1000
xfs: fully initialise temp leaf in xfs_attr3_leaf_compact
xfs_attr3_leaf_compact() uses a temporary buffer for compacting the
entries in a leaf. It copies the original buffer into the
temporary buffer, then zeros the original buffer completely. It then
copies the entries back into the original buffer. However, the
original buffer has not been correctly initialised, and so the
movement of the entries goes horribly wrong.
Make sure the zeroed destination buffer is fully initialised, and
once we've set up the destination incore header appropriately, write
it back to the buffer before starting to move entries around.
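As a minimal sketch of that ordering (the types, sizes and names below
are invented for illustration, not the XFS structures): zero the
destination, write the incore header back into it, and only then move
entries:

#include <string.h>

#define BUF_SIZE 4096

struct leaf_hdr { unsigned short count; unsigned short usedbytes; };

static void leaf_compact(char dst_buf[BUF_SIZE])
{
	char tmp_buf[BUF_SIZE];
	struct leaf_hdr hdr = { 0, 0 };

	memcpy(tmp_buf, dst_buf, BUF_SIZE);	/* keep the original contents */
	memset(dst_buf, 0, BUF_SIZE);		/* zero the original completely */

	/* set up the destination incore header and write it back first... */
	memcpy(dst_buf, &hdr, sizeof(hdr));

	/* ...only then start moving entries from tmp_buf back into dst_buf */
}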
While debugging this, the _d/_s prefixes weren't sufficient to
remind me which buffer was which, so rename them all to _src/_dst.
Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
Reviewed-by: Ben Myers <bpm@xxxxxxx>
Signed-off-by: Ben Myers <bpm@xxxxxxx>
commit 8517de2a81da830f5d90da66b4799f4040c76dc9
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date: Tue May 21 18:02:05 2013 +1000
xfs: fully initialise temp leaf in xfs_attr3_leaf_unbalance
xfs_attr3_leaf_unbalance() uses a temporary buffer for recombining
the entries in two leaves when the destination leaf requires
compaction. The temporary buffer ends up being copied back over the
original destination buffer, so the header in the temporary buffer
needs to contain all the information that is in the destination
buffer.
To make sure the temporary buffer is fully initialised, once we've
set up the temporary incore header appropriately, write it back to
the temporary buffer before starting to move entries around.
Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
Reviewed-by: Ben Myers <bpm@xxxxxxx>
Signed-off-by: Ben Myers <bpm@xxxxxxx>
commit 6863ef8449f1908c19f43db572e4474f24a1e9da
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date: Tue May 21 18:02:04 2013 +1000
xfs: correctly map remote attr buffers during removal
If we don't map the buffers correctly (same as for get/set
operations) then the incore buffer lookup will fail. If a block
number matches but the length is wrong, then debug kernels will ASSERT
fail in _xfs_buf_find() due to the length mismatch. Ensure that we
map the buffers correctly by basing the length of the buffer on the
attribute data length rather than the remote block count.
Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
Reviewed-by: Ben Myers <bpm@xxxxxxx>
Signed-off-by: Ben Myers <bpm@xxxxxxx>
commit 4af3644c9a53eb2f1ecf69cc53576561b64be4c6
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date: Tue May 21 18:02:03 2013 +1000
xfs: remote attribute tail zeroing does too much
When the attribute data does not fill the entire remote block, we
zero the remaining part of the buffer. This, however, needs to take
into account that the buffer has a header, and so the offset where
zeroing starts and the length of zeroing need to take this into
account. Otherwise we end up with zeros over the end of the
attribute value when CRCs are enabled.
While there, make sure we only ask to map an extent that covers the
remaining range of the attribute, rather than asking every time for
the full length of remote data. If the remote attribute blocks are
contiguous with other parts of the attribute tree, it will map those
blocks as well and we can potentially zero them incorrectly. We can
also get buffer size mismatches when trying to read or remove the
remote attribute, and this can lead to not finding the correct
buffer when looking it up in cache.
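A sketch of the corrected arithmetic (block and header sizes are
assumptions): zeroing starts after the header plus the valid data and
covers only the rest of that block's data region:

#include <string.h>

#define BLOCK_SIZE	4096u
#define RMT_HDR_SIZE	56u	/* assumed per-block header size */

static void zero_remote_tail(char *block, unsigned int databytes)
{
	unsigned int space = BLOCK_SIZE - RMT_HDR_SIZE;

	if (databytes >= space)
		return;		/* block is full of data, nothing to zero */

	/* start zeroing after the header and the valid data... */
	memset(block + RMT_HDR_SIZE + databytes, 0,
	       /* ...and zero only the remainder of this block */
	       space - databytes);
}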
Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
Reviewed-by: Ben Myers <bpm@xxxxxxx>
Signed-off-by: Ben Myers <bpm@xxxxxxx>
commit 913e96bc292e1bb248854686c79d6545ef3ee720
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date: Tue May 21 18:02:02 2013 +1000
xfs: remote attribute read too short
Reading a maximally sized remote attribute fails when CRCs are
enabled with this verification error:
XFS (vdb): remote attribute header does not match required off/len/owner)
There are two reasons for this, the first being that the
length of the buffer being read is determined from the
args->rmtblkcnt which doesn't take into account CRC headers. Hence
the mapped length ends up being too short and so we need to
calculate it directly from the value length.
The second is that the byte count of valid data within a buffer is
capped by the length of the data and so doesn't take into account
that the buffer might be longer due to headers. Hence we need to
calculate the data space in the buffer first before calculating the
actual byte count of data.
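Sketched as arithmetic (constants and names here are illustrative
assumptions): the data space in the buffer is computed first, and the
valid byte count is then capped by it rather than by the raw buffer
length:

#define BLOCK_SIZE	4096u
#define RMT_HDR_SIZE	56u

/* bytes of attribute data held in a buffer spanning nblocks blocks */
static unsigned int rmt_bytes_in_buf(unsigned int nblocks,
				     unsigned int valuelen_remaining)
{
	/* data space in the buffer, excluding the per-block headers */
	unsigned int space = nblocks * (BLOCK_SIZE - RMT_HDR_SIZE);

	return valuelen_remaining < space ? valuelen_remaining : space;
}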
Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
Reviewed-by: Ben Myers <bpm@xxxxxxx>
Signed-off-by: Ben Myers <bpm@xxxxxxx>
commit 90253cf142469a40f89f989904abf0a1e500e1a6
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date: Tue May 21 18:02:01 2013 +1000
xfs: remote attribute allocation may be contiguous
When CRCs are enabled, there may be multiple allocations made if the
headers cause a length overflow. This, however, does not mean that
the number of headers required increases, as the second and
subsequent extents may be contiguous with the previous extent. Hence
when we map the extents to write the attribute data, we may end up
with fewer extents than allocations made. Hence the assertion that we
consume the number of headers we calculated in the allocation loop
is incorrect and needs to be removed.
Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
Reviewed-by: Ben Myers <bpm@xxxxxxx>
Signed-off-by: Ben Myers <bpm@xxxxxxx>
commit f648167f3ac79018c210112508c732ea9bf67c7b
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date: Tue May 21 18:02:00 2013 +1000
xfs: avoid nesting transactions in xfs_qm_scall_setqlim()
Lockdep reports:
=============================================
[ INFO: possible recursive locking detected ]
3.9.0+ #3 Not tainted
---------------------------------------------
setquota/28368 is trying to acquire lock:
(sb_internal){++++.?}, at: [<c11e8846>] xfs_trans_alloc+0x26/0x50
but task is already holding lock:
(sb_internal){++++.?}, at: [<c11e8846>] xfs_trans_alloc+0x26/0x50
from xfs_qm_scall_setqlim()->xfs_dqread() when a dquot needs to be
allocated.
xfs_qm_scall_setqlim() is starting a transaction and then not
passing it into xfs_qm_dqget(), and so it starts its own transaction
when allocating the dquot. Splat!
Fix this by not allocating the dquot in xfs_qm_scall_setqlim()
inside the setqlim transaction. This requires getting the dquot
first (and allocating it if necessary) then dropping and relocking
the dquot before joining it to the setqlim transaction.
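A self-contained sketch of that ordering follows; every type and
helper below is a stand-in for illustration, not the real XFS
interface. The dquot is obtained (and allocated, under its own
transaction, if necessary) before the setqlim transaction starts, so
transactions are never nested:

#include <stdio.h>

struct dquot { int id; int locked; };
struct trans { int active; };

/* stand-ins: getting a dquot may allocate it in its own transaction */
static int  dquot_get(int id, struct dquot *dqp) { dqp->id = id; dqp->locked = 1; return 0; }
static void dquot_unlock(struct dquot *dqp)      { dqp->locked = 0; }
static void dquot_lock(struct dquot *dqp)        { dqp->locked = 1; }
static int  trans_alloc(struct trans *tp)        { tp->active = 1; return 0; }
static void trans_join_dquot(struct trans *tp, struct dquot *dqp) { (void)tp; (void)dqp; }
static int  trans_commit(struct trans *tp)       { tp->active = 0; return 0; }

static int setqlim(int id)
{
	struct dquot dq;
	struct trans tp;
	int error;

	/* 1. get (and possibly allocate) the dquot before any setqlim transaction */
	error = dquot_get(id, &dq);
	if (error)
		return error;

	/* 2. drop the dquot lock before starting the setqlim transaction */
	dquot_unlock(&dq);

	error = trans_alloc(&tp);
	if (error)
		return error;

	/* 3. relock the dquot and join it to the setqlim transaction */
	dquot_lock(&dq);
	trans_join_dquot(&tp, &dq);

	/* ...apply the new limits here... */

	return trans_commit(&tp);
}

int main(void)
{
	printf("setqlim returned %d\n", setqlim(42));
	return 0;
}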
Reported-by: Michael L. Semon <mlsemon35@xxxxxxxxx>
Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
Reviewed-by: Ben Myers <bpm@xxxxxxx>
Signed-off-by: Ben Myers <bpm@xxxxxxx>
-----------------------------------------------------------------------
Summary of changes:
fs/xfs/xfs_attr_leaf.c | 71 ++++++---
fs/xfs/xfs_attr_remote.c | 408 +++++++++++++++++++++++++++++------------------
fs/xfs/xfs_attr_remote.h | 10 ++
fs/xfs/xfs_buf.c | 1 +
fs/xfs/xfs_qm_syscalls.c | 40 +++--
5 files changed, 330 insertions(+), 200 deletions(-)
hooks/post-receive
--
XFS development tree