
[PATCH] xfstests: use xfs_io fiemap instead of filefrag V2

To: <xfs@xxxxxxxxxxx>, <sandeen@xxxxxxxxxx>
Subject: [PATCH] xfstests: use xfs_io fiemap instead of filefrag V2
From: Josef Bacik <jbacik@xxxxxxxxxxxx>
Date: Mon, 24 Jun 2013 10:21:36 -0400
Btrfs has always failed shared/218 because of the way we allocate extents on
disk.  The last part of 218 writes a contiguous, holey pattern from the start of
the file forward, which for btrfs means we get 16 extents that are physically
contiguous.  filefrag -v shows all 16 extents, but reports an extent count of 1
because they are physically contiguous.  This isn't quite right and makes the
test fail.  So instead of filefrag, use xfs_io -c fiemap, which prints the whole
map, and derive the count from that.  With this patch btrfs now passes the test;
I also verified that ext4 and xfs still pass.  Thanks,

Signed-off-by: Josef Bacik <jbacik@xxxxxxxxxxxx>
V1->V2: change _require_defrag to check for xfs_io having fiemap support as per
Eric's suggestion.

 common/defrag |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/common/defrag b/common/defrag
index f04fd42..a4bc976 100644
--- a/common/defrag
+++ b/common/defrag
@@ -38,13 +38,13 @@ _require_defrag()
     _require_command $DEFRAG_PROG
-    _require_command $FILEFRAG_PROG
+    _require_xfs_io_fiemap
-       $FILEFRAG_PROG $1 | awk '{print $2}'
-       $FILEFRAG_PROG -v $1  >> $seqres.full 2>&1
+       $XFS_IO_PROG -c "fiemap" $1 | tail -n +2 | grep -v hole | wc -l
+       $XFS_IO_PROG -c "fiemap" $1  >> $seqres.full 2>&1
 # Defrag file, check it, and remove it.

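The replacement pipeline can be exercised standalone against sample fiemap
output.  The extent layout in the here-doc below is hypothetical; in the actual
helper the input comes from `$XFS_IO_PROG -c "fiemap" $file`:

```shell
#!/bin/sh
# Hypothetical fiemap output for a file with a hole in the middle.
# Real xfs_io fiemap output has this shape: a filename header line,
# then one line per mapping, with holes labelled "hole".
sample_fiemap() {
	cat <<'EOF'
/mnt/test/file:
	0: [0..7]: 25088..25095
	1: [8..15]: hole
	2: [16..23]: 25104..25111
EOF
}

# Same pipeline the patch adds: drop the header line, ignore holes,
# and count the remaining data-extent lines.
count=$(sample_fiemap | tail -n +2 | grep -v hole | wc -l)
echo "$count"
```

For this sample the pipeline counts 2 data extents, since the hole line is
filtered out; filefrag's summary count would instead merge physically
contiguous extents, which is the discrepancy the patch works around.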