[PATCH 2/2] xfstests: 219: ignore duplicates reported by repquota
Eric Sandeen
sandeen at sandeen.net
Thu Jan 28 23:09:20 CST 2010
Alex Elder wrote:
> (Re-sending; I misaddressed it the first time.)
>
> Arrange to ignore duplicate entries reported by the repquota command.
> This can happen if an id is used more than once (such as when two user
> names are assigned the same uid).
>
> Do this here by simply dropping any reported entries whose id number
> has already been seen in the output.
>
> Signed-off-by: Alex Elder <aelder at sgi.com>
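(for context, the duplicate case Alex describes comes from two passwd
entries sharing one uid; the names and uid below are made up:

  fsgqa:x:32000:32000::/home/fsgqa:/bin/bash
  fsgqa2:x:32000:32000::/home/fsgqa2:/bin/bash

with a setup like that, repquota can report the shared id more than once.)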
again with the late review ;)
This is causing failures for me:
--- 219.out 2009-11-12 17:27:40.209152659 -0600
+++ 219.out.bad 2010-01-28 23:03:05.933323333 -0600
@@ -27,7 +27,6 @@
                         Block limits                File limits
 User            used    soft    hard  grace    used  soft  hard grace
 ----------------------------------------------------------------------
-#1        --     144       0       0              3     0     0
raw output looks like:
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard grace
----------------------------------------------------------------------
#0        --       0       0       0              3     0     0
#1        --     144       0       0              3     0     0
the awk added below looks inverted to me: the first time an id shows up,
seen[$1] is still unset, so the line gets skipped by "next", and only
repeat occurrences fall through to the print.  that's why the unique #1
entry got dropped from my output.
Alex, you look like an awk-master, can you fix it?
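there's probably better awk to be written than this, but something like
the following (untested sketch, same pipeline as the patch) keeps the
first line for each id and drops only the repeats:

  repquota -$type -s -n $SCRATCH_MNT | grep -v "^#0" | filter_scratch |
	awk '
		# seen[$1]++ is 0 (false) the first time an id appears,
		# so the first occurrence prints and only repeats hit "next"
		/^#/ { if (seen[$1]++) next }
		{ print }'

the post-increment tests the old value before bumping it, so the first
occurrence always survives and a genuinely duplicated id still prints
only once.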
-Eric
> ---
> 219 | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> Index: b/219
> ===================================================================
> --- a/219
> +++ b/219
> @@ -85,7 +85,8 @@ test_accounting()
>  		$here/src/lstat64 $file | head -3 | filter_scratch
>  	done
>
> -	repquota -$type -s -n $SCRATCH_MNT | grep -v "^#0" | filter_scratch
> +	repquota -$type -s -n $SCRATCH_MNT | grep -v "^#0" | filter_scratch |
> +		awk '/^#/ { if (! seen[$1]) { seen[$1]++; next; } } { print }'
>  }
>
> # real QA test starts here
>