

To: Brian Foster <bfoster@xxxxxxxxxx>
Subject: Re: [PATCH 06/13] xfs: xfs_sync_data is redundant.
From: Mark Tinguely <tinguely@xxxxxxx>
Date: Mon, 01 Oct 2012 16:31:36 -0500
Cc: Dave Chinner <david@xxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <5069F9B0.50804@xxxxxxxxxx>
References: <1348807485-20165-1-git-send-email-david@xxxxxxxxxxxxx> <1348807485-20165-7-git-send-email-david@xxxxxxxxxxxxx> <5069F9B0.50804@xxxxxxxxxx>
User-agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:9.0) Gecko/20120122 Thunderbird/9.0
On 10/01/12 15:14, Brian Foster wrote:
<deletes by mt>
Heads up: I was doing some testing against my eofblocks set rebased
on this patchset, and I'm reproducing a new xfstests 273 failure. The
failure bisects down to this patch.

For the bisection, I'm running xfs top of tree plus the following patch:

xfs: only update the last_sync_lsn when a transaction completes

... and patches 1-6 of this set on top of that. i.e.:

xfs: xfs_sync_data is redundant.
xfs: Bring some sanity to log unmounting
xfs: sync work is now only periodic log work
xfs: don't run the sync work if the filesystem is read-only
xfs: rationalise xfs_mount_wq users
xfs: xfs_syncd_stop must die
xfs: only update the last_sync_lsn when a transaction completes
xfs: Make inode32 a remountable option

This is on a 16p (according to /proc/cpuinfo) x86-64 system with 32GB
RAM. The test and scratch volumes are both 500GB lvm volumes on top of a
hardware raid. I haven't looked into this at all yet but I wanted to
drop it on the list for now. The 273 output is attached.


 <deletes by mt>


QA output created by 273
start the workload
_porter 31 not complete
_porter 79 not complete
_porter 149 not complete
_porter 74 not complete
_porter 161 not complete
_porter 54 not complete
_porter 98 not complete
_porter 99 not complete
_porter 167 not complete
_porter 76 not complete
_porter 45 not complete
_porter 152 not complete
_porter 173 not complete
_porter 24 not complete

 <deletes by mt>

I see it too, on a single machine. It looks like an interaction between patch 06 and the "...update the last_sync_lsn..." patch.

I like the "...update the last_sync_lsn..." patch because it fixes the "xlog_verify_tail_lsn: tail wrapped" and "xlog_verify_tail_lsn: ran out of log space" messages that I have been getting on that machine.

