
Re: aborted SCSI commands while discarding/unmapping via mkfs.xfs

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: aborted SCSI commands while discarding/unmapping via mkfs.xfs
From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
Date: Wed, 15 Aug 2012 08:31:14 +0200
Cc: "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>, Christoph Hellwig <hch@xxxxxx>, Ronnie Sahlberg <ronniesahlberg@xxxxxxxxx>, dchinner@xxxxxxxxxx
In-reply-to: <20120814213535.GK2877@dastard>
References: <502AB82D.9090408@xxxxxxxxxxxx> <20120814213535.GK2877@dastard>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:14.0) Gecko/20120714 Thunderbird/14.0
Am 14.08.2012 23:35, schrieb Dave Chinner:
On Tue, Aug 14, 2012 at 10:42:21PM +0200, Stefan Priebe wrote:
Hello list,

I'm testing KVM with qemu, libiscsi, virtio-scsi-pci and
scsi-generic on top of a Nexenta storage solution. While doing
mkfs.xfs on an already used LUN / block device I discovered that the
unmapping / discard commands mkfs.xfs sends take a long time, which
results in a lot of aborted SCSI commands.

Sounds like a problem with your storage being really slow at
processing discard requests.
Would it make sense to let mkfs.xfs send these unmapping commands in
small portions (e.g. 100MB),

No, because the underlying implementation (blkdev_issue_discard())
already breaks the discard request up into the granularity that is
supported by the underlying storage.....
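The splitting Dave describes can be pictured as a simple chunking loop. This is a hypothetical Python sketch of the idea, not the kernel's actual blkdev_issue_discard() implementation; the function and parameter names are illustrative:

```python
def split_discard(start, length, granularity, max_discard):
    """Break one large discard request into chunks the device accepts.

    Simplified model of the kernel behavior: each chunk is capped at
    the device's maximum discard size and rounded down to a multiple
    of the discard granularity (except for a final sub-granule tail).
    All values are in bytes.
    """
    chunks = []
    while length > 0:
        # Cap the chunk at the device's advertised maximum discard size.
        chunk = min(length, max_discard)
        # Keep the chunk aligned to the granularity, unless what is
        # left is smaller than a single granule.
        if chunk > granularity:
            chunk -= chunk % granularity
        chunks.append((start, chunk))
        start += chunk
        length -= chunk
    return chunks

# e.g. a 250MB discard against a device reporting 1MB granularity
# and a 100MB maximum is issued as 100MB + 100MB + 50MB.
```

So even if mkfs.xfs asks for the whole LUN at once, the block layer already hands the storage pieces no larger than what it claims to support.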

or is there another problem in the
path to the block device? Any suggestions or ideas?

.... which, of course, had bugs in it, so is a much more likely cause
of your problems.

That said, the discard granularity is derived from information the
storage supplies the kernel in its SCSI mode page, so if the
discard granularity is too large, that's a storage problem, not a
Linux problem at all, let alone a mkfs.xfs problem.
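The limits the kernel derived from what the device reported are visible under sysfs on a running system, which makes it easy to check whether the storage is advertising a sane discard granularity. A small sketch (the sysfs layout is standard; the device name "sda" is just an example, and the helper name is mine):

```python
import os

def read_queue_limit(device, name, sysfs="/sys/block"):
    """Read one request-queue limit for a block device.

    The kernel exports the limits it computed from the device's
    reported parameters under /sys/block/<dev>/queue/.  The `sysfs`
    root is overridable so the helper can be exercised against a
    fake directory tree in tests.
    """
    path = os.path.join(sysfs, device, "queue", name)
    with open(path) as f:
        return int(f.read().strip())

# Typical usage on a real system (device name is an example):
#   read_queue_limit("sda", "discard_granularity")
#   read_queue_limit("sda", "discard_max_bytes")
```

A discard_granularity or discard_max_bytes value that looks absurdly large here would point at the storage's reported limits rather than at mkfs.xfs.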

Thanks for this excellent explanation.

