
To: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
Subject: Re: aborted SCSI commands while discarding/unmapping via mkfs.xfs
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Wed, 15 Aug 2012 07:35:35 +1000
Cc: "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>, Christoph Hellwig <hch@xxxxxx>, Ronnie Sahlberg <ronniesahlberg@xxxxxxxxx>, dchinner@xxxxxxxxxx
In-reply-to: <502AB82D.9090408@xxxxxxxxxxxx>
References: <502AB82D.9090408@xxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Tue, Aug 14, 2012 at 10:42:21PM +0200, Stefan Priebe wrote:
> Hello list,
> 
> I'm testing KVM with qemu, libiscsi, virtio-scsi-pci and
> scsi-generic on top of a Nexenta storage solution. While doing
> mkfs.xfs on an already-used LUN / block device, I discovered that
> the unmapping / discard commands mkfs.xfs sends take a long time,
> which results in a lot of aborted SCSI commands.

Sounds like a problem with your storage being really slow at
discards.

> Would it make sense to let mkfs.xfs send these unmapping commands
> in small portions (e.g. 100MB)

No, because the underlying implementation (blkdev_issue_discard())
already breaks the discard request up into the granularity that is
supported by the underlying storage.....

> or is there another problem in the
> path to the block device? Any suggestions or ideas?

.... which, of course, had bugs in it, so it is a much more likely
cause of your problems.
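
For reference, here's a minimal user-space sketch of that splitting
(assumptions on my part: Linux, the BLKDISCARD ioctl, and /dev/sdX as
a placeholder for a scratch device you can safely wipe). It mirrors
the chunking blkdev_issue_discard() already does in-kernel, which is
why doing it again in mkfs.xfs would buy you nothing:

/*
 * Illustrative only: split a large discard into fixed-size chunks
 * via BLKDISCARD.  The kernel performs equivalent splitting inside
 * blkdev_issue_discard(), bounded by the device's reported limits.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>		/* BLKDISCARD */

static int discard_in_chunks(int fd, uint64_t start, uint64_t len,
			     uint64_t chunk)
{
	while (len > 0) {
		/* range[0] = byte offset, range[1] = byte count */
		uint64_t range[2] = { start, len < chunk ? len : chunk };

		if (ioctl(fd, BLKDISCARD, range) < 0) {
			perror("BLKDISCARD");
			return -1;
		}
		start += range[1];
		len -= range[1];
	}
	return 0;
}

int main(void)
{
	int fd = open("/dev/sdX", O_WRONLY);	/* placeholder device */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Discard the first 1GiB in 100MB chunks, per the question. */
	if (discard_in_chunks(fd, 0, 1ULL << 30, 100ULL << 20))
		return 1;
	return close(fd);
}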

That said, the discard granularity is derived from information the
storage supplies to the kernel in its SCSI mode page, so if the
discard granularity is too large, that's a storage problem, not a
Linux problem at all, let alone a mkfs.xfs problem.
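
If you want to see what the storage actually advertised, the kernel
exports the derived limits through sysfs. A minimal sketch (my
assumptions: sysfs mounted at /sys, and "sdX" as a placeholder for
your device):

/*
 * Print the discard limits the kernel derived from the device's
 * reported values.  Both are in bytes; a discard_granularity of 0
 * means the device reported no discard support.
 */
#include <stdio.h>

int main(void)
{
	static const char *files[] = {
		"/sys/block/sdX/queue/discard_granularity",
		"/sys/block/sdX/queue/discard_max_bytes",
	};
	char buf[64];
	int i;

	for (i = 0; i < 2; i++) {
		FILE *f = fopen(files[i], "r");

		if (!f) {
			perror(files[i]);
			continue;
		}
		if (fgets(buf, sizeof(buf), f))
			printf("%s: %s", files[i], buf);
		fclose(f);
	}
	return 0;
}

If discard_max_bytes is small relative to the LUN size, the single
discard mkfs.xfs issues will be split into many commands, and a
target that services each one slowly will time out exactly as you
are seeing.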

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
