
Re: quotacheck speed

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: quotacheck speed
From: Arkadiusz Miśkiewicz <arekm@xxxxxxxx>
Date: Mon, 13 Feb 2012 19:16:51 +0100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20120212222159.GJ12836@dastard>
References: <201202122201.07649.arekm@xxxxxxxx> <20120212222159.GJ12836@dastard>
User-agent: KMail/1.13.7 (Linux/3.3.0-rc3-00171-g8df54d6-dirty; KDE/4.8.0; x86_64; ; )
On Sunday 12 of February 2012, Dave Chinner wrote:
> On Sun, Feb 12, 2012 at 10:01:07PM +0100, Arkadiusz Miśkiewicz wrote:
> > Hi,
> > 
> > When mounting an 800GB filesystem (after a repair, for example),
> > quotacheck here takes 10 minutes. That's quite a long time to add to
> > the total filesystem downtime (repair + quotacheck).
> 
> How long does a repair vs quotacheck of that same filesystem take?
> repair has to iterate the inodes 2-3 times, so if that is faster
> than quotacheck, then that is really important to know....

I don't have exact times, but judging from Nagios and dmesg it was roughly:
repair ~20 minutes, quotacheck ~10 minutes (it's 800GB of maildirs).

> 
> > I wonder if quotacheck can be somehow improved or done differently,
> > like doing it in parallel with normal fs usage (so there will be no
> > downtime)?
> 
> quotacheck makes the assumption that it is run on an otherwise idle
> filesystem that nobody is accessing. Well, what it requires is that
> nobody is modifying it. What we could do is bring the filesystem up
> in a frozen state so that read-only access could be made but
> modifications are blocked until the quotacheck is completed.

Read-only is better than no access at all. I was hoping there is a way to
have quotacheck recalculated on the fly, taking into account all the write
accesses that happen in the meantime.
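
For what it's worth, the semantics you describe (reads allowed, modifications
blocked) are what a frozen filesystem already gives us from userspace. A
minimal sketch, assuming only the standard FIFREEZE/FITHAW ioctls -- the
"come up frozen at mount" path you mention would be new kernel work:

/*
 * freeze_sketch.c - sketch of the freeze semantics described above:
 * while frozen, reads succeed and modifications block until thaw.
 * Uses only the existing FIFREEZE/FITHAW ioctls; the mount-time
 * variant does not exist yet.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
        int fd;

        if (argc < 2)
                return 1;
        fd = open(argv[1], O_RDONLY | O_DIRECTORY);
        if (fd < 0 || ioctl(fd, FIFREEZE, 0) < 0) {
                perror("freeze");
                return 1;
        }
        /* ... quotacheck would run here: readers work, writers block ... */
        if (ioctl(fd, FITHAW, 0) < 0) {
                perror("thaw");
                return 1;
        }
        close(fd);
        return 0;
}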

> Also, quotacheck uses the bulkstat code to iterate all the inodes
> quickly. Improvements in bulkstat speed will translate directly
> into faster quotachecks. quotacheck could probably drive bulkstat in
> a parallel manner to do the quotacheck faster, but that assumes that
> the underlying storage is not already seek bound. What is the
> utilisation of the underlying storage and CPU while quotacheck is
> running?

Will try to gather more information then.
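
In the meantime, to check that I understand what quotacheck has to do: my
mental model is a bulkstat walk like the sketch below. It uses the
XFS_IOC_FSBULKSTAT ioctl from the xfsprogs headers; the batch size and the
printing are illustrative only, standing in for where the kernel would fold
the counts into the dquots:

/*
 * bulkstat_sketch.c - walk all inodes via XFS_IOC_FSBULKSTAT, the
 * interface quotacheck's inode iteration is built on, and print the
 * per-inode usage that quota accounting would accumulate.
 */
#include <xfs/xfs.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>

#define NBSTAT 1024     /* inodes per ioctl call (illustrative) */

int main(int argc, char **argv)
{
        struct xfs_bstat buf[NBSTAT];
        struct xfs_fsop_bulkreq req;
        __u64 last = 0;
        __s32 count = 0;
        int i, fd;

        if (argc < 2)
                return 1;
        fd = open(argv[1], O_RDONLY);   /* any fd on the filesystem */
        if (fd < 0) {
                perror("open");
                return 1;
        }

        req.lastip = &last;
        req.icount = NBSTAT;
        req.ubuffer = buf;
        req.ocount = &count;

        while (ioctl(fd, XFS_IOC_FSBULKSTAT, &req) == 0 && count > 0) {
                for (i = 0; i < count; i++)
                        /* quotacheck adds bs_blocks and an inode count
                         * into the uid/gid/project dquots here */
                        printf("ino %llu uid %u blocks %lld\n",
                               (unsigned long long)buf[i].bs_ino,
                               buf[i].bs_uid,
                               (long long)buf[i].bs_blocks);
        }
        close(fd);
        return 0;
}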

> 
> Otherwise, bulkstat inode prefetching could be improved the way
> xfs_repair's was: look at inode chunk density to choose IO patterns,
> and slice and dice large IO buffers into smaller inode buffers.
> We can actually do that efficiently now that we don't use the page
> cache for metadata caching. If repair is iterating inodes faster
> than bulkstat, then this optimisation will be the reason, and having
> that data point is very important....
> 
> Cheers,
> 
> Dave.
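
Just so I follow the slice-and-dice idea, I'm picturing something like the
sketch below -- purely conceptual, not the real repair code, with made-up
sizes: one large sequential read that is then carved into inode-cluster
buffers, instead of many small seek-bound reads:

/*
 * prefetch_sketch.c - conceptual illustration of slicing one large
 * sequential read into inode-cluster buffers, so a dense run of
 * inode chunks costs one seek instead of many. BIG_IO and
 * CLUSTER_SIZE are made-up values for illustration.
 */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define BIG_IO          (4 * 1024 * 1024)       /* one large read */
#define CLUSTER_SIZE    (8 * 1024)              /* inode cluster  */

static void process_inode_cluster(char *cluster)
{
        /* parse the on-disk inodes in this cluster ... */
}

static void prefetch_range(int devfd, off_t offset)
{
        char *big = malloc(BIG_IO);
        ssize_t n, off;

        if (!big)
                return;
        n = pread(devfd, big, BIG_IO, offset);

        /* slice the large IO buffer into inode cluster buffers */
        for (off = 0; off + CLUSTER_SIZE <= n; off += CLUSTER_SIZE)
                process_inode_cluster(big + off);
        free(big);
}

int main(int argc, char **argv)
{
        int fd;

        if (argc < 2)
                return 1;
        fd = open(argv[1], O_RDONLY);
        if (fd < 0)
                return 1;
        prefetch_range(fd, 0);
        close(fd);
        return 0;
}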


-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/
