
Re: bad performance on touch/cp file on XFS system

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: bad performance on touch/cp file on XFS system
From: Zhang Qiang <zhangqiang.buaa@xxxxxxxxx>
Date: Tue, 26 Aug 2014 18:04:52 +0800
Cc: Greg Freemyer <greg.freemyer@xxxxxxxxx>, xfs-oss <xfs@xxxxxxxxxxx>
In-reply-to: <20140826023754.GH20518@dastard>
References: <CAKEtwsWxZseS8M+O7vSR2FRXr4gjVQ0RDO8ok+jMPWq-8jPEeA@xxxxxxxxxxxxxx> <20140825051801.GY26465@dastard> <CAKEtwsXiVKTWAW+YszjNnFnD4_Ld7g2qXEvw48A-SitYSGyXHA@xxxxxxxxxxxxxx> <20140825090843.GE20518@dastard> <CAKEtwsU4gywG7fVVMVU1Y_TG9Pgg_-sFV0=SPg_7Ob5EV6FTew@xxxxxxxxxxxxxx> <20140825222657.GF20518@dastard> <CAGpXXZL2=ynv4x6hhBSsBPZmBG9Ac8mPOgE-Ekjs3tLvQO9Uaw@xxxxxxxxxxxxxx> <20140826023754.GH20518@dastard>
Thanks Dave/Greg for your analysis and suggestions.

I can summarize what I should do next:

- back up my data using xfsdump
- rebuild the filesystem with mkfs, using agcount=32 for the 2T disk
- mount the filesystem with the inode64,nobarrier options
- apply the patches that add the free inode btree to the on-disk structure

As we have about 100 servers that need to be backed up, this will take considerable effort. Do you have any other suggestions?
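For reference, the per-server migration steps above amount to something like the following. This is only a sketch: /dev/sda4 and /data1 are taken from this thread, the dump destination /backup/data1.dump is a hypothetical path, and every command is echoed as a dry run; drop the leading echo to actually execute (requires root).

```shell
#!/bin/sh
# Dry-run sketch of the per-server migration described above.
# /dev/sda4 and /data1 are from this thread; /backup/data1.dump is a
# hypothetical path. Remove the leading 'echo' on each line to actually
# execute (requires root).
DEV=/dev/sda4
MNT=/data1

echo xfsdump -l 0 -f /backup/data1.dump "$MNT"   # level-0 (full) backup
echo umount "$MNT"
echo mkfs.xfs -f -d agcount=32 "$DEV"            # rebuild with 32 AGs
echo mount -o inode64,nobarrier "$DEV" "$MNT"
echo xfsrestore -f /backup/data1.dump "$MNT"     # restore the data
```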

What I am testing (ongoing):
- create a new 2T partition filesystem
- create small files until the whole space is filled, then remove some of them randomly
- check the performance of touch/cp on files
- apply the patches and verify them
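The fill-and-delete part of that test can be sketched at small scale like this (the real run targets the freshly made 2T XFS partition and fills it to capacity; the temp directory, 1000-file count, and file names here are illustrative stand-ins):

```shell
#!/bin/sh
# Small-scale sketch of the fill/delete/touch test described above.
# The real run fills a fresh 2T XFS partition; here a temp directory
# and 1000 files stand in for that.
DIR=$(mktemp -d)
N=1000

# fill: create many small files
i=0
while [ "$i" -lt "$N" ]; do
    echo data > "$DIR/f$i"
    i=$((i + 1))
done

# delete every other file (a stand-in for random removal) so free
# inodes end up scattered through the inode btree
ls "$DIR" | awk 'NR % 2' | while read -r f; do rm "$DIR/$f"; done

# measure how long allocating one more inode takes
time touch "$DIR/newfile"
```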

I have gathered more data from the server:

1) flush all caches (echo 3 > /proc/sys/vm/drop_caches) and unmount the filesystem
2) mount the filesystem and test with the touch command
 * The first touch of a new file takes about ~23s
 * A second touch takes about ~0.1s
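For reproducibility, the cold/warm measurement was done along these lines. Shown as a dry run: each step is printed rather than executed, since the real run needs root and this server's /dev/sda4 and /data1; swap the echo in run() for direct execution to use it for real.

```shell
#!/bin/sh
# Cold- vs warm-cache timing procedure from above, as a dry run.
# Each step is printed rather than executed; the real run needs root
# and this server's /dev/sda4 / /data1 mount.
run() { echo "# $*"; }   # swap the echo for real execution when ready

run sync
run 'echo 3 > /proc/sys/vm/drop_caches'   # flush page cache, dentries, inodes
run umount /data1
run mount /dev/sda4 /data1
run time touch /data1/coldfile            # first (cold) touch: ~23s here
run time touch /data1/warmfile            # second (warm) touch: ~0.1s
```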

Here's the perf data:
First touch command:

Events: 435  cycles
+   7.51%  touch  [xfs]              [k] xfs_inobt_get_rec
+   5.61%  touch  [xfs]              [k] xfs_btree_get_block
+   5.38%  touch  [xfs]              [k] xfs_btree_increment
+   4.26%  touch  [xfs]              [k] xfs_btree_get_rec
+   3.73%  touch  [kernel.kallsyms]  [k] find_busiest_group
+   3.43%  touch  [xfs]              [k] _xfs_buf_find
+   2.72%  touch  [xfs]              [k] xfs_btree_readahead
+   2.38%  touch  [xfs]              [k] xfs_trans_buf_item_match
+   2.34%  touch  [xfs]              [k] xfs_dialloc
+   2.32%  touch  [kernel.kallsyms]  [k] generic_make_request
+   2.09%  touch  [xfs]              [k] xfs_btree_rec_offset
+   1.75%  touch  [kernel.kallsyms]  [k] kmem_cache_alloc
+   1.63%  touch  [kernel.kallsyms]  [k] cpumask_next_and
+   1.41%  touch  [sd_mod]           [k] sd_prep_fn
+   1.41%  touch  [kernel.kallsyms]  [k] get_page_from_freelist
+   1.38%  touch  [kernel.kallsyms]  [k] __alloc_pages_nodemask
+   1.27%  touch  [kernel.kallsyms]  [k] scsi_request_fn
+   1.22%  touch  [kernel.kallsyms]  [k] blk_queue_bounce
+   1.20%  touch  [kernel.kallsyms]  [k] cfq_should_idle
+   1.10%  touch  [xfs]              [k] xfs_btree_rec_addr
+   1.03%  touch  [kernel.kallsyms]  [k] cfq_dispatch_requests
+   1.00%  touch  [kernel.kallsyms]  [k] _spin_lock_irqsave
+   0.94%  touch  [kernel.kallsyms]  [k] memcpy
+   0.86%  touch  [kernel.kallsyms]  [k] swiotlb_map_sg_attrs
+   0.84%  touch  [kernel.kallsyms]  [k] alloc_pages_current
+   0.82%  touch  [kernel.kallsyms]  [k] submit_bio
+   0.81%  touch  [megaraid_sas]     [k] megasas_build_and_issue_cmd_fusion
+   0.77%  touch  [kernel.kallsyms]  [k] blk_peek_request
+   0.73%  touch  [xfs]              [k] xfs_btree_setbuf
+   0.73%  touch  [megaraid_sas]     [k] MR_GetPhyParams
+   0.73%  touch  [kernel.kallsyms]  [k] run_timer_softirq
+   0.71%  touch  [kernel.kallsyms]  [k] pick_next_task_rt
+   0.71%  touch  [kernel.kallsyms]  [k] init_request_from_bio
+   0.70%  touch  [kernel.kallsyms]  [k] thread_return
+   0.69%  touch  [kernel.kallsyms]  [k] cfq_set_request
+   0.67%  touch  [kernel.kallsyms]  [k] mempool_alloc
+   0.66%  touch  [xfs]              [k] xfs_buf_hold
+   0.66%  touch  [kernel.kallsyms]  [k] find_next_bit
+   0.62%  touch  [kernel.kallsyms]  [k] cfq_insert_request
+   0.61%  touch  [kernel.kallsyms]  [k] scsi_init_io
+   0.60%  touch  [megaraid_sas]     [k] MR_BuildRaidContext
+   0.59%  touch  [kernel.kallsyms]  [k] policy_zonelist
+   0.59%  touch  [kernel.kallsyms]  [k] elv_insert
+   0.58%  touch  [xfs]              [k] xfs_buf_allocate_memory


Second touch command:


Events: 105  cycles
+  20.92%  touch  [xfs]              [k] xfs_inobt_get_rec
+  14.27%  touch  [xfs]              [k] xfs_btree_get_rec
+  12.21%  touch  [xfs]              [k] xfs_btree_get_block
+  12.12%  touch  [xfs]              [k] xfs_btree_increment
+   9.86%  touch  [xfs]              [k] xfs_btree_readahead
+   7.87%  touch  [xfs]              [k] _xfs_buf_find
+   4.93%  touch  [xfs]              [k] xfs_btree_rec_addr
+   4.12%  touch  [xfs]              [k] xfs_dialloc
+   3.03%  touch  [kernel.kallsyms]  [k] clear_page_c
+   2.96%  touch  [xfs]              [k] xfs_btree_rec_offset
+   1.31%  touch  [kernel.kallsyms]  [k] kmem_cache_free
+   1.03%  touch  [xfs]              [k] xfs_trans_buf_item_match
+   0.99%  touch  [kernel.kallsyms]  [k] _atomic_dec_and_lock
+   0.99%  touch  [xfs]              [k] xfs_inobt_get_maxrecs
+   0.99%  touch  [xfs]              [k] xfs_buf_unlock
+   0.99%  touch  [xfs]              [k] kmem_zone_alloc
+   0.98%  touch  [kernel.kallsyms]  [k] kmem_cache_alloc
+   0.28%  touch  [kernel.kallsyms]  [k] pgd_alloc
+   0.17%  touch  [kernel.kallsyms]  [k] page_fault
+   0.01%  touch  [kernel.kallsyms]  [k] native_write_msr_safe

I have compared the memory used; it seems that XFS has to load the inode bmap blocks the first time, which takes much time. Is that the reason the first touch operation is so slow?

Thanks
Qiang
2014-08-26 10:37 GMT+08:00 Dave Chinner <david@xxxxxxxxxxxxx>:
On Mon, Aug 25, 2014 at 06:46:31PM -0400, Greg Freemyer wrote:
> On Mon, Aug 25, 2014 at 6:26 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > On Mon, Aug 25, 2014 at 06:31:10PM +0800, Zhang Qiang wrote:
> >> 2014-08-25 17:08 GMT+08:00 Dave Chinner <david@xxxxxxxxxxxxx>:
> >>
> >> > On Mon, Aug 25, 2014 at 04:47:39PM +0800, Zhang Qiang wrote:
> >> > > I have checked icount and ifree, and I found about 11.8 percent are
> >> > > free, so free inodes should not be too few.
> >> > >
> >> > > Here's the detail log, any new clue?
> >> > >
> >> > > # mount /dev/sda4 /data1/
> >> > > # xfs_info /data1/
> >> > > meta-data=/dev/sda4              isize=256    agcount=4, agsize=142272384
> >> >
> >> > 4 AGs
> >> >
> >> Yes.
> >>
> >> >
> >> > > icount = 220619904
> >> > > ifree = 26202919
> >> >
> >> > And 220 million inodes. There's your problem - that's an average
> >> > of 55 million inodes per AGI btree assuming you are using inode64.
> >> > If you are using inode32, then the inodes will be in 2 btrees, or
> >> > maybe even only one.
> >> >
> >>
> >> You are right, all inodes stay on one AG.
> >>
> >> BTW, why do all inodes stay in one AG with inode32, when I allocated 4 AGs?
> >
> > Because the top addresses in the 2nd AG go over 32 bits, hence only
> > AG 0 can be used for inodes. Changing to inode64 will give you some
> > relief, but any time allocation occurs in AG0 it will be slow. i.e.
> > you'll be trading always slow for "unpredictably slow".
> >
> >> > With that many inodes, I'd be considering moving to 32 or 64 AGs to
> >> > keep the btree size down to a more manageable size. The free inode
> >> > btree would also help, but, really, 220M inodes in a 2TB filesystem
> >> > is really pushing the boundaries of sanity.....
> >> >
> >>
> >> So a better number of inodes in one AG is about 5M,
> >
> > Not necessarily. But for your storage it's almost certainly going to
> > minimise the problem you are seeing.
> >
> >> are there any documents
> >> about these options where I can learn more?
> >
> > http://xfs.org/index.php/XFS_Papers_and_Documentation
>
> Given the apparently huge number of small files, would he likely see a
> big performance increase if he replaced that 2TB of rust with an SSD?

Doubt it - the profiles showed the allocation being CPU bound
searching the metadata that indexes all those inodes. Those same
profiles showed all the signs that it was hitting the buffer
cache most of the time, too, which is why it was CPU bound....

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
