| To: | Waiman Long <Waiman.Long@xxxxxxx> |
|---|---|
| Subject: | Re: [RFC PATCH 1/2] percpu_counter: Allow falling back to global counter on large system |
| From: | Christoph Lameter <cl@xxxxxxxxx> |
| Date: | Mon, 7 Mar 2016 12:24:31 -0600 (CST) |
| Cc: | Tejun Heo <tj@xxxxxxxxxx>, Dave Chinner <dchinner@xxxxxxxxxx>, xfs@xxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, Ingo Molnar <mingo@xxxxxxxxxx>, Peter Zijlstra <peterz@xxxxxxxxxxxxx>, Scott J Norton <scott.norton@xxxxxx>, Douglas Hatch <doug.hatch@xxxxxx> |
| Delivered-to: | xfs@xxxxxxxxxxx |
| In-reply-to: | <1457146299-1601-2-git-send-email-Waiman.Long@xxxxxxx> |
| References: | <1457146299-1601-1-git-send-email-Waiman.Long@xxxxxxx> <1457146299-1601-2-git-send-email-Waiman.Long@xxxxxxx> |
On Fri, 4 Mar 2016, Waiman Long wrote:

> This patch provides a mechanism to selectively degenerate per-cpu
> counters to global counters at per-cpu counter initialization time. The
> following new API is added:
>
>   percpu_counter_set_limit(struct percpu_counter *fbc,
>                            u32 percpu_limit)
>
> The function should be called after percpu_counter_set(). It will
> compare the total limit (nr_cpu * percpu_limit) against the current
> counter value. If the limit is not smaller, it will disable per-cpu
> counter and use only the global counter instead. At run time, when
> the counter value grows past the total limit, per-cpu counter will
> be enabled again.

Hmmm... That requires manually setting a limit. Would it not be possible to automate the switch-over completely?

For example, one could keep a cpumask of the processors that use the per-cpu counters. In the fast path, if the current cpu is a member of the mask, increment its per-cpu counter; if not, take the spinlock and update the global count. If there is contention on the spinlock, add the cpu to the cpumask so it uses the per-cpu counter from then on. The counter thus scales automatically to the processors on which frequent increments are occurring.

Then regularly (once per minute or so) degenerate the counter by folding the per-cpu diffs into the global count and zapping the cpumask. If the cpumask is empty you can use the global count directly. Otherwise you just need to add up the counters of the cpus set in the cpumask.