| To: | Dave Chinner <david@xxxxxxxxxxxxx> |
|---|---|
| Subject: | Re: [PATCH] percpu_counter: return precise count from __percpu_counter_compare() |
| From: | Tejun Heo <tj@xxxxxxxxxx> |
| Date: | Wed, 7 Oct 2015 18:09:47 -0700 |
| Cc: | Waiman Long <waiman.long@xxxxxxx>, Christoph Lameter <cl@xxxxxxxxxxxxxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx, Scott J Norton <scott.norton@xxxxxxx>, Douglas Hatch <doug.hatch@xxxxxxx> |
| Delivered-to: | xfs@xxxxxxxxxxx |
| In-reply-to: | <20151008010218.GT27164@dastard> |
| References: | <1443806997-30792-1-git-send-email-Waiman.Long@xxxxxxx> <20151002221649.GL27164@dastard> <5613017D.7080909@xxxxxxx> <20151006002538.GO27164@dastard> <561405E1.8020008@xxxxxxx> <20151006213023.GP27164@dastard> <561579EA.60507@xxxxxxx> <20151007230441.GG32150@dastard> <20151007232010.GA21142@xxxxxxxxxxxxxxx> <20151008010218.GT27164@dastard> |
| Sender: | Tejun Heo <htejun@xxxxxxxxx> |
| User-agent: | Mutt/1.5.23 (2014-03-12) |
Hello, Dave.

On Thu, Oct 08, 2015 at 12:02:18PM +1100, Dave Chinner wrote:
> > percpu cmpxchg is no different from sub or any other operations
> > regarding cross-CPU synchronization.  They're safe iff the
> > operations are on the local CPU.  They have to be made atomics if
> > they need to be manipulated from remote CPUs.
>
> Again, another trivially solvable problem, but still irrelevant
> because we don't have the data that tells us whether changing the
> counter behaviour solves the problem....

Dude, it isn't trivially solvable.  You either can't do it or have to
pay the overhead during local access to get around it.

> > That said, while we can't manipulate the percpu counters directly,
> > we can add a separate global counter to cache the sum result from
> > the previous run which gets automatically invalidated when any
> > percpu counter overflows.
> >
> > That should give better, and in the case of back-to-back
> > invocations pretty good, precision compared to just returning the
> > global overflow counter.  Interface-wise, that'd be a lot easier to
> > deal with, although I have no idea whether it'd fit this particular
> > use case or whether this use case even exists.
>
> No, it doesn't help - it's effectively what Waiman's original patch
> did by returning the count from the initial comparison and using
> that for ENOSPC detection instead of doing a second comparison...

Just chipping in purely from the percpu side.  If what Waiman
suggested is something useable, caching the result inside
percpu_counter would be a better interface.  If not, no idea.

Thanks.

--
tejun
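[Editor's note: the cached-sum scheme Tejun describes above can be sketched as a minimal userspace model. This is purely illustrative, not the kernel's percpu_counter API: all names (`pcpu_counter`, `pcpu_add`, `pcpu_sum`, `NR_CPUS`, `BATCH`) are hypothetical, and locking/preemption concerns are omitted. The idea is that per-CPU deltas fold into the global count when they exceed the batch size, and any such fold invalidates the cached precise sum, so back-to-back sum calls between folds are cheap.]

```c
#include <stdbool.h>
#include <stdint.h>

#define NR_CPUS 4
#define BATCH   32

/* Hypothetical model: global count + per-CPU deltas + a cached sum
 * that is invalidated whenever any per-CPU delta overflows the batch
 * and folds into the global count. */
struct pcpu_counter {
    int64_t count;          /* global, approximate count */
    int32_t pcpu[NR_CPUS];  /* per-CPU deltas, each bounded by BATCH */
    int64_t cached_sum;     /* precise sum from the previous pcpu_sum() */
    bool    cached_valid;   /* cleared on any fold into 'count' */
};

static void pcpu_add(struct pcpu_counter *c, int cpu, int32_t delta)
{
    c->pcpu[cpu] += delta;
    if (c->pcpu[cpu] >= BATCH || c->pcpu[cpu] <= -BATCH) {
        c->count += c->pcpu[cpu];   /* fold local delta into global */
        c->pcpu[cpu] = 0;
        c->cached_valid = false;    /* cached precise sum is now stale */
    }
}

static int64_t pcpu_sum(struct pcpu_counter *c)
{
    if (c->cached_valid)
        return c->cached_sum;       /* cheap path for back-to-back calls */

    int64_t sum = c->count;         /* slow path: walk every CPU */
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        sum += c->pcpu[cpu];

    c->cached_sum = sum;
    c->cached_valid = true;
    return sum;
}
```

Note the precision trade-off this models: between folds, local additions do not invalidate the cache, so a cached result can drift by up to the deltas accumulated since it was computed. That is why the cache gives "pretty good" rather than exact precision, and why it is only an improvement when sums are requested in quick succession.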