
To: XFS Mailing List <xfs@xxxxxxxxxxx>
Subject: [PATCH 4/5] percpu_counter: tie error bounds more obviously to count values
From: Alex Elder <aelder@xxxxxxx>
Date: Wed, 22 Dec 2010 21:56:38 -0600
Reply-to: aelder@xxxxxxx

This change simply moves around the computed error bound used
in a few spots so that it is more closely associated with the
count value.  It is based on this interpretation of the correct
value of a percpu_counter:
    percpu_counter->count +/- error

So when thinking about the code, it is useful to treat
(count + error) and (count - error) as the upper and lower
bounds of the percpu_counter's value.  This change rearranges
the comparisons to match that way of thinking.

Doing this made me realize there is another optimization to be
made: skipping the per-cpu sum if, after taking the lock, we
already know the result will be below the threshold.
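
To make the bound reasoning concrete, here is a standalone sketch
(not part of the patch; the helper name and types are made up for
illustration) of the three-way decision described above, treating
the counter's true value as lying in [count - error, count + error]:

    /*
     * Sketch only: returns -1 if the add must be refused (even the
     * upper bound would end up below the threshold), 1 if it is
     * safe to add (even the lower bound stays above the threshold),
     * and 0 if the rough count is inconclusive and an exact,
     * locked count is needed.
     */
    static int check_bounds(long long count, long long error,
                            long long amount, long long threshold)
    {
            if (count + error + amount < threshold)
                    return -1;
            if (count - error + amount > threshold)
                    return 1;
            return 0;
    }

The locked slow path in the diff below makes the same two
comparisons against fbc->count once the rough count turns out to
be inconclusive.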

Signed-off-by: Alex Elder <aelder@xxxxxxx>

---
 lib/percpu_counter.c |   32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

Index: b/lib/percpu_counter.c
===================================================================
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -234,16 +234,21 @@ int __percpu_counter_add_unless_lt(struc
 
        preempt_disable();
 
-       /* Check to see if rough count will be sufficient for comparison */
+       /*
+        * Check to see if rough count will be sufficient for
+        * comparison.  First, if the upper bound is too low,
+        * we're done.
+        */
        count = percpu_counter_read(fbc);
-       if (count + amount < threshold - error)
+       if (count + error + amount < threshold)
                goto out;
 
        /*
-        * If the updated counter will be over the threshold we know
-        * we can safely add, and might be able to avoid locking.
+        * Next, if the lower bound is above the threshold, we can
+        * safely add the amount.  See if we can do so without
+        * locking.
         */
-       if (count + amount > threshold + error) {
+       if (count - error + amount > threshold) {
                s32 *pcount = this_cpu_ptr(fbc->counters);
 
                count = *pcount + amount;
@@ -255,12 +260,21 @@ int __percpu_counter_add_unless_lt(struc
        }
 
        /*
-        * If the result is over the error threshold, we can just add it
-        * into the global counter ignoring what is in the per-cpu counters
-        * as they will not change the result of the calculation.
+        * We're within the error margin, so we need to be more
+        * precise.  Take the lock, get the current count value, and
+        * check once more whether the result will be outside the
+        * error threshold.
+        *
+        * If we find we can safely add, just add the amount into
+        * the global counter ignoring what is in the per-cpu
+        * counters as they will not change the result of the
+        * calculation.
         */
        spin_lock(&fbc->lock);
-       if (fbc->count + amount > threshold + error) {
+       if (fbc->count + error + amount < threshold)
+               goto out_unlock;
+
+       if (fbc->count - error + amount > threshold) {
                fbc->count += amount;
                ret = 1;
                goto out_unlock;

