
Re: [Lse-tech] [PATCH] RCU subscriber_list Process Notification (pnotify

To: Erik Jacobson <erikj@xxxxxxx>
Subject: Re: [Lse-tech] [PATCH] RCU subscriber_list Process Notification (pnotify)
From: Dipankar Sarma <dipankar@xxxxxxxxxx>
Date: Thu, 29 Sep 2005 22:39:46 +0530
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>, lse-tech@xxxxxxxxxxxxxxxxxxxxx, akpm@xxxxxxxx, kingsley@xxxxxxxxxx, canon@xxxxxxxxx, pagg@xxxxxxxxxxx
In-reply-to: <20050929165328.GA15246@xxxxxxx>
References: <20050921213645.GB28239@xxxxxxx> <20050922151647.GA30784@xxxxxxxxxxxxx> <20050929165328.GA15246@xxxxxxx>
Reply-to: dipankar@xxxxxxxxxx
Sender: pagg-bounce@xxxxxxxxxxx
User-agent: Mutt/1.5.10i
On Thu, Sep 29, 2005 at 11:53:28AM -0500, Erik Jacobson wrote:
> My feeling is we shouldn't use this RCU-protected-subscriber-list version 
> of pnotify - see my performance post to follow.  It includes notes about
> sleeping while rcu_read_lock is held, for example.

On a quick look, this patch does look bogus to me.

> +
> +/**
> + * pnotify_unsubscribe_rcu - Free up the pnotify_subscriber when RCU is ready
> + * @rcu - the rcu_head to retrieve the pointer to free from
> + *
> + */
> +static void
> +pnotify_unsubscribe_rcu (struct rcu_head *rcu) {
> +     struct pnotify_subscriber *sub = container_of(rcu,
> +       struct pnotify_subscriber, rcu);
> +
> +     kfree(sub);
> +}
> +
> +/**
> + *
> + */
> +void
> +pnotify_unsubscribe(struct pnotify_subscriber *subscriber)
> +{
> +     atomic_dec(&subscriber->events->refcnt); /* decr the ref cnt on events */
> +     list_del_rcu(&subscriber->entry);
> +     call_rcu(&subscriber->rcu, pnotify_unsubscribe_rcu);
> +}


Could you use a per-subscriber reference count? That would allow
you to drop rcu_read_lock() safely before doing anything that sleeps.


> +static void
> +remove_subscriber_from_all_tasks(struct pnotify_events *events)
> +{
> +     if (events == NULL)
> +             return;
> +
> +     /* Because of internal race conditions we can't guarantee
> +      * getting every task in just one pass so we just keep going
> +      * until there are no tasks with subscribers from this events struct
> +      * attached. The inefficiency of this should be tempered by the fact that
> +      * this happens at most once for each registered client.
> +      */
> +     while (atomic_read(&events->refcnt) != 0) {
> +             struct task_struct *g = NULL, *p = NULL;
> +
> +             read_lock(&tasklist_lock);
> +             do_each_thread(g, p) {
> +                     struct pnotify_subscriber *subscriber;
> +                     int task_exited;
> +
> +                     get_task_struct(p);
> +                     read_unlock(&tasklist_lock);
> +                     rcu_read_lock();
> +                     down_write(&p->pnotify_subscriber_list_sem);

Wrong - down_write() can sleep, and you can't sleep inside rcu_read_lock().
Would refcounting the subscriber itself here be costly?

Thanks
Dipankar
