sched: fix list traversal to use _rcu variant
Author:     Chris Friesen <cfriesen@nortel.com>
AuthorDate: Mon, 22 Sep 2008 17:06:09 +0000 (11:06 -0600)
Commit:     Ingo Molnar <mingo@elte.hu>
CommitDate: Mon, 22 Sep 2008 17:43:10 +0000 (19:43 +0200)
load_balance_fair() calls rcu_read_lock() but then traverses the
task_groups list using the regular list_for_each_entry() routine, which
does not read the next pointers via rcu_dereference() and is therefore
unsafe against concurrent list updates.  This patch converts the
traversal to use the _rcu variant.

Signed-off-by: Chris Friesen <cfriesen@nortel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 7328383690f1e66a603c707c231f1388f2f1008d..3b89aa6594a918c47c6f8c97194132aa2e04ba41 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1521,7 +1521,7 @@ load_balance_fair(struct rq *this_rq, int this_cpu, struct rq *busiest,
        rcu_read_lock();
        update_h_load(busiest_cpu);
 
-       list_for_each_entry(tg, &task_groups, list) {
+       list_for_each_entry_rcu(tg, &task_groups, list) {
                struct cfs_rq *busiest_cfs_rq = tg->cfs_rq[busiest_cpu];
                unsigned long busiest_h_load = busiest_cfs_rq->h_load;
                unsigned long busiest_weight = busiest_cfs_rq->load.weight;