cpufreq: schedutil: Redefine the rate_limit_us tunable
Author:     Viresh Kumar <viresh.kumar@linaro.org>
AuthorDate: Tue, 21 Feb 2017 04:45:18 +0000 (10:15 +0530)
Committer:  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
CommitDate: Sun, 12 Mar 2017 22:11:33 +0000 (23:11 +0100)
The rate_limit_us tunable is intended to reduce the possible overhead
from running the schedutil governor.  However, that overhead can be
divided into two separate parts: the governor computations and the
invocation of the scaling driver to set the CPU frequency.  The latter
is where the real overhead comes from.  The former is much less
expensive in terms of execution time, so running it every time the
scheduler invokes the governor callback, once the rate_limit_us
interval has passed since the last frequency update, would not be a
problem.

For this reason, redefine the rate_limit_us tunable so that it means the
minimum time that has to pass between two consecutive invocations of the
scaling driver by the schedutil governor (to set the CPU frequency).
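With that redefinition, the governor's rate-limit check stays the same, but the
timestamp it compares against now records the last actual driver invocation
rather than the last governor evaluation.  The following is a minimal,
self-contained sketch of that check (the struct and function names are
illustrative stand-ins, not the kernel's types):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative model (not kernel code): last_freq_update_time is set
 * only when the scaling driver is actually invoked, so rate_limit_ns
 * bounds the gap between two consecutive driver calls, not between
 * two governor evaluations.
 */
struct sugov_model {
	uint64_t last_freq_update_time;	/* time of last driver invocation */
	uint64_t rate_limit_ns;		/* rate_limit_us * NSEC_PER_USEC */
};

static bool should_update_freq(struct sugov_model *sg, uint64_t time)
{
	int64_t delta_ns = (int64_t)(time - sg->last_freq_update_time);

	return delta_ns >= (int64_t)sg->rate_limit_ns;
}
```

Under this model, governor callbacks that arrive within rate_limit_ns of the
last committed frequency change are filtered out before the driver is touched.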

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
kernel/sched/cpufreq_schedutil.c

index cd7cd489f739817f07349e8526812eaa3e110075..78468aa051ab8a84543b876f337626e6e20c0371 100644 (file)
@@ -93,14 +93,13 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
 {
        struct cpufreq_policy *policy = sg_policy->policy;
 
-       sg_policy->last_freq_update_time = time;
-
        if (policy->fast_switch_enabled) {
                if (sg_policy->next_freq == next_freq) {
                        trace_cpu_frequency(policy->cur, smp_processor_id());
                        return;
                }
                sg_policy->next_freq = next_freq;
+               sg_policy->last_freq_update_time = time;
                next_freq = cpufreq_driver_fast_switch(policy, next_freq);
                if (next_freq == CPUFREQ_ENTRY_INVALID)
                        return;
@@ -109,6 +108,7 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
                trace_cpu_frequency(next_freq, smp_processor_id());
        } else if (sg_policy->next_freq != next_freq) {
                sg_policy->next_freq = next_freq;
+               sg_policy->last_freq_update_time = time;
                sg_policy->work_in_progress = true;
                irq_work_queue(&sg_policy->irq_work);
        }
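The behavioral consequence of moving the last_freq_update_time assignment into
the two branches above is that a commit which leaves the frequency unchanged no
longer restarts the rate-limit window.  A hedged sketch of that patched commit
path (simplified names, no fast-switch/slow-path split):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative model (not kernel code) of the patched commit path:
 * the timestamp is refreshed only when next_freq actually changes,
 * i.e. only when the scaling driver would be invoked.
 */
struct sg_policy {
	unsigned int next_freq;
	uint64_t last_freq_update_time;
};

/* Returns true if the scaling driver would be invoked. */
static bool sugov_commit_model(struct sg_policy *sg, uint64_t time,
			       unsigned int next_freq)
{
	if (sg->next_freq == next_freq)
		return false;	/* no change: timestamp left untouched */

	sg->next_freq = next_freq;
	sg->last_freq_update_time = time;	/* window starts at driver call */
	return true;
}
```

Before this patch the timestamp was written unconditionally at the top of
sugov_update_commit(), so even a skipped (no-op) update delayed the next real
frequency change by rate_limit_us.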