sched/core: Handle overflow in cpu_shares_write_u64
Author:     Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
AuthorDate: Wed, 27 Feb 2019 08:10:18 +0000 (11:10 +0300)
Commit:     Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 19 Apr 2019 11:42:10 +0000 (13:42 +0200)
The bit shift in scale_load() can overflow the shares value. Saturate
it to MAX_SHARES instead, matching what sched_group_set_shares() does.

Example:

 # echo 9223372036854776832 > cpu.shares
 # cat cpu.shares

Before patch: 1024
After patch: 262144

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/155125501891.293431.3345233332801109696.stgit@buzz
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/core.c

index fb09eaad1d3a58f43291e772ff1f27e3a521fdac..685b1541ce517b254e909ab10bde477845eeda78 100644 (file)
@@ -6507,6 +6507,8 @@ static void cpu_cgroup_attach(struct cgroup_taskset *tset)
 static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
                                struct cftype *cftype, u64 shareval)
 {
+       if (shareval > scale_load_down(ULONG_MAX))
+               shareval = MAX_SHARES;
        return sched_group_set_shares(css_tg(css), scale_load(shareval));
 }