bpf/trace: Remove redundant preempt_disable from trace_call_bpf()
author	Thomas Gleixner <tglx@linutronix.de>
	Mon, 24 Feb 2020 14:01:37 +0000 (15:01 +0100)
committer	Alexei Starovoitov <ast@kernel.org>
	Tue, 25 Feb 2020 00:18:20 +0000 (16:18 -0800)
commit	b0a81b94cc50a112601721fcc2f91fab78d7b9f3
tree	f4d7b2395896a53576f2743900dfb102ee86e261
parent	70ed0706a48e3da3eb4515214fef658ff1184b9f

Similar to __bpf_trace_run(), this preempt_disable()/preempt_enable() pair
is redundant because trace_call_bpf() is invoked from a trace point via
__DO_TRACE(), which already disables preemption _before_ invoking any of
the functions attached to the trace point.

Remove it and add a cant_sleep() check.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145643.059995527@linutronix.de
kernel/trace/bpf_trace.c