android_kernel_xiaomi_sm8350/kernel/trace
Petr Pavlu 965cbc6b62 tracing: Fix a possible race when disabling buffered events
commit c0591b1cccf708a47bc465c62436d669a4213323 upstream.

Function trace_buffered_event_disable() is responsible for freeing the
pages backing buffered events, and this process can run concurrently
with trace_event_buffer_lock_reserve().

The following race is currently possible:

* Function trace_buffered_event_disable() is called on CPU 0. It
  increments trace_buffered_event_cnt on each CPU and waits via
  synchronize_rcu() for each user of trace_buffered_event to complete.

* After synchronize_rcu() is finished, function
  trace_buffered_event_disable() has exclusive access to
  trace_buffered_event. All counters trace_buffered_event_cnt are at 1
  and all pointers trace_buffered_event are still valid.

* At this point, on a different CPU 1, the execution reaches
  trace_event_buffer_lock_reserve(). The function calls
  preempt_disable_notrace() and only now enters an RCU read-side
  critical section. The function proceeds and reads a still valid
  pointer from trace_buffered_event[CPU1] into the local variable
  "entry". However, it doesn't yet read trace_buffered_event_cnt[CPU1],
  which happens later (see the sketch of this fast path after the
  list).

* Function trace_buffered_event_disable() continues. It frees
  trace_buffered_event[CPU1] and decrements
  trace_buffered_event_cnt[CPU1] back to 0.

* Function trace_event_buffer_lock_reserve() continues. It reads and
  increments trace_buffered_event_cnt[CPU1] from 0 to 1. This makes it
  believe that it can use the "entry" it already obtained, but the
  pointer is now invalid and any access results in a use-after-free.
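
To make the window concrete, here is a simplified sketch of the per-CPU
fast path in trace_event_buffer_lock_reserve(), reduced to the two
accesses that matter for this race. The length check, the filtered-event
handling and the slow path are omitted; the names follow the kernel, but
the snippet is illustrative rather than the literal upstream source:

    preempt_disable_notrace();      /* enter the RCU read-side section */

    /* (A) read the per-CPU buffered-event pointer */
    entry = __this_cpu_read(trace_buffered_event);
    if (entry) {
            /* (B) claim the page by bumping the per-CPU counter */
            val = this_cpu_inc_return(trace_buffered_event_cnt);
            if (val == 1) {
                    /* believed exclusive: the event is written into "entry" */
                    return entry;
            }
            /* already in use; undo and fall back */
            this_cpu_dec(trace_buffered_event_cnt);
    }
    /* fall back to reserving space directly on the ring buffer */

The race lives in the gap between (A) and (B): if
trace_buffered_event_disable() frees the page and resets the counter to
0 inside that gap, (B) observes val == 1 and the stale "entry" is used
after it has been freed.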

Fix the problem by making a second synchronize_rcu() call after all
trace_buffered_event values are set to NULL. This waits for any
remaining users in trace_event_buffer_lock_reserve() that still hold a
pointer previously read from trace_buffered_event.
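
Roughly, the teardown then proceeds as sketched below. This is a
simplified view of the tail of trace_buffered_event_disable(), not the
literal upstream diff:

    /* All per-CPU trace_buffered_event_cnt counters were raised to 1 earlier. */
    synchronize_rcu();      /* wait for users that already claimed a buffer */

    for_each_tracing_cpu(cpu) {
            free_page((unsigned long)per_cpu(trace_buffered_event, cpu));
            per_cpu(trace_buffered_event, cpu) = NULL;
    }

    /*
     * Second wait introduced by this fix: any reader that entered its
     * RCU read-side section after the first synchronize_rcu() and still
     * holds a stale pointer must finish here. Because the counters are
     * still at 1, such a reader sees this_cpu_inc_return() > 1 and
     * falls back to the ring buffer instead of touching the freed page.
     */
    synchronize_rcu();

    /* Only now are the per-CPU counters dropped back to 0. */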

Link: https://lore.kernel.org/all/20231127151248.7232-2-petr.pavlu@suse.com/
Link: https://lkml.kernel.org/r/20231205161736.19663-4-petr.pavlu@suse.com

Cc: stable@vger.kernel.org
Fixes: 0fc1b09ff1 ("tracing: Use temp buffer when filtering events")
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-12-13 18:18:14 +01:00
blktrace.c
bpf_trace.c bpf: Clear the probe_addr for uprobe 2023-09-23 10:59:41 +02:00
fgraph.c
ftrace_internal.h
ftrace.c ftrace: Fix possible warning on checking all pages used in ftrace_process_locs() 2023-08-11 11:53:46 +02:00
Kconfig
Makefile
power-traces.c
preemptirq_delay_test.c
ring_buffer_benchmark.c
ring_buffer.c ring-buffer: Update "shortest_full" in polling 2023-10-10 21:46:41 +02:00
rpm-traces.c
trace_benchmark.c
trace_benchmark.h
trace_branch.c
trace_clock.c
trace_dynevent.c
trace_dynevent.h
trace_entries.h
trace_event_perf.c
trace_events_filter_test.h
trace_events_filter.c tracing: Have trace_event_file have ref counters 2023-11-28 16:50:22 +00:00
trace_events_hist.c tracing/histograms: Return an error if we fail to add histogram to hist_vars list 2023-07-27 08:37:45 +02:00
trace_events_trigger.c
trace_events.c tracing: Have trace_event_file have ref counters 2023-11-28 16:50:22 +00:00
trace_export.c
trace_functions_graph.c
trace_functions.c
trace_hwlat.c
trace_irqsoff.c tracing: Fix memleak due to race between current_tracer and trace 2023-08-30 16:27:23 +02:00
trace_kdb.c
trace_kprobe_selftest.c
trace_kprobe_selftest.h
trace_kprobe.c tracing/probes: Have process_fetch_insn() take a void * instead of pt_regs 2023-08-30 16:27:14 +02:00
trace_mmiotrace.c
trace_nop.c
trace_output.c tracing: Make sure trace_printk() can output as soon as it can be used 2023-02-06 07:52:43 +01:00
trace_output.h
trace_preemptirq.c
trace_printk.c
trace_probe_tmpl.h tracing/probes: Fix to update dynamic data counter if fetcharg uses it 2023-08-30 16:27:14 +02:00
trace_probe.c
trace_probe.h tracing/probe: trace_probe_primary_from_call(): checked list_first_entry 2023-06-09 10:29:02 +02:00
trace_sched_switch.c
trace_sched_wakeup.c tracing: Fix memleak due to race between current_tracer and trace 2023-08-30 16:27:23 +02:00
trace_selftest_dynamic.c
trace_selftest.c
trace_seq.c
trace_stack.c
trace_stat.c
trace_stat.h
trace_syscalls.c
trace_uprobe.c bpf: Clear the probe_addr for uprobe 2023-09-23 10:59:41 +02:00
trace.c tracing: Fix a possible race when disabling buffered events 2023-12-13 18:18:14 +01:00
trace.h tracing: Have trace_event_file have ref counters 2023-11-28 16:50:22 +00:00
tracing_map.c
tracing_map.h