tracing: Fix a warning when allocating buffered events fails

[ Upstream commit 34209fe83ef8404353f91ab4ea4035dbc9922d04 ]

Function trace_buffered_event_disable() produces an unexpected warning
when the previous call to trace_buffered_event_enable() fails to
allocate pages for buffered events.

The situation can occur as follows:

* The counter trace_buffered_event_ref is at 0.

* The soft mode gets enabled for some event and
  trace_buffered_event_enable() is called. The function increments
  trace_buffered_event_ref to 1 and starts allocating event pages.

* The allocation fails for some page and trace_buffered_event_disable()
  is called for cleanup.

* Function trace_buffered_event_disable() decrements
  trace_buffered_event_ref back to 0, recognizes that it was the last
  use of buffered events and frees all allocated pages.

* The control goes back to trace_buffered_event_enable() which returns.
  The caller of trace_buffered_event_enable() has no information that
  the function actually failed.

* Some time later, the soft mode is disabled for the same event.
  Function trace_buffered_event_disable() is called. It warns on
  "WARN_ON_ONCE(!trace_buffered_event_ref)" and returns.

Buffered events are just an optimization and can handle failures. Make
trace_buffered_event_enable() exit on the first failure and leave any
cleanup to the later call to trace_buffered_event_disable().

Link: https://lore.kernel.org/all/20231127151248.7232-2-petr.pavlu@suse.com/
Link: https://lkml.kernel.org/r/20231205161736.19663-3-petr.pavlu@suse.com

Fixes: 0fc1b09ff1 ("tracing: Use temp buffer when filtering events")
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
This commit is contained in:
Petr Pavlu 2023-12-05 17:17:35 +01:00 committed by Greg Kroah-Hartman
parent 4713be8445
commit 8244ea916b


@@ -2438,8 +2438,11 @@ void trace_buffered_event_enable(void)
 	for_each_tracing_cpu(cpu) {
 		page = alloc_pages_node(cpu_to_node(cpu),
 					GFP_KERNEL | __GFP_NORETRY, 0);
-		if (!page)
-			goto failed;
+		/* This is just an optimization and can handle failures */
+		if (!page) {
+			pr_err("Failed to allocate event buffer\n");
+			break;
+		}
 
 		event = page_address(page);
 		memset(event, 0, sizeof(*event));
@@ -2453,10 +2456,6 @@ void trace_buffered_event_enable(void)
 			WARN_ON_ONCE(1);
 		preempt_enable();
 	}
-
-	return;
- failed:
-	trace_buffered_event_disable();
 }
 
 static void enable_trace_buffered_event(void *data)