android_kernel_xiaomi_sm8350/arch/sparc64/kernel/stacktrace.c
David S. Miller 85a7935335 [SPARC64]: Make save_stack_trace() more efficient.
Doing a 'flushw' on every stack trace capture creates so much overhead
that it makes lockdep next to unusable.

We only care about the frame pointer chain and the callers' program
counters, so flush just those to the stack frames by hand.

This is significantly more efficient than a 'flushw' because:

1) We only save 16 bytes per active register window to the stack.

2) This doesn't push the entire register window context of the current
   call chain out of the cpu, which would force register window fill
   traps as we return back down.

Note that we can't use 'restore' and 'save' instructions to move
around the register windows because that wouldn't work on Niagara
processors.  They optimize 'save' into a new register window by
simply clearing out the registers instead of pulling them in from
the on-chip register window backing store.

Based upon a report by Tom Callaway.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-03-24 20:06:24 -07:00
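
The 16 bytes per window referred to above are exactly the two registers the
unwinder below reads back: the saved frame pointer (%i6) and the caller's
program counter (%i7). A minimal sketch of the sparc64 register window save
area, following the struct reg_window definition pulled in via asm/ptrace.h
(reproduced here only for reference):

struct reg_window {
	unsigned long locals[8];	/* %l0..%l7 spill slots */
	unsigned long ins[8];		/* %i0..%i7; ins[6] = saved %fp, ins[7] = return address */
};

With 8-byte registers, spilling only ins[6] and ins[7] costs 16 bytes per
active window, versus the full 128-byte save area that 'flushw' would force
out for every window on the chain.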


#include <linux/sched.h>
#include <linux/stacktrace.h>
#include <linux/thread_info.h>
#include <asm/ptrace.h>
#include <asm/stacktrace.h>

void save_stack_trace(struct stack_trace *trace)
{
	unsigned long ksp, fp, thread_base;
	struct thread_info *tp = task_thread_info(current);

	stack_trace_flush();

	__asm__ __volatile__(
		"mov	%%fp, %0"
		: "=r" (ksp)
	);

	fp = ksp + STACK_BIAS;
	thread_base = (unsigned long) tp;
	do {
		struct reg_window *rw;

		/* Bogus frame pointer? */
		if (fp < (thread_base + sizeof(struct thread_info)) ||
		    fp >= (thread_base + THREAD_SIZE))
			break;

		rw = (struct reg_window *) fp;
		if (trace->skip > 0)
			trace->skip--;
		else
			trace->entries[trace->nr_entries++] = rw->ins[7];

		fp = rw->ins[6] + STACK_BIAS;
	} while (trace->nr_entries < trace->max_entries);
}
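
For reference, a minimal, hypothetical caller showing how the struct
stack_trace fields consumed above (entries, max_entries, nr_entries, skip)
are typically filled in before calling save_stack_trace(); the function name
dump_current_stack() and the 16-entry buffer are illustrative choices, not
part of this file, and print_stack_trace() is the generic printing helper
from the kernel's common stacktrace code:

static void dump_current_stack(void)
{
	unsigned long entries[16];
	struct stack_trace trace = {
		.nr_entries	= 0,
		.max_entries	= ARRAY_SIZE(entries),	/* ARRAY_SIZE() from <linux/kernel.h> */
		.entries	= entries,
		.skip		= 0,	/* leading frames to drop, if any */
	};

	save_stack_trace(&trace);
	print_stack_trace(&trace, 0);
}

The skip field is decremented once per frame by the loop above, so callers
can hide their own innermost frames from the recorded trace. The walk itself
stops either when max_entries is reached or when the saved %fp chain leaves
the current task's kernel stack (the thread_base/THREAD_SIZE bounds check).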