x86 NUMA kernels crash in the scheduler setup code if "nosmp" or
"maxcpus=0" is passed on the boot command line:
| Brought up 1 CPUs
| BUG: unable to handle kernel NULL pointer dereference at virtual address 00000000
| printing eip: c011f0b5 *pde = 00000000
| Oops: 0000 [#1] SMP
|
| Pid: 1, comm: swapper Not tainted (2.6.23 #67)
| EIP: 0060:[<c011f0b5>] EFLAGS: 00010246 CPU: 0
| EIP is at sd_degenerate+0x35/0x40
The reason is sloppy spaghetti code in smpboot_32.c that resulted in
map_cpu_to_logical_apicid() not being called on this boot path. That
call has the side-effect of setting up the cpu_2_node[] entry for the
lone CPU, so without it node_to_cpumask(0) returned an empty mask
(00000000), confusing the sched-domains setup code.
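In sketch form, the fix is to make the early-exit path do the mapping
as well (function names as in the 2.6.23-era smpboot_32.c; the
surrounding code is abbreviated, so treat this as an outline rather
than the actual diff):

    /*
     * UP fallback in smp_boot_cpus(): even when we give up on SMP
     * (nosmp / maxcpus=0), call map_cpu_to_logical_apicid() so that
     * the cpu_2_node[] entry for the boot CPU is initialized and
     * node_to_cpumask(0) is non-empty when sched-domains are built.
     */
    if (!max_cpus) {
            smpboot_clear_io_apic_irqs();
            phys_cpu_present_map = physid_mask_of_physid(0);
            map_cpu_to_logical_apicid();    /* also sets cpu_2_node[0] */
            cpu_set(0, cpu_sibling_map[0]);
            cpu_set(0, cpu_core_map[0]);
            return;
    }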
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Constify miscellaneous x86 data so that it can live in .rodata.
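To illustrate the pattern with a made-up table (the names below are
hypothetical, not from the patch): adding const is what lets the
compiler place the object in .rodata instead of the writable .data
section:

    /* before: lands in .data, although it is never written */
    static char *flag_names[] = { "fpu", "vme", "de" };

    /* after: const-qualified, so it is emitted into .rodata */
    static const char *const flag_names[] = { "fpu", "vme", "de" };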
[ tglx: arch/x86 adaptation ]
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
If "nosmp" has been passed as a boot option but nmi_watchdog=2 has also
been enabled, keep minimal local APIC functionality around so that the
watchdog still works.

This allowed me to debug a hard hang that would only occur with a nosmp
bootup.
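In outline (a sketch against the 2.6.23-era i386 APIC code; the real
condition and call site may differ):

    /*
     * Even on a nosmp bootup, bring up the boot CPU's local APIC if
     * the NMI watchdog needs it: nmi_watchdog=2 selects
     * NMI_LOCAL_APIC, which cannot tick without a working local APIC.
     */
    if (nmi_watchdog == NMI_LOCAL_APIC) {
            connect_bsp_APIC();
            setup_local_APIC();
    }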
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Convert cpu_sibling_map from a static array sized by NR_CPUS to a
per_cpu variable. This saves sizeof(cpumask_t) bytes for every unused
CPU slot. Access is mostly from startup and CPU hotplug functions.
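The shape of the conversion (a sketch; DEFINE_PER_CPU and per_cpu are
the standard accessors from <linux/percpu.h>):

    /* before: statically sized by the configured maximum */
    cpumask_t cpu_sibling_map[NR_CPUS];

    /* after: one cpumask_t in each CPU's per-cpu area */
    DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);

    /* accesses change from cpu_sibling_map[cpu] to, e.g.: */
    cpu_set(i, per_cpu(cpu_sibling_map, cpu));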
Signed-off-by: Mike Travis <travis@sgi.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This is from an earlier message from Christoph Lameter:

cpu_core_map is currently an array sized by NR_CPUS. This means that we
overallocate, since we will rarely use the maximum number of configured
CPUs.

If we put cpu_core_map into the per cpu area, it will instead be
allocated for each processor as it comes online.

This means that the core map cannot be accessed until the per cpu area
has been allocated. Xen does a weird thing here: it loops over all
processors and zeroes the masks that are not yet allocated, even though
they will be zeroed anyway when they are allocated. I commented that
code out.
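A sketch of the new layout and of the ordering constraint it introduces
(accessor names as in the per-cpu API of that era):

    /* cpu_core_map now lives in the per cpu area: */
    DEFINE_PER_CPU(cpumask_t, cpu_core_map);

    /*
     * per_cpu(cpu_core_map, cpu) is only valid once that processor's
     * per cpu area exists, so early code (such as the Xen loop
     * mentioned above) must not touch it before then.
     */
    cpus_clear(per_cpu(cpu_core_map, cpu));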
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Mike Travis <travis@sgi.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>